parquet-converter committed
Commit 6777e78 · 1 parent: 416d631

Update parquet files (step 97 of 397)

This view is limited to 50 files because the commit contains too many changes.
Files changed (50)
  1. spaces/1368565466ki/ZSTRD/models.py +0 -533
  2. spaces/1acneusushi/gradio-2dmoleculeeditor/data/Docklight 2.1.10 Serial Number 631 PATCHED.md +0 -113
  3. spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Cracked Version of Filmora The Good the Bad and the Ugly.md +0 -32
  4. spaces/1acneusushi/gradio-2dmoleculeeditor/data/Gts Fc 518ls Lan Driver.11 Download and Install the Latest Version for Your LAN Card.md +0 -147
  5. spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/60 Segundos Um jogo de escolhas difceis e consequncias hilrias.md +0 -128
  6. spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Clash Royale Versi Terbaru dan Bangun Klan Anda.md +0 -90
  7. spaces/1phancelerku/anime-remove-background/Atrvete a Descargar Geometry Dash Meltdown Hackeado APK y Vive una Aventura Increble.md +0 -126
  8. spaces/1phancelerku/anime-remove-background/Download and Stream Greatest Love Of All the Latest Single by Prince Chike.md +0 -109
  9. spaces/1phancelerku/anime-remove-background/Enjoy the Best Stick War Experience with Hugo Gamings Stick War Legacy Mod VIP.md +0 -131
  10. spaces/1toTree/lora_test/ppdiffusers/pipelines/dance_diffusion/__init__.py +0 -17
  11. spaces/1toTree/lora_test/ppdiffusers/pipelines/ddim/__init__.py +0 -17
  12. spaces/232labs/VToonify/vtoonify/model/stylegan/op_gpu/conv2d_gradfix.py +0 -227
  13. spaces/4Taps/SadTalker/src/face3d/models/template_model.py +0 -100
  14. spaces/801artistry/RVC801/infer_uvr5.py +0 -363
  15. spaces/AI-Hobbyist/Hoyo-RVC/docs/training_tips_en.md +0 -65
  16. spaces/AIConsultant/MusicGen/audiocraft/grids/musicgen/musicgen_melody_32khz.py +0 -65
  17. spaces/AIFILMS/StyleGANEX/models/mtcnn/mtcnn_pytorch/src/get_nets.py +0 -171
  18. spaces/AIGText/GlyphControl/ldm/modules/midas/midas/blocks.py +0 -342
  19. spaces/AIKey/facetofacechat/README.md +0 -10
  20. spaces/AbandonedMuse/UnlimitedMusicGen/audiocraft/utils/notebook.py +0 -32
  21. spaces/AchyuthGamer/OpenGPT-Chat-UI/src/lib/types/Template.ts +0 -23
  22. spaces/Adapter/CoAdapter/ldm/modules/extra_condition/midas/midas/__init__.py +0 -0
  23. spaces/AgentVerse/agentVerse/ui/src/classes/actor.ts +0 -54
  24. spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/basesizer/utils/RemoveChild.js +0 -19
  25. spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/pinch/Pinch.js +0 -2
  26. spaces/Al-Chan/Vits_League_of_Legends_Yuumi_TTS/text/mandarin.py +0 -326
  27. spaces/AlexWang/lama/saicinpainting/training/modules/pix2pixhd.py +0 -669
  28. spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/research_projects/onnxruntime/unconditional_image_generation/train_unconditional.py +0 -700
  29. spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/semantic_stable_diffusion/__init__.py +0 -0
  30. spaces/Andy1621/uniformer_image_detection/mmcv_custom/runner/checkpoint.py +0 -85
  31. spaces/Andy1621/uniformer_image_detection/mmdet/apis/train.py +0 -185
  32. spaces/Andy1621/uniformer_image_detection/mmdet/models/losses/focal_loss.py +0 -181
  33. spaces/Andy1621/uniformer_image_segmentation/configs/dnlnet/dnl_r101-d8_512x1024_40k_cityscapes.py +0 -2
  34. spaces/Andy1621/uniformer_image_segmentation/configs/gcnet/gcnet_r50-d8_769x769_80k_cityscapes.py +0 -9
  35. spaces/AngoHF/ANGO-Leaderboard/app.py +0 -24
  36. spaces/Anonymous-123/ImageNet-Editing/editing_diffusion/guided_diffusion/scripts/classifier_train.py +0 -226
  37. spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/configs/_base_/schedules/schedule_160k.py +0 -9
  38. spaces/Anonymous-sub/Rerender/gmflow_module/utils/dist_utils.py +0 -99
  39. spaces/Anonymous-sub/Rerender/gmflow_module/utils/logger.py +0 -68
  40. spaces/ArtGAN/Video-Diffusion-WebUI/README.md +0 -15
  41. spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/locations/__init__.py +0 -467
  42. spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/models/link.py +0 -531
  43. spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/emoji.py +0 -96
  44. spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/tests/data/test_coco_evaluation.py +0 -138
  45. spaces/Bart92/RVC_HF/infer/lib/train/utils.py +0 -478
  46. spaces/Bart92/RVC_HF/tools/torchgate/utils.py +0 -66
  47. spaces/Beasto/Face_To_Anime_Cyclegan/app.py +0 -45
  48. spaces/Benson/text-generation/Examples/Amazon Prime Mod Apk Premium Descargar La ltima Versin (2020).md +0 -125
  49. spaces/Benson/text-generation/Examples/Armas De La Gloria Isla Perdida.md +0 -91
  50. spaces/Benson/text-generation/Examples/Belote Juego De Cartas.md +0 -58
spaces/1368565466ki/ZSTRD/models.py DELETED
@@ -1,533 +0,0 @@
- import math
- import torch
- from torch import nn
- from torch.nn import functional as F
-
- import commons
- import modules
- import attentions
- import monotonic_align
-
- from torch.nn import Conv1d, ConvTranspose1d, Conv2d
- from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm
- from commons import init_weights, get_padding
-
-
- class StochasticDurationPredictor(nn.Module):
-     def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, n_flows=4, gin_channels=0):
-         super().__init__()
-         filter_channels = in_channels # it needs to be removed from future version.
-         self.in_channels = in_channels
-         self.filter_channels = filter_channels
-         self.kernel_size = kernel_size
-         self.p_dropout = p_dropout
-         self.n_flows = n_flows
-         self.gin_channels = gin_channels
-
-         self.log_flow = modules.Log()
-         self.flows = nn.ModuleList()
-         self.flows.append(modules.ElementwiseAffine(2))
-         for i in range(n_flows):
-             self.flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3))
-             self.flows.append(modules.Flip())
-
-         self.post_pre = nn.Conv1d(1, filter_channels, 1)
-         self.post_proj = nn.Conv1d(filter_channels, filter_channels, 1)
-         self.post_convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout)
-         self.post_flows = nn.ModuleList()
-         self.post_flows.append(modules.ElementwiseAffine(2))
-         for i in range(4):
-             self.post_flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3))
-             self.post_flows.append(modules.Flip())
-
-         self.pre = nn.Conv1d(in_channels, filter_channels, 1)
-         self.proj = nn.Conv1d(filter_channels, filter_channels, 1)
-         self.convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout)
-         if gin_channels != 0:
-             self.cond = nn.Conv1d(gin_channels, filter_channels, 1)
-
-     def forward(self, x, x_mask, w=None, g=None, reverse=False, noise_scale=1.0):
-         x = torch.detach(x)
-         x = self.pre(x)
-         if g is not None:
-             g = torch.detach(g)
-             x = x + self.cond(g)
-         x = self.convs(x, x_mask)
-         x = self.proj(x) * x_mask
-
-         if not reverse:
-             flows = self.flows
-             assert w is not None
-
-             logdet_tot_q = 0
-             h_w = self.post_pre(w)
-             h_w = self.post_convs(h_w, x_mask)
-             h_w = self.post_proj(h_w) * x_mask
-             e_q = torch.randn(w.size(0), 2, w.size(2)).to(device=x.device, dtype=x.dtype) * x_mask
-             z_q = e_q
-             for flow in self.post_flows:
-                 z_q, logdet_q = flow(z_q, x_mask, g=(x + h_w))
-                 logdet_tot_q += logdet_q
-             z_u, z1 = torch.split(z_q, [1, 1], 1)
-             u = torch.sigmoid(z_u) * x_mask
-             z0 = (w - u) * x_mask
-             logdet_tot_q += torch.sum((F.logsigmoid(z_u) + F.logsigmoid(-z_u)) * x_mask, [1,2])
-             logq = torch.sum(-0.5 * (math.log(2*math.pi) + (e_q**2)) * x_mask, [1,2]) - logdet_tot_q
-
-             logdet_tot = 0
-             z0, logdet = self.log_flow(z0, x_mask)
-             logdet_tot += logdet
-             z = torch.cat([z0, z1], 1)
-             for flow in flows:
-                 z, logdet = flow(z, x_mask, g=x, reverse=reverse)
-                 logdet_tot = logdet_tot + logdet
-             nll = torch.sum(0.5 * (math.log(2*math.pi) + (z**2)) * x_mask, [1,2]) - logdet_tot
-             return nll + logq # [b]
-         else:
-             flows = list(reversed(self.flows))
-             flows = flows[:-2] + [flows[-1]] # remove a useless vflow
-             z = torch.randn(x.size(0), 2, x.size(2)).to(device=x.device, dtype=x.dtype) * noise_scale
-             for flow in flows:
-                 z = flow(z, x_mask, g=x, reverse=reverse)
-             z0, z1 = torch.split(z, [1, 1], 1)
-             logw = z0
-             return logw
-
-
- class DurationPredictor(nn.Module):
-     def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, gin_channels=0):
-         super().__init__()
-
-         self.in_channels = in_channels
-         self.filter_channels = filter_channels
-         self.kernel_size = kernel_size
-         self.p_dropout = p_dropout
-         self.gin_channels = gin_channels
-
-         self.drop = nn.Dropout(p_dropout)
-         self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size, padding=kernel_size//2)
-         self.norm_1 = modules.LayerNorm(filter_channels)
-         self.conv_2 = nn.Conv1d(filter_channels, filter_channels, kernel_size, padding=kernel_size//2)
-         self.norm_2 = modules.LayerNorm(filter_channels)
-         self.proj = nn.Conv1d(filter_channels, 1, 1)
-
-         if gin_channels != 0:
-             self.cond = nn.Conv1d(gin_channels, in_channels, 1)
-
-     def forward(self, x, x_mask, g=None):
-         x = torch.detach(x)
-         if g is not None:
-             g = torch.detach(g)
-             x = x + self.cond(g)
-         x = self.conv_1(x * x_mask)
-         x = torch.relu(x)
-         x = self.norm_1(x)
-         x = self.drop(x)
-         x = self.conv_2(x * x_mask)
-         x = torch.relu(x)
-         x = self.norm_2(x)
-         x = self.drop(x)
-         x = self.proj(x * x_mask)
-         return x * x_mask
-
-
- class TextEncoder(nn.Module):
-     def __init__(self,
-                  n_vocab,
-                  out_channels,
-                  hidden_channels,
-                  filter_channels,
-                  n_heads,
-                  n_layers,
-                  kernel_size,
-                  p_dropout):
-         super().__init__()
-         self.n_vocab = n_vocab
-         self.out_channels = out_channels
-         self.hidden_channels = hidden_channels
-         self.filter_channels = filter_channels
-         self.n_heads = n_heads
-         self.n_layers = n_layers
-         self.kernel_size = kernel_size
-         self.p_dropout = p_dropout
-
-         self.emb = nn.Embedding(n_vocab, hidden_channels)
-         nn.init.normal_(self.emb.weight, 0.0, hidden_channels**-0.5)
-
-         self.encoder = attentions.Encoder(
-             hidden_channels,
-             filter_channels,
-             n_heads,
-             n_layers,
-             kernel_size,
-             p_dropout)
-         self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
-     def forward(self, x, x_lengths):
-         x = self.emb(x) * math.sqrt(self.hidden_channels) # [b, t, h]
-         x = torch.transpose(x, 1, -1) # [b, h, t]
-         x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype)
-
-         x = self.encoder(x * x_mask, x_mask)
-         stats = self.proj(x) * x_mask
-
-         m, logs = torch.split(stats, self.out_channels, dim=1)
-         return x, m, logs, x_mask
-
-
- class ResidualCouplingBlock(nn.Module):
-     def __init__(self,
-                  channels,
-                  hidden_channels,
-                  kernel_size,
-                  dilation_rate,
-                  n_layers,
-                  n_flows=4,
-                  gin_channels=0):
-         super().__init__()
-         self.channels = channels
-         self.hidden_channels = hidden_channels
-         self.kernel_size = kernel_size
-         self.dilation_rate = dilation_rate
-         self.n_layers = n_layers
-         self.n_flows = n_flows
-         self.gin_channels = gin_channels
-
-         self.flows = nn.ModuleList()
-         for i in range(n_flows):
-             self.flows.append(modules.ResidualCouplingLayer(channels, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels, mean_only=True))
-             self.flows.append(modules.Flip())
-
-     def forward(self, x, x_mask, g=None, reverse=False):
-         if not reverse:
-             for flow in self.flows:
-                 x, _ = flow(x, x_mask, g=g, reverse=reverse)
-         else:
-             for flow in reversed(self.flows):
-                 x = flow(x, x_mask, g=g, reverse=reverse)
-         return x
-
-
- class PosteriorEncoder(nn.Module):
-     def __init__(self,
-                  in_channels,
-                  out_channels,
-                  hidden_channels,
-                  kernel_size,
-                  dilation_rate,
-                  n_layers,
-                  gin_channels=0):
-         super().__init__()
-         self.in_channels = in_channels
-         self.out_channels = out_channels
-         self.hidden_channels = hidden_channels
-         self.kernel_size = kernel_size
-         self.dilation_rate = dilation_rate
-         self.n_layers = n_layers
-         self.gin_channels = gin_channels
-
-         self.pre = nn.Conv1d(in_channels, hidden_channels, 1)
-         self.enc = modules.WN(hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels)
-         self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
-     def forward(self, x, x_lengths, g=None):
-         x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype)
-         x = self.pre(x) * x_mask
-         x = self.enc(x, x_mask, g=g)
-         stats = self.proj(x) * x_mask
-         m, logs = torch.split(stats, self.out_channels, dim=1)
-         z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask
-         return z, m, logs, x_mask
-
-
- class Generator(torch.nn.Module):
-     def __init__(self, initial_channel, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gin_channels=0):
-         super(Generator, self).__init__()
-         self.num_kernels = len(resblock_kernel_sizes)
-         self.num_upsamples = len(upsample_rates)
-         self.conv_pre = Conv1d(initial_channel, upsample_initial_channel, 7, 1, padding=3)
-         resblock = modules.ResBlock1 if resblock == '1' else modules.ResBlock2
-
-         self.ups = nn.ModuleList()
-         for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)):
-             self.ups.append(weight_norm(
-                 ConvTranspose1d(upsample_initial_channel//(2**i), upsample_initial_channel//(2**(i+1)),
-                                 k, u, padding=(k-u)//2)))
-
-         self.resblocks = nn.ModuleList()
-         for i in range(len(self.ups)):
-             ch = upsample_initial_channel//(2**(i+1))
-             for j, (k, d) in enumerate(zip(resblock_kernel_sizes, resblock_dilation_sizes)):
-                 self.resblocks.append(resblock(ch, k, d))
-
-         self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False)
-         self.ups.apply(init_weights)
-
-         if gin_channels != 0:
-             self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1)
-
-     def forward(self, x, g=None):
-         x = self.conv_pre(x)
-         if g is not None:
-             x = x + self.cond(g)
-
-         for i in range(self.num_upsamples):
-             x = F.leaky_relu(x, modules.LRELU_SLOPE)
-             x = self.ups[i](x)
-             xs = None
-             for j in range(self.num_kernels):
-                 if xs is None:
-                     xs = self.resblocks[i*self.num_kernels+j](x)
-                 else:
-                     xs += self.resblocks[i*self.num_kernels+j](x)
-             x = xs / self.num_kernels
-         x = F.leaky_relu(x)
-         x = self.conv_post(x)
-         x = torch.tanh(x)
-
-         return x
-
-     def remove_weight_norm(self):
-         print('Removing weight norm...')
-         for l in self.ups:
-             remove_weight_norm(l)
-         for l in self.resblocks:
-             l.remove_weight_norm()
-
-
- class DiscriminatorP(torch.nn.Module):
-     def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False):
-         super(DiscriminatorP, self).__init__()
-         self.period = period
-         self.use_spectral_norm = use_spectral_norm
-         norm_f = weight_norm if use_spectral_norm == False else spectral_norm
-         self.convs = nn.ModuleList([
-             norm_f(Conv2d(1, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))),
-             norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))),
-             norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))),
-             norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))),
-             norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(get_padding(kernel_size, 1), 0))),
-         ])
-         self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0)))
-
-     def forward(self, x):
-         fmap = []
-
-         # 1d to 2d
-         b, c, t = x.shape
-         if t % self.period != 0: # pad first
-             n_pad = self.period - (t % self.period)
-             x = F.pad(x, (0, n_pad), "reflect")
-             t = t + n_pad
-         x = x.view(b, c, t // self.period, self.period)
-
-         for l in self.convs:
-             x = l(x)
-             x = F.leaky_relu(x, modules.LRELU_SLOPE)
-             fmap.append(x)
-         x = self.conv_post(x)
-         fmap.append(x)
-         x = torch.flatten(x, 1, -1)
-
-         return x, fmap
-
-
- class DiscriminatorS(torch.nn.Module):
-     def __init__(self, use_spectral_norm=False):
-         super(DiscriminatorS, self).__init__()
-         norm_f = weight_norm if use_spectral_norm == False else spectral_norm
-         self.convs = nn.ModuleList([
-             norm_f(Conv1d(1, 16, 15, 1, padding=7)),
-             norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)),
-             norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)),
-             norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)),
-             norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)),
-             norm_f(Conv1d(1024, 1024, 5, 1, padding=2)),
-         ])
-         self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1))
-
-     def forward(self, x):
-         fmap = []
-
-         for l in self.convs:
-             x = l(x)
-             x = F.leaky_relu(x, modules.LRELU_SLOPE)
-             fmap.append(x)
-         x = self.conv_post(x)
-         fmap.append(x)
-         x = torch.flatten(x, 1, -1)
-
-         return x, fmap
-
-
- class MultiPeriodDiscriminator(torch.nn.Module):
-     def __init__(self, use_spectral_norm=False):
-         super(MultiPeriodDiscriminator, self).__init__()
-         periods = [2,3,5,7,11]
-
-         discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)]
-         discs = discs + [DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods]
-         self.discriminators = nn.ModuleList(discs)
-
-     def forward(self, y, y_hat):
-         y_d_rs = []
-         y_d_gs = []
-         fmap_rs = []
-         fmap_gs = []
-         for i, d in enumerate(self.discriminators):
-             y_d_r, fmap_r = d(y)
-             y_d_g, fmap_g = d(y_hat)
-             y_d_rs.append(y_d_r)
-             y_d_gs.append(y_d_g)
-             fmap_rs.append(fmap_r)
-             fmap_gs.append(fmap_g)
-
-         return y_d_rs, y_d_gs, fmap_rs, fmap_gs
-
-
- class SynthesizerTrn(nn.Module):
-     """
-     Synthesizer for Training
-     """
-
-     def __init__(self,
-                  n_vocab,
-                  spec_channels,
-                  segment_size,
-                  inter_channels,
-                  hidden_channels,
-                  filter_channels,
-                  n_heads,
-                  n_layers,
-                  kernel_size,
-                  p_dropout,
-                  resblock,
-                  resblock_kernel_sizes,
-                  resblock_dilation_sizes,
-                  upsample_rates,
-                  upsample_initial_channel,
-                  upsample_kernel_sizes,
-                  n_speakers=0,
-                  gin_channels=0,
-                  use_sdp=True,
-                  **kwargs):
-
-         super().__init__()
-         self.n_vocab = n_vocab
-         self.spec_channels = spec_channels
-         self.inter_channels = inter_channels
-         self.hidden_channels = hidden_channels
-         self.filter_channels = filter_channels
-         self.n_heads = n_heads
-         self.n_layers = n_layers
-         self.kernel_size = kernel_size
-         self.p_dropout = p_dropout
-         self.resblock = resblock
-         self.resblock_kernel_sizes = resblock_kernel_sizes
-         self.resblock_dilation_sizes = resblock_dilation_sizes
-         self.upsample_rates = upsample_rates
-         self.upsample_initial_channel = upsample_initial_channel
-         self.upsample_kernel_sizes = upsample_kernel_sizes
-         self.segment_size = segment_size
-         self.n_speakers = n_speakers
-         self.gin_channels = gin_channels
-
-         self.use_sdp = use_sdp
-
-         self.enc_p = TextEncoder(n_vocab,
-             inter_channels,
-             hidden_channels,
-             filter_channels,
-             n_heads,
-             n_layers,
-             kernel_size,
-             p_dropout)
-         self.dec = Generator(inter_channels, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gin_channels=gin_channels)
-         self.enc_q = PosteriorEncoder(spec_channels, inter_channels, hidden_channels, 5, 1, 16, gin_channels=gin_channels)
-         self.flow = ResidualCouplingBlock(inter_channels, hidden_channels, 5, 1, 4, gin_channels=gin_channels)
-
-         if use_sdp:
-             self.dp = StochasticDurationPredictor(hidden_channels, 192, 3, 0.5, 4, gin_channels=gin_channels)
-         else:
-             self.dp = DurationPredictor(hidden_channels, 256, 3, 0.5, gin_channels=gin_channels)
-
-         if n_speakers > 1:
-             self.emb_g = nn.Embedding(n_speakers, gin_channels)
-
-     def forward(self, x, x_lengths, y, y_lengths, sid=None):
-
-         x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths)
-         if self.n_speakers > 0:
-             g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1]
-         else:
-             g = None
-
-         z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g)
-         z_p = self.flow(z, y_mask, g=g)
-
-         with torch.no_grad():
-             # negative cross-entropy
-             s_p_sq_r = torch.exp(-2 * logs_p) # [b, d, t]
-             neg_cent1 = torch.sum(-0.5 * math.log(2 * math.pi) - logs_p, [1], keepdim=True) # [b, 1, t_s]
-             neg_cent2 = torch.matmul(-0.5 * (z_p ** 2).transpose(1, 2), s_p_sq_r) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s]
-             neg_cent3 = torch.matmul(z_p.transpose(1, 2), (m_p * s_p_sq_r)) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s]
-             neg_cent4 = torch.sum(-0.5 * (m_p ** 2) * s_p_sq_r, [1], keepdim=True) # [b, 1, t_s]
-             neg_cent = neg_cent1 + neg_cent2 + neg_cent3 + neg_cent4
-
-             attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1)
-             attn = monotonic_align.maximum_path(neg_cent, attn_mask.squeeze(1)).unsqueeze(1).detach()
-
-         w = attn.sum(2)
-         if self.use_sdp:
-             l_length = self.dp(x, x_mask, w, g=g)
-             l_length = l_length / torch.sum(x_mask)
-         else:
-             logw_ = torch.log(w + 1e-6) * x_mask
-             logw = self.dp(x, x_mask, g=g)
-             l_length = torch.sum((logw - logw_)**2, [1,2]) / torch.sum(x_mask) # for averaging
-
-         # expand prior
-         m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2)
-         logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2)
-
-         z_slice, ids_slice = commons.rand_slice_segments(z, y_lengths, self.segment_size)
-         o = self.dec(z_slice, g=g)
-         return o, l_length, attn, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q)
-
-     def infer(self, x, x_lengths, sid=None, noise_scale=1, length_scale=1, noise_scale_w=1., max_len=None):
-         x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths)
-         if self.n_speakers > 0:
-             g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1]
-         else:
-             g = None
-
-         if self.use_sdp:
-             logw = self.dp(x, x_mask, g=g, reverse=True, noise_scale=noise_scale_w)
-         else:
-             logw = self.dp(x, x_mask, g=g)
-         w = torch.exp(logw) * x_mask * length_scale
-         w_ceil = torch.ceil(w)
-         y_lengths = torch.clamp_min(torch.sum(w_ceil, [1, 2]), 1).long()
-         y_mask = torch.unsqueeze(commons.sequence_mask(y_lengths, None), 1).to(x_mask.dtype)
-         attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1)
-         attn = commons.generate_path(w_ceil, attn_mask)
-
-         m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t']
-         logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t']
-
-         z_p = m_p + torch.randn_like(m_p) * torch.exp(logs_p) * noise_scale
-         z = self.flow(z_p, y_mask, g=g, reverse=True)
-         o = self.dec((z * y_mask)[:,:,:max_len], g=g)
-         return o, attn, y_mask, (z, z_p, m_p, logs_p)
-
-     def voice_conversion(self, y, y_lengths, sid_src, sid_tgt):
-         assert self.n_speakers > 0, "n_speakers have to be larger than 0."
-         g_src = self.emb_g(sid_src).unsqueeze(-1)
-         g_tgt = self.emb_g(sid_tgt).unsqueeze(-1)
-         z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g_src)
-         z_p = self.flow(z, y_mask, g=g_src)
-         z_hat = self.flow(z_p, y_mask, g=g_tgt, reverse=True)
-         o_hat = self.dec(z_hat * y_mask, g=g_tgt)
-         return o_hat, y_mask, (z, z_p, z_hat)
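The `monotonic_align.maximum_path` call in `SynthesizerTrn.forward` above performs monotonic alignment search: given a score matrix between text positions and spectrogram frames, it finds the highest-scoring path in which each frame is assigned to a non-decreasing text index. A minimal pure-Python sketch of that dynamic program follows; the real implementation operates on batched, masked torch tensors and is typically Cython-accelerated, so this single-example version is an illustration only, not the library code:

```python
def maximum_path(value):
    """Monotonic alignment search (single example, plain lists).

    value[x][y] is the score of aligning text position x to frame y.
    Returns path, where path[y] is the text index assigned to frame y;
    the path starts at (0, 0), ends at (t_x - 1, t_y - 1), and the text
    index increases by 0 or 1 at each frame step.
    """
    t_x, t_y = len(value), len(value[0])
    NEG = float("-inf")
    # best[x][y] = best cumulative score of any monotonic path reaching (x, y)
    best = [[NEG] * t_y for _ in range(t_x)]
    best[0][0] = value[0][0]
    for y in range(1, t_y):
        for x in range(t_x):
            stay = best[x][y - 1]                      # keep same text index
            diag = best[x - 1][y - 1] if x > 0 else NEG  # advance text index
            prev = max(stay, diag)
            if prev > NEG:
                best[x][y] = prev + value[x][y]
    # Backtrack from the bottom-right cell, moving left one frame at a time
    # and stepping the text index down whenever the diagonal was better.
    path = [0] * t_y
    x = t_x - 1
    for y in range(t_y - 1, -1, -1):
        path[y] = x
        if y > 0 and x > 0 and best[x - 1][y - 1] > best[x][y - 1]:
            x -= 1
    return path

# Example: two text tokens over three frames; the best path assigns
# frame 0 to token 0 and frames 1-2 to token 1.
print(maximum_path([[1, 0, 0], [0, 1, 0]]))  # [0, 1, 1]
```

In `forward` the resulting hard alignment (`attn`) is used both to compute the duration targets (`w = attn.sum(2)`) and to expand the prior statistics `m_p`, `logs_p` from text length to frame length.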
spaces/1acneusushi/gradio-2dmoleculeeditor/data/Docklight 2.1.10 Serial Number 631 PATCHED.md DELETED
@@ -1,113 +0,0 @@
- <br />
- <h1>Docklight 2.1.10 Serial Number 631: What Is It and How to Use It</h1>
- <p>If you are working with serial communication protocols, such as RS232, TCP, USB HID, Bluetooth, etc., you may need a tool that can help you test, analyze and simulate them. One such tool is <strong>Docklight</strong>, a software that can monitor the communication between two serial devices or test the serial communication of a single device.</p>
- <p>However, to use the full features and benefits of Docklight software, you need a <strong>serial number</strong>, which is a unique product key that activates the software. One example of a serial number for Docklight software is <strong>631</strong>, which corresponds to the version <strong>2.1.10</strong> of the software.</p>
- <h2>Docklight 2.1.10 serial number 631</h2><br /><p><b><b>Download</b> &#10026;&#10026;&#10026; <a href="https://byltly.com/2uKzZI">https://byltly.com/2uKzZI</a></b></p><br /><br />
- <p>In this article, we will explain what Docklight is and what are its features, what a serial number is and why it is important, what Docklight 2.1.10 serial number 631 is and how to get it, and how to use it to activate the software.</p>
- <h2>What Is Docklight and What Are Its Features</h2>
- <h3>Docklight is a testing, analysis and simulation tool for serial communication protocols</h3>
- <p>Docklight is a software that allows you to monitor the communication between two serial devices or to test the serial communication of a single device. It is designed for engineers, developers, testers and hobbyists who work with serial communication protocols.</p>
- <p>Docklight can simulate serial protocols by sending predefined data sequences or responding to incoming data. It can also log the serial data in various formats, such as ASCII, HEX, binary, etc. It can also detect specific data sequences and trigger actions based on them.</p>
- <h3>Docklight supports various interfaces and connections, such as RS232, TCP, USB HID, etc.</h3>
- <p>Docklight can work with different types of serial interfaces and connections, such as RS232, RS485/RS422, TCP/IP, UDP/IP, USB HID, Bluetooth SPP, virtual COM ports, etc. It can also handle various communication parameters, such as baud rate, parity, data bits, stop bits, flow control, etc.</p>
- <p>Docklight can connect to multiple devices simultaneously and display the communication data in separate windows or tabs. It can also synchronize the communication data with the system clock or an external time source.</p>
- <h3>Docklight has many applications, such as simulating serial protocols, logging RS232 data, detecting specific data sequences, etc.</h3>
- <p>Docklight can be used for various purposes and scenarios involving serial communication protocols. Some examples are:</p>
- <p></p>
- <ul>
- <li>Simulating serial protocols: Docklight can act as a sender or a receiver of serial data and simulate the behavior of a device or a system. For example, you can use Docklight to test how your device responds to different commands or data inputs.</li>
- <li>Logging RS232 data: Docklight can record the serial data exchanged between two devices or from a single device and save it in a file or a database. For example, you can use Docklight to monitor the performance or the status of your device over time.</li>
- <li>Detecting specific data sequences: Docklight can search for specific patterns or sequences of serial data and trigger actions based on them. For example, you can use Docklight to send an email or a sound alert when your device sends an error code or a warning message.</li>
- </ul>
- <h2>What Is a Serial Number and Why Is It Important</h2>
- <h3>A serial number is a unique identifier assigned to an item or a software</h3>
- <p>A serial number is a sequence of numbers and/or letters that is assigned to an item or a software to identify it uniquely. Serial numbers are usually printed on labels or stickers attached to the item or the software package.</p>
- <p>Serial numbers are different from model numbers or part numbers, which are used to identify the type or the category of an item or a software. Serial numbers are also different from barcodes or QR codes, which are used to encode information about an item or a software in a machine-readable format.</p>
- <h3>Serial numbers are used for various purposes, such as preventing theft and counterfeiting, ensuring quality control, enabling software activation, etc.</h3>
- <p>Serial numbers have many benefits and applications for both the manufacturers and the consumers of items or software. Some examples are:</p>
- <ul>
- <li>Preventing theft and counterfeiting: Serial numbers can help track the ownership and the location of an item or a software and deter theft and counterfeiting. For example, if your laptop is stolen, you can report its serial number to the police or the manufacturer and increase the chances of recovering it.</li>
- <li>Ensuring quality control: Serial numbers can help trace the origin and the history of an item or a software and ensure quality control. For example, if your car has a defect or a recall issue, you can check its serial number to see when and where it was made and what parts were used.</li>
- <li>Enabling software activation: Serial numbers can help activate and register software and prevent unauthorized use or distribution. For example, if you buy a software online, you will receive a serial number that you need to enter to install and use the software.</li>
- </ul>
- <h2>What Is Docklight 2.1.10 Serial Number 631 and How to Get It</h2>
- <h3>Docklight 2.1.10 serial number 631 is a product key that allows you to use the full version of Docklight software</h3>
- <p>Docklight 2.1.10 serial number 631 is a 16-digit alphanumeric code that corresponds to the version 2.1.10 of Docklight software. It is also known as a product key or a license key.</p>
- <p>Docklight 2.1.10 serial number 631 enables you to use the full features and benefits of Docklight software without any limitations or restrictions. It also allows you to receive free updates and technical support from the manufacturer.</p>
- <h3>You can get Docklight 2.1.10 serial number 631 by purchasing it from the official website or from authorized resellers</h3>
- <p>The best and the most legal way to get Docklight 2.1.10 serial number 631 is to buy it from the official website of Docklight software or from authorized resellers. The price of Docklight 2.1.10 serial number 631 is $69 for a single user license, $199 for a site license, and $499 for a corporate license.</p>
- <p>When you purchase Docklight 2.1.10 serial number 631, you will receive an email with the serial number and a download link for the software. You will also receive an invoice and a receipt for your payment.</p>
- <h3>You can also get Docklight 2.1.10 serial number 631 by using a keygen or a crack, but this is illegal and risky</h3>
- <p>Another way to get Docklight 2.1.10 serial number 631 is to use a keygen or a crack, which are tools that generate or bypass serial numbers for software. However, this is illegal and risky, as it violates the terms and conditions of Docklight software and may expose your computer to viruses, malware, or other threats.</p>
- <p>Using a keygen or a crack may also result in poor performance, errors, or compatibility issues with Docklight software. Moreover, you may face legal consequences, such as fines or lawsuits, if you are caught using a keygen or a crack.</p>
- <h2>How to Use Docklight 2.1.10 Serial Number 631 to Activate the Software</h2>
- <h3>Download and install Docklight software from the official website or from a trusted source</h3>
- <p>The first step to use Docklight 2.1.10 serial number 631 is to download and install Docklight software from the official website of Docklight software or from a trusted source. You can download the latest version of Docklight software (version 2.3.26) from here: [text].</p>
- <p>The installation process is simple and straightforward. You just need to follow the instructions on the screen and accept the license agreement.</p>
- <p>The installation process is simple and straightforward. You just need to follow the instructions on the screen and accept the license agreement.</p>
47
- <h3>Run the software and enter Docklight 2.1.10 serial number 631 when prompted</h3>
48
- <p>The next step is to run the software and enter Docklight 2.1.10 serial number 631 when prompted. You can do this by clicking on the "Help" menu and selecting "Enter License Key". A dialog box will appear where you can enter your serial number.</p>
49
- <p>After entering your serial number, click on "OK" and wait for the confirmation message that your software has been activated successfully.</p>
50
- <h3>Enjoy the full features and benefits of Docklight software</h3>
51
- <p>The final step is to enjoy the full features and benefits of Docklight software. You can now use Docklight software to test, analyze and simulate serial communication protocols without any limitations or restrictions.</p>
52
- <p>You can also access the online documentation, tutorials, examples, and support forums of Docklight software from here: [text].</p>
53
- <h2>Conclusion</h2>
54
- <h3>Docklight is a powerful and versatile tool for testing, analyzing and simulating serial communication protocols</h3>
55
- <p>Docklight is a software that can help you monitor the communication between two serial devices or test the serial communication of a single device. It can simulate serial protocols by sending predefined data sequences or responding to incoming data. It can also log the serial data in various formats and detect specific data sequences and trigger actions based on them.</p>
56
- <h3>Docklight 2.1.10 serial number 631 is a product key that enables you to use the full version of Docklight software</h3>
57
- <p>Docklight 2.1.10 serial number 631 is a product key that corresponds to the version 2.1.10 of Docklight software. It enables you to use the full features and benefits of Docklight software without any limitations or restrictions.</p>
58
- <h3>You can get Docklight 2.1.10 serial number 631 by buying it legally or by using illegal methods, but the latter is not recommended</h3>
59
- <p>You can get Docklight 2.1.10 serial number 631 by purchasing it from the official website of Docklight software or from authorized resellers, which is the best and the most legal way to get it.</p>
60
- <p>You can also get Docklight 2.1.10 serial number 631 by using a keygen or a crack, which are tools that generate or bypass serial numbers for software, but this is illegal and risky, as it violates the terms and conditions of Docklight software and may expose your computer to viruses, malware, or other threats. Therefore, we do not recommend using this method.</p>
61
- <h2>FAQs</h2>
62
- <h4>Q1: What are the system requirements for using Docklight software?</h4>
63
- <p>A1: The system requirements for using Docklight software are as follows:</p>
64
- <ul>
65
- <li>Operating system: Windows 10, 8.1, 8, 7, Vista, XP (32-bit or 64-bit)</li>
66
- <li>Processor: Pentium III or higher</li>
67
- <li>Memory: 256 MB RAM or more</li>
68
- <li>Disk space: 50 MB or more</li>
69
- <li>Serial port: One or more standard serial ports (COM1..COM256) or virtual COM ports</li>
70
- <li>Network adapter: For TCP/IP communication</li>
71
- <li>USB port: For USB HID communication</li>
72
- <li>Bluetooth adapter: For Bluetooth SPP communication</li>
73
- </ul>
74
- <h4>Q2: What are the limitations of the evaluation version of Docklight software?</h4>
75
- <p>A2: The evaluation version of Docklight software is free to download and use for testing purposes, but it has some limitations, such as:</p>
76
- <ul>
77
- <li>The evaluation period is limited to 30 days</li>
78
- <li>The communication window can only display up to 1000 lines of data</li>
79
- <li>The logging function can only save up to 1000 lines of data per file</li>
80
- <li>The script function can only run up to 1000 lines of code per script</li>
81
- <li>The project function can only save up to 10 projects</li>
82
- <li>The sequence detection function can only detect up to 10 sequences</li>
83
- <li>The send function can only send up to 10 data sequences per project</li>
84
- <li>The receive function can only receive up to 10 data sequences per project</li>
85
- <li>The simulation function can only simulate up to 10 devices per project</li>
86
- </ul>
87
- <h4>Q3: What are some alternatives to Docklight software?</h4>
88
- <p>A3: Some alternatives to Docklight software are:</p>
89
- <ul>
90
- <li><a href="">Realterm</a>: A terminal program that can capture and debug binary and other difficult data streams.</li>
91
- <li><a href="">Termite</a>: A simple and easy-to-use terminal emulator that supports various communication protocols.</li>
92
- <li><a href="">Hercules SETUP utility</a>: A multi-purpose utility that can test and configure various serial devices and connections.</li>
93
- <li><a href="">Serial Port Monitor</a>: A professional tool that can monitor, analyze and emulate serial port activity.</li>
94
- <li><a href="">PuTTY</a>: A popular and free SSH and telnet client that can also handle serial communication.</li>
95
- </ul>
96
- <h4>Q4: What does the serial number 631 mean in numerology or angel numbers?</h4>
97
- <p>A4: In numerology or angel numbers, the serial number 631 may have different meanings depending on the context and the interpretation. However, some possible meanings are: </p>
98
- <ul>
99
- <li>The number 631 is a combination of the energies and attributes of the numbers 6, 3 and 1. It signifies balance, harmony, creativity, communication, self-expression, optimism, joy, inspiration, initiative, leadership, independence, new beginnings, progress and success.</li>
100
- <li>The number 631 is a message from your angels that they are supporting you in your endeavors and encouraging you to pursue your passions and talents. They also want you to be confident, positive and assertive in achieving your goals and fulfilling your life purpose.</li>
101
- <li>The number 631 is a reminder that you have the power and the potential to create your own reality with your thoughts, words and actions. You should focus on what you want rather than what you don't want and trust that the universe will provide you with everything you need.</li>
102
- <li>The number 631 is an indication that you are on the right path and that you are making positive changes in your life. You should be grateful for the opportunities and blessings that come your way and share them with others.</li>
103
- </ul>
104
- <h4>Q5: How can I contact the support team of Docklight software?</h4>
105
- <p>A5: You can contact the support team of Docklight software by using one of the following methods:</p>
106
- <ul>
107
- <li>Email: [email protected]</li>
108
- <li>Phone: +49 (0)89 / 38169745</li> <li>Website: https://docklight.de/contact/</li>
109
- <li>Forum: https://docklight.de/forum/</li>
110
- </ul>
111
- <p>The support team of Docklight software is friendly, helpful and responsive. They can assist you with any questions, issues or feedback regarding Docklight software.</p>
spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Cracked Version of Filmora The Good the Bad and the Ugly.md DELETED
@@ -1,32 +0,0 @@
1
- <br />
2
- <h1>How to Download Cracked Version of Filmora for Free</h1>
3
- <p>Filmora is a popular video editing software that offers many features and tools to create stunning videos. However, it is not free and requires a license key to activate. If you want to use Filmora without paying for it, you might be tempted to download a cracked version of it from the internet. But is it safe and legal to do so?</p>
4
- <p>In this article, we will explain why you should avoid downloading cracked versions of Filmora and what are the risks and consequences of doing so. We will also show you how to get Filmora legally and affordably with some tips and tricks.</p>
5
- <h2>download cracked version of filmora</h2><br /><p><b><b>Download File</b> &#187; <a href="https://byltly.com/2uKxPc">https://byltly.com/2uKxPc</a></b></p><br /><br />
6
-
7
- <h2>Why You Should Not Download Cracked Version of Filmora</h2>
8
- <p>Downloading cracked versions of any software is illegal and unethical. You are violating the copyright laws and the terms of service of the software developer. You are also depriving them of their rightful income and support.</p>
9
- <p>But aside from the legal and moral issues, downloading cracked versions of Filmora can also harm your computer and your data. Here are some of the dangers of using cracked versions of Filmora:</p>
10
- <ul>
11
- <li><b>Viruses and malware:</b> Cracked versions of Filmora are often infected with viruses, malware, spyware, ransomware, or other malicious programs that can damage your system, steal your personal information, or lock your files until you pay a ransom.</li>
12
- <li><b>Poor performance and quality:</b> Cracked versions of Filmora are usually outdated, unstable, buggy, or missing some features or functions. They can cause crashes, freezes, errors, or glitches in your video editing process. They can also produce low-quality videos with watermarks, artifacts, or missing audio.</li>
13
- <li><b>No updates or support:</b> Cracked versions of Filmora do not receive any updates or patches from the developer. This means that they are vulnerable to security breaches, compatibility issues, or new bugs. They also do not have any customer support or technical assistance in case you encounter any problems or questions.</li>
14
- <li><b>Legal actions:</b> Downloading cracked versions of Filmora can expose you to legal actions from the software developer or the authorities. You can face fines, lawsuits, or even criminal charges for piracy or theft.</li>
15
- </ul>
16
- <p>As you can see, downloading cracked versions of Filmora is not worth the risk or the hassle. You are better off using a legitimate version of Filmora that is safe, reliable, updated, supported, and legal.</p>
17
-
18
- <h2>How to Get Filmora Legally and Affordably</h2>
19
- <p>If you want to use Filmora for your video editing needs, you have several options to get it legally and affordably. Here are some of them:</p>
20
- <ul>
21
- <li><b>Free trial:</b> Filmora offers a free trial version that you can download and use for 30 days. The free trial version has all the features and functions of the paid version, except that it adds a watermark to your exported videos. You can use the free trial version to test out Filmora and see if it suits your needs.</li>
22
- <li><b>Discounts and coupons:</b> Filmora often offers discounts and coupons for its products. You can check their official website, social media pages, newsletters, or blogs for any promotions or deals. You can also search online for third-party websites that offer coupons or codes for Filmora.</li>
23
- <li><b>Bundles and plans:</b> Filmora has different bundles and plans that cater to different users and budgets. You can choose from a monthly subscription, a yearly subscription, a lifetime license, or a bundle with other Wondershare products. You can compare the prices and benefits of each option and pick the one that suits you best.</li>
24
- </ul>
25
- <p>By using these options, you can get Filmora legally and affordably without compromising your security, quality, or ethics.</p>
27
-
28
- <h2>Conclusion</h2>
29
- <p>Filmora is a great video editing software that can help you create amazing videos for your personal or professional projects. However, you should not download cracked versions of Filmora from the internet as they are illegal, unsafe, unreliable, and unsupported. Instead, you should use legitimate versions of Filmora that are legal, safe, reliable, updated, supported, and affordable.</p>
30
- <p>We hope this article has helped you understand why you should avoid downloading cracked versions of Filmora and how to get Filmora legally and affordably.</p>
spaces/1acneusushi/gradio-2dmoleculeeditor/data/Gts Fc 518ls Lan Driver.11 Download and Install the Latest Version for Your LAN Card.md DELETED
@@ -1,147 +0,0 @@
1
-
2
- <h1>Gts Fc 518ls Lan Driver: What You Need to Know</h1>
3
- <p>If you are looking for a reliable and fast network adapter for your computer, you might want to consider the Gts Fc 518ls Lan Driver. This driver is designed to enable your network card to communicate with your router, modem, or other network devices. It also allows you to access the internet and enjoy various online activities, such as gaming, streaming, browsing, and more.</p>
4
- <h2>Gts Fc 518ls Lan Driver.11</h2><br /><p><b><b>Download Zip</b> &#8250;&#8250;&#8250;&#8250;&#8250; <a href="https://byltly.com/2uKwPL">https://byltly.com/2uKwPL</a></b></p><br /><br />
5
- <p>However, before you can use the Gts Fc 518ls Lan Driver, you need to download and install it on your computer. This might sound complicated, but don't worry. In this article, we will show you how to do it step by step. We will also explain the features, benefits, and drawbacks of this driver, and how to fix some common problems that might occur. By the end of this article, you will have a better understanding of what the Gts Fc 518ls Lan Driver is and how to use it.</p>
6
- <h2>Introduction</h2>
7
- <h3>What is Gts Fc 518ls Lan Driver and why do you need it?</h3>
8
- <p>The Gts Fc 518ls Lan Driver is a software program that controls the network card or adapter on your computer. A network card or adapter is a hardware device that connects your computer to a network, such as a local area network (LAN) or a wide area network (WAN). A network card or adapter can be either built-in or external.</p>
9
- <p>The Gts Fc 518ls Lan Driver allows your network card or adapter to communicate with other network devices, such as routers, modems, switches, hubs, etc. It also enables your computer to access the internet and perform various online activities.</p>
10
- <p>You need the Gts Fc 518ls Lan Driver because without it, your network card or adapter will not work properly. You might experience slow or unstable network connection, limited or no internet access, or even no network connection at all. Therefore, it is important to have the correct and updated driver for your network card or adapter.</p>
11
- <h3>How to download and install Gts Fc 518ls Lan Driver for Windows 11</h3>
12
- <p>To download and install the Gts Fc 518ls Lan Driver for Windows 11, you need to follow these steps:</p>
61
- <ol>
62
- <li>Go to the official website of the manufacturer of your network card or adapter. For example, if you have a Realtek network card or adapter, go to <a href="https://www.realtek.com/en/">https://www.realtek.com/en/</a>.</li>
63
- <li>Find the model number of your network card or adapter. You can usually find it on a sticker on the device itself, or on the packaging box. For example, if you have a Realtek RTL8139D network card or adapter, the model number is RTL8139D.</li>
64
- <li>Search for the model number on the website and find the driver that matches your operating system. For example, if you have Windows 11 (64-bit), look for the driver that says "Windows 11 (64-bit)".</li>
65
- <li>Download the driver file to your computer. It will usually be in a ZIP or RAR format.</li>
66
- <li>Extract the driver file using a program like WinRAR or WinZip.</li>
67
- <li>Open the extracted folder and find the setup file. It will usually have an .exe extension.</li>
68
- <li>Double-click on the setup file and follow the instructions on the screen to install the driver.</li>
69
- <li>Restart your computer after the installation is complete.</li>
70
- </ol>
71
- <p>Congratulations! You have successfully downloaded and installed the Gts Fc 518ls Lan Driver for Windows 11.</p>
72
- <h2>Features of Gts Fc 518ls Lan Driver</h2>
73
- <h3>High-speed and stable network connection</h3>
74
- <p>One of the main features of the Gts Fc 518ls Lan Driver is that it provides high-speed and stable network connection for your computer. It supports various network standards and protocols, such as Ethernet, Fast Ethernet, Gigabit Ethernet, TCP/IP, UDP/IP, etc. It also supports various data transfer rates, such as 10 Mbps, 100 Mbps, or even up to 1000 Mbps (depending on your network card or adapter).</p>
75
- <p>With the Gts Fc 518ls Lan Driver, you can enjoy fast and smooth online gaming, streaming, browsing, downloading, uploading, and more. You can also connect multiple devices to your network without experiencing lagging or buffering issues.</p>
76
- <h3>Compatible with various devices and operating systems</h3>
77
- <p>Another feature of the Gts Fc 518ls Lan Driver is that it is compatible with various devices and operating systems. It can work with different types of network cards or adapters, such as PCI-E cards, USB adapters, wireless adapters, etc. It can also work with different versions of Windows operating systems, such as Windows XP, Windows Vista, Windows 7, Windows 8/8.1/10/11 etc.</p>
78
- <p>This means that you can use the Gts Fc 518ls Lan Driver with any device or system that has a compatible network card or adapter. You don't have to worry about compatibility issues or driver conflicts.</p>
79
- <h3>Easy to update and troubleshoot</h3>
80
- <p>The last feature of the Gts Fc 518ls Lan Driver is that it is easy to update and troubleshoot. The manufacturer of your network card or adapter regularly releases new versions of the driver that fix bugs, improve performance, and add new features. You can easily download and install these updates from their website.</p>
81
- <p>If you encounter any problems with your network connection or driver installation, you can also use various tools and methods to troubleshoot them. For example, you can use Device Manager to check if your driver is working properly, you can use Network Troubleshooter to diagnose and fix common network problems, or you can use System Restore to undo any changes that might have caused problems.</p>
82
- <h2>Benefits of Gts Fc 518ls Lan Driver</h2>
83
- <h3>Enhance your online gaming experience</h3>
84
- <p>One of the benefits of using the Gts Fc 518ls Lan Driver is that it can enhance your online gaming experience. If you are a gamer who loves playing online games, you know how important it is to have a fast and stable network connection. You don't want to suffer from lagging, freezing, or disconnecting issues that ruin your game play.</p>
85
- <p>The Gts Fc 518ls Lan Driver can provide you with high-speed and stable network connection that allows you to play online games smoothly and seamlessly. You can join multiplayer games, chat with other players, and enjoy high-quality graphics and sound without any interruptions. You can also play online games on different platforms, such as PC, console, or mobile devices, as long as they have compatible network cards or adapters.</p>
86
- <h3>Improve your work productivity and efficiency</h3>
87
- <p>Another benefit of using the Gts Fc 518ls Lan Driver is that it can improve your work productivity and efficiency. If you are a professional who relies on internet access for your work, you know how important it is to have a reliable and fast network connection. You don't want to waste time waiting for web pages to load, files to download or upload, or emails to send or receive.</p>
88
- <p>The Gts Fc 518ls Lan Driver can provide you with reliable and fast network connection that allows you to work online efficiently and effectively. You can access various web applications, cloud services, and online tools that help you complete your tasks faster and easier. You can also collaborate with your colleagues, clients, or partners through video conferencing, file sharing, or instant messaging without any delays or disruptions.</p>
89
- <h3>Secure your data and privacy</h3>
90
- <p>The last benefit of using the Gts Fc 518ls Lan Driver is that it can secure your data and privacy. If you are a user who cares about your online security and privacy, you know how important it is to have a secure and encrypted network connection. You don't want to expose your personal or sensitive information to hackers, cybercriminals, or third parties who might misuse it.</p>
91
- <p>The Gts Fc 518ls Lan Driver can provide you with secure and encrypted network connection that protects your data and privacy from unauthorized access or interception. It supports various security features and protocols, such as WEP, WPA, WPA2, AES, TKIP, etc. It also prevents malware, spyware, viruses, or other threats from infecting your computer or network devices.</p>
92
- <h2>Drawbacks of Gts Fc 518ls Lan Driver</h2>
93
- <h3>Potential compatibility issues with some hardware or software</h3>
94
- <p>One of the drawbacks of using the Gts Fc 518ls Lan Driver is that it might cause potential compatibility issues with some hardware or software. Although the driver is compatible with most network cards or adapters and operating systems, there might be some exceptions or limitations. For example, some older or newer models of network cards or adapters might not be supported by the driver, or some specific features or functions might not work properly with the driver.</p>
95
- <p>To avoid compatibility issues, you should always check the compatibility list of the driver before downloading and installing it. You should also update your hardware or software regularly to ensure they are compatible with the latest version of the driver.</p>
96
- <h3>Possible driver conflicts or errors</h3>
97
- <p>Another drawback of using the Gts Fc 518ls Lan Driver is that it might cause possible driver conflicts or errors. Although the driver is designed to work smoothly and stably with your network card or adapter and operating system, there might be some glitches or bugs that affect its performance or functionality. For example, some driver files might be corrupted, missing, outdated, or incompatible with other drivers on your computer, causing network connection problems, system crashes, blue screens, or other errors.</p>
98
- <p>To avoid driver conflicts or errors, you should always backup your driver files before installing a new version of the driver. You should also uninstall any previous versions of the driver before installing a new one. If you encounter any driver problems, you should use various tools and methods to troubleshoot them.</p>
99
- <h3>How to fix common problems with Gts Fc 518ls Lan Driver</h3>
100
- <p>If you encounter any common problems with the Gts Fc 518ls Lan Driver, such as no network connection, slow network speed, limited internet access, etc., you can try these solutions:</p>
101
- <ol>
102
- <li>Check if your network card or adapter is working properly. Make sure it is connected securely to your computer and router/modem. You can also try plugging it into a different port or using a different cable.</li>
103
- <li>Check if your router/modem is working properly. Make sure it is powered on and has a good signal strength. You can also try restarting it or resetting it to its factory settings.</li>
104
- <li>Check if your driver is installed correctly. Make sure you have downloaded and installed the correct version of the driver for your network card or adapter and operating system. You can also try reinstalling the driver or updating it to the latest version.</li>
105
- <li>Check if your firewall or antivirus software is blocking your network connection. Make sure you have allowed the driver and your network card or adapter through your firewall or antivirus settings. You can also try disabling them temporarily while using the network connection.</li>
106
- <li>Check if there are any other devices or programs that are interfering with your network connection. Make sure you have closed any unnecessary applications or processes that might consume bandwidth or cause conflicts with your network connection. You can also try disconnecting any other devices that are using the same network.</li>
107
- </ol>
108
- <p>If none of these solutions work, you can contact the manufacturer of your network card or adapter or the developer of the driver for further assistance.</p>
109
- <h2>Conclusion</h2>
110
- <p>The Gts Fc 518ls Lan Driver is a software program that controls the network card or adapter on your computer. It allows you to access the internet and enjoy various online activities, such as gaming, streaming, browsing, and more. It also provides high-speed and stable network connection, compatibility with various devices and operating systems, and easy update and troubleshoot features.</p>
111
- <p>However, the Gts Fc 518ls Lan Driver also has some drawbacks, such as potential compatibility issues with some hardware or software, possible driver conflicts or errors, and common problems that might affect its performance or functionality. To avoid these drawbacks, you should always check the compatibility list of the driver, backup your driver files, uninstall any previous versions of the driver, and troubleshoot any driver problems.</p>
112
- <p>We hope this article has helped you understand what the Gts Fc 518ls Lan Driver is and how to use it. If you have any questions or feedback, please feel free to leave a comment below.</p>
113
- <h2>FAQs</h2>
114
- <h3>What is a LAN driver?</h3>
115
- <p>A LAN driver is a software program that controls the network card or adapter on your computer. It allows your computer to communicate with other network devices, such as routers, modems, switches, hubs, etc. It also enables your computer to access the internet and perform various online activities.</p>
116
- <h3>How do I find out what LAN driver I have?</h3>
117
- <p>To find out what LAN driver you have, you can use Device Manager on your Windows computer. To do this, follow these steps:</p>
118
- <ol>
119
- <li>Press Windows + X keys on your keyboard and select Device Manager from the menu.</li>
120
- <li>Expand the Network adapters category and find your network card or adapter.</li>
121
- <li>Right-click on your network card or adapter and select Properties.</li>
122
- <li>Click on the Driver tab and check the Driver Provider, Driver Date, Driver Version, and Digital Signer information.</li>
123
- </ol>
124
- <h3>How do I update my LAN driver?</h3>
125
- <p>To update your LAN driver, you can use Windows Update or Device Manager on your Windows computer. To do this, follow these steps:</p>
126
- <ol>
127
- <li>Press Windows + I keys on your keyboard and select Update & Security from the Settings window.</li>
128
- <li>Click on Check for updates and wait for Windows to scan for available updates.</li>
129
- <li>If there is an update for your LAN driver, click on Download and install and follow the instructions on the screen.</li>
130
- <li>If there is no update for your LAN driver, go back to Device Manager and right-click on your network card or adapter.</li>
131
- <li>Select Update driver and choose Search automatically for updated driver software.</li>
132
- <li>Wait for Windows to search for and install the latest driver for your network card or adapter.</li>
133
- </ol>
134
- <h3>How do I uninstall my LAN driver?</h3>
135
- <p>To uninstall your LAN driver, you can use Device Manager on your Windows computer. To do this, follow these steps:</p>
136
- <ol>
137
- <li>Press Windows + X keys on your keyboard and select Device Manager from the menu.</li>
138
- <li>Expand the Network adapters category and find your network card or adapter.</li>
139
- <li>Right-click on your network card or adapter and select Uninstall device.</li>
140
- <li>Check the box that says Delete the driver software for this device and click on Uninstall.</li>
141
- <li>Restart your computer after the uninstallation is complete.</li>
142
- </ol>
143
- <h3>Where can I download Gts Fc 518ls Lan Driver?</h3>
144
- <p>You can download Gts Fc 518ls Lan Driver from the official website of the manufacturer of your network card or adapter. For example, if you have a Realtek network card or adapter, you can go to <a href="https://www.realtek.com/en/">https://www.realtek.com/en/</a>. You can also use a third-party website that provides drivers for various devices and operating systems. However, you should be careful when downloading drivers from unknown sources as they might contain malware or viruses that can harm your computer or network devices.</p>
145
- </p>
spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/60 Segundos Um jogo de escolhas difceis e consequncias hilrias.md DELETED
@@ -1,128 +0,0 @@
1
-
2
- <h1>60 Segundos Jogo APK Download: A Comedy Atomic Adventure of Scavenge and Survival</h1>
3
- <p>Have you ever wondered what it would be like to survive a nuclear apocalypse? Well, now you can find out with 60 Segundos Jogo, a hilarious and thrilling game that puts you in the shoes of Ted, a responsible citizen and a family man who has only 60 seconds to gather supplies and rescue his loved ones before the nuke hits. Sounds easy, right? Think again. Everything will be against you - time, your very own furniture, the house that's different every time you play, and the fundamental question - what to take with you and who to leave behind?</p>
4
- <p>In this article, we will tell you everything you need to know about 60 Segundos Jogo, including what it is, how to download and install it on your Android device, how to play it, why you should play it, and some tips and tricks to help you survive. So, buckle up and get ready for an atomic adventure of scavenge and survival!</p>
5
- <h2>60 segundos jogo apk download</h2><br /><p><b><b>Download File</b> &#10026;&#10026;&#10026; <a href="https://urlin.us/2uSUSy">https://urlin.us/2uSUSy</a></b></p><br /><br />
6
- <h2>What is 60 Segundos Jogo?</h2>
7
- <h3>A brief introduction to the game and its features</h3>
8
- <p>60 Segundos Jogo is a comedy atomic adventure game developed by Robot Gentleman and Sixty Seconds Entertainment. It is based on the original PC game 60 Seconds!, which was released in 2015. The game has two main phases: scavenge and survival. In the scavenge phase, you have 60 seconds to run around your house and collect as many items as you can, such as food, water, weapons, tools, radio, etc. You also have to decide which family members to save: your wife Dolores, your daughter Mary Jane, your son Timmy, or your dog Sharak. You can only carry a limited amount of items in your hands or in a suitcase, so you have to be quick and smart.</p>
9
- <p>In the survival phase, you have to stay alive in your fallout shelter with whatever you scavenged and whoever you saved. You have to ration food and water, make best use of your supplies, face difficult choices, and even venture into the wasteland. Every day will surprise you with unexpected events that will test your sanity and morality. You will encounter mutant cockroaches, bandits, raiders, soldiers, traders, visitors, diseases, accidents, etc. You will also have to deal with the consequences of your actions and decisions. Will you survive? Or will you succumb to starvation, radiation, madness, or despair?</p>
10
- <p>The game features:</p>
11
- <ul>
12
- <li>A procedurally generated house that changes every time you play</li>
13
- <li>A variety of items that can help or hinder your survival</li>
14
- <li>A dynamic family that reacts differently depending on their personality and situation</li>
15
- <li>A rich story with multiple endings that depend on your choices</li>
16
- <li>A dark comedy tone that balances humor and horror</li>
17
- <li>A retro-style graphics that evoke the Cold War era</li>
18
- <li>A catchy soundtrack that sets the mood</li>
19
- <li>A simple and intuitive control scheme that suits mobile devices</li>
20
- <li>A free APK file that lets you play the game without any restrictions</li>
21
- </ul>
22
- <h3>How to download and install the APK file on your Android device</h3>
23
- <p>If you want to play 60 Segundos Jogo on your Android device, you will need to download and install the APK file from a reliable source. An APK file is an Android Package Kit that contains all the files and data needed to run an app on an Android device. It is similar to an EXE file on a Windows PC. However, you cannot install an APK file directly from the Google Play Store, as it is not an official app. You will need to enable the option to install apps from unknown sources in your device settings. Here are the steps to do so:</p>
- <ol>
- <li>Go to your device settings and look for the security or privacy option.</li>
- <li>Find the option that says "Unknown sources" or "Install unknown apps" and toggle it on.</li>
- <li>Confirm your choice by tapping OK or Allow.</li>
- <li>Download the APK file from a trusted website, such as [APKPure] or [APKMirror].</li>
- <li>Once the download is complete, open the file and tap Install.</li>
- <li>Wait for the installation process to finish and then tap Open to launch the game.</li>
- </ol>
- <p>Congratulations, you have successfully installed 60 Segundos Jogo on your Android device!</p>
- <h3>How to play the game and what to expect</h3>
24
- <p>Playing 60 Segundos Jogo is easy and fun, but also challenging and unpredictable. You can choose between two modes: Apocalypse and Survival. In Apocalypse mode, you have to complete both the scavenge and survival phases. In Survival mode, you can skip the scavenge phase and start with a random set of items and family members in the shelter.</p>
25
- <p>In the scavenge phase, you have to use the joystick on the left side of the screen to move around your house and tap the items you want to collect. You can see your inventory on the right side of the screen, which shows how many items you can carry in your hands or in a suitcase. You can also see a timer on the top of the screen, which shows how much time you have left before the nuke hits. You have to be fast and strategic, as you cannot go back once you leave your house.</p>
26
- <p>In the survival phase, you have to use the buttons on the bottom of the screen to interact with your shelter and your family. You can see your family portraits on the left side of the screen, which show their health, hunger, thirst, and mental status. You can also see your supplies on the right side of the screen, which show how much food, water, soup cans, medkits, etc. you have left. You can also access your radio, map, journal, handbook, etc. by tapping on their icons on the top of the screen.</p>
27
- <p>60 segundos atomic adventure apk download<br />
28
- 60 segundos game android download<br />
29
- 60 segundos jogo de sobrevivência apk<br />
30
- 60 seconds apk free download<br />
31
- 60 segundos jogo nuclear apk<br />
32
- 60 segundos jogo para android<br />
33
- 60 seconds game apk mod<br />
34
- 60 segundos jogo de aventura apk<br />
35
- 60 segundos jogo gratis download<br />
36
- 60 seconds survival game apk<br />
37
- 60 segundos jogo de comédia apk<br />
38
- 60 seconds atomic adventure android<br />
39
- 60 segundos jogo de escolhas apk<br />
40
- 60 seconds game free download<br />
41
- 60 segundos jogo de simulação apk<br />
42
- 60 seconds apk full version<br />
43
- 60 segundos jogo de ação apk<br />
44
- 60 seconds game android apk<br />
45
- 60 segundos jogo de desafio apk<br />
46
- 60 seconds atomic adventure mod<br />
47
- 60 segundos jogo de estratégia apk<br />
48
- 60 seconds game download pc<br />
49
- 60 segundos jogo de caos apk<br />
50
- 60 seconds atomic adventure free<br />
51
- 60 segundos jogo de humor apk<br />
52
- 60 seconds game online play<br />
53
- 60 segundos jogo de família apk<br />
54
- 60 seconds atomic adventure guide<br />
55
- 60 segundos jogo de bunker apk<br />
56
- 60 seconds game tips and tricks<br />
57
- 60 segundos jogo de suprimentos apk<br />
58
- 60 seconds atomic adventure wiki<br />
59
- 60 segundos jogo de baratas apk<br />
60
- 60 seconds game endings list<br />
61
- 60 segundos jogo de perigos apk<br />
62
- 60 seconds atomic adventure cheats<br />
63
- 60 segundos jogo de cenários apk<br />
64
- 60 seconds game review and rating<br />
65
- 60 segundos jogo de mutantes apk<br />
66
- 60 seconds atomic adventure update<br />
67
- 60 segundos jogo de sorte apk<br />
68
- 60 seconds game walkthrough and hints<br />
69
- 60 segundos jogo de história apk<br />
70
- 60 seconds atomic adventure trailer<br />
71
- 60 segundos jogo de diversão apk</p>
72
- <p>Every day, you will receive a report that summarizes what happened during the day and what you need to do for the next day. You will also face random events that will require you to make choices or take actions. Some events will be beneficial, such as finding new supplies or meeting friendly survivors. Some events will be harmful, such as getting sick or attacked by enemies. Some events will be neutral, such as receiving broadcasts or messages from outside.</p>
73
- <p>Your goal is to survive as long as possible until you find a way to escape or get rescued. However, this is easier said than done, as there are many factors that can affect your survival chances. You have to balance your needs and wants, your morals and ethics, your risks and rewards, and your hopes and fears. You have to be prepared for anything and everything that can happen in a post-apocalyptic world.</p>
74
- <h2>Why should you play 60 Segundos Jogo?</h2>
75
- <h3>The game is fun, challenging, and unpredictable</h3>
76
- <p>One of the main reasons why you should play 60 Segundos Jogo is because it is a fun game that will keep you entertained and engaged for hours. The game is not just a simple survival simulator, but also a comedy adventure that will make you laugh and cry at the same time. The game is full of humor and satire that poke fun at the absurdity and irony of surviving a nuclear war.</p>
77
- <p>The game is also challenging and unpredictable, as it will test your skills and luck in every situation. The game is different every time you play it, as it has a procedurally generated house that changes its layout and items every time. The game also has a dynamic family that reacts differently depending on their personality and situation. The game also has a rich story with multiple endings that depend on your choices.</p>
78
- <p>The game will never bore you or frustrate you, as it will always surprise you with new events and scenarios that will keep you on your toes. The game will make you feel like you are living in a real nuclear fallout shelter with real people and real problems.</p>
- <h3>The game has a dark humor and a unique art style</h3>
79
- <p>Another reason why you should play 60 Segundos Jogo is because it has a dark humor and a unique art style that set it apart from other survival games. The game does not take itself too seriously, but rather makes fun of the grim and bleak reality of living in a nuclear wasteland. The game has a lot of jokes, references, and Easter eggs that will make you chuckle and smile. The game also has a lot of sarcasm, irony, and absurdity that will make you question your sanity and morality.</p>
80
- <p>The game also has a unique art style that matches its tone and theme. The game has a retro-style graphics that evoke the Cold War era and the 1950s Americana. The game has a cartoonish and colorful look that contrasts with the dark and gloomy atmosphere. The game also has a lot of details and animations that make the game more lively and immersive.</p>
81
- <h3>The game offers different modes, endings, and scenarios</h3>
82
- <p>A final reason why you should play 60 Segundos Jogo is because it offers different modes, endings, and scenarios that make the game more replayable and enjoyable. The game has two modes: Apocalypse and Survival. In Apocalypse mode, you have to complete both the scavenge and survival phases. In Survival mode, you can skip the scavenge phase and start with a random set of items and family members in the shelter.</p>
83
- <p>The game also has multiple endings that depend on your choices and actions. You can either escape or get rescued by different factions, such as the military, the government, the rebels, the mutants, etc. You can also die or go insane in various ways, such as starvation, radiation, disease, suicide, murder, etc. You can also unlock different achievements and trophies that reward your performance and behavior.</p>
84
- <p>The game also has different scenarios that change the difficulty and the story of the game. You can choose between normal, hard, or extreme scenarios that affect the amount of items, family members, events, etc. You can also choose between different characters, such as Ted's brother Bob or his cousin Joe, who have different personalities and backgrounds. You can also choose between different themes, such as winter or Halloween, that add new elements and challenges to the game.</p>
85
- <h2>Tips and tricks for playing 60 Segundos Jogo</h2>
86
- <h3>How to scavenge effectively and what to prioritize</h3>
87
- <p>One of the most important aspects of playing 60 Segundos Jogo is scavenging effectively and prioritizing what to take with you to the shelter. Here are some tips and tricks to help you with this phase:</p>
88
- <ul>
89
- <li>Before you start scavenging, take a quick look around your house and plan your route. Try to memorize where the items and family members are located.</li>
90
- <li>Start with the most essential items, such as food, water, radio, map, suitcase, etc. These items will help you survive longer and communicate with the outside world.</li>
91
- <li>Next, look for useful items, such as weapons, tools, medkits, gas mask, etc. These items will help you deal with threats and problems in the shelter or in the wasteland.</li>
92
- <li>Finally, look for optional items, such as books, games, toys, etc. These items will help you boost your morale and sanity in the shelter.</li>
93
- <li>Don't forget to save your family members as well. They will provide you with company and support in the shelter. However, be careful not to save too many people or too few people. Saving too many people will increase your resource consumption and conflict potential. Saving too few people will decrease your survival chances and morale level.</li>
94
- <li>Don't waste time on useless items or actions. For example, don't bother with clothes or furniture or opening doors or windows. They will only slow you down and take up space in your inventory.</li>
95
- <li>Don't be greedy or indecisive. You only have 60 seconds to scavenge everything you need. If you try to take everything or change your mind too often, you will miss the opportunity to grab the most important items or family members. Be smart and decisive, and stick to your plan.</li>
96
- </ul>
97
- <h3>How to manage your resources and make decisions in the shelter</h3>
98
- <p>Another important aspect of playing 60 Segundos Jogo is managing your resources and making decisions in the shelter. Here are some tips and tricks to help you with this phase:</p>
99
- <ul>
100
- <li>Feed and water your family members regularly, but not too often. You have to ration your food and water supplies, as they are limited and hard to replenish. A good rule of thumb is to feed and water your family members every three or four days, unless they are sick or injured.</li>
101
- <li>Use your supplies wisely, but not too sparingly. You have to make best use of your supplies, such as medkits, weapons, tools, etc., as they are valuable and scarce. However, don't be afraid to use them when necessary, as they can save your life or improve your situation.</li>
102
- <li>Listen to the radio frequently, but not too obsessively. You have to listen to the radio regularly, as it will provide you with useful information, such as weather forecasts, rescue signals, survival tips, etc. However, don't listen to the radio too much, as it can also broadcast propaganda, lies, or disturbing noises that can affect your sanity.</li>
103
- <li>Send someone to explore the wasteland occasionally, but not too recklessly. You have to send someone to explore the wasteland from time to time, as it will give you a chance to find new supplies, allies, or opportunities. However, don't send someone too often or without proper equipment, as it can also expose them to dangers, enemies, or traps that can harm them or kill them.</li>
104
- <li>Make choices carefully, but not too cautiously. You have to make choices frequently, as they will shape your story and determine your fate. However, don't make choices too hastily or too timidly, as they can also backfire or limit your options.</li>
105
- </ul>
106
- <h3>How to deal with random events and survive the wasteland</h3>
107
- <p>The final important aspect of playing 60 Segundos Jogo is dealing with random events and surviving the wasteland. Here are some tips and tricks to help you with this phase:</p>
108
- <ul>
109
- <li>Expect the unexpected and be ready for anything. You never know what will happen in the wasteland or in the shelter. You might encounter friendly survivors who will offer you help or trade. You might encounter hostile raiders who will demand your supplies or attack you. You might encounter strange creatures that will scare you or harm you. You might encounter mysterious phenomena that will confuse you or enlighten you.</li>
110
- <li>Be flexible and adaptable. You have to adjust your strategy and behavior according to the situation and the outcome. You might have to change your plans or priorities depending on what you find or what happens. You might have to cooperate or compete with other survivors depending on their attitude or agenda. You might have to fight or flee from threats depending on their strength or weakness.</li>
111
- <li>Be creative and resourceful. You have to use your imagination and ingenuity to overcome challenges and problems. You might have to improvise or combine items to create new solutions or possibilities. You might have to use humor or logic to persuade or trick others. You might have to use courage or cunning to face or avoid dangers.</li>
112
- </ul>
113
- <h2>Conclusion</h2>
114
- <p>60 Segundos Jogo is a comedy atomic adventure game that will make you laugh and cry at the same time. It is a game that will challenge your skills and luck in every situation. It is a game that will surprise you with new events and scenarios every time you play it. It is a game that will test your sanity and morality in a post-apocalyptic world.</p>
115
- <p>If you are looking for a fun, thrilling, and unpredictable game that will keep you entertained and engaged for hours, then 60 Segundos Jogo is the game for you. Download the APK file now and start your atomic adventure of scavenge and survival!</p>
116
- <h2>Frequently Asked Questions</h2>
117
- <h3>Q: Is 60 Segundos Jogo free?</h3>
118
- <p>A: Yes, 60 Segundos Jogo is free to download and play on your Android device. However, you will need to download the APK file from a reliable source instead of the Google Play Store.</p>
119
- <h3>Q: Is 60 Segundos Jogo safe?</h3>
120
- <p>A: Yes, 60 Segundos Jogo is safe as long as you download the APK file from a trusted website such as [APKPure] or [APKMirror]. However, you should always scan the file for viruses before installing it on your device.</p>
121
- <h3>Q: Is 60 Segundos Jogo available in other languages?</h3>
122
- <p>A: Yes, 60 Segundos Jogo is available in English, Portuguese, Spanish, French, German, Italian, Russian, Polish, and Turkish. You can change the language in the game settings.</p>
123
- <h3>Q: Is 60 Segundos Jogo based on a true story?</h3>
124
- <p>A: No, 60 Segundos Jogo is not based on a true story. It is a fictional game that parodies the Cold War era and the nuclear paranoia. However, some of the events and characters in the game are inspired by real historical or cultural references.</p>
125
- <h3>Q: Is 60 Segundos Jogo suitable for children?</h3>
126
- <p>A: No, 60 Segundos Jogo is not suitable for children. It is a game that contains violence, gore, profanity, drugs, alcohol, and mature themes. It is a game that deals with dark and sensitive topics such as death, suicide, murder, cannibalism, etc. It is a game that requires parental guidance and discretion.</p>
spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Clash Royale Versi Terbaru dan Bangun Klan Anda.md DELETED
@@ -1,90 +0,0 @@
1
- <br />
2
- <h1>Download Clash Royale Versi Terbaru: A Guide for Android Users</h1>
3
- <p>If you are looking for a fun, addictive, and competitive game to play on your Android device, you should definitely try Clash Royale. This game is one of the most popular and successful games from Supercell, the creators of Clash of Clans. In this article, we will tell you what Clash Royale is, why you should download the latest version, and how to do it easily and safely.</p>
4
- <h2>download clash royale versi terbaru</h2><br /><p><b><b>DOWNLOAD</b> &#9658;&#9658;&#9658;&#9658;&#9658; <a href="https://urlin.us/2uSYDK">https://urlin.us/2uSYDK</a></b></p><br /><br />
5
- <h2>What is Clash Royale?</h2>
6
- <p>Clash Royale is a real-time multiplayer game that combines elements of card games, tower defense, and strategy. You can play with or against other players from around the world in various modes and arenas. You can also join or create a clan to share cards, chat, and participate in clan wars.</p>
7
- <h3>A real-time multiplayer game with your favorite Clash characters</h3>
8
- <p>In Clash Royale, you can choose from dozens of cards featuring the Clash of Clans troops, spells, and defenses you know and love, as well as new characters like the Royales: Princes, Knights, Baby Dragons, and more. Each card has its own strengths, weaknesses, and abilities. You can build your own battle deck and use it to fight against other players in fast-paced matches that last for three minutes or less.</p>
9
- <h3>A strategic and fun card battle game with various modes and features</h3>
10
- <p>Clash Royale is not just about spamming cards on the battlefield. You need to think carefully about when, where, and how to use them. You also need to manage your elixir, which is the resource that allows you to play cards. You can win by destroying the enemy's king tower or by having more crowns than them when the time runs out. You can also use spells, buildings, and traps to defend your towers and counter your opponent's moves.</p>
11
- <p>Clash Royale offers various modes and features to keep you entertained and challenged. You can play in different arenas that have different themes and rules. You can also join special events like seasonal challenges, global tournaments, clan wars, draft battles, 2v2 battles, and more. You can earn chests, gold, gems, cards, trophies, crowns, and other rewards by playing and winning matches.</p>
12
- <p>download clash royale versi terbaru apk<br />
13
- download clash royale versi terbaru mod<br />
14
- download clash royale versi terbaru 2023<br />
15
- download clash royale versi terbaru android<br />
16
- download clash royale versi terbaru gratis<br />
17
- download clash royale versi terbaru offline<br />
18
- download clash royale versi terbaru update<br />
19
- download clash royale versi terbaru hack<br />
20
- download clash royale versi terbaru pc<br />
21
- download clash royale versi terbaru uptodown<br />
22
- download clash royale versi terbaru unlimited gems<br />
23
- download clash royale versi terbaru tanpa obb<br />
24
- download clash royale versi terbaru no root<br />
25
- download clash royale versi terbaru full version<br />
26
- download clash royale versi terbaru dari play store<br />
27
- download clash royale versi terbaru dengan mudah<br />
28
- download clash royale versi terbaru untuk laptop<br />
29
- download clash royale versi terbaru anti banned<br />
30
- download clash royale versi terbaru link mediafire<br />
31
- download clash royale versi terbaru cara instal<br />
32
- download clash royale versi terbaru support semua device<br />
33
- download clash royale versi terbaru fitur baru<br />
34
- download clash royale versi terbaru server private<br />
35
- download clash royale versi terbaru google drive<br />
36
- download clash royale versi terbaru tanpa iklan<br />
37
- download clash royale versi terbaru ukuran kecil<br />
38
- download clash royale versi terbaru cheat engine<br />
39
- download clash royale versi terbaru bahasa indonesia<br />
40
- download clash royale versi terbaru mega mod<br />
41
- download clash royale versi terbaru original<br />
42
- download clash royale versi terbaru ios<br />
43
- download clash royale versi terbaru kaskus<br />
44
- download clash royale versi terbaru revdl<br />
45
- download clash royale versi terbaru rexdl<br />
46
- download clash royale versi terbaru zip file<br />
47
- download clash royale versi terbaru tanpa verifikasi<br />
48
- download clash royale versi terbaru work 100%<br />
49
- download clash royale versi terbaru high compress<br />
50
- download clash royale versi terbaru youtube video<br />
51
- download clash royale versi terbaru blogspot.com</p>
52
- <h3>A global phenomenon with millions of players and fans</h3>
53
- <p>Clash Royale is not just a game. It is a community of millions of players and fans who love the game and its characters. You can interact with other players through chat, emotes, clans, social media, forums, blogs, podcasts, videos, streams, fan art, fan fiction, merchandise, and more. You can also watch professional players compete in esports tournaments like the Clash Royale League and the World Finals.</p>
54
- <h2>Why download clash royale versi terbaru?</h2>
55
- <p>If you already have Clash Royale on your device, you might wonder why you need to download the latest version. The answer is simple: because it will make your gaming experience better. Here are some reasons why you should download clash royale versi terbaru:</p>
56
- <h3>Enjoy the latest updates and improvements</h3>
57
- <p>Supercell is constantly working on improving Clash Royale by fixing bugs, balancing cards, optimizing performance, enhancing graphics, adding new features, and more. By downloading clash royale versi terbaru, you can enjoy the latest updates and improvements that will make your game run smoother and faster.</p>
58
- <h3>Access new cards, arenas, events, and rewards</h3>
59
- <p>Supercell is also releasing new cards, arenas, events, and rewards regularly to keep the game fresh and exciting. By downloading clash royale versi terbaru, you can access the newest additions to the game and try them out for yourself. You can also collect more cards, upgrade them, and unlock new strategies and combinations.</p>
60
- <h3>Join the seasonal challenges and tournaments</h3>
61
- <p>Every month, Clash Royale has a new season with a different theme and a unique set of challenges and tournaments. By downloading clash royale versi terbaru, you can join the seasonal fun and compete for exclusive rewards and glory. You can also enjoy the seasonal changes in the game's appearance and atmosphere.</p>
62
- <h2>How to download clash royale versi terbaru?</h2>
63
- <p>Now that you know why you should download clash royale versi terbaru, you might wonder how to do it. Don't worry, it's very easy and simple. Just follow these steps:</p>
64
- <h3>Check the compatibility of your device and internet connection</h3>
65
- <p>Before you download clash royale versi terbaru, you need to make sure that your device and internet connection meet the minimum requirements for the game. According to the official website, you need an Android device with version 4.1 or higher and at least 100 MB of free space. You also need a stable internet connection, preferably Wi-Fi or 4G.</p>
66
- <h3>Choose a reliable and safe source to download the game</h3>
67
- <p>The best way to download clash royale versi terbaru is to use the official Google Play Store app on your device. This way, you can ensure that you are getting the authentic and updated version of the game from a trusted source. You can also avoid any malware or viruses that might harm your device or compromise your personal information.</p>
68
- <p>To download clash royale versi terbaru from the Google Play Store, you need to open the app on your device and search for "Clash Royale" in the search bar. Then, you need to tap on the game icon and then on the "Install" button. The game will start downloading automatically and will be installed on your device once it's done.</p>
69
- <h3>Follow the instructions to install and launch the game</h3>
70
- <p>After you download clash royale versi terbaru from the Google Play Store, you need to follow the instructions on your screen to install and launch the game. You might need to accept some permissions and terms of service before you can start playing. You might also need to update some data or settings depending on your device and region.</p>
71
- <p>Once you launch the game, you can either create a new account or log in with your existing one. You can also link your account to your Google Play Games, Facebook, or Supercell ID accounts for backup and synchronization purposes. Then, you can enjoy playing clash royale versi terbaru on your Android device.</p>
72
- <h2>Conclusion</h2>
73
- <p>Clash Royale is a fun, addictive, and competitive game that you can play on your Android device. It is a real-time multiplayer game that combines elements of card games, tower defense, and strategy. It is also a global phenomenon with millions of players and fans who love the game and its characters.</p>
74
- <p>If you want to have the best gaming experience possible, you should download clash royale versi terbaru. This way, you can enjoy the latest updates and improvements, access new cards, arenas, events, and rewards, and join the seasonal challenges and tournaments. You can also interact with other players through chat, emotes, clans, social media, forums, blogs, podcasts, videos, streams, fan art, fan fiction, merchandise, esports tournaments, and more.</p>
75
- <p>To download clash royale versi terbaru, you just need to check the compatibility of your device and internet connection, choose a reliable and safe source to download the game, and follow the instructions to install and launch the game. It's very easy and simple.</p>
76
- <p>So what are you waiting for? Download clash royale versi terbaru today and join the Clash Royale community!</p>
77
- <h2>FAQs</h2>
78
- <p>Here are some frequently asked questions about downloading the latest version of Clash Royale:</p>
79
- <h4>Is Clash Royale free to play?</h4>
80
- <p>Yes, Clash Royale is free to play. You can download it from the Google Play Store without paying anything. However, the game also offers some optional in-app purchases that can enhance your gaming experience. You can buy gems, gold, pass royale, special offers, and more with real money. These purchases are not required to play or progress in the game.</p>
81
- <h4>Is Clash Royale safe for kids?</h4>
82
- <p>Clash Royale is rated 7+ by PEGI and 9+ by ESRB, which means that it is suitable for older children and teens. However, the game does contain some mild violence, cartoon blood, and online interactions that may not be appropriate for younger or sensitive kids. Parents should monitor their kids' gameplay and use the parental control features to limit or disable the chat, emotes, in-app purchases, and other options that might expose their kids to inappropriate content or behavior.</p>
83
- <h4>How can I play Clash Royale on PC?</h4>
84
- <p>If you want to play Clash Royale on your PC, you need to use an Android emulator. An Android emulator is software that allows you to run Android apps on your PC. There are many Android emulators available online, but some of the most popular ones are BlueStacks, NoxPlayer, and LDPlayer. To play Clash Royale on PC using an Android emulator, you need to download and install the emulator on your PC, then download and install Clash Royale from the Google Play Store within the emulator. Then, you can launch the game and play it on your PC with your keyboard and mouse.</p>
85
- <h4>How can I transfer my Clash Royale account to another device?</h4>
86
- <p>If you want to transfer your Clash Royale account to another device, you need to link your account to a Supercell ID. A Supercell ID is a free service that allows you to save your game progress and access it from any device. To create a Supercell ID, you need to open the game settings and tap on "Supercell ID". Then, you need to enter your email address and confirm it. You can also choose a name and a profile picture for your Supercell ID. Once you have created a Supercell ID, you can use it to log in to your account on any device. You can also switch between multiple accounts using the same Supercell ID.</p>
87
- <h4>How can I contact the Clash Royale support team?</h4>
88
- <p>If you have any questions, issues, or feedback about Clash Royale, you can contact the support team through the game. To do so, you need to open the game settings and tap on "Help and Support". Then, you can browse through the FAQs or tap on "Contact Us" to send a message to the support team. You can also contact the support team through their official website, Twitter, Facebook, or Reddit. The support team is usually very responsive and helpful.</p>
89
- <br />
90
- <br />
spaces/1phancelerku/anime-remove-background/Atrvete a Descargar Geometry Dash Meltdown Hackeado APK y Vive una Aventura Increble.md DELETED
@@ -1,126 +0,0 @@
1
- <br />
2
- <h1>Download Geometry Dash Meltdown Hacked APK: How to Play and the Benefits</h1>
3
- <p>Do you like platformers with rhythm and action? Do you want to enjoy a fun, challenging, and addictive game? Do you want to download the hacked Geometry Dash Meltdown APK to get every level, icon, and color unlocked? If your answer is yes, this article is for you.</p>
4
- <p>In this article, we will explain what Geometry Dash Meltdown is, how to download the hacked APK version, how to play it, and what benefits it offers. We will also answer some frequently asked questions about the game. So keep reading and get ready for a new Geometry Dash adventure.</p>
5
- <h2>descargar geometry dash meltdown hackeado apk</h2><br /><p><b><b>Download File</b> &#9658; <a href="https://jinyurl.com/2uNR9A">https://jinyurl.com/2uNR9A</a></b></p><br /><br />
6
- <h2>What Is Geometry Dash Meltdown?</h2>
7
- <p>Geometry Dash Meltdown is a free spin-off app of Geometry Dash, developed and published by RobTop Games. It was released on December 19, 2015 for iOS and Android. It features three exclusive levels, with a limited set of achievements, icons, and collectibles. You can also transfer some unlockable data to the full version through user accounts.</p>
8
- <h3>A rhythm-and-action platformer</h3>
9
- <p>Geometry Dash Meltdown is a 2D platformer where you jump to the beat of music by the Canadian artist F-777 and try to get through levels full of obstacles. You will have to flex your clicking finger as you jump, fly, and flip through dark caverns and spiky obstacles.</p>
10
- <h3>Three unique levels with music by F-777</h3>
11
- <p>The game features three unique levels with electronic music by F-777. The levels are:</p>
12
- <table>
13
- <tr><th>Level</th><th>Difficulty</th><th>Stars</th><th>Music</th></tr>
14
- <tr><td>The Seven Seas</td><td>Easy</td><td>1</td><td>The Seven Seas</td></tr>
15
- <tr><td>Viking Arena</td><td>Normal</td><td>2</td><td>Viking Arena</td></tr>
16
- <tr><td>Airborne Robots</td><td>Hard</td><td>3</td><td>Airborne Robots</td></tr>
17
- </table>
18
- <p>Each level has a unique design and a different atmosphere. There are also secret coins you can collect to unlock more icons and colors.</p>
19
- <h3>Customize your character with icons and colors</h3>
20
- <p>You can customize your character with different icons and colors, which you unlock by completing levels, collecting the secret coins, or transferring data from the full version. There are more than 40 icons and 10 colors to choose from, and you can combine them however you like to create your own style.</p>
21
- <h2>How to Download the Hacked Geometry Dash Meltdown APK</h2> <p>If you want to download the hacked Geometry Dash Meltdown APK, you should know it has several benefits. For example, you get every level, icon, and color unlocked from the start, without having to complete the challenges or collect the coins. You can also play without ads or in-app purchases.</p>
22
- <h3>The benefits of the hacked version</h3>
23
- <p>The hacked version of Geometry Dash Meltdown offers the following advantages:</p>
24
- <ul>
25
- <li>All levels unlocked: you can play all three levels without restrictions or limits.</li>
26
- <li>All icons and colors unlocked: you can customize your character with more than 40 icons and 10 different colors.</li>
27
- <li>No ads or in-app purchases: you can enjoy the game without interruptions or extra costs.</li>
28
- </ul>
29
- <h3>The steps to install the APK file</h3>
30
- <p>To install the hacked Geometry Dash Meltdown APK file, follow these steps:</p>
31
71
- <ol>
72
- <li>Download the APK file from a trustworthy website, such as APKdone.</li>
73
- <li>Open the APK file and tap Install. If it asks for permissions, grant the ones that are needed.</li>
74
- <li>Wait for the installation to finish, then open the game.</li>
75
- <li>Enjoy the hacked Geometry Dash Meltdown with every level, icon, and color unlocked.</li>
76
- </ol>
77
- <h3>System requirements and compatibility</h3>
78
- <p>To play the hacked Geometry Dash Meltdown APK, you need an Android device with the following specs:</p>
79
- <ul>
80
- <li>Android version: 4.0 or higher.</li>
81
- <li>Free space: at least 50 MB.</li>
82
- <li>Internet connection: not required.</li>
83
- </ul>
84
- <p>The game is compatible with most Android devices, but there may be some performance or stability issues on certain models. If you run into a problem, you can contact the developer or look for a solution online.</p>
85
- <h2>How to Play Geometry Dash Meltdown</h2> <p>Now that you know how to download the hacked Geometry Dash Meltdown APK, it is time to learn how to play. The game is very simple to understand but very hard to master. You will need plenty of patience, focus, and skill to beat the levels.</p>
86
- <h3>Use your finger or mouse to jump and dodge obstacles</h3>
87
- <p>The game is controlled with a single tap on the screen or a mouse click. Every time you tap or click, your character jumps. If you hold your finger or the button down, your character jumps higher, or flies if it is in a vehicle. You have to use your jumps to dodge the obstacles in your path, such as spikes, blocks, portals, and so on.</p>
88
- <p>The game is built around the rhythm of the music, so you have to sync your jumps with the sound. If you hit an obstacle, you lose and have to start over from the beginning, or from a checkpoint if you have activated one. The game has no pause or save option, so you need to stay alert and avoid distractions.</p>
89
- <h3>Take advantage of the game modes and special features</h3>
90
- <p>The game has several game modes and special features you can use to improve your experience. Some of them are:</p>
91
- <ul>
92
- <li>Practice mode: you can turn this mode on to practice levels without losing. Every time you die, you respawn at a checkpoint you can place wherever you want, which lets you memorize the patterns and traps of each level.</li>
93
- <li>Vehicles: some levels feature different vehicles, such as the rocket, the UFO, the spaceship, and more. Each vehicle has its own way of moving and jumping, so you have to adapt to each one.</li>
94
- <li>Portals: there are portals that change your character's gravity, speed, size, or direction. You have to be ready for these changes and react quickly.</li>
95
- <li>Secret coins: each level hides three secret coins you can collect to unlock more icons and colors. Some coins are hidden or require an alternate path to reach.</li>
96
- </ul>
97
- <h3>Beat the challenges and improve your skills</h3>
98
- <p>The game has several challenges and rewards you can complete and earn to improve your skills and show off your progress. Some of them are:</p>
99
- <ul>
100
- <li>Achievements: there are 21 achievements you can earn by completing levels, collecting the secret coins, unlocking icons and colors, and so on.</li>
101
- <li>Statistics: you can view your stats in the main menu, such as the number of jumps, attempts, deaths, completed levels, and more.</li>
102
- <li>Leaderboard: you can see your position on the global or local leaderboard based on your score in each level.</li>
103
- <li>Comments: you can leave comments on levels or read comments from other players.</li>
104
- </ul>
105
- <h2>Conclusion</h2> <p>Geometry Dash Meltdown is a rhythm-and-action platformer that will get you moving with its music, graphics, and difficulty. If you download the hacked APK version, you can enjoy every level, icon, and color unlocked, with no ads or in-app purchases. Just follow the steps explained above and get ready to jump to the beat of F-777.</p>
106
- <p>We hope this article has been useful and that you have fun with Geometry Dash Meltdown. If you have any questions or suggestions, you can leave us a comment or contact us by email. You can also share this article with friends and family who enjoy this kind of game. Thanks for reading!</p>
107
- <h2>Frequently Asked Questions</h2>
108
- <p>Below we answer some frequently asked questions about Geometry Dash Meltdown and its hacked APK version.</p>
109
- <h3>What is the difference between Geometry Dash Meltdown and Geometry Dash?</h3>
110
- <p>Geometry Dash Meltdown is a free spin-off app of Geometry Dash with three exclusive levels featuring music by F-777. Geometry Dash is the full version of the game, with more than 20 official levels and thousands of user-created levels featuring music by different artists. It also has more customization options, game modes, and special features.</p>
111
- <h3>Is it safe to download the hacked Geometry Dash Meltdown APK?</h3>
112
- <p>It depends on the source you download it from. Many websites offer hacked APK files, but some may contain viruses, malware, or deceptive ads. That is why we recommend downloading the APK file from a trustworthy website, such as APKdone, which has a good reputation and positive user reviews.</p>
113
- <h3>Can I play the hacked Geometry Dash Meltdown APK without an internet connection?</h3>
114
- <p>Yes, you can play the hacked Geometry Dash Meltdown APK offline. The game does not require an internet connection to run, only to access certain features such as the leaderboard or the comments. So you can play anywhere, anytime, without worrying about data usage or coverage.</p>
115
- <h3>Can I transfer my unlocked data to the full version?</h3>
116
- <p>Yes, you can transfer some unlocked data to the full version through user accounts. To do so, create a user account in Geometry Dash Meltdown and then log in with the same account in Geometry Dash. This lets you carry over some icons and colors unlocked in Geometry Dash Meltdown to Geometry Dash.</p>
117
- <h3>What other games similar to Geometry Dash Meltdown are there?</h3>
118
- <p>If you like games similar to Geometry Dash Meltdown, you can try titles such as:</p>
119
- <ul>
120
- <li>Geometry Dash World: another free spin-off app of Geometry Dash, with 10 exclusive levels featuring music by Dex Arson and Waterflame.</li>
121
- <li>Geometry Dash SubZero: another free spin-off app of Geometry Dash, with three exclusive levels featuring music by MDK, Bossfight, and Boom Kitty.</li>
122
- <li>The Impossible Game: a 2D platformer where you jump over square obstacles to the beat of the music.</li>
123
- <li>Bit.Trip Runner: a 2D platformer where you run, jump, and punch to the beat of retro music.</li>
124
- </ul></p>
125
- <br />
126
- <br />
spaces/1phancelerku/anime-remove-background/Download and Stream Greatest Love Of All the Latest Single by Prince Chike.md DELETED
@@ -1,109 +0,0 @@
1
- <br />
2
- <h1>Download Prince Chike Greatest Love of All</h1>
3
- <p>If you are looking for a soul-lifting worship song that expresses the unconditional love of God, you should download Prince Chike Greatest Love of All. This song is a powerful testimony of how God loves us first and how we can love Him back with all our hearts. In this article, we will tell you more about Prince Chike, his song Greatest Love of All, and how you can download it for free.</p>
4
- <h2>Who is Prince Chike?</h2>
5
- <p>Prince Chike is a Nigerian gospel singer, songwriter, and worship leader who has a passion for spreading the gospel through music. He started singing at a young age in his church choir and later joined a gospel group called The Glorious Voices. He has released several albums and singles, including Ekwueme, Covenant Keeping God, and Greatest Love of All.</p>
6
- <h2>download prince chike greatest love of all</h2><br /><p><b><b>Download Zip</b> &#10038;&#10038;&#10038; <a href="https://jinyurl.com/2uNUnD">https://jinyurl.com/2uNUnD</a></b></p><br /><br />
7
- <h3>Biography and background</h3>
8
- <p>Prince Chike was born in Abia State, Nigeria, to a Christian family. He grew up in a musical environment, as his father was a choir master and his mother was a singer. He learned to play the keyboard and the guitar from his father and developed his vocal skills from his mother. He studied music at the University of Nigeria, Nsukka, where he graduated with a bachelor's degree in music education. He is married to Princess Chike and they have three children.</p>
9
- <h3>Music style and influences</h3>
10
- <p>Prince Chike's music style is a blend of contemporary gospel, Afro-pop, and soul. He sings in English, Igbo, and other Nigerian languages. He draws inspiration from the Holy Spirit, the Bible, and his personal experiences. He also admires and respects other gospel artists such as Nathaniel Bassey, Sinach, Frank Edwards, Moses Bliss, and Mercy Chinwo.</p>
11
- <h2>What is Greatest Love of All?</h2>
12
- <p>Greatest Love of All is one of Prince Chike's most popular songs. It was released in 2021 as part of his album with the same title. It has received over 348K views on YouTube and over 129K streams on Boomplay. It is also available on other platforms such as PraiseZion, Spotify, Apple Music, and Deezer.</p>
13
61
- <h3>Song history and meaning</h3>
62
- <p>Greatest Love of All was written by Prince Chike as a tribute to God's love for him and for humanity. He said that he wrote the song after experiencing God's love in a personal way during a difficult time in his life. He said that he wanted to share his testimony with others and encourage them to trust God's love no matter what they are going through. He also said that he wanted to remind people that God loves them first before they love Him back.</p>
63
- <h3>Song lyrics and message</h3>
64
- <p>The song lyrics are based on the Bible verses John 15:13 ("Greater love has no one than this: to lay down one's life for one's friends.") and 1 John 4:19 ("We love because he first loved us."). The song's message is that God's love is the greatest love of all because He gave His only Son Jesus Christ to die for our sins so that we can have eternal life. The song also urges us to love God with all our hearts, souls, minds, and strength because He loved us first.</p>
65
- <h2>How to download Prince Chike Greatest Love of All?</h2>
66
- <p>If you want to download Prince Chike Greatest Love of All for free, you have several options. Here are some of them:</p>
67
- <h3>Download from YouTube</h3>
68
- <p>You can download the song from YouTube using a YouTube downloader tool such as Y2mate or SaveFrom.net. Here are the steps:</p>
69
- <ol>
70
- <li>Go to YouTube</li> <li>Search for Prince Chike Greatest Love of All or copy and paste this link: </li>
71
- <li>Copy the URL of the video</li>
72
- <li>Go to a YouTube downloader tool such as Y2mate or SaveFrom.net</li>
73
- <li>Paste the URL of the video and choose the format and quality you want</li>
74
- <li>Click on download and save the file to your device</li>
75
- </ol>
76
- <h3>Download from PraiseZion</h3>
77
- <p>You can also download the song from PraiseZion, a website that offers free gospel music downloads. Here are the steps:</p>
78
- <ol>
79
- <li>Go to PraiseZion or copy and paste this link: </li>
80
- <li>Scroll down to find Prince Chike Greatest Love of All or use the search bar to find it</li>
81
- <li>Click on the download button and wait for the file to load</li>
82
- <li>Save the file to your device and enjoy</li>
83
- </ol>
84
- <h3>Download from Boomplay</h3>
85
- <p>Another option is to download the song from Boomplay, a music streaming and download service that has a large collection of African music. Here are the steps:</p>
86
- <ol>
87
- <li>Go to Boomplay or copy and paste this link: </li>
88
- <li>Login or sign up for a free account if you don't have one</li>
89
- <li>Find Prince Chike Greatest Love of All or use the search bar to find it</li>
90
- <li>Click on the download icon and choose the quality you want</li>
91
- <li>Save the file to your device and enjoy</li>
92
- </ol>
93
- <h2>Conclusion</h2>
94
- <p>Prince Chike Greatest Love of All is a beautiful song that celebrates God's love for us and our love for Him. It is a song that will inspire you, uplift you, and draw you closer to God. You can download it for free from various sources such as YouTube, PraiseZion, and Boomplay. We hope you enjoy listening to this song and share it with others.</p>
95
- <h2>FAQs</h2>
96
- <ul>
97
- <li><b>Q: Who wrote Prince Chike Greatest Love of All?</b></li>
98
- <li>A: Prince Chike wrote the song himself as a tribute to God's love for him and for humanity.</li>
99
- <li><b>Q: What is the genre of Prince Chike Greatest Love of All?</b></li>
100
- <li>A: The song is a blend of contemporary gospel, Afro-pop, and soul.</li>
101
- <li><b>Q: When was Prince Chike Greatest Love of All released?</b></li>
102
- <li>A: The song was released in 2021 as part of his album with the same title.</li>
103
- <li><b>Q: How can I contact Prince Chike?</b></li>
104
- <li>A: You can contact him through his social media handles such as Facebook, Instagram, and Twitter. You can also send him an email at [email protected].</li>
105
- <li><b>Q: Where can I find more songs by Prince Chike?</b></li>
106
- <li>A: You can find more songs by Prince Chike on his YouTube channel, Spotify, Apple Music, Deezer, and other music platforms.</li>
107
- </ul></p>
108
- <br />
109
- <br />
spaces/1phancelerku/anime-remove-background/Enjoy the Best Stick War Experience with Hugo Gamings Stick War Legacy Mod VIP.md DELETED
@@ -1,131 +0,0 @@
1
- <br />
2
- <h1>Stick War Legacy Mod VIP Hugo Download: How to Get the Best Version of the Game</h1>
3
- <p>Do you love playing Stick War Legacy, the addictive strategy game with stick figures? Do you want to experience the game in a new and exciting way? If yes, then you might be interested in downloading Stick War Legacy Mod VIP Hugo, a modified version of the game by Hugo Gaming. In this article, we will tell you everything you need to know about this mod, how to download and install it, and how to play and enjoy it. Let's get started!</p>
4
- <h2>What is Stick War Legacy?</h2>
5
- <h3>A popular strategy game with stick figures</h3>
6
- <p>Stick War Legacy is a strategy game developed by Max Games Studios. It is one of the most popular and highest-rated games on Google Play and App Store, with over 100 million downloads. The game is set in a world called Inamorta, where different nations of stick figures are at war with each other. Each nation has its own unique way of fighting, such as archers, spearmen, swordsmen, magicians, giants, etc. You play as the leader of a nation called Order, which believes in peace and harmony. Your goal is to unite all the nations under your banner and end the war.</p>
7
- <h2>stick war legacy mod vip hugo download</h2><br /><p><b><b>Download File</b> &#9658; <a href="https://jinyurl.com/2uNQij">https://jinyurl.com/2uNQij</a></b></p><br /><br />
8
- <h3>The main features and modes of the game</h3>
9
- <p>Stick War Legacy has many features and modes that make it fun and challenging. Some of them are:</p>
10
- <ul>
11
- <li><b>Campaign mode:</b> This is the main mode of the game, where you have to complete different missions and conquer all the enemy nations. You can choose from three difficulty levels: normal, hard, and insane. Each mission has its own objectives, rewards, and challenges.</li>
12
- <li><b>Endless Deads mode:</b> This is a survival mode, where you have to defend your base from endless waves of zombies. You can upgrade your units and abilities, and use different strategies to survive as long as possible.</li>
13
- <li><b>Tournament mode:</b> This is a multiplayer mode, where you can compete with other players online. You can choose from different modes, such as classic, deathmatch, king of the hill, capture the flag, etc. You can also customize your army and avatar.</li>
14
- <li><b>Skins:</b> You can unlock and use different skins for your units, such as leaf, ice, savage, lava, etc. Each skin has its own advantages and disadvantages.</li>
15
- <li><b>Gems:</b> You can collect gems by playing the game or watching ads. You can use gems to buy skins, abilities, upgrades, etc.</li>
16
- </ul>
17
- <h2>What is Stick War Legacy Mod VIP Hugo?</h2>
18
- <h3>A modified version of the game by Hugo Gaming</h3>
19
- <p>Stick War Legacy Mod VIP Hugo is a modified version of the game by Hugo Gaming, a YouTube channel that creates mods for various games. This mod adds many new features and changes to the original game, such as:</p>
20
- <ul>
21
- <li><b>New units:</b> You can use new units in your army, such as ninjas, assassins, snipers, bombers, etc.</li>
22
- <li><b>New abilities:</b> You can use new abilities for your units, such as stealth, teleportation, healing, etc.</li>
23
- <li><b>New modes:</b> You can play new modes in the modded game, such as boss fight, zombie invasion, etc.</li>
24
- <li><b>New skins:</b> You can use new skins for your units, such as gold, diamond, rainbow, etc.</li>
25
- <li><b>Unlimited gems:</b> You can get unlimited gems in the modded game, which you can use to buy anything you want.</li>
26
- <li><b>No ads:</b> You can play the modded game without any ads or interruptions.</li>
27
- </ul>
28
- <h3>The benefits and drawbacks of using the mod</h3>
29
- <p>Using Stick War Legacy Mod VIP Hugo can have some benefits and drawbacks. Some of the benefits are:</p>
30
- <ul>
31
- <li><b>More fun and variety:</b> You can enjoy the game in a different way, with more options and features to choose from. You can experiment with different units, abilities, modes, and skins, and have more fun and excitement.</li>
32
- <li><b>More challenge and competition:</b> You can challenge yourself and other players with the new modes and features of the modded game. You can test your skills and strategies against different enemies and scenarios, and compete with other players online.</li>
33
- <li><b>More satisfaction and reward:</b> You can feel more satisfied and rewarded with the modded game, as you can unlock and use everything you want without any limitations or costs. You can also show off your achievements and progress to other players.</li>
34
- </ul>
35
- <p>Some of the drawbacks are:</p>
36
- <ul>
37
- <li><b>Potential risks and issues:</b> You might face some risks and issues when downloading and installing the modded game, such as viruses, malware, errors, crashes, etc. You might also lose your original game data or account if you are not careful.</li>
38
- <li><b>Lack of balance and fairness:</b> You might find the modded game too easy or too hard, depending on the features you use. You might also lose the balance and fairness of the original game, as some units, abilities, modes, and skins might be overpowered or underpowered.</li>
39
- <li><b>Lack of support and updates:</b> You might not get any support or updates from the original developers or the modders of the game. You might miss out on some new features or fixes that are added to the original game. You might also face compatibility issues with different devices or versions of the game.</li>
40
- </ul>
41
- <h2>How to download and install Stick War Legacy Mod VIP Hugo?</h2>
42
- <h3>The requirements and precautions for downloading the mod</h3>
43
- <p>If you want to download and install Stick War Legacy Mod VIP Hugo, you need to meet some requirements and take some precautions. Some of them are:</p>
44
- <ul>
45
- <li><b>A compatible device:</b> You need to have a device that can run the modded game smoothly and without any problems. The device should have enough storage space, memory, battery, etc. The device should also be compatible with the version of the modded game you want to download.</li>
46
- <li><b>A reliable source:</b> You need to find a reliable source that provides the modded game for download. The source should be trustworthy, safe, and updated. The source should also provide clear instructions and information about the modded game.</li>
47
- <li><b>A backup plan:</b> You need to have a backup plan in case something goes wrong with the download or installation process. You should backup your original game data and account before downloading or installing the modded game. You should also have a way to uninstall or restore the original game if needed.</li>
48
- </ul>
49
- <h3>The steps to download and install the mod from different sources</h3>
50
- <p>The steps to download and install Stick War Legacy Mod VIP Hugo may vary depending on the source you choose. However, here are some general steps that you can follow:</p>
51
- <ol>
52
- <li><b>Find a source:</b> Search online for a source that provides Stick War Legacy Mod VIP Hugo for download. Compare different sources based on their reputation, reviews, ratings, etc. Choose a source that suits your needs and preferences.</li>
53
- <li><b>Download the mod:</b> Follow the instructions provided by the source to download the modded game file. The file may be in APK format or ZIP format. Make sure you have enough space on your device to download the file.</li>
54
- <li><b>Enable unknown sources:</b> Go to your device settings and enable the unknown sources option. This will allow you to install apps from sources other than the Google Play Store.</li>
55
- <li><b>Install the mod:</b> Locate the downloaded file on your device and tap on it to install it. Follow the instructions on the screen to complete the installation process. If the file is in ZIP format, you may need to extract it first using a file manager app or a ZIP extractor app.</li>
56
- <li><b>Launch the mod:</b> After the installation is done, you can launch the modded game from your device. You may need to allow some permissions or accept some terms and conditions before playing the game.</li>
57
- </ol>
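One practical way to apply the "reliable source" precaution from the steps above is to verify the downloaded file against a checksum before installing it. A minimal sketch in Python, assuming the download source publishes a SHA-256 digest (the file name and expected value here are placeholders, not from the original article):

```python
import hashlib

def sha256_of(path, chunk_size=65536):
    """Compute the SHA-256 hex digest of a file, reading it in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Compare against the digest published by the download source
# (file name and expected value are placeholders):
# assert sha256_of("mod.apk") == "expected-digest-from-source"
```

If the digest does not match what the source published, the file was corrupted or tampered with and should not be installed.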
58
- <h2>How to play and enjoy Stick War Legacy Mod VIP Hugo?</h2>
59
- <h3>The tips and tricks for playing the modded game</h3>
60
- <p>Playing Stick War Legacy Mod VIP Hugo can be a lot of fun, but it can also be challenging and confusing at times. Here are some tips and tricks that can help you play and enjoy the modded game better:</p>
61
- <ul>
62
- <li><b>Explore the new features and modes:</b> The modded game has many new features and modes that you can try out and have fun with. You can use the new units, abilities, modes, and skins to create your own army and strategy. You can also discover new secrets and surprises in the game.</li>
63
- <li><b>Balance the difficulty and fun:</b> The modded game can be very easy or very hard, depending on the features you use. You can adjust the difficulty level of the game to suit your preference and skill level. You can also mix and match different features to create your own challenge and fun.</li>
64
- <li><b>Play with friends or strangers:</b> The modded game has a multiplayer mode that you can play with other players online. You can play with your friends or strangers, and choose from different modes and rules. You can also chat and communicate with other players in the game.</li>
65
- </ul>
66
- <h3>The best features and modes to try out in the mod</h3>
67
- <p>The modded game has many features and modes that you can try out, but some of them are more interesting and enjoyable than others. Here are some of the best features and modes that you should try out in the mod:</p>
68
108
- <ul>
109
- <li><b>Boss fight mode:</b> This is a mode where you have to fight against powerful bosses that have different abilities and attacks. You have to use your units and skills wisely to defeat them. You can also unlock new bosses by completing certain missions or achievements.</li>
110
- <li><b>Zombie invasion mode:</b> This is a mode where you have to survive against endless waves of zombies that are trying to destroy your base. You have to use your units and abilities to defend your base and kill as many zombies as possible. You can also upgrade your units and abilities, and use different strategies to survive longer.</li>
111
- <li><b>Rainbow skin:</b> This is a skin that you can use for your units, which makes them look colorful and shiny. The skin also gives your units some extra benefits, such as increased speed, damage, health, etc. The skin also makes your units look more attractive and cool.</li>
112
- </ul>
113
- <h2>Conclusion</h2>
114
- <p>Stick War Legacy Mod VIP Hugo is a modified version of Stick War Legacy, a popular strategy game with stick figures. The mod adds many new features and changes to the original game, such as new units, abilities, modes, skins, unlimited gems, no ads, etc. The mod can make the game more fun and challenging, but it can also have some risks and issues. To download and install the mod, you need to meet some requirements and take some precautions. You also need to follow some steps to download and install the mod from different sources. To play and enjoy the mod, you need to explore the new features and modes, balance the difficulty and fun, play with friends or strangers, and try out the best features and modes in the mod.</p>
115
- <p>If you are a fan of Stick War Legacy, or if you are looking for a new and exciting way to play the game, you should definitely give Stick War Legacy Mod VIP Hugo a try. It is one of the best mods for Stick War Legacy that you can find online. However, you should also be careful and responsible when using the mod, as it might affect your original game or device in some ways. We hope this article has helped you learn more about Stick War Legacy Mod VIP Hugo, how to download and install it, and how to play and enjoy it. Have fun!</p>
116
- <h3>Frequently Asked Questions</h3>
117
- <p>Here are some frequently asked questions about Stick War Legacy Mod VIP Hugo:</p>
118
- <ul>
119
- <li><b>Q: Is Stick War Legacy Mod VIP Hugo safe to use?</b></li>
120
- <li>A: Stick War Legacy Mod VIP Hugo is generally safe to use, as long as you download it from a reliable source and follow the instructions carefully. However, there is always a possibility of encountering some viruses, malware, errors, crashes, etc., when using any modded game. Therefore, you should always backup your original game data and account before using the mod, and have a way to uninstall or restore the original game if needed.</li>
121
- <li><b>Q: Is Stick War Legacy Mod VIP Hugo legal to use?</b></li>
122
- <li>A: Stick War Legacy Mod VIP Hugo is not legal to use, as it violates the terms and conditions of the original game and its developers. Using the mod might result in some legal actions or consequences, such as banning, suspending, or deleting your account, or facing some lawsuits or fines. Therefore, you should use the mod at your own risk and discretion.</li>
123
- <li><b>Q: Is Stick War Legacy Mod VIP Hugo compatible with all devices and versions of the game?</b></li>
124
- <li>A: Stick War Legacy Mod VIP Hugo is not compatible with all devices and versions of the game, as it might have some compatibility issues or conflicts with different devices or versions of the game. The mod might not work properly or at all on some devices or versions of the game. Therefore, you should check the compatibility of the mod with your device and version of the game before downloading or installing it.</li>
125
- <li><b>Q: How can I update Stick War Legacy Mod VIP Hugo?</b></li>
126
- <li>A: Stick War Legacy Mod VIP Hugo is not updated regularly or automatically, as it depends on the modders of the game. The mod might not have any updates or fixes for some time, or ever. Therefore, you should check the source of the mod for any updates or fixes, and follow the instructions to download and install them.</li>
127
- <li><b>Q: How can I contact the modders of Stick War Legacy Mod VIP Hugo?</b></li>
128
- <li>A: Stick War Legacy Mod VIP Hugo is created by Hugo Gaming, a YouTube channel that creates mods for various games. You can contact them through their YouTube channel, where they post videos and links about their mods. You can also leave comments or messages on their videos or channel, and they might reply to you.</li>
129
- </ul>
 
spaces/1toTree/lora_test/ppdiffusers/pipelines/dance_diffusion/__init__.py DELETED
@@ -1,17 +0,0 @@
1
- # Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
2
- # Copyright 2022 The HuggingFace Team. All rights reserved.
3
- #
4
- # Licensed under the Apache License, Version 2.0 (the "License");
5
- # you may not use this file except in compliance with the License.
6
- # You may obtain a copy of the License at
7
- #
8
- # http://www.apache.org/licenses/LICENSE-2.0
9
- #
10
- # Unless required by applicable law or agreed to in writing, software
11
- # distributed under the License is distributed on an "AS IS" BASIS,
12
- # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
- # See the License for the specific language governing permissions and
14
- # limitations under the License.
15
-
16
- # flake8: noqa
17
- from .pipeline_dance_diffusion import DanceDiffusionPipeline
 
 
spaces/1toTree/lora_test/ppdiffusers/pipelines/ddim/__init__.py DELETED
@@ -1,17 +0,0 @@
1
- # Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
2
- # Copyright 2022 The HuggingFace Team. All rights reserved.
3
- #
4
- # Licensed under the Apache License, Version 2.0 (the "License");
5
- # you may not use this file except in compliance with the License.
6
- # You may obtain a copy of the License at
7
- #
8
- # http://www.apache.org/licenses/LICENSE-2.0
9
- #
10
- # Unless required by applicable law or agreed to in writing, software
11
- # distributed under the License is distributed on an "AS IS" BASIS,
12
- # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
- # See the License for the specific language governing permissions and
14
- # limitations under the License.
15
-
16
- # flake8: noqa
17
- from .pipeline_ddim import DDIMPipeline
 
 
spaces/232labs/VToonify/vtoonify/model/stylegan/op_gpu/conv2d_gradfix.py DELETED
@@ -1,227 +0,0 @@
1
- import contextlib
2
- import warnings
3
-
4
- import torch
5
- from torch import autograd
6
- from torch.nn import functional as F
7
-
8
- enabled = True
9
- weight_gradients_disabled = False
10
-
11
-
12
- @contextlib.contextmanager
13
- def no_weight_gradients():
14
- global weight_gradients_disabled
15
-
16
- old = weight_gradients_disabled
17
- weight_gradients_disabled = True
18
- yield
19
- weight_gradients_disabled = old
20
-
21
-
22
- def conv2d(input, weight, bias=None, stride=1, padding=0, dilation=1, groups=1):
23
- if could_use_op(input):
24
- return conv2d_gradfix(
25
- transpose=False,
26
- weight_shape=weight.shape,
27
- stride=stride,
28
- padding=padding,
29
- output_padding=0,
30
- dilation=dilation,
31
- groups=groups,
32
- ).apply(input, weight, bias)
33
-
34
- return F.conv2d(
35
- input=input,
36
- weight=weight,
37
- bias=bias,
38
- stride=stride,
39
- padding=padding,
40
- dilation=dilation,
41
- groups=groups,
42
- )
43
-
44
-
45
- def conv_transpose2d(
46
- input,
47
- weight,
48
- bias=None,
49
- stride=1,
50
- padding=0,
51
- output_padding=0,
52
- groups=1,
53
- dilation=1,
54
- ):
55
- if could_use_op(input):
56
- return conv2d_gradfix(
57
- transpose=True,
58
- weight_shape=weight.shape,
59
- stride=stride,
60
- padding=padding,
61
- output_padding=output_padding,
62
- groups=groups,
63
- dilation=dilation,
64
- ).apply(input, weight, bias)
65
-
66
- return F.conv_transpose2d(
67
- input=input,
68
- weight=weight,
69
- bias=bias,
70
- stride=stride,
71
- padding=padding,
72
- output_padding=output_padding,
73
- dilation=dilation,
74
- groups=groups,
75
- )
76
-
77
-
78
- def could_use_op(input):
79
- if (not enabled) or (not torch.backends.cudnn.enabled):
80
- return False
81
-
82
- if input.device.type != "cuda":
83
- return False
84
-
85
- if any(torch.__version__.startswith(x) for x in ["1.7.", "1.8."]):
86
- return True
87
-
88
- warnings.warn(
89
- f"conv2d_gradfix not supported on PyTorch {torch.__version__}. Falling back to torch.nn.functional.conv2d()."
90
- )
91
-
92
- return False
93
-
94
-
95
- def ensure_tuple(xs, ndim):
96
- xs = tuple(xs) if isinstance(xs, (tuple, list)) else (xs,) * ndim
97
-
98
- return xs
99
-
100
-
101
- conv2d_gradfix_cache = dict()
102
-
103
-
104
- def conv2d_gradfix(
105
- transpose, weight_shape, stride, padding, output_padding, dilation, groups
106
- ):
107
- ndim = 2
108
- weight_shape = tuple(weight_shape)
109
- stride = ensure_tuple(stride, ndim)
110
- padding = ensure_tuple(padding, ndim)
111
- output_padding = ensure_tuple(output_padding, ndim)
112
- dilation = ensure_tuple(dilation, ndim)
113
-
114
- key = (transpose, weight_shape, stride, padding, output_padding, dilation, groups)
115
- if key in conv2d_gradfix_cache:
116
- return conv2d_gradfix_cache[key]
117
-
118
- common_kwargs = dict(
119
- stride=stride, padding=padding, dilation=dilation, groups=groups
120
- )
121
-
122
- def calc_output_padding(input_shape, output_shape):
123
- if transpose:
124
- return [0, 0]
125
-
126
- return [
127
- input_shape[i + 2]
128
- - (output_shape[i + 2] - 1) * stride[i]
129
- - (1 - 2 * padding[i])
130
- - dilation[i] * (weight_shape[i + 2] - 1)
131
- for i in range(ndim)
132
- ]
133
-
134
- class Conv2d(autograd.Function):
135
- @staticmethod
136
- def forward(ctx, input, weight, bias):
137
- if not transpose:
138
- out = F.conv2d(input=input, weight=weight, bias=bias, **common_kwargs)
139
-
140
- else:
141
- out = F.conv_transpose2d(
142
- input=input,
143
- weight=weight,
144
- bias=bias,
145
- output_padding=output_padding,
146
- **common_kwargs,
147
- )
148
-
149
- ctx.save_for_backward(input, weight)
150
-
151
- return out
152
-
153
- @staticmethod
154
- def backward(ctx, grad_output):
155
- input, weight = ctx.saved_tensors
156
- grad_input, grad_weight, grad_bias = None, None, None
157
-
158
- if ctx.needs_input_grad[0]:
159
- p = calc_output_padding(
160
- input_shape=input.shape, output_shape=grad_output.shape
161
- )
162
- grad_input = conv2d_gradfix(
163
- transpose=(not transpose),
164
- weight_shape=weight_shape,
165
- output_padding=p,
166
- **common_kwargs,
167
- ).apply(grad_output, weight, None)
168
-
169
- if ctx.needs_input_grad[1] and not weight_gradients_disabled:
170
- grad_weight = Conv2dGradWeight.apply(grad_output, input)
171
-
172
- if ctx.needs_input_grad[2]:
173
- grad_bias = grad_output.sum((0, 2, 3))
174
-
175
- return grad_input, grad_weight, grad_bias
176
-
177
- class Conv2dGradWeight(autograd.Function):
178
- @staticmethod
179
- def forward(ctx, grad_output, input):
180
- op = torch._C._jit_get_operation(
181
- "aten::cudnn_convolution_backward_weight"
182
- if not transpose
183
- else "aten::cudnn_convolution_transpose_backward_weight"
184
- )
185
- flags = [
186
- torch.backends.cudnn.benchmark,
187
- torch.backends.cudnn.deterministic,
188
- torch.backends.cudnn.allow_tf32,
189
- ]
190
- grad_weight = op(
191
- weight_shape,
192
- grad_output,
193
- input,
194
- padding,
195
- stride,
196
- dilation,
197
- groups,
198
- *flags,
199
- )
200
- ctx.save_for_backward(grad_output, input)
201
-
202
- return grad_weight
203
-
204
- @staticmethod
205
- def backward(ctx, grad_grad_weight):
206
- grad_output, input = ctx.saved_tensors
207
- grad_grad_output, grad_grad_input = None, None
208
-
209
- if ctx.needs_input_grad[0]:
210
- grad_grad_output = Conv2d.apply(input, grad_grad_weight, None)
211
-
212
- if ctx.needs_input_grad[1]:
213
- p = calc_output_padding(
214
- input_shape=input.shape, output_shape=grad_output.shape
215
- )
216
- grad_grad_input = conv2d_gradfix(
217
- transpose=(not transpose),
218
- weight_shape=weight_shape,
219
- output_padding=p,
220
- **common_kwargs,
221
- ).apply(grad_output, grad_grad_weight, None)
222
-
223
- return grad_grad_output, grad_grad_input
224
-
225
- conv2d_gradfix_cache[key] = Conv2d
226
-
227
- return Conv2d
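The `calc_output_padding` helper in the deleted file computes, per spatial dimension, the `output_padding` that makes the transposed convolution used for the input gradient reproduce the original input size. The same arithmetic as a standalone sketch (pure Python, no torch required; function names are mine, not from the file):

```python
def conv_out_size(in_size, kernel, stride, padding, dilation=1):
    """Output size of a standard (non-transposed) convolution, per dimension."""
    return (in_size + 2 * padding - dilation * (kernel - 1) - 1) // stride + 1

def grad_input_output_padding(in_size, out_size, kernel, stride, padding, dilation=1):
    """Mirror of calc_output_padding (non-transpose case) for one dimension."""
    return in_size - (out_size - 1) * stride - (1 - 2 * padding) - dilation * (kernel - 1)

# Example: a 3x3 conv with stride 2, padding 1 maps size 8 -> 4;
# the backward pass then needs output_padding 1 to map 4 back to 8.
out = conv_out_size(8, kernel=3, stride=2, padding=1)
op = grad_input_output_padding(8, out, kernel=3, stride=2, padding=1)
# Transposed-conv size formula round-trips to the input size:
assert (out - 1) * 2 - 2 * 1 + 1 * (3 - 1) + op + 1 == 8
```

This is why the cached `Conv2d` function can reuse `conv2d_gradfix` with `transpose` flipped in `backward`: the computed padding guarantees the gradient tensor has exactly the input's shape.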
 
 
spaces/4Taps/SadTalker/src/face3d/models/template_model.py DELETED
@@ -1,100 +0,0 @@
1
- """Model class template
2
-
3
- This module provides a template for users to implement custom models.
4
- You can specify '--model template' to use this model.
5
- The class name should be consistent with both the filename and its model option.
6
- The filename should be <model>_dataset.py
7
- The class name should be <Model>Dataset.py
8
- It implements a simple image-to-image translation baseline based on regression loss.
9
- Given input-output pairs (data_A, data_B), it learns a network netG that can minimize the following L1 loss:
10
- min_<netG> ||netG(data_A) - data_B||_1
11
- You need to implement the following functions:
12
- <modify_commandline_options>: Add model-specific options and rewrite default values for existing options.
13
- <__init__>: Initialize this model class.
14
- <set_input>: Unpack input data and perform data pre-processing.
15
- <forward>: Run forward pass. This will be called by both <optimize_parameters> and <test>.
16
- <optimize_parameters>: Update network weights; it will be called in every training iteration.
17
- """
18
- import numpy as np
19
- import torch
20
- from .base_model import BaseModel
21
- from . import networks
22
-
23
-
24
- class TemplateModel(BaseModel):
25
- @staticmethod
26
- def modify_commandline_options(parser, is_train=True):
27
- """Add new model-specific options and rewrite default values for existing options.
28
-
29
- Parameters:
30
- parser -- the option parser
31
- is_train -- if it is training phase or test phase. You can use this flag to add training-specific or test-specific options.
32
-
33
- Returns:
34
- the modified parser.
35
- """
36
- parser.set_defaults(dataset_mode='aligned') # You can rewrite default values for this model. For example, this model usually uses aligned dataset as its dataset.
37
- if is_train:
38
- parser.add_argument('--lambda_regression', type=float, default=1.0, help='weight for the regression loss') # You can define new arguments for this model.
39
-
40
- return parser
41
-
42
- def __init__(self, opt):
43
- """Initialize this model class.
44
-
45
- Parameters:
46
- opt -- training/test options
47
-
48
- A few things can be done here.
49
- - (required) call the initialization function of BaseModel
50
- - define loss function, visualization images, model names, and optimizers
51
- """
52
- BaseModel.__init__(self, opt) # call the initialization method of BaseModel
53
- # specify the training losses you want to print out. The program will call base_model.get_current_losses to plot the losses to the console and save them to the disk.
54
- self.loss_names = ['loss_G']
55
- # specify the images you want to save and display. The program will call base_model.get_current_visuals to save and display these images.
56
- self.visual_names = ['data_A', 'data_B', 'output']
57
- # specify the models you want to save to the disk. The program will call base_model.save_networks and base_model.load_networks to save and load networks.
58
- # you can use opt.isTrain to specify different behaviors for training and test. For example, some networks will not be used during test, and you don't need to load them.
59
- self.model_names = ['G']
60
- # define networks; you can use opt.isTrain to specify different behaviors for training and test.
61
- self.netG = networks.define_G(opt.input_nc, opt.output_nc, opt.ngf, opt.netG, gpu_ids=self.gpu_ids)
62
- if self.isTrain: # only defined during training time
63
- # define your loss functions. You can use losses provided by torch.nn such as torch.nn.L1Loss.
64
- # We also provide a GANLoss class "networks.GANLoss". self.criterionGAN = networks.GANLoss().to(self.device)
65
- self.criterionLoss = torch.nn.L1Loss()
66
- # define and initialize optimizers. You can define one optimizer for each network.
67
- # If two networks are updated at the same time, you can use itertools.chain to group them. See cycle_gan_model.py for an example.
68
- self.optimizer = torch.optim.Adam(self.netG.parameters(), lr=opt.lr, betas=(opt.beta1, 0.999))
69
- self.optimizers = [self.optimizer]
70
-
71
- # Our program will automatically call <model.setup> to define schedulers, load networks, and print networks
72
-
73
- def set_input(self, input):
74
- """Unpack input data from the dataloader and perform necessary pre-processing steps.
75
-
76
- Parameters:
77
- input: a dictionary that contains the data itself and its metadata information.
78
- """
79
- AtoB = self.opt.direction == 'AtoB' # use <direction> to swap data_A and data_B
80
- self.data_A = input['A' if AtoB else 'B'].to(self.device) # get image data A
81
- self.data_B = input['B' if AtoB else 'A'].to(self.device) # get image data B
82
- self.image_paths = input['A_paths' if AtoB else 'B_paths'] # get image paths
83
-
84
- def forward(self):
85
- """Run forward pass. This will be called by both functions <optimize_parameters> and <test>."""
86
- self.output = self.netG(self.data_A) # generate output image given the input data_A
87
-
88
- def backward(self):
89
- """Calculate losses, gradients, and update network weights; called in every training iteration"""
90
- # calculate the intermediate results if necessary; here self.output has been computed during function <forward>
91
- # calculate loss given the input and intermediate results
92
- self.loss_G = self.criterionLoss(self.output, self.data_B) * self.opt.lambda_regression
93
- self.loss_G.backward() # calculate gradients of network G w.r.t. loss_G
94
-
95
- def optimize_parameters(self):
96
- """Update network weights; it will be called in every training iteration."""
97
- self.forward() # first call forward to calculate intermediate results
98
- self.optimizer.zero_grad() # clear network G's existing gradients
99
- self.backward() # calculate gradients for network G
100
- self.optimizer.step() # update gradients for network G
 
 
spaces/801artistry/RVC801/infer_uvr5.py DELETED
@@ -1,363 +0,0 @@
1
- import os, sys, torch, warnings, pdb
2
-
3
- now_dir = os.getcwd()
4
- sys.path.append(now_dir)
5
- from json import load as ll
6
-
7
- warnings.filterwarnings("ignore")
8
- import librosa
9
- import importlib
10
- import numpy as np
11
- import hashlib, math
12
- from tqdm import tqdm
13
- from lib.uvr5_pack.lib_v5 import spec_utils
14
- from lib.uvr5_pack.utils import _get_name_params, inference
15
- from lib.uvr5_pack.lib_v5.model_param_init import ModelParameters
16
- import soundfile as sf
17
- from lib.uvr5_pack.lib_v5.nets_new import CascadedNet
18
- from lib.uvr5_pack.lib_v5 import nets_61968KB as nets
19
-
20
-
21
- class _audio_pre_:
22
- def __init__(self, agg, model_path, device, is_half):
23
- self.model_path = model_path
24
- self.device = device
25
- self.data = {
26
- # Processing Options
27
- "postprocess": False,
28
- "tta": False,
29
- # Constants
30
- "window_size": 512,
31
- "agg": agg,
32
- "high_end_process": "mirroring",
33
- }
34
- mp = ModelParameters("lib/uvr5_pack/lib_v5/modelparams/4band_v2.json")
35
- model = nets.CascadedASPPNet(mp.param["bins"] * 2)
36
- cpk = torch.load(model_path, map_location="cpu")
37
- model.load_state_dict(cpk)
38
- model.eval()
39
- if is_half:
40
- model = model.half().to(device)
41
- else:
42
- model = model.to(device)
43
-
44
- self.mp = mp
45
- self.model = model
46
-
47
- def _path_audio_(self, music_file, ins_root=None, vocal_root=None, format="flac"):
48
- if ins_root is None and vocal_root is None:
49
- return "No save root."
50
- name = os.path.basename(music_file)
51
- if ins_root is not None:
52
- os.makedirs(ins_root, exist_ok=True)
53
- if vocal_root is not None:
54
- os.makedirs(vocal_root, exist_ok=True)
55
- X_wave, y_wave, X_spec_s, y_spec_s = {}, {}, {}, {}
56
- bands_n = len(self.mp.param["band"])
57
- # print(bands_n)
58
- for d in range(bands_n, 0, -1):
59
- bp = self.mp.param["band"][d]
60
- if d == bands_n: # high-end band
61
- (
62
- X_wave[d],
63
- _,
64
- ) = librosa.core.load(
65
- music_file,
66
- bp["sr"],
67
- False,
68
- dtype=np.float32,
69
- res_type=bp["res_type"],
70
- )
71
- if X_wave[d].ndim == 1:
72
- X_wave[d] = np.asfortranarray([X_wave[d], X_wave[d]])
73
- else: # lower bands
74
- X_wave[d] = librosa.core.resample(
75
- X_wave[d + 1],
76
- self.mp.param["band"][d + 1]["sr"],
77
- bp["sr"],
78
- res_type=bp["res_type"],
79
- )
80
- # Stft of wave source
81
- X_spec_s[d] = spec_utils.wave_to_spectrogram_mt(
82
- X_wave[d],
83
- bp["hl"],
84
- bp["n_fft"],
85
- self.mp.param["mid_side"],
86
- self.mp.param["mid_side_b2"],
87
- self.mp.param["reverse"],
88
- )
89
- # pdb.set_trace()
90
-             if d == bands_n and self.data["high_end_process"] != "none":
-                 input_high_end_h = (bp["n_fft"] // 2 - bp["crop_stop"]) + (
-                     self.mp.param["pre_filter_stop"] - self.mp.param["pre_filter_start"]
-                 )
-                 input_high_end = X_spec_s[d][
-                     :, bp["n_fft"] // 2 - input_high_end_h : bp["n_fft"] // 2, :
-                 ]
-
-         X_spec_m = spec_utils.combine_spectrograms(X_spec_s, self.mp)
-         aggresive_set = float(self.data["agg"] / 100)
-         aggressiveness = {
-             "value": aggresive_set,
-             "split_bin": self.mp.param["band"][1]["crop_stop"],
-         }
-         with torch.no_grad():
-             pred, X_mag, X_phase = inference(
-                 X_spec_m, self.device, self.model, aggressiveness, self.data
-             )
-         # Postprocess
-         if self.data["postprocess"]:
-             pred_inv = np.clip(X_mag - pred, 0, np.inf)
-             pred = spec_utils.mask_silence(pred, pred_inv)
-         y_spec_m = pred * X_phase
-         v_spec_m = X_spec_m - y_spec_m
-
-         if ins_root is not None:
-             if self.data["high_end_process"].startswith("mirroring"):
-                 input_high_end_ = spec_utils.mirroring(
-                     self.data["high_end_process"], y_spec_m, input_high_end, self.mp
-                 )
-                 wav_instrument = spec_utils.cmb_spectrogram_to_wave(
-                     y_spec_m, self.mp, input_high_end_h, input_high_end_
-                 )
-             else:
-                 wav_instrument = spec_utils.cmb_spectrogram_to_wave(y_spec_m, self.mp)
-             print("%s instruments done" % name)
-             if format in ["wav", "flac"]:
-                 sf.write(
-                     os.path.join(
-                         ins_root,
-                         "instrument_{}_{}.{}".format(name, self.data["agg"], format),
-                     ),
-                     (np.array(wav_instrument) * 32768).astype("int16"),
-                     self.mp.param["sr"],
-                 )  #
-             else:
-                 path = os.path.join(
-                     ins_root, "instrument_{}_{}.wav".format(name, self.data["agg"])
-                 )
-                 sf.write(
-                     path,
-                     (np.array(wav_instrument) * 32768).astype("int16"),
-                     self.mp.param["sr"],
-                 )
-                 if os.path.exists(path):
-                     os.system(
-                         "ffmpeg -i %s -vn %s -q:a 2 -y"
-                         % (path, path[:-4] + ".%s" % format)
-                     )
-         if vocal_root is not None:
-             if self.data["high_end_process"].startswith("mirroring"):
-                 input_high_end_ = spec_utils.mirroring(
-                     self.data["high_end_process"], v_spec_m, input_high_end, self.mp
-                 )
-                 wav_vocals = spec_utils.cmb_spectrogram_to_wave(
-                     v_spec_m, self.mp, input_high_end_h, input_high_end_
-                 )
-             else:
-                 wav_vocals = spec_utils.cmb_spectrogram_to_wave(v_spec_m, self.mp)
-             print("%s vocals done" % name)
-             if format in ["wav", "flac"]:
-                 sf.write(
-                     os.path.join(
-                         vocal_root,
-                         "vocal_{}_{}.{}".format(name, self.data["agg"], format),
-                     ),
-                     (np.array(wav_vocals) * 32768).astype("int16"),
-                     self.mp.param["sr"],
-                 )
-             else:
-                 path = os.path.join(
-                     vocal_root, "vocal_{}_{}.wav".format(name, self.data["agg"])
-                 )
-                 sf.write(
-                     path,
-                     (np.array(wav_vocals) * 32768).astype("int16"),
-                     self.mp.param["sr"],
-                 )
-                 if os.path.exists(path):
-                     os.system(
-                         "ffmpeg -i %s -vn %s -q:a 2 -y"
-                         % (path, path[:-4] + ".%s" % format)
-                     )
-
-
- class _audio_pre_new:
-     def __init__(self, agg, model_path, device, is_half):
-         self.model_path = model_path
-         self.device = device
-         self.data = {
-             # Processing Options
-             "postprocess": False,
-             "tta": False,
-             # Constants
-             "window_size": 512,
-             "agg": agg,
-             "high_end_process": "mirroring",
-         }
-         mp = ModelParameters("lib/uvr5_pack/lib_v5/modelparams/4band_v3.json")
-         nout = 64 if "DeReverb" in model_path else 48
-         model = CascadedNet(mp.param["bins"] * 2, nout)
-         cpk = torch.load(model_path, map_location="cpu")
-         model.load_state_dict(cpk)
-         model.eval()
-         if is_half:
-             model = model.half().to(device)
-         else:
-             model = model.to(device)
-
-         self.mp = mp
-         self.model = model
-
-     def _path_audio_(
-         self, music_file, vocal_root=None, ins_root=None, format="flac"
-     ):  # vocal and ins are swapped for these 3 VR models
-         if ins_root is None and vocal_root is None:
-             return "No save root."
-         name = os.path.basename(music_file)
-         if ins_root is not None:
-             os.makedirs(ins_root, exist_ok=True)
-         if vocal_root is not None:
-             os.makedirs(vocal_root, exist_ok=True)
-         X_wave, y_wave, X_spec_s, y_spec_s = {}, {}, {}, {}
-         bands_n = len(self.mp.param["band"])
-         # print(bands_n)
-         for d in range(bands_n, 0, -1):
-             bp = self.mp.param["band"][d]
-             if d == bands_n:  # high-end band
-                 (
-                     X_wave[d],
-                     _,
-                 ) = librosa.core.load(
-                     music_file,
-                     bp["sr"],
-                     False,
-                     dtype=np.float32,
-                     res_type=bp["res_type"],
-                 )
-                 if X_wave[d].ndim == 1:
-                     X_wave[d] = np.asfortranarray([X_wave[d], X_wave[d]])
-             else:  # lower bands
-                 X_wave[d] = librosa.core.resample(
-                     X_wave[d + 1],
-                     self.mp.param["band"][d + 1]["sr"],
-                     bp["sr"],
-                     res_type=bp["res_type"],
-                 )
-             # Stft of wave source
-             X_spec_s[d] = spec_utils.wave_to_spectrogram_mt(
-                 X_wave[d],
-                 bp["hl"],
-                 bp["n_fft"],
-                 self.mp.param["mid_side"],
-                 self.mp.param["mid_side_b2"],
-                 self.mp.param["reverse"],
-             )
-             # pdb.set_trace()
-             if d == bands_n and self.data["high_end_process"] != "none":
-                 input_high_end_h = (bp["n_fft"] // 2 - bp["crop_stop"]) + (
-                     self.mp.param["pre_filter_stop"] - self.mp.param["pre_filter_start"]
-                 )
-                 input_high_end = X_spec_s[d][
-                     :, bp["n_fft"] // 2 - input_high_end_h : bp["n_fft"] // 2, :
-                 ]
-
-         X_spec_m = spec_utils.combine_spectrograms(X_spec_s, self.mp)
-         aggresive_set = float(self.data["agg"] / 100)
-         aggressiveness = {
-             "value": aggresive_set,
-             "split_bin": self.mp.param["band"][1]["crop_stop"],
-         }
-         with torch.no_grad():
-             pred, X_mag, X_phase = inference(
-                 X_spec_m, self.device, self.model, aggressiveness, self.data
-             )
-         # Postprocess
-         if self.data["postprocess"]:
-             pred_inv = np.clip(X_mag - pred, 0, np.inf)
-             pred = spec_utils.mask_silence(pred, pred_inv)
-         y_spec_m = pred * X_phase
-         v_spec_m = X_spec_m - y_spec_m
-
-         if ins_root is not None:
-             if self.data["high_end_process"].startswith("mirroring"):
-                 input_high_end_ = spec_utils.mirroring(
-                     self.data["high_end_process"], y_spec_m, input_high_end, self.mp
-                 )
-                 wav_instrument = spec_utils.cmb_spectrogram_to_wave(
-                     y_spec_m, self.mp, input_high_end_h, input_high_end_
-                 )
-             else:
-                 wav_instrument = spec_utils.cmb_spectrogram_to_wave(y_spec_m, self.mp)
-             print("%s instruments done" % name)
-             if format in ["wav", "flac"]:
-                 sf.write(
-                     os.path.join(
-                         ins_root,
-                         "instrument_{}_{}.{}".format(name, self.data["agg"], format),
-                     ),
-                     (np.array(wav_instrument) * 32768).astype("int16"),
-                     self.mp.param["sr"],
-                 )  #
-             else:
-                 path = os.path.join(
-                     ins_root, "instrument_{}_{}.wav".format(name, self.data["agg"])
-                 )
-                 sf.write(
-                     path,
-                     (np.array(wav_instrument) * 32768).astype("int16"),
-                     self.mp.param["sr"],
-                 )
-                 if os.path.exists(path):
-                     os.system(
-                         "ffmpeg -i %s -vn %s -q:a 2 -y"
-                         % (path, path[:-4] + ".%s" % format)
-                     )
-         if vocal_root is not None:
-             if self.data["high_end_process"].startswith("mirroring"):
-                 input_high_end_ = spec_utils.mirroring(
-                     self.data["high_end_process"], v_spec_m, input_high_end, self.mp
-                 )
-                 wav_vocals = spec_utils.cmb_spectrogram_to_wave(
-                     v_spec_m, self.mp, input_high_end_h, input_high_end_
-                 )
-             else:
-                 wav_vocals = spec_utils.cmb_spectrogram_to_wave(v_spec_m, self.mp)
-             print("%s vocals done" % name)
-             if format in ["wav", "flac"]:
-                 sf.write(
-                     os.path.join(
-                         vocal_root,
-                         "vocal_{}_{}.{}".format(name, self.data["agg"], format),
-                     ),
-                     (np.array(wav_vocals) * 32768).astype("int16"),
-                     self.mp.param["sr"],
-                 )
-             else:
-                 path = os.path.join(
-                     vocal_root, "vocal_{}_{}.wav".format(name, self.data["agg"])
-                 )
-                 sf.write(
-                     path,
-                     (np.array(wav_vocals) * 32768).astype("int16"),
-                     self.mp.param["sr"],
-                 )
-                 if os.path.exists(path):
-                     os.system(
-                         "ffmpeg -i %s -vn %s -q:a 2 -y"
-                         % (path, path[:-4] + ".%s" % format)
-                     )
-
-
- if __name__ == "__main__":
-     device = "cuda"
-     is_half = True
-     # model_path = "uvr5_weights/2_HP-UVR.pth"
-     # model_path = "uvr5_weights/VR-DeEchoDeReverb.pth"
-     # model_path = "uvr5_weights/VR-DeEchoNormal.pth"
-     model_path = "uvr5_weights/DeEchoNormal.pth"
-     # pre_fun = _audio_pre_(model_path=model_path, device=device, is_half=True, agg=10)
-     pre_fun = _audio_pre_new(model_path=model_path, device=device, is_half=True, agg=10)
-     audio_path = "雪雪伴奏对消HP5.wav"
-     save_path = "opt"
-     pre_fun._path_audio_(audio_path, save_path, save_path)
spaces/AI-Hobbyist/Hoyo-RVC/docs/training_tips_en.md DELETED
@@ -1,65 +0,0 @@
- Instructions and tips for RVC training
- ======================================
- This TIPS explains how data training is done.
-
- # Training flow
- I will explain along the steps in the training tab of the GUI.
-
- ## step1
- Set the experiment name here.
-
- You can also set here whether the model should take pitch into account.
- If the model doesn't consider pitch, the model will be lighter, but not suitable for singing.
-
- Data for each experiment is placed in `/logs/your-experiment-name/`.
-
- ## step2a
- Loads and preprocesses audio.
-
- ### load audio
- If you specify a folder with audio, the audio files in that folder will be read automatically.
- For example, if you specify `C:Users\hoge\voices`, `C:Users\hoge\voices\voice.mp3` will be loaded, but `C:Users\hoge\voices\dir\voice.mp3` will not be loaded.
-
- Since ffmpeg is used internally for reading audio, if the extension is supported by ffmpeg, it will be read automatically.
- After converting to int16 with ffmpeg, convert to float32 and normalize between -1 to 1.
-
- ### denoising
- The audio is smoothed by scipy's filtfilt.
-
- ### Audio Split
- First, the input audio is divided by detecting parts of silence that last longer than a certain period (max_sil_kept=5 seconds?). After splitting the audio on silence, split the audio every 4 seconds with an overlap of 0.3 seconds. For audio separated within 4 seconds, after normalizing the volume, convert the wav file to `/logs/your-experiment-name/0_gt_wavs` and then convert it to 16k sampling rate to `/logs/your-experiment-name/1_16k_wavs ` as a wav file.
-
- ## step2b
- ### Extract pitch
- Extract pitch information from wav files. Extract the pitch information (=f0) using the method built into parselmouth or pyworld and save it in `/logs/your-experiment-name/2a_f0`. Then logarithmically convert the pitch information to an integer between 1 and 255 and save it in `/logs/your-experiment-name/2b-f0nsf`.
-
- ### Extract feature_print
- Convert the wav file to embedding in advance using HuBERT. Read the wav file saved in `/logs/your-experiment-name/1_16k_wavs`, convert the wav file to 256-dimensional features with HuBERT, and save in npy format in `/logs/your-experiment-name/3_feature256`.
-
- ## step3
- train the model.
- ### Glossary for Beginners
- In deep learning, the data set is divided and the learning proceeds little by little. In one model update (step), batch_size data are retrieved and predictions and error corrections are performed. Doing this once for a dataset counts as one epoch.
-
- Therefore, the learning time is the learning time per step x (the number of data in the dataset / batch size) x the number of epochs. In general, the larger the batch size, the more stable the learning becomes (learning time per step ÷ batch size) becomes smaller, but it uses more GPU memory. GPU RAM can be checked with the nvidia-smi command. Learning can be done in a short time by increasing the batch size as much as possible according to the machine of the execution environment.
-
- ### Specify pretrained model
- RVC starts training the model from pretrained weights instead of from 0, so it can be trained with a small dataset.
-
- By default
-
- - If you consider pitch, it loads `rvc-location/pretrained/f0G40k.pth` and `rvc-location/pretrained/f0D40k.pth`.
- - If you don't consider pitch, it loads `rvc-location/pretrained/f0G40k.pth` and `rvc-location/pretrained/f0D40k.pth`.
-
- When learning, model parameters are saved in `logs/your-experiment-name/G_{}.pth` and `logs/your-experiment-name/D_{}.pth` for each save_every_epoch, but by specifying this path, you can start learning. You can restart or start training from model weights learned in a different experiment.
-
- ### learning index
- RVC saves the HuBERT feature values used during training, and during inference, searches for feature values that are similar to the feature values used during learning to perform inference. In order to perform this search at high speed, the index is learned in advance.
- For index learning, we use the approximate neighborhood search library faiss. Read the feature value of `logs/your-experiment-name/3_feature256` and use it to learn the index, and save it as `logs/your-experiment-name/add_XXX.index`.
-
- (From the 20230428update version, it is read from the index, and saving / specifying is no longer necessary.)
-
- ### Button description
- - Train model: After executing step2b, press this button to train the model.
- - Train feature index: After training the model, perform index learning.
- - One-click training: step2b, model training and feature index training all at once.
spaces/AIConsultant/MusicGen/audiocraft/grids/musicgen/musicgen_melody_32khz.py DELETED
@@ -1,65 +0,0 @@
- # Copyright (c) Meta Platforms, Inc. and affiliates.
- # All rights reserved.
- #
- # This source code is licensed under the license found in the
- # LICENSE file in the root directory of this source tree.
-
- from ._explorers import LMExplorer
- from ...environment import AudioCraftEnvironment
-
-
- @LMExplorer
- def explorer(launcher):
-     partitions = AudioCraftEnvironment.get_slurm_partitions(['team', 'global'])
-     launcher.slurm_(gpus=32, partition=partitions)
-     launcher.bind_(solver='musicgen/musicgen_melody_32khz')
-     # replace this by the desired music dataset
-     launcher.bind_(dset='internal/music_400k_32khz')
-
-     fsdp = {'autocast': False, 'fsdp.use': True}
-     medium = {'model/lm/model_scale': 'medium'}
-     large = {'model/lm/model_scale': 'large'}
-
-     cfg_low = {'classifier_free_guidance.training_dropout': 0.2}
-     wd_low = {'conditioners.description.t5.word_dropout': 0.2}
-
-     adam = {'optim.optimizer': 'adamw', 'optim.lr': 1e-4}
-
-     cache_path = {'conditioners.self_wav.chroma_stem.cache_path':
-                   '/fsx-audio-craft-llm/jadecopet/experiments/audiocraft/caches/chroma_stem'}
-
-     # CACHE GENERATION JOBS
-     n_cache_gen_jobs = 4
-     gen_sub = launcher.slurm(gpus=1)
-     gen_sub.bind_(
-         cache_path, {
-             # the cache is always computed over the whole file, so duration doesn't matter here.
-             'dataset.segment_duration': 2.,
-             'dataset.batch_size': 8,
-             'dataset.train.permutation_on_files': True,  # try to not repeat files.
-             'optim.epochs': 10,
-             'model/lm/model_scale': 'xsmall',
-
-         })
-     with gen_sub.job_array():
-         for gen_job in range(n_cache_gen_jobs):
-             gen_sub({'dataset.train.shuffle_seed': gen_job})
-
-     # ACTUAL TRAINING JOBS.
-     launcher.bind_(fsdp)
-
-     launcher.slurm_(gpus=32).bind_(label='32gpus')
-     with launcher.job_array():
-         sub = launcher.bind()
-         sub()
-         sub(cache_path)
-
-     launcher.slurm_(gpus=64).bind_(label='64gpus')
-     with launcher.job_array():
-         sub = launcher.bind()
-         sub(medium, adam)
-
-     launcher.slurm_(gpus=96).bind_(label='96gpus')
-     with launcher.job_array():
-         sub = launcher.bind()
-         sub(large, cfg_low, wd_low, adam, {'optim.max_norm': 3})
spaces/AIFILMS/StyleGANEX/models/mtcnn/mtcnn_pytorch/src/get_nets.py DELETED
@@ -1,171 +0,0 @@
- import torch
- import torch.nn as nn
- import torch.nn.functional as F
- from collections import OrderedDict
- import numpy as np
-
- from configs.paths_config import model_paths
- PNET_PATH = model_paths["mtcnn_pnet"]
- ONET_PATH = model_paths["mtcnn_onet"]
- RNET_PATH = model_paths["mtcnn_rnet"]
-
-
- class Flatten(nn.Module):
-
-     def __init__(self):
-         super(Flatten, self).__init__()
-
-     def forward(self, x):
-         """
-         Arguments:
-             x: a float tensor with shape [batch_size, c, h, w].
-         Returns:
-             a float tensor with shape [batch_size, c*h*w].
-         """
-
-         # without this pretrained model isn't working
-         x = x.transpose(3, 2).contiguous()
-
-         return x.view(x.size(0), -1)
-
-
- class PNet(nn.Module):
-
-     def __init__(self):
-         super().__init__()
-
-         # suppose we have input with size HxW, then
-         # after first layer: H - 2,
-         # after pool: ceil((H - 2)/2),
-         # after second conv: ceil((H - 2)/2) - 2,
-         # after last conv: ceil((H - 2)/2) - 4,
-         # and the same for W
-
-         self.features = nn.Sequential(OrderedDict([
-             ('conv1', nn.Conv2d(3, 10, 3, 1)),
-             ('prelu1', nn.PReLU(10)),
-             ('pool1', nn.MaxPool2d(2, 2, ceil_mode=True)),
-
-             ('conv2', nn.Conv2d(10, 16, 3, 1)),
-             ('prelu2', nn.PReLU(16)),
-
-             ('conv3', nn.Conv2d(16, 32, 3, 1)),
-             ('prelu3', nn.PReLU(32))
-         ]))
-
-         self.conv4_1 = nn.Conv2d(32, 2, 1, 1)
-         self.conv4_2 = nn.Conv2d(32, 4, 1, 1)
-
-         weights = np.load(PNET_PATH, allow_pickle=True)[()]
-         for n, p in self.named_parameters():
-             p.data = torch.FloatTensor(weights[n])
-
-     def forward(self, x):
-         """
-         Arguments:
-             x: a float tensor with shape [batch_size, 3, h, w].
-         Returns:
-             b: a float tensor with shape [batch_size, 4, h', w'].
-             a: a float tensor with shape [batch_size, 2, h', w'].
-         """
-         x = self.features(x)
-         a = self.conv4_1(x)
-         b = self.conv4_2(x)
-         a = F.softmax(a, dim=-1)
-         return b, a
-
-
- class RNet(nn.Module):
-
-     def __init__(self):
-         super().__init__()
-
-         self.features = nn.Sequential(OrderedDict([
-             ('conv1', nn.Conv2d(3, 28, 3, 1)),
-             ('prelu1', nn.PReLU(28)),
-             ('pool1', nn.MaxPool2d(3, 2, ceil_mode=True)),
-
-             ('conv2', nn.Conv2d(28, 48, 3, 1)),
-             ('prelu2', nn.PReLU(48)),
-             ('pool2', nn.MaxPool2d(3, 2, ceil_mode=True)),
-
-             ('conv3', nn.Conv2d(48, 64, 2, 1)),
-             ('prelu3', nn.PReLU(64)),
-
-             ('flatten', Flatten()),
-             ('conv4', nn.Linear(576, 128)),
-             ('prelu4', nn.PReLU(128))
-         ]))
-
-         self.conv5_1 = nn.Linear(128, 2)
-         self.conv5_2 = nn.Linear(128, 4)
-
-         weights = np.load(RNET_PATH, allow_pickle=True)[()]
-         for n, p in self.named_parameters():
-             p.data = torch.FloatTensor(weights[n])
-
-     def forward(self, x):
-         """
-         Arguments:
-             x: a float tensor with shape [batch_size, 3, h, w].
-         Returns:
-             b: a float tensor with shape [batch_size, 4].
-             a: a float tensor with shape [batch_size, 2].
-         """
-         x = self.features(x)
-         a = self.conv5_1(x)
-         b = self.conv5_2(x)
-         a = F.softmax(a, dim=-1)
-         return b, a
-
-
- class ONet(nn.Module):
-
-     def __init__(self):
-         super().__init__()
-
-         self.features = nn.Sequential(OrderedDict([
-             ('conv1', nn.Conv2d(3, 32, 3, 1)),
-             ('prelu1', nn.PReLU(32)),
-             ('pool1', nn.MaxPool2d(3, 2, ceil_mode=True)),
-
-             ('conv2', nn.Conv2d(32, 64, 3, 1)),
-             ('prelu2', nn.PReLU(64)),
-             ('pool2', nn.MaxPool2d(3, 2, ceil_mode=True)),
-
-             ('conv3', nn.Conv2d(64, 64, 3, 1)),
-             ('prelu3', nn.PReLU(64)),
-             ('pool3', nn.MaxPool2d(2, 2, ceil_mode=True)),
-
-             ('conv4', nn.Conv2d(64, 128, 2, 1)),
-             ('prelu4', nn.PReLU(128)),
-
-             ('flatten', Flatten()),
-             ('conv5', nn.Linear(1152, 256)),
-             ('drop5', nn.Dropout(0.25)),
-             ('prelu5', nn.PReLU(256)),
-         ]))
-
-         self.conv6_1 = nn.Linear(256, 2)
-         self.conv6_2 = nn.Linear(256, 4)
-         self.conv6_3 = nn.Linear(256, 10)
-
-         weights = np.load(ONET_PATH, allow_pickle=True)[()]
-         for n, p in self.named_parameters():
-             p.data = torch.FloatTensor(weights[n])
-
-     def forward(self, x):
-         """
-         Arguments:
-             x: a float tensor with shape [batch_size, 3, h, w].
-         Returns:
-             c: a float tensor with shape [batch_size, 10].
-             b: a float tensor with shape [batch_size, 4].
-             a: a float tensor with shape [batch_size, 2].
-         """
-         x = self.features(x)
-         a = self.conv6_1(x)
-         b = self.conv6_2(x)
-         c = self.conv6_3(x)
-         a = F.softmax(a, dim=-1)
-         return c, b, a
spaces/AIGText/GlyphControl/ldm/modules/midas/midas/blocks.py DELETED
@@ -1,342 +0,0 @@
- import torch
- import torch.nn as nn
-
- from .vit import (
-     _make_pretrained_vitb_rn50_384,
-     _make_pretrained_vitl16_384,
-     _make_pretrained_vitb16_384,
-     forward_vit,
- )
-
- def _make_encoder(backbone, features, use_pretrained, groups=1, expand=False, exportable=True, hooks=None, use_vit_only=False, use_readout="ignore",):
-     if backbone == "vitl16_384":
-         pretrained = _make_pretrained_vitl16_384(
-             use_pretrained, hooks=hooks, use_readout=use_readout
-         )
-         scratch = _make_scratch(
-             [256, 512, 1024, 1024], features, groups=groups, expand=expand
-         )  # ViT-L/16 - 85.0% Top1 (backbone)
-     elif backbone == "vitb_rn50_384":
-         pretrained = _make_pretrained_vitb_rn50_384(
-             use_pretrained,
-             hooks=hooks,
-             use_vit_only=use_vit_only,
-             use_readout=use_readout,
-         )
-         scratch = _make_scratch(
-             [256, 512, 768, 768], features, groups=groups, expand=expand
-         )  # ViT-H/16 - 85.0% Top1 (backbone)
-     elif backbone == "vitb16_384":
-         pretrained = _make_pretrained_vitb16_384(
-             use_pretrained, hooks=hooks, use_readout=use_readout
-         )
-         scratch = _make_scratch(
-             [96, 192, 384, 768], features, groups=groups, expand=expand
-         )  # ViT-B/16 - 84.6% Top1 (backbone)
-     elif backbone == "resnext101_wsl":
-         pretrained = _make_pretrained_resnext101_wsl(use_pretrained)
-         scratch = _make_scratch([256, 512, 1024, 2048], features, groups=groups, expand=expand)  # efficientnet_lite3
-     elif backbone == "efficientnet_lite3":
-         pretrained = _make_pretrained_efficientnet_lite3(use_pretrained, exportable=exportable)
-         scratch = _make_scratch([32, 48, 136, 384], features, groups=groups, expand=expand)  # efficientnet_lite3
-     else:
-         print(f"Backbone '{backbone}' not implemented")
-         assert False
-
-     return pretrained, scratch
-
-
- def _make_scratch(in_shape, out_shape, groups=1, expand=False):
-     scratch = nn.Module()
-
-     out_shape1 = out_shape
-     out_shape2 = out_shape
-     out_shape3 = out_shape
-     out_shape4 = out_shape
-     if expand==True:
-         out_shape1 = out_shape
-         out_shape2 = out_shape*2
-         out_shape3 = out_shape*4
-         out_shape4 = out_shape*8
-
-     scratch.layer1_rn = nn.Conv2d(
-         in_shape[0], out_shape1, kernel_size=3, stride=1, padding=1, bias=False, groups=groups
-     )
-     scratch.layer2_rn = nn.Conv2d(
-         in_shape[1], out_shape2, kernel_size=3, stride=1, padding=1, bias=False, groups=groups
-     )
-     scratch.layer3_rn = nn.Conv2d(
-         in_shape[2], out_shape3, kernel_size=3, stride=1, padding=1, bias=False, groups=groups
-     )
-     scratch.layer4_rn = nn.Conv2d(
-         in_shape[3], out_shape4, kernel_size=3, stride=1, padding=1, bias=False, groups=groups
-     )
-
-     return scratch
-
-
- def _make_pretrained_efficientnet_lite3(use_pretrained, exportable=False):
-     efficientnet = torch.hub.load(
-         "rwightman/gen-efficientnet-pytorch",
-         "tf_efficientnet_lite3",
-         pretrained=use_pretrained,
-         exportable=exportable
-     )
-     return _make_efficientnet_backbone(efficientnet)
-
-
- def _make_efficientnet_backbone(effnet):
-     pretrained = nn.Module()
-
-     pretrained.layer1 = nn.Sequential(
-         effnet.conv_stem, effnet.bn1, effnet.act1, *effnet.blocks[0:2]
-     )
-     pretrained.layer2 = nn.Sequential(*effnet.blocks[2:3])
-     pretrained.layer3 = nn.Sequential(*effnet.blocks[3:5])
-     pretrained.layer4 = nn.Sequential(*effnet.blocks[5:9])
-
-     return pretrained
-
-
- def _make_resnet_backbone(resnet):
-     pretrained = nn.Module()
-     pretrained.layer1 = nn.Sequential(
-         resnet.conv1, resnet.bn1, resnet.relu, resnet.maxpool, resnet.layer1
-     )
-
-     pretrained.layer2 = resnet.layer2
-     pretrained.layer3 = resnet.layer3
-     pretrained.layer4 = resnet.layer4
-
-     return pretrained
-
-
- def _make_pretrained_resnext101_wsl(use_pretrained):
-     resnet = torch.hub.load("facebookresearch/WSL-Images", "resnext101_32x8d_wsl")
-     return _make_resnet_backbone(resnet)
-
-
-
- class Interpolate(nn.Module):
-     """Interpolation module.
-     """
-
-     def __init__(self, scale_factor, mode, align_corners=False):
-         """Init.
-
-         Args:
-             scale_factor (float): scaling
-             mode (str): interpolation mode
-         """
-         super(Interpolate, self).__init__()
-
-         self.interp = nn.functional.interpolate
-         self.scale_factor = scale_factor
-         self.mode = mode
-         self.align_corners = align_corners
-
-     def forward(self, x):
-         """Forward pass.
-
-         Args:
-             x (tensor): input
-
-         Returns:
-             tensor: interpolated data
-         """
-
-         x = self.interp(
-             x, scale_factor=self.scale_factor, mode=self.mode, align_corners=self.align_corners
-         )
-
-         return x
-
-
- class ResidualConvUnit(nn.Module):
-     """Residual convolution module.
-     """
-
-     def __init__(self, features):
-         """Init.
-
-         Args:
-             features (int): number of features
-         """
-         super().__init__()
-
-         self.conv1 = nn.Conv2d(
-             features, features, kernel_size=3, stride=1, padding=1, bias=True
-         )
-
-         self.conv2 = nn.Conv2d(
-             features, features, kernel_size=3, stride=1, padding=1, bias=True
-         )
-
-         self.relu = nn.ReLU(inplace=True)
-
-     def forward(self, x):
-         """Forward pass.
-
-         Args:
-             x (tensor): input
-
-         Returns:
-             tensor: output
-         """
-         out = self.relu(x)
-         out = self.conv1(out)
-         out = self.relu(out)
-         out = self.conv2(out)
-
-         return out + x
-
-
- class FeatureFusionBlock(nn.Module):
-     """Feature fusion block.
-     """
-
-     def __init__(self, features):
-         """Init.
-
-         Args:
-             features (int): number of features
-         """
-         super(FeatureFusionBlock, self).__init__()
-
-         self.resConfUnit1 = ResidualConvUnit(features)
-         self.resConfUnit2 = ResidualConvUnit(features)
-
-     def forward(self, *xs):
-         """Forward pass.
-
-         Returns:
-             tensor: output
-         """
-         output = xs[0]
-
-         if len(xs) == 2:
-             output += self.resConfUnit1(xs[1])
-
-         output = self.resConfUnit2(output)
-
-         output = nn.functional.interpolate(
-             output, scale_factor=2, mode="bilinear", align_corners=True
-         )
-
-         return output
-
-
-
-
- class ResidualConvUnit_custom(nn.Module):
-     """Residual convolution module.
-     """
-
-     def __init__(self, features, activation, bn):
-         """Init.
-
-         Args:
-             features (int): number of features
-         """
-         super().__init__()
-
-         self.bn = bn
-
-         self.groups=1
-
-         self.conv1 = nn.Conv2d(
-             features, features, kernel_size=3, stride=1, padding=1, bias=True, groups=self.groups
-         )
-
-         self.conv2 = nn.Conv2d(
-             features, features, kernel_size=3, stride=1, padding=1, bias=True, groups=self.groups
-         )
-
-         if self.bn==True:
-             self.bn1 = nn.BatchNorm2d(features)
-             self.bn2 = nn.BatchNorm2d(features)
-
-         self.activation = activation
-
-         self.skip_add = nn.quantized.FloatFunctional()
-
-     def forward(self, x):
-         """Forward pass.
-
-         Args:
-             x (tensor): input
-
-         Returns:
-             tensor: output
-         """
-
-         out = self.activation(x)
-         out = self.conv1(out)
-         if self.bn==True:
-             out = self.bn1(out)
-
-         out = self.activation(out)
-         out = self.conv2(out)
-         if self.bn==True:
-             out = self.bn2(out)
-
-         if self.groups > 1:
-             out = self.conv_merge(out)
-
-         return self.skip_add.add(out, x)
-
-         # return out + x
-
-
- class FeatureFusionBlock_custom(nn.Module):
-     """Feature fusion block.
-     """
-
-     def __init__(self, features, activation, deconv=False, bn=False, expand=False, align_corners=True):
-         """Init.
-
-         Args:
-             features (int): number of features
-         """
-         super(FeatureFusionBlock_custom, self).__init__()
-
-         self.deconv = deconv
-         self.align_corners = align_corners
-
-         self.groups=1
-
-         self.expand = expand
-         out_features = features
-         if self.expand==True:
-             out_features = features//2
-
-         self.out_conv = nn.Conv2d(features, out_features, kernel_size=1, stride=1, padding=0, bias=True, groups=1)
-
-         self.resConfUnit1 = ResidualConvUnit_custom(features, activation, bn)
-         self.resConfUnit2 = ResidualConvUnit_custom(features, activation, bn)
-
-         self.skip_add = nn.quantized.FloatFunctional()
-
-     def forward(self, *xs):
-         """Forward pass.
-
-         Returns:
-             tensor: output
-         """
-         output = xs[0]
-
-         if len(xs) == 2:
-             res = self.resConfUnit1(xs[1])
-             output = self.skip_add.add(output, res)
-             # output += res
-
-         output = self.resConfUnit2(output)
-
-         output = nn.functional.interpolate(
-             output, scale_factor=2, mode="bilinear", align_corners=self.align_corners
-         )
-
-         output = self.out_conv(output)
-
-         return output
-
spaces/AIKey/facetofacechat/README.md DELETED
@@ -1,10 +0,0 @@
- ---
- title: Facetofacechat
- emoji: 📊
- colorFrom: gray
- colorTo: green
- sdk: static
- pinned: false
- ---
-
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
spaces/AbandonedMuse/UnlimitedMusicGen/audiocraft/utils/notebook.py DELETED
@@ -1,32 +0,0 @@
- # Copyright (c) Meta Platforms, Inc. and affiliates.
- # All rights reserved.
- #
- # This source code is licensed under the license found in the
- # LICENSE file in the root directory of this source tree.
-
- try:
-     import IPython.display as ipd  # type: ignore
- except ImportError:
-     # Note in a notebook...
-     pass
-
-
- import torch
-
-
- def display_audio(samples: torch.Tensor, sample_rate: int):
-     """Renders an audio player for the given audio samples.
-
-     Args:
-         samples (torch.Tensor): a Tensor of decoded audio samples
-             with shapes [B, C, T] or [C, T]
-         sample_rate (int): sample rate audio should be displayed with.
-     """
-     assert samples.dim() == 2 or samples.dim() == 3
-
-     samples = samples.detach().cpu()
-     if samples.dim() == 2:
-         samples = samples[None, ...]
-
-     for audio in samples:
-         ipd.display(ipd.Audio(audio, rate=sample_rate))
spaces/AchyuthGamer/OpenGPT-Chat-UI/src/lib/types/Template.ts DELETED
@@ -1,23 +0,0 @@
- import type { Message } from "./Message";
-
- export type LegacyParamatersTemplateInput = {
-   preprompt?: string;
-   userMessageToken: string;
-   userMessageEndToken: string;
-   assistantMessageToken: string;
-   assistantMessageEndToken: string;
- };
-
- export type ChatTemplateInput = {
-   messages: Pick<Message, "from" | "content">[];
-   preprompt?: string;
- };
-
- export type WebSearchSummaryTemplateInput = {
-   answer: string;
-   query: string;
- };
-
- export type WebSearchQueryTemplateInput = {
-   messages: Pick<Message, "from" | "content">[];
- };
spaces/Adapter/CoAdapter/ldm/modules/extra_condition/midas/midas/__init__.py DELETED
File without changes
spaces/AgentVerse/agentVerse/ui/src/classes/actor.ts DELETED
@@ -1,54 +0,0 @@
- import { Physics } from "phaser";
- export class Actor extends Physics.Arcade.Sprite {
-   constructor(
-     scene: Phaser.Scene,
-     x: number,
-     y: number,
-     texture: string,
-     frame?: string | number
-   ) {
-     super(scene, x, y, texture, frame);
-     scene.add.existing(this);
-     scene.physics.add.existing(this);
-     this.getBody().setCollideWorldBounds(true);
-   }
-
-   protected getBody(): Physics.Arcade.Body {
-     return this.body as Physics.Arcade.Body;
-   }
-
-   initAnimations(): void {
-     this.scene.anims.create({
-       key: this.name + "-walk-down",
-       frames: this.scene.anims.generateFrameNumbers(this.name, {
-         start: 0,
-         end: 2,
-       }),
-       frameRate: 6,
-     });
-     this.scene.anims.create({
-       key: this.name + "-walk-up",
-       frames: this.scene.anims.generateFrameNumbers(this.name, {
-         start: 3,
-         end: 5,
-       }),
-       frameRate: 6,
-     });
-     this.scene.anims.create({
-       key: this.name + "-walk-left",
-       frames: this.scene.anims.generateFrameNumbers(this.name, {
-         start: 6,
-         end: 8,
-       }),
-       frameRate: 6,
-     });
-     this.scene.anims.create({
-       key: this.name + "-walk-right",
-       frames: this.scene.anims.generateFrameNumbers(this.name, {
-         start: 9,
-         end: 11,
-       }),
-       frameRate: 6,
-     });
-   }
- }
spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/basesizer/utils/RemoveChild.js DELETED
@@ -1,19 +0,0 @@
- import Container from '../../container/Container.js';
-
- const RemoveItem = Phaser.Utils.Array.Remove;
- const ContainerRemove = Container.prototype.remove;
-
- var RemoveChild = function (gameObject, destroyChild) {
-     if (this.isBackground(gameObject)) {
-         RemoveItem(this.backgroundChildren, gameObject);
-     }
-     ContainerRemove.call(this, gameObject, destroyChild);
-
-     if (!destroyChild && this.sizerEventsEnable) {
-         gameObject.emit('sizer.remove', gameObject, this);
-         this.emit('remove', gameObject, this);
-     }
-     return this;
- }
-
- export default RemoveChild;
spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/pinch/Pinch.js DELETED
@@ -1,2 +0,0 @@
- import { Pinch } from '../../../plugins/gestures.js';
- export default Pinch;
spaces/Al-Chan/Vits_League_of_Legends_Yuumi_TTS/text/mandarin.py DELETED
@@ -1,326 +0,0 @@
- import os
- import sys
- import re
- from pypinyin import lazy_pinyin, BOPOMOFO
- import jieba
- import cn2an
- import logging
-
-
- # List of (Latin alphabet, bopomofo) pairs:
- _latin_to_bopomofo = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [
-     ('a', 'ㄟˉ'),
-     ('b', 'ㄅㄧˋ'),
-     ('c', 'ㄙㄧˉ'),
-     ('d', 'ㄉㄧˋ'),
-     ('e', 'ㄧˋ'),
-     ('f', 'ㄝˊㄈㄨˋ'),
-     ('g', 'ㄐㄧˋ'),
-     ('h', 'ㄝˇㄑㄩˋ'),
-     ('i', 'ㄞˋ'),
-     ('j', 'ㄐㄟˋ'),
-     ('k', 'ㄎㄟˋ'),
-     ('l', 'ㄝˊㄛˋ'),
-     ('m', 'ㄝˊㄇㄨˋ'),
-     ('n', 'ㄣˉ'),
-     ('o', 'ㄡˉ'),
-     ('p', 'ㄆㄧˉ'),
-     ('q', 'ㄎㄧㄡˉ'),
-     ('r', 'ㄚˋ'),
-     ('s', 'ㄝˊㄙˋ'),
-     ('t', 'ㄊㄧˋ'),
-     ('u', 'ㄧㄡˉ'),
-     ('v', 'ㄨㄧˉ'),
-     ('w', 'ㄉㄚˋㄅㄨˋㄌㄧㄡˋ'),
-     ('x', 'ㄝˉㄎㄨˋㄙˋ'),
-     ('y', 'ㄨㄞˋ'),
-     ('z', 'ㄗㄟˋ')
- ]]
-
- # List of (bopomofo, romaji) pairs:
- _bopomofo_to_romaji = [(re.compile('%s' % x[0]), x[1]) for x in [
-     ('ㄅㄛ', 'p⁼wo'),
-     ('ㄆㄛ', 'pʰwo'),
-     ('ㄇㄛ', 'mwo'),
-     ('ㄈㄛ', 'fwo'),
-     ('ㄅ', 'p⁼'),
-     ('ㄆ', 'pʰ'),
-     ('ㄇ', 'm'),
-     ('ㄈ', 'f'),
-     ('ㄉ', 't⁼'),
-     ('ㄊ', 'tʰ'),
-     ('ㄋ', 'n'),
-     ('ㄌ', 'l'),
-     ('ㄍ', 'k⁼'),
-     ('ㄎ', 'kʰ'),
-     ('ㄏ', 'h'),
-     ('ㄐ', 'ʧ⁼'),
-     ('ㄑ', 'ʧʰ'),
-     ('ㄒ', 'ʃ'),
-     ('ㄓ', 'ʦ`⁼'),
-     ('ㄔ', 'ʦ`ʰ'),
-     ('ㄕ', 's`'),
-     ('ㄖ', 'ɹ`'),
-     ('ㄗ', 'ʦ⁼'),
-     ('ㄘ', 'ʦʰ'),
-     ('ㄙ', 's'),
-     ('ㄚ', 'a'),
-     ('ㄛ', 'o'),
-     ('ㄜ', 'ə'),
-     ('ㄝ', 'e'),
-     ('ㄞ', 'ai'),
-     ('ㄟ', 'ei'),
-     ('ㄠ', 'au'),
-     ('ㄡ', 'ou'),
-     ('ㄧㄢ', 'yeNN'),
-     ('ㄢ', 'aNN'),
-     ('ㄧㄣ', 'iNN'),
-     ('ㄣ', 'əNN'),
-     ('ㄤ', 'aNg'),
-     ('ㄧㄥ', 'iNg'),
-     ('ㄨㄥ', 'uNg'),
-     ('ㄩㄥ', 'yuNg'),
-     ('ㄥ', 'əNg'),
-     ('ㄦ', 'əɻ'),
-     ('ㄧ', 'i'),
-     ('ㄨ', 'u'),
-     ('ㄩ', 'ɥ'),
-     ('ˉ', '→'),
-     ('ˊ', '↑'),
-     ('ˇ', '↓↑'),
-     ('ˋ', '↓'),
-     ('˙', ''),
-     (',', ','),
-     ('。', '.'),
-     ('!', '!'),
-     ('?', '?'),
-     ('—', '-')
- ]]
-
- # List of (romaji, ipa) pairs:
- _romaji_to_ipa = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [
-     ('ʃy', 'ʃ'),
-     ('ʧʰy', 'ʧʰ'),
-     ('ʧ⁼y', 'ʧ⁼'),
-     ('NN', 'n'),
-     ('Ng', 'ŋ'),
-     ('y', 'j'),
-     ('h', 'x')
- ]]
-
- # List of (bopomofo, ipa) pairs:
- _bopomofo_to_ipa = [(re.compile('%s' % x[0]), x[1]) for x in [
-     ('ㄅㄛ', 'p⁼wo'),
-     ('ㄆㄛ', 'pʰwo'),
-     ('ㄇㄛ', 'mwo'),
-     ('ㄈㄛ', 'fwo'),
-     ('ㄅ', 'p⁼'),
-     ('ㄆ', 'pʰ'),
-     ('ㄇ', 'm'),
-     ('ㄈ', 'f'),
-     ('ㄉ', 't⁼'),
-     ('ㄊ', 'tʰ'),
-     ('ㄋ', 'n'),
-     ('ㄌ', 'l'),
-     ('ㄍ', 'k⁼'),
-     ('ㄎ', 'kʰ'),
-     ('ㄏ', 'x'),
-     ('ㄐ', 'tʃ⁼'),
-     ('ㄑ', 'tʃʰ'),
-     ('ㄒ', 'ʃ'),
-     ('ㄓ', 'ts`⁼'),
-     ('ㄔ', 'ts`ʰ'),
-     ('ㄕ', 's`'),
-     ('ㄖ', 'ɹ`'),
-     ('ㄗ', 'ts⁼'),
-     ('ㄘ', 'tsʰ'),
-     ('ㄙ', 's'),
-     ('ㄚ', 'a'),
-     ('ㄛ', 'o'),
-     ('ㄜ', 'ə'),
-     ('ㄝ', 'ɛ'),
-     ('ㄞ', 'aɪ'),
-     ('ㄟ', 'eɪ'),
-     ('ㄠ', 'ɑʊ'),
-     ('ㄡ', 'oʊ'),
-     ('ㄧㄢ', 'jɛn'),
-     ('ㄩㄢ', 'ɥæn'),
-     ('ㄢ', 'an'),
-     ('ㄧㄣ', 'in'),
-     ('ㄩㄣ', 'ɥn'),
-     ('ㄣ', 'ən'),
-     ('ㄤ', 'ɑŋ'),
-     ('ㄧㄥ', 'iŋ'),
-     ('ㄨㄥ', 'ʊŋ'),
-     ('ㄩㄥ', 'jʊŋ'),
-     ('ㄥ', 'əŋ'),
-     ('ㄦ', 'əɻ'),
-     ('ㄧ', 'i'),
-     ('ㄨ', 'u'),
-     ('ㄩ', 'ɥ'),
-     ('ˉ', '→'),
-     ('ˊ', '↑'),
-     ('ˇ', '↓↑'),
-     ('ˋ', '↓'),
-     ('˙', ''),
-     (',', ','),
-     ('。', '.'),
-     ('!', '!'),
-     ('?', '?'),
-     ('—', '-')
- ]]
-
- # List of (bopomofo, ipa2) pairs:
- _bopomofo_to_ipa2 = [(re.compile('%s' % x[0]), x[1]) for x in [
-     ('ㄅㄛ', 'pwo'),
-     ('ㄆㄛ', 'pʰwo'),
-     ('ㄇㄛ', 'mwo'),
-     ('ㄈㄛ', 'fwo'),
-     ('ㄅ', 'p'),
-     ('ㄆ', 'pʰ'),
-     ('ㄇ', 'm'),
-     ('ㄈ', 'f'),
-     ('ㄉ', 't'),
-     ('ㄊ', 'tʰ'),
-     ('ㄋ', 'n'),
-     ('ㄌ', 'l'),
-     ('ㄍ', 'k'),
-     ('ㄎ', 'kʰ'),
-     ('ㄏ', 'h'),
-     ('ㄐ', 'tɕ'),
-     ('ㄑ', 'tɕʰ'),
-     ('ㄒ', 'ɕ'),
-     ('ㄓ', 'tʂ'),
-     ('ㄔ', 'tʂʰ'),
-     ('ㄕ', 'ʂ'),
-     ('ㄖ', 'ɻ'),
-     ('ㄗ', 'ts'),
-     ('ㄘ', 'tsʰ'),
-     ('ㄙ', 's'),
-     ('ㄚ', 'a'),
-     ('ㄛ', 'o'),
-     ('ㄜ', 'ɤ'),
-     ('ㄝ', 'ɛ'),
-     ('ㄞ', 'aɪ'),
-     ('ㄟ', 'eɪ'),
-     ('ㄠ', 'ɑʊ'),
-     ('ㄡ', 'oʊ'),
-     ('ㄧㄢ', 'jɛn'),
-     ('ㄩㄢ', 'yæn'),
-     ('ㄢ', 'an'),
-     ('ㄧㄣ', 'in'),
-     ('ㄩㄣ', 'yn'),
-     ('ㄣ', 'ən'),
-     ('ㄤ', 'ɑŋ'),
-     ('ㄧㄥ', 'iŋ'),
-     ('ㄨㄥ', 'ʊŋ'),
-     ('ㄩㄥ', 'jʊŋ'),
-     ('ㄥ', 'ɤŋ'),
-     ('ㄦ', 'əɻ'),
-     ('ㄧ', 'i'),
-     ('ㄨ', 'u'),
-     ('ㄩ', 'y'),
-     ('ˉ', '˥'),
-     ('ˊ', '˧˥'),
-     ('ˇ', '˨˩˦'),
-     ('ˋ', '˥˩'),
-     ('˙', ''),
-     (',', ','),
-     ('。', '.'),
-     ('!', '!'),
-     ('?', '?'),
-     ('—', '-')
- ]]
-
-
- def number_to_chinese(text):
-     numbers = re.findall(r'\d+(?:\.?\d+)?', text)
-     for number in numbers:
-         text = text.replace(number, cn2an.an2cn(number), 1)
-     return text
-
-
- def chinese_to_bopomofo(text):
-     text = text.replace('、', ',').replace(';', ',').replace(':', ',')
-     words = jieba.lcut(text, cut_all=False)
-     text = ''
-     for word in words:
-         bopomofos = lazy_pinyin(word, BOPOMOFO)
-         if not re.search('[\u4e00-\u9fff]', word):
-             text += word
-             continue
-         for i in range(len(bopomofos)):
-             bopomofos[i] = re.sub(r'([\u3105-\u3129])$', r'\1ˉ', bopomofos[i])
-         if text != '':
-             text += ' '
-         text += ''.join(bopomofos)
-     return text
-
-
- def latin_to_bopomofo(text):
-     for regex, replacement in _latin_to_bopomofo:
-         text = re.sub(regex, replacement, text)
-     return text
-
-
- def bopomofo_to_romaji(text):
-     for regex, replacement in _bopomofo_to_romaji:
-         text = re.sub(regex, replacement, text)
-     return text
-
-
- def bopomofo_to_ipa(text):
-     for regex, replacement in _bopomofo_to_ipa:
-         text = re.sub(regex, replacement, text)
-     return text
-
-
- def bopomofo_to_ipa2(text):
-     for regex, replacement in _bopomofo_to_ipa2:
-         text = re.sub(regex, replacement, text)
-     return text
-
-
- def chinese_to_romaji(text):
-     text = number_to_chinese(text)
-     text = chinese_to_bopomofo(text)
-     text = latin_to_bopomofo(text)
-     text = bopomofo_to_romaji(text)
-     text = re.sub('i([aoe])', r'y\1', text)
-     text = re.sub('u([aoəe])', r'w\1', text)
-     text = re.sub('([ʦsɹ]`[⁼ʰ]?)([→↓↑ ]+|$)',
-                   r'\1ɹ`\2', text).replace('ɻ', 'ɹ`')
-     text = re.sub('([ʦs][⁼ʰ]?)([→↓↑ ]+|$)', r'\1ɹ\2', text)
-     return text
-
-
- def chinese_to_lazy_ipa(text):
-     text = chinese_to_romaji(text)
-     for regex, replacement in _romaji_to_ipa:
-         text = re.sub(regex, replacement, text)
-     return text
-
-
- def chinese_to_ipa(text):
-     text = number_to_chinese(text)
-     text = chinese_to_bopomofo(text)
-     text = latin_to_bopomofo(text)
-     text = bopomofo_to_ipa(text)
-     text = re.sub('i([aoe])', r'j\1', text)
-     text = re.sub('u([aoəe])', r'w\1', text)
-     text = re.sub('([sɹ]`[⁼ʰ]?)([→↓↑ ]+|$)',
-                   r'\1ɹ`\2', text).replace('ɻ', 'ɹ`')
-     text = re.sub('([s][⁼ʰ]?)([→↓↑ ]+|$)', r'\1ɹ\2', text)
-     return text
-
-
- def chinese_to_ipa2(text):
-     text = number_to_chinese(text)
-     text = chinese_to_bopomofo(text)
-     text = latin_to_bopomofo(text)
-     text = bopomofo_to_ipa2(text)
-     text = re.sub(r'i([aoe])', r'j\1', text)
-     text = re.sub(r'u([aoəe])', r'w\1', text)
-     text = re.sub(r'([ʂɹ]ʰ?)([˩˨˧˦˥ ]+|$)', r'\1ʅ\2', text)
-     text = re.sub(r'(sʰ?)([˩˨˧˦˥ ]+|$)', r'\1ɿ\2', text)
-     return text
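Every conversion table in the deleted mandarin.py module uses the same technique: an ordered list of (compiled regex, replacement) pairs applied one after another with `re.sub`, where multi-character patterns must precede their single-character prefixes. A minimal self-contained sketch of that pattern (the table below is illustrative, not the module's real data):

```python
import re

# Ordered (pattern, replacement) pairs; order matters, because the
# longer sequence must be rewritten before its one-character prefix.
_RULES = [(re.compile(p), r) for p, r in [
    ('abc', 'X'),   # longer match first, as with ('ㄅㄛ', ...) before ('ㄅ', ...)
    ('a', 'Y'),
]]


def apply_rules(text: str) -> str:
    """Run every substitution over the whole string, in table order."""
    for regex, replacement in _RULES:
        text = regex.sub(replacement, text)
    return text
```

The deleted `latin_to_bopomofo`, `bopomofo_to_romaji`, `bopomofo_to_ipa`, and `bopomofo_to_ipa2` functions above differ only in which table they iterate over.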
spaces/AlexWang/lama/saicinpainting/training/modules/pix2pixhd.py DELETED
@@ -1,669 +0,0 @@
1
- # original: https://github.com/NVIDIA/pix2pixHD/blob/master/models/networks.py
2
- import collections
3
- from functools import partial
4
- import functools
5
- import logging
6
- from collections import defaultdict
7
-
8
- import numpy as np
9
- import torch.nn as nn
10
-
11
- from saicinpainting.training.modules.base import BaseDiscriminator, deconv_factory, get_conv_block_ctor, get_norm_layer, get_activation
12
- from saicinpainting.training.modules.ffc import FFCResnetBlock
13
- from saicinpainting.training.modules.multidilated_conv import MultidilatedConv
14
-
15
- class DotDict(defaultdict):
16
- # https://stackoverflow.com/questions/2352181/how-to-use-a-dot-to-access-members-of-dictionary
17
- """dot.notation access to dictionary attributes"""
18
- __getattr__ = defaultdict.get
19
- __setattr__ = defaultdict.__setitem__
20
- __delattr__ = defaultdict.__delitem__
21
-
22
- class Identity(nn.Module):
23
- def __init__(self):
24
- super().__init__()
25
-
26
- def forward(self, x):
27
- return x
28
-
29
-
30
- class ResnetBlock(nn.Module):
31
- def __init__(self, dim, padding_type, norm_layer, activation=nn.ReLU(True), use_dropout=False, conv_kind='default',
32
- dilation=1, in_dim=None, groups=1, second_dilation=None):
33
- super(ResnetBlock, self).__init__()
34
- self.in_dim = in_dim
35
- self.dim = dim
36
- if second_dilation is None:
37
- second_dilation = dilation
38
- self.conv_block = self.build_conv_block(dim, padding_type, norm_layer, activation, use_dropout,
39
- conv_kind=conv_kind, dilation=dilation, in_dim=in_dim, groups=groups,
40
- second_dilation=second_dilation)
41
-
42
- if self.in_dim is not None:
43
- self.input_conv = nn.Conv2d(in_dim, dim, 1)
44
-
45
- self.out_channnels = dim
46
-
47
- def build_conv_block(self, dim, padding_type, norm_layer, activation, use_dropout, conv_kind='default',
48
- dilation=1, in_dim=None, groups=1, second_dilation=1):
49
- conv_layer = get_conv_block_ctor(conv_kind)
50
-
51
- conv_block = []
52
- p = 0
53
- if padding_type == 'reflect':
54
- conv_block += [nn.ReflectionPad2d(dilation)]
55
- elif padding_type == 'replicate':
56
- conv_block += [nn.ReplicationPad2d(dilation)]
57
- elif padding_type == 'zero':
58
- p = dilation
59
- else:
60
- raise NotImplementedError('padding [%s] is not implemented' % padding_type)
61
-
62
- if in_dim is None:
63
- in_dim = dim
64
-
65
- conv_block += [conv_layer(in_dim, dim, kernel_size=3, padding=p, dilation=dilation),
66
- norm_layer(dim),
67
- activation]
68
- if use_dropout:
69
- conv_block += [nn.Dropout(0.5)]
70
-
71
- p = 0
72
- if padding_type == 'reflect':
73
- conv_block += [nn.ReflectionPad2d(second_dilation)]
74
- elif padding_type == 'replicate':
75
- conv_block += [nn.ReplicationPad2d(second_dilation)]
76
- elif padding_type == 'zero':
77
- p = second_dilation
78
- else:
79
- raise NotImplementedError('padding [%s] is not implemented' % padding_type)
80
- conv_block += [conv_layer(dim, dim, kernel_size=3, padding=p, dilation=second_dilation, groups=groups),
81
- norm_layer(dim)]
82
-
83
- return nn.Sequential(*conv_block)
84
-
85
- def forward(self, x):
86
- x_before = x
87
- if self.in_dim is not None:
88
- x = self.input_conv(x)
89
- out = x + self.conv_block(x_before)
90
- return out
91
-
92
- class ResnetBlock5x5(nn.Module):
93
- def __init__(self, dim, padding_type, norm_layer, activation=nn.ReLU(True), use_dropout=False, conv_kind='default',
94
- dilation=1, in_dim=None, groups=1, second_dilation=None):
95
- super(ResnetBlock5x5, self).__init__()
96
- self.in_dim = in_dim
97
- self.dim = dim
98
- if second_dilation is None:
99
- second_dilation = dilation
100
- self.conv_block = self.build_conv_block(dim, padding_type, norm_layer, activation, use_dropout,
101
- conv_kind=conv_kind, dilation=dilation, in_dim=in_dim, groups=groups,
102
- second_dilation=second_dilation)
103
-
104
- if self.in_dim is not None:
105
- self.input_conv = nn.Conv2d(in_dim, dim, 1)
106
-
107
- self.out_channnels = dim
108
-
109
- def build_conv_block(self, dim, padding_type, norm_layer, activation, use_dropout, conv_kind='default',
110
- dilation=1, in_dim=None, groups=1, second_dilation=1):
111
- conv_layer = get_conv_block_ctor(conv_kind)
112
-
113
- conv_block = []
114
- p = 0
115
- if padding_type == 'reflect':
116
- conv_block += [nn.ReflectionPad2d(dilation * 2)]
117
- elif padding_type == 'replicate':
118
- conv_block += [nn.ReplicationPad2d(dilation * 2)]
119
- elif padding_type == 'zero':
120
- p = dilation * 2
121
- else:
122
- raise NotImplementedError('padding [%s] is not implemented' % padding_type)
123
-
124
- if in_dim is None:
125
- in_dim = dim
126
-
127
- conv_block += [conv_layer(in_dim, dim, kernel_size=5, padding=p, dilation=dilation),
128
- norm_layer(dim),
129
- activation]
130
- if use_dropout:
131
- conv_block += [nn.Dropout(0.5)]
132
-
133
- p = 0
134
- if padding_type == 'reflect':
135
- conv_block += [nn.ReflectionPad2d(second_dilation * 2)]
136
- elif padding_type == 'replicate':
137
- conv_block += [nn.ReplicationPad2d(second_dilation * 2)]
138
- elif padding_type == 'zero':
139
- p = second_dilation * 2
140
- else:
141
- raise NotImplementedError('padding [%s] is not implemented' % padding_type)
142
- conv_block += [conv_layer(dim, dim, kernel_size=5, padding=p, dilation=second_dilation, groups=groups),
143
- norm_layer(dim)]
144
-
145
- return nn.Sequential(*conv_block)
146
-
147
- def forward(self, x):
148
- x_before = x
149
- if self.in_dim is not None:
150
- x = self.input_conv(x)
151
- out = x + self.conv_block(x_before)
152
- return out
153
-
154
-
155
- class MultidilatedResnetBlock(nn.Module):
156
- def __init__(self, dim, padding_type, conv_layer, norm_layer, activation=nn.ReLU(True), use_dropout=False):
157
- super().__init__()
158
- self.conv_block = self.build_conv_block(dim, padding_type, conv_layer, norm_layer, activation, use_dropout)
159
-
160
- def build_conv_block(self, dim, padding_type, conv_layer, norm_layer, activation, use_dropout, dilation=1):
161
- conv_block = []
162
- conv_block += [conv_layer(dim, dim, kernel_size=3, padding_mode=padding_type),
163
- norm_layer(dim),
164
- activation]
165
- if use_dropout:
166
- conv_block += [nn.Dropout(0.5)]
167
-
168
- conv_block += [conv_layer(dim, dim, kernel_size=3, padding_mode=padding_type),
169
- norm_layer(dim)]
170
-
171
- return nn.Sequential(*conv_block)
172
-
173
- def forward(self, x):
174
- out = x + self.conv_block(x)
175
- return out
176
-
177
-
178
- class MultiDilatedGlobalGenerator(nn.Module):
179
- def __init__(self, input_nc, output_nc, ngf=64, n_downsampling=3,
180
- n_blocks=3, norm_layer=nn.BatchNorm2d,
181
- padding_type='reflect', conv_kind='default',
182
- deconv_kind='convtranspose', activation=nn.ReLU(True),
183
- up_norm_layer=nn.BatchNorm2d, affine=None, up_activation=nn.ReLU(True),
184
- add_out_act=True, max_features=1024, multidilation_kwargs={},
185
- ffc_positions=None, ffc_kwargs={}):
186
- assert (n_blocks >= 0)
187
- super().__init__()
188
-
189
- conv_layer = get_conv_block_ctor(conv_kind)
190
- resnet_conv_layer = functools.partial(get_conv_block_ctor('multidilated'), **multidilation_kwargs)
191
- norm_layer = get_norm_layer(norm_layer)
192
- if affine is not None:
193
- norm_layer = partial(norm_layer, affine=affine)
194
- up_norm_layer = get_norm_layer(up_norm_layer)
195
- if affine is not None:
196
- up_norm_layer = partial(up_norm_layer, affine=affine)
197
-
198
- model = [nn.ReflectionPad2d(3),
199
- conv_layer(input_nc, ngf, kernel_size=7, padding=0),
200
- norm_layer(ngf),
201
- activation]
202
-
203
- identity = Identity()
204
- ### downsample
205
- for i in range(n_downsampling):
206
- mult = 2 ** i
207
-
208
- model += [conv_layer(min(max_features, ngf * mult),
209
- min(max_features, ngf * mult * 2),
210
- kernel_size=3, stride=2, padding=1),
211
- norm_layer(min(max_features, ngf * mult * 2)),
212
- activation]
213
-
214
- mult = 2 ** n_downsampling
215
- feats_num_bottleneck = min(max_features, ngf * mult)
216
-
217
- ### resnet blocks
218
- for i in range(n_blocks):
219
- if ffc_positions is not None and i in ffc_positions:
220
- model += [FFCResnetBlock(feats_num_bottleneck, padding_type, norm_layer, activation_layer=nn.ReLU,
221
- inline=True, **ffc_kwargs)]
222
- model += [MultidilatedResnetBlock(feats_num_bottleneck, padding_type=padding_type,
223
- conv_layer=resnet_conv_layer, activation=activation,
224
- norm_layer=norm_layer)]
225
-
226
- ### upsample
227
- for i in range(n_downsampling):
228
- mult = 2 ** (n_downsampling - i)
229
- model += deconv_factory(deconv_kind, ngf, mult, up_norm_layer, up_activation, max_features)
230
- model += [nn.ReflectionPad2d(3),
231
- nn.Conv2d(ngf, output_nc, kernel_size=7, padding=0)]
232
- if add_out_act:
233
- model.append(get_activation('tanh' if add_out_act is True else add_out_act))
234
- self.model = nn.Sequential(*model)
235
-
236
- def forward(self, input):
237
- return self.model(input)
238
-
239
- class ConfigGlobalGenerator(nn.Module):
240
- def __init__(self, input_nc, output_nc, ngf=64, n_downsampling=3,
241
- n_blocks=3, norm_layer=nn.BatchNorm2d,
242
- padding_type='reflect', conv_kind='default',
243
- deconv_kind='convtranspose', activation=nn.ReLU(True),
244
- up_norm_layer=nn.BatchNorm2d, affine=None, up_activation=nn.ReLU(True),
245
- add_out_act=True, max_features=1024,
246
- manual_block_spec=[],
247
- resnet_block_kind='multidilatedresnetblock',
248
- resnet_conv_kind='multidilated',
249
- resnet_dilation=1,
250
- multidilation_kwargs={}):
251
- assert (n_blocks >= 0)
252
- super().__init__()
253
-
254
- conv_layer = get_conv_block_ctor(conv_kind)
255
- resnet_conv_layer = functools.partial(get_conv_block_ctor(resnet_conv_kind), **multidilation_kwargs)
256
- norm_layer = get_norm_layer(norm_layer)
257
- if affine is not None:
258
- norm_layer = partial(norm_layer, affine=affine)
259
- up_norm_layer = get_norm_layer(up_norm_layer)
260
- if affine is not None:
261
- up_norm_layer = partial(up_norm_layer, affine=affine)
262
-
263
- model = [nn.ReflectionPad2d(3),
264
- conv_layer(input_nc, ngf, kernel_size=7, padding=0),
265
- norm_layer(ngf),
266
- activation]
267
-
268
- identity = Identity()
269
-
270
- ### downsample
271
- for i in range(n_downsampling):
272
- mult = 2 ** i
273
- model += [conv_layer(min(max_features, ngf * mult),
274
- min(max_features, ngf * mult * 2),
275
- kernel_size=3, stride=2, padding=1),
276
- norm_layer(min(max_features, ngf * mult * 2)),
277
- activation]
278
-
279
- mult = 2 ** n_downsampling
280
- feats_num_bottleneck = min(max_features, ngf * mult)
281
-
282
- if len(manual_block_spec) == 0:
283
- manual_block_spec = [
284
- DotDict(lambda : None, {
285
- 'n_blocks': n_blocks,
286
- 'use_default': True})
287
- ]
288
-
289
- ### resnet blocks
290
- for block_spec in manual_block_spec:
291
- def make_and_add_blocks(model, block_spec):
292
- block_spec = DotDict(lambda : None, block_spec)
293
- if not block_spec.use_default:
294
- resnet_conv_layer = functools.partial(get_conv_block_ctor(block_spec.resnet_conv_kind), **block_spec.multidilation_kwargs)
295
- resnet_conv_kind = block_spec.resnet_conv_kind
296
- resnet_block_kind = block_spec.resnet_block_kind
297
- if block_spec.resnet_dilation is not None:
298
- resnet_dilation = block_spec.resnet_dilation
299
- for i in range(block_spec.n_blocks):
300
- if resnet_block_kind == "multidilatedresnetblock":
301
- model += [MultidilatedResnetBlock(feats_num_bottleneck, padding_type=padding_type,
302
- conv_layer=resnet_conv_layer, activation=activation,
303
- norm_layer=norm_layer)]
304
- if resnet_block_kind == "resnetblock":
305
- model += [ResnetBlock(ngf * mult, padding_type=padding_type, activation=activation, norm_layer=norm_layer,
306
- conv_kind=resnet_conv_kind)]
307
- if resnet_block_kind == "resnetblock5x5":
308
- model += [ResnetBlock5x5(ngf * mult, padding_type=padding_type, activation=activation, norm_layer=norm_layer,
309
- conv_kind=resnet_conv_kind)]
310
- if resnet_block_kind == "resnetblockdwdil":
311
- model += [ResnetBlock(ngf * mult, padding_type=padding_type, activation=activation, norm_layer=norm_layer,
312
- conv_kind=resnet_conv_kind, dilation=resnet_dilation, second_dilation=resnet_dilation)]
313
- make_and_add_blocks(model, block_spec)
314
-
315
- ### upsample
316
- for i in range(n_downsampling):
317
- mult = 2 ** (n_downsampling - i)
318
- model += deconv_factory(deconv_kind, ngf, mult, up_norm_layer, up_activation, max_features)
319
- model += [nn.ReflectionPad2d(3),
320
- nn.Conv2d(ngf, output_nc, kernel_size=7, padding=0)]
321
- if add_out_act:
322
- model.append(get_activation('tanh' if add_out_act is True else add_out_act))
323
- self.model = nn.Sequential(*model)
324
-
325
- def forward(self, input):
326
- return self.model(input)
327
-
328
-
329
- def make_dil_blocks(dilated_blocks_n, dilation_block_kind, dilated_block_kwargs):
330
- blocks = []
331
- for i in range(dilated_blocks_n):
332
- if dilation_block_kind == 'simple':
333
- blocks.append(ResnetBlock(**dilated_block_kwargs, dilation=2 ** (i + 1)))
334
- elif dilation_block_kind == 'multi':
335
- blocks.append(MultidilatedResnetBlock(**dilated_block_kwargs))
336
- else:
337
- raise ValueError(f'dilation_block_kind could not be "{dilation_block_kind}"')
338
- return blocks
339
-
340
-
341
- class GlobalGenerator(nn.Module):
342
- def __init__(self, input_nc, output_nc, ngf=64, n_downsampling=3, n_blocks=9, norm_layer=nn.BatchNorm2d,
343
- padding_type='reflect', conv_kind='default', activation=nn.ReLU(True),
344
- up_norm_layer=nn.BatchNorm2d, affine=None,
345
- up_activation=nn.ReLU(True), dilated_blocks_n=0, dilated_blocks_n_start=0,
346
- dilated_blocks_n_middle=0,
347
- add_out_act=True,
348
- max_features=1024, is_resblock_depthwise=False,
349
- ffc_positions=None, ffc_kwargs={}, dilation=1, second_dilation=None,
350
- dilation_block_kind='simple', multidilation_kwargs={}):
351
- assert (n_blocks >= 0)
352
- super().__init__()
353
-
354
- conv_layer = get_conv_block_ctor(conv_kind)
355
- norm_layer = get_norm_layer(norm_layer)
356
- if affine is not None:
357
- norm_layer = partial(norm_layer, affine=affine)
358
- up_norm_layer = get_norm_layer(up_norm_layer)
359
- if affine is not None:
360
- up_norm_layer = partial(up_norm_layer, affine=affine)
361
-
362
- if ffc_positions is not None:
363
- ffc_positions = collections.Counter(ffc_positions)
364
-
365
- model = [nn.ReflectionPad2d(3),
366
- conv_layer(input_nc, ngf, kernel_size=7, padding=0),
367
- norm_layer(ngf),
368
- activation]
369
-
370
- identity = Identity()
371
- ### downsample
372
- for i in range(n_downsampling):
373
- mult = 2 ** i
374
-
375
- model += [conv_layer(min(max_features, ngf * mult),
376
- min(max_features, ngf * mult * 2),
377
- kernel_size=3, stride=2, padding=1),
378
- norm_layer(min(max_features, ngf * mult * 2)),
379
- activation]
380
-
381
- mult = 2 ** n_downsampling
382
- feats_num_bottleneck = min(max_features, ngf * mult)
383
-
384
- dilated_block_kwargs = dict(dim=feats_num_bottleneck, padding_type=padding_type,
385
- activation=activation, norm_layer=norm_layer)
386
- if dilation_block_kind == 'simple':
387
- dilated_block_kwargs['conv_kind'] = conv_kind
388
- elif dilation_block_kind == 'multi':
389
- dilated_block_kwargs['conv_layer'] = functools.partial(
390
- get_conv_block_ctor('multidilated'), **multidilation_kwargs)
391
-
392
- # dilated blocks at the start of the bottleneck sausage
393
- if dilated_blocks_n_start is not None and dilated_blocks_n_start > 0:
394
- model += make_dil_blocks(dilated_blocks_n_start, dilation_block_kind, dilated_block_kwargs)
395
-
396
- # resnet blocks
397
- for i in range(n_blocks):
398
- # dilated blocks at the middle of the bottleneck sausage
399
- if i == n_blocks // 2 and dilated_blocks_n_middle is not None and dilated_blocks_n_middle > 0:
400
- model += make_dil_blocks(dilated_blocks_n_middle, dilation_block_kind, dilated_block_kwargs)
401
-
402
- if ffc_positions is not None and i in ffc_positions:
403
- for _ in range(ffc_positions[i]): # same position can occur more than once
404
- model += [FFCResnetBlock(feats_num_bottleneck, padding_type, norm_layer, activation_layer=nn.ReLU,
405
- inline=True, **ffc_kwargs)]
406
-
407
- if is_resblock_depthwise:
408
- resblock_groups = feats_num_bottleneck
409
- else:
410
- resblock_groups = 1
411
-
412
- model += [ResnetBlock(feats_num_bottleneck, padding_type=padding_type, activation=activation,
413
- norm_layer=norm_layer, conv_kind=conv_kind, groups=resblock_groups,
414
- dilation=dilation, second_dilation=second_dilation)]
415
-
416
-
417
- # dilated blocks at the end of the bottleneck sausage
418
- if dilated_blocks_n is not None and dilated_blocks_n > 0:
419
- model += make_dil_blocks(dilated_blocks_n, dilation_block_kind, dilated_block_kwargs)
420
-
421
- # upsample
422
- for i in range(n_downsampling):
423
- mult = 2 ** (n_downsampling - i)
424
- model += [nn.ConvTranspose2d(min(max_features, ngf * mult),
425
- min(max_features, int(ngf * mult / 2)),
426
- kernel_size=3, stride=2, padding=1, output_padding=1),
427
- up_norm_layer(min(max_features, int(ngf * mult / 2))),
428
- up_activation]
429
- model += [nn.ReflectionPad2d(3),
430
- nn.Conv2d(ngf, output_nc, kernel_size=7, padding=0)]
431
- if add_out_act:
432
- model.append(get_activation('tanh' if add_out_act is True else add_out_act))
433
- self.model = nn.Sequential(*model)
434
-
435
- def forward(self, input):
436
- return self.model(input)
437
-
438
-
439
- class GlobalGeneratorGated(GlobalGenerator):
440
- def __init__(self, *args, **kwargs):
441
- real_kwargs=dict(
442
- conv_kind='gated_bn_relu',
443
- activation=nn.Identity(),
444
- norm_layer=nn.Identity
445
- )
446
- real_kwargs.update(kwargs)
447
- super().__init__(*args, **real_kwargs)
448
-
449
-
450
- class GlobalGeneratorFromSuperChannels(nn.Module):
451
- def __init__(self, input_nc, output_nc, n_downsampling, n_blocks, super_channels, norm_layer="bn", padding_type='reflect', add_out_act=True):
452
- super().__init__()
453
- self.n_downsampling = n_downsampling
454
- norm_layer = get_norm_layer(norm_layer)
455
- if type(norm_layer) == functools.partial:
456
- use_bias = (norm_layer.func == nn.InstanceNorm2d)
457
- else:
458
- use_bias = (norm_layer == nn.InstanceNorm2d)
459
-
460
-         channels = self.convert_super_channels(super_channels)
-         self.channels = channels
-
-         model = [nn.ReflectionPad2d(3),
-                  nn.Conv2d(input_nc, channels[0], kernel_size=7, padding=0, bias=use_bias),
-                  norm_layer(channels[0]),
-                  nn.ReLU(True)]
-
-         for i in range(n_downsampling):  # add downsampling layers
-             mult = 2 ** i
-             model += [nn.Conv2d(channels[0+i], channels[1+i], kernel_size=3, stride=2, padding=1, bias=use_bias),
-                       norm_layer(channels[1+i]),
-                       nn.ReLU(True)]
-
-         mult = 2 ** n_downsampling
-
-         n_blocks1 = n_blocks // 3
-         n_blocks2 = n_blocks1
-         n_blocks3 = n_blocks - n_blocks1 - n_blocks2
-
-         for i in range(n_blocks1):
-             c = n_downsampling
-             dim = channels[c]
-             model += [ResnetBlock(dim, padding_type=padding_type, norm_layer=norm_layer)]
-
-         for i in range(n_blocks2):
-             c = n_downsampling+1
-             dim = channels[c]
-             kwargs = {}
-             if i == 0:
-                 kwargs = {"in_dim": channels[c-1]}
-             model += [ResnetBlock(dim, padding_type=padding_type, norm_layer=norm_layer, **kwargs)]
-
-         for i in range(n_blocks3):
-             c = n_downsampling+2
-             dim = channels[c]
-             kwargs = {}
-             if i == 0:
-                 kwargs = {"in_dim": channels[c-1]}
-             model += [ResnetBlock(dim, padding_type=padding_type, norm_layer=norm_layer, **kwargs)]
-
-         for i in range(n_downsampling):  # add upsampling layers
-             mult = 2 ** (n_downsampling - i)
-             model += [nn.ConvTranspose2d(channels[n_downsampling+3+i],
-                                          channels[n_downsampling+3+i+1],
-                                          kernel_size=3, stride=2,
-                                          padding=1, output_padding=1,
-                                          bias=use_bias),
-                       norm_layer(channels[n_downsampling+3+i+1]),
-                       nn.ReLU(True)]
-         model += [nn.ReflectionPad2d(3)]
-         model += [nn.Conv2d(channels[2*n_downsampling+3], output_nc, kernel_size=7, padding=0)]
-
-         if add_out_act:
-             model.append(get_activation('tanh' if add_out_act is True else add_out_act))
-         self.model = nn.Sequential(*model)
-
-     def convert_super_channels(self, super_channels):
-         n_downsampling = self.n_downsampling
-         result = []
-         cnt = 0
-
-         if n_downsampling == 2:
-             N1 = 10
-         elif n_downsampling == 3:
-             N1 = 13
-         else:
-             raise NotImplementedError
-
-         for i in range(0, N1):
-             if i in [1, 4, 7, 10]:
-                 channel = super_channels[cnt] * (2 ** cnt)
-                 config = {'channel': channel}
-                 result.append(channel)
-                 logging.info(f"Downsample channels {result[-1]}")
-                 cnt += 1
-
-         for i in range(3):
-             for counter, j in enumerate(range(N1 + i * 3, N1 + 3 + i * 3)):
-                 if len(super_channels) == 6:
-                     channel = super_channels[3] * 4
-                 else:
-                     channel = super_channels[i + 3] * 4
-                 config = {'channel': channel}
-                 if counter == 0:
-                     result.append(channel)
-                     logging.info(f"Bottleneck channels {result[-1]}")
-         cnt = 2
-
-         for i in range(N1+9, N1+21):
-             if i in [22, 25, 28]:
-                 cnt -= 1
-                 if len(super_channels) == 6:
-                     channel = super_channels[5 - cnt] * (2 ** cnt)
-                 else:
-                     channel = super_channels[7 - cnt] * (2 ** cnt)
-                 result.append(int(channel))
-                 logging.info(f"Upsample channels {result[-1]}")
-         return result
-
-     def forward(self, input):
-         return self.model(input)
-
-
- # Defines the PatchGAN discriminator with the specified arguments.
- class NLayerDiscriminator(BaseDiscriminator):
-     def __init__(self, input_nc, ndf=64, n_layers=3, norm_layer=nn.BatchNorm2d,):
-         super().__init__()
-         self.n_layers = n_layers
-
-         kw = 4
-         padw = int(np.ceil((kw-1.0)/2))
-         sequence = [[nn.Conv2d(input_nc, ndf, kernel_size=kw, stride=2, padding=padw),
-                      nn.LeakyReLU(0.2, True)]]
-
-         nf = ndf
-         for n in range(1, n_layers):
-             nf_prev = nf
-             nf = min(nf * 2, 512)
-
-             cur_model = []
-             cur_model += [
-                 nn.Conv2d(nf_prev, nf, kernel_size=kw, stride=2, padding=padw),
-                 norm_layer(nf),
-                 nn.LeakyReLU(0.2, True)
-             ]
-             sequence.append(cur_model)
-
-         nf_prev = nf
-         nf = min(nf * 2, 512)
-
-         cur_model = []
-         cur_model += [
-             nn.Conv2d(nf_prev, nf, kernel_size=kw, stride=1, padding=padw),
-             norm_layer(nf),
-             nn.LeakyReLU(0.2, True)
-         ]
-         sequence.append(cur_model)
-
-         sequence += [[nn.Conv2d(nf, 1, kernel_size=kw, stride=1, padding=padw)]]
-
-         for n in range(len(sequence)):
-             setattr(self, 'model'+str(n), nn.Sequential(*sequence[n]))
-
-     def get_all_activations(self, x):
-         res = [x]
-         for n in range(self.n_layers + 2):
-             model = getattr(self, 'model' + str(n))
-             res.append(model(res[-1]))
-         return res[1:]
-
-     def forward(self, x):
-         act = self.get_all_activations(x)
-         return act[-1], act[:-1]
-
-
- class MultidilatedNLayerDiscriminator(BaseDiscriminator):
-     def __init__(self, input_nc, ndf=64, n_layers=3, norm_layer=nn.BatchNorm2d, multidilation_kwargs={}):
-         super().__init__()
-         self.n_layers = n_layers
-
-         kw = 4
-         padw = int(np.ceil((kw-1.0)/2))
-         sequence = [[nn.Conv2d(input_nc, ndf, kernel_size=kw, stride=2, padding=padw),
-                      nn.LeakyReLU(0.2, True)]]
-
-         nf = ndf
-         for n in range(1, n_layers):
-             nf_prev = nf
-             nf = min(nf * 2, 512)
-
-             cur_model = []
-             cur_model += [
-                 MultidilatedConv(nf_prev, nf, kernel_size=kw, stride=2, padding=[2, 3], **multidilation_kwargs),
-                 norm_layer(nf),
-                 nn.LeakyReLU(0.2, True)
-             ]
-             sequence.append(cur_model)
-
-         nf_prev = nf
-         nf = min(nf * 2, 512)
-
-         cur_model = []
-         cur_model += [
-             nn.Conv2d(nf_prev, nf, kernel_size=kw, stride=1, padding=padw),
-             norm_layer(nf),
-             nn.LeakyReLU(0.2, True)
-         ]
-         sequence.append(cur_model)
-
-         sequence += [[nn.Conv2d(nf, 1, kernel_size=kw, stride=1, padding=padw)]]
-
-         for n in range(len(sequence)):
-             setattr(self, 'model'+str(n), nn.Sequential(*sequence[n]))
-
-     def get_all_activations(self, x):
-         res = [x]
-         for n in range(self.n_layers + 2):
-             model = getattr(self, 'model' + str(n))
-             res.append(model(res[-1]))
-         return res[1:]
-
-     def forward(self, x):
-         act = self.get_all_activations(x)
-         return act[-1], act[:-1]
-
-
- class NLayerDiscriminatorAsGen(NLayerDiscriminator):
-     def forward(self, x):
-         return super().forward(x)[0]
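The deleted `NLayerDiscriminator` above is the standard PatchGAN: a stack of strided convolutions that returns a map of per-patch real/fake logits plus every intermediate activation (the latter are what a feature-matching loss consumes). A minimal, self-contained sketch of the same pattern is below; `nn.Module` stands in for the repo's `BaseDiscriminator` (an assumption, since that base class is defined elsewhere in the deleted file).

```python
import numpy as np
import torch
import torch.nn as nn

class PatchDiscriminator(nn.Module):
    """Minimal PatchGAN mirroring the deleted NLayerDiscriminator."""

    def __init__(self, input_nc=3, ndf=64, n_layers=3, norm_layer=nn.BatchNorm2d):
        super().__init__()
        self.n_layers = n_layers
        kw = 4
        padw = int(np.ceil((kw - 1.0) / 2))  # -> 2
        # Initial block: no normalization, as in the original.
        sequence = [[nn.Conv2d(input_nc, ndf, kernel_size=kw, stride=2, padding=padw),
                     nn.LeakyReLU(0.2, True)]]
        nf = ndf
        for _ in range(1, n_layers):  # strided blocks, channels capped at 512
            nf_prev, nf = nf, min(nf * 2, 512)
            sequence.append([nn.Conv2d(nf_prev, nf, kernel_size=kw, stride=2, padding=padw),
                             norm_layer(nf), nn.LeakyReLU(0.2, True)])
        nf_prev, nf = nf, min(nf * 2, 512)
        # One stride-1 block, then the 1-channel patch-logit head.
        sequence.append([nn.Conv2d(nf_prev, nf, kernel_size=kw, stride=1, padding=padw),
                         norm_layer(nf), nn.LeakyReLU(0.2, True)])
        sequence.append([nn.Conv2d(nf, 1, kernel_size=kw, stride=1, padding=padw)])
        for n, block in enumerate(sequence):
            setattr(self, 'model' + str(n), nn.Sequential(*block))

    def forward(self, x):
        # Collect every intermediate activation; the last is the logit map.
        acts = [x]
        for n in range(self.n_layers + 2):
            acts.append(getattr(self, 'model' + str(n))(acts[-1]))
        return acts[-1], acts[1:-1]

disc = PatchDiscriminator()
logits, feats = disc(torch.randn(2, 3, 64, 64))
print(logits.shape, len(feats))  # torch.Size([2, 1, 11, 11]) 4
```

Each logit scores the realism of one receptive-field patch of the input, which is why the output is a spatial map rather than a single scalar.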
 
spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/research_projects/onnxruntime/unconditional_image_generation/train_unconditional.py DELETED
@@ -1,700 +0,0 @@
1
- import argparse
2
- import inspect
3
- import logging
4
- import math
5
- import os
6
- from pathlib import Path
7
- from typing import Optional
8
-
9
- import accelerate
10
- import datasets
11
- import torch
12
- import torch.nn.functional as F
13
- from accelerate import Accelerator
14
- from accelerate.logging import get_logger
15
- from accelerate.utils import ProjectConfiguration
16
- from datasets import load_dataset
17
- from huggingface_hub import HfFolder, Repository, create_repo, whoami
18
- from onnxruntime.training.optim.fp16_optimizer import FP16_Optimizer as ORT_FP16_Optimizer
19
- from onnxruntime.training.ortmodule import ORTModule
20
- from packaging import version
21
- from torchvision import transforms
22
- from tqdm.auto import tqdm
23
-
24
- import diffusers
25
- from diffusers import DDPMPipeline, DDPMScheduler, UNet2DModel
26
- from diffusers.optimization import get_scheduler
27
- from diffusers.training_utils import EMAModel
28
- from diffusers.utils import check_min_version, is_accelerate_version, is_tensorboard_available, is_wandb_available
29
- from diffusers.utils.import_utils import is_xformers_available
30
-
31
-
32
- # Will error if the minimal version of diffusers is not installed. Remove at your own risks.
33
- check_min_version("0.17.0.dev0")
34
-
35
- logger = get_logger(__name__, log_level="INFO")
36
-
37
-
38
- def _extract_into_tensor(arr, timesteps, broadcast_shape):
39
- """
40
- Extract values from a 1-D numpy array for a batch of indices.
41
-
42
- :param arr: the 1-D numpy array.
43
- :param timesteps: a tensor of indices into the array to extract.
44
- :param broadcast_shape: a larger shape of K dimensions with the batch
45
- dimension equal to the length of timesteps.
46
- :return: a tensor of shape [batch_size, 1, ...] where the shape has K dims.
47
- """
48
- if not isinstance(arr, torch.Tensor):
49
- arr = torch.from_numpy(arr)
50
- res = arr[timesteps].float().to(timesteps.device)
51
- while len(res.shape) < len(broadcast_shape):
52
- res = res[..., None]
53
- return res.expand(broadcast_shape)
54
-
55
-
56
- def parse_args():
57
- parser = argparse.ArgumentParser(description="Simple example of a training script.")
58
- parser.add_argument(
59
- "--dataset_name",
60
- type=str,
61
- default=None,
62
- help=(
63
- "The name of the Dataset (from the HuggingFace hub) to train on (could be your own, possibly private,"
64
- " dataset). It can also be a path pointing to a local copy of a dataset in your filesystem,"
65
- " or to a folder containing files that HF Datasets can understand."
66
- ),
67
- )
68
- parser.add_argument(
69
- "--dataset_config_name",
70
- type=str,
71
- default=None,
72
- help="The config of the Dataset, leave as None if there's only one config.",
73
- )
74
- parser.add_argument(
75
- "--model_config_name_or_path",
76
- type=str,
77
- default=None,
78
- help="The config of the UNet model to train, leave as None to use standard DDPM configuration.",
79
- )
80
- parser.add_argument(
81
- "--train_data_dir",
82
- type=str,
83
- default=None,
84
- help=(
85
- "A folder containing the training data. Folder contents must follow the structure described in"
86
- " https://huggingface.co/docs/datasets/image_dataset#imagefolder. In particular, a `metadata.jsonl` file"
87
- " must exist to provide the captions for the images. Ignored if `dataset_name` is specified."
88
- ),
89
- )
90
- parser.add_argument(
91
- "--output_dir",
92
- type=str,
93
- default="ddpm-model-64",
94
- help="The output directory where the model predictions and checkpoints will be written.",
95
- )
96
- parser.add_argument("--overwrite_output_dir", action="store_true")
97
- parser.add_argument(
98
- "--cache_dir",
99
- type=str,
100
- default=None,
101
- help="The directory where the downloaded models and datasets will be stored.",
102
- )
103
- parser.add_argument(
104
- "--resolution",
105
- type=int,
106
- default=64,
107
- help=(
108
- "The resolution for input images, all the images in the train/validation dataset will be resized to this"
109
- " resolution"
110
- ),
111
- )
112
- parser.add_argument(
113
- "--center_crop",
114
- default=False,
115
- action="store_true",
116
- help=(
117
- "Whether to center crop the input images to the resolution. If not set, the images will be randomly"
118
- " cropped. The images will be resized to the resolution first before cropping."
119
- ),
120
- )
121
- parser.add_argument(
122
- "--random_flip",
123
- default=False,
124
- action="store_true",
125
- help="whether to randomly flip images horizontally",
126
- )
127
- parser.add_argument(
128
- "--train_batch_size", type=int, default=16, help="Batch size (per device) for the training dataloader."
129
- )
130
- parser.add_argument(
131
- "--eval_batch_size", type=int, default=16, help="The number of images to generate for evaluation."
132
- )
133
- parser.add_argument(
134
- "--dataloader_num_workers",
135
- type=int,
136
- default=0,
137
- help=(
138
- "The number of subprocesses to use for data loading. 0 means that the data will be loaded in the main"
139
- " process."
140
- ),
141
- )
142
- parser.add_argument("--num_epochs", type=int, default=100)
143
- parser.add_argument("--save_images_epochs", type=int, default=10, help="How often to save images during training.")
144
- parser.add_argument(
145
- "--save_model_epochs", type=int, default=10, help="How often to save the model during training."
146
- )
147
- parser.add_argument(
148
- "--gradient_accumulation_steps",
149
- type=int,
150
- default=1,
151
- help="Number of updates steps to accumulate before performing a backward/update pass.",
152
- )
153
- parser.add_argument(
154
- "--learning_rate",
155
- type=float,
156
- default=1e-4,
157
- help="Initial learning rate (after the potential warmup period) to use.",
158
- )
159
- parser.add_argument(
160
- "--lr_scheduler",
161
- type=str,
162
- default="cosine",
163
- help=(
164
- 'The scheduler type to use. Choose between ["linear", "cosine", "cosine_with_restarts", "polynomial",'
165
- ' "constant", "constant_with_warmup"]'
166
- ),
167
- )
168
- parser.add_argument(
169
- "--lr_warmup_steps", type=int, default=500, help="Number of steps for the warmup in the lr scheduler."
170
- )
171
- parser.add_argument("--adam_beta1", type=float, default=0.95, help="The beta1 parameter for the Adam optimizer.")
172
- parser.add_argument("--adam_beta2", type=float, default=0.999, help="The beta2 parameter for the Adam optimizer.")
173
- parser.add_argument(
174
- "--adam_weight_decay", type=float, default=1e-6, help="Weight decay magnitude for the Adam optimizer."
175
- )
176
- parser.add_argument("--adam_epsilon", type=float, default=1e-08, help="Epsilon value for the Adam optimizer.")
177
- parser.add_argument(
178
- "--use_ema",
179
- action="store_true",
180
- help="Whether to use Exponential Moving Average for the final model weights.",
181
- )
182
- parser.add_argument("--ema_inv_gamma", type=float, default=1.0, help="The inverse gamma value for the EMA decay.")
183
- parser.add_argument("--ema_power", type=float, default=3 / 4, help="The power value for the EMA decay.")
184
- parser.add_argument("--ema_max_decay", type=float, default=0.9999, help="The maximum decay magnitude for EMA.")
185
- parser.add_argument("--push_to_hub", action="store_true", help="Whether or not to push the model to the Hub.")
186
- parser.add_argument("--hub_token", type=str, default=None, help="The token to use to push to the Model Hub.")
187
- parser.add_argument(
188
- "--hub_model_id",
189
- type=str,
190
- default=None,
191
- help="The name of the repository to keep in sync with the local `output_dir`.",
192
- )
193
- parser.add_argument(
194
- "--hub_private_repo", action="store_true", help="Whether or not to create a private repository."
195
- )
196
- parser.add_argument(
197
- "--logger",
198
- type=str,
199
- default="tensorboard",
200
- choices=["tensorboard", "wandb"],
201
- help=(
202
- "Whether to use [tensorboard](https://www.tensorflow.org/tensorboard) or [wandb](https://www.wandb.ai)"
203
- " for experiment tracking and logging of model metrics and model checkpoints"
204
- ),
205
- )
206
- parser.add_argument(
207
- "--logging_dir",
208
- type=str,
209
- default="logs",
210
- help=(
211
- "[TensorBoard](https://www.tensorflow.org/tensorboard) log directory. Will default to"
212
- " *output_dir/runs/**CURRENT_DATETIME_HOSTNAME***."
213
- ),
214
- )
215
- parser.add_argument("--local_rank", type=int, default=-1, help="For distributed training: local_rank")
216
- parser.add_argument(
217
- "--mixed_precision",
218
- type=str,
219
- default="no",
220
- choices=["no", "fp16", "bf16"],
221
- help=(
222
- "Whether to use mixed precision. Choose"
223
- "between fp16 and bf16 (bfloat16). Bf16 requires PyTorch >= 1.10."
224
- "and an Nvidia Ampere GPU."
225
- ),
226
- )
227
- parser.add_argument(
228
- "--prediction_type",
229
- type=str,
230
- default="epsilon",
231
- choices=["epsilon", "sample"],
232
- help="Whether the model should predict the 'epsilon'/noise error or directly the reconstructed image 'x0'.",
233
- )
234
- parser.add_argument("--ddpm_num_steps", type=int, default=1000)
235
- parser.add_argument("--ddpm_num_inference_steps", type=int, default=1000)
236
- parser.add_argument("--ddpm_beta_schedule", type=str, default="linear")
237
- parser.add_argument(
238
- "--checkpointing_steps",
239
- type=int,
240
- default=500,
241
- help=(
242
- "Save a checkpoint of the training state every X updates. These checkpoints are only suitable for resuming"
243
- " training using `--resume_from_checkpoint`."
244
- ),
245
- )
246
- parser.add_argument(
247
- "--checkpoints_total_limit",
248
- type=int,
249
- default=None,
250
- help=(
251
- "Max number of checkpoints to store. Passed as `total_limit` to the `Accelerator` `ProjectConfiguration`."
252
- " See Accelerator::save_state https://huggingface.co/docs/accelerate/package_reference/accelerator#accelerate.Accelerator.save_state"
253
- " for more docs"
254
- ),
255
- )
256
- parser.add_argument(
257
- "--resume_from_checkpoint",
258
- type=str,
259
- default=None,
260
- help=(
261
- "Whether training should be resumed from a previous checkpoint. Use a path saved by"
262
- ' `--checkpointing_steps`, or `"latest"` to automatically select the last available checkpoint.'
263
- ),
264
- )
265
- parser.add_argument(
266
- "--enable_xformers_memory_efficient_attention", action="store_true", help="Whether or not to use xformers."
267
- )
268
-
269
- args = parser.parse_args()
270
- env_local_rank = int(os.environ.get("LOCAL_RANK", -1))
271
- if env_local_rank != -1 and env_local_rank != args.local_rank:
272
- args.local_rank = env_local_rank
273
-
274
- if args.dataset_name is None and args.train_data_dir is None:
275
- raise ValueError("You must specify either a dataset name from the hub or a train data directory.")
276
-
277
- return args
278
-
279
-
280
- def get_full_repo_name(model_id: str, organization: Optional[str] = None, token: Optional[str] = None):
281
- if token is None:
282
- token = HfFolder.get_token()
283
- if organization is None:
284
- username = whoami(token)["name"]
285
- return f"{username}/{model_id}"
286
- else:
287
- return f"{organization}/{model_id}"
288
-
289
-
290
- def main(args):
291
- logging_dir = os.path.join(args.output_dir, args.logging_dir)
292
- accelerator_project_config = ProjectConfiguration(
293
- total_limit=args.checkpoints_total_limit, project_dir=args.output_dir, logging_dir=logging_dir
294
- )
295
-
296
- accelerator = Accelerator(
297
- gradient_accumulation_steps=args.gradient_accumulation_steps,
298
- mixed_precision=args.mixed_precision,
299
- log_with=args.report_to,
300
- project_config=accelerator_project_config,
301
- )
302
-
303
- if args.logger == "tensorboard":
304
- if not is_tensorboard_available():
305
- raise ImportError("Make sure to install tensorboard if you want to use it for logging during training.")
306
-
307
- elif args.logger == "wandb":
308
- if not is_wandb_available():
309
- raise ImportError("Make sure to install wandb if you want to use it for logging during training.")
310
- import wandb
311
-
312
- # `accelerate` 0.16.0 will have better support for customized saving
313
- if version.parse(accelerate.__version__) >= version.parse("0.16.0"):
314
- # create custom saving & loading hooks so that `accelerator.save_state(...)` serializes in a nice format
315
- def save_model_hook(models, weights, output_dir):
316
- if args.use_ema:
317
- ema_model.save_pretrained(os.path.join(output_dir, "unet_ema"))
318
-
319
- for i, model in enumerate(models):
320
- model.save_pretrained(os.path.join(output_dir, "unet"))
321
-
322
- # make sure to pop weight so that corresponding model is not saved again
323
- weights.pop()
324
-
325
- def load_model_hook(models, input_dir):
326
- if args.use_ema:
327
- load_model = EMAModel.from_pretrained(os.path.join(input_dir, "unet_ema"), UNet2DModel)
328
- ema_model.load_state_dict(load_model.state_dict())
329
- ema_model.to(accelerator.device)
330
- del load_model
331
-
332
- for i in range(len(models)):
333
- # pop models so that they are not loaded again
334
- model = models.pop()
335
-
336
- # load diffusers style into model
337
- load_model = UNet2DModel.from_pretrained(input_dir, subfolder="unet")
338
- model.register_to_config(**load_model.config)
339
-
340
- model.load_state_dict(load_model.state_dict())
341
- del load_model
342
-
343
- accelerator.register_save_state_pre_hook(save_model_hook)
344
- accelerator.register_load_state_pre_hook(load_model_hook)
345
-
346
- # Make one log on every process with the configuration for debugging.
347
- logging.basicConfig(
348
- format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
349
- datefmt="%m/%d/%Y %H:%M:%S",
350
- level=logging.INFO,
351
- )
352
- logger.info(accelerator.state, main_process_only=False)
353
- if accelerator.is_local_main_process:
354
- datasets.utils.logging.set_verbosity_warning()
355
- diffusers.utils.logging.set_verbosity_info()
356
- else:
357
- datasets.utils.logging.set_verbosity_error()
358
- diffusers.utils.logging.set_verbosity_error()
359
-
360
- # Handle the repository creation
361
- if accelerator.is_main_process:
362
- if args.push_to_hub:
363
- if args.hub_model_id is None:
364
- repo_name = get_full_repo_name(Path(args.output_dir).name, token=args.hub_token)
365
- else:
366
- repo_name = args.hub_model_id
367
- create_repo(repo_name, exist_ok=True, token=args.hub_token)
368
- repo = Repository(args.output_dir, clone_from=repo_name, token=args.hub_token)
369
-
370
- with open(os.path.join(args.output_dir, ".gitignore"), "w+") as gitignore:
371
- if "step_*" not in gitignore:
372
- gitignore.write("step_*\n")
373
- if "epoch_*" not in gitignore:
374
- gitignore.write("epoch_*\n")
375
- elif args.output_dir is not None:
376
- os.makedirs(args.output_dir, exist_ok=True)
377
-
378
- # Initialize the model
379
- if args.model_config_name_or_path is None:
380
- model = UNet2DModel(
381
- sample_size=args.resolution,
382
- in_channels=3,
383
- out_channels=3,
384
- layers_per_block=2,
385
- block_out_channels=(128, 128, 256, 256, 512, 512),
386
- down_block_types=(
387
- "DownBlock2D",
388
- "DownBlock2D",
389
- "DownBlock2D",
390
- "DownBlock2D",
391
- "AttnDownBlock2D",
392
- "DownBlock2D",
393
- ),
394
- up_block_types=(
395
- "UpBlock2D",
396
- "AttnUpBlock2D",
397
- "UpBlock2D",
398
- "UpBlock2D",
399
- "UpBlock2D",
400
- "UpBlock2D",
401
- ),
402
- )
403
- else:
404
- config = UNet2DModel.load_config(args.model_config_name_or_path)
405
- model = UNet2DModel.from_config(config)
406
-
407
- # Create EMA for the model.
408
- if args.use_ema:
409
- ema_model = EMAModel(
410
- model.parameters(),
411
- decay=args.ema_max_decay,
412
- use_ema_warmup=True,
413
- inv_gamma=args.ema_inv_gamma,
414
- power=args.ema_power,
415
- model_cls=UNet2DModel,
416
- model_config=model.config,
417
- )
418
-
419
- if args.enable_xformers_memory_efficient_attention:
420
- if is_xformers_available():
421
- import xformers
422
-
423
- xformers_version = version.parse(xformers.__version__)
424
- if xformers_version == version.parse("0.0.16"):
425
- logger.warn(
426
- "xFormers 0.0.16 cannot be used for training in some GPUs. If you observe problems during training, please update xFormers to at least 0.0.17. See https://huggingface.co/docs/diffusers/main/en/optimization/xformers for more details."
427
- )
428
- model.enable_xformers_memory_efficient_attention()
429
- else:
430
- raise ValueError("xformers is not available. Make sure it is installed correctly")
431
-
432
- # Initialize the scheduler
433
- accepts_prediction_type = "prediction_type" in set(inspect.signature(DDPMScheduler.__init__).parameters.keys())
434
- if accepts_prediction_type:
435
- noise_scheduler = DDPMScheduler(
436
- num_train_timesteps=args.ddpm_num_steps,
437
- beta_schedule=args.ddpm_beta_schedule,
438
- prediction_type=args.prediction_type,
439
- )
440
- else:
441
- noise_scheduler = DDPMScheduler(num_train_timesteps=args.ddpm_num_steps, beta_schedule=args.ddpm_beta_schedule)
442
-
443
- # Initialize the optimizer
444
- optimizer = torch.optim.AdamW(
445
- model.parameters(),
446
- lr=args.learning_rate,
447
- betas=(args.adam_beta1, args.adam_beta2),
448
- weight_decay=args.adam_weight_decay,
449
- eps=args.adam_epsilon,
450
- )
451
-
452
- optimizer = ORT_FP16_Optimizer(optimizer)
453
-
454
- # Get the datasets: you can either provide your own training and evaluation files (see below)
455
- # or specify a Dataset from the hub (the dataset will be downloaded automatically from the datasets Hub).
456
-
457
- # In distributed training, the load_dataset function guarantees that only one local process can concurrently
458
- # download the dataset.
459
- if args.dataset_name is not None:
460
- dataset = load_dataset(
461
- args.dataset_name,
462
- args.dataset_config_name,
463
- cache_dir=args.cache_dir,
464
- split="train",
465
- )
466
- else:
467
- dataset = load_dataset("imagefolder", data_dir=args.train_data_dir, cache_dir=args.cache_dir, split="train")
468
- # See more about loading custom images at
469
- # https://huggingface.co/docs/datasets/v2.4.0/en/image_load#imagefolder
470
-
471
- # Preprocessing the datasets and DataLoaders creation.
472
- augmentations = transforms.Compose(
473
- [
474
- transforms.Resize(args.resolution, interpolation=transforms.InterpolationMode.BILINEAR),
475
- transforms.CenterCrop(args.resolution) if args.center_crop else transforms.RandomCrop(args.resolution),
476
- transforms.RandomHorizontalFlip() if args.random_flip else transforms.Lambda(lambda x: x),
477
- transforms.ToTensor(),
478
- transforms.Normalize([0.5], [0.5]),
479
- ]
480
- )
481
-
482
- def transform_images(examples):
483
- images = [augmentations(image.convert("RGB")) for image in examples["image"]]
484
- return {"input": images}
485
-
486
- logger.info(f"Dataset size: {len(dataset)}")
487
-
488
- dataset.set_transform(transform_images)
489
- train_dataloader = torch.utils.data.DataLoader(
490
- dataset, batch_size=args.train_batch_size, shuffle=True, num_workers=args.dataloader_num_workers
491
- )
492
-
493
- # Initialize the learning rate scheduler
494
- lr_scheduler = get_scheduler(
495
- args.lr_scheduler,
496
- optimizer=optimizer,
497
- num_warmup_steps=args.lr_warmup_steps * args.gradient_accumulation_steps,
498
- num_training_steps=(len(train_dataloader) * args.num_epochs),
499
- )
500
-
501
- # Prepare everything with our `accelerator`.
502
- model, optimizer, train_dataloader, lr_scheduler = accelerator.prepare(
503
- model, optimizer, train_dataloader, lr_scheduler
504
- )
505
-
506
- if args.use_ema:
507
- ema_model.to(accelerator.device)
508
-
509
- # We need to initialize the trackers we use, and also store our configuration.
510
- # The trackers initializes automatically on the main process.
511
- if accelerator.is_main_process:
512
- run = os.path.split(__file__)[-1].split(".")[0]
513
- accelerator.init_trackers(run)
514
-
515
- model = ORTModule(model)
516
-
517
- total_batch_size = args.train_batch_size * accelerator.num_processes * args.gradient_accumulation_steps
518
- num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps)
519
- max_train_steps = args.num_epochs * num_update_steps_per_epoch
520
-
521
- logger.info("***** Running training *****")
522
- logger.info(f" Num examples = {len(dataset)}")
523
- logger.info(f" Num Epochs = {args.num_epochs}")
524
- logger.info(f" Instantaneous batch size per device = {args.train_batch_size}")
525
- logger.info(f" Total train batch size (w. parallel, distributed & accumulation) = {total_batch_size}")
526
- logger.info(f" Gradient Accumulation steps = {args.gradient_accumulation_steps}")
527
- logger.info(f" Total optimization steps = {max_train_steps}")
528
-
529
- global_step = 0
530
- first_epoch = 0
531
-
532
- # Potentially load in the weights and states from a previous save
533
- if args.resume_from_checkpoint:
534
- if args.resume_from_checkpoint != "latest":
535
- path = os.path.basename(args.resume_from_checkpoint)
536
- else:
537
- # Get the most recent checkpoint
538
- dirs = os.listdir(args.output_dir)
539
- dirs = [d for d in dirs if d.startswith("checkpoint")]
540
- dirs = sorted(dirs, key=lambda x: int(x.split("-")[1]))
541
- path = dirs[-1] if len(dirs) > 0 else None
542
-
543
- if path is None:
544
- accelerator.print(
545
- f"Checkpoint '{args.resume_from_checkpoint}' does not exist. Starting a new training run."
546
- )
547
- args.resume_from_checkpoint = None
548
- else:
549
- accelerator.print(f"Resuming from checkpoint {path}")
550
- accelerator.load_state(os.path.join(args.output_dir, path))
551
- global_step = int(path.split("-")[1])
552
-
553
- resume_global_step = global_step * args.gradient_accumulation_steps
554
- first_epoch = global_step // num_update_steps_per_epoch
555
- resume_step = resume_global_step % (num_update_steps_per_epoch * args.gradient_accumulation_steps)
556
-
557
- # Train!
558
- for epoch in range(first_epoch, args.num_epochs):
559
- model.train()
560
- progress_bar = tqdm(total=num_update_steps_per_epoch, disable=not accelerator.is_local_main_process)
561
- progress_bar.set_description(f"Epoch {epoch}")
562
- for step, batch in enumerate(train_dataloader):
563
- # Skip steps until we reach the resumed step
564
- if args.resume_from_checkpoint and epoch == first_epoch and step < resume_step:
565
- if step % args.gradient_accumulation_steps == 0:
566
- progress_bar.update(1)
567
- continue
568
-
569
- clean_images = batch["input"]
570
- # Sample noise that we'll add to the images
571
- noise = torch.randn(
572
- clean_images.shape, dtype=(torch.float32 if args.mixed_precision == "no" else torch.float16)
573
- ).to(clean_images.device)
574
- bsz = clean_images.shape[0]
575
- # Sample a random timestep for each image
576
- timesteps = torch.randint(
577
- 0, noise_scheduler.config.num_train_timesteps, (bsz,), device=clean_images.device
578
- ).long()
579
-
580
- # Add noise to the clean images according to the noise magnitude at each timestep
581
- # (this is the forward diffusion process)
582
- noisy_images = noise_scheduler.add_noise(clean_images, noise, timesteps)
583
-
584
- with accelerator.accumulate(model):
585
- # Predict the noise residual
586
- model_output = model(noisy_images, timesteps, return_dict=False)[0]
587
-
588
- if args.prediction_type == "epsilon":
589
- loss = F.mse_loss(model_output, noise) # this could have different weights!
590
- elif args.prediction_type == "sample":
591
- alpha_t = _extract_into_tensor(
592
- noise_scheduler.alphas_cumprod, timesteps, (clean_images.shape[0], 1, 1, 1)
593
- )
594
- snr_weights = alpha_t / (1 - alpha_t)
595
- loss = snr_weights * F.mse_loss(
596
- model_output, clean_images, reduction="none"
597
- ) # use SNR weighting from distillation paper
598
- loss = loss.mean()
599
- else:
600
-                     raise ValueError(f"Unsupported prediction type: {args.prediction_type}")
- 
-                 accelerator.backward(loss)
- 
-                 if accelerator.sync_gradients:
-                     accelerator.clip_grad_norm_(model.parameters(), 1.0)
-                 optimizer.step()
-                 lr_scheduler.step()
-                 optimizer.zero_grad()
- 
-             # Checks if the accelerator has performed an optimization step behind the scenes
-             if accelerator.sync_gradients:
-                 if args.use_ema:
-                     ema_model.step(model.parameters())
-                 progress_bar.update(1)
-                 global_step += 1
- 
-                 if global_step % args.checkpointing_steps == 0:
-                     if accelerator.is_main_process:
-                         save_path = os.path.join(args.output_dir, f"checkpoint-{global_step}")
-                         accelerator.save_state(save_path)
-                         logger.info(f"Saved state to {save_path}")
- 
-             logs = {"loss": loss.detach().item(), "lr": lr_scheduler.get_last_lr()[0], "step": global_step}
-             if args.use_ema:
-                 logs["ema_decay"] = ema_model.cur_decay_value
-             progress_bar.set_postfix(**logs)
-             accelerator.log(logs, step=global_step)
-         progress_bar.close()
- 
-         accelerator.wait_for_everyone()
- 
-         # Generate sample images for visual inspection
-         if accelerator.is_main_process:
-             if epoch % args.save_images_epochs == 0 or epoch == args.num_epochs - 1:
-                 unet = accelerator.unwrap_model(model)
- 
-                 if args.use_ema:
-                     ema_model.store(unet.parameters())
-                     ema_model.copy_to(unet.parameters())
- 
-                 pipeline = DDPMPipeline(
-                     unet=unet,
-                     scheduler=noise_scheduler,
-                 )
- 
-                 generator = torch.Generator(device=pipeline.device).manual_seed(0)
-                 # run pipeline in inference (sample random noise and denoise)
-                 images = pipeline(
-                     generator=generator,
-                     batch_size=args.eval_batch_size,
-                     num_inference_steps=args.ddpm_num_inference_steps,
-                     output_type="numpy",
-                 ).images
- 
-                 if args.use_ema:
-                     ema_model.restore(unet.parameters())
- 
-                 # denormalize the images and save to tensorboard
-                 images_processed = (images * 255).round().astype("uint8")
- 
-                 if args.logger == "tensorboard":
-                     if is_accelerate_version(">=", "0.17.0.dev0"):
-                         tracker = accelerator.get_tracker("tensorboard", unwrap=True)
-                     else:
-                         tracker = accelerator.get_tracker("tensorboard")
-                     tracker.add_images("test_samples", images_processed.transpose(0, 3, 1, 2), epoch)
-                 elif args.logger == "wandb":
-                     # Upcoming `log_images` helper coming in https://github.com/huggingface/accelerate/pull/962/files
-                     accelerator.get_tracker("wandb").log(
-                         {"test_samples": [wandb.Image(img) for img in images_processed], "epoch": epoch},
-                         step=global_step,
-                     )
- 
-             if epoch % args.save_model_epochs == 0 or epoch == args.num_epochs - 1:
-                 # save the model
-                 unet = accelerator.unwrap_model(model)
- 
-                 if args.use_ema:
-                     ema_model.store(unet.parameters())
-                     ema_model.copy_to(unet.parameters())
- 
-                 pipeline = DDPMPipeline(
-                     unet=unet,
-                     scheduler=noise_scheduler,
-                 )
- 
-                 pipeline.save_pretrained(args.output_dir)
- 
-                 if args.use_ema:
-                     ema_model.restore(unet.parameters())
- 
-                 if args.push_to_hub:
-                     repo.push_to_hub(commit_message=f"Epoch {epoch}", blocking=False)
- 
-     accelerator.end_training()
- 
- 
- if __name__ == "__main__":
-     args = parse_args()
-     main(args)
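The `ema_model.store(...)` / `copy_to(...)` / `restore(...)` dance above temporarily swaps the EMA weights in for evaluation and then puts the live training weights back. A minimal pure-Python sketch of that pattern (the `SimpleEMA` class and its float "parameters" are illustrative, not part of the diffusers API):

```python
class SimpleEMA:
    """Minimal exponential-moving-average tracker over a list of floats."""

    def __init__(self, params, decay=0.9):
        self.decay = decay
        self.shadow = list(params)  # EMA copy of the parameters
        self.backup = None          # stash used by store()/restore()

    def step(self, params):
        # shadow <- decay * shadow + (1 - decay) * param
        self.shadow = [self.decay * s + (1.0 - self.decay) * p
                       for s, p in zip(self.shadow, params)]

    def store(self, params):
        # remember the live training weights
        self.backup = list(params)

    def copy_to(self, params):
        # swap the EMA weights in (e.g. for sampling/evaluation)
        params[:] = self.shadow

    def restore(self, params):
        # put the live training weights back
        params[:] = self.backup


params = [1.0]
ema = SimpleEMA(params, decay=0.9)
params[0] = 2.0          # one "optimizer step"
ema.step(params)         # shadow moves a little toward the new weight
ema.store(params)
ema.copy_to(params)      # evaluate with EMA weights
ema.restore(params)      # training continues from the live weights
```

The store/restore pair is what lets the same `unet` object serve both sampling (with averaged weights) and continued training.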
spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/semantic_stable_diffusion/__init__.py DELETED
File without changes
spaces/Andy1621/uniformer_image_detection/mmcv_custom/runner/checkpoint.py DELETED
@@ -1,85 +0,0 @@
- # Copyright (c) Open-MMLab. All rights reserved.
- import os.path as osp
- import time
- from tempfile import TemporaryDirectory
- 
- import torch
- from torch.optim import Optimizer
- 
- import mmcv
- from mmcv.parallel import is_module_wrapper
- from mmcv.runner.checkpoint import weights_to_cpu, get_state_dict
- 
- try:
-     import apex
- except ImportError:
-     print('apex is not installed')
- 
- 
- def save_checkpoint(model, filename, optimizer=None, meta=None):
-     """Save checkpoint to file.
- 
-     The checkpoint will have 4 fields: ``meta``, ``state_dict``,
-     ``optimizer`` and ``amp``. By default ``meta`` will contain version
-     and time info.
- 
-     Args:
-         model (Module): Module whose params are to be saved.
-         filename (str): Checkpoint filename.
-         optimizer (:obj:`Optimizer`, optional): Optimizer to be saved.
-         meta (dict, optional): Metadata to be saved in checkpoint.
-     """
-     if meta is None:
-         meta = {}
-     elif not isinstance(meta, dict):
-         raise TypeError(f'meta must be a dict or None, but got {type(meta)}')
-     meta.update(mmcv_version=mmcv.__version__, time=time.asctime())
- 
-     if is_module_wrapper(model):
-         model = model.module
- 
-     if hasattr(model, 'CLASSES') and model.CLASSES is not None:
-         # save class name to the meta
-         meta.update(CLASSES=model.CLASSES)
- 
-     checkpoint = {
-         'meta': meta,
-         'state_dict': weights_to_cpu(get_state_dict(model))
-     }
-     # save optimizer state dict in the checkpoint
-     if isinstance(optimizer, Optimizer):
-         checkpoint['optimizer'] = optimizer.state_dict()
-     elif isinstance(optimizer, dict):
-         checkpoint['optimizer'] = {}
-         for name, optim in optimizer.items():
-             checkpoint['optimizer'][name] = optim.state_dict()
- 
-     # save amp state dict in the checkpoint
-     checkpoint['amp'] = apex.amp.state_dict()
- 
-     if filename.startswith('pavi://'):
-         try:
-             from pavi import modelcloud
-             from pavi.exception import NodeNotFoundError
-         except ImportError:
-             raise ImportError(
-                 'Please install pavi to load checkpoint from modelcloud.')
-         model_path = filename[7:]
-         root = modelcloud.Folder()
-         model_dir, model_name = osp.split(model_path)
-         try:
-             model = modelcloud.get(model_dir)
-         except NodeNotFoundError:
-             model = root.create_training_model(model_dir)
-         with TemporaryDirectory() as tmp_dir:
-             checkpoint_file = osp.join(tmp_dir, model_name)
-             with open(checkpoint_file, 'wb') as f:
-                 torch.save(checkpoint, f)
-                 f.flush()
-             model.create_file(checkpoint_file, name=model_name)
-     else:
-         mmcv.mkdir_or_exist(osp.dirname(filename))
-         # immediately flush buffer
-         with open(filename, 'wb') as f:
-             torch.save(checkpoint, f)
-             f.flush()
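The checkpoint this function writes is just a nested dict with a fixed layout (`meta` / `state_dict` / `optimizer` / `amp`). A framework-free sketch of how such a dict is assembled, with plain Python values standing in for tensors (the `build_checkpoint` name and its arguments are illustrative, not mmcv API):

```python
import time


def build_checkpoint(state_dict, optimizer_state=None, amp_state=None, meta=None):
    """Assemble a checkpoint dict with the meta/state_dict/optimizer/amp layout."""
    if meta is None:
        meta = {}
    elif not isinstance(meta, dict):
        raise TypeError(f'meta must be a dict or None, but got {type(meta)}')
    # meta carries bookkeeping info, e.g. a timestamp
    meta.setdefault('time', time.asctime())

    checkpoint = {'meta': meta, 'state_dict': state_dict}
    if optimizer_state is not None:
        checkpoint['optimizer'] = optimizer_state
    if amp_state is not None:
        checkpoint['amp'] = amp_state
    return checkpoint


ckpt = build_checkpoint({'w': [0.1, 0.2]}, optimizer_state={'lr': 0.01})
```

In the real function the `state_dict` values are CPU tensors and the whole dict is serialized with `torch.save`.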
spaces/Andy1621/uniformer_image_detection/mmdet/apis/train.py DELETED
@@ -1,185 +0,0 @@
- import random
- import warnings
- 
- import numpy as np
- import torch
- from mmcv.parallel import MMDataParallel, MMDistributedDataParallel
- from mmcv.runner import (HOOKS, DistSamplerSeedHook, EpochBasedRunner,
-                          Fp16OptimizerHook, OptimizerHook, build_optimizer,
-                          build_runner)
- from mmcv.utils import build_from_cfg
- 
- from mmdet.core import DistEvalHook, EvalHook
- from mmdet.datasets import (build_dataloader, build_dataset,
-                             replace_ImageToTensor)
- from mmdet.utils import get_root_logger
- from mmcv_custom.runner import EpochBasedRunnerAmp
- try:
-     import apex
- except ImportError:
-     print('apex is not installed')
- 
- 
- def set_random_seed(seed, deterministic=False):
-     """Set random seed.
- 
-     Args:
-         seed (int): Seed to be used.
-         deterministic (bool): Whether to set the deterministic option for
-             CUDNN backend, i.e., set `torch.backends.cudnn.deterministic`
-             to True and `torch.backends.cudnn.benchmark` to False.
-             Default: False.
-     """
-     random.seed(seed)
-     np.random.seed(seed)
-     torch.manual_seed(seed)
-     torch.cuda.manual_seed_all(seed)
-     if deterministic:
-         torch.backends.cudnn.deterministic = True
-         torch.backends.cudnn.benchmark = False
- 
- 
- def train_detector(model,
-                    dataset,
-                    cfg,
-                    distributed=False,
-                    validate=False,
-                    timestamp=None,
-                    meta=None):
-     logger = get_root_logger(cfg.log_level)
- 
-     # prepare data loaders
-     dataset = dataset if isinstance(dataset, (list, tuple)) else [dataset]
-     if 'imgs_per_gpu' in cfg.data:
-         logger.warning('"imgs_per_gpu" is deprecated in MMDet V2.0. '
-                        'Please use "samples_per_gpu" instead')
-         if 'samples_per_gpu' in cfg.data:
-             logger.warning(
-                 f'Got "imgs_per_gpu"={cfg.data.imgs_per_gpu} and '
-                 f'"samples_per_gpu"={cfg.data.samples_per_gpu}, "imgs_per_gpu"'
-                 f'={cfg.data.imgs_per_gpu} is used in these experiments')
-         else:
-             logger.warning(
-                 'Automatically set "samples_per_gpu"="imgs_per_gpu"='
-                 f'{cfg.data.imgs_per_gpu} in these experiments')
-         cfg.data.samples_per_gpu = cfg.data.imgs_per_gpu
- 
-     data_loaders = [
-         build_dataloader(
-             ds,
-             cfg.data.samples_per_gpu,
-             cfg.data.workers_per_gpu,
-             # cfg.gpus will be ignored if distributed
-             len(cfg.gpu_ids),
-             dist=distributed,
-             seed=cfg.seed) for ds in dataset
-     ]
- 
-     # build optimizer
-     optimizer = build_optimizer(model, cfg.optimizer)
- 
-     # use apex fp16 optimizer
-     if cfg.optimizer_config.get("type", None) and cfg.optimizer_config["type"] == "DistOptimizerHook":
-         if cfg.optimizer_config.get("use_fp16", False):
-             model, optimizer = apex.amp.initialize(
-                 model.cuda(), optimizer, opt_level="O1")
-             for m in model.modules():
-                 if hasattr(m, "fp16_enabled"):
-                     m.fp16_enabled = True
- 
-     # put model on gpus
-     if distributed:
-         find_unused_parameters = cfg.get('find_unused_parameters', False)
-         # Sets the `find_unused_parameters` parameter in
-         # torch.nn.parallel.DistributedDataParallel
-         model = MMDistributedDataParallel(
-             model.cuda(),
-             device_ids=[torch.cuda.current_device()],
-             broadcast_buffers=False,
-             find_unused_parameters=find_unused_parameters)
-     else:
-         model = MMDataParallel(
-             model.cuda(cfg.gpu_ids[0]), device_ids=cfg.gpu_ids)
- 
-     if 'runner' not in cfg:
-         cfg.runner = {
-             'type': 'EpochBasedRunner',
-             'max_epochs': cfg.total_epochs
-         }
-         warnings.warn(
-             'config is now expected to have a `runner` section, '
-             'please set `runner` in your config.', UserWarning)
-     else:
-         if 'total_epochs' in cfg:
-             assert cfg.total_epochs == cfg.runner.max_epochs
- 
-     # build runner
-     runner = build_runner(
-         cfg.runner,
-         default_args=dict(
-             model=model,
-             optimizer=optimizer,
-             work_dir=cfg.work_dir,
-             logger=logger,
-             meta=meta))
- 
-     # an ugly workaround to make .log and .log.json filenames the same
-     runner.timestamp = timestamp
- 
-     # fp16 setting
-     fp16_cfg = cfg.get('fp16', None)
-     if fp16_cfg is not None:
-         optimizer_config = Fp16OptimizerHook(
-             **cfg.optimizer_config, **fp16_cfg, distributed=distributed)
-     elif distributed and 'type' not in cfg.optimizer_config:
-         optimizer_config = OptimizerHook(**cfg.optimizer_config)
-     else:
-         optimizer_config = cfg.optimizer_config
- 
-     # register hooks
-     runner.register_training_hooks(cfg.lr_config, optimizer_config,
-                                    cfg.checkpoint_config, cfg.log_config,
-                                    cfg.get('momentum_config', None))
-     if distributed:
-         if isinstance(runner, EpochBasedRunner):
-             runner.register_hook(DistSamplerSeedHook())
- 
-     # register eval hooks
-     if validate:
-         # Support batch_size > 1 in validation
-         val_samples_per_gpu = cfg.data.val.pop('samples_per_gpu', 1)
-         if val_samples_per_gpu > 1:
-             # Replace 'ImageToTensor' to 'DefaultFormatBundle'
-             cfg.data.val.pipeline = replace_ImageToTensor(
-                 cfg.data.val.pipeline)
-         val_dataset = build_dataset(cfg.data.val, dict(test_mode=True))
-         val_dataloader = build_dataloader(
-             val_dataset,
-             samples_per_gpu=val_samples_per_gpu,
-             workers_per_gpu=cfg.data.workers_per_gpu,
-             dist=distributed,
-             shuffle=False)
-         eval_cfg = cfg.get('evaluation', {})
-         eval_cfg['by_epoch'] = cfg.runner['type'] != 'IterBasedRunner'
-         eval_hook = DistEvalHook if distributed else EvalHook
-         runner.register_hook(eval_hook(val_dataloader, **eval_cfg))
- 
-     # user-defined hooks
-     if cfg.get('custom_hooks', None):
-         custom_hooks = cfg.custom_hooks
-         assert isinstance(custom_hooks, list), \
-             f'custom_hooks expect list type, but got {type(custom_hooks)}'
-         for hook_cfg in cfg.custom_hooks:
-             assert isinstance(hook_cfg, dict), \
-                 'Each item in custom_hooks expects dict type, but got ' \
-                 f'{type(hook_cfg)}'
-             hook_cfg = hook_cfg.copy()
-             priority = hook_cfg.pop('priority', 'NORMAL')
-             hook = build_from_cfg(hook_cfg, HOOKS)
-             runner.register_hook(hook, priority=priority)
- 
-     if cfg.resume_from:
-         runner.resume(cfg.resume_from)
-     elif cfg.load_from:
-         runner.load_checkpoint(cfg.load_from)
-     runner.run(data_loaders, cfg.workflow)
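The deprecation handling at the top of `train_detector` boils down to: if the old `imgs_per_gpu` key is present, it wins over `samples_per_gpu`. A small sketch of that fallback on a plain dict (the function name is illustrative):

```python
def resolve_samples_per_gpu(data_cfg):
    """Mirror the MMDet V2.0 deprecation fallback: a legacy 'imgs_per_gpu'
    key, if present, overrides 'samples_per_gpu'."""
    if 'imgs_per_gpu' in data_cfg:
        # in the real code this path also emits a deprecation warning
        data_cfg['samples_per_gpu'] = data_cfg['imgs_per_gpu']
    return data_cfg['samples_per_gpu']


# legacy key present: it overrides the new one
legacy = resolve_samples_per_gpu({'imgs_per_gpu': 4, 'samples_per_gpu': 2})
# only the new key present: used as-is
modern = resolve_samples_per_gpu({'samples_per_gpu': 2})
```

This is why configs that still carry `imgs_per_gpu` silently change the effective batch size even when `samples_per_gpu` is also set.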
spaces/Andy1621/uniformer_image_detection/mmdet/models/losses/focal_loss.py DELETED
@@ -1,181 +0,0 @@
- import torch
- import torch.nn as nn
- import torch.nn.functional as F
- from mmcv.ops import sigmoid_focal_loss as _sigmoid_focal_loss
- 
- from ..builder import LOSSES
- from .utils import weight_reduce_loss
- 
- 
- # This method is only for debugging
- def py_sigmoid_focal_loss(pred,
-                           target,
-                           weight=None,
-                           gamma=2.0,
-                           alpha=0.25,
-                           reduction='mean',
-                           avg_factor=None):
-     """PyTorch version of `Focal Loss <https://arxiv.org/abs/1708.02002>`_.
- 
-     Args:
-         pred (torch.Tensor): The prediction with shape (N, C), C is the
-             number of classes
-         target (torch.Tensor): The learning label of the prediction.
-         weight (torch.Tensor, optional): Sample-wise loss weight.
-         gamma (float, optional): The gamma for calculating the modulating
-             factor. Defaults to 2.0.
-         alpha (float, optional): A balanced form for Focal Loss.
-             Defaults to 0.25.
-         reduction (str, optional): The method used to reduce the loss into
-             a scalar. Defaults to 'mean'.
-         avg_factor (int, optional): Average factor that is used to average
-             the loss. Defaults to None.
-     """
-     pred_sigmoid = pred.sigmoid()
-     target = target.type_as(pred)
-     pt = (1 - pred_sigmoid) * target + pred_sigmoid * (1 - target)
-     focal_weight = (alpha * target + (1 - alpha) *
-                     (1 - target)) * pt.pow(gamma)
-     loss = F.binary_cross_entropy_with_logits(
-         pred, target, reduction='none') * focal_weight
-     if weight is not None:
-         if weight.shape != loss.shape:
-             if weight.size(0) == loss.size(0):
-                 # For most cases, weight is of shape (num_priors, ),
-                 # which means it does not have the second axis num_class
-                 weight = weight.view(-1, 1)
-             else:
-                 # Sometimes, weight per anchor per class is also needed. e.g.
-                 # in FSAF. But it may be flattened of shape
-                 # (num_priors x num_class, ), while loss is still of shape
-                 # (num_priors, num_class).
-                 assert weight.numel() == loss.numel()
-                 weight = weight.view(loss.size(0), -1)
-         assert weight.ndim == loss.ndim
-     loss = weight_reduce_loss(loss, weight, reduction, avg_factor)
-     return loss
- 
- 
- def sigmoid_focal_loss(pred,
-                        target,
-                        weight=None,
-                        gamma=2.0,
-                        alpha=0.25,
-                        reduction='mean',
-                        avg_factor=None):
-     r"""A wrapper of cuda version `Focal Loss
-     <https://arxiv.org/abs/1708.02002>`_.
- 
-     Args:
-         pred (torch.Tensor): The prediction with shape (N, C), C is the number
-             of classes.
-         target (torch.Tensor): The learning label of the prediction.
-         weight (torch.Tensor, optional): Sample-wise loss weight.
-         gamma (float, optional): The gamma for calculating the modulating
-             factor. Defaults to 2.0.
-         alpha (float, optional): A balanced form for Focal Loss.
-             Defaults to 0.25.
-         reduction (str, optional): The method used to reduce the loss into
-             a scalar. Defaults to 'mean'. Options are "none", "mean" and "sum".
-         avg_factor (int, optional): Average factor that is used to average
-             the loss. Defaults to None.
-     """
-     # Function.apply does not accept keyword arguments, so the decorator
-     # "weighted_loss" is not applicable
-     loss = _sigmoid_focal_loss(pred.contiguous(), target, gamma, alpha, None,
-                                'none')
-     if weight is not None:
-         if weight.shape != loss.shape:
-             if weight.size(0) == loss.size(0):
-                 # For most cases, weight is of shape (num_priors, ),
-                 # which means it does not have the second axis num_class
-                 weight = weight.view(-1, 1)
-             else:
-                 # Sometimes, weight per anchor per class is also needed. e.g.
-                 # in FSAF. But it may be flattened of shape
-                 # (num_priors x num_class, ), while loss is still of shape
-                 # (num_priors, num_class).
-                 assert weight.numel() == loss.numel()
-                 weight = weight.view(loss.size(0), -1)
-         assert weight.ndim == loss.ndim
-     loss = weight_reduce_loss(loss, weight, reduction, avg_factor)
-     return loss
- 
- 
- @LOSSES.register_module()
- class FocalLoss(nn.Module):
- 
-     def __init__(self,
-                  use_sigmoid=True,
-                  gamma=2.0,
-                  alpha=0.25,
-                  reduction='mean',
-                  loss_weight=1.0):
-         """`Focal Loss <https://arxiv.org/abs/1708.02002>`_
- 
-         Args:
-             use_sigmoid (bool, optional): Whether the prediction uses sigmoid
-                 instead of softmax. Defaults to True.
-             gamma (float, optional): The gamma for calculating the modulating
-                 factor. Defaults to 2.0.
-             alpha (float, optional): A balanced form for Focal Loss.
-                 Defaults to 0.25.
-             reduction (str, optional): The method used to reduce the loss into
-                 a scalar. Defaults to 'mean'. Options are "none", "mean" and
-                 "sum".
-             loss_weight (float, optional): Weight of loss. Defaults to 1.0.
-         """
-         super(FocalLoss, self).__init__()
-         assert use_sigmoid is True, 'Only sigmoid focal loss supported now.'
-         self.use_sigmoid = use_sigmoid
-         self.gamma = gamma
-         self.alpha = alpha
-         self.reduction = reduction
-         self.loss_weight = loss_weight
- 
-     def forward(self,
-                 pred,
-                 target,
-                 weight=None,
-                 avg_factor=None,
-                 reduction_override=None):
-         """Forward function.
- 
-         Args:
-             pred (torch.Tensor): The prediction.
-             target (torch.Tensor): The learning label of the prediction.
-             weight (torch.Tensor, optional): The weight of loss for each
-                 prediction. Defaults to None.
-             avg_factor (int, optional): Average factor that is used to average
-                 the loss. Defaults to None.
-             reduction_override (str, optional): The reduction method used to
-                 override the original reduction method of the loss.
-                 Options are "none", "mean" and "sum".
- 
-         Returns:
-             torch.Tensor: The calculated loss
-         """
-         assert reduction_override in (None, 'none', 'mean', 'sum')
-         reduction = (
-             reduction_override if reduction_override else self.reduction)
-         if self.use_sigmoid:
-             if torch.cuda.is_available() and pred.is_cuda:
-                 calculate_loss_func = sigmoid_focal_loss
-             else:
-                 num_classes = pred.size(1)
-                 target = F.one_hot(target, num_classes=num_classes + 1)
-                 target = target[:, :num_classes]
-                 calculate_loss_func = py_sigmoid_focal_loss
- 
-             loss_cls = self.loss_weight * calculate_loss_func(
-                 pred,
-                 target,
-                 weight,
-                 gamma=self.gamma,
-                 alpha=self.alpha,
-                 reduction=reduction,
-                 avg_factor=avg_factor)
- 
-         else:
-             raise NotImplementedError
-         return loss_cls
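For a single logit/label pair, the sigmoid focal loss computed above reduces to `alpha_t * (1 - p_t)^gamma * BCE`, where `p_t` is the probability assigned to the true class. A scalar, math-only sketch (no tensors) that makes the modulating factor easy to inspect:

```python
import math


def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))


def focal_loss_scalar(logit, target, gamma=2.0, alpha=0.25):
    """Scalar sigmoid focal loss for one binary prediction (target in {0, 1})."""
    p = sigmoid(logit)
    pt = p if target == 1 else 1.0 - p          # probability of the true class
    a = alpha if target == 1 else 1.0 - alpha   # class-balance weight
    bce = -math.log(pt)                          # plain binary cross-entropy
    # (1 - pt)^gamma down-weights already well-classified examples
    return a * ((1.0 - pt) ** gamma) * bce


# with gamma = 0 and alpha = 0.5 the loss degenerates to 0.5 * BCE
plain = focal_loss_scalar(0.0, 1, gamma=0.0, alpha=0.5)
# a confident correct prediction is heavily down-weighted vs an uncertain one
easy = focal_loss_scalar(3.0, 1)
hard = focal_loss_scalar(0.0, 1)
```

Setting `gamma=0` recovers (alpha-weighted) cross-entropy, which is a quick sanity check when debugging the tensor version.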
spaces/Andy1621/uniformer_image_segmentation/configs/dnlnet/dnl_r101-d8_512x1024_40k_cityscapes.py DELETED
@@ -1,2 +0,0 @@
- _base_ = './dnl_r50-d8_512x1024_40k_cityscapes.py'
- model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101))
spaces/Andy1621/uniformer_image_segmentation/configs/gcnet/gcnet_r50-d8_769x769_80k_cityscapes.py DELETED
@@ -1,9 +0,0 @@
- _base_ = [
-     '../_base_/models/gcnet_r50-d8.py',
-     '../_base_/datasets/cityscapes_769x769.py', '../_base_/default_runtime.py',
-     '../_base_/schedules/schedule_80k.py'
- ]
- model = dict(
-     decode_head=dict(align_corners=True),
-     auxiliary_head=dict(align_corners=True),
-     test_cfg=dict(mode='slide', crop_size=(769, 769), stride=(513, 513)))
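The `mode='slide'` test config runs inference on overlapping 769x769 crops stepped by 513 pixels. Assuming the usual sliding-window scheme (windows every `stride` pixels, with a final window clamped to the image edge), the number of windows per axis can be sketched as:

```python
import math


def num_windows(size, crop, stride):
    """Number of sliding-window positions along one axis, assuming windows are
    placed every `stride` pixels and the last window is clamped to the edge."""
    return max(math.ceil((size - crop) / stride) + 1, 1)


# for a 1024x2048 Cityscapes image with crop 769 and stride 513:
rows = num_windows(1024, 769, 513)
cols = num_windows(2048, 769, 513)
# image no larger than the crop -> a single window
single = num_windows(769, 769, 513)
```

Since stride (513) is smaller than the crop (769), adjacent windows overlap and the per-pixel logits are averaged where they do.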
spaces/AngoHF/ANGO-Leaderboard/app.py DELETED
@@ -1,24 +0,0 @@
- import gradio as gr
- 
- from components.about import create_about
- from components.detail import create_detail
- from components.submit import create_submit
- from components.top import create_top
- from components.data import create_data
- from components.result import create_result
- 
- with gr.Blocks(title="Ango Leaderboard") as app:
-     top_components = create_top()
-     with gr.Tab("Result"):
-         create_result(top_components)
-     with gr.Tab("Details"):
-         create_detail(top_components)
-     with gr.Tab("Data"):
-         create_data(top_components)
-     with gr.Tab("Submit"):
-         create_submit()
-     with gr.Tab("About"):
-         create_about()
- 
- app.queue()
- app.launch()
spaces/Anonymous-123/ImageNet-Editing/editing_diffusion/guided_diffusion/scripts/classifier_train.py DELETED
@@ -1,226 +0,0 @@
- """
- Train a noised image classifier on ImageNet.
- """
- 
- import argparse
- import os
- 
- import blobfile as bf
- import torch as th
- import torch.distributed as dist
- import torch.nn.functional as F
- from torch.nn.parallel.distributed import DistributedDataParallel as DDP
- from torch.optim import AdamW
- 
- from guided_diffusion import dist_util, logger
- from guided_diffusion.fp16_util import MixedPrecisionTrainer
- from guided_diffusion.image_datasets import load_data
- from guided_diffusion.resample import create_named_schedule_sampler
- from guided_diffusion.script_util import (
-     add_dict_to_argparser,
-     args_to_dict,
-     classifier_and_diffusion_defaults,
-     create_classifier_and_diffusion,
- )
- from guided_diffusion.train_util import parse_resume_step_from_filename, log_loss_dict
- 
- 
- def main():
-     args = create_argparser().parse_args()
- 
-     dist_util.setup_dist()
-     logger.configure()
- 
-     logger.log("creating model and diffusion...")
-     model, diffusion = create_classifier_and_diffusion(
-         **args_to_dict(args, classifier_and_diffusion_defaults().keys())
-     )
-     model.to(dist_util.dev())
-     if args.noised:
-         schedule_sampler = create_named_schedule_sampler(
-             args.schedule_sampler, diffusion
-         )
- 
-     resume_step = 0
-     if args.resume_checkpoint:
-         resume_step = parse_resume_step_from_filename(args.resume_checkpoint)
-         if dist.get_rank() == 0:
-             logger.log(
-                 f"loading model from checkpoint: {args.resume_checkpoint}... at {resume_step} step"
-             )
-             model.load_state_dict(
-                 dist_util.load_state_dict(
-                     args.resume_checkpoint, map_location=dist_util.dev()
-                 )
-             )
- 
-     # Needed for creating correct EMAs and fp16 parameters.
-     dist_util.sync_params(model.parameters())
- 
-     mp_trainer = MixedPrecisionTrainer(
-         model=model, use_fp16=args.classifier_use_fp16, initial_lg_loss_scale=16.0
-     )
- 
-     model = DDP(
-         model,
-         device_ids=[dist_util.dev()],
-         output_device=dist_util.dev(),
-         broadcast_buffers=False,
-         bucket_cap_mb=128,
-         find_unused_parameters=False,
-     )
- 
-     logger.log("creating data loader...")
-     data = load_data(
-         data_dir=args.data_dir,
-         batch_size=args.batch_size,
-         image_size=args.image_size,
-         class_cond=True,
-         random_crop=True,
-     )
-     if args.val_data_dir:
-         val_data = load_data(
-             data_dir=args.val_data_dir,
-             batch_size=args.batch_size,
-             image_size=args.image_size,
-             class_cond=True,
-         )
-     else:
-         val_data = None
- 
-     logger.log(f"creating optimizer...")
-     opt = AdamW(mp_trainer.master_params, lr=args.lr, weight_decay=args.weight_decay)
-     if args.resume_checkpoint:
-         opt_checkpoint = bf.join(
-             bf.dirname(args.resume_checkpoint), f"opt{resume_step:06}.pt"
-         )
-         logger.log(f"loading optimizer state from checkpoint: {opt_checkpoint}")
-         opt.load_state_dict(
-             dist_util.load_state_dict(opt_checkpoint, map_location=dist_util.dev())
-         )
- 
-     logger.log("training classifier model...")
- 
-     def forward_backward_log(data_loader, prefix="train"):
-         batch, extra = next(data_loader)
-         labels = extra["y"].to(dist_util.dev())
- 
-         batch = batch.to(dist_util.dev())
-         # Noisy images
-         if args.noised:
-             t, _ = schedule_sampler.sample(batch.shape[0], dist_util.dev())
-             batch = diffusion.q_sample(batch, t)
-         else:
-             t = th.zeros(batch.shape[0], dtype=th.long, device=dist_util.dev())
- 
-         for i, (sub_batch, sub_labels, sub_t) in enumerate(
-             split_microbatches(args.microbatch, batch, labels, t)
-         ):
-             logits = model(sub_batch, timesteps=sub_t)
-             loss = F.cross_entropy(logits, sub_labels, reduction="none")
- 
-             losses = {}
-             losses[f"{prefix}_loss"] = loss.detach()
-             losses[f"{prefix}_acc@1"] = compute_top_k(
-                 logits, sub_labels, k=1, reduction="none"
-             )
-             losses[f"{prefix}_acc@5"] = compute_top_k(
-                 logits, sub_labels, k=5, reduction="none"
-             )
-             log_loss_dict(diffusion, sub_t, losses)
-             del losses
-             loss = loss.mean()
-             if loss.requires_grad:
-                 if i == 0:
-                     mp_trainer.zero_grad()
-                 mp_trainer.backward(loss * len(sub_batch) / len(batch))
- 
-     for step in range(args.iterations - resume_step):
-         logger.logkv("step", step + resume_step)
-         logger.logkv(
-             "samples",
-             (step + resume_step + 1) * args.batch_size * dist.get_world_size(),
-         )
-         if args.anneal_lr:
-             set_annealed_lr(opt, args.lr, (step + resume_step) / args.iterations)
-         forward_backward_log(data)
-         mp_trainer.optimize(opt)
-         if val_data is not None and not step % args.eval_interval:
-             with th.no_grad():
-                 with model.no_sync():
-                     model.eval()
-                     forward_backward_log(val_data, prefix="val")
-                     model.train()
-         if not step % args.log_interval:
-             logger.dumpkvs()
-         if (
-             step
-             and dist.get_rank() == 0
-             and not (step + resume_step) % args.save_interval
-         ):
-             logger.log("saving model...")
-             save_model(mp_trainer, opt, step + resume_step)
- 
-     if dist.get_rank() == 0:
-         logger.log("saving model...")
-         save_model(mp_trainer, opt, step + resume_step)
-     dist.barrier()
- 
- 
- def set_annealed_lr(opt, base_lr, frac_done):
-     lr = base_lr * (1 - frac_done)
-     for param_group in opt.param_groups:
-         param_group["lr"] = lr
- 
- 
- def save_model(mp_trainer, opt, step):
-     if dist.get_rank() == 0:
-         th.save(
-             mp_trainer.master_params_to_state_dict(mp_trainer.master_params),
-             os.path.join(logger.get_dir(), f"model{step:06d}.pt"),
-         )
-         th.save(opt.state_dict(), os.path.join(logger.get_dir(), f"opt{step:06d}.pt"))
- 
- 
- def compute_top_k(logits, labels, k, reduction="mean"):
-     _, top_ks = th.topk(logits, k, dim=-1)
-     if reduction == "mean":
-         return (top_ks == labels[:, None]).float().sum(dim=-1).mean().item()
-     elif reduction == "none":
-         return (top_ks == labels[:, None]).float().sum(dim=-1)
- 
- 
- def split_microbatches(microbatch, *args):
-     bs = len(args[0])
-     if microbatch == -1 or microbatch >= bs:
-         yield tuple(args)
-     else:
-         for i in range(0, bs, microbatch):
-             yield tuple(x[i : i + microbatch] if x is not None else None for x in args)
- 
- 
- def create_argparser():
-     defaults = dict(
-         data_dir="",
-         val_data_dir="",
-         noised=True,
-         iterations=150000,
-         lr=3e-4,
-         weight_decay=0.0,
-         anneal_lr=False,
-         batch_size=4,
-         microbatch=-1,
-         schedule_sampler="uniform",
-         resume_checkpoint="",
-         log_interval=10,
-         eval_interval=5,
-         save_interval=10000,
-     )
-     defaults.update(classifier_and_diffusion_defaults())
-     parser = argparse.ArgumentParser()
-     add_dict_to_argparser(parser, defaults)
-     return parser
- 
- 
- if __name__ == "__main__":
-     main()
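`split_microbatches` is the piece that lets a large batch be processed in gradient-accumulation chunks. The same generator works on plain Python lists, which makes its slicing behavior easy to check without tensors:

```python
def split_microbatches(microbatch, *args):
    """Yield aligned slices of all sequences in `args`, `microbatch` items at
    a time; microbatch == -1 (or >= batch size) yields the whole batch once."""
    bs = len(args[0])
    if microbatch == -1 or microbatch >= bs:
        yield tuple(args)
    else:
        for i in range(0, bs, microbatch):
            yield tuple(x[i:i + microbatch] if x is not None else None
                        for x in args)


# a batch of 5 samples with labels, split into microbatches of 2
chunks = list(split_microbatches(2, [0, 1, 2, 3, 4], ['a', 'b', 'c', 'd', 'e']))
# microbatch == -1: the batch passes through unsplit
whole = list(split_microbatches(-1, [1, 2]))
```

Note how the caller scales each microbatch loss by `len(sub_batch) / len(batch)` so that the accumulated gradient matches a single full-batch backward pass.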
spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/configs/_base_/schedules/schedule_160k.py DELETED
@@ -1,9 +0,0 @@
- # optimizer
- optimizer = dict(type='SGD', lr=0.01, momentum=0.9, weight_decay=0.0005)
- optimizer_config = dict()
- # learning policy
- lr_config = dict(policy='poly', power=0.9, min_lr=1e-4, by_epoch=False)
- # runtime settings
- runner = dict(type='IterBasedRunner', max_iters=160000)
- checkpoint_config = dict(by_epoch=False, interval=16000)
- evaluation = dict(interval=16000, metric='mIoU')
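The `policy='poly'` schedule decays the learning rate polynomially from `lr` toward `min_lr` over `max_iters`. A sketch of that curve, assuming mmcv's poly policy computes `(base_lr - min_lr) * (1 - iter / max_iters) ** power + min_lr` (the function name is illustrative):

```python
def poly_lr(base_lr, it, max_iters, power=0.9, min_lr=1e-4):
    """Polynomial LR decay from base_lr (iter 0) to min_lr (iter max_iters)."""
    coeff = (1.0 - it / max_iters) ** power
    return (base_lr - min_lr) * coeff + min_lr


start = poly_lr(0.01, 0, 160000)        # full base LR at the first iteration
mid = poly_lr(0.01, 80000, 160000)      # somewhere in between
end = poly_lr(0.01, 160000, 160000)     # floors at min_lr
```

With `by_epoch=False` the schedule is stepped per iteration, matching the `IterBasedRunner` above.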
spaces/Anonymous-sub/Rerender/gmflow_module/utils/dist_utils.py DELETED
@@ -1,99 +0,0 @@
- # Copyright (c) OpenMMLab. All rights reserved.
- # https://github.com/open-mmlab/mmcv/blob/7540cf73ac7e5d1e14d0ffbd9b6759e83929ecfc/mmcv/runner/dist_utils.py
- 
- import os
- import subprocess
- 
- import torch
- import torch.multiprocessing as mp
- from torch import distributed as dist
- 
- 
- def init_dist(launcher, backend='nccl', **kwargs):
-     if mp.get_start_method(allow_none=True) is None:
-         mp.set_start_method('spawn')
-     if launcher == 'pytorch':
-         _init_dist_pytorch(backend, **kwargs)
-     elif launcher == 'mpi':
-         _init_dist_mpi(backend, **kwargs)
-     elif launcher == 'slurm':
-         _init_dist_slurm(backend, **kwargs)
-     else:
-         raise ValueError(f'Invalid launcher type: {launcher}')
- 
- 
- def _init_dist_pytorch(backend, **kwargs):
-     # TODO: use local_rank instead of rank % num_gpus
-     rank = int(os.environ['RANK'])
-     num_gpus = torch.cuda.device_count()
-     torch.cuda.set_device(rank % num_gpus)
-     dist.init_process_group(backend=backend, **kwargs)
- 
- 
- def _init_dist_mpi(backend, **kwargs):
-     rank = int(os.environ['OMPI_COMM_WORLD_RANK'])
-     num_gpus = torch.cuda.device_count()
-     torch.cuda.set_device(rank % num_gpus)
-     dist.init_process_group(backend=backend, **kwargs)
- 
- 
- def _init_dist_slurm(backend, port=None):
-     """Initialize slurm distributed training environment.
-     If argument ``port`` is not specified, then the master port will be system
-     environment variable ``MASTER_PORT``. If ``MASTER_PORT`` is not in system
-     environment variable, then a default port ``29500`` will be used.
-     Args:
-         backend (str): Backend of torch.distributed.
-         port (int, optional): Master port. Defaults to None.
-     """
-     proc_id = int(os.environ['SLURM_PROCID'])
-     ntasks = int(os.environ['SLURM_NTASKS'])
-     node_list = os.environ['SLURM_NODELIST']
-     num_gpus = torch.cuda.device_count()
-     torch.cuda.set_device(proc_id % num_gpus)
-     addr = subprocess.getoutput(
-         f'scontrol show hostname {node_list} | head -n1')
-     # specify master port
-     if port is not None:
-         os.environ['MASTER_PORT'] = str(port)
-     elif 'MASTER_PORT' in os.environ:
-         pass  # use MASTER_PORT in the environment variable
-     else:
-         # 29500 is torch.distributed default port
-         os.environ['MASTER_PORT'] = '29500'
-     # use MASTER_ADDR in the environment variable if it already exists
-     if 'MASTER_ADDR' not in os.environ:
-         os.environ['MASTER_ADDR'] = addr
-     os.environ['WORLD_SIZE'] = str(ntasks)
-     os.environ['LOCAL_RANK'] = str(proc_id % num_gpus)
-     os.environ['RANK'] = str(proc_id)
-     dist.init_process_group(backend=backend)
- 
- 
- def get_dist_info():
-     if dist.is_available():
-         initialized = dist.is_initialized()
-     else:
-         initialized = False
-     if initialized:
-         rank = dist.get_rank()
-         world_size = dist.get_world_size()
-     else:
-         rank = 0
-         world_size = 1
-     return rank, world_size
- 
- 
- def setup_for_distributed(is_master):
-     """
-     This function disables printing when not in master process
-     """
-     import builtins as __builtin__
-     builtin_print = __builtin__.print
- 
-     def print(*args, **kwargs):
-         force = kwargs.pop('force', False)
-         if is_master or force:
-             builtin_print(*args, **kwargs)
- 
-     __builtin__.print = print
 
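The deleted `setup_for_distributed` helper above relies on a pattern worth noting: it monkey-patches `builtins.print` so that only the master rank (or a call passing `force=True`) actually prints. A minimal, torch-free sketch of the same pattern; the `demo` capture helper is illustrative and not part of the deleted file:

```python
import builtins
import contextlib
import io


def setup_for_distributed(is_master):
    """Replace builtins.print so only the master process prints.

    Non-master ranks stay silent unless a call passes force=True.
    """
    builtin_print = builtins.print

    def print(*args, **kwargs):
        force = kwargs.pop('force', False)
        if is_master or force:
            builtin_print(*args, **kwargs)

    builtins.print = print


def demo(is_master):
    """Apply the patch, capture what gets printed, then undo the patch."""
    original = builtins.print
    try:
        setup_for_distributed(is_master)
        buf = io.StringIO()
        with contextlib.redirect_stdout(buf):
            print('hello')               # silenced on non-master ranks
            print('urgent', force=True)  # always printed
        return buf.getvalue()
    finally:
        builtins.print = original  # restore the real print
```

In real training code the patch is applied once per process after `get_dist_info()` determines the rank; the try/finally restore here exists only so the sketch is side-effect free.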
 
spaces/Anonymous-sub/Rerender/gmflow_module/utils/logger.py DELETED
@@ -1,68 +0,0 @@
- import torch
-
- from utils.flow_viz import flow_tensor_to_image
-
-
- class Logger:
-     def __init__(self, lr_scheduler,
-                  summary_writer,
-                  summary_freq=100,
-                  start_step=0,
-                  ):
-         self.lr_scheduler = lr_scheduler
-         self.total_steps = start_step
-         self.running_loss = {}
-         self.summary_writer = summary_writer
-         self.summary_freq = summary_freq
-
-     def print_training_status(self, mode='train'):
-
-         print('step: %06d \t epe: %.3f' % (self.total_steps, self.running_loss['epe'] / self.summary_freq))
-
-         for k in self.running_loss:
-             self.summary_writer.add_scalar(mode + '/' + k,
-                                            self.running_loss[k] / self.summary_freq, self.total_steps)
-             self.running_loss[k] = 0.0
-
-     def lr_summary(self):
-         lr = self.lr_scheduler.get_last_lr()[0]
-         self.summary_writer.add_scalar('lr', lr, self.total_steps)
-
-     def add_image_summary(self, img1, img2, flow_preds, flow_gt, mode='train',
-                           ):
-         if self.total_steps % self.summary_freq == 0:
-             img_concat = torch.cat((img1[0].detach().cpu(), img2[0].detach().cpu()), dim=-1)
-             img_concat = img_concat.type(torch.uint8)  # convert to uint8 to visualize in tensorboard
-
-             flow_pred = flow_tensor_to_image(flow_preds[-1][0])
-             forward_flow_gt = flow_tensor_to_image(flow_gt[0])
-             flow_concat = torch.cat((torch.from_numpy(flow_pred),
-                                      torch.from_numpy(forward_flow_gt)), dim=-1)
-
-             concat = torch.cat((img_concat, flow_concat), dim=-2)
-
-             self.summary_writer.add_image(mode + '/img_pred_gt', concat, self.total_steps)
-
-     def push(self, metrics, mode='train'):
-         self.total_steps += 1
-
-         self.lr_summary()
-
-         for key in metrics:
-             if key not in self.running_loss:
-                 self.running_loss[key] = 0.0
-
-             self.running_loss[key] += metrics[key]
-
-         if self.total_steps % self.summary_freq == 0:
-             self.print_training_status(mode)
-             self.running_loss = {}
-
-     def write_dict(self, results):
-         for key in results:
-             tag = key.split('_')[0]
-             tag = tag + '/' + key
-             self.summary_writer.add_scalar(tag, results[key], self.total_steps)
-
-     def close(self):
-         self.summary_writer.close()
 
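The core of the deleted `Logger` is a running-average pattern: `push` accumulates each metric per step, and every `summary_freq` steps the mean is emitted and the accumulator reset. A dependency-free sketch of just that bookkeeping, with TensorBoard replaced by a plain list; the `RunningAverager` name and `emitted` attribute are illustrative, not from the original:

```python
class RunningAverager:
    """Accumulate metrics and emit their mean every summary_freq steps."""

    def __init__(self, summary_freq=100, start_step=0):
        self.summary_freq = summary_freq
        self.total_steps = start_step
        self.running_loss = {}
        self.emitted = []  # (step, {metric: mean}) records; stand-in for add_scalar

    def push(self, metrics):
        self.total_steps += 1
        # Accumulate each metric, creating its slot on first sight.
        for key, value in metrics.items():
            self.running_loss[key] = self.running_loss.get(key, 0.0) + value
        # Every summary_freq steps, emit the window mean and reset.
        if self.total_steps % self.summary_freq == 0:
            means = {k: v / self.summary_freq
                     for k, v in self.running_loss.items()}
            self.emitted.append((self.total_steps, means))
            self.running_loss = {}
```

For example, with `summary_freq=2`, pushing `epe=1.0` and then `epe=3.0` emits a mean of `2.0` at step 2, after which the accumulator starts a fresh window.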
 
spaces/ArtGAN/Video-Diffusion-WebUI/README.md DELETED
@@ -1,15 +0,0 @@
- ---
- title: Video Diffusion WebUI
- emoji: 🏃
- colorFrom: gray
- colorTo: yellow
- sdk: gradio
- sdk_version: 3.19.0
- app_file: app.py
- pinned: false
- license: apache-2.0
- tags:
- - making-demos
- ---
-
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
 
 
spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/locations/__init__.py DELETED
@@ -1,467 +0,0 @@
- import functools
- import logging
- import os
- import pathlib
- import sys
- import sysconfig
- from typing import Any, Dict, Generator, Optional, Tuple
-
- from pip._internal.models.scheme import SCHEME_KEYS, Scheme
- from pip._internal.utils.compat import WINDOWS
- from pip._internal.utils.deprecation import deprecated
- from pip._internal.utils.virtualenv import running_under_virtualenv
-
- from . import _sysconfig
- from .base import (
-     USER_CACHE_DIR,
-     get_major_minor_version,
-     get_src_prefix,
-     is_osx_framework,
-     site_packages,
-     user_site,
- )
-
- __all__ = [
-     "USER_CACHE_DIR",
-     "get_bin_prefix",
-     "get_bin_user",
-     "get_major_minor_version",
-     "get_platlib",
-     "get_purelib",
-     "get_scheme",
-     "get_src_prefix",
-     "site_packages",
-     "user_site",
- ]
-
-
- logger = logging.getLogger(__name__)
-
-
- _PLATLIBDIR: str = getattr(sys, "platlibdir", "lib")
-
- _USE_SYSCONFIG_DEFAULT = sys.version_info >= (3, 10)
-
-
- def _should_use_sysconfig() -> bool:
-     """This function determines the value of _USE_SYSCONFIG.
-
-     By default, pip uses sysconfig on Python 3.10+.
-     But Python distributors can override this decision by setting:
-         sysconfig._PIP_USE_SYSCONFIG = True / False
-     Rationale in https://github.com/pypa/pip/issues/10647
-
-     This is a function for testability, but should be constant during any one
-     run.
-     """
-     return bool(getattr(sysconfig, "_PIP_USE_SYSCONFIG", _USE_SYSCONFIG_DEFAULT))
-
-
- _USE_SYSCONFIG = _should_use_sysconfig()
-
- if not _USE_SYSCONFIG:
-     # Import distutils lazily to avoid deprecation warnings,
-     # but import it soon enough that it is in memory and available during
-     # a pip reinstall.
-     from . import _distutils
-
- # Be noisy about incompatibilities if this platform "should" be using
- # sysconfig, but is explicitly opting out and using distutils instead.
- if _USE_SYSCONFIG_DEFAULT and not _USE_SYSCONFIG:
-     _MISMATCH_LEVEL = logging.WARNING
- else:
-     _MISMATCH_LEVEL = logging.DEBUG
-
-
- def _looks_like_bpo_44860() -> bool:
-     """The resolution to bpo-44860 will change this incorrect platlib.
-
-     See <https://bugs.python.org/issue44860>.
-     """
-     from distutils.command.install import INSTALL_SCHEMES
-
-     try:
-         unix_user_platlib = INSTALL_SCHEMES["unix_user"]["platlib"]
-     except KeyError:
-         return False
-     return unix_user_platlib == "$usersite"
-
-
- def _looks_like_red_hat_patched_platlib_purelib(scheme: Dict[str, str]) -> bool:
-     platlib = scheme["platlib"]
-     if "/$platlibdir/" in platlib:
-         platlib = platlib.replace("/$platlibdir/", f"/{_PLATLIBDIR}/")
-     if "/lib64/" not in platlib:
-         return False
-     unpatched = platlib.replace("/lib64/", "/lib/")
-     return unpatched.replace("$platbase/", "$base/") == scheme["purelib"]
-
-
- @functools.lru_cache(maxsize=None)
- def _looks_like_red_hat_lib() -> bool:
-     """Red Hat patches platlib in unix_prefix and unix_home, but not purelib.
-
-     This is the only way I can see to tell a Red Hat-patched Python.
-     """
-     from distutils.command.install import INSTALL_SCHEMES
-
-     return all(
-         k in INSTALL_SCHEMES
-         and _looks_like_red_hat_patched_platlib_purelib(INSTALL_SCHEMES[k])
-         for k in ("unix_prefix", "unix_home")
-     )
-
-
- @functools.lru_cache(maxsize=None)
- def _looks_like_debian_scheme() -> bool:
-     """Debian adds two additional schemes."""
-     from distutils.command.install import INSTALL_SCHEMES
-
-     return "deb_system" in INSTALL_SCHEMES and "unix_local" in INSTALL_SCHEMES
-
-
- @functools.lru_cache(maxsize=None)
- def _looks_like_red_hat_scheme() -> bool:
-     """Red Hat patches ``sys.prefix`` and ``sys.exec_prefix``.
-
-     Red Hat's ``00251-change-user-install-location.patch`` changes the install
-     command's ``prefix`` and ``exec_prefix`` to append ``"/local"``. This is
-     (fortunately?) done quite unconditionally, so we create a default command
-     object without any configuration to detect this.
-     """
-     from distutils.command.install import install
-     from distutils.dist import Distribution
-
-     cmd: Any = install(Distribution())
-     cmd.finalize_options()
-     return (
-         cmd.exec_prefix == f"{os.path.normpath(sys.exec_prefix)}/local"
-         and cmd.prefix == f"{os.path.normpath(sys.prefix)}/local"
-     )
-
-
- @functools.lru_cache(maxsize=None)
- def _looks_like_slackware_scheme() -> bool:
-     """Slackware patches sysconfig but fails to patch distutils and site.
-
-     Slackware changes sysconfig's user scheme to use ``"lib64"`` for the lib
-     path, but does not do the same to the site module.
-     """
-     if user_site is None:  # User-site not available.
-         return False
-     try:
-         paths = sysconfig.get_paths(scheme="posix_user", expand=False)
-     except KeyError:  # User-site not available.
-         return False
-     return "/lib64/" in paths["purelib"] and "/lib64/" not in user_site
-
-
- @functools.lru_cache(maxsize=None)
- def _looks_like_msys2_mingw_scheme() -> bool:
-     """MSYS2 patches distutils and sysconfig to use a UNIX-like scheme.
-
-     However, MSYS2 incorrectly patches sysconfig ``nt`` scheme. The fix is
-     likely going to be included in their 3.10 release, so we ignore the warning.
-     See msys2/MINGW-packages#9319.
-
-     MSYS2 MINGW's patch uses lowercase ``"lib"`` instead of the usual uppercase,
-     and is missing the final ``"site-packages"``.
-     """
-     paths = sysconfig.get_paths("nt", expand=False)
-     return all(
-         "Lib" not in p and "lib" in p and not p.endswith("site-packages")
-         for p in (paths[key] for key in ("platlib", "purelib"))
-     )
-
-
- def _fix_abiflags(parts: Tuple[str]) -> Generator[str, None, None]:
-     ldversion = sysconfig.get_config_var("LDVERSION")
-     abiflags = getattr(sys, "abiflags", None)
-
-     # LDVERSION does not end with sys.abiflags. Just return the path unchanged.
-     if not ldversion or not abiflags or not ldversion.endswith(abiflags):
-         yield from parts
-         return
-
-     # Strip sys.abiflags from LDVERSION-based path components.
-     for part in parts:
-         if part.endswith(ldversion):
-             part = part[: (0 - len(abiflags))]
-         yield part
-
-
- @functools.lru_cache(maxsize=None)
- def _warn_mismatched(old: pathlib.Path, new: pathlib.Path, *, key: str) -> None:
-     issue_url = "https://github.com/pypa/pip/issues/10151"
-     message = (
-         "Value for %s does not match. Please report this to <%s>"
-         "\ndistutils: %s"
-         "\nsysconfig: %s"
-     )
-     logger.log(_MISMATCH_LEVEL, message, key, issue_url, old, new)
-
-
- def _warn_if_mismatch(old: pathlib.Path, new: pathlib.Path, *, key: str) -> bool:
-     if old == new:
-         return False
-     _warn_mismatched(old, new, key=key)
-     return True
-
-
- @functools.lru_cache(maxsize=None)
- def _log_context(
-     *,
-     user: bool = False,
-     home: Optional[str] = None,
-     root: Optional[str] = None,
-     prefix: Optional[str] = None,
- ) -> None:
-     parts = [
-         "Additional context:",
-         "user = %r",
-         "home = %r",
-         "root = %r",
-         "prefix = %r",
-     ]
-
-     logger.log(_MISMATCH_LEVEL, "\n".join(parts), user, home, root, prefix)
-
-
- def get_scheme(
-     dist_name: str,
-     user: bool = False,
-     home: Optional[str] = None,
-     root: Optional[str] = None,
-     isolated: bool = False,
-     prefix: Optional[str] = None,
- ) -> Scheme:
-     new = _sysconfig.get_scheme(
-         dist_name,
-         user=user,
-         home=home,
-         root=root,
-         isolated=isolated,
-         prefix=prefix,
-     )
-     if _USE_SYSCONFIG:
-         return new
-
-     old = _distutils.get_scheme(
-         dist_name,
-         user=user,
-         home=home,
-         root=root,
-         isolated=isolated,
-         prefix=prefix,
-     )
-
-     warning_contexts = []
-     for k in SCHEME_KEYS:
-         old_v = pathlib.Path(getattr(old, k))
-         new_v = pathlib.Path(getattr(new, k))
-
-         if old_v == new_v:
-             continue
-
-         # distutils incorrectly put PyPy packages under ``site-packages/python``
-         # in the ``posix_home`` scheme, but PyPy devs said they expect the
-         # directory name to be ``pypy`` instead. So we treat this as a bug fix
-         # and not warn about it. See bpo-43307 and python/cpython#24628.
-         skip_pypy_special_case = (
-             sys.implementation.name == "pypy"
-             and home is not None
-             and k in ("platlib", "purelib")
-             and old_v.parent == new_v.parent
-             and old_v.name.startswith("python")
-             and new_v.name.startswith("pypy")
-         )
-         if skip_pypy_special_case:
-             continue
-
-         # sysconfig's ``osx_framework_user`` does not include ``pythonX.Y`` in
-         # the ``include`` value, but distutils's ``headers`` does. We'll let
-         # CPython decide whether this is a bug or feature. See bpo-43948.
-         skip_osx_framework_user_special_case = (
-             user
-             and is_osx_framework()
-             and k == "headers"
-             and old_v.parent.parent == new_v.parent
-             and old_v.parent.name.startswith("python")
-         )
-         if skip_osx_framework_user_special_case:
-             continue
-
-         # On Red Hat and derived Linux distributions, distutils is patched to
-         # use "lib64" instead of "lib" for platlib.
-         if k == "platlib" and _looks_like_red_hat_lib():
-             continue
-
-         # On Python 3.9+, sysconfig's posix_user scheme sets platlib against
-         # sys.platlibdir, but distutils's unix_user incorrectly continues
-         # using the same $usersite for both platlib and purelib. This creates a
-         # mismatch when sys.platlibdir is not "lib".
-         skip_bpo_44860 = (
-             user
-             and k == "platlib"
-             and not WINDOWS
-             and sys.version_info >= (3, 9)
-             and _PLATLIBDIR != "lib"
-             and _looks_like_bpo_44860()
-         )
-         if skip_bpo_44860:
-             continue
-
-         # Slackware incorrectly patches posix_user to use lib64 instead of lib,
-         # but not usersite to match the location.
-         skip_slackware_user_scheme = (
-             user
-             and k in ("platlib", "purelib")
-             and not WINDOWS
-             and _looks_like_slackware_scheme()
-         )
-         if skip_slackware_user_scheme:
-             continue
-
-         # Both Debian and Red Hat patch Python to place the system site under
-         # /usr/local instead of /usr. Debian also places lib in dist-packages
-         # instead of site-packages, but the /usr/local check should cover it.
-         skip_linux_system_special_case = (
-             not (user or home or prefix or running_under_virtualenv())
-             and old_v.parts[1:3] == ("usr", "local")
-             and len(new_v.parts) > 1
-             and new_v.parts[1] == "usr"
-             and (len(new_v.parts) < 3 or new_v.parts[2] != "local")
-             and (_looks_like_red_hat_scheme() or _looks_like_debian_scheme())
-         )
-         if skip_linux_system_special_case:
-             continue
-
-         # On Python 3.7 and earlier, sysconfig does not include sys.abiflags in
-         # the "pythonX.Y" part of the path, but distutils does.
-         skip_sysconfig_abiflag_bug = (
-             sys.version_info < (3, 8)
-             and not WINDOWS
-             and k in ("headers", "platlib", "purelib")
-             and tuple(_fix_abiflags(old_v.parts)) == new_v.parts
-         )
-         if skip_sysconfig_abiflag_bug:
-             continue
-
-         # MSYS2 MINGW's sysconfig patch does not include the "site-packages"
-         # part of the path. This is incorrect and will be fixed in MSYS.
-         skip_msys2_mingw_bug = (
-             WINDOWS and k in ("platlib", "purelib") and _looks_like_msys2_mingw_scheme()
-         )
-         if skip_msys2_mingw_bug:
-             continue
-
-         # CPython's POSIX install script invokes pip (via ensurepip) against the
-         # interpreter located in the source tree, not the install site. This
-         # triggers special logic in sysconfig that's not present in distutils.
-         # https://github.com/python/cpython/blob/8c21941ddaf/Lib/sysconfig.py#L178-L194
-         skip_cpython_build = (
-             sysconfig.is_python_build(check_home=True)
-             and not WINDOWS
-             and k in ("headers", "include", "platinclude")
-         )
-         if skip_cpython_build:
-             continue
-
-         warning_contexts.append((old_v, new_v, f"scheme.{k}"))
-
-     if not warning_contexts:
-         return old
-
-     # Check if this path mismatch is caused by distutils config files. Those
-     # files will no longer work once we switch to sysconfig, so this raises a
-     # deprecation message for them.
-     default_old = _distutils.distutils_scheme(
-         dist_name,
-         user,
-         home,
-         root,
-         isolated,
-         prefix,
-         ignore_config_files=True,
-     )
-     if any(default_old[k] != getattr(old, k) for k in SCHEME_KEYS):
-         deprecated(
-             reason=(
-                 "Configuring installation scheme with distutils config files "
-                 "is deprecated and will no longer work in the near future. If you "
-                 "are using a Homebrew or Linuxbrew Python, please see discussion "
-                 "at https://github.com/Homebrew/homebrew-core/issues/76621"
-             ),
-             replacement=None,
-             gone_in=None,
-         )
-         return old
-
-     # Post warnings about this mismatch so user can report them back.
-     for old_v, new_v, key in warning_contexts:
-         _warn_mismatched(old_v, new_v, key=key)
-     _log_context(user=user, home=home, root=root, prefix=prefix)
-
-     return old
-
-
- def get_bin_prefix() -> str:
-     new = _sysconfig.get_bin_prefix()
-     if _USE_SYSCONFIG:
-         return new
-
-     old = _distutils.get_bin_prefix()
-     if _warn_if_mismatch(pathlib.Path(old), pathlib.Path(new), key="bin_prefix"):
-         _log_context()
-     return old
-
-
- def get_bin_user() -> str:
-     return _sysconfig.get_scheme("", user=True).scripts
-
-
- def _looks_like_deb_system_dist_packages(value: str) -> bool:
-     """Check if the value is Debian's APT-controlled dist-packages.
-
-     Debian's ``distutils.sysconfig.get_python_lib()`` implementation returns the
-     default package path controlled by APT, but does not patch ``sysconfig`` to
-     do the same. This is similar to the bug worked around in ``get_scheme()``,
-     but here the default is ``deb_system`` instead of ``unix_local``. Ultimately
-     we can't do anything about this Debian bug, and this detection allows us to
-     skip the warning when needed.
-     """
-     if not _looks_like_debian_scheme():
-         return False
-     if value == "/usr/lib/python3/dist-packages":
-         return True
-     return False
-
-
- def get_purelib() -> str:
-     """Return the default pure-Python lib location."""
-     new = _sysconfig.get_purelib()
-     if _USE_SYSCONFIG:
-         return new
-
-     old = _distutils.get_purelib()
-     if _looks_like_deb_system_dist_packages(old):
-         return old
-     if _warn_if_mismatch(pathlib.Path(old), pathlib.Path(new), key="purelib"):
-         _log_context()
-     return old
-
-
- def get_platlib() -> str:
-     """Return the default platform-shared lib location."""
-     new = _sysconfig.get_platlib()
-     if _USE_SYSCONFIG:
-         return new
-
-     from . import _distutils
-
-     old = _distutils.get_platlib()
-     if _looks_like_deb_system_dist_packages(old):
-         return old
-     if _warn_if_mismatch(pathlib.Path(old), pathlib.Path(new), key="platlib"):
-         _log_context()
-     return old
 
 
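The deleted `locations/__init__.py` above hinges on a single decision: use `sysconfig` (the default on Python 3.10+) unless the distributor has set `sysconfig._PIP_USE_SYSCONFIG`. That override check can be exercised in isolation; the `decide` wrapper below is illustrative scaffolding for temporarily setting the attribute, not part of pip:

```python
import sys
import sysconfig

# Mirrors pip's default: sysconfig is used on Python 3.10 and later.
_USE_SYSCONFIG_DEFAULT = sys.version_info >= (3, 10)


def should_use_sysconfig():
    """Mirror pip's check: a distributor override wins, else the 3.10+ default."""
    return bool(getattr(sysconfig, "_PIP_USE_SYSCONFIG", _USE_SYSCONFIG_DEFAULT))


def decide(override=None):
    """Evaluate the check with a temporary _PIP_USE_SYSCONFIG override."""
    had = hasattr(sysconfig, "_PIP_USE_SYSCONFIG")
    old = getattr(sysconfig, "_PIP_USE_SYSCONFIG", None)
    if override is None:
        if had:
            del sysconfig._PIP_USE_SYSCONFIG  # fall back to the version default
    else:
        sysconfig._PIP_USE_SYSCONFIG = override
    try:
        return should_use_sysconfig()
    finally:
        # Restore whatever state the module had before.
        if had:
            sysconfig._PIP_USE_SYSCONFIG = old
        elif hasattr(sysconfig, "_PIP_USE_SYSCONFIG"):
            del sysconfig._PIP_USE_SYSCONFIG
```

This is why the rest of the file is structured as paired `_sysconfig`/`_distutils` lookups with mismatch warnings: on interpreters where the override forces the distutils path, pip still computes the sysconfig answer and reports any divergence.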
 
spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/models/link.py DELETED
@@ -1,531 +0,0 @@
- import functools
- import itertools
- import logging
- import os
- import posixpath
- import re
- import urllib.parse
- from dataclasses import dataclass
- from typing import (
-     TYPE_CHECKING,
-     Any,
-     Dict,
-     List,
-     Mapping,
-     NamedTuple,
-     Optional,
-     Tuple,
-     Union,
- )
-
- from pip._internal.utils.deprecation import deprecated
- from pip._internal.utils.filetypes import WHEEL_EXTENSION
- from pip._internal.utils.hashes import Hashes
- from pip._internal.utils.misc import (
-     pairwise,
-     redact_auth_from_url,
-     split_auth_from_netloc,
-     splitext,
- )
- from pip._internal.utils.models import KeyBasedCompareMixin
- from pip._internal.utils.urls import path_to_url, url_to_path
-
- if TYPE_CHECKING:
-     from pip._internal.index.collector import IndexContent
-
- logger = logging.getLogger(__name__)
-
-
- # Order matters, earlier hashes have a precedence over later hashes for what
- # we will pick to use.
- _SUPPORTED_HASHES = ("sha512", "sha384", "sha256", "sha224", "sha1", "md5")
-
-
- @dataclass(frozen=True)
- class LinkHash:
-     """Links to content may have embedded hash values. This class parses those.
-
-     `name` must be any member of `_SUPPORTED_HASHES`.
-
-     This class can be converted to and from `ArchiveInfo`. While ArchiveInfo intends to
-     be JSON-serializable to conform to PEP 610, this class contains the logic for
-     parsing a hash name and value for correctness, and then checking whether that hash
-     conforms to a schema with `.is_hash_allowed()`."""
-
-     name: str
-     value: str
-
-     _hash_url_fragment_re = re.compile(
-         # NB: we do not validate that the second group (.*) is a valid hex
-         # digest. Instead, we simply keep that string in this class, and then check it
-         # against Hashes when hash-checking is needed. This is easier to debug than
-         # proactively discarding an invalid hex digest, as we handle incorrect hashes
-         # and malformed hashes in the same place.
-         r"[#&]({choices})=([^&]*)".format(
-             choices="|".join(re.escape(hash_name) for hash_name in _SUPPORTED_HASHES)
-         ),
-     )
-
-     def __post_init__(self) -> None:
-         assert self.name in _SUPPORTED_HASHES
-
-     @classmethod
-     def parse_pep658_hash(cls, dist_info_metadata: str) -> Optional["LinkHash"]:
-         """Parse a PEP 658 data-dist-info-metadata hash."""
-         if dist_info_metadata == "true":
-             return None
-         name, sep, value = dist_info_metadata.partition("=")
-         if not sep:
-             return None
-         if name not in _SUPPORTED_HASHES:
-             return None
-         return cls(name=name, value=value)
-
-     @classmethod
-     @functools.lru_cache(maxsize=None)
-     def find_hash_url_fragment(cls, url: str) -> Optional["LinkHash"]:
-         """Search a string for a checksum algorithm name and encoded output value."""
-         match = cls._hash_url_fragment_re.search(url)
-         if match is None:
-             return None
-         name, value = match.groups()
-         return cls(name=name, value=value)
-
-     def as_dict(self) -> Dict[str, str]:
-         return {self.name: self.value}
-
-     def as_hashes(self) -> Hashes:
-         """Return a Hashes instance which checks only for the current hash."""
-         return Hashes({self.name: [self.value]})
-
-     def is_hash_allowed(self, hashes: Optional[Hashes]) -> bool:
-         """
-         Return True if the current hash is allowed by `hashes`.
-         """
-         if hashes is None:
-             return False
-         return hashes.is_hash_allowed(self.name, hex_digest=self.value)
-
-
- def _clean_url_path_part(part: str) -> str:
-     """
-     Clean a "part" of a URL path (i.e. after splitting on "@" characters).
-     """
-     # We unquote prior to quoting to make sure nothing is double quoted.
-     return urllib.parse.quote(urllib.parse.unquote(part))
-
-
- def _clean_file_url_path(part: str) -> str:
-     """
-     Clean the first part of a URL path that corresponds to a local
-     filesystem path (i.e. the first part after splitting on "@" characters).
-     """
-     # We unquote prior to quoting to make sure nothing is double quoted.
-     # Also, on Windows the path part might contain a drive letter which
-     # should not be quoted. On Linux where drive letters do not
-     # exist, the colon should be quoted. We rely on urllib.request
-     # to do the right thing here.
-     return urllib.request.pathname2url(urllib.request.url2pathname(part))
-
-
- # percent-encoded: /
- _reserved_chars_re = re.compile("(@|%2F)", re.IGNORECASE)
-
-
- def _clean_url_path(path: str, is_local_path: bool) -> str:
-     """
-     Clean the path portion of a URL.
-     """
-     if is_local_path:
-         clean_func = _clean_file_url_path
-     else:
-         clean_func = _clean_url_path_part
-
-     # Split on the reserved characters prior to cleaning so that
-     # revision strings in VCS URLs are properly preserved.
-     parts = _reserved_chars_re.split(path)
-
-     cleaned_parts = []
-     for to_clean, reserved in pairwise(itertools.chain(parts, [""])):
-         cleaned_parts.append(clean_func(to_clean))
-         # Normalize %xx escapes (e.g. %2f -> %2F)
-         cleaned_parts.append(reserved.upper())
-
-     return "".join(cleaned_parts)
-
-
- def _ensure_quoted_url(url: str) -> str:
-     """
-     Make sure a link is fully quoted.
-     For example, if ' ' occurs in the URL, it will be replaced with "%20",
-     and without double-quoting other characters.
-     """
-     # Split the URL into parts according to the general structure
-     # `scheme://netloc/path;parameters?query#fragment`.
-     result = urllib.parse.urlparse(url)
-     # If the netloc is empty, then the URL refers to a local filesystem path.
-     is_local_path = not result.netloc
-     path = _clean_url_path(result.path, is_local_path=is_local_path)
-     return urllib.parse.urlunparse(result._replace(path=path))
-
-
- class Link(KeyBasedCompareMixin):
-     """Represents a parsed link from a Package Index's simple URL"""
-
-     __slots__ = [
-         "_parsed_url",
-         "_url",
-         "_hashes",
-         "comes_from",
-         "requires_python",
-         "yanked_reason",
-         "dist_info_metadata",
-         "cache_link_parsing",
-         "egg_fragment",
-     ]
-
-     def __init__(
-         self,
-         url: str,
-         comes_from: Optional[Union[str, "IndexContent"]] = None,
-         requires_python: Optional[str] = None,
-         yanked_reason: Optional[str] = None,
-         dist_info_metadata: Optional[str] = None,
-         cache_link_parsing: bool = True,
-         hashes: Optional[Mapping[str, str]] = None,
-     ) -> None:
-         """
-         :param url: url of the resource pointed to (href of the link)
-         :param comes_from: instance of IndexContent where the link was found,
-             or string.
-         :param requires_python: String containing the `Requires-Python`
-             metadata field, specified in PEP 345. This may be specified by
-             a data-requires-python attribute in the HTML link tag, as
-             described in PEP 503.
-         :param yanked_reason: the reason the file has been yanked, if the
-             file has been yanked, or None if the file hasn't been yanked.
-             This is the value of the "data-yanked" attribute, if present, in
-             a simple repository HTML link. If the file has been yanked but
-             no reason was provided, this should be the empty string. See
-             PEP 592 for more information and the specification.
-         :param dist_info_metadata: the metadata attached to the file, or None if no such
-             metadata is provided. This is the value of the "data-dist-info-metadata"
-             attribute, if present, in a simple repository HTML link. This may be parsed
-             into its own `Link` by `self.metadata_link()`. See PEP 658 for more
-             information and the specification.
-         :param cache_link_parsing: A flag that is used elsewhere to determine
-             whether resources retrieved from this link should be cached. PyPI
-             URLs should generally have this set to False, for example.
-         :param hashes: A mapping of hash names to digests to allow us to
-             determine the validity of a download.
-         """
-
-         # url can be a UNC windows share
-         if url.startswith("\\\\"):
-             url = path_to_url(url)
-
-         self._parsed_url = urllib.parse.urlsplit(url)
-         # Store the url as a private attribute to prevent accidentally
-         # trying to set a new value.
-         self._url = url
-
-         link_hash = LinkHash.find_hash_url_fragment(url)
-         hashes_from_link = {} if link_hash is None else link_hash.as_dict()
-         if hashes is None:
-             self._hashes = hashes_from_link
-         else:
-             self._hashes = {**hashes, **hashes_from_link}
-
-         self.comes_from = comes_from
-         self.requires_python = requires_python if requires_python else None
-         self.yanked_reason = yanked_reason
-         self.dist_info_metadata = dist_info_metadata
-
-         super().__init__(key=url, defining_class=Link)
-
-         self.cache_link_parsing = cache_link_parsing
-         self.egg_fragment = self._egg_fragment()
-
-     @classmethod
-     def from_json(
-         cls,
-         file_data: Dict[str, Any],
-         page_url: str,
-     ) -> Optional["Link"]:
-         """
-         Convert a pypi json document from a simple repository page into a Link.
-         """
-         file_url = file_data.get("url")
-         if file_url is None:
-             return None
-
-         url = _ensure_quoted_url(urllib.parse.urljoin(page_url, file_url))
-         pyrequire = file_data.get("requires-python")
-         yanked_reason = file_data.get("yanked")
-         dist_info_metadata = file_data.get("dist-info-metadata")
-         hashes = file_data.get("hashes", {})
-
-         # The Link.yanked_reason expects an empty string instead of a boolean.
-         if yanked_reason and not isinstance(yanked_reason, str):
-             yanked_reason = ""
-         # The Link.yanked_reason expects None instead of False.
-         elif not yanked_reason:
-             yanked_reason = None
-
-         return cls(
-             url,
-             comes_from=page_url,
-             requires_python=pyrequire,
-             yanked_reason=yanked_reason,
-             hashes=hashes,
-             dist_info_metadata=dist_info_metadata,
-         )
-
-     @classmethod
-     def from_element(
-         cls,
-         anchor_attribs: Dict[str, Optional[str]],
-         page_url: str,
-         base_url: str,
-     ) -> Optional["Link"]:
-         """
-         Convert an anchor element's attributes in a simple repository page to a Link.
-         """
-         href = anchor_attribs.get("href")
-         if not href:
-             return None
-
-         url = _ensure_quoted_url(urllib.parse.urljoin(base_url, href))
-         pyrequire = anchor_attribs.get("data-requires-python")
-         yanked_reason = anchor_attribs.get("data-yanked")
-         dist_info_metadata = anchor_attribs.get("data-dist-info-metadata")
-
-         return cls(
-             url,
-             comes_from=page_url,
-             requires_python=pyrequire,
-             yanked_reason=yanked_reason,
-             dist_info_metadata=dist_info_metadata,
-         )
-
-     def __str__(self) -> str:
-         if self.requires_python:
-             rp = f" (requires-python:{self.requires_python})"
-         else:
-             rp = ""
-         if self.comes_from:
-             return "{} (from {}){}".format(
-                 redact_auth_from_url(self._url), self.comes_from, rp
-             )
-         else:
-             return redact_auth_from_url(str(self._url))
-
-     def __repr__(self) -> str:
-         return f"<Link {self}>"
-
-     @property
-     def url(self) -> str:
-         return self._url
-
-     @property
-     def filename(self) -> str:
-         path = self.path.rstrip("/")
-         name = posixpath.basename(path)
-         if not name:
-             # Make sure we don't leak auth information if the netloc
-             # includes a username and password.
-             netloc, user_pass = split_auth_from_netloc(self.netloc)
-             return netloc
-
-         name = urllib.parse.unquote(name)
-         assert name, f"URL {self._url!r} produced no filename"
-         return name
-
-     @property
-     def file_path(self) -> str:
-         return url_to_path(self.url)
-
-     @property
-     def scheme(self) -> str:
-         return self._parsed_url.scheme
-
-     @property
-     def netloc(self) -> str:
-         """
-         This can contain auth information.
-         """
-         return self._parsed_url.netloc
-
-     @property
-     def path(self) -> str:
-         return urllib.parse.unquote(self._parsed_url.path)
-
-     def splitext(self) -> Tuple[str, str]:
-         return splitext(posixpath.basename(self.path.rstrip("/")))
-
-     @property
-     def ext(self) -> str:
-         return self.splitext()[1]
-
-     @property
-     def url_without_fragment(self) -> str:
-         scheme, netloc, path, query, fragment = self._parsed_url
373
- return urllib.parse.urlunsplit((scheme, netloc, path, query, ""))
374
-
375
- _egg_fragment_re = re.compile(r"[#&]egg=([^&]*)")
376
-
377
- # Per PEP 508.
378
- _project_name_re = re.compile(
379
- r"^([A-Z0-9]|[A-Z0-9][A-Z0-9._-]*[A-Z0-9])$", re.IGNORECASE
380
- )
381
-
382
- def _egg_fragment(self) -> Optional[str]:
383
- match = self._egg_fragment_re.search(self._url)
384
- if not match:
385
- return None
386
-
387
- # An egg fragment looks like a PEP 508 project name, along with
388
- # an optional extras specifier. Anything else is invalid.
389
- project_name = match.group(1)
390
- if not self._project_name_re.match(project_name):
391
- deprecated(
392
- reason=f"{self} contains an egg fragment with a non-PEP 508 name",
393
- replacement="to use the req @ url syntax, and remove the egg fragment",
394
- gone_in="25.0",
395
- issue=11617,
396
- )
397
-
398
- return project_name
399
-
400
- _subdirectory_fragment_re = re.compile(r"[#&]subdirectory=([^&]*)")
401
-
402
- @property
403
- def subdirectory_fragment(self) -> Optional[str]:
404
- match = self._subdirectory_fragment_re.search(self._url)
405
- if not match:
406
- return None
407
- return match.group(1)
408
-
409
- def metadata_link(self) -> Optional["Link"]:
410
- """Implementation of PEP 658 parsing."""
411
- # Note that Link.from_element() parsing the "data-dist-info-metadata" attribute
412
- # from an HTML anchor tag is typically how the Link.dist_info_metadata attribute
413
- # gets set.
414
- if self.dist_info_metadata is None:
415
- return None
416
- metadata_url = f"{self.url_without_fragment}.metadata"
417
- metadata_link_hash = LinkHash.parse_pep658_hash(self.dist_info_metadata)
418
- if metadata_link_hash is None:
419
- return Link(metadata_url)
420
- return Link(metadata_url, hashes=metadata_link_hash.as_dict())
421
-
422
- def as_hashes(self) -> Hashes:
423
- return Hashes({k: [v] for k, v in self._hashes.items()})
424
-
425
- @property
426
- def hash(self) -> Optional[str]:
427
- return next(iter(self._hashes.values()), None)
428
-
429
- @property
430
- def hash_name(self) -> Optional[str]:
431
- return next(iter(self._hashes), None)
432
-
433
- @property
434
- def show_url(self) -> str:
435
- return posixpath.basename(self._url.split("#", 1)[0].split("?", 1)[0])
436
-
437
- @property
438
- def is_file(self) -> bool:
439
- return self.scheme == "file"
440
-
441
- def is_existing_dir(self) -> bool:
442
- return self.is_file and os.path.isdir(self.file_path)
443
-
444
- @property
445
- def is_wheel(self) -> bool:
446
- return self.ext == WHEEL_EXTENSION
447
-
448
- @property
449
- def is_vcs(self) -> bool:
450
- from pip._internal.vcs import vcs
451
-
452
- return self.scheme in vcs.all_schemes
453
-
454
- @property
455
- def is_yanked(self) -> bool:
456
- return self.yanked_reason is not None
457
-
458
- @property
459
- def has_hash(self) -> bool:
460
- return bool(self._hashes)
461
-
462
- def is_hash_allowed(self, hashes: Optional[Hashes]) -> bool:
463
- """
464
- Return True if the link has a hash and it is allowed by `hashes`.
465
- """
466
- if hashes is None:
467
- return False
468
- return any(hashes.is_hash_allowed(k, v) for k, v in self._hashes.items())
469
-
470
-
471
- class _CleanResult(NamedTuple):
472
- """Convert link for equivalency check.
473
-
474
- This is used in the resolver to check whether two URL-specified requirements
475
- likely point to the same distribution and can be considered equivalent. This
476
- equivalency logic avoids comparing URLs literally, which can be too strict
477
- (e.g. "a=1&b=2" vs "b=2&a=1") and produce conflicts unexpecting to users.
478
-
479
- Currently this does three things:
480
-
481
- 1. Drop the basic auth part. This is technically wrong since a server can
482
- serve different content based on auth, but if it does that, it is even
483
- impossible to guarantee two URLs without auth are equivalent, since
484
- the user can input different auth information when prompted. So the
485
- practical solution is to assume the auth doesn't affect the response.
486
- 2. Parse the query to avoid the ordering issue. Note that ordering under the
487
- same key in the query are NOT cleaned; i.e. "a=1&a=2" and "a=2&a=1" are
488
- still considered different.
489
- 3. Explicitly drop most of the fragment part, except ``subdirectory=`` and
490
- hash values, since it should have no impact the downloaded content. Note
491
- that this drops the "egg=" part historically used to denote the requested
492
- project (and extras), which is wrong in the strictest sense, but too many
493
- people are supplying it inconsistently to cause superfluous resolution
494
- conflicts, so we choose to also ignore them.
495
- """
496
-
497
- parsed: urllib.parse.SplitResult
498
- query: Dict[str, List[str]]
499
- subdirectory: str
500
- hashes: Dict[str, str]
501
-
502
-
503
- def _clean_link(link: Link) -> _CleanResult:
504
- parsed = link._parsed_url
505
- netloc = parsed.netloc.rsplit("@", 1)[-1]
506
- # According to RFC 8089, an empty host in file: means localhost.
507
- if parsed.scheme == "file" and not netloc:
508
- netloc = "localhost"
509
- fragment = urllib.parse.parse_qs(parsed.fragment)
510
- if "egg" in fragment:
511
- logger.debug("Ignoring egg= fragment in %s", link)
512
- try:
513
- # If there are multiple subdirectory values, use the first one.
514
- # This matches the behavior of Link.subdirectory_fragment.
515
- subdirectory = fragment["subdirectory"][0]
516
- except (IndexError, KeyError):
517
- subdirectory = ""
518
- # If there are multiple hash values under the same algorithm, use the
519
- # first one. This matches the behavior of Link.hash_value.
520
- hashes = {k: fragment[k][0] for k in _SUPPORTED_HASHES if k in fragment}
521
- return _CleanResult(
522
- parsed=parsed._replace(netloc=netloc, query="", fragment=""),
523
- query=urllib.parse.parse_qs(parsed.query),
524
- subdirectory=subdirectory,
525
- hashes=hashes,
526
- )
527
-
528
-
529
- @functools.lru_cache(maxsize=None)
530
- def links_equivalent(link1: Link, link2: Link) -> bool:
531
- return _clean_link(link1) == _clean_link(link2)
spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/emoji.py DELETED
@@ -1,96 +0,0 @@
- import sys
- from typing import TYPE_CHECKING, Optional, Union
-
- from .jupyter import JupyterMixin
- from .segment import Segment
- from .style import Style
- from ._emoji_codes import EMOJI
- from ._emoji_replace import _emoji_replace
-
- if sys.version_info >= (3, 8):
-     from typing import Literal
- else:
-     from pip._vendor.typing_extensions import Literal  # pragma: no cover
-
-
- if TYPE_CHECKING:
-     from .console import Console, ConsoleOptions, RenderResult
-
-
- EmojiVariant = Literal["emoji", "text"]
-
-
- class NoEmoji(Exception):
-     """No emoji by that name."""
-
-
- class Emoji(JupyterMixin):
-     __slots__ = ["name", "style", "_char", "variant"]
-
-     VARIANTS = {"text": "\uFE0E", "emoji": "\uFE0F"}
-
-     def __init__(
-         self,
-         name: str,
-         style: Union[str, Style] = "none",
-         variant: Optional[EmojiVariant] = None,
-     ) -> None:
-         """A single emoji character.
-
-         Args:
-             name (str): Name of emoji.
-             style (Union[str, Style], optional): Optional style. Defaults to None.
-
-         Raises:
-             NoEmoji: If the emoji doesn't exist.
-         """
-         self.name = name
-         self.style = style
-         self.variant = variant
-         try:
-             self._char = EMOJI[name]
-         except KeyError:
-             raise NoEmoji(f"No emoji called {name!r}")
-         if variant is not None:
-             self._char += self.VARIANTS.get(variant, "")
-
-     @classmethod
-     def replace(cls, text: str) -> str:
-         """Replace emoji markup with corresponding unicode characters.
-
-         Args:
-             text (str): A string with emojis codes, e.g. "Hello :smiley:!"
-
-         Returns:
-             str: A string with emoji codes replaces with actual emoji.
-         """
-         return _emoji_replace(text)
-
-     def __repr__(self) -> str:
-         return f"<emoji {self.name!r}>"
-
-     def __str__(self) -> str:
-         return self._char
-
-     def __rich_console__(
-         self, console: "Console", options: "ConsoleOptions"
-     ) -> "RenderResult":
-         yield Segment(self._char, console.get_style(self.style))
-
-
- if __name__ == "__main__":  # pragma: no cover
-     import sys
-
-     from pip._vendor.rich.columns import Columns
-     from pip._vendor.rich.console import Console
-
-     console = Console(record=True)
-
-     columns = Columns(
-         (f":{name}: {name}" for name in sorted(EMOJI.keys()) if "\u200D" not in name),
-         column_first=True,
-     )
-
-     console.print(columns)
-     if len(sys.argv) > 1:
-         console.save_html(sys.argv[1])
spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/tests/data/test_coco_evaluation.py DELETED
@@ -1,138 +0,0 @@
- # Copyright (c) Facebook, Inc. and its affiliates.
- import contextlib
- import copy
- import io
- import json
- import numpy as np
- import os
- import tempfile
- import unittest
- import torch
- from pycocotools.coco import COCO
- from pycocotools.cocoeval import COCOeval
-
- from detectron2.data import DatasetCatalog
- from detectron2.evaluation import COCOEvaluator
- from detectron2.evaluation.fast_eval_api import COCOeval_opt
- from detectron2.structures import Boxes, Instances
-
-
- class TestCOCOeval(unittest.TestCase):
-     def test_fast_eval(self):
-         # A small set of images/categories from COCO val
-         # fmt: off
-         detections = [{"image_id": 139, "category_id": 1, "bbox": [417.3332824707031, 159.27003479003906, 47.66064453125, 143.00193786621094], "score": 0.9949821829795837, "segmentation": {"size": [426, 640], "counts": "Tc`52W=3N0N4aNN^E7]:4XE1g:8kDMT;U100000001O1gE[Nk8h1dFiNY9Z1aFkN]9g2J3NdN`FlN`9S1cFRN07]9g1bFoM6;X9c1cFoM=8R9g1bFQN>3U9Y30O01OO1O001N2O1N1O4L4L5UNoE3V:CVF6Q:@YF9l9@ZF<k9[O`F=];HYnX2"}}, {"image_id": 139, "category_id": 1, "bbox": [383.5909118652344, 172.0777587890625, 17.959075927734375, 36.94813537597656], "score": 0.7685421705245972, "segmentation": {"size": [426, 640], "counts": "lZP5m0Z<300O100O100000001O00]OlC0T<OnCOT<OnCNX<JnC2bQT3"}}, {"image_id": 139, "category_id": 1, "bbox": [457.8359069824219, 158.88027954101562, 9.89764404296875, 8.771820068359375], "score": 0.07092753797769547, "segmentation": {"size": [426, 640], "counts": "bSo54T=2N2O1001O006ImiW2"}}] # noqa
-         gt_annotations = {"categories": [{"supercategory": "person", "id": 1, "name": "person"}, {"supercategory": "furniture", "id": 65, "name": "bed"}], "images": [{"license": 4, "file_name": "000000000285.jpg", "coco_url": "http://images.cocodataset.org/val2017/000000000285.jpg", "height": 640, "width": 586, "date_captured": "2013-11-18 13:09:47", "flickr_url": "http://farm8.staticflickr.com/7434/9138147604_c6225224b8_z.jpg", "id": 285}, {"license": 2, "file_name": "000000000139.jpg", "coco_url": "http://images.cocodataset.org/val2017/000000000139.jpg", "height": 426, "width": 640, "date_captured": "2013-11-21 01:34:01", "flickr_url": "http://farm9.staticflickr.com/8035/8024364858_9c41dc1666_z.jpg", "id": 139}], "annotations": [{"segmentation": [[428.19, 219.47, 430.94, 209.57, 430.39, 210.12, 421.32, 216.17, 412.8, 217.27, 413.9, 214.24, 422.42, 211.22, 429.29, 201.6, 430.67, 181.8, 430.12, 175.2, 427.09, 168.06, 426.27, 164.21, 430.94, 159.26, 440.29, 157.61, 446.06, 163.93, 448.53, 168.06, 448.53, 173.01, 449.08, 174.93, 454.03, 185.1, 455.41, 188.4, 458.43, 195.0, 460.08, 210.94, 462.28, 226.61, 460.91, 233.76, 454.31, 234.04, 460.08, 256.85, 462.56, 268.13, 465.58, 290.67, 465.85, 293.14, 463.38, 295.62, 452.66, 295.34, 448.26, 294.52, 443.59, 282.7, 446.06, 235.14, 446.34, 230.19, 438.09, 232.39, 438.09, 221.67, 434.24, 221.12, 427.09, 219.74]], "area": 2913.1103999999987, "iscrowd": 0, "image_id": 139, "bbox": [412.8, 157.61, 53.05, 138.01], "category_id": 1, "id": 230831}, {"segmentation": [[384.98, 206.58, 384.43, 199.98, 385.25, 193.66, 385.25, 190.08, 387.18, 185.13, 387.18, 182.93, 386.08, 181.01, 385.25, 178.81, 385.25, 175.79, 388.0, 172.76, 394.88, 172.21, 398.72, 173.31, 399.27, 176.06, 399.55, 183.48, 397.9, 185.68, 395.15, 188.98, 396.8, 193.38, 398.45, 194.48, 399.0, 205.75, 395.43, 207.95, 388.83, 206.03]], "area": 435.1449499999997, "iscrowd": 0, "image_id": 139, "bbox": [384.43, 172.21, 15.12, 35.74], "category_id": 1, "id": 233201}]} # noqa
-         # fmt: on
-
-         # Test a small dataset for typical COCO format
-         experiments = {"full": (detections, gt_annotations, {})}
-
-         # Test what happens if the list of detections or ground truth annotations is empty
-         experiments["empty_dt"] = ([], gt_annotations, {})
-         gt = copy.deepcopy(gt_annotations)
-         gt["annotations"] = []
-         experiments["empty_gt"] = (detections, gt, {})
-
-         # Test changing parameter settings
-         experiments["no_categories"] = (detections, gt_annotations, {"useCats": 0})
-         experiments["no_ious"] = (detections, gt_annotations, {"iouThrs": []})
-         experiments["no_rec_thrs"] = (detections, gt_annotations, {"recThrs": []})
-         experiments["no_max_dets"] = (detections, gt_annotations, {"maxDets": []})
-         experiments["one_max_det"] = (detections, gt_annotations, {"maxDets": [1]})
-         experiments["no_area"] = (detections, gt_annotations, {"areaRng": [], "areaRngLbl": []})
-
-         # Test what happens if one omits different fields from the annotation structure
-         annotation_fields = [
-             "id",
-             "image_id",
-             "category_id",
-             "score",
-             "area",
-             "iscrowd",
-             "ignore",
-             "bbox",
-             "segmentation",
-         ]
-         for a in annotation_fields:
-             gt = copy.deepcopy(gt_annotations)
-             for g in gt["annotations"]:
-                 if a in g:
-                     del g[a]
-             dt = copy.deepcopy(detections)
-             for d in dt:
-                 if a in d:
-                     del d[a]
-             experiments["omit_gt_" + a] = (detections, gt, {})
-             experiments["omit_dt_" + a] = (dt, gt_annotations, {})
-
-         # Compare precision/recall for original COCO PythonAPI to custom optimized one
-         for name, (dt, gt, params) in experiments.items():
-             # Dump to json.
-             try:
-                 with tempfile.TemporaryDirectory() as tmpdir:
-                     json_file_name = os.path.join(tmpdir, "gt_" + name + ".json")
-                     with open(json_file_name, "w") as f:
-                         json.dump(gt, f)
-                     with contextlib.redirect_stdout(io.StringIO()):
-                         coco_api = COCO(json_file_name)
-             except Exception:
-                 pass
-
-             for iou_type in ["bbox", "segm", "keypoints"]:
-                 # Run original COCOeval PythonAPI
-                 api_exception = None
-                 try:
-                     with contextlib.redirect_stdout(io.StringIO()):
-                         coco_dt = coco_api.loadRes(dt)
-                         coco_eval = COCOeval(coco_api, coco_dt, iou_type)
-                         for p, v in params.items():
-                             setattr(coco_eval.params, p, v)
-                         coco_eval.evaluate()
-                         coco_eval.accumulate()
-                         coco_eval.summarize()
-                 except Exception as ex:
-                     api_exception = ex
-
-                 # Run optimized COCOeval_opt API
-                 opt_exception = None
-                 try:
-                     with contextlib.redirect_stdout(io.StringIO()):
-                         coco_dt = coco_api.loadRes(dt)
-                         coco_eval_opt = COCOeval_opt(coco_api, coco_dt, iou_type)
-                         for p, v in params.items():
-                             setattr(coco_eval_opt.params, p, v)
-                         coco_eval_opt.evaluate()
-                         coco_eval_opt.accumulate()
-                         coco_eval_opt.summarize()
-                 except Exception as ex:
-                     opt_exception = ex
-
-                 if api_exception is not None and opt_exception is not None:
-                     # Original API and optimized API should throw the same exception if annotation
-                     # format is bad
-                     api_error = "" if api_exception is None else type(api_exception).__name__
-                     opt_error = "" if opt_exception is None else type(opt_exception).__name__
-                     msg = "%s: comparing COCO APIs, '%s' != '%s'" % (name, api_error, opt_error)
-                     self.assertTrue(api_error == opt_error, msg=msg)
-                 else:
-                     # Original API and optimized API should produce the same precision/recalls
-                     for k in ["precision", "recall"]:
-                         diff = np.abs(coco_eval.eval[k] - coco_eval_opt.eval[k])
-                         abs_diff = np.max(diff) if diff.size > 0 else 0.0
-                         msg = "%s: comparing COCO APIs, %s differs by %f" % (name, k, abs_diff)
-                         self.assertTrue(abs_diff < 1e-4, msg=msg)
-
-     def test_unknown_category(self):
-         dataset = "coco_2017_val_100"
-         evaluator = COCOEvaluator(dataset)
-         evaluator.reset()
-         inputs = DatasetCatalog.get(dataset)[:2]
-         pred = Instances((100, 100))
-         pred.pred_boxes = Boxes(torch.rand(2, 4))
-         pred.scores = torch.rand(2)
-         pred.pred_classes = torch.tensor([10, 80])
-         output = {"instances": pred}
-         evaluator.process(inputs, [output, output])
-         with self.assertRaises(AssertionError):
-             evaluator.evaluate()
spaces/Bart92/RVC_HF/infer/lib/train/utils.py DELETED
@@ -1,478 +0,0 @@
- import argparse
- import glob
- import json
- import logging
- import os
- import subprocess
- import sys
- import shutil
-
- import numpy as np
- import torch
- from scipy.io.wavfile import read
-
- MATPLOTLIB_FLAG = False
-
- logging.basicConfig(stream=sys.stdout, level=logging.DEBUG)
- logger = logging
-
-
- def load_checkpoint_d(checkpoint_path, combd, sbd, optimizer=None, load_opt=1):
-     assert os.path.isfile(checkpoint_path)
-     checkpoint_dict = torch.load(checkpoint_path, map_location="cpu")
-
-     ##################
-     def go(model, bkey):
-         saved_state_dict = checkpoint_dict[bkey]
-         if hasattr(model, "module"):
-             state_dict = model.module.state_dict()
-         else:
-             state_dict = model.state_dict()
-         new_state_dict = {}
-         for k, v in state_dict.items():  # shapes the model expects
-             try:
-                 new_state_dict[k] = saved_state_dict[k]
-                 if saved_state_dict[k].shape != state_dict[k].shape:
-                     logger.warn(
-                         "shape-%s-mismatch. need: %s, get: %s",
-                         k,
-                         state_dict[k].shape,
-                         saved_state_dict[k].shape,
-                     )  #
-                     raise KeyError
-             except:
-                 # logger.info(traceback.format_exc())
-                 logger.info("%s is not in the checkpoint", k)  # missing from the pretrained checkpoint
-                 new_state_dict[k] = v  # keep the model's own randomly initialized value
-         if hasattr(model, "module"):
-             model.module.load_state_dict(new_state_dict, strict=False)
-         else:
-             model.load_state_dict(new_state_dict, strict=False)
-         return model
-
-     go(combd, "combd")
-     model = go(sbd, "sbd")
-     #############
-     logger.info("Loaded model weights")
-
-     iteration = checkpoint_dict["iteration"]
-     learning_rate = checkpoint_dict["learning_rate"]
-     if (
-         optimizer is not None and load_opt == 1
-     ):  # if the optimizer state cannot be loaded (e.g. it is empty), reinitialize; this may also affect the lr schedule update, so catch at the outermost level of the train script
-         # try:
-         optimizer.load_state_dict(checkpoint_dict["optimizer"])
-         # except:
-         #     traceback.print_exc()
-     logger.info("Loaded checkpoint '{}' (epoch {})".format(checkpoint_path, iteration))
-     return model, optimizer, learning_rate, iteration
-
-
- # def load_checkpoint(checkpoint_path, model, optimizer=None):
- #     assert os.path.isfile(checkpoint_path)
- #     checkpoint_dict = torch.load(checkpoint_path, map_location='cpu')
- #     iteration = checkpoint_dict['iteration']
- #     learning_rate = checkpoint_dict['learning_rate']
- #     if optimizer is not None:
- #         optimizer.load_state_dict(checkpoint_dict['optimizer'])
- #     # print(1111)
- #     saved_state_dict = checkpoint_dict['model']
- #     # print(1111)
- #
- #     if hasattr(model, 'module'):
- #         state_dict = model.module.state_dict()
- #     else:
- #         state_dict = model.state_dict()
- #     new_state_dict = {}
- #     for k, v in state_dict.items():
- #         try:
- #             new_state_dict[k] = saved_state_dict[k]
- #         except:
- #             logger.info("%s is not in the checkpoint" % k)
- #             new_state_dict[k] = v
- #     if hasattr(model, 'module'):
- #         model.module.load_state_dict(new_state_dict)
- #     else:
- #         model.load_state_dict(new_state_dict)
- #     logger.info("Loaded checkpoint '{}' (epoch {})" .format(
- #         checkpoint_path, iteration))
- #     return model, optimizer, learning_rate, iteration
- def load_checkpoint(checkpoint_path, model, optimizer=None, load_opt=1):
-     assert os.path.isfile(checkpoint_path)
-     checkpoint_dict = torch.load(checkpoint_path, map_location="cpu")
-
-     saved_state_dict = checkpoint_dict["model"]
-     if hasattr(model, "module"):
-         state_dict = model.module.state_dict()
-     else:
-         state_dict = model.state_dict()
-     new_state_dict = {}
-     for k, v in state_dict.items():  # shapes the model expects
-         try:
-             new_state_dict[k] = saved_state_dict[k]
-             if saved_state_dict[k].shape != state_dict[k].shape:
-                 logger.warn(
-                     "shape-%s-mismatch|need-%s|get-%s",
-                     k,
-                     state_dict[k].shape,
-                     saved_state_dict[k].shape,
-                 )  #
-                 raise KeyError
-         except:
-             # logger.info(traceback.format_exc())
-             logger.info("%s is not in the checkpoint", k)  # missing from the pretrained checkpoint
-             new_state_dict[k] = v  # keep the model's own randomly initialized value
-     if hasattr(model, "module"):
-         model.module.load_state_dict(new_state_dict, strict=False)
-     else:
-         model.load_state_dict(new_state_dict, strict=False)
-     logger.info("Loaded model weights")
-
-     iteration = checkpoint_dict["iteration"]
-     learning_rate = checkpoint_dict["learning_rate"]
-     if (
-         optimizer is not None and load_opt == 1
-     ):  # if the optimizer state cannot be loaded (e.g. it is empty), reinitialize; this may also affect the lr schedule update, so catch at the outermost level of the train script
-         # try:
-         optimizer.load_state_dict(checkpoint_dict["optimizer"])
-         # except:
-         #     traceback.print_exc()
-     logger.info("Loaded checkpoint '{}' (epoch {})".format(checkpoint_path, iteration))
-     return model, optimizer, learning_rate, iteration
-
-
- def save_checkpoint(model, optimizer, learning_rate, iteration, checkpoint_path):
-     logger.info(
-         "Saving model and optimizer state at epoch {} to {}".format(
-             iteration, checkpoint_path
-         )
-     )
-     if hasattr(model, "module"):
-         state_dict = model.module.state_dict()
-     else:
-         state_dict = model.state_dict()
-     torch.save(
-         {
-             "model": state_dict,
-             "iteration": iteration,
-             "optimizer": optimizer.state_dict(),
-             "learning_rate": learning_rate,
-         },
-         checkpoint_path,
-     )
-
-
- def save_checkpoint_d(combd, sbd, optimizer, learning_rate, iteration, checkpoint_path):
-     logger.info(
-         "Saving model and optimizer state at epoch {} to {}".format(
-             iteration, checkpoint_path
-         )
-     )
-     if hasattr(combd, "module"):
-         state_dict_combd = combd.module.state_dict()
-     else:
-         state_dict_combd = combd.state_dict()
-     if hasattr(sbd, "module"):
-         state_dict_sbd = sbd.module.state_dict()
-     else:
-         state_dict_sbd = sbd.state_dict()
-     torch.save(
-         {
-             "combd": state_dict_combd,
-             "sbd": state_dict_sbd,
-             "iteration": iteration,
-             "optimizer": optimizer.state_dict(),
-             "learning_rate": learning_rate,
-         },
-         checkpoint_path,
-     )
-
-
- def summarize(
-     writer,
-     global_step,
-     scalars={},
-     histograms={},
-     images={},
-     audios={},
-     audio_sampling_rate=22050,
- ):
-     for k, v in scalars.items():
-         writer.add_scalar(k, v, global_step)
-     for k, v in histograms.items():
-         writer.add_histogram(k, v, global_step)
-     for k, v in images.items():
-         writer.add_image(k, v, global_step, dataformats="HWC")
-     for k, v in audios.items():
-         writer.add_audio(k, v, global_step, audio_sampling_rate)
-
-
- def latest_checkpoint_path(dir_path, regex="G_*.pth"):
-     f_list = glob.glob(os.path.join(dir_path, regex))
-     f_list.sort(key=lambda f: int("".join(filter(str.isdigit, f))))
-     x = f_list[-1]
-     logger.debug(x)
-     return x
-
-
- def plot_spectrogram_to_numpy(spectrogram):
-     global MATPLOTLIB_FLAG
-     if not MATPLOTLIB_FLAG:
-         import matplotlib
-
-         matplotlib.use("Agg")
-         MATPLOTLIB_FLAG = True
-         mpl_logger = logging.getLogger("matplotlib")
-         mpl_logger.setLevel(logging.WARNING)
-     import matplotlib.pylab as plt
-     import numpy as np
-
-     fig, ax = plt.subplots(figsize=(10, 2))
-     im = ax.imshow(spectrogram, aspect="auto", origin="lower", interpolation="none")
-     plt.colorbar(im, ax=ax)
-     plt.xlabel("Frames")
-     plt.ylabel("Channels")
-     plt.tight_layout()
-
-     fig.canvas.draw()
-     data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep="")
-     data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,))
-     plt.close()
-     return data
-
-
- def plot_alignment_to_numpy(alignment, info=None):
-     global MATPLOTLIB_FLAG
-     if not MATPLOTLIB_FLAG:
-         import matplotlib
-
-         matplotlib.use("Agg")
-         MATPLOTLIB_FLAG = True
-         mpl_logger = logging.getLogger("matplotlib")
-         mpl_logger.setLevel(logging.WARNING)
-     import matplotlib.pylab as plt
-     import numpy as np
-
-     fig, ax = plt.subplots(figsize=(6, 4))
-     im = ax.imshow(
-         alignment.transpose(), aspect="auto", origin="lower", interpolation="none"
-     )
-     fig.colorbar(im, ax=ax)
-     xlabel = "Decoder timestep"
-     if info is not None:
-         xlabel += "\n\n" + info
-     plt.xlabel(xlabel)
-     plt.ylabel("Encoder timestep")
-     plt.tight_layout()
-
-     fig.canvas.draw()
-     data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep="")
-     data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,))
-     plt.close()
-     return data
-
-
- def load_wav_to_torch(full_path):
-     sampling_rate, data = read(full_path)
-     return torch.FloatTensor(data.astype(np.float32)), sampling_rate
-
-
- def load_filepaths_and_text(filename, split="|"):
-     with open(filename, encoding="utf-8") as f:
-         filepaths_and_text = [line.strip().split(split) for line in f]
-     return filepaths_and_text
-
-
- def get_hparams(init=True):
-     """
-     todo:
-     final group of options:
-         save frequency, total epochs  done
-         batch size  done
-         pretrainG, pretrainD  done
-         GPU ids: os.en["CUDA_VISIBLE_DEVICES"]  done
-         if_latest  done
-         model: if_f0  done
-         sample rate: pick config automatically  done
-         whether to cache the dataset in GPU memory: if_cache_data_in_gpu  done
-
-     -m:
-         decide the training_files path automatically, replacing hps.data.training_files in train_nsf_load_pretrain.py  done
-         -c is no longer needed
-     """
-     parser = argparse.ArgumentParser()
-     parser.add_argument(
-         "-se",
-         "--save_every_epoch",
-         type=int,
-         required=True,
-         help="checkpoint save frequency (epoch)",
-     )
-     parser.add_argument(
-         "-te", "--total_epoch", type=int, required=True, help="total_epoch"
-     )
-     parser.add_argument(
-         "-pg", "--pretrainG", type=str, default="", help="Pretrained Generator path"
-     )
-     parser.add_argument(
-         "-pd", "--pretrainD", type=str, default="", help="Pretrained Discriminator path"
-     )
-     parser.add_argument("-g", "--gpus", type=str, default="0", help="split by -")
-     parser.add_argument(
-         "-bs", "--batch_size", type=int, required=True, help="batch size"
-     )
-     parser.add_argument(
-         "-e", "--experiment_dir", type=str, required=True, help="experiment dir"
-     )  # -m
-     parser.add_argument(
-         "-sr", "--sample_rate", type=str, required=True, help="sample rate, 32k/40k/48k"
-     )
-     parser.add_argument(
-         "-sw",
-         "--save_every_weights",
-         type=str,
-         default="0",
-         help="save the extracted model in weights directory when saving checkpoints",
-     )
-     parser.add_argument(
-         "-v", "--version", type=str, required=True, help="model version"
-     )
-     parser.add_argument(
-         "-f0",
-         "--if_f0",
-         type=int,
-         required=True,
-         help="use f0 as one of the inputs of the model, 1 or 0",
-     )
-     parser.add_argument(
-         "-l",
-         "--if_latest",
-         type=int,
-         required=True,
-         help="if only save the latest G/D pth file, 1 or 0",
-     )
-     parser.add_argument(
-         "-c",
-         "--if_cache_data_in_gpu",
-         type=int,
-         required=True,
-         help="if caching the dataset in GPU memory, 1 or 0",
-     )
-
-     args = parser.parse_args()
-     name = args.experiment_dir
-     experiment_dir = os.path.join("./logs", args.experiment_dir)
-
-     config_save_path = os.path.join(experiment_dir, "config.json")
-     with open(config_save_path, "r") as f:
-         config = json.load(f)
-
-     hparams = HParams(**config)
-     hparams.model_dir = hparams.experiment_dir = experiment_dir
-     hparams.save_every_epoch = args.save_every_epoch
-     hparams.name = name
-     hparams.total_epoch = args.total_epoch
-     hparams.pretrainG = args.pretrainG
-     hparams.pretrainD = args.pretrainD
-     hparams.version = args.version
-     hparams.gpus = args.gpus
-     hparams.train.batch_size = args.batch_size
-     hparams.sample_rate = args.sample_rate
-     hparams.if_f0 = args.if_f0
-     hparams.if_latest = args.if_latest
-     hparams.save_every_weights = args.save_every_weights
-     hparams.if_cache_data_in_gpu = args.if_cache_data_in_gpu
-     hparams.data.training_files = "%s/filelist.txt" % experiment_dir
-     return hparams
-
-
- def get_hparams_from_dir(model_dir):
-     config_save_path = os.path.join(model_dir, "config.json")
-     with open(config_save_path, "r") as f:
-         data = f.read()
-     config = json.loads(data)
-
-     hparams = HParams(**config)
-     hparams.model_dir = model_dir
-     return hparams
-
-
- def get_hparams_from_file(config_path):
-     with open(config_path, "r") as f:
-         data = f.read()
-     config = json.loads(data)
-
-     hparams = HParams(**config)
-     return hparams
-
-
- def check_git_hash(model_dir):
-     source_dir = os.path.dirname(os.path.realpath(__file__))
-     if not os.path.exists(os.path.join(source_dir, ".git")):
-         logger.warn(
-             "{} is not a git repository, therefore hash value comparison will be ignored.".format(
-                 source_dir
-             )
-         )
-         return
-
-     cur_hash = subprocess.getoutput("git rev-parse HEAD")
-
-     path = os.path.join(model_dir, "githash")
-     if os.path.exists(path):
-         saved_hash = open(path).read()
-         if saved_hash != cur_hash:
-             logger.warn(
-                 "git hash values are different. {}(saved) != {}(current)".format(
427
- saved_hash[:8], cur_hash[:8]
428
- )
429
- )
430
- else:
431
- open(path, "w").write(cur_hash)
432
-
433
-
434
- def get_logger(model_dir, filename="train.log"):
435
- global logger
436
- logger = logging.getLogger(os.path.basename(model_dir))
437
- logger.setLevel(logging.DEBUG)
438
-
439
- formatter = logging.Formatter("%(asctime)s\t%(name)s\t%(levelname)s\t%(message)s")
440
- if not os.path.exists(model_dir):
441
- os.makedirs(model_dir)
442
- h = logging.FileHandler(os.path.join(model_dir, filename))
443
- h.setLevel(logging.DEBUG)
444
- h.setFormatter(formatter)
445
- logger.addHandler(h)
446
- return logger
447
-
448
-
449
- class HParams:
450
- def __init__(self, **kwargs):
451
- for k, v in kwargs.items():
452
- if type(v) == dict:
453
- v = HParams(**v)
454
- self[k] = v
455
-
456
- def keys(self):
457
- return self.__dict__.keys()
458
-
459
- def items(self):
460
- return self.__dict__.items()
461
-
462
- def values(self):
463
- return self.__dict__.values()
464
-
465
- def __len__(self):
466
- return len(self.__dict__)
467
-
468
- def __getitem__(self, key):
469
- return getattr(self, key)
470
-
471
- def __setitem__(self, key, value):
472
- return setattr(self, key, value)
473
-
474
- def __contains__(self, key):
475
- return key in self.__dict__
476
-
477
- def __repr__(self):
478
- return self.__dict__.__repr__()
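For reference, the `HParams` class removed in the diff above is a dict-backed namespace that recursively wraps nested config dicts. A minimal stand-alone sketch of that behavior (a reconstruction for illustration, not the project's exact code):

```python
# Minimal sketch of an HParams-style container: nested dicts are wrapped
# recursively, and values are reachable by attribute or item access.
class HParams:
    def __init__(self, **kwargs):
        for k, v in kwargs.items():
            if isinstance(v, dict):
                v = HParams(**v)  # nested dicts become nested HParams
            self[k] = v

    def __getitem__(self, key):
        return getattr(self, key)

    def __setitem__(self, key, value):
        return setattr(self, key, value)

    def __contains__(self, key):
        return key in self.__dict__

    def __repr__(self):
        return repr(self.__dict__)


hp = HParams(train={"batch_size": 4}, sample_rate="40k")
print(hp.train.batch_size)   # attribute access into the nested config
print(hp["sample_rate"])     # item access works as well
```

This is why the deleted `get_hparams` code can write `hparams.train.batch_size = args.batch_size` after loading a plain JSON config.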
 
spaces/Bart92/RVC_HF/tools/torchgate/utils.py DELETED
@@ -1,66 +0,0 @@
- import torch
- from torch.types import Number
- 
- 
- @torch.no_grad()
- def amp_to_db(x: torch.Tensor, eps=torch.finfo(torch.float64).eps, top_db=40) -> torch.Tensor:
-     """
-     Convert the input tensor from amplitude to decibel scale.
- 
-     Arguments:
-         x {[torch.Tensor]} -- [Input tensor.]
- 
-     Keyword Arguments:
-         eps {[float]} -- [Small value to avoid numerical instability.]
-                          (default: {torch.finfo(torch.float64).eps})
-         top_db {[float]} -- [threshold the output at ``top_db`` below the peak]
-                             (default: {40})
- 
-     Returns:
-         [torch.Tensor] -- [Output tensor in decibel scale.]
-     """
-     x_db = 20 * torch.log10(x.abs() + eps)
-     return torch.max(x_db, (x_db.max(-1).values - top_db).unsqueeze(-1))
- 
- 
- @torch.no_grad()
- def temperature_sigmoid(x: torch.Tensor, x0: float, temp_coeff: float) -> torch.Tensor:
-     """
-     Apply a sigmoid function with temperature scaling.
- 
-     Arguments:
-         x {[torch.Tensor]} -- [Input tensor.]
-         x0 {[float]} -- [Parameter that controls the threshold of the sigmoid.]
-         temp_coeff {[float]} -- [Parameter that controls the slope of the sigmoid.]
- 
-     Returns:
-         [torch.Tensor] -- [Output tensor after applying the sigmoid with temperature scaling.]
-     """
-     return torch.sigmoid((x - x0) / temp_coeff)
- 
- 
- @torch.no_grad()
- def linspace(start: Number, stop: Number, num: int = 50, endpoint: bool = True, **kwargs) -> torch.Tensor:
-     """
-     Generate a linearly spaced 1-D tensor.
- 
-     Arguments:
-         start {[Number]} -- [The starting value of the sequence.]
-         stop {[Number]} -- [The end value of the sequence, unless `endpoint` is set to False.
-                             In that case, the sequence consists of all but the last of ``num + 1``
-                             evenly spaced samples, so that `stop` is excluded. Note that the step
-                             size changes when `endpoint` is False.]
- 
-     Keyword Arguments:
-         num {[int]} -- [Number of samples to generate. Default is 50. Must be non-negative.]
-         endpoint {[bool]} -- [If True, `stop` is the last sample. Otherwise, it is not included.
-                               Default is True.]
-         **kwargs -- [Additional arguments to be passed to the underlying PyTorch `linspace` function.]
- 
-     Returns:
-         [torch.Tensor] -- [1-D tensor of `num` equally spaced samples from `start` to `stop`.]
-     """
-     if endpoint:
-         return torch.linspace(start, stop, num, **kwargs)
-     else:
-         return torch.linspace(start, stop, num + 1, **kwargs)[:-1]
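For reference, the two less obvious helpers deleted above can be sketched without PyTorch. This is a plain-Python approximation (list inputs instead of tensors, an arbitrary small `eps`), not the removed implementation itself:

```python
import math

# Sketch of amp_to_db: convert amplitudes to dB and clamp everything
# below (peak - top_db), mirroring the torch.max(...) line above.
def amp_to_db(xs, eps=1e-12, top_db=40.0):
    db = [20.0 * math.log10(abs(x) + eps) for x in xs]
    floor = max(db) - top_db  # values quieter than this are clamped
    return [max(d, floor) for d in db]

# Sketch of the endpoint-excluded linspace: take num samples out of
# num + 1 evenly spaced points so that `stop` itself is dropped.
# (Assumes num >= 2 for the endpoint=True branch.)
def linspace(start, stop, num=50, endpoint=True):
    if endpoint:
        step = (stop - start) / (num - 1)
    else:
        step = (stop - start) / num
    return [start + i * step for i in range(num)]

print(amp_to_db([1.0, 0.001]))            # quiet value clamped to peak - 40 dB
print(linspace(0.0, 1.0, 5, endpoint=False))
```

Note how the step size changes between the two `linspace` branches, which is exactly what the removed docstring warns about.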
 
spaces/Beasto/Face_To_Anime_Cyclegan/app.py DELETED
@@ -1,45 +0,0 @@
- import streamlit as st
- import tensorflow as tf
- import numpy as np
- from PIL import Image
- import tensorflow_addons as tfa
- 
- from tensorflow.keras.utils import custom_object_scope
- 
- # Define a function to create the InstanceNormalization layer
- def create_in():
-     return tfa.layers.InstanceNormalization()
- 
- 
- def model_out(model_path, img):
-     with custom_object_scope({'InstanceNormalization': create_in}):
-         model = tf.keras.models.load_model(model_path)
-     img = (img - 127.5) / 127.5
-     img = np.expand_dims(img, 0)
-     pred = model.predict(img)
-     pred = np.asarray(pred)
-     return pred[0]
- 
- st.title("Face to Anime cyclegan")
- face = st.file_uploader("face image input")
- 
- if face is not None:
-     img = Image.open(face)
-     img = img.resize((256, 256))
-     img = np.array(img)
-     pred = model_out('anime_to_face2.h5', img)
-     st.image(img, caption="Uploaded Image")
-     st.image(((pred + 1) * 127.5).astype(np.uint8), caption="Generated Anime image")
- 
- 
- st.header('Which architecture did I use, ResNet blocks or a U-Net architecture?')
- st.write('I used the ResNet architecture')
- st.header('Problems:')
- st.write('Sometimes (most of the time) it generates cursed images')
- 
- st.header('What hardware did I train it on?')
- st.write('I trained the model on a Kaggle notebook with a P100 GPU and 13 GB of RAM, because my PC would not be in a good state if I trained the CycleGAN model on Intel HD graphics')
- st.header('Why did I make this model?')
- st.subheader('I made this model to extend my experience, but mostly for FUN!!!!')
- st.write("-------------------------------------------------")
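The `(img - 127.5) / 127.5` and `(pred + 1) * 127.5` expressions in the app above are the usual CycleGAN pixel normalization round-trip between `uint8` images and the generator's `[-1, 1]` range. A small NumPy sketch of just that transformation (assuming `uint8` input, not the app's model code):

```python
import numpy as np

# Map uint8 pixels [0, 255] into the generator's [-1, 1] input range.
def to_model_range(img_uint8):
    return (img_uint8.astype(np.float32) - 127.5) / 127.5

# Map generator output in [-1, 1] back to displayable uint8 pixels.
def to_display_range(pred):
    return ((pred + 1.0) * 127.5).astype(np.uint8)

img = np.array([[0, 255]], dtype=np.uint8)
norm = to_model_range(img)       # exactly -1.0 and 1.0 at the extremes
back = to_display_range(norm)    # extremes round-trip to 0 and 255
print(norm, back)
```

One design note: `.astype(np.uint8)` truncates rather than rounds, so mid-range pixels can shift by one level on the round trip; extremes are exact.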
 
 
spaces/Benson/text-generation/Examples/Amazon Prime Mod Apk Premium Descargar La ltima Versin (2020).md DELETED
@@ -1,125 +0,0 @@
- 
- <h1>Amazon Prime Mod APK Premium Download Latest Version (2020)</h1>
- <p>Are you looking for a way to enjoy all the benefits of Amazon Prime without paying a cent? If so, you may have come across some websites or apps that claim to offer a modified version of Amazon Prime that can unlock all the premium features for free. But before you download and install any such app, you should be aware of the risks and consequences of using Amazon Prime Mod APK. In this article, we will explain what Amazon Prime is, what its benefits are, what Amazon Prime Mod APK is, why people use it, what the dangers of using it are, and how to prevent or avoid using it.</p>
- <h2>amazon prime mod apk premium download latest version (2020)</h2><br /><p><b><b>Download File</b> &#9658;&#9658;&#9658;&#9658;&#9658; <a href="https://bltlly.com/2v6L1S">https://bltlly.com/2v6L1S</a></b></p><br /><br />
- <h2>What is Amazon Prime and what are its benefits?</h2>
- <p>Amazon Prime is a subscription service that offers a wide range of perks and benefits to its members. Some of the most popular benefits include:</p>
- <h3>Fast, free shipping</h3>
- <p>As a Prime member, you can enjoy free one-day, two-day, or same-day shipping on millions of eligible items on Amazon. You can also choose no-rush shipping and earn rewards toward future purchases. You can also get free release-date delivery on pre-ordered items.</p>
- <h3>Exclusive deals and discounts</h3>
- <p>As a Prime member, you can access exclusive deals and discounts on various products and services on Amazon. You can also get early access to Lightning Deals and Prime Day offers. You can also save money on groceries and household items with Prime Pantry and Whole Foods Market.</p>
- <h3>Unlimited streaming of movies, shows, music, and more</h3>
- <p>As a Prime member, you can stream unlimited movies and TV shows from Prime Video, including Amazon Originals such as The Boys, The Marvelous Mrs. Maisel, The Man in the High Castle, and more. You can also stream unlimited music from Prime Music, including over 2 million songs and playlists. You can also get free access to books, magazines, comics, and audiobooks from Prime Reading and Audible Channels. You can also play free games and get free in-game loot with Prime Gaming.</p>
- <h2>What is Amazon Prime Mod APK and why do people use it?</h2>
- <p>Amazon Prime Mod APK is a modified version of the original Amazon Prime app that claims to unlock all the premium features for free. This means you can use all the benefits of Amazon Prime without paying for a subscription. Some of the premium features you can access with Amazon Prime Mod APK include:</p>
- <h3>A modified version of the original app that unlocks premium features for free</h3>
- <ul>
- <li>Watch any movie or show from Prime Video without ads or restrictions</li>
- <li>Download any movie or show from Prime Video for offline viewing</li>
- <li>Listen to any song or album from Prime Music without ads or skips</li>
- <li>Download any song or album from Prime Music for offline listening</li>
- <li>Read any book or magazine from Prime Reading without limits</li>
- <li>Listen to any audiobook from Audible Channels without subscriptions or credits</li>
- <li>Play any game or get any loot from Prime Gaming without ads or purchases</li>
- </ul>
- <h3>Examples of premium features that can be accessed with the mod apk</h3>
- <p>To give you an idea of what you can do with Amazon Prime Mod APK, here are some examples of premium features you can access with it:</p>
- <table>
- <tr>
- <th>Feature</th>
- <th>Original app</th>
- <th>Mod APK</th>
- </tr>
- <tr>
- <td>Watch The Tomorrow War</td>
- <td>Available only to Prime members</td>
- <td>Available to anyone</td>
- </tr>
- <tr>
- <td>Listen to Taylor Swift's folklore</td>
- <td>Limited to 30 seconds per song for non-members</td>
- <td>Unlimited streaming and downloading for anyone</td>
- </tr>
- <tr>
- <td>Read The Hunger Games trilogy</td>
- <td>Available only to Prime members or Kindle Unlimited subscribers</td>
- <td>Available to anyone</td>
- </tr>
- <tr>
- <td>Listen to Harry Potter and the Philosopher's Stone</td>
- <td>Available only to Audible members or with credits</td>
- <td>Available to anyone</td>
- </tr>
- <tr>
- <td>Play Fall Guys: Ultimate Knockout</td>
- <td></td>
- <td>Free to play, with exclusive skins and crowns from Prime Gaming</td>
- </tr>
- </table>
- <h3>The popularity and availability of mod apk files on the Internet</h3>
- <p>Amazon Prime Mod APK is very popular among people who want to enjoy all the benefits of Amazon Prime without paying anything. There are many websites and apps that claim to provide the latest version of Amazon Prime Mod APK for free download. Some of these websites and apps are:</p>
- <ul>
- <li>[APKPure]</li>
- <li>[APKMirror]</li>
- <li>[ModDroid]</li>
- <li>[HappyMod]</li>
- <li>[ACMarket]</li>
- </ul>
- <h2>What are the risks of using Amazon Prime Mod APK?</h2>
- <p>While using Amazon Prime Mod APK may seem tempting, it is not without risks. Some of the dangers of using Amazon Prime Mod APK are:</p>
- <h3>Potential malware, viruses, or other malicious software that can harm your device or data</h3>
- <p>Downloading and installing mod apk files from unknown or untrusted sources can expose your device to malware, viruses, or other malicious software that can harm your device or data. These malicious programs can steal your personal information, damage your files, corrupt your system, or even take control of your device. They can also display unwanted ads, pop-ups, or notifications that can annoy you or trick you into clicking harmful links.</p>
- <h3>Fake or counterfeit apps that can steal your personal information or money</h3>
- <p>Some mod apk files are not modified versions of the original app but fake or counterfeit apps that can steal your personal information or money. These fake apps can look and work like the original app, but they can secretly collect your login credentials, payment details, contact list, location, or other sensitive data. They can also charge you for services or products you did not order or authorize.</p>
- <h3>Legal issues and terms-of-service violations that can result in account suspension or termination</h3>
- 
- <h2>How to prevent or avoid using Amazon Prime Mod APK?</h2>
- <p>The best way to prevent or avoid using Amazon Prime Mod APK is to download apps only from official sources such as the Google Play Store or the Amazon Appstore. These sources are safe and secure, and they verify the authenticity and quality of the apps they offer. You should also check an app's permissions, reviews, ratings, and developer information before installing it. You should also use antivirus or security software to scan your device regularly and remove any suspicious or harmful files.</p>
- <h2>Conclusion</h2>
- <p>In conclusion, Amazon Prime Mod APK is a modified version of the original Amazon Prime app that claims to unlock all the premium features for free. However, using Amazon Prime Mod APK is risky and illegal, as it can expose your device to malware, viruses, or other malicious software; to fake or counterfeit apps that can steal your personal information or money; and to legal issues and terms-of-service violations that can result in account suspension or termination. Therefore, the best way to enjoy all the benefits of Amazon Prime is to pay for a subscription and download the official app from the official sources. That way, you can support the app's developers and creators, and also protect your device and data from any harm. You can also get a free 30-day trial of Amazon Prime and see whether it suits your needs and preferences.</p>
- <p>If you found this article helpful, please share it with your friends and family who might be interested in Amazon Prime Mod APK. You can also leave a comment below and let us know what you think about Amazon Prime Mod APK. Thanks for reading!</p>
- <h2>FAQs</h2>
- <p>Here are some frequently asked questions about Amazon Prime Mod APK:</p>
- <h3>Q: Is Amazon Prime Mod APK safe to use?</h3>
- <p>A: No. Using Amazon Prime Mod APK is risky, as it can expose your device to malware, viruses, or other malicious software, and it violates Amazon's terms of service.</p>
- <h3>Q: How can I download Amazon Prime Mod APK?</h3>
- <p>A: You should not download Amazon Prime Mod APK, as it is illegal and risky. You should download apps only from official sources such as the Google Play Store or the Amazon Appstore, and pay for a subscription to enjoy all the benefits of Amazon Prime.</p>
- <h3>Q: What are the alternatives to Amazon Prime Mod APK?</h3>
- <p>A: There are no alternatives to Amazon Prime Mod APK that can offer you the same benefits for free. However, you can try other streaming services or platforms that have similar or different content and features, such as Netflix, Hulu, Disney+, Spotify, YouTube, etc.</p>
- <h3>Q: How can I cancel my Amazon Prime subscription?</h3>
- <p>A: If you want to cancel your Amazon Prime subscription, you can do so by following these steps:</p>
- <ol>
- <li>Go to Your Account on Amazon.com and select Your Prime Membership.</li>
- <li>Click End Membership and Benefits.</li>
- <li>Follow the on-screen instructions to confirm your cancellation.</li>
- </ol>
- <p>You can also contact Amazon customer service for help.</p>
- <h3>Q: How can I contact Amazon customer service?</h3>
- <p>A: You can contact Amazon customer service by following these steps:</p>
- <ol>
- <li>Go to Contact Us on Amazon.com and select your issue or inquiry.</li>
- <li>Select your preferred contact method, such as phone, chat, or email.</li>
- <li>Follow the on-screen instructions to connect with a customer service representative.</li>
- </ol> 64aa2da5cf<br />
- <br />
- <br />
 
spaces/Benson/text-generation/Examples/Armas De La Gloria Isla Perdida.md DELETED
@@ -1,91 +0,0 @@
1
-
2
- <h1>Guns of Glory: Lost Island - Una nueva aventura te espera</h1>
3
- <p>Si eres un fan de los juegos de estrategia, es posible que hayas oído hablar de Guns of Glory, un popular juego móvil que te permite construir tu propio reino, entrenar a tu ejército y luchar contra los enemigos en un mundo de fantasía medieval. Pero, ¿sabías que hay una nueva expansión para Guns of Glory que añade una nueva dimensión al juego? Se llama Guns of Glory: Lost Island, y es una aventura emocionante que te llevará a una isla misteriosa llena de tesoros y peligros. En este artículo, te contaremos todo lo que necesitas saber sobre Guns of Glory: Lost Island, incluyendo qué es, cómo descargarlo, cómo jugarlo y qué características ofrece. ¡Vamos a empezar! </p>
4
- <h2>armas de la gloria isla perdida</h2><br /><p><b><b>Download</b> ->>> <a href="https://bltlly.com/2v6Lix">https://bltlly.com/2v6Lix</a></b></p><br /><br />
5
- <h2>¿Qué es Armas de Gloria: Isla Perdida? </h2>
6
- <p>Guns of Glory: Lost Island es una nueva expansión de Guns of Glory, un juego de estrategia desarrollado por FunPlus International AG. Es un juego gratuito que requiere una conexión a Internet y algunas compras en la aplicación. Guns of Glory: Lost Island presenta un nuevo arco argumental, nuevas características de juego, nuevos guardias, nuevas sirenas y más. Estos son algunos de los aspectos principales de Guns of Glory: Lost Island:</p>
7
- <h3>Un juego de estrategia con un nuevo arco de la historia y características de juego</h3>
8
- <p>Guns of Glory: Lost Island es un juego de estrategia que requiere que construyas tu propio reino, entrenes a tu ejército y luches contra enemigos en un mundo de fantasía medieval. Puedes elegir entre diferentes tipos de tropas, como infantería, caballería, artillería, dirigibles, etc., y usar diferentes estrategias para derrotar a tus oponentes. También puedes unirte a alianzas con otros jugadores de todo el mundo y cooperar en guerras, eventos, redadas, etc.</p>
9
-
10
- <p>Guns of Glory: Lost Island también añade nuevas características de juego al juego, como Relic War y Treant Invasion. Relic War es un modo en el que tienes que competir con otros jugadores por reliquias que pueden otorgarte potenciadores poderosos. Treant invasión es un modo en el que tienes que defender tu reino de oleadas de pisadas que pueden destruir tus edificios. </p>
11
- <h3>Una isla misteriosa llena de tesoros y peligros</h3>
12
- <p>Guns of Glory: Lost Island te lleva a una isla misteriosa que se ha perdido desde la antigüedad. La isla está envuelta por la niebla y rodeada de monstruos marinos. También es el hogar de antiguas civilizaciones perdidas que esconden secretos y tesoros. Tendrás que explorar la isla y encontrar pistas, artefactos, recursos, etc., mientras luchas contra enemigos que quieren detenerte. </p>
13
- <p>La isla está dividida en diferentes regiones, como la C <h3>Una oportunidad para reclutar nuevos guardias y crear artefactos poderosos</h3>
14
- <p>Guns of Glory: Lost Island también te da la oportunidad de reclutar nuevos guardias y crear poderosos artefactos que pueden mejorar tu reino y ejército. Los guardias son unidades especiales que pueden acompañarte en las batallas y proporcionarte varios beneficios. Puedes reclutar diferentes tipos de guardias, como guerreros, magos, curanderos, etc. </p>
15
- <p></p>
16
- <p>Los artefactos son elementos que pueden otorgarte habilidades y efectos especiales. Puedes crear diferentes tipos de artefactos, como armas, armaduras, anillos, etc., utilizando los recursos y materiales que encuentres en la isla. También puede combinar diferentes artefactos para crear más poderosos. Los artefactos pueden ser equipados por usted o sus guardias, y pueden hacer una gran diferencia en las batallas. </p>
17
- <h2>Cómo descargar Guns of Glory: Lost Island? </h2>
18
-
19
- <h3>Disponible para dispositivos Android e iOS</h3>
20
- <p>Guns of Glory: Lost Island es compatible con dispositivos Android que tienen Android 5.0 o superior, y dispositivos iOS que tienen iOS 9.0 o superior. El tamaño del juego es de aproximadamente 1,4 GB para dispositivos Android y 1,6 GB para dispositivos iOS. Necesitarás al menos 2 GB de RAM y 4 GB de espacio de almacenamiento gratuito para ejecutar el juego sin problemas. </p>
21
- <h3>Descargar enlaces y requisitos</h3>
22
- <tabla>
23
- <tr>
24
- <th>Dispositivo</th>
25
- <th>Enlace de descarga</th>
26
- <th>Requisitos</th>
27
- </tr>
28
- <tr>
29
- <td>Android</td>
30
- <td><a href="">Armas de Gloria: Isla Perdida en Google Play Store</a></td>
31
- <td>Android 5.0 o superior, 2 GB de RAM, 4 GB de espacio de almacenamiento gratuito</td>
32
- </tr>
33
- <tr>
34
- <td>iOS</td>
35
- <td><a href="">Armas de Gloria: Isla Perdida en la App Store</a></td>
36
- <td>iOS 9.0 o superior, 2 GB de RAM, 4 GB de espacio de almacenamiento gratuito</td>
37
- </tr>
38
- </tabla>
39
- <h3>Consejos para instalar y actualizar el juego</h3>
40
- <p>Aquí hay algunos consejos para instalar y actualizar Guns of Glory: Lost Island:</p>
41
- <ul>
42
- <li>Asegúrese de tener una conexión a Internet estable antes de descargar o actualizar el juego. </li>
43
- <li> Cerrar otras aplicaciones que se ejecutan en segundo plano para evitar interferencias o ralentizaciones. </li>
44
- <li>Si encuentra algún problema durante el proceso de instalación o actualización, intente limpiar la caché, reiniciar el dispositivo o reinstalar el juego. </li>
45
- <li>Si tiene alguna pregunta o problema con respecto al juego, puede ponerse en contacto con el equipo de servicio al cliente a través de la configuración del juego o el sitio web oficial. </li>
46
- </ul>
47
- <h2>¿Cómo se juega Guns of Glory: Lost Island? </h2> <p>Guns of Glory: Lost Island es un juego de estrategia que requiere explorar la isla desconocida y luchar contra los enemigos, construir su fortaleza y socializar con otros aventureros, crear los artefactos más poderosos y usarlos en la batalla, hacer amigos y aventureros rally en todo el mundo, y la experiencia variadas estrategias y reclamar la gloria. Estos son algunos de los aspectos principales de cómo jugar a Guns of Glory: Lost Island:</p>
48
-
49
- <p>Uno de los principales objetivos de Guns of Glory: Lost Island es explorar la isla desconocida y descubrir sus secretos y tesoros. Puede utilizar su aeronave para viajar por la isla y descubrir diferentes regiones, como la bahía maldita, las ruinas antiguas, la cala pirata, etc. Cada región tiene sus propios desafíos y recompensas, tales como recursos, materiales, reliquias, artefactos, etc.</p>
50
- <p>However, you are not alone on the island. You will encounter various enemies who will try to stop you from exploring and looting. These enemies include pirates, sea monsters, ghosts, treants, and more. You will have to fight them using your army and your guards. You can also use different strategies and tactics to gain an advantage in battle, such as exploiting the terrain, the weather, formations, skills, and so on.</p>
- <h3>Build your fortress and socialize with other adventurers</h3>
- <p>Another important aspect of Guns of Glory: Lost Island is building your own fortress on the island and turning it into your base of operations. You can construct different types of buildings, such as barracks, workshops, and warehouses, to train your troops, produce resources, and store materials. You can also upgrade your buildings to improve their efficiency and unlock new features.</p>
- <p>In addition, you can socialize with other adventurers who have also arrived on the island. You can join or create alliances with players from around the world and cooperate in wars, events, raids, and more. You can also chat with them in real time using voice or text messages, trade resources and materials with them, or send them gifts.</p>
- <h3>Craft the most powerful artifacts and use them in battle</h3>
-
- <p>Artifacts can be equipped by you or your guards, and they can make a big difference in battle. For example, you can use a sword artifact that increases your attack power, or a shield artifact that reduces the damage you take. You can also use artifacts that affect your enemies or allies in various ways.</p>
- <h3>Make friends and rally adventurers around the world</h3>
- <p>Guns of Glory: Lost Island is not just a game of strategy and combat, but also one of friendship and cooperation. You can make friends with other adventurers from around the world and rally them to join you in your quest for glory. You can invite them to join your alliance or form a rally team with them. You can also send them messages or gifts to show your appreciation or friendship.</p>
- <p>By making friends and rallying adventurers around the world, you can enjoy more benefits and fun in Guns of Glory: Lost Island. For example, you can share resources and information with them or help them in battles or events. You can also take part in global competitions or challenges with them or against them.</p>
- <h3>Experience varied strategies and claim glory</h3>
- <p>Guns of Glory: Lost Island is a game that offers varied strategies and ways to play. You can choose from different types of troops, guards, artifacts, and more, and use different combinations to suit your style and preferences. You can also use different strategies and tactics to defeat your enemies or overcome challenges, and customize your airship, fortress, avatar, and so on to express your personality and identity.</p>
-
- <h2>What are the features of Guns of Glory: Lost Island?</h2>
- <p>Guns of Glory: Lost Island offers many features that will enhance your gaming experience and enjoyment. Here are some of its main features:</p>
- <h3>A brand-new story arc with a mysterious adventure</h3>
- <p>Guns of Glory: Lost Island introduces a new story arc that takes you to a mysterious island that has resurfaced from the ocean. There you will meet a mysterious adventurer who will help you uncover the island's secrets and treasures. You will also encounter ancient civilizations, cursed pirates, sea monsters, ghosts, and other dangers. You will have to lead an expedition team and explore the unknown island.</p>
- <h3>New gameplay and events such as Relic War and Treant Invasion</h3>
- <p>Guns of Glory: Lost Island also adds new gameplay and events that will challenge and reward you. For example, Relic War is a mode in which you compete with other players for relics that can grant you powerful boosts. Treant Invasion is a mode in which you must defend your kingdom from waves of treants that can destroy your buildings. You can also take part in other events such as Pirate Hunt, Treasure Hunt, Alliance War, and more.</p>
- <h3>New guards with unique skills and abilities</h3>
- <p>Guns of Glory: Lost Island also gives you the chance to recruit new guards who can accompany you in battle and provide various benefits. Guards are special units with unique skills and abilities that can affect the outcome of battles. You can recruit different types of guards, such as warriors, mages, and healers, and upgrade their skills and equipment. You can also use different guard combinations to suit your strategy and preferences.</p>
- <h3>A new mermaid system with treasures and gifts</h3>
-
- <h3>Optimized content and convenient features for a better gaming experience</h3>
- <p>Guns of Glory: Lost Island also optimizes the game's content and features to give you a better gaming experience. For example, the game offers improved graphics and sound effects, smoother performance and faster loading, more balanced game levels and difficulty, a more user-friendly interface and controls, more detailed tutorials and guides, more customization options, and more.</p>
- <h2>Conclusion</h2>
- <p>Guns of Glory: Lost Island is a new expansion of Guns of Glory, a strategy game that lets you build your own kingdom, train your army, and fight enemies in a medieval fantasy world. Guns of Glory: Lost Island adds a new story arc, new gameplay features, new guards, new mermaids, and more. It is an exciting adventure that takes you to a mysterious island full of treasures and dangers. If you are looking for a strategy game that offers varied strategies, thrilling battles, rich content, social interaction, romance, and fun, then Guns of Glory: Lost Island is the game for you. Download it now and start your adventure!</p>
- <h2>Frequently asked questions</h2>
- <ul>
- <li>Q: Is Guns of Glory: Lost Island free to play?</li>
- <li>A: Yes, Guns of Glory: Lost Island is free to play. However, some in-app purchases are available to enhance your gaming experience.</li>
- <li>Q: Do I need to play Guns of Glory before playing Guns of Glory: Lost Island?</li>
- <li>A: No, you do not need to play Guns of Glory first. You can start playing Guns of Glory: Lost Island directly, without any prior knowledge or experience.</li>
- <li>Q: How can I switch between Guns of Glory and Guns of Glory: Lost Island?</li>
- <li>A: You can switch between Guns of Glory and Guns of Glory: Lost Island using the airship icon on the main screen. You can also use the same account for both games.</li>
-
- <li>A: You can contact the customer service team through the in-game settings or the official website. You can also follow the official social media accounts for the latest news and updates.</li>
- <li>Q: How can I get more resources and materials in Guns of Glory: Lost Island?</li>
- <li>A: You can get more resources and materials by exploring the island, fighting enemies, diving for treasure, taking part in events, trading with other players, and more. You can also use in-app purchases to get more resources and materials.</li>
- </ul></p>
spaces/Benson/text-generation/Examples/Belote Juego De Cartas.md DELETED
@@ -1,58 +0,0 @@
-
- <h1>Belote Card Game Download: How to Play and Enjoy This Classic French Game</h1>
- <p>Belote is a card game that has been thrilling players around the world for more than 100 years. It is a trick-taking game that requires skill, strategy, and teamwork. If you are looking for a fun and challenging card game to play with your friends or online, belote is a great choice. In this article, we will explain what belote is, how to play it, where to download it for free, and some tips and tricks to improve your skills.</p>
- <h2>What is Belote and how to play it</h2>
- <p>Belote is a card game that originated in France in the early 20th century. It is derived from another card game called klaberjass, which is popular in many European countries. Belote is also known as baloot in Saudi Arabia, pilotta in Cyprus, and coinche in some regions of France. It is the national card game of France, where it is played by millions of people of all ages.</p>
- <h2>belote card game</h2><br /><p><b>Download Zip</b> &#10040;&#10040;&#10040; <a href="https://bltlly.com/2v6INj">https://bltlly.com/2v6INj</a></p><br /><br />
- <h3>The origin and popularity of Belote</h3>
- <p>The exact origin of belote is unclear, but it is believed to have been created by adapting the rules of klaberjass to suit French preferences. The name belote comes from the French word for a pair consisting of the king and queen of the trump suit, which is one of the special combinations that can be declared in the game. The first official rules of belote were published in 1921, and since then the game has spread to many other countries and regions.</p>
- <p>Belote is popular because it is easy to learn but hard to master. It is a game that combines luck and skill, as well as cooperation and competition. It is a social game that can be played with friends or family, or online with strangers. It can also be adapted to different preferences and difficulty levels, since there are many variations of belote, such as coinche, rebelote, contrée, and others.</p>
- <h3>The deck and the teams</h3>
-
- <p>Belote is usually played by four players who form two teams of two partners. Partners sit opposite each other at the table. The goal of each team is to collect as many points as possible by taking tricks and declaring combinations. The first team to reach a target score (usually 501 or 1000) wins the game.</p>
- <h3>The deal and the bidding</h3>
- <p>The dealer is chosen at random for the first round of the game. The deal then rotates counterclockwise for each subsequent round. The dealer shuffles the deck and lets the player to their right cut it. They then deal five cards to each player in two rounds: three cards in the first round and two cards in the second round. The remaining cards are placed face down on the table, forming the stock.</p>
- <p>After the deal, the dealer turns over the top card of the stock and places it face up on the table. This card is called the turn-up card, and it determines the potential trump suit for the round. Starting with the player to the dealer's left, each player may accept or pass the turn-up card as the trump suit. If a player accepts, they say "belote" and take the turn-up card into their hand. They must then discard one card from their hand face down onto the stock. The bidding phase ends and the play phase begins. If a player passes, they say "pass" and the bidding continues with the next player counterclockwise.</p>
- <p>If all four players pass on the turn-up card, the bidding continues with a second round. In this round, each player may propose any other suit (except the suit of the turn-up card) as the trump suit, or pass again. If a player proposes a suit, they say its name (for example, "hearts") and the bidding phase ends. The play phase begins with that suit as the trump suit. If a player passes, they say "pass" and the bidding continues with the next player counterclockwise.</p>
-
- <h3>The play and the scoring</h3>
- <p>The play phase consists of eight tricks, each played with four cards (one from each player). The player who accepted or proposed the trump suit leads the first trick by playing any card from their hand. The other players must follow suit if they can, meaning they must play a card of the same suit as the first card played. If they cannot follow suit, they may play any card from their hand. The trick is won by the highest card of the trump suit, or by the highest card of the led suit if no trump card was played. The winner of each trick leads the next one.</p>
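The trick-winning rule described above can be sketched in a few lines of Python. This is a minimal illustration, not code from any of the apps mentioned in this article; the rank orders follow standard belote conventions (the jack is the highest trump, the ace the highest plain-suit card), and the card representation is invented for the example.

```python
# Rank orders from lowest to highest. In belote, the order differs
# between the trump suit and the plain (non-trump) suits.
TRUMP_ORDER = ["7", "8", "Q", "K", "10", "A", "9", "J"]   # jack is the top trump
PLAIN_ORDER = ["7", "8", "9", "J", "Q", "K", "10", "A"]   # ace is the top plain card

def trick_winner(cards, trump):
    """cards: list of (rank, suit) tuples in play order; returns the
    index of the player whose card wins the trick."""
    led_suit = cards[0][1]
    # If any trumps were played, the highest trump wins.
    trumps = [(i, c) for i, c in enumerate(cards) if c[1] == trump]
    if trumps:
        return max(trumps, key=lambda ic: TRUMP_ORDER.index(ic[1][0]))[0]
    # Otherwise the highest card of the led suit wins; off-suit cards cannot win.
    followers = [(i, c) for i, c in enumerate(cards) if c[1] == led_suit]
    return max(followers, key=lambda ic: PLAIN_ORDER.index(ic[1][0]))[0]
```

For example, if hearts are led and the only spade played is the 7 while spades are trump, that lone 7 of spades takes the trick, exactly as the rule above states.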
- <p>Before playing their first card in a trick, each player may declare a special combination of cards held in their hand, such as a belote (the king and queen of the trump suit), a tierce (three consecutive cards of any suit), a quarte (four consecutive cards of any suit), or a quinte (five consecutive cards of any suit). Each combination has a different point value, which is added to the team's score at the end of the round. However, only one combination may be declared per team per round, and only if it is higher than any combination previously declared by either team.</p>
-
- <h2>Where to download the Belote card game for free</h2>
- <p>If you want to play belote on your computer or mobile device, you have many options to choose from. Many websites and apps offer the belote card game as a free download, with different features and modes. Here are some of the most popular ones:</p>
- <h3>Belote.com - Belote & Coinche</h3>
- <p>This is one of the most popular websites for playing belote online with players from around the world. You can choose between classic belote and coinche, a variation of belote in which you can bid higher or lower than the original trump suit. You can also play solo or with friends, join tournaments, chat with other players, and customize your avatar and your cards. You can access this website from any browser, or download the app for Android or iOS devices.</p>
- <h3>Belote Card Game by Card Guru Game Studio</h3>
- <p>This is a simple, easy-to-use app that lets you play belote offline against the computer. You can choose from three difficulty levels, adjust the game speed, and change the background and card design. You can also view your statistics and achievements, and learn the rules of the game. This app is available for Android devices only.</p>
- <h3>Belote by IsCool Entertainment</h3>
- <p>This is another popular app that lets you play belote online with other players or offline against the computer. You can choose between classic belote and contrée, another variation of belote in which you can bid higher or lower than the original trump suit, or pass without proposing a suit. You can also play in different modes, such as quick game, tournament, or challenge. You can collect coins, trophies, and bonuses, chat with other players, and customize your profile and your cards. This app is available for Android and iOS devices.</p>
- <h2>Tips and tricks to improve your Belote skills</h2>
-
- <h3>Learn the rules and the card values</h3>
- <p>The first step to becoming a good belote player is learning the rules of the game and the values of the cards. You should know how to deal, bid, play, declare, and score in each round. You should also know how the trump suit is determined and how it affects the ranking and value of the cards, and memorize the table showing the value of each combination depending on the trump suit.</p>
- <h3>Communicate with your partner through declarations</h3>
- <p>Belote is a game that requires teamwork and communication between partners. One way to communicate with your partner is to use declarations to show them which cards you hold. For example, if you declare a tierce in spades, you are telling your partner that you have three consecutive spades, which can be useful if spades are the trump suit or if you want to propose them as the trump suit. You should also pay attention to your partner's declarations and try to infer which cards they do or do not hold.</p>
- <h3>Be flexible and adaptable in your strategy</h3>
- <p>Belote is a game that demands flexibility and adaptability. You should not stick to one plan or tactic throughout the game, but adjust your strategy to the situation. For example, if you have a strong hand with many high cards in one suit, you may want to accept or propose that suit as trump and try to take as many tricks as possible. If you have a weak hand with many low cards in different suits, you may want to pass, or propose a suit you do not hold, and try to avoid taking tricks. You should also consider each team's score and the game's target score, and adjust your strategy accordingly.</p>
- <h3>Practice regularly and learn from your mistakes</h3>
-
- <h2>Conclusion</h2>
- <p>Belote is a fun and challenging card game for players of all ages and skill levels. It combines luck and skill, as well as cooperation and competition. It can be played online or offline, with friends or strangers, in different variations and modes. It is easy to learn but hard to master, and it can provide hours of entertainment and enjoyment.</p>
- <p>If you are interested in playing belote, you can download it for free from various websites and apps, such as Belote.com - Belote & Coinche, Belote Card Game by Card Guru Game Studio, or Belote by IsCool Entertainment. You can also improve your skills by following a few tips and tricks: learn the rules and the card values, communicate with your partner through declarations, stay flexible and adaptable in your strategy, and practice regularly while learning from your mistakes.</p>
- <p>We hope this article has helped you understand what belote is, how to play it, where to download it, and how to improve your skills. We hope you enjoy this classic French card game as much as we do.</p>
- <h2>Frequently asked questions</h2>
- <p>Here are some frequently asked questions about the belote card game download:</p>
- <h3>Q: How many players can play belote?</h3>
- <p>A: Belote is usually played by four players forming two teams of two partners. However, there are also variations of belote for two or three players.</p>
- <h3>Q: What is the difference between belote and coinche?</h3>
-
- <h3>Q: How long does a game of belote last?</h3>
- <p>A: The length of a game of belote depends on the target score the players agree on before the game begins. The target score is usually 501 or 1000 points, but it can be higher or lower depending on the players' preference. A game of belote can last anywhere from 10 minutes to an hour or more.</p>
- <h3>Q: Is belote a game of luck or skill?</h3>
- <p>A: Belote involves both luck and skill. Luck plays a role in the dealing of the cards and the turn-up card, which can affect the outcome of the round. Skill plays a role in the bidding, play, declaration, and scoring phases of each round, which require strategy, teamwork, communication, memory, and calculation.</p>
- <h3>Q: Where can I find more information about belote?</h3>
- <p>A: You can find more information about belote on various websites and blogs dedicated to the game. Some examples are Belote World (https://www.beloteworld.com/), Belote Rules (https://www.beloterules.com/), and Belote Online (https://www.belote-online.com/). You can also watch videos of belote games on YouTube or other platforms.</p>