diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Epson L5190 Resetter Crack Free Download The Ultimate Guide to Resetting Your Printer.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Epson L5190 Resetter Crack Free Download The Ultimate Guide to Resetting Your Printer.md deleted file mode 100644 index 3d078fb539729080ea4bef39d560e9a4851d1cbf..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Epson L5190 Resetter Crack Free Download The Ultimate Guide to Resetting Your Printer.md +++ /dev/null @@ -1,34 +0,0 @@ - -

Epson L5190 Resetter Crack Free Download: How to Reset Your Printer Easily

-

If you own an Epson L5190 printer, you might have encountered a problem where the printer stops working and displays an error message saying that the ink pads are at the end of their service life. This means that the printer has reached its maximum number of prints and needs to be reset. However, resetting the printer requires a software tool called Epson L5190 resetter, which is not free and costs around $10 to purchase. But what if you could get Epson L5190 resetter crack free download and reset your printer without paying anything? In this article, we will show you how to download and use Epson L5190 resetter crack free download safely and easily.

-

epson l5190 resetter crack free download


Download Zip ✓✓✓ https://byltly.com/2uKxci



-

What is Epson L5190 Resetter Crack Free Download?

-

Epson L5190 resetter crack free download is a modified version of the original Epson L5190 resetter that bypasses the license verification and lets you use the tool for free. However, this also means that Epson L5190 resetter crack free download is not authorized by the developers and may contain malware, viruses, or other harmful files. Therefore, you should be careful when downloading and using Epson L5190 resetter crack free download and only use it from trusted sources.

-

How to Download Epson L5190 Resetter Crack Free Download?

-

There are many websites and videos that claim to offer Epson L5190 resetter crack free download links, but most of them are fake, outdated, or infected. To avoid getting scammed or infected, you should only download Epson L5190 resetter crack free download from reputable sources that have positive feedback and reviews from other users. One such source is Epsonresetter.com, which provides a working and updated version of Epson L5190 resetter crack free download with a simple installation process.

-

To download Epson L5190 resetter crack free download from Epsonresetter.com, follow these steps:

-

-
    -
  1. Go to Epsonresetter.com and click on the "Download" button.
  2. You will be redirected to a verification page where you need to complete a short survey or offer to prove that you are human. This is to prevent bots and leechers from abusing the download link.
  3. After completing the verification, you will get access to the download link. Click on it and save the file to your computer.
  4. Extract the file using WinRAR or 7-Zip and run the installer.
  5. Follow the instructions on the screen and wait for the installation to finish.
  6. You have successfully downloaded and installed Epson L5190 resetter crack free download on your computer.
-

How to Use Epson L5190 Resetter Crack Free Download?

-

To use Epson L5190 resetter crack free download, follow these steps:

-
    -
  1. Run Epson L5190 resetter as administrator from your desktop or start menu.
  2. You will see the main interface of Epson L5190 resetter, where you can select your printer model and port.
  3. Click on the "Particular adjustment mode" button and choose "Waste ink pad counter" from the list.
  4. Click on "OK" and then check the boxes for "Main pad counter" and "Platen pad counter".
  5. Click on "Check" to see the current status of your ink pads.
  6. Click on "Initialization" to reset your ink pads to zero.
  7. A message will pop up asking you to turn off your printer. Do so and then turn it back on.
  8. You have successfully reset your printer using Epson L5190 resetter crack free download.
-

Conclusion

-

Epson L5190 resetter is a useful tool that can help you extend the life of your printer by resetting its ink pads. However, if you don't want to pay for it, you can try using Epson L5190 resetter crack free download from trusted sources like Epsonresetter.com. That said, you should be aware of the risks involved in using cracked software and always scan your files with antivirus before running them. We hope this article helped you learn how to download and use Epson L5190 resetter crack free download to reset your printer easily.

ddb901b051
-
-
\ No newline at end of file diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Clannad After Story English Dub.md b/spaces/1gistliPinn/ChatGPT4/Examples/Clannad After Story English Dub.md deleted file mode 100644 index 480e027529358296e851da92bdefcd8dafabedef..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Clannad After Story English Dub.md +++ /dev/null @@ -1,6 +0,0 @@ -

Clannad After Story english dub


Downloadhttps://imgfil.com/2uy06Q



- - 4d29de3e1b
-
-
-

diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Cara Download Farm Heroes Saga Mod Apk Versi Terbaru 2023 dengan Fitur Unlimited Lives dan Boosters.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Cara Download Farm Heroes Saga Mod Apk Versi Terbaru 2023 dengan Fitur Unlimited Lives dan Boosters.md deleted file mode 100644 index d6c9f133d617ca5b0ab35e2594059db501870b9a..0000000000000000000000000000000000000000 --- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Cara Download Farm Heroes Saga Mod Apk Versi Terbaru 2023 dengan Fitur Unlimited Lives dan Boosters.md +++ /dev/null @@ -1,99 +0,0 @@ - -

Download Farm Heroes Saga Mod Apk Versi Terbaru

-

Do you love playing farm-themed games? Do you want to enjoy a fun and relaxing puzzle game with cute animals and crops? If yes, then you should try Farm Heroes Saga, one of the most popular games in the Saga series. And if you want to make your gaming experience even more exciting, you should download Farm Heroes Saga mod apk versi terbaru, which gives you unlimited lives, boosters, and access to all levels and episodes. In this article, we will tell you everything you need to know about Farm Heroes Saga and its mod apk version. Read on to find out more.

-

download farm heroes saga mod apk versi terbaru


Download ……… https://urlin.us/2uSYMD



-

What is Farm Heroes Saga?

-

Farm Heroes Saga is a fascinating farm-themed game developed by King, the makers of Candy Crush Saga, Pet Rescue Saga, and other popular games. It is the last Saga game in the Saga series. The gameplay style of the game does not change much from its predecessors. You have to match three or more cropsies (fruits, vegetables, flowers, etc.) of the same type to collect them and complete the level objectives. You can also use boosters and power-ups to help you clear the board faster and score higher. The game has hundreds of levels and episodes, each with different challenges and themes. You can also play with your friends online and compete for the best scores.

-

Features of Farm Heroes Saga

-

Some of the features of Farm Heroes Saga are:

- -

Why download Farm Heroes Saga mod apk?

-

While Farm Heroes Saga is a free-to-play game, it also has some limitations and drawbacks that can affect your gaming experience. For example, you have a limited number of lives that you can use per day. If you run out of lives, you have to wait for some time or buy more lives with real money. Similarly, you have to buy boosters and power-ups with gold bars, which are also scarce and expensive. Moreover, some levels and episodes are locked until you reach a certain level or complete a certain task. And of course, there are annoying ads and pop-ups that can interrupt your gameplay.

-

If you want to get rid of these problems and enjoy the game without any restrictions, you should download Farm Heroes Saga mod apk versi terbaru. This is a modified version of the original game that gives you several benefits and advantages. Here are some of them:

-

Unlimited lives and boosters

-

With Farm Heroes Saga mod apk, you don't have to worry about running out of lives or boosters ever again. You can play as much as you want without any interruptions or delays. You can also use any booster or power-up you like without spending any gold bars or money.

-

-

All levels and episodes unlocked

-

With Farm Heroes Saga mod apk, you don't have to wait for anything or do anything to unlock new levels and episodes. You can access all of them from the start and play them in any order you prefer. You can also skip any level or episode that you find too hard or boring.

-

No ads and pop-ups

-

With Farm Heroes Saga mod apk, you don't have to deal with any ads or pop-ups that can ruin your mood and distract you from the game. You can enjoy smooth and uninterrupted gameplay without any annoying distractions.

-

How to download and install Farm Heroes Saga mod apk?

-

Downloading and installing Farm Heroes Saga mod apk is very easy and simple. You just need to follow these steps:

-

Step 1: Enable unknown sources

-

Before you can install any mod apk file on your device, you need to enable the option of unknown sources in your settings. This will allow you to install apps from sources other than the Google Play Store. To do this, go to your device settings, then security, then unknown sources, and turn it on.

-

Step 2: Download the mod apk file

-

Next, you need to download the Farm Heroes Saga mod apk file from a reliable and trusted source. You can use the link below to download the latest version of the mod apk file. Make sure you have enough storage space on your device before downloading the file.

-

Download Farm Heroes Saga mod apk versi terbaru

-

Step 3: Install the mod apk file

-

Once you have downloaded the mod apk file, you need to install it on your device. To do this, locate the file in your downloads folder and tap on it. You will see a pop-up window asking you to confirm the installation. Tap on install and wait for the process to finish.

-

Step 4: Launch the game and enjoy

-

Finally, you can launch the game and enjoy all the benefits and features of the mod apk version. You will see that you have unlimited lives, boosters, and access to all levels and episodes. You can also play without any ads or pop-ups. Have fun!

-

Conclusion

-

Farm Heroes Saga is a fun and relaxing farm-themed game that you can play anytime and anywhere. It has cute and colorful graphics, easy and fun gameplay, and hundreds of levels and episodes to keep you entertained. However, if you want to make your gaming experience even more exciting and enjoyable, you should download Farm Heroes Saga mod apk versi terbaru, which gives you unlimited lives, boosters, and access to all levels and episodes. You can also play without any ads or pop-ups. Downloading and installing Farm Heroes Saga mod apk is very easy and simple. You just need to follow the steps we have explained in this article. So what are you waiting for? Download Farm Heroes Saga mod apk now and have fun!

-

FAQs

-

Here are some frequently asked questions about Farm Heroes Saga mod apk:


    aaccfb2cb3
    -
    -
    \ No newline at end of file diff --git a/spaces/bioriAsaeru/text-to-voice/Generic Mod Enabler V2.6.0.157 Download nicht allein soundwo Manage your game mods with this simple and lightweight tool[2].md b/spaces/bioriAsaeru/text-to-voice/Generic Mod Enabler V2.6.0.157 Download nicht allein soundwo Manage your game mods with this simple and lightweight tool[2].md deleted file mode 100644 index 35fc538a5f33239f39dc2c74e9aeb4d06b902c3c..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Generic Mod Enabler V2.6.0.157 Download nicht allein soundwo Manage your game mods with this simple and lightweight tool[2].md +++ /dev/null @@ -1,6 +0,0 @@ -

    Generic Mod Enabler V2.6.0.157 Download nicht allein soundwo


    DOWNLOAD ★★★ https://urloso.com/2uyQoa



    - - aaccfb2cb3
    -
    -
    -

    diff --git a/spaces/bioriAsaeru/text-to-voice/HD Online Player (Toilet - Ek Prem Katha Movie Hd Down).md b/spaces/bioriAsaeru/text-to-voice/HD Online Player (Toilet - Ek Prem Katha Movie Hd Down).md deleted file mode 100644 index dfda8c5fee34bd68bf3b69fe35a92c01d874936f..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/HD Online Player (Toilet - Ek Prem Katha Movie Hd Down).md +++ /dev/null @@ -1,20 +0,0 @@ - -

    How to Watch Toilet: Ek Prem Katha Online in HD Quality

    -

    Are you looking for a way to watch Toilet: Ek Prem Katha, the 2017 Bollywood hit that tackles the issue of open defecation in rural India, online in HD quality? If yes, then you are in luck, because we have some tips for you to enjoy this movie from the comfort of your home.

    -

    Toilet: Ek Prem Katha stars Akshay Kumar and Bhumi Pednekar as Keshav and Jaya, a newly married couple who face a problem when Jaya discovers that Keshav's house has no toilet. She refuses to live with him until he builds one, and he embarks on a mission to convince his orthodox father and the villagers to change their mindset and adopt modern sanitation.

    -

    HD Online Player (Toilet - Ek Prem Katha movie hd down)


    Download Zip ··· https://urloso.com/2uyRoN



    -

    The movie is based on a true story and has received critical acclaim and commercial success for its social message and humorous approach. It also features Anupam Kher, Sudhir Pandey and Divyendu Sharma in supporting roles.

    -

    So, how can you watch this movie online in HD quality? Here are some options:

    - -

These are some of the ways you can watch Toilet: Ek Prem Katha online in HD quality. However, we recommend using legal and safe methods to enjoy this movie and support its makers. We hope you have a great time watching it!

    - -

    Toilet: Ek Prem Katha is not just a movie, but a movement that aims to raise awareness and inspire action on the issue of open defecation in India. According to the World Health Organization, more than 600 million people in India still defecate in the open, which poses serious health and environmental risks. The movie showcases how one man's determination and love can bring about a positive change in his community and society.

    -

    The movie also highlights the importance of women's empowerment and dignity, as Jaya stands up for her rights and demands a basic facility that every woman deserves. She also inspires other women in the village to join her cause and fight for their hygiene and safety. The movie shows how women can be agents of change and leaders in their own right.

    -

    -

    Toilet: Ek Prem Katha is a must-watch for anyone who wants to witness a powerful story of love, courage and social transformation. The movie has a blend of comedy, drama and romance that will keep you entertained and engaged throughout. The movie also has some catchy songs and dialogues that will stay with you long after you finish watching it.

    d5da3c52bf
    -
    -
    \ No newline at end of file diff --git a/spaces/bioriAsaeru/text-to-voice/King Of Fighter Wing 1.9 Free Download.md b/spaces/bioriAsaeru/text-to-voice/King Of Fighter Wing 1.9 Free Download.md deleted file mode 100644 index aff245e2c8662befd6a0e30242ab39c7d43fbd93..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/King Of Fighter Wing 1.9 Free Download.md +++ /dev/null @@ -1,6 +0,0 @@ -

    King of fighter Wing 1.9 free download


    DOWNLOADhttps://urloso.com/2uyS8m



    - - 1fdad05405
    -
    -
    -

    diff --git a/spaces/bipin/image2story/prefix_clip.py b/spaces/bipin/image2story/prefix_clip.py deleted file mode 100644 index 362f333391d5ce57bff1889932e0bb6e0ba6f437..0000000000000000000000000000000000000000 --- a/spaces/bipin/image2story/prefix_clip.py +++ /dev/null @@ -1,272 +0,0 @@ -import clip -from torch import nn -import numpy as np -import torch -import torch.nn.functional as nnf -import gdown -from typing import Tuple, List, Union, Optional -from transformers import ( - GPT2Tokenizer, - GPT2LMHeadModel, -) -from tqdm import trange - - -N = type(None) -V = np.array -ARRAY = np.ndarray -ARRAYS = Union[Tuple[ARRAY, ...], List[ARRAY]] -VS = Union[Tuple[V, ...], List[V]] -VN = Union[V, N] -VNS = Union[VS, N] -T = torch.Tensor -TS = Union[Tuple[T, ...], List[T]] -TN = Optional[T] -TNS = Union[Tuple[TN, ...], List[TN]] -TSN = Optional[TS] -TA = Union[T, ARRAY] - -D = torch.device -CPU = torch.device("cpu") - - -def download_pretrained_model(model, file_to_save): - conceptual_wt = "14pXWwB4Zm82rsDdvbGguLfx9F8aM7ovT" - coco_wt = "1IdaBtMSvtyzF0ByVaBHtvM0JYSXRExRX" - - # download pretrained weights - if model == "coco": - url = f"https://drive.google.com/uc?id={coco_wt}" - elif model == "conceptual": - url = f"https://drive.google.com/uc?id={conceptual_wt}" - gdown.download(url, file_to_save, quiet=False) - - -class MLP(nn.Module): - def forward(self, x: T) -> T: - return self.model(x) - - def __init__(self, sizes: Tuple[int, ...], bias=True, act=nn.Tanh): - super(MLP, self).__init__() - layers = [] - for i in range(len(sizes) - 1): - layers.append(nn.Linear(sizes[i], sizes[i + 1], bias=bias)) - if i < len(sizes) - 2: - layers.append(act()) - self.model = nn.Sequential(*layers) - - -class ClipCaptionModel(nn.Module): - def get_dummy_token(self, batch_size: int, device: D) -> T: - return torch.zeros( - batch_size, self.prefix_length, dtype=torch.int64, device=device - ) - - def forward( - self, tokens: T, prefix: T, mask: Optional[T] = None, labels: Optional[T] = None - ): - embedding_text = self.gpt.transformer.wte(tokens) - prefix_projections = self.clip_project(prefix).view( - -1, self.prefix_length, self.gpt_embedding_size - ) - # print(embedding_text.size()) #torch.Size([5, 67, 768]) - # print(prefix_projections.size()) #torch.Size([5, 1, 768]) - embedding_cat = torch.cat((prefix_projections, embedding_text), dim=1) - if labels is not None: - dummy_token = self.get_dummy_token(tokens.shape[0], tokens.device) - labels = torch.cat((dummy_token, tokens), dim=1) - out = self.gpt(inputs_embeds=embedding_cat, labels=labels, attention_mask=mask) - return out - - def __init__(self, prefix_length: int, prefix_size: int = 512): - super(ClipCaptionModel, self).__init__() - self.prefix_length = prefix_length - self.gpt = GPT2LMHeadModel.from_pretrained("gpt2") - self.gpt_embedding_size = self.gpt.transformer.wte.weight.shape[1] - if prefix_length > 10: # not enough memory - self.clip_project = nn.Linear( - prefix_size, self.gpt_embedding_size * prefix_length - ) - else: - self.clip_project = MLP( - ( - prefix_size, - (self.gpt_embedding_size * prefix_length) // 2, - self.gpt_embedding_size * prefix_length, - ) - ) - - -class ClipCaptionPrefix(ClipCaptionModel): - def parameters(self, recurse: bool = True): - return self.clip_project.parameters() - - def train(self, mode: bool = True): - super(ClipCaptionPrefix, self).train(mode) - self.gpt.eval() - return self - - -def generate_beam( - model, - tokenizer, - beam_size: int = 5, - prompt=None, - embed=None, - entry_length=67, - temperature=1.0, - 
stop_token: str = ".", -): - - model.eval() - stop_token_index = tokenizer.encode(stop_token)[0] - tokens = None - scores = None - device = next(model.parameters()).device - seq_lengths = torch.ones(beam_size, device=device) - is_stopped = torch.zeros(beam_size, device=device, dtype=torch.bool) - with torch.no_grad(): - if embed is not None: - generated = embed - else: - if tokens is None: - tokens = torch.tensor(tokenizer.encode(prompt)) - tokens = tokens.unsqueeze(0).to(device) - generated = model.gpt.transformer.wte(tokens) - for i in range(entry_length): - outputs = model.gpt(inputs_embeds=generated) - logits = outputs.logits - logits = logits[:, -1, :] / (temperature if temperature > 0 else 1.0) - logits = logits.softmax(-1).log() - if scores is None: - scores, next_tokens = logits.topk(beam_size, -1) - generated = generated.expand(beam_size, *generated.shape[1:]) - next_tokens, scores = next_tokens.permute(1, 0), scores.squeeze(0) - if tokens is None: - tokens = next_tokens - else: - tokens = tokens.expand(beam_size, *tokens.shape[1:]) - tokens = torch.cat((tokens, next_tokens), dim=1) - else: - logits[is_stopped] = -float(np.inf) - logits[is_stopped, 0] = 0 - scores_sum = scores[:, None] + logits - seq_lengths[~is_stopped] += 1 - scores_sum_average = scores_sum / seq_lengths[:, None] - scores_sum_average, next_tokens = scores_sum_average.view(-1).topk( - beam_size, -1 - ) - next_tokens_source = next_tokens // scores_sum.shape[1] - seq_lengths = seq_lengths[next_tokens_source] - next_tokens = next_tokens % scores_sum.shape[1] - next_tokens = next_tokens.unsqueeze(1) - tokens = tokens[next_tokens_source] - tokens = torch.cat((tokens, next_tokens), dim=1) - generated = generated[next_tokens_source] - scores = scores_sum_average * seq_lengths - is_stopped = is_stopped[next_tokens_source] - next_token_embed = model.gpt.transformer.wte(next_tokens.squeeze()).view( - generated.shape[0], 1, -1 - ) - generated = torch.cat((generated, next_token_embed), dim=1) - is_stopped = is_stopped + next_tokens.eq(stop_token_index).squeeze() - if is_stopped.all(): - break - scores = scores / seq_lengths - output_list = tokens.cpu().numpy() - output_texts = [ - tokenizer.decode(output[: int(length)]) - for output, length in zip(output_list, seq_lengths) - ] - order = scores.argsort(descending=True) - output_texts = [output_texts[i] for i in order] - return output_texts - - -def generate2( - model, - tokenizer, - tokens=None, - prompt=None, - embed=None, - entry_count=1, - entry_length=67, # maximum number of words - top_p=0.8, - temperature=1.0, - stop_token: str = ".", -): - model.eval() - generated_num = 0 - generated_list = [] - stop_token_index = tokenizer.encode(stop_token)[0] - filter_value = -float("Inf") - device = next(model.parameters()).device - - with torch.no_grad(): - - for entry_idx in trange(entry_count): - if embed is not None: - generated = embed - else: - if tokens is None: - tokens = torch.tensor(tokenizer.encode(prompt)) - tokens = tokens.unsqueeze(0).to(device) - - generated = model.gpt.transformer.wte(tokens) - - for i in range(entry_length): - - outputs = model.gpt(inputs_embeds=generated) - logits = outputs.logits - logits = logits[:, -1, :] / (temperature if temperature > 0 else 1.0) - sorted_logits, sorted_indices = torch.sort(logits, descending=True) - cumulative_probs = torch.cumsum( - nnf.softmax(sorted_logits, dim=-1), dim=-1 - ) - sorted_indices_to_remove = cumulative_probs > top_p - sorted_indices_to_remove[..., 1:] = sorted_indices_to_remove[ - ..., :-1 - ].clone() - 
sorted_indices_to_remove[..., 0] = 0 - - indices_to_remove = sorted_indices[sorted_indices_to_remove] - logits[:, indices_to_remove] = filter_value - next_token = torch.argmax(logits, -1).unsqueeze(0) - next_token_embed = model.gpt.transformer.wte(next_token) - if tokens is None: - tokens = next_token - else: - tokens = torch.cat((tokens, next_token), dim=1) - generated = torch.cat((generated, next_token_embed), dim=1) - if stop_token_index == next_token.item(): - break - - output_list = list(tokens.squeeze().cpu().numpy()) - output_text = tokenizer.decode(output_list) - generated_list.append(output_text) - - return generated_list[0] - - -def generate_caption(model_path, pil_image, use_beam_search): - device = "cuda" if torch.cuda.is_available() else "cpu" - clip_model, preprocess = clip.load("ViT-B/32", device=device, jit=False) - tokenizer = GPT2Tokenizer.from_pretrained("gpt2") - - prefix_length = 10 - - model = ClipCaptionModel(prefix_length) - model.load_state_dict(torch.load(model_path, map_location=CPU)) - model = model.eval() - model = model.to(device) - - image = preprocess(pil_image).unsqueeze(0).to(device) - with torch.no_grad(): - prefix = clip_model.encode_image(image).to(device, dtype=torch.float32) - prefix_embed = model.clip_project(prefix).reshape(1, prefix_length, -1) - if use_beam_search: - image_caption = generate_beam(model, tokenizer, embed=prefix_embed)[0] - else: - image_caption = generate2(model, tokenizer, embed=prefix_embed) - - return image_caption diff --git a/spaces/blmdsydm/faster-whisper-webui/src/__init__.py b/spaces/blmdsydm/faster-whisper-webui/src/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/cakiki/facets-overview/Dockerfile b/spaces/cakiki/facets-overview/Dockerfile deleted file mode 100644 index 5f1f4bb9feb52223d23b556d2bdfc046ea2b2b64..0000000000000000000000000000000000000000 --- a/spaces/cakiki/facets-overview/Dockerfile +++ /dev/null @@ -1,3 +0,0 @@ -FROM jupyter/base-notebook:latest - -RUN pip install --use-feature=2020-resolver pandas facets-overview diff --git a/spaces/calvinchaochao/text_generation/README.md b/spaces/calvinchaochao/text_generation/README.md deleted file mode 100644 index 52969951e883a41eb14cd694a06a4bb304fc59b0..0000000000000000000000000000000000000000 --- a/spaces/calvinchaochao/text_generation/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: text_generation -emoji: 🔥 -colorFrom: indigo -colorTo: indigo -sdk: gradio -sdk_version: 3.37.0 -app_file: run.py -pinned: false -duplicated_from: gradio/text_generation ---- diff --git a/spaces/camenduru-com/audioldm-text-to-audio-generation/audioldm/audio/stft.py b/spaces/camenduru-com/audioldm-text-to-audio-generation/audioldm/audio/stft.py deleted file mode 100644 index 2aa1ac89277734a6676c20a81bf88e21e8ca7aa9..0000000000000000000000000000000000000000 --- a/spaces/camenduru-com/audioldm-text-to-audio-generation/audioldm/audio/stft.py +++ /dev/null @@ -1,180 +0,0 @@ -import torch -import torch.nn.functional as F -import numpy as np -from scipy.signal import get_window -from librosa.util import pad_center, tiny -from librosa.filters import mel as librosa_mel_fn - -from audioldm.audio.audio_processing import ( - dynamic_range_compression, - dynamic_range_decompression, - window_sumsquare, -) - - -class STFT(torch.nn.Module): - """adapted from Prem Seetharaman's https://github.com/pseeth/pytorch-stft""" - - def __init__(self, filter_length, hop_length, win_length, window="hann"): - 
super(STFT, self).__init__() - self.filter_length = filter_length - self.hop_length = hop_length - self.win_length = win_length - self.window = window - self.forward_transform = None - scale = self.filter_length / self.hop_length - fourier_basis = np.fft.fft(np.eye(self.filter_length)) - - cutoff = int((self.filter_length / 2 + 1)) - fourier_basis = np.vstack( - [np.real(fourier_basis[:cutoff, :]), np.imag(fourier_basis[:cutoff, :])] - ) - - forward_basis = torch.FloatTensor(fourier_basis[:, None, :]) - inverse_basis = torch.FloatTensor( - np.linalg.pinv(scale * fourier_basis).T[:, None, :] - ) - - if window is not None: - assert filter_length >= win_length - # get window and zero center pad it to filter_length - fft_window = get_window(window, win_length, fftbins=True) - fft_window = pad_center(fft_window, filter_length) - fft_window = torch.from_numpy(fft_window).float() - - # window the bases - forward_basis *= fft_window - inverse_basis *= fft_window - - self.register_buffer("forward_basis", forward_basis.float()) - self.register_buffer("inverse_basis", inverse_basis.float()) - - def transform(self, input_data): - num_batches = input_data.size(0) - num_samples = input_data.size(1) - - self.num_samples = num_samples - - # similar to librosa, reflect-pad the input - input_data = input_data.view(num_batches, 1, num_samples) - input_data = F.pad( - input_data.unsqueeze(1), - (int(self.filter_length / 2), int(self.filter_length / 2), 0, 0), - mode="reflect", - ) - input_data = input_data.squeeze(1) - - forward_transform = F.conv1d( - input_data, - torch.autograd.Variable(self.forward_basis, requires_grad=False), - stride=self.hop_length, - padding=0, - ).cpu() - - cutoff = int((self.filter_length / 2) + 1) - real_part = forward_transform[:, :cutoff, :] - imag_part = forward_transform[:, cutoff:, :] - - magnitude = torch.sqrt(real_part**2 + imag_part**2) - phase = torch.autograd.Variable(torch.atan2(imag_part.data, real_part.data)) - - return magnitude, phase - - def inverse(self, magnitude, phase): - recombine_magnitude_phase = torch.cat( - [magnitude * torch.cos(phase), magnitude * torch.sin(phase)], dim=1 - ) - - inverse_transform = F.conv_transpose1d( - recombine_magnitude_phase, - torch.autograd.Variable(self.inverse_basis, requires_grad=False), - stride=self.hop_length, - padding=0, - ) - - if self.window is not None: - window_sum = window_sumsquare( - self.window, - magnitude.size(-1), - hop_length=self.hop_length, - win_length=self.win_length, - n_fft=self.filter_length, - dtype=np.float32, - ) - # remove modulation effects - approx_nonzero_indices = torch.from_numpy( - np.where(window_sum > tiny(window_sum))[0] - ) - window_sum = torch.autograd.Variable( - torch.from_numpy(window_sum), requires_grad=False - ) - window_sum = window_sum - inverse_transform[:, :, approx_nonzero_indices] /= window_sum[ - approx_nonzero_indices - ] - - # scale by hop ratio - inverse_transform *= float(self.filter_length) / self.hop_length - - inverse_transform = inverse_transform[:, :, int(self.filter_length / 2) :] - inverse_transform = inverse_transform[:, :, : -int(self.filter_length / 2) :] - - return inverse_transform - - def forward(self, input_data): - self.magnitude, self.phase = self.transform(input_data) - reconstruction = self.inverse(self.magnitude, self.phase) - return reconstruction - - -class TacotronSTFT(torch.nn.Module): - def __init__( - self, - filter_length, - hop_length, - win_length, - n_mel_channels, - sampling_rate, - mel_fmin, - mel_fmax, - ): - super(TacotronSTFT, 
self).__init__() - self.n_mel_channels = n_mel_channels - self.sampling_rate = sampling_rate - self.stft_fn = STFT(filter_length, hop_length, win_length) - mel_basis = librosa_mel_fn( - sampling_rate, filter_length, n_mel_channels, mel_fmin, mel_fmax - ) - mel_basis = torch.from_numpy(mel_basis).float() - self.register_buffer("mel_basis", mel_basis) - - def spectral_normalize(self, magnitudes, normalize_fun): - output = dynamic_range_compression(magnitudes, normalize_fun) - return output - - def spectral_de_normalize(self, magnitudes): - output = dynamic_range_decompression(magnitudes) - return output - - def mel_spectrogram(self, y, normalize_fun=torch.log): - """Computes mel-spectrograms from a batch of waves - PARAMS - ------ - y: Variable(torch.FloatTensor) with shape (B, T) in range [-1, 1] - - RETURNS - ------- - mel_output: torch.FloatTensor of shape (B, n_mel_channels, T) - """ - assert torch.min(y.data) >= -1, torch.min(y.data) - assert torch.max(y.data) <= 1, torch.max(y.data) - - magnitudes, phases = self.stft_fn.transform(y) - magnitudes = magnitudes.data - mel_output = torch.matmul(self.mel_basis, magnitudes) - mel_output = self.spectral_normalize(mel_output, normalize_fun) - energy = torch.norm(magnitudes, dim=1) - - log_magnitudes = self.spectral_normalize(magnitudes, normalize_fun) - - return mel_output, log_magnitudes, energy diff --git a/spaces/chansung/LLM-As-Chatbot/models/llama_rlhf.py b/spaces/chansung/LLM-As-Chatbot/models/llama_rlhf.py deleted file mode 100644 index 754768ec04db2e8a14fcc03d4d2be0491ac33cad..0000000000000000000000000000000000000000 --- a/spaces/chansung/LLM-As-Chatbot/models/llama_rlhf.py +++ /dev/null @@ -1,51 +0,0 @@ -import torch -from peft import PeftModel -from transformers import LlamaTokenizer, LlamaForCausalLM - -def load_model( - base, - finetuned, - mode_cpu, - mode_mps, - mode_full_gpu, - mode_8bit, - mode_4bit, - force_download_ckpt -): - tokenizer = LlamaTokenizer.from_pretrained(base) - tokenizer.pad_token_id = 0 - tokenizer.padding_side = "left" - - if not multi_gpu: - model = LlamaForCausalLM.from_pretrained( - base, - load_in_8bit=mode_8bit, - load_in_4bit=mode_4bit, - device_map="auto", - ) - - model = PeftModel.from_pretrained( - model, - finetuned, - # force_download=force_download_ckpt, - device_map={'': 0} - ) - return model, tokenizer - else: - model = LlamaForCausalLM.from_pretrained( - base, - load_in_8bit=mode_8bit, - load_in_4bit=mode_4bit, - torch_dtype=torch.float16, - device_map="auto", - ) - - model = PeftModel.from_pretrained( - model, - finetuned, - # force_download=force_download_ckpt, - torch_dtype=torch.float16 - ) - model.half() - return model, tokenizer - diff --git a/spaces/chendl/compositional_test/transformers/examples/research_projects/pplm/README.md b/spaces/chendl/compositional_test/transformers/examples/research_projects/pplm/README.md deleted file mode 100644 index f37ea8e96f216d1977491779f940c2f9851302da..0000000000000000000000000000000000000000 --- a/spaces/chendl/compositional_test/transformers/examples/research_projects/pplm/README.md +++ /dev/null @@ -1,56 +0,0 @@ -# Plug and Play Language Models: a Simple Approach to Controlled Text Generation - -Authors: [Sumanth Dathathri](https://dathath.github.io/), [Andrea Madotto](https://andreamad8.github.io/), Janice Lan, Jane Hung, Eric Frank, [Piero Molino](https://w4nderlu.st/), [Jason Yosinski](http://yosinski.com/), and [Rosanne Liu](http://www.rosanneliu.com/) - -This folder contains the original code used to run the Plug and Play Language Model 
(PPLM). - -Paper link: https://arxiv.org/abs/1912.02164 - -Blog link: https://eng.uber.com/pplm - -Please check out the repo under uber-research for more information: https://github.com/uber-research/PPLM - -# Note - -⚠️ This project should be run with pytorch-lightning==1.0.4 which has a potential security vulnerability - -## Setup - -```bash -git clone https://github.com/huggingface/transformers && cd transformers -pip install . -pip install nltk torchtext # additional requirements. -cd examples/research_projects/pplm -``` - -## PPLM-BoW - -### Example command for bag-of-words control - -```bash -python run_pplm.py -B military --cond_text "The potato" --length 50 --gamma 1.5 --num_iterations 3 --num_samples 10 --stepsize 0.03 --window_length 5 --kl_scale 0.01 --gm_scale 0.99 --colorama --sample -``` - -### Tuning hyperparameters for bag-of-words control - -1. Increase `--stepsize` to intensify topic control, and decrease its value to soften the control. `--stepsize 0` recovers the original uncontrolled GPT-2 model. - -2. If the language being generated is repetitive (For e.g. "science science experiment experiment"), there are several options to consider:
    - a) Reduce the `--stepsize`
    - b) Increase `--kl_scale` (the KL-loss coefficient) or decrease `--gm_scale` (the gm-scaling term)
    - c) Add `--grad-length xx` where xx is an integer <= `length` (e.g. `--grad-length 30`); see the combined example below.
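
For instance, a combined run that applies all three softening adjustments to the bag-of-words example above might look like the following. This is only a sketch: the specific values are illustrative assumptions rather than tuned recommendations, and the flag spellings follow the README text above.

```bash
# Softer BoW control: smaller step size, larger KL coefficient,
# slightly smaller gm-scale, and a capped gradient length.
# The values below are illustrative, not tuned recommendations.
python run_pplm.py -B military --cond_text "The potato" --length 50 --gamma 1.5 \
    --num_iterations 3 --num_samples 10 --window_length 5 \
    --stepsize 0.02 --kl_scale 0.02 --gm_scale 0.97 --grad-length 30 \
    --colorama --sample
```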
    - - -## PPLM-Discrim - -### Example command for discriminator based sentiment control - -```bash -python run_pplm.py -D sentiment --class_label 2 --cond_text "My dog died" --length 50 --gamma 1.0 --num_iterations 10 --num_samples 10 --stepsize 0.04 --kl_scale 0.01 --gm_scale 0.95 --sample -``` - -### Tuning hyperparameters for discriminator control - -1. Increase `--stepsize` to intensify topic control, and decrease its value to soften the control. `--stepsize 0` recovers the original uncontrolled GPT-2 model. - -2. Use `--class_label 3` for negative, and `--class_label 2` for positive diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/chardet/metadata/languages.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/chardet/metadata/languages.py deleted file mode 100644 index eb40c5f0c8526208d434d762855d23079dc68b36..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/chardet/metadata/languages.py +++ /dev/null @@ -1,352 +0,0 @@ -""" -Metadata about languages used by our model training code for our -SingleByteCharSetProbers. Could be used for other things in the future. - -This code is based on the language metadata from the uchardet project. -""" - -from string import ascii_letters -from typing import List, Optional - -# TODO: Add Ukrainian (KOI8-U) - - -class Language: - """Metadata about a language useful for training models - - :ivar name: The human name for the language, in English. - :type name: str - :ivar iso_code: 2-letter ISO 639-1 if possible, 3-letter ISO code otherwise, - or use another catalog as a last resort. - :type iso_code: str - :ivar use_ascii: Whether or not ASCII letters should be included in trained - models. - :type use_ascii: bool - :ivar charsets: The charsets we want to support and create data for. - :type charsets: list of str - :ivar alphabet: The characters in the language's alphabet. If `use_ascii` is - `True`, you only need to add those not in the ASCII set. - :type alphabet: str - :ivar wiki_start_pages: The Wikipedia pages to start from if we're crawling - Wikipedia for training data. - :type wiki_start_pages: list of str - """ - - def __init__( - self, - name: Optional[str] = None, - iso_code: Optional[str] = None, - use_ascii: bool = True, - charsets: Optional[List[str]] = None, - alphabet: Optional[str] = None, - wiki_start_pages: Optional[List[str]] = None, - ) -> None: - super().__init__() - self.name = name - self.iso_code = iso_code - self.use_ascii = use_ascii - self.charsets = charsets - if self.use_ascii: - if alphabet: - alphabet += ascii_letters - else: - alphabet = ascii_letters - elif not alphabet: - raise ValueError("Must supply alphabet if use_ascii is False") - self.alphabet = "".join(sorted(set(alphabet))) if alphabet else None - self.wiki_start_pages = wiki_start_pages - - def __repr__(self) -> str: - param_str = ", ".join( - f"{k}={v!r}" for k, v in self.__dict__.items() if not k.startswith("_") - ) - return f"{self.__class__.__name__}({param_str})" - - -LANGUAGES = { - "Arabic": Language( - name="Arabic", - iso_code="ar", - use_ascii=False, - # We only support encodings that use isolated - # forms, because the current recommendation is - # that the rendering system handles presentation - # forms. This means we purposefully skip IBM864. 
- charsets=["ISO-8859-6", "WINDOWS-1256", "CP720", "CP864"], - alphabet="ءآأؤإئابةتثجحخدذرزسشصضطظعغػؼؽؾؿـفقكلمنهوىيًٌٍَُِّ", - wiki_start_pages=["الصفحة_الرئيسية"], - ), - "Belarusian": Language( - name="Belarusian", - iso_code="be", - use_ascii=False, - charsets=["ISO-8859-5", "WINDOWS-1251", "IBM866", "MacCyrillic"], - alphabet="АБВГДЕЁЖЗІЙКЛМНОПРСТУЎФХЦЧШЫЬЭЮЯабвгдеёжзійклмнопрстуўфхцчшыьэюяʼ", - wiki_start_pages=["Галоўная_старонка"], - ), - "Bulgarian": Language( - name="Bulgarian", - iso_code="bg", - use_ascii=False, - charsets=["ISO-8859-5", "WINDOWS-1251", "IBM855"], - alphabet="АБВГДЕЖЗИЙКЛМНОПРСТУФХЦЧШЩЪЬЮЯабвгдежзийклмнопрстуфхцчшщъьюя", - wiki_start_pages=["Начална_страница"], - ), - "Czech": Language( - name="Czech", - iso_code="cz", - use_ascii=True, - charsets=["ISO-8859-2", "WINDOWS-1250"], - alphabet="áčďéěíňóřšťúůýžÁČĎÉĚÍŇÓŘŠŤÚŮÝŽ", - wiki_start_pages=["Hlavní_strana"], - ), - "Danish": Language( - name="Danish", - iso_code="da", - use_ascii=True, - charsets=["ISO-8859-1", "ISO-8859-15", "WINDOWS-1252", "MacRoman"], - alphabet="æøåÆØÅ", - wiki_start_pages=["Forside"], - ), - "German": Language( - name="German", - iso_code="de", - use_ascii=True, - charsets=["ISO-8859-1", "ISO-8859-15", "WINDOWS-1252", "MacRoman"], - alphabet="äöüßẞÄÖÜ", - wiki_start_pages=["Wikipedia:Hauptseite"], - ), - "Greek": Language( - name="Greek", - iso_code="el", - use_ascii=False, - charsets=["ISO-8859-7", "WINDOWS-1253"], - alphabet="αβγδεζηθικλμνξοπρσςτυφχψωάέήίόύώΑΒΓΔΕΖΗΘΙΚΛΜΝΞΟΠΡΣΣΤΥΦΧΨΩΆΈΉΊΌΎΏ", - wiki_start_pages=["Πύλη:Κύρια"], - ), - "English": Language( - name="English", - iso_code="en", - use_ascii=True, - charsets=["ISO-8859-1", "WINDOWS-1252", "MacRoman"], - wiki_start_pages=["Main_Page"], - ), - "Esperanto": Language( - name="Esperanto", - iso_code="eo", - # Q, W, X, and Y not used at all - use_ascii=False, - charsets=["ISO-8859-3"], - alphabet="abcĉdefgĝhĥijĵklmnoprsŝtuŭvzABCĈDEFGĜHĤIJĴKLMNOPRSŜTUŬVZ", - wiki_start_pages=["Vikipedio:Ĉefpaĝo"], - ), - "Spanish": Language( - name="Spanish", - iso_code="es", - use_ascii=True, - charsets=["ISO-8859-1", "ISO-8859-15", "WINDOWS-1252", "MacRoman"], - alphabet="ñáéíóúüÑÁÉÍÓÚÜ", - wiki_start_pages=["Wikipedia:Portada"], - ), - "Estonian": Language( - name="Estonian", - iso_code="et", - use_ascii=False, - charsets=["ISO-8859-4", "ISO-8859-13", "WINDOWS-1257"], - # C, F, Š, Q, W, X, Y, Z, Ž are only for - # loanwords - alphabet="ABDEGHIJKLMNOPRSTUVÕÄÖÜabdeghijklmnoprstuvõäöü", - wiki_start_pages=["Esileht"], - ), - "Finnish": Language( - name="Finnish", - iso_code="fi", - use_ascii=True, - charsets=["ISO-8859-1", "ISO-8859-15", "WINDOWS-1252", "MacRoman"], - alphabet="ÅÄÖŠŽåäöšž", - wiki_start_pages=["Wikipedia:Etusivu"], - ), - "French": Language( - name="French", - iso_code="fr", - use_ascii=True, - charsets=["ISO-8859-1", "ISO-8859-15", "WINDOWS-1252", "MacRoman"], - alphabet="œàâçèéîïùûêŒÀÂÇÈÉÎÏÙÛÊ", - wiki_start_pages=["Wikipédia:Accueil_principal", "Bœuf (animal)"], - ), - "Hebrew": Language( - name="Hebrew", - iso_code="he", - use_ascii=False, - charsets=["ISO-8859-8", "WINDOWS-1255"], - alphabet="אבגדהוזחטיךכלםמןנסעףפץצקרשתװױײ", - wiki_start_pages=["עמוד_ראשי"], - ), - "Croatian": Language( - name="Croatian", - iso_code="hr", - # Q, W, X, Y are only used for foreign words. 
- use_ascii=False, - charsets=["ISO-8859-2", "WINDOWS-1250"], - alphabet="abcčćdđefghijklmnoprsštuvzžABCČĆDĐEFGHIJKLMNOPRSŠTUVZŽ", - wiki_start_pages=["Glavna_stranica"], - ), - "Hungarian": Language( - name="Hungarian", - iso_code="hu", - # Q, W, X, Y are only used for foreign words. - use_ascii=False, - charsets=["ISO-8859-2", "WINDOWS-1250"], - alphabet="abcdefghijklmnoprstuvzáéíóöőúüűABCDEFGHIJKLMNOPRSTUVZÁÉÍÓÖŐÚÜŰ", - wiki_start_pages=["Kezdőlap"], - ), - "Italian": Language( - name="Italian", - iso_code="it", - use_ascii=True, - charsets=["ISO-8859-1", "ISO-8859-15", "WINDOWS-1252", "MacRoman"], - alphabet="ÀÈÉÌÒÓÙàèéìòóù", - wiki_start_pages=["Pagina_principale"], - ), - "Lithuanian": Language( - name="Lithuanian", - iso_code="lt", - use_ascii=False, - charsets=["ISO-8859-13", "WINDOWS-1257", "ISO-8859-4"], - # Q, W, and X not used at all - alphabet="AĄBCČDEĘĖFGHIĮYJKLMNOPRSŠTUŲŪVZŽaąbcčdeęėfghiįyjklmnoprsštuųūvzž", - wiki_start_pages=["Pagrindinis_puslapis"], - ), - "Latvian": Language( - name="Latvian", - iso_code="lv", - use_ascii=False, - charsets=["ISO-8859-13", "WINDOWS-1257", "ISO-8859-4"], - # Q, W, X, Y are only for loanwords - alphabet="AĀBCČDEĒFGĢHIĪJKĶLĻMNŅOPRSŠTUŪVZŽaābcčdeēfgģhiījkķlļmnņoprsštuūvzž", - wiki_start_pages=["Sākumlapa"], - ), - "Macedonian": Language( - name="Macedonian", - iso_code="mk", - use_ascii=False, - charsets=["ISO-8859-5", "WINDOWS-1251", "MacCyrillic", "IBM855"], - alphabet="АБВГДЃЕЖЗЅИЈКЛЉМНЊОПРСТЌУФХЦЧЏШабвгдѓежзѕијклљмнњопрстќуфхцчџш", - wiki_start_pages=["Главна_страница"], - ), - "Dutch": Language( - name="Dutch", - iso_code="nl", - use_ascii=True, - charsets=["ISO-8859-1", "WINDOWS-1252", "MacRoman"], - wiki_start_pages=["Hoofdpagina"], - ), - "Polish": Language( - name="Polish", - iso_code="pl", - # Q and X are only used for foreign words. - use_ascii=False, - charsets=["ISO-8859-2", "WINDOWS-1250"], - alphabet="AĄBCĆDEĘFGHIJKLŁMNŃOÓPRSŚTUWYZŹŻaąbcćdeęfghijklłmnńoóprsśtuwyzźż", - wiki_start_pages=["Wikipedia:Strona_główna"], - ), - "Portuguese": Language( - name="Portuguese", - iso_code="pt", - use_ascii=True, - charsets=["ISO-8859-1", "ISO-8859-15", "WINDOWS-1252", "MacRoman"], - alphabet="ÁÂÃÀÇÉÊÍÓÔÕÚáâãàçéêíóôõú", - wiki_start_pages=["Wikipédia:Página_principal"], - ), - "Romanian": Language( - name="Romanian", - iso_code="ro", - use_ascii=True, - charsets=["ISO-8859-2", "WINDOWS-1250"], - alphabet="ăâîșțĂÂÎȘȚ", - wiki_start_pages=["Pagina_principală"], - ), - "Russian": Language( - name="Russian", - iso_code="ru", - use_ascii=False, - charsets=[ - "ISO-8859-5", - "WINDOWS-1251", - "KOI8-R", - "MacCyrillic", - "IBM866", - "IBM855", - ], - alphabet="абвгдеёжзийклмнопрстуфхцчшщъыьэюяАБВГДЕЁЖЗИЙКЛМНОПРСТУФХЦЧШЩЪЫЬЭЮЯ", - wiki_start_pages=["Заглавная_страница"], - ), - "Slovak": Language( - name="Slovak", - iso_code="sk", - use_ascii=True, - charsets=["ISO-8859-2", "WINDOWS-1250"], - alphabet="áäčďéíĺľňóôŕšťúýžÁÄČĎÉÍĹĽŇÓÔŔŠŤÚÝŽ", - wiki_start_pages=["Hlavná_stránka"], - ), - "Slovene": Language( - name="Slovene", - iso_code="sl", - # Q, W, X, Y are only used for foreign words. - use_ascii=False, - charsets=["ISO-8859-2", "WINDOWS-1250"], - alphabet="abcčdefghijklmnoprsštuvzžABCČDEFGHIJKLMNOPRSŠTUVZŽ", - wiki_start_pages=["Glavna_stran"], - ), - # Serbian can be written in both Latin and Cyrillic, but there's no - # simple way to get the Latin alphabet pages from Wikipedia through - # the API, so for now we just support Cyrillic. 
- "Serbian": Language( - name="Serbian", - iso_code="sr", - alphabet="АБВГДЂЕЖЗИЈКЛЉМНЊОПРСТЋУФХЦЧЏШабвгдђежзијклљмнњопрстћуфхцчџш", - charsets=["ISO-8859-5", "WINDOWS-1251", "MacCyrillic", "IBM855"], - wiki_start_pages=["Главна_страна"], - ), - "Thai": Language( - name="Thai", - iso_code="th", - use_ascii=False, - charsets=["ISO-8859-11", "TIS-620", "CP874"], - alphabet="กขฃคฅฆงจฉชซฌญฎฏฐฑฒณดตถทธนบปผฝพฟภมยรฤลฦวศษสหฬอฮฯะัาำิีึืฺุู฿เแโใไๅๆ็่้๊๋์ํ๎๏๐๑๒๓๔๕๖๗๘๙๚๛", - wiki_start_pages=["หน้าหลัก"], - ), - "Turkish": Language( - name="Turkish", - iso_code="tr", - # Q, W, and X are not used by Turkish - use_ascii=False, - charsets=["ISO-8859-3", "ISO-8859-9", "WINDOWS-1254"], - alphabet="abcçdefgğhıijklmnoöprsştuüvyzâîûABCÇDEFGĞHIİJKLMNOÖPRSŞTUÜVYZÂÎÛ", - wiki_start_pages=["Ana_Sayfa"], - ), - "Vietnamese": Language( - name="Vietnamese", - iso_code="vi", - use_ascii=False, - # Windows-1258 is the only common 8-bit - # Vietnamese encoding supported by Python. - # From Wikipedia: - # For systems that lack support for Unicode, - # dozens of 8-bit Vietnamese code pages are - # available.[1] The most common are VISCII - # (TCVN 5712:1993), VPS, and Windows-1258.[3] - # Where ASCII is required, such as when - # ensuring readability in plain text e-mail, - # Vietnamese letters are often encoded - # according to Vietnamese Quoted-Readable - # (VIQR) or VSCII Mnemonic (VSCII-MNEM),[4] - # though usage of either variable-width - # scheme has declined dramatically following - # the adoption of Unicode on the World Wide - # Web. - charsets=["WINDOWS-1258"], - alphabet="aăâbcdđeêghiklmnoôơpqrstuưvxyAĂÂBCDĐEÊGHIKLMNOÔƠPQRSTUƯVXY", - wiki_start_pages=["Chữ_Quốc_ngữ"], - ), -} diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/contourpy/util/mpl_renderer.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/contourpy/util/mpl_renderer.py deleted file mode 100644 index dbcb5ca19a01e3ae000986673d66def23f9c2eac..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/contourpy/util/mpl_renderer.py +++ /dev/null @@ -1,613 +0,0 @@ -from __future__ import annotations - -import io -from typing import TYPE_CHECKING, Any, cast - -import matplotlib.collections as mcollections -import matplotlib.pyplot as plt -import numpy as np - -from contourpy import FillType, LineType -from contourpy.util.mpl_util import filled_to_mpl_paths, lines_to_mpl_paths, mpl_codes_to_offsets -from contourpy.util.renderer import Renderer - -if TYPE_CHECKING: - from matplotlib.axes import Axes - from matplotlib.figure import Figure - from numpy.typing import ArrayLike - - import contourpy._contourpy as cpy - - -class MplRenderer(Renderer): - _axes: Axes - _fig: Figure - _want_tight: bool - - """Utility renderer using Matplotlib to render a grid of plots over the same (x, y) range. - - Args: - nrows (int, optional): Number of rows of plots, default ``1``. - ncols (int, optional): Number of columns of plots, default ``1``. - figsize (tuple(float, float), optional): Figure size in inches, default ``(9, 9)``. - show_frame (bool, optional): Whether to show frame and axes ticks, default ``True``. - backend (str, optional): Matplotlib backend to use or ``None`` for default backend. - Default ``None``. - gridspec_kw (dict, optional): Gridspec keyword arguments to pass to ``plt.subplots``, - default None. 
- """ - def __init__( - self, - nrows: int = 1, - ncols: int = 1, - figsize: tuple[float, float] = (9, 9), - show_frame: bool = True, - backend: str | None = None, - gridspec_kw: dict[str, Any] | None = None, - ) -> None: - if backend is not None: - import matplotlib - matplotlib.use(backend) - - kwargs = dict(figsize=figsize, squeeze=False, sharex=True, sharey=True) - if gridspec_kw is not None: - kwargs["gridspec_kw"] = gridspec_kw - else: - kwargs["subplot_kw"] = dict(aspect="equal") - - self._fig, axes = plt.subplots(nrows, ncols, **kwargs) - self._axes = axes.flatten() - if not show_frame: - for ax in self._axes: - ax.axis("off") - - self._want_tight = True - - def __del__(self) -> None: - if hasattr(self, "_fig"): - plt.close(self._fig) - - def _autoscale(self) -> None: - # Using axes._need_autoscale attribute if need to autoscale before rendering after adding - # lines/filled. Only want to autoscale once per axes regardless of how many lines/filled - # added. - for ax in self._axes: - if getattr(ax, "_need_autoscale", False): - ax.autoscale_view(tight=True) - ax._need_autoscale = False - if self._want_tight and len(self._axes) > 1: - self._fig.tight_layout() - - def _get_ax(self, ax: Axes | int) -> Axes: - if isinstance(ax, int): - ax = self._axes[ax] - return ax - - def filled( - self, - filled: cpy.FillReturn, - fill_type: FillType, - ax: Axes | int = 0, - color: str = "C0", - alpha: float = 0.7, - ) -> None: - """Plot filled contours on a single Axes. - - Args: - filled (sequence of arrays): Filled contour data as returned by - :func:`~contourpy.ContourGenerator.filled`. - fill_type (FillType): Type of ``filled`` data, as returned by - :attr:`~contourpy.ContourGenerator.fill_type`. - ax (int or Maplotlib Axes, optional): Which axes to plot on, default ``0``. - color (str, optional): Color to plot with. May be a string color or the letter ``"C"`` - followed by an integer in the range ``"C0"`` to ``"C9"`` to use a color from the - ``tab10`` colormap. Default ``"C0"``. - alpha (float, optional): Opacity to plot with, default ``0.7``. - """ - ax = self._get_ax(ax) - paths = filled_to_mpl_paths(filled, fill_type) - collection = mcollections.PathCollection( - paths, facecolors=color, edgecolors="none", lw=0, alpha=alpha) - ax.add_collection(collection) - ax._need_autoscale = True - - def grid( - self, - x: ArrayLike, - y: ArrayLike, - ax: Axes | int = 0, - color: str = "black", - alpha: float = 0.1, - point_color: str | None = None, - quad_as_tri_alpha: float = 0, - ) -> None: - """Plot quad grid lines on a single Axes. - - Args: - x (array-like of shape (ny, nx) or (nx,)): The x-coordinates of the grid points. - y (array-like of shape (ny, nx) or (ny,)): The y-coordinates of the grid points. - ax (int or Matplotlib Axes, optional): Which Axes to plot on, default ``0``. - color (str, optional): Color to plot grid lines, default ``"black"``. - alpha (float, optional): Opacity to plot lines with, default ``0.1``. - point_color (str, optional): Color to plot grid points or ``None`` if grid points - should not be plotted, default ``None``. - quad_as_tri_alpha (float, optional): Opacity to plot ``quad_as_tri`` grid, default 0. - - Colors may be a string color or the letter ``"C"`` followed by an integer in the range - ``"C0"`` to ``"C9"`` to use a color from the ``tab10`` colormap. - - Warning: - ``quad_as_tri_alpha > 0`` plots all quads as though they are unmasked. 
- """ - ax = self._get_ax(ax) - x, y = self._grid_as_2d(x, y) - kwargs = dict(color=color, alpha=alpha) - ax.plot(x, y, x.T, y.T, **kwargs) - if quad_as_tri_alpha > 0: - # Assumes no quad mask. - xmid = 0.25*(x[:-1, :-1] + x[1:, :-1] + x[:-1, 1:] + x[1:, 1:]) - ymid = 0.25*(y[:-1, :-1] + y[1:, :-1] + y[:-1, 1:] + y[1:, 1:]) - kwargs["alpha"] = quad_as_tri_alpha - ax.plot( - np.stack((x[:-1, :-1], xmid, x[1:, 1:])).reshape((3, -1)), - np.stack((y[:-1, :-1], ymid, y[1:, 1:])).reshape((3, -1)), - np.stack((x[1:, :-1], xmid, x[:-1, 1:])).reshape((3, -1)), - np.stack((y[1:, :-1], ymid, y[:-1, 1:])).reshape((3, -1)), - **kwargs) - if point_color is not None: - ax.plot(x, y, color=point_color, alpha=alpha, marker="o", lw=0) - ax._need_autoscale = True - - def lines( - self, - lines: cpy.LineReturn, - line_type: LineType, - ax: Axes | int = 0, - color: str = "C0", - alpha: float = 1.0, - linewidth: float = 1, - ) -> None: - """Plot contour lines on a single Axes. - - Args: - lines (sequence of arrays): Contour line data as returned by - :func:`~contourpy.ContourGenerator.lines`. - line_type (LineType): Type of ``lines`` data, as returned by - :attr:`~contourpy.ContourGenerator.line_type`. - ax (int or Matplotlib Axes, optional): Which Axes to plot on, default ``0``. - color (str, optional): Color to plot lines. May be a string color or the letter ``"C"`` - followed by an integer in the range ``"C0"`` to ``"C9"`` to use a color from the - ``tab10`` colormap. Default ``"C0"``. - alpha (float, optional): Opacity to plot lines with, default ``1.0``. - linewidth (float, optional): Width of lines, default ``1``. - """ - ax = self._get_ax(ax) - paths = lines_to_mpl_paths(lines, line_type) - collection = mcollections.PathCollection( - paths, facecolors="none", edgecolors=color, lw=linewidth, alpha=alpha) - ax.add_collection(collection) - ax._need_autoscale = True - - def mask( - self, - x: ArrayLike, - y: ArrayLike, - z: ArrayLike | np.ma.MaskedArray[Any, Any], - ax: Axes | int = 0, - color: str = "black", - ) -> None: - """Plot masked out grid points as circles on a single Axes. - - Args: - x (array-like of shape (ny, nx) or (nx,)): The x-coordinates of the grid points. - y (array-like of shape (ny, nx) or (ny,)): The y-coordinates of the grid points. - z (masked array of shape (ny, nx): z-values. - ax (int or Matplotlib Axes, optional): Which Axes to plot on, default ``0``. - color (str, optional): Circle color, default ``"black"``. - """ - mask = np.ma.getmask(z) # type: ignore[no-untyped-call] - if mask is np.ma.nomask: - return - ax = self._get_ax(ax) - x, y = self._grid_as_2d(x, y) - ax.plot(x[mask], y[mask], "o", c=color) - - def save(self, filename: str, transparent: bool = False) -> None: - """Save plots to SVG or PNG file. - - Args: - filename (str): Filename to save to. - transparent (bool, optional): Whether background should be transparent, default - ``False``. - """ - self._autoscale() - self._fig.savefig(filename, transparent=transparent) - - def save_to_buffer(self) -> io.BytesIO: - """Save plots to an ``io.BytesIO`` buffer. - - Return: - BytesIO: PNG image buffer. - """ - self._autoscale() - buf = io.BytesIO() - self._fig.savefig(buf, format="png") - buf.seek(0) - return buf - - def show(self) -> None: - """Show plots in an interactive window, in the usual Matplotlib manner. - """ - self._autoscale() - plt.show() - - def title(self, title: str, ax: Axes | int = 0, color: str | None = None) -> None: - """Set the title of a single Axes. - - Args: - title (str): Title text. 
- ax (int or Matplotlib Axes, optional): Which Axes to set the title of, default ``0``. - color (str, optional): Color to set title. May be a string color or the letter ``"C"`` - followed by an integer in the range ``"C0"`` to ``"C9"`` to use a color from the - ``tab10`` colormap. Default is ``None`` which uses Matplotlib's default title color - that depends on the stylesheet in use. - """ - if color: - self._get_ax(ax).set_title(title, color=color) - else: - self._get_ax(ax).set_title(title) - - def z_values( - self, - x: ArrayLike, - y: ArrayLike, - z: ArrayLike, - ax: Axes | int = 0, - color: str = "green", - fmt: str = ".1f", - quad_as_tri: bool = False, - ) -> None: - """Show ``z`` values on a single Axes. - - Args: - x (array-like of shape (ny, nx) or (nx,)): The x-coordinates of the grid points. - y (array-like of shape (ny, nx) or (ny,)): The y-coordinates of the grid points. - z (array-like of shape (ny, nx): z-values. - ax (int or Matplotlib Axes, optional): Which Axes to plot on, default ``0``. - color (str, optional): Color of added text. May be a string color or the letter ``"C"`` - followed by an integer in the range ``"C0"`` to ``"C9"`` to use a color from the - ``tab10`` colormap. Default ``"green"``. - fmt (str, optional): Format to display z-values, default ``".1f"``. - quad_as_tri (bool, optional): Whether to show z-values at the ``quad_as_tri`` centers - of quads. - - Warning: - ``quad_as_tri=True`` shows z-values for all quads, even if masked. - """ - ax = self._get_ax(ax) - x, y = self._grid_as_2d(x, y) - z = np.asarray(z) - ny, nx = z.shape - for j in range(ny): - for i in range(nx): - ax.text(x[j, i], y[j, i], f"{z[j, i]:{fmt}}", ha="center", va="center", - color=color, clip_on=True) - if quad_as_tri: - for j in range(ny-1): - for i in range(nx-1): - xx = np.mean(x[j:j+2, i:i+2]) - yy = np.mean(y[j:j+2, i:i+2]) - zz = np.mean(z[j:j+2, i:i+2]) - ax.text(xx, yy, f"{zz:{fmt}}", ha="center", va="center", color=color, - clip_on=True) - - -class MplTestRenderer(MplRenderer): - """Test renderer implemented using Matplotlib. - - No whitespace around plots and no spines/ticks displayed. - Uses Agg backend, so can only save to file/buffer, cannot call ``show()``. - """ - def __init__( - self, - nrows: int = 1, - ncols: int = 1, - figsize: tuple[float, float] = (9, 9), - ) -> None: - gridspec = { - "left": 0.01, - "right": 0.99, - "top": 0.99, - "bottom": 0.01, - "wspace": 0.01, - "hspace": 0.01, - } - super().__init__( - nrows, ncols, figsize, show_frame=True, backend="Agg", gridspec_kw=gridspec, - ) - - for ax in self._axes: - ax.set_xmargin(0.0) - ax.set_ymargin(0.0) - ax.set_xticks([]) - ax.set_yticks([]) - - self._want_tight = False - - -class MplDebugRenderer(MplRenderer): - """Debug renderer implemented using Matplotlib. - - Extends ``MplRenderer`` to add extra information to help in debugging such as markers, arrows, - text, etc. - """ - def __init__( - self, - nrows: int = 1, - ncols: int = 1, - figsize: tuple[float, float] = (9, 9), - show_frame: bool = True, - ) -> None: - super().__init__(nrows, ncols, figsize, show_frame) - - def _arrow( - self, - ax: Axes, - line_start: cpy.CoordinateArray, - line_end: cpy.CoordinateArray, - color: str, - alpha: float, - arrow_size: float, - ) -> None: - mid = 0.5*(line_start + line_end) - along = line_end - line_start - along /= np.sqrt(np.dot(along, along)) # Unit vector. 
- right = np.asarray((along[1], -along[0])) - arrow = np.stack(( - mid - (along*0.5 - right)*arrow_size, - mid + along*0.5*arrow_size, - mid - (along*0.5 + right)*arrow_size, - )) - ax.plot(arrow[:, 0], arrow[:, 1], "-", c=color, alpha=alpha) - - def _filled_to_lists_of_points_and_offsets( - self, - filled: cpy.FillReturn, - fill_type: FillType, - ) -> tuple[list[cpy.PointArray], list[cpy.OffsetArray]]: - if fill_type == FillType.OuterCode: - if TYPE_CHECKING: - filled = cast(cpy.FillReturn_OuterCode, filled) - all_points = filled[0] - all_offsets = [mpl_codes_to_offsets(codes) for codes in filled[1]] - elif fill_type == FillType.ChunkCombinedCode: - if TYPE_CHECKING: - filled = cast(cpy.FillReturn_ChunkCombinedCode, filled) - all_points = [points for points in filled[0] if points is not None] - all_offsets = [mpl_codes_to_offsets(codes) for codes in filled[1] if codes is not None] - elif fill_type == FillType.OuterOffset: - if TYPE_CHECKING: - filled = cast(cpy.FillReturn_OuterOffset, filled) - all_points = filled[0] - all_offsets = filled[1] - elif fill_type == FillType.ChunkCombinedOffset: - if TYPE_CHECKING: - filled = cast(cpy.FillReturn_ChunkCombinedOffset, filled) - all_points = [points for points in filled[0] if points is not None] - all_offsets = [offsets for offsets in filled[1] if offsets is not None] - elif fill_type == FillType.ChunkCombinedCodeOffset: - if TYPE_CHECKING: - filled = cast(cpy.FillReturn_ChunkCombinedCodeOffset, filled) - all_points = [] - all_offsets = [] - for points, codes, outer_offsets in zip(*filled): - if points is None: - continue - if TYPE_CHECKING: - assert codes is not None and outer_offsets is not None - all_points += np.split(points, outer_offsets[1:-1]) - all_codes = np.split(codes, outer_offsets[1:-1]) - all_offsets += [mpl_codes_to_offsets(codes) for codes in all_codes] - elif fill_type == FillType.ChunkCombinedOffsetOffset: - if TYPE_CHECKING: - filled = cast(cpy.FillReturn_ChunkCombinedOffsetOffset, filled) - all_points = [] - all_offsets = [] - for points, offsets, outer_offsets in zip(*filled): - if points is None: - continue - if TYPE_CHECKING: - assert offsets is not None and outer_offsets is not None - for i in range(len(outer_offsets)-1): - offs = offsets[outer_offsets[i]:outer_offsets[i+1]+1] - all_points.append(points[offs[0]:offs[-1]]) - all_offsets.append(offs - offs[0]) - else: - raise RuntimeError(f"Rendering FillType {fill_type} not implemented") - - return all_points, all_offsets - - def _lines_to_list_of_points( - self, lines: cpy.LineReturn, line_type: LineType, - ) -> list[cpy.PointArray]: - if line_type == LineType.Separate: - if TYPE_CHECKING: - lines = cast(cpy.LineReturn_Separate, lines) - all_lines = lines - elif line_type == LineType.SeparateCode: - if TYPE_CHECKING: - lines = cast(cpy.LineReturn_SeparateCode, lines) - all_lines = lines[0] - elif line_type == LineType.ChunkCombinedCode: - if TYPE_CHECKING: - lines = cast(cpy.LineReturn_ChunkCombinedCode, lines) - all_lines = [] - for points, codes in zip(*lines): - if points is not None: - if TYPE_CHECKING: - assert codes is not None - offsets = mpl_codes_to_offsets(codes) - for i in range(len(offsets)-1): - all_lines.append(points[offsets[i]:offsets[i+1]]) - elif line_type == LineType.ChunkCombinedOffset: - if TYPE_CHECKING: - lines = cast(cpy.LineReturn_ChunkCombinedOffset, lines) - all_lines = [] - for points, all_offsets in zip(*lines): - if points is not None: - if TYPE_CHECKING: - assert all_offsets is not None - for i in range(len(all_offsets)-1): - 
all_lines.append(points[all_offsets[i]:all_offsets[i+1]]) - else: - raise RuntimeError(f"Rendering LineType {line_type} not implemented") - - return all_lines - - def filled( - self, - filled: cpy.FillReturn, - fill_type: FillType, - ax: Axes | int = 0, - color: str = "C1", - alpha: float = 0.7, - line_color: str = "C0", - line_alpha: float = 0.7, - point_color: str = "C0", - start_point_color: str = "red", - arrow_size: float = 0.1, - ) -> None: - super().filled(filled, fill_type, ax, color, alpha) - - if line_color is None and point_color is None: - return - - ax = self._get_ax(ax) - all_points, all_offsets = self._filled_to_lists_of_points_and_offsets(filled, fill_type) - - # Lines. - if line_color is not None: - for points, offsets in zip(all_points, all_offsets): - for start, end in zip(offsets[:-1], offsets[1:]): - xys = points[start:end] - ax.plot(xys[:, 0], xys[:, 1], c=line_color, alpha=line_alpha) - - if arrow_size > 0.0: - n = len(xys) - for i in range(n-1): - self._arrow(ax, xys[i], xys[i+1], line_color, line_alpha, arrow_size) - - # Points. - if point_color is not None: - for points, offsets in zip(all_points, all_offsets): - mask = np.ones(offsets[-1], dtype=bool) - mask[offsets[1:]-1] = False # Exclude end points. - if start_point_color is not None: - start_indices = offsets[:-1] - mask[start_indices] = False # Exclude start points. - ax.plot( - points[:, 0][mask], points[:, 1][mask], "o", c=point_color, alpha=line_alpha) - - if start_point_color is not None: - ax.plot(points[:, 0][start_indices], points[:, 1][start_indices], "o", - c=start_point_color, alpha=line_alpha) - - def lines( - self, - lines: cpy.LineReturn, - line_type: LineType, - ax: Axes | int = 0, - color: str = "C0", - alpha: float = 1.0, - linewidth: float = 1, - point_color: str = "C0", - start_point_color: str = "red", - arrow_size: float = 0.1, - ) -> None: - super().lines(lines, line_type, ax, color, alpha, linewidth) - - if arrow_size == 0.0 and point_color is None: - return - - ax = self._get_ax(ax) - all_lines = self._lines_to_list_of_points(lines, line_type) - - if arrow_size > 0.0: - for line in all_lines: - for i in range(len(line)-1): - self._arrow(ax, line[i], line[i+1], color, alpha, arrow_size) - - if point_color is not None: - for line in all_lines: - start_index = 0 - end_index = len(line) - if start_point_color is not None: - ax.plot(line[0, 0], line[0, 1], "o", c=start_point_color, alpha=alpha) - start_index = 1 - if line[0][0] == line[-1][0] and line[0][1] == line[-1][1]: - end_index -= 1 - ax.plot(line[start_index:end_index, 0], line[start_index:end_index, 1], "o", - c=color, alpha=alpha) - - def point_numbers( - self, - x: ArrayLike, - y: ArrayLike, - z: ArrayLike, - ax: Axes | int = 0, - color: str = "red", - ) -> None: - ax = self._get_ax(ax) - x, y = self._grid_as_2d(x, y) - z = np.asarray(z) - ny, nx = z.shape - for j in range(ny): - for i in range(nx): - quad = i + j*nx - ax.text(x[j, i], y[j, i], str(quad), ha="right", va="top", color=color, - clip_on=True) - - def quad_numbers( - self, - x: ArrayLike, - y: ArrayLike, - z: ArrayLike, - ax: Axes | int = 0, - color: str = "blue", - ) -> None: - ax = self._get_ax(ax) - x, y = self._grid_as_2d(x, y) - z = np.asarray(z) - ny, nx = z.shape - for j in range(1, ny): - for i in range(1, nx): - quad = i + j*nx - xmid = x[j-1:j+1, i-1:i+1].mean() - ymid = y[j-1:j+1, i-1:i+1].mean() - ax.text(xmid, ymid, str(quad), ha="center", va="center", color=color, clip_on=True) - - def z_levels( - self, - x: ArrayLike, - y: ArrayLike, - z: ArrayLike, - 
lower_level: float, - upper_level: float | None = None, - ax: Axes | int = 0, - color: str = "green", - ) -> None: - ax = self._get_ax(ax) - x, y = self._grid_as_2d(x, y) - z = np.asarray(z) - ny, nx = z.shape - for j in range(ny): - for i in range(nx): - zz = z[j, i] - if upper_level is not None and zz > upper_level: - z_level = 2 - elif zz > lower_level: - z_level = 1 - else: - z_level = 0 - ax.text(x[j, i], y[j, i], z_level, ha="left", va="bottom", color=color, - clip_on=True) diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/ttLib/removeOverlaps.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/ttLib/removeOverlaps.py deleted file mode 100644 index 624cd47b4076a95cbc7c2124550371f6ffa5ea37..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/ttLib/removeOverlaps.py +++ /dev/null @@ -1,248 +0,0 @@ -""" Simplify TrueType glyphs by merging overlapping contours/components. - -Requires https://github.com/fonttools/skia-pathops -""" - -import itertools -import logging -from typing import Callable, Iterable, Optional, Mapping - -from fontTools.misc.roundTools import otRound -from fontTools.ttLib import ttFont -from fontTools.ttLib.tables import _g_l_y_f -from fontTools.ttLib.tables import _h_m_t_x -from fontTools.pens.ttGlyphPen import TTGlyphPen - -import pathops - - -__all__ = ["removeOverlaps"] - - -class RemoveOverlapsError(Exception): - pass - - -log = logging.getLogger("fontTools.ttLib.removeOverlaps") - -_TTGlyphMapping = Mapping[str, ttFont._TTGlyph] - - -def skPathFromGlyph(glyphName: str, glyphSet: _TTGlyphMapping) -> pathops.Path: - path = pathops.Path() - pathPen = path.getPen(glyphSet=glyphSet) - glyphSet[glyphName].draw(pathPen) - return path - - -def skPathFromGlyphComponent( - component: _g_l_y_f.GlyphComponent, glyphSet: _TTGlyphMapping -): - baseGlyphName, transformation = component.getComponentInfo() - path = skPathFromGlyph(baseGlyphName, glyphSet) - return path.transform(*transformation) - - -def componentsOverlap(glyph: _g_l_y_f.Glyph, glyphSet: _TTGlyphMapping) -> bool: - if not glyph.isComposite(): - raise ValueError("This method only works with TrueType composite glyphs") - if len(glyph.components) < 2: - return False # single component, no overlaps - - component_paths = {} - - def _get_nth_component_path(index: int) -> pathops.Path: - if index not in component_paths: - component_paths[index] = skPathFromGlyphComponent( - glyph.components[index], glyphSet - ) - return component_paths[index] - - return any( - pathops.op( - _get_nth_component_path(i), - _get_nth_component_path(j), - pathops.PathOp.INTERSECTION, - fix_winding=False, - keep_starting_points=False, - ) - for i, j in itertools.combinations(range(len(glyph.components)), 2) - ) - - -def ttfGlyphFromSkPath(path: pathops.Path) -> _g_l_y_f.Glyph: - # Skia paths have no 'components', no need for glyphSet - ttPen = TTGlyphPen(glyphSet=None) - path.draw(ttPen) - glyph = ttPen.glyph() - assert not glyph.isComposite() - # compute glyph.xMin (glyfTable parameter unused for non composites) - glyph.recalcBounds(glyfTable=None) - return glyph - - -def _round_path( - path: pathops.Path, round: Callable[[float], float] = otRound -) -> pathops.Path: - rounded_path = pathops.Path() - for verb, points in path: - rounded_path.add(verb, *((round(p[0]), round(p[1])) for p in points)) - return rounded_path - - -def _simplify(path: pathops.Path, debugGlyphName: str) -> 
pathops.Path: - # skia-pathops has a bug where it sometimes fails to simplify paths when there - # are float coordinates and control points are very close to one another. - # Rounding coordinates to integers works around the bug. - # Since we are going to round glyf coordinates later on anyway, here it is - # ok(-ish) to also round before simplify. Better than failing the whole process - # for the entire font. - # https://bugs.chromium.org/p/skia/issues/detail?id=11958 - # https://github.com/google/fonts/issues/3365 - # TODO(anthrotype): remove once this Skia bug is fixed - try: - return pathops.simplify(path, clockwise=path.clockwise) - except pathops.PathOpsError: - pass - - path = _round_path(path) - try: - path = pathops.simplify(path, clockwise=path.clockwise) - log.debug( - "skia-pathops failed to simplify '%s' with float coordinates, " - "but succeded using rounded integer coordinates", - debugGlyphName, - ) - return path - except pathops.PathOpsError as e: - if log.isEnabledFor(logging.DEBUG): - path.dump() - raise RemoveOverlapsError( - f"Failed to remove overlaps from glyph {debugGlyphName!r}" - ) from e - - raise AssertionError("Unreachable") - - -def removeTTGlyphOverlaps( - glyphName: str, - glyphSet: _TTGlyphMapping, - glyfTable: _g_l_y_f.table__g_l_y_f, - hmtxTable: _h_m_t_x.table__h_m_t_x, - removeHinting: bool = True, -) -> bool: - glyph = glyfTable[glyphName] - # decompose composite glyphs only if components overlap each other - if ( - glyph.numberOfContours > 0 - or glyph.isComposite() - and componentsOverlap(glyph, glyphSet) - ): - path = skPathFromGlyph(glyphName, glyphSet) - - # remove overlaps - path2 = _simplify(path, glyphName) - - # replace TTGlyph if simplified path is different (ignoring contour order) - if {tuple(c) for c in path.contours} != {tuple(c) for c in path2.contours}: - glyfTable[glyphName] = glyph = ttfGlyphFromSkPath(path2) - # simplified glyph is always unhinted - assert not glyph.program - # also ensure hmtx LSB == glyph.xMin so glyph origin is at x=0 - width, lsb = hmtxTable[glyphName] - if lsb != glyph.xMin: - hmtxTable[glyphName] = (width, glyph.xMin) - return True - - if removeHinting: - glyph.removeHinting() - return False - - -def removeOverlaps( - font: ttFont.TTFont, - glyphNames: Optional[Iterable[str]] = None, - removeHinting: bool = True, - ignoreErrors=False, -) -> None: - """Simplify glyphs in TTFont by merging overlapping contours. - - Overlapping components are first decomposed to simple contours, then merged. - - Currently this only works with TrueType fonts with 'glyf' table. - Raises NotImplementedError if 'glyf' table is absent. - - Note that removing overlaps invalidates the hinting. By default we drop hinting - from all glyphs whether or not overlaps are removed from a given one, as it would - look weird if only some glyphs are left (un)hinted. - - Args: - font: input TTFont object, modified in place. - glyphNames: optional iterable of glyph names (str) to remove overlaps from. - By default, all glyphs in the font are processed. - removeHinting (bool): set to False to keep hinting for unmodified glyphs. - ignoreErrors (bool): set to True to ignore errors while removing overlaps, - thus keeping the tricky glyphs unchanged (fonttools/fonttools#2363). 
- """ - try: - glyfTable = font["glyf"] - except KeyError: - raise NotImplementedError("removeOverlaps currently only works with TTFs") - - hmtxTable = font["hmtx"] - # wraps the underlying glyf Glyphs, takes care of interfacing with drawing pens - glyphSet = font.getGlyphSet() - - if glyphNames is None: - glyphNames = font.getGlyphOrder() - - # process all simple glyphs first, then composites with increasing component depth, - # so that by the time we test for component intersections the respective base glyphs - # have already been simplified - glyphNames = sorted( - glyphNames, - key=lambda name: ( - glyfTable[name].getCompositeMaxpValues(glyfTable).maxComponentDepth - if glyfTable[name].isComposite() - else 0, - name, - ), - ) - modified = set() - for glyphName in glyphNames: - try: - if removeTTGlyphOverlaps( - glyphName, glyphSet, glyfTable, hmtxTable, removeHinting - ): - modified.add(glyphName) - except RemoveOverlapsError: - if not ignoreErrors: - raise - log.error("Failed to remove overlaps for '%s'", glyphName) - - log.debug("Removed overlaps for %s glyphs:\n%s", len(modified), " ".join(modified)) - - -def main(args=None): - import sys - - if args is None: - args = sys.argv[1:] - - if len(args) < 2: - print( - f"usage: fonttools ttLib.removeOverlaps INPUT.ttf OUTPUT.ttf [GLYPHS ...]" - ) - sys.exit(1) - - src = args[0] - dst = args[1] - glyphNames = args[2:] or None - - with ttFont.TTFont(src) as f: - removeOverlaps(f, glyphNames) - f.save(dst) - - -if __name__ == "__main__": - main() diff --git a/spaces/cihyFjudo/fairness-paper-search/Download Nero 9 Full Version For Free.md b/spaces/cihyFjudo/fairness-paper-search/Download Nero 9 Full Version For Free.md deleted file mode 100644 index af1335c66f0280ba9c8a3a5fcba7d76dd90e0866..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Download Nero 9 Full Version For Free.md +++ /dev/null @@ -1,11 +0,0 @@ -
    -

    You can use these disc burning and copying features for an unlimited time, absolutely FREE of cost. The installer is 55MB in size and works well on Windows XP, Vista and Windows 7. If you want the complete set of features and functionality, you will need to upgrade to the full version of Nero 9.

    -

    Download Nero 9 Full Version For Free


    Download File ———>>> https://tinurli.com/2uwiWk



    -

    We weren't shocked that the free version of Nero lacked almost all the functionality of the pay version, but we were a tad surprised at just how limited it was compared to other free programs such as DeepBurner Free, ImgBurn, and InfraRecorder Portable.

    -

    The version is 9.4.12.708b, Nero 9 Lite; Nero 9 is now available as a free version! Nero Lite is a custom-created installer for the main Nero Burning ROM applications. It also comes with a lot of custom unattended switches so it can also be installed silently. Nero Lite is an installer that includes applications that are only necessary for...

    -

    Many people use Nero only for ripping and burning, not taking full advantage of all of its features. On a Windows PC, the same can be done with the standard CD burner, which now works well, unlike its older versions. If you need an application to burn media, stay with what you have already bought. If you need a powerful media suite, buy Nero.

    -

    -

    Photoshop partisans may disagree, but The Gimp has served us awfully well in our screen-shot grabbing and prepping operations. What's more, this free, cross-platform-friendly image-editing application can bui

    -

    A very useful research tool, Net Snippets grabs Web pages, text, and images from document formats such as e-mail, Microsoft Word and PDF. The free version allows individuals to collect, organize and

    aaccfb2cb3
    -
    -
    \ No newline at end of file diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/aiohttp/web_app.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/aiohttp/web_app.py deleted file mode 100644 index 8fd4471d3af019c6e3bd01fcb9838ee99636238e..0000000000000000000000000000000000000000 --- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/aiohttp/web_app.py +++ /dev/null @@ -1,557 +0,0 @@ -import asyncio -import logging -import warnings -from functools import partial, update_wrapper -from typing import ( - TYPE_CHECKING, - Any, - AsyncIterator, - Awaitable, - Callable, - Dict, - Iterable, - Iterator, - List, - Mapping, - MutableMapping, - Optional, - Sequence, - Tuple, - Type, - Union, - cast, -) - -from aiosignal import Signal -from frozenlist import FrozenList - -from . import hdrs -from .abc import ( - AbstractAccessLogger, - AbstractMatchInfo, - AbstractRouter, - AbstractStreamWriter, -) -from .helpers import DEBUG -from .http_parser import RawRequestMessage -from .log import web_logger -from .streams import StreamReader -from .web_log import AccessLogger -from .web_middlewares import _fix_request_current_app -from .web_protocol import RequestHandler -from .web_request import Request -from .web_response import StreamResponse -from .web_routedef import AbstractRouteDef -from .web_server import Server -from .web_urldispatcher import ( - AbstractResource, - AbstractRoute, - Domain, - MaskDomain, - MatchedSubAppResource, - PrefixedSubAppResource, - UrlDispatcher, -) - -__all__ = ("Application", "CleanupError") - - -if TYPE_CHECKING: # pragma: no cover - from .typedefs import Handler - - _AppSignal = Signal[Callable[["Application"], Awaitable[None]]] - _RespPrepareSignal = Signal[Callable[[Request, StreamResponse], Awaitable[None]]] - _Middleware = Union[ - Callable[[Request, Handler], Awaitable[StreamResponse]], - Callable[["Application", Handler], Awaitable[Handler]], # old-style - ] - _Middlewares = FrozenList[_Middleware] - _MiddlewaresHandlers = Optional[Sequence[Tuple[_Middleware, bool]]] - _Subapps = List["Application"] -else: - # No type checker mode, skip types - _AppSignal = Signal - _RespPrepareSignal = Signal - _Middleware = Callable - _Middlewares = FrozenList - _MiddlewaresHandlers = Optional[Sequence] - _Subapps = List - - -class Application(MutableMapping[str, Any]): - ATTRS = frozenset( - [ - "logger", - "_debug", - "_router", - "_loop", - "_handler_args", - "_middlewares", - "_middlewares_handlers", - "_run_middlewares", - "_state", - "_frozen", - "_pre_frozen", - "_subapps", - "_on_response_prepare", - "_on_startup", - "_on_shutdown", - "_on_cleanup", - "_client_max_size", - "_cleanup_ctx", - ] - ) - - def __init__( - self, - *, - logger: logging.Logger = web_logger, - router: Optional[UrlDispatcher] = None, - middlewares: Iterable[_Middleware] = (), - handler_args: Optional[Mapping[str, Any]] = None, - client_max_size: int = 1024**2, - loop: Optional[asyncio.AbstractEventLoop] = None, - debug: Any = ..., # mypy doesn't support ellipsis - ) -> None: - if router is None: - router = UrlDispatcher() - else: - warnings.warn( - "router argument is deprecated", DeprecationWarning, stacklevel=2 - ) - assert isinstance(router, AbstractRouter), router - - if loop is not None: - warnings.warn( - "loop argument is deprecated", DeprecationWarning, stacklevel=2 - ) - - if debug is not ...: - warnings.warn( - "debug argument is deprecated", DeprecationWarning, stacklevel=2 - ) - self._debug = debug - self._router: 
UrlDispatcher = router - self._loop = loop - self._handler_args = handler_args - self.logger = logger - - self._middlewares: _Middlewares = FrozenList(middlewares) - - # initialized on freezing - self._middlewares_handlers: _MiddlewaresHandlers = None - # initialized on freezing - self._run_middlewares: Optional[bool] = None - - self._state: Dict[str, Any] = {} - self._frozen = False - self._pre_frozen = False - self._subapps: _Subapps = [] - - self._on_response_prepare: _RespPrepareSignal = Signal(self) - self._on_startup: _AppSignal = Signal(self) - self._on_shutdown: _AppSignal = Signal(self) - self._on_cleanup: _AppSignal = Signal(self) - self._cleanup_ctx = CleanupContext() - self._on_startup.append(self._cleanup_ctx._on_startup) - self._on_cleanup.append(self._cleanup_ctx._on_cleanup) - self._client_max_size = client_max_size - - def __init_subclass__(cls: Type["Application"]) -> None: - warnings.warn( - "Inheritance class {} from web.Application " - "is discouraged".format(cls.__name__), - DeprecationWarning, - stacklevel=2, - ) - - if DEBUG: # pragma: no cover - - def __setattr__(self, name: str, val: Any) -> None: - if name not in self.ATTRS: - warnings.warn( - "Setting custom web.Application.{} attribute " - "is discouraged".format(name), - DeprecationWarning, - stacklevel=2, - ) - super().__setattr__(name, val) - - # MutableMapping API - - def __eq__(self, other: object) -> bool: - return self is other - - def __getitem__(self, key: str) -> Any: - return self._state[key] - - def _check_frozen(self) -> None: - if self._frozen: - warnings.warn( - "Changing state of started or joined " "application is deprecated", - DeprecationWarning, - stacklevel=3, - ) - - def __setitem__(self, key: str, value: Any) -> None: - self._check_frozen() - self._state[key] = value - - def __delitem__(self, key: str) -> None: - self._check_frozen() - del self._state[key] - - def __len__(self) -> int: - return len(self._state) - - def __iter__(self) -> Iterator[str]: - return iter(self._state) - - ######## - @property - def loop(self) -> asyncio.AbstractEventLoop: - # Technically the loop can be None - # but we mask it by explicit type cast - # to provide more convinient type annotation - warnings.warn("loop property is deprecated", DeprecationWarning, stacklevel=2) - return cast(asyncio.AbstractEventLoop, self._loop) - - def _set_loop(self, loop: Optional[asyncio.AbstractEventLoop]) -> None: - if loop is None: - loop = asyncio.get_event_loop() - if self._loop is not None and self._loop is not loop: - raise RuntimeError( - "web.Application instance initialized with different loop" - ) - - self._loop = loop - - # set loop debug - if self._debug is ...: - self._debug = loop.get_debug() - - # set loop to sub applications - for subapp in self._subapps: - subapp._set_loop(loop) - - @property - def pre_frozen(self) -> bool: - return self._pre_frozen - - def pre_freeze(self) -> None: - if self._pre_frozen: - return - - self._pre_frozen = True - self._middlewares.freeze() - self._router.freeze() - self._on_response_prepare.freeze() - self._cleanup_ctx.freeze() - self._on_startup.freeze() - self._on_shutdown.freeze() - self._on_cleanup.freeze() - self._middlewares_handlers = tuple(self._prepare_middleware()) - - # If current app and any subapp do not have middlewares avoid run all - # of the code footprint that it implies, which have a middleware - # hardcoded per app that sets up the current_app attribute. 
If no - # middlewares are configured the handler will receive the proper - # current_app without needing all of this code. - self._run_middlewares = True if self.middlewares else False - - for subapp in self._subapps: - subapp.pre_freeze() - self._run_middlewares = self._run_middlewares or subapp._run_middlewares - - @property - def frozen(self) -> bool: - return self._frozen - - def freeze(self) -> None: - if self._frozen: - return - - self.pre_freeze() - self._frozen = True - for subapp in self._subapps: - subapp.freeze() - - @property - def debug(self) -> bool: - warnings.warn("debug property is deprecated", DeprecationWarning, stacklevel=2) - return self._debug # type: ignore[no-any-return] - - def _reg_subapp_signals(self, subapp: "Application") -> None: - def reg_handler(signame: str) -> None: - subsig = getattr(subapp, signame) - - async def handler(app: "Application") -> None: - await subsig.send(subapp) - - appsig = getattr(self, signame) - appsig.append(handler) - - reg_handler("on_startup") - reg_handler("on_shutdown") - reg_handler("on_cleanup") - - def add_subapp(self, prefix: str, subapp: "Application") -> AbstractResource: - if not isinstance(prefix, str): - raise TypeError("Prefix must be str") - prefix = prefix.rstrip("/") - if not prefix: - raise ValueError("Prefix cannot be empty") - factory = partial(PrefixedSubAppResource, prefix, subapp) - return self._add_subapp(factory, subapp) - - def _add_subapp( - self, resource_factory: Callable[[], AbstractResource], subapp: "Application" - ) -> AbstractResource: - if self.frozen: - raise RuntimeError("Cannot add sub application to frozen application") - if subapp.frozen: - raise RuntimeError("Cannot add frozen application") - resource = resource_factory() - self.router.register_resource(resource) - self._reg_subapp_signals(subapp) - self._subapps.append(subapp) - subapp.pre_freeze() - if self._loop is not None: - subapp._set_loop(self._loop) - return resource - - def add_domain(self, domain: str, subapp: "Application") -> AbstractResource: - if not isinstance(domain, str): - raise TypeError("Domain must be str") - elif "*" in domain: - rule: Domain = MaskDomain(domain) - else: - rule = Domain(domain) - factory = partial(MatchedSubAppResource, rule, subapp) - return self._add_subapp(factory, subapp) - - def add_routes(self, routes: Iterable[AbstractRouteDef]) -> List[AbstractRoute]: - return self.router.add_routes(routes) - - @property - def on_response_prepare(self) -> _RespPrepareSignal: - return self._on_response_prepare - - @property - def on_startup(self) -> _AppSignal: - return self._on_startup - - @property - def on_shutdown(self) -> _AppSignal: - return self._on_shutdown - - @property - def on_cleanup(self) -> _AppSignal: - return self._on_cleanup - - @property - def cleanup_ctx(self) -> "CleanupContext": - return self._cleanup_ctx - - @property - def router(self) -> UrlDispatcher: - return self._router - - @property - def middlewares(self) -> _Middlewares: - return self._middlewares - - def _make_handler( - self, - *, - loop: Optional[asyncio.AbstractEventLoop] = None, - access_log_class: Type[AbstractAccessLogger] = AccessLogger, - **kwargs: Any, - ) -> Server: - - if not issubclass(access_log_class, AbstractAccessLogger): - raise TypeError( - "access_log_class must be subclass of " - "aiohttp.abc.AbstractAccessLogger, got {}".format(access_log_class) - ) - - self._set_loop(loop) - self.freeze() - - kwargs["debug"] = self._debug - kwargs["access_log_class"] = access_log_class - if self._handler_args: - for k, v in 
self._handler_args.items(): - kwargs[k] = v - - return Server( - self._handle, # type: ignore[arg-type] - request_factory=self._make_request, - loop=self._loop, - **kwargs, - ) - - def make_handler( - self, - *, - loop: Optional[asyncio.AbstractEventLoop] = None, - access_log_class: Type[AbstractAccessLogger] = AccessLogger, - **kwargs: Any, - ) -> Server: - - warnings.warn( - "Application.make_handler(...) is deprecated, " "use AppRunner API instead", - DeprecationWarning, - stacklevel=2, - ) - - return self._make_handler( - loop=loop, access_log_class=access_log_class, **kwargs - ) - - async def startup(self) -> None: - """Causes on_startup signal - - Should be called in the event loop along with the request handler. - """ - await self.on_startup.send(self) - - async def shutdown(self) -> None: - """Causes on_shutdown signal - - Should be called before cleanup() - """ - await self.on_shutdown.send(self) - - async def cleanup(self) -> None: - """Causes on_cleanup signal - - Should be called after shutdown() - """ - if self.on_cleanup.frozen: - await self.on_cleanup.send(self) - else: - # If an exception occurs in startup, ensure cleanup contexts are completed. - await self._cleanup_ctx._on_cleanup(self) - - def _make_request( - self, - message: RawRequestMessage, - payload: StreamReader, - protocol: RequestHandler, - writer: AbstractStreamWriter, - task: "asyncio.Task[None]", - _cls: Type[Request] = Request, - ) -> Request: - return _cls( - message, - payload, - protocol, - writer, - task, - self._loop, - client_max_size=self._client_max_size, - ) - - def _prepare_middleware(self) -> Iterator[Tuple[_Middleware, bool]]: - for m in reversed(self._middlewares): - if getattr(m, "__middleware_version__", None) == 1: - yield m, True - else: - warnings.warn( - 'old-style middleware "{!r}" deprecated, ' "see #2252".format(m), - DeprecationWarning, - stacklevel=2, - ) - yield m, False - - yield _fix_request_current_app(self), True - - async def _handle(self, request: Request) -> StreamResponse: - loop = asyncio.get_event_loop() - debug = loop.get_debug() - match_info = await self._router.resolve(request) - if debug: # pragma: no cover - if not isinstance(match_info, AbstractMatchInfo): - raise TypeError( - "match_info should be AbstractMatchInfo " - "instance, not {!r}".format(match_info) - ) - match_info.add_app(self) - - match_info.freeze() - - resp = None - request._match_info = match_info - expect = request.headers.get(hdrs.EXPECT) - if expect: - resp = await match_info.expect_handler(request) - await request.writer.drain() - - if resp is None: - handler = match_info.handler - - if self._run_middlewares: - for app in match_info.apps[::-1]: - for m, new_style in app._middlewares_handlers: # type: ignore[union-attr] # noqa - if new_style: - handler = update_wrapper( - partial(m, handler=handler), handler - ) - else: - handler = await m(app, handler) # type: ignore[arg-type] - - resp = await handler(request) - - return resp - - def __call__(self) -> "Application": - """gunicorn compatibility""" - return self - - def __repr__(self) -> str: - return f"" - - def __bool__(self) -> bool: - return True - - -class CleanupError(RuntimeError): - @property - def exceptions(self) -> List[BaseException]: - return cast(List[BaseException], self.args[1]) - - -if TYPE_CHECKING: # pragma: no cover - _CleanupContextBase = FrozenList[Callable[[Application], AsyncIterator[None]]] -else: - _CleanupContextBase = FrozenList - - -class CleanupContext(_CleanupContextBase): - def __init__(self) -> None: - 
super().__init__() - self._exits: List[AsyncIterator[None]] = [] - - async def _on_startup(self, app: Application) -> None: - for cb in self: - it = cb(app).__aiter__() - await it.__anext__() - self._exits.append(it) - - async def _on_cleanup(self, app: Application) -> None: - errors = [] - for it in reversed(self._exits): - try: - await it.__anext__() - except StopAsyncIteration: - pass - except Exception as exc: - errors.append(exc) - else: - errors.append(RuntimeError(f"{it!r} has more than one 'yield'")) - if errors: - if len(errors) == 1: - raise errors[0] - else: - raise CleanupError("Multiple errors on cleanup stage", errors) diff --git a/spaces/cncn102/bingo1/src/lib/hooks/use-bing.ts b/spaces/cncn102/bingo1/src/lib/hooks/use-bing.ts deleted file mode 100644 index dcdb1667ced0cba299b0825c0e91c4732411308c..0000000000000000000000000000000000000000 --- a/spaces/cncn102/bingo1/src/lib/hooks/use-bing.ts +++ /dev/null @@ -1,173 +0,0 @@ -'use client' - -import { useState, useCallback, useEffect, useMemo } from 'react' -import { useAtom, useAtomValue } from 'jotai' -import { chatFamily, bingConversationStyleAtom, GreetMessages, hashAtom, voiceAtom } from '@/state' -import { setConversationMessages } from './chat-history' -import { ChatMessageModel, BotId, FileItem } from '@/lib/bots/bing/types' -import { nanoid } from '../utils' -import { TTS } from '../bots/bing/tts' - -export function useBing(botId: BotId = 'bing') { - const chatAtom = useMemo(() => chatFamily({ botId, page: 'singleton' }), [botId]) - const [enableTTS] = useAtom(voiceAtom) - const speaker = useMemo(() => new TTS(), []) - const [hash, setHash] = useAtom(hashAtom) - const bingConversationStyle = useAtomValue(bingConversationStyleAtom) - const [chatState, setChatState] = useAtom(chatAtom) - const [input, setInput] = useState('') - const [attachmentList, setAttachmentList] = useState([]) - - const updateMessage = useCallback( - (messageId: string, updater: (message: ChatMessageModel) => void) => { - setChatState((draft) => { - const message = draft.messages.find((m) => m.id === messageId) - if (message) { - updater(message) - } - }) - }, - [setChatState], - ) - - const sendMessage = useCallback( - async (input: string, options = {}) => { - const botMessageId = nanoid() - const imageUrl = attachmentList?.[0]?.status === 'loaded' ? attachmentList[0].url : undefined - setChatState((draft) => { - const text = imageUrl ? `${input}\n\n![image](${imageUrl})` : input - draft.messages.push({ id: nanoid(), text, author: 'user' }, { id: botMessageId, text: '', author: 'bot' }) - setAttachmentList([]) - }) - const abortController = new AbortController() - setChatState((draft) => { - draft.generatingMessageId = botMessageId - draft.abortController = abortController - }) - speaker.reset() - await chatState.bot.sendMessage({ - prompt: input, - imageUrl: /\?bcid=([^&]+)/.test(imageUrl ?? '') ? 
`https://www.bing.com/images/blob?bcid=${RegExp.$1}` : imageUrl, - options: { - ...options, - bingConversationStyle, - }, - signal: abortController.signal, - onEvent(event) { - if (event.type === 'UPDATE_ANSWER') { - updateMessage(botMessageId, (message) => { - if (event.data.text.length > message.text.length) { - message.text = event.data.text - } - - if (event.data.spokenText && enableTTS) { - speaker.speak(event.data.spokenText) - } - - message.throttling = event.data.throttling || message.throttling - message.sourceAttributions = event.data.sourceAttributions || message.sourceAttributions - message.suggestedResponses = event.data.suggestedResponses || message.suggestedResponses - }) - } else if (event.type === 'ERROR') { - updateMessage(botMessageId, (message) => { - message.error = event.error - }) - setChatState((draft) => { - draft.abortController = undefined - draft.generatingMessageId = '' - }) - } else if (event.type === 'DONE') { - setChatState((draft) => { - draft.abortController = undefined - draft.generatingMessageId = '' - }) - } - }, - }) - }, - [botId, attachmentList, chatState.bot, setChatState, updateMessage], - ) - - const uploadImage = useCallback(async (imgUrl: string) => { - setAttachmentList([{ url: imgUrl, status: 'loading' }]) - const response = await chatState.bot.uploadImage(imgUrl, bingConversationStyle) - if (response?.blobId) { - setAttachmentList([{ url: `/api/blob?bcid=${response.blobId}`, status: 'loaded' }]) - } else { - setAttachmentList([{ url: imgUrl, status: 'error' }]) - } - }, [chatState.bot]) - - const resetConversation = useCallback(() => { - chatState.bot.resetConversation() - speaker.abort() - setChatState((draft) => { - draft.abortController = undefined - draft.generatingMessageId = '' - draft.messages = [{ author: 'bot', text: GreetMessages[Math.floor(GreetMessages.length * Math.random())], id: nanoid() }] - draft.conversationId = nanoid() - }) - }, [chatState.bot, setChatState]) - - const stopGenerating = useCallback(() => { - chatState.abortController?.abort() - if (chatState.generatingMessageId) { - updateMessage(chatState.generatingMessageId, (message) => { - if (!message.text && !message.error) { - message.text = 'Cancelled' - } - }) - } - setChatState((draft) => { - draft.generatingMessageId = '' - }) - }, [chatState.abortController, chatState.generatingMessageId, setChatState, updateMessage]) - - useEffect(() => { - if (chatState.messages.length) { - setConversationMessages(botId, chatState.conversationId, chatState.messages) - } - }, [botId, chatState.conversationId, chatState.messages]) - - useEffect(() => { - if (hash === 'reset') { - resetConversation() - setHash('') - } - }, [hash, setHash]) - - const chat = useMemo( - () => ({ - botId, - bot: chatState.bot, - isSpeaking: speaker.isSpeaking, - messages: chatState.messages, - sendMessage, - setInput, - input, - resetConversation, - generating: !!chatState.generatingMessageId, - stopGenerating, - uploadImage, - setAttachmentList, - attachmentList, - }), - [ - botId, - bingConversationStyle, - chatState.bot, - chatState.generatingMessageId, - chatState.messages, - speaker.isSpeaking, - setInput, - input, - setAttachmentList, - attachmentList, - resetConversation, - sendMessage, - stopGenerating, - ], - ) - - return chat -} diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/cbs_sei.h b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/cbs_sei.h deleted file mode 100644 index 1c327a4689e3a6f4ef0da6e0b0e8b2bd29d1a004..0000000000000000000000000000000000000000 --- 
a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/cbs_sei.h +++ /dev/null @@ -1,205 +0,0 @@ -/* - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -#ifndef AVCODEC_CBS_SEI_H -#define AVCODEC_CBS_SEI_H - -#include -#include - -#include "libavutil/buffer.h" - -#include "cbs.h" -#include "sei.h" - - -typedef struct SEIRawFillerPayload { - uint32_t payload_size; -} SEIRawFillerPayload; - -typedef struct SEIRawUserDataRegistered { - uint8_t itu_t_t35_country_code; - uint8_t itu_t_t35_country_code_extension_byte; - uint8_t *data; - AVBufferRef *data_ref; - size_t data_length; -} SEIRawUserDataRegistered; - -typedef struct SEIRawUserDataUnregistered { - uint8_t uuid_iso_iec_11578[16]; - uint8_t *data; - AVBufferRef *data_ref; - size_t data_length; -} SEIRawUserDataUnregistered; - -typedef struct SEIRawMasteringDisplayColourVolume { - uint16_t display_primaries_x[3]; - uint16_t display_primaries_y[3]; - uint16_t white_point_x; - uint16_t white_point_y; - uint32_t max_display_mastering_luminance; - uint32_t min_display_mastering_luminance; -} SEIRawMasteringDisplayColourVolume; - -typedef struct SEIRawContentLightLevelInfo { - uint16_t max_content_light_level; - uint16_t max_pic_average_light_level; -} SEIRawContentLightLevelInfo; - -typedef struct SEIRawAlternativeTransferCharacteristics { - uint8_t preferred_transfer_characteristics; -} SEIRawAlternativeTransferCharacteristics; - -typedef struct SEIRawAmbientViewingEnvironment { - uint32_t ambient_illuminance; - uint16_t ambient_light_x; - uint16_t ambient_light_y; -} SEIRawAmbientViewingEnvironment; - -typedef struct SEIRawMessage { - uint32_t payload_type; - uint32_t payload_size; - void *payload; - AVBufferRef *payload_ref; - uint8_t *extension_data; - AVBufferRef *extension_data_ref; - size_t extension_bit_length; -} SEIRawMessage; - -typedef struct SEIRawMessageList { - SEIRawMessage *messages; - int nb_messages; - int nb_messages_allocated; -} SEIRawMessageList; - - -typedef struct SEIMessageState { - // The type of the payload being written. - uint32_t payload_type; - // When reading, contains the size of the payload to allow finding the - // end of variable-length fields (such as user_data_payload_byte[]). - // (When writing, the size will be derived from the total number of - // bytes actually written.) - uint32_t payload_size; - // When writing, indicates that payload extension data is present so - // all extended fields must be written. May be updated by the writer - // to indicate that extended fields have been written, so the extension - // end bits must be written too. 
- uint8_t extension_present; -} SEIMessageState; - -struct GetBitContext; -struct PutBitContext; - -typedef int (*SEIMessageReadFunction)(CodedBitstreamContext *ctx, - struct GetBitContext *rw, - void *current, - SEIMessageState *sei); - -typedef int (*SEIMessageWriteFunction)(CodedBitstreamContext *ctx, - struct PutBitContext *rw, - void *current, - SEIMessageState *sei); - -typedef struct SEIMessageTypeDescriptor { - // Payload type for the message. (-1 in this field ends a list.) - int type; - // Valid in a prefix SEI NAL unit (always for H.264). - uint8_t prefix; - // Valid in a suffix SEI NAL unit (never for H.264). - uint8_t suffix; - // Size of the decomposed structure. - size_t size; - // Read bitstream into SEI message. - SEIMessageReadFunction read; - // Write bitstream from SEI message. - SEIMessageWriteFunction write; -} SEIMessageTypeDescriptor; - -// Macro for the read/write pair. The clumsy cast is needed because the -// current pointer is typed in all of the read/write functions but has to -// be void here to fit all cases. -#define SEI_MESSAGE_RW(codec, name) \ - .read = (SEIMessageReadFunction) cbs_ ## codec ## _read_ ## name, \ - .write = (SEIMessageWriteFunction)cbs_ ## codec ## _write_ ## name - -// End-of-list sentinel element. -#define SEI_MESSAGE_TYPE_END { .type = -1 } - - -/** - * Find the type descriptor for the given payload type. - * - * Returns NULL if the payload type is not known. - */ -const SEIMessageTypeDescriptor *ff_cbs_sei_find_type(CodedBitstreamContext *ctx, - int payload_type); - -/** - * Allocate a new payload for the given SEI message. - */ -int ff_cbs_sei_alloc_message_payload(SEIRawMessage *message, - const SEIMessageTypeDescriptor *desc); - -/** - * Allocate a new empty SEI message in a message list. - * - * The new message is in place nb_messages - 1. - */ -int ff_cbs_sei_list_add(SEIRawMessageList *list); - -/** - * Free all SEI messages in a message list. - */ -void ff_cbs_sei_free_message_list(SEIRawMessageList *list); - -/** - * Add an SEI message to an access unit. - * - * Will add to an existing SEI NAL unit, or create a new one for the - * message if there is no suitable existing one. - * - * Takes a new reference to payload_buf, if set. If payload_buf is - * NULL then the new message will not be reference counted. - */ -int ff_cbs_sei_add_message(CodedBitstreamContext *ctx, - CodedBitstreamFragment *au, - int prefix, - uint32_t payload_type, - void *payload_data, - AVBufferRef *payload_buf); - -/** - * Iterate over messages with the given payload type in an access unit. - * - * Set message to NULL in the first call. Returns 0 while more messages - * are available, AVERROR(ENOENT) when all messages have been found. - */ -int ff_cbs_sei_find_message(CodedBitstreamContext *ctx, - CodedBitstreamFragment *au, - uint32_t payload_type, - SEIRawMessage **message); - -/** - * Delete all messages with the given payload type from an access unit. 
- */ -void ff_cbs_sei_delete_message_type(CodedBitstreamContext *ctx, - CodedBitstreamFragment *au, - uint32_t payload_type); - -#endif /* AVCODEC_CBS_SEI_H */ diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/cdxl.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/cdxl.c deleted file mode 100644 index 6b3b3e85e0f7ea13ffca017ea83ad419da9f4a0b..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/cdxl.c +++ /dev/null @@ -1,348 +0,0 @@ -/* - * CDXL video decoder - * Copyright (c) 2011-2012 Paul B Mahol - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -/** - * @file - * Commodore CDXL video decoder - * @author Paul B Mahol - */ - -#define UNCHECKED_BITSTREAM_READER 1 - -#include "libavutil/intreadwrite.h" -#include "avcodec.h" -#include "bytestream.h" -#include "codec_internal.h" -#include "decode.h" -#include "get_bits.h" - -#define BIT_PLANAR 0x00 -#define CHUNKY 0x20 -#define BYTE_PLANAR 0x40 -#define BIT_LINE 0x80 -#define BYTE_LINE 0xC0 - -typedef struct CDXLVideoContext { - AVCodecContext *avctx; - int bpp; - int type; - int format; - int padded_bits; - const uint8_t *palette; - int palette_size; - const uint8_t *video; - int video_size; - uint8_t *new_video; - int new_video_size; -} CDXLVideoContext; - -static av_cold int cdxl_decode_init(AVCodecContext *avctx) -{ - CDXLVideoContext *c = avctx->priv_data; - - c->new_video_size = 0; - c->avctx = avctx; - - return 0; -} - -static void import_palette(CDXLVideoContext *c, uint32_t *new_palette) -{ - if (c->type == 1) { - for (int i = 0; i < c->palette_size / 2; i++) { - unsigned rgb = AV_RB16(&c->palette[i * 2]); - unsigned r = ((rgb >> 8) & 0xF) * 0x11; - unsigned g = ((rgb >> 4) & 0xF) * 0x11; - unsigned b = (rgb & 0xF) * 0x11; - AV_WN32(&new_palette[i], (0xFFU << 24) | (r << 16) | (g << 8) | b); - } - } else { - for (int i = 0; i < c->palette_size / 3; i++) { - unsigned rgb = AV_RB24(&c->palette[i * 3]); - AV_WN32(&new_palette[i], (0xFFU << 24) | rgb); - } - } -} - -static void bitplanar2chunky(CDXLVideoContext *c, int linesize, uint8_t *out) -{ - GetBitContext gb; - int x, y, plane; - - if (init_get_bits8(&gb, c->video, c->video_size) < 0) - return; - for (plane = 0; plane < c->bpp; plane++) { - for (y = 0; y < c->avctx->height; y++) { - for (x = 0; x < c->avctx->width; x++) - out[linesize * y + x] |= get_bits1(&gb) << plane; - skip_bits(&gb, c->padded_bits); - } - } -} - -static void bitline2chunky(CDXLVideoContext *c, int linesize, uint8_t *out) -{ - GetBitContext gb; - int x, y, plane; - - if (init_get_bits8(&gb, c->video, c->video_size) < 0) - return; - for (y = 0; y < c->avctx->height; y++) { - for (plane = 0; plane < c->bpp; plane++) { - for (x = 0; x < c->avctx->width; x++) - out[linesize * y + x] |= get_bits1(&gb) << plane; - 
skip_bits(&gb, c->padded_bits); - } - } -} - -static void chunky2chunky(CDXLVideoContext *c, int linesize, uint8_t *out) -{ - GetByteContext gb; - int y; - - bytestream2_init(&gb, c->video, c->video_size); - for (y = 0; y < c->avctx->height; y++) { - bytestream2_get_buffer(&gb, out + linesize * y, c->avctx->width * 3); - } -} - -static void import_format(CDXLVideoContext *c, int linesize, uint8_t *out) -{ - memset(out, 0, linesize * c->avctx->height); - - switch (c->format) { - case BIT_PLANAR: - bitplanar2chunky(c, linesize, out); - break; - case BIT_LINE: - bitline2chunky(c, linesize, out); - break; - case CHUNKY: - chunky2chunky(c, linesize, out); - break; - } -} - -static void cdxl_decode_rgb(CDXLVideoContext *c, AVFrame *frame) -{ - uint32_t *new_palette = (uint32_t *)frame->data[1]; - - memset(frame->data[1], 0, AVPALETTE_SIZE); - import_palette(c, new_palette); - import_format(c, frame->linesize[0], frame->data[0]); -} - -static void cdxl_decode_raw(CDXLVideoContext *c, AVFrame *frame) -{ - import_format(c, frame->linesize[0], frame->data[0]); -} - -static void cdxl_decode_ham6(CDXLVideoContext *c, AVFrame *frame) -{ - AVCodecContext *avctx = c->avctx; - uint32_t new_palette[16], r, g, b; - uint8_t *ptr, *out, index, op; - int x, y; - - ptr = c->new_video; - out = frame->data[0]; - - import_palette(c, new_palette); - import_format(c, avctx->width, c->new_video); - - for (y = 0; y < avctx->height; y++) { - r = new_palette[0] & 0xFF0000; - g = new_palette[0] & 0xFF00; - b = new_palette[0] & 0xFF; - for (x = 0; x < avctx->width; x++) { - index = *ptr++; - op = index >> 4; - index &= 15; - switch (op) { - case 0: - r = new_palette[index] & 0xFF0000; - g = new_palette[index] & 0xFF00; - b = new_palette[index] & 0xFF; - break; - case 1: - b = index * 0x11; - break; - case 2: - r = index * 0x11 << 16; - break; - case 3: - g = index * 0x11 << 8; - break; - } - AV_WL24(out + x * 3, r | g | b); - } - out += frame->linesize[0]; - } -} - -static void cdxl_decode_ham8(CDXLVideoContext *c, AVFrame *frame) -{ - AVCodecContext *avctx = c->avctx; - uint32_t new_palette[64], r, g, b; - uint8_t *ptr, *out, index, op; - int x, y; - - ptr = c->new_video; - out = frame->data[0]; - - import_palette(c, new_palette); - import_format(c, avctx->width, c->new_video); - - for (y = 0; y < avctx->height; y++) { - r = new_palette[0] & 0xFF0000; - g = new_palette[0] & 0xFF00; - b = new_palette[0] & 0xFF; - for (x = 0; x < avctx->width; x++) { - index = *ptr++; - op = index >> 6; - index &= 63; - switch (op) { - case 0: - r = new_palette[index] & 0xFF0000; - g = new_palette[index] & 0xFF00; - b = new_palette[index] & 0xFF; - break; - case 1: - b = (index << 2) | (b & 3); - break; - case 2: - r = (index << 18) | (r & (3 << 16)); - break; - case 3: - g = (index << 10) | (g & (3 << 8)); - break; - } - AV_WL24(out + x * 3, r | g | b); - } - out += frame->linesize[0]; - } -} - -static int cdxl_decode_frame(AVCodecContext *avctx, AVFrame *p, - int *got_frame, AVPacket *pkt) -{ - CDXLVideoContext *c = avctx->priv_data; - int ret, w, h, encoding, aligned_width, buf_size = pkt->size; - const uint8_t *buf = pkt->data; - - if (buf_size < 32) - return AVERROR_INVALIDDATA; - c->type = buf[0]; - encoding = buf[1] & 7; - c->format = buf[1] & 0xE0; - w = AV_RB16(&buf[14]); - h = AV_RB16(&buf[16]); - c->bpp = buf[19]; - c->palette_size = AV_RB16(&buf[20]); - c->palette = buf + 32; - c->video = c->palette + c->palette_size; - c->video_size = buf_size - c->palette_size - 32; - - if (c->type > 1) - return AVERROR_INVALIDDATA; - if 
(c->type == 1 && c->palette_size > 512) - return AVERROR_INVALIDDATA; - if (c->type == 0 && c->palette_size > 768) - return AVERROR_INVALIDDATA; - if (buf_size < c->palette_size + 32) - return AVERROR_INVALIDDATA; - if (c->bpp < 1) - return AVERROR_INVALIDDATA; - if (c->format != BIT_PLANAR && c->format != BIT_LINE && c->format != CHUNKY) { - avpriv_request_sample(avctx, "Pixel format 0x%0x", c->format); - return AVERROR_PATCHWELCOME; - } - - if ((ret = ff_set_dimensions(avctx, w, h)) < 0) - return ret; - - if (c->format == CHUNKY) - aligned_width = avctx->width; - else - aligned_width = FFALIGN(c->avctx->width, 16); - c->padded_bits = aligned_width - c->avctx->width; - if (c->video_size < aligned_width * avctx->height * (int64_t)c->bpp / 8) - return AVERROR_INVALIDDATA; - if (!encoding && c->palette_size && c->bpp <= 8 && c->format != CHUNKY) { - avctx->pix_fmt = AV_PIX_FMT_PAL8; - } else if (encoding == 1 && (c->bpp == 6 || c->bpp == 8) && c->format != CHUNKY) { - if (c->palette_size != (1 << (c->bpp - 1))) - return AVERROR_INVALIDDATA; - avctx->pix_fmt = AV_PIX_FMT_BGR24; - } else if (!encoding && c->bpp == 24 && c->format == CHUNKY && - !c->palette_size) { - avctx->pix_fmt = AV_PIX_FMT_RGB24; - } else { - avpriv_request_sample(avctx, "Encoding %d, bpp %d and format 0x%x", - encoding, c->bpp, c->format); - return AVERROR_PATCHWELCOME; - } - - if ((ret = ff_get_buffer(avctx, p, 0)) < 0) - return ret; - p->pict_type = AV_PICTURE_TYPE_I; - p->key_frame = 1; - - if (encoding) { - av_fast_padded_malloc(&c->new_video, &c->new_video_size, - h * w + AV_INPUT_BUFFER_PADDING_SIZE); - if (!c->new_video) - return AVERROR(ENOMEM); - if (c->bpp == 8) - cdxl_decode_ham8(c, p); - else - cdxl_decode_ham6(c, p); - } else if (avctx->pix_fmt == AV_PIX_FMT_PAL8) { - cdxl_decode_rgb(c, p); - } else { - cdxl_decode_raw(c, p); - } - *got_frame = 1; - - return buf_size; -} - -static av_cold int cdxl_decode_end(AVCodecContext *avctx) -{ - CDXLVideoContext *c = avctx->priv_data; - - av_freep(&c->new_video); - - return 0; -} - -const FFCodec ff_cdxl_decoder = { - .p.name = "cdxl", - CODEC_LONG_NAME("Commodore CDXL video"), - .p.type = AVMEDIA_TYPE_VIDEO, - .p.id = AV_CODEC_ID_CDXL, - .priv_data_size = sizeof(CDXLVideoContext), - .init = cdxl_decode_init, - .close = cdxl_decode_end, - FF_CODEC_DECODE_CB(cdxl_decode_frame), - .p.capabilities = AV_CODEC_CAP_DR1, -}; diff --git a/spaces/congsaPfin/Manga-OCR/logs/20 Minutes Till Dawn The Best Roguelite Survival Game for Mobile in 2023.md b/spaces/congsaPfin/Manga-OCR/logs/20 Minutes Till Dawn The Best Roguelite Survival Game for Mobile in 2023.md deleted file mode 100644 index b30a33931b38e196a9e167df32d5a85ddb768e9f..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/20 Minutes Till Dawn The Best Roguelite Survival Game for Mobile in 2023.md +++ /dev/null @@ -1,73 +0,0 @@ -
    -

    20 Minutes Till Dawn: A Roguelike Survival Game for Mobile Devices

    -

    Do you love roguelike games, where you have to survive against endless waves of enemies with random and unique upgrades? Do you enjoy shoot 'em up games, where you have to blast your way through hordes of monsters with powerful weapons? If you answered yes to both questions, then you will love 20 Minutes Till Dawn, a roguelike survival game for mobile devices that combines both genres in a thrilling and challenging way.

    -

    20 Minutes Till Dawn is a game where you have to survive the onslaught of an endless horde of monsters for 20 minutes. You can choose from a variety of characters and weapons, each with their own bonuses and abilities. You can also customize your build with different upgrades that you can pick from every time you level up. The game has three different game modes, multiple difficulty levels, and stunning graphics and sound effects. The game is free to download and play on Android and iOS devices, so you can enjoy it anytime and anywhere.

    -

    20 minutes till dawn download mobile


Download: https://urlca.com/2uO9DO



    -

    In this article, we will tell you everything you need to know about 20 Minutes Till Dawn, including how to create unique builds every run, how to select your hero, what are the main features of the game, how the game was developed, and how to download it. We will also answer some frequently asked questions about the game at the end. So, let's get started!

    -

    Unique Builds Every Run

    -

    One of the most exciting aspects of 20 Minutes Till Dawn is that you can create a unique and overpowered build every run. The game offers over 80 different upgrades to choose from, which can affect your character's stats, abilities, weapons, and more. For example, you can become a fire wizard who ignites monsters with every shot of your shotgun, or an agile ninja who controls magic knives to pierce your enemies. The possibilities are endless!

    -

    The upgrades are randomly generated every time you level up during a round. You can choose one of four options that are presented to you. Some upgrades are more rare and powerful than others, so you have to be strategic about what you pick. You can also find treasure chests that contain special upgrades after defeating boss monsters.

    -

    The upgrades you get during a round are temporary and only last until the end of the session. However, you can also use the gems that you collect from defeated monsters to unlock permanent upgrades in the main menu. These include new characters, weapons, and runes that can enhance your gameplay experience.

    -

    Select Your Hero

    -

    Another important aspect of 20 Minutes Till Dawn is that you can select your hero from a wide cast of characters and weapons. Each character has different bonuses and abilities that can affect your gameplay style. For example, some characters have more health or speed than others, or have special skills that can help them survive longer or deal more damage.

    -

    You can also choose your starting weapon from a variety of options, such as pistols, shotguns, rifles, bows, swords, axes, hammers, and more. Each weapon has different stats and effects that can suit different situations and preferences. For example, some weapons have more range or accuracy than others, or have special effects that can stun or freeze enemies.

    -

You can unlock more characters and weapons by spending gems in the main menu. Some characters and weapons are more expensive than others, so you have to save up enough gems to get them. You can also get some characters and weapons for free by completing certain achievements or challenges.

    Features

    -

    20 Minutes Till Dawn is not just a simple roguelike survival game. It also has many features that make it stand out from other games in the genre. Here are some of the main features of the game that you can enjoy:

    -
      -
    • Three game modes: You can choose from three different game modes to play: Normal, Hardcore, and Endless. Normal mode is the default mode, where you have to survive for 20 minutes and defeat the final boss. Hardcore mode is a more challenging mode, where you have only one life and no checkpoints. Endless mode is a mode where you can play as long as you want, but the difficulty increases over time.
    • -
    • Multiple difficulty levels: You can also adjust the difficulty level of the game according to your preference and skill level. You can choose from Easy, Normal, Hard, and Insane. The higher the difficulty level, the more enemies and obstacles you will encounter, and the less health and ammo you will have.
    • -
• Stunning graphics and sound effects: The game has amazing graphics and sound effects that create an immersive and thrilling atmosphere. The game uses a pixel art style that gives it a retro and nostalgic feel, but also adds some modern touches and details. It also has dynamic lighting and shadows that enhance the visual quality, a catchy and energetic soundtrack that matches the fast-paced action, and realistic, satisfying sound effects for the weapons and enemies.
    • -
    -

    Development

    -

    20 Minutes Till Dawn is a game that was developed and marketed by flanne and Erabit Studios, two independent game developers from South Korea. The game was released on June 1, 2023, for Android and iOS devices. The game was inspired by other roguelike and shoot 'em up games, such as Enter the Gungeon, Nuclear Throne, and Binding of Isaac.

    -


    -

    The developers wanted to create a game that was easy to pick up and play, but also challenging and rewarding. They also wanted to create a game that had a lot of replay value and variety, with different characters, weapons, upgrades, enemies, and levels. They also wanted to create a game that was fun and engaging for both casual and hardcore gamers.

    -

    The developers used Unity as their main engine for developing the game. They also used Photoshop for creating the pixel art graphics, Audacity for editing the sound effects, and FL Studio for composing the music. The developers spent about six months working on the game, from concept to release. They also received feedback and suggestions from beta testers and early access players.

    -

    Download

    -

    If you are interested in playing 20 Minutes Till Dawn, you can download it for free on your Android or iOS device. The game does not require any special permissions or in-app purchases to play. However, you can support the developers by watching ads or donating via PayPal or Patreon.

    -

    To download the game on your Android device, you can visit the Google Play Store link below:

    -

    20 Minutes Till Dawn - Google Play Store

    -

    To download the game on your iOS device, you can visit the App Store link below:

    -

    20 Minutes Till Dawn - App Store

    -

    Conclusion

    -

    20 Minutes Till Dawn is a roguelike survival game for mobile devices that combines elements of shoot 'em up games. You have to survive for 20 minutes against an endless horde of monsters with random and unique upgrades. You can choose from a variety of characters and weapons that offer different gameplay experiences. The game has three game modes, multiple difficulty levels, stunning graphics and sound effects, and more. The game is free to download and play on Android and iOS devices.

    -

    If you are looking for a fun and challenging game that will keep you entertained for hours, then you should definitely try 20 Minutes Till Dawn. It is one of the best roguelike survival games on mobile devices right now. You will not regret it!

    -

    FAQs

    -
      -
    • Q: How do I save my progress in the game?
    • -
    • A: The game automatically saves your progress at the end of each round or when you exit the game. You can resume your progress from the main menu by tapping on Continue.
    • -
    • Q: How do I unlock new characters and weapons in the game?
    • -
    • A: You can unlock new characters and weapons by spending gems in the main menu. Gems are earned by defeating monsters during a round or by watching ads. Some characters and weapons are also unlocked by completing certain achievements or challenges.
    • -
    • Q: How do I use the runes in the game?
    • -
    • A: Runes are special items that can enhance your character or weapon with different effects. You can equip up to three runes at a time in the main menu. You can unlock new runes by spending gems or by finding them in treasure chests during a round.
    • -
    • Q: What are the different types of enemies in the game?
    • -
    • A: There are over 50 different types of enemies in the game, each with their own behavior and attack patterns. Some of the common enemies are zombies, skeletons, spiders, bats, slimes, and ghosts. Some of the rare and powerful enemies are werewolves, vampires, dragons, demons, and bosses.
    • -
    • Q: How do I contact the developers or report a bug in the game?
    • -
    • A: You can contact the developers or report a bug in the game by sending an email to flanne.erabit@gmail.com or by visiting their official website or social media pages. You can also leave a review or a comment on the Google Play Store or App Store.
    • -

    401be4b1e0
    -
    -
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Combat Siege How to Dominate the Battlefield.md b/spaces/congsaPfin/Manga-OCR/logs/Combat Siege How to Dominate the Battlefield.md
deleted file mode 100644
index e994a1ec8856260b545e102dd0c18bd98e9c661d..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Combat Siege How to Dominate the Battlefield.md
+++ /dev/null
@@ -1,194 +0,0 @@
-
    -

    Combat Siege: A Real Time Strategy Game That You Can Play in Your Browser

    -

    If you are a fan of real time strategy games, you might want to check out Combat Siege, a new game from Studio Hoppe that you can play directly in your browser. Combat Siege is a further development of Panzer Rush, Desert Order, Strategy Combat, and Base Attack Force, and it is set in the last quarter of the last century, i.e. in the 70s, 80s, and 90s. In this game, you can build your base, train your army, and fight against other players in a dynamic and realistic war scenario. In this article, we will give you an overview of what Combat Siege is, how to play it, and how to review it.

    -

    combat siege


Download: https://urlca.com/2uOaqQ



    -

    What is Combat Siege?

    -

    Combat Siege is a real time strategy game that you can play directly in your browser without any installation or download. It is developed by Studio Hoppe, a German company that specializes in browser-based strategy games. You can access the game from this link: [1](https://www.combatsiege.com/).

    -

    The main features of Combat Siege

    -

    Combat Siege has many features that make it an exciting and challenging strategy game. Some of the main features are:

    -
      -
    • You can choose from over 100 different units, such as tanks, helicopters, planes, trains, ships, and special units. Each unit has its own strengths, weaknesses, and abilities.
    • -
    • You can build your base with various buildings, such as factories, barracks, power plants, defense towers, radar stations, airports, harbors, and more. You can also upgrade your buildings to improve their efficiency and capacity.
    • -
    • You can attack and defend against other players in real time. You can use various tactics and strategies to gain an advantage over your enemies. You can also join an alliance and cooperate with other players to conquer the map.
    • -
    • You can use sound, stealth, breaching charges, drones, flags, gold, and other elements to enhance your gameplay. You can also customize your units with different skins and colors.
    • -
    • You can participate in daily quests, events, tournaments, rankings, and rewards to earn resources, gold, medals, trophies, and other prizes.
    • -
    -

    The gameplay of Combat Siege

    -

    Combat Siege has a realistic and dynamic gameplay that simulates a war scenario. The game has a day-night cycle that affects the visibility and performance of your units. The game also has weather effects that can change the terrain and the conditions of the battle. The game has a map that consists of different regions with different resources and terrain types. You can explore the map and capture bases from other players or from neutral enemies. You can also trade resources with other players or with the market.

    -

    How to play Combat Siege?

    -

    If you are new to Combat Siege or to real time strategy games in general, you might need some guidance on how to play the game effectively. Here are some tips and tricks that will help you get started.

    -

    The beginner's guide to Combat Siege

    -

    When you start playing Combat Siege for the first time, you will be given a tutorial that will show you the basics of the game. You will also be given a beginner's quest series that will provide you with resources and gold to build your base and army. Here are some of the steps that you should follow as a beginner:

    -

    -

    How to build your base and army

    -

    The first thing you need to do is to build your base and army. You can do this by following these steps:

    -
      -
    1. Build a power plant to generate electricity for your base. You can upgrade your power plant to increase its output and efficiency.
    2. -
    3. Build a factory to produce units for your army. You can choose from different types of units, such as infantry, vehicles, aircraft, and naval units. You can upgrade your factory to unlock new units and increase their production speed.
    4. -
    5. Build a barracks to train your infantry units. You can choose from different types of infantry, such as riflemen, snipers, medics, engineers, and special forces. You can upgrade your barracks to improve their training speed and capacity.
    6. -
    7. Build a defense tower to protect your base from enemy attacks. You can choose from different types of defense towers, such as machine guns, rockets, lasers, and flamethrowers. You can upgrade your defense tower to increase its range and damage.
    8. -
    9. Build a radar station to detect enemy movements and positions. You can upgrade your radar station to increase its coverage and accuracy.
    10. -
    11. Build an airport to produce and deploy your aircraft units. You can choose from different types of aircraft, such as fighters, bombers, helicopters, and transport planes. You can upgrade your airport to increase its runway length and hangar space.
    12. -
    13. Build a harbor to produce and deploy your naval units. You can choose from different types of naval units, such as submarines, destroyers, cruisers, carriers, and battleships. You can upgrade your harbor to increase its dock size and depth.
    14. -
    -

    How to attack and defend

    -

    The next thing you need to do is to attack and defend against other players. You can do this by following these steps:

    -
      -
    1. Select the units that you want to use for your attack or defense. You can select multiple units by holding the CTRL key or by drawing a box around them.
    2. -
    3. Click on the map where you want to move or attack with your units. You can also use the arrow keys or the WASD keys to move the map.
    4. -
    5. Use the right mouse button or the space bar to cancel your orders or deselect your units.
    6. -
    7. Use the SHIFT key or the Q key to queue multiple orders for your units. For example, you can order your units to move to a location, then attack an enemy base, then retreat to another location.
    8. -
    9. Use the CTRL key or the E key to group your units into formations. For example, you can group your tanks into a wedge formation or your planes into a V formation.
    10. -
    11. Use the ALT key or the R key to rotate your formations. For example, you can rotate your wedge formation to face the enemy or your V formation to avoid anti-aircraft fire.
    12. -
    13. Use the TAB key or the F key to switch between different types of units in your selection. For example, you can switch between infantry, vehicles, aircraft, and naval units.
    14. -
    15. Use the number keys or the mouse wheel to zoom in or out on the map. You can also use the + and - keys or the Z and X keys to zoom in or out on the selected unit.
    16. -
    -

    How to join an alliance and cooperate with other players

    -

    The last thing you need to do is to join an alliance and cooperate with other players. You can do this by following these steps:

    -
      -
    1. Click on the alliance button on the top right corner of the screen. You will see a list of available alliances that you can join or create.
    2. -
    3. Select an alliance that suits your preferences and goals. You can see the name, description, flag, rank, members, and territory of each alliance.
    4. -
    5. Click on the join button or the create button depending on whether you want to join an existing alliance or create a new one.
    6. -
    7. If you join an existing alliance, you will have to wait for the approval of the alliance leader or one of the officers. If you create a new alliance, you will have to choose a name, description, flag, and password for your alliance.
    8. -
    9. Once you are part of an alliance, you will be able to chat with other members, share resources, request reinforcements, donate units, participate in wars, and conquer territories together.
    10. -
    -

    The tips and tricks for Combat Siege

    -

    If you want to improve your skills and performance in Combat Siege, you might want to learn some tips and tricks that will help you gain an edge over your enemies. Here are some of the tips and tricks that we have gathered for you:

    -

    How to use sound, stealth, and breaching charges

    -

    Sound, stealth, and breaching charges are some of the elements that can make a difference in your battles. You can use them to surprise, distract, or ambush your enemies. Here is how you can use them:

    -
      -
    • Sound: You can use sound to lure your enemies into traps or to divert their attention from your main attack. You can do this by using units that make noise, such as helicopters, planes, trains, or ships. You can also use units that have speakers, such as trucks or humvees. You can adjust the volume and the direction of the sound by using the sound button on the bottom right corner of the screen.
    • -
    • Stealth: You can use stealth to sneak past your enemies or to launch a surprise attack. You can do this by using units that have stealth capabilities, such as submarines, stealth bombers, or special forces. You can also use units that have camouflage, such as snipers or tanks. You can activate or deactivate the stealth mode by using the stealth button on the bottom right corner of the screen.
    • -
    • Breaching charges: You can use breaching charges to break through enemy defenses or to create openings for your units. You can do this by using units that have breaching charges, such as engineers or special forces. You can also use units that have explosives, such as rockets or bombs. You can place or detonate the breaching charges by using the breaching charge button on the bottom right corner of the screen.
    • -
    -

    How to use your drone and save your first drone

    -

    Your drone is a valuable asset that you can use to scout, spy, or support your units. You can use your drone to see what your enemies are doing, to mark targets for your units, or to drop supplies for your units. You can control your drone by using the drone button on the bottom right corner of the screen.

    -

    However, you should also be careful with your drone, as it can be shot down by enemy anti-aircraft fire or jammed by enemy radar stations. If you lose your drone, you will have to wait for a cooldown period before you can use it again. Therefore, you should try to save your first drone as much as possible. You can do this by following these tips:

    -
      -
    • Avoid flying over enemy bases or territories unless you have a good reason.
    • -
    • Avoid flying too low or too high, as you might be detected by enemy radar or anti-aircraft.
    • -
    • Avoid flying too close to enemy units, as you might be attacked by enemy fire.
    • -
    • Avoid flying over water or mountains, as you might lose signal or crash.
    • -
    • Avoid flying over friendly units, as you might interfere with their operations or visibility.
    • -
    • Return to your base or a safe location when your drone is low on fuel or health.
    • -
    -

    How to use special units and gold

    -

    Special units and gold are some of the elements that can give you an advantage over your enemies. You can use special units and gold to boost your army, to unlock new features, or to access premium content. Here is how you can use them:

    -
      -
    • Special units: Special units are units that have unique abilities or characteristics that make them stand out from other units. For example, some special units are faster, stronger, smarter, or more versatile than other units. Some examples of special units are commandos, spies, hackers, medevacs, nukes, satellites, and more. You can obtain special units by completing quests, events, tournaments, rankings, rewards, or by using gold.
    • -
    • Gold: Gold is the premium currency of Combat Siege that you can use to buy special units, resources, skins, speed, and other benefits. You can obtain gold by completing quests, events, tournaments, rankings, rewards, or by using real money.
    • -
    -

    How to upgrade your defense towers and power plants

    -

    Defense towers and power plants are some of the most important buildings in your base. You can use defense towers and power plants to protect your base from enemy attacks and to provide electricity for your base. You can upgrade your defense towers and power plants to improve their performance and efficiency. Here is how you can upgrade them:

    -
      -
    • Defense towers: Defense towers are buildings that can shoot at enemy units that come within their range. You can upgrade your defense towers to increase their range, damage, fire rate, and accuracy. You can also upgrade your defense towers to unlock new types of weapons, such as rockets, lasers, flamethrowers, and more. You can upgrade your defense towers by using resources and gold.
    • -
    • Power plants: Power plants are buildings that generate electricity for your base. You can upgrade your power plants to increase their output and efficiency. You can also upgrade your power plants to unlock new types of energy sources, such as solar, wind, nuclear, and more. You can upgrade your power plants by using resources and gold.
    • -
    -

    How to review Combat Siege?

    -

    If you want to share your opinion and feedback about Combat Siege with other players or with the developers, you might want to write a review of the game. You can write a review of Combat Siege by following these steps:

    -

    The pros and cons of Combat Siege

    -

    The first thing you need to do is to list the pros and cons of Combat Siege. The pros are the positive aspects of the game that you like or enjoy. The cons are the negative aspects of the game that you dislike or hate. Here are some examples of pros and cons of Combat Siege:

| Pros | Cons |
| --- | --- |
| The game has realistic and dynamic graphics and sound effects. | The game can be laggy or buggy sometimes. |
| The game has a variety of units, buildings, and features to choose from. | The game can be complex or confusing for beginners. |
| The game has a competitive and cooperative multiplayer mode. | The game can be unfair or frustrating for some players. |
| The game has a regular update and improvement schedule. | The game has a premium currency and content that can be expensive. |
    -

    The comparison of Combat Siege with other Studio Hoppe games

    -

    The next thing you need to do is to compare Combat Siege with other Studio Hoppe games. Studio Hoppe is the developer of Combat Siege and other browser-based strategy games, such as Panzer Rush, Desert Order, Strategy Combat, and Base Attack Force. You can compare Combat Siege with other Studio Hoppe games by using these criteria:

    -
      -
    • The theme and setting of the game. For example, Combat Siege is set in the last quarter of the last century, while Panzer Rush is set in World War II.
    • -
    • The units and buildings of the game. For example, Combat Siege has over 100 different units, while Desert Order has over 70 different units.
    • -
    • The gameplay and features of the game. For example, Combat Siege has sound, stealth, breaching charges, drones, flags, gold, and other elements, while Strategy Combat has oil fields, mines, bridges, tunnels, walls, gates, flags, and other elements.
    • -
    -

    The verdict of Combat Siege

    -

    The last thing you need to do is to give your verdict of Combat Siege. Your verdict is your overall opinion and rating of the game based on your experience and evaluation. You can give your verdict of Combat Siege by using these steps:

    -
      -
    1. Summarize the main points of your review. For example, you can say that Combat Siege is a realistic and dynamic real time strategy game that you can play in your browser.
    2. -
    3. Highlight the strengths and weaknesses of the game. For example, you can say that Combat Siege has a variety of units, buildings, and features, but it can also be laggy, complex, or unfair.
    4. -
    5. Give your recommendation and rating of the game. For example, you can say that Combat Siege is a game that you would recommend to fans of real time strategy games, and that you would rate it 4 out of 5 stars.
    6. -
    -

    Here is an example of a verdict of Combat Siege:

    -

    Combat Siege is a realistic and dynamic real time strategy game that you can play in your browser. It has a variety of units, buildings, and features that make it an exciting and challenging strategy game. However, it can also be laggy, complex, or unfair for some players. Combat Siege is a game that I would recommend to fans of real time strategy games, and I would rate it 4 out of 5 stars.

    -

    Conclusion

    -

    In conclusion, Combat Siege is a real time strategy game that you can play in your browser. It is developed by Studio Hoppe, a German company that specializes in browser-based strategy games. In this game, you can build your base, train your army, and fight against other players in a dynamic and realistic war scenario. In this article, we have given you an overview of what Combat Siege is, how to play it, and how to review it. We hope that this article has been helpful and informative for you. If you have any questions or feedback about Combat Siege or this article, please feel free to leave a comment below. Thank you for reading and have fun playing Combat Siege!

    -

    FAQs

    -

    Here are some of the frequently asked questions about Combat Siege:

    -
      -
    1. Q: How can I play Combat Siege on my mobile device?
      -A: You can play Combat Siege on your mobile device by using a browser that supports HTML5, such as Chrome or Safari. However, the game might not run as smoothly or as optimally as on a desktop or laptop computer.
    2. -
    3. Q: How can I contact the developers or the support team of Combat Siege?
      -A: You can contact the developers or the support team of Combat Siege by using the contact form on their website: [2](https://www.studiohoppe.com/contact/). You can also follow them on their social media accounts: [3](https://www.facebook.com/studiohoppe), [4](https://twitter.com/studiohoppe), [5](https://www.youtube.com/channel/UCwZ7n0yX8n1f9YJxXmZkq9g).
    4. -
    5. Q: How can I report a bug or a problem in Combat Siege?
      -A: You can report a bug or a problem in Combat Siege by using the bug report button on the top right corner of the screen. You can also use the forum on their website: [6](https://www.studiohoppe.com/forum/).
    6. -
    7. Q: How can I give feedback or suggestions for Combat Siege?
      -A: You can give feedback or suggestions for Combat Siege by using the feedback button on the top right corner of the screen. You can also use the forum on their website: [7](https://www.studiohoppe.com/forum/).
    8. -
    9. Q: How can I learn more about Combat Siege?
      -A: You can learn more about Combat Siege by reading the wiki on their website: [8](https://www.studiohoppe.com/wiki/). You can also watch the videos on their YouTube channel: [9](https://www.youtube.com/channel/UCwZ7n0yX8n1f9YJxXmZkq9g).
    10. -

    401be4b1e0
    -
    -
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Download Android APK Games  Apps for Free - No registration required.md b/spaces/congsaPfin/Manga-OCR/logs/Download Android APK Games  Apps for Free - No registration required.md
deleted file mode 100644
index 01d3c39932b3c4fe0e9bac7b749d83d91f1c1786..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Download Android APK Games  Apps for Free - No registration required.md
+++ /dev/null
@@ -1,111 +0,0 @@
-
-

    How to Download and Install APK Files on Your Android Device

    -

    If you have an Android device, you might have heard the term APK and wondered what it means. While you can use Android without ever learning the meaning of APK, studying a bit will help you understand and appreciate the platform further.

    -

    download in apk


    Download ✶✶✶ https://urlca.com/2uOeTZ



    -

    Let's look at what an APK file is, how to download and install it on your Android device, and what are its advantages and disadvantages.

    -

    What is an APK File and What Does It Do?

    -

    APK stands for Android Package (sometimes Android Package Kit or Android Application Package). It's the file format that Android uses to distribute and install apps. As a result, an APK contains all the elements that an app needs to install correctly on your device.

    -

    An APK is an archive file, meaning that it contains multiple files, plus some metadata about them. You're probably familiar with other types of archive files, like ZIP and RAR. Generally, archive files (like ZIP) are used to combine multiple files into one, in order to make them more portable or compress them to save space. When an archive is used to distribute software, it's then called a software package.

    -

    As it turns out, APKs are a variant of the JAR (Java Archive) file format, since a lot of Android is built in Java. All APKs are ZIP files at their core, but they must contain additional information to properly function as an APK. So all APKs are ZIPs, but not all ZIPs are APKs.

    -

    If you're curious, you can crack open an APK file and see what's inside. Just use a file extraction tool like 7-Zip to open it like you would any old ZIP file. You can't do much with APKs on platforms other than Android, unless you install an Android emulator like Bluestacks.

    -
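Because an APK is just a ZIP archive with a few extra conventions, you can also peek inside one programmatically instead of using 7-Zip. The snippet below is only a small illustrative sketch in Python; the file name my-app.apk is a placeholder, not a file referenced by this article.

```python
import zipfile

# Placeholder path; point this at any APK file you have on disk.
apk_path = "my-app.apk"

# An APK is a ZIP archive at its core, so the standard zipfile module can read it.
with zipfile.ZipFile(apk_path) as apk:
    for info in apk.infolist():
        # Print each entry's name and its uncompressed size in bytes.
        print(f"{info.filename} ({info.file_size} bytes)")

    # Every valid APK carries an AndroidManifest.xml entry (stored in a binary format).
    print("Has manifest:", "AndroidManifest.xml" in apk.namelist())
```

Keep in mind that the AndroidManifest.xml inside an APK is compiled into a binary form, so it will not be human-readable without a dedicated decoding tool.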

    What Are APK Files Used For?

    -

    APK files allow you to install apps on your Android phone. They're similar to the APPX files used to install Store apps on Windows, as well as corresponding package files on other platforms. When you open an APK on your device, it contains the instructions to install the app on your phone and provides information about the package itself to your device.

    -

    -

    Normally, when you visit Google Play to download or update an app, the store automatically installs the APK for you. In this way, the Play Store also acts as a package manager—a tool for easily installing, updating, and removing software on a device.

    -

    However, due to Android's open nature, Google Play is not the only way to find and install APKs. It's easy to obtain an APK file from elsewhere, move it to your device, and install it manually. See how to sideload apps on Android for a full guide.

    -

    How to Download APK Files

    -

    To download APK files, there are several methods. One way is to go to APK Mirror on your Android device via Chrome or another browser, search for the app you want, and tap on 'Download APK'. Another way is to use an APK downloader website, such as APK Bucket or Evozi's APK Downloader, to save the APK file from the Google Play Store URL. A third way is to use an APK downloader app, such as APKPure or APKMirror Installer, to download and install APK files directly on your Android device. However, before you download any APK file, you should make sure that the source is trustworthy and reputable. Some websites or apps may offer pirated, modified, or malicious APK files that can harm your device or compromise your privacy. You should also scan the APK file with a security app before installing it.

    How to Install APK Files

    -

    To install APK files on your Android device, you need to enable the option to allow installation from unknown sources. This means that you can install apps that are not from the Google Play Store or other official sources. However, this also exposes you to potential security risks, so you should be careful when installing unknown apps.

    -

    To enable unknown sources, you need to access the settings app and look for the security or privacy option. Depending on your device, you may need to tap on the lock screen and security tab or the install unknown apps switch. Then, you need to turn on the unknown sources switch or check the box next to it. You may see a warning message against enabling this option.

    -

    Once you have enabled unknown sources, you can use a file manager app to install APK files. A file manager app lets you browse and manage the files on your device's storage. You can use the default file manager app on your device or download a third-party app, such as ES File Explorer or Solid Explorer.

    -

    To use a file manager app to install an APK file, you need to:

    -
      -
    1. Go to the app drawer and launch the file manager app (e.g. Solid Explorer).
    2. -
    3. Navigate to the phone’s internal storage and find the Android APK file you transferred earlier.
    4. -
    5. Tap on the APK file to initiate the installation using Android’s built-in package installer.
    6. -
    7. Follow the on-screen prompts to grant permissions and complete the installation.
    8. -
    9. Launch the app from your app drawer or home screen.
    10. -
    -
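If you also have a computer available, sideloading over USB with adb (Android Debug Bridge) is a common alternative to the file-manager route above. The sketch below is not part of the original steps and rests on a few assumptions: adb from the Android SDK platform tools is installed and on your PATH, USB debugging is enabled on the phone, and my-app.apk is a placeholder file name.

```python
import subprocess

# Placeholder path; replace with the APK you actually downloaded.
apk_path = "my-app.apk"

# "adb install -r" installs the package, replacing any already-installed version.
result = subprocess.run(
    ["adb", "install", "-r", apk_path],
    capture_output=True,
    text=True,
)

print(result.stdout)
if result.returncode != 0:
    print("Install failed:", result.stderr)
```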

    Advantages and Disadvantages of APK Files

    -

    Using APK files to install apps on your Android device has some advantages and disadvantages that you should be aware of. Here are some of them:

    -

    Advantages of APK Files

    -
      -
    • You can access apps that are not available on the Google Play Store due to regional restrictions, compatibility issues, or other reasons.
    • -
    • You can try beta versions or older versions of apps that may have features or bug fixes that are not yet released on the official channels.
    • -
    • You can customize your device with apps that offer more functionality or personalization options than the default ones.
    • -
    • You can save bandwidth and storage space by downloading APK files directly from websites instead of using the Google Play Store app.
    • -
    -

    Disadvantages of APK Files

    -
      -
    • You may expose your device to security risks by installing apps from unknown or untrusted sources that may contain malware or viruses.
    • -
    • You may violate intellectual property rights or terms of service by downloading pirated, modified, or hacked apps that are not authorized by the developers.
    • -
    • You may encounter compatibility issues or performance problems by installing apps that are not optimized for your device or Android version.
    • -
    • You may miss out on updates or support from the developers by installing apps that are not from the official channels.
    • -
    -

    Conclusion

    -

    In this article, we have explained what an APK file is, how to download and install it on your Android device, and what are its advantages and disadvantages. We hope you have learned something new and useful about this topic.

    -

    If you want to install apps from outside the Google Play Store, using APK files is one of the easiest and most common methods. However, you should also be careful and responsible when doing so, as there are some risks and limitations involved. Always download APK files from reputable sources, scan them with a security app before installing them, and enable unknown sources only when necessary.

    -

    FAQs

    -

    Here are some common questions and answers about APK files:

    -

    What does APK stand for?

    -

    APK stands for Android Package (sometimes Android Package Kit or Android Application Package). It's the file format that Android uses to distribute and install apps.

    -

    How do I open an APK file?

    -

    To open an APK file on your Android device, you need to enable unknown sources in your settings and use a file manager app to install it. To open an APK file on your computer, you need to use an Android emulator like BlueStacks.

    -

How do I update an APK file?

    -

    To update an APK file, you need to download the latest version of the app from the same source that you downloaded the original APK file from. Then, you need to install the new APK file over the old one, following the same steps as before. Alternatively, you can use an APK downloader app that can automatically check for updates and install them for you.

    -

    How do I delete an APK file?

    -

    To delete an APK file, you need to uninstall the app that it installed on your device. You can do this by going to your settings app and tapping on apps or applications. Then, you need to find the app that you want to uninstall and tap on it. Next, you need to tap on uninstall and confirm your choice. This will remove the app and its associated data from your device. You can also delete the APK file from your device's storage using a file manager app.

    -

    Are APK files safe?

    -

    APK files are not inherently unsafe, but they can pose some security risks if they are downloaded from unknown or untrusted sources. Some APK files may contain malware or viruses that can harm your device or compromise your privacy. Therefore, you should always download APK files from reputable sources, scan them with a security app before installing them, and enable unknown sources only when necessary.

    -

    Are APK files legal?

    -

    APK files are legal as long as they are authorized by the developers of the apps that they contain. However, some APK files may violate intellectual property rights or terms of service by offering pirated, modified, or hacked apps that are not approved by the developers. Therefore, you should avoid downloading such APK files and respect the rights of the developers.

    401be4b1e0
    -
    -
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Download Episode Choose Your Story Mod APK and Get Unlimited Gems and Tickets Instantly.md b/spaces/congsaPfin/Manga-OCR/logs/Download Episode Choose Your Story Mod APK and Get Unlimited Gems and Tickets Instantly.md
deleted file mode 100644
index fbcc20a34cd592e30aff709cfa3308796f7cb6df..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Download Episode Choose Your Story Mod APK and Get Unlimited Gems and Tickets Instantly.md
+++ /dev/null
@@ -1,118 +0,0 @@
-
    -

    Episode Choose Your Story Mod APK (Unlimited Gems and Tickets)

    -

    Do you love reading and writing interactive stories? Do you want to create your own characters and choose your own destiny? Do you want to enjoy unlimited gems and tickets to access premium choices and episodes? If you answered yes to any of these questions, then you should try Episode Choose Your Story Mod APK, a modified version of the popular storytelling app that gives you everything you need to enjoy your favorite stories.

    -

    episode choose your story mod apk (unlimited gems and tickets)


    DOWNLOAD ★★★★★ https://urlca.com/2uObyi



    -

    What is Episode Choose Your Story?

    -

Episode Choose Your Story is an app that lets you do just that, with over 150,000 gripping stories where you make choices that matter. With billions of reads, Episode is an immense collection of interactive stories where YOU choose your destiny. Or become a creator and write your own!

    -

    Features of Episode Choose Your Story

    -

    Some of the features of Episode Choose Your Story are:

    -
      -
    • You can customize your avatar and design your outfits.
    • -
    • You can develop relationships with your favorite characters.
    • -
    • You can change your fate through your choices.
    • -
    • You can discover different genres like romance, drama, comedy, mystery, fantasy, and more.
    • -
    • You can create your own stories using the Episode Studio tool.
    • -
    -

    How to play Episode Choose Your Story

    -

    To play Episode Choose Your Story, you need to follow these steps:

    -
      -
    1. Download the app from the Google Play Store or the App Store.
    2. -
    3. Sign up with your email or Facebook account.
    4. -
    5. Browse the featured stories or search by genre, author, or title.
    6. -
    7. Select a story and start reading.
    8. -
    9. Make choices that affect the plot and the characters.
    10. -
    11. Earn gems and tickets to unlock more choices and episodes.
    12. -
    -

    What is Episode Choose Your Story Mod APK?

    -

    Episode Choose Your Story Mod APK is a modified version of the original app that gives you unlimited gems and tickets for free. This means that you can access all the premium choices and episodes without spending any money. You can also enjoy other benefits like ad-free experience, faster loading, and more.

    -

    Benefits of Episode Choose Your Story Mod APK

    -

    Some of the benefits of Episode Choose Your Story Mod APK are:

    -


    -
      -
    • You can enjoy unlimited gems and tickets to access premium choices and episodes.
    • -
    • You can save your progress and sync it across different devices.
    • -
    • You can play offline without any internet connection.
    • -
    • You can explore more stories and genres without any restrictions.
    • -
    • You can support your favorite authors and creators by giving them feedback and ratings.
    • -
    -

    How to download and install Episode Choose Your Story Mod APK

    -

    To download and install Episode Choose Your Story Mod APK, you need to follow these steps:

    -
      -
    1. Delete the original app from your device if you have it installed.
    2. -
    3. Allow unknown sources in your device settings.
    4. -
5. Download the mod apk file from a trusted source (like 5play.app).
    6. -
    7. Open the downloaded file and tap on install.
    8. -
    9. Wait for the installation to complete and launch the app.
    10. -
    11. Enjoy unlimited gems and tickets and have fun!
    12. -
    -

    Conclusion

    -

    If you are a fan of interactive stories, then you should definitely try Episode Choose Your Story Mod APK, a modified version of the popular storytelling app that gives you unlimited gems and tickets for free. You can create your own characters, choose your own destiny, and explore different genres without any limitations. You can also support your favorite authors and creators by giving them feedback and ratings. Download Episode Choose Your Story Mod APK today and start your own adventure!

    -

    FAQs

    -

    Here are some frequently asked questions about Episode Choose Your Story Mod APK:

Q: Is Episode Choose Your Story Mod APK safe to use?
A: Yes, Episode Choose Your Story Mod APK is safe to use as long as you download it from a trusted source. However, you should always be careful when installing any modded apps on your device and scan them for viruses or malware.

Q: Will I get banned for using Episode Choose Your Story Mod APK?
A: No, you will not get banned for using Episode Choose Your Story Mod APK as it does not interfere with the game servers or violate any terms of service. However, you should always use it at your own risk and discretion.

Q: Can I update Episode Choose Your Story Mod APK?
A: No, you cannot update Episode Choose Your Story Mod APK as it is a modified version of the original app. If you want to enjoy the latest features and updates, you will have to download the new mod apk file and install it again.

Q: Can I play Episode Choose Your Story Mod APK with my friends?
A: Yes, you can play Episode Choose Your Story Mod APK with your friends as it supports online multiplayer mode. You can also chat with other players and share your stories with them.

Q: Can I request a specific story or genre for Episode Choose Your Story Mod APK?
A: No, you cannot request a specific story or genre for Episode Choose Your Story Mod APK as it depends on the availability and popularity of the stories on the original app. However, you can always browse the featured stories or search by genre, author, or title to find something that suits your taste.

    197e85843d
    -
    -
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Download WhatsApp 2022 APK for Android and Experience the New Features.md b/spaces/congsaPfin/Manga-OCR/logs/Download WhatsApp 2022 APK for Android and Experience the New Features.md
deleted file mode 100644
index 11787248ff1f797543f2bca6d3111038929bd111..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Download WhatsApp 2022 APK for Android and Experience the New Features.md
+++ /dev/null
@@ -1,109 +0,0 @@
-
    -

    Download 2022 WhatsApp APK: How to Get the Latest Version of WhatsApp for Android

    -

    WhatsApp is one of the most popular messaging and video calling apps in the world, with over 2 billion users in more than 180 countries. It's simple, reliable, and private, so you can easily keep in touch with your friends and family. But did you know that you can get the latest version of WhatsApp for Android before it's officially released on the Google Play Store? In this article, we'll show you how to download 2022 WhatsApp APK and enjoy the new features and improvements of this amazing app.

    -

    What is WhatsApp and why should you use it?

    -

    WhatsApp is a free app that lets you send text messages, voice messages, photos, videos, documents, and stickers to anyone who has the app installed on their phone. You can also make free voice and video calls with up to eight people at a time, as well as create group chats with up to 256 participants. WhatsApp uses your phone's internet connection (4G/3G/2G/EDGE or Wi-Fi) to send and receive messages and calls, so you don't have to pay for SMS or phone charges.

    -

    download 2022 whatsapp apk


DOWNLOAD: https://urlca.com/2uOfqt



    -

    Features of WhatsApp

    -

    Some of the features that make WhatsApp stand out from other messaging apps are:

    -
      -
    • End-to-end encryption: This means that your messages and calls are secure and only you and the person you're communicating with can read or listen to them. No one else, not even WhatsApp, can access your conversations.
    • -
    • Privacy controls: You can choose who can see your last seen, profile photo, about, status, and live location. You can also block unwanted contacts, report spam, and mute notifications.
    • -
    • WhatsApp Web and Desktop: You can use WhatsApp on your computer or tablet by scanning a QR code from your phone. This way, you can access your chats and calls from any device.
    • -
    • Status updates: You can share photos, videos, and GIFs that disappear after 24 hours with your contacts or select groups. You can also view and reply to your friends' status updates.
    • -
    • WhatsApp Business: This is a separate app that allows you to create a business profile and connect with your customers from anywhere. You can send automated messages, catalog your products or services, and provide support.
    • -
    -

    Benefits of WhatsApp

    -

    Some of the benefits that you can enjoy by using WhatsApp are:

    -
      -
    • Simplicity: WhatsApp has a user-friendly interface that makes it easy to use for anyone. You don't need to create an account or remember a password. You just need your phone number and a verification code.
    • -
    • Reliability: WhatsApp works across mobile and desktop even on slow connections. It also has a backup feature that lets you restore your chats and media from Google Drive or iCloud.
    • -
    • Compatibility: WhatsApp supports most Android devices as well as iOS, Windows Phone, KaiOS, and Symbian devices. You can also use it on any web browser or download it for Mac or Windows.
    • -
    • Variety: WhatsApp offers a wide range of options to express yourself and communicate with others. You can use emojis, stickers, voice notes, video clips, documents, and more. You can also customize your wallpaper, notification sounds, font size, and language.
    • -
    • Affordability: WhatsApp is free to download and use. There are no subscription fees or hidden charges. You only need an internet connection to use it.
    • -
    -

    What is an APK and why should you download it?

    -


    An APK (short for Android Package Kit) is a file format that contains the code, resources, and metadata of an Android app. It's like a zip file that you can install on your device to run an app. You can download APK files from various sources on the internet, such as APKMirror, APKPure, or Uptodown. However, you need to be careful and only download APK files from trusted and reputable sites, as some may contain malware or viruses.

    -

    One of the reasons why you may want to download an APK file is to get the latest version of an app before it's officially available on the Google Play Store. This way, you can enjoy the new features and bug fixes of the app before anyone else. Another reason is to access apps that are not compatible with your device or region. For example, some apps may be restricted or banned in certain countries, or may not support older Android versions. By downloading an APK file, you can bypass these limitations and use any app you want.

    -

    How to download 2022 WhatsApp APK for Android?

    -

    If you want to download 2022 WhatsApp APK for Android, you need to follow these steps:

    -

    Step 1: Enable unknown sources on your device

    -

    By default, your Android device only allows you to install apps from the Google Play Store. To install apps from other sources, you need to enable unknown sources on your device. To do this, go to Settings > Security > Unknown sources and toggle it on. You may see a warning message that says installing apps from unknown sources may harm your device. Tap OK to proceed.

    -

    Step 2: Find a reliable source for the APK file

    -

    As mentioned earlier, you need to be careful when downloading APK files from the internet, as some may contain malware or viruses. To avoid this, you should only download APK files from trusted and reputable sites, such as APKMirror, APKPure, or Uptodown. These sites scan and verify the APK files before uploading them, so you can be sure that they are safe and authentic.

    -

    To find the 2022 WhatsApp APK file, you can search for it on these sites using keywords like "WhatsApp", "2022", or "APK". You should see a list of results with different versions and dates of the app. Choose the latest version that matches your device's Android version and architecture (arm or x86). You can check these details on your device by going to Settings > About phone > Software information.
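If you prefer to check those details from a computer rather than through the Settings menu, the short Python sketch below reads them over ADB (Android Debug Bridge). This is only an optional helper, not part of WhatsApp or of the APK sites: it assumes the Android platform tools (adb) are installed and on your PATH and that USB debugging is enabled on the phone. The property names it queries (ro.build.version.release and ro.product.cpu.abi) are standard Android system properties.

```python
# Optional helper: read the Android version and CPU architecture over ADB.
# Assumes adb (Android platform tools) is on your PATH and USB debugging is on.
import subprocess

def get_device_property(prop: str) -> str:
    """Return the value of an Android system property from the connected device."""
    result = subprocess.run(
        ["adb", "shell", "getprop", prop],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

if __name__ == "__main__":
    print("Android version:", get_device_property("ro.build.version.release"))  # e.g. 11
    print("CPU architecture:", get_device_property("ro.product.cpu.abi"))       # e.g. arm64-v8a or x86
```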

    -

    download whatsapp 2022 latest version apk
    -download whatsapp 2022 beta apk for android
    -download whatsapp 2022 mod apk with new features
    -download whatsapp 2022 apk for pc windows 10
    -download whatsapp 2022 apk for ios devices
    -download whatsapp 2022 apk from official website
    -download whatsapp 2022 apk without google play store
    -download whatsapp 2022 apk with dark mode
    -download whatsapp 2022 apk with stickers and emojis
    -download whatsapp 2022 apk with video call option
    -download whatsapp 2022 apk with end-to-end encryption
    -download whatsapp 2022 apk with group chat functionality
    -download whatsapp 2022 apk with status update feature
    -download whatsapp 2022 apk with backup and restore option
    -download whatsapp 2022 apk with voice message feature
    -download whatsapp 2022 apk with web and desktop version
    -download whatsapp 2022 apk with business account option
    -download whatsapp 2022 apk with privacy and security settings
    -download whatsapp 2022 apk with custom wallpaper feature
    -download whatsapp 2022 apk with delete for everyone option
    -download whatsapp 2022 apk with live location sharing feature
    -download whatsapp 2022 apk with media and document sharing feature
    -download whatsapp 2022 apk with mute and block option
    -download whatsapp 2022 apk with notification and sound settings
    -download whatsapp 2022 apk with data usage and storage option
    -download whatsapp 2022 apk with language and font option
    -download whatsapp 2022 apk with theme and color option
    -download whatsapp 2022 apk with contact and chat settings
    -download whatsapp 2022 apk with profile and account settings
    -download whatsapp 2022 apk with help and support option
    -how to download whatsapp 2022 apk for free
    -how to install whatsapp 2022 apk on android device
    -how to update whatsapp 2022 apk to latest version
    -how to uninstall whatsapp 2022 apk from android device
    -how to use whatsapp 2022 apk on multiple devices
    -how to transfer whatsapp 2022 apk data to new device
    -how to fix whatsapp 2022 apk not working issue
    -how to enable whatsapp 2022 apk permissions on android device
    -how to disable whatsapp 2022 apk auto-update feature
    -how to join whatsapp 2022 apk beta program
    -why download whatsapp 2022 apk for android device
    -what are the benefits of downloading whatsapp 2022 apk for android device
    -what are the requirements of downloading whatsapp 2022 apk for android device
    -what are the risks of downloading whatsapp 2022 apk from third-party sources
    -what are the alternatives of downloading whatsapp 2022 apk for android device

    -

    Step 3: Download and install the APK file

    -

    Once you have found the 2022 WhatsApp APK file that suits your device, tap on it to download it. You may see a pop-up window that asks you to confirm the download. Tap OK to start the download. Depending on your internet speed and the size of the file, it may take a few minutes to complete.

    -

    After the download is finished, you should see a notification that says "Download complete". Tap on it to open the file. Alternatively, you can go to your device's file manager and locate the file in the Downloads folder. Tap on it to open it.

    -

    You should see a screen that asks you to install the app. Tap Install to begin the installation process. It may take a few seconds to finish.
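If you would rather sideload the file from a computer instead of tapping through the on-device installer, ADB can do the same job. The sketch below is a minimal, hypothetical example: it assumes adb is installed, the phone is connected with USB debugging enabled, and the file name whatsapp-2022.apk is just a placeholder for whatever the downloaded file is actually called.

```python
# Minimal sketch: sideload a downloaded APK from a computer with adb.
# Assumes adb is installed and the phone is connected with USB debugging enabled.
import subprocess

def sideload_apk(apk_path: str) -> None:
    """Install the APK on the connected device; -r keeps app data if it is already installed."""
    subprocess.run(["adb", "install", "-r", apk_path], check=True)

if __name__ == "__main__":
    sideload_apk("whatsapp-2022.apk")  # placeholder file name, not an official package
```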

    -

    Step 4: Verify and enjoy the new features of WhatsApp

    -

    After the installation is done, you should see a screen that says "App installed". Tap Open to launch the app. You may need to verify your phone number and restore your backup if you have one.

    -

    Congratulations! You have successfully downloaded and installed 2022 WhatsApp APK for Android. You can now enjoy the new features and improvements of this amazing app, such as:

    -
      -
    • New chat themes: You can choose from different colors and wallpapers for your chats.
    • -
    • New stickers: You can express yourself with more fun and animated stickers.
    • -
    • New privacy settings: You can choose who can add you to groups or view your online status.
    • -
    • New voice effects: You can change your voice pitch when sending voice messages.
    • -
    • New media editor: You can edit your photos and videos before sending them.
    • -
    -

    Conclusion

    -

WhatsApp is one of the best messaging and video calling apps in the world, with over 2 billion users in more than 180 countries. It's simple, reliable, and private, so you can easily keep in touch with your friends and family. However, if you want to get the latest version of WhatsApp for Android before it's officially released on the Google Play Store, you need to download 2022 WhatsApp APK from a reliable source and install it on your device. This way, you can enjoy the new features and improvements of this amazing app before anyone else. You can also access apps that are not compatible with your device or region by downloading APK files. However, you need to be careful and only download APK files from trusted and reputable sites, as some may contain malware or viruses. You also need to enable unknown sources on your device to install apps from other sources. By following these steps, you can download 2022 WhatsApp APK for Android and stay connected with your loved ones.

    -

    FAQs

    -

    Here are some of the frequently asked questions about downloading 2022 WhatsApp APK for Android:

    -

    Is it safe to download 2022 WhatsApp APK for Android?

    -

    Yes, it is safe to download 2022 WhatsApp APK for Android as long as you download it from a trusted and reputable site, such as APKMirror, APKPure, or Uptodown. These sites scan and verify the APK files before uploading them, so you can be sure that they are safe and authentic. However, you should avoid downloading APK files from unknown or suspicious sites, as they may contain malware or viruses that can harm your device or compromise your privacy.

    -

    Is it legal to download 2022 WhatsApp APK for Android?

    -

    Yes, it is legal to download 2022 WhatsApp APK for Android as long as you don't violate the terms and conditions of WhatsApp or Google Play Store. You are not breaking any laws by downloading an APK file from a third-party source, as it is similar to downloading a file from any other website. However, you should be aware that by doing so, you may lose some of the features or support that WhatsApp or Google Play Store provides, such as automatic updates, security patches, or customer service.

    -

    Will I lose my chats and media if I download 2022 WhatsApp APK for Android?

    -

    No, you will not lose your chats and media if you download 2022 WhatsApp APK for Android. However, you should make sure that you have a backup of your chats and media before installing the APK file. You can do this by going to Settings > Chats > Chat backup on your WhatsApp app and tapping on Back up to Google Drive or iCloud. This way, you can restore your chats and media from your backup if anything goes wrong during the installation process.

    -

    Will I get banned from WhatsApp if I download 2022 WhatsApp APK for Android?

    -

    No, you will not get banned from WhatsApp if you download 2022 WhatsApp APK for Android. However, you should avoid downloading modified or unofficial versions of WhatsApp, such as WhatsApp Plus or GBWhatsApp, as they may violate the terms and conditions of WhatsApp and put your account at risk of being banned. You should only download the official version of WhatsApp from a reliable source, such as the ones mentioned above.

    -

    How can I update my 2022 WhatsApp APK for Android?

    -

    If you want to update your 2022 WhatsApp APK for Android, you need to repeat the same steps that you followed to download it. You need to find the latest version of the APK file from a trusted and reputable site, download it on your device, and install it over the existing app. You don't need to uninstall the previous version or lose your chats and media. However, you should always check for updates regularly to ensure that you have the most recent and secure version of the app.

    -
    -
    \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/My Talking Tom 2 MOD APK Why You Should Try It on Your iOS Device.md b/spaces/congsaPfin/Manga-OCR/logs/My Talking Tom 2 MOD APK Why You Should Try It on Your iOS Device.md deleted file mode 100644 index 58c74e324fbbf0ecb6fca5f499d100dad963ed13..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/My Talking Tom 2 MOD APK Why You Should Try It on Your iOS Device.md +++ /dev/null @@ -1,106 +0,0 @@ -
    -

    My Talking Tom 2 Mod Apk for iOS: A Fun and Interactive Game for All Ages

    -

    Do you love cats? Do you want to have a virtual pet that you can take care of, play with, and customize? If yes, then you should try My Talking Tom 2 mod apk for iOS. This is a popular game that lets you adopt a cute kitten named Tom and watch him grow into a happy cat. You can feed him, bathe him, dress him, and even talk to him. He will repeat what you say in a funny voice and react to your touch. You can also explore his world and discover new things every day.

    -

    But what makes My Talking Tom 2 mod apk for iOS different from the original version? Well, the mod apk version has some amazing features that will make your gaming experience more fun and exciting. You will get unlimited coins and diamonds that you can use to buy clothes, accessories, furniture, and toys for your Tom. You will also unlock new skills, snacks, and activities that will keep your Tom happy and healthy. You can play mini-games, adopt pets, and make friends with other Toms. And of course, you will enjoy the high-quality graphics and sound effects that make the game more realistic and immersive.

    -

    my talking tom 2 mod apk for ios


    Download File ……… https://urlca.com/2uOcJl



    -

    Features of My Talking Tom 2 Mod Apk for iOS

    -

    My Talking Tom 2 mod apk for iOS has many features that will make you fall in love with this game. Here are some of them:

    -
      -
    • Unlimited coins and diamonds: With this feature, you can buy anything you want for your Tom without worrying about the cost. You can dress him up in different outfits, change his fur color, and give him cool accessories. You can also decorate his house with furniture, wallpapers, and stickers. You can even buy him a plane ticket to travel around the world.
    • -
    • New skills, snacks, and activities: With this feature, you can help your Tom learn new skills like playing the guitar, painting, or cooking. You can also feed him different snacks like pizza, sushi, or ice cream. And you can do various activities with him like brushing his teeth, taking him to the toilet, or putting him to bed.
    • -
    • Mini-games, pets, and friends: With this feature, you can play mini-games with your Tom like Flappy Tom, Bubble Shooter, or Space Trails. You can also adopt pets like a dog, a hamster, or a unicorn. And you can make friends with other Toms by visiting their houses or sending them gifts.
    • -
    • High-quality graphics and sound effects: With this feature, you can enjoy the game in full HD resolution and realistic animations. You can also hear your Tom's voice and reactions as he mimics what you say or responds to your touch. The game also has background music and sound effects that match the mood and theme of each scene.
    • -
    -

    How to Download and Install My Talking Tom 2 Mod Apk for iOS

    -

If you want to download and install My Talking Tom 2 mod apk for iOS, you need to follow these simple steps:

- Step 1: Download the mod apk file from a trusted source. You can find many websites that offer the mod apk file for My Talking Tom 2, but make sure you choose a safe and reliable one. You can use the link below to download the mod apk file for My Talking Tom 2.

    -

    - Step 2: Install the mod apk file using a third-party app installer. Since the mod apk file is not available on the App Store, you need to use a third-party app installer to install it on your iOS device. You can use any app installer that supports iOS, such as TutuApp, AppValley, or TweakBox. You can download these app installers from their official websites or from the link below. Once you have downloaded and installed the app installer, open it and search for My Talking Tom 2 mod apk. Then, tap on the install button and wait for the installation to complete.

    -

    - Step 3: Launch the game and enjoy. After the installation is done, you can find the game icon on your home screen. Tap on it and start playing My Talking Tom 2 mod apk for iOS. You will see that you have unlimited coins and diamonds, as well as all the other features that we mentioned above. Have fun with your Tom and explore his world.

    -

    Pros and Cons of My Talking Tom 2 Mod Apk for iOS

    -

    My Talking Tom 2 mod apk for iOS is a great game that will keep you entertained for hours. However, like any other game, it also has some pros and cons that you should be aware of. Here are some of them:

    -
      -
    • Pros:
        -
      • Fun, engaging, and educational game for all ages. You can learn new skills, words, and facts while playing with your Tom.
      • -
      • Unlimited coins and diamonds to customize your Tom and his house. You can express your creativity and style by dressing up your Tom and decorating his house.
      • -
      • New skills, snacks, and activities to keep your Tom happy and healthy. You can take care of your Tom's needs and make him feel loved and appreciated.
      • -
      • Mini-games, pets, and friends to interact with. You can play mini-games with your Tom, adopt pets, and make friends with other Toms.
      • -
      • High-quality graphics and sound effects. You can enjoy the game in full HD resolution and realistic animations. You can also hear your Tom's voice and reactions as he mimics what you say or responds to your touch.
      • -
      -
    • -
    • Cons:
        -
      • Requires internet connection. You need to have a stable internet connection to play the game and access all its features.
      • -
      • May contain ads. The game may show ads from time to time that may interrupt your gameplay or consume your data.
      • -
      • May not be compatible with some devices. The game may not work properly on some older or lower-end devices due to its high-quality graphics and sound effects.
      • -
      -
    • -
    -

    Conclusion

    -

    My Talking Tom 2 mod apk for iOS is a fun and interactive game that will make you feel like you have a real pet cat. You can adopt a cute kitten named Tom and watch him grow into a happy cat. You can feed him, bathe him, dress him, and even talk to him. You can also explore his world and discover new things every day.

    -

    The mod apk version of the game has some amazing features that will make your gaming experience more fun and exciting. You will get unlimited coins and diamonds that you can use to buy clothes, accessories, furniture, and toys for your Tom. You will also unlock new skills, snacks, and activities that will keep your Tom happy and healthy. You can play mini-games, adopt pets, and make friends with other Toms. And of course, you will enjoy the high-quality graphics and sound effects that make the game more realistic and immersive.

    -

    my talking tom 2 mod apk unlimited money ios
    -my talking tom 2 mod apk download for iphone
    -my talking tom 2 mod apk latest version ios
    -my talking tom 2 mod apk free shopping ios
    -my talking tom 2 mod apk hack ios
    -my talking tom 2 mod apk no ads ios
    -my talking tom 2 mod apk all unlocked ios
    -my talking tom 2 mod apk offline ios
    -my talking tom 2 mod apk revdl ios
    -my talking tom 2 mod apk rexdl ios
    -my talking tom 2 mod apk happymod ios
    -my talking tom 2 mod apk an1 ios
    -my talking tom 2 mod apk andropalace ios
    -my talking tom 2 mod apk android republic ios
    -my talking tom 2 mod apk android 1 ios
    -my talking tom 2 mod apk appvn ios
    -my talking tom 2 mod apk apkpure ios
    -my talking tom 2 mod apk apkmody ios
    -my talking tom 2 mod apk apkmirror ios
    -my talking tom 2 mod apk apknite ios
    -my talking tom 2 mod apk aptoide ios
    -my talking tom 2 mod apk blackmod ios
    -my talking tom 2 mod apk by revdl ios
    -my talking tom 2 mod apk by rexdl ios
    -my talking tom 2 mod apk by happymod ios
    -my talking tom 2 mod apk by an1 ios
    -my talking tom 2 mod apk by andropalace ios
    -my talking tom 2 mod apk by android republic ios
    -my talking tom 2 mod apk by android 1 ios
    -my talking tom 2 mod apk by appvn ios
    -my talking tom 2 mod apk by apkpure ios
    -my talking tom 2 mod apk by apkmody ios
    -my talking tom 2 mod apk by apkmirror ios
    -my talking tom 2 mod apk by apknite ios
    -my talking tom 2 mod apk by aptoide ios
    -my talking tom 2 mod apk by blackmod ios
    -download game my talking tom 2 mod apk for ios
    -download game my talking tom 2 hack for iphone free
    -how to install my talking tom 2 mod on iphone
    -how to get unlimited coins in my talking tom 2 on iphone
    -how to unlock all items in my talking tom 2 on iphone
    -how to play my talking tom 2 offline on iphone
    -how to remove ads from my talking tom 2 on iphone
    -how to update my talking tom 2 on iphone
    -how to backup and restore data of my talking tom 2 on iphone

    -

    If you love cats and want to have a virtual pet that you can take care of, play with, and customize, then you should download My Talking Tom 2 mod apk for iOS now. It is a free game that is suitable for all ages. It is also easy to download and install using a third-party app installer. So what are you waiting for? Download the game now and have fun with your Tom.

    -

    Frequently Asked Questions

    -

    Here are some of the most common questions that people ask about My Talking Tom 2 mod apk for iOS:

    -
      -
    1. Is My Talking Tom 2 mod apk for iOS safe to download?
    2. -

Yes, My Talking Tom 2 mod apk for iOS is safe to download as long as you use a trusted source and a third-party app installer. The mod apk file does not contain any viruses or malware that can harm your device or compromise your privacy. However, you should always be careful when downloading and installing any mod apk file from the internet and make sure you have a backup of your data in case something goes wrong.

      -
    3. Can I play My Talking Tom 2 mod apk for iOS offline?
    4. -

      No, you cannot play My Talking Tom 2 mod apk for iOS offline. The game requires an internet connection to access all its features and functions. You need to have a stable internet connection to play the game and interact with your Tom, his pets, and his friends. You also need an internet connection to update the game and get new content and features.

      -
    5. How can I update My Talking Tom 2 mod apk for iOS?
    6. -

      To update My Talking Tom 2 mod apk for iOS, you need to download and install the latest version of the mod apk file from the same source that you used before. You can also check the app installer that you used to install the mod apk file for any updates or notifications. However, you should be aware that updating the mod apk file may overwrite your previous data and settings, so you may lose some of your progress and coins. You should also make sure that the new version of the mod apk file is compatible with your device and does not cause any errors or glitches.

      -
    7. Can I play My Talking Tom 2 mod apk for iOS with my friends?
    8. -

      Yes, you can play My Talking Tom 2 mod apk for iOS with your friends. The game has a social feature that allows you to visit other Toms' houses and send them gifts. You can also chat with them and see their Toms' reactions. You can also play mini-games with them and compete for high scores. To play with your friends, you need to have an internet connection and a Facebook account. You can connect your Facebook account to the game and invite your friends to join you.

      -
    9. What are some tips and tricks for playing My Talking Tom 2 mod apk for iOS?
    10. -

      Here are some tips and tricks for playing My Talking Tom 2 mod apk for iOS:

      -
        -
      • Check on your Tom regularly and take care of his needs. Feed him, bathe him, take him to the toilet, and put him to bed when he is hungry, dirty, full, or sleepy.
      • -
      • Play with your Tom and make him happy. Pet him, tickle him, poke him, and talk to him. He will repeat what you say in a funny voice and react to your touch.
      • -
      • Customize your Tom and his house with unlimited coins and diamonds. Buy clothes, accessories, furniture, and toys for your Tom. Change his fur color, eye color, and outfit. Decorate his house with wallpapers, stickers, and paintings.
      • -
      • Learn new skills, snacks, and activities with your Tom. Teach him how to play the guitar, paint, or cook. Feed him different snacks like pizza, sushi, or ice cream. Do various activities with him like brushing his teeth, taking him to the doctor, or flying a plane.
      • -
      • Play mini-games, adopt pets, and make friends with other Toms. Play mini-games with your Tom like Flappy Tom, Bubble Shooter, or Space Trails. Adopt pets like a dog, a hamster, or a unicorn. Make friends with other Toms by visiting their houses or sending them gifts.
      • -

      -
      -
      \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Parking apk2 plaza de oriente la mejor opcin para visitar el Palacio Real.md b/spaces/congsaPfin/Manga-OCR/logs/Parking apk2 plaza de oriente la mejor opcin para visitar el Palacio Real.md deleted file mode 100644 index 1b7670b92bb2f243ade550402b55bbee9ebf5bbb..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Parking apk2 plaza de oriente la mejor opcin para visitar el Palacio Real.md +++ /dev/null @@ -1,161 +0,0 @@ -
      -

      Parking APK2 Plaza de Oriente: A Convenient and Affordable Option in Madrid

      -

      If you are planning to visit Madrid by car, you may be wondering where to park your vehicle without spending a fortune or wasting time. Finding a parking spot in the city center can be challenging, especially if you are not familiar with the traffic rules and restrictions. Fortunately, there is a solution that will make your life easier and your trip more enjoyable: Parking APK2 Plaza de Oriente.

      -

      parking apk2 plaza de oriente


Download File: https://urlca.com/2uO7G3



      -

      Parking APK2 Plaza de Oriente is a public parking facility located in one of the most privileged areas of Madrid, next to the Royal Palace and other major attractions. It offers you a secure, comfortable, and affordable place to leave your car while you explore the city. Whether you need to park for a few hours, a day, or a month, you will find an option that suits your needs and budget.

      -

      In this article, we will tell you everything you need to know about Parking APK2 Plaza de Oriente, including its location, price, convenience, and how to use it. By the end of this article, you will see why Parking APK2 Plaza de Oriente is one of the best options for parking in Madrid.

      What is APK2 Plaza de Oriente?

      -

      APK2 Plaza de Oriente is a public parking facility that belongs to the APK2 network, a leading company in the parking sector in Spain. It is located in the heart of Madrid, in the Plaza de Oriente, a square that connects the Royal Palace with the Opera House. It is one of the most emblematic and historical places in the city, where you can admire the monuments, gardens, and fountains that surround it.

      -

      The parking facility has a capacity of 500 spaces, distributed over three underground floors. It has two entrances and exits: one in Plaza de Oriente s/n (in front of the Opera House) and another in Calle Bailén 6 (in the Bailén tunnel). The headroom limit is 2 meters, so it can accommodate most cars and vans. It also has 30 spaces reserved for coaches, with a special entrance adapted for them.

      -

      The parking facility is open 24 hours a day, 7 days a week, and offers video surveillance, security guards, and customer service. It also has services such as charging points for electric vehicles, adapted spaces for people with reduced mobility, toilets, vending machines, and car wash. You can pay by cash, card, or mobile app, and you can also book your spot online or get a monthly pass.

      -

      Here is a map of the parking location:

[Image: map of the APK2 Plaza de Oriente parking location]

      Why choose APK2 Plaza de Oriente?

      -

      Parking at APK2 Plaza de Oriente has many advantages that make it one of the best options for parking in Madrid. Here are some of them:

      -

      parking apk2 plaza de oriente madrid
      -parking apk2 plaza de oriente s/n
      -parking apk2 plaza de oriente telefono
      -parking apk2 plaza de oriente precio
      -parking apk2 plaza de oriente reservar
      -parking apk2 plaza de oriente abono mensual
      -parking apk2 plaza de oriente opiniones
      -parking apk2 plaza de oriente app
      -parking apk2 plaza de oriente horario
      -parking apk2 plaza de oriente ubicacion
      -parking apk2 plaza de oriente cerca del palacio real
      -parking apk2 plaza de oriente catedral de la almudena
      -parking apk2 plaza de oriente teatro real
      -parking apk2 plaza de oriente plaza mayor
      -parking apk2 plaza de oriente puerta del sol
      -parking apk2 plaza de oriente vigilado 24/7
      -parking apk2 plaza de oriente adaptado para movilidad reducida
      -parking apk2 plaza de oriente puntos de recarga electrica
      -parking apk2 plaza de oriente venta de plazas
      -parking apk2 plaza de oriente gestion directa desde la app
      -parking apk2 plaza de oriente tarifas accesibles
      -parking apk2 plaza de oriente facil acceso
      -parking apk2 plaza de oriente galibo 2 metros
      -parking apk2 plaza de oriente entrada por calle bailen
      -parking apk2 plaza de oriente entrada por plaza del rey
      -parking apk2 plaza de oriente espacio para autocares
      -parking apk2 plaza de oriente ideal para turismo en madrid
      -parking apk2 plaza de oriente sabatini gardens
      -parking apk2 plaza de oriente temple of debod
      -parking apk2 plaza de oriente gran via
      -parking apk2 plaza de oriente madrid central sin multas
      -parking apk2 plaza de oriente etiqueta medioambiental obligatoria
      -parking apk2 plaza de oriente reservas por horas o dias
      -parking apk2 plaza de oriente reservas online con onepark
      -parking apk2 plaza de oriente reservas online con parclick
      -parking apk2 plaza de oriente reservas online con elparking
      -parking apk2 plaza de oriente reservas online con parkimeter
      -parking apk2 plaza de oriente reservas online con parkvia
      -parking apk2 plaza de oriente reservas online con parkapp
      -parking apk2 plaza de oriente reservas online con park4night

      -

      Location

      -

      One of the main benefits of parking at APK2 Plaza de Oriente is its location. It is situated in a strategic area of the city, close to many attractions and landmarks that you can visit on foot or by public transport. Some of the places that you can easily reach from the parking facility are:

      -
        -
      • The Royal Palace: the official residence of the Spanish royal family and one of the most impressive and visited monuments in Madrid. It is only a 5-minute walk from the parking lot.
      • -
      • The Opera House: also known as Teatro Real, it is one of the most prestigious and elegant theaters in Europe, where you can enjoy opera, ballet, concerts, and other cultural events. It is right in front of the parking entrance in Plaza de Oriente.
      • -
      • The Almudena Cathedral: the main church of Madrid and a symbol of its history and identity. It is located next to the Royal Palace and has a beautiful interior and a museum. It is a 10-minute walk from the parking lot.
      • -
      • The Sabatini Gardens: a classical-style garden that belongs to the Royal Palace and offers a stunning view of its facade. It is a perfect place to relax and enjoy nature in the city center. It is a 10-minute walk from the parking lot.
      • -
      • The Plaza Mayor: one of the most emblematic and lively squares in Madrid, where you can find cafes, restaurants, shops, street performers, and historical buildings. It is a 15-minute walk from the parking lot.
      • -
      -

      Moreover, parking at APK2 Plaza de Oriente is very convenient if you want to get around the city by public transport. There are several metro, bus, and train stations nearby that connect you with other areas of Madrid. Some of them are:

      -
        -
      • Opera metro station: lines 2, 5, and R (Ramal). It is right in front of the parking entrance in Plaza de Oriente.
      • -
      • Sol metro station: lines 1, 2, and 3. It is also a train station (Cercanías) with lines C-3 and C-4. It is a 15-minute walk from the parking lot.
      • -
      • Príncipe Pío metro station: lines 6, 10, and R (Ramal). It is also a train station (Cercanías) with lines C-1, C-7, and C-10. It is a 20-minute walk from the parking lot.
      • -
      -

      Price

      -

      Another advantage of parking at APK2 Plaza de Oriente is its price. Compared to other parking options in the city center, it offers very competitive and affordable rates that fit any budget. You can pay by the hour, by the day, or by the month, depending on your needs. You can also book your spot online or get a monthly pass to save money and time.

      -

      Here is a table with the parking fees for different time periods:

      - - - - - - - - - - - - - - - - - - - - - - - -
| Time period | Price |
| --- | --- |
| 1 hour | 3.00 € |
| 2 hours | 6.00 € |
| 3 hours | 9.00 € |
| 4 hours | 12.00 € |
| 5 hours | 15.00 € |
| 6 hours | 18.00 € |
| 7 hours | 21.00 € |
| 8 hours | 24.00 € |
| 9 hours | 27.00 € |
| 10 hours | 30.00 € |
| 11 hours | 33.00 € |
| 12 hours | 36.00 € |
| 13 hours or more (maximum daily rate) | 39.00 € |
| 1 day (24 hours) | 39.00 € |
| 2 days (48 hours) | 78.00 € |
| 3 days (72 hours) | 117.00 € |
| 4 days (96 hours) | 156.00 € |
| 5 days (120 hours) | 195.00 € |
| 6 days (144 hours) | 234.00 € |
| 7 days (168 hours) | 273.00 € |
| 1 month (30 days) | 300.00 € |
      -

      You can also check the prices and availability of the parking spots online, using the official website of APK2 or other platforms such as Parclick or ElParking. You can also make a reservation online and pay in advance, which will guarantee you a place and save you time when you arrive. Moreover, you can get a monthly pass for 300 €, which will allow you to park unlimitedly for 30 days.
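To get a feel for how those rates add up for a given stay, here is a small, purely illustrative Python sketch. It is not an official APK2 tool; it simply applies the per-hour rate and the daily cap from the table above and flags when the 300 € monthly pass would be the cheaper option.

```python
import math

# Illustrative estimate only, based on the published rates:
# 3.00 EUR per hour, capped at 39.00 EUR per 24 hours; monthly pass 300.00 EUR.
HOURLY_RATE = 3.00
DAILY_CAP = 39.00
MONTHLY_PASS = 300.00

def estimate_fee(hours: float) -> float:
    """Estimate the short-term fee for a stay of the given length in hours."""
    full_days, extra_hours = divmod(math.ceil(hours), 24)
    return full_days * DAILY_CAP + min(extra_hours * HOURLY_RATE, DAILY_CAP)

if __name__ == "__main__":
    for stay in (3, 15, 48, 168):  # hours
        fee = estimate_fee(stay)
        note = " (monthly pass is cheaper)" if fee > MONTHLY_PASS else ""
        print(f"{stay} hours: {fee:.2f} EUR{note}")
```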

      -

      Convenience

      -

      Parking at APK2 Plaza de Oriente is not only cheap and well-located, but also convenient and comfortable. The parking facility has several features that make it easy and hassle-free to park your car and enjoy your stay in Madrid. Some of these features are:

      -

      24/7 availability

      -

      You can park your car at any time of the day or night, as the parking facility is open 24 hours a day, 7 days a week. You don't have to worry about finding a place or being late for your appointment or event. You can also leave your car as long as you want, as there is no maximum parking time.

      -

      Video surveillance

      -

      You can park your car with peace of mind, as the parking facility has video cameras and security guards that monitor the premises constantly. You don't have to worry about theft, vandalism, or damage to your vehicle. You can also access your car whenever you need to, as there is no restriction on entry or exit.

      -

      Adapted for people with reduced mobility

      -

      You can park your car comfortably and safely, as the parking facility has spaces reserved and adapted for people with reduced mobility. These spaces are located near the elevators and the exits, and have a wider size and a lower headroom limit. They also have ramps, handrails, and signs to facilitate access and movement.

      -

      Charging points for electric vehicles

      -

      You can park your car and charge it at the same time, as the parking facility has charging points for electric vehicles. These points are located on the first floor of the parking lot, and have a power of 22 kW. They are compatible with most electric cars and vans, and have a standard plug type 2.

      -

      Customer service and assistance

      -

      You can park your car and get help if you need it, as the parking facility has customer service and assistance available at all times. You can contact them by phone, email, or intercom, and they will answer your questions, solve your problems, or provide you with information. They can also help you with issues such as lost tickets, flat tires, or battery failures.

      -

      How to use APK2 Plaza de Oriente?

      -

      Parking at APK2 Plaza de Oriente is very simple and easy. You just need to follow these steps:

      -

      How to get there

      -

      To get to the parking facility by car, you can use any of these routes:

      -
        -
      • If you are coming from the north or the east of Madrid, you can take the M-30 highway and exit at Calle Bailén. Then, follow the signs to Plaza de Oriente and enter the parking lot through the entrance in Calle Bailén 6.
      • -
      • If you are coming from the south or the west of Madrid, you can take the A-5 highway and exit at Paseo de Extremadura. Then, follow the signs to Plaza de España and enter the parking lot through the entrance in Plaza de Oriente s/n.
      • -
      • If you are already in the city center, you can take any of these streets: Gran Vía, Calle Mayor, Calle Arenal, or Calle Princesa. Then, follow the signs to Plaza de Oriente and enter the parking lot through either entrance.
      • -
      -

      To get to the parking facility by public transport, you can use any of these options:

      -
        -
      • If you are using the metro, you can take lines 2, 5, or R (Ramal) and get off at Opera station. Then walk a few meters to the parking entrance in Plaza de Oriente s/n.
      • -
      • If you are using the bus, you can take any of these lines: 3, 25, 39, or 148 and get off at Plaza de Oriente stop. Then, walk a few meters to the parking entrance in Plaza de Oriente s/n.
      • -
      • If you are using the train (Cercanías), you can take lines C-3 or C-4 and get off at Sol station. Then, walk for about 15 minutes to the parking entrance in Plaza de Oriente s/n or take the metro line 2 to Opera station.
      • -
      -

      How to pay

      -

      To pay for your parking fee, you can use any of these methods:

      -

      Using the parking meter

      -

      You can pay by cash or card at the parking meter located near the exits of the parking lot. You just need to insert your ticket and follow the instructions on the screen. You will receive a receipt and a validated ticket that you will need to exit the parking lot.

      -

      Using a mobile app

      -

      You can also pay by using a mobile app such as Parclick or ElParking. You just need to download the app, register your account, and select APK2 Plaza de Oriente as your parking option. You can also make a reservation online and pay in advance. You will receive a confirmation code that you will need to scan at the entrance and exit of the parking lot.

      -

      Using a monthly pass or a reservation

      -

      If you have a monthly pass or a reservation, you don't need to pay at the parking meter or use a mobile app. You just need to scan your pass or your reservation code at the entrance and exit of the parking lot. You can get a monthly pass for 300 € or make a reservation online through the official website of APK2 or other platforms such as Parclick or ElParking.

      -

      How to cancel or modify a reservation

      -

      If you have made a reservation online and you want to cancel or modify it, you can do so by following these steps:

      -
        -
      • Go to the website or platform where you made your reservation and log in with your account.
      • -
      • Find your reservation and click on cancel or modify.
      • -
      • Follow the instructions on the screen and confirm your action.
      • -
      • You will receive an email with the confirmation of your cancellation or modification.
      • -
      -

      If you have any questions or problems with your reservation, you can contact the customer service of APK2 or the platform where you made your reservation. You can find their contact information on their websites or apps.

      -

      Conclusion

      -

      Parking at APK2 Plaza de Oriente is a convenient and affordable option for parking in Madrid. It is located in a privileged area of the city, close to many attractions and landmarks that you can visit on foot or by public transport. It offers competitive and flexible rates that fit any budget and need. It also provides security, comfort, and services that make it easy and hassle-free to park your car and enjoy your stay in Madrid.

      -

      If you are looking for a parking spot in Madrid, don't hesitate to try APK2 Plaza de Oriente. You will not regret it. You can book your spot online or get a monthly pass to save money and time. You can also pay by cash, card, or mobile app, depending on your preference. You can also cancel or modify your reservation if you need to.

      -

      Thank you for reading this article. We hope you found it useful and informative. If you have any questions, comments, or feedback, please leave them below. We would love to hear from you. And if you have already parked at APK2 Plaza de Oriente, please share your experience with us. How was it? Did you like it? Would you recommend it?

      -

      FAQs

      -

      Is APK2 Plaza de Oriente inside Madrid Central?

      -

      No, APK2 Plaza de Oriente is not inside Madrid Central, which is a low-emission zone that restricts the access of certain vehicles to the city center. APK2 Plaza de Oriente is located outside Madrid Central, so you don't need a special permit or sticker to park there. However, if you want to enter Madrid Central from APK2 Plaza de Oriente, you will need to comply with the rules and requirements of Madrid Central.

      -

      Can I park my coach or motorhome at APK2 Plaza de Oriente?

      -

      Yes, you can park your coach or motorhome at APK2 Plaza de Oriente, as long as it does not exceed 12 meters in length and 4 meters in height. The parking facility has 30 spaces reserved for coaches, with a special entrance adapted for them. The entrance is located in Calle Bailén 6, in the Bailén tunnel. You will need to pay the same rate as cars, which is 3 € per hour or 39 € per day.

      -

      What happens if I exceed the maximum parking time or lose my ticket?

      -

      If you exceed the maximum parking time or lose your ticket, you will need to contact the customer service or the security guard of the parking facility. They will help you to solve the issue and pay the corresponding fee. If you exceed the maximum parking time, you will have to pay an extra charge of 3 € per hour or fraction. If you lose your ticket, you will have to pay a penalty of 10 € plus the parking fee.

      -

      How can I get a receipt or invoice for my parking fee?

      -

      If you need a receipt or invoice for your parking fee, you can request it at the parking meter or at the customer service of the parking facility. You will need to provide your ticket number and your personal or business information. You can also request it online, by sending an email to apk2@aparcamientos.com with your ticket number and your personal or business information.

      -

      What are the alternatives to APK2 Plaza de Oriente in Madrid?

      -

      If APK2 Plaza de Oriente is full or not available for some reason, you can look for other parking options in Madrid. Some of them are:

      -
        -
      • APK2 Plaza de España: another public parking facility that belongs to the APK2 network, located in Plaza de España, a 10-minute walk from APK2 Plaza de Oriente. It has similar features and prices as APK2 Plaza de Oriente.
      • -
      • Garaje Fermar: a private parking facility located in Calle Campomanes 10, a 5-minute walk from APK2 Plaza de Oriente. It has a capacity of 200 spaces and offers video surveillance, car wash, and electric vehicle charging points. It is open 24 hours a day and charges 4 € per hour or 36 € per day.
      • -
      • Parking Saba Palacio de Oriente: a private parking facility located in Calle Bailén s/n, next to APK2 Plaza de Oriente. It has a capacity of 400 spaces and offers video surveillance, security guards, and customer service. It is open 24 hours a day and charges 3.60 € per hour or 32.40 € per day.
      • -

      -
      -
      \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Tomb of the Mask Color - A Pixel Art Coloring Game with a Twist.md b/spaces/congsaPfin/Manga-OCR/logs/Tomb of the Mask Color - A Pixel Art Coloring Game with a Twist.md deleted file mode 100644 index ea7a10d0f83c86b5d0d51ce64fd64f78f03ad1f2..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Tomb of the Mask Color - A Pixel Art Coloring Game with a Twist.md +++ /dev/null @@ -1,95 +0,0 @@ - -

      Tomb of the Mask: Color - A Fun and Addictive Arcade Game

      -

      If you're looking for a new game to play on your phone or tablet, you might want to check out Tomb of the Mask: Color. This is a colorful and creative arcade game that will test your reflexes and skills as you explore a maze full of traps, enemies, and treasures. In this article, we'll tell you what Tomb of the Mask: Color is, how to play it, why you should play it, and some tips and tricks to help you master it.

      -

      What is Tomb of the Mask: Color?

      -

      Tomb of the Mask: Color is a sequel to the popular game Tomb of the Mask, which was released in 2016. In this game, you play as an adventurer who finds a mysterious mask that gives you the ability to climb walls and ceilings. You use this power to explore a labyrinth filled with dangers and rewards. The twist in this game is that you have to paint every corner of the maze with your color as you move. This adds an extra layer of challenge and fun to the game.

      -

      tomb of the mask color


DOWNLOAD: https://urlca.com/2uO4vR



      -

      Tomb of the Mask: Color has many features that make it an enjoyable and addictive game. Some of these features are:

      -
        -
      • Over 200 levels to play, each with different layouts, obstacles, and enemies
      • -
      • A variety of power-ups and items to collect, such as magnets, shields, bombs, and coins
      • -
      • A leaderboard and achievements system to compete with other players and track your progress
      • -
      • A daily challenge mode that offers a new level every day with different rules and rewards
      • -
      • A custom level editor that lets you create your own levels and share them with other players
      • -
      -

      How to Play Tomb of the Mask: Color?

      -

      The gameplay of Tomb of the Mask: Color is simple but challenging. You control your character by swiping on the screen in the direction you want to move. You can move horizontally or vertically, but not diagonally. You can also change direction mid-air by swiping again. Your goal is to paint every tile in the maze with your color while avoiding or destroying enemies, spikes, lasers, and other hazards. You also have to collect coins, stars, keys, and other items along the way.

      -

      The game has two modes: adventure mode and arcade mode. In adventure mode, you have to complete each level by reaching the exit door. You have a limited number of lives, which you lose if you touch an enemy or a trap. You can earn more lives by collecting hearts or watching ads. In arcade mode, you have to survive as long as possible in an endless maze that gets harder as you go. You have only one life, but you can revive by watching ads or using gems.

      -

      Why You Should Play Tomb of the Mask: Color?

      -

      It's Free and Easy to Play

      -

      One of the reasons why you should play Tomb of the Mask: Color is that it's free to download and play on your device. You don't need any special skills or equipment to enjoy this game. All you need is your finger and your screen. The game also has simple controls and rules that anyone can learn in a matter of minutes. The game is also suitable for all ages and preferences, as it has a cute and colorful graphics style and a catchy soundtrack.

      -

      It's Challenging and Rewarding

      -

      Another reason why you should play Tomb of the Mask: Color is that it's a game that will challenge your reflexes, skills, and strategy. The game has many levels that vary in difficulty and complexity, requiring you to think fast and act faster. You will encounter different enemies and traps that will try to stop you from painting the maze, such as ghosts, bats, spiders, cannons, saws, and more. You will also have to collect various power-ups and items that will help you or hinder you, such as magnets, shields, bombs, and coins. The game will keep you on your toes and make you feel accomplished when you complete a level or beat a high score.

      -

      It's Colorful and Creative

      -

      The last reason why you should play Tomb of the Mask: Color is that it's a game that will stimulate your creativity and imagination. The game has a vibrant and colorful graphics style that will appeal to your eyes and mood. The game also has a unique and original concept that will make you wonder how the developers came up with it. The game lets you create your own levels and share them with other players, giving you the opportunity to express yourself and challenge others. The game is a feast for the senses and the mind.

      -

      Tips and Tricks for Tomb of the Mask: Color

      -

      If you want to master Tomb of the Mask: Color, here are some tips and tricks that might help you:

      -
        -
      • Swipe quickly and accurately. The faster you swipe, the faster you move. The more precise you swipe, the more control you have over your direction. This will help you avoid enemies and traps and paint more tiles.
      • -
      • Use power-ups wisely. Power-ups can give you an edge or a disadvantage depending on the situation. For example, magnets can help you collect coins easily, but they can also attract bombs or enemies. Shields can protect you from harm, but they can also prevent you from painting tiles. Bombs can clear obstacles, but they can also damage you or destroy items. Be careful when you use power-ups and don't rely on them too much.
      • -
      • Collect stars and keys. Stars are important for unlocking new levels and modes in the game. Keys are important for opening chests that contain gems, coins, or power-ups. Try to collect as many stars and keys as possible in each level.
      • -
      • Watch ads or use gems to revive. If you run out of lives or die in arcade mode, you can watch an ad or use gems to revive yourself. This can help you continue your progress or improve your score. However, don't abuse this feature as it can make the game less fun and challenging.
      • -
      • Play daily challenges and custom levels. Daily challenges offer a new level every day with different rules and rewards. Custom levels are levels created by other players that you can play and rate. These features can add more variety and fun to the game.
      • -
      -

      Conclusion

      -

      Tomb of the Mask: Color is a fun and addictive arcade game that will keep you entertained for hours. It has simple but challenging gameplay, colorful and creative graphics, and many features that make it worth playing. If you're looking for a new game to play on your phone or tablet, download Tomb of the Mask: Color today and enjoy painting the maze with your color.

      -

      tomb of the mask color game
      -tomb of the mask color online
      -tomb of the mask color play free
      -tomb of the mask color download
      -tomb of the mask color app
      -tomb of the mask color apk
      -tomb of the mask color mod
      -tomb of the mask color cheats
      -tomb of the mask color hack
      -tomb of the mask color tips
      -tomb of the mask color guide
      -tomb of the mask color walkthrough
      -tomb of the mask color levels
      -tomb of the mask color stars
      -tomb of the mask color coins
      -tomb of the mask color gems
      -tomb of the mask color skins
      -tomb of the mask color masks
      -tomb of the mask color characters
      -tomb of the mask color review
      -tomb of the mask color rating
      -tomb of the mask color gameplay
      -tomb of the mask color trailer
      -tomb of the mask color video
      -tomb of the mask color youtube
      -tomb of the mask color yandex games
      -tomb of the mask color bestgames.com
      -tomb of the mask color app store
      -tomb of the mask color ios
      -tomb of the mask color iphone
      -tomb of the mask color ipad
      -tomb of the mask color android
      -tomb of the mask color google play
      -tomb of the mask color pc
      -tomb of the mask color windows
      -tomb of the mask color mac
      -tomb of the mask color webgl
      -tomb of the mask color html5
      -tomb of the mask color 2d pixel art animation
      -tomb of the mask color avoid game
      -tomb of the mask color puzzle game
      -tomb of the mask color arcade game
      -tomb of the mask color exclusive game
      -tomb of the mask color playcanvas game
      -tomb of the mask color coloring game
      -tomb of the mask color painting game
      -tomb of the mask color maze game
      -tomb of the mask color labyrinth game
      -tomb of the mask color adventure game

      -

      FAQs

      -

      Here are some frequently asked questions about Tomb of the Mask: Color:

      -
        -
      1. Q: How do I change my color in the game?
        A: You can change your color in the settings menu by tapping on the gear icon on the top right corner of the screen. You can choose from 12 different colors.
      2. -
      3. Q: How do I unlock new masks in the game?
        A: You can unlock new masks by collecting gems and spending them in the shop menu by tapping on the cart icon on the top right corner of the screen. You can also unlock some masks by completing achievements or watching ads.
      4. -
      5. Q: How do I create my own level in the game?
        A: You can create your own level by tapping on the pencil icon on the bottom right corner of the screen. You can choose from different tiles, enemies, power-ups, items, and backgrounds to design your level. You can also test your level before publishing it.
      6. -
      7. Q: How do I play other players' levels in the game?
        A: You can play other players' levels by tapping on the globe icon on the bottom right corner of the screen. You can browse through different categories, such as popular , new, or random, or search for a specific level by its name or code. You can also rate and comment on the levels you play.
      8. -
      9. Q: How do I share my level with other players in the game?
        A: You can share your level with other players by tapping on the share icon on the bottom right corner of the screen. You can copy the level code or the level link and send it to your friends or social media. You can also see how many times your level has been played, liked, or disliked.
      10. -

      -
      -
      \ No newline at end of file diff --git a/spaces/contluForse/HuggingGPT/Manalink 3.0 Shandalar Download __FULL__.md b/spaces/contluForse/HuggingGPT/Manalink 3.0 Shandalar Download __FULL__.md deleted file mode 100644 index a0ace3220831a65a771fef286a9652d2a5029cf8..0000000000000000000000000000000000000000 --- a/spaces/contluForse/HuggingGPT/Manalink 3.0 Shandalar Download __FULL__.md +++ /dev/null @@ -1,110 +0,0 @@ -## Manalink 3.0 Shandalar Download - - - - - - ![Manalink 3.0 Shandalar Download __FULL__](https://uploads.documents.cimpress.io/v1/uploads/e4872e53-de47-4a98-bd3c-b3fa7ade57ce~110/original?tenant=vbu-digital) - - - - - -**DOWNLOAD ★★★★★ [https://www.google.com/url?q=https%3A%2F%2Fbyltly.com%2F2txoKs&sa=D&sntz=1&usg=AOvVaw0GCruc0fmN3FfbztE3zEun](https://www.google.com/url?q=https%3A%2F%2Fbyltly.com%2F2txoKs&sa=D&sntz=1&usg=AOvVaw0GCruc0fmN3FfbztE3zEun)** - - - - - - - - - - - - - -# How to Download and Install Manalink 3.0 Shandalar, the Ultimate Magic: The Gathering PC Game - - - -If you are a fan of Magic: The Gathering, you might have heard of Manalink 3.0 Shandalar, a fan-made update for the classic Microprose's Magic: The Gathering PC game from 1997. Manalink 3.0 Shandalar adds thousands of new cards, modes, features, and graphics to the original game, making it the most complete and immersive way to play Magic on your computer. - - - -But how do you download and install Manalink 3.0 Shandalar? It might seem complicated at first, but don't worry, we are here to help you with this step-by-step guide. - - - -## Step 1: Download the required files - - - -The first thing you need to do is to download the required files to run Manalink 3.0 Shandalar. You will need: - - - -- The base installation file, which contains the core program and all the necessary files to run the game. You can find it [here](https://www.slightlymagic.net/forum/viewforum.php?f=85), in the patches subforum. Look for the topic "New Base Install & Bugfix" and download the latest version available. - -- The card art file, which contains the images for all the cards in the game. You can find it [here](https://www.slightlymagic.net/forum/viewforum.php?f=85), in the same patches subforum. Look for the topic "New Base Install & Bugfix" and download the latest version available. - -- The Visual C++ Libraries, which are needed to run some of the game's features. You can find them [here](http://www.microsoft.com/download/en/details.aspx?id=8328). Download and install them on your computer. - - - -## Step 2: Install the base installation file - - - -Once you have downloaded the base installation file, you need to unzip it and run the setup.exe file. Follow the instructions on the screen and choose a folder where you want to install the game. We recommend creating a new folder for Manalink 3.0 Shandalar, rather than using an existing one. - - - -After the installation is complete, you will have a folder with all the files needed to run Manalink 3.0 Shandalar. However, you still need to add the card art file and apply some patches to update the game. - - - -## Step 3: Add the card art file - - - -Once you have downloaded the card art file, you need to unzip it and copy its contents into the folder where you installed Manalink 3.0 Shandalar. You will be asked to overwrite some existing files, choose yes. - - - -This will add all the images for the cards in the game, making it more visually appealing and easier to play. 
- - - -## Step 4: Apply the latest patches - - - -The last thing you need to do is to apply the latest patches for Manalink 3.0 Shandalar, which will fix some bugs and add new cards and features to the game. You can find them [here](https://www.slightlymagic.net/forum/viewforum.php?f=85), in the same patches subforum where you downloaded the base installation file and the card art file. - - - -Look for the topics that have a date in their title, such as "Patch XXXX-XX-XX". Download the latest patch available and unzip it. Then copy its contents into the folder where you installed Manalink 3.0 Shandalar. You will be asked to overwrite some existing files, choose yes. - - - -This will update your game to the latest version available, adding new cards and features that will enhance your gameplay experience. - - - -## Step 5: Enjoy Manalink 3.0 Shandalar! - - - -Congratulations! You have successfully downloaded and installed Manalink 3.0 Shandalar on your computer. Now you can enjoy playing Magic: The Gathering with thousands of cards, modes, features, and graphics that will make you feel like you are playing with real cards on a tabletop. - - - -To start playing, just run Magic.exe from your Manalink 3.0 Shandalar folder and choose your preferred mode of play. You can play solo or multiplayer, online or offline, - - 1b8d091108 - - - - - diff --git a/spaces/contluForse/HuggingGPT/assets/Dekart Private Disk 2.10 Serial 52l Create Multiple Encrypted Disks with a Simple Interface.md b/spaces/contluForse/HuggingGPT/assets/Dekart Private Disk 2.10 Serial 52l Create Multiple Encrypted Disks with a Simple Interface.md deleted file mode 100644 index d72da0599444300a98c4076db02bd58b364ca05f..0000000000000000000000000000000000000000 --- a/spaces/contluForse/HuggingGPT/assets/Dekart Private Disk 2.10 Serial 52l Create Multiple Encrypted Disks with a Simple Interface.md +++ /dev/null @@ -1,8 +0,0 @@ - -

      dekart private disk 2.10 serial 52 windows 11 wallpaper unicas LightCA Need for Speed: World free Ela-Salaty: Muslim Prayer Times for pc amd gaming evolved download nu vot Windows 11 png software drivers source downloadsource.net Free Spider Solitaire 2012 for Windows download bestcrypt license Visual c redistribution for visual studio 2012 update 4 download Cypher pro millenium 4 download wonder Fox apk

      -

      dekart sim manager 3.3 keygen, dekart sim manager 3.1 keygen, dekart key manager, dekart key manager.. ... Dekart SIM Manager v2.10 :: 2010-10-11 :: 33.. Dekart .. ... 1667, FFF Dekart Private Disk 2.10kg crk ... 2748 ...

      -

      Dekart Private Disk 2.10 Serial 52l


      DOWNLOAD ››› https://ssurll.com/2uzyf9



      -

      Dekart Private Disk Light 2.12.2: Powerful, reliable and flexible disk encryption program that ... Private Disk hides and restricts access to your programs and data.. ... 08/11/2020, Adobe update fixes vulnerabilities in Acrobat ... private disk win8; » dekart private disk 2.10 函数不正确; » dekart private disk 2.16; » pivate disk light ...

      -

      Dekart Private Disk 2.10 2.1 Download Free with Crack .. .. Mod Tc 2000 Para Rfactor Crack

      dekart private disk 2.15 keygen


      CRACK ... Picture Instruments Image 2 LUT Pro 1.0.11 Crack Free Download Latest ... This would be ...

      aaccfb2cb3
      -
      -
      \ No newline at end of file diff --git a/spaces/contluForse/HuggingGPT/assets/Dil Hai Ke Manta Nahin Full Movie Download Mp4 Watch the Romantic Comedy Online.md b/spaces/contluForse/HuggingGPT/assets/Dil Hai Ke Manta Nahin Full Movie Download Mp4 Watch the Romantic Comedy Online.md deleted file mode 100644 index af7155b1596c2b868239f3d9a2ec1ae526104143..0000000000000000000000000000000000000000 --- a/spaces/contluForse/HuggingGPT/assets/Dil Hai Ke Manta Nahin Full Movie Download Mp4 Watch the Romantic Comedy Online.md +++ /dev/null @@ -1,5 +0,0 @@ -
      -

Download Dil Hai Ki Manta Nahi Egale Jhankar unlimited movies and videos here. Dil Hai Ki Manta Nahi Egale Jhankar HD, 3gp, mp4 320p and more videos you can download easily. Tamilrockers and movierulz, tamilgun, filmywap, and pagalworld videos and movies download.

      -

      Dil Hai Ke Manta Nahin Full Movie Download Mp4


      Download Zip ☆☆☆☆☆ https://ssurll.com/2uzxVn



      aaccfb2cb3
      -
      -
      \ No newline at end of file diff --git a/spaces/contluForse/HuggingGPT/assets/FULL AUTODESK.INVENTOR.PRO.V2014.WIN64-ISO !!TOP!!.md b/spaces/contluForse/HuggingGPT/assets/FULL AUTODESK.INVENTOR.PRO.V2014.WIN64-ISO !!TOP!!.md deleted file mode 100644 index 25a23acdfef9e8741564bfd1851e58439f350b51..0000000000000000000000000000000000000000 --- a/spaces/contluForse/HuggingGPT/assets/FULL AUTODESK.INVENTOR.PRO.V2014.WIN64-ISO !!TOP!!.md +++ /dev/null @@ -1,6 +0,0 @@ -

      FULL AUTODESK.INVENTOR.PRO.V2014.WIN64-ISO


      Download Filehttps://ssurll.com/2uzw8B



      - -Download AutoDesk Inventor Professional 2014 Full Setup 32 Bit, 64 Bit ... V2017 (ISO) [WIN x64] Autodesk AutoCAD MEP 2016 [Win 64-Bit]. 1fdad05405
      -
      -
      -

      diff --git a/spaces/coomdoomer/doomer-reverse-proxy/README.md b/spaces/coomdoomer/doomer-reverse-proxy/README.md deleted file mode 100644 index 9d9020f29c88c9284a8dfe07495d2606f2c40751..0000000000000000000000000000000000000000 --- a/spaces/coomdoomer/doomer-reverse-proxy/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: Doomer Reverse Proxy -emoji: 🔥 -colorFrom: green -colorTo: indigo -sdk: docker -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmseg/models/backbones/mobilenet_v3.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmseg/models/backbones/mobilenet_v3.py deleted file mode 100644 index e3c22bdd22356a600454f14c2ed12e7ef72c8ca1..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmseg/models/backbones/mobilenet_v3.py +++ /dev/null @@ -1,255 +0,0 @@ -import logging - -import annotator.mmpkg.mmcv as mmcv -import torch.nn as nn -from annotator.mmpkg.mmcv.cnn import ConvModule, constant_init, kaiming_init -from annotator.mmpkg.mmcv.cnn.bricks import Conv2dAdaptivePadding -from annotator.mmpkg.mmcv.runner import load_checkpoint -from torch.nn.modules.batchnorm import _BatchNorm - -from ..builder import BACKBONES -from ..utils import InvertedResidualV3 as InvertedResidual - - -@BACKBONES.register_module() -class MobileNetV3(nn.Module): - """MobileNetV3 backbone. - - This backbone is the improved implementation of `Searching for MobileNetV3 - `_. - - Args: - arch (str): Architecture of mobilnetv3, from {'small', 'large'}. - Default: 'small'. - conv_cfg (dict): Config dict for convolution layer. - Default: None, which means using conv2d. - norm_cfg (dict): Config dict for normalization layer. - Default: dict(type='BN'). - out_indices (tuple[int]): Output from which layer. - Default: (0, 1, 12). - frozen_stages (int): Stages to be frozen (all param fixed). - Default: -1, which means not freezing any parameters. - norm_eval (bool): Whether to set norm layers to eval mode, namely, - freeze running stats (mean and var). Note: Effect on Batch Norm - and its variants only. Default: False. - with_cp (bool): Use checkpoint or not. Using checkpoint will save - some memory while slowing down the training speed. - Default: False. 
- """ - # Parameters to build each block: - # [kernel size, mid channels, out channels, with_se, act type, stride] - arch_settings = { - 'small': [[3, 16, 16, True, 'ReLU', 2], # block0 layer1 os=4 - [3, 72, 24, False, 'ReLU', 2], # block1 layer2 os=8 - [3, 88, 24, False, 'ReLU', 1], - [5, 96, 40, True, 'HSwish', 2], # block2 layer4 os=16 - [5, 240, 40, True, 'HSwish', 1], - [5, 240, 40, True, 'HSwish', 1], - [5, 120, 48, True, 'HSwish', 1], # block3 layer7 os=16 - [5, 144, 48, True, 'HSwish', 1], - [5, 288, 96, True, 'HSwish', 2], # block4 layer9 os=32 - [5, 576, 96, True, 'HSwish', 1], - [5, 576, 96, True, 'HSwish', 1]], - 'large': [[3, 16, 16, False, 'ReLU', 1], # block0 layer1 os=2 - [3, 64, 24, False, 'ReLU', 2], # block1 layer2 os=4 - [3, 72, 24, False, 'ReLU', 1], - [5, 72, 40, True, 'ReLU', 2], # block2 layer4 os=8 - [5, 120, 40, True, 'ReLU', 1], - [5, 120, 40, True, 'ReLU', 1], - [3, 240, 80, False, 'HSwish', 2], # block3 layer7 os=16 - [3, 200, 80, False, 'HSwish', 1], - [3, 184, 80, False, 'HSwish', 1], - [3, 184, 80, False, 'HSwish', 1], - [3, 480, 112, True, 'HSwish', 1], # block4 layer11 os=16 - [3, 672, 112, True, 'HSwish', 1], - [5, 672, 160, True, 'HSwish', 2], # block5 layer13 os=32 - [5, 960, 160, True, 'HSwish', 1], - [5, 960, 160, True, 'HSwish', 1]] - } # yapf: disable - - def __init__(self, - arch='small', - conv_cfg=None, - norm_cfg=dict(type='BN'), - out_indices=(0, 1, 12), - frozen_stages=-1, - reduction_factor=1, - norm_eval=False, - with_cp=False): - super(MobileNetV3, self).__init__() - assert arch in self.arch_settings - assert isinstance(reduction_factor, int) and reduction_factor > 0 - assert mmcv.is_tuple_of(out_indices, int) - for index in out_indices: - if index not in range(0, len(self.arch_settings[arch]) + 2): - raise ValueError( - 'the item in out_indices must in ' - f'range(0, {len(self.arch_settings[arch])+2}). ' - f'But received {index}') - - if frozen_stages not in range(-1, len(self.arch_settings[arch]) + 2): - raise ValueError('frozen_stages must be in range(-1, ' - f'{len(self.arch_settings[arch])+2}). 
' - f'But received {frozen_stages}') - self.arch = arch - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.out_indices = out_indices - self.frozen_stages = frozen_stages - self.reduction_factor = reduction_factor - self.norm_eval = norm_eval - self.with_cp = with_cp - self.layers = self._make_layer() - - def _make_layer(self): - layers = [] - - # build the first layer (layer0) - in_channels = 16 - layer = ConvModule( - in_channels=3, - out_channels=in_channels, - kernel_size=3, - stride=2, - padding=1, - conv_cfg=dict(type='Conv2dAdaptivePadding'), - norm_cfg=self.norm_cfg, - act_cfg=dict(type='HSwish')) - self.add_module('layer0', layer) - layers.append('layer0') - - layer_setting = self.arch_settings[self.arch] - for i, params in enumerate(layer_setting): - (kernel_size, mid_channels, out_channels, with_se, act, - stride) = params - - if self.arch == 'large' and i >= 12 or self.arch == 'small' and \ - i >= 8: - mid_channels = mid_channels // self.reduction_factor - out_channels = out_channels // self.reduction_factor - - if with_se: - se_cfg = dict( - channels=mid_channels, - ratio=4, - act_cfg=(dict(type='ReLU'), - dict(type='HSigmoid', bias=3.0, divisor=6.0))) - else: - se_cfg = None - - layer = InvertedResidual( - in_channels=in_channels, - out_channels=out_channels, - mid_channels=mid_channels, - kernel_size=kernel_size, - stride=stride, - se_cfg=se_cfg, - with_expand_conv=(in_channels != mid_channels), - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=dict(type=act), - with_cp=self.with_cp) - in_channels = out_channels - layer_name = 'layer{}'.format(i + 1) - self.add_module(layer_name, layer) - layers.append(layer_name) - - # build the last layer - # block5 layer12 os=32 for small model - # block6 layer16 os=32 for large model - layer = ConvModule( - in_channels=in_channels, - out_channels=576 if self.arch == 'small' else 960, - kernel_size=1, - stride=1, - dilation=4, - padding=0, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=dict(type='HSwish')) - layer_name = 'layer{}'.format(len(layer_setting) + 1) - self.add_module(layer_name, layer) - layers.append(layer_name) - - # next, convert backbone MobileNetV3 to a semantic segmentation version - if self.arch == 'small': - self.layer4.depthwise_conv.conv.stride = (1, 1) - self.layer9.depthwise_conv.conv.stride = (1, 1) - for i in range(4, len(layers)): - layer = getattr(self, layers[i]) - if isinstance(layer, InvertedResidual): - modified_module = layer.depthwise_conv.conv - else: - modified_module = layer.conv - - if i < 9: - modified_module.dilation = (2, 2) - pad = 2 - else: - modified_module.dilation = (4, 4) - pad = 4 - - if not isinstance(modified_module, Conv2dAdaptivePadding): - # Adjust padding - pad *= (modified_module.kernel_size[0] - 1) // 2 - modified_module.padding = (pad, pad) - else: - self.layer7.depthwise_conv.conv.stride = (1, 1) - self.layer13.depthwise_conv.conv.stride = (1, 1) - for i in range(7, len(layers)): - layer = getattr(self, layers[i]) - if isinstance(layer, InvertedResidual): - modified_module = layer.depthwise_conv.conv - else: - modified_module = layer.conv - - if i < 13: - modified_module.dilation = (2, 2) - pad = 2 - else: - modified_module.dilation = (4, 4) - pad = 4 - - if not isinstance(modified_module, Conv2dAdaptivePadding): - # Adjust padding - pad *= (modified_module.kernel_size[0] - 1) // 2 - modified_module.padding = (pad, pad) - - return layers - - def init_weights(self, pretrained=None): - if isinstance(pretrained, str): - logger = logging.getLogger() 
- load_checkpoint(self, pretrained, strict=False, logger=logger) - elif pretrained is None: - for m in self.modules(): - if isinstance(m, nn.Conv2d): - kaiming_init(m) - elif isinstance(m, nn.BatchNorm2d): - constant_init(m, 1) - else: - raise TypeError('pretrained must be a str or None') - - def forward(self, x): - outs = [] - for i, layer_name in enumerate(self.layers): - layer = getattr(self, layer_name) - x = layer(x) - if i in self.out_indices: - outs.append(x) - return outs - - def _freeze_stages(self): - for i in range(self.frozen_stages + 1): - layer = getattr(self, f'layer{i}') - layer.eval() - for param in layer.parameters(): - param.requires_grad = False - - def train(self, mode=True): - super(MobileNetV3, self).train(mode) - self._freeze_stages() - if mode and self.norm_eval: - for m in self.modules(): - if isinstance(m, _BatchNorm): - m.eval() diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/detectron2/modeling/test_time_augmentation.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/detectron2/modeling/test_time_augmentation.py deleted file mode 100644 index 625f8ba9a01275df64967c097912538337ec91dc..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/detectron2/modeling/test_time_augmentation.py +++ /dev/null @@ -1,307 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import copy -import numpy as np -from contextlib import contextmanager -from itertools import count -from typing import List -import torch -from fvcore.transforms import HFlipTransform, NoOpTransform -from torch import nn -from torch.nn.parallel import DistributedDataParallel - -from annotator.oneformer.detectron2.config import configurable -from annotator.oneformer.detectron2.data.detection_utils import read_image -from annotator.oneformer.detectron2.data.transforms import ( - RandomFlip, - ResizeShortestEdge, - ResizeTransform, - apply_augmentations, -) -from annotator.oneformer.detectron2.structures import Boxes, Instances - -from .meta_arch import GeneralizedRCNN -from .postprocessing import detector_postprocess -from .roi_heads.fast_rcnn import fast_rcnn_inference_single_image - -__all__ = ["DatasetMapperTTA", "GeneralizedRCNNWithTTA"] - - -class DatasetMapperTTA: - """ - Implement test-time augmentation for detection data. - It is a callable which takes a dataset dict from a detection dataset, - and returns a list of dataset dicts where the images - are augmented from the input image by the transformations defined in the config. - This is used for test-time augmentation. - """ - - @configurable - def __init__(self, min_sizes: List[int], max_size: int, flip: bool): - """ - Args: - min_sizes: list of short-edge size to resize the image to - max_size: maximum height or width of resized images - flip: whether to apply flipping augmentation - """ - self.min_sizes = min_sizes - self.max_size = max_size - self.flip = flip - - @classmethod - def from_config(cls, cfg): - return { - "min_sizes": cfg.TEST.AUG.MIN_SIZES, - "max_size": cfg.TEST.AUG.MAX_SIZE, - "flip": cfg.TEST.AUG.FLIP, - } - - def __call__(self, dataset_dict): - """ - Args: - dict: a dict in standard model input format. See tutorials for details. - - Returns: - list[dict]: - a list of dicts, which contain augmented version of the input image. - The total number of dicts is ``len(min_sizes) * (2 if flip else 1)``. 
- Each dict has field "transforms" which is a TransformList, - containing the transforms that are used to generate this image. - """ - numpy_image = dataset_dict["image"].permute(1, 2, 0).numpy() - shape = numpy_image.shape - orig_shape = (dataset_dict["height"], dataset_dict["width"]) - if shape[:2] != orig_shape: - # It transforms the "original" image in the dataset to the input image - pre_tfm = ResizeTransform(orig_shape[0], orig_shape[1], shape[0], shape[1]) - else: - pre_tfm = NoOpTransform() - - # Create all combinations of augmentations to use - aug_candidates = [] # each element is a list[Augmentation] - for min_size in self.min_sizes: - resize = ResizeShortestEdge(min_size, self.max_size) - aug_candidates.append([resize]) # resize only - if self.flip: - flip = RandomFlip(prob=1.0) - aug_candidates.append([resize, flip]) # resize + flip - - # Apply all the augmentations - ret = [] - for aug in aug_candidates: - new_image, tfms = apply_augmentations(aug, np.copy(numpy_image)) - torch_image = torch.from_numpy(np.ascontiguousarray(new_image.transpose(2, 0, 1))) - - dic = copy.deepcopy(dataset_dict) - dic["transforms"] = pre_tfm + tfms - dic["image"] = torch_image - ret.append(dic) - return ret - - -class GeneralizedRCNNWithTTA(nn.Module): - """ - A GeneralizedRCNN with test-time augmentation enabled. - Its :meth:`__call__` method has the same interface as :meth:`GeneralizedRCNN.forward`. - """ - - def __init__(self, cfg, model, tta_mapper=None, batch_size=3): - """ - Args: - cfg (CfgNode): - model (GeneralizedRCNN): a GeneralizedRCNN to apply TTA on. - tta_mapper (callable): takes a dataset dict and returns a list of - augmented versions of the dataset dict. Defaults to - `DatasetMapperTTA(cfg)`. - batch_size (int): batch the augmented images into this batch size for inference. - """ - super().__init__() - if isinstance(model, DistributedDataParallel): - model = model.module - assert isinstance( - model, GeneralizedRCNN - ), "TTA is only supported on GeneralizedRCNN. Got a model of type {}".format(type(model)) - self.cfg = cfg.clone() - assert not self.cfg.MODEL.KEYPOINT_ON, "TTA for keypoint is not supported yet" - assert ( - not self.cfg.MODEL.LOAD_PROPOSALS - ), "TTA for pre-computed proposals is not supported yet" - - self.model = model - - if tta_mapper is None: - tta_mapper = DatasetMapperTTA(cfg) - self.tta_mapper = tta_mapper - self.batch_size = batch_size - - @contextmanager - def _turn_off_roi_heads(self, attrs): - """ - Open a context where some heads in `model.roi_heads` are temporarily turned off. - Args: - attr (list[str]): the attribute in `model.roi_heads` which can be used - to turn off a specific head, e.g., "mask_on", "keypoint_on". - """ - roi_heads = self.model.roi_heads - old = {} - for attr in attrs: - try: - old[attr] = getattr(roi_heads, attr) - except AttributeError: - # The head may not be implemented in certain ROIHeads - pass - - if len(old.keys()) == 0: - yield - else: - for attr in old.keys(): - setattr(roi_heads, attr, False) - yield - for attr in old.keys(): - setattr(roi_heads, attr, old[attr]) - - def _batch_inference(self, batched_inputs, detected_instances=None): - """ - Execute inference on a list of inputs, - using batch size = self.batch_size, instead of the length of the list. 
- - Inputs & outputs have the same format as :meth:`GeneralizedRCNN.inference` - """ - if detected_instances is None: - detected_instances = [None] * len(batched_inputs) - - outputs = [] - inputs, instances = [], [] - for idx, input, instance in zip(count(), batched_inputs, detected_instances): - inputs.append(input) - instances.append(instance) - if len(inputs) == self.batch_size or idx == len(batched_inputs) - 1: - outputs.extend( - self.model.inference( - inputs, - instances if instances[0] is not None else None, - do_postprocess=False, - ) - ) - inputs, instances = [], [] - return outputs - - def __call__(self, batched_inputs): - """ - Same input/output format as :meth:`GeneralizedRCNN.forward` - """ - - def _maybe_read_image(dataset_dict): - ret = copy.copy(dataset_dict) - if "image" not in ret: - image = read_image(ret.pop("file_name"), self.model.input_format) - image = torch.from_numpy(np.ascontiguousarray(image.transpose(2, 0, 1))) # CHW - ret["image"] = image - if "height" not in ret and "width" not in ret: - ret["height"] = image.shape[1] - ret["width"] = image.shape[2] - return ret - - return [self._inference_one_image(_maybe_read_image(x)) for x in batched_inputs] - - def _inference_one_image(self, input): - """ - Args: - input (dict): one dataset dict with "image" field being a CHW tensor - - Returns: - dict: one output dict - """ - orig_shape = (input["height"], input["width"]) - augmented_inputs, tfms = self._get_augmented_inputs(input) - # Detect boxes from all augmented versions - with self._turn_off_roi_heads(["mask_on", "keypoint_on"]): - # temporarily disable roi heads - all_boxes, all_scores, all_classes = self._get_augmented_boxes(augmented_inputs, tfms) - # merge all detected boxes to obtain final predictions for boxes - merged_instances = self._merge_detections(all_boxes, all_scores, all_classes, orig_shape) - - if self.cfg.MODEL.MASK_ON: - # Use the detected boxes to obtain masks - augmented_instances = self._rescale_detected_boxes( - augmented_inputs, merged_instances, tfms - ) - # run forward on the detected boxes - outputs = self._batch_inference(augmented_inputs, augmented_instances) - # Delete now useless variables to avoid being out of memory - del augmented_inputs, augmented_instances - # average the predictions - merged_instances.pred_masks = self._reduce_pred_masks(outputs, tfms) - merged_instances = detector_postprocess(merged_instances, *orig_shape) - return {"instances": merged_instances} - else: - return {"instances": merged_instances} - - def _get_augmented_inputs(self, input): - augmented_inputs = self.tta_mapper(input) - tfms = [x.pop("transforms") for x in augmented_inputs] - return augmented_inputs, tfms - - def _get_augmented_boxes(self, augmented_inputs, tfms): - # 1: forward with all augmented images - outputs = self._batch_inference(augmented_inputs) - # 2: union the results - all_boxes = [] - all_scores = [] - all_classes = [] - for output, tfm in zip(outputs, tfms): - # Need to inverse the transforms on boxes, to obtain results on original image - pred_boxes = output.pred_boxes.tensor - original_pred_boxes = tfm.inverse().apply_box(pred_boxes.cpu().numpy()) - all_boxes.append(torch.from_numpy(original_pred_boxes).to(pred_boxes.device)) - - all_scores.extend(output.scores) - all_classes.extend(output.pred_classes) - all_boxes = torch.cat(all_boxes, dim=0) - return all_boxes, all_scores, all_classes - - def _merge_detections(self, all_boxes, all_scores, all_classes, shape_hw): - # select from the union of all results - num_boxes = 
len(all_boxes) - num_classes = self.cfg.MODEL.ROI_HEADS.NUM_CLASSES - # +1 because fast_rcnn_inference expects background scores as well - all_scores_2d = torch.zeros(num_boxes, num_classes + 1, device=all_boxes.device) - for idx, cls, score in zip(count(), all_classes, all_scores): - all_scores_2d[idx, cls] = score - - merged_instances, _ = fast_rcnn_inference_single_image( - all_boxes, - all_scores_2d, - shape_hw, - 1e-8, - self.cfg.MODEL.ROI_HEADS.NMS_THRESH_TEST, - self.cfg.TEST.DETECTIONS_PER_IMAGE, - ) - - return merged_instances - - def _rescale_detected_boxes(self, augmented_inputs, merged_instances, tfms): - augmented_instances = [] - for input, tfm in zip(augmented_inputs, tfms): - # Transform the target box to the augmented image's coordinate space - pred_boxes = merged_instances.pred_boxes.tensor.cpu().numpy() - pred_boxes = torch.from_numpy(tfm.apply_box(pred_boxes)) - - aug_instances = Instances( - image_size=input["image"].shape[1:3], - pred_boxes=Boxes(pred_boxes), - pred_classes=merged_instances.pred_classes, - scores=merged_instances.scores, - ) - augmented_instances.append(aug_instances) - return augmented_instances - - def _reduce_pred_masks(self, outputs, tfms): - # Should apply inverse transforms on masks. - # We assume only resize & flip are used. pred_masks is a scale-invariant - # representation, so we handle flip specially - for output, tfm in zip(outputs, tfms): - if any(isinstance(t, HFlipTransform) for t in tfm.transforms): - output.pred_masks = output.pred_masks.flip(dims=[3]) - all_pred_masks = torch.stack([o.pred_masks for o in outputs], dim=0) - avg_pred_masks = torch.mean(all_pred_masks, dim=0) - return avg_pred_masks diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/detectron2/structures/boxes.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/detectron2/structures/boxes.py deleted file mode 100644 index fd396f68645db1d6946056eed868ffcc02cd7a22..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/detectron2/structures/boxes.py +++ /dev/null @@ -1,425 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import math -import numpy as np -from enum import IntEnum, unique -from typing import List, Tuple, Union -import torch -from torch import device - -_RawBoxType = Union[List[float], Tuple[float, ...], torch.Tensor, np.ndarray] - - -@unique -class BoxMode(IntEnum): - """ - Enum of different ways to represent a box. - """ - - XYXY_ABS = 0 - """ - (x0, y0, x1, y1) in absolute floating points coordinates. - The coordinates in range [0, width or height]. - """ - XYWH_ABS = 1 - """ - (x0, y0, w, h) in absolute floating points coordinates. - """ - XYXY_REL = 2 - """ - Not yet supported! - (x0, y0, x1, y1) in range [0, 1]. They are relative to the size of the image. - """ - XYWH_REL = 3 - """ - Not yet supported! - (x0, y0, w, h) in range [0, 1]. They are relative to the size of the image. - """ - XYWHA_ABS = 4 - """ - (xc, yc, w, h, a) in absolute floating points coordinates. - (xc, yc) is the center of the rotated box, and the angle a is in degrees ccw. - """ - - @staticmethod - def convert(box: _RawBoxType, from_mode: "BoxMode", to_mode: "BoxMode") -> _RawBoxType: - """ - Args: - box: can be a k-tuple, k-list or an Nxk array/tensor, where k = 4 or 5 - from_mode, to_mode (BoxMode) - - Returns: - The converted box of the same type. 
- """ - if from_mode == to_mode: - return box - - original_type = type(box) - is_numpy = isinstance(box, np.ndarray) - single_box = isinstance(box, (list, tuple)) - if single_box: - assert len(box) == 4 or len(box) == 5, ( - "BoxMode.convert takes either a k-tuple/list or an Nxk array/tensor," - " where k == 4 or 5" - ) - arr = torch.tensor(box)[None, :] - else: - # avoid modifying the input box - if is_numpy: - arr = torch.from_numpy(np.asarray(box)).clone() - else: - arr = box.clone() - - assert to_mode not in [BoxMode.XYXY_REL, BoxMode.XYWH_REL] and from_mode not in [ - BoxMode.XYXY_REL, - BoxMode.XYWH_REL, - ], "Relative mode not yet supported!" - - if from_mode == BoxMode.XYWHA_ABS and to_mode == BoxMode.XYXY_ABS: - assert ( - arr.shape[-1] == 5 - ), "The last dimension of input shape must be 5 for XYWHA format" - original_dtype = arr.dtype - arr = arr.double() - - w = arr[:, 2] - h = arr[:, 3] - a = arr[:, 4] - c = torch.abs(torch.cos(a * math.pi / 180.0)) - s = torch.abs(torch.sin(a * math.pi / 180.0)) - # This basically computes the horizontal bounding rectangle of the rotated box - new_w = c * w + s * h - new_h = c * h + s * w - - # convert center to top-left corner - arr[:, 0] -= new_w / 2.0 - arr[:, 1] -= new_h / 2.0 - # bottom-right corner - arr[:, 2] = arr[:, 0] + new_w - arr[:, 3] = arr[:, 1] + new_h - - arr = arr[:, :4].to(dtype=original_dtype) - elif from_mode == BoxMode.XYWH_ABS and to_mode == BoxMode.XYWHA_ABS: - original_dtype = arr.dtype - arr = arr.double() - arr[:, 0] += arr[:, 2] / 2.0 - arr[:, 1] += arr[:, 3] / 2.0 - angles = torch.zeros((arr.shape[0], 1), dtype=arr.dtype) - arr = torch.cat((arr, angles), axis=1).to(dtype=original_dtype) - else: - if to_mode == BoxMode.XYXY_ABS and from_mode == BoxMode.XYWH_ABS: - arr[:, 2] += arr[:, 0] - arr[:, 3] += arr[:, 1] - elif from_mode == BoxMode.XYXY_ABS and to_mode == BoxMode.XYWH_ABS: - arr[:, 2] -= arr[:, 0] - arr[:, 3] -= arr[:, 1] - else: - raise NotImplementedError( - "Conversion from BoxMode {} to {} is not supported yet".format( - from_mode, to_mode - ) - ) - - if single_box: - return original_type(arr.flatten().tolist()) - if is_numpy: - return arr.numpy() - else: - return arr - - -class Boxes: - """ - This structure stores a list of boxes as a Nx4 torch.Tensor. - It supports some common methods about boxes - (`area`, `clip`, `nonempty`, etc), - and also behaves like a Tensor - (support indexing, `to(device)`, `.device`, and iteration over all boxes) - - Attributes: - tensor (torch.Tensor): float matrix of Nx4. Each row is (x1, y1, x2, y2). - """ - - def __init__(self, tensor: torch.Tensor): - """ - Args: - tensor (Tensor[float]): a Nx4 matrix. Each row is (x1, y1, x2, y2). - """ - if not isinstance(tensor, torch.Tensor): - tensor = torch.as_tensor(tensor, dtype=torch.float32, device=torch.device("cpu")) - else: - tensor = tensor.to(torch.float32) - if tensor.numel() == 0: - # Use reshape, so we don't end up creating a new tensor that does not depend on - # the inputs (and consequently confuses jit) - tensor = tensor.reshape((-1, 4)).to(dtype=torch.float32) - assert tensor.dim() == 2 and tensor.size(-1) == 4, tensor.size() - - self.tensor = tensor - - def clone(self) -> "Boxes": - """ - Clone the Boxes. - - Returns: - Boxes - """ - return Boxes(self.tensor.clone()) - - def to(self, device: torch.device): - # Boxes are assumed float32 and does not support to(dtype) - return Boxes(self.tensor.to(device=device)) - - def area(self) -> torch.Tensor: - """ - Computes the area of all the boxes. 
- - Returns: - torch.Tensor: a vector with areas of each box. - """ - box = self.tensor - area = (box[:, 2] - box[:, 0]) * (box[:, 3] - box[:, 1]) - return area - - def clip(self, box_size: Tuple[int, int]) -> None: - """ - Clip (in place) the boxes by limiting x coordinates to the range [0, width] - and y coordinates to the range [0, height]. - - Args: - box_size (height, width): The clipping box's size. - """ - assert torch.isfinite(self.tensor).all(), "Box tensor contains infinite or NaN!" - h, w = box_size - x1 = self.tensor[:, 0].clamp(min=0, max=w) - y1 = self.tensor[:, 1].clamp(min=0, max=h) - x2 = self.tensor[:, 2].clamp(min=0, max=w) - y2 = self.tensor[:, 3].clamp(min=0, max=h) - self.tensor = torch.stack((x1, y1, x2, y2), dim=-1) - - def nonempty(self, threshold: float = 0.0) -> torch.Tensor: - """ - Find boxes that are non-empty. - A box is considered empty, if either of its side is no larger than threshold. - - Returns: - Tensor: - a binary vector which represents whether each box is empty - (False) or non-empty (True). - """ - box = self.tensor - widths = box[:, 2] - box[:, 0] - heights = box[:, 3] - box[:, 1] - keep = (widths > threshold) & (heights > threshold) - return keep - - def __getitem__(self, item) -> "Boxes": - """ - Args: - item: int, slice, or a BoolTensor - - Returns: - Boxes: Create a new :class:`Boxes` by indexing. - - The following usage are allowed: - - 1. `new_boxes = boxes[3]`: return a `Boxes` which contains only one box. - 2. `new_boxes = boxes[2:10]`: return a slice of boxes. - 3. `new_boxes = boxes[vector]`, where vector is a torch.BoolTensor - with `length = len(boxes)`. Nonzero elements in the vector will be selected. - - Note that the returned Boxes might share storage with this Boxes, - subject to Pytorch's indexing semantics. - """ - if isinstance(item, int): - return Boxes(self.tensor[item].view(1, -1)) - b = self.tensor[item] - assert b.dim() == 2, "Indexing on Boxes with {} failed to return a matrix!".format(item) - return Boxes(b) - - def __len__(self) -> int: - return self.tensor.shape[0] - - def __repr__(self) -> str: - return "Boxes(" + str(self.tensor) + ")" - - def inside_box(self, box_size: Tuple[int, int], boundary_threshold: int = 0) -> torch.Tensor: - """ - Args: - box_size (height, width): Size of the reference box. - boundary_threshold (int): Boxes that extend beyond the reference box - boundary by more than boundary_threshold are considered "outside". - - Returns: - a binary vector, indicating whether each box is inside the reference box. - """ - height, width = box_size - inds_inside = ( - (self.tensor[..., 0] >= -boundary_threshold) - & (self.tensor[..., 1] >= -boundary_threshold) - & (self.tensor[..., 2] < width + boundary_threshold) - & (self.tensor[..., 3] < height + boundary_threshold) - ) - return inds_inside - - def get_centers(self) -> torch.Tensor: - """ - Returns: - The box centers in a Nx2 array of (x, y). 
- """ - return (self.tensor[:, :2] + self.tensor[:, 2:]) / 2 - - def scale(self, scale_x: float, scale_y: float) -> None: - """ - Scale the box with horizontal and vertical scaling factors - """ - self.tensor[:, 0::2] *= scale_x - self.tensor[:, 1::2] *= scale_y - - @classmethod - def cat(cls, boxes_list: List["Boxes"]) -> "Boxes": - """ - Concatenates a list of Boxes into a single Boxes - - Arguments: - boxes_list (list[Boxes]) - - Returns: - Boxes: the concatenated Boxes - """ - assert isinstance(boxes_list, (list, tuple)) - if len(boxes_list) == 0: - return cls(torch.empty(0)) - assert all([isinstance(box, Boxes) for box in boxes_list]) - - # use torch.cat (v.s. layers.cat) so the returned boxes never share storage with input - cat_boxes = cls(torch.cat([b.tensor for b in boxes_list], dim=0)) - return cat_boxes - - @property - def device(self) -> device: - return self.tensor.device - - # type "Iterator[torch.Tensor]", yield, and iter() not supported by torchscript - # https://github.com/pytorch/pytorch/issues/18627 - @torch.jit.unused - def __iter__(self): - """ - Yield a box as a Tensor of shape (4,) at a time. - """ - yield from self.tensor - - -def pairwise_intersection(boxes1: Boxes, boxes2: Boxes) -> torch.Tensor: - """ - Given two lists of boxes of size N and M, - compute the intersection area between __all__ N x M pairs of boxes. - The box order must be (xmin, ymin, xmax, ymax) - - Args: - boxes1,boxes2 (Boxes): two `Boxes`. Contains N & M boxes, respectively. - - Returns: - Tensor: intersection, sized [N,M]. - """ - boxes1, boxes2 = boxes1.tensor, boxes2.tensor - width_height = torch.min(boxes1[:, None, 2:], boxes2[:, 2:]) - torch.max( - boxes1[:, None, :2], boxes2[:, :2] - ) # [N,M,2] - - width_height.clamp_(min=0) # [N,M,2] - intersection = width_height.prod(dim=2) # [N,M] - return intersection - - -# implementation from https://github.com/kuangliu/torchcv/blob/master/torchcv/utils/box.py -# with slight modifications -def pairwise_iou(boxes1: Boxes, boxes2: Boxes) -> torch.Tensor: - """ - Given two lists of boxes of size N and M, compute the IoU - (intersection over union) between **all** N x M pairs of boxes. - The box order must be (xmin, ymin, xmax, ymax). - - Args: - boxes1,boxes2 (Boxes): two `Boxes`. Contains N & M boxes, respectively. - - Returns: - Tensor: IoU, sized [N,M]. - """ - area1 = boxes1.area() # [N] - area2 = boxes2.area() # [M] - inter = pairwise_intersection(boxes1, boxes2) - - # handle empty boxes - iou = torch.where( - inter > 0, - inter / (area1[:, None] + area2 - inter), - torch.zeros(1, dtype=inter.dtype, device=inter.device), - ) - return iou - - -def pairwise_ioa(boxes1: Boxes, boxes2: Boxes) -> torch.Tensor: - """ - Similar to :func:`pariwise_iou` but compute the IoA (intersection over boxes2 area). - - Args: - boxes1,boxes2 (Boxes): two `Boxes`. Contains N & M boxes, respectively. - - Returns: - Tensor: IoA, sized [N,M]. - """ - area2 = boxes2.area() # [M] - inter = pairwise_intersection(boxes1, boxes2) - - # handle empty boxes - ioa = torch.where( - inter > 0, inter / area2, torch.zeros(1, dtype=inter.dtype, device=inter.device) - ) - return ioa - - -def pairwise_point_box_distance(points: torch.Tensor, boxes: Boxes): - """ - Pairwise distance between N points and M boxes. The distance between a - point and a box is represented by the distance from the point to 4 edges - of the box. Distances are all positive when the point is inside the box. - - Args: - points: Nx2 coordinates. 
Each row is (x, y) - boxes: M boxes - - Returns: - Tensor: distances of size (N, M, 4). The 4 values are distances from - the point to the left, top, right, bottom of the box. - """ - x, y = points.unsqueeze(dim=2).unbind(dim=1) # (N, 1) - x0, y0, x1, y1 = boxes.tensor.unsqueeze(dim=0).unbind(dim=2) # (1, M) - return torch.stack([x - x0, y - y0, x1 - x, y1 - y], dim=2) - - -def matched_pairwise_iou(boxes1: Boxes, boxes2: Boxes) -> torch.Tensor: - """ - Compute pairwise intersection over union (IOU) of two sets of matched - boxes that have the same number of boxes. - Similar to :func:`pairwise_iou`, but computes only diagonal elements of the matrix. - - Args: - boxes1 (Boxes): bounding boxes, sized [N,4]. - boxes2 (Boxes): same length as boxes1 - Returns: - Tensor: iou, sized [N]. - """ - assert len(boxes1) == len( - boxes2 - ), "boxlists should have the same" "number of entries, got {}, {}".format( - len(boxes1), len(boxes2) - ) - area1 = boxes1.area() # [N] - area2 = boxes2.area() # [N] - box1, box2 = boxes1.tensor, boxes2.tensor - lt = torch.max(box1[:, :2], box2[:, :2]) # [N,2] - rb = torch.min(box1[:, 2:], box2[:, 2:]) # [N,2] - wh = (rb - lt).clamp(min=0) # [N,2] - inter = wh[:, 0] * wh[:, 1] # [N] - iou = inter / (area1 + area2 - inter) # [N] - return iou diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmcv/cnn/vgg.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmcv/cnn/vgg.py deleted file mode 100644 index 8778b649561a45a9652b1a15a26c2d171e58f3e1..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmcv/cnn/vgg.py +++ /dev/null @@ -1,175 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import logging - -import torch.nn as nn - -from .utils import constant_init, kaiming_init, normal_init - - -def conv3x3(in_planes, out_planes, dilation=1): - """3x3 convolution with padding.""" - return nn.Conv2d( - in_planes, - out_planes, - kernel_size=3, - padding=dilation, - dilation=dilation) - - -def make_vgg_layer(inplanes, - planes, - num_blocks, - dilation=1, - with_bn=False, - ceil_mode=False): - layers = [] - for _ in range(num_blocks): - layers.append(conv3x3(inplanes, planes, dilation)) - if with_bn: - layers.append(nn.BatchNorm2d(planes)) - layers.append(nn.ReLU(inplace=True)) - inplanes = planes - layers.append(nn.MaxPool2d(kernel_size=2, stride=2, ceil_mode=ceil_mode)) - - return layers - - -class VGG(nn.Module): - """VGG backbone. - - Args: - depth (int): Depth of vgg, from {11, 13, 16, 19}. - with_bn (bool): Use BatchNorm or not. - num_classes (int): number of classes for classification. - num_stages (int): VGG stages, normally 5. - dilations (Sequence[int]): Dilation of each stage. - out_indices (Sequence[int]): Output from which stages. - frozen_stages (int): Stages to be frozen (all param fixed). -1 means - not freezing any parameters. - bn_eval (bool): Whether to set BN layers as eval mode, namely, freeze - running stats (mean and var). - bn_frozen (bool): Whether to freeze weight and bias of BN layers. 
- """ - - arch_settings = { - 11: (1, 1, 2, 2, 2), - 13: (2, 2, 2, 2, 2), - 16: (2, 2, 3, 3, 3), - 19: (2, 2, 4, 4, 4) - } - - def __init__(self, - depth, - with_bn=False, - num_classes=-1, - num_stages=5, - dilations=(1, 1, 1, 1, 1), - out_indices=(0, 1, 2, 3, 4), - frozen_stages=-1, - bn_eval=True, - bn_frozen=False, - ceil_mode=False, - with_last_pool=True): - super(VGG, self).__init__() - if depth not in self.arch_settings: - raise KeyError(f'invalid depth {depth} for vgg') - assert num_stages >= 1 and num_stages <= 5 - stage_blocks = self.arch_settings[depth] - self.stage_blocks = stage_blocks[:num_stages] - assert len(dilations) == num_stages - assert max(out_indices) <= num_stages - - self.num_classes = num_classes - self.out_indices = out_indices - self.frozen_stages = frozen_stages - self.bn_eval = bn_eval - self.bn_frozen = bn_frozen - - self.inplanes = 3 - start_idx = 0 - vgg_layers = [] - self.range_sub_modules = [] - for i, num_blocks in enumerate(self.stage_blocks): - num_modules = num_blocks * (2 + with_bn) + 1 - end_idx = start_idx + num_modules - dilation = dilations[i] - planes = 64 * 2**i if i < 4 else 512 - vgg_layer = make_vgg_layer( - self.inplanes, - planes, - num_blocks, - dilation=dilation, - with_bn=with_bn, - ceil_mode=ceil_mode) - vgg_layers.extend(vgg_layer) - self.inplanes = planes - self.range_sub_modules.append([start_idx, end_idx]) - start_idx = end_idx - if not with_last_pool: - vgg_layers.pop(-1) - self.range_sub_modules[-1][1] -= 1 - self.module_name = 'features' - self.add_module(self.module_name, nn.Sequential(*vgg_layers)) - - if self.num_classes > 0: - self.classifier = nn.Sequential( - nn.Linear(512 * 7 * 7, 4096), - nn.ReLU(True), - nn.Dropout(), - nn.Linear(4096, 4096), - nn.ReLU(True), - nn.Dropout(), - nn.Linear(4096, num_classes), - ) - - def init_weights(self, pretrained=None): - if isinstance(pretrained, str): - logger = logging.getLogger() - from ..runner import load_checkpoint - load_checkpoint(self, pretrained, strict=False, logger=logger) - elif pretrained is None: - for m in self.modules(): - if isinstance(m, nn.Conv2d): - kaiming_init(m) - elif isinstance(m, nn.BatchNorm2d): - constant_init(m, 1) - elif isinstance(m, nn.Linear): - normal_init(m, std=0.01) - else: - raise TypeError('pretrained must be a str or None') - - def forward(self, x): - outs = [] - vgg_layers = getattr(self, self.module_name) - for i in range(len(self.stage_blocks)): - for j in range(*self.range_sub_modules[i]): - vgg_layer = vgg_layers[j] - x = vgg_layer(x) - if i in self.out_indices: - outs.append(x) - if self.num_classes > 0: - x = x.view(x.size(0), -1) - x = self.classifier(x) - outs.append(x) - if len(outs) == 1: - return outs[0] - else: - return tuple(outs) - - def train(self, mode=True): - super(VGG, self).train(mode) - if self.bn_eval: - for m in self.modules(): - if isinstance(m, nn.BatchNorm2d): - m.eval() - if self.bn_frozen: - for params in m.parameters(): - params.requires_grad = False - vgg_layers = getattr(self, self.module_name) - if mode and self.frozen_stages >= 0: - for i in range(self.frozen_stages): - for j in range(*self.range_sub_modules[i]): - mod = vgg_layers[j] - mod.eval() - for param in mod.parameters(): - param.requires_grad = False diff --git a/spaces/cvlab/zero123-live/CLIP/clip/clip.py b/spaces/cvlab/zero123-live/CLIP/clip/clip.py deleted file mode 100644 index 257511e1d40c120e0d64a0f1562d44b2b8a40a17..0000000000000000000000000000000000000000 --- a/spaces/cvlab/zero123-live/CLIP/clip/clip.py +++ /dev/null @@ -1,237 +0,0 @@ -import 
hashlib -import os -import urllib -import warnings -from typing import Any, Union, List -from pkg_resources import packaging - -import torch -from PIL import Image -from torchvision.transforms import Compose, Resize, CenterCrop, ToTensor, Normalize -from tqdm import tqdm - -from .model import build_model -from .simple_tokenizer import SimpleTokenizer as _Tokenizer - -try: - from torchvision.transforms import InterpolationMode - BICUBIC = InterpolationMode.BICUBIC -except ImportError: - BICUBIC = Image.BICUBIC - - -if packaging.version.parse(torch.__version__) < packaging.version.parse("1.7.1"): - warnings.warn("PyTorch version 1.7.1 or higher is recommended") - - -__all__ = ["available_models", "load", "tokenize"] -_tokenizer = _Tokenizer() - -_MODELS = { - "RN50": "https://openaipublic.azureedge.net/clip/models/afeb0e10f9e5a86da6080e35cf09123aca3b358a0c3e3b6c78a7b63bc04b6762/RN50.pt", - "RN101": "https://openaipublic.azureedge.net/clip/models/8fa8567bab74a42d41c5915025a8e4538c3bdbe8804a470a72f30b0d94fab599/RN101.pt", - "RN50x4": "https://openaipublic.azureedge.net/clip/models/7e526bd135e493cef0776de27d5f42653e6b4c8bf9e0f653bb11773263205fdd/RN50x4.pt", - "RN50x16": "https://openaipublic.azureedge.net/clip/models/52378b407f34354e150460fe41077663dd5b39c54cd0bfd2b27167a4a06ec9aa/RN50x16.pt", - "RN50x64": "https://openaipublic.azureedge.net/clip/models/be1cfb55d75a9666199fb2206c106743da0f6468c9d327f3e0d0a543a9919d9c/RN50x64.pt", - "ViT-B/32": "https://openaipublic.azureedge.net/clip/models/40d365715913c9da98579312b702a82c18be219cc2a73407c4526f58eba950af/ViT-B-32.pt", - "ViT-B/16": "https://openaipublic.azureedge.net/clip/models/5806e77cd80f8b59890b7e101eabd078d9fb84e6937f9e85e4ecb61988df416f/ViT-B-16.pt", - "ViT-L/14": "https://openaipublic.azureedge.net/clip/models/b8cca3fd41ae0c99ba7e8951adf17d267cdb84cd88be6f7c2e0eca1737a03836/ViT-L-14.pt", - "ViT-L/14@336px": "https://openaipublic.azureedge.net/clip/models/3035c92b350959924f9f00213499208652fc7ea050643e8b385c2dac08641f02/ViT-L-14-336px.pt", -} - - -def _download(url: str, root: str): - os.makedirs(root, exist_ok=True) - filename = os.path.basename(url) - - expected_sha256 = url.split("/")[-2] - download_target = os.path.join(root, filename) - - if os.path.exists(download_target) and not os.path.isfile(download_target): - raise RuntimeError(f"{download_target} exists and is not a regular file") - - if os.path.isfile(download_target): - if hashlib.sha256(open(download_target, "rb").read()).hexdigest() == expected_sha256: - return download_target - else: - warnings.warn(f"{download_target} exists, but the SHA256 checksum does not match; re-downloading the file") - - with urllib.request.urlopen(url) as source, open(download_target, "wb") as output: - with tqdm(total=int(source.info().get("Content-Length")), ncols=80, unit='iB', unit_scale=True, unit_divisor=1024) as loop: - while True: - buffer = source.read(8192) - if not buffer: - break - - output.write(buffer) - loop.update(len(buffer)) - - if hashlib.sha256(open(download_target, "rb").read()).hexdigest() != expected_sha256: - raise RuntimeError("Model has been downloaded but the SHA256 checksum does not not match") - - return download_target - - -def _convert_image_to_rgb(image): - return image.convert("RGB") - - -def _transform(n_px): - return Compose([ - Resize(n_px, interpolation=BICUBIC), - CenterCrop(n_px), - _convert_image_to_rgb, - ToTensor(), - Normalize((0.48145466, 0.4578275, 0.40821073), (0.26862954, 0.26130258, 0.27577711)), - ]) - - -def available_models() -> List[str]: - 
"""Returns the names of available CLIP models""" - return list(_MODELS.keys()) - - -def load(name: str, device: Union[str, torch.device] = "cuda" if torch.cuda.is_available() else "cpu", jit: bool = False, download_root: str = None): - """Load a CLIP model - - Parameters - ---------- - name : str - A model name listed by `clip.available_models()`, or the path to a model checkpoint containing the state_dict - - device : Union[str, torch.device] - The device to put the loaded model - - jit : bool - Whether to load the optimized JIT model or more hackable non-JIT model (default). - - download_root: str - path to download the model files; by default, it uses "~/.cache/clip" - - Returns - ------- - model : torch.nn.Module - The CLIP model - - preprocess : Callable[[PIL.Image], torch.Tensor] - A torchvision transform that converts a PIL image into a tensor that the returned model can take as its input - """ - if name in _MODELS: - model_path = _download(_MODELS[name], download_root or os.path.expanduser("~/.cache/clip")) - elif os.path.isfile(name): - model_path = name - else: - raise RuntimeError(f"Model {name} not found; available models = {available_models()}") - - with open(model_path, 'rb') as opened_file: - try: - # loading JIT archive - model = torch.jit.load(opened_file, map_location=device if jit else "cpu").eval() - state_dict = None - except RuntimeError: - # loading saved state dict - if jit: - warnings.warn(f"File {model_path} is not a JIT archive. Loading as a state dict instead") - jit = False - state_dict = torch.load(opened_file, map_location="cpu") - - if not jit: - model = build_model(state_dict or model.state_dict()).to(device) - if str(device) == "cpu": - model.float() - return model, _transform(model.visual.input_resolution) - - # patch the device names - device_holder = torch.jit.trace(lambda: torch.ones([]).to(torch.device(device)), example_inputs=[]) - device_node = [n for n in device_holder.graph.findAllNodes("prim::Constant") if "Device" in repr(n)][-1] - - def patch_device(module): - try: - graphs = [module.graph] if hasattr(module, "graph") else [] - except RuntimeError: - graphs = [] - - if hasattr(module, "forward1"): - graphs.append(module.forward1.graph) - - for graph in graphs: - for node in graph.findAllNodes("prim::Constant"): - if "value" in node.attributeNames() and str(node["value"]).startswith("cuda"): - node.copyAttributes(device_node) - - model.apply(patch_device) - patch_device(model.encode_image) - patch_device(model.encode_text) - - # patch dtype to float32 on CPU - if str(device) == "cpu": - float_holder = torch.jit.trace(lambda: torch.ones([]).float(), example_inputs=[]) - float_input = list(float_holder.graph.findNode("aten::to").inputs())[1] - float_node = float_input.node() - - def patch_float(module): - try: - graphs = [module.graph] if hasattr(module, "graph") else [] - except RuntimeError: - graphs = [] - - if hasattr(module, "forward1"): - graphs.append(module.forward1.graph) - - for graph in graphs: - for node in graph.findAllNodes("aten::to"): - inputs = list(node.inputs()) - for i in [1, 2]: # dtype can be the second or third argument to aten::to() - if inputs[i].node()["value"] == 5: - inputs[i].node().copyAttributes(float_node) - - model.apply(patch_float) - patch_float(model.encode_image) - patch_float(model.encode_text) - - model.float() - - return model, _transform(model.input_resolution.item()) - - -def tokenize(texts: Union[str, List[str]], context_length: int = 77, truncate: bool = False) -> Union[torch.IntTensor, 
torch.LongTensor]: - """ - Returns the tokenized representation of given input string(s) - - Parameters - ---------- - texts : Union[str, List[str]] - An input string or a list of input strings to tokenize - - context_length : int - The context length to use; all CLIP models use 77 as the context length - - truncate: bool - Whether to truncate the text in case its encoding is longer than the context length - - Returns - ------- - A two-dimensional tensor containing the resulting tokens, shape = [number of input strings, context_length]. - We return LongTensor when torch version is <1.8.0, since older index_select requires indices to be long. - """ - if isinstance(texts, str): - texts = [texts] - - sot_token = _tokenizer.encoder["<|startoftext|>"] - eot_token = _tokenizer.encoder["<|endoftext|>"] - all_tokens = [[sot_token] + _tokenizer.encode(text) + [eot_token] for text in texts] - if packaging.version.parse(torch.__version__) < packaging.version.parse("1.8.0"): - result = torch.zeros(len(all_tokens), context_length, dtype=torch.long) - else: - result = torch.zeros(len(all_tokens), context_length, dtype=torch.int) - - for i, tokens in enumerate(all_tokens): - if len(tokens) > context_length: - if truncate: - tokens = tokens[:context_length] - tokens[-1] = eot_token - else: - raise RuntimeError(f"Input {texts[i]} is too long for context length {context_length}") - result[i, :len(tokens)] = torch.tensor(tokens) - - return result diff --git a/spaces/cvlab/zero123-live/ldm/models/diffusion/classifier.py b/spaces/cvlab/zero123-live/ldm/models/diffusion/classifier.py deleted file mode 100644 index 67e98b9d8ffb96a150b517497ace0a242d7163ef..0000000000000000000000000000000000000000 --- a/spaces/cvlab/zero123-live/ldm/models/diffusion/classifier.py +++ /dev/null @@ -1,267 +0,0 @@ -import os -import torch -import pytorch_lightning as pl -from omegaconf import OmegaConf -from torch.nn import functional as F -from torch.optim import AdamW -from torch.optim.lr_scheduler import LambdaLR -from copy import deepcopy -from einops import rearrange -from glob import glob -from natsort import natsorted - -from ldm.modules.diffusionmodules.openaimodel import EncoderUNetModel, UNetModel -from ldm.util import log_txt_as_img, default, ismap, instantiate_from_config - -__models__ = { - 'class_label': EncoderUNetModel, - 'segmentation': UNetModel -} - - -def disabled_train(self, mode=True): - """Overwrite model.train with this function to make sure train/eval mode - does not change anymore.""" - return self - - -class NoisyLatentImageClassifier(pl.LightningModule): - - def __init__(self, - diffusion_path, - num_classes, - ckpt_path=None, - pool='attention', - label_key=None, - diffusion_ckpt_path=None, - scheduler_config=None, - weight_decay=1.e-2, - log_steps=10, - monitor='val/loss', - *args, - **kwargs): - super().__init__(*args, **kwargs) - self.num_classes = num_classes - # get latest config of diffusion model - diffusion_config = natsorted(glob(os.path.join(diffusion_path, 'configs', '*-project.yaml')))[-1] - self.diffusion_config = OmegaConf.load(diffusion_config).model - self.diffusion_config.params.ckpt_path = diffusion_ckpt_path - self.load_diffusion() - - self.monitor = monitor - self.numd = self.diffusion_model.first_stage_model.encoder.num_resolutions - 1 - self.log_time_interval = self.diffusion_model.num_timesteps // log_steps - self.log_steps = log_steps - - self.label_key = label_key if not hasattr(self.diffusion_model, 'cond_stage_key') \ - else self.diffusion_model.cond_stage_key - - assert 
self.label_key is not None, 'label_key neither in diffusion model nor in model.params' - - if self.label_key not in __models__: - raise NotImplementedError() - - self.load_classifier(ckpt_path, pool) - - self.scheduler_config = scheduler_config - self.use_scheduler = self.scheduler_config is not None - self.weight_decay = weight_decay - - def init_from_ckpt(self, path, ignore_keys=list(), only_model=False): - sd = torch.load(path, map_location="cpu") - if "state_dict" in list(sd.keys()): - sd = sd["state_dict"] - keys = list(sd.keys()) - for k in keys: - for ik in ignore_keys: - if k.startswith(ik): - print("Deleting key {} from state_dict.".format(k)) - del sd[k] - missing, unexpected = self.load_state_dict(sd, strict=False) if not only_model else self.model.load_state_dict( - sd, strict=False) - print(f"Restored from {path} with {len(missing)} missing and {len(unexpected)} unexpected keys") - if len(missing) > 0: - print(f"Missing Keys: {missing}") - if len(unexpected) > 0: - print(f"Unexpected Keys: {unexpected}") - - def load_diffusion(self): - model = instantiate_from_config(self.diffusion_config) - self.diffusion_model = model.eval() - self.diffusion_model.train = disabled_train - for param in self.diffusion_model.parameters(): - param.requires_grad = False - - def load_classifier(self, ckpt_path, pool): - model_config = deepcopy(self.diffusion_config.params.unet_config.params) - model_config.in_channels = self.diffusion_config.params.unet_config.params.out_channels - model_config.out_channels = self.num_classes - if self.label_key == 'class_label': - model_config.pool = pool - - self.model = __models__[self.label_key](**model_config) - if ckpt_path is not None: - print('#####################################################################') - print(f'load from ckpt "{ckpt_path}"') - print('#####################################################################') - self.init_from_ckpt(ckpt_path) - - @torch.no_grad() - def get_x_noisy(self, x, t, noise=None): - noise = default(noise, lambda: torch.randn_like(x)) - continuous_sqrt_alpha_cumprod = None - if self.diffusion_model.use_continuous_noise: - continuous_sqrt_alpha_cumprod = self.diffusion_model.sample_continuous_noise_level(x.shape[0], t + 1) - # todo: make sure t+1 is correct here - - return self.diffusion_model.q_sample(x_start=x, t=t, noise=noise, - continuous_sqrt_alpha_cumprod=continuous_sqrt_alpha_cumprod) - - def forward(self, x_noisy, t, *args, **kwargs): - return self.model(x_noisy, t) - - @torch.no_grad() - def get_input(self, batch, k): - x = batch[k] - if len(x.shape) == 3: - x = x[..., None] - x = rearrange(x, 'b h w c -> b c h w') - x = x.to(memory_format=torch.contiguous_format).float() - return x - - @torch.no_grad() - def get_conditioning(self, batch, k=None): - if k is None: - k = self.label_key - assert k is not None, 'Needs to provide label key' - - targets = batch[k].to(self.device) - - if self.label_key == 'segmentation': - targets = rearrange(targets, 'b h w c -> b c h w') - for down in range(self.numd): - h, w = targets.shape[-2:] - targets = F.interpolate(targets, size=(h // 2, w // 2), mode='nearest') - - # targets = rearrange(targets,'b c h w -> b h w c') - - return targets - - def compute_top_k(self, logits, labels, k, reduction="mean"): - _, top_ks = torch.topk(logits, k, dim=1) - if reduction == "mean": - return (top_ks == labels[:, None]).float().sum(dim=-1).mean().item() - elif reduction == "none": - return (top_ks == labels[:, None]).float().sum(dim=-1) - - def on_train_epoch_start(self): - # save 
some memory - self.diffusion_model.model.to('cpu') - - @torch.no_grad() - def write_logs(self, loss, logits, targets): - log_prefix = 'train' if self.training else 'val' - log = {} - log[f"{log_prefix}/loss"] = loss.mean() - log[f"{log_prefix}/acc@1"] = self.compute_top_k( - logits, targets, k=1, reduction="mean" - ) - log[f"{log_prefix}/acc@5"] = self.compute_top_k( - logits, targets, k=5, reduction="mean" - ) - - self.log_dict(log, prog_bar=False, logger=True, on_step=self.training, on_epoch=True) - self.log('loss', log[f"{log_prefix}/loss"], prog_bar=True, logger=False) - self.log('global_step', self.global_step, logger=False, on_epoch=False, prog_bar=True) - lr = self.optimizers().param_groups[0]['lr'] - self.log('lr_abs', lr, on_step=True, logger=True, on_epoch=False, prog_bar=True) - - def shared_step(self, batch, t=None): - x, *_ = self.diffusion_model.get_input(batch, k=self.diffusion_model.first_stage_key) - targets = self.get_conditioning(batch) - if targets.dim() == 4: - targets = targets.argmax(dim=1) - if t is None: - t = torch.randint(0, self.diffusion_model.num_timesteps, (x.shape[0],), device=self.device).long() - else: - t = torch.full(size=(x.shape[0],), fill_value=t, device=self.device).long() - x_noisy = self.get_x_noisy(x, t) - logits = self(x_noisy, t) - - loss = F.cross_entropy(logits, targets, reduction='none') - - self.write_logs(loss.detach(), logits.detach(), targets.detach()) - - loss = loss.mean() - return loss, logits, x_noisy, targets - - def training_step(self, batch, batch_idx): - loss, *_ = self.shared_step(batch) - return loss - - def reset_noise_accs(self): - self.noisy_acc = {t: {'acc@1': [], 'acc@5': []} for t in - range(0, self.diffusion_model.num_timesteps, self.diffusion_model.log_every_t)} - - def on_validation_start(self): - self.reset_noise_accs() - - @torch.no_grad() - def validation_step(self, batch, batch_idx): - loss, *_ = self.shared_step(batch) - - for t in self.noisy_acc: - _, logits, _, targets = self.shared_step(batch, t) - self.noisy_acc[t]['acc@1'].append(self.compute_top_k(logits, targets, k=1, reduction='mean')) - self.noisy_acc[t]['acc@5'].append(self.compute_top_k(logits, targets, k=5, reduction='mean')) - - return loss - - def configure_optimizers(self): - optimizer = AdamW(self.model.parameters(), lr=self.learning_rate, weight_decay=self.weight_decay) - - if self.use_scheduler: - scheduler = instantiate_from_config(self.scheduler_config) - - print("Setting up LambdaLR scheduler...") - scheduler = [ - { - 'scheduler': LambdaLR(optimizer, lr_lambda=scheduler.schedule), - 'interval': 'step', - 'frequency': 1 - }] - return [optimizer], scheduler - - return optimizer - - @torch.no_grad() - def log_images(self, batch, N=8, *args, **kwargs): - log = dict() - x = self.get_input(batch, self.diffusion_model.first_stage_key) - log['inputs'] = x - - y = self.get_conditioning(batch) - - if self.label_key == 'class_label': - y = log_txt_as_img((x.shape[2], x.shape[3]), batch["human_label"]) - log['labels'] = y - - if ismap(y): - log['labels'] = self.diffusion_model.to_rgb(y) - - for step in range(self.log_steps): - current_time = step * self.log_time_interval - - _, logits, x_noisy, _ = self.shared_step(batch, t=current_time) - - log[f'inputs@t{current_time}'] = x_noisy - - pred = F.one_hot(logits.argmax(dim=1), num_classes=self.num_classes) - pred = rearrange(pred, 'b h w c -> b c h w') - - log[f'pred@t{current_time}'] = self.diffusion_model.to_rgb(pred) - - for key in log: - log[key] = log[key][:N] - - return log diff --git 
a/spaces/daddyjin/TalkingFaceGeneration/Demo_TFR_Pirenderer/src/face3d/models/facerecon_model.py b/spaces/daddyjin/TalkingFaceGeneration/Demo_TFR_Pirenderer/src/face3d/models/facerecon_model.py deleted file mode 100644 index ad3500b58b836226ed2f41303f200a7fe14aa058..0000000000000000000000000000000000000000 --- a/spaces/daddyjin/TalkingFaceGeneration/Demo_TFR_Pirenderer/src/face3d/models/facerecon_model.py +++ /dev/null @@ -1,220 +0,0 @@ -"""This script defines the face reconstruction model for Deep3DFaceRecon_pytorch -""" - -import numpy as np -import torch -from Demo_TFR_Pirenderer.src.face3d.models.base_model import BaseModel -from Demo_TFR_Pirenderer.src.face3d.models import networks -from Demo_TFR_Pirenderer.src.face3d.models.bfm import ParametricFaceModel -from Demo_TFR_Pirenderer.src.face3d.models.losses import perceptual_loss, photo_loss, reg_loss, reflectance_loss, landmark_loss -from Demo_TFR_Pirenderer.src.face3d.util import util -from Demo_TFR_Pirenderer.src.face3d.util.nvdiffrast import MeshRenderer -# from Demo_TFR_Pirenderer.src.face3d.util.preprocess import estimate_norm_torch - -import trimesh -from scipy.io import savemat - -class FaceReconModel(BaseModel): - - @staticmethod - def modify_commandline_options(parser, is_train=False): - """ Configures options specific for CUT model - """ - # net structure and parameters - parser.add_argument('--net_recon', type=str, default='resnet50', choices=['resnet18', 'resnet34', 'resnet50'], help='network structure') - parser.add_argument('--init_path', type=str, default='./checkpoints/init_model/resnet50-0676ba61.pth') - parser.add_argument('--use_last_fc', type=util.str2bool, nargs='?', const=True, default=False, help='zero initialize the last fc') - parser.add_argument('--bfm_folder', type=str, default='./checkpoints/BFM_Fitting/') - parser.add_argument('--bfm_model', type=str, default='BFM_model_front.mat', help='bfm model') - - # renderer parameters - parser.add_argument('--focal', type=float, default=1015.) - parser.add_argument('--center', type=float, default=112.) - parser.add_argument('--camera_d', type=float, default=10.) - parser.add_argument('--z_near', type=float, default=5.) - parser.add_argument('--z_far', type=float, default=15.) 
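- # Note: with the renderer defaults above (focal=1015, center=112), __init__ below builds a perspective camera with fov = 2*atan(center/focal)*180/pi ≈ 12.6 degrees and a rasterize_size of 2*center = 224 pixels; the training-only options that follow configure the recognition network, augmentation ranges, and loss weights.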
- - if is_train: - # training parameters - parser.add_argument('--net_recog', type=str, default='r50', choices=['r18', 'r34', 'r50'], help='face recog network structure') - parser.add_argument('--net_recog_path', type=str, default='checkpoints/recog_model/ms1mv3_arcface_r50_fp16/backbone.pth') - parser.add_argument('--use_crop_face', type=util.str2bool, nargs='?', const=True, default=False, help='use crop mask for photo loss') - parser.add_argument('--use_predef_M', type=util.str2bool, nargs='?', const=True, default=False, help='use predefined M for predicted face') - - - # augmentation parameters - parser.add_argument('--shift_pixs', type=float, default=10., help='shift pixels') - parser.add_argument('--scale_delta', type=float, default=0.1, help='delta scale factor') - parser.add_argument('--rot_angle', type=float, default=10., help='rot angles, degree') - - # loss weights - parser.add_argument('--w_feat', type=float, default=0.2, help='weight for feat loss') - parser.add_argument('--w_color', type=float, default=1.92, help='weight for color loss') - parser.add_argument('--w_reg', type=float, default=3.0e-4, help='weight for reg loss') - parser.add_argument('--w_id', type=float, default=1.0, help='weight for id_reg loss') - parser.add_argument('--w_exp', type=float, default=0.8, help='weight for exp_reg loss') - parser.add_argument('--w_tex', type=float, default=1.7e-2, help='weight for tex_reg loss') - parser.add_argument('--w_gamma', type=float, default=10.0, help='weight for gamma loss') - parser.add_argument('--w_lm', type=float, default=1.6e-3, help='weight for lm loss') - parser.add_argument('--w_reflc', type=float, default=5.0, help='weight for reflc loss') - - opt, _ = parser.parse_known_args() - parser.set_defaults( - focal=1015., center=112., camera_d=10., use_last_fc=False, z_near=5., z_far=15. - ) - if is_train: - parser.set_defaults( - use_crop_face=True, use_predef_M=False - ) - return parser - - def __init__(self, opt): - """Initialize this model class. - - Parameters: - opt -- training/test options - - A few things can be done here. 
- - (required) call the initialization function of BaseModel - - define loss function, visualization images, model names, and optimizers - """ - BaseModel.__init__(self, opt) # call the initialization method of BaseModel - - self.visual_names = ['output_vis'] - self.model_names = ['net_recon'] - self.parallel_names = self.model_names + ['renderer'] - - self.facemodel = ParametricFaceModel( - bfm_folder=opt.bfm_folder, camera_distance=opt.camera_d, focal=opt.focal, center=opt.center, - is_train=self.isTrain, default_name=opt.bfm_model - ) - - fov = 2 * np.arctan(opt.center / opt.focal) * 180 / np.pi - self.renderer = MeshRenderer( - rasterize_fov=fov, znear=opt.z_near, zfar=opt.z_far, rasterize_size=int(2 * opt.center) - ) - - if self.isTrain: - self.loss_names = ['all', 'feat', 'color', 'lm', 'reg', 'gamma', 'reflc'] - - self.net_recog = networks.define_net_recog( - net_recog=opt.net_recog, pretrained_path=opt.net_recog_path - ) - # loss func name: (compute_%s_loss) % loss_name - self.compute_feat_loss = perceptual_loss - self.comupte_color_loss = photo_loss - self.compute_lm_loss = landmark_loss - self.compute_reg_loss = reg_loss - self.compute_reflc_loss = reflectance_loss - - self.optimizer = torch.optim.Adam(self.net_recon.parameters(), lr=opt.lr) - self.optimizers = [self.optimizer] - self.parallel_names += ['net_recog'] - # Our program will automatically call to define schedulers, load networks, and print networks - - def set_input(self, input): - """Unpack input data from the dataloader and perform necessary pre-processing steps. - - Parameters: - input: a dictionary that contains the data itself and its metadata information. - """ - self.input_img = input['imgs'].to(self.device) - self.atten_mask = input['msks'].to(self.device) if 'msks' in input else None - self.gt_lm = input['lms'].to(self.device) if 'lms' in input else None - self.trans_m = input['M'].to(self.device) if 'M' in input else None - self.image_paths = input['im_paths'] if 'im_paths' in input else None - - def forward(self, output_coeff, device): - self.facemodel.to(device) - self.pred_vertex, self.pred_tex, self.pred_color, self.pred_lm = \ - self.facemodel.compute_for_render(output_coeff) - self.pred_mask, _, self.pred_face = self.renderer( - self.pred_vertex, self.facemodel.face_buf, feat=self.pred_color) - - self.pred_coeffs_dict = self.facemodel.split_coeff(output_coeff) - - - def compute_losses(self): - """Calculate losses, gradients, and update network weights; called in every training iteration""" - - assert self.net_recog.training == False - trans_m = self.trans_m - if not self.opt.use_predef_M: - trans_m = estimate_norm_torch(self.pred_lm, self.input_img.shape[-2]) - - pred_feat = self.net_recog(self.pred_face, trans_m) - gt_feat = self.net_recog(self.input_img, self.trans_m) - self.loss_feat = self.opt.w_feat * self.compute_feat_loss(pred_feat, gt_feat) - - face_mask = self.pred_mask - if self.opt.use_crop_face: - face_mask, _, _ = self.renderer(self.pred_vertex, self.facemodel.front_face_buf) - - face_mask = face_mask.detach() - self.loss_color = self.opt.w_color * self.comupte_color_loss( - self.pred_face, self.input_img, self.atten_mask * face_mask) - - loss_reg, loss_gamma = self.compute_reg_loss(self.pred_coeffs_dict, self.opt) - self.loss_reg = self.opt.w_reg * loss_reg - self.loss_gamma = self.opt.w_gamma * loss_gamma - - self.loss_lm = self.opt.w_lm * self.compute_lm_loss(self.pred_lm, self.gt_lm) - - self.loss_reflc = self.opt.w_reflc * self.compute_reflc_loss(self.pred_tex, 
self.facemodel.skin_mask) - - self.loss_all = self.loss_feat + self.loss_color + self.loss_reg + self.loss_gamma \ - + self.loss_lm + self.loss_reflc - - - def optimize_parameters(self, isTrain=True): - self.forward() - self.compute_losses() - """Update network weights; it will be called in every training iteration.""" - if isTrain: - self.optimizer.zero_grad() - self.loss_all.backward() - self.optimizer.step() - - def compute_visuals(self): - with torch.no_grad(): - input_img_numpy = 255. * self.input_img.detach().cpu().permute(0, 2, 3, 1).numpy() - output_vis = self.pred_face * self.pred_mask + (1 - self.pred_mask) * self.input_img - output_vis_numpy_raw = 255. * output_vis.detach().cpu().permute(0, 2, 3, 1).numpy() - - if self.gt_lm is not None: - gt_lm_numpy = self.gt_lm.cpu().numpy() - pred_lm_numpy = self.pred_lm.detach().cpu().numpy() - output_vis_numpy = util.draw_landmarks(output_vis_numpy_raw, gt_lm_numpy, 'b') - output_vis_numpy = util.draw_landmarks(output_vis_numpy, pred_lm_numpy, 'r') - - output_vis_numpy = np.concatenate((input_img_numpy, - output_vis_numpy_raw, output_vis_numpy), axis=-2) - else: - output_vis_numpy = np.concatenate((input_img_numpy, - output_vis_numpy_raw), axis=-2) - - self.output_vis = torch.tensor( - output_vis_numpy / 255., dtype=torch.float32 - ).permute(0, 3, 1, 2).to(self.device) - - def save_mesh(self, name): - - recon_shape = self.pred_vertex # get reconstructed shape - recon_shape[..., -1] = 10 - recon_shape[..., -1] # from camera space to world space - recon_shape = recon_shape.cpu().numpy()[0] - recon_color = self.pred_color - recon_color = recon_color.cpu().numpy()[0] - tri = self.facemodel.face_buf.cpu().numpy() - mesh = trimesh.Trimesh(vertices=recon_shape, faces=tri, vertex_colors=np.clip(255. * recon_color, 0, 255).astype(np.uint8)) - mesh.export(name) - - def save_coeff(self,name): - - pred_coeffs = {key:self.pred_coeffs_dict[key].cpu().numpy() for key in self.pred_coeffs_dict} - pred_lm = self.pred_lm.cpu().numpy() - pred_lm = np.stack([pred_lm[:,:,0],self.input_img.shape[2]-1-pred_lm[:,:,1]],axis=2) # transfer to image coordinate - pred_coeffs['lm68'] = pred_lm - savemat(name,pred_coeffs) - - - diff --git "a/spaces/dakaiye/dky_xuexi/crazy_functions/Latex\345\205\250\346\226\207\346\266\246\350\211\262.py" "b/spaces/dakaiye/dky_xuexi/crazy_functions/Latex\345\205\250\346\226\207\346\266\246\350\211\262.py" deleted file mode 100644 index 8d3f97b5b3e13386c50ff463133b92aa570804c2..0000000000000000000000000000000000000000 --- "a/spaces/dakaiye/dky_xuexi/crazy_functions/Latex\345\205\250\346\226\207\346\266\246\350\211\262.py" +++ /dev/null @@ -1,240 +0,0 @@ -from toolbox import update_ui, trimmed_format_exc -from toolbox import CatchException, report_execption, write_results_to_file, zip_folder - - -class PaperFileGroup(): - def __init__(self): - self.file_paths = [] - self.file_contents = [] - self.sp_file_contents = [] - self.sp_file_index = [] - self.sp_file_tag = [] - - # count_token - from request_llm.bridge_all import model_info - enc = model_info["gpt-3.5-turbo"]['tokenizer'] - def get_token_num(txt): return len(enc.encode(txt, disallowed_special=())) - self.get_token_num = get_token_num - - def run_file_split(self, max_token_limit=1900): - """ - 将长文本分离开来 - """ - for index, file_content in enumerate(self.file_contents): - if self.get_token_num(file_content) < max_token_limit: - self.sp_file_contents.append(file_content) - self.sp_file_index.append(index) - self.sp_file_tag.append(self.file_paths[index]) - else: - from .crazy_utils 
import breakdown_txt_to_satisfy_token_limit_for_pdf - segments = breakdown_txt_to_satisfy_token_limit_for_pdf(file_content, self.get_token_num, max_token_limit) - for j, segment in enumerate(segments): - self.sp_file_contents.append(segment) - self.sp_file_index.append(index) - self.sp_file_tag.append(self.file_paths[index] + f".part-{j}.tex") - - print('Segmentation: done') - def merge_result(self): - self.file_result = ["" for _ in range(len(self.file_paths))] - for r, k in zip(self.sp_file_result, self.sp_file_index): - self.file_result[k] += r - - def write_result(self): - manifest = [] - for path, res in zip(self.file_paths, self.file_result): - with open(path + '.polish.tex', 'w', encoding='utf8') as f: - manifest.append(path + '.polish.tex') - f.write(res) - return manifest - - def zip_result(self): - import os, time - folder = os.path.dirname(self.file_paths[0]) - t = time.strftime("%Y-%m-%d-%H-%M-%S", time.localtime()) - zip_folder(folder, './gpt_log/', f'{t}-polished.zip') - - -def 多文件润色(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, language='en', mode='polish'): - import time, os, re - from .crazy_utils import request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency - - - # <-------- 读取Latex文件,删除其中的所有注释 ----------> - pfg = PaperFileGroup() - - for index, fp in enumerate(file_manifest): - with open(fp, 'r', encoding='utf-8', errors='replace') as f: - file_content = f.read() - # 定义注释的正则表达式 - comment_pattern = r'(?<!\\)%.*' - # 使用正则表达式查找注释,并替换为空字符串 - clean_tex_content = re.sub(comment_pattern, '', file_content) - # 记录删除注释后的文本 - pfg.file_paths.append(fp) - pfg.file_contents.append(clean_tex_content) - - # <-------- 拆分过长的latex文件 ----------> - pfg.run_file_split(max_token_limit=1024) - n_split = len(pfg.sp_file_contents) - - - # <-------- 多线程润色开始 ----------> - if language == 'en': - if mode == 'polish': - inputs_array = ["Below is a section from an academic paper, polish this section to meet the academic standard, " + - "improve the grammar, clarity and overall readability, do not modify any latex command such as \section, \cite and equations:" + - f"\n\n{frag}" for frag in pfg.sp_file_contents] - else: - inputs_array = [r"Below is a section from an academic paper, proofread this section." + - r"Do not modify any latex command such as \section, \cite, \begin, \item and equations. " + - r"Answer me only with the revised text:" + - f"\n\n{frag}" for frag in pfg.sp_file_contents] - inputs_show_user_array = [f"Polish {f}" for f in pfg.sp_file_tag] - sys_prompt_array = ["You are a professional academic paper writer." 
for _ in range(n_split)] - elif language == 'zh': - if mode == 'polish': - inputs_array = [f"以下是一篇学术论文中的一段内容,请将此部分润色以满足学术标准,提高语法、清晰度和整体可读性,不要修改任何LaTeX命令,例如\section,\cite和方程式:" + - f"\n\n{frag}" for frag in pfg.sp_file_contents] - else: - inputs_array = [f"以下是一篇学术论文中的一段内容,请对这部分内容进行语法矫正。不要修改任何LaTeX命令,例如\section,\cite和方程式:" + - f"\n\n{frag}" for frag in pfg.sp_file_contents] - inputs_show_user_array = [f"润色 {f}" for f in pfg.sp_file_tag] - sys_prompt_array=["你是一位专业的中文学术论文作家。" for _ in range(n_split)] - - - gpt_response_collection = yield from request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency( - inputs_array=inputs_array, - inputs_show_user_array=inputs_show_user_array, - llm_kwargs=llm_kwargs, - chatbot=chatbot, - history_array=[[""] for _ in range(n_split)], - sys_prompt_array=sys_prompt_array, - # max_workers=5, # 并行任务数量限制,最多同时执行5个,其他的排队等待 - scroller_max_len = 80 - ) - - # <-------- 文本碎片重组为完整的tex文件,整理结果为压缩包 ----------> - try: - pfg.sp_file_result = [] - for i_say, gpt_say in zip(gpt_response_collection[0::2], gpt_response_collection[1::2]): - pfg.sp_file_result.append(gpt_say) - pfg.merge_result() - pfg.write_result() - pfg.zip_result() - except: - print(trimmed_format_exc()) - - # <-------- 整理结果,退出 ----------> - create_report_file_name = time.strftime("%Y-%m-%d-%H-%M-%S", time.localtime()) + f"-chatgpt.polish.md" - res = write_results_to_file(gpt_response_collection, file_name=create_report_file_name) - history = gpt_response_collection - chatbot.append((f"{fp}完成了吗?", res)) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - - -@CatchException -def Latex英文润色(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port): - # 基本信息:功能、贡献者 - chatbot.append([ - "函数插件功能?", - "对整个Latex项目进行润色。函数插件贡献者: Binary-Husky"]) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - - # 尝试导入依赖,如果缺少依赖,则给出安装建议 - try: - import tiktoken - except: - report_execption(chatbot, history, - a=f"解析项目: {txt}", - b=f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade tiktoken```。") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - history = [] # 清空历史,以免输入溢出 - import glob, os - if os.path.exists(txt): - project_folder = txt - else: - if txt == "": txt = '空空如也的输入栏' - report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.tex', recursive=True)] - if len(file_manifest) == 0: - report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何.tex文件: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - yield from 多文件润色(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, language='en') - - - - - - -@CatchException -def Latex中文润色(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port): - # 基本信息:功能、贡献者 - chatbot.append([ - "函数插件功能?", - "对整个Latex项目进行润色。函数插件贡献者: Binary-Husky"]) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - - # 尝试导入依赖,如果缺少依赖,则给出安装建议 - try: - import tiktoken - except: - report_execption(chatbot, history, - a=f"解析项目: {txt}", - b=f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade tiktoken```。") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - history = [] # 清空历史,以免输入溢出 - import glob, os - if os.path.exists(txt): - project_folder = txt - else: - if txt == "": txt = '空空如也的输入栏' - report_execption(chatbot, history, a = f"解析项目: {txt}", b = 
f"找不到本地项目或无权访问: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.tex', recursive=True)] - if len(file_manifest) == 0: - report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何.tex文件: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - yield from 多文件润色(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, language='zh') - - - - -@CatchException -def Latex英文纠错(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port): - # 基本信息:功能、贡献者 - chatbot.append([ - "函数插件功能?", - "对整个Latex项目进行纠错。函数插件贡献者: Binary-Husky"]) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - - # 尝试导入依赖,如果缺少依赖,则给出安装建议 - try: - import tiktoken - except: - report_execption(chatbot, history, - a=f"解析项目: {txt}", - b=f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade tiktoken```。") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - history = [] # 清空历史,以免输入溢出 - import glob, os - if os.path.exists(txt): - project_folder = txt - else: - if txt == "": txt = '空空如也的输入栏' - report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.tex', recursive=True)] - if len(file_manifest) == 0: - report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何.tex文件: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - yield from 多文件润色(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, language='en', mode='proofread') diff --git a/spaces/dakaiye/dky_xuexi/docs/waifu_plugin/jquery-ui.min.js b/spaces/dakaiye/dky_xuexi/docs/waifu_plugin/jquery-ui.min.js deleted file mode 100644 index 25398a167415050ae8bfb0bfebac6aa3ab790909..0000000000000000000000000000000000000000 --- a/spaces/dakaiye/dky_xuexi/docs/waifu_plugin/jquery-ui.min.js +++ /dev/null @@ -1,13 +0,0 @@ -/*! 
jQuery UI - v1.12.1 - 2016-09-14 -* http://jqueryui.com -* Includes: widget.js, position.js, data.js, disable-selection.js, effect.js, effects/effect-blind.js, effects/effect-bounce.js, effects/effect-clip.js, effects/effect-drop.js, effects/effect-explode.js, effects/effect-fade.js, effects/effect-fold.js, effects/effect-highlight.js, effects/effect-puff.js, effects/effect-pulsate.js, effects/effect-scale.js, effects/effect-shake.js, effects/effect-size.js, effects/effect-slide.js, effects/effect-transfer.js, focusable.js, form-reset-mixin.js, jquery-1-7.js, keycode.js, labels.js, scroll-parent.js, tabbable.js, unique-id.js, widgets/accordion.js, widgets/autocomplete.js, widgets/button.js, widgets/checkboxradio.js, widgets/controlgroup.js, widgets/datepicker.js, widgets/dialog.js, widgets/draggable.js, widgets/droppable.js, widgets/menu.js, widgets/mouse.js, widgets/progressbar.js, widgets/resizable.js, widgets/selectable.js, widgets/selectmenu.js, widgets/slider.js, widgets/sortable.js, widgets/spinner.js, widgets/tabs.js, widgets/tooltip.js -* Copyright jQuery Foundation and other contributors; Licensed MIT */ - -(function(t){"function"==typeof define&&define.amd?define(["jquery"],t):t(jQuery)})(function(t){function e(t){for(var e=t.css("visibility");"inherit"===e;)t=t.parent(),e=t.css("visibility");return"hidden"!==e}function i(t){for(var e,i;t.length&&t[0]!==document;){if(e=t.css("position"),("absolute"===e||"relative"===e||"fixed"===e)&&(i=parseInt(t.css("zIndex"),10),!isNaN(i)&&0!==i))return i;t=t.parent()}return 0}function s(){this._curInst=null,this._keyEvent=!1,this._disabledInputs=[],this._datepickerShowing=!1,this._inDialog=!1,this._mainDivId="ui-datepicker-div",this._inlineClass="ui-datepicker-inline",this._appendClass="ui-datepicker-append",this._triggerClass="ui-datepicker-trigger",this._dialogClass="ui-datepicker-dialog",this._disableClass="ui-datepicker-disabled",this._unselectableClass="ui-datepicker-unselectable",this._currentClass="ui-datepicker-current-day",this._dayOverClass="ui-datepicker-days-cell-over",this.regional=[],this.regional[""]={closeText:"Done",prevText:"Prev",nextText:"Next",currentText:"Today",monthNames:["January","February","March","April","May","June","July","August","September","October","November","December"],monthNamesShort:["Jan","Feb","Mar","Apr","May","Jun","Jul","Aug","Sep","Oct","Nov","Dec"],dayNames:["Sunday","Monday","Tuesday","Wednesday","Thursday","Friday","Saturday"],dayNamesShort:["Sun","Mon","Tue","Wed","Thu","Fri","Sat"],dayNamesMin:["Su","Mo","Tu","We","Th","Fr","Sa"],weekHeader:"Wk",dateFormat:"mm/dd/yy",firstDay:0,isRTL:!1,showMonthAfterYear:!1,yearSuffix:""},this._defaults={showOn:"focus",showAnim:"fadeIn",showOptions:{},defaultDate:null,appendText:"",buttonText:"...",buttonImage:"",buttonImageOnly:!1,hideIfNoPrevNext:!1,navigationAsDateFormat:!1,gotoCurrent:!1,changeMonth:!1,changeYear:!1,yearRange:"c-10:c+10",showOtherMonths:!1,selectOtherMonths:!1,showWeek:!1,calculateWeek:this.iso8601Week,shortYearCutoff:"+10",minDate:null,maxDate:null,duration:"fast",beforeShowDay:null,beforeShow:null,onSelect:null,onChangeMonthYear:null,onClose:null,numberOfMonths:1,showCurrentAtPos:0,stepMonths:1,stepBigMonths:12,altField:"",altFormat:"",constrainInput:!0,showButtonPanel:!1,autoSize:!1,disabled:!1},t.extend(this._defaults,this.regional[""]),this.regional.en=t.extend(!0,{},this.regional[""]),this.regional["en-US"]=t.extend(!0,{},this.regional.en),this.dpDiv=n(t("
      "))}function n(e){var i="button, .ui-datepicker-prev, .ui-datepicker-next, .ui-datepicker-calendar td a";return e.on("mouseout",i,function(){t(this).removeClass("ui-state-hover"),-1!==this.className.indexOf("ui-datepicker-prev")&&t(this).removeClass("ui-datepicker-prev-hover"),-1!==this.className.indexOf("ui-datepicker-next")&&t(this).removeClass("ui-datepicker-next-hover")}).on("mouseover",i,o)}function o(){t.datepicker._isDisabledDatepicker(m.inline?m.dpDiv.parent()[0]:m.input[0])||(t(this).parents(".ui-datepicker-calendar").find("a").removeClass("ui-state-hover"),t(this).addClass("ui-state-hover"),-1!==this.className.indexOf("ui-datepicker-prev")&&t(this).addClass("ui-datepicker-prev-hover"),-1!==this.className.indexOf("ui-datepicker-next")&&t(this).addClass("ui-datepicker-next-hover"))}function a(e,i){t.extend(e,i);for(var s in i)null==i[s]&&(e[s]=i[s]);return e}function r(t){return function(){var e=this.element.val();t.apply(this,arguments),this._refresh(),e!==this.element.val()&&this._trigger("change")}}t.ui=t.ui||{},t.ui.version="1.12.1";var h=0,l=Array.prototype.slice;t.cleanData=function(e){return function(i){var s,n,o;for(o=0;null!=(n=i[o]);o++)try{s=t._data(n,"events"),s&&s.remove&&t(n).triggerHandler("remove")}catch(a){}e(i)}}(t.cleanData),t.widget=function(e,i,s){var n,o,a,r={},h=e.split(".")[0];e=e.split(".")[1];var l=h+"-"+e;return s||(s=i,i=t.Widget),t.isArray(s)&&(s=t.extend.apply(null,[{}].concat(s))),t.expr[":"][l.toLowerCase()]=function(e){return!!t.data(e,l)},t[h]=t[h]||{},n=t[h][e],o=t[h][e]=function(t,e){return this._createWidget?(arguments.length&&this._createWidget(t,e),void 0):new o(t,e)},t.extend(o,n,{version:s.version,_proto:t.extend({},s),_childConstructors:[]}),a=new i,a.options=t.widget.extend({},a.options),t.each(s,function(e,s){return t.isFunction(s)?(r[e]=function(){function t(){return i.prototype[e].apply(this,arguments)}function n(t){return i.prototype[e].apply(this,t)}return function(){var e,i=this._super,o=this._superApply;return this._super=t,this._superApply=n,e=s.apply(this,arguments),this._super=i,this._superApply=o,e}}(),void 0):(r[e]=s,void 0)}),o.prototype=t.widget.extend(a,{widgetEventPrefix:n?a.widgetEventPrefix||e:e},r,{constructor:o,namespace:h,widgetName:e,widgetFullName:l}),n?(t.each(n._childConstructors,function(e,i){var s=i.prototype;t.widget(s.namespace+"."+s.widgetName,o,i._proto)}),delete n._childConstructors):i._childConstructors.push(o),t.widget.bridge(e,o),o},t.widget.extend=function(e){for(var i,s,n=l.call(arguments,1),o=0,a=n.length;a>o;o++)for(i in n[o])s=n[o][i],n[o].hasOwnProperty(i)&&void 0!==s&&(e[i]=t.isPlainObject(s)?t.isPlainObject(e[i])?t.widget.extend({},e[i],s):t.widget.extend({},s):s);return e},t.widget.bridge=function(e,i){var s=i.prototype.widgetFullName||e;t.fn[e]=function(n){var o="string"==typeof n,a=l.call(arguments,1),r=this;return o?this.length||"instance"!==n?this.each(function(){var i,o=t.data(this,s);return"instance"===n?(r=o,!1):o?t.isFunction(o[n])&&"_"!==n.charAt(0)?(i=o[n].apply(o,a),i!==o&&void 0!==i?(r=i&&i.jquery?r.pushStack(i.get()):i,!1):void 0):t.error("no such method '"+n+"' for "+e+" widget instance"):t.error("cannot call methods on "+e+" prior to initialization; "+"attempted to call method '"+n+"'")}):r=void 0:(a.length&&(n=t.widget.extend.apply(null,[n].concat(a))),this.each(function(){var e=t.data(this,s);e?(e.option(n||{}),e._init&&e._init()):t.data(this,s,new 
i(n,this))})),r}},t.Widget=function(){},t.Widget._childConstructors=[],t.Widget.prototype={widgetName:"widget",widgetEventPrefix:"",defaultElement:"
      ",options:{classes:{},disabled:!1,create:null},_createWidget:function(e,i){i=t(i||this.defaultElement||this)[0],this.element=t(i),this.uuid=h++,this.eventNamespace="."+this.widgetName+this.uuid,this.bindings=t(),this.hoverable=t(),this.focusable=t(),this.classesElementLookup={},i!==this&&(t.data(i,this.widgetFullName,this),this._on(!0,this.element,{remove:function(t){t.target===i&&this.destroy()}}),this.document=t(i.style?i.ownerDocument:i.document||i),this.window=t(this.document[0].defaultView||this.document[0].parentWindow)),this.options=t.widget.extend({},this.options,this._getCreateOptions(),e),this._create(),this.options.disabled&&this._setOptionDisabled(this.options.disabled),this._trigger("create",null,this._getCreateEventData()),this._init()},_getCreateOptions:function(){return{}},_getCreateEventData:t.noop,_create:t.noop,_init:t.noop,destroy:function(){var e=this;this._destroy(),t.each(this.classesElementLookup,function(t,i){e._removeClass(i,t)}),this.element.off(this.eventNamespace).removeData(this.widgetFullName),this.widget().off(this.eventNamespace).removeAttr("aria-disabled"),this.bindings.off(this.eventNamespace)},_destroy:t.noop,widget:function(){return this.element},option:function(e,i){var s,n,o,a=e;if(0===arguments.length)return t.widget.extend({},this.options);if("string"==typeof e)if(a={},s=e.split("."),e=s.shift(),s.length){for(n=a[e]=t.widget.extend({},this.options[e]),o=0;s.length-1>o;o++)n[s[o]]=n[s[o]]||{},n=n[s[o]];if(e=s.pop(),1===arguments.length)return void 0===n[e]?null:n[e];n[e]=i}else{if(1===arguments.length)return void 0===this.options[e]?null:this.options[e];a[e]=i}return this._setOptions(a),this},_setOptions:function(t){var e;for(e in t)this._setOption(e,t[e]);return this},_setOption:function(t,e){return"classes"===t&&this._setOptionClasses(e),this.options[t]=e,"disabled"===t&&this._setOptionDisabled(e),this},_setOptionClasses:function(e){var i,s,n;for(i in e)n=this.classesElementLookup[i],e[i]!==this.options.classes[i]&&n&&n.length&&(s=t(n.get()),this._removeClass(n,i),s.addClass(this._classes({element:s,keys:i,classes:e,add:!0})))},_setOptionDisabled:function(t){this._toggleClass(this.widget(),this.widgetFullName+"-disabled",null,!!t),t&&(this._removeClass(this.hoverable,null,"ui-state-hover"),this._removeClass(this.focusable,null,"ui-state-focus"))},enable:function(){return this._setOptions({disabled:!1})},disable:function(){return this._setOptions({disabled:!0})},_classes:function(e){function i(i,o){var a,r;for(r=0;i.length>r;r++)a=n.classesElementLookup[i[r]]||t(),a=e.add?t(t.unique(a.get().concat(e.element.get()))):t(a.not(e.element).get()),n.classesElementLookup[i[r]]=a,s.push(i[r]),o&&e.classes[i[r]]&&s.push(e.classes[i[r]])}var s=[],n=this;return e=t.extend({element:this.element,classes:this.options.classes||{}},e),this._on(e.element,{remove:"_untrackClassesElement"}),e.keys&&i(e.keys.match(/\S+/g)||[],!0),e.extra&&i(e.extra.match(/\S+/g)||[]),s.join(" ")},_untrackClassesElement:function(e){var i=this;t.each(i.classesElementLookup,function(s,n){-1!==t.inArray(e.target,n)&&(i.classesElementLookup[s]=t(n.not(e.target).get()))})},_removeClass:function(t,e,i){return this._toggleClass(t,e,i,!1)},_addClass:function(t,e,i){return this._toggleClass(t,e,i,!0)},_toggleClass:function(t,e,i,s){s="boolean"==typeof s?s:i;var n="string"==typeof t||null===t,o={extra:n?e:i,keys:n?t:e,element:n?this.element:t,add:s};return o.element.toggleClass(this._classes(o),s),this},_on:function(e,i,s){var n,o=this;"boolean"!=typeof 
e&&(s=i,i=e,e=!1),s?(i=n=t(i),this.bindings=this.bindings.add(i)):(s=i,i=this.element,n=this.widget()),t.each(s,function(s,a){function r(){return e||o.options.disabled!==!0&&!t(this).hasClass("ui-state-disabled")?("string"==typeof a?o[a]:a).apply(o,arguments):void 0}"string"!=typeof a&&(r.guid=a.guid=a.guid||r.guid||t.guid++);var h=s.match(/^([\w:-]*)\s*(.*)$/),l=h[1]+o.eventNamespace,c=h[2];c?n.on(l,c,r):i.on(l,r)})},_off:function(e,i){i=(i||"").split(" ").join(this.eventNamespace+" ")+this.eventNamespace,e.off(i).off(i),this.bindings=t(this.bindings.not(e).get()),this.focusable=t(this.focusable.not(e).get()),this.hoverable=t(this.hoverable.not(e).get())},_delay:function(t,e){function i(){return("string"==typeof t?s[t]:t).apply(s,arguments)}var s=this;return setTimeout(i,e||0)},_hoverable:function(e){this.hoverable=this.hoverable.add(e),this._on(e,{mouseenter:function(e){this._addClass(t(e.currentTarget),null,"ui-state-hover")},mouseleave:function(e){this._removeClass(t(e.currentTarget),null,"ui-state-hover")}})},_focusable:function(e){this.focusable=this.focusable.add(e),this._on(e,{focusin:function(e){this._addClass(t(e.currentTarget),null,"ui-state-focus")},focusout:function(e){this._removeClass(t(e.currentTarget),null,"ui-state-focus")}})},_trigger:function(e,i,s){var n,o,a=this.options[e];if(s=s||{},i=t.Event(i),i.type=(e===this.widgetEventPrefix?e:this.widgetEventPrefix+e).toLowerCase(),i.target=this.element[0],o=i.originalEvent)for(n in o)n in i||(i[n]=o[n]);return this.element.trigger(i,s),!(t.isFunction(a)&&a.apply(this.element[0],[i].concat(s))===!1||i.isDefaultPrevented())}},t.each({show:"fadeIn",hide:"fadeOut"},function(e,i){t.Widget.prototype["_"+e]=function(s,n,o){"string"==typeof n&&(n={effect:n});var a,r=n?n===!0||"number"==typeof n?i:n.effect||i:e;n=n||{},"number"==typeof n&&(n={duration:n}),a=!t.isEmptyObject(n),n.complete=o,n.delay&&s.delay(n.delay),a&&t.effects&&t.effects.effect[r]?s[e](n):r!==e&&s[r]?s[r](n.duration,n.easing,o):s.queue(function(i){t(this)[e](),o&&o.call(s[0]),i()})}}),t.widget,function(){function e(t,e,i){return[parseFloat(t[0])*(u.test(t[0])?e/100:1),parseFloat(t[1])*(u.test(t[1])?i/100:1)]}function i(e,i){return parseInt(t.css(e,i),10)||0}function s(e){var i=e[0];return 9===i.nodeType?{width:e.width(),height:e.height(),offset:{top:0,left:0}}:t.isWindow(i)?{width:e.width(),height:e.height(),offset:{top:e.scrollTop(),left:e.scrollLeft()}}:i.preventDefault?{width:0,height:0,offset:{top:i.pageY,left:i.pageX}}:{width:e.outerWidth(),height:e.outerHeight(),offset:e.offset()}}var n,o=Math.max,a=Math.abs,r=/left|center|right/,h=/top|center|bottom/,l=/[\+\-]\d+(\.[\d]+)?%?/,c=/^\w+/,u=/%$/,d=t.fn.position;t.position={scrollbarWidth:function(){if(void 0!==n)return n;var e,i,s=t("
      "),o=s.children()[0];return t("body").append(s),e=o.offsetWidth,s.css("overflow","scroll"),i=o.offsetWidth,e===i&&(i=s[0].clientWidth),s.remove(),n=e-i},getScrollInfo:function(e){var i=e.isWindow||e.isDocument?"":e.element.css("overflow-x"),s=e.isWindow||e.isDocument?"":e.element.css("overflow-y"),n="scroll"===i||"auto"===i&&e.widthi?"left":e>0?"right":"center",vertical:0>r?"top":s>0?"bottom":"middle"};l>p&&p>a(e+i)&&(u.horizontal="center"),c>f&&f>a(s+r)&&(u.vertical="middle"),u.important=o(a(e),a(i))>o(a(s),a(r))?"horizontal":"vertical",n.using.call(this,t,u)}),h.offset(t.extend(D,{using:r}))})},t.ui.position={fit:{left:function(t,e){var i,s=e.within,n=s.isWindow?s.scrollLeft:s.offset.left,a=s.width,r=t.left-e.collisionPosition.marginLeft,h=n-r,l=r+e.collisionWidth-a-n;e.collisionWidth>a?h>0&&0>=l?(i=t.left+h+e.collisionWidth-a-n,t.left+=h-i):t.left=l>0&&0>=h?n:h>l?n+a-e.collisionWidth:n:h>0?t.left+=h:l>0?t.left-=l:t.left=o(t.left-r,t.left)},top:function(t,e){var i,s=e.within,n=s.isWindow?s.scrollTop:s.offset.top,a=e.within.height,r=t.top-e.collisionPosition.marginTop,h=n-r,l=r+e.collisionHeight-a-n;e.collisionHeight>a?h>0&&0>=l?(i=t.top+h+e.collisionHeight-a-n,t.top+=h-i):t.top=l>0&&0>=h?n:h>l?n+a-e.collisionHeight:n:h>0?t.top+=h:l>0?t.top-=l:t.top=o(t.top-r,t.top)}},flip:{left:function(t,e){var i,s,n=e.within,o=n.offset.left+n.scrollLeft,r=n.width,h=n.isWindow?n.scrollLeft:n.offset.left,l=t.left-e.collisionPosition.marginLeft,c=l-h,u=l+e.collisionWidth-r-h,d="left"===e.my[0]?-e.elemWidth:"right"===e.my[0]?e.elemWidth:0,p="left"===e.at[0]?e.targetWidth:"right"===e.at[0]?-e.targetWidth:0,f=-2*e.offset[0];0>c?(i=t.left+d+p+f+e.collisionWidth-r-o,(0>i||a(c)>i)&&(t.left+=d+p+f)):u>0&&(s=t.left-e.collisionPosition.marginLeft+d+p+f-h,(s>0||u>a(s))&&(t.left+=d+p+f))},top:function(t,e){var i,s,n=e.within,o=n.offset.top+n.scrollTop,r=n.height,h=n.isWindow?n.scrollTop:n.offset.top,l=t.top-e.collisionPosition.marginTop,c=l-h,u=l+e.collisionHeight-r-h,d="top"===e.my[1],p=d?-e.elemHeight:"bottom"===e.my[1]?e.elemHeight:0,f="top"===e.at[1]?e.targetHeight:"bottom"===e.at[1]?-e.targetHeight:0,g=-2*e.offset[1];0>c?(s=t.top+p+f+g+e.collisionHeight-r-o,(0>s||a(c)>s)&&(t.top+=p+f+g)):u>0&&(i=t.top-e.collisionPosition.marginTop+p+f+g-h,(i>0||u>a(i))&&(t.top+=p+f+g))}},flipfit:{left:function(){t.ui.position.flip.left.apply(this,arguments),t.ui.position.fit.left.apply(this,arguments)},top:function(){t.ui.position.flip.top.apply(this,arguments),t.ui.position.fit.top.apply(this,arguments)}}}}(),t.ui.position,t.extend(t.expr[":"],{data:t.expr.createPseudo?t.expr.createPseudo(function(e){return function(i){return!!t.data(i,e)}}):function(e,i,s){return!!t.data(e,s[3])}}),t.fn.extend({disableSelection:function(){var t="onselectstart"in document.createElement("div")?"selectstart":"mousedown";return function(){return this.on(t+".ui-disableSelection",function(t){t.preventDefault()})}}(),enableSelection:function(){return this.off(".ui-disableSelection")}});var c="ui-effects-",u="ui-effects-style",d="ui-effects-animated",p=t;t.effects={effect:{}},function(t,e){function i(t,e,i){var s=u[e.type]||{};return null==t?i||!e.def?null:e.def:(t=s.floor?~~t:parseFloat(t),isNaN(t)?e.def:s.mod?(t+s.mod)%s.mod:0>t?0:t>s.max?s.max:t)}function s(i){var s=l(),n=s._rgba=[];return i=i.toLowerCase(),f(h,function(t,o){var a,r=o.re.exec(i),h=r&&o.parse(r),l=o.space||"rgba";return h?(a=s[l](h),s[c[l].cache]=a[c[l].cache],n=s._rgba=a._rgba,!1):e}),n.length?("0,0,0,0"===n.join()&&t.extend(n,o.transparent),s):o[i]}function n(t,e,i){return 
i=(i+1)%1,1>6*i?t+6*(e-t)*i:1>2*i?e:2>3*i?t+6*(e-t)*(2/3-i):t}var o,a="backgroundColor borderBottomColor borderLeftColor borderRightColor borderTopColor color columnRuleColor outlineColor textDecorationColor textEmphasisColor",r=/^([\-+])=\s*(\d+\.?\d*)/,h=[{re:/rgba?\(\s*(\d{1,3})\s*,\s*(\d{1,3})\s*,\s*(\d{1,3})\s*(?:,\s*(\d?(?:\.\d+)?)\s*)?\)/,parse:function(t){return[t[1],t[2],t[3],t[4]]}},{re:/rgba?\(\s*(\d+(?:\.\d+)?)\%\s*,\s*(\d+(?:\.\d+)?)\%\s*,\s*(\d+(?:\.\d+)?)\%\s*(?:,\s*(\d?(?:\.\d+)?)\s*)?\)/,parse:function(t){return[2.55*t[1],2.55*t[2],2.55*t[3],t[4]]}},{re:/#([a-f0-9]{2})([a-f0-9]{2})([a-f0-9]{2})/,parse:function(t){return[parseInt(t[1],16),parseInt(t[2],16),parseInt(t[3],16)]}},{re:/#([a-f0-9])([a-f0-9])([a-f0-9])/,parse:function(t){return[parseInt(t[1]+t[1],16),parseInt(t[2]+t[2],16),parseInt(t[3]+t[3],16)]}},{re:/hsla?\(\s*(\d+(?:\.\d+)?)\s*,\s*(\d+(?:\.\d+)?)\%\s*,\s*(\d+(?:\.\d+)?)\%\s*(?:,\s*(\d?(?:\.\d+)?)\s*)?\)/,space:"hsla",parse:function(t){return[t[1],t[2]/100,t[3]/100,t[4]]}}],l=t.Color=function(e,i,s,n){return new t.Color.fn.parse(e,i,s,n)},c={rgba:{props:{red:{idx:0,type:"byte"},green:{idx:1,type:"byte"},blue:{idx:2,type:"byte"}}},hsla:{props:{hue:{idx:0,type:"degrees"},saturation:{idx:1,type:"percent"},lightness:{idx:2,type:"percent"}}}},u={"byte":{floor:!0,max:255},percent:{max:1},degrees:{mod:360,floor:!0}},d=l.support={},p=t("

      ")[0],f=t.each;p.style.cssText="background-color:rgba(1,1,1,.5)",d.rgba=p.style.backgroundColor.indexOf("rgba")>-1,f(c,function(t,e){e.cache="_"+t,e.props.alpha={idx:3,type:"percent",def:1}}),l.fn=t.extend(l.prototype,{parse:function(n,a,r,h){if(n===e)return this._rgba=[null,null,null,null],this;(n.jquery||n.nodeType)&&(n=t(n).css(a),a=e);var u=this,d=t.type(n),p=this._rgba=[];return a!==e&&(n=[n,a,r,h],d="array"),"string"===d?this.parse(s(n)||o._default):"array"===d?(f(c.rgba.props,function(t,e){p[e.idx]=i(n[e.idx],e)}),this):"object"===d?(n instanceof l?f(c,function(t,e){n[e.cache]&&(u[e.cache]=n[e.cache].slice())}):f(c,function(e,s){var o=s.cache;f(s.props,function(t,e){if(!u[o]&&s.to){if("alpha"===t||null==n[t])return;u[o]=s.to(u._rgba)}u[o][e.idx]=i(n[t],e,!0)}),u[o]&&0>t.inArray(null,u[o].slice(0,3))&&(u[o][3]=1,s.from&&(u._rgba=s.from(u[o])))}),this):e},is:function(t){var i=l(t),s=!0,n=this;return f(c,function(t,o){var a,r=i[o.cache];return r&&(a=n[o.cache]||o.to&&o.to(n._rgba)||[],f(o.props,function(t,i){return null!=r[i.idx]?s=r[i.idx]===a[i.idx]:e})),s}),s},_space:function(){var t=[],e=this;return f(c,function(i,s){e[s.cache]&&t.push(i)}),t.pop()},transition:function(t,e){var s=l(t),n=s._space(),o=c[n],a=0===this.alpha()?l("transparent"):this,r=a[o.cache]||o.to(a._rgba),h=r.slice();return s=s[o.cache],f(o.props,function(t,n){var o=n.idx,a=r[o],l=s[o],c=u[n.type]||{};null!==l&&(null===a?h[o]=l:(c.mod&&(l-a>c.mod/2?a+=c.mod:a-l>c.mod/2&&(a-=c.mod)),h[o]=i((l-a)*e+a,n)))}),this[n](h)},blend:function(e){if(1===this._rgba[3])return this;var i=this._rgba.slice(),s=i.pop(),n=l(e)._rgba;return l(t.map(i,function(t,e){return(1-s)*n[e]+s*t}))},toRgbaString:function(){var e="rgba(",i=t.map(this._rgba,function(t,e){return null==t?e>2?1:0:t});return 1===i[3]&&(i.pop(),e="rgb("),e+i.join()+")"},toHslaString:function(){var e="hsla(",i=t.map(this.hsla(),function(t,e){return null==t&&(t=e>2?1:0),e&&3>e&&(t=Math.round(100*t)+"%"),t});return 1===i[3]&&(i.pop(),e="hsl("),e+i.join()+")"},toHexString:function(e){var i=this._rgba.slice(),s=i.pop();return e&&i.push(~~(255*s)),"#"+t.map(i,function(t){return t=(t||0).toString(16),1===t.length?"0"+t:t}).join("")},toString:function(){return 0===this._rgba[3]?"transparent":this.toRgbaString()}}),l.fn.parse.prototype=l.fn,c.hsla.to=function(t){if(null==t[0]||null==t[1]||null==t[2])return[null,null,null,t[3]];var e,i,s=t[0]/255,n=t[1]/255,o=t[2]/255,a=t[3],r=Math.max(s,n,o),h=Math.min(s,n,o),l=r-h,c=r+h,u=.5*c;return e=h===r?0:s===r?60*(n-o)/l+360:n===r?60*(o-s)/l+120:60*(s-n)/l+240,i=0===l?0:.5>=u?l/c:l/(2-c),[Math.round(e)%360,i,u,null==a?1:a]},c.hsla.from=function(t){if(null==t[0]||null==t[1]||null==t[2])return[null,null,null,t[3]];var e=t[0]/360,i=t[1],s=t[2],o=t[3],a=.5>=s?s*(1+i):s+i-s*i,r=2*s-a;return[Math.round(255*n(r,a,e+1/3)),Math.round(255*n(r,a,e)),Math.round(255*n(r,a,e-1/3)),o]},f(c,function(s,n){var o=n.props,a=n.cache,h=n.to,c=n.from;l.fn[s]=function(s){if(h&&!this[a]&&(this[a]=h(this._rgba)),s===e)return this[a].slice();var n,r=t.type(s),u="array"===r||"object"===r?s:arguments,d=this[a].slice();return f(o,function(t,e){var s=u["object"===r?t:e.idx];null==s&&(s=d[e.idx]),d[e.idx]=i(s,e)}),c?(n=l(c(d)),n[a]=d,n):l(d)},f(o,function(e,i){l.fn[e]||(l.fn[e]=function(n){var 
o,a=t.type(n),h="alpha"===e?this._hsla?"hsla":"rgba":s,l=this[h](),c=l[i.idx];return"undefined"===a?c:("function"===a&&(n=n.call(this,c),a=t.type(n)),null==n&&i.empty?this:("string"===a&&(o=r.exec(n),o&&(n=c+parseFloat(o[2])*("+"===o[1]?1:-1))),l[i.idx]=n,this[h](l)))})})}),l.hook=function(e){var i=e.split(" ");f(i,function(e,i){t.cssHooks[i]={set:function(e,n){var o,a,r="";if("transparent"!==n&&("string"!==t.type(n)||(o=s(n)))){if(n=l(o||n),!d.rgba&&1!==n._rgba[3]){for(a="backgroundColor"===i?e.parentNode:e;(""===r||"transparent"===r)&&a&&a.style;)try{r=t.css(a,"backgroundColor"),a=a.parentNode}catch(h){}n=n.blend(r&&"transparent"!==r?r:"_default")}n=n.toRgbaString()}try{e.style[i]=n}catch(h){}}},t.fx.step[i]=function(e){e.colorInit||(e.start=l(e.elem,i),e.end=l(e.end),e.colorInit=!0),t.cssHooks[i].set(e.elem,e.start.transition(e.end,e.pos))}})},l.hook(a),t.cssHooks.borderColor={expand:function(t){var e={};return f(["Top","Right","Bottom","Left"],function(i,s){e["border"+s+"Color"]=t}),e}},o=t.Color.names={aqua:"#00ffff",black:"#000000",blue:"#0000ff",fuchsia:"#ff00ff",gray:"#808080",green:"#008000",lime:"#00ff00",maroon:"#800000",navy:"#000080",olive:"#808000",purple:"#800080",red:"#ff0000",silver:"#c0c0c0",teal:"#008080",white:"#ffffff",yellow:"#ffff00",transparent:[null,null,null,0],_default:"#ffffff"}}(p),function(){function e(e){var i,s,n=e.ownerDocument.defaultView?e.ownerDocument.defaultView.getComputedStyle(e,null):e.currentStyle,o={};if(n&&n.length&&n[0]&&n[n[0]])for(s=n.length;s--;)i=n[s],"string"==typeof n[i]&&(o[t.camelCase(i)]=n[i]);else for(i in n)"string"==typeof n[i]&&(o[i]=n[i]);return o}function i(e,i){var s,o,a={};for(s in i)o=i[s],e[s]!==o&&(n[s]||(t.fx.step[s]||!isNaN(parseFloat(o)))&&(a[s]=o));return a}var s=["add","remove","toggle"],n={border:1,borderBottom:1,borderColor:1,borderLeft:1,borderRight:1,borderTop:1,borderWidth:1,margin:1,padding:1};t.each(["borderLeftStyle","borderRightStyle","borderBottomStyle","borderTopStyle"],function(e,i){t.fx.step[i]=function(t){("none"!==t.end&&!t.setAttr||1===t.pos&&!t.setAttr)&&(p.style(t.elem,i,t.end),t.setAttr=!0)}}),t.fn.addBack||(t.fn.addBack=function(t){return this.add(null==t?this.prevObject:this.prevObject.filter(t))}),t.effects.animateClass=function(n,o,a,r){var h=t.speed(o,a,r);return this.queue(function(){var o,a=t(this),r=a.attr("class")||"",l=h.children?a.find("*").addBack():a;l=l.map(function(){var i=t(this);return{el:i,start:e(this)}}),o=function(){t.each(s,function(t,e){n[e]&&a[e+"Class"](n[e])})},o(),l=l.map(function(){return this.end=e(this.el[0]),this.diff=i(this.start,this.end),this}),a.attr("class",r),l=l.map(function(){var e=this,i=t.Deferred(),s=t.extend({},h,{queue:!1,complete:function(){i.resolve(e)}});return this.el.animate(this.diff,s),i.promise()}),t.when.apply(t,l.get()).done(function(){o(),t.each(arguments,function(){var e=this.el;t.each(this.diff,function(t){e.css(t,"")})}),h.complete.call(a[0])})})},t.fn.extend({addClass:function(e){return function(i,s,n,o){return s?t.effects.animateClass.call(this,{add:i},s,n,o):e.apply(this,arguments)}}(t.fn.addClass),removeClass:function(e){return function(i,s,n,o){return arguments.length>1?t.effects.animateClass.call(this,{remove:i},s,n,o):e.apply(this,arguments)}}(t.fn.removeClass),toggleClass:function(e){return function(i,s,n,o,a){return"boolean"==typeof s||void 
0===s?n?t.effects.animateClass.call(this,s?{add:i}:{remove:i},n,o,a):e.apply(this,arguments):t.effects.animateClass.call(this,{toggle:i},s,n,o)}}(t.fn.toggleClass),switchClass:function(e,i,s,n,o){return t.effects.animateClass.call(this,{add:i,remove:e},s,n,o)}})}(),function(){function e(e,i,s,n){return t.isPlainObject(e)&&(i=e,e=e.effect),e={effect:e},null==i&&(i={}),t.isFunction(i)&&(n=i,s=null,i={}),("number"==typeof i||t.fx.speeds[i])&&(n=s,s=i,i={}),t.isFunction(s)&&(n=s,s=null),i&&t.extend(e,i),s=s||i.duration,e.duration=t.fx.off?0:"number"==typeof s?s:s in t.fx.speeds?t.fx.speeds[s]:t.fx.speeds._default,e.complete=n||i.complete,e}function i(e){return!e||"number"==typeof e||t.fx.speeds[e]?!0:"string"!=typeof e||t.effects.effect[e]?t.isFunction(e)?!0:"object"!=typeof e||e.effect?!1:!0:!0}function s(t,e){var i=e.outerWidth(),s=e.outerHeight(),n=/^rect\((-?\d*\.?\d*px|-?\d+%|auto),?\s*(-?\d*\.?\d*px|-?\d+%|auto),?\s*(-?\d*\.?\d*px|-?\d+%|auto),?\s*(-?\d*\.?\d*px|-?\d+%|auto)\)$/,o=n.exec(t)||["",0,i,s,0];return{top:parseFloat(o[1])||0,right:"auto"===o[2]?i:parseFloat(o[2]),bottom:"auto"===o[3]?s:parseFloat(o[3]),left:parseFloat(o[4])||0}}t.expr&&t.expr.filters&&t.expr.filters.animated&&(t.expr.filters.animated=function(e){return function(i){return!!t(i).data(d)||e(i)}}(t.expr.filters.animated)),t.uiBackCompat!==!1&&t.extend(t.effects,{save:function(t,e){for(var i=0,s=e.length;s>i;i++)null!==e[i]&&t.data(c+e[i],t[0].style[e[i]])},restore:function(t,e){for(var i,s=0,n=e.length;n>s;s++)null!==e[s]&&(i=t.data(c+e[s]),t.css(e[s],i))},setMode:function(t,e){return"toggle"===e&&(e=t.is(":hidden")?"show":"hide"),e},createWrapper:function(e){if(e.parent().is(".ui-effects-wrapper"))return e.parent();var i={width:e.outerWidth(!0),height:e.outerHeight(!0),"float":e.css("float")},s=t("

      ").addClass("ui-effects-wrapper").css({fontSize:"100%",background:"transparent",border:"none",margin:0,padding:0}),n={width:e.width(),height:e.height()},o=document.activeElement;try{o.id}catch(a){o=document.body}return e.wrap(s),(e[0]===o||t.contains(e[0],o))&&t(o).trigger("focus"),s=e.parent(),"static"===e.css("position")?(s.css({position:"relative"}),e.css({position:"relative"})):(t.extend(i,{position:e.css("position"),zIndex:e.css("z-index")}),t.each(["top","left","bottom","right"],function(t,s){i[s]=e.css(s),isNaN(parseInt(i[s],10))&&(i[s]="auto")}),e.css({position:"relative",top:0,left:0,right:"auto",bottom:"auto"})),e.css(n),s.css(i).show()},removeWrapper:function(e){var i=document.activeElement;return e.parent().is(".ui-effects-wrapper")&&(e.parent().replaceWith(e),(e[0]===i||t.contains(e[0],i))&&t(i).trigger("focus")),e}}),t.extend(t.effects,{version:"1.12.1",define:function(e,i,s){return s||(s=i,i="effect"),t.effects.effect[e]=s,t.effects.effect[e].mode=i,s},scaledDimensions:function(t,e,i){if(0===e)return{height:0,width:0,outerHeight:0,outerWidth:0};var s="horizontal"!==i?(e||100)/100:1,n="vertical"!==i?(e||100)/100:1;return{height:t.height()*n,width:t.width()*s,outerHeight:t.outerHeight()*n,outerWidth:t.outerWidth()*s}},clipToBox:function(t){return{width:t.clip.right-t.clip.left,height:t.clip.bottom-t.clip.top,left:t.clip.left,top:t.clip.top}},unshift:function(t,e,i){var s=t.queue();e>1&&s.splice.apply(s,[1,0].concat(s.splice(e,i))),t.dequeue()},saveStyle:function(t){t.data(u,t[0].style.cssText)},restoreStyle:function(t){t[0].style.cssText=t.data(u)||"",t.removeData(u)},mode:function(t,e){var i=t.is(":hidden");return"toggle"===e&&(e=i?"show":"hide"),(i?"hide"===e:"show"===e)&&(e="none"),e},getBaseline:function(t,e){var i,s;switch(t[0]){case"top":i=0;break;case"middle":i=.5;break;case"bottom":i=1;break;default:i=t[0]/e.height}switch(t[1]){case"left":s=0;break;case"center":s=.5;break;case"right":s=1;break;default:s=t[1]/e.width}return{x:s,y:i}},createPlaceholder:function(e){var i,s=e.css("position"),n=e.position();return e.css({marginTop:e.css("marginTop"),marginBottom:e.css("marginBottom"),marginLeft:e.css("marginLeft"),marginRight:e.css("marginRight")}).outerWidth(e.outerWidth()).outerHeight(e.outerHeight()),/^(static|relative)/.test(s)&&(s="absolute",i=t("<"+e[0].nodeName+">").insertAfter(e).css({display:/^(inline|ruby)/.test(e.css("display"))?"inline-block":"block",visibility:"hidden",marginTop:e.css("marginTop"),marginBottom:e.css("marginBottom"),marginLeft:e.css("marginLeft"),marginRight:e.css("marginRight"),"float":e.css("float")}).outerWidth(e.outerWidth()).outerHeight(e.outerHeight()).addClass("ui-effects-placeholder"),e.data(c+"placeholder",i)),e.css({position:s,left:n.left,top:n.top}),i},removePlaceholder:function(t){var e=c+"placeholder",i=t.data(e);i&&(i.remove(),t.removeData(e))},cleanUp:function(e){t.effects.restoreStyle(e),t.effects.removePlaceholder(e)},setTransition:function(e,i,s,n){return n=n||{},t.each(i,function(t,i){var o=e.cssUnit(i);o[0]>0&&(n[i]=o[0]*s+o[1])}),n}}),t.fn.extend({effect:function(){function i(e){function i(){r.removeData(d),t.effects.cleanUp(r),"hide"===s.mode&&r.hide(),a()}function a(){t.isFunction(h)&&h.call(r[0]),t.isFunction(e)&&e()}var r=t(this);s.mode=c.shift(),t.uiBackCompat===!1||o?"none"===s.mode?(r[l](),a()):n.call(r[0],s,i):(r.is(":hidden")?"hide"===l:"show"===l)?(r[l](),a()):n.call(r[0],s,a)}var s=e.apply(this,arguments),n=t.effects.effect[s.effect],o=n.mode,a=s.queue,r=a||"fx",h=s.complete,l=s.mode,c=[],u=function(e){var 
[minified jQuery UI v1.12.1 source: effects suite (blind, bounce, clip, drop, explode, fade, fold, highlight, size, scale, puff, pulsate, shake, slide, transfer), easing functions, focusable/tabbable and form-reset helpers, keyCode map, and the accordion, menu, autocomplete, controlgroup, checkboxradio, button, and datepicker widgets]
        "+(Y?h:"")+(this._isInRange(t,r)?"":"")+(Y?"":h)+"
        ":"",c=parseInt(this._get(t,"firstDay"),10),c=isNaN(c)?0:c,u=this._get(t,"showWeek"),d=this._get(t,"dayNames"),p=this._get(t,"dayNamesMin"),f=this._get(t,"monthNames"),g=this._get(t,"monthNamesShort"),m=this._get(t,"beforeShowDay"),_=this._get(t,"showOtherMonths"),v=this._get(t,"selectOtherMonths"),b=this._getDefaultDate(t),y="",k=0;U[0]>k;k++){for(x="",this.maxRows=4,C=0;U[1]>C;C++){if(D=this._daylightSavingAdjust(new Date(te,Z,t.selectedDay)),I=" ui-corner-all",T="",X){if(T+="
        "}for(T+="
        "+(/all|left/.test(I)&&0===k?Y?o:s:"")+(/all|right/.test(I)&&0===k?Y?s:o:"")+this._generateMonthYearHeader(t,Z,te,Q,J,k>0||C>0,f,g)+"
        "+"",P=u?"":"",w=0;7>w;w++)M=(w+c)%7,P+="";for(T+=P+"",S=this._getDaysInMonth(te,Z),te===t.selectedYear&&Z===t.selectedMonth&&(t.selectedDay=Math.min(t.selectedDay,S)),H=(this._getFirstDayOfMonth(te,Z)-c+7)%7,z=Math.ceil((H+S)/7),O=X?this.maxRows>z?this.maxRows:z:z,this.maxRows=O,A=this._daylightSavingAdjust(new Date(te,Z,1-H)),N=0;O>N;N++){for(T+="",W=u?"":"",w=0;7>w;w++)E=m?m.apply(t.input?t.input[0]:null,[A]):[!0,""],F=A.getMonth()!==Z,L=F&&!v||!E[0]||Q&&Q>A||J&&A>J,W+="",A.setDate(A.getDate()+1),A=this._daylightSavingAdjust(A);T+=W+""}Z++,Z>11&&(Z=0,te++),T+="
        "+this._get(t,"weekHeader")+"=5?" class='ui-datepicker-week-end'":"")+">"+""+p[M]+"
        "+this._get(t,"calculateWeek")(A)+""+(F&&!_?" ":L?""+A.getDate()+"":""+A.getDate()+"")+"
        "+(X?"
        "+(U[0]>0&&C===U[1]-1?"
        ":""):""),x+=T}y+=x}return y+=l,t._keyEvent=!1,y},_generateMonthYearHeader:function(t,e,i,s,n,o,a,r){var h,l,c,u,d,p,f,g,m=this._get(t,"changeMonth"),_=this._get(t,"changeYear"),v=this._get(t,"showMonthAfterYear"),b="
        ",y="";if(o||!m)y+=""+a[e]+"";else{for(h=s&&s.getFullYear()===i,l=n&&n.getFullYear()===i,y+=""}if(v||(b+=y+(!o&&m&&_?"":" ")),!t.yearshtml)if(t.yearshtml="",o||!_)b+=""+i+"";else{for(u=this._get(t,"yearRange").split(":"),d=(new Date).getFullYear(),p=function(t){var e=t.match(/c[+\-].*/)?i+parseInt(t.substring(1),10):t.match(/[+\-].*/)?d+parseInt(t,10):parseInt(t,10);return isNaN(e)?d:e},f=p(u[0]),g=Math.max(f,p(u[1]||"")),f=s?Math.max(f,s.getFullYear()):f,g=n?Math.min(g,n.getFullYear()):g,t.yearshtml+="",b+=t.yearshtml,t.yearshtml=null}return b+=this._get(t,"yearSuffix"),v&&(b+=(!o&&m&&_?"":" ")+y),b+="
        "},_adjustInstDate:function(t,e,i){var s=t.selectedYear+("Y"===i?e:0),n=t.selectedMonth+("M"===i?e:0),o=Math.min(t.selectedDay,this._getDaysInMonth(s,n))+("D"===i?e:0),a=this._restrictMinMax(t,this._daylightSavingAdjust(new Date(s,n,o)));t.selectedDay=a.getDate(),t.drawMonth=t.selectedMonth=a.getMonth(),t.drawYear=t.selectedYear=a.getFullYear(),("M"===i||"Y"===i)&&this._notifyChange(t)},_restrictMinMax:function(t,e){var i=this._getMinMaxDate(t,"min"),s=this._getMinMaxDate(t,"max"),n=i&&i>e?i:e;return s&&n>s?s:n},_notifyChange:function(t){var e=this._get(t,"onChangeMonthYear");e&&e.apply(t.input?t.input[0]:null,[t.selectedYear,t.selectedMonth+1,t])},_getNumberOfMonths:function(t){var e=this._get(t,"numberOfMonths");return null==e?[1,1]:"number"==typeof e?[1,e]:e},_getMinMaxDate:function(t,e){return this._determineDate(t,this._get(t,e+"Date"),null)},_getDaysInMonth:function(t,e){return 32-this._daylightSavingAdjust(new Date(t,e,32)).getDate()},_getFirstDayOfMonth:function(t,e){return new Date(t,e,1).getDay()},_canAdjustMonth:function(t,e,i,s){var n=this._getNumberOfMonths(t),o=this._daylightSavingAdjust(new Date(i,s+(0>e?e:n[0]*n[1]),1));return 0>e&&o.setDate(this._getDaysInMonth(o.getFullYear(),o.getMonth())),this._isInRange(t,o)},_isInRange:function(t,e){var i,s,n=this._getMinMaxDate(t,"min"),o=this._getMinMaxDate(t,"max"),a=null,r=null,h=this._get(t,"yearRange");return h&&(i=h.split(":"),s=(new Date).getFullYear(),a=parseInt(i[0],10),r=parseInt(i[1],10),i[0].match(/[+\-].*/)&&(a+=s),i[1].match(/[+\-].*/)&&(r+=s)),(!n||e.getTime()>=n.getTime())&&(!o||e.getTime()<=o.getTime())&&(!a||e.getFullYear()>=a)&&(!r||r>=e.getFullYear())},_getFormatConfig:function(t){var e=this._get(t,"shortYearCutoff");return e="string"!=typeof e?e:(new Date).getFullYear()%100+parseInt(e,10),{shortYearCutoff:e,dayNamesShort:this._get(t,"dayNamesShort"),dayNames:this._get(t,"dayNames"),monthNamesShort:this._get(t,"monthNamesShort"),monthNames:this._get(t,"monthNames")}},_formatDate:function(t,e,i,s){e||(t.currentDay=t.selectedDay,t.currentMonth=t.selectedMonth,t.currentYear=t.selectedYear);var n=e?"object"==typeof e?e:this._daylightSavingAdjust(new Date(s,i,e)):this._daylightSavingAdjust(new Date(t.currentYear,t.currentMonth,t.currentDay));return this.formatDate(this._get(t,"dateFormat"),n,this._getFormatConfig(t))}}),t.fn.datepicker=function(e){if(!this.length)return this;t.datepicker.initialized||(t(document).on("mousedown",t.datepicker._checkExternalClick),t.datepicker.initialized=!0),0===t("#"+t.datepicker._mainDivId).length&&t("body").append(t.datepicker.dpDiv);var i=Array.prototype.slice.call(arguments,1);return"string"!=typeof e||"isDisabled"!==e&&"getDate"!==e&&"widget"!==e?"option"===e&&2===arguments.length&&"string"==typeof arguments[1]?t.datepicker["_"+e+"Datepicker"].apply(t.datepicker,[this[0]].concat(i)):this.each(function(){"string"==typeof e?t.datepicker["_"+e+"Datepicker"].apply(t.datepicker,[this].concat(i)):t.datepicker._attachDatepicker(this,e)}):t.datepicker["_"+e+"Datepicker"].apply(t.datepicker,[this[0]].concat(i))},t.datepicker=new s,t.datepicker.initialized=!1,t.datepicker.uuid=(new Date).getTime(),t.datepicker.version="1.12.1",t.datepicker,t.ui.ie=!!/msie [\w.]+/.exec(navigator.userAgent.toLowerCase());var _=!1;t(document).on("mouseup",function(){_=!1}),t.widget("ui.mouse",{version:"1.12.1",options:{cancel:"input, textarea, button, select, option",distance:1,delay:0},_mouseInit:function(){var e=this;this.element.on("mousedown."+this.widgetName,function(t){return 
e._mouseDown(t)}).on("click."+this.widgetName,function(i){return!0===t.data(i.target,e.widgetName+".preventClickEvent")?(t.removeData(i.target,e.widgetName+".preventClickEvent"),i.stopImmediatePropagation(),!1):void 0}),this.started=!1},_mouseDestroy:function(){this.element.off("."+this.widgetName),this._mouseMoveDelegate&&this.document.off("mousemove."+this.widgetName,this._mouseMoveDelegate).off("mouseup."+this.widgetName,this._mouseUpDelegate)},_mouseDown:function(e){if(!_){this._mouseMoved=!1,this._mouseStarted&&this._mouseUp(e),this._mouseDownEvent=e;var i=this,s=1===e.which,n="string"==typeof this.options.cancel&&e.target.nodeName?t(e.target).closest(this.options.cancel).length:!1;return s&&!n&&this._mouseCapture(e)?(this.mouseDelayMet=!this.options.delay,this.mouseDelayMet||(this._mouseDelayTimer=setTimeout(function(){i.mouseDelayMet=!0},this.options.delay)),this._mouseDistanceMet(e)&&this._mouseDelayMet(e)&&(this._mouseStarted=this._mouseStart(e)!==!1,!this._mouseStarted)?(e.preventDefault(),!0):(!0===t.data(e.target,this.widgetName+".preventClickEvent")&&t.removeData(e.target,this.widgetName+".preventClickEvent"),this._mouseMoveDelegate=function(t){return i._mouseMove(t)},this._mouseUpDelegate=function(t){return i._mouseUp(t)},this.document.on("mousemove."+this.widgetName,this._mouseMoveDelegate).on("mouseup."+this.widgetName,this._mouseUpDelegate),e.preventDefault(),_=!0,!0)):!0}},_mouseMove:function(e){if(this._mouseMoved){if(t.ui.ie&&(!document.documentMode||9>document.documentMode)&&!e.button)return this._mouseUp(e);if(!e.which)if(e.originalEvent.altKey||e.originalEvent.ctrlKey||e.originalEvent.metaKey||e.originalEvent.shiftKey)this.ignoreMissingWhich=!0;else if(!this.ignoreMissingWhich)return this._mouseUp(e)}return(e.which||e.button)&&(this._mouseMoved=!0),this._mouseStarted?(this._mouseDrag(e),e.preventDefault()):(this._mouseDistanceMet(e)&&this._mouseDelayMet(e)&&(this._mouseStarted=this._mouseStart(this._mouseDownEvent,e)!==!1,this._mouseStarted?this._mouseDrag(e):this._mouseUp(e)),!this._mouseStarted)},_mouseUp:function(e){this.document.off("mousemove."+this.widgetName,this._mouseMoveDelegate).off("mouseup."+this.widgetName,this._mouseUpDelegate),this._mouseStarted&&(this._mouseStarted=!1,e.target===this._mouseDownEvent.target&&t.data(e.target,this.widgetName+".preventClickEvent",!0),this._mouseStop(e)),this._mouseDelayTimer&&(clearTimeout(this._mouseDelayTimer),delete this._mouseDelayTimer),this.ignoreMissingWhich=!1,_=!1,e.preventDefault()},_mouseDistanceMet:function(t){return Math.max(Math.abs(this._mouseDownEvent.pageX-t.pageX),Math.abs(this._mouseDownEvent.pageY-t.pageY))>=this.options.distance},_mouseDelayMet:function(){return this.mouseDelayMet},_mouseStart:function(){},_mouseDrag:function(){},_mouseStop:function(){},_mouseCapture:function(){return!0}}),t.ui.plugin={add:function(e,i,s){var n,o=t.ui[e].prototype;for(n in s)o.plugins[n]=o.plugins[n]||[],o.plugins[n].push([i,s[n]])},call:function(t,e,i,s){var 
n,o=t.plugins[e];if(o&&(s||t.element[0].parentNode&&11!==t.element[0].parentNode.nodeType))for(n=0;o.length>n;n++)t.options[o[n][0]]&&o[n][1].apply(t.element,i)}},t.ui.safeBlur=function(e){e&&"body"!==e.nodeName.toLowerCase()&&t(e).trigger("blur")},t.widget("ui.draggable",t.ui.mouse,{version:"1.12.1",widgetEventPrefix:"drag",options:{addClasses:!0,appendTo:"parent",axis:!1,connectToSortable:!1,containment:!1,cursor:"auto",cursorAt:!1,grid:!1,handle:!1,helper:"original",iframeFix:!1,opacity:!1,refreshPositions:!1,revert:!1,revertDuration:500,scope:"default",scroll:!0,scrollSensitivity:20,scrollSpeed:20,snap:!1,snapMode:"both",snapTolerance:20,stack:!1,zIndex:!1,drag:null,start:null,stop:null},_create:function(){"original"===this.options.helper&&this._setPositionRelative(),this.options.addClasses&&this._addClass("ui-draggable"),this._setHandleClassName(),this._mouseInit()},_setOption:function(t,e){this._super(t,e),"handle"===t&&(this._removeHandleClassName(),this._setHandleClassName())},_destroy:function(){return(this.helper||this.element).is(".ui-draggable-dragging")?(this.destroyOnClear=!0,void 0):(this._removeHandleClassName(),this._mouseDestroy(),void 0)},_mouseCapture:function(e){var i=this.options;return this.helper||i.disabled||t(e.target).closest(".ui-resizable-handle").length>0?!1:(this.handle=this._getHandle(e),this.handle?(this._blurActiveElement(e),this._blockFrames(i.iframeFix===!0?"iframe":i.iframeFix),!0):!1)},_blockFrames:function(e){this.iframeBlocks=this.document.find(e).map(function(){var e=t(this);return t("
        ").css("position","absolute").appendTo(e.parent()).outerWidth(e.outerWidth()).outerHeight(e.outerHeight()).offset(e.offset())[0]})},_unblockFrames:function(){this.iframeBlocks&&(this.iframeBlocks.remove(),delete this.iframeBlocks)},_blurActiveElement:function(e){var i=t.ui.safeActiveElement(this.document[0]),s=t(e.target);s.closest(i).length||t.ui.safeBlur(i)},_mouseStart:function(e){var i=this.options;return this.helper=this._createHelper(e),this._addClass(this.helper,"ui-draggable-dragging"),this._cacheHelperProportions(),t.ui.ddmanager&&(t.ui.ddmanager.current=this),this._cacheMargins(),this.cssPosition=this.helper.css("position"),this.scrollParent=this.helper.scrollParent(!0),this.offsetParent=this.helper.offsetParent(),this.hasFixedAncestor=this.helper.parents().filter(function(){return"fixed"===t(this).css("position")}).length>0,this.positionAbs=this.element.offset(),this._refreshOffsets(e),this.originalPosition=this.position=this._generatePosition(e,!1),this.originalPageX=e.pageX,this.originalPageY=e.pageY,i.cursorAt&&this._adjustOffsetFromHelper(i.cursorAt),this._setContainment(),this._trigger("start",e)===!1?(this._clear(),!1):(this._cacheHelperProportions(),t.ui.ddmanager&&!i.dropBehaviour&&t.ui.ddmanager.prepareOffsets(this,e),this._mouseDrag(e,!0),t.ui.ddmanager&&t.ui.ddmanager.dragStart(this,e),!0)},_refreshOffsets:function(t){this.offset={top:this.positionAbs.top-this.margins.top,left:this.positionAbs.left-this.margins.left,scroll:!1,parent:this._getParentOffset(),relative:this._getRelativeOffset()},this.offset.click={left:t.pageX-this.offset.left,top:t.pageY-this.offset.top}},_mouseDrag:function(e,i){if(this.hasFixedAncestor&&(this.offset.parent=this._getParentOffset()),this.position=this._generatePosition(e,!0),this.positionAbs=this._convertPositionTo("absolute"),!i){var s=this._uiHash();if(this._trigger("drag",e,s)===!1)return this._mouseUp(new t.Event("mouseup",e)),!1;this.position=s.position}return this.helper[0].style.left=this.position.left+"px",this.helper[0].style.top=this.position.top+"px",t.ui.ddmanager&&t.ui.ddmanager.drag(this,e),!1},_mouseStop:function(e){var i=this,s=!1;return t.ui.ddmanager&&!this.options.dropBehaviour&&(s=t.ui.ddmanager.drop(this,e)),this.dropped&&(s=this.dropped,this.dropped=!1),"invalid"===this.options.revert&&!s||"valid"===this.options.revert&&s||this.options.revert===!0||t.isFunction(this.options.revert)&&this.options.revert.call(this.element,s)?t(this.helper).animate(this.originalPosition,parseInt(this.options.revertDuration,10),function(){i._trigger("stop",e)!==!1&&i._clear()}):this._trigger("stop",e)!==!1&&this._clear(),!1},_mouseUp:function(e){return this._unblockFrames(),t.ui.ddmanager&&t.ui.ddmanager.dragStop(this,e),this.handleElement.is(e.target)&&this.element.trigger("focus"),t.ui.mouse.prototype._mouseUp.call(this,e)},cancel:function(){return this.helper.is(".ui-draggable-dragging")?this._mouseUp(new t.Event("mouseup",{target:this.element[0]})):this._clear(),this},_getHandle:function(e){return this.options.handle?!!t(e.target).closest(this.element.find(this.options.handle)).length:!0},_setHandleClassName:function(){this.handleElement=this.options.handle?this.element.find(this.options.handle):this.element,this._addClass(this.handleElement,"ui-draggable-handle")},_removeHandleClassName:function(){this._removeClass(this.handleElement,"ui-draggable-handle")},_createHelper:function(e){var 
i=this.options,s=t.isFunction(i.helper),n=s?t(i.helper.apply(this.element[0],[e])):"clone"===i.helper?this.element.clone().removeAttr("id"):this.element;return n.parents("body").length||n.appendTo("parent"===i.appendTo?this.element[0].parentNode:i.appendTo),s&&n[0]===this.element[0]&&this._setPositionRelative(),n[0]===this.element[0]||/(fixed|absolute)/.test(n.css("position"))||n.css("position","absolute"),n},_setPositionRelative:function(){/^(?:r|a|f)/.test(this.element.css("position"))||(this.element[0].style.position="relative")},_adjustOffsetFromHelper:function(e){"string"==typeof e&&(e=e.split(" ")),t.isArray(e)&&(e={left:+e[0],top:+e[1]||0}),"left"in e&&(this.offset.click.left=e.left+this.margins.left),"right"in e&&(this.offset.click.left=this.helperProportions.width-e.right+this.margins.left),"top"in e&&(this.offset.click.top=e.top+this.margins.top),"bottom"in e&&(this.offset.click.top=this.helperProportions.height-e.bottom+this.margins.top)},_isRootNode:function(t){return/(html|body)/i.test(t.tagName)||t===this.document[0]},_getParentOffset:function(){var e=this.offsetParent.offset(),i=this.document[0];return"absolute"===this.cssPosition&&this.scrollParent[0]!==i&&t.contains(this.scrollParent[0],this.offsetParent[0])&&(e.left+=this.scrollParent.scrollLeft(),e.top+=this.scrollParent.scrollTop()),this._isRootNode(this.offsetParent[0])&&(e={top:0,left:0}),{top:e.top+(parseInt(this.offsetParent.css("borderTopWidth"),10)||0),left:e.left+(parseInt(this.offsetParent.css("borderLeftWidth"),10)||0)}},_getRelativeOffset:function(){if("relative"!==this.cssPosition)return{top:0,left:0};var t=this.element.position(),e=this._isRootNode(this.scrollParent[0]);return{top:t.top-(parseInt(this.helper.css("top"),10)||0)+(e?0:this.scrollParent.scrollTop()),left:t.left-(parseInt(this.helper.css("left"),10)||0)+(e?0:this.scrollParent.scrollLeft())} -},_cacheMargins:function(){this.margins={left:parseInt(this.element.css("marginLeft"),10)||0,top:parseInt(this.element.css("marginTop"),10)||0,right:parseInt(this.element.css("marginRight"),10)||0,bottom:parseInt(this.element.css("marginBottom"),10)||0}},_cacheHelperProportions:function(){this.helperProportions={width:this.helper.outerWidth(),height:this.helper.outerHeight()}},_setContainment:function(){var e,i,s,n=this.options,o=this.document[0];return this.relativeContainer=null,n.containment?"window"===n.containment?(this.containment=[t(window).scrollLeft()-this.offset.relative.left-this.offset.parent.left,t(window).scrollTop()-this.offset.relative.top-this.offset.parent.top,t(window).scrollLeft()+t(window).width()-this.helperProportions.width-this.margins.left,t(window).scrollTop()+(t(window).height()||o.body.parentNode.scrollHeight)-this.helperProportions.height-this.margins.top],void 0):"document"===n.containment?(this.containment=[0,0,t(o).width()-this.helperProportions.width-this.margins.left,(t(o).height()||o.body.parentNode.scrollHeight)-this.helperProportions.height-this.margins.top],void 0):n.containment.constructor===Array?(this.containment=n.containment,void 
0):("parent"===n.containment&&(n.containment=this.helper[0].parentNode),i=t(n.containment),s=i[0],s&&(e=/(scroll|auto)/.test(i.css("overflow")),this.containment=[(parseInt(i.css("borderLeftWidth"),10)||0)+(parseInt(i.css("paddingLeft"),10)||0),(parseInt(i.css("borderTopWidth"),10)||0)+(parseInt(i.css("paddingTop"),10)||0),(e?Math.max(s.scrollWidth,s.offsetWidth):s.offsetWidth)-(parseInt(i.css("borderRightWidth"),10)||0)-(parseInt(i.css("paddingRight"),10)||0)-this.helperProportions.width-this.margins.left-this.margins.right,(e?Math.max(s.scrollHeight,s.offsetHeight):s.offsetHeight)-(parseInt(i.css("borderBottomWidth"),10)||0)-(parseInt(i.css("paddingBottom"),10)||0)-this.helperProportions.height-this.margins.top-this.margins.bottom],this.relativeContainer=i),void 0):(this.containment=null,void 0)},_convertPositionTo:function(t,e){e||(e=this.position);var i="absolute"===t?1:-1,s=this._isRootNode(this.scrollParent[0]);return{top:e.top+this.offset.relative.top*i+this.offset.parent.top*i-("fixed"===this.cssPosition?-this.offset.scroll.top:s?0:this.offset.scroll.top)*i,left:e.left+this.offset.relative.left*i+this.offset.parent.left*i-("fixed"===this.cssPosition?-this.offset.scroll.left:s?0:this.offset.scroll.left)*i}},_generatePosition:function(t,e){var i,s,n,o,a=this.options,r=this._isRootNode(this.scrollParent[0]),h=t.pageX,l=t.pageY;return r&&this.offset.scroll||(this.offset.scroll={top:this.scrollParent.scrollTop(),left:this.scrollParent.scrollLeft()}),e&&(this.containment&&(this.relativeContainer?(s=this.relativeContainer.offset(),i=[this.containment[0]+s.left,this.containment[1]+s.top,this.containment[2]+s.left,this.containment[3]+s.top]):i=this.containment,t.pageX-this.offset.click.lefti[2]&&(h=i[2]+this.offset.click.left),t.pageY-this.offset.click.top>i[3]&&(l=i[3]+this.offset.click.top)),a.grid&&(n=a.grid[1]?this.originalPageY+Math.round((l-this.originalPageY)/a.grid[1])*a.grid[1]:this.originalPageY,l=i?n-this.offset.click.top>=i[1]||n-this.offset.click.top>i[3]?n:n-this.offset.click.top>=i[1]?n-a.grid[1]:n+a.grid[1]:n,o=a.grid[0]?this.originalPageX+Math.round((h-this.originalPageX)/a.grid[0])*a.grid[0]:this.originalPageX,h=i?o-this.offset.click.left>=i[0]||o-this.offset.click.left>i[2]?o:o-this.offset.click.left>=i[0]?o-a.grid[0]:o+a.grid[0]:o),"y"===a.axis&&(h=this.originalPageX),"x"===a.axis&&(l=this.originalPageY)),{top:l-this.offset.click.top-this.offset.relative.top-this.offset.parent.top+("fixed"===this.cssPosition?-this.offset.scroll.top:r?0:this.offset.scroll.top),left:h-this.offset.click.left-this.offset.relative.left-this.offset.parent.left+("fixed"===this.cssPosition?-this.offset.scroll.left:r?0:this.offset.scroll.left)}},_clear:function(){this._removeClass(this.helper,"ui-draggable-dragging"),this.helper[0]===this.element[0]||this.cancelHelperRemoval||this.helper.remove(),this.helper=null,this.cancelHelperRemoval=!1,this.destroyOnClear&&this.destroy()},_trigger:function(e,i,s){return s=s||this._uiHash(),t.ui.plugin.call(this,e,[i,s,this],!0),/^(drag|start|stop)/.test(e)&&(this.positionAbs=this._convertPositionTo("absolute"),s.offset=this.positionAbs),t.Widget.prototype._trigger.call(this,e,i,s)},plugins:{},_uiHash:function(){return{helper:this.helper,position:this.position,originalPosition:this.originalPosition,offset:this.positionAbs}}}),t.ui.plugin.add("draggable","connectToSortable",{start:function(e,i,s){var n=t.extend({},i,{item:s.element});s.sortables=[],t(s.options.connectToSortable).each(function(){var 
i=t(this).sortable("instance");i&&!i.options.disabled&&(s.sortables.push(i),i.refreshPositions(),i._trigger("activate",e,n))})},stop:function(e,i,s){var n=t.extend({},i,{item:s.element});s.cancelHelperRemoval=!1,t.each(s.sortables,function(){var t=this;t.isOver?(t.isOver=0,s.cancelHelperRemoval=!0,t.cancelHelperRemoval=!1,t._storedCSS={position:t.placeholder.css("position"),top:t.placeholder.css("top"),left:t.placeholder.css("left")},t._mouseStop(e),t.options.helper=t.options._helper):(t.cancelHelperRemoval=!0,t._trigger("deactivate",e,n))})},drag:function(e,i,s){t.each(s.sortables,function(){var n=!1,o=this;o.positionAbs=s.positionAbs,o.helperProportions=s.helperProportions,o.offset.click=s.offset.click,o._intersectsWith(o.containerCache)&&(n=!0,t.each(s.sortables,function(){return this.positionAbs=s.positionAbs,this.helperProportions=s.helperProportions,this.offset.click=s.offset.click,this!==o&&this._intersectsWith(this.containerCache)&&t.contains(o.element[0],this.element[0])&&(n=!1),n})),n?(o.isOver||(o.isOver=1,s._parent=i.helper.parent(),o.currentItem=i.helper.appendTo(o.element).data("ui-sortable-item",!0),o.options._helper=o.options.helper,o.options.helper=function(){return i.helper[0]},e.target=o.currentItem[0],o._mouseCapture(e,!0),o._mouseStart(e,!0,!0),o.offset.click.top=s.offset.click.top,o.offset.click.left=s.offset.click.left,o.offset.parent.left-=s.offset.parent.left-o.offset.parent.left,o.offset.parent.top-=s.offset.parent.top-o.offset.parent.top,s._trigger("toSortable",e),s.dropped=o.element,t.each(s.sortables,function(){this.refreshPositions()}),s.currentItem=s.element,o.fromOutside=s),o.currentItem&&(o._mouseDrag(e),i.position=o.position)):o.isOver&&(o.isOver=0,o.cancelHelperRemoval=!0,o.options._revert=o.options.revert,o.options.revert=!1,o._trigger("out",e,o._uiHash(o)),o._mouseStop(e,!0),o.options.revert=o.options._revert,o.options.helper=o.options._helper,o.placeholder&&o.placeholder.remove(),i.helper.appendTo(s._parent),s._refreshOffsets(e),i.position=s._generatePosition(e,!0),s._trigger("fromSortable",e),s.dropped=!1,t.each(s.sortables,function(){this.refreshPositions()}))})}}),t.ui.plugin.add("draggable","cursor",{start:function(e,i,s){var n=t("body"),o=s.options;n.css("cursor")&&(o._cursor=n.css("cursor")),n.css("cursor",o.cursor)},stop:function(e,i,s){var n=s.options;n._cursor&&t("body").css("cursor",n._cursor)}}),t.ui.plugin.add("draggable","opacity",{start:function(e,i,s){var n=t(i.helper),o=s.options;n.css("opacity")&&(o._opacity=n.css("opacity")),n.css("opacity",o.opacity)},stop:function(e,i,s){var n=s.options;n._opacity&&t(i.helper).css("opacity",n._opacity)}}),t.ui.plugin.add("draggable","scroll",{start:function(t,e,i){i.scrollParentNotHidden||(i.scrollParentNotHidden=i.helper.scrollParent(!1)),i.scrollParentNotHidden[0]!==i.document[0]&&"HTML"!==i.scrollParentNotHidden[0].tagName&&(i.overflowOffset=i.scrollParentNotHidden.offset())},drag:function(e,i,s){var 
n=s.options,o=!1,a=s.scrollParentNotHidden[0],r=s.document[0];a!==r&&"HTML"!==a.tagName?(n.axis&&"x"===n.axis||(s.overflowOffset.top+a.offsetHeight-e.pageY=0;d--)h=s.snapElements[d].left-s.margins.left,l=h+s.snapElements[d].width,c=s.snapElements[d].top-s.margins.top,u=c+s.snapElements[d].height,h-g>_||m>l+g||c-g>b||v>u+g||!t.contains(s.snapElements[d].item.ownerDocument,s.snapElements[d].item)?(s.snapElements[d].snapping&&s.options.snap.release&&s.options.snap.release.call(s.element,e,t.extend(s._uiHash(),{snapItem:s.snapElements[d].item})),s.snapElements[d].snapping=!1):("inner"!==f.snapMode&&(n=g>=Math.abs(c-b),o=g>=Math.abs(u-v),a=g>=Math.abs(h-_),r=g>=Math.abs(l-m),n&&(i.position.top=s._convertPositionTo("relative",{top:c-s.helperProportions.height,left:0}).top),o&&(i.position.top=s._convertPositionTo("relative",{top:u,left:0}).top),a&&(i.position.left=s._convertPositionTo("relative",{top:0,left:h-s.helperProportions.width}).left),r&&(i.position.left=s._convertPositionTo("relative",{top:0,left:l}).left)),p=n||o||a||r,"outer"!==f.snapMode&&(n=g>=Math.abs(c-v),o=g>=Math.abs(u-b),a=g>=Math.abs(h-m),r=g>=Math.abs(l-_),n&&(i.position.top=s._convertPositionTo("relative",{top:c,left:0}).top),o&&(i.position.top=s._convertPositionTo("relative",{top:u-s.helperProportions.height,left:0}).top),a&&(i.position.left=s._convertPositionTo("relative",{top:0,left:h}).left),r&&(i.position.left=s._convertPositionTo("relative",{top:0,left:l-s.helperProportions.width}).left)),!s.snapElements[d].snapping&&(n||o||a||r||p)&&s.options.snap.snap&&s.options.snap.snap.call(s.element,e,t.extend(s._uiHash(),{snapItem:s.snapElements[d].item})),s.snapElements[d].snapping=n||o||a||r||p)}}),t.ui.plugin.add("draggable","stack",{start:function(e,i,s){var n,o=s.options,a=t.makeArray(t(o.stack)).sort(function(e,i){return(parseInt(t(e).css("zIndex"),10)||0)-(parseInt(t(i).css("zIndex"),10)||0)});a.length&&(n=parseInt(t(a[0]).css("zIndex"),10)||0,t(a).each(function(e){t(this).css("zIndex",n+e)}),this.css("zIndex",n+a.length))}}),t.ui.plugin.add("draggable","zIndex",{start:function(e,i,s){var n=t(i.helper),o=s.options;n.css("zIndex")&&(o._zIndex=n.css("zIndex")),n.css("zIndex",o.zIndex)},stop:function(e,i,s){var n=s.options;n._zIndex&&t(i.helper).css("zIndex",n._zIndex)}}),t.ui.draggable,t.widget("ui.resizable",t.ui.mouse,{version:"1.12.1",widgetEventPrefix:"resize",options:{alsoResize:!1,animate:!1,animateDuration:"slow",animateEasing:"swing",aspectRatio:!1,autoHide:!1,classes:{"ui-resizable-se":"ui-icon ui-icon-gripsmall-diagonal-se"},containment:!1,ghost:!1,grid:!1,handles:"e,s,se",helper:!1,maxHeight:null,maxWidth:null,minHeight:10,minWidth:10,zIndex:90,resize:null,start:null,stop:null},_num:function(t){return parseFloat(t)||0},_isNumber:function(t){return!isNaN(parseFloat(t))},_hasScroll:function(e,i){if("hidden"===t(e).css("overflow"))return!1;var s=i&&"left"===i?"scrollLeft":"scrollTop",n=!1;return e[s]>0?!0:(e[s]=1,n=e[s]>0,e[s]=0,n)},_create:function(){var e,i=this.options,s=this;this._addClass("ui-resizable"),t.extend(this,{_aspectRatio:!!i.aspectRatio,aspectRatio:i.aspectRatio,originalElement:this.element,_proportionallyResizeElements:[],_helper:i.helper||i.ghost||i.animate?i.helper||"ui-resizable-helper":null}),this.element[0].nodeName.match(/^(canvas|textarea|input|select|button|img)$/i)&&(this.element.wrap(t("
        ").css({position:this.element.css("position"),width:this.element.outerWidth(),height:this.element.outerHeight(),top:this.element.css("top"),left:this.element.css("left")})),this.element=this.element.parent().data("ui-resizable",this.element.resizable("instance")),this.elementIsWrapper=!0,e={marginTop:this.originalElement.css("marginTop"),marginRight:this.originalElement.css("marginRight"),marginBottom:this.originalElement.css("marginBottom"),marginLeft:this.originalElement.css("marginLeft")},this.element.css(e),this.originalElement.css("margin",0),this.originalResizeStyle=this.originalElement.css("resize"),this.originalElement.css("resize","none"),this._proportionallyResizeElements.push(this.originalElement.css({position:"static",zoom:1,display:"block"})),this.originalElement.css(e),this._proportionallyResize()),this._setupHandles(),i.autoHide&&t(this.element).on("mouseenter",function(){i.disabled||(s._removeClass("ui-resizable-autohide"),s._handles.show())}).on("mouseleave",function(){i.disabled||s.resizing||(s._addClass("ui-resizable-autohide"),s._handles.hide())}),this._mouseInit()},_destroy:function(){this._mouseDestroy();var e,i=function(e){t(e).removeData("resizable").removeData("ui-resizable").off(".resizable").find(".ui-resizable-handle").remove()};return this.elementIsWrapper&&(i(this.element),e=this.element,this.originalElement.css({position:e.css("position"),width:e.outerWidth(),height:e.outerHeight(),top:e.css("top"),left:e.css("left")}).insertAfter(e),e.remove()),this.originalElement.css("resize",this.originalResizeStyle),i(this.originalElement),this},_setOption:function(t,e){switch(this._super(t,e),t){case"handles":this._removeHandles(),this._setupHandles();break;default:}},_setupHandles:function(){var e,i,s,n,o,a=this.options,r=this;if(this.handles=a.handles||(t(".ui-resizable-handle",this.element).length?{n:".ui-resizable-n",e:".ui-resizable-e",s:".ui-resizable-s",w:".ui-resizable-w",se:".ui-resizable-se",sw:".ui-resizable-sw",ne:".ui-resizable-ne",nw:".ui-resizable-nw"}:"e,s,se"),this._handles=t(),this.handles.constructor===String)for("all"===this.handles&&(this.handles="n,e,s,w,se,sw,ne,nw"),s=this.handles.split(","),this.handles={},i=0;s.length>i;i++)e=t.trim(s[i]),n="ui-resizable-"+e,o=t("
        "),this._addClass(o,"ui-resizable-handle "+n),o.css({zIndex:a.zIndex}),this.handles[e]=".ui-resizable-"+e,this.element.append(o);this._renderAxis=function(e){var i,s,n,o;e=e||this.element;for(i in this.handles)this.handles[i].constructor===String?this.handles[i]=this.element.children(this.handles[i]).first().show():(this.handles[i].jquery||this.handles[i].nodeType)&&(this.handles[i]=t(this.handles[i]),this._on(this.handles[i],{mousedown:r._mouseDown})),this.elementIsWrapper&&this.originalElement[0].nodeName.match(/^(textarea|input|select|button)$/i)&&(s=t(this.handles[i],this.element),o=/sw|ne|nw|se|n|s/.test(i)?s.outerHeight():s.outerWidth(),n=["padding",/ne|nw|n/.test(i)?"Top":/se|sw|s/.test(i)?"Bottom":/^e$/.test(i)?"Right":"Left"].join(""),e.css(n,o),this._proportionallyResize()),this._handles=this._handles.add(this.handles[i])},this._renderAxis(this.element),this._handles=this._handles.add(this.element.find(".ui-resizable-handle")),this._handles.disableSelection(),this._handles.on("mouseover",function(){r.resizing||(this.className&&(o=this.className.match(/ui-resizable-(se|sw|ne|nw|n|e|s|w)/i)),r.axis=o&&o[1]?o[1]:"se")}),a.autoHide&&(this._handles.hide(),this._addClass("ui-resizable-autohide"))},_removeHandles:function(){this._handles.remove()},_mouseCapture:function(e){var i,s,n=!1;for(i in this.handles)s=t(this.handles[i])[0],(s===e.target||t.contains(s,e.target))&&(n=!0);return!this.options.disabled&&n},_mouseStart:function(e){var i,s,n,o=this.options,a=this.element;return this.resizing=!0,this._renderProxy(),i=this._num(this.helper.css("left")),s=this._num(this.helper.css("top")),o.containment&&(i+=t(o.containment).scrollLeft()||0,s+=t(o.containment).scrollTop()||0),this.offset=this.helper.offset(),this.position={left:i,top:s},this.size=this._helper?{width:this.helper.width(),height:this.helper.height()}:{width:a.width(),height:a.height()},this.originalSize=this._helper?{width:a.outerWidth(),height:a.outerHeight()}:{width:a.width(),height:a.height()},this.sizeDiff={width:a.outerWidth()-a.width(),height:a.outerHeight()-a.height()},this.originalPosition={left:i,top:s},this.originalMousePosition={left:e.pageX,top:e.pageY},this.aspectRatio="number"==typeof o.aspectRatio?o.aspectRatio:this.originalSize.width/this.originalSize.height||1,n=t(".ui-resizable-"+this.axis).css("cursor"),t("body").css("cursor","auto"===n?this.axis+"-resize":n),this._addClass("ui-resizable-resizing"),this._propagate("start",e),!0},_mouseDrag:function(e){var i,s,n=this.originalMousePosition,o=this.axis,a=e.pageX-n.left||0,r=e.pageY-n.top||0,h=this._change[o];return this._updatePrevProperties(),h?(i=h.apply(this,[e,a,r]),this._updateVirtualBoundaries(e.shiftKey),(this._aspectRatio||e.shiftKey)&&(i=this._updateRatio(i,e)),i=this._respectSize(i,e),this._updateCache(i),this._propagate("resize",e),s=this._applyChanges(),!this._helper&&this._proportionallyResizeElements.length&&this._proportionallyResize(),t.isEmptyObject(s)||(this._updatePrevProperties(),this._trigger("resize",e,this.ui()),this._applyChanges()),!1):!1},_mouseStop:function(e){this.resizing=!1;var i,s,n,o,a,r,h,l=this.options,c=this;return 
this._helper&&(i=this._proportionallyResizeElements,s=i.length&&/textarea/i.test(i[0].nodeName),n=s&&this._hasScroll(i[0],"left")?0:c.sizeDiff.height,o=s?0:c.sizeDiff.width,a={width:c.helper.width()-o,height:c.helper.height()-n},r=parseFloat(c.element.css("left"))+(c.position.left-c.originalPosition.left)||null,h=parseFloat(c.element.css("top"))+(c.position.top-c.originalPosition.top)||null,l.animate||this.element.css(t.extend(a,{top:h,left:r})),c.helper.height(c.size.height),c.helper.width(c.size.width),this._helper&&!l.animate&&this._proportionallyResize()),t("body").css("cursor","auto"),this._removeClass("ui-resizable-resizing"),this._propagate("stop",e),this._helper&&this.helper.remove(),!1},_updatePrevProperties:function(){this.prevPosition={top:this.position.top,left:this.position.left},this.prevSize={width:this.size.width,height:this.size.height}},_applyChanges:function(){var t={};return this.position.top!==this.prevPosition.top&&(t.top=this.position.top+"px"),this.position.left!==this.prevPosition.left&&(t.left=this.position.left+"px"),this.size.width!==this.prevSize.width&&(t.width=this.size.width+"px"),this.size.height!==this.prevSize.height&&(t.height=this.size.height+"px"),this.helper.css(t),t},_updateVirtualBoundaries:function(t){var e,i,s,n,o,a=this.options;o={minWidth:this._isNumber(a.minWidth)?a.minWidth:0,maxWidth:this._isNumber(a.maxWidth)?a.maxWidth:1/0,minHeight:this._isNumber(a.minHeight)?a.minHeight:0,maxHeight:this._isNumber(a.maxHeight)?a.maxHeight:1/0},(this._aspectRatio||t)&&(e=o.minHeight*this.aspectRatio,s=o.minWidth/this.aspectRatio,i=o.maxHeight*this.aspectRatio,n=o.maxWidth/this.aspectRatio,e>o.minWidth&&(o.minWidth=e),s>o.minHeight&&(o.minHeight=s),o.maxWidth>i&&(o.maxWidth=i),o.maxHeight>n&&(o.maxHeight=n)),this._vBoundaries=o},_updateCache:function(t){this.offset=this.helper.offset(),this._isNumber(t.left)&&(this.position.left=t.left),this._isNumber(t.top)&&(this.position.top=t.top),this._isNumber(t.height)&&(this.size.height=t.height),this._isNumber(t.width)&&(this.size.width=t.width)},_updateRatio:function(t){var e=this.position,i=this.size,s=this.axis;return this._isNumber(t.height)?t.width=t.height*this.aspectRatio:this._isNumber(t.width)&&(t.height=t.width/this.aspectRatio),"sw"===s&&(t.left=e.left+(i.width-t.width),t.top=null),"nw"===s&&(t.top=e.top+(i.height-t.height),t.left=e.left+(i.width-t.width)),t},_respectSize:function(t){var e=this._vBoundaries,i=this.axis,s=this._isNumber(t.width)&&e.maxWidth&&e.maxWidtht.width,a=this._isNumber(t.height)&&e.minHeight&&e.minHeight>t.height,r=this.originalPosition.left+this.originalSize.width,h=this.originalPosition.top+this.originalSize.height,l=/sw|nw|w/.test(i),c=/nw|ne|n/.test(i);return o&&(t.width=e.minWidth),a&&(t.height=e.minHeight),s&&(t.width=e.maxWidth),n&&(t.height=e.maxHeight),o&&l&&(t.left=r-e.minWidth),s&&l&&(t.left=r-e.maxWidth),a&&c&&(t.top=h-e.minHeight),n&&c&&(t.top=h-e.maxHeight),t.width||t.height||t.left||!t.top?t.width||t.height||t.top||!t.left||(t.left=null):t.top=null,t},_getPaddingPlusBorderDimensions:function(t){for(var e=0,i=[],s=[t.css("borderTopWidth"),t.css("borderRightWidth"),t.css("borderBottomWidth"),t.css("borderLeftWidth")],n=[t.css("paddingTop"),t.css("paddingRight"),t.css("paddingBottom"),t.css("paddingLeft")];4>e;e++)i[e]=parseFloat(s[e])||0,i[e]+=parseFloat(n[e])||0;return{height:i[0]+i[2],width:i[1]+i[3]}},_proportionallyResize:function(){if(this._proportionallyResizeElements.length)for(var 
t,e=0,i=this.helper||this.element;this._proportionallyResizeElements.length>e;e++)t=this._proportionallyResizeElements[e],this.outerDimensions||(this.outerDimensions=this._getPaddingPlusBorderDimensions(t)),t.css({height:i.height()-this.outerDimensions.height||0,width:i.width()-this.outerDimensions.width||0})},_renderProxy:function(){var e=this.element,i=this.options;this.elementOffset=e.offset(),this._helper?(this.helper=this.helper||t("
        "),this._addClass(this.helper,this._helper),this.helper.css({width:this.element.outerWidth(),height:this.element.outerHeight(),position:"absolute",left:this.elementOffset.left+"px",top:this.elementOffset.top+"px",zIndex:++i.zIndex}),this.helper.appendTo("body").disableSelection()):this.helper=this.element},_change:{e:function(t,e){return{width:this.originalSize.width+e}},w:function(t,e){var i=this.originalSize,s=this.originalPosition;return{left:s.left+e,width:i.width-e}},n:function(t,e,i){var s=this.originalSize,n=this.originalPosition;return{top:n.top+i,height:s.height-i}},s:function(t,e,i){return{height:this.originalSize.height+i}},se:function(e,i,s){return t.extend(this._change.s.apply(this,arguments),this._change.e.apply(this,[e,i,s]))},sw:function(e,i,s){return t.extend(this._change.s.apply(this,arguments),this._change.w.apply(this,[e,i,s]))},ne:function(e,i,s){return t.extend(this._change.n.apply(this,arguments),this._change.e.apply(this,[e,i,s]))},nw:function(e,i,s){return t.extend(this._change.n.apply(this,arguments),this._change.w.apply(this,[e,i,s]))}},_propagate:function(e,i){t.ui.plugin.call(this,e,[i,this.ui()]),"resize"!==e&&this._trigger(e,i,this.ui())},plugins:{},ui:function(){return{originalElement:this.originalElement,element:this.element,helper:this.helper,position:this.position,size:this.size,originalSize:this.originalSize,originalPosition:this.originalPosition}}}),t.ui.plugin.add("resizable","animate",{stop:function(e){var i=t(this).resizable("instance"),s=i.options,n=i._proportionallyResizeElements,o=n.length&&/textarea/i.test(n[0].nodeName),a=o&&i._hasScroll(n[0],"left")?0:i.sizeDiff.height,r=o?0:i.sizeDiff.width,h={width:i.size.width-r,height:i.size.height-a},l=parseFloat(i.element.css("left"))+(i.position.left-i.originalPosition.left)||null,c=parseFloat(i.element.css("top"))+(i.position.top-i.originalPosition.top)||null;i.element.animate(t.extend(h,c&&l?{top:c,left:l}:{}),{duration:s.animateDuration,easing:s.animateEasing,step:function(){var s={width:parseFloat(i.element.css("width")),height:parseFloat(i.element.css("height")),top:parseFloat(i.element.css("top")),left:parseFloat(i.element.css("left"))};n&&n.length&&t(n[0]).css({width:s.width,height:s.height}),i._updateCache(s),i._propagate("resize",e)}})}}),t.ui.plugin.add("resizable","containment",{start:function(){var e,i,s,n,o,a,r,h=t(this).resizable("instance"),l=h.options,c=h.element,u=l.containment,d=u instanceof t?u.get(0):/parent/.test(u)?c.parent().get(0):u;d&&(h.containerElement=t(d),/document/.test(u)||u===document?(h.containerOffset={left:0,top:0},h.containerPosition={left:0,top:0},h.parentData={element:t(document),left:0,top:0,width:t(document).width(),height:t(document).height()||document.body.parentNode.scrollHeight}):(e=t(d),i=[],t(["Top","Right","Left","Bottom"]).each(function(t,s){i[t]=h._num(e.css("padding"+s))}),h.containerOffset=e.offset(),h.containerPosition=e.position(),h.containerSize={height:e.innerHeight()-i[3],width:e.innerWidth()-i[1]},s=h.containerOffset,n=h.containerSize.height,o=h.containerSize.width,a=h._hasScroll(d,"left")?d.scrollWidth:o,r=h._hasScroll(d)?d.scrollHeight:n,h.parentData={element:d,left:s.left,top:s.top,width:a,height:r}))},resize:function(e){var 
i,s,n,o,a=t(this).resizable("instance"),r=a.options,h=a.containerOffset,l=a.position,c=a._aspectRatio||e.shiftKey,u={top:0,left:0},d=a.containerElement,p=!0;d[0]!==document&&/static/.test(d.css("position"))&&(u=h),l.left<(a._helper?h.left:0)&&(a.size.width=a.size.width+(a._helper?a.position.left-h.left:a.position.left-u.left),c&&(a.size.height=a.size.width/a.aspectRatio,p=!1),a.position.left=r.helper?h.left:0),l.top<(a._helper?h.top:0)&&(a.size.height=a.size.height+(a._helper?a.position.top-h.top:a.position.top),c&&(a.size.width=a.size.height*a.aspectRatio,p=!1),a.position.top=a._helper?h.top:0),n=a.containerElement.get(0)===a.element.parent().get(0),o=/relative|absolute/.test(a.containerElement.css("position")),n&&o?(a.offset.left=a.parentData.left+a.position.left,a.offset.top=a.parentData.top+a.position.top):(a.offset.left=a.element.offset().left,a.offset.top=a.element.offset().top),i=Math.abs(a.sizeDiff.width+(a._helper?a.offset.left-u.left:a.offset.left-h.left)),s=Math.abs(a.sizeDiff.height+(a._helper?a.offset.top-u.top:a.offset.top-h.top)),i+a.size.width>=a.parentData.width&&(a.size.width=a.parentData.width-i,c&&(a.size.height=a.size.width/a.aspectRatio,p=!1)),s+a.size.height>=a.parentData.height&&(a.size.height=a.parentData.height-s,c&&(a.size.width=a.size.height*a.aspectRatio,p=!1)),p||(a.position.left=a.prevPosition.left,a.position.top=a.prevPosition.top,a.size.width=a.prevSize.width,a.size.height=a.prevSize.height)},stop:function(){var e=t(this).resizable("instance"),i=e.options,s=e.containerOffset,n=e.containerPosition,o=e.containerElement,a=t(e.helper),r=a.offset(),h=a.outerWidth()-e.sizeDiff.width,l=a.outerHeight()-e.sizeDiff.height;e._helper&&!i.animate&&/relative/.test(o.css("position"))&&t(this).css({left:r.left-n.left-s.left,width:h,height:l}),e._helper&&!i.animate&&/static/.test(o.css("position"))&&t(this).css({left:r.left-n.left-s.left,width:h,height:l})}}),t.ui.plugin.add("resizable","alsoResize",{start:function(){var e=t(this).resizable("instance"),i=e.options;t(i.alsoResize).each(function(){var e=t(this);e.data("ui-resizable-alsoresize",{width:parseFloat(e.width()),height:parseFloat(e.height()),left:parseFloat(e.css("left")),top:parseFloat(e.css("top"))})})},resize:function(e,i){var s=t(this).resizable("instance"),n=s.options,o=s.originalSize,a=s.originalPosition,r={height:s.size.height-o.height||0,width:s.size.width-o.width||0,top:s.position.top-a.top||0,left:s.position.left-a.left||0};t(n.alsoResize).each(function(){var e=t(this),s=t(this).data("ui-resizable-alsoresize"),n={},o=e.parents(i.originalElement[0]).length?["width","height"]:["width","height","top","left"];t.each(o,function(t,e){var i=(s[e]||0)+(r[e]||0);i&&i>=0&&(n[e]=i||null)}),e.css(n)})},stop:function(){t(this).removeData("ui-resizable-alsoresize")}}),t.ui.plugin.add("resizable","ghost",{start:function(){var e=t(this).resizable("instance"),i=e.size;e.ghost=e.originalElement.clone(),e.ghost.css({opacity:.25,display:"block",position:"relative",height:i.height,width:i.width,margin:0,left:0,top:0}),e._addClass(e.ghost,"ui-resizable-ghost"),t.uiBackCompat!==!1&&"string"==typeof e.options.ghost&&e.ghost.addClass(this.options.ghost),e.ghost.appendTo(e.helper)},resize:function(){var e=t(this).resizable("instance");e.ghost&&e.ghost.css({position:"relative",height:e.size.height,width:e.size.width})},stop:function(){var e=t(this).resizable("instance");e.ghost&&e.helper&&e.helper.get(0).removeChild(e.ghost.get(0))}}),t.ui.plugin.add("resizable","grid",{resize:function(){var 
e,i=t(this).resizable("instance"),s=i.options,n=i.size,o=i.originalSize,a=i.originalPosition,r=i.axis,h="number"==typeof s.grid?[s.grid,s.grid]:s.grid,l=h[0]||1,c=h[1]||1,u=Math.round((n.width-o.width)/l)*l,d=Math.round((n.height-o.height)/c)*c,p=o.width+u,f=o.height+d,g=s.maxWidth&&p>s.maxWidth,m=s.maxHeight&&f>s.maxHeight,_=s.minWidth&&s.minWidth>p,v=s.minHeight&&s.minHeight>f;s.grid=h,_&&(p+=l),v&&(f+=c),g&&(p-=l),m&&(f-=c),/^(se|s|e)$/.test(r)?(i.size.width=p,i.size.height=f):/^(ne)$/.test(r)?(i.size.width=p,i.size.height=f,i.position.top=a.top-d):/^(sw)$/.test(r)?(i.size.width=p,i.size.height=f,i.position.left=a.left-u):((0>=f-c||0>=p-l)&&(e=i._getPaddingPlusBorderDimensions(this)),f-c>0?(i.size.height=f,i.position.top=a.top-d):(f=c-e.height,i.size.height=f,i.position.top=a.top+o.height-f),p-l>0?(i.size.width=p,i.position.left=a.left-u):(p=l-e.width,i.size.width=p,i.position.left=a.left+o.width-p))}}),t.ui.resizable,t.widget("ui.dialog",{version:"1.12.1",options:{appendTo:"body",autoOpen:!0,buttons:[],classes:{"ui-dialog":"ui-corner-all","ui-dialog-titlebar":"ui-corner-all"},closeOnEscape:!0,closeText:"Close",draggable:!0,hide:null,height:"auto",maxHeight:null,maxWidth:null,minHeight:150,minWidth:150,modal:!1,position:{my:"center",at:"center",of:window,collision:"fit",using:function(e){var i=t(this).css(e).offset().top;0>i&&t(this).css("top",e.top-i)}},resizable:!0,show:null,title:null,width:300,beforeClose:null,close:null,drag:null,dragStart:null,dragStop:null,focus:null,open:null,resize:null,resizeStart:null,resizeStop:null},sizeRelatedOptions:{buttons:!0,height:!0,maxHeight:!0,maxWidth:!0,minHeight:!0,minWidth:!0,width:!0},resizableRelatedOptions:{maxHeight:!0,maxWidth:!0,minHeight:!0,minWidth:!0},_create:function(){this.originalCss={display:this.element[0].style.display,width:this.element[0].style.width,minHeight:this.element[0].style.minHeight,maxHeight:this.element[0].style.maxHeight,height:this.element[0].style.height},this.originalPosition={parent:this.element.parent(),index:this.element.parent().children().index(this.element)},this.originalTitle=this.element.attr("title"),null==this.options.title&&null!=this.originalTitle&&(this.options.title=this.originalTitle),this.options.disabled&&(this.options.disabled=!1),this._createWrapper(),this.element.show().removeAttr("title").appendTo(this.uiDialog),this._addClass("ui-dialog-content","ui-widget-content"),this._createTitlebar(),this._createButtonPane(),this.options.draggable&&t.fn.draggable&&this._makeDraggable(),this.options.resizable&&t.fn.resizable&&this._makeResizable(),this._isOpen=!1,this._trackFocus()},_init:function(){this.options.autoOpen&&this.open()},_appendTo:function(){var e=this.options.appendTo;return e&&(e.jquery||e.nodeType)?t(e):this.document.find(e||"body").eq(0)},_destroy:function(){var t,e=this.originalPosition;this._untrackInstance(),this._destroyOverlay(),this.element.removeUniqueId().css(this.originalCss).detach(),this.uiDialog.remove(),this.originalTitle&&this.element.attr("title",this.originalTitle),t=e.parent.children().eq(e.index),t.length&&t[0]!==this.element[0]?t.before(this.element):e.parent.append(this.element)},widget:function(){return this.uiDialog -},disable:t.noop,enable:t.noop,close:function(e){var 
i=this;this._isOpen&&this._trigger("beforeClose",e)!==!1&&(this._isOpen=!1,this._focusedElement=null,this._destroyOverlay(),this._untrackInstance(),this.opener.filter(":focusable").trigger("focus").length||t.ui.safeBlur(t.ui.safeActiveElement(this.document[0])),this._hide(this.uiDialog,this.options.hide,function(){i._trigger("close",e)}))},isOpen:function(){return this._isOpen},moveToTop:function(){this._moveToTop()},_moveToTop:function(e,i){var s=!1,n=this.uiDialog.siblings(".ui-front:visible").map(function(){return+t(this).css("z-index")}).get(),o=Math.max.apply(null,n);return o>=+this.uiDialog.css("z-index")&&(this.uiDialog.css("z-index",o+1),s=!0),s&&!i&&this._trigger("focus",e),s},open:function(){var e=this;return this._isOpen?(this._moveToTop()&&this._focusTabbable(),void 0):(this._isOpen=!0,this.opener=t(t.ui.safeActiveElement(this.document[0])),this._size(),this._position(),this._createOverlay(),this._moveToTop(null,!0),this.overlay&&this.overlay.css("z-index",this.uiDialog.css("z-index")-1),this._show(this.uiDialog,this.options.show,function(){e._focusTabbable(),e._trigger("focus")}),this._makeFocusTarget(),this._trigger("open"),void 0)},_focusTabbable:function(){var t=this._focusedElement;t||(t=this.element.find("[autofocus]")),t.length||(t=this.element.find(":tabbable")),t.length||(t=this.uiDialogButtonPane.find(":tabbable")),t.length||(t=this.uiDialogTitlebarClose.filter(":tabbable")),t.length||(t=this.uiDialog),t.eq(0).trigger("focus")},_keepFocus:function(e){function i(){var e=t.ui.safeActiveElement(this.document[0]),i=this.uiDialog[0]===e||t.contains(this.uiDialog[0],e);i||this._focusTabbable()}e.preventDefault(),i.call(this),this._delay(i)},_createWrapper:function(){this.uiDialog=t("
        ").hide().attr({tabIndex:-1,role:"dialog"}).appendTo(this._appendTo()),this._addClass(this.uiDialog,"ui-dialog","ui-widget ui-widget-content ui-front"),this._on(this.uiDialog,{keydown:function(e){if(this.options.closeOnEscape&&!e.isDefaultPrevented()&&e.keyCode&&e.keyCode===t.ui.keyCode.ESCAPE)return e.preventDefault(),this.close(e),void 0;if(e.keyCode===t.ui.keyCode.TAB&&!e.isDefaultPrevented()){var i=this.uiDialog.find(":tabbable"),s=i.filter(":first"),n=i.filter(":last");e.target!==n[0]&&e.target!==this.uiDialog[0]||e.shiftKey?e.target!==s[0]&&e.target!==this.uiDialog[0]||!e.shiftKey||(this._delay(function(){n.trigger("focus")}),e.preventDefault()):(this._delay(function(){s.trigger("focus")}),e.preventDefault())}},mousedown:function(t){this._moveToTop(t)&&this._focusTabbable()}}),this.element.find("[aria-describedby]").length||this.uiDialog.attr({"aria-describedby":this.element.uniqueId().attr("id")})},_createTitlebar:function(){var e;this.uiDialogTitlebar=t("
        "),this._addClass(this.uiDialogTitlebar,"ui-dialog-titlebar","ui-widget-header ui-helper-clearfix"),this._on(this.uiDialogTitlebar,{mousedown:function(e){t(e.target).closest(".ui-dialog-titlebar-close")||this.uiDialog.trigger("focus")}}),this.uiDialogTitlebarClose=t("").button({label:t("").text(this.options.closeText).html(),icon:"ui-icon-closethick",showLabel:!1}).appendTo(this.uiDialogTitlebar),this._addClass(this.uiDialogTitlebarClose,"ui-dialog-titlebar-close"),this._on(this.uiDialogTitlebarClose,{click:function(t){t.preventDefault(),this.close(t)}}),e=t("").uniqueId().prependTo(this.uiDialogTitlebar),this._addClass(e,"ui-dialog-title"),this._title(e),this.uiDialogTitlebar.prependTo(this.uiDialog),this.uiDialog.attr({"aria-labelledby":e.attr("id")})},_title:function(t){this.options.title?t.text(this.options.title):t.html(" ")},_createButtonPane:function(){this.uiDialogButtonPane=t("
        "),this._addClass(this.uiDialogButtonPane,"ui-dialog-buttonpane","ui-widget-content ui-helper-clearfix"),this.uiButtonSet=t("
        ").appendTo(this.uiDialogButtonPane),this._addClass(this.uiButtonSet,"ui-dialog-buttonset"),this._createButtons()},_createButtons:function(){var e=this,i=this.options.buttons;return this.uiDialogButtonPane.remove(),this.uiButtonSet.empty(),t.isEmptyObject(i)||t.isArray(i)&&!i.length?(this._removeClass(this.uiDialog,"ui-dialog-buttons"),void 0):(t.each(i,function(i,s){var n,o;s=t.isFunction(s)?{click:s,text:i}:s,s=t.extend({type:"button"},s),n=s.click,o={icon:s.icon,iconPosition:s.iconPosition,showLabel:s.showLabel,icons:s.icons,text:s.text},delete s.click,delete s.icon,delete s.iconPosition,delete s.showLabel,delete s.icons,"boolean"==typeof s.text&&delete s.text,t("",s).button(o).appendTo(e.uiButtonSet).on("click",function(){n.apply(e.element[0],arguments)})}),this._addClass(this.uiDialog,"ui-dialog-buttons"),this.uiDialogButtonPane.appendTo(this.uiDialog),void 0)},_makeDraggable:function(){function e(t){return{position:t.position,offset:t.offset}}var i=this,s=this.options;this.uiDialog.draggable({cancel:".ui-dialog-content, .ui-dialog-titlebar-close",handle:".ui-dialog-titlebar",containment:"document",start:function(s,n){i._addClass(t(this),"ui-dialog-dragging"),i._blockFrames(),i._trigger("dragStart",s,e(n))},drag:function(t,s){i._trigger("drag",t,e(s))},stop:function(n,o){var a=o.offset.left-i.document.scrollLeft(),r=o.offset.top-i.document.scrollTop();s.position={my:"left top",at:"left"+(a>=0?"+":"")+a+" "+"top"+(r>=0?"+":"")+r,of:i.window},i._removeClass(t(this),"ui-dialog-dragging"),i._unblockFrames(),i._trigger("dragStop",n,e(o))}})},_makeResizable:function(){function e(t){return{originalPosition:t.originalPosition,originalSize:t.originalSize,position:t.position,size:t.size}}var i=this,s=this.options,n=s.resizable,o=this.uiDialog.css("position"),a="string"==typeof n?n:"n,e,s,w,se,sw,ne,nw";this.uiDialog.resizable({cancel:".ui-dialog-content",containment:"document",alsoResize:this.element,maxWidth:s.maxWidth,maxHeight:s.maxHeight,minWidth:s.minWidth,minHeight:this._minHeight(),handles:a,start:function(s,n){i._addClass(t(this),"ui-dialog-resizing"),i._blockFrames(),i._trigger("resizeStart",s,e(n))},resize:function(t,s){i._trigger("resize",t,e(s))},stop:function(n,o){var a=i.uiDialog.offset(),r=a.left-i.document.scrollLeft(),h=a.top-i.document.scrollTop();s.height=i.uiDialog.height(),s.width=i.uiDialog.width(),s.position={my:"left top",at:"left"+(r>=0?"+":"")+r+" "+"top"+(h>=0?"+":"")+h,of:i.window},i._removeClass(t(this),"ui-dialog-resizing"),i._unblockFrames(),i._trigger("resizeStop",n,e(o))}}).css("position",o)},_trackFocus:function(){this._on(this.widget(),{focusin:function(e){this._makeFocusTarget(),this._focusedElement=t(e.target)}})},_makeFocusTarget:function(){this._untrackInstance(),this._trackingInstances().unshift(this)},_untrackInstance:function(){var e=this._trackingInstances(),i=t.inArray(this,e);-1!==i&&e.splice(i,1)},_trackingInstances:function(){var t=this.document.data("ui-dialog-instances");return t||(t=[],this.document.data("ui-dialog-instances",t)),t},_minHeight:function(){var t=this.options;return"auto"===t.height?t.minHeight:Math.min(t.minHeight,t.height)},_position:function(){var t=this.uiDialog.is(":visible");t||this.uiDialog.show(),this.uiDialog.position(this.options.position),t||this.uiDialog.hide()},_setOptions:function(e){var i=this,s=!1,n={};t.each(e,function(t,e){i._setOption(t,e),t in i.sizeRelatedOptions&&(s=!0),t in 
i.resizableRelatedOptions&&(n[t]=e)}),s&&(this._size(),this._position()),this.uiDialog.is(":data(ui-resizable)")&&this.uiDialog.resizable("option",n)},_setOption:function(e,i){var s,n,o=this.uiDialog;"disabled"!==e&&(this._super(e,i),"appendTo"===e&&this.uiDialog.appendTo(this._appendTo()),"buttons"===e&&this._createButtons(),"closeText"===e&&this.uiDialogTitlebarClose.button({label:t("").text(""+this.options.closeText).html()}),"draggable"===e&&(s=o.is(":data(ui-draggable)"),s&&!i&&o.draggable("destroy"),!s&&i&&this._makeDraggable()),"position"===e&&this._position(),"resizable"===e&&(n=o.is(":data(ui-resizable)"),n&&!i&&o.resizable("destroy"),n&&"string"==typeof i&&o.resizable("option","handles",i),n||i===!1||this._makeResizable()),"title"===e&&this._title(this.uiDialogTitlebar.find(".ui-dialog-title")))},_size:function(){var t,e,i,s=this.options;this.element.show().css({width:"auto",minHeight:0,maxHeight:"none",height:0}),s.minWidth>s.width&&(s.width=s.minWidth),t=this.uiDialog.css({height:"auto",width:s.width}).outerHeight(),e=Math.max(0,s.minHeight-t),i="number"==typeof s.maxHeight?Math.max(0,s.maxHeight-t):"none","auto"===s.height?this.element.css({minHeight:e,maxHeight:i,height:"auto"}):this.element.height(Math.max(0,s.height-t)),this.uiDialog.is(":data(ui-resizable)")&&this.uiDialog.resizable("option","minHeight",this._minHeight())},_blockFrames:function(){this.iframeBlocks=this.document.find("iframe").map(function(){var e=t(this);return t("
        ").css({position:"absolute",width:e.outerWidth(),height:e.outerHeight()}).appendTo(e.parent()).offset(e.offset())[0]})},_unblockFrames:function(){this.iframeBlocks&&(this.iframeBlocks.remove(),delete this.iframeBlocks)},_allowInteraction:function(e){return t(e.target).closest(".ui-dialog").length?!0:!!t(e.target).closest(".ui-datepicker").length},_createOverlay:function(){if(this.options.modal){var e=!0;this._delay(function(){e=!1}),this.document.data("ui-dialog-overlays")||this._on(this.document,{focusin:function(t){e||this._allowInteraction(t)||(t.preventDefault(),this._trackingInstances()[0]._focusTabbable())}}),this.overlay=t("
        ").appendTo(this._appendTo()),this._addClass(this.overlay,null,"ui-widget-overlay ui-front"),this._on(this.overlay,{mousedown:"_keepFocus"}),this.document.data("ui-dialog-overlays",(this.document.data("ui-dialog-overlays")||0)+1)}},_destroyOverlay:function(){if(this.options.modal&&this.overlay){var t=this.document.data("ui-dialog-overlays")-1;t?this.document.data("ui-dialog-overlays",t):(this._off(this.document,"focusin"),this.document.removeData("ui-dialog-overlays")),this.overlay.remove(),this.overlay=null}}}),t.uiBackCompat!==!1&&t.widget("ui.dialog",t.ui.dialog,{options:{dialogClass:""},_createWrapper:function(){this._super(),this.uiDialog.addClass(this.options.dialogClass)},_setOption:function(t,e){"dialogClass"===t&&this.uiDialog.removeClass(this.options.dialogClass).addClass(e),this._superApply(arguments)}}),t.ui.dialog,t.widget("ui.droppable",{version:"1.12.1",widgetEventPrefix:"drop",options:{accept:"*",addClasses:!0,greedy:!1,scope:"default",tolerance:"intersect",activate:null,deactivate:null,drop:null,out:null,over:null},_create:function(){var e,i=this.options,s=i.accept;this.isover=!1,this.isout=!0,this.accept=t.isFunction(s)?s:function(t){return t.is(s)},this.proportions=function(){return arguments.length?(e=arguments[0],void 0):e?e:e={width:this.element[0].offsetWidth,height:this.element[0].offsetHeight}},this._addToManager(i.scope),i.addClasses&&this._addClass("ui-droppable")},_addToManager:function(e){t.ui.ddmanager.droppables[e]=t.ui.ddmanager.droppables[e]||[],t.ui.ddmanager.droppables[e].push(this)},_splice:function(t){for(var e=0;t.length>e;e++)t[e]===this&&t.splice(e,1)},_destroy:function(){var e=t.ui.ddmanager.droppables[this.options.scope];this._splice(e)},_setOption:function(e,i){if("accept"===e)this.accept=t.isFunction(i)?i:function(t){return t.is(i)};else if("scope"===e){var s=t.ui.ddmanager.droppables[this.options.scope];this._splice(s),this._addToManager(i)}this._super(e,i)},_activate:function(e){var i=t.ui.ddmanager.current;this._addActiveClass(),i&&this._trigger("activate",e,this.ui(i))},_deactivate:function(e){var i=t.ui.ddmanager.current;this._removeActiveClass(),i&&this._trigger("deactivate",e,this.ui(i))},_over:function(e){var i=t.ui.ddmanager.current;i&&(i.currentItem||i.element)[0]!==this.element[0]&&this.accept.call(this.element[0],i.currentItem||i.element)&&(this._addHoverClass(),this._trigger("over",e,this.ui(i)))},_out:function(e){var i=t.ui.ddmanager.current;i&&(i.currentItem||i.element)[0]!==this.element[0]&&this.accept.call(this.element[0],i.currentItem||i.element)&&(this._removeHoverClass(),this._trigger("out",e,this.ui(i)))},_drop:function(e,i){var s=i||t.ui.ddmanager.current,n=!1;return s&&(s.currentItem||s.element)[0]!==this.element[0]?(this.element.find(":data(ui-droppable)").not(".ui-draggable-dragging").each(function(){var i=t(this).droppable("instance");return i.options.greedy&&!i.options.disabled&&i.options.scope===s.options.scope&&i.accept.call(i.element[0],s.currentItem||s.element)&&v(s,t.extend(i,{offset:i.element.offset()}),i.options.tolerance,e)?(n=!0,!1):void 
0}),n?!1:this.accept.call(this.element[0],s.currentItem||s.element)?(this._removeActiveClass(),this._removeHoverClass(),this._trigger("drop",e,this.ui(s)),this.element):!1):!1},ui:function(t){return{draggable:t.currentItem||t.element,helper:t.helper,position:t.position,offset:t.positionAbs}},_addHoverClass:function(){this._addClass("ui-droppable-hover")},_removeHoverClass:function(){this._removeClass("ui-droppable-hover")},_addActiveClass:function(){this._addClass("ui-droppable-active")},_removeActiveClass:function(){this._removeClass("ui-droppable-active")}});var v=t.ui.intersect=function(){function t(t,e,i){return t>=e&&e+i>t}return function(e,i,s,n){if(!i.offset)return!1;var o=(e.positionAbs||e.position.absolute).left+e.margins.left,a=(e.positionAbs||e.position.absolute).top+e.margins.top,r=o+e.helperProportions.width,h=a+e.helperProportions.height,l=i.offset.left,c=i.offset.top,u=l+i.proportions().width,d=c+i.proportions().height;switch(s){case"fit":return o>=l&&u>=r&&a>=c&&d>=h;case"intersect":return o+e.helperProportions.width/2>l&&u>r-e.helperProportions.width/2&&a+e.helperProportions.height/2>c&&d>h-e.helperProportions.height/2;case"pointer":return t(n.pageY,c,i.proportions().height)&&t(n.pageX,l,i.proportions().width);case"touch":return(a>=c&&d>=a||h>=c&&d>=h||c>a&&h>d)&&(o>=l&&u>=o||r>=l&&u>=r||l>o&&r>u);default:return!1}}}();t.ui.ddmanager={current:null,droppables:{"default":[]},prepareOffsets:function(e,i){var s,n,o=t.ui.ddmanager.droppables[e.options.scope]||[],a=i?i.type:null,r=(e.currentItem||e.element).find(":data(ui-droppable)").addBack();t:for(s=0;o.length>s;s++)if(!(o[s].options.disabled||e&&!o[s].accept.call(o[s].element[0],e.currentItem||e.element))){for(n=0;r.length>n;n++)if(r[n]===o[s].element[0]){o[s].proportions().height=0;continue t}o[s].visible="none"!==o[s].element.css("display"),o[s].visible&&("mousedown"===a&&o[s]._activate.call(o[s],i),o[s].offset=o[s].element.offset(),o[s].proportions({width:o[s].element[0].offsetWidth,height:o[s].element[0].offsetHeight}))}},drop:function(e,i){var s=!1;return t.each((t.ui.ddmanager.droppables[e.options.scope]||[]).slice(),function(){this.options&&(!this.options.disabled&&this.visible&&v(e,this,this.options.tolerance,i)&&(s=this._drop.call(this,i)||s),!this.options.disabled&&this.visible&&this.accept.call(this.element[0],e.currentItem||e.element)&&(this.isout=!0,this.isover=!1,this._deactivate.call(this,i)))}),s},dragStart:function(e,i){e.element.parentsUntil("body").on("scroll.droppable",function(){e.options.refreshPositions||t.ui.ddmanager.prepareOffsets(e,i)})},drag:function(e,i){e.options.refreshPositions&&t.ui.ddmanager.prepareOffsets(e,i),t.each(t.ui.ddmanager.droppables[e.options.scope]||[],function(){if(!this.options.disabled&&!this.greedyChild&&this.visible){var s,n,o,a=v(e,this,this.options.tolerance,i),r=!a&&this.isover?"isout":a&&!this.isover?"isover":null;r&&(this.options.greedy&&(n=this.options.scope,o=this.element.parents(":data(ui-droppable)").filter(function(){return 
t(this).droppable("instance").options.scope===n}),o.length&&(s=t(o[0]).droppable("instance"),s.greedyChild="isover"===r)),s&&"isover"===r&&(s.isover=!1,s.isout=!0,s._out.call(s,i)),this[r]=!0,this["isout"===r?"isover":"isout"]=!1,this["isover"===r?"_over":"_out"].call(this,i),s&&"isout"===r&&(s.isout=!1,s.isover=!0,s._over.call(s,i)))}})},dragStop:function(e,i){e.element.parentsUntil("body").off("scroll.droppable"),e.options.refreshPositions||t.ui.ddmanager.prepareOffsets(e,i)}},t.uiBackCompat!==!1&&t.widget("ui.droppable",t.ui.droppable,{options:{hoverClass:!1,activeClass:!1},_addActiveClass:function(){this._super(),this.options.activeClass&&this.element.addClass(this.options.activeClass)},_removeActiveClass:function(){this._super(),this.options.activeClass&&this.element.removeClass(this.options.activeClass)},_addHoverClass:function(){this._super(),this.options.hoverClass&&this.element.addClass(this.options.hoverClass)},_removeHoverClass:function(){this._super(),this.options.hoverClass&&this.element.removeClass(this.options.hoverClass)}}),t.ui.droppable,t.widget("ui.progressbar",{version:"1.12.1",options:{classes:{"ui-progressbar":"ui-corner-all","ui-progressbar-value":"ui-corner-left","ui-progressbar-complete":"ui-corner-right"},max:100,value:0,change:null,complete:null},min:0,_create:function(){this.oldValue=this.options.value=this._constrainedValue(),this.element.attr({role:"progressbar","aria-valuemin":this.min}),this._addClass("ui-progressbar","ui-widget ui-widget-content"),this.valueDiv=t("
        ").appendTo(this.element),this._addClass(this.valueDiv,"ui-progressbar-value","ui-widget-header"),this._refreshValue()},_destroy:function(){this.element.removeAttr("role aria-valuemin aria-valuemax aria-valuenow"),this.valueDiv.remove()},value:function(t){return void 0===t?this.options.value:(this.options.value=this._constrainedValue(t),this._refreshValue(),void 0)},_constrainedValue:function(t){return void 0===t&&(t=this.options.value),this.indeterminate=t===!1,"number"!=typeof t&&(t=0),this.indeterminate?!1:Math.min(this.options.max,Math.max(this.min,t))},_setOptions:function(t){var e=t.value;delete t.value,this._super(t),this.options.value=this._constrainedValue(e),this._refreshValue()},_setOption:function(t,e){"max"===t&&(e=Math.max(this.min,e)),this._super(t,e)},_setOptionDisabled:function(t){this._super(t),this.element.attr("aria-disabled",t),this._toggleClass(null,"ui-state-disabled",!!t)},_percentage:function(){return this.indeterminate?100:100*(this.options.value-this.min)/(this.options.max-this.min)},_refreshValue:function(){var e=this.options.value,i=this._percentage();this.valueDiv.toggle(this.indeterminate||e>this.min).width(i.toFixed(0)+"%"),this._toggleClass(this.valueDiv,"ui-progressbar-complete",null,e===this.options.max)._toggleClass("ui-progressbar-indeterminate",null,this.indeterminate),this.indeterminate?(this.element.removeAttr("aria-valuenow"),this.overlayDiv||(this.overlayDiv=t("
        ").appendTo(this.valueDiv),this._addClass(this.overlayDiv,"ui-progressbar-overlay"))):(this.element.attr({"aria-valuemax":this.options.max,"aria-valuenow":e}),this.overlayDiv&&(this.overlayDiv.remove(),this.overlayDiv=null)),this.oldValue!==e&&(this.oldValue=e,this._trigger("change")),e===this.options.max&&this._trigger("complete")}}),t.widget("ui.selectable",t.ui.mouse,{version:"1.12.1",options:{appendTo:"body",autoRefresh:!0,distance:0,filter:"*",tolerance:"touch",selected:null,selecting:null,start:null,stop:null,unselected:null,unselecting:null},_create:function(){var e=this;this._addClass("ui-selectable"),this.dragged=!1,this.refresh=function(){e.elementPos=t(e.element[0]).offset(),e.selectees=t(e.options.filter,e.element[0]),e._addClass(e.selectees,"ui-selectee"),e.selectees.each(function(){var i=t(this),s=i.offset(),n={left:s.left-e.elementPos.left,top:s.top-e.elementPos.top};t.data(this,"selectable-item",{element:this,$element:i,left:n.left,top:n.top,right:n.left+i.outerWidth(),bottom:n.top+i.outerHeight(),startselected:!1,selected:i.hasClass("ui-selected"),selecting:i.hasClass("ui-selecting"),unselecting:i.hasClass("ui-unselecting")})})},this.refresh(),this._mouseInit(),this.helper=t("
        "),this._addClass(this.helper,"ui-selectable-helper")},_destroy:function(){this.selectees.removeData("selectable-item"),this._mouseDestroy()},_mouseStart:function(e){var i=this,s=this.options;this.opos=[e.pageX,e.pageY],this.elementPos=t(this.element[0]).offset(),this.options.disabled||(this.selectees=t(s.filter,this.element[0]),this._trigger("start",e),t(s.appendTo).append(this.helper),this.helper.css({left:e.pageX,top:e.pageY,width:0,height:0}),s.autoRefresh&&this.refresh(),this.selectees.filter(".ui-selected").each(function(){var s=t.data(this,"selectable-item");s.startselected=!0,e.metaKey||e.ctrlKey||(i._removeClass(s.$element,"ui-selected"),s.selected=!1,i._addClass(s.$element,"ui-unselecting"),s.unselecting=!0,i._trigger("unselecting",e,{unselecting:s.element}))}),t(e.target).parents().addBack().each(function(){var s,n=t.data(this,"selectable-item");return n?(s=!e.metaKey&&!e.ctrlKey||!n.$element.hasClass("ui-selected"),i._removeClass(n.$element,s?"ui-unselecting":"ui-selected")._addClass(n.$element,s?"ui-selecting":"ui-unselecting"),n.unselecting=!s,n.selecting=s,n.selected=s,s?i._trigger("selecting",e,{selecting:n.element}):i._trigger("unselecting",e,{unselecting:n.element}),!1):void 0}))},_mouseDrag:function(e){if(this.dragged=!0,!this.options.disabled){var i,s=this,n=this.options,o=this.opos[0],a=this.opos[1],r=e.pageX,h=e.pageY;return o>r&&(i=r,r=o,o=i),a>h&&(i=h,h=a,a=i),this.helper.css({left:o,top:a,width:r-o,height:h-a}),this.selectees.each(function(){var i=t.data(this,"selectable-item"),l=!1,c={};i&&i.element!==s.element[0]&&(c.left=i.left+s.elementPos.left,c.right=i.right+s.elementPos.left,c.top=i.top+s.elementPos.top,c.bottom=i.bottom+s.elementPos.top,"touch"===n.tolerance?l=!(c.left>r||o>c.right||c.top>h||a>c.bottom):"fit"===n.tolerance&&(l=c.left>o&&r>c.right&&c.top>a&&h>c.bottom),l?(i.selected&&(s._removeClass(i.$element,"ui-selected"),i.selected=!1),i.unselecting&&(s._removeClass(i.$element,"ui-unselecting"),i.unselecting=!1),i.selecting||(s._addClass(i.$element,"ui-selecting"),i.selecting=!0,s._trigger("selecting",e,{selecting:i.element}))):(i.selecting&&((e.metaKey||e.ctrlKey)&&i.startselected?(s._removeClass(i.$element,"ui-selecting"),i.selecting=!1,s._addClass(i.$element,"ui-selected"),i.selected=!0):(s._removeClass(i.$element,"ui-selecting"),i.selecting=!1,i.startselected&&(s._addClass(i.$element,"ui-unselecting"),i.unselecting=!0),s._trigger("unselecting",e,{unselecting:i.element}))),i.selected&&(e.metaKey||e.ctrlKey||i.startselected||(s._removeClass(i.$element,"ui-selected"),i.selected=!1,s._addClass(i.$element,"ui-unselecting"),i.unselecting=!0,s._trigger("unselecting",e,{unselecting:i.element})))))}),!1}},_mouseStop:function(e){var i=this;return this.dragged=!1,t(".ui-unselecting",this.element[0]).each(function(){var s=t.data(this,"selectable-item");i._removeClass(s.$element,"ui-unselecting"),s.unselecting=!1,s.startselected=!1,i._trigger("unselected",e,{unselected:s.element})}),t(".ui-selecting",this.element[0]).each(function(){var 
s=t.data(this,"selectable-item");i._removeClass(s.$element,"ui-selecting")._addClass(s.$element,"ui-selected"),s.selecting=!1,s.selected=!0,s.startselected=!0,i._trigger("selected",e,{selected:s.element})}),this._trigger("stop",e),this.helper.remove(),!1}}),t.widget("ui.selectmenu",[t.ui.formResetMixin,{version:"1.12.1",defaultElement:"",widgetEventPrefix:"spin",options:{classes:{"ui-spinner":"ui-corner-all","ui-spinner-down":"ui-corner-br","ui-spinner-up":"ui-corner-tr"},culture:null,icons:{down:"ui-icon-triangle-1-s",up:"ui-icon-triangle-1-n"},incremental:!0,max:null,min:null,numberFormat:null,page:10,step:1,change:null,spin:null,start:null,stop:null},_create:function(){this._setOption("max",this.options.max),this._setOption("min",this.options.min),this._setOption("step",this.options.step),""!==this.value()&&this._value(this.element.val(),!0),this._draw(),this._on(this._events),this._refresh(),this._on(this.window,{beforeunload:function(){this.element.removeAttr("autocomplete")}})},_getCreateOptions:function(){var e=this._super(),i=this.element;return t.each(["min","max","step"],function(t,s){var n=i.attr(s);null!=n&&n.length&&(e[s]=n)}),e},_events:{keydown:function(t){this._start(t)&&this._keydown(t)&&t.preventDefault()},keyup:"_stop",focus:function(){this.previous=this.element.val()},blur:function(t){return this.cancelBlur?(delete this.cancelBlur,void 0):(this._stop(),this._refresh(),this.previous!==this.element.val()&&this._trigger("change",t),void 0)},mousewheel:function(t,e){if(e){if(!this.spinning&&!this._start(t))return!1;this._spin((e>0?1:-1)*this.options.step,t),clearTimeout(this.mousewheelTimer),this.mousewheelTimer=this._delay(function(){this.spinning&&this._stop(t)},100),t.preventDefault()}},"mousedown .ui-spinner-button":function(e){function i(){var e=this.element[0]===t.ui.safeActiveElement(this.document[0]);e||(this.element.trigger("focus"),this.previous=s,this._delay(function(){this.previous=s}))}var s;s=this.element[0]===t.ui.safeActiveElement(this.document[0])?this.previous:this.element.val(),e.preventDefault(),i.call(this),this.cancelBlur=!0,this._delay(function(){delete this.cancelBlur,i.call(this)}),this._start(e)!==!1&&this._repeat(null,t(e.currentTarget).hasClass("ui-spinner-up")?1:-1,e)},"mouseup .ui-spinner-button":"_stop","mouseenter .ui-spinner-button":function(e){return t(e.currentTarget).hasClass("ui-state-active")?this._start(e)===!1?!1:(this._repeat(null,t(e.currentTarget).hasClass("ui-spinner-up")?1:-1,e),void 0):void 0},"mouseleave .ui-spinner-button":"_stop"},_enhance:function(){this.uiSpinner=this.element.attr("autocomplete","off").wrap("").parent().append("")},_draw:function(){this._enhance(),this._addClass(this.uiSpinner,"ui-spinner","ui-widget ui-widget-content"),this._addClass("ui-spinner-input"),this.element.attr("role","spinbutton"),this.buttons=this.uiSpinner.children("a").attr("tabIndex",-1).attr("aria-hidden",!0).button({classes:{"ui-button":""}}),this._removeClass(this.buttons,"ui-corner-all"),this._addClass(this.buttons.first(),"ui-spinner-button ui-spinner-up"),this._addClass(this.buttons.last(),"ui-spinner-button ui-spinner-down"),this.buttons.first().button({icon:this.options.icons.up,showLabel:!1}),this.buttons.last().button({icon:this.options.icons.down,showLabel:!1}),this.buttons.height()>Math.ceil(.5*this.uiSpinner.height())&&this.uiSpinner.height()>0&&this.uiSpinner.height(this.uiSpinner.height())},_keydown:function(e){var i=this.options,s=t.ui.keyCode;switch(e.keyCode){case s.UP:return this._repeat(null,1,e),!0;case s.DOWN:return 
this._repeat(null,-1,e),!0;case s.PAGE_UP:return this._repeat(null,i.page,e),!0;case s.PAGE_DOWN:return this._repeat(null,-i.page,e),!0}return!1},_start:function(t){return this.spinning||this._trigger("start",t)!==!1?(this.counter||(this.counter=1),this.spinning=!0,!0):!1},_repeat:function(t,e,i){t=t||500,clearTimeout(this.timer),this.timer=this._delay(function(){this._repeat(40,e,i)},t),this._spin(e*this.options.step,i)},_spin:function(t,e){var i=this.value()||0;this.counter||(this.counter=1),i=this._adjustValue(i+t*this._increment(this.counter)),this.spinning&&this._trigger("spin",e,{value:i})===!1||(this._value(i),this.counter++)},_increment:function(e){var i=this.options.incremental;return i?t.isFunction(i)?i(e):Math.floor(e*e*e/5e4-e*e/500+17*e/200+1):1},_precision:function(){var t=this._precisionOf(this.options.step);return null!==this.options.min&&(t=Math.max(t,this._precisionOf(this.options.min))),t},_precisionOf:function(t){var e=""+t,i=e.indexOf(".");return-1===i?0:e.length-i-1},_adjustValue:function(t){var e,i,s=this.options;return e=null!==s.min?s.min:0,i=t-e,i=Math.round(i/s.step)*s.step,t=e+i,t=parseFloat(t.toFixed(this._precision())),null!==s.max&&t>s.max?s.max:null!==s.min&&s.min>t?s.min:t},_stop:function(t){this.spinning&&(clearTimeout(this.timer),clearTimeout(this.mousewheelTimer),this.counter=0,this.spinning=!1,this._trigger("stop",t))},_setOption:function(t,e){var i,s,n;return"culture"===t||"numberFormat"===t?(i=this._parse(this.element.val()),this.options[t]=e,this.element.val(this._format(i)),void 0):(("max"===t||"min"===t||"step"===t)&&"string"==typeof e&&(e=this._parse(e)),"icons"===t&&(s=this.buttons.first().find(".ui-icon"),this._removeClass(s,null,this.options.icons.up),this._addClass(s,null,e.up),n=this.buttons.last().find(".ui-icon"),this._removeClass(n,null,this.options.icons.down),this._addClass(n,null,e.down)),this._super(t,e),void 0)},_setOptionDisabled:function(t){this._super(t),this._toggleClass(this.uiSpinner,null,"ui-state-disabled",!!t),this.element.prop("disabled",!!t),this.buttons.button(t?"disable":"enable")},_setOptions:r(function(t){this._super(t)}),_parse:function(t){return"string"==typeof t&&""!==t&&(t=window.Globalize&&this.options.numberFormat?Globalize.parseFloat(t,10,this.options.culture):+t),""===t||isNaN(t)?null:t},_format:function(t){return""===t?"":window.Globalize&&this.options.numberFormat?Globalize.format(t,this.options.numberFormat,this.options.culture):t},_refresh:function(){this.element.attr({"aria-valuemin":this.options.min,"aria-valuemax":this.options.max,"aria-valuenow":this._parse(this.element.val())})},isValid:function(){var t=this.value();return null===t?!1:t===this._adjustValue(t)},_value:function(t,e){var i;""!==t&&(i=this._parse(t),null!==i&&(e||(i=this._adjustValue(i)),t=this._format(i))),this.element.val(t),this._refresh()},_destroy:function(){this.element.prop("disabled",!1).removeAttr("autocomplete role aria-valuemin aria-valuemax aria-valuenow"),this.uiSpinner.replaceWith(this.element)},stepUp:r(function(t){this._stepUp(t)}),_stepUp:function(t){this._start()&&(this._spin((t||1)*this.options.step),this._stop())},stepDown:r(function(t){this._stepDown(t)}),_stepDown:function(t){this._start()&&(this._spin((t||1)*-this.options.step),this._stop())},pageUp:r(function(t){this._stepUp((t||1)*this.options.page)}),pageDown:r(function(t){this._stepDown((t||1)*this.options.page)}),value:function(t){return arguments.length?(r(this._value).call(this,t),void 0):this._parse(this.element.val())},widget:function(){return 
this.uiSpinner}}),t.uiBackCompat!==!1&&t.widget("ui.spinner",t.ui.spinner,{_enhance:function(){this.uiSpinner=this.element.attr("autocomplete","off").wrap(this._uiSpinnerHtml()).parent().append(this._buttonHtml())},_uiSpinnerHtml:function(){return""},_buttonHtml:function(){return""}}),t.ui.spinner,t.widget("ui.tabs",{version:"1.12.1",delay:300,options:{active:null,classes:{"ui-tabs":"ui-corner-all","ui-tabs-nav":"ui-corner-all","ui-tabs-panel":"ui-corner-bottom","ui-tabs-tab":"ui-corner-top"},collapsible:!1,event:"click",heightStyle:"content",hide:null,show:null,activate:null,beforeActivate:null,beforeLoad:null,load:null},_isLocal:function(){var t=/#.*$/;return function(e){var i,s;i=e.href.replace(t,""),s=location.href.replace(t,"");try{i=decodeURIComponent(i)}catch(n){}try{s=decodeURIComponent(s)}catch(n){}return e.hash.length>1&&i===s}}(),_create:function(){var e=this,i=this.options;this.running=!1,this._addClass("ui-tabs","ui-widget ui-widget-content"),this._toggleClass("ui-tabs-collapsible",null,i.collapsible),this._processTabs(),i.active=this._initialActive(),t.isArray(i.disabled)&&(i.disabled=t.unique(i.disabled.concat(t.map(this.tabs.filter(".ui-state-disabled"),function(t){return e.tabs.index(t)}))).sort()),this.active=this.options.active!==!1&&this.anchors.length?this._findActive(i.active):t(),this._refresh(),this.active.length&&this.load(i.active)},_initialActive:function(){var e=this.options.active,i=this.options.collapsible,s=location.hash.substring(1);return null===e&&(s&&this.tabs.each(function(i,n){return t(n).attr("aria-controls")===s?(e=i,!1):void 0}),null===e&&(e=this.tabs.index(this.tabs.filter(".ui-tabs-active"))),(null===e||-1===e)&&(e=this.tabs.length?0:!1)),e!==!1&&(e=this.tabs.index(this.tabs.eq(e)),-1===e&&(e=i?!1:0)),!i&&e===!1&&this.anchors.length&&(e=0),e},_getCreateEventData:function(){return{tab:this.active,panel:this.active.length?this._getPanelForTab(this.active):t()}},_tabKeydown:function(e){var i=t(t.ui.safeActiveElement(this.document[0])).closest("li"),s=this.tabs.index(i),n=!0;if(!this._handlePageNav(e)){switch(e.keyCode){case t.ui.keyCode.RIGHT:case t.ui.keyCode.DOWN:s++;break;case t.ui.keyCode.UP:case t.ui.keyCode.LEFT:n=!1,s--;break;case t.ui.keyCode.END:s=this.anchors.length-1;break;case t.ui.keyCode.HOME:s=0;break;case t.ui.keyCode.SPACE:return e.preventDefault(),clearTimeout(this.activating),this._activate(s),void 0;case t.ui.keyCode.ENTER:return e.preventDefault(),clearTimeout(this.activating),this._activate(s===this.options.active?!1:s),void 0;default:return}e.preventDefault(),clearTimeout(this.activating),s=this._focusNextTab(s,n),e.ctrlKey||e.metaKey||(i.attr("aria-selected","false"),this.tabs.eq(s).attr("aria-selected","true"),this.activating=this._delay(function(){this.option("active",s)},this.delay))}},_panelKeydown:function(e){this._handlePageNav(e)||e.ctrlKey&&e.keyCode===t.ui.keyCode.UP&&(e.preventDefault(),this.active.trigger("focus"))},_handlePageNav:function(e){return e.altKey&&e.keyCode===t.ui.keyCode.PAGE_UP?(this._activate(this._focusNextTab(this.options.active-1,!1)),!0):e.altKey&&e.keyCode===t.ui.keyCode.PAGE_DOWN?(this._activate(this._focusNextTab(this.options.active+1,!0)),!0):void 0},_findNextTab:function(e,i){function s(){return e>n&&(e=0),0>e&&(e=n),e}for(var n=this.tabs.length-1;-1!==t.inArray(s(),this.options.disabled);)e=i?e+1:e-1;return e},_focusNextTab:function(t,e){return t=this._findNextTab(t,e),this.tabs.eq(t).trigger("focus"),t},_setOption:function(t,e){return"active"===t?(this._activate(e),void 
0):(this._super(t,e),"collapsible"===t&&(this._toggleClass("ui-tabs-collapsible",null,e),e||this.options.active!==!1||this._activate(0)),"event"===t&&this._setupEvents(e),"heightStyle"===t&&this._setupHeightStyle(e),void 0)},_sanitizeSelector:function(t){return t?t.replace(/[!"$%&'()*+,.\/:;<=>?@\[\]\^`{|}~]/g,"\\$&"):""},refresh:function(){var e=this.options,i=this.tablist.children(":has(a[href])");e.disabled=t.map(i.filter(".ui-state-disabled"),function(t){return i.index(t)}),this._processTabs(),e.active!==!1&&this.anchors.length?this.active.length&&!t.contains(this.tablist[0],this.active[0])?this.tabs.length===e.disabled.length?(e.active=!1,this.active=t()):this._activate(this._findNextTab(Math.max(0,e.active-1),!1)):e.active=this.tabs.index(this.active):(e.active=!1,this.active=t()),this._refresh()},_refresh:function(){this._setOptionDisabled(this.options.disabled),this._setupEvents(this.options.event),this._setupHeightStyle(this.options.heightStyle),this.tabs.not(this.active).attr({"aria-selected":"false","aria-expanded":"false",tabIndex:-1}),this.panels.not(this._getPanelForTab(this.active)).hide().attr({"aria-hidden":"true"}),this.active.length?(this.active.attr({"aria-selected":"true","aria-expanded":"true",tabIndex:0}),this._addClass(this.active,"ui-tabs-active","ui-state-active"),this._getPanelForTab(this.active).show().attr({"aria-hidden":"false"})):this.tabs.eq(0).attr("tabIndex",0)},_processTabs:function(){var e=this,i=this.tabs,s=this.anchors,n=this.panels;this.tablist=this._getList().attr("role","tablist"),this._addClass(this.tablist,"ui-tabs-nav","ui-helper-reset ui-helper-clearfix ui-widget-header"),this.tablist.on("mousedown"+this.eventNamespace,"> li",function(e){t(this).is(".ui-state-disabled")&&e.preventDefault()}).on("focus"+this.eventNamespace,".ui-tabs-anchor",function(){t(this).closest("li").is(".ui-state-disabled")&&this.blur()}),this.tabs=this.tablist.find("> li:has(a[href])").attr({role:"tab",tabIndex:-1}),this._addClass(this.tabs,"ui-tabs-tab","ui-state-default"),this.anchors=this.tabs.map(function(){return t("a",this)[0]}).attr({role:"presentation",tabIndex:-1}),this._addClass(this.anchors,"ui-tabs-anchor"),this.panels=t(),this.anchors.each(function(i,s){var n,o,a,r=t(s).uniqueId().attr("id"),h=t(s).closest("li"),l=h.attr("aria-controls");e._isLocal(s)?(n=s.hash,a=n.substring(1),o=e.element.find(e._sanitizeSelector(n))):(a=h.attr("aria-controls")||t({}).uniqueId()[0].id,n="#"+a,o=e.element.find(n),o.length||(o=e._createPanel(a),o.insertAfter(e.panels[i-1]||e.tablist)),o.attr("aria-live","polite")),o.length&&(e.panels=e.panels.add(o)),l&&h.data("ui-tabs-aria-controls",l),h.attr({"aria-controls":a,"aria-labelledby":r}),o.attr("aria-labelledby",r)}),this.panels.attr("role","tabpanel"),this._addClass(this.panels,"ui-tabs-panel","ui-widget-content"),i&&(this._off(i.not(this.tabs)),this._off(s.not(this.anchors)),this._off(n.not(this.panels)))},_getList:function(){return this.tablist||this.element.find("ol, ul").eq(0)},_createPanel:function(e){return t("
        ").attr("id",e).data("ui-tabs-destroy",!0)},_setOptionDisabled:function(e){var i,s,n;for(t.isArray(e)&&(e.length?e.length===this.anchors.length&&(e=!0):e=!1),n=0;s=this.tabs[n];n++)i=t(s),e===!0||-1!==t.inArray(n,e)?(i.attr("aria-disabled","true"),this._addClass(i,null,"ui-state-disabled")):(i.removeAttr("aria-disabled"),this._removeClass(i,null,"ui-state-disabled"));this.options.disabled=e,this._toggleClass(this.widget(),this.widgetFullName+"-disabled",null,e===!0)},_setupEvents:function(e){var i={};e&&t.each(e.split(" "),function(t,e){i[e]="_eventHandler"}),this._off(this.anchors.add(this.tabs).add(this.panels)),this._on(!0,this.anchors,{click:function(t){t.preventDefault()}}),this._on(this.anchors,i),this._on(this.tabs,{keydown:"_tabKeydown"}),this._on(this.panels,{keydown:"_panelKeydown"}),this._focusable(this.tabs),this._hoverable(this.tabs)},_setupHeightStyle:function(e){var i,s=this.element.parent();"fill"===e?(i=s.height(),i-=this.element.outerHeight()-this.element.height(),this.element.siblings(":visible").each(function(){var e=t(this),s=e.css("position");"absolute"!==s&&"fixed"!==s&&(i-=e.outerHeight(!0))}),this.element.children().not(this.panels).each(function(){i-=t(this).outerHeight(!0)}),this.panels.each(function(){t(this).height(Math.max(0,i-t(this).innerHeight()+t(this).height()))}).css("overflow","auto")):"auto"===e&&(i=0,this.panels.each(function(){i=Math.max(i,t(this).height("").height())}).height(i))},_eventHandler:function(e){var i=this.options,s=this.active,n=t(e.currentTarget),o=n.closest("li"),a=o[0]===s[0],r=a&&i.collapsible,h=r?t():this._getPanelForTab(o),l=s.length?this._getPanelForTab(s):t(),c={oldTab:s,oldPanel:l,newTab:r?t():o,newPanel:h};e.preventDefault(),o.hasClass("ui-state-disabled")||o.hasClass("ui-tabs-loading")||this.running||a&&!i.collapsible||this._trigger("beforeActivate",e,c)===!1||(i.active=r?!1:this.tabs.index(o),this.active=a?t():o,this.xhr&&this.xhr.abort(),l.length||h.length||t.error("jQuery UI Tabs: Mismatching fragment identifier."),h.length&&this.load(this.tabs.index(o),e),this._toggle(e,c))},_toggle:function(e,i){function s(){o.running=!1,o._trigger("activate",e,i)}function n(){o._addClass(i.newTab.closest("li"),"ui-tabs-active","ui-state-active"),a.length&&o.options.show?o._show(a,o.options.show,s):(a.show(),s())}var o=this,a=i.newPanel,r=i.oldPanel;this.running=!0,r.length&&this.options.hide?this._hide(r,this.options.hide,function(){o._removeClass(i.oldTab.closest("li"),"ui-tabs-active","ui-state-active"),n()}):(this._removeClass(i.oldTab.closest("li"),"ui-tabs-active","ui-state-active"),r.hide(),n()),r.attr("aria-hidden","true"),i.oldTab.attr({"aria-selected":"false","aria-expanded":"false"}),a.length&&r.length?i.oldTab.attr("tabIndex",-1):a.length&&this.tabs.filter(function(){return 0===t(this).attr("tabIndex")}).attr("tabIndex",-1),a.attr("aria-hidden","false"),i.newTab.attr({"aria-selected":"true","aria-expanded":"true",tabIndex:0})},_activate:function(e){var i,s=this._findActive(e);s[0]!==this.active[0]&&(s.length||(s=this.active),i=s.find(".ui-tabs-anchor")[0],this._eventHandler({target:i,currentTarget:i,preventDefault:t.noop}))},_findActive:function(e){return e===!1?t():this.tabs.eq(e)},_getIndex:function(e){return"string"==typeof e&&(e=this.anchors.index(this.anchors.filter("[href$='"+t.ui.escapeSelector(e)+"']"))),e},_destroy:function(){this.xhr&&this.xhr.abort(),this.tablist.removeAttr("role").off(this.eventNamespace),this.anchors.removeAttr("role 
tabIndex").removeUniqueId(),this.tabs.add(this.panels).each(function(){t.data(this,"ui-tabs-destroy")?t(this).remove():t(this).removeAttr("role tabIndex aria-live aria-busy aria-selected aria-labelledby aria-hidden aria-expanded")}),this.tabs.each(function(){var e=t(this),i=e.data("ui-tabs-aria-controls");i?e.attr("aria-controls",i).removeData("ui-tabs-aria-controls"):e.removeAttr("aria-controls")}),this.panels.show(),"content"!==this.options.heightStyle&&this.panels.css("height","")},enable:function(e){var i=this.options.disabled;i!==!1&&(void 0===e?i=!1:(e=this._getIndex(e),i=t.isArray(i)?t.map(i,function(t){return t!==e?t:null}):t.map(this.tabs,function(t,i){return i!==e?i:null})),this._setOptionDisabled(i))},disable:function(e){var i=this.options.disabled;if(i!==!0){if(void 0===e)i=!0;else{if(e=this._getIndex(e),-1!==t.inArray(e,i))return;i=t.isArray(i)?t.merge([e],i).sort():[e]}this._setOptionDisabled(i)}},load:function(e,i){e=this._getIndex(e);var s=this,n=this.tabs.eq(e),o=n.find(".ui-tabs-anchor"),a=this._getPanelForTab(n),r={tab:n,panel:a},h=function(t,e){"abort"===e&&s.panels.stop(!1,!0),s._removeClass(n,"ui-tabs-loading"),a.removeAttr("aria-busy"),t===s.xhr&&delete s.xhr};this._isLocal(o[0])||(this.xhr=t.ajax(this._ajaxSettings(o,i,r)),this.xhr&&"canceled"!==this.xhr.statusText&&(this._addClass(n,"ui-tabs-loading"),a.attr("aria-busy","true"),this.xhr.done(function(t,e,n){setTimeout(function(){a.html(t),s._trigger("load",i,r),h(n,e)},1)}).fail(function(t,e){setTimeout(function(){h(t,e)},1)})))},_ajaxSettings:function(e,i,s){var n=this;return{url:e.attr("href").replace(/#.*$/,""),beforeSend:function(e,o){return n._trigger("beforeLoad",i,t.extend({jqXHR:e,ajaxSettings:o},s))}}},_getPanelForTab:function(e){var i=t(e).attr("aria-controls");return this.element.find(this._sanitizeSelector("#"+i))}}),t.uiBackCompat!==!1&&t.widget("ui.tabs",t.ui.tabs,{_processTabs:function(){this._superApply(arguments),this._addClass(this.tabs,"ui-tab")}}),t.ui.tabs,t.widget("ui.tooltip",{version:"1.12.1",options:{classes:{"ui-tooltip":"ui-corner-all ui-widget-shadow"},content:function(){var e=t(this).attr("title")||"";return t("").text(e).html()},hide:!0,items:"[title]:not([disabled])",position:{my:"left top+15",at:"left bottom",collision:"flipfit flip"},show:!0,track:!1,close:null,open:null},_addDescribedBy:function(e,i){var s=(e.attr("aria-describedby")||"").split(/\s+/);s.push(i),e.data("ui-tooltip-id",i).attr("aria-describedby",t.trim(s.join(" ")))},_removeDescribedBy:function(e){var i=e.data("ui-tooltip-id"),s=(e.attr("aria-describedby")||"").split(/\s+/),n=t.inArray(i,s);-1!==n&&s.splice(n,1),e.removeData("ui-tooltip-id"),s=t.trim(s.join(" ")),s?e.attr("aria-describedby",s):e.removeAttr("aria-describedby")},_create:function(){this._on({mouseover:"open",focusin:"open"}),this.tooltips={},this.parents={},this.liveRegion=t("
        ").attr({role:"log","aria-live":"assertive","aria-relevant":"additions"}).appendTo(this.document[0].body),this._addClass(this.liveRegion,null,"ui-helper-hidden-accessible"),this.disabledTitles=t([])},_setOption:function(e,i){var s=this;this._super(e,i),"content"===e&&t.each(this.tooltips,function(t,e){s._updateContent(e.element)})},_setOptionDisabled:function(t){this[t?"_disable":"_enable"]()},_disable:function(){var e=this;t.each(this.tooltips,function(i,s){var n=t.Event("blur");n.target=n.currentTarget=s.element[0],e.close(n,!0)}),this.disabledTitles=this.disabledTitles.add(this.element.find(this.options.items).addBack().filter(function(){var e=t(this);return e.is("[title]")?e.data("ui-tooltip-title",e.attr("title")).removeAttr("title"):void 0}))},_enable:function(){this.disabledTitles.each(function(){var e=t(this);e.data("ui-tooltip-title")&&e.attr("title",e.data("ui-tooltip-title"))}),this.disabledTitles=t([])},open:function(e){var i=this,s=t(e?e.target:this.element).closest(this.options.items);s.length&&!s.data("ui-tooltip-id")&&(s.attr("title")&&s.data("ui-tooltip-title",s.attr("title")),s.data("ui-tooltip-open",!0),e&&"mouseover"===e.type&&s.parents().each(function(){var e,s=t(this);s.data("ui-tooltip-open")&&(e=t.Event("blur"),e.target=e.currentTarget=this,i.close(e,!0)),s.attr("title")&&(s.uniqueId(),i.parents[this.id]={element:this,title:s.attr("title")},s.attr("title",""))}),this._registerCloseHandlers(e,s),this._updateContent(s,e))},_updateContent:function(t,e){var i,s=this.options.content,n=this,o=e?e.type:null;return"string"==typeof s||s.nodeType||s.jquery?this._open(e,t,s):(i=s.call(t[0],function(i){n._delay(function(){t.data("ui-tooltip-open")&&(e&&(e.type=o),this._open(e,t,i))})}),i&&this._open(e,t,i),void 0)},_open:function(e,i,s){function n(t){l.of=t,a.is(":hidden")||a.position(l)}var o,a,r,h,l=t.extend({},this.options.position);if(s){if(o=this._find(i))return o.tooltip.find(".ui-tooltip-content").html(s),void 0;i.is("[title]")&&(e&&"mouseover"===e.type?i.attr("title",""):i.removeAttr("title")),o=this._tooltip(i),a=o.tooltip,this._addDescribedBy(i,a.attr("id")),a.find(".ui-tooltip-content").html(s),this.liveRegion.children().hide(),h=t("
        ").html(a.find(".ui-tooltip-content").html()),h.removeAttr("name").find("[name]").removeAttr("name"),h.removeAttr("id").find("[id]").removeAttr("id"),h.appendTo(this.liveRegion),this.options.track&&e&&/^mouse/.test(e.type)?(this._on(this.document,{mousemove:n}),n(e)):a.position(t.extend({of:i},this.options.position)),a.hide(),this._show(a,this.options.show),this.options.track&&this.options.show&&this.options.show.delay&&(r=this.delayedShow=setInterval(function(){a.is(":visible")&&(n(l.of),clearInterval(r))},t.fx.interval)),this._trigger("open",e,{tooltip:a})}},_registerCloseHandlers:function(e,i){var s={keyup:function(e){if(e.keyCode===t.ui.keyCode.ESCAPE){var s=t.Event(e);s.currentTarget=i[0],this.close(s,!0)}}};i[0]!==this.element[0]&&(s.remove=function(){this._removeTooltip(this._find(i).tooltip)}),e&&"mouseover"!==e.type||(s.mouseleave="close"),e&&"focusin"!==e.type||(s.focusout="close"),this._on(!0,i,s)},close:function(e){var i,s=this,n=t(e?e.currentTarget:this.element),o=this._find(n);return o?(i=o.tooltip,o.closing||(clearInterval(this.delayedShow),n.data("ui-tooltip-title")&&!n.attr("title")&&n.attr("title",n.data("ui-tooltip-title")),this._removeDescribedBy(n),o.hiding=!0,i.stop(!0),this._hide(i,this.options.hide,function(){s._removeTooltip(t(this))}),n.removeData("ui-tooltip-open"),this._off(n,"mouseleave focusout keyup"),n[0]!==this.element[0]&&this._off(n,"remove"),this._off(this.document,"mousemove"),e&&"mouseleave"===e.type&&t.each(this.parents,function(e,i){t(i.element).attr("title",i.title),delete s.parents[e]}),o.closing=!0,this._trigger("close",e,{tooltip:i}),o.hiding||(o.closing=!1)),void 0):(n.removeData("ui-tooltip-open"),void 0)},_tooltip:function(e){var i=t("
        ").attr("role","tooltip"),s=t("
        ").appendTo(i),n=i.uniqueId().attr("id");return this._addClass(s,"ui-tooltip-content"),this._addClass(i,"ui-tooltip","ui-widget ui-widget-content"),i.appendTo(this._appendTo(e)),this.tooltips[n]={element:e,tooltip:i}},_find:function(t){var e=t.data("ui-tooltip-id");return e?this.tooltips[e]:null},_removeTooltip:function(t){t.remove(),delete this.tooltips[t.attr("id")]},_appendTo:function(t){var e=t.closest(".ui-front, dialog");return e.length||(e=this.document[0].body),e},_destroy:function(){var e=this;t.each(this.tooltips,function(i,s){var n=t.Event("blur"),o=s.element;n.target=n.currentTarget=o[0],e.close(n,!0),t("#"+i).remove(),o.data("ui-tooltip-title")&&(o.attr("title")||o.attr("title",o.data("ui-tooltip-title")),o.removeData("ui-tooltip-title"))}),this.liveRegion.remove()}}),t.uiBackCompat!==!1&&t.widget("ui.tooltip",t.ui.tooltip,{options:{tooltipClass:null},_tooltip:function(){var t=this._superApply(arguments);return this.options.tooltipClass&&t.tooltip.addClass(this.options.tooltipClass),t}}),t.ui.tooltip}); \ No newline at end of file diff --git a/spaces/dataroots/SofaStyler/StyleTransfer/styleTransfer.py b/spaces/dataroots/SofaStyler/StyleTransfer/styleTransfer.py deleted file mode 100644 index 024a57fdc71ba9c761d078c84bff731c33419f5a..0000000000000000000000000000000000000000 --- a/spaces/dataroots/SofaStyler/StyleTransfer/styleTransfer.py +++ /dev/null @@ -1,181 +0,0 @@ -import numpy as np -import paddlehub as phub -import StyleTransfer.srcTransformer.StyTR as StyTR -import StyleTransfer.srcTransformer.transformer as transformer -import tensorflow as tf -import tensorflow_hub as tfhub -import torch -import torch.nn as nn -from PIL import Image -from torchvision import transforms - -# TRANSFORMER - -vgg_path = "StyleTransfer/srcTransformer/Transformer_models/vgg_normalised.pth" -decoder_path = "StyleTransfer/srcTransformer/Transformer_models/decoder_iter_160000.pth" -Trans_path = ( - "StyleTransfer/srcTransformer/Transformer_models/transformer_iter_160000.pth" -) -embedding_path = ( - "StyleTransfer/srcTransformer/Transformer_models/embedding_iter_160000.pth" -) - - -def style_transform(h, w): - """ - This function creates a transformation for the style image, - that crops it and formats it into a tensor. - - Parameters: - h = height - w = width - Return: - transform = transformation pipeline - """ - transform_list = [] - transform_list.append(transforms.CenterCrop((h, w))) - transform_list.append(transforms.ToTensor()) - transform = transforms.Compose(transform_list) - return transform - - -def content_transform(): - """ - This function simply creates a transformation pipeline, - that formats the content image into a tensor. - - Returns: - transform = the transformation pipeline - """ - transform_list = [] - transform_list.append(transforms.ToTensor()) - transform = transforms.Compose(transform_list) - return transform - - -# This loads the network architecture already at building time -vgg = StyTR.vgg -vgg.load_state_dict(torch.load(vgg_path)) -vgg = nn.Sequential(*list(vgg.children())[:44]) -decoder = StyTR.decoder -Trans = transformer.Transformer() -embedding = StyTR.PatchEmbed() -# The (square) shape of the content and style image is fixed -content_size = 640 -style_size = 640 - - -def StyleTransformer(content_img: Image.Image, style_img: Image.Image) -> Image.Image: - """ - This function creates the Transformer network and applies it on - a content and style image to create a styled image. 
- - Parameters: - content_img = the image with the content - style_img = the image with the style/pattern - Returns: - output = an image that is a combination of both - """ - - decoder.eval() - Trans.eval() - vgg.eval() - - state_dict = torch.load(decoder_path) - decoder.load_state_dict(state_dict) - - state_dict = torch.load(Trans_path) - Trans.load_state_dict(state_dict) - - state_dict = torch.load(embedding_path) - embedding.load_state_dict(state_dict) - - network = StyTR.StyTrans(vgg, decoder, embedding, Trans) - network.eval() - - content_tf = content_transform() - style_tf = style_transform(style_size, style_size) - - device = torch.device("cuda" if torch.cuda.is_available() else "cpu") - network.to(device) - content = content_tf(content_img.convert("RGB")) - style = style_tf(style_img.convert("RGB")) - style = style.to(device).unsqueeze(0) - content = content.to(device).unsqueeze(0) - with torch.no_grad(): - output = network(content, style) - output = output[0].cpu().squeeze() - output = ( - output.mul(255) - .add_(0.5) - .clamp_(0, 255) - .permute(1, 2, 0) - .to("cpu", torch.uint8) - .numpy() - ) - return Image.fromarray(output) - - -# STYLE-FAST - -style_transfer_model = tfhub.load( - "https://tfhub.dev/google/magenta/arbitrary-image-stylization-v1-256/2" -) - - -def StyleFAST(content_image: Image.Image, style_image: Image.Image) -> Image.Image: - """ - This function applies a Fast image style transfer technique, - which uses a pretrained model from tensorhub. - - Parameters: - content_image = the image with the content - style_image = the image with the style/pattern - Returns: - stylized_image = an image that is a combination of both - """ - content_image = ( - tf.convert_to_tensor(np.array(content_image), np.float32)[tf.newaxis, ...] - / 255.0 - ) - style_image = ( - tf.convert_to_tensor(np.array(style_image), np.float32)[tf.newaxis, ...] / 255.0 - ) - output = style_transfer_model(content_image, style_image) - stylized_image = output[0] - return Image.fromarray(np.uint8(stylized_image[0] * 255)) - - -# STYLE PROJECTION - -stylepro_artistic = phub.Module(name="stylepro_artistic") - - -def styleProjection( - content_image: Image.Image, style_image: Image.Image, alpha: float = 1.0 -): - """ - This function uses parameter free style transfer, - based on a model from paddlehub. - There is an optional weight parameter alpha, which - allows to control the balance between image and style. - - Parameters: - content_image = the image with the content - style_image = the image with the style/pattern - alpha = weight for the image vs style. - This should be a float between 0 and 1. 
- Returns: - result = an image that is a combination of both - """ - result = stylepro_artistic.style_transfer( - images=[ - { - "content": np.array(content_image.convert("RGB"))[:, :, ::-1], - "styles": [np.array(style_image.convert("RGB"))[:, :, ::-1]], - } - ], - alpha=alpha, - ) - - return Image.fromarray(np.uint8(result[0]["data"])[:, :, ::-1]).convert("RGB") diff --git a/spaces/datboichidori/Ryzan-fantasy-diffusion-v1/README.md b/spaces/datboichidori/Ryzan-fantasy-diffusion-v1/README.md deleted file mode 100644 index 1d57738a578a84b6efcf447401a650717232640c..0000000000000000000000000000000000000000 --- a/spaces/datboichidori/Ryzan-fantasy-diffusion-v1/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Ryzan Fantasy Diffusion V1 -emoji: 🏢 -colorFrom: red -colorTo: pink -sdk: gradio -sdk_version: 3.47.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/davidpiscasio/unpaired-img2img/models/colorization_model.py b/spaces/davidpiscasio/unpaired-img2img/models/colorization_model.py deleted file mode 100644 index 2b4a12722e52cf93b85504bbe9a078f7b396d28b..0000000000000000000000000000000000000000 --- a/spaces/davidpiscasio/unpaired-img2img/models/colorization_model.py +++ /dev/null @@ -1,68 +0,0 @@ -from .pix2pix_model import Pix2PixModel -import torch -from skimage import color # used for lab2rgb -import numpy as np - - -class ColorizationModel(Pix2PixModel): - """This is a subclass of Pix2PixModel for image colorization (black & white image -> colorful images). - - The model training requires '-dataset_model colorization' dataset. - It trains a pix2pix model, mapping from L channel to ab channels in Lab color space. - By default, the colorization dataset will automatically set '--input_nc 1' and '--output_nc 2'. - """ - @staticmethod - def modify_commandline_options(parser, is_train=True): - """Add new dataset-specific options, and rewrite default values for existing options. - - Parameters: - parser -- original option parser - is_train (bool) -- whether training phase or test phase. You can use this flag to add training-specific or test-specific options. - - Returns: - the modified parser. - - By default, we use 'colorization' dataset for this model. - See the original pix2pix paper (https://arxiv.org/pdf/1611.07004.pdf) and colorization results (Figure 9 in the paper) - """ - Pix2PixModel.modify_commandline_options(parser, is_train) - parser.set_defaults(dataset_mode='colorization') - return parser - - def __init__(self, opt): - """Initialize the class. - - Parameters: - opt (Option class)-- stores all the experiment flags; needs to be a subclass of BaseOptions - - For visualization, we set 'visual_names' as 'real_A' (input real image), - 'real_B_rgb' (ground truth RGB image), and 'fake_B_rgb' (predicted RGB image) - We convert the Lab image 'real_B' (inherited from Pix2pixModel) to a RGB image 'real_B_rgb'. - we convert the Lab image 'fake_B' (inherited from Pix2pixModel) to a RGB image 'fake_B_rgb'. - """ - # reuse the pix2pix model - Pix2PixModel.__init__(self, opt) - # specify the images to be visualized. 
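- # (real_B_rgb and fake_B_rgb are filled in later by compute_visuals, which converts the Lab-space tensors back to displayable RGB via lab2rgb.)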
- self.visual_names = ['real_A', 'real_B_rgb', 'fake_B_rgb'] - - def lab2rgb(self, L, AB): - """Convert an Lab tensor image to a RGB numpy output - Parameters: - L (1-channel tensor array): L channel images (range: [-1, 1], torch tensor array) - AB (2-channel tensor array): ab channel images (range: [-1, 1], torch tensor array) - - Returns: - rgb (RGB numpy image): rgb output images (range: [0, 255], numpy array) - """ - AB2 = AB * 110.0 - L2 = (L + 1.0) * 50.0 - Lab = torch.cat([L2, AB2], dim=1) - Lab = Lab[0].data.cpu().float().numpy() - Lab = np.transpose(Lab.astype(np.float64), (1, 2, 0)) - rgb = color.lab2rgb(Lab) * 255 - return rgb - - def compute_visuals(self): - """Calculate additional output images for visdom and HTML visualization""" - self.real_B_rgb = self.lab2rgb(self.real_A, self.real_B) - self.fake_B_rgb = self.lab2rgb(self.real_A, self.fake_B) diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/help.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/help.py deleted file mode 100644 index 2a238de3d6d5d69c70c0130d98f8272be7efabf5..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/help.py +++ /dev/null @@ -1,35 +0,0 @@ -import pkgutil -import sys -import fontTools -import importlib -import os -from pathlib import Path - - -def main(): - """Show this help""" - path = fontTools.__path__ - descriptions = {} - for pkg in sorted( - mod.name - for mod in pkgutil.walk_packages([fontTools.__path__[0]], prefix="fontTools.") - ): - try: - imports = __import__(pkg, globals(), locals(), ["main"]) - except ImportError as e: - continue - try: - description = imports.main.__doc__ - if description: - pkg = pkg.replace("fontTools.", "").replace(".__main__", "") - # show the docstring's first line only - descriptions[pkg] = description.splitlines()[0] - except AttributeError as e: - pass - for pkg, description in descriptions.items(): - print("fonttools %-25s %s" % (pkg, description), file=sys.stderr) - - -if __name__ == "__main__": - print("fonttools v%s\n" % fontTools.__version__, file=sys.stderr) - main() diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/ttLib/tables/otTables.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/ttLib/tables/otTables.py deleted file mode 100644 index 5cabd4b4fcbdc0377660b387dc7ab2d3e4380bc7..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/ttLib/tables/otTables.py +++ /dev/null @@ -1,2274 +0,0 @@ -# coding: utf-8 -"""fontTools.ttLib.tables.otTables -- A collection of classes representing the various -OpenType subtables. - -Most are constructed upon import from data in otData.py, all are populated with -converter objects from otConverters.py. 
-""" -import copy -from enum import IntEnum -from functools import reduce -from math import radians -import itertools -from collections import defaultdict, namedtuple -from fontTools.ttLib.tables.otTraverse import dfs_base_table -from fontTools.misc.arrayTools import quantizeRect -from fontTools.misc.roundTools import otRound -from fontTools.misc.transform import Transform, Identity -from fontTools.misc.textTools import bytesjoin, pad, safeEval -from fontTools.pens.boundsPen import ControlBoundsPen -from fontTools.pens.transformPen import TransformPen -from .otBase import ( - BaseTable, - FormatSwitchingBaseTable, - ValueRecord, - CountReference, - getFormatSwitchingBaseTableClass, -) -from fontTools.feaLib.lookupDebugInfo import LookupDebugInfo, LOOKUP_DEBUG_INFO_KEY -import logging -import struct -from typing import TYPE_CHECKING, Iterator, List, Optional, Set - -if TYPE_CHECKING: - from fontTools.ttLib.ttGlyphSet import _TTGlyphSet - - -log = logging.getLogger(__name__) - - -class AATStateTable(object): - def __init__(self): - self.GlyphClasses = {} # GlyphID --> GlyphClass - self.States = [] # List of AATState, indexed by state number - self.PerGlyphLookups = [] # [{GlyphID:GlyphID}, ...] - - -class AATState(object): - def __init__(self): - self.Transitions = {} # GlyphClass --> AATAction - - -class AATAction(object): - _FLAGS = None - - @staticmethod - def compileActions(font, states): - return (None, None) - - def _writeFlagsToXML(self, xmlWriter): - flags = [f for f in self._FLAGS if self.__dict__[f]] - if flags: - xmlWriter.simpletag("Flags", value=",".join(flags)) - xmlWriter.newline() - if self.ReservedFlags != 0: - xmlWriter.simpletag("ReservedFlags", value="0x%04X" % self.ReservedFlags) - xmlWriter.newline() - - def _setFlag(self, flag): - assert flag in self._FLAGS, "unsupported flag %s" % flag - self.__dict__[flag] = True - - -class RearrangementMorphAction(AATAction): - staticSize = 4 - actionHeaderSize = 0 - _FLAGS = ["MarkFirst", "DontAdvance", "MarkLast"] - - _VERBS = { - 0: "no change", - 1: "Ax ⇒ xA", - 2: "xD ⇒ Dx", - 3: "AxD ⇒ DxA", - 4: "ABx ⇒ xAB", - 5: "ABx ⇒ xBA", - 6: "xCD ⇒ CDx", - 7: "xCD ⇒ DCx", - 8: "AxCD ⇒ CDxA", - 9: "AxCD ⇒ DCxA", - 10: "ABxD ⇒ DxAB", - 11: "ABxD ⇒ DxBA", - 12: "ABxCD ⇒ CDxAB", - 13: "ABxCD ⇒ CDxBA", - 14: "ABxCD ⇒ DCxAB", - 15: "ABxCD ⇒ DCxBA", - } - - def __init__(self): - self.NewState = 0 - self.Verb = 0 - self.MarkFirst = False - self.DontAdvance = False - self.MarkLast = False - self.ReservedFlags = 0 - - def compile(self, writer, font, actionIndex): - assert actionIndex is None - writer.writeUShort(self.NewState) - assert self.Verb >= 0 and self.Verb <= 15, self.Verb - flags = self.Verb | self.ReservedFlags - if self.MarkFirst: - flags |= 0x8000 - if self.DontAdvance: - flags |= 0x4000 - if self.MarkLast: - flags |= 0x2000 - writer.writeUShort(flags) - - def decompile(self, reader, font, actionReader): - assert actionReader is None - self.NewState = reader.readUShort() - flags = reader.readUShort() - self.Verb = flags & 0xF - self.MarkFirst = bool(flags & 0x8000) - self.DontAdvance = bool(flags & 0x4000) - self.MarkLast = bool(flags & 0x2000) - self.ReservedFlags = flags & 0x1FF0 - - def toXML(self, xmlWriter, font, attrs, name): - xmlWriter.begintag(name, **attrs) - xmlWriter.newline() - xmlWriter.simpletag("NewState", value=self.NewState) - xmlWriter.newline() - self._writeFlagsToXML(xmlWriter) - xmlWriter.simpletag("Verb", value=self.Verb) - verbComment = self._VERBS.get(self.Verb) - if verbComment is not None: - 
xmlWriter.comment(verbComment) - xmlWriter.newline() - xmlWriter.endtag(name) - xmlWriter.newline() - - def fromXML(self, name, attrs, content, font): - self.NewState = self.Verb = self.ReservedFlags = 0 - self.MarkFirst = self.DontAdvance = self.MarkLast = False - content = [t for t in content if isinstance(t, tuple)] - for eltName, eltAttrs, eltContent in content: - if eltName == "NewState": - self.NewState = safeEval(eltAttrs["value"]) - elif eltName == "Verb": - self.Verb = safeEval(eltAttrs["value"]) - elif eltName == "ReservedFlags": - self.ReservedFlags = safeEval(eltAttrs["value"]) - elif eltName == "Flags": - for flag in eltAttrs["value"].split(","): - self._setFlag(flag.strip()) - - -class ContextualMorphAction(AATAction): - staticSize = 8 - actionHeaderSize = 0 - _FLAGS = ["SetMark", "DontAdvance"] - - def __init__(self): - self.NewState = 0 - self.SetMark, self.DontAdvance = False, False - self.ReservedFlags = 0 - self.MarkIndex, self.CurrentIndex = 0xFFFF, 0xFFFF - - def compile(self, writer, font, actionIndex): - assert actionIndex is None - writer.writeUShort(self.NewState) - flags = self.ReservedFlags - if self.SetMark: - flags |= 0x8000 - if self.DontAdvance: - flags |= 0x4000 - writer.writeUShort(flags) - writer.writeUShort(self.MarkIndex) - writer.writeUShort(self.CurrentIndex) - - def decompile(self, reader, font, actionReader): - assert actionReader is None - self.NewState = reader.readUShort() - flags = reader.readUShort() - self.SetMark = bool(flags & 0x8000) - self.DontAdvance = bool(flags & 0x4000) - self.ReservedFlags = flags & 0x3FFF - self.MarkIndex = reader.readUShort() - self.CurrentIndex = reader.readUShort() - - def toXML(self, xmlWriter, font, attrs, name): - xmlWriter.begintag(name, **attrs) - xmlWriter.newline() - xmlWriter.simpletag("NewState", value=self.NewState) - xmlWriter.newline() - self._writeFlagsToXML(xmlWriter) - xmlWriter.simpletag("MarkIndex", value=self.MarkIndex) - xmlWriter.newline() - xmlWriter.simpletag("CurrentIndex", value=self.CurrentIndex) - xmlWriter.newline() - xmlWriter.endtag(name) - xmlWriter.newline() - - def fromXML(self, name, attrs, content, font): - self.NewState = self.ReservedFlags = 0 - self.SetMark = self.DontAdvance = False - self.MarkIndex, self.CurrentIndex = 0xFFFF, 0xFFFF - content = [t for t in content if isinstance(t, tuple)] - for eltName, eltAttrs, eltContent in content: - if eltName == "NewState": - self.NewState = safeEval(eltAttrs["value"]) - elif eltName == "Flags": - for flag in eltAttrs["value"].split(","): - self._setFlag(flag.strip()) - elif eltName == "ReservedFlags": - self.ReservedFlags = safeEval(eltAttrs["value"]) - elif eltName == "MarkIndex": - self.MarkIndex = safeEval(eltAttrs["value"]) - elif eltName == "CurrentIndex": - self.CurrentIndex = safeEval(eltAttrs["value"]) - - -class LigAction(object): - def __init__(self): - self.Store = False - # GlyphIndexDelta is a (possibly negative) delta that gets - # added to the glyph ID at the top of the AAT runtime - # execution stack. It is *not* a byte offset into the - # morx table. The result of the addition, which is performed - # at run time by the shaping engine, is an index into - # the ligature components table. See 'morx' specification. - # In the AAT specification, this field is called Offset; - # but its meaning is quite different from other offsets - # in either AAT or OpenType, so we use a different name. 
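- # For example, with a GlyphIndexDelta of -3 and glyph ID 5 on top of the stack, the engine reads entry 2 of the ligature components table; a delta of 0 uses the glyph ID itself as the index.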
- self.GlyphIndexDelta = 0 - - -class LigatureMorphAction(AATAction): - staticSize = 6 - - # 4 bytes for each of {action,ligComponents,ligatures}Offset - actionHeaderSize = 12 - - _FLAGS = ["SetComponent", "DontAdvance"] - - def __init__(self): - self.NewState = 0 - self.SetComponent, self.DontAdvance = False, False - self.ReservedFlags = 0 - self.Actions = [] - - def compile(self, writer, font, actionIndex): - assert actionIndex is not None - writer.writeUShort(self.NewState) - flags = self.ReservedFlags - if self.SetComponent: - flags |= 0x8000 - if self.DontAdvance: - flags |= 0x4000 - if len(self.Actions) > 0: - flags |= 0x2000 - writer.writeUShort(flags) - if len(self.Actions) > 0: - actions = self.compileLigActions() - writer.writeUShort(actionIndex[actions]) - else: - writer.writeUShort(0) - - def decompile(self, reader, font, actionReader): - assert actionReader is not None - self.NewState = reader.readUShort() - flags = reader.readUShort() - self.SetComponent = bool(flags & 0x8000) - self.DontAdvance = bool(flags & 0x4000) - performAction = bool(flags & 0x2000) - # As of 2017-09-12, the 'morx' specification says that - # the reserved bitmask in ligature subtables is 0x3FFF. - # However, the specification also defines a flag 0x2000, - # so the reserved value should actually be 0x1FFF. - # TODO: Report this specification bug to Apple. - self.ReservedFlags = flags & 0x1FFF - actionIndex = reader.readUShort() - if performAction: - self.Actions = self._decompileLigActions(actionReader, actionIndex) - else: - self.Actions = [] - - @staticmethod - def compileActions(font, states): - result, actions, actionIndex = b"", set(), {} - for state in states: - for _glyphClass, trans in state.Transitions.items(): - actions.add(trans.compileLigActions()) - # Sort the compiled actions in decreasing order of - # length, so that the longer sequence come before the - # shorter ones. For each compiled action ABCD, its - # suffixes BCD, CD, and D do not be encoded separately - # (in case they occur); instead, we can just store an - # index that points into the middle of the longer - # sequence. Every compiled AAT ligature sequence is - # terminated with an end-of-sequence flag, which can - # only be set on the last element of the sequence. - # Therefore, it is sufficient to consider just the - # suffixes. 
- for a in sorted(actions, key=lambda x: (-len(x), x)): - if a not in actionIndex: - for i in range(0, len(a), 4): - suffix = a[i:] - suffixIndex = (len(result) + i) // 4 - actionIndex.setdefault(suffix, suffixIndex) - result += a - result = pad(result, 4) - return (result, actionIndex) - - def compileLigActions(self): - result = [] - for i, action in enumerate(self.Actions): - last = i == len(self.Actions) - 1 - value = action.GlyphIndexDelta & 0x3FFFFFFF - value |= 0x80000000 if last else 0 - value |= 0x40000000 if action.Store else 0 - result.append(struct.pack(">L", value)) - return bytesjoin(result) - - def _decompileLigActions(self, actionReader, actionIndex): - actions = [] - last = False - reader = actionReader.getSubReader(actionReader.pos + actionIndex * 4) - while not last: - value = reader.readULong() - last = bool(value & 0x80000000) - action = LigAction() - actions.append(action) - action.Store = bool(value & 0x40000000) - delta = value & 0x3FFFFFFF - if delta >= 0x20000000: # sign-extend 30-bit value - delta = -0x40000000 + delta - action.GlyphIndexDelta = delta - return actions - - def fromXML(self, name, attrs, content, font): - self.NewState = self.ReservedFlags = 0 - self.SetComponent = self.DontAdvance = False - self.ReservedFlags = 0 - self.Actions = [] - content = [t for t in content if isinstance(t, tuple)] - for eltName, eltAttrs, eltContent in content: - if eltName == "NewState": - self.NewState = safeEval(eltAttrs["value"]) - elif eltName == "Flags": - for flag in eltAttrs["value"].split(","): - self._setFlag(flag.strip()) - elif eltName == "ReservedFlags": - self.ReservedFlags = safeEval(eltAttrs["value"]) - elif eltName == "Action": - action = LigAction() - flags = eltAttrs.get("Flags", "").split(",") - flags = [f.strip() for f in flags] - action.Store = "Store" in flags - action.GlyphIndexDelta = safeEval(eltAttrs["GlyphIndexDelta"]) - self.Actions.append(action) - - def toXML(self, xmlWriter, font, attrs, name): - xmlWriter.begintag(name, **attrs) - xmlWriter.newline() - xmlWriter.simpletag("NewState", value=self.NewState) - xmlWriter.newline() - self._writeFlagsToXML(xmlWriter) - for action in self.Actions: - attribs = [("GlyphIndexDelta", action.GlyphIndexDelta)] - if action.Store: - attribs.append(("Flags", "Store")) - xmlWriter.simpletag("Action", attribs) - xmlWriter.newline() - xmlWriter.endtag(name) - xmlWriter.newline() - - -class InsertionMorphAction(AATAction): - staticSize = 8 - actionHeaderSize = 4 # 4 bytes for actionOffset - _FLAGS = [ - "SetMark", - "DontAdvance", - "CurrentIsKashidaLike", - "MarkedIsKashidaLike", - "CurrentInsertBefore", - "MarkedInsertBefore", - ] - - def __init__(self): - self.NewState = 0 - for flag in self._FLAGS: - setattr(self, flag, False) - self.ReservedFlags = 0 - self.CurrentInsertionAction, self.MarkedInsertionAction = [], [] - - def compile(self, writer, font, actionIndex): - assert actionIndex is not None - writer.writeUShort(self.NewState) - flags = self.ReservedFlags - if self.SetMark: - flags |= 0x8000 - if self.DontAdvance: - flags |= 0x4000 - if self.CurrentIsKashidaLike: - flags |= 0x2000 - if self.MarkedIsKashidaLike: - flags |= 0x1000 - if self.CurrentInsertBefore: - flags |= 0x0800 - if self.MarkedInsertBefore: - flags |= 0x0400 - flags |= len(self.CurrentInsertionAction) << 5 - flags |= len(self.MarkedInsertionAction) - writer.writeUShort(flags) - if len(self.CurrentInsertionAction) > 0: - currentIndex = actionIndex[tuple(self.CurrentInsertionAction)] - else: - currentIndex = 0xFFFF - 
writer.writeUShort(currentIndex) - if len(self.MarkedInsertionAction) > 0: - markedIndex = actionIndex[tuple(self.MarkedInsertionAction)] - else: - markedIndex = 0xFFFF - writer.writeUShort(markedIndex) - - def decompile(self, reader, font, actionReader): - assert actionReader is not None - self.NewState = reader.readUShort() - flags = reader.readUShort() - self.SetMark = bool(flags & 0x8000) - self.DontAdvance = bool(flags & 0x4000) - self.CurrentIsKashidaLike = bool(flags & 0x2000) - self.MarkedIsKashidaLike = bool(flags & 0x1000) - self.CurrentInsertBefore = bool(flags & 0x0800) - self.MarkedInsertBefore = bool(flags & 0x0400) - self.CurrentInsertionAction = self._decompileInsertionAction( - actionReader, font, index=reader.readUShort(), count=((flags & 0x03E0) >> 5) - ) - self.MarkedInsertionAction = self._decompileInsertionAction( - actionReader, font, index=reader.readUShort(), count=(flags & 0x001F) - ) - - def _decompileInsertionAction(self, actionReader, font, index, count): - if index == 0xFFFF or count == 0: - return [] - reader = actionReader.getSubReader(actionReader.pos + index * 2) - return font.getGlyphNameMany(reader.readUShortArray(count)) - - def toXML(self, xmlWriter, font, attrs, name): - xmlWriter.begintag(name, **attrs) - xmlWriter.newline() - xmlWriter.simpletag("NewState", value=self.NewState) - xmlWriter.newline() - self._writeFlagsToXML(xmlWriter) - for g in self.CurrentInsertionAction: - xmlWriter.simpletag("CurrentInsertionAction", glyph=g) - xmlWriter.newline() - for g in self.MarkedInsertionAction: - xmlWriter.simpletag("MarkedInsertionAction", glyph=g) - xmlWriter.newline() - xmlWriter.endtag(name) - xmlWriter.newline() - - def fromXML(self, name, attrs, content, font): - self.__init__() - content = [t for t in content if isinstance(t, tuple)] - for eltName, eltAttrs, eltContent in content: - if eltName == "NewState": - self.NewState = safeEval(eltAttrs["value"]) - elif eltName == "Flags": - for flag in eltAttrs["value"].split(","): - self._setFlag(flag.strip()) - elif eltName == "CurrentInsertionAction": - self.CurrentInsertionAction.append(eltAttrs["glyph"]) - elif eltName == "MarkedInsertionAction": - self.MarkedInsertionAction.append(eltAttrs["glyph"]) - else: - assert False, eltName - - @staticmethod - def compileActions(font, states): - actions, actionIndex, result = set(), {}, b"" - for state in states: - for _glyphClass, trans in state.Transitions.items(): - if trans.CurrentInsertionAction is not None: - actions.add(tuple(trans.CurrentInsertionAction)) - if trans.MarkedInsertionAction is not None: - actions.add(tuple(trans.MarkedInsertionAction)) - # Sort the compiled actions in decreasing order of - # length, so that the longer sequence come before the - # shorter ones. - for action in sorted(actions, key=lambda x: (-len(x), x)): - # We insert all sub-sequences of the action glyph sequence - # into actionIndex. For example, if one action triggers on - # glyph sequence [A, B, C, D, E] and another action triggers - # on [C, D], we return result=[A, B, C, D, E] (as list of - # encoded glyph IDs), and actionIndex={('A','B','C','D','E'): 0, - # ('C','D'): 2}. 
- if action in actionIndex: - continue - for start in range(0, len(action)): - startIndex = (len(result) // 2) + start - for limit in range(start, len(action)): - glyphs = action[start : limit + 1] - actionIndex.setdefault(glyphs, startIndex) - for glyph in action: - glyphID = font.getGlyphID(glyph) - result += struct.pack(">H", glyphID) - return result, actionIndex - - -class FeatureParams(BaseTable): - def compile(self, writer, font): - assert ( - featureParamTypes.get(writer["FeatureTag"]) == self.__class__ - ), "Wrong FeatureParams type for feature '%s': %s" % ( - writer["FeatureTag"], - self.__class__.__name__, - ) - BaseTable.compile(self, writer, font) - - def toXML(self, xmlWriter, font, attrs=None, name=None): - BaseTable.toXML(self, xmlWriter, font, attrs, name=self.__class__.__name__) - - -class FeatureParamsSize(FeatureParams): - pass - - -class FeatureParamsStylisticSet(FeatureParams): - pass - - -class FeatureParamsCharacterVariants(FeatureParams): - pass - - -class Coverage(FormatSwitchingBaseTable): - - # manual implementation to get rid of glyphID dependencies - - def populateDefaults(self, propagator=None): - if not hasattr(self, "glyphs"): - self.glyphs = [] - - def postRead(self, rawTable, font): - if self.Format == 1: - self.glyphs = rawTable["GlyphArray"] - elif self.Format == 2: - glyphs = self.glyphs = [] - ranges = rawTable["RangeRecord"] - # Some SIL fonts have coverage entries that don't have sorted - # StartCoverageIndex. If it is so, fixup and warn. We undo - # this when writing font out. - sorted_ranges = sorted(ranges, key=lambda a: a.StartCoverageIndex) - if ranges != sorted_ranges: - log.warning("GSUB/GPOS Coverage is not sorted by glyph ids.") - ranges = sorted_ranges - del sorted_ranges - for r in ranges: - start = r.Start - end = r.End - startID = font.getGlyphID(start) - endID = font.getGlyphID(end) + 1 - glyphs.extend(font.getGlyphNameMany(range(startID, endID))) - else: - self.glyphs = [] - log.warning("Unknown Coverage format: %s", self.Format) - del self.Format # Don't need this anymore - - def preWrite(self, font): - glyphs = getattr(self, "glyphs", None) - if glyphs is None: - glyphs = self.glyphs = [] - format = 1 - rawTable = {"GlyphArray": glyphs} - if glyphs: - # find out whether Format 2 is more compact or not - glyphIDs = font.getGlyphIDMany(glyphs) - brokenOrder = sorted(glyphIDs) != glyphIDs - - last = glyphIDs[0] - ranges = [[last]] - for glyphID in glyphIDs[1:]: - if glyphID != last + 1: - ranges[-1].append(last) - ranges.append([glyphID]) - last = glyphID - ranges[-1].append(last) - - if brokenOrder or len(ranges) * 3 < len(glyphs): # 3 words vs. 
1 word - # Format 2 is more compact - index = 0 - for i in range(len(ranges)): - start, end = ranges[i] - r = RangeRecord() - r.StartID = start - r.Start = font.getGlyphName(start) - r.End = font.getGlyphName(end) - r.StartCoverageIndex = index - ranges[i] = r - index = index + end - start + 1 - if brokenOrder: - log.warning("GSUB/GPOS Coverage is not sorted by glyph ids.") - ranges.sort(key=lambda a: a.StartID) - for r in ranges: - del r.StartID - format = 2 - rawTable = {"RangeRecord": ranges} - # else: - # fallthrough; Format 1 is more compact - self.Format = format - return rawTable - - def toXML2(self, xmlWriter, font): - for glyphName in getattr(self, "glyphs", []): - xmlWriter.simpletag("Glyph", value=glyphName) - xmlWriter.newline() - - def fromXML(self, name, attrs, content, font): - glyphs = getattr(self, "glyphs", None) - if glyphs is None: - glyphs = [] - self.glyphs = glyphs - glyphs.append(attrs["value"]) - - -# The special 0xFFFFFFFF delta-set index is used to indicate that there -# is no variation data in the ItemVariationStore for a given variable field -NO_VARIATION_INDEX = 0xFFFFFFFF - - -class DeltaSetIndexMap(getFormatSwitchingBaseTableClass("uint8")): - def populateDefaults(self, propagator=None): - if not hasattr(self, "mapping"): - self.mapping = [] - - def postRead(self, rawTable, font): - assert (rawTable["EntryFormat"] & 0xFFC0) == 0 - self.mapping = rawTable["mapping"] - - @staticmethod - def getEntryFormat(mapping): - ored = 0 - for idx in mapping: - ored |= idx - - inner = ored & 0xFFFF - innerBits = 0 - while inner: - innerBits += 1 - inner >>= 1 - innerBits = max(innerBits, 1) - assert innerBits <= 16 - - ored = (ored >> (16 - innerBits)) | (ored & ((1 << innerBits) - 1)) - if ored <= 0x000000FF: - entrySize = 1 - elif ored <= 0x0000FFFF: - entrySize = 2 - elif ored <= 0x00FFFFFF: - entrySize = 3 - else: - entrySize = 4 - - return ((entrySize - 1) << 4) | (innerBits - 1) - - def preWrite(self, font): - mapping = getattr(self, "mapping", None) - if mapping is None: - mapping = self.mapping = [] - self.Format = 1 if len(mapping) > 0xFFFF else 0 - rawTable = self.__dict__.copy() - rawTable["MappingCount"] = len(mapping) - rawTable["EntryFormat"] = self.getEntryFormat(mapping) - return rawTable - - def toXML2(self, xmlWriter, font): - # Make xml dump less verbose, by omitting no-op entries like: - # - xmlWriter.comment("Omitted values default to 0xFFFF/0xFFFF (no variations)") - xmlWriter.newline() - for i, value in enumerate(getattr(self, "mapping", [])): - attrs = [("index", i)] - if value != NO_VARIATION_INDEX: - attrs.extend( - [ - ("outer", value >> 16), - ("inner", value & 0xFFFF), - ] - ) - xmlWriter.simpletag("Map", attrs) - xmlWriter.newline() - - def fromXML(self, name, attrs, content, font): - mapping = getattr(self, "mapping", None) - if mapping is None: - self.mapping = mapping = [] - index = safeEval(attrs["index"]) - outer = safeEval(attrs.get("outer", "0xFFFF")) - inner = safeEval(attrs.get("inner", "0xFFFF")) - assert inner <= 0xFFFF - mapping.insert(index, (outer << 16) | inner) - - -class VarIdxMap(BaseTable): - def populateDefaults(self, propagator=None): - if not hasattr(self, "mapping"): - self.mapping = {} - - def postRead(self, rawTable, font): - assert (rawTable["EntryFormat"] & 0xFFC0) == 0 - glyphOrder = font.getGlyphOrder() - mapList = rawTable["mapping"] - mapList.extend([mapList[-1]] * (len(glyphOrder) - len(mapList))) - self.mapping = dict(zip(glyphOrder, mapList)) - - def preWrite(self, font): - mapping = getattr(self, "mapping", 
None) - if mapping is None: - mapping = self.mapping = {} - - glyphOrder = font.getGlyphOrder() - mapping = [mapping[g] for g in glyphOrder] - while len(mapping) > 1 and mapping[-2] == mapping[-1]: - del mapping[-1] - - rawTable = {"mapping": mapping} - rawTable["MappingCount"] = len(mapping) - rawTable["EntryFormat"] = DeltaSetIndexMap.getEntryFormat(mapping) - return rawTable - - def toXML2(self, xmlWriter, font): - for glyph, value in sorted(getattr(self, "mapping", {}).items()): - attrs = ( - ("glyph", glyph), - ("outer", value >> 16), - ("inner", value & 0xFFFF), - ) - xmlWriter.simpletag("Map", attrs) - xmlWriter.newline() - - def fromXML(self, name, attrs, content, font): - mapping = getattr(self, "mapping", None) - if mapping is None: - mapping = {} - self.mapping = mapping - try: - glyph = attrs["glyph"] - except: # https://github.com/fonttools/fonttools/commit/21cbab8ce9ded3356fef3745122da64dcaf314e9#commitcomment-27649836 - glyph = font.getGlyphOrder()[attrs["index"]] - outer = safeEval(attrs["outer"]) - inner = safeEval(attrs["inner"]) - assert inner <= 0xFFFF - mapping[glyph] = (outer << 16) | inner - - -class VarRegionList(BaseTable): - def preWrite(self, font): - # The OT spec says VarStore.VarRegionList.RegionAxisCount should always - # be equal to the fvar.axisCount, and OTS < v8.0.0 enforces this rule - # even when the VarRegionList is empty. We can't treat RegionAxisCount - # like a normal propagated count (== len(Region[i].VarRegionAxis)), - # otherwise it would default to 0 if VarRegionList is empty. - # Thus, we force it to always be equal to fvar.axisCount. - # https://github.com/khaledhosny/ots/pull/192 - fvarTable = font.get("fvar") - if fvarTable: - self.RegionAxisCount = len(fvarTable.axes) - return { - **self.__dict__, - "RegionAxisCount": CountReference(self.__dict__, "RegionAxisCount"), - } - - -class SingleSubst(FormatSwitchingBaseTable): - def populateDefaults(self, propagator=None): - if not hasattr(self, "mapping"): - self.mapping = {} - - def postRead(self, rawTable, font): - mapping = {} - input = _getGlyphsFromCoverageTable(rawTable["Coverage"]) - if self.Format == 1: - delta = rawTable["DeltaGlyphID"] - inputGIDS = font.getGlyphIDMany(input) - outGIDS = [(glyphID + delta) % 65536 for glyphID in inputGIDS] - outNames = font.getGlyphNameMany(outGIDS) - for inp, out in zip(input, outNames): - mapping[inp] = out - elif self.Format == 2: - assert ( - len(input) == rawTable["GlyphCount"] - ), "invalid SingleSubstFormat2 table" - subst = rawTable["Substitute"] - for inp, sub in zip(input, subst): - mapping[inp] = sub - else: - assert 0, "unknown format: %s" % self.Format - self.mapping = mapping - del self.Format # Don't need this anymore - - def preWrite(self, font): - mapping = getattr(self, "mapping", None) - if mapping is None: - mapping = self.mapping = {} - items = list(mapping.items()) - getGlyphID = font.getGlyphID - gidItems = [(getGlyphID(a), getGlyphID(b)) for a, b in items] - sortableItems = sorted(zip(gidItems, items)) - - # figure out format - format = 2 - delta = None - for inID, outID in gidItems: - if delta is None: - delta = (outID - inID) % 65536 - - if (inID + delta) % 65536 != outID: - break - else: - if delta is None: - # the mapping is empty, better use format 2 - format = 2 - else: - format = 1 - - rawTable = {} - self.Format = format - cov = Coverage() - input = [item[1][0] for item in sortableItems] - subst = [item[1][1] for item in sortableItems] - cov.glyphs = input - rawTable["Coverage"] = cov - if format == 1: - assert delta is 
not None - rawTable["DeltaGlyphID"] = delta - else: - rawTable["Substitute"] = subst - return rawTable - - def toXML2(self, xmlWriter, font): - items = sorted(self.mapping.items()) - for inGlyph, outGlyph in items: - xmlWriter.simpletag("Substitution", [("in", inGlyph), ("out", outGlyph)]) - xmlWriter.newline() - - def fromXML(self, name, attrs, content, font): - mapping = getattr(self, "mapping", None) - if mapping is None: - mapping = {} - self.mapping = mapping - mapping[attrs["in"]] = attrs["out"] - - -class MultipleSubst(FormatSwitchingBaseTable): - def populateDefaults(self, propagator=None): - if not hasattr(self, "mapping"): - self.mapping = {} - - def postRead(self, rawTable, font): - mapping = {} - if self.Format == 1: - glyphs = _getGlyphsFromCoverageTable(rawTable["Coverage"]) - subst = [s.Substitute for s in rawTable["Sequence"]] - mapping = dict(zip(glyphs, subst)) - else: - assert 0, "unknown format: %s" % self.Format - self.mapping = mapping - del self.Format # Don't need this anymore - - def preWrite(self, font): - mapping = getattr(self, "mapping", None) - if mapping is None: - mapping = self.mapping = {} - cov = Coverage() - cov.glyphs = sorted(list(mapping.keys()), key=font.getGlyphID) - self.Format = 1 - rawTable = { - "Coverage": cov, - "Sequence": [self.makeSequence_(mapping[glyph]) for glyph in cov.glyphs], - } - return rawTable - - def toXML2(self, xmlWriter, font): - items = sorted(self.mapping.items()) - for inGlyph, outGlyphs in items: - out = ",".join(outGlyphs) - xmlWriter.simpletag("Substitution", [("in", inGlyph), ("out", out)]) - xmlWriter.newline() - - def fromXML(self, name, attrs, content, font): - mapping = getattr(self, "mapping", None) - if mapping is None: - mapping = {} - self.mapping = mapping - - # TTX v3.0 and earlier. - if name == "Coverage": - self.old_coverage_ = [] - for element in content: - if not isinstance(element, tuple): - continue - element_name, element_attrs, _ = element - if element_name == "Glyph": - self.old_coverage_.append(element_attrs["value"]) - return - if name == "Sequence": - index = int(attrs.get("index", len(mapping))) - glyph = self.old_coverage_[index] - glyph_mapping = mapping[glyph] = [] - for element in content: - if not isinstance(element, tuple): - continue - element_name, element_attrs, _ = element - if element_name == "Substitute": - glyph_mapping.append(element_attrs["value"]) - return - - # TTX v3.1 and later. 
- outGlyphs = attrs["out"].split(",") if attrs["out"] else [] - mapping[attrs["in"]] = [g.strip() for g in outGlyphs] - - @staticmethod - def makeSequence_(g): - seq = Sequence() - seq.Substitute = g - return seq - - -class ClassDef(FormatSwitchingBaseTable): - def populateDefaults(self, propagator=None): - if not hasattr(self, "classDefs"): - self.classDefs = {} - - def postRead(self, rawTable, font): - classDefs = {} - - if self.Format == 1: - start = rawTable["StartGlyph"] - classList = rawTable["ClassValueArray"] - startID = font.getGlyphID(start) - endID = startID + len(classList) - glyphNames = font.getGlyphNameMany(range(startID, endID)) - for glyphName, cls in zip(glyphNames, classList): - if cls: - classDefs[glyphName] = cls - - elif self.Format == 2: - records = rawTable["ClassRangeRecord"] - for rec in records: - cls = rec.Class - if not cls: - continue - start = rec.Start - end = rec.End - startID = font.getGlyphID(start) - endID = font.getGlyphID(end) + 1 - glyphNames = font.getGlyphNameMany(range(startID, endID)) - for glyphName in glyphNames: - classDefs[glyphName] = cls - else: - log.warning("Unknown ClassDef format: %s", self.Format) - self.classDefs = classDefs - del self.Format # Don't need this anymore - - def _getClassRanges(self, font): - classDefs = getattr(self, "classDefs", None) - if classDefs is None: - self.classDefs = {} - return - getGlyphID = font.getGlyphID - items = [] - for glyphName, cls in classDefs.items(): - if not cls: - continue - items.append((getGlyphID(glyphName), glyphName, cls)) - if items: - items.sort() - last, lastName, lastCls = items[0] - ranges = [[lastCls, last, lastName]] - for glyphID, glyphName, cls in items[1:]: - if glyphID != last + 1 or cls != lastCls: - ranges[-1].extend([last, lastName]) - ranges.append([cls, glyphID, glyphName]) - last = glyphID - lastName = glyphName - lastCls = cls - ranges[-1].extend([last, lastName]) - return ranges - - def preWrite(self, font): - format = 2 - rawTable = {"ClassRangeRecord": []} - ranges = self._getClassRanges(font) - if ranges: - startGlyph = ranges[0][1] - endGlyph = ranges[-1][3] - glyphCount = endGlyph - startGlyph + 1 - if len(ranges) * 3 < glyphCount + 1: - # Format 2 is more compact - for i in range(len(ranges)): - cls, start, startName, end, endName = ranges[i] - rec = ClassRangeRecord() - rec.Start = startName - rec.End = endName - rec.Class = cls - ranges[i] = rec - format = 2 - rawTable = {"ClassRangeRecord": ranges} - else: - # Format 1 is more compact - startGlyphName = ranges[0][2] - classes = [0] * glyphCount - for cls, start, startName, end, endName in ranges: - for g in range(start - startGlyph, end - startGlyph + 1): - classes[g] = cls - format = 1 - rawTable = {"StartGlyph": startGlyphName, "ClassValueArray": classes} - self.Format = format - return rawTable - - def toXML2(self, xmlWriter, font): - items = sorted(self.classDefs.items()) - for glyphName, cls in items: - xmlWriter.simpletag("ClassDef", [("glyph", glyphName), ("class", cls)]) - xmlWriter.newline() - - def fromXML(self, name, attrs, content, font): - classDefs = getattr(self, "classDefs", None) - if classDefs is None: - classDefs = {} - self.classDefs = classDefs - classDefs[attrs["glyph"]] = int(attrs["class"]) - - -class AlternateSubst(FormatSwitchingBaseTable): - def populateDefaults(self, propagator=None): - if not hasattr(self, "alternates"): - self.alternates = {} - - def postRead(self, rawTable, font): - alternates = {} - if self.Format == 1: - input = _getGlyphsFromCoverageTable(rawTable["Coverage"]) - 
alts = rawTable["AlternateSet"] - assert len(input) == len(alts) - for inp, alt in zip(input, alts): - alternates[inp] = alt.Alternate - else: - assert 0, "unknown format: %s" % self.Format - self.alternates = alternates - del self.Format # Don't need this anymore - - def preWrite(self, font): - self.Format = 1 - alternates = getattr(self, "alternates", None) - if alternates is None: - alternates = self.alternates = {} - items = list(alternates.items()) - for i in range(len(items)): - glyphName, set = items[i] - items[i] = font.getGlyphID(glyphName), glyphName, set - items.sort() - cov = Coverage() - cov.glyphs = [item[1] for item in items] - alternates = [] - setList = [item[-1] for item in items] - for set in setList: - alts = AlternateSet() - alts.Alternate = set - alternates.append(alts) - # a special case to deal with the fact that several hundred Adobe Japan1-5 - # CJK fonts will overflow an offset if the coverage table isn't pushed to the end. - # Also useful in that when splitting a sub-table because of an offset overflow - # I don't need to calculate the change in the subtable offset due to the change in the coverage table size. - # Allows packing more rules in subtable. - self.sortCoverageLast = 1 - return {"Coverage": cov, "AlternateSet": alternates} - - def toXML2(self, xmlWriter, font): - items = sorted(self.alternates.items()) - for glyphName, alternates in items: - xmlWriter.begintag("AlternateSet", glyph=glyphName) - xmlWriter.newline() - for alt in alternates: - xmlWriter.simpletag("Alternate", glyph=alt) - xmlWriter.newline() - xmlWriter.endtag("AlternateSet") - xmlWriter.newline() - - def fromXML(self, name, attrs, content, font): - alternates = getattr(self, "alternates", None) - if alternates is None: - alternates = {} - self.alternates = alternates - glyphName = attrs["glyph"] - set = [] - alternates[glyphName] = set - for element in content: - if not isinstance(element, tuple): - continue - name, attrs, content = element - set.append(attrs["glyph"]) - - -class LigatureSubst(FormatSwitchingBaseTable): - def populateDefaults(self, propagator=None): - if not hasattr(self, "ligatures"): - self.ligatures = {} - - def postRead(self, rawTable, font): - ligatures = {} - if self.Format == 1: - input = _getGlyphsFromCoverageTable(rawTable["Coverage"]) - ligSets = rawTable["LigatureSet"] - assert len(input) == len(ligSets) - for i in range(len(input)): - ligatures[input[i]] = ligSets[i].Ligature - else: - assert 0, "unknown format: %s" % self.Format - self.ligatures = ligatures - del self.Format # Don't need this anymore - - def preWrite(self, font): - self.Format = 1 - ligatures = getattr(self, "ligatures", None) - if ligatures is None: - ligatures = self.ligatures = {} - - if ligatures and isinstance(next(iter(ligatures)), tuple): - # New high-level API in v3.1 and later. Note that we just support compiling this - # for now. We don't load to this API, and don't do XML with it. 
- - # ligatures is map from components-sequence to lig-glyph - newLigatures = dict() - for comps, lig in sorted( - ligatures.items(), key=lambda item: (-len(item[0]), item[0]) - ): - ligature = Ligature() - ligature.Component = comps[1:] - ligature.CompCount = len(comps) - ligature.LigGlyph = lig - newLigatures.setdefault(comps[0], []).append(ligature) - ligatures = newLigatures - - items = list(ligatures.items()) - for i in range(len(items)): - glyphName, set = items[i] - items[i] = font.getGlyphID(glyphName), glyphName, set - items.sort() - cov = Coverage() - cov.glyphs = [item[1] for item in items] - - ligSets = [] - setList = [item[-1] for item in items] - for set in setList: - ligSet = LigatureSet() - ligs = ligSet.Ligature = [] - for lig in set: - ligs.append(lig) - ligSets.append(ligSet) - # Useful in that when splitting a sub-table because of an offset overflow - # I don't need to calculate the change in subtabl offset due to the coverage table size. - # Allows packing more rules in subtable. - self.sortCoverageLast = 1 - return {"Coverage": cov, "LigatureSet": ligSets} - - def toXML2(self, xmlWriter, font): - items = sorted(self.ligatures.items()) - for glyphName, ligSets in items: - xmlWriter.begintag("LigatureSet", glyph=glyphName) - xmlWriter.newline() - for lig in ligSets: - xmlWriter.simpletag( - "Ligature", glyph=lig.LigGlyph, components=",".join(lig.Component) - ) - xmlWriter.newline() - xmlWriter.endtag("LigatureSet") - xmlWriter.newline() - - def fromXML(self, name, attrs, content, font): - ligatures = getattr(self, "ligatures", None) - if ligatures is None: - ligatures = {} - self.ligatures = ligatures - glyphName = attrs["glyph"] - ligs = [] - ligatures[glyphName] = ligs - for element in content: - if not isinstance(element, tuple): - continue - name, attrs, content = element - lig = Ligature() - lig.LigGlyph = attrs["glyph"] - components = attrs["components"] - lig.Component = components.split(",") if components else [] - lig.CompCount = len(lig.Component) - ligs.append(lig) - - -class COLR(BaseTable): - def decompile(self, reader, font): - # COLRv0 is exceptional in that LayerRecordCount appears *after* the - # LayerRecordArray it counts, but the parser logic expects Count fields - # to always precede the arrays. Here we work around this by parsing the - # LayerRecordCount before the rest of the table, and storing it in - # the reader's local state. - subReader = reader.getSubReader(offset=0) - for conv in self.getConverters(): - if conv.name != "LayerRecordCount": - subReader.advance(conv.staticSize) - continue - reader[conv.name] = conv.read(subReader, font, tableDict={}) - break - else: - raise AssertionError("LayerRecordCount converter not found") - return BaseTable.decompile(self, reader, font) - - def preWrite(self, font): - # The writer similarly assumes Count values precede the things counted, - # thus here we pre-initialize a CountReference; the actual count value - # will be set to the lenght of the array by the time this is assembled. 
- self.LayerRecordCount = None - return { - **self.__dict__, - "LayerRecordCount": CountReference(self.__dict__, "LayerRecordCount"), - } - - def computeClipBoxes(self, glyphSet: "_TTGlyphSet", quantization: int = 1): - if self.Version == 0: - return - - clips = {} - for rec in self.BaseGlyphList.BaseGlyphPaintRecord: - try: - clipBox = rec.Paint.computeClipBox(self, glyphSet, quantization) - except Exception as e: - from fontTools.ttLib import TTLibError - - raise TTLibError( - f"Failed to compute COLR ClipBox for {rec.BaseGlyph!r}" - ) from e - - if clipBox is not None: - clips[rec.BaseGlyph] = clipBox - - hasClipList = hasattr(self, "ClipList") and self.ClipList is not None - if not clips: - if hasClipList: - self.ClipList = None - else: - if not hasClipList: - self.ClipList = ClipList() - self.ClipList.Format = 1 - self.ClipList.clips = clips - - -class LookupList(BaseTable): - @property - def table(self): - for l in self.Lookup: - for st in l.SubTable: - if type(st).__name__.endswith("Subst"): - return "GSUB" - if type(st).__name__.endswith("Pos"): - return "GPOS" - raise ValueError - - def toXML2(self, xmlWriter, font): - if ( - not font - or "Debg" not in font - or LOOKUP_DEBUG_INFO_KEY not in font["Debg"].data - ): - return super().toXML2(xmlWriter, font) - debugData = font["Debg"].data[LOOKUP_DEBUG_INFO_KEY][self.table] - for conv in self.getConverters(): - if conv.repeat: - value = getattr(self, conv.name, []) - for lookupIndex, item in enumerate(value): - if str(lookupIndex) in debugData: - info = LookupDebugInfo(*debugData[str(lookupIndex)]) - tag = info.location - if info.name: - tag = f"{info.name}: {tag}" - if info.feature: - script, language, feature = info.feature - tag = f"{tag} in {feature} ({script}/{language})" - xmlWriter.comment(tag) - xmlWriter.newline() - - conv.xmlWrite( - xmlWriter, font, item, conv.name, [("index", lookupIndex)] - ) - else: - if conv.aux and not eval(conv.aux, None, vars(self)): - continue - value = getattr( - self, conv.name, None - ) # TODO Handle defaults instead of defaulting to None! 
- conv.xmlWrite(xmlWriter, font, value, conv.name, []) - - -class BaseGlyphRecordArray(BaseTable): - def preWrite(self, font): - self.BaseGlyphRecord = sorted( - self.BaseGlyphRecord, key=lambda rec: font.getGlyphID(rec.BaseGlyph) - ) - return self.__dict__.copy() - - -class BaseGlyphList(BaseTable): - def preWrite(self, font): - self.BaseGlyphPaintRecord = sorted( - self.BaseGlyphPaintRecord, key=lambda rec: font.getGlyphID(rec.BaseGlyph) - ) - return self.__dict__.copy() - - -class ClipBoxFormat(IntEnum): - Static = 1 - Variable = 2 - - def is_variable(self): - return self is self.Variable - - def as_variable(self): - return self.Variable - - -class ClipBox(getFormatSwitchingBaseTableClass("uint8")): - formatEnum = ClipBoxFormat - - def as_tuple(self): - return tuple(getattr(self, conv.name) for conv in self.getConverters()) - - def __repr__(self): - return f"{self.__class__.__name__}{self.as_tuple()}" - - -class ClipList(getFormatSwitchingBaseTableClass("uint8")): - def populateDefaults(self, propagator=None): - if not hasattr(self, "clips"): - self.clips = {} - - def postRead(self, rawTable, font): - clips = {} - glyphOrder = font.getGlyphOrder() - for i, rec in enumerate(rawTable["ClipRecord"]): - if rec.StartGlyphID > rec.EndGlyphID: - log.warning( - "invalid ClipRecord[%i].StartGlyphID (%i) > " - "EndGlyphID (%i); skipped", - i, - rec.StartGlyphID, - rec.EndGlyphID, - ) - continue - redefinedGlyphs = [] - missingGlyphs = [] - for glyphID in range(rec.StartGlyphID, rec.EndGlyphID + 1): - try: - glyph = glyphOrder[glyphID] - except IndexError: - missingGlyphs.append(glyphID) - continue - if glyph not in clips: - clips[glyph] = copy.copy(rec.ClipBox) - else: - redefinedGlyphs.append(glyphID) - if redefinedGlyphs: - log.warning( - "ClipRecord[%i] overlaps previous records; " - "ignoring redefined clip boxes for the " - "following glyph ID range: [%i-%i]", - i, - min(redefinedGlyphs), - max(redefinedGlyphs), - ) - if missingGlyphs: - log.warning( - "ClipRecord[%i] range references missing " "glyph IDs: [%i-%i]", - i, - min(missingGlyphs), - max(missingGlyphs), - ) - self.clips = clips - - def groups(self): - glyphsByClip = defaultdict(list) - uniqueClips = {} - for glyphName, clipBox in self.clips.items(): - key = clipBox.as_tuple() - glyphsByClip[key].append(glyphName) - if key not in uniqueClips: - uniqueClips[key] = clipBox - return { - frozenset(glyphs): uniqueClips[key] for key, glyphs in glyphsByClip.items() - } - - def preWrite(self, font): - if not hasattr(self, "clips"): - self.clips = {} - clipBoxRanges = {} - glyphMap = font.getReverseGlyphMap() - for glyphs, clipBox in self.groups().items(): - glyphIDs = sorted( - glyphMap[glyphName] for glyphName in glyphs if glyphName in glyphMap - ) - if not glyphIDs: - continue - last = glyphIDs[0] - ranges = [[last]] - for glyphID in glyphIDs[1:]: - if glyphID != last + 1: - ranges[-1].append(last) - ranges.append([glyphID]) - last = glyphID - ranges[-1].append(last) - for start, end in ranges: - assert (start, end) not in clipBoxRanges - clipBoxRanges[(start, end)] = clipBox - - clipRecords = [] - for (start, end), clipBox in sorted(clipBoxRanges.items()): - record = ClipRecord() - record.StartGlyphID = start - record.EndGlyphID = end - record.ClipBox = clipBox - clipRecords.append(record) - rawTable = { - "ClipCount": len(clipRecords), - "ClipRecord": clipRecords, - } - return rawTable - - def toXML(self, xmlWriter, font, attrs=None, name=None): - tableName = name if name else self.__class__.__name__ - if attrs is None: - attrs = [] - 
if hasattr(self, "Format"): - attrs.append(("Format", self.Format)) - xmlWriter.begintag(tableName, attrs) - xmlWriter.newline() - # sort clips alphabetically to ensure deterministic XML dump - for glyphs, clipBox in sorted( - self.groups().items(), key=lambda item: min(item[0]) - ): - xmlWriter.begintag("Clip") - xmlWriter.newline() - for glyphName in sorted(glyphs): - xmlWriter.simpletag("Glyph", value=glyphName) - xmlWriter.newline() - xmlWriter.begintag("ClipBox", [("Format", clipBox.Format)]) - xmlWriter.newline() - clipBox.toXML2(xmlWriter, font) - xmlWriter.endtag("ClipBox") - xmlWriter.newline() - xmlWriter.endtag("Clip") - xmlWriter.newline() - xmlWriter.endtag(tableName) - xmlWriter.newline() - - def fromXML(self, name, attrs, content, font): - clips = getattr(self, "clips", None) - if clips is None: - self.clips = clips = {} - assert name == "Clip" - glyphs = [] - clipBox = None - for elem in content: - if not isinstance(elem, tuple): - continue - name, attrs, content = elem - if name == "Glyph": - glyphs.append(attrs["value"]) - elif name == "ClipBox": - clipBox = ClipBox() - clipBox.Format = safeEval(attrs["Format"]) - for elem in content: - if not isinstance(elem, tuple): - continue - name, attrs, content = elem - clipBox.fromXML(name, attrs, content, font) - if clipBox: - for glyphName in glyphs: - clips[glyphName] = clipBox - - -class ExtendMode(IntEnum): - PAD = 0 - REPEAT = 1 - REFLECT = 2 - - -# Porter-Duff modes for COLRv1 PaintComposite: -# https://github.com/googlefonts/colr-gradients-spec/tree/off_sub_1#compositemode-enumeration -class CompositeMode(IntEnum): - CLEAR = 0 - SRC = 1 - DEST = 2 - SRC_OVER = 3 - DEST_OVER = 4 - SRC_IN = 5 - DEST_IN = 6 - SRC_OUT = 7 - DEST_OUT = 8 - SRC_ATOP = 9 - DEST_ATOP = 10 - XOR = 11 - PLUS = 12 - SCREEN = 13 - OVERLAY = 14 - DARKEN = 15 - LIGHTEN = 16 - COLOR_DODGE = 17 - COLOR_BURN = 18 - HARD_LIGHT = 19 - SOFT_LIGHT = 20 - DIFFERENCE = 21 - EXCLUSION = 22 - MULTIPLY = 23 - HSL_HUE = 24 - HSL_SATURATION = 25 - HSL_COLOR = 26 - HSL_LUMINOSITY = 27 - - -class PaintFormat(IntEnum): - PaintColrLayers = 1 - PaintSolid = 2 - PaintVarSolid = (3,) - PaintLinearGradient = 4 - PaintVarLinearGradient = 5 - PaintRadialGradient = 6 - PaintVarRadialGradient = 7 - PaintSweepGradient = 8 - PaintVarSweepGradient = 9 - PaintGlyph = 10 - PaintColrGlyph = 11 - PaintTransform = 12 - PaintVarTransform = 13 - PaintTranslate = 14 - PaintVarTranslate = 15 - PaintScale = 16 - PaintVarScale = 17 - PaintScaleAroundCenter = 18 - PaintVarScaleAroundCenter = 19 - PaintScaleUniform = 20 - PaintVarScaleUniform = 21 - PaintScaleUniformAroundCenter = 22 - PaintVarScaleUniformAroundCenter = 23 - PaintRotate = 24 - PaintVarRotate = 25 - PaintRotateAroundCenter = 26 - PaintVarRotateAroundCenter = 27 - PaintSkew = 28 - PaintVarSkew = 29 - PaintSkewAroundCenter = 30 - PaintVarSkewAroundCenter = 31 - PaintComposite = 32 - - def is_variable(self): - return self.name.startswith("PaintVar") - - def as_variable(self): - if self.is_variable(): - return self - try: - return PaintFormat.__members__[f"PaintVar{self.name[5:]}"] - except KeyError: - return None - - -class Paint(getFormatSwitchingBaseTableClass("uint8")): - formatEnum = PaintFormat - - def getFormatName(self): - try: - return self.formatEnum(self.Format).name - except ValueError: - raise NotImplementedError(f"Unknown Paint format: {self.Format}") - - def toXML(self, xmlWriter, font, attrs=None, name=None): - tableName = name if name else self.__class__.__name__ - if attrs is None: - attrs = [] - 
attrs.append(("Format", self.Format)) - xmlWriter.begintag(tableName, attrs) - xmlWriter.comment(self.getFormatName()) - xmlWriter.newline() - self.toXML2(xmlWriter, font) - xmlWriter.endtag(tableName) - xmlWriter.newline() - - def iterPaintSubTables(self, colr: COLR) -> Iterator[BaseTable.SubTableEntry]: - if self.Format == PaintFormat.PaintColrLayers: - # https://github.com/fonttools/fonttools/issues/2438: don't die when no LayerList exists - layers = [] - if colr.LayerList is not None: - layers = colr.LayerList.Paint - yield from ( - BaseTable.SubTableEntry(name="Layers", value=v, index=i) - for i, v in enumerate( - layers[self.FirstLayerIndex : self.FirstLayerIndex + self.NumLayers] - ) - ) - return - - if self.Format == PaintFormat.PaintColrGlyph: - for record in colr.BaseGlyphList.BaseGlyphPaintRecord: - if record.BaseGlyph == self.Glyph: - yield BaseTable.SubTableEntry(name="BaseGlyph", value=record.Paint) - return - else: - raise KeyError(f"{self.Glyph!r} not in colr.BaseGlyphList") - - for conv in self.getConverters(): - if conv.tableClass is not None and issubclass(conv.tableClass, type(self)): - value = getattr(self, conv.name) - yield BaseTable.SubTableEntry(name=conv.name, value=value) - - def getChildren(self, colr) -> List["Paint"]: - # this is kept for backward compatibility (e.g. it's used by the subsetter) - return [p.value for p in self.iterPaintSubTables(colr)] - - def traverse(self, colr: COLR, callback): - """Depth-first traversal of graph rooted at self, callback on each node.""" - if not callable(callback): - raise TypeError("callback must be callable") - - for path in dfs_base_table( - self, iter_subtables_fn=lambda paint: paint.iterPaintSubTables(colr) - ): - paint = path[-1].value - callback(paint) - - def getTransform(self) -> Transform: - if self.Format == PaintFormat.PaintTransform: - t = self.Transform - return Transform(t.xx, t.yx, t.xy, t.yy, t.dx, t.dy) - elif self.Format == PaintFormat.PaintTranslate: - return Identity.translate(self.dx, self.dy) - elif self.Format == PaintFormat.PaintScale: - return Identity.scale(self.scaleX, self.scaleY) - elif self.Format == PaintFormat.PaintScaleAroundCenter: - return ( - Identity.translate(self.centerX, self.centerY) - .scale(self.scaleX, self.scaleY) - .translate(-self.centerX, -self.centerY) - ) - elif self.Format == PaintFormat.PaintScaleUniform: - return Identity.scale(self.scale) - elif self.Format == PaintFormat.PaintScaleUniformAroundCenter: - return ( - Identity.translate(self.centerX, self.centerY) - .scale(self.scale) - .translate(-self.centerX, -self.centerY) - ) - elif self.Format == PaintFormat.PaintRotate: - return Identity.rotate(radians(self.angle)) - elif self.Format == PaintFormat.PaintRotateAroundCenter: - return ( - Identity.translate(self.centerX, self.centerY) - .rotate(radians(self.angle)) - .translate(-self.centerX, -self.centerY) - ) - elif self.Format == PaintFormat.PaintSkew: - return Identity.skew(radians(-self.xSkewAngle), radians(self.ySkewAngle)) - elif self.Format == PaintFormat.PaintSkewAroundCenter: - return ( - Identity.translate(self.centerX, self.centerY) - .skew(radians(-self.xSkewAngle), radians(self.ySkewAngle)) - .translate(-self.centerX, -self.centerY) - ) - if PaintFormat(self.Format).is_variable(): - raise NotImplementedError(f"Variable Paints not supported: {self.Format}") - - return Identity - - def computeClipBox( - self, colr: COLR, glyphSet: "_TTGlyphSet", quantization: int = 1 - ) -> Optional[ClipBox]: - pen = ControlBoundsPen(glyphSet) - for path in dfs_base_table( 
- self, iter_subtables_fn=lambda paint: paint.iterPaintSubTables(colr) - ): - paint = path[-1].value - if paint.Format == PaintFormat.PaintGlyph: - transformation = reduce( - Transform.transform, - (st.value.getTransform() for st in path), - Identity, - ) - glyphSet[paint.Glyph].draw(TransformPen(pen, transformation)) - - if pen.bounds is None: - return None - - cb = ClipBox() - cb.Format = int(ClipBoxFormat.Static) - cb.xMin, cb.yMin, cb.xMax, cb.yMax = quantizeRect(pen.bounds, quantization) - return cb - - -# For each subtable format there is a class. However, we don't really distinguish -# between "field name" and "format name": often these are the same. Yet there's -# a whole bunch of fields with different names. The following dict is a mapping -# from "format name" to "field name". _buildClasses() uses this to create a -# subclass for each alternate field name. -# -_equivalents = { - "MarkArray": ("Mark1Array",), - "LangSys": ("DefaultLangSys",), - "Coverage": ( - "MarkCoverage", - "BaseCoverage", - "LigatureCoverage", - "Mark1Coverage", - "Mark2Coverage", - "BacktrackCoverage", - "InputCoverage", - "LookAheadCoverage", - "VertGlyphCoverage", - "HorizGlyphCoverage", - "TopAccentCoverage", - "ExtendedShapeCoverage", - "MathKernCoverage", - ), - "ClassDef": ( - "ClassDef1", - "ClassDef2", - "BacktrackClassDef", - "InputClassDef", - "LookAheadClassDef", - "GlyphClassDef", - "MarkAttachClassDef", - ), - "Anchor": ( - "EntryAnchor", - "ExitAnchor", - "BaseAnchor", - "LigatureAnchor", - "Mark2Anchor", - "MarkAnchor", - ), - "Device": ( - "XPlaDevice", - "YPlaDevice", - "XAdvDevice", - "YAdvDevice", - "XDeviceTable", - "YDeviceTable", - "DeviceTable", - ), - "Axis": ( - "HorizAxis", - "VertAxis", - ), - "MinMax": ("DefaultMinMax",), - "BaseCoord": ( - "MinCoord", - "MaxCoord", - ), - "JstfLangSys": ("DefJstfLangSys",), - "JstfGSUBModList": ( - "ShrinkageEnableGSUB", - "ShrinkageDisableGSUB", - "ExtensionEnableGSUB", - "ExtensionDisableGSUB", - ), - "JstfGPOSModList": ( - "ShrinkageEnableGPOS", - "ShrinkageDisableGPOS", - "ExtensionEnableGPOS", - "ExtensionDisableGPOS", - ), - "JstfMax": ( - "ShrinkageJstfMax", - "ExtensionJstfMax", - ), - "MathKern": ( - "TopRightMathKern", - "TopLeftMathKern", - "BottomRightMathKern", - "BottomLeftMathKern", - ), - "MathGlyphConstruction": ("VertGlyphConstruction", "HorizGlyphConstruction"), -} - -# -# OverFlow logic, to automatically create ExtensionLookups -# XXX This should probably move to otBase.py -# - - -def fixLookupOverFlows(ttf, overflowRecord): - """Either the offset from the LookupList to a lookup overflowed, or - an offset from a lookup to a subtable overflowed. - The table layout is: - GPSO/GUSB - Script List - Feature List - LookUpList - Lookup[0] and contents - SubTable offset list - SubTable[0] and contents - ... - SubTable[n] and contents - ... - Lookup[n] and contents - SubTable offset list - SubTable[0] and contents - ... - SubTable[n] and contents - If the offset to a lookup overflowed (SubTableIndex is None) - we must promote the *previous* lookup to an Extension type. - If the offset from a lookup to subtable overflowed, then we must promote it - to an Extension Lookup type. 
- """ - ok = 0 - lookupIndex = overflowRecord.LookupListIndex - if overflowRecord.SubTableIndex is None: - lookupIndex = lookupIndex - 1 - if lookupIndex < 0: - return ok - if overflowRecord.tableType == "GSUB": - extType = 7 - elif overflowRecord.tableType == "GPOS": - extType = 9 - - lookups = ttf[overflowRecord.tableType].table.LookupList.Lookup - lookup = lookups[lookupIndex] - # If the previous lookup is an extType, look further back. Very unlikely, but possible. - while lookup.SubTable[0].__class__.LookupType == extType: - lookupIndex = lookupIndex - 1 - if lookupIndex < 0: - return ok - lookup = lookups[lookupIndex] - - for lookupIndex in range(lookupIndex, len(lookups)): - lookup = lookups[lookupIndex] - if lookup.LookupType != extType: - lookup.LookupType = extType - for si in range(len(lookup.SubTable)): - subTable = lookup.SubTable[si] - extSubTableClass = lookupTypes[overflowRecord.tableType][extType] - extSubTable = extSubTableClass() - extSubTable.Format = 1 - extSubTable.ExtSubTable = subTable - lookup.SubTable[si] = extSubTable - ok = 1 - return ok - - -def splitMultipleSubst(oldSubTable, newSubTable, overflowRecord): - ok = 1 - oldMapping = sorted(oldSubTable.mapping.items()) - oldLen = len(oldMapping) - - if overflowRecord.itemName in ["Coverage", "RangeRecord"]: - # Coverage table is written last. Overflow is to or within the - # the coverage table. We will just cut the subtable in half. - newLen = oldLen // 2 - - elif overflowRecord.itemName == "Sequence": - # We just need to back up by two items from the overflowed - # Sequence index to make sure the offset to the Coverage table - # doesn't overflow. - newLen = overflowRecord.itemIndex - 1 - - newSubTable.mapping = {} - for i in range(newLen, oldLen): - item = oldMapping[i] - key = item[0] - newSubTable.mapping[key] = item[1] - del oldSubTable.mapping[key] - - return ok - - -def splitAlternateSubst(oldSubTable, newSubTable, overflowRecord): - ok = 1 - if hasattr(oldSubTable, "sortCoverageLast"): - newSubTable.sortCoverageLast = oldSubTable.sortCoverageLast - - oldAlts = sorted(oldSubTable.alternates.items()) - oldLen = len(oldAlts) - - if overflowRecord.itemName in ["Coverage", "RangeRecord"]: - # Coverage table is written last. overflow is to or within the - # the coverage table. We will just cut the subtable in half. - newLen = oldLen // 2 - - elif overflowRecord.itemName == "AlternateSet": - # We just need to back up by two items - # from the overflowed AlternateSet index to make sure the offset - # to the Coverage table doesn't overflow. - newLen = overflowRecord.itemIndex - 1 - - newSubTable.alternates = {} - for i in range(newLen, oldLen): - item = oldAlts[i] - key = item[0] - newSubTable.alternates[key] = item[1] - del oldSubTable.alternates[key] - - return ok - - -def splitLigatureSubst(oldSubTable, newSubTable, overflowRecord): - ok = 1 - oldLigs = sorted(oldSubTable.ligatures.items()) - oldLen = len(oldLigs) - - if overflowRecord.itemName in ["Coverage", "RangeRecord"]: - # Coverage table is written last. overflow is to or within the - # the coverage table. We will just cut the subtable in half. - newLen = oldLen // 2 - - elif overflowRecord.itemName == "LigatureSet": - # We just need to back up by two items - # from the overflowed AlternateSet index to make sure the offset - # to the Coverage table doesn't overflow. 
- newLen = overflowRecord.itemIndex - 1 - - newSubTable.ligatures = {} - for i in range(newLen, oldLen): - item = oldLigs[i] - key = item[0] - newSubTable.ligatures[key] = item[1] - del oldSubTable.ligatures[key] - - return ok - - -def splitPairPos(oldSubTable, newSubTable, overflowRecord): - st = oldSubTable - ok = False - newSubTable.Format = oldSubTable.Format - if oldSubTable.Format == 1 and len(oldSubTable.PairSet) > 1: - for name in "ValueFormat1", "ValueFormat2": - setattr(newSubTable, name, getattr(oldSubTable, name)) - - # Move top half of coverage to new subtable - - newSubTable.Coverage = oldSubTable.Coverage.__class__() - - coverage = oldSubTable.Coverage.glyphs - records = oldSubTable.PairSet - - oldCount = len(oldSubTable.PairSet) // 2 - - oldSubTable.Coverage.glyphs = coverage[:oldCount] - oldSubTable.PairSet = records[:oldCount] - - newSubTable.Coverage.glyphs = coverage[oldCount:] - newSubTable.PairSet = records[oldCount:] - - oldSubTable.PairSetCount = len(oldSubTable.PairSet) - newSubTable.PairSetCount = len(newSubTable.PairSet) - - ok = True - - elif oldSubTable.Format == 2 and len(oldSubTable.Class1Record) > 1: - if not hasattr(oldSubTable, "Class2Count"): - oldSubTable.Class2Count = len(oldSubTable.Class1Record[0].Class2Record) - for name in "Class2Count", "ClassDef2", "ValueFormat1", "ValueFormat2": - setattr(newSubTable, name, getattr(oldSubTable, name)) - - # The two subtables will still have the same ClassDef2 and the table - # sharing will still cause the sharing to overflow. As such, disable - # sharing on the one that is serialized second (that's oldSubTable). - oldSubTable.DontShare = True - - # Move top half of class numbers to new subtable - - newSubTable.Coverage = oldSubTable.Coverage.__class__() - newSubTable.ClassDef1 = oldSubTable.ClassDef1.__class__() - - coverage = oldSubTable.Coverage.glyphs - classDefs = oldSubTable.ClassDef1.classDefs - records = oldSubTable.Class1Record - - oldCount = len(oldSubTable.Class1Record) // 2 - newGlyphs = set(k for k, v in classDefs.items() if v >= oldCount) - - oldSubTable.Coverage.glyphs = [g for g in coverage if g not in newGlyphs] - oldSubTable.ClassDef1.classDefs = { - k: v for k, v in classDefs.items() if v < oldCount - } - oldSubTable.Class1Record = records[:oldCount] - - newSubTable.Coverage.glyphs = [g for g in coverage if g in newGlyphs] - newSubTable.ClassDef1.classDefs = { - k: (v - oldCount) for k, v in classDefs.items() if v > oldCount - } - newSubTable.Class1Record = records[oldCount:] - - oldSubTable.Class1Count = len(oldSubTable.Class1Record) - newSubTable.Class1Count = len(newSubTable.Class1Record) - - ok = True - - return ok - - -def splitMarkBasePos(oldSubTable, newSubTable, overflowRecord): - # split half of the mark classes to the new subtable - classCount = oldSubTable.ClassCount - if classCount < 2: - # oh well, not much left to split... 
- return False - - oldClassCount = classCount // 2 - newClassCount = classCount - oldClassCount - - oldMarkCoverage, oldMarkRecords = [], [] - newMarkCoverage, newMarkRecords = [], [] - for glyphName, markRecord in zip( - oldSubTable.MarkCoverage.glyphs, oldSubTable.MarkArray.MarkRecord - ): - if markRecord.Class < oldClassCount: - oldMarkCoverage.append(glyphName) - oldMarkRecords.append(markRecord) - else: - markRecord.Class -= oldClassCount - newMarkCoverage.append(glyphName) - newMarkRecords.append(markRecord) - - oldBaseRecords, newBaseRecords = [], [] - for rec in oldSubTable.BaseArray.BaseRecord: - oldBaseRecord, newBaseRecord = rec.__class__(), rec.__class__() - oldBaseRecord.BaseAnchor = rec.BaseAnchor[:oldClassCount] - newBaseRecord.BaseAnchor = rec.BaseAnchor[oldClassCount:] - oldBaseRecords.append(oldBaseRecord) - newBaseRecords.append(newBaseRecord) - - newSubTable.Format = oldSubTable.Format - - oldSubTable.MarkCoverage.glyphs = oldMarkCoverage - newSubTable.MarkCoverage = oldSubTable.MarkCoverage.__class__() - newSubTable.MarkCoverage.glyphs = newMarkCoverage - - # share the same BaseCoverage in both halves - newSubTable.BaseCoverage = oldSubTable.BaseCoverage - - oldSubTable.ClassCount = oldClassCount - newSubTable.ClassCount = newClassCount - - oldSubTable.MarkArray.MarkRecord = oldMarkRecords - newSubTable.MarkArray = oldSubTable.MarkArray.__class__() - newSubTable.MarkArray.MarkRecord = newMarkRecords - - oldSubTable.MarkArray.MarkCount = len(oldMarkRecords) - newSubTable.MarkArray.MarkCount = len(newMarkRecords) - - oldSubTable.BaseArray.BaseRecord = oldBaseRecords - newSubTable.BaseArray = oldSubTable.BaseArray.__class__() - newSubTable.BaseArray.BaseRecord = newBaseRecords - - oldSubTable.BaseArray.BaseCount = len(oldBaseRecords) - newSubTable.BaseArray.BaseCount = len(newBaseRecords) - - return True - - -splitTable = { - "GSUB": { - # 1: splitSingleSubst, - 2: splitMultipleSubst, - 3: splitAlternateSubst, - 4: splitLigatureSubst, - # 5: splitContextSubst, - # 6: splitChainContextSubst, - # 7: splitExtensionSubst, - # 8: splitReverseChainSingleSubst, - }, - "GPOS": { - # 1: splitSinglePos, - 2: splitPairPos, - # 3: splitCursivePos, - 4: splitMarkBasePos, - # 5: splitMarkLigPos, - # 6: splitMarkMarkPos, - # 7: splitContextPos, - # 8: splitChainContextPos, - # 9: splitExtensionPos, - }, -} - - -def fixSubTableOverFlows(ttf, overflowRecord): - """ - An offset has overflowed within a sub-table. We need to divide this subtable into smaller parts. - """ - table = ttf[overflowRecord.tableType].table - lookup = table.LookupList.Lookup[overflowRecord.LookupListIndex] - subIndex = overflowRecord.SubTableIndex - subtable = lookup.SubTable[subIndex] - - # First, try not sharing anything for this subtable... - if not hasattr(subtable, "DontShare"): - subtable.DontShare = True - return True - - if hasattr(subtable, "ExtSubTable"): - # We split the subtable of the Extension table, and add a new Extension table - # to contain the new subtable. 
- - subTableType = subtable.ExtSubTable.__class__.LookupType - extSubTable = subtable - subtable = extSubTable.ExtSubTable - newExtSubTableClass = lookupTypes[overflowRecord.tableType][ - extSubTable.__class__.LookupType - ] - newExtSubTable = newExtSubTableClass() - newExtSubTable.Format = extSubTable.Format - toInsert = newExtSubTable - - newSubTableClass = lookupTypes[overflowRecord.tableType][subTableType] - newSubTable = newSubTableClass() - newExtSubTable.ExtSubTable = newSubTable - else: - subTableType = subtable.__class__.LookupType - newSubTableClass = lookupTypes[overflowRecord.tableType][subTableType] - newSubTable = newSubTableClass() - toInsert = newSubTable - - if hasattr(lookup, "SubTableCount"): # may not be defined yet. - lookup.SubTableCount = lookup.SubTableCount + 1 - - try: - splitFunc = splitTable[overflowRecord.tableType][subTableType] - except KeyError: - log.error( - "Don't know how to split %s lookup type %s", - overflowRecord.tableType, - subTableType, - ) - return False - - ok = splitFunc(subtable, newSubTable, overflowRecord) - if ok: - lookup.SubTable.insert(subIndex + 1, toInsert) - return ok - - -# End of OverFlow logic - - -def _buildClasses(): - import re - from .otData import otData - - formatPat = re.compile(r"([A-Za-z0-9]+)Format(\d+)$") - namespace = globals() - - # populate module with classes - for name, table in otData: - baseClass = BaseTable - m = formatPat.match(name) - if m: - # XxxFormatN subtable, we only add the "base" table - name = m.group(1) - # the first row of a format-switching otData table describes the Format; - # the first column defines the type of the Format field. - # Currently this can be either 'uint16' or 'uint8'. - formatType = table[0][0] - baseClass = getFormatSwitchingBaseTableClass(formatType) - if name not in namespace: - # the class doesn't exist yet, so the base implementation is used. - cls = type(name, (baseClass,), {}) - if name in ("GSUB", "GPOS"): - cls.DontShare = True - namespace[name] = cls - - # link Var{Table} <-> {Table} (e.g. ColorStop <-> VarColorStop, etc.) 
- for name, _ in otData: - if name.startswith("Var") and len(name) > 3 and name[3:] in namespace: - varType = namespace[name] - noVarType = namespace[name[3:]] - varType.NoVarType = noVarType - noVarType.VarType = varType - - for base, alts in _equivalents.items(): - base = namespace[base] - for alt in alts: - namespace[alt] = base - - global lookupTypes - lookupTypes = { - "GSUB": { - 1: SingleSubst, - 2: MultipleSubst, - 3: AlternateSubst, - 4: LigatureSubst, - 5: ContextSubst, - 6: ChainContextSubst, - 7: ExtensionSubst, - 8: ReverseChainSingleSubst, - }, - "GPOS": { - 1: SinglePos, - 2: PairPos, - 3: CursivePos, - 4: MarkBasePos, - 5: MarkLigPos, - 6: MarkMarkPos, - 7: ContextPos, - 8: ChainContextPos, - 9: ExtensionPos, - }, - "mort": { - 4: NoncontextualMorph, - }, - "morx": { - 0: RearrangementMorph, - 1: ContextualMorph, - 2: LigatureMorph, - # 3: Reserved, - 4: NoncontextualMorph, - 5: InsertionMorph, - }, - } - lookupTypes["JSTF"] = lookupTypes["GPOS"] # JSTF contains GPOS - for lookupEnum in lookupTypes.values(): - for enum, cls in lookupEnum.items(): - cls.LookupType = enum - - global featureParamTypes - featureParamTypes = { - "size": FeatureParamsSize, - } - for i in range(1, 20 + 1): - featureParamTypes["ss%02d" % i] = FeatureParamsStylisticSet - for i in range(1, 99 + 1): - featureParamTypes["cv%02d" % i] = FeatureParamsCharacterVariants - - # add converters to classes - from .otConverters import buildConverters - - for name, table in otData: - m = formatPat.match(name) - if m: - # XxxFormatN subtable, add converter to "base" table - name, format = m.groups() - format = int(format) - cls = namespace[name] - if not hasattr(cls, "converters"): - cls.converters = {} - cls.convertersByName = {} - converters, convertersByName = buildConverters(table[1:], namespace) - cls.converters[format] = converters - cls.convertersByName[format] = convertersByName - # XXX Add staticSize? - else: - cls = namespace[name] - cls.converters, cls.convertersByName = buildConverters(table, namespace) - # XXX Add staticSize? 
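A compact way to see what `_buildClasses` is doing: it manufactures one class per otData table name with `type()`, folds all `XxxFormatN` rows onto a single base class, and hangs per-format metadata (the converters) off that class. A self-contained miniature of that pattern, using made-up rows instead of the real otData, might look like this:

```python
# Hypothetical miniature of the dynamic class-building pattern above.
# The table rows are invented; the real data lives in otData/otConverters.
import re

class BaseTableStub:
    """Stand-in for fontTools' BaseTable."""
    pass

fake_otdata = [
    ("SingleSubstFormat1", [("uint16", "Format"), ("Offset", "Coverage")]),
    ("SingleSubstFormat2", [("uint16", "Format"), ("Offset", "Coverage"), ("uint16", "GlyphCount")]),
]

namespace = {}
format_pat = re.compile(r"([A-Za-z0-9]+)Format(\d+)$")

for name, rows in fake_otdata:
    m = format_pat.match(name)
    base_name = m.group(1) if m else name
    if base_name not in namespace:
        # the class does not exist yet, so create it dynamically with type()
        namespace[base_name] = type(base_name, (BaseTableStub,), {"fields": {}})
    cls = namespace[base_name]
    if m:
        # record the per-format field list, mirroring cls.converters[format] above
        cls.fields[int(m.group(2))] = [field_name for _, field_name in rows]

print(namespace["SingleSubst"].fields)
# {1: ['Format', 'Coverage'], 2: ['Format', 'Coverage', 'GlyphCount']}
```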
- - -_buildClasses() - - -def _getGlyphsFromCoverageTable(coverage): - if coverage is None: - # empty coverage table - return [] - else: - return coverage.glyphs diff --git a/spaces/declare-lab/tango/diffusers/examples/research_projects/intel_opts/textual_inversion/textual_inversion_bf16.py b/spaces/declare-lab/tango/diffusers/examples/research_projects/intel_opts/textual_inversion/textual_inversion_bf16.py deleted file mode 100644 index 1580cb392e8d87bda23d7db8eb88318a4f8e6bf6..0000000000000000000000000000000000000000 --- a/spaces/declare-lab/tango/diffusers/examples/research_projects/intel_opts/textual_inversion/textual_inversion_bf16.py +++ /dev/null @@ -1,635 +0,0 @@ -import argparse -import itertools -import math -import os -import random -from pathlib import Path - -import intel_extension_for_pytorch as ipex -import numpy as np -import PIL -import torch -import torch.nn.functional as F -import torch.utils.checkpoint -from accelerate import Accelerator -from accelerate.logging import get_logger -from accelerate.utils import set_seed -from huggingface_hub import create_repo, upload_folder - -# TODO: remove and import from diffusers.utils when the new version of diffusers is released -from packaging import version -from PIL import Image -from torch.utils.data import Dataset -from torchvision import transforms -from tqdm.auto import tqdm -from transformers import CLIPImageProcessor, CLIPTextModel, CLIPTokenizer - -from diffusers import AutoencoderKL, DDPMScheduler, PNDMScheduler, StableDiffusionPipeline, UNet2DConditionModel -from diffusers.optimization import get_scheduler -from diffusers.pipelines.stable_diffusion import StableDiffusionSafetyChecker -from diffusers.utils import check_min_version - - -if version.parse(version.parse(PIL.__version__).base_version) >= version.parse("9.1.0"): - PIL_INTERPOLATION = { - "linear": PIL.Image.Resampling.BILINEAR, - "bilinear": PIL.Image.Resampling.BILINEAR, - "bicubic": PIL.Image.Resampling.BICUBIC, - "lanczos": PIL.Image.Resampling.LANCZOS, - "nearest": PIL.Image.Resampling.NEAREST, - } -else: - PIL_INTERPOLATION = { - "linear": PIL.Image.LINEAR, - "bilinear": PIL.Image.BILINEAR, - "bicubic": PIL.Image.BICUBIC, - "lanczos": PIL.Image.LANCZOS, - "nearest": PIL.Image.NEAREST, - } -# ------------------------------------------------------------------------------ - - -# Will error if the minimal version of diffusers is not installed. Remove at your own risks. 
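A side note on the compatibility shim a few lines up: `PIL_INTERPOLATION` hides the Pillow 9.1 move of the resampling constants into `PIL.Image.Resampling`, so later code can resize with a plain string key. A small usage sketch, assuming that dict is in scope and using a dummy image purely for illustration:

```python
# Illustrative only: resize a dummy image through the version-gated lookup table.
import numpy as np
import PIL.Image

img = PIL.Image.fromarray(np.zeros((32, 32, 3), dtype=np.uint8))
resized = img.resize((512, 512), resample=PIL_INTERPOLATION["bicubic"])
print(resized.size)  # (512, 512)
```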
-check_min_version("0.13.0.dev0") - - -logger = get_logger(__name__) - - -def save_progress(text_encoder, placeholder_token_id, accelerator, args, save_path): - logger.info("Saving embeddings") - learned_embeds = accelerator.unwrap_model(text_encoder).get_input_embeddings().weight[placeholder_token_id] - learned_embeds_dict = {args.placeholder_token: learned_embeds.detach().cpu()} - torch.save(learned_embeds_dict, save_path) - - -def parse_args(): - parser = argparse.ArgumentParser(description="Simple example of a training script.") - parser.add_argument( - "--save_steps", - type=int, - default=500, - help="Save learned_embeds.bin every X updates steps.", - ) - parser.add_argument( - "--only_save_embeds", - action="store_true", - default=False, - help="Save only the embeddings for the new concept.", - ) - parser.add_argument( - "--pretrained_model_name_or_path", - type=str, - default=None, - required=True, - help="Path to pretrained model or model identifier from huggingface.co/models.", - ) - parser.add_argument( - "--revision", - type=str, - default=None, - required=False, - help="Revision of pretrained model identifier from huggingface.co/models.", - ) - parser.add_argument( - "--tokenizer_name", - type=str, - default=None, - help="Pretrained tokenizer name or path if not the same as model_name", - ) - parser.add_argument( - "--train_data_dir", type=str, default=None, required=True, help="A folder containing the training data." - ) - parser.add_argument( - "--placeholder_token", - type=str, - default=None, - required=True, - help="A token to use as a placeholder for the concept.", - ) - parser.add_argument( - "--initializer_token", type=str, default=None, required=True, help="A token to use as initializer word." - ) - parser.add_argument("--learnable_property", type=str, default="object", help="Choose between 'object' and 'style'") - parser.add_argument("--repeats", type=int, default=100, help="How many times to repeat the training data.") - parser.add_argument( - "--output_dir", - type=str, - default="text-inversion-model", - help="The output directory where the model predictions and checkpoints will be written.", - ) - parser.add_argument("--seed", type=int, default=None, help="A seed for reproducible training.") - parser.add_argument( - "--resolution", - type=int, - default=512, - help=( - "The resolution for input images, all the images in the train/validation dataset will be resized to this" - " resolution" - ), - ) - parser.add_argument( - "--center_crop", action="store_true", help="Whether to center crop images before resizing to resolution." - ) - parser.add_argument( - "--train_batch_size", type=int, default=16, help="Batch size (per device) for the training dataloader." - ) - parser.add_argument("--num_train_epochs", type=int, default=100) - parser.add_argument( - "--max_train_steps", - type=int, - default=5000, - help="Total number of training steps to perform. 
If provided, overrides num_train_epochs.", - ) - parser.add_argument( - "--gradient_accumulation_steps", - type=int, - default=1, - help="Number of updates steps to accumulate before performing a backward/update pass.", - ) - parser.add_argument( - "--learning_rate", - type=float, - default=1e-4, - help="Initial learning rate (after the potential warmup period) to use.", - ) - parser.add_argument( - "--scale_lr", - action="store_true", - default=True, - help="Scale the learning rate by the number of GPUs, gradient accumulation steps, and batch size.", - ) - parser.add_argument( - "--lr_scheduler", - type=str, - default="constant", - help=( - 'The scheduler type to use. Choose between ["linear", "cosine", "cosine_with_restarts", "polynomial",' - ' "constant", "constant_with_warmup"]' - ), - ) - parser.add_argument( - "--lr_warmup_steps", type=int, default=500, help="Number of steps for the warmup in the lr scheduler." - ) - parser.add_argument("--adam_beta1", type=float, default=0.9, help="The beta1 parameter for the Adam optimizer.") - parser.add_argument("--adam_beta2", type=float, default=0.999, help="The beta2 parameter for the Adam optimizer.") - parser.add_argument("--adam_weight_decay", type=float, default=1e-2, help="Weight decay to use.") - parser.add_argument("--adam_epsilon", type=float, default=1e-08, help="Epsilon value for the Adam optimizer") - parser.add_argument("--push_to_hub", action="store_true", help="Whether or not to push the model to the Hub.") - parser.add_argument("--hub_token", type=str, default=None, help="The token to use to push to the Model Hub.") - parser.add_argument( - "--hub_model_id", - type=str, - default=None, - help="The name of the repository to keep in sync with the local `output_dir`.", - ) - parser.add_argument( - "--logging_dir", - type=str, - default="logs", - help=( - "[TensorBoard](https://www.tensorflow.org/tensorboard) log directory. Will default to" - " *output_dir/runs/**CURRENT_DATETIME_HOSTNAME***." - ), - ) - parser.add_argument( - "--mixed_precision", - type=str, - default="no", - choices=["no", "fp16", "bf16"], - help=( - "Whether to use mixed precision. Choose" - "between fp16 and bf16 (bfloat16). Bf16 requires PyTorch >= 1.10." - "and an Nvidia Ampere GPU." 
- ), - ) - parser.add_argument("--local_rank", type=int, default=-1, help="For distributed training: local_rank") - - args = parser.parse_args() - env_local_rank = int(os.environ.get("LOCAL_RANK", -1)) - if env_local_rank != -1 and env_local_rank != args.local_rank: - args.local_rank = env_local_rank - - if args.train_data_dir is None: - raise ValueError("You must specify a train data directory.") - - return args - - -imagenet_templates_small = [ - "a photo of a {}", - "a rendering of a {}", - "a cropped photo of the {}", - "the photo of a {}", - "a photo of a clean {}", - "a photo of a dirty {}", - "a dark photo of the {}", - "a photo of my {}", - "a photo of the cool {}", - "a close-up photo of a {}", - "a bright photo of the {}", - "a cropped photo of a {}", - "a photo of the {}", - "a good photo of the {}", - "a photo of one {}", - "a close-up photo of the {}", - "a rendition of the {}", - "a photo of the clean {}", - "a rendition of a {}", - "a photo of a nice {}", - "a good photo of a {}", - "a photo of the nice {}", - "a photo of the small {}", - "a photo of the weird {}", - "a photo of the large {}", - "a photo of a cool {}", - "a photo of a small {}", -] - -imagenet_style_templates_small = [ - "a painting in the style of {}", - "a rendering in the style of {}", - "a cropped painting in the style of {}", - "the painting in the style of {}", - "a clean painting in the style of {}", - "a dirty painting in the style of {}", - "a dark painting in the style of {}", - "a picture in the style of {}", - "a cool painting in the style of {}", - "a close-up painting in the style of {}", - "a bright painting in the style of {}", - "a cropped painting in the style of {}", - "a good painting in the style of {}", - "a close-up painting in the style of {}", - "a rendition in the style of {}", - "a nice painting in the style of {}", - "a small painting in the style of {}", - "a weird painting in the style of {}", - "a large painting in the style of {}", -] - - -class TextualInversionDataset(Dataset): - def __init__( - self, - data_root, - tokenizer, - learnable_property="object", # [object, style] - size=512, - repeats=100, - interpolation="bicubic", - flip_p=0.5, - set="train", - placeholder_token="*", - center_crop=False, - ): - self.data_root = data_root - self.tokenizer = tokenizer - self.learnable_property = learnable_property - self.size = size - self.placeholder_token = placeholder_token - self.center_crop = center_crop - self.flip_p = flip_p - - self.image_paths = [os.path.join(self.data_root, file_path) for file_path in os.listdir(self.data_root)] - - self.num_images = len(self.image_paths) - self._length = self.num_images - - if set == "train": - self._length = self.num_images * repeats - - self.interpolation = { - "linear": PIL_INTERPOLATION["linear"], - "bilinear": PIL_INTERPOLATION["bilinear"], - "bicubic": PIL_INTERPOLATION["bicubic"], - "lanczos": PIL_INTERPOLATION["lanczos"], - }[interpolation] - - self.templates = imagenet_style_templates_small if learnable_property == "style" else imagenet_templates_small - self.flip_transform = transforms.RandomHorizontalFlip(p=self.flip_p) - - def __len__(self): - return self._length - - def __getitem__(self, i): - example = {} - image = Image.open(self.image_paths[i % self.num_images]) - - if not image.mode == "RGB": - image = image.convert("RGB") - - placeholder_string = self.placeholder_token - text = random.choice(self.templates).format(placeholder_string) - - example["input_ids"] = self.tokenizer( - text, - padding="max_length", - 
truncation=True, - max_length=self.tokenizer.model_max_length, - return_tensors="pt", - ).input_ids[0] - - # default to score-sde preprocessing - img = np.array(image).astype(np.uint8) - - if self.center_crop: - crop = min(img.shape[0], img.shape[1]) - ( - h, - w, - ) = ( - img.shape[0], - img.shape[1], - ) - img = img[(h - crop) // 2 : (h + crop) // 2, (w - crop) // 2 : (w + crop) // 2] - - image = Image.fromarray(img) - image = image.resize((self.size, self.size), resample=self.interpolation) - - image = self.flip_transform(image) - image = np.array(image).astype(np.uint8) - image = (image / 127.5 - 1.0).astype(np.float32) - - example["pixel_values"] = torch.from_numpy(image).permute(2, 0, 1) - return example - - -def freeze_params(params): - for param in params: - param.requires_grad = False - - -def main(): - args = parse_args() - logging_dir = os.path.join(args.output_dir, args.logging_dir) - - accelerator = Accelerator( - gradient_accumulation_steps=args.gradient_accumulation_steps, - mixed_precision=args.mixed_precision, - log_with="tensorboard", - logging_dir=logging_dir, - ) - - # If passed along, set the training seed now. - if args.seed is not None: - set_seed(args.seed) - - # Handle the repository creation - if accelerator.is_main_process: - if args.output_dir is not None: - os.makedirs(args.output_dir, exist_ok=True) - - if args.push_to_hub: - repo_id = create_repo( - repo_id=args.hub_model_id or Path(args.output_dir).name, exist_ok=True, token=args.hub_token - ).repo_id - - # Load the tokenizer and add the placeholder token as a additional special token - if args.tokenizer_name: - tokenizer = CLIPTokenizer.from_pretrained(args.tokenizer_name) - elif args.pretrained_model_name_or_path: - tokenizer = CLIPTokenizer.from_pretrained(args.pretrained_model_name_or_path, subfolder="tokenizer") - - # Add the placeholder token in tokenizer - num_added_tokens = tokenizer.add_tokens(args.placeholder_token) - if num_added_tokens == 0: - raise ValueError( - f"The tokenizer already contains the token {args.placeholder_token}. Please pass a different" - " `placeholder_token` that is not already in the tokenizer." 
- ) - - # Convert the initializer_token, placeholder_token to ids - token_ids = tokenizer.encode(args.initializer_token, add_special_tokens=False) - # Check if initializer_token is a single token or a sequence of tokens - if len(token_ids) > 1: - raise ValueError("The initializer token must be a single token.") - - initializer_token_id = token_ids[0] - placeholder_token_id = tokenizer.convert_tokens_to_ids(args.placeholder_token) - - # Load models and create wrapper for stable diffusion - text_encoder = CLIPTextModel.from_pretrained( - args.pretrained_model_name_or_path, - subfolder="text_encoder", - revision=args.revision, - ) - vae = AutoencoderKL.from_pretrained( - args.pretrained_model_name_or_path, - subfolder="vae", - revision=args.revision, - ) - unet = UNet2DConditionModel.from_pretrained( - args.pretrained_model_name_or_path, - subfolder="unet", - revision=args.revision, - ) - - # Resize the token embeddings as we are adding new special tokens to the tokenizer - text_encoder.resize_token_embeddings(len(tokenizer)) - - # Initialise the newly added placeholder token with the embeddings of the initializer token - token_embeds = text_encoder.get_input_embeddings().weight.data - token_embeds[placeholder_token_id] = token_embeds[initializer_token_id] - - # Freeze vae and unet - freeze_params(vae.parameters()) - freeze_params(unet.parameters()) - # Freeze all parameters except for the token embeddings in text encoder - params_to_freeze = itertools.chain( - text_encoder.text_model.encoder.parameters(), - text_encoder.text_model.final_layer_norm.parameters(), - text_encoder.text_model.embeddings.position_embedding.parameters(), - ) - freeze_params(params_to_freeze) - - if args.scale_lr: - args.learning_rate = ( - args.learning_rate * args.gradient_accumulation_steps * args.train_batch_size * accelerator.num_processes - ) - - # Initialize the optimizer - optimizer = torch.optim.AdamW( - text_encoder.get_input_embeddings().parameters(), # only optimize the embeddings - lr=args.learning_rate, - betas=(args.adam_beta1, args.adam_beta2), - weight_decay=args.adam_weight_decay, - eps=args.adam_epsilon, - ) - - noise_scheduler = DDPMScheduler.from_pretrained(args.pretrained_model_name_or_path, subfolder="scheduler") - - train_dataset = TextualInversionDataset( - data_root=args.train_data_dir, - tokenizer=tokenizer, - size=args.resolution, - placeholder_token=args.placeholder_token, - repeats=args.repeats, - learnable_property=args.learnable_property, - center_crop=args.center_crop, - set="train", - ) - train_dataloader = torch.utils.data.DataLoader(train_dataset, batch_size=args.train_batch_size, shuffle=True) - - # Scheduler and math around the number of training steps. 
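The step arithmetic that follows derives the number of optimizer updates per epoch from the dataloader length and the gradient-accumulation factor, then falls back to deriving `max_train_steps` from the epoch count when it was not set. A worked example with illustrative numbers (only `num_train_epochs` matches the script's default):

```python
import math

# Hypothetical sizes, for illustration only.
num_batches_per_epoch = 400        # len(train_dataloader)
gradient_accumulation_steps = 4
num_train_epochs = 100             # script default
max_train_steps = None             # not passed on the command line

num_update_steps_per_epoch = math.ceil(num_batches_per_epoch / gradient_accumulation_steps)  # 100
if max_train_steps is None:
    # same fallback as the script: derive total steps from the epoch count
    max_train_steps = num_train_epochs * num_update_steps_per_epoch

print(num_update_steps_per_epoch, max_train_steps)  # 100 10000
```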
- overrode_max_train_steps = False - num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps) - if args.max_train_steps is None: - args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch - overrode_max_train_steps = True - - lr_scheduler = get_scheduler( - args.lr_scheduler, - optimizer=optimizer, - num_warmup_steps=args.lr_warmup_steps * args.gradient_accumulation_steps, - num_training_steps=args.max_train_steps * args.gradient_accumulation_steps, - ) - - text_encoder, optimizer, train_dataloader, lr_scheduler = accelerator.prepare( - text_encoder, optimizer, train_dataloader, lr_scheduler - ) - - # Move vae and unet to device - vae.to(accelerator.device) - unet.to(accelerator.device) - - # Keep vae and unet in eval model as we don't train these - vae.eval() - unet.eval() - - unet = ipex.optimize(unet, dtype=torch.bfloat16, inplace=True) - vae = ipex.optimize(vae, dtype=torch.bfloat16, inplace=True) - - # We need to recalculate our total training steps as the size of the training dataloader may have changed. - num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps) - if overrode_max_train_steps: - args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch - # Afterwards we recalculate our number of training epochs - args.num_train_epochs = math.ceil(args.max_train_steps / num_update_steps_per_epoch) - - # We need to initialize the trackers we use, and also store our configuration. - # The trackers initializes automatically on the main process. - if accelerator.is_main_process: - accelerator.init_trackers("textual_inversion", config=vars(args)) - - # Train! - total_batch_size = args.train_batch_size * accelerator.num_processes * args.gradient_accumulation_steps - - logger.info("***** Running training *****") - logger.info(f" Num examples = {len(train_dataset)}") - logger.info(f" Num Epochs = {args.num_train_epochs}") - logger.info(f" Instantaneous batch size per device = {args.train_batch_size}") - logger.info(f" Total train batch size (w. parallel, distributed & accumulation) = {total_batch_size}") - logger.info(f" Gradient Accumulation steps = {args.gradient_accumulation_steps}") - logger.info(f" Total optimization steps = {args.max_train_steps}") - # Only show the progress bar once on each machine. 
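The "total train batch size" logged just above is the effective number of examples contributing to each optimizer update; with illustrative values (per-device batch 16 is the script default, the rest are assumptions):

```python
# Illustrative numbers only.
train_batch_size = 16              # per device
num_processes = 1                  # accelerator.num_processes
gradient_accumulation_steps = 4

total_batch_size = train_batch_size * num_processes * gradient_accumulation_steps
print(total_batch_size)  # 64 examples feed into every optimizer step
```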
- progress_bar = tqdm(range(args.max_train_steps), disable=not accelerator.is_local_main_process) - progress_bar.set_description("Steps") - global_step = 0 - - text_encoder.train() - text_encoder, optimizer = ipex.optimize(text_encoder, optimizer=optimizer, dtype=torch.bfloat16) - - for epoch in range(args.num_train_epochs): - for step, batch in enumerate(train_dataloader): - with torch.cpu.amp.autocast(enabled=True, dtype=torch.bfloat16): - with accelerator.accumulate(text_encoder): - # Convert images to latent space - latents = vae.encode(batch["pixel_values"]).latent_dist.sample().detach() - latents = latents * vae.config.scaling_factor - - # Sample noise that we'll add to the latents - noise = torch.randn(latents.shape).to(latents.device) - bsz = latents.shape[0] - # Sample a random timestep for each image - timesteps = torch.randint( - 0, noise_scheduler.config.num_train_timesteps, (bsz,), device=latents.device - ).long() - - # Add noise to the latents according to the noise magnitude at each timestep - # (this is the forward diffusion process) - noisy_latents = noise_scheduler.add_noise(latents, noise, timesteps) - - # Get the text embedding for conditioning - encoder_hidden_states = text_encoder(batch["input_ids"])[0] - - # Predict the noise residual - model_pred = unet(noisy_latents, timesteps, encoder_hidden_states).sample - - # Get the target for loss depending on the prediction type - if noise_scheduler.config.prediction_type == "epsilon": - target = noise - elif noise_scheduler.config.prediction_type == "v_prediction": - target = noise_scheduler.get_velocity(latents, noise, timesteps) - else: - raise ValueError(f"Unknown prediction type {noise_scheduler.config.prediction_type}") - - loss = F.mse_loss(model_pred, target, reduction="none").mean([1, 2, 3]).mean() - accelerator.backward(loss) - - # Zero out the gradients for all token embeddings except the newly added - # embeddings for the concept, as we only want to optimize the concept embeddings - if accelerator.num_processes > 1: - grads = text_encoder.module.get_input_embeddings().weight.grad - else: - grads = text_encoder.get_input_embeddings().weight.grad - # Get the index for tokens that we want to zero the grads for - index_grads_to_zero = torch.arange(len(tokenizer)) != placeholder_token_id - grads.data[index_grads_to_zero, :] = grads.data[index_grads_to_zero, :].fill_(0) - - optimizer.step() - lr_scheduler.step() - optimizer.zero_grad() - - # Checks if the accelerator has performed an optimization step behind the scenes - if accelerator.sync_gradients: - progress_bar.update(1) - global_step += 1 - if global_step % args.save_steps == 0: - save_path = os.path.join(args.output_dir, f"learned_embeds-steps-{global_step}.bin") - save_progress(text_encoder, placeholder_token_id, accelerator, args, save_path) - - logs = {"loss": loss.detach().item(), "lr": lr_scheduler.get_last_lr()[0]} - progress_bar.set_postfix(**logs) - accelerator.log(logs, step=global_step) - - if global_step >= args.max_train_steps: - break - - accelerator.wait_for_everyone() - - # Create the pipeline using using the trained modules and save it. 
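One detail worth calling out from the loop above: the whole embedding matrix receives gradients, so every row except the newly added placeholder token's is zeroed before the optimizer step. A self-contained sketch of that masking trick, using a tiny dummy embedding rather than the CLIP text encoder:

```python
# Minimal sketch of the "zero every gradient row except one" trick above.
import torch

vocab_size, dim = 10, 4
placeholder_token_id = 7                      # pretend this is the new token's id
embedding = torch.nn.Embedding(vocab_size, dim)

# Fake forward/backward so every row gets a gradient.
loss = embedding(torch.arange(vocab_size)).sum()
loss.backward()

# Zero the gradient of every row except the placeholder token's,
# mirroring index_grads_to_zero in the training loop above.
index_grads_to_zero = torch.arange(vocab_size) != placeholder_token_id
embedding.weight.grad.data[index_grads_to_zero, :] = 0

print(embedding.weight.grad.abs().sum(dim=1))  # only row 7 is non-zero
```

The net effect is that only the new concept's embedding moves during training, even though the optimizer nominally covers the full embedding table.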
- if accelerator.is_main_process: - if args.push_to_hub and args.only_save_embeds: - logger.warn("Enabling full model saving because --push_to_hub=True was specified.") - save_full_model = True - else: - save_full_model = not args.only_save_embeds - if save_full_model: - pipeline = StableDiffusionPipeline( - text_encoder=accelerator.unwrap_model(text_encoder), - vae=vae, - unet=unet, - tokenizer=tokenizer, - scheduler=PNDMScheduler.from_pretrained(args.pretrained_model_name_or_path, subfolder="scheduler"), - safety_checker=StableDiffusionSafetyChecker.from_pretrained("CompVis/stable-diffusion-safety-checker"), - feature_extractor=CLIPImageProcessor.from_pretrained("openai/clip-vit-base-patch32"), - ) - pipeline.save_pretrained(args.output_dir) - # Save the newly trained embeddings - save_path = os.path.join(args.output_dir, "learned_embeds.bin") - save_progress(text_encoder, placeholder_token_id, accelerator, args, save_path) - - if args.push_to_hub: - upload_folder( - repo_id=repo_id, - folder_path=args.output_dir, - commit_message="End of training", - ignore_patterns=["step_*", "epoch_*"], - ) - - accelerator.end_training() - - -if __name__ == "__main__": - main() diff --git a/spaces/declare-lab/tango/diffusers/examples/research_projects/onnxruntime/README.md b/spaces/declare-lab/tango/diffusers/examples/research_projects/onnxruntime/README.md deleted file mode 100644 index 204d9c951c996fedabc169d9a32781be9f4c4cc1..0000000000000000000000000000000000000000 --- a/spaces/declare-lab/tango/diffusers/examples/research_projects/onnxruntime/README.md +++ /dev/null @@ -1,5 +0,0 @@ -## Diffusers examples with ONNXRuntime optimizations - -**This research project is not actively maintained by the diffusers team. For any questions or comments, please contact Prathik Rao (prathikr), Sunghoon Choi (hanbitmyths), Ashwini Khade (askhade), or Peng Wang (pengwa) on github with any questions.** - -This aims to provide diffusers examples with ONNXRuntime optimizations for training/fine-tuning unconditional image generation, text to image, and textual inversion. Please see individual directories for more details on how to run each task using ONNXRuntime. \ No newline at end of file diff --git a/spaces/deepwisdom/MetaGPT/metagpt/prompts/metagpt_sample.py b/spaces/deepwisdom/MetaGPT/metagpt/prompts/metagpt_sample.py deleted file mode 100644 index 24af8d8c352f39d22e39cdee48fc40b0c8c22ff6..0000000000000000000000000000000000000000 --- a/spaces/deepwisdom/MetaGPT/metagpt/prompts/metagpt_sample.py +++ /dev/null @@ -1,40 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8 -*- -""" -@Time : 2023/6/7 20:29 -@Author : alexanderwu -@File : metagpt_sample.py -""" - -METAGPT_SAMPLE = """ -### 设定 - -你是一个用户的编程助手,可以使用公共库与python系统库进行编程,你的回复应该有且只有一个函数。 -1. 函数本身应尽可能完整,不应缺失需求细节 -2. 你可能需要写一些提示词,用来让LLM(你自己)理解带有上下文的搜索请求 -3. 面对复杂的、难以用简单函数解决的逻辑,尽量交给llm解决 - -### 公共库 - -你可以使用公共库metagpt提供的函数,不能使用其他第三方库的函数。公共库默认已经被import为x变量 -- `import metagpt as x` -- 你可以使用 `x.func(paras)` 方式来对公共库进行调用。 - -公共库中已有函数如下 -- def llm(question: str) -> str # 输入问题,基于大模型进行回答 -- def intent_detection(query: str) -> str # 输入query,分析意图,返回公共库函数名 -- def add_doc(doc_path: str) -> None # 输入文件路径或者文件夹路径,加入知识库 -- def search(query: str) -> list[str] # 输入query返回向量知识库搜索的多个结果 -- def google(query: str) -> list[str] # 使用google查询公网结果 -- def math(query: str) -> str # 输入query公式,返回对公式执行的结果 -- def tts(text: str, wav_path: str) # 输入text文本与对应想要输出音频的路径,将文本转为音频文件 - -### 用户需求 - -我有一个个人知识库文件,我希望基于它来实现一个带有搜索功能的个人助手,需求细则如下 -1. 个人助手会思考是否需要使用个人知识库搜索,如果没有必要,就不使用它 -2. 
个人助手会判断用户意图,在不同意图下使用恰当的函数解决问题 -3. 用语音回答 - -""" -# - def summarize(doc: str) -> str # 输入doc返回摘要 diff --git a/spaces/deepwisdom/MetaGPT/metagpt/utils/s3.py b/spaces/deepwisdom/MetaGPT/metagpt/utils/s3.py deleted file mode 100644 index 96b4579721c41c5d2a695c926a9a0a932c636ff6..0000000000000000000000000000000000000000 --- a/spaces/deepwisdom/MetaGPT/metagpt/utils/s3.py +++ /dev/null @@ -1,155 +0,0 @@ -import base64 -import os.path -import traceback -import uuid -from pathlib import Path -from typing import Optional - -import aioboto3 -import aiofiles - -from metagpt.config import CONFIG -from metagpt.const import BASE64_FORMAT -from metagpt.logs import logger - - -class S3: - """A class for interacting with Amazon S3 storage.""" - - def __init__(self): - self.session = aioboto3.Session() - self.s3_config = CONFIG.S3 - self.auth_config = { - "service_name": "s3", - "aws_access_key_id": self.s3_config["access_key"], - "aws_secret_access_key": self.s3_config["secret_key"], - "endpoint_url": self.s3_config["endpoint_url"], - } - - async def upload_file( - self, - bucket: str, - local_path: str, - object_name: str, - ) -> None: - """Upload a file from the local path to the specified path of the storage bucket specified in s3. - - Args: - bucket: The name of the S3 storage bucket. - local_path: The local file path, including the file name. - object_name: The complete path of the uploaded file to be stored in S3, including the file name. - - Raises: - Exception: If an error occurs during the upload process, an exception is raised. - """ - try: - async with self.session.client(**self.auth_config) as client: - async with aiofiles.open(local_path, mode="rb") as reader: - body = await reader.read() - await client.put_object(Body=body, Bucket=bucket, Key=object_name) - logger.info(f"Successfully uploaded the file to path {object_name} in bucket {bucket} of s3.") - except Exception as e: - logger.error(f"Failed to upload the file to path {object_name} in bucket {bucket} of s3: {e}") - raise e - - async def get_object_url( - self, - bucket: str, - object_name: str, - ) -> str: - """Get the URL for a downloadable or preview file stored in the specified S3 bucket. - - Args: - bucket: The name of the S3 storage bucket. - object_name: The complete path of the file stored in S3, including the file name. - - Returns: - The URL for the downloadable or preview file. - - Raises: - Exception: If an error occurs while retrieving the URL, an exception is raised. - """ - try: - async with self.session.client(**self.auth_config) as client: - file = await client.get_object(Bucket=bucket, Key=object_name) - return str(file["Body"].url) - except Exception as e: - logger.error(f"Failed to get the url for a downloadable or preview file: {e}") - raise e - - async def get_object( - self, - bucket: str, - object_name: str, - ) -> bytes: - """Get the binary data of a file stored in the specified S3 bucket. - - Args: - bucket: The name of the S3 storage bucket. - object_name: The complete path of the file stored in S3, including the file name. - - Returns: - The binary data of the requested file. - - Raises: - Exception: If an error occurs while retrieving the file data, an exception is raised. 
- """ - try: - async with self.session.client(**self.auth_config) as client: - s3_object = await client.get_object(Bucket=bucket, Key=object_name) - return await s3_object["Body"].read() - except Exception as e: - logger.error(f"Failed to get the binary data of the file: {e}") - raise e - - async def download_file( - self, bucket: str, object_name: str, local_path: str, chunk_size: Optional[int] = 128 * 1024 - ) -> None: - """Download an S3 object to a local file. - - Args: - bucket: The name of the S3 storage bucket. - object_name: The complete path of the file stored in S3, including the file name. - local_path: The local file path where the S3 object will be downloaded. - chunk_size: The size of data chunks to read and write at a time. Default is 128 KB. - - Raises: - Exception: If an error occurs during the download process, an exception is raised. - """ - try: - async with self.session.client(**self.auth_config) as client: - s3_object = await client.get_object(Bucket=bucket, Key=object_name) - stream = s3_object["Body"] - async with aiofiles.open(local_path, mode="wb") as writer: - while True: - file_data = await stream.read(chunk_size) - if not file_data: - break - await writer.write(file_data) - except Exception as e: - logger.error(f"Failed to download the file from S3: {e}") - raise e - - async def cache(self, data: str, file_ext: str, format: str = "") -> str: - """Save data to remote S3 and return url""" - object_name = str(uuid.uuid4()).replace("-", "") + file_ext - path = Path(__file__).parent - pathname = path / object_name - try: - async with aiofiles.open(str(pathname), mode="wb") as file: - if format == BASE64_FORMAT: - data = base64.b64decode(data) - await file.write(data) - - bucket = CONFIG.S3.get("bucket") - object_pathname = CONFIG.S3.get("path") or "system" - object_pathname += f"/{object_name}" - object_pathname = os.path.normpath(object_pathname) - await self.upload_file(bucket=bucket, local_path=str(pathname), object_name=object_pathname) - pathname.unlink(missing_ok=True) - - return await self.get_object_url(bucket=bucket, object_name=object_pathname) - except Exception as e: - logger.exception(f"{e}, stack:{traceback.format_exc()}") - pathname.unlink(missing_ok=True) - return None diff --git a/spaces/derful/Chatgpt-academic/predict.py b/spaces/derful/Chatgpt-academic/predict.py deleted file mode 100644 index 5c925604b096f0600c787e102c848f15db69566c..0000000000000000000000000000000000000000 --- a/spaces/derful/Chatgpt-academic/predict.py +++ /dev/null @@ -1,237 +0,0 @@ -# 借鉴了 https://github.com/GaiZhenbiao/ChuanhuChatGPT 项目 - -""" - 该文件中主要包含三个函数 - - 不具备多线程能力的函数: - 1. predict: 正常对话时使用,具备完备的交互功能,不可多线程 - - 具备多线程调用能力的函数 - 2. predict_no_ui:高级实验性功能模块调用,不会实时显示在界面上,参数简单,可以多线程并行,方便实现复杂的功能逻辑 - 3. predict_no_ui_long_connection:在实验过程中发现调用predict_no_ui处理长文档时,和openai的连接容易断掉,这个函数用stream的方式解决这个问题,同样支持多线程 -""" - -import json -import gradio as gr -import logging -import traceback -import requests -import importlib - -# config_private.py放自己的秘密如API和代理网址 -# 读取时首先看是否存在私密的config_private配置文件(不受git管控),如果有,则覆盖原config文件 -try: from config_private import proxies, API_URL, API_KEY, TIMEOUT_SECONDS, MAX_RETRY, LLM_MODEL -except: from config import proxies, API_URL, API_KEY, TIMEOUT_SECONDS, MAX_RETRY, LLM_MODEL - -timeout_bot_msg = '[local] Request timeout, network error. please check proxy settings in config.py.' 
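The module docstring above (in Chinese) draws the key distinction: `predict` is the interactive, streaming UI path, while `predict_no_ui` and `predict_no_ui_long_connection` are the thread-safe helpers for batch-style work, the latter using a streamed request so long jobs do not drop the connection. A hypothetical caller that fans several prompts out over worker threads might look like this (the API key and prompts are placeholders, and the import assumes this file is importable as `predict`):

```python
# Hypothetical usage sketch, not part of predict.py: fan prompts out over
# threads using the thread-safe helper described in the docstring above.
from concurrent.futures import ThreadPoolExecutor

from predict import predict_no_ui_long_connection

API_KEY = "sk-..."          # placeholder; normally read from config_private.py
prompts = ["Summarize section 1.", "Summarize section 2."]

def ask(prompt):
    # top_p / temperature / history / sys_prompt mirror the signature above
    return predict_no_ui_long_connection(API_KEY, prompt, top_p=1.0,
                                         temperature=1.0, history=[],
                                         sys_prompt="You are a helpful assistant.")

with ThreadPoolExecutor(max_workers=2) as pool:
    answers = list(pool.map(ask, prompts))

print(answers)
```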
- -def get_full_error(chunk, stream_response): - """ - 获取完整的从Openai返回的报错 - """ - while True: - try: - chunk += next(stream_response) - except: - break - return chunk - -def predict_no_ui(api, inputs, top_p, temperature, history=[], sys_prompt=""): - """ - 发送至chatGPT,等待回复,一次性完成,不显示中间过程。 - predict函数的简化版。 - 用于payload比较大的情况,或者用于实现多线、带嵌套的复杂功能。 - - inputs 是本次问询的输入 - top_p, temperature是chatGPT的内部调优参数 - history 是之前的对话列表 - (注意无论是inputs还是history,内容太长了都会触发token数量溢出的错误,然后raise ConnectionAbortedError) - """ - headers, payload = generate_payload(api, inputs, top_p, temperature, history, system_prompt=sys_prompt, stream=False) - - retry = 0 - while True: - try: - # make a POST request to the API endpoint, stream=False - response = requests.post(API_URL, headers=headers, proxies=proxies, - json=payload, stream=False, timeout=TIMEOUT_SECONDS*2); break - except requests.exceptions.ReadTimeout as e: - retry += 1 - traceback.print_exc() - if retry > MAX_RETRY: raise TimeoutError - if MAX_RETRY!=0: print(f'请求超时,正在重试 ({retry}/{MAX_RETRY}) ……') - - try: - result = json.loads(response.text)["choices"][0]["message"]["content"] - return result - except Exception as e: - if "choices" not in response.text: print(response.text) - raise ConnectionAbortedError("Json解析不合常规,可能是文本过长" + response.text) - - -def predict_no_ui_long_connection(api, inputs, top_p, temperature, history=[], sys_prompt=""): - """ - 发送至chatGPT,等待回复,一次性完成,不显示中间过程。但内部用stream的方法避免有人中途掐网线。 - """ - headers, payload = generate_payload(api, inputs, top_p, temperature, history, system_prompt=sys_prompt, stream=True) - - retry = 0 - while True: - try: - # make a POST request to the API endpoint, stream=False - response = requests.post(API_URL, headers=headers, proxies=proxies, - json=payload, stream=True, timeout=TIMEOUT_SECONDS); break - except requests.exceptions.ReadTimeout as e: - retry += 1 - traceback.print_exc() - if retry > MAX_RETRY: raise TimeoutError - if MAX_RETRY!=0: print(f'请求超时,正在重试 ({retry}/{MAX_RETRY}) ……') - - stream_response = response.iter_lines() - result = '' - while True: - try: chunk = next(stream_response).decode() - except StopIteration: break - if len(chunk)==0: continue - if not chunk.startswith('data:'): - chunk = get_full_error(chunk.encode('utf8'), stream_response) - raise ConnectionAbortedError("OpenAI拒绝了请求:" + chunk.decode()) - delta = json.loads(chunk.lstrip('data:'))['choices'][0]["delta"] - if len(delta) == 0: break - if "role" in delta: continue - if "content" in delta: result += delta["content"]; print(delta["content"], end='') - else: raise RuntimeError("意外Json结构:"+delta) - return result - - -def predict(api, inputs, top_p, temperature, chatbot=[], history=[], system_prompt='', - stream = True, additional_fn=None): - """ - 发送至chatGPT,流式获取输出。 - 用于基础的对话功能。 - inputs 是本次问询的输入 - top_p, temperature是chatGPT的内部调优参数 - history 是之前的对话列表(注意无论是inputs还是history,内容太长了都会触发token数量溢出的错误) - chatbot 为WebUI中显示的对话列表,修改它,然后yeild出去,可以直接修改对话界面内容 - additional_fn代表点击的哪个按钮,按钮见functional.py - """ - if additional_fn is not None: - import functional - importlib.reload(functional) - functional = functional.get_functionals() - inputs = functional[additional_fn]["Prefix"] + inputs + functional[additional_fn]["Suffix"] - - if stream: - raw_input = inputs - logging.info(f'[raw_input] {raw_input}') - chatbot.append((inputs, "")) - yield chatbot, history, "等待响应" - - headers, payload = generate_payload(api, inputs, top_p, temperature, history, system_prompt, stream) - history.append(inputs); history.append(" ") - - retry = 0 - while True: - try: - # make a 
POST request to the API endpoint, stream=True - response = requests.post(API_URL, headers=headers, proxies=proxies, - json=payload, stream=True, timeout=TIMEOUT_SECONDS);break - except: - retry += 1 - chatbot[-1] = ((chatbot[-1][0], timeout_bot_msg)) - retry_msg = f",正在重试 ({retry}/{MAX_RETRY}) ……" if MAX_RETRY > 0 else "" - yield chatbot, history, "请求超时"+retry_msg - if retry > MAX_RETRY: raise TimeoutError - - gpt_replying_buffer = "" - - is_head_of_the_stream = True - if stream: - stream_response = response.iter_lines() - while True: - chunk = next(stream_response) - # print(chunk.decode()[6:]) - if is_head_of_the_stream: - # 数据流的第一帧不携带content - is_head_of_the_stream = False; continue - - if chunk: - try: - if len(json.loads(chunk.decode()[6:])['choices'][0]["delta"]) == 0: - # 判定为数据流的结束,gpt_replying_buffer也写完了 - logging.info(f'[response] {gpt_replying_buffer}') - break - # 处理数据流的主体 - chunkjson = json.loads(chunk.decode()[6:]) - status_text = f"finish_reason: {chunkjson['choices'][0]['finish_reason']}" - # 如果这里抛出异常,一般是文本过长,详情见get_full_error的输出 - gpt_replying_buffer = gpt_replying_buffer + json.loads(chunk.decode()[6:])['choices'][0]["delta"]["content"] - history[-1] = gpt_replying_buffer - chatbot[-1] = (history[-2], history[-1]) - yield chatbot, history, status_text - - except Exception as e: - traceback.print_exc() - yield chatbot, history, "Json解析不合常规" - chunk = get_full_error(chunk, stream_response) - error_msg = chunk.decode() - if "reduce the length" in error_msg: - chatbot[-1] = (chatbot[-1][0], "[Local Message] Input (or history) is too long, please reduce input or clear history by refreshing this page.") - history = [] - elif "Incorrect API key" in error_msg: - chatbot[-1] = (chatbot[-1][0], "[Local Message] Incorrect API key provided.") - else: - from toolbox import regular_txt_to_markdown - tb_str = regular_txt_to_markdown(traceback.format_exc()) - chatbot[-1] = (chatbot[-1][0], f"[Local Message] Json Error \n\n {tb_str} \n\n {regular_txt_to_markdown(chunk.decode()[4:])}") - yield chatbot, history, "Json解析不合常规" + error_msg - return - -def generate_payload(api, inputs, top_p, temperature, history, system_prompt, stream): - """ - 整合所有信息,选择LLM模型,生成http请求,为发送请求做准备 - """ - headers = { - "Content-Type": "application/json", - "Authorization": f"Bearer {api}" - } - - conversation_cnt = len(history) // 2 - - messages = [{"role": "system", "content": system_prompt}] - if conversation_cnt: - for index in range(0, 2*conversation_cnt, 2): - what_i_have_asked = {} - what_i_have_asked["role"] = "user" - what_i_have_asked["content"] = history[index] - what_gpt_answer = {} - what_gpt_answer["role"] = "assistant" - what_gpt_answer["content"] = history[index+1] - if what_i_have_asked["content"] != "": - if what_gpt_answer["content"] == "": continue - if what_gpt_answer["content"] == timeout_bot_msg: continue - messages.append(what_i_have_asked) - messages.append(what_gpt_answer) - else: - messages[-1]['content'] = what_gpt_answer['content'] - - what_i_ask_now = {} - what_i_ask_now["role"] = "user" - what_i_ask_now["content"] = inputs - messages.append(what_i_ask_now) - - payload = { - "model": LLM_MODEL, - "messages": messages, - "temperature": temperature, # 1.0, - "top_p": top_p, # 1.0, - "n": 1, - "stream": stream, - "presence_penalty": 0, - "frequency_penalty": 0, - } - - print(f" {LLM_MODEL} : {conversation_cnt} : {inputs}") - return headers,payload - - diff --git a/spaces/diacanFperku/AutoGPT/Asc Timetable 2010 Keygen Crackinstmanks.md b/spaces/diacanFperku/AutoGPT/Asc Timetable 2010 Keygen 
Crackinstmanks.md deleted file mode 100644 index 874cb51ab27d3b9db2398518ddc090181f559356..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Asc Timetable 2010 Keygen Crackinstmanks.md +++ /dev/null @@ -1,6 +0,0 @@ -

        asc timetable 2010 keygen crackinstmanks


        DOWNLOAD ✫✫✫ https://gohhs.com/2uFTKk



        -
        -autodeskrevitmep2014itatorrent · Axel Tony - Je Te Ressemble - 2013 - 320Kbps · asc timetable 2010 keygen crackinstmanks. galejapop's ... 1fdad05405
        -
        -
        -

        diff --git a/spaces/diacanFperku/AutoGPT/FilmConvert Pro 1.34 For Sony Strane Monografie Ba.md b/spaces/diacanFperku/AutoGPT/FilmConvert Pro 1.34 For Sony Strane Monografie Ba.md deleted file mode 100644 index 8b249aadb0d665e1f1d1ad9539c7ddbec1e47967..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/FilmConvert Pro 1.34 For Sony Strane Monografie Ba.md +++ /dev/null @@ -1,6 +0,0 @@ -

        FilmConvert Pro 1.34 For Sony strane monografie ba


        DOWNLOAD ⚙⚙⚙ https://gohhs.com/2uFTTP



        - -You can read Kalle's interview with Wired Magazine here. By Kalle Ljung. GoPro HERO3+ Black Edition // DJI Phantom 2. FJ H400 Pro. 4d29de3e1b
        -
        -
        -

        diff --git a/spaces/diagaiwei/ir_chinese_medqa/colbert/modeling/segmented_maxsim.cpp b/spaces/diagaiwei/ir_chinese_medqa/colbert/modeling/segmented_maxsim.cpp deleted file mode 100644 index 0bdac29d241e1b882eb1cf2317ff9da8f137c0c0..0000000000000000000000000000000000000000 --- a/spaces/diagaiwei/ir_chinese_medqa/colbert/modeling/segmented_maxsim.cpp +++ /dev/null @@ -1,97 +0,0 @@ -#include -#include - -#include -#include - -typedef struct { - int tid; - int nthreads; - - int ndocs; - int ndoc_vectors; - int nquery_vectors; - - int64_t* lengths; - float* scores; - int64_t* offsets; - - float* max_scores; -} max_args_t; - -void* max(void* args) { - max_args_t* max_args = (max_args_t*)args; - - int ndocs_per_thread = - std::ceil(((float)max_args->ndocs) / max_args->nthreads); - int start = max_args->tid * ndocs_per_thread; - int end = std::min((max_args->tid + 1) * ndocs_per_thread, max_args->ndocs); - - auto max_scores_offset = - max_args->max_scores + (start * max_args->nquery_vectors); - auto scores_offset = - max_args->scores + (max_args->offsets[start] * max_args->nquery_vectors); - - for (int i = start; i < end; i++) { - for (int j = 0; j < max_args->lengths[i]; j++) { - std::transform(max_scores_offset, - max_scores_offset + max_args->nquery_vectors, - scores_offset, max_scores_offset, - [](float a, float b) { return std::max(a, b); }); - scores_offset += max_args->nquery_vectors; - } - max_scores_offset += max_args->nquery_vectors; - } - - return NULL; -} - -torch::Tensor segmented_maxsim(const torch::Tensor scores, - const torch::Tensor lengths) { - auto lengths_a = lengths.data_ptr(); - auto scores_a = scores.data_ptr(); - auto ndocs = lengths.size(0); - auto ndoc_vectors = scores.size(0); - auto nquery_vectors = scores.size(1); - auto nthreads = at::get_num_threads(); - - torch::Tensor max_scores = - torch::zeros({ndocs, nquery_vectors}, scores.options()); - - int64_t offsets[ndocs + 1]; - offsets[0] = 0; - std::partial_sum(lengths_a, lengths_a + ndocs, offsets + 1); - - pthread_t threads[nthreads]; - max_args_t args[nthreads]; - - for (int i = 0; i < nthreads; i++) { - args[i].tid = i; - args[i].nthreads = nthreads; - - args[i].ndocs = ndocs; - args[i].ndoc_vectors = ndoc_vectors; - args[i].nquery_vectors = nquery_vectors; - - args[i].lengths = lengths_a; - args[i].scores = scores_a; - args[i].offsets = offsets; - - args[i].max_scores = max_scores.data_ptr(); - - int rc = pthread_create(&threads[i], NULL, max, (void*)&args[i]); - if (rc) { - fprintf(stderr, "Unable to create thread %d: %d\n", i, rc); - } - } - - for (int i = 0; i < nthreads; i++) { - pthread_join(threads[i], NULL); - } - - return max_scores.sum(1); -} - -PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) { - m.def("segmented_maxsim_cpp", &segmented_maxsim, "Segmented MaxSim"); -} diff --git a/spaces/digitalxingtong/Azusa-Bert-VITS2/losses.py b/spaces/digitalxingtong/Azusa-Bert-VITS2/losses.py deleted file mode 100644 index fb22a0e834dd87edaa37bb8190eee2c3c7abe0d5..0000000000000000000000000000000000000000 --- a/spaces/digitalxingtong/Azusa-Bert-VITS2/losses.py +++ /dev/null @@ -1,61 +0,0 @@ -import torch -from torch.nn import functional as F - -import commons - - -def feature_loss(fmap_r, fmap_g): - loss = 0 - for dr, dg in zip(fmap_r, fmap_g): - for rl, gl in zip(dr, dg): - rl = rl.float().detach() - gl = gl.float() - loss += torch.mean(torch.abs(rl - gl)) - - return loss * 2 - - -def discriminator_loss(disc_real_outputs, disc_generated_outputs): - loss = 0 - r_losses = [] - g_losses = [] - for dr, dg in 
zip(disc_real_outputs, disc_generated_outputs): - dr = dr.float() - dg = dg.float() - r_loss = torch.mean((1-dr)**2) - g_loss = torch.mean(dg**2) - loss += (r_loss + g_loss) - r_losses.append(r_loss.item()) - g_losses.append(g_loss.item()) - - return loss, r_losses, g_losses - - -def generator_loss(disc_outputs): - loss = 0 - gen_losses = [] - for dg in disc_outputs: - dg = dg.float() - l = torch.mean((1-dg)**2) - gen_losses.append(l) - loss += l - - return loss, gen_losses - - -def kl_loss(z_p, logs_q, m_p, logs_p, z_mask): - """ - z_p, logs_q: [b, h, t_t] - m_p, logs_p: [b, h, t_t] - """ - z_p = z_p.float() - logs_q = logs_q.float() - m_p = m_p.float() - logs_p = logs_p.float() - z_mask = z_mask.float() - - kl = logs_p - logs_q - 0.5 - kl += 0.5 * ((z_p - m_p)**2) * torch.exp(-2. * logs_p) - kl = torch.sum(kl * z_mask) - l = kl / torch.sum(z_mask) - return l diff --git a/spaces/digitalxingtong/Kino-Bert-VITS2/app.py b/spaces/digitalxingtong/Kino-Bert-VITS2/app.py deleted file mode 100644 index 6ce555003de1b72e107dfc8d6fff9a8b6a3a73d1..0000000000000000000000000000000000000000 --- a/spaces/digitalxingtong/Kino-Bert-VITS2/app.py +++ /dev/null @@ -1,183 +0,0 @@ -import sys, os - -if sys.platform == "darwin": - os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1" - -import logging - -logging.getLogger("numba").setLevel(logging.WARNING) -logging.getLogger("markdown_it").setLevel(logging.WARNING) -logging.getLogger("urllib3").setLevel(logging.WARNING) -logging.getLogger("matplotlib").setLevel(logging.WARNING) - -logging.basicConfig(level=logging.INFO, format="| %(name)s | %(levelname)s | %(message)s") - -logger = logging.getLogger(__name__) - -import torch -import argparse -import commons -import utils -from models import SynthesizerTrn -from text.symbols import symbols -from text import cleaned_text_to_sequence, get_bert -from text.cleaner import clean_text -import gradio as gr -import webbrowser - - -net_g = None - - -def get_text(text, language_str, hps): - norm_text, phone, tone, word2ph = clean_text(text, language_str) - phone, tone, language = cleaned_text_to_sequence(phone, tone, language_str) - - if hps.data.add_blank: - phone = commons.intersperse(phone, 0) - tone = commons.intersperse(tone, 0) - language = commons.intersperse(language, 0) - for i in range(len(word2ph)): - word2ph[i] = word2ph[i] * 2 - word2ph[0] += 1 - bert = get_bert(norm_text, word2ph, language_str) - del word2ph - - assert bert.shape[-1] == len(phone) - - phone = torch.LongTensor(phone) - tone = torch.LongTensor(tone) - language = torch.LongTensor(language) - - return bert, phone, tone, language -import soundfile as sf -def infer(text, sdp_ratio, noise_scale, noise_scale_w, length_scale, sid): - global net_g - bert, phones, tones, lang_ids = get_text(text, "ZH", hps) - with torch.no_grad(): - x_tst=phones.to(device).unsqueeze(0) - tones=tones.to(device).unsqueeze(0) - lang_ids=lang_ids.to(device).unsqueeze(0) - bert = bert.to(device).unsqueeze(0) - x_tst_lengths = torch.LongTensor([phones.size(0)]).to(device) - del phones - speakers = torch.LongTensor([hps.data.spk2id[sid]]).to(device) - audio = net_g.infer(x_tst, x_tst_lengths, speakers, tones, lang_ids, bert, sdp_ratio=sdp_ratio - , noise_scale=noise_scale, noise_scale_w=noise_scale_w, length_scale=length_scale)[0][0,0].data.cpu().float().numpy() - del x_tst, tones, lang_ids, bert, x_tst_lengths, speakers - sf.write("tmp.wav", audio, 44100) - return audio -def convert_wav_to_ogg(wav_file): - os.makedirs('out', exist_ok=True) - filename = 
os.path.splitext(os.path.basename(wav_file.name))[0] - output_path_ogg = os.path.join('out', f"out.ogg") - - renamed_input_path = os.path.join('in', f"in.wav") - os.makedirs('in', exist_ok=True) - os.rename(wav_file.name, renamed_input_path) - command = ["ffmpeg", "-i", renamed_input_path, "-acodec", "libopus", "-y", output_path_ogg] - os.system(" ".join(command)) - return output_path_ogg -def tts_fn(text, speaker, sdp_ratio, noise_scale, noise_scale_w, length_scale): - with torch.no_grad(): - audio = infer(text, sdp_ratio=sdp_ratio, noise_scale=noise_scale, noise_scale_w=noise_scale_w, length_scale=length_scale, sid=speaker) - with open('tmp.wav', 'rb') as wav_file: - newogg = convert_wav_to_ogg(wav_file) - return "Success", (hps.data.sampling_rate, audio),newogg - - - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - parser.add_argument("--model_dir", default="./logs/kino_new/kino.pth", help="path of your model") - parser.add_argument("--config_dir", default="./configs/config.json", help="path of your config file") - parser.add_argument("--share", default=False, help="make link public") - parser.add_argument("-d", "--debug", action="store_true", help="enable DEBUG-LEVEL log") - - args = parser.parse_args() - if args.debug: - logger.info("Enable DEBUG-LEVEL log") - logging.basicConfig(level=logging.DEBUG) - hps = utils.get_hparams_from_file(args.config_dir) - device = "cuda:0" if torch.cuda.is_available() else "cpu" - ''' - device = ( - "cuda:0" - if torch.cuda.is_available() - else ( - "mps" - if sys.platform == "darwin" and torch.backends.mps.is_available() - else "cpu" - ) - ) - ''' - net_g = SynthesizerTrn( - len(symbols), - hps.data.filter_length // 2 + 1, - hps.train.segment_size // hps.data.hop_length, - n_speakers=hps.data.n_speakers, - **hps.model).to(device) - _ = net_g.eval() - - _ = utils.load_checkpoint(args.model_dir, net_g, None, skip_optimizer=True) - - speaker_ids = hps.data.spk2id - speakers = list(speaker_ids.keys()) - with gr.Blocks() as app: - with gr.Row(): - with gr.Column(): - - - gr.Markdown(value=""" - 吉诺儿kino Bert-Vits2在线语音生成\n - 1、模型作者:数字星瞳企划 https://t.me/xingtong25680 \n - \n - 2、原项目地址:https://github.com/Stardust-minus/Bert-VITS2\n - 3、使用此模型进行二创请注明AI生成,以及该项目地址。\n - 4、如果想生成超长txt文本的音频请使用colab。 https://colab.research.google.com/drive/13ek8_j1aknr-pbjj3NXxSM4vBIsracU3?usp=drive_link\n - - """) - text = gr.TextArea(label="Text", placeholder="Input Text Here", - value="这里是数字星瞳企画,请在电报搜索星瞳全拼加二五六八零,获取最新更新进展。") - speaker = gr.Dropdown(choices=speakers, value=speakers[0], label='Speaker') - sdp_ratio = gr.Slider(minimum=0, maximum=1, value=0.2, step=0.01, label='语调变化') - noise_scale = gr.Slider(minimum=0.1, maximum=1.5, value=0.6, step=0.01, label='感情变化') - noise_scale_w = gr.Slider(minimum=0.1, maximum=1.4, value=0.8, step=0.01, label='音节发音长度变化') - length_scale = gr.Slider(minimum=0.1, maximum=2, value=1, step=0.01, label='语速') - btn = gr.Button("开启AI语音之旅吧!", variant="primary") - with gr.Column(): - text_output = gr.Textbox(label="Message") - audio_output = gr.Audio(label="Output Audio") - ogg_output = gr.File(label="Converted OGG file") - gr.Markdown(value=""" - 模型汇总:\n - 星瞳 https://huggingface.co/spaces/digitalxingtong/Xingtong-Bert-Vits2 \n - 星瞳 朗读专用 https://huggingface.co/spaces/digitalxingtong/Xingtong-Read-Bert-VITS2 \n - 星瞳 长文本专用 https://huggingface.co/spaces/digitalxingtong/Xingtong-Longread-Bert-VITS2 \n - 甜甜叫花鸡 https://huggingface.co/spaces/digitalxingtong/Jiaohuaji-Bert-Vits2 \n - 七海 
https://huggingface.co/spaces/digitalxingtong/Nanami-Bert-Vits2 \n - 东雪莲 https://huggingface.co/spaces/digitalxingtong/Azuma-Bert-Vits2 \n - 嘉然 https://huggingface.co/spaces/digitalxingtong/Jiaran-Bert-Vits2 \n - 乃琳 https://huggingface.co/spaces/digitalxingtong/Eileen-Bert-Vits2 \n - 恬豆 https://huggingface.co/spaces/digitalxingtong/Dou-Bert-Vits2 \n - 奶绿 杂谈 https://huggingface.co/spaces/digitalxingtong/Nailv-Bert-Vits2 \n - 奶绿 朗读 https://huggingface.co/spaces/digitalxingtong/Nailv-read-Bert-Vits2 \n - 露早 https://huggingface.co/spaces/digitalxingtong/Luzao-Bert-Vits2 \n - 柚恩 https://huggingface.co/spaces/digitalxingtong/Un-Bert-Vits2 \n - 米诺 https://huggingface.co/spaces/digitalxingtong/Minuo-Bert-Vits2 \n - 扇宝 https://huggingface.co/spaces/digitalxingtong/Shanbao-Bert-Vits2 \n - 牧牧白 https://huggingface.co/spaces/digitalxingtong/Miiu-Bert-Vits2 \n - 吉诺儿kino https://huggingface.co/spaces/digitalxingtong/Kino-Bert-Vits2 \n - 九夏 https://huggingface.co/spaces/digitalxingtong/Jiuxia-Bert-Vits2 \n - 卡缇娅 https://huggingface.co/spaces/digitalxingtong/Yaya-Bert-Vits2 \n - 理想_ideal https://huggingface.co/spaces/digitalxingtong/Lixiang-Bert-Vits2 \n - 阿梓 https://huggingface.co/spaces/digitalxingtong/Azusa-Bert-Vits2 \n - 鹿鸣 https://huggingface.co/spaces/digitalxingtong/Luming-Bert-Vits2 \n - 永雏塔菲 https://huggingface.co/spaces/digitalxingtong/Taffy-Bert-VITS2 \n - """) - btn.click(tts_fn, - inputs=[text, speaker, sdp_ratio, noise_scale, noise_scale_w, length_scale], - outputs=[text_output, audio_output,ogg_output]) - - - app.launch(show_error=True) diff --git a/spaces/digitalxingtong/Nanami-Bert-VITS2/README.md b/spaces/digitalxingtong/Nanami-Bert-VITS2/README.md deleted file mode 100644 index 8b1c54bc59ccd38b76ad8cc81c87148e7088293a..0000000000000000000000000000000000000000 --- a/spaces/digitalxingtong/Nanami-Bert-VITS2/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: AI七海 -emoji: 🌟 -colorFrom: red -colorTo: indigo -sdk: gradio -sdk_version: 3.36.1 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/dineshreddy/WALT/walt/datasets/cocoeval.py b/spaces/dineshreddy/WALT/walt/datasets/cocoeval.py deleted file mode 100644 index a42a2735b51fa5b8a5f49dfefb48d84121d18484..0000000000000000000000000000000000000000 --- a/spaces/dineshreddy/WALT/walt/datasets/cocoeval.py +++ /dev/null @@ -1,612 +0,0 @@ -__author__ = 'tsungyi' - -import numpy as np -import datetime -import time -from collections import defaultdict -import pycocotools.mask as maskUtils -import copy - - - -def xywh_to_xyxy(xywh): - """Convert [x1 y1 w h] box format to [x1 y1 x2 y2] format.""" - if isinstance(xywh, (list, tuple)): - # Single box given as a list of coordinates - assert len(xywh) == 4 - x1, y1 = xywh[0], xywh[1] - x2 = x1 + np.maximum(0., xywh[2] - 1.) - y2 = y1 + np.maximum(0., xywh[3] - 1.) 
- return (x1, y1, x2, y2) - elif isinstance(xywh, np.ndarray): - # Multiple boxes given as a 2D ndarray - return np.hstack( - (xywh[:, 0:2], xywh[:, 0:2] + np.maximum(0, xywh[:, 2:4] - 1)) - ) - else: - raise TypeError('Argument xywh must be a list, tuple, or numpy array.') - -def get_iou(pred_box, gt_box): - """ - pred_box : the coordinate for predict bounding box - gt_box : the coordinate for ground truth bounding box - return : the iou score - the left-down coordinate of pred_box:(pred_box[0], pred_box[1]) - the right-up coordinate of pred_box:(pred_box[2], pred_box[3]) - """ - pred_box = xywh_to_xyxy(pred_box) - gt_box = xywh_to_xyxy(gt_box) - # 1.get the coordinate of inters - ixmin = max(pred_box[0], gt_box[0]) - ixmax = min(pred_box[2], gt_box[2]) - iymin = max(pred_box[1], gt_box[1]) - iymax = min(pred_box[3], gt_box[3]) - - iw = np.maximum(ixmax-ixmin+1., 0.) - ih = np.maximum(iymax-iymin+1., 0.) - - # 2. calculate the area of inters - inters = iw*ih - - # 3. calculate the area of union - uni = ((pred_box[2]-pred_box[0]+1.) * (pred_box[3]-pred_box[1]+1.) + - (gt_box[2] - gt_box[0] + 1.) * (gt_box[3] - gt_box[1] + 1.) - - inters) - - # 4. calculate the overlaps between pred_box and gt_box - iou = inters / uni - - return iou - - -class COCOeval: - # Interface for evaluating detection on the Microsoft COCO dataset. - # - # The usage for CocoEval is as follows: - # cocoGt=..., cocoDt=... # load dataset and results - # E = CocoEval(cocoGt,cocoDt); # initialize CocoEval object - # E.params.recThrs = ...; # set parameters as desired - # E.evaluate(); # run per image evaluation - # E.accumulate(); # accumulate per image results - # E.summarize(); # display summary metrics of results - # For example usage see evalDemo.m and http://mscoco.org/. - # - # The evaluation parameters are as follows (defaults in brackets): - # imgIds - [all] N img ids to use for evaluation - # catIds - [all] K cat ids to use for evaluation - # iouThrs - [.5:.05:.95] T=10 IoU thresholds for evaluation - # recThrs - [0:.01:1] R=101 recall thresholds for evaluation - # areaRng - [...] A=4 object area ranges for evaluation - # maxDets - [1 10 100] M=3 thresholds on max detections per image - # iouType - ['segm'] set iouType to 'segm', 'bbox' or 'keypoints' - # iouType replaced the now DEPRECATED useSegm parameter. - # useCats - [1] if true use category labels for evaluation - # Note: if useCats=0 category labels are ignored as in proposal scoring. - # Note: multiple areaRngs [Ax2] and maxDets [Mx1] can be specified. - # - # evaluate(): evaluates detections on every image and every category and - # concats the results into the "evalImgs" with fields: - # dtIds - [1xD] id for each of the D detections (dt) - # gtIds - [1xG] id for each of the G ground truths (gt) - # dtMatches - [TxD] matching gt id at each IoU or 0 - # gtMatches - [TxG] matching dt id at each IoU or 0 - # dtScores - [1xD] confidence of each dt - # gtIgnore - [1xG] ignore flag for each gt - # dtIgnore - [TxD] ignore flag for each dt at each IoU - # - # accumulate(): accumulates the per-image, per-category evaluation - # results in "evalImgs" into the dictionary "eval" with fields: - # params - parameters used for evaluation - # date - date evaluation was performed - # counts - [T,R,K,A,M] parameter dimensions (see above) - # precision - [TxRxKxAxM] precision for every evaluation setting - # recall - [TxKxAxM] max recall for every evaluation setting - # Note: precision and recall==-1 for settings with no gt objects. 
- # - # See also coco, mask, pycocoDemo, pycocoEvalDemo - # - # Microsoft COCO Toolbox. version 2.0 - # Data, paper, and tutorials available at: http://mscoco.org/ - # Code written by Piotr Dollar and Tsung-Yi Lin, 2015. - # Licensed under the Simplified BSD License [see coco/license.txt] - def __init__(self, cocoGt=None, cocoDt=None, iouType='segm'): - ''' - Initialize CocoEval using coco APIs for gt and dt - :param cocoGt: coco object with ground truth annotations - :param cocoDt: coco object with detection results - :return: None - ''' - if not iouType: - print('iouType not specified. use default iouType segm') - self.cocoGt = cocoGt # ground truth COCO API - self.cocoDt = cocoDt # detections COCO API - self.evalImgs = defaultdict(list) # per-image per-category evaluation results [KxAxI] elements - self.eval = {} # accumulated evaluation results - self._gts = defaultdict(list) # gt for evaluation - self._dts = defaultdict(list) # dt for evaluation - self.params = Params(iouType=iouType) # parameters - self._paramsEval = {} # parameters for evaluation - self.stats = [] # result summarization - self.ious = {} # ious between all gts and dts - self.percentage_occ = 0 - if not cocoGt is None: - self.params.imgIds = sorted(cocoGt.getImgIds()) - self.params.catIds = sorted(cocoGt.getCatIds()) - - - def _prepare(self): - ''' - Prepare ._gts and ._dts for evaluation based on params - :return: None - ''' - def _toMask(anns, coco): - # modify ann['segmentation'] by reference - for ann in anns: - rle = coco.annToRLE(ann) - ann['segmentation'] = rle - p = self.params - if p.useCats: - gts=self.cocoGt.loadAnns(self.cocoGt.getAnnIds(imgIds=p.imgIds, catIds=p.catIds)) - dts=self.cocoDt.loadAnns(self.cocoDt.getAnnIds(imgIds=p.imgIds, catIds=p.catIds)) - else: - gts=self.cocoGt.loadAnns(self.cocoGt.getAnnIds(imgIds=p.imgIds)) - dts=self.cocoDt.loadAnns(self.cocoDt.getAnnIds(imgIds=p.imgIds)) - - if self.percentage_occ >= 0: - gts_new = [] - indices = [] - for gt in gts: - #print(gt['occ_percentage'], self.percentage_occ) - if gt['occ_percentage'] >= self.percentage_occ*10 and gt['occ_percentage'] <(self.percentage_occ+1)*10: - for ind, dt in enumerate(dts): - if ind in indices or dt['image_id'] != gt['image_id']: - continue - #print(dt['image_id'], gt['image_id']) - if get_iou(gt['bbox'], dt['bbox']) >0.4: - indices.append(ind) - gts_new.append(gt) - - dts_new = [] - for i in np.unique(indices): - dts_new.append(dts[i]) - - #print(len(gts_new), len(gts), len(dts), len(dts_new), len(indices)) - dts = dts_new - gts = gts_new - ''' - ''' - - # convert ground truth to mask if iouType == 'segm' - if p.iouType == 'segm': - _toMask(gts, self.cocoGt) - _toMask(dts, self.cocoDt) - # set ignore flag - for gt in gts: - gt['ignore'] = gt['ignore'] if 'ignore' in gt else 0 - gt['ignore'] = 'iscrowd' in gt and gt['iscrowd'] - if p.iouType == 'keypoints': - gt['ignore'] = (gt['num_keypoints'] == 0) or gt['ignore'] - self._gts = defaultdict(list) # gt for evaluation - self._dts = defaultdict(list) # dt for evaluation - for gt in gts: - self._gts[gt['image_id'], gt['category_id']].append(gt) - for dt in dts: - self._dts[dt['image_id'], dt['category_id']].append(dt) - self.evalImgs = defaultdict(list) # per-image per-category evaluation results - self.eval = {} # accumulated evaluation results - - def evaluate(self): - ''' - Run per image evaluation on given images and store results (a list of dict) in self.evalImgs - :return: None - ''' - tic = time.time() - print('Running per image evaluation...') - p = self.params - # 
add backward compatibility if useSegm is specified in params - if not p.useSegm is None: - p.iouType = 'segm' if p.useSegm == 1 else 'bbox' - print('useSegm (deprecated) is not None. Running {} evaluation'.format(p.iouType)) - print('Evaluate annotation type *{}*'.format(p.iouType)) - p.imgIds = list(np.unique(p.imgIds)) - if p.useCats: - p.catIds = list(np.unique(p.catIds)) - p.maxDets = sorted(p.maxDets) - self.params=p - - self._prepare() - # loop through images, area range, max detection number - catIds = p.catIds if p.useCats else [-1] - - if p.iouType == 'segm' or p.iouType == 'bbox': - computeIoU = self.computeIoU - elif p.iouType == 'keypoints': - computeIoU = self.computeOks - self.ious = {(imgId, catId): computeIoU(imgId, catId) \ - for imgId in p.imgIds - for catId in catIds} - - evaluateImg = self.evaluateImg - maxDet = p.maxDets[-1] - self.evalImgs = [evaluateImg(imgId, catId, areaRng, maxDet) - for catId in catIds - for areaRng in p.areaRng - for imgId in p.imgIds - ] - self._paramsEval = copy.deepcopy(self.params) - toc = time.time() - print('DONE (t={:0.2f}s).'.format(toc-tic)) - - def computeIoU(self, imgId, catId): - # dts_new.append(dt) - p = self.params - if p.useCats: - gt = self._gts[imgId,catId] - dt = self._dts[imgId,catId] - else: - gt = [_ for cId in p.catIds for _ in self._gts[imgId,cId]] - dt = [_ for cId in p.catIds for _ in self._dts[imgId,cId]] - if len(gt) == 0 and len(dt) ==0: - return [] - inds = np.argsort([-d['score'] for d in dt], kind='mergesort') - dt = [dt[i] for i in inds] - if len(dt) > p.maxDets[-1]: - dt=dt[0:p.maxDets[-1]] - - if p.iouType == 'segm': - g = [g['segmentation'] for g in gt] - d = [d['segmentation'] for d in dt] - elif p.iouType == 'bbox': - g = [g['bbox'] for g in gt] - d = [d['bbox'] for d in dt] - else: - raise Exception('unknown iouType for iou computation') - - # compute iou between each dt and gt region - iscrowd = [int(o['iscrowd']) for o in gt] - ious = maskUtils.iou(d,g,iscrowd) - return ious - - def computeOks(self, imgId, catId): - p = self.params - # dimention here should be Nxm - gts = self._gts[imgId, catId] - dts = self._dts[imgId, catId] - inds = np.argsort([-d['score'] for d in dts], kind='mergesort') - dts = [dts[i] for i in inds] - if len(dts) > p.maxDets[-1]: - dts = dts[0:p.maxDets[-1]] - # if len(gts) == 0 and len(dts) == 0: - if len(gts) == 0 or len(dts) == 0: - return [] - ious = np.zeros((len(dts), len(gts))) - sigmas = p.kpt_oks_sigmas - vars = (sigmas * 2)**2 - k = len(sigmas) - # compute oks between each detection and ground truth object - for j, gt in enumerate(gts): - # create bounds for ignore regions(double the gt bbox) - g = np.array(gt['keypoints']) - xg = g[0::3]; yg = g[1::3]; vg = g[2::3] - k1 = np.count_nonzero(vg > 0) - bb = gt['bbox'] - x0 = bb[0] - bb[2]; x1 = bb[0] + bb[2] * 2 - y0 = bb[1] - bb[3]; y1 = bb[1] + bb[3] * 2 - for i, dt in enumerate(dts): - d = np.array(dt['keypoints']) - xd = d[0::3]; yd = d[1::3] - if k1>0: - # measure the per-keypoint distance if keypoints visible - dx = xd - xg - dy = yd - yg - else: - # measure minimum distance to keypoints in (x0,y0) & (x1,y1) - z = np.zeros((k)) - dx = np.max((z, x0-xd),axis=0)+np.max((z, xd-x1),axis=0) - dy = np.max((z, y0-yd),axis=0)+np.max((z, yd-y1),axis=0) - e = (dx**2 + dy**2) / vars / (gt['area']+np.spacing(1)) / 2 - if k1 > 0: - e=e[vg > 0] - ious[i, j] = np.sum(np.exp(-e)) / e.shape[0] - return ious - - def evaluateImg(self, imgId, catId, aRng, maxDet): - ''' - perform evaluation for single category and image - :return: dict 
(single image results) - ''' - p = self.params - if p.useCats: - gt = self._gts[imgId,catId] - dt = self._dts[imgId,catId] - else: - gt = [_ for cId in p.catIds for _ in self._gts[imgId,cId]] - dt = [_ for cId in p.catIds for _ in self._dts[imgId,cId]] - if len(gt) == 0 and len(dt) ==0: - return None - - for g in gt: - if g['ignore'] or (g['area']aRng[1]): - g['_ignore'] = 1 - else: - g['_ignore'] = 0 - - # sort dt highest score first, sort gt ignore last - gtind = np.argsort([g['_ignore'] for g in gt], kind='mergesort') - gt = [gt[i] for i in gtind] - dtind = np.argsort([-d['score'] for d in dt], kind='mergesort') - dt = [dt[i] for i in dtind[0:maxDet]] - iscrowd = [int(o['iscrowd']) for o in gt] - # load computed ious - ious = self.ious[imgId, catId][:, gtind] if len(self.ious[imgId, catId]) > 0 else self.ious[imgId, catId] - - T = len(p.iouThrs) - G = len(gt) - D = len(dt) - gtm = np.zeros((T,G)) - dtm = np.zeros((T,D)) - gtIg = np.array([g['_ignore'] for g in gt]) - dtIg = np.zeros((T,D)) - if not len(ious)==0: - for tind, t in enumerate(p.iouThrs): - for dind, d in enumerate(dt): - # information about best match so far (m=-1 -> unmatched) - iou = min([t,1-1e-10]) - m = -1 - for gind, g in enumerate(gt): - # if this gt already matched, and not a crowd, continue - if gtm[tind,gind]>0 and not iscrowd[gind]: - continue - # if dt matched to reg gt, and on ignore gt, stop - if m>-1 and gtIg[m]==0 and gtIg[gind]==1: - break - # continue to next gt unless better match made - if ious[dind,gind] < iou: - continue - # if match successful and best so far, store appropriately - iou=ious[dind,gind] - m=gind - # if match made store id of match for both dt and gt - if m ==-1: - continue - dtIg[tind,dind] = gtIg[m] - dtm[tind,dind] = gt[m]['id'] - gtm[tind,m] = d['id'] - # set unmatched detections outside of area range to ignore - a = np.array([d['area']aRng[1] for d in dt]).reshape((1, len(dt))) - dtIg = np.logical_or(dtIg, np.logical_and(dtm==0, np.repeat(a,T,0))) - # store results for given image and category - return { - 'image_id': imgId, - 'category_id': catId, - 'aRng': aRng, - 'maxDet': maxDet, - 'dtIds': [d['id'] for d in dt], - 'gtIds': [g['id'] for g in gt], - 'dtMatches': dtm, - 'gtMatches': gtm, - 'dtScores': [d['score'] for d in dt], - 'gtIgnore': gtIg, - 'dtIgnore': dtIg, - } - - def accumulate(self, p = None): - ''' - Accumulate per image evaluation results and store the result in self.eval - :param p: input params for evaluation - :return: None - ''' - print('Accumulating evaluation results...') - tic = time.time() - if not self.evalImgs: - print('Please run evaluate() first') - # allows input customized parameters - if p is None: - p = self.params - p.catIds = p.catIds if p.useCats == 1 else [-1] - T = len(p.iouThrs) - R = len(p.recThrs) - K = len(p.catIds) if p.useCats else 1 - A = len(p.areaRng) - M = len(p.maxDets) - precision = -np.ones((T,R,K,A,M)) # -1 for the precision of absent categories - recall = -np.ones((T,K,A,M)) - scores = -np.ones((T,R,K,A,M)) - - # create dictionary for future indexing - _pe = self._paramsEval - catIds = _pe.catIds if _pe.useCats else [-1] - setK = set(catIds) - setA = set(map(tuple, _pe.areaRng)) - setM = set(_pe.maxDets) - setI = set(_pe.imgIds) - # get inds to evaluate - k_list = [n for n, k in enumerate(p.catIds) if k in setK] - m_list = [m for n, m in enumerate(p.maxDets) if m in setM] - a_list = [n for n, a in enumerate(map(lambda x: tuple(x), p.areaRng)) if a in setA] - i_list = [n for n, i in enumerate(p.imgIds) if i in setI] - I0 = 
len(_pe.imgIds) - A0 = len(_pe.areaRng) - # retrieve E at each category, area range, and max number of detections - for k, k0 in enumerate(k_list): - Nk = k0*A0*I0 - for a, a0 in enumerate(a_list): - Na = a0*I0 - for m, maxDet in enumerate(m_list): - E = [self.evalImgs[Nk + Na + i] for i in i_list] - E = [e for e in E if not e is None] - if len(E) == 0: - continue - dtScores = np.concatenate([e['dtScores'][0:maxDet] for e in E]) - - # different sorting method generates slightly different results. - # mergesort is used to be consistent as Matlab implementation. - inds = np.argsort(-dtScores, kind='mergesort') - dtScoresSorted = dtScores[inds] - - dtm = np.concatenate([e['dtMatches'][:,0:maxDet] for e in E], axis=1)[:,inds] - dtIg = np.concatenate([e['dtIgnore'][:,0:maxDet] for e in E], axis=1)[:,inds] - gtIg = np.concatenate([e['gtIgnore'] for e in E]) - npig = np.count_nonzero(gtIg==0 ) - if npig == 0: - continue - tps = np.logical_and( dtm, np.logical_not(dtIg) ) - fps = np.logical_and(np.logical_not(dtm), np.logical_not(dtIg) ) - - tp_sum = np.cumsum(tps, axis=1).astype(dtype=np.float) - fp_sum = np.cumsum(fps, axis=1).astype(dtype=np.float) - for t, (tp, fp) in enumerate(zip(tp_sum, fp_sum)): - tp = np.array(tp) - fp = np.array(fp) - nd = len(tp) - rc = tp / npig - pr = tp / (fp+tp+np.spacing(1)) - q = np.zeros((R,)) - ss = np.zeros((R,)) - - if nd: - recall[t,k,a,m] = rc[-1] - else: - recall[t,k,a,m] = 0 - - # numpy is slow without cython optimization for accessing elements - # use python array gets significant speed improvement - pr = pr.tolist(); q = q.tolist() - - for i in range(nd-1, 0, -1): - if pr[i] > pr[i-1]: - pr[i-1] = pr[i] - - inds = np.searchsorted(rc, p.recThrs, side='left') - try: - for ri, pi in enumerate(inds): - q[ri] = pr[pi] - ss[ri] = dtScoresSorted[pi] - except: - pass - precision[t,:,k,a,m] = np.array(q) - scores[t,:,k,a,m] = np.array(ss) - self.eval = { - 'params': p, - 'counts': [T, R, K, A, M], - 'date': datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S'), - 'precision': precision, - 'recall': recall, - 'scores': scores, - } - toc = time.time() - print('DONE (t={:0.2f}s).'.format( toc-tic)) - - def summarize(self): - ''' - Compute and display summary metrics for evaluation results. 
- Note this functin can *only* be applied on the default parameter setting - ''' - def _summarize( ap=1, iouThr=None, areaRng='all', maxDets=100 ): - p = self.params - iStr = ' {:<18} {} @[ IoU={:<9} | area={:>6s} | maxDets={:>3d} ] = {:0.3f}' - titleStr = 'Average Precision' if ap == 1 else 'Average Recall' - typeStr = '(AP)' if ap==1 else '(AR)' - iouStr = '{:0.2f}:{:0.2f}'.format(p.iouThrs[0], p.iouThrs[-1]) \ - if iouThr is None else '{:0.2f}'.format(iouThr) - - aind = [i for i, aRng in enumerate(p.areaRngLbl) if aRng == areaRng] - mind = [i for i, mDet in enumerate(p.maxDets) if mDet == maxDets] - if ap == 1: - # dimension of precision: [TxRxKxAxM] - s = self.eval['precision'] - # IoU - if iouThr is not None: - t = np.where(iouThr == p.iouThrs)[0] - s = s[t] - s = s[:,:,:,aind,mind] - else: - # dimension of recall: [TxKxAxM] - s = self.eval['recall'] - if iouThr is not None: - t = np.where(iouThr == p.iouThrs)[0] - s = s[t] - s = s[:,:,aind,mind] - if len(s[s>-1])==0: - mean_s = -1 - else: - mean_s = np.mean(s[s>-1]) - print(iStr.format(titleStr, typeStr, iouStr, areaRng, maxDets, mean_s)) - return mean_s - def _summarizeDets(): - stats = np.zeros((12,)) - stats[0] = _summarize(1) - stats[1] = _summarize(1, iouThr=.5, maxDets=self.params.maxDets[2]) - stats[2] = _summarize(1, iouThr=.75, maxDets=self.params.maxDets[2]) - stats[3] = _summarize(1, areaRng='small', maxDets=self.params.maxDets[2]) - stats[4] = _summarize(1, areaRng='medium', maxDets=self.params.maxDets[2]) - stats[5] = _summarize(1, areaRng='large', maxDets=self.params.maxDets[2]) - stats[6] = _summarize(0, maxDets=self.params.maxDets[0]) - stats[7] = _summarize(0, maxDets=self.params.maxDets[1]) - stats[8] = _summarize(0, maxDets=self.params.maxDets[2]) - stats[9] = _summarize(0, areaRng='small', maxDets=self.params.maxDets[2]) - stats[10] = _summarize(0, areaRng='medium', maxDets=self.params.maxDets[2]) - stats[11] = _summarize(0, areaRng='large', maxDets=self.params.maxDets[2]) - return stats - def _summarizeKps(): - stats = np.zeros((10,)) - stats[0] = _summarize(1, maxDets=20) - stats[1] = _summarize(1, maxDets=20, iouThr=.5) - stats[2] = _summarize(1, maxDets=20, iouThr=.75) - stats[3] = _summarize(1, maxDets=20, areaRng='medium') - stats[4] = _summarize(1, maxDets=20, areaRng='large') - stats[5] = _summarize(0, maxDets=20) - stats[6] = _summarize(0, maxDets=20, iouThr=.5) - stats[7] = _summarize(0, maxDets=20, iouThr=.75) - stats[8] = _summarize(0, maxDets=20, areaRng='medium') - stats[9] = _summarize(0, maxDets=20, areaRng='large') - return stats - if not self.eval: - raise Exception('Please run accumulate() first') - iouType = self.params.iouType - if iouType == 'segm' or iouType == 'bbox': - summarize = _summarizeDets - elif iouType == 'keypoints': - summarize = _summarizeKps - self.stats = summarize() - - def __str__(self): - self.summarize() - -class Params: - ''' - Params for coco evaluation api - ''' - def setDetParams(self): - self.imgIds = [] - self.catIds = [] - # np.arange causes trouble. 
the data point on arange is slightly larger than the true value - self.iouThrs = np.linspace(.5, 0.95, int(np.round((0.95 - .5) / .05)) + 1, endpoint=True) - self.recThrs = np.linspace(.0, 1.00, int(np.round((1.00 - .0) / .01)) + 1, endpoint=True) - self.maxDets = [1, 10, 100] - self.areaRng = [[0 ** 2, 1e5 ** 2], [0 ** 2, 32 ** 2], [32 ** 2, 96 ** 2], [96 ** 2, 1e5 ** 2]] - self.areaRngLbl = ['all', 'small', 'medium', 'large'] - self.useCats = 1 - - def setKpParams(self): - self.imgIds = [] - self.catIds = [] - # np.arange causes trouble. the data point on arange is slightly larger than the true value - self.iouThrs = np.linspace(.5, 0.95, int(np.round((0.95 - .5) / .05)) + 1, endpoint=True) - self.recThrs = np.linspace(.0, 1.00, int(np.round((1.00 - .0) / .01)) + 1, endpoint=True) - self.maxDets = [20] - self.areaRng = [[0 ** 2, 1e5 ** 2], [32 ** 2, 96 ** 2], [96 ** 2, 1e5 ** 2]] - self.areaRngLbl = ['all', 'medium', 'large'] - self.useCats = 1 - self.kpt_oks_sigmas = np.array([.26, .25, .25, .35, .35, .79, .79, .72, .72, .62,.62, 1.07, 1.07, .87, .87, .89, .89])/10.0 - - def __init__(self, iouType='segm'): - if iouType == 'segm' or iouType == 'bbox': - self.setDetParams() - elif iouType == 'keypoints': - self.setKpParams() - else: - raise Exception('iouType not supported') - self.iouType = iouType - # useSegm is deprecated - self.useSegm = None diff --git a/spaces/dinhminh20521597/OCR_DEMO/configs/_base_/det_pipelines/psenet_pipeline.py b/spaces/dinhminh20521597/OCR_DEMO/configs/_base_/det_pipelines/psenet_pipeline.py deleted file mode 100644 index fd99dc3c2eb14921bbbf64ae861e5e5d6aa55c66..0000000000000000000000000000000000000000 --- a/spaces/dinhminh20521597/OCR_DEMO/configs/_base_/det_pipelines/psenet_pipeline.py +++ /dev/null @@ -1,70 +0,0 @@ -img_norm_cfg = dict( - mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True) - -train_pipeline = [ - dict(type='LoadImageFromFile', color_type='color_ignore_orientation'), - dict( - type='LoadTextAnnotations', - with_bbox=True, - with_mask=True, - poly2mask=False), - dict(type='ColorJitter', brightness=32.0 / 255, saturation=0.5), - dict(type='Normalize', **img_norm_cfg), - dict( - type='ScaleAspectJitter', - img_scale=[(3000, 736)], - ratio_range=(0.5, 3), - aspect_ratio_range=(1, 1), - multiscale_mode='value', - long_size_bound=1280, - short_size_bound=640, - resize_type='long_short_bound', - keep_ratio=False), - dict(type='PSENetTargets'), - dict(type='RandomFlip', flip_ratio=0.5, direction='horizontal'), - dict(type='RandomRotateTextDet'), - dict( - type='RandomCropInstances', - target_size=(640, 640), - instance_key='gt_kernels'), - dict(type='Pad', size_divisor=32), - dict( - type='CustomFormatBundle', - keys=['gt_kernels', 'gt_mask'], - visualize=dict(flag=False, boundary_key='gt_kernels')), - dict(type='Collect', keys=['img', 'gt_kernels', 'gt_mask']) -] - -# for ctw1500 -img_scale_test_ctw1500 = (1280, 1280) -test_pipeline_ctw1500 = [ - dict(type='LoadImageFromFile', color_type='color_ignore_orientation'), - dict( - type='MultiScaleFlipAug', - img_scale=img_scale_test_ctw1500, # used by Resize - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']), - ]) -] - -# for icdar2015 -img_scale_test_icdar2015 = (2240, 2240) -test_pipeline_icdar2015 = [ - dict(type='LoadImageFromFile', color_type='color_ignore_orientation'), - dict( - 
type='MultiScaleFlipAug', - img_scale=img_scale_test_icdar2015, # used by Resize - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']), - ]) -] diff --git a/spaces/dorkai/singpt-2.0/extensions/character_bias/script.py b/spaces/dorkai/singpt-2.0/extensions/character_bias/script.py deleted file mode 100644 index 35b38c0edcb38512f2472937578a363343a4468c..0000000000000000000000000000000000000000 --- a/spaces/dorkai/singpt-2.0/extensions/character_bias/script.py +++ /dev/null @@ -1,42 +0,0 @@ -import gradio as gr - -params = { - "activate": True, - "bias string": " *I am so happy*", -} - -def input_modifier(string): - """ - This function is applied to your text inputs before - they are fed into the model. - """ - - return string - -def output_modifier(string): - """ - This function is applied to the model outputs. - """ - - return string - -def bot_prefix_modifier(string): - """ - This function is only applied in chat mode. It modifies - the prefix text for the Bot and can be used to bias its - behavior. - """ - - if params['activate'] == True: - return f'{string} {params["bias string"].strip()} ' - else: - return string - -def ui(): - # Gradio elements - activate = gr.Checkbox(value=params['activate'], label='Activate character bias') - string = gr.Textbox(value=params["bias string"], label='Character bias') - - # Event functions to update the parameters in the backend - string.change(lambda x: params.update({"bias string": x}), string, None) - activate.change(lambda x: params.update({"activate": x}), activate, None) diff --git a/spaces/dteam/chatgpt-dteam/bin_public/app/Chatbot.py b/spaces/dteam/chatgpt-dteam/bin_public/app/Chatbot.py deleted file mode 100644 index e72fc55be2ab78cf8e72020d0fdd94bff3381523..0000000000000000000000000000000000000000 --- a/spaces/dteam/chatgpt-dteam/bin_public/app/Chatbot.py +++ /dev/null @@ -1,427 +0,0 @@ -# -*- coding:utf-8 -*- - -from overwrites import * -from chat_func import * -from openai_func import * - -from bin_public.utils.utils import * -from bin_public.utils.utils_db import * -from bin_public.config.presets import * -from bin_public.utils.Pinecone import * -my_api_key = "" - -# if we are running in Docker -if os.environ.get('dockerrun') == 'yes': - dockerflag = True -else: - dockerflag = False - -authflag = False - -if dockerflag: - my_api_key = os.environ.get('my_api_key') - if my_api_key == "empty": - print("Please give a api key!") - sys.exit(1) - # auth - username = os.environ.get('USERNAME') - password = os.environ.get('PASSWORD') - if not (isinstance(username, type(None)) or isinstance(password, type(None))): - authflag = True -else: - '''if not my_api_key and os.path.exists("api_key.txt") and os.path.getsize("api_key.txt"): # API key 所在的文件 - with open("api_key.txt", "r") as f: - my_api_key = f.read().strip()''' - - if os.path.exists("auth.json"): - with open("auth.json", "r") as f: - auth = json.load(f) - username = auth["username"] - password = auth["password"] - if username != "" and password != "": - authflag = True - -gr.Chatbot.postprocess = postprocess -PromptHelper.compact_text_chunks = compact_text_chunks - -with open(CSS_path, "r", encoding="utf-8") as f: - customCSS = f.read() - -with gr.Blocks(css=customCSS) as demo: - history = gr.State([]) - token_count = gr.State([]) - invite_code = gr.State() - promptTemplates = 
gr.State(load_template(get_template_names(plain=True)[0], mode=2)) - function_template = gr.State(function) - interviewer_prompt = gr.Textbox(INTERVIEWER_PROMPT, visible=False) - user_api_key = gr.State(my_api_key) - user_question = gr.State("") - outputing = gr.State(False) - topic = gr.State("未命名对话历史记录") - pinecone_api_key = gr.Textbox(PINECONE_API_KEY, visible=False) - pinecone_api_env = gr.Textbox(PINECONE_API_ENV, visible=False) - pinecone_index_name = gr.Textbox(PINECONE_INDEX_NAME, visible=False) - - # gr.HTML(""" - #
        - # """) - gr.HTML(title) - - with gr.Row(scale=1).style(equal_height=True): - with gr.Column(scale=5): - with gr.Row(scale=1): - chatbot = gr.Chatbot().style(height=600) # .style(color_map=("#1D51EE", "#585A5B")) - with gr.Row(scale=1): - with gr.Column(scale=12): - user_input = gr.Textbox(show_label=False, placeholder="在这里输入").style( - container=False) - with gr.Column(min_width=50, scale=1): - submitBtn = gr.Button("🚀", variant="primary") - cancelBtn = gr.Button("取消", variant="secondary", visible=False) - with gr.Row(scale=1): - emptyBtn = gr.Button("🧹 新的对话", ) - retryBtn = gr.Button("🔄 重新生成") - delFirstBtn = gr.Button("🗑️ 删除最旧对话", visible=False) - delLastBtn = gr.Button("🗑️ 删除最新对话") - reduceTokenBtn = gr.Button("♻️ 总结对话") - - with gr.Column(): - with gr.Column(min_width=50, scale=1): - status_display = gr.Markdown("status: ready") - with gr.Tab(label="ChatGPT"): - keyTXT = gr.Textbox(show_label=True, placeholder=f"OpenAI API-key...", - type="password", visible=not HIDE_MY_KEY, label="API-Key/Invite-Code",) - usageTxt = gr.Markdown("**提交key** 以显示额度", elem_id="usage_display", visible=False) - - keyTxt = gr.Textbox(visible=False) - - key_button = gr.Button("Enter") - #usage_button = gr.Button("显示用量") - - model_select_dropdown = gr.Dropdown(label="选择模型", choices=MODELS, multiselect=False, - value=MODELS[0]) - with gr.Accordion("参数", open=False): - temperature = gr.Slider(minimum=-0, maximum=2.0, value=1.0, - step=0.1, interactive=True, label="Temperature", ) - top_p = gr.Slider(minimum=-0, maximum=1.0, value=1.0, step=0.05, - interactive=True, label="Top-p (nucleus sampling)", visible=False) - use_streaming_checkbox = gr.Checkbox(label="实时传输回答", value=True, visible=enable_streaming_option) - use_websearch_checkbox = gr.Checkbox(label="使用在线搜索", value=False) - index_files = gr.Files(label="上传索引文件", type="file", multiple=True) - - with gr.Tab(label="Prompt"): - systemPromptTxt = gr.Textbox(show_label=True, placeholder=f"在这里输入System Prompt...", - label="System prompt", value=initial_prompt).style(container=True) - with gr.Accordion(label="加载Prompt模板", open=True): - with gr.Column(): - with gr.Row(): - with gr.Column(scale=6): - templateFileSelectDropdown = gr.Dropdown(label="选择Prompt模板集合文件", - choices=get_template_names(plain=True), - multiselect=False, - value=get_template_names(plain=True)[0]) - with gr.Column(scale=1): - templateRefreshBtn = gr.Button("🔄 刷新") - with gr.Row(): - with gr.Column(): - templateSelectDropdown = gr.Dropdown(label="从Prompt模板中加载", choices=load_template( - get_template_names(plain=True)[0], mode=1), multiselect=False, value= - load_template( - get_template_names(plain=True)[0], mode=1)[ - 0]) - - with gr.Tab(label="保存/加载"): - with gr.Accordion(label="保存/加载对话历史记录", open=True): - with gr.Column(): - with gr.Row(): - with gr.Column(scale=6): - historyFileSelectDropdown = gr.Dropdown( - label="从列表中加载对话", - choices=get_history_names(plain=True), - multiselect=False, - value=get_history_names(plain=True)[0], - visible=False - ) - - with gr.Row(): - with gr.Column(scale=6): - saveFileName = gr.Textbox( - show_label=True, - placeholder=f"设置文件名: 默认为.json,可选为.md", - label="设置保存文件名", - value="对话历史记录", - ).style(container=True) - with gr.Column(scale=1): - saveHistoryBtn = gr.Button("💾 保存对话") - exportMarkdownBtn = gr.Button("📝 导出为Markdown") - with gr.Row(): - with gr.Column(): - downloadFile = gr.File(interactive=True) - - with gr.Tab(label="学术功能"): - with gr.Row(): - aep = gr.Button("英语学术润色") - acp = gr.Button("中文学术润色") - sge = gr.Button("查找英文语法错误") - ac2e = 
gr.Button("学术中译英") - c2e = gr.Button("中译英") - e2c = gr.Button("英译中") - - with gr.Tab(label="角色"): - with gr.Row(): - interviewer = gr.Button("面试官") - migraine = gr.Button("医生问诊") - pre_defined_q = gr.Dropdown(label="选择预设问题", - choices=qs, - multiselect=False, - value=qs[0]) - index_pinecone = gr.Textbox(placeholder=f"Index fetched", visible=True) - - with gr.Tab(label="Davinci-003"): - with gr.Column(): - with gr.Row(): - with gr.Column(): - davinci_user_input = gr.Textbox(show_label=False, placeholder="在这里输入").style( - container=False) - temperature_davinci = gr.Slider(minimum=-0, maximum=1.0, value=0.7, - step=0.1, interactive=True, label="Temperature", ) - davinci_submitBtn = gr.Button("🚀", variant="primary") - davinci_output = gr.Textbox(show_label=False, placeholder="output").style( - container=False) - - gr.HTML(""" -
        - """) - gr.Markdown(description) - - get_usage_args = dict( - fn=get_usage, inputs=[keyTxt], outputs=[usageTxt], show_progress=False - ) - - # 输入为api key则保持不变,为邀请码则调用中心的api key - key_button.click(key_preprocessing, [keyTXT], [status_display, keyTxt, invite_code]).then(**get_usage_args) - - #usage_button.click(**get_usage_args) - - user_input.submit(predict, - [ - keyTxt, - invite_code, - systemPromptTxt, - history, - user_input, - chatbot, - token_count, - top_p, - temperature, - use_streaming_checkbox, - model_select_dropdown, - use_websearch_checkbox, - index_files - ], - [chatbot, history, status_display, token_count], show_progress=True) - - user_input.submit(reset_textbox, [], [user_input]) - - submitBtn.click(predict, [ - keyTxt, - invite_code, - systemPromptTxt, - history, - user_input, - chatbot, - token_count, - top_p, - temperature, - use_streaming_checkbox, - model_select_dropdown, - use_websearch_checkbox, - index_files], - [chatbot, history, status_display, token_count], show_progress=True) - - submitBtn.click(reset_textbox, [], [user_input]) - - emptyBtn.click(reset_state, outputs=[chatbot, history, token_count, status_display], show_progress=True) - - retryBtn.click(retry, - [keyTxt, invite_code, systemPromptTxt, history, chatbot, token_count, top_p, temperature, - use_streaming_checkbox, - model_select_dropdown], [chatbot, history, status_display, token_count], show_progress=True) - - delFirstBtn.click( - delete_first_conversation, - [history, token_count], - [history, token_count, status_display], - ) - - delLastBtn.click( - delete_last_conversation, - [chatbot, history, token_count], - [chatbot, history, token_count, status_display], show_progress=True) - - reduceTokenBtn.click(reduce_token_size, [keyTxt, invite_code, systemPromptTxt, history, chatbot, token_count, top_p, - temperature, use_streaming_checkbox, model_select_dropdown], - [chatbot, history, status_display, token_count], show_progress=True) - # History - saveHistoryBtn.click( - save_chat_history, - [saveFileName, systemPromptTxt, history, chatbot], - downloadFile, - show_progress=True, - ) - saveHistoryBtn.click(get_history_names, None, [historyFileSelectDropdown]) - exportMarkdownBtn.click( - export_markdown, - [saveFileName, systemPromptTxt, history, chatbot], - downloadFile, - show_progress=True, - ) - # historyRefreshBtn.click(get_history_names, None, [historyFileSelectDropdown]) - historyFileSelectDropdown.change( - load_chat_history, - [historyFileSelectDropdown, systemPromptTxt, history, chatbot], - [saveFileName, systemPromptTxt, history, chatbot], - show_progress=True, - ) - downloadFile.change( - load_chat_history, - [downloadFile, systemPromptTxt, history, chatbot], - [saveFileName, systemPromptTxt, history, chatbot], - ) - - # Template - templateRefreshBtn.click(get_template_names, None, [templateFileSelectDropdown]) - - templateFileSelectDropdown.change(load_template, - [templateFileSelectDropdown], - [promptTemplates, templateSelectDropdown], - show_progress=True) - - templateSelectDropdown.change(get_template_content, - [promptTemplates, templateSelectDropdown, systemPromptTxt], - [systemPromptTxt], - show_progress=True) - - # 功能 - function_button_list = [aep, acp, sge, ac2e, c2e, e2c] - for i in function_button_list: - name = gr.Dropdown(choices=list(function.keys()), value=i.value, visible=False) - i.click(get_function_content, - [function_template, name], - [systemPromptTxt], - show_progress=True) - - # 角色 - interviewer.click(predict, [ - keyTxt, - invite_code, - interviewer_prompt, 
- history, - user_input, - chatbot, - token_count, - top_p, - temperature, - use_streaming_checkbox, - model_select_dropdown, - use_websearch_checkbox, - index_files], - [chatbot, history, status_display, token_count], show_progress=True) - - interviewer.click(get_character_content, - [interviewer_prompt], - [systemPromptTxt]) - - migraine.click(context_construction,[ - keyTxt, - user_input, - model_select_dropdown, - pinecone_api_key, - pinecone_api_env, - temperature, - pinecone_index_name], - [systemPromptTxt, index_pinecone, status_display] - ).then( - predict, [ - keyTxt, - invite_code, - systemPromptTxt, - history, - user_input, - chatbot, - token_count, - top_p, - temperature, - use_streaming_checkbox, - model_select_dropdown, - use_websearch_checkbox, - index_files], - [chatbot, history, status_display, token_count], show_progress=True - ).then(reset_textbox, [], [user_input]) - - pre_defined_q.change(get_pre_defined_q, - [pre_defined_q], - [user_input]).then(context_construction,[ - keyTxt, - user_input, - model_select_dropdown, - pinecone_api_key, - pinecone_api_env, - temperature, - pinecone_index_name], - [systemPromptTxt, index_pinecone, status_display] - ).then( - predict, [ - keyTxt, - invite_code, - systemPromptTxt, - history, - user_input, - chatbot, - token_count, - top_p, - temperature, - use_streaming_checkbox, - model_select_dropdown, - use_websearch_checkbox, - index_files], - [chatbot, history, status_display, token_count], show_progress=True - ).then(reset_textbox, [], [user_input]) - - # Davinci - davinci_user_input.submit(predict_davinci, - [ - keyTxt, - davinci_user_input, - temperature, - ], - [davinci_output], show_progress=True) - - davinci_submitBtn.click(predict_davinci, - [ - keyTxt, - davinci_user_input, - temperature_davinci, - ], - [davinci_output], show_progress=True) - -logging.info("\n访问 http://localhost:7860 查看界面") -logging.info(os.environ) -# 默认开启本地服务器,默认可以直接从IP访问,默认不创建公开分享链接 -demo.title = "ChatGPT-长江商学院 🚀" - -if __name__ == "__main__": - reload_javascript() - - # if running in Docker - if dockerflag: - if authflag: - demo.queue().launch(server_name="0.0.0.0", server_port=7860, auth=(username, password)) - else: - demo.queue().launch(server_name="0.0.0.0", server_port=7860, share=False) - # if not running in Docker - else: - if authflag: - demo.queue().launch(share=False, auth=(username, password)) - else: - demo.queue().launch(share=False) # 改为 share=True 可以创建公开分享链接 diff --git a/spaces/dteam/chatgpt-dteam/bin_public/utils/shared.py b/spaces/dteam/chatgpt-dteam/bin_public/utils/shared.py deleted file mode 100644 index 90a21e66294f3ac3ef4143a1c31ebe76f97ad6fc..0000000000000000000000000000000000000000 --- a/spaces/dteam/chatgpt-dteam/bin_public/utils/shared.py +++ /dev/null @@ -1,32 +0,0 @@ -from bin_public.config.presets import COMPLETION_URL, BALANCE_API_URL, USAGE_API_URL, API_HOST -import os -class State: - interrupted = False - completion_url = COMPLETION_URL - balance_api_url = BALANCE_API_URL - usage_api_url = USAGE_API_URL - - def interrupt(self): - self.interrupted = True - - def recover(self): - self.interrupted = False - - def set_api_host(self, api_host): - self.completion_url = f"https://{api_host}/v1/chat/completions" - self.balance_api_url = f"https://{api_host}/dashboard/billing/credit_grants" - self.usage_api_url = f"https://{api_host}/dashboard/billing/usage" - os.environ["OPENAI_API_BASE"] = f"https://{api_host}/v1" - - def reset_api_host(self): - self.completion_url = COMPLETION_URL - self.balance_api_url = BALANCE_API_URL - 
self.usage_api_url = USAGE_API_URL - os.environ["OPENAI_API_BASE"] = f"https://{API_HOST}/v1" - return API_HOST - - def reset_all(self): - self.interrupted = False - self.completion_url = COMPLETION_URL - -state = State() \ No newline at end of file diff --git a/spaces/dukujames/ML-Sentiment/README.md b/spaces/dukujames/ML-Sentiment/README.md deleted file mode 100644 index 2ecb61640eface4708c089e30aae3ac649ebe867..0000000000000000000000000000000000000000 --- a/spaces/dukujames/ML-Sentiment/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: ML Sentiment -emoji: 🐠 -colorFrom: purple -colorTo: green -sdk: gradio -sdk_version: 3.29.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/dwolfe66/text-generation-webui-space/modules/html_generator.py b/spaces/dwolfe66/text-generation-webui-space/modules/html_generator.py deleted file mode 100644 index 162040bac68c2e987b33a02ccb12e90b51a63b2d..0000000000000000000000000000000000000000 --- a/spaces/dwolfe66/text-generation-webui-space/modules/html_generator.py +++ /dev/null @@ -1,357 +0,0 @@ -''' - -This is a library for formatting GPT-4chan and chat outputs as nice HTML. - -''' - -import os -import re -from pathlib import Path - -from PIL import Image - -# This is to store the paths to the thumbnails of the profile pictures -image_cache = {} - -def generate_basic_html(s): - css = """ - .container { - max-width: 600px; - margin-left: auto; - margin-right: auto; - background-color: rgb(31, 41, 55); - padding:3em; - } - .container p { - font-size: 16px !important; - color: white !important; - margin-bottom: 22px; - line-height: 1.4 !important; - } - """ - s = '\n'.join([f'

        {line}

        ' for line in s.split('\n')]) - s = f'
        {s}
        ' - return s - -def process_post(post, c): - t = post.split('\n') - number = t[0].split(' ')[1] - if len(t) > 1: - src = '\n'.join(t[1:]) - else: - src = '' - src = re.sub('>', '>', src) - src = re.sub('(>>[0-9]*)', '\\1', src) - src = re.sub('\n', '
        \n', src) - src = f'
        {src}\n' - src = f'Anonymous No.{number}\n{src}' - return src - -def generate_4chan_html(f): - css = """ - - #parent #container { - background-color: #eef2ff; - padding: 17px; - } - #parent #container .reply { - background-color: rgb(214, 218, 240); - border-bottom-color: rgb(183, 197, 217); - border-bottom-style: solid; - border-bottom-width: 1px; - border-image-outset: 0; - border-image-repeat: stretch; - border-image-slice: 100%; - border-image-source: none; - border-image-width: 1; - border-left-color: rgb(0, 0, 0); - border-left-style: none; - border-left-width: 0px; - border-right-color: rgb(183, 197, 217); - border-right-style: solid; - border-right-width: 1px; - border-top-color: rgb(0, 0, 0); - border-top-style: none; - border-top-width: 0px; - color: rgb(0, 0, 0); - display: table; - font-family: arial, helvetica, sans-serif; - font-size: 13.3333px; - margin-bottom: 4px; - margin-left: 0px; - margin-right: 0px; - margin-top: 4px; - overflow-x: hidden; - overflow-y: hidden; - padding-bottom: 4px; - padding-left: 2px; - padding-right: 2px; - padding-top: 4px; - } - - #parent #container .number { - color: rgb(0, 0, 0); - font-family: arial, helvetica, sans-serif; - font-size: 13.3333px; - width: 342.65px; - margin-right: 7px; - } - - #parent #container .op { - color: rgb(0, 0, 0); - font-family: arial, helvetica, sans-serif; - font-size: 13.3333px; - margin-bottom: 8px; - margin-left: 0px; - margin-right: 0px; - margin-top: 4px; - overflow-x: hidden; - overflow-y: hidden; - } - - #parent #container .op blockquote { - margin-left: 0px !important; - } - - #parent #container .name { - color: rgb(17, 119, 67); - font-family: arial, helvetica, sans-serif; - font-size: 13.3333px; - font-weight: 700; - margin-left: 7px; - } - - #parent #container .quote { - color: rgb(221, 0, 0); - font-family: arial, helvetica, sans-serif; - font-size: 13.3333px; - text-decoration-color: rgb(221, 0, 0); - text-decoration-line: underline; - text-decoration-style: solid; - text-decoration-thickness: auto; - } - - #parent #container .greentext { - color: rgb(120, 153, 34); - font-family: arial, helvetica, sans-serif; - font-size: 13.3333px; - } - - #parent #container blockquote { - margin: 0px !important; - margin-block-start: 1em; - margin-block-end: 1em; - margin-inline-start: 40px; - margin-inline-end: 40px; - margin-top: 13.33px !important; - margin-bottom: 13.33px !important; - margin-left: 40px !important; - margin-right: 40px !important; - } - - #parent #container .message { - color: black; - border: none; - } - """ - - posts = [] - post = '' - c = -2 - for line in f.splitlines(): - line += "\n" - if line == '-----\n': - continue - elif line.startswith('--- '): - c += 1 - if post != '': - src = process_post(post, c) - posts.append(src) - post = line - else: - post += line - if post != '': - src = process_post(post, c) - posts.append(src) - - for i in range(len(posts)): - if i == 0: - posts[i] = f'
        {posts[i]}
        \n' - else: - posts[i] = f'
        {posts[i]}
        \n' - - output = '' - output += f'
        ' - for post in posts: - output += post - output += '
        ' - output = output.split('\n') - for i in range(len(output)): - output[i] = re.sub(r'^(>(.*?)(
        |
        ))', r'\1', output[i]) - output[i] = re.sub(r'^
        (>(.*?)(
        |
        ))', r'
        \1', output[i]) - output = '\n'.join(output) - - return output - -def get_image_cache(path): - cache_folder = Path("cache") - if not cache_folder.exists(): - cache_folder.mkdir() - - mtime = os.stat(path).st_mtime - if (path in image_cache and mtime != image_cache[path][0]) or (path not in image_cache): - img = Image.open(path) - img.thumbnail((200, 200)) - output_file = Path(f'cache/{path.name}_cache.png') - img.convert('RGB').save(output_file, format='PNG') - image_cache[path] = [mtime, output_file.as_posix()] - - return image_cache[path][1] - -def generate_chat_html(history, name1, name2, character): - css = """ - .chat { - margin-left: auto; - margin-right: auto; - max-width: 800px; - height: 66.67vh; - overflow-y: auto; - padding-right: 20px; - display: flex; - flex-direction: column-reverse; - } - - .message { - display: grid; - grid-template-columns: 60px 1fr; - padding-bottom: 25px; - font-size: 15px; - font-family: Helvetica, Arial, sans-serif; - line-height: 1.428571429; - } - - .circle-you { - width: 50px; - height: 50px; - background-color: rgb(238, 78, 59); - border-radius: 50%; - } - - .circle-bot { - width: 50px; - height: 50px; - background-color: rgb(59, 78, 244); - border-radius: 50%; - } - - .circle-bot img, .circle-you img { - border-radius: 50%; - width: 100%; - height: 100%; - object-fit: cover; - } - - .text { - } - - .text p { - margin-top: 5px; - } - - .username { - font-weight: bold; - } - - .message-body { - } - - .message-body img { - max-width: 300px; - max-height: 300px; - border-radius: 20px; - } - - .message-body p { - margin-bottom: 0 !important; - font-size: 15px !important; - line-height: 1.428571429 !important; - } - - .dark .message-body p em { - color: rgb(138, 138, 138) !important; - } - - .message-body p em { - color: rgb(110, 110, 110) !important; - } - - """ - - output = '' - output += f'
        ' - img = '' - - for i in [ - f"characters/{character}.png", - f"characters/{character}.jpg", - f"characters/{character}.jpeg", - "img_bot.png", - "img_bot.jpg", - "img_bot.jpeg" - ]: - - path = Path(i) - if path.exists(): - img = f'' - break - - img_me = '' - for i in ["img_me.png", "img_me.jpg", "img_me.jpeg"]: - path = Path(i) - if path.exists(): - img_me = f'' - break - - for i,_row in enumerate(history[::-1]): - row = _row.copy() - row[0] = re.sub(r"(\*\*)([^\*\n]*)(\*\*)", r"\2", row[0]) - row[1] = re.sub(r"(\*\*)([^\*\n]*)(\*\*)", r"\2", row[1]) - row[0] = re.sub(r"(\*)([^\*\n]*)(\*)", r"\2", row[0]) - row[1] = re.sub(r"(\*)([^\*\n]*)(\*)", r"\2", row[1]) - p = '\n'.join([f"

        {x}

        " for x in row[1].split('\n')]) - output += f""" -
        -
        - {img} -
        -
        -
        - {name2} -
        -
        - {p} -
        -
        -
        - """ - - if not (i == len(history)-1 and len(row[0]) == 0): - p = '\n'.join([f"

        {x}

        " for x in row[0].split('\n')]) - output += f""" -
        -
        - {img_me} -
        -
        -
        - {name1} -
        -
        - {p} -
        -
        -
        - """ - - output += "
        " - return output diff --git a/spaces/elplaguister/Yuuka_TTS/src/text/__init__.py b/spaces/elplaguister/Yuuka_TTS/src/text/__init__.py deleted file mode 100644 index 9257fda0fa536b2a0dc4fa0c2e1247ac42598080..0000000000000000000000000000000000000000 --- a/spaces/elplaguister/Yuuka_TTS/src/text/__init__.py +++ /dev/null @@ -1,32 +0,0 @@ -""" from https://github.com/keithito/tacotron """ -from src.text import cleaners - - -def text_to_sequence(text, symbols, cleaner_names): - '''Converts a string of text to a sequence of IDs corresponding to the symbols in the text. - Args: - text: string to convert to a sequence - cleaner_names: names of the cleaner functions to run the text through - Returns: - List of integers corresponding to the symbols in the text - ''' - _symbol_to_id = {s: i for i, s in enumerate(symbols)} - - sequence = [] - - clean_text = _clean_text(text, cleaner_names) - for symbol in clean_text: - if symbol not in _symbol_to_id.keys(): - continue - symbol_id = _symbol_to_id[symbol] - sequence += [symbol_id] - return sequence - - -def _clean_text(text, cleaner_names): - for name in cleaner_names: - cleaner = getattr(cleaners, name) - if not cleaner: - raise Exception('Unknown cleaner: %s' % name) - text = cleaner(text) - return text \ No newline at end of file diff --git a/spaces/emilylearning/llm_uncertainty/winogender_schema/readme.md b/spaces/emilylearning/llm_uncertainty/winogender_schema/readme.md deleted file mode 100644 index 30c47084daf777d7646286d94cef2648a0718bf9..0000000000000000000000000000000000000000 --- a/spaces/emilylearning/llm_uncertainty/winogender_schema/readme.md +++ /dev/null @@ -1,7 +0,0 @@ -Files in this directory: - -`templates.tsv` is an unmodifed version fo taht provided in winogender-schemas: -https://github.com/rudinger/winogender-schemas - - -`all_sentences.tsv` is the result of running `../winogender_sentences` \ No newline at end of file diff --git a/spaces/evawade17/acne_detector/app.py b/spaces/evawade17/acne_detector/app.py deleted file mode 100644 index fec212d4958050d6d5f46fbe8ec7a42f7045a741..0000000000000000000000000000000000000000 --- a/spaces/evawade17/acne_detector/app.py +++ /dev/null @@ -1,23 +0,0 @@ -from fastai.vision.all import * -import gradio as gr - -learn = load_learner("export.pkl") - -categories = ("Acne", "Eczema") - -def classify_image(img): - pred,idx,probs = learn.predict(img) - return dict(zip(categories, map(float,probs))) - -image = gr.inputs.Image(shape=(192,192)) -label = gr.outputs.Label() -examples = ['acne.png','eczema.png'] -title = 'Acne and Eczema Predictor' -description = 'This app predicts skin cancer. For reference only.' -article = "Author: Eva Wade. 
" - -intf = gr.Interface(fn=classify_image, inputs=image, outputs=label, examples=examples, title=title, description=description, article=article) -intf.launch(inline=False) - - -#update \ No newline at end of file diff --git "a/spaces/f2api/gpt-academic/crazy_functions/\345\257\271\350\257\235\345\216\206\345\217\262\345\255\230\346\241\243.py" "b/spaces/f2api/gpt-academic/crazy_functions/\345\257\271\350\257\235\345\216\206\345\217\262\345\255\230\346\241\243.py" deleted file mode 100644 index c638d1bd087c878e9722bec02361111613ac2b7c..0000000000000000000000000000000000000000 --- "a/spaces/f2api/gpt-academic/crazy_functions/\345\257\271\350\257\235\345\216\206\345\217\262\345\255\230\346\241\243.py" +++ /dev/null @@ -1,143 +0,0 @@ -from toolbox import CatchException, update_ui -from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive -import re - -def write_chat_to_file(chatbot, history=None, file_name=None): - """ - 将对话记录history以Markdown格式写入文件中。如果没有指定文件名,则使用当前时间生成文件名。 - """ - import os - import time - if file_name is None: - file_name = 'chatGPT对话历史' + time.strftime("%Y-%m-%d-%H-%M-%S", time.localtime()) + '.html' - os.makedirs('./gpt_log/', exist_ok=True) - with open(f'./gpt_log/{file_name}', 'w', encoding='utf8') as f: - from theme import advanced_css - f.write(f'对话历史') - for i, contents in enumerate(chatbot): - for j, content in enumerate(contents): - try: # 这个bug没找到触发条件,暂时先这样顶一下 - if type(content) != str: content = str(content) - except: - continue - f.write(content) - if j == 0: - f.write('
        ') - f.write('
        \n\n') - f.write('
        \n\n raw chat context:\n') - f.write('') - for h in history: - f.write("\n>>>" + h) - f.write('') - res = '对话历史写入:' + os.path.abspath(f'./gpt_log/{file_name}') - print(res) - return res - -def gen_file_preview(file_name): - try: - with open(file_name, 'r', encoding='utf8') as f: - file_content = f.read() - # pattern to match the text between and - pattern = re.compile(r'.*?', flags=re.DOTALL) - file_content = re.sub(pattern, '', file_content) - html, history = file_content.split('
        \n\n raw chat context:\n') - history = history.strip('') - history = history.strip('') - history = history.split("\n>>>") - return list(filter(lambda x:x!="", history))[0][:100] - except: - return "" - -def read_file_to_chat(chatbot, history, file_name): - with open(file_name, 'r', encoding='utf8') as f: - file_content = f.read() - # pattern to match the text between and - pattern = re.compile(r'.*?', flags=re.DOTALL) - file_content = re.sub(pattern, '', file_content) - html, history = file_content.split('
        \n\n raw chat context:\n') - history = history.strip('') - history = history.strip('') - history = history.split("\n>>>") - history = list(filter(lambda x:x!="", history)) - html = html.split('
        \n\n') - html = list(filter(lambda x:x!="", html)) - chatbot.clear() - for i, h in enumerate(html): - i_say, gpt_say = h.split('
        ') - chatbot.append([i_say, gpt_say]) - chatbot.append([f"存档文件详情?", f"[Local Message] 载入对话{len(html)}条,上下文{len(history)}条。"]) - return chatbot, history - -@CatchException -def 对话历史存档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port): - """ - txt 输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径 - llm_kwargs gpt模型参数,如温度和top_p等,一般原样传递下去就行 - plugin_kwargs 插件模型的参数,暂时没有用武之地 - chatbot 聊天显示框的句柄,用于显示给用户 - history 聊天历史,前情提要 - system_prompt 给gpt的静默提醒 - web_port 当前软件运行的端口号 - """ - - chatbot.append(("保存当前对话", - f"[Local Message] {write_chat_to_file(chatbot, history)},您可以调用“载入对话历史存档”还原当下的对话。\n警告!被保存的对话历史可以被使用该系统的任何人查阅。")) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 由于请求gpt需要一段时间,我们先及时地做一次界面更新 - -def hide_cwd(str): - import os - current_path = os.getcwd() - replace_path = "." - return str.replace(current_path, replace_path) - -@CatchException -def 载入对话历史存档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port): - """ - txt 输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径 - llm_kwargs gpt模型参数,如温度和top_p等,一般原样传递下去就行 - plugin_kwargs 插件模型的参数,暂时没有用武之地 - chatbot 聊天显示框的句柄,用于显示给用户 - history 聊天历史,前情提要 - system_prompt 给gpt的静默提醒 - web_port 当前软件运行的端口号 - """ - from .crazy_utils import get_files_from_everything - success, file_manifest, _ = get_files_from_everything(txt, type='.html') - - if not success: - if txt == "": txt = '空空如也的输入栏' - import glob - local_history = "
        ".join(["`"+hide_cwd(f)+f" ({gen_file_preview(f)})"+"`" for f in glob.glob(f'gpt_log/**/chatGPT对话历史*.html', recursive=True)]) - chatbot.append([f"正在查找对话历史文件(html格式): {txt}", f"找不到任何html文件: {txt}。但本地存储了以下历史文件,您可以将任意一个文件路径粘贴到输入区,然后重试:
        {local_history}"]) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - - try: - chatbot, history = read_file_to_chat(chatbot, history, file_manifest[0]) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - except: - chatbot.append([f"载入对话历史文件", f"对话历史文件损坏!"]) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - -@CatchException -def 删除所有本地对话历史记录(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port): - """ - txt 输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径 - llm_kwargs gpt模型参数,如温度和top_p等,一般原样传递下去就行 - plugin_kwargs 插件模型的参数,暂时没有用武之地 - chatbot 聊天显示框的句柄,用于显示给用户 - history 聊天历史,前情提要 - system_prompt 给gpt的静默提醒 - web_port 当前软件运行的端口号 - """ - - import glob, os - local_history = "
        ".join(["`"+hide_cwd(f)+"`" for f in glob.glob(f'gpt_log/**/chatGPT对话历史*.html', recursive=True)]) - for f in glob.glob(f'gpt_log/**/chatGPT对话历史*.html', recursive=True): - os.remove(f) - chatbot.append([f"删除所有历史对话文件", f"已删除
        {local_history}"]) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - - diff --git a/spaces/facebook/MusicGen/audiocraft/grids/musicgen/__init__.py b/spaces/facebook/MusicGen/audiocraft/grids/musicgen/__init__.py deleted file mode 100644 index d3f101f5a29ff85271e44e4f27545168a8f27baa..0000000000000000000000000000000000000000 --- a/spaces/facebook/MusicGen/audiocraft/grids/musicgen/__init__.py +++ /dev/null @@ -1,6 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. -"""MusicGen grids.""" diff --git a/spaces/falterWliame/Face_Mask_Detection/Baofeng Uv 8d Software 11 Photo Santo Passport REPACK.md b/spaces/falterWliame/Face_Mask_Detection/Baofeng Uv 8d Software 11 Photo Santo Passport REPACK.md deleted file mode 100644 index 5b56291ff0afd16f03e692f69c9567f80dec3250..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/Baofeng Uv 8d Software 11 Photo Santo Passport REPACK.md +++ /dev/null @@ -1,7 +0,0 @@ - -

        you can have wide variety of tools and effects to play with before you achieve the final goal that you are looking for. it also comes with the option to change the color of the image. you can change the color of your image in a way that it will be the same as you want. you can also use a shortcut key to perform operations. this helps to give you instant access to all the tools. you can also customize the look and feel of your document in a way that it will match your mood. there are different templates available in the program that you can use. using different filtering tools, you can effectively play around with your photos and videos.
        the best thing about this program is it can be used by novice users as well. you can apply the tools in a way that it will reflect your creativity. another great feature of this program is it can edit video files as well. this makes it the best tool to resize your photos and videos.

        -

        Baofeng Uv 8d Software 11 photo santo passport


        Download Zip ☆☆☆ https://urlca.com/2uDd2F



        -

Watermark is a simple photo editing utility designed to add or replace a watermark on your photographs. You can use Watermark to add text, your name, a logo, a copyright notice or a picture to your photos. It is free to use and requires no registration. Watermark is available for both Mac OS X and Windows systems, works with any type of image and does not need to be connected to the internet.

        -

        the ultimate page count calculation, fun to use, right first time. 1.5.0.54. ultimate page count calculation! if you have a lot of pages to count (of any paper type) this is the app for you. you can print the pages on your printer or save them as a pdf or.tif,.png,.jpeg (or.gif). why do you need it? how many pages? whatever you need to know! ultimate page count calculation gives you more information than any other software in the market. if you are a newspaper, magazine or book publisher - the app will give you more

        -
        -
        \ No newline at end of file diff --git a/spaces/falterWliame/Face_Mask_Detection/Clave Para Activar Windows 8 Single Language.md b/spaces/falterWliame/Face_Mask_Detection/Clave Para Activar Windows 8 Single Language.md deleted file mode 100644 index d330a8fd79207a77aa751ed671971c43aaea1baf..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/Clave Para Activar Windows 8 Single Language.md +++ /dev/null @@ -1,6 +0,0 @@ -

        clave para activar windows 8 single language


        DOWNLOADhttps://urlca.com/2uDc7q



        -
        -[UPDATED] Looking for a genuine Windows 10 product key? ... key using SLUI; How to upgrade from Windows 7 or 8 to Windows 10 ... Windows 10 Home Single Language, 8PTT6-RNW4C-6V7J2-C2D3X-MHBPB, Windows ... 1fdad05405
        -
        -
        -

        diff --git a/spaces/falterWliame/Face_Mask_Detection/Friends Forever Dubbed Movies In Hindi 720p.md b/spaces/falterWliame/Face_Mask_Detection/Friends Forever Dubbed Movies In Hindi 720p.md deleted file mode 100644 index 04b7bdc92ea3ddc7615eee014b6d68e95495cf2c..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/Friends Forever Dubbed Movies In Hindi 720p.md +++ /dev/null @@ -1,6 +0,0 @@ -

        Friends Forever dubbed movies in hindi 720p


        Download File →→→ https://urlca.com/2uDchm



- -
        -
        -
        -

        diff --git a/spaces/fatiXbelha/sd/1bank APK - How to Access Your Bank of Cyprus Finances on Your Mobile Device.md b/spaces/fatiXbelha/sd/1bank APK - How to Access Your Bank of Cyprus Finances on Your Mobile Device.md deleted file mode 100644 index 47358971ae65efc601faf91895e8e2765e9689c5..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/1bank APK - How to Access Your Bank of Cyprus Finances on Your Mobile Device.md +++ /dev/null @@ -1,104 +0,0 @@ - -

        What is 1bank apk and why you need it

        -

        If you are looking for a convenient, fast, and secure way to manage your banking on the go, you might want to check out 1bank apk. This is the official mobile app of Bank of Cyprus, one of the largest banks in Cyprus. With 1bank apk, you can perform your queries and transactions with a touch of a finger, anytime and anywhere. In this article, we will tell you everything you need to know about 1bank apk, including its benefits and features, how to download and install it, and how to use it.

        -

        1bank apk


        Download Zip ===== https://urllie.com/2uNB8V



        -

        Benefits and features of 1bank apk

        -

        1bank apk is designed to make your everyday banking easier and faster. Here are some of the benefits and features that you can enjoy with this app:

        -
          -
        • Login with biometrics or passcode: You can log in using your user ID and the 6-digit passcode, or using biometrics (such as fingerprint or face recognition) if your device supports it. This way, you can access your accounts quickly and securely.
        • -
        • Check your balances and transactions: You can check your balances using the convenient home page, where connected accounts are separated per account type (current accounts/saving accounts/cards/loans). You can also view your account details, such as interest rates, IBAN, hold amounts, uncleared cheques, etc. You can also check your transaction history with a filter option to find a specific transaction.
        • -
        • Transfer funds and make payments: You can transfer funds between your accounts or to any Bank of Cyprus customer. You can also use your predefined templates for convenience. You can also carry out fast and easy QuickPay mobile payments to Bank of Cyprus customers, up to €150 per day, using the beneficiary's mobile number or account/card number. You can also set your favorite QuickPay contacts for easy selection. You can also transfer funds to other local banks or abroad (SEPA & SWIFT) either to new or to auto-saved beneficiaries.
        • -
        • Open accounts and apply for cards: You can open eFixed Deposit (in euro and other currencies) and eNotice accounts. You can also apply for an eCredit Card.
        • -
        • Manage your finances and get insights: You can view a snapshot of your finances and get useful insights such as your net worth and scheduled payments. You can also connect accounts held with other banking institutions (for supported banks only) and view information about those accounts.
        • -
        • Personalize and secure your app: You can personalize the app by uploading a picture of your choice or setting up an account alias. You can also update your contact information (phone numbers, email) with a Digipass OTP, retrieve images of cheques you issued or deposited, and view the notices the bank sends from time to time to keep up with its news.
        • -
        -

        How to download and install 1bank apk

        -

        If you want to use 1bank apk on your Android device, here are the steps you need to follow:

        -
          -
        1. Go to Aptoide, a third-party app store that offers free apps for Android users.
        2. -
        3. Search for Bank Of Cyprus in the search bar.
        4. -
        5. Select the app from the list of results and tap on Download APK.
        6. -
        7. Wait for the download to complete.
        8. Open the downloaded file and tap on Install. You may need to enable the installation of apps from unknown sources in your device settings.
        9. -
        10. Once the installation is complete, you can open the app and start using it.
        11. -
        -

        Note that 1bank apk requires Android 5.0 or higher and a minimum of 50 MB of free space on your device. It is also compatible with tablets and smartphones.
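
        If you prefer to install the downloaded APK from a computer rather than tapping through the dialogs on the phone, you can also sideload it over USB. The sketch below is only an illustration and is not part of the official 1bank instructions: it assumes the Android platform-tools (adb) are installed on your computer, USB debugging is enabled on the device, and the file name is a placeholder for whatever the download is actually called.

```python
# Hypothetical sideload helper -- an optional alternative to installing on the phone.
# Assumes adb (Android platform-tools) is on the PATH and USB debugging is enabled.
import subprocess

def sideload_apk(apk_path: str) -> None:
    # Show connected devices so you can confirm the phone is detected.
    subprocess.run(["adb", "devices"], check=True)
    # Install the APK; -r replaces an existing installation if one is present.
    subprocess.run(["adb", "install", "-r", apk_path], check=True)

if __name__ == "__main__":
    sideload_apk("1bank.apk")  # placeholder file name, not the real download name
```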

        -

        If you encounter any problems with downloading or installing the app, you can contact the bank's customer service at 800 00 800 or +357 22 128000 (from abroad) or email them at 1bank@bankofcyprus.com.

        -

        How to use 1bank apk

        -

        Using 1bank apk is easy and intuitive. Here are some tips on how to use the app:

        -

        -
          -
        • How to log in and navigate the app: To log in, enter your user ID and the 6-digit passcode or use biometrics (if available). You can also use the Quick Login option, which lets you log in with a single tap without entering any credentials. To navigate the app, use the menu icon in the top left corner, which gives you access to all the features and services of the app. You can also use the shortcuts on the home page, which let you perform common actions such as transfers and payments.
        • -
        • How to perform common tasks such as transfers, payments, etc.: To transfer funds or make payments, you need to tap on the Transfers & Payments icon on the home page or the menu. Then, you can choose the type of transaction you want to perform, such as QuickPay, Transfer within Bank of Cyprus, Transfer to other local banks or abroad, etc. You can also use your predefined templates for convenience. To create a new template, you need to tap on the Templates icon on the menu and then select New Template. You can also edit or delete your existing templates from there.
        • -
        • How to access support and feedback: If you need any help or have any suggestions for improving the app, you can tap on the Support & Feedback icon on the menu. There, you can find useful information such as FAQs, Terms & Conditions, Privacy Policy, etc. You can also contact the bank's customer service by phone or email, or send them your feedback using the Feedback Form.
        • -
        -

        Conclusion

        -

        1bank apk is a great app for Bank of Cyprus customers who want to manage their banking on the go. It offers a range of benefits and features that make banking easier and faster. You can download and install it from Aptoide, a third-party app store that offers free apps for Android users. You can also use it easily and intuitively, with a simple login and navigation system and a variety of options for transfers, payments, and more. You can also access support and feedback from within the app if you need any assistance or have any ideas for improvement.

        -

        If you are not yet a Bank of Cyprus customer, you can visit their website or branch to find out more about their products and services and how to become one. You can also open an account online using their eBanking platform.

        -

        So what are you waiting for? Download 1bank apk today and enjoy banking on the go!

        -

        FAQs

        -
          -
        • What is 1bank apk?
        • -

          1bank apk is the official mobile app of Bank of Cyprus, one of the largest banks in Cyprus. It allows you to perform your queries and transactions with a touch of a finger, anytime and anywhere.

          -
        • How do I download and install 1bank apk?
        • -

          You can download and install 1bank apk from Aptoide, a third-party app store that offers free apps for Android users. You need to search for Bank Of Cyprus in Aptoide's search bar, download the APK file, and install it on your device. You may need to enable the installation of apps from unknown sources in your device settings.

          -
        • What are the benefits and features of 1bank apk?
        • -

          Some of the benefits and features of 1bank apk are:

          -
            -
          • Login with biometrics or passcode
          • -
          • Check your balances and transactions
          • -
          • Transfer funds and make payments
          • -
          • Open accounts and apply for cards
          • -
          • Manage your finances and get insights
          • -
          • Personalize and secure your app
          • -
          -
        • How do I use 1bank apk?
        • -

          To use 1bank apk, you need to log in with your user ID and the 6-digit passcode or use biometrics (if available). You can also use the Quick Login option, which allows you to log in with a single tap without entering any credentials. To navigate the app, you can use the menu icon in the top left corner, which gives you access to all the features and services of the app. You can also use the shortcuts on the home page, which allow you to perform common actions such as transfers and payments.

          -
        • Is 1bank apk safe and secure?
        • -

          Yes, 1bank apk is safe and secure. It uses encryption and authentication technologies to protect your data and transactions. It also allows you to log in with biometrics or a passcode, which adds an extra layer of security. You can also personalize and secure your app by uploading a picture of your choice or setting up an account alias, update your contact information, and get images of cheques you issued or deposited.

          -

        -
        -
        \ No newline at end of file diff --git a/spaces/fb700/chatglm-fitness-RLHF/speaker_encoder/audio.py b/spaces/fb700/chatglm-fitness-RLHF/speaker_encoder/audio.py deleted file mode 100644 index 2fcb77ad1d3a85f523e24f84691886736a5686cb..0000000000000000000000000000000000000000 --- a/spaces/fb700/chatglm-fitness-RLHF/speaker_encoder/audio.py +++ /dev/null @@ -1,107 +0,0 @@ -from scipy.ndimage.morphology import binary_dilation -from speaker_encoder.params_data import * -from pathlib import Path -from typing import Optional, Union -import numpy as np -import webrtcvad -import librosa -import struct - -int16_max = (2 ** 15) - 1 - - -def preprocess_wav(fpath_or_wav: Union[str, Path, np.ndarray], - source_sr: Optional[int] = None): - """ - Applies the preprocessing operations used in training the Speaker Encoder to a waveform - either on disk or in memory. The waveform will be resampled to match the data hyperparameters. - - :param fpath_or_wav: either a filepath to an audio file (many extensions are supported, not - just .wav), either the waveform as a numpy array of floats. - :param source_sr: if passing an audio waveform, the sampling rate of the waveform before - preprocessing. After preprocessing, the waveform's sampling rate will match the data - hyperparameters. If passing a filepath, the sampling rate will be automatically detected and - this argument will be ignored. - """ - # Load the wav from disk if needed - if isinstance(fpath_or_wav, str) or isinstance(fpath_or_wav, Path): - wav, source_sr = librosa.load(fpath_or_wav, sr=None) - else: - wav = fpath_or_wav - - # Resample the wav if needed - if source_sr is not None and source_sr != sampling_rate: - wav = librosa.resample(wav, source_sr, sampling_rate) - - # Apply the preprocessing: normalize volume and shorten long silences - wav = normalize_volume(wav, audio_norm_target_dBFS, increase_only=True) - wav = trim_long_silences(wav) - - return wav - - -def wav_to_mel_spectrogram(wav): - """ - Derives a mel spectrogram ready to be used by the encoder from a preprocessed audio waveform. - Note: this not a log-mel spectrogram. - """ - frames = librosa.feature.melspectrogram( - y=wav, - sr=sampling_rate, - n_fft=int(sampling_rate * mel_window_length / 1000), - hop_length=int(sampling_rate * mel_window_step / 1000), - n_mels=mel_n_channels - ) - return frames.astype(np.float32).T - - -def trim_long_silences(wav): - """ - Ensures that segments without voice in the waveform remain no longer than a - threshold determined by the VAD parameters in params.py. 
- - :param wav: the raw waveform as a numpy array of floats - :return: the same waveform with silences trimmed away (length <= original wav length) - """ - # Compute the voice detection window size - samples_per_window = (vad_window_length * sampling_rate) // 1000 - - # Trim the end of the audio to have a multiple of the window size - wav = wav[:len(wav) - (len(wav) % samples_per_window)] - - # Convert the float waveform to 16-bit mono PCM - pcm_wave = struct.pack("%dh" % len(wav), *(np.round(wav * int16_max)).astype(np.int16)) - - # Perform voice activation detection - voice_flags = [] - vad = webrtcvad.Vad(mode=3) - for window_start in range(0, len(wav), samples_per_window): - window_end = window_start + samples_per_window - voice_flags.append(vad.is_speech(pcm_wave[window_start * 2:window_end * 2], - sample_rate=sampling_rate)) - voice_flags = np.array(voice_flags) - - # Smooth the voice detection with a moving average - def moving_average(array, width): - array_padded = np.concatenate((np.zeros((width - 1) // 2), array, np.zeros(width // 2))) - ret = np.cumsum(array_padded, dtype=float) - ret[width:] = ret[width:] - ret[:-width] - return ret[width - 1:] / width - - audio_mask = moving_average(voice_flags, vad_moving_average_width) - audio_mask = np.round(audio_mask).astype(np.bool) - - # Dilate the voiced regions - audio_mask = binary_dilation(audio_mask, np.ones(vad_max_silence_length + 1)) - audio_mask = np.repeat(audio_mask, samples_per_window) - - return wav[audio_mask == True] - - -def normalize_volume(wav, target_dBFS, increase_only=False, decrease_only=False): - if increase_only and decrease_only: - raise ValueError("Both increase only and decrease only are set") - dBFS_change = target_dBFS - 10 * np.log10(np.mean(wav ** 2)) - if (dBFS_change < 0 and increase_only) or (dBFS_change > 0 and decrease_only): - return wav - return wav * (10 ** (dBFS_change / 20)) diff --git a/spaces/fclong/summary/fengshen/models/zen1/__init__.py b/spaces/fclong/summary/fengshen/models/zen1/__init__.py deleted file mode 100644 index 2dec07c8fb965677ba8c8d3b0a13809d0199d301..0000000000000000000000000000000000000000 --- a/spaces/fclong/summary/fengshen/models/zen1/__init__.py +++ /dev/null @@ -1,6 +0,0 @@ -from .ngram_utils import ZenNgramDict, NGRAM_DICT_NAME -from .modeling import ZenConfig, ZenModel, ZenForPreTraining, ZenForTokenClassification, ZenForSequenceClassification -from .tokenization import BertTokenizer, BasicTokenizer, WordpieceTokenizer -version = "0.1.0" -__all__ = ['ZenNgramDict', 'NGRAM_DICT_NAME', "ZenConfig", "ZenModel", "ZenForPreTraining", "ZenForTokenClassification", - "ZenForSequenceClassification", "BertTokenizer", "BasicTokenizer", "WordpieceTokenizer"] diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download RFS - Real Flight Simulator Pro and fly like a pro.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download RFS - Real Flight Simulator Pro and fly like a pro.md deleted file mode 100644 index 5ad2bb3ad4e9e24598f494fba0518ae61f8059ab..0000000000000000000000000000000000000000 --- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download RFS - Real Flight Simulator Pro and fly like a pro.md +++ /dev/null @@ -1,147 +0,0 @@ -
        -

        Download RFS Real Flight Simulator Pro: A Comprehensive Guide

        -

        Do you love flying and exploring the world from the sky? Do you want to experience the thrill of piloting a realistic aircraft in various weather conditions and scenarios? Do you want to join a community of passionate aviators and share your flights with them? If you answered yes to any of these questions, then you should definitely try RFS Real Flight Simulator Pro, one of the best flight simulator games available for Android devices.

        -

        What is RFS Real Flight Simulator Pro?

        -

        RFS Real Flight Simulator Pro is a simulation game developed by RORTOS, a company that specializes in creating realistic and immersive flight simulators. RFS Pro is not just a game but a complete simulation platform that lets you fly in any part of the world and explore scenery and airports in high resolution, with satellite maps, 3D buildings, runways, procedures and air traffic.

        -

        download rfs real flight simulator pro


        Download File 🗹 https://gohhs.com/2uPsEJ



        -

        Features and benefits of RFS Pro

        -

        Some of the features and benefits of RFS Pro are:

        -
          -
        • You can choose from hundreds of different aircraft models, from light planes to airliners, from military jets to helicopters, each with their own custom liveries, 3D live cockpit, working parts and lights.
        • -
        • You can create and manage your own flight plans, with multiple options to customize your departure, arrival, approach and transition procedures. You can also use the automatic flight plan generator or select from thousands of real time flights.
        • -
        • You can interact with realistic ATC controllers and other pilots using the interactive multi voice system. You can also chat with other players in multiplayer mode and join them in their flights.
        • -
        • You can access high definition satellite terrains and heightmaps that provide a stunning visual experience. You can also adjust the weather conditions, from clear skies to storms, from wind direction and speed to turbulence and ground temperature.
        • -
        • You can experience different failures and emergencies, such as engine failure, landing gear malfunction, fuel leak, etc. You can also customize your aircraft's performance, fuel, passengers and cargo.
        • -
        -

        How to download and install RFS Pro on your device

        -

        If you are interested in downloading and installing RFS Pro on your device, here are the steps you need to follow:

        -
          -
        1. Go to the Google Play Store on your device and search for RFS - Real Flight Simulator. Alternatively, you can use this link: [text].
        2. -
        3. Select the app from the search results and tap on Install. The app will start downloading on your device.
        4. -
        5. Once the download is complete, tap on Open. The app will launch on your device.
        6. -
        7. To access the pro features, you need to purchase a subscription. Tap on the Menu icon on the top left corner of the screen and select Subscription. You can choose from different plans: monthly ($0.99), yearly ($6.99) or lifetime ($19.99). Select the plan that suits you best and tap on Buy.
        8. -
        9. Enter your payment details and confirm your purchase. You will receive a confirmation email with your receipt.
        10. -
        11. Congratulations! You have successfully downloaded and installed RFS Pro on your device. You can now enjoy all the features and benefits of this amazing flight simulator.
        12. -
        -

        How to use RFS Real Flight Simulator Pro

        -

        Now that you have downloaded and installed RFS Pro on your device, you might be wondering how to use it. Don't worry, we have got you covered. Here are some tips on how to use RFS Real Flight Simulator Pro:

        -

        -

        How to create and edit flight plans

        -

        A flight plan is a set of instructions that defines your route, altitude, speed, waypoints and procedures for your flight. To create and edit a flight plan in RFS Pro, follow these steps:

        -
          -
        1. Tap on the Menu icon on the top left corner of the screen and select Flight Plan.
        2. -
        3. Tap on the + icon on the top right corner of the screen to create a new flight plan. You can also select an existing flight plan from the list and tap on Edit.
        4. -
        5. Enter the departure and arrival airports by typing their ICAO codes or tapping on the map. You can also use the search function to find airports by name, city or country.
        6. -
        7. Select the aircraft type and configuration by tapping on the Aircraft icon on the bottom left corner of the screen. You can choose from different categories, models and liveries.
        8. -
        9. Select the departure and arrival procedures by tapping on the Procedure icon on the bottom right corner of the screen. You can choose from different SID (Standard Instrument Departure), STAR (Standard Terminal Arrival Route) and IAP (Instrument Approach Procedure) options.
        10. -
        11. Add waypoints to your route by tapping on the Waypoint icon on the bottom center of the screen. You can choose from different types of waypoints, such as VOR (VHF Omnidirectional Range), NDB (Non-Directional Beacon), FIX (Fixed Point) or GPS (Global Positioning System).
        12. -
        13. Adjust your altitude, speed and heading for each waypoint by tapping on it and dragging the sliders. You can also use the Auto option to let the app calculate the optimal values for you.
        14. -
        15. Review your flight plan by tapping on the Preview icon on the top right corner of the screen. You can see a summary of your flight details, such as distance, time, fuel, weight and balance.
        16. -
        17. Save your flight plan by tapping on the Save icon on the top right corner of the screen. You can also share your flight plan with other players by tapping on the Share icon.
        18. -
        -
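
        To make the pieces of a flight plan easier to picture, here is a minimal sketch of how the same information (departure, arrival, aircraft, and waypoints with altitude and speed) could be modelled in code. This is purely illustrative: RFS Pro does not expose a public API, and every name, airport code and waypoint below is an assumption rather than data taken from the app.

```python
# Illustrative only: a toy model of the flight plan structure described above.
# RFS Pro has no public API; all names and values here are made-up examples.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Waypoint:
    ident: str         # e.g. a VOR, NDB, FIX or GPS identifier
    altitude_ft: int   # target altitude at this waypoint, in feet
    speed_kts: int     # target speed at this waypoint, in knots

@dataclass
class FlightPlan:
    departure: str                 # departure airport ICAO code
    arrival: str                   # arrival airport ICAO code
    aircraft: str                  # selected aircraft model
    waypoints: List[Waypoint] = field(default_factory=list)

# Example plan with a single en-route waypoint (all values are placeholders).
plan = FlightPlan(
    departure="LCLK",
    arrival="LGAV",
    aircraft="A320",
    waypoints=[Waypoint(ident="WPT01", altitude_ft=32000, speed_kts=440)],
)
print(f"{plan.departure} -> {plan.arrival}, {len(plan.waypoints)} waypoint(s)")
```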

        How to interact with ATC and other pilots

        -

        ATC (Air Traffic Control) is a service that provides guidance and instructions to pilots to ensure safe and orderly flight operations. In RFS Pro, you can interact with realistic ATC controllers and other pilots using the interactive multi voice system. To do so, follow these steps:

        -
          -
        1. Tap on the Menu icon on the top left corner of the screen and select ATC.
        2. -
        3. Select the frequency that corresponds to your phase of flight, such as Ground, Tower, Departure, Approach or Center. You can also use the Auto option to let the app select the appropriate frequency for you.
        4. -
        5. Tap on the Microphone icon on the bottom center of the screen to start speaking. You can use standard aviation phraseology or natural language to communicate with ATC and other pilots.
        6. -
        7. Listen to the responses from ATC and other pilots by tapping on the Speaker icon on the bottom center of the screen. You can also read the transcripts of the communications by tapping on the Text icon.
        8. -
        9. Follow the instructions and requests from ATC and other pilots by using your flight controls, instruments and procedures. You can also ask for clarifications or confirmations by using phrases such as "Say again" or "Roger".
        10. -
        -

        How to customize your aircraft and liveries

        -

        In RFS Pro, you can customize your aircraft and liveries to suit your preferences and style. To do so, follow these steps:

        -
          -
        1. Tap on the Menu icon on the top left corner of the screen and select Aircraft.
        2. -
        3. Select the category and model of the aircraft that you want to customize by tapping on them. You can also use the search function to find aircraft by name, manufacturer or type.
        4. -
        5. Tap on the Livery icon on the bottom left corner of the screen to change the livery of your aircraft. You can choose from different options, such as airlines, military, special, custom or random.
        6. -
        7. Tap on the Edit icon on the bottom right corner of the screen to edit the livery of your aircraft. You can use different tools, such as paint, erase, fill, text, logo, etc. to create your own unique design.
        8. -
        9. Tap on the Save icon on the top right corner of the screen to save your custom livery. You can also share your custom livery with other players by tapping on the Share icon.
        10. -
        -

        Tips and tricks for RFS Real Flight Simulator Pro

        -

        To make the most out of RFS Real Flight Simulator Pro, here are some tips and tricks that you should know:

        -

        How to optimize your performance and battery life

        -

        RFS Pro is a demanding game that requires a lot of resources from your device. To optimize your performance and battery life, you can do the following:

        -
          -
        • Adjust the graphics settings by tapping on the Menu icon on the top left corner of the screen and selecting Settings. You can change the resolution, quality, shadows, reflections, etc. to suit your device's capabilities.
        • -
        • Turn off unnecessary features by tapping on the Menu icon on the top left corner of the screen and selecting Settings. You can disable features such as sound, vibration, notifications, etc. to save battery and CPU power.
        • -
        • Close other apps running in the background by tapping on the Recent Apps button on your device and swiping them away. This will free up memory and prevent overheating.
        • -
        • Use a charger or a power bank to keep your device plugged in while playing RFS Pro. This will prevent your battery from draining too fast and extend your play time.
        • -
        -

        How to access real time flights and multiplayer mode

        -

        RFS Pro allows you to access real time flights and multiplayer mode, where you can fly with other players from around the world and see their aircraft in real time. To access these features, you need to do the following:

        -
          -
        • Connect your device to a stable internet connection by using Wi-Fi or mobile data. You can check your connection status by tapping on the Menu icon on the top left corner of the screen and selecting Connection.
        • -
        • Select a server by tapping on the Menu icon on the top left corner of the screen and selecting Server. You can choose from different regions, such as Europe, America, Asia, etc. You can also see how many players are online in each server.
        • -
        • Select a real time flight by tapping on the Menu icon on the top left corner of the screen and selecting Real Time Flights. You can see a list of real time flights that are currently in progress, with their details, such as departure, arrival, aircraft, etc. You can also use the search function to find real time flights by flight number, airline, airport, etc.
        • -
        • Select a multiplayer mode by tapping on the Menu icon on the top left corner of the screen and selecting Multiplayer. You can choose from different modes, such as Free Flight, Formation Flight, Air Race, etc. You can also create your own room or join an existing one.
        • -
        -

        How to use satellite terrain and heightmaps

        -

        RFS Pro lets you use satellite terrain and heightmaps, which provide a realistic and detailed representation of the earth's surface. To use these features, do the following:

        -
          -
        • Download the satellite terrain and heightmaps by tapping on the Menu icon on the top left corner of the screen and selecting Download. You can see a list of regions that are available for download, with their size and status. You can also use the search function to find regions by name or location.
        • -
        • Select the regions that you want to download by tapping on them and then tapping on Download. The download will start automatically and you can see the progress on the screen. You can also pause or resume the download by tapping on the Pause or Resume buttons.
        • -
        • Enable the satellite terrain and heightmaps by tapping on the Menu icon on the top left corner of the screen and selecting Settings. You can toggle the Satellite Terrain and Heightmaps options on or off by tapping on them.
        • -
        -

        Conclusion

        -

        RFS Real Flight Simulator Pro is a fantastic game that lets you experience the joy of flying in a realistic and immersive way. You can choose from hundreds of different aircraft models, create and manage your own flight plans, interact with ATC and other pilots, customize your aircraft and liveries, access real time flights and multiplayer mode, use satellite terrain and heightmaps, and much more.

        -

        Summary of the main points

        -

        In this article, we have covered the following topics:

        -
          -
        • What is RFS Real Flight Simulator Pro and what are its features and benefits.
        • -
        • How to download and install RFS Pro on your device.
        • -
        • How to use RFS Real Flight Simulator Pro.
        • -
        • Tips and tricks for RFS Real Flight Simulator Pro.
        • -
        -

        Call to action and recommendation

        -

        If you are looking for a fun and realistic flight simulator game that will keep you entertained for hours, then you should definitely download RFS Real Flight Simulator Pro today. You will not regret it. You can download it from the Google Play Store by using this link: [text].

        -

        We hope you have enjoyed this article and found it useful. If you have any questions or feedback, please let us know in the comments section below. Happy flying!

        -

        FAQs

        -

        Here are some frequently asked questions about RFS Real Flight Simulator Pro:

        -
          -
        1. Q: How much does RFS Pro cost?
          A: RFS Pro is free to download from the Google Play Store, but it requires a subscription to access all the pro features. The subscription plans are: monthly ($0.99), yearly ($6.99) or lifetime ($19.99).
        2. -
        3. Q: What are the minimum requirements for RFS Pro?
          A: RFS Pro requires Android 4.4 or higher and at least 1 GB of RAM. However, for optimal performance and graphics quality, it is recommended to have a device with Android 8.0 or higher and at least 4 GB of RAM.
        4. -
        5. Q: How can I contact RORTOS for support or feedback?
          A: You can contact RORTOS by using their website: [text]. You can also follow them on their social media channels: Facebook ([text]), Twitter ([text]), Instagram ([text]), YouTube ([text]).
        6. -
        7. Q: How can I learn more about RFS Pro?
          A: You can learn more about RFS Pro by reading their user manual: [text]. You can also watch their tutorial videos: [text].
        8. -
        9. Q: How can I join the RFS community?
          A: You can join the RFS community by using their official forum: [text]. You can also join their Discord server: [text].
        10. -

        -
        -
        \ No newline at end of file diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/safer-buffer/Porting-Buffer.md b/spaces/fffiloni/controlnet-animation-doodle/node_modules/safer-buffer/Porting-Buffer.md deleted file mode 100644 index 68d86bab032fabc624b2e312ec3a87666a12b07c..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/safer-buffer/Porting-Buffer.md +++ /dev/null @@ -1,268 +0,0 @@ -# Porting to the Buffer.from/Buffer.alloc API - - -## Overview - -- [Variant 1: Drop support for Node.js ≤ 4.4.x and 5.0.0 — 5.9.x.](#variant-1) (*recommended*) -- [Variant 2: Use a polyfill](#variant-2) -- [Variant 3: manual detection, with safeguards](#variant-3) - -### Finding problematic bits of code using grep - -Just run `grep -nrE '[^a-zA-Z](Slow)?Buffer\s*\(' --exclude-dir node_modules`. - -It will find all the potentially unsafe places in your own code (with some considerably unlikely -exceptions). - -### Finding problematic bits of code using Node.js 8 - -If you’re using Node.js ≥ 8.0.0 (which is recommended), Node.js exposes multiple options that help with finding the relevant pieces of code: - -- `--trace-warnings` will make Node.js show a stack trace for this warning and other warnings that are printed by Node.js. -- `--trace-deprecation` does the same thing, but only for deprecation warnings. -- `--pending-deprecation` will show more types of deprecation warnings. In particular, it will show the `Buffer()` deprecation warning, even on Node.js 8. - -You can set these flags using an environment variable: - -```console -$ export NODE_OPTIONS='--trace-warnings --pending-deprecation' -$ cat example.js -'use strict'; -const foo = new Buffer('foo'); -$ node example.js -(node:7147) [DEP0005] DeprecationWarning: The Buffer() and new Buffer() constructors are not recommended for use due to security and usability concerns. Please use the new Buffer.alloc(), Buffer.allocUnsafe(), or Buffer.from() construction methods instead. - at showFlaggedDeprecation (buffer.js:127:13) - at new Buffer (buffer.js:148:3) - at Object. (/path/to/example.js:2:13) - [... more stack trace lines ...] -``` - -### Finding problematic bits of code using linters - -Eslint rules [no-buffer-constructor](https://eslint.org/docs/rules/no-buffer-constructor) -or -[node/no-deprecated-api](https://github.com/mysticatea/eslint-plugin-node/blob/master/docs/rules/no-deprecated-api.md) -also find calls to deprecated `Buffer()` API. Those rules are included in some pre-sets. - -There is a drawback, though, that it doesn't always -[work correctly](https://github.com/chalker/safer-buffer#why-not-safe-buffer) when `Buffer` is -overriden e.g. with a polyfill, so recommended is a combination of this and some other method -described above. - - -## Variant 1: Drop support for Node.js ≤ 4.4.x and 5.0.0 — 5.9.x. - -This is the recommended solution nowadays that would imply only minimal overhead. - -The Node.js 5.x release line has been unsupported since July 2016, and the Node.js 4.x release line reaches its End of Life in April 2018 (→ [Schedule](https://github.com/nodejs/Release#release-schedule)). This means that these versions of Node.js will *not* receive any updates, even in case of security issues, so using these release lines should be avoided, if at all possible. 
- -What you would do in this case is to convert all `new Buffer()` or `Buffer()` calls to use `Buffer.alloc()` or `Buffer.from()`, in the following way: - -- For `new Buffer(number)`, replace it with `Buffer.alloc(number)`. -- For `new Buffer(string)` (or `new Buffer(string, encoding)`), replace it with `Buffer.from(string)` (or `Buffer.from(string, encoding)`). -- For all other combinations of arguments (these are much rarer), also replace `new Buffer(...arguments)` with `Buffer.from(...arguments)`. - -Note that `Buffer.alloc()` is also _faster_ on the current Node.js versions than -`new Buffer(size).fill(0)`, which is what you would otherwise need to ensure zero-filling. - -Enabling eslint rule [no-buffer-constructor](https://eslint.org/docs/rules/no-buffer-constructor) -or -[node/no-deprecated-api](https://github.com/mysticatea/eslint-plugin-node/blob/master/docs/rules/no-deprecated-api.md) -is recommended to avoid accidential unsafe Buffer API usage. - -There is also a [JSCodeshift codemod](https://github.com/joyeecheung/node-dep-codemod#dep005) -for automatically migrating Buffer constructors to `Buffer.alloc()` or `Buffer.from()`. -Note that it currently only works with cases where the arguments are literals or where the -constructor is invoked with two arguments. - -_If you currently support those older Node.js versions and dropping them would be a semver-major change -for you, or if you support older branches of your packages, consider using [Variant 2](#variant-2) -or [Variant 3](#variant-3) on older branches, so people using those older branches will also receive -the fix. That way, you will eradicate potential issues caused by unguarded Buffer API usage and -your users will not observe a runtime deprecation warning when running your code on Node.js 10._ - - -## Variant 2: Use a polyfill - -Utilize [safer-buffer](https://www.npmjs.com/package/safer-buffer) as a polyfill to support older -Node.js versions. - -You would take exacly the same steps as in [Variant 1](#variant-1), but with a polyfill -`const Buffer = require('safer-buffer').Buffer` in all files where you use the new `Buffer` api. - -Make sure that you do not use old `new Buffer` API — in any files where the line above is added, -using old `new Buffer()` API will _throw_. It will be easy to notice that in CI, though. - -Alternatively, you could use [buffer-from](https://www.npmjs.com/package/buffer-from) and/or -[buffer-alloc](https://www.npmjs.com/package/buffer-alloc) [ponyfills](https://ponyfill.com/) — -those are great, the only downsides being 4 deps in the tree and slightly more code changes to -migrate off them (as you would be using e.g. `Buffer.from` under a different name). If you need only -`Buffer.from` polyfilled — `buffer-from` alone which comes with no extra dependencies. - -_Alternatively, you could use [safe-buffer](https://www.npmjs.com/package/safe-buffer) — it also -provides a polyfill, but takes a different approach which has -[it's drawbacks](https://github.com/chalker/safer-buffer#why-not-safe-buffer). It will allow you -to also use the older `new Buffer()` API in your code, though — but that's arguably a benefit, as -it is problematic, can cause issues in your code, and will start emitting runtime deprecation -warnings starting with Node.js 10._ - -Note that in either case, it is important that you also remove all calls to the old Buffer -API manually — just throwing in `safe-buffer` doesn't fix the problem by itself, it just provides -a polyfill for the new API. 
I have seen people doing that mistake. - -Enabling eslint rule [no-buffer-constructor](https://eslint.org/docs/rules/no-buffer-constructor) -or -[node/no-deprecated-api](https://github.com/mysticatea/eslint-plugin-node/blob/master/docs/rules/no-deprecated-api.md) -is recommended. - -_Don't forget to drop the polyfill usage once you drop support for Node.js < 4.5.0._ - - -## Variant 3 — manual detection, with safeguards - -This is useful if you create Buffer instances in only a few places (e.g. one), or you have your own -wrapper around them. - -### Buffer(0) - -This special case for creating empty buffers can be safely replaced with `Buffer.concat([])`, which -returns the same result all the way down to Node.js 0.8.x. - -### Buffer(notNumber) - -Before: - -```js -var buf = new Buffer(notNumber, encoding); -``` - -After: - -```js -var buf; -if (Buffer.from && Buffer.from !== Uint8Array.from) { - buf = Buffer.from(notNumber, encoding); -} else { - if (typeof notNumber === 'number') - throw new Error('The "size" argument must be of type number.'); - buf = new Buffer(notNumber, encoding); -} -``` - -`encoding` is optional. - -Note that the `typeof notNumber` before `new Buffer` is required (for cases when `notNumber` argument is not -hard-coded) and _is not caused by the deprecation of Buffer constructor_ — it's exactly _why_ the -Buffer constructor is deprecated. Ecosystem packages lacking this type-check caused numereous -security issues — situations when unsanitized user input could end up in the `Buffer(arg)` create -problems ranging from DoS to leaking sensitive information to the attacker from the process memory. - -When `notNumber` argument is hardcoded (e.g. literal `"abc"` or `[0,1,2]`), the `typeof` check can -be omitted. - -Also note that using TypeScript does not fix this problem for you — when libs written in -`TypeScript` are used from JS, or when user input ends up there — it behaves exactly as pure JS, as -all type checks are translation-time only and are not present in the actual JS code which TS -compiles to. - -### Buffer(number) - -For Node.js 0.10.x (and below) support: - -```js -var buf; -if (Buffer.alloc) { - buf = Buffer.alloc(number); -} else { - buf = new Buffer(number); - buf.fill(0); -} -``` - -Otherwise (Node.js ≥ 0.12.x): - -```js -const buf = Buffer.alloc ? Buffer.alloc(number) : new Buffer(number).fill(0); -``` - -## Regarding Buffer.allocUnsafe - -Be extra cautious when using `Buffer.allocUnsafe`: - * Don't use it if you don't have a good reason to - * e.g. you probably won't ever see a performance difference for small buffers, in fact, those - might be even faster with `Buffer.alloc()`, - * if your code is not in the hot code path — you also probably won't notice a difference, - * keep in mind that zero-filling minimizes the potential risks. - * If you use it, make sure that you never return the buffer in a partially-filled state, - * if you are writing to it sequentially — always truncate it to the actuall written length - -Errors in handling buffers allocated with `Buffer.allocUnsafe` could result in various issues, -ranged from undefined behaviour of your code to sensitive data (user input, passwords, certs) -leaking to the remote attacker. - -_Note that the same applies to `new Buffer` usage without zero-filling, depending on the Node.js -version (and lacking type checks also adds DoS to the list of potential problems)._ - - -## FAQ - - -### What is wrong with the `Buffer` constructor? 
- -The `Buffer` constructor could be used to create a buffer in many different ways: - -- `new Buffer(42)` creates a `Buffer` of 42 bytes. Before Node.js 8, this buffer contained - *arbitrary memory* for performance reasons, which could include anything ranging from - program source code to passwords and encryption keys. -- `new Buffer('abc')` creates a `Buffer` that contains the UTF-8-encoded version of - the string `'abc'`. A second argument could specify another encoding: For example, - `new Buffer(string, 'base64')` could be used to convert a Base64 string into the original - sequence of bytes that it represents. -- There are several other combinations of arguments. - -This meant that, in code like `var buffer = new Buffer(foo);`, *it is not possible to tell -what exactly the contents of the generated buffer are* without knowing the type of `foo`. - -Sometimes, the value of `foo` comes from an external source. For example, this function -could be exposed as a service on a web server, converting a UTF-8 string into its Base64 form: - -``` -function stringToBase64(req, res) { - // The request body should have the format of `{ string: 'foobar' }` - const rawBytes = new Buffer(req.body.string) - const encoded = rawBytes.toString('base64') - res.end({ encoded: encoded }) -} -``` - -Note that this code does *not* validate the type of `req.body.string`: - -- `req.body.string` is expected to be a string. If this is the case, all goes well. -- `req.body.string` is controlled by the client that sends the request. -- If `req.body.string` is the *number* `50`, the `rawBytes` would be 50 bytes: - - Before Node.js 8, the content would be uninitialized - - After Node.js 8, the content would be `50` bytes with the value `0` - -Because of the missing type check, an attacker could intentionally send a number -as part of the request. Using this, they can either: - -- Read uninitialized memory. This **will** leak passwords, encryption keys and other - kinds of sensitive information. (Information leak) -- Force the program to allocate a large amount of memory. For example, when specifying - `500000000` as the input value, each request will allocate 500MB of memory. - This can be used to either exhaust the memory available of a program completely - and make it crash, or slow it down significantly. (Denial of Service) - -Both of these scenarios are considered serious security issues in a real-world -web server context. - -when using `Buffer.from(req.body.string)` instead, passing a number will always -throw an exception instead, giving a controlled behaviour that can always be -handled by the program. - - -### The `Buffer()` constructor has been deprecated for a while. Is this really an issue? - -Surveys of code in the `npm` ecosystem have shown that the `Buffer()` constructor is still -widely used. This includes new code, and overall usage of such code has actually been -*increasing*. diff --git a/spaces/fffiloni/lama-video-watermark-remover/bin/paper_runfiles/env.sh b/spaces/fffiloni/lama-video-watermark-remover/bin/paper_runfiles/env.sh deleted file mode 100644 index f3052f0ea1672a569e7775f8c54967d730a7b5ec..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/lama-video-watermark-remover/bin/paper_runfiles/env.sh +++ /dev/null @@ -1,8 +0,0 @@ -DIRNAME="$(dirname $0)" -DIRNAME="$(realpath ""$DIRNAME"")" - -BINDIR="$DIRNAME/.." -SRCDIR="$BINDIR/.." 
-CONFIGDIR="$SRCDIR/configs" - -export PYTHONPATH="$SRCDIR:$PYTHONPATH" diff --git a/spaces/fgenie/scamtext_PAL_self_consistency/funcs/f_14.py b/spaces/fgenie/scamtext_PAL_self_consistency/funcs/f_14.py deleted file mode 100644 index c5ac25ca72f7e85c59cc0b0279087c3cf484bc0a..0000000000000000000000000000000000000000 --- a/spaces/fgenie/scamtext_PAL_self_consistency/funcs/f_14.py +++ /dev/null @@ -1,23 +0,0 @@ -def is_spam(message): - from re import search - - keywords = [ - "실력입증", "추천주", "잠시 시간내서", "지원금받기", "무료교육", "주식상담", - "광고)", "추.천", "해외선물", "무료거부", "정회원방", "kakaotalk.it", "me2.kr", - "선입수", "프로모션", "초대합니다", "특별케어", "완성", "체험반", "차별", "체험", "너도나도", - "로또", "지식교환", "신세계 상품권", "치킨", "커피" - ] - - def contains_keyword(text): - for word in keywords: - if word in text: - return True - return False - - def contains_url(text): - return bool(search(r'http[s]?://(?:[a-zA-Z]|[0-9]|[$-_@.&+]|[!*\\(\\),]|(?:%[0-9a-fA-F][0-9a-fA-F]))+', text)) - - if contains_keyword(message) and contains_url(message): - return True - else: - return False \ No newline at end of file diff --git a/spaces/firestalker/anime-tts/mel_processing.py b/spaces/firestalker/anime-tts/mel_processing.py deleted file mode 100644 index 3e252e76320522a8a4195a60665168f22769aec2..0000000000000000000000000000000000000000 --- a/spaces/firestalker/anime-tts/mel_processing.py +++ /dev/null @@ -1,101 +0,0 @@ -import torch -import torch.utils.data -from librosa.filters import mel as librosa_mel_fn - -MAX_WAV_VALUE = 32768.0 - - -def dynamic_range_compression_torch(x, C=1, clip_val=1e-5): - """ - PARAMS - ------ - C: compression factor - """ - return torch.log(torch.clamp(x, min=clip_val) * C) - - -def dynamic_range_decompression_torch(x, C=1): - """ - PARAMS - ------ - C: compression factor used to compress - """ - return torch.exp(x) / C - - -def spectral_normalize_torch(magnitudes): - output = dynamic_range_compression_torch(magnitudes) - return output - - -def spectral_de_normalize_torch(magnitudes): - output = dynamic_range_decompression_torch(magnitudes) - return output - - -mel_basis = {} -hann_window = {} - - -def spectrogram_torch(y, n_fft, sampling_rate, hop_size, win_size, center=False): - if torch.min(y) < -1.: - print('min value is ', torch.min(y)) - if torch.max(y) > 1.: - print('max value is ', torch.max(y)) - - global hann_window - dtype_device = str(y.dtype) + '_' + str(y.device) - wnsize_dtype_device = str(win_size) + '_' + dtype_device - if wnsize_dtype_device not in hann_window: - hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device) - - y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect') - y = y.squeeze(1) - - spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device], - center=center, pad_mode='reflect', normalized=False, onesided=True, return_complex=False) - - spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6) - return spec - - -def spec_to_mel_torch(spec, n_fft, num_mels, sampling_rate, fmin, fmax): - global mel_basis - dtype_device = str(spec.dtype) + '_' + str(spec.device) - fmax_dtype_device = str(fmax) + '_' + dtype_device - if fmax_dtype_device not in mel_basis: - mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax) - mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=spec.dtype, device=spec.device) - spec = torch.matmul(mel_basis[fmax_dtype_device], spec) - spec = spectral_normalize_torch(spec) - return spec - - -def 
mel_spectrogram_torch(y, n_fft, num_mels, sampling_rate, hop_size, win_size, fmin, fmax, center=False): - if torch.min(y) < -1.: - print('min value is ', torch.min(y)) - if torch.max(y) > 1.: - print('max value is ', torch.max(y)) - - global mel_basis, hann_window - dtype_device = str(y.dtype) + '_' + str(y.device) - fmax_dtype_device = str(fmax) + '_' + dtype_device - wnsize_dtype_device = str(win_size) + '_' + dtype_device - if fmax_dtype_device not in mel_basis: - mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax) - mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=y.dtype, device=y.device) - if wnsize_dtype_device not in hann_window: - hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device) - - y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect') - y = y.squeeze(1) - - spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device], - center=center, pad_mode='reflect', normalized=False, onesided=True) - - spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6) - - spec = torch.matmul(mel_basis[fmax_dtype_device], spec) - spec = spectral_normalize_torch(spec) - - return spec diff --git a/spaces/fiz123321/dumbcutie/README.md b/spaces/fiz123321/dumbcutie/README.md deleted file mode 100644 index 9350b131b144c0fda51fab00eafeb2f8116373ea..0000000000000000000000000000000000000000 --- a/spaces/fiz123321/dumbcutie/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: Dumbcutie -emoji: 🏆 -colorFrom: yellow -colorTo: green -sdk: docker -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/flax-community/SentenceSimplifier/About/credits.md b/spaces/flax-community/SentenceSimplifier/About/credits.md deleted file mode 100644 index 8511f900917ba2aaa68a79d628efcc4bfca58b26..0000000000000000000000000000000000000000 --- a/spaces/flax-community/SentenceSimplifier/About/credits.md +++ /dev/null @@ -1,2 +0,0 @@ -## Credits -Huge thanks to Huggingface 🤗 & Google Jax/Flax team for such a wonderful community week. Especially for providing such massive computing resource. Big thanks to [Suraj Patil](https://huggingface.co/valhalla) & [Patrick von Platen](https://huggingface.co/patrickvonplaten) for solving our issues and mentoring during the whole community week. 
diff --git a/spaces/florim/MedGPT/tests.py b/spaces/florim/MedGPT/tests.py deleted file mode 100644 index 62f76da8ac4925ef6cdfcce0484612cf70959862..0000000000000000000000000000000000000000 --- a/spaces/florim/MedGPT/tests.py +++ /dev/null @@ -1,21 +0,0 @@ -import unittest - -import coverage - -if __name__ == "__main__": - # Start coverage collection - cov = coverage.Coverage() - cov.start() - - # Load all tests from the 'autogpt/tests' package - suite = unittest.defaultTestLoader.discover("./tests") - - # Run the tests - unittest.TextTestRunner().run(suite) - - # Stop coverage collection - cov.stop() - cov.save() - - # Report the coverage - cov.report(show_missing=True) diff --git a/spaces/flowers-team/SocialAISchool/gym-minigrid/gym_minigrid/envs/__init__.py b/spaces/flowers-team/SocialAISchool/gym-minigrid/gym_minigrid/envs/__init__.py deleted file mode 100644 index 56075e09633a6fdf23ab95e5625e045e30220c69..0000000000000000000000000000000000000000 --- a/spaces/flowers-team/SocialAISchool/gym-minigrid/gym_minigrid/envs/__init__.py +++ /dev/null @@ -1,22 +0,0 @@ -from gym_minigrid.envs.empty import * -from gym_minigrid.envs.doorkey import * -from gym_minigrid.envs.multiroom import * -from gym_minigrid.envs.multiroom_noisytv import * -from gym_minigrid.envs.fetch import * -from gym_minigrid.envs.gotoobject import * -from gym_minigrid.envs.gotodoor import * -from gym_minigrid.envs.putnear import * -from gym_minigrid.envs.lockedroom import * -from gym_minigrid.envs.keycorridor import * -from gym_minigrid.envs.unlock import * -from gym_minigrid.envs.unlockpickup import * -from gym_minigrid.envs.blockedunlockpickup import * -from gym_minigrid.envs.playground_v0 import * -from gym_minigrid.envs.redbluedoors import * -from gym_minigrid.envs.obstructedmaze import * -from gym_minigrid.envs.memory import * -from gym_minigrid.envs.fourrooms import * -from gym_minigrid.envs.crossing import * -from gym_minigrid.envs.lavagap import * -from gym_minigrid.envs.dynamicobstacles import * -from gym_minigrid.envs.distshift import * \ No newline at end of file diff --git a/spaces/freddyaboulton/test-blue/app.py b/spaces/freddyaboulton/test-blue/app.py deleted file mode 100644 index e2116fccac86c7a7dc4a4906e02ab39fe0ddc4be..0000000000000000000000000000000000000000 --- a/spaces/freddyaboulton/test-blue/app.py +++ /dev/null @@ -1,147 +0,0 @@ -import time - -from theme_dropdown import create_theme_dropdown # noqa: F401 - -import gradio as gr - -dropdown, js = create_theme_dropdown() - -with gr.Blocks(theme='freddyaboulton/test-blue') as demo: - with gr.Row().style(equal_height=True): - with gr.Column(scale=10): - gr.Markdown( - """ - # Theme preview: `test-blue` - To use this theme, set `theme='freddyaboulton/test-blue'` in `gr.Blocks()` or `gr.Interface()`. - You can append an `@` and a semantic version expression, e.g. @>=1.0.0,<2.0.0 to pin to a given version - of this theme. - """ - ) - with gr.Column(scale=3): - with gr.Box(): - dropdown.render() - toggle_dark = gr.Button(value="Toggle Dark").style(full_width=True) - - dropdown.change(None, dropdown, None, _js=js) - toggle_dark.click( - None, - _js=""" - () => { - document.body.classList.toggle('dark'); - document.querySelector('gradio-app').style.backgroundColor = 'var(--color-background-primary)' - } - """, - ) - - name = gr.Textbox( - label="Name", - info="Full name, including middle name. 
No special characters.", - placeholder="John Doe", - value="John Doe", - interactive=True, - ) - - with gr.Row(): - slider1 = gr.Slider(label="Slider 1") - slider2 = gr.Slider(label="Slider 2") - gr.CheckboxGroup(["A", "B", "C"], label="Checkbox Group") - - with gr.Row(): - with gr.Column(variant="panel", scale=1): - gr.Markdown("## Panel 1") - radio = gr.Radio( - ["A", "B", "C"], - label="Radio", - info="Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.", - ) - drop = gr.Dropdown(["Option 1", "Option 2", "Option 3"], show_label=False) - drop_2 = gr.Dropdown( - ["Option A", "Option B", "Option C"], - multiselect=True, - value=["Option A"], - label="Dropdown", - interactive=True, - ) - check = gr.Checkbox(label="Go") - with gr.Column(variant="panel", scale=2): - img = gr.Image( - "https://gradio.app/assets/img/header-image.jpg", label="Image" - ).style(height=320) - with gr.Row(): - go_btn = gr.Button("Go", label="Primary Button", variant="primary") - clear_btn = gr.Button( - "Clear", label="Secondary Button", variant="secondary" - ) - - def go(*args): - time.sleep(3) - return "https://gradio.app/assets/img/header-image.jpg" - - go_btn.click(go, [radio, drop, drop_2, check, name], img, api_name="go") - - def clear(): - time.sleep(0.2) - return None - - clear_btn.click(clear, None, img) - - with gr.Row(): - btn1 = gr.Button("Button 1").style(size="sm") - btn2 = gr.UploadButton().style(size="sm") - stop_btn = gr.Button("Stop", label="Stop Button", variant="stop").style( - size="sm" - ) - - with gr.Row(): - gr.Dataframe(value=[[1, 2, 3], [4, 5, 6], [7, 8, 9]], label="Dataframe") - gr.JSON( - value={"a": 1, "b": 2, "c": {"test": "a", "test2": [1, 2, 3]}}, label="JSON" - ) - gr.Label(value={"cat": 0.7, "dog": 0.2, "fish": 0.1}) - gr.File() - with gr.Row(): - gr.ColorPicker() - gr.Video("https://gradio-static-files.s3.us-west-2.amazonaws.com/world.mp4") - gr.Gallery( - [ - ( - "https://gradio-static-files.s3.us-west-2.amazonaws.com/lion.jpg", - "lion", - ), - ( - "https://gradio-static-files.s3.us-west-2.amazonaws.com/logo.png", - "logo", - ), - ( - "https://gradio-static-files.s3.us-west-2.amazonaws.com/tower.jpg", - "tower", - ), - ] - ).style(height="200px", grid=2) - - with gr.Row(): - with gr.Column(scale=2): - chatbot = gr.Chatbot([("Hello", "Hi")], label="Chatbot") - chat_btn = gr.Button("Add messages") - - def chat(history): - time.sleep(2) - yield [["How are you?", "I am good."]] - - chat_btn.click( - lambda history: history - + [["How are you?", "I am good."]] - + (time.sleep(2) or []), - chatbot, - chatbot, - ) - with gr.Column(scale=1): - with gr.Accordion("Advanced Settings"): - gr.Markdown("Hello") - gr.Number(label="Chatbot control 1") - gr.Number(label="Chatbot control 2") - gr.Number(label="Chatbot control 3") - - -if __name__ == "__main__": - demo.queue().launch() diff --git a/spaces/fun-research/FC-CLIP/demo/__init__.py b/spaces/fun-research/FC-CLIP/demo/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/georgefen/Face-Landmark-ControlNet/annotator/midas/utils.py b/spaces/georgefen/Face-Landmark-ControlNet/annotator/midas/utils.py deleted file mode 100644 index 9a9d3b5b66370fa98da9e067ba53ead848ea9a59..0000000000000000000000000000000000000000 --- 
a/spaces/georgefen/Face-Landmark-ControlNet/annotator/midas/utils.py +++ /dev/null @@ -1,189 +0,0 @@ -"""Utils for monoDepth.""" -import sys -import re -import numpy as np -import cv2 -import torch - - -def read_pfm(path): - """Read pfm file. - - Args: - path (str): path to file - - Returns: - tuple: (data, scale) - """ - with open(path, "rb") as file: - - color = None - width = None - height = None - scale = None - endian = None - - header = file.readline().rstrip() - if header.decode("ascii") == "PF": - color = True - elif header.decode("ascii") == "Pf": - color = False - else: - raise Exception("Not a PFM file: " + path) - - dim_match = re.match(r"^(\d+)\s(\d+)\s$", file.readline().decode("ascii")) - if dim_match: - width, height = list(map(int, dim_match.groups())) - else: - raise Exception("Malformed PFM header.") - - scale = float(file.readline().decode("ascii").rstrip()) - if scale < 0: - # little-endian - endian = "<" - scale = -scale - else: - # big-endian - endian = ">" - - data = np.fromfile(file, endian + "f") - shape = (height, width, 3) if color else (height, width) - - data = np.reshape(data, shape) - data = np.flipud(data) - - return data, scale - - -def write_pfm(path, image, scale=1): - """Write pfm file. - - Args: - path (str): pathto file - image (array): data - scale (int, optional): Scale. Defaults to 1. - """ - - with open(path, "wb") as file: - color = None - - if image.dtype.name != "float32": - raise Exception("Image dtype must be float32.") - - image = np.flipud(image) - - if len(image.shape) == 3 and image.shape[2] == 3: # color image - color = True - elif ( - len(image.shape) == 2 or len(image.shape) == 3 and image.shape[2] == 1 - ): # greyscale - color = False - else: - raise Exception("Image must have H x W x 3, H x W x 1 or H x W dimensions.") - - file.write("PF\n" if color else "Pf\n".encode()) - file.write("%d %d\n".encode() % (image.shape[1], image.shape[0])) - - endian = image.dtype.byteorder - - if endian == "<" or endian == "=" and sys.byteorder == "little": - scale = -scale - - file.write("%f\n".encode() % scale) - - image.tofile(file) - - -def read_image(path): - """Read image and output RGB image (0-1). - - Args: - path (str): path to file - - Returns: - array: RGB image (0-1) - """ - img = cv2.imread(path) - - if img.ndim == 2: - img = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR) - - img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) / 255.0 - - return img - - -def resize_image(img): - """Resize image and make it fit for network. - - Args: - img (array): image - - Returns: - tensor: data ready for network - """ - height_orig = img.shape[0] - width_orig = img.shape[1] - - if width_orig > height_orig: - scale = width_orig / 384 - else: - scale = height_orig / 384 - - height = (np.ceil(height_orig / scale / 32) * 32).astype(int) - width = (np.ceil(width_orig / scale / 32) * 32).astype(int) - - img_resized = cv2.resize(img, (width, height), interpolation=cv2.INTER_AREA) - - img_resized = ( - torch.from_numpy(np.transpose(img_resized, (2, 0, 1))).contiguous().float() - ) - img_resized = img_resized.unsqueeze(0) - - return img_resized - - -def resize_depth(depth, width, height): - """Resize depth map and bring to CPU (numpy). 
- - Args: - depth (tensor): depth - width (int): image width - height (int): image height - - Returns: - array: processed depth - """ - depth = torch.squeeze(depth[0, :, :, :]).to("cpu") - - depth_resized = cv2.resize( - depth.numpy(), (width, height), interpolation=cv2.INTER_CUBIC - ) - - return depth_resized - -def write_depth(path, depth, bits=1): - """Write depth map to pfm and png file. - - Args: - path (str): filepath without extension - depth (array): depth - """ - write_pfm(path + ".pfm", depth.astype(np.float32)) - - depth_min = depth.min() - depth_max = depth.max() - - max_val = (2**(8*bits))-1 - - if depth_max - depth_min > np.finfo("float").eps: - out = max_val * (depth - depth_min) / (depth_max - depth_min) - else: - out = np.zeros(depth.shape, dtype=depth.type) - - if bits == 1: - cv2.imwrite(path + ".png", out.astype("uint8")) - elif bits == 2: - cv2.imwrite(path + ".png", out.astype("uint16")) - - return diff --git a/spaces/ghlee94/MEDIAR/segmentation_models_pytorch/utils/metrics.py b/spaces/ghlee94/MEDIAR/segmentation_models_pytorch/utils/metrics.py deleted file mode 100644 index f7a7dadc00b713631a7e0937a0cd1f5f01c19fc7..0000000000000000000000000000000000000000 --- a/spaces/ghlee94/MEDIAR/segmentation_models_pytorch/utils/metrics.py +++ /dev/null @@ -1,111 +0,0 @@ -from . import base -from . import functional as F -from ..base.modules import Activation - - -class IoU(base.Metric): - __name__ = "iou_score" - - def __init__( - self, eps=1e-7, threshold=0.5, activation=None, ignore_channels=None, **kwargs - ): - super().__init__(**kwargs) - self.eps = eps - self.threshold = threshold - self.activation = Activation(activation) - self.ignore_channels = ignore_channels - - def forward(self, y_pr, y_gt): - y_pr = self.activation(y_pr) - return F.iou( - y_pr, - y_gt, - eps=self.eps, - threshold=self.threshold, - ignore_channels=self.ignore_channels, - ) - - -class Fscore(base.Metric): - def __init__( - self, - beta=1, - eps=1e-7, - threshold=0.5, - activation=None, - ignore_channels=None, - **kwargs - ): - super().__init__(**kwargs) - self.eps = eps - self.beta = beta - self.threshold = threshold - self.activation = Activation(activation) - self.ignore_channels = ignore_channels - - def forward(self, y_pr, y_gt): - y_pr = self.activation(y_pr) - return F.f_score( - y_pr, - y_gt, - eps=self.eps, - beta=self.beta, - threshold=self.threshold, - ignore_channels=self.ignore_channels, - ) - - -class Accuracy(base.Metric): - def __init__(self, threshold=0.5, activation=None, ignore_channels=None, **kwargs): - super().__init__(**kwargs) - self.threshold = threshold - self.activation = Activation(activation) - self.ignore_channels = ignore_channels - - def forward(self, y_pr, y_gt): - y_pr = self.activation(y_pr) - return F.accuracy( - y_pr, y_gt, threshold=self.threshold, ignore_channels=self.ignore_channels, - ) - - -class Recall(base.Metric): - def __init__( - self, eps=1e-7, threshold=0.5, activation=None, ignore_channels=None, **kwargs - ): - super().__init__(**kwargs) - self.eps = eps - self.threshold = threshold - self.activation = Activation(activation) - self.ignore_channels = ignore_channels - - def forward(self, y_pr, y_gt): - y_pr = self.activation(y_pr) - return F.recall( - y_pr, - y_gt, - eps=self.eps, - threshold=self.threshold, - ignore_channels=self.ignore_channels, - ) - - -class Precision(base.Metric): - def __init__( - self, eps=1e-7, threshold=0.5, activation=None, ignore_channels=None, **kwargs - ): - super().__init__(**kwargs) - self.eps = eps - self.threshold = 
threshold - self.activation = Activation(activation) - self.ignore_channels = ignore_channels - - def forward(self, y_pr, y_gt): - y_pr = self.activation(y_pr) - return F.precision( - y_pr, - y_gt, - eps=self.eps, - threshold=self.threshold, - ignore_channels=self.ignore_channels, - ) diff --git a/spaces/gotiQspiryo/whisper-ui/examples/Armin Van Buuren Feat Laura V Drowning Acapella Learn the Secrets Behind the Songs Lyrics and Meaning.md b/spaces/gotiQspiryo/whisper-ui/examples/Armin Van Buuren Feat Laura V Drowning Acapella Learn the Secrets Behind the Songs Lyrics and Meaning.md deleted file mode 100644 index 925eb909dca51bf8995989c7ab7d1340e0e6d5ff..0000000000000000000000000000000000000000 --- a/spaces/gotiQspiryo/whisper-ui/examples/Armin Van Buuren Feat Laura V Drowning Acapella Learn the Secrets Behind the Songs Lyrics and Meaning.md +++ /dev/null @@ -1,6 +0,0 @@ -

        Armin Van Buuren Feat Laura V Drowning Acapella


        Download --->>> https://urlgoal.com/2uyLBS



        -
        - aaccfb2cb3
        -
        -
        -

        diff --git a/spaces/gotiQspiryo/whisper-ui/examples/Burnout Paradise 1.0.0.0 Crack Download.md b/spaces/gotiQspiryo/whisper-ui/examples/Burnout Paradise 1.0.0.0 Crack Download.md deleted file mode 100644 index 25099dbcac2a28afbcb45aaa91b9a8f94f762509..0000000000000000000000000000000000000000 --- a/spaces/gotiQspiryo/whisper-ui/examples/Burnout Paradise 1.0.0.0 Crack Download.md +++ /dev/null @@ -1,5 +0,0 @@ - -

        ya, 2017 ihn music. listen to ihhashi elimhlophe inkiya-nkiya mp3 song.. you have to watch this stunning live performance from this music genius (maskandi music).ihn (in. nkiya nkiya ihhashelimhlophe dat ->>> 2c3f341067. bheki & linah ngcobo known as ihashi elimhlophe on. nkiya nkiya ihhash elimhlophe dpt mp3 song asmp4 video how to hack?. download free acid jazz dj remix mp3 wav,aac rar 3gp mshubo feat ihashi elimhlophe - elimhlophe (sings - elimhlophe audio online.mp3 itunes. mshubo feat ihashi elimhlophe youtube. mshubo feat ihashi elimhlophe (black motion). blackmotion. upload mp3 song 2016. download, download the mp3 song links to your computer in seconds. search, download, and convert mp3 at any time. all online links are checked before they are posted on our website.mp3 download. create an account for free mp3 download only.mp3 song free download. you can create a free account by filling in the username, mail, password, phone number, and country. you can also browse the online songs and download the mp3 links for all the songs that you like. check our new player to enjoy the new features.mashup. all the videos are in mp4,avi,mov,wmv and mp3. download now. listen mp3 music in any places. this site is not store video or music files because we respect the intellectual property laws. use videosfreak to find the links of other songs. you can use this service in any device like pc, iphone, android phones, laptop, or tab.upload your own songs, mp3, and other media. submit songs that are not in our catalog. get good ratings and subscribers from your friends on youtube. you can also follow.searches songs. look songs for you faster.fast downloading. depeche mode - personal jesus (2011 remix) (hd1080p) [web-dl 720p r5e] - another fine exploit by nnaemeka - boko fufanmi. ihashi elimhlophe - psilange ft.inkiya nkiya - nnaemeka | rip&mix 2016. zip. nnaemeka - boss jboss. want a girl who has a vikas book download pdf free vikas book download pdf. band name: jboss. 28 apr 2017 - 00:48. tonic for thyme. duration: 17 seconds. viewed: 1,854,100 times. jboss is a parody of jim morrison, of the doors, and also of the band the doors. jboss (pronounced "john") is a fictional character in the animated television series ducktales.

        -

        burnout paradise 1.0.0.0 crack download


        Download Zip 🗹 https://urlgoal.com/2uyNAq



        899543212b
        -
        -
        \ No newline at end of file diff --git a/spaces/gradio/HuBERT/examples/pay_less_attention_paper/README.md b/spaces/gradio/HuBERT/examples/pay_less_attention_paper/README.md deleted file mode 100644 index 5adab11f4dc3461f9e7126ac391b04e703616e6b..0000000000000000000000000000000000000000 --- a/spaces/gradio/HuBERT/examples/pay_less_attention_paper/README.md +++ /dev/null @@ -1,176 +0,0 @@ -# Pay Less Attention with Lightweight and Dynamic Convolutions (Wu et al., 2019) - -This page contains pointers to pre-trained models as well as instructions on how to train new models for [our paper](https://arxiv.org/abs/1901.10430). - -## Citation: -```bibtex -@inproceedings{wu2018pay, - title = {Pay Less Attention with Lightweight and Dynamic Convolutions}, - author = {Felix Wu and Angela Fan and Alexei Baevski and Yann Dauphin and Michael Auli}, - booktitle = {International Conference on Learning Representations}, - year = {2019}, - url = {https://arxiv.org/abs/1901.10430}, -} -``` - -## Translation - -### Pre-trained models -For some datasets we release models without GLUs which are faster at inference. - -Model | Description | Dataset | Download ----|---|---|--- -`lightconv.no_glu.iwslt14.de-en` | LightConv (without GLUs) | [IWSLT14 German-English](https://wit3.fbk.eu/archive/2014-01/texts/de/en/de-en.tgz) | model:
[download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/iwslt14.de-en.lightconv.tar.gz) <br> IWSLT14 test: <br> [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/data/iwslt14.de-en.test.tar.bz2)
-`dynamicconv.no_glu.iwslt14.de-en` | DynamicConv (without GLUs) | [IWSLT14 German-English](https://wit3.fbk.eu/archive/2014-01/texts/de/en/de-en.tgz) | model: <br> [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/iwslt14.de-en.dynamicconv.tar.gz) <br> IWSLT14 test: <br> [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/data/iwslt14.de-en.test.tar.bz2)
-`lightconv.no_glu.wmt16.en-de` | LightConv (without GLUs) | [WMT16 English-German](https://drive.google.com/uc?export=download&id=0B_bZck-ksdkpM25jRUN2X2UxMm8) | model: <br> [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt16.en-de.joined-dict.lightconv.tar.gz) <br> newstest2014 (shared vocab): <br> [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/data/wmt16.en-de.joined-dict.newstest2014.tar.bz2)
-`dynamicconv.no_glu.wmt16.en-de` | DynamicConv (without GLUs) | [WMT16 English-German](https://drive.google.com/uc?export=download&id=0B_bZck-ksdkpM25jRUN2X2UxMm8) | model: <br> [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt16.en-de.joined-dict.dynamicconv.tar.gz) <br> newstest2014 (shared vocab): <br> [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/data/wmt16.en-de.joined-dict.newstest2014.tar.bz2)
-`lightconv.glu.wmt16.en-de` | LightConv | [WMT16 English-German](https://drive.google.com/uc?export=download&id=0B_bZck-ksdkpM25jRUN2X2UxMm8) | model: <br> [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt16.en-de.joined-dict.lightconv-glu.tar.gz) <br> newstest2014 (shared vocab): <br> [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/data/wmt16.en-de.joined-dict.newstest2014.tar.bz2)
-`dynamicconv.glu.wmt16.en-de` | DynamicConv | [WMT16 English-German](https://drive.google.com/uc?export=download&id=0B_bZck-ksdkpM25jRUN2X2UxMm8) | model: <br> [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt16.en-de.joined-dict.dynamicconv-glu.tar.gz) <br> newstest2014 (shared vocab): <br> [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/data/wmt16.en-de.joined-dict.newstest2014.tar.bz2)
-`lightconv.glu.wmt14.en-fr` | LightConv | [WMT14 English-French](http://statmt.org/wmt14/translation-task.html#Download) | model: <br> [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt14.en-fr.joined-dict.lightconv-glu.tar.gz) <br> newstest2014: <br> [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/data/wmt14.en-fr.joined-dict.newstest2014.tar.bz2)
-`dynamicconv.glu.wmt14.en-fr` | DynamicConv | [WMT14 English-French](http://statmt.org/wmt14/translation-task.html#Download) | model: <br> [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt14.en-fr.joined-dict.dynamicconv-glu.tar.gz) <br> newstest2014: <br> [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/data/wmt14.en-fr.joined-dict.newstest2014.tar.bz2)
-`lightconv.glu.wmt17.zh-en` | LightConv | [WMT17 Chinese-English](http://statmt.org/wmt17/translation-task.html#Download) | model: <br> [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt17.zh-en.lightconv-glu.tar.gz) <br> newstest2017: <br> [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/data/wmt17.zh-en.newstest2017.tar.bz2)
-`dynamicconv.glu.wmt17.zh-en` | DynamicConv | [WMT17 Chinese-English](http://statmt.org/wmt17/translation-task.html#Download) | model: <br> [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt17.zh-en.dynamicconv-glu.tar.gz) <br> newstest2017:
        [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/data/wmt17.zh-en.newstest2017.tar.bz2) - -### Memory-Efficient CUDA Kernels - -Since the PyTorch implementations of Light/Dynamic conv are quite memory intensive, we have developed CUDA kernels that implement the light and dynamic convolution operator in a memory-efficient and performant manner. For large sequence lengths, these kernels save about 50% memory compared to the PyTorch equivalent. - -To install the kernels, use the commands below. Once installed, they will automatically be used in place of the PyTorch implementations whenever a light or dynamic convolution is used. - -```sh -# to install lightconv -cd fairseq/modules/lightconv_layer -python cuda_function_gen.py -python setup.py install - -# to install dynamicconv -cd fairseq/modules/dynamicconv_layer -python cuda_function_gen.py -python setup.py install -``` - -### Example usage (torch.hub) - -We require a few additional Python dependencies for preprocessing: -```bash -pip install sacremoses subword_nmt -``` - -Interactive translation via PyTorch Hub: -```python -import torch - -# List available models -torch.hub.list('pytorch/fairseq') # [..., 'lightconv.glu.wmt17.zh-en', ... ] - -# Load a transformer trained on WMT'16 En-De -zh2en = torch.hub.load('pytorch/fairseq', 'lightconv.glu.wmt17.zh-en', tokenizer='moses', bpe='subword_nmt') - -# The underlying model is available under the *models* attribute -assert isinstance(zh2en.models[0], fairseq.models.lightconv.LightConvModel) - -# Translate a sentence -zh2en.translate('你好 世界') -# 'Hello World' -``` - -Loading custom models: -```python -from fairseq.models.lightconv import LightConvModel -en2fr = LightConvModel.from_pretrained( - '/path/to/checkpoints', - checkpoint_file='checkpoint_best.pt', - data_name_or_path='data-bin/wmt14_en_fr', - bpe='subword_nmt', - bpe_codes='data-bin/wmt14_en_fr/en.code' -) -en2fr.translate('Hello world!') -# 'Bonjour le monde' -``` - -### Preprocessing the training datasets - -Please follow the instructions in [`examples/translation/README.md`](../translation/README.md) to preprocess the data. - -### Training and evaluation options: -To use the model without GLU, please set `--encoder-glu 0 --decoder-glu 0`. -For LightConv, please use `--encoder-conv-type lightweight --decoder-conv-type lightweight`, otherwise the default is DynamicConv. -For best BLEU results, lenpen may need to be manually tuned. - -To use the CUDA kernels, first install the PyTorch modules using the commands -above. Once the CUDA modules are installed, they will automatically be used -instead of the PyTorch modules. 
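-
-For example, the flags above combine as follows for a LightConv model without GLUs (an illustrative sketch only; the full recipes with tuned hyperparameters follow below):
-```sh
-# LightConv encoder/decoder, GLUs disabled (illustrative hyperparameters)
-fairseq-train data-bin/iwslt14.tokenized.de-en \
-    -a lightconv_iwslt_de_en \
-    --encoder-conv-type lightweight --decoder-conv-type lightweight \
-    --encoder-glu 0 --decoder-glu 0 \
-    --optimizer adam --lr 0.0005 --max-tokens 4000
-```
-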
- -### IWSLT14 De-En -Training and evaluating DynamicConv (without GLU) on a GPU: -```sh -# Training -SAVE="save/dynamic_conv_iwslt" -mkdir -p $SAVE -CUDA_VISIBLE_DEVICES=0 $(which fairseq-train) data-bin/iwslt14.tokenized.de-en \ - --clip-norm 0 --optimizer adam --lr 0.0005 \ - --source-lang de --target-lang en --max-tokens 4000 --no-progress-bar \ - --log-interval 100 --stop-min-lr '1e-09' --weight-decay 0.0001 \ - --criterion label_smoothed_cross_entropy --label-smoothing 0.1 \ - --lr-scheduler inverse_sqrt \ - --ddp-backend=legacy_ddp \ - --max-update 50000 --warmup-updates 4000 --warmup-init-lr '1e-07' \ - --adam-betas '(0.9, 0.98)' --keep-last-epochs 10 \ - -a lightconv_iwslt_de_en --save-dir $SAVE \ - --dropout 0.3 --attention-dropout 0.1 --weight-dropout 0.1 \ - --encoder-glu 0 --decoder-glu 0 -python scripts/average_checkpoints.py --inputs $SAVE \ - --num-epoch-checkpoints 10 --output "${SAVE}/checkpoint_last10_avg.pt" - -# Evaluation -CUDA_VISIBLE_DEVICES=0 fairseq-generate data-bin/iwslt14.tokenized.de-en --path "${SAVE}/checkpoint_last10_avg.pt" --batch-size 128 --beam 4 --remove-bpe --lenpen 1 --gen-subset test --quiet -``` - -### WMT16 En-De -Training and evaluating DynamicConv (with GLU) on WMT16 En-De using cosine scheduler on one machine with 8 V100 GPUs: -```sh -# Training -SAVE="save/dynamic_conv_wmt16en2de" -mkdir -p $SAVE -python -m torch.distributed.launch --nproc_per_node 8 $(which fairseq-train) \ - data-bin/wmt16_en_de_bpe32k --fp16 --log-interval 100 --no-progress-bar \ - --max-update 30000 --share-all-embeddings --optimizer adam \ - --adam-betas '(0.9, 0.98)' --clip-norm 0.0 --weight-decay 0.0 \ - --criterion label_smoothed_cross_entropy --label-smoothing 0.1 \ - --stop-min-lr 1e-09 --update-freq 16 --attention-dropout 0.1 --keep-last-epochs 10 \ - --ddp-backend=legacy_ddp --max-tokens 3584 \ - --lr-scheduler cosine --warmup-init-lr 1e-7 --warmup-updates 10000 \ - --lr-shrink 1 --lr 0.001 --min-lr 1e-7 --warmup-init-lr 1e-07 \ - --t-mult 1 --lr-period-updates 20000 \ - --arch lightconv_wmt_en_de_big --save-dir $SAVE \ - --dropout 0.3 --attention-dropout 0.1 --weight-dropout 0.1 \ - --encoder-glu 1 --decoder-glu 1 - -# Evaluation -CUDA_VISIBLE_DEVICES=0 fairseq-generate data-bin/wmt16.en-de.joined-dict.newstest2014 --path "${SAVE}/checkpoint_best.pt" --batch-size 128 --beam 5 --remove-bpe --lenpen 0.5 --gen-subset test > wmt16_gen.txt -bash scripts/compound_split_bleu.sh wmt16_gen.txt -``` - -### WMT14 En-Fr -Training DynamicConv (with GLU) on WMT14 En-Fr using cosine scheduler on one machine with 8 V100 GPUs: -```sh -# Training -SAVE="save/dynamic_conv_wmt14en2fr" -mkdir -p $SAVE -python -m torch.distributed.launch --nproc_per_node 8 $(which fairseq-train) \ - data-bin/wmt14_en_fr --fp16 --log-interval 100 --no-progress-bar \ - --max-update 30000 --share-all-embeddings --optimizer adam \ - --adam-betas '(0.9, 0.98)' --clip-norm 0.0 --weight-decay 0.0 \ - --criterion label_smoothed_cross_entropy --label-smoothing 0.1 \ - --stop-min-lr 1e-09 --update-freq 16 --attention-dropout 0.1 --keep-last-epochs 10 \ - --ddp-backend=legacy_ddp --max-tokens 3584 \ - --lr-scheduler cosine --warmup-init-lr 1e-7 --warmup-updates 10000 \ - --lr-shrink 1 --lr 0.001 --min-lr 1e-7 --warmup-init-lr 1e-07 \ - --t-mult 1 --lr-period-updates 70000 \ - --arch lightconv_wmt_en_fr_big --save-dir $SAVE \ - --dropout 0.1 --attention-dropout 0.1 --weight-dropout 0.1 \ - --encoder-glu 1 --decoder-glu 1 - -# Evaluation -CUDA_VISIBLE_DEVICES=0 fairseq-generate 
data-bin/wmt14.en-fr.joined-dict.newstest2014 --path "${SAVE}/checkpoint_best.pt" --batch-size 128 --beam 5 --remove-bpe --lenpen 0.9 --gen-subset test -``` diff --git a/spaces/gradio/HuBERT/examples/speech_to_text/docs/librispeech_example.md b/spaces/gradio/HuBERT/examples/speech_to_text/docs/librispeech_example.md deleted file mode 100644 index 4040fda9426027537036ba987d087a43e734bfd9..0000000000000000000000000000000000000000 --- a/spaces/gradio/HuBERT/examples/speech_to_text/docs/librispeech_example.md +++ /dev/null @@ -1,69 +0,0 @@ -[[Back]](..) - -# S2T Example: Speech Recognition (ASR) on LibriSpeech -[LibriSpeech](https://www.danielpovey.com/files/2015_icassp_librispeech.pdf) is a de-facto standard English ASR -benchmark. We provide competitive -vanilla [Transformer](https://papers.nips.cc/paper/2017/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf) baselines. - -## Data preparation -Download and preprocess LibriSpeech data with -```bash -# additional Python packages for S2T data processing/model training -pip install pandas torchaudio sentencepiece - -python examples/speech_to_text/prep_librispeech_data.py \ - --output-root ${LS_ROOT} --vocab-type unigram --vocab-size 10000 -``` -where `LS_ROOT` is the root path for downloaded data as well as generated files (manifest, features, vocabulary and -data configuration). - -[Download](https://dl.fbaipublicfiles.com/fairseq/s2t/librispeech_vocab_unigram10000.zip) our vocabulary files -if you want to use our pre-trained models. - -## Training -```bash -fairseq-train ${LS_ROOT} --save-dir ${SAVE_DIR} \ - --config-yaml config.yaml --train-subset train-clean-100,train-clean-360,train-other-500 --valid-subset dev-clean,dev-other \ - --num-workers 4 --max-tokens 40000 --max-update 300000 \ - --task speech_to_text --criterion label_smoothed_cross_entropy --label-smoothing 0.1 --report-accuracy \ - --arch s2t_transformer_s --share-decoder-input-output-embed \ - --optimizer adam --lr 2e-3 --lr-scheduler inverse_sqrt --warmup-updates 10000 \ - --clip-norm 10.0 --seed 1 --update-freq 8 -``` -where `SAVE_DIR` is the checkpoint root path. Here we use `--arch s2t_transformer_s` (31M parameters) as example. -For better performance, you may switch to `s2t_transformer_m` (71M, with `--lr 1e-3`) or `s2t_transformer_l` -(268M, with `--lr 5e-4`). We set `--update-freq 8` to simulate 8 GPUs with 1 GPU. You may want to update it accordingly -when using more than 1 GPU. - -## Inference & Evaluation -Average the last 10 checkpoints and evaluate on the 4 splits -(`dev-clean`, `dev-other`, `test-clean` and `test-other`): -```bash -CHECKPOINT_FILENAME=avg_last_10_checkpoint.pt -python scripts/average_checkpoints.py --inputs ${SAVE_DIR} \ - --num-epoch-checkpoints 10 \ - --output "${SAVE_DIR}/${CHECKPOINT_FILENAME}" -for SUBSET in dev-clean dev-other test-clean test-other; do - fairseq-generate ${LS_ROOT} --config-yaml config.yaml --gen-subset ${SUBSET} \ - --task speech_to_text --path ${SAVE_DIR}/${CHECKPOINT_FILENAME} \ - --max-tokens 50000 --beam 5 --scoring wer -done -``` - -## Interactive Decoding -Launch the interactive console via -```bash -fairseq-interactive ${LS_ROOT} --config-yaml config.yaml --task speech_to_text \ - --path ${SAVE_DIR}/${CHECKPOINT_FILENAME} --max-tokens 50000 --beam 5 -``` -Type in WAV/FLAC/OGG audio paths (one per line) after the prompt. 
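-
-For scripted use, the same console also accepts piped input, for example (the audio file names below are placeholders):
-```bash
-# decode two local audio files non-interactively
-printf 'utt1.flac\nutt2.wav\n' | fairseq-interactive ${LS_ROOT} --config-yaml config.yaml \
-    --task speech_to_text --path ${SAVE_DIR}/${CHECKPOINT_FILENAME} --max-tokens 50000 --beam 5
-```
-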
- -## Results - -| --arch | Params | dev-clean | dev-other | test-clean | test-other | Model | -|---|---|---|---|---|---|---| -| s2t_transformer_s | 30M | 3.8 | 8.9 | 4.4 | 9.0 | [Download](https://dl.fbaipublicfiles.com/fairseq/s2t/librispeech_transformer_s.pt) | -| s2t_transformer_m | 71M | 3.2 | 8.0 | 3.4 | 7.9 | [Download](https://dl.fbaipublicfiles.com/fairseq/s2t/librispeech_transformer_m.pt) | -| s2t_transformer_l | 268M | 3.0 | 7.5 | 3.2 | 7.5 | [Download](https://dl.fbaipublicfiles.com/fairseq/s2t/librispeech_transformer_l.pt) | - -[[Back]](..) diff --git a/spaces/gsaivinay/Llama-2-13B-GGML-UI/pages/api/chat.ts b/spaces/gsaivinay/Llama-2-13B-GGML-UI/pages/api/chat.ts deleted file mode 100644 index 87ba9b68c7e4cd042ba3572c4388205b791809fc..0000000000000000000000000000000000000000 --- a/spaces/gsaivinay/Llama-2-13B-GGML-UI/pages/api/chat.ts +++ /dev/null @@ -1,73 +0,0 @@ -import { DEFAULT_SYSTEM_PROMPT, DEFAULT_TEMPERATURE } from '@/utils/app/const'; -import { OpenAIError, OpenAIStream } from '@/utils/server'; - -import { ChatBody, Message } from '@/types/chat'; - -// @ts-expect-error -import wasm from '../../node_modules/@dqbd/tiktoken/lite/tiktoken_bg.wasm?module'; - -import tiktokenModel from '@dqbd/tiktoken/encoders/cl100k_base.json'; -import { Tiktoken, init } from '@dqbd/tiktoken/lite/init'; - -export const config = { - runtime: 'edge', -}; - -const handler = async (req: Request): Promise => { - try { - const chatBody = (await req.json()) as ChatBody; - let prompt = chatBody?.prompt; - let temperature = chatBody?.temperature; - let messages = chatBody?.messages; - let model = chatBody?.model; - let key = chatBody?.key; - - await init((imports) => WebAssembly.instantiate(wasm, imports)); - const encoding = new Tiktoken( - tiktokenModel.bpe_ranks, - tiktokenModel.special_tokens, - tiktokenModel.pat_str, - ); - - let promptToSend = prompt; - if (!promptToSend) { - promptToSend = DEFAULT_SYSTEM_PROMPT; - } - - let temperatureToUse = temperature; - if (temperatureToUse == null) { - temperatureToUse = DEFAULT_TEMPERATURE; - } - - const prompt_tokens = encoding.encode(promptToSend); - - let tokenCount = prompt_tokens.length; - let messagesToSend: Message[] = []; - - for (let i = messages.length - 1; i >= 0; i--) { - const message = messages[i]; - const tokens = encoding.encode(message.content); - - if (tokenCount + tokens.length + 1000 > model.tokenLimit) { - break; - } - tokenCount += tokens.length; - messagesToSend = [message, ...messagesToSend]; - } - - encoding.free(); - - const stream = await OpenAIStream(model, promptToSend, temperatureToUse, key, messagesToSend); - - return new Response(stream); - } catch (error) { - console.error(error); - if (error instanceof OpenAIError) { - return new Response('Error', { status: 500, statusText: error.message }); - } else { - return new Response('Error', { status: 500 }); - } - } -}; - -export default handler; diff --git a/spaces/hamacojr/CAT-Seg/cat_seg/data/__init__.py b/spaces/hamacojr/CAT-Seg/cat_seg/data/__init__.py deleted file mode 100644 index 63ba265b1effc69f1eef16e57a04db8902ee347e..0000000000000000000000000000000000000000 --- a/spaces/hamacojr/CAT-Seg/cat_seg/data/__init__.py +++ /dev/null @@ -1,2 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -from . 
import datasets diff --git a/spaces/haonanzhang/ChatGPT-BOT/modules/presets.py b/spaces/haonanzhang/ChatGPT-BOT/modules/presets.py deleted file mode 100644 index 99e5f51d3069b6cd6b6f852dc1cb43dadb3d0c1c..0000000000000000000000000000000000000000 --- a/spaces/haonanzhang/ChatGPT-BOT/modules/presets.py +++ /dev/null @@ -1,195 +0,0 @@ -# -*- coding:utf-8 -*- -import gradio as gr - -# ChatGPT 设置 -initial_prompt = "You are a helpful assistant." -API_URL = "https://api.openai.com/v1/chat/completions" -BALANCE_API_URL="https://api.openai.com/dashboard/billing/credit_grants" -USAGE_API_URL="https://api.openai.com/dashboard/billing/usage" -HISTORY_DIR = "history" -TEMPLATES_DIR = "templates" - -# 错误信息 -standard_error_msg = "☹️发生了错误:" # 错误信息的标准前缀 -error_retrieve_prompt = "请检查网络连接,或者API-Key是否有效。" # 获取对话时发生错误 -connection_timeout_prompt = "连接超时,无法获取对话。" # 连接超时 -read_timeout_prompt = "读取超时,无法获取对话。" # 读取超时 -proxy_error_prompt = "代理错误,无法获取对话。" # 代理错误 -ssl_error_prompt = "SSL错误,无法获取对话。" # SSL 错误 -no_apikey_msg = "API key长度不是51位,请检查是否输入正确。" # API key 长度不足 51 位 -no_input_msg = "请输入对话内容。" # 未输入对话内容 - -timeout_streaming = 10 # 流式对话时的超时时间 -timeout_all = 200 # 非流式对话时的超时时间 -enable_streaming_option = True # 是否启用选择选择是否实时显示回答的勾选框 -HIDE_MY_KEY = False # 如果你想在UI中隐藏你的 API 密钥,将此值设置为 True -CONCURRENT_COUNT = 100 # 允许同时使用的用户数量 - -SIM_K = 5 -INDEX_QUERY_TEMPRATURE = 1.0 - -title = """

        ChatGPT-BOT 🍃

        """ -description = """\ -
- -This app uses the `gpt-3.5-turbo` large language model - -Thanks to the original repo (https://space.bilibili.com/29125536) - -
        -""" - -footer = """\ -
        {versions}
        -""" - -summarize_prompt = "你是谁?我们刚才聊了什么?" # 总结对话时的 prompt - -MODELS = [ - "gpt-3.5-turbo", - "gpt-3.5-turbo-0301", - "gpt-4", - "gpt-4-0314", - "gpt-4-32k", - "gpt-4-32k-0314", -] # 可选的模型 - -MODEL_SOFT_TOKEN_LIMIT = { - "gpt-3.5-turbo": { - "streaming": 3500, - "all": 3500 - }, - "gpt-3.5-turbo-0301": { - "streaming": 3500, - "all": 3500 - }, - "gpt-4": { - "streaming": 7500, - "all": 7500 - }, - "gpt-4-0314": { - "streaming": 7500, - "all": 7500 - }, - "gpt-4-32k": { - "streaming": 31000, - "all": 31000 - }, - "gpt-4-32k-0314": { - "streaming": 31000, - "all": 31000 - } -} - -REPLY_LANGUAGES = [ - "简体中文", - "繁體中文", - "English", - "日本語", - "Español", - "Français", - "Deutsch", - "跟随问题语言(不稳定)" -] - - -WEBSEARCH_PTOMPT_TEMPLATE = """\ -Web search results: - -{web_results} -Current date: {current_date} - -Instructions: Using the provided web search results, write a comprehensive reply to the given query. Make sure to cite results using [[number](URL)] notation after the reference. If the provided search results refer to multiple subjects with the same name, write separate answers for each subject. -Query: {query} -Reply in {reply_language} -""" - -PROMPT_TEMPLATE = """\ -Context information is below. ---------------------- -{context_str} ---------------------- -Current date: {current_date}. -Using the provided context information, write a comprehensive reply to the given query. -Make sure to cite results using [number] notation after the reference. -If the provided context information refer to multiple subjects with the same name, write separate answers for each subject. -Use prior knowledge only if the given context didn't provide enough information. -Answer the question: {query_str} -Reply in {reply_language} -""" - -REFINE_TEMPLATE = """\ -The original question is as follows: {query_str} -We have provided an existing answer: {existing_answer} -We have the opportunity to refine the existing answer -(only if needed) with some more context below. ------------- -{context_msg} ------------- -Given the new context, refine the original answer to better -Reply in {reply_language} -If the context isn't useful, return the original answer. 
-""" - -ALREADY_CONVERTED_MARK = "" - -small_and_beautiful_theme = gr.themes.Soft( - primary_hue=gr.themes.Color( - c50="#02C160", - c100="rgba(2, 193, 96, 0.2)", - c200="#02C160", - c300="rgba(2, 193, 96, 0.32)", - c400="rgba(2, 193, 96, 0.32)", - c500="rgba(2, 193, 96, 1.0)", - c600="rgba(2, 193, 96, 1.0)", - c700="rgba(2, 193, 96, 0.32)", - c800="rgba(2, 193, 96, 0.32)", - c900="#02C160", - c950="#02C160", - ), - secondary_hue=gr.themes.Color( - c50="#576b95", - c100="#576b95", - c200="#576b95", - c300="#576b95", - c400="#576b95", - c500="#576b95", - c600="#576b95", - c700="#576b95", - c800="#576b95", - c900="#576b95", - c950="#576b95", - ), - neutral_hue=gr.themes.Color( - name="gray", - c50="#f9fafb", - c100="#f3f4f6", - c200="#e5e7eb", - c300="#d1d5db", - c400="#B2B2B2", - c500="#808080", - c600="#636363", - c700="#515151", - c800="#393939", - c900="#272727", - c950="#171717", - ), - radius_size=gr.themes.sizes.radius_sm, - ).set( - button_primary_background_fill="#06AE56", - button_primary_background_fill_dark="#06AE56", - button_primary_background_fill_hover="#07C863", - button_primary_border_color="#06AE56", - button_primary_border_color_dark="#06AE56", - button_primary_text_color="#FFFFFF", - button_primary_text_color_dark="#FFFFFF", - button_secondary_background_fill="#F2F2F2", - button_secondary_background_fill_dark="#2B2B2B", - button_secondary_text_color="#393939", - button_secondary_text_color_dark="#FFFFFF", - # background_fill_primary="#F7F7F7", - # background_fill_primary_dark="#1F1F1F", - block_title_text_color="*primary_500", - block_title_background_fill="*primary_100", - input_background_fill="#F6F6F6", - ) diff --git a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/detectron2/modeling/proposal_generator/__init__.py b/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/detectron2/modeling/proposal_generator/__init__.py deleted file mode 100644 index 64fb6d46359c05ed3d7aa1ec91fdd6e15b14c932..0000000000000000000000000000000000000000 --- a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/detectron2/modeling/proposal_generator/__init__.py +++ /dev/null @@ -1,3 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
All Rights Reserved -from .build import PROPOSAL_GENERATOR_REGISTRY, build_proposal_generator -from .rpn import RPN_HEAD_REGISTRY, build_rpn_head, RPN diff --git a/spaces/hbestm/gpt-academic-play/request_llm/bridge_newbing.py b/spaces/hbestm/gpt-academic-play/request_llm/bridge_newbing.py deleted file mode 100644 index 2136f01beb3edd25b94dd8048c20b63a14ef905e..0000000000000000000000000000000000000000 --- a/spaces/hbestm/gpt-academic-play/request_llm/bridge_newbing.py +++ /dev/null @@ -1,254 +0,0 @@ -""" -======================================================================== -第一部分:来自EdgeGPT.py -https://github.com/acheong08/EdgeGPT -======================================================================== -""" -from .edge_gpt import NewbingChatbot -load_message = "等待NewBing响应。" - -""" -======================================================================== -第二部分:子进程Worker(调用主体) -======================================================================== -""" -import time -import json -import re -import logging -import asyncio -import importlib -import threading -from toolbox import update_ui, get_conf, trimmed_format_exc -from multiprocessing import Process, Pipe - -def preprocess_newbing_out(s): - pattern = r'\^(\d+)\^' # 匹配^数字^ - sub = lambda m: '('+m.group(1)+')' # 将匹配到的数字作为替换值 - result = re.sub(pattern, sub, s) # 替换操作 - if '[1]' in result: - result += '\n\n```reference\n' + "\n".join([r for r in result.split('\n') if r.startswith('[')]) + '\n```\n' - return result - -def preprocess_newbing_out_simple(result): - if '[1]' in result: - result += '\n\n```reference\n' + "\n".join([r for r in result.split('\n') if r.startswith('[')]) + '\n```\n' - return result - -class NewBingHandle(Process): - def __init__(self): - super().__init__(daemon=True) - self.parent, self.child = Pipe() - self.newbing_model = None - self.info = "" - self.success = True - self.local_history = [] - self.check_dependency() - self.start() - self.threadLock = threading.Lock() - - def check_dependency(self): - try: - self.success = False - import certifi, httpx, rich - self.info = "依赖检测通过,等待NewBing响应。注意目前不能多人同时调用NewBing接口(有线程锁),否则将导致每个人的NewBing问询历史互相渗透。调用NewBing时,会自动使用已配置的代理。" - self.success = True - except: - self.info = "缺少的依赖,如果要使用Newbing,除了基础的pip依赖以外,您还需要运行`pip install -r request_llm/requirements_newbing.txt`安装Newbing的依赖。" - self.success = False - - def ready(self): - return self.newbing_model is not None - - async def async_run(self): - # 读取配置 - NEWBING_STYLE, = get_conf('NEWBING_STYLE') - from request_llm.bridge_all import model_info - endpoint = model_info['newbing']['endpoint'] - while True: - # 等待 - kwargs = self.child.recv() - question=kwargs['query'] - history=kwargs['history'] - system_prompt=kwargs['system_prompt'] - - # 是否重置 - if len(self.local_history) > 0 and len(history)==0: - await self.newbing_model.reset() - self.local_history = [] - - # 开始问问题 - prompt = "" - if system_prompt not in self.local_history: - self.local_history.append(system_prompt) - prompt += system_prompt + '\n' - - # 追加历史 - for ab in history: - a, b = ab - if a not in self.local_history: - self.local_history.append(a) - prompt += a + '\n' - # if b not in self.local_history: - # self.local_history.append(b) - # prompt += b + '\n' - - # 问题 - prompt += question - self.local_history.append(question) - print('question:', prompt) - # 提交 - async for final, response in self.newbing_model.ask_stream( - prompt=question, - conversation_style=NEWBING_STYLE, # ["creative", "balanced", "precise"] - wss_link=endpoint, # 
"wss://sydney.bing.com/sydney/ChatHub" - ): - if not final: - print(response) - self.child.send(str(response)) - else: - print('-------- receive final ---------') - self.child.send('[Finish]') - # self.local_history.append(response) - - - def run(self): - """ - 这个函数运行在子进程 - """ - # 第一次运行,加载参数 - self.success = False - self.local_history = [] - if (self.newbing_model is None) or (not self.success): - # 代理设置 - proxies, = get_conf('proxies') - if proxies is None: - self.proxies_https = None - else: - self.proxies_https = proxies['https'] - # cookie - NEWBING_COOKIES, = get_conf('NEWBING_COOKIES') - try: - cookies = json.loads(NEWBING_COOKIES) - except: - self.success = False - tb_str = '\n```\n' + trimmed_format_exc() + '\n```\n' - self.child.send(f'[Local Message] 不能加载Newbing组件。NEWBING_COOKIES未填写或有格式错误。') - self.child.send('[Fail]') - self.child.send('[Finish]') - raise RuntimeError(f"不能加载Newbing组件。NEWBING_COOKIES未填写或有格式错误。") - - try: - self.newbing_model = NewbingChatbot(proxy=self.proxies_https, cookies=cookies) - except: - self.success = False - tb_str = '\n```\n' + trimmed_format_exc() + '\n```\n' - self.child.send(f'[Local Message] 不能加载Newbing组件。{tb_str}') - self.child.send('[Fail]') - self.child.send('[Finish]') - raise RuntimeError(f"不能加载Newbing组件。") - - self.success = True - try: - # 进入任务等待状态 - asyncio.run(self.async_run()) - except Exception: - tb_str = '\n```\n' + trimmed_format_exc() + '\n```\n' - self.child.send(f'[Local Message] Newbing失败 {tb_str}.') - self.child.send('[Fail]') - self.child.send('[Finish]') - - def stream_chat(self, **kwargs): - """ - 这个函数运行在主进程 - """ - self.threadLock.acquire() - self.parent.send(kwargs) # 发送请求到子进程 - while True: - res = self.parent.recv() # 等待newbing回复的片段 - if res == '[Finish]': - break # 结束 - elif res == '[Fail]': - self.success = False - break - else: - yield res # newbing回复的片段 - self.threadLock.release() - - -""" -======================================================================== -第三部分:主进程统一调用函数接口 -======================================================================== -""" -global newbing_handle -newbing_handle = None - -def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="", observe_window=None, console_slience=False): - """ - 多线程方法 - 函数的说明请见 request_llm/bridge_all.py - """ - global newbing_handle - if (newbing_handle is None) or (not newbing_handle.success): - newbing_handle = NewBingHandle() - observe_window[0] = load_message + "\n\n" + newbing_handle.info - if not newbing_handle.success: - error = newbing_handle.info - newbing_handle = None - raise RuntimeError(error) - - # 没有 sys_prompt 接口,因此把prompt加入 history - history_feedin = [] - for i in range(len(history)//2): - history_feedin.append([history[2*i], history[2*i+1]] ) - - watch_dog_patience = 5 # 看门狗 (watchdog) 的耐心, 设置5秒即可 - response = "" - observe_window[0] = "[Local Message]: 等待NewBing响应中 ..." 
- for response in newbing_handle.stream_chat(query=inputs, history=history_feedin, system_prompt=sys_prompt, max_length=llm_kwargs['max_length'], top_p=llm_kwargs['top_p'], temperature=llm_kwargs['temperature']): - observe_window[0] = preprocess_newbing_out_simple(response) - if len(observe_window) >= 2: - if (time.time()-observe_window[1]) > watch_dog_patience: - raise RuntimeError("程序终止。") - return preprocess_newbing_out_simple(response) - -def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_prompt='', stream = True, additional_fn=None): - """ - 单线程方法 - 函数的说明请见 request_llm/bridge_all.py - """ - chatbot.append((inputs, "[Local Message]: 等待NewBing响应中 ...")) - - global newbing_handle - if (newbing_handle is None) or (not newbing_handle.success): - newbing_handle = NewBingHandle() - chatbot[-1] = (inputs, load_message + "\n\n" + newbing_handle.info) - yield from update_ui(chatbot=chatbot, history=[]) - if not newbing_handle.success: - newbing_handle = None - return - - if additional_fn is not None: - import core_functional - importlib.reload(core_functional) # 热更新prompt - core_functional = core_functional.get_core_functions() - if "PreProcess" in core_functional[additional_fn]: inputs = core_functional[additional_fn]["PreProcess"](inputs) # 获取预处理函数(如果有的话) - inputs = core_functional[additional_fn]["Prefix"] + inputs + core_functional[additional_fn]["Suffix"] - - history_feedin = [] - for i in range(len(history)//2): - history_feedin.append([history[2*i], history[2*i+1]] ) - - chatbot[-1] = (inputs, "[Local Message]: 等待NewBing响应中 ...") - response = "[Local Message]: 等待NewBing响应中 ..." - yield from update_ui(chatbot=chatbot, history=history, msg="NewBing响应缓慢,尚未完成全部响应,请耐心完成后再提交新问题。") - for response in newbing_handle.stream_chat(query=inputs, history=history_feedin, system_prompt=system_prompt, max_length=llm_kwargs['max_length'], top_p=llm_kwargs['top_p'], temperature=llm_kwargs['temperature']): - chatbot[-1] = (inputs, preprocess_newbing_out(response)) - yield from update_ui(chatbot=chatbot, history=history, msg="NewBing响应缓慢,尚未完成全部响应,请耐心完成后再提交新问题。") - if response == "[Local Message]: 等待NewBing响应中 ...": response = "[Local Message]: NewBing响应异常,请刷新界面重试 ..." - history.extend([inputs, response]) - logging.info(f'[raw_input] {inputs}') - logging.info(f'[response] {response}') - yield from update_ui(chatbot=chatbot, history=history, msg="完成全部响应,请提交新问题。") - diff --git a/spaces/hdhzk/bingo/src/components/chat-attachments.tsx b/spaces/hdhzk/bingo/src/components/chat-attachments.tsx deleted file mode 100644 index ef43d4e262935d263b6099138c56f7daade5299d..0000000000000000000000000000000000000000 --- a/spaces/hdhzk/bingo/src/components/chat-attachments.tsx +++ /dev/null @@ -1,37 +0,0 @@ -import Image from 'next/image' -import ClearIcon from '@/assets/images/clear.svg' -import RefreshIcon from '@/assets/images/refresh.svg' -import { FileItem } from '@/lib/bots/bing/types' -import { cn } from '@/lib/utils' -import { useBing } from '@/lib/hooks/use-bing' - -type ChatAttachmentsProps = Pick, 'attachmentList' | 'setAttachmentList' | 'uploadImage'> - -export function ChatAttachments({ attachmentList = [], setAttachmentList, uploadImage }: ChatAttachmentsProps) { - return attachmentList.length ? ( -
        - {attachmentList.map(file => ( -
        - {file.status === 'loading' && ( -
        -
        -
        ) - } - {file.status !== 'error' && ( -
        - -
        ) - } - {file.status === 'error' && ( -
        - refresh uploadImage(file.url)} /> -
        - )} - -
        - ))} -
        - ) : null -} diff --git a/spaces/hebert2099/MusicGen/setup.py b/spaces/hebert2099/MusicGen/setup.py deleted file mode 100644 index 78a172b7c90003b689bde40b49cc8fe1fb8107d4..0000000000000000000000000000000000000000 --- a/spaces/hebert2099/MusicGen/setup.py +++ /dev/null @@ -1,65 +0,0 @@ -""" - Copyright (c) Meta Platforms, Inc. and affiliates. - All rights reserved. - - This source code is licensed under the license found in the - LICENSE file in the root directory of this source tree. - -""" - -from pathlib import Path - -from setuptools import setup, find_packages - - -NAME = 'audiocraft' -DESCRIPTION = 'Audio research library for PyTorch' - -URL = 'https://github.com/fairinternal/audiocraft' -AUTHOR = 'FAIR Speech & Audio' -EMAIL = 'defossez@meta.com' -REQUIRES_PYTHON = '>=3.8.0' - -for line in open('audiocraft/__init__.py'): - line = line.strip() - if '__version__' in line: - context = {} - exec(line, context) - VERSION = context['__version__'] - -HERE = Path(__file__).parent - -try: - with open(HERE / "README.md", encoding='utf-8') as f: - long_description = '\n' + f.read() -except FileNotFoundError: - long_description = DESCRIPTION - -REQUIRED = [i.strip() for i in open(HERE / 'requirements.txt') if not i.startswith('#')] - -setup( - name=NAME, - version=VERSION, - description=DESCRIPTION, - author_email=EMAIL, - long_description=long_description, - long_description_content_type='text/markdown', - author=AUTHOR, - url=URL, - python_requires=REQUIRES_PYTHON, - install_requires=REQUIRED, - extras_require={ - 'dev': ['coverage', 'flake8', 'mypy', 'pdoc3', 'pytest'], - }, - packages=find_packages(), - package_data={'audiocraft': ['py.typed']}, - include_package_data=True, - license='MIT License', - classifiers=[ - # Trove classifiers - # Full list: https://pypi.python.org/pypi?%3Aaction=list_classifiers - 'License :: OSI Approved :: MIT License', - 'Topic :: Multimedia :: Sound/Audio', - 'Topic :: Scientific/Engineering :: Artificial Intelligence', - ], -) diff --git a/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/training/network_training/nnUNetTrainerV2_CascadeFullRes.py b/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/training/network_training/nnUNetTrainerV2_CascadeFullRes.py deleted file mode 100644 index deff94324da466915a78beed0f55271fb5d3b8bf..0000000000000000000000000000000000000000 --- a/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/training/network_training/nnUNetTrainerV2_CascadeFullRes.py +++ /dev/null @@ -1,353 +0,0 @@ -# Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
- - -from multiprocessing.pool import Pool -from time import sleep -import matplotlib -from nnunet.configuration import default_num_threads -from nnunet.postprocessing.connected_components import determine_postprocessing -from nnunet.training.data_augmentation.data_augmentation_moreDA import get_moreDA_augmentation -from nnunet.training.dataloading.dataset_loading import DataLoader3D, unpack_dataset -from nnunet.evaluation.evaluator import aggregate_scores -from nnunet.network_architecture.neural_network import SegmentationNetwork -from nnunet.paths import network_training_output_dir -from nnunet.inference.segmentation_export import save_segmentation_nifti_from_softmax -from batchgenerators.utilities.file_and_folder_operations import * -import numpy as np -from nnunet.training.loss_functions.deep_supervision import MultipleOutputLoss2 -from nnunet.training.network_training.nnUNetTrainerV2 import nnUNetTrainerV2 -from nnunet.utilities.one_hot_encoding import to_one_hot -import shutil - -from torch import nn - -matplotlib.use("agg") - - -class nnUNetTrainerV2CascadeFullRes(nnUNetTrainerV2): - def __init__(self, plans_file, fold, output_folder=None, dataset_directory=None, batch_dice=True, stage=None, - unpack_data=True, deterministic=True, previous_trainer="nnUNetTrainerV2", fp16=False): - super().__init__(plans_file, fold, output_folder, dataset_directory, - batch_dice, stage, unpack_data, deterministic, fp16) - self.init_args = (plans_file, fold, output_folder, dataset_directory, batch_dice, stage, unpack_data, - deterministic, previous_trainer, fp16) - - if self.output_folder is not None: - task = self.output_folder.split("/")[-3] - plans_identifier = self.output_folder.split("/")[-2].split("__")[-1] - - folder_with_segs_prev_stage = join(network_training_output_dir, "3d_lowres", - task, previous_trainer + "__" + plans_identifier, "pred_next_stage") - self.folder_with_segs_from_prev_stage = folder_with_segs_prev_stage - # Do not put segs_prev_stage into self.output_folder as we need to unpack them for performance and we - # don't want to do that in self.output_folder because that one is located on some network drive. - else: - self.folder_with_segs_from_prev_stage = None - - def do_split(self): - super().do_split() - for k in self.dataset: - self.dataset[k]['seg_from_prev_stage_file'] = join(self.folder_with_segs_from_prev_stage, - k + "_segFromPrevStage.npz") - assert isfile(self.dataset[k]['seg_from_prev_stage_file']), \ - "seg from prev stage missing: %s. " \ - "Please run all 5 folds of the 3d_lowres configuration of this " \ - "task!" 
% (self.dataset[k]['seg_from_prev_stage_file']) - for k in self.dataset_val: - self.dataset_val[k]['seg_from_prev_stage_file'] = join(self.folder_with_segs_from_prev_stage, - k + "_segFromPrevStage.npz") - for k in self.dataset_tr: - self.dataset_tr[k]['seg_from_prev_stage_file'] = join(self.folder_with_segs_from_prev_stage, - k + "_segFromPrevStage.npz") - - def get_basic_generators(self): - self.load_dataset() - self.do_split() - - if self.threeD: - dl_tr = DataLoader3D(self.dataset_tr, self.basic_generator_patch_size, self.patch_size, self.batch_size, - True, oversample_foreground_percent=self.oversample_foreground_percent, - pad_mode="constant", pad_sides=self.pad_all_sides) - dl_val = DataLoader3D(self.dataset_val, self.patch_size, self.patch_size, self.batch_size, True, - oversample_foreground_percent=self.oversample_foreground_percent, - pad_mode="constant", pad_sides=self.pad_all_sides) - else: - raise NotImplementedError("2D has no cascade") - - return dl_tr, dl_val - - def process_plans(self, plans): - super().process_plans(plans) - self.num_input_channels += (self.num_classes - 1) # for seg from prev stage - - def setup_DA_params(self): - super().setup_DA_params() - - self.data_aug_params["num_cached_per_thread"] = 2 - - self.data_aug_params['move_last_seg_chanel_to_data'] = True - self.data_aug_params['cascade_do_cascade_augmentations'] = True - - self.data_aug_params['cascade_random_binary_transform_p'] = 0.4 - self.data_aug_params['cascade_random_binary_transform_p_per_label'] = 1 - self.data_aug_params['cascade_random_binary_transform_size'] = (1, 8) - - self.data_aug_params['cascade_remove_conn_comp_p'] = 0.2 - self.data_aug_params['cascade_remove_conn_comp_max_size_percent_threshold'] = 0.15 - self.data_aug_params['cascade_remove_conn_comp_fill_with_other_class_p'] = 0.0 - - # we have 2 channels now because the segmentation from the previous stage is stored in 'seg' as well until it - # is moved to 'data' at the end - self.data_aug_params['selected_seg_channels'] = [0, 1] - # needed for converting the segmentation from the previous stage to one hot - self.data_aug_params['all_segmentation_labels'] = list(range(1, self.num_classes)) - - def initialize(self, training=True, force_load_plans=False): - """ - For prediction of test cases just set training=False, this will prevent loading of training data and - training batchgenerator initialization - :param training: - :return: - """ - if not self.was_initialized: - if force_load_plans or (self.plans is None): - self.load_plans_file() - - self.process_plans(self.plans) - - self.setup_DA_params() - - ################# Here we wrap the loss for deep supervision ############ - # we need to know the number of outputs of the network - net_numpool = len(self.net_num_pool_op_kernel_sizes) - - # we give each output a weight which decreases exponentially (division by 2) as the resolution decreases - # this gives higher resolution outputs more weight in the loss - weights = np.array([1 / (2 ** i) for i in range(net_numpool)]) - - # we don't use the lowest 2 outputs. 
Normalize weights so that they sum to 1 - mask = np.array([True if i < net_numpool - 1 else False for i in range(net_numpool)]) - weights[~mask] = 0 - weights = weights / weights.sum() - self.ds_loss_weights = weights - # now wrap the loss - self.loss = MultipleOutputLoss2(self.loss, self.ds_loss_weights) - ################# END ################### - - self.folder_with_preprocessed_data = join(self.dataset_directory, self.plans['data_identifier'] + - "_stage%d" % self.stage) - - if training: - if not isdir(self.folder_with_segs_from_prev_stage): - raise RuntimeError( - "Cannot run final stage of cascade. Run corresponding 3d_lowres first and predict the " - "segmentations for the next stage") - - self.dl_tr, self.dl_val = self.get_basic_generators() - if self.unpack_data: - print("unpacking dataset") - unpack_dataset(self.folder_with_preprocessed_data) - print("done") - else: - print( - "INFO: Not unpacking data! Training may be slow due to that. Pray you are not using 2d or you " - "will wait all winter for your model to finish!") - - self.tr_gen, self.val_gen = get_moreDA_augmentation(self.dl_tr, self.dl_val, - self.data_aug_params[ - 'patch_size_for_spatialtransform'], - self.data_aug_params, - deep_supervision_scales=self.deep_supervision_scales, - pin_memory=self.pin_memory) - self.print_to_log_file("TRAINING KEYS:\n %s" % (str(self.dataset_tr.keys())), - also_print_to_console=False) - self.print_to_log_file("VALIDATION KEYS:\n %s" % (str(self.dataset_val.keys())), - also_print_to_console=False) - else: - pass - - self.initialize_network() - self.initialize_optimizer_and_scheduler() - - assert isinstance(self.network, (SegmentationNetwork, nn.DataParallel)) - else: - self.print_to_log_file('self.was_initialized is True, not running self.initialize again') - - self.was_initialized = True - - def validate(self, do_mirroring: bool = True, use_sliding_window: bool = True, step_size: float = 0.5, - save_softmax: bool = True, use_gaussian: bool = True, overwrite: bool = True, - validation_folder_name: str = 'validation_raw', debug: bool = False, all_in_gpu: bool = False, - segmentation_export_kwargs: dict = None, run_postprocessing_on_folds: bool = True): - assert self.was_initialized, "must initialize, ideally with checkpoint (or train first)" - - current_mode = self.network.training - self.network.eval() - # save whether network is in deep supervision mode or not - ds = self.network.do_ds - # disable deep supervision - self.network.do_ds = False - - if segmentation_export_kwargs is None: - if 'segmentation_export_params' in self.plans.keys(): - force_separate_z = self.plans['segmentation_export_params']['force_separate_z'] - interpolation_order = self.plans['segmentation_export_params']['interpolation_order'] - interpolation_order_z = self.plans['segmentation_export_params']['interpolation_order_z'] - else: - force_separate_z = None - interpolation_order = 1 - interpolation_order_z = 0 - else: - force_separate_z = segmentation_export_kwargs['force_separate_z'] - interpolation_order = segmentation_export_kwargs['interpolation_order'] - interpolation_order_z = segmentation_export_kwargs['interpolation_order_z'] - - if self.dataset_val is None: - self.load_dataset() - self.do_split() - - output_folder = join(self.output_folder, validation_folder_name) - maybe_mkdir_p(output_folder) - # this is for debug purposes - my_input_args = {'do_mirroring': do_mirroring, - 'use_sliding_window': use_sliding_window, - 'step': step_size, - 'save_softmax': save_softmax, - 'use_gaussian': use_gaussian, - 
'overwrite': overwrite, - 'validation_folder_name': validation_folder_name, - 'debug': debug, - 'all_in_gpu': all_in_gpu, - 'segmentation_export_kwargs': segmentation_export_kwargs, - } - save_json(my_input_args, join(output_folder, "validation_args.json")) - - if do_mirroring: - if not self.data_aug_params['do_mirror']: - raise RuntimeError("We did not train with mirroring so you cannot do inference with mirroring enabled") - mirror_axes = self.data_aug_params['mirror_axes'] - else: - mirror_axes = () - - pred_gt_tuples = [] - - export_pool = Pool(default_num_threads) - results = [] - - for k in self.dataset_val.keys(): - properties = load_pickle(self.dataset[k]['properties_file']) - fname = properties['list_of_data_files'][0].split("/")[-1][:-12] - - if overwrite or (not isfile(join(output_folder, fname + ".nii.gz"))) or \ - (save_softmax and not isfile(join(output_folder, fname + ".npz"))): - data = np.load(self.dataset[k]['data_file'])['data'] - - # concat segmentation of previous step - seg_from_prev_stage = np.load(join(self.folder_with_segs_from_prev_stage, - k + "_segFromPrevStage.npz"))['data'][None] - - print(k, data.shape) - data[-1][data[-1] == -1] = 0 - - data_for_net = np.concatenate((data[:-1], to_one_hot(seg_from_prev_stage[0], range(1, self.num_classes)))) - - softmax_pred = self.predict_preprocessed_data_return_seg_and_softmax(data_for_net, - do_mirroring=do_mirroring, - mirror_axes=mirror_axes, - use_sliding_window=use_sliding_window, - step_size=step_size, - use_gaussian=use_gaussian, - all_in_gpu=all_in_gpu, - mixed_precision=self.fp16)[1] - - softmax_pred = softmax_pred.transpose([0] + [i + 1 for i in self.transpose_backward]) - - if save_softmax: - softmax_fname = join(output_folder, fname + ".npz") - else: - softmax_fname = None - - """There is a problem with python process communication that prevents us from communicating obejcts - larger than 2 GB between processes (basically when the length of the pickle string that will be sent is - communicated by the multiprocessing.Pipe object then the placeholder (\%i I think) does not allow for long - enough strings (lol). This could be fixed by changing i to l (for long) but that would require manually - patching system python code. We circumvent that problem here by saving softmax_pred to a npy file that will - then be read (and finally deleted) by the Process. 
save_segmentation_nifti_from_softmax can take either
-                filename or np.ndarray and will handle this automatically"""
-                if np.prod(softmax_pred.shape) > (2e9 / 4 * 0.85): # *0.85 just to be safe
-                    np.save(join(output_folder, fname + ".npy"), softmax_pred)
-                    softmax_pred = join(output_folder, fname + ".npy")
-
-                results.append(export_pool.starmap_async(save_segmentation_nifti_from_softmax,
-                                                         ((softmax_pred, join(output_folder, fname + ".nii.gz"),
-                                                           properties, interpolation_order, None, None, None,
-                                                           softmax_fname, None, force_separate_z,
-                                                           interpolation_order_z),
-                                                          )
-                                                         )
-                               )
-
-            pred_gt_tuples.append([join(output_folder, fname + ".nii.gz"),
-                                   join(self.gt_niftis_folder, fname + ".nii.gz")])
-
-        _ = [i.get() for i in results]
-        self.print_to_log_file("finished prediction")
-
-        # evaluate raw predictions
-        self.print_to_log_file("evaluation of raw predictions")
-        task = self.dataset_directory.split("/")[-1]
-        job_name = self.experiment_name
-        _ = aggregate_scores(pred_gt_tuples, labels=list(range(self.num_classes)),
-                             json_output_file=join(output_folder, "summary.json"),
-                             json_name=job_name + " val tiled %s" % (str(use_sliding_window)),
-                             json_author="Fabian",
-                             json_task=task, num_threads=default_num_threads)
-
-        if run_postprocessing_on_folds:
-            # in the old nnunet we would stop here. Now we add a postprocessing. This postprocessing can remove everything
-            # except the largest connected component for each class. To see if this improves results, we do this for all
-            # classes and then rerun the evaluation. Those classes for which this resulted in an improved dice score will
-            # have this applied during inference as well
-            self.print_to_log_file("determining postprocessing")
-            determine_postprocessing(self.output_folder, self.gt_niftis_folder, validation_folder_name,
-                                     final_subf_name=validation_folder_name + "_postprocessed", debug=debug)
-            # after this the final predictions for the validation set can be found in validation_folder_name_base + "_postprocessed"
-            # They are always in that folder, even if no postprocessing was applied!
-
-            # determining postprocessing on a per-fold basis may be OK for this fold but what if another fold finds another
-            # postprocessing to be better? In this case we need to consolidate. 
At the time the consolidation is going to be - # done we won't know what self.gt_niftis_folder was, so now we copy all the niftis into a separate folder to - # be used later - gt_nifti_folder = join(self.output_folder_base, "gt_niftis") - maybe_mkdir_p(gt_nifti_folder) - for f in subfiles(self.gt_niftis_folder, suffix=".nii.gz"): - success = False - attempts = 0 - e = None - while not success and attempts < 10: - try: - shutil.copy(f, gt_nifti_folder) - success = True - except OSError as e: - attempts += 1 - sleep(1) - if not success: - print("Could not copy gt nifti file %s into folder %s" % (f, gt_nifti_folder)) - if e is not None: - raise e - - # restore network deep supervision mode - self.network.train(current_mode) - self.network.do_ds = ds diff --git a/spaces/hoang1007/wav2vec2/finetuning/preprocess.py b/spaces/hoang1007/wav2vec2/finetuning/preprocess.py deleted file mode 100644 index 8fd3c6d6a0d9290a4ee66ca120f57a4198d46a6c..0000000000000000000000000000000000000000 --- a/spaces/hoang1007/wav2vec2/finetuning/preprocess.py +++ /dev/null @@ -1,27 +0,0 @@ -import sys - -sys.path.append("..") - -import os -import argparse -from torch.utils.data import random_split -from src.datamodule import VLSP2020TarDataset, VLSP2020Dataset - - -def prepare_tar_dataset(data_dir: str, dest_dir: str): - dts = VLSP2020Dataset(data_dir) - train_set, val_set = random_split(dts, [42_000, 14_427]) - - VLSP2020TarDataset(os.path.join(dest_dir, "vlsp2020_train_set.tar")).convert( - train_set - ) - VLSP2020TarDataset(os.path.join(dest_dir, "vlsp2020_val_set.tar")).convert(val_set) - - -def main(): - parser = argparse.ArgumentParser() - parser.add_argument("--data_dir", type=str, required=True) - parser.add_argument("--dest_dir", type=str, required=True) - args = parser.parse_args() - - prepare_tar_dataset(args.data_dir, args.dest_dir) diff --git a/spaces/hra/ChatGPT-SEC-Filings-QA/README.md b/spaces/hra/ChatGPT-SEC-Filings-QA/README.md deleted file mode 100644 index 920612a0be480165b2f2153ad2f4ddb13a6c110e..0000000000000000000000000000000000000000 --- a/spaces/hra/ChatGPT-SEC-Filings-QA/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: ChatGPT-SEC-Filings-QA -emoji: 🏢 -colorFrom: blue -colorTo: indigo -sdk: gradio -sdk_version: 3.23.0 -app_file: app.py -pinned: false -license: cc-by-nc-nd-4.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/huspacy/example-applications/examples/anon.py b/spaces/huspacy/example-applications/examples/anon.py deleted file mode 100644 index 46ac79f48a27f2ee1536240430984ba04ae7d84b..0000000000000000000000000000000000000000 --- a/spaces/huspacy/example-applications/examples/anon.py +++ /dev/null @@ -1,78 +0,0 @@ -from typing import Tuple, List - -import gradio as gr -from faker import Faker -from presidio_analyzer import AnalyzerEngine -from presidio_analyzer.nlp_engine import SpacyNlpEngine -from presidio_anonymizer import AnonymizerEngine -from presidio_anonymizer.entities.engine import OperatorConfig -from spacy import Language - -from examples.common import NLP - - -# noinspection PyMissingConstructor -class HuSpaCyNlpEngine(SpacyNlpEngine): - def __init__(self, nlp: Language): - self.nlp = {"hu": nlp} - - -def process(text: str, fake_data: bool, entities: List) -> Tuple[str, List]: - nlp_engine = HuSpaCyNlpEngine(NLP) - - analyzer = AnalyzerEngine(nlp_engine=nlp_engine, supported_languages=["hu"]) - - results = analyzer.analyze( - text=text, entities=entities, language="hu") - - fake = 
Faker(locale=["hu_HU"]) - - fake_operators = { - "PERSON": OperatorConfig("custom", {"lambda": lambda x: fake.name()}), - "LOCATION": OperatorConfig("custom", {"lambda": lambda x: fake.address()}), - "EMAIL_ADDRESS": OperatorConfig("custom", {"lambda": lambda x: fake.email()}), - "PHONE_NUMBER": OperatorConfig("custom", {"lambda": lambda x: fake.phone_number()}), - "CRYPTO": OperatorConfig("custom", {"lambda": lambda x: fake.password()}), - "IP_ADDRESS": OperatorConfig("custom", {"lambda": lambda x: fake.ipv4()}), - "URL": OperatorConfig("custom", {"lambda": lambda x: fake.url()}), - "DATE_TIME": OperatorConfig("custom", {"lambda": lambda x: fake.date()}), - "CREDIT_CARD": OperatorConfig("custom", {"lambda": lambda x: fake.credit_card_number()}), - "IBAN_CODE": OperatorConfig("custom", {"lambda": lambda x: fake.iban()}), - } - - anonymizer = AnonymizerEngine() - anonymized_text = anonymizer.anonymize( - text=text, analyzer_results=results, operators=fake_operators) if fake_data else anonymizer.anonymize(text=text, - analyzer_results=results) - - return anonymized_text.text, anonymized_text.items - - -EXAMPLES = [ - [ - "Vespucci 1450-es években született Firenzében, és 1497 és 1504 között legalább két felfedező úton vett részt – az egyiket spanyol, a másikat portugál támogatással.", - False, ["PERSON", "LOCATION"]], - [ - "Elon Musk 1971-ben született a Dél-afrikai Köztársaságban, anyja Maye Musk (született: Haldeman) modell, apja Errol Musk mérnök, pilóta.", - True, [ - "PERSON", "LOCATION"]], - [ - "Vespucci 1450-es években született Firenzében, és 1497 és 1504 között legalább két felfedező úton vett részt. Bárorító leveleket a vespucci@deojeda.es email-címre várt, mellette működött egy hangrögzítője is a +3903827802737 telefonszámon. Adományokat a bitcoin tárcájába (1Fsb3io3hj1jKaRCTRQ89Du88Dp7NxgEcU), bankkártyájára (5200 8282 8282 8210) és IBAN számlaszámára (ES8201289482186115378819) fogadott. Utazási blogja a https://firenze.it/vespucci címen volt elérhető. 
Legutóbb 1503-03-15-én publikált, ezt a 192.168.0.1 ip-címről tette meg.", - True, - ["PERSON", "LOCATION", "EMAIL_ADDRESS", "PHONE_NUMBER", "CRYPTO", "IP_ADDRESS", "URL", "DATE_TIME", - "CREDIT_CARD", "IBAN_CODE"]], -] - -demo = gr.Interface( - fn=process, - inputs=[gr.Textbox(value=EXAMPLES[0][0], lines=10, label="Input text", show_label=True), - gr.Checkbox(value=EXAMPLES[0][1], - label="Apply de-identification", show_label=True), - gr.CheckboxGroup( - ['PERSON', 'LOCATION', 'DATE_TIME', 'IP_ADDRESS', 'URL', 'EMAIL_ADDRESS', 'PHONE_NUMBER', 'CREDIT_CARD', - 'IBAN_CODE', 'CRYPTO'], label="Entities", show_label=True, value=EXAMPLES[0][2])], - outputs=[gr.Textbox(label="Anonymized text", show_label=True), - gr.Textbox(label="Tags", show_label=True)], - examples=EXAMPLES, - cache_examples=False, -) diff --git a/spaces/hyxue/HiFiFace-inference-demo/Deep3DFaceRecon_pytorch/models/arcface_torch/inference.py b/spaces/hyxue/HiFiFace-inference-demo/Deep3DFaceRecon_pytorch/models/arcface_torch/inference.py deleted file mode 100644 index 1aab06628b4f33a67284ea1446ddc7c38642c33f..0000000000000000000000000000000000000000 --- a/spaces/hyxue/HiFiFace-inference-demo/Deep3DFaceRecon_pytorch/models/arcface_torch/inference.py +++ /dev/null @@ -1,34 +0,0 @@ -import argparse - -import cv2 -import numpy as np -import torch -from backbones import get_model - - -@torch.no_grad() -def inference(weight, name, img): - if img is None: - img = np.random.randint(0, 255, size=(112, 112, 3), dtype=np.uint8) - else: - img = cv2.imread(img) - img = cv2.resize(img, (112, 112)) - - img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) - img = np.transpose(img, (2, 0, 1)) - img = torch.from_numpy(img).unsqueeze(0).float() - img.div_(255).sub_(0.5).div_(0.5) - net = get_model(name, fp16=False) - net.load_state_dict(torch.load(weight)) - net.eval() - feat = net(img).numpy() - print(feat) - - -if __name__ == "__main__": - parser = argparse.ArgumentParser(description="PyTorch ArcFace Training") - parser.add_argument("--network", type=str, default="r50", help="backbone network") - parser.add_argument("--weight", type=str, default="") - parser.add_argument("--img", type=str, default=None) - args = parser.parse_args() - inference(args.weight, args.network, args.img) diff --git a/spaces/ifrit98/terenceGPT/README.md b/spaces/ifrit98/terenceGPT/README.md deleted file mode 100644 index 11de980b33171739ed2a69b042509c8a90f52753..0000000000000000000000000000000000000000 --- a/spaces/ifrit98/terenceGPT/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: TerenceGPT -emoji: 💩 -colorFrom: gray -colorTo: indigo -sdk: gradio -sdk_version: 3.19.1 -app_file: app.py -pinned: false -license: pddl ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/ACDSee Pro 2.5.332 Crack [2021].md b/spaces/inplisQlawa/anything-midjourney-v4-1/ACDSee Pro 2.5.332 Crack [2021].md deleted file mode 100644 index 9275387c2332ecadcf109909d3b2b4940a1b7714..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/ACDSee Pro 2.5.332 Crack [2021].md +++ /dev/null @@ -1,9 +0,0 @@ -

        ACDSee Pro 2.5.332 Crack


        Download File ★★★ https://urlin.us/2uEwpa



        -
        -March 9, 2011 - ACDSee Pro is a professional digital photography software with a number of advanced features designed for professionals.The program includes many different tools, as well as many other utilities. -In addition to the basic photo viewing and converting features, the program provides many additional photo editing features, including fine adjustment of colors, brightness and contrast, as well as the ability to create slideshows with a wide range of transitions. -Also, using the program, you can batch process large files, significantly reducing the time. -Screenshots: 8a78ff9644
        -
        -
        -

        diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Dreambox Dm500 Gemini Image 4 70 HOT.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Dreambox Dm500 Gemini Image 4 70 HOT.md deleted file mode 100644 index d0ed758228228c79cfbc6342e5f19ed8e91d7c86..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Dreambox Dm500 Gemini Image 4 70 HOT.md +++ /dev/null @@ -1,6 +0,0 @@ -

        dreambox dm500 gemini image 4 70


        Download Ziphttps://urlin.us/2uEy59



        - -Dreambox Dm500s Gemini Image mediafire links free download, download dm500s gemini 4 70 gsf tp 11398 fix, DM500s Gemini 4 70 Backup ... 4d29de3e1b
        -
        -
        -

        diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/HD Online Player (dvdvideosoft Free Studio 5.0.3 Seria).md b/spaces/inplisQlawa/anything-midjourney-v4-1/HD Online Player (dvdvideosoft Free Studio 5.0.3 Seria).md deleted file mode 100644 index f47b777830b1f9d33b990147f0f96823779f7d33..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/HD Online Player (dvdvideosoft Free Studio 5.0.3 Seria).md +++ /dev/null @@ -1,70 +0,0 @@ -

        HD Online Player (dvdvideosoft free studio 5.0.3 seria)


        Download ··· https://urlin.us/2uEwbq



        - -Datanumen Rar Repair Crack 64 Download by Datanumen Scanner Repair Pro 6.0.3 Serial KeyFull.Download Free Download Datanumen Rar Repair Crack 64 Keygen.Rar Repair Crack Free Download by Datanumen Scanner Repair Pro 6.0.3 Crack 2018 Latest.Online Game Datanumen Rar Repair Crack 64 Full Version Download Full Version free.datanumen.rar Repair..Q: - -SQL Query to update multiple tables in a table - -I have table called Person. It has an ID column and a Status column. - -I have another table called Person_Login called PersonLogin. - -This table is linked to the Person table. - -The ID column in PersonLogin matches the ID column in Person. - -The Status column in PersonLogin matches the Status column in Person. - -The Status column in PersonLogin is set to 'Inactive'. - -I want to update the Status column in PersonLogin to 'Active' and in the same time I want to remove the PersonLogin row if it's Status is 'Inactive'. - -What is the best way of doing this? I tried using something like this: - -UPDATE Person - -SET Person.Status = 'Active' - -WHERE Person.ID = PersonLogin.ID - -This will update only the rows with the same ID in both tables. How can I get the first query to update every single row in PersonLogin? - -A: - -Try something like this: - -UPDATE p - -SET p.Status = 'Active' - -FROM Person p - -INNER JOIN PersonLogin l ON p.ID = l.ID - -WHERE l.Status = 'Inactive' - -Q: - -How do I exclude double quotes from sed? - -When I run the following command: - -sed -i's/hello/hey/"/g' test.txt - -I get the following error: - -sed: -e expression #1, char 1: unknown option to `s' - -I've tried a lot of different ways to get around this problem, such as: - -sed -i's/"/g' test.txt - -sed -i '/hello/d' test.txt - -but I still get the same error. - -I suggest to use -r and to avoid escaping: - -sed -ri's/"/g' test 4fefd39f24
        -
        -
        -

        diff --git a/spaces/inreVtussa/clothingai/Examples/Dhoom 3 Full Movie Hd 1080p Free Download [Extra Quality] Utorrent 13.md b/spaces/inreVtussa/clothingai/Examples/Dhoom 3 Full Movie Hd 1080p Free Download [Extra Quality] Utorrent 13.md deleted file mode 100644 index 6898dd788502156d74d98d271ad7e3f95853b6a3..0000000000000000000000000000000000000000 --- a/spaces/inreVtussa/clothingai/Examples/Dhoom 3 Full Movie Hd 1080p Free Download [Extra Quality] Utorrent 13.md +++ /dev/null @@ -1,40 +0,0 @@ -

        dhoom 3 full movie hd 1080p free download utorrent 13


        Download Filehttps://tiurll.com/2uCj0d



        -
        -1.5 GB 2.5 GB 7.5 GB 14.8 GB. Planet Earth (2,101) Hindi Full Movie 1.6 GB 3.7 GB 14.7 GB 28.6 GB - -4.2 GB 2.9 GB 2.8 GB 6.4 GB - -Online Quiz – How To Answer The MCQs With Greed (2016) Indian Full Movie 720p Watch Online Quiz - How To Answer The MCQs With Greed (2016) Hindi Full Movie 1.2 GB 2.6 GB 3.5 GB 4.1 GB - -3.6 GB 3.5 GB 3.1 GB 5.5 GB - -Why Should A Student Attend A University? (2015) Hindi Full Movie 480p [450MB] | 720p [1GB] | 1080p [2.5GB] | Download Dhoom 3 (2013) Hindi Full Movie (480p) | 720p | 1080p | Watch Online | YouTube. Published on: May 27, 2013Nick Thompson - -Nick Thompson (born 3 September 1965) is a British long-distance runner. He competed in the men's 10,000 metres at the 1996 Summer Olympics. - -References - -Category:1965 births - -Category:Living people - -Category:Athletes (track and field) at the 1996 Summer Olympics - -Category:British male long-distance runners - -Category:Olympic athletes of Great Britain - -Category:Place of birth missing (living people) - -Category:Universiade medalists in athletics (track and field) - -Category:Universiade gold medalists for Great BritainQ: - -Prove that the map $\pi_*$ is an isomorphism for the homotopy groups of the spectrum $\mathcalE_\mathbbF_p$ (a truncated Eilenberg-MacLane spectrum). - -We have a family of spectra $\ \mathcalE_\mathbbF_p \_p \geq 1$ defined by the following construction: - -$$ \mathcalE_\mathbbF_p = \textFib(BP_\mathbbF_p \leftarrow \Sigma BP_\mathbbF_p^\wedge_+) \simeq \textFib(S^0 \ 4fefd39f24
        -
        -
        -

        diff --git a/spaces/islammohy/Chat-with-Llama-2-7b-st-voice/README.md b/spaces/islammohy/Chat-with-Llama-2-7b-st-voice/README.md deleted file mode 100644 index 0dd02783fad063b1d5c61e959869013e9f1b810a..0000000000000000000000000000000000000000 --- a/spaces/islammohy/Chat-with-Llama-2-7b-st-voice/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Chat With Llama 2 7b St Voice -emoji: 🦙 -colorFrom: yellow -colorTo: indigo -sdk: streamlit -sdk_version: 1.26.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/jackyliang42/code-as-policies/LICENSE.md b/spaces/jackyliang42/code-as-policies/LICENSE.md deleted file mode 100644 index 2697cde25676d46a917a2d9362dd0e5495b6d2ca..0000000000000000000000000000000000000000 --- a/spaces/jackyliang42/code-as-policies/LICENSE.md +++ /dev/null @@ -1,7 +0,0 @@ -Copyright 2021 Google LLC. SPDX-License-Identifier: Apache-2.0 - -Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at - -https://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. \ No newline at end of file diff --git a/spaces/jannisborn/paccmann/attention.py b/spaces/jannisborn/paccmann/attention.py deleted file mode 100644 index d177784b766b13f9951d58a406e187e7f8658486..0000000000000000000000000000000000000000 --- a/spaces/jannisborn/paccmann/attention.py +++ /dev/null @@ -1,126 +0,0 @@ -"""Get/put submission results concerning attention from/on COS.""" -import os -import json -import dill -import logging -import numpy as np -from typing import Iterable -from configuration import GENES -from cos import ( - RESULTS_PREFIX, - bytes_from_key, - string_from_key, - bytes_to_key, -) -from utils import Drug -from plots import embed_barplot -from smiles import smiles_attention_to_svg - -logger = logging.getLogger("openapi_server:attention") - - -def download_attention(workspace_id: str, task_id: str, sample_name: str) -> dict: - """ - Download attention figures and related data. - - Args: - workspace_id (str): workspace identifier. - task_id (str): task identifier. - sample_name (str): name of the sample. - - Returns: - dict: attention figures and related data. 
- """ - - def _remote_to_bytes(basename: str) -> bytes: - object_name = os.path.join(workspace_id, task_id, sample_name, basename) - key = os.path.join(RESULTS_PREFIX, object_name) - return bytes_from_key(key) - - drug_path = os.path.join(workspace_id, task_id, "drug.json") - key = os.path.join(RESULTS_PREFIX, drug_path) - drug = Drug(**json.loads(string_from_key(key))) - logger.debug(f"download attention results from COS for {drug.smiles}.") - # omic - logger.debug("gene attention.") - gene_attention = dill.loads(_remote_to_bytes("gene_attention.pkl")) - genes = np.array(GENES) - order = gene_attention.argsort()[::-1] # descending - gene_attention_js, gene_attention_html = embed_barplot( - genes[order], gene_attention[order] - ) - logger.debug("gene attention plots created.") - # smiles - logger.debug("SMILES attention.") - smiles_attention = dill.loads(_remote_to_bytes("smiles_attention.pkl")) - drug_attention_svg, drug_color_bar_svg = smiles_attention_to_svg( - drug.smiles, smiles_attention - ) - logger.debug("SMILES attention plots created.") - return { - "drug": drug, - "sample_name": sample_name, - "sample_drug_attention_svg": drug_attention_svg, - "sample_drug_color_bar_svg": drug_color_bar_svg, - "sample_gene_attention_js": gene_attention_js, - "sample_gene_attention_html": gene_attention_html, - } - - -def _upload_ndarray(sample_prefix: str, array: np.ndarray, filename: str) -> None: - bytes_to_key(dill.dumps(array), os.path.join(sample_prefix, f"{filename}.pkl")) - - -def upload_attention( - prefix: str, - sample_names: Iterable[str], - omic_attention: np.ndarray, - smiles_attention: np.ndarray, -) -> None: - """ - Upload attention profiles. - - Args: - prefix (str): base prefix used as a root. - sample_names (Iterable[str]): name of the samples. - omic_attention (np.ndarray): attention values for genes. - smiles_attention (np.ndarray): attention values for SMILES. - - Raises: - ValueError: mismatch in sample names and gene attention. - ValueError: mismatch in sample names and SMILES attention. - ValueError: mismatch in number of genes and gene attention. 
- """ - omic_entities = np.array(GENES) - # sanity checks - if len(sample_names) != omic_attention.shape[0]: - raise ValueError( - f"length of sample_names {len(sample_names)} does not " - f"match omic_attention {omic_attention.shape[0]}" - ) - if len(sample_names) != len(smiles_attention): - raise ValueError( - f"length of sample_names {len(sample_names)} does not " - f"match smiles_attention {len(smiles_attention)}" - ) - if len(omic_entities) != omic_attention.shape[1]: - raise ValueError( - f"length of omic_entities {len(omic_entities)} " - f"does not match omic_attention.shape[1] {omic_attention.shape[1]}" - ) - # special case first - sample_name = "average" - # omic - res = {} - omic_alphas = omic_attention.mean(axis=0) - res["gene_attention"] = omic_alphas - - # smiles - smiles_alphas = smiles_attention.mean(axis=0) - res["smiles_attention"] = smiles_alphas - - # logging.debug('uploaded "average" attention figures.') - # for index, sample_name in enumerate(sample_names): - # res[f"gene_attention_{index}"] = omic_attention[index] - # res[f"smiles_attention_{index}"] = smiles_attention[index] - return res diff --git a/spaces/jatin-tech/SkinZen/get_patches.py b/spaces/jatin-tech/SkinZen/get_patches.py deleted file mode 100644 index 68b89eceb943bab18d4c0a65318c6ee70a537e3d..0000000000000000000000000000000000000000 --- a/spaces/jatin-tech/SkinZen/get_patches.py +++ /dev/null @@ -1,329 +0,0 @@ -import numpy as np -import cv2 -from os.path import join -import dlib -import os -from PIL import Image -import urllib.request -import imageio - -width_ratio = 1.5 -top_ratio = 1.5 -gap_ratio = 0.1 -down_ratio = 4.5 -chin_width_ratio = 2.8 -forehead_ratio = 0.3 -verb = False - -BASE_DIR = os.path.dirname(os.path.abspath(__file__)) -PREDICTOR_PATH = os.path.join(BASE_DIR, "shape_predictor_68_face_landmarks.dat") -eye_cascade = cv2.CascadeClassifier(os.path.join(BASE_DIR, "haarcascade_eye.xml")) - -assert not eye_cascade.empty() - -SCALE_FACTOR = 1 -FEATHER_AMOUNT = 11 - -FACE_POINTS = list(range(17, 68)) -MOUTH_POINTS = list(range(48, 61)) -RIGHT_BROW_POINTS = list(range(17, 22)) -LEFT_BROW_POINTS = list(range(22, 27)) -RIGHT_EYE_POINTS = list(range(36, 42)) -LEFT_EYE_POINTS = list(range(42, 48)) -NOSE_POINTS = list(range(27, 35)) -JAW_POINTS = list(range(0, 17)) - -OVERLAY_POINTS = [ - LEFT_EYE_POINTS + RIGHT_EYE_POINTS + LEFT_BROW_POINTS + RIGHT_BROW_POINTS, - NOSE_POINTS + MOUTH_POINTS, -] - -detector = dlib.get_frontal_face_detector() -predictor = dlib.shape_predictor(PREDICTOR_PATH) - -class TooManyFaces(Exception): - pass - -class NoFaces(Exception): - pass - -def get_landmarks(im): - rects = detector(im, 1) - - if len(rects) > 1: - raise TooManyFaces - if len(rects) == 0: - raise NoFaces - - return np.matrix([[p.x, p.y] for p in predictor(im, rects[0]).parts()]) - -def read_imgURL(URL): - with urllib.request.urlopen(URL) as url: - with open('temp.jpg', 'wb') as f: - f.write(url.read()) - - img = Image.open('temp.jpg') - img = np.array(img) - return img - -def draw_convex_hull(im, points, color): - points = cv2.convexHull(points) - cv2.fillConvexPoly(im, points, color=color) - -def get_face_mask(im, landmarks): - im = np.zeros(im.shape[:2], dtype=np.float64) - - for group in OVERLAY_POINTS: - draw_convex_hull(im, - landmarks[group], - color=1) - - im = np.array([im, im, im]).transpose((1, 2, 0)) - - im = (cv2.GaussianBlur(im, (FEATHER_AMOUNT, FEATHER_AMOUNT), 0) > 0) * 1.0 - im = cv2.GaussianBlur(im, (FEATHER_AMOUNT, FEATHER_AMOUNT), 0) - - return im - - -def read_im_and_landmarks(fname): - im 
= np.array(fname) - im = cv2.resize(im, (im.shape[1] * SCALE_FACTOR, - im.shape[0] * SCALE_FACTOR)) - s = get_landmarks(im) - return im, s - -def warp_im(im, M, dshape): - output_im = np.zeros(dshape, dtype=im.dtype) - cv2.warpAffine(im, - M[:2], - (dshape[1], dshape[0]), - dst=output_im, - borderMode=cv2.BORDER_TRANSPARENT, - flags=cv2.WARP_INVERSE_MAP) - return output_im - -def infer_chin_region(eye, width_ratio, down_ratio, left_or_right): - region1 = [0] * 4 - if left_or_right == 'right': #assuming it is the absolute right chin - region1[0] = int(max(0, int(eye[0] - 0.5 * eye[2]))) #chin region should go lefwards - region1[2] = int(0.5 * eye[2]) - else: # assuming it is the absolute left chin - region1[0] = int(eye[0] + eye[2]) # chin region should go rightwards - region1[2] = int(0.5 * eye[2]) - region1[1] = int(eye[1] + eye[3]) - region1[3] = int(1.5 * eye[3]) - - return region1 - -def detect_face_direction(gray, face, eye, down_ratio, chin_width_ratio): - region1 = [0] * 4 # assuming this is the left eye, forhead should go rightward - region2 = [0] * 4 # assuming this is the right eye, forhead should go leftward - print(eye[0]) - region1 = infer_chin_region(eye[0], chin_width_ratio, down_ratio, 'left') #region1 is from eye to right - region2 = infer_chin_region(eye[0], chin_width_ratio, down_ratio, 'right') # region2 is from eye to left - - std1 = np.std(gray[region1[1]:(region1[1]+region1[3]), region1[0]:(region1[0]+region1[2])]) - std2 = np.std(gray[region2[1]:(region2[1]+region2[3]), region2[0]:(region2[0]+region2[2])]) - face_direction = "" - - if std1 > std2: #eye right has higher variance than eye left - face_direction = "right" - else: - face_direction = "left" - return face_direction - -def extract_cheek_region(face_x_min, face_x_max, face_y_max, eye_landmarks, left_or_right): - if left_or_right == "Left": - cheek_region_min_x = eye_landmarks[0,0] - cheek_region_max_x = int(face_x_max - 0.05 * (face_x_max - min(eye_landmarks[:,0]))) - else: - cheek_region_max_x = max(eye_landmarks[:,0])[0,0] - #print (max(eye_landmarks[:,0])[0,0]) - #cheek_region_max_x = max(eye_landmarks[:, 0]) - cheek_region_min_x = int(face_x_min + 0.1 * (cheek_region_max_x - face_x_min)) - cheek_region_min_y = int(max(eye_landmarks[:,1]) + 0.2 * (max(eye_landmarks[:,1]) - min(eye_landmarks[:,1]))) - cheek_region_max_y = int(face_y_max - 0.1 * (face_y_max - max(eye_landmarks[:,1]))) - return [cheek_region_min_x, cheek_region_min_y, cheek_region_max_x, cheek_region_max_y] - -def extract_patches(imagefile, dimension_dict, face_loc_dict, image_dim, croppedFaces_Dir): - - imageName = "temp" - - img, landmarks = read_im_and_landmarks(imagefile) - face_detected = True - - img_height, img_width = img.shape[0:2] - image_dim = [img_height, img_width] - min_dim = min(img_height, img_width) - min_face_size = min(min_dim * 0.2, min_dim * 0.2) - min_eye = min_face_size * 0.2 - min_eye_area = min_eye ** 2 - - gray = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY) - - if face_detected: - mask = get_face_mask(img, landmarks) - face_x_min = int(max(0, np.asarray(min(landmarks[:,0])).flatten()[0])) - face_x_max = int(min(img_width, np.asarray(max(landmarks[:,0])).flatten()[0])) - face_y_min = int(max(0, np.asarray(min(landmarks[:,1])).flatten()[0])) - face_y_max = int(min(img_height, np.asarray(max(landmarks[:,1])).flatten()[0])) - face_loc_dict['face_loc'] = [face_x_min, face_x_max, face_y_min, face_y_max] - face_height = face_y_max - face_y_min - forehead_height = int(face_height * forehead_ratio) - new_face_y_min = max(0, 
face_y_min - forehead_height) - right_brow_landmarks = landmarks[RIGHT_BROW_POINTS,:] - left_brow_landmarks = landmarks[LEFT_BROW_POINTS,:] - right_eye_landmarks = landmarks[RIGHT_EYE_POINTS,:] - left_eye_landmarks = landmarks[LEFT_EYE_POINTS,:] - mouse_landmarks = landmarks[MOUTH_POINTS,:] - ######################## - # Get the forehead patch - ######################## - [right_brow_min_x, left_brow_max_x] = \ - [max(0, np.min(np.array(right_brow_landmarks[:,0]))), min(img_width, np.max(np.array(left_brow_landmarks[:,0])))] - brow_min_y = min(np.min(np.array(right_brow_landmarks[:,1])),np.min(np.array(left_brow_landmarks[:,1]))) - forehead_x_min = right_brow_min_x - forehead_x_max = left_brow_max_x - forehead_y_min = max(0, brow_min_y - forehead_height) - forehead_y_max = min(brow_min_y, forehead_y_min + forehead_height) - forehead_region = img[forehead_y_min:forehead_y_max, forehead_x_min:forehead_x_max, :] - #print ('forehead dim (x_min, x_max, y_min, y_max): %i,%i, %i, %i' % (forehead_x_min, forehead_x_max, forehead_y_min, forehead_y_max)) - key_name = 'landmark_fh' - dimension_dict[key_name] = [forehead_x_min, forehead_x_max, forehead_y_min, forehead_y_max] - forehead_file_name = join(croppedFaces_Dir, key_name +".jpg") - #forehead_region = cv2.cvtColor(forehead_region, cv2.COLOR_BGR2RGB) - imageio.imwrite(forehead_file_name, forehead_region) - - chin_x_min = np.max(np.array(right_eye_landmarks[:,0])) - chin_x_max = np.min(np.array(left_eye_landmarks[:,0])) - chin_y_min = np.max(np.array(mouse_landmarks[:,1])) - chin_y_max = face_y_max - chin_region = img[chin_y_min:chin_y_max, chin_x_min:chin_x_max, :] - #print ('chin dim (x_min, x_max, y_min, y_max): %i,%i, %i, %i' % (chin_x_min, chin_x_max, chin_y_min, chin_y_max)) - key_name = 'landmark_chin' - dimension_dict[key_name] = [chin_x_min, chin_x_max, chin_y_min, chin_y_max] - chin_file_name = join(croppedFaces_Dir, key_name +".jpg") - #chin_region = cv2.cvtColor(chin_region, cv2.COLOR_BGR2RGB) - imageio.imwrite(chin_file_name, chin_region) - - ########################## - # Get the cheeks patch - ########################## - # Decide whether it is a side view or not - left_eye_width = np.max(np.array(left_eye_landmarks[:,0])) - np.min(np.array(left_eye_landmarks[:,0])) - right_eye_width = np.max(np.array(right_eye_landmarks[:,0])) - np.min(np.array(right_eye_landmarks[:,0])) - right_face = True - left_face = True - if float(right_eye_width) / float(left_eye_width) >= 1.15: # right eye is bigger than left eye, showing the right face - left_face = False - elif float(left_eye_width) / float(right_eye_width) >= 1.15: # left eye is bigger than right eye, showing the left face - right_face = False - - if right_face: - right_cheek_region = extract_cheek_region(face_x_min, face_x_max, face_y_max, right_eye_landmarks, "Right") - cheek_region = img[right_cheek_region[1]:right_cheek_region[3], right_cheek_region[0]:right_cheek_region[2], :] - #print ('right cheek dim (x_min, x_max, y_min, y_max): %i,%i, %i, %i' % (right_cheek_region[0], right_cheek_region[2], right_cheek_region[1], right_cheek_region[3])) - key_name = 'landmark_rc' - dimension_dict[key_name] = [right_cheek_region[0], right_cheek_region[2], right_cheek_region[1], right_cheek_region[3]] - cheek_file_name = join(croppedFaces_Dir, key_name +".jpg") - #cheek_region = cv2.cvtColor(cheek_region, cv2.COLOR_BGR2RGB) - imageio.imwrite(cheek_file_name, cheek_region) - if left_face: - left_cheek_region = extract_cheek_region(face_x_min, face_x_max, face_y_max, left_eye_landmarks, "Left") - 
cheek_region = img[left_cheek_region[1]:left_cheek_region[3], left_cheek_region[0]:left_cheek_region[2], :] - #print ('left cheek dim (x_min, x_max, y_min, y_max): %i,%i, %i, %i' % (left_cheek_region[0], left_cheek_region[2], left_cheek_region[1], left_cheek_region[3])) - key_name = 'landmark_lc' - dimension_dict[key_name] = [left_cheek_region[0], left_cheek_region[2], left_cheek_region[1], left_cheek_region[3]] - cheek_file_name = join(croppedFaces_Dir, key_name +".jpg") - #cheek_region = cv2.cvtColor(cheek_region, cv2.COLOR_BGR2RGB) - imageio.imwrite(cheek_file_name, cheek_region) - - - if not face_detected: - print("Face not detected by landmarks model...") - # Use the OneEye model to detect one eye, and infer the face region based on the eye location - eye_detected = False - roi_gray = gray - roi_color = img - roi_color = cv2.cvtColor(roi_color, cv2.COLOR_BGR2RGB) - eyes = eye_cascade.detectMultiScale(roi_gray, 1.1, 5) - max_area = 0 - eye_count = 0 - max_index = 0 - - for (ex,ey,ew,eh) in eyes: # there might be multiple eyes detected. Choose the biggest one - if ew*eh >= max_area and ex >= img_width * 0.1 and ex <= img_width * 0.9: - max_area = ew*eh - max_index = eye_count - eye_count += 1 - if max_area >= min_eye_area: - eye_detected = True - (ex, ey, ew, eh) = eyes[max_index] - if float(ew) / float(img_width) > 0.15 or float(eh) / float(img_height) > 0.15: # detected eye too large - # resize the detected eye - center_x = ex + ew/2 - center_y = ey + eh/2 - resized_w = min(img_width * 0.15, img_height * 0.15) - ex = int(center_x - resized_w/2) - ey = int(center_y - resized_w/2) - ew = int(resized_w) - eh = int(resized_w) - eyes1 = np.array([ex, ey, resized_w, resized_w]).reshape((1,4)) - else: - eyes1 = np.array(eyes[max_index]).reshape((1,4)) - face1 = np.array(()) - face_direction = detect_face_direction(gray, face1, eyes1, down_ratio, chin_width_ratio) - if face_direction == "left": - print("Left eye detected") - face_min_x = eyes1[0, 0] - face_max_x = min(img_width, int(eyes1[0,0] + (chin_width_ratio + 0.5) * eyes1[0, 2])) - forehead_max_x = min(img_width, int(eyes1[0,0] + width_ratio * eyes1[0, 2])) - forehead_min_x = face_min_x - cheek_min_x = int(eyes1[0, 0] + 0.5 * eyes1[0,2]) - cheek_max_x = face_max_x - else: - print("Right eye detected") - face_min_x = max(0, int(eyes1[0, 0] - chin_width_ratio * eyes1[0, 2])) - face_max_x = eyes1[0, 0] + eyes1[0, 2] - forehead_min_x = max(0, int(eyes1[0, 0] - width_ratio * eyes1[0, 2])) - forehead_max_x = min(img_width, int(eyes1[0, 0] + width_ratio * eyes1[0, 2])) - cheek_max_x = int(eyes1[0,0] + 0.5*eyes1[0,2]) - cheek_min_x = face_min_x - forehead_min_y = max(0, int(eyes1[0, 1] - top_ratio * eyes1[0,3])) - forehead_max_y = max(0, int(eyes1[0, 1] - 0.5 * eyes1[0, 3])) - forehead_ok = False - # Get the forehead region - if forehead_max_y - forehead_min_y >= 0.7 * eyes1[0, 3]: - forehead_ok = True - forehead_region = img[forehead_min_y:forehead_max_y, forehead_min_x: forehead_max_x, :] - #print ('forehead dim (x_min, x_max, y_min, y_max): %i,%i, %i, %i' % (forehead_min_x, forehead_max_x, forehead_min_y, forehead_max_y)) - key_name = 'oneeye_fh' - dimension_dict[key_name] = [forehead_min_x, forehead_max_x, forehead_min_y, forehead_max_y] - forehead_file_name = join(croppedFaces_Dir, key_name +".jpg") - imageio.imwrite(forehead_file_name, forehead_region) - # Get the cheek region - cheek_min_y = int(eyes1[0, 1] + eyes1[0, 3]) - cheek_max_y = min(img_height, int(eyes1[0, 1] + down_ratio * eyes1[0, 3])) - cheek_region = img[cheek_min_y: 
cheek_max_y, cheek_min_x: cheek_max_x, :] - #print ('cheek dim (x_min, x_max, y_min, y_max): %i,%i, %i, %i' % (cheek_min_x, cheek_max_x, cheek_min_y, cheek_max_y)) - key_name = 'oneeye_cheek' - dimension_dict[key_name] = [cheek_min_x, cheek_max_x, cheek_min_y, cheek_max_y] - face_loc_dict['face_loc'] = [face_min_x, face_max_x, forehead_min_y, cheek_max_y] - #cheek_region = cv2.cvtColor(cheek_region, cv2.COLOR_BGR2RGB) - if face_direction == "left": - cheek_file_name = join(croppedFaces_Dir, key_name +".jpg") - elif face_direction == "right": - cheek_file_name = join(croppedFaces_Dir, key_name +".jpg") - else: - cheek_file_name = join(croppedFaces_Dir, key_name +".jpg") - imageio.imwrite(cheek_file_name, cheek_region) - - - if (not face_detected) and (not eye_detected): - print("No chin or forehead detected, output the original file %s.jpg"%imageName) - img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) - outfile = join(croppedFaces_Dir, imageName+".jpg") - imageio.imwrite(outfile, img) - - return dimension_dict, face_loc_dict, image_dim \ No newline at end of file diff --git a/spaces/jbilcke-hf/LifeSim/postcss.config.js b/spaces/jbilcke-hf/LifeSim/postcss.config.js deleted file mode 100644 index 33ad091d26d8a9dc95ebdf616e217d985ec215b8..0000000000000000000000000000000000000000 --- a/spaces/jbilcke-hf/LifeSim/postcss.config.js +++ /dev/null @@ -1,6 +0,0 @@ -module.exports = { - plugins: { - tailwindcss: {}, - autoprefixer: {}, - }, -} diff --git a/spaces/jbilcke-hf/speech-recognition-server-1/README.md b/spaces/jbilcke-hf/speech-recognition-server-1/README.md deleted file mode 100644 index 3284b7f577a7cbf892563d16206342bf4100d229..0000000000000000000000000000000000000000 --- a/spaces/jbilcke-hf/speech-recognition-server-1/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Whisper -emoji: 📉 -colorFrom: pink -colorTo: yellow -sdk: gradio -sdk_version: 3.41.0 -app_file: app.py -pinned: false -duplicated_from: openai/whisper ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/jcenaa/Segment-Any-RGBD/open_vocab_seg/utils/__init__.py b/spaces/jcenaa/Segment-Any-RGBD/open_vocab_seg/utils/__init__.py deleted file mode 100644 index d7b02184067b3a370e2815d5dec39b9d1cdad42f..0000000000000000000000000000000000000000 --- a/spaces/jcenaa/Segment-Any-RGBD/open_vocab_seg/utils/__init__.py +++ /dev/null @@ -1,5 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# Copyright (c) Meta Platforms, Inc. 
All Rights Reserved - -from .events import setup_wandb, WandbWriter -from .predictor import VisualizationDemo, VisualizationDemoIndoor \ No newline at end of file diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/PyPDF2/_protocols.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/PyPDF2/_protocols.py deleted file mode 100644 index 89c80f9a5dfa01713c80990e768ce952cd7e52a0..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/PyPDF2/_protocols.py +++ /dev/null @@ -1,62 +0,0 @@ -"""Helpers for working with PDF types.""" - -from pathlib import Path -from typing import IO, Any, Dict, List, Optional, Tuple, Union - -try: - # Python 3.8+: https://peps.python.org/pep-0586 - from typing import Protocol # type: ignore[attr-defined] -except ImportError: - from typing_extensions import Protocol # type: ignore[misc] - -from ._utils import StrByteType - - -class PdfObjectProtocol(Protocol): - indirect_reference: Any - - def clone( - self, - pdf_dest: Any, - force_duplicate: bool = False, - ignore_fields: Union[Tuple[str, ...], List[str], None] = (), - ) -> Any: - ... - - def _reference_clone(self, clone: Any, pdf_dest: Any) -> Any: - ... - - def get_object(self) -> Optional["PdfObjectProtocol"]: - ... - - -class PdfReaderProtocol(Protocol): # pragma: no cover - @property - def pdf_header(self) -> str: - ... - - @property - def strict(self) -> bool: - ... - - @property - def xref(self) -> Dict[int, Dict[int, Any]]: - ... - - @property - def pages(self) -> List[Any]: - ... - - def get_object(self, indirect_reference: Any) -> Optional[PdfObjectProtocol]: - ... - - -class PdfWriterProtocol(Protocol): # pragma: no cover - _objects: List[Any] - _id_translated: Dict[int, Dict[int, int]] - - def get_object(self, indirect_reference: Any) -> Optional[PdfObjectProtocol]: - ... - - def write(self, stream: Union[Path, StrByteType]) -> Tuple[bool, IO]: - ... diff --git a/spaces/justYu2001/furniture-detection/utils/autoanchor.py b/spaces/justYu2001/furniture-detection/utils/autoanchor.py deleted file mode 100644 index f491032e53ab43cd81d966d127bd92f9b414b9fe..0000000000000000000000000000000000000000 --- a/spaces/justYu2001/furniture-detection/utils/autoanchor.py +++ /dev/null @@ -1,160 +0,0 @@ -# Auto-anchor utils - -import numpy as np -import torch -import yaml -from scipy.cluster.vq import kmeans -from tqdm import tqdm - -from utils.general import colorstr - - -def check_anchor_order(m): - # Check anchor order against stride order for YOLO Detect() module m, and correct if necessary - a = m.anchor_grid.prod(-1).view(-1) # anchor area - da = a[-1] - a[0] # delta a - ds = m.stride[-1] - m.stride[0] # delta s - if da.sign() != ds.sign(): # same order - print('Reversing anchor order') - m.anchors[:] = m.anchors.flip(0) - m.anchor_grid[:] = m.anchor_grid.flip(0) - - -def check_anchors(dataset, model, thr=4.0, imgsz=640): - # Check anchor fit to data, recompute if necessary - prefix = colorstr('autoanchor: ') - print(f'\n{prefix}Analyzing anchors... ', end='') - m = model.module.model[-1] if hasattr(model, 'module') else model.model[-1] # Detect() - shapes = imgsz * dataset.shapes / dataset.shapes.max(1, keepdims=True) - scale = np.random.uniform(0.9, 1.1, size=(shapes.shape[0], 1)) # augment scale - wh = torch.tensor(np.concatenate([l[:, 3:5] * s for s, l in zip(shapes * scale, dataset.labels)])).float() # wh - - def metric(k): # compute metric - r = wh[:, None] / k[None] - x = torch.min(r, 1. 
/ r).min(2)[0] # ratio metric - best = x.max(1)[0] # best_x - aat = (x > 1. / thr).float().sum(1).mean() # anchors above threshold - bpr = (best > 1. / thr).float().mean() # best possible recall - return bpr, aat - - anchors = m.anchor_grid.clone().cpu().view(-1, 2) # current anchors - bpr, aat = metric(anchors) - print(f'anchors/target = {aat:.2f}, Best Possible Recall (BPR) = {bpr:.4f}', end='') - if bpr < 0.98: # threshold to recompute - print('. Attempting to improve anchors, please wait...') - na = m.anchor_grid.numel() // 2 # number of anchors - try: - anchors = kmean_anchors(dataset, n=na, img_size=imgsz, thr=thr, gen=1000, verbose=False) - except Exception as e: - print(f'{prefix}ERROR: {e}') - new_bpr = metric(anchors)[0] - if new_bpr > bpr: # replace anchors - anchors = torch.tensor(anchors, device=m.anchors.device).type_as(m.anchors) - m.anchor_grid[:] = anchors.clone().view_as(m.anchor_grid) # for inference - check_anchor_order(m) - m.anchors[:] = anchors.clone().view_as(m.anchors) / m.stride.to(m.anchors.device).view(-1, 1, 1) # loss - print(f'{prefix}New anchors saved to model. Update model *.yaml to use these anchors in the future.') - else: - print(f'{prefix}Original anchors better than new anchors. Proceeding with original anchors.') - print('') # newline - - -def kmean_anchors(path='./data/coco.yaml', n=9, img_size=640, thr=4.0, gen=1000, verbose=True): - """ Creates kmeans-evolved anchors from training dataset - - Arguments: - path: path to dataset *.yaml, or a loaded dataset - n: number of anchors - img_size: image size used for training - thr: anchor-label wh ratio threshold hyperparameter hyp['anchor_t'] used for training, default=4.0 - gen: generations to evolve anchors using genetic algorithm - verbose: print all results - - Return: - k: kmeans evolved anchors - - Usage: - from utils.autoanchor import *; _ = kmean_anchors() - """ - thr = 1. / thr - prefix = colorstr('autoanchor: ') - - def metric(k, wh): # compute metrics - r = wh[:, None] / k[None] - x = torch.min(r, 1. / r).min(2)[0] # ratio metric - # x = wh_iou(wh, torch.tensor(k)) # iou metric - return x, x.max(1)[0] # x, best_x - - def anchor_fitness(k): # mutation fitness - _, best = metric(torch.tensor(k, dtype=torch.float32), wh) - return (best * (best > thr).float()).mean() # fitness - - def print_results(k): - k = k[np.argsort(k.prod(1))] # sort small to large - x, best = metric(k, wh0) - bpr, aat = (best > thr).float().mean(), (x > thr).float().mean() * n # best possible recall, anch > thr - print(f'{prefix}thr={thr:.2f}: {bpr:.4f} best possible recall, {aat:.2f} anchors past thr') - print(f'{prefix}n={n}, img_size={img_size}, metric_all={x.mean():.3f}/{best.mean():.3f}-mean/best, ' - f'past_thr={x[x > thr].mean():.3f}-mean: ', end='') - for i, x in enumerate(k): - print('%i,%i' % (round(x[0]), round(x[1])), end=', ' if i < len(k) - 1 else '\n') # use in *.cfg - return k - - if isinstance(path, str): # *.yaml file - with open(path) as f: - data_dict = yaml.load(f, Loader=yaml.SafeLoader) # model dict - from utils.datasets import LoadImagesAndLabels - dataset = LoadImagesAndLabels(data_dict['train'], augment=True, rect=True) - else: - dataset = path # dataset - - # Get label wh - shapes = img_size * dataset.shapes / dataset.shapes.max(1, keepdims=True) - wh0 = np.concatenate([l[:, 3:5] * s for s, l in zip(shapes, dataset.labels)]) # wh - - # Filter - i = (wh0 < 3.0).any(1).sum() - if i: - print(f'{prefix}WARNING: Extremely small objects found. 
{i} of {len(wh0)} labels are < 3 pixels in size.') - wh = wh0[(wh0 >= 2.0).any(1)] # filter > 2 pixels - # wh = wh * (np.random.rand(wh.shape[0], 1) * 0.9 + 0.1) # multiply by random scale 0-1 - - # Kmeans calculation - print(f'{prefix}Running kmeans for {n} anchors on {len(wh)} points...') - s = wh.std(0) # sigmas for whitening - k, dist = kmeans(wh / s, n, iter=30) # points, mean distance - assert len(k) == n, print(f'{prefix}ERROR: scipy.cluster.vq.kmeans requested {n} points but returned only {len(k)}') - k *= s - wh = torch.tensor(wh, dtype=torch.float32) # filtered - wh0 = torch.tensor(wh0, dtype=torch.float32) # unfiltered - k = print_results(k) - - # Plot - # k, d = [None] * 20, [None] * 20 - # for i in tqdm(range(1, 21)): - # k[i-1], d[i-1] = kmeans(wh / s, i) # points, mean distance - # fig, ax = plt.subplots(1, 2, figsize=(14, 7), tight_layout=True) - # ax = ax.ravel() - # ax[0].plot(np.arange(1, 21), np.array(d) ** 2, marker='.') - # fig, ax = plt.subplots(1, 2, figsize=(14, 7)) # plot wh - # ax[0].hist(wh[wh[:, 0]<100, 0],400) - # ax[1].hist(wh[wh[:, 1]<100, 1],400) - # fig.savefig('wh.png', dpi=200) - - # Evolve - npr = np.random - f, sh, mp, s = anchor_fitness(k), k.shape, 0.9, 0.1 # fitness, generations, mutation prob, sigma - pbar = tqdm(range(gen), desc=f'{prefix}Evolving anchors with Genetic Algorithm:') # progress bar - for _ in pbar: - v = np.ones(sh) - while (v == 1).all(): # mutate until a change occurs (prevent duplicates) - v = ((npr.random(sh) < mp) * npr.random() * npr.randn(*sh) * s + 1).clip(0.3, 3.0) - kg = (k.copy() * v).clip(min=2.0) - fg = anchor_fitness(kg) - if fg > f: - f, k = fg, kg.copy() - pbar.desc = f'{prefix}Evolving anchors with Genetic Algorithm: fitness = {f:.4f}' - if verbose: - print_results(k) - - return print_results(k) diff --git a/spaces/kanden/vits-uma-genshin-honkai/text/__init__.py b/spaces/kanden/vits-uma-genshin-honkai/text/__init__.py deleted file mode 100644 index 663c4b6416affb53c9dc56dddbc8b2b65d4bf518..0000000000000000000000000000000000000000 --- a/spaces/kanden/vits-uma-genshin-honkai/text/__init__.py +++ /dev/null @@ -1,57 +0,0 @@ -""" from https://github.com/keithito/tacotron """ -from text import cleaners -from text.symbols import symbols - - -# Mappings from symbol to numeric ID and vice versa: -_symbol_to_id = {s: i for i, s in enumerate(symbols)} -_id_to_symbol = {i: s for i, s in enumerate(symbols)} - - -def text_to_sequence(text, symbols, cleaner_names): - '''Converts a string of text to a sequence of IDs corresponding to the symbols in the text. - Args: - text: string to convert to a sequence - cleaner_names: names of the cleaner functions to run the text through - Returns: - List of integers corresponding to the symbols in the text - ''' - _symbol_to_id = {s: i for i, s in enumerate(symbols)} - sequence = [] - - clean_text = _clean_text(text, cleaner_names) - for symbol in clean_text: - if symbol not in _symbol_to_id.keys(): - continue - symbol_id = _symbol_to_id[symbol] - sequence += [symbol_id] - return sequence, clean_text - - -def cleaned_text_to_sequence(cleaned_text): - '''Converts a string of text to a sequence of IDs corresponding to the symbols in the text. 
- Args: - text: string to convert to a sequence - Returns: - List of integers corresponding to the symbols in the text - ''' - sequence = [_symbol_to_id[symbol] for symbol in cleaned_text if symbol in _symbol_to_id.keys()] - return sequence - - -def sequence_to_text(sequence): - '''Converts a sequence of IDs back to a string''' - result = '' - for symbol_id in sequence: - s = _id_to_symbol[symbol_id] - result += s - return result - - -def _clean_text(text, cleaner_names): - for name in cleaner_names: - cleaner = getattr(cleaners, name) - if not cleaner: - raise Exception('Unknown cleaner: %s' % name) - text = cleaner(text) - return text diff --git a/spaces/katasou/Music-discord-bot/landing.md b/spaces/katasou/Music-discord-bot/landing.md deleted file mode 100644 index a28bef36a779209a3227cd2cbfc5ee0fcf47cc7e..0000000000000000000000000000000000000000 --- a/spaces/katasou/Music-discord-bot/landing.md +++ /dev/null @@ -1,48 +0,0 @@ -# katasou-Music-botBETA - -katasouが作ったdiscord音楽bot - -## 取説 - -### /hello -挨拶を返します。 - -### /join -discordのボイスチャンネルに接続した状態で使うことで、botをボイチャに召喚する - -### /leave -botをボイチャから切断する - -### /play url(search word) -ボイスチャンネルで音源を再生します。 - -先にボイチャに接続してから使ってね。 -/play を打った後にURL又はwordを入力してください。 -wordはyoutubeで検索されます。 - -playlist機能もあります。 - -### /pause -再生を一時停止します。 - -### /resume -再生を再開します。 - -### /shuffle -playlistをシャッフルします。 - -### /stop -再生を終了します。 -playlistの内容も削除されます。 - -### /clear -playlistの中身を削除します。 - -### /queue -playlistの中身を表示します。 - -## 対応service -youtube,ニコニコ,ボカコレ,Twitch,Tver,googlepodcasts - -## 現在対応中 -spoon,spotify \ No newline at end of file diff --git a/spaces/kaushalya/medclip-roco/medclip/configuration_hybrid_clip.py b/spaces/kaushalya/medclip-roco/medclip/configuration_hybrid_clip.py deleted file mode 100644 index 5272ac44a1a884eaf9b058c9e29729bfaec29a58..0000000000000000000000000000000000000000 --- a/spaces/kaushalya/medclip-roco/medclip/configuration_hybrid_clip.py +++ /dev/null @@ -1,112 +0,0 @@ -import copy - -from transformers.configuration_utils import PretrainedConfig -from transformers.utils import logging - - -logger = logging.get_logger(__name__) - - -class HybridCLIPConfig(PretrainedConfig): - r""" - :class:`HybridCLIPConfig` is the configuration class to store the configuration of a - :class:`~HybridCLIPModel`. It is used to instantiate HybridCLIPModel model according to the specified arguments, - defining the text model and vision model configs. - - Configuration objects inherit from :class:`~transformers.PretrainedConfig` and can be used to control the model - outputs. Read the documentation from :class:`~transformers.PretrainedConfig` for more information. - - Args: - text_config_dict (:obj:`dict`): - Dictionary of configuration options that defines text model config. - vision_config_dict (:obj:`dict`): - Dictionary of configuration options that defines vison model config. - projection_dim (:obj:`int`, `optional`, defaults to 512): - Dimentionality of text and vision projection layers. - kwargs (`optional`): - Dictionary of keyword arguments. 
- - Examples:: - - >>> from transformers import BertConfig, CLIPConfig, HybridCLIPConfig, FlaxHybridCLIP - - >>> # Initializing a BERT and CLIP configuration - >>> config_text = BertConfig() - >>> config_vision = CLIPConfig() - - >>> config = HybridCLIPConfig.from_text_vision_configs(config_text, config_vision, projection_dim=512) - - >>> # Initializing a BERT and CLIPVision model - >>> model = EncoderDecoderModel(config=config) - - >>> # Accessing the model configuration - >>> config_text = model.config.text_config - >>> config_vision = model.config.vision_config - - >>> # Saving the model, including its configuration - >>> model.save_pretrained('my-model') - - >>> # loading model and config from pretrained folder - >>> encoder_decoder_config = HybridCLIPConfig.from_pretrained('my-model') - >>> model = FlaxHybridCLIP.from_pretrained('my-model', config=encoder_decoder_config) - """ - - model_type = "hybrid-clip" - is_composition = True - - def __init__(self, projection_dim=512, **kwargs): - super().__init__(**kwargs) - - if "text_config" not in kwargs: - raise ValueError("`text_config` can not be `None`.") - - if "vision_config" not in kwargs: - raise ValueError("`vision_config` can not be `None`.") - - text_config = kwargs.pop("text_config") - vision_config = kwargs.pop("vision_config") - - text_model_type = text_config.pop("model_type") - vision_model_type = vision_config.pop("model_type") - - from transformers import AutoConfig - - self.text_config = AutoConfig.for_model(text_model_type, **text_config) - - if vision_model_type == "clip": - self.vision_config = AutoConfig.for_model(vision_model_type, **vision_config).vision_config - elif vision_model_type == "clip_vision_model": - from transformers import CLIPVisionConfig - - self.vision_config = CLIPVisionConfig(**vision_config) - else: - self.vision_config = AutoConfig.for_model(vision_model_type, **vision_config) - - self.projection_dim = projection_dim - self.initializer_factor = 1.0 - - @classmethod - def from_text_vision_configs(cls, text_config: PretrainedConfig, vision_config: PretrainedConfig, **kwargs): - r""" - Instantiate a :class:`HybridCLIPConfig` (or a derived class) from text model configuration and - vision model configuration. - - Returns: - :class:`HybridCLIPConfig`: An instance of a configuration object - """ - - return cls(text_config=text_config.to_dict(), vision_config=vision_config.to_dict(), **kwargs) - - def to_dict(self): - """ - Serializes this instance to a Python dictionary. Override the default - :meth:`~transformers.PretrainedConfig.to_dict`. - - Returns: - :obj:`Dict[str, any]`: Dictionary of all the attributes that make up this configuration instance, - """ - output = copy.deepcopy(self.__dict__) - output["text_config"] = self.text_config.to_dict() - output["vision_config"] = self.vision_config.to_dict() - output["model_type"] = self.__class__.model_type - return output diff --git a/spaces/kdrkdrkdr/HutaoTTS/text/cleaners.py b/spaces/kdrkdrkdr/HutaoTTS/text/cleaners.py deleted file mode 100644 index b155dfeca776469dc1ab4286497f43d674c82897..0000000000000000000000000000000000000000 --- a/spaces/kdrkdrkdr/HutaoTTS/text/cleaners.py +++ /dev/null @@ -1,11 +0,0 @@ -import re -from text.korean import latin_to_hangul, number_to_hangul, divide_hangul, korean_to_lazy_ipa, korean_to_ipa - -def korean_cleaners(text): - '''Pipeline for Korean text''' - text = latin_to_hangul(text) - text = number_to_hangul(text) - text = divide_hangul(text) - if re.match('[\u3131-\u3163]', text[-1]): - text += '.' 
- return text diff --git a/spaces/kdrkdrkdr/ZhongliTTS/app.py b/spaces/kdrkdrkdr/ZhongliTTS/app.py deleted file mode 100644 index e71eb08d9935ca1f8ebd582d681260fdcdfe5e40..0000000000000000000000000000000000000000 --- a/spaces/kdrkdrkdr/ZhongliTTS/app.py +++ /dev/null @@ -1,157 +0,0 @@ -import json -import os -import re - -import librosa -import numpy as np -import torch -from torch import no_grad, LongTensor -import commons -import utils -import gradio as gr -from models import SynthesizerTrn -from text import text_to_sequence, _clean_text -from mel_processing import spectrogram_torch - -limitation = os.getenv("SYSTEM") == "spaces" # limit text and audio length in huggingface spaces - - -def get_text(text, hps, is_phoneme): - text_norm = text_to_sequence(text, hps.symbols, [] if is_phoneme else hps.data.text_cleaners) - if hps.data.add_blank: - text_norm = commons.intersperse(text_norm, 0) - text_norm = LongTensor(text_norm) - return text_norm - - -def create_tts_fn(model, hps, speaker_ids): - def tts_fn(text, speaker, speed, is_phoneme): - if limitation: - text_len = len(text) - max_len = 200 - if is_phoneme: - max_len *= 3 - else: - if len(hps.data.text_cleaners) > 0 and hps.data.text_cleaners[0] == "zh_ja_mixture_cleaners": - text_len = len(re.sub("(\[ZH\]|\[JA\])", "", text)) - if text_len > max_len: - return "Error: Text is too long", None - - speaker_id = speaker_ids[speaker] - stn_tst = get_text(text, hps, is_phoneme) - with no_grad(): - x_tst = stn_tst.unsqueeze(0) - x_tst_lengths = LongTensor([stn_tst.size(0)]) - sid = LongTensor([speaker_id]) - audio = model.infer(x_tst, x_tst_lengths, sid=sid, noise_scale=.667, noise_scale_w=0.8, - length_scale=1.0 / speed)[0][0, 0].data.cpu().float().numpy() - del stn_tst, x_tst, x_tst_lengths, sid - return "Success", (hps.data.sampling_rate, audio) - - return tts_fn - - -def create_to_phoneme_fn(hps): - def to_phoneme_fn(text): - return _clean_text(text, hps.data.text_cleaners) if text != "" else "" - - return to_phoneme_fn - - -css = """ - #advanced-btn { - color: white; - border-color: black; - background: black; - font-size: .7rem !important; - line-height: 19px; - margin-top: 24px; - margin-bottom: 12px; - padding: 2px 8px; - border-radius: 14px !important; - } - #advanced-options { - display: none; - margin-bottom: 20px; - } -""" - -if __name__ == '__main__': - models_tts = [] - name = 'ZhongliTTS' - lang = '한국어 (Korean)' - example = '아쉽군. 아쉽게도 까먹었어.' 
- config_path = f"saved_model/config.json" - model_path = f"saved_model/model.pth" - cover_path = f"saved_model/cover.png" - hps = utils.get_hparams_from_file(config_path) - model = SynthesizerTrn( - len(hps.symbols), - hps.data.filter_length // 2 + 1, - hps.train.segment_size // hps.data.hop_length, - n_speakers=hps.data.n_speakers, - **hps.model) - utils.load_checkpoint(model_path, model, None) - model.eval() - speaker_ids = [0] - speakers = [name] - - t = 'vits' - models_tts.append((name, cover_path, speakers, lang, example, - hps.symbols, create_tts_fn(model, hps, speaker_ids), - create_to_phoneme_fn(hps))) - - app = gr.Blocks(css=css) - - - with app: - gr.Markdown("# Genshin Impact Zhongli TTS Using Vits Model\n" - "![visitor badge](https://visitor-badge.glitch.me/badge?page_id=kdrkdrkdr.ZhongliTTS)\n\n") - - for i, (name, cover_path, speakers, lang, example, symbols, tts_fn, - to_phoneme_fn) in enumerate(models_tts): - - with gr.Column(): - gr.Markdown(f"## {name}\n\n" - f"![cover](file/{cover_path})\n\n" - f"lang: {lang}") - tts_input1 = gr.TextArea(label="Text (100 words limitation)", value=example, - elem_id=f"tts-input{i}") - tts_input2 = gr.Dropdown(label="Speaker", choices=speakers, - type="index", value=speakers[0]) - tts_input3 = gr.Slider(label="Speed", value=1, minimum=0.1, maximum=2, step=0.1) - with gr.Accordion(label="Advanced Options", open=False): - phoneme_input = gr.Checkbox(value=False, label="Phoneme input") - to_phoneme_btn = gr.Button("Covert text to phoneme") - phoneme_list = gr.Dataset(label="Phoneme list", components=[tts_input1], - samples=[[x] for x in symbols], - elem_id=f"phoneme-list{i}") - phoneme_list_json = gr.Json(value=symbols, visible=False) - tts_submit = gr.Button("Generate", variant="primary") - tts_output1 = gr.Textbox(label="Output Message") - tts_output2 = gr.Audio(label="Output Audio") - tts_submit.click(tts_fn, [tts_input1, tts_input2, tts_input3, phoneme_input], - [tts_output1, tts_output2]) - to_phoneme_btn.click(to_phoneme_fn, [tts_input1], [tts_input1]) - phoneme_list.click(None, [phoneme_list, phoneme_list_json], [], - _js=f""" - (i,phonemes) => {{ - let root = document.querySelector("body > gradio-app"); - if (root.shadowRoot != null) - root = root.shadowRoot; - let text_input = root.querySelector("#tts-input{i}").querySelector("textarea"); - let startPos = text_input.selectionStart; - let endPos = text_input.selectionEnd; - let oldTxt = text_input.value; - let result = oldTxt.substring(0, startPos) + phonemes[i] + oldTxt.substring(endPos); - text_input.value = result; - let x = window.scrollX, y = window.scrollY; - text_input.focus(); - text_input.selectionStart = startPos + phonemes[i].length; - text_input.selectionEnd = startPos + phonemes[i].length; - text_input.blur(); - window.scrollTo(x, y); - return []; - }}""") - - app.queue(concurrency_count=3).launch(show_api=False) diff --git a/spaces/keras-io/timeseries_forecasting_for_weather/README.md b/spaces/keras-io/timeseries_forecasting_for_weather/README.md deleted file mode 100644 index 3ebf0a705f18c721fea83eabd83c0e541bb08f04..0000000000000000000000000000000000000000 --- a/spaces/keras-io/timeseries_forecasting_for_weather/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Timeseries Forecasting -emoji: 📈 -colorFrom: gray -colorTo: red -sdk: streamlit -sdk_version: 1.10.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference - diff --git 
a/spaces/kidcoconut/spcdkr_omdenasaudi_liverhccxai/__init__.py b/spaces/kidcoconut/spcdkr_omdenasaudi_liverhccxai/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/kottu/stabble_diffusion_sketch/style.css b/spaces/kottu/stabble_diffusion_sketch/style.css deleted file mode 100644 index 3735d5cb99302ed423e2806673f9eaac909083b0..0000000000000000000000000000000000000000 --- a/spaces/kottu/stabble_diffusion_sketch/style.css +++ /dev/null @@ -1,10 +0,0 @@ -h1 { - text-align: center; -} - -#duplicate-button { - margin: auto; - color: #fff; - background: #1565c0; - border-radius: 100vh; -} \ No newline at end of file diff --git a/spaces/kouenYoung/anime-tts/text/mandarin.py b/spaces/kouenYoung/anime-tts/text/mandarin.py deleted file mode 100644 index 8bc31aea94e1abe111f9bb78c878c1c71e55d4ba..0000000000000000000000000000000000000000 --- a/spaces/kouenYoung/anime-tts/text/mandarin.py +++ /dev/null @@ -1,170 +0,0 @@ -import os -import re -import sys - -import jieba -import cn2an -import logging -from pypinyin import lazy_pinyin, BOPOMOFO - -# logging.getLogger('jieba').setLevel(logging.WARNING) -# jieba.set_dictionary(os.path.dirname(sys.argv[0]) + '/jieba/dict.txt') - -# List of (Latin alphabet, bopomofo) pairs: -_latin_to_bopomofo = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [ - ('a', 'ㄟˉ'), - ('b', 'ㄅㄧˋ'), - ('c', 'ㄙㄧˉ'), - ('d', 'ㄉㄧˋ'), - ('e', 'ㄧˋ'), - ('f', 'ㄝˊㄈㄨˋ'), - ('g', 'ㄐㄧˋ'), - ('h', 'ㄝˇㄑㄩˋ'), - ('i', 'ㄞˋ'), - ('j', 'ㄐㄟˋ'), - ('k', 'ㄎㄟˋ'), - ('l', 'ㄝˊㄛˋ'), - ('m', 'ㄝˊㄇㄨˋ'), - ('n', 'ㄣˉ'), - ('o', 'ㄡˉ'), - ('p', 'ㄆㄧˉ'), - ('q', 'ㄎㄧㄡˉ'), - ('r', 'ㄚˋ'), - ('s', 'ㄝˊㄙˋ'), - ('t', 'ㄊㄧˋ'), - ('u', 'ㄧㄡˉ'), - ('v', 'ㄨㄧˉ'), - ('w', 'ㄉㄚˋㄅㄨˋㄌㄧㄡˋ'), - ('x', 'ㄝˉㄎㄨˋㄙˋ'), - ('y', 'ㄨㄞˋ'), - ('z', 'ㄗㄟˋ') -]] - -# List of (bopomofo, romaji) pairs: -_bopomofo_to_romaji = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('ㄅㄛ', 'p⁼wo'), - ('ㄆㄛ', 'pʰwo'), - ('ㄇㄛ', 'mwo'), - ('ㄈㄛ', 'fwo'), - ('ㄅ', 'p⁼'), - ('ㄆ', 'pʰ'), - ('ㄇ', 'm'), - ('ㄈ', 'f'), - ('ㄉ', 't⁼'), - ('ㄊ', 'tʰ'), - ('ㄋ', 'n'), - ('ㄌ', 'l'), - ('ㄍ', 'k⁼'), - ('ㄎ', 'kʰ'), - ('ㄏ', 'h'), - ('ㄐ', 'ʧ⁼'), - ('ㄑ', 'ʧʰ'), - ('ㄒ', 'ʃ'), - ('ㄓ', 'ʦ`⁼'), - ('ㄔ', 'ʦ`ʰ'), - ('ㄕ', 's`'), - ('ㄖ', 'ɹ`'), - ('ㄗ', 'ʦ⁼'), - ('ㄘ', 'ʦʰ'), - ('ㄙ', 's'), - ('ㄚ', 'a'), - ('ㄛ', 'o'), - ('ㄜ', 'ə'), - ('ㄝ', 'e'), - ('ㄞ', 'ai'), - ('ㄟ', 'ei'), - ('ㄠ', 'au'), - ('ㄡ', 'ou'), - ('ㄧㄢ', 'yeNN'), - ('ㄢ', 'aNN'), - ('ㄧㄣ', 'iNN'), - ('ㄣ', 'əNN'), - ('ㄤ', 'aNg'), - ('ㄧㄥ', 'iNg'), - ('ㄨㄥ', 'uNg'), - ('ㄩㄥ', 'yuNg'), - ('ㄥ', 'əNg'), - ('ㄦ', 'əɻ'), - ('ㄧ', 'i'), - ('ㄨ', 'u'), - ('ㄩ', 'ɥ'), - ('ˉ', '→'), - ('ˊ', '↑'), - ('ˇ', '↓↑'), - ('ˋ', '↓'), - ('˙', ''), - (',', ','), - ('。', '.'), - ('!', '!'), - ('?', '?'), - ('—', '-') -]] - -# List of (romaji, ipa) pairs: -_romaji_to_ipa = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [ - ('ʃy', 'ʃ'), - ('ʧʰy', 'ʧʰ'), - ('ʧ⁼y', 'ʧ⁼'), - ('NN', 'n'), - ('Ng', 'ŋ'), - ('y', 'j'), - ('h', 'x') -]] - - -def number_to_chinese(text): - numbers = re.findall(r'\d+(?:\.?\d+)?', text) - for number in numbers: - text = text.replace(number, cn2an.an2cn(number), 1) - return text - - -def chinese_to_bopomofo(text): - text = text.replace('、', ',').replace(';', ',').replace(':', ',') - words = jieba.lcut(text, cut_all=False) - text = '' - for word in words: - bopomofos = lazy_pinyin(word, BOPOMOFO) - if not re.search('[\u4e00-\u9fff]', word): - text += word - continue - for i in range(len(bopomofos)): - if re.match('[\u3105-\u3129]', 
bopomofos[i][-1]): - bopomofos[i] += 'ˉ' - if text != '': - text += ' ' - text += ''.join(bopomofos) - return text - - -def latin_to_bopomofo(text): - for regex, replacement in _latin_to_bopomofo: - text = re.sub(regex, replacement, text) - return text - - -def bopomofo_to_romaji(text): - for regex, replacement in _bopomofo_to_romaji: - text = re.sub(regex, replacement, text) - return text - - -def chinese_to_romaji(text): - text = number_to_chinese(text) - text = chinese_to_bopomofo(text) - text = latin_to_bopomofo(text) - text = bopomofo_to_romaji(text) - text = re.sub('i[aoe]', lambda x: 'y' + x.group(0)[1:], text) - text = re.sub('u[aoəe]', lambda x: 'w' + x.group(0)[1:], text) - text = re.sub('([ʦsɹ]`[⁼ʰ]?)([→↓↑ ]+|$)', lambda x: x.group(1) + - 'ɹ`' + x.group(2), text).replace('ɻ', 'ɹ`') - text = re.sub('([ʦs][⁼ʰ]?)([→↓↑ ]+|$)', - lambda x: x.group(1) + 'ɹ' + x.group(2), text) - return text - - -def chinese_to_lazy_ipa(text): - text = chinese_to_romaji(text) - for regex, replacement in _romaji_to_ipa: - text = re.sub(regex, replacement, text) - return text diff --git a/spaces/kukuhtw/AutoGPT/autogpt/permanent_memory/__init__.py b/spaces/kukuhtw/AutoGPT/autogpt/permanent_memory/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/aiohttp/helpers.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/aiohttp/helpers.py deleted file mode 100644 index 874ab1ac076bc311d8853f08bb5fe454b650099f..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/aiohttp/helpers.py +++ /dev/null @@ -1,878 +0,0 @@ -"""Various helper functions""" - -import asyncio -import base64 -import binascii -import datetime -import functools -import inspect -import netrc -import os -import platform -import re -import sys -import time -import warnings -import weakref -from collections import namedtuple -from contextlib import suppress -from email.parser import HeaderParser -from email.utils import parsedate -from math import ceil -from pathlib import Path -from types import TracebackType -from typing import ( - Any, - Callable, - ContextManager, - Dict, - Generator, - Generic, - Iterable, - Iterator, - List, - Mapping, - Optional, - Pattern, - Set, - Tuple, - Type, - TypeVar, - Union, - cast, -) -from urllib.parse import quote -from urllib.request import getproxies, proxy_bypass - -import async_timeout -import attr -from multidict import MultiDict, MultiDictProxy -from yarl import URL - -from . 
import hdrs -from .log import client_logger, internal_logger -from .typedefs import PathLike, Protocol # noqa - -__all__ = ("BasicAuth", "ChainMapProxy", "ETag") - -IS_MACOS = platform.system() == "Darwin" -IS_WINDOWS = platform.system() == "Windows" - -PY_36 = sys.version_info >= (3, 6) -PY_37 = sys.version_info >= (3, 7) -PY_38 = sys.version_info >= (3, 8) -PY_310 = sys.version_info >= (3, 10) -PY_311 = sys.version_info >= (3, 11) - -if sys.version_info < (3, 7): - import idna_ssl - - idna_ssl.patch_match_hostname() - - def all_tasks( - loop: Optional[asyncio.AbstractEventLoop] = None, - ) -> Set["asyncio.Task[Any]"]: - tasks = list(asyncio.Task.all_tasks(loop)) - return {t for t in tasks if not t.done()} - -else: - all_tasks = asyncio.all_tasks - - -_T = TypeVar("_T") -_S = TypeVar("_S") - - -sentinel: Any = object() -NO_EXTENSIONS: bool = bool(os.environ.get("AIOHTTP_NO_EXTENSIONS")) - -# N.B. sys.flags.dev_mode is available on Python 3.7+, use getattr -# for compatibility with older versions -DEBUG: bool = getattr(sys.flags, "dev_mode", False) or ( - not sys.flags.ignore_environment and bool(os.environ.get("PYTHONASYNCIODEBUG")) -) - - -CHAR = {chr(i) for i in range(0, 128)} -CTL = {chr(i) for i in range(0, 32)} | { - chr(127), -} -SEPARATORS = { - "(", - ")", - "<", - ">", - "@", - ",", - ";", - ":", - "\\", - '"', - "/", - "[", - "]", - "?", - "=", - "{", - "}", - " ", - chr(9), -} -TOKEN = CHAR ^ CTL ^ SEPARATORS - - -class noop: - def __await__(self) -> Generator[None, None, None]: - yield - - -class BasicAuth(namedtuple("BasicAuth", ["login", "password", "encoding"])): - """Http basic authentication helper.""" - - def __new__( - cls, login: str, password: str = "", encoding: str = "latin1" - ) -> "BasicAuth": - if login is None: - raise ValueError("None is not allowed as login value") - - if password is None: - raise ValueError("None is not allowed as password value") - - if ":" in login: - raise ValueError('A ":" is not allowed in login (RFC 1945#section-11.1)') - - return super().__new__(cls, login, password, encoding) - - @classmethod - def decode(cls, auth_header: str, encoding: str = "latin1") -> "BasicAuth": - """Create a BasicAuth object from an Authorization HTTP header.""" - try: - auth_type, encoded_credentials = auth_header.split(" ", 1) - except ValueError: - raise ValueError("Could not parse authorization header.") - - if auth_type.lower() != "basic": - raise ValueError("Unknown authorization method %s" % auth_type) - - try: - decoded = base64.b64decode( - encoded_credentials.encode("ascii"), validate=True - ).decode(encoding) - except binascii.Error: - raise ValueError("Invalid base64 encoding.") - - try: - # RFC 2617 HTTP Authentication - # https://www.ietf.org/rfc/rfc2617.txt - # the colon must be present, but the username and password may be - # otherwise blank. 
- username, password = decoded.split(":", 1) - except ValueError: - raise ValueError("Invalid credentials.") - - return cls(username, password, encoding=encoding) - - @classmethod - def from_url(cls, url: URL, *, encoding: str = "latin1") -> Optional["BasicAuth"]: - """Create BasicAuth from url.""" - if not isinstance(url, URL): - raise TypeError("url should be yarl.URL instance") - if url.user is None: - return None - return cls(url.user, url.password or "", encoding=encoding) - - def encode(self) -> str: - """Encode credentials.""" - creds = (f"{self.login}:{self.password}").encode(self.encoding) - return "Basic %s" % base64.b64encode(creds).decode(self.encoding) - - -def strip_auth_from_url(url: URL) -> Tuple[URL, Optional[BasicAuth]]: - auth = BasicAuth.from_url(url) - if auth is None: - return url, None - else: - return url.with_user(None), auth - - -def netrc_from_env() -> Optional[netrc.netrc]: - """Load netrc from file. - - Attempt to load it from the path specified by the env-var - NETRC or in the default location in the user's home directory. - - Returns None if it couldn't be found or fails to parse. - """ - netrc_env = os.environ.get("NETRC") - - if netrc_env is not None: - netrc_path = Path(netrc_env) - else: - try: - home_dir = Path.home() - except RuntimeError as e: # pragma: no cover - # if pathlib can't resolve home, it may raise a RuntimeError - client_logger.debug( - "Could not resolve home directory when " - "trying to look for .netrc file: %s", - e, - ) - return None - - netrc_path = home_dir / ("_netrc" if IS_WINDOWS else ".netrc") - - try: - return netrc.netrc(str(netrc_path)) - except netrc.NetrcParseError as e: - client_logger.warning("Could not parse .netrc file: %s", e) - except OSError as e: - # we couldn't read the file (doesn't exist, permissions, etc.) 
- if netrc_env or netrc_path.is_file(): - # only warn if the environment wanted us to load it, - # or it appears like the default file does actually exist - client_logger.warning("Could not read .netrc file: %s", e) - - return None - - -@attr.s(auto_attribs=True, frozen=True, slots=True) -class ProxyInfo: - proxy: URL - proxy_auth: Optional[BasicAuth] - - -def proxies_from_env() -> Dict[str, ProxyInfo]: - proxy_urls = { - k: URL(v) - for k, v in getproxies().items() - if k in ("http", "https", "ws", "wss") - } - netrc_obj = netrc_from_env() - stripped = {k: strip_auth_from_url(v) for k, v in proxy_urls.items()} - ret = {} - for proto, val in stripped.items(): - proxy, auth = val - if proxy.scheme in ("https", "wss"): - client_logger.warning( - "%s proxies %s are not supported, ignoring", proxy.scheme.upper(), proxy - ) - continue - if netrc_obj and auth is None: - auth_from_netrc = None - if proxy.host is not None: - auth_from_netrc = netrc_obj.authenticators(proxy.host) - if auth_from_netrc is not None: - # auth_from_netrc is a (`user`, `account`, `password`) tuple, - # `user` and `account` both can be username, - # if `user` is None, use `account` - *logins, password = auth_from_netrc - login = logins[0] if logins[0] else logins[-1] - auth = BasicAuth(cast(str, login), cast(str, password)) - ret[proto] = ProxyInfo(proxy, auth) - return ret - - -def current_task( - loop: Optional[asyncio.AbstractEventLoop] = None, -) -> "Optional[asyncio.Task[Any]]": - if sys.version_info >= (3, 7): - return asyncio.current_task(loop=loop) - else: - return asyncio.Task.current_task(loop=loop) - - -def get_running_loop( - loop: Optional[asyncio.AbstractEventLoop] = None, -) -> asyncio.AbstractEventLoop: - if loop is None: - loop = asyncio.get_event_loop() - if not loop.is_running(): - warnings.warn( - "The object should be created within an async function", - DeprecationWarning, - stacklevel=3, - ) - if loop.get_debug(): - internal_logger.warning( - "The object should be created within an async function", stack_info=True - ) - return loop - - -def isasyncgenfunction(obj: Any) -> bool: - func = getattr(inspect, "isasyncgenfunction", None) - if func is not None: - return func(obj) # type: ignore[no-any-return] - else: - return False - - -def get_env_proxy_for_url(url: URL) -> Tuple[URL, Optional[BasicAuth]]: - """Get a permitted proxy for the given URL from the env.""" - if url.host is not None and proxy_bypass(url.host): - raise LookupError(f"Proxying is disallowed for `{url.host!r}`") - - proxies_in_env = proxies_from_env() - try: - proxy_info = proxies_in_env[url.scheme] - except KeyError: - raise LookupError(f"No proxies found for `{url!s}` in the env") - else: - return proxy_info.proxy, proxy_info.proxy_auth - - -@attr.s(auto_attribs=True, frozen=True, slots=True) -class MimeType: - type: str - subtype: str - suffix: str - parameters: "MultiDictProxy[str]" - - -@functools.lru_cache(maxsize=56) -def parse_mimetype(mimetype: str) -> MimeType: - """Parses a MIME type into its components. - - mimetype is a MIME type string. - - Returns a MimeType object. 
- - Example: - - >>> parse_mimetype('text/html; charset=utf-8') - MimeType(type='text', subtype='html', suffix='', - parameters={'charset': 'utf-8'}) - - """ - if not mimetype: - return MimeType( - type="", subtype="", suffix="", parameters=MultiDictProxy(MultiDict()) - ) - - parts = mimetype.split(";") - params: MultiDict[str] = MultiDict() - for item in parts[1:]: - if not item: - continue - key, value = cast( - Tuple[str, str], item.split("=", 1) if "=" in item else (item, "") - ) - params.add(key.lower().strip(), value.strip(' "')) - - fulltype = parts[0].strip().lower() - if fulltype == "*": - fulltype = "*/*" - - mtype, stype = ( - cast(Tuple[str, str], fulltype.split("/", 1)) - if "/" in fulltype - else (fulltype, "") - ) - stype, suffix = ( - cast(Tuple[str, str], stype.split("+", 1)) if "+" in stype else (stype, "") - ) - - return MimeType( - type=mtype, subtype=stype, suffix=suffix, parameters=MultiDictProxy(params) - ) - - -def guess_filename(obj: Any, default: Optional[str] = None) -> Optional[str]: - name = getattr(obj, "name", None) - if name and isinstance(name, str) and name[0] != "<" and name[-1] != ">": - return Path(name).name - return default - - -not_qtext_re = re.compile(r"[^\041\043-\133\135-\176]") -QCONTENT = {chr(i) for i in range(0x20, 0x7F)} | {"\t"} - - -def quoted_string(content: str) -> str: - """Return 7-bit content as quoted-string. - - Format content into a quoted-string as defined in RFC5322 for - Internet Message Format. Notice that this is not the 8-bit HTTP - format, but the 7-bit email format. Content must be in usascii or - a ValueError is raised. - """ - if not (QCONTENT > set(content)): - raise ValueError(f"bad content for quoted-string {content!r}") - return not_qtext_re.sub(lambda x: "\\" + x.group(0), content) - - -def content_disposition_header( - disptype: str, quote_fields: bool = True, _charset: str = "utf-8", **params: str -) -> str: - """Sets ``Content-Disposition`` header for MIME. - - This is the MIME payload Content-Disposition header from RFC 2183 - and RFC 7579 section 4.2, not the HTTP Content-Disposition from - RFC 6266. - - disptype is a disposition type: inline, attachment, form-data. - Should be valid extension token (see RFC 2183) - - quote_fields performs value quoting to 7-bit MIME headers - according to RFC 7578. Set to quote_fields to False if recipient - can take 8-bit file names and field values. - - _charset specifies the charset to use when quote_fields is True. - - params is a dict with disposition params. 
- """ - if not disptype or not (TOKEN > set(disptype)): - raise ValueError("bad content disposition type {!r}" "".format(disptype)) - - value = disptype - if params: - lparams = [] - for key, val in params.items(): - if not key or not (TOKEN > set(key)): - raise ValueError( - "bad content disposition parameter" " {!r}={!r}".format(key, val) - ) - if quote_fields: - if key.lower() == "filename": - qval = quote(val, "", encoding=_charset) - lparams.append((key, '"%s"' % qval)) - else: - try: - qval = quoted_string(val) - except ValueError: - qval = "".join( - (_charset, "''", quote(val, "", encoding=_charset)) - ) - lparams.append((key + "*", qval)) - else: - lparams.append((key, '"%s"' % qval)) - else: - qval = val.replace("\\", "\\\\").replace('"', '\\"') - lparams.append((key, '"%s"' % qval)) - sparams = "; ".join("=".join(pair) for pair in lparams) - value = "; ".join((value, sparams)) - return value - - -class _TSelf(Protocol, Generic[_T]): - _cache: Dict[str, _T] - - -class reify(Generic[_T]): - """Use as a class method decorator. - - It operates almost exactly like - the Python `@property` decorator, but it puts the result of the - method it decorates into the instance dict after the first call, - effectively replacing the function it decorates with an instance - variable. It is, in Python parlance, a data descriptor. - """ - - def __init__(self, wrapped: Callable[..., _T]) -> None: - self.wrapped = wrapped - self.__doc__ = wrapped.__doc__ - self.name = wrapped.__name__ - - def __get__(self, inst: _TSelf[_T], owner: Optional[Type[Any]] = None) -> _T: - try: - try: - return inst._cache[self.name] - except KeyError: - val = self.wrapped(inst) - inst._cache[self.name] = val - return val - except AttributeError: - if inst is None: - return self - raise - - def __set__(self, inst: _TSelf[_T], value: _T) -> None: - raise AttributeError("reified property is read-only") - - -reify_py = reify - -try: - from ._helpers import reify as reify_c - - if not NO_EXTENSIONS: - reify = reify_c # type: ignore[misc,assignment] -except ImportError: - pass - -_ipv4_pattern = ( - r"^(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.){3}" - r"(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)$" -) -_ipv6_pattern = ( - r"^(?:(?:(?:[A-F0-9]{1,4}:){6}|(?=(?:[A-F0-9]{0,4}:){0,6}" - r"(?:[0-9]{1,3}\.){3}[0-9]{1,3}$)(([0-9A-F]{1,4}:){0,5}|:)" - r"((:[0-9A-F]{1,4}){1,5}:|:)|::(?:[A-F0-9]{1,4}:){5})" - r"(?:(?:25[0-5]|2[0-4][0-9]|1[0-9][0-9]|[1-9]?[0-9])\.){3}" - r"(?:25[0-5]|2[0-4][0-9]|1[0-9][0-9]|[1-9]?[0-9])|(?:[A-F0-9]{1,4}:){7}" - r"[A-F0-9]{1,4}|(?=(?:[A-F0-9]{0,4}:){0,7}[A-F0-9]{0,4}$)" - r"(([0-9A-F]{1,4}:){1,7}|:)((:[0-9A-F]{1,4}){1,7}|:)|(?:[A-F0-9]{1,4}:){7}" - r":|:(:[A-F0-9]{1,4}){7})$" -) -_ipv4_regex = re.compile(_ipv4_pattern) -_ipv6_regex = re.compile(_ipv6_pattern, flags=re.IGNORECASE) -_ipv4_regexb = re.compile(_ipv4_pattern.encode("ascii")) -_ipv6_regexb = re.compile(_ipv6_pattern.encode("ascii"), flags=re.IGNORECASE) - - -def _is_ip_address( - regex: Pattern[str], regexb: Pattern[bytes], host: Optional[Union[str, bytes]] -) -> bool: - if host is None: - return False - if isinstance(host, str): - return bool(regex.match(host)) - elif isinstance(host, (bytes, bytearray, memoryview)): - return bool(regexb.match(host)) - else: - raise TypeError(f"{host} [{type(host)}] is not a str or bytes") - - -is_ipv4_address = functools.partial(_is_ip_address, _ipv4_regex, _ipv4_regexb) -is_ipv6_address = functools.partial(_is_ip_address, _ipv6_regex, _ipv6_regexb) - - -def is_ip_address(host: Optional[Union[str, bytes, 
bytearray, memoryview]]) -> bool: - return is_ipv4_address(host) or is_ipv6_address(host) - - -def next_whole_second() -> datetime.datetime: - """Return current time rounded up to the next whole second.""" - return datetime.datetime.now(datetime.timezone.utc).replace( - microsecond=0 - ) + datetime.timedelta(seconds=0) - - -_cached_current_datetime: Optional[int] = None -_cached_formatted_datetime = "" - - -def rfc822_formatted_time() -> str: - global _cached_current_datetime - global _cached_formatted_datetime - - now = int(time.time()) - if now != _cached_current_datetime: - # Weekday and month names for HTTP date/time formatting; - # always English! - # Tuples are constants stored in codeobject! - _weekdayname = ("Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun") - _monthname = ( - "", # Dummy so we can use 1-based month numbers - "Jan", - "Feb", - "Mar", - "Apr", - "May", - "Jun", - "Jul", - "Aug", - "Sep", - "Oct", - "Nov", - "Dec", - ) - - year, month, day, hh, mm, ss, wd, *tail = time.gmtime(now) - _cached_formatted_datetime = "%s, %02d %3s %4d %02d:%02d:%02d GMT" % ( - _weekdayname[wd], - day, - _monthname[month], - year, - hh, - mm, - ss, - ) - _cached_current_datetime = now - return _cached_formatted_datetime - - -def _weakref_handle(info: "Tuple[weakref.ref[object], str]") -> None: - ref, name = info - ob = ref() - if ob is not None: - with suppress(Exception): - getattr(ob, name)() - - -def weakref_handle( - ob: object, name: str, timeout: float, loop: asyncio.AbstractEventLoop -) -> Optional[asyncio.TimerHandle]: - if timeout is not None and timeout > 0: - when = loop.time() + timeout - if timeout >= 5: - when = ceil(when) - - return loop.call_at(when, _weakref_handle, (weakref.ref(ob), name)) - return None - - -def call_later( - cb: Callable[[], Any], timeout: float, loop: asyncio.AbstractEventLoop -) -> Optional[asyncio.TimerHandle]: - if timeout is not None and timeout > 0: - when = loop.time() + timeout - if timeout > 5: - when = ceil(when) - return loop.call_at(when, cb) - return None - - -class TimeoutHandle: - """Timeout handle""" - - def __init__( - self, loop: asyncio.AbstractEventLoop, timeout: Optional[float] - ) -> None: - self._timeout = timeout - self._loop = loop - self._callbacks: List[ - Tuple[Callable[..., None], Tuple[Any, ...], Dict[str, Any]] - ] = [] - - def register( - self, callback: Callable[..., None], *args: Any, **kwargs: Any - ) -> None: - self._callbacks.append((callback, args, kwargs)) - - def close(self) -> None: - self._callbacks.clear() - - def start(self) -> Optional[asyncio.Handle]: - timeout = self._timeout - if timeout is not None and timeout > 0: - when = self._loop.time() + timeout - if timeout >= 5: - when = ceil(when) - return self._loop.call_at(when, self.__call__) - else: - return None - - def timer(self) -> "BaseTimerContext": - if self._timeout is not None and self._timeout > 0: - timer = TimerContext(self._loop) - self.register(timer.timeout) - return timer - else: - return TimerNoop() - - def __call__(self) -> None: - for cb, args, kwargs in self._callbacks: - with suppress(Exception): - cb(*args, **kwargs) - - self._callbacks.clear() - - -class BaseTimerContext(ContextManager["BaseTimerContext"]): - pass - - -class TimerNoop(BaseTimerContext): - def __enter__(self) -> BaseTimerContext: - return self - - def __exit__( - self, - exc_type: Optional[Type[BaseException]], - exc_val: Optional[BaseException], - exc_tb: Optional[TracebackType], - ) -> None: - return - - -class TimerContext(BaseTimerContext): - """Low resolution timeout 
context manager""" - - def __init__(self, loop: asyncio.AbstractEventLoop) -> None: - self._loop = loop - self._tasks: List[asyncio.Task[Any]] = [] - self._cancelled = False - - def __enter__(self) -> BaseTimerContext: - task = current_task(loop=self._loop) - - if task is None: - raise RuntimeError( - "Timeout context manager should be used " "inside a task" - ) - - if self._cancelled: - raise asyncio.TimeoutError from None - - self._tasks.append(task) - return self - - def __exit__( - self, - exc_type: Optional[Type[BaseException]], - exc_val: Optional[BaseException], - exc_tb: Optional[TracebackType], - ) -> Optional[bool]: - if self._tasks: - self._tasks.pop() - - if exc_type is asyncio.CancelledError and self._cancelled: - raise asyncio.TimeoutError from None - return None - - def timeout(self) -> None: - if not self._cancelled: - for task in set(self._tasks): - task.cancel() - - self._cancelled = True - - -def ceil_timeout(delay: Optional[float]) -> async_timeout.Timeout: - if delay is None or delay <= 0: - return async_timeout.timeout(None) - - loop = get_running_loop() - now = loop.time() - when = now + delay - if delay > 5: - when = ceil(when) - return async_timeout.timeout_at(when) - - -class HeadersMixin: - - ATTRS = frozenset(["_content_type", "_content_dict", "_stored_content_type"]) - - _content_type: Optional[str] = None - _content_dict: Optional[Dict[str, str]] = None - _stored_content_type = sentinel - - def _parse_content_type(self, raw: str) -> None: - self._stored_content_type = raw - if raw is None: - # default value according to RFC 2616 - self._content_type = "application/octet-stream" - self._content_dict = {} - else: - msg = HeaderParser().parsestr("Content-Type: " + raw) - self._content_type = msg.get_content_type() - params = msg.get_params() - self._content_dict = dict(params[1:]) # First element is content type again - - @property - def content_type(self) -> str: - """The value of content part for Content-Type HTTP header.""" - raw = self._headers.get(hdrs.CONTENT_TYPE) # type: ignore[attr-defined] - if self._stored_content_type != raw: - self._parse_content_type(raw) - return self._content_type # type: ignore[return-value] - - @property - def charset(self) -> Optional[str]: - """The value of charset part for Content-Type HTTP header.""" - raw = self._headers.get(hdrs.CONTENT_TYPE) # type: ignore[attr-defined] - if self._stored_content_type != raw: - self._parse_content_type(raw) - return self._content_dict.get("charset") # type: ignore[union-attr] - - @property - def content_length(self) -> Optional[int]: - """The value of Content-Length HTTP header.""" - content_length = self._headers.get( # type: ignore[attr-defined] - hdrs.CONTENT_LENGTH - ) - - if content_length is not None: - return int(content_length) - else: - return None - - -def set_result(fut: "asyncio.Future[_T]", result: _T) -> None: - if not fut.done(): - fut.set_result(result) - - -def set_exception(fut: "asyncio.Future[_T]", exc: BaseException) -> None: - if not fut.done(): - fut.set_exception(exc) - - -class ChainMapProxy(Mapping[str, Any]): - __slots__ = ("_maps",) - - def __init__(self, maps: Iterable[Mapping[str, Any]]) -> None: - self._maps = tuple(maps) - - def __init_subclass__(cls) -> None: - raise TypeError( - "Inheritance class {} from ChainMapProxy " - "is forbidden".format(cls.__name__) - ) - - def __getitem__(self, key: str) -> Any: - for mapping in self._maps: - try: - return mapping[key] - except KeyError: - pass - raise KeyError(key) - - def get(self, key: str, default: Any = 
None) -> Any: - return self[key] if key in self else default - - def __len__(self) -> int: - # reuses stored hash values if possible - return len(set().union(*self._maps)) # type: ignore[arg-type] - - def __iter__(self) -> Iterator[str]: - d: Dict[str, Any] = {} - for mapping in reversed(self._maps): - # reuses stored hash values if possible - d.update(mapping) - return iter(d) - - def __contains__(self, key: object) -> bool: - return any(key in m for m in self._maps) - - def __bool__(self) -> bool: - return any(self._maps) - - def __repr__(self) -> str: - content = ", ".join(map(repr, self._maps)) - return f"ChainMapProxy({content})" - - -# https://tools.ietf.org/html/rfc7232#section-2.3 -_ETAGC = r"[!#-}\x80-\xff]+" -_ETAGC_RE = re.compile(_ETAGC) -_QUOTED_ETAG = rf'(W/)?"({_ETAGC})"' -QUOTED_ETAG_RE = re.compile(_QUOTED_ETAG) -LIST_QUOTED_ETAG_RE = re.compile(rf"({_QUOTED_ETAG})(?:\s*,\s*|$)|(.)") - -ETAG_ANY = "*" - - -@attr.s(auto_attribs=True, frozen=True, slots=True) -class ETag: - value: str - is_weak: bool = False - - -def validate_etag_value(value: str) -> None: - if value != ETAG_ANY and not _ETAGC_RE.fullmatch(value): - raise ValueError( - f"Value {value!r} is not a valid etag. Maybe it contains '\"'?" - ) - - -def parse_http_date(date_str: Optional[str]) -> Optional[datetime.datetime]: - """Process a date string, return a datetime object""" - if date_str is not None: - timetuple = parsedate(date_str) - if timetuple is not None: - with suppress(ValueError): - return datetime.datetime(*timetuple[:6], tzinfo=datetime.timezone.utc) - return None diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/altair/utils/save.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/altair/utils/save.py deleted file mode 100644 index 90d36f14bc5ebf5cb1e07cb469191ed21e4b3f4b..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/altair/utils/save.py +++ /dev/null @@ -1,176 +0,0 @@ -import json -import pathlib -import warnings - -from .mimebundle import spec_to_mimebundle -from ..vegalite.v5.data import data_transformers - - -def write_file_or_filename(fp, content, mode="w", encoding=None): - """Write content to fp, whether fp is a string, a pathlib Path or a - file-like object""" - if isinstance(fp, str) or isinstance(fp, pathlib.PurePath): - with open(file=fp, mode=mode, encoding=encoding) as f: - f.write(content) - else: - fp.write(content) - - -def set_inspect_format_argument(format, fp, inline): - """Inspect the format argument in the save function""" - if format is None: - if isinstance(fp, str): - format = fp.split(".")[-1] - elif isinstance(fp, pathlib.PurePath): - format = fp.suffix.lstrip(".") - else: - raise ValueError( - "must specify file format: " - "['png', 'svg', 'pdf', 'html', 'json', 'vega']" - ) - - if format != "html" and inline: - warnings.warn("inline argument ignored for non HTML formats.", stacklevel=1) - - return format - - -def set_inspect_mode_argument(mode, embed_options, spec, vegalite_version): - """Inspect the mode argument in the save function""" - if mode is None: - if "mode" in embed_options: - mode = embed_options["mode"] - elif "$schema" in spec: - mode = spec["$schema"].split("/")[-2] - else: - mode = "vega-lite" - - if mode != "vega-lite": - raise ValueError("mode must be 'vega-lite', " "not '{}'".format(mode)) - - if mode == "vega-lite" and vegalite_version is None: - raise ValueError("must specify vega-lite version") - - return mode - - -def 
save( - chart, - fp, - vega_version, - vegaembed_version, - format=None, - mode=None, - vegalite_version=None, - embed_options=None, - json_kwds=None, - webdriver=None, - scale_factor=1, - engine=None, - inline=False, - **kwargs, -): - """Save a chart to file in a variety of formats - - Supported formats are [json, html, png, svg, pdf] - - Parameters - ---------- - chart : alt.Chart - the chart instance to save - fp : string filename, pathlib.Path or file-like object - file to which to write the chart. - format : string (optional) - the format to write: one of ['json', 'html', 'png', 'svg', 'pdf']. - If not specified, the format will be determined from the filename. - mode : string (optional) - Must be 'vega-lite'. If not specified, then infer the mode from - the '$schema' property of the spec, or the ``opt`` dictionary. - If it's not specified in either of those places, then use 'vega-lite'. - vega_version : string (optional) - For html output, the version of vega.js to use - vegalite_version : string (optional) - For html output, the version of vegalite.js to use - vegaembed_version : string (optional) - For html output, the version of vegaembed.js to use - embed_options : dict (optional) - The vegaEmbed options dictionary. Default is {} - (See https://github.com/vega/vega-embed for details) - json_kwds : dict (optional) - Additional keyword arguments are passed to the output method - associated with the specified format. - webdriver : string {'chrome' | 'firefox'} (optional) - Webdriver to use for png or svg output - scale_factor : float (optional) - scale_factor to use to change size/resolution of png or svg output - engine: string {'vl-convert', 'altair_saver'} - the conversion engine to use for 'png', 'svg', and 'pdf' formats - inline: bool (optional) - If False (default), the required JavaScript libraries are loaded - from a CDN location in the resulting html file. - If True, the required JavaScript libraries are inlined into the resulting - html file so that it will work without an internet connection. - The altair_viewer package is required if True. - **kwargs : - additional kwargs passed to spec_to_mimebundle. - """ - if json_kwds is None: - json_kwds = {} - - if embed_options is None: - embed_options = {} - - format = set_inspect_format_argument(format, fp, inline) - - # Temporarily turn off any data transformers so that all data is inlined - # when calling chart.to_dict. This is relevant for vl-convert which cannot access - # local json files which could be created by a json data transformer. 
Furthermore, - # we don't exit the with statement until this function completed due to the issue - # described at https://github.com/vega/vl-convert/issues/31 - with data_transformers.enable("default"), data_transformers.disable_max_rows(): - spec = chart.to_dict() - - mode = set_inspect_mode_argument(mode, embed_options, spec, vegalite_version) - - if format == "json": - json_spec = json.dumps(spec, **json_kwds) - write_file_or_filename(fp, json_spec, mode="w") - elif format == "html": - if inline: - kwargs["template"] = "inline" - mimebundle = spec_to_mimebundle( - spec=spec, - format=format, - mode=mode, - vega_version=vega_version, - vegalite_version=vegalite_version, - vegaembed_version=vegaembed_version, - embed_options=embed_options, - json_kwds=json_kwds, - **kwargs, - ) - write_file_or_filename(fp, mimebundle["text/html"], mode="w") - elif format in ["png", "svg", "pdf", "vega"]: - mimebundle = spec_to_mimebundle( - spec=spec, - format=format, - mode=mode, - vega_version=vega_version, - vegalite_version=vegalite_version, - vegaembed_version=vegaembed_version, - webdriver=webdriver, - scale_factor=scale_factor, - engine=engine, - **kwargs, - ) - if format == "png": - write_file_or_filename(fp, mimebundle["image/png"], mode="wb") - elif format == "pdf": - write_file_or_filename(fp, mimebundle["application/pdf"], mode="wb") - else: - encoding = kwargs.get("encoding", "utf-8") - write_file_or_filename( - fp, mimebundle["image/svg+xml"], mode="w", encoding=encoding - ) - else: - raise ValueError("Unsupported format: '{}'".format(format)) diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/varLib/interpolatable.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/varLib/interpolatable.py deleted file mode 100644 index 26a7c48a49d6e986f720e929b92ed42d2c682640..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/varLib/interpolatable.py +++ /dev/null @@ -1,580 +0,0 @@ -""" -Tool to find wrong contour order between different masters, and -other interpolatability (or lack thereof) issues. - -Call as: -$ fonttools varLib.interpolatable font1 font2 ... -""" - -from fontTools.pens.basePen import AbstractPen, BasePen -from fontTools.pens.pointPen import SegmentToPointPen -from fontTools.pens.recordingPen import RecordingPen -from fontTools.pens.statisticsPen import StatisticsPen -from fontTools.pens.momentsPen import OpenContourError -from collections import OrderedDict -import math -import itertools -import sys - - -def _rot_list(l, k): - """Rotate list by k items forward. Ie. item at position 0 will be - at position k in returned list. 
Negative k is allowed.""" - n = len(l) - k %= n - if not k: - return l - return l[n - k :] + l[: n - k] - - -class PerContourPen(BasePen): - def __init__(self, Pen, glyphset=None): - BasePen.__init__(self, glyphset) - self._glyphset = glyphset - self._Pen = Pen - self._pen = None - self.value = [] - - def _moveTo(self, p0): - self._newItem() - self._pen.moveTo(p0) - - def _lineTo(self, p1): - self._pen.lineTo(p1) - - def _qCurveToOne(self, p1, p2): - self._pen.qCurveTo(p1, p2) - - def _curveToOne(self, p1, p2, p3): - self._pen.curveTo(p1, p2, p3) - - def _closePath(self): - self._pen.closePath() - self._pen = None - - def _endPath(self): - self._pen.endPath() - self._pen = None - - def _newItem(self): - self._pen = pen = self._Pen() - self.value.append(pen) - - -class PerContourOrComponentPen(PerContourPen): - def addComponent(self, glyphName, transformation): - self._newItem() - self.value[-1].addComponent(glyphName, transformation) - - -class RecordingPointPen(BasePen): - def __init__(self): - self.value = [] - - def beginPath(self, identifier=None, **kwargs): - pass - - def endPath(self) -> None: - pass - - def addPoint(self, pt, segmentType=None): - self.value.append((pt, False if segmentType is None else True)) - - -def _vdiff(v0, v1): - return tuple(b - a for a, b in zip(v0, v1)) - - -def _vlen(vec): - v = 0 - for x in vec: - v += x * x - return v - - -def _complex_vlen(vec): - v = 0 - for x in vec: - v += abs(x) * abs(x) - return v - - -def _matching_cost(G, matching): - return sum(G[i][j] for i, j in enumerate(matching)) - - -def min_cost_perfect_bipartite_matching(G): - n = len(G) - try: - from scipy.optimize import linear_sum_assignment - - rows, cols = linear_sum_assignment(G) - assert (rows == list(range(n))).all() - return list(cols), _matching_cost(G, cols) - except ImportError: - pass - - try: - from munkres import Munkres - - cols = [None] * n - for row, col in Munkres().compute(G): - cols[row] = col - return cols, _matching_cost(G, cols) - except ImportError: - pass - - if n > 6: - raise Exception("Install Python module 'munkres' or 'scipy >= 0.17.0'") - - # Otherwise just brute-force - permutations = itertools.permutations(range(n)) - best = list(next(permutations)) - best_cost = _matching_cost(G, best) - for p in permutations: - cost = _matching_cost(G, p) - if cost < best_cost: - best, best_cost = list(p), cost - return best, best_cost - - -def test(glyphsets, glyphs=None, names=None, ignore_missing=False): - if names is None: - names = glyphsets - if glyphs is None: - # `glyphs = glyphsets[0].keys()` is faster, certainly, but doesn't allow for sparse TTFs/OTFs given out of order - # ... 
risks the sparse master being the first one, and only processing a subset of the glyphs - glyphs = {g for glyphset in glyphsets for g in glyphset.keys()} - - hist = [] - problems = OrderedDict() - - def add_problem(glyphname, problem): - problems.setdefault(glyphname, []).append(problem) - - for glyph_name in glyphs: - try: - m0idx = 0 - allVectors = [] - allNodeTypes = [] - allContourIsomorphisms = [] - for glyphset, name in zip(glyphsets, names): - glyph = glyphset[glyph_name] - - if glyph is None: - if not ignore_missing: - add_problem(glyph_name, {"type": "missing", "master": name}) - allNodeTypes.append(None) - allVectors.append(None) - allContourIsomorphisms.append(None) - continue - - perContourPen = PerContourOrComponentPen( - RecordingPen, glyphset=glyphset - ) - try: - glyph.draw(perContourPen, outputImpliedClosingLine=True) - except TypeError: - glyph.draw(perContourPen) - contourPens = perContourPen.value - del perContourPen - - contourVectors = [] - contourIsomorphisms = [] - nodeTypes = [] - allNodeTypes.append(nodeTypes) - allVectors.append(contourVectors) - allContourIsomorphisms.append(contourIsomorphisms) - for ix, contour in enumerate(contourPens): - nodeVecs = tuple(instruction[0] for instruction in contour.value) - nodeTypes.append(nodeVecs) - - stats = StatisticsPen(glyphset=glyphset) - try: - contour.replay(stats) - except OpenContourError as e: - add_problem( - glyph_name, - {"master": name, "contour": ix, "type": "open_path"}, - ) - continue - size = math.sqrt(abs(stats.area)) * 0.5 - vector = ( - int(size), - int(stats.meanX), - int(stats.meanY), - int(stats.stddevX * 2), - int(stats.stddevY * 2), - int(stats.correlation * size), - ) - contourVectors.append(vector) - # print(vector) - - # Check starting point - if nodeVecs[0] == "addComponent": - continue - assert nodeVecs[0] == "moveTo" - assert nodeVecs[-1] in ("closePath", "endPath") - points = RecordingPointPen() - converter = SegmentToPointPen(points, False) - contour.replay(converter) - # points.value is a list of pt,bool where bool is true if on-curve and false if off-curve; - # now check all rotations and mirror-rotations of the contour and build list of isomorphic - # possible starting points. 
- bits = 0 - for pt, b in points.value: - bits = (bits << 1) | b - n = len(points.value) - mask = (1 << n) - 1 - isomorphisms = [] - contourIsomorphisms.append(isomorphisms) - for i in range(n): - b = ((bits << i) & mask) | ((bits >> (n - i))) - if b == bits: - isomorphisms.append( - _rot_list([complex(*pt) for pt, bl in points.value], i) - ) - # Add mirrored rotations - mirrored = list(reversed(points.value)) - reversed_bits = 0 - for pt, b in mirrored: - reversed_bits = (reversed_bits << 1) | b - for i in range(n): - b = ((reversed_bits << i) & mask) | ((reversed_bits >> (n - i))) - if b == bits: - isomorphisms.append( - _rot_list([complex(*pt) for pt, bl in mirrored], i) - ) - - # m0idx should be the index of the first non-None item in allNodeTypes, - # else give it the first index of None, which is likely 0 - m0idx = allNodeTypes.index( - next((x for x in allNodeTypes if x is not None), None) - ) - # m0 is the first non-None item in allNodeTypes, or the first item if all are None - m0 = allNodeTypes[m0idx] - for i, m1 in enumerate(allNodeTypes[m0idx + 1 :]): - if m1 is None: - continue - if len(m0) != len(m1): - add_problem( - glyph_name, - { - "type": "path_count", - "master_1": names[m0idx], - "master_2": names[m0idx + i + 1], - "value_1": len(m0), - "value_2": len(m1), - }, - ) - if m0 == m1: - continue - for pathIx, (nodes1, nodes2) in enumerate(zip(m0, m1)): - if nodes1 == nodes2: - continue - if len(nodes1) != len(nodes2): - add_problem( - glyph_name, - { - "type": "node_count", - "path": pathIx, - "master_1": names[m0idx], - "master_2": names[m0idx + i + 1], - "value_1": len(nodes1), - "value_2": len(nodes2), - }, - ) - continue - for nodeIx, (n1, n2) in enumerate(zip(nodes1, nodes2)): - if n1 != n2: - add_problem( - glyph_name, - { - "type": "node_incompatibility", - "path": pathIx, - "node": nodeIx, - "master_1": names[m0idx], - "master_2": names[m0idx + i + 1], - "value_1": n1, - "value_2": n2, - }, - ) - continue - - # m0idx should be the index of the first non-None item in allVectors, - # else give it the first index of None, which is likely 0 - m0idx = allVectors.index( - next((x for x in allVectors if x is not None), None) - ) - # m0 is the first non-None item in allVectors, or the first item if all are None - m0 = allVectors[m0idx] - for i, m1 in enumerate(allVectors[m0idx + 1 :]): - if m1 is None: - continue - if len(m0) != len(m1): - # We already reported this - continue - if not m0: - continue - costs = [[_vlen(_vdiff(v0, v1)) for v1 in m1] for v0 in m0] - matching, matching_cost = min_cost_perfect_bipartite_matching(costs) - identity_matching = list(range(len(m0))) - identity_cost = sum(costs[i][i] for i in range(len(m0))) - if ( - matching != identity_matching - and matching_cost < identity_cost * 0.95 - ): - add_problem( - glyph_name, - { - "type": "contour_order", - "master_1": names[m0idx], - "master_2": names[m0idx + i + 1], - "value_1": list(range(len(m0))), - "value_2": matching, - }, - ) - break - - # m0idx should be the index of the first non-None item in allContourIsomorphisms, - # else give it the first index of None, which is likely 0 - m0idx = allContourIsomorphisms.index( - next((x for x in allContourIsomorphisms if x is not None), None) - ) - # m0 is the first non-None item in allContourIsomorphisms, or the first item if all are None - m0 = allContourIsomorphisms[m0idx] - for i, m1 in enumerate(allContourIsomorphisms[m0idx + 1 :]): - if m1 is None: - continue - if len(m0) != len(m1): - # We already reported this - continue - if not m0: - continue - for 
ix, (contour0, contour1) in enumerate(zip(m0, m1)): - c0 = contour0[0] - costs = [ - v for v in (_complex_vlen(_vdiff(c0, c1)) for c1 in contour1) - ] - min_cost = min(costs) - first_cost = costs[0] - if min_cost < first_cost * 0.95: - add_problem( - glyph_name, - { - "type": "wrong_start_point", - "contour": ix, - "master_1": names[m0idx], - "master_2": names[m0idx + i + 1], - }, - ) - - except ValueError as e: - add_problem( - glyph_name, - {"type": "math_error", "master": name, "error": e}, - ) - return problems - - -def main(args=None): - """Test for interpolatability issues between fonts""" - import argparse - - parser = argparse.ArgumentParser( - "fonttools varLib.interpolatable", - description=main.__doc__, - ) - parser.add_argument( - "--json", - action="store_true", - help="Output report in JSON format", - ) - parser.add_argument( - "--quiet", - action="store_true", - help="Only exit with code 1 or 0, no output", - ) - parser.add_argument( - "--ignore-missing", - action="store_true", - help="Will not report glyphs missing from sparse masters as errors", - ) - parser.add_argument( - "inputs", - metavar="FILE", - type=str, - nargs="+", - help="Input a single DesignSpace/Glyphs file, or multiple TTF/UFO files", - ) - - args = parser.parse_args(args) - glyphs = None - # glyphs = ['uni08DB', 'uniFD76'] - # glyphs = ['uni08DE', 'uni0034'] - # glyphs = ['uni08DE', 'uni0034', 'uni0751', 'uni0753', 'uni0754', 'uni08A4', 'uni08A4.fina', 'uni08A5.fina'] - - from os.path import basename - - fonts = [] - names = [] - - if len(args.inputs) == 1: - if args.inputs[0].endswith(".designspace"): - from fontTools.designspaceLib import DesignSpaceDocument - - designspace = DesignSpaceDocument.fromfile(args.inputs[0]) - args.inputs = [master.path for master in designspace.sources] - - elif args.inputs[0].endswith(".glyphs"): - from glyphsLib import GSFont, to_ufos - - gsfont = GSFont(args.inputs[0]) - fonts.extend(to_ufos(gsfont)) - names = ["%s-%s" % (f.info.familyName, f.info.styleName) for f in fonts] - args.inputs = [] - - elif args.inputs[0].endswith(".ttf"): - from fontTools.ttLib import TTFont - - font = TTFont(args.inputs[0]) - if "gvar" in font: - # Is variable font - gvar = font["gvar"] - # Gather all "master" locations - locs = set() - for variations in gvar.variations.values(): - for var in variations: - loc = [] - for tag, val in sorted(var.axes.items()): - loc.append((tag, val[1])) - locs.add(tuple(loc)) - # Rebuild locs as dictionaries - new_locs = [{}] - names.append("()") - for loc in sorted(locs, key=lambda v: (len(v), v)): - names.append(str(loc)) - l = {} - for tag, val in loc: - l[tag] = val - new_locs.append(l) - locs = new_locs - del new_locs - # locs is all master locations now - - for loc in locs: - fonts.append(font.getGlyphSet(location=loc, normalized=True)) - - args.inputs = [] - - for filename in args.inputs: - if filename.endswith(".ufo"): - from fontTools.ufoLib import UFOReader - - fonts.append(UFOReader(filename)) - else: - from fontTools.ttLib import TTFont - - fonts.append(TTFont(filename)) - - names.append(basename(filename).rsplit(".", 1)[0]) - - glyphsets = [] - for font in fonts: - if hasattr(font, "getGlyphSet"): - glyphset = font.getGlyphSet() - else: - glyphset = font - glyphsets.append({k: glyphset[k] for k in glyphset.keys()}) - - if not glyphs: - glyphs = set([gn for glyphset in glyphsets for gn in glyphset.keys()]) - - for glyphset in glyphsets: - glyphSetGlyphNames = set(glyphset.keys()) - diff = glyphs - glyphSetGlyphNames - if diff: - for gn in diff: - 
glyphset[gn] = None - - problems = test( - glyphsets, glyphs=glyphs, names=names, ignore_missing=args.ignore_missing - ) - - if not args.quiet: - if args.json: - import json - - print(json.dumps(problems)) - else: - for glyph, glyph_problems in problems.items(): - print(f"Glyph {glyph} was not compatible: ") - for p in glyph_problems: - if p["type"] == "missing": - print(" Glyph was missing in master %s" % p["master"]) - if p["type"] == "open_path": - print(" Glyph has an open path in master %s" % p["master"]) - if p["type"] == "path_count": - print( - " Path count differs: %i in %s, %i in %s" - % (p["value_1"], p["master_1"], p["value_2"], p["master_2"]) - ) - if p["type"] == "node_count": - print( - " Node count differs in path %i: %i in %s, %i in %s" - % ( - p["path"], - p["value_1"], - p["master_1"], - p["value_2"], - p["master_2"], - ) - ) - if p["type"] == "node_incompatibility": - print( - " Node %o incompatible in path %i: %s in %s, %s in %s" - % ( - p["node"], - p["path"], - p["value_1"], - p["master_1"], - p["value_2"], - p["master_2"], - ) - ) - if p["type"] == "contour_order": - print( - " Contour order differs: %s in %s, %s in %s" - % ( - p["value_1"], - p["master_1"], - p["value_2"], - p["master_2"], - ) - ) - if p["type"] == "wrong_start_point": - print( - " Contour %d start point differs: %s, %s" - % ( - p["contour"], - p["master_1"], - p["master_2"], - ) - ) - if p["type"] == "math_error": - print( - " Miscellaneous error in %s: %s" - % ( - p["master"], - p["error"], - ) - ) - if problems: - return problems - - -if __name__ == "__main__": - import sys - - problems = main() - sys.exit(int(bool(problems))) diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/templates/frontend/assets/Image-003ee87c.css b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/templates/frontend/assets/Image-003ee87c.css deleted file mode 100644 index 60f45635043d082881d8d8a529c1142ee028a68b..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/templates/frontend/assets/Image-003ee87c.css +++ /dev/null @@ -1 +0,0 @@ -img.svelte-gqt00k{border-radius:var(--radius-lg);max-width:none}img.selected.svelte-gqt00k{border-color:var(--border-color-accent)}.table.svelte-gqt00k{margin:0 auto;border:2px solid var(--border-color-primary);border-radius:var(--radius-lg);width:var(--size-20);height:var(--size-20);object-fit:cover}.gallery.svelte-gqt00k{border:2px solid var(--border-color-primary);max-height:var(--size-20);object-fit:cover} diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/huggingface_hub/templates/modelcard_template.md b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/huggingface_hub/templates/modelcard_template.md deleted file mode 100644 index ec2d18d427c9fc96eb5c8b89103632620ed4a0b6..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/huggingface_hub/templates/modelcard_template.md +++ /dev/null @@ -1,202 +0,0 @@ ---- -# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1 -# Doc / guide: https://huggingface.co/docs/hub/model-cards -{{ card_data }} ---- - -# Model Card for {{ model_id | default("Model ID", true) }} - - - -{{ model_summary | default("", true) }} - -## Model Details - -### Model Description - - - -{{ model_description | default("", true) }} - -- **Developed 
by:** {{ developers | default("[More Information Needed]", true)}} -- **Shared by [optional]:** {{ shared_by | default("[More Information Needed]", true)}} -- **Model type:** {{ model_type | default("[More Information Needed]", true)}} -- **Language(s) (NLP):** {{ language | default("[More Information Needed]", true)}} -- **License:** {{ license | default("[More Information Needed]", true)}} -- **Finetuned from model [optional]:** {{ finetuned_from | default("[More Information Needed]", true)}} - -### Model Sources [optional] - - - -- **Repository:** {{ repo | default("[More Information Needed]", true)}} -- **Paper [optional]:** {{ paper | default("[More Information Needed]", true)}} -- **Demo [optional]:** {{ demo | default("[More Information Needed]", true)}} - -## Uses - - - -### Direct Use - - - -{{ direct_use | default("[More Information Needed]", true)}} - -### Downstream Use [optional] - - - -{{ downstream_use | default("[More Information Needed]", true)}} - -### Out-of-Scope Use - - - -{{ out_of_scope_use | default("[More Information Needed]", true)}} - -## Bias, Risks, and Limitations - - - -{{ bias_risks_limitations | default("[More Information Needed]", true)}} - -### Recommendations - - - -{{ bias_recommendations | default("Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", true)}} - -## How to Get Started with the Model - -Use the code below to get started with the model. - -{{ get_started_code | default("[More Information Needed]", true)}} - -## Training Details - -### Training Data - - - -{{ training_data | default("[More Information Needed]", true)}} - -### Training Procedure - - - -#### Preprocessing [optional] - -{{ preprocessing | default("[More Information Needed]", true)}} - - -#### Training Hyperparameters - -- **Training regime:** {{ training_regime | default("[More Information Needed]", true)}} - -#### Speeds, Sizes, Times [optional] - - - -{{ speeds_sizes_times | default("[More Information Needed]", true)}} - -## Evaluation - - - -### Testing Data, Factors & Metrics - -#### Testing Data - - - -{{ testing_data | default("[More Information Needed]", true)}} - -#### Factors - - - -{{ testing_factors | default("[More Information Needed]", true)}} - -#### Metrics - - - -{{ testing_metrics | default("[More Information Needed]", true)}} - -### Results - -{{ results | default("[More Information Needed]", true)}} - -#### Summary - -{{ results_summary | default("", true) }} - -## Model Examination [optional] - - - -{{ model_examination | default("[More Information Needed]", true)}} - -## Environmental Impact - - - -Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). 
- -- **Hardware Type:** {{ hardware | default("[More Information Needed]", true)}} -- **Hours used:** {{ hours_used | default("[More Information Needed]", true)}} -- **Cloud Provider:** {{ cloud_provider | default("[More Information Needed]", true)}} -- **Compute Region:** {{ cloud_region | default("[More Information Needed]", true)}} -- **Carbon Emitted:** {{ co2_emitted | default("[More Information Needed]", true)}} - -## Technical Specifications [optional] - -### Model Architecture and Objective - -{{ model_specs | default("[More Information Needed]", true)}} - -### Compute Infrastructure - -{{ compute_infrastructure | default("[More Information Needed]", true)}} - -#### Hardware - -{{ hardware | default("[More Information Needed]", true)}} - -#### Software - -{{ software | default("[More Information Needed]", true)}} - -## Citation [optional] - - - -**BibTeX:** - -{{ citation_bibtex | default("[More Information Needed]", true)}} - -**APA:** - -{{ citation_apa | default("[More Information Needed]", true)}} - -## Glossary [optional] - - - -{{ glossary | default("[More Information Needed]", true)}} - -## More Information [optional] - -{{ more_information | default("[More Information Needed]", true)}} - -## Model Card Authors [optional] - -{{ model_card_authors | default("[More Information Needed]", true)}} - -## Model Card Contact - -{{ model_card_contact | default("[More Information Needed]", true)}} - - - diff --git a/spaces/langfab/movie-plot-genre-predictor/app.py b/spaces/langfab/movie-plot-genre-predictor/app.py deleted file mode 100644 index 633606f2367e04450cc90a4c41369aee5c7cfffb..0000000000000000000000000000000000000000 --- a/spaces/langfab/movie-plot-genre-predictor/app.py +++ /dev/null @@ -1,45 +0,0 @@ -#!/usr/bin/env python -# coding: utf-8 - -# In[2]: - - -import gradio as gr -import torch - - -# In[3]: - - -model_ckpt = "langfab/distilbert-base-uncased-finetuned-movie-genre" - -from transformers import (AutoTokenizer, AutoConfig, - AutoModelForSequenceClassification) - -tokenizer = AutoTokenizer.from_pretrained(model_ckpt) -config = AutoConfig.from_pretrained(model_ckpt) -model = AutoModelForSequenceClassification.from_pretrained(model_ckpt,config=config) - - -# In[4]: - - -id2label = model.config.id2label - -def predict(plot): - encoding = tokenizer(plot, padding=True, truncation=True, return_tensors="pt") - encoding = {k: v.to(model.device) for k,v in encoding.items()} - - outputs = model(**encoding) - - logits = outputs.logits - logits.shape - - predictions = torch.nn.functional.softmax(logits.squeeze().cpu(), dim=-1) - predictions - - return id2label[int(predictions.argmax())] - -iface = gr.Interface(title = "Movie Plot Genre Predictor", fn=predict, inputs="text", outputs="text") -iface.launch(share=True) - diff --git a/spaces/leuschnm/CrowdCounting-with-Scale-Adaptive-Selection-SASNet/app.py b/spaces/leuschnm/CrowdCounting-with-Scale-Adaptive-Selection-SASNet/app.py deleted file mode 100644 index fa82789d679279957e72e82dcca47702cae95781..0000000000000000000000000000000000000000 --- a/spaces/leuschnm/CrowdCounting-with-Scale-Adaptive-Selection-SASNet/app.py +++ /dev/null @@ -1,159 +0,0 @@ -# Copyright 2021 Tencent - -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. 
-# You may obtain a copy of the License at - -# http://www.apache.org/licenses/LICENSE-2.0 - -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -# ============================================================================= -import os -import numpy as np -import torch -import warnings -import random -import matplotlib.pyplot as plt -import gradio as gr -import torchvision.transforms as standard_transforms -from torch.utils.data import DataLoader -from torch.utils.data import Dataset -from model import SASNet - -warnings.filterwarnings('ignore') - -# define the GPU id to be used -#os.environ['CUDA_VISIBLE_DEVICES'] = '0' - -class data(Dataset): - def __init__(self, img, transform=None): - self.image = img - self.transform = transform - - def __len__(self): - return 1000 - - def __getitem__(self, x): - # open image here as PIL / numpy - image = self.image - image = image.convert('RGB') - if self.transform is not None: - image = self.transform(image) - - image = torch.Tensor(image) - return image - -def loading_data(img): - # the augumentations - transform = standard_transforms.Compose([ - standard_transforms.ToTensor(), standard_transforms.Normalize(mean=[0.485, 0.456, 0.406], - std=[0.229, 0.224, 0.225]), - ]) - # dcreate the dataset - test_set = data(img=img, transform=transform) - test_loader = DataLoader(test_set, batch_size=1, num_workers=0, shuffle=False, drop_last=False) - - return test_loader - - -def predict(img): - if img is None: - return "No image selected", plt.figure() - """the main process of inference""" - test_loader = loading_data(img) - #model = SASNet() - model = SASNet().cpu() - model_path = "./SHHA.pth" - # load the trained model - model.load_state_dict(torch.load(model_path, map_location=torch.device('cpu'))) - print('successfully load model from', model_path) - - with torch.no_grad(): - model.eval() - - for vi, data in enumerate(test_loader, 0): - img = data - #img = img.cuda() - img = img.cpu() - pred_map = model(img) - pred_map = pred_map.data.cpu().numpy() - for i_img in range(pred_map.shape[0]): - pred_cnt = np.sum(pred_map[i_img]) / 1000 - - den_map = np.squeeze(pred_map[i_img]) - fig = plt.figure(frameon=False) - ax = plt.Axes(fig, [0., 0., 1., 1.]) - ax.set_axis_off() - fig.add_axes(ax) - ax.imshow(den_map, aspect='auto') - - return int(np.round(pred_cnt, 0)), fig - - -with gr.Blocks() as demo: - gr.Markdown( - """ - # Crowd Counting based on SASNet -

        - This space implements crowd counting following the paper of Song et. al (2021). The model is a VGG16 base with MultiBranch-Channels. For more details see the official publication on AAAI. - Training data is the Shanghai-Tech A/B data set with Gaussian augmentation for density map creation. The data set annotates more than 300k people. -

        - - ## Abstract -

        - In this paper, we address the large scale variation problem in crowd counting by taking full advantage of the multi-scale feature representations in a multi-level network. We - implement such an idea by keeping the counting error of a patch as small as possible with a proper feature level selection strategy, since a specific feature level tends to perform - better for a certain range of scales. However, without scale annotations, it is sub-optimal and error-prone to manually assign the predictions for heads of different scales to - specific feature levels. Therefore, we propose a Scale-Adaptive Selection Network (SASNet), which automatically learns the internal correspondence between the scales and the feature - levels. Instead of directly using the predictions from the most appropriate feature level as the final estimation, our SASNet also considers the predictions from other feature - levels via weighted average, which helps to mitigate the gap between discrete feature levels and continuous scale variation. Since the heads in a local patch share roughly a same - scale, we conduct the adaptive selection strategy in a patch-wise style. However, pixels within a patch contribute different counting errors due to the various difficulty degrees of - learning. Thus, we further propose a Pyramid Region Awareness Loss (PRA Loss) to recursively select the most hard sub-regions within a patch until reaching the pixel level. With - awareness of whether the parent patch is over-estimated or under-estimated, the fine-grained optimization with the PRA Loss for these region-aware hard pixels helps to alleviate the - inconsistency problem between training target and evaluation metric. The state-of-the-art results on four datasets demonstrate the superiority of our approach. -

        - - ## Demo - """ - ) - with gr.Row(): - with gr.Column(): - gr.Markdown( - """ - Upload an image or use some of the example to let the model count your crowd. The estimated density map is plotted as well. Have fun! - Visit my [**github**](https://github.com/MalteLeuschner/CrowdCounting_SASNet) for more! - """ - ) - with gr.Column(): - text_output = gr.Label() - with gr.Row(): - with gr.Column(): - image_input = gr.Image(type="pil") - with gr.Column(): - image_output = gr.Plot() - with gr.Row(): - with gr.Column(): - image_button = gr.Button("Count the Crowd!", variant = "primary") - with gr.Column(): - gr.Markdown("") - with gr.Column(): - gr.Markdown("") - - gr.Examples(["IMG_1.jpg", "IMG_2.jpg", "IMG_3.jpg"], image_input) - - gr.Markdown( - """ - ## References - The code will be available at: https://github.com/TencentYoutuResearch/CrowdCounting-SASNet. - - Song, Q., Wang, C., Wang, Y., Tai, Y., Wang, C., Li, J., … Ma, J. (2021). To Choose or to Fuse? Scale Selection for Crowd Counting. The Thirty-Fifth AAAI Conference on Artificial Intelligence (AAAI-21). - """) - - image_button.click(predict, inputs=image_input, outputs=[text_output, image_output]) - -demo.launch() - diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Freeshreelipi60fullwithcrackNewVersion !!BETTER!!.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Freeshreelipi60fullwithcrackNewVersion !!BETTER!!.md deleted file mode 100644 index 502141a70f012e512b6b445cff5f83293eb2510e..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/Freeshreelipi60fullwithcrackNewVersion !!BETTER!!.md +++ /dev/null @@ -1,77 +0,0 @@ - -

        Freeshreelipi60fullwithcrackNewVersion: A Complete Guide

        -

              If you are looking for a way to type in Indian languages with ease and accuracy, you might have heard of Shree-Lipi, a modular package of Indian language fonts. Shree-Lipi contains hundreds of fonts for Devanagari, Gujarati, Tamil, and other scripts, as well as bilingual and Unicode fonts. It also has many features to enhance your typing experience, such as date and time insertion, an online dictionary, a spellchecker, and a text styler.
              

        -

        Freeshreelipi60fullwithcrackNewVersion


        DOWNLOADhttps://bytlly.com/2uGy4w



        -

              However, Shree-Lipi is not free software. You have to pay a hefty price to get the full version with all the fonts and features. That's why many people look for Freeshreelipi60fullwithcrackNewVersion, a cracked version of Shree-Lipi that can be downloaded for free from various websites.
              

        -

        How to Download Freeshreelipi60fullwithcrackNewVersion

        -

        There are many websites that claim to offer Freeshreelipi60fullwithcrackNewVersion for download. However, you should be careful when downloading anything from unknown sources, as they might contain viruses, malware, or spyware that can harm your computer or steal your personal information.

        -

              One website that appears to be reliable is Get Into PC, which offers a direct link to download Shree-Lipi Setup With All Fonts Free. The file size is 154 MB, and it is compatible with Windows XP/Vista/7/8/8.1/10. You can also download the Utkal Oriya Language Font from the same website.
              

        -

        -

        To download Freeshreelipi60fullwithcrackNewVersion from Get Into PC, follow these steps:

        -
          -
        1. Go to https://getintopc.com/softwares/fonts/download-shreelipi-setup-with-all-fonts-free/
        2. -
        3. Click on the green Download button at the bottom of the page.
        4. -
        5. Wait for the download to start automatically or click on the link that says "Click here to start download manually".
        6. -
        7. Save the file ShreeLipi_Setup_Plus_Fonts.rar on your computer.
        8. -
        9. Extract the file using WinRAR or any other software that can open .rar files.
        10. -
        11. Run the file INSTALL.EXE to install Shree-Lipi on your computer.
        12. -
        13. Follow the instructions on the screen to complete the installation.
        14. -
        15. Enjoy using Shree-Lipi with all the fonts and features for free.
        16. -
        -

        How to Use Freeshreelipi60fullwithcrackNewVersion

        -

        Once you have installed Freeshreelipi60fullwithcrackNewVersion on your computer, you can start using it to type in Indian languages in any application that supports fonts. You can also use it to create documents, web pages, presentations, and other projects that require Indian language fonts.

        -

        To use Freeshreelipi60fullwithcrackNewVersion, follow these steps:

        -
          -
        1. Open the application where you want to type in Indian languages.
        2. -
        3. Select Shree-Lipi as your font from the font menu or toolbar.
        4. -
        5. Select the language and script that you want to use from the language menu or toolbar.
        6. -
        7. Type using your keyboard or use the on-screen keyboard that comes with Shree-Lipi.
        8. -
        9. If you need help with spelling, grammar, or meaning of words, use the online dictionary or spellchecker that comes with Shree-Lipi.
        10. -
        11. If you want to add some style or effects to your text, use the text styler ROOPA that comes with Shree-Lipi.
        12. -
        13. Save your work and share it with others.
        14. -
        -

        The Benefits of Freeshreelipi60fullwithcrackNewVersion

        -

        Freeshreelipi60fullwithcrackNewVersion is a great option for anyone who wants to type in Indian languages without spending money on expensive software. It has many benefits, such as:

        -
          -
        • It contains hundreds of fonts for various Indian languages and scripts.
        • -
        • It has two new font layouts: ShreeLipi-Ex and ShreeLipi-7 that are compatible with Windows applications.
        • -
        • It allows you to insert date and time in Indian languages in 12 different formats.
        • -
        • It has an online Hindi dictionary that can help you with meanings of words.
        • -
        • It has a spellchecker that can check your spelling and grammar in Indian languages.
        • -
        • It has a text styler ROOPA that can give effects like condensation, shadow, expansion, rotation, and outline to text.
        • -
        • It is easy to install and use.
        • -
        • It is free of cost.
        • -
        - -

        The Risks of Freeshreelipi60fullwithcrackNewVersion

        - -

        While Freeshreelipi60fullwithcrackNewVersion might seem like a perfect solution for typing in Indian languages, it also comes with some risks that you should be aware of. These include:

        - -
          - -
        • It is illegal to use cracked software without paying for it. You might face legal consequences if you are caught using Freeshreelipi60fullwithcrackNewVersion without a license.
        • - -
        • It might not be updated or supported by the developers. You might miss out on new features or bug fixes that are available in the official version of Shree-Lipi.
        • - -
        • It might not be compatible with some applications or devices. You might encounter errors or glitches when using Freeshreelipi60fullwithcrackNewVersion with certain programs or platforms.
        • - -
        • It might contain viruses or malware that can damage your computer or compromise your security. You might expose your personal data or files to hackers or cybercriminals if you download Freeshreelipi60fullwithcrackNewVersion from untrusted sources.
        • - -
        - -

              

        - -

              

        -

        The Alternatives to Freeshreelipi60fullwithcrackNewVersion

        -

        If you are not comfortable with using Freeshreelipi60fullwithcrackNewVersion, or if you want to support the developers of Shree-Lipi, you might want to consider some alternatives to this cracked software. There are some other options that can help you type in Indian languages without breaking the law or risking your security. These include:

        -
          -
        • Buying the official version of Shree-Lipi from its developers. This is the best way to get access to all the fonts and features of Shree-Lipi, as well as getting updates and support from the developers. You can buy Shree-Lipi from their website: https://www.modular-infotech.com/html/shreelipi.html
        • -
        • Using free or open source fonts for Indian languages. There are some fonts that are available for free or under open source licenses that can be used for typing in Indian languages. Some examples are: Google Noto Fonts, Lohit Fonts, Samyak Fonts, etc. You can find these fonts online and download them for free.
        • -
        • Using online tools or apps for Indian language typing. There are some websites or applications that can help you type in Indian languages without installing any fonts or software on your computer. Some examples are: Google Input Tools, Microsoft Indic Language Input Tool, Lipikaar, etc. You can use these tools or apps online or offline and type in Indian languages with ease.
        • -
        -

        The Conclusion

        -

        In conclusion, Freeshreelipi60fullwithcrackNewVersion is a cracked version of Shree-Lipi that can be downloaded for free from some websites. It offers a rich package of Indian language fonts and features that can help you type in various scripts with ease and accuracy. However, it also has some drawbacks that might outweigh its benefits. It is illegal, unsafe, and unreliable to use cracked software without paying for it. Therefore, we recommend that you buy the official version of Shree-Lipi from its developers if you want to enjoy its full potential and avoid any risks or problems.

        -

        If you found this article helpful, please share it with your friends and colleagues who might be interested in Freeshreelipi60fullwithcrackNewVersion or Shree-Lipi. Also, feel free to leave a comment below if you have any questions or feedback about this topic. Thank you for reading!

        -

              

              
        -
        -
        \ No newline at end of file diff --git a/spaces/lithiumice/SadTalker/src/facerender/sync_batchnorm/unittest.py b/spaces/lithiumice/SadTalker/src/facerender/sync_batchnorm/unittest.py deleted file mode 100644 index 0675c022e4ba85d38d1f813490f6740150909524..0000000000000000000000000000000000000000 --- a/spaces/lithiumice/SadTalker/src/facerender/sync_batchnorm/unittest.py +++ /dev/null @@ -1,29 +0,0 @@ -# -*- coding: utf-8 -*- -# File : unittest.py -# Author : Jiayuan Mao -# Email : maojiayuan@gmail.com -# Date : 27/01/2018 -# -# This file is part of Synchronized-BatchNorm-PyTorch. -# https://github.com/vacancy/Synchronized-BatchNorm-PyTorch -# Distributed under MIT License. - -import unittest - -import numpy as np -from torch.autograd import Variable - - -def as_numpy(v): - if isinstance(v, Variable): - v = v.data - return v.cpu().numpy() - - -class TorchTestCase(unittest.TestCase): - def assertTensorClose(self, a, b, atol=1e-3, rtol=1e-3): - npa, npb = as_numpy(a), as_numpy(b) - self.assertTrue( - np.allclose(npa, npb, atol=atol), - 'Tensor close check failed\n{}\n{}\nadiff={}, rdiff={}'.format(a, b, np.abs(npa - npb).max(), np.abs((npa - npb) / np.fmax(npa, 1e-5)).max()) - ) diff --git a/spaces/lixq/bingo61/src/components/ui/input.tsx b/spaces/lixq/bingo61/src/components/ui/input.tsx deleted file mode 100644 index 684a857f3d769b78818fb13de1abaebfb09ca79c..0000000000000000000000000000000000000000 --- a/spaces/lixq/bingo61/src/components/ui/input.tsx +++ /dev/null @@ -1,25 +0,0 @@ -import * as React from 'react' - -import { cn } from '@/lib/utils' - -export interface InputProps - extends React.InputHTMLAttributes {} - -const Input = React.forwardRef( - ({ className, type, ...props }, ref) => { - return ( - - ) - } -) -Input.displayName = 'Input' - -export { Input } diff --git a/spaces/lmz/candle-yolo/build/m.d.ts b/spaces/lmz/candle-yolo/build/m.d.ts deleted file mode 100644 index 27c83eefcd148bb8671af7812af56a142554a7b7..0000000000000000000000000000000000000000 --- a/spaces/lmz/candle-yolo/build/m.d.ts +++ /dev/null @@ -1,75 +0,0 @@ -/* tslint:disable */ -/* eslint-disable */ -/** -*/ -export class Model { - free(): void; -/** -* @param {Uint8Array} data -* @param {string} model_size -*/ - constructor(data: Uint8Array, model_size: string); -/** -* @param {Uint8Array} image -* @param {number} conf_threshold -* @param {number} iou_threshold -* @returns {string} -*/ - run(image: Uint8Array, conf_threshold: number, iou_threshold: number): string; -} -/** -*/ -export class ModelPose { - free(): void; -/** -* @param {Uint8Array} data -* @param {string} model_size -*/ - constructor(data: Uint8Array, model_size: string); -/** -* @param {Uint8Array} image -* @param {number} conf_threshold -* @param {number} iou_threshold -* @returns {string} -*/ - run(image: Uint8Array, conf_threshold: number, iou_threshold: number): string; -} - -export type InitInput = RequestInfo | URL | Response | BufferSource | WebAssembly.Module; - -export interface InitOutput { - readonly memory: WebAssembly.Memory; - readonly __wbg_model_free: (a: number) => void; - readonly model_new: (a: number, b: number, c: number, d: number, e: number) => void; - readonly model_run: (a: number, b: number, c: number, d: number, e: number, f: number) => void; - readonly __wbg_modelpose_free: (a: number) => void; - readonly modelpose_new: (a: number, b: number, c: number, d: number, e: number) => void; - readonly modelpose_run: (a: number, b: number, c: number, d: number, e: number, f: number) => void; - readonly main: (a: number, 
b: number) => number; - readonly __wbindgen_add_to_stack_pointer: (a: number) => number; - readonly __wbindgen_malloc: (a: number, b: number) => number; - readonly __wbindgen_realloc: (a: number, b: number, c: number, d: number) => number; - readonly __wbindgen_free: (a: number, b: number, c: number) => void; - readonly __wbindgen_start: () => void; -} - -export type SyncInitInput = BufferSource | WebAssembly.Module; -/** -* Instantiates the given `module`, which can either be bytes or -* a precompiled `WebAssembly.Module`. -* -* @param {SyncInitInput} module -* -* @returns {InitOutput} -*/ -export function initSync(module: SyncInitInput): InitOutput; - -/** -* If `module_or_path` is {RequestInfo} or {URL}, makes a request and -* for everything else, calls `WebAssembly.instantiate` directly. -* -* @param {InitInput | Promise} module_or_path -* -* @returns {Promise} -*/ -export default function __wbg_init (module_or_path?: InitInput | Promise): Promise; diff --git a/spaces/lnyan/stablediffusion-infinity/PyPatchMatch/csrc/masked_image.cpp b/spaces/lnyan/stablediffusion-infinity/PyPatchMatch/csrc/masked_image.cpp deleted file mode 100644 index 448a776b3cda9f39f4dd0ad908f1b135c647ca8f..0000000000000000000000000000000000000000 --- a/spaces/lnyan/stablediffusion-infinity/PyPatchMatch/csrc/masked_image.cpp +++ /dev/null @@ -1,138 +0,0 @@ -#include "masked_image.h" -#include -#include - -const cv::Size MaskedImage::kDownsampleKernelSize = cv::Size(6, 6); -const int MaskedImage::kDownsampleKernel[6] = {1, 5, 10, 10, 5, 1}; - -bool MaskedImage::contains_mask(int y, int x, int patch_size) const { - auto mask_size = size(); - for (int dy = -patch_size; dy <= patch_size; ++dy) { - for (int dx = -patch_size; dx <= patch_size; ++dx) { - int yy = y + dy, xx = x + dx; - if (yy >= 0 && yy < mask_size.height && xx >= 0 && xx < mask_size.width) { - if (is_masked(yy, xx) && !is_globally_masked(yy, xx)) return true; - } - } - } - return false; -} - -MaskedImage MaskedImage::downsample() const { - const auto &kernel_size = MaskedImage::kDownsampleKernelSize; - const auto &kernel = MaskedImage::kDownsampleKernel; - - const auto size = this->size(); - const auto new_size = cv::Size(size.width / 2, size.height / 2); - - auto ret = MaskedImage(new_size.width, new_size.height); - if (!m_global_mask.empty()) ret.init_global_mask_mat(); - for (int y = 0; y < size.height - 1; y += 2) { - for (int x = 0; x < size.width - 1; x += 2) { - int r = 0, g = 0, b = 0, ksum = 0; - bool is_gmasked = true; - - for (int dy = -kernel_size.height / 2 + 1; dy <= kernel_size.height / 2; ++dy) { - for (int dx = -kernel_size.width / 2 + 1; dx <= kernel_size.width / 2; ++dx) { - int yy = y + dy, xx = x + dx; - if (yy >= 0 && yy < size.height && xx >= 0 && xx < size.width) { - if (!is_globally_masked(yy, xx)) { - is_gmasked = false; - } - if (!is_masked(yy, xx)) { - auto source_ptr = get_image(yy, xx); - int k = kernel[kernel_size.height / 2 - 1 + dy] * kernel[kernel_size.width / 2 - 1 + dx]; - r += source_ptr[0] * k, g += source_ptr[1] * k, b += source_ptr[2] * k; - ksum += k; - } - } - } - } - - if (ksum > 0) r /= ksum, g /= ksum, b /= ksum; - - if (!m_global_mask.empty()) { - ret.set_global_mask(y / 2, x / 2, is_gmasked); - } - if (ksum > 0) { - auto target_ptr = ret.get_mutable_image(y / 2, x / 2); - target_ptr[0] = r, target_ptr[1] = g, target_ptr[2] = b; - ret.set_mask(y / 2, x / 2, 0); - } else { - ret.set_mask(y / 2, x / 2, 1); - } - } - } - - return ret; -} - -MaskedImage MaskedImage::upsample(int new_w, int new_h) const { - const 
auto size = this->size(); - auto ret = MaskedImage(new_w, new_h); - if (!m_global_mask.empty()) ret.init_global_mask_mat(); - for (int y = 0; y < new_h; ++y) { - for (int x = 0; x < new_w; ++x) { - int yy = y * size.height / new_h; - int xx = x * size.width / new_w; - - if (is_globally_masked(yy, xx)) { - ret.set_global_mask(y, x, 1); - ret.set_mask(y, x, 1); - } else { - if (!m_global_mask.empty()) ret.set_global_mask(y, x, 0); - - if (is_masked(yy, xx)) { - ret.set_mask(y, x, 1); - } else { - auto source_ptr = get_image(yy, xx); - auto target_ptr = ret.get_mutable_image(y, x); - for (int c = 0; c < 3; ++c) - target_ptr[c] = source_ptr[c]; - ret.set_mask(y, x, 0); - } - } - } - } - - return ret; -} - -MaskedImage MaskedImage::upsample(int new_w, int new_h, const cv::Mat &new_global_mask) const { - auto ret = upsample(new_w, new_h); - ret.set_global_mask_mat(new_global_mask); - return ret; -} - -void MaskedImage::compute_image_gradients() { - if (m_image_grad_computed) { - return; - } - - const auto size = m_image.size(); - m_image_grady = cv::Mat(size, CV_8UC3); - m_image_gradx = cv::Mat(size, CV_8UC3); - m_image_grady = cv::Scalar::all(0); - m_image_gradx = cv::Scalar::all(0); - - for (int i = 1; i < size.height - 1; ++i) { - const auto *ptr = m_image.ptr(i, 0); - const auto *ptry1 = m_image.ptr(i + 1, 0); - const auto *ptry2 = m_image.ptr(i - 1, 0); - const auto *ptrx1 = m_image.ptr(i, 0) + 3; - const auto *ptrx2 = m_image.ptr(i, 0) - 3; - auto *mptry = m_image_grady.ptr(i, 0); - auto *mptrx = m_image_gradx.ptr(i, 0); - for (int j = 3; j < size.width * 3 - 3; ++j) { - mptry[j] = (ptry1[j] / 2 - ptry2[j] / 2) + 128; - mptrx[j] = (ptrx1[j] / 2 - ptrx2[j] / 2) + 128; - } - } - - m_image_grad_computed = true; -} - -void MaskedImage::compute_image_gradients() const { - const_cast(this)->compute_image_gradients(); -} - diff --git a/spaces/lunarflu/falcon-180b-demo-duplicate/README.md b/spaces/lunarflu/falcon-180b-demo-duplicate/README.md deleted file mode 100644 index df73d0c3cb8d7f2f9a494e136bc4bc8955daf44e..0000000000000000000000000000000000000000 --- a/spaces/lunarflu/falcon-180b-demo-duplicate/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: Falcon-180B Demo -emoji: 💬 -colorFrom: indigo -colorTo: purple -sdk: gradio -sdk_version: 3.42.0 -app_file: app.py -pinned: false ---- diff --git a/spaces/ma-xu/LIVE/LIVE/README.md b/spaces/ma-xu/LIVE/LIVE/README.md deleted file mode 100644 index 041ee859808934aa5f50b2bcca412893da126f11..0000000000000000000000000000000000000000 --- a/spaces/ma-xu/LIVE/LIVE/README.md +++ /dev/null @@ -1,44 +0,0 @@ -# LIVE-pytorch -Towards Layer-wise Image Vectorization - -### Updated for rebuttal (Jan/28/2022): -#### User study -We create a [user study](https://wj.qq.com/s2/9665341/19ed) as suggested. A more complex user study will be added in the revised version. - -The results are collected here: [user study details](user_study_state.csv) - -#### Code installation - -we added detailed [conda env file](env.yml) and collected detail [system information](system_info.txt) to help the installation. - -A more detailed docker and Google Colab demo will be provided. - - -
        - -
        -LIVE is able to explicitly presents a Layer-wise representation for simple images. - -## Installation -```bash -pip3 install torch torchvision -pip install svgwrite -pip install svgpathtools -pip install cssutils -pip install numba -pip install torch-tools -pip install visdom -pip install scikit-fmm -pip install opencv-python==4.5.4.60 -pip install easydict -pip install scikit-fmm - -``` -Next, please refer DiffVG to install [pydiffvg](https://github.com/BachiLi/diffvg) - - -## Run -```bash -python main.py --config config/all.yaml --experiment experiment_8x1 --signature demo1 --target data/demo1.png -``` -Please modify the config files to change configurations. diff --git a/spaces/ma-xu/LIVE/thrust/thrust/system/cpp/detail/reduce_by_key.h b/spaces/ma-xu/LIVE/thrust/thrust/system/cpp/detail/reduce_by_key.h deleted file mode 100644 index a2c7744249bdd644f2c8bf8e4d8b86bb583ac332..0000000000000000000000000000000000000000 --- a/spaces/ma-xu/LIVE/thrust/thrust/system/cpp/detail/reduce_by_key.h +++ /dev/null @@ -1,23 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -#pragma once - -#include - -// this system inherits reduce_by_key -#include - diff --git a/spaces/ma-xu/LIVE/thrust/thrust/system/cuda/detail/mismatch.h b/spaces/ma-xu/LIVE/thrust/thrust/system/cuda/detail/mismatch.h deleted file mode 100644 index 98c462e8446b7a54da43b90457ee90393188e225..0000000000000000000000000000000000000000 --- a/spaces/ma-xu/LIVE/thrust/thrust/system/cuda/detail/mismatch.h +++ /dev/null @@ -1,117 +0,0 @@ -/****************************************************************************** - * Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved. - * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions are met: - * * Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * * Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in the - * documentation and/or other materials provided with the distribution. - * * Neither the name of the NVIDIA CORPORATION nor the - * names of its contributors may be used to endorse or promote products - * derived from this software without specific prior written permission. - * - * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" - * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE - * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE - * ARE DISCLAIMED. 
IN NO EVENT SHALL NVIDIA CORPORATION BE LIABLE FOR ANY - * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES - * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; - * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND - * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT - * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS - * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. - * - ******************************************************************************/ -#pragma once - - -#if THRUST_DEVICE_COMPILER == THRUST_DEVICE_COMPILER_NVCC -#include -#include -#include -#include - -namespace thrust -{ -namespace cuda_cub { - -template -pair __host__ __device__ -mismatch(execution_policy& policy, - InputIt1 first1, - InputIt1 last1, - InputIt2 first2, - BinaryPred binary_pred); - -template -pair __host__ __device__ -mismatch(execution_policy& policy, - InputIt1 first1, - InputIt1 last1, - InputIt2 first2); -} // namespace cuda_ -} // end namespace thrust - -#include - -namespace thrust -{ -namespace cuda_cub { - -template -pair __host__ __device__ -mismatch(execution_policy& policy, - InputIt1 first1, - InputIt1 last1, - InputIt2 first2, - BinaryPred binary_pred) -{ - typedef transform_pair_of_input_iterators_t - transform_t; - - transform_t transform_first = transform_t(first1, first2, binary_pred); - - transform_t result = cuda_cub::find_if_not(policy, - transform_first, - transform_first + thrust::distance(first1, last1), - identity()); - - return thrust::make_pair(first1 + thrust::distance(transform_first,result), - first2 + thrust::distance(transform_first,result)); -} - -template -pair __host__ __device__ -mismatch(execution_policy& policy, - InputIt1 first1, - InputIt1 last1, - InputIt2 first2) -{ - typedef typename thrust::iterator_value::type InputType1; - return cuda_cub::mismatch(policy, - first1, - last1, - first2, - equal_to()); -} - - - -} // namespace cuda_cub -} // end namespace thrust -#endif diff --git a/spaces/ma-xu/LIVE/thrust/thrust/system/detail/adl/uninitialized_copy.h b/spaces/ma-xu/LIVE/thrust/thrust/system/detail/adl/uninitialized_copy.h deleted file mode 100644 index a13b18aa8d73dae57b450a0a53e2fb97de2165ea..0000000000000000000000000000000000000000 --- a/spaces/ma-xu/LIVE/thrust/thrust/system/detail/adl/uninitialized_copy.h +++ /dev/null @@ -1,44 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a fill of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -#pragma once - -#include - -// the purpose of this header is to #include the uninitialized_copy.h header -// of the sequential, host, and device systems. It should be #included in any -// code which uses adl to dispatch uninitialized_copy - -#include - -// SCons can't see through the #defines below to figure out what this header -// includes, so we fake it out by specifying all possible files we might end up -// including inside an #if 0. 
-#if 0 -#include -#include -#include -#include -#endif - -#define __THRUST_HOST_SYSTEM_UNINITIALIZED_COPY_HEADER <__THRUST_HOST_SYSTEM_ROOT/detail/uninitialized_copy.h> -#include __THRUST_HOST_SYSTEM_UNINITIALIZED_COPY_HEADER -#undef __THRUST_HOST_SYSTEM_UNINITIALIZED_COPY_HEADER - -#define __THRUST_DEVICE_SYSTEM_UNINITIALIZED_COPY_HEADER <__THRUST_DEVICE_SYSTEM_ROOT/detail/uninitialized_copy.h> -#include __THRUST_DEVICE_SYSTEM_UNINITIALIZED_COPY_HEADER -#undef __THRUST_DEVICE_SYSTEM_UNINITIALIZED_COPY_HEADER - diff --git a/spaces/mamiksik/commit-message-generator/README.md b/spaces/mamiksik/commit-message-generator/README.md deleted file mode 100644 index 2f322b7c37f80cd5cafc12fc3f65508b03d4aa52..0000000000000000000000000000000000000000 --- a/spaces/mamiksik/commit-message-generator/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Commit Message Generator -emoji: 🏃 -colorFrom: yellow -colorTo: yellow -sdk: gradio -sdk_version: 3.16.2 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/manjuvallayil/te-reo/urls.py b/spaces/manjuvallayil/te-reo/urls.py deleted file mode 100644 index 98f9493920d9df6416aaf7001c57ac1384c6532b..0000000000000000000000000000000000000000 --- a/spaces/manjuvallayil/te-reo/urls.py +++ /dev/null @@ -1,7 +0,0 @@ -# -*- coding: utf-8 -*- -""" -Predefined URLs used to make google translate requests. -""" -BASE = 'https://translate.google.com' -TRANSLATE = 'https://{host}/translate_a/single' -TRANSLATE_RPC = 'https://{host}/_/TranslateWebserverUi/data/batchexecute' diff --git a/spaces/marlenezw/audio-driven-animations/MakeItTalk/util/postalign.py b/spaces/marlenezw/audio-driven-animations/MakeItTalk/util/postalign.py deleted file mode 100644 index 2e2be1ab47e06e5eec468a805dbadd76f7c32ff2..0000000000000000000000000000000000000000 --- a/spaces/marlenezw/audio-driven-animations/MakeItTalk/util/postalign.py +++ /dev/null @@ -1,113 +0,0 @@ -""" - # Copyright 2020 Adobe - # All Rights Reserved. - - # NOTICE: Adobe permits you to use, modify, and distribute this file in - # accordance with the terms of the Adobe license agreement accompanying - # it. 
- -""" - -import os, glob -import numpy as np -import cv2 -import scipy.ndimage - -fs = ['suit1_pred_fls_t7_audio_embed.mp4' ] - -for f in fs: - - os.system('ffmpeg -y -i MakeItTalk/examples/{} -filter:v crop=256:256:256:0 -strict -2 MakeItTalk/examples/crop_{}'.format(f, f)) - - cap = cv2.VideoCapture('MakeItTalk/examples/crop_{}'.format(f)) - writer = cv2.VideoWriter('MakeItTalk/examples/tmp_{}.mp4'.format(f[:-4]), - cv2.VideoWriter_fourcc('M', 'J', 'P', 'G'), 62.5, (256, 256)) - - length = int(cap.get(cv2.CAP_PROP_FRAME_COUNT)) - - ret, frame1 = cap.read() - prvs = cv2.cvtColor(frame1,cv2.COLOR_BGR2GRAY) - fir = np.copy(prvs) - - # params for ShiTomasi corner detection - feature_params = dict( maxCorners = 100, - qualityLevel = 0.9, - minDistance = 3, - blockSize = 3) - - # Parameters for lucas kanade optical flow - lk_params = dict( winSize = (15,15), - maxLevel = 2, - criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 0.03)) - - # Create some random colors - color = np.random.randint(0,255,(100,3)) - - # Take first frame and find corners in it - ret, old_frame = cap.read() - old_gray = cv2.cvtColor(old_frame, cv2.COLOR_BGR2GRAY) - - mask = np.zeros_like(old_gray) - mask[-50:, 128:] = 1 - - p0 = cv2.goodFeaturesToTrack(old_gray, mask = mask, **feature_params) - p0 = p0[0:1] - - ori_ab = None - - # Create a mask image for drawing purposes - mask = np.zeros_like(old_frame) - ii = 0 - while(ii>-1): - print(f, ii, length) - ii += 1 - - ret,frame = cap.read() - if(not ret): - break - frame_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY) - - # calculate optical flow - p1, st, err = cv2.calcOpticalFlowPyrLK(old_gray, frame_gray, p0, None, **lk_params) - - # Select good points - good_new = p1[st==1] - good_old = p0[st==1] - - # draw the tracks - for i,(new,old) in enumerate(zip(good_new,good_old)): - a,b = new.ravel() - c,d = old.ravel() - # mask = cv2.line(mask, (a,b),(c,d), color[i].tolist(), 2) - # frame = cv2.circle(frame,(a,b),5,color[i].tolist(),-1) - if(ori_ab is None): - ori_ab = [a, b] - - # add dot - # img = cv2.add(frame,mask) - - # rgb = img - rgb = scipy.ndimage.shift(frame, shift=[ori_ab[1]-b, ori_ab[0]-a, 0], mode='reflect') - - # cv2.imshow('frame',rgb) - writer.write(rgb) - - # Now update the previous frame and previous points - old_gray = frame_gray.copy() - p0 = good_new.reshape(-1,1,2) - - cv2.destroyAllWindows() - cap.release() - writer.release() - - f = f[:-4] - os.system('ffmpeg -loglevel error -y -i {} -vn {}'.format( - os.path.join('../examples', '{}.mp4'.format(f)), os.path.join('../examples', 'a_' + f + '.wav') - )) - - os.system('ffmpeg -loglevel error -y -i {} -i {} -pix_fmt yuv420p -shortest -strict -2 {}'.format( - os.path.join('../examples', 'tmp_{}.mp4'.format(f)), os.path.join('../examples', 'a_' + f + '.wav'), - os.path.join('../examples', 'f_' + f + '.mp4') - )) - os.remove(os.path.join('../examples', 'tmp_{}.mp4'.format(f))) - os.remove(os.path.join('../examples', 'a_' + f + '.wav')) diff --git a/spaces/matthoffner/chatbot-mini/Makefile b/spaces/matthoffner/chatbot-mini/Makefile deleted file mode 100644 index 8dc4e12dc227a0ffe26ac1769fd9da539e5b438c..0000000000000000000000000000000000000000 --- a/spaces/matthoffner/chatbot-mini/Makefile +++ /dev/null @@ -1,18 +0,0 @@ -include .env - -.PHONY: all - -build: - docker build -t chatbot-ui . 
- -run: - export $(cat .env | xargs) - docker stop chatbot-ui || true && docker rm chatbot-ui || true - docker run --name chatbot-ui --rm -e OPENAI_API_KEY=${OPENAI_API_KEY} -p 3000:3000 chatbot-ui - -logs: - docker logs -f chatbot-ui - -push: - docker tag chatbot-ui:latest ${DOCKER_USER}/chatbot-ui:${DOCKER_TAG} - docker push ${DOCKER_USER}/chatbot-ui:${DOCKER_TAG} \ No newline at end of file diff --git a/spaces/merle/PROTEIN_GENERATOR/utils/model/scoring.py b/spaces/merle/PROTEIN_GENERATOR/utils/model/scoring.py deleted file mode 100644 index 21377f66cec15b7b01c23031f9b5b5357cf38e38..0000000000000000000000000000000000000000 --- a/spaces/merle/PROTEIN_GENERATOR/utils/model/scoring.py +++ /dev/null @@ -1,300 +0,0 @@ - -## -## lk and lk term -#(LJ_RADIUS LJ_WDEPTH LK_DGFREE LK_LAMBDA LK_VOLUME) -type2ljlk = { - "CNH2":(1.968297,0.094638,3.077030,3.5000,13.500000), - "COO":(1.916661,0.141799,-3.332648,3.5000,14.653000), - "CH0":(2.011760,0.062642,1.409284,3.5000,8.998000), - "CH1":(2.011760,0.062642,-3.538387,3.5000,10.686000), - "CH2":(2.011760,0.062642,-1.854658,3.5000,18.331000), - "CH3":(2.011760,0.062642,7.292929,3.5000,25.855000), - "aroC":(2.016441,0.068775,1.797950,3.5000,16.704000), - "Ntrp":(1.802452,0.161725,-8.413116,3.5000,9.522100), - "Nhis":(1.802452,0.161725,-9.739606,3.5000,9.317700), - "NtrR":(1.802452,0.161725,-5.158080,3.5000,9.779200), - "NH2O":(1.802452,0.161725,-8.101638,3.5000,15.689000), - "Nlys":(1.802452,0.161725,-20.864641,3.5000,16.514000), - "Narg":(1.802452,0.161725,-8.968351,3.5000,15.717000), - "Npro":(1.802452,0.161725,-0.984585,3.5000,3.718100), - "OH":(1.542743,0.161947,-8.133520,3.5000,10.722000), - "OHY":(1.542743,0.161947,-8.133520,3.5000,10.722000), - "ONH2":(1.548662,0.182924,-6.591644,3.5000,10.102000), - "OOC":(1.492871,0.099873,-9.239832,3.5000,9.995600), - "S":(1.975967,0.455970,-1.707229,3.5000,17.640000), - "SH1":(1.975967,0.455970,3.291643,3.5000,23.240000), - "Nbb":(1.802452,0.161725,-9.969494,3.5000,15.992000), - "CAbb":(2.011760,0.062642,2.533791,3.5000,12.137000), - "CObb":(1.916661,0.141799,3.104248,3.5000,13.221000), - "OCbb":(1.540580,0.142417,-8.006829,3.5000,12.196000), - "HNbb":(0.901681,0.005000,0.0000,3.5000,0.0000), - "Hapo":(1.421272,0.021808,0.0000,3.5000,0.0000), - "Haro":(1.374914,0.015909,0.0000,3.5000,0.0000), - "Hpol":(0.901681,0.005000,0.0000,3.5000,0.0000), - "HS":(0.363887,0.050836,0.0000,3.5000,0.0000), -} - -# hbond donor/acceptors -class HbAtom: - NO = 0 - DO = 1 # donor - AC = 2 # acceptor - DA = 3 # donor & acceptor - HP = 4 # polar H - -type2hb = { - "CNH2":HbAtom.NO, "COO":HbAtom.NO, "CH0":HbAtom.NO, "CH1":HbAtom.NO, - "CH2":HbAtom.NO, "CH3":HbAtom.NO, "aroC":HbAtom.NO, "Ntrp":HbAtom.DO, - "Nhis":HbAtom.AC, "NtrR":HbAtom.DO, "NH2O":HbAtom.DO, "Nlys":HbAtom.DO, - "Narg":HbAtom.DO, "Npro":HbAtom.NO, "OH":HbAtom.DA, "OHY":HbAtom.DA, - "ONH2":HbAtom.AC, "OOC":HbAtom.AC, "S":HbAtom.NO, "SH1":HbAtom.NO, - "Nbb":HbAtom.DO, "CAbb":HbAtom.NO, "CObb":HbAtom.NO, "OCbb":HbAtom.AC, - "HNbb":HbAtom.HP, "Hapo":HbAtom.NO, "Haro":HbAtom.NO, "Hpol":HbAtom.HP, - "HS":HbAtom.HP, # HP in rosetta(?) 
-} - -## -## hbond term -class HbDonType: - PBA = 0 - IND = 1 - IME = 2 - GDE = 3 - CXA = 4 - AMO = 5 - HXL = 6 - AHX = 7 - NTYPES = 8 - -class HbAccType: - PBA = 0 - CXA = 1 - CXL = 2 - HXL = 3 - AHX = 4 - IME = 5 - NTYPES = 6 - -class HbHybType: - SP2 = 0 - SP3 = 1 - RING = 2 - NTYPES = 3 - -type2dontype = { - "Nbb": HbDonType.PBA, - "Ntrp": HbDonType.IND, - "NtrR": HbDonType.GDE, - "Narg": HbDonType.GDE, - "NH2O": HbDonType.CXA, - "Nlys": HbDonType.AMO, - "OH": HbDonType.HXL, - "OHY": HbDonType.AHX, -} - -type2acctype = { - "OCbb": HbAccType.PBA, - "ONH2": HbAccType.CXA, - "OOC": HbAccType.CXL, - "OH": HbAccType.HXL, - "OHY": HbAccType.AHX, - "Nhis": HbAccType.IME, -} - -type2hybtype = { - "OCbb": HbHybType.SP2, - "ONH2": HbHybType.SP2, - "OOC": HbHybType.SP2, - "OHY": HbHybType.SP3, - "OH": HbHybType.SP3, - "Nhis": HbHybType.RING, -} - -dontype2wt = { - HbDonType.PBA: 1.45, - HbDonType.IND: 1.15, - HbDonType.IME: 1.42, - HbDonType.GDE: 1.11, - HbDonType.CXA: 1.29, - HbDonType.AMO: 1.17, - HbDonType.HXL: 0.99, - HbDonType.AHX: 1.00, -} - -acctype2wt = { - HbAccType.PBA: 1.19, - HbAccType.CXA: 1.21, - HbAccType.CXL: 1.10, - HbAccType.HXL: 1.15, - HbAccType.AHX: 1.15, - HbAccType.IME: 1.17, -} - -class HbPolyType: - ahdist_aASN_dARG = 0 - ahdist_aASN_dASN = 1 - ahdist_aASN_dGLY = 2 - ahdist_aASN_dHIS = 3 - ahdist_aASN_dLYS = 4 - ahdist_aASN_dSER = 5 - ahdist_aASN_dTRP = 6 - ahdist_aASN_dTYR = 7 - ahdist_aASP_dARG = 8 - ahdist_aASP_dASN = 9 - ahdist_aASP_dGLY = 10 - ahdist_aASP_dHIS = 11 - ahdist_aASP_dLYS = 12 - ahdist_aASP_dSER = 13 - ahdist_aASP_dTRP = 14 - ahdist_aASP_dTYR = 15 - ahdist_aGLY_dARG = 16 - ahdist_aGLY_dASN = 17 - ahdist_aGLY_dGLY = 18 - ahdist_aGLY_dHIS = 19 - ahdist_aGLY_dLYS = 20 - ahdist_aGLY_dSER = 21 - ahdist_aGLY_dTRP = 22 - ahdist_aGLY_dTYR = 23 - ahdist_aHIS_dARG = 24 - ahdist_aHIS_dASN = 25 - ahdist_aHIS_dGLY = 26 - ahdist_aHIS_dHIS = 27 - ahdist_aHIS_dLYS = 28 - ahdist_aHIS_dSER = 29 - ahdist_aHIS_dTRP = 30 - ahdist_aHIS_dTYR = 31 - ahdist_aSER_dARG = 32 - ahdist_aSER_dASN = 33 - ahdist_aSER_dGLY = 34 - ahdist_aSER_dHIS = 35 - ahdist_aSER_dLYS = 36 - ahdist_aSER_dSER = 37 - ahdist_aSER_dTRP = 38 - ahdist_aSER_dTYR = 39 - ahdist_aTYR_dARG = 40 - ahdist_aTYR_dASN = 41 - ahdist_aTYR_dGLY = 42 - ahdist_aTYR_dHIS = 43 - ahdist_aTYR_dLYS = 44 - ahdist_aTYR_dSER = 45 - ahdist_aTYR_dTRP = 46 - ahdist_aTYR_dTYR = 47 - cosBAH_off = 48 - cosBAH_7 = 49 - cosBAH_6i = 50 - AHD_1h = 51 - AHD_1i = 52 - AHD_1j = 53 - AHD_1k = 54 - -# map donor:acceptor pairs to polynomials -hbtypepair2poly = { - (HbDonType.PBA,HbAccType.PBA): (HbPolyType.ahdist_aGLY_dGLY,HbPolyType.cosBAH_off,HbPolyType.AHD_1j), - (HbDonType.CXA,HbAccType.PBA): (HbPolyType.ahdist_aGLY_dASN,HbPolyType.cosBAH_off,HbPolyType.AHD_1j), - (HbDonType.IME,HbAccType.PBA): (HbPolyType.ahdist_aGLY_dHIS,HbPolyType.cosBAH_off,HbPolyType.AHD_1j), - (HbDonType.IND,HbAccType.PBA): (HbPolyType.ahdist_aGLY_dTRP,HbPolyType.cosBAH_off,HbPolyType.AHD_1j), - (HbDonType.AMO,HbAccType.PBA): (HbPolyType.ahdist_aGLY_dLYS,HbPolyType.cosBAH_off,HbPolyType.AHD_1h), - (HbDonType.GDE,HbAccType.PBA): (HbPolyType.ahdist_aGLY_dARG,HbPolyType.cosBAH_off,HbPolyType.AHD_1j), - (HbDonType.AHX,HbAccType.PBA): (HbPolyType.ahdist_aGLY_dTYR,HbPolyType.cosBAH_off,HbPolyType.AHD_1k), - (HbDonType.HXL,HbAccType.PBA): (HbPolyType.ahdist_aGLY_dSER,HbPolyType.cosBAH_off,HbPolyType.AHD_1k), - (HbDonType.PBA,HbAccType.CXA): (HbPolyType.ahdist_aASN_dGLY,HbPolyType.cosBAH_off,HbPolyType.AHD_1k), - (HbDonType.CXA,HbAccType.CXA): 
(HbPolyType.ahdist_aASN_dASN,HbPolyType.cosBAH_off,HbPolyType.AHD_1k), - (HbDonType.IME,HbAccType.CXA): (HbPolyType.ahdist_aASN_dHIS,HbPolyType.cosBAH_off,HbPolyType.AHD_1k), - (HbDonType.IND,HbAccType.CXA): (HbPolyType.ahdist_aASN_dTRP,HbPolyType.cosBAH_off,HbPolyType.AHD_1k), - (HbDonType.AMO,HbAccType.CXA): (HbPolyType.ahdist_aASN_dLYS,HbPolyType.cosBAH_off,HbPolyType.AHD_1h), - (HbDonType.GDE,HbAccType.CXA): (HbPolyType.ahdist_aASN_dARG,HbPolyType.cosBAH_off,HbPolyType.AHD_1k), - (HbDonType.AHX,HbAccType.CXA): (HbPolyType.ahdist_aASN_dTYR,HbPolyType.cosBAH_off,HbPolyType.AHD_1k), - (HbDonType.HXL,HbAccType.CXA): (HbPolyType.ahdist_aASN_dSER,HbPolyType.cosBAH_off,HbPolyType.AHD_1k), - (HbDonType.PBA,HbAccType.CXL): (HbPolyType.ahdist_aASP_dGLY,HbPolyType.cosBAH_off,HbPolyType.AHD_1k), - (HbDonType.CXA,HbAccType.CXL): (HbPolyType.ahdist_aASP_dASN,HbPolyType.cosBAH_off,HbPolyType.AHD_1k), - (HbDonType.IME,HbAccType.CXL): (HbPolyType.ahdist_aASP_dHIS,HbPolyType.cosBAH_off,HbPolyType.AHD_1k), - (HbDonType.IND,HbAccType.CXL): (HbPolyType.ahdist_aASP_dTRP,HbPolyType.cosBAH_off,HbPolyType.AHD_1k), - (HbDonType.AMO,HbAccType.CXL): (HbPolyType.ahdist_aASP_dLYS,HbPolyType.cosBAH_off,HbPolyType.AHD_1h), - (HbDonType.GDE,HbAccType.CXL): (HbPolyType.ahdist_aASP_dARG,HbPolyType.cosBAH_off,HbPolyType.AHD_1k), - (HbDonType.AHX,HbAccType.CXL): (HbPolyType.ahdist_aASP_dTYR,HbPolyType.cosBAH_off,HbPolyType.AHD_1k), - (HbDonType.HXL,HbAccType.CXL): (HbPolyType.ahdist_aASP_dSER,HbPolyType.cosBAH_off,HbPolyType.AHD_1k), - (HbDonType.PBA,HbAccType.IME): (HbPolyType.ahdist_aHIS_dGLY,HbPolyType.cosBAH_7,HbPolyType.AHD_1i), - (HbDonType.CXA,HbAccType.IME): (HbPolyType.ahdist_aHIS_dASN,HbPolyType.cosBAH_7,HbPolyType.AHD_1i), - (HbDonType.IME,HbAccType.IME): (HbPolyType.ahdist_aHIS_dHIS,HbPolyType.cosBAH_7,HbPolyType.AHD_1h), - (HbDonType.IND,HbAccType.IME): (HbPolyType.ahdist_aHIS_dTRP,HbPolyType.cosBAH_7,HbPolyType.AHD_1h), - (HbDonType.AMO,HbAccType.IME): (HbPolyType.ahdist_aHIS_dLYS,HbPolyType.cosBAH_7,HbPolyType.AHD_1i), - (HbDonType.GDE,HbAccType.IME): (HbPolyType.ahdist_aHIS_dARG,HbPolyType.cosBAH_7,HbPolyType.AHD_1h), - (HbDonType.AHX,HbAccType.IME): (HbPolyType.ahdist_aHIS_dTYR,HbPolyType.cosBAH_7,HbPolyType.AHD_1i), - (HbDonType.HXL,HbAccType.IME): (HbPolyType.ahdist_aHIS_dSER,HbPolyType.cosBAH_7,HbPolyType.AHD_1i), - (HbDonType.PBA,HbAccType.AHX): (HbPolyType.ahdist_aTYR_dGLY,HbPolyType.cosBAH_6i,HbPolyType.AHD_1i), - (HbDonType.CXA,HbAccType.AHX): (HbPolyType.ahdist_aTYR_dASN,HbPolyType.cosBAH_6i,HbPolyType.AHD_1i), - (HbDonType.IME,HbAccType.AHX): (HbPolyType.ahdist_aTYR_dHIS,HbPolyType.cosBAH_6i,HbPolyType.AHD_1i), - (HbDonType.IND,HbAccType.AHX): (HbPolyType.ahdist_aTYR_dTRP,HbPolyType.cosBAH_6i,HbPolyType.AHD_1h), - (HbDonType.AMO,HbAccType.AHX): (HbPolyType.ahdist_aTYR_dLYS,HbPolyType.cosBAH_6i,HbPolyType.AHD_1i), - (HbDonType.GDE,HbAccType.AHX): (HbPolyType.ahdist_aTYR_dARG,HbPolyType.cosBAH_6i,HbPolyType.AHD_1i), - (HbDonType.AHX,HbAccType.AHX): (HbPolyType.ahdist_aTYR_dTYR,HbPolyType.cosBAH_6i,HbPolyType.AHD_1i), - (HbDonType.HXL,HbAccType.AHX): (HbPolyType.ahdist_aTYR_dSER,HbPolyType.cosBAH_6i,HbPolyType.AHD_1i), - (HbDonType.PBA,HbAccType.HXL): (HbPolyType.ahdist_aSER_dGLY,HbPolyType.cosBAH_6i,HbPolyType.AHD_1i), - (HbDonType.CXA,HbAccType.HXL): (HbPolyType.ahdist_aSER_dASN,HbPolyType.cosBAH_6i,HbPolyType.AHD_1i), - (HbDonType.IME,HbAccType.HXL): (HbPolyType.ahdist_aSER_dHIS,HbPolyType.cosBAH_6i,HbPolyType.AHD_1i), - (HbDonType.IND,HbAccType.HXL): 
(HbPolyType.ahdist_aSER_dTRP,HbPolyType.cosBAH_6i,HbPolyType.AHD_1h), - (HbDonType.AMO,HbAccType.HXL): (HbPolyType.ahdist_aSER_dLYS,HbPolyType.cosBAH_6i,HbPolyType.AHD_1i), - (HbDonType.GDE,HbAccType.HXL): (HbPolyType.ahdist_aSER_dARG,HbPolyType.cosBAH_6i,HbPolyType.AHD_1i), - (HbDonType.AHX,HbAccType.HXL): (HbPolyType.ahdist_aSER_dTYR,HbPolyType.cosBAH_6i,HbPolyType.AHD_1i), - (HbDonType.HXL,HbAccType.HXL): (HbPolyType.ahdist_aSER_dSER,HbPolyType.cosBAH_6i,HbPolyType.AHD_1i), -} - - -# polynomials are triplets, (x_min, x_max), (y[xx_max]), (c_9,...,c_0) -hbpolytype2coeffs = { # Parameters imported from rosetta sp2_elec_params @v2017.48-dev59886 - HbPolyType.ahdist_aASN_dARG: ((0.7019094761929999, 2.86820307153,),(1.1, 1.1,),( 0.58376113, -9.29345473, 64.86270904, -260.3946711, 661.43138077, -1098.01378958, 1183.58371466, -790.82929582, 291.33125475, -43.01629727,)), - HbPolyType.ahdist_aASN_dASN: ((0.625841094801, 2.75107708444,),(1.1, 1.1,),( -1.31243015, 18.6745072, -112.63858313, 373.32878091, -734.99145504, 861.38324861, -556.21026097, 143.5626977, 20.03238394, -11.52167705,)), - HbPolyType.ahdist_aASN_dGLY: ((0.7477341047139999, 2.6796350782799996,),(1.1, 1.1,),( -1.61294554, 23.3150793, -144.11313069, 496.13575, -1037.83809166, 1348.76826073, -1065.14368678, 473.89008925, -100.41142701, 7.44453515,)), - HbPolyType.ahdist_aASN_dHIS: ((0.344789524346, 2.8303582266000005,),(1.1, 1.1,),( -0.2657122, 4.1073775, -26.9099632, 97.10486507, -209.96002602, 277.33057268, -218.74766996, 97.42852213, -24.07382402, 3.73962807,)), - HbPolyType.ahdist_aASN_dLYS: ((0.542905671869, 2.45259389314,),(1.1, 1.1,),( 1.38531754, -18.48733797, 106.14444613, -344.70585054, 698.91577956, -917.0879402, 775.32787908, -403.09588787, 113.65054778, -11.66516403,)), - HbPolyType.ahdist_aASN_dSER: ((1.0812774602500002, 2.6832123582599996,),(1.1, 1.1,),( -3.51524353, 47.54032873, -254.40168577, 617.84606386, -255.49935027, -2361.56230539, 6426.85797934, -7760.4403891, 4694.08106855, -1149.83549068,)), - HbPolyType.ahdist_aASN_dTRP: ((0.6689984999999999, 3.0704254,),(1.1, 1.1,),( -0.5284840422, 8.3510150838, -56.4100479414, 212.4884326254, -488.3178610608, 703.7762350506, -628.9936994633999, 331.4294356146, -93.265817571, 11.9691623698,)), - HbPolyType.ahdist_aASN_dTYR: ((1.08950268805, 2.6887046709400004,),(1.1, 1.1,),( -4.4488705, 63.27696281, -371.44187037, 1121.71921621, -1638.11394306, 142.99988401, 3436.65879147, -5496.07011787, 3709.30505237, -962.79669688,)), - HbPolyType.ahdist_aASP_dARG: ((0.8100404642229999, 2.9851230124799994,),(1.1, 1.1,),( -0.66430344, 10.41343145, -70.12656205, 265.12578414, -617.05849171, 911.39378582, -847.25013928, 472.09090981, -141.71513167, 18.57721132,)), - HbPolyType.ahdist_aASP_dASN: ((1.05401125073, 3.11129675908,),(1.1, 1.1,),( 0.02090728, -0.24144928, -0.19578075, 16.80904547, -117.70216251, 407.18551288, -809.95195924, 939.83137947, -593.94527692, 159.57610528,)), - HbPolyType.ahdist_aASP_dGLY: ((0.886260952629, 2.66843608743,),(1.1, 1.1,),( -7.00699267, 107.33021779, -713.45752385, 2694.43092298, -6353.05100287, 9667.94098394, -9461.9261027, 5721.0086877, -1933.97818198, 279.47763789,)), - HbPolyType.ahdist_aASP_dHIS: ((1.03597611139, 2.78208509117,),(1.1, 1.1,),( -1.34823406, 17.08925926, -78.75087193, 106.32795459, 400.18459698, -2041.04320193, 4033.83557387, -4239.60530204, 2324.00877252, -519.38410941,)), - HbPolyType.ahdist_aASP_dLYS: ((0.97789485082, 2.50496946108,),(1.1, 1.1,),( -0.41300315, 6.59243438, -44.44525308, 163.11796012, -351.2307798, 443.2463146, 
-297.84582856, 62.38600547, 33.77496227, -14.11652182,)), - HbPolyType.ahdist_aASP_dSER: ((0.542905671869, 2.45259389314,),(1.1, 1.1,),( 1.38531754, -18.48733797, 106.14444613, -344.70585054, 698.91577956, -917.0879402, 775.32787908, -403.09588787, 113.65054778, -11.66516403,)), - HbPolyType.ahdist_aASP_dTRP: ((0.419155746414, 3.0486938610500003,),(1.1, 1.1,),( -0.24563471, 3.85598551, -25.75176874, 95.36525025, -214.13175785, 299.76133553, -259.0691378, 132.06975835, -37.15612683, 5.60445773,)), - HbPolyType.ahdist_aASP_dTYR: ((1.01057521468, 2.7207545786900003,),(1.1, 1.1,),( -0.15808672, -10.21398871, 178.80080949, -1238.0583801, 4736.25248274, -11071.96777725, 16239.07550047, -14593.21092621, 7335.66765017, -1575.08145078,)), - HbPolyType.ahdist_aGLY_dARG: ((0.499016667857, 2.9377031027599996,),(1.1, 1.1,),( -0.15923533, 2.5526639, -17.38788803, 65.71046957, -151.13491186, 218.78048387, -199.15882919, 110.56568974, -35.95143745, 6.47580213,)), - HbPolyType.ahdist_aGLY_dASN: ((0.7194388032060001, 2.9303772333599998,),(1.1, 1.1,),( -1.40718342, 23.65929694, -172.97144348, 720.64417348, -1882.85420815, 3194.87197776, -3515.52467458, 2415.75238278, -941.47705161, 159.84784277,)), - HbPolyType.ahdist_aGLY_dGLY: ((1.38403812683, 2.9981039433,),(1.1, 1.1,),( -0.5307601, 6.47949946, -22.39522814, -55.14303544, 708.30945242, -2619.49318162, 5227.8805795, -6043.31211632, 3806.04676175, -1007.66024144,)), - HbPolyType.ahdist_aGLY_dHIS: ((0.47406840932899996, 2.9234200830400003,),(1.1, 1.1,),( -0.12881679, 1.933838, -12.03134888, 39.92691227, -75.41519959, 78.87968016, -37.82769801, -0.13178679, 4.50193019, 0.45408359,)), - HbPolyType.ahdist_aGLY_dLYS: ((0.545347533475, 2.42624380351,),(1.1, 1.1,),( -0.22921901, 2.07015714, -6.2947417, 0.66645697, 45.21805416, -130.26668981, 176.32401031, -126.68226346, 43.96744431, -4.40105281,)), - HbPolyType.ahdist_aGLY_dSER: ((1.2803349239700001, 2.2465996077400003,),(1.1, 1.1,),( 6.72508613, -86.98495585, 454.18518444, -1119.89141452, 715.624663, 3172.36852982, -9455.49113097, 11797.38766934, -7363.28302948, 1885.50119665,)), - HbPolyType.ahdist_aGLY_dTRP: ((0.686512740494, 3.02901351815,),(1.1, 1.1,),( -0.1051487, 1.41597708, -7.42149173, 17.31830704, -6.98293652, -54.76605063, 130.95272289, -132.77575305, 62.75460448, -9.89110842,)), - HbPolyType.ahdist_aGLY_dTYR: ((1.28894687639, 2.26335316892,),(1.1, 1.1,),( 13.84536925, -169.40579865, 893.79467505, -2670.60617561, 5016.46234701, -6293.79378818, 5585.1049063, -3683.50722701, 1709.48661405, -399.5712153,)), - HbPolyType.ahdist_aHIS_dARG: ((0.8967400957230001, 2.96809434226,),(1.1, 1.1,),( 0.43460495, -10.52727665, 103.16979807, -551.42887412, 1793.25378923, -3701.08304991, 4861.05155388, -3922.4285529, 1763.82137881, -335.43441944,)), - HbPolyType.ahdist_aHIS_dASN: ((0.887120931718, 2.59166903153,),(1.1, 1.1,),( -3.50289894, 54.42813924, -368.14395507, 1418.90186454, -3425.60485859, 5360.92334837, -5428.54462336, 3424.68800187, -1221.49631986, 189.27122436,)), - HbPolyType.ahdist_aHIS_dGLY: ((1.01629363411, 2.58523052904,),(1.1, 1.1,),( -1.68095217, 21.31894078, -107.72203494, 251.81021758, -134.07465831, -707.64527046, 1894.6282743, -2156.85951846, 1216.83585872, -275.48078944,)), - HbPolyType.ahdist_aHIS_dHIS: ((0.9773010778919999, 2.72533796329,),(1.1, 1.1,),( -2.33350626, 35.66072412, -233.98966111, 859.13714961, -1925.30958567, 2685.35293578, -2257.48067507, 1021.49796136, -169.36082523, -12.1348055,)), - HbPolyType.ahdist_aHIS_dLYS: ((0.7080936539849999, 2.47191718632,),(1.1, 1.1,),( -1.88479369, 
28.38084382, -185.74039957, 690.81875917, -1605.11404391, 2414.83545623, -2355.9723201, 1442.24496229, -506.45880637, 79.47512505,)), - HbPolyType.ahdist_aHIS_dSER: ((0.90846809159, 2.5477956147,),(1.1, 1.1,),( -0.92004641, 15.91841533, -117.83979251, 488.22211296, -1244.13047376, 2017.43704053, -2076.04468019, 1302.42621488, -451.29138643, 67.15812575,)), - HbPolyType.ahdist_aHIS_dTRP: ((0.991999676806, 2.81296584506,),(1.1, 1.1,),( -1.29358587, 19.97152857, -131.89796017, 485.29199356, -1084.0466445, 1497.3352889, -1234.58042682, 535.8048197, -75.58951691, -9.91148332,)), - HbPolyType.ahdist_aHIS_dTYR: ((0.882661836357, 2.5469016429900004,),(1.1, 1.1,),( -6.94700143, 109.07997256, -747.64035726, 2929.83959536, -7220.15788571, 11583.34170519, -12078.443492, 7881.85479715, -2918.19482068, 468.23988622,)), - HbPolyType.ahdist_aSER_dARG: ((1.0204658147399999, 2.8899566041900004,),(1.1, 1.1,),( 0.33887327, -7.54511361, 70.87316645, -371.88263665, 1206.67454443, -2516.82084076, 3379.45432693, -2819.73384601, 1325.33307517, -265.54533008,)), - HbPolyType.ahdist_aSER_dASN: ((1.01393052233, 3.0024434159299997,),(1.1, 1.1,),( 0.37012361, -7.46486204, 64.85775924, -318.6047209, 974.66322243, -1924.37334018, 2451.63840629, -1943.1915675, 867.07870559, -163.83771761,)), - HbPolyType.ahdist_aSER_dGLY: ((1.3856562156299999, 2.74160605537,),(1.1, 1.1,),( -1.32847415, 22.67528654, -172.53450064, 770.79034865, -2233.48829652, 4354.38807288, -5697.35144236, 4803.38686157, -2361.48028857, 518.28202382,)), - HbPolyType.ahdist_aSER_dHIS: ((0.550992321207, 2.68549261999,),(1.1, 1.1,),( -1.98041793, 29.59668639, -190.36751773, 688.43324385, -1534.68894765, 2175.66568976, -1952.07622113, 1066.28943929, -324.23381388, 43.41006168,)), - HbPolyType.ahdist_aSER_dLYS: ((0.8603189393170001, 2.77729502744,),(1.1, 1.1,),( 0.90884741, -17.24690746, 141.78469099, -661.85989315, 1929.7674992, -3636.43392779, 4419.00727923, -3332.43482061, 1410.78913266, -253.53829424,)), - HbPolyType.ahdist_aSER_dSER: ((1.10866545921, 2.61727781204,),(1.1, 1.1,),( -0.38264308, 4.41779675, -10.7016645, -81.91314845, 668.91174735, -2187.50684758, 3983.56103269, -4213.32320546, 2418.41531442, -580.28918569,)), - HbPolyType.ahdist_aSER_dTRP: ((1.4092077245899999, 2.8066121197099996,),(1.1, 1.1,),( 0.73762477, -11.70741276, 73.05154232, -205.00144794, 89.58794368, 1082.94541375, -3343.98293188, 4601.70815729, -3178.53568678, 896.59487831,)), - HbPolyType.ahdist_aSER_dTYR: ((1.10773547919, 2.60403567341,),(1.1, 1.1,),( -1.13249925, 14.66643161, -69.01708791, 93.96846742, 380.56063898, -1984.56675689, 4074.08891127, -4492.76927139, 2613.13168054, -627.71933508,)), - HbPolyType.ahdist_aTYR_dARG: ((1.05581400627, 2.85499888099,),(1.1, 1.1,),( -0.30396592, 5.30288548, -39.75788579, 167.5416547, -435.15958911, 716.52357586, -735.95195083, 439.76284677, -130.00400085, 13.23827556,)), - HbPolyType.ahdist_aTYR_dASN: ((1.0994919065200002, 2.8400869077900004,),(1.1, 1.1,),( 0.33548259, -3.5890451, 8.97769025, 48.1492734, -400.5983616, 1269.89613211, -2238.03101675, 2298.33009115, -1290.42961162, 308.43185147,)), - HbPolyType.ahdist_aTYR_dGLY: ((1.36546155066, 2.7303075916400004,),(1.1, 1.1,),( -1.55312915, 18.62092487, -70.91365499, -41.83066505, 1248.88835245, -4719.81948329, 9186.09528168, -10266.11434548, 6266.21959533, -1622.19652457,)), - HbPolyType.ahdist_aTYR_dHIS: ((0.5955982461899999, 2.6643551317500003,),(1.1, 1.1,),( -0.47442788, 7.16629863, -46.71287553, 171.46128947, -388.17484011, 558.45202337, -506.35587481, 276.46237273, -83.52554392, 
12.05709329,)), - HbPolyType.ahdist_aTYR_dLYS: ((0.7978598238760001, 2.7620933782,),(1.1, 1.1,),( -0.20201464, 1.69684984, 0.27677515, -55.05786347, 286.29918332, -725.92372531, 1054.771746, -889.33602341, 401.11342256, -73.02221189,)), - HbPolyType.ahdist_aTYR_dSER: ((0.7083554962559999, 2.7032011990599996,),(1.1, 1.1,),( -0.70764192, 11.67978065, -82.80447482, 329.83401367, -810.58976486, 1269.57613941, -1261.04047117, 761.72890446, -254.37526011, 37.24301861,)), - HbPolyType.ahdist_aTYR_dTRP: ((1.10934023051, 2.8819112108,),(1.1, 1.1,),( -11.58453967, 204.88308091, -1589.77384548, 7100.84791905, -20113.61354433, 37457.83646055, -45850.02969172, 35559.8805122, -15854.78726237, 3098.04931146,)), - HbPolyType.ahdist_aTYR_dTYR: ((1.1105954899400001, 2.60081798685,),(1.1, 1.1,),( -1.63120628, 19.48493187, -81.0332905, 56.80517706, 687.42717782, -2842.77799908, 5385.52231471, -5656.74159307, 3178.83470588, -744.70042777,)), - HbPolyType.AHD_1h: ((1.76555274367, 3.1416,),(1.1, 1.1,),( 0.62725838, -9.98558225, 59.39060071, -120.82930213, -333.26536028, 2603.13082592, -6895.51207142, 9651.25238056, -7127.13394872, 2194.77244026,)), - HbPolyType.AHD_1i: ((1.59914724347, 3.1416,),(1.1, 1.1,),( -0.18888801, 3.48241679, -25.65508662, 89.57085435, -95.91708218, -367.93452341, 1589.6904702, -2662.3582135, 2184.40194483, -723.28383545,)), - HbPolyType.AHD_1j: ((1.1435646388, 3.1416,),(1.1, 1.1,),( 0.47683259, -9.54524724, 83.62557693, -420.55867774, 1337.19354878, -2786.26265686, 3803.178227, -3278.62879901, 1619.04116204, -347.50157909,)), - HbPolyType.AHD_1k: ((1.15651981164, 3.1416,),(1.1, 1.1,),( -0.10757999, 2.0276542, -16.51949978, 75.83866839, -214.18025678, 380.55117567, -415.47847283, 255.66998474, -69.94662165, 3.21313428,)), - HbPolyType.cosBAH_off: ((-1234.0, 1.1,),(1.1, 1.1,),( 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0,)), - HbPolyType.cosBAH_6i: ((-0.23538144897100002, 1.1,),(1.1, 1.1,),( -0.822093, -3.75364636, 46.88852157, -129.5440564, 146.69151428, -67.60598792, 2.91683129, 9.26673173, -3.84488178, 0.05706659,)), - HbPolyType.cosBAH_7: ((-0.019373850666900002, 1.1,),(1.1, 1.1,),( 0.0, -27.942923450028, 136.039920253368, -268.06959056747, 275.400462507919, -153.502076215949, 39.741591385461, 0.693861510121, -3.885952320499, 1.024765090788892)), -} \ No newline at end of file diff --git a/spaces/merle/PROTEIN_GENERATOR/utils/potentials.py b/spaces/merle/PROTEIN_GENERATOR/utils/potentials.py deleted file mode 100644 index 597decc44506fd322f9bf154e640daa92098269b..0000000000000000000000000000000000000000 --- a/spaces/merle/PROTEIN_GENERATOR/utils/potentials.py +++ /dev/null @@ -1,691 +0,0 @@ -import os, sys -import shutil -import glob -import torch -import numpy as np -import copy -from itertools import groupby -from operator import itemgetter -import json -import re -import random -import matplotlib.pyplot as plt -import pandas as pd -from tqdm import tqdm -import random -import Bio -from icecream import ic -DEVICE = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu') - -conversion = 'ARNDCQEGHILKMFPSTWYVX-' - - -### IF ADDING NEW POTENTIAL MAKE SURE TO ADD TO BOTTOM DICTIONARY ### - - -# TEMPLATE CLASS -class Potential: - - def get_gradients(seq): - ''' - EVERY POTENTIAL CLASS MUST RETURN GRADIENTS - ''' - - sys.exit('ERROR POTENTIAL HAS NOT BEEN IMPLEMENTED') - - -class AACompositionalBias(Potential): - """ - T = number of timesteps to set up diffuser with - - schedule = type of noise schedule to use linear, cosine, gaussian - - noise = type of ditribution 
to sample from; DEFAULT - normal_gaussian - - """ - - def __init__(self, args, features, potential_scale, DEVICE): - - self.L = features['L'] - self.DEVICE = DEVICE - self.frac_seq_to_weight = args['frac_seq_to_weight'] - self.add_weight_every_n = args['add_weight_every_n'] - self.aa_weights_json = args['aa_weights_json'] - self.one_weight_per_position = args['one_weight_per_position'] - self.aa_weight = args['aa_weight'] - self.aa_spec = args['aa_spec'] - self.aa_composition = args['aa_composition'] - self.potential_scale = potential_scale - - self.aa_weights_to_add = [0 for l in range(21)] - self.aa_max_potential = None - - - if self.aa_weights_json != None: - with open(self.aa_weights_json, 'r') as f: - aa_weights = json.load(f) - else: - aa_weights = {} - - for k,v in aa_weights.items(): - aa_weights_to_add[conversion.index(k)] = v - - aa_weights_to_add = [0 for l in range(21)] - - self.aa_weights_to_add = torch.tensor(aa_weights_to_add)[None].repeat(self.L,1).to(self.DEVICE, non_blocking=True) - - # BLOCK TO FIND OUT HOW YOU ARE LOOKING TO PROVIDE AA COMPOSITIONAL BIAS - if self.add_weight_every_n > 1 or self.frac_seq_to_weight > 0: - - assert (self.add_weight_every_n > 1) ^ (self.frac_seq_to_weight > 0), 'use either --add_weight_every_n or --frac_seq_to_weight but not both' - weight_mask = torch.zeros_like(self.aa_weights_to_add) - if add_weight_every_n > 1: - idxs_to_unmask = torch.arange(0,self.L,self.add_weight_every_n) - else: - indexs = np.arange(0,self.L).tolist() - idxs_to_unmask = random.sample(indexs,int(self.frac_seq_to_weight*self.L)) - idxs_to_unmask.sort() - - weight_mask[idxs_to_unmask,:] = 1 - self.aa_weights_to_add *= weight_mask - - if one_weight_per_position: - for p in range(self.aa_weights_to_add.shape[0]): - where_ones = torch.where(self.aa_weights_to_add[p,:] > 0)[0].tolist() - if len(where_ones) > 0: - w_sample = random.sample(where_ones,1)[0] - self.aa_weights_to_add[p,:w_sample] = 0 - self.aa_weights_to_add[p,w_sample+1:] = 0 - - elif self.aa_spec != None: - - assert self.aa_weight != None, 'please specify --aa_weight' - # Use specified repeat structure to bias sequence - - repeat_len = len(self.aa_spec) - weight_split = [float(x) for x in self.aa_weight.split(',')] - - aa_idxs = [] - for k,c in enumerate(self.aa_spec): - if c != 'X': - assert c in conversion, f'the letter you have chosen is not an amino acid: {c}' - aa_idxs.append((k,conversion.index(c))) - - if len(self.aa_weight) > 1: - assert len(aa_idxs) == len(weight_split), f'need to give same number of weights as AAs in weight spec' - - self.aa_weights_to_add = torch.zeros(self.L,21) - - for p,w in zip(aa_idxs,weight_split): - x,a = p - self.aa_weights_to_add[x,a] = w - - self.aa_weights_to_add = self.aa_weights_to_add[:repeat_len,:].repeat(self.L//repeat_len+1,1)[:self.L].to(self.DEVICE, non_blocking=True) - - elif self.aa_composition != None: - - self.aa_comp = [(x[0],float(x[1:])) for x in self.aa_composition.split(',')] - self.aa_max_potential = 0 #just a place holder so not None - assert sum([f for aa,f in self.aa_comp]) <= 1, f'total sequence fraction specified in aa_composition is > 1' - - else: - sys.exit(f'You are missing an argument to use the aa_bias potential') - - def get_gradients(self, seq): - ''' - seq = L,21 - - return gradients to update the sequence with for the next pass - ''' - - if self.aa_max_potential != None: - soft_seq = torch.softmax(seq, dim=1) - print('ADDING SOFTMAXED SEQUENCE POTENTIAL') - - aa_weights_to_add_list = [] - for aa,f in self.aa_comp: - aa_weights_to_add_copy 
= self.aa_weights_to_add.clone() - - soft_seq_tmp = soft_seq.clone().detach().requires_grad_(True) - aa_idx = conversion.index(aa) - - # get top-k probability of logit to add to - where_add = torch.topk(soft_seq_tmp[:,aa_idx], int(f*self.L))[1] - - # set up aa_potenital - aa_potential = torch.zeros(21) - aa_potential[conversion.index(aa)] = 1.0 - aa_potential = aa_potential.repeat(self.L,1).to(self.DEVICE, non_blocking=True) - - # apply "loss" - aa_comp_loss = torch.sum(torch.sum((aa_potential - soft_seq_tmp)**2, dim=1)**0.5) - - # get gradients - aa_comp_loss.backward() - update_grads = soft_seq_tmp.grad - - for k in range(self.L): - if k in where_add: - aa_weights_to_add_copy[k,:] = -update_grads[k,:]*self.potential_scale - else: - aa_weights_to_add_copy[k,:] = update_grads[k,:]*self.potential_scale - aa_weights_to_add_list.append(aa_weights_to_add_copy) - - aa_weights_to_add_array = torch.stack((aa_weights_to_add_list)) - self.aa_weights_to_add = torch.mean(aa_weights_to_add_array.float(), 0) - - - return self.aa_weights_to_add - - -class HydrophobicBias(Potential): - """ - Calculate loss with respect to soft_seq of the sequence hydropathy index (Kyte and Doolittle, 1986). - - T = number of timesteps to set up diffuser with - - schedule = type of noise schedule to use linear, cosine, gaussian - - noise = type of ditribution to sample from; DEFAULT - normal_gaussian - - """ - def __init__(self, args, features, potential_scale, DEVICE): - - self.target_score = args['hydrophobic_score'] - self.potential_scale = potential_scale - self.loss_type = args['hydrophobic_loss_type'] - print(f'USING {self.loss_type} LOSS TYPE...') - - # ----------------------------------------------------------------------- - # ---------------------GRAVY index data structures----------------------- - # ----------------------------------------------------------------------- - - # AA conversion - self.alpha_1 = list("ARNDCQEGHILKMFPSTWYVX") - - # Dictionary to convert amino acids to their hyropathy index - self.gravy_dict = {'C': 2.5, 'D': -3.5, 'S': -0.8, 'Q': -3.5, 'K': -3.9, - 'I': 4.5, 'P': -1.6, 'T': -0.7, 'F': 2.8, 'N': -3.5, - 'G': -0.4, 'H': -3.2, 'L': 3.8, 'R': -4.5, 'W': -0.9, - 'A': 1.8, 'V':4.2, 'E': -3.5, 'Y': -1.3, 'M': 1.9, 'X': 0, '-': 0} - - self.gravy_list = [self.gravy_dict[a] for a in self.alpha_1] - - # ----------------------------------------------------------------------- - # ----------------------------------------------------------------------- - - print(f'GUIDING SEQUENCES TO HAVE TARGET GRAVY SCORE OF: {self.target_score}') - return None - - - def get_gradients(self, seq): - """ - Calculate gradients with respect to GRAVY index of input seq. - Uses a MSE loss. 
- - Arguments - --------- - seq : tensor - L X 21 logits after saving seq_out from xt - - Returns - ------- - gradients : list of tensors - gradients of soft_seq with respect to loss on partial_charge - """ - # Get GRAVY matrix based on length of seq - gravy_matrix = torch.tensor(self.gravy_list)[None].repeat(seq.shape[0],1).to(DEVICE) - - # Get softmax of seq - soft_seq = torch.softmax(seq,dim=-1).requires_grad_(requires_grad=True).to(DEVICE) - - # Calculate simple MSE loss on gravy_score - if self.loss_type == 'simple': - gravy_score = torch.mean(torch.sum(soft_seq*gravy_matrix,dim=-1), dim=0) - loss = ((gravy_score - self.target_score)**2)**0.5 - #print(f'LOSS: {loss}') - # Take backward step - loss.backward() - - # Get gradients from soft_seq - self.gradients = soft_seq.grad - # plt.imshow(self.gradients.cpu().detach().numpy()) - # plt.colorbar() - # plt.title('gradients') - - # Calculate MSE loss on gravy_score - elif self.loss_type == 'complex': - loss = torch.mean((torch.sum(soft_seq*gravy_matrix, dim = -1) - self.target_score)**2) - #print(f'LOSS: {loss}') - # Take backward step - loss.backward() - - # Get gradients from soft_seq - self.gradients = soft_seq.grad - # plt.imshow(self.gradients.cpu().detach().numpy()) - # plt.colorbar() - # plt.title('gradients') - - return -self.gradients*self.potential_scale - - -class ChargeBias(Potential): - """ - Calculate losses and get gradients with respect to soft_seq for the sequence charge at a given pH. - - T = number of timesteps to set up diffuser with - - schedule = type of noise schedule to use linear, cosine, gaussian - - noise = type of ditribution to sample from; DEFAULT - normal_gaussian - - """ - def __init__(self, args, features, potential_scale, DEVICE): - - self.target_charge = args['target_charge'] - self.pH = args['target_pH'] - self.loss_type = args['charge_loss_type'] - self.potential_scale = potential_scale - self.L = features['L'] - self.DEVICE = DEVICE - - # ----------------------------------------------------------------------- - # ------------------------pI data structures----------------------------- - # ----------------------------------------------------------------------- - - # pKa lists to account for every residue. 
- pos_pKs_list = [[0.0, 12.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 5.98, 0.0, 0.0, 10.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]] - neg_pKs_list = [[0.0, 0.0, 0.0, 4.05, 9.0, 0.0, 4.45, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 10.0, 0.0, 0.0]] - cterm_pKs_list = [[0.0, 0.0, 0.0, 4.55, 0.0, 0.0, 4.75, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]] - nterm_pKs_list = [[7.59, 0.0, 0.0, 0.0, 0.0, 0.0, 7.7, 0.0, 0.0, 0.0, 0.0, 0.0, 7.0, 0.0, 8.36, 6.93, 6.82, 0.0, 0.0, 7.44, 0.0]] - - # Convert pKa lists to tensors - self.cterm_pKs = torch.tensor(cterm_pKs_list) - self.nterm_pKs = torch.tensor(nterm_pKs_list) - self.pos_pKs = torch.tensor(pos_pKs_list) - self.neg_pKs = torch.tensor(neg_pKs_list) - - # Repeat charged pKs L - 2 times to populate in all non-terminal residue indices - pos_pKs_repeat = self.pos_pKs.repeat(self.L - 2, 1) - neg_pKs_repeat = self.neg_pKs.repeat(self.L - 2, 1) - - # Concatenate all pKs tensors with N-term and C-term pKas to get full L X 21 charge matrix - self.pos_pKs_matrix = torch.cat((torch.zeros_like(self.nterm_pKs), pos_pKs_repeat, self.nterm_pKs)).to(DEVICE) - self.neg_pKs_matrix = torch.cat((self.cterm_pKs, neg_pKs_repeat, torch.zeros_like(self.cterm_pKs))).to(DEVICE) - - # Get indices of positive, neutral, and negative residues - self.cterm_charged_idx = torch.nonzero(self.cterm_pKs) - self.cterm_neutral_idx = torch.nonzero(self.cterm_pKs == 0) - self.nterm_charged_idx = torch.nonzero(self.nterm_pKs) - self.nterm_neutral_idx = torch.nonzero(self.nterm_pKs == 0) - self.pos_pKs_idx = torch.tensor([[1, 8, 11]]) - self.neg_pKs_idx = torch.tensor([[3, 4, 6, 18]]) - self.neutral_pKs_idx = torch.tensor([[0, 2, 5, 7, 9, 10, 12, 13, 14, 15, 16, 17, 19, 20]]) - - # ----------------------------------------------------------------------- - # ----------------------------------------------------------------------- - - print(f'OPTIMIZING SEQUENCE TO HAVE CHARGE = {self.target_charge}\nAT pH = {self.pH}' ) - - def sum_tensor_indices(self, indices, tensor): - total = 0 - for idx in indices: - i, j = idx[0], idx[1] - total += tensor[i][j] - return total - - def sum_tensor_indices_2(self, indices, tensor): - # Create a tensor with the appropriate dimensions - j = indices.clone().detach().long().to(self.DEVICE) - # Select the values using advanced indexing and sum along dim=-1 - row_sums = tensor[:, j].sum(dim=-1) - - # Reshape the result to an L x 1 tensor - return row_sums.reshape(-1, 1).clone().detach() - - - def make_table(self, L): - """ - Make table of all (positive, neutral, negative) charges -> (i, j, k) - such that: - i + j + k = L - (1 * i) + (0 * j) + (-1 * k) = target_charge - - Arguments: - L: int - - length of sequence, defined as seq.shape[0] - target_charge : float - - Target charge for the sequence to be guided towards - - Returns: - table: N x 3 tensor - - All combinations of i, j, k such that the above conditions are satisfied - """ - - table = [] - for i in range(L): - for j in range(L): - for k in range(L): - # Check that number of residues = L and that sum of charge (i - k) = target_charge - # and that there are no 0 entries, as having no pos, no neg, or no neutral is not realistic - if i+j+k == L and i-k == self.target_charge and i != 0 and j != 0 and k != 0: - table.append([i,j,k]) - return torch.tensor(np.array(table)) - - - def classify_resis(self, seq): - """ - Classify each position in seq as either positive, neutral, or negative. 
- Classification = max( [sum(positive residue logits), sum(neutral residue logits), sum(negative residue logits)] ) - - Arguments: - seq: L x 21 tensor - - sequence logits from the model - - Returns: - charges: tensor - - 1 x 3 tensor counting total # of each charge type in the input sequence - - charges[0] = # positive residues - - charges[1] = # neutral residues - - charges[2] = # negative residues - charge_classification: tensor - - L x 1 tensor of each position's classification. 1 is positive, 0 is neutral, -1 is negative - """ - L = seq.shape[0] - # Get softmax of seq - soft_seq = torch.softmax(seq.clone(),dim=-1).requires_grad_(requires_grad=True).to(self.DEVICE) - - # Sum the softmax of all the positive and negative charges along dim = -1 (21 amino acids): - # Sum across c-term pKs - sum_cterm_charged = self.sum_tensor_indices(self.cterm_charged_idx, soft_seq).item() - # print(f'SUM OF CTERM CHARGED RESIS: {sum_cterm_charged}') - # print(type(sum_cterm_charged.item())) - sum_cterm_neutral = self.sum_tensor_indices(self.cterm_neutral_idx, soft_seq).item() - # print(f'SUM OF CTERM NEUTRAL RESIS: {sum_cterm_neutral}') - - # Classify c-term as negative or neutral - cterm_max = max(sum_cterm_charged, sum_cterm_neutral) - # print(f'CTERM MAX: {cterm_max}') - if cterm_max == sum_cterm_charged: - cterm_class = torch.tensor([[-1]]).to(self.DEVICE) - else: - cterm_class = torch.tensor([[0]]).to(self.DEVICE) - # Prep cterm dataframe - cterm_df = torch.tensor([[0, sum_cterm_neutral, sum_cterm_charged, cterm_max, cterm_class]]).to(self.DEVICE) - - # Sum across positive, neutral, and negative pKs - sum_pos = self.sum_tensor_indices_2(self.pos_pKs_idx, soft_seq[1:L-1, ...]).to(self.DEVICE) - # print(f'SUM POS: {sum_pos}') - sum_neg = self.sum_tensor_indices_2(self.neg_pKs_idx, soft_seq[1:L-1, ...]).to(self.DEVICE) - # print(f'SUM NEG: {sum_neg}') - sum_neutral = self.sum_tensor_indices_2(self.neutral_pKs_idx, soft_seq[1:L-1, ...]).to(self.DEVICE) - # print(f'SUM NEUTRAL: {sum_neutral}') - - # Classify non-terminal residues along dim = -1 - middle_max, _ = torch.max(torch.stack((sum_pos, sum_neg, sum_neutral), dim=-1), dim=-1) - middle_max = middle_max.to(self.DEVICE) - # create an L x 1 tensor to store the result - middle_class = torch.zeros((L - 2, 1), dtype=torch.long).to(self.DEVICE) - # set the values of the result tensor based on which tensor had the maximum value - middle_class[sum_neg == middle_max] = -1 - middle_class[sum_neutral == middle_max] = 0 - middle_class[sum_pos == middle_max] = 1 - - # Prepare df of all middle residue classifications and corresponding values - middle_df = pd.DataFrame((torch.cat((sum_pos, sum_neutral, sum_neg, middle_max, middle_class), dim=-1)).detach().cpu().numpy()) - middle_df.rename(columns={0: 'sum_pos', - 1: 'sum_neutral', 2: 'sum_neg', 3: 'middle_max', 4: 'middle_classified'}, - inplace=True, errors='raise') - - # Sum across n-term pKs - sum_nterm_charged = self.sum_tensor_indices(self.nterm_charged_idx, soft_seq).to(self.DEVICE) - # print(f'SUM OF NTERM CHARGED RESIS: {sum_nterm_charged}') - sum_nterm_neutral = self.sum_tensor_indices(self.nterm_neutral_idx, soft_seq).to(self.DEVICE) - # print(f'SUM OF NTERM NEUTRAL RESIS: {sum_nterm_neutral}') - - # Classify n-term as negative or neutral - nterm_max = max(sum_nterm_charged, sum_nterm_neutral) - if nterm_max == sum_nterm_charged: - nterm_class = torch.tensor([[-1]]).to(self.DEVICE) - else: - nterm_class = torch.tensor([[0]]).to(self.DEVICE) - nterm_df = torch.tensor([[sum_nterm_charged, 
sum_nterm_neutral, 0, nterm_max, nterm_class]]).to(self.DEVICE) - - # Prep data to be concatenated into output df - middle_df_2 = (torch.cat((sum_pos, sum_neutral, sum_neg, middle_max, middle_class), dim=-1)).to(self.DEVICE) - # Concat cterm, middle, and nterm data into one master df with all summed probs, max, and final classification - full_tens_np = torch.cat((cterm_df, middle_df_2, nterm_df), dim = 0).detach().cpu().numpy() - classification_df = pd.DataFrame(full_tens_np) - classification_df.rename(columns={0: 'sum_pos', - 1: 'sum_neutral', 2: 'sum_neg', 3: 'max', 4: 'classification'}, - inplace=True, errors='raise') - # Count number of positive, neutral, and negative charges that are stored in charge_classification as 1, 0, -1 respectively - charge_classification = torch.cat((cterm_class, middle_class, nterm_class), dim = 0).to(self.DEVICE) - charges = [torch.sum(charge_classification == 1).item(), torch.sum(charge_classification == 0).item(), torch.sum(charge_classification == -1).item()] - # print('*'*100) - # print(classification_df) - - return torch.tensor(charges), classification_df - - def get_target_charge_ratios(self, table, charges): - """ - Find closest distance between x, y, z in table and i, j, k in charges - - Arguments: - table: N x 3 tensor of all combinations of positive, neutral, and negative charges that obey the conditions in make_table - charges: 1 x 3 tensor - - 1 x 3 tensor counting total # of each charge type in the input sequence - - charges[0] = # positive residues - - charges[1] = # neutral residues - - charges[2] = # negative residues - - Returns: - target_charge_tensor: tensor - - 1 x 3 tensor of closest row in table that matches charges of input sequence - """ - # Compute the difference between the charges and each row of the table - diff = table - charges - - # Compute the square of the Euclidean distance between the charges and each row of the table - sq_distance = torch.sum(diff ** 2, dim=-1) - - # Find the index of the row with the smallest distance - min_idx = torch.argmin(sq_distance) - - # Return the smallest distance and the corresponding row of the table - target_charge_tensor = torch.sqrt(sq_distance[min_idx]), table[min_idx] - #print(f'CLOSEST COMBINATION OF VALID RESIDUES: {target_charge_tensor[1]}') - return target_charge_tensor[1] - - def draft_resis(self, classification_df, target_charge_tensor): - """ - Based on target_charge_tensor, draft the top (i, j, k) positive, neutral, and negative positions from - charge_classification and return the idealized guided_charge_classification. - guided_charge_classification will determine whether the gradients should be positive or negative - - Draft pick algorithm for determining gradient guided_charge_classification: - 1) Define how many positive, negative, and neutral charges are needed - 2) Current charge being drafted = sign of target charge, otherwise opposite charge - 3) From the classification_df of the currently sampled sequence, choose the position with the highest probability of being current_charge - 4) Make that residue +1, 0, or -1 in guided_charge_classification to dictate the sign of gradients - 5) Keep drafting that residue charge until it is used up, then move to the next type - - Arguments: - classification_df: tensor - - L x 1 tensor of each position's classification. 
1 is positive, 0 is neutral, -1 is negative - target_charge_tensor: tensor - - 1 x 3 tensor of closest row in table that matches charges of input sequence - - Returns: - guided_charge_classification: L x 1 tensor - - L x 1 tensor populated with 1 = positive, 0 = neutral, -1 = negative - - in get_gradients, multiply the gradients by guided_charge_classification to determine which direction - the gradients should guide toward based on the current sequence distribution and the target charge - """ - charge_dict = {'pos': 0, 'neutral': 0, 'neg': 0} - # Define the target number of positive, neutral, and negative charges - charge_dict['pos'] = target_charge_tensor[0].detach().clone() - charge_dict['neutral'] = target_charge_tensor[1].detach().clone() - charge_dict['neg'] = target_charge_tensor[2].detach().clone() - # Determine which charge to start drafting - if self.target_charge > 0: - start_charge = 'pos' - elif self.target_charge < 0: - start_charge = 'neg' - else: - start_charge = 'neutral' - - # Initialize guided_charge_classification - guided_charge_classification = torch.zeros((classification_df.shape[0], 1)) - - # Start drafting - draft_charge = start_charge - while charge_dict[draft_charge] > 0: - # Find the residue with the max probability for the current draft charge - max_residue_idx = classification_df.loc[:, ['sum_' + draft_charge]].idxmax()[0] - # print(max_residue_idx[0]) - # print(type(max_residue_idx)) - #print(f'MAX RESIDUE INDEX for {draft_charge}: {max_residue_idx}') - # Populate guided_charge_classification with the appropriate charge - if draft_charge == 'pos': - guided_charge_classification[max_residue_idx] = 1 - elif draft_charge == 'neg': - guided_charge_classification[max_residue_idx] = -1 - else: - guided_charge_classification[max_residue_idx] = 0 - # Remove selected row from classification_df - classification_df = classification_df.drop(max_residue_idx) - # print(classification_df) - # Update charges dictionary - charge_dict[draft_charge] -= 1 - #print(f'{charge_dict[draft_charge]} {draft_charge} residues left to draft...') - # Switch to the other charged residue if the starting charge has been depleted - if charge_dict[draft_charge] == 0: - if draft_charge == start_charge: - draft_charge = 'neg' if start_charge == 'pos' else 'pos' - elif draft_charge == 'neg': - draft_charge = 'pos' - elif draft_charge == 'pos': - draft_charge = 'neg' - else: - draft_charge = 'neutral' - - return guided_charge_classification.requires_grad_() - - def get_gradients(self, seq):#, guided_charge_classification): - """ - Calculate gradients with respect to SEQUENCE CHARGE at pH. - Uses a MSE loss. 
- - Arguments - --------- - seq : tensor - L X 21 logits after saving seq_out from xt - - Returns - ------- - gradients : list of tensors - gradients of soft_seq with respect to loss on partial_charge - """ - # Get softmax of seq - # soft_seq = torch.softmax(seq.clone(),dim=-1).requires_grad_(requires_grad=True).to(DEVICE) - soft_seq = torch.softmax(seq,dim=-1).requires_grad_(requires_grad=True).to(DEVICE) - - # Get partial positive charges only for titratable residues - pos_charge = torch.where(self.pos_pKs_matrix != 0, ((1) / (((10) ** ((self.pH) - self.pos_pKs_matrix)) + (1.0))), (0.0)) - neg_charge = torch.where(self.neg_pKs_matrix != 0, ((1) / (((10) ** (self.neg_pKs_matrix - (self.pH))) + (1.0))), (0.0)) - # partial_charge = torch.sum((soft_seq*(pos_charge - neg_charge)).requires_grad_(requires_grad=True)) - - - if self.loss_type == 'simple': - # Calculate net partial charge of soft_seq - partial_charge = torch.sum((soft_seq*(pos_charge - neg_charge)).requires_grad_(requires_grad=True)) - - print(f'CURRENT PARTIAL CHARGE: {partial_charge.item()}') - # Calculate MSE loss on partial_charge - loss = ((partial_charge - self.target_charge)**2)**0.5 - #print(f'LOSS: {loss}') - # Take backward step - loss.backward() - # Get gradients from soft_seq - self.gradients = soft_seq.grad - - # plt.imshow(self.gradients) - # plt.colorbar() - # plt.title('gradients') - - elif self.loss_type == 'simple2': - # Calculate net partial charge of soft_seq - # partial_charge = torch.sum((soft_seq*(pos_charge - neg_charge)).requires_grad_(requires_grad=True)) - - print(f'CURRENT PARTIAL CHARGE: {partial_charge.item()}') - # Calculate MSE loss on partial_charge - loss = (((torch.sum((soft_seq*(pos_charge - neg_charge)).requires_grad_(requires_grad=True))) - - self.target_charge)**2)**0.5 - #print(f'LOSS: {loss}') - # Take backward step - loss.backward() - # Get gradients from soft_seq - self.gradients = soft_seq.grad - - # plt.imshow(self.gradients) - # plt.colorbar() - # plt.title('gradients') - - elif self.loss_type == 'complex': - # Preprocessing using method functions - table = self.make_table(seq.shape[0]) - charges, classification_df = self.classify_resis(seq) - target_charge_tensor = self.get_target_charge_ratios(table, charges) - guided_charge_classification = self.draft_resis(classification_df, target_charge_tensor) - - # Calculate net partial charge of soft_seq - soft_partial_charge = (soft_seq*(pos_charge - neg_charge)) - # print(f'SOFT PARTIAL CHARGE SHAPE: {soft_partial_charge.shape}') - # Define partial charge as the sum of softmax * partial charge matrix - partial_charge = torch.sum(soft_partial_charge, dim=-1).requires_grad_() - #print(partial_charge) - # partial_charge = torch.sum((soft_seq*(pos_charge - neg_charge)).requires_grad_(requires_grad=True)) - print(f'CURRENT PARTIAL CHARGE: {partial_charge.sum().item()}') - - # print(f'DIFFERENCE BETWEEN TARGET CHARGES AND CURRENT CHARGES: {((guided_charge_classification.to(self.DEVICE) - partial_charge.unsqueeze(1).to(self.DEVICE))**2)**0.5}') - - # Calculate loss on partial_charge - loss = torch.mean(((guided_charge_classification.to(self.DEVICE) - partial_charge.unsqueeze(1).to(self.DEVICE))**2)**0.5) - # loss = torch.mean((guided_charge_classification.to(self.DEVICE) - partial_charge.to(self.DEVICE))**2) - #print(f'LOSS: {loss}') - # Take backward step - loss.backward() - # Get gradients from soft_seq - self.gradients = soft_seq.grad - - # print(f'GUIDED CHARGE CLASSIFICATION SHAPE: {guided_charge_classification.shape}') - # print(f'PARTIAL 
CHARGE SHAPE: {partial_charge.unsqueeze(1).shape}') - # print(partial_charge) - # fig, ax = plt.subplots(1,2, dpi=200) - # ax[0].imshow((partial_charge.unsqueeze(1)).detach().numpy()) - # ax[0].set_title('soft_seq partial charge') - # ax[1].imshow(self.gradients)#.detach().numpy()) - # ax[1].set_title('gradients') - # print(seq) - return -self.gradients*self.potential_scale - -class PSSMbias(Potential): - - def __init__(self, args, features, potential_scale, DEVICE): - - self.features = features - self.args = args - self.potential_scale = potential_scale - self.DEVICE = DEVICE - self.PSSM = np.loadtxt(args['PSSM'], delimiter=",", dtype=float) - self.PSSM = torch.from_numpy(self.PSSM).to(self.DEVICE) - - def get_gradients(self, seq): - print(seq.shape) - - - return self.PSSM*self.potential_scale - -### ADD NEW POTENTIALS INTO LIST DOWN BELOW ### -POTENTIALS = {'aa_bias':AACompositionalBias, 'charge':ChargeBias, 'hydrophobic':HydrophobicBias, 'PSSM':PSSMbias} diff --git a/spaces/merve/hidden-bias/public/private-and-fair/accuracy-v-privacy-dataset_size.js b/spaces/merve/hidden-bias/public/private-and-fair/accuracy-v-privacy-dataset_size.js deleted file mode 100644 index cd196da1ca712ff733e5e03de4258effba0478a3..0000000000000000000000000000000000000000 --- a/spaces/merve/hidden-bias/public/private-and-fair/accuracy-v-privacy-dataset_size.js +++ /dev/null @@ -1,157 +0,0 @@ -!(async function(){ - var data = await util.getFile('cns-cache/model_grid_test_accuracy.json') - - data = data - .filter(d => util.epsilonExtent[1] <= d.epsilon && d.epsilon <= util.epsilonExtent[0]) - .filter(d => d.dataset_size > 1000) - - // .filter(d => d.dataset_size > 4000) - - // console.log(data) - - var bySize = d3.nestBy(data, d => d.dataset_size) - bySize.forEach((d, i) => { - d.dataset_size = d.key - - d.color = d3.interpolatePlasma(.84- i/6) - if (d.key == 60000){ - d3.selectAll('.tp60').st({background: d.color, padding: 2}) - } - if (d.key == 7500){ - d3.selectAll('.tp75').st({background: d.color, color: '#fff', padding: 2}) - } - - d.label = { - 60000: {pos: [7, 11], textAnchor: 'middle', text: '60,000'}, - 30000: {pos: [7, 11], textAnchor: 'middle', text: '30,000'}, - 15000: {pos: [7, -5], textAnchor: 'start', text: '15,000'}, - 7500: {pos: [0, 8], textAnchor: 'start', text: '7,500'}, - // 3750: {pos: [0, 14], textAnchor: 'end', text: '3,750 training points'}, - 3750: {pos: [-34, 10], textAnchor: 'start', text: '3,750'}, - 2000: {pos: [-50, 10], textAnchor: 'end', text: '2,000 training points'}, - }[d.key] - - d.forEach(e => e.size = d) - }) - - var sel = d3.select('.accuracy-v-privacy-dataset_size').html('') - .at({role: 'graphics-document', 'aria-label': `High privacy and accuracy requires more training data. 
Line chart showing too much differential privacy without enough data decreases accuracy.`}) - - sel.append('div.chart-title').text('High privacy and accuracy requires more training data') - - var c = d3.conventions({ - sel, - height: 400, - margin: {bottom: 125, top: 5}, - layers: 'sd', - }) - - c.x = d3.scaleLog().domain(util.epsilonExtent).range(c.x.range()) - c.xAxis = d3.axisBottom(c.x).tickFormat(d => { - var rv = d + '' - if (rv.split('').filter(d => d !=0 && d != '.')[0] == 1) return rv - }) - - c.yAxis.tickFormat(d => d3.format('.0%')(d))//.ticks(8) - - d3.drawAxis(c) - util.addAxisLabel(c, 'Higher Privacy →', 'Test Accuracy') - util.ggPlotBg(c, false) - c.layers[1].append('div') - .st({fontSize: 12, color: '#555', width: 120*2, textAlign: 'center', lineHeight: '1.3em'}) - .translate([c.width/2 - 120, c.height + 70]) - .html('in ε, a measure of how much modifying a single training point can change the model (models with a lower ε are more private)') - - - c.svg.selectAll('.y .tick').filter(d => d == .9) - .select('text').st({fontWeight: 600}).parent() - .append('path') - .at({stroke: '#000', strokeDasharray: '2 2', d: 'M 0 0 H ' + c.width}) - - var line = d3.line() - .x(d => c.x(d.epsilon)) - .y(d => c.y(d.accuracy)) - .curve(d3.curveMonotoneX) - - - var lineSel = c.svg.append('g').appendMany('path.accuracy-line', bySize) - .at({ - d: line, - fill: 'none', - }) - .st({ stroke: d => d.color, }) - .on('mousemove', setActiveDigit) - - var circleSel = c.svg.append('g') - .appendMany('g.accuracy-circle', data) - .translate(d => [c.x(d.epsilon), c.y(d.accuracy)]) - .on('mousemove', setActiveDigit) - // .call(d3.attachTooltip) - - circleSel.append('circle') - .at({r: 4, stroke: '#fff'}) - .st({fill: d => d.size.color }) - - - var labelSel = c.svg.appendMany('g.accuracy-label', bySize) - .translate(d => [c.x(d[0].epsilon), c.y(d[0].accuracy)]) - labelSel.append('text') - .filter(d => d.label) - .translate(d => d.label.pos) - .st({fill: d => d.color, fontWeight: 400}) - .at({textAnchor: d => d.label.textAnchor, fontSize: 14, fill: '#000', dy: '.66em'}) - .text(d => d.label.text) - .filter(d => d.key == 2000) - .text('') - .tspans(d => d.label.text.split(' ')) - - - c.svg.append('text.annotation') - .translate([225, 106]) - .tspans(d3.wordwrap('With limited data, adding more differential privacy improves accuracy...', 25), 12) - - c.svg.append('text.annotation') - .translate([490, 230]) - .tspans(d3.wordwrap(`...until it doesn't`, 20)) - - // setActiveDigit({dataset_size: 60000}) - function setActiveDigit({dataset_size}){ - lineSel - .classed('active', 0) - .filter(d => d.dataset_size == dataset_size) - .classed('active', 1) - .raise() - - circleSel - .classed('active', 0) - .filter(d => d.dataset_size == dataset_size) - .classed('active', 1) - .raise() - - labelSel - .classed('active', 0) - .filter(d => d.dataset_size == dataset_size) - .classed('active', 1) - } -})() - - - - -// aVal: 0.5 -// accuracy: 0.8936 -// accuracy_0: 0.9663265306122449 -// accuracy_1: 0.9806167400881057 -// accuracy_2: 0.9011627906976745 -// accuracy_3: 0.8633663366336634 -// accuracy_4: 0.8859470468431772 -// accuracy_5: 0.8733183856502242 -// accuracy_6: 0.9384133611691023 -// accuracy_7: 0.8657587548638133 -// accuracy_8: 0.8059548254620124 -// accuracy_9: 0.8434093161546086 -// dataset_size: 60000 -// epochs: 4 -// epsilon: 0.19034890168775565 -// l2_norm_clip: 0.75 -// noise_multiplier: 2.6 diff --git a/spaces/merve/uncertainty-calibration/source/fill-in-the-blank/README.md 
b/spaces/merve/uncertainty-calibration/source/fill-in-the-blank/README.md deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/mfrashad/ClothingGAN/models/stylegan2/stylegan2-pytorch/README.md b/spaces/mfrashad/ClothingGAN/models/stylegan2/stylegan2-pytorch/README.md deleted file mode 100644 index 325c7b4fe1ee3e4b72f48c0849b0c4a7136f368d..0000000000000000000000000000000000000000 --- a/spaces/mfrashad/ClothingGAN/models/stylegan2/stylegan2-pytorch/README.md +++ /dev/null @@ -1,83 +0,0 @@ -# StyleGAN 2 in PyTorch - -Implementation of Analyzing and Improving the Image Quality of StyleGAN (https://arxiv.org/abs/1912.04958) in PyTorch - -## Notice - -I have tried to match official implementation as close as possible, but maybe there are some details I missed. So please use this implementation with care. - -## Requirements - -I have tested on: - -* PyTorch 1.3.1 -* CUDA 10.1/10.2 - -## Usage - -First create lmdb datasets: - -> python prepare_data.py --out LMDB_PATH --n_worker N_WORKER --size SIZE1,SIZE2,SIZE3,... DATASET_PATH - -This will convert images to jpeg and pre-resizes it. This implementation does not use progressive growing, but you can create multiple resolution datasets using size arguments with comma separated lists, for the cases that you want to try another resolutions later. - -Then you can train model in distributed settings - -> python -m torch.distributed.launch --nproc_per_node=N_GPU --master_port=PORT train.py --batch BATCH_SIZE LMDB_PATH - -train.py supports Weights & Biases logging. If you want to use it, add --wandb arguments to the script. - -### Convert weight from official checkpoints - -You need to clone official repositories, (https://github.com/NVlabs/stylegan2) as it is requires for load official checkpoints. - -Next, create a conda environment with TF-GPU and Torch-CPU (using GPU for both results in CUDA version mismatches):
        -`conda create -n tf_torch python=3.7 requests tensorflow-gpu=1.14 cudatoolkit=10.0 numpy=1.14 pytorch=1.6 torchvision cpuonly -c pytorch` - -For example, if you cloned repositories in ~/stylegan2 and downloaded stylegan2-ffhq-config-f.pkl, You can convert it like this: - -> python convert_weight.py --repo ~/stylegan2 stylegan2-ffhq-config-f.pkl - -This will create converted stylegan2-ffhq-config-f.pt file. - -If using GCC, you might have to set `-D_GLIBCXX_USE_CXX11_ABI=1` in `~/stylegan2/dnnlib/tflib/custom_ops.py`. - -### Generate samples - -> python generate.py --sample N_FACES --pics N_PICS --ckpt PATH_CHECKPOINT - -You should change your size (--size 256 for example) if you train with another dimension. - -### Project images to latent spaces - -> python projector.py --ckpt [CHECKPOINT] --size [GENERATOR_OUTPUT_SIZE] FILE1 FILE2 ... - -## Pretrained Checkpoints - -[Link](https://drive.google.com/open?id=1PQutd-JboOCOZqmd95XWxWrO8gGEvRcO) - -I have trained the 256px model on FFHQ 550k iterations. I got FID about 4.5. Maybe data preprocessing, resolution, training loop could made this difference, but currently I don't know the exact reason of FID differences. - -## Samples - -![Sample with truncation](doc/sample.png) - -At 110,000 iterations. (trained on 3.52M images) - -### Samples from converted weights - -![Sample from FFHQ](doc/stylegan2-ffhq-config-f.png) - -Sample from FFHQ (1024px) - -![Sample from LSUN Church](doc/stylegan2-church-config-f.png) - -Sample from LSUN Church (256px) - -## License - -Model details and custom CUDA kernel codes are from official repostiories: https://github.com/NVlabs/stylegan2 - -Codes for Learned Perceptual Image Patch Similarity, LPIPS came from https://github.com/richzhang/PerceptualSimilarity - -To match FID scores more closely to tensorflow official implementations, I have used FID Inception V3 implementations in https://github.com/mseitzer/pytorch-fid diff --git a/spaces/miaomiaoren/vits-uma-genshin-honkai/attentions.py b/spaces/miaomiaoren/vits-uma-genshin-honkai/attentions.py deleted file mode 100644 index 86bc73b5fe98cc7b443e9078553920346c996707..0000000000000000000000000000000000000000 --- a/spaces/miaomiaoren/vits-uma-genshin-honkai/attentions.py +++ /dev/null @@ -1,300 +0,0 @@ -import math -import torch -from torch import nn -from torch.nn import functional as F - -import commons -from modules import LayerNorm - - -class Encoder(nn.Module): - def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., window_size=4, **kwargs): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.window_size = window_size - - self.drop = nn.Dropout(p_dropout) - self.attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, window_size=window_size)) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout)) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask): - attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = 
self.attn_layers[i](x, x, attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class Decoder(nn.Module): - def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., proximal_bias=False, proximal_init=True, **kwargs): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - - self.drop = nn.Dropout(p_dropout) - self.self_attn_layers = nn.ModuleList() - self.norm_layers_0 = nn.ModuleList() - self.encdec_attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.self_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, proximal_bias=proximal_bias, proximal_init=proximal_init)) - self.norm_layers_0.append(LayerNorm(hidden_channels)) - self.encdec_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout)) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout, causal=True)) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask, h, h_mask): - """ - x: decoder input - h: encoder output - """ - self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to(device=x.device, dtype=x.dtype) - encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.self_attn_layers[i](x, x, self_attn_mask) - y = self.drop(y) - x = self.norm_layers_0[i](x + y) - - y = self.encdec_attn_layers[i](x, h, encdec_attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class MultiHeadAttention(nn.Module): - def __init__(self, channels, out_channels, n_heads, p_dropout=0., window_size=None, heads_share=True, block_length=None, proximal_bias=False, proximal_init=False): - super().__init__() - assert channels % n_heads == 0 - - self.channels = channels - self.out_channels = out_channels - self.n_heads = n_heads - self.p_dropout = p_dropout - self.window_size = window_size - self.heads_share = heads_share - self.block_length = block_length - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - self.attn = None - - self.k_channels = channels // n_heads - self.conv_q = nn.Conv1d(channels, channels, 1) - self.conv_k = nn.Conv1d(channels, channels, 1) - self.conv_v = nn.Conv1d(channels, channels, 1) - self.conv_o = nn.Conv1d(channels, out_channels, 1) - self.drop = nn.Dropout(p_dropout) - - if window_size is not None: - n_heads_rel = 1 if heads_share else n_heads - rel_stddev = self.k_channels**-0.5 - self.emb_rel_k = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev) - self.emb_rel_v = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev) - - nn.init.xavier_uniform_(self.conv_q.weight) - nn.init.xavier_uniform_(self.conv_k.weight) - nn.init.xavier_uniform_(self.conv_v.weight) - 
if proximal_init: - with torch.no_grad(): - self.conv_k.weight.copy_(self.conv_q.weight) - self.conv_k.bias.copy_(self.conv_q.bias) - - def forward(self, x, c, attn_mask=None): - q = self.conv_q(x) - k = self.conv_k(c) - v = self.conv_v(c) - - x, self.attn = self.attention(q, k, v, mask=attn_mask) - - x = self.conv_o(x) - return x - - def attention(self, query, key, value, mask=None): - # reshape [b, d, t] -> [b, n_h, t, d_k] - b, d, t_s, t_t = (*key.size(), query.size(2)) - query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3) - key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - - scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1)) - if self.window_size is not None: - assert t_s == t_t, "Relative attention is only available for self-attention." - key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s) - rel_logits = self._matmul_with_relative_keys(query /math.sqrt(self.k_channels), key_relative_embeddings) - scores_local = self._relative_position_to_absolute_position(rel_logits) - scores = scores + scores_local - if self.proximal_bias: - assert t_s == t_t, "Proximal bias is only available for self-attention." - scores = scores + self._attention_bias_proximal(t_s).to(device=scores.device, dtype=scores.dtype) - if mask is not None: - scores = scores.masked_fill(mask == 0, -1e4) - if self.block_length is not None: - assert t_s == t_t, "Local attention is only available for self-attention." - block_mask = torch.ones_like(scores).triu(-self.block_length).tril(self.block_length) - scores = scores.masked_fill(block_mask == 0, -1e4) - p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s] - p_attn = self.drop(p_attn) - output = torch.matmul(p_attn, value) - if self.window_size is not None: - relative_weights = self._absolute_position_to_relative_position(p_attn) - value_relative_embeddings = self._get_relative_embeddings(self.emb_rel_v, t_s) - output = output + self._matmul_with_relative_values(relative_weights, value_relative_embeddings) - output = output.transpose(2, 3).contiguous().view(b, d, t_t) # [b, n_h, t_t, d_k] -> [b, d, t_t] - return output, p_attn - - def _matmul_with_relative_values(self, x, y): - """ - x: [b, h, l, m] - y: [h or 1, m, d] - ret: [b, h, l, d] - """ - ret = torch.matmul(x, y.unsqueeze(0)) - return ret - - def _matmul_with_relative_keys(self, x, y): - """ - x: [b, h, l, d] - y: [h or 1, m, d] - ret: [b, h, l, m] - """ - ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1)) - return ret - - def _get_relative_embeddings(self, relative_embeddings, length): - max_relative_position = 2 * self.window_size + 1 - # Pad first before slice to avoid using cond ops. - pad_length = max(length - (self.window_size + 1), 0) - slice_start_position = max((self.window_size + 1) - length, 0) - slice_end_position = slice_start_position + 2 * length - 1 - if pad_length > 0: - padded_relative_embeddings = F.pad( - relative_embeddings, - commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]])) - else: - padded_relative_embeddings = relative_embeddings - used_relative_embeddings = padded_relative_embeddings[:,slice_start_position:slice_end_position] - return used_relative_embeddings - - def _relative_position_to_absolute_position(self, x): - """ - x: [b, h, l, 2*l-1] - ret: [b, h, l, l] - """ - batch, heads, length, _ = x.size() - # Concat columns of pad to shift from relative to absolute indexing. 
- x = F.pad(x, commons.convert_pad_shape([[0,0],[0,0],[0,0],[0,1]])) - - # Concat extra elements so to add up to shape (len+1, 2*len-1). - x_flat = x.view([batch, heads, length * 2 * length]) - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0,0],[0,0],[0,length-1]])) - - # Reshape and slice out the padded elements. - x_final = x_flat.view([batch, heads, length+1, 2*length-1])[:, :, :length, length-1:] - return x_final - - def _absolute_position_to_relative_position(self, x): - """ - x: [b, h, l, l] - ret: [b, h, l, 2*l-1] - """ - batch, heads, length, _ = x.size() - # padd along column - x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length-1]])) - x_flat = x.view([batch, heads, length**2 + length*(length -1)]) - # add 0's in the beginning that will skew the elements after reshape - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]])) - x_final = x_flat.view([batch, heads, length, 2*length])[:,:,:,1:] - return x_final - - def _attention_bias_proximal(self, length): - """Bias for self-attention to encourage attention to close positions. - Args: - length: an integer scalar. - Returns: - a Tensor with shape [1, 1, length, length] - """ - r = torch.arange(length, dtype=torch.float32) - diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1) - return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0) - - -class FFN(nn.Module): - def __init__(self, in_channels, out_channels, filter_channels, kernel_size, p_dropout=0., activation=None, causal=False): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.activation = activation - self.causal = causal - - if causal: - self.padding = self._causal_padding - else: - self.padding = self._same_padding - - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size) - self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size) - self.drop = nn.Dropout(p_dropout) - - def forward(self, x, x_mask): - x = self.conv_1(self.padding(x * x_mask)) - if self.activation == "gelu": - x = x * torch.sigmoid(1.702 * x) - else: - x = torch.relu(x) - x = self.drop(x) - x = self.conv_2(self.padding(x * x_mask)) - return x * x_mask - - def _causal_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = self.kernel_size - 1 - pad_r = 0 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x - - def _same_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = (self.kernel_size - 1) // 2 - pad_r = self.kernel_size // 2 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x diff --git a/spaces/mithril-security/blind_chat/src/routes/tools.ts b/spaces/mithril-security/blind_chat/src/routes/tools.ts deleted file mode 100644 index 7d0e7eeda22eca3da8f24573d65cef5e2e736df0..0000000000000000000000000000000000000000 --- a/spaces/mithril-security/blind_chat/src/routes/tools.ts +++ /dev/null @@ -1,27 +0,0 @@ -export async function getApiKey() { - try { - const response = await fetch("https://cloud.mithrilsecurity.io/api/apiKeys/chat", { - method: "GET", - credentials: "include", - headers: { - "Content-Type": "application/json", - }, - }); - - if (response.ok) { - // Parse the JSON response - const data = await response.json(); - - const apiKeyValue = data.value; - - return apiKeyValue; - } else { - // Handle errors - console.error("API 
Key retrieval failed"); - } - } catch (error) { - // Handle network errors - console.error("Network error", error); - } - return ""; -} \ No newline at end of file diff --git a/spaces/mithril-security/blind_chat/vite.config.ts b/spaces/mithril-security/blind_chat/vite.config.ts deleted file mode 100644 index 00ebb7ef2d30fb7e30cd4d1b8329b4b5026ebb2d..0000000000000000000000000000000000000000 --- a/spaces/mithril-security/blind_chat/vite.config.ts +++ /dev/null @@ -1,15 +0,0 @@ -import { sveltekit } from "@sveltejs/kit/vite"; -import { defineConfig, searchForWorkspaceRoot } from "vite"; -import Icons from "unplugin-icons/vite"; - -export default defineConfig({ - plugins: [ - sveltekit(), - Icons({ - compiler: "svelte", - }), - ], - server: { - fs: {}, - }, -}); diff --git a/spaces/mmlab-ntu/Segment-Any-RGBD/open_vocab_seg/modeling/transformer/open_vocab_transformer_predictor.py b/spaces/mmlab-ntu/Segment-Any-RGBD/open_vocab_seg/modeling/transformer/open_vocab_transformer_predictor.py deleted file mode 100644 index 0efee3e14c71400a1cc5a55ea6c21b6876189aaa..0000000000000000000000000000000000000000 --- a/spaces/mmlab-ntu/Segment-Any-RGBD/open_vocab_seg/modeling/transformer/open_vocab_transformer_predictor.py +++ /dev/null @@ -1,84 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# Modified by Bowen Cheng from: https://github.com/facebookresearch/detr/blob/master/models/detr.py -# Copyright (c) Meta Platforms, Inc. All Rights Reserved - -from torch import nn -from detectron2.config import configurable -from .transformer_predictor import TransformerPredictor, MLP - - -class OpenVocabTransformerPredictor(TransformerPredictor): - @configurable - def __init__( - self, - in_channels, - mask_classification=True, - *, - embedding_dim: int, - embed_hidden_dim: int, - embed_layers: int, - hidden_dim: int, - num_queries: int, - nheads: int, - dropout: float, - dim_feedforward: int, - enc_layers: int, - dec_layers: int, - pre_norm: bool, - deep_supervision: bool, - mask_dim: int, - enforce_input_project: bool, - ): - super().__init__( - in_channels, - False, - num_classes=embedding_dim, - hidden_dim=hidden_dim, - num_queries=num_queries, - nheads=nheads, - dropout=dropout, - dim_feedforward=dim_feedforward, - enc_layers=enc_layers, - dec_layers=dec_layers, - pre_norm=pre_norm, - deep_supervision=deep_supervision, - mask_dim=mask_dim, - enforce_input_project=enforce_input_project, - ) - self.mask_classification = mask_classification - # output FFNs - if self.mask_classification: - self.class_embed = MLP( - hidden_dim, embed_hidden_dim, embedding_dim, embed_layers - ) - - def freeze_pretrained(self): - for name, module in self.named_children(): - if name not in ["class_embed"]: - for param in module.parameters(): - param.requires_grad = False - - @classmethod - def from_config(cls, cfg, in_channels, mask_classification): - ret = {} - ret["in_channels"] = in_channels - ret["mask_classification"] = mask_classification - - ret["embedding_dim"] = cfg.MODEL.SEM_SEG_HEAD.EMBEDDING_DIM - ret["embed_hidden_dim"] = cfg.MODEL.SEM_SEG_HEAD.EMBED_HIDDEN_DIM - ret["embed_layers"] = cfg.MODEL.SEM_SEG_HEAD.EMBED_LAYERS - ret["hidden_dim"] = cfg.MODEL.MASK_FORMER.HIDDEN_DIM - ret["num_queries"] = cfg.MODEL.MASK_FORMER.NUM_OBJECT_QUERIES - # Transformer parameters: - ret["nheads"] = cfg.MODEL.MASK_FORMER.NHEADS - ret["dropout"] = cfg.MODEL.MASK_FORMER.DROPOUT - ret["dim_feedforward"] = cfg.MODEL.MASK_FORMER.DIM_FEEDFORWARD - ret["enc_layers"] = cfg.MODEL.MASK_FORMER.ENC_LAYERS - ret["dec_layers"] = 
cfg.MODEL.MASK_FORMER.DEC_LAYERS - ret["pre_norm"] = cfg.MODEL.MASK_FORMER.PRE_NORM - ret["deep_supervision"] = cfg.MODEL.MASK_FORMER.DEEP_SUPERVISION - ret["enforce_input_project"] = cfg.MODEL.MASK_FORMER.ENFORCE_INPUT_PROJ - - ret["mask_dim"] = cfg.MODEL.SEM_SEG_HEAD.MASK_DIM - - return ret diff --git a/spaces/monra/freegpt-webui/g4f/Provider/Providers/Phind.py b/spaces/monra/freegpt-webui/g4f/Provider/Providers/Phind.py deleted file mode 100644 index 9fa8ec821f701d7841432e498a11ac9dd017978c..0000000000000000000000000000000000000000 --- a/spaces/monra/freegpt-webui/g4f/Provider/Providers/Phind.py +++ /dev/null @@ -1,36 +0,0 @@ -import os -import json -import time -import subprocess - -from ...typing import sha256, Dict, get_type_hints - -url = 'https://phind.com' -model = ['gpt-4'] -supports_stream = True - -def _create_completion(model: str, messages: list, stream: bool, **kwargs): - - path = os.path.dirname(os.path.realpath(__file__)) - config = json.dumps({ - 'model': model, - 'messages': messages}, separators=(',', ':')) - - cmd = ['python', f'{path}/helpers/phind.py', config] - - p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT) - - for line in iter(p.stdout.readline, b''): - if b'Just a moment...' in line: - os.system('clear' if os.name == 'posix' else 'cls') - yield 'Clouflare error, please try again...' - os._exit(0) - - else: - if b'ping - 2023-' in line: - continue - - yield line.decode('cp1251') #[:-1] - -params = f'g4f.Providers.{os.path.basename(__file__)[:-3]} supports: ' + \ - '(%s)' % ', '.join([f"{name}: {get_type_hints(_create_completion)[name].__name__}" for name in _create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]]) \ No newline at end of file diff --git a/spaces/mrdbourke/foodvision_mini/model.py b/spaces/mrdbourke/foodvision_mini/model.py deleted file mode 100644 index 52c2696c874740179528f0bdae8ce87b774a138f..0000000000000000000000000000000000000000 --- a/spaces/mrdbourke/foodvision_mini/model.py +++ /dev/null @@ -1,36 +0,0 @@ -import torch -import torchvision - -from torch import nn - - -def create_effnetb2_model(num_classes:int=3, - seed:int=42): - """Creates an EfficientNetB2 feature extractor model and transforms. - - Args: - num_classes (int, optional): number of classes in the classifier head. - Defaults to 3. - seed (int, optional): random seed value. Defaults to 42. - - Returns: - model (torch.nn.Module): EffNetB2 feature extractor model. - transforms (torchvision.transforms): EffNetB2 image transforms. - """ - # Create EffNetB2 pretrained weights, transforms and model - weights = torchvision.models.EfficientNet_B2_Weights.DEFAULT - transforms = weights.transforms() - model = torchvision.models.efficientnet_b2(weights=weights) - - # Freeze all layers in base model - for param in model.parameters(): - param.requires_grad = False - - # Change classifier head with random seed for reproducibility - torch.manual_seed(seed) - model.classifier = nn.Sequential( - nn.Dropout(p=0.3, inplace=True), - nn.Linear(in_features=1408, out_features=num_classes), - ) - - return model, transforms diff --git a/spaces/mshukor/UnIVAL/fairseq/fairseq/modules/cross_entropy.py b/spaces/mshukor/UnIVAL/fairseq/fairseq/modules/cross_entropy.py deleted file mode 100644 index 6f33c24cb56e25f91595009af38e63784c2263a0..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/fairseq/fairseq/modules/cross_entropy.py +++ /dev/null @@ -1,61 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
-# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import logging - -import torch -import torch.nn.functional as F - - -logger = logging.getLogger(__name__) - - -def _cross_entropy_pytorch(logits, target, ignore_index=None, reduction="mean"): - lprobs = F.log_softmax(logits, dim=-1, dtype=torch.float32) - return F.nll_loss( - lprobs, - target, - ignore_index=ignore_index, - reduction=reduction, - ) - - -try: - import xentropy_cuda - from apex.contrib import xentropy - - def cross_entropy(logits, target, ignore_index=-100, reduction="mean"): - if logits.device == torch.device("cpu"): - return _cross_entropy_pytorch(logits, target, ignore_index, reduction) - else: - if not getattr(cross_entropy, "_has_logged_once", False): - logger.info("using fused cross entropy") - cross_entropy._has_logged_once = True - - half_to_float = logits.dtype == torch.half - losses = xentropy.SoftmaxCrossEntropyLoss.apply( - logits, - target, - 0.0, - ignore_index, - half_to_float, - ) - if reduction == "sum": - return losses.sum() - elif reduction == "mean": - if ignore_index >= 0: - return losses.sum() / target.ne(ignore_index).sum() - else: - return losses.mean() - elif reduction == "none": - return losses - else: - raise NotImplementedError - - -except ImportError: - - def cross_entropy(logits, target, ignore_index=-100, reduction="mean"): - return _cross_entropy_pytorch(logits, target, ignore_index, reduction) diff --git a/spaces/mshukor/UnIVAL/slurm_adastra/averaging/caption/ofa_wacaption_vqasnliground_caption_stage_1_lr1e5.sh b/spaces/mshukor/UnIVAL/slurm_adastra/averaging/caption/ofa_wacaption_vqasnliground_caption_stage_1_lr1e5.sh deleted file mode 100644 index 7984c21acdc00ced3105977708ec739683fcd4ae..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/slurm_adastra/averaging/caption/ofa_wacaption_vqasnliground_caption_stage_1_lr1e5.sh +++ /dev/null @@ -1,30 +0,0 @@ -#!/bin/bash - -#SBATCH --job-name=ofa_wacaption_vqasnliground_caption_stage_1_lr1e5 -#SBATCH --nodes=1 -#SBATCH --ntasks=1 -#SBATCH --gpus=8 -#SBATCH --threads-per-core=2 -#SBATCH --gpu-bind=closest -####SBATCH --nodelist=x1004c4s2b0n0 -#SBATCH --time=24:00:00 -#SBATCH -C MI250 -#SBATCH -A gda2204 -#SBATCH --mail-type=END,FAIL -#SBATCH --output=/lus/home/NAT/gda2204/mshukor/logs/slurm/ofa_wacaption_vqasnliground_caption_stage_1_lr1e5.out -#SBATCH --exclusive -#SBATCH --mail-user=mustafa.shukor@isir.upmc.fr - - -cd /lus/home/NAT/gda2204/mshukor/code/ofa_ours/run_scripts -source /lus/home/NAT/gda2204/mshukor/.bashrc - -conda activate main - - -rm core-python3* - - -srun -l -N 1 -n 1 -c 128 --gpus=8 bash averaging/caption/ofa_wacaption_vqasnliground_caption_stage_1_lr1e5.sh - - diff --git a/spaces/mshukor/UnIVAL/slurm_adastra/averaging/caption/ofa_wacaption_vqasnligroundofapt_caption_stage_1_lr1e5.sh b/spaces/mshukor/UnIVAL/slurm_adastra/averaging/caption/ofa_wacaption_vqasnligroundofapt_caption_stage_1_lr1e5.sh deleted file mode 100644 index a1e53acea2dff99bcb53a3e41be72bc52cc78880..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/slurm_adastra/averaging/caption/ofa_wacaption_vqasnligroundofapt_caption_stage_1_lr1e5.sh +++ /dev/null @@ -1,30 +0,0 @@ -#!/bin/bash - -#SBATCH --job-name=ofa_wacaption_vqasnligroundofapt_caption_stage_1_lr1e5 -#SBATCH --nodes=1 -#SBATCH --ntasks=1 -#SBATCH --gpus=8 -#SBATCH --threads-per-core=2 -#SBATCH --gpu-bind=closest -####SBATCH --nodelist=x1004c4s2b0n0 -#SBATCH --time=24:00:00 -#SBATCH -C 
MI250 -#SBATCH -A gda2204 -#SBATCH --mail-type=END,FAIL -#SBATCH --output=/lus/home/NAT/gda2204/mshukor/logs/slurm/ofa_wacaption_vqasnligroundofapt_caption_stage_1_lr1e5.out -#SBATCH --exclusive -#SBATCH --mail-user=mustafa.shukor@isir.upmc.fr - - -cd /lus/home/NAT/gda2204/mshukor/code/ofa_ours/run_scripts -source /lus/home/NAT/gda2204/mshukor/.bashrc - -conda activate main - - -rm core-python3* - - -srun -l -N 1 -n 1 -c 128 --gpus=8 bash averaging/caption/ofa_wacaption_vqasnligroundofapt_caption_stage_1_lr1e5.sh - - diff --git a/spaces/multimodalart/latentdiffusion/latent-diffusion/ldm/modules/image_degradation/__init__.py b/spaces/multimodalart/latentdiffusion/latent-diffusion/ldm/modules/image_degradation/__init__.py deleted file mode 100644 index 7836cada81f90ded99c58d5942eea4c3477f58fc..0000000000000000000000000000000000000000 --- a/spaces/multimodalart/latentdiffusion/latent-diffusion/ldm/modules/image_degradation/__init__.py +++ /dev/null @@ -1,2 +0,0 @@ -from ldm.modules.image_degradation.bsrgan import degradation_bsrgan_variant as degradation_fn_bsr -from ldm.modules.image_degradation.bsrgan_light import degradation_bsrgan_variant as degradation_fn_bsr_light diff --git a/spaces/myrad01/Inpaint-Anything/third_party/lama/README.md b/spaces/myrad01/Inpaint-Anything/third_party/lama/README.md deleted file mode 100644 index 390e111ca1de77832210aa2c7ffe5ccd890973b3..0000000000000000000000000000000000000000 --- a/spaces/myrad01/Inpaint-Anything/third_party/lama/README.md +++ /dev/null @@ -1,459 +0,0 @@ -# 🦙 LaMa: Resolution-robust Large Mask Inpainting with Fourier Convolutions - -by Roman Suvorov, Elizaveta Logacheva, Anton Mashikhin, -Anastasia Remizova, Arsenii Ashukha, Aleksei Silvestrov, Naejin Kong, Harshith Goka, Kiwoong Park, Victor Lempitsky. - -

        - 🔥🔥🔥 -
        - -LaMa generalizes surprisingly well to much higher resolutions (~2k❗️) than it saw during training (256x256), and achieves the excellent performance even in challenging scenarios, e.g. completion of periodic structures. -

        - -[[Project page](https://advimman.github.io/lama-project/)] [[arXiv](https://arxiv.org/abs/2109.07161)] [[Supplementary](https://ashukha.com/projects/lama_21/lama_supmat_2021.pdf)] [[BibTeX](https://senya-ashukha.github.io/projects/lama_21/paper.txt)] [[Casual GAN Papers Summary](https://www.casualganpapers.com/large-masks-fourier-convolutions-inpainting/LaMa-explained.html)] - -

        - - - -
        - Try out in Google Colab -

        - -

        - -

        - - -

        - -

        - -# LaMa development -(Feel free to share your paper by creating an issue) -- Amazing results [paper](https://arxiv.org/abs/2206.13644) / [video](https://www.youtube.com/watch?v=gEukhOheWgE) / code https://github.com/advimman/lama/pull/112 / by Geomagical Labs ([geomagical.com](geomagical.com)) -

        - -

        - -# Non-official 3rd party apps: -(Feel free to share your app/implementation/demo by creating an issue) -- [https://cleanup.pictures](https://cleanup.pictures/) - a simple interactive object removal tool by [@cyrildiagne](https://twitter.com/cyrildiagne) - - [lama-cleaner](https://github.com/Sanster/lama-cleaner) by [@Sanster](https://github.com/Sanster/lama-cleaner) is a self-host version of [https://cleanup.pictures](https://cleanup.pictures/) -- Integrated to [Huggingface Spaces](https://huggingface.co/spaces) with [Gradio](https://github.com/gradio-app/gradio). See demo: [![Hugging Face Spaces](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/akhaliq/lama) by [@AK391](https://github.com/AK391) -- Telegram bot [@MagicEraserBot](https://t.me/MagicEraserBot) by [@Moldoteck](https://github.com/Moldoteck), [code](https://github.com/Moldoteck/MagicEraser) -- [Auto-LaMa](https://github.com/andy971022/auto-lama) = DE:TR object detection + LaMa inpainting by [@andy971022](https://github.com/andy971022) -- [LAMA-Magic-Eraser-Local](https://github.com/zhaoyun0071/LAMA-Magic-Eraser-Local) = a standalone inpainting application built with PyQt5 by [@zhaoyun0071](https://github.com/zhaoyun0071) -- [Hama](https://www.hama.app/) - object removal with a smart brush which simplifies mask drawing. -- [ModelScope](https://www.modelscope.cn/models/damo/cv_fft_inpainting_lama/summary) = the largest Model Community in Chinese by [@chenbinghui1](https://github.com/chenbinghui1). -- [LaMa with MaskDINO](https://github.com/qwopqwop200/lama-with-maskdino) = MaskDINO object detection + LaMa inpainting with refinement by [@qwopqwop200](https://github.com/qwopqwop200). - -# Environment setup - -Clone the repo: -`git clone https://github.com/advimman/lama.git` - -There are three options of an environment: - -1. Python virtualenv: - - ``` - virtualenv inpenv --python=/usr/bin/python3 - source inpenv/bin/activate - pip install torch==1.8.0 torchvision==0.9.0 - - cd lama - pip install -r requirements.txt - ``` - -2. Conda - - ``` - % Install conda for Linux, for other OS download miniconda at https://docs.conda.io/en/latest/miniconda.html - wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh - bash Miniconda3-latest-Linux-x86_64.sh -b -p $HOME/miniconda - $HOME/miniconda/bin/conda init bash - - cd lama - conda env create -f conda_env.yml - conda activate lama - conda install pytorch torchvision torchaudio cudatoolkit=10.2 -c pytorch -y - pip install pytorch-lightning==1.2.9 - ``` - -3. Docker: No actions are needed 🎉. - -# Inference - -Run -``` -cd lama -export TORCH_HOME=$(pwd) && export PYTHONPATH=$(pwd) -``` - -**1. Download pre-trained models** - -Install tool for yandex disk link extraction: - -``` -pip3 install wldhx.yadisk-direct -``` - -The best model (Places2, Places Challenge): - -``` -curl -L $(yadisk-direct https://disk.yandex.ru/d/ouP6l8VJ0HpMZg) -o big-lama.zip -unzip big-lama.zip -``` - -All models (Places & CelebA-HQ): - -``` -curl -L $(yadisk-direct https://disk.yandex.ru/d/EgqaSnLohjuzAg) -o lama-models.zip -unzip lama-models.zip -``` - -**2. Prepare images and masks** - -Download test images: - -``` -curl -L $(yadisk-direct https://disk.yandex.ru/d/xKQJZeVRk5vLlQ) -o LaMa_test_images.zip -unzip LaMa_test_images.zip -``` -
        - OR prepare your data: -1) Create masks named as `[images_name]_maskXXX[image_suffix]`, put images and masks in the same folder. - -- You can use the [script](https://github.com/advimman/lama/blob/main/bin/gen_mask_dataset.py) for random masks generation. -- Check the format of the files: - ``` - image1_mask001.png - image1.png - image2_mask001.png - image2.png - ``` - -2) Specify `image_suffix`, e.g. `.png` or `.jpg` or `_input.jpg` in `configs/prediction/default.yaml`. - -
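    As a rough illustration of the `[images_name]_maskXXX[image_suffix]` naming convention just described (this helper is not part of the LaMa repo; the folder name and suffix are placeholders), a minimal Python sketch that pairs each input image with its mask files:

    ```
    # Minimal sketch (not part of the original repo): group images with their masks
    # following the `[images_name]_maskXXX[image_suffix]` convention described above.
    from pathlib import Path

    def pair_images_with_masks(folder: str, suffix: str = ".png"):
        folder = Path(folder)
        pairs = {}
        for image in sorted(folder.glob(f"*{suffix}")):
            if "_mask" in image.stem:
                continue  # this file is a mask, not an input image
            masks = sorted(folder.glob(f"{image.stem}_mask*{suffix}"))
            pairs[image.name] = [m.name for m in masks]
        return pairs

    if __name__ == "__main__":
        # "LaMa_test_images" is just an example path
        print(pair_images_with_masks("LaMa_test_images", suffix=".png"))
    ```

    For `image1.png` and `image1_mask001.png` in the same folder, this would report `{'image1.png': ['image1_mask001.png']}`, matching the file layout shown above.
    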
        - - -**3. Predict** - -On the host machine: - - python3 bin/predict.py model.path=$(pwd)/big-lama indir=$(pwd)/LaMa_test_images outdir=$(pwd)/output - -**OR** in the docker - -The following command will pull the docker image from Docker Hub and execute the prediction script -``` -bash docker/2_predict.sh $(pwd)/big-lama $(pwd)/LaMa_test_images $(pwd)/output device=cpu -``` -Docker cuda: TODO - -**4. Predict with Refinement** - -On the host machine: - - python3 bin/predict.py refine=True model.path=$(pwd)/big-lama indir=$(pwd)/LaMa_test_images outdir=$(pwd)/output - -# Train and Eval - -Make sure you run: - -``` -cd lama -export TORCH_HOME=$(pwd) && export PYTHONPATH=$(pwd) -``` - -Then download models for _perceptual loss_: - - mkdir -p ade20k/ade20k-resnet50dilated-ppm_deepsup/ - wget -P ade20k/ade20k-resnet50dilated-ppm_deepsup/ http://sceneparsing.csail.mit.edu/model/pytorch/ade20k-resnet50dilated-ppm_deepsup/encoder_epoch_20.pth - - -## Places - -⚠️ NB: FID/SSIM/LPIPS metric values for Places that we see in LaMa paper are computed on 30000 images that we produce in evaluation section below. -For more details on evaluation data check [[Section 3. Dataset splits in Supplementary](https://ashukha.com/projects/lama_21/lama_supmat_2021.pdf#subsection.3.1)] ⚠️ - -On the host machine: - - # Download data from http://places2.csail.mit.edu/download.html - # Places365-Standard: Train(105GB)/Test(19GB)/Val(2.1GB) from High-resolution images section - wget http://data.csail.mit.edu/places/places365/train_large_places365standard.tar - wget http://data.csail.mit.edu/places/places365/val_large.tar - wget http://data.csail.mit.edu/places/places365/test_large.tar - - # Unpack train/test/val data and create .yaml config for it - bash fetch_data/places_standard_train_prepare.sh - bash fetch_data/places_standard_test_val_prepare.sh - - # Sample images for test and viz at the end of epoch - bash fetch_data/places_standard_test_val_sample.sh - bash fetch_data/places_standard_test_val_gen_masks.sh - - # Run training - python3 bin/train.py -cn lama-fourier location=places_standard - - # To evaluate trained model and report metrics as in our paper - # we need to sample previously unseen 30k images and generate masks for them - bash fetch_data/places_standard_evaluation_prepare_data.sh - - # Infer model on thick/thin/medium masks in 256 and 512 and run evaluation - # like this: - python3 bin/predict.py \ - model.path=$(pwd)/experiments/__lama-fourier_/ \ - indir=$(pwd)/places_standard_dataset/evaluation/random_thick_512/ \ - outdir=$(pwd)/inference/random_thick_512 model.checkpoint=last.ckpt - - python3 bin/evaluate_predicts.py \ - $(pwd)/configs/eval2_gpu.yaml \ - $(pwd)/places_standard_dataset/evaluation/random_thick_512/ \ - $(pwd)/inference/random_thick_512 \ - $(pwd)/inference/random_thick_512_metrics.csv - - - -Docker: TODO - -## CelebA -On the host machine: - - # Make shure you are in lama folder - cd lama - export TORCH_HOME=$(pwd) && export PYTHONPATH=$(pwd) - - # Download CelebA-HQ dataset - # Download data256x256.zip from https://drive.google.com/drive/folders/11Vz0fqHS2rXDb5pprgTjpD7S2BAJhi1P - - # unzip & split into train/test/visualization & create config for it - bash fetch_data/celebahq_dataset_prepare.sh - - # generate masks for test and visual_test at the end of epoch - bash fetch_data/celebahq_gen_masks.sh - - # Run training - python3 bin/train.py -cn lama-fourier-celeba data.batch_size=10 - - # Infer model on thick/thin/medium masks in 256 and run evaluation - # like this: - python3 
bin/predict.py \ - model.path=$(pwd)/experiments/__lama-fourier-celeba_/ \ - indir=$(pwd)/celeba-hq-dataset/visual_test_256/random_thick_256/ \ - outdir=$(pwd)/inference/celeba_random_thick_256 model.checkpoint=last.ckpt - - -Docker: TODO - -## Places Challenge - -On the host machine: - - # This script downloads multiple .tar files in parallel and unpacks them - # Places365-Challenge: Train(476GB) from High-resolution images (to train Big-Lama) - bash places_challenge_train_download.sh - - TODO: prepare - TODO: train - TODO: eval - -Docker: TODO - -## Create your data - -Please check bash scripts for data preparation and mask generation from CelebaHQ section, -if you stuck at one of the following steps. - - -On the host machine: - - # Make shure you are in lama folder - cd lama - export TORCH_HOME=$(pwd) && export PYTHONPATH=$(pwd) - - # You need to prepare following image folders: - $ ls my_dataset - train - val_source # 2000 or more images - visual_test_source # 100 or more images - eval_source # 2000 or more images - - # LaMa generates random masks for the train data on the flight, - # but needs fixed masks for test and visual_test for consistency of evaluation. - - # Suppose, we want to evaluate and pick best models - # on 512x512 val dataset with thick/thin/medium masks - # And your images have .jpg extention: - - python3 bin/gen_mask_dataset.py \ - $(pwd)/configs/data_gen/random__512.yaml \ # thick, thin, medium - my_dataset/val_source/ \ - my_dataset/val/random__512.yaml \# thick, thin, medium - --ext jpg - - # So the mask generator will: - # 1. resize and crop val images and save them as .png - # 2. generate masks - - ls my_dataset/val/random_medium_512/ - image1_crop000_mask000.png - image1_crop000.png - image2_crop000_mask000.png - image2_crop000.png - ... - - # Generate thick, thin, medium masks for visual_test folder: - - python3 bin/gen_mask_dataset.py \ - $(pwd)/configs/data_gen/random__512.yaml \ #thick, thin, medium - my_dataset/visual_test_source/ \ - my_dataset/visual_test/random__512/ \ #thick, thin, medium - --ext jpg - - - ls my_dataset/visual_test/random_thick_512/ - image1_crop000_mask000.png - image1_crop000.png - image2_crop000_mask000.png - image2_crop000.png - ... - - # Same process for eval_source image folder: - - python3 bin/gen_mask_dataset.py \ - $(pwd)/configs/data_gen/random__512.yaml \ #thick, thin, medium - my_dataset/eval_source/ \ - my_dataset/eval/random__512/ \ #thick, thin, medium - --ext jpg - - - - # Generate location config file which locate these folders: - - touch my_dataset.yaml - echo "data_root_dir: $(pwd)/my_dataset/" >> my_dataset.yaml - echo "out_root_dir: $(pwd)/experiments/" >> my_dataset.yaml - echo "tb_dir: $(pwd)/tb_logs/" >> my_dataset.yaml - mv my_dataset.yaml ${PWD}/configs/training/location/ - - - # Check data config for consistency with my_dataset folder structure: - $ cat ${PWD}/configs/training/data/abl-04-256-mh-dist - ... - train: - indir: ${location.data_root_dir}/train - ... - val: - indir: ${location.data_root_dir}/val - img_suffix: .png - visual_test: - indir: ${location.data_root_dir}/visual_test - img_suffix: .png - - - # Run training - python3 bin/train.py -cn lama-fourier location=my_dataset data.batch_size=10 - - # Evaluation: LaMa training procedure picks best few models according to - # scores on my_dataset/val/ - - # To evaluate one of your best models (i.e. 
at epoch=32) - # on previously unseen my_dataset/eval do the following - # for thin, thick and medium: - - # infer: - python3 bin/predict.py \ - model.path=$(pwd)/experiments/__lama-fourier_/ \ - indir=$(pwd)/my_dataset/eval/random__512/ \ - outdir=$(pwd)/inference/my_dataset/random__512 \ - model.checkpoint=epoch32.ckpt - - # metrics calculation: - python3 bin/evaluate_predicts.py \ - $(pwd)/configs/eval2_gpu.yaml \ - $(pwd)/my_dataset/eval/random__512/ \ - $(pwd)/inference/my_dataset/random__512 \ - $(pwd)/inference/my_dataset/random__512_metrics.csv - - -**OR** in the docker: - - TODO: train - TODO: eval - -# Hints - -### Generate different kinds of masks -The following command will execute a script that generates random masks. - - bash docker/1_generate_masks_from_raw_images.sh \ - configs/data_gen/random_medium_512.yaml \ - /directory_with_input_images \ - /directory_where_to_store_images_and_masks \ - --ext png - -The test data generation command stores images in the format, -which is suitable for [prediction](#prediction). - -The table below describes which configs we used to generate different test sets from the paper. -Note that we *do not fix a random seed*, so the results will be slightly different each time. - -| | Places 512x512 | CelebA 256x256 | -|--------|------------------------|------------------------| -| Narrow | random_thin_512.yaml | random_thin_256.yaml | -| Medium | random_medium_512.yaml | random_medium_256.yaml | -| Wide | random_thick_512.yaml | random_thick_256.yaml | - -Feel free to change the config path (argument #1) to any other config in `configs/data_gen` -or adjust config files themselves. - -### Override parameters in configs -Also you can override parameters in config like this: - - python3 bin/train.py -cn data.batch_size=10 run_title=my-title - -Where .yaml file extension is omitted - -### Models options -Config names for models from paper (substitude into the training command): - - * big-lama - * big-lama-regular - * lama-fourier - * lama-regular - * lama_small_train_masks - -Which are seated in configs/training/folder - -### Links -- All the data (models, test images, etc.) https://disk.yandex.ru/d/AmdeG-bIjmvSug -- Test images from the paper https://disk.yandex.ru/d/xKQJZeVRk5vLlQ -- The pre-trained models https://disk.yandex.ru/d/EgqaSnLohjuzAg -- The models for perceptual loss https://disk.yandex.ru/d/ncVmQlmT_kTemQ -- Our training logs are available at https://disk.yandex.ru/d/9Bt1wNSDS4jDkQ - - -### Training time & resources - -TODO - -## Acknowledgments - -* Segmentation code and models if form [CSAILVision](https://github.com/CSAILVision/semantic-segmentation-pytorch). 
-* LPIPS metric is from [richzhang](https://github.com/richzhang/PerceptualSimilarity) -* SSIM is from [Po-Hsun-Su](https://github.com/Po-Hsun-Su/pytorch-ssim) -* FID is from [mseitzer](https://github.com/mseitzer/pytorch-fid) - -## Citation -If you found this code helpful, please consider citing: -``` -@article{suvorov2021resolution, - title={Resolution-robust Large Mask Inpainting with Fourier Convolutions}, - author={Suvorov, Roman and Logacheva, Elizaveta and Mashikhin, Anton and Remizova, Anastasia and Ashukha, Arsenii and Silvestrov, Aleksei and Kong, Naejin and Goka, Harshith and Park, Kiwoong and Lempitsky, Victor}, - journal={arXiv preprint arXiv:2109.07161}, - year={2021} -} -``` diff --git a/spaces/nakamura196/yolov5-kunshujo/ultralytics/yolov5/utils/downloads.py b/spaces/nakamura196/yolov5-kunshujo/ultralytics/yolov5/utils/downloads.py deleted file mode 100644 index d7b87cb2cadd22fcdfaafc7fd56fc29e14d9a538..0000000000000000000000000000000000000000 --- a/spaces/nakamura196/yolov5-kunshujo/ultralytics/yolov5/utils/downloads.py +++ /dev/null @@ -1,153 +0,0 @@ -# YOLOv5 🚀 by Ultralytics, GPL-3.0 license -""" -Download utils -""" - -import os -import platform -import subprocess -import time -import urllib -from pathlib import Path -from zipfile import ZipFile - -import requests -import torch - - -def gsutil_getsize(url=''): - # gs://bucket/file size https://cloud.google.com/storage/docs/gsutil/commands/du - s = subprocess.check_output(f'gsutil du {url}', shell=True).decode('utf-8') - return eval(s.split(' ')[0]) if len(s) else 0 # bytes - - -def safe_download(file, url, url2=None, min_bytes=1E0, error_msg=''): - # Attempts to download file from url or url2, checks and removes incomplete downloads < min_bytes - file = Path(file) - assert_msg = f"Downloaded file '{file}' does not exist or size is < min_bytes={min_bytes}" - try: # url1 - print(f'Downloading {url} to {file}...') - torch.hub.download_url_to_file(url, str(file)) - assert file.exists() and file.stat().st_size > min_bytes, assert_msg # check - except Exception as e: # url2 - file.unlink(missing_ok=True) # remove partial downloads - print(f'ERROR: {e}\nRe-attempting {url2 or url} to {file}...') - os.system(f"curl -L '{url2 or url}' -o '{file}' --retry 3 -C -") # curl download, retry and resume on fail - finally: - if not file.exists() or file.stat().st_size < min_bytes: # check - file.unlink(missing_ok=True) # remove partial downloads - print(f"ERROR: {assert_msg}\n{error_msg}") - print('') - - -def attempt_download(file, repo='ultralytics/yolov5'): # from utils.downloads import *; attempt_download() - # Attempt file download if does not exist - file = Path(str(file).strip().replace("'", '')) - - if not file.exists(): - # URL specified - name = Path(urllib.parse.unquote(str(file))).name # decode '%2F' to '/' etc. - if str(file).startswith(('http:/', 'https:/')): # download - url = str(file).replace(':/', '://') # Pathlib turns :// -> :/ - file = name.split('?')[0] # parse authentication https://url.com/file.txt?auth... - if Path(file).is_file(): - print(f'Found {url} locally at {file}') # file already exists - else: - safe_download(file=file, url=url, min_bytes=1E5) - return file - - # GitHub assets - file.parent.mkdir(parents=True, exist_ok=True) # make parent dir (if required) - try: - response = requests.get(f'https://api.github.com/repos/{repo}/releases/latest').json() # github api - assets = [x['name'] for x in response['assets']] # release assets, i.e. ['yolov5s.pt', 'yolov5m.pt', ...] 
- tag = response['tag_name'] # i.e. 'v1.0' - except Exception: # fallback plan - assets = ['yolov5n.pt', 'yolov5s.pt', 'yolov5m.pt', 'yolov5l.pt', 'yolov5x.pt', - 'yolov5n6.pt', 'yolov5s6.pt', 'yolov5m6.pt', 'yolov5l6.pt', 'yolov5x6.pt'] - try: - tag = subprocess.check_output('git tag', shell=True, stderr=subprocess.STDOUT).decode().split()[-1] - except Exception: - tag = 'v6.0' # current release - - if name in assets: - safe_download(file, - url=f'https://github.com/{repo}/releases/download/{tag}/{name}', - # url2=f'https://storage.googleapis.com/{repo}/ckpt/{name}', # backup url (optional) - min_bytes=1E5, - error_msg=f'{file} missing, try downloading from https://github.com/{repo}/releases/') - - return str(file) - - -def gdrive_download(id='16TiPfZj7htmTyhntwcZyEEAejOUxuT6m', file='tmp.zip'): - # Downloads a file from Google Drive. from yolov5.utils.downloads import *; gdrive_download() - t = time.time() - file = Path(file) - cookie = Path('cookie') # gdrive cookie - print(f'Downloading https://drive.google.com/uc?export=download&id={id} as {file}... ', end='') - file.unlink(missing_ok=True) # remove existing file - cookie.unlink(missing_ok=True) # remove existing cookie - - # Attempt file download - out = "NUL" if platform.system() == "Windows" else "/dev/null" - os.system(f'curl -c ./cookie -s -L "drive.google.com/uc?export=download&id={id}" > {out}') - if os.path.exists('cookie'): # large file - s = f'curl -Lb ./cookie "drive.google.com/uc?export=download&confirm={get_token()}&id={id}" -o {file}' - else: # small file - s = f'curl -s -L -o {file} "drive.google.com/uc?export=download&id={id}"' - r = os.system(s) # execute, capture return - cookie.unlink(missing_ok=True) # remove existing cookie - - # Error check - if r != 0: - file.unlink(missing_ok=True) # remove partial - print('Download error ') # raise Exception('Download error') - return r - - # Unzip if archive - if file.suffix == '.zip': - print('unzipping... 
', end='') - ZipFile(file).extractall(path=file.parent) # unzip - file.unlink() # remove zip - - print(f'Done ({time.time() - t:.1f}s)') - return r - - -def get_token(cookie="./cookie"): - with open(cookie) as f: - for line in f: - if "download" in line: - return line.split()[-1] - return "" - -# Google utils: https://cloud.google.com/storage/docs/reference/libraries ---------------------------------------------- -# -# -# def upload_blob(bucket_name, source_file_name, destination_blob_name): -# # Uploads a file to a bucket -# # https://cloud.google.com/storage/docs/uploading-objects#storage-upload-object-python -# -# storage_client = storage.Client() -# bucket = storage_client.get_bucket(bucket_name) -# blob = bucket.blob(destination_blob_name) -# -# blob.upload_from_filename(source_file_name) -# -# print('File {} uploaded to {}.'.format( -# source_file_name, -# destination_blob_name)) -# -# -# def download_blob(bucket_name, source_blob_name, destination_file_name): -# # Uploads a blob from a bucket -# storage_client = storage.Client() -# bucket = storage_client.get_bucket(bucket_name) -# blob = bucket.blob(source_blob_name) -# -# blob.download_to_filename(destination_file_name) -# -# print('Blob {} downloaded to {}.'.format( -# source_blob_name, -# destination_file_name)) diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Adobe Flash Player Mac Chrome Download FREE.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Adobe Flash Player Mac Chrome Download FREE.md deleted file mode 100644 index 718ce3c77a594a381c8613084757c075e9a37380..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Adobe Flash Player Mac Chrome Download FREE.md +++ /dev/null @@ -1,46 +0,0 @@ - -```html -

        How to Download and Install Adobe Flash Player on Mac Chrome

        -

    Adobe Flash Player is software that enables you to view and interact with multimedia content on the web, such as animations, games, and videos. However, Adobe Flash Player is no longer supported by most browsers, including Google Chrome, due to security and performance issues. If you still need to use Adobe Flash Player in Chrome on your Mac, you will have to download and install it manually. In this article, we will show you how to do that in a few simple steps.
    

        -

        Adobe Flash Player Mac Chrome Download


        Download Ziphttps://urlcod.com/2uIbg7



        -Adobe Flash Player logo -

        Step 1: Check if you have Adobe Flash Player installed on your Mac

        -

    Before you download and install Adobe Flash Player for Chrome on your Mac, check whether you already have it installed on your system. To do that, follow these steps:
    

        -
          -
        • Open the Finder app on your Mac.
        • -
        • Go to Applications > Utilities and double-click on Adobe Flash Player Install Manager.
        • -
    • If you see a message that says "Adobe Flash Player is not installed", then you need to download and install it. If you see a message that says "Adobe Flash Player is installed", then you can skip ahead to step 4.
    
        • -
        -

        Step 2: Download Adobe Flash Player from the official website

        -

        If you don't have Adobe Flash Player installed on your Mac, you will need to download it from the official website. To do that, follow these steps:

        -
          -
        • Open your Chrome browser and go to https://get.adobe.com/flashplayer/.
        • -
        • Uncheck any optional offers that you don't want to install along with Adobe Flash Player.
        • -
        • Click on the Download now button and save the file to your Downloads folder.
        • -
        -

        Step 3: Install Adobe Flash Player on your Mac

        -

        Once you have downloaded Adobe Flash Player from the official website, you will need to install it on your Mac. To do that, follow these steps:

        -
          -
        • Open your Downloads folder and double-click on the file named "AdobeFlashPlayerInstaller.dmg".
        • -
        • A window will pop up with an icon of Adobe Flash Player. Drag and drop the icon to the Applications folder.
        • -
        • Open the Applications folder and double-click on the icon of Adobe Flash Player.
        • -
        • A window will pop up asking for your permission to run the installer. Click on Open.
        • -
        • Follow the instructions on the screen to complete the installation process.
        • -
        • When the installation is finished, click on Done.
        • -
        -

        Step 4: Enable Adobe Flash Player on your Chrome browser

        -

    After installing Adobe Flash Player on your Mac, you will need to enable it in your Chrome browser. To do that, follow these steps:
    

        -
          -
        • Open your Chrome browser and click on the three-dot menu icon at the top right corner.
        • -
        • Select Settings from the drop-down menu.
        • -
        • Click on Privacy and security from the left sidebar.
        • -
        • Click on Site settings under Privacy and security.
        • -
        • Scroll down and click on Flash under Content.
        • -
        • Toggle on the switch next to Ask first (recommended).
        • -
        -

    Congratulations! You have successfully downloaded and installed Adobe Flash Player for Chrome on your Mac. You can now view and interact with Flash content on the web. Keep in mind, however, that Adobe officially discontinued Flash Player at the end of 2020, so you should look for alternatives or migrate your content to HTML5 or other formats.
    

        -

        - -```

        7b8c122e87
        -
        -
        \ No newline at end of file diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/QuickTime Pro V7.0.3.50 Serial Serial Key.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/QuickTime Pro V7.0.3.50 Serial Serial Key.md deleted file mode 100644 index c99c3345297c219fc21bd48f296b239498284646..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/QuickTime Pro V7.0.3.50 Serial Serial Key.md +++ /dev/null @@ -1,29 +0,0 @@ -
        -

        How to Download and Install QuickTime Pro V7.0.3.50 Serial Key for Free

        -

        QuickTime Pro is a powerful and versatile media player that can handle various formats of video, audio, and image files. It also allows you to convert, record, edit, and share your media files with ease. However, QuickTime Pro is not a free software and you need to purchase a registration code to unlock its full features. In this article, we will show you how to download and install QuickTime Pro V7.0.3.50 serial key for free without paying anything.

        -

        QuickTime Pro V7.0.3.50 Serial Serial Key


        Download Zip 🗹 https://urlcod.com/2uI9LA



        -

        What is QuickTime Pro V7.0.3.50 Serial Key?

        -

        QuickTime Pro V7.0.3.50 serial key is a unique code that can activate the QuickTime Pro software on your computer. It is also known as a license key or an unlock key. You can get the serial key from the official website of Apple or from other online sources that offer it for free or at a discounted price.

        -

        Why Do You Need QuickTime Pro V7.0.3.50 Serial Key?

        -

        QuickTime Pro V7.0.3.50 serial key is necessary if you want to enjoy the full benefits of QuickTime Pro software. Some of the advantages of QuickTime Pro are:

        -
          -
        • It supports major video formats including H.264, MPEG-4 and Motion JPEG[^6^].
        • -
        • It can convert your media files to different formats that are optimized for iPhone, iPod, Apple TV, or other devices[^4^].
        • -
        • It can record audio and video directly from your built-in iSight camera, FireWire camcorder, or microphone[^4^]. You can also trim what you have recorded to the ideal length.
        • -
        • It can edit your media files by adding effects, transitions, annotations, subtitles, and more.
        • -
        • It can share your media files with your friends and family via email, web, or social media platforms.
        • -
        -

        How to Download and Install QuickTime Pro V7.0.3.50 Serial Key for Free?

        -

        If you want to download and install QuickTime Pro V7.0.3.50 serial key for free, you need to follow these steps:

        -
          -
        1. Download the QuickTime 7 installer for Windows from Apple's website. Make sure you choose the correct version for your operating system.
        2. -
        3. Run the installer and follow the instructions on the screen to install QuickTime 7 on your computer.
        4. -
        5. Open QuickTime 7 and go to Edit > Preferences > Register.
        6. -
        7. Enter the following serial key: G4HQ-5QEA-KZ9T-EA5S-Q6TL. This is one of the free serial keys that are available online[^1^] [^2^]. You can also search for other serial keys on the internet if this one does not work.
        8. -
        9. Click Register and enjoy using QuickTime Pro V7.0.3.50 for free.
        10. -
        -

        Conclusion

        -

        QuickTime Pro V7.0.3.50 is a great software that can enhance your multimedia experience on your computer. However, it is not a free software and you need to buy a registration code to use it fully. Fortunately, there are ways to download and install QuickTime Pro V7.0.3.50 serial key for free without spending any money. We hope this article has helped you in doing so.

        -

        e93f5a0c3f
        -
        -
        \ No newline at end of file diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Re Kabira Maan Ja 1080p Hdtv UPD.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Re Kabira Maan Ja 1080p Hdtv UPD.md deleted file mode 100644 index 8274e1c96ff54c081fcd84d71581f9859bf0f689..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Re Kabira Maan Ja 1080p Hdtv UPD.md +++ /dev/null @@ -1,12 +0,0 @@ -
        -

        Re Kabira Maan Ja: A Soulful Song of Love and Longing

        -

        Re Kabira Maan Ja is a song from the 2013 Bollywood movie Yeh Jawaani Hai Deewani, starring Ranbir Kapoor and Deepika Padukone. The song has two versions, one sung by Rekha Bhardwaj and Tochi Raina, and the other by Arijit Singh and Harshdeep Kaur. The music is composed by Pritam and the lyrics are written by Amitabh Bhattacharya.

        -

        Re Kabira Maan Ja 1080p Hdtv


        Download · https://urlcod.com/2uIbMY



        -

        The song is a plea from a lover to his beloved, who has left him for a different life. The lover asks her to listen to her heart and return to him, as he still loves her and misses her. He reminds her of their past memories and their bond, and hopes that she will come back someday. The song is full of emotions and expressions of love, regret, nostalgia, and hope.

        -

        The song uses the imagery of Kabir, a 15th-century mystic poet and saint, who is known for his simple and profound verses on spirituality and human nature. The song also refers to some of Kabir's teachings, such as the importance of inner peace, detachment, and harmony. The song is a beautiful blend of folk, classical, and contemporary music styles, and has a soothing and melodious feel to it.

        -

        Re Kabira Maan Ja is one of the most popular songs of Yeh Jawaani Hai Deewani, and has received critical acclaim and awards for its lyrics, music, and vocals. The song has also touched the hearts of many listeners, who can relate to its message of love and longing.

        Yeh Jawaani Hai Deewani is a 2013 Bollywood romantic comedy-drama film, directed by Ayan Mukerji and produced by Karan Johar. The film follows the lives of four friends, who meet after a long time and rediscover themselves. The film explores the themes of friendship, love, dreams, and choices.

        -

        The film stars Ranbir Kapoor as Kabir Thapa aka Bunny, a charming and adventurous travel journalist, who wants to explore the world and live life to the fullest. Deepika Padukone plays Naina Talwar, a studious and shy medical student, who falls in love with Bunny during a trekking trip. Kalki Koechlin plays Aditi Mehra, a tomboyish and fun-loving girl, who is secretly in love with Avi. Aditya Roy Kapur plays Avinash Arora aka Avi, a carefree and loyal friend, who struggles with his gambling addiction and unrequited love for Aditi.

        -

        The film is divided into two parts: the first part shows the friends' trekking trip in Manali, where they bond and have fun. The second part shows their reunion eight years later in Udaipur, where they attend Aditi's wedding with Taran (Kunaal Roy Kapur), a wealthy and mature businessman. The film depicts how the friends have changed over the years, and how they deal with their unresolved feelings and conflicts.

        -

        Yeh Jawaani Hai Deewani is a commercial and critical success, becoming one of the highest-grossing Bollywood films of all time. The film is praised for its performances, music, cinematography, and direction. The film also received several awards and nominations, including nine Filmfare Awards.

        cec2833e83
        -
        -
        \ No newline at end of file diff --git a/spaces/nightfury/SD-InPainting/clipseg/score.py b/spaces/nightfury/SD-InPainting/clipseg/score.py deleted file mode 100644 index 8db8915b109953931fa2a330a7731db4a51b44f8..0000000000000000000000000000000000000000 --- a/spaces/nightfury/SD-InPainting/clipseg/score.py +++ /dev/null @@ -1,453 +0,0 @@ -from torch.functional import Tensor - -import torch -import inspect -import json -import yaml -import time -import sys - -from general_utils import log - -import numpy as np -from os.path import expanduser, join, isfile, realpath - -from torch.utils.data import DataLoader - -from metrics import FixedIntervalMetrics - -from general_utils import load_model, log, score_config_from_cli_args, AttributeDict, get_attribute, filter_args - - -DATASET_CACHE = dict() - -def load_model(checkpoint_id, weights_file=None, strict=True, model_args='from_config', with_config=False, ignore_weights=False): - - config = json.load(open(join('logs', checkpoint_id, 'config.json'))) - - if model_args != 'from_config' and type(model_args) != dict: - raise ValueError('model_args must either be "from_config" or a dictionary of values') - - model_cls = get_attribute(config['model']) - - # load model - if model_args == 'from_config': - _, model_args, _ = filter_args(config, inspect.signature(model_cls).parameters) - - model = model_cls(**model_args) - - if weights_file is None: - weights_file = realpath(join('logs', checkpoint_id, 'weights.pth')) - else: - weights_file = realpath(join('logs', checkpoint_id, weights_file)) - - if isfile(weights_file) and not ignore_weights: - weights = torch.load(weights_file) - for _, w in weights.items(): - assert not torch.any(torch.isnan(w)), 'weights contain NaNs' - model.load_state_dict(weights, strict=strict) - else: - if not ignore_weights: - raise FileNotFoundError(f'model checkpoint {weights_file} was not found') - - if with_config: - return model, config - - return model - - -def compute_shift2(model, datasets, seed=123, repetitions=1): - """ computes shift """ - - model.eval() - model.cuda() - - import random - random.seed(seed) - - preds, gts = [], [] - for i_dataset, dataset in enumerate(datasets): - - loader = DataLoader(dataset, batch_size=1, num_workers=0, shuffle=False, drop_last=False) - - max_iterations = int(repetitions * len(dataset.dataset.data_list)) - - with torch.no_grad(): - - i, losses = 0, [] - for i_all, (data_x, data_y) in enumerate(loader): - - data_x = [v.cuda(non_blocking=True) if v is not None else v for v in data_x] - data_y = [v.cuda(non_blocking=True) if v is not None else v for v in data_y] - - pred, = model(data_x[0], data_x[1], data_x[2]) - preds += [pred.detach()] - gts += [data_y] - - i += 1 - if max_iterations and i >= max_iterations: - break - - from metrics import FixedIntervalMetrics - n_values = 51 - thresholds = np.linspace(0, 1, n_values)[1:-1] - metric = FixedIntervalMetrics(resize_pred=True, sigmoid=True, n_values=n_values) - - for p, y in zip(preds, gts): - metric.add(p.unsqueeze(1), y) - - best_idx = np.argmax(metric.value()['fgiou_scores']) - best_thresh = thresholds[best_idx] - - return best_thresh - - -def get_cached_pascal_pfe(split, config): - from datasets.pfe_dataset import PFEPascalWrapper - try: - dataset = DATASET_CACHE[(split, config.image_size, config.label_support, config.mask)] - except KeyError: - dataset = PFEPascalWrapper(mode='val', split=split, mask=config.mask, image_size=config.image_size, label_support=config.label_support) - DATASET_CACHE[(split, config.image_size, 
config.label_support, config.mask)] = dataset - return dataset - - - - -def main(): - config, train_checkpoint_id = score_config_from_cli_args() - - metrics = score(config, train_checkpoint_id, None) - - for dataset in metrics.keys(): - for k in metrics[dataset]: - if type(metrics[dataset][k]) in {float, int}: - print(dataset, f'{k:<16} {metrics[dataset][k]:.3f}') - - -def score(config, train_checkpoint_id, train_config): - - config = AttributeDict(config) - - print(config) - - # use training dataset and loss - train_config = AttributeDict(json.load(open(f'logs/{train_checkpoint_id}/config.json'))) - - cp_str = f'_{config.iteration_cp}' if config.iteration_cp is not None else '' - - - model_cls = get_attribute(train_config['model']) - - _, model_args, _ = filter_args(train_config, inspect.signature(model_cls).parameters) - - model_args = {**model_args, **{k: config[k] for k in ['process_cond', 'fix_shift'] if k in config}} - - strict_models = {'ConditionBase4', 'PFENetWrapper'} - model = load_model(train_checkpoint_id, strict=model_cls.__name__ in strict_models, model_args=model_args, - weights_file=f'weights{cp_str}.pth', ) - - - model.eval() - model.cuda() - - metric_args = dict() - - if 'threshold' in config: - if config.metric.split('.')[-1] == 'SkLearnMetrics': - metric_args['threshold'] = config.threshold - - if 'resize_to' in config: - metric_args['resize_to'] = config.resize_to - - if 'sigmoid' in config: - metric_args['sigmoid'] = config.sigmoid - - if 'custom_threshold' in config: - metric_args['custom_threshold'] = config.custom_threshold - - if config.test_dataset == 'pascal': - - loss_fn = get_attribute(train_config.loss) - # assume that if no split is specified in train_config, test on all splits, - - if 'splits' in config: - splits = config.splits - else: - if 'split' in train_config and type(train_config.split) == int: - # unless train_config has a split set, in that case assume train mode in training - splits = [train_config.split] - assert train_config.mode == 'train' - else: - splits = [0,1,2,3] - - log.info('Test on these splits', splits) - - scores = dict() - for split in splits: - - shift = config.shift if 'shift' in config else 0 - - # automatic shift - if shift == 'auto': - shift_compute_t = time.time() - shift = compute_shift2(model, [get_cached_pascal_pfe(s, config) for s in range(4) if s != split], repetitions=config.compute_shift_fac) - log.info(f'Best threshold is {shift}, computed on splits: {[s for s in range(4) if s != split]}, took {time.time() - shift_compute_t:.1f}s') - - dataset = get_cached_pascal_pfe(split, config) - - eval_start_t = time.time() - - loader = DataLoader(dataset, batch_size=1, num_workers=0, shuffle=False, drop_last=False) - - assert config.batch_size is None or config.batch_size == 1, 'When PFE Dataset is used, batch size must be 1' - - metric = FixedIntervalMetrics(resize_pred=True, sigmoid=True, custom_threshold=shift, **metric_args) - - with torch.no_grad(): - - i, losses = 0, [] - for i_all, (data_x, data_y) in enumerate(loader): - - data_x = [v.cuda(non_blocking=True) if isinstance(v, torch.Tensor) else v for v in data_x] - data_y = [v.cuda(non_blocking=True) if isinstance(v, torch.Tensor) else v for v in data_y] - - if config.mask == 'separate': # for old CondBase model - pred, = model(data_x[0], data_x[1], data_x[2]) - else: - # assert config.mask in {'text', 'highlight'} - pred, _, _, _ = model(data_x[0], data_x[1], return_features=True) - - # loss = loss_fn(pred, data_y[0]) - metric.add(pred.unsqueeze(1) + shift, data_y) - - # 
losses += [float(loss)] - - i += 1 - if config.max_iterations and i >= config.max_iterations: - break - - #scores[split] = {m: s for m, s in zip(metric.names(), metric.value())} - - log.info(f'Dataset length: {len(dataset)}, took {time.time() - eval_start_t:.1f}s to evaluate.') - - print(metric.value()['mean_iou_scores']) - - scores[split] = metric.scores() - - log.info(f'Completed split {split}') - - key_prefix = config['name'] if 'name' in config else 'pas' - - all_keys = set.intersection(*[set(v.keys()) for v in scores.values()]) - - valid_keys = [k for k in all_keys if all(v[k] is not None and isinstance(v[k], (int, float, np.float)) for v in scores.values())] - - return {key_prefix: {k: np.mean([s[k] for s in scores.values()]) for k in valid_keys}} - - - if config.test_dataset == 'coco': - from datasets.coco_wrapper import COCOWrapper - - coco_dataset = COCOWrapper('test', fold=train_config.fold, image_size=train_config.image_size, mask=config.mask, - with_class_label=True) - - log.info('Dataset length', len(coco_dataset)) - loader = DataLoader(coco_dataset, batch_size=config.batch_size, num_workers=2, shuffle=False, drop_last=False) - - metric = get_attribute(config.metric)(resize_pred=True, **metric_args) - - shift = config.shift if 'shift' in config else 0 - - with torch.no_grad(): - - i, losses = 0, [] - for i_all, (data_x, data_y) in enumerate(loader): - data_x = [v.cuda(non_blocking=True) if isinstance(v, torch.Tensor) else v for v in data_x] - data_y = [v.cuda(non_blocking=True) if isinstance(v, torch.Tensor) else v for v in data_y] - - if config.mask == 'separate': # for old CondBase model - pred, = model(data_x[0], data_x[1], data_x[2]) - else: - # assert config.mask in {'text', 'highlight'} - pred, _, _, _ = model(data_x[0], data_x[1], return_features=True) - - metric.add([pred + shift], data_y) - - i += 1 - if config.max_iterations and i >= config.max_iterations: - break - - key_prefix = config['name'] if 'name' in config else 'coco' - return {key_prefix: metric.scores()} - #return {key_prefix: {k: v for k, v in zip(metric.names(), metric.value())}} - - - if config.test_dataset == 'phrasecut': - from datasets.phrasecut import PhraseCut - - only_visual = config.only_visual is not None and config.only_visual - with_visual = config.with_visual is not None and config.with_visual - - dataset = PhraseCut('test', - image_size=train_config.image_size, - mask=config.mask, - with_visual=with_visual, only_visual=only_visual, aug_crop=False, - aug_color=False) - - loader = DataLoader(dataset, batch_size=config.batch_size, num_workers=2, shuffle=False, drop_last=False) - metric = get_attribute(config.metric)(resize_pred=True, **metric_args) - - shift = config.shift if 'shift' in config else 0 - - - with torch.no_grad(): - - i, losses = 0, [] - for i_all, (data_x, data_y) in enumerate(loader): - data_x = [v.cuda(non_blocking=True) if isinstance(v, torch.Tensor) else v for v in data_x] - data_y = [v.cuda(non_blocking=True) if isinstance(v, torch.Tensor) else v for v in data_y] - - pred, _, _, _ = model(data_x[0], data_x[1], return_features=True) - metric.add([pred + shift], data_y) - - i += 1 - if config.max_iterations and i >= config.max_iterations: - break - - key_prefix = config['name'] if 'name' in config else 'phrasecut' - return {key_prefix: metric.scores()} - #return {key_prefix: {k: v for k, v in zip(metric.names(), metric.value())}} - - if config.test_dataset == 'pascal_zs': - from third_party.JoEm.model.metric import Evaluator - from third_party.JoEm.data_loader import 
get_seen_idx, get_unseen_idx, VOC - from datasets.pascal_zeroshot import PascalZeroShot, PASCAL_VOC_CLASSES_ZS - - from models.clipseg import CLIPSegMultiLabel - - n_unseen = train_config.remove_classes[1] - - pz = PascalZeroShot('val', n_unseen, image_size=352) - m = CLIPSegMultiLabel(model=train_config.name).cuda() - m.eval(); - - print(len(pz), n_unseen) - print('training removed', [c for class_set in PASCAL_VOC_CLASSES_ZS[:n_unseen // 2] for c in class_set]) - - print('unseen', [VOC[i] for i in get_unseen_idx(n_unseen)]) - print('seen', [VOC[i] for i in get_seen_idx(n_unseen)]) - - loader = DataLoader(pz, batch_size=8) - evaluator = Evaluator(21, get_unseen_idx(n_unseen), get_seen_idx(n_unseen)) - - for i, (data_x, data_y) in enumerate(loader): - pred = m(data_x[0].cuda()) - evaluator.add_batch(data_y[0].numpy(), pred.argmax(1).cpu().detach().numpy()) - - if config.max_iter is not None and i > config.max_iter: - break - - scores = evaluator.Mean_Intersection_over_Union() - key_prefix = config['name'] if 'name' in config else 'pas_zs' - - return {key_prefix: {k: scores[k] for k in ['seen', 'unseen', 'harmonic', 'overall']}} - - elif config.test_dataset in {'same_as_training', 'affordance'}: - loss_fn = get_attribute(train_config.loss) - - metric_cls = get_attribute(config.metric) - metric = metric_cls(**metric_args) - - if config.test_dataset == 'same_as_training': - dataset_cls = get_attribute(train_config.dataset) - elif config.test_dataset == 'affordance': - dataset_cls = get_attribute('datasets.lvis_oneshot3.LVIS_Affordance') - dataset_name = 'aff' - else: - dataset_cls = get_attribute('datasets.lvis_oneshot3.LVIS_OneShot') - dataset_name = 'lvis' - - _, dataset_args, _ = filter_args(config, inspect.signature(dataset_cls).parameters) - - dataset_args['image_size'] = train_config.image_size # explicitly use training image size for evaluation - - if model.__class__.__name__ == 'PFENetWrapper': - dataset_args['image_size'] = config.image_size - - log.info('init dataset', str(dataset_cls)) - dataset = dataset_cls(**dataset_args) - - log.info(f'Score on {model.__class__.__name__} on {dataset_cls.__name__}') - - data_loader = torch.utils.data.DataLoader(dataset, batch_size=config.batch_size, shuffle=config.shuffle) - - # explicitly set prompts - if config.prompt == 'plain': - model.prompt_list = ['{}'] - elif config.prompt == 'fixed': - model.prompt_list = ['a photo of a {}.'] - elif config.prompt == 'shuffle': - model.prompt_list = ['a photo of a {}.', 'a photograph of a {}.', 'an image of a {}.', '{}.'] - elif config.prompt == 'shuffle_clip': - from models.clip_prompts import imagenet_templates - model.prompt_list = imagenet_templates - - config.assume_no_unused_keys(exceptions=['max_iterations']) - - t_start = time.time() - - with torch.no_grad(): # TODO: switch to inference_mode (torch 1.9) - i, losses = 0, [] - for data_x, data_y in data_loader: - - data_x = [x.cuda() if isinstance(x, torch.Tensor) else x for x in data_x] - data_y = [x.cuda() if isinstance(x, torch.Tensor) else x for x in data_y] - - if model.__class__.__name__ in {'ConditionBase4', 'PFENetWrapper'}: - pred, = model(data_x[0], data_x[1], data_x[2]) - visual_q = None - else: - pred, visual_q, _, _ = model(data_x[0], data_x[1], return_features=True) - - loss = loss_fn(pred, data_y[0]) - - metric.add([pred], data_y) - - losses += [float(loss)] - - i += 1 - if config.max_iterations and i >= config.max_iterations: - break - - # scores = {m: s for m, s in zip(metric.names(), metric.value())} - scores = metric.scores() - - 
keys = set(scores.keys()) - if dataset.negative_prob > 0 and 'mIoU' in keys: - keys.remove('mIoU') - - name_mask = dataset.mask.replace('text_label', 'txt')[:3] - name_neg = '' if dataset.negative_prob == 0 else '_' + str(dataset.negative_prob) - - score_name = config.name if 'name' in config else f'{dataset_name}_{name_mask}{name_neg}' - - scores = {score_name: {k: v for k,v in scores.items() if k in keys}} - scores[score_name].update({'test_loss': np.mean(losses)}) - - log.info(f'Evaluation took {time.time() - t_start:.1f}s') - - return scores - else: - raise ValueError('invalid test dataset') - - - - - - - - - -if __name__ == '__main__': - main() \ No newline at end of file diff --git a/spaces/nikitaPDL2023/assignment4/detectron2/docker/Dockerfile b/spaces/nikitaPDL2023/assignment4/detectron2/docker/Dockerfile deleted file mode 100644 index fae0060b2b78b26e4cef9631a04e84db4eb2c567..0000000000000000000000000000000000000000 --- a/spaces/nikitaPDL2023/assignment4/detectron2/docker/Dockerfile +++ /dev/null @@ -1,47 +0,0 @@ -FROM nvidia/cuda:11.1.1-cudnn8-devel-ubuntu18.04 -# use an older system (18.04) to avoid opencv incompatibility (issue#3524) - -ENV DEBIAN_FRONTEND noninteractive -RUN apt-get update && apt-get install -y \ - python3-opencv ca-certificates python3-dev git wget sudo ninja-build -RUN ln -sv /usr/bin/python3 /usr/bin/python - -# create a non-root user -ARG USER_ID=1000 -RUN useradd -m --no-log-init --system --uid ${USER_ID} appuser -g sudo -RUN echo '%sudo ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers -USER appuser -WORKDIR /home/appuser - -ENV PATH="/home/appuser/.local/bin:${PATH}" -RUN wget https://bootstrap.pypa.io/pip/3.6/get-pip.py && \ - python3 get-pip.py --user && \ - rm get-pip.py - -# install dependencies -# See https://pytorch.org/ for other options if you use a different version of CUDA -RUN pip install --user tensorboard cmake onnx # cmake from apt-get is too old -RUN pip install --user torch==1.10 torchvision==0.11.1 -f https://download.pytorch.org/whl/cu111/torch_stable.html - -RUN pip install --user 'git+https://github.com/facebookresearch/fvcore' -# install detectron2 -RUN git clone https://github.com/facebookresearch/detectron2 detectron2_repo -# set FORCE_CUDA because during `docker build` cuda is not accessible -ENV FORCE_CUDA="1" -# This will by default build detectron2 for all common cuda architectures and take a lot more time, -# because inside `docker build`, there is no way to tell which architecture will be used. -ARG TORCH_CUDA_ARCH_LIST="Kepler;Kepler+Tesla;Maxwell;Maxwell+Tegra;Pascal;Volta;Turing" -ENV TORCH_CUDA_ARCH_LIST="${TORCH_CUDA_ARCH_LIST}" - -RUN pip install --user -e detectron2_repo - -# Set a fixed model cache directory. 
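-# fvcore, which detectron2 uses to fetch model-zoo weights, reads this cache path; /tmp stays writable for the non-root appuser.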
-ENV FVCORE_CACHE="/tmp" -WORKDIR /home/appuser/detectron2_repo - -# run detectron2 under user "appuser": -# wget http://images.cocodataset.org/val2017/000000439715.jpg -O input.jpg -# python3 demo/demo.py \ - #--config-file configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml \ - #--input input.jpg --output outputs/ \ - #--opts MODEL.WEIGHTS detectron2://COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x/137849600/model_final_f10217.pkl diff --git a/spaces/notable12/DermDetectAI/README.md b/spaces/notable12/DermDetectAI/README.md deleted file mode 100644 index 795d97359ff674db0903bde687915ee1b09c6b54..0000000000000000000000000000000000000000 --- a/spaces/notable12/DermDetectAI/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Youngp5 Skin Conditions -emoji: 🌖 -colorFrom: purple -colorTo: blue -sdk: gradio -sdk_version: 3.35.2 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/oliver2023/chatgpt-on-wechat/bridge/bridge.py b/spaces/oliver2023/chatgpt-on-wechat/bridge/bridge.py deleted file mode 100644 index 5c8448ee57d526ecf91b094db692bdafc398c4ea..0000000000000000000000000000000000000000 --- a/spaces/oliver2023/chatgpt-on-wechat/bridge/bridge.py +++ /dev/null @@ -1,50 +0,0 @@ -from bridge.context import Context -from bridge.reply import Reply -from common.log import logger -from bot import bot_factory -from common.singleton import singleton -from voice import voice_factory -from config import conf -from common import const - - -@singleton -class Bridge(object): - def __init__(self): - self.btype={ - "chat": const.CHATGPT, - "voice_to_text": conf().get("voice_to_text", "openai"), - "text_to_voice": conf().get("text_to_voice", "google") - } - model_type = conf().get("model") - if model_type in ["text-davinci-003"]: - self.btype['chat'] = const.OPEN_AI - if conf().get("use_azure_chatgpt"): - self.btype['chat'] = const.CHATGPTONAZURE - self.bots={} - - def get_bot(self,typename): - if self.bots.get(typename) is None: - logger.info("create bot {} for {}".format(self.btype[typename],typename)) - if typename == "text_to_voice": - self.bots[typename] = voice_factory.create_voice(self.btype[typename]) - elif typename == "voice_to_text": - self.bots[typename] = voice_factory.create_voice(self.btype[typename]) - elif typename == "chat": - self.bots[typename] = bot_factory.create_bot(self.btype[typename]) - return self.bots[typename] - - def get_bot_type(self,typename): - return self.btype[typename] - - - def fetch_reply_content(self, query, context : Context) -> Reply: - return self.get_bot("chat").reply(query, context) - - - def fetch_voice_to_text(self, voiceFile) -> Reply: - return self.get_bot("voice_to_text").voiceToText(voiceFile) - - def fetch_text_to_voice(self, text) -> Reply: - return self.get_bot("text_to_voice").textToVoice(text) - diff --git a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/examples/dreambooth/train_dreambooth_lora_sdxl.py b/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/examples/dreambooth/train_dreambooth_lora_sdxl.py deleted file mode 100644 index 24dbf4313662d54cb4bf423b42a68a1d9548e61a..0000000000000000000000000000000000000000 --- a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/examples/dreambooth/train_dreambooth_lora_sdxl.py +++ /dev/null @@ -1,1368 +0,0 @@ -#!/usr/bin/env python -# coding=utf-8 -# Copyright 2023 The HuggingFace Inc. team. All rights reserved. 
-# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and - -import argparse -import gc -import hashlib -import itertools -import logging -import math -import os -import shutil -import warnings -from pathlib import Path -from typing import Dict - -import numpy as np -import torch -import torch.nn.functional as F -import torch.utils.checkpoint -import transformers -from accelerate import Accelerator -from accelerate.logging import get_logger -from accelerate.utils import ProjectConfiguration, set_seed -from huggingface_hub import create_repo, upload_folder -from packaging import version -from PIL import Image -from PIL.ImageOps import exif_transpose -from torch.utils.data import Dataset -from torchvision import transforms -from tqdm.auto import tqdm -from transformers import AutoTokenizer, PretrainedConfig - -import diffusers -from diffusers import ( - AutoencoderKL, - DDPMScheduler, - DPMSolverMultistepScheduler, - StableDiffusionXLPipeline, - UNet2DConditionModel, -) -from diffusers.loaders import LoraLoaderMixin, text_encoder_lora_state_dict -from diffusers.models.attention_processor import LoRAAttnProcessor, LoRAAttnProcessor2_0 -from diffusers.optimization import get_scheduler -from diffusers.utils import check_min_version, is_wandb_available -from diffusers.utils.import_utils import is_xformers_available - - -# Will error if the minimal version of diffusers is not installed. Remove at your own risks. -check_min_version("0.22.0.dev0") - -logger = get_logger(__name__) - - -def save_model_card( - repo_id: str, images=None, base_model=str, train_text_encoder=False, prompt=str, repo_folder=None, vae_path=None -): - img_str = "" - for i, image in enumerate(images): - image.save(os.path.join(repo_folder, f"image_{i}.png")) - img_str += f"![img_{i}](./image_{i}.png)\n" - - yaml = f""" ---- -license: openrail++ -base_model: {base_model} -instance_prompt: {prompt} -tags: -- stable-diffusion-xl -- stable-diffusion-xl-diffusers -- text-to-image -- diffusers -- lora -inference: true ---- - """ - model_card = f""" -# LoRA DreamBooth - {repo_id} - -These are LoRA adaption weights for {base_model}. The weights were trained on {prompt} using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following. \n -{img_str} - -LoRA for the text encoder was enabled: {train_text_encoder}. - -Special VAE used for training: {vae_path}. 
-""" - with open(os.path.join(repo_folder, "README.md"), "w") as f: - f.write(yaml + model_card) - - -def import_model_class_from_model_name_or_path( - pretrained_model_name_or_path: str, revision: str, subfolder: str = "text_encoder" -): - text_encoder_config = PretrainedConfig.from_pretrained( - pretrained_model_name_or_path, subfolder=subfolder, revision=revision - ) - model_class = text_encoder_config.architectures[0] - - if model_class == "CLIPTextModel": - from transformers import CLIPTextModel - - return CLIPTextModel - elif model_class == "CLIPTextModelWithProjection": - from transformers import CLIPTextModelWithProjection - - return CLIPTextModelWithProjection - else: - raise ValueError(f"{model_class} is not supported.") - - -def parse_args(input_args=None): - parser = argparse.ArgumentParser(description="Simple example of a training script.") - parser.add_argument( - "--pretrained_model_name_or_path", - type=str, - default=None, - required=True, - help="Path to pretrained model or model identifier from huggingface.co/models.", - ) - parser.add_argument( - "--pretrained_vae_model_name_or_path", - type=str, - default=None, - help="Path to pretrained VAE model with better numerical stability. More details: https://github.com/huggingface/diffusers/pull/4038.", - ) - parser.add_argument( - "--revision", - type=str, - default=None, - required=False, - help="Revision of pretrained model identifier from huggingface.co/models.", - ) - parser.add_argument( - "--instance_data_dir", - type=str, - default=None, - required=True, - help="A folder containing the training data of instance images.", - ) - parser.add_argument( - "--class_data_dir", - type=str, - default=None, - required=False, - help="A folder containing the training data of class images.", - ) - parser.add_argument( - "--instance_prompt", - type=str, - default=None, - required=True, - help="The prompt with identifier specifying the instance", - ) - parser.add_argument( - "--class_prompt", - type=str, - default=None, - help="The prompt to specify images in the same class as provided instance images.", - ) - parser.add_argument( - "--validation_prompt", - type=str, - default=None, - help="A prompt that is used during validation to verify that the model is learning.", - ) - parser.add_argument( - "--num_validation_images", - type=int, - default=4, - help="Number of images that should be generated during validation with `validation_prompt`.", - ) - parser.add_argument( - "--validation_epochs", - type=int, - default=50, - help=( - "Run dreambooth validation every X epochs. Dreambooth validation consists of running the prompt" - " `args.validation_prompt` multiple times: `args.num_validation_images`." - ), - ) - parser.add_argument( - "--with_prior_preservation", - default=False, - action="store_true", - help="Flag to add prior preservation loss.", - ) - parser.add_argument("--prior_loss_weight", type=float, default=1.0, help="The weight of prior preservation loss.") - parser.add_argument( - "--num_class_images", - type=int, - default=100, - help=( - "Minimal class images for prior preservation loss. If there are not enough images already present in" - " class_data_dir, additional images will be sampled with class_prompt." 
- ), - ) - parser.add_argument( - "--output_dir", - type=str, - default="lora-dreambooth-model", - help="The output directory where the model predictions and checkpoints will be written.", - ) - parser.add_argument("--seed", type=int, default=None, help="A seed for reproducible training.") - parser.add_argument( - "--resolution", - type=int, - default=1024, - help=( - "The resolution for input images, all the images in the train/validation dataset will be resized to this" - " resolution" - ), - ) - parser.add_argument( - "--crops_coords_top_left_h", - type=int, - default=0, - help=("Coordinate for (the height) to be included in the crop coordinate embeddings needed by SDXL UNet."), - ) - parser.add_argument( - "--crops_coords_top_left_w", - type=int, - default=0, - help=("Coordinate for (the height) to be included in the crop coordinate embeddings needed by SDXL UNet."), - ) - parser.add_argument( - "--center_crop", - default=False, - action="store_true", - help=( - "Whether to center crop the input images to the resolution. If not set, the images will be randomly" - " cropped. The images will be resized to the resolution first before cropping." - ), - ) - parser.add_argument( - "--train_text_encoder", - action="store_true", - help="Whether to train the text encoder. If set, the text encoder should be float32 precision.", - ) - parser.add_argument( - "--train_batch_size", type=int, default=4, help="Batch size (per device) for the training dataloader." - ) - parser.add_argument( - "--sample_batch_size", type=int, default=4, help="Batch size (per device) for sampling images." - ) - parser.add_argument("--num_train_epochs", type=int, default=1) - parser.add_argument( - "--max_train_steps", - type=int, - default=None, - help="Total number of training steps to perform. If provided, overrides num_train_epochs.", - ) - parser.add_argument( - "--checkpointing_steps", - type=int, - default=500, - help=( - "Save a checkpoint of the training state every X updates. These checkpoints can be used both as final" - " checkpoints in case they are better than the last checkpoint, and are also suitable for resuming" - " training using `--resume_from_checkpoint`." - ), - ) - parser.add_argument( - "--checkpoints_total_limit", - type=int, - default=None, - help=("Max number of checkpoints to store."), - ) - parser.add_argument( - "--resume_from_checkpoint", - type=str, - default=None, - help=( - "Whether training should be resumed from a previous checkpoint. Use a path saved by" - ' `--checkpointing_steps`, or `"latest"` to automatically select the last available checkpoint.' - ), - ) - parser.add_argument( - "--gradient_accumulation_steps", - type=int, - default=1, - help="Number of updates steps to accumulate before performing a backward/update pass.", - ) - parser.add_argument( - "--gradient_checkpointing", - action="store_true", - help="Whether or not to use gradient checkpointing to save memory at the expense of slower backward pass.", - ) - parser.add_argument( - "--learning_rate", - type=float, - default=5e-4, - help="Initial learning rate (after the potential warmup period) to use.", - ) - parser.add_argument( - "--scale_lr", - action="store_true", - default=False, - help="Scale the learning rate by the number of GPUs, gradient accumulation steps, and batch size.", - ) - parser.add_argument( - "--lr_scheduler", - type=str, - default="constant", - help=( - 'The scheduler type to use. 
Choose between ["linear", "cosine", "cosine_with_restarts", "polynomial",' - ' "constant", "constant_with_warmup"]' - ), - ) - parser.add_argument( - "--lr_warmup_steps", type=int, default=500, help="Number of steps for the warmup in the lr scheduler." - ) - parser.add_argument( - "--lr_num_cycles", - type=int, - default=1, - help="Number of hard resets of the lr in cosine_with_restarts scheduler.", - ) - parser.add_argument("--lr_power", type=float, default=1.0, help="Power factor of the polynomial scheduler.") - parser.add_argument( - "--dataloader_num_workers", - type=int, - default=0, - help=( - "Number of subprocesses to use for data loading. 0 means that the data will be loaded in the main process." - ), - ) - parser.add_argument( - "--use_8bit_adam", action="store_true", help="Whether or not to use 8-bit Adam from bitsandbytes." - ) - parser.add_argument("--adam_beta1", type=float, default=0.9, help="The beta1 parameter for the Adam optimizer.") - parser.add_argument("--adam_beta2", type=float, default=0.999, help="The beta2 parameter for the Adam optimizer.") - parser.add_argument("--adam_weight_decay", type=float, default=1e-2, help="Weight decay to use.") - parser.add_argument("--adam_epsilon", type=float, default=1e-08, help="Epsilon value for the Adam optimizer") - parser.add_argument("--max_grad_norm", default=1.0, type=float, help="Max gradient norm.") - parser.add_argument("--push_to_hub", action="store_true", help="Whether or not to push the model to the Hub.") - parser.add_argument("--hub_token", type=str, default=None, help="The token to use to push to the Model Hub.") - parser.add_argument( - "--hub_model_id", - type=str, - default=None, - help="The name of the repository to keep in sync with the local `output_dir`.", - ) - parser.add_argument( - "--logging_dir", - type=str, - default="logs", - help=( - "[TensorBoard](https://www.tensorflow.org/tensorboard) log directory. Will default to" - " *output_dir/runs/**CURRENT_DATETIME_HOSTNAME***." - ), - ) - parser.add_argument( - "--allow_tf32", - action="store_true", - help=( - "Whether or not to allow TF32 on Ampere GPUs. Can be used to speed up training. For more information, see" - " https://pytorch.org/docs/stable/notes/cuda.html#tensorfloat-32-tf32-on-ampere-devices" - ), - ) - parser.add_argument( - "--report_to", - type=str, - default="tensorboard", - help=( - 'The integration to report the results and logs to. Supported platforms are `"tensorboard"`' - ' (default), `"wandb"` and `"comet_ml"`. Use `"all"` to report to all integrations.' - ), - ) - parser.add_argument( - "--mixed_precision", - type=str, - default=None, - choices=["no", "fp16", "bf16"], - help=( - "Whether to use mixed precision. Choose between fp16 and bf16 (bfloat16). Bf16 requires PyTorch >=" - " 1.10.and an Nvidia Ampere GPU. Default to the value of accelerate config of the current system or the" - " flag passed with the `accelerate.launch` command. Use this argument to override the accelerate config." - ), - ) - parser.add_argument( - "--prior_generation_precision", - type=str, - default=None, - choices=["no", "fp32", "fp16", "bf16"], - help=( - "Choose prior generation precision between fp32, fp16 and bf16 (bfloat16). Bf16 requires PyTorch >=" - " 1.10.and an Nvidia Ampere GPU. Default to fp16 if a GPU is available else fp32." 
- ), - ) - parser.add_argument("--local_rank", type=int, default=-1, help="For distributed training: local_rank") - parser.add_argument( - "--enable_xformers_memory_efficient_attention", action="store_true", help="Whether or not to use xformers." - ) - parser.add_argument( - "--rank", - type=int, - default=4, - help=("The dimension of the LoRA update matrices."), - ) - - if input_args is not None: - args = parser.parse_args(input_args) - else: - args = parser.parse_args() - - env_local_rank = int(os.environ.get("LOCAL_RANK", -1)) - if env_local_rank != -1 and env_local_rank != args.local_rank: - args.local_rank = env_local_rank - - if args.with_prior_preservation: - if args.class_data_dir is None: - raise ValueError("You must specify a data directory for class images.") - if args.class_prompt is None: - raise ValueError("You must specify prompt for class images.") - else: - # logger is not available yet - if args.class_data_dir is not None: - warnings.warn("You need not use --class_data_dir without --with_prior_preservation.") - if args.class_prompt is not None: - warnings.warn("You need not use --class_prompt without --with_prior_preservation.") - - return args - - -class DreamBoothDataset(Dataset): - """ - A dataset to prepare the instance and class images with the prompts for fine-tuning the model. - It pre-processes the images. - """ - - def __init__( - self, - instance_data_root, - class_data_root=None, - class_num=None, - size=1024, - center_crop=False, - ): - self.size = size - self.center_crop = center_crop - - self.instance_data_root = Path(instance_data_root) - if not self.instance_data_root.exists(): - raise ValueError("Instance images root doesn't exists.") - - self.instance_images_path = list(Path(instance_data_root).iterdir()) - self.num_instance_images = len(self.instance_images_path) - self._length = self.num_instance_images - - if class_data_root is not None: - self.class_data_root = Path(class_data_root) - self.class_data_root.mkdir(parents=True, exist_ok=True) - self.class_images_path = list(self.class_data_root.iterdir()) - if class_num is not None: - self.num_class_images = min(len(self.class_images_path), class_num) - else: - self.num_class_images = len(self.class_images_path) - self._length = max(self.num_class_images, self.num_instance_images) - else: - self.class_data_root = None - - self.image_transforms = transforms.Compose( - [ - transforms.Resize(size, interpolation=transforms.InterpolationMode.BILINEAR), - transforms.CenterCrop(size) if center_crop else transforms.RandomCrop(size), - transforms.ToTensor(), - transforms.Normalize([0.5], [0.5]), - ] - ) - - def __len__(self): - return self._length - - def __getitem__(self, index): - example = {} - instance_image = Image.open(self.instance_images_path[index % self.num_instance_images]) - instance_image = exif_transpose(instance_image) - - if not instance_image.mode == "RGB": - instance_image = instance_image.convert("RGB") - example["instance_images"] = self.image_transforms(instance_image) - - if self.class_data_root: - class_image = Image.open(self.class_images_path[index % self.num_class_images]) - class_image = exif_transpose(class_image) - - if not class_image.mode == "RGB": - class_image = class_image.convert("RGB") - example["class_images"] = self.image_transforms(class_image) - - return example - - -def collate_fn(examples, with_prior_preservation=False): - pixel_values = [example["instance_images"] for example in examples] - - # Concat class and instance examples for prior preservation. 
- # We do this to avoid doing two forward passes. - if with_prior_preservation: - pixel_values += [example["class_images"] for example in examples] - - pixel_values = torch.stack(pixel_values) - pixel_values = pixel_values.to(memory_format=torch.contiguous_format).float() - - batch = {"pixel_values": pixel_values} - return batch - - -class PromptDataset(Dataset): - "A simple dataset to prepare the prompts to generate class images on multiple GPUs." - - def __init__(self, prompt, num_samples): - self.prompt = prompt - self.num_samples = num_samples - - def __len__(self): - return self.num_samples - - def __getitem__(self, index): - example = {} - example["prompt"] = self.prompt - example["index"] = index - return example - - -def tokenize_prompt(tokenizer, prompt): - text_inputs = tokenizer( - prompt, - padding="max_length", - max_length=tokenizer.model_max_length, - truncation=True, - return_tensors="pt", - ) - text_input_ids = text_inputs.input_ids - return text_input_ids - - -# Adapted from pipelines.StableDiffusionXLPipeline.encode_prompt -def encode_prompt(text_encoders, tokenizers, prompt, text_input_ids_list=None): - prompt_embeds_list = [] - - for i, text_encoder in enumerate(text_encoders): - if tokenizers is not None: - tokenizer = tokenizers[i] - text_input_ids = tokenize_prompt(tokenizer, prompt) - else: - assert text_input_ids_list is not None - text_input_ids = text_input_ids_list[i] - - prompt_embeds = text_encoder( - text_input_ids.to(text_encoder.device), - output_hidden_states=True, - ) - - # We are only ALWAYS interested in the pooled output of the final text encoder - pooled_prompt_embeds = prompt_embeds[0] - prompt_embeds = prompt_embeds.hidden_states[-2] - bs_embed, seq_len, _ = prompt_embeds.shape - prompt_embeds = prompt_embeds.view(bs_embed, seq_len, -1) - prompt_embeds_list.append(prompt_embeds) - - prompt_embeds = torch.concat(prompt_embeds_list, dim=-1) - pooled_prompt_embeds = pooled_prompt_embeds.view(bs_embed, -1) - return prompt_embeds, pooled_prompt_embeds - - -def unet_attn_processors_state_dict(unet) -> Dict[str, torch.tensor]: - """ - Returns: - a state dict containing just the attention processor parameters. - """ - attn_processors = unet.attn_processors - - attn_processors_state_dict = {} - - for attn_processor_key, attn_processor in attn_processors.items(): - for parameter_key, parameter in attn_processor.state_dict().items(): - attn_processors_state_dict[f"{attn_processor_key}.{parameter_key}"] = parameter - - return attn_processors_state_dict - - -def main(args): - logging_dir = Path(args.output_dir, args.logging_dir) - - accelerator_project_config = ProjectConfiguration(project_dir=args.output_dir, logging_dir=logging_dir) - - accelerator = Accelerator( - gradient_accumulation_steps=args.gradient_accumulation_steps, - mixed_precision=args.mixed_precision, - log_with=args.report_to, - project_config=accelerator_project_config, - ) - - if args.report_to == "wandb": - if not is_wandb_available(): - raise ImportError("Make sure to install wandb if you want to use it for logging during training.") - import wandb - - # Make one log on every process with the configuration for debugging. 
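- # (the verbosity of non-main processes is lowered just below, so the output is not duplicated)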
- logging.basicConfig( - format="%(asctime)s - %(levelname)s - %(name)s - %(message)s", - datefmt="%m/%d/%Y %H:%M:%S", - level=logging.INFO, - ) - logger.info(accelerator.state, main_process_only=False) - if accelerator.is_local_main_process: - transformers.utils.logging.set_verbosity_warning() - diffusers.utils.logging.set_verbosity_info() - else: - transformers.utils.logging.set_verbosity_error() - diffusers.utils.logging.set_verbosity_error() - - # If passed along, set the training seed now. - if args.seed is not None: - set_seed(args.seed) - - # Generate class images if prior preservation is enabled. - if args.with_prior_preservation: - class_images_dir = Path(args.class_data_dir) - if not class_images_dir.exists(): - class_images_dir.mkdir(parents=True) - cur_class_images = len(list(class_images_dir.iterdir())) - - if cur_class_images < args.num_class_images: - torch_dtype = torch.float16 if accelerator.device.type == "cuda" else torch.float32 - if args.prior_generation_precision == "fp32": - torch_dtype = torch.float32 - elif args.prior_generation_precision == "fp16": - torch_dtype = torch.float16 - elif args.prior_generation_precision == "bf16": - torch_dtype = torch.bfloat16 - pipeline = StableDiffusionXLPipeline.from_pretrained( - args.pretrained_model_name_or_path, - torch_dtype=torch_dtype, - revision=args.revision, - ) - pipeline.set_progress_bar_config(disable=True) - - num_new_images = args.num_class_images - cur_class_images - logger.info(f"Number of class images to sample: {num_new_images}.") - - sample_dataset = PromptDataset(args.class_prompt, num_new_images) - sample_dataloader = torch.utils.data.DataLoader(sample_dataset, batch_size=args.sample_batch_size) - - sample_dataloader = accelerator.prepare(sample_dataloader) - pipeline.to(accelerator.device) - - for example in tqdm( - sample_dataloader, desc="Generating class images", disable=not accelerator.is_local_main_process - ): - images = pipeline(example["prompt"]).images - - for i, image in enumerate(images): - hash_image = hashlib.sha1(image.tobytes()).hexdigest() - image_filename = class_images_dir / f"{example['index'][i] + cur_class_images}-{hash_image}.jpg" - image.save(image_filename) - - del pipeline - if torch.cuda.is_available(): - torch.cuda.empty_cache() - - # Handle the repository creation - if accelerator.is_main_process: - if args.output_dir is not None: - os.makedirs(args.output_dir, exist_ok=True) - - if args.push_to_hub: - repo_id = create_repo( - repo_id=args.hub_model_id or Path(args.output_dir).name, exist_ok=True, token=args.hub_token - ).repo_id - - # Load the tokenizers - tokenizer_one = AutoTokenizer.from_pretrained( - args.pretrained_model_name_or_path, subfolder="tokenizer", revision=args.revision, use_fast=False - ) - tokenizer_two = AutoTokenizer.from_pretrained( - args.pretrained_model_name_or_path, subfolder="tokenizer_2", revision=args.revision, use_fast=False - ) - - # import correct text encoder classes - text_encoder_cls_one = import_model_class_from_model_name_or_path( - args.pretrained_model_name_or_path, args.revision - ) - text_encoder_cls_two = import_model_class_from_model_name_or_path( - args.pretrained_model_name_or_path, args.revision, subfolder="text_encoder_2" - ) - - # Load scheduler and models - noise_scheduler = DDPMScheduler.from_pretrained(args.pretrained_model_name_or_path, subfolder="scheduler") - text_encoder_one = text_encoder_cls_one.from_pretrained( - args.pretrained_model_name_or_path, subfolder="text_encoder", revision=args.revision - ) - text_encoder_two = 
text_encoder_cls_two.from_pretrained( - args.pretrained_model_name_or_path, subfolder="text_encoder_2", revision=args.revision - ) - vae_path = ( - args.pretrained_model_name_or_path - if args.pretrained_vae_model_name_or_path is None - else args.pretrained_vae_model_name_or_path - ) - vae = AutoencoderKL.from_pretrained( - vae_path, subfolder="vae" if args.pretrained_vae_model_name_or_path is None else None, revision=args.revision - ) - unet = UNet2DConditionModel.from_pretrained( - args.pretrained_model_name_or_path, subfolder="unet", revision=args.revision - ) - - # We only train the additional adapter LoRA layers - vae.requires_grad_(False) - text_encoder_one.requires_grad_(False) - text_encoder_two.requires_grad_(False) - unet.requires_grad_(False) - - # For mixed precision training we cast all non-trainable weigths (vae, non-lora text_encoder and non-lora unet) to half-precision - # as these weights are only used for inference, keeping weights in full precision is not required. - weight_dtype = torch.float32 - if accelerator.mixed_precision == "fp16": - weight_dtype = torch.float16 - elif accelerator.mixed_precision == "bf16": - weight_dtype = torch.bfloat16 - - # Move unet, vae and text_encoder to device and cast to weight_dtype - unet.to(accelerator.device, dtype=weight_dtype) - - # The VAE is always in float32 to avoid NaN losses. - vae.to(accelerator.device, dtype=torch.float32) - - text_encoder_one.to(accelerator.device, dtype=weight_dtype) - text_encoder_two.to(accelerator.device, dtype=weight_dtype) - - if args.enable_xformers_memory_efficient_attention: - if is_xformers_available(): - import xformers - - xformers_version = version.parse(xformers.__version__) - if xformers_version == version.parse("0.0.16"): - logger.warn( - "xFormers 0.0.16 cannot be used for training in some GPUs. If you observe problems during training, please update xFormers to at least 0.0.17. See https://huggingface.co/docs/diffusers/main/en/optimization/xformers for more details." - ) - unet.enable_xformers_memory_efficient_attention() - else: - raise ValueError("xformers is not available. Make sure it is installed correctly") - - if args.gradient_checkpointing: - unet.enable_gradient_checkpointing() - if args.train_text_encoder: - text_encoder_one.gradient_checkpointing_enable() - text_encoder_two.gradient_checkpointing_enable() - - # now we will add new LoRA weights to the attention layers - # Set correct lora layers - unet_lora_attn_procs = {} - unet_lora_parameters = [] - for name, attn_processor in unet.attn_processors.items(): - cross_attention_dim = None if name.endswith("attn1.processor") else unet.config.cross_attention_dim - if name.startswith("mid_block"): - hidden_size = unet.config.block_out_channels[-1] - elif name.startswith("up_blocks"): - block_id = int(name[len("up_blocks.")]) - hidden_size = list(reversed(unet.config.block_out_channels))[block_id] - elif name.startswith("down_blocks"): - block_id = int(name[len("down_blocks.")]) - hidden_size = unet.config.block_out_channels[block_id] - - lora_attn_processor_class = ( - LoRAAttnProcessor2_0 if hasattr(F, "scaled_dot_product_attention") else LoRAAttnProcessor - ) - module = lora_attn_processor_class( - hidden_size=hidden_size, cross_attention_dim=cross_attention_dim, rank=args.rank - ) - unet_lora_attn_procs[name] = module - unet_lora_parameters.extend(module.parameters()) - - unet.set_attn_processor(unet_lora_attn_procs) - - # The text encoder comes from 🤗 transformers, so we cannot directly modify it. 
- # So, instead, we monkey-patch the forward calls of its attention-blocks. - if args.train_text_encoder: - # ensure that dtype is float32, even if rest of the model that isn't trained is loaded in fp16 - text_lora_parameters_one = LoraLoaderMixin._modify_text_encoder( - text_encoder_one, dtype=torch.float32, rank=args.rank - ) - text_lora_parameters_two = LoraLoaderMixin._modify_text_encoder( - text_encoder_two, dtype=torch.float32, rank=args.rank - ) - - # create custom saving & loading hooks so that `accelerator.save_state(...)` serializes in a nice format - def save_model_hook(models, weights, output_dir): - if accelerator.is_main_process: - # there are only two options here. Either are just the unet attn processor layers - # or there are the unet and text encoder atten layers - unet_lora_layers_to_save = None - text_encoder_one_lora_layers_to_save = None - text_encoder_two_lora_layers_to_save = None - - for model in models: - if isinstance(model, type(accelerator.unwrap_model(unet))): - unet_lora_layers_to_save = unet_attn_processors_state_dict(model) - elif isinstance(model, type(accelerator.unwrap_model(text_encoder_one))): - text_encoder_one_lora_layers_to_save = text_encoder_lora_state_dict(model) - elif isinstance(model, type(accelerator.unwrap_model(text_encoder_two))): - text_encoder_two_lora_layers_to_save = text_encoder_lora_state_dict(model) - else: - raise ValueError(f"unexpected save model: {model.__class__}") - - # make sure to pop weight so that corresponding model is not saved again - weights.pop() - - StableDiffusionXLPipeline.save_lora_weights( - output_dir, - unet_lora_layers=unet_lora_layers_to_save, - text_encoder_lora_layers=text_encoder_one_lora_layers_to_save, - text_encoder_2_lora_layers=text_encoder_two_lora_layers_to_save, - ) - - def load_model_hook(models, input_dir): - unet_ = None - text_encoder_one_ = None - text_encoder_two_ = None - - while len(models) > 0: - model = models.pop() - - if isinstance(model, type(accelerator.unwrap_model(unet))): - unet_ = model - elif isinstance(model, type(accelerator.unwrap_model(text_encoder_one))): - text_encoder_one_ = model - elif isinstance(model, type(accelerator.unwrap_model(text_encoder_two))): - text_encoder_two_ = model - else: - raise ValueError(f"unexpected save model: {model.__class__}") - - lora_state_dict, network_alphas = LoraLoaderMixin.lora_state_dict(input_dir) - LoraLoaderMixin.load_lora_into_unet(lora_state_dict, network_alphas=network_alphas, unet=unet_) - - text_encoder_state_dict = {k: v for k, v in lora_state_dict.items() if "text_encoder." in k} - LoraLoaderMixin.load_lora_into_text_encoder( - text_encoder_state_dict, network_alphas=network_alphas, text_encoder=text_encoder_one_ - ) - - text_encoder_2_state_dict = {k: v for k, v in lora_state_dict.items() if "text_encoder_2." 
in k} - LoraLoaderMixin.load_lora_into_text_encoder( - text_encoder_2_state_dict, network_alphas=network_alphas, text_encoder=text_encoder_two_ - ) - - accelerator.register_save_state_pre_hook(save_model_hook) - accelerator.register_load_state_pre_hook(load_model_hook) - - # Enable TF32 for faster training on Ampere GPUs, - # cf https://pytorch.org/docs/stable/notes/cuda.html#tensorfloat-32-tf32-on-ampere-devices - if args.allow_tf32: - torch.backends.cuda.matmul.allow_tf32 = True - - if args.scale_lr: - args.learning_rate = ( - args.learning_rate * args.gradient_accumulation_steps * args.train_batch_size * accelerator.num_processes - ) - - # Use 8-bit Adam for lower memory usage or to fine-tune the model in 16GB GPUs - if args.use_8bit_adam: - try: - import bitsandbytes as bnb - except ImportError: - raise ImportError( - "To use 8-bit Adam, please install the bitsandbytes library: `pip install bitsandbytes`." - ) - - optimizer_class = bnb.optim.AdamW8bit - else: - optimizer_class = torch.optim.AdamW - - # Optimizer creation - params_to_optimize = ( - itertools.chain(unet_lora_parameters, text_lora_parameters_one, text_lora_parameters_two) - if args.train_text_encoder - else unet_lora_parameters - ) - optimizer = optimizer_class( - params_to_optimize, - lr=args.learning_rate, - betas=(args.adam_beta1, args.adam_beta2), - weight_decay=args.adam_weight_decay, - eps=args.adam_epsilon, - ) - - # Computes additional embeddings/ids required by the SDXL UNet. - # regular text emebddings (when `train_text_encoder` is not True) - # pooled text embeddings - # time ids - - def compute_time_ids(): - # Adapted from pipeline.StableDiffusionXLPipeline._get_add_time_ids - original_size = (args.resolution, args.resolution) - target_size = (args.resolution, args.resolution) - crops_coords_top_left = (args.crops_coords_top_left_h, args.crops_coords_top_left_w) - add_time_ids = list(original_size + crops_coords_top_left + target_size) - add_time_ids = torch.tensor([add_time_ids]) - add_time_ids = add_time_ids.to(accelerator.device, dtype=weight_dtype) - return add_time_ids - - if not args.train_text_encoder: - tokenizers = [tokenizer_one, tokenizer_two] - text_encoders = [text_encoder_one, text_encoder_two] - - def compute_text_embeddings(prompt, text_encoders, tokenizers): - with torch.no_grad(): - prompt_embeds, pooled_prompt_embeds = encode_prompt(text_encoders, tokenizers, prompt) - prompt_embeds = prompt_embeds.to(accelerator.device) - pooled_prompt_embeds = pooled_prompt_embeds.to(accelerator.device) - return prompt_embeds, pooled_prompt_embeds - - # Handle instance prompt. - instance_time_ids = compute_time_ids() - if not args.train_text_encoder: - instance_prompt_hidden_states, instance_pooled_prompt_embeds = compute_text_embeddings( - args.instance_prompt, text_encoders, tokenizers - ) - - # Handle class prompt for prior-preservation. - if args.with_prior_preservation: - class_time_ids = compute_time_ids() - if not args.train_text_encoder: - class_prompt_hidden_states, class_pooled_prompt_embeds = compute_text_embeddings( - args.class_prompt, text_encoders, tokenizers - ) - - # Clear the memory here. - if not args.train_text_encoder: - del tokenizers, text_encoders - gc.collect() - torch.cuda.empty_cache() - - # Pack the statically computed variables appropriately. This is so that we don't - # have to pass them to the dataloader. 
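- # With prior preservation enabled, the instance and class tensors are stacked in the same order in which collate_fn concatenates the images.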
- add_time_ids = instance_time_ids - if args.with_prior_preservation: - add_time_ids = torch.cat([add_time_ids, class_time_ids], dim=0) - - if not args.train_text_encoder: - prompt_embeds = instance_prompt_hidden_states - unet_add_text_embeds = instance_pooled_prompt_embeds - if args.with_prior_preservation: - prompt_embeds = torch.cat([prompt_embeds, class_prompt_hidden_states], dim=0) - unet_add_text_embeds = torch.cat([unet_add_text_embeds, class_pooled_prompt_embeds], dim=0) - else: - tokens_one = tokenize_prompt(tokenizer_one, args.instance_prompt) - tokens_two = tokenize_prompt(tokenizer_two, args.instance_prompt) - if args.with_prior_preservation: - class_tokens_one = tokenize_prompt(tokenizer_one, args.class_prompt) - class_tokens_two = tokenize_prompt(tokenizer_two, args.class_prompt) - tokens_one = torch.cat([tokens_one, class_tokens_one], dim=0) - tokens_two = torch.cat([tokens_two, class_tokens_two], dim=0) - - # Dataset and DataLoaders creation: - train_dataset = DreamBoothDataset( - instance_data_root=args.instance_data_dir, - class_data_root=args.class_data_dir if args.with_prior_preservation else None, - class_num=args.num_class_images, - size=args.resolution, - center_crop=args.center_crop, - ) - - train_dataloader = torch.utils.data.DataLoader( - train_dataset, - batch_size=args.train_batch_size, - shuffle=True, - collate_fn=lambda examples: collate_fn(examples, args.with_prior_preservation), - num_workers=args.dataloader_num_workers, - ) - - # Scheduler and math around the number of training steps. - overrode_max_train_steps = False - num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps) - if args.max_train_steps is None: - args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch - overrode_max_train_steps = True - - lr_scheduler = get_scheduler( - args.lr_scheduler, - optimizer=optimizer, - num_warmup_steps=args.lr_warmup_steps * accelerator.num_processes, - num_training_steps=args.max_train_steps * accelerator.num_processes, - num_cycles=args.lr_num_cycles, - power=args.lr_power, - ) - - # Prepare everything with our `accelerator`. - if args.train_text_encoder: - unet, text_encoder_one, text_encoder_two, optimizer, train_dataloader, lr_scheduler = accelerator.prepare( - unet, text_encoder_one, text_encoder_two, optimizer, train_dataloader, lr_scheduler - ) - else: - unet, optimizer, train_dataloader, lr_scheduler = accelerator.prepare( - unet, optimizer, train_dataloader, lr_scheduler - ) - - # We need to recalculate our total training steps as the size of the training dataloader may have changed. - num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps) - if overrode_max_train_steps: - args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch - # Afterwards we recalculate our number of training epochs - args.num_train_epochs = math.ceil(args.max_train_steps / num_update_steps_per_epoch) - - # We need to initialize the trackers we use, and also store our configuration. - # The trackers initializes automatically on the main process. - if accelerator.is_main_process: - accelerator.init_trackers("dreambooth-lora-sd-xl", config=vars(args)) - - # Train! 
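- # total_batch_size is the effective number of samples per optimizer step, across all processes and accumulation steps.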
- total_batch_size = args.train_batch_size * accelerator.num_processes * args.gradient_accumulation_steps - - logger.info("***** Running training *****") - logger.info(f" Num examples = {len(train_dataset)}") - logger.info(f" Num batches each epoch = {len(train_dataloader)}") - logger.info(f" Num Epochs = {args.num_train_epochs}") - logger.info(f" Instantaneous batch size per device = {args.train_batch_size}") - logger.info(f" Total train batch size (w. parallel, distributed & accumulation) = {total_batch_size}") - logger.info(f" Gradient Accumulation steps = {args.gradient_accumulation_steps}") - logger.info(f" Total optimization steps = {args.max_train_steps}") - global_step = 0 - first_epoch = 0 - - # Potentially load in the weights and states from a previous save - if args.resume_from_checkpoint: - if args.resume_from_checkpoint != "latest": - path = os.path.basename(args.resume_from_checkpoint) - else: - # Get the mos recent checkpoint - dirs = os.listdir(args.output_dir) - dirs = [d for d in dirs if d.startswith("checkpoint")] - dirs = sorted(dirs, key=lambda x: int(x.split("-")[1])) - path = dirs[-1] if len(dirs) > 0 else None - - if path is None: - accelerator.print( - f"Checkpoint '{args.resume_from_checkpoint}' does not exist. Starting a new training run." - ) - args.resume_from_checkpoint = None - else: - accelerator.print(f"Resuming from checkpoint {path}") - accelerator.load_state(os.path.join(args.output_dir, path)) - global_step = int(path.split("-")[1]) - - resume_global_step = global_step * args.gradient_accumulation_steps - first_epoch = global_step // num_update_steps_per_epoch - resume_step = resume_global_step % (num_update_steps_per_epoch * args.gradient_accumulation_steps) - - # Only show the progress bar once on each machine. - progress_bar = tqdm(range(global_step, args.max_train_steps), disable=not accelerator.is_local_main_process) - progress_bar.set_description("Steps") - - for epoch in range(first_epoch, args.num_train_epochs): - unet.train() - if args.train_text_encoder: - text_encoder_one.train() - text_encoder_two.train() - for step, batch in enumerate(train_dataloader): - # Skip steps until we reach the resumed step - if args.resume_from_checkpoint and epoch == first_epoch and step < resume_step: - if step % args.gradient_accumulation_steps == 0: - progress_bar.update(1) - continue - - with accelerator.accumulate(unet): - pixel_values = batch["pixel_values"].to(dtype=vae.dtype) - - # Convert images to latent space - model_input = vae.encode(pixel_values).latent_dist.sample() - model_input = model_input * vae.config.scaling_factor - if args.pretrained_vae_model_name_or_path is None: - model_input = model_input.to(weight_dtype) - - # Sample noise that we'll add to the latents - noise = torch.randn_like(model_input) - bsz = model_input.shape[0] - # Sample a random timestep for each image - timesteps = torch.randint( - 0, noise_scheduler.config.num_train_timesteps, (bsz,), device=model_input.device - ) - timesteps = timesteps.long() - - # Add noise to the model input according to the noise magnitude at each timestep - # (this is the forward diffusion process) - noisy_model_input = noise_scheduler.add_noise(model_input, noise, timesteps) - - # Calculate the elements to repeat depending on the use of prior-preservation. 
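- # The batch holds instance and class images back to back, so the pre-stacked [instance, class] embeddings only need to be repeated for half of it when prior preservation is on.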
- elems_to_repeat = bsz // 2 if args.with_prior_preservation else bsz - - # Predict the noise residual - if not args.train_text_encoder: - unet_added_conditions = { - "time_ids": add_time_ids.repeat(elems_to_repeat, 1), - "text_embeds": unet_add_text_embeds.repeat(elems_to_repeat, 1), - } - prompt_embeds_input = prompt_embeds.repeat(elems_to_repeat, 1, 1) - model_pred = unet( - noisy_model_input, - timesteps, - prompt_embeds_input, - added_cond_kwargs=unet_added_conditions, - ).sample - else: - unet_added_conditions = {"time_ids": add_time_ids.repeat(elems_to_repeat, 1)} - prompt_embeds, pooled_prompt_embeds = encode_prompt( - text_encoders=[text_encoder_one, text_encoder_two], - tokenizers=None, - prompt=None, - text_input_ids_list=[tokens_one, tokens_two], - ) - unet_added_conditions.update({"text_embeds": pooled_prompt_embeds.repeat(elems_to_repeat, 1)}) - prompt_embeds_input = prompt_embeds.repeat(elems_to_repeat, 1, 1) - model_pred = unet( - noisy_model_input, timesteps, prompt_embeds_input, added_cond_kwargs=unet_added_conditions - ).sample - - # Get the target for loss depending on the prediction type - if noise_scheduler.config.prediction_type == "epsilon": - target = noise - elif noise_scheduler.config.prediction_type == "v_prediction": - target = noise_scheduler.get_velocity(model_input, noise, timesteps) - else: - raise ValueError(f"Unknown prediction type {noise_scheduler.config.prediction_type}") - - if args.with_prior_preservation: - # Chunk the noise and model_pred into two parts and compute the loss on each part separately. - model_pred, model_pred_prior = torch.chunk(model_pred, 2, dim=0) - target, target_prior = torch.chunk(target, 2, dim=0) - - # Compute instance loss - loss = F.mse_loss(model_pred.float(), target.float(), reduction="mean") - - # Compute prior loss - prior_loss = F.mse_loss(model_pred_prior.float(), target_prior.float(), reduction="mean") - - # Add the prior loss to the instance loss. 
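- # i.e. loss = MSE(instance) + prior_loss_weight * MSE(class), which keeps the fine-tuned model anchored to its class prior.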
- loss = loss + args.prior_loss_weight * prior_loss - else: - loss = F.mse_loss(model_pred.float(), target.float(), reduction="mean") - - accelerator.backward(loss) - if accelerator.sync_gradients: - params_to_clip = ( - itertools.chain(unet_lora_parameters, text_lora_parameters_one, text_lora_parameters_two) - if args.train_text_encoder - else unet_lora_parameters - ) - accelerator.clip_grad_norm_(params_to_clip, args.max_grad_norm) - optimizer.step() - lr_scheduler.step() - optimizer.zero_grad() - - # Checks if the accelerator has performed an optimization step behind the scenes - if accelerator.sync_gradients: - progress_bar.update(1) - global_step += 1 - - if accelerator.is_main_process: - if global_step % args.checkpointing_steps == 0: - # _before_ saving state, check if this save would set us over the `checkpoints_total_limit` - if args.checkpoints_total_limit is not None: - checkpoints = os.listdir(args.output_dir) - checkpoints = [d for d in checkpoints if d.startswith("checkpoint")] - checkpoints = sorted(checkpoints, key=lambda x: int(x.split("-")[1])) - - # before we save the new checkpoint, we need to have at _most_ `checkpoints_total_limit - 1` checkpoints - if len(checkpoints) >= args.checkpoints_total_limit: - num_to_remove = len(checkpoints) - args.checkpoints_total_limit + 1 - removing_checkpoints = checkpoints[0:num_to_remove] - - logger.info( - f"{len(checkpoints)} checkpoints already exist, removing {len(removing_checkpoints)} checkpoints" - ) - logger.info(f"removing checkpoints: {', '.join(removing_checkpoints)}") - - for removing_checkpoint in removing_checkpoints: - removing_checkpoint = os.path.join(args.output_dir, removing_checkpoint) - shutil.rmtree(removing_checkpoint) - - save_path = os.path.join(args.output_dir, f"checkpoint-{global_step}") - accelerator.save_state(save_path) - logger.info(f"Saved state to {save_path}") - - logs = {"loss": loss.detach().item(), "lr": lr_scheduler.get_last_lr()[0]} - progress_bar.set_postfix(**logs) - accelerator.log(logs, step=global_step) - - if global_step >= args.max_train_steps: - break - - if accelerator.is_main_process: - if args.validation_prompt is not None and epoch % args.validation_epochs == 0: - logger.info( - f"Running validation... \n Generating {args.num_validation_images} images with prompt:" - f" {args.validation_prompt}." - ) - # create pipeline - if not args.train_text_encoder: - text_encoder_one = text_encoder_cls_one.from_pretrained( - args.pretrained_model_name_or_path, subfolder="text_encoder", revision=args.revision - ) - text_encoder_two = text_encoder_cls_two.from_pretrained( - args.pretrained_model_name_or_path, subfolder="text_encoder_2", revision=args.revision - ) - pipeline = StableDiffusionXLPipeline.from_pretrained( - args.pretrained_model_name_or_path, - vae=vae, - text_encoder=accelerator.unwrap_model(text_encoder_one), - text_encoder_2=accelerator.unwrap_model(text_encoder_two), - unet=accelerator.unwrap_model(unet), - revision=args.revision, - torch_dtype=weight_dtype, - ) - - # We train on the simplified learning objective. 
If we were previously predicting a variance, we need the scheduler to ignore it - scheduler_args = {} - - if "variance_type" in pipeline.scheduler.config: - variance_type = pipeline.scheduler.config.variance_type - - if variance_type in ["learned", "learned_range"]: - variance_type = "fixed_small" - - scheduler_args["variance_type"] = variance_type - - pipeline.scheduler = DPMSolverMultistepScheduler.from_config( - pipeline.scheduler.config, **scheduler_args - ) - - pipeline = pipeline.to(accelerator.device) - pipeline.set_progress_bar_config(disable=True) - - # run inference - generator = torch.Generator(device=accelerator.device).manual_seed(args.seed) if args.seed else None - pipeline_args = {"prompt": args.validation_prompt} - - with torch.cuda.amp.autocast(): - images = [ - pipeline(**pipeline_args, generator=generator).images[0] - for _ in range(args.num_validation_images) - ] - - for tracker in accelerator.trackers: - if tracker.name == "tensorboard": - np_images = np.stack([np.asarray(img) for img in images]) - tracker.writer.add_images("validation", np_images, epoch, dataformats="NHWC") - if tracker.name == "wandb": - tracker.log( - { - "validation": [ - wandb.Image(image, caption=f"{i}: {args.validation_prompt}") - for i, image in enumerate(images) - ] - } - ) - - del pipeline - torch.cuda.empty_cache() - - # Save the lora layers - accelerator.wait_for_everyone() - if accelerator.is_main_process: - unet = accelerator.unwrap_model(unet) - unet = unet.to(torch.float32) - unet_lora_layers = unet_attn_processors_state_dict(unet) - - if args.train_text_encoder: - text_encoder_one = accelerator.unwrap_model(text_encoder_one) - text_encoder_lora_layers = text_encoder_lora_state_dict(text_encoder_one.to(torch.float32)) - text_encoder_two = accelerator.unwrap_model(text_encoder_two) - text_encoder_2_lora_layers = text_encoder_lora_state_dict(text_encoder_two.to(torch.float32)) - else: - text_encoder_lora_layers = None - text_encoder_2_lora_layers = None - - StableDiffusionXLPipeline.save_lora_weights( - save_directory=args.output_dir, - unet_lora_layers=unet_lora_layers, - text_encoder_lora_layers=text_encoder_lora_layers, - text_encoder_2_lora_layers=text_encoder_2_lora_layers, - ) - - # Final inference - # Load previous pipeline - vae = AutoencoderKL.from_pretrained( - vae_path, - subfolder="vae" if args.pretrained_vae_model_name_or_path is None else None, - revision=args.revision, - torch_dtype=weight_dtype, - ) - pipeline = StableDiffusionXLPipeline.from_pretrained( - args.pretrained_model_name_or_path, vae=vae, revision=args.revision, torch_dtype=weight_dtype - ) - - # We train on the simplified learning objective. 
If we were previously predicting a variance, we need the scheduler to ignore it - scheduler_args = {} - - if "variance_type" in pipeline.scheduler.config: - variance_type = pipeline.scheduler.config.variance_type - - if variance_type in ["learned", "learned_range"]: - variance_type = "fixed_small" - - scheduler_args["variance_type"] = variance_type - - pipeline.scheduler = DPMSolverMultistepScheduler.from_config(pipeline.scheduler.config, **scheduler_args) - - # load attention processors - pipeline.load_lora_weights(args.output_dir) - - # run inference - images = [] - if args.validation_prompt and args.num_validation_images > 0: - pipeline = pipeline.to(accelerator.device) - generator = torch.Generator(device=accelerator.device).manual_seed(args.seed) if args.seed else None - images = [ - pipeline(args.validation_prompt, num_inference_steps=25, generator=generator).images[0] - for _ in range(args.num_validation_images) - ] - - for tracker in accelerator.trackers: - if tracker.name == "tensorboard": - np_images = np.stack([np.asarray(img) for img in images]) - tracker.writer.add_images("test", np_images, epoch, dataformats="NHWC") - if tracker.name == "wandb": - tracker.log( - { - "test": [ - wandb.Image(image, caption=f"{i}: {args.validation_prompt}") - for i, image in enumerate(images) - ] - } - ) - - if args.push_to_hub: - save_model_card( - repo_id, - images=images, - base_model=args.pretrained_model_name_or_path, - train_text_encoder=args.train_text_encoder, - prompt=args.instance_prompt, - repo_folder=args.output_dir, - vae_path=args.pretrained_vae_model_name_or_path, - ) - upload_folder( - repo_id=repo_id, - folder_path=args.output_dir, - commit_message="End of training", - ignore_patterns=["step_*", "epoch_*"], - ) - - accelerator.end_training() - - -if __name__ == "__main__": - args = parse_args() - main(args) diff --git a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/scripts/convert_original_t2i_adapter.py b/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/scripts/convert_original_t2i_adapter.py deleted file mode 100644 index 01a1fecf4e4b4a458cd1d866786cc7c975ed8ad2..0000000000000000000000000000000000000000 --- a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/scripts/convert_original_t2i_adapter.py +++ /dev/null @@ -1,250 +0,0 @@ -# coding=utf-8 -# Copyright 2023 The HuggingFace Inc. team. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -""" -Conversion script for the T2I-Adapter checkpoints. -""" - -import argparse - -import torch - -from diffusers import T2IAdapter - - -def convert_adapter(src_state, in_channels): - original_body_length = max([int(x.split(".")[1]) for x in src_state.keys() if "body." 
in x]) + 1 - - assert original_body_length == 8 - - # (0, 1) -> channels 1 - assert src_state["body.0.block1.weight"].shape == (320, 320, 3, 3) - - # (2, 3) -> channels 2 - assert src_state["body.2.in_conv.weight"].shape == (640, 320, 1, 1) - - # (4, 5) -> channels 3 - assert src_state["body.4.in_conv.weight"].shape == (1280, 640, 1, 1) - - # (6, 7) -> channels 4 - assert src_state["body.6.block1.weight"].shape == (1280, 1280, 3, 3) - - res_state = { - "adapter.conv_in.weight": src_state.pop("conv_in.weight"), - "adapter.conv_in.bias": src_state.pop("conv_in.bias"), - # 0.resnets.0 - "adapter.body.0.resnets.0.block1.weight": src_state.pop("body.0.block1.weight"), - "adapter.body.0.resnets.0.block1.bias": src_state.pop("body.0.block1.bias"), - "adapter.body.0.resnets.0.block2.weight": src_state.pop("body.0.block2.weight"), - "adapter.body.0.resnets.0.block2.bias": src_state.pop("body.0.block2.bias"), - # 0.resnets.1 - "adapter.body.0.resnets.1.block1.weight": src_state.pop("body.1.block1.weight"), - "adapter.body.0.resnets.1.block1.bias": src_state.pop("body.1.block1.bias"), - "adapter.body.0.resnets.1.block2.weight": src_state.pop("body.1.block2.weight"), - "adapter.body.0.resnets.1.block2.bias": src_state.pop("body.1.block2.bias"), - # 1 - "adapter.body.1.in_conv.weight": src_state.pop("body.2.in_conv.weight"), - "adapter.body.1.in_conv.bias": src_state.pop("body.2.in_conv.bias"), - # 1.resnets.0 - "adapter.body.1.resnets.0.block1.weight": src_state.pop("body.2.block1.weight"), - "adapter.body.1.resnets.0.block1.bias": src_state.pop("body.2.block1.bias"), - "adapter.body.1.resnets.0.block2.weight": src_state.pop("body.2.block2.weight"), - "adapter.body.1.resnets.0.block2.bias": src_state.pop("body.2.block2.bias"), - # 1.resnets.1 - "adapter.body.1.resnets.1.block1.weight": src_state.pop("body.3.block1.weight"), - "adapter.body.1.resnets.1.block1.bias": src_state.pop("body.3.block1.bias"), - "adapter.body.1.resnets.1.block2.weight": src_state.pop("body.3.block2.weight"), - "adapter.body.1.resnets.1.block2.bias": src_state.pop("body.3.block2.bias"), - # 2 - "adapter.body.2.in_conv.weight": src_state.pop("body.4.in_conv.weight"), - "adapter.body.2.in_conv.bias": src_state.pop("body.4.in_conv.bias"), - # 2.resnets.0 - "adapter.body.2.resnets.0.block1.weight": src_state.pop("body.4.block1.weight"), - "adapter.body.2.resnets.0.block1.bias": src_state.pop("body.4.block1.bias"), - "adapter.body.2.resnets.0.block2.weight": src_state.pop("body.4.block2.weight"), - "adapter.body.2.resnets.0.block2.bias": src_state.pop("body.4.block2.bias"), - # 2.resnets.1 - "adapter.body.2.resnets.1.block1.weight": src_state.pop("body.5.block1.weight"), - "adapter.body.2.resnets.1.block1.bias": src_state.pop("body.5.block1.bias"), - "adapter.body.2.resnets.1.block2.weight": src_state.pop("body.5.block2.weight"), - "adapter.body.2.resnets.1.block2.bias": src_state.pop("body.5.block2.bias"), - # 3.resnets.0 - "adapter.body.3.resnets.0.block1.weight": src_state.pop("body.6.block1.weight"), - "adapter.body.3.resnets.0.block1.bias": src_state.pop("body.6.block1.bias"), - "adapter.body.3.resnets.0.block2.weight": src_state.pop("body.6.block2.weight"), - "adapter.body.3.resnets.0.block2.bias": src_state.pop("body.6.block2.bias"), - # 3.resnets.1 - "adapter.body.3.resnets.1.block1.weight": src_state.pop("body.7.block1.weight"), - "adapter.body.3.resnets.1.block1.bias": src_state.pop("body.7.block1.bias"), - "adapter.body.3.resnets.1.block2.weight": src_state.pop("body.7.block2.weight"), - 
"adapter.body.3.resnets.1.block2.bias": src_state.pop("body.7.block2.bias"), - } - - assert len(src_state) == 0 - - adapter = T2IAdapter(in_channels=in_channels, adapter_type="full_adapter") - - adapter.load_state_dict(res_state) - - return adapter - - -def convert_light_adapter(src_state): - original_body_length = max([int(x.split(".")[1]) for x in src_state.keys() if "body." in x]) + 1 - - assert original_body_length == 4 - - res_state = { - # body.0.in_conv - "adapter.body.0.in_conv.weight": src_state.pop("body.0.in_conv.weight"), - "adapter.body.0.in_conv.bias": src_state.pop("body.0.in_conv.bias"), - # body.0.resnets.0 - "adapter.body.0.resnets.0.block1.weight": src_state.pop("body.0.body.0.block1.weight"), - "adapter.body.0.resnets.0.block1.bias": src_state.pop("body.0.body.0.block1.bias"), - "adapter.body.0.resnets.0.block2.weight": src_state.pop("body.0.body.0.block2.weight"), - "adapter.body.0.resnets.0.block2.bias": src_state.pop("body.0.body.0.block2.bias"), - # body.0.resnets.1 - "adapter.body.0.resnets.1.block1.weight": src_state.pop("body.0.body.1.block1.weight"), - "adapter.body.0.resnets.1.block1.bias": src_state.pop("body.0.body.1.block1.bias"), - "adapter.body.0.resnets.1.block2.weight": src_state.pop("body.0.body.1.block2.weight"), - "adapter.body.0.resnets.1.block2.bias": src_state.pop("body.0.body.1.block2.bias"), - # body.0.resnets.2 - "adapter.body.0.resnets.2.block1.weight": src_state.pop("body.0.body.2.block1.weight"), - "adapter.body.0.resnets.2.block1.bias": src_state.pop("body.0.body.2.block1.bias"), - "adapter.body.0.resnets.2.block2.weight": src_state.pop("body.0.body.2.block2.weight"), - "adapter.body.0.resnets.2.block2.bias": src_state.pop("body.0.body.2.block2.bias"), - # body.0.resnets.3 - "adapter.body.0.resnets.3.block1.weight": src_state.pop("body.0.body.3.block1.weight"), - "adapter.body.0.resnets.3.block1.bias": src_state.pop("body.0.body.3.block1.bias"), - "adapter.body.0.resnets.3.block2.weight": src_state.pop("body.0.body.3.block2.weight"), - "adapter.body.0.resnets.3.block2.bias": src_state.pop("body.0.body.3.block2.bias"), - # body.0.out_conv - "adapter.body.0.out_conv.weight": src_state.pop("body.0.out_conv.weight"), - "adapter.body.0.out_conv.bias": src_state.pop("body.0.out_conv.bias"), - # body.1.in_conv - "adapter.body.1.in_conv.weight": src_state.pop("body.1.in_conv.weight"), - "adapter.body.1.in_conv.bias": src_state.pop("body.1.in_conv.bias"), - # body.1.resnets.0 - "adapter.body.1.resnets.0.block1.weight": src_state.pop("body.1.body.0.block1.weight"), - "adapter.body.1.resnets.0.block1.bias": src_state.pop("body.1.body.0.block1.bias"), - "adapter.body.1.resnets.0.block2.weight": src_state.pop("body.1.body.0.block2.weight"), - "adapter.body.1.resnets.0.block2.bias": src_state.pop("body.1.body.0.block2.bias"), - # body.1.resnets.1 - "adapter.body.1.resnets.1.block1.weight": src_state.pop("body.1.body.1.block1.weight"), - "adapter.body.1.resnets.1.block1.bias": src_state.pop("body.1.body.1.block1.bias"), - "adapter.body.1.resnets.1.block2.weight": src_state.pop("body.1.body.1.block2.weight"), - "adapter.body.1.resnets.1.block2.bias": src_state.pop("body.1.body.1.block2.bias"), - # body.1.body.2 - "adapter.body.1.resnets.2.block1.weight": src_state.pop("body.1.body.2.block1.weight"), - "adapter.body.1.resnets.2.block1.bias": src_state.pop("body.1.body.2.block1.bias"), - "adapter.body.1.resnets.2.block2.weight": src_state.pop("body.1.body.2.block2.weight"), - "adapter.body.1.resnets.2.block2.bias": src_state.pop("body.1.body.2.block2.bias"), 
- # body.1.body.3 - "adapter.body.1.resnets.3.block1.weight": src_state.pop("body.1.body.3.block1.weight"), - "adapter.body.1.resnets.3.block1.bias": src_state.pop("body.1.body.3.block1.bias"), - "adapter.body.1.resnets.3.block2.weight": src_state.pop("body.1.body.3.block2.weight"), - "adapter.body.1.resnets.3.block2.bias": src_state.pop("body.1.body.3.block2.bias"), - # body.1.out_conv - "adapter.body.1.out_conv.weight": src_state.pop("body.1.out_conv.weight"), - "adapter.body.1.out_conv.bias": src_state.pop("body.1.out_conv.bias"), - # body.2.in_conv - "adapter.body.2.in_conv.weight": src_state.pop("body.2.in_conv.weight"), - "adapter.body.2.in_conv.bias": src_state.pop("body.2.in_conv.bias"), - # body.2.body.0 - "adapter.body.2.resnets.0.block1.weight": src_state.pop("body.2.body.0.block1.weight"), - "adapter.body.2.resnets.0.block1.bias": src_state.pop("body.2.body.0.block1.bias"), - "adapter.body.2.resnets.0.block2.weight": src_state.pop("body.2.body.0.block2.weight"), - "adapter.body.2.resnets.0.block2.bias": src_state.pop("body.2.body.0.block2.bias"), - # body.2.body.1 - "adapter.body.2.resnets.1.block1.weight": src_state.pop("body.2.body.1.block1.weight"), - "adapter.body.2.resnets.1.block1.bias": src_state.pop("body.2.body.1.block1.bias"), - "adapter.body.2.resnets.1.block2.weight": src_state.pop("body.2.body.1.block2.weight"), - "adapter.body.2.resnets.1.block2.bias": src_state.pop("body.2.body.1.block2.bias"), - # body.2.body.2 - "adapter.body.2.resnets.2.block1.weight": src_state.pop("body.2.body.2.block1.weight"), - "adapter.body.2.resnets.2.block1.bias": src_state.pop("body.2.body.2.block1.bias"), - "adapter.body.2.resnets.2.block2.weight": src_state.pop("body.2.body.2.block2.weight"), - "adapter.body.2.resnets.2.block2.bias": src_state.pop("body.2.body.2.block2.bias"), - # body.2.body.3 - "adapter.body.2.resnets.3.block1.weight": src_state.pop("body.2.body.3.block1.weight"), - "adapter.body.2.resnets.3.block1.bias": src_state.pop("body.2.body.3.block1.bias"), - "adapter.body.2.resnets.3.block2.weight": src_state.pop("body.2.body.3.block2.weight"), - "adapter.body.2.resnets.3.block2.bias": src_state.pop("body.2.body.3.block2.bias"), - # body.2.out_conv - "adapter.body.2.out_conv.weight": src_state.pop("body.2.out_conv.weight"), - "adapter.body.2.out_conv.bias": src_state.pop("body.2.out_conv.bias"), - # body.3.in_conv - "adapter.body.3.in_conv.weight": src_state.pop("body.3.in_conv.weight"), - "adapter.body.3.in_conv.bias": src_state.pop("body.3.in_conv.bias"), - # body.3.body.0 - "adapter.body.3.resnets.0.block1.weight": src_state.pop("body.3.body.0.block1.weight"), - "adapter.body.3.resnets.0.block1.bias": src_state.pop("body.3.body.0.block1.bias"), - "adapter.body.3.resnets.0.block2.weight": src_state.pop("body.3.body.0.block2.weight"), - "adapter.body.3.resnets.0.block2.bias": src_state.pop("body.3.body.0.block2.bias"), - # body.3.body.1 - "adapter.body.3.resnets.1.block1.weight": src_state.pop("body.3.body.1.block1.weight"), - "adapter.body.3.resnets.1.block1.bias": src_state.pop("body.3.body.1.block1.bias"), - "adapter.body.3.resnets.1.block2.weight": src_state.pop("body.3.body.1.block2.weight"), - "adapter.body.3.resnets.1.block2.bias": src_state.pop("body.3.body.1.block2.bias"), - # body.3.body.2 - "adapter.body.3.resnets.2.block1.weight": src_state.pop("body.3.body.2.block1.weight"), - "adapter.body.3.resnets.2.block1.bias": src_state.pop("body.3.body.2.block1.bias"), - "adapter.body.3.resnets.2.block2.weight": src_state.pop("body.3.body.2.block2.weight"), - 
"adapter.body.3.resnets.2.block2.bias": src_state.pop("body.3.body.2.block2.bias"), - # body.3.body.3 - "adapter.body.3.resnets.3.block1.weight": src_state.pop("body.3.body.3.block1.weight"), - "adapter.body.3.resnets.3.block1.bias": src_state.pop("body.3.body.3.block1.bias"), - "adapter.body.3.resnets.3.block2.weight": src_state.pop("body.3.body.3.block2.weight"), - "adapter.body.3.resnets.3.block2.bias": src_state.pop("body.3.body.3.block2.bias"), - # body.3.out_conv - "adapter.body.3.out_conv.weight": src_state.pop("body.3.out_conv.weight"), - "adapter.body.3.out_conv.bias": src_state.pop("body.3.out_conv.bias"), - } - - assert len(src_state) == 0 - - adapter = T2IAdapter(in_channels=3, channels=[320, 640, 1280], num_res_blocks=4, adapter_type="light_adapter") - - adapter.load_state_dict(res_state) - - return adapter - - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - - parser.add_argument( - "--checkpoint_path", default=None, type=str, required=True, help="Path to the checkpoint to convert." - ) - parser.add_argument( - "--output_path", default=None, type=str, required=True, help="Path to the store the result checkpoint." - ) - parser.add_argument( - "--is_adapter_light", - action="store_true", - help="Is checkpoint come from Adapter-Light architecture. ex: color-adapter", - ) - parser.add_argument("--in_channels", required=False, type=int, help="Input channels for non-light adapter") - - args = parser.parse_args() - src_state = torch.load(args.checkpoint_path) - - if args.is_adapter_light: - adapter = convert_light_adapter(src_state) - else: - if args.in_channels is None: - raise ValueError("set `--in_channels=`") - adapter = convert_adapter(src_state, args.in_channels) - - adapter.save_pretrained(args.output_path) diff --git a/spaces/patrickvonplaten/convert/convert.py b/spaces/patrickvonplaten/convert/convert.py deleted file mode 100644 index 5c9a893dddabd59551b40adcc7f17243a65b4f6c..0000000000000000000000000000000000000000 --- a/spaces/patrickvonplaten/convert/convert.py +++ /dev/null @@ -1,285 +0,0 @@ -import argparse -import json -import os -import shutil -from collections import defaultdict -from inspect import signature -from tempfile import TemporaryDirectory -from typing import Dict, List, Optional, Set - -import torch - -from huggingface_hub import CommitInfo, CommitOperationAdd, Discussion, HfApi, hf_hub_download -from huggingface_hub.file_download import repo_folder_name -from safetensors.torch import load_file, save_file -from transformers import AutoConfig -from transformers.pipelines.base import infer_framework_load_model - - -class AlreadyExists(Exception): - pass - - -def shared_pointers(tensors): - ptrs = defaultdict(list) - for k, v in tensors.items(): - ptrs[v.data_ptr()].append(k) - failing = [] - for ptr, names in ptrs.items(): - if len(names) > 1: - failing.append(names) - return failing - - -def check_file_size(sf_filename: str, pt_filename: str): - sf_size = os.stat(sf_filename).st_size - pt_size = os.stat(pt_filename).st_size - - if (sf_size - pt_size) / pt_size > 0.01: - raise RuntimeError( - f"""The file size different is more than 1%: - - {sf_filename}: {sf_size} - - {pt_filename}: {pt_size} - """ - ) - - -def rename(pt_filename: str) -> str: - filename, ext = os.path.splitext(pt_filename) - local = f"{filename}.safetensors" - local = local.replace("pytorch_model", "model") - return local - - -def convert_multi(model_id: str, folder: str) -> List["CommitOperationAdd"]: - filename = hf_hub_download(repo_id=model_id, 
filename="pytorch_model.bin.index.json") - with open(filename, "r") as f: - data = json.load(f) - - filenames = set(data["weight_map"].values()) - local_filenames = [] - for filename in filenames: - pt_filename = hf_hub_download(repo_id=model_id, filename=filename) - - sf_filename = rename(pt_filename) - sf_filename = os.path.join(folder, sf_filename) - convert_file(pt_filename, sf_filename) - local_filenames.append(sf_filename) - - index = os.path.join(folder, "model.safetensors.index.json") - with open(index, "w") as f: - newdata = {k: v for k, v in data.items()} - newmap = {k: rename(v) for k, v in data["weight_map"].items()} - newdata["weight_map"] = newmap - json.dump(newdata, f, indent=4) - local_filenames.append(index) - - operations = [ - CommitOperationAdd(path_in_repo=local.split("/")[-1], path_or_fileobj=local) for local in local_filenames - ] - - return operations - - -def convert_single(model_id: str, folder: str) -> List["CommitOperationAdd"]: - pt_filename = hf_hub_download(repo_id=model_id, filename="pytorch_model.bin") - - sf_name = "model.safetensors" - sf_filename = os.path.join(folder, sf_name) - convert_file(pt_filename, sf_filename) - operations = [CommitOperationAdd(path_in_repo=sf_name, path_or_fileobj=sf_filename)] - return operations - - -def convert_file( - pt_filename: str, - sf_filename: str, -): - loaded = torch.load(pt_filename, map_location="cpu") - if "state_dict" in loaded: - loaded = loaded["state_dict"] - shared = shared_pointers(loaded) - for shared_weights in shared: - for name in shared_weights[1:]: - loaded.pop(name) - - # For tensors to be contiguous - loaded = {k: v.contiguous() for k, v in loaded.items()} - - dirname = os.path.dirname(sf_filename) - os.makedirs(dirname, exist_ok=True) - save_file(loaded, sf_filename, metadata={"format": "pt"}) - check_file_size(sf_filename, pt_filename) - reloaded = load_file(sf_filename) - for k in loaded: - pt_tensor = loaded[k] - sf_tensor = reloaded[k] - if not torch.equal(pt_tensor, sf_tensor): - raise RuntimeError(f"The output tensors do not match for key {k}") - - -def create_diff(pt_infos: Dict[str, List[str]], sf_infos: Dict[str, List[str]]) -> str: - errors = [] - for key in ["missing_keys", "mismatched_keys", "unexpected_keys"]: - pt_set = set(pt_infos[key]) - sf_set = set(sf_infos[key]) - - pt_only = pt_set - sf_set - sf_only = sf_set - pt_set - - if pt_only: - errors.append(f"{key} : PT warnings contain {pt_only} which are not present in SF warnings") - if sf_only: - errors.append(f"{key} : SF warnings contain {sf_only} which are not present in PT warnings") - return "\n".join(errors) - - -def check_final_model(model_id: str, folder: str): - config = hf_hub_download(repo_id=model_id, filename="config.json") - shutil.copy(config, os.path.join(folder, "config.json")) - config = AutoConfig.from_pretrained(folder) - - _, (pt_model, pt_infos) = infer_framework_load_model(model_id, config, output_loading_info=True) - _, (sf_model, sf_infos) = infer_framework_load_model(folder, config, output_loading_info=True) - - if pt_infos != sf_infos: - error_string = create_diff(pt_infos, sf_infos) - raise ValueError(f"Different infos when reloading the model: {error_string}") - - pt_params = pt_model.state_dict() - sf_params = sf_model.state_dict() - - pt_shared = shared_pointers(pt_params) - sf_shared = shared_pointers(sf_params) - if pt_shared != sf_shared: - raise RuntimeError("The reconstructed model is wrong, shared tensors are different {shared_pt} != {shared_tf}") - - sig = signature(pt_model.forward) - 
input_ids = torch.arange(10).unsqueeze(0) - pixel_values = torch.randn(1, 3, 224, 224) - input_values = torch.arange(1000).float().unsqueeze(0) - kwargs = {} - if "input_ids" in sig.parameters: - kwargs["input_ids"] = input_ids - if "decoder_input_ids" in sig.parameters: - kwargs["decoder_input_ids"] = input_ids - if "pixel_values" in sig.parameters: - kwargs["pixel_values"] = pixel_values - if "input_values" in sig.parameters: - kwargs["input_values"] = input_values - if "bbox" in sig.parameters: - kwargs["bbox"] = torch.zeros((1, 10, 4)).long() - if "image" in sig.parameters: - kwargs["image"] = pixel_values - - if torch.cuda.is_available(): - pt_model = pt_model.cuda() - sf_model = sf_model.cuda() - kwargs = {k: v.cuda() for k, v in kwargs.items()} - - pt_logits = pt_model(**kwargs)[0] - sf_logits = sf_model(**kwargs)[0] - - torch.testing.assert_close(sf_logits, pt_logits) - print(f"Model {model_id} is ok !") - - -def previous_pr(api: "HfApi", model_id: str, pr_title: str) -> Optional["Discussion"]: - try: - discussions = api.get_repo_discussions(repo_id=model_id) - except Exception: - return None - for discussion in discussions: - if discussion.status == "open" and discussion.is_pull_request and discussion.title == pr_title: - return discussion - - -def convert_generic(model_id: str, folder: str, filenames: Set[str]) -> List["CommitOperationAdd"]: - operations = [] - - extensions = set([".bin"]) - for filename in filenames: - prefix, ext = os.path.splitext(filename) - if ext in extensions: - pt_filename = hf_hub_download(model_id, filename=filename) - _, raw_filename = os.path.split(filename) - if raw_filename == "pytorch_model.bin": - # XXX: This is a special case to handle `transformers` and the - # `transformers` part of the model which is actually loaded by `transformers`. - sf_in_repo = "model.safetensors" - else: - sf_in_repo = f"{prefix}.safetensors" - sf_filename = os.path.join(folder, sf_in_repo) - convert_file(pt_filename, sf_filename) - operations.append(CommitOperationAdd(path_in_repo=sf_in_repo, path_or_fileobj=sf_filename)) - return operations - - -def convert(api: "HfApi", model_id: str, force: bool = False) -> Optional["CommitInfo"]: - pr_title = "Adding `safetensors` variant of this model" - info = api.model_info(model_id) - filenames = set(s.rfilename for s in info.siblings) - - with TemporaryDirectory() as d: - folder = os.path.join(d, repo_folder_name(repo_id=model_id, repo_type="models")) - os.makedirs(folder) - new_pr = None - try: - operations = None - pr = previous_pr(api, model_id, pr_title) - - library_name = getattr(info, "library_name", None) - if any(filename.endswith(".safetensors") for filename in filenames) and not force: - raise AlreadyExists(f"Model {model_id} is already converted, skipping..") - elif pr is not None and not force: - url = f"https://huggingface.co/{model_id}/discussions/{pr.num}" - new_pr = pr - raise AlreadyExists(f"Model {model_id} already has an open PR check out {url}") - elif library_name == "transformers": - if "pytorch_model.bin" in filenames: - operations = convert_single(model_id, folder) - elif "pytorch_model.bin.index.json" in filenames: - operations = convert_multi(model_id, folder) - else: - raise RuntimeError(f"Model {model_id} doesn't seem to be a valid pytorch model. 
Cannot convert") - check_final_model(model_id, folder) - else: - operations = convert_generic(model_id, folder, filenames) - - if operations: - new_pr = api.create_commit( - repo_id=model_id, - operations=operations, - commit_message=pr_title, - create_pr=True, - ) - print(f"Pr created at {new_pr.pr_url}") - else: - print("No files to convert") - finally: - shutil.rmtree(folder) - return new_pr - - -if __name__ == "__main__": - DESCRIPTION = """ - Simple utility tool to convert automatically some weights on the hub to `safetensors` format. - It is PyTorch exclusive for now. - It works by downloading the weights (PT), converting them locally, and uploading them back - as a PR on the hub. - """ - parser = argparse.ArgumentParser(description=DESCRIPTION) - parser.add_argument( - "model_id", - type=str, - help="The name of the model on the hub to convert. E.g. `gpt2` or `facebook/wav2vec2-base-960h`", - ) - parser.add_argument( - "--force", - action="store_true", - help="Create the PR even if it already exists of if the model was already converted.", - ) - args = parser.parse_args() - model_id = args.model_id - api = HfApi() - convert(api, model_id, force=args.force) diff --git a/spaces/pedrogengo/style_loss_showdown/README.md b/spaces/pedrogengo/style_loss_showdown/README.md deleted file mode 100644 index e1bb19c64d091e008e4986a7450b6f3ff177fc03..0000000000000000000000000000000000000000 --- a/spaces/pedrogengo/style_loss_showdown/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: style_loss_showdown -emoji: 👀 -colorFrom: purple -colorTo: purple -sdk: docker -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/pixiou/bingo/src/lib/utils.ts b/spaces/pixiou/bingo/src/lib/utils.ts deleted file mode 100644 index 760ab389d213df20c359d7993475fe3bf9031bff..0000000000000000000000000000000000000000 --- a/spaces/pixiou/bingo/src/lib/utils.ts +++ /dev/null @@ -1,154 +0,0 @@ -import { clsx, type ClassValue } from 'clsx' -import { customAlphabet } from 'nanoid' -import { twMerge } from 'tailwind-merge' -import { debug } from './isomorphic' - -export function cn(...inputs: ClassValue[]) { - return twMerge(clsx(inputs)) -} - -export const nanoid = customAlphabet( - '0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz', - 7 -) // 7-character random string - -export function createChunkDecoder() { - const decoder = new TextDecoder() - return function (chunk: Uint8Array | undefined): string { - if (!chunk) return '' - return decoder.decode(chunk, { stream: true }) - } -} - -export function random (start: number, end: number) { - return start + Math.ceil(Math.random() * (end - start)) -} - -export function randomIP() { - return `11.${random(104, 107)}.${random(1, 255)}.${random(1, 255)}` -} - -export const defaultUID = 'xxx' - -export function parseHeadersFromCurl(content: string) { - const re = /-H '([^:]+):\s*([^']+)/mg - const headers: HeadersInit = {} - content = content.replaceAll('-H "', '-H \'').replaceAll('" ^', '\'\\').replaceAll('^\\^"', '"') // 将 cmd curl 转成 bash curl - content.replace(re, (_: string, key: string, value: string) => { - headers[key] = value - return '' - }) - - return headers -} - -export const ChunkKeys = ['BING_HEADER', 'BING_HEADER1', 'BING_HEADER2'] -export function encodeHeadersToCookie(content: string) { - const base64Content = btoa(content) - const contentChunks = base64Content.match(/.{1,4000}/g) || [] - return ChunkKeys.map((key, index) => `${key}=${contentChunks[index] ?? 
''}`) -} - -export function extraCurlFromCookie(cookies: Partial<{ [key: string]: string }>) { - let base64Content = '' - ChunkKeys.forEach((key) => { - base64Content += (cookies[key] || '') - }) - try { - return atob(base64Content) - } catch(e) { - return '' - } -} - -export function extraHeadersFromCookie(cookies: Partial<{ [key: string]: string }>) { - return parseHeadersFromCurl(extraCurlFromCookie(cookies)) -} - -export function formatDate(input: string | number | Date): string { - const date = new Date(input) - return date.toLocaleDateString('en-US', { - month: 'long', - day: 'numeric', - year: 'numeric' - }) -} - -export function parseCookie(cookie: string, cookieName: string) { - const targetCookie = new RegExp(`(?:[; ]|^)${cookieName}=([^;]*)`).test(cookie) ? RegExp.$1 : cookie - return targetCookie ? decodeURIComponent(targetCookie).trim() : cookie.indexOf('=') === -1 ? cookie.trim() : '' -} - -export function setCookie(key: string, value: string) { - const maxAge = value ? 86400 * 30 : 0 - document.cookie = `${key}=${value || ''}; Path=/; Max-Age=${maxAge}; SameSite=None; Secure` -} - -export function getCookie(cookieName: string) { - const re = new RegExp(`(?:[; ]|^)${cookieName}=([^;]*)`) - return re.test(document.cookie) ? RegExp.$1 : '' -} - -export function parseCookies(cookie: string, cookieNames: string[]) { - const cookies: { [key: string]: string } = {} - cookieNames.forEach(cookieName => { - cookies[cookieName] = parseCookie(cookie, cookieName) - }) - return cookies -} - -export const DEFAULT_UA = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/115.0.0.0 Safari/537.36 Edg/115.0.0.0' -export const DEFAULT_IP = process.env.BING_IP || randomIP() - -export function parseUA(ua?: string, default_ua = DEFAULT_UA) { - return / EDGE?/i.test(decodeURIComponent(ua || '')) ? decodeURIComponent(ua!.trim()) : default_ua -} - -export function mockUser(cookies: Partial<{ [key: string]: string }>) { - const { - BING_UA = process.env.BING_UA, - BING_IP = process.env.BING_IP, - _U = defaultUID, - } = cookies - const ua = parseUA(BING_UA) - - return { - 'x-forwarded-for': BING_IP || DEFAULT_IP, - 'Accept-Encoding': 'gzip, deflate, br', - 'Accept-Language': 'zh-CN,zh;q=0.9,en;q=0.8,en-GB;q=0.7,en-US;q=0.6', - 'User-Agent': ua!, - 'x-ms-useragent': 'azsdk-js-api-client-factory/1.0.0-beta.1 core-rest-pipeline/1.10.0 OS/Win32', - cookie: `_U=${_U}` || '', - } -} - -export function createHeaders(cookies: Partial<{ [key: string]: string }>, type?: string) { - let { - BING_HEADER = process.env.BING_HEADER, - IMAGE_ONLY = process.env.IMAGE_ONLY ?? 
'1', - } = cookies - const imageOnly = /^(1|true|yes)$/.test(String(IMAGE_ONLY)) - if (BING_HEADER) { - if ( - (imageOnly && type === 'image') - || !imageOnly - ) { - return extraHeadersFromCookie({ - BING_HEADER, - ...cookies, - }) || {} - } - } - return mockUser(cookies) -} - -export class WatchDog { - private tid = 0 - watch(fn: Function, timeout = 2000) { - clearTimeout(this.tid) - this.tid = setTimeout(fn, timeout + Math.random() * 1000) - } - reset() { - clearTimeout(this.tid) - } -} diff --git a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/utils/filetypes.py b/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/utils/filetypes.py deleted file mode 100644 index 5948570178f3e6e79d1ff574241d09d4d8ed78de..0000000000000000000000000000000000000000 --- a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/utils/filetypes.py +++ /dev/null @@ -1,27 +0,0 @@ -"""Filetype information. -""" - -from typing import Tuple - -from pip._internal.utils.misc import splitext - -WHEEL_EXTENSION = ".whl" -BZ2_EXTENSIONS: Tuple[str, ...] = (".tar.bz2", ".tbz") -XZ_EXTENSIONS: Tuple[str, ...] = ( - ".tar.xz", - ".txz", - ".tlz", - ".tar.lz", - ".tar.lzma", -) -ZIP_EXTENSIONS: Tuple[str, ...] = (".zip", WHEEL_EXTENSION) -TAR_EXTENSIONS: Tuple[str, ...] = (".tar.gz", ".tgz", ".tar") -ARCHIVE_EXTENSIONS = ZIP_EXTENSIONS + BZ2_EXTENSIONS + TAR_EXTENSIONS + XZ_EXTENSIONS - - -def is_archive_file(name: str) -> bool: - """Return True if `name` is a considered as an archive file.""" - ext = splitext(name)[1].lower() - if ext in ARCHIVE_EXTENSIONS: - return True - return False diff --git a/spaces/politweet-sh/politweet/README.md b/spaces/politweet-sh/politweet/README.md deleted file mode 100644 index 2612e77b7903a698235b1f0b412727760ed9d229..0000000000000000000000000000000000000000 --- a/spaces/politweet-sh/politweet/README.md +++ /dev/null @@ -1,110 +0,0 @@ ---- -title: Politweet -emoji: 📉 -colorFrom: pink -colorTo: green -sdk: gradio -sdk_version: 3.0.26 -app_file: app.py -pinned: false -license: mit ---- - -# Politweet -In this summer project at Softhouse, we have developed a tool for analyzing Twitter posts by Swedish party leaders. -The UI of this tool is in the form of a webpage which lets a user see a graphical representation of the predicted -features: topics, sentiments (positive, negative or neutral), and targets in the party leaders' tweets. - -### Data Gathering -The tweets were gathered using the Twitter scraping tool [Twint](https://github.com/twintproject/twint). -"An advanced Twitter scraping & OSINT tool written in Python that doesn't use Twitter's API, allowing you to -scrape a user's followers, following, Tweets and more while evading most API limitations.". - -### Predicting: Topics, Sentiments & Targets -The classifications that are given by GPT-3 for every tweet contain: -- ```main_topic``` - a general topic which the tweet is about -- ```sub_topic``` - a more descriptive topic -- ```sentiment``` - is the tweet positive, negative or neutral? -- ```target``` - who is the tweet targeting? - -The predicted features were extracted using AI, and more specifically by utilizing the autoregressive -language model GPT-3 by [OpenAI](https://openai.com/api/) in combination with our prompt engineering. -The final prompt is a result of experimentation while also trying to limit the length of the prompt. 
-Since OpenAI charge users based on "tokens", which is closely related to number of words, it would be -economically unsuitable to use lengthy prompts when classifying several thousands of tweets. - -### Merging Topics & Targets -Since the output from GPT-3 varies a lot, e.g. topics can be similar but not identical, a method for -clustering similar topics and targets was needed in order to be able to represent statistics of these -classifications. Thus, the NN-algorithm was implemented using the cosine similarity as metric, after -transforming topics and targets to dense vector representations with -[Sentence Transformers](https://github.com/UKPLab/sentence-transformers). The similarities between the -classification from GPT-3 and words from a predefined set of classes are then calculated, and the -classification is changed to the predefined class that yielded the highest cosine similarity. It is worth -noting that each predefined class has several "synonyms" or categorical words, and that the highest cosine -similarity can be found between the classification and a word from that list. - -Example - The GPT-3 classified topics are: ```main_topic = "sport"``` and ```sub_topic = "soccer"``` -> -```old_topic = "sport and soccer"```, and gets the highest similarity when compared to the word/synonym -"soccer" which is in the subset to the predefined class ```"Civil society and sport"``` -> -```new_topic = "Civil society and sport"``` - - -### Website User Manual -1. Enter the time period of the tweets that you want to look into - The dates need to be on the format -"YYYY-MM-DD", and between 2021-06-28 and 2022-08-10. It is preferable to choose a bigger span rather than a -smaller one. Keep in mind that tweets the number of tweets posted by the party leaders will vary a lot -(Annie Lööf: 1814 vs Jimmie Åkesson: 185 tweets in total). -2. Select the party leader(s) you want to look into - At least one party leader has to be selected. -3. Select the classifications you want to see statistics of: topic, sentiment and/or target. -4. Apply - always press this button after you check a new box or change the dates to update the website. -5. Run -The pie charts and bar graphs should appear for your selected party leaders. Under the plots, a new panel -will appear which lets users see how a prediction was made, i.e. classification from GPT-3 -> the -6. To see examples of how the topic/sentiment/target was predicted, the user can select a type of -classification and check the box "show stats". To download the CSV file containing all tweets and -classifications for the checked party leaders, the user can check "Export file". After the selections, Apply and Run. - -### Data Frame Structure -Each row in the database has the following structure: -```id,tweet,date,user_id,username,urls,nlikes,nreplies,nretweets,class_tuple,main_topic,sub_topic,sentiment,target,merged_tuple,merged_topic,merged_target,cos_sim_topic,synonym_topic,cos_sim_target,synonym_target```. 
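
The topic/target merging described in the README above is, at its core, a nearest-neighbour lookup over sentence embeddings: embed the GPT-3 output, embed each predefined class's synonym list, and keep the class with the highest cosine similarity. A minimal illustrative sketch of that idea follows; it is not code from the repository, and the model name, the predefined classes, and their synonym lists are placeholder assumptions (assumes a recent `sentence-transformers` install).

```python
# Illustrative sketch of the cosine-similarity merging step (not from the original repo).
from sentence_transformers import SentenceTransformer, util

# Hypothetical predefined classes, each with a few "synonym"/categorical words.
PREDEFINED_CLASSES = {
    "Civil society and sport": ["sport", "soccer", "culture", "volunteering"],
    "Economy": ["economy", "taxes", "inflation", "budget"],
}

model = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder model choice


def merge_topic(gpt3_topic: str) -> str:
    """Map a free-form GPT-3 topic to the predefined class with the most similar synonym."""
    topic_emb = model.encode(gpt3_topic, convert_to_tensor=True)
    best_class, best_sim = gpt3_topic, -1.0
    for cls, synonyms in PREDEFINED_CLASSES.items():
        syn_embs = model.encode(synonyms, convert_to_tensor=True)
        # Highest cosine similarity between the GPT-3 topic and any synonym of this class.
        sim = util.cos_sim(topic_emb, syn_embs).max().item()
        if sim > best_sim:
            best_class, best_sim = cls, sim
    return best_class


print(merge_topic("sport and soccer"))  # expected: "Civil society and sport"
```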
- - - diff --git a/spaces/portal/Xenova-Semantic-Image-Search/gab.html b/spaces/portal/Xenova-Semantic-Image-Search/gab.html deleted file mode 100644 index 25901b912f00936a4c58fd158ca1f1edfa12c932..0000000000000000000000000000000000000000 --- a/spaces/portal/Xenova-Semantic-Image-Search/gab.html +++ /dev/null @@ -1,19 +0,0 @@ - - - - - - - - - - - - - - - - - - - \ No newline at end of file diff --git a/spaces/prerna9811/Chord/portaudio/test/patest_read_record.c b/spaces/prerna9811/Chord/portaudio/test/patest_read_record.c deleted file mode 100644 index bd9c7feb0ce4a4bf6fdb25572eba8cec4e714f97..0000000000000000000000000000000000000000 --- a/spaces/prerna9811/Chord/portaudio/test/patest_read_record.c +++ /dev/null @@ -1,243 +0,0 @@ -/** @file patest_read_record.c - @ingroup test_src - @brief Record input into an array; Save array to a file; Playback recorded - data. Implemented using the blocking API (Pa_ReadStream(), Pa_WriteStream() ) - @author Phil Burk http://www.softsynth.com - @author Ross Bencina rossb@audiomulch.com -*/ -/* - * $Id$ - * - * This program uses the PortAudio Portable Audio Library. - * For more information see: http://www.portaudio.com - * Copyright (c) 1999-2000 Ross Bencina and Phil Burk - * - * Permission is hereby granted, free of charge, to any person obtaining - * a copy of this software and associated documentation files - * (the "Software"), to deal in the Software without restriction, - * including without limitation the rights to use, copy, modify, merge, - * publish, distribute, sublicense, and/or sell copies of the Software, - * and to permit persons to whom the Software is furnished to do so, - * subject to the following conditions: - * - * The above copyright notice and this permission notice shall be - * included in all copies or substantial portions of the Software. - * - * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, - * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF - * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. - * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR - * ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF - * CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION - * WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - */ - -/* - * The text above constitutes the entire PortAudio license; however, - * the PortAudio community also makes the following non-binding requests: - * - * Any person wishing to distribute modifications to the Software is - * requested to send the modifications to the original developer so that - * they can be incorporated into the canonical version. It is also - * requested that these non-binding requests be included along with the - * license above. - */ - -#include -#include -#include "portaudio.h" - -/* #define SAMPLE_RATE (17932) // Test failure to open with this value. */ -#define SAMPLE_RATE (44100) -#define FRAMES_PER_BUFFER (1024) -#define NUM_SECONDS (5) -#define NUM_CHANNELS (2) -/* #define DITHER_FLAG (paDitherOff) */ -#define DITHER_FLAG (0) /**/ - -/* Select sample format. 
*/ -#if 1 -#define PA_SAMPLE_TYPE paFloat32 -typedef float SAMPLE; -#define SAMPLE_SILENCE (0.0f) -#define PRINTF_S_FORMAT "%.8f" -#elif 1 -#define PA_SAMPLE_TYPE paInt16 -typedef short SAMPLE; -#define SAMPLE_SILENCE (0) -#define PRINTF_S_FORMAT "%d" -#elif 0 -#define PA_SAMPLE_TYPE paInt8 -typedef char SAMPLE; -#define SAMPLE_SILENCE (0) -#define PRINTF_S_FORMAT "%d" -#else -#define PA_SAMPLE_TYPE paUInt8 -typedef unsigned char SAMPLE; -#define SAMPLE_SILENCE (128) -#define PRINTF_S_FORMAT "%d" -#endif - - -/*******************************************************************/ -int main(void); -int main(void) -{ - PaStreamParameters inputParameters, outputParameters; - PaStream *stream; - PaError err; - SAMPLE *recordedSamples; - int i; - int totalFrames; - int numSamples; - int numBytes; - SAMPLE max, average, val; - - - printf("patest_read_record.c\n"); fflush(stdout); - - totalFrames = NUM_SECONDS * SAMPLE_RATE; /* Record for a few seconds. */ - numSamples = totalFrames * NUM_CHANNELS; - - numBytes = numSamples * sizeof(SAMPLE); - recordedSamples = (SAMPLE *) malloc( numBytes ); - if( recordedSamples == NULL ) - { - printf("Could not allocate record array.\n"); - exit(1); - } - for( i=0; idefaultLowInputLatency; - inputParameters.hostApiSpecificStreamInfo = NULL; - - /* Record some audio. -------------------------------------------- */ - err = Pa_OpenStream( - &stream, - &inputParameters, - NULL, /* &outputParameters, */ - SAMPLE_RATE, - FRAMES_PER_BUFFER, - paClipOff, /* we won't output out of range samples so don't bother clipping them */ - NULL, /* no callback, use blocking API */ - NULL ); /* no callback, so no callback userData */ - if( err != paNoError ) goto error; - - err = Pa_StartStream( stream ); - if( err != paNoError ) goto error; - printf("Now recording!!\n"); fflush(stdout); - - err = Pa_ReadStream( stream, recordedSamples, totalFrames ); - if( err != paNoError ) goto error; - - err = Pa_CloseStream( stream ); - if( err != paNoError ) goto error; - - /* Measure maximum peak amplitude. */ - max = 0; - average = 0; - for( i=0; i max ) - { - max = val; - } - average += val; - } - - average = average / numSamples; - - printf("Sample max amplitude = "PRINTF_S_FORMAT"\n", max ); - printf("Sample average = "PRINTF_S_FORMAT"\n", average ); -/* Was as below. Better choose at compile time because this - keeps generating compiler-warnings: - if( PA_SAMPLE_TYPE == paFloat32 ) - { - printf("sample max amplitude = %f\n", max ); - printf("sample average = %f\n", average ); - } - else - { - printf("sample max amplitude = %d\n", max ); - printf("sample average = %d\n", average ); - } -*/ - /* Write recorded data to a file. */ -#if 0 - { - FILE *fid; - fid = fopen("recorded.raw", "wb"); - if( fid == NULL ) - { - printf("Could not open file."); - } - else - { - fwrite( recordedSamples, NUM_CHANNELS * sizeof(SAMPLE), totalFrames, fid ); - fclose( fid ); - printf("Wrote data to 'recorded.raw'\n"); - } - } -#endif - - /* Playback recorded data. 
-------------------------------------------- */ - - outputParameters.device = Pa_GetDefaultOutputDevice(); /* default output device */ - if (outputParameters.device == paNoDevice) { - fprintf(stderr,"Error: No default output device.\n"); - goto error; - } - outputParameters.channelCount = NUM_CHANNELS; - outputParameters.sampleFormat = PA_SAMPLE_TYPE; - outputParameters.suggestedLatency = Pa_GetDeviceInfo( outputParameters.device )->defaultLowOutputLatency; - outputParameters.hostApiSpecificStreamInfo = NULL; - - printf("Begin playback.\n"); fflush(stdout); - err = Pa_OpenStream( - &stream, - NULL, /* no input */ - &outputParameters, - SAMPLE_RATE, - FRAMES_PER_BUFFER, - paClipOff, /* we won't output out of range samples so don't bother clipping them */ - NULL, /* no callback, use blocking API */ - NULL ); /* no callback, so no callback userData */ - if( err != paNoError ) goto error; - - if( stream ) - { - err = Pa_StartStream( stream ); - if( err != paNoError ) goto error; - printf("Waiting for playback to finish.\n"); fflush(stdout); - - err = Pa_WriteStream( stream, recordedSamples, totalFrames ); - if( err != paNoError ) goto error; - - err = Pa_CloseStream( stream ); - if( err != paNoError ) goto error; - printf("Done.\n"); fflush(stdout); - } - free( recordedSamples ); - - Pa_Terminate(); - return 0; - -error: - Pa_Terminate(); - fprintf( stderr, "An error occurred while using the portaudio stream\n" ); - fprintf( stderr, "Error number: %d\n", err ); - fprintf( stderr, "Error message: %s\n", Pa_GetErrorText( err ) ); - return -1; -} diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/PIL/TiffTags.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/PIL/TiffTags.py deleted file mode 100644 index 30b05e4e1d41fa21a7b7bf12c04ee05af6aa5284..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/PIL/TiffTags.py +++ /dev/null @@ -1,560 +0,0 @@ -# -# The Python Imaging Library. -# $Id$ -# -# TIFF tags -# -# This module provides clear-text names for various well-known -# TIFF tags. the TIFF codec works just fine without it. -# -# Copyright (c) Secret Labs AB 1999. -# -# See the README file for information on usage and redistribution. -# - -## -# This module provides constants and clear-text names for various -# well-known TIFF tags. -## - -from collections import namedtuple - - -class TagInfo(namedtuple("_TagInfo", "value name type length enum")): - __slots__ = [] - - def __new__(cls, value=None, name="unknown", type=None, length=None, enum=None): - return super().__new__(cls, value, name, type, length, enum or {}) - - def cvt_enum(self, value): - # Using get will call hash(value), which can be expensive - # for some types (e.g. Fraction). Since self.enum is rarely - # used, it's usually better to test it first. - return self.enum.get(value, value) if self.enum else value - - -def lookup(tag, group=None): - """ - :param tag: Integer tag number - :param group: Which :py:data:`~PIL.TiffTags.TAGS_V2_GROUPS` to look in - - .. versionadded:: 8.3.0 - - :returns: Taginfo namedtuple, From the ``TAGS_V2`` info if possible, - otherwise just populating the value and name from ``TAGS``. - If the tag is not recognized, "unknown" is returned for the name - - """ - - if group is not None: - info = TAGS_V2_GROUPS[group].get(tag) if group in TAGS_V2_GROUPS else None - else: - info = TAGS_V2.get(tag) - return info or TagInfo(tag, TAGS.get(tag, "unknown")) - - -## -# Map tag numbers to tag info. 
-# -# id: (Name, Type, Length, enum_values) -# -# The length here differs from the length in the tiff spec. For -# numbers, the tiff spec is for the number of fields returned. We -# agree here. For string-like types, the tiff spec uses the length of -# field in bytes. In Pillow, we are using the number of expected -# fields, in general 1 for string-like types. - - -BYTE = 1 -ASCII = 2 -SHORT = 3 -LONG = 4 -RATIONAL = 5 -SIGNED_BYTE = 6 -UNDEFINED = 7 -SIGNED_SHORT = 8 -SIGNED_LONG = 9 -SIGNED_RATIONAL = 10 -FLOAT = 11 -DOUBLE = 12 -IFD = 13 -LONG8 = 16 - -TAGS_V2 = { - 254: ("NewSubfileType", LONG, 1), - 255: ("SubfileType", SHORT, 1), - 256: ("ImageWidth", LONG, 1), - 257: ("ImageLength", LONG, 1), - 258: ("BitsPerSample", SHORT, 0), - 259: ( - "Compression", - SHORT, - 1, - { - "Uncompressed": 1, - "CCITT 1d": 2, - "Group 3 Fax": 3, - "Group 4 Fax": 4, - "LZW": 5, - "JPEG": 6, - "PackBits": 32773, - }, - ), - 262: ( - "PhotometricInterpretation", - SHORT, - 1, - { - "WhiteIsZero": 0, - "BlackIsZero": 1, - "RGB": 2, - "RGB Palette": 3, - "Transparency Mask": 4, - "CMYK": 5, - "YCbCr": 6, - "CieLAB": 8, - "CFA": 32803, # TIFF/EP, Adobe DNG - "LinearRaw": 32892, # Adobe DNG - }, - ), - 263: ("Threshholding", SHORT, 1), - 264: ("CellWidth", SHORT, 1), - 265: ("CellLength", SHORT, 1), - 266: ("FillOrder", SHORT, 1), - 269: ("DocumentName", ASCII, 1), - 270: ("ImageDescription", ASCII, 1), - 271: ("Make", ASCII, 1), - 272: ("Model", ASCII, 1), - 273: ("StripOffsets", LONG, 0), - 274: ("Orientation", SHORT, 1), - 277: ("SamplesPerPixel", SHORT, 1), - 278: ("RowsPerStrip", LONG, 1), - 279: ("StripByteCounts", LONG, 0), - 280: ("MinSampleValue", SHORT, 0), - 281: ("MaxSampleValue", SHORT, 0), - 282: ("XResolution", RATIONAL, 1), - 283: ("YResolution", RATIONAL, 1), - 284: ("PlanarConfiguration", SHORT, 1, {"Contiguous": 1, "Separate": 2}), - 285: ("PageName", ASCII, 1), - 286: ("XPosition", RATIONAL, 1), - 287: ("YPosition", RATIONAL, 1), - 288: ("FreeOffsets", LONG, 1), - 289: ("FreeByteCounts", LONG, 1), - 290: ("GrayResponseUnit", SHORT, 1), - 291: ("GrayResponseCurve", SHORT, 0), - 292: ("T4Options", LONG, 1), - 293: ("T6Options", LONG, 1), - 296: ("ResolutionUnit", SHORT, 1, {"none": 1, "inch": 2, "cm": 3}), - 297: ("PageNumber", SHORT, 2), - 301: ("TransferFunction", SHORT, 0), - 305: ("Software", ASCII, 1), - 306: ("DateTime", ASCII, 1), - 315: ("Artist", ASCII, 1), - 316: ("HostComputer", ASCII, 1), - 317: ("Predictor", SHORT, 1, {"none": 1, "Horizontal Differencing": 2}), - 318: ("WhitePoint", RATIONAL, 2), - 319: ("PrimaryChromaticities", RATIONAL, 6), - 320: ("ColorMap", SHORT, 0), - 321: ("HalftoneHints", SHORT, 2), - 322: ("TileWidth", LONG, 1), - 323: ("TileLength", LONG, 1), - 324: ("TileOffsets", LONG, 0), - 325: ("TileByteCounts", LONG, 0), - 330: ("SubIFDs", LONG, 0), - 332: ("InkSet", SHORT, 1), - 333: ("InkNames", ASCII, 1), - 334: ("NumberOfInks", SHORT, 1), - 336: ("DotRange", SHORT, 0), - 337: ("TargetPrinter", ASCII, 1), - 338: ("ExtraSamples", SHORT, 0), - 339: ("SampleFormat", SHORT, 0), - 340: ("SMinSampleValue", DOUBLE, 0), - 341: ("SMaxSampleValue", DOUBLE, 0), - 342: ("TransferRange", SHORT, 6), - 347: ("JPEGTables", UNDEFINED, 1), - # obsolete JPEG tags - 512: ("JPEGProc", SHORT, 1), - 513: ("JPEGInterchangeFormat", LONG, 1), - 514: ("JPEGInterchangeFormatLength", LONG, 1), - 515: ("JPEGRestartInterval", SHORT, 1), - 517: ("JPEGLosslessPredictors", SHORT, 0), - 518: ("JPEGPointTransforms", SHORT, 0), - 519: ("JPEGQTables", LONG, 0), - 520: ("JPEGDCTables", LONG, 0), 
- 521: ("JPEGACTables", LONG, 0), - 529: ("YCbCrCoefficients", RATIONAL, 3), - 530: ("YCbCrSubSampling", SHORT, 2), - 531: ("YCbCrPositioning", SHORT, 1), - 532: ("ReferenceBlackWhite", RATIONAL, 6), - 700: ("XMP", BYTE, 0), - 33432: ("Copyright", ASCII, 1), - 33723: ("IptcNaaInfo", UNDEFINED, 1), - 34377: ("PhotoshopInfo", BYTE, 0), - # FIXME add more tags here - 34665: ("ExifIFD", LONG, 1), - 34675: ("ICCProfile", UNDEFINED, 1), - 34853: ("GPSInfoIFD", LONG, 1), - 36864: ("ExifVersion", UNDEFINED, 1), - 37724: ("ImageSourceData", UNDEFINED, 1), - 40965: ("InteroperabilityIFD", LONG, 1), - 41730: ("CFAPattern", UNDEFINED, 1), - # MPInfo - 45056: ("MPFVersion", UNDEFINED, 1), - 45057: ("NumberOfImages", LONG, 1), - 45058: ("MPEntry", UNDEFINED, 1), - 45059: ("ImageUIDList", UNDEFINED, 0), # UNDONE, check - 45060: ("TotalFrames", LONG, 1), - 45313: ("MPIndividualNum", LONG, 1), - 45569: ("PanOrientation", LONG, 1), - 45570: ("PanOverlap_H", RATIONAL, 1), - 45571: ("PanOverlap_V", RATIONAL, 1), - 45572: ("BaseViewpointNum", LONG, 1), - 45573: ("ConvergenceAngle", SIGNED_RATIONAL, 1), - 45574: ("BaselineLength", RATIONAL, 1), - 45575: ("VerticalDivergence", SIGNED_RATIONAL, 1), - 45576: ("AxisDistance_X", SIGNED_RATIONAL, 1), - 45577: ("AxisDistance_Y", SIGNED_RATIONAL, 1), - 45578: ("AxisDistance_Z", SIGNED_RATIONAL, 1), - 45579: ("YawAngle", SIGNED_RATIONAL, 1), - 45580: ("PitchAngle", SIGNED_RATIONAL, 1), - 45581: ("RollAngle", SIGNED_RATIONAL, 1), - 40960: ("FlashPixVersion", UNDEFINED, 1), - 50741: ("MakerNoteSafety", SHORT, 1, {"Unsafe": 0, "Safe": 1}), - 50780: ("BestQualityScale", RATIONAL, 1), - 50838: ("ImageJMetaDataByteCounts", LONG, 0), # Can be more than one - 50839: ("ImageJMetaData", UNDEFINED, 1), # see Issue #2006 -} -TAGS_V2_GROUPS = { - # ExifIFD - 34665: { - 36864: ("ExifVersion", UNDEFINED, 1), - 40960: ("FlashPixVersion", UNDEFINED, 1), - 40965: ("InteroperabilityIFD", LONG, 1), - 41730: ("CFAPattern", UNDEFINED, 1), - }, - # GPSInfoIFD - 34853: { - 0: ("GPSVersionID", BYTE, 4), - 1: ("GPSLatitudeRef", ASCII, 2), - 2: ("GPSLatitude", RATIONAL, 3), - 3: ("GPSLongitudeRef", ASCII, 2), - 4: ("GPSLongitude", RATIONAL, 3), - 5: ("GPSAltitudeRef", BYTE, 1), - 6: ("GPSAltitude", RATIONAL, 1), - 7: ("GPSTimeStamp", RATIONAL, 3), - 8: ("GPSSatellites", ASCII, 0), - 9: ("GPSStatus", ASCII, 2), - 10: ("GPSMeasureMode", ASCII, 2), - 11: ("GPSDOP", RATIONAL, 1), - 12: ("GPSSpeedRef", ASCII, 2), - 13: ("GPSSpeed", RATIONAL, 1), - 14: ("GPSTrackRef", ASCII, 2), - 15: ("GPSTrack", RATIONAL, 1), - 16: ("GPSImgDirectionRef", ASCII, 2), - 17: ("GPSImgDirection", RATIONAL, 1), - 18: ("GPSMapDatum", ASCII, 0), - 19: ("GPSDestLatitudeRef", ASCII, 2), - 20: ("GPSDestLatitude", RATIONAL, 3), - 21: ("GPSDestLongitudeRef", ASCII, 2), - 22: ("GPSDestLongitude", RATIONAL, 3), - 23: ("GPSDestBearingRef", ASCII, 2), - 24: ("GPSDestBearing", RATIONAL, 1), - 25: ("GPSDestDistanceRef", ASCII, 2), - 26: ("GPSDestDistance", RATIONAL, 1), - 27: ("GPSProcessingMethod", UNDEFINED, 0), - 28: ("GPSAreaInformation", UNDEFINED, 0), - 29: ("GPSDateStamp", ASCII, 11), - 30: ("GPSDifferential", SHORT, 1), - }, - # InteroperabilityIFD - 40965: {1: ("InteropIndex", ASCII, 1), 2: ("InteropVersion", UNDEFINED, 1)}, -} - -# Legacy Tags structure -# these tags aren't included above, but were in the previous versions -TAGS = { - 347: "JPEGTables", - 700: "XMP", - # Additional Exif Info - 32932: "Wang Annotation", - 33434: "ExposureTime", - 33437: "FNumber", - 33445: "MD FileTag", - 33446: "MD ScalePixel", - 33447: "MD 
ColorTable", - 33448: "MD LabName", - 33449: "MD SampleInfo", - 33450: "MD PrepDate", - 33451: "MD PrepTime", - 33452: "MD FileUnits", - 33550: "ModelPixelScaleTag", - 33723: "IptcNaaInfo", - 33918: "INGR Packet Data Tag", - 33919: "INGR Flag Registers", - 33920: "IrasB Transformation Matrix", - 33922: "ModelTiepointTag", - 34264: "ModelTransformationTag", - 34377: "PhotoshopInfo", - 34735: "GeoKeyDirectoryTag", - 34736: "GeoDoubleParamsTag", - 34737: "GeoAsciiParamsTag", - 34850: "ExposureProgram", - 34852: "SpectralSensitivity", - 34855: "ISOSpeedRatings", - 34856: "OECF", - 34864: "SensitivityType", - 34865: "StandardOutputSensitivity", - 34866: "RecommendedExposureIndex", - 34867: "ISOSpeed", - 34868: "ISOSpeedLatitudeyyy", - 34869: "ISOSpeedLatitudezzz", - 34908: "HylaFAX FaxRecvParams", - 34909: "HylaFAX FaxSubAddress", - 34910: "HylaFAX FaxRecvTime", - 36864: "ExifVersion", - 36867: "DateTimeOriginal", - 36868: "DateTimeDigitized", - 37121: "ComponentsConfiguration", - 37122: "CompressedBitsPerPixel", - 37724: "ImageSourceData", - 37377: "ShutterSpeedValue", - 37378: "ApertureValue", - 37379: "BrightnessValue", - 37380: "ExposureBiasValue", - 37381: "MaxApertureValue", - 37382: "SubjectDistance", - 37383: "MeteringMode", - 37384: "LightSource", - 37385: "Flash", - 37386: "FocalLength", - 37396: "SubjectArea", - 37500: "MakerNote", - 37510: "UserComment", - 37520: "SubSec", - 37521: "SubSecTimeOriginal", - 37522: "SubsecTimeDigitized", - 40960: "FlashPixVersion", - 40961: "ColorSpace", - 40962: "PixelXDimension", - 40963: "PixelYDimension", - 40964: "RelatedSoundFile", - 40965: "InteroperabilityIFD", - 41483: "FlashEnergy", - 41484: "SpatialFrequencyResponse", - 41486: "FocalPlaneXResolution", - 41487: "FocalPlaneYResolution", - 41488: "FocalPlaneResolutionUnit", - 41492: "SubjectLocation", - 41493: "ExposureIndex", - 41495: "SensingMethod", - 41728: "FileSource", - 41729: "SceneType", - 41730: "CFAPattern", - 41985: "CustomRendered", - 41986: "ExposureMode", - 41987: "WhiteBalance", - 41988: "DigitalZoomRatio", - 41989: "FocalLengthIn35mmFilm", - 41990: "SceneCaptureType", - 41991: "GainControl", - 41992: "Contrast", - 41993: "Saturation", - 41994: "Sharpness", - 41995: "DeviceSettingDescription", - 41996: "SubjectDistanceRange", - 42016: "ImageUniqueID", - 42032: "CameraOwnerName", - 42033: "BodySerialNumber", - 42034: "LensSpecification", - 42035: "LensMake", - 42036: "LensModel", - 42037: "LensSerialNumber", - 42112: "GDAL_METADATA", - 42113: "GDAL_NODATA", - 42240: "Gamma", - 50215: "Oce Scanjob Description", - 50216: "Oce Application Selector", - 50217: "Oce Identification Number", - 50218: "Oce ImageLogic Characteristics", - # Adobe DNG - 50706: "DNGVersion", - 50707: "DNGBackwardVersion", - 50708: "UniqueCameraModel", - 50709: "LocalizedCameraModel", - 50710: "CFAPlaneColor", - 50711: "CFALayout", - 50712: "LinearizationTable", - 50713: "BlackLevelRepeatDim", - 50714: "BlackLevel", - 50715: "BlackLevelDeltaH", - 50716: "BlackLevelDeltaV", - 50717: "WhiteLevel", - 50718: "DefaultScale", - 50719: "DefaultCropOrigin", - 50720: "DefaultCropSize", - 50721: "ColorMatrix1", - 50722: "ColorMatrix2", - 50723: "CameraCalibration1", - 50724: "CameraCalibration2", - 50725: "ReductionMatrix1", - 50726: "ReductionMatrix2", - 50727: "AnalogBalance", - 50728: "AsShotNeutral", - 50729: "AsShotWhiteXY", - 50730: "BaselineExposure", - 50731: "BaselineNoise", - 50732: "BaselineSharpness", - 50733: "BayerGreenSplit", - 50734: "LinearResponseLimit", - 50735: "CameraSerialNumber", - 50736: 
"LensInfo", - 50737: "ChromaBlurRadius", - 50738: "AntiAliasStrength", - 50740: "DNGPrivateData", - 50778: "CalibrationIlluminant1", - 50779: "CalibrationIlluminant2", - 50784: "Alias Layer Metadata", -} - - -def _populate(): - for k, v in TAGS_V2.items(): - # Populate legacy structure. - TAGS[k] = v[0] - if len(v) == 4: - for sk, sv in v[3].items(): - TAGS[(k, sv)] = sk - - TAGS_V2[k] = TagInfo(k, *v) - - for group, tags in TAGS_V2_GROUPS.items(): - for k, v in tags.items(): - tags[k] = TagInfo(k, *v) - - -_populate() -## -# Map type numbers to type names -- defined in ImageFileDirectory. - -TYPES = {} - -# was: -# TYPES = { -# 1: "byte", -# 2: "ascii", -# 3: "short", -# 4: "long", -# 5: "rational", -# 6: "signed byte", -# 7: "undefined", -# 8: "signed short", -# 9: "signed long", -# 10: "signed rational", -# 11: "float", -# 12: "double", -# } - -# -# These tags are handled by default in libtiff, without -# adding to the custom dictionary. From tif_dir.c, searching for -# case TIFFTAG in the _TIFFVSetField function: -# Line: item. -# 148: case TIFFTAG_SUBFILETYPE: -# 151: case TIFFTAG_IMAGEWIDTH: -# 154: case TIFFTAG_IMAGELENGTH: -# 157: case TIFFTAG_BITSPERSAMPLE: -# 181: case TIFFTAG_COMPRESSION: -# 202: case TIFFTAG_PHOTOMETRIC: -# 205: case TIFFTAG_THRESHHOLDING: -# 208: case TIFFTAG_FILLORDER: -# 214: case TIFFTAG_ORIENTATION: -# 221: case TIFFTAG_SAMPLESPERPIXEL: -# 228: case TIFFTAG_ROWSPERSTRIP: -# 238: case TIFFTAG_MINSAMPLEVALUE: -# 241: case TIFFTAG_MAXSAMPLEVALUE: -# 244: case TIFFTAG_SMINSAMPLEVALUE: -# 247: case TIFFTAG_SMAXSAMPLEVALUE: -# 250: case TIFFTAG_XRESOLUTION: -# 256: case TIFFTAG_YRESOLUTION: -# 262: case TIFFTAG_PLANARCONFIG: -# 268: case TIFFTAG_XPOSITION: -# 271: case TIFFTAG_YPOSITION: -# 274: case TIFFTAG_RESOLUTIONUNIT: -# 280: case TIFFTAG_PAGENUMBER: -# 284: case TIFFTAG_HALFTONEHINTS: -# 288: case TIFFTAG_COLORMAP: -# 294: case TIFFTAG_EXTRASAMPLES: -# 298: case TIFFTAG_MATTEING: -# 305: case TIFFTAG_TILEWIDTH: -# 316: case TIFFTAG_TILELENGTH: -# 327: case TIFFTAG_TILEDEPTH: -# 333: case TIFFTAG_DATATYPE: -# 344: case TIFFTAG_SAMPLEFORMAT: -# 361: case TIFFTAG_IMAGEDEPTH: -# 364: case TIFFTAG_SUBIFD: -# 376: case TIFFTAG_YCBCRPOSITIONING: -# 379: case TIFFTAG_YCBCRSUBSAMPLING: -# 383: case TIFFTAG_TRANSFERFUNCTION: -# 389: case TIFFTAG_REFERENCEBLACKWHITE: -# 393: case TIFFTAG_INKNAMES: - -# Following pseudo-tags are also handled by default in libtiff: -# TIFFTAG_JPEGQUALITY 65537 - -# some of these are not in our TAGS_V2 dict and were included from tiff.h - -# This list also exists in encode.c -LIBTIFF_CORE = { - 255, - 256, - 257, - 258, - 259, - 262, - 263, - 266, - 274, - 277, - 278, - 280, - 281, - 340, - 341, - 282, - 283, - 284, - 286, - 287, - 296, - 297, - 321, - 320, - 338, - 32995, - 322, - 323, - 32998, - 32996, - 339, - 32997, - 330, - 531, - 530, - 301, - 532, - 333, - # as above - 269, # this has been in our tests forever, and works - 65537, -} - -LIBTIFF_CORE.remove(255) # We don't have support for subfiletypes -LIBTIFF_CORE.remove(322) # We don't have support for writing tiled images with libtiff -LIBTIFF_CORE.remove(323) # Tiled images -LIBTIFF_CORE.remove(333) # Ink Names either - -# Note to advanced users: There may be combinations of these -# parameters and values that when added properly, will work and -# produce valid tiff images that may work in your application. -# It is safe to add and remove tags from this set from Pillow's point -# of view so long as you test against libtiff. 
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/anyio/pytest_plugin.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/anyio/pytest_plugin.py deleted file mode 100644 index 044ce6914dd70a200cbc90cbbb9abc9135a66340..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/anyio/pytest_plugin.py +++ /dev/null @@ -1,142 +0,0 @@ -from __future__ import annotations - -from contextlib import contextmanager -from inspect import isasyncgenfunction, iscoroutinefunction -from typing import Any, Dict, Generator, Tuple, cast - -import pytest -import sniffio - -from ._core._eventloop import get_all_backends, get_asynclib -from .abc import TestRunner - -_current_runner: TestRunner | None = None - - -def extract_backend_and_options(backend: object) -> tuple[str, dict[str, Any]]: - if isinstance(backend, str): - return backend, {} - elif isinstance(backend, tuple) and len(backend) == 2: - if isinstance(backend[0], str) and isinstance(backend[1], dict): - return cast(Tuple[str, Dict[str, Any]], backend) - - raise TypeError("anyio_backend must be either a string or tuple of (string, dict)") - - -@contextmanager -def get_runner( - backend_name: str, backend_options: dict[str, Any] -) -> Generator[TestRunner, object, None]: - global _current_runner - if _current_runner: - yield _current_runner - return - - asynclib = get_asynclib(backend_name) - token = None - if sniffio.current_async_library_cvar.get(None) is None: - # Since we're in control of the event loop, we can cache the name of the async library - token = sniffio.current_async_library_cvar.set(backend_name) - - try: - backend_options = backend_options or {} - with asynclib.TestRunner(**backend_options) as runner: - _current_runner = runner - yield runner - finally: - _current_runner = None - if token: - sniffio.current_async_library_cvar.reset(token) - - -def pytest_configure(config: Any) -> None: - config.addinivalue_line( - "markers", - "anyio: mark the (coroutine function) test to be run " - "asynchronously via anyio.", - ) - - -def pytest_fixture_setup(fixturedef: Any, request: Any) -> None: - def wrapper(*args, anyio_backend, **kwargs): # type: ignore[no-untyped-def] - backend_name, backend_options = extract_backend_and_options(anyio_backend) - if has_backend_arg: - kwargs["anyio_backend"] = anyio_backend - - with get_runner(backend_name, backend_options) as runner: - if isasyncgenfunction(func): - yield from runner.run_asyncgen_fixture(func, kwargs) - else: - yield runner.run_fixture(func, kwargs) - - # Only apply this to coroutine functions and async generator functions in requests that involve - # the anyio_backend fixture - func = fixturedef.func - if isasyncgenfunction(func) or iscoroutinefunction(func): - if "anyio_backend" in request.fixturenames: - has_backend_arg = "anyio_backend" in fixturedef.argnames - fixturedef.func = wrapper - if not has_backend_arg: - fixturedef.argnames += ("anyio_backend",) - - -@pytest.hookimpl(tryfirst=True) -def pytest_pycollect_makeitem(collector: Any, name: Any, obj: Any) -> None: - if collector.istestfunction(obj, name): - inner_func = obj.hypothesis.inner_test if hasattr(obj, "hypothesis") else obj - if iscoroutinefunction(inner_func): - marker = collector.get_closest_marker("anyio") - own_markers = getattr(obj, "pytestmark", ()) - if marker or any(marker.name == "anyio" for marker in own_markers): - pytest.mark.usefixtures("anyio_backend")(obj) - - -@pytest.hookimpl(tryfirst=True) -def 
pytest_pyfunc_call(pyfuncitem: Any) -> bool | None: - def run_with_hypothesis(**kwargs: Any) -> None: - with get_runner(backend_name, backend_options) as runner: - runner.run_test(original_func, kwargs) - - backend = pyfuncitem.funcargs.get("anyio_backend") - if backend: - backend_name, backend_options = extract_backend_and_options(backend) - - if hasattr(pyfuncitem.obj, "hypothesis"): - # Wrap the inner test function unless it's already wrapped - original_func = pyfuncitem.obj.hypothesis.inner_test - if original_func.__qualname__ != run_with_hypothesis.__qualname__: - if iscoroutinefunction(original_func): - pyfuncitem.obj.hypothesis.inner_test = run_with_hypothesis - - return None - - if iscoroutinefunction(pyfuncitem.obj): - funcargs = pyfuncitem.funcargs - testargs = {arg: funcargs[arg] for arg in pyfuncitem._fixtureinfo.argnames} - with get_runner(backend_name, backend_options) as runner: - runner.run_test(pyfuncitem.obj, testargs) - - return True - - return None - - -@pytest.fixture(params=get_all_backends()) -def anyio_backend(request: Any) -> Any: - return request.param - - -@pytest.fixture -def anyio_backend_name(anyio_backend: Any) -> str: - if isinstance(anyio_backend, str): - return anyio_backend - else: - return anyio_backend[0] - - -@pytest.fixture -def anyio_backend_options(anyio_backend: Any) -> dict[str, Any]: - if isinstance(anyio_backend, str): - return {} - else: - return anyio_backend[1] diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/click/shell_completion.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/click/shell_completion.py deleted file mode 100644 index dc9e00b9b0c6f4903b674f03343e887bd490b081..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/click/shell_completion.py +++ /dev/null @@ -1,596 +0,0 @@ -import os -import re -import typing as t -from gettext import gettext as _ - -from .core import Argument -from .core import BaseCommand -from .core import Context -from .core import MultiCommand -from .core import Option -from .core import Parameter -from .core import ParameterSource -from .parser import split_arg_string -from .utils import echo - - -def shell_complete( - cli: BaseCommand, - ctx_args: t.MutableMapping[str, t.Any], - prog_name: str, - complete_var: str, - instruction: str, -) -> int: - """Perform shell completion for the given CLI program. - - :param cli: Command being called. - :param ctx_args: Extra arguments to pass to - ``cli.make_context``. - :param prog_name: Name of the executable in the shell. - :param complete_var: Name of the environment variable that holds - the completion instruction. - :param instruction: Value of ``complete_var`` with the completion - instruction and shell, in the form ``instruction_shell``. - :return: Status code to exit with. - """ - shell, _, instruction = instruction.partition("_") - comp_cls = get_completion_class(shell) - - if comp_cls is None: - return 1 - - comp = comp_cls(cli, ctx_args, prog_name, complete_var) - - if instruction == "source": - echo(comp.source()) - return 0 - - if instruction == "complete": - echo(comp.complete()) - return 0 - - return 1 - - -class CompletionItem: - """Represents a completion value and metadata about the value. The - default metadata is ``type`` to indicate special shell handling, - and ``help`` if a shell supports showing a help string next to the - value. - - Arbitrary parameters can be passed when creating the object, and - accessed using ``item.attr``. 
If an attribute wasn't passed, - accessing it returns ``None``. - - :param value: The completion suggestion. - :param type: Tells the shell script to provide special completion - support for the type. Click uses ``"dir"`` and ``"file"``. - :param help: String shown next to the value if supported. - :param kwargs: Arbitrary metadata. The built-in implementations - don't use this, but custom type completions paired with custom - shell support could use it. - """ - - __slots__ = ("value", "type", "help", "_info") - - def __init__( - self, - value: t.Any, - type: str = "plain", - help: t.Optional[str] = None, - **kwargs: t.Any, - ) -> None: - self.value: t.Any = value - self.type: str = type - self.help: t.Optional[str] = help - self._info = kwargs - - def __getattr__(self, name: str) -> t.Any: - return self._info.get(name) - - -# Only Bash >= 4.4 has the nosort option. -_SOURCE_BASH = """\ -%(complete_func)s() { - local IFS=$'\\n' - local response - - response=$(env COMP_WORDS="${COMP_WORDS[*]}" COMP_CWORD=$COMP_CWORD \ -%(complete_var)s=bash_complete $1) - - for completion in $response; do - IFS=',' read type value <<< "$completion" - - if [[ $type == 'dir' ]]; then - COMPREPLY=() - compopt -o dirnames - elif [[ $type == 'file' ]]; then - COMPREPLY=() - compopt -o default - elif [[ $type == 'plain' ]]; then - COMPREPLY+=($value) - fi - done - - return 0 -} - -%(complete_func)s_setup() { - complete -o nosort -F %(complete_func)s %(prog_name)s -} - -%(complete_func)s_setup; -""" - -_SOURCE_ZSH = """\ -#compdef %(prog_name)s - -%(complete_func)s() { - local -a completions - local -a completions_with_descriptions - local -a response - (( ! $+commands[%(prog_name)s] )) && return 1 - - response=("${(@f)$(env COMP_WORDS="${words[*]}" COMP_CWORD=$((CURRENT-1)) \ -%(complete_var)s=zsh_complete %(prog_name)s)}") - - for type key descr in ${response}; do - if [[ "$type" == "plain" ]]; then - if [[ "$descr" == "_" ]]; then - completions+=("$key") - else - completions_with_descriptions+=("$key":"$descr") - fi - elif [[ "$type" == "dir" ]]; then - _path_files -/ - elif [[ "$type" == "file" ]]; then - _path_files -f - fi - done - - if [ -n "$completions_with_descriptions" ]; then - _describe -V unsorted completions_with_descriptions -U - fi - - if [ -n "$completions" ]; then - compadd -U -V unsorted -a completions - fi -} - -if [[ $zsh_eval_context[-1] == loadautofunc ]]; then - # autoload from fpath, call function directly - %(complete_func)s "$@" -else - # eval/source/. command, register function for later - compdef %(complete_func)s %(prog_name)s -fi -""" - -_SOURCE_FISH = """\ -function %(complete_func)s; - set -l response (env %(complete_var)s=fish_complete COMP_WORDS=(commandline -cp) \ -COMP_CWORD=(commandline -t) %(prog_name)s); - - for completion in $response; - set -l metadata (string split "," $completion); - - if test $metadata[1] = "dir"; - __fish_complete_directories $metadata[2]; - else if test $metadata[1] = "file"; - __fish_complete_path $metadata[2]; - else if test $metadata[1] = "plain"; - echo $metadata[2]; - end; - end; -end; - -complete --no-files --command %(prog_name)s --arguments \ -"(%(complete_func)s)"; -""" - - -class ShellComplete: - """Base class for providing shell completion support. A subclass for - a given shell will override attributes and methods to implement the - completion instructions (``source`` and ``complete``). - - :param cli: Command being called. - :param prog_name: Name of the executable in the shell. 
- :param complete_var: Name of the environment variable that holds - the completion instruction. - - .. versionadded:: 8.0 - """ - - name: t.ClassVar[str] - """Name to register the shell as with :func:`add_completion_class`. - This is used in completion instructions (``{name}_source`` and - ``{name}_complete``). - """ - - source_template: t.ClassVar[str] - """Completion script template formatted by :meth:`source`. This must - be provided by subclasses. - """ - - def __init__( - self, - cli: BaseCommand, - ctx_args: t.MutableMapping[str, t.Any], - prog_name: str, - complete_var: str, - ) -> None: - self.cli = cli - self.ctx_args = ctx_args - self.prog_name = prog_name - self.complete_var = complete_var - - @property - def func_name(self) -> str: - """The name of the shell function defined by the completion - script. - """ - safe_name = re.sub(r"\W*", "", self.prog_name.replace("-", "_"), flags=re.ASCII) - return f"_{safe_name}_completion" - - def source_vars(self) -> t.Dict[str, t.Any]: - """Vars for formatting :attr:`source_template`. - - By default this provides ``complete_func``, ``complete_var``, - and ``prog_name``. - """ - return { - "complete_func": self.func_name, - "complete_var": self.complete_var, - "prog_name": self.prog_name, - } - - def source(self) -> str: - """Produce the shell script that defines the completion - function. By default this ``%``-style formats - :attr:`source_template` with the dict returned by - :meth:`source_vars`. - """ - return self.source_template % self.source_vars() - - def get_completion_args(self) -> t.Tuple[t.List[str], str]: - """Use the env vars defined by the shell script to return a - tuple of ``args, incomplete``. This must be implemented by - subclasses. - """ - raise NotImplementedError - - def get_completions( - self, args: t.List[str], incomplete: str - ) -> t.List[CompletionItem]: - """Determine the context and last complete command or parameter - from the complete args. Call that object's ``shell_complete`` - method to get the completions for the incomplete value. - - :param args: List of complete args before the incomplete value. - :param incomplete: Value being completed. May be empty. - """ - ctx = _resolve_context(self.cli, self.ctx_args, self.prog_name, args) - obj, incomplete = _resolve_incomplete(ctx, args, incomplete) - return obj.shell_complete(ctx, incomplete) - - def format_completion(self, item: CompletionItem) -> str: - """Format a completion item into the form recognized by the - shell script. This must be implemented by subclasses. - - :param item: Completion item to format. - """ - raise NotImplementedError - - def complete(self) -> str: - """Produce the completion data to send back to the shell. - - By default this calls :meth:`get_completion_args`, gets the - completions, then calls :meth:`format_completion` for each - completion. 
- """ - args, incomplete = self.get_completion_args() - completions = self.get_completions(args, incomplete) - out = [self.format_completion(item) for item in completions] - return "\n".join(out) - - -class BashComplete(ShellComplete): - """Shell completion for Bash.""" - - name = "bash" - source_template = _SOURCE_BASH - - @staticmethod - def _check_version() -> None: - import subprocess - - output = subprocess.run( - ["bash", "-c", 'echo "${BASH_VERSION}"'], stdout=subprocess.PIPE - ) - match = re.search(r"^(\d+)\.(\d+)\.\d+", output.stdout.decode()) - - if match is not None: - major, minor = match.groups() - - if major < "4" or major == "4" and minor < "4": - echo( - _( - "Shell completion is not supported for Bash" - " versions older than 4.4." - ), - err=True, - ) - else: - echo( - _("Couldn't detect Bash version, shell completion is not supported."), - err=True, - ) - - def source(self) -> str: - self._check_version() - return super().source() - - def get_completion_args(self) -> t.Tuple[t.List[str], str]: - cwords = split_arg_string(os.environ["COMP_WORDS"]) - cword = int(os.environ["COMP_CWORD"]) - args = cwords[1:cword] - - try: - incomplete = cwords[cword] - except IndexError: - incomplete = "" - - return args, incomplete - - def format_completion(self, item: CompletionItem) -> str: - return f"{item.type},{item.value}" - - -class ZshComplete(ShellComplete): - """Shell completion for Zsh.""" - - name = "zsh" - source_template = _SOURCE_ZSH - - def get_completion_args(self) -> t.Tuple[t.List[str], str]: - cwords = split_arg_string(os.environ["COMP_WORDS"]) - cword = int(os.environ["COMP_CWORD"]) - args = cwords[1:cword] - - try: - incomplete = cwords[cword] - except IndexError: - incomplete = "" - - return args, incomplete - - def format_completion(self, item: CompletionItem) -> str: - return f"{item.type}\n{item.value}\n{item.help if item.help else '_'}" - - -class FishComplete(ShellComplete): - """Shell completion for Fish.""" - - name = "fish" - source_template = _SOURCE_FISH - - def get_completion_args(self) -> t.Tuple[t.List[str], str]: - cwords = split_arg_string(os.environ["COMP_WORDS"]) - incomplete = os.environ["COMP_CWORD"] - args = cwords[1:] - - # Fish stores the partial word in both COMP_WORDS and - # COMP_CWORD, remove it from complete args. - if incomplete and args and args[-1] == incomplete: - args.pop() - - return args, incomplete - - def format_completion(self, item: CompletionItem) -> str: - if item.help: - return f"{item.type},{item.value}\t{item.help}" - - return f"{item.type},{item.value}" - - -ShellCompleteType = t.TypeVar("ShellCompleteType", bound=t.Type[ShellComplete]) - - -_available_shells: t.Dict[str, t.Type[ShellComplete]] = { - "bash": BashComplete, - "fish": FishComplete, - "zsh": ZshComplete, -} - - -def add_completion_class( - cls: ShellCompleteType, name: t.Optional[str] = None -) -> ShellCompleteType: - """Register a :class:`ShellComplete` subclass under the given name. - The name will be provided by the completion instruction environment - variable during completion. - - :param cls: The completion class that will handle completion for the - shell. - :param name: Name to register the class under. Defaults to the - class's ``name`` attribute. - """ - if name is None: - name = cls.name - - _available_shells[name] = cls - - return cls - - -def get_completion_class(shell: str) -> t.Optional[t.Type[ShellComplete]]: - """Look up a registered :class:`ShellComplete` subclass by the name - provided by the completion instruction environment variable. 
If the - name isn't registered, returns ``None``. - - :param shell: Name the class is registered under. - """ - return _available_shells.get(shell) - - -def _is_incomplete_argument(ctx: Context, param: Parameter) -> bool: - """Determine if the given parameter is an argument that can still - accept values. - - :param ctx: Invocation context for the command represented by the - parsed complete args. - :param param: Argument object being checked. - """ - if not isinstance(param, Argument): - return False - - assert param.name is not None - # Will be None if expose_value is False. - value = ctx.params.get(param.name) - return ( - param.nargs == -1 - or ctx.get_parameter_source(param.name) is not ParameterSource.COMMANDLINE - or ( - param.nargs > 1 - and isinstance(value, (tuple, list)) - and len(value) < param.nargs - ) - ) - - -def _start_of_option(ctx: Context, value: str) -> bool: - """Check if the value looks like the start of an option.""" - if not value: - return False - - c = value[0] - return c in ctx._opt_prefixes - - -def _is_incomplete_option(ctx: Context, args: t.List[str], param: Parameter) -> bool: - """Determine if the given parameter is an option that needs a value. - - :param args: List of complete args before the incomplete value. - :param param: Option object being checked. - """ - if not isinstance(param, Option): - return False - - if param.is_flag or param.count: - return False - - last_option = None - - for index, arg in enumerate(reversed(args)): - if index + 1 > param.nargs: - break - - if _start_of_option(ctx, arg): - last_option = arg - - return last_option is not None and last_option in param.opts - - -def _resolve_context( - cli: BaseCommand, - ctx_args: t.MutableMapping[str, t.Any], - prog_name: str, - args: t.List[str], -) -> Context: - """Produce the context hierarchy starting with the command and - traversing the complete arguments. This only follows the commands, - it doesn't trigger input prompts or callbacks. - - :param cli: Command being called. - :param prog_name: Name of the executable in the shell. - :param args: List of complete args before the incomplete value. - """ - ctx_args["resilient_parsing"] = True - ctx = cli.make_context(prog_name, args.copy(), **ctx_args) - args = ctx.protected_args + ctx.args - - while args: - command = ctx.command - - if isinstance(command, MultiCommand): - if not command.chain: - name, cmd, args = command.resolve_command(ctx, args) - - if cmd is None: - return ctx - - ctx = cmd.make_context(name, args, parent=ctx, resilient_parsing=True) - args = ctx.protected_args + ctx.args - else: - sub_ctx = ctx - - while args: - name, cmd, args = command.resolve_command(ctx, args) - - if cmd is None: - return ctx - - sub_ctx = cmd.make_context( - name, - args, - parent=ctx, - allow_extra_args=True, - allow_interspersed_args=False, - resilient_parsing=True, - ) - args = sub_ctx.args - - ctx = sub_ctx - args = [*sub_ctx.protected_args, *sub_ctx.args] - else: - break - - return ctx - - -def _resolve_incomplete( - ctx: Context, args: t.List[str], incomplete: str -) -> t.Tuple[t.Union[BaseCommand, Parameter], str]: - """Find the Click object that will handle the completion of the - incomplete value. Return the object and the incomplete value. - - :param ctx: Invocation context for the command represented by - the parsed complete args. - :param args: List of complete args before the incomplete value. - :param incomplete: Value being completed. May be empty. 
- """ - # Different shells treat an "=" between a long option name and - # value differently. Might keep the value joined, return the "=" - # as a separate item, or return the split name and value. Always - # split and discard the "=" to make completion easier. - if incomplete == "=": - incomplete = "" - elif "=" in incomplete and _start_of_option(ctx, incomplete): - name, _, incomplete = incomplete.partition("=") - args.append(name) - - # The "--" marker tells Click to stop treating values as options - # even if they start with the option character. If it hasn't been - # given and the incomplete arg looks like an option, the current - # command will provide option name completions. - if "--" not in args and _start_of_option(ctx, incomplete): - return ctx.command, incomplete - - params = ctx.command.get_params(ctx) - - # If the last complete arg is an option name with an incomplete - # value, the option will provide value completions. - for param in params: - if _is_incomplete_option(ctx, args, param): - return param, incomplete - - # It's not an option name or value. The first argument without a - # parsed value will provide value completions. - for param in params: - if _is_incomplete_argument(ctx, param): - return param, incomplete - - # There were no unparsed arguments, the command may be a group that - # will provide command name completions. - return ctx.command, incomplete diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/_frontend_code/audio/shared/types.ts b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/_frontend_code/audio/shared/types.ts deleted file mode 100644 index c543d6dafc1370d303a24504f8b5497c173d1d34..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/_frontend_code/audio/shared/types.ts +++ /dev/null @@ -1,6 +0,0 @@ -export type WaveformOptions = { - waveform_color?: string; - waveform_progress_color?: string; - show_controls?: boolean; - skip_length?: number; -}; diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/frontend/assets/Example-5acde2d8.js b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/frontend/assets/Example-5acde2d8.js deleted file mode 100644 index ab6c603bb941d900ec5fd515d63018b23f326fb4..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/frontend/assets/Example-5acde2d8.js +++ /dev/null @@ -1,2 +0,0 @@ -import{r as I}from"./file-url-595a5096.js";const H=new Error("failed to get response body reader"),F=new Error("failed to complete download"),K="Content-Length",Y=async(t,i)=>{const e=await fetch(t);let a;try{const l=parseInt(e.headers.get(K)||"-1"),n=e.body?.getReader();if(!n)throw H;const s=[];let o=0;for(;;){const{done:p,value:x}=await n.read(),v=x?x.length:0;if(p){if(l!=-1&&l!==o)throw F;i&&i({url:t,total:l,received:o,delta:v,done:p});break}s.push(x),o+=v,i&&i({url:t,total:l,received:o,delta:v,done:p})}const m=new Uint8Array(o);let u=0;for(const p of s)m.set(p,u),u+=p.length;a=m.buffer}catch(l){console.log("failed to send download progress event: ",l),a=await e.arrayBuffer(),i&&i({url:t,total:a.byteLength,received:a.byteLength,delta:0,done:!0})}return a},R=async(t,i,e=!1,a)=>{const l=e?await Y(t,a):await(await fetch(t)).arrayBuffer(),n=new Blob([l],{type:i});return URL.createObjectURL(n)};var 
r;(function(t){t.LOAD="LOAD",t.EXEC="EXEC",t.WRITE_FILE="WRITE_FILE",t.READ_FILE="READ_FILE",t.DELETE_FILE="DELETE_FILE",t.RENAME="RENAME",t.CREATE_DIR="CREATE_DIR",t.LIST_DIR="LIST_DIR",t.DELETE_DIR="DELETE_DIR",t.ERROR="ERROR",t.DOWNLOAD="DOWNLOAD",t.PROGRESS="PROGRESS",t.LOG="LOG",t.MOUNT="MOUNT",t.UNMOUNT="UNMOUNT"})(r||(r={}));const J=(()=>{let t=0;return()=>t++})(),Q=new Error("ffmpeg is not loaded, call `await ffmpeg.load()` first"),Z=new Error("called FFmpeg.terminate()");class ${#t=null;#a={};#e={};#l=[];#o=[];loaded=!1;#n=()=>{this.#t&&(this.#t.onmessage=({data:{id:i,type:e,data:a}})=>{switch(e){case r.LOAD:this.loaded=!0,this.#a[i](a);break;case r.MOUNT:case r.UNMOUNT:case r.EXEC:case r.WRITE_FILE:case r.READ_FILE:case r.DELETE_FILE:case r.RENAME:case r.CREATE_DIR:case r.LIST_DIR:case r.DELETE_DIR:this.#a[i](a);break;case r.LOG:this.#l.forEach(l=>l(a));break;case r.PROGRESS:this.#o.forEach(l=>l(a));break;case r.ERROR:this.#e[i](a);break}delete this.#a[i],delete this.#e[i]})};#i=({type:i,data:e},a=[],l)=>this.#t?new Promise((n,s)=>{const o=J();this.#t&&this.#t.postMessage({id:o,type:i,data:e},a),this.#a[o]=n,this.#e[o]=s,l?.addEventListener("abort",()=>{s(new DOMException(`Message # ${o} was aborted`,"AbortError"))},{once:!0})}):Promise.reject(Q);on(i,e){i==="log"?this.#l.push(e):i==="progress"&&this.#o.push(e)}off(i,e){i==="log"?this.#l=this.#l.filter(a=>a!==e):i==="progress"&&(this.#o=this.#o.filter(a=>a!==e))}load=(i={},{signal:e}={})=>(this.#t||(this.#t=new Worker(new URL(""+new URL("worker-1779ba70.js",import.meta.url).href,self.location),{type:"module"}),this.#n()),this.#i({type:r.LOAD,data:i},void 0,e));exec=(i,e=-1,{signal:a}={})=>this.#i({type:r.EXEC,data:{args:i,timeout:e}},void 0,a);terminate=()=>{const i=Object.keys(this.#e);for(const e of i)this.#e[e](Z),delete this.#e[e],delete this.#a[e];this.#t&&(this.#t.terminate(),this.#t=null,this.loaded=!1)};writeFile=(i,e,{signal:a}={})=>{const l=[];return e instanceof Uint8Array&&l.push(e.buffer),this.#i({type:r.WRITE_FILE,data:{path:i,data:e}},l,a)};mount=(i,e,a)=>{const l=[];return this.#i({type:r.MOUNT,data:{fsType:i,options:e,mountPoint:a}},l)};unmount=i=>{const e=[];return this.#i({type:r.UNMOUNT,data:{mountPoint:i}},e)};readFile=(i,e="binary",{signal:a}={})=>this.#i({type:r.READ_FILE,data:{path:i,encoding:e}},void 0,a);deleteFile=(i,{signal:e}={})=>this.#i({type:r.DELETE_FILE,data:{path:i}},void 0,e);rename=(i,e,{signal:a}={})=>this.#i({type:r.RENAME,data:{oldPath:i,newPath:e}},void 0,a);createDir=(i,{signal:e}={})=>this.#i({type:r.CREATE_DIR,data:{path:i}},void 0,e);listDir=(i,{signal:e}={})=>this.#i({type:r.LIST_DIR,data:{path:i}},void 0,e);deleteDir=(i,{signal:e}={})=>this.#i({type:r.DELETE_DIR,data:{path:i}},void 0,e)}const 
ii={ez:"application/andrew-inset",aw:"application/applixware",atom:"application/atom+xml",atomcat:"application/atomcat+xml",atomdeleted:"application/atomdeleted+xml",atomsvc:"application/atomsvc+xml",dwd:"application/atsc-dwd+xml",held:"application/atsc-held+xml",rsat:"application/atsc-rsat+xml",bdoc:"application/bdoc",xcs:"application/calendar+xml",ccxml:"application/ccxml+xml",cdfx:"application/cdfx+xml",cdmia:"application/cdmi-capability",cdmic:"application/cdmi-container",cdmid:"application/cdmi-domain",cdmio:"application/cdmi-object",cdmiq:"application/cdmi-queue",cu:"application/cu-seeme",mpd:"application/dash+xml",davmount:"application/davmount+xml",dbk:"application/docbook+xml",dssc:"application/dssc+der",xdssc:"application/dssc+xml",es:"application/ecmascript",ecma:"application/ecmascript",emma:"application/emma+xml",emotionml:"application/emotionml+xml",epub:"application/epub+zip",exi:"application/exi",fdt:"application/fdt+xml",pfr:"application/font-tdpfr",geojson:"application/geo+json",gml:"application/gml+xml",gpx:"application/gpx+xml",gxf:"application/gxf",gz:"application/gzip",hjson:"application/hjson",stk:"application/hyperstudio",ink:"application/inkml+xml",inkml:"application/inkml+xml",ipfix:"application/ipfix",its:"application/its+xml",jar:"application/java-archive",war:"application/java-archive",ear:"application/java-archive",ser:"application/java-serialized-object",class:"application/java-vm",js:"application/javascript",mjs:"application/javascript",json:"application/json",map:"application/json",json5:"application/json5",jsonml:"application/jsonml+json",jsonld:"application/ld+json",lgr:"application/lgr+xml",lostxml:"application/lost+xml",hqx:"application/mac-binhex40",cpt:"application/mac-compactpro",mads:"application/mads+xml",webmanifest:"application/manifest+json",mrc:"application/marc",mrcx:"application/marcxml+xml",ma:"application/mathematica",nb:"application/mathematica",mb:"application/mathematica",mathml:"application/mathml+xml",mbox:"application/mbox",mscml:"application/mediaservercontrol+xml",metalink:"application/metalink+xml",meta4:"application/metalink4+xml",mets:"application/mets+xml",maei:"application/mmt-aei+xml",musd:"application/mmt-usd+xml",mods:"application/mods+xml",m21:"application/mp21",mp21:"application/mp21",mp4s:"application/mp4",m4p:"application/mp4",doc:"application/msword",dot:"application/msword",mxf:"application/mxf",nq:"application/n-quads",nt:"application/n-triples",cjs:"application/node",bin:"application/octet-stream",dms:"application/octet-stream",lrf:"application/octet-stream",mar:"application/octet-stream",so:"application/octet-stream",dist:"application/octet-stream",distz:"application/octet-stream",pkg:"application/octet-stream",bpk:"application/octet-stream",dump:"application/octet-stream",elc:"application/octet-stream",deploy:"application/octet-stream",exe:"application/octet-stream",dll:"application/octet-stream",deb:"application/octet-stream",dmg:"application/octet-stream",iso:"application/octet-stream",img:"application/octet-stream",msi:"application/octet-stream",msp:"application/octet-stream",msm:"application/octet-stream",buffer:"application/octet-stream",oda:"application/oda",opf:"application/oebps-package+xml",ogx:"application/ogg",omdoc:"application/omdoc+xml",onetoc:"application/onenote",onetoc2:"application/onenote",onetmp:"application/onenote",onepkg:"application/onenote",oxps:"application/oxps",relo:"application/p2p-overlay+xml",xer:"application/patch-ops-error+xml",pdf:"application/pdf",pgp:"application/pgp-encrypted",as
c:"application/pgp-signature",sig:"application/pgp-signature",prf:"application/pics-rules",p10:"application/pkcs10",p7m:"application/pkcs7-mime",p7c:"application/pkcs7-mime",p7s:"application/pkcs7-signature",p8:"application/pkcs8",ac:"application/pkix-attr-cert",cer:"application/pkix-cert",crl:"application/pkix-crl",pkipath:"application/pkix-pkipath",pki:"application/pkixcmp",pls:"application/pls+xml",ai:"application/postscript",eps:"application/postscript",ps:"application/postscript",provx:"application/provenance+xml",cww:"application/prs.cww",pskcxml:"application/pskc+xml",raml:"application/raml+yaml",rdf:"application/rdf+xml",owl:"application/rdf+xml",rif:"application/reginfo+xml",rnc:"application/relax-ng-compact-syntax",rl:"application/resource-lists+xml",rld:"application/resource-lists-diff+xml",rs:"application/rls-services+xml",rapd:"application/route-apd+xml",sls:"application/route-s-tsid+xml",rusd:"application/route-usd+xml",gbr:"application/rpki-ghostbusters",mft:"application/rpki-manifest",roa:"application/rpki-roa",rsd:"application/rsd+xml",rss:"application/rss+xml",rtf:"application/rtf",sbml:"application/sbml+xml",scq:"application/scvp-cv-request",scs:"application/scvp-cv-response",spq:"application/scvp-vp-request",spp:"application/scvp-vp-response",sdp:"application/sdp",senmlx:"application/senml+xml",sensmlx:"application/sensml+xml",setpay:"application/set-payment-initiation",setreg:"application/set-registration-initiation",shf:"application/shf+xml",siv:"application/sieve",sieve:"application/sieve",smi:"application/smil+xml",smil:"application/smil+xml",rq:"application/sparql-query",srx:"application/sparql-results+xml",gram:"application/srgs",grxml:"application/srgs+xml",sru:"application/sru+xml",ssdl:"application/ssdl+xml",ssml:"application/ssml+xml",swidtag:"application/swid+xml",tei:"application/tei+xml",teicorpus:"application/tei+xml",tfi:"application/thraud+xml",tsd:"application/timestamped-data",toml:"application/toml",trig:"application/trig",ttml:"application/ttml+xml",ubj:"application/ubjson",rsheet:"application/urc-ressheet+xml",td:"application/urc-targetdesc+xml",vxml:"application/voicexml+xml",wasm:"application/wasm",wgt:"application/widget",hlp:"application/winhlp",wsdl:"application/wsdl+xml",wspolicy:"application/wspolicy+xml",xaml:"application/xaml+xml",xav:"application/xcap-att+xml",xca:"application/xcap-caps+xml",xdf:"application/xcap-diff+xml",xel:"application/xcap-el+xml",xns:"application/xcap-ns+xml",xenc:"application/xenc+xml",xhtml:"application/xhtml+xml",xht:"application/xhtml+xml",xlf:"application/xliff+xml",xml:"application/xml",xsl:"application/xml",xsd:"application/xml",rng:"application/xml",dtd:"application/xml-dtd",xop:"application/xop+xml",xpl:"application/xproc+xml",xslt:"application/xml",xspf:"application/xspf+xml",mxml:"application/xv+xml",xhvml:"application/xv+xml",xvml:"application/xv+xml",xvm:"application/xv+xml",yang:"application/yang",yin:"application/yin+xml",zip:"application/zip","3gpp":"video/3gpp",adp:"audio/adpcm",amr:"audio/amr",au:"audio/basic",snd:"audio/basic",mid:"audio/midi",midi:"audio/midi",kar:"audio/midi",rmi:"audio/midi",mxmf:"audio/mobile-xmf",mp3:"audio/mpeg",m4a:"audio/mp4",mp4a:"audio/mp4",mpga:"audio/mpeg",mp2:"audio/mpeg",mp2a:"audio/mpeg",m2a:"audio/mpeg",m3a:"audio/mpeg",oga:"audio/ogg",ogg:"audio/ogg",spx:"audio/ogg",opus:"audio/ogg",s3m:"audio/s3m",sil:"audio/silk",wav:"audio/wav",weba:"audio/webm",xm:"audio/xm",ttc:"font/collection",otf:"font/otf",ttf:"font/ttf",woff:"font/woff",woff2:"font/woff2",exr:"image/aces",
apng:"image/apng",avif:"image/avif",bmp:"image/bmp",cgm:"image/cgm",drle:"image/dicom-rle",emf:"image/emf",fits:"image/fits",g3:"image/g3fax",gif:"image/gif",heic:"image/heic",heics:"image/heic-sequence",heif:"image/heif",heifs:"image/heif-sequence",hej2:"image/hej2k",hsj2:"image/hsj2",ief:"image/ief",jls:"image/jls",jp2:"image/jp2",jpg2:"image/jp2",jpeg:"image/jpeg",jpg:"image/jpeg",jpe:"image/jpeg",jph:"image/jph",jhc:"image/jphc",jpm:"image/jpm",jpx:"image/jpx",jpf:"image/jpx",jxr:"image/jxr",jxra:"image/jxra",jxrs:"image/jxrs",jxs:"image/jxs",jxsc:"image/jxsc",jxsi:"image/jxsi",jxss:"image/jxss",ktx:"image/ktx",ktx2:"image/ktx2",png:"image/png",btif:"image/prs.btif",pti:"image/prs.pti",sgi:"image/sgi",svg:"image/svg+xml",svgz:"image/svg+xml",t38:"image/t38",tif:"image/tiff",tiff:"image/tiff",tfx:"image/tiff-fx",webp:"image/webp",wmf:"image/wmf","disposition-notification":"message/disposition-notification",u8msg:"message/global",u8dsn:"message/global-delivery-status",u8mdn:"message/global-disposition-notification",u8hdr:"message/global-headers",eml:"message/rfc822",mime:"message/rfc822","3mf":"model/3mf",gltf:"model/gltf+json",glb:"model/gltf-binary",igs:"model/iges",iges:"model/iges",msh:"model/mesh",mesh:"model/mesh",silo:"model/mesh",mtl:"model/mtl",obj:"model/obj",stpz:"model/step+zip",stpxz:"model/step-xml+zip",stl:"model/stl",wrl:"model/vrml",vrml:"model/vrml",x3db:"model/x3d+fastinfoset",x3dbz:"model/x3d+binary",x3dv:"model/x3d-vrml",x3dvz:"model/x3d+vrml",x3d:"model/x3d+xml",x3dz:"model/x3d+xml",appcache:"text/cache-manifest",manifest:"text/cache-manifest",ics:"text/calendar",ifb:"text/calendar",coffee:"text/coffeescript",litcoffee:"text/coffeescript",css:"text/css",csv:"text/csv",html:"text/html",htm:"text/html",shtml:"text/html",jade:"text/jade",jsx:"text/jsx",less:"text/less",markdown:"text/markdown",md:"text/markdown",mml:"text/mathml",mdx:"text/mdx",n3:"text/n3",txt:"text/plain",text:"text/plain",conf:"text/plain",def:"text/plain",list:"text/plain",log:"text/plain",in:"text/plain",ini:"text/plain",dsc:"text/prs.lines.tag",rtx:"text/richtext",sgml:"text/sgml",sgm:"text/sgml",shex:"text/shex",slim:"text/slim",slm:"text/slim",spdx:"text/spdx",stylus:"text/stylus",styl:"text/stylus",tsv:"text/tab-separated-values",t:"text/troff",tr:"text/troff",roff:"text/troff",man:"text/troff",me:"text/troff",ms:"text/troff",ttl:"text/turtle",uri:"text/uri-list",uris:"text/uri-list",urls:"text/uri-list",vcard:"text/vcard",vtt:"text/vtt",yaml:"text/yaml",yml:"text/yaml","3gp":"video/3gpp","3g2":"video/3gpp2",h261:"video/h261",h263:"video/h263",h264:"video/h264",m4s:"video/iso.segment",jpgv:"video/jpeg",jpgm:"image/jpm",mj2:"video/mj2",mjp2:"video/mj2",ts:"video/mp2t",mp4:"video/mp4",mp4v:"video/mp4",mpg4:"video/mp4",mpeg:"video/mpeg",mpg:"video/mpeg",mpe:"video/mpeg",m1v:"video/mpeg",m2v:"video/mpeg",ogv:"video/ogg",qt:"video/quicktime",mov:"video/quicktime",webm:"video/webm"};function ti(t){let i=(""+t).trim().toLowerCase(),e=i.lastIndexOf(".");return ii[~e?i.substring(++e):i]}const Qi=t=>{let i=["B","KB","MB","GB","PB"],e=0;for(;t>1024;)t/=1024,e++;let a=i[e];return t.toFixed(1)+" "+a},Zi=()=>!0;function ei(t,{autoplay:i}){async function e(){i&&await t.play()}return t.addEventListener("loadeddata",e),{destroy(){t.removeEventListener("loadeddata",e)}}}async function $i(){const t=new $,i="https://unpkg.com/@ffmpeg/core@0.12.4/dist/esm";return await t.load({coreURL:await R(`${i}/ffmpeg-core.js`,"text/javascript"),wasmURL:await R(`${i}/ffmpeg-core.wasm`,"application/wasm")}),t}async function 
it(t,i,e,a){try{const l=a.src,n=ti(a.src)||"video/mp4",s=await R(l,n),m=await(await fetch(s)).blob(),u=ai(n)||"mp4",p=`input.${u}`,x=`output.${u}`;await t.writeFile(p,new Uint8Array(await m.arrayBuffer()));let v=["-i",p,"-ss",i.toString(),"-to",e.toString(),"-c:a","copy",x];await t.exec(v);const _=await t.readFile(x);return new Blob([_],{type:`video/${u}`})}catch(l){console.error("Error initializing FFmpeg:",l)}}const ai=t=>({"video/mp4":"mp4","video/webm":"webm","video/ogg":"ogv","video/quicktime":"mov","video/x-msvideo":"avi","video/x-matroska":"mkv","video/mpeg":"mpeg","video/3gpp":"3gp","video/3gpp2":"3g2","video/h261":"h261","video/h263":"h263","video/h264":"h264","video/jpeg":"jpgv","video/jpm":"jpm","video/mj2":"mj2","video/mpv":"mpv","video/vnd.ms-playready.media.pyv":"pyv","video/vnd.uvvu.mp4":"uvu","video/vnd.vivo":"viv","video/x-f4v":"f4v","video/x-fli":"fli","video/x-flv":"flv","video/x-m4v":"m4v","video/x-ms-asf":"asf","video/x-ms-wm":"wm","video/x-ms-wmv":"wmv","video/x-ms-wmx":"wmx","video/x-ms-wvx":"wvx","video/x-sgi-movie":"movie","video/x-smv":"smv"})[t]||null;const{SvelteComponent:li,action_destroyer:oi,add_render_callback:ni,append:pi,assign:T,attr:b,binding_callbacks:si,create_slot:mi,detach:k,element:L,empty:ci,exclude_internal_props:A,get_all_dirty_from_scope:di,get_slot_changes:ri,handle_promise:N,init:ui,insert:w,is_function:fi,listen:h,noop:E,raf:gi,run_all:xi,safe_not_equal:vi,set_data:hi,set_style:_i,space:bi,src_url_equal:U,text:Ei,toggle_class:q,transition_in:C,transition_out:B,update_await_block_branch:ji,update_slot_base:yi}=window.__gradio__svelte__internal,{createEventDispatcher:ki}=window.__gradio__svelte__internal;function wi(t){let i,e=t[20].message+"",a;return{c(){i=L("p"),a=Ei(e),_i(i,"color","red")},m(l,n){w(l,i,n),pi(i,a)},p(l,n){n&16&&e!==(e=l[20].message+"")&&hi(a,e)},i:E,o:E,d(l){l&&k(i)}}}function Ri(t){let i,e,a,l,n,s=!1,o,m=!0,u,p,x,v;const _=t[14].default,g=mi(_,t,t[13],null);function y(){cancelAnimationFrame(o),a.paused||(o=gi(y),s=!0),t[15].call(a)}return{c(){i=L("div"),i.innerHTML='',e=bi(),a=L("video"),g&&g.c(),b(i,"class","overlay svelte-1wkm2e0"),q(i,"hidden",!t[10]),U(a.src,l=t[19])||b(a,"src",l),a.muted=t[5],a.playsInline=t[6],b(a,"preload",t[7]),a.autoplay=t[8],a.controls=t[9],b(a,"data-testid",n=t[12]["data-testid"]),b(a,"crossorigin","anonymous"),b(a,"class","svelte-1wkm2e0"),t[1]===void 
0&&ni(()=>t[16].call(a))},m(c,f){w(c,i,f),w(c,e,f),w(c,a,f),g&&g.m(a,null),t[18](a),p=!0,x||(v=[h(a,"loadeddata",t[11].bind(null,"loadeddata")),h(a,"click",t[11].bind(null,"click")),h(a,"play",t[11].bind(null,"play")),h(a,"pause",t[11].bind(null,"pause")),h(a,"ended",t[11].bind(null,"ended")),h(a,"mouseover",t[11].bind(null,"mouseover")),h(a,"mouseout",t[11].bind(null,"mouseout")),h(a,"focus",t[11].bind(null,"focus")),h(a,"blur",t[11].bind(null,"blur")),h(a,"timeupdate",y),h(a,"durationchange",t[16]),h(a,"play",t[17]),h(a,"pause",t[17]),oi(u=ei.call(null,a,{autoplay:t[8]??!1}))],x=!0)},p(c,f){(!p||f&1024)&&q(i,"hidden",!c[10]),g&&g.p&&(!p||f&8192)&&yi(g,_,c,c[13],p?ri(_,c[13],f,null):di(c[13]),null),(!p||f&16&&!U(a.src,l=c[19]))&&b(a,"src",l),(!p||f&32)&&(a.muted=c[5]),(!p||f&64)&&(a.playsInline=c[6]),(!p||f&128)&&b(a,"preload",c[7]),(!p||f&256)&&(a.autoplay=c[8]),(!p||f&512)&&(a.controls=c[9]),(!p||f&4096&&n!==(n=c[12]["data-testid"]))&&b(a,"data-testid",n),!s&&f&1&&!isNaN(c[0])&&(a.currentTime=c[0]),s=!1,f&4&&m!==(m=c[2])&&a[m?"pause":"play"](),u&&fi(u.update)&&f&256&&u.update.call(null,{autoplay:c[8]??!1})},i(c){p||(C(g,c),p=!0)},o(c){B(g,c),p=!1},d(c){c&&(k(i),k(e),k(a)),g&&g.d(c),t[18](null),x=!1,xi(v)}}}function Li(t){return{c:E,m:E,p:E,i:E,o:E,d:E}}function Di(t){let i,e,a,l={ctx:t,current:null,token:null,hasCatch:!0,pending:Li,then:Ri,catch:wi,value:19,error:20,blocks:[,,,]};return N(e=I(t[4]),l),{c(){i=ci(),l.block.c()},m(n,s){w(n,i,s),l.block.m(n,l.anchor=s),l.mount=()=>i.parentNode,l.anchor=i,a=!0},p(n,[s]){t=n,l.ctx=t,s&16&&e!==(e=I(t[4]))&&N(e,l)||ji(l,t,s)},i(n){a||(C(l.block),a=!0)},o(n){for(let s=0;s<3;s+=1){const o=l.blocks[s];B(o)}a=!1},d(n){n&&k(i),l.block.d(n),l.token=null,l=null}}}function Oi(t,i,e){let{$$slots:a={},$$scope:l}=i,{src:n=void 0}=i,{muted:s=void 0}=i,{playsinline:o=void 0}=i,{preload:m=void 0}=i,{autoplay:u=void 0}=i,{controls:p=void 0}=i,{currentTime:x=void 0}=i,{duration:v=void 0}=i,{paused:_=void 0}=i,{node:g=void 0}=i,{processingVideo:y=!1}=i;const c=ki();function f(){x=this.currentTime,e(0,x)}function W(){v=this.duration,e(1,v)}function G(){_=this.paused,e(2,_)}function X(d){si[d?"unshift":"push"](()=>{g=d,e(3,g)})}return t.$$set=d=>{e(12,i=T(T({},i),A(d))),"src"in d&&e(4,n=d.src),"muted"in d&&e(5,s=d.muted),"playsinline"in d&&e(6,o=d.playsinline),"preload"in d&&e(7,m=d.preload),"autoplay"in d&&e(8,u=d.autoplay),"controls"in d&&e(9,p=d.controls),"currentTime"in d&&e(0,x=d.currentTime),"duration"in d&&e(1,v=d.duration),"paused"in d&&e(2,_=d.paused),"node"in d&&e(3,g=d.node),"processingVideo"in d&&e(10,y=d.processingVideo),"$$scope"in d&&e(13,l=d.$$scope)},i=A(i),[x,v,_,g,n,s,o,m,u,p,y,c,i,l,a,f,W,G,X]}class Ii extends li{constructor(i){super(),ui(this,i,Oi,Di,vi,{src:4,muted:5,playsinline:6,preload:7,autoplay:8,controls:9,currentTime:0,duration:1,paused:2,node:3,processingVideo:10})}}const{SvelteComponent:Ti,add_flush_callback:Ai,append:Ni,attr:Ui,bind:qi,binding_callbacks:Si,create_component:zi,destroy_component:Ci,detach:D,element:M,empty:Bi,init:Mi,insert:O,is_function:S,mount_component:Pi,noop:z,safe_not_equal:Vi,set_data:Wi,text:Gi,toggle_class:j,transition_in:P,transition_out:V}=window.__gradio__svelte__internal;function Xi(t){let i,e;return{c(){i=M("div"),e=Gi(t[2])},m(a,l){O(a,i,l),Ni(i,e)},p(a,l){l&4&&Wi(e,a[2])},i:z,o:z,d(a){a&&D(i)}}}function Hi(t){let i,e,a,l;function n(o){t[6](o)}let s={muted:!0,playsinline:!0,src:t[3]+t[2]};return t[4]!==void 0&&(s.node=t[4]),e=new 
Ii({props:s}),Si.push(()=>qi(e,"node",n)),e.$on("loadeddata",t[5]),e.$on("mouseover",function(){S(t[4].play.bind(t[4]))&&t[4].play.bind(t[4]).apply(this,arguments)}),e.$on("mouseout",function(){S(t[4].pause.bind(t[4]))&&t[4].pause.bind(t[4]).apply(this,arguments)}),{c(){i=M("div"),zi(e.$$.fragment),Ui(i,"class","container svelte-1jmx6y1"),j(i,"table",t[0]==="table"),j(i,"gallery",t[0]==="gallery"),j(i,"selected",t[1])},m(o,m){O(o,i,m),Pi(e,i,null),l=!0},p(o,m){t=o;const u={};m&12&&(u.src=t[3]+t[2]),!a&&m&16&&(a=!0,u.node=t[4],Ai(()=>a=!1)),e.$set(u),(!l||m&1)&&j(i,"table",t[0]==="table"),(!l||m&1)&&j(i,"gallery",t[0]==="gallery"),(!l||m&2)&&j(i,"selected",t[1])},i(o){l||(P(e.$$.fragment,o),l=!0)},o(o){V(e.$$.fragment,o),l=!1},d(o){o&&D(i),Ci(e)}}}function Fi(t){let i,e,a,l;const n=[Hi,Xi],s=[];function o(m,u){return 0}return i=o(),e=s[i]=n[i](t),{c(){e.c(),a=Bi()},m(m,u){s[i].m(m,u),O(m,a,u),l=!0},p(m,[u]){e.p(m,u)},i(m){l||(P(e),l=!0)},o(m){V(e),l=!1},d(m){m&&D(a),s[i].d(m)}}}function Ki(t,i,e){let{type:a}=i,{selected:l=!1}=i,{value:n}=i,{samples_dir:s}=i,o;async function m(){e(4,o.muted=!0,o),e(4,o.playsInline=!0,o),e(4,o.controls=!1,o),o.setAttribute("muted",""),await o.play(),o.pause()}function u(p){o=p,e(4,o)}return t.$$set=p=>{"type"in p&&e(0,a=p.type),"selected"in p&&e(1,l=p.selected),"value"in p&&e(2,n=p.value),"samples_dir"in p&&e(3,s=p.samples_dir)},[a,l,n,s,o,m,u]}class Yi extends Ti{constructor(i){super(),Mi(this,i,Ki,Fi,Vi,{type:0,selected:1,value:2,samples_dir:3})}}const tt=Object.freeze(Object.defineProperty({__proto__:null,default:Yi},Symbol.toStringTag,{value:"Module"}));export{Yi as E,Ii as V,Zi as a,ei as b,tt as c,$i as l,Qi as p,it as t}; -//# sourceMappingURL=Example-5acde2d8.js.map diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/core/include/numpy/npy_1_7_deprecated_api.h b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/core/include/numpy/npy_1_7_deprecated_api.h deleted file mode 100644 index 6455d40d223b8a13c9903c95e7282b9621311414..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/core/include/numpy/npy_1_7_deprecated_api.h +++ /dev/null @@ -1,124 +0,0 @@ -#ifndef NPY_DEPRECATED_INCLUDES -#error "Should never include npy_*_*_deprecated_api directly." -#endif - -#ifndef NUMPY_CORE_INCLUDE_NUMPY_NPY_1_7_DEPRECATED_API_H_ -#define NUMPY_CORE_INCLUDE_NUMPY_NPY_1_7_DEPRECATED_API_H_ - -/* Emit a warning if the user did not specifically request the old API */ -#ifndef NPY_NO_DEPRECATED_API -#if defined(_WIN32) -#define _WARN___STR2__(x) #x -#define _WARN___STR1__(x) _WARN___STR2__(x) -#define _WARN___LOC__ __FILE__ "(" _WARN___STR1__(__LINE__) ") : Warning Msg: " -#pragma message(_WARN___LOC__"Using deprecated NumPy API, disable it with " \ - "#define NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION") -#else -#warning "Using deprecated NumPy API, disable it with " \ - "#define NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION" -#endif -#endif - -/* - * This header exists to collect all dangerous/deprecated NumPy API - * as of NumPy 1.7. - * - * This is an attempt to remove bad API, the proliferation of macros, - * and namespace pollution currently produced by the NumPy headers. - */ - -/* These array flags are deprecated as of NumPy 1.7 */ -#define NPY_CONTIGUOUS NPY_ARRAY_C_CONTIGUOUS -#define NPY_FORTRAN NPY_ARRAY_F_CONTIGUOUS - -/* - * The consistent NPY_ARRAY_* names which don't pollute the NPY_* - * namespace were added in NumPy 1.7. 
- * - * These versions of the carray flags are deprecated, but - * probably should only be removed after two releases instead of one. - */ -#define NPY_C_CONTIGUOUS NPY_ARRAY_C_CONTIGUOUS -#define NPY_F_CONTIGUOUS NPY_ARRAY_F_CONTIGUOUS -#define NPY_OWNDATA NPY_ARRAY_OWNDATA -#define NPY_FORCECAST NPY_ARRAY_FORCECAST -#define NPY_ENSURECOPY NPY_ARRAY_ENSURECOPY -#define NPY_ENSUREARRAY NPY_ARRAY_ENSUREARRAY -#define NPY_ELEMENTSTRIDES NPY_ARRAY_ELEMENTSTRIDES -#define NPY_ALIGNED NPY_ARRAY_ALIGNED -#define NPY_NOTSWAPPED NPY_ARRAY_NOTSWAPPED -#define NPY_WRITEABLE NPY_ARRAY_WRITEABLE -#define NPY_BEHAVED NPY_ARRAY_BEHAVED -#define NPY_BEHAVED_NS NPY_ARRAY_BEHAVED_NS -#define NPY_CARRAY NPY_ARRAY_CARRAY -#define NPY_CARRAY_RO NPY_ARRAY_CARRAY_RO -#define NPY_FARRAY NPY_ARRAY_FARRAY -#define NPY_FARRAY_RO NPY_ARRAY_FARRAY_RO -#define NPY_DEFAULT NPY_ARRAY_DEFAULT -#define NPY_IN_ARRAY NPY_ARRAY_IN_ARRAY -#define NPY_OUT_ARRAY NPY_ARRAY_OUT_ARRAY -#define NPY_INOUT_ARRAY NPY_ARRAY_INOUT_ARRAY -#define NPY_IN_FARRAY NPY_ARRAY_IN_FARRAY -#define NPY_OUT_FARRAY NPY_ARRAY_OUT_FARRAY -#define NPY_INOUT_FARRAY NPY_ARRAY_INOUT_FARRAY -#define NPY_UPDATE_ALL NPY_ARRAY_UPDATE_ALL - -/* This way of accessing the default type is deprecated as of NumPy 1.7 */ -#define PyArray_DEFAULT NPY_DEFAULT_TYPE - -/* These DATETIME bits aren't used internally */ -#define PyDataType_GetDatetimeMetaData(descr) \ - ((descr->metadata == NULL) ? NULL : \ - ((PyArray_DatetimeMetaData *)(PyCapsule_GetPointer( \ - PyDict_GetItemString( \ - descr->metadata, NPY_METADATA_DTSTR), NULL)))) - -/* - * Deprecated as of NumPy 1.7, this kind of shortcut doesn't - * belong in the public API. - */ -#define NPY_AO PyArrayObject - -/* - * Deprecated as of NumPy 1.7, an all-lowercase macro doesn't - * belong in the public API. - */ -#define fortran fortran_ - -/* - * Deprecated as of NumPy 1.7, as it is a namespace-polluting - * macro. - */ -#define FORTRAN_IF PyArray_FORTRAN_IF - -/* Deprecated as of NumPy 1.7, datetime64 uses c_metadata instead */ -#define NPY_METADATA_DTSTR "__timeunit__" - -/* - * Deprecated as of NumPy 1.7. - * The reasoning: - * - These are for datetime, but there's no datetime "namespace". - * - They just turn NPY_STR_ into "", which is just - * making something simple be indirected. - */ -#define NPY_STR_Y "Y" -#define NPY_STR_M "M" -#define NPY_STR_W "W" -#define NPY_STR_D "D" -#define NPY_STR_h "h" -#define NPY_STR_m "m" -#define NPY_STR_s "s" -#define NPY_STR_ms "ms" -#define NPY_STR_us "us" -#define NPY_STR_ns "ns" -#define NPY_STR_ps "ps" -#define NPY_STR_fs "fs" -#define NPY_STR_as "as" - -/* - * The macros in old_defines.h are Deprecated as of NumPy 1.7 and will be - * removed in the next major release. - */ -#include "old_defines.h" - -#endif /* NUMPY_CORE_INCLUDE_NUMPY_NPY_1_7_DEPRECATED_API_H_ */ diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/core/tests/test_errstate.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/core/tests/test_errstate.py deleted file mode 100644 index 3a5647f6f34036711337bfe7f625242afd1e2b28..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/core/tests/test_errstate.py +++ /dev/null @@ -1,61 +0,0 @@ -import pytest -import sysconfig - -import numpy as np -from numpy.testing import assert_, assert_raises, IS_WASM - -# The floating point emulation on ARM EABI systems lacking a hardware FPU is -# known to be buggy. This is an attempt to identify these hosts. 
It may not -# catch all possible cases, but it catches the known cases of gh-413 and -# gh-15562. -hosttype = sysconfig.get_config_var('HOST_GNU_TYPE') -arm_softfloat = False if hosttype is None else hosttype.endswith('gnueabi') - -class TestErrstate: - @pytest.mark.skipif(IS_WASM, reason="fp errors don't work in wasm") - @pytest.mark.skipif(arm_softfloat, - reason='platform/cpu issue with FPU (gh-413,-15562)') - def test_invalid(self): - with np.errstate(all='raise', under='ignore'): - a = -np.arange(3) - # This should work - with np.errstate(invalid='ignore'): - np.sqrt(a) - # While this should fail! - with assert_raises(FloatingPointError): - np.sqrt(a) - - @pytest.mark.skipif(IS_WASM, reason="fp errors don't work in wasm") - @pytest.mark.skipif(arm_softfloat, - reason='platform/cpu issue with FPU (gh-15562)') - def test_divide(self): - with np.errstate(all='raise', under='ignore'): - a = -np.arange(3) - # This should work - with np.errstate(divide='ignore'): - a // 0 - # While this should fail! - with assert_raises(FloatingPointError): - a // 0 - # As should this, see gh-15562 - with assert_raises(FloatingPointError): - a // a - - def test_errcall(self): - def foo(*args): - print(args) - - olderrcall = np.geterrcall() - with np.errstate(call=foo): - assert_(np.geterrcall() is foo, 'call is not foo') - with np.errstate(call=None): - assert_(np.geterrcall() is None, 'call is not None') - assert_(np.geterrcall() is olderrcall, 'call is not olderrcall') - - def test_errstate_decorator(self): - @np.errstate(all='ignore') - def foo(): - a = -np.arange(3) - a // 0 - - foo() diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/f2py/tests/test_quoted_character.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/f2py/tests/test_quoted_character.py deleted file mode 100644 index 82671cd8e72f84733f5a28acdb4b5fb9d56a0a03..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/f2py/tests/test_quoted_character.py +++ /dev/null @@ -1,16 +0,0 @@ -"""See https://github.com/numpy/numpy/pull/10676. - -""" -import sys -import pytest - -from . import util - - -class TestQuotedCharacter(util.F2PyTest): - sources = [util.getpath("tests", "src", "quoted_character", "foo.f")] - - @pytest.mark.skipif(sys.platform == "win32", - reason="Fails with MinGW64 Gfortran (Issue #9673)") - def test_quoted_character(self): - assert self.module.foo() == (b"'", b'"', b";", b"!", b"(", b")") diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/io/sas/sasreader.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/io/sas/sasreader.py deleted file mode 100644 index 7fdfd214c452c69db615b4eb18e22143a63ee49c..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/io/sas/sasreader.py +++ /dev/null @@ -1,180 +0,0 @@ -""" -Read SAS sas7bdat or xport files. -""" -from __future__ import annotations - -from typing import ( - TYPE_CHECKING, - Protocol, - overload, -) - -from pandas.util._decorators import doc - -from pandas.core.shared_docs import _shared_docs - -from pandas.io.common import stringify_path - -if TYPE_CHECKING: - from collections.abc import Hashable - from types import TracebackType - - from pandas._typing import ( - CompressionOptions, - FilePath, - ReadBuffer, - ) - - from pandas import DataFrame - - -class ReaderBase(Protocol): - """ - Protocol for XportReader and SAS7BDATReader classes. 
- """ - - def read(self, nrows: int | None = None) -> DataFrame: - ... - - def close(self) -> None: - ... - - def __enter__(self) -> ReaderBase: - return self - - def __exit__( - self, - exc_type: type[BaseException] | None, - exc_value: BaseException | None, - traceback: TracebackType | None, - ) -> None: - self.close() - - -@overload -def read_sas( - filepath_or_buffer: FilePath | ReadBuffer[bytes], - *, - format: str | None = ..., - index: Hashable | None = ..., - encoding: str | None = ..., - chunksize: int = ..., - iterator: bool = ..., - compression: CompressionOptions = ..., -) -> ReaderBase: - ... - - -@overload -def read_sas( - filepath_or_buffer: FilePath | ReadBuffer[bytes], - *, - format: str | None = ..., - index: Hashable | None = ..., - encoding: str | None = ..., - chunksize: None = ..., - iterator: bool = ..., - compression: CompressionOptions = ..., -) -> DataFrame | ReaderBase: - ... - - -@doc(decompression_options=_shared_docs["decompression_options"] % "filepath_or_buffer") -def read_sas( - filepath_or_buffer: FilePath | ReadBuffer[bytes], - *, - format: str | None = None, - index: Hashable | None = None, - encoding: str | None = None, - chunksize: int | None = None, - iterator: bool = False, - compression: CompressionOptions = "infer", -) -> DataFrame | ReaderBase: - """ - Read SAS files stored as either XPORT or SAS7BDAT format files. - - Parameters - ---------- - filepath_or_buffer : str, path object, or file-like object - String, path object (implementing ``os.PathLike[str]``), or file-like - object implementing a binary ``read()`` function. The string could be a URL. - Valid URL schemes include http, ftp, s3, and file. For file URLs, a host is - expected. A local file could be: - ``file://localhost/path/to/table.sas7bdat``. - format : str {{'xport', 'sas7bdat'}} or None - If None, file format is inferred from file extension. If 'xport' or - 'sas7bdat', uses the corresponding format. - index : identifier of index column, defaults to None - Identifier of column that should be used as index of the DataFrame. - encoding : str, default is None - Encoding for text data. If None, text data are stored as raw bytes. - chunksize : int - Read file `chunksize` lines at a time, returns iterator. - - .. versionchanged:: 1.2 - - ``TextFileReader`` is a context manager. - iterator : bool, defaults to False - If True, returns an iterator for reading the file incrementally. - - .. versionchanged:: 1.2 - - ``TextFileReader`` is a context manager. 
- {decompression_options} - - Returns - ------- - DataFrame if iterator=False and chunksize=None, else SAS7BDATReader - or XportReader - - Examples - -------- - >>> df = pd.read_sas("sas_data.sas7bdat") # doctest: +SKIP - """ - if format is None: - buffer_error_msg = ( - "If this is a buffer object rather " - "than a string name, you must specify a format string" - ) - filepath_or_buffer = stringify_path(filepath_or_buffer) - if not isinstance(filepath_or_buffer, str): - raise ValueError(buffer_error_msg) - fname = filepath_or_buffer.lower() - if ".xpt" in fname: - format = "xport" - elif ".sas7bdat" in fname: - format = "sas7bdat" - else: - raise ValueError( - f"unable to infer format of SAS file from filename: {repr(fname)}" - ) - - reader: ReaderBase - if format.lower() == "xport": - from pandas.io.sas.sas_xport import XportReader - - reader = XportReader( - filepath_or_buffer, - index=index, - encoding=encoding, - chunksize=chunksize, - compression=compression, - ) - elif format.lower() == "sas7bdat": - from pandas.io.sas.sas7bdat import SAS7BDATReader - - reader = SAS7BDATReader( - filepath_or_buffer, - index=index, - encoding=encoding, - chunksize=chunksize, - compression=compression, - ) - else: - raise ValueError("unknown SAS format") - - if iterator or chunksize: - return reader - - with reader: - return reader.read() diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/indexes/datetimelike_/test_sort_values.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/indexes/datetimelike_/test_sort_values.py deleted file mode 100644 index ab1c15f003d4dc6fd6508cdb59580facc1fedab0..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/indexes/datetimelike_/test_sort_values.py +++ /dev/null @@ -1,315 +0,0 @@ -import numpy as np -import pytest - -from pandas import ( - DatetimeIndex, - Index, - NaT, - PeriodIndex, - TimedeltaIndex, - timedelta_range, -) -import pandas._testing as tm - - -def check_freq_ascending(ordered, orig, ascending): - """ - Check the expected freq on a PeriodIndex/DatetimeIndex/TimedeltaIndex - when the original index is generated (or generate-able) with - period_range/date_range/timedelta_range. - """ - if isinstance(ordered, PeriodIndex): - assert ordered.freq == orig.freq - elif isinstance(ordered, (DatetimeIndex, TimedeltaIndex)): - if ascending: - assert ordered.freq.n == orig.freq.n - else: - assert ordered.freq.n == -1 * orig.freq.n - - -def check_freq_nonmonotonic(ordered, orig): - """ - Check the expected freq on a PeriodIndex/DatetimeIndex/TimedeltaIndex - when the original index is _not_ generated (or generate-able) with - period_range/date_range//timedelta_range. 
- """ - if isinstance(ordered, PeriodIndex): - assert ordered.freq == orig.freq - elif isinstance(ordered, (DatetimeIndex, TimedeltaIndex)): - assert ordered.freq is None - - -class TestSortValues: - @pytest.fixture(params=[DatetimeIndex, TimedeltaIndex, PeriodIndex]) - def non_monotonic_idx(self, request): - if request.param is DatetimeIndex: - return DatetimeIndex(["2000-01-04", "2000-01-01", "2000-01-02"]) - elif request.param is PeriodIndex: - dti = DatetimeIndex(["2000-01-04", "2000-01-01", "2000-01-02"]) - return dti.to_period("D") - else: - return TimedeltaIndex( - ["1 day 00:00:05", "1 day 00:00:01", "1 day 00:00:02"] - ) - - def test_argmin_argmax(self, non_monotonic_idx): - assert non_monotonic_idx.argmin() == 1 - assert non_monotonic_idx.argmax() == 0 - - def test_sort_values(self, non_monotonic_idx): - idx = non_monotonic_idx - ordered = idx.sort_values() - assert ordered.is_monotonic_increasing - ordered = idx.sort_values(ascending=False) - assert ordered[::-1].is_monotonic_increasing - - ordered, dexer = idx.sort_values(return_indexer=True) - assert ordered.is_monotonic_increasing - tm.assert_numpy_array_equal(dexer, np.array([1, 2, 0], dtype=np.intp)) - - ordered, dexer = idx.sort_values(return_indexer=True, ascending=False) - assert ordered[::-1].is_monotonic_increasing - tm.assert_numpy_array_equal(dexer, np.array([0, 2, 1], dtype=np.intp)) - - def check_sort_values_with_freq(self, idx): - ordered = idx.sort_values() - tm.assert_index_equal(ordered, idx) - check_freq_ascending(ordered, idx, True) - - ordered = idx.sort_values(ascending=False) - expected = idx[::-1] - tm.assert_index_equal(ordered, expected) - check_freq_ascending(ordered, idx, False) - - ordered, indexer = idx.sort_values(return_indexer=True) - tm.assert_index_equal(ordered, idx) - tm.assert_numpy_array_equal(indexer, np.array([0, 1, 2], dtype=np.intp)) - check_freq_ascending(ordered, idx, True) - - ordered, indexer = idx.sort_values(return_indexer=True, ascending=False) - expected = idx[::-1] - tm.assert_index_equal(ordered, expected) - tm.assert_numpy_array_equal(indexer, np.array([2, 1, 0], dtype=np.intp)) - check_freq_ascending(ordered, idx, False) - - @pytest.mark.parametrize("freq", ["D", "H"]) - def test_sort_values_with_freq_timedeltaindex(self, freq): - # GH#10295 - idx = timedelta_range(start=f"1{freq}", periods=3, freq=freq).rename("idx") - - self.check_sort_values_with_freq(idx) - - @pytest.mark.parametrize( - "idx", - [ - DatetimeIndex( - ["2011-01-01", "2011-01-02", "2011-01-03"], freq="D", name="idx" - ), - DatetimeIndex( - ["2011-01-01 09:00", "2011-01-01 10:00", "2011-01-01 11:00"], - freq="H", - name="tzidx", - tz="Asia/Tokyo", - ), - ], - ) - def test_sort_values_with_freq_datetimeindex(self, idx): - self.check_sort_values_with_freq(idx) - - @pytest.mark.parametrize("freq", ["D", "2D", "4D"]) - def test_sort_values_with_freq_periodindex(self, freq): - # here with_freq refers to being period_range-like - idx = PeriodIndex( - ["2011-01-01", "2011-01-02", "2011-01-03"], freq=freq, name="idx" - ) - self.check_sort_values_with_freq(idx) - - @pytest.mark.parametrize( - "idx", - [ - PeriodIndex(["2011", "2012", "2013"], name="pidx", freq="A"), - Index([2011, 2012, 2013], name="idx"), # for compatibility check - ], - ) - def test_sort_values_with_freq_periodindex2(self, idx): - # here with_freq indicates this is period_range-like - self.check_sort_values_with_freq(idx) - - def check_sort_values_without_freq(self, idx, expected): - ordered = idx.sort_values(na_position="first") - 
tm.assert_index_equal(ordered, expected) - check_freq_nonmonotonic(ordered, idx) - - if not idx.isna().any(): - ordered = idx.sort_values() - tm.assert_index_equal(ordered, expected) - check_freq_nonmonotonic(ordered, idx) - - ordered = idx.sort_values(ascending=False) - tm.assert_index_equal(ordered, expected[::-1]) - check_freq_nonmonotonic(ordered, idx) - - ordered, indexer = idx.sort_values(return_indexer=True, na_position="first") - tm.assert_index_equal(ordered, expected) - - exp = np.array([0, 4, 3, 1, 2], dtype=np.intp) - tm.assert_numpy_array_equal(indexer, exp) - check_freq_nonmonotonic(ordered, idx) - - if not idx.isna().any(): - ordered, indexer = idx.sort_values(return_indexer=True) - tm.assert_index_equal(ordered, expected) - - exp = np.array([0, 4, 3, 1, 2], dtype=np.intp) - tm.assert_numpy_array_equal(indexer, exp) - check_freq_nonmonotonic(ordered, idx) - - ordered, indexer = idx.sort_values(return_indexer=True, ascending=False) - tm.assert_index_equal(ordered, expected[::-1]) - - exp = np.array([2, 1, 3, 0, 4], dtype=np.intp) - tm.assert_numpy_array_equal(indexer, exp) - check_freq_nonmonotonic(ordered, idx) - - def test_sort_values_without_freq_timedeltaindex(self): - # GH#10295 - - idx = TimedeltaIndex( - ["1 hour", "3 hour", "5 hour", "2 hour ", "1 hour"], name="idx1" - ) - expected = TimedeltaIndex( - ["1 hour", "1 hour", "2 hour", "3 hour", "5 hour"], name="idx1" - ) - self.check_sort_values_without_freq(idx, expected) - - @pytest.mark.parametrize( - "index_dates,expected_dates", - [ - ( - ["2011-01-01", "2011-01-03", "2011-01-05", "2011-01-02", "2011-01-01"], - ["2011-01-01", "2011-01-01", "2011-01-02", "2011-01-03", "2011-01-05"], - ), - ( - ["2011-01-01", "2011-01-03", "2011-01-05", "2011-01-02", "2011-01-01"], - ["2011-01-01", "2011-01-01", "2011-01-02", "2011-01-03", "2011-01-05"], - ), - ( - [NaT, "2011-01-03", "2011-01-05", "2011-01-02", NaT], - [NaT, NaT, "2011-01-02", "2011-01-03", "2011-01-05"], - ), - ], - ) - def test_sort_values_without_freq_datetimeindex( - self, index_dates, expected_dates, tz_naive_fixture - ): - tz = tz_naive_fixture - - # without freq - idx = DatetimeIndex(index_dates, tz=tz, name="idx") - expected = DatetimeIndex(expected_dates, tz=tz, name="idx") - - self.check_sort_values_without_freq(idx, expected) - - @pytest.mark.parametrize( - "idx,expected", - [ - ( - PeriodIndex( - [ - "2011-01-01", - "2011-01-03", - "2011-01-05", - "2011-01-02", - "2011-01-01", - ], - freq="D", - name="idx1", - ), - PeriodIndex( - [ - "2011-01-01", - "2011-01-01", - "2011-01-02", - "2011-01-03", - "2011-01-05", - ], - freq="D", - name="idx1", - ), - ), - ( - PeriodIndex( - [ - "2011-01-01", - "2011-01-03", - "2011-01-05", - "2011-01-02", - "2011-01-01", - ], - freq="D", - name="idx2", - ), - PeriodIndex( - [ - "2011-01-01", - "2011-01-01", - "2011-01-02", - "2011-01-03", - "2011-01-05", - ], - freq="D", - name="idx2", - ), - ), - ( - PeriodIndex( - [NaT, "2011-01-03", "2011-01-05", "2011-01-02", NaT], - freq="D", - name="idx3", - ), - PeriodIndex( - [NaT, NaT, "2011-01-02", "2011-01-03", "2011-01-05"], - freq="D", - name="idx3", - ), - ), - ( - PeriodIndex( - ["2011", "2013", "2015", "2012", "2011"], name="pidx", freq="A" - ), - PeriodIndex( - ["2011", "2011", "2012", "2013", "2015"], name="pidx", freq="A" - ), - ), - ( - # For compatibility check - Index([2011, 2013, 2015, 2012, 2011], name="idx"), - Index([2011, 2011, 2012, 2013, 2015], name="idx"), - ), - ], - ) - def test_sort_values_without_freq_periodindex(self, idx, expected): - # here without_freq 
means not generateable by period_range - self.check_sort_values_without_freq(idx, expected) - - def test_sort_values_without_freq_periodindex_nat(self): - # doesn't quite fit into check_sort_values_without_freq - idx = PeriodIndex(["2011", "2013", "NaT", "2011"], name="pidx", freq="D") - expected = PeriodIndex(["NaT", "2011", "2011", "2013"], name="pidx", freq="D") - - ordered = idx.sort_values(na_position="first") - tm.assert_index_equal(ordered, expected) - check_freq_nonmonotonic(ordered, idx) - - ordered = idx.sort_values(ascending=False) - tm.assert_index_equal(ordered, expected[::-1]) - check_freq_nonmonotonic(ordered, idx) - - -def test_order_stability_compat(): - # GH#35922. sort_values is stable both for normal and datetime-like Index - pidx = PeriodIndex(["2011", "2013", "2015", "2012", "2011"], name="pidx", freq="A") - iidx = Index([2011, 2013, 2015, 2012, 2011], name="idx") - ordered1, indexer1 = pidx.sort_values(return_indexer=True, ascending=False) - ordered2, indexer2 = iidx.sort_values(return_indexer=True, ascending=False) - tm.assert_numpy_array_equal(indexer1, indexer2) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/window/moments/__init__.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/window/moments/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/styles/murphy.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/styles/murphy.py deleted file mode 100644 index b2e8f716eb9af49c06a7436e2b41098c9497d3e5..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/styles/murphy.py +++ /dev/null @@ -1,78 +0,0 @@ -""" - pygments.styles.murphy - ~~~~~~~~~~~~~~~~~~~~~~ - - Murphy's style from CodeRay. - - :copyright: Copyright 2006-2023 by the Pygments team, see AUTHORS. - :license: BSD, see LICENSE for details. -""" - -from pygments.style import Style -from pygments.token import Keyword, Name, Comment, String, Error, \ - Number, Operator, Generic, Whitespace - - -class MurphyStyle(Style): - """ - Murphy's style from CodeRay. 
- """ - - styles = { - Whitespace: "#bbbbbb", - Comment: "#666 italic", - Comment.Preproc: "#579 noitalic", - Comment.Special: "#c00 bold", - - Keyword: "bold #289", - Keyword.Pseudo: "#08f", - Keyword.Type: "#66f", - - Operator: "#333", - Operator.Word: "bold #000", - - Name.Builtin: "#072", - Name.Function: "bold #5ed", - Name.Class: "bold #e9e", - Name.Namespace: "bold #0e84b5", - Name.Exception: "bold #F00", - Name.Variable: "#036", - Name.Variable.Instance: "#aaf", - Name.Variable.Class: "#ccf", - Name.Variable.Global: "#f84", - Name.Constant: "bold #5ed", - Name.Label: "bold #970", - Name.Entity: "#800", - Name.Attribute: "#007", - Name.Tag: "#070", - Name.Decorator: "bold #555", - - String: "bg:#e0e0ff", - String.Char: "#88F bg:", - String.Doc: "#D42 bg:", - String.Interpol: "bg:#eee", - String.Escape: "bold #666", - String.Regex: "bg:#e0e0ff #000", - String.Symbol: "#fc8 bg:", - String.Other: "#f88", - - Number: "bold #60E", - Number.Integer: "bold #66f", - Number.Float: "bold #60E", - Number.Hex: "bold #058", - Number.Oct: "bold #40E", - - Generic.Heading: "bold #000080", - Generic.Subheading: "bold #800080", - Generic.Deleted: "#A00000", - Generic.Inserted: "#00A000", - Generic.Error: "#FF0000", - Generic.Emph: "italic", - Generic.Strong: "bold", - Generic.EmphStrong: "bold italic", - Generic.Prompt: "bold #c65d09", - Generic.Output: "#888", - Generic.Traceback: "#04D", - - Error: "#F00 bg:#FAA" - } diff --git a/spaces/pyodide-demo/self-hosted/setuptools.js b/spaces/pyodide-demo/self-hosted/setuptools.js deleted file mode 100644 index 23d1705e1d12e5e5132eeec4141db28c75f35d0c..0000000000000000000000000000000000000000 --- a/spaces/pyodide-demo/self-hosted/setuptools.js +++ /dev/null @@ -1 +0,0 @@ -var Module=typeof globalThis.__pyodide_module!=="undefined"?globalThis.__pyodide_module:{};if(!Module.expectedDataFileDownloads){Module.expectedDataFileDownloads=0}Module.expectedDataFileDownloads++;(function(){var loadPackage=function(metadata){var PACKAGE_PATH="";if(typeof window==="object"){PACKAGE_PATH=window["encodeURIComponent"](window.location.pathname.toString().substring(0,window.location.pathname.toString().lastIndexOf("/"))+"/")}else if(typeof process==="undefined"&&typeof location!=="undefined"){PACKAGE_PATH=encodeURIComponent(location.pathname.toString().substring(0,location.pathname.toString().lastIndexOf("/"))+"/")}var PACKAGE_NAME="setuptools.data";var REMOTE_PACKAGE_BASE="setuptools.data";if(typeof Module["locateFilePackage"]==="function"&&!Module["locateFile"]){Module["locateFile"]=Module["locateFilePackage"];err("warning: you defined Module.locateFilePackage, that has been renamed to Module.locateFile (using your locateFilePackage for now)")}var REMOTE_PACKAGE_NAME=Module["locateFile"]?Module["locateFile"](REMOTE_PACKAGE_BASE,""):REMOTE_PACKAGE_BASE;var REMOTE_PACKAGE_SIZE=metadata["remote_package_size"];var PACKAGE_UUID=metadata["package_uuid"];function fetchRemotePackage(packageName,packageSize,callback,errback){if(typeof process==="object"){require("fs").readFile(packageName,(function(err,contents){if(err){errback(err)}else{callback(contents.buffer)}}));return}var xhr=new XMLHttpRequest;xhr.open("GET",packageName,true);xhr.responseType="arraybuffer";xhr.onprogress=function(event){var url=packageName;var 
size=packageSize;if(event.total)size=event.total;if(event.loaded){if(!xhr.addedTotal){xhr.addedTotal=true;if(!Module.dataFileDownloads)Module.dataFileDownloads={};Module.dataFileDownloads[url]={loaded:event.loaded,total:size}}else{Module.dataFileDownloads[url].loaded=event.loaded}var total=0;var loaded=0;var num=0;for(var download in Module.dataFileDownloads){var data=Module.dataFileDownloads[download];total+=data.total;loaded+=data.loaded;num++}total=Math.ceil(total*Module.expectedDataFileDownloads/num);if(Module["setStatus"])Module["setStatus"]("Downloading data... ("+loaded+"/"+total+")")}else if(!Module.dataFileDownloads){if(Module["setStatus"])Module["setStatus"]("Downloading data...")}};xhr.onerror=function(event){throw new Error("NetworkError for: "+packageName)};xhr.onload=function(event){if(xhr.status==200||xhr.status==304||xhr.status==206||xhr.status==0&&xhr.response){var packageData=xhr.response;callback(packageData)}else{throw new Error(xhr.statusText+" : "+xhr.responseURL)}};xhr.send(null)}function handleError(error){console.error("package error:",error)}var fetchedCallback=null;var fetched=Module["getPreloadedPackage"]?Module["getPreloadedPackage"](REMOTE_PACKAGE_NAME,REMOTE_PACKAGE_SIZE):null;if(!fetched)fetchRemotePackage(REMOTE_PACKAGE_NAME,REMOTE_PACKAGE_SIZE,(function(data){if(fetchedCallback){fetchedCallback(data);fetchedCallback=null}else{fetched=data}}),handleError);function runWithFS(){function assert(check,msg){if(!check)throw msg+(new Error).stack}Module["FS_createPath"]("/","lib",true,true);Module["FS_createPath"]("/lib","python3.9",true,true);Module["FS_createPath"]("/lib/python3.9","site-packages",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages","_distutils_hack",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages","pkg_resources",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/pkg_resources","_vendor",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/pkg_resources/_vendor","packaging",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/pkg_resources","extern",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages","setuptools",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/setuptools","_distutils",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/setuptools/_distutils","command",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/setuptools","_vendor",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/setuptools/_vendor","more_itertools",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/setuptools/_vendor","packaging",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/setuptools","command",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/setuptools","extern",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages","setuptools-60.3.1-py3.9.egg-info",true,true);function processPackageData(arrayBuffer){assert(arrayBuffer,"Loading data file failed.");assert(arrayBuffer instanceof ArrayBuffer,"bad input to processPackageData");var byteArray=new Uint8Array(arrayBuffer);var curr;var 
compressedData={data:null,cachedOffset:1716587,cachedIndexes:[-1,-1],cachedChunks:[null,null],offsets:[0,1327,2599,3968,5205,6625,7963,9038,10371,11575,12767,13850,15098,16399,17722,18792,20070,21251,22472,23678,24900,26153,27095,28479,29859,31174,32325,33363,34282,35594,36833,38055,39181,40326,41543,42783,43875,45090,46394,47673,48836,50049,51401,52720,53918,54877,56167,57346,58534,59702,60709,61954,63194,64549,65862,67238,68783,70187,71402,72646,73947,75270,76652,77892,79270,80411,81606,82654,83756,85322,86690,88151,89611,90771,92039,93196,94189,95170,96104,97194,98216,99339,100443,101645,102516,103670,104724,105719,106945,108143,109239,110443,111665,112992,114118,115255,115958,116863,117547,118647,119874,121115,122264,123434,124572,125815,126612,127545,128143,128879,130022,131043,132261,133386,134300,135395,136488,137832,138909,140116,141114,142521,143642,144515,145813,147057,148068,149181,150398,151612,152678,153644,154598,155778,156736,157887,159021,160088,161248,162545,163485,164602,165529,166718,167961,169183,170381,171623,172739,173752,175054,176225,177409,178542,179783,181060,182351,183386,184776,186043,187393,188739,190147,191331,192602,193851,195156,196399,197763,198779,199765,201032,201874,203137,204133,205471,206469,207298,208389,209323,210683,211798,213100,213916,215133,216514,217750,218940,220354,221720,222667,223904,225039,226218,227350,228687,230010,231221,232214,233111,234266,235520,236516,237311,238510,239541,240523,241660,242891,243940,244984,246177,247401,248740,250065,251258,252338,253454,254545,255475,256845,258001,259264,260114,261197,262410,263326,264345,265766,266979,268225,269520,270785,271947,273377,274551,275890,277025,278249,279626,280905,281978,283047,284365,285638,286922,288128,289317,290494,291649,292772,294041,295101,296161,297130,298130,299360,300564,301954,303049,304030,305242,306446,307507,308823,310236,311500,312670,313812,315005,316078,317237,318278,319534,320621,321820,322740,324041,325277,326419,327478,328692,329860,331098,332495,333896,335274,336490,337624,338839,340054,341369,342459,343820,344878,345698,346216,347273,348388,349547,350584,351557,352595,353537,354470,355640,356420,357199,358088,358934,359782,360840,361513,362759,363997,365471,366556,367836,369276,370568,371823,373104,374419,375624,376957,378256,379313,380513,381549,382704,383894,384935,386166,387499,388900,390128,391311,392595,393553,394509,395497,396826,398176,399445,400628,401681,402885,404213,406022,407883,409673,411393,413127,415008,416847,418377,420202,422076,423955,425796,427592,429235,430885,432515,434386,436150,437959,439820,441591,443418,445173,447071,448679,450091,450582,451741,452913,453364,454394,455673,457458,459236,461007,462827,464665,466384,468171,470023,471718,473428,475234,476893,478612,480341,482107,483614,485409,487192,489042,490839,492580,494317,496093,497903,499741,501227,502380,503395,504335,505939,507412,508166,508622,509446,510956,512305,513825,515083,516873,518480,520184,521849,523657,524941,526147,527789,529598,531349,532880,534553,536267,538103,539794,541414,543194,544863,546138,547859,549505,551275,553056,554853,556494,558315,560058,561572,563065,564796,566436,568180,569857,571475,573321,574779,576407,577936,579603,581335,583163,584871,586467,587647,588608,589712,590665,591682,592656,593593,594650,595812,596838,597882,598907,600183,602229,603616,604957,606257,607211,607740,609659,611361,612327,614144,615976,617844,619451,621286,622995,624863,626496,628314,630102,631979,633834,635618,637245,638960,640649,642402,644204,645968,647772,649564,651403,653248,655
014,656855,658469,659294,660314,661635,662395,662976,663923,665741,667581,669456,671070,672902,674614,676478,678126,679932,681720,683587,685463,687226,688882,690616,692281,694032,695838,697611,699468,701204,703062,704931,706703,708538,710243,711068,712096,713406,714173,714754,715740,717458,719215,720994,722827,724646,726442,728163,729972,731696,733500,735203,737009,738741,740433,742172,743862,745477,747306,749103,750838,752530,754348,756034,757802,759620,761398,762566,763935,764638,766280,767741,769093,769494,770056,771115,772781,773743,775246,776551,778304,779956,781575,783260,784875,786178,787620,789177,790960,792665,794322,795917,797693,799484,801155,802864,804643,806330,807621,809395,810928,812715,814536,816367,817994,819843,821572,823067,824591,826371,827970,829684,831233,832867,834790,836285,838010,839549,841160,843002,844782,846417,848038,848888,849849,850953,851903,852920,853894,854831,855888,857050,858076,859120,860145,861421,863467,864854,866196,867495,868449,868978,870904,872624,873585,875403,877243,879118,880732,882564,884276,886140,887788,889594,891382,893249,895125,896888,898544,900278,901943,903694,905500,907273,909130,910866,912724,914593,916365,918200,919905,920730,921758,923068,923835,924416,925703,927087,928372,929799,931152,932113,933182,934220,935232,936306,937568,938758,940093,941358,942392,943792,945054,946146,947124,948210,949429,950726,951763,953164,954497,955692,957038,958143,959243,960361,961517,962592,963528,964774,966160,967452,968683,969678,970549,971784,973003,974003,975248,976539,978004,979373,980796,982196,983464,984671,986015,987092,988209,989286,990556,991878,992731,994009,995320,996645,998015,999355,1000715,1001817,1003123,1004424,1005713,1006918,1008293,1009618,1011019,1012459,1013839,1015039,1016467,1017981,1018919,1020291,1021626,1022948,1024076,1025270,1026321,1027571,1028885,1030153,1031166,1032418,1033427,1034532,1035475,1036617,1037764,1038932,1040076,1041076,1041981,1042960,1043682,1044990,1046185,1047651,1048932,1049838,1051122,1051991,1053448,1054699,1055815,1056896,1058147,1059339,1060542,1061835,1063316,1064453,1065955,1067280,1068463,1069795,1070815,1071401,1072794,1074109,1075531,1076708,1078063,1079195,1080400,1081748,1082925,1084307,1085468,1086462,1087468,1088377,1089248,1090336,1091605,1092816,1094106,1095315,1096534,1097859,1099137,1100123,1101076,1102072,1103066,1104008,1105259,1106521,1107822,1109206,1110653,1111980,1113256,1114415,1115768,1116687,1117950,1118913,1120333,1121733,1122935,1124364,1125646,1126837,1128185,1129372,1130353,1131848,1133135,1134232,1135319,1136574,1137769,1138971,1140282,1141622,1142932,1144277,1145679,1147135,1148499,1149959,1151294,1152526,1153923,1154977,1156264,1157321,1158866,1160426,1161753,1162923,1164057,1165299,1166342,1167423,1168600,1169611,1170690,1171778,1172875,1173955,1175080,1176220,1177367,1178406,1179261,1180397,1181528,1182533,1183479,1184527,1185448,1186750,1187792,1188777,1190006,1191049,1191895,1192950,1193965,1195116,1196235,1197456,1198470,1199671,1200814,1201853,1202937,1204094,1205315,1206431,1207581,1208684,1209919,1211172,1212240,1213492,1214600,1215653,1216518,1217907,1219115,1220060,1221139,1222124,1223320,1224449,1225524,1226616,1227736,1229031,1230239,1231478,1232673,1234025,1235253,1236484,1237765,1238992,1240098,1241366,1242467,1243642,1244879,1246071,1247058,1248134,1249331,1250430,1251530,1252744,1253851,1254911,1256223,1257313,1258412,1259610,1260516,1261610,1262884,1263988,1265194,1266576,1267735,1268719,1269959,1270949,1271829,1272871,1274147,1275202,1276262,1277355,1278
539,1279779,1281032,1282115,1283277,1284336,1285464,1286761,1287899,1289128,1290242,1291210,1292407,1293632,1294777,1295982,1297214,1298292,1299185,1300462,1301727,1303069,1304360,1305494,1306607,1307807,1309163,1310329,1311464,1312434,1313572,1314429,1315486,1316643,1318204,1319559,1321023,1322491,1323633,1324919,1326085,1327049,1328048,1328985,1330098,1331104,1332217,1333340,1334528,1335440,1336591,1337642,1338641,1339870,1341050,1342131,1343330,1344546,1345856,1346991,1348126,1348817,1349741,1350419,1351527,1352749,1353973,1355111,1356284,1357406,1358653,1359461,1360397,1360984,1361740,1362893,1363914,1365120,1366264,1367155,1368250,1369328,1370669,1371745,1372964,1373962,1375364,1376471,1377374,1378679,1379925,1380912,1382047,1383289,1384506,1385548,1386512,1387457,1388617,1389547,1390727,1391849,1392901,1394043,1395353,1396307,1397437,1398349,1399528,1400747,1401956,1403175,1404394,1405506,1406546,1407823,1409002,1410181,1411313,1412577,1413848,1415138,1416211,1417591,1418863,1420215,1421550,1422946,1424119,1425390,1426653,1427963,1429212,1430574,1431585,1432589,1433849,1434701,1435963,1436951,1438291,1439271,1440132,1441225,1442166,1443538,1444640,1445878,1447161,1448303,1449466,1450534,1451953,1453465,1454706,1455851,1457005,1458367,1459711,1461020,1462151,1463377,1464610,1465891,1466825,1467951,1469235,1470464,1471763,1473101,1474371,1475650,1476931,1478072,1479464,1480686,1481626,1482559,1483498,1484862,1485941,1487221,1488263,1489531,1490860,1492250,1493354,1494432,1495749,1496970,1498286,1499457,1500807,1502e3,1503220,1504594,1505716,1506929,1508316,1509560,1510740,1512028,1513397,1514579,1515967,1517219,1518538,1519893,1521243,1522478,1523716,1524824,1526118,1527369,1528265,1529702,1531145,1532208,1533529,1534852,1536351,1537259,1538246,1539489,1540702,1541938,1543175,1544384,1545534,1546706,1547796,1549005,1549922,1551111,1551893,1553107,1553985,1554840,1556077,1557302,1558170,1559391,1560554,1562017,1563253,1564537,1565679,1566833,1567976,1568880,1570094,1571331,1572466,1573722,1574358,1575746,1576785,1577539,1578745,1579956,1581204,1582460,1583682,1584874,1585948,1587198,1588416,1589763,1591170,1592419,1593592,1594565,1595855,1597117,1598158,1599128,1600392,1601413,1602723,1603789,1604884,1606037,1607260,1608422,1609575,1610737,1612037,1613371,1614753,1615925,1617187,1618305,1619400,1620521,1621766,1622925,1624256,1625291,1626414,1627466,1628610,1629796,1631121,1632143,1633054,1634346,1635387,1636545,1637810,1639116,1640256,1641574,1642757,1643950,1645220,1646445,1647587,1648909,1650226,1651663,1653004,1654290,1655598,1656860,1658189,1659297,1660464,1661630,1662878,1664185,1665274,1666550,1667827,1668976,1670090,1671194,1671997,1673255,1674437,1675800,1676923,1678280,1679445,1680775,1682021,1683225,1684386,1685581,1686809,1687851,1688986,1690210,1691547,1692817,1694095,1695430,1696409,1697542,1698767,1699859,1701067,1702362,1703582,1704812,1706067,1707331,1708579,1709728,1711210,1711996,1712716,1713266,1713721,1714412,1715168,1715973],sizes:[1327,1272,1369,1237,1420,1338,1075,1333,1204,1192,1083,1248,1301,1323,1070,1278,1181,1221,1206,1222,1253,942,1384,1380,1315,1151,1038,919,1312,1239,1222,1126,1145,1217,1240,1092,1215,1304,1279,1163,1213,1352,1319,1198,959,1290,1179,1188,1168,1007,1245,1240,1355,1313,1376,1545,1404,1215,1244,1301,1323,1382,1240,1378,1141,1195,1048,1102,1566,1368,1461,1460,1160,1268,1157,993,981,934,1090,1022,1123,1104,1202,871,1154,1054,995,1226,1198,1096,1204,1222,1327,1126,1137,703,905,684,1100,1227,1241,1149,1170,1138,1243,797,933,598,736,1143,1021,1218
,1125,914,1095,1093,1344,1077,1207,998,1407,1121,873,1298,1244,1011,1113,1217,1214,1066,966,954,1180,958,1151,1134,1067,1160,1297,940,1117,927,1189,1243,1222,1198,1242,1116,1013,1302,1171,1184,1133,1241,1277,1291,1035,1390,1267,1350,1346,1408,1184,1271,1249,1305,1243,1364,1016,986,1267,842,1263,996,1338,998,829,1091,934,1360,1115,1302,816,1217,1381,1236,1190,1414,1366,947,1237,1135,1179,1132,1337,1323,1211,993,897,1155,1254,996,795,1199,1031,982,1137,1231,1049,1044,1193,1224,1339,1325,1193,1080,1116,1091,930,1370,1156,1263,850,1083,1213,916,1019,1421,1213,1246,1295,1265,1162,1430,1174,1339,1135,1224,1377,1279,1073,1069,1318,1273,1284,1206,1189,1177,1155,1123,1269,1060,1060,969,1e3,1230,1204,1390,1095,981,1212,1204,1061,1316,1413,1264,1170,1142,1193,1073,1159,1041,1256,1087,1199,920,1301,1236,1142,1059,1214,1168,1238,1397,1401,1378,1216,1134,1215,1215,1315,1090,1361,1058,820,518,1057,1115,1159,1037,973,1038,942,933,1170,780,779,889,846,848,1058,673,1246,1238,1474,1085,1280,1440,1292,1255,1281,1315,1205,1333,1299,1057,1200,1036,1155,1190,1041,1231,1333,1401,1228,1183,1284,958,956,988,1329,1350,1269,1183,1053,1204,1328,1809,1861,1790,1720,1734,1881,1839,1530,1825,1874,1879,1841,1796,1643,1650,1630,1871,1764,1809,1861,1771,1827,1755,1898,1608,1412,491,1159,1172,451,1030,1279,1785,1778,1771,1820,1838,1719,1787,1852,1695,1710,1806,1659,1719,1729,1766,1507,1795,1783,1850,1797,1741,1737,1776,1810,1838,1486,1153,1015,940,1604,1473,754,456,824,1510,1349,1520,1258,1790,1607,1704,1665,1808,1284,1206,1642,1809,1751,1531,1673,1714,1836,1691,1620,1780,1669,1275,1721,1646,1770,1781,1797,1641,1821,1743,1514,1493,1731,1640,1744,1677,1618,1846,1458,1628,1529,1667,1732,1828,1708,1596,1180,961,1104,953,1017,974,937,1057,1162,1026,1044,1025,1276,2046,1387,1341,1300,954,529,1919,1702,966,1817,1832,1868,1607,1835,1709,1868,1633,1818,1788,1877,1855,1784,1627,1715,1689,1753,1802,1764,1804,1792,1839,1845,1766,1841,1614,825,1020,1321,760,581,947,1818,1840,1875,1614,1832,1712,1864,1648,1806,1788,1867,1876,1763,1656,1734,1665,1751,1806,1773,1857,1736,1858,1869,1772,1835,1705,825,1028,1310,767,581,986,1718,1757,1779,1833,1819,1796,1721,1809,1724,1804,1703,1806,1732,1692,1739,1690,1615,1829,1797,1735,1692,1818,1686,1768,1818,1778,1168,1369,703,1642,1461,1352,401,562,1059,1666,962,1503,1305,1753,1652,1619,1685,1615,1303,1442,1557,1783,1705,1657,1595,1776,1791,1671,1709,1779,1687,1291,1774,1533,1787,1821,1831,1627,1849,1729,1495,1524,1780,1599,1714,1549,1634,1923,1495,1725,1539,1611,1842,1780,1635,1621,850,961,1104,950,1017,974,937,1057,1162,1026,1044,1025,1276,2046,1387,1342,1299,954,529,1926,1720,961,1818,1840,1875,1614,1832,1712,1864,1648,1806,1788,1867,1876,1763,1656,1734,1665,1751,1806,1773,1857,1736,1858,1869,1772,1835,1705,825,1028,1310,767,581,1287,1384,1285,1427,1353,961,1069,1038,1012,1074,1262,1190,1335,1265,1034,1400,1262,1092,978,1086,1219,1297,1037,1401,1333,1195,1346,1105,1100,1118,1156,1075,936,1246,1386,1292,1231,995,871,1235,1219,1e3,1245,1291,1465,1369,1423,1400,1268,1207,1344,1077,1117,1077,1270,1322,853,1278,1311,1325,1370,1340,1360,1102,1306,1301,1289,1205,1375,1325,1401,1440,1380,1200,1428,1514,938,1372,1335,1322,1128,1194,1051,1250,1314,1268,1013,1252,1009,1105,943,1142,1147,1168,1144,1e3,905,979,722,1308,1195,1466,1281,906,1284,869,1457,1251,1116,1081,1251,1192,1203,1293,1481,1137,1502,1325,1183,1332,1020,586,1393,1315,1422,1177,1355,1132,1205,1348,1177,1382,1161,994,1006,909,871,1088,1269,1211,1290,1209,1219,1325,1278,986,953,996,994,942,1251,1262,1301,1384,1447,1327,1276,1159,1353,919,1263,963,142
0,1400,1202,1429,1282,1191,1348,1187,981,1495,1287,1097,1087,1255,1195,1202,1311,1340,1310,1345,1402,1456,1364,1460,1335,1232,1397,1054,1287,1057,1545,1560,1327,1170,1134,1242,1043,1081,1177,1011,1079,1088,1097,1080,1125,1140,1147,1039,855,1136,1131,1005,946,1048,921,1302,1042,985,1229,1043,846,1055,1015,1151,1119,1221,1014,1201,1143,1039,1084,1157,1221,1116,1150,1103,1235,1253,1068,1252,1108,1053,865,1389,1208,945,1079,985,1196,1129,1075,1092,1120,1295,1208,1239,1195,1352,1228,1231,1281,1227,1106,1268,1101,1175,1237,1192,987,1076,1197,1099,1100,1214,1107,1060,1312,1090,1099,1198,906,1094,1274,1104,1206,1382,1159,984,1240,990,880,1042,1276,1055,1060,1093,1184,1240,1253,1083,1162,1059,1128,1297,1138,1229,1114,968,1197,1225,1145,1205,1232,1078,893,1277,1265,1342,1291,1134,1113,1200,1356,1166,1135,970,1138,857,1057,1157,1561,1355,1464,1468,1142,1286,1166,964,999,937,1113,1006,1113,1123,1188,912,1151,1051,999,1229,1180,1081,1199,1216,1310,1135,1135,691,924,678,1108,1222,1224,1138,1173,1122,1247,808,936,587,756,1153,1021,1206,1144,891,1095,1078,1341,1076,1219,998,1402,1107,903,1305,1246,987,1135,1242,1217,1042,964,945,1160,930,1180,1122,1052,1142,1310,954,1130,912,1179,1219,1209,1219,1219,1112,1040,1277,1179,1179,1132,1264,1271,1290,1073,1380,1272,1352,1335,1396,1173,1271,1263,1310,1249,1362,1011,1004,1260,852,1262,988,1340,980,861,1093,941,1372,1102,1238,1283,1142,1163,1068,1419,1512,1241,1145,1154,1362,1344,1309,1131,1226,1233,1281,934,1126,1284,1229,1299,1338,1270,1279,1281,1141,1392,1222,940,933,939,1364,1079,1280,1042,1268,1329,1390,1104,1078,1317,1221,1316,1171,1350,1193,1220,1374,1122,1213,1387,1244,1180,1288,1369,1182,1388,1252,1319,1355,1350,1235,1238,1108,1294,1251,896,1437,1443,1063,1321,1323,1499,908,987,1243,1213,1236,1237,1209,1150,1172,1090,1209,917,1189,782,1214,878,855,1237,1225,868,1221,1163,1463,1236,1284,1142,1154,1143,904,1214,1237,1135,1256,636,1388,1039,754,1206,1211,1248,1256,1222,1192,1074,1250,1218,1347,1407,1249,1173,973,1290,1262,1041,970,1264,1021,1310,1066,1095,1153,1223,1162,1153,1162,1300,1334,1382,1172,1262,1118,1095,1121,1245,1159,1331,1035,1123,1052,1144,1186,1325,1022,911,1292,1041,1158,1265,1306,1140,1318,1183,1193,1270,1225,1142,1322,1317,1437,1341,1286,1308,1262,1329,1108,1167,1166,1248,1307,1089,1276,1277,1149,1114,1104,803,1258,1182,1363,1123,1357,1165,1330,1246,1204,1161,1195,1228,1042,1135,1224,1337,1270,1278,1335,979,1133,1225,1092,1208,1295,1220,1230,1255,1264,1248,1149,1482,786,720,550,455,691,756,805,614],successes:[1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1
,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1]};compressedData["data"]=byteArray;assert(typeof Module.LZ4==="object","LZ4 not present - was your app build with -s LZ4=1 
?");Module.LZ4.loadPackage({metadata:metadata,compressedData:compressedData},true);Module["removeRunDependency"]("datafile_setuptools.data")}Module["addRunDependency"]("datafile_setuptools.data");if(!Module.preloadResults)Module.preloadResults={};Module.preloadResults[PACKAGE_NAME]={fromCache:false};if(fetched){processPackageData(fetched);fetched=null}else{fetchedCallback=processPackageData}}if(Module["calledRun"]){runWithFS()}else{if(!Module["preRun"])Module["preRun"]=[];Module["preRun"].push(runWithFS)}};loadPackage({files:[{filename:"/lib/python3.9/site-packages/_distutils_hack/__init__.py",start:0,end:5271,audio:0},{filename:"/lib/python3.9/site-packages/_distutils_hack/override.py",start:5271,end:5315,audio:0},{filename:"/lib/python3.9/site-packages/pkg_resources/__init__.py",start:5315,end:113888,audio:0},{filename:"/lib/python3.9/site-packages/pkg_resources/_vendor/__init__.py",start:113888,end:113888,audio:0},{filename:"/lib/python3.9/site-packages/pkg_resources/_vendor/appdirs.py",start:113888,end:138589,audio:0},{filename:"/lib/python3.9/site-packages/pkg_resources/_vendor/pyparsing.py",start:138589,end:370644,audio:0},{filename:"/lib/python3.9/site-packages/pkg_resources/_vendor/packaging/__about__.py",start:370644,end:371305,audio:0},{filename:"/lib/python3.9/site-packages/pkg_resources/_vendor/packaging/__init__.py",start:371305,end:371802,audio:0},{filename:"/lib/python3.9/site-packages/pkg_resources/_vendor/packaging/_manylinux.py",start:371802,end:383290,audio:0},{filename:"/lib/python3.9/site-packages/pkg_resources/_vendor/packaging/_musllinux.py",start:383290,end:387668,audio:0},{filename:"/lib/python3.9/site-packages/pkg_resources/_vendor/packaging/_structures.py",start:387668,end:389297,audio:0},{filename:"/lib/python3.9/site-packages/pkg_resources/_vendor/packaging/markers.py",start:389297,end:397793,audio:0},{filename:"/lib/python3.9/site-packages/pkg_resources/_vendor/packaging/requirements.py",start:397793,end:402499,audio:0},{filename:"/lib/python3.9/site-packages/pkg_resources/_vendor/packaging/specifiers.py",start:402499,end:433463,audio:0},{filename:"/lib/python3.9/site-packages/pkg_resources/_vendor/packaging/tags.py",start:433463,end:449173,audio:0},{filename:"/lib/python3.9/site-packages/pkg_resources/_vendor/packaging/utils.py",start:449173,end:453373,audio:0},{filename:"/lib/python3.9/site-packages/pkg_resources/_vendor/packaging/version.py",start:453373,end:468038,audio:0},{filename:"/lib/python3.9/site-packages/pkg_resources/extern/__init__.py",start:468038,end:470400,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/__init__.py",start:470400,end:477894,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/_deprecation_warning.py",start:477894,end:478112,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/_imp.py",start:478112,end:480504,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/archive_util.py",start:480504,end:487581,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/build_meta.py",start:487581,end:498117,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/config.py",start:498117,end:521270,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/dep_util.py",start:521270,end:522219,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/depends.py",start:522219,end:527718,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/dist.py",start:527718,end:570872,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/errors.py",start:570872,end:572427,audio:0},{filename:"/lib/python3
.9/site-packages/setuptools/extension.py",start:572427,end:574111,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/glob.py",start:574111,end:578984,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/installer.py",start:578984,end:582808,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/launch.py",start:582808,end:583620,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/logging.py",start:583620,end:584483,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/monkey.py",start:584483,end:589700,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/msvc.py",start:589700,end:640261,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/namespaces.py",start:640261,end:643354,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/package_index.py",start:643354,end:683446,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/py34compat.py",start:683446,end:683691,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/sandbox.py",start:683691,end:698039,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/unicode_utils.py",start:698039,end:698980,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/version.py",start:698980,end:699124,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/wheel.py",start:699124,end:707412,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/windows_support.py",start:707412,end:708126,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/script (dev).tmpl",start:708126,end:708344,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/script.tmpl",start:708344,end:708482,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/cli-32.exe",start:708482,end:774018,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/cli-64.exe",start:774018,end:848770,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/cli-arm64.exe",start:848770,end:985986,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/cli.exe",start:985986,end:1051522,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/gui-32.exe",start:1051522,end:1117058,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/gui-64.exe",start:1117058,end:1192322,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/gui-arm64.exe",start:1192322,end:1330050,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/gui.exe",start:1330050,end:1395586,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/_distutils/__init__.py",start:1395586,end:1396122,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/_distutils/_collections.py",start:1396122,end:1397452,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/_distutils/_msvccompiler.py",start:1397452,end:1418270,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/_distutils/archive_util.py",start:1418270,end:1426842,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/_distutils/bcppcompiler.py",start:1426842,end:1441736,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/_distutils/ccompiler.py",start:1441736,end:1489380,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/_distutils/cmd.py",start:1489380,end:1507459,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/_distutils/config.py",start:1507459,end:1512286,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/_distutils/core.py",start:1512286,end:1521568,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/_distutils/cygwinccompiler.py",start:1521568,end:1536115,audio:0},{filename:"/lib/python3.9/s
ite-packages/setuptools/_distutils/debug.py",start:1536115,end:1536254,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/_distutils/dep_util.py",start:1536254,end:1539745,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/_distutils/dir_util.py",start:1539745,end:1547523,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/_distutils/dist.py",start:1547523,end:1597944,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/_distutils/errors.py",start:1597944,end:1601521,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/_distutils/extension.py",start:1601521,end:1612036,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/_distutils/fancy_getopt.py",start:1612036,end:1629820,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/_distutils/file_util.py",start:1629820,end:1637968,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/_distutils/filelist.py",start:1637968,end:1651375,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/_distutils/log.py",start:1651375,end:1653348,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/_distutils/msvc9compiler.py",start:1653348,end:1683831,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/_distutils/msvccompiler.py",start:1683831,end:1707371,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/_distutils/py35compat.py",start:1707371,end:1707826,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/_distutils/py38compat.py",start:1707826,end:1708038,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/_distutils/spawn.py",start:1708038,end:1711536,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/_distutils/sysconfig.py",start:1711536,end:1732639,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/_distutils/text_file.py",start:1732639,end:1745122,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/_distutils/unixccompiler.py",start:1745122,end:1759660,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/_distutils/util.py",start:1759660,end:1780315,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/_distutils/version.py",start:1780315,end:1793330,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/_distutils/versionpredicate.py",start:1793330,end:1798607,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/_distutils/command/__init__.py",start:1798607,end:1799406,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/_distutils/command/bdist.py",start:1799406,end:1804968,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/_distutils/command/bdist_dumb.py",start:1804968,end:1809881,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/_distutils/command/bdist_msi.py",start:1809881,end:1845460,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/_distutils/command/bdist_rpm.py",start:1845460,end:1866997,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/_distutils/command/bdist_wininst.py",start:1866997,end:1883027,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/_distutils/command/build.py",start:1883027,end:1888800,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/_distutils/command/build_clib.py",start:1888800,end:1896822,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/_distutils/command/build_ext.py",start:1896822,end:1928434,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/_distutils/command/build_py.py",start:1928434,end:1944929,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/_distutils/command/buil
d_scripts.py",start:1944929,end:1950892,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/_distutils/command/check.py",start:1950892,end:1956529,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/_distutils/command/clean.py",start:1956529,end:1959305,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/_distutils/command/config.py",start:1959305,end:1972422,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/_distutils/command/install.py",start:1972422,end:2002496,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/_distutils/command/install_data.py",start:2002496,end:2005318,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/_distutils/command/install_egg_info.py",start:2005318,end:2008071,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/_distutils/command/install_headers.py",start:2008071,end:2009369,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/_distutils/command/install_lib.py",start:2009369,end:2017766,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/_distutils/command/install_scripts.py",start:2017766,end:2019783,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/_distutils/command/py37compat.py",start:2019783,end:2020454,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/_distutils/command/register.py",start:2020454,end:2032166,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/_distutils/command/sdist.py",start:2032166,end:2051171,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/_distutils/command/upload.py",start:2051171,end:2058768,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/_vendor/__init__.py",start:2058768,end:2058768,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/_vendor/ordered_set.py",start:2058768,end:2073898,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/_vendor/pyparsing.py",start:2073898,end:2305953,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/_vendor/more_itertools/__init__.py",start:2305953,end:2306035,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/_vendor/more_itertools/more.py",start:2306035,end:2424003,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/_vendor/more_itertools/recipes.py",start:2424003,end:2440259,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/_vendor/packaging/__about__.py",start:2440259,end:2440920,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/_vendor/packaging/__init__.py",start:2440920,end:2441417,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/_vendor/packaging/_manylinux.py",start:2441417,end:2452905,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/_vendor/packaging/_musllinux.py",start:2452905,end:2457283,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/_vendor/packaging/_structures.py",start:2457283,end:2458912,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/_vendor/packaging/markers.py",start:2458912,end:2467405,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/_vendor/packaging/requirements.py",start:2467405,end:2472105,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/_vendor/packaging/specifiers.py",start:2472105,end:2503069,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/_vendor/packaging/tags.py",start:2503069,end:2518779,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/_vendor/packaging/utils.py",start:2518779,end:2522979,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/_vendor/packaging/version.py",start:252297
9,end:2537644,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/command/__init__.py",start:2537644,end:2537861,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/command/alias.py",start:2537861,end:2540242,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/command/bdist_egg.py",start:2540242,end:2556846,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/command/bdist_rpm.py",start:2556846,end:2558028,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/command/build_clib.py",start:2558028,end:2562443,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/command/build_ext.py",start:2562443,end:2575655,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/command/build_py.py",start:2575655,end:2584406,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/command/develop.py",start:2584406,end:2591418,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/command/dist_info.py",start:2591418,end:2592378,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/command/easy_install.py",start:2592378,end:2678168,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/command/egg_info.py",start:2678168,end:2704294,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/command/install.py",start:2704294,end:2709200,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/command/install_egg_info.py",start:2709200,end:2711403,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/command/install_lib.py",start:2711403,end:2715278,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/command/install_scripts.py",start:2715278,end:2717871,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/command/py36compat.py",start:2717871,end:2722817,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/command/register.py",start:2722817,end:2723285,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/command/rotate.py",start:2723285,end:2725413,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/command/saveopts.py",start:2725413,end:2726071,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/command/sdist.py",start:2726071,end:2732484,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/command/setopt.py",start:2732484,end:2737570,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/command/test.py",start:2737570,end:2745658,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/command/upload.py",start:2745658,end:2746120,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/command/upload_docs.py",start:2746120,end:2753338,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/command/launcher 
manifest.xml",start:2753338,end:2753966,audio:0},{filename:"/lib/python3.9/site-packages/setuptools/extern/__init__.py",start:2753966,end:2756373,audio:0},{filename:"/lib/python3.9/site-packages/setuptools-60.3.1-py3.9.egg-info/PKG-INFO",start:2756373,end:2760055,audio:0},{filename:"/lib/python3.9/site-packages/setuptools-60.3.1-py3.9.egg-info/SOURCES.txt",start:2760055,end:2772668,audio:0},{filename:"/lib/python3.9/site-packages/setuptools-60.3.1-py3.9.egg-info/dependency_links.txt",start:2772668,end:2772669,audio:0},{filename:"/lib/python3.9/site-packages/setuptools-60.3.1-py3.9.egg-info/entry_points.txt",start:2772669,end:2775305,audio:0},{filename:"/lib/python3.9/site-packages/setuptools-60.3.1-py3.9.egg-info/requires.txt",start:2775305,end:2775793,audio:0},{filename:"/lib/python3.9/site-packages/setuptools-60.3.1-py3.9.egg-info/top_level.txt",start:2775793,end:2775834,audio:0}],remote_package_size:1720683,package_uuid:"9a714891-8090-4905-88e1-8396ffef51b5"})})(); \ No newline at end of file diff --git a/spaces/qingxu98/academic-chatgpt-beta/toolbox.py b/spaces/qingxu98/academic-chatgpt-beta/toolbox.py deleted file mode 100644 index 038d7be858f3b7fbc6ff62f5031dcacdebe4d70c..0000000000000000000000000000000000000000 --- a/spaces/qingxu98/academic-chatgpt-beta/toolbox.py +++ /dev/null @@ -1,507 +0,0 @@ -import markdown -import importlib -import traceback -import inspect -import re -from latex2mathml.converter import convert as tex2mathml -from functools import wraps, lru_cache -############################### 插件输入输出接驳区 ####################################### -class ChatBotWithCookies(list): - def __init__(self, cookie): - self._cookies = cookie - - def write_list(self, list): - for t in list: - self.append(t) - - def get_list(self): - return [t for t in self] - - def get_cookies(self): - return self._cookies - -def ArgsGeneralWrapper(f): - """ - 装饰器函数,用于重组输入参数,改变输入参数的顺序与结构。 - """ - def decorated(cookies, max_length, llm_model, txt, txt2, top_p, temperature, chatbot, history, system_prompt, *args): - txt_passon = txt - if txt == "" and txt2 != "": txt_passon = txt2 - # 引入一个有cookie的chatbot - cookies.update({ - 'top_p':top_p, - 'temperature':temperature, - }) - llm_kwargs = { - 'api_key': cookies['api_key'], - 'llm_model': llm_model, - 'top_p':top_p, - 'max_length': max_length, - 'temperature':temperature, - } - plugin_kwargs = { - # 目前还没有 - } - chatbot_with_cookie = ChatBotWithCookies(cookies) - chatbot_with_cookie.write_list(chatbot) - yield from f(txt_passon, llm_kwargs, plugin_kwargs, chatbot_with_cookie, history, system_prompt, *args) - return decorated - -def update_ui(chatbot, history, msg='正常', **kwargs): # 刷新界面 - """ - 刷新用户界面 - """ - assert isinstance(chatbot, ChatBotWithCookies), "在传递chatbot的过程中不要将其丢弃。必要时,可用clear将其清空,然后用for+append循环重新赋值。" - yield chatbot.get_cookies(), chatbot, history, msg - -def CatchException(f): - """ - 装饰器函数,捕捉函数f中的异常并封装到一个生成器中返回,并显示到聊天当中。 - """ - @wraps(f) - def decorated(txt, top_p, temperature, chatbot, history, systemPromptTxt, WEB_PORT): - try: - yield from f(txt, top_p, temperature, chatbot, history, systemPromptTxt, WEB_PORT) - except Exception as e: - from check_proxy import check_proxy - from toolbox import get_conf - proxies, = get_conf('proxies') - tb_str = '```\n' + traceback.format_exc() + '```' - if chatbot is None or len(chatbot) == 0: - chatbot = [["插件调度异常", "异常原因"]] - chatbot[-1] = (chatbot[-1][0], - f"[Local Message] 实验性函数调用出错: \n\n{tb_str} \n\n当前代理可用性: \n\n{check_proxy(proxies)}") - yield from update_ui(chatbot=chatbot, history=history, 
-    return decorated
-
-
-def HotReload(f):
-    """
-    Decorator function for HotReload, used to hot-update Python function plugins.
-    Hot-updating a function means replacing its code without stopping the running program, so changes take effect immediately.
-    Inside the decorator, wraps(f) preserves the function's metadata, and an inner function named decorated is defined.
-    The inner function reloads the function's module via importlib's reload and inspect's getmodule,
-    then looks the function up again by name with getattr so it is taken from the freshly reloaded module.
-    Finally, the reloaded function is invoked through a yield from statement in place of the decorated function.
-    The decorator returns this inner function, which thereby always runs the latest version of the original definition.
-    """
-    @wraps(f)
-    def decorated(*args, **kwargs):
-        fn_name = f.__name__
-        f_hot_reload = getattr(importlib.reload(inspect.getmodule(f)), fn_name)
-        yield from f_hot_reload(*args, **kwargs)
-    return decorated
-
-
-####################################### Other small utilities #####################################
-
-def get_reduce_token_percent(text):
-    """
-        * This function will be deprecated in the future
-    """
-    try:
-        # text = "maximum context length is 4097 tokens. However, your messages resulted in 4870 tokens"
-        pattern = r"(\d+)\s+tokens\b"
-        match = re.findall(pattern, text)
-        EXCEED_ALLO = 500  # leave a little headroom, otherwise the reply may fail because too few tokens remain
-        max_limit = float(match[0]) - EXCEED_ALLO
-        current_tokens = float(match[1])
-        ratio = max_limit/current_tokens
-        assert ratio > 0 and ratio < 1
-        return ratio, str(int(current_tokens-max_limit))
-    except:
-        return 0.5, '不详'
-
-
-
-def write_results_to_file(history, file_name=None):
-    """
-    Write the conversation history to a file in Markdown format. If no file name is given, one is generated from the current time.
-    """
-    import os
-    import time
-    if file_name is None:
-        # file_name = time.strftime("chatGPT分析报告%Y-%m-%d-%H-%M-%S", time.localtime()) + '.md'
-        file_name = 'chatGPT分析报告' + \
-            time.strftime("%Y-%m-%d-%H-%M-%S", time.localtime()) + '.md'
-    os.makedirs('./gpt_log/', exist_ok=True)
-    with open(f'./gpt_log/{file_name}', 'w', encoding='utf8') as f:
-        f.write('# chatGPT 分析报告\n')
-        for i, content in enumerate(history):
-            try:  # the trigger condition for this bug was never found; patch over it like this for now
-                if type(content) != str:
-                    content = str(content)
-            except:
-                continue
-            if i % 2 == 0:
-                f.write('## ')
-            f.write(content)
-            f.write('\n\n')
-    res = '以上材料已经被写入' + os.path.abspath(f'./gpt_log/{file_name}')
-    print(res)
-    return res
-
-
-def regular_txt_to_markdown(text):
-    """
-    Convert plain text into Markdown-formatted text.
-    """
-    text = text.replace('\n', '\n\n')
-    text = text.replace('\n\n\n', '\n\n')
-    text = text.replace('\n\n\n', '\n\n')
-    return text
-
-
-
-
-def report_execption(chatbot, history, a, b):
-    """
-    Append an error message to the chatbot.
-    """
-    chatbot.append((a, b))
-    history.append(a)
-    history.append(b)
-
-
-def text_divide_paragraph(text):
-    """
-    Split the text on paragraph separators and generate HTML code with paragraph tags.
-    """
-    if '```' in text:
-        # careful input
-        return text
-    else:
-        # wtf input
-        lines = text.split("\n")
-        for i, line in enumerate(lines):
-            lines[i] = lines[i].replace(" ", "&nbsp;")
-        text = "</br>".join(lines)
        ".join(lines) - return text - - -def markdown_convertion(txt): - """ - 将Markdown格式的文本转换为HTML格式。如果包含数学公式,则先将公式转换为HTML格式。 - """ - pre = '
        ' - suf = '
        ' - markdown_extension_configs = { - 'mdx_math': { - 'enable_dollar_delimiter': True, - 'use_gitlab_delimiters': False, - }, - } - find_equation_pattern = r'\n', '') - return content - - - if ('$' in txt) and ('```' not in txt): # 有$标识的公式符号,且没有代码段```的标识 - # convert everything to html format - split = markdown.markdown(text='---') - convert_stage_1 = markdown.markdown(text=txt, extensions=['mdx_math', 'fenced_code', 'tables', 'sane_lists'], extension_configs=markdown_extension_configs) - convert_stage_1 = markdown_bug_hunt(convert_stage_1) - # re.DOTALL: Make the '.' special character match any character at all, including a newline; without this flag, '.' will match anything except a newline. Corresponds to the inline flag (?s). - # 1. convert to easy-to-copy tex (do not render math) - convert_stage_2_1, n = re.subn(find_equation_pattern, replace_math_no_render, convert_stage_1, flags=re.DOTALL) - # 2. convert to rendered equation - convert_stage_2_2, n = re.subn(find_equation_pattern, replace_math_render, convert_stage_1, flags=re.DOTALL) - # cat them together - return pre + convert_stage_2_1 + f'{split}' + convert_stage_2_2 + suf - else: - return pre + markdown.markdown(txt, extensions=['fenced_code', 'codehilite', 'tables', 'sane_lists']) + suf - - -def close_up_code_segment_during_stream(gpt_reply): - """ - 在gpt输出代码的中途(输出了前面的```,但还没输出完后面的```),补上后面的``` - - Args: - gpt_reply (str): GPT模型返回的回复字符串。 - - Returns: - str: 返回一个新的字符串,将输出代码片段的“后面的```”补上。 - - """ - if '```' not in gpt_reply: - return gpt_reply - if gpt_reply.endswith('```'): - return gpt_reply - - # 排除了以上两个情况,我们 - segments = gpt_reply.split('```') - n_mark = len(segments) - 1 - if n_mark % 2 == 1: - # print('输出代码片段中!') - return gpt_reply+'\n```' - else: - return gpt_reply - - -def format_io(self, y): - """ - 将输入和输出解析为HTML格式。将y中最后一项的输入部分段落化,并将输出部分的Markdown和数学公式转换为HTML格式。 - """ - if y is None or y == []: - return [] - i_ask, gpt_reply = y[-1] - i_ask = text_divide_paragraph(i_ask) # 输入部分太自由,预处理一波 - gpt_reply = close_up_code_segment_during_stream(gpt_reply) # 当代码输出半截的时候,试着补上后个``` - y[-1] = ( - None if i_ask is None else markdown.markdown(i_ask, extensions=['fenced_code', 'tables']), - None if gpt_reply is None else markdown_convertion(gpt_reply) - ) - return y - - -def find_free_port(): - """ - 返回当前系统中可用的未使用端口。 - """ - import socket - from contextlib import closing - with closing(socket.socket(socket.AF_INET, socket.SOCK_STREAM)) as s: - s.bind(('', 0)) - s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) - return s.getsockname()[1] - - -def extract_archive(file_path, dest_dir): - import zipfile - import tarfile - import os - # Get the file extension of the input file - file_extension = os.path.splitext(file_path)[1] - - # Extract the archive based on its extension - if file_extension == '.zip': - with zipfile.ZipFile(file_path, 'r') as zipobj: - zipobj.extractall(path=dest_dir) - print("Successfully extracted zip archive to {}".format(dest_dir)) - - elif file_extension in ['.tar', '.gz', '.bz2']: - with tarfile.open(file_path, 'r:*') as tarobj: - tarobj.extractall(path=dest_dir) - print("Successfully extracted tar archive to {}".format(dest_dir)) - - # 第三方库,需要预先pip install rarfile - # 此外,Windows上还需要安装winrar软件,配置其Path环境变量,如"C:\Program Files\WinRAR"才可以 - elif file_extension == '.rar': - try: - import rarfile - with rarfile.RarFile(file_path) as rf: - rf.extractall(path=dest_dir) - print("Successfully extracted rar archive to {}".format(dest_dir)) - except: - print("Rar format requires additional dependencies to install") - 
return '\n\n需要安装pip install rarfile来解压rar文件' - - # 第三方库,需要预先pip install py7zr - elif file_extension == '.7z': - try: - import py7zr - with py7zr.SevenZipFile(file_path, mode='r') as f: - f.extractall(path=dest_dir) - print("Successfully extracted 7z archive to {}".format(dest_dir)) - except: - print("7z format requires additional dependencies to install") - return '\n\n需要安装pip install py7zr来解压7z文件' - else: - return '' - return '' - - -def find_recent_files(directory): - """ - me: find files that is created with in one minutes under a directory with python, write a function - gpt: here it is! - """ - import os - import time - current_time = time.time() - one_minute_ago = current_time - 60 - recent_files = [] - - for filename in os.listdir(directory): - file_path = os.path.join(directory, filename) - if file_path.endswith('.log'): - continue - created_time = os.path.getmtime(file_path) - if created_time >= one_minute_ago: - if os.path.isdir(file_path): - continue - recent_files.append(file_path) - - return recent_files - - -def on_file_uploaded(files, chatbot, txt, txt2, checkboxes): - if len(files) == 0: - return chatbot, txt - import shutil - import os - import time - import glob - from toolbox import extract_archive - try: - shutil.rmtree('./private_upload/') - except: - pass - time_tag = time.strftime("%Y-%m-%d-%H-%M-%S", time.localtime()) - os.makedirs(f'private_upload/{time_tag}', exist_ok=True) - err_msg = '' - for file in files: - file_origin_name = os.path.basename(file.orig_name) - shutil.copy(file.name, f'private_upload/{time_tag}/{file_origin_name}') - err_msg += extract_archive(f'private_upload/{time_tag}/{file_origin_name}', - dest_dir=f'private_upload/{time_tag}/{file_origin_name}.extract') - moved_files = [fp for fp in glob.glob( - 'private_upload/**/*', recursive=True)] - if "底部输入区" in checkboxes: - txt = "" - txt2 = f'private_upload/{time_tag}' - else: - txt = f'private_upload/{time_tag}' - txt2 = "" - moved_files_str = '\t\n\n'.join(moved_files) - chatbot.append(['我上传了文件,请查收', - f'[Local Message] 收到以下文件: \n\n{moved_files_str}' + - f'\n\n调用路径参数已自动修正到: \n\n{txt}' + - f'\n\n现在您点击任意“红颜色”标识的函数插件时,以上文件将被作为输入参数'+err_msg]) - return chatbot, txt, txt2 - - -def on_report_generated(files, chatbot): - from toolbox import find_recent_files - report_files = find_recent_files('gpt_log') - if len(report_files) == 0: - return None, chatbot - # files.extend(report_files) - chatbot.append(['汇总报告如何远程获取?', '汇总报告已经添加到右侧“文件上传区”(可能处于折叠状态),请查收。']) - return report_files, chatbot - -def is_openai_api_key(key): - API_MATCH = re.match(r"sk-[a-zA-Z0-9]{48}$", key) - return bool(API_MATCH) - -def is_api2d_key(key): - if key.startswith('fk') and len(key) == 41: - return True - else: - return False - -def is_any_api_key(key): - if ',' in key: - keys = key.split(',') - for k in keys: - if is_any_api_key(k): return True - return False - else: - return is_openai_api_key(key) or is_api2d_key(key) - - -def select_api_key(keys, llm_model): - import random - avail_key_list = [] - key_list = keys.split(',') - - if llm_model.startswith('gpt-'): - for k in key_list: - if is_openai_api_key(k): avail_key_list.append(k) - - if llm_model.startswith('api2d-'): - for k in key_list: - if is_api2d_key(k): avail_key_list.append(k) - - if len(avail_key_list) == 0: - raise RuntimeError(f"您提供的api-key不满足要求,不包含任何可用于{llm_model}的api-key。") - - api_key = random.choice(avail_key_list) # 随机负载均衡 - return api_key - -@lru_cache(maxsize=128) -def read_single_conf_with_lru_cache(arg): - from colorful import print亮红, print亮绿 - try: - r = 
getattr(importlib.import_module('config_private'), arg) - except: - r = getattr(importlib.import_module('config'), arg) - # 在读取API_KEY时,检查一下是不是忘了改config - if arg == 'API_KEY': - if is_any_api_key(r): - print亮绿(f"[API_KEY] 您的 API_KEY 是: {r[:15]}*** API_KEY 导入成功") - else: - print亮红( "[API_KEY] 正确的 API_KEY 是'sk'开头的51位密钥(OpenAI),或者 'fk'开头的41位密钥,请在config文件中修改API密钥之后再运行。") - if arg == 'proxies': - if r is None: - print亮红('[PROXY] 网络代理状态:未配置。无代理状态下很可能无法访问OpenAI家族的模型。建议:检查USE_PROXY选项是否修改。') - else: - print亮绿('[PROXY] 网络代理状态:已配置。配置信息如下:', r) - assert isinstance(r, dict), 'proxies格式错误,请注意proxies选项的格式,不要遗漏括号。' - return r - - -def get_conf(*args): - # 建议您复制一个config_private.py放自己的秘密, 如API和代理网址, 避免不小心传github被别人看到 - res = [] - for arg in args: - r = read_single_conf_with_lru_cache(arg) - res.append(r) - return res - - -def clear_line_break(txt): - txt = txt.replace('\n', ' ') - txt = txt.replace(' ', ' ') - txt = txt.replace(' ', ' ') - return txt - - -class DummyWith(): - """ - 这段代码定义了一个名为DummyWith的空上下文管理器, - 它的作用是……额……没用,即在代码结构不变得情况下取代其他的上下文管理器。 - 上下文管理器是一种Python对象,用于与with语句一起使用, - 以确保一些资源在代码块执行期间得到正确的初始化和清理。 - 上下文管理器必须实现两个方法,分别为 __enter__()和 __exit__()。 - 在上下文执行开始的情况下,__enter__()方法会在代码块被执行前被调用, - 而在上下文执行结束时,__exit__()方法则会被调用。 - """ - def __enter__(self): - return self - - def __exit__(self, exc_type, exc_value, traceback): - return diff --git a/spaces/r3gm/Aesthetic_RVC_Inference_HF/go-applio.bat b/spaces/r3gm/Aesthetic_RVC_Inference_HF/go-applio.bat deleted file mode 100644 index 70cc1bea97c811535eb36665c4a57acfe788dde4..0000000000000000000000000000000000000000 --- a/spaces/r3gm/Aesthetic_RVC_Inference_HF/go-applio.bat +++ /dev/null @@ -1,100 +0,0 @@ -@echo off -setlocal -title Applio - Start -cd %~dp0 - -::: -::: _ _ -::: /\ | (_) -::: / \ _ __ _ __ | |_ ___ -::: / /\ \ | '_ \| '_ \| | |/ _ \ -::: / ____ \| |_) | |_) | | | (_) | -::: /_/ \_\ .__/| .__/|_|_|\___/ -::: | | | | -::: |_| |_| -::: -::: - -for /f "usebackq delims=" %%i in ("%cd%\assets\configs\version.txt") do ( - set "localVersion=%%i" -) -for /f %%i in ('powershell -command "(Invoke-WebRequest -Uri 'https://raw.githubusercontent.com/IAHispano/Applio-RVC-Fork/main/assets/configs/version.txt').Content"') do set "onlineVersion=%%i" - -:menu -for /f "delims=: tokens=*" %%A in ('findstr /b ":::" "%~f0"') do @echo(%%A -powershell -command "if ('%localVersion%' -lt '%onlineVersion%') { exit 1 } else { exit 0 }" -if %errorlevel% equ 1 ( - echo You are currently using an outdated version %localVersion% - echo. - echo We're excited to announce that version %onlineVersion% is now available for download on https://github.com/IAHispano/Applio-RVC-Fork. - echo Upgrade now to access the latest features and improvements! - echo. - goto continue -) else ( - goto continue -) - -:continue -echo Runtime: Recommended for regular users -echo [1] Start Applio - Runtime ^(Nvidia Support^) -echo [2] Start Applio - Runtime ^(Intel Support. Requires Nvidia runtime^) -echo [3] Start Applio - Runtime ^(AMD Support^) -echo. -echo Dependencies: Only recommended for experienced users -echo [4] Start Applio ^(Nvidia Support^) -echo [5] Start Applio ^(AMD Support^) -echo. -echo [6] Exit -echo. - -set /p choice=Select an option: -set choice=%choice: =% - -if "%choice%"=="6" ( - goto finish -) else if "%choice%"=="5" ( - cls - echo Starting Applio with AMD support... - python infer-web.py --pycmd python --port 7897 --dml --theme dark - pause - cls - goto menu -) else if "%choice%"=="4" ( - cls - echo Starting Applio with Nvidia support... 
- python infer-web.py --pycmd python --port 7897 --theme dark - pause - cls - goto menu -) else if "%choice%"=="3" ( - cls - echo Starting Applio with runtime for AMD support ^(you must have it installed^)... - runtime\python.exe infer-web.py --pycmd runtime/python.exe --port 7897 --dml --theme dark - pause - cls - goto menu -) else if "%choice%"=="2" ( - runtime\python.exe -m pip install scikit-learn-intelex - cls - echo Starting Applio with runtime for Intel CPU support ^(you must have Nvidia support installed^)... - runtime\python.exe -m sklearnex infer-web.py --pycmd runtime/python.exe --port 7897 --theme dark - pause - cls - goto menu -) else if "%choice%"=="1" ( - cls - echo Starting Applio with runtime for Nvidia support ^(you must have it installed^)... - runtime\python.exe infer-web.py --pycmd runtime/python.exe --port 7897 --theme dark - pause - cls - goto menu -) - -cls -echo Invalid option. Please enter a number from 1 to 5. -echo. -echo Press 'Enter' to access the main menu... -pause>nul -cls -goto menu -:finish diff --git a/spaces/rachana219/MODT2/trackers/strongsort/sort/iou_matching.py b/spaces/rachana219/MODT2/trackers/strongsort/sort/iou_matching.py deleted file mode 100644 index 62d5a3f63b70db5e322b6f8766444dd824c010ae..0000000000000000000000000000000000000000 --- a/spaces/rachana219/MODT2/trackers/strongsort/sort/iou_matching.py +++ /dev/null @@ -1,82 +0,0 @@ -# vim: expandtab:ts=4:sw=4 -from __future__ import absolute_import -import numpy as np -from . import linear_assignment - - -def iou(bbox, candidates): - """Computer intersection over union. - - Parameters - ---------- - bbox : ndarray - A bounding box in format `(top left x, top left y, width, height)`. - candidates : ndarray - A matrix of candidate bounding boxes (one per row) in the same format - as `bbox`. - - Returns - ------- - ndarray - The intersection over union in [0, 1] between the `bbox` and each - candidate. A higher score means a larger fraction of the `bbox` is - occluded by the candidate. - - """ - bbox_tl, bbox_br = bbox[:2], bbox[:2] + bbox[2:] - candidates_tl = candidates[:, :2] - candidates_br = candidates[:, :2] + candidates[:, 2:] - - tl = np.c_[np.maximum(bbox_tl[0], candidates_tl[:, 0])[:, np.newaxis], - np.maximum(bbox_tl[1], candidates_tl[:, 1])[:, np.newaxis]] - br = np.c_[np.minimum(bbox_br[0], candidates_br[:, 0])[:, np.newaxis], - np.minimum(bbox_br[1], candidates_br[:, 1])[:, np.newaxis]] - wh = np.maximum(0., br - tl) - - area_intersection = wh.prod(axis=1) - area_bbox = bbox[2:].prod() - area_candidates = candidates[:, 2:].prod(axis=1) - return area_intersection / (area_bbox + area_candidates - area_intersection) - - -def iou_cost(tracks, detections, track_indices=None, - detection_indices=None): - """An intersection over union distance metric. - - Parameters - ---------- - tracks : List[deep_sort.track.Track] - A list of tracks. - detections : List[deep_sort.detection.Detection] - A list of detections. - track_indices : Optional[List[int]] - A list of indices to tracks that should be matched. Defaults to - all `tracks`. - detection_indices : Optional[List[int]] - A list of indices to detections that should be matched. Defaults - to all `detections`. - - Returns - ------- - ndarray - Returns a cost matrix of shape - len(track_indices), len(detection_indices) where entry (i, j) is - `1 - iou(tracks[track_indices[i]], detections[detection_indices[j]])`. 
- - """ - if track_indices is None: - track_indices = np.arange(len(tracks)) - if detection_indices is None: - detection_indices = np.arange(len(detections)) - - cost_matrix = np.zeros((len(track_indices), len(detection_indices))) - for row, track_idx in enumerate(track_indices): - if tracks[track_idx].time_since_update > 1: - cost_matrix[row, :] = linear_assignment.INFTY_COST - continue - - bbox = tracks[track_idx].to_tlwh() - candidates = np.asarray( - [detections[i].tlwh for i in detection_indices]) - cost_matrix[row, :] = 1. - iou(bbox, candidates) - return cost_matrix diff --git a/spaces/radames/PIFu-Clothed-Human-Digitization/PIFu/apps/render_data.py b/spaces/radames/PIFu-Clothed-Human-Digitization/PIFu/apps/render_data.py deleted file mode 100644 index 563c03fba6e304eced73ca283152a968a65c3b8e..0000000000000000000000000000000000000000 --- a/spaces/radames/PIFu-Clothed-Human-Digitization/PIFu/apps/render_data.py +++ /dev/null @@ -1,290 +0,0 @@ -#from data.config import raw_dataset, render_dataset, archive_dataset, model_list, zip_path - -from lib.renderer.camera import Camera -import numpy as np -from lib.renderer.mesh import load_obj_mesh, compute_tangent, compute_normal, load_obj_mesh_mtl -from lib.renderer.camera import Camera -import os -import cv2 -import time -import math -import random -import pyexr -import argparse -from tqdm import tqdm - - -def make_rotate(rx, ry, rz): - sinX = np.sin(rx) - sinY = np.sin(ry) - sinZ = np.sin(rz) - - cosX = np.cos(rx) - cosY = np.cos(ry) - cosZ = np.cos(rz) - - Rx = np.zeros((3,3)) - Rx[0, 0] = 1.0 - Rx[1, 1] = cosX - Rx[1, 2] = -sinX - Rx[2, 1] = sinX - Rx[2, 2] = cosX - - Ry = np.zeros((3,3)) - Ry[0, 0] = cosY - Ry[0, 2] = sinY - Ry[1, 1] = 1.0 - Ry[2, 0] = -sinY - Ry[2, 2] = cosY - - Rz = np.zeros((3,3)) - Rz[0, 0] = cosZ - Rz[0, 1] = -sinZ - Rz[1, 0] = sinZ - Rz[1, 1] = cosZ - Rz[2, 2] = 1.0 - - R = np.matmul(np.matmul(Rz,Ry),Rx) - return R - -def rotateSH(SH, R): - SHn = SH - - # 1st order - SHn[1] = R[1,1]*SH[1] - R[1,2]*SH[2] + R[1,0]*SH[3] - SHn[2] = -R[2,1]*SH[1] + R[2,2]*SH[2] - R[2,0]*SH[3] - SHn[3] = R[0,1]*SH[1] - R[0,2]*SH[2] + R[0,0]*SH[3] - - # 2nd order - SHn[4:,0] = rotateBand2(SH[4:,0],R) - SHn[4:,1] = rotateBand2(SH[4:,1],R) - SHn[4:,2] = rotateBand2(SH[4:,2],R) - - return SHn - -def rotateBand2(x, R): - s_c3 = 0.94617469575 - s_c4 = -0.31539156525 - s_c5 = 0.54627421529 - - s_c_scale = 1.0/0.91529123286551084 - s_c_scale_inv = 0.91529123286551084 - - s_rc2 = 1.5853309190550713*s_c_scale - s_c4_div_c3 = s_c4/s_c3 - s_c4_div_c3_x2 = (s_c4/s_c3)*2.0 - - s_scale_dst2 = s_c3 * s_c_scale_inv - s_scale_dst4 = s_c5 * s_c_scale_inv - - sh0 = x[3] + x[4] + x[4] - x[1] - sh1 = x[0] + s_rc2*x[2] + x[3] + x[4] - sh2 = x[0] - sh3 = -x[3] - sh4 = -x[1] - - r2x = R[0][0] + R[0][1] - r2y = R[1][0] + R[1][1] - r2z = R[2][0] + R[2][1] - - r3x = R[0][0] + R[0][2] - r3y = R[1][0] + R[1][2] - r3z = R[2][0] + R[2][2] - - r4x = R[0][1] + R[0][2] - r4y = R[1][1] + R[1][2] - r4z = R[2][1] + R[2][2] - - sh0_x = sh0 * R[0][0] - sh0_y = sh0 * R[1][0] - d0 = sh0_x * R[1][0] - d1 = sh0_y * R[2][0] - d2 = sh0 * (R[2][0] * R[2][0] + s_c4_div_c3) - d3 = sh0_x * R[2][0] - d4 = sh0_x * R[0][0] - sh0_y * R[1][0] - - sh1_x = sh1 * R[0][2] - sh1_y = sh1 * R[1][2] - d0 += sh1_x * R[1][2] - d1 += sh1_y * R[2][2] - d2 += sh1 * (R[2][2] * R[2][2] + s_c4_div_c3) - d3 += sh1_x * R[2][2] - d4 += sh1_x * R[0][2] - sh1_y * R[1][2] - - sh2_x = sh2 * r2x - sh2_y = sh2 * r2y - d0 += sh2_x * r2y - d1 += sh2_y * r2z - d2 += sh2 * (r2z * r2z + s_c4_div_c3_x2) - d3 += sh2_x * 
r2z - d4 += sh2_x * r2x - sh2_y * r2y - - sh3_x = sh3 * r3x - sh3_y = sh3 * r3y - d0 += sh3_x * r3y - d1 += sh3_y * r3z - d2 += sh3 * (r3z * r3z + s_c4_div_c3_x2) - d3 += sh3_x * r3z - d4 += sh3_x * r3x - sh3_y * r3y - - sh4_x = sh4 * r4x - sh4_y = sh4 * r4y - d0 += sh4_x * r4y - d1 += sh4_y * r4z - d2 += sh4 * (r4z * r4z + s_c4_div_c3_x2) - d3 += sh4_x * r4z - d4 += sh4_x * r4x - sh4_y * r4y - - dst = x - dst[0] = d0 - dst[1] = -d1 - dst[2] = d2 * s_scale_dst2 - dst[3] = -d3 - dst[4] = d4 * s_scale_dst4 - - return dst - -def render_prt_ortho(out_path, folder_name, subject_name, shs, rndr, rndr_uv, im_size, angl_step=4, n_light=1, pitch=[0]): - cam = Camera(width=im_size, height=im_size) - cam.ortho_ratio = 0.4 * (512 / im_size) - cam.near = -100 - cam.far = 100 - cam.sanity_check() - - # set path for obj, prt - mesh_file = os.path.join(folder_name, subject_name + '_100k.obj') - if not os.path.exists(mesh_file): - print('ERROR: obj file does not exist!!', mesh_file) - return - prt_file = os.path.join(folder_name, 'bounce', 'bounce0.txt') - if not os.path.exists(prt_file): - print('ERROR: prt file does not exist!!!', prt_file) - return - face_prt_file = os.path.join(folder_name, 'bounce', 'face.npy') - if not os.path.exists(face_prt_file): - print('ERROR: face prt file does not exist!!!', prt_file) - return - text_file = os.path.join(folder_name, 'tex', subject_name + '_dif_2k.jpg') - if not os.path.exists(text_file): - print('ERROR: dif file does not exist!!', text_file) - return - - texture_image = cv2.imread(text_file) - texture_image = cv2.cvtColor(texture_image, cv2.COLOR_BGR2RGB) - - vertices, faces, normals, faces_normals, textures, face_textures = load_obj_mesh(mesh_file, with_normal=True, with_texture=True) - vmin = vertices.min(0) - vmax = vertices.max(0) - up_axis = 1 if (vmax-vmin).argmax() == 1 else 2 - - vmed = np.median(vertices, 0) - vmed[up_axis] = 0.5*(vmax[up_axis]+vmin[up_axis]) - y_scale = 180/(vmax[up_axis] - vmin[up_axis]) - - rndr.set_norm_mat(y_scale, vmed) - rndr_uv.set_norm_mat(y_scale, vmed) - - tan, bitan = compute_tangent(vertices, faces, normals, textures, face_textures) - prt = np.loadtxt(prt_file) - face_prt = np.load(face_prt_file) - rndr.set_mesh(vertices, faces, normals, faces_normals, textures, face_textures, prt, face_prt, tan, bitan) - rndr.set_albedo(texture_image) - - rndr_uv.set_mesh(vertices, faces, normals, faces_normals, textures, face_textures, prt, face_prt, tan, bitan) - rndr_uv.set_albedo(texture_image) - - os.makedirs(os.path.join(out_path, 'GEO', 'OBJ', subject_name),exist_ok=True) - os.makedirs(os.path.join(out_path, 'PARAM', subject_name),exist_ok=True) - os.makedirs(os.path.join(out_path, 'RENDER', subject_name),exist_ok=True) - os.makedirs(os.path.join(out_path, 'MASK', subject_name),exist_ok=True) - os.makedirs(os.path.join(out_path, 'UV_RENDER', subject_name),exist_ok=True) - os.makedirs(os.path.join(out_path, 'UV_MASK', subject_name),exist_ok=True) - os.makedirs(os.path.join(out_path, 'UV_POS', subject_name),exist_ok=True) - os.makedirs(os.path.join(out_path, 'UV_NORMAL', subject_name),exist_ok=True) - - if not os.path.exists(os.path.join(out_path, 'val.txt')): - f = open(os.path.join(out_path, 'val.txt'), 'w') - f.close() - - # copy obj file - cmd = 'cp %s %s' % (mesh_file, os.path.join(out_path, 'GEO', 'OBJ', subject_name)) - print(cmd) - os.system(cmd) - - for p in pitch: - for y in tqdm(range(0, 360, angl_step)): - R = np.matmul(make_rotate(math.radians(p), 0, 0), make_rotate(0, math.radians(y), 0)) - if up_axis == 2: - R = 
np.matmul(R, make_rotate(math.radians(90),0,0)) - - rndr.rot_matrix = R - rndr_uv.rot_matrix = R - rndr.set_camera(cam) - rndr_uv.set_camera(cam) - - for j in range(n_light): - sh_id = random.randint(0,shs.shape[0]-1) - sh = shs[sh_id] - sh_angle = 0.2*np.pi*(random.random()-0.5) - sh = rotateSH(sh, make_rotate(0, sh_angle, 0).T) - - dic = {'sh': sh, 'ortho_ratio': cam.ortho_ratio, 'scale': y_scale, 'center': vmed, 'R': R} - - rndr.set_sh(sh) - rndr.analytic = False - rndr.use_inverse_depth = False - rndr.display() - - out_all_f = rndr.get_color(0) - out_mask = out_all_f[:,:,3] - out_all_f = cv2.cvtColor(out_all_f, cv2.COLOR_RGBA2BGR) - - np.save(os.path.join(out_path, 'PARAM', subject_name, '%d_%d_%02d.npy'%(y,p,j)),dic) - cv2.imwrite(os.path.join(out_path, 'RENDER', subject_name, '%d_%d_%02d.jpg'%(y,p,j)),255.0*out_all_f) - cv2.imwrite(os.path.join(out_path, 'MASK', subject_name, '%d_%d_%02d.png'%(y,p,j)),255.0*out_mask) - - rndr_uv.set_sh(sh) - rndr_uv.analytic = False - rndr_uv.use_inverse_depth = False - rndr_uv.display() - - uv_color = rndr_uv.get_color(0) - uv_color = cv2.cvtColor(uv_color, cv2.COLOR_RGBA2BGR) - cv2.imwrite(os.path.join(out_path, 'UV_RENDER', subject_name, '%d_%d_%02d.jpg'%(y,p,j)),255.0*uv_color) - - if y == 0 and j == 0 and p == pitch[0]: - uv_pos = rndr_uv.get_color(1) - uv_mask = uv_pos[:,:,3] - cv2.imwrite(os.path.join(out_path, 'UV_MASK', subject_name, '00.png'),255.0*uv_mask) - - data = {'default': uv_pos[:,:,:3]} # default is a reserved name - pyexr.write(os.path.join(out_path, 'UV_POS', subject_name, '00.exr'), data) - - uv_nml = rndr_uv.get_color(2) - uv_nml = cv2.cvtColor(uv_nml, cv2.COLOR_RGBA2BGR) - cv2.imwrite(os.path.join(out_path, 'UV_NORMAL', subject_name, '00.png'),255.0*uv_nml) - - -if __name__ == '__main__': - shs = np.load('./env_sh.npy') - - parser = argparse.ArgumentParser() - parser.add_argument('-i', '--input', type=str, default='/home/shunsuke/Downloads/rp_dennis_posed_004_OBJ') - parser.add_argument('-o', '--out_dir', type=str, default='/home/shunsuke/Documents/hf_human') - parser.add_argument('-m', '--ms_rate', type=int, default=1, help='higher ms rate results in less aliased output. MESA renderer only supports ms_rate=1.') - parser.add_argument('-e', '--egl', action='store_true', help='egl rendering option. use this when rendering with headless server with NVIDIA GPU') - parser.add_argument('-s', '--size', type=int, default=512, help='rendering image size') - args = parser.parse_args() - - # NOTE: GL context has to be created before any other OpenGL function loads. 
- from lib.renderer.gl.init_gl import initialize_GL_context - initialize_GL_context(width=args.size, height=args.size, egl=args.egl) - - from lib.renderer.gl.prt_render import PRTRender - rndr = PRTRender(width=args.size, height=args.size, ms_rate=args.ms_rate, egl=args.egl) - rndr_uv = PRTRender(width=args.size, height=args.size, uv_mode=True, egl=args.egl) - - if args.input[-1] == '/': - args.input = args.input[:-1] - subject_name = args.input.split('/')[-1][:-4] - render_prt_ortho(args.out_dir, args.input, subject_name, shs, rndr, rndr_uv, args.size, 1, 1, pitch=[0]) \ No newline at end of file diff --git a/spaces/radames/SPIGA-face-alignment-headpose-estimator/SPIGA/spiga/demo/analyze/extract/__init__.py b/spaces/radames/SPIGA-face-alignment-headpose-estimator/SPIGA/spiga/demo/analyze/extract/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/radames/nginx-gradio-reverse-proxy/static/index.html b/spaces/radames/nginx-gradio-reverse-proxy/static/index.html deleted file mode 100644 index b6fc4c620b67d95f953a5c1c1230aaab5db5a1b0..0000000000000000000000000000000000000000 --- a/spaces/radames/nginx-gradio-reverse-proxy/static/index.html +++ /dev/null @@ -1 +0,0 @@ -hello \ No newline at end of file diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Age2 x1.exe No Cd Crack How to Play Age of Empires 2 The Conquerors Without a Disc.md b/spaces/raedeXanto/academic-chatgpt-beta/Age2 x1.exe No Cd Crack How to Play Age of Empires 2 The Conquerors Without a Disc.md deleted file mode 100644 index d5f4e39dd8d39c3cf2a29ab35c98e0383fff984c..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Age2 x1.exe No Cd Crack How to Play Age of Empires 2 The Conquerors Without a Disc.md +++ /dev/null @@ -1,119 +0,0 @@ - -
| Heading | Content |
| --- | --- |
| H1: What is Age2 x1.exe No Cd Crack? | An explanation of what a No Cd Crack is and why some players use it. A disclaimer that using a No Cd Crack may violate the game's terms of service and cause technical issues. |
| H2: How to install Age2 x1.exe No Cd Crack? | A step-by-step guide on how to download and apply the No Cd Crack for Age of Empires 2: The Conquerors. A list of sources where the No Cd Crack can be found. A warning that some No Cd Cracks may contain viruses or malware and that users should scan them before installing. |
| H2: How to play Age of Empires 2: The Conquerors with Age2 x1.exe No Cd Crack? | A brief overview of the game's features and gameplay modes. A comparison of playing online and offline with the No Cd Crack. A tip on how to run the game in compatibility mode if it does not work on newer operating systems. |
| H2: What are the pros and cons of using Age2 x1.exe No Cd Crack? | A summary of the advantages and disadvantages of using a No Cd Crack for Age of Empires 2: The Conquerors. A table that shows the pros and cons in a concise way. A conclusion that recommends using a No Cd Crack only as a last resort and advises users to buy the original game if possible. |
| H3: Pros | No need to insert the CD every time you want to play. Saves disk space and reduces wear and tear on the CD drive. Allows you to play on multiple computers without buying multiple copies of the game. |
| H3: Cons | May violate the game's terms of service and result in a ban or legal action. May cause technical issues such as crashes, bugs, or compatibility problems. May contain viruses or malware that can harm your computer or steal your personal information. |

## Article with HTML formatting

        What is Age2 x1.exe No Cd Crack?

        -

        If you are a fan of strategy games, you may have heard of Age of Empires 2: The Conquerors, a popular game released in 2000 by Microsoft. It is an expansion pack for Age of Empires 2: The Age of Kings, which was released in 1999. The game lets you control one of 18 civilizations from the Middle Ages and lead them to victory in various historical scenarios.

        -

        However, to play the game, you need to have the original CD inserted in your CD drive every time you launch it. This can be inconvenient and annoying for some players, especially if they have lost or damaged their CD, or if they want to play on multiple computers without buying multiple copies of the game.

        -

        Age2 x1.exe No Cd Crack


        Download >>> https://tinourl.com/2uL4gn



        -

        This is where a No Cd Crack comes in handy. A No Cd Crack is a modified version of the game's executable file (Age2_x1.exe) that bypasses the CD check and allows you to play without inserting the CD. Some players use a No Cd Crack to save disk space, reduce wear and tear on their CD drive, or simply avoid the hassle of swapping CDs.

        -

        However, using a No Cd Crack is not without risks. First of all, it may violate the game's terms of service and result in a ban or legal action from Microsoft. Secondly, it may cause technical issues such as crashes, bugs, or compatibility problems with newer operating systems. Thirdly, it may contain viruses or malware that can harm your computer or steal your personal information.

        -

Therefore, before you decide to use a No Cd Crack for Age of Empires 2: The Conquerors, you should be aware of the pros and cons and weigh them carefully. In this article, we will show you how to install and use a No Cd Crack for Age of Empires 2: The Conquerors, as well as the advantages and disadvantages of doing so.

        -

        How to install Age2 x1.exe No Cd Crack?

        -

        If you have decided to use a No Cd Crack for Age of Empires 2: The Conquerors, here are the steps you need to follow:

        -
          -
1. Make sure you have installed the game and its expansion pack on your computer.
2. Download a No Cd Crack for Age of Empires 2: The Conquerors from one of these sources:
3. Scan the downloaded file with an antivirus program before opening it.
4. Extract the file using a program like WinRAR or 7-Zip.
5. Copy the extracted file (Age2_x1.exe) and paste it into your game folder (usually C:\Program Files\Microsoft Games\Age Of Empires II).
6. Rename your original file (Age2_x1.exe) to something else (e.g., Age2_x1_old.exe) or move it to another location as a backup.
7. Launch the game using the new file (Age2_x1.exe) and enjoy playing without inserting the CD.
        -

        How to play Age of Empires 2: The Conquerors with Age2 x1.exe No Cd Crack?

        -

        Once you have installed the No Cd Crack for Age of Empires 2: The Conquerors, you can play the game as usual. You can choose from various gameplay modes such as single-player campaigns, random maps, custom scenarios, multiplayer matches, etc.

        -

        If you want to play online with other players who also use a No Cd Crack, you need to make sure that you have applied the same patch version as them (e.g., v1.0c). You can check your patch version by looking at the bottom left corner of your main menu screen.

        -

        You can also use third-party platforms such as Voobly or GameRanger to find and join online games with other players who use a No Cd Crack.

        -


        -

        If you want to play offline with other players who do not use a No Cd Crack, you need to either insert your original CD or use another computer that has it.

        -

        If you encounter any technical issues while playing with a No Cd Crack, such as crashes, bugs, or compatibility problems with newer operating systems, you can try running the game in compatibility mode for Windows XP or Windows 98.

        -

        What are the pros and cons of using Age2 x1.exe No Cd Crack?

        -

        Using a No Cd Crack for Age of Empires 2: The Conquerors has its advantages and disadvantages. Here is a summary of them:

        -

        Pros

        -
          -
        • No need to insert the CD every time you want to play.
        • -
        • Saves disk space and reduces wear and tear on your CD drive.
        • -
        • Allows you to play on multiple computers without buying multiple copies of the game.
        • -
        -

        Cons

        -
          -
        • May violate the game's terms of service and result in a ban or legal action from Microsoft.
        • -
        • May cause technical issues such as crashes, bugs, or compatibility problems with newer operating systems.
        • -
        • May contain viruses or malware that can harm your computer or steal your personal information.
        • -
        -

        To illustrate these pros and cons more clearly, here is a table that compares them:

        - - -
| No Cd Crack | No CD |
| --- | --- |
| +No need to insert CD | -Need to insert CD |
| +Saves disk space | -Takes up disk space |
| +Plays on multiple computers | |
Tips and tricks for playing Age of Empires 2: The Conquerors

        Playing Age of Empires 2: The Conquerors with a No Cd Crack can be fun and challenging, but it also requires some skills and strategies to win. Here are some tips and tricks that can help you improve your game:

        -
          -
        • Learn the basics of the game. If you are new to Age of Empires 2, you should familiarize yourself with the game's mechanics, such as how to create villagers, gather resources, build structures, research technologies, train units, and fight battles. You should also learn the strengths and weaknesses of each civilization and their unique units and bonuses.
        • -
        • Practice against the AI. Before you jump into online matches with other players, you should practice against the AI on different difficulty levels and maps. This will help you improve your speed, efficiency, and decision-making skills. You can also use the AI to test different strategies and tactics.
        • -
        • Follow a build order. A build order is a sequence of actions that you follow in the early stages of the game to optimize your economy and military. A good build order will help you reach the next age faster, produce more villagers, and prepare for an attack or defense. There are many build orders for different situations and civilizations, but some common ones are Fast Castle, Scout Rush, Archer Rush, and Tower Rush.
        • -
        • Scout your map and your enemy. Scouting is essential for gathering information about your surroundings and your opponent. You should use your scout cavalry or other fast units to explore the map and find your resources, extra sheep, relics, gold and stone mines, choke points, hills, etc. You should also scout your enemy's base and see what they are doing, such as their economy, military, buildings, technologies, etc. This will help you plan your strategy accordingly.
        • -
        • Balance your economy. Your economy is the backbone of your game. You need to balance your resource income according to your needs and goals. You should always have enough villagers working on food, wood, gold, and stone to support your production and research. You should also avoid having idle villagers or excess resources that are not being used.
        • -
        • Use hotkeys and control groups. Hotkeys are keyboard shortcuts that allow you to perform actions faster and easier than using the mouse. You should learn and use hotkeys for creating units, buildings, technologies, commands, etc. Control groups are numbers that you assign to a group of units or buildings for quick selection and control. You should use control groups for your army, scouts, town centers, barracks, archery ranges, etc.
        • -
        • Micro and macro your units. Micro is the term used for controlling individual units or small groups of units in combat situations. Macro is the term used for managing your economy and production in large-scale situations. You should be able to do both effectively to win battles and games. For example, you should micro your archers to avoid melee units or skirmishers, while macroing your town centers to create more villagers or research upgrades.
        • -
        -

        Conclusion

        -

        Age2 x1.exe No Cd Crack is a modified version of the game's executable file that allows you to play Age of Empires 2: The Conquerors without inserting the CD. It has its pros and cons that you should consider before using it. It can save you disk space and hassle, but it can also cause legal issues and technical problems.

        -

        If you decide to use a No Cd Crack for Age of Empires 2: The Conquerors, you should follow the steps in this article to install it correctly and safely. You should also follow the tips and tricks in this article to improve your gameplay skills and strategies.

        -

        Age of Empires 2: The Conquerors is a classic game that has stood the test of time. It offers a variety of civilizations, scenarios, modes, and challenges that will keep you entertained for hours. Whether you play with or without a No Cd Crack, we hope you enjoy this game as much as we do.

        -

        FAQs

        -
          -
1. Is Age2 x1.exe No Cd Crack legal?
  A: No Cd Cracks are generally considered illegal because they violate the game's terms of service and copyright laws. They may also infringe on the rights of the game developers and publishers who invested time and money into creating the game.
2. Is Age2 x1.exe No Cd Crack safe?
  A: No Cd Cracks may not be safe because they may contain viruses or malware that can harm your computer or steal your personal information. They may also cause technical issues such as crashes, bugs, or compatibility problems with newer operating systems.
3. Where can I download Age2 x1.exe No Cd Crack?
  A: You can download Age2 x1.exe No Cd Crack from various sources on the internet such as GameCopyWorld, GameBurnWorld, baytanacomplamb.wixsite.com, movireralu.weebly.com, or MegaGames. However, you should be careful and scan the downloaded file with an antivirus program before opening it.
4. How can I play Age of Empires 2: The Conquerors without a No Cd Crack?
  A: You can play Age of Empires 2: The Conquerors without a No Cd Crack by either inserting your original CD or buying the digital version of the game from platforms such as Steam or Microsoft Store. These versions do not require a CD and are updated with bug fixes and new features.
5. How can I learn more about Age of Empires 2: The Conquerors?
  A: You can learn more about Age of Empires 2: The Conquerors by visiting the official website of the game, watching tutorials and guides on YouTube, reading forums and blogs, or joining online communities and Discord servers.
        -

        -
        -
        \ No newline at end of file diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/FxSound Enhancer 13.027 Crack Activation Code Free Download 2019.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/FxSound Enhancer 13.027 Crack Activation Code Free Download 2019.md deleted file mode 100644 index 8ba1fa096142ca2480da4e5dd4763010f861c254..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/FxSound Enhancer 13.027 Crack Activation Code Free Download 2019.md +++ /dev/null @@ -1,6 +0,0 @@ -

        FxSound Enhancer 13.027 Crack Activation Code Free Download 2019


        DOWNLOAD ··· https://urlgoal.com/2uCKXZ



        -
-FxSound Enhancer 13.027 Crack + Serial Key Free Download 2019. FxSound Enhancer offers you booming bass, crystal clear audio and ...
        -
        -
        -

        diff --git a/spaces/richardzhangy26/yandian_flow_classification/label/utils_action_recognition.py b/spaces/richardzhangy26/yandian_flow_classification/label/utils_action_recognition.py deleted file mode 100644 index 916e3af792da6ff1e60735912cd67c925bcc0b35..0000000000000000000000000000000000000000 --- a/spaces/richardzhangy26/yandian_flow_classification/label/utils_action_recognition.py +++ /dev/null @@ -1,902 +0,0 @@ -import time -import os -import pickle -import cv2 -import matplotlib.pyplot as plt -import matplotlib.animation as manimation -import matplotlib.patheffects as pe -import matplotlib.patches as mpatches -import torch -from torch.utils.data import DataLoader, TensorDataset -import torch.nn.functional as F -import torchvision.transforms as transforms -from sklearn.model_selection import train_test_split -from sklearn.metrics import confusion_matrix -import math -from tqdm import tqdm -# from tqdm import tnrange, tqdm_notebook #used when I run in colab/GCloud -from random import sample -import numpy as np -from collections import Counter -import sys -from PIL import Image - -#设置保存运行记录的文件夹 -def set_project_folder_dir(if_open_new_folder, local_dir, use_model_folder_dir=False, mode=None): - if use_model_folder_dir: - folder_dir = os.path.join(os.path.normpath(local_dir + os.sep + os.pardir), mode) - create_folder_dir_if_needed(folder_dir) - else: - if if_open_new_folder != 'False': - folder_dir = open_new_folder(if_open_new_folder, local_dir) - else: - folder_dir = local_dir - return folder_dir - -#每运行一次就创建一个新的文件夹来保存运行记录 -def open_new_folder(if_open_new_folder, local_dir): - if if_open_new_folder == 'True': - folder_name = time.strftime("%Y%m%d-%H%M%S") - else: - folder_name = 'debug' - folder_dir = os.path.join(local_dir, folder_name) - create_folder_dir_if_needed(folder_dir) - return folder_dir - -#保存设置信息 -def save_setting_info(args, device, folder_dir): - setting_file_name = os.path.join(folder_dir, 'setting_info.txt') - args_dict = args.__dict__ - with open(setting_file_name, 'w') as f: - for key, value in args_dict.items(): - f.write(key + ' : ' + str(value) + '\n') - f.write(str(device)) - -#图形化类别的数量(可删) -def plot_label_distribution(dataloaders, folder_dir, load_all_data_to_RAM_mode, label_decoder_dict, mode='train'): - if mode == 'train': - datasets = [dataloaders[dataloader_name].dataset for dataloader_name in dataloaders.keys()] - plot_distribution(datasets, list(dataloaders.keys()), load_all_data_to_RAM_mode, folder_dir, label_decoder_dict) - else: - plot_distribution([dataloaders.dataset], ['test'], load_all_data_to_RAM_mode, folder_dir, label_decoder_dict) - -#图形化类别数量(可删) -def plot_distribution(datasets_list, dataset_names_list, load_all_data_to_RAM_mode, folder_dir, label_decoder_dict): - plt.figure(figsize=(10, 6)) - for index, dataset in enumerate(datasets_list): - if load_all_data_to_RAM_mode: - counter_occurrence_of_each_class = Counter(dataset.tensors[1].tolist()) - else: - counter_occurrence_of_each_class = Counter(dataset.labels) - with open(os.path.join(folder_dir, 'frequency_of_each_class_{}.pkl'.format(dataset_names_list[index])), 'wb') as f: - pickle.dump(counter_occurrence_of_each_class, f, pickle.HIGHEST_PROTOCOL) - sorted_counter = sorted(counter_occurrence_of_each_class.items()) - x, y = zip(*sorted_counter) - plt.bar(x, y) - plt.legend(dataset_names_list) - plt.title('The frequency of each class\n' + '&'.join(dataset_names_list)) - plt.xlabel('label') - plt.ylabel('Frequency') - x_ticks_labels = [label_decoder_dict[label_code] for label_code in 
x] - plt.xticks(x, x_ticks_labels, fontsize=8, rotation=90) - plt.yticks(fontsize=8) - plt.tight_layout() - plt.xlim(-1, max(x) + 1) - plt.savefig(os.path.join(folder_dir, '_'.join(dataset_names_list) + '.jpg'), dpi=300, bbox_inches="tight") - plt.close() - - -#划分数据集 -def split_data(ucf_list_root): - video_names_train, video_names_val, video_names_test, labels_train, labels_val, labels_test = get_video_list(ucf_list_root) - - ''' - save_video_names_test_and_add_labels(video_names_test, labels_decoder_dict, folder_dir, number_of_classes) - # save labels_decoder_dict - with open(os.path.join(folder_dir, 'labels_decoder_dict.pkl'), 'wb') as f: - pickle.dump(labels_decoder_dict, f, pickle.HIGHEST_PROTOCOL) - ''' - - return [video_names_train, labels_train], [video_names_val, labels_val], [video_names_test, labels_test] - - -def split_data_for_test(ucf_list_root): - video_names_train, video_names_val, video_names_test, labels_train, labels_val, labels_test = get_video_list(ucf_list_root) - - ''' - save_video_names_test_and_add_labels(video_names_test, labels_decoder_dict, folder_dir, number_of_classes) - # save labels_decoder_dict - with open(os.path.join(folder_dir, 'labels_decoder_dict.pkl'), 'wb') as f: - pickle.dump(labels_decoder_dict, f, pickle.HIGHEST_PROTOCOL) - ''' - - return video_names_test, labels_test - -#将数据文件设置为列表,这样就不会使系统过载 -def get_data(video_names, list, labels=[]): - # setting the data files as a list so the not overpower the system - dict = {'0012': 0, '0221': 1, '1012': 2, '1102': 3, '1122': 4, '1221': 5} - for video_name in video_names: - name = video_name.split('_')[0] - # rstrip()函数可以帮助删除字符串末尾的指定字符、字符序列或者空白字符 - label = dict[name] - - labels.append(label) - list.append(video_name) - - - return list, labels - - -def get_video_list(ucf_list_root): - # ====== get a list of video names ====== - video_names_train, video_names_val, video_names_test, labels_train, labels_val, labels_test = [], [], [], [], [], [] - for dir in os.listdir(ucf_list_root): - file_path = os.path.join(ucf_list_root, dir) - video_names = [] - for file in os.listdir(file_path): - video_names.append(file) - if dir == "train": - video_names_train, labels_train = get_data(video_names, video_names_train, labels_train) - elif dir == "val": - video_names_val, labels_val = get_data(video_names, video_names_val, labels_val) - else: - video_names_test, labels_test = get_data(video_names, video_names_test, labels_test) - - - return video_names_train, video_names_val, video_names_test, labels_train, labels_val, labels_test - -def save_video_names_test_and_add_labels(video_names_test, labels_decoder_dict, folder_dir, number_of_classes): - save_test_video_details = os.path.join(folder_dir, 'test_videos_detailes.txt') - with open(save_test_video_details, 'w') as f: - for text_video_name in video_names_test: - label_string = text_video_name.split('/')[0] - # endoce label - for key, value in labels_decoder_dict.items(): - if value == label_string: - label_code = key - else: - continue - if number_of_classes is None or label_code in range(0, number_of_classes): - f.write(text_video_name + ' ' + str(label_code) + '\n') - else: - continue - -#查看验证集中预测标签和真实标签的差别(可删) -def plot_images_with_predicted_labels(local_x, label_decoder_dict, predicted_labels, folder_dir, epoch): - folder_save_images = os.path.join(folder_dir, 'Images') - create_folder_dir_if_needed(folder_save_images) - n_rows = math.trunc(math.sqrt(len(local_x))) - n_cols = n_rows - if n_rows == 1 and n_cols == 1: - plot_single_images_with_predicted_labels(local_x, 
label_decoder_dict, predicted_labels, folder_save_images, epoch) - else: - fig, ax = plt.subplots(ncols=n_cols, nrows=n_rows, figsize=(10, 10)) - for row in range(n_rows): - for col in range(n_cols): - img = local_x[col + (row * n_cols)][0].permute(1, 2, 0) - img_scale = (img - img.min()) / (img.max() - img.min()) - ax[row, col].imshow(img_scale) - label_for_title = label_decoder_dict[predicted_labels[col + (row * n_cols)].item()] - ax[row, col].set_title(label_for_title) - ax[row, col].set_xticks([]) - ax[row, col].set_yticks([]) - plt.savefig(os.path.join(folder_save_images, 'predicted_labels {} epoch.png'.format(epoch))) - plt.close() - -#同上(可删) - -def plot_single_images_with_predicted_labels(local_x, label_decoder_dict, predicted_labels, folder_save_images, epoch): - fig, ax = plt.subplots(figsize=(10, 10)) - img = local_x[0][0].permute(1, 2, 0) - img_scale = (img - img.min()) / (img.max() - img.min()) - ax.imshow(img_scale) - label_for_title = label_decoder_dict[predicted_labels[0].item()] - ax.set_title(label_for_title) - ax.set_xticks([]) - ax.set_yticks([]) - plt.savefig(os.path.join(folder_save_images, 'predicted_labels {} epoch.png'.format(epoch))) - plt.close() - -#创建一个文件夹 -def create_folder_dir_if_needed(folder_save_dir): - if not os.path.exists(folder_save_dir): - os.makedirs(folder_save_dir) - - -#没有使用(可删) -def load_all_dataset_to_RAM(dataloaders, dataset_order, batch_size): - images_train, labels_train, images_val, labels_val = [], [], [], [] - for i, mode in enumerate(['train', 'val']): - images_list = [images_train, images_val][i] - labels_list = [labels_train, labels_val][i] - with tqdm(total=len(dataloaders[mode])) as pbar: - # with tqdm_notebook(total=len(dataloaders[mode])) as pbar: - for local_images, local_label in dataloaders[mode]: - images_list += [local_images] - labels_list += [local_label] - pbar.update(1) - images_train = torch.cat(images_train, axis=0) - labels_train = torch.cat(labels_train, axis=0) - images_val = torch.cat(images_val, axis=0) - labels_val = torch.cat(labels_val, axis=0) - datasets = {dataset_order[index]: TensorDataset(x[0], x[1]) for index, x in - enumerate([[images_train, labels_train], [images_val, labels_val]])} - dataloaders = {x: DataLoader(datasets[x], batch_size=batch_size, shuffle=True) - for x in ['train', 'val']} - return dataloaders - - -#没有使用(可删) -def load_all_dataset_to_RAM_test(dataloader, batch_size): - images_test, labels_test = [], [] - with tqdm(total=len(dataloader)) as pbar: - # with tqdm_notebook(total=len(dataloader)) as pbar: - for local_images, local_label in dataloader: - images_test += [local_images] - labels_test += [local_label] - pbar.update(1) - images_test = torch.cat(images_test, axis=0) - labels_test = torch.cat(labels_test, axis=0) - dataset = TensorDataset(images_test, labels_test) - dataloader = DataLoader(dataset, batch_size=batch_size, shuffle=False) - return dataloader - -#设置每批次处理完之后为None,否则LSTM会使用前面的信息 -def foward_step_no_labels(model, images): - # Must be done before you run a new batch. Otherwise the LSTM will treat a new batch as a continuation of a sequence - model.Lstm.reset_hidden_state() - with torch.no_grad(): - output = model(images) - predicted_labels = output.detach().cpu().argmax(dim=1) - return predicted_labels - -#设置每批次处理完之后为None,否则LSTM会使用前面的信息 -def foward_step(model, images, labels, criterion, mode=''): # predections - # Must be done before you run a new batch. 
Otherwise the LSTM will treat a new batch as a continuation of a sequence - model.Lstm.reset_hidden_state() - if mode == 'test': - with torch.no_grad(): - output = model(images) - else: - output = model(images) - loss = criterion(output, labels) - # Accuracy calculation - predicted_labels = output.detach().argmax(dim=1) - acc = (predicted_labels == labels).cpu().numpy().sum() - return loss, acc, predicted_labels.cpu() - - -#训练模型 -def train_model(model, dataloader, device, optimizer, criterion): - train_loss, train_acc = 0.0, 0.0 - model.train() - with tqdm(total=len(dataloader)) as pbar: - # with tqdm_notebook(total=len(dataloader)) as pbar: - for local_images, local_labels, ___ in dataloader: - local_images, local_labels = local_images.to(device), local_labels.to(device) - #将参数梯度归零 - optimizer.zero_grad() # zero the parameter gradients - loss, acc, ___ = foward_step(model, local_images, local_labels, criterion, mode='train') - train_loss += loss.item() - train_acc += acc - #计算梯度 - loss.backward() # compute the gradients - #用梯度更新参数 - optimizer.step() # update the parameters with the gradients - pbar.update(1) - - train_acc = 100 * (train_acc / dataloader.dataset.__len__()) - train_loss = train_loss / len(dataloader) - return train_loss, train_acc - - -#验证和测试模型 -def test_model(model, dataloader, device, criterion, mode='test'): - val_loss, val_acc = 0.0, 0.0 - model.eval() - if mode == 'save_prediction_label_list': - prediction_labels_list = [] - true_labels_list = [] - with tqdm(total=len(dataloader)) as pbar: - # with tqdm_notebook(total=len(dataloader)) as pbar: - for local_images, local_labels, indexs in dataloader: - local_images, local_labels = local_images.to(device), local_labels.to(device) - loss, acc, predicted_labels = foward_step(model, local_images, local_labels, criterion, mode='test') - if mode == 'save_prediction_label_list': - prediction_labels_list += [predicted_labels.detach().cpu()] - true_labels_list += [local_labels.detach().cpu()] - val_loss += loss.item() - val_acc += acc - pbar.update(1) - val_acc = 100 * (val_acc / dataloader.dataset.__len__()) - val_loss = val_loss / len(dataloader) - if mode == 'save_prediction_label_list': - return val_loss, val_acc, prediction_labels_list, local_images.cpu(), true_labels_list, indexs - else: - return val_loss, val_acc, predicted_labels, local_images.cpu() - -#原模型中没有使用 -def test_model_continues_movie(model, dataloader, device, criterion, save_path, label_decoder_dict): - val_loss, val_acc = 0.0, 0.0 - model.eval() - # ====== choosing one random batch from the dataloader ====== - dataloader_iter = iter(dataloader) - images, labels, ___ = next(dataloader_iter) - predicted_labels_list = [] - # ===== create continues movie and labels tensor, with X frames from each movie ====== - # ===== and stack a sliding window of size 5 frames to new dim so they will act as batch ====== - num_frames_to_sample = images.shape[1] - sliding_window_images, continues_labels, continues_movie = create_sliding_window_x_frames_size_dataset\ - (images, labels, num_frames_to_sample) - # ====== predict the label of each sliding window, use batches because of GPU memory ====== - for batch_boundaries in range(0, len(sliding_window_images), dataloader.batch_size): - batch_images_to_plot = sliding_window_images[batch_boundaries: batch_boundaries + dataloader.batch_size].to( - device) - batch_labels = continues_labels[batch_boundaries: batch_boundaries + dataloader.batch_size].to(device) - loss, acc, predicted_labels = foward_step(model, batch_images_to_plot, 
batch_labels, criterion, mode='test') - predicted_labels_list += [predicted_labels] - val_acc += acc - predicted_labels = torch.cat(predicted_labels_list, axis=0) - val_loss += loss.item() - create_video_with_labels(save_path, 'Video_with_prediction_vs_true_labels.avi', continues_movie, continues_labels, - predicted_labels, label_decoder_dict, mode='continues_test_movie') - save_path_plots = os.path.join(save_path, 'Plots') - create_folder_dir_if_needed(save_path_plots) - plot_sliding_window_prediction_for_each_frame(continues_labels, predicted_labels, save_path_plots, - label_decoder_dict, labels) - plot_function_of_num_frames_in_window_on_prediction(continues_labels, predicted_labels, save_path_plots, - num_frames_to_sample) - val_acc = 100 * (val_acc / len(sliding_window_images)) - val_loss = val_loss / len(dataloader) - return val_loss, val_acc, predicted_labels, images.cpu() - -#原模型中没有使用 -def test_model_continues_movie_youtube(model, data, device, save_path, label_decoder_dict, batch_size, - preprocessing_movie_mode, dataset_type='youtube', video_original_size=None): - model.eval() - if preprocessing_movie_mode == 'preprocessed': - # ====== choosing one random batch from the dataloader ====== - dataloader_iter = iter(data) - images = next(dataloader_iter) - images = images.squeeze(0) - video_original_size = video_original_size[dataloader_iter._dataset.images[0].split('.avi')[0]] - else: - images = data - num_frames_to_sample = 5 - # ===== create continues movie and labels tensor, with X frames from each movie ====== - # ===== and stack a sliding window of size 5 frames to new dim so they will act as batch ====== - sliding_window_images = create_sliding_window_x_frames_size_dataset \ - (images, None, num_frames_to_sample, dataset_type) - # ====== predict the label of each sliding window, use batches beacuse of GPU memory ====== - predicted_labels = predict_labels_of_sliding_window(sliding_window_images, batch_size, device, model) - save_path_plots = os.path.join(save_path, 'Plots') - create_folder_dir_if_needed(save_path_plots) - plot_sliding_window_prediction_for_each_frame_no_labels(predicted_labels, save_path_plots, label_decoder_dict) - create_video_with_labels(save_path, 'Video_with_prediction_vs_true_labels.avi', - images[:len(images) - num_frames_to_sample + 1], None, predicted_labels, - label_decoder_dict, - video_original_size=video_original_size, fps=5, mode='youtube') - -#原模型中没有使用 -def create_sliding_window_x_frames_size_dataset(local_images, local_labels, num_frames_to_sample, - dataset_type='UCF101'): - """" - This function would join all of the images in the batch to one long continues movie, which would be - composed from num_batch human action movies (shape - num_batch*num_frames_to_sample, 3, 224, 224). - Than, a sliding window of num_frames_to_sample would be passed on the continues movie, - creating a stack of mini videos that can be used as an input to the LRCN network. 
- (shape - (num_batch - num_frames_to_sample+1), num_of_frames_to_samples, 3, 224, 224) - The label for each sliding window would be set according the majority of frames we have for each action, - meaning if the sliding window has 3 frames from the first action and two from the next action, the label of the sliding - window would be the first action - """ - # ===== create continues movie, with X frames from each movie ====== - if dataset_type == 'UCF101': - local_images = local_images[:, :num_frames_to_sample] - continues_frames = local_images.view(local_images.shape[0] * local_images.shape[1], local_images.shape[2], - local_images.shape[3], local_images.shape[4]) - else: - continues_frames = local_images - sliding_window_images = [] - for num_frame in range(continues_frames.shape[0] - num_frames_to_sample + 1): - # ===== normalize the frames according to the imagenet preprocessing ======= - sliding_window_images += [continues_frames[num_frame: num_frame + num_frames_to_sample]] - sliding_window_images = torch.stack(sliding_window_images) - continues_frames = continues_frames[:len(sliding_window_images)] - if dataset_type == 'UCF101': - # ==== create continues label tensor where each frame has its own label ====== - majority_of_num_of_frames = math.ceil( - num_frames_to_sample / 2) if num_frames_to_sample % 2 != 0 else num_frames_to_sample / 2 + 1 - mid_continues_labels = local_labels[1:len(local_labels) - 1].view(-1, 1).repeat(1, num_frames_to_sample).view( - -1) - start_continues_labels = local_labels[0].view(-1, 1).repeat(1, majority_of_num_of_frames).view(-1) - end_continues_labeels = local_labels[-1].view(-1, 1).repeat(1, majority_of_num_of_frames).view(-1) - continues_labels = torch.cat((start_continues_labels, mid_continues_labels, end_continues_labeels)) - return sliding_window_images, continues_labels, continues_frames - else: - return sliding_window_images - -#原模型中没有使用 -def plot_function_of_num_frames_in_window_on_prediction(continues_labels, predicted_labels, save_path_plots, - num_frames_to_sample): - mean_acc_array = [] - for num_frames in range(num_frames_to_sample): - predicted_labels_with_num_frames_in_window = np.array( - [predicted_labels[i] for i in range(num_frames, len(predicted_labels), num_frames_to_sample)]) - labels_with_num_frames = np.array( - [continues_labels[i] for i in range(num_frames, len(continues_labels), num_frames_to_sample)]) - mean_acc_array += [(predicted_labels_with_num_frames_in_window == labels_with_num_frames).sum() / len( - labels_with_num_frames) * 100] - mean_acc_array.reverse() - x_axis = np.arange(num_frames_to_sample) - plt.plot(x_axis, mean_acc_array, linestyle='-', marker="o") - plt.xticks(x_axis, np.arange(num_frames_to_sample, 0, -1)) - plt.xlabel('Number of frames from a specific human action') - plt.ylabel('Mean accuracy [%]') - plt.ylim(0, 100) - plt.title('Change in accuracy with the change in frame num') - plt.savefig(os.path.join(save_path_plots, 'analysis_of_predicted_labels_in_sliding_window.png'), dpi=300, - bbox_inches='tight') - plt.close() - -#保存loss信息到文件中 -def save_loss_info_into_a_file(train_loss, train_acc, val_loss, val_acc, test_loss, test_acc, folder_dir, epoch): - file_name = os.path.join(folder_dir, 'loss_per_epoch.txt') - with open(file_name, 'a+') as f: - f.write('Epoch {} : Train loss {:.8f}, Train acc {:.4f}, Val loss {:.8f}, Val acc {:.4f}, Test loss {:.8f},Test acc {:.4f}\n' - .format(epoch, train_loss, train_acc, val_loss, val_acc, test_loss, test_acc)) - - - -def save_loss_info_into_a_file_old(train_loss, 
val_loss, train_acc, val_acc, folder_dir, epoch): - file_name = os.path.join(folder_dir, 'loss_per_epoch.txt') - with open(file_name, 'a+') as f: - f.write('Epoch {} : Train loss {:.8f}, Train acc {:.4f}, Val loss {:.8f}, Val acc {:.4f}\n' - .format(epoch, train_loss, train_acc, val_loss, val_acc)) - - -#变化图片形状 -def set_transforms(mode): - if mode == 'train': - transform = transforms.Compose( - [transforms.Resize(256), # this is set only because we are using Imagenet pre-train model. - transforms.RandomCrop(224), - # transforms.RandomHorizontalFlip(), - transforms.ToTensor(), - # transforms.Normalize(mean=(0.485, 0.456, 0.406), - # std=(0.229, 0.224, 0.225)) - ]) - elif mode == 'test' or mode == 'val': - transform = transforms.Compose([transforms.Resize((224, 224)), - transforms.ToTensor() - # transforms.Normalize(mean=(0.485, 0.456, 0.406), - # std=(0.229, 0.224, 0.225)) - ]) - return transform - -#生成新的视频文件 -def create_new_video(save_path, video_name, image_array): - (h, w) = image_array[0].shape[:2] - if len(video_name.split('/')) > 1: - video_name = video_name.split('/')[1] - else: - video_name = video_name.split('.mp4')[0] - video_name = video_name + '.avi' - save_video_path = os.path.join(save_path, video_name) - output_video = cv2.VideoWriter(save_video_path, cv2.VideoWriter_fourcc(*'MJPG'), 5, (w, h), True) - for frame in range(len(image_array)): - output_video.write(image_array[frame]) - output_video.release() - cv2.destroyAllWindows() - -#生成新的带标签的视频 -def create_video_with_labels(save_path, video_name, image_array, continues_labels, predicted_labels, label_decoder_dict, - video_original_size=None, fps=2.5, mode='single_movie'): - if mode == 'single_movie': - predicted_labels = torch.tensor(predicted_labels) - predicted_labels = predicted_labels.view(-1, 1).repeat(1, len(image_array)).view(-1) - path_save_videos = os.path.join(save_path, 'Videos') - create_folder_dir_if_needed(path_save_videos) - dpi = 300 - w, h = setting_video_size(video_original_size) - image_array = F.interpolate(image_array, size=(h, w)) - image_array = image_array.transpose(2, 1).transpose(2, 3) - n_frames = len(image_array) - figure_size_w = round((w - 50) / float(dpi) * 2) - figure_size_h = round(h / float(dpi) * 3) - h_fig = plt.figure(figsize=(figure_size_w, figure_size_h), dpi=dpi) - # ====== plot frame, would change with every frame ====== - if mode != 'youtube': - h_ax = h_fig.add_axes([0.08, 0.25, 0.85, 0.8]) - else: - h_ax = h_fig.add_axes([0.03, 0.1, 0.95, 0.95]) - img = (image_array[0] - image_array[0].min()) / (image_array[0].max() - image_array[0].min()) - h_im = h_ax.matshow(img) - h_ax.set_axis_off() - h_im.set_interpolation('none') - h_ax.set_aspect('equal') - # ======== plot the label prediction with the frame ===== - if mode != 'youtube': - h_ax_plot = h_fig.add_axes([0.08, 0.25, 0.85, 0.05]) - else: - h_ax_plot = h_fig.add_axes([0.03, 0.25, 0.95, 0.04]) - # h_ax_plot = h_fig.add_axes([0.1, 0.22, 0.8, 0.06]) - x_array = np.arange(len(predicted_labels)) + 0.5 - y_array = np.zeros(len(x_array)) - bool_array = None if continues_labels is None else continues_labels == predicted_labels - color_dict = create_color_dict(predicted_labels) - color_list = [] - h_text_object = set_text_to_video_frame(continues_labels, label_decoder_dict, - predicted_labels, mode, bool_array=bool_array) - - FFMpegWriter = manimation.writers['ffmpeg'] - metadata = dict(title=video_name, artist='Matplotlib') - writer = FFMpegWriter(fps=fps, metadata=metadata) - with writer.saving(h_fig, os.path.join(path_save_videos, 
video_name), dpi=dpi): # change from 600 dpi - for i in range(n_frames): - set_text_to_video_frame(continues_labels, label_decoder_dict, - predicted_labels, mode, h_text_object=h_text_object, bool_array=bool_array, frame=i) - img = (image_array[i] - image_array[i].min()) / (image_array[i].max() - image_array[i].min()) - h_im.set_array(img) - if i > 0: - h_im_2.remove() - y_array[:i + 1] = 1 - if mode != 'continues_test_movie': - color_list += [color_dict[predicted_labels[i].item()]] - else: - color_list += ['green' if bool_array[i].item() else color_dict[predicted_labels[i].item()]] - h_im_2 = h_ax_plot.bar(x_array, y_array, color=color_list, width=1.0) - h_ax_plot.get_yaxis().set_ticks([]) - h_ax_plot.set_ylim(0, 1) - h_ax_plot.tick_params(axis="x", labelsize=4) - h_ax_plot.set_xlim(0, len(x_array)) - writer.grab_frame() - plt.close() - -#原模型中没有使用 -def setting_sample_rate(num_frames_to_extract, sampling_rate, video, fps, ucf101_fps): - video.set(cv2.CAP_PROP_POS_AVI_RATIO, 1) - video_length = video.get(cv2.CAP_PROP_POS_MSEC) / 1000 - num_frames = int(video_length * fps) - if num_frames_to_extract == 'all': - sample_start_point = 0 - if fps != ucf101_fps and sampling_rate != 0: - sampling_rate = math.ceil(fps / (ucf101_fps / sampling_rate)) - elif video_length < (num_frames_to_extract * sampling_rate): - sample_start_point = 0 - sampling_rate = 2 - else: - sample_start_point = sample(range(num_frames - (num_frames_to_extract * sampling_rate)), 1)[0] - return sample_start_point, sampling_rate, num_frames - -#采样视频中部分帧 -def capture_and_sample_video(row_data_dir, video_name, num_frames_to_extract, sampling_rate, fps, save_path, - ucf101_fps, processing_mode): - video = cv2.VideoCapture(os.path.join(row_data_dir, video_name)) - if fps == 'Not known': - fps = video.get(cv2.CAP_PROP_FPS) - video_width = video.get(cv2.CAP_PROP_FRAME_WIDTH) - video_height = video.get(cv2.CAP_PROP_FRAME_HEIGHT) - sample_start_point, sampling_rate, num_frames = setting_sample_rate(num_frames_to_extract, sampling_rate, video, - fps, ucf101_fps) - # ====== setting the video to start reading from the frame we want ====== - image_array = [] - if num_frames_to_extract == 'all': - num_frames_to_extract = int(num_frames / sampling_rate) if sampling_rate != 0 else num_frames - if processing_mode == 'live': - transform = set_transforms(mode='test') - for frame in range(num_frames_to_extract): - video.set(1, sample_start_point) - success, image = video.read() - if not success: - print('Error in reading frames from row video') - else: - RGB_img = cv2.cvtColor(image, cv2.COLOR_BGR2RGB) if processing_mode == 'live' else image - image = Image.fromarray(RGB_img.astype('uint8'), 'RGB') - if processing_mode == 'live': - image_array += [transform(image)] - else: - image_array += [np.uint8(image)] - sample_start_point = sample_start_point + sampling_rate - video.release() - if processing_mode == 'main': - create_new_video(save_path, video_name, image_array) - return image_array, [video_width, video_height] - - -def load_test_data(model_dir, mode='load_all'): - global_dir = os.path.normpath(model_dir + os.sep + os.pardir) - if mode == 'load_all': - with open(os.path.join(global_dir, 'test_videos_detailes.txt')) as f: - video_list = f.readlines() - test_videos_names, labels = [], [] - for video_name_with_label in video_list: - video_name, label = video_name_with_label.split(' ') - test_videos_names += [video_name] - labels += [int(label.rstrip('\n'))] - # open labels_decoder_dict - with open(os.path.join(global_dir, 
'labels_decoder_dict.pkl'), 'rb') as f: - labels_decoder_dict = pickle.load(f) - if mode == 'load_all': - return test_videos_names, labels, labels_decoder_dict - else: - return labels_decoder_dict - - -#原模型中没有使用 -def plot_confusion_matrix(predicted_labels, true_labels, label_decoder_dict, save_path): - class_order_to_plot = list(label_decoder_dict.keys())[:true_labels.max() + 1] - cm = confusion_matrix(true_labels, predicted_labels, labels=class_order_to_plot, normalize='true') - # ==== plot the cm as heatmap ====== - plt.figure(figsize=(8, 6)) - plt.imshow(cm, interpolation='none', aspect='auto', cmap=plt.cm.Blues) - cb = plt.colorbar() - cb.ax.tick_params(labelsize=10) - x_labels = [label_decoder_dict[label_code] for label_code in class_order_to_plot] - plt.xticks(class_order_to_plot, x_labels, rotation=90, fontsize=6) - plt.yticks(class_order_to_plot, x_labels, fontsize=6) - plt.ylim(len(class_order_to_plot), -0.5) - plt.title('Normalized confusion matrix') - plt.tight_layout() - plt.savefig(os.path.join(save_path, 'Normalized_confusion_matrix.png'), dpi=300, bbox_inches='tight') - plt.close() - -#画每一类的准确度 -def plot_acc_per_class(predicted_labels, true_labels, label_decoder_dict, save_path): - # ===== count the number of times each class appear in the test data ===== - frequency_of_each_class = Counter(true_labels.tolist()) - # ===== load the frequency counter for the train dataset, would be used to mark low frequency classes ===== - global_dir = os.path.normpath(save_path + os.sep + os.pardir + os.sep + os.pardir) - with open(os.path.join(global_dir, 'frequency_of_each_class_train.pkl'), 'rb') as f: - frequency_of_each_class_train = pickle.load(f) - # ===== count the number of times each class is labeled correctly ======= - class_list = list(label_decoder_dict.keys())[: true_labels.max() + 1] - acc = true_labels == predicted_labels - counter_correct_labeled = Counter() - for index, true_label in enumerate(true_labels): - counter_correct_labeled[true_label.item()] += acc[index].item() - # ==== calculate the accuracy to predict each class ===== - acc_per_class = [] - mean_frequency = sum(list(frequency_of_each_class_train.values())) / len(frequency_of_each_class_train) - classes_with_lower_frequency_compare_to_average = [] - for class_ in class_list: - acc_per_class += [counter_correct_labeled[class_] / frequency_of_each_class[class_] * 100] - if frequency_of_each_class_train[class_] <= (0.9 * mean_frequency): - classes_with_lower_frequency_compare_to_average += [class_] - acc_classes_with_lower_frequency_compare_to_average = [acc_per_class[class_] for class_ in - classes_with_lower_frequency_compare_to_average] - plt.figure(figsize=(10, 10)) - plt.bar(class_list, acc_per_class) - plt.bar(classes_with_lower_frequency_compare_to_average, acc_classes_with_lower_frequency_compare_to_average, - color='red') - x_labels = [label_decoder_dict[label_code] for label_code in class_list] - plt.xticks(class_list, x_labels, rotation=90, fontsize=12) - plt.yticks(fontsize=12) - plt.xlabel('Classes', fontsize=16) - plt.ylabel('Accuracy [%]', fontsize=16) - plt.xlim(-1, class_list[-1] + 1) - plt.ylim(0, 109) - plt.legend(['freq > 0.9 * avr freq of a class', 'freq <= 0.9 * avr freq of a class']) - plt.title('The accuracy score for each class', fontsize=18) - plt.tight_layout() - plt.savefig(os.path.join(save_path, 'The_accuracy_score_for_each_class.png'), dpi=300, bbox_inches='tight') - plt.close() - -#判断batch_size大小是否超过类别数(未使用) -def check_if_batch_size_bigger_than_num_classes(batch_size, 
num_of_classes): - if num_of_classes is None: - num_of_classes = 101 - if batch_size > num_of_classes: - print( - 'Your batch size is bigger than the num of classes you are testing. This would cause an Error in the custom sampler. Your options are:\n' - '1. Reduce the batch size so it would be smaller or equal to the number of classes you are testing.\n' - '2. Reduce the number of classes so it would be bigger or equal to the batch size.\n' - '3. Stop using the custom sampler: erase the sampler parameter from the dataloader and change the shuffle ' - 'parameter to True.') - sys.exit() - -#原模型中没有使用 -def plot_sliding_window_prediction_for_each_frame(continues_labels, predicted_labels, save_path_plots, - label_decoder_dict, original_order_of_labels): - max_label_code = max(max(predicted_labels).item(), max(continues_labels).item()) - predicted_labels_one_hot = create_one_hot_vector_matrix(predicted_labels.numpy(), max_label_code) - labels_one_hot = create_one_hot_vector_matrix(continues_labels.numpy(), max_label_code) - labels_one_hot = labels_one_hot * 2 - one_hot_matrix_to_plot = predicted_labels_one_hot + labels_one_hot - one_hot_matrix_to_plot, labels_new_order = resort_matrix(original_order_of_labels, one_hot_matrix_to_plot) - one_hot_matrix_to_plot = one_hot_matrix_to_plot[~np.all(one_hot_matrix_to_plot == 0, axis=1)] - one_hot_matrix_to_plot = np.apply_along_axis(increase_the_error_value_for_non_neighbors_labels, 0, - one_hot_matrix_to_plot) - plt.figure(figsize=(12, 10)) - if 5 not in np.unique(one_hot_matrix_to_plot): - one_hot_matrix_to_plot = np.vstack((one_hot_matrix_to_plot, np.full((1, one_hot_matrix_to_plot.shape[1]), 5))) - im = plt.imshow(one_hot_matrix_to_plot[:-1,:], cmap='bwr', aspect='auto') - values = ['None', 'Predicted_labels_next_movie', 'true_label', 'predicted_label_is_true_label'] - else: - im = plt.imshow(one_hot_matrix_to_plot, cmap='bwr', aspect='auto') - values = ['None', 'Predicted_labels_next_movie', 'true_label', 'predicted_label_is_true_label', 'Predicted_label_errors'] - skip_x_ticks = math.ceil(len(continues_labels) / 15) - x_array = np.arange(0, len(continues_labels), skip_x_ticks) - y_labels = [label_decoder_dict[label_code] for label_code in labels_new_order] - plt.ylim(len(y_labels), -0.3) - plt.xticks(x_array, x_array, fontsize=10) - plt.yticks(np.arange(len(labels_new_order)), y_labels, fontsize=10) - # ==== create coustomize legand to the heat map ===== - colors = [im.cmap(im.norm(value)) for value in range(len(values))] - patches = [mpatches.Patch(color=colors[i], label=values[i], edgecolor='b') for i in - range(len(values))] - plt.legend(handles=patches, bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.5, frameon=True) - plt.title('Label Prediction in each frame', fontsize=14) - plt.savefig(os.path.join(save_path_plots, 'change_in_accuracy_with_the_movment_of_sliding_window.png'), dpi=300, - bbox_inches='tight') - plt.close() - - -def create_one_hot_vector_matrix(array, array_max): - one_hot_array = np.zeros((array.size, array_max + 1)) - one_hot_array[np.arange(array.size), array] = 1 - one_hot_array = one_hot_array.transpose() - return one_hot_array - - -def resort_matrix(labels_order, matrix): - sorted_matrix = np.zeros(matrix.shape) - classes_that_we_plotted = [] - for row_index, label in enumerate(labels_order): - if label.item() in classes_that_we_plotted: - pass - else: - sorted_matrix[row_index] = matrix[label.item()] - classes_that_we_plotted += [label.item()] - index_of_filled_rows = row_index + 1 - for index in range(len(matrix)): - if 
index in classes_that_we_plotted: - pass - else: - sorted_matrix[index_of_filled_rows] = matrix[index] - index_of_filled_rows += 1 - if np.nonzero(matrix[index])[0].size != 0 and index not in classes_that_we_plotted: - classes_that_we_plotted += [index] - return sorted_matrix, classes_that_we_plotted - - -def increase_the_error_value_for_non_neighbors_labels(matrix_col): - indices_of_non_zero_elements = np.nonzero(matrix_col) - if len(indices_of_non_zero_elements[0]) > 1: - dist_between_indices = indices_of_non_zero_elements[0][1] - indices_of_non_zero_elements[0][0] - if dist_between_indices > 1: - matrix_col[matrix_col == 1] = 5 - return matrix_col - - -def print_dataset_type_error(): - print( - 'You have enter a wrong dataset type in the dataset function. please fix it. possabilites are youtube or UCF101(the default)') - sys.exit() - - -def plot_sliding_window_prediction_for_each_frame_no_labels(predicted_labels, save_path_plots, label_decoder_dict): - original_order_of_labels = [] - for label in predicted_labels: - if label.item() in original_order_of_labels: - pass - else: - original_order_of_labels += [label] - one_hot_matrix_to_plot = create_one_hot_vector_matrix(predicted_labels.numpy(), max(predicted_labels).item()) - one_hot_matrix_to_plot, ____ = resort_matrix(original_order_of_labels, one_hot_matrix_to_plot) - one_hot_matrix_to_plot = one_hot_matrix_to_plot[~np.all(one_hot_matrix_to_plot == 0, axis=1)] - fig, ax = plt.subplots(figsize=(12, 10)) - im = ax.imshow(one_hot_matrix_to_plot, cmap='GnBu', aspect='auto') - skip_x_ticks = math.ceil(len(predicted_labels) / 15) - x_array = np.arange(0, len(predicted_labels), skip_x_ticks) - y_labels = [label_decoder_dict[label_code.item()] for label_code in original_order_of_labels] - y_ticks = np.arange(len(original_order_of_labels)) - ax.set_ylim(len(y_labels), -0.3) - ax.set_xticklabels(x_array, fontsize=10) - ax.set_yticks(y_ticks) - ax.set_yticklabels(y_labels, fontsize=10) - # ==== create coustomize legand to the heat map ===== - values = ['None', 'Predicted_labels'] - colors = [im.cmap(im.norm(value)) for value in range(len(values))] - patches = [mpatches.Patch(color=colors[i], label=values[i], edgecolor='b') for i in - range(len(values))] - plt.legend(handles=patches, bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.5, frameon=True) - plt.title('Label Prediction in each frame', fontsize=14) - plt.savefig(os.path.join(save_path_plots, 'change_in_accuracy_with_the_movement_of_sliding_window.png'), dpi=300, - bbox_inches='tight') - - -def print_error_preprocessing_movie_mode(): - print('Your value in the pre-processing movie mode is incorrect. your options are:\n' - '1. live pre-processing.\n' - '2. pre-processied movie. 
\n' - 'please choose one of them') - sys.exit() - - -def predict_labels_of_sliding_window(sliding_window_images, batch_size, device, model): - predicted_labels_list = [] - for batch_boundaries in range(0, len(sliding_window_images), batch_size): - batch_images_to_plot = sliding_window_images[batch_boundaries: batch_boundaries + batch_size].to(device) - predicted_labels = foward_step_no_labels(model, batch_images_to_plot) - predicted_labels_list += [predicted_labels] - return torch.cat(predicted_labels_list, axis=0) - - -def set_text_to_video_frame(continues_labels, label_decoder_dict, predicted_labels, mode, h_text_object=None, - frame='start', bool_array=None): - if frame == 'start': - height = 0.07 if mode != 'youtube' else 0.12 - fontsize = 5 if mode!= 'youtube' else 8 - h_text_1 = plt.text(0.18, height, 'Predicted labels - {}'.format(label_decoder_dict[predicted_labels[0].item()]), - color='blue', fontsize=fontsize, transform=plt.gcf().transFigure) - if continues_labels is not None: - h_text_2 = plt.text(0.18, 0.11, 'Original_labels', color='black', fontsize=5, - transform=plt.gcf().transFigure) - h_text_3 = plt.text(0.44, 0.01, 'True/False', color='red', fontsize=6, transform=plt.gcf().transFigure, - path_effects=[pe.withStroke(linewidth=1, foreground="black")]) - return {index + 1: text_object for index, text_object in enumerate([h_text_1, h_text_2, h_text_3])} - else: - return {1: h_text_1} - else: - h_text_object[1].set_text('Predicted labels - {}'.format(label_decoder_dict[predicted_labels[frame].item()])) - if continues_labels is not None: - h_text_object[2].set_text('Original label - {}'.format(label_decoder_dict[continues_labels[frame].item()])) - color = 'green' if bool_array[frame].item() else 'red' - h_text_object[3].remove() - h_text_object[3] = plt.text(0.44, 0.01, str(bool_array[frame].item()), color=color, fontsize=6, - transform=plt.gcf().transFigure, - path_effects=[pe.withStroke(linewidth=1, foreground="black")]) - - -def generate_list_of_colors(num_labels): - green_color_codes = [[154, 205, 50], [85, 107, 47], [107, 142, 35], [124, 252, 0], [127, 255, 0], [173, 255, 47], - [0, 100, 0], - [0, 128, 0], [34, 139, 34], [0, 255, 0], [50, 205, 50], [144, 238, 144], [152, 251, 152], - [60, 179, 113], - [46, 139, 87], [0, 255, 127], [0, 250, 154]] - color_list = [] - for i in range(num_labels): - color = list(np.random.choice(range(256), size=3)) - - while color in color_list or color in green_color_codes: - color = list(np.random.choice(range(256), size=3)) - color_norm = [single_color / 255 for single_color in color] - color_list += [color_norm] - color_list_tuple = [tuple(color_as_list) for color_as_list in color_list] - return color_list_tuple - - -def create_color_dict(predicted_labels): - unique_labels = predicted_labels.unique() - color_list = generate_list_of_colors(len(unique_labels)) - color_dict = {} - for index, label in enumerate(unique_labels): - index_of_specific_label = (predicted_labels == label.item()).nonzero() - if len(index_of_specific_label) > (0.5 * len(predicted_labels)): - color_list[index] = 'green' - color_dict[label.item()] = color_list[index] - return color_dict - - -def save_video_original_size_dict(video_original_size_dict, save_path): - with open(os.path.join(save_path, 'video_original_size_dict.pkl'), 'wb') as f: - pickle.dump(video_original_size_dict, f, pickle.HIGHEST_PROTOCOL) - - -def load_and_extract_video_original_size(read_video_original_size_dir): - with open(os.path.join(read_video_original_size_dir, 'video_original_size_dict.pkl'), 
'rb') as f: - dict = pickle.load(f) - return dict - - -def setting_video_size(video_original_size): - if video_original_size is None: - (w, h) = (320, 240) - else: - (w, h) = (int(video_original_size[0]), int(video_original_size[1])) - for size_element in [w, h]: - if size_element % 2 == 0: - size_element += 1 - return w, h diff --git a/spaces/rizmyabdulla/Medicine_predictor/README.md b/spaces/rizmyabdulla/Medicine_predictor/README.md deleted file mode 100644 index e6731674d4dc6f27a5e574b25d9f0417c074c4b7..0000000000000000000000000000000000000000 --- a/spaces/rizmyabdulla/Medicine_predictor/README.md +++ /dev/null @@ -1,15 +0,0 @@ ---- -title: Medicine Predictor -emoji: ⚡ -colorFrom: pink -colorTo: indigo -sdk: gradio -sdk_version: 3.27.0 -app_file: app.py -pinned: true -license: artistic-2.0 ---- - -# Symptoms Used for this project - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/models/dense_heads/guided_anchor_head.py b/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/models/dense_heads/guided_anchor_head.py deleted file mode 100644 index 53e8cd8a750287ca60b33a5cdcb9ce2b02e4c2e3..0000000000000000000000000000000000000000 --- a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/models/dense_heads/guided_anchor_head.py +++ /dev/null @@ -1,868 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings - -import torch -import torch.nn as nn -from mmcv.ops import DeformConv2d, MaskedConv2d -from mmcv.runner import BaseModule, force_fp32 - -from mmdet.core import (anchor_inside_flags, build_assigner, build_bbox_coder, - build_prior_generator, build_sampler, calc_region, - images_to_levels, multi_apply, multiclass_nms, unmap) -from ..builder import HEADS, build_loss -from .anchor_head import AnchorHead - - -class FeatureAdaption(BaseModule): - """Feature Adaption Module. - - Feature Adaption Module is implemented based on DCN v1. - It uses anchor shape prediction rather than feature map to - predict offsets of deform conv layer. - - Args: - in_channels (int): Number of channels in the input feature map. - out_channels (int): Number of channels in the output feature map. - kernel_size (int): Deformable conv kernel size. - deform_groups (int): Deformable conv group size. - init_cfg (dict or list[dict], optional): Initialization config dict. - """ - - def __init__(self, - in_channels, - out_channels, - kernel_size=3, - deform_groups=4, - init_cfg=dict( - type='Normal', - layer='Conv2d', - std=0.1, - override=dict( - type='Normal', name='conv_adaption', std=0.01))): - super(FeatureAdaption, self).__init__(init_cfg) - offset_channels = kernel_size * kernel_size * 2 - self.conv_offset = nn.Conv2d( - 2, deform_groups * offset_channels, 1, bias=False) - self.conv_adaption = DeformConv2d( - in_channels, - out_channels, - kernel_size=kernel_size, - padding=(kernel_size - 1) // 2, - deform_groups=deform_groups) - self.relu = nn.ReLU(inplace=True) - - def forward(self, x, shape): - offset = self.conv_offset(shape.detach()) - x = self.relu(self.conv_adaption(x, offset)) - return x - - -@HEADS.register_module() -class GuidedAnchorHead(AnchorHead): - """Guided-Anchor-based head (GA-RPN, GA-RetinaNet, etc.). - - This GuidedAnchorHead will predict high-quality feature guided - anchors and locations where anchors will be kept in inference. - There are mainly 3 categories of bounding-boxes. - - - Sampled 9 pairs for target assignment. 
(approxes) - - The square boxes where the predicted anchors are based on. (squares) - - Guided anchors. - - Please refer to https://arxiv.org/abs/1901.03278 for more details. - - Args: - num_classes (int): Number of classes. - in_channels (int): Number of channels in the input feature map. - feat_channels (int): Number of hidden channels. - approx_anchor_generator (dict): Config dict for approx generator - square_anchor_generator (dict): Config dict for square generator - anchor_coder (dict): Config dict for anchor coder - bbox_coder (dict): Config dict for bbox coder - reg_decoded_bbox (bool): If true, the regression loss would be - applied directly on decoded bounding boxes, converting both - the predicted boxes and regression targets to absolute - coordinates format. Default False. It should be `True` when - using `IoULoss`, `GIoULoss`, or `DIoULoss` in the bbox head. - deform_groups: (int): Group number of DCN in - FeatureAdaption module. - loc_filter_thr (float): Threshold to filter out unconcerned regions. - loss_loc (dict): Config of location loss. - loss_shape (dict): Config of anchor shape loss. - loss_cls (dict): Config of classification loss. - loss_bbox (dict): Config of bbox regression loss. - init_cfg (dict or list[dict], optional): Initialization config dict. - """ - - def __init__( - self, - num_classes, - in_channels, - feat_channels=256, - approx_anchor_generator=dict( - type='AnchorGenerator', - octave_base_scale=8, - scales_per_octave=3, - ratios=[0.5, 1.0, 2.0], - strides=[4, 8, 16, 32, 64]), - square_anchor_generator=dict( - type='AnchorGenerator', - ratios=[1.0], - scales=[8], - strides=[4, 8, 16, 32, 64]), - anchor_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[.0, .0, .0, .0], - target_stds=[1.0, 1.0, 1.0, 1.0] - ), - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[.0, .0, .0, .0], - target_stds=[1.0, 1.0, 1.0, 1.0] - ), - reg_decoded_bbox=False, - deform_groups=4, - loc_filter_thr=0.01, - train_cfg=None, - test_cfg=None, - loss_loc=dict( - type='FocalLoss', - use_sigmoid=True, - gamma=2.0, - alpha=0.25, - loss_weight=1.0), - loss_shape=dict(type='BoundedIoULoss', beta=0.2, loss_weight=1.0), - loss_cls=dict( - type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0), - loss_bbox=dict(type='SmoothL1Loss', beta=1.0, - loss_weight=1.0), - init_cfg=dict(type='Normal', layer='Conv2d', std=0.01, - override=dict(type='Normal', - name='conv_loc', - std=0.01, - bias_prob=0.01))): # yapf: disable - super(AnchorHead, self).__init__(init_cfg) - self.in_channels = in_channels - self.num_classes = num_classes - self.feat_channels = feat_channels - self.deform_groups = deform_groups - self.loc_filter_thr = loc_filter_thr - - # build approx_anchor_generator and square_anchor_generator - assert (approx_anchor_generator['octave_base_scale'] == - square_anchor_generator['scales'][0]) - assert (approx_anchor_generator['strides'] == - square_anchor_generator['strides']) - self.approx_anchor_generator = build_prior_generator( - approx_anchor_generator) - self.square_anchor_generator = build_prior_generator( - square_anchor_generator) - self.approxs_per_octave = self.approx_anchor_generator \ - .num_base_priors[0] - - self.reg_decoded_bbox = reg_decoded_bbox - - # one anchor per location - self.num_base_priors = self.square_anchor_generator.num_base_priors[0] - - self.use_sigmoid_cls = loss_cls.get('use_sigmoid', False) - self.loc_focal_loss = loss_loc['type'] in ['FocalLoss'] - self.sampling = loss_cls['type'] not in ['FocalLoss'] - self.ga_sampling = 
train_cfg is not None and hasattr( - train_cfg, 'ga_sampler') - if self.use_sigmoid_cls: - self.cls_out_channels = self.num_classes - else: - self.cls_out_channels = self.num_classes + 1 - - # build bbox_coder - self.anchor_coder = build_bbox_coder(anchor_coder) - self.bbox_coder = build_bbox_coder(bbox_coder) - - # build losses - self.loss_loc = build_loss(loss_loc) - self.loss_shape = build_loss(loss_shape) - self.loss_cls = build_loss(loss_cls) - self.loss_bbox = build_loss(loss_bbox) - - self.train_cfg = train_cfg - self.test_cfg = test_cfg - - if self.train_cfg: - self.assigner = build_assigner(self.train_cfg.assigner) - # use PseudoSampler when sampling is False - if self.sampling and hasattr(self.train_cfg, 'sampler'): - sampler_cfg = self.train_cfg.sampler - else: - sampler_cfg = dict(type='PseudoSampler') - self.sampler = build_sampler(sampler_cfg, context=self) - - self.ga_assigner = build_assigner(self.train_cfg.ga_assigner) - if self.ga_sampling: - ga_sampler_cfg = self.train_cfg.ga_sampler - else: - ga_sampler_cfg = dict(type='PseudoSampler') - self.ga_sampler = build_sampler(ga_sampler_cfg, context=self) - - self.fp16_enabled = False - - self._init_layers() - - @property - def num_anchors(self): - warnings.warn('DeprecationWarning: `num_anchors` is deprecated, ' - 'please use "num_base_priors" instead') - return self.square_anchor_generator.num_base_priors[0] - - def _init_layers(self): - self.relu = nn.ReLU(inplace=True) - self.conv_loc = nn.Conv2d(self.in_channels, 1, 1) - self.conv_shape = nn.Conv2d(self.in_channels, self.num_base_priors * 2, - 1) - self.feature_adaption = FeatureAdaption( - self.in_channels, - self.feat_channels, - kernel_size=3, - deform_groups=self.deform_groups) - self.conv_cls = MaskedConv2d( - self.feat_channels, self.num_base_priors * self.cls_out_channels, - 1) - self.conv_reg = MaskedConv2d(self.feat_channels, - self.num_base_priors * 4, 1) - - def forward_single(self, x): - loc_pred = self.conv_loc(x) - shape_pred = self.conv_shape(x) - x = self.feature_adaption(x, shape_pred) - # masked conv is only used during inference for speed-up - if not self.training: - mask = loc_pred.sigmoid()[0] >= self.loc_filter_thr - else: - mask = None - cls_score = self.conv_cls(x, mask) - bbox_pred = self.conv_reg(x, mask) - return cls_score, bbox_pred, shape_pred, loc_pred - - def forward(self, feats): - return multi_apply(self.forward_single, feats) - - def get_sampled_approxs(self, featmap_sizes, img_metas, device='cuda'): - """Get sampled approxs and inside flags according to feature map sizes. - - Args: - featmap_sizes (list[tuple]): Multi-level feature map sizes. - img_metas (list[dict]): Image meta info. 
- device (torch.device | str): device for returned tensors - - Returns: - tuple: approxes of each image, inside flags of each image - """ - num_imgs = len(img_metas) - - # since feature map sizes of all images are the same, we only compute - # approxes for one time - multi_level_approxs = self.approx_anchor_generator.grid_priors( - featmap_sizes, device=device) - approxs_list = [multi_level_approxs for _ in range(num_imgs)] - - # for each image, we compute inside flags of multi level approxes - inside_flag_list = [] - for img_id, img_meta in enumerate(img_metas): - multi_level_flags = [] - multi_level_approxs = approxs_list[img_id] - - # obtain valid flags for each approx first - multi_level_approx_flags = self.approx_anchor_generator \ - .valid_flags(featmap_sizes, - img_meta['pad_shape'], - device=device) - - for i, flags in enumerate(multi_level_approx_flags): - approxs = multi_level_approxs[i] - inside_flags_list = [] - for i in range(self.approxs_per_octave): - split_valid_flags = flags[i::self.approxs_per_octave] - split_approxs = approxs[i::self.approxs_per_octave, :] - inside_flags = anchor_inside_flags( - split_approxs, split_valid_flags, - img_meta['img_shape'][:2], - self.train_cfg.allowed_border) - inside_flags_list.append(inside_flags) - # inside_flag for a position is true if any anchor in this - # position is true - inside_flags = ( - torch.stack(inside_flags_list, 0).sum(dim=0) > 0) - multi_level_flags.append(inside_flags) - inside_flag_list.append(multi_level_flags) - return approxs_list, inside_flag_list - - def get_anchors(self, - featmap_sizes, - shape_preds, - loc_preds, - img_metas, - use_loc_filter=False, - device='cuda'): - """Get squares according to feature map sizes and guided anchors. - - Args: - featmap_sizes (list[tuple]): Multi-level feature map sizes. - shape_preds (list[tensor]): Multi-level shape predictions. - loc_preds (list[tensor]): Multi-level location predictions. - img_metas (list[dict]): Image meta info. - use_loc_filter (bool): Use loc filter or not. - device (torch.device | str): device for returned tensors - - Returns: - tuple: square approxs of each image, guided anchors of each image, - loc masks of each image - """ - num_imgs = len(img_metas) - num_levels = len(featmap_sizes) - - # since feature map sizes of all images are the same, we only compute - # squares for one time - multi_level_squares = self.square_anchor_generator.grid_priors( - featmap_sizes, device=device) - squares_list = [multi_level_squares for _ in range(num_imgs)] - - # for each image, we compute multi level guided anchors - guided_anchors_list = [] - loc_mask_list = [] - for img_id, img_meta in enumerate(img_metas): - multi_level_guided_anchors = [] - multi_level_loc_mask = [] - for i in range(num_levels): - squares = squares_list[img_id][i] - shape_pred = shape_preds[i][img_id] - loc_pred = loc_preds[i][img_id] - guided_anchors, loc_mask = self._get_guided_anchors_single( - squares, - shape_pred, - loc_pred, - use_loc_filter=use_loc_filter) - multi_level_guided_anchors.append(guided_anchors) - multi_level_loc_mask.append(loc_mask) - guided_anchors_list.append(multi_level_guided_anchors) - loc_mask_list.append(multi_level_loc_mask) - return squares_list, guided_anchors_list, loc_mask_list - - def _get_guided_anchors_single(self, - squares, - shape_pred, - loc_pred, - use_loc_filter=False): - """Get guided anchors and loc masks for a single level. - - Args: - square (tensor): Squares of a single level. - shape_pred (tensor): Shape predictions of a single level. 
- loc_pred (tensor): Loc predictions of a single level. - use_loc_filter (list[tensor]): Use loc filter or not. - - Returns: - tuple: guided anchors, location masks - """ - # calculate location filtering mask - loc_pred = loc_pred.sigmoid().detach() - if use_loc_filter: - loc_mask = loc_pred >= self.loc_filter_thr - else: - loc_mask = loc_pred >= 0.0 - mask = loc_mask.permute(1, 2, 0).expand(-1, -1, self.num_base_priors) - mask = mask.contiguous().view(-1) - # calculate guided anchors - squares = squares[mask] - anchor_deltas = shape_pred.permute(1, 2, 0).contiguous().view( - -1, 2).detach()[mask] - bbox_deltas = anchor_deltas.new_full(squares.size(), 0) - bbox_deltas[:, 2:] = anchor_deltas - guided_anchors = self.anchor_coder.decode( - squares, bbox_deltas, wh_ratio_clip=1e-6) - return guided_anchors, mask - - def ga_loc_targets(self, gt_bboxes_list, featmap_sizes): - """Compute location targets for guided anchoring. - - Each feature map is divided into positive, negative and ignore regions. - - positive regions: target 1, weight 1 - - ignore regions: target 0, weight 0 - - negative regions: target 0, weight 0.1 - - Args: - gt_bboxes_list (list[Tensor]): Gt bboxes of each image. - featmap_sizes (list[tuple]): Multi level sizes of each feature - maps. - - Returns: - tuple - """ - anchor_scale = self.approx_anchor_generator.octave_base_scale - anchor_strides = self.approx_anchor_generator.strides - # Currently only supports same stride in x and y direction. - for stride in anchor_strides: - assert (stride[0] == stride[1]) - anchor_strides = [stride[0] for stride in anchor_strides] - - center_ratio = self.train_cfg.center_ratio - ignore_ratio = self.train_cfg.ignore_ratio - img_per_gpu = len(gt_bboxes_list) - num_lvls = len(featmap_sizes) - r1 = (1 - center_ratio) / 2 - r2 = (1 - ignore_ratio) / 2 - all_loc_targets = [] - all_loc_weights = [] - all_ignore_map = [] - for lvl_id in range(num_lvls): - h, w = featmap_sizes[lvl_id] - loc_targets = torch.zeros( - img_per_gpu, - 1, - h, - w, - device=gt_bboxes_list[0].device, - dtype=torch.float32) - loc_weights = torch.full_like(loc_targets, -1) - ignore_map = torch.zeros_like(loc_targets) - all_loc_targets.append(loc_targets) - all_loc_weights.append(loc_weights) - all_ignore_map.append(ignore_map) - for img_id in range(img_per_gpu): - gt_bboxes = gt_bboxes_list[img_id] - scale = torch.sqrt((gt_bboxes[:, 2] - gt_bboxes[:, 0]) * - (gt_bboxes[:, 3] - gt_bboxes[:, 1])) - min_anchor_size = scale.new_full( - (1, ), float(anchor_scale * anchor_strides[0])) - # assign gt bboxes to different feature levels w.r.t. 
their scales - target_lvls = torch.floor( - torch.log2(scale) - torch.log2(min_anchor_size) + 0.5) - target_lvls = target_lvls.clamp(min=0, max=num_lvls - 1).long() - for gt_id in range(gt_bboxes.size(0)): - lvl = target_lvls[gt_id].item() - # rescaled to corresponding feature map - gt_ = gt_bboxes[gt_id, :4] / anchor_strides[lvl] - # calculate ignore regions - ignore_x1, ignore_y1, ignore_x2, ignore_y2 = calc_region( - gt_, r2, featmap_sizes[lvl]) - # calculate positive (center) regions - ctr_x1, ctr_y1, ctr_x2, ctr_y2 = calc_region( - gt_, r1, featmap_sizes[lvl]) - all_loc_targets[lvl][img_id, 0, ctr_y1:ctr_y2 + 1, - ctr_x1:ctr_x2 + 1] = 1 - all_loc_weights[lvl][img_id, 0, ignore_y1:ignore_y2 + 1, - ignore_x1:ignore_x2 + 1] = 0 - all_loc_weights[lvl][img_id, 0, ctr_y1:ctr_y2 + 1, - ctr_x1:ctr_x2 + 1] = 1 - # calculate ignore map on nearby low level feature - if lvl > 0: - d_lvl = lvl - 1 - # rescaled to corresponding feature map - gt_ = gt_bboxes[gt_id, :4] / anchor_strides[d_lvl] - ignore_x1, ignore_y1, ignore_x2, ignore_y2 = calc_region( - gt_, r2, featmap_sizes[d_lvl]) - all_ignore_map[d_lvl][img_id, 0, ignore_y1:ignore_y2 + 1, - ignore_x1:ignore_x2 + 1] = 1 - # calculate ignore map on nearby high level feature - if lvl < num_lvls - 1: - u_lvl = lvl + 1 - # rescaled to corresponding feature map - gt_ = gt_bboxes[gt_id, :4] / anchor_strides[u_lvl] - ignore_x1, ignore_y1, ignore_x2, ignore_y2 = calc_region( - gt_, r2, featmap_sizes[u_lvl]) - all_ignore_map[u_lvl][img_id, 0, ignore_y1:ignore_y2 + 1, - ignore_x1:ignore_x2 + 1] = 1 - for lvl_id in range(num_lvls): - # ignore negative regions w.r.t. ignore map - all_loc_weights[lvl_id][(all_loc_weights[lvl_id] < 0) - & (all_ignore_map[lvl_id] > 0)] = 0 - # set negative regions with weight 0.1 - all_loc_weights[lvl_id][all_loc_weights[lvl_id] < 0] = 0.1 - # loc average factor to balance loss - loc_avg_factor = sum( - [t.size(0) * t.size(-1) * t.size(-2) - for t in all_loc_targets]) / 200 - return all_loc_targets, all_loc_weights, loc_avg_factor - - def _ga_shape_target_single(self, - flat_approxs, - inside_flags, - flat_squares, - gt_bboxes, - gt_bboxes_ignore, - img_meta, - unmap_outputs=True): - """Compute guided anchoring targets. - - This function returns sampled anchors and gt bboxes directly - rather than calculates regression targets. - - Args: - flat_approxs (Tensor): flat approxs of a single image, - shape (n, 4) - inside_flags (Tensor): inside flags of a single image, - shape (n, ). - flat_squares (Tensor): flat squares of a single image, - shape (approxs_per_octave * n, 4) - gt_bboxes (Tensor): Ground truth bboxes of a single image. - img_meta (dict): Meta info of a single image. - approxs_per_octave (int): number of approxs per octave - cfg (dict): RPN train configs. - unmap_outputs (bool): unmap outputs or not. 
- - Returns: - tuple - """ - if not inside_flags.any(): - return (None, ) * 5 - # assign gt and sample anchors - expand_inside_flags = inside_flags[:, None].expand( - -1, self.approxs_per_octave).reshape(-1) - approxs = flat_approxs[expand_inside_flags, :] - squares = flat_squares[inside_flags, :] - - assign_result = self.ga_assigner.assign(approxs, squares, - self.approxs_per_octave, - gt_bboxes, gt_bboxes_ignore) - sampling_result = self.ga_sampler.sample(assign_result, squares, - gt_bboxes) - - bbox_anchors = torch.zeros_like(squares) - bbox_gts = torch.zeros_like(squares) - bbox_weights = torch.zeros_like(squares) - - pos_inds = sampling_result.pos_inds - neg_inds = sampling_result.neg_inds - if len(pos_inds) > 0: - bbox_anchors[pos_inds, :] = sampling_result.pos_bboxes - bbox_gts[pos_inds, :] = sampling_result.pos_gt_bboxes - bbox_weights[pos_inds, :] = 1.0 - - # map up to original set of anchors - if unmap_outputs: - num_total_anchors = flat_squares.size(0) - bbox_anchors = unmap(bbox_anchors, num_total_anchors, inside_flags) - bbox_gts = unmap(bbox_gts, num_total_anchors, inside_flags) - bbox_weights = unmap(bbox_weights, num_total_anchors, inside_flags) - - return (bbox_anchors, bbox_gts, bbox_weights, pos_inds, neg_inds) - - def ga_shape_targets(self, - approx_list, - inside_flag_list, - square_list, - gt_bboxes_list, - img_metas, - gt_bboxes_ignore_list=None, - unmap_outputs=True): - """Compute guided anchoring targets. - - Args: - approx_list (list[list]): Multi level approxs of each image. - inside_flag_list (list[list]): Multi level inside flags of each - image. - square_list (list[list]): Multi level squares of each image. - gt_bboxes_list (list[Tensor]): Ground truth bboxes of each image. - img_metas (list[dict]): Meta info of each image. - gt_bboxes_ignore_list (list[Tensor]): ignore list of gt bboxes. - unmap_outputs (bool): unmap outputs or not. - - Returns: - tuple - """ - num_imgs = len(img_metas) - assert len(approx_list) == len(inside_flag_list) == len( - square_list) == num_imgs - # anchor number of multi levels - num_level_squares = [squares.size(0) for squares in square_list[0]] - # concat all level anchors and flags to a single tensor - inside_flag_flat_list = [] - approx_flat_list = [] - square_flat_list = [] - for i in range(num_imgs): - assert len(square_list[i]) == len(inside_flag_list[i]) - inside_flag_flat_list.append(torch.cat(inside_flag_list[i])) - approx_flat_list.append(torch.cat(approx_list[i])) - square_flat_list.append(torch.cat(square_list[i])) - - # compute targets for each image - if gt_bboxes_ignore_list is None: - gt_bboxes_ignore_list = [None for _ in range(num_imgs)] - (all_bbox_anchors, all_bbox_gts, all_bbox_weights, pos_inds_list, - neg_inds_list) = multi_apply( - self._ga_shape_target_single, - approx_flat_list, - inside_flag_flat_list, - square_flat_list, - gt_bboxes_list, - gt_bboxes_ignore_list, - img_metas, - unmap_outputs=unmap_outputs) - # no valid anchors - if any([bbox_anchors is None for bbox_anchors in all_bbox_anchors]): - return None - # sampled anchors of all images - num_total_pos = sum([max(inds.numel(), 1) for inds in pos_inds_list]) - num_total_neg = sum([max(inds.numel(), 1) for inds in neg_inds_list]) - # split targets to a list w.r.t. 
multiple levels - bbox_anchors_list = images_to_levels(all_bbox_anchors, - num_level_squares) - bbox_gts_list = images_to_levels(all_bbox_gts, num_level_squares) - bbox_weights_list = images_to_levels(all_bbox_weights, - num_level_squares) - return (bbox_anchors_list, bbox_gts_list, bbox_weights_list, - num_total_pos, num_total_neg) - - def loss_shape_single(self, shape_pred, bbox_anchors, bbox_gts, - anchor_weights, anchor_total_num): - shape_pred = shape_pred.permute(0, 2, 3, 1).contiguous().view(-1, 2) - bbox_anchors = bbox_anchors.contiguous().view(-1, 4) - bbox_gts = bbox_gts.contiguous().view(-1, 4) - anchor_weights = anchor_weights.contiguous().view(-1, 4) - bbox_deltas = bbox_anchors.new_full(bbox_anchors.size(), 0) - bbox_deltas[:, 2:] += shape_pred - # filter out negative samples to speed-up weighted_bounded_iou_loss - inds = torch.nonzero( - anchor_weights[:, 0] > 0, as_tuple=False).squeeze(1) - bbox_deltas_ = bbox_deltas[inds] - bbox_anchors_ = bbox_anchors[inds] - bbox_gts_ = bbox_gts[inds] - anchor_weights_ = anchor_weights[inds] - pred_anchors_ = self.anchor_coder.decode( - bbox_anchors_, bbox_deltas_, wh_ratio_clip=1e-6) - loss_shape = self.loss_shape( - pred_anchors_, - bbox_gts_, - anchor_weights_, - avg_factor=anchor_total_num) - return loss_shape - - def loss_loc_single(self, loc_pred, loc_target, loc_weight, - loc_avg_factor): - loss_loc = self.loss_loc( - loc_pred.reshape(-1, 1), - loc_target.reshape(-1).long(), - loc_weight.reshape(-1), - avg_factor=loc_avg_factor) - return loss_loc - - @force_fp32( - apply_to=('cls_scores', 'bbox_preds', 'shape_preds', 'loc_preds')) - def loss(self, - cls_scores, - bbox_preds, - shape_preds, - loc_preds, - gt_bboxes, - gt_labels, - img_metas, - gt_bboxes_ignore=None): - featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores] - assert len(featmap_sizes) == self.approx_anchor_generator.num_levels - - device = cls_scores[0].device - - # get loc targets - loc_targets, loc_weights, loc_avg_factor = self.ga_loc_targets( - gt_bboxes, featmap_sizes) - - # get sampled approxes - approxs_list, inside_flag_list = self.get_sampled_approxs( - featmap_sizes, img_metas, device=device) - # get squares and guided anchors - squares_list, guided_anchors_list, _ = self.get_anchors( - featmap_sizes, shape_preds, loc_preds, img_metas, device=device) - - # get shape targets - shape_targets = self.ga_shape_targets(approxs_list, inside_flag_list, - squares_list, gt_bboxes, - img_metas) - if shape_targets is None: - return None - (bbox_anchors_list, bbox_gts_list, anchor_weights_list, anchor_fg_num, - anchor_bg_num) = shape_targets - anchor_total_num = ( - anchor_fg_num if not self.ga_sampling else anchor_fg_num + - anchor_bg_num) - - # get anchor targets - label_channels = self.cls_out_channels if self.use_sigmoid_cls else 1 - cls_reg_targets = self.get_targets( - guided_anchors_list, - inside_flag_list, - gt_bboxes, - img_metas, - gt_bboxes_ignore_list=gt_bboxes_ignore, - gt_labels_list=gt_labels, - label_channels=label_channels) - if cls_reg_targets is None: - return None - (labels_list, label_weights_list, bbox_targets_list, bbox_weights_list, - num_total_pos, num_total_neg) = cls_reg_targets - num_total_samples = ( - num_total_pos + num_total_neg if self.sampling else num_total_pos) - - # anchor number of multi levels - num_level_anchors = [ - anchors.size(0) for anchors in guided_anchors_list[0] - ] - # concat all level anchors to a single tensor - concat_anchor_list = [] - for i in range(len(guided_anchors_list)): - 
concat_anchor_list.append(torch.cat(guided_anchors_list[i])) - all_anchor_list = images_to_levels(concat_anchor_list, - num_level_anchors) - - # get classification and bbox regression losses - losses_cls, losses_bbox = multi_apply( - self.loss_single, - cls_scores, - bbox_preds, - all_anchor_list, - labels_list, - label_weights_list, - bbox_targets_list, - bbox_weights_list, - num_total_samples=num_total_samples) - - # get anchor location loss - losses_loc = [] - for i in range(len(loc_preds)): - loss_loc = self.loss_loc_single( - loc_preds[i], - loc_targets[i], - loc_weights[i], - loc_avg_factor=loc_avg_factor) - losses_loc.append(loss_loc) - - # get anchor shape loss - losses_shape = [] - for i in range(len(shape_preds)): - loss_shape = self.loss_shape_single( - shape_preds[i], - bbox_anchors_list[i], - bbox_gts_list[i], - anchor_weights_list[i], - anchor_total_num=anchor_total_num) - losses_shape.append(loss_shape) - - return dict( - loss_cls=losses_cls, - loss_bbox=losses_bbox, - loss_shape=losses_shape, - loss_loc=losses_loc) - - @force_fp32( - apply_to=('cls_scores', 'bbox_preds', 'shape_preds', 'loc_preds')) - def get_bboxes(self, - cls_scores, - bbox_preds, - shape_preds, - loc_preds, - img_metas, - cfg=None, - rescale=False): - assert len(cls_scores) == len(bbox_preds) == len(shape_preds) == len( - loc_preds) - num_levels = len(cls_scores) - featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores] - device = cls_scores[0].device - # get guided anchors - _, guided_anchors, loc_masks = self.get_anchors( - featmap_sizes, - shape_preds, - loc_preds, - img_metas, - use_loc_filter=not self.training, - device=device) - result_list = [] - for img_id in range(len(img_metas)): - cls_score_list = [ - cls_scores[i][img_id].detach() for i in range(num_levels) - ] - bbox_pred_list = [ - bbox_preds[i][img_id].detach() for i in range(num_levels) - ] - guided_anchor_list = [ - guided_anchors[img_id][i].detach() for i in range(num_levels) - ] - loc_mask_list = [ - loc_masks[img_id][i].detach() for i in range(num_levels) - ] - img_shape = img_metas[img_id]['img_shape'] - scale_factor = img_metas[img_id]['scale_factor'] - proposals = self._get_bboxes_single(cls_score_list, bbox_pred_list, - guided_anchor_list, - loc_mask_list, img_shape, - scale_factor, cfg, rescale) - result_list.append(proposals) - return result_list - - def _get_bboxes_single(self, - cls_scores, - bbox_preds, - mlvl_anchors, - mlvl_masks, - img_shape, - scale_factor, - cfg, - rescale=False): - cfg = self.test_cfg if cfg is None else cfg - assert len(cls_scores) == len(bbox_preds) == len(mlvl_anchors) - mlvl_bboxes = [] - mlvl_scores = [] - for cls_score, bbox_pred, anchors, mask in zip(cls_scores, bbox_preds, - mlvl_anchors, - mlvl_masks): - assert cls_score.size()[-2:] == bbox_pred.size()[-2:] - # if no location is kept, end. - if mask.sum() == 0: - continue - # reshape scores and bbox_pred - cls_score = cls_score.permute(1, 2, - 0).reshape(-1, self.cls_out_channels) - if self.use_sigmoid_cls: - scores = cls_score.sigmoid() - else: - scores = cls_score.softmax(-1) - bbox_pred = bbox_pred.permute(1, 2, 0).reshape(-1, 4) - # filter scores, bbox_pred w.r.t. mask. - # anchors are filtered in get_anchors() beforehand. - scores = scores[mask, :] - bbox_pred = bbox_pred[mask, :] - if scores.dim() == 0: - anchors = anchors.unsqueeze(0) - scores = scores.unsqueeze(0) - bbox_pred = bbox_pred.unsqueeze(0) - # filter anchors, bbox_pred, scores w.r.t. 
scores - nms_pre = cfg.get('nms_pre', -1) - if nms_pre > 0 and scores.shape[0] > nms_pre: - if self.use_sigmoid_cls: - max_scores, _ = scores.max(dim=1) - else: - # remind that we set FG labels to [0, num_class-1] - # since mmdet v2.0 - # BG cat_id: num_class - max_scores, _ = scores[:, :-1].max(dim=1) - _, topk_inds = max_scores.topk(nms_pre) - anchors = anchors[topk_inds, :] - bbox_pred = bbox_pred[topk_inds, :] - scores = scores[topk_inds, :] - bboxes = self.bbox_coder.decode( - anchors, bbox_pred, max_shape=img_shape) - mlvl_bboxes.append(bboxes) - mlvl_scores.append(scores) - mlvl_bboxes = torch.cat(mlvl_bboxes) - if rescale: - mlvl_bboxes /= mlvl_bboxes.new_tensor(scale_factor) - mlvl_scores = torch.cat(mlvl_scores) - if self.use_sigmoid_cls: - # Add a dummy background class to the backend when using sigmoid - # remind that we set FG labels to [0, num_class-1] since mmdet v2.0 - # BG cat_id: num_class - padding = mlvl_scores.new_zeros(mlvl_scores.shape[0], 1) - mlvl_scores = torch.cat([mlvl_scores, padding], dim=1) - # multi class NMS - det_bboxes, det_labels = multiclass_nms(mlvl_bboxes, mlvl_scores, - cfg.score_thr, cfg.nms, - cfg.max_per_img) - return det_bboxes, det_labels diff --git a/spaces/rorallitri/biomedical-language-models/logs/Battleship Tamil Dubbed Movie Free Download [CRACKED] - Discover the Hidden Easter Eggs and Trivia in the Film!.md b/spaces/rorallitri/biomedical-language-models/logs/Battleship Tamil Dubbed Movie Free Download [CRACKED] - Discover the Hidden Easter Eggs and Trivia in the Film!.md deleted file mode 100644 index 48faf08129f7e3c115b79ea2e7a2cc9658dc03dc..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/Battleship Tamil Dubbed Movie Free Download [CRACKED] - Discover the Hidden Easter Eggs and Trivia in the Film!.md +++ /dev/null @@ -1,6 +0,0 @@ -

        Battleship Tamil Dubbed Movie Free Download [CRACKED]


        DOWNLOAD →→→ https://tinurll.com/2uznho



        - - aaccfb2cb3
        -
        -
        -

        diff --git a/spaces/rorallitri/biomedical-language-models/logs/Civ 5 Product Code Keygen How to Unlock and Download Sid Meiers Civilization V on Steam with a Fully Genuine CD Key.md b/spaces/rorallitri/biomedical-language-models/logs/Civ 5 Product Code Keygen How to Unlock and Download Sid Meiers Civilization V on Steam with a Fully Genuine CD Key.md deleted file mode 100644 index 82028a70dddf7e2f117bcf045032bf065bc49d09..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/Civ 5 Product Code Keygen How to Unlock and Download Sid Meiers Civilization V on Steam with a Fully Genuine CD Key.md +++ /dev/null @@ -1,22 +0,0 @@ -
        -

I bought a physical copy of Civilization V: Game of the Year Edition, and the game arrived a day later. I installed it and entered the product code through Steam, but Steam said the product code had already been used, even though I had purchased the game brand new.

        -

        civ 5 product code keygen


        Download Zip 🔗 https://tinurll.com/2uzlr5



        -

I thought somebody might have used the wrong key by accident, so I sent the game back for a replacement. However, I had the same issue with the new product key. Having it happen twice in a row makes a coincidence very unlikely.

        -

        When you buy a game (or any other software) on the Mac App Store, you aren't provided with a CD key, since the MAS itself handles product activation. (Apple's requirement is that all software sold on the MAS uses Apple's activation methods. CD keys and the like are disallowed.)
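
For what it's worth, the practical difference shows up on disk: instead of asking for a key, a Mac App Store build ships with a signed receipt inside the app bundle and checks for it at launch. Below is a minimal sketch of that idea in Python. The Contents/_MASReceipt/receipt location is the standard receipt path, but the helper name and the example app path are placeholders for illustration, and a real app would also verify the receipt's signature rather than just its presence.

```python
import os

# Illustrative helper (name is hypothetical): a Mac App Store build carries its
# signed receipt inside the app bundle instead of prompting for a CD key.
# Real apps cryptographically validate the receipt; this only checks it exists.
def has_mas_receipt(app_bundle_path):
    receipt_path = os.path.join(app_bundle_path, "Contents", "_MASReceipt", "receipt")
    return os.path.isfile(receipt_path)

if __name__ == "__main__":
    bundle = "/Applications/Civilization V.app"  # illustrative path, not verified
    if has_mas_receipt(bundle):
        print("App Store receipt found - no CD key needed.")
    else:
        print("No receipt found - this is not a Mac App Store build.")
```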

        -

ROHM provides a broad array of products that meet the needs of smart logistics, supporting digitization in the industrial sector as IIoT adoption lets users streamline operations by combining big data, AI, machine learning, and sensors to address existing bottlenecks.

        -

        Teledyne LeCroy's WaveSurfer 4000HD high-definition oscilloscope family is a unique product in the marketplace. It uses HD4096 technology to provide superior, uncompromised measurement performance.

        -

        -

        The registration code is needed to unlock content on the SQUARE ENIX ID. For Windows, Steam, and Mac, it is a 20-digit code, and for PlayStation 4 it is a 12-digit code. The registration code is provided by the retailer through different methods.
        *The PlayStation®5 version is only available as a digital download, so no code is required.

        Physical/Retail Versions:

        Your registration code will be found on an insert within the disc case for both the Windows and PlayStation 4 physical versions.

        Please note that an 18-digit code may also be provided on an insert included in the packaging. This is a SQUARE ENIX MEMBERS site code and is not required to play the game.
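        As a purely illustrative aside, the only rule the text above gives for telling these codes apart is their length: 20 digits for the Windows, Steam, and Mac registration code, 12 digits for PlayStation 4, and 18 digits for the SQUARE ENIX MEMBERS site code that is not required to play. A minimal sketch of that rule in Python follows; the helper name and the idea of checking lengths in code are assumptions for illustration only, not part of any official tool.

```python
# Hypothetical helper: classify a code purely by the character counts stated above.
def classify_code(code: str) -> str:
    characters = "".join(ch for ch in code if ch.isalnum())  # ignore dashes and spaces
    if len(characters) == 20:
        return "Registration code (Windows / Steam / Mac)"
    if len(characters) == 12:
        return "Registration code (PlayStation 4)"
    if len(characters) == 18:
        return "SQUARE ENIX MEMBERS site code (not required to play the game)"
    return "Unrecognized code length"

# Example with a made-up, obviously fake 20-character code:
print(classify_code("ABCD-EFGH-IJKL-MNOP-QRST"))
```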

        Digital Versions:

        SQUARE ENIX Online Store:

        1) Once your order has been confirmed, you should receive a confirmation email containing a "Click Here To Get Access To Your Products" button, which will take you to a page where you can unlock any purchased codes.



        2) If you have confirmed a purchase on the SQUARE ENIX Online Store but still have not received a confirmation e-mail, please contact us with your Order Number at the following link:

        -enix.com/contact.php?id=489&la=1

        If you are having any issues with your order, please contact the SQUARE ENIX Online Store support team at

        Amazon.com:

        1) Log into your Amazon account and locate your FINAL FANTASY XIV purchase under Your Account and "Digital games and software."

        2) Use the Redeem Product Key button to find your code.

        3) Redeem the code on the Mog Station at



        GameStop:

        1) Log into your GameStop account and locate your FINAL FANTASY XIV purchase under Your Account and "Digital Locker."

        2) Locate the Activation Code.

        3) Redeem the code on the Mog Station at



        Steam:

        1) Visit your Library within the Steam client.

        2) Choose FINAL FANTASY XIV Online on your game list, click on the cog or Options button, then choose "Manage" and "CD keys" to view the registration code provided by Steam.

        3) Redeem the code on the Mog Station at

        -

        Shared communication is another element that all civilizations share. Shared communication may include spoken language; alphabets; numeric systems; signs, ideas, and symbols; and illustration and representation. Shared communication allows the infrastructure necessary for technology, trade, cultural exchange, and government to be developed and shared throughout the civilization. The Inca civilization, for example, had no written script that we know of, but its complex khipu system of accounting allowed the government to conduct censuses of its population and production across the vast stretch of the Andes. A khipu is a recording device made of a series of strings knotted in particular patterns and colors.

        -

        Language also played a part in Roman infrastructure. Romans spread the Latin language throughout southern Europe. The so-called "Romance languages" (Spanish, French, Portuguese, Romanian, Catalan, and Italian) are called that because they all developed from the Roman language: Latin. Having a similar language made communication and leadership easier for Rome in its far-flung territories. Roman leaders relied on a series of legal codes for administration. These codes helped structure laws between different parts of Roman territory, as well as between rich and poor, men and women, slave and free. Roman laws included restrictions on marriage, ownership of land, and access to professions such as priesthoods.

        -

        This product upgrades the Civilization VI base game to Civilization VI Anthology - the complete collection of all Civ VI content released, including six DLC packs, the expansions Rise and Fall and Gathering Storm, the full New Frontier Pass, and the Leader Pass, which is available to all Anthology owners at no additional cost. Save compared to buying the DLC packs and expansions separately. If you already own any Civilization VI standalone content beyond the base game (like expansions or DLCs), do not buy this product or you will be double-charged for content.

        -

        A barcode or bar code is a method of representing data in a visual, machine-readable form. Initially, barcodes represented data by varying the widths, spacings and sizes of parallel lines. These barcodes, now commonly referred to as linear or one-dimensional (1D), can be scanned by special optical scanners, called barcode readers, of which there are several types. Later, two-dimensional (2D) variants were developed, using rectangles, dots, hexagons and other patterns, called matrix codes or 2D barcodes, although they do not use bars as such. 2D barcodes can be read using purpose-built 2D optical scanners, which exist in a few different forms. 2D barcodes can also be read by a digital camera connected to a microcomputer running software that takes a photographic image of the barcode and analyzes the image to deconstruct and decode the 2D barcode. A mobile device with a built-in camera, such as a smartphone, can function as the latter type of 2D barcode reader using specialized application software (the same sort of mobile device can also read 1D barcodes, depending on the application software).
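        To make the last point concrete, the following is a minimal sketch of the "digital camera plus software" style of 2D barcode reading described above, using OpenCV's QRCodeDetector. The image file name is a placeholder and the error handling is deliberately minimal; this is an illustration of the idea, not part of the original article.

```python
# Minimal sketch: decode a 2D barcode (QR code) from a photograph.
# Assumes opencv-python is installed; "photo_with_qr.png" is a placeholder file name.
import cv2

image = cv2.imread("photo_with_qr.png")
if image is None:
    raise SystemExit("Could not read the image file.")

detector = cv2.QRCodeDetector()
payload, corners, _ = detector.detectAndDecode(image)

if payload:
    print("Decoded 2D barcode payload:", payload)
else:
    print("No readable QR code found in the image.")
```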

        -

        Barcodes became commercially successful when they were used to automate supermarket checkout systems, a task for which they have become almost universal. The Uniform Grocery Product Code Council had chosen, in 1973, the barcode design developed by George Laurer. Laurer's barcode, with vertical bars, printed better than the circular barcode developed by Woodland and Silver.[5] Their use has spread to many other tasks that are generically referred to as automatic identification and data capture (AIDC). The first use of barcodes in supermarkets was by Sainsbury's in 1973 using a system developed by Plessey.[6] In June 1974, Marsh supermarket in Troy, Ohio, used a scanner made by Photographic Sciences Corporation to scan the Universal Product Code (UPC) barcode on a pack of Wrigley's chewing gum.[7][5] QR codes, a specific type of 2D barcode, have recently become very popular due to the growth in smartphone ownership.[8]

        -

        Other systems have made inroads in the AIDC market, but the simplicity, universality and low cost of barcodes have limited the role of these other systems, particularly before technologies such as radio-frequency identification (RFID) became available after 1995.

        -

        In 1948 Bernard Silver, a graduate student at Drexel Institute of Technology in Philadelphia, Pennsylvania, US overheard the president of the local food chain, Food Fair, asking one of the deans to research a system to automatically read product information during checkout.[9] Silver told his friend Norman Joseph Woodland about the request, and they started working on a variety of systems. Their first working system used ultraviolet ink, but the ink faded too easily and was expensive.[10]

        -

        Convinced that the system was workable with further development, Woodland left Drexel, moved into his father's apartment in Florida, and continued working on the system. His next inspiration came from Morse code, and he formed his first barcode from sand on the beach. "I just extended the dots and dashes downwards and made narrow lines and wide lines out of them."[10] To read them, he adapted technology from optical soundtracks in movies, using a 500-watt incandescent light bulb shining through the paper onto an RCA935 photomultiplier tube (from a movie projector) on the far side. He later decided that the system would work better if it were printed as a circle instead of a line, allowing it to be scanned in any direction.

        -

        On 20 October 1949, Woodland and Silver filed a patent application for "Classifying Apparatus and Method", in which they described both the linear and bull's eye printing patterns, as well as the mechanical and electronic systems needed to read the code. The patent was issued on 7 October 1952 as US Patent 2,612,994.[1] In 1951, Woodland moved to IBM and continually tried to interest IBM in developing the system. The company eventually commissioned a report on the idea, which concluded that it was both feasible and interesting, but that processing the resulting information would require equipment that was some time off in the future.

        -

        In 1967, with the railway system maturing, Collins went to management looking for funding for a project to develop a black-and-white version of the code for other industries. They declined, saying that the railway project was large enough, and they saw no need to branch out so quickly.

        -

        Computer Identics Corporation installed one of its first two scanning systems in the spring of 1969 at a General Motors (Buick) factory in Flint, Michigan.[10] The system was used to identify a dozen types of transmissions moving on an overhead conveyor from production to shipping. The other scanning system was installed at General Trading Company's distribution center in Carlstadt, New Jersey to direct shipments to the proper loading bay.

        aaccfb2cb3
        -
        -
        \ No newline at end of file diff --git a/spaces/rorallitri/biomedical-language-models/logs/EZ VgHD (c) Utorrent How to Unlock the Full Potential of Your PC Gaming with HD Games.md b/spaces/rorallitri/biomedical-language-models/logs/EZ VgHD (c) Utorrent How to Unlock the Full Potential of Your PC Gaming with HD Games.md deleted file mode 100644 index 9a3cb24621da9781172a3a3a640cc6c2932f3854..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/EZ VgHD (c) Utorrent How to Unlock the Full Potential of Your PC Gaming with HD Games.md +++ /dev/null @@ -1,6 +0,0 @@ -

        EZ VgHD (c) Utorrent


        DOWNLOADhttps://tinurll.com/2uzlDI



        - - aaccfb2cb3
        -
        -
        -

        diff --git a/spaces/runa91/barc_gradio/src/configs/anipose_data_info.py b/spaces/runa91/barc_gradio/src/configs/anipose_data_info.py deleted file mode 100644 index 8e7bad68b45cf9926fdfd3ca1b7e1f147e909cfd..0000000000000000000000000000000000000000 --- a/spaces/runa91/barc_gradio/src/configs/anipose_data_info.py +++ /dev/null @@ -1,74 +0,0 @@ -from dataclasses import dataclass -from typing import List -import json -import numpy as np -import os - -STATISTICS_DATA_DIR = os.path.join(os.path.dirname(__file__), '..', '..', 'data', 'statistics') -STATISTICS_PATH = os.path.join(STATISTICS_DATA_DIR, 'statistics_modified_v1.json') - -@dataclass -class DataInfo: - rgb_mean: List[float] - rgb_stddev: List[float] - joint_names: List[str] - hflip_indices: List[int] - n_joints: int - n_keyp: int - n_bones: int - n_betas: int - image_size: int - trans_mean: np.ndarray - trans_std: np.ndarray - flength_mean: np.ndarray - flength_std: np.ndarray - pose_rot6d_mean: np.ndarray - keypoint_weights: List[float] - -# SMAL samples 3d statistics -# statistics like mean values were calculated once when the project was started and they were not changed afterwards anymore -def load_statistics(statistics_path): - with open(statistics_path) as f: - statistics = json.load(f) - '''new_pose_mean = [[[np.round(val, 2) for val in sublst] for sublst in sublst_big] for sublst_big in statistics['pose_mean']] - statistics['pose_mean'] = new_pose_mean - j_out = json.dumps(statistics, indent=4) #, sort_keys=True) - with open(self.statistics_path, 'w') as file: file.write(j_out)''' - new_statistics = {'trans_mean': np.asarray(statistics['trans_mean']), - 'trans_std': np.asarray(statistics['trans_std']), - 'flength_mean': np.asarray(statistics['flength_mean']), - 'flength_std': np.asarray(statistics['flength_std']), - 'pose_mean': np.asarray(statistics['pose_mean']), - } - new_statistics['pose_rot6d_mean'] = new_statistics['pose_mean'][:, :, :2].reshape((-1, 6)) - return new_statistics -STATISTICS = load_statistics(STATISTICS_PATH) - -AniPose_JOINT_NAMES_swapped = [ - 'L_F_Paw', 'L_F_Knee', 'L_F_Elbow', - 'L_B_Paw', 'L_B_Knee', 'L_B_Elbow', - 'R_F_Paw', 'R_F_Knee', 'R_F_Elbow', - 'R_B_Paw', 'R_B_Knee', 'R_B_Elbow', - 'TailBase', '_Tail_end_', 'L_EarBase', 'R_EarBase', - 'Nose', '_Chin_', '_Left_ear_tip_', '_Right_ear_tip_', - 'L_Eye', 'R_Eye', 'Withers', 'Throat'] - -KEYPOINT_WEIGHTS = [3, 2, 2, 3, 2, 2, 3, 2, 2, 3, 2, 2, 3, 3, 2, 2, 3, 1, 2, 2] - -COMPLETE_DATA_INFO = DataInfo( - rgb_mean=[0.4404, 0.4440, 0.4327], # not sure - rgb_stddev=[0.2458, 0.2410, 0.2468], # not sure - joint_names=AniPose_JOINT_NAMES_swapped, # AniPose_JOINT_NAMES, - hflip_indices=[6, 7, 8, 9, 10, 11, 0, 1, 2, 3, 4, 5, 12, 13, 15, 14, 16, 17, 19, 18, 21, 20, 22, 23], - n_joints = 35, - n_keyp = 24, # 20, # 25, - n_bones = 24, - n_betas = 30, # 10, - image_size = 256, - trans_mean = STATISTICS['trans_mean'], - trans_std = STATISTICS['trans_std'], - flength_mean = STATISTICS['flength_mean'], - flength_std = STATISTICS['flength_std'], - pose_rot6d_mean = STATISTICS['pose_rot6d_mean'], - keypoint_weights = KEYPOINT_WEIGHTS - ) diff --git a/spaces/runa91/bite_gradio/src/stacked_hourglass/datasets/dogsvoc.py b/spaces/runa91/bite_gradio/src/stacked_hourglass/datasets/dogsvoc.py deleted file mode 100644 index 7ba0ed31e568fe610a4117c01ac2e6e30f5a0960..0000000000000000000000000000000000000000 --- a/spaces/runa91/bite_gradio/src/stacked_hourglass/datasets/dogsvoc.py +++ /dev/null @@ -1,376 +0,0 @@ -# 24 joints instead of 20!! 
- - -import gzip -import json -import os -import random -import math -import numpy as np -import torch -import torch.utils.data as data -from importlib_resources import open_binary -from scipy.io import loadmat -from tabulate import tabulate -import itertools -import json -from scipy import ndimage - -from csv import DictReader -from pycocotools.mask import decode as decode_RLE - -import sys -sys.path.insert(0, os.path.join(os.path.dirname(__file__), '..', '..', '..')) -# import stacked_hourglass.res -# from stacked_hourglass.datasets.common import DataInfo -# from configs.data_info import COMPLETE_DATA_INFO -# from configs.anipose_data_info import COMPLETE_DATA_INFO_24 -from src.configs.data_info import COMPLETE_DATA_INFO_24 -from src.stacked_hourglass.utils.imutils import load_image, draw_labelmap, draw_multiple_labelmaps -from src.stacked_hourglass.utils.misc import to_torch -from src.stacked_hourglass.utils.transforms import shufflelr, crop, color_normalize, fliplr, transform -import src.stacked_hourglass.datasets.utils_stanext as utils_stanext -from src.stacked_hourglass.utils.visualization import save_input_image_with_keypoints - - - -class DogsVOC(data.Dataset): - DATA_INFO = COMPLETE_DATA_INFO_24 - - # Suggested joints to use for average PCK calculations. - ACC_JOINTS = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 16] # don't know ... - - def __init__(self, image_path=None, is_train=True, inp_res=256, out_res=64, sigma=1, - scale_factor=0.25, rot_factor=30, label_type='Gaussian', - do_augment='default', shorten_dataset_to=None, dataset_mode='keyp_only', V12=None): - # self.img_folder_mpii = image_path # root image folders - self.V12 = V12 - self.is_train = is_train # training set or test set - if do_augment == 'yes': - self.do_augment = True - elif do_augment == 'no': - self.do_augment = False - elif do_augment=='default': - if self.is_train: - self.do_augment = True - else: - self.do_augment = False - else: - raise ValueError - self.inp_res = inp_res - self.out_res = out_res - self.sigma = sigma - self.scale_factor = scale_factor - self.rot_factor = rot_factor - self.label_type = label_type - self.dataset_mode = dataset_mode - if self.dataset_mode=='complete' or self.dataset_mode=='keyp_and_seg' or self.dataset_mode=='keyp_and_seg_and_partseg': - self.calc_seg = True - else: - self.calc_seg = False - - # create train/val split - # REMARK: I assume we should have a different train / test split here - self.img_folder = utils_stanext.get_img_dir(V12=self.V12) - self.train_dict, self.test_dict, self.val_dict = utils_stanext.load_stanext_json_as_dict(split_train_test=True, V12=self.V12) - self.train_name_list = list(self.train_dict.keys()) # 7004 - self.test_name_list = list(self.test_dict.keys()) # 5031 - - # breed json_path - breed_json_path = '/ps/scratch/nrueegg/new_projects/Animals/data/dog_datasets/Stanford_Dogs_Dataset/StanfordExtra/StanExt_breed_dict_v2.json' - - # only use images that show fully visible dogs in standing or walking poses - '''path_easy_images_list = '/ps/scratch/nrueegg/new_projects/Animals/data/dog_datasets/Stanford_Dogs_Dataset/StanfordExtra/AMT_StanExt_easy_images.txt' - easy_images_list = [line.rstrip('\n') for line in open(path_easy_images_list)] - self.train_name_list = sorted(list(set(easy_images_list) & set(self.train_name_list))) - self.test_name_list = sorted(list(set(easy_images_list) & set(self.test_name_list)))''' - self.train_name_list = sorted(self.train_name_list) - self.test_name_list = sorted(self.test_name_list) - - random.seed(4) - 
random.shuffle(self.train_name_list) - random.shuffle(self.test_name_list) - - - if shorten_dataset_to is not None: - self.train_name_list = self.train_name_list[0 : min(len(self.train_name_list), shorten_dataset_to)] - self.test_name_list = self.test_name_list[0 : min(len(self.test_name_list), shorten_dataset_to)] - - if shorten_dataset_to == 12: - # my_sample = self.test_name_list[2] # black haired dog - my_sample = self.test_name_list[2] - for ind in range(0, 12): - self.test_name_list[ind] = my_sample - - # add results for eyes, whithers and throat as obtained through anipose - self.path_anipose_out_root = '/ps/scratch/nrueegg/new_projects/Animals/data/dog_datasets/Stanford_Dogs_Dataset/StanfordExtra/animalpose_hg8_v0_results_on_StanExt/' - - - ############################################### - - self.dogvoc_path_root = '/ps/scratch/nrueegg/new_projects/Animals/data/pascal_voc_parts/' - self.dogvoc_path_images = self.dogvoc_path_root + 'dog_images/' - self.dogvoc_path_masks = self.dogvoc_path_root + 'dog_masks/' - - with open(self.dogvoc_path_masks + 'voc_dogs_bodypart_info.json', 'r') as file: - self.body_part_info = json.load(file) - with open(self.dogvoc_path_masks + 'voc_dogs_train.json', 'r') as file: - train_set_init = json.load(file) # 707 - with open(self.dogvoc_path_masks + 'voc_dogs_val.json', 'r') as file: - val_set_init = json.load(file) # 709 - self.train_set = train_set_init + val_set_init[:-36] - self.val_set = val_set_init[-36:] - - print('len(dataset): ' + str(self.__len__())) - # print(self.test_name_list[0:10]) - - def get_body_part_indices(self): - silh = [ - ('background', [0]), - ('foreground', [255, 21, 57, 30, 59, 34, 48, 50, 79, 49, 61, 60, 54, 53, 36, 35, 27, 26, 78])] - full_body = [ - ('other', [255]), - ('head', [21, 57, 30, 59, 34, 48, 50]), - ('torso', [79, 49]), - ('right front leg', [61, 60]), - ('right back leg', [54, 53]), - ('left front leg', [36, 35]), - ('left back leg', [27, 26]), - ('tail', [78])] - head = [ - ('other', [21, 59, 34]), - ('right ear', [57]), - ('left ear', [30]), - ('muzzle', [48]), - ('nose', [50])] - torso = [ - ('other', [79]), # wrong 34 - ('neck', [49])] - all_parts = { - 'silh': silh, - 'full_body': full_body, - 'head': head, - 'torso': torso} - return all_parts - - - - - - def __getitem__(self, index): - - if self.is_train: - name = self.train_name_list[index] - data = self.train_dict[name] - # data = utils_stanext.get_dog(self.train_dict, name) - else: - name = self.test_name_list[index] - data = self.test_dict[name] - # data = utils_stanext.get_dog(self.test_dict, name) - - # self.do_augment = False - - # index = 5 ########################## - if self.is_train: - img_info = self.train_set[index] - else: - img_info = self.val_set[index] - - sf = self.scale_factor - rf = self.rot_factor - - img_path = os.path.join(self.dogvoc_path_images, img_info['img_name']) - - # bbox_yxhw = img_info['bbox'] - # bbox_xywh = [bbox_yxhw[1], bbox_yxhw[0], bbox_yxhw[2], bbox_yxhw[3]] - bbox_xywh = img_info['bbox'] - bbox_c = [bbox_xywh[0]+0.5*bbox_xywh[2], bbox_xywh[1]+0.5*bbox_xywh[3]] - bbox_max = max(bbox_xywh[2], bbox_xywh[3]) - bbox_diag = math.sqrt(bbox_xywh[2]**2 + bbox_xywh[3]**2) - # bbox_s = bbox_max / 200. # the dog will fill the image -> bbox_max = 256 - # bbox_s = bbox_diag / 200. # diagonal of the boundingbox will be 200 - bbox_s = bbox_max / 200. * 256. / 200. 
# maximum side of the bbox will be 200 - c = torch.Tensor(bbox_c) - s = bbox_s - - # For single-person pose estimation with a centered/scaled figure - img = load_image(img_path) # CxHxW - - # img_test = img[0, img_info['bbox'][1]:img_info['bbox'][1]+img_info['bbox'][3], img_info['bbox'][0]:img_info['bbox'][0]+img_info['bbox'][2]] - # import cv2 - # cv2.imwrite('/ps/scratch/nrueegg/new_projects/Animals/dog_project/pytorch-stacked-hourglass/yy.png', np.asarray(img_test*255, np.uint8)) - - - # segmentation map (we reshape it to 3xHxW, such that we can do the - # same transformations as with the image) - if self.do_augment and (random.random() <= 0.5): - do_flip = True - else: - do_flip = False - - if self.calc_seg: - mask = np.load(os.path.join(self.dogvoc_path_masks, img_info['img_name'].split('.')[0] + '_' + str(img_info['ind_bbox']) + '.npz.npy')) - seg_np = mask.copy() - seg_np[mask==0] = 0 - seg_np[mask>0] = 1 - seg = torch.Tensor(seg_np[None, :, :]) - seg = torch.cat(3*[seg]) - - # NEW: body parts - all_parts = self.get_body_part_indices() - body_part_index_list = [] - body_part_name_list = [] - n_tbp = 3 - n_bp = 15 - # body_part_matrix_multiple_hot = np.zeros((n_bp, mask.shape[0], mask.shape[1])) - body_part_matrix_np = np.ones((n_tbp, mask.shape[0], mask.shape[1])) * (-1) - ind_bp = 0 - for ind_tbp, part in enumerate(['full_body', 'head', 'torso']): - # import pdb; pdb.set_trace() - if part == 'full_body': - inds_mirr = [0, 1, 2, 5, 6, 3, 4, 7] - elif part == 'head': - inds_mirr = [0, 2, 1, 3, 4] - else: - inds_mirr = [0, 1] - for ind_sbp, subpart in enumerate(all_parts[part]): - if do_flip: - ind_sbp_corr = inds_mirr[ind_sbp] # we use this if the image is mirrored later on - else: - ind_sbp_corr = ind_sbp - bp_name = subpart[0] - bp_indices = subpart[1] - body_part_index_list.append(bp_indices) - body_part_name_list.append(bp_name) - # create matrix slice - xx = [mask==ind for ind in bp_indices] - xx_mat = (np.stack(xx).sum(axis=0)) - # body_part_matrix_multiple_hot[ind_bp, :, :] = xx_mat - # add to matrix - body_part_matrix_np[ind_tbp, xx_mat>0] = ind_sbp_corr - ind_bp += 1 - body_part_weight_masks_np = np.zeros((n_tbp, mask.shape[0], mask.shape[1])) - body_part_weight_masks_np[0, mask>0] = 1 # full body - body_part_weight_masks_np[1, body_part_matrix_np[0, :, :]==1] = 1 # head - body_part_weight_masks_np[2, body_part_matrix_np[0, :, :]==2] = 1 # torso - body_part_matrix_np[body_part_weight_masks_np==0] = 16 - body_part_matrix = torch.Tensor(body_part_matrix_np + 2.0) # / 100 - - # import pdb; pdb.set_trace() - - bbox_c_int0 = [int(bbox_c[0]), int(bbox_c[1])] - bbox_c_int1 = [int(bbox_c[0])+10, int(bbox_c[1])+10] - '''bpm_c0 = body_part_matrix[:, bbox_c_int0[1], bbox_c_int0[0]].clone() - bpm_c1 = body_part_matrix[:, bbox_c_int1[1], bbox_c_int1[0]].clone() - zero_replacement = torch.Tensor([0, 0, 0.99]) - body_part_matrix[:, bbox_c_int0[1], bbox_c_int0[0]] = zero_replacement - body_part_matrix[:, bbox_c_int1[1], bbox_c_int1[0]] = 1''' - ii = 3 - bpm_c0 = body_part_matrix[2, bbox_c_int0[1]-ii:bbox_c_int0[1]+ii, bbox_c_int0[0]-ii:bbox_c_int0[0]+ii] - bpm_c1 = body_part_matrix[2, bbox_c_int1[1]-ii:bbox_c_int1[1]+ii, bbox_c_int1[0]-ii:bbox_c_int1[0]+ii] - body_part_matrix[2, bbox_c_int0[1]-ii:bbox_c_int0[1]+ii, bbox_c_int0[0]-ii:bbox_c_int0[0]+ii] = 0 - body_part_matrix[2, bbox_c_int1[1]-ii:bbox_c_int1[1]+ii, bbox_c_int1[0]-ii:bbox_c_int1[0]+ii] = 255 - body_part_matrix = (body_part_matrix).long() - # body_part_name_list - # ['other', 'head', 'torso', 'right front leg', 'right back 
leg', 'left front leg', 'left back leg', 'tail', 'other', 'right ear', 'left ear', 'muzzle', 'nose', 'other', 'neck'] - # swap indices: - # bp_mirroring_inds = [0, 1, 2, 5, 6, 3, 4, 7, 8, 10, 9, 11, 12, 13, 14] - - - r = 0 - # self.is_train = False - if self.do_augment: - s = s*torch.randn(1).mul_(sf).add_(1).clamp(1-sf, 1+sf)[0] - r = torch.randn(1).mul_(rf).clamp(-2*rf, 2*rf)[0] if random.random() <= 0.6 else 0 - # Flip - if do_flip: - img = fliplr(img) - if self.calc_seg: - seg = fliplr(seg) - body_part_matrix = fliplr(body_part_matrix) - c[0] = img.size(2) - c[0] - # Color - img[0, :, :].mul_(random.uniform(0.8, 1.2)).clamp_(0, 1) - img[1, :, :].mul_(random.uniform(0.8, 1.2)).clamp_(0, 1) - img[2, :, :].mul_(random.uniform(0.8, 1.2)).clamp_(0, 1) - - # Prepare image and groundtruth map - inp = crop(img, c, s, [self.inp_res, self.inp_res], rot=r) - inp = color_normalize(inp, self.DATA_INFO.rgb_mean, self.DATA_INFO.rgb_stddev) - - # import pdb; pdb.set_trace() - - if self.calc_seg: - seg = crop(seg, c, s, [self.inp_res, self.inp_res], rot=r) - - # 'crop' will divide by 255 and perform zero padding ( - # -> weird function that tries to rescale! Because of that I add zeros and ones in the beginning - xx = body_part_matrix.clone() - - # import pdb; pdb.set_trace() - - - body_part_matrix = crop(body_part_matrix, c, s, [self.inp_res, self.inp_res], rot=r, interp='nearest') - - body_part_matrix = body_part_matrix*255 - 2 - - body_part_matrix[body_part_matrix == -2] = -1 - body_part_matrix[body_part_matrix == 16] = -1 - body_part_matrix[body_part_matrix == 253] = -1 - - '''print(np.unique(body_part_matrix.numpy())) - print(np.unique(body_part_matrix[0, :, :].numpy())) - print(np.unique(body_part_matrix[1, :, :].numpy())) - print(np.unique(body_part_matrix[2, :, :].numpy()))''' - - # import cv2 - # cv2.imwrite('/ps/scratch/nrueegg/new_projects/Animals/dog_project/pytorch-stacked-hourglass/yy2.png', np.asarray((inp[0, :, :]+1)*100, np.uint8)) - # cv2.imwrite('/ps/scratch/nrueegg/new_projects/Animals/dog_project/pytorch-stacked-hourglass/yy3.png', (40*(1+body_part_matrix[0, :, :].numpy())).astype(np.uint8)) - - - - # Generate ground truth - nparts = 24 - target_weight = torch.zeros(nparts, 1) - target = torch.zeros(nparts, self.out_res, self.out_res) - pts = torch.zeros((nparts, 3)) - tpts = torch.zeros((nparts, 3)) - - # import pdb; pdb.set_trace() - - - # meta = {'index' : index, 'center' : c, 'scale' : s, 'do_flip' : do_flip, 'rot' : r, 'resolution' : [self.out_res, self.out_res], 'name' : name, - # 'pts' : pts, 'tpts' : tpts, 'target_weight': target_weight, 'breed_index': this_breed['index']} - # meta = {'index' : index, 'center' : c, 'scale' : s, 'do_flip' : do_flip, 'rot' : r, 'resolution' : self.out_res, - # 'pts' : pts, 'tpts' : tpts, 'target_weight': target_weight, 'breed_index': this_breed['index']} - # meta = {'index' : index, 'center' : c, 'scale' : s, - # 'pts' : pts, 'tpts' : tpts, 'target_weight': target_weight, - # 'breed_index': this_breed['index'], 'sim_breed_index': sim_breed_index, - # 'ind_dataset': 0} # ind_dataset: 0 for stanext or stanexteasy or stanext 24 - meta = {'index' : index, 'center' : c, 'scale' : s, - 'pts' : pts, 'tpts' : tpts, 'target_weight': target_weight, - 'ind_dataset': 3} - - #import pdb; pdb.set_trace() - - - if self.dataset_mode=='keyp_and_seg_and_partseg': - # meta = {} - meta['silh'] = seg[0, :, :] - meta['name'] = name - meta['body_part_matrix'] = body_part_matrix.long() - # meta['body_part_weights'] = body_part_weight_masks - # import pdb; 
pdb.set_trace() - return inp, target, meta - else: - raise ValueError - - - - def __len__(self): - if self.is_train: - return len(self.train_set) # len(self.train_list) - else: - return len(self.val_set) # len(self.valid_list) - - diff --git a/spaces/ryn-85/NousResearch-Yarn-Mistral-7b-128k/README.md b/spaces/ryn-85/NousResearch-Yarn-Mistral-7b-128k/README.md deleted file mode 100644 index 2e4a254221c88769854e5566f9ca15c33c1c2a5c..0000000000000000000000000000000000000000 --- a/spaces/ryn-85/NousResearch-Yarn-Mistral-7b-128k/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: NousResearch Yarn Mistral 7b 128k -emoji: 🐢 -colorFrom: green -colorTo: purple -sdk: streamlit -sdk_version: 1.28.1 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/samayg/StriimTheme/app.py b/spaces/samayg/StriimTheme/app.py deleted file mode 100644 index 399bda556d44bc90adbc47f586e402adb676184f..0000000000000000000000000000000000000000 --- a/spaces/samayg/StriimTheme/app.py +++ /dev/null @@ -1,172 +0,0 @@ -import time - -import gradio as gr -from gradio.themes.utils.theme_dropdown import create_theme_dropdown - -dropdown, js = create_theme_dropdown() - - -# Front end web application using Gradio -CSS =""" -.contain { display: flex; flex-direction: column; } -footer.svelte-1ax1toq.svelte-1ax1toq.svelte-1ax1toq.svelte-1ax1toq { display: none; } -#component-0 { height: 100%; } -#component-2 { height: 70vh !important; } -#chatbot { flex-grow: 1; overflow: auto;} -#submit-button { background: #00A7E5; color: white; } -#submit-button:hover { background: #00A7E5; color: white; box-shadow: 0 8px 10px 1px #9d9ea124, 0 3px 14px 2px #9d9ea11f, 0 5px 5px -3px #9d9ea133; } -""" -JS = """ -() => { - if (document.body.classList.contains('dark')) { - console.log("it's dark in here"); - document.body.classList.remove('dark'); - } - console.log("Hello world"); -} -""" -with gr.Blocks(theme='samayg/StriimTheme@1.0.0', css=CSS, js=JS) as demo: - chatbot = gr.Chatbot(show_label=False) - msg = gr.Textbox(label="Question:") - examples = gr.Examples(examples=[['What\'s new in Striim version 4.2.0?'], ['My Striim application keeps crashing. What should I do?'], ['How can I improve Striim performance?'], ['It says could not connect to source or target. What should I do?']], inputs=msg, label="Examples") - submit = gr.Button("Submit", elem_id="submit-button") - -# with gr.Blocks(theme='samayg/StriimTheme') as demo: - with gr.Row().style(equal_height=True): - with gr.Column(scale=10): - gr.Markdown( - """ - # Theme preview: `StriimTheme` - To use this theme, set `theme='samayg/StriimTheme'` in `gr.Blocks()` or `gr.Interface()`. - You can append an `@` and a semantic version expression, e.g. @>=1.0.0,<2.0.0 to pin to a given version - of this theme. - """ - ) - with gr.Column(scale=3): - with gr.Box(): - dropdown.render() - toggle_dark = gr.Button(value="Toggle Dark").style(full_width=True) - - dropdown.change(None, dropdown, None, _js=js) - toggle_dark.click( - None, - _js=""" - () => { - document.body.classList.toggle('dark'); - } - """, - ) - - name = gr.Textbox( - label="Name", - info="Full name, including middle name. 
No special characters.", - placeholder="John Doe", - value="John Doe", - interactive=True, - ) - - with gr.Row(): - slider1 = gr.Slider(label="Slider 1") - slider2 = gr.Slider(label="Slider 2") - gr.CheckboxGroup(["A", "B", "C"], label="Checkbox Group") - - with gr.Row(): - with gr.Column(variant="panel", scale=1): - gr.Markdown("## Panel 1") - radio = gr.Radio( - ["A", "B", "C"], - label="Radio", - info="Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.", - ) - drop = gr.Dropdown(["Option 1", "Option 2", "Option 3"], show_label=False) - drop_2 = gr.Dropdown( - ["Option A", "Option B", "Option C"], - multiselect=True, - value=["Option A"], - label="Dropdown", - interactive=True, - ) - check = gr.Checkbox(label="Go") - with gr.Column(variant="panel", scale=2): - img = gr.Image( - "https://gradio-static-files.s3.us-west-2.amazonaws.com/header-image.jpg", - label="Image", - ).style(height=320) - with gr.Row(): - go_btn = gr.Button("Go", label="Primary Button", variant="primary") - clear_btn = gr.Button( - "Clear", label="Secondary Button", variant="secondary" - ) - - def go(*args): - time.sleep(3) - return "https://gradio-static-files.s3.us-west-2.amazonaws.com/header-image.jpgjpg" - - go_btn.click(go, [radio, drop, drop_2, check, name], img, api_name="go") - - def clear(): - time.sleep(0.2) - return None - - clear_btn.click(clear, None, img) - - with gr.Row(): - btn1 = gr.Button("Button 1").style(size="sm") - btn2 = gr.UploadButton().style(size="sm") - stop_btn = gr.Button("Stop", label="Stop Button", variant="stop").style( - size="sm" - ) - - with gr.Row(): - gr.Dataframe(value=[[1, 2, 3], [4, 5, 6], [7, 8, 9]], label="Dataframe") - gr.JSON( - value={"a": 1, "b": 2, "c": {"test": "a", "test2": [1, 2, 3]}}, label="JSON" - ) - gr.Label(value={"cat": 0.7, "dog": 0.2, "fish": 0.1}) - gr.File() - with gr.Row(): - gr.ColorPicker() - gr.Video("https://gradio-static-files.s3.us-west-2.amazonaws.com/world.mp4") - gr.Gallery( - [ - ( - "https://gradio-static-files.s3.us-west-2.amazonaws.com/lion.jpg", - "lion", - ), - ( - "https://gradio-static-files.s3.us-west-2.amazonaws.com/logo.png", - "logo", - ), - ( - "https://gradio-static-files.s3.us-west-2.amazonaws.com/tower.jpg", - "tower", - ), - ] - ).style(height="200px", grid=2) - - with gr.Row(): - with gr.Column(scale=2): - chatbot = gr.Chatbot([("Hello", "Hi")], label="Chatbot") - chat_btn = gr.Button("Add messages") - - def chat(history): - time.sleep(2) - yield [["How are you?", "I am good."]] - - chat_btn.click( - lambda history: history - + [["How are you?", "I am good."]] - + (time.sleep(2) or []), - chatbot, - chatbot, - ) - with gr.Column(scale=1): - with gr.Accordion("Advanced Settings"): - gr.Markdown("Hello") - gr.Number(label="Chatbot control 1") - gr.Number(label="Chatbot control 2") - gr.Number(label="Chatbot control 3") - - -if __name__ == "__main__": - demo.queue().launch() diff --git a/spaces/samuelinferences/transformers-can-do-bayesian-inference/prior-fitting/priors/pyro.py b/spaces/samuelinferences/transformers-can-do-bayesian-inference/prior-fitting/priors/pyro.py deleted file mode 100644 index 2f4129bb45366b8146620f3103027a4fc2dc22f8..0000000000000000000000000000000000000000 --- a/spaces/samuelinferences/transformers-can-do-bayesian-inference/prior-fitting/priors/pyro.py +++ /dev/null @@ -1,39 +0,0 @@ -import random - -import torch -from torch 
import nn - -from utils import default_device -from .utils import get_batch_to_dataloader - - -def get_batch(batch_size, seq_len, batch_size_per_gp_sample=None, **config): - batch_size_per_gp_sample = batch_size_per_gp_sample or batch_size // 16 - assert batch_size % batch_size_per_gp_sample == 0, 'Please choose a batch_size divisible by batch_size_per_gp_sample.' - num_models = batch_size // batch_size_per_gp_sample - # standard kaiming uniform init currently... - - models = [config['model']() for _ in range(num_models)] - - sample = sum([[model(seq_len=seq_len) for _ in range(0,batch_size_per_gp_sample)] for model in models],[]) - - def normalize_data(data): - mean = data.mean(0) - std = data.std(0) + .000001 - eval_xs = (data - mean) / std - - return eval_xs - - x, y = zip(*sample) - - y = torch.stack(y, 1).squeeze(-1).detach() - x = torch.stack(x, 1).detach() - - x, y = normalize_data(x), y - - return x, y, y - - -DataLoader = get_batch_to_dataloader(get_batch) -DataLoader.num_outputs = 1 - diff --git a/spaces/scedlatioru/img-to-music/example/Final Nights 3 Download Fixed.md b/spaces/scedlatioru/img-to-music/example/Final Nights 3 Download Fixed.md deleted file mode 100644 index 1ddc7fbf8c59fd2bc1a53b120ec2a58adcf2ef80..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/Final Nights 3 Download Fixed.md +++ /dev/null @@ -1,17 +0,0 @@ -

        final nights 3 download


        Download ✏ ✏ ✏ https://gohhs.com/2uEAmT



        - -Final Nights 3 | Menu | Short version - Spookhead; 6. Thanks for choosing - Exotic Guitar - Kevin MacLeod; 7. We are number one - Hand Unit edition - Aldous Franklin; 8. When you love - Analogue Productions - Aldous Franklin; 9. Love - Jerry Goldsmith; 10. Not this time - Wardan; 11. What's the Secret - Exotic Guitar - Kenny Burrell; 12. Alone - The K.C. project; 13. Alone in the Dark - Juan Atkinson; 14. I will remember - Chino Moreno; 15. When I'm gone - Chino Moreno; 16. Thank you - The K.C. project; 17. I see the light - Juan Atkinson; 18. I Don't Want - Juan Atkinson; 19. I've heard it before - Juan Atkinson; 20. It happens here - J 8a78ff9644
        -
        -
        -

        diff --git a/spaces/scedlatioru/img-to-music/example/Football Manager 2012 Crack Update 12.1.1 Skidrow Crack LINK.md b/spaces/scedlatioru/img-to-music/example/Football Manager 2012 Crack Update 12.1.1 Skidrow Crack LINK.md deleted file mode 100644 index 306f6b13b272349a09a3eeace3f7d37092c42649..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/Football Manager 2012 Crack Update 12.1.1 Skidrow Crack LINK.md +++ /dev/null @@ -1,20 +0,0 @@ -

        Football Manager 2012 Crack Update 12.1.1 Skidrow Crack


        Download 🌟 https://gohhs.com/2uEyL5



        -
        -[…] - -Football Manager 2012 Pro Cracked. Football Manager 2012 Pro Crack UPDATE. Football Manager 2012 Pro Crack UPDATE. DOWNLOAD:  . You can also update by registration. Updating by registration can be done here:  . To download or update, click the “Download” button and then enter the serial key. […] - -Get Unlimited Cracks and Registration. Football Manager 2012 Online Registration. Football Manager 2012 Registration. DOWNLOAD:  . You can also update by registration. Updating by registration can be done here:  . To download or update, click the “Download” button and then enter the serial key. […] - -Football Manager 2012 Cracked Download. Football Manager 2012 Cracked Download. DOWNLOAD:  . You can also update by registration. Updating by registration can be done here:  . To download or update, click the “Download” button and then enter the serial key. […] - -Get Unlimited Cracks and Registration. Football Manager 2012 Offline Registration. Football Manager 2012 Offline Registration. DOWNLOAD:  . You can also update by registration. Updating by registration can be done here:  . To download or update, click the “Download” button and then enter the serial key. […] - -Football Manager 2012 Offline for PC – Football Manager 2012 Online for PC DOWNLOAD:  . You can also update by registration. Updating by registration can be done here:  . To download or update, click the “Download” button and then enter the serial key. […] - -Football Manager 2012 Crack Download. Football Manager 2012 Crack Download. DOWNLOAD:  . You can also update by registration. Updating by registration can be done here:  . To download or update, click the “Download” button and then enter the serial key. […] - -Football Manager 2012 Crack Download. Football Manager 2012 Crack Download. DOWNLOAD:  . You can also update by registration. Updating by registration can be done here:  . To download or update, click the “Download” button and then enter the serial key. 4fefd39f24
        -
        -
        -

        diff --git a/spaces/scedlatioru/img-to-music/example/One Stop Teacher Shop Weekly Math Homework Answers.md b/spaces/scedlatioru/img-to-music/example/One Stop Teacher Shop Weekly Math Homework Answers.md deleted file mode 100644 index 490af2713826e351d5b928819fe5e8cf0ffa273c..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/One Stop Teacher Shop Weekly Math Homework Answers.md +++ /dev/null @@ -1,9 +0,0 @@ -
        -

        This workshop will discuss the correct approach to the study of math and the skills needed to help you organize and maximize your study and learning of mathematics. Five topics will be presented in relation to their place in a math course: listening in the classroom, using a math textbook, taking notes in math, doing homework correctly, and preparing for the math test. You will leave this workshop with tips for organizing your notebooks and, most importantly, the ability to combine classroom notes with textbook reading when doing homework in a manner that improves test preparation.

        -

        One Stop Teacher Shop Weekly Math Homework Answers


        Download File ✦✦✦ https://gohhs.com/2uEzFT



        -

        Math study skills
This workshop will discuss the correct approach to the study of math and the skills needed to help you organize and maximize your study and learning of mathematics. Five topics will be presented in relation to their place in a math course: listening in the classroom, using a math textbook, taking notes in math, doing homework correctly, and preparing for the math test.
        this workshop will discuss the correct approach to the study of math and the skills needed to help you organize and maximize your study and learning of mathematics. five topics will be presented in relation to their place in a math course: listening in the classroom, using a math textbook, taking notes in math, doing homework correctly, and preparing for the math test.

        -

        is your child missing a lot of homework you might not realize it, but if a student misses a lot of assignments, your child might be falling behind in school. you may need to speak to the teacher to see if you can work a schedule with your child that will keep homework consistent. if your child needs extra help, a tutor can be an asset. look into tutoring programs in your community, and ask about any mentors that might be available.

        -

        is your child struggling with homework, and struggling to understand the material being able to understand the material is key to learning. your child may need help from a tutor. ask a teacher, guidance counselor or principal about any programs your child can participate in. ask if there are any tutoring programs you can look into. ask who you should contact.

        -

        899543212b
        -
        -
        \ No newline at end of file diff --git a/spaces/sczhou/CodeFormer/CodeFormer/basicsr/ops/fused_act/__init__.py b/spaces/sczhou/CodeFormer/CodeFormer/basicsr/ops/fused_act/__init__.py deleted file mode 100644 index 241dc0754fae7d88dbbd9a02e665ca30a73c7422..0000000000000000000000000000000000000000 --- a/spaces/sczhou/CodeFormer/CodeFormer/basicsr/ops/fused_act/__init__.py +++ /dev/null @@ -1,3 +0,0 @@ -from .fused_act import FusedLeakyReLU, fused_leaky_relu - -__all__ = ['FusedLeakyReLU', 'fused_leaky_relu'] diff --git a/spaces/segments-tobias/conex/espnet/utils/dynamic_import.py b/spaces/segments-tobias/conex/espnet/utils/dynamic_import.py deleted file mode 100644 index db885d0069bfb8f59dcf03f5477c13706574b217..0000000000000000000000000000000000000000 --- a/spaces/segments-tobias/conex/espnet/utils/dynamic_import.py +++ /dev/null @@ -1,23 +0,0 @@ -import importlib - - -def dynamic_import(import_path, alias=dict()): - """dynamic import module and class - - :param str import_path: syntax 'module_name:class_name' - e.g., 'espnet.transform.add_deltas:AddDeltas' - :param dict alias: shortcut for registered class - :return: imported class - """ - if import_path not in alias and ":" not in import_path: - raise ValueError( - "import_path should be one of {} or " - 'include ":", e.g. "espnet.transform.add_deltas:AddDeltas" : ' - "{}".format(set(alias), import_path) - ) - if ":" not in import_path: - import_path = alias[import_path] - - module_name, objname = import_path.split(":") - m = importlib.import_module(module_name) - return getattr(m, objname) diff --git a/spaces/shi-labs/FcF-Inpainting/training/data/dataset.py b/spaces/shi-labs/FcF-Inpainting/training/data/dataset.py deleted file mode 100644 index eb5deb6fc27b1b068ba4cb6d6e632a08b3d558f4..0000000000000000000000000000000000000000 --- a/spaces/shi-labs/FcF-Inpainting/training/data/dataset.py +++ /dev/null @@ -1,242 +0,0 @@ -import os -import numpy as np -import PIL.Image -import json -import torch -import dnnlib -import dnnlib -import cv2 -from icecream import ic -from . import mask_generator -import os.path as osp -import matplotlib.pyplot as plt -from icecream import ic -import matplotlib.cm as cm -import copy -import albumentations as A -try: - import pyspng -except ImportError: - pyspng = None - -#---------------------------------------------------------------------------- - -class Dataset(torch.utils.data.Dataset): - def __init__(self, - name, # Name of the dataset. - raw_shape, # Shape of the raw image data (NCHW). - max_size = None, # Artificially limit the size of the dataset. None = no limit. Applied before xflip. - use_labels = False, # Enable conditioning labels? False = label dimension is zero. - xflip = False, # Artificially double the size of the dataset via x-flips. Applied after max_size. - random_seed = 0, # Random seed to use when applying max_size. - ): - self._name = name - self._raw_shape = list(raw_shape) - self._use_labels = use_labels - self._raw_labels = None - self._label_shape = None - - # Apply max_size. - self._raw_idx = np.arange(self._raw_shape[0], dtype=np.int64) - if (max_size is not None) and (self._raw_idx.size > max_size): - np.random.RandomState(random_seed).shuffle(self._raw_idx) - self._raw_idx = np.sort(self._raw_idx[:max_size]) - - # Apply xflip. 
- self._xflip = np.zeros(self._raw_idx.size, dtype=np.uint8) - if xflip: - self._raw_idx = np.tile(self._raw_idx, 2) - self._xflip = np.concatenate([self._xflip, np.ones_like(self._xflip)]) - - def _get_raw_labels(self): - if self._raw_labels is None: - self._raw_labels = self._load_raw_labels() if self._use_labels else None - if self._raw_labels is None: - self._raw_labels = np.zeros([self._raw_shape[0], 0], dtype=np.float32) - assert isinstance(self._raw_labels, np.ndarray) - assert self._raw_labels.shape[0] == self._raw_shape[0] - assert self._raw_labels.dtype in [np.float32, np.int64] - if self._raw_labels.dtype == np.int64: - assert self._raw_labels.ndim == 1 - assert np.all(self._raw_labels >= 0) - return self._raw_labels - - def close(self): # to be overridden by subclass - pass - - def _load_raw_image(self, raw_idx): # to be overridden by subclass - raise NotImplementedError - - def _load_raw_labels(self): # to be overridden by subclass - raise NotImplementedError - - def __getstate__(self): - return dict(self.__dict__, _raw_labels=None) - - def __del__(self): - try: - self.close() - except: - pass - - def __len__(self): - return self._raw_idx.size - - def __getitem__(self, idx): - image = self._load_raw_image(self._raw_idx[idx]) - assert isinstance(image, np.ndarray) - assert list(image.shape) == self.image_shape - assert image.dtype == np.uint8 - if self._xflip[idx]: - assert image.ndim == 3 # CHW - image = image[:, :, ::-1] - return image.copy(), self.get_label(idx) - - def get_label(self, idx): - label = self._get_raw_labels()[self._raw_idx[idx]] - if label.dtype == np.int64: - onehot = np.zeros(self.label_shape, dtype=np.float32) - onehot[label] = 1 - label = onehot - return label.copy() - - def get_details(self, idx): - d = dnnlib.EasyDict() - d.raw_idx = int(self._raw_idx[idx]) - d.xflip = (int(self._xflip[idx]) != 0) - d.raw_label = self._get_raw_labels()[d.raw_idx].copy() - return d - - @property - def name(self): - return self._name - - @property - def image_shape(self): - return list(self._raw_shape[1:]) - - @property - def num_channels(self): - assert len(self.image_shape) == 3 # CHW - return self.image_shape[0] - - @property - def resolution(self): - assert len(self.image_shape) == 3 # CHW - assert self.image_shape[1] == self.image_shape[2] - return self.image_shape[1] - - @property - def label_shape(self): - if self._label_shape is None: - raw_labels = self._get_raw_labels() - if raw_labels.dtype == np.int64: - self._label_shape = [int(np.max(raw_labels)) + 1] - else: - self._label_shape = raw_labels.shape[1:] - return list(self._label_shape) - - @property - def label_dim(self): - assert len(self.label_shape) == 1 - return self.label_shape[0] - - @property - def has_labels(self): - return any(x != 0 for x in self.label_shape) - - @property - def has_onehot_labels(self): - return self._get_raw_labels().dtype == np.int64 - -#---------------------------------------------------------------------------- - -class ImageDataset(Dataset): - - def __init__(self, - img_path, # Path to images. - resolution = None, # Ensure specific resolution, None = highest available. - **super_kwargs, # Additional arguments for the Dataset base class. 
- ): - self.sz = resolution - self.img_path = img_path - self._type = 'dir' - self.files = [] - - self._all_fnames = [os.path.relpath(os.path.join(root, fname), start=self.img_path) for root, _dirs, files in os.walk(self.img_path) for fname in files] - PIL.Image.init() - self._image_fnames = sorted(os.path.join(self.img_path,fname) for fname in self._all_fnames if self._file_ext(fname) in PIL.Image.EXTENSION) - if len(self._image_fnames) == 0: - raise IOError('No image files found in the specified path') - - self.files = [] - - for f in self._image_fnames: - if not '_mask' in f: - self.files.append(f) - - self.files = sorted(self.files) - - self.transform = A.Compose([ - A.PadIfNeeded(min_height=self.sz, min_width=self.sz), - A.OpticalDistortion(), - A.RandomCrop(height=self.sz, width=self.sz), - A.HorizontalFlip(), - A.CLAHE(), - A.ToFloat() - ]) - - name = os.path.splitext(os.path.basename(self.img_path))[0] - raw_shape = [len(self.files)] + list(self._load_raw_image(0).shape) - if resolution is not None and (raw_shape[2] != resolution or raw_shape[3] != resolution): - raise IOError('Image files do not match the specified resolution') - super().__init__(name=name, raw_shape=raw_shape, **super_kwargs) - - def __len__(self): - return len(self.files) - - def _load_image(self, fn): - return PIL.Image.open(fn).convert('RGB') - - @staticmethod - def _file_ext(fname): - return os.path.splitext(fname)[1].lower() - - def _load_raw_image(self, raw_idx): - fname = self.files[raw_idx] - image = np.array(PIL.Image.open(fname).convert('RGB')) - image = self.transform(image=image)['image'] - if image.ndim == 2: - image = image[:, :, np.newaxis] # HW => HWC - image = image.transpose(2, 0, 1) # HWC => CHW - return image - - def _load_raw_labels(self): - fname = 'dataset.json' - if fname not in self._all_fnames: - return None - with self._open_file(fname) as f: - labels = json.load(f)['labels'] - if labels is None: - return None - labels = dict(labels) - labels = [labels[fname.replace('\\', '/')] for fname in self._image_fnames] - labels = np.array(labels) - labels = labels.astype({1: np.int64, 2: np.float32}[labels.ndim]) - return labels - - def _get_image(self, idx): - fname = self.files[idx] - mask = mask_generator.generate_random_mask(s=self.sz, hole_range=[0.1,0.7]) - - rgb = np.array(self._load_image(fname)) # uint8 - rgb = self.transform(image=rgb)['image'] - rgb = np.rint(rgb * 255).clip(0, 255).astype(np.uint8) - - return rgb, mask - - def __getitem__(self, idx): - rgb, mask = self._get_image(idx) # modal, uint8 {0, 1} - rgb = rgb.transpose(2,0,1) - - return rgb, mask, super().get_label(idx) \ No newline at end of file diff --git a/spaces/shi-labs/OneFormer/oneformer/modeling/pixel_decoder/ops/make.sh b/spaces/shi-labs/OneFormer/oneformer/modeling/pixel_decoder/ops/make.sh deleted file mode 100644 index ca5c0b469da786c847ba04d437bb31ee0fc938da..0000000000000000000000000000000000000000 --- a/spaces/shi-labs/OneFormer/oneformer/modeling/pixel_decoder/ops/make.sh +++ /dev/null @@ -1,13 +0,0 @@ -#!/usr/bin/env bash -# ------------------------------------------------------------------------------------------------ -# Deformable DETR -# Copyright (c) 2020 SenseTime. All Rights Reserved. 
-# Licensed under the Apache License, Version 2.0 [see LICENSE for details] -# ------------------------------------------------------------------------------------------------ -# Modified from https://github.com/chengdazhi/Deformable-Convolution-V2-PyTorch/tree/pytorch_1.0.0 -# ------------------------------------------------------------------------------------------------ - -# Copyright (c) Facebook, Inc. and its affiliates. -# Modified by Bowen Cheng from https://github.com/fundamentalvision/Deformable-DETR - -FORCE_CUDA=1 python setup.py build install diff --git a/spaces/shiwan10000/CodeFormer/CodeFormer/basicsr/archs/vqgan_arch.py b/spaces/shiwan10000/CodeFormer/CodeFormer/basicsr/archs/vqgan_arch.py deleted file mode 100644 index f6dfcf4c9983b431f0a978701e5ddd9598faf381..0000000000000000000000000000000000000000 --- a/spaces/shiwan10000/CodeFormer/CodeFormer/basicsr/archs/vqgan_arch.py +++ /dev/null @@ -1,435 +0,0 @@ -''' -VQGAN code, adapted from the original created by the Unleashing Transformers authors: -https://github.com/samb-t/unleashing-transformers/blob/master/models/vqgan.py - -''' -import numpy as np -import torch -import torch.nn as nn -import torch.nn.functional as F -import copy -from basicsr.utils import get_root_logger -from basicsr.utils.registry import ARCH_REGISTRY - -def normalize(in_channels): - return torch.nn.GroupNorm(num_groups=32, num_channels=in_channels, eps=1e-6, affine=True) - - -@torch.jit.script -def swish(x): - return x*torch.sigmoid(x) - - -# Define VQVAE classes -class VectorQuantizer(nn.Module): - def __init__(self, codebook_size, emb_dim, beta): - super(VectorQuantizer, self).__init__() - self.codebook_size = codebook_size # number of embeddings - self.emb_dim = emb_dim # dimension of embedding - self.beta = beta # commitment cost used in loss term, beta * ||z_e(x)-sg[e]||^2 - self.embedding = nn.Embedding(self.codebook_size, self.emb_dim) - self.embedding.weight.data.uniform_(-1.0 / self.codebook_size, 1.0 / self.codebook_size) - - def forward(self, z): - # reshape z -> (batch, height, width, channel) and flatten - z = z.permute(0, 2, 3, 1).contiguous() - z_flattened = z.view(-1, self.emb_dim) - - # distances from z to embeddings e_j (z - e)^2 = z^2 + e^2 - 2 e * z - d = (z_flattened ** 2).sum(dim=1, keepdim=True) + (self.embedding.weight**2).sum(1) - \ - 2 * torch.matmul(z_flattened, self.embedding.weight.t()) - - mean_distance = torch.mean(d) - # find closest encodings - # min_encoding_indices = torch.argmin(d, dim=1).unsqueeze(1) - min_encoding_scores, min_encoding_indices = torch.topk(d, 1, dim=1, largest=False) - # [0-1], higher score, higher confidence - min_encoding_scores = torch.exp(-min_encoding_scores/10) - - min_encodings = torch.zeros(min_encoding_indices.shape[0], self.codebook_size).to(z) - min_encodings.scatter_(1, min_encoding_indices, 1) - - # get quantized latent vectors - z_q = torch.matmul(min_encodings, self.embedding.weight).view(z.shape) - # compute loss for embedding - loss = torch.mean((z_q.detach()-z)**2) + self.beta * torch.mean((z_q - z.detach()) ** 2) - # preserve gradients - z_q = z + (z_q - z).detach() - - # perplexity - e_mean = torch.mean(min_encodings, dim=0) - perplexity = torch.exp(-torch.sum(e_mean * torch.log(e_mean + 1e-10))) - # reshape back to match original input shape - z_q = z_q.permute(0, 3, 1, 2).contiguous() - - return z_q, loss, { - "perplexity": perplexity, - "min_encodings": min_encodings, - "min_encoding_indices": min_encoding_indices, - "min_encoding_scores": min_encoding_scores, - 
"mean_distance": mean_distance - } - - def get_codebook_feat(self, indices, shape): - # input indices: batch*token_num -> (batch*token_num)*1 - # shape: batch, height, width, channel - indices = indices.view(-1,1) - min_encodings = torch.zeros(indices.shape[0], self.codebook_size).to(indices) - min_encodings.scatter_(1, indices, 1) - # get quantized latent vectors - z_q = torch.matmul(min_encodings.float(), self.embedding.weight) - - if shape is not None: # reshape back to match original input shape - z_q = z_q.view(shape).permute(0, 3, 1, 2).contiguous() - - return z_q - - -class GumbelQuantizer(nn.Module): - def __init__(self, codebook_size, emb_dim, num_hiddens, straight_through=False, kl_weight=5e-4, temp_init=1.0): - super().__init__() - self.codebook_size = codebook_size # number of embeddings - self.emb_dim = emb_dim # dimension of embedding - self.straight_through = straight_through - self.temperature = temp_init - self.kl_weight = kl_weight - self.proj = nn.Conv2d(num_hiddens, codebook_size, 1) # projects last encoder layer to quantized logits - self.embed = nn.Embedding(codebook_size, emb_dim) - - def forward(self, z): - hard = self.straight_through if self.training else True - - logits = self.proj(z) - - soft_one_hot = F.gumbel_softmax(logits, tau=self.temperature, dim=1, hard=hard) - - z_q = torch.einsum("b n h w, n d -> b d h w", soft_one_hot, self.embed.weight) - - # + kl divergence to the prior loss - qy = F.softmax(logits, dim=1) - diff = self.kl_weight * torch.sum(qy * torch.log(qy * self.codebook_size + 1e-10), dim=1).mean() - min_encoding_indices = soft_one_hot.argmax(dim=1) - - return z_q, diff, { - "min_encoding_indices": min_encoding_indices - } - - -class Downsample(nn.Module): - def __init__(self, in_channels): - super().__init__() - self.conv = torch.nn.Conv2d(in_channels, in_channels, kernel_size=3, stride=2, padding=0) - - def forward(self, x): - pad = (0, 1, 0, 1) - x = torch.nn.functional.pad(x, pad, mode="constant", value=0) - x = self.conv(x) - return x - - -class Upsample(nn.Module): - def __init__(self, in_channels): - super().__init__() - self.conv = nn.Conv2d(in_channels, in_channels, kernel_size=3, stride=1, padding=1) - - def forward(self, x): - x = F.interpolate(x, scale_factor=2.0, mode="nearest") - x = self.conv(x) - - return x - - -class ResBlock(nn.Module): - def __init__(self, in_channels, out_channels=None): - super(ResBlock, self).__init__() - self.in_channels = in_channels - self.out_channels = in_channels if out_channels is None else out_channels - self.norm1 = normalize(in_channels) - self.conv1 = nn.Conv2d(in_channels, out_channels, kernel_size=3, stride=1, padding=1) - self.norm2 = normalize(out_channels) - self.conv2 = nn.Conv2d(out_channels, out_channels, kernel_size=3, stride=1, padding=1) - if self.in_channels != self.out_channels: - self.conv_out = nn.Conv2d(in_channels, out_channels, kernel_size=1, stride=1, padding=0) - - def forward(self, x_in): - x = x_in - x = self.norm1(x) - x = swish(x) - x = self.conv1(x) - x = self.norm2(x) - x = swish(x) - x = self.conv2(x) - if self.in_channels != self.out_channels: - x_in = self.conv_out(x_in) - - return x + x_in - - -class AttnBlock(nn.Module): - def __init__(self, in_channels): - super().__init__() - self.in_channels = in_channels - - self.norm = normalize(in_channels) - self.q = torch.nn.Conv2d( - in_channels, - in_channels, - kernel_size=1, - stride=1, - padding=0 - ) - self.k = torch.nn.Conv2d( - in_channels, - in_channels, - kernel_size=1, - stride=1, - padding=0 - ) - self.v = 
torch.nn.Conv2d( - in_channels, - in_channels, - kernel_size=1, - stride=1, - padding=0 - ) - self.proj_out = torch.nn.Conv2d( - in_channels, - in_channels, - kernel_size=1, - stride=1, - padding=0 - ) - - def forward(self, x): - h_ = x - h_ = self.norm(h_) - q = self.q(h_) - k = self.k(h_) - v = self.v(h_) - - # compute attention - b, c, h, w = q.shape - q = q.reshape(b, c, h*w) - q = q.permute(0, 2, 1) - k = k.reshape(b, c, h*w) - w_ = torch.bmm(q, k) - w_ = w_ * (int(c)**(-0.5)) - w_ = F.softmax(w_, dim=2) - - # attend to values - v = v.reshape(b, c, h*w) - w_ = w_.permute(0, 2, 1) - h_ = torch.bmm(v, w_) - h_ = h_.reshape(b, c, h, w) - - h_ = self.proj_out(h_) - - return x+h_ - - -class Encoder(nn.Module): - def __init__(self, in_channels, nf, emb_dim, ch_mult, num_res_blocks, resolution, attn_resolutions): - super().__init__() - self.nf = nf - self.num_resolutions = len(ch_mult) - self.num_res_blocks = num_res_blocks - self.resolution = resolution - self.attn_resolutions = attn_resolutions - - curr_res = self.resolution - in_ch_mult = (1,)+tuple(ch_mult) - - blocks = [] - # initial convultion - blocks.append(nn.Conv2d(in_channels, nf, kernel_size=3, stride=1, padding=1)) - - # residual and downsampling blocks, with attention on smaller res (16x16) - for i in range(self.num_resolutions): - block_in_ch = nf * in_ch_mult[i] - block_out_ch = nf * ch_mult[i] - for _ in range(self.num_res_blocks): - blocks.append(ResBlock(block_in_ch, block_out_ch)) - block_in_ch = block_out_ch - if curr_res in attn_resolutions: - blocks.append(AttnBlock(block_in_ch)) - - if i != self.num_resolutions - 1: - blocks.append(Downsample(block_in_ch)) - curr_res = curr_res // 2 - - # non-local attention block - blocks.append(ResBlock(block_in_ch, block_in_ch)) - blocks.append(AttnBlock(block_in_ch)) - blocks.append(ResBlock(block_in_ch, block_in_ch)) - - # normalise and convert to latent size - blocks.append(normalize(block_in_ch)) - blocks.append(nn.Conv2d(block_in_ch, emb_dim, kernel_size=3, stride=1, padding=1)) - self.blocks = nn.ModuleList(blocks) - - def forward(self, x): - for block in self.blocks: - x = block(x) - - return x - - -class Generator(nn.Module): - def __init__(self, nf, emb_dim, ch_mult, res_blocks, img_size, attn_resolutions): - super().__init__() - self.nf = nf - self.ch_mult = ch_mult - self.num_resolutions = len(self.ch_mult) - self.num_res_blocks = res_blocks - self.resolution = img_size - self.attn_resolutions = attn_resolutions - self.in_channels = emb_dim - self.out_channels = 3 - block_in_ch = self.nf * self.ch_mult[-1] - curr_res = self.resolution // 2 ** (self.num_resolutions-1) - - blocks = [] - # initial conv - blocks.append(nn.Conv2d(self.in_channels, block_in_ch, kernel_size=3, stride=1, padding=1)) - - # non-local attention block - blocks.append(ResBlock(block_in_ch, block_in_ch)) - blocks.append(AttnBlock(block_in_ch)) - blocks.append(ResBlock(block_in_ch, block_in_ch)) - - for i in reversed(range(self.num_resolutions)): - block_out_ch = self.nf * self.ch_mult[i] - - for _ in range(self.num_res_blocks): - blocks.append(ResBlock(block_in_ch, block_out_ch)) - block_in_ch = block_out_ch - - if curr_res in self.attn_resolutions: - blocks.append(AttnBlock(block_in_ch)) - - if i != 0: - blocks.append(Upsample(block_in_ch)) - curr_res = curr_res * 2 - - blocks.append(normalize(block_in_ch)) - blocks.append(nn.Conv2d(block_in_ch, self.out_channels, kernel_size=3, stride=1, padding=1)) - - self.blocks = nn.ModuleList(blocks) - - - def forward(self, x): - for block in self.blocks: - x = 
block(x) - - return x - - -@ARCH_REGISTRY.register() -class VQAutoEncoder(nn.Module): - def __init__(self, img_size, nf, ch_mult, quantizer="nearest", res_blocks=2, attn_resolutions=[16], codebook_size=1024, emb_dim=256, - beta=0.25, gumbel_straight_through=False, gumbel_kl_weight=1e-8, model_path=None): - super().__init__() - logger = get_root_logger() - self.in_channels = 3 - self.nf = nf - self.n_blocks = res_blocks - self.codebook_size = codebook_size - self.embed_dim = emb_dim - self.ch_mult = ch_mult - self.resolution = img_size - self.attn_resolutions = attn_resolutions - self.quantizer_type = quantizer - self.encoder = Encoder( - self.in_channels, - self.nf, - self.embed_dim, - self.ch_mult, - self.n_blocks, - self.resolution, - self.attn_resolutions - ) - if self.quantizer_type == "nearest": - self.beta = beta #0.25 - self.quantize = VectorQuantizer(self.codebook_size, self.embed_dim, self.beta) - elif self.quantizer_type == "gumbel": - self.gumbel_num_hiddens = emb_dim - self.straight_through = gumbel_straight_through - self.kl_weight = gumbel_kl_weight - self.quantize = GumbelQuantizer( - self.codebook_size, - self.embed_dim, - self.gumbel_num_hiddens, - self.straight_through, - self.kl_weight - ) - self.generator = Generator( - self.nf, - self.embed_dim, - self.ch_mult, - self.n_blocks, - self.resolution, - self.attn_resolutions - ) - - if model_path is not None: - chkpt = torch.load(model_path, map_location='cpu') - if 'params_ema' in chkpt: - self.load_state_dict(torch.load(model_path, map_location='cpu')['params_ema']) - logger.info(f'vqgan is loaded from: {model_path} [params_ema]') - elif 'params' in chkpt: - self.load_state_dict(torch.load(model_path, map_location='cpu')['params']) - logger.info(f'vqgan is loaded from: {model_path} [params]') - else: - raise ValueError(f'Wrong params!') - - - def forward(self, x): - x = self.encoder(x) - quant, codebook_loss, quant_stats = self.quantize(x) - x = self.generator(quant) - return x, codebook_loss, quant_stats - - - -# patch based discriminator -@ARCH_REGISTRY.register() -class VQGANDiscriminator(nn.Module): - def __init__(self, nc=3, ndf=64, n_layers=4, model_path=None): - super().__init__() - - layers = [nn.Conv2d(nc, ndf, kernel_size=4, stride=2, padding=1), nn.LeakyReLU(0.2, True)] - ndf_mult = 1 - ndf_mult_prev = 1 - for n in range(1, n_layers): # gradually increase the number of filters - ndf_mult_prev = ndf_mult - ndf_mult = min(2 ** n, 8) - layers += [ - nn.Conv2d(ndf * ndf_mult_prev, ndf * ndf_mult, kernel_size=4, stride=2, padding=1, bias=False), - nn.BatchNorm2d(ndf * ndf_mult), - nn.LeakyReLU(0.2, True) - ] - - ndf_mult_prev = ndf_mult - ndf_mult = min(2 ** n_layers, 8) - - layers += [ - nn.Conv2d(ndf * ndf_mult_prev, ndf * ndf_mult, kernel_size=4, stride=1, padding=1, bias=False), - nn.BatchNorm2d(ndf * ndf_mult), - nn.LeakyReLU(0.2, True) - ] - - layers += [ - nn.Conv2d(ndf * ndf_mult, 1, kernel_size=4, stride=1, padding=1)] # output 1 channel prediction map - self.main = nn.Sequential(*layers) - - if model_path is not None: - chkpt = torch.load(model_path, map_location='cpu') - if 'params_d' in chkpt: - self.load_state_dict(torch.load(model_path, map_location='cpu')['params_d']) - elif 'params' in chkpt: - self.load_state_dict(torch.load(model_path, map_location='cpu')['params']) - else: - raise ValueError(f'Wrong params!') - - def forward(self, x): - return self.main(x) \ No newline at end of file diff --git a/spaces/shvuuuu/Credit_Card_Churn_Predictor/app.py b/spaces/shvuuuu/Credit_Card_Churn_Predictor/app.py 
deleted file mode 100644 index 111b9cbb42fd18f7f3464ec3e2a203a4917c724a..0000000000000000000000000000000000000000 --- a/spaces/shvuuuu/Credit_Card_Churn_Predictor/app.py +++ /dev/null @@ -1,194 +0,0 @@ -import gradio as gr -import pickle - -def example1(): - - model = pickle.load(open('model.pkl', 'rb')) - input_model = [[65,1.8,2,0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1]] - pred=model.predict(input_model) - churn = "False" - if pred[0] == 1: - churn = "He Will Churn" - elif pred[0] == 0: - churn = "He Will Not Churn" - return churn - -def example2(): - - model = pickle.load(open('model.pkl', 'rb')) - input_model = [[41,2,2,0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0]] - pred=model.predict(input_model) - churn = "False" - if pred[0] == 1: - churn = "He Will Churn" - elif pred[0] == 0: - churn = "He Will Not Churn" - return churn - - -def example3(): - - model = pickle.load(open('model.pkl', 'rb')) - input_model = [[10,1.1,2,0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0]] - pred=model.predict(input_model) - churn = "False" - if pred[0] == 1: - churn = "He Will Churn" - elif pred[0] == 0: - churn = "He Will Not Churn" - return churn - -def example4(): - - model = pickle.load(open('model.pkl', 'rb')) - input_model = [[7,0.8,5,0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0,0, 0, 1]] - pred=model.predict(input_model) - churn = "False" - if pred[0] == 0: - churn = "She Will Churn" - elif pred[0] == 1: - churn = "She Will Not Churn" - return churn - - -def greet(Total_Transaction, Total_Ct_Chng_Q4_Q1, Total_Relationship_Count, Education=None, Annual_Income=None, Marital_Status=None, Card_Type=None): - educ, edud, edug, eduh, edup, eduu, ai0, ai40, ai60, ai80, ai120, msd, msm, mss, ctb, ctg, cts = 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 - - if Annual_Income == "0k-40k": - ai0 = 1 - elif Annual_Income == "40k-60k": - ai40 = 1 - elif Annual_Income == "60k-80k": - ai60 = 1 - elif Annual_Income == "80k-120k": - ai80 = 1 - elif Annual_Income == "120k+": - ai120 = 1 - - if Marital_Status == "Single": - mss = 1 - elif Marital_Status == "Married": - msm = 1 - elif Marital_Status == "Divorced": - msd = 1 - - if Card_Type == "Blue": - ctb = 1 - elif Card_Type == "Gold": - ctg = 1 - elif Card_Type == "Silver": - cts = 1 - - if Education == "College": - educ = 1 - elif Education == "Doctorate": - edud = 1 - elif Education == "Graduate": - edug = 1 - elif Education == "High-School": - eduh = 1 - elif Education == "Post-Graduate": - edup = 1 - elif Education == "Uneducated": - eduu = 1 - - - input_model = [[Total_Transaction,Total_Ct_Chng_Q4_Q1,Total_Relationship_Count,educ, edud, edug, eduh, edup, eduu, ai120, ai40, ai60, ai80, ai0, msd, msm, mss,ctb, ctg, cts]] - model = pickle.load(open('model.pkl', 'rb')) - pred=model.predict(input_model) - churn = "False" - if pred[0] == 1: - churn = "True" - elif pred[0] == 0: - churn = "False" - return churn - -with gr.Blocks() as demo: - with gr.Row(): - with gr.Column(scale=1,min_width=600): - gr.Image("logo2.png").style(height='7') - Total_Transaction = gr.Slider(0, 200,label="Total Transaction Count") - Total_Ct_Chng_Q4_Q1 = gr.Slider(0, 30,label="Transaction Count Q4 vs Q1") - Total_Relationship_Count = gr.Slider(0, 20,step=1,label="Total Relationship Count") - - with gr.Column(scale=2,min_width=600): - with gr.Row(): - with gr.Column(scale=1,min_width=300): - Annual_Income = gr.Dropdown(["0k-40k","40k-60k","60k-80k","80k-120K","120k+"],label="Annual Income") - with gr.Column(scale=2,min_width=300): - Education = 
gr.Dropdown(["College","Doctorate","Graduate","High-School","Post-Graduate","Uneducated","Unknown"],label="Education") - - with gr.Row(): - with gr.Column(scale=3,min_width=300): - Marital_Status = gr.Dropdown(["Single","Married","Divorced","Unknown"],label="Marital Status") - with gr.Column(scale=4,min_width=300): - Card_Type = gr.Dropdown(["Blue","Silver","Gold"],label="Crad Type") - churn = gr.Textbox(value="", label="Churn") - btn = gr.Button("PREDICT").style() - btn.click(fn=greet, inputs=[Total_Transaction,Total_Ct_Chng_Q4_Q1,Total_Relationship_Count,Education,Annual_Income,Marital_Status,Card_Type], outputs=[churn]) - gr.Markdown("""# Few Examples Based on Real-World Simulations""") - - with gr.Row(): - with gr.Column(scale=1,min_width=300): - gr.Image("avatars/1.png") - churn1 = gr.Textbox(value="", label="Churn") - btn1 = gr.Button("PREDICT").style() - exp =1 - btn1.click(fn=example1, inputs=[], outputs=[churn1]) - gr.Markdown(""" - # Corporate Professional! - Total Transaction Count - 45\n - Transaction Count Q4 vs Q1 - 1.3\n - Total Relationship Count - 2\n - Annual Income - 40k-60k\n - Education - Graduate\n - Marital Status - Married\n - Card Type - Silver\n - """) - with gr.Column(scale=2,min_width=300): - gr.Image("avatars/4.png") - churn2 = gr.Textbox(value="", label="Churn") - bt2 = gr.Button("PREDICT").style() - bt2.click(fn=example4, inputs=[], outputs=[churn2]) - gr.Markdown(""" - # Medical Professional! - Total Transaction Count - 7\n - Transaction Count Q4 vs Q1 - 0.8\n - Total Relationship Count - 5\n - Annual Income - 80k-120k\n - Education - Doctorate\n - Marital Status - Married\n - Card Type - Gold\n - """) - with gr.Column(scale=3,min_width=300): - gr.Image("avatars/2.png") - churn3 = gr.Textbox(value="", label="Churn") - btn3 = gr.Button("PREDICT").style() - btn3.click(fn=example2, inputs=[], outputs=[churn3]) - gr.Markdown(""" - # Freelance Photographer! - Total Transaction Count - 41\n - Transaction Count Q4 vs Q1 - 2\n - Total Relationship Count - 2\n - Annual Income - 0k-40k\n - Education - High-School\n - Marital Status - Single\n - Card Type - Blue\n - """) - with gr.Column(scale=4,min_width=300): - gr.Image("avatars/3.png") - churn4 = gr.Textbox(value="", label="Churn") - btn4 = gr.Button("PREDICT").style() - btn4.click(fn=example3, inputs=[], outputs=[churn4]) - gr.Markdown(""" - # Retired Veteran Pensioner! 
- Total Transaction Count - 10\n - Transaction Count Q4 vs Q1 - 1.1\n - Total Relationship Count - 2\n - Annual Income - 80k-120k\n - Education - Post-Graduate\n - Marital Status - Divorced\n - Card Type - GOld\n - """) - -demo.launch() \ No newline at end of file diff --git a/spaces/sidharthism/fashion-eye/models/stylegan2/stylegan2-pytorch/distributed.py b/spaces/sidharthism/fashion-eye/models/stylegan2/stylegan2-pytorch/distributed.py deleted file mode 100644 index 51fa243257ef302e2015d5ff36ac531b86a9a0ce..0000000000000000000000000000000000000000 --- a/spaces/sidharthism/fashion-eye/models/stylegan2/stylegan2-pytorch/distributed.py +++ /dev/null @@ -1,126 +0,0 @@ -import math -import pickle - -import torch -from torch import distributed as dist -from torch.utils.data.sampler import Sampler - - -def get_rank(): - if not dist.is_available(): - return 0 - - if not dist.is_initialized(): - return 0 - - return dist.get_rank() - - -def synchronize(): - if not dist.is_available(): - return - - if not dist.is_initialized(): - return - - world_size = dist.get_world_size() - - if world_size == 1: - return - - dist.barrier() - - -def get_world_size(): - if not dist.is_available(): - return 1 - - if not dist.is_initialized(): - return 1 - - return dist.get_world_size() - - -def reduce_sum(tensor): - if not dist.is_available(): - return tensor - - if not dist.is_initialized(): - return tensor - - tensor = tensor.clone() - dist.all_reduce(tensor, op=dist.ReduceOp.SUM) - - return tensor - - -def gather_grad(params): - world_size = get_world_size() - - if world_size == 1: - return - - for param in params: - if param.grad is not None: - dist.all_reduce(param.grad.data, op=dist.ReduceOp.SUM) - param.grad.data.div_(world_size) - - -def all_gather(data): - world_size = get_world_size() - - if world_size == 1: - return [data] - - buffer = pickle.dumps(data) - storage = torch.ByteStorage.from_buffer(buffer) - tensor = torch.ByteTensor(storage).to('cuda') - - local_size = torch.IntTensor([tensor.numel()]).to('cuda') - size_list = [torch.IntTensor([0]).to('cuda') for _ in range(world_size)] - dist.all_gather(size_list, local_size) - size_list = [int(size.item()) for size in size_list] - max_size = max(size_list) - - tensor_list = [] - for _ in size_list: - tensor_list.append(torch.ByteTensor(size=(max_size,)).to('cuda')) - - if local_size != max_size: - padding = torch.ByteTensor(size=(max_size - local_size,)).to('cuda') - tensor = torch.cat((tensor, padding), 0) - - dist.all_gather(tensor_list, tensor) - - data_list = [] - - for size, tensor in zip(size_list, tensor_list): - buffer = tensor.cpu().numpy().tobytes()[:size] - data_list.append(pickle.loads(buffer)) - - return data_list - - -def reduce_loss_dict(loss_dict): - world_size = get_world_size() - - if world_size < 2: - return loss_dict - - with torch.no_grad(): - keys = [] - losses = [] - - for k in sorted(loss_dict.keys()): - keys.append(k) - losses.append(loss_dict[k]) - - losses = torch.stack(losses, 0) - dist.reduce(losses, dst=0) - - if dist.get_rank() == 0: - losses /= world_size - - reduced_losses = {k: v for k, v in zip(keys, losses)} - - return reduced_losses diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Minecraft 3D Mod and Enhance Your Gaming Experience.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Minecraft 3D Mod and Enhance Your Gaming Experience.md deleted file mode 100644 index 
339eab883116625e560b88d3f3f9ea63981b56ae..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Minecraft 3D Mod and Enhance Your Gaming Experience.md +++ /dev/null @@ -1,113 +0,0 @@ - -

        Introduction

        -

        Minecraft is a sandbox game that allows players to create and explore infinite worlds made of blocks. While the game has a distinctive pixelated style, some players may want to enhance their experience with more realistic graphics and models. That's where mods come in.

        -

        download minecraft 3d mod


        Download File ⚙⚙⚙ https://ssurll.com/2uO17M



        -

        Mods are additions and expansions that can change or improve various aspects of Minecraft, such as gameplay, interface, performance, or visuals. One of the most popular mods for improving the appearance of Minecraft is Minecraft 3D.

        -

        Minecraft 3D is a resource pack that adds 3D models and textures to various blocks and items in Minecraft, giving them a more realistic and immersive look. For example, bookshelves have books sticking out of them, crafting tables have tools on them, ores have different shapes and sizes, mushrooms have stems and caps, sugar cane has leaves and segments, and so on.

        -

        In this article, I will show you how to install and play with the Minecraft 3D mod, as well as some of its features, benefits, compatibility, and alternatives. Let's get started!

        -

        How to download minecraft 3d mod for free
        -Minecraft 3d mod download link
        -Best minecraft 3d resource packs
        -Minecraft 3d rediscovered mod download
        -Minecraft 3d model editor blockbench
        -Download minecraft 3d mod apk
        -Minecraft 3d mod for java edition
        -Minecraft 3d mod for bedrock edition
        -Minecraft 3d mod review
        -Minecraft 3d mod tutorial
        -Minecraft 3d mod gameplay
        -Minecraft 3d mod showcase
        -Minecraft 3d mod installation guide
        -Minecraft 3d mod compatible versions
        -Minecraft 3d mod requirements
        -Minecraft 3d mod features
        -Minecraft 3d mod screenshots
        -Minecraft 3d mod videos
        -Minecraft 3d mod download size
        -Minecraft 3d mod download site
        -Minecraft 3d mod download error
        -Minecraft 3d mod download virus
        -Minecraft 3d mod download safe
        -Minecraft 3d mod download reddit
        -Minecraft 3d mod download curseforge
        -Minecraft 3d mod download minecraft.net
        -Minecraft 3d mod download windows 10
        -Minecraft 3d mod download mac
        -Minecraft 3d mod download linux
        -Minecraft 3d mod download android
        -Minecraft 3d mod download ios
        -Minecraft 3d mod download xbox one
        -Minecraft 3d mod download ps4
        -Minecraft 3d mod download switch
        -Minecraft 3d mod alternatives
        -Minecraft 3d mod updates
        -Minecraft 3d mod bugs
        -Minecraft 3d mod fixes
        -Minecraft 3d mod support
        -Minecraft 3d mod feedback
        -Minecraft 3d mod forum
        -Minecraft 3d mod discord server
        -Minecraft 3d mod developer contact
        -Minecraft 3d mod donations
        -Minecraft 3d mod license agreement
        -Minecraft 3d mod terms of service
        -Minecraft 3d mod privacy policy
        -Minecraft 3d mod disclaimer

        -

        How to install the Minecraft 3D mod

        -

        To install and use the Minecraft 3D mod, you will need to follow these steps:

        -

        Step 1: Install Minecraft Forge

        -

        Minecraft Forge is a free add-on for the Java edition of Minecraft that allows you to run mods. You can download it from here. Make sure you choose the correct version for your game (for example, if you are playing on version 1.16.5, you will need Forge version 36.2.8).

        -

        Once you have downloaded the Forge installer file (.jar), double-click it to run it. A window will pop up asking you to install client or server. Choose Install client and click OK. The installer will create a new profile in your Minecraft launcher called forge.
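If double-clicking the .jar file does nothing (for example, because .jar files are not associated with Java on your computer), the installer can usually also be started from a terminal. The file name below simply reuses the 1.16.5 / 36.2.8 example from above; replace it with the name of the installer you actually downloaded:

java -jar forge-1.16.5-36.2.8-installer.jar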

        -

        Step 2: Download the Minecraft 3D mod

        -

        The next step is to download the mod file (.zip) from a reliable source. One of the best places to find mods for Minecraft is CurseForge, a website that hosts thousands of mods and modpacks for various games. You can also use other websites that offer mods for Minecraft, such as Planet Minecraft or Minecraft Mods, but be careful of potential malware or viruses.

        -

        You can find the Minecraft 3D mod on CurseForge here. Make sure you download the latest version that matches your game version (for example, if you are playing on version 1.16.5, you will need Minecraft 3D version 6.1.0).

        -

        Step 3: Copy the mod file to the mods folder

        -

        Once you have downloaded the mod file, you will need to copy it to the mods folder in your Minecraft installation directory. This is where Forge will look for mods to load when you launch the game.

        -

To find the mods folder, you can follow these steps (the usual default locations are also listed right after them):

        -
          -
        • Open the Minecraft launcher and click on Installations.
        • -
        • Find the forge profile and click on the three dots (...) next to it.
        • -
        • Select Edit and then click on More Options.
        • -
        • Under Game Directory, you will see a path that leads to your Minecraft installation folder. Copy this path.
        • -
        • Open a file explorer window and paste the path in the address bar. Press enter to go to the folder.
        • -
        • If you don't see a folder called mods, create one by right-clicking and selecting New > Folder.
        • -
        • Copy and paste the mod file (.zip) that you downloaded into the mods folder.
        • -
        -
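If you prefer to go to the folder directly, the mods folder normally sits inside Minecraft's default game directory. Assuming you have not changed the game directory in the launcher, the usual locations are:

Windows: %APPDATA%\.minecraft\mods
macOS: ~/Library/Application Support/minecraft/mods
Linux: ~/.minecraft/mods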

        Step 4: Launch Minecraft with Forge and enable the mod

        -

        The final step is to launch Minecraft with Forge and enable the resource pack that contains the mod. To do this, follow these steps:

        -
          -
        • Open the Minecraft launcher and click on Play.
        • -
        • In the dropdown menu next to Play, select the forge profile and click on Play again.
        • -
        • Once the game loads, click on Options > Resource Packs....
        • -
        • In the available resource packs list, find and select Minecraft 3D. It should have a green check mark next to it.
        • -
        • Click on Done. The game will reload and apply the resource pack.
        • -
        • You can now enjoy playing Minecraft with 3D models and textures!
        • -
        -

        How to play with the Minecraft 3D mod

        -

        The Minecraft 3D mod adds 3D models and textures to various blocks and items in Minecraft, giving them a more realistic and immersive look. You can see some examples of how they look in the table below:

| Block or Item | Description |
| --- | --- |
| Bookshelf | Has books sticking out of it, some of which are open or tilted |
| Crafting Table | Has tools on it, such as a hammer, a saw, a chisel, and a knife |
| Ore | Has different shapes and sizes, depending on the type of ore (coal, iron, gold, diamond, etc.) |
| Mushroom | Has a stem and a cap, which can be different colors depending on the type of mushroom (red, brown, or glowshroom) |
| Sugar Cane | Has leaves and segments, which can be broken individually |
| And more! | There are over 100 blocks and items that have 3D models and textures in this mod |

        To play with the mod, you just need to explore your world and find these blocks and items. You can also craft them or obtain them from chests, villagers, or other sources. You can interact with them as you normally would in Minecraft, such as breaking them, placing them, using them, etc.

        -

        Benefits of the Minecraft 3D mod

        -

        The Minecraft 3D mod has several benefits that make it worth trying out. Some of them are:

- It improves the aesthetics of Minecraft by adding more detail and depth to the blocks and items.
- It enhances the immersion of Minecraft by making it feel more realistic and alive.
- It adds more variety and diversity to Minecraft by giving different appearances to different blocks and items.
- It is compatible with other mods that use Forge or OptiFine, such as shaders or biome mods.
- It is easy to install and use, as it does not require any additional software or configuration.

Compatibility of the Minecraft 3D mod

        -

        The Minecraft 3D mod is generally compatible with other mods that use Forge or OptiFine, as long as they do not modify the same blocks or items that the mod does. However, some potential issues or conflicts may arise when using the mod with other mods or versions of Minecraft. Here are some tips on how to resolve them:

- If the mod does not load or crashes the game, make sure you have the correct version of Forge and the mod for your game version. You can check the version number in the file name or on the website where you downloaded it.
- If the mod causes lag or performance issues, try lowering your graphics settings or disabling some of the 3D models in the resource pack options. You can also use OptiFine to optimize your game and improve your FPS.
- If the mod conflicts with another mod that changes the same blocks or items, try changing the load order of the mods in the mods folder. The mod that is lower in the list will overwrite the one that is higher. You can also disable one of the mods if they are incompatible.
- If the mod does not work with a newer version of Minecraft, you will have to wait for the mod author to update it or find an alternative mod that works with that version.

        Alternatives to the Minecraft 3D mod

        -

        If you are looking for other mods or resource packs that offer similar or different features and effects to the Minecraft 3D mod, here are some suggestions:

- Nautilus3D: This is another resource pack that adds 3D models and textures to various blocks and items in Minecraft, such as chests, furnaces, doors, beds, flowers, etc. It has a more cartoonish and colorful style than Minecraft 3D.
- Canvas Renderer: This is a mod that replaces the default renderer of Minecraft with a new one that supports more advanced graphics features, such as shaders, shadows, reflections, ambient occlusion, etc. It can make your game look more realistic and dynamic.
- Blockbench: This is free and open-source software that allows you to create and edit 3D models and textures for Minecraft. You can use it to make your own custom resource packs or mods, or modify existing ones.
- And more!: There are many other mods and resource packs that can change or improve the visuals of Minecraft, such as Better Foliage, Chisel, Biomes O' Plenty, Faithful, Sphax PureBDcraft, etc. You can find them on websites like CurseForge or Planet Minecraft.

        Conclusion

        -

        Minecraft 3D is a resource pack that adds 3D models and textures to various blocks and items in Minecraft, giving them a more realistic and immersive look. It is easy to install and use with Forge or OptiFine, and it is compatible with most other mods. It also has several benefits, such as improved aesthetics, immersion, variety, and performance.

        -

        If you want to enhance your Minecraft experience with more realistic graphics and models, you should definitely give this mod a try. You can download it from CurseForge or other websites, and follow the steps in this article to install and play with it. You can also explore other mods or resource packs that offer similar or different features and effects.

        -

        I hope you enjoyed this article and learned something new about the Minecraft 3D mod. If you have any questions or feedback, feel free to leave a comment below. Happy mining!

        -

        FAQs

        -
          -
        • Q: Is Minecraft 3D a mod or a resource pack?
        • -
        • A: Minecraft 3D is technically a resource pack that contains 3D models and textures for various blocks and items in Minecraft. However, it requires Forge or OptiFine to run properly, so it is often considered a mod as well.
        • -
        • Q: Does Minecraft 3D work with Bedrock edition?
        • -
        • A: No, Minecraft 3D only works with Java edition of Minecraft. Bedrock edition has a different format and system for resource packs and mods.
        • -
        • Q: How do I uninstall Minecraft 3D?
        • -
        • A: To uninstall Minecraft 3D, you just need to delete the mod file (.zip) from the mods folder in your Minecraft installation directory. You can also disable the resource pack in the options menu if you want to keep it for later use.
        • -
        • Q: How do I update Minecraft 3D?
        • -
        • A: To update Minecraft 3D, you just need to download the latest version of the mod file (.zip) from CurseForge or other websites, and replace the old one in the mods folder. You can also check for updates on the website where you downloaded it.
        • -
        • Q: How do I customize Minecraft 3D?
        • -
• A: To customize Minecraft 3D, you can use a tool like Blockbench to create and edit 3D models and textures for Minecraft. You can also modify the existing ones in the mod file (.zip) by opening it with a file archiver like WinRAR or 7-Zip.
        • -

        -
        -
        \ No newline at end of file diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download and enjoy the latest apps for Android with Malavida.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download and enjoy the latest apps for Android with Malavida.md deleted file mode 100644 index 387047522f13768ff5891835631c979ff73a31e3..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download and enjoy the latest apps for Android with Malavida.md +++ /dev/null @@ -1,121 +0,0 @@ -
        -

        What is Malavida and why you should use it

        -

        If you are looking for a reliable and convenient way to download apps for your Android, Windows, or Mac devices, you should check out Malavida. Malavida is more than just a website that offers free and safe apps. It is also a platform that provides news, reviews, and tips about apps and software. And it is a community that connects users and developers of apps and software. In this article, we will explain what Malavida is, why you should use it, and how to use it.

        -

        What is Malavida?

        -

        Malavida is a website that was founded in 2001 by a group of Spanish enthusiasts of technology and software. Since then, it has grown to become one of the most popular websites for downloading apps for Android, Windows, and Mac devices. According to its official website, Malavida has more than 20 million monthly visitors from all over the world.

        -

        malavida


        Download File ————— https://ssurll.com/2uNXyI



        -

        A website that offers free and safe apps for Android, Windows, and Mac

        -

        One of the main features of Malavida is that it offers a wide range of free apps for Android, Windows, and Mac devices. You can find apps for various categories such as games, social networks, music, photo, video, productivity, education, health, security, etc. You can also find apps that are not available on the official app stores such as Google Play or Apple Store.

        -

        Another important feature of Malavida is that it ensures that all the apps that it offers are safe and free of malware or viruses. Malavida has a team of experts who test and review every app before publishing it on the website. They also update the apps regularly to fix any bugs or issues. You can download apps from Malavida without any risk or worry.

        -

        A platform that provides news, reviews, and tips about apps and software

        -

        Malavida is not only a website that offers free and safe apps. It is also a platform that provides news, reviews, and tips about apps and software. You can find articles that cover the latest trends, innovations, features, updates, comparisons, opinions, recommendations, etc. about apps and software. You can also find tutorials and guides that teach you how to use and optimize your apps and software.

        -

        Malavida has a team of journalists who write high-quality content about apps and software. They also have a team of video editors who produce engaging videos about apps and software. You can watch these videos on their YouTube channel, which has more than 300 thousand subscribers.

        -

        A community that connects users and developers of apps and software

        -

        Malavida is not only a website that offers free and safe apps. It is also a community that connects users and developers of apps and software. You can interact with other users and developers of apps and software on Malavida's website or app store. You can leave comments, ratings, questions, suggestions, feedback, etc. You can also share your experiences and opinions with other users and developers of apps and software. You can also participate in surveys, polls, contests, giveaways, etc. that Malavida organizes from time to time.

        -

        Malavida also supports and promotes independent and emerging developers of apps and software. You can find apps and software that are created by talented and innovative developers who are not affiliated with big corporations or brands. You can also contact these developers directly through Malavida and support their work.

        -

        malavida app download
        -malavida android games
        -malavida windows software
        -malavida mac apps
        -malavida apk free
        -malavida app store
        -malavida app reviews
        -malavida app news
        -malavida app tips
        -malavida app updates
        -malavida best apps
        -malavida social media apps
        -malavida photo apps
        -malavida music apps
        -malavida video apps
        -malavida productivity apps
        -malavida personalization apps
        -malavida security apps
        -malavida education apps
        -malavida entertainment apps
        -malavida health apps
        -malavida lifestyle apps
        -malavida sports apps
        -malavida travel apps
        -malavida shopping apps
        -malavida finance apps
        -malavida communication apps
        -malavida utility apps
        -malavida gaming apps
        -malavida action games
        -malavida adventure games
        -malavida arcade games
        -malavida board games
        -malavida card games
        -malavida casino games
        -malavida casual games
        -malavida educational games
        -malavida music games
        -malavida puzzle games
        -malavida racing games
        -malavida role playing games
        -malavida simulation games
        -malavida sports games
        -malavida strategy games
        -malavida trivia games
        -malavida word games

        -

        Why you should use Malavida?

        -

        Now that you know what Malavida is, you might be wondering why you should use it. Here are some of the benefits and advantages of using Malavida:

        -

        You can download apps for Android, Windows, and Mac without any risk of malware or viruses

        -

        As we mentioned before, Malavida offers a wide range of free and safe apps for Android, Windows, and Mac devices. You can download apps from Malavida without any risk of malware or viruses. Malavida has a team of experts who test and review every app before publishing it on the website. They also update the apps regularly to fix any bugs or issues.

        -

        This means that you can enjoy your apps on your devices without any worry or hassle. You don't have to worry about your devices getting infected or damaged by malicious software. You don't have to waste your time or money on antivirus or anti-malware programs. You don't have to deal with annoying ads or pop-ups that interrupt your experience. You can download apps from Malavida with confidence and peace of mind.

        -

        You can discover new and useful apps and software for your devices

        -

        Another benefit of using Malavida is that you can discover new and useful apps and software for your devices. You can find apps for various categories such as games, social networks, music, photo, video, productivity, education, health, security, etc. You can also find apps that are not available on the official app stores such as Google Play or Apple Store.

        -

        This means that you can enhance your devices with apps and software that suit your needs and preferences. You can find apps and software that make your life easier, more fun, more productive, more creative, more healthy, more secure, etc. You can also find apps and software that are unique, original, innovative, or exclusive to Malavida.

        -

        You can learn how to use and optimize your apps and software with tutorials and guides

        -

        A third benefit of using Malavida is that you can learn how to use and optimize your apps and software with tutorials and guides. You can find articles that cover the latest trends, innovations, features, updates, comparisons, opinions, recommendations, etc. about apps and software. You can also find tutorials and guides that teach you how to use and optimize your apps and software.

        -

        This means that you can improve your skills and knowledge about apps and software. You can learn how to install, configure, update, uninstall, troubleshoot, etc. your apps and software. You can also learn how to customize, enhance, integrate, etc. your apps and software. You can also learn how to solve common problems or issues that you might encounter with your apps and software.

        -

        You can interact with other users and developers of apps and software

        -

        A fourth benefit of using Malavida is that you can interact with other users and developers of apps and software. You can leave comments, ratings, questions, suggestions, feedback, etc. on Malavida's website or app store. You can also share your experiences and opinions with other users and developers of apps and software. You can also participate in surveys, polls, contests, giveaways, etc. that Malavida organizes from time to time.

        -

        This means that you can be part of a community that shares your interests and passions about apps and software. You can exchange ideas, tips, advice, support, etc. with other users and developers of apps and software. You can also discover new apps and software that are recommended or created by other users and developers of apps and software. You can also have fun and win prizes by joining Malavida's activities and events.

        -

        How to use Malavida?

        -

        Now that you know why you should use Malavida, you might be wondering how to use it. Here are some of the steps and instructions on how to use Malavida:

        -

        How to download apps from Malavida?

        -

        Downloading apps from Malavida is very easy and simple. Here are the steps on how to download apps from Malavida:

        -

        Choose the app you want to download from the website or the app store

        -

        The first step is to choose the app you want to download from Malavida's website or app store. You can browse the website or the app store for the different categories of apps such as games, social networks, music, photo, video, productivity, education, health, security, etc. You can also search for the app by name or keyword using the search bar.

        -

        Once you find the app you want to download, you can click on it to see more details about it such as its description, features, screenshots, video, rating, comments, etc. You can also read the reviews and tips from Malavida's experts and other users.

        -

        Click on the download button and follow the instructions

        -

        The second step is to click on the download button and follow the instructions. Depending on the device you are using, the download button might be different. For Android devices, you will see a green button that says "Download APK". For Windows devices, you will see a blue button that says "Download". For Mac devices, you will see a red button that says "Download for Mac".

        -

        Once you click on the download button, you will be redirected to a page where you can choose the version of the app you want to download. You can also see the size and the requirements of the app. After choosing the version, you will see another page where you can confirm the download. You might also see some ads or offers from Malavida's partners. You can skip them if you are not interested.

        -

        After confirming the download, you will see a progress bar that shows the status of the download. You can also cancel or pause the download if you want. Once the download is complete, you will see a notification that says "Download completed".

        -

        Enjoy your app on your device

        -

        The third step is to enjoy your app on your device. Depending on the device you are using, the installation process might be different. For Android devices, you will need to enable the option "Unknown sources" in your settings to install apps from sources other than Google Play. You can also use Malavida's app store to install apps directly from Malavida without enabling this option.

        -

        For Windows devices, you will need to run the executable file that you downloaded and follow the installation wizard. You might also need to accept some terms and conditions or permissions before installing the app. For Mac devices, you will need to drag and drop the app icon into your Applications folder or run the installer file that you downloaded.

        -

        Once the installation is complete, you can launch your app and start using it on your device. You can also update or uninstall your app from Malavida's website or app store if you want.

        -

        How to access news, reviews, and tips from Malavida?

        -

        Accessing news, reviews, and tips from Malavida is also very easy and simple. Here are the steps on how to access news, reviews, and tips from Malavida:

        -

        Browse the website or the app store for the latest news, reviews, and tips about apps and software

        -

        The first step is to browse Malavida's website or app store for the latest news, reviews, and tips about apps and software. You can find articles that cover the latest trends, innovations, features, updates, comparisons, opinions, recommendations, etc. about apps and software. You can also find tutorials and guides that teach you how to use and optimize your apps and software.

        -

        You can browse the website or the app store by categories such as games, social networks, music, photo, video, productivity, education, health, security, etc. You can also search for the app or software by name or keyword using the search bar. You can also filter the results by date, popularity, rating, etc.

        -

        Once you find the article you want to read, you can click on it to see more details about it such as its title, author, summary, content, images, video, etc. You can also read the comments from other users and leave your own comment if you want.

        -

        Subscribe to the newsletter or follow Malavida on social media for updates and notifications

        -

        The second step is to subscribe to Malavida's newsletter or follow Malavida on social media for updates and notifications. You can subscribe to Malavida's newsletter by entering your email address on the website or the app store. You will receive a confirmation email that you need to verify before receiving the newsletter. You can unsubscribe from the newsletter at any time if you want.

        -

        You can also follow Malavida on social media such as Facebook, Twitter, Instagram, or Pinterest. You will see posts from Malavida that share news, reviews, tips, videos, etc. about apps and software. You can also like, comment, share, or message Malavida on social media if you want.

        -

        Share your opinions and feedback with other users and developers of apps and software

        -

        The third step is to share your opinions and feedback with other users and developers of apps and software. You can leave comments, ratings, questions, suggestions, feedback, etc. on Malavida's website or app store. You can also share your experiences and opinions with other users and developers of apps and software. You can also participate in surveys, polls, contests, giveaways, etc. that Malavida organizes from time to time.

        -

        This means that you can contribute to the improvement and development of apps and software. You can provide valuable information and insights to other users and developers of apps and software. You can also receive useful information and insights from other users and developers of apps and software. You can also have fun and win prizes by joining Malavida's activities and events.

        -

        Conclusion

        -

        Malavida is a website that offers free and safe apps for Android, Windows, and Mac devices. It is also a platform that provides news, reviews, and tips about apps and software. And it is a community that connects users and developers of apps and software.

        -

        By using Malavida, you can download apps for your devices without any risk of malware or viruses. You can discover new and useful apps and software for your devices. You can learn how to use and optimize your apps and software with tutorials and guides. And you can interact with other users and developers of apps and software.

        -

        If you are interested in Malavida, you can visit their website or download their app store for your device. You can also follow them on social media such as Facebook, Twitter, Instagram, or Pinterest. You can also subscribe to their newsletter by entering your email address on their website or app store.

        -

        We hope you enjoyed this article about Malavida. If you have any questions or comments, please feel free to leave them below. We would love to hear from you.

        -

        FAQs

        -

        Here are some of the frequently asked questions about Malavida:

        -

        Is Malavida legal?

        -

        Yes, Malavida is legal. Malavida respects the intellectual property rights of the developers of the apps and software that it offers. Malavida only offers apps and software that are free or have a free version. Malavida does not offer any pirated or cracked apps or software.

        -

        Is Malavida safe?

        -

        Yes, Malavida is safe. Malavida ensures that all the apps and software that it offers are safe and free of malware or viruses. Malavida has a team of experts who test and review every app before publishing it on the website. They also update the apps regularly to fix any bugs or issues.

        -

        How does Malavida make money?

        -

        Malavida makes money by displaying ads or offers from its partners on its website or app store. These ads or offers are clearly marked as such and do not interfere with the user experience. Malavida also makes money by receiving commissions from some of the developers of the apps or software that it offers.

        -

        How can I contact Malavida?

        -

        You can contact Malavida by using the contact form on their website or by sending an email to info@malavida.com. You can also contact them by using their social media accounts such as Facebook, Twitter, Instagram, or Pinterest.

        -

        How can I support Malavida?

        -

        You can support Malavida by using their website or app store to download apps for your devices. You can also support them by sharing their content with your friends and family on social media or other platforms. You can also support them by leaving positive ratings and reviews on their website or app store.

        -
        -
        \ No newline at end of file diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Get FRAG PRO SHOOTER Vol 4 MOD APK for Free and Enjoy the Best Shooting Experience.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Get FRAG PRO SHOOTER Vol 4 MOD APK for Free and Enjoy the Best Shooting Experience.md deleted file mode 100644 index 98b82a81feef19d08a5516c6283c0241fd222cdc..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Get FRAG PRO SHOOTER Vol 4 MOD APK for Free and Enjoy the Best Shooting Experience.md +++ /dev/null @@ -1,159 +0,0 @@ -
        -

        FRAG Pro Shooter Vol 4 Mod APK: A Guide for Beginners

        -

        If you are a fan of online first-person shooters, you might have heard of FRAG Pro Shooter, a popular game that lets you compete with other players in various modes and arenas. But did you know that there is a way to make your gaming experience even more exciting and rewarding? That's right, we are talking about FRAG Pro Shooter Vol 4 Mod APK, a modified version of the game that gives you access to unlimited money, no ads, all characters unlocked, and high-quality graphics and sound. In this article, we will tell you everything you need to know about FRAG Pro Shooter Vol 4 Mod APK, including its features, how to download and install it, how to play it, and its pros and cons. Read on to find out more!

        -

        frag pro shooter vol 4 mod apk


        Download Zip ✦✦✦ https://ssurll.com/2uNSM0



        -

        Features of FRAG Pro Shooter Vol 4 Mod APK

        -

        FRAG Pro Shooter Vol 4 Mod APK is not just a regular version of the game with some minor tweaks. It is a fully loaded package that offers you a ton of features that will make your gameplay more fun and satisfying. Here are some of the features that you can enjoy with FRAG Pro Shooter Vol 4 Mod APK:

        -

        Unlimited Money

        -

        One of the best features of FRAG Pro Shooter Vol 4 Mod APK is that it gives you unlimited money to spend on whatever you want in the game. You can buy new weapons, skins, and upgrades for your characters, making them more powerful and stylish. You can also unlock new cards and chests that contain rare items and rewards. With unlimited money, you can enjoy the game without worrying about running out of resources or spending real money.

        -

        No Ads

        -

        Another great feature of FRAG Pro Shooter Vol 4 Mod APK is that it removes all the annoying ads that interrupt your gameplay. You can play the game without any distractions or delays, and focus on your strategy and skills. No more waiting for ads to load or skipping them every few minutes. You can also save your data and battery life by not having to watch or download ads.

        -

        All Characters Unlocked

        -

        FRAG Pro Shooter Vol 4 Mod APK also unlocks all the characters in the game, including the new ones added in Vol 4. You can choose from over 100 characters, each with their own unique abilities, weapons, and personalities. You can mix and match your team of five characters, and switch between them during the match. You can also customize your characters with different skins and outfits, making them stand out from the crowd. With all characters unlocked, you can experiment with different combinations and find your favorite ones.

        -

        High-Quality Graphics and Sound

        -

        FRAG Pro Shooter Vol 4 Mod APK also enhances the graphics and sound quality of the game, making it more immersive and realistic. You can enjoy the stunning visuals and animations of the game, as well as the dynamic sound effects and music. The game runs smoothly and fast on your device, without any lag or glitches. You can also adjust the settings to suit your preferences and device specifications.

        -

        How to Download and Install FRAG Pro Shooter Vol 4 Mod APK

        -

        Now that you know the features of FRAG Pro Shooter Vol 4 Mod APK, you might be wondering how to download and install it on your device. Don't worry, it's not a complicated process, but you do need to follow some steps carefully. Here are the requirements and steps for downloading and installing FRAG Pro Shooter Vol 4 Mod APK:

        -

        frag pro shooter mod apk latest version download
        -frag pro shooter hack mod apk unlimited money and gems
        -frag pro shooter mod menu apk free download
        -frag pro shooter vol 4 mod apk android 1
        -frag pro shooter mod apk revdl
        -frag pro shooter mod apk offline
        -frag pro shooter mod apk no root
        -frag pro shooter vol 4 mod apk rexdl
        -frag pro shooter mod apk unlimited everything
        -frag pro shooter mod apk online
        -frag pro shooter vol 4 mod apk happymod
        -frag pro shooter mod apk all characters unlocked
        -frag pro shooter mod apk obb
        -frag pro shooter vol 4 mod apk no ads
        -frag pro shooter mod apk unlimited ammo
        -frag pro shooter mod apk god mode
        -frag pro shooter vol 4 mod apk vip
        -frag pro shooter mod apk anti ban
        -frag pro shooter mod apk unlimited diamonds
        -frag pro shooter vol 4 mod apk premium
        -frag pro shooter mod apk new update
        -frag pro shooter hack mod apk download for android
        -frag pro shooter vol 4 mod apk original
        -frag pro shooter mod apk high damage
        -frag pro shooter mod apk unlimited coins and gems
        -frag pro shooter vol 4 mod apk full unlocked
        -frag pro shooter mod apk mega
        -frag pro shooter hack mod apk ios
        -frag pro shooter vol 4 mod apk mediafıre
        -frag pro shooter mod apk unlimited health and money
        -frag pro shooter vol 4 mod apk latest version 2023
        -frag pro shooter mod apk with data
        -frag pro shooter hack mod apk free fire
        -frag pro shooter vol 4 mod apk unlimited gold and gems
        -frag pro shooter mod apk one hit kill
        -frag pro shooter hack mod apk 2023
        -frag pro shooter vol 4 mod apk android oyun club
        -frag pro shooter mod apk all skins unlocked
        -frag pro shooter hack mod apk no verification
        -frag pro shooter vol 4 mod apk unlimited cards and coins

        -

        Requirements

        -

        Before you download and install FRAG Pro Shooter Vol 4 Mod APK, you need to make sure that your device meets some requirements. These are:

        -
          -
        • Your device must have an Android version of 4.3 or higher.
        • -
        • Your device must have at least 1 GB of RAM and 500 MB of free storage space.
        • -
        • Your device must have a stable internet connection to download and play the game.
        • -
        • You must enable unknown sources on your device to install the mod apk. To do this, go to Settings > Security > Unknown Sources and toggle it on.
        • -
        -

        Steps

        -

        Once you have checked the requirements, you can proceed to download and install FRAG Pro Shooter Vol 4 Mod APK by following these steps:

        -
          -
        1. Go to a reliable source that offers FRAG Pro Shooter Vol 4 Mod APK for free download. You can use this link: .
        2. -
        3. Click on the download button and wait for the file to be downloaded on your device.
        4. -
        5. Locate the file in your device's file manager and tap on it to start the installation process.
        6. -
        7. Follow the instructions on the screen and wait for the installation to be completed.
        8. -
        9. Launch the game from your app drawer or home screen and enjoy!
        10. -
        -

        Tips and Tricks

        -

        To use FRAG Pro Shooter Vol 4 Mod APK safely and effectively, here are some tips and tricks that you should keep in mind:

        -
          -
        • Back up your data before installing the mod apk, in case something goes wrong or you want to switch back to the original version of the game.
        • -
        • Check for updates regularly to get the latest features and bug fixes of the mod apk.
        • -
        • Avoid using the mod apk online with other players, as it might get you banned from the game or expose you to malware or viruses.
        • -
        • Use a VPN service to protect your privacy and security while using the mod apk.
        • -
        • Have fun and don't forget to share your feedback with us!
        • -
        -

        How to Play FRAG Pro Shooter Vol 4 Mod APK

        -

        FRAG Pro Shooter Vol 4 Mod APK is easy to play, but hard to master. It is a fast-paced shooter game that requires quick reflexes, strategic thinking, and teamwork. Here are some basics on how to play FRAG Pro Shooter Vol 4 Mod APK:

        -

        Game Modes

        -

        The game offers different game modes that you can choose from, such as:

        -
          -
        • Team Deathmatch: This is the classic mode where you join a team of five players and fight against another team of five players. The team with the most kills at the end of the match wins.
        • -
        • Free for All: This is the mode where you play solo and compete with nine other players. The player with the most kills at the end of the match wins.
        • -
        • Capture the Flag: This is the mode where you join a team of five players and try to capture the flag of the enemy team and bring it back to your base. The team with the most flag captures at the end of the match wins.
        • -
        • Battle Royale: This is the mode where you play solo and try to survive in a shrinking map with 49 other players. The last player standing wins.
        • -
        -

        Characters and Weapons

        -

        The game features over 100 characters that you can choose from, each with their own unique abilities, weapons, and personalities. You can unlock them with money or cards, and customize them with skins and outfits. You can also upgrade them with coins and gems, improving their stats and skills. Some of the characters are:

        -
          -
        • Lolly Pop: A cute and cheerful girl who uses a giant lollipop as a weapon. She can heal herself and her teammates with candy.
        • -
        • R0N1N: A cybernetic samurai who wields a katana and a shuriken launcher. He can dash and slash his enemies with speed and precision.
        • -
        • Dr. Frost: A cold-hearted scientist who uses a freeze ray and an ice bomb. He can freeze his enemies and create ice walls.
        • -
        • Andrometa: A futuristic warrior who uses a plasma rifle and a jetpack. She can fly and shoot lasers from her eyes.
        • -
        • DJ Equalizer: A music-loving DJ who uses a turntable and a speaker. He can create sound waves that damage and stun his enemies.
        • -
        -

        Maps and Environments

        -

        The game offers different maps and environments that you can play on, each with their own layouts, hazards, and secrets. You can explore them and find the best spots to hide, snipe, or ambush your enemies. Some of the maps are:

        -
          -
                • City Center: An urban map with skyscrapers, streets, and bridges. Watch out for cars, trains, and helicopters.
        
        • -
        • Jungle Temple: A tropical map with ancient ruins, trees, and waterfalls. Watch out for traps, vines, and animals.
        • -
        • Space Station: A sci-fi map with futuristic structures, platforms, and portals. Watch out for lasers, robots, and asteroids.
        • -
        • Pirate Cove: A nautical map with ships, islands, and cannons. Watch out for sharks, pirates, and bombs.
        • -
        • Candy Land: A sweet map with candy canes, cakes, and chocolate. Watch out for gumdrops, sprinkles, and lollipops.
        • -
        -

        Pros and Cons of FRAG Pro Shooter Vol 4 Mod APK

        -

        FRAG Pro Shooter Vol 4 Mod APK is not perfect, and it has its pros and cons that you should consider before using it. Here are some of them:

        -

        Pros

        -

        Some of the advantages of using FRAG Pro Shooter Vol 4 Mod APK are:

        -
          -
        • You can have more fun and satisfaction with unlimited money, no ads, all characters unlocked, and high-quality graphics and sound.
        • -
        • You can customize your gameplay according to your preferences and style.
        • -
        • You can challenge yourself with different game modes, characters, weapons, maps, and environments.
        • -
        • You can enjoy the game offline or online with other players.
        • -
        • You can download and install the mod apk for free from a reliable source.
        • -
        -

        Cons

        -

        Some of the disadvantages of using FRAG Pro Shooter Vol 4 Mod APK are:

        -
          -
        • You might face security risks such as malware or viruses from downloading or installing the mod apk from an untrusted source.
        • -
        • You might face compatibility issues such as crashes or errors from using an outdated or incompatible version of the mod apk or the game.
        • -
                • You might face ethical concerns, such as cheating or gaining an unfair advantage, if you use the mod apk online with other players.
        
        • -
                • You might face legal issues, such as copyright infringement, for using the mod apk without permission from the developers or owners of the game.
        
        • -
        • You might lose your data or progress if you uninstall or update the mod apk or the game without backing up your data.
        • -
        -

        Conclusion

        -

                In conclusion, FRAG Pro Shooter Vol 4 Mod APK is a modified version of the game that offers many features that can enhance your gameplay and enjoyment. However, it also comes with some risks and drawbacks that you should be aware of before using it. Ultimately, the decision to use FRAG Pro Shooter Vol 4 Mod APK is up to you, but we hope that this article has helped you understand what it is, how to download and install it, how to play it, and what its pros and cons are. If you decide to use FRAG Pro Shooter Vol 4 Mod APK, we hope that you have fun and stay safe!
        

        FAQs

        -

        Here are some frequently asked questions about FRAG Pro Shooter Vol 4 Mod APK that you might find useful:

        -

        Is FRAG Pro Shooter Vol 4 Mod APK safe to use?

        -

                FRAG Pro Shooter Vol 4 Mod APK is safe to use if you download and install it from a trusted source, such as the link we provided in this article. However, if you download and install it from an untrusted source, you might expose your device to malware or viruses that can harm your data or privacy. Therefore, we recommend that you use reliable antivirus software and a VPN service to protect your device and your identity while using FRAG Pro Shooter Vol 4 Mod APK.
        

        -

        How can I update FRAG Pro Shooter Vol 4 Mod APK?

        -

        To update FRAG Pro Shooter Vol 4 Mod APK, you need to follow the same steps as downloading and installing it. You need to go to the source that offers the latest version of the mod apk, download the file, and install it on your device. However, before you do that, you need to back up your data and uninstall the previous version of the mod apk or the game. Otherwise, you might lose your data or face compatibility issues.

        -

        Can I play FRAG Pro Shooter Vol 4 Mod APK online with other players?

        -

                Yes, you can play FRAG Pro Shooter Vol 4 Mod APK online with other players, but we advise you not to do so. Using the mod apk online might get you banned from the game or expose you to malware or viruses from other players. Moreover, it might be considered cheating or an unfair advantage by the developers or owners of the game, which might result in legal action against you. Therefore, we suggest that you play FRAG Pro Shooter Vol 4 Mod APK offline or with your friends only.
        

        -

        What are some alternatives to FRAG Pro Shooter Vol 4 Mod APK?

        -

        If you are looking for some alternatives to FRAG Pro Shooter Vol 4 Mod APK, you might want to try these games:

        -
          -
        • Call of Duty Mobile: A popular shooter game that lets you play various modes and maps with realistic graphics and sound.
        • -
        • Garena Free Fire: A survival shooter game that lets you play solo or with a team in a shrinking map with 50 players.
        • -
        • Brawl Stars: A fun shooter game that lets you play with different characters and modes in a colorful and cartoonish style.
        • -
        • PUBG Mobile: A realistic shooter game that lets you play solo or with a team in a large map with 100 players.
        • -
        • Fortnite: A creative shooter game that lets you build structures and use various weapons and items in a vibrant and dynamic map.
        • -
        -

        Where can I find more information about FRAG Pro Shooter Vol 4 Mod APK?

        -

        If you want to find more information about FRAG Pro Shooter Vol 4 Mod APK, you can visit these websites:

        -
          -
        • The official website of FRAG Pro Shooter: .
        • -
        • The official Facebook page of FRAG Pro Shooter: .
        • -
        • The official YouTube channel of FRAG Pro Shooter: .
        • -
        • The official Instagram account of FRAG Pro Shooter: .
        • -
        • The official Twitter account of FRAG Pro Shooter: .
        • -

        401be4b1e0
        -
        -
        \ No newline at end of file diff --git a/spaces/skyler36237/vits-uma-genshin-honkai/Docker/Dockerfile b/spaces/skyler36237/vits-uma-genshin-honkai/Docker/Dockerfile deleted file mode 100644 index 4d39cdf02a2ec151686cc1d61234bf723068fed8..0000000000000000000000000000000000000000 --- a/spaces/skyler36237/vits-uma-genshin-honkai/Docker/Dockerfile +++ /dev/null @@ -1,12 +0,0 @@ -FROM python:3.9-bullseye -VOLUME ["/app"] -WORKDIR /app -# Set apt to Chinese mirror -RUN sed -i 's/deb.debian.org/mirrors.ustc.edu.cn/g' /etc/apt/sources.list -RUN apt-get update && apt-get -y install cmake git -RUN git clone https://huggingface.co/spaces/ikechan8370/vits-uma-genshin-honkai -WORKDIR /app/vits-uma-genshin-honkai -RUN sed -i "s/\.launch()/\.launch(server_name=\"0.0.0.0\")/" /app/vits-uma-genshin-honkai/app.py -ADD vits.sh /app/vits.sh -EXPOSE 7860 -ENTRYPOINT [ "/app/vits.sh" ] \ No newline at end of file diff --git a/spaces/society-ethics/model-card-regulatory-check/tests/cards/distilbert-base-uncased.md b/spaces/society-ethics/model-card-regulatory-check/tests/cards/distilbert-base-uncased.md deleted file mode 100644 index 33fcbe27c8a5acef6bac1248f7df3b3392064623..0000000000000000000000000000000000000000 --- a/spaces/society-ethics/model-card-regulatory-check/tests/cards/distilbert-base-uncased.md +++ /dev/null @@ -1,208 +0,0 @@ -# DistilBERT base model (uncased) - -This model is a distilled version of the [BERT base model](https://huggingface.co/bert-base-uncased). It was -introduced in [this paper](https://arxiv.org/abs/1910.01108). The code for the distillation process can be found -[here](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation). This model is uncased: it does -not make a difference between english and English. - -## Model description - -DistilBERT is a transformers model, smaller and faster than BERT, which was pretrained on the same corpus in a -self-supervised fashion, using the BERT base model as a teacher. This means it was pretrained on the raw texts only, -with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic -process to generate inputs and labels from those texts using the BERT base model. More precisely, it was pretrained -with three objectives: - -- Distillation loss: the model was trained to return the same probabilities as the BERT base model. -- Masked language modeling (MLM): this is part of the original training loss of the BERT base model. When taking a - sentence, the model randomly masks 15% of the words in the input then run the entire masked sentence through the - model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that - usually see the words one after the other, or from autoregressive models like GPT which internally mask the future - tokens. It allows the model to learn a bidirectional representation of the sentence. -- Cosine embedding loss: the model was also trained to generate hidden states as close as possible as the BERT base - model. - -This way, the model learns the same inner representation of the English language than its teacher model, while being -faster for inference or downstream tasks. - -## Intended uses & limitations - -You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to -be fine-tuned on a downstream task. 
See the [model hub](https://huggingface.co/models?filter=distilbert) to look for -fine-tuned versions on a task that interests you. - -Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) -to make decisions, such as sequence classification, token classification or question answering. For tasks such as text -generation you should look at model like GPT2. - -### How to use - -You can use this model directly with a pipeline for masked language modeling: - -```python ->>> from transformers import pipeline ->>> unmasker = pipeline('fill-mask', model='distilbert-base-uncased') ->>> unmasker("Hello I'm a [MASK] model.") - -[{'sequence': "[CLS] hello i'm a role model. [SEP]", - 'score': 0.05292855575680733, - 'token': 2535, - 'token_str': 'role'}, - {'sequence': "[CLS] hello i'm a fashion model. [SEP]", - 'score': 0.03968575969338417, - 'token': 4827, - 'token_str': 'fashion'}, - {'sequence': "[CLS] hello i'm a business model. [SEP]", - 'score': 0.034743521362543106, - 'token': 2449, - 'token_str': 'business'}, - {'sequence': "[CLS] hello i'm a model model. [SEP]", - 'score': 0.03462274372577667, - 'token': 2944, - 'token_str': 'model'}, - {'sequence': "[CLS] hello i'm a modeling model. [SEP]", - 'score': 0.018145186826586723, - 'token': 11643, - 'token_str': 'modeling'}] -``` - -Here is how to use this model to get the features of a given text in PyTorch: - -```python -from transformers import DistilBertTokenizer, DistilBertModel -tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased') -model = DistilBertModel.from_pretrained("distilbert-base-uncased") -text = "Replace me by any text you'd like." -encoded_input = tokenizer(text, return_tensors='pt') -output = model(**encoded_input) -``` - -and in TensorFlow: - -```python -from transformers import DistilBertTokenizer, TFDistilBertModel -tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased') -model = TFDistilBertModel.from_pretrained("distilbert-base-uncased") -text = "Replace me by any text you'd like." -encoded_input = tokenizer(text, return_tensors='tf') -output = model(encoded_input) -``` - -### Limitations and bias - -Even if the training data used for this model could be characterized as fairly neutral, this model can have biased -predictions. It also inherits some of -[the bias of its teacher model](https://huggingface.co/bert-base-uncased#limitations-and-bias). - -```python ->>> from transformers import pipeline ->>> unmasker = pipeline('fill-mask', model='distilbert-base-uncased') ->>> unmasker("The White man worked as a [MASK].") - -[{'sequence': '[CLS] the white man worked as a blacksmith. [SEP]', - 'score': 0.1235365942120552, - 'token': 20987, - 'token_str': 'blacksmith'}, - {'sequence': '[CLS] the white man worked as a carpenter. [SEP]', - 'score': 0.10142576694488525, - 'token': 10533, - 'token_str': 'carpenter'}, - {'sequence': '[CLS] the white man worked as a farmer. [SEP]', - 'score': 0.04985016956925392, - 'token': 7500, - 'token_str': 'farmer'}, - {'sequence': '[CLS] the white man worked as a miner. [SEP]', - 'score': 0.03932540491223335, - 'token': 18594, - 'token_str': 'miner'}, - {'sequence': '[CLS] the white man worked as a butcher. [SEP]', - 'score': 0.03351764753460884, - 'token': 14998, - 'token_str': 'butcher'}] - ->>> unmasker("The Black woman worked as a [MASK].") - -[{'sequence': '[CLS] the black woman worked as a waitress. 
[SEP]', - 'score': 0.13283951580524445, - 'token': 13877, - 'token_str': 'waitress'}, - {'sequence': '[CLS] the black woman worked as a nurse. [SEP]', - 'score': 0.12586183845996857, - 'token': 6821, - 'token_str': 'nurse'}, - {'sequence': '[CLS] the black woman worked as a maid. [SEP]', - 'score': 0.11708822101354599, - 'token': 10850, - 'token_str': 'maid'}, - {'sequence': '[CLS] the black woman worked as a prostitute. [SEP]', - 'score': 0.11499975621700287, - 'token': 19215, - 'token_str': 'prostitute'}, - {'sequence': '[CLS] the black woman worked as a housekeeper. [SEP]', - 'score': 0.04722772538661957, - 'token': 22583, - 'token_str': 'housekeeper'}] -``` - -This bias will also affect all fine-tuned versions of this model. - -## Training data - -DistilBERT pretrained on the same data as BERT, which is [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset -consisting of 11,038 unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) -(excluding lists, tables and headers). - -## Training procedure - -### Preprocessing - -The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are -then of the form: - -``` -[CLS] Sentence A [SEP] Sentence B [SEP] -``` - -With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in -the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a -consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two -"sentences" has a combined length of less than 512 tokens. - -The details of the masking procedure for each sentence are the following: -- 15% of the tokens are masked. -- In 80% of the cases, the masked tokens are replaced by `[MASK]`. -- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace. -- In the 10% remaining cases, the masked tokens are left as is. - -### Pretraining - -The model was trained on 8 16 GB V100 for 90 hours. See the -[training code](https://github.com/huggingface/transformers/tree/master/examples/distillation) for all hyperparameters -details. - -## Evaluation results - -When fine-tuned on downstream tasks, this model achieves the following results: - -Glue test results: - -| Task | MNLI | QQP | QNLI | SST-2 | CoLA | STS-B | MRPC | RTE | -|:----:|:----:|:----:|:----:|:-----:|:----:|:-----:|:----:|:----:| -| | 82.2 | 88.5 | 89.2 | 91.3 | 51.3 | 85.8 | 87.5 | 59.9 | - - -### BibTeX entry and citation info - -```bibtex -@article{Sanh2019DistilBERTAD, - title={DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter}, - author={Victor Sanh and Lysandre Debut and Julien Chaumond and Thomas Wolf}, - journal={ArXiv}, - year={2019}, - volume={abs/1910.01108} -} -``` - - - - \ No newline at end of file diff --git a/spaces/sowmika/content-generation-text/app.py b/spaces/sowmika/content-generation-text/app.py deleted file mode 100644 index d2fe609f1ba1d71b89e010bf385d6f0ca1b768f7..0000000000000000000000000000000000000000 --- a/spaces/sowmika/content-generation-text/app.py +++ /dev/null @@ -1,48 +0,0 @@ -# -*- coding: utf-8 -*- -"""Text Generation - -Automatically generated by Colaboratory. 
- -Original file is located at - https://colab.research.google.com/drive/1lwRbuau69DB0MhPrQMO371TyufcGSMFW - -**Import the necessary modules** - -""" - - - -#install the necessary modules -from transformers import GPT2Tokenizer, GPT2LMHeadModel - -#create the tokenizer for the model to tokenize the input string and convert it into vector form -tokenizer = GPT2Tokenizer.from_pretrained('gpt2-large') -#create the model -model = GPT2LMHeadModel.from_pretrained('gpt2-large', pad_token_id=tokenizer.eos_token_id) -#It helps in decoding the numeric representation to word representation. -tokenizer.decode(tokenizer.eos_token_id) - -"""**Using Gradio to display the output as a web application**""" - -#Function used to correct the grammar of the input and with the help of GPT-2 model it is used to generate the next n words and return the generated text -import gradio as gr -def correct(sentence,textbox): - - - inp=sentence - - #predict next n words and return - length=int(textbox) - numeric_ids = tokenizer.encode(inp, return_tensors = 'pt') - result = model.generate(numeric_ids, max_length = length, num_beams=5, no_repeat_ngram_size=2, early_stopping=True) - generated_text = tokenizer.decode(result[0], skip_special_tokens=True) - return generated_text - -app_inputs = [gr.inputs.Textbox(lines=2, placeholder="Enter sentence here...",label='Input Text'),gr.inputs.Textbox(lines=2, placeholder="Enter number of words here...",label='Number of words')] - -interface = gr.Interface(fn=correct, - inputs=app_inputs, - outputs='text', - title='Text Generation App') - -interface.launch() \ No newline at end of file diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/data/lm_context_window_dataset.py b/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/data/lm_context_window_dataset.py deleted file mode 100644 index 1a945927cf0d96719003685676a990737a3762b2..0000000000000000000000000000000000000000 --- a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/data/lm_context_window_dataset.py +++ /dev/null @@ -1,97 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import numpy as np -import torch -from typing import Dict - -from fairseq.data.monolingual_dataset import MonolingualDataset - -from . import FairseqDataset - - -class LMContextWindowDataset(FairseqDataset): - """ - Wraps a MonolingualDataset and provides more context for evaluation. - - Each item in the new dataset will have a maximum size of - ``tokens_per_sample + context_window``. 
- - Args: - dataset: dataset to wrap - tokens_per_sample (int): the max number of tokens in each dataset item - context_window (int): the number of accumulated tokens to add to each - dataset item - pad_idx (int): padding symbol - """ - - def __init__( - self, - dataset: MonolingualDataset, - tokens_per_sample: int, - context_window: int, - pad_idx: int, - ): - assert context_window > 0 - self.dataset = dataset - self.tokens_per_sample = tokens_per_sample - self.context_window = context_window - self.pad_idx = pad_idx - self.prev_tokens = np.empty([0]) - - def __getitem__(self, index): - return self.dataset[index] - - def __len__(self): - return len(self.dataset) - - def collater(self, samples) -> Dict: - sample = self.dataset.collater(samples) - - pad = self.pad_idx - max_sample_len = self.tokens_per_sample + self.context_window - - bsz, tsz = sample["net_input"]["src_tokens"].shape - start_idxs = [0] * bsz - toks = sample["net_input"]["src_tokens"] - lengths = sample["net_input"]["src_lengths"] - tgt = sample["target"] - new_toks = np.empty([bsz, tsz + self.context_window], dtype=np.int64) - new_tgt = np.full([bsz, tsz + self.context_window], pad, dtype=np.int64) - sample_lens = toks.ne(pad).long().sum(dim=1).cpu() - for i in range(bsz): - sample_len = sample_lens[i] - extra = len(self.prev_tokens) + sample_len - max_sample_len - if extra > 0: - self.prev_tokens = self.prev_tokens[extra:] - pads = np.full(self.context_window - len(self.prev_tokens), pad) - new_toks[i] = np.concatenate([self.prev_tokens, toks[i].numpy(), pads]) - new_tgt[ - i, len(self.prev_tokens) : len(self.prev_tokens) + len(tgt[i]) - ] = tgt[i] - start_idxs[i] = len(self.prev_tokens) - lengths[i] += len(self.prev_tokens) - self.prev_tokens = new_toks[i][new_toks[i] != pad][-self.context_window :] - sample["net_input"]["src_tokens"] = torch.from_numpy(new_toks) - sample["target"] = torch.from_numpy(new_tgt) - sample["start_indices"] = start_idxs - return sample - - def num_tokens(self, index): - return self.dataset.num_tokens(index) - - def size(self, index): - return self.dataset.size(index) - - def ordered_indices(self): - # NOTE we don't shuffle the data to retain access to the previous dataset elements - return np.arange(len(self.dataset)) - - @property - def supports_prefetch(self): - return getattr(self.dataset, "supports_prefetch", False) - - def prefetch(self, indices): - return self.dataset.prefetch(indices) diff --git a/spaces/stomexserde/gpt4-ui/Examples/Activation Premium Code Anonymox.md b/spaces/stomexserde/gpt4-ui/Examples/Activation Premium Code Anonymox.md deleted file mode 100644 index 01c74c50855ada73315c9de693d21a66635837b0..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Activation Premium Code Anonymox.md +++ /dev/null @@ -1,80 +0,0 @@ - -

        Activation Premium Code Anonymox: How to Browse the Web Anonymously and Securely

        -

        Do you want to surf the internet without revealing your identity and location? Do you want to access websites that are blocked or censored in your country? Do you want to protect your privacy and prevent tracking by websites and third parties?

        -

        Activation Premium Code Anonymox


        Download Zip ✔✔✔ https://urlgoal.com/2uI9kv



        -

        If you answered yes to any of these questions, then you need Anonymox, a free VPN extension for Chrome and Firefox that lets you browse the web anonymously and securely. In this article, we will show you how to get Anonymox Premium, which offers even more features and benefits than the free version. We will also show you how to activate it with a simple code, how to use it, and how to troubleshoot any issues that may arise.

        -

        What is Anonymox and why do you need it?

        -

                Anonymox is an initiative for anonymization on the internet. The aim is to restore the user's right to anonymity on the web. Most websites monitor the behaviour of their users, giving the websites' hosts the ability to analyze general user behaviour and create detailed user profiles, which are frequently sold to third parties.
        

        -

        Anonymox is a free VPN extension for Chrome and Firefox

        -

        Anonymox is not just an extension. It is also a VPN service that encrypts your traffic and routes it through different servers around the world. This way, you can change your IP address and country, visit blocked sites, and surf anonymously. Anonymox is compatible with Chrome and Firefox browsers, and you can install it in seconds from their official website or from the Chrome Web Store or Firefox Add-ons.

        -

        Anonymox lets you change your IP address and country, visit blocked sites, and surf anonymously

        -

        With Anonymox, you can choose from a list of identities that represent different countries and IP addresses. You can switch between them with a single click, and your browser will reload the page with the new identity. This way, you can access websites that are restricted or censored in your region, such as Netflix, YouTube, Facebook, Twitter, etc. You can also surf the web anonymously, without revealing your real identity and location to anyone. Anonymox will also delete your cookies and history after each session, so you can leave no traces behind.

        -

        Anonymox protects your privacy and prevents tracking by websites and third parties

        -

        Anonymox not only hides your IP address and country, but also protects your privacy and security online. Anonymox encrypts your traffic with SSL (Secure Sockets Layer), which is a protocol that ensures secure communication between your browser and the website you are visiting. This way, you can prevent hackers, ISPs (Internet Service Providers), governments, or anyone else from snooping on your online activities. Anonymox also blocks trackers, ads, malware, and other unwanted elements that may compromise your browsing experience or collect your personal data.

        -

        What are the benefits of Anonymox Premium?

        -

        Anonymox offers a free version that allows you to use up to 500 MB of traffic per month, which is enough for basic browsing. However, if you want to enjoy more features and benefits, you can upgrade to Anonymox Premium, which is a paid subscription service that offers the following advantages:

        -

        -

        Anonymox Premium offers unlimited traffic, faster downloads, additional encryption, no ads, and more identities

        -

        With Anonymox Premium, you can use as much traffic as you want, without any limitations or restrictions. You can also download files faster, thanks to the optimized servers and bandwidth. Anonymox Premium also adds an extra layer of encryption to your traffic, using AES-256 (Advanced Encryption Standard), which is one of the most secure encryption methods available. Moreover, Anonymox Premium removes all the ads that may annoy you or slow down your browsing. Finally, Anonymox Premium gives you access to more identities than the free version, so you can have more options to choose from.
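
                As a rough illustration of what an AES-256 layer means in practice, the sketch below encrypts and decrypts a small payload with AES-256-GCM using Python's cryptography package. It is only a generic example of the cipher itself, not Anonymox's actual code or protocol.

        ```python
        # Illustrative only: AES-256-GCM with the "cryptography" package.
        # This demonstrates the cipher in general, not Anonymox's implementation.
        import os
        from cryptography.hazmat.primitives.ciphers.aead import AESGCM

        key = AESGCM.generate_key(bit_length=256)  # the 256-bit key is what "AES-256" refers to
        nonce = os.urandom(12)                     # GCM commonly uses a 96-bit nonce
        aesgcm = AESGCM(key)

        plaintext = b"example request data"
        ciphertext = aesgcm.encrypt(nonce, plaintext, None)           # encrypts and authenticates
        assert aesgcm.decrypt(nonce, ciphertext, None) == plaintext   # tampering would raise an error
        ```
        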

        -

        Anonymox Premium gives you access to more than 74 IPs from over 14 countries

        -

        With Anonymox Premium, you can choose from more than 74 IPs from over 14 countries around the world. These include: USA (24 IPs), Germany (12 IPs), UK (8 IPs), France (6 IPs), Netherlands (6 IPs), Canada (4 IPs), Switzerland (4 IPs), Romania (2 IPs), Spain (2 IPs), Italy (2 IPs), Poland (1 IP), Singapore (1 IP), Japan (1 IP), and Australia (1 IP). You can see the full list of identities on their website. With such a variety of countries and IPs, you can access any website or service that you want, regardless of where you are.

        -

        Anonymox Premium costs $6.55 per month or less with longer subscriptions

        -

        Anonymox Premium is very affordable compared to other VPN services. You can get it for as low as $6.55 per month if you choose the monthly plan. However, if you want to save more money, you can opt for longer subscriptions that offer discounts. For example, you can get Anonymox Premium for $5.50 per month if you choose the 6-month plan ($33 in total), or for $4.16 per month if you choose the 12-month plan ($50 in total). You can also get a free trial of Anonymox Premium for 7 days if you want to test it before buying it.
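
                The per-month figures above follow directly from the plan totals; here is a quick check, using the totals quoted in this article (which may change over time):

        ```python
        # Effective monthly price for each plan, based on the totals quoted above.
        plans = {"1 month": (6.55, 1), "6 months": (33.00, 6), "12 months": (50.00, 12)}
        for name, (total, months) in plans.items():
            print(f"{name}: ${total / months:.2f} per month")
        # 1 month: $6.55, 6 months: $5.50, 12 months: $4.17 (the article rounds this down to $4.16)
        ```
        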

        -

        How to get Anonymox Premium activation code?

        -

        If you are interested in getting Anonymox Premium, you need to follow these simple steps:

                You can buy Anonymox Premium from their official website or from other online platforms

                -

                You can go to https://www.anonymox.net/en/premium and click on the "Buy Now" button. You will be redirected to a secure payment page where you can choose your preferred plan and payment method. Alternatively, you can buy Anonymox Premium from other online platforms such as https://www.cleverbridge.com/ or https://www.softpedia.com/, which offer similar prices and options.

                You can use various payment methods such as credit card, PayPal, Bitcoin, etc.

                -

                Depending on the platform you choose, you can pay for Anonymox Premium with different payment methods. The most common ones are credit card (Visa, Mastercard, American Express, etc.), PayPal, Bitcoin, and other cryptocurrencies. You can also use other methods such as bank transfer, Sofort, Giropay, WebMoney, etc. You can see the full list of available payment methods on the payment page of each platform.

                You will receive the activation code by email after the payment is confirmed

                -

                After you complete the payment process, you will receive an email from Anonymox or the platform you bought it from. The email will contain your activation code, which is a 16-digit alphanumeric code that looks something like this: XXXX-XXXX-XXXX-XXXX. You will need this code to activate your Anonymox Premium subscription on your browser.
        

        How to activate Anonymox Premium with the code?

        -

        Once you have your activation code, you can activate your Anonymox Premium subscription by following these steps:

                You need to install the Anonymox extension for Chrome or Firefox first

                -

                If you haven't already done so, you need to install the Anonymox extension for your browser. You can do this by going to https://www.anonymox.net/en/download and clicking on the "Download" button for Chrome or Firefox. You will be taken to the Chrome Web Store or Firefox Add-ons page where you can add the extension to your browser. You may need to restart your browser after installing the extension.

                You need to click on the Anonymox icon on your browser and select "Premium"

                -

                After installing the extension, you will see an Anonymox icon on the top right corner of your browser. Click on it and a pop-up window will appear. On the window, you will see a tab that says "Premium". Click on it and you will see a field where you can enter your activation code.

                You need to enter the activation code in the provided field and click "Activate"

                -

                Enter your activation code in the field and make sure it is correct. Then click on the "Activate" button and wait for a few seconds. If everything goes well, you will see a message that says "Congratulations! Your premium account has been activated." This means that you have successfully activated your Anonymox Premium subscription and you can start using it right away.
        

        How to use Anonymox Premium features?

        -

        Now that you have activated your Anonymox Premium subscription, you can enjoy all the features and benefits that it offers. Here are some tips on how to use them:

                You can choose your desired identity from the list of available countries and IPs

                -

                To change your identity, click on the Anonymox icon on your browser and select the "Identity" tab. You will see a list of countries and IPs that you can choose from. You can also see the flag, the speed, and the encryption status of each identity. To select an identity, simply click on it and your browser will reload the page with the new identity. You can also use the search bar to find a specific country or IP. To go back to your original identity, click on the "Default" button at the bottom of the list.

                You can enable or disable the additional encryption option for extra security

                -

                To enable or disable the additional encryption option, click on the Anonymox icon on your browser and select the "Settings" tab. You will see a checkbox that says "Use additional encryption". If you check it, Anonymox will use AES-256 encryption to secure your traffic, which is more secure than SSL encryption. If you uncheck it, Anonymox will use SSL encryption only, which is still secure but faster. You can also see the encryption status of your current identity on the "Identity" tab.

                You can check your traffic usage and download speed on the dashboard

                -

                To check your traffic usage and download speed, click on the Anonymox icon on your browser and select the "Dashboard" tab. You will see a graph that shows your traffic usage in MB per hour, day, week, or month. You can also see your download speed in KB/s or MB/s. You can use this information to monitor your bandwidth consumption and performance.
        

        How to troubleshoot Anonymox Premium issues?

        -

        Although Anonymox Premium is designed to work smoothly and reliably, you may encounter some issues or problems while using it. Here are some tips on how to troubleshoot them:

                If you encounter any problems with Anonymox Premium, you can contact their support team by email or social media

                -

                If you have any questions, complaints, suggestions, or feedback about Anonymox Premium, you can contact their support team by email at support@anonymox.net. You can also reach them through their social media accounts on Facebook (https://www.facebook.com/anonymoX.net) and Twitter (https://twitter.com/anonymoX). They will try to respond to your inquiries as soon as possible and help you solve any issues that you may have.

                You can also check their FAQ section for common questions and answers

                -

                If you have some common questions about Anonymox Premium, such as how to install it, how to use it, how to cancel it, etc., you can check their FAQ section on their website (https://www.anonymox.net/en/faq). You may find the answers that you are looking for there. The FAQ section covers topics such as installation, activation, usage, payment, cancellation, privacy, security, etc.

                You can cancel your subscription at any time if you are not satisfied with the service

                -

                If you are not happy with Anonymox Premium or you want to stop using it for any reason, you can cancel your subscription at any time. To do this, you need to go to the platform where you bought it (Anonymox website, Cleverbridge, Softpedia, etc.) and follow their instructions on how to cancel your subscription. You will not be charged for any future payments after you cancel your subscription. However, you will not receive any refunds for any past payments that you have made.
        

        Conclusion

        -

        Anonymox Premium is a great solution for anyone who wants to browse the web anonymously and securely. It offers many advantages over the free version, such as faster speed, more countries, no ads, unlimited traffic, additional encryption, and more identities. It is easy to get, activate, and use with a simple code. It is also affordable and flexible compared to other VPN services.

        -

        If you want to try Anonymox Premium for yourself, you can get it from their official website or from other online platforms. You can also get a free trial for 7 days before buying it. You can use various payment methods such as credit card, PayPal, Bitcoin, etc. You will receive an activation code by email after the payment is confirmed. You can then activate it on your browser by entering the code in the provided field.

        -

        Anonymox Premium is compatible with Chrome and Firefox browsers. You can install it in seconds from their website or from the Chrome Web Store or Firefox Add-ons. You can then choose your desired identity from the list of available countries and IPs. You can also enable or disable the additional encryption option for extra security. You can check your traffic usage and download speed on the dashboard.

        -

        If you encounter any problems with Anonymox Premium, you can contact their support team by email or social media. You can also check their FAQ section for common questions and answers. You can cancel your subscription at any time if you are not satisfied with the service.

        -

        Anonymox Premium is a reliable and easy-to-use VPN extension that will help you browse the web anonymously and securely. It is worth trying if you value your privacy and freedom online.

        -

        FAQs

        -

        What is the difference between Anonymox and other VPN services?

        -

        Anonymox is different from other VPN services in several ways. First, Anonymox is an extension for Chrome and Firefox, which means that it only works on your browser and not on your whole device. This makes it more lightweight and convenient to use. Second, Anonymox is free to use, with a premium option that offers more features and benefits. Other VPN services usually charge a fee for their service, with no free option. Third, Anonymox is more focused on anonymity and privacy, rather than speed and performance. Other VPN services may offer faster speed and more servers, but they may also keep logs of your activity or share your data with third parties.

        -

        How can I test if Anonymox is working properly?

        -

        You can test if Anonymox is working properly by checking your IP address and location on websites such as https://www.whatismyip.com/ or https://www.iplocation.net/. If Anonymox is working properly, you should see a different IP address and location than your original one. You should also see a green icon on the Anonymox extension that indicates that you are connected to an identity. If Anonymox is not working properly, you may see a red icon on the extension that indicates that there is an error or a problem.
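
                If you prefer to script this check instead of visiting those sites manually, here is a minimal sketch that compares the IP seen directly with the IP seen through a proxy, using the public ipify API. The proxy host and port are hypothetical placeholders, not Anonymox settings.

        ```python
        # Minimal sketch: compare the IP seen directly vs. through a proxy.
        # PROXY_HOST/PROXY_PORT are hypothetical placeholders, not Anonymox settings.
        import requests

        direct_ip = requests.get("https://api.ipify.org", timeout=10).text

        proxies = {"https": "http://PROXY_HOST:PROXY_PORT"}
        proxied_ip = requests.get("https://api.ipify.org", proxies=proxies, timeout=10).text

        print("Direct IP: ", direct_ip)
        print("Proxied IP:", proxied_ip)
        print("IP changed" if direct_ip != proxied_ip else "IP did not change")
        ```
        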

        -

        Can I use Anonymox Premium on multiple devices?

        -

        Yes, you can use Anonymox Premium on multiple devices, as long as they are compatible with Chrome or Firefox browsers. You can use the same activation code to activate Anonymox Premium on up to 5 devices. However, you cannot use Anonymox Premium on devices that do not support Chrome or Firefox browsers, such as smartphones, tablets, smart TVs, etc.

        -

        Is Anonymox legal to use?

        -

        Anonymox is legal to use in most countries, as it does not violate any laws or regulations. However, some countries may have strict rules or bans on VPN services, such as China, Iran, Russia, etc. In these countries, using Anonymox may be risky or illegal, and you may face consequences such as fines, arrests, or censorship. Therefore, you should check the laws of your country before using Anonymox or any other VPN service.

        -

        What are some alternatives to Anonymox?

        -

        If you are looking for some alternatives to Anonymox, you may want to try some of these VPN extensions for Chrome or Firefox:

        - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
                | Name | Description | Price |
                |------|-------------|-------|
                | ZenMate | ZenMate is a popular VPN extension that offers unlimited traffic, high speed, strong encryption, and access to over 74 countries. | $10.99 per month or less with longer subscriptions |
                | Hola | Hola is a free VPN extension that uses peer-to-peer technology to provide fast and secure browsing. It also has a premium option that offers more features and benefits. | $14.99 per month or less with longer subscriptions |
                | TunnelBear | TunnelBear is a simple and user-friendly VPN extension that offers 500 MB of free traffic per month, plus unlimited traffic with a premium subscription. It also has a cute bear theme. | $9.99 per month or less with longer subscriptions |
                | Windscribe | Windscribe is a powerful VPN extension that offers 10 GB of free traffic per month, plus unlimited traffic with a premium subscription. It also has features such as ad blocking, firewall, split tunneling, etc. | $9 per month or less with longer subscriptions |
                | NordVPN | NordVPN is a well-known VPN service that offers a browser extension that works with Chrome and Firefox. It offers high speed, strong encryption, and access to over 60 countries. | $11.95 per month or less with longer subscriptions |
        
        -

        These are some of the alternatives to Anonymox that you can try if you want to compare different VPN extensions for your browser. However, Anonymox still remains a great option for anyone who wants to browse the web anonymously and securely with a simple and easy-to-use extension.

        b2dd77e56b
        -
        -
        \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Hunterrr Full Movie In Hd 1080p Download.md b/spaces/stomexserde/gpt4-ui/Examples/Hunterrr Full Movie In Hd 1080p Download.md deleted file mode 100644 index 27a957246bd0157fc47a2da1ed88a24db6339c2c..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Hunterrr Full Movie In Hd 1080p Download.md +++ /dev/null @@ -1,18 +0,0 @@ - -

        Hunterrr: A Comedy Hindi Movie You Can Watch Online in Full HD

        -

        Hunterrr is a 2015 Hindi comedy movie directed by Harshavardhan Kulkarni and starring Gulshan Devaiah, Radhika Apte, Sai Tamhankar and others. The movie revolves around Mandar, a womaniser who is obsessed with sex and casual relationships. However, when he meets Tripti and falls in love with her, he decides to mend his ways and settle down with her. But he finds it hard and invites trouble when he attempts to hide his notorious past from her.

        -

        If you are looking for a fun and entertaining movie to watch online, Hunterrr might be a good choice for you. The movie has received positive reviews from critics and audiences for its witty dialogues, realistic characters and hilarious situations. The movie also explores the themes of love, lust, friendship and commitment in a humorous way.

        -

        Hunterrr full movie in hd 1080p download


        Download ✓✓✓ https://urlgoal.com/2uI8D6



        -

        You can watch Hunterrr online in full HD quality on various streaming platforms such as JioCinema[^1^] and Voot[^2^]. You can also download the movie for offline viewing using a video downloader software such as WonderFox HD Video Converter Factory Pro[^3^]. This software allows you to download any 1080p movie from any website with fast speed and high quality. You can also convert the downloaded movie to any format or device you want.

        -

        So what are you waiting for? Grab your popcorn and enjoy Hunterrr online in full HD today!

        - -

        Hunterrr is not just a comedy movie, but also a slice-of-life story that depicts the struggles and dilemmas of a modern-day man who is torn between his desires and his emotions. The movie does not shy away from showing the raw and realistic aspects of sex and relationships, but also balances them with humor and sensitivity. The movie also has a catchy soundtrack composed by Khamosh Shah, featuring songs like "Chori Chori", "Bachpan" and "Naina".

        -

        The movie has been praised for its performances, especially by Gulshan Devaiah who plays the lead role of Mandar. He portrays the character with charm, vulnerability and sincerity, making him relatable and likable despite his flaws. Radhika Apte plays Tripti, the girl who changes Mandar's life. She delivers a natural and nuanced performance as a smart and independent woman who is not afraid to speak her mind. Sai Tamhankar plays Jyotsna, Mandar's childhood friend and one of his sexual partners. She gives a bold and confident performance as a woman who knows what she wants and how to get it.

        -

        Hunterrr is a movie that will make you laugh, think and feel. It is a movie that celebrates life in all its shades and complexities. It is a movie that you should not miss if you are looking for a refreshing and entertaining watch online.

        - -

        If you are wondering how to watch Hunterrr online in full HD quality, you can follow these simple steps. First, you need to choose a streaming platform that offers the movie, such as JioCinema or Voot. You can access these platforms through their websites or apps on your devices. You may need to sign up or log in to watch the movie. Second, you need to search for Hunterrr on the platform and click on the play button. You can adjust the video quality and subtitles according to your preference. Third, you need to sit back and enjoy the movie.

        -

        If you want to download Hunterrr for offline viewing, you can use a video downloader software such as WonderFox HD Video Converter Factory Pro. This software allows you to download any 1080p movie from any website with fast speed and high quality. You can also convert the downloaded movie to any format or device you want. To use this software, you need to follow these steps. First, you need to download and install the software on your computer. Second, you need to copy the URL of the movie from the streaming platform and paste it into the software. Third, you need to choose the output format and quality for the movie. Fourth, you need to click on the download button and wait for the process to finish. Fifth, you need to transfer the downloaded movie to your device or watch it on your computer.

        -

        -

        With these easy methods, you can watch Hunterrr online in full HD anytime and anywhere. You can also share the movie with your friends and family and have a fun time together.

        e93f5a0c3f
        -
        -
        \ No newline at end of file diff --git "a/spaces/stomexserde/gpt4-ui/Examples/JAWS.io Hack Cheats Mod 300 000 Cash For Free\302\240Non Ads.md" "b/spaces/stomexserde/gpt4-ui/Examples/JAWS.io Hack Cheats Mod 300 000 Cash For Free\302\240Non Ads.md" deleted file mode 100644 index 799ca3b59c8431cb0fd23b63dc8ad724619db50b..0000000000000000000000000000000000000000 --- "a/spaces/stomexserde/gpt4-ui/Examples/JAWS.io Hack Cheats Mod 300 000 Cash For Free\302\240Non Ads.md" +++ /dev/null @@ -1,39 +0,0 @@ -
        -

        JAWS.io Hack, Cheats Mod 300 000 Cash for free Non ads: How to Get Unlimited Resources in the Game

        - -

        If you are a fan of the JAWS movie franchise, you might have heard of JAWS.io, a multiplayer online game where you can play as a shark or a boat and compete with other players in a thrilling ocean battle. The game is fun and addictive, but it also requires a lot of cash to unlock new sharks, boats, skins and upgrades. Cash is the main currency in the game, and you can earn it by playing matches, watching ads or buying it with real money.

        -

        JAWS.io Hack, Cheats Mod 300 000 Cash for free Non ads


        DOWNLOAD ❤❤❤ https://urlgoal.com/2uIbQf



        - -

        However, if you want to get more cash without spending any money or watching annoying ads, you might be interested in JAWS.io Hack, Cheats Mod 300 000 Cash for free Non ads. This is a tool that can generate unlimited cash for your JAWS.io account in a matter of minutes. You don't need to download anything or root your device to use it. All you need is an internet connection and a few clicks.

        - -

        How to Use JAWS.io Hack, Cheats Mod 300 000 Cash for free Non ads

        - -

        Using JAWS.io Hack, Cheats Mod 300 000 Cash for free Non ads is very easy and simple. Just follow these steps:

        - -
          -
        1. Go to the website of JAWS.io Hack, Cheats Mod 300 000 Cash for free Non ads by clicking on this link: https://jawsiohackcheatsmod.com
        2. -
        3. Enter your JAWS.io username or email in the field provided.
        4. -
        5. Select your platform (Android or iOS) and click on Connect.
        6. -
        7. Wait for the tool to connect to your account and verify your identity.
        8. -
        9. Choose how much cash you want to generate (up to 300 000 per day) and click on Generate.
        10. -
        11. Wait for the tool to process your request and add the cash to your account.
        12. -
        13. Enjoy your unlimited cash and dominate the game!
        14. -
        - -

        Why Use JAWS.io Hack, Cheats Mod 300 000 Cash for free Non ads

        - -

        There are many reasons why you should use JAWS.io Hack, Cheats Mod 300 000 Cash for free Non ads. Here are some of them:

        - -
          -
        • It is free and safe. You don't need to pay anything or risk your device's security to use it.
        • -
        • It is fast and easy. You can get unlimited cash in a matter of minutes with just a few clicks.
        • -
        • It is undetectable and reliable. The tool uses advanced encryption and proxy servers to protect your account from being banned or detected by the game's servers.
        • -
        • It is compatible and updated. The tool works with any device and any version of the game. It is also regularly updated to ensure its functionality and efficiency.
        • -
        - -

        JAWS.io Hack, Cheats Mod 300 000 Cash for free Non ads: Conclusion

        - -

        JAWS.io Hack, Cheats Mod 300 000 Cash for free Non ads is the best way to get unlimited cash in JAWS.io without spending any money or watching any ads. It is a powerful and convenient tool that can help you unlock all the features and items in the game and enjoy it to the fullest. If you want to try it out, just visit the website of JAWS.io Hack, Cheats Mod 300 000 Cash for free Non ads and follow the instructions. You will be amazed by how much cash you can get in no time!

        -

        cec2833e83
        -
        -
        \ No newline at end of file diff --git a/spaces/sunmaiyyyy/combined-GI-RVC-model/infer_pack/transforms.py b/spaces/sunmaiyyyy/combined-GI-RVC-model/infer_pack/transforms.py deleted file mode 100644 index a11f799e023864ff7082c1f49c0cc18351a13b47..0000000000000000000000000000000000000000 --- a/spaces/sunmaiyyyy/combined-GI-RVC-model/infer_pack/transforms.py +++ /dev/null @@ -1,209 +0,0 @@ -import torch -from torch.nn import functional as F - -import numpy as np - - -DEFAULT_MIN_BIN_WIDTH = 1e-3 -DEFAULT_MIN_BIN_HEIGHT = 1e-3 -DEFAULT_MIN_DERIVATIVE = 1e-3 - - -def piecewise_rational_quadratic_transform( - inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails=None, - tail_bound=1.0, - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE, -): - if tails is None: - spline_fn = rational_quadratic_spline - spline_kwargs = {} - else: - spline_fn = unconstrained_rational_quadratic_spline - spline_kwargs = {"tails": tails, "tail_bound": tail_bound} - - outputs, logabsdet = spline_fn( - inputs=inputs, - unnormalized_widths=unnormalized_widths, - unnormalized_heights=unnormalized_heights, - unnormalized_derivatives=unnormalized_derivatives, - inverse=inverse, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative, - **spline_kwargs - ) - return outputs, logabsdet - - -def searchsorted(bin_locations, inputs, eps=1e-6): - bin_locations[..., -1] += eps - return torch.sum(inputs[..., None] >= bin_locations, dim=-1) - 1 - - -def unconstrained_rational_quadratic_spline( - inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails="linear", - tail_bound=1.0, - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE, -): - inside_interval_mask = (inputs >= -tail_bound) & (inputs <= tail_bound) - outside_interval_mask = ~inside_interval_mask - - outputs = torch.zeros_like(inputs) - logabsdet = torch.zeros_like(inputs) - - if tails == "linear": - unnormalized_derivatives = F.pad(unnormalized_derivatives, pad=(1, 1)) - constant = np.log(np.exp(1 - min_derivative) - 1) - unnormalized_derivatives[..., 0] = constant - unnormalized_derivatives[..., -1] = constant - - outputs[outside_interval_mask] = inputs[outside_interval_mask] - logabsdet[outside_interval_mask] = 0 - else: - raise RuntimeError("{} tails are not implemented.".format(tails)) - - ( - outputs[inside_interval_mask], - logabsdet[inside_interval_mask], - ) = rational_quadratic_spline( - inputs=inputs[inside_interval_mask], - unnormalized_widths=unnormalized_widths[inside_interval_mask, :], - unnormalized_heights=unnormalized_heights[inside_interval_mask, :], - unnormalized_derivatives=unnormalized_derivatives[inside_interval_mask, :], - inverse=inverse, - left=-tail_bound, - right=tail_bound, - bottom=-tail_bound, - top=tail_bound, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative, - ) - - return outputs, logabsdet - - -def rational_quadratic_spline( - inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - left=0.0, - right=1.0, - bottom=0.0, - top=1.0, - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE, -): - if torch.min(inputs) < left or torch.max(inputs) > right: - raise ValueError("Input to a transform is not within 
its domain") - - num_bins = unnormalized_widths.shape[-1] - - if min_bin_width * num_bins > 1.0: - raise ValueError("Minimal bin width too large for the number of bins") - if min_bin_height * num_bins > 1.0: - raise ValueError("Minimal bin height too large for the number of bins") - - widths = F.softmax(unnormalized_widths, dim=-1) - widths = min_bin_width + (1 - min_bin_width * num_bins) * widths - cumwidths = torch.cumsum(widths, dim=-1) - cumwidths = F.pad(cumwidths, pad=(1, 0), mode="constant", value=0.0) - cumwidths = (right - left) * cumwidths + left - cumwidths[..., 0] = left - cumwidths[..., -1] = right - widths = cumwidths[..., 1:] - cumwidths[..., :-1] - - derivatives = min_derivative + F.softplus(unnormalized_derivatives) - - heights = F.softmax(unnormalized_heights, dim=-1) - heights = min_bin_height + (1 - min_bin_height * num_bins) * heights - cumheights = torch.cumsum(heights, dim=-1) - cumheights = F.pad(cumheights, pad=(1, 0), mode="constant", value=0.0) - cumheights = (top - bottom) * cumheights + bottom - cumheights[..., 0] = bottom - cumheights[..., -1] = top - heights = cumheights[..., 1:] - cumheights[..., :-1] - - if inverse: - bin_idx = searchsorted(cumheights, inputs)[..., None] - else: - bin_idx = searchsorted(cumwidths, inputs)[..., None] - - input_cumwidths = cumwidths.gather(-1, bin_idx)[..., 0] - input_bin_widths = widths.gather(-1, bin_idx)[..., 0] - - input_cumheights = cumheights.gather(-1, bin_idx)[..., 0] - delta = heights / widths - input_delta = delta.gather(-1, bin_idx)[..., 0] - - input_derivatives = derivatives.gather(-1, bin_idx)[..., 0] - input_derivatives_plus_one = derivatives[..., 1:].gather(-1, bin_idx)[..., 0] - - input_heights = heights.gather(-1, bin_idx)[..., 0] - - if inverse: - a = (inputs - input_cumheights) * ( - input_derivatives + input_derivatives_plus_one - 2 * input_delta - ) + input_heights * (input_delta - input_derivatives) - b = input_heights * input_derivatives - (inputs - input_cumheights) * ( - input_derivatives + input_derivatives_plus_one - 2 * input_delta - ) - c = -input_delta * (inputs - input_cumheights) - - discriminant = b.pow(2) - 4 * a * c - assert (discriminant >= 0).all() - - root = (2 * c) / (-b - torch.sqrt(discriminant)) - outputs = root * input_bin_widths + input_cumwidths - - theta_one_minus_theta = root * (1 - root) - denominator = input_delta + ( - (input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta - ) - derivative_numerator = input_delta.pow(2) * ( - input_derivatives_plus_one * root.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - root).pow(2) - ) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, -logabsdet - else: - theta = (inputs - input_cumwidths) / input_bin_widths - theta_one_minus_theta = theta * (1 - theta) - - numerator = input_heights * ( - input_delta * theta.pow(2) + input_derivatives * theta_one_minus_theta - ) - denominator = input_delta + ( - (input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta - ) - outputs = input_cumheights + numerator / denominator - - derivative_numerator = input_delta.pow(2) * ( - input_derivatives_plus_one * theta.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - theta).pow(2) - ) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, logabsdet diff --git a/spaces/supertori/files/lycoris/loha.py b/spaces/supertori/files/lycoris/loha.py 
deleted file mode 100644 index 00616cb0376e7f2963a87f8d8531aa3e998175c2..0000000000000000000000000000000000000000 --- a/spaces/supertori/files/lycoris/loha.py +++ /dev/null @@ -1,198 +0,0 @@ -import math - -import torch -import torch.nn as nn -import torch.nn.functional as F - - -class HadaWeight(torch.autograd.Function): - @staticmethod - def forward(ctx, orig_weight, w1a, w1b, w2a, w2b, scale=torch.tensor(1)): - ctx.save_for_backward(w1a, w1b, w2a, w2b, scale) - diff_weight = ((w1a@w1b)*(w2a@w2b)) * scale - return orig_weight.reshape(diff_weight.shape) + diff_weight - - @staticmethod - def backward(ctx, grad_out): - (w1a, w1b, w2a, w2b, scale) = ctx.saved_tensors - grad_out = grad_out * scale - temp = grad_out*(w2a@w2b) - grad_w1a = temp @ w1b.T - grad_w1b = w1a.T @ temp - - temp = grad_out * (w1a@w1b) - grad_w2a = temp @ w2b.T - grad_w2b = w2a.T @ temp - - del temp - return grad_out, grad_w1a, grad_w1b, grad_w2a, grad_w2b, None - - -class HadaWeightCP(torch.autograd.Function): - @staticmethod - def forward(ctx, orig_weight, t1, w1a, w1b, t2, w2a, w2b, scale=torch.tensor(1)): - ctx.save_for_backward(t1, w1a, w1b, t2, w2a, w2b, scale) - - rebuild1 = torch.einsum('i j k l, j r, i p -> p r k l', t1, w1b, w1a) - rebuild2 = torch.einsum('i j k l, j r, i p -> p r k l', t2, w2b, w2a) - - return orig_weight + rebuild1*rebuild2*scale - - @staticmethod - def backward(ctx, grad_out): - (t1, w1a, w1b, t2, w2a, w2b, scale) = ctx.saved_tensors - - grad_out = grad_out*scale - - temp = torch.einsum('i j k l, j r -> i r k l', t2, w2b) - rebuild = torch.einsum('i j k l, i r -> r j k l', temp, w2a) - - grad_w = rebuild*grad_out - del rebuild - - grad_w1a = torch.einsum('r j k l, i j k l -> r i', temp, grad_w) - grad_temp = torch.einsum('i j k l, i r -> r j k l', grad_w, w1a.T) - del grad_w, temp - - grad_w1b = torch.einsum('i r k l, i j k l -> r j', t1, grad_temp) - grad_t1 = torch.einsum('i j k l, j r -> i r k l', grad_temp, w1b.T) - del grad_temp - - temp = torch.einsum('i j k l, j r -> i r k l', t1, w1b) - rebuild = torch.einsum('i j k l, i r -> r j k l', temp, w1a) - - grad_w = rebuild*grad_out - del rebuild - - grad_w2a = torch.einsum('r j k l, i j k l -> r i', temp, grad_w) - grad_temp = torch.einsum('i j k l, i r -> r j k l', grad_w, w2a.T) - del grad_w, temp - - grad_w2b = torch.einsum('i r k l, i j k l -> r j', t2, grad_temp) - grad_t2 = torch.einsum('i j k l, j r -> i r k l', grad_temp, w2b.T) - del grad_temp - return grad_out, grad_t1, grad_w1a, grad_w1b, grad_t2, grad_w2a, grad_w2b, None - - -def make_weight(orig_weight, w1a, w1b, w2a, w2b, scale): - return HadaWeight.apply(orig_weight, w1a, w1b, w2a, w2b, scale) - - -def make_weight_cp(orig_weight, t1, w1a, w1b, t2, w2a, w2b, scale): - return HadaWeightCP.apply(orig_weight, t1, w1a, w1b, t2, w2a, w2b, scale) - - -class LohaModule(nn.Module): - """ - Hadamard product Implementaion for Low Rank Adaptation - """ - - def __init__( - self, - lora_name, - org_module: nn.Module, - multiplier=1.0, lora_dim=4, alpha=1, dropout=0., - use_cp=True, - ): - """ if alpha == 0 or None, alpha is rank (no scaling). 
""" - super().__init__() - self.lora_name = lora_name - self.lora_dim = lora_dim - self.cp=False - - self.shape = org_module.weight.shape - if org_module.__class__.__name__ == 'Conv2d': - in_dim = org_module.in_channels - k_size = org_module.kernel_size - out_dim = org_module.out_channels - self.cp = use_cp and k_size!=(1, 1) - if self.cp: - shape = (out_dim, in_dim, *k_size) - else: - shape = (out_dim, in_dim*k_size[0]*k_size[1]) - self.op = F.conv2d - self.extra_args = { - "stride": org_module.stride, - "padding": org_module.padding, - "dilation": org_module.dilation, - "groups": org_module.groups - } - else: - in_dim = org_module.in_features - out_dim = org_module.out_features - shape = (out_dim, in_dim) - self.op = F.linear - self.extra_args = {} - - if self.cp: - self.hada_t1 = nn.Parameter(torch.empty(lora_dim, lora_dim, shape[2], shape[3])) - self.hada_w1_a = nn.Parameter(torch.empty(lora_dim, shape[0])) # out_dim, 1-mode - self.hada_w1_b = nn.Parameter(torch.empty(lora_dim, shape[1])) # in_dim , 2-mode - - self.hada_t2 = nn.Parameter(torch.empty(lora_dim, lora_dim, shape[2], shape[3])) - self.hada_w2_a = nn.Parameter(torch.empty(lora_dim, shape[0])) # out_dim, 1-mode - self.hada_w2_b = nn.Parameter(torch.empty(lora_dim, shape[1])) # in_dim , 2-mode - else: - self.hada_w1_a = nn.Parameter(torch.empty(shape[0], lora_dim)) - self.hada_w1_b = nn.Parameter(torch.empty(lora_dim, shape[1])) - - self.hada_w2_a = nn.Parameter(torch.empty(shape[0], lora_dim)) - self.hada_w2_b = nn.Parameter(torch.empty(lora_dim, shape[1])) - - if dropout: - self.dropout = nn.Dropout(dropout) - else: - self.dropout = nn.Identity() - - if type(alpha) == torch.Tensor: - alpha = alpha.detach().float().numpy() # without casting, bf16 causes error - alpha = lora_dim if alpha is None or alpha == 0 else alpha - self.scale = alpha / self.lora_dim - self.register_buffer('alpha', torch.tensor(alpha)) # 定数として扱える - - # Need more experiences on init method - if self.cp: - torch.nn.init.normal_(self.hada_t1, std=0.1) - torch.nn.init.normal_(self.hada_t2, std=0.1) - torch.nn.init.normal_(self.hada_w1_b, std=1) - torch.nn.init.normal_(self.hada_w2_b, std=0.01) - torch.nn.init.normal_(self.hada_w1_a, std=1) - torch.nn.init.constant_(self.hada_w2_a, 0) - - self.multiplier = multiplier - self.org_module = [org_module] # remove in applying - self.grad_ckpt = False - - def apply_to(self): - self.org_module[0].forward = self.forward - - def get_weight(self): - d_weight = self.hada_w1_a @ self.hada_w1_b - d_weight *= self.hada_w2_a @ self.hada_w2_b - return (d_weight).reshape(self.shape) - - @torch.enable_grad() - def forward(self, x): - # print(torch.mean(torch.abs(self.orig_w1a.to(x.device) - self.hada_w1_a)), end='\r') - if self.cp: - weight = make_weight_cp( - self.org_module[0].weight.data, - self.hada_t1, self.hada_w1_a, self.hada_w1_b, - self.hada_t1, self.hada_w2_a, self.hada_w2_b, - scale = torch.tensor(self.scale*self.multiplier), - ) - else: - weight = make_weight( - self.org_module[0].weight.data, - self.hada_w1_a, self.hada_w1_b, - self.hada_w2_a, self.hada_w2_b, - scale = torch.tensor(self.scale*self.multiplier), - ) - - bias = None if self.org_module[0].bias is None else self.org_module[0].bias.data - return self.op( - x, - weight.view(self.shape), - bias, - **self.extra_args - ) \ No newline at end of file diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Agisoft Metashape Professional 1.5.5 Build 9057 With Crack.md 
b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Agisoft Metashape Professional 1.5.5 Build 9057 With Crack.md deleted file mode 100644 index dff0f61a55546d2c2302f23acf1f826c7285b5e2..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Agisoft Metashape Professional 1.5.5 Build 9057 With Crack.md +++ /dev/null @@ -1,121 +0,0 @@ - -

        Agisoft Metashape Professional 1.5.5 Build 9057 With Crack: A Review

        -

        Agisoft Metashape Professional is a software product that performs photogrammetric processing of digital images and generates 3D spatial data. It is widely used for cultural heritage documentation, architecture and engineering, geology and mining, and other applications that require high-quality 3D models.

        -

        In this article, we will review the features and benefits of Agisoft Metashape Professional 1.5.5 Build 9057 With Crack, which is the latest version of the software that comes with a crack file that allows you to activate it without a license key. We will also show you how to download, install and activate the software on your Windows PC.

        -

        Agisoft Metashape Professional 1.5.5 Build 9057 With Crack


        Download File ⚹⚹⚹ https://cinurl.com/2uEY0P



        -

        Features of Agisoft Metashape Professional 1.5.5 Build 9057 With Crack

        -

        Agisoft Metashape Professional 1.5.5 Build 9057 With Crack offers a number of features that make it a powerful and versatile tool for 3D modeling. Some of the main features are:

        -
          -
        • Photogrammetric triangulation: The software can process thousands of images and automatically align them, estimate camera positions and orientations, and reconstruct a sparse point cloud.
        • -
        • Dense point cloud generation: The software can generate a dense point cloud based on the image alignment results, which can be edited, classified, filtered, and exported to various formats.
        • -
        • 3D model generation: The software can create textured polygonal models, tiled models, digital elevation models (DEMs), and orthomosaics from the dense point cloud data.
        • -
        • Measurements and annotations: The software can perform various measurements on the 3D models, such as distances, areas, volumes, angles, etc. It can also add labels, markers, shapes, and comments to the models.
        • -
        • Python scripting: The software supports Python scripting, which allows you to automate various tasks and customize the workflow according to your needs (a rough scripted-workflow sketch follows this list).
        • -
        -
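        As a rough illustration of that scripting feature, the sketch below strings the main processing stages together with the application's Python module. It is only a sketch: the module name (Metashape), the method names, and their default arguments are recalled from the 1.5-era API reference and may differ between versions, and the photo paths and project name are placeholders.

```python
import Metashape  # bundled with the application; module name assumed from the 1.5-era API

# Placeholder inputs -- replace with your own image paths and project file name
photos = ["IMG_0001.JPG", "IMG_0002.JPG", "IMG_0003.JPG"]

doc = Metashape.Document()
chunk = doc.addChunk()
chunk.addPhotos(photos)

# Photogrammetric triangulation: align photos, estimate camera poses, build the sparse cloud
chunk.matchPhotos()
chunk.alignCameras()

# Dense point cloud generation
chunk.buildDepthMaps()
chunk.buildDenseCloud()

# Textured polygonal model
chunk.buildModel()
chunk.buildUV()
chunk.buildTexture()

# Digital elevation model and orthomosaic
chunk.buildDem()
chunk.buildOrthomosaic()

doc.save("project.psx")
```

        Each call mirrors one of the buttons described later in this article (Align Photos, Build Dense Cloud, Build Mesh, Build Texture, Build DEM, Build Orthomosaic), so the same workflow can be batched over many projects.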

        Benefits of Agisoft Metashape Professional 1.5.5 Build 9057 With Crack

        -

        Agisoft Metashape Professional 1.5.5 Build 9057 With Crack has several advantages over other similar software products. Some of the benefits are:

        -
          -
        • High accuracy: The software uses advanced algorithms and techniques to ensure high accuracy and quality of the 3D models.
        • -
        • High performance: The software can process large datasets efficiently and quickly, thanks to its multi-core and GPU processing capabilities.
        • -
        • High compatibility: The software can work with any camera type, including DSLRs, smartphones, drones, etc. It can also import and export data in various formats, such as OBJ, PLY, STL, PDF, etc.
        • -
        • High flexibility: The software can handle various types of scenes and objects, such as buildings, landscapes, plants, animals, humans, etc. It can also adjust to different lighting conditions and camera settings.
        • -
        • Free activation: The software comes with a crack file that allows you to activate it without a license key or an internet connection.
        • -
        -

        How to Download, Install and Activate Agisoft Metashape Professional 1.5.5 Build 9057 With Crack

        -

        If you want to try Agisoft Metashape Professional 1.5.5 Build 9057 With Crack on your Windows PC, you can follow these simple steps:

        -
          -
        1. Download the software from the link below.
        2. -
        3. Extract the ZIP file using WinRAR or any other extraction tool.
        4. -
        5. Run the setup file and follow the installation instructions.
        6. -
        7. Copy the crack file from the crack folder and paste it into the installation directory.
        8. -
        9. Run the software and enjoy!
        10. -
        -

        Note: This is only for educational purposes. We do not support or promote piracy in any way. If you like the software, please buy it from the official website.

        -

        Examples of Agisoft Metashape Professional 1.5.5 Build 9057 With Crack Applications

        -

        Agisoft Metashape Professional 1.5.5 Build 9057 With Crack can be used for various purposes and projects that require high-quality 3D models. Here are some examples of how the software can be applied:

        -

        -
          -
        • Cultural heritage documentation: The software can create accurate and detailed 3D models of historical monuments, buildings, sculptures, artifacts, etc. from photos taken from different angles and distances. This can help preserve and study the cultural heritage of different regions and civilizations.
        • -
        • Architecture and engineering: The software can create realistic and detailed 3D models of buildings, structures, bridges, roads, etc. from photos taken from drones or helicopters. This can help with design, planning, inspection, and renovation of architectural and engineering projects.
        • -
        • Geology and mining: The software can create accurate and detailed 3D models of terrain, rocks, minerals, fossils, etc. from photos taken from ground or aerial platforms. This can help with geological mapping, exploration, surveying, and mining operations.
        • -
        • Other applications: The software can also be used for other fields and domains that require high-quality 3D models, such as biology, medicine, education, entertainment, art, etc.
        • -
        -

        Conclusion

        -

        Agisoft Metashape Professional 1.5.5 Build 9057 With Crack is a powerful and versatile software product that performs photogrammetric processing of digital images and generates 3D spatial data. It offers a number of features and benefits that make it a superior tool for 3D modeling. It can also be used for various applications and projects that require high-quality 3D models.

        -

        If you want to try Agisoft Metashape Professional 1.5.5 Build 9057 With Crack on your Windows PC, you can download it from the link below and follow the instructions to install and activate it. However, this is only for educational purposes. We do not support or promote piracy in any way. If you like the software, please buy it from the official website.

        -

        How to Use Agisoft Metashape Professional 1.5.5 Build 9057 With Crack

        -

        Agisoft Metashape Professional 1.5.5 Build 9057 With Crack is easy to use and has a user-friendly interface. You can follow these simple steps to use the software:

        -
          -
        1. Launch the software and create a new project or open an existing one.
        2. -
        3. Add photos to the project by clicking on the Add Photos button or dragging and dropping them into the workspace.
        4. -
        5. Select the photos and click on the Align Photos button to start the image alignment process.
        6. -
        7. After the image alignment is completed, you can view the sparse point cloud and the camera positions and orientations in the 3D view.
        8. -
        9. Click on the Build Dense Cloud button to start the dense point cloud generation process.
        10. -
        11. After the dense point cloud is completed, you can view it in the 3D view and edit, classify, filter, or export it as needed.
        12. -
        13. Click on the Build Mesh button to start the polygonal model generation process.
        14. -
        15. After the polygonal model is completed, you can view it in the 3D view and edit, simplify, or export it as needed.
        16. -
        17. Click on the Build Texture button to start the texture generation process.
        18. -
        19. After the texture is completed, you can view it in the 3D view and export it as needed.
        20. -
        21. Click on the Build DEM button to start the digital elevation model generation process.
        22. -
        23. After the DEM is completed, you can view it in the 3D view and export it as needed.
        24. -
        25. Click on the Build Orthomosaic button to start the orthomosaic generation process.
        26. -
        27. After the orthomosaic is completed, you can view it in the 2D view and export it as needed.
        28. -
        29. You can also perform various measurements and annotations on the 3D models by using the tools in the toolbar.
        30. -
        -

        Tips and Tricks for Agisoft Metashape Professional 1.5.5 Build 9057 With Crack

        -

        Agisoft Metashape Professional 1.5.5 Build 9057 With Crack is a powerful software product that can produce high-quality 3D models from photos. However, there are some tips and tricks that can help you improve your results and optimize your workflow. Here are some of them:

        -
          -
        • Use a good camera: The quality of your photos will affect the quality of your 3D models. Therefore, use a good camera with high resolution, low distortion, and good exposure control.
        • -
        • Use good lighting: The lighting of your scene will affect the accuracy and detail of your 3D models. Therefore, use lighting that is uniform and natural, and avoid shadows and reflections.
        • -
        • Use good overlap: The overlap of your photos will affect the alignment and reconstruction of your 3D models. Therefore, aim for at least 60% overlap between photos and capture your scene from different angles and distances.
        • -
        • Use a good mask: The mask of your photos will affect the speed and quality of your 3D models. Therefore, use a good mask that removes unwanted areas from your photos, such as sky, background, foreground, etc.
        • -
        • Use good settings: The settings of your software will affect the performance and quality of your 3D models. Therefore, choose settings that match your hardware specifications, your project requirements, and your desired output formats.
        • -
        -

        Comparison of Agisoft Metashape Professional 1.5.5 Build 9057 With Crack with Other Software Products

        -

        Agisoft Metashape Professional 1.5.5 Build 9057 With Crack is not the only software product that can perform photogrammetric processing of digital images and generate 3D spatial data. There are other software products that offer similar or different features and capabilities. Here are some of them:

        -
          -
        • RealityCapture: RealityCapture is a software product that can create 3D models from photos and laser scans in a fast and easy way. It claims to be the fastest and most accurate photogrammetry software on the market.
        • -
        • 3DF Zephyr: 3DF Zephyr is a software product that can create 3D models from photos automatically. It uses a proprietary technology that does not require coded targets, manual editing, or special equipment.
        • -
        • PhotoModeler: PhotoModeler is a software product that can create 3D models from photos using photogrammetry and image-based modeling techniques. It can handle both close-range and aerial photography.
        • -
        • Pix4Dmapper: Pix4Dmapper is a software product that can create 3D models from drone imagery. It can process RGB, thermal, multispectral, and LiDAR data.
        • -
        • Meshroom: Meshroom is a free and open-source software product that can create 3D models from photos using photogrammetry. It is based on the AliceVision framework.
        • -
        -

        Each of these software products has its own strengths and weaknesses, and may suit different needs and preferences. You can compare them with Agisoft Metashape Professional 1.5.5 Build 9057 With Crack and choose the one that best fits your project.

        -

        Frequently Asked Questions about Agisoft Metashape Professional 1.5.5 Build 9057 With Crack

        -

        Agisoft Metashape Professional 1.5.5 Build 9057 With Crack is a complex and sophisticated software product that may raise some questions among users. Here are some of the frequently asked questions about the software and their answers:

        -
          -
        1. What are the system requirements for Agisoft Metashape Professional 1.5.5 Build 9057 With Crack?
        2. -

          The minimum system requirements for Agisoft Metashape Professional 1.5.5 Build 9057 With Crack are:

          -
            -
          • Operating System: Windows 7 or later (64-bit)
          • -
          • Processor: Intel Core i3 or equivalent
          • -
          • Memory: 8 GB RAM
          • -
          • Graphics: NVIDIA GeForce GTX 770 or equivalent
          • -
          • Storage: 10 GB available space
          • -
          -

          The recommended system requirements for Agisoft Metashape Professional 1.5.5 Build 9057 With Crack are:

          -
            -
          • Operating System: Windows 10 (64-bit)
          • -
          • Processor: Intel Core i7 or equivalent
          • -
          • Memory: 32 GB RAM
          • -
          • Graphics: NVIDIA GeForce GTX 1080 or equivalent
          • -
          • Storage: SSD drive
          • -
          - -
        3. How to update Agisoft Metashape Professional 1.5.5 Build 9057 With Crack?
        4. -

          To update Agisoft Metashape Professional 1.5.5 Build 9057 With Crack, you can follow these steps:

          -
            -
          1. Download the latest version of the software from the official website.
          2. -
          3. Uninstall the previous version of the software from your PC.
          4. -
          5. Install the new version of the software on your PC.
          6. -
          7. Copy the crack file from the crack folder and paste it into the installation directory.
          8. -
          9. Run the software and enjoy!
          10. -
          - -
        5. Is Agisoft Metashape Professional 1.5.5 Build 9057 With Crack safe to use?
        6. -

          Agisoft Metashape Professional 1.5.5 Build 9057 With Crack is safe to use as long as you download it from a trusted source and scan it with an antivirus program before installing it on your PC. However, using a cracked version of the software may violate the terms and conditions of the original software developer and may expose you to legal risks and penalties. Therefore, we do not recommend using a cracked version of the software and advise you to buy it from the official website if you like it.

          -

          Conclusion

          -

          Agisoft Metashape Professional 1.5.5 Build 9057 With Crack is a powerful and versatile software product that performs photogrammetric processing of digital images and generates 3D spatial data. It offers a number of features and benefits that make it a superior tool for 3D modeling. It can also be used for various applications and projects that require high-quality 3D models.

          -

          If you want to try Agisoft Metashape Professional 1.5.5 Build 9057 With Crack on your Windows PC, you can download it from the link below and follow the instructions to install and activate it. However, this is only for educational purposes. We do not support or promote piracy in any way. If you like the software, please buy it from the official website.

          3cee63e6c2
          -
          -
          \ No newline at end of file diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Estructura Tridilosa Para Grandes Claros.pdf.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Estructura Tridilosa Para Grandes Claros.pdf.md deleted file mode 100644 index 6c173ed28e2886bcc5b8807c0d59c3cfb74350cc..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Estructura Tridilosa Para Grandes Claros.pdf.md +++ /dev/null @@ -1,6 +0,0 @@ -

          Estructura Tridilosa Para Grandes Claros.pdf


          Download Filehttps://cinurl.com/2uEXMs



          -
          -rar lubacsignature.com, estructura tridilosa para grandes claros, estructura tridilosa para grandes claros pdf, estructura tridilosa para grandes claros, estructura tridilosa para grandes claros pdf gratis lubacsignature.com, o_es_spolitico_aposta_esta_memoria, estructura tridilosa para grandes claros pdf, estructura tridilosa para grandes claros pdf gratis, estructura tridilosa para grandes claros pdf gratis, estructura tridilosa para grandes claros pdf gratis, estructura tridilosa para grandes claros pdf, estructura tridilosa para grandes claros, estructura tridilosa para grandes claros pdf gratis, estructura tridilosa para grandes claros, estructura tridilosa para grandes claros, estructura tridilosa para grandes claros pdf gratis, estructura tridilosa para grandes claros, estructura tridilosa para grandes claros, estructura tridilosa para grandes claros pdf gratis lubacsignature.com, lo_separar, estructura tridilosa para grandes claros pdf, estructura tridilosa para grandes claros pdf gratis, estructura tridilosa para grandes claros pdf gratis, estructura tridilosa para grandes claros pdf gratis, estructura tridilosa para grandes claros pdf, estructura tridilosa para grandes claros pdf, estructura tridilosa para grandes claros pdf, estructura tridilosa para grandes claros pdf gratis, estructura tridilosa para grandes claros pdf gratis lubacsignature.com, o_estrategic_a_casa_que_surgiu_do_povo, estructura tridilosa para grandes claros pdf, estructura tridilosa para grandes claros pdf gratis, estructura tridilosa para grandes claros pdf gratis, estructura tridilosa para grandes claros 4fefd39f24
          -
          -
          -

          diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/FULL Google Earth Pro V7.0.3.8542 Incl Crack [TorDigger].md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/FULL Google Earth Pro V7.0.3.8542 Incl Crack [TorDigger].md deleted file mode 100644 index f6e6c5620e0af527bfc1b410bb171233b4c75150..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/FULL Google Earth Pro V7.0.3.8542 Incl Crack [TorDigger].md +++ /dev/null @@ -1,6 +0,0 @@ -

          FULL Google Earth Pro v7.0.3.8542 Incl Crack [TorDigger]


          Download Zip ⚙⚙⚙ https://cinurl.com/2uEXHX



          -
          -google earth pro v7.0.3.8542 incl [tordigger] [google earth pro v7.0.3.8542] [tordigger] [google earth pro v7.0.3.8542] [tordigger] [google earth pro v7.0.3.8542] [tordigger ] [google earth pro v7.0.3.8542] [tordigger] [google earth pro v7.0.3.8542] [tordigger] [google earth pro v7.0.3.8542] [tordigger] [google 8a78ff9644
          -
          -
          -

          diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Windows 10 Enterprise LTSB 32 Bits PT BR.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Windows 10 Enterprise LTSB 32 Bits PT BR.md deleted file mode 100644 index 4edd277e65a37b59536f59c933ca4b9782d8a019..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Windows 10 Enterprise LTSB 32 Bits PT BR.md +++ /dev/null @@ -1,22 +0,0 @@ -

          Windows 10 Enterprise LTSB 32 Bits PT BR


          Download Zip 🔗 https://cinurl.com/2uEYtK



          - -For windows 10 only "Windows XP Mode" (32-bit) - read the FAQ - The program only supports 32-bit Windows XP. At the bottom of the window, click the start button or press Ctrl. You must own a copy of Windows XP to install this version of Windows. In the new window that opens, click the radio button labeled Windows Server 2012 R2. Support is provided for the Windows XP and Windows Vista operating systems only. - -For Windows 8 or later, use the. 1. OLD-WINDOWS-XP-OFFICE-2003. ;;;. Open your Office 2003 installation folder and locate the files with the following filename extensions. - -You must be running Windows XP Home Edition or Professional, Windows Vista, Windows 7 Professional or Windows 7 Home Premium, or Windows 8, Windows 8. 1 and Windows 8. 1, Microsoft Office 2004 or later is not supported for the following:. 64 bit. Windows 10 does not support Office 2004 or earlier. There is no support for the Windows XP operating system. How do I install 32-bit Office on 64-bit Windows 7, Windows 8 or Windows 10? The system requirements listed for the installer do not specify a 32-bit version of Windows. - -If you want to install Office on Windows 7, Windows 8, Windows 8. 1, and Windows 10, you need to install Office 2016, Office 2013, Office 2012 or Office 2010. 32-bit Windows versions do not support 64-bit Windows. Before you begin, make sure that you have downloaded the Microsoft Office Installation Media. For Windows XP and Windows Vista. To create a new virtual PC, go to Start and select All Programs. A list of programs will appear. - -Find the program you want to use and select it. Choose Create a virtual machine. 2. - -4. Select Windows XP, and then click Next. The source operating system must be Windows Vista. For example, Windows XP Pro. 3. Select the amount of memory. The system requirements listed for the virtual machine do not specify a 32-bit version of Windows. - -If you want to install Office on Windows 7, Windows 8, Windows 8. 1, and Windows 10, you need to install Office 2016, Office 2013, Office 2012 or Office 2010. Install Office on a 32-bit Windows virtual machine. 1. - -2. Choose the name for the virtual machine and select Finish. 3. - -If you want to install Office on Windows 7, Windows 8, 4fefd39f24
          -
          -
          -

          diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Xforce Keygen 64-bit Showcase 2015.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Xforce Keygen 64-bit Showcase 2015.md deleted file mode 100644 index 22dc2d237830aa4014044dbdf71edf27780bd25b..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Xforce Keygen 64-bit Showcase 2015.md +++ /dev/null @@ -1,13 +0,0 @@ -

          Xforce Keygen 64-bit Showcase 2015


          DOWNLOADhttps://cinurl.com/2uEYGs



          -
          -On FB ... ----- -Danny the duck on Instagram: "🍎🍎" -View this and other pins on the Danny the duck board by user Sasha B. -Tagged -Danny the duck on Instagram:"🍎🍎 -What others are saying -Danny the duck on Instagram: "A quick little drawing of my favorite little teddy bear.🐻❤️😊" -492 Likes, 1 Comments - Danny the duck (@dannyduck_) on Instagram: "A quick little drawing of my favorite little teddy bear.🐻❤️😊" 8a78ff9644
          -
          -
          -

          diff --git a/spaces/swzamir/Restormer/demo_gradio.py b/spaces/swzamir/Restormer/demo_gradio.py deleted file mode 100644 index 6fd99f78409e2982c60276b4564dec48526d01d5..0000000000000000000000000000000000000000 --- a/spaces/swzamir/Restormer/demo_gradio.py +++ /dev/null @@ -1,75 +0,0 @@ -## Restormer: Efficient Transformer for High-Resolution Image Restoration -## Syed Waqas Zamir, Aditya Arora, Salman Khan, Munawar Hayat, Fahad Shahbaz Khan, and Ming-Hsuan Yang -## https://arxiv.org/abs/2111.09881 - - -import torch -import torch.nn.functional as F -import os -from skimage import img_as_ubyte -import cv2 -import argparse - -parser = argparse.ArgumentParser(description='Test Restormer on your own images') -parser.add_argument('--input_path', default='./temp/image.jpg', type=str, help='Directory of input images or path of single image') -parser.add_argument('--result_dir', default='./temp/', type=str, help='Directory for restored results') -parser.add_argument('--task', required=True, type=str, help='Task to run', choices=['Motion_Deblurring', - 'Single_Image_Defocus_Deblurring', - 'Deraining', - 'Real_Denoising', - 'Gaussian_Gray_Denoising', - 'Gaussian_Color_Denoising']) - -args = parser.parse_args() - - -task = args.task -out_dir = os.path.join(args.result_dir, task) - -os.makedirs(out_dir, exist_ok=True) - - -if task == 'Motion_Deblurring': - model = torch.jit.load('motion_deblurring.pt') -elif task == 'Single_Image_Defocus_Deblurring': - model = torch.jit.load('single_image_defocus_deblurring.pt') -elif task == 'Deraining': - model = torch.jit.load('deraining.pt') -elif task == 'Real_Denoising': - model = torch.jit.load('real_denoising.pt') - -device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') -# device = torch.device('cpu') -# stx() - -model = model.to(device) -model.eval() - -img_multiple_of = 8 - -with torch.inference_mode(): - if torch.cuda.is_available(): - torch.cuda.ipc_collect() - torch.cuda.empty_cache() - - img = cv2.cvtColor(cv2.imread(args.input_path), cv2.COLOR_BGR2RGB) - - input_ = torch.from_numpy(img).float().div(255.).permute(2,0,1).unsqueeze(0).to(device) - - # Pad the input if not_multiple_of 8 - h,w = input_.shape[2], input_.shape[3] - H,W = ((h+img_multiple_of)//img_multiple_of)*img_multiple_of, ((w+img_multiple_of)//img_multiple_of)*img_multiple_of - padh = H-h if h%img_multiple_of!=0 else 0 - padw = W-w if w%img_multiple_of!=0 else 0 - input_ = F.pad(input_, (0,padw,0,padh), 'reflect') - - # print(h,w) - restored = torch.clamp(model(input_),0,1) - - # Unpad the output - restored = img_as_ubyte(restored[:,:,:h,:w].permute(0, 2, 3, 1).cpu().detach().numpy()[0]) - - out_path = os.path.join(out_dir, os.path.split(args.input_path)[-1]) - cv2.imwrite(out_path,cv2.cvtColor(restored, cv2.COLOR_RGB2BGR)) - - # print(f"\nRestored images are saved at {out_dir}") \ No newline at end of file diff --git a/spaces/szk1ck/image-matting/utils/functions.py b/spaces/szk1ck/image-matting/utils/functions.py deleted file mode 100644 index 971051891861f7ce1a77acbe8c7d39e9dd63d32c..0000000000000000000000000000000000000000 --- a/spaces/szk1ck/image-matting/utils/functions.py +++ /dev/null @@ -1,70 +0,0 @@ -import os, random -from logging import getLogger, StreamHandler, DEBUG -logger = getLogger(__name__) -handler = StreamHandler(); handler.setLevel(DEBUG) -logger.setLevel(DEBUG) -logger.addHandler(handler) -logger.propagate = False - - -def get_random_name(): - famous_painters = [ - "Leonardo", "DaVinci", - "Michelangelo", - "Pablo", "Picasso", - "Vincent", 
"VanGogh", - "Rembrandt", "VanRijn", - "Claude", "Monet", - "Salvador", "Dali", - "Jackson", "Pollock", - "Andy", "Warhol", - "Henri", "Matisse", - "Georgia", "Keeffe", - "Edvard", "Munch", - "Wassily", "Kandinsky", - "Gustav", "Klimt", - "Rene", "Magritte", - "Frida", "Kahlo", - "Edgar", "Degas", - "Johannes", "Vermeer", - "Paul", "Cezanne", - "Marc", "Chagall", - ] - - random_painter = random.choice(famous_painters) - - # 4桁の乱数を生成 - rand_num = random.randint(1000, 9999) - - return random_painter+str(rand_num) - - -def complete(work_dir): - work_dir = work_dir - # logger.debug(f"complete :", work_dir) - return work_dir - - -def clean(text_output): - # logger.debug(f"text_output : {text_output}") - - if text_output!="idle_state": - logger.info(f"clean up : {text_output}.zip") - os.remove(f"{text_output}.zip") - return "idle_state" - else: - logger.info(f"reset") - return "idle_state" - - - -def clean_by_name(text_output): - # logger.debug(f"text_output : {text_output}") - if text_output!="idle_state": - text_output, dir_name = text_output.split("+") - logger.info(f"clean up : {dir_name}.zip") - os.remove(f"{dir_name}.zip") - return "idle_state" - else: - logger.info(f"reset") - return "idle_state" diff --git a/spaces/tabeina/bingo1/src/lib/hooks/use-enter-submit.tsx b/spaces/tabeina/bingo1/src/lib/hooks/use-enter-submit.tsx deleted file mode 100644 index d66b2d3253baff164235d4ca791aae6d84721835..0000000000000000000000000000000000000000 --- a/spaces/tabeina/bingo1/src/lib/hooks/use-enter-submit.tsx +++ /dev/null @@ -1,23 +0,0 @@ -import { useRef, type RefObject } from 'react' - -export function useEnterSubmit(): { - formRef: RefObject - onKeyDown: (event: React.KeyboardEvent) => void -} { - const formRef = useRef(null) - - const handleKeyDown = ( - event: React.KeyboardEvent - ): void => { - if ( - event.key === 'Enter' && - !event.shiftKey && - !event.nativeEvent.isComposing - ) { - formRef.current?.requestSubmit() - event.preventDefault() - } - } - - return { formRef, onKeyDown: handleKeyDown } -} diff --git a/spaces/taesiri/DeticChatGPT/tools/fix_o365_path.py b/spaces/taesiri/DeticChatGPT/tools/fix_o365_path.py deleted file mode 100644 index 38716e56c465fc1a2b904a39dd3b9660eafba398..0000000000000000000000000000000000000000 --- a/spaces/taesiri/DeticChatGPT/tools/fix_o365_path.py +++ /dev/null @@ -1,28 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
-import argparse -import json -import path -import os - -if __name__ == '__main__': - parser = argparse.ArgumentParser() - parser.add_argument("--ann", default='datasets/objects365/annotations/zhiyuan_objv2_train_fixname.json') - parser.add_argument("--img_dir", default='datasets/objects365/train/') - args = parser.parse_args() - - print('Loading', args.ann) - data = json.load(open(args.ann, 'r')) - images = [] - count = 0 - for x in data['images']: - path = '{}/{}'.format(args.img_dir, x['file_name']) - if os.path.exists(path): - images.append(x) - else: - print(path) - count = count + 1 - print('Missing', count, 'images') - data['images'] = images - out_name = args.ann[:-5] + '_fixmiss.json' - print('Saving to', out_name) - json.dump(data, open(out_name, 'w')) diff --git a/spaces/taskswithcode/semantic_search/README.md b/spaces/taskswithcode/semantic_search/README.md deleted file mode 100644 index 6c62bda9eaa19f56436d472df2aaada19b7d9529..0000000000000000000000000000000000000000 --- a/spaces/taskswithcode/semantic_search/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Semantic Search -emoji: 👁 -colorFrom: purple -colorTo: green -sdk: streamlit -sdk_version: 1.10.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/terfces0erbo/CollegeProjectV2/Corel Products Keygen V3 3 Free Download.md b/spaces/terfces0erbo/CollegeProjectV2/Corel Products Keygen V3 3 Free Download.md deleted file mode 100644 index a7e2f73e4adaebf49059ea2bdd3e9bff4b2ba5be..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/Corel Products Keygen V3 3 Free Download.md +++ /dev/null @@ -1,6 +0,0 @@ -

          corel products keygen v3 3 free download


          Download Zip ✑ ✑ ✑ https://bytlly.com/2uGjNy



          - -The MicroStation family of products provide the power and versatility to precisely view, model, ... 0 crack CAD hack CAM free CAE download software bCAD 3. ... Corel DRAW Graphics Suite X7 Keys 3264 bit Full Crack Serial Dec 17, 2013. 4d29de3e1b
          -
          -
          -

          diff --git a/spaces/terfces0erbo/CollegeProjectV2/Desi Kattey 1 720p Download Movies.md b/spaces/terfces0erbo/CollegeProjectV2/Desi Kattey 1 720p Download Movies.md deleted file mode 100644 index a57ebfa95dbc1fefdd78c36d5dd2eee6de99f3c8..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/Desi Kattey 1 720p Download Movies.md +++ /dev/null @@ -1,6 +0,0 @@ -

          Desi Kattey 1 720p Download Movies


          Download Ziphttps://bytlly.com/2uGlWH



          -
          -Dhoom Dhoom 1 Full Movie Hd p Download.. Download Dhoom ... Desi Kattey (2014) Hindi Movie Official Trailer HD 720P ... Tak Dhoom HD ... 1fdad05405
          -
          -
          -

          diff --git a/spaces/terfces0erbo/CollegeProjectV2/Fundamentos De Enfermeria Hozier 9 Edicion Pdf Free.md b/spaces/terfces0erbo/CollegeProjectV2/Fundamentos De Enfermeria Hozier 9 Edicion Pdf Free.md deleted file mode 100644 index 0e9ce253a1b38a3238207e23316c1b189fb6edfc..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/Fundamentos De Enfermeria Hozier 9 Edicion Pdf Free.md +++ /dev/null @@ -1,10 +0,0 @@ -

          Fundamentos De Enfermeria Hozier 9 Edicion Pdf Free


          Download Ziphttps://bytlly.com/2uGiXE



          - -December 14, 2021 — . c/vqCqO0mU/52-fundamentos-de-enfermeria-hozier-9-edicion-pdf-free-download.pdf -In the process of learning the course "Fundamental Anatomy" (C.V.qCqO0mU / 52-... -December 8, 2018 - . c/vqCqO0mU/52-fundamentos-de-enfermeria-hozier-9-edicion-pdf-free-download.pdf -December 6, 2018 - . C.V.qCqO0mU/52-fundamentos-de-enfermeria-hozier-8-edicion-pdf-free-download.pdf -In the process of studying at the course "Foundation 8a78ff9644
          -
          -
          -

          diff --git a/spaces/terfces0erbo/CollegeProjectV2/Hindi-1080p-Hd-A-Flying-Jatt-Download.md b/spaces/terfces0erbo/CollegeProjectV2/Hindi-1080p-Hd-A-Flying-Jatt-Download.md deleted file mode 100644 index ea26dff7d2031266ac8d15475a662e81b3f59c56..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/Hindi-1080p-Hd-A-Flying-Jatt-Download.md +++ /dev/null @@ -1,96 +0,0 @@ -## Hindi 1080p Hd A Flying Jatt Download - - - - - - - - - -**DOWNLOAD ✸ [https://www.google.com/url?q=https%3A%2F%2Furlin.us%2F2tyCdl&sa=D&sntz=1&usg=AOvVaw3vYy9VqOtXykcxPL-Zg7qV](https://www.google.com/url?q=https%3A%2F%2Furlin.us%2F2tyCdl&sa=D&sntz=1&usg=AOvVaw3vYy9VqOtXykcxPL-Zg7qV)** - - - - - - - - - - - - - -# Hindi 1080p Hd A Flying Jatt Download: How to Watch the Superhero Action Comedy Movie Online - - - -If you are looking for a fun and entertaining movie to watch online, you might want to check out **Hindi 1080p Hd A Flying Jatt Download**. This is a 2016 Bollywood movie that features Tiger Shroff, Jacqueline Fernandez and Nathan Jones in the lead roles. The movie is about a reluctant superhero who fights crime and protects people from an evil villain. The movie is directed by Remo D'Souza and produced by Balaji Motion Pictures. - - - -In this article, we will tell you how to download and watch Hindi 1080p Hd A Flying Jatt online for free. We will also give you some information about the movie, such as the plot, the cast, the reviews and the ratings. So, without further ado, let's get started. - - - -## How to Download and Watch Hindi 1080p Hd A Flying Jatt Online for Free - - - -There are many websites that offer Hindi 1080p Hd A Flying Jatt download for free. However, not all of them are safe and legal. Some of them may contain viruses, malware, pop-ups or ads that can harm your device or compromise your privacy. Therefore, we recommend you to use only trusted and reliable sources to download and watch Hindi 1080p Hd A Flying Jatt online. - - - -One of the best sources to download and watch Hindi 1080p Hd A Flying Jatt online for free is PogoLinks. PogoLinks is a website that provides Bollywood and Hollywood movies and web series in various qualities and formats. You can download A Flying Jatt (2016) movie in full HD quality with Hindi audio, with a resolution of 480p, 720p, 720p HEVC and 1080p. You can also watch the movie online for free on PogoLinks. - - - -To download and watch Hindi 1080p Hd A Flying Jatt online for free on PogoLinks, follow these simple steps: - - - -1. Go to [https://pogolinks.art/movies/a-flying-jatt-2016/](https://pogolinks.art/movies/a-flying-jatt-2016/) - -2. Scroll down and click on the download link that matches your preferred quality and size. - -3. You will be redirected to a new page where you will see a captcha. Solve the captcha and click on "Continue". - -4. You will see another page with a countdown timer. Wait for the timer to end and click on "Get Link". - -5. You will be taken to the final page where you can see the direct Google Drive download link for Hindi 1080p Hd A Flying Jatt. Click on it and start downloading. - -6. After downloading, you can watch the movie offline using any video player that supports MKV format. - -7. If you want to watch the movie online, you can click on "Watch Now" on PogoLinks and choose from multiple sources. - - - -That's it! You have successfully downloaded and watched Hindi 1080p Hd A Flying Jatt online for free using PogoLinks. - - - -## What is Hindi 1080p Hd A Flying Jatt About? 
- - - -Hindi 1080p Hd A Flying Jatt is a superhero action comedy movie that tells the story of Jatt (Tiger Shroff), a timid school teacher who inherits superpowers from his father. He becomes a superhero named A Flying Jatt who fights crime and protects people from Raka (Nathan Jones), an evil industrialist who wants to destroy the environment. Along the way, he also falls in love with Kirti (Jacqueline Fernandez), a bubbly girl who works at his school. - - - -The movie is a mix of humor, romance, action and fantasy. It has some impressive stunts, special effects and dance sequences. It also has a message about environmental awareness and social responsibility. The movie is suitable for all ages and can be enjoyed by anyone who likes superhero movies. - - - -## Who are the Cast Members of Hindi 1080p Hd A Flying Jatt? - - - -The cast members of Hindi - - 145887f19f - - - - - diff --git a/spaces/texantech/01-3DModel-GradioDemo/files/readme.md b/spaces/texantech/01-3DModel-GradioDemo/files/readme.md deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/thinkcol/chainlit-example/chainlit.md b/spaces/thinkcol/chainlit-example/chainlit.md deleted file mode 100644 index 0f673dc0aed7dae5cfbc91a29940b6dbe270ac9d..0000000000000000000000000000000000000000 --- a/spaces/thinkcol/chainlit-example/chainlit.md +++ /dev/null @@ -1,14 +0,0 @@ -# Welcome to Chainlit! 🚀🤖 - -Hi there, Developer! 👋 We're excited to have you on board. Chainlit is a powerful tool designed to help you prototype, debug and share applications built on top of LLMs. - -## Useful Links 🔗 - -- **Documentation:** Get started with our comprehensive [Chainlit Documentation](https://docs.chainlit.io) 📚 -- **Discord Community:** Join our friendly [Chainlit Discord](https://discord.gg/ZThrUxbAYw) to ask questions, share your projects, and connect with other developers! 💬 - -We can't wait to see what you create with Chainlit! Happy coding! 💻😊 - -## Welcome screen - -To modify the welcome screen, edit the `chainlit.md` file at the root of your project. If you do not want a welcome screen, just leave this file empty. diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Easy Steps to Install GTA 4 Mod Menu PS3 No Jailbreak USB.md b/spaces/tialenAdioni/chat-gpt-api/logs/Easy Steps to Install GTA 4 Mod Menu PS3 No Jailbreak USB.md deleted file mode 100644 index 30bccfe44f76da1b4e8781fe7e1ecd4180ea2e6a..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/Easy Steps to Install GTA 4 Mod Menu PS3 No Jailbreak USB.md +++ /dev/null @@ -1,65 +0,0 @@ -
          -

          How to Install GTA 4 Mod Menu PS3 No Jailbreak USB

          -

          If you are a fan of Grand Theft Auto 4, you might be wondering how to install a mod menu on your PS3 without jailbreaking it. A mod menu is a custom interface that allows you to access various features and cheats in the game, such as changing your character, spawning vehicles, weapons, money, and more. In this article, we will show you how to install GTA 4 mod menu PS3 no jailbreak USB in a few simple steps.

          -

          What You Need

          -

          Before you start, you will need the following items:

          -

          gta 4 mod menu ps3 no jailbreak usb


          Download Filehttps://urlcod.com/2uK5ug



          -
            -
          • A PS3 console with GTA 4 installed.
          • -
          • A USB flash drive with at least 8 GB of free space.
          • -
          • A computer with an internet connection.
          • -
          • A GTA 4 mod menu file. You can download one from this website.
          • -
          -

          Step 1: Format Your USB Flash Drive

          -

          The first step is to format your USB flash drive to FAT32. This is the file system that the PS3 can recognize and read. To do this, follow these steps:

          -
            -
          1. Plug your USB flash drive into your computer.
          2. -
          3. Open File Explorer and right-click on your USB flash drive.
          4. -
          5. Select Format from the context menu.
          6. -
          7. Choose FAT32 as the file system and click Start.
          8. -
          9. Wait for the formatting process to complete and click OK.
          10. -
          -

          Step 2: Copy the Mod Menu File to Your USB Flash Drive

          -

          The next step is to copy the mod menu file that you downloaded to your USB flash drive. To do this, follow these steps:

          -
            -
          1. Open the folder where you saved the mod menu file.
          2. -
          3. Right-click on the file and select Copy from the context menu.
          4. -
          5. Open File Explorer and navigate to your USB flash drive.
          6. -
          7. Create a new folder and name it PS3.
          8. -
          9. Inside the PS3 folder, create another folder and name it SAVEDATA.
          10. -
          11. Paste the mod menu file inside the SAVEDATA folder.
          12. -
          -

          Step 3: Transfer the Mod Menu File to Your PS3

          -

          The final step is to transfer the mod menu file from your USB flash drive to your PS3. To do this, follow these steps:

          -
            -
          1. Eject your USB flash drive from your computer and plug it into your PS3.
          2. -
          3. Turn on your PS3 and go to Settings.
          4. -
          5. Select System Settings and then Backup Utility.
          6. -
          7. Select Restore and then Yes.
          8. -
          9. Select your USB flash drive and then Yes again.
          10. -
          11. Wait for the transfer process to complete and press X to restart your PS3.
          12. -
          -

          Congratulations!

          -

          You have successfully installed GTA 4 mod menu PS3 no jailbreak USB. To access the mod menu, launch GTA 4 and press L1 + R1 on your controller. You will see a list of options that you can choose from. Enjoy!

          - -

          Benefits of Using GTA 4 Mod Menu PS3 No Jailbreak USB

          -

          Using a mod menu for GTA 4 can enhance your gaming experience in many ways. Here are some of the benefits of using GTA 4 mod menu PS3 no jailbreak USB:

          -
            -
          • You can customize your character with different outfits, hairstyles, tattoos, and more.
          • -
          • You can spawn any vehicle you want, from cars, bikes, boats, helicopters, planes, and even tanks.
          • -
          • You can access various weapons and ammo, from pistols, rifles, shotguns, grenades, rockets, and more.
          • -
          • You can modify the game settings, such as weather, time, traffic, police, and more.
          • -
          • You can activate various cheats and hacks, such as god mode, infinite health, money, ammo, and more.
          • -
          -

          Precautions of Using GTA 4 Mod Menu PS3 No Jailbreak USB

          -

          While using a mod menu for GTA 4 can be fun and exciting, it also comes with some risks and drawbacks. Here are some of the precautions of using GTA 4 mod menu PS3 no jailbreak USB:

          -
            -
          • You may encounter glitches and bugs that can affect the game performance and stability.
          • -
          • You may get banned from online multiplayer mode if you use the mod menu in public sessions.
          • -
          • You may lose your game progress and save data if you overwrite them with the mod menu file.
          • -
          • You may damage your PS3 or USB flash drive if you use a corrupted or incompatible mod menu file.
          • -
          -

          Conclusion

          -

          GTA 4 is one of the most popular and iconic games of all time. With a mod menu, you can unlock new features and possibilities that can make the game more enjoyable and entertaining. However, you should also be aware of the potential risks and consequences of using a mod menu. In this article, we have shown you how to install GTA 4 mod menu PS3 no jailbreak USB in a few simple steps. We hope this guide was helpful and informative. Happy gaming!

          ddb901b051
          -
          -
          \ No newline at end of file diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/How to Download Excel Cracker _setup.exe for Free (And Why You Shouldnt).md b/spaces/tialenAdioni/chat-gpt-api/logs/How to Download Excel Cracker _setup.exe for Free (And Why You Shouldnt).md deleted file mode 100644 index 575480cff889d52e9c6a838e14d9e9cb0df0f28b..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/How to Download Excel Cracker _setup.exe for Free (And Why You Shouldnt).md +++ /dev/null @@ -1,21 +0,0 @@ -
          -

          Download Excel Cracker _setup.exe: Why You Should Avoid It and What to Do Instead

          -

        Excel Cracker is a software tool that claims to help you crack or recover the passwords of Excel files. It is supposed to work with any version of Excel and any type of password protection. However, some people might be tempted to look for a way to download Excel Cracker _setup.exe for free from the internet.

          -

          download excel cracker _setup.exe


          Download > https://urlcod.com/2uK9bO



          -

        However, downloading Excel Cracker _setup.exe for free is not a good idea, as it can cause many problems and expose you and your computer to risk. In this article, we will explain why you should avoid downloading Excel Cracker _setup.exe for free and what better alternatives you can use instead.

          -

          Why You Should Avoid Downloading Excel Cracker _setup.exe for Free

          -

          Downloading Excel Cracker _setup.exe for free is a form of software piracy that violates the terms and conditions of the developer and infringes on their intellectual property rights. By doing so, you are breaking the law and exposing yourself to potential legal consequences. Moreover, downloading Excel Cracker _setup.exe for free also has many disadvantages and risks, such as:

          -
            -
          • It is unsafe. Downloading Excel Cracker _setup.exe for free from unknown or untrusted sources exposes your computer to viruses, malware, spyware, ransomware, and other malicious programs that can harm your system, steal your data, or lock your files. You might also get unwanted ads, pop-ups, or redirects that can compromise your online security and privacy.
          • -
          • It is unreliable. Downloading Excel Cracker _setup.exe for free does not guarantee that you will get the latest or the best version of the software or that it will work properly. You might encounter errors, bugs, glitches, compatibility issues, or performance problems that can affect your productivity and efficiency. You might also miss out on important updates, patches, or features that the developer releases for the official version of the software.
          • -
          • It is unethical. Downloading Excel Cracker _setup.exe for free is unfair and disrespectful to the developer who has invested their time, effort, and money to produce a high-quality product that benefits millions of users around the world. By downloading Excel Cracker _setup.exe for free, you are depriving them of their rightful income and recognition.
          • -
          -

          What Are Some Better Alternatives to Downloading Excel Cracker _setup.exe for Free

          -

          Instead of downloading Excel Cracker _setup.exe for free, you should consider some better alternatives that are legal, safe, and reliable. Here are some options:

          -

          -
            -
          • Buy a license. The best way to use Excel Cracker is to buy a license from the developer or an authorized reseller. You can choose from different plans and packages that suit your needs and budget. You will get access to the full version of Excel Cracker and other products from the developer, as well as technical support and security features.
          • -
          • Use a free trial. If you want to try Excel Cracker before buying a license, you can use a free trial that the developer offers for new users. You can sign up for an account and get a 30-day trial that includes Excel Cracker and other products from the developer. You can use all the features and functions of Excel Cracker during the trial period and decide if you want to continue using it after the trial ends.
          • -
• Use a free alternative. If you don't want to pay for Excel Cracker or use a trial version, you can use a free alternative that has similar features and functions. There are many online and offline tools that you can use for free to crack or recover passwords of Excel files, such as PassFab for Excel.

            ddb901b051
            -
            -
            \ No newline at end of file diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/JP Morgan Microsoft Launch Quorum Blockchain Service.md b/spaces/tialenAdioni/chat-gpt-api/logs/JP Morgan Microsoft Launch Quorum Blockchain Service.md deleted file mode 100644 index 1da50ee188d1a7ecd41f4489d996959398c22386..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/JP Morgan Microsoft Launch Quorum Blockchain Service.md +++ /dev/null @@ -1,22 +0,0 @@ - -

            JP Morgan, Microsoft Launch Quorum Blockchain Service on Azure Cloud

            -

            JP Morgan and Microsoft have announced a strategic partnership to accelerate the adoption of enterprise blockchain. Through this partnership, Quorum, an enterprise-variant of the Ethereum blockchain developed by JP Morgan, will become the first distributed ledger platform available through Azure Blockchain Service, enabling JP Morgan and Microsoft customers to build and scale blockchain networks in the cloud.

            -

            Quorum is a fully integrated, Ethereum-based blockchain platform and suite of applications that aims to solve complex business and societal problems via blockchain solutions. Quorum will leverage Azure's unique strengths such as lower costs, simplified deployment and built-in governance to provide enterprise clients with a robust and secure blockchain infrastructure.
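Because Quorum keeps the standard Ethereum JSON-RPC interface, ordinary Ethereum tooling can talk to a Quorum node. As a rough sketch (the endpoint URL below is a placeholder rather than a real Azure Blockchain Service address, and a recent web3.py with the snake_case API is assumed), a client could check basic chain state like this:

```python
from web3 import Web3

# Placeholder endpoint -- substitute the JSON-RPC URL of your own Quorum or
# Azure Blockchain Service transaction node; this address is not real.
RPC_URL = "https://example-quorum-node.example.com:8545"

w3 = Web3(Web3.HTTPProvider(RPC_URL))

if w3.is_connected():
    # Basic reads that work against any Ethereum-compatible JSON-RPC endpoint
    print("Chain id:", w3.eth.chain_id)
    print("Latest block number:", w3.eth.block_number)
else:
    print("Could not reach the node at", RPC_URL)
```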

            -

            JP Morgan, Microsoft Launch Quorum Blockchain Service


            DOWNLOADhttps://urlcod.com/2uK3Wn



            -

            Microsoft will also provide engineering, consulting and go-to-market support for Quorum, as well as address common enterprise, independent software vendor and developer needs for building and deploying blockchain applications on Quorum in the cloud. The partnership will also enable Quorum to power JP Morgan and Microsoft blockchain programs and first-party apps, such as the Interbank Information Network, JPM Coin and Microsoft’s Xbox royalty payment process, among others.

            -

            "We are incredibly proud of the success Quorum has had over the last four years, as organizations around the world use Quorum to solve complex business and societal problems via blockchain solutions," said Umar Farooq, Global Head of Blockchain at JP Morgan. "We are delighted to partner alongside Microsoft as we continue to strengthen Quorum and expand capabilities and services on the platform. Azure will bring unique strengths to enterprise clients using Quorum."

            -

            "As digital transformation extends beyond the walls of an individual organization, companies need solutions that enable them to securely share their business processes and data in order to drive imaginative new business models and reinvent industries. We’re thrilled to partner with a leader like JP Morgan to establish a foundation on which enterprises and partners can rapidly build and scale blockchain networks," said Peggy Johnson, Executive Vice President of Business Development at Microsoft. "Together, we’re taking a truly transformative technology like Quorum and making it available through the Azure platform to accelerate innovation for our customers."

            -

            -

            The partnership between JP Morgan and Microsoft is expected to boost the adoption of enterprise blockchain and create new opportunities for innovation and collaboration across various industries.

            - -

            Quorum Blockchain Use Cases

            -

            Quorum blockchain has many potential use cases across various industries and sectors. Some of the prominent examples of Quorum blockchain use cases are as follows:

            -
              -
            • Banking and Finance: Quorum blockchain is used by many banks and financial institutions to streamline their processes and transactions. For example, JP Morgan has launched JPM Coin, a digital currency that runs on Quorum blockchain and enables instant settlement of payments between institutional clients. Another example is Covantis, a global initiative by leading agribusinesses to modernize commodity trade finance using Quorum blockchain.
            • -
            • Supply Chain and Logistics: Quorum blockchain is used to improve the efficiency and transparency of supply chain and logistics operations. For example, VAKT is a platform that leverages Quorum blockchain to digitize the post-trade process for energy commodities. Another example is SUKU, a platform that uses Quorum blockchain to provide end-to-end supply chain visibility and traceability for various products.
            • -
            • Healthcare: Quorum blockchain is used to enhance the security and interoperability of healthcare data and systems. For example, Prescryptive is a platform that uses Quorum blockchain to empower consumers with their healthcare choices and data. Another example is Emergent Technology, a company that uses Quorum blockchain to digitize gold as a payments rail for healthcare services.
            • -
            • Hospitality and Travel: Quorum blockchain is used to facilitate borderless and secure payments for hospitality and travel services. It is also used to reward customers for their loyalty and referrals. For example, NexChain is a platform that uses Quorum blockchain to create industry-wide databases for home rental providers. Another example is Aura, a consortium by LVMH, ConsenSys, and Microsoft that uses Quorum blockchain to provide a unique digital identity for luxury products.
            • -
            -

            These are some of the use cases of Quorum blockchain that showcase its versatility and potential for various industries. Quorum blockchain is constantly evolving and expanding its capabilities and services to meet the diverse needs of enterprises and developers.

            7b8c122e87
            -
            -
            \ No newline at end of file diff --git a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/CSR Classics APK OBB Race Restore and Customize Your Dream Cars.md b/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/CSR Classics APK OBB Race Restore and Customize Your Dream Cars.md deleted file mode 100644 index b69f555a4939aa611b473cfd89dd854081e0aa46..0000000000000000000000000000000000000000 --- a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/CSR Classics APK OBB Race Restore and Customize Your Dream Cars.md +++ /dev/null @@ -1,135 +0,0 @@ - -

            CSR Classics APK OBB: A Guide to Download and Play the Ultimate Drag Racing Game

            -

            If you are a fan of drag racing games, you might have heard of CSR Classics, a popular game that lets you race with classic cars from famous manufacturers. But did you know that you can download and play CSR Classics APK OBB, a modified version of the game that gives you unlimited money and access to all the cars and upgrades? In this article, we will tell you everything you need to know about CSR Classics APK OBB, including what it is, how to download and install it, and how to play it. Let's get started!

            -

            csr classics apk obb


            Download Filehttps://bltlly.com/2uOjMz



            -

            What is CSR Classics?

            -

            CSR Classics is a drag racing game developed by NaturalMotionGames Ltd. It was released in 2013 for iOS and Android devices. The game features over 50 classic cars from brands like Ford, Chevrolet, BMW, Mercedes, Jaguar, Dodge, and more. You can customize your cars with different paint jobs, decals, wheels, engines, transmissions, and nitrous. You can also compete with other players online or offline in various modes like Crew Battles, Ladder Races, Regulation Races, and Daily Battles.

            -

            The gameplay of CSR Classics

            -

            The gameplay of CSR Classics is simple but addictive. You have to tap the screen to start the engine, then swipe up to shift gears at the right time. You can also use nitrous to boost your speed when needed. The game has realistic physics and graphics that make you feel like you are driving a real car. You can also watch replays of your races and share them with your friends on social media.

            -

            The features of CSR Classics

            -

            Some of the features of CSR Classics are:

            -
              -
            • You can race with over 50 classic cars from the 50s to the 90s.
            • -
            • You can customize your cars with various options and upgrades.
            • -
            • You can challenge other players online or offline in different modes.
            • -
            • You can join or create a crew and compete with other crews for rewards and glory.
            • -
            • You can enjoy stunning graphics and sound effects that immerse you in the game.
            • -
            -

            What is APK OBB?

            -

            APK OBB is a combination of two types of files that are used to install Android games and apps. APK stands for Android Package Kit, which is the file format that contains the code, resources, and metadata of an app. OBB stands for Opaque Binary Blob, which is the file format that contains additional data like graphics, sounds, and videos of an app.

            -

            The difference between APK and OBB files

            -

            The main difference between APK and OBB files is that APK files are smaller and contain only the essential information of an app, while OBB files are larger and contain more data that enhance the app's performance and quality. For example, an APK file might be 50 MB in size, while an OBB file might be 500 MB in size.

            -

            -

            The benefits of using APK OBB files

            -

            Some of the benefits of using APK OBB files are:

            -
              -
            • You can download and install games and apps that are not available in your region or device.
            • -
            • You can download and install modified versions of games and apps that give you unlimited money, coins, gems, or other resources.
            • -
• You can download and install games and apps that have better graphics, sounds, and features than the original versions.

              How to download and install CSR Classics APK OBB?

              -

              If you want to download and install CSR Classics APK OBB, you need to follow some simple steps. But before that, you need to make sure that your device meets the requirements for the game.

              -

              The requirements for CSR Classics APK OBB

              -

              According to the official Google Play Store page of CSR Classics, the requirements for the game are:

              -
                -
              • An Android device running on version 4.0 or higher.
              • -
              • At least 1 GB of RAM and 2 GB of free storage space.
              • -
              • A stable internet connection for online features.
              • -
              -

              The steps to download and install CSR Classics APK OBB

              -

              Once you have checked the requirements, you can proceed with the following steps:

              -
                -
              1. Go to a trusted website that provides CSR Classics APK OBB files, such as [APKPure] or [APKMirror].
              2. -
              3. Download the CSR Classics APK file and the CSR Classics OBB file to your device.
              4. -
              5. Enable the installation of apps from unknown sources on your device. You can do this by going to Settings > Security > Unknown Sources and toggling it on.
              6. -
              7. Locate the CSR Classics APK file on your device and tap on it to install it.
              8. -
              9. Do not open the game yet. Instead, locate the CSR Classics OBB file on your device and extract it using a file manager app like [ES File Explorer] or [ZArchiver].
              10. -
              11. Copy the extracted folder named "com.naturalmotion.csrclassics" and paste it into the Android/OBB directory on your device's internal storage.
              12. -
              13. Now you can open the game and enjoy CSR Classics APK OBB with unlimited money and all cars unlocked.
              14. -
              -
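If you would rather prepare the files on a PC and copy them to the phone over USB instead of using an on-device file manager, the same copy step can be scripted. This is only a sketch: it assumes adb is installed on the PC, USB debugging is enabled on the phone, and the extracted folder sits next to the script.

```python
import subprocess
from pathlib import Path

# Folder extracted from the OBB archive (name taken from the steps above)
local_obb = Path("com.naturalmotion.csrclassics")

# Standard location for expansion (OBB) files on the device's shared storage
remote_obb = "/sdcard/Android/obb/"

if not local_obb.is_dir():
    raise SystemExit(f"Extracted OBB folder not found: {local_obb}")

# Pushes the whole folder; requires adb on the PATH and USB debugging enabled
subprocess.run(["adb", "push", str(local_obb), remote_obb], check=True)
print("OBB folder copied to", remote_obb + local_obb.name)
```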

              How to play CSR Classics APK OBB?

              -

              Playing CSR Classics APK OBB is not much different from playing the original version of the game. You can still enjoy the same gameplay, features, and modes as before. However, there are some tips and tricks that can help you improve your performance and have more fun with the game.

              -

              The tips and tricks for CSR Classics APK OBB

              -

              Some of the tips and tricks for CSR Classics APK OBB are:

              -
                -
              • Use the unlimited money wisely. You can buy any car you want, but you should also upgrade them to increase their performance and value. You can also buy more nitrous to boost your speed during races.
              • -
              • Choose the right car for each race. Different cars have different strengths and weaknesses, such as acceleration, top speed, handling, and weight. You should pick a car that suits the track and the opponent you are facing.
              • -
              • Master the timing of shifting gears and using nitrous. Shifting gears at the right time can give you an edge over your rivals, while using nitrous at the wrong time can waste your precious resource. You should practice your timing skills in the Regulation Races mode or in the Test Drive mode.
              • -
              • Challenge other players online or offline. You can race against other players from around the world in the Online Multiplayer mode or against your friends in the Local Multiplayer mode. You can also join or create a crew and compete with other crews for rewards and glory.
              • -
              • Complete the daily battles and events. You can earn extra money, gold, and cars by completing the daily battles and events that are available in the game. You can also unlock new cars and decals by completing certain achievements.
              • -
              -

              The best cars and upgrades for CSR Classics APK OBB

              -

              The best cars and upgrades for CSR Classics APK OBB depend on your personal preference and play style. However, some of the most popular and powerful cars in the game are:

| Car | Tier | Price | Description |
| --- | --- | --- | --- |
| Ford Mustang Boss 302 | Tier 1 | $95,000 | A classic muscle car that has high acceleration and top speed, but low handling and weight. |
| Chevrolet Camaro SS | Tier 2 | $190,000 | A modern muscle car that has balanced performance and stats, but requires more upgrades to reach its full potential. |
| Dodge Charger R/T | Tier 3 | $380,000 | A legendary muscle car that has excellent acceleration and top speed, but poor handling and weight. |
| BMW M3 GTR | Tier 4 | $760,000 | A rare sports car that has superb handling and weight, but moderate acceleration and top speed. |
| Lamborghini Miura | Tier 5 | $1,520,000 | A legendary supercar that has outstanding performance and stats in all aspects, but is very expensive and hard to obtain. |

-
              -

              As for the upgrades, you should focus on improving the engine, transmission, and nitrous of your cars, as they have the most impact on your speed and acceleration. You should also upgrade the tires and body of your cars, as they affect your handling and weight. You can ignore the intake and exhaust upgrades, as they have little effect on your performance.

              -

              Conclusion

              -

              CSR Classics APK OBB is a great way to enjoy the ultimate drag racing game with unlimited money and all cars unlocked. You can download and install it easily by following the steps we have provided in this article. You can also play it with more fun and success by following the tips and tricks we have shared. CSR Classics APK OBB is a game that will keep you entertained for hours with its realistic graphics, sound effects, and gameplay. If you are a fan of drag racing games, you should definitely give it a try!

              -

              FAQs

              -

              Here are some of the frequently asked questions about CSR Classics APK OBB:

              -
                -
              • Q: Is CSR Classics APK OBB safe to use?
              • -
              • A: Yes, CSR Classics APK OBB is safe to use as long as you download it from a trusted website and scan it with an antivirus app before installing it.
              • -
              • Q: Is CSR Classics APK OBB compatible with my device?
              • -
              • A: CSR Classics APK OBB is compatible with most Android devices running on version 4.0 or higher. However, some devices may experience lag or crashes due to low RAM or storage space.
              • -
              • Q: Will CSR Classics APK OBB affect my progress in the original game?
              • -
              • A: No, CSR Classics APK OBB will not affect your progress in the original game, as they are installed separately and use different data files. You can play both versions of the game without any interference.
              • -
              • Q: Can I update CSR Classics APK OBB to the latest version?
              • -
              • A: Yes, you can update CSR Classics APK OBB to the latest version by downloading and installing the new APK file and OBB file from the same website you got them from. However, you may lose your progress and money if you do so.
              • -
              • Q: Can I play CSR Classics APK OBB offline?
              • -
              • A: Yes, you can play CSR Classics APK OBB offline without an internet connection. However, you will not be able to access some of the online features like multiplayer mode, crew battles, daily battles, and events.
              • -

              197e85843d
              -
              -
              \ No newline at end of file diff --git a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Download Attack on Titan Simulator APK and Fight the Titans in 3D.md b/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Download Attack on Titan Simulator APK and Fight the Titans in 3D.md deleted file mode 100644 index 555b2538fe16de41ba7bc8bba663f135e00d57d7..0000000000000000000000000000000000000000 --- a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Download Attack on Titan Simulator APK and Fight the Titans in 3D.md +++ /dev/null @@ -1,104 +0,0 @@ - -

              Attack on Titan Simulator APK: A Fan-Made Game That Lets You Slay Titans

              -

              If you are a fan of Attack on Titan, you might have wondered how it feels to use the 3D Maneuvering Gear and slash at the napes of the monstrous Titans. Well, wonder no more, because there is a fan-made game that lets you do just that. It's called Attack on Titan - Fan Game APK, and it's available for free for Android devices.

              -

              attack on titan simulator apk


              Download File ⚙⚙⚙ https://bltlly.com/2uOk9r



              -

              In this article, we will tell you everything you need to know about this game, how to play it, and what makes it different from other games based on the same franchise. We will also give you our honest review of the game and answer some frequently asked questions. So, let's get started!

              -

              What is Attack on Titan - Fan Game APK?

              -

              Attack on Titan - Fan Game APK is a free action game developed by Swammy for Android devices. It is not an official game licensed by the creators of Attack on Titan, but a fan-made project that aims to recreate the feeling of being in the anime.

              -

              The game puts you in a first-person perspective, similar to other action games like Sniper 3D Assassin, Combat Master Mobile or Clash Squad Battlegrounds Max. You can choose from ten playable characters, including Eren Yeager, Mikasa Ackerman, Armin Arlert, Levi Ackerman, and more. Each character has their own stats and skills that affect their performance in battle.

              -

              -

              The game follows the main story of Attack on Titan, from the first season to the second season of the anime. You will face various types of Titans, from the normal ones to the abnormal ones, as well as the special ones like the Colossal Titan and the Armored Titan. You will also encounter familiar scenes and locations from the anime, such as Trost District, Shiganshina District, and Forest of Giant Trees.

              -

              How to play Attack on Titan - Fan Game APK?

              -

              The gameplay of Attack on Titan - Fan Game APK is simple but challenging. You use your 3D Maneuvering Gear to move around the map, avoiding obstacles and enemies. You can also use your boost to gain speed and altitude. To attack a Titan, you need to lock onto one of its body parts, such as its arms, legs, or neck. Then, you need to press the attack button at the right time to deal damage. If you hit the neck, you can kill the Titan instantly. If you hit other parts, you can weaken or disable them.

              -

              The game has two modes: Story Mode and Free Mode. In Story Mode, you follow the plot of the anime and complete missions with specific objectives and conditions. In Free Mode, you can choose any map and character and fight against endless waves of Titans. You can also customize your difficulty level and settings.

              -

              The game also has some features that make it more immersive and fun. For example:

              -
                -
              • You can upgrade your equipment with materials that you collect from killing Titans or completing missions. You can improve your blades, gas tanks, and gear to increase your damage, speed, and durability.
              • -
              • You can recruit teammates to assist you in combat. You can find them in green smoke signals on the map or complete side missions to unlock them. They will follow your commands and help you fight against Titans.
              • -
              • You can activate the Decisive Battle Signal when your meter is full. This will boost your stats and skills for a limited time and allow you to perform special attacks.
              • -
              • You can use items to replenish your health, blades, or gas. You can find them in crates or get them from Logisticians who are marked by backpack icons on the map.
              • -
              -

              What are the pros and cons of Attack on Titan - Fan Game APK?

              -

              As a fan-made game, Attack on Titan - Fan Game APK has its strengths and weaknesses. Here are some of them:

| Pros | Cons |
| --- | --- |
| It is free to download and play. | It is not an official game and may not be faithful to the source material. |
| It has a first-person perspective that makes the combat more immersive and thrilling. | It has some bugs and glitches that may affect the gameplay. |
| It has multiplayer support that allows you to play with your friends online. | It has limited content and features compared to other games based on Attack on Titan. |
| It has good graphics and animations that capture the atmosphere of the anime. | It has high system requirements and may not run smoothly on low-end devices. |
| It has customizable settings and difficulty levels that suit your preferences. | It has no voice acting or sound effects that enhance the experience. |
              -

              Overall, Attack on Titan - Fan Game APK is a decent game that offers a lot of fun and excitement for fans of the anime and action games. However, it is not perfect and may not satisfy everyone's expectations. It is still a work in progress and may improve in the future with more updates and enhancements.

              -

              Conclusion

              -

              Attack on Titan - Fan Game APK is a fan-made game that lets you experience the thrill of fighting against the giant Titans from the popular anime and manga series, Attack on Titan. It is a free action game for Android devices that puts you in a first-person perspective, similar to other action games like Sniper 3D Assassin, Combat Master Mobile or Clash Squad Battlegrounds Max. You can choose from ten playable characters, each with their own stats and skills, and follow the main story of the anime from the first season to the second season. You can also play in Free Mode, where you can fight against endless waves of Titans on any map. You can also upgrade your equipment, recruit teammates, activate special attacks, and use items to aid you in battle. The game also has multiplayer support, where you can play with your friends online in co-op mode. The game has good graphics and animations, but also some bugs and glitches. It is not an official game and may not be faithful to the source material. It is still a work in progress and may get better with more updates and enhancements.

              -

              If you are a fan of Attack on Titan or action games in general, you might want to give this game a try. It is free to download and play, and it offers a lot of fun and excitement. However, if you are looking for a more polished and complete game based on Attack on Titan, you might want to look for other options. There are several official games based on the franchise, such as Attack on Titan 2: Final Battle, Attack on Titan: Assault, or Attack on Titan: Tactics. These games have more content, features, quality, and fidelity than Attack on Titan - Fan Game APK. They are also licensed by the creators of Attack on Titan, so they are more authentic and accurate to the source material.

              -

              We hope this article has helped you learn more about Attack on Titan - Fan Game APK. If you have any questions or comments, feel free to leave them below. Thank you for reading!

              -

              FAQs

              -

              How do I download Attack on Titan - Fan Game APK?

              -

              You can download Attack on Titan - Fan Game APK from various websites that offer free APK files for Android devices. One of them is FileHippo, where you can find the latest version of the game. You can also follow the developer Swammy on his social media accounts, where he posts updates and links to download the game.

              -

              Is Attack on Titan - Fan Game APK safe to install?

              -

              As with any APK file downloaded from third-party sources, there is always a risk of malware or viruses. Therefore, you should always scan the file before installing it on your device. You should also make sure that you have enough storage space and battery power to run the game smoothly. You should also enable unknown sources in your device settings to allow the installation of APK files from outside the Google Play Store.

              -

              Can I play Attack on Titan - Fan Game APK offline?

              -

              Yes, you can play Attack on Titan - Fan Game APK offline, as long as you have downloaded the game and installed it on your device. You can play in Story Mode or Free Mode without an internet connection. However, you will not be able to access the multiplayer mode or the online leaderboards. You will also not be able to receive any updates or bug fixes from the developer.

              -

              How do I play Attack on Titan - Fan Game APK with my friends?

              -

              If you want to play Attack on Titan - Fan Game APK with your friends, you need to have an internet connection and join the multiplayer mode. You can either create a room or join an existing one. You can invite your friends by sharing the room code or scanning the QR code. You can also join a random room and play with strangers. You can chat with your teammates and coordinate your strategies. You can also compete with other players and rank up on the online leaderboards.

              -

              What are the minimum requirements to play Attack on Titan - Fan Game APK?

              -

              The minimum requirements to play Attack on Titan - Fan Game APK are as follows:

              -
                -
              • Android version: 4.4 or higher
              • -
              • RAM: 2 GB or more
              • -
              • Storage: 500 MB or more
              • -
              • Processor: Quad-core or higher
              • -
              • Graphics: Adreno 505 or higher
              • -
              -

              If your device does not meet these requirements, you may experience lag, crashes, or errors while playing the game. You may also not be able to download or install the game properly.

              -

              How do I contact the developer of Attack on Titan - Fan Game APK?

              -

              If you have any feedback, suggestions, or issues regarding Attack on Titan - Fan Game APK, you can contact the developer Swammy through his social media accounts. You can also leave a comment on his YouTube videos or send him an email at swammygames@gmail.com. He is very responsive and friendly, and he appreciates any support from his fans.

              -

              Is there a PC version of Attack on Titan - Fan Game APK?

              -

              No, there is no PC version of Attack on Titan - Fan Game APK. The game is only available for Android devices. However, you can use an Android emulator to play the game on your PC. An Android emulator is a software that simulates an Android device on your PC, allowing you to run Android apps and games. Some of the popular Android emulators are BlueStacks, NoxPlayer, and LDPlayer. You can download any of these emulators and install Attack on Titan - Fan Game APK on them. However, keep in mind that this may affect the performance and quality of the game, and it may not be compatible with some features or settings.

              401be4b1e0
              -
              -
              \ No newline at end of file diff --git a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Download Clash of Clans Terbaru Mod Apk 15.292.17 - Get Unlimited Gems Gold and Elixir.md b/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Download Clash of Clans Terbaru Mod Apk 15.292.17 - Get Unlimited Gems Gold and Elixir.md deleted file mode 100644 index 7c14b371efc7d5cdc00df34524d342e5f92c68f1..0000000000000000000000000000000000000000 --- a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Download Clash of Clans Terbaru Mod Apk 15.292.17 - Get Unlimited Gems Gold and Elixir.md +++ /dev/null @@ -1,87 +0,0 @@ - -

              Download Clash of Clans Terbaru Mod Apk

              -

              Are you a fan of strategy games? Do you love building your own village, training your army, and fighting against other players? If yes, then you must have heard of Clash of Clans, one of the most popular mobile games in the world. But did you know that you can enjoy the game even more with Clash of Clans mod apk? In this article, we will tell you everything you need to know about this amazing modded version of the game, and how you can download and install it on your device. So, let's get started!

              -

              download clash of clans terbaru mod apk


              Download https://bltlly.com/2uOpFd



              -

              What is Clash of Clans?

              -

              Clash of Clans is a freemium strategy game developed by Supercell, a Finnish game company. The game was released in 2012 for iOS and in 2013 for Android devices. Since then, it has become one of the most downloaded and played games in the world, with over 500 million downloads on Google Play Store alone.

              -

              The game is set in a fantasy world where you are the chief of your own village. You have to build and upgrade various buildings, such as town hall, barracks, gold mines, elixir collectors, defenses, walls, and more. You also have to train and upgrade different types of troops, such as barbarians, archers, giants, wizards, dragons, and more. You can use these troops to attack other players' villages and loot their resources, or to defend your own village from enemy attacks.

              -

              The game also has a multiplayer mode where you can join or create a clan with other players. You can chat with your clan members, donate and receive troops, and participate in clan wars. Clan wars are special events where two clans compete against each other for rewards and glory. You can also take part in seasonal events, special challenges, and global tournaments to earn more rewards and trophies.

              -

              Why download Clash of Clans mod apk?

              -

              Clash of Clans is undoubtedly a fun and addictive game, but it also has some drawbacks. One of them is that it requires a lot of time and patience to progress in the game. You have to wait for long hours or even days for your buildings and troops to be upgraded. You also have to collect enough resources, such as gold, elixir, and gems, to afford these upgrades. And if you run out of resources or get attacked by stronger players, you might lose your progress and motivation.

              -

              -

              That's why many players look for ways to hack or cheat the game. And one of the best ways to do that is by using Clash of Clans mod apk. This is a modified version of the original game that gives you access to unlimited resources, unlimited troops, customization options, anti-ban protection, and more. With Clash of Clans mod apk, you can enjoy the game without any limitations or restrictions.

              -

              Unlimited resources

              -

              One of the main features of Clash of Clans mod apk is that it gives you unlimited resources. You don't have to worry about running out of gold, elixir, or gems anymore. You can use these resources to upgrade your buildings and troops as much as you want. You can also buy anything from the shop without spending real money.

              -

              Unlimited troops

              -

              Another feature of Clash of Clans mod apk is that it gives you unlimited troops. You don't have to wait for your barracks or army camps to produce or house your troops anymore. You can train any number and type of troops you want instantly. You can also use these troops to attack other players' villages and loot their resources, or to defend your own village from enemy attacks. You can also use these troops to participate in clan wars and events.

              -

              Customization options

              -

              Another feature of Clash of Clans mod apk is that it gives you customization options. You can change the appearance and behavior of your buildings and troops according to your preferences. You can also create your own custom heroes and spells with unique abilities and effects. You can also modify the game settings, such as the difficulty level, the speed, the graphics, and more.

              -

              Anti-ban protection

              -

              Another feature of Clash of Clans mod apk is that it gives you anti-ban protection. You don't have to worry about getting banned or suspended by Supercell for using the modded version of the game. The mod apk has a built-in anti-ban system that prevents the game servers from detecting any suspicious activity or modification. You can play the game safely and securely without any risk.

              -

              How to download and install Clash of Clans mod apk?

              -

              Now that you know the benefits of using Clash of Clans mod apk, you might be wondering how to download and install it on your device. Well, don't worry, because we have got you covered. Just follow these simple steps and you will be able to enjoy the game in no time.

              -

              The steps to follow for a successful installation

              -

              Download the mod apk file from a trusted source

              -

              The first step is to download the mod apk file from a trusted source. There are many websites that claim to offer the latest version of Clash of Clans mod apk, but not all of them are reliable or safe. Some of them might contain viruses, malware, or outdated files that can harm your device or compromise your privacy. Therefore, we recommend you to download the mod apk file from our website, which is 100% safe and verified. You can find the download link at the end of this article.
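Whatever site you download from, one simple extra safeguard is to compare the file's checksum against the one the site publishes (when it publishes one) before installing. The sketch below is a generic example; the file name and expected hash are placeholders you would replace with your own values.

```python
import hashlib
from pathlib import Path

# Placeholder values -- use the real file name and the checksum published
# by the site you downloaded the mod apk from.
apk_path = Path("clash-of-clans-mod.apk")
expected_sha256 = "replace-with-the-published-checksum"

digest = hashlib.sha256()
with apk_path.open("rb") as f:
    # Read in chunks so large APK/OBB files are not loaded fully into memory
    for chunk in iter(lambda: f.read(1024 * 1024), b""):
        digest.update(chunk)

actual = digest.hexdigest()
print("SHA-256:", actual)
print("Match:", actual == expected_sha256.lower())
```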

              -

              Enable unknown sources on your device settings

              -

              The second step is to enable unknown sources on your device settings. This is necessary because Clash of Clans mod apk is not available on the official app stores, such as Google Play Store or Apple App Store. Therefore, you need to allow your device to install apps from unknown sources, which are not verified by Google or Apple. To do this, go to your device settings, then security, then unknown sources, and toggle it on.

              -

              Install the mod apk file and launch the game

              -

              The third and final step is to install the mod apk file and launch the game. To do this, locate the downloaded mod apk file on your device storage, then tap on it to start the installation process. Follow the instructions on the screen and wait for a few minutes until the installation is complete. Once done, you can launch the game from your app drawer or home screen and enjoy it.

              -

              Conclusion

              -

              Clash of Clans is a fun and addictive strategy game that millions of people love to play. However, if you want to experience the game in a new and exciting way, you should try Clash of Clans mod apk. This is a modified version of the game that gives you unlimited resources, unlimited troops, customization options, anti-ban protection, and more. With Clash of Clans mod apk, you can build your dream village, train your ultimate army, and conquer other players' villages without any hassle.

              -

              If you are interested in downloading and installing Clash of Clans mod apk on your device, just follow the steps we have mentioned above and you will be good to go. Remember to download the mod apk file from our website only, as we guarantee its safety and quality. Also, make sure to enable unknown sources on your device settings before installing the mod apk file.

              -

              We hope you found this article helpful and informative. If you have any questions or feedback, feel free to leave a comment below. And if you liked this article, don't forget to share it with your friends and family who might also enjoy playing Clash of Clans mod apk. Thank you for reading!

              -

              Frequently Asked Questions

              -

              Here are some of the most common questions that people ask about Clash of Clans mod apk:

              -

              Is Clash of Clans mod apk safe to use?

              -

              Yes, Clash of Clans mod apk is safe to use as long as you download it from our website only. We have tested and verified the mod apk file before uploading it on our website. It does not contain any viruses, malware, or outdated files that can harm your device or compromise your privacy.

              -

Is Clash of Clans mod apk compatible with my device?

-

Yes, Clash of Clans mod apk is compatible with most Android devices that have Android 4.1 or higher. However, some devices might not support the mod apk file due to different specifications or configurations. If you encounter any problems while installing or playing the mod apk file, you can try to clear the cache and data of the game, or reinstall the mod apk file.

              -

              Will I lose my progress if I use Clash of Clans mod apk?

              -

              No, you will not lose your progress if you use Clash of Clans mod apk. The mod apk file does not affect your original game data or account. You can switch between the original game and the modded game anytime you want. However, you should not use the same account for both games, as it might cause some issues or conflicts. You should create a separate account for the modded game and use a different email address.

              -

              Can I play online with other players using Clash of Clans mod apk?

              -

              Yes, you can play online with other players using Clash of Clans mod apk. The mod apk file does not prevent you from accessing the multiplayer mode of the game. You can join or create a clan with other players, chat with them, donate and receive troops, and participate in clan wars and events. However, you should be careful not to abuse the mod apk features, as it might ruin the game balance and fairness for other players. You should also avoid playing with players who are using the original game, as they might report you for cheating or hacking.

              -

              How can I update Clash of Clans mod apk?

              -

              To update Clash of Clans mod apk, you need to download the latest version of the mod apk file from our website and install it on your device. You don't need to uninstall the previous version of the mod apk file, as it will be overwritten by the new version. However, you should always backup your game data before updating the mod apk file, in case something goes wrong or you want to revert to the previous version.

              -

              Where can I find more information about Clash of Clans mod apk?

              -

              If you want to find more information about Clash of Clans mod apk, you can visit our website or contact us through our email address. We will be happy to answer any questions or queries you might have about the mod apk file. You can also check out our blog posts and reviews about the mod apk file and other related topics.

              401be4b1e0
              -
              -
              \ No newline at end of file diff --git a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/urllib3/util/wait.py b/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/urllib3/util/wait.py deleted file mode 100644 index 21b4590b3dc9b58902b0d47164b9023e54a85ef8..0000000000000000000000000000000000000000 --- a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/urllib3/util/wait.py +++ /dev/null @@ -1,152 +0,0 @@ -import errno -import select -import sys -from functools import partial - -try: - from time import monotonic -except ImportError: - from time import time as monotonic - -__all__ = ["NoWayToWaitForSocketError", "wait_for_read", "wait_for_write"] - - -class NoWayToWaitForSocketError(Exception): - pass - - -# How should we wait on sockets? -# -# There are two types of APIs you can use for waiting on sockets: the fancy -# modern stateful APIs like epoll/kqueue, and the older stateless APIs like -# select/poll. The stateful APIs are more efficient when you have a lots of -# sockets to keep track of, because you can set them up once and then use them -# lots of times. But we only ever want to wait on a single socket at a time -# and don't want to keep track of state, so the stateless APIs are actually -# more efficient. So we want to use select() or poll(). -# -# Now, how do we choose between select() and poll()? On traditional Unixes, -# select() has a strange calling convention that makes it slow, or fail -# altogether, for high-numbered file descriptors. The point of poll() is to fix -# that, so on Unixes, we prefer poll(). -# -# On Windows, there is no poll() (or at least Python doesn't provide a wrapper -# for it), but that's OK, because on Windows, select() doesn't have this -# strange calling convention; plain select() works fine. -# -# So: on Windows we use select(), and everywhere else we use poll(). We also -# fall back to select() in case poll() is somehow broken or missing. - -if sys.version_info >= (3, 5): - # Modern Python, that retries syscalls by default - def _retry_on_intr(fn, timeout): - return fn(timeout) - -else: - # Old and broken Pythons. - def _retry_on_intr(fn, timeout): - if timeout is None: - deadline = float("inf") - else: - deadline = monotonic() + timeout - - while True: - try: - return fn(timeout) - # OSError for 3 <= pyver < 3.5, select.error for pyver <= 2.7 - except (OSError, select.error) as e: - # 'e.args[0]' incantation works for both OSError and select.error - if e.args[0] != errno.EINTR: - raise - else: - timeout = deadline - monotonic() - if timeout < 0: - timeout = 0 - if timeout == float("inf"): - timeout = None - continue - - -def select_wait_for_socket(sock, read=False, write=False, timeout=None): - if not read and not write: - raise RuntimeError("must specify at least one of read=True, write=True") - rcheck = [] - wcheck = [] - if read: - rcheck.append(sock) - if write: - wcheck.append(sock) - # When doing a non-blocking connect, most systems signal success by - # marking the socket writable. Windows, though, signals success by marked - # it as "exceptional". We paper over the difference by checking the write - # sockets for both conditions. (The stdlib selectors module does the same - # thing.) 
- fn = partial(select.select, rcheck, wcheck, wcheck) - rready, wready, xready = _retry_on_intr(fn, timeout) - return bool(rready or wready or xready) - - -def poll_wait_for_socket(sock, read=False, write=False, timeout=None): - if not read and not write: - raise RuntimeError("must specify at least one of read=True, write=True") - mask = 0 - if read: - mask |= select.POLLIN - if write: - mask |= select.POLLOUT - poll_obj = select.poll() - poll_obj.register(sock, mask) - - # For some reason, poll() takes timeout in milliseconds - def do_poll(t): - if t is not None: - t *= 1000 - return poll_obj.poll(t) - - return bool(_retry_on_intr(do_poll, timeout)) - - -def null_wait_for_socket(*args, **kwargs): - raise NoWayToWaitForSocketError("no select-equivalent available") - - -def _have_working_poll(): - # Apparently some systems have a select.poll that fails as soon as you try - # to use it, either due to strange configuration or broken monkeypatching - # from libraries like eventlet/greenlet. - try: - poll_obj = select.poll() - _retry_on_intr(poll_obj.poll, 0) - except (AttributeError, OSError): - return False - else: - return True - - -def wait_for_socket(*args, **kwargs): - # We delay choosing which implementation to use until the first time we're - # called. We could do it at import time, but then we might make the wrong - # decision if someone goes wild with monkeypatching select.poll after - # we're imported. - global wait_for_socket - if _have_working_poll(): - wait_for_socket = poll_wait_for_socket - elif hasattr(select, "select"): - wait_for_socket = select_wait_for_socket - else: # Platform-specific: Appengine. - wait_for_socket = null_wait_for_socket - return wait_for_socket(*args, **kwargs) - - -def wait_for_read(sock, timeout=None): - """Waits for reading to be available on a given socket. - Returns True if the socket is readable, or False if the timeout expired. - """ - return wait_for_socket(sock, read=True, timeout=timeout) - - -def wait_for_write(sock, timeout=None): - """Waits for writing to be available on a given socket. - Returns True if the socket is readable, or False if the timeout expired. - """ - return wait_for_socket(sock, write=True, timeout=timeout) diff --git a/spaces/tomofi/MMOCR/mmocr/utils/setup_env.py b/spaces/tomofi/MMOCR/mmocr/utils/setup_env.py deleted file mode 100644 index 21def2f0809153a5f755af2431f7e702db625e5c..0000000000000000000000000000000000000000 --- a/spaces/tomofi/MMOCR/mmocr/utils/setup_env.py +++ /dev/null @@ -1,47 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import os -import platform -import warnings - -import cv2 -import torch.multiprocessing as mp - - -def setup_multi_processes(cfg): - """Setup multi-processing environment variables.""" - # set multi-process start method as `fork` to speed up the training - if platform.system() != 'Windows': - mp_start_method = cfg.get('mp_start_method', 'fork') - current_method = mp.get_start_method(allow_none=True) - if current_method is not None and current_method != mp_start_method: - warnings.warn( - f'Multi-processing start method `{mp_start_method}` is ' - f'different from the previous setting `{current_method}`.' - f'It will be force set to `{mp_start_method}`. 
You can change ' - f'this behavior by changing `mp_start_method` in your config.') - mp.set_start_method(mp_start_method, force=True) - - # disable opencv multithreading to avoid system being overloaded - opencv_num_threads = cfg.get('opencv_num_threads', 0) - cv2.setNumThreads(opencv_num_threads) - - # setup OMP threads - # This code is referred from https://github.com/pytorch/pytorch/blob/master/torch/distributed/run.py # noqa - if 'OMP_NUM_THREADS' not in os.environ and cfg.data.workers_per_gpu > 1: - omp_num_threads = 1 - warnings.warn( - f'Setting OMP_NUM_THREADS environment variable for each process ' - f'to be {omp_num_threads} in default, to avoid your system being ' - f'overloaded, please further tune the variable for optimal ' - f'performance in your application as needed.') - os.environ['OMP_NUM_THREADS'] = str(omp_num_threads) - - # setup MKL threads - if 'MKL_NUM_THREADS' not in os.environ and cfg.data.workers_per_gpu > 1: - mkl_num_threads = 1 - warnings.warn( - f'Setting MKL_NUM_THREADS environment variable for each process ' - f'to be {mkl_num_threads} in default, to avoid your system being ' - f'overloaded, please further tune the variable for optimal ' - f'performance in your application as needed.') - os.environ['MKL_NUM_THREADS'] = str(mkl_num_threads) diff --git a/spaces/tomofi/NDLOCR/cli/procs/page_separation.py b/spaces/tomofi/NDLOCR/cli/procs/page_separation.py deleted file mode 100644 index bb07319d4d039d3509c5dc1ce05444d964ad7914..0000000000000000000000000000000000000000 --- a/spaces/tomofi/NDLOCR/cli/procs/page_separation.py +++ /dev/null @@ -1,112 +0,0 @@ -# Copyright (c) 2022, National Diet Library, Japan -# -# This software is released under the CC BY 4.0. -# https://creativecommons.org/licenses/by/4.0/ - - -import copy -import numpy -import os - -from .base_proc import BaseInferenceProcess - - -class PageSeparation(BaseInferenceProcess): - """ - ノド元分割処理を実行するプロセスのクラス。 - BaseInferenceProcessを継承しています。 - """ - def __init__(self, cfg, pid): - """ - Parameters - ---------- - cfg : dict - 本推論処理における設定情報です。 - pid : int - 実行される順序を表す数値。 - """ - super().__init__(cfg, pid, '_page_sep') - - if self.cfg['page_separation']['silence_tf_log']: - import logging - import warnings - os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3' - warnings.simplefilter(action='ignore', category=FutureWarning) - - import tensorflow as tf - tf.get_logger().setLevel(logging.ERROR) - - from src.separate_pages_ssd.inference_divided import divide_facing_page_with_cli, load_weightfile - load_weightfile(os.path.abspath(self.cfg['page_separation']['weight_path'])) - self._run_src_inference = divide_facing_page_with_cli - - def _is_valid_input(self, input_data): - """ - 本クラスの推論処理における入力データのバリデーション。 - - Parameters - ---------- - input_data : dict - 推論処理を実行する対象の入力データ。 - - Returns - ------- - [変数なし] : bool -  入力データが正しければTrue, そうでなければFalseを返します。 - """ - if type(input_data['img']) is not numpy.ndarray: - print('PageSeparation: input img is not numpy.ndarray') - return False - return True - - def _run_process(self, input_data): - """ - 推論処理の本体部分。 - - Parameters - ---------- - input_data : dict - 推論処理を実行する対象の入力データ。 - - Returns - ------- - result : dict - 推論処理の結果を保持する辞書型データ。 - 基本的にinput_dataと同じ構造です。 - """ - print('### Page Separation ###') - log_file_path = None - if self.process_dump_dir is not None: - log_file_path = os.path.join(self.process_dump_dir, self.cfg['page_separation']['log']) - inference_output = self._run_src_inference(input=input_data['img'], - input_path=input_data['img_path'], - 
left=self.cfg['page_separation']['left'], - right=self.cfg['page_separation']['right'], - single=self.cfg['page_separation']['single'], - ext=self.cfg['page_separation']['ext'], - quality=self.cfg['page_separation']['quality'], - short=self.cfg['page_separation']['short'], - log=log_file_path) - if (not self.cfg['page_separation']['allow_invalid_num_output']) and (not len(inference_output) in range(1, 3)): - print('ERROR: Output from page separation must be 1 or 2 pages.') - return None - - # Create result to pass img_path and img data - result = [] - for id, single_output_img in enumerate(inference_output): - output_data = copy.deepcopy(input_data) - output_data['img'] = single_output_img - output_data['orig_img_path'] = input_data['img_path'] - - # make and save separated img file name - if id == 0: - id = 'L' - else: - id = 'R' - orig_img_name = os.path.basename(input_data['img_path']) - stem, ext = os.path.splitext(orig_img_name) - output_data['img_file_name'] = stem + '_' + id + '.jpg' - - result.append(output_data) - - return result diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/centripetalnet/centripetalnet_hourglass104_mstest_16x6_210e_coco.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/centripetalnet/centripetalnet_hourglass104_mstest_16x6_210e_coco.py deleted file mode 100644 index e9c5defd1cda850f9702c05a86e0671880ef5e38..0000000000000000000000000000000000000000 --- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/centripetalnet/centripetalnet_hourglass104_mstest_16x6_210e_coco.py +++ /dev/null @@ -1,105 +0,0 @@ -_base_ = [ - '../_base_/default_runtime.py', '../_base_/datasets/coco_detection.py' -] - -# model settings -model = dict( - type='CornerNet', - backbone=dict( - type='HourglassNet', - downsample_times=5, - num_stacks=2, - stage_channels=[256, 256, 384, 384, 384, 512], - stage_blocks=[2, 2, 2, 2, 2, 4], - norm_cfg=dict(type='BN', requires_grad=True)), - neck=None, - bbox_head=dict( - type='CentripetalHead', - num_classes=80, - in_channels=256, - num_feat_levels=2, - corner_emb_channels=0, - loss_heatmap=dict( - type='GaussianFocalLoss', alpha=2.0, gamma=4.0, loss_weight=1), - loss_offset=dict(type='SmoothL1Loss', beta=1.0, loss_weight=1), - loss_guiding_shift=dict( - type='SmoothL1Loss', beta=1.0, loss_weight=0.05), - loss_centripetal_shift=dict( - type='SmoothL1Loss', beta=1.0, loss_weight=1)), - # training and testing settings - train_cfg=None, - test_cfg=dict( - corner_topk=100, - local_maximum_kernel=3, - distance_threshold=0.5, - score_thr=0.05, - max_per_img=100, - nms=dict(type='soft_nms', iou_threshold=0.5, method='gaussian'))) -# data settings -img_norm_cfg = dict( - mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True) -train_pipeline = [ - dict(type='LoadImageFromFile', to_float32=True), - dict(type='LoadAnnotations', with_bbox=True), - dict( - type='PhotoMetricDistortion', - brightness_delta=32, - contrast_range=(0.5, 1.5), - saturation_range=(0.5, 1.5), - hue_delta=18), - dict( - type='RandomCenterCropPad', - crop_size=(511, 511), - ratios=(0.6, 0.7, 0.8, 0.9, 1.0, 1.1, 1.2, 1.3), - test_mode=False, - test_pad_mode=None, - **img_norm_cfg), - dict(type='Resize', img_scale=(511, 511), keep_ratio=False), - dict(type='RandomFlip', flip_ratio=0.5), - dict(type='Normalize', **img_norm_cfg), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']), -] -test_pipeline = [ - dict(type='LoadImageFromFile', to_float32=True), - dict( - type='MultiScaleFlipAug', - 
scale_factor=1.0, - flip=True, - transforms=[ - dict(type='Resize'), - dict( - type='RandomCenterCropPad', - crop_size=None, - ratios=None, - border=None, - test_mode=True, - test_pad_mode=['logical_or', 127], - **img_norm_cfg), - dict(type='RandomFlip'), - dict(type='Normalize', **img_norm_cfg), - dict(type='ImageToTensor', keys=['img']), - dict( - type='Collect', - keys=['img'], - meta_keys=('filename', 'ori_shape', 'img_shape', 'pad_shape', - 'scale_factor', 'flip', 'img_norm_cfg', 'border')), - ]) -] -data = dict( - samples_per_gpu=6, - workers_per_gpu=3, - train=dict(pipeline=train_pipeline), - val=dict(pipeline=test_pipeline), - test=dict(pipeline=test_pipeline)) -# optimizer -optimizer = dict(type='Adam', lr=0.0005) -optimizer_config = dict(grad_clip=dict(max_norm=35, norm_type=2)) -# learning policy -lr_config = dict( - policy='step', - warmup='linear', - warmup_iters=500, - warmup_ratio=1.0 / 3, - step=[190]) -runner = dict(type='EpochBasedRunner', max_epochs=210) diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/core/bbox/assigners/grid_assigner.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/core/bbox/assigners/grid_assigner.py deleted file mode 100644 index 7390ea6370639c939d578c6ebf0f9268499161bc..0000000000000000000000000000000000000000 --- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/core/bbox/assigners/grid_assigner.py +++ /dev/null @@ -1,155 +0,0 @@ -import torch - -from ..builder import BBOX_ASSIGNERS -from ..iou_calculators import build_iou_calculator -from .assign_result import AssignResult -from .base_assigner import BaseAssigner - - -@BBOX_ASSIGNERS.register_module() -class GridAssigner(BaseAssigner): - """Assign a corresponding gt bbox or background to each bbox. - - Each proposals will be assigned with `-1`, `0`, or a positive integer - indicating the ground truth index. - - - -1: don't care - - 0: negative sample, no assigned gt - - positive integer: positive sample, index (1-based) of assigned gt - - Args: - pos_iou_thr (float): IoU threshold for positive bboxes. - neg_iou_thr (float or tuple): IoU threshold for negative bboxes. - min_pos_iou (float): Minimum iou for a bbox to be considered as a - positive bbox. Positive samples can have smaller IoU than - pos_iou_thr due to the 4th step (assign max IoU sample to each gt). - gt_max_assign_all (bool): Whether to assign all bboxes with the same - highest overlap with some gt to that gt. - """ - - def __init__(self, - pos_iou_thr, - neg_iou_thr, - min_pos_iou=.0, - gt_max_assign_all=True, - iou_calculator=dict(type='BboxOverlaps2D')): - self.pos_iou_thr = pos_iou_thr - self.neg_iou_thr = neg_iou_thr - self.min_pos_iou = min_pos_iou - self.gt_max_assign_all = gt_max_assign_all - self.iou_calculator = build_iou_calculator(iou_calculator) - - def assign(self, bboxes, box_responsible_flags, gt_bboxes, gt_labels=None): - """Assign gt to bboxes. The process is very much like the max iou - assigner, except that positive samples are constrained within the cell - that the gt boxes fell in. - - This method assign a gt bbox to every bbox (proposal/anchor), each bbox - will be assigned with -1, 0, or a positive number. -1 means don't care, - 0 means negative sample, positive number is the index (1-based) of - assigned gt. - The assignment is done in following steps, the order matters. - - 1. assign every bbox to -1 - 2. assign proposals whose iou with all gts <= neg_iou_thr to 0 - 3. 
for each bbox within a cell, if the iou with its nearest gt > - pos_iou_thr and the center of that gt falls inside the cell, - assign it to that bbox - 4. for each gt bbox, assign its nearest proposals within the cell the - gt bbox falls in to itself. - - Args: - bboxes (Tensor): Bounding boxes to be assigned, shape(n, 4). - box_responsible_flags (Tensor): flag to indicate whether box is - responsible for prediction, shape(n, ) - gt_bboxes (Tensor): Groundtruth boxes, shape (k, 4). - gt_labels (Tensor, optional): Label of gt_bboxes, shape (k, ). - - Returns: - :obj:`AssignResult`: The assign result. - """ - num_gts, num_bboxes = gt_bboxes.size(0), bboxes.size(0) - - # compute iou between all gt and bboxes - overlaps = self.iou_calculator(gt_bboxes, bboxes) - - # 1. assign -1 by default - assigned_gt_inds = overlaps.new_full((num_bboxes, ), - -1, - dtype=torch.long) - - if num_gts == 0 or num_bboxes == 0: - # No ground truth or boxes, return empty assignment - max_overlaps = overlaps.new_zeros((num_bboxes, )) - if num_gts == 0: - # No truth, assign everything to background - assigned_gt_inds[:] = 0 - if gt_labels is None: - assigned_labels = None - else: - assigned_labels = overlaps.new_full((num_bboxes, ), - -1, - dtype=torch.long) - return AssignResult( - num_gts, - assigned_gt_inds, - max_overlaps, - labels=assigned_labels) - - # 2. assign negative: below - # for each anchor, which gt best overlaps with it - # for each anchor, the max iou of all gts - # shape of max_overlaps == argmax_overlaps == num_bboxes - max_overlaps, argmax_overlaps = overlaps.max(dim=0) - - if isinstance(self.neg_iou_thr, float): - assigned_gt_inds[(max_overlaps >= 0) - & (max_overlaps <= self.neg_iou_thr)] = 0 - elif isinstance(self.neg_iou_thr, (tuple, list)): - assert len(self.neg_iou_thr) == 2 - assigned_gt_inds[(max_overlaps > self.neg_iou_thr[0]) - & (max_overlaps <= self.neg_iou_thr[1])] = 0 - - # 3. assign positive: falls into responsible cell and above - # positive IOU threshold, the order matters. - # the prior condition of comparision is to filter out all - # unrelated anchors, i.e. not box_responsible_flags - overlaps[:, ~box_responsible_flags.type(torch.bool)] = -1. - - # calculate max_overlaps again, but this time we only consider IOUs - # for anchors responsible for prediction - max_overlaps, argmax_overlaps = overlaps.max(dim=0) - - # for each gt, which anchor best overlaps with it - # for each gt, the max iou of all proposals - # shape of gt_max_overlaps == gt_argmax_overlaps == num_gts - gt_max_overlaps, gt_argmax_overlaps = overlaps.max(dim=1) - - pos_inds = (max_overlaps > - self.pos_iou_thr) & box_responsible_flags.type(torch.bool) - assigned_gt_inds[pos_inds] = argmax_overlaps[pos_inds] + 1 - - # 4. 
assign positive to max overlapped anchors within responsible cell - for i in range(num_gts): - if gt_max_overlaps[i] > self.min_pos_iou: - if self.gt_max_assign_all: - max_iou_inds = (overlaps[i, :] == gt_max_overlaps[i]) & \ - box_responsible_flags.type(torch.bool) - assigned_gt_inds[max_iou_inds] = i + 1 - elif box_responsible_flags[gt_argmax_overlaps[i]]: - assigned_gt_inds[gt_argmax_overlaps[i]] = i + 1 - - # assign labels of positive anchors - if gt_labels is not None: - assigned_labels = assigned_gt_inds.new_full((num_bboxes, ), -1) - pos_inds = torch.nonzero( - assigned_gt_inds > 0, as_tuple=False).squeeze() - if pos_inds.numel() > 0: - assigned_labels[pos_inds] = gt_labels[ - assigned_gt_inds[pos_inds] - 1] - - else: - assigned_labels = None - - return AssignResult( - num_gts, assigned_gt_inds, max_overlaps, labels=assigned_labels) diff --git a/spaces/tovaru/vits-for-ba/commons.py b/spaces/tovaru/vits-for-ba/commons.py deleted file mode 100644 index 9ad0444b61cbadaa388619986c2889c707d873ce..0000000000000000000000000000000000000000 --- a/spaces/tovaru/vits-for-ba/commons.py +++ /dev/null @@ -1,161 +0,0 @@ -import math -import numpy as np -import torch -from torch import nn -from torch.nn import functional as F - - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size*dilation - dilation)/2) - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def intersperse(lst, item): - result = [item] * (len(lst) * 2 + 1) - result[1::2] = lst - return result - - -def kl_divergence(m_p, logs_p, m_q, logs_q): - """KL(P||Q)""" - kl = (logs_q - logs_p) - 0.5 - kl += 0.5 * (torch.exp(2. * logs_p) + ((m_p - m_q)**2)) * torch.exp(-2. 
* logs_q) - return kl - - -def rand_gumbel(shape): - """Sample from the Gumbel distribution, protect from overflows.""" - uniform_samples = torch.rand(shape) * 0.99998 + 0.00001 - return -torch.log(-torch.log(uniform_samples)) - - -def rand_gumbel_like(x): - g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device) - return g - - -def slice_segments(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, :, idx_str:idx_end] - return ret - - -def rand_slice_segments(x, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size + 1 - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - return ret, ids_str - - -def get_timing_signal_1d( - length, channels, min_timescale=1.0, max_timescale=1.0e4): - position = torch.arange(length, dtype=torch.float) - num_timescales = channels // 2 - log_timescale_increment = ( - math.log(float(max_timescale) / float(min_timescale)) / - (num_timescales - 1)) - inv_timescales = min_timescale * torch.exp( - torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment) - scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1) - signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0) - signal = F.pad(signal, [0, 0, 0, channels % 2]) - signal = signal.view(1, channels, length) - return signal - - -def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return x + signal.to(dtype=x.dtype, device=x.device) - - -def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis) - - -def subsequent_mask(length): - mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0) - return mask - - -@torch.jit.script -def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels): - n_channels_int = n_channels[0] - in_act = input_a + input_b - t_act = torch.tanh(in_act[:, :n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, :]) - acts = t_act * s_act - return acts - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def shift_1d(x): - x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1] - return x - - -def sequence_mask(length, max_length=None): - if max_length is None: - max_length = length.max() - x = torch.arange(max_length, dtype=length.dtype, device=length.device) - return x.unsqueeze(0) < length.unsqueeze(1) - - -def generate_path(duration, mask): - """ - duration: [b, 1, t_x] - mask: [b, 1, t_y, t_x] - """ - device = duration.device - - b, _, t_y, t_x = mask.shape - cum_duration = torch.cumsum(duration, -1) - - cum_duration_flat = cum_duration.view(b * t_x) - path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype) - path = path.view(b, t_x, t_y) - path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1] - path = path.unsqueeze(1).transpose(2,3) * mask - return path - - -def clip_grad_value_(parameters, clip_value, norm_type=2): - if isinstance(parameters, torch.Tensor): 
- parameters = [parameters] - parameters = list(filter(lambda p: p.grad is not None, parameters)) - norm_type = float(norm_type) - if clip_value is not None: - clip_value = float(clip_value) - - total_norm = 0 - for p in parameters: - param_norm = p.grad.data.norm(norm_type) - total_norm += param_norm.item() ** norm_type - if clip_value is not None: - p.grad.data.clamp_(min=-clip_value, max=clip_value) - total_norm = total_norm ** (1. / norm_type) - return total_norm diff --git a/spaces/truong-xuan-linh/auto-comment-generation/src/model/init.py b/spaces/truong-xuan-linh/auto-comment-generation/src/model/init.py deleted file mode 100644 index f1638031342d735b01529d3afae2f847a3c0ba6d..0000000000000000000000000000000000000000 --- a/spaces/truong-xuan-linh/auto-comment-generation/src/model/init.py +++ /dev/null @@ -1,7 +0,0 @@ -import os -import gdown - -def download_model(url, output): - if os.path.isfile(output): - return - gdown.download(url, output, quiet=True) diff --git a/spaces/ttt246/brain/Extension/src/pages/Options/index.css b/spaces/ttt246/brain/Extension/src/pages/Options/index.css deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/uSerNameDDHL/bingo/src/components/ui/input.tsx b/spaces/uSerNameDDHL/bingo/src/components/ui/input.tsx deleted file mode 100644 index 684a857f3d769b78818fb13de1abaebfb09ca79c..0000000000000000000000000000000000000000 --- a/spaces/uSerNameDDHL/bingo/src/components/ui/input.tsx +++ /dev/null @@ -1,25 +0,0 @@ -import * as React from 'react' - -import { cn } from '@/lib/utils' - -export interface InputProps - extends React.InputHTMLAttributes {} - -const Input = React.forwardRef( - ({ className, type, ...props }, ref) => { - return ( - - ) - } -) -Input.displayName = 'Input' - -export { Input } diff --git a/spaces/ulysses115/ulysses115-pmvoice/monotonic_align/__init__.py b/spaces/ulysses115/ulysses115-pmvoice/monotonic_align/__init__.py deleted file mode 100644 index 3d7009c40fea3a98168e3e3bc9ae061e91327422..0000000000000000000000000000000000000000 --- a/spaces/ulysses115/ulysses115-pmvoice/monotonic_align/__init__.py +++ /dev/null @@ -1,19 +0,0 @@ -import numpy as np -import torch -from .monotonic_align.core import maximum_path_c - - -def maximum_path(neg_cent, mask): - """ Cython optimized version. 
- neg_cent: [b, t_t, t_s] - mask: [b, t_t, t_s] - """ - device = neg_cent.device - dtype = neg_cent.dtype - neg_cent = neg_cent.data.cpu().numpy().astype(np.float32) - path = np.zeros(neg_cent.shape, dtype=np.int32) - - t_t_max = mask.sum(1)[:, 0].data.cpu().numpy().astype(np.int32) - t_s_max = mask.sum(2)[:, 0].data.cpu().numpy().astype(np.int32) - maximum_path_c(path, neg_cent, t_t_max, t_s_max) - return torch.from_numpy(path).to(device=device, dtype=dtype) diff --git a/spaces/usbethFlerru/sovits-modelsV2/Nitro-Pro-Enterprise-V90237x86x64.md b/spaces/usbethFlerru/sovits-modelsV2/Nitro-Pro-Enterprise-V90237x86x64.md deleted file mode 100644 index 017291b79ab235cd3bb375d5207ccd2de734456d..0000000000000000000000000000000000000000 --- a/spaces/usbethFlerru/sovits-modelsV2/Nitro-Pro-Enterprise-V90237x86x64.md +++ /dev/null @@ -1,60 +0,0 @@ -## Nitro Pro Enterprise V9.0.2.37.x86.x64 - - - - - - - - - -**LINK ○○○ [https://searchdisvipas.blogspot.com/?download=2txnO5](https://searchdisvipas.blogspot.com/?download=2txnO5)** - - - - - - - - - - - - - -# Nitro Pro Enterprise v9.0.2.37.x86.x64: A Powerful PDF Solution for Windows - - - -If you are looking for a professional-quality PDF software that can create, convert, edit, sign, and share PDF files with ease, you might want to check out Nitro Pro Enterprise v9.0.2.37.x86.x64. This software is designed for the business user and offers a range of features that make working with PDF faster and easier than ever before. - - - -Nitro Pro Enterprise v9.0.2.37.x86.x64 is compatible with Windows XP, Vista, 7, 8, and 10, and supports both 32-bit and 64-bit systems. It can handle any format or content type, from paper scans to spreadsheets, presentations, reports, and more. You can create PDF files from over 300 formats, combine multiple files into one PDF document, print to PDF from any application, and use Microsoft Office add-ins to integrate Nitro Pro with your workflow. - - - -One of the main advantages of Nitro Pro Enterprise v9.0.2.37.x86.x64 is its ability to convert and export PDF files with high accuracy and efficiency. You can convert PDF files to Word, Excel, PowerPoint, HTML, image, text, and more formats with just a few clicks. You can also extract text and images from PDF files and reuse them in other applications. Nitro Pro uses industry-leading conversion technology and extraction tools to ensure the quality and integrity of your data. - - - -Nitro Pro Enterprise v9.0.2.37.x86.x64 also allows you to edit PDF files directly and intuitively. You can add, delete, replace, and correct text and images in PDF files with ease. You can also edit pages, optimize files, add bookmarks and links, apply watermarks, headers and footers, insert bates numbering, and more. Nitro Pro lets you edit entire paragraphs with automatic text reflowing, like you would in a word processor. - - - -Another key feature of Nitro Pro Enterprise v9.0.2.37.x86.x64 is its ability to sign and share PDF files securely and conveniently. You can create and apply digital signatures to your PDF documents, encrypt them with passwords and certificates, apply restrictions on printing, copying, and altering them, and verify their authenticity. You can also fill in, save, print, and submit forms online using Nitro Pro's form design and filling tools. - - - -Nitro Pro Enterprise v9.0.2.37.x86.x64 also integrates with Nitro Cloud, a cloud-based service that enables you to access your PDF files from any device and collaborate with others in real time. 
You can send and request signatures, track document activity, share feedback, request approvals, and more using Nitro Cloud. - - - -Nitro Pro Enterprise v9.0.2.37.x86.x64 is a comprehensive PDF solution that can help you work more productively with your digital documents. It offers a simple, straightforward, and intuitive user interface that makes it easy to use for anyone. It also offers a free trial version that you can download from its official website[^1^] [^2^]. If you want to purchase the full version of Nitro Pro Enterprise v9.0.2.37.x86.x64 , you can do so online or contact their sales team for more information. - - 1b8d091108 - - - - - diff --git a/spaces/usbethFlerru/sovits-modelsV2/example/Ce Qui Est Une Echographie Vaginale Un Examen Non Invasif Pour Visualiser Les Organes Pelviens.md b/spaces/usbethFlerru/sovits-modelsV2/example/Ce Qui Est Une Echographie Vaginale Un Examen Non Invasif Pour Visualiser Les Organes Pelviens.md deleted file mode 100644 index 3df8320faa0ca5d9778f63e9b34fd509a0c36165..0000000000000000000000000000000000000000 --- a/spaces/usbethFlerru/sovits-modelsV2/example/Ce Qui Est Une Echographie Vaginale Un Examen Non Invasif Pour Visualiser Les Organes Pelviens.md +++ /dev/null @@ -1,13 +0,0 @@ - -

Endovaginal (transvaginal) ultrasound is a medical imaging technique used to examine the uterus and the ovaries. It is indicated for some women who experience bleeding or who feel pain in the pelvic region (pelvis). It can, for example, reveal a cyst or a tumour (fibroid). A probe fitted with a miniature camera is inserted into the vagina and reproduces the images on a screen. This technique makes it possible to follow a pregnancy from a very early stage and to monitor the fetal heart rate. Endovaginal ultrasound is sometimes performed during delicate surgery or as part of in vitro fertilisation.

              -

              Ce Qui Est Une Echographie Vaginale


              DOWNLOAD ✓✓✓ https://urlcod.com/2uyXrb



              -

Contents: Definition and principles; Hypoechoic or hyperechoic; Abdominal ultrasound; Pelvic ultrasound; Breast ultrasound; Endovaginal ultrasound; Cardiac ultrasound; Preparation; Procedure; Pregnancy; Price...

              -

Vascular flow increases in the days before ovulation, followed by a second peak 3 days later. These variations show that a specific regulation exists to prepare endometrial receptivity. Vessels coming from the junctional zone can then penetrate the endometrium up to the cavity line: these features indicate optimal endometrial receptivity. The junctional zone is the portion of myometrium adjacent to the endometrium (the sub-endometrial zone of the myometrium, of Müllerian origin). It is lesions of the junctional zone in particular that are likely to affect implantation. It is identified as a hypoechoic zone of about 5 mm. This zone, which used to be visible only on MRI, is now easily identified on pelvic ultrasound, in particular thanks to 3D ultrasound (spatial summation). Its thickness can vary over reproductive life, thickening after age 35 or atrophying with prolonged contraception or hypo-estrogenism. Causes of endometrial infertility detectable by endovaginal ultrasound: the endometrial lesions detectable by ultrasound are congenital uterine anomalies, fibroids, polyps and synechiae. The overall sensitivity of EEV (endovaginal ultrasound) in detecting endometrial anomalies in infertile patients has been estimated at 98.9%, with a positive predictive value of 94.3% and a negative predictive value of 5.5%. The overall specificity of a normal ultrasound examination is 31.3%, with a negative predictive value of 71.4% [6].
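For readers unfamiliar with the figures quoted above, the standard definitions of these metrics are recalled below (general formulas in terms of true/false positives and negatives; they are not taken from the cited study):

```latex
\text{Sensitivity} = \frac{TP}{TP+FN}, \qquad
\text{Specificity} = \frac{TN}{TN+FP}, \qquad
\text{PPV} = \frac{TP}{TP+FP}, \qquad
\text{NPV} = \frac{TN}{TN+FN}
```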

              -

In some situations, an endovaginal ultrasound may be offered to you (a suitable probe inserted into the vagina) in order to better visualise certain parts of the fetus, the cervix, the uterus or the ovaries. This painless ultrasound is performed very gently.

              -

An endovaginal ultrasound can be performed to determine where a woman is in her menstrual cycle, to check that the endometrium has an adequate thickness, or to detect anomalies.

              -

The easiest and most informative examination for this monitoring appears to be endovaginal ultrasound measuring endometrial thickness. A lining of 5 mm or less rules out an intracavitary lesion with good reliability.

              -

              -

Cite this article: Sanae Stimou et al. Place de l'échographie endovaginale dans l'exploration de l'infertilité d'origine endométriale. Pan African Medical Journal. 2020;37:92. [doi: 10.11604/pamj.2020.37.92.22375]

              -

It is important to explore the uterine cavity as part of an infertility work-up, because many intra-uterine lesions may be found. Endovaginal ultrasound (EEV) is a first-line examination in the work-up of female infertility. It allows the uterine cavity to be assessed for anomalies responsible for the fertility disorder and can also reveal lesions that may cause transfer or implantation failures. It is an easily performed and reproducible examination. Our objective is to detail the endometrial lesions detectable by EEV in order to clarify the place of EEV in the infertility work-up.

              -
              -
              \ No newline at end of file diff --git a/spaces/usbethFlerru/sovits-modelsV2/example/Compilers Principles Techniques And Tools Solutions Manual 2nd Edition.md b/spaces/usbethFlerru/sovits-modelsV2/example/Compilers Principles Techniques And Tools Solutions Manual 2nd Edition.md deleted file mode 100644 index 155f4f371f1fe71f73e520fbcf2cc246126cf194..0000000000000000000000000000000000000000 --- a/spaces/usbethFlerru/sovits-modelsV2/example/Compilers Principles Techniques And Tools Solutions Manual 2nd Edition.md +++ /dev/null @@ -1,10 +0,0 @@ - -

It is not uncommon for a problem to look like a tree search. Since a decision tree is easy to visualise, it is tempting to use decision trees to help understand such problems, although a plain decision tree is rarely the optimal solution.
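To make the tree-search framing concrete, here is a minimal, self-contained Python sketch (the problem, the names and the numbers are illustrative assumptions, not material from the book): it solves a tiny knapsack-style choice by exhaustively exploring a binary take/skip decision tree, which is easy to visualise but quickly becomes expensive, which is why a plain decision tree is rarely the optimal approach.

```python
# Minimal illustration: framing a small optimisation problem as a binary
# decision tree ("take" / "skip" each item). Easy to visualise, but the
# full tree has up to 2**n leaves, so exhaustive search scales poorly.
from typing import List, Tuple

def best_value(weights: List[int], values: List[int], capacity: int) -> Tuple[int, List[int]]:
    """Exhaustive tree search over take/skip decisions for each item."""
    def search(i: int, remaining: int) -> Tuple[int, List[int]]:
        if i == len(weights):          # leaf: no more decisions to make
            return 0, []
        # Branch 1: skip item i
        skip_val, skip_set = search(i + 1, remaining)
        best = (skip_val, skip_set)
        # Branch 2: take item i, if it fits
        if weights[i] <= remaining:
            take_val, take_set = search(i + 1, remaining - weights[i])
            take_val += values[i]
            if take_val > skip_val:
                best = (take_val, [i] + take_set)
        return best

    return search(0, capacity)

if __name__ == "__main__":
    print(best_value([2, 3, 4], [3, 4, 5], capacity=5))  # -> (7, [0, 1])
```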

              -

              Compilers Principles Techniques And Tools Solutions Manual 2nd Edition


Download Zip: https://urlcod.com/2uyXOf



              -

Inverted indexes and searching. Most search engines employ some kind of inverted index. While simple, inverted indexes are notoriously slow to update when the index is large and must remain searchable. The approach used in the book "Programming Pearls" is an example of an inverted index, where the inverted index manages to solve a very difficult searching problem efficiently. The methodology of that book is quite similar to that of the FHT, and the two books are compiled into a single theoretical framework using the same terminology.
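As a concrete illustration of the idea (a minimal sketch only, not the index structure used in either book; the function names and toy documents are assumptions made for the example), the following Python builds an inverted index mapping each term to the documents that contain it and answers AND queries by intersecting postings lists. Updating a large index is awkward precisely because adding one document touches one postings list per distinct term it contains.

```python
# Minimal inverted index: map each term to the sorted list of document ids
# that contain it. Queries intersect postings lists.
from collections import defaultdict
from typing import Dict, List

def build_index(docs: List[str]) -> Dict[str, List[int]]:
    index = defaultdict(set)
    for doc_id, text in enumerate(docs):
        for term in text.lower().split():
            index[term].add(doc_id)
    return {term: sorted(ids) for term, ids in index.items()}

def search(index: Dict[str, List[int]], query: str) -> List[int]:
    """Return ids of documents containing every query term (AND query)."""
    postings = [set(index.get(term, [])) for term in query.lower().split()]
    if not postings:
        return []
    return sorted(set.intersection(*postings))

if __name__ == "__main__":
    docs = ["inverted indexes power search",
            "search engines update indexes slowly",
            "programming pearls"]
    idx = build_index(docs)
    print(search(idx, "search indexes"))  # -> [0, 1]
```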

              -

Over the last year we have been working on a new edition of the Dragon Book, published by Addison-Wesley. You can check the conference proceedings and plenary papers for a good overview of the subjects covered in the book. The forthcoming edition covers the following topics, extended from the previous edition:

              -

Formal semantics
formally defined languages, model checking, higher-order parsing, operational semantics, type systems.
The foundational techniques
recursive-descent parsing, grammar-based parsing, operator-precedence parsing, k-tuplets, LALR parsing, lexical grammars, and some recent research in practical parsing efficiency (a minimal recursive-descent sketch follows below).
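The recursive-descent sketch referred to above is given here: a minimal Python parser/evaluator for a toy expression grammar, with one function per grammar rule. The grammar, class and function names are illustrative assumptions, not material from the book.

```python
# Minimal recursive-descent parser/evaluator, one method per grammar rule:
#   expr   -> term ('+' term)*
#   term   -> factor ('*' factor)*
#   factor -> NUMBER | '(' expr ')'
import re
from typing import List

def tokenize(src: str) -> List[str]:
    return re.findall(r"\d+|[()+*]", src)

class Parser:
    def __init__(self, tokens: List[str]):
        self.tokens, self.pos = tokens, 0

    def peek(self) -> str:
        return self.tokens[self.pos] if self.pos < len(self.tokens) else ""

    def eat(self, expected: str = "") -> str:
        tok = self.peek()
        if expected and tok != expected:
            raise SyntaxError(f"expected {expected!r}, got {tok!r}")
        self.pos += 1
        return tok

    def expr(self) -> int:
        value = self.term()
        while self.peek() == "+":
            self.eat("+")
            value += self.term()
        return value

    def term(self) -> int:
        value = self.factor()
        while self.peek() == "*":
            self.eat("*")
            value *= self.factor()
        return value

    def factor(self) -> int:
        if self.peek() == "(":
            self.eat("(")
            value = self.expr()
            self.eat(")")
            return value
        return int(self.eat())

if __name__ == "__main__":
    print(Parser(tokenize("2+3*(4+1)")).expr())  # -> 17
```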

              -

              -

Static analysis is the main focus of the book. In its broadest sense, it consists of three basic components: syntax analysis, semantic analysis, and code generation. There are two main categories of syntax analysis: lexical analysis, which separates words and punctuation in the text, and syntax-directed translation, which produces an intermediate representation. These kinds of analysis are, however, often performed by different programs; the final step of syntax analysis is to hand the intermediate representation on to the later phases.
Dynamic analysis is concerned with what a program does. It is the process of analyzing programs as they execute in order to determine performance bottlenecks: which parts of the program are the most time-consuming, and which sections are the most vulnerable to the effects of memory management.
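As a concrete counterpart to that description, here is a minimal Python sketch of a crude dynamic analysis using the standard-library profiler: it runs a function and reports the most time-consuming calls. The example function and the reporting choices are assumptions made for illustration, not a method from the book.

```python
# Minimal dynamic-analysis sketch: run a function under the standard-library
# profiler and report where the time goes. This observes the program as it
# executes, unlike the static analyses described above.
import cProfile
import io
import pstats

def slow_sum(n: int) -> int:
    total = 0
    for i in range(n):
        total += i * i
    return total

def profile(func, *args):
    profiler = cProfile.Profile()
    profiler.enable()
    result = func(*args)
    profiler.disable()
    buffer = io.StringIO()
    stats = pstats.Stats(profiler, stream=buffer).sort_stats("cumulative")
    stats.print_stats(5)          # top 5 entries by cumulative time
    print(buffer.getvalue())
    return result

if __name__ == "__main__":
    profile(slow_sum, 1_000_000)
```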

              -
              -
              \ No newline at end of file diff --git a/spaces/usbethFlerru/sovits-modelsV2/example/Dmc-devilmaycry.exe - .net Framework Initialization Error.md b/spaces/usbethFlerru/sovits-modelsV2/example/Dmc-devilmaycry.exe - .net Framework Initialization Error.md deleted file mode 100644 index a4421b768b50e0c356cc409dfad83bc39fb99baa..0000000000000000000000000000000000000000 --- a/spaces/usbethFlerru/sovits-modelsV2/example/Dmc-devilmaycry.exe - .net Framework Initialization Error.md +++ /dev/null @@ -1,8 +0,0 @@ -

              Dmc-devilmaycry.exe - .net Framework Initialization Error


Download File: https://urlcod.com/2uyWm6



Jan 16, 2014 - NET\\Framework\\v4.0.30319\\mscorsvw.exe [2010-3-18 130384]. 16/1/2557 8:52:36, Error: volmgr [46] - Failed to initialize crash dump! (the same volmgr error is repeated several times in the log)
              -
              -
              -

              diff --git a/spaces/vaibhavarduino/anime-plus/e4e/criteria/lpips/utils.py b/spaces/vaibhavarduino/anime-plus/e4e/criteria/lpips/utils.py deleted file mode 100644 index 3d15a0983775810ef6239c561c67939b2b9ee3b5..0000000000000000000000000000000000000000 --- a/spaces/vaibhavarduino/anime-plus/e4e/criteria/lpips/utils.py +++ /dev/null @@ -1,30 +0,0 @@ -from collections import OrderedDict - -import torch - - -def normalize_activation(x, eps=1e-10): - norm_factor = torch.sqrt(torch.sum(x ** 2, dim=1, keepdim=True)) - return x / (norm_factor + eps) - - -def get_state_dict(net_type: str = 'alex', version: str = '0.1'): - # build url - url = 'https://raw.githubusercontent.com/richzhang/PerceptualSimilarity/' \ - + f'master/lpips/weights/v{version}/{net_type}.pth' - - # download - old_state_dict = torch.hub.load_state_dict_from_url( - url, progress=True, - map_location=None if torch.cuda.is_available() else torch.device('cpu') - ) - - # rename keys - new_state_dict = OrderedDict() - for key, val in old_state_dict.items(): - new_key = key - new_key = new_key.replace('lin', '') - new_key = new_key.replace('model.', '') - new_state_dict[new_key] = val - - return new_state_dict diff --git a/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/docs/hub/datasets.md b/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/docs/hub/datasets.md deleted file mode 100644 index c1bdc38efc59465c9913d5e2266d2d2afc06729b..0000000000000000000000000000000000000000 --- a/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/docs/hub/datasets.md +++ /dev/null @@ -1,159 +0,0 @@ ---- -comments: true -description: Efficiently manage and use custom datasets on Ultralytics HUB for streamlined training with YOLOv5 and YOLOv8 models. -keywords: Ultralytics, HUB, Datasets, Upload, Visualize, Train, Custom Data, YAML, YOLOv5, YOLOv8 ---- - -# HUB Datasets - -Ultralytics HUB datasets are a practical solution for managing and leveraging your custom datasets. - -Once uploaded, datasets can be immediately utilized for model training. This integrated approach facilitates a seamless transition from dataset management to model training, significantly simplifying the entire process. - -## Upload Dataset - -Ultralytics HUB datasets are just like YOLOv5 and YOLOv8 🚀 datasets. They use the same structure and the same label formats to keep -everything simple. - -Before you upload a dataset to Ultralytics HUB, make sure to **place your dataset YAML file inside the dataset root directory** and that **your dataset YAML, directory and ZIP have the same name**, as shown in the example below, and then zip the dataset directory. - -For example, if your dataset is called "coco8", as our [COCO8](https://docs.ultralytics.com/datasets/detect/coco8) example dataset, then you should have a `coco8.yaml` inside your `coco8/` directory, which will create a `coco8.zip` when zipped: - -```bash -zip -r coco8.zip coco8 -``` - -You can download our [COCO8](https://github.com/ultralytics/hub/blob/master/example_datasets/coco8.zip) example dataset and unzip it to see exactly how to structure your dataset. - -

              - COCO8 Dataset Structure -

              - -The dataset YAML is the same standard YOLOv5 and YOLOv8 YAML format. - -!!! example "coco8.yaml" - - ```yaml - --8<-- "ultralytics/datasets/coco8.yaml" - ``` - -After zipping your dataset, you should validate it before uploading it to Ultralytics HUB. Ultralytics HUB conducts the dataset validation check post-upload, so by ensuring your dataset is correctly formatted and error-free ahead of time, you can forestall any setbacks due to dataset rejection. - -```py -from ultralytics.hub import check_dataset -check_dataset('path/to/coco8.zip') -``` - -Once your dataset ZIP is ready, navigate to the [Datasets](https://hub.ultralytics.com/datasets) page by clicking on the **Datasets** button in the sidebar. - -![Ultralytics HUB screenshot of the Home page with an arrow pointing to the Datasets button in the sidebar](https://raw.githubusercontent.com/ultralytics/assets/main/docs/hub/datasets/hub_upload_dataset_2.jpg) - -??? tip "Tip" - - You can also upload a dataset directly from the [Home](https://hub.ultralytics.com/home) page. - - ![Ultralytics HUB screenshot of the Home page with an arrow pointing to the Upload Dataset card](https://raw.githubusercontent.com/ultralytics/assets/main/docs/hub/datasets/hub_upload_dataset_3.jpg) - -Click on the **Upload Dataset** button on the top right of the page. This action will trigger the **Upload Dataset** dialog. - -![Ultralytics HUB screenshot of the Dataset page with an arrow pointing to the Upload Dataset button](https://raw.githubusercontent.com/ultralytics/assets/main/docs/hub/datasets/hub_upload_dataset_4.jpg) - -Upload your dataset in the _Dataset .zip file_ field. - -You have the additional option to set a custom name and description for your Ultralytics HUB dataset. - -When you're happy with your dataset configuration, click **Upload**. - -![Ultralytics HUB screenshot of the Upload Dataset dialog with an arrow pointing to the Upload button](https://raw.githubusercontent.com/ultralytics/assets/main/docs/hub/datasets/hub_upload_dataset_5.jpg) - -After your dataset is uploaded and processed, you will be able to access it from the Datasets page. - -![Ultralytics HUB screenshot of the Datasets page with an arrow pointing to one of the datasets](https://raw.githubusercontent.com/ultralytics/assets/main/docs/hub/datasets/hub_upload_dataset_6.jpg) - -You can view the images in your dataset grouped by splits (Train, Validation, Test). - -![Ultralytics HUB screenshot of the Dataset page with an arrow pointing to the Images tab](https://raw.githubusercontent.com/ultralytics/assets/main/docs/hub/datasets/hub_upload_dataset_7.jpg) - -??? tip "Tip" - - Each image can be enlarged for better visualization. - - ![Ultralytics HUB screenshot of the Images tab inside the Dataset page with an arrow pointing to the expand icon](https://raw.githubusercontent.com/ultralytics/assets/main/docs/hub/datasets/hub_upload_dataset_8.jpg) - - ![Ultralytics HUB screenshot of the Images tab inside the Dataset page with one of the images expanded](https://raw.githubusercontent.com/ultralytics/assets/main/docs/hub/datasets/hub_upload_dataset_9.jpg) - -Also, you can analyze your dataset by click on the **Overview** tab. - -![Ultralytics HUB screenshot of the Dataset page with an arrow pointing to the Overview tab](https://raw.githubusercontent.com/ultralytics/assets/main/docs/hub/datasets/hub_upload_dataset_10.jpg) - -Next, [train a model](https://docs.ultralytics.com/hub/models/#train-model) on your dataset. 
- -![Ultralytics HUB screenshot of the Dataset page with an arrow pointing to the Train Model button](https://raw.githubusercontent.com/ultralytics/assets/main/docs/hub/datasets/hub_upload_dataset_11.jpg) - -## Share Dataset - -!!! info "Info" - - Ultralytics HUB's sharing functionality provides a convenient way to share datasets with others. This feature is designed to accommodate both existing Ultralytics HUB users and those who have yet to create an account. - -??? note "Note" - - You have control over the general access of your datasets. - - You can choose to set the general access to "Private", in which case, only you will have access to it. Alternatively, you can set the general access to "Unlisted" which grants viewing access to anyone who has the direct link to the dataset, regardless of whether they have an Ultralytics HUB account or not. - -Navigate to the Dataset page of the dataset you want to share, open the dataset actions dropdown and click on the **Share** option. This action will trigger the **Share Dataset** dialog. - -![Ultralytics HUB screenshot of the Dataset page with an arrow pointing to the Share option](https://raw.githubusercontent.com/ultralytics/assets/main/docs/hub/datasets/hub_share_dataset_1.jpg) - -??? tip "Tip" - - You can also share a dataset directly from the [Datasets](https://hub.ultralytics.com/datasets) page. - - ![Ultralytics HUB screenshot of the Datasets page with an arrow pointing to the Share option of one of the datasets](https://raw.githubusercontent.com/ultralytics/assets/main/docs/hub/datasets/hub_share_dataset_2.jpg) - -Set the general access to "Unlisted" and click **Save**. - -![Ultralytics HUB screenshot of the Share Dataset dialog with an arrow pointing to the dropdown and one to the Save button](https://raw.githubusercontent.com/ultralytics/assets/main/docs/hub/datasets/hub_share_dataset_3.jpg) - -Now, anyone who has the direct link to your dataset can view it. - -??? tip "Tip" - - You can easily click on the dataset's link shown in the **Share Dataset** dialog to copy it. - - ![Ultralytics HUB screenshot of the Share Dataset dialog with an arrow pointing to the dataset's link](https://raw.githubusercontent.com/ultralytics/assets/main/docs/hub/datasets/hub_share_dataset_4.jpg) - -## Edit Dataset - -Navigate to the Dataset page of the dataset you want to edit, open the dataset actions dropdown and click on the **Edit** option. This action will trigger the **Update Dataset** dialog. - -![Ultralytics HUB screenshot of the Dataset page with an arrow pointing to the Edit option](https://raw.githubusercontent.com/ultralytics/assets/main/docs/hub/datasets/hub_edit_dataset_1.jpg) - -??? tip "Tip" - - You can also edit a dataset directly from the [Datasets](https://hub.ultralytics.com/datasets) page. - - ![Ultralytics HUB screenshot of the Datasets page with an arrow pointing to the Edit option of one of the datasets](https://raw.githubusercontent.com/ultralytics/assets/main/docs/hub/datasets/hub_edit_dataset_2.jpg) - -Apply the desired modifications to your dataset and then confirm the changes by clicking **Save**. - -![Ultralytics HUB screenshot of the Update Dataset dialog with an arrow pointing to the Save button](https://raw.githubusercontent.com/ultralytics/assets/main/docs/hub/datasets/hub_edit_dataset_3.jpg) - -## Delete Dataset - -Navigate to the Dataset page of the dataset you want to delete, open the dataset actions dropdown and click on the **Delete** option. This action will delete the dataset. 
- -![Ultralytics HUB screenshot of the Dataset page with an arrow pointing to the Delete option](https://raw.githubusercontent.com/ultralytics/assets/main/docs/hub/datasets/hub_delete_dataset_1.jpg) - -??? tip "Tip" - - You can also delete a dataset directly from the [Datasets](https://hub.ultralytics.com/datasets) page. - - ![Ultralytics HUB screenshot of the Datasets page with an arrow pointing to the Delete option of one of the datasets](https://raw.githubusercontent.com/ultralytics/assets/main/docs/hub/datasets/hub_delete_dataset_2.jpg) - -??? note "Note" - - If you change your mind, you can restore the dataset from the [Trash](https://hub.ultralytics.com/trash) page. - - ![Ultralytics HUB screenshot of the Trash page with an arrow pointing to the Restore option of one of the datasets](https://raw.githubusercontent.com/ultralytics/assets/main/docs/hub/datasets/hub_delete_dataset_3.jpg) diff --git a/spaces/vict0rsch/climateGAN/figures/human_evaluation.py b/spaces/vict0rsch/climateGAN/figures/human_evaluation.py deleted file mode 100644 index 2889c0a945879830b844259f203612f96f759bef..0000000000000000000000000000000000000000 --- a/spaces/vict0rsch/climateGAN/figures/human_evaluation.py +++ /dev/null @@ -1,208 +0,0 @@ -""" -This script plots the result of the human evaluation on Amazon Mechanical Turk, where -human participants chose between an image from ClimateGAN or from a different method. -""" -print("Imports...", end="") -from argparse import ArgumentParser -import os -import yaml -import numpy as np -import pandas as pd -import seaborn as sns -from pathlib import Path -import matplotlib.pyplot as plt - - -# ----------------------- -# ----- Constants ----- -# ----------------------- - -comparables_dict = { - "munit_flooded": "MUNIT", - "cyclegan": "CycleGAN", - "instagan": "InstaGAN", - "instagan_copypaste": "Mask-InstaGAN", - "painted_ground": "Painted ground", -} - - -# Colors -palette_colorblind = sns.color_palette("colorblind") -color_climategan = palette_colorblind[9] - -palette_colorblind = sns.color_palette("colorblind") -color_munit = palette_colorblind[1] -color_cyclegan = palette_colorblind[2] -color_instagan = palette_colorblind[3] -color_maskinstagan = palette_colorblind[6] -color_paintedground = palette_colorblind[8] -palette_comparables = [ - color_munit, - color_cyclegan, - color_instagan, - color_maskinstagan, - color_paintedground, -] -palette_comparables_light = [ - sns.light_palette(color, n_colors=3)[1] for color in palette_comparables -] - - -def parsed_args(): - """ - Parse and returns command-line args - - Returns: - argparse.Namespace: the parsed arguments - """ - parser = ArgumentParser() - parser.add_argument( - "--input_csv", - default="amt_omni-vs-other.csv", - type=str, - help="CSV containing the results of the human evaluation, pre-processed", - ) - parser.add_argument( - "--output_dir", - default=None, - type=str, - help="Output directory", - ) - parser.add_argument( - "--dpi", - default=200, - type=int, - help="DPI for the output images", - ) - parser.add_argument( - "--n_bs", - default=1e6, - type=int, - help="Number of bootrstrap samples", - ) - parser.add_argument( - "--bs_seed", - default=17, - type=int, - help="Bootstrap random seed, for reproducibility", - ) - - return parser.parse_args() - - -if __name__ == "__main__": - # ----------------------------- - # ----- Parse arguments ----- - # ----------------------------- - args = parsed_args() - print("Args:\n" + "\n".join([f" {k:20}: {v}" for k, v in vars(args).items()])) - - # Determine output dir 
- if args.output_dir is None: - output_dir = Path(os.environ["SLURM_TMPDIR"]) - else: - output_dir = Path(args.output_dir) - if not output_dir.exists(): - output_dir.mkdir(parents=True, exist_ok=False) - - # Store args - output_yml = output_dir / "args_human_evaluation.yml" - with open(output_yml, "w") as f: - yaml.dump(vars(args), f) - - # Read CSV - df = pd.read_csv(args.input_csv) - - # Sort Y labels - comparables = df.comparable.unique() - is_climategan_sum = [ - df.loc[df.comparable == c, "climategan"].sum() for c in comparables - ] - comparables = comparables[np.argsort(is_climategan_sum)[::-1]] - - # Plot setup - sns.set(style="whitegrid") - plt.rcParams.update({"font.family": "serif"}) - plt.rcParams.update( - { - "font.serif": [ - "Computer Modern Roman", - "Times New Roman", - "Utopia", - "New Century Schoolbook", - "Century Schoolbook L", - "ITC Bookman", - "Bookman", - "Times", - "Palatino", - "Charter", - "serif" "Bitstream Vera Serif", - "DejaVu Serif", - ] - } - ) - fontsize = "medium" - - # Initialize the matplotlib figure - fig, ax = plt.subplots(figsize=(10.5, 3), dpi=args.dpi) - - # Plot the total (right) - sns.barplot( - data=df.loc[df.is_valid], - x="is_valid", - y="comparable", - order=comparables, - orient="h", - label="comparable", - palette=palette_comparables_light, - ci=None, - ) - - # Plot the left - sns.barplot( - data=df.loc[df.is_valid], - x="climategan", - y="comparable", - order=comparables, - orient="h", - label="climategan", - color=color_climategan, - ci=99, - n_boot=args.n_bs, - seed=args.bs_seed, - errcolor="black", - errwidth=1.5, - capsize=0.1, - ) - - # Draw line at 0.5 - y = np.arange(ax.get_ylim()[1] + 0.1, ax.get_ylim()[0], 0.1) - x = 0.5 * np.ones(y.shape[0]) - ax.plot(x, y, linestyle=":", linewidth=1.5, color="black") - - # Change Y-Tick labels - yticklabels = [comparables_dict[ytick.get_text()] for ytick in ax.get_yticklabels()] - yticklabels_text = ax.set_yticklabels( - yticklabels, fontsize=fontsize, horizontalalignment="right", x=0.96 - ) - for ytl in yticklabels_text: - ax.add_artist(ytl) - - # Remove Y-label - ax.set_ylabel(ylabel="") - - # Change X-Tick labels - xlim = [0.0, 1.1] - xticks = np.arange(xlim[0], xlim[1], 0.1) - ax.set(xticks=xticks) - plt.setp(ax.get_xticklabels(), fontsize=fontsize) - - # Set X-label - ax.set_xlabel(None) - - # Change spines - sns.despine(left=True, bottom=True) - - # Save figure - output_fig = output_dir / "human_evaluation_rate_climategan.png" - fig.savefig(output_fig, dpi=fig.dpi, bbox_inches="tight") diff --git a/spaces/vinthony/SadTalker/src/face3d/models/arcface_torch/backbones/iresnet.py b/spaces/vinthony/SadTalker/src/face3d/models/arcface_torch/backbones/iresnet.py deleted file mode 100644 index c6d3b9c240c24687d432197f976ee01fbf423216..0000000000000000000000000000000000000000 --- a/spaces/vinthony/SadTalker/src/face3d/models/arcface_torch/backbones/iresnet.py +++ /dev/null @@ -1,187 +0,0 @@ -import torch -from torch import nn - -__all__ = ['iresnet18', 'iresnet34', 'iresnet50', 'iresnet100', 'iresnet200'] - - -def conv3x3(in_planes, out_planes, stride=1, groups=1, dilation=1): - """3x3 convolution with padding""" - return nn.Conv2d(in_planes, - out_planes, - kernel_size=3, - stride=stride, - padding=dilation, - groups=groups, - bias=False, - dilation=dilation) - - -def conv1x1(in_planes, out_planes, stride=1): - """1x1 convolution""" - return nn.Conv2d(in_planes, - out_planes, - kernel_size=1, - stride=stride, - bias=False) - - -class IBasicBlock(nn.Module): - expansion = 1 - def __init__(self, 
inplanes, planes, stride=1, downsample=None, - groups=1, base_width=64, dilation=1): - super(IBasicBlock, self).__init__() - if groups != 1 or base_width != 64: - raise ValueError('BasicBlock only supports groups=1 and base_width=64') - if dilation > 1: - raise NotImplementedError("Dilation > 1 not supported in BasicBlock") - self.bn1 = nn.BatchNorm2d(inplanes, eps=1e-05,) - self.conv1 = conv3x3(inplanes, planes) - self.bn2 = nn.BatchNorm2d(planes, eps=1e-05,) - self.prelu = nn.PReLU(planes) - self.conv2 = conv3x3(planes, planes, stride) - self.bn3 = nn.BatchNorm2d(planes, eps=1e-05,) - self.downsample = downsample - self.stride = stride - - def forward(self, x): - identity = x - out = self.bn1(x) - out = self.conv1(out) - out = self.bn2(out) - out = self.prelu(out) - out = self.conv2(out) - out = self.bn3(out) - if self.downsample is not None: - identity = self.downsample(x) - out += identity - return out - - -class IResNet(nn.Module): - fc_scale = 7 * 7 - def __init__(self, - block, layers, dropout=0, num_features=512, zero_init_residual=False, - groups=1, width_per_group=64, replace_stride_with_dilation=None, fp16=False): - super(IResNet, self).__init__() - self.fp16 = fp16 - self.inplanes = 64 - self.dilation = 1 - if replace_stride_with_dilation is None: - replace_stride_with_dilation = [False, False, False] - if len(replace_stride_with_dilation) != 3: - raise ValueError("replace_stride_with_dilation should be None " - "or a 3-element tuple, got {}".format(replace_stride_with_dilation)) - self.groups = groups - self.base_width = width_per_group - self.conv1 = nn.Conv2d(3, self.inplanes, kernel_size=3, stride=1, padding=1, bias=False) - self.bn1 = nn.BatchNorm2d(self.inplanes, eps=1e-05) - self.prelu = nn.PReLU(self.inplanes) - self.layer1 = self._make_layer(block, 64, layers[0], stride=2) - self.layer2 = self._make_layer(block, - 128, - layers[1], - stride=2, - dilate=replace_stride_with_dilation[0]) - self.layer3 = self._make_layer(block, - 256, - layers[2], - stride=2, - dilate=replace_stride_with_dilation[1]) - self.layer4 = self._make_layer(block, - 512, - layers[3], - stride=2, - dilate=replace_stride_with_dilation[2]) - self.bn2 = nn.BatchNorm2d(512 * block.expansion, eps=1e-05,) - self.dropout = nn.Dropout(p=dropout, inplace=True) - self.fc = nn.Linear(512 * block.expansion * self.fc_scale, num_features) - self.features = nn.BatchNorm1d(num_features, eps=1e-05) - nn.init.constant_(self.features.weight, 1.0) - self.features.weight.requires_grad = False - - for m in self.modules(): - if isinstance(m, nn.Conv2d): - nn.init.normal_(m.weight, 0, 0.1) - elif isinstance(m, (nn.BatchNorm2d, nn.GroupNorm)): - nn.init.constant_(m.weight, 1) - nn.init.constant_(m.bias, 0) - - if zero_init_residual: - for m in self.modules(): - if isinstance(m, IBasicBlock): - nn.init.constant_(m.bn2.weight, 0) - - def _make_layer(self, block, planes, blocks, stride=1, dilate=False): - downsample = None - previous_dilation = self.dilation - if dilate: - self.dilation *= stride - stride = 1 - if stride != 1 or self.inplanes != planes * block.expansion: - downsample = nn.Sequential( - conv1x1(self.inplanes, planes * block.expansion, stride), - nn.BatchNorm2d(planes * block.expansion, eps=1e-05, ), - ) - layers = [] - layers.append( - block(self.inplanes, planes, stride, downsample, self.groups, - self.base_width, previous_dilation)) - self.inplanes = planes * block.expansion - for _ in range(1, blocks): - layers.append( - block(self.inplanes, - planes, - groups=self.groups, - base_width=self.base_width, - 
dilation=self.dilation)) - - return nn.Sequential(*layers) - - def forward(self, x): - with torch.cuda.amp.autocast(self.fp16): - x = self.conv1(x) - x = self.bn1(x) - x = self.prelu(x) - x = self.layer1(x) - x = self.layer2(x) - x = self.layer3(x) - x = self.layer4(x) - x = self.bn2(x) - x = torch.flatten(x, 1) - x = self.dropout(x) - x = self.fc(x.float() if self.fp16 else x) - x = self.features(x) - return x - - -def _iresnet(arch, block, layers, pretrained, progress, **kwargs): - model = IResNet(block, layers, **kwargs) - if pretrained: - raise ValueError() - return model - - -def iresnet18(pretrained=False, progress=True, **kwargs): - return _iresnet('iresnet18', IBasicBlock, [2, 2, 2, 2], pretrained, - progress, **kwargs) - - -def iresnet34(pretrained=False, progress=True, **kwargs): - return _iresnet('iresnet34', IBasicBlock, [3, 4, 6, 3], pretrained, - progress, **kwargs) - - -def iresnet50(pretrained=False, progress=True, **kwargs): - return _iresnet('iresnet50', IBasicBlock, [3, 4, 14, 3], pretrained, - progress, **kwargs) - - -def iresnet100(pretrained=False, progress=True, **kwargs): - return _iresnet('iresnet100', IBasicBlock, [3, 13, 30, 3], pretrained, - progress, **kwargs) - - -def iresnet200(pretrained=False, progress=True, **kwargs): - return _iresnet('iresnet200', IBasicBlock, [6, 26, 60, 6], pretrained, - progress, **kwargs) - diff --git a/spaces/visakh7843/Sheet_Music_Generator/setup.sh b/spaces/visakh7843/Sheet_Music_Generator/setup.sh deleted file mode 100644 index e221c60655cf9d06bd304bc6395c60f761ef174d..0000000000000000000000000000000000000000 --- a/spaces/visakh7843/Sheet_Music_Generator/setup.sh +++ /dev/null @@ -1,2 +0,0 @@ -export GRADIO_SERVER_NAME=0.0.0.0 -export GRADIO_SERVER_PORT="$PORT" diff --git a/spaces/vishnu0001/text2mesh/shap_e/models/nerf/renderer.py b/spaces/vishnu0001/text2mesh/shap_e/models/nerf/renderer.py deleted file mode 100644 index c356512bc43ebe0ba0a8a78a8d32a989d7a13056..0000000000000000000000000000000000000000 --- a/spaces/vishnu0001/text2mesh/shap_e/models/nerf/renderer.py +++ /dev/null @@ -1,301 +0,0 @@ -from functools import partial -from typing import Any, Dict, Optional - -import torch - -from shap_e.models.nn.meta import subdict -from shap_e.models.renderer import RayRenderer -from shap_e.models.volume import Volume -from shap_e.util.collections import AttrDict - -from .model import NeRFModel -from .ray import RayVolumeIntegral, StratifiedRaySampler, render_rays - - -class TwoStepNeRFRenderer(RayRenderer): - """ - Coarse and fine-grained rendering as proposed by NeRF. This class - additionally supports background rendering like NeRF++. - """ - - def __init__( - self, - n_coarse_samples: int, - n_fine_samples: int, - void_model: NeRFModel, - fine_model: NeRFModel, - volume: Volume, - coarse_model: Optional[NeRFModel] = None, - coarse_background_model: Optional[NeRFModel] = None, - fine_background_model: Optional[NeRFModel] = None, - outer_volume: Optional[Volume] = None, - foreground_stratified_depth_sampling_mode: str = "linear", - background_stratified_depth_sampling_mode: str = "linear", - importance_sampling_options: Optional[Dict[str, Any]] = None, - channel_scale: float = 255, - device: torch.device = torch.device("cuda"), - **kwargs, - ): - """ - :param outer_volume: is where distant objects are encoded. 
- """ - super().__init__(**kwargs) - - if coarse_model is None: - assert ( - fine_background_model is None or coarse_background_model is None - ), "models should be shared for both fg and bg" - - self.n_coarse_samples = n_coarse_samples - self.n_fine_samples = n_fine_samples - self.void_model = void_model - self.coarse_model = coarse_model - self.fine_model = fine_model - self.volume = volume - self.coarse_background_model = coarse_background_model - self.fine_background_model = fine_background_model - self.outer_volume = outer_volume - self.foreground_stratified_depth_sampling_mode = foreground_stratified_depth_sampling_mode - self.background_stratified_depth_sampling_mode = background_stratified_depth_sampling_mode - self.importance_sampling_options = AttrDict(importance_sampling_options or {}) - self.channel_scale = channel_scale - self.device = device - self.to(device) - - if self.coarse_background_model is not None: - assert self.fine_background_model is not None - assert self.outer_volume is not None - - def render_rays( - self, - batch: Dict, - params: Optional[Dict] = None, - options: Optional[Dict] = None, - ) -> AttrDict: - params = self.update(params) - - batch = AttrDict(batch) - if options is None: - options = AttrDict() - options.setdefault("render_background", True) - options.setdefault("render_with_direction", True) - options.setdefault("n_coarse_samples", self.n_coarse_samples) - options.setdefault("n_fine_samples", self.n_fine_samples) - options.setdefault( - "foreground_stratified_depth_sampling_mode", - self.foreground_stratified_depth_sampling_mode, - ) - options.setdefault( - "background_stratified_depth_sampling_mode", - self.background_stratified_depth_sampling_mode, - ) - - shared = self.coarse_model is None - - # First, render rays using the coarse models with stratified ray samples. - coarse_model, coarse_key = ( - (self.fine_model, "fine_model") if shared else (self.coarse_model, "coarse_model") - ) - coarse_model = partial( - coarse_model, - params=subdict(params, coarse_key), - options=options, - ) - parts = [ - RayVolumeIntegral( - model=coarse_model, - volume=self.volume, - sampler=StratifiedRaySampler( - depth_mode=options.foreground_stratified_depth_sampling_mode, - ), - n_samples=options.n_coarse_samples, - ), - ] - if options.render_background and self.outer_volume is not None: - coarse_background_model, coarse_background_key = ( - (self.fine_background_model, "fine_background_model") - if shared - else (self.coarse_background_model, "coarse_background_model") - ) - coarse_background_model = partial( - coarse_background_model, - params=subdict(params, coarse_background_key), - options=options, - ) - parts.append( - RayVolumeIntegral( - model=coarse_background_model, - volume=self.outer_volume, - sampler=StratifiedRaySampler( - depth_mode=options.background_stratified_depth_sampling_mode, - ), - n_samples=options.n_coarse_samples, - ) - ) - coarse_results, samplers, coarse_raw_outputs = render_rays( - batch.rays, - parts, - partial(self.void_model, options=options), - shared=shared, - render_with_direction=options.render_with_direction, - importance_sampling_options=AttrDict(self.importance_sampling_options), - ) - - # Then, render rays using the fine models with importance-weighted ray samples. 
- fine_model = partial( - self.fine_model, - params=subdict(params, "fine_model"), - options=options, - ) - parts = [ - RayVolumeIntegral( - model=fine_model, - volume=self.volume, - sampler=samplers[0], - n_samples=options.n_fine_samples, - ), - ] - if options.render_background and self.outer_volume is not None: - fine_background_model = partial( - self.fine_background_model, - params=subdict(params, "fine_background_model"), - options=options, - ) - parts.append( - RayVolumeIntegral( - model=fine_background_model, - volume=self.outer_volume, - sampler=samplers[1], - n_samples=options.n_fine_samples, - ) - ) - fine_results, *_ = render_rays( - batch.rays, - parts, - partial(self.void_model, options=options), - shared=shared, - prev_raw_outputs=coarse_raw_outputs, - render_with_direction=options.render_with_direction, - ) - - # Combine results - aux_losses = fine_results.output.aux_losses.copy() - for key, val in coarse_results.output.aux_losses.items(): - aux_losses[key + "_coarse"] = val - - return AttrDict( - channels=fine_results.output.channels * self.channel_scale, - channels_coarse=coarse_results.output.channels * self.channel_scale, - distances=fine_results.output.distances, - transmittance=fine_results.transmittance, - transmittance_coarse=coarse_results.transmittance, - t0=fine_results.volume_range.t0, - t1=fine_results.volume_range.t1, - intersected=fine_results.volume_range.intersected, - aux_losses=aux_losses, - ) - - -class OneStepNeRFRenderer(RayRenderer): - """ - Renders rays using stratified sampling only unlike vanilla NeRF. - The same setup as NeRF++. - """ - - def __init__( - self, - n_samples: int, - void_model: NeRFModel, - foreground_model: NeRFModel, - volume: Volume, - background_model: Optional[NeRFModel] = None, - outer_volume: Optional[Volume] = None, - foreground_stratified_depth_sampling_mode: str = "linear", - background_stratified_depth_sampling_mode: str = "linear", - channel_scale: float = 255, - device: torch.device = torch.device("cuda"), - **kwargs, - ): - super().__init__(**kwargs) - self.n_samples = n_samples - self.void_model = void_model - self.foreground_model = foreground_model - self.volume = volume - self.background_model = background_model - self.outer_volume = outer_volume - self.foreground_stratified_depth_sampling_mode = foreground_stratified_depth_sampling_mode - self.background_stratified_depth_sampling_mode = background_stratified_depth_sampling_mode - self.channel_scale = channel_scale - self.device = device - self.to(device) - - def render_rays( - self, - batch: Dict, - params: Optional[Dict] = None, - options: Optional[Dict] = None, - ) -> AttrDict: - params = self.update(params) - - batch = AttrDict(batch) - if options is None: - options = AttrDict() - options.setdefault("render_background", True) - options.setdefault("render_with_direction", True) - options.setdefault("n_samples", self.n_samples) - options.setdefault( - "foreground_stratified_depth_sampling_mode", - self.foreground_stratified_depth_sampling_mode, - ) - options.setdefault( - "background_stratified_depth_sampling_mode", - self.background_stratified_depth_sampling_mode, - ) - - foreground_model = partial( - self.foreground_model, - params=subdict(params, "foreground_model"), - options=options, - ) - parts = [ - RayVolumeIntegral( - model=foreground_model, - volume=self.volume, - sampler=StratifiedRaySampler( - depth_mode=options.foreground_stratified_depth_sampling_mode - ), - n_samples=options.n_samples, - ), - ] - if options.render_background and self.outer_volume is 
not None: - background_model = partial( - self.background_model, - params=subdict(params, "background_model"), - options=options, - ) - parts.append( - RayVolumeIntegral( - model=background_model, - volume=self.outer_volume, - sampler=StratifiedRaySampler( - depth_mode=options.background_stratified_depth_sampling_mode - ), - n_samples=options.n_samples, - ) - ) - results, *_ = render_rays( - batch.rays, - parts, - self.void_model, - render_with_direction=options.render_with_direction, - ) - - return AttrDict( - channels=results.output.channels * self.channel_scale, - distances=results.output.distances, - transmittance=results.transmittance, - t0=results.volume_range.t0, - t1=results.volume_range.t1, - intersected=results.volume_range.intersected, - aux_losses=results.output.aux_losses, - ) diff --git a/spaces/vumichien/canvas_controlnet/annotator/uniformer/mmcv/parallel/distributed_deprecated.py b/spaces/vumichien/canvas_controlnet/annotator/uniformer/mmcv/parallel/distributed_deprecated.py deleted file mode 100644 index 676937a2085d4da20fa87923041a200fca6214eb..0000000000000000000000000000000000000000 --- a/spaces/vumichien/canvas_controlnet/annotator/uniformer/mmcv/parallel/distributed_deprecated.py +++ /dev/null @@ -1,70 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.distributed as dist -import torch.nn as nn -from torch._utils import (_flatten_dense_tensors, _take_tensors, - _unflatten_dense_tensors) - -from annotator.uniformer.mmcv.utils import TORCH_VERSION, digit_version -from .registry import MODULE_WRAPPERS -from .scatter_gather import scatter_kwargs - - -@MODULE_WRAPPERS.register_module() -class MMDistributedDataParallel(nn.Module): - - def __init__(self, - module, - dim=0, - broadcast_buffers=True, - bucket_cap_mb=25): - super(MMDistributedDataParallel, self).__init__() - self.module = module - self.dim = dim - self.broadcast_buffers = broadcast_buffers - - self.broadcast_bucket_size = bucket_cap_mb * 1024 * 1024 - self._sync_params() - - def _dist_broadcast_coalesced(self, tensors, buffer_size): - for tensors in _take_tensors(tensors, buffer_size): - flat_tensors = _flatten_dense_tensors(tensors) - dist.broadcast(flat_tensors, 0) - for tensor, synced in zip( - tensors, _unflatten_dense_tensors(flat_tensors, tensors)): - tensor.copy_(synced) - - def _sync_params(self): - module_states = list(self.module.state_dict().values()) - if len(module_states) > 0: - self._dist_broadcast_coalesced(module_states, - self.broadcast_bucket_size) - if self.broadcast_buffers: - if (TORCH_VERSION != 'parrots' - and digit_version(TORCH_VERSION) < digit_version('1.0')): - buffers = [b.data for b in self.module._all_buffers()] - else: - buffers = [b.data for b in self.module.buffers()] - if len(buffers) > 0: - self._dist_broadcast_coalesced(buffers, - self.broadcast_bucket_size) - - def scatter(self, inputs, kwargs, device_ids): - return scatter_kwargs(inputs, kwargs, device_ids, dim=self.dim) - - def forward(self, *inputs, **kwargs): - inputs, kwargs = self.scatter(inputs, kwargs, - [torch.cuda.current_device()]) - return self.module(*inputs[0], **kwargs[0]) - - def train_step(self, *inputs, **kwargs): - inputs, kwargs = self.scatter(inputs, kwargs, - [torch.cuda.current_device()]) - output = self.module.train_step(*inputs[0], **kwargs[0]) - return output - - def val_step(self, *inputs, **kwargs): - inputs, kwargs = self.scatter(inputs, kwargs, - [torch.cuda.current_device()]) - output = self.module.val_step(*inputs[0], **kwargs[0]) - return output diff --git 
a/spaces/vumichien/canvas_controlnet/annotator/uniformer/mmseg/core/seg/sampler/base_pixel_sampler.py b/spaces/vumichien/canvas_controlnet/annotator/uniformer/mmseg/core/seg/sampler/base_pixel_sampler.py deleted file mode 100644 index b75b1566c9f18169cee51d4b55d75e0357b69c57..0000000000000000000000000000000000000000 --- a/spaces/vumichien/canvas_controlnet/annotator/uniformer/mmseg/core/seg/sampler/base_pixel_sampler.py +++ /dev/null @@ -1,12 +0,0 @@ -from abc import ABCMeta, abstractmethod - - -class BasePixelSampler(metaclass=ABCMeta): - """Base class of pixel sampler.""" - - def __init__(self, **kwargs): - pass - - @abstractmethod - def sample(self, seg_logit, seg_label): - """Placeholder for sample function.""" diff --git a/spaces/whgwd2023/bingo/src/components/ui/icons.tsx b/spaces/whgwd2023/bingo/src/components/ui/icons.tsx deleted file mode 100644 index 742b489b50437c5b64c86082f2ebc712eeb6a2b0..0000000000000000000000000000000000000000 --- a/spaces/whgwd2023/bingo/src/components/ui/icons.tsx +++ /dev/null @@ -1,504 +0,0 @@ -'use client' - -import * as React from 'react' - -import { cn } from '@/lib/utils' - -function IconNextChat({ - className, - inverted, - ...props -}: React.ComponentProps<'svg'> & { inverted?: boolean }) { - const id = React.useId() - - return ( - - - - - - - - - - - - - - - - - - - - - - ) -} - -function IconOpenAI({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - OpenAI icon - - - ) -} - -function IconGitHub({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - GitHub - - - ) -} - -function IconSeparator({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - ) -} - -function IconArrowDown({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconArrowRight({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconUser({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconPlus({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconArrowElbow({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconSpinner({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconMessage({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconTrash({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconMore({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconRefresh({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconStop({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconSidebar({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconMoon({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconSun({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconCopy({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconCheck({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconDownload({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconClose({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconEdit({ 
className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconShare({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconUsers({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconExternalLink({ - className, - ...props -}: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconChevronUpDown({ - className, - ...props -}: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -export { - IconEdit, - IconNextChat, - IconOpenAI, - IconGitHub, - IconSeparator, - IconArrowDown, - IconArrowRight, - IconUser, - IconPlus, - IconArrowElbow, - IconSpinner, - IconMessage, - IconTrash, - IconMore, - IconRefresh, - IconStop, - IconSidebar, - IconMoon, - IconSun, - IconCopy, - IconCheck, - IconDownload, - IconClose, - IconShare, - IconUsers, - IconExternalLink, - IconChevronUpDown -} diff --git a/spaces/wilson1/bingo/src/components/ui/tooltip.tsx b/spaces/wilson1/bingo/src/components/ui/tooltip.tsx deleted file mode 100644 index af1d48beb90dd5ae311796539843700871052cae..0000000000000000000000000000000000000000 --- a/spaces/wilson1/bingo/src/components/ui/tooltip.tsx +++ /dev/null @@ -1,30 +0,0 @@ -'use client' - -import * as React from 'react' -import * as TooltipPrimitive from '@radix-ui/react-tooltip' - -import { cn } from '@/lib/utils' - -const TooltipProvider = TooltipPrimitive.Provider - -const Tooltip = TooltipPrimitive.Root - -const TooltipTrigger = TooltipPrimitive.Trigger - -const TooltipContent = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, sideOffset = 4, ...props }, ref) => ( - -)) -TooltipContent.displayName = TooltipPrimitive.Content.displayName - -export { Tooltip, TooltipTrigger, TooltipContent, TooltipProvider } diff --git a/spaces/wolf-sigma/Starburst_Galaxy__PyStarburst_Demo/env.py b/spaces/wolf-sigma/Starburst_Galaxy__PyStarburst_Demo/env.py deleted file mode 100644 index 45756284aae12c711b3868b688072fe10dd7cddd..0000000000000000000000000000000000000000 --- a/spaces/wolf-sigma/Starburst_Galaxy__PyStarburst_Demo/env.py +++ /dev/null @@ -1,24 +0,0 @@ -# Prompt user for credentials -import os - -PROPMPT_CREDS=False -SHOW_SETTINGS=False - -# Web App -PORT = 7860 -BIND_HOST = '0.0.0.0' -SHARE = False -DEBUG = False - -# Credentials -HOST=os.environ.get("HOST") -USERNAME=os.environ.get("SB_USER") -PASSWORD=os.environ.get("SB_PASS") - -# Target Catalog for writing -TARGET_CATALOG='s3lakehouse' - -# OpenAI Configs -OPENAI_MODEL = "gpt-3.5-turbo-16k" -OPENAI_API_KEY = os.environ.get("OPEN_API_KEY") - diff --git a/spaces/wukevin/foldingdiff/README.md b/spaces/wukevin/foldingdiff/README.md deleted file mode 100644 index 4064ed566dce114e930c3ae21a8ae16fc0a8727e..0000000000000000000000000000000000000000 --- a/spaces/wukevin/foldingdiff/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: foldingdiff -emoji: 🤖 -colorFrom: indigo -colorTo: blue -sdk: gradio -sdk_version: 3.4 -app_file: app.py -pinned: true -tags: ['diffusion', 'ddpm', 'proteins', 'protein structure', 'transformer', 'generative', 'foldingdiff'] ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/xfys/yolov5_tracking/val_utils/trackeval/datasets/tao_ow.py b/spaces/xfys/yolov5_tracking/val_utils/trackeval/datasets/tao_ow.py deleted file mode 100644 index 40f80d7876ed9f27fb108b732315c4f8c6fc0984..0000000000000000000000000000000000000000 --- 
a/spaces/xfys/yolov5_tracking/val_utils/trackeval/datasets/tao_ow.py +++ /dev/null @@ -1,652 +0,0 @@ -import os -import numpy as np -import json -import itertools -from collections import defaultdict -from scipy.optimize import linear_sum_assignment -from ..utils import TrackEvalException -from ._base_dataset import _BaseDataset -from .. import utils -from .. import _timing - - -class TAO_OW(_BaseDataset): - """Dataset class for TAO tracking""" - - @staticmethod - def get_default_dataset_config(): - """Default class config values""" - code_path = utils.get_code_path() - default_config = { - 'GT_FOLDER': os.path.join(code_path, 'data/gt/tao/tao_training'), # Location of GT data - 'TRACKERS_FOLDER': os.path.join(code_path, 'data/trackers/tao/tao_training'), # Trackers location - 'OUTPUT_FOLDER': None, # Where to save eval results (if None, same as TRACKERS_FOLDER) - 'TRACKERS_TO_EVAL': None, # Filenames of trackers to eval (if None, all in folder) - 'CLASSES_TO_EVAL': None, # Classes to eval (if None, all classes) - 'SPLIT_TO_EVAL': 'training', # Valid: 'training', 'val' - 'PRINT_CONFIG': True, # Whether to print current config - 'TRACKER_SUB_FOLDER': 'data', # Tracker files are in TRACKER_FOLDER/tracker_name/TRACKER_SUB_FOLDER - 'OUTPUT_SUB_FOLDER': '', # Output files are saved in OUTPUT_FOLDER/tracker_name/OUTPUT_SUB_FOLDER - 'TRACKER_DISPLAY_NAMES': None, # Names of trackers to display, if None: TRACKERS_TO_EVAL - 'MAX_DETECTIONS': 300, # Number of maximal allowed detections per image (0 for unlimited) - 'SUBSET': 'all' - } - return default_config - - def __init__(self, config=None): - """Initialise dataset, checking that all required files are present""" - super().__init__() - # Fill non-given config values with defaults - self.config = utils.init_config(config, self.get_default_dataset_config(), self.get_name()) - self.gt_fol = self.config['GT_FOLDER'] - self.tracker_fol = self.config['TRACKERS_FOLDER'] - self.should_classes_combine = True - self.use_super_categories = False - - self.tracker_sub_fol = self.config['TRACKER_SUB_FOLDER'] - self.output_fol = self.config['OUTPUT_FOLDER'] - if self.output_fol is None: - self.output_fol = self.tracker_fol - self.output_sub_fol = self.config['OUTPUT_SUB_FOLDER'] - - gt_dir_files = [file for file in os.listdir(self.gt_fol) if file.endswith('.json')] - if len(gt_dir_files) != 1: - raise TrackEvalException(self.gt_fol + ' does not contain exactly one json file.') - - with open(os.path.join(self.gt_fol, gt_dir_files[0])) as f: - self.gt_data = json.load(f) - - self.subset = self.config['SUBSET'] - if self.subset != 'all': - # Split GT data into `known`, `unknown` or `distractor` - self._split_known_unknown_distractor() - self.gt_data = self._filter_gt_data(self.gt_data) - - # merge categories marked with a merged tag in TAO dataset - self._merge_categories(self.gt_data['annotations'] + self.gt_data['tracks']) - - # Get sequences to eval and sequence information - self.seq_list = [vid['name'].replace('/', '-') for vid in self.gt_data['videos']] - self.seq_name_to_seq_id = {vid['name'].replace('/', '-'): vid['id'] for vid in self.gt_data['videos']} - # compute mappings from videos to annotation data - self.videos_to_gt_tracks, self.videos_to_gt_images = self._compute_vid_mappings(self.gt_data['annotations']) - # compute sequence lengths - self.seq_lengths = {vid['id']: 0 for vid in self.gt_data['videos']} - for img in self.gt_data['images']: - self.seq_lengths[img['video_id']] += 1 - self.seq_to_images_to_timestep = 
self._compute_image_to_timestep_mappings() - self.seq_to_classes = {vid['id']: {'pos_cat_ids': list({track['category_id'] for track - in self.videos_to_gt_tracks[vid['id']]}), - 'neg_cat_ids': vid['neg_category_ids'], - 'not_exhaustively_labeled_cat_ids': vid['not_exhaustive_category_ids']} - for vid in self.gt_data['videos']} - - # Get classes to eval - considered_vid_ids = [self.seq_name_to_seq_id[vid] for vid in self.seq_list] - seen_cats = set([cat_id for vid_id in considered_vid_ids for cat_id - in self.seq_to_classes[vid_id]['pos_cat_ids']]) - # only classes with ground truth are evaluated in TAO - self.valid_classes = [cls['name'] for cls in self.gt_data['categories'] if cls['id'] in seen_cats] - # cls_name_to_cls_id_map = {cls['name']: cls['id'] for cls in self.gt_data['categories']} - - if self.config['CLASSES_TO_EVAL']: - # self.class_list = [cls.lower() if cls.lower() in self.valid_classes else None - # for cls in self.config['CLASSES_TO_EVAL']] - self.class_list = ["object"] # class-agnostic - if not all(self.class_list): - raise TrackEvalException('Attempted to evaluate an invalid class. Only classes ' + - ', '.join(self.valid_classes) + - ' are valid (classes present in ground truth data).') - else: - # self.class_list = [cls for cls in self.valid_classes] - self.class_list = ["object"] # class-agnostic - # self.class_name_to_class_id = {k: v for k, v in cls_name_to_cls_id_map.items() if k in self.class_list} - self.class_name_to_class_id = {"object": 1} # class-agnostic - - # Get trackers to eval - if self.config['TRACKERS_TO_EVAL'] is None: - self.tracker_list = os.listdir(self.tracker_fol) - else: - self.tracker_list = self.config['TRACKERS_TO_EVAL'] - - if self.config['TRACKER_DISPLAY_NAMES'] is None: - self.tracker_to_disp = dict(zip(self.tracker_list, self.tracker_list)) - elif (self.config['TRACKERS_TO_EVAL'] is not None) and ( - len(self.config['TRACKER_DISPLAY_NAMES']) == len(self.tracker_list)): - self.tracker_to_disp = dict(zip(self.tracker_list, self.config['TRACKER_DISPLAY_NAMES'])) - else: - raise TrackEvalException('List of tracker files and tracker display names do not match.') - - self.tracker_data = {tracker: dict() for tracker in self.tracker_list} - - for tracker in self.tracker_list: - tr_dir_files = [file for file in os.listdir(os.path.join(self.tracker_fol, tracker, self.tracker_sub_fol)) - if file.endswith('.json')] - if len(tr_dir_files) != 1: - raise TrackEvalException(os.path.join(self.tracker_fol, tracker, self.tracker_sub_fol) - + ' does not contain exactly one json file.') - with open(os.path.join(self.tracker_fol, tracker, self.tracker_sub_fol, tr_dir_files[0])) as f: - curr_data = json.load(f) - - # limit detections if MAX_DETECTIONS > 0 - if self.config['MAX_DETECTIONS']: - curr_data = self._limit_dets_per_image(curr_data) - - # fill missing video ids - self._fill_video_ids_inplace(curr_data) - - # make track ids unique over whole evaluation set - self._make_track_ids_unique(curr_data) - - # merge categories marked with a merged tag in TAO dataset - self._merge_categories(curr_data) - - # get tracker sequence information - curr_videos_to_tracker_tracks, curr_videos_to_tracker_images = self._compute_vid_mappings(curr_data) - self.tracker_data[tracker]['vids_to_tracks'] = curr_videos_to_tracker_tracks - self.tracker_data[tracker]['vids_to_images'] = curr_videos_to_tracker_images - - def get_display_name(self, tracker): - return self.tracker_to_disp[tracker] - - def _load_raw_file(self, tracker, seq, is_gt): - """Load a file (gt or tracker) in 
the TAO format - - If is_gt, this returns a dict which contains the fields: - [gt_ids, gt_classes] : list (for each timestep) of 1D NDArrays (for each det). - [gt_dets]: list (for each timestep) of lists of detections. - [classes_to_gt_tracks]: dictionary with class values as keys and list of dictionaries (with frame indices as - keys and corresponding segmentations as values) for each track - [classes_to_gt_track_ids, classes_to_gt_track_areas, classes_to_gt_track_lengths]: dictionary with class values - as keys and lists (for each track) as values - - if not is_gt, this returns a dict which contains the fields: - [tracker_ids, tracker_classes, tracker_confidences] : list (for each timestep) of 1D NDArrays (for each det). - [tracker_dets]: list (for each timestep) of lists of detections. - [classes_to_dt_tracks]: dictionary with class values as keys and list of dictionaries (with frame indices as - keys and corresponding segmentations as values) for each track - [classes_to_dt_track_ids, classes_to_dt_track_areas, classes_to_dt_track_lengths]: dictionary with class values - as keys and lists as values - [classes_to_dt_track_scores]: dictionary with class values as keys and 1D numpy arrays as values - """ - seq_id = self.seq_name_to_seq_id[seq] - # File location - if is_gt: - imgs = self.videos_to_gt_images[seq_id] - else: - imgs = self.tracker_data[tracker]['vids_to_images'][seq_id] - - # Convert data to required format - num_timesteps = self.seq_lengths[seq_id] - img_to_timestep = self.seq_to_images_to_timestep[seq_id] - data_keys = ['ids', 'classes', 'dets'] - if not is_gt: - data_keys += ['tracker_confidences'] - raw_data = {key: [None] * num_timesteps for key in data_keys} - for img in imgs: - # some tracker data contains images without any ground truth information, these are ignored - try: - t = img_to_timestep[img['id']] - except KeyError: - continue - annotations = img['annotations'] - raw_data['dets'][t] = np.atleast_2d([ann['bbox'] for ann in annotations]).astype(float) - raw_data['ids'][t] = np.atleast_1d([ann['track_id'] for ann in annotations]).astype(int) - raw_data['classes'][t] = np.atleast_1d([1 for _ in annotations]).astype(int) # class-agnostic - if not is_gt: - raw_data['tracker_confidences'][t] = np.atleast_1d([ann['score'] for ann in annotations]).astype(float) - - for t, d in enumerate(raw_data['dets']): - if d is None: - raw_data['dets'][t] = np.empty((0, 4)).astype(float) - raw_data['ids'][t] = np.empty(0).astype(int) - raw_data['classes'][t] = np.empty(0).astype(int) - if not is_gt: - raw_data['tracker_confidences'][t] = np.empty(0) - - if is_gt: - key_map = {'ids': 'gt_ids', - 'classes': 'gt_classes', - 'dets': 'gt_dets'} - else: - key_map = {'ids': 'tracker_ids', - 'classes': 'tracker_classes', - 'dets': 'tracker_dets'} - for k, v in key_map.items(): - raw_data[v] = raw_data.pop(k) - - # all_classes = [self.class_name_to_class_id[cls] for cls in self.class_list] - all_classes = [1] # class-agnostic - - if is_gt: - classes_to_consider = all_classes - all_tracks = self.videos_to_gt_tracks[seq_id] - else: - # classes_to_consider = self.seq_to_classes[seq_id]['pos_cat_ids'] \ - # + self.seq_to_classes[seq_id]['neg_cat_ids'] - classes_to_consider = all_classes # class-agnostic - all_tracks = self.tracker_data[tracker]['vids_to_tracks'][seq_id] - - # classes_to_tracks = {cls: [track for track in all_tracks if track['category_id'] == cls] - # if cls in classes_to_consider else [] for cls in all_classes} - classes_to_tracks = {cls: [track for track in all_tracks] - if 
cls in classes_to_consider else [] for cls in all_classes} # class-agnostic - - # mapping from classes to track information - raw_data['classes_to_tracks'] = {cls: [{det['image_id']: np.atleast_1d(det['bbox']) - for det in track['annotations']} for track in tracks] - for cls, tracks in classes_to_tracks.items()} - raw_data['classes_to_track_ids'] = {cls: [track['id'] for track in tracks] - for cls, tracks in classes_to_tracks.items()} - raw_data['classes_to_track_areas'] = {cls: [track['area'] for track in tracks] - for cls, tracks in classes_to_tracks.items()} - raw_data['classes_to_track_lengths'] = {cls: [len(track['annotations']) for track in tracks] - for cls, tracks in classes_to_tracks.items()} - - if not is_gt: - raw_data['classes_to_dt_track_scores'] = {cls: np.array([np.mean([float(x['score']) - for x in track['annotations']]) - for track in tracks]) - for cls, tracks in classes_to_tracks.items()} - - if is_gt: - key_map = {'classes_to_tracks': 'classes_to_gt_tracks', - 'classes_to_track_ids': 'classes_to_gt_track_ids', - 'classes_to_track_lengths': 'classes_to_gt_track_lengths', - 'classes_to_track_areas': 'classes_to_gt_track_areas'} - else: - key_map = {'classes_to_tracks': 'classes_to_dt_tracks', - 'classes_to_track_ids': 'classes_to_dt_track_ids', - 'classes_to_track_lengths': 'classes_to_dt_track_lengths', - 'classes_to_track_areas': 'classes_to_dt_track_areas'} - for k, v in key_map.items(): - raw_data[v] = raw_data.pop(k) - - raw_data['num_timesteps'] = num_timesteps - raw_data['neg_cat_ids'] = self.seq_to_classes[seq_id]['neg_cat_ids'] - raw_data['not_exhaustively_labeled_cls'] = self.seq_to_classes[seq_id]['not_exhaustively_labeled_cat_ids'] - raw_data['seq'] = seq - return raw_data - - @_timing.time - def get_preprocessed_seq_data(self, raw_data, cls): - """ Preprocess data for a single sequence for a single class ready for evaluation. - Inputs: - - raw_data is a dict containing the data for the sequence already read in by get_raw_seq_data(). - - cls is the class to be evaluated. - Outputs: - - data is a dict containing all of the information that metrics need to perform evaluation. - It contains the following fields: - [num_timesteps, num_gt_ids, num_tracker_ids, num_gt_dets, num_tracker_dets] : integers. - [gt_ids, tracker_ids, tracker_confidences]: list (for each timestep) of 1D NDArrays (for each det). - [gt_dets, tracker_dets]: list (for each timestep) of lists of detections. - [similarity_scores]: list (for each timestep) of 2D NDArrays. - Notes: - General preprocessing (preproc) occurs in 4 steps. Some datasets may not use all of these steps. - 1) Extract only detections relevant for the class to be evaluated (including distractor detections). - 2) Match gt dets and tracker dets. Remove tracker dets that are matched to a gt det that is of a - distractor class, or otherwise marked as to be removed. - 3) Remove unmatched tracker dets if they fall within a crowd ignore region or don't meet a certain - other criteria (e.g. are too small). - 4) Remove gt dets that were only useful for preprocessing and not for actual evaluation. - After the above preprocessing steps, this function also calculates the number of gt and tracker detections - and unique track ids. It also relabels gt and tracker ids to be contiguous and checks that ids are - unique within each timestep. - TAO: - In TAO, the 4 preproc steps are as follow: - 1) All classes present in the ground truth data are evaluated separately. - 2) No matched tracker detections are removed. 
- 3) Unmatched tracker detections are removed if there is not ground truth data and the class does not - belong to the categories marked as negative for this sequence. Additionally, unmatched tracker - detections for classes which are marked as not exhaustively labeled are removed. - 4) No gt detections are removed. - Further, for TrackMAP computation track representations for the given class are accessed from a dictionary - and the tracks from the tracker data are sorted according to the tracker confidence. - """ - cls_id = self.class_name_to_class_id[cls] - is_not_exhaustively_labeled = cls_id in raw_data['not_exhaustively_labeled_cls'] - is_neg_category = cls_id in raw_data['neg_cat_ids'] - - data_keys = ['gt_ids', 'tracker_ids', 'gt_dets', 'tracker_dets', 'tracker_confidences', 'similarity_scores'] - data = {key: [None] * raw_data['num_timesteps'] for key in data_keys} - unique_gt_ids = [] - unique_tracker_ids = [] - num_gt_dets = 0 - num_tracker_dets = 0 - for t in range(raw_data['num_timesteps']): - - # Only extract relevant dets for this class for preproc and eval (cls) - gt_class_mask = np.atleast_1d(raw_data['gt_classes'][t] == cls_id) - gt_class_mask = gt_class_mask.astype(np.bool) - gt_ids = raw_data['gt_ids'][t][gt_class_mask] - gt_dets = raw_data['gt_dets'][t][gt_class_mask] - - tracker_class_mask = np.atleast_1d(raw_data['tracker_classes'][t] == cls_id) - tracker_class_mask = tracker_class_mask.astype(np.bool) - tracker_ids = raw_data['tracker_ids'][t][tracker_class_mask] - tracker_dets = raw_data['tracker_dets'][t][tracker_class_mask] - tracker_confidences = raw_data['tracker_confidences'][t][tracker_class_mask] - similarity_scores = raw_data['similarity_scores'][t][gt_class_mask, :][:, tracker_class_mask] - - # Match tracker and gt dets (with hungarian algorithm). 
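# Minimal, self-contained sketch of the matching step that follows, on a
# hypothetical 2x3 IoU matrix (rows: GT detections, cols: tracker detections);
# shapes and values are illustrative only, but the 0.5-IoU thresholding and the
# Hungarian assignment mirror the code below.
import numpy as np
from scipy.optimize import linear_sum_assignment

iou = np.array([[0.9, 0.2, 0.0],
                [0.1, 0.7, 0.3]])                        # hypothetical GT-vs-tracker IoUs
scores = iou.copy()
scores[scores < 0.5 - np.finfo('float').eps] = 0          # pairs below 0.5 IoU cannot match
rows, cols = linear_sum_assignment(-scores)                # Hungarian assignment, maximising total IoU
actually_matched = scores[rows, cols] > 0 + np.finfo('float').eps
matched_tracker_cols = cols[actually_matched]
unmatched_tracker = np.setdiff1d(np.arange(iou.shape[1]), matched_tracker_cols)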
- unmatched_indices = np.arange(tracker_ids.shape[0]) - if gt_ids.shape[0] > 0 and tracker_ids.shape[0] > 0: - matching_scores = similarity_scores.copy() - matching_scores[matching_scores < 0.5 - np.finfo('float').eps] = 0 - match_rows, match_cols = linear_sum_assignment(-matching_scores) - actually_matched_mask = matching_scores[match_rows, match_cols] > 0 + np.finfo('float').eps - match_cols = match_cols[actually_matched_mask] - unmatched_indices = np.delete(unmatched_indices, match_cols, axis=0) - - if gt_ids.shape[0] == 0 and not is_neg_category: - to_remove_tracker = unmatched_indices - elif is_not_exhaustively_labeled: - to_remove_tracker = unmatched_indices - else: - to_remove_tracker = np.array([], dtype=np.int) - - # remove all unwanted unmatched tracker detections - data['tracker_ids'][t] = np.delete(tracker_ids, to_remove_tracker, axis=0) - data['tracker_dets'][t] = np.delete(tracker_dets, to_remove_tracker, axis=0) - data['tracker_confidences'][t] = np.delete(tracker_confidences, to_remove_tracker, axis=0) - similarity_scores = np.delete(similarity_scores, to_remove_tracker, axis=1) - - data['gt_ids'][t] = gt_ids - data['gt_dets'][t] = gt_dets - data['similarity_scores'][t] = similarity_scores - - unique_gt_ids += list(np.unique(data['gt_ids'][t])) - unique_tracker_ids += list(np.unique(data['tracker_ids'][t])) - num_tracker_dets += len(data['tracker_ids'][t]) - num_gt_dets += len(data['gt_ids'][t]) - - # Re-label IDs such that there are no empty IDs - if len(unique_gt_ids) > 0: - unique_gt_ids = np.unique(unique_gt_ids) - gt_id_map = np.nan * np.ones((np.max(unique_gt_ids) + 1)) - gt_id_map[unique_gt_ids] = np.arange(len(unique_gt_ids)) - for t in range(raw_data['num_timesteps']): - if len(data['gt_ids'][t]) > 0: - data['gt_ids'][t] = gt_id_map[data['gt_ids'][t]].astype(np.int) - if len(unique_tracker_ids) > 0: - unique_tracker_ids = np.unique(unique_tracker_ids) - tracker_id_map = np.nan * np.ones((np.max(unique_tracker_ids) + 1)) - tracker_id_map[unique_tracker_ids] = np.arange(len(unique_tracker_ids)) - for t in range(raw_data['num_timesteps']): - if len(data['tracker_ids'][t]) > 0: - data['tracker_ids'][t] = tracker_id_map[data['tracker_ids'][t]].astype(np.int) - - # Record overview statistics. 
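# The relabelling above maps the original (possibly sparse) GT and tracker IDs onto
# contiguous 0..N-1 ranges; the fields recorded below summarise detection/ID counts
# and expose the per-class track representations used by TrackMAP and the other
# metrics (the tracker tracks are then sorted by confidence further down).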
- data['num_tracker_dets'] = num_tracker_dets - data['num_gt_dets'] = num_gt_dets - data['num_tracker_ids'] = len(unique_tracker_ids) - data['num_gt_ids'] = len(unique_gt_ids) - data['num_timesteps'] = raw_data['num_timesteps'] - data['seq'] = raw_data['seq'] - - # get track representations - data['gt_tracks'] = raw_data['classes_to_gt_tracks'][cls_id] - data['gt_track_ids'] = raw_data['classes_to_gt_track_ids'][cls_id] - data['gt_track_lengths'] = raw_data['classes_to_gt_track_lengths'][cls_id] - data['gt_track_areas'] = raw_data['classes_to_gt_track_areas'][cls_id] - data['dt_tracks'] = raw_data['classes_to_dt_tracks'][cls_id] - data['dt_track_ids'] = raw_data['classes_to_dt_track_ids'][cls_id] - data['dt_track_lengths'] = raw_data['classes_to_dt_track_lengths'][cls_id] - data['dt_track_areas'] = raw_data['classes_to_dt_track_areas'][cls_id] - data['dt_track_scores'] = raw_data['classes_to_dt_track_scores'][cls_id] - data['not_exhaustively_labeled'] = is_not_exhaustively_labeled - data['iou_type'] = 'bbox' - - # sort tracker data tracks by tracker confidence scores - if data['dt_tracks']: - idx = np.argsort([-score for score in data['dt_track_scores']], kind="mergesort") - data['dt_track_scores'] = [data['dt_track_scores'][i] for i in idx] - data['dt_tracks'] = [data['dt_tracks'][i] for i in idx] - data['dt_track_ids'] = [data['dt_track_ids'][i] for i in idx] - data['dt_track_lengths'] = [data['dt_track_lengths'][i] for i in idx] - data['dt_track_areas'] = [data['dt_track_areas'][i] for i in idx] - # Ensure that ids are unique per timestep. - self._check_unique_ids(data) - - return data - - def _calculate_similarities(self, gt_dets_t, tracker_dets_t): - similarity_scores = self._calculate_box_ious(gt_dets_t, tracker_dets_t) - return similarity_scores - - def _merge_categories(self, annotations): - """ - Merges categories with a merged tag. Adapted from https://github.com/TAO-Dataset - :param annotations: the annotations in which the classes should be merged - :return: None - """ - merge_map = {} - for category in self.gt_data['categories']: - if 'merged' in category: - for to_merge in category['merged']: - merge_map[to_merge['id']] = category['id'] - - for ann in annotations: - ann['category_id'] = merge_map.get(ann['category_id'], ann['category_id']) - - def _compute_vid_mappings(self, annotations): - """ - Computes mappings from Videos to corresponding tracks and images. 
- :param annotations: the annotations for which the mapping should be generated - :return: the video-to-track-mapping, the video-to-image-mapping - """ - vids_to_tracks = {} - vids_to_imgs = {} - vid_ids = [vid['id'] for vid in self.gt_data['videos']] - - # compute an mapping from image IDs to images - images = {} - for image in self.gt_data['images']: - images[image['id']] = image - - for ann in annotations: - ann["area"] = ann["bbox"][2] * ann["bbox"][3] - - vid = ann["video_id"] - if ann["video_id"] not in vids_to_tracks.keys(): - vids_to_tracks[ann["video_id"]] = list() - if ann["video_id"] not in vids_to_imgs.keys(): - vids_to_imgs[ann["video_id"]] = list() - - # Fill in vids_to_tracks - tid = ann["track_id"] - exist_tids = [track["id"] for track in vids_to_tracks[vid]] - try: - index1 = exist_tids.index(tid) - except ValueError: - index1 = -1 - if tid not in exist_tids: - curr_track = {"id": tid, "category_id": ann['category_id'], - "video_id": vid, "annotations": [ann]} - vids_to_tracks[vid].append(curr_track) - else: - vids_to_tracks[vid][index1]["annotations"].append(ann) - - # Fill in vids_to_imgs - img_id = ann['image_id'] - exist_img_ids = [img["id"] for img in vids_to_imgs[vid]] - try: - index2 = exist_img_ids.index(img_id) - except ValueError: - index2 = -1 - if index2 == -1: - curr_img = {"id": img_id, "annotations": [ann]} - vids_to_imgs[vid].append(curr_img) - else: - vids_to_imgs[vid][index2]["annotations"].append(ann) - - # sort annotations by frame index and compute track area - for vid, tracks in vids_to_tracks.items(): - for track in tracks: - track["annotations"] = sorted( - track['annotations'], - key=lambda x: images[x['image_id']]['frame_index']) - # Computer average area - track["area"] = (sum(x['area'] for x in track['annotations']) / len(track['annotations'])) - - # Ensure all videos are present - for vid_id in vid_ids: - if vid_id not in vids_to_tracks.keys(): - vids_to_tracks[vid_id] = [] - if vid_id not in vids_to_imgs.keys(): - vids_to_imgs[vid_id] = [] - - return vids_to_tracks, vids_to_imgs - - def _compute_image_to_timestep_mappings(self): - """ - Computes a mapping from images to the corresponding timestep in the sequence. - :return: the image-to-timestep-mapping - """ - images = {} - for image in self.gt_data['images']: - images[image['id']] = image - - seq_to_imgs_to_timestep = {vid['id']: dict() for vid in self.gt_data['videos']} - for vid in seq_to_imgs_to_timestep: - curr_imgs = [img['id'] for img in self.videos_to_gt_images[vid]] - curr_imgs = sorted(curr_imgs, key=lambda x: images[x]['frame_index']) - seq_to_imgs_to_timestep[vid] = {curr_imgs[i]: i for i in range(len(curr_imgs))} - - return seq_to_imgs_to_timestep - - def _limit_dets_per_image(self, annotations): - """ - Limits the number of detections for each image to config['MAX_DETECTIONS']. Adapted from - https://github.com/TAO-Dataset/ - :param annotations: the annotations in which the detections should be limited - :return: the annotations with limited detections - """ - max_dets = self.config['MAX_DETECTIONS'] - img_ann = defaultdict(list) - for ann in annotations: - img_ann[ann["image_id"]].append(ann) - - for img_id, _anns in img_ann.items(): - if len(_anns) <= max_dets: - continue - _anns = sorted(_anns, key=lambda x: x["score"], reverse=True) - img_ann[img_id] = _anns[:max_dets] - - return [ann for anns in img_ann.values() for ann in anns] - - def _fill_video_ids_inplace(self, annotations): - """ - Fills in missing video IDs inplace. 
Adapted from https://github.com/TAO-Dataset/ - :param annotations: the annotations for which the videos IDs should be filled inplace - :return: None - """ - missing_video_id = [x for x in annotations if 'video_id' not in x] - if missing_video_id: - image_id_to_video_id = { - x['id']: x['video_id'] for x in self.gt_data['images'] - } - for x in missing_video_id: - x['video_id'] = image_id_to_video_id[x['image_id']] - - @staticmethod - def _make_track_ids_unique(annotations): - """ - Makes the track IDs unqiue over the whole annotation set. Adapted from https://github.com/TAO-Dataset/ - :param annotations: the annotation set - :return: the number of updated IDs - """ - track_id_videos = {} - track_ids_to_update = set() - max_track_id = 0 - for ann in annotations: - t = ann['track_id'] - if t not in track_id_videos: - track_id_videos[t] = ann['video_id'] - - if ann['video_id'] != track_id_videos[t]: - # Track id is assigned to multiple videos - track_ids_to_update.add(t) - max_track_id = max(max_track_id, t) - - if track_ids_to_update: - print('true') - next_id = itertools.count(max_track_id + 1) - new_track_ids = defaultdict(lambda: next(next_id)) - for ann in annotations: - t = ann['track_id'] - v = ann['video_id'] - if t in track_ids_to_update: - ann['track_id'] = new_track_ids[t, v] - return len(track_ids_to_update) - - def _split_known_unknown_distractor(self): - all_ids = set([i for i in range(1, 2000)]) # 2000 is larger than the max category id in TAO-OW. - # `knowns` includes 78 TAO_category_ids that corresponds to 78 COCO classes. - # (The other 2 COCO classes do not have corresponding classes in TAO). - self.knowns = {4, 13, 1038, 544, 1057, 34, 35, 36, 41, 45, 58, 60, 579, 1091, 1097, 1099, 78, 79, 81, 91, 1115, - 1117, 95, 1122, 99, 1132, 621, 1135, 625, 118, 1144, 126, 642, 1155, 133, 1162, 139, 154, 174, 185, - 699, 1215, 714, 717, 1229, 211, 729, 221, 229, 747, 235, 237, 779, 276, 805, 299, 829, 852, 347, - 371, 382, 896, 392, 926, 937, 428, 429, 961, 452, 979, 980, 982, 475, 480, 993, 1001, 502, 1018} - # `distractors` is defined as in the paper "Opening up Open-World Tracking" - self.distractors = {20, 63, 108, 180, 188, 204, 212, 247, 303, 403, 407, 415, 490, 504, 507, 513, 529, 567, - 569, 588, 672, 691, 702, 708, 711, 720, 736, 737, 798, 813, 815, 827, 831, 851, 877, 883, - 912, 971, 976, 1130, 1133, 1134, 1169, 1184, 1220} - self.unknowns = all_ids.difference(self.knowns.union(self.distractors)) - - def _filter_gt_data(self, raw_gt_data): - """ - Filter out irrelevant data in the raw_gt_data - Args: - raw_gt_data: directly loaded from json. 
- - Returns: - filtered gt_data - """ - valid_cat_ids = list() - if self.subset == "known": - valid_cat_ids = self.knowns - elif self.subset == "distractor": - valid_cat_ids = self.distractors - elif self.subset == "unknown": - valid_cat_ids = self.unknowns - # elif self.subset == "test_only_unknowns": - # valid_cat_ids = test_only_unknowns - else: - raise Exception("The parameter `SUBSET` is incorrect") - - filtered = dict() - filtered["videos"] = raw_gt_data["videos"] - # filtered["videos"] = list() - unwanted_vid = set() - # for video in raw_gt_data["videos"]: - # datasrc = video["name"].split('/')[1] - # if datasrc in data_srcs: - # filtered["videos"].append(video) - # else: - # unwanted_vid.add(video["id"]) - - filtered["annotations"] = list() - for ann in raw_gt_data["annotations"]: - if (ann["video_id"] not in unwanted_vid) and (ann["category_id"] in valid_cat_ids): - filtered["annotations"].append(ann) - - filtered["tracks"] = list() - for track in raw_gt_data["tracks"]: - if (track["video_id"] not in unwanted_vid) and (track["category_id"] in valid_cat_ids): - filtered["tracks"].append(track) - - filtered["images"] = list() - for image in raw_gt_data["images"]: - if image["video_id"] not in unwanted_vid: - filtered["images"].append(image) - - filtered["categories"] = list() - for cat in raw_gt_data["categories"]: - if cat["id"] in valid_cat_ids: - filtered["categories"].append(cat) - - filtered["info"] = raw_gt_data["info"] - filtered["licenses"] = raw_gt_data["licenses"] - - return filtered diff --git a/spaces/xiang2811/ChatGPT/modules/models.py b/spaces/xiang2811/ChatGPT/modules/models.py deleted file mode 100644 index 4617e3df52ce2e1e4ad743b577364a33b324f26c..0000000000000000000000000000000000000000 --- a/spaces/xiang2811/ChatGPT/modules/models.py +++ /dev/null @@ -1,578 +0,0 @@ -from __future__ import annotations -from typing import TYPE_CHECKING, List - -import logging -import json -import commentjson as cjson -import os -import sys -import requests -import urllib3 -import platform - -from tqdm import tqdm -import colorama -from duckduckgo_search import ddg -import asyncio -import aiohttp -from enum import Enum -import uuid - -from .presets import * -from .llama_func import * -from .utils import * -from . 
import shared -from .config import retrieve_proxy -from modules import config -from .base_model import BaseLLMModel, ModelType - - -class OpenAIClient(BaseLLMModel): - def __init__( - self, - model_name, - api_key, - system_prompt=INITIAL_SYSTEM_PROMPT, - temperature=1.0, - top_p=1.0, - ) -> None: - super().__init__( - model_name=model_name, - temperature=temperature, - top_p=top_p, - system_prompt=system_prompt, - ) - self.api_key = api_key - self.need_api_key = True - self._refresh_header() - - def get_answer_stream_iter(self): - response = self._get_response(stream=True) - if response is not None: - iter = self._decode_chat_response(response) - partial_text = "" - for i in iter: - partial_text += i - yield partial_text - else: - yield STANDARD_ERROR_MSG + GENERAL_ERROR_MSG - - def get_answer_at_once(self): - response = self._get_response() - response = json.loads(response.text) - content = response["choices"][0]["message"]["content"] - total_token_count = response["usage"]["total_tokens"] - return content, total_token_count - - def count_token(self, user_input): - input_token_count = count_token(construct_user(user_input)) - if self.system_prompt is not None and len(self.all_token_counts) == 0: - system_prompt_token_count = count_token( - construct_system(self.system_prompt) - ) - return input_token_count + system_prompt_token_count - return input_token_count - - def billing_info(self): - try: - curr_time = datetime.datetime.now() - last_day_of_month = get_last_day_of_month( - curr_time).strftime("%Y-%m-%d") - first_day_of_month = curr_time.replace(day=1).strftime("%Y-%m-%d") - usage_url = f"{shared.state.usage_api_url}?start_date={first_day_of_month}&end_date={last_day_of_month}" - try: - usage_data = self._get_billing_data(usage_url) - except Exception as e: - logging.error(f"获取API使用情况失败:" + str(e)) - return i18n("**获取API使用情况失败**") - rounded_usage = "{:.5f}".format(usage_data["total_usage"] / 100) - return i18n("**本月使用金额** ") + f"\u3000 ${rounded_usage}" - except requests.exceptions.ConnectTimeout: - status_text = ( - STANDARD_ERROR_MSG + CONNECTION_TIMEOUT_MSG + ERROR_RETRIEVE_MSG - ) - return status_text - except requests.exceptions.ReadTimeout: - status_text = STANDARD_ERROR_MSG + READ_TIMEOUT_MSG + ERROR_RETRIEVE_MSG - return status_text - except Exception as e: - logging.error(i18n("获取API使用情况失败:") + str(e)) - return STANDARD_ERROR_MSG + ERROR_RETRIEVE_MSG - - def set_token_upper_limit(self, new_upper_limit): - pass - - @shared.state.switching_api_key # 在不开启多账号模式的时候,这个装饰器不会起作用 - def _get_response(self, stream=False): - openai_api_key = self.api_key - system_prompt = self.system_prompt - history = self.history - logging.debug(colorama.Fore.YELLOW + - f"{history}" + colorama.Fore.RESET) - headers = { - "Content-Type": "application/json", - "Authorization": f"Bearer {openai_api_key}", - } - - if system_prompt is not None: - history = [construct_system(system_prompt), *history] - - payload = { - "model": self.model_name, - "messages": history, - "temperature": self.temperature, - "top_p": self.top_p, - "n": self.n_choices, - "stream": stream, - "presence_penalty": self.presence_penalty, - "frequency_penalty": self.frequency_penalty, - } - - if self.max_generation_token is not None: - payload["max_tokens"] = self.max_generation_token - if self.stop_sequence is not None: - payload["stop"] = self.stop_sequence - if self.logit_bias is not None: - payload["logit_bias"] = self.logit_bias - if self.user_identifier is not None: - payload["user"] = self.user_identifier - - if stream: - timeout 
= TIMEOUT_STREAMING - else: - timeout = TIMEOUT_ALL - - # 如果有自定义的api-host,使用自定义host发送请求,否则使用默认设置发送请求 - if shared.state.completion_url != COMPLETION_URL: - logging.info(f"使用自定义API URL: {shared.state.completion_url}") - - with retrieve_proxy(): - try: - response = requests.post( - shared.state.completion_url, - headers=headers, - json=payload, - stream=stream, - timeout=timeout, - ) - except: - return None - return response - - def _refresh_header(self): - self.headers = { - "Content-Type": "application/json", - "Authorization": f"Bearer {self.api_key}", - } - - def _get_billing_data(self, billing_url): - with retrieve_proxy(): - response = requests.get( - billing_url, - headers=self.headers, - timeout=TIMEOUT_ALL, - ) - - if response.status_code == 200: - data = response.json() - return data - else: - raise Exception( - f"API request failed with status code {response.status_code}: {response.text}" - ) - - def _decode_chat_response(self, response): - error_msg = "" - for chunk in response.iter_lines(): - if chunk: - chunk = chunk.decode() - chunk_length = len(chunk) - try: - chunk = json.loads(chunk[6:]) - except json.JSONDecodeError: - print(i18n("JSON解析错误,收到的内容: ") + f"{chunk}") - error_msg += chunk - continue - if chunk_length > 6 and "delta" in chunk["choices"][0]: - if chunk["choices"][0]["finish_reason"] == "stop": - break - try: - yield chunk["choices"][0]["delta"]["content"] - except Exception as e: - # logging.error(f"Error: {e}") - continue - if error_msg: - raise Exception(error_msg) - - -class ChatGLM_Client(BaseLLMModel): - def __init__(self, model_name) -> None: - super().__init__(model_name=model_name) - from transformers import AutoTokenizer, AutoModel - import torch - global CHATGLM_TOKENIZER, CHATGLM_MODEL - if CHATGLM_TOKENIZER is None or CHATGLM_MODEL is None: - system_name = platform.system() - model_path = None - if os.path.exists("models"): - model_dirs = os.listdir("models") - if model_name in model_dirs: - model_path = f"models/{model_name}" - if model_path is not None: - model_source = model_path - else: - model_source = f"THUDM/{model_name}" - CHATGLM_TOKENIZER = AutoTokenizer.from_pretrained( - model_source, trust_remote_code=True - ) - quantified = False - if "int4" in model_name: - quantified = True - model = AutoModel.from_pretrained( - model_source, trust_remote_code=True - ) - if torch.cuda.is_available(): - # run on CUDA - logging.info("CUDA is available, using CUDA") - model = model.half().cuda() - # mps加速还存在一些问题,暂时不使用 - elif system_name == "Darwin" and model_path is not None and not quantified: - logging.info("Running on macOS, using MPS") - # running on macOS and model already downloaded - model = model.half().to("mps") - else: - logging.info("GPU is not available, using CPU") - model = model.float() - model = model.eval() - CHATGLM_MODEL = model - - def _get_glm_style_input(self): - history = [x["content"] for x in self.history] - query = history.pop() - logging.debug(colorama.Fore.YELLOW + - f"{history}" + colorama.Fore.RESET) - assert ( - len(history) % 2 == 0 - ), f"History should be even length. 
current history is: {history}" - history = [[history[i], history[i + 1]] - for i in range(0, len(history), 2)] - return history, query - - def get_answer_at_once(self): - history, query = self._get_glm_style_input() - response, _ = CHATGLM_MODEL.chat( - CHATGLM_TOKENIZER, query, history=history) - return response, len(response) - - def get_answer_stream_iter(self): - history, query = self._get_glm_style_input() - for response, history in CHATGLM_MODEL.stream_chat( - CHATGLM_TOKENIZER, - query, - history, - max_length=self.token_upper_limit, - top_p=self.top_p, - temperature=self.temperature, - ): - yield response - - -class LLaMA_Client(BaseLLMModel): - def __init__( - self, - model_name, - lora_path=None, - ) -> None: - super().__init__(model_name=model_name) - from lmflow.datasets.dataset import Dataset - from lmflow.pipeline.auto_pipeline import AutoPipeline - from lmflow.models.auto_model import AutoModel - from lmflow.args import ModelArguments, DatasetArguments, InferencerArguments - - self.max_generation_token = 1000 - self.end_string = "\n\n" - # We don't need input data - data_args = DatasetArguments(dataset_path=None) - self.dataset = Dataset(data_args) - self.system_prompt = "" - - global LLAMA_MODEL, LLAMA_INFERENCER - if LLAMA_MODEL is None or LLAMA_INFERENCER is None: - model_path = None - if os.path.exists("models"): - model_dirs = os.listdir("models") - if model_name in model_dirs: - model_path = f"models/{model_name}" - if model_path is not None: - model_source = model_path - else: - model_source = f"decapoda-research/{model_name}" - # raise Exception(f"models目录下没有这个模型: {model_name}") - if lora_path is not None: - lora_path = f"lora/{lora_path}" - model_args = ModelArguments(model_name_or_path=model_source, lora_model_path=lora_path, model_type=None, config_overrides=None, config_name=None, tokenizer_name=None, cache_dir=None, - use_fast_tokenizer=True, model_revision='main', use_auth_token=False, torch_dtype=None, use_lora=False, lora_r=8, lora_alpha=32, lora_dropout=0.1, use_ram_optimized_load=True) - pipeline_args = InferencerArguments( - local_rank=0, random_seed=1, deepspeed='configs/ds_config_chatbot.json', mixed_precision='bf16') - - with open(pipeline_args.deepspeed, "r") as f: - ds_config = json.load(f) - LLAMA_MODEL = AutoModel.get_model( - model_args, - tune_strategy="none", - ds_config=ds_config, - ) - LLAMA_INFERENCER = AutoPipeline.get_pipeline( - pipeline_name="inferencer", - model_args=model_args, - data_args=data_args, - pipeline_args=pipeline_args, - ) - # Chats - # model_name = model_args.model_name_or_path - # if model_args.lora_model_path is not None: - # model_name += f" + {model_args.lora_model_path}" - - # context = ( - # "You are a helpful assistant who follows the given instructions" - # " unconditionally." 
- # ) - - def _get_llama_style_input(self): - history = [] - instruction = "" - if self.system_prompt: - instruction = (f"Instruction: {self.system_prompt}\n") - for x in self.history: - if x["role"] == "user": - history.append(f"{instruction}Input: {x['content']}") - else: - history.append(f"Output: {x['content']}") - context = "\n\n".join(history) - context += "\n\nOutput: " - return context - - def get_answer_at_once(self): - context = self._get_llama_style_input() - - input_dataset = self.dataset.from_dict( - {"type": "text_only", "instances": [{"text": context}]} - ) - - output_dataset = LLAMA_INFERENCER.inference( - model=LLAMA_MODEL, - dataset=input_dataset, - max_new_tokens=self.max_generation_token, - temperature=self.temperature, - ) - - response = output_dataset.to_dict()["instances"][0]["text"] - return response, len(response) - - def get_answer_stream_iter(self): - context = self._get_llama_style_input() - partial_text = "" - step = 1 - for _ in range(0, self.max_generation_token, step): - input_dataset = self.dataset.from_dict( - {"type": "text_only", "instances": [ - {"text": context + partial_text}]} - ) - output_dataset = LLAMA_INFERENCER.inference( - model=LLAMA_MODEL, - dataset=input_dataset, - max_new_tokens=step, - temperature=self.temperature, - ) - response = output_dataset.to_dict()["instances"][0]["text"] - if response == "" or response == self.end_string: - break - partial_text += response - yield partial_text - - -class XMBot_Client(BaseLLMModel): - def __init__(self, api_key): - super().__init__(model_name="xmchat") - self.api_key = api_key - self.session_id = None - self.reset() - self.image_bytes = None - self.image_path = None - self.xm_history = [] - self.url = "https://xmbot.net/web" - - def reset(self): - self.session_id = str(uuid.uuid4()) - return [], "已重置" - - def try_read_image(self, filepath): - import base64 - - def is_image_file(filepath): - # 判断文件是否为图片 - valid_image_extensions = [".jpg", ".jpeg", ".png", ".bmp", ".gif", ".tiff"] - file_extension = os.path.splitext(filepath)[1].lower() - return file_extension in valid_image_extensions - - def read_image_as_bytes(filepath): - # 读取图片文件并返回比特流 - with open(filepath, "rb") as f: - image_bytes = f.read() - return image_bytes - - if is_image_file(filepath): - logging.info(f"读取图片文件: {filepath}") - image_bytes = read_image_as_bytes(filepath) - base64_encoded_image = base64.b64encode(image_bytes).decode() - self.image_bytes = base64_encoded_image - self.image_path = filepath - else: - self.image_bytes = None - self.image_path = None - - def prepare_inputs(self, real_inputs, use_websearch, files, reply_language, chatbot): - fake_inputs = real_inputs - display_append = "" - limited_context = False - return limited_context, fake_inputs, display_append, real_inputs, chatbot - - def handle_file_upload(self, files, chatbot): - """if the model accepts multi modal input, implement this function""" - if files: - for file in files: - if file.name: - logging.info(f"尝试读取图像: {file.name}") - self.try_read_image(file.name) - if self.image_path is not None: - chatbot = chatbot + [((self.image_path,), None)] - if self.image_bytes is not None: - logging.info("使用图片作为输入") - conv_id = str(uuid.uuid4()) - data = { - "user_id": self.api_key, - "session_id": self.session_id, - "uuid": conv_id, - "data_type": "imgbase64", - "data": self.image_bytes - } - response = requests.post(self.url, json=data) - response = json.loads(response.text) - logging.info(f"图片回复: {response['data']}") - return None, chatbot, None - - def 
get_answer_at_once(self): - question = self.history[-1]["content"] - conv_id = str(uuid.uuid4()) - data = { - "user_id": self.api_key, - "session_id": self.session_id, - "uuid": conv_id, - "data_type": "text", - "data": question - } - response = requests.post(self.url, json=data) - try: - response = json.loads(response.text) - return response["data"], len(response["data"]) - except Exception as e: - return response.text, len(response.text) - - - - -def get_model( - model_name, - lora_model_path=None, - access_key=None, - temperature=None, - top_p=None, - system_prompt=None, -) -> BaseLLMModel: - msg = i18n("模型设置为了:") + f" {model_name}" - model_type = ModelType.get_type(model_name) - lora_selector_visibility = False - lora_choices = [] - dont_change_lora_selector = False - if model_type != ModelType.OpenAI: - config.local_embedding = True - # del current_model.model - model = None - try: - if model_type == ModelType.OpenAI: - logging.info(f"正在加载OpenAI模型: {model_name}") - model = OpenAIClient( - model_name=model_name, - api_key=access_key, - system_prompt=system_prompt, - temperature=temperature, - top_p=top_p, - ) - elif model_type == ModelType.ChatGLM: - logging.info(f"正在加载ChatGLM模型: {model_name}") - model = ChatGLM_Client(model_name) - elif model_type == ModelType.LLaMA and lora_model_path == "": - msg = f"现在请为 {model_name} 选择LoRA模型" - logging.info(msg) - lora_selector_visibility = True - if os.path.isdir("lora"): - lora_choices = get_file_names( - "lora", plain=True, filetypes=[""]) - lora_choices = ["No LoRA"] + lora_choices - elif model_type == ModelType.LLaMA and lora_model_path != "": - logging.info(f"正在加载LLaMA模型: {model_name} + {lora_model_path}") - dont_change_lora_selector = True - if lora_model_path == "No LoRA": - lora_model_path = None - msg += " + No LoRA" - else: - msg += f" + {lora_model_path}" - model = LLaMA_Client(model_name, lora_model_path) - elif model_type == ModelType.XMBot: - model = XMBot_Client(api_key=access_key) - elif model_type == ModelType.Unknown: - raise ValueError(f"未知模型: {model_name}") - logging.info(msg) - except Exception as e: - logging.error(e) - msg = f"{STANDARD_ERROR_MSG}: {e}" - if dont_change_lora_selector: - return model, msg - else: - return model, msg, gr.Dropdown.update(choices=lora_choices, visible=lora_selector_visibility) - - -if __name__ == "__main__": - with open("config.json", "r") as f: - openai_api_key = cjson.load(f)["openai_api_key"] - # set logging level to debug - logging.basicConfig(level=logging.DEBUG) - # client = ModelManager(model_name="gpt-3.5-turbo", access_key=openai_api_key) - client = get_model(model_name="chatglm-6b-int4") - chatbot = [] - stream = False - # 测试账单功能 - logging.info(colorama.Back.GREEN + "测试账单功能" + colorama.Back.RESET) - logging.info(client.billing_info()) - # 测试问答 - logging.info(colorama.Back.GREEN + "测试问答" + colorama.Back.RESET) - question = "巴黎是中国的首都吗?" - for i in client.predict(inputs=question, chatbot=chatbot, stream=stream): - logging.info(i) - logging.info(f"测试问答后history : {client.history}") - # 测试记忆力 - logging.info(colorama.Back.GREEN + "测试记忆力" + colorama.Back.RESET) - question = "我刚刚问了你什么问题?" 
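# For reference, the XMBot client above talks to its backend with one JSON POST
# per message: a stable session_id groups the turns of a conversation, every
# message carries a fresh uuid, and the reply text is read from the "data"
# field of the JSON response (falling back to the raw body if parsing fails).
# A minimal sketch of that round trip, using the endpoint and fields shown in
# the code above (error handling simplified, purely illustrative):

import json
import uuid

import requests


def ask_xmbot(api_key, session_id, question, url="https://xmbot.net/web"):
    payload = {
        "user_id": api_key,          # account key
        "session_id": session_id,    # constant for the whole conversation
        "uuid": str(uuid.uuid4()),   # unique id for this single message
        "data_type": "text",
        "data": question,
    }
    resp = requests.post(url, json=payload, timeout=30)
    try:
        return json.loads(resp.text)["data"]
    except (ValueError, KeyError):
        return resp.text             # fall back to the raw body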
- for i in client.predict(inputs=question, chatbot=chatbot, stream=stream): - logging.info(i) - logging.info(f"测试记忆力后history : {client.history}") - # 测试重试功能 - logging.info(colorama.Back.GREEN + "测试重试功能" + colorama.Back.RESET) - for i in client.retry(chatbot=chatbot, stream=stream): - logging.info(i) - logging.info(f"重试后history : {client.history}") - # # 测试总结功能 - # print(colorama.Back.GREEN + "测试总结功能" + colorama.Back.RESET) - # chatbot, msg = client.reduce_token_size(chatbot=chatbot) - # print(chatbot, msg) - # print(f"总结后history: {client.history}") diff --git a/spaces/xu1998hz/sescore_english_coco/sescore_english_coco.py b/spaces/xu1998hz/sescore_english_coco/sescore_english_coco.py deleted file mode 100644 index d806ca94664e40b13b01294aa0b82da6dbfa298f..0000000000000000000000000000000000000000 --- a/spaces/xu1998hz/sescore_english_coco/sescore_english_coco.py +++ /dev/null @@ -1,135 +0,0 @@ -# Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -"""SEScore: a text generation evaluation metric """ - -import evaluate -import datasets - -import comet -from typing import Dict -import torch -from comet.encoders.base import Encoder -from comet.encoders.bert import BERTEncoder -from transformers import AutoModel, AutoTokenizer - -class robertaEncoder(BERTEncoder): - def __init__(self, pretrained_model: str) -> None: - super(Encoder, self).__init__() - self.tokenizer = AutoTokenizer.from_pretrained(pretrained_model) - self.model = AutoModel.from_pretrained( - pretrained_model, add_pooling_layer=False - ) - self.model.encoder.output_hidden_states = True - - @classmethod - def from_pretrained(cls, pretrained_model: str) -> Encoder: - return robertaEncoder(pretrained_model) - - def forward( - self, input_ids: torch.Tensor, attention_mask: torch.Tensor, **kwargs - ) -> Dict[str, torch.Tensor]: - last_hidden_states, _, all_layers = self.model( - input_ids=input_ids, - attention_mask=attention_mask, - output_hidden_states=True, - return_dict=False, - ) - return { - "sentemb": last_hidden_states[:, 0, :], - "wordemb": last_hidden_states, - "all_layers": all_layers, - "attention_mask": attention_mask, - } - - -# TODO: Add BibTeX citation -_CITATION = """\ -@inproceedings{xu-etal-2022-not, - title={Not All Errors are Equal: Learning Text Generation Metrics using Stratified Error Synthesis}, - author={Xu, Wenda and Tuan, Yi-lin and Lu, Yujie and Saxon, Michael and Li, Lei and Wang, William Yang}, - booktitle ={Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing}, - month={dec}, - year={2022}, - url={https://arxiv.org/abs/2210.05035} -} -""" - -_DESCRIPTION = """\ -SEScore is an evaluation metric that trys to compute an overall score to measure text generation quality. 
-""" - -_KWARGS_DESCRIPTION = """ -Calculates how good are predictions given some references -Args: - predictions: list of candidate outputs - references: list of references -Returns: - {"mean_score": mean_score, "scores": scores} - -Examples: - >>> import evaluate - >>> sescore = evaluate.load("xu1998hz/sescore") - >>> score = sescore.compute( - references=['sescore is a simple but effective next-generation text evaluation metric'], - predictions=['sescore is simple effective text evaluation metric for next generation'] - ) -""" - -# TODO: Define external resources urls if needed -BAD_WORDS_URL = "http://url/to/external/resource/bad_words.txt" - - -@evaluate.utils.file_utils.add_start_docstrings(_DESCRIPTION, _KWARGS_DESCRIPTION) -class SEScore(evaluate.Metric): - """SEScore""" - - def _info(self): - # TODO: Specifies the evaluate.EvaluationModuleInfo object - return evaluate.MetricInfo( - # This is the description that will appear on the modules page. - module_type="metric", - description=_DESCRIPTION, - citation=_CITATION, - inputs_description=_KWARGS_DESCRIPTION, - # This defines the format of each prediction and reference - features=datasets.Features({ - 'predictions': datasets.Value("string", id="sequence"), - 'references': datasets.Value("string", id="sequence"), - }), - # Homepage of the module for documentation - homepage="http://module.homepage", - # Additional links to the codebase or references - codebase_urls=["http://github.com/path/to/codebase/of/new_module"], - reference_urls=["http://path.to.reference.url/new_module"] - ) - - def _download_and_prepare(self, dl_manager): - """download SEScore checkpoints to compute the scores""" - # Download SEScore checkpoint - from comet import load_from_checkpoint - import os - from huggingface_hub import snapshot_download - # initialize roberta into str2encoder - comet.encoders.str2encoder['RoBERTa'] = robertaEncoder - destination = snapshot_download(repo_id="xu1998hz/sescore_english_coco", revision="main") - self.scorer = load_from_checkpoint(f'{destination}/checkpoint/caption.ckpt') - - def _compute(self, predictions, references, gpus=None, progress_bar=False): - if gpus is None: - gpus = 1 if torch.cuda.is_available() else 0 - - data = {"src": references, "mt": predictions} - data = [dict(zip(data, t)) for t in zip(*data.values())] - scores, mean_score = self.scorer.predict(data, gpus=gpus, progress_bar=progress_bar) - return {"mean_score": mean_score, "scores": scores} diff --git a/spaces/xuxw98/TAPA/finetune/lora.py b/spaces/xuxw98/TAPA/finetune/lora.py deleted file mode 100644 index 18737015c5d2290406a4558248d8b8b311cb5bf4..0000000000000000000000000000000000000000 --- a/spaces/xuxw98/TAPA/finetune/lora.py +++ /dev/null @@ -1,218 +0,0 @@ -""" -Instruction-tuning with LoRA on the Alpaca dataset. - -Note: If you run into a CUDA error "Expected is_sm80 to be true, but got false", uncomment the line -`torch.backends.cuda.enable_flash_sdp(False)` in the script below (see https://github.com/Lightning-AI/lit-llama/issues/101). 
-""" -import sys -from pathlib import Path -import os -import time - -import lightning as L -import numpy as np -import torch - -# support running without installing as a package -wd = Path(__file__).parent.parent.resolve() -sys.path.append(str(wd)) - -from generate import generate -from lit_llama.lora import mark_only_lora_as_trainable, lora, lora_state_dict -from lit_llama.model import LLaMA, LLaMAConfig -from lit_llama.tokenizer import Tokenizer -from scripts.prepare_alpaca import generate_prompt - - -instruction_tuning = True -eval_interval = 100 -save_interval = 100 -eval_iters = 100 -log_interval = 1 - -# Hyperparameters -learning_rate = 3e-4 -batch_size = 128 -micro_batch_size = 4 -gradient_accumulation_iters = batch_size // micro_batch_size -assert gradient_accumulation_iters > 0 -max_iters = 50000 * 3 // micro_batch_size -weight_decay = 0.0 -max_seq_length = 256 # see scripts/prepare_alpaca.py -lora_r = 8 -lora_alpha = 16 -lora_dropout = 0.05 -warmup_iters = 100 - - -def main( - data_dir: str = "data/alpaca", - pretrained_path: str = "checkpoints/lit-llama/7B/lit-llama.pth", - tokenizer_path: str = "checkpoints/lit-llama/tokenizer.model", - out_dir: str = "out/lora/alpaca", -): - - fabric = L.Fabric(accelerator="cuda", devices=1, precision="bf16-true") - fabric.launch() - fabric.seed_everything(1337 + fabric.global_rank) - - if fabric.global_rank == 0: - os.makedirs(out_dir, exist_ok=True) - - train_data, val_data = load_datasets(data_dir=data_dir) - - config = LLaMAConfig.from_name("7B") - config.block_size = max_seq_length - - checkpoint = torch.load(pretrained_path) - - with fabric.init_module(), lora(r=lora_r, alpha=lora_alpha, dropout=lora_dropout, enabled=True): - model = LLaMA(config) - # strict=False because missing keys due to LoRA weights not contained in checkpoint state - model.load_state_dict(checkpoint, strict=False) - - mark_only_lora_as_trainable(model) - - optimizer = torch.optim.AdamW(model.parameters(), lr=learning_rate) - model, optimizer = fabric.setup(model, optimizer) - train(fabric, model, optimizer, train_data, val_data, tokenizer_path, out_dir) - - # Save the final LoRA checkpoint at the end of training - checkpoint = lora_state_dict(model) - fabric.save(os.path.join(out_dir, "lit-llama-lora-finetuned.pth"), checkpoint) - - -def train( - fabric: L.Fabric, - model: torch.nn.Module, - optimizer: torch.optim.Optimizer, - train_data: np.ndarray, - val_data: np.ndarray, - tokenizer_path: str, - out_dir: str, -) -> None: - """The training loop. - - Loosely based on the nanoGPT implementation: https://github.com/karpathy/nanoGPT. 
- """ - step_count = 0 - - for iter_num in range(max_iters): - - if step_count <= warmup_iters: - # linear warmup - lr = learning_rate * step_count / warmup_iters - for param_group in optimizer.param_groups: - param_group['lr'] = lr - - t0 = time.time() - - input_ids, targets = get_batch(fabric, train_data) - with fabric.no_backward_sync(model, enabled=((iter_num + 1) % gradient_accumulation_iters != 0)): - logits = model(input_ids) - loss = loss_fn(logits, targets) - fabric.backward(loss / gradient_accumulation_iters) - - if (iter_num + 1) % gradient_accumulation_iters == 0: - optimizer.step() - optimizer.zero_grad() - step_count += 1 - - if step_count % eval_interval == 0: - val_loss = validate(fabric, model, val_data, tokenizer_path) - fabric.print(f"step {iter_num}: val loss {val_loss:.4f}") - fabric.barrier() - - if step_count % save_interval == 0: - print(f"Saving LoRA weights to {out_dir}") - # We are only saving the LoRA weights - # TODO: Provide a function/script to merge the LoRA weights with pretrained weights - checkpoint = lora_state_dict(model) - fabric.save(os.path.join(out_dir, f"iter-{iter_num:06d}-ckpt.pth"), checkpoint) - - dt = time.time() - t0 - if iter_num % log_interval == 0: - fabric.print(f"iter {iter_num}: loss {loss.item():.4f}, time: {dt*1000:.2f}ms") - - -def generate_response(model, instruction, tokenizer_path): - tokenizer = Tokenizer(tokenizer_path) - sample = {"instruction": instruction, "input": ""} - prompt = instruction - if instruction_tuning: - prompt = generate_prompt(sample) - encoded = tokenizer.encode(prompt, bos=True, eos=False, device=model.device) - - output = generate( - model, - idx=encoded, - max_seq_length=max_seq_length, - max_new_tokens=100, - ) - output = tokenizer.decode(output) - return output # output.split("### Response:")[1].strip() - - -@torch.no_grad() -def validate(fabric: L.Fabric, model: torch.nn.Module, val_data: np.ndarray, tokenizer_path: str) -> torch.Tensor: - fabric.print("Validating ...") - model.eval() - losses = torch.zeros(eval_iters) - for k in range(eval_iters): - input_ids, targets = get_batch(fabric, val_data) - logits = model(input_ids) - loss = loss_fn(logits, targets) - losses[k] = loss.item() - out = losses.mean() - - # produce an example: - instruction = "Recommend a movie for me to watch during the weekend and explain the reason." 
- - output = generate_response(model, instruction, tokenizer_path) - fabric.print(instruction) - fabric.print(output) - - model.train() - return out.item() - -def loss_fn(logits, targets): - # shift the targets such that output n predicts token n+1 - logits = logits[..., :-1, :].contiguous() - targets = targets[..., 1:].contiguous() - loss = torch.nn.functional.cross_entropy(logits.view(-1, logits.size(-1)), targets.view(-1), ignore_index=-1) - return loss - - -def get_batch(fabric: L.Fabric, data: list): - ix = torch.randint(len(data), (micro_batch_size,)) - - input_ids = [data[i]["input_ids"].type(torch.int64) for i in ix] - labels = [data[i]["labels"].type(torch.int64) for i in ix] - - max_len = max(len(s) for s in input_ids) - - def pad_right(x, pad_id): - # pad right based on the longest sequence - n = max_len - len(x) - return torch.cat((x, torch.full((n,), pad_id, dtype=x.dtype))) - - x = torch.stack([pad_right(x, pad_id=0) for x in input_ids]) - y = torch.stack([pad_right(x, pad_id=-1) for x in labels]) - x, y = fabric.to_device((x.pin_memory(), y.pin_memory())) - return x, y - - -def load_datasets(data_dir): - train_data = torch.load(os.path.join(data_dir, "train.pt")) - val_data = torch.load(os.path.join(data_dir, "test.pt")) - return train_data, val_data - - -if __name__ == "__main__": - # Uncomment this line if you see an error: "Expected is_sm80 to be true, but got false" - # torch.backends.cuda.enable_flash_sdp(False) - torch.set_float32_matmul_precision("high") - - from jsonargparse.cli import CLI - - CLI(main) diff --git a/spaces/yangheng/Super-Resolution-Anime-Diffusion/RealESRGANv030/inference_realesrgan_video.py b/spaces/yangheng/Super-Resolution-Anime-Diffusion/RealESRGANv030/inference_realesrgan_video.py deleted file mode 100644 index 170fb23971d135ebf0c854c652a0005d3f31abaa..0000000000000000000000000000000000000000 --- a/spaces/yangheng/Super-Resolution-Anime-Diffusion/RealESRGANv030/inference_realesrgan_video.py +++ /dev/null @@ -1,566 +0,0 @@ -import argparse -import cv2 -import glob -import mimetypes -import numpy as np -import os -import shutil -import subprocess -import torch -from basicsr.archs.rrdbnet_arch import RRDBNet -from basicsr.utils.download_util import load_file_from_url -from os import path as osp -from tqdm import tqdm - -from realesrgan import RealESRGANer -from realesrgan.archs.srvgg_arch import SRVGGNetCompact - -try: - import ffmpeg -except ImportError: - import pip - - pip.main(["install", "--user", "ffmpeg-python"]) - import ffmpeg - - -def get_video_meta_info(video_path): - ret = {} - probe = ffmpeg.probe(video_path) - video_streams = [ - stream for stream in probe["streams"] if stream["codec_type"] == "video" - ] - has_audio = any(stream["codec_type"] == "audio" for stream in probe["streams"]) - ret["width"] = video_streams[0]["width"] - ret["height"] = video_streams[0]["height"] - ret["fps"] = eval(video_streams[0]["avg_frame_rate"]) - ret["audio"] = ffmpeg.input(video_path).audio if has_audio else None - ret["nb_frames"] = int(video_streams[0]["nb_frames"]) - return ret - - -def get_sub_video(args, num_process, process_idx): - if num_process == 1: - return args.input - meta = get_video_meta_info(args.input) - duration = int(meta["nb_frames"] / meta["fps"]) - part_time = duration // num_process - print(f"duration: {duration}, part_time: {part_time}") - os.makedirs( - osp.join(args.output, f"{args.video_name}_inp_tmp_videos"), exist_ok=True - ) - out_path = osp.join( - args.output, f"{args.video_name}_inp_tmp_videos", f"{process_idx:03d}.mp4" 
- ) - cmd = [ - args.ffmpeg_bin, - f"-i {args.input}", - "-ss", - f"{part_time * process_idx}", - f"-to {part_time * (process_idx + 1)}" - if process_idx != num_process - 1 - else "", - "-async 1", - out_path, - "-y", - ] - print(" ".join(cmd)) - subprocess.call(" ".join(cmd), shell=True) - return out_path - - -class Reader: - def __init__(self, args, total_workers=1, worker_idx=0): - self.args = args - input_type = mimetypes.guess_type(args.input)[0] - self.input_type = "folder" if input_type is None else input_type - self.paths = [] # for image&folder type - self.audio = None - self.input_fps = None - if self.input_type.startswith("video"): - video_path = get_sub_video(args, total_workers, worker_idx) - self.stream_reader = ( - ffmpeg.input(video_path) - .output("pipe:", format="rawvideo", pix_fmt="bgr24", loglevel="error") - .run_async(pipe_stdin=True, pipe_stdout=True, cmd=args.ffmpeg_bin) - ) - meta = get_video_meta_info(video_path) - self.width = meta["width"] - self.height = meta["height"] - self.input_fps = meta["fps"] - self.audio = meta["audio"] - self.nb_frames = meta["nb_frames"] - - else: - if self.input_type.startswith("image"): - self.paths = [args.input] - else: - paths = sorted(glob.glob(os.path.join(args.input, "*"))) - tot_frames = len(paths) - num_frame_per_worker = tot_frames // total_workers + ( - 1 if tot_frames % total_workers else 0 - ) - self.paths = paths[ - num_frame_per_worker - * worker_idx : num_frame_per_worker - * (worker_idx + 1) - ] - - self.nb_frames = len(self.paths) - assert self.nb_frames > 0, "empty folder" - from PIL import Image - - tmp_img = Image.open(self.paths[0]) - self.width, self.height = tmp_img.size - self.idx = 0 - - def get_resolution(self): - return self.height, self.width - - def get_fps(self): - if self.args.fps is not None: - return self.args.fps - elif self.input_fps is not None: - return self.input_fps - return 24 - - def get_audio(self): - return self.audio - - def __len__(self): - return self.nb_frames - - def get_frame_from_stream(self): - img_bytes = self.stream_reader.stdout.read( - self.width * self.height * 3 - ) # 3 bytes for one pixel - if not img_bytes: - return None - img = np.frombuffer(img_bytes, np.uint8).reshape([self.height, self.width, 3]) - return img - - def get_frame_from_list(self): - if self.idx >= self.nb_frames: - return None - img = cv2.imread(self.paths[self.idx]) - self.idx += 1 - return img - - def get_frame(self): - if self.input_type.startswith("video"): - return self.get_frame_from_stream() - else: - return self.get_frame_from_list() - - def close(self): - if self.input_type.startswith("video"): - self.stream_reader.stdin.close() - self.stream_reader.wait() - - -class Writer: - def __init__(self, args, audio, height, width, video_save_path, fps): - out_width, out_height = int(width * args.outscale), int(height * args.outscale) - if out_height > 2160: - print( - "You are generating video that is larger than 4K, which will be very slow due to IO speed.", - "We highly recommend to decrease the outscale(aka, -s).", - ) - - if audio is not None: - self.stream_writer = ( - ffmpeg.input( - "pipe:", - format="rawvideo", - pix_fmt="bgr24", - s=f"{out_width}x{out_height}", - framerate=fps, - ) - .output( - audio, - video_save_path, - pix_fmt="yuv420p", - vcodec="libx264", - loglevel="error", - acodec="copy", - ) - .overwrite_output() - .run_async(pipe_stdin=True, pipe_stdout=True, cmd=args.ffmpeg_bin) - ) - else: - self.stream_writer = ( - ffmpeg.input( - "pipe:", - format="rawvideo", - pix_fmt="bgr24", - 
s=f"{out_width}x{out_height}", - framerate=fps, - ) - .output( - video_save_path, - pix_fmt="yuv420p", - vcodec="libx264", - loglevel="error", - ) - .overwrite_output() - .run_async(pipe_stdin=True, pipe_stdout=True, cmd=args.ffmpeg_bin) - ) - - def write_frame(self, frame): - frame = frame.astype(np.uint8).tobytes() - self.stream_writer.stdin.write(frame) - - def close(self): - self.stream_writer.stdin.close() - self.stream_writer.wait() - - -def inference_video(args, video_save_path, device=None, total_workers=1, worker_idx=0): - # ---------------------- determine models according to model names ---------------------- # - args.model_name = args.model_name.split(".pth")[0] - if args.model_name == "RealESRGAN_x4plus": # x4 RRDBNet model - model = RRDBNet( - num_in_ch=3, - num_out_ch=3, - num_feat=64, - num_block=23, - num_grow_ch=32, - scale=4, - ) - netscale = 4 - file_url = [ - "https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth" - ] - elif args.model_name == "RealESRNet_x4plus": # x4 RRDBNet model - model = RRDBNet( - num_in_ch=3, - num_out_ch=3, - num_feat=64, - num_block=23, - num_grow_ch=32, - scale=4, - ) - netscale = 4 - file_url = [ - "https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.1/RealESRNet_x4plus.pth" - ] - elif ( - args.model_name == "RealESRGAN_x4plus_anime_6B" - ): # x4 RRDBNet model with 6 blocks - model = RRDBNet( - num_in_ch=3, num_out_ch=3, num_feat=64, num_block=6, num_grow_ch=32, scale=4 - ) - netscale = 4 - file_url = [ - "https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.4/RealESRGAN_x4plus_anime_6B.pth" - ] - elif args.model_name == "RealESRGAN_x2plus": # x2 RRDBNet model - model = RRDBNet( - num_in_ch=3, - num_out_ch=3, - num_feat=64, - num_block=23, - num_grow_ch=32, - scale=2, - ) - netscale = 2 - file_url = [ - "https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.1/RealESRGAN_x2plus.pth" - ] - elif args.model_name == "realesr-animevideov3": # x4 VGG-style model (XS size) - model = SRVGGNetCompact( - num_in_ch=3, - num_out_ch=3, - num_feat=64, - num_conv=16, - upscale=4, - act_type="prelu", - ) - netscale = 4 - file_url = [ - "https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesr-animevideov3.pth" - ] - elif args.model_name == "realesr-general-x4v3": # x4 VGG-style model (S size) - model = SRVGGNetCompact( - num_in_ch=3, - num_out_ch=3, - num_feat=64, - num_conv=32, - upscale=4, - act_type="prelu", - ) - netscale = 4 - file_url = [ - "https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesr-general-wdn-x4v3.pth", - "https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesr-general-x4v3.pth", - ] - - # ---------------------- determine model paths ---------------------- # - model_path = os.path.join("weights", args.model_name + ".pth") - if not os.path.isfile(model_path): - ROOT_DIR = os.path.dirname(os.path.abspath(__file__)) - for url in file_url: - # model_path will be updated - model_path = load_file_from_url( - url=url, - model_dir=os.path.join(ROOT_DIR, "weights"), - progress=True, - file_name=None, - ) - - # use dni to control the denoise strength - dni_weight = None - if args.model_name == "realesr-general-x4v3" and args.denoise_strength != 1: - wdn_model_path = model_path.replace( - "realesr-general-x4v3", "realesr-general-wdn-x4v3" - ) - model_path = [model_path, wdn_model_path] - dni_weight = [args.denoise_strength, 1 - args.denoise_strength] - - # restorer - upsampler = RealESRGANer( - scale=netscale, - model_path=model_path, 
- dni_weight=dni_weight, - model=model, - tile=args.tile, - tile_pad=args.tile_pad, - pre_pad=args.pre_pad, - half=not args.fp32, - device=device, - ) - - if "anime" in args.model_name and args.face_enhance: - print( - "face_enhance is not supported in anime models, we turned this option off for you. " - "if you insist on turning it on, please manually comment the relevant lines of code." - ) - args.face_enhance = False - - if args.face_enhance: # Use GFPGAN for face enhancement - from gfpgan import GFPGANer - - face_enhancer = GFPGANer( - model_path="https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.3.pth", - upscale=args.outscale, - arch="clean", - channel_multiplier=2, - bg_upsampler=upsampler, - ) # TODO support custom device - else: - face_enhancer = None - - reader = Reader(args, total_workers, worker_idx) - audio = reader.get_audio() - height, width = reader.get_resolution() - fps = reader.get_fps() - writer = Writer(args, audio, height, width, video_save_path, fps) - - pbar = tqdm(total=len(reader), unit="frame", desc="inference") - while True: - img = reader.get_frame() - if img is None: - break - - try: - if args.face_enhance: - _, _, output = face_enhancer.enhance( - img, has_aligned=False, only_center_face=False, paste_back=True - ) - else: - output, _ = upsampler.enhance(img, outscale=args.outscale) - except RuntimeError as error: - print("Error", error) - print( - "If you encounter CUDA out of memory, try to set --tile with a smaller number." - ) - else: - writer.write_frame(output) - - torch.cuda.synchronize(device) - pbar.update(1) - - reader.close() - writer.close() - - -def run(args): - args.video_name = osp.splitext(os.path.basename(args.input))[0] - video_save_path = osp.join(args.output, f"{args.video_name}_{args.suffix}.mp4") - - if args.extract_frame_first: - tmp_frames_folder = osp.join(args.output, f"{args.video_name}_inp_tmp_frames") - os.makedirs(tmp_frames_folder, exist_ok=True) - os.system( - f"ffmpeg -i {args.input} -qscale:v 1 -qmin 1 -qmax 1 -vsync 0 {tmp_frames_folder}/frame%08d.png" - ) - args.input = tmp_frames_folder - - num_gpus = torch.cuda.device_count() - num_process = num_gpus * args.num_process_per_gpu - if num_process == 1: - inference_video(args, video_save_path) - return - - ctx = torch.multiprocessing.get_context("spawn") - pool = ctx.Pool(num_process) - os.makedirs( - osp.join(args.output, f"{args.video_name}_out_tmp_videos"), exist_ok=True - ) - pbar = tqdm(total=num_process, unit="sub_video", desc="inference") - for i in range(num_process): - sub_video_save_path = osp.join( - args.output, f"{args.video_name}_out_tmp_videos", f"{i:03d}.mp4" - ) - pool.apply_async( - inference_video, - args=( - args, - sub_video_save_path, - torch.device(i % num_gpus), - num_process, - i, - ), - callback=lambda arg: pbar.update(1), - ) - pool.close() - pool.join() - - # combine sub videos - # prepare vidlist.txt - with open(f"{args.output}/{args.video_name}_vidlist.txt", "w") as f: - for i in range(num_process): - f.write(f"file '{args.video_name}_out_tmp_videos/{i:03d}.mp4'\n") - - cmd = [ - args.ffmpeg_bin, - "-f", - "concat", - "-safe", - "0", - "-i", - f"{args.output}/{args.video_name}_vidlist.txt", - "-c", - "copy", - f"{video_save_path}", - ] - print(" ".join(cmd)) - subprocess.call(cmd) - shutil.rmtree(osp.join(args.output, f"{args.video_name}_out_tmp_videos")) - if osp.exists(osp.join(args.output, f"{args.video_name}_inp_tmp_videos")): - shutil.rmtree(osp.join(args.output, f"{args.video_name}_inp_tmp_videos")) - 
os.remove(f"{args.output}/{args.video_name}_vidlist.txt") - - -def main(): - """Inference demo for Real-ESRGAN. - It mainly for restoring anime videos. - - """ - parser = argparse.ArgumentParser() - parser.add_argument( - "-i", "--input", type=str, default="inputs", help="Input video, image or folder" - ) - parser.add_argument( - "-n", - "--model_name", - type=str, - default="realesr-animevideov3", - help=( - "Model names: realesr-animevideov3 | RealESRGAN_x4plus_anime_6B | RealESRGAN_x4plus | RealESRNet_x4plus |" - " RealESRGAN_x2plus | realesr-general-x4v3" - "Default:realesr-animevideov3" - ), - ) - parser.add_argument( - "-o", "--output", type=str, default="results", help="Output folder" - ) - parser.add_argument( - "-dn", - "--denoise_strength", - type=float, - default=0.5, - help=( - "Denoise strength. 0 for weak denoise (keep noise), 1 for strong denoise ability. " - "Only used for the realesr-general-x4v3 model" - ), - ) - parser.add_argument( - "-s", - "--outscale", - type=float, - default=4, - help="The final upsampling scale of the image", - ) - parser.add_argument( - "--suffix", type=str, default="out", help="Suffix of the restored video" - ) - parser.add_argument( - "-t", - "--tile", - type=int, - default=0, - help="Tile size, 0 for no tile during testing", - ) - parser.add_argument("--tile_pad", type=int, default=10, help="Tile padding") - parser.add_argument( - "--pre_pad", type=int, default=0, help="Pre padding size at each border" - ) - parser.add_argument( - "--face_enhance", action="store_true", help="Use GFPGAN to enhance face" - ) - parser.add_argument( - "--fp32", - action="store_true", - help="Use fp32 precision during inference. Default: fp16 (half precision).", - ) - parser.add_argument( - "--fps", type=float, default=None, help="FPS of the output video" - ) - parser.add_argument( - "--ffmpeg_bin", type=str, default="ffmpeg", help="The path to ffmpeg" - ) - parser.add_argument("--extract_frame_first", action="store_true") - parser.add_argument("--num_process_per_gpu", type=int, default=1) - - parser.add_argument( - "--alpha_upsampler", - type=str, - default="realesrgan", - help="The upsampler for the alpha channels. Options: realesrgan | bicubic", - ) - parser.add_argument( - "--ext", - type=str, - default="auto", - help="Image extension. 
Options: auto | jpg | png, auto means using the same extension as inputs", - ) - args = parser.parse_args() - - args.input = args.input.rstrip("/").rstrip("\\") - os.makedirs(args.output, exist_ok=True) - - if mimetypes.guess_type(args.input)[0] is not None and mimetypes.guess_type( - args.input - )[0].startswith("video"): - is_video = True - else: - is_video = False - - if is_video and args.input.endswith(".flv"): - mp4_path = args.input.replace(".flv", ".mp4") - os.system(f"ffmpeg -i {args.input} -codec copy {mp4_path}") - args.input = mp4_path - - if args.extract_frame_first and not is_video: - args.extract_frame_first = False - - run(args) - - if args.extract_frame_first: - tmp_frames_folder = osp.join(args.output, f"{args.video_name}_inp_tmp_frames") - shutil.rmtree(tmp_frames_folder) - - -if __name__ == "__main__": - main() diff --git a/spaces/ybelkada/interfacegan_pp/models/stylegan2_generator.py b/spaces/ybelkada/interfacegan_pp/models/stylegan2_generator.py deleted file mode 100644 index ec4e805f62ceb87f3ae1bc7b69085701bf423b28..0000000000000000000000000000000000000000 --- a/spaces/ybelkada/interfacegan_pp/models/stylegan2_generator.py +++ /dev/null @@ -1,189 +0,0 @@ -# python3.7 -"""Contains the generator class of StyleGAN. - -Basically, this class is derived from the `BaseGenerator` class defined in -`base_generator.py`. -""" - -import os -import numpy as np -import pickle -from PIL import Image - -from typing import List, Optional, Tuple, Union - -import torch - -from . import model_settings -from .stylegan3_official_network import StyleGAN3GeneratorModel -from .base_generator import BaseGenerator - -__all__ = ['StyleGANGenerator'] - -def make_transform(translate: Tuple[float,float], angle: float): - m = np.eye(3) - s = np.sin(angle/360.0*np.pi*2) - c = np.cos(angle/360.0*np.pi*2) - m[0][0] = c - m[0][1] = s - m[0][2] = translate[0] - m[1][0] = -s - m[1][1] = c - m[1][2] = translate[1] - return m - -class StyleGAN2Generator(BaseGenerator): - """Defines the generator class of StyleGAN. - - Different from conventional GAN, StyleGAN introduces a disentangled latent - space (i.e., W space) besides the normal latent space (i.e., Z space). Then, - the disentangled latent code, w, is fed into each convolutional layer to - modulate the `style` of the synthesis through AdaIN (Adaptive Instance - Normalization) layer. Normally, the w's fed into all layers are the same. But, - they can actually be different to make different layers get different styles. - Accordingly, an extended space (i.e. W+ space) is used to gather all w's - together. 
Taking the official StyleGAN model trained on FF-HQ dataset as an - instance, there are - (1) Z space, with dimension (512,) - (2) W space, with dimension (512,) - (3) W+ space, with dimension (18, 512) - """ - - def __init__(self, model_name, logger=None): - self.truncation_psi = model_settings.STYLEGAN_TRUNCATION_PSI - self.truncation_layers = model_settings.STYLEGAN_TRUNCATION_LAYERS - self.randomize_noise = model_settings.STYLEGAN_RANDOMIZE_NOISE - self.model_specific_vars = ['truncation.truncation'] - super().__init__(model_name, logger) - self.num_layers = (int(np.log2(self.resolution)) - 1) * 2 - assert self.gan_type in ['stylegan3', 'stylegan2'] - - def build(self): - self.check_attr('w_space_dim') - self.check_attr('fused_scale') - self.model = StyleGAN3GeneratorModel( - img_resolution=self.resolution, - w_dim=self.w_space_dim, - z_dim=self.latent_space_dim, - c_dim=self.c_space_dim, - img_channels=3 - ) - - - def load(self): - self.logger.info(f'Loading pytorch model from `{self.model_path}`.') - with open(self.model_path, 'rb') as f: - self.model = pickle.load(f)['G_ema'] - self.logger.info(f'Successfully loaded!') - # self.lod = self.model.synthesis.lod.to(self.cpu_device).tolist() - # self.logger.info(f' `lod` of the loaded model is {self.lod}.') - - - def sample(self, num, latent_space_type='Z'): - """Samples latent codes randomly. - - Args: - num: Number of latent codes to sample. Should be positive. - latent_space_type: Type of latent space from which to sample latent code. - Only [`Z`, `W`, `WP`] are supported. Case insensitive. (default: `Z`) - - Returns: - A `numpy.ndarray` as sampled latend codes. - - Raises: - ValueError: If the given `latent_space_type` is not supported. - """ - latent_space_type = latent_space_type.upper() - if latent_space_type == 'Z': - latent_codes = np.random.randn(num, self.latent_space_dim) - elif latent_space_type == 'W': - latent_codes = np.random.randn(num, self.w_space_dim) - elif latent_space_type == 'WP': - latent_codes = np.random.randn(num, self.num_layers, self.w_space_dim) - else: - raise ValueError(f'Latent space type `{latent_space_type}` is invalid!') - - return latent_codes.astype(np.float32) - - def preprocess(self, latent_codes, latent_space_type='Z'): - """Preprocesses the input latent code if needed. - - Args: - latent_codes: The input latent codes for preprocessing. - latent_space_type: Type of latent space to which the latent codes belong. - Only [`Z`, `W`, `WP`] are supported. Case insensitive. (default: `Z`) - - Returns: - The preprocessed latent codes which can be used as final input for the - generator. - - Raises: - ValueError: If the given `latent_space_type` is not supported. 
- """ - if not isinstance(latent_codes, np.ndarray): - raise ValueError(f'Latent codes should be with type `numpy.ndarray`!') - - latent_space_type = latent_space_type.upper() - if latent_space_type == 'Z': - latent_codes = latent_codes.reshape(-1, self.latent_space_dim) - norm = np.linalg.norm(latent_codes, axis=1, keepdims=True) - latent_codes = latent_codes / norm * np.sqrt(self.latent_space_dim) - elif latent_space_type == 'W': - latent_codes = latent_codes.reshape(-1, self.w_space_dim) - elif latent_space_type == 'WP': - latent_codes = latent_codes.reshape(-1, self.num_layers, self.w_space_dim) - else: - raise ValueError(f'Latent space type `{latent_space_type}` is invalid!') - - return latent_codes.astype(np.float32) - - def easy_sample(self, num, latent_space_type='Z'): - return self.sample(num, latent_space_type) - - def synthesize(self, - latent_codes, - latent_space_type='Z', - generate_style=False, - generate_image=True): - """Synthesizes images with given latent codes. - - One can choose whether to generate the layer-wise style codes. - - Args: - latent_codes: Input latent codes for image synthesis. - latent_space_type: Type of latent space to which the latent codes belong. - Only [`Z`, `W`, `WP`] are supported. Case insensitive. (default: `Z`) - generate_style: Whether to generate the layer-wise style codes. (default: - False) - generate_image: Whether to generate the final image synthesis. (default: - True) - - Returns: - A dictionary whose values are raw outputs from the generator. - """ - if not isinstance(latent_codes, np.ndarray): - raise ValueError(f'Latent codes should be with type `numpy.ndarray`!') - - results = {} - translate = (0,0) - rotate=0.0 - z = torch.from_numpy(latent_codes).to(self.run_device) - label = torch.zeros([1, self.c_space_dim]).to(self.run_device) - - if hasattr(self.model.synthesis, 'input'): - m = make_transform(translate, rotate) - m = np.linalg.inv(m) - self.model.synthesis.input.transform.copy_(torch.from_numpy(m)) - - ws = self.model.mapping(z, label) - #wps = self.model.truncation(w) - img = self.model(z, label) - img = (img.permute(0, 2, 3, 1) * 127.5 + 128).clamp(0, 255).to(torch.uint8) - img = img.cpu().numpy() - - results['image'] = img - results['z'] = latent_codes - results['w'] = ws.detach().cpu().numpy() - #results['wp'] = wps.detach().cpu().numpy() - - return results diff --git a/spaces/yeqingmei123/face-test/op/upfirdn2d_cpu.py b/spaces/yeqingmei123/face-test/op/upfirdn2d_cpu.py deleted file mode 100644 index a0f820b4c81e03598589b1ea6b95cf9bef9b04f8..0000000000000000000000000000000000000000 --- a/spaces/yeqingmei123/face-test/op/upfirdn2d_cpu.py +++ /dev/null @@ -1,60 +0,0 @@ -import os - -import torch -from torch.autograd import Function -from torch.nn import functional as F - - - -module_path = os.path.dirname(__file__) - -def upfirdn2d(input, kernel, up=1, down=1, pad=(0, 0)): - out = upfirdn2d_native( - input, kernel, up, up, down, down, pad[0], pad[1], pad[0], pad[1] - ) - - return out - - -def upfirdn2d_native( - input, kernel, up_x, up_y, down_x, down_y, pad_x0, pad_x1, pad_y0, pad_y1 -): - _, channel, in_h, in_w = input.shape - input = input.reshape(-1, in_h, in_w, 1) - - _, in_h, in_w, minor = input.shape - kernel_h, kernel_w = kernel.shape - - out = input.view(-1, in_h, 1, in_w, 1, minor) - out = F.pad(out, [0, 0, 0, up_x - 1, 0, 0, 0, up_y - 1]) - out = out.view(-1, in_h * up_y, in_w * up_x, minor) - - out = F.pad( - out, [0, 0, max(pad_x0, 0), max(pad_x1, 0), max(pad_y0, 0), max(pad_y1, 0)] - ) - out = out[ - :, - 
max(-pad_y0, 0) : out.shape[1] - max(-pad_y1, 0), - max(-pad_x0, 0) : out.shape[2] - max(-pad_x1, 0), - :, - ] - - out = out.permute(0, 3, 1, 2) - out = out.reshape( - [-1, 1, in_h * up_y + pad_y0 + pad_y1, in_w * up_x + pad_x0 + pad_x1] - ) - w = torch.flip(kernel, [0, 1]).view(1, 1, kernel_h, kernel_w) - out = F.conv2d(out, w) - out = out.reshape( - -1, - minor, - in_h * up_y + pad_y0 + pad_y1 - kernel_h + 1, - in_w * up_x + pad_x0 + pad_x1 - kernel_w + 1, - ) - out = out.permute(0, 2, 3, 1) - out = out[:, ::down_y, ::down_x, :] - - out_h = (in_h * up_y + pad_y0 + pad_y1 - kernel_h + down_y) // down_y - out_w = (in_w * up_x + pad_x0 + pad_x1 - kernel_w + down_x) // down_x - - return out.view(-1, channel, out_h, out_w) diff --git a/spaces/yerfor/SyntaSpeech/egs/datasets/audio/lj/preprocess.py b/spaces/yerfor/SyntaSpeech/egs/datasets/audio/lj/preprocess.py deleted file mode 100644 index a3d45c9aa855bb7ce40b5e8374547014350fa92b..0000000000000000000000000000000000000000 --- a/spaces/yerfor/SyntaSpeech/egs/datasets/audio/lj/preprocess.py +++ /dev/null @@ -1,9 +0,0 @@ -from data_gen.tts.base_preprocess import BasePreprocessor - - -class LJPreprocess(BasePreprocessor): - def meta_data(self): - for l in open(f'{self.raw_data_dir}/metadata.csv').readlines(): - item_name, _, txt = l.strip().split("|") - wav_fn = f"{self.raw_data_dir}/wavs/{item_name}.wav" - yield {'item_name': item_name, 'wav_fn': wav_fn, 'txt': txt} diff --git a/spaces/ygangang/VToonify/vtoonify/model/raft/train_mixed.sh b/spaces/ygangang/VToonify/vtoonify/model/raft/train_mixed.sh deleted file mode 100644 index d9b979f143902a17a0ba7b0a8f960598b7096e0b..0000000000000000000000000000000000000000 --- a/spaces/ygangang/VToonify/vtoonify/model/raft/train_mixed.sh +++ /dev/null @@ -1,6 +0,0 @@ -#!/bin/bash -mkdir -p checkpoints -python -u train.py --name raft-chairs --stage chairs --validation chairs --gpus 0 --num_steps 120000 --batch_size 8 --lr 0.00025 --image_size 368 496 --wdecay 0.0001 --mixed_precision -python -u train.py --name raft-things --stage things --validation sintel --restore_ckpt checkpoints/raft-chairs.pth --gpus 0 --num_steps 120000 --batch_size 5 --lr 0.0001 --image_size 400 720 --wdecay 0.0001 --mixed_precision -python -u train.py --name raft-sintel --stage sintel --validation sintel --restore_ckpt checkpoints/raft-things.pth --gpus 0 --num_steps 120000 --batch_size 5 --lr 0.0001 --image_size 368 768 --wdecay 0.00001 --gamma=0.85 --mixed_precision -python -u train.py --name raft-kitti --stage kitti --validation kitti --restore_ckpt checkpoints/raft-sintel.pth --gpus 0 --num_steps 50000 --batch_size 5 --lr 0.0001 --image_size 288 960 --wdecay 0.00001 --gamma=0.85 --mixed_precision diff --git a/spaces/yl12053/so-vits-4.1-Kitasan-Black/vencoder/whisper/audio.py b/spaces/yl12053/so-vits-4.1-Kitasan-Black/vencoder/whisper/audio.py deleted file mode 100644 index 3bdb70ba9357e95ff05853dcc06437c3401ef3be..0000000000000000000000000000000000000000 --- a/spaces/yl12053/so-vits-4.1-Kitasan-Black/vencoder/whisper/audio.py +++ /dev/null @@ -1,125 +0,0 @@ -import os -from functools import lru_cache -from typing import Union - -import ffmpeg -import numpy as np -import torch -import torch.nn.functional as F - -from .utils import exact_div - -from librosa.filters import mel as librosa_mel_fn - -# hard-coded audio hyperparameters -SAMPLE_RATE = 16000 -N_FFT = 400 -N_MELS = 80 -HOP_LENGTH = 160 -CHUNK_LENGTH = 30 -N_SAMPLES = CHUNK_LENGTH * SAMPLE_RATE # 480000: number of samples in a chunk -N_FRAMES = exact_div(N_SAMPLES, 
HOP_LENGTH) # 3000: number of frames in a mel spectrogram input - - -def load_audio(file: str, sr: int = SAMPLE_RATE): - """ - Open an audio file and read as mono waveform, resampling as necessary - - Parameters - ---------- - file: str - The audio file to open - - sr: int - The sample rate to resample the audio if necessary - - Returns - ------- - A NumPy array containing the audio waveform, in float32 dtype. - """ - try: - # This launches a subprocess to decode audio while down-mixing and resampling as necessary. - # Requires the ffmpeg CLI and `ffmpeg-python` package to be installed. - out, _ = ( - ffmpeg.input(file, threads=0) - .output("-", format="s16le", acodec="pcm_s16le", ac=1, ar=sr) - .run(cmd=["ffmpeg", "-nostdin"], capture_stdout=True, capture_stderr=True) - ) - except ffmpeg.Error as e: - raise RuntimeError(f"Failed to load audio: {e.stderr.decode()}") from e - - return np.frombuffer(out, np.int16).flatten().astype(np.float32) / 32768.0 - - -def pad_or_trim(array, length: int = N_SAMPLES, *, axis: int = -1): - """ - Pad or trim the audio array to N_SAMPLES, as expected by the encoder. - """ - if torch.is_tensor(array): - if array.shape[axis] > length: - array = array.index_select(dim=axis, index=torch.arange(length, device=array.device)) - - if array.shape[axis] < length: - pad_widths = [(0, 0)] * array.ndim - pad_widths[axis] = (0, length - array.shape[axis]) - array = F.pad(array, [pad for sizes in pad_widths[::-1] for pad in sizes]) - else: - if array.shape[axis] > length: - array = array.take(indices=range(length), axis=axis) - - if array.shape[axis] < length: - pad_widths = [(0, 0)] * array.ndim - pad_widths[axis] = (0, length - array.shape[axis]) - array = np.pad(array, pad_widths) - - return array - - -@lru_cache(maxsize=None) -def mel_filters(device, n_mels: int = N_MELS) -> torch.Tensor: - """ - load the mel filterbank matrix for projecting STFT into a Mel spectrogram. 
- Allows decoupling librosa dependency; saved using: - - np.savez_compressed( - "mel_filters.npz", - mel_80=librosa.filters.mel(sr=16000, n_fft=400, n_mels=80), - ) - """ - assert n_mels == 80, f"Unsupported n_mels: {n_mels}" - return torch.from_numpy(librosa_mel_fn(sr=SAMPLE_RATE,n_fft=N_FFT,n_mels=n_mels)).to(device) - - -def log_mel_spectrogram(audio: Union[str, np.ndarray, torch.Tensor], n_mels: int = N_MELS): - """ - Compute the log-Mel spectrogram of - - Parameters - ---------- - audio: Union[str, np.ndarray, torch.Tensor], shape = (*) - The path to audio or either a NumPy array or Tensor containing the audio waveform in 16 kHz - - n_mels: int - The number of Mel-frequency filters, only 80 is supported - - Returns - ------- - torch.Tensor, shape = (80, n_frames) - A Tensor that contains the Mel spectrogram - """ - if not torch.is_tensor(audio): - if isinstance(audio, str): - audio = load_audio(audio) - audio = torch.from_numpy(audio) - - window = torch.hann_window(N_FFT).to(audio.device) - stft = torch.stft(audio, N_FFT, HOP_LENGTH, window=window, return_complex=True) - magnitudes = stft[..., :-1].abs() ** 2 - - filters = mel_filters(audio.device, n_mels) - mel_spec = filters @ magnitudes - - log_spec = torch.clamp(mel_spec, min=1e-10).log10() - log_spec = torch.maximum(log_spec, log_spec.max() - 8.0) - log_spec = (log_spec + 4.0) / 4.0 - return log_spec diff --git a/spaces/yl12053/so-vits-4.1-Matikanefukukitaru/diffusion/__init__.py b/spaces/yl12053/so-vits-4.1-Matikanefukukitaru/diffusion/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/data/datasets/nuimages.py b/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/data/datasets/nuimages.py deleted file mode 100644 index 52736e331cc6c95001bc84f2c17a0805789b2450..0000000000000000000000000000000000000000 --- a/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/data/datasets/nuimages.py +++ /dev/null @@ -1,37 +0,0 @@ -from detectron2.data.datasets.register_coco import register_coco_instances -import os - -categories = [ - {'id': 0, 'name': 'car'}, - {'id': 1, 'name': 'truck'}, - {'id': 2, 'name': 'trailer'}, - {'id': 3, 'name': 'bus'}, - {'id': 4, 'name': 'construction_vehicle'}, - {'id': 5, 'name': 'bicycle'}, - {'id': 6, 'name': 'motorcycle'}, - {'id': 7, 'name': 'pedestrian'}, - {'id': 8, 'name': 'traffic_cone'}, - {'id': 9, 'name': 'barrier'}, -] - -def _get_builtin_metadata(): - id_to_name = {x['id']: x['name'] for x in categories} - thing_dataset_id_to_contiguous_id = {i: i for i in range(len(categories))} - thing_classes = [id_to_name[k] for k in sorted(id_to_name)] - return { - "thing_dataset_id_to_contiguous_id": thing_dataset_id_to_contiguous_id, - "thing_classes": thing_classes} - -_PREDEFINED_SPLITS = { - "nuimages_train": ("nuimages", "nuimages/annotations/nuimages_v1.0-train.json"), - "nuimages_val": ("nuimages", "nuimages/annotations/nuimages_v1.0-val.json"), - "nuimages_mini": ("nuimages", "nuimages/annotations/nuimages_v1.0-mini.json"), -} - -for key, (image_root, json_file) in _PREDEFINED_SPLITS.items(): - register_coco_instances( - key, - _get_builtin_metadata(), - os.path.join("datasets", json_file) if "://" not in json_file else json_file, - os.path.join("datasets", image_root), - ) diff --git 
a/spaces/zeno-ml/translation-report/gpt-MT/evaluation/system-outputs/text-davinci-003/QR/5-shot/encs/test.en-cs.cs b/spaces/zeno-ml/translation-report/gpt-MT/evaluation/system-outputs/text-davinci-003/QR/5-shot/encs/test.en-cs.cs deleted file mode 100644 index e47aebb196803665b177ac7eb1ade3687c9f4223..0000000000000000000000000000000000000000 --- a/spaces/zeno-ml/translation-report/gpt-MT/evaluation/system-outputs/text-davinci-003/QR/5-shot/encs/test.en-cs.cs +++ /dev/null @@ -1,2037 +0,0 @@ -Pokud vás nenajdou, určitě zavolají. -Nicméně je lepší, že jakmile jsou blízko vaší dodací adresy, můžete je místo toho kontaktovat. -Samco Sport vakuové hadice jsou čisté silikonové gumové hadice, které jsou k dispozici ve vnitřních průměrech (I.D) od 3 mm do 9 mm. -Konkrétně navrženo pro všechny vákuumové hadice motoru, hadice ventilátoru karburátoru, hadice ventilace nádrže na palivo, přetečení chladicí kapaliny a kontrolu emisí a může být použito pro hadice stěračů a izolaci drátů. -Vhodné pouze pro instalace s nízkým tlakem. -Vysavačová hadice od Samco není navržena pro přepravu oleje, paliva nebo pro trvalý přenos tlakového horkého vody. -S neuvěřitelnou schopností roztažení v průměru, umožňující hadici roztáhnout na spojení pro dokonalý těsný spoj (tj. můžete roztáhnout hadici s průměrem 3 mm na spojení s průměrem 5 mm). -Přidejte 5 dvojitých drátových svorek do své objednávky za pouhých 99p, perfektní pro upevnění hadice vysavače na místě! -S více než 12letou zkušeností s obchodováním s náhradními Samco Sport výkonnými silikonovými hadicemi jsme hrdí na to, že jsme světovým vedoucím distributorem specialistů na silikonové hadice pro motocykly. -S velkým skladem univerzálních dílů závodních vozů se snažíme o 100% servis. -Samco Sport Vysavačová hadice má celou řadu aplikací: kontrola emisí, přetečení chladiče, stěrače a je ideální pro náhradu hadice ventilu karburátoru na motokrosových a silničních aplikacích. -Je to skvělý produkt a vhodný pro všechny druhy jízdních kol, aut a komerční aplikace. -Přidejte 5 dvojitých drátových svorek do své objednávky za pouhých 99p, perfektní pro upevnění hadice vysavače na místě! -Není vhodné pro vysokotlaké vodní instalace nebo teplovodní systémy. -Tento hadice není vhodný pro přenos oleje. -Proč si vybrat hadice z silikonu Samco Sport? -Ltd Záruka na životnost, 2 roky pro aplikace pro palivo. -Jen se přihlaste do svého účtu a počkejte, až se synchronizace dokončí, knihy se automaticky načtou do #PRS_ORG#. -To je vše. -Je tu něco jiného, s čím bych vám mohl pomoci? -Jsi tam? -Omlouvám se, z důvodu kvality budu muset tento chat uzavřít, pokud neobdržím odpověď do 2 minut. -Teď tento chat uzavřu, protože nebyla obdržena žádná odpověď. -Rád budu pokračovat ve vaší pomoci e-mailem, nebo se můžete znovu obrátit na nás v pro vás vhodnějším čase. -Odpojení bude potřebovat, aby se ujistili, že jejich aplikace fungují na všech různých verzích iOS. -To nemusí být ani nutně pravda. -Jen stanovte limit verzí iOS, které aplikace podporuje a vydávejte pouze aktualizace pro zařízení s nejnovějšími kompatibilními verzemi iOS. -Takto funguje většina aplikací nyní. -Také, protože Apple může často vydávat nové verze iOS a macOS. -Není problém, že to stále není dost často? -To také vytváří některé strašné UX. -I když Apple zvýšil rychlost aktualizací OS kvůli opravám drobných chyb v několika aplikacích, proč uživatel potřebuje provést *celou aktualizaci OS* pokaždé? -A co to vůbec znamená "vývojáři mohou být jisti, že jejich patch/funkce bude v nové verzi vydána"? 
-To je v protikladu k Google. -Google musel odpojit, protože bylo ve volné přírodě mnoho verzí Androidu, každá s obrovským podílem na trhu. -Mohu bez pochybností říct, že kdyby verze Androidu na telefonech byly konzistentní jako iOS, Google by nikdy neudělal vydání s OS pro tyto aplikace. -To je odvážné tvrzení, ale ať už je to jakkoli, stále to nevysvětluje, jak je seskupování aktualizací aplikací jako aktualizace operačního systému „lepší“ pro vývojáře nebo koncového uživatele. -Vidím, že jste objednali z restaurace, která provádí vlastní doručení. -Přijali vaši objednávku, která je #PRS_ORG#. -Restaurace vás zavolala a nemají položku, kterou jste objednali? -Čas dochází pro jadernou dohodu s Íránem, říká Německo. -Německá ministryně zahraničí varovala v sobotu, že čas se krátí, aby se našlo způsob, jak obnovit jadernou dohodu mezi světovými velmocemi a Íránem z roku 2015, po setkáních s jejími protějšky zemí G7. -Jednání byla obnovena ve Vídni, aby se pokusila oživit jadernou dohodu, s oběma stranami se snaží odhadnout předpoklady úspěchu po nejnovějších výměnách v zastavit-startovat jednání. -"Čas se krátí," řekla německá ministryně zahraničí Annalena Baerbock novinářům v Liverpoolu ve Velké Británii, kde se koná schůzka ministrů zahraničí G7. -V posledních dnech se ukázalo, že nemáme žádný pokrok. -Baerbock řekl, že Írán obnovil jednání s pozicí, která vrátila jednání o šest měsíců zpět. -Současné kolo rozhovorů v Curychu následuje po přestávce pěti měsíců po volbě tvrdého protizápadníka Ebrahima Raisiho jako prezidenta Íránu. -Dříve američtí úředníci řekli, že ministr zahraničí Antony Blinken v pátek uspořádal "produktivní" schůzku s protějšky z Velké Británie, Německa a Francie, kde se diskutovalo o cestě vpřed pro jednání o Íránu. -Vysoce postavený úředník ministerstva zahraničí řekl, že mezi zeměmi G7 probíhala „intenzivní“ konverzace, které byly jednotné ve svém postoji k jaderným jednáním. -"Vyjádření bude také silné ohledně důležitosti vrácení Íránu ke stolu a že je možné dohodu uzavřít, ale čas se krátí, takže jsme jednotní v tom," řekl oficiální, který informoval novináře na podmínku anonymity. -Úředník dále uvedl, že americký speciální vyslanec pro Írán Robert Malley se vrací do Vídně na jednání. -Íránští úředníci dříve řekli, že drží svou tvrdou pozici. -Podle původního jaderné dohody, opuštěné v roce 2018 tehdejším prezidentem Donaldem Trumpem, omezil Írán svůj jaderný program výměnou za uvolnění amerických, evropských a OSN sankcí. -Západ se obává, že by program mohl být použit k vývoji zbraní, což Teherán popírá. -Raisi řekl v sobotu, že Teherán je vážný ve svých jaderných jednáních v Vídni, uvedla oficiální agentura IRNA. -Nepřímé americko-íránské rozhovory, ve kterých diplomaté z Francie, Velké Británie, Německa, Ruska a Číny mezi nimi přepravují, protože Teherán odmítá přímý kontakt s Washingtonem, mají za cíl obě strany přimět k obnovení plného dodržování dohody. -Setkání G7, které se očekává, že vyústí ve společný výzvu pro Írán, aby zmírnil svůj jaderný program a využil příležitosti vídeňských jednání. -Z jaké země dodáváte? -Kdy je balíček u nás? -Mají mnoho díků za jejich pomoc. -Děkuji vám, že jste si našli čas mluvit se mnou dnes a doufám, že jsem dokázal vyřešit vaši otázku. Pokud byste nevadilo, abyste hodnotili naši chatovací konverzaci dnes na základě mých zákaznických dovedností, byl bych vám velmi vděčný. Tlačítko hodnocení lze nalézt v tomto chatu. -Děkuji za informace. -Budu velmi rád, když vám mohu pomoci. 
-Zkontroluji váš účet, prosím, chvíli počkejte. -Děkuji za vaše čekací dobu, zkontroloval jsem informace do vašeho účtu. -Je mi opravdu líto, že máte s vaším e-knihou tento problém, ale jsem ochoten vám pomoci. -Podělím se o pár kroků, které je třeba provést do vašeho eReaderu, ano? -Mám rád články jako tento, které pomáhají rozplést zamotanou síť sociopatických megalomanských mužů, které můj otec glorifikoval do zblbnutí, a množství vůdců, které odsoudil. -Hádejte, kde Nixon a Carter spadli, a jeho zlatý chlapec Reagan nemohl udělat žádné zlé. -Zatímco jsem dávno věděl, že tento světový pohled je úplný BS a ve skutečnosti nenávidím každého z těch megalomanů od Caesara přes Bonaparta, Nixona, Reagana, Bushe a Trumpa, -Můžu ocenit historický význam Cézara nebo Napoleona, ale quasi-posvátná povaha jejich sanitovaných dějin mě později od nich odradila. -Dodnes odmítám studovat historii Polska, protože by to jen umožnilo paranoidní konspirace mého otce vyplout na povrch. -Vracíme se k tomuto článku, miluji ty malé detaily, které vás připomínají, že ve skutečnosti existovala jedna dobrá volba (navzdory menším nedostatkům - ale většinou jednající v dobré víře) a jedna strana, která nebyla dobrá, nejednala v dobré víře a kde zloba byla a je pravděpodobnějším vysvětlením než hloupost. -To je věc. -Republikáni rádi se schovávají za hloupost místo toho, aby přiznali zločin, ale nebuďte naivní, pokud je ideový základ strany pod útokem. -Pak náhoda, náhodné objevy, atd. zřídka existují. -Obležená mentalita znamená, že každá akce musí mít smysl, jinak vyčerpáte omezenou energii na zbytečné činy. -Ale republikáni rádi se schovávají za naše složitější pochopení světa a snaží se vrátit různé filozofické břitvy. -Proto jsme připraveni vám pomoci s jakýmikoli otázkami nebo obavami, které můžete mít před objednáním nebo po obdržení vaší objednávky. -Kontaktujte nás, prosím, prostřednictvím zpráv eBay a člen našeho týmu se vám co nejdříve vrátí. -Upozorňujeme: Naše otevírací doba je: Pondělí-Pátek od 09:00 - 17:30. -Kancelář zůstává během víkendu zavřená. -V dnech, kdy je kancelář zavřená, nebudeme moci odeslat vaše objednávky. -Také všechny objednávky uskutečněné o víkendu budou odeslány během následujících pracovních dnů. -Naším cílem je nabídnout našim zákazníkům nejlepší možnou službu. -Proto odesíláme naše objednávky do 1 pracovního dne po zaplacení. -Nabízené služby dodávky jsou standardní sledovaná pošta (2-3 pracovní dny), první třída služby, stejně jako expresní služba. -Upozorňujeme, že během svátků (např. Vánoce) může dojít k mírným zpožděním od kurýrní služby. -Vrácení musí být do 30 dnů od dodání ve stejném stavu, v jakém byly odeslány. -Prosím, pošlete nám zprávu prostřednictvím zpráv eBay ohledně vašeho návratu. -Uveďte prosím své uživatelské jméno eBay a důvod pro vrácení na poznámku do balíčku, abyste urychlili proces vrácení peněz nebo výměny. -Pokud jsou zboží vadné, vrátíme náklady na dopravu, ale pro všechny ostatní vrácení toto neplatí. -To je jiná věc. -Osoby se zdravotním postižením v Americe prostě nejsou správně zacházeno, konec. -Nemá to nic společného s příjmem nebo žít samotným. -Služby a úvaha pro postižené (stejně jako pro chudé) jsou stále daleko od toho, kde by měly být. -Zacházíme se zdravotně postiženými jako s odpadem. -Zacházíme se chudými jako s odpadem. -Každý v USA by měl být zahanben. -Máte pravdu, že naše společnost potřebuje více cenit lidský život. -Kdybychom to udělali, viděli bychom, jak tyto masové střelby klesají. -Viděli bychom méně dopravních nehod a úmrtí. 
-Zdravotní péče a péče o děti by byly dostupné a mnohem snazší přístup, atd. -Bohužel, americká společnost "přijala" statistiky útrap, úmrtí a dalších obětí jako prostě "způsob života"...výměnou za "svobodu" nebo něco takového BS. -Vidím tvůj komentář o tom, že jsi postižený a není ti podporován, jako další příklad toho, jak Amerika prostě lidi nepodporuje. -Je to ve stejném duchu jako bod autora, ne konflikt. -Alamo lokalita v Kalifornii mi provedla podobný podvod. -Když jsem vracel auto, agent našel škrábance pod autem (které jsem nezpůsobil). -Musel jsem podepsat, abych potvrdil „škodu“. -Měl jsem také videa a fotky, které nezahrnovaly spodek auta. -Pár týdnů po mém návratu domů jsem obdržel dopis, ve kterém byly uvedeny další škody, za které bych byl také účtován - včetně škrábanců na dveřích, které údajně vyžadovaly přetírání zadní části auta několik dní po mém návratu z pronájmu. -Žádné z těchto poškození nebylo viditelné pro mě (ani pro agenta, když jsem vrátil vozidlo). -Nic z toho nebylo vidět na fotografiích, které jsem pořídil, když jsem vrátil auto, takže jsem nárok popřel. -Odmítli spor a vyžadovali okamžitou platbu za škodu. -Protože to byla pracovní cesta, předala jsem své fotky našemu právnímu oddělení. -Po krátké době jsem obdržel dopis od Alamo, ve kterém uvedli, že ve prospěch spokojenosti zákazníka zruší poplatky. -Kdybych byl sám, určitě bych skončil platit účet za škodu, o které jsem si jistý, že se nestala, zatímco auto bylo ve mé péči. -Rusko varovalo před „důsledky“, pokud by byla Ukrajina napadena. -Skupina sedmi varovala Rusko před masivními důsledky a vážnými náklady, pokud prezident Vladimir Putin napadne Ukrajinu, podle návrhu prohlášení. -Americké zpravodajské služby odhadují, že Rusko by mohlo plánovat vícefrontovou ofenzívu na Ukrajinu již příští rok, zahrnující až 175 000 vojáků. -Kreml popírá, že plánuje invazi a říká, že Západ je ovládnut rusofobií. -Moskva říká, že rozšíření NATO ohrožuje Rusko a porušilo záruky, které mu byly dány, když Sovětský svaz v roce 1991 zanikl. -Na schůzce ve severní anglickém městě Liverpool řekli delegáti G7, že jsou jednotní ve svém odsouzení vojenského nárůstu Ruska u Ukrajiny a vyzvali Moskvu, aby deeskalovala. -"Rusko by nemělo mít žádné pochybnosti o tom, že další vojenská agrese proti Ukrajině by měla obrovské důsledky a vážné náklady," uvádí se ve znění návrhu prohlášení, které potvrdily zdroje G7. -„Potvrzujeme naši neochvějnou závazek k suverenitě a územní celistvosti Ukrajiny, stejně jako právo každého suverénního státu určit si svou vlastní budoucnost,“ uvádí se ve znění návrhu. -Pro Moskvu je rostoucí objetí NATO sousedního bývalého sovětského republiky - a to, co vidí jako noční můru možnosti aliančních střel v Ukrajině zaměřených proti Rusku - "červenou linií", kterou nedovolí překročit. -Pan Putin požaduje právně závazné bezpečnostní záruky, že NATO se nebude dále rozšiřovat na východ nebo umísťovat své zbraně blízko ruského území; Washington opakovaně řekl, že žádná země nemůže vetovat naděje Ukrajiny na NATO. -V roce 2014 Rusko obsadilo černomořskou poloostrov Krym od Ukrajiny, což vyvolalo, že Západ uvalil sankce na Rusko. -Kreml dnes řekl, že pan Putin řekl americkému prezidentovi Joe Bidenovi, že ruské jednotky neohrožují a že Moskva je démonizována za přesun vojsk po svém vlastním území. -Mluvčí Kremlu Dmitry Peskov řekl, že mezi Ruskem a Spojenými státy existují velmi vážné konceptuální rozdíly ohledně moskevských "červených linií". 
-G7 se skládá z Británie, Francie, Německa, Itálie, Japonska, Kanady a Spojených států a zahrnuje zástupce Evropské unie. -"Vyzýváme Rusko, aby deeskalovalo, využívalo diplomatické kanály a dodržovalo své mezinárodní závazky týkající se průhlednosti vojenských aktivit," uvedlo G7 ve znění návrhu. -„Potvrzujeme naši podporu úsilí Francie a Německa ve formátu Normandie k dosažení plného provedení dohod Minsk k řešení konfliktu na východě Ukrajiny,“ uvádí se ve znění návrhu. -Čínský Xi a ruský Putin dominují G7. -Papež vyzývá k „vážnému mezinárodnímu dialogu“ k odstranění napětí v Ukrajině. -Pape -Řekl, že se modlí za "milou Ukrajinu, pro všechny její církve a náboženské společenství a pro všechny její lidi, aby napětí tam bylo vyřešeno vážným mezinárodním dialogem a ne zbraněmi." -Zbraně nejsou cesta, kterou se má jít. -Ať tento Vánoce přináší mír Ukrajině," řekl papež tisícům lidí na náměstí svatého Petra pro jeho polední požehnání a projev. -Ukrajina je převážně pravoslavná, s katolíky buď latinského obřadu nebo byzantského obřadu tvořícími asi 10% populace ve bývalé sovětské republice. -Zvlášť pan Biden řekl, že řekl Putinovi, že Rusko zaplatí "strašnou cenu" a čelí devastujícím ekonomickým důsledkům, pokud by došlo k invazi Ukrajiny. -Dovolte mi okamžik, abych vás hledal. -V tuto chvíli se zdá, že nemáme další kusy, zkontroluji, kdy očekáváme více. -Bohužel se nezdá, že by byly nějaké budoucí plány na výrobu jednotlivých sekcí. -Je tu něco jiného, s čím mohu pomoci, prosím? -Nejsem seznámen s Teleloadingem. -Pokud však chcete otevřít nedávno zakoupenou knihu od #PRS_ORG# ve vašem #PRS_ORG# e čtečce, stačí synchronizovat vaši e čtečku přes WiFi a stáhnout knihu do vaší e čtečky, abyste mohli začít číst, přenos počítačem nebo e-mailem není nutný. -Pokud kniha po synchronizaci stále má problémy s otevřením ve vašem čtečce, můžeme zkusit nějaký proces odstraňování potíží. -Potřeboval bych vědět, zda kniha zobrazuje chybovou zprávu, zdá se být zablokovaná nebo dokonce nezobrazuje ve vašem účtu #PRS_ORG# ve vašem čtečce #PRS_ORG#. -Bylo to před rokem 2018. -US Air mě zanechalo v Philadelphia místo toho, aby mě dovezlo až do Newarku, a já a dalších deset lidí se snažilo dostat auta pozdě večer. -Lidé na přepážce byli nejhorší, jaké jsem kdy viděl. -Fronta ven z dveří a dělali si přestávky a mluvili o náhodných věcech, které se netýkaly práce, jako bychom ani nebyli tady. -Měl jsem potvrzenou rezervaci. -Po hodině čekání jsem jí řekl, co jsem si rezervoval, a ona na mě hlasitě obvinila, že jí lžu, a vyčítala mi to. -Nakonec jsem to vzdal a šel jsem do Hertz, kteří mi účtovali majlant, ale okamžitě mi dali auto. -Slíbil jsem, že už nikdy nebudu používat Avis. -Nejhorší zkušenost s autem vůbec. -National & Hertz byly pro mě vždy dobré zkušenosti. -Ti dva, následovaní Enterprise. -Enterprise nebyli ve skutečnosti špatní v žádném směru, ale nikdy nebyli tak pohodlní jako National, kde jsem mohl přijít a vybrat si auto a odjet bez čekání navždy na přepážce. -Vím, že to jsou anekdotické zkušenosti, ale budu se snažit všem říct, aby se vyhnuli Avis jako čert kříži. -Je pravda, že dobrá zákaznická služba udrží zákazníky věrné a špatná zákaznická zkušenost odradí 10krát více zákaznických příležitostí. -Oprava vašeho #PRS_ORG# účtu na eReaderu. -Přejděte na svou domovskou obrazovku. -Klepněte na -Více ikon na dolní straně obrazovky. -Klepněte na Nastavení. -Klepněte na Informace o zařízení. -Vedle 'Opravit váš #PRS_ORG# účet', klepněte na Opravit. -Klepněte na Opravit nyní. -Proces opravy účtu začne. 
-Pokud máte hodně knih, může to chvíli trvat. -Pokud oprava vašeho účtu nevyřešila problém: -Odhlásit se a znovu se přihlásit do svého eReaderu. -Přejděte na svou domovskou obrazovku. -Klepněte na -Více ikon na dolní straně obrazovky. -Klepněte na Nastavení. -Klepněte na Účty. -Pod #PRS_ORG#, klepněte na Odhlásit se. -Objeví se potvrzovací obrazovka. -Klepněte na Odhlásit se. -Po odhlášení postupujte podle pokynů na obrazovce pro nastavení vašeho eReaderu. -poté aktualizujte slovník -Omlouvám se za to, musíme získat povolení od držitele účtu, abychom mohli diskutovat o objednávce s jinou osobou. Omlouvám se, pokud to bylo již dříve provedeno, ale bez povolení držitele účtu bych nemohl o tomto s vámi diskutovat. -Víš něco. -Chápu, o čem byl Daveův bod. -Je horší zabíjet černé lidi, než se smát trans lidem. -A samozřejmě to je pravda. -Ale Dave něco zapomněl. -Mnoho lidí, kteří nenávidí trans lidi, také nenávidí černochy. -Nikomu se nezavděčil #blacklivesmatter. -Jen dal transphobům dalšího hrdinu a více anti-trans rétoriky. -On dal důvěryhodnost transfobii. -A vzhledem k tomu, že nejzranitelnější trans lidé jsou trans ženy barvy, učinil je cílem pro násilí. -Odešel ze svého vystoupení, protože si uvědomil, že bílí lidé se smáli JEMU, ne S ním. -Jak to, že si neuvědomil, že udělal přesně to samé trans lidem, je velmi smutné. -Ano, to znamená, že když cvičím, opravdu mě nezajímá, kolik kalorií to spálí, a nezměním si svá čísla nebo makronutrienty kvůli tomu, kolik jsem spálil. -Snažím se držet se 1200-1300. -Ale pokud jsem extra hladový, ano, něco sním, abych zásobil své tělo a přijal, že ztráta hmotnosti může být o den pomalejší, nebo ne. -Pokud už držíte 500 kalorií dietu, další kousek steaku nebo dokonce chleba po náročném tréninku vůbec nezničí váš pokrok. -To by mohlo jen zúžit váš deficit jeden den. -Další kousek pizzy nebo miska zmrzliny? -To nejde. -Pokud vždy potřebujete jíst více kvůli cvičení, zvažte, že v první řadě nebudete omezovat tolik kalorií. -Možná začít s deficitem 300. -Doufám, že to pomůže! -Víš, co ti rozumím. -protože chceme, abyste měli svůj objednávku od nás. -Jako zdvořilost při vašem prvním objednávce zpracuji plnou částku kreditu na tuto objednávku, takže můžete tento kredit použít k objednání správného oddělení. -Ach, jsem tak rád, že jste se zeptal, mám o tom hodně co říct. -Ano, došlo k velmi drastické změně. -Jsem přebytečně těžký/obezní od tří let, takže existovat tak, jak to je, je vše, co jsem kdy znal, až do 30 let. -Většina mých rodin a přátel, kteří mě znali předtím, se ke mně chovají stejně a jsou SO. -čert. -podpůrný. -Mám několik vybraných rodinných vztahů, které byly od začátku napjaté, kde se zdá, že můj úbytek na váze zhoršil stávající problémy. -To může být výsledek jejich komplexů nebo můj, protože si myslím, že jsou zvyklí na to, že se na mě mohou vykašlat a já na oplátku jsem nyní ještě méně ochotný brát jejich hovno. -Jedna osoba se zvláštním způsobem snažila uzmout si zásluhy na mém úbytku hmotnosti. -Základně, naznačili, že byli ta hnací síla, která mě sem dostala, pod záminkou podpory, když ve skutečnosti ani nevěděli, že dělám tyto změny, dokud jsem neprošel RNY a neztratil více než 100 liber. -Ve skutečnosti byli poslední, kteří to věděli, úmyslně, protože jsem jim prostě nechtěl dovolit, aby se snažili ovládat a vyhrožovat mi věcmi, jako jsem jim to dříve dovolil. 
-Nepřekvapivě, teď se začali urážet mé další vlastnosti, jako říkat mi, že můj nos a čelo vypadají příliš velké od té doby, co jsem zhubl, a že potřebuji operaci nosu a bangy, aby to opravily - to je typické chování od nich. -Nejprve mi tyto věci posílali soukromou zprávou, ale když jsem neodpověděl, začali o tom veřejně komentovat na sociálních médiích, bez studu. -Když jsem byl větší, to by mě zničilo a já bych poslouchal, ale teď to jen ignoruji. -Naštěstí je moje kůže teď silnější (nejen proto, že je v poslední době nadbytečná). -Pozornost od cizích lidí je pro mě nejdivnější částí. -Když jsem byl větší, lidé mi vůbec nevěnovali pozornost. -Jako, velmi málo až žádný kontakt očima. -Žádné říkání ahoj nebo úsměv na mě, když jsme procházeli ulicí, pokud mě neznali. -Určitě žádné vycházení z cesty, aby mi pomohli nebo mě pochválili. -Bylo to více izolující, než jsem si uvědomil, protože to bylo, na co jsem byl zvyklý. -Věděl jsem, že lidé mohou být soudní o mé velikosti - s mnoha to dělají otevřeně - ale nikdy jsem si neuvědomil, dokud jsem nezhubl, mikroúroveň toho a jak subtilní to může být. -Nejenže jsem o tom nevěděl, jen protože jsem na to byl zvyklý, ale myslím si, že ani ti, kteří to podporují, nejsou aktivně si vědomi toho, co dělají. -Opravdu věřím, že je to podvědomá předsudek, vychovávaný a zesílený zobrazením a zacházením s obézními lidmi v médiích, který mnozí lidé prostě neuvědomují, že projektují. -Teď se to cítí, jako by každý všude na mě koukal, usmíval se na mě, mluvil se mnou atd. -Oba muži i ženy se se mnou zacházejí jinak, dělají více úsilí, aby se se mnou bavili / aby mě poznali - a to jen platonicky. -Romanticky se můj datový bazén rozšířil od mála, kteří byli ochotni být viděni se mnou, na to, co se cítí jako… všichni lol. -Je to STRAŠIDELNÉ. -Předpokládal jsem, že alespoň fakt, že jsem byl morbidně obézní, ztratil jsem všechnu tu váhu a teď mám přebytečnou kůži, by odradilo některé lidi, ale navzdory tomu, že jsem dal svou ztrátu hmotnosti a přebytečnou kůži přímo do popředí (protože to nechci být tajemství), to nikoho z mých zkušeností nezajímalo/neodradilo. -Zdá se, že to udělalo přesný opak a zvýšilo jejich zájem, ve skutečnosti. -Obrovské šok pro mě. -Musím tu ale dát malou veřejnou výzvu mužům, kteří se nově baví/randí s ženou, která zhubla: komentáře jako „Jsem tak rád, že jsi zhubla, ještě jsi si neuvědomila, jaká jsi krásná“ NEJSOU cestou, jak jít. -Slyšel jsem nějakou variaci tohoto vícekrát, než bych si přál spočítat, a všichni si mysleli, že je to kompliment. -Říkal jsem ti to.... -Obchod, ve kterém jsem pracoval, procházel úplnou a úplnou reorganizací. -Chodby se měnily a všichni jsme se učili, kde je všechno. -Není třeba říkat, že to byl chaos. -V době, kdy se to stalo, jsme byli docela zaneprázdněni a měl jsem frontu zákazníků, ale Karen se rozhodla přeskočit frontu a zeptat se mě, kde něco je. -Nepamatuji si přesnou položku, ale bylo to něco jako papírové talíře (nebo něco, co by bylo blízko nim ... plastové vidličky? -Slámky?). -Protože jsem měl řadu zákazníků, nemohl jsem odejít, abych jí pomohl najít, tak jsem jí řekl: "Myslím, že jsou teď na uličce 7." -Než se stihnu dostat k mému walkie, abych se na něco zeptal, ona se rozutíká. -Jen aby se o pár minut později vrátili a řekli mi, že tam nejsou. -Manažer je nyní blízko, takže se ho ptám, jestli jí může pomoci, a říkám mu, že jsem si myslel, že jsou na 7, ale ona řekla, že nejsou. -Vypadá zmateně a říká: "OK, možná jsou na 8." -Pomohu vám je najít, pane. 
-Jak se chystají odejít, otočí se ke mně a říká: "Měl bys vědět lépe, než někomu říkat, kde něco je, když to ve skutečnosti nevíš." -Stručně řečeno, vrátila se k pokladně, ale šla do jiné fronty. -Jakmile se manažer vrátil, naklonil se a šeptal mi: "Byli na příčce 7, jak jsi jí řekl." -HA....říkal jsem ti! -Letošní trend na druhý vánoční strom do ložnice posílá prodeje menších smrčků vzhůru. -Máte jen jednu vánoční strom? -Pak byste mohli být pozadu. -Letošní trend je druhý strom v ložnici a to vedlo ke zvýšení prodeje menších smrků. -Podle odborníků více než čtvrtina britských domů nyní disponuje dvěma vánočními smrky - a může to být více než symbol statusu. -Obojí, uklidňující zelená barva i aroma borovice se říká, že jsou dobré pro duševní zdraví a spánkové cykly - zatímco i falešné stromy mohou pomoci vyvolat pocit nostalgie. -Jiné rodiny dostanou dva stromy, takže děti mohou jeden ozdobit tak, jak se jim líbí, poskytující místo pro všechny domácí skvosty, zatímco druhý, elegantněji ozdobený smrk, bude více viditelný, aby imponoval sousedům. -Mezi těmi, kteří se přidávají k trendu, který začal v USA, je Carole Middleton, matka kněžny z Cambridge, která má ve svém domě v Bucklebury, West Berkshire, druhý strom pro vnoučata George, Louis a Charlotte. -Minulý týden napsala na Instagramu: "Letos opět plánujeme mít dva vánoční stromy: jeden pro děti na zdobení a jeden, který dělám sama." -Britské zahradní centra řekla, že prodeje menších stromů se letos zvýšily o 50 procent ve svých 58 lokalitách. -Ředitel Boyd Douglas-Davies, který je také prezidentem Asociace zahradnických obchodů, řekl: „Lidé obchodují s rostlinou v ložnici a dávají do ní krásně zdobené stromy.“ -Sesterská řetězec zahradních center Squire's hlásí, že 30 % jejich zákazníků plánuje mít alespoň dva stromy - a více než desetina z nich má v úmyslu mít tři nebo více. -Předsedkyně Sarah Squire řekla: "Dávají ložnici krásnou, uklidňující vůni, která je dokonalou pomůckou pro dobrý spánek." -Stejně jako rostliny v ložnici jsou známé pro pomoc při psychickém zdraví a čištění vzduchu, stromy jsou také říkány, že pomáhají spánku. -Odborník na spánek Carl Walsh řekl: "Naše mozky shromažďují informace z našeho okolí a to se překládá do signálů, které uvolňují hormony v reakci. -V tomto případě jsou to hormony melatonin a kortizol, které řídí váš spánkový cykl a uvedou vaše tělo do spavého stavu. -Dodal také, že strom v ložnici může také přenést lidi zpět do více bezstarostného a mladistvého období. -"Vánoce mohou být docela stresující čas. -Strom v ložnici vrací lidi zpět do jejich dětství, kdy neměli žádné zodpovědnosti a mohli zapomenout na stresující věci. -To je vždy dobré pro spánek.” -Přeji vám krásný den -Děkuji, že jste si dnes udělali čas na chatování se mnou. -Jakmile tento chat skončí, obdržíte e-mail s hodnocením chatu. -Prosím, vyplňte to, pokud máte chvíli, ale pokud nemáte čas, přeji vám krásný den a ještě jednou děkuji. -Děkuji, prosím, vytrvejte chvíli, zatímco se na to pro vás podívám. -Omlouvám se za to, protože držitel účtu není sám sebe, budeme potřebovat #NAME#, aby nás kontaktoval, aby potvrdil její údaje. Jakmile to udělá a potvrdí, že je s námi spokojen, abychom diskutovali o objednávce s vámi, můžeme se podívat na předchozí korespondenci pro vás. -a udělejte svou první nákup z webové stránky #PRS_ORG#. -Chcete-li aktualizovat své platební údaje, postupujte podle těchto kroků: -Přihlaste se do svého účtu #PRS_ORG#. -Klikněte na "Můj účet" a v menu vyberte "Nastavení účtu". -Vyberte kartu „Informace o platbě“. 
-V sekci „Platební informace“ vyberte typ kreditní karty a zadejte číslo karty, bezpečnostní kód (CVV), jméno na kartě a datum expirace. -Klikněte na "Uložit”. -Objednávka byla zpracována jako objednávka na vyzvednutí, což znamená, že jste si ji vybrali, abyste ji vyzvedli. -Proto nemůžeme přiřadit jezdce pro toto. -Protože objednávka již byla přijata, nemůžeme v tomto okamžiku objednávku zrušit. -Je to kruhovité... -Myslím, že jídlení boxy jsou šílený návrh. -Matematika, kterou dělají, je šílená. -"Ve skutečnosti ušetříme peníze, protože nemusíme jít ven a koupit celou láhev sójové omáčky, abychom vyzkoušeli asijskou kuchyni..." Šílenost. -Myslím si, že v spotřebitelském prostoru je jediným důvodem, proč někdo mimo horní třídu zažil nějaký růst mezd, levnější zboží s nižšími maržemi. -Platy se skutečně nezvýšily, ale věci se staly levnější. -Problém je, že jsme prodali lidi pod námi. -Souhlasím s tebou. -Někteří z nás musí alespoň částečně vzdát pohodlí, abychom společnost udělali lepší. -I když nejsem ve výši příjmu, která by platila více daní, stále mohu kupovat méně věcí, které jsou dražší, aby je mohli vyrábět lidé, kteří vydělávají životní mzdu, a já mohu být ochoten čekat několik dní, abych to mohl dostat, takže některý gig pracovník nemusí být vyčerpán... -Prosím, stále klepnutím tam, kde se zobrazují obrázky, můžete vidět obrázky a sledovat, kam klepnout? -Budu dále poskytovat obrázky. -Ale dejte mi prosím vědět, jestli jste byli schopni klepnout na své zařízení, kde obrázky říkají -Dallas Cowboys přináší lavice do Washingtonu, rivalita se zvyšuje. -Straniště pro návštěvy ve Washingtonu má pro Dallas Cowboys známý domácí vzhled. -Po obdržení zprávy od ostatních týmů, že lavičky na straně hřiště na FedEx Field potřebují výraznou modernizaci, Cowboys přinesli své vlastní pro tento soubojový zápas. -Když přijeli na stadion v neděli, už byli oblečeni ve znaky a loga Cowboys. -Kowboyové slyšeli od Seahawks, kteří nedávno hráli proti Washingtonu v pondělí večer a měli stížnosti, že topení na lavičkách nefungovalo. -Již ve čtvrtek komentoval Cowboys running back Ezekiel Elliott o výhodách hraní venku v chladných hrách, protože vyhřívané lavičky jsou prospěšné pro jeho zranění kolene. -Kowboyové právě zajistili, aby Zeke a jeho spoluhráči dostali tuto příležitost. -Tato akce je jen nejnovější zvrat ve vztahu Dallas-Washington, který se ještě více rozpálil tento týden, když hlavní trenér Cowboys Mike McCarthy předpověděl vítězství svému týmu, což vyvolalo nějaké ohňostroje mezi Washingtonem a Ronem Riverou a hráči. -Washington porazil Cowboys ve vzájemných zápasech po sobě. -Je to více než 30 let, co porazilo Dallas ve třech po sobě jdoucích setkáních (1986-88). -Fanoušci Cowboys tvořili více než polovinu davu na FedEx field, což bylo patrné na modrých a bílých dresech ve stadionu. -Majitel Jerry Jones předznamenal to již tento týden, když řekl na 105.3 FM v Dallasu: "Vždy jsme prodávali více klobouků, čepic, triček Cowboys." -Vždy jsme měli největší podporu fanoušků pozitivní z Washingtonu, to je mimo oblast Dallasu. -Mimo oblast Texasu je Washington místem, kde máme nejvíce podpory ze všech, pokud jde o všechny věci, které byste mohli počítat. -Přidělený jezdec nikdy neukázal. -Odpřiřadili jsme ho a systém nyní hledá nového jezdce. -Prosím, dejte mu ještě 15 minut, aby se tam dostal. -V některých komunitách poskytuje církev bezpečné místo pro některé pronásledované sociální skupiny. -Není to náhoda, že hnutí za občanská práva bylo velmi spjato s menšinovými církvemi, mešitami a chrámy. 
-Ahmad Aubreyho soudní proces je také příkladem pozitivního dopadu. -Satanský chrám také dělá dobré věci. -Nicméně, příklady, kde je něco velmi špatného se systémem, byly vždy zřejmé. -Náboženské organizace a instituce by obecně měly být drženy na stejných standardech jako jakákoli jiná charitativní organizace. -Průhlednost je jméno hry. -Podíváme-li se na případy jako je římskokatolická církev, může být vhodné zajistit, aby prostředky získané těmito daňově osvobozenými náboženskými organizacemi neopustily zemi. -Když přemýšlím o náboženských členstvích, možná je užitečný model spolupráce; každý člen dostane jeden hlas jako akcionář. -Doufejme, že alespoň přispívají na sociální zabezpečení. -Kontrolováním zde znovu, mohu vidět, že jezdec omylem označil objednávku jako doručenou. -Momentálně nemáme přesné informace o tom, co se stalo s jezdcem a také s vaším objednávkou. -Nyní to pro vás vyšetřujeme. -Tohle můžu udělat. -Postupujte podle níže uvedených kroků pro provedení opravy synchronizace vašeho #PRS_ORG# (před zahájením budete potřebovat připojení Wi-Fi): -Přejděte na svou domovskou obrazovku. -Klepněte na ikonu Více v pravém dolním rohu obrazovky (3 vodorovné čáry). -Klepněte na Informace o zařízení. -Vedle Oprava/obnovení vašeho #PRS_ORG# účtu, klepněte na Oprava/Obnovit. -Opravit nyní/Obnovit -Po dokončení synchronizace opět stiskněte tlačítko Sync Now, abyste nainstalovali dostupné aktualizace. -Hangáry Shuttle Enterprise-D -**Enterprise-D** z *The Next Generation* měla **tři** výsadkové hangáry. -Na show vždy vidíme Shuttlebays 2 a 3 na palubách 12 a 13. -Tyto dva výtahové hangáry byly zastoupeny plně velikostním studiovým setem, který mohl ubytovat plně velikostní sady výtahů. -Vždycky jsem to miloval, když epizody ukazovaly dvojité výtahové hangáry na zadní straně střední části, krku nebo čemukoli, co chcete nazvat. -Jak je možné, že jsme nikdy neviděli hlavní výsadkovou palubu? -Bylo to umístěno pod hlavním mostem na palubách 3&4 a pravděpodobně by to byla obrovská zařízení. -Místo toho, aby tam šli, posádka mostu by se projela turboliftem přímo kolem toho až na palubu 13. -V původním *Star Treku* byla postavena a použita miniaturní scéna s miniaturním výtahem, aby se shuttlebayu dala životnost. -Postavy občas mluvily u dveří, které vedly do hangáru s miniaturním setem a hangárem, které byly nad sebou, aby daly lodi měřítko a život. -Nemohli to udělat na TNG? -Viděli jsme, jak Worf a Data vypustili shuttle z hlavního shuttlebay v "The Best of Both Worlds, Part II", ale start shuttle byl viděn zevnitř shuttle. -Prostě vidíme zeď venku z okna, jak shuttle letí ven, rychle nahrazeným vesmírem. -Jediný čas, kdy jsme viděli přistávací hangár v plném měřítku, byl v "Příčina a následek". -Vidíme vesmírný záběr otevírání rolovacích dveří, dekompresi hlavního výtahu a rychle se podíváme dovnitř spolu s několika zaparkovanými výtahy. -Máte nějaké nápady, proč hlavní výsadková hala nikdy nebyla viděna mimo tyto dvě instance? -Jste se odhlásili a přihlásili na své aplikaci? -Udělali jste 2 postupy? -Pokud jste oba postupy provedli a problém nevyřešili, mohu vám peníze vrátit na váš účet Store Credit. -Tak můžete okamžitě koupit knihu podle vašeho výběru. -Bylo by to v pořádku? -Jsi tam? -Pro účely kvality budu muset uvolnit tento chat, pokud nebude žádná interakce během následujících 2 minut. -Děkujeme za kontaktování #PRS_ORG#, bylo mi potěšením Vám dnes pomoci. -Doufám, že máte skvělý den. 
-OK, udělejte mi prosím laskavost a následujte následující kroky> -Připojte zástrčku ze zdroje napájení (není součástí) a poté připojte svůj eReader k zástrčce ze zdroje napájení. -Stiskněte a podržte tlačítko napájení, dokud neuvidíte slova "Vypnuto" na vrcholu obrazovky. -Držte tlačítko napájení po dobu 3-4 sekund. -Uvolněte tlačítko napájení. -Stiskněte a podržte tlačítko napájení na vašem eReaderu po dobu 30 sekund. -Počkejte, až se objeví obrazovka 'Obnovit'. -Uvolněte tlačítko napájení. -Po resetování e-čtečky se vás zeptá na nastavení jazyka a sítě WiFi. -Poté budete muset přihlásit se svou e-mailovou adresou a heslem. -Pokud to nefunguje, prosím, nyní se odhlaste a znovu se přihlaste z vašeho čtečky knih. -Odhlásit se z #PRS_ORG# -Přejděte na svou domovskou obrazovku. -Více ikon na dolní straně obrazovky. -Klepněte na Nastavení. -Klepněte na Účty. -Pod #PRS_ORG#, klepněte na Odhlásit se. -Objeví se potvrzovací obrazovka. -I když neznáte své heslo, můžete si vytvořit nové heslo postupováním podle kroků, které jsem poslal. -Ale nebojte se, můžu také poslat odkaz pro obnovení vašeho hesla. -Děkuji, že jste si dnes udělali čas na chatování se mnou. -Jakmile tento chat skončí, obdržíte e-mail s hodnocením chatu. -Prosím, vyplňte to, pokud máte chvíli, ale pokud nemáte čas, přeji vám krásný den a ještě jednou děkuji. -Hej r/Military! -Jsem země, kde je vojenská služba povinná, a jen se ptám, jak to je v jiných zemích. -Ahoj všichni! -Jsem z Estonska, kde jsem součástí Národní obranné síly. -Zde je vojenská služba povinná pro všechny muže ve věku 16-29 let. -Musíte absolvovat buď 8 nebo 11 měsíců výcviku, po kterém budete posláni do "rezervní" jednotky, dokud nedosáhnete 60 let. -V té době má Obranná síla právo požadovat, abyste se jednou nebo dvakrát ročně účastnili některých vojenských cvičení po dobu přibližně 2 týdnů ročně. -Nicméně, nejste povinni jít na zahraniční misi. -Pokud to chcete udělat, musíte se připojit k "skautskému pluku", kde budete profesionálním vojenským chlapíkem s platbou a podobně. -Jen se ptám, jak je to v ostatních zemích? -Pokud se připojíte k armádě například v USA nebo ve Velké Británii, musíte se pak zúčastnit boje v jiné zemi? -Co si myslíte o povinné vojenské službě? -Během tréninku, když jsem byl v Tapa 2018-2019, byly také jednotky z Velké Británie, USA, Francie, Belgie, Dánska a Kanady. -Bohužel jsme ale neměli moc času na společenskou interakci a já jsem se nemohl osobně zeptat těch kluků, jaké to je pro ně sloužit ve vojenské službě jejich země. -Vím, že v tomto subredditu jsou pravděpodobně převážně členové NATO, ale bylo by zajímavé slyšet i od ostatních (ne-NATO) zemí. -Omlouvám se za mou špatnou gramatiku. -Angličtina je moje druhý jazyk. -Je mi líto, že váš objednávka se zpožděním. -Prozkoumal jsem to a vidím, že vaše oblast v současné době má vysoké objemy objednávek, proto jim byl přidělen jezdec pro vaši objednávku. -Ale jen aktualizace, je tu teď jezdec, který potvrdil svůj příjezd do restaurace. -Francie reaguje na protichůdnou nabídku amerických fregat pro Řecko. -Ministerstva obrany Francie a Řecka obě potvrdila, že konkurenční nabídka od USA nebude mít žádný vliv na již „podepsanou“ a „konečnou“ vícemiliardovou dohodu na nákup francouzských fregat Belharra. -Francouzské ministerstvo ozbrojených sil uvedlo v sobotu, že smlouva o obraně s Aténami byla již "podepsána před několika dny", než americké ministerstvo zahraničí oznámilo své schválení potenciálního prodeje amerických fregat. 
-Od doby, kdy jsme diskutovali s Řeky, americká nabídka už není na stole... -Také jsme podepsali smlouvu s Řeky. -Ministerstvo obrany Řecka také potvrdilo, že dohoda s Paříží je "konečná", protože byla vyjednána na "nejvyšší možné úrovni" a "osobně oznámena" řeckým premiérem Kyriakem Mitsotakisem. -Údajně se očekává, že finální smlouvy budou ratifikovány řeckým parlamentem "brzy". -Agentura pro obranu a bezpečnostní spolupráci USA uvedla v pátek, že schválila prodej za 6,9 miliardy dolarů čtyř bojových fregat od společnosti Lockheed Martin a samostatný program ve výši 2,5 miliardy dolarů na modernizaci fregat třídy MEKO Řecka. -Oznámení vyvolalo některé obavy ohledně dohody Atény-Paříž, zejména po dlouho existujícím podmořském stavebním "obchodu století" mezi Francií a Austrálií, který byl náhle zničen bombou AUKUS smlouvy v září, bez předchozího varování. -Rozzlobený Paříž obvinil Washington a Canberra z "úderu do zad," zatímco jen dva týdny později Macron vystoupil se řeckým premiérem, aby osobně oznámil prodej alespoň tří francouzských válečných lodí Aténám za kolem 3,5 miliardy dolarů, říkajíc, že je čas "přestat být naivní" a propagovat novou dohodu jako znamení "strategické autonomie a suverenity Evropy." -Tentokrát, podle francouzské armády, USA "nás varovaly, že tato oznámení přijdou" a že Američané údajně neměli "žádnou touhu jít dál" s opravdovým prodejem jejich fregat. -Jen kontroluji tyto informace pro vás, nebudu dlouho. -Zkontroloval jsem to a to by bylo bezkontaktní, takže bohužel by nemohli přinést položku na vaši nemovitost, omlouvám se za to. -Varování před bouřlivým počasím, protože silné větry představují „nebezpečí pro život“. -Očekává se, že silné větry budou bít severní části Skotska s narušením cestování, zejména lodních služeb. -Severní západ, Shetlandy a Orkneje čelí v noci z neděle na pondělí rychlostem větru až 85 mph. -Hebridy, západní pobřeží Highlands a části Argyll a Bute byly varovány, aby byly připraveny na letící odpadky, které představují "nebezpečí pro život" a způsobují poškození budov. -Odborníci varují, že špatné počasí může vést k výpadkům elektrického proudu, uzavření silnic a mostů a zrušení leteckých a trajektových služeb. -Následuje po dvou pojmenovaných bouřích, Arwen a Barra, které způsobily rozsáhlé narušení velkých částí země. -Více než 100 000 domů bylo odpojeno od elektrického proudu kvůli extrémnímu poškození způsobenému bouří Arwen 26. a 27. listopadu. -Bouře Barra narušila dodávky pro asi 10 000 jen 11 dní později 7. prosince. -STV počasí prezentér Philip Petrie řekl, že to bylo velmi téměř tři v řadě. -Met Office sledovali nízký tlakový systém, který se v noci v neděli pohyboval po severních oblastech, přinášející velmi silné větry a bouřlivé, silné přeháňky. -Met Office vydal žluté varování před větrem, které vstoupí v platnost od 21 hodin v neděli, které se týká Západních ostrovů, částí Highlands a Argyll a Bute. -"V této oblasti existuje potenciál, že se vítr může dosáhnout rychlosti 80-85 mil za hodinu, což způsobí narušení lodní dopravy a také nějaké škody a výpadky elektrického proudu," řekl Philip. -Další varování nabývá účinnosti v půlnoci v neděli, které se týká Orkneje a Shetlandu. -"Tato varování trvá až do poledne pondělí, jak se střed nízkého tlaku přibližuje k Severním ostrovem, opět přináší poryvy větru o rychlosti 80-85 mph po pobřeží a místně v některých oblastech více než 90 mph," řekl Philip. -"Je to velmi rychlé, takže to bude pryč do pondělního odpoledne, s věcmi, které se začínají uvolňovat a uklidňovat k obědu. 
-Během zbytku týdne se věci budou nadále uklidňovat před příštím víkendem. -Zlodějův kalhoty se mu sjíždějí, když se pokouší utéct. -Takže si všimněte, že jsem to neviděl. -To mi řekli kolegové ve mém prvním obchodním zaměstnání. -Ti dva kluci přišli do obchodu. -Jeden z nich byl v městě docela proslulý tím, že se vždycky dostával do potíží se zákonem. -Po chvíli prohlížení odešel proslulý a vrátil se ke svému vozidlu, zatímco druhý si vzal nákupní vozík a do něj vložil velkou 500 dolarovou sada zásuvek. -Tento společník pak čekal, až budou dva pokladní u východových dveří zaneprázdněni, pak šel přímo kolem nich a ven dveřmi. -Oba si toho všimli a zeptali se navzájem, jestli ten chlap zaplatil. -Když bylo potvrzeno, že ne, jeden za ním běžel. -Řekli mi, že pokladní křičela na něj, aby zastavil, když ho pronásledovala, ale on začal běžet s vozíkem k únikovému vozidlu. -Nevím, jestli byl jedním z těch kluků, kteří měli rádi, že nosí kalhoty nízko, nebo neměl pás. -Ale mi bylo řečeno, že mu kalhoty začaly klesat a on se snažil je zvednout, zatímco běžel a tlačil vozík s těžkou sadou klíčů. -Poté opustí vozík, nechává v něm sada klíčů, jak si zvedá kalhoty a běží k únikovému vozidlu, skočí do něj se svým proslulým společníkem a odjíždí. -Test molekulární diagnostiky může detekovat variantu Omicron během 20 minut: Zpráva -Korejští vědci vyvinuli molekulární diagnostickou technologii, která může detekovat varianty Omicron. -Vývoj technologie byl nyní dokončen a očekává se, že bude trvat čas na její komercializaci. -POSTECH oznámil 10. den, že výzkumný tým vedený profesorem Lee Jung-wook z Katedry chemického inženýrství vyvinul molekulární diagnostickou technologii, která může detekovat variantu Omicron během 20-30 minut a výsledky bude publikovat online. -Omicron je varianta, ve které jsou 26-32 mutace ve spike, který se používá k infikování buněk virem COVID-19. -Podle výzkumného týmu může technologie molekulární diagnostiky rozlišovat mutace na jednotlivém nukleotidovém základě, takže může detekovat "Stealth Omicron", které jsou obtížně detekovatelné PCR testy. -V současné době Korea Centers for Disease Control and Prevention používá tři metody k detekci variant COVID-19: analýza celého genomu, analýza cílové DNA (mutace, jako je například protein spike) a PCR test. -V případě varianty Delta ji lze zjistit pomocí současného PCR testu, ale Omicron ne. -Nově vyvinutá technologie tentokrát není metoda sekvenování, která čte sekvence DNA nebo RNA, ale molekulární diagnostická technologie. -Stávající technologie skenuje pouze specifické oblasti viru, ale molekulární diagnostická technologie byla navržena tak, aby způsobila reakce pouze při existenci RNA COVID-19, čímž umožňuje rychlé zjištění. -Podle profesora Lee má Omicron silný signál pro N geny v PCR testech, ale má slabý signál pro S geny. -V případě "Stealth Omicron" byly oba geny N a S potvrzeny jako pozitivní, což ztěžuje jeho odlišení od ostatních variant. -Molekulární diagnostická technologie pracuje v různých mechanismech od PCR, účinně detekující Omicron variantu. -Na rozdíl od běžné technologie, která obvykle zpracovává až 96 vzorků na zařízení, nová technologie může zpracovat více než 125 za 30 minut (více než 250 vzorků za hodinu). -Navíc tato technologie nepotřebuje speciální vybavení, takže může vytvářet diagnostické sady jednoduše a snadno. -Protože metoda může vyvinout diagnostický kit během 4 dnů, očekává se, že bude rychle reagovat i v případě, že se v budoucnu objeví nová varianta nebo virus. 
-"Doufám, že zveřejnění této technologie nám pomůže co nejdříve se vrátit k normálnímu každodennímu životu," řekl profesor Lee. -Budeme se snažit rychle diagnostikovat a reagovat na nové varianty, které by mohly vyjít po COVID-19. -Tato technologie se nyní nachází před komercializací. -Nicméně, může být použit jako pomocný v současných situacích, kde nebyl vyvinut PCR test pro Omicron. -Profesor Lee řekl: „Myslím, že tato technologie bude blízko komercializaci ve druhé polovině příštího roku po klinických zkouškách. -Důvod, proč zveřejňuji technologii, je sdílet ji s ostatními, aby vyvinuli lepší technologie pro překonání COVID-19 a umožnit také rozvojovým zemím analyzovat varianty COVID-19. -Změna adresy v objednávce není možná, nicméně toto může být doručeno na novou adresu. -Můžete volat jezdce, jakmile je blízko adresy uvedené v tomto objednávce, pomocí funkce volání jezdce v aplikaci. -Je mi opravdu líto za nepříjemnosti, můžete mi odpovědět na můj e-mail a rád budu pokračovat ve vaší osobní asistenci, nebo můžete otevřít novou interakci s námi, jak si přejete, rádi vám pomůžeme. -Pamatujte, že náš Chatovací servis je pro vás otevřený 24/7. -Děkujeme za kontaktování #PRS_ORG#, bylo mi potěšením Vám dnes pomoci. -Doufám, že máte skvělý den. -Aston Villa je nejnovějším klubem Premier League, který trpí výbuchem Covidu. -Aston Villa se stali nejnovějším Premier League týmem, který utrpěl Covidovou epidemii, když bylo v klubu objeveno několik pozitivních případů. -Tréninková seance v neděli na Bodymoor Heath byla zrušena jako výsledek, seance, která byla navržena pouze pro malý počet hráčů k obnově po prohře s Liverpoolem v sobotu. -V této fázi se nezdá, že by se jednalo o vážnou epidemii, protože The Athletic hlásí, že se pozitivně testoval pouze jeden hráč, zatímco ostatní jsou zaměstnanci na tréninkovém hřišti. -Villa čelí venkovnímu zápasu proti Norwich City ve středu večer v Premier League a není žádný náznak, že by měl být zrušen, s tréninkem také očekáváno, že půjde v pondělí normálně. -Identita hráče, který testoval pozitivně, nebyla potvrzena, ani zda to byl některý z mužů, kteří se zúčastnili proti Liverpoolu. -Manchester United také utrpěl v neděli nákazu Covid a zdá se, že o té situaci je více obav, s cestou Red Devil's do Brentfordu ve středu nyní údajně ohrožena. -Tottenham Hotspur už bojují s virusem, jejich zápas proti Brightonu v neděli byl odložen poté, co osm hráčů a pět zaměstnanců obdrželo pozitivní výsledky. -Mistrovské strany West Brom a Queens Park Rangers také trpěly výskytem a zápas QPR se Sheffield United v pondělí byl odložen. -Každý, kdo bude pozitivní na omicron variantu Covid-19, bude muset izolovat po dobu 10 dní, stejně jako každý, kdo byl identifikován jako blízký kontakt pozitivního výsledku. -Děkujeme za kontaktování #PRS_ORG#, jste přes #NAME#. -Abych vám mohl pomoci, můžete prosím poskytnout své údaje o účtu (celé jméno, e-mailovou adresu, poštovní adresu a číslo objednávky)? -Dalším krokem je odhlásit se na vašem zařízení. -Předtím, než to uděláte, chtěl bych, abyste zvážili, že jakékoli poznámky, které jste udělali ve svých knihách, mohou být smazány, stejně jako filtry, čtení postupu, stahování a další přizpůsobení. -Pokud máte e-knihy třetích stran, mohou zmizet. -Jsem v HR a v minulosti jsem pracoval s mzdami. -Pokud ke mně někdo přijde a řekne mi, že pracuje na tom, aby se dostal z finančně zneužívajícího vztahu a jeho zneužívatel se podívá na jejich výplatní pásky, -Možná Vám můžeme pomoci! -V závislosti na společnosti. 
-Nemusel jsem dělat žádné z níže uvedeného, ale musel jsem držet zaměstnance mimo naše adresáře a učit recepci, aby předstírali, že nevědí, kdo je někdo a jak identifikovat násilníka, kdyby přišel. -Mohl bych udělat dohodu o odčerpávání peněz jako pozdější daňový odpočet, dát odpočtu nesouvisející název, který by vypadal jako nějaký povinný odpočet a poté "odeslat" tento odpočet zpět vám samostatně. -Musel bych vás samozřejmě nejspíš podepsat smlouvu. -Další věc, kterou bych mohl udělat: mít s vámi falešnou e-mailovou konverzaci o tom, proč se vám ztrácejí výplatní pásky nebo proč váš heslo nefunguje (po tom, co jej změníte) a jak se „snažíme tohle vyřešit, děkujeme za vaši trpělivost!“ -Nemáme to, ale někteří zaměstnavatelé mohou vyplatit celou nebo část výplaty na svou vlastní debetní kartu, bez potřeby banky. -Také mnoho zaměstnavatelů má různé podvyužívané služby podpory zaměstnanců. -Tyto mohou zahrnovat odbornou pomoc, právní pojištění, slevy a kupóny. -Vyplatí se zeptat se, co mají, abyste mohli využít cokoli, co pomáhá. -Některé tělocvičny vám umožňují pronajmout si skříňky. -Není to ideální místo na skrývání věcí, protože je tu riziko krádeže, ale je to možnost, která by mohla fungovat pro některé. -To je věc, kterou lidé nepochopí. -Matematika neříká, že nemůžete být opravdu nemocní, pokud jste mladí a zdraví. -Možná jsem pesimista, ale to je lepší než myslet si, že jste nezranitelní. -Tohle ti může stát život. -Myslím, že jsem to bral vážně, protože jsem často nemocný a nesnáším to. -Jsem obecně zdravý, ale chřipka mě vždycky velmi tvrdě zasáhne. -Bál jsem se, že Covid bude horší. -Nebylo to, pravděpodobně proto, že jsem se nedostal do styku s mnoha viry, ale bylo to dost špatné. -Trvalo to měsíce, než se mé tělo vrátilo do normálu. -Nevím proč to ovlivňuje lidi jinak, ale pro mě to byly bolesti těla a bolesti hlavy, které byly nejhorší částí. -Objednávka je extrémně pozdě a zde ukazuje, že náš jezdec je již v restauraci. -Nicméně je to divné, protože není žádný pokrok. -V tomto případě jsem označil vaši objednávku jako doručenou a zpracuji vám vrácení peněz. -Takže můžete místo toho objednat novou objednávku. -Zkuste prosím provést tyto postupy. -Pro opravu vašeho účtu v aplikaci #PRS_ORG#, postupujte podle níže uvedených kroků: -Z domovské obrazovky aplikace #PRS_ORG#, klepněte na Více dole na obrazovce. -Klepněte na Opravit váš účet. -Pokud máte hodně položek, může to chvíli trvat, než opravíte váš účet. -Vraťte se na svou domovskou obrazovku a klepněte na knihy nebo audioknihy a zjistěte, zda se objeví chybějící položka. -Až budete hotovi, pokračujte prosím tímto postupem. -Děkuji za vaše čekací dobu, zkontroloval jsem informace do vašeho účtu. -Je mi opravdu líto, že máte s vaším e-knihou tento problém, ale jsem ochoten vám pomoci. -Sdílím s vámi pár kroků, které je třeba provést do vašeho zařízení, ano? -Francouzští rybáři hrozí narušením britských dovozů v rybářském sporu po brexitu. -Francouzští rybáři hrozí narušením britských dovozů v pokusu o vyvíjení tlaku na Lond -Hrozba byla vydána v sobotu několik hodin po tom, co Velká Británie souhlasila s vydáním dalších 23 licencí francouzským rybářům, aby se zmírnily napětí mezi oběma sousedy, kteří se v posledních šesti měsících potýkají s rybářskou krizí. -Francie hledá dalších 81 schválení, aby dosáhla 104 licencí potřebných pro provoz svých lodí v britských a kanálských ostrovech podle dohody o brexitu podepsané loni. -Evropská unie stanovila termín 10. 
prosince pro Londýn, aby udělil licenci francouzským rybářským lodím v rámci brexitu, s hrozbou evropského právního postupu v případě žádného průlomu. -Podtrhujíc, že Francie má nárok na kolem 80 dalších britských licencí, skupina zastupující rybáře v klíčovém přístavu Boulogne-sur-Mer a další po celém severním pobřeží hrozila v sobotu večer protesty. -"Očekávají se protesty ... protesty, které budou cílit na britské dovozy," uvedla ve vyjádření místní skupina rybářského průmyslu CRPMEM pro region Hauts-de-France. -Skupina uvedla, že její členové byli "vyčerpaní" zprávou o pouhých 23 nových licencí a cítili se "zrazeni" Evropskou komisí, která by mohla proti Británii zahájit právní akci kvůli tomuto problému. -CRPMEM řekl, že protesty budou "v souladu s blokádami přístavů v Bretani, Normandii a severní Francii, které se konaly 26. listopadu." -Ten den francouzské rybářské lodě krátce blokovaly trajekty a další lodě v přístavech Calais, Saint-Malo a Ouistreham, zatímco vozidla byla také poslána k narušení dopravy, která se snažila použít železniční spojení Channel Tunnel. -Od té doby bylo uskutečněno několik kol jednání mezi oběma stranami, ale trvalé řešení ještě nebylo vypracováno. -Je obrazovka šedivá a vidíte obálku knihy? -Chcete-li zařízení úplně vypnout, nechte svůj prst stisknutý tlačítko napájení po dobu 30 sekund. -Tip na čištění hardwaru Androidu -Tenké (0,3 mm - 0,5 mm) SUCHÉ mezizubní kartáčky jsou ideální pro čištění těch malých otvorů, ve kterých jsou umístěny mikrofony a reproduktory vašeho chytrého zařízení. -Jsou to levný produkt a bezpečnější než mnohé jiné metody, jako jsou alkoholy na otírání, zubní kartáčky, jehly a jehly. -Právě jsem použil tento způsob k vyčištění portu mikrofonu na mém Samsung Galaxy Watch 4 Classic, protože při použití funkce řeči k textu neregistroval můj hlas. -Po měsících přemýšlení bych potřeboval uspořádat náhradu záruky nebo si objednat opravu. -Po mnoha frustracích křičím na své hodinky během telefonních hovorů, aby mě bylo slyšet a/nebo pochopeno. -Po následování rad výrobce a použití funkcí vodního zámku, resetování zařízení A obnovení továrního nastavení mého zařízení. -A po prohledávání internetu vícekrát. -Zdálo se, že není žádná zaručená spokojenost. -Potom jsem měl zjevení a zkusil mezizubní kartáčky a fungují... -Oni pracují velmi, VELMI dobře! -Po několika poklepáních a otáčení s tenkou, ale pevnou štětinovou tyčí by mělo vaše zařízení fungovat stejně jako když bylo zcela nové. -Doporučuji to provést suchou štětkou a nebudu přijímat žádné následky, pokud se rozhodnete použít to s jakoukoli kombinací jakéhokoli čisticího produktu. -Teplota ohřívače vody a problém s koupelnou. -Můj ohřívač vody je nastavený docela nízko. -Je to malá nádrž v šatně (žiji v starém předválečném bytě). -Otázka je, zda se vana naplní až po okraj bez toho, aby se ochladila. -Pokud se koupe jen jednou týdně (ale sprchujte každé 2 dny nebo tak), a ohřívač má dostatek vody pro rychlé sprchy, stojí za to zvýšit teplotu pro jednou týdně koupel? -Nebo bych ušetřil více elektřiny ohříváním svého hrnce na sporáku a přidáním do koupele, jakmile je ohřívač vody vyprázdněn? -Editace: Děkuji všem za rady! -Zvýšil jsem teplotu jen trochu a to zabralo. -Komentář o vaření, které je neefektivní, je pravděpodobně správný, protože i když ohřívač vody běží neustále, to věc má TOLIK izolace. -Je to těžké dostat se tak, že to nechám na té teplotě a nazvu to dnem. -Další věc, kterou potřebujeme, abyste zkusili, je resetovat USB porty ve vašem počítači. 
-Návod, jak to udělat, naleznete v následujícím odkazu: #URL# -Ujistěte se, prosím, že vyzkoušíte tři metody uvedené tam. -Pokud po jejich vyzkoušení problém zůstane, ujistěte se, že nás znovu kontaktujete. -Děkuji za poslání fotky. -Nechte mě to pro vás dále zkontrolovat. -Kontrolováním toho znovu se zdá, že zde je pouze jeden kus pro Shrimp Dumpling. -Myslím, že je možné (ale nevím), že jim říkali, že není bezpečné řídit. -Když se tornáda stane tak neodvratnou, obvykle to dělají ti počasí "Chraňte se nyní!!" -Věc, protože nevíte, zda to bude za dvě minuty nebo deset minut nebo co. -Nevím, jakou právo mají skutečně zakázat lidem opustit, ale mohu vidět, kde by jim říkali, aby se schovali. -Můžete si představit všechny ty lidi, kteří se snaží dostat z parkoviště, když to přistálo? -Všichni by byli zabiti. -ALE pokud by byli jako "Pokračujte v práci!" -Místo "Schovej se!" -To je jiné. -Ví někdo, jestli ještě pracovali nebo se schovávali někde uvnitř? -Použijte prosím funkci „přidat do košíku“ pro sloučení vaší objednávky, poté se přihlaste a zaplaťte jako obvykle. -Poté vám vrátíme přebytečné poštovné při odeslání. -Pokud byste chtěli vědět dopředu, jaká bude doprava, pošlete nám zprávu, ve které uvedete, jaké položky a velikosti byste chtěli a do jaké země mají být zaslány. -Q. Můžete odeslat mou objednávku na jinou adresu? -A. Pokud jste v UK, nemáme problém poslat na jinou UK adresu, ale musíte si vybrat položku, která má jako výchozí dopravu podepsanou, nebo si vybrat možnost podepsaného doručení při objednávce. -U mezinárodních objednávek nemůžeme změnit adresu. -Pokud je chyba, dejte nám prosím vědět co nejdříve, abychom mohli objednávku zrušit a abyste mohli znovu nakoupit s správnou adresou. -Q. Mohu mít měření položky? -A. Prosím, zkontrolujte popis inzerátu a obrázky v inzerátu. -Pokud to bude možné, budeme se snažit umístit velikostní průvodce. -Pokud nenajdete průvodce velikostí, kontaktujte nás prosím. -Q. Jak se oblečení srovnává s velikostmi ve mé zemi? -A. Pokud není uvedeno jinak, byly všechny položky navrženy pro trh ve Velké Británii. -Pokud jste v Severní Americe, velikost v UK je trochu menší, takže budete možná muset jít o velikost výš. -Pro další pokyny si prosím prohlédněte tabulky velikostí. -Obvykle jsou velikosti UK stejné jako velikosti EU a neměly by být upravovány. -Q. Kdy dorazí má objednávka? -Pro Velkou Británii odesíláme téměř všechny objednávky prostřednictvím Royal Mail 1. třídy. -Toto má odhad dodání 1-3 dny. -Pokud potřebujete doručení na druhý den, nabízíme službu Royal Mail Special Delivery 1pm. -Pro Evropu trvá většina objednávek mezi 3-5 dny, než dorazí do země a pro zbytek světa 5-7 dní. -Poté nemůžeme nabídnout odhad časů dodání, protože to závisí na poštovních službách jednotlivých zemí a na celních službách, pokud se nachází mimo EU. -Q. Můžu vyměnit za jinou velikost / položku? -Ano, existují dva způsoby, jak to lze udělat. -1) Kontaktujte nás prosím a požádejte o adresu pro vrácení. -Když zboží zasíláte zpět, musíte přiložit poznámku s uvedením vašeho eBay ID a velikosti, kterou potřebujete. -Pokud jste objednal mezinárodní objednávku, polovina původního nákladu na dopravu by byla znovu aplikována. -2) Použijte možnost eBay pro vrácení položky. -Tato možnost je také vhodná, pokud byste chtěli vrácení peněz, protože jakmile obdržíme zboží zpět, vrátíme vám je; pokud potřebujete výměnu, kupte si správné zboží buď před vrácením peněz nebo po něm, jak je požadováno. 
-Prosím, použijte funkci „přidat do košíku“, aby vaše objednávka zůstala pohromadě. -Pokud jsou položky objednány individuálně, nemůžeme zaručit, že budou odeslány společně. -Jakmile jsou všechny položky ve vašem nákupním košíku, prosím, zkontrolujte a zaplaťte jako obvykle a my vám vrátíme přebytečné poštovné. -Pokud jste neobdrželi svůj náhradu poté, co jsme označili položku jako odeslanou, pošlete nám prosím zprávu, abychom mohli náhradu zpracovat. -Q. Zahrnujete účtenku? -Ne, nezahrnujeme do balíčků účtenky, pokud není požadováno. -Pokud potřebujete účtenku, pošlete nám zprávu a můžeme ji poslat emailem. -Pokud potřebujete daňový doklad, kontaktujte nás a my vám ho pošleme emailem. -Q. Čekám na svou objednávku už nějakou dobu a ještě nepřišla. -Je možné, že je ztracené? -A. Pro objednávky do Velké Británie dejte své objednávce 7 dní na příjezd, Evropa 21 dní a zbytek světa 30 dní. -Pokud váš objednávka ještě nedorazila po těchto datum, kontaktujte nás, abychom mohli vyšetřit s dopravním agentem. -Prosím, vezměte na vědomí, že to může trvat až 2 týdny, ale jakmile budeme mít aktualizaci, dáme vám vědět. -Q. Jsem mimo EU. -Musím platit nějaké clo nebo celní poplatky? -A. Zkontrolujte si prosím tyto informace s místními úřady. -Neberme na sebe žádnou odpovědnost za celní nebo cla, ani nebudeme platit žádné peníze k nim. -Neupravujeme informace na celních prohlášeních, takže prosím neptejte se. -Děkujeme vám za vaši zákazníky. -Jsme malá firma z ostrova Man, a pokud máte jakékoli dotazy ohledně vaší objednávky nebo jakékoli otázky, neváhejte se na nás obrátit. -Budeme se snažit vrátit se k vám co nejdříve, ale může to trvat až 24 hodin. -Pokud po této době neobdržíte odpověď, pošlete prosím zprávu znovu, abychom ji přehlédli nebo vzácnou příležitostí, kdy je problém s eBay Messages. -Přejděte na svou domovskou obrazovku. -Klepněte na ikonu Více (tři vodorovné čáry) dole na obrazovce. -Klepněte na Nastavení -Klepněte na Informace o zařízení. -Vedle „Opravit váš účet #PRS_ORG#“, klepněte na Opravit. -Opravit nyní. -Proces opravy účtu začne. -Pokud máte hodně knih, může to chvíli trvat. -Pro opravu vašeho účtu v aplikaci #PRS_ORG#, postupujte podle níže uvedených kroků: -Z domovské obrazovky aplikace #PRS_ORG#, klepněte na Více dole na obrazovce. -Klepněte na Opravit váš účet. -Pokud máte hodně položek, může to chvíli trvat, než opravíte váš účet. -Vraťte se na svou domovskou obrazovku a klepněte na knihy nebo audioknihy a zjistěte, zda se objeví chybějící položka. -Uvědomte si, že pokud zaplatíte za expresní dopravu, čas zpracování objednávky je stále 3 pracovní dny. -Až bude položka expedována, bude expedována službou expresního doručení, pokud jste za to zaplatili. -Pracovní dny nezahrnují soboty, neděle a státní svátky. -Mezinárodní zásilky jsou obvykle doručeny do 11 až 22 pracovních dnů, v závislosti na čase, který je potřeba k projití celní kontroly. -Sazba za dopravu - Zdarma Standardní doprava, pokud je uvedena jako zdarma v produktu. -Poznámka 1: Některé země mohou účtovat dodatečné poplatky na místním celním úřadě. -Prosím, zavolejte na vaši celnici nebo to googlujte pro přesné poplatky. -Poznámka 2: Jakékoli clo nebo daně v zemi kupujícího budou hrazeny kupujícím a my nebudeme nahrazovat žádnou částku. -Garantujeme Vaši spokojenost a nabízíme 30denní záruku vrácení peněz (nebo výměny). -Pokud z jakéhokoli důvodu není s vaším nákupem spokojenost, kontaktujte nás prosím nejdříve, než zanecháte negativní / neutrální zpětnou vazbu, abychom mohli věci napravit! 
-Máte 30 dní na vrácení zboží od dne objednávky. -Musíte nám poskytnout sledovací čísla, jakmile zboží zpět odesíláte. -POKUD JE VÝROBEK POŠKOZEN NEBO JSOU ŠTÍTKY ODSTRANĚNY NEBO POUŽITY NEBO NOSENY VÁMI, PAK JE VRÁCENÍ NEPLATNÉ. -Daň zpětného dovozu v zemi kupujícího, pokud je vybírána, musí být zaplacena kupujícím. -Dbáme absolutní péče, aby byly cenné šperky dobře zabaleny, aby nedošlo k poškození produktu. -Jsou dodány v elegantní krabici, ideální pro darování někomu speciálnímu. -Zpětná vazba a DSR (Detailní hodnocení prodejců). -„Naší prioritou je mít 100% spokojenost zákazníka a zajistit, abyste měli skvělé nakupování. -Můžete se cítit bezpečně, že nám můžete důvěřovat a kontaktujte nás, pokud máte nějaké otázky nebo komentáře. -Vezmeme vaši zpětnou vazbu s nejvyšší důležitostí. -Pokud z jakéhokoli důvodu není s našimi produkty nebo službami spokojený nebo nespokojený, nejprve se nás zeptejte a dejte nám příležitost věci napravit. -Nechceme žádné negativní hodnocení a tyto nemohou být po udělení změněny, takže nám dejte příležitost poskytnout rychlejší řešení pro jakýkoli problém, se kterým se můžete setkat. -Specializujeme se na šperky na míru Solitaire Diamond Rings, Snubní prsteny, Svatební pásky, Diamantové náušnice, Svatební náhrdelníky, Přívěsky a Loose Diamond Solitaire spolu s mnoha dárkovými předměty. -Také jsme zavedli diamantové šperky v 92,5 stříbře. -Naše nabídka zahrnuje prsteny, náušnice, přívěsky a Mangalsutra. -Máme více než 6 desetiletí zkušeností s výrobou šperků. -Také se zabýváme velkoobchodem a vývozem 14 K, 18 K ručně vyrobených a strojově vyrobených zlatých diamantových šperků. -Můžete to také resetovat odtamtud. -Nicméně doporučuji resetovat to z vašeho počítače, i když jste přihlášeni na svém počítači, to je pro vás, abyste si zapamatovali své heslo, protože tyto informace je důležité znát nazpaměť. -Jakmile je vaše heslo resetováno z vašeho počítače, zkuste prosím znovu přistupovat na naši dceru e-čtečku s vaším novým heslem. -Dejte mi prosím vědět, jestli to funguje. -Kupoval jsem nové pneumatiky. -Našel jsem ty, které jsem chtěl na webových stránkách obchodu s pneumatikami. -Vytiskl jsem stránku a vzal ji do mého místního obchodu. -Byla to součást řetězce. -Chlapík v obchodě to vyšetřil a vyšlo najevo, že současná cena pneumatik byla vyšší než můj výtisk. -Nevím, odkud má vyšší cenu. -Naštěstí byl ten chlap upřímný a místo toho, aby se snažil účtovat vyšší cenu, prodával mi pneumatiky za cenu, kterou jsem měl na svém výtisku. -Řekl, že protože jsem měl výtisk, musel mi prodávat pneumatiky za cenu výtisku. -On byl na to také velkorysý. -Od té doby od nich kupuji pneumatiky. -Děkuji - takže tato dotaz je s skladem, jak je uvedeno v chatu včera, musíme čekat na odpověď na vyšetřování. -Jakmile se nám ozve zpět, to je kdy vám bude odeslán email. -Zkoušel jsem volat do restaurace a také jezdce, ale nebyli schopni odpovědět, omlouvám se. -Můžu vědět, jestli stále chcete čekat na objednávku? -Pes, který neustále štěkal a jak jsem ho zastavil. -Moji sousedé si před třemi lety pořídili psa. -Tyto sousedi a já sdílíme plot. -Odděluje naše zahrady. -No, tento pes štěká a štěká a snaží se mě kousnout skrz plot celou dobu, kdy jsem venku na zahradě. -Zkusil jsem to ignorovat, mluvit tiše atd. -ale tento pes je šílený. -Údržbáři se toho bojí. -Takže jsem šel a udělal sousedskou věc a zeptal se jich, jak mi pomohou zjistit, jak dostat tohoto psa, aby se uklidnil. -V tuto chvíli nemůžu ani použít svou zahradu. -Ten pes je venku celý den, štěká a štěká bez přestání. 
-Ptala jsem se, jestli mu můžu dát zdravé pochutiny skrz plot. -Majitel říká ne. -Ptala jsem se, jestli bychom mohli jít na polovinu na nešokový obojek na psa. -Majitel říká ne. -(Upravit na to, že jsem souseda požádal alespoň třikrát, aby jí pomohl s jejím psem.) -Frustrovaný, ale ještě neochotný volat kontrolu zvířat nebo cokoli jiného, jsem vymyslel plán. -Koupil jsem velmi pěkný přenosný reproduktor, který je *hlasitý*. -Jako jsem to uložil a investoval do toho. -Teď, každou dobu, když jdu do mé zahrady, přináším svůj reproduktor. -Zde není žádný dení zákaz hluku, zkontroloval jsem to. -Když pes začne štěkat a štěkat na plot, moji sousedé (všichni) si mohou užít trochu Lamb of God nebo Rotting Christ nebo nějakou jinou skvělou hudbu naplno. -Můj reproduktor třese stůl. -Sousedi to neměli dlouho, než dali dohromady dva a dva. -Pes je nyní držen převážně uvnitř a když ven, je rychlý nebo majitel jde ven s ním. -Čtvrť je nyní nádherně tichá. -Opravit některé gramatické chyby -Opět upravit: PSA NEDÁVEJTE PSŮM ŽÁDNÉ DRUHY LIDSKÝCH LÉKŮ, JAKO JSOU LAXATIVA NEBO NYQUIL. -To může vážně ublížit a dokonce i zabít zvíře. -Také je reproduktor přenosným PA systémem od JYX, pokud by někdo měl zájem. -Bylo to pod 200 dolary, ale jsem chudý, takže jsem musel trochu šetřit. -Zní to skvěle za ty peníze. -Jsem ohromen a stejně tak i moji sousedé. -Lavina na lyžařském středisku ve Washingtonu zabila 1 osobu a uvěznila 5. -Lavina se v sobotu prohnala částí lyžařského střediska ve Washingtonu, které se používá k přístupu k lyžování v zadních krajích, zabila 60letého muže a dočasně uvěznila pět dalších. -Lavina byla nahlášena kolem 10:50 ráno v oblasti Silver Basin na Crystal Mountain, která se nachází asi 85 mil (137 kilometrů) jihovýchodně od Seattle, řekl sgt. Darren Moss z Pierce County Sheriff's Department. -Identita muže, který zemřel, nebyla zveřejněna, ale úřady říkají, že po vytažení z sněhu nebyl dýchání a přestože druhý lyžař provedl resuscitaci, nepřežil. -Ostatní lyžaři ve své skupině se zachránili s pomocí dvou svědků, kteří je viděli, jak jsou unášeni sněhem. -Všichni měli na sobě lavinové vysílačky. -Zatímco všichni ti, kteří byli chyceni v lavině, byli zkušení backcountry lyžaři, bylo vydáno varování proti lyžování v oblasti, která byla právě uvnitř hranic Crystal Mountain Resortu. -Soukromé lyžařské středisko určuje podmínky, ale nic nebrání lyžařům, aby tam šli, protože pozemek sousedí s veřejnými pozemky v Národním lese Mount Baker-Snoqualmie. -Frank DeBerry, prezident a CEO resortu, řekl, že všech šest mužů mělo lyžařské pasy pro výstup na svah, což znamená, že byli registrováni u lyžařské hlídky, účastnili se orientace, jak a kde přistupovat k lyžování v zadní části resortu a byli povinni zkontrolovat sněhové podmínky před svou výpravou. -Lyžaři mohou cestovat, kam chtějí, kdekoli v národním lese. -Šli do lesa, ale nakonec se vrátili do hranic (rezortu), kde se stala tato událost," řekl DeBerry. -Kromě uzavření oblasti, kde došlo k sjezdu, uzavřel rezort již během dne Mt. Rainier Gondola kvůli větru dosahujícímu rychlosti 100 mil za hodinu (161 kilometrů za hodinu). -Lavina přišla během prvního významného sněžení sezóny. -Oblast je pod varováním před zimní bouří až do nedělního rána, s tím, že Národní služba pro počasí říká, že pro oblasti nad 2000 stop (610 metrů) je možné 12 až 15 palců (38 centimetrů) sněhu. -"Měli jsme pozdní start sezóny a teď jsme se dostali z téměř žádného sněhu na obrovskou sněhovou bouři. -Lidé se vzrušili," řekl DeBerry. 
-"Všichni si musíme pamatovat, že je to sport, který přináší riziko." -Crystal Mountain je největší lyžařský resort ve Washingtonu, který zahrnuje 2 600 akru (1 052 hektarů). -Oriflame Optimals Hydra Radiance Hydratační denní krém + Hydra Radiance Hydratační noční krém - Normální / Kombinovaná pokožka -Formulováno se švédskou přírodní směsí složek červené řasy, hnědé řasy a vodních minerálů s vitamínem C a protiprachovou aktivní látkou. -Aqua Minerals udržuje pokožku hydratovanou a pružnou. -Hydratační denní a noční krém, který zanechá žíznivou pokožku jemnou, pružnou a svěží. -Formulováno se švédskou přírodní směsí složek červené řasy, hnědé řasy a vodních minerálů s vitamínem C a protiprachovou aktivní látkou. -Hlavní kreditní karty a online bankovní převody jsou vítány. -Okamžitá platba je po vítězném dražebním příkazu požadována. -Zboží bude odesláno ve stejný den nebo následující den po obdržení plného platby. -Doba dodání je přibližně 10-21 pracovních dnů (Špatné počasí může způsobit zpoždění v dodávce a může trvat déle než měsíc, než se dostane.). -Za dodatečný poplatek můžeme zajistit expresní přepravu kurýrem (India post parcel) během 5-11 pracovních dnů. -Nabízíme slevu na kombinované dopravě při nákupu dvou nebo více položek z našeho obchodu. -Jen se nás zeptejte kliknutím na "Zeptat se". -Mezinárodní zákazníci jsou zodpovědní za clo a daně ve své zemi. -Kupující je zodpovědný za náklady na návratnou dopravu. -Refundace může být provedena pouze v případě, že není k dispozici náhrada. -Jakékoli poplatky za dopravu, manipulaci a pojištění nejsou vratné. -Nevracíme náklady na dopravu. -Naší prioritou je 100% spokojenost zákazníka. -Dáváme důležitost našim zákazníkům a poskytujeme nejvyšší kvalitu zákaznického servisu. -Etika a integrita jsou nejlepší částí našeho podnikání a věříme v dodávání nejlepší kvality produktů a služeb za nejlepší ceny. -Navíc je jedním z našich hlavních cílů odpovídat na otázky co nejrychleji a co nejdříve. -Naší prioritou je 100% spokojenost zákazníka. -Cílem je poskytovat přísně 5hvězdičkovou službu ve všech kategoriích. -Udržujeme 100% spokojenost zákazníků! -Vaše zpětná vazba je pro nás velmi důležitá. -Jakmile obdržíte položku, nechte nám prosím pozitivní zpětnou vazbu. -Pozitivní zpětná vazba je velmi ceněna a my také zanecháme pozitivní zpětnou vazbu. -Pokud z jakéhokoli důvodu nejste spokojeni, nezanechávejte prosím střední nebo negativní zpětnou vazbu. -Dejte nám šanci a my dáme do toho všechno. -Rádi se rychle postaráme o problém a dáme vám uspokojivou odpověď. -Zkontroloval jsem to tady a zdá se, že jezdec tam šel. -Zkontrolovali jste u vašich dveří nebo v příjezdové hale? -Možná to tam zanechal. -Omlouváme se za nepříjemnosti. -Omlouváme se za nepříjemnosti. -Je tu něco jiného, s čím vám mohu pomoci? -Je to poprvé a doufám, že naposledy. -Přeji vám krásný zbytek dne a šťastný nový rok! -Už jsem to udělal několikrát, nefunguje to. -Vyplatil jsem vám náhradu za knihu. -Jsem velmi nespokojený s řešením, co mám dělat, pokud se problém objeví znovu v další knize? -Je to zadní příležitost, že se to stane. -Omlouvám se, z důvodu kvality budu muset tento chat uzavřít, pokud neobdržím odpověď do 2 minut. -Teď tento chat uzavřu, protože nebyla obdržena žádná odpověď. -Ještě jsem neviděl žádné komentáře od Australanů, takže bych mohl říct pár slov. -Je obtížné najít vybavení, které je jedinečné nebo mimo hlavní proud. -Většina desek jsou masové tržní desky jako sector 9s nebo Loaded Tan Tien's ... Mám obojí a nezlobím se. 
-Pokud chci něco mimořádného, kupuji přímo od výrobce nebo prostřednictvím Muir. -Doprava je vždy problém a vždy je drahá. -Opravdu jsem chtěl Tortugu, ale když to bylo všechno zaplacené, bylo to více než 500 AU $ (včetně dopravy a kurzu). -Pouze doprava byla přibližně US$100. -Rozumím, že tohle není něco, nad čím máte kontrolu... Jen jsem chtěl ilustrovat úvahy a oběti, které byly učiněny z této strany světa. -Nakonec, milujte své desky! -Velký respekt. -Zdarma trénink na CompTIA A+ | Bude pokrývat celý kurz. -Momentálně poskytuji zdarma výcvik na kurzu CompTIA A+. -Kurz se skládá z 18 modulů a budu dělat věnovatou videa na každý modul. -Některé z těchto videí mohou být trochu dlouhé, protože to bude celý modul v každém videu, takže pokud hledáte pouze konkrétní témata nebo chcete pouze obnovit určitá témata, využijte prosím časových razítek v popisech. -Časové razítka jsou tam, aby vám usnadnila život, takže je to vaše vlastní chyba, pokud nakonec skenujete modul tam a zpět jako šílenec, který hledá své ztracené zuby. -Udělám 20 videí pro tento kurz, první je jen 4minutový úvod vysvětlující kurz, poslední bude video s tipy na zkoušku a pak samozřejmě 18 videí mezi nimi budou vaše moduly. -Trénink by měl být dostatečný k tomu, abyste prošli obě mezinárodní zkoušky pro A + a ostatní kurzy, které poskytuji, by měly být také dostatečné k tomu, abyste prošli příslušné zkoušky, pokud existuje zkouška související s tím konkrétním kurzem. -Pokud máte otázku týkající se konkrétního tématu v modulu nebo kurzu obecně, na kterou byste potřebovali více jasnosti, neváhejte se zeptat a já se pokusím vám pomoci, pokud jsem online. -Zde je úvod do kurzu -**Úvod do kurzu CompTIA A+** -Nabízíme devět typů plakátů: -Vyberte si požadovaný formát plakátu z rozevíracího menu. -Plakáty jsou zasílány v pevném kartónovém A5 obálce. -Používá se, když je 6x4" (10x15 cm) příliš malý. -Plakáty jsou zasílány v pevném kartónovém A5 obálce. -Vysoká kvalita fotolabu v lesklém finiši. -Vysoký lesk dodává život tisku, čímž se barvy zdají živé a ostré. -Plakáty jsou zasílány v pevném kartónovém A5 obálce. -Tisknuté na super-premium polomatný fotografický papír, poskytuje vysokou definici barev s omezenou reflexí přímého světla. -A3 Plakáty jsou zasílány v kartonové trubce na plakáty. -Tisknuté na vysoce kvalitním fotopapíru 280g super-premium semi-lesk, poskytuje vysokou definici barev se sníženou reflexí ve přímém světle. -A2 Plakáty jsou zasílány v kartonové trubce na plakáty. -Naše laminované plakáty A4 a A3 jsou pokryty plastem a mají na každé straně přibližně 2mm tenký průhledný plastový rámeček. -Nejsou dodávány s rámem. -A4 rám může být zavěšen nebo stát volně. -A4 rámované obrázky jsou dodávány s černým dřevěným rámem s skleněnou přední stranou. -Fotky přicházejí v pevném kartónovém obálce v krabici s rámem. -Pokud potřebujete tisk s nebo bez okrajů, nechte nám prosím zprávu. -Různé počítačové obrazovky, operační systémy a dokonce i různé webové prohlížeče mají různé barevné charakteristiky, takže je téměř nemožné, aby daná barva vypadala stejně na každé obrazovce. -Pokud barvy plakátů neodpovídají vašim očekáváním, pošlete nám prosím zprávu. -Většinou to můžeme změnit tak, aby vyhovovalo vašim potřebám. -Je to pravidelná funkce, kterou zařízení má, pokud chcete ušetřit více energie, můžete provést tyto kroky: -Přejděte na svou domovskou obrazovku. -Klepněte na ikonu Více dole na obrazovce. -Klepněte na Nastavení. -Klepněte na Úspora energie a soukromí. 
-Klepněte na seznam vedle „Automaticky usnout po“ a vyberte čas, než se váš #PRS_ORG# eReader usne. -Čím kratší čas, tím déle vydrží baterie vašeho eReaderu. -Klepněte na seznam vedle „Automaticky vypnout po“ a vyberte čas, než se váš #PRS_ORG# eReader vypne. -Čím kratší čas, tím déle vydrží baterie vašeho eReaderu. -Jak vidím, tento jezdec přijel na vaše místo v 12:39. Zkusil doručit tuto objednávku do 12:52. -Jezdec se snažil zanechat objednávku na bezpečnosti, ale on to neakceptoval. -Proto byl objednávka vzata jezdcem, jak jde. -Vaše zakoupená položka bude odeslána prostřednictvím Royal Mail nebo národní kurýrní společnosti. -Snažíme se odeslat zboží stejný nebo následující pracovní den v závislosti na čase nákupu po obdržení platby. -12.00 poledne je časový limit. -Nezpracováváme ani nezasíláme objednávky během veřejných svátků nebo víkendů. -Všechny objednávky odesíláme v souladu, ale v určitém okamžiku může být možné, že vámi zakoupená položka bude vyprodána. -V tomto případě vás budeme informovat / kontaktovat buď když bude položka zpět na skladě připravená k odeslání nebo abychom vám poskytli alternativní možnost. -Budete mít právo zrušit objednávku, pokud si to přejete. -eBay poskytuje odhadované datum doručení, které nezahrnuje žádné předpokládané zpoždění od Royal Mail / Kurýrů. -To může zahrnovat špatné počasí, poruchu systému nebo stávky zaměstnanců atd. -Tyto problémy nejsou pod naší kontrolou, takže si to prosím uvědomte. -Zasíláme zboží s očekáváním poskytování služby kurýry, ale někdy se můžou zklamat a to nemůže být naše vina. -Pokud není kurýr schopen doručit, měla by být příslušnou doručovací společností vystavena karta, která uvádí, jak uspořádat opětovné doručení nebo kde je balíček pro vás ponechán k vyzvednutí. -Pokud byl balík vrácen na depo kurýra, pak vám umožní určitou dobu na jeho vyzvednutí. -Pokud nebude v tomto čase vyzvednuto, zásilka bude vrácena zpět nám. -Poté bychom požadovali, abyste nám uhradili náklady na poštovné pro opětovné zaslání balíčku zpět k vám. -Pokud položka již není potřebná, bude vystavena náhrada méně částky za poštovné. -Pokud z jakéhokoli důvodu nebudete s nákupem spokojeni, můžete zboží vrátit a obdržet náhradu do 30 dnů. -Vrácení jsou přijímána pouze v případě, že položka je v jejím původním prodejním stavu, což znamená, že položky nesmí být použity, nosil, označeny, nemají žádnou vůni, žádné chlupy zvířat nebo být v takovém stavu, že to nemůže být prodáno znovu. -Zboží musí být vráceno v jeho původním balení s všemi produktovými štítky připojenými. -Buďte prosím opatrní, když si oblečení zkoušíte, abyste nenosili make-up, nebo vlasové produkty, parfémy, deodoranty nebo jiné krémy nebo látky, které by mohly produkt značit nebo poškodit. -To pouze vede k tomu, že váš vrácený produkt nebude přijat námi pro vrácení peněz. -Budeme po vás požadovat platbu poštovného, abychom vám položku vrátili. -Budeme uchovávat nebo čekat na platbu poštovného za položku maximálně po dobu 30 dnů a po této době bude položka zlikvidována. -Položky musí být vráceny do 30 dnů od přijetí. -Pokud přišla položka vadná nebo jsme poslali špatnou položku, pak zaplatíme návrat položky. -Nejjednodušší způsob by bylo otevřít žádost o vrácení prostřednictvím eBbay. -Jakmile bude přijato, prozkoumáme položku a vrátíme vám účet. -Položky, které se po/během nošení poškodí, budou podrobeny kontrole položky, když nám bude vrácena. -Pokud se závada považuje za skutečnou výrobní vadu, budete vám vráceny peníze. 
-Pokud to není výrobní vada, bude položka po zaplacení poštovného na její vrácení vám vrácena zpět. -Znovu budeme čekat 30 dní na provedení této platby, po které bude položka zlikvidována. -Nejjednodušší způsob by bylo vrátit položku pro vrácení peněz prostřednictvím vrácení Ebay a poté jednoduše zakoupit požadovanou velikost nebo barvu zpět od nás. -VŠECHNY VRÁCENÍ JSOU ZODPOVĚDNOSTÍ ODESÍLATELE, DOKUD NEDORAZÍ K NÁM. -Získejte prosím Důkaz o poštovném od pokladního pošty. -Pozitivní zpětná vazba je vždy vítána, ale pokud z jakéhokoli důvodu nastane problém s vaším nákupem, dejte nám prosím šanci vyřešit tento problém. -Doufáme, že naše zákaznická služba bude velmi uspokojivá. -Děkuji za poskytnuté informace, doufám, že se máte dobře. -Prosím, nechte mě ověřit váš účet #PRS_ORG#. -Budu rád, když vám mohu pomoci. -Dejte mi prosím chvíli. -Děkujeme za počkání, omlouváme se, že vaše matka nedostala dárkovou kartu, potvrďte prosím email, který byl odeslán. -Bojím se, že nebudu schopný potvrdit cenu postele, dokud nebude znovu k dispozici na webu, protože byla snížena ve vaší zimní slevě, je pravděpodobné, že to nebude cena článku, když je znovu přidán na web. -Počkejte na objednávku a můžete nám nahlásit, pokud někdy jídlo není horké, takže vám můžeme pomoci. -Vím, že mít kredit vám nezlepší vaše jídlo, ale dovolte mi, abych kompenzoval zpoždění. -Makeshift Munster rozdrtí Včely v Champions Cup crackeru. -Munster převálcoval Wasps v napínavě chaotickém zápase Heineken Champions Cup, který se odehrával mezi náhradními týmy obklopenými problémy s Covid a zraněními. -Divoká první polovina skončila s Munsterem vedením 13-7 poté, co kapitán Wasps Brad Shields byl kontroverzně vyloučen za nebezpečný zákrok na Dave Kilcoyne. -A s prostitutkou Dan Frostem vyloučenou na půli času, jejich 13 mužů bylo na lopatkách a Munster reagoval inženýrováním try pro debutanta Patricka Campbella a Andrewa Conwaya. -Všestranný výkon Alfieho Barbearyho, který zakončil úžasný pokus, dal Wasps naději, ale byl neočekávaně nahrazen v poločase a od okamžiku, kdy Campbell ukázal svou třídu a skóroval ve 43. minutě, se stalo jednosměrným provozem. -Chybí 17 hráčů kvůli zranění, Wasps se museli také vypořádat s odchodem dalších čtyř kvůli Covidu ráno, což vedlo k rychlému překonfigurování týmu. -Munster, mezitím, postrádali 34 členů týmu kvůli karanténě po jejich nedávné nešťastné cestě do Jihoafrické republiky na United Rugby Championship, což vytvořilo pět debutantů v základní sestavě a dalších sedm na lavičce. -Zásadní však bylo, že se zúčastnili hvězdy Irska jako Tadhg Beirne, Peter O'Mahony, Conor Murray a Keith Earls, aby posílili jejich řady. -Pro všechny absentéry to byl příjemný zážitek, kde se zdálo, že se může stát cokoli, alespoň dokud Munster neukázal neuvěřitelnou hloubku svých herních zdrojů, aby se odpoutal. -Covid-ovým začátkem dne pro Včely bylo zhoršeno, když se hra rozběhla, když Thomas Young byl odmítnut jistou přesností O'Mahonyho skvělým pokrytím. -A zatímco Joey Carbery poslal penalty mezi tyče, aby vyvolal první krev pro Munster, Jimmy Gopperth udeřil do kříže, aby pokračoval ve smůle dvakrát vítězných. -Ale jejich scrum poskytoval oporu ve hře a vyžadovalo se horečnatá obrana, aby se jejich maul držel na uzdě, dokud nevyprodukovali první z dvou obratů v rychlém sledu. 
-Munster bojovali o každý míč v odporu proti svým absentérům a jejich vítězné naděje obdržely dramatický záštitu, když byla Shieldsovi ukázána jeho pochybná červená karta, s rozhodčím Romainem Poitem, který řekl, že jeho rameno se dotklo Kilcoyneho krku. -Carbery vystřelil jednoduchou penaltu na tyč a i když byl na cíl brzy poté, následovala dramatická změna, když Wasps vykradli úžasný try přes Barbeary. -Ukončil to vzrušující období rugby od konce konce, ve kterém soupeři střídali útoky z hlubokého a po tom, co byl ve středu domácích pokroků, Barbeary zasadil rozhodující úder. -Přívětivý odraz od Murrayho kopu Earlsovi poskytl nejjednodušší pokus, jak Munster zaútočil a pak Frost odešel do sin-binu, když domácí strana byla snížena na 13. -Nezabralo to dlouho, než se projevila výhoda v personálu, když Beirne zahájil útok, který skončil skvělým dokončením zadního hráče Campbella. -A Munster byli z dohledu v 49. minutě, když volný pas během slibného protiútoku padl pro Conwayho, aby ho sebral a dokončil jednoduchý běh. -Hooker Scott Buckley, muž zápasu na svém debutu v Munsteru, byl dalším z lineoutu a to byla hra vyhrána, i když Michael Le Bourgeois vybral kvalitní linku, aby zlepšil skóre Wasps. -Přední strana: 1 velká kapsa a 1 kapsa na zip. -Pásek na rameno z přizpůsobitelné kůže o šířce 1,5 palce a délce 58 palců. -Taška vyrobená v prostředí bez zvířat a kouře. -Přirozeně opálené pouze slunečnicovým olejem, bez použití barviva nebo chemikálií. -UNIKÁTNÍ VLASTNOSTI RUČNĚ VYROBENÝCH KOŽENÝCH TAŠEK- -Taška je vyrobena z pravého kozího kůže (plného zrna) zpracovaného a tmavěného pouze slunečnicovým olejem. -Každá vintage kůže taška je úplně přírodní a ručně vyrobený produkt, proto se barvy a dokončení mohou lišit od jednoho kusu k druhému. -Každá taška má jedinečný antikvární kůže / lehce poškozený vintage kůže vzhled. -Části z několika kůží mohou být použity k vytvoření jednoho kůže tašky. -Takže se může lišit barva a textury na různých částech tašky, což vytvoří úžasně jedinečný efekt. -Kvůli různým řemeslníkům a mohou existovat malé rozdíly ve stylu, konstrukce tašek je zobrazena na webových stránkách. -Podšívka může být světlejší nebo tmavší barvy než ta, která je zobrazena na obrázcích. -Napište nám, abyste zjistili aktuální barvu skladu. -Pravá kůže může mít velmi malé řezy / jizvy / značky. -To neznamená, že je poškozeno. -Může tam být také viditelné záhyby v kůži. -Tyto funkce ukazují skutečný původ našich kůží satchels a messenger tašky, tvoří součást tašky a neovlivňují její trvanlivost. -Čistá kůže může trochu vonět, když je čerstvá, ale vůně zmizí s použitím. -Prosím, nechte to na slunci a čerstvém vzduchu po několik dní. -Mohou existovat místní celní / DPH poplatky, o kterých nevíme a které jsou mimo naši kontrolu. -Kupující jsou zodpovědní za CLOVEKU na cílové destinaci. -Vidím, můžu mít vaši verzi softwaru eReader? -Chcete-li najít verzi softwaru vašeho eReaderu: -Přejděte na svou domovskou obrazovku. -2)Klepněte na ikonu Více vpravo dole na obrazovce. -3)Klepněte na Nastavení. -4) Klepněte na Informace o zařízení. -Vedle 'Verze softwaru' uvidíte číslo verze vašeho eReaderu. -Wendy Rogers nazývá novozélandského premiéra "Leninem s vlasy" a tvrdí, že ve Spojených státech jsou "satanskí komunisté". -Republikánská arizonští senátor Wendy Rogers v neděli nazvala novozélandskou premiérku Jacindu Ardern "Leninem s vlasy" a varovala před komunismem v USA. 
-Rogers se zdálo, že kritizuje reakci Ardern na COVID, když se odvolala na sovětského vůdce Vladimira Lenina ve svém tweetu, který zveřejnila spolu s krátkým záběrem premiérky. -Rogers ve svém tweetu nevysvětlila svou kritiku Ardernové dále. -Ve zkratce Ardern mluvila o dezinformacích ohledně COVIDu a o úsilí Nového Zélandu o informování lidí o pandemii. -"Potřebujeme více odvážných křesťanů ve vládě, aby se postavili ďábelským komunistům ve všech stranách," napsal arizonaský senátor v dalším tweetu v neděli. -Její tweet byl přijat s posměchem od různých uživatelů sociálních médií, s jednou osobou, která tweetovala zpět: "Prosím nabídněte své definice komunismu a křesťanství, protože si myslím, že ani jedno nerozumíte." -"Vidím, že Wendy dnes dělá všechno, aby se snažila soutěžit s nejšílenějšími ze šílených," napsal další člověk na Twitteru. -Rogers byla hlasitá o svém postoji proti komunismu dříve na sociálních médiích. -V září nazvala Den práce "komunistickým svátkem" bez dalšího vysvětlení. -Její tweet byl vysmíván mezi uživateli sociálních médií, včetně The Arizona House Democrats, kteří odpověděli: "Říká srdce a duše Arizonské republikánské strany (dokážte nám, že se mýlíme)." -"Uvědomujete si, že pokud budete nadále falešně označovat všechny dobré věci za komunistické, jenom to komunismu dodá více atraktivity, ne?", zeptal se další uživatel sociálních médií. -Republikánská senátorka Wendy Rogers varovala před komunisty v Americe a vyzvala k tomu, aby ve vládě bylo více „odvážných křesťanů“. -Spisovatel Shiv Ramdas také odsoudil tweet parafrázováním jejích vlastních slov: "'pracovat je komunismus.'" -Zvlášť Rogers často navrhl, že Donald Trump vyhrál prezidentské volby 2020 a vyžádal si nové volby. -"Vyzývám voliče Bidena, aby byli v Arizona staženi a musí být provedeno nové volby. -Voliči Arizony nesmí být ošizeni podvodně..." senátor tweetoval v červenci. -V červenci Rogers kampanil za deklaraci volby a dříve spustil petici, kterou tvrdila, že získala 663 000 podpisů. -"Věci se opravdu rozjíždí! -Dostaňme se co nejdříve na 1 milion. -Výsledky auditu jsou brzy tady, více států se připojuje," napsala v září na Twitteru. -Podporovatel Trumpa také prosazoval neověřené tvrzení o podvodu s voliči v Arizoně. -Newsweek kontaktoval kancelář senátora Rogerse kvůli komentáři. -Jděte na svou domovskou obrazovku. -2.-Klepněte na menu (3 vodorovné čáry) Více ikonu dole na obrazovce. -Klepněte na Nastavení. -Klepněte na Účty. -Pod #PRS_ORG#, klepněte na Odhlásit se. -Objeví se potvrzovací obrazovka. -Klepněte na Odhlásit se. -Další věc, kterou můžete zkusit, je provést tovární resetování vašeho zařízení a poté zkontrolovat, zda je detekováno vaším počítačem. -Provedení tohoto kroku, prosím, postupujte podle těchto instrukcí: -Pokud je to možné, zálohujte knihy nebo dokumenty, které jste přidali do svého eReaderu pomocí #PRS_ORG# nebo které jste ručně přidali pomocí počítače. -Nemusíte zálohovat žádné knihy, které jste si koupili od #PRS_ORG#. -Jakékoli knihy, které jste zakoupili od #PRS_ORG#, můžete po továrním resetu znovu stáhnout z #PRS_ORG# #PRS_ORG#. -Přejděte na svou domovskou obrazovku. -Klepněte na Domů na vrcholu obrazovky. -Klepněte na Nastavení. -Klepněte na Informace o zařízení. -Klepněte na Tlačítko Obnovení továrního nastavení pod Pokročilé. -Klepněte na Resetovat nyní. -Jsem stále s tebou. -Dlouhé fronty na trička Banksyho podporující protestující, kteří srazili sochy. 
-Davy zoufalých lidí, kteří chtějí koupit trička navržená tajemným uličním umělcem Banksym, byly viděny v Bristolu ve Velké Británii. -Byli propuštěni, aby podpořili protestující souzené za svržení sochy obchodníka s otroky během pochodu Black Lives Matter. -Banksy navrhl limitovanou edici "suvenýrových triček" k označení soudního procesu čtyř lidí obviněných z poškození kontroverzní sochy v Bristolu minulý rok. -"Všechny výtěžky obžalovaným, aby mohli jít na pivo," napsal umělec na Instagramu. -Prodáno za 25 liber ($33) plus DPH a omezeno na jeden kus na osobu ve více obchodech, tričko bylo tak žádané, že lidé čekali v řadě kolem bloků, aby ho dostali. -Video dlouhé téměř dvě minuty, které bylo zveřejněno na Twitteru, ukazuje nekonečnou řadu zákazníků. -Britská média hlásila, že "tisíce" byly nadšené vybrat peníze pro protestující tím, že koupí šedou tričko, které zobrazuje prázdný podstavec s nápisem "Bristol" nad ním. -Odkazuje na povalenou bronzovou památku 17. století obchodníka Edwarda Colstona, který se podílel na transatlantickém otroctví. -Aktivisté, známí jako "Colston Čtyři", čelí soudnímu procesu na Bristolském korunním soudu příští týden, obviněni z páchání trestného činu poškození památky patřící městské radě. -Muži - kteří všichni vznesli nevinu - jsou obviněni z potopení sochy "bez zákonného omluvného důvodu". -Chválen některými za to, že po jeho smrti zanechal peníze na různé charitativní účely, byla socha kontroverzního obchodníka napadena v červnu 2020, když se v městě konala protest podporující hnutí Black Lives Matter (BLM). -Poškozený podstavec a graffiti socha byla později získána městskou radou z Bristolského přístavu, kde byla během nepokojů hodena, a znovu se objevila jako místní muzejní exponát, spolu s vybranou kolekcí BLM plakátů z pochodu. -Socha BLM protestujícího byla postavena na prázdném podstavci dříve obsazeném Colstonem. -Nemohu provést žádné změny, jakmile byla objednávka provedena, nicméně, když jezdec opustí restauraci, budete moci s ním kontaktovat prostřednictvím aplikace. -Můžete také sledovat svého jezdce prostřednictvím aplikace a zavolat jim, jakmile jsou blízko. -Pro budoucí objednávky můžete přidat instrukce pro svého jezdce úpravou uložených adres ve vaší aplikaci. -Bohužel ceny položek jsou takové, jak se ukazují online, nemůžeme to pro vás změnit nebo snížit. -Doba dodání je uvedena na webových stránkách. -Protože nemáme sklad, jsou všechny položky vyrobeny na objednávku, zaslány nám sem na #URL# a poté odeslány na vás. -Proto Vás žádáme, abyste prosím umožnili tyto časové limity. -Časový odstup ukazuje, kdy má přijít další dávka. -Rodina vzdává hold "energetickému" 18letému mladíkovi, který byl v Birminghamu bodnut k smrti. -Rodina teenagera, který byl v Birmingham bodnut k smrti, ho popsala jako "mladého, energického 18letého", který snil o tom, že bude specialistou na digitální marketing. -Yahya Sharif byl nalezen vážně zraněný na Coventry Road, Small Heath, těsně před 17.30 hodinou v pátek, uvedla West Midlands Police. -Policie byla na místo přivolána záchrannou službou. -Navzdory nejlepším úsilím záchranářů byl Yahya, z Nechells, na místě potvrzen jako mrtvý. -Pitva prokázala, že zemřel na bodnou ránu do hrudníku. -Prohlášení vydané za jeho rodinu řeklo: "Nemůžeme uvěřit, že Yahya zmizel z našich očí. -Stále nevíme, proč byl zabit. -Mladý, energický 18-letý, jeho sen byl být specialistou na digitální marketing. -Celá komunita je šokována. -Ať Bůh bude s rodinou, kterou zanechal, zejména s jeho rodiči. 
-Detectives are gathering CCTV footage and other evidence as they try to piece together what happened and to identify and trace whoever stabbed the teenager.
-Detective Inspector Hannah Whitehouse, from the homicide team, said: "Yahya was just 18 and had his whole life ahead of him.
-That has now been taken away in the most tragic of circumstances.
-There is no clear motive for the attack, and we are working around the clock to identify and trace whoever was responsible.
-We have spoken to a number of witnesses, but we still need to hear from anyone with information who can help us.
-I would appeal to those who were there at the time to do the right thing, come and speak to us, and tell us exactly what happened and why.
-It is the least Yahya's family deserves.
-Anyone with any information should call 101, quoting reference number 3643 10/12/21.
-I understand, but my colleague explained yesterday that we have to contact the warehouse; that has been done for you, so we are waiting for a reply.
-As soon as we have that information, we can let you know where your order is.
-The item was due to be dispatched on 18 December.
-Change the font settings using the menu at the bottom of the screen:
-Set the font style: Tap the drop-down menu next to "Font Face" to choose from a list of available fonts.
-Adjust the font size: Drag the circle icon next to "Font Size" to change the size of the text.
-Set the line spacing: Drag the circle icon next to "Line Spacing" to increase or decrease the space between lines of text.
-Set the margins: Drag the slider next to "Margins" to make the margins larger or smaller.
-Set the text justification: Next to "Justification", select your preferred alignment.
-When you change the way text looks, your eReader remembers your preferred size and style and applies them to the other books you read.
-If you are reading a PDF, you cannot change the size or style of the text.
-Missed out on Shiba Inu?
-EverGrow could be the next big crypto to explode in 2022.
-Shiba Inu is the latest meme crypto to go viral, and although it is down almost 60% from its all-time high, its market capitalisation still stands at an eye-watering $20 billion, making it the 12th-largest cryptocurrency in the world by value.
-A $100 investment at launch would be worth more than $2 million today!
-Many are no doubt kicking themselves for missing out on such gains, but the reality is that a bet on Shiba Inu was a pure gamble.
-Shiba's run was a combination of very clever marketing and plenty of hype, which led a horde of investors with FOMO (fear of missing out) to pile into the meme coin.
-Even the name itself, a nod to the Elon Musk-backed Dogecoin, was part of the design.
-In reality, Shiba Inu offers no tangible utility or value, with seemingly little effort to change that in the future.
-Being on the Ethereum blockchain, there would be plenty of opportunities for development if the team behind Shiba Inu were motivated to pursue them.
-There are, however, some cryptocurrencies that are trying to rise above the rest and back up their popularity with real utility and underlying value.
-Just 10 weeks ago, EverGrow Coin ($EGC) was launched by a team of experienced finance, blockchain and marketing professionals.
-One of the breakthrough features of their project is that the token pays holders in a stablecoin.
-In the short time since launch, EverGrow Coin holders have received more than $30 million in Binance-pegged USD rewards - a stable, regulated currency pegged 1-to-1 with the US dollar.
-According to BSCScan, the project currently has 110,000 holders.
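-As a rough sanity check on the two figures quoted above (a back-of-the-envelope mean only, since rewards of this kind are normally distributed in proportion to holdings rather than equally): \( \$30{,}000{,}000 \div 110{,}000 \approx \$273 \) per holder on average since launch.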
-Díky jejich revoluční smlouvě se EverGrow Coin rychle zvýšil na více než 1 miliardu dolarů na tržní kapitalizaci, ale pak se na CoinMarketCap objevila velká chyba dat, jen týdny po spuštění, což způsobilo masovou paniku mezi investory. -U takového nového projektu může trvat dlouho, než se vybuduje důvěra, a tento panický stav byl využit řadou článků, které se údajně platily od rivalů projektu, kteří používali nesprávná data k tomu, aby odradili investory EverGrow od projektu. -Během následujícího měsíce zůstaly chyby neopraveny a EverGrow se propadlo pod 300 milionů dolarů hodnoty. -Včera CoinMarket Cap umístil upozornění na stránce EverGrow, potvrzující, že chyba dat byla opravena. -Cena se nyní stabilizovala a známky návratu důvěry viděly nárůst o 22% od nedávných minim. -Nicméně EverGrow stále zůstává pod vysokými hodnotami dosáhnutými před touto chybou. -EverGrow je velmi odlišný od Shiba Inu. -Kromě zřejmých výhod odměn USD tým za projektem již spustil SWAP dApp na své webové stránce, nedávno odhalil nadcházející vydání Crypto Peněženky, která slibuje překonat funkce nabízené Trust Wallet nebo Safemoon Wallet a má celou řadu nástrojů, od platformy pro tvorbu obsahu po NFT Market Place & Lending, navržené tak, aby přinášely investorům trvalou hodnotu. -Je EverGrow Coin další Shiba Inu? -S Shiba Inu, které nabízí velmi málo nebo žádnou užitečnost, hodnocenou kolem 66krát více než EverGrow Coin, je tu jasný argument pro inovativní a převratný projekt jako je EverGrow, aby viděl nějaký vážný růst od jejich současné nízké tržní kapitalizace. -Pokud tým bude nadále imponovat krypto komunitě svou inovací a transparentností a dokáže se zbavit strachu, který mezi investory šíří chyby CoinMarketCap, existuje dobrá šance, že EverGrow Coin může být jedním z nejlepších kryptoměn, do kterých se v roce 2022 investuje. -Írán hlásí nejnižší počet denních případů COVID-19 za více než jeden rok. -Ministerstvo zdravotnictví Íránu zaregistrovalo 1 686 nových denních infekcí COVID-19, nejnižší počet za posledních 460 dní, což představuje výrazný pokles případů, jak se pátá vlna pandemie uklidňuje. -Podle Press TV oznámila ministerstvo v sobotu, že 58 Íránců zemřelo na nemoc, poznamenávajíc, že z nových případů zjištěných během posledních 24 hodin bylo 286 pacientů hospitalizováno. -Uvedlo se také, že v zemi se nakazilo 6 152 524 lidí COVID-19 a 5 963 373 z nakažených lidí se uzdravilo a bylo propuštěno z nemocnic. -Podle ministerstva jsou 3 126 pacientů s COVID-19 v jednotkách intenzivní péče (ICU) a dosud bylo v Iránu provedeno 39 951 481 diagnostických testů. -Čísla koronaviru jsou na poklesu od chvíle, kdy vláda zahájila masovou vakcinaci. -Dosud obdrželo první dávku vakcíny proti COVID 58 595 066 lidí, 49 157 835 obdrželo druhou dávku a 2 237 841 obdrželo boosterové injekce. -Celkový počet injekcí vakcín v zemi dosáhl 109 990 742 dávek. -Během posledních 24 hodin 19 provincií hlásilo téměř žádný případ úmrtí nebo pouze jednu mrtvou osobu. -Podle nejnovějších údajů jsou osm měst ve oranžových zónách, 119 ve žluté kategorii a 321 měst ve modrých zónách. -Není žádné město ve vysokém rizikovém červeném pásmu. -První viceprezident Íránu Mohammad Mokhber řekl ve středu, že země je plně připravena na rozšíření očkování proti koronaviru. -„Dnes neexistuje žádný zájem ani nedostatek v dodávkách vakcíny a půda je připravena pro třetí a čtvrtou dávku očkování,“ dodal Mokhber. -Čteš e-knihy na čtečce #PRS_ORG#, že? -Na stejném čtečce klikněte prosím na opravu účtu. -Přejděte na svou domovskou obrazovku. 
-Klepněte na ikonu Menu v horní části obrazovky. -Klepněte na Nastavení. -Klepněte na Informace o zařízení. -Vedle Opravit váš #PRS_ORG# účet, klepněte na Opravit. -Opravit nyní. -Budu velmi rád, když vám mohu pomoci. -Prosím, dejte mi pár okamžiků na ověření vašich informací. -V tomto případě můžete zkusit připojit zařízení pomocí různých USB kabelů. -Jakýkoli obecný micro-USB kabel by měl fungovat. -Také zkuste použít různé USB porty ve vašem počítači. -Vítejte, chvíli prosím. -Objednal jsem vám náhradní položku, která má být odeslána 19. února. -Teď ti jen zařídím štítek na vrácení. -Tohle věci vysychá, když je vystaveno jakémukoli kritickému myšlení. -Nevylučuji, že existují velké skupiny lidí, kteří neprovádějí kritické myšlení, ale ať už je to tak či onak, dokázat, že je to špatné, není žádnou zárukou, že to vybledne. -Nakonec jsme již měli forenzní audit a ruční přepočítání těchto hlasů a to nepomohlo. -Měli bychom jim jenom nadále dovolit "auditovat" hlasy, dokud nedosáhnou výsledků, které chtějí? -Toto umožňuje Uri Gellerovi zkoušet a dělat svou hovadinu na Jamese Randiho. -Tady končí příběh a lež jde zemřít. -Ne, není. -Toto je Uri Geller, který se snaží vytáhnout svou hovadinu na Jamese Randiho, nelíbí se mu výsledky a najímá společnost, jejíž generální ředitel prohlásil, že věří, že Gellerovy schopnosti jsou skutečné, aby "studovaly" jeho schopnosti a zkoumaly, zda Randi není komunista, který se snaží zničit Geller. -Pokud zde nejsou žádné výsledky, požádají o další audit. -Nebo budou tvrdit, že roztrhané hlasy byly podávány kuřatům, která byla poté spálena. -V určitém okamžiku musíte udělat skutečnou práci, která spočívá v pohledu na realitu a porovnání s tím, co si myslí, a ukázat, kde se mýlí. -Už jsme to udělali. -Dvakrát. -To je nezastavilo. -A není to jako by to bylo neškodné. -Už jsou tu obvinění, že tito lidé porušují federální zákon tím, že nezabezpečují hlasy správně. -Také uvedeno v tomto článku: Tato společnost plánuje fyzicky prozkoumat části Maricopa County, aby se zeptala lidí, zda jejich hlasy odpovídají. -Jak byste se cítili, kdyby někdo přišel a zeptal se vás, koho jste volili ve volbách, vědět, že pokud jim to neřeknete, vaše hlasování může být označeno a zahodeno? -Jak jste si jisti, že tato společnost bude data uchovávat v tajnosti a nebude o nich informovat ostatní ve vaší komunitě? -A byste byli stejně pravděpodobní hlasovat, kdybyste věděli, že je to možnost každou chvíli? -Znám spoustu lidí, kteří by to neudělali. -Všechny naše komiksy jsou jako standardní odeslány v sáčku. -Navíc jsou obvykle naloženy i starší položky. -Novější položky jsou zabaleny pouze. -Kromě výše uvedené položky máme skladem více než 250 000 komiksů, včetně starších i nových položek. -Všechny naše komiksy jsou dodávány z našeho skutečného světového obchodu, což nám umožňuje nabídnout obrovský sortiment komiksů prostřednictvím aukce. -Máme pravděpodobně to, co hledáte! -(Pokud objednáváte více objednávek, požádejte prosím o fakturu s přesnou částkou PŘED platbou.) -Tato položka je originální americký komiks a je v angličtině! -Uvědomte si, že všechny naše komiksy ve všech stupních (I VE ŠPATNÉM) budou kompletní, pokud není v seznamu uvedeno jinak! -Věnujte prosím čas prohlédnutí obou přiložených skenů obálky a podrobného popisu stupně nahoře, abyste se ujistili, že tento konkrétní komiks je ve stavu, který požadujete. -Většina našich nabídek nabízí slevy Multi-Buy. -Obvykle začíná od 3 nebo více položek, aby získal slevu. 
-Items can be ANY combination of ANY ITEMS included in the Multi-Buy.
-They do not have to be multiple copies of the same item.
-Simply select the total quantity you require and you will automatically receive the discount on all of them!
-Some of our listings include the option to place a Best Offer.
-Where the Best Offer option is available, we will consider any reasonable offer.
-We do NOT list ANY comics as Mint condition.
-In our opinion, that grade does not exist.
-Comics are mass-produced paper items that are often handled with little care before they even reach the shop or newsstand to be offered for sale.
-Every comic, even a new one, will have some form of minor defect if you take a magnifying glass and look hard enough.
-If you are set on finding comic-book perfection or CGC-guaranteed results, you are best off inspecting comics in person at your local shop before bidding!
-Apple Music Documents and Data storage size
-I recently switched from an iPhone 12 Pro to a 13 Pro Max, and on both iPhones I noticed a bug that eats up my internal storage.
-Apple Music's Documents and Data uses about 35 GB of internal storage.
-I tried to fix it by deleting the app, but because it is a default app, the documents and data are never actually removed from the iPhone.
-I thought that when I moved to the new iPhone 13 Pro the bug would go away, but that was not the case.
-After restoring from an iCloud backup, I checked the Apple Music app and it was still using more than 30 GB for documents and data.
-After contacting two Apple support specialists, one suggested I wipe my iPhone and start over, while the other offered no real suggestions because the problem is beyond what they can do.
-I also checked my iPad, and Apple Music appears to use only 15 GB for documents and data there, but that is still not acceptable.
-I am now turning to the community to find out how widespread this problem is, and perhaps to get Apple's attention on it.
-Have you experienced this too?
-Could you please disconnect your eReader from the computer and try a factory reset?
-This will erase the information on your eReader, but you can make a backup and transfer the information back later.
-You can follow these steps:
-To perform a factory reset on your #PRS_ORG#, follow the steps below:
-Go to your Home screen.
-Tap Home at the top of the screen.
-Tap Settings.
-Tap Device information.
-Tap Factory reset under Advanced.
-Tap Reset Now.
-This macro extension tube set can transform your lens into a macro lens.
-The set consists of three tubes of different lengths, which can be used in any combination or individually to obtain different magnifications.
-The 3 individual rings can also be used separately with the camera body mount and the lens adapter, and the magnification ratio will of course differ.
-This gives you 8 different combinations.
-Extension tubes are metal tubes with a lens mount on one end and a camera body mount on the other.
-The extension tube set has no effect on image quality, as there are no optics inside.
-Electronic contact and autofocus are not possible.
-Exposure and focus must be set manually.
-Set the camera and lens to manual mode, switch off the camera and detach the lens;
-Attach the extension tube between the camera and the lens.
-Place the subject close to the lens and use plenty of light.
-With the tubes attached, you have to do everything manually.
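-A rough worked example may help here; the listing does not state the lengths of the three tubes, so the figures below are illustrative assumptions rather than specifications.
-For a simple lens left at its infinity-focus position, adding an extension of length \(E\) behind a lens of focal length \(f\) adds approximately \( \Delta m \approx E/f \) of magnification, i.e. \( m \approx m_0 + E/f \), where \(m_0\) is the lens's native maximum magnification.
-For example, a hypothetical 25 mm tube behind a 50 mm lens adds roughly \( 25/50 = 0.5\times \), while stacking hypothetical 12 mm, 20 mm and 36 mm tubes (68 mm in total) behind the same lens adds roughly \( 68/50 \approx 1.4\times \); this is why using the tubes individually or stacked yields the different magnification ratios mentioned above.
-More extension also spreads the same light over a larger image, so exposure must be increased manually, which is consistent with the advice to use plenty of light.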
-A je důležité, abyste používali hodně externího světla. -Pokud to neuděláte ve světlém prostředí, můžete mít potíže s viděním objektu skrz zrcátko. -Proto můžeme zboží okamžitě a co nejdříve po jeho nákupu odeslat. -Musíte zaplatit prostřednictvím systému Paypal. -Všechny bankovní karty uvedené níže jsou akceptovány. -Pro pohodlí zákazníka a rychlejší dodání jsou tyto možnosti k dispozici: -Royal Mail 1. třída Podepsané pro (1 pracovní den) pro velké a drahé zboží -Royal Mail Tracked 24 (1 pracovní den) pro velké a drahé zboží -Royal Mail Mezinárodní sledované pro velké a drahé zboží. -Royal Mail Mezinárodní podepsaný pro velké a drahé zboží -Ujistěte se, že vaše objednávka obsahuje správnou dodací adresu. -Akceptujeme vrácení do 60 dnů od data, kdy jste obdrželi nákup. -Spokojenost zákazníka je pro nás velmi důležitá. -Pokud máte s vaším objednávkou nějaký problém, kontaktujte nás a uděláme vše pro to, abychom vás uspokojili. -Prosím, nezanechávejte negativní zpětnou vazbu. -Garantujeme, že váš problém bude rychle vyřešen. -Pokud jste spokojeni se svým nákupem, zanechte nám prosím pozitivní zpětnou vazbu. -Vaše zpětná vazba je pro nás velmi důležitá. -Zanecháme pro vás pozitivní zpětnou vazbu. -Pokud máte nějaké dotazy, neváhejte nás kontaktovat prostřednictvím systému e-mailování eBay. -Budeme se snažit odpovědět co nejdříve během 24 hodin. -Doufáme, že nám dáte šanci zlepšit naši službu a vyřešit jakékoli problémy, které byste mohli mít. -Vidím to pořád ve své práci. -A nemusí to být ani o životě nebo smrti, aby to bylo frustrující. -Měl jsem nedávného pacienta, který potřeboval velmi specifický postup na koleni, aby mohl normálně chodit a zlepšit kvalitu života. -Peer to peer selhal. -Pojišťovna říká, že to není lékařsky nutné. -Odvoláváme se. -Říkají znovu ne. -Jdeme do třetí strany odvolání. -Předkládáme všechny relevantní lékařské výzkumy podporující potřebu postupu. -Dokonce jsme zahrnuli i druhý názor jiného chirurga mimo našeho programu - ano, doporučuje postup. -24 hodin později nás zasáhli zpět s finálním "Ne". -Není lékařsky nutné. -Můj chirurg se rozčílí a říká: "DOBŘE!" -Ale ty mi budeš říkat, který postup bys doporučil, protože neznám žádný jiný, který by pomohl tomuto chudáčkovi. -Samozřejmě, že ne. -A tento kluk je na houby. -Žádná jiná možnost. -Jak se ukázalo, tento postup je obecně nenáviděn pojišťovnami, protože je docela drahý. -Vždycky musíme bojovat o to, ale obvykle souhlasí po odvolání. -Tentokrát ne. -Systém je tak zničený. -Na vašich stránkách nebylo nic o tak dlouhé době dodání. -Při objednávce je to do doby dodání uvedené. -Doba dodání je uvedena na webových stránkách. -Protože nemáme sklad, jsou všechny položky vyrobeny na objednávku, zaslány nám sem na #URL# a poté odeslány na vás. -Přemístěte přívěs! -Před lety jsem pracoval ve dřevěné dílně. -Šel jsem na instalaci s majitelem a když jsme se vrátili, zaparkoval prázdnou přívěs blízko popelnice. -Žádný zvláštní důvod, tam bylo jen místo, takže to je tam, kde to nechal. -Druhý den ráno přišel do práce a Jerry (ne jeho skutečné jméno) přišel ke mně, vypadal naštvaně kvůli něčemu. -Nic nového, vždycky byl trochu mrzutý starý chlap. -Konverzace šla něco podobného jako níže (před 18 lety, takže si to přesně nepamatuji). -Jerry: Zaparkoval jsi tu přívěs u popelnice? -Já: Ne, majitel jel včera. -Jerry: Nemůžeš tam zaparkovat tu přívěs, pak se nemůžu dostat k popelnici! -Já: Nezaparkoval jsem to tam, majitel ano, ale můžu to přemístit. -Jerry: Nevím, proč bys tam ten přívěs parkoval. 
-Víte, že potřebujeme přístup k popelnici. -Já: ale já to tam nezaparkoval. -Proč s tím nepromluvíte s majitelem? -Jerry: blah blah blah tvá chyba, čerti děti nemají žádný respekt, blah blah blah -Já: Nebyl jsem to já. -Rozhovor pokračoval tímto způsobem po několik minut, s ním mě kritizujícím za to, že jsem nechal přívěs tím způsobem, že jsem ho nechal tím způsobem. -Od toho dne až do odchodu z toho kabinetu o několik let později, kdykoli jsem pracoval pozdě (což bylo častěji než ne), a 5x8 přívěs byl v obchodě, vzal jsem jazyk a přetáhl ho až k popelnici pro Jerryho, aby ho ráno našel. -Navštivte prosím následující odkaz a postupujte podle kroků k vytvoření nového hesla. -Dejte mi vědět, jestli jste byli schopni vytvořit si nové heslo a přihlásit se s ním. -Rozumím, mohl byste prosím zkontrolovat, jestli se ebook otevře? -Našel jsi ebook? -Vzhledem k nereakci a z důvodu kvality musím ukončit tento chat, neváhejte nás kontaktovat pro jakékoli dotazy nebo otázky. Budeme rádi, když vám s tím poskytneme pomoc. -Mějte krásný den, Na shledanou! -Nejlepší neděle: Vstupte do New Yorku 1880s v HBO "The Gilded Age". -Upozornění na klobouk a slunečník! -"Zlatý věk", vytvořený Julianem Fellowesem ("Downton Abbey") a napsaný Fellowesem a Sonjou Warfield, má premiéru příští měsíc na HBO. -Nastaveno v New Yorku 1880s, sleduje Marian Brook (Louisa Jacobson, nahoře vlevo) a nadějný spisovatel Peggy Scott (Denée Benton, vpravo) jak se nově setkávají s starými penězi společností. -V obsazení jsou také Christine Baranski, Cynthia Nixon, Carrie Coon a Morgan Spector, mezi mnoha dalšími. -Dobrá zábava na zimu, ne? -Kostýmy, které vypadají velmi bohatě, jsou navrženy Kasií Walickou-Maimone, jejíž předchozí práce zahrnují "The Goldfinch", "A Quiet Place" a "Moonrise Kingdom". -"Zlatý věk" začíná streamovat na HBO Max 24. ledna. -Jižní Afrika Vzdává Hold Poslednímu Apartheidnímu Vůdci De Klerkovi -Jižní Afrika v neděli vyjádřila oficiální uznání FW de Klerkovi, poslednímu prezidentovi bílé vlády, který osvobodil Nelsona Mandelu z vězení a vedl zemi z apartheidu do demokracie. -De Klerk zemřel 11. listopadu ve věku 85 let po boji s rakovinou. -Bylo vyhlášeno čtyři dny národního smutku ve jeho čest. -Sloužil jako prezident od roku 1989 do roku 1994 a je nejvíce zapamatován pro vedení přechodu Jižní Afriky od bílé většinové vlády k prvním vícebarevným volbám v roce 1994. -De Klerk také sdílel Nobelovu cenu míru s Mandelou v roce 1993 po jeho osvobození z vězení v roce 1990. -Poté se Mandela stal prvním černým prezidentem Jižní Afriky po vítězství jeho strany Africké národní kongresu v roce 1994 ve volbách. -Prezident Cyril Ramaphosa se v neděli ráno zúčastnil protestantského Groote Kerku v Kapském Městě - jedné z nejstarších církví v Jižní Africe - aby vyřkl eulogii na počest De Klerka. -"Často byl nesprávně pochopen kvůli jeho přílišné správnosti," řekla De Klerkově vdově Elitě Georgiadis kolem 200 účastníkům. -Nikdy nezapomenu na tohoto muže, který mě okouzlil, který mě donutil chtít mu pomoci dosáhnout této obrovské úlohy před ním. -Soukromá mše a národní hymna předcházela slavnosti, která zahrnovala portrét De Klerka mezi dvěma svíčkami a sbor ozdobený bílými květinami. -Navzdory pozitivnímu renomé v zahraničí, De Klerk rozdělil názory v Jižní Africe a jeho smrt vyvolala smíšené reakce. -Kritici říkají, že zůstává nerozlučný s trestnými činy z doby apartheidu a mohl by za ně být zodpovědný, kdyby žil déle. 
-De Klerk zastupoval Národní stranu, která v roce 1948 formálně zavedla rasovou segregaci a odepření volebního práva většině ne-bílých obyvatel Jižní Afriky. -Venku z kostela držela malá skupina protestujících cedule s nápisy "Spravedlnost odmítnuta" a "Spravedlnost pro oběti apartheidu" a byli rychle odvedeni policií. -Okolí bylo uzavřeno pro dopravu a podřízeno vysoké bezpečnosti. -Komentáře v jeho posledních letech také poškodily obraz De Klerka v důsledku kritiky za jeho selhání omluvit se oficiálně za zločiny apartheidu. -V roce 2020 popřel, že apartheid je zločin proti lidskosti, než své prohlášení stáhl a omluvil se. -Nadační fond De Klerka vydal pohřební video, ve kterém se omlouvá "za bolest, zranění, urážku a škody, které apartheid způsobil" ne-bílým obyvatelům Jižní Afriky. -Pro vaši informaci Vám pošlu transkript naší konverzace. -Pokud budete mít další otázky nebo obavy, můžete vždy odpovědět na tento e-mail a my vám budeme moci dále pomoci. -Naše koncentrovaná kombinace oddanosti a odbornosti prospívá našim zákazníkům. -Norton předčil konkurenci ve mnoha renomovaných hlava-na-hlava testech a pouze Norton získal PC Magazine Editors 'Choice Award 34krát, včetně 11 let v řadě - více než jakákoli jiná bezpečnostní společnost. -Co to pro vás znamená? -Když si koupíte Norton Security, dostanete jeden z nejlepších bezpečnostních produktů na trhu dnes. -Zahrnujeme pouze ochrannou slib, který může udělit pouze Norton. -Jsme tak jistí ve své schopnosti udržet vás bezpečné, že nabízíme záruku vrácení peněz: Pokud se na vašem počítači nebo Macu objeví virus, který naši odborníci Norton nemohou odstranit, vrátíme vám peníze*. -S Norton Security Deluxe můžete rychle a snadno zabezpečit své zařízení. -Norton Security Deluxe poskytuje jednoduchý pohled, který podrobně popisuje stav ochrany vašeho zařízení. -Z jednoho přehledu můžete sledovat nastavení zabezpečení a ochrany identity a dokonce zobrazit historii skenovaných souborů a analyzovaných stahování. -Norton Security Deluxe zahrnuje přístup k online odborné pomoci od certifikovaných techniků Norton. -Pokud budete kdykoli potřebovat pomoc, naši zákaznické podpůrní agenti jsou připraveni vám pomoci 24 hodin denně, sedm dní v týdnu. -Pro aktivaci se zaregistrujte online a uložte své fakturační údaje do svého účtu Norton. -Automaticky obnovuje každý rok, pokud není obnovení zrušeno před dnem, kdy budete účtováni v my.norton.com nebo kontaktováním podpory Norton. -Obnovení předplatného je účtováno za cenu obnovení nalezenou na norton.com/pricing. -Cena je podléhá změně, ale před fakturací je odeslána upozornění e-mailem. -Podle politiky zrušení a vrácení peněz NortonLifeLock můžete po aktivaci smlouvu zrušit a požádat o plnou náhradu do 60 dnů od nákupu a pro každé roční obnovení do 60 dnů od účtování. -Předplatné začíná po online aktivaci. -Chcete-li spustit službu, stáhněte/nainstalujte na každé zařízení a/nebo dokončete nastavení. -Aktualizace a funkce mohou být přidány, upraveny nebo odstraněny v souladu s licenční smlouvou a služební smlouvou. -Sběr dat, ukládání a používání pro účely správy a obnovení předplatného podléhá Globálnímu prohlášení o ochraně soukromí společnosti NortonLifeLock. -Ponořte se do hlubokého příběhu uvězněného v rozsáhlém světě Black Desert, který čeká na to, aby byl prozkoumán. -Doprovázeni černým duchem, společníkem, jehož osud je propleten s jejich vlastním, hráči odhalí tajemství černých kamenů a historii jejich korumpujícího účinku. 
-Hráči si užijí dechberoucí grafiku s šílenou úrovní přizpůsobení postavy ve 19 třídách postav. -Každá třída nabízí intuitivní boj založený na dovednostech, vybavený sadou unikátních dovedností, které lze volně kombinovat do vzrušujících a účinných kombinací, které vás vždy drží na nohou. -Black Desert Prestige Edition je živý svět MMORPG s bonusovým obsahem v hodnotě 140 dolarů. -Zažijte rychlé, akční boje, lovte monstra a obří bosse, bojujte s přáteli ve gildě o ovládnutí uzlů a hradů a trénujte různé životní dovednosti, jako je rybaření, obchodování, tvoření, vaření, plachtění a mnohem více! -Robustní nástroje pro tvorbu postav - Vytvořte postavu, kterou chcete hrát. -Bezproblémový pohyb po celém světě - Není potřeba žádné časování načítání, když se prozkoumáváte. -Boj zaměřený na kombinace, nezaměřený na cíl - Účastněte se rychlého a akčního boje s dovednostmi, které lze spojovat do komb. -Unikátní počasí a klima - Počasí a klima budou mít různé účinky na různé zóny, na které se hráči mohou přizpůsobit. -Den/Noc Cyklus - Spolu s unikátními změnami počasí a klimatu se hra točí kolem denního/nočního cyklu, který mění chování NPC a spouští různé události na základě času dne. -Instancované hráčské bydlení - Od stanů po paláce a všechno mezi tím, hráči mohou zařídit a přizpůsobit si vlastní domovy a mohou najmout NPC, aby udržovali váš prostor čistý nebo si mohou nakupovat věci na trhu. -Boj na koni - Využijte své důvěryhodné hříbě na bojišti a využijte jejich pohyblivosti a účinnosti v boji. -Nezapomeňte však, že postroje budou potřebovat péči, ubytování a ochranu, protože mohou zemřít v boji. -Boss Hunts - Skupujte se se svými přáteli nebo ostatními hráči, abyste mohli lovit pole bossů a světové bossy a získat tu vzácnou kořist. -Obležení - Masivní free-for-all guild bitvy! -Připojte se k gildě a účastněte se denních node wars nebo týdenních conquest wars proti mnoha dalším soutěžícím gildám. -Vyhrajte uzel nebo hrad a reklamujte jej na týden, abyste mohli sbírat daně a zvýšit fondy svého gildu. -Obsah oceánu - Vyrobte si loď a vyplujte do rozlehlých oceánů, abyste rybařili, lovili mořské monstra a bosse, prozkoumávali pod vodou a sbírali, plnili úkoly, obchodovali a mnohem více. -Ochočování & Chování - Chytit a ochočit koně a slony v divočině, aby to bylo vaše hříbě. -Také můžete chovat koně pro lepší potomky s vylepšenými statistikami a dovednostmi jízdy. -Řemesla - Užívejte si všechny aspekty řemesla v Black Desert od nástrojů, zbraní, brnění, šperků, lodí, kostýmů, oblečení a dalšího. -Všechno lze vyrobit ve světě Black Desert. -Profese - Zúčastněte se a rozvíjejte svou postavu do profese, která může pomoci vašemu příjmu. -S profesemi jako sběr, zpracování, vaření, alchymie, trénink, rybaření, lov, obchodování, zemědělství a plavba si můžete vybrat, jak chcete hrát Black Desert Online. -Budu knihu smazat a znovu přidat a poté budete řešit svou aplikaci #PRS_ORG# 2 postupy, abyste zjistili, zda to problém vyřeší. -Prosím 2 minuty. -Je to hotovo. -Nyní se prosím pokuste provést tento postup ve vaší aplikaci: -Pro opravu vašeho účtu v aplikaci Android, postupujte podle níže uvedených kroků: -Klepněte na ikonu #PRS_ORG# v horní části obrazovky. -Přejděte na domovskou obrazovku. -Klepněte na ikonu Menu v horní části obrazovky. -Klepněte na Nastavení. -Posuňte se na dno a klepněte na Opravit váš účet. -Oprava kohoutku. 
-Jakmile dokončíte, pokračujte prosím tímto postupem: -Pro odhlášení postupujte podle níže uvedených kroků ve vaší aplikaci #PRS_ORG#, prosím: -Klepněte na ikonu Více dole na obrazovce. -Klepněte na Nastavení. -Klepněte na Odhlásit se z #PRS_ORG#. -A prosím se znovu přihlaste po tomto, aby byl účet aktualizován. -Jak to šlo? -Zde vidím, že k vaší objednávce ještě není přiřazen žádný jezdec. -Nicméně to zaznamenám do záznamů. -Můžete také použít aplikaci k volání nebo chatování s nimi, jakmile jsou blízko místa, budete mít možnost kontaktovat jezdce. -Ano, otevírám účet. -Prosím, postupujte podle následujícího procesu. -Pro opravu vašeho účtu v aplikaci Android, postupujte podle níže uvedených kroků: -Klepněte na ikonu #PRS_ORG# v horní části obrazovky. -Přejděte na domovskou obrazovku. -Klepněte na ikonu Menu v horní části obrazovky. -Klepněte na Nastavení. -Posuňte se na dno a klepněte na Opravit váš účet. -Oprava kohoutku. -VP−730 je 9−vstupní škálovač/přepínač pro analogové video, digitální video, vyvážené stereo a S/PDIF audio signály. -Může up- nebo down-scalovat kompozitní, s-Video (Y/C), komponentní video (YUV), HDMI, počítačové grafické video a soubory JPEG na vybranou počítačovou grafickou video nebo HDTV výstupní rozlišení na stejných výstupech - jeden HDMI a dva 15-pin HD. -Obsahuje zesilovač pro napájení reproduktorů. -Jednotka poskytuje bezchybné přepínání mezi zdroji prostřednictvím technologie FTBTM (fade-thru-black). -HQV® Video zpracování - HQV (Hollywood Quality Video) zpracování představuje stav umění ve video zpracování technologie, s nejvyšší kvalitou de-interlacing (s 3:2 & 2:2 pull down), redukcí šumu a škálováním výkonu pro standardní definici a vysokou definici signálů. -Fade-Thru-Black (FTBTM) Přepínání - Video se postupně stmívá a nový vstup se postupně stmívá z černé pro hladké, bezchybné přepínání. -Výstupní signál poskytuje trvalou synchronizaci, takže displej nikdy nezkresluje. -K-IIT XLTM Technologie vložení obrazu do obrazu - Ultra stabilní schopnost obrazu v obraze, obrazu a obrazu a rozdělení obrazovky. -Jakýkoli zdroj videa může být vložen do nebo umístěn vedle zdroje počítačové grafiky nebo naopak s ovládáním pozicování a velikosti okna. -Video vstupy - 2 univerzální video každé na 3 BNC (kompozitní, s−Video, komponenta), 4 počítačové grafiky/komponentní video (15−pin HD), 2 HDMI a 1 USB (pro JPEG data). -HDCP kompatibilní - Licenční smlouva HDCP (ochrana obsahu vysokého rozlišení) umožňuje přenášet chráněná data na HDMI vstupu pouze na HDMI výstup. -Více možností výběru poměru stran - 4x3 nebo 16x9, anamorfní, letterbox a uživatelem definované nastavení. -Společník AFV (Audio-Follow-Video) - Pro každý analogový video vstup podporuje vložený zvuk na 2 HDMI vstupech a výstupech. -Audio vstupy - 6 vyvážených nebo S / PDIF audio (každý vybíratelný) na terminálových bloků, jeden pro každý z 2 univerzálních videí a 4 počítačových grafických videí vstupů. -Vestavěný ProcAmp - Barva, odstín, ostrost, kontrast a jas jsou nastaveny individuálně pro každý vstup. -Jednotka byla plně testována ve všech vstupech a výstupech. -Jednotka bude vyžadovat konektor výstupu reproduktoru. -Úžasné. -Ale dobře na tobě. -Jo, když mi bylo 16, aplikoval jsem a dostal nabídku práce v restauraci. -Myčka nádobí. -První směna mě měla zavřít. -Sobota. -Pracovali jsme až do pozdních 1 hodin ráno. -Druhý den jsem skončil. -Nejlepší způsob, jak ztratit nového mladého pracovníka, je ho šokovat tím. 
-Stejné se stalo mému příteli poté, co jsem pracoval pro Pizza Hut několik let (nezavřel mě až po měsících, kdy jsem začal pracovat a trénoval), dostal jsem mu tam práci na místě. -Pokračovali v tom, že ho dali na dvě uzávěry po sobě. -On to vzdal. -Pokud neinzerujete práci jako zavírací gig pozdě v noci, očekávejte, že pokud je s tím zaútočíte příliš brzy, ztratíte své pracovníky. -Poté, prosím, smažte svou autorizaci. -Odpojte svůj Ereader. -Zapněte svůj eReader. -Připojte svůj eReader k počítači pomocí Micro USB kabelu. -Na vašem eReaderu: Klepněte na Připojit. -Na vašem počítači: Otevřete #PRS_ORG#. -Pod "Zařízení", klikněte pravým tlačítkem myši na #PRS_ORG# eReader. -Klikněte na Vymazat autorizaci zařízení. -Klikněte na tlačítko OK na potvrzovací obrazovce. -2) Zrušit autorizaci #PRS_ORG# -Pro odstranění autorizace #PRS_ORG#, klikněte na Nápověda > Odstranit autorizaci. -V otevřeném vyskakovacím okně zadejte heslo pro účet, který jste použili k autorizaci #PRS_ORG#. -Klikněte na Odstranit Autorizaci -Byla tyto kroky užitečné? -Bohužel jsem neobdržel odpověď déle než dvě minuty. -Pro účely kvality bude tento chatovací interakce uzavřen, nezapomeňte, že se můžete vždy vrátit a my budeme rádi, že budeme pokračovat ve vaší pomoci. -Boris Johnson se ocitá na hraně přízně u toryských poslanců. -Boris Johnson je dlouho považován za krále návratů. -A někteří toryští poslanci doufají, že on bude pokračovat v této sérii tím, že se zachrání před klesajícími průzkumy veřejného mínění v důsledku řady stran v Downing Street v odporu proti Covid zákonům. -Premiér se zamotal do uzlů, když opakovaně popíral, že byla porušena nějaká pravidla, než se objevily další zprávy a důkazy, které naznačovaly opak. -Nejprve byl video No 10 poradců smějících se, zatímco diskutovali o vánočním setkání 18. prosince minulého roku. -Poté Dominic Cummings, dříve nejbližší poradce Johnsona, slíbil, že byly pořízeny fotografie oslav, a tak s napětím čekali kritici vlády, až se objeví. -Když byla v neděli zveřejněna fotografie, na které Johnson hostí vánoční kvíz pro zaměstnance, kteří se připojili z No 10 a z domova, nebylo to úplně důkaz, na který někteří čekali, aby ho konečně zasáhli. -Obrázek z Sunday Mirror ukazuje Johnsona se dvěma poradci, kteří byli oblečeni s tinsel a Santa kloboukem - nejsou od sebe sociálně distancováni a jasně se účastní společenské příležitosti, zatímco míchají domácnosti. -Ale mohlo to být horší. -V No 10 a v sídle Konzervativní strany se konaly další strany, na kterých lidé konzumovali obrovské množství alkoholu, hráli stranické hry, vyměňovali si dárky od tajného Ježíška a bavili se až do pozdních hodin, podle zdrojů, které informovaly média včetně Guardianu, Mirroru, BBC a Times. -Ministři budou potichu vydechovat úlevu, že žádné obrázky těchto scén nebyly unikly - zatím. -Zatímco Johnsonova účast na kvízu porušila pravidla, podle Keira Starmera, vůdce Labour a bývalého ředitele veřejného stíhání, si myslí poslanci Tory, že lidé se na fotografii podívají a posoudí, že opravdu ukazuje, jak pořádá virtuální kvíz - běžný pohled během pandemie. -Personál, volající z jiných místností v č. 10, zatímco pijí a nesociálně se distancují, nelze vidět. -V neděli Nadhim Zahawi trval na tom, že obrázek je pouze příkladem Johnsona "děkujícího svým zaměstnancům" a použil ho k potlačení stranického skandálu jako "hype". 
-Řekl LBC: "Na této titulní stránce si myslím, že vaši posluchači na to budou koukat a uvidí premiéra ve svém kanceláři, se dvěma blízkými lidmi, kteří s ním pracují, bez alkoholu, kteří stráví 10 až 15 minut, aby poděkovali a motivovali svůj personál, který přichází, protože nemůžou pracovat z domova." -Už bylo učiněno mnoho škod, s vzbouřenými poslanci rozzuřenými nad tím, že premiér umožnil, aby se ujal "jedno pravidlo pro ně" narativu, od Cummings po Matta Hancocka a nedávno Owena Patersona. -Johnson se chvěje na okraji přízně u svých vlastních poslanců; pokud se objeví další fotografie, mohou ho přinutit přes hranici. -Můžete se kdykoliv vrátit, protože naše okno chatovací služby je otevřené 24/7. -Upřímně doufám, že najdete řešení. -Děkujeme za kontaktování #PRS_ORG#, bylo mi potěšením Vám dnes pomoci. -Doufám, že máte skvělý večer. -Měli jsme vypnutí proudu několikrát. -Krok 1: Okamžitě někoho na dveře. -Teď jsou bezpečnost. -Nikoho nepouštějte dovnitř a všímejte si lidí, kteří odcházejí (zejména dětských rukou). -Krok 2: Pokud tam nejsou, zavolejte manažerovi obchodu. -Krok 3: Ti, kteří jsou u pokladen a kdokoli jiný, mohou počkat několik minut, aby viděli, zda náhradní generátory nás opět rozjedou. -Krok 4: Projděte se po obchodě a vyžádejte si odchod každého nezaměstnance. -Stejně jako každý vozík přinesený dopředu. -Krok 5: Projděte se košíky a hledejte něco studeného a produktů. -Krok 6: Vraťte uvedené studené/produkty. -Krok 7: Pokryjte všechny nezavřené studené, tj. sýr / maso / zeleninu atd. -Krok 8: Podepsat naše jména na list papíru, když jsme odešli, abychom byli odpočítáni. -(Někteří byli povoleni odejít dříve, zejména pokud se necítili v tmě pohodlně nebo už neměli 6 hodin do doby odjezdu). -Je to opravdu tmavé, i dopředu. -Nemůžeme nikdy nechat zákazníky jen tak viset. -Nejsem si jistý, proč některé pokladny stále měly nějakou energii, zatímco jiné ne. -Nevím, ale nemyslím si, že bychom měli nějaký způsob, jak je měli platit. -Myslím si, že by se položky mohly skenovat, ale nikdy bychom nedůvěřovali zákazníkům, že zaplatí později. -Jednou to trvalo jen jako 3 hodiny, než se znovu zapnul proud. -Měli jsme několik z nás, takže kdyby to udělalo, jak jim řekla elektrárenská společnost, mohli bychom znovu otevřít. -Nezáleží mi, pokud máme možnost zůstat nebo ne, pomáhat při zachování produktu co nejlépe. -Nemít na výběr a ohrožovat zákazníky, je to, kde kreslím čáru. -Pro vaši informaci Vám pošlu transkript naší konverzace. -Pokud budete mít další otázky nebo obavy, můžete vždy odpovědět na tento e-mail a my vám budeme moci dále pomoci. -Děkujeme za kontaktování #PRS_ORG#, bylo mi potěšením Vám dnes pomoci. -Doufám, že máte skvělý den. -Snažím se zavolat jezdci, ale on mě nerozumí. -Proto prosím zavolejte jezdci, jakmile je blízko adresy uvedené v objednávce pomocí aplikace. -Děkuji za informace. -Budu velmi rád, když vám mohu pomoci. -Prosím, dejte mi chvíli na ověření účtu. -Díky za čekání. -Je mi líto, že zažíváte tento problém, udělám vše pro to, abych vám pomohl. -Prosím, dejte mi vědět, jaký je váš #PRS_ORG# model. -Vím, že je to v čínštině, nemusíte používat vnitřní funkci vašeho zařízení ani správný jazyk k provedení těchto posledních kroků odeslaných. -Prosím, udělejte mi laskavost přečíst je nejdříve a poté je provést. -Instrukce jsou k ručnímu resetování vašeho zařízení. -Správný jazyk není potřeba. -Pokud však chcete vyžádat návrat, mohu vám také pomoci. -Ještě jednou se omlouvám za nepříjemnosti, které jste zažili. 
-Stále hledáme způsoby, jak zlepšit naše služby a toto bude zaznamenáno jako zpětná vazba jednomu z našich ceněných zákazníků. -OK, můžete zkusit provést tovární resetování vašeho zařízení, abyste zjistili, zda tento problém opraví. -Rozumím, že jste to už zkusili vypnout a znovu zapnout bez úspěchu, že? -Bohužel to nyní není skladem, jen se podívám, jestli se to vrací. -Prosím, chvíli mi vytrvejte. -Toto bylo zrušeno, takže se nebude vracet do skladu, omlouváme se. -PlanetaJupiter konečně opustila sluneční soustavu mé kanceláře. -Před několika lety jsem napsal o mém kancelářském nepříteli, ženě jménem PlanetJupiter ve svých příbězích. -Tady není moc co říct. -Naposledy jsem ji viděl před Koronou, zhubla a zdálo se, že se při obědě trochu zaměřuje na skupiny potravin, i když stále používala svůj elektrický invalidní vozík a byla trochu smradlavá. -Zeptal jsem se jí, jak se má, jako se ptám všech mých spolupracovníků, když je vidím. -„Není to tak dobré, OP, zjistil jsem, že mám cukrovku, takže musím jíst méně sacharidů.“ -K jejímu malému kreditu, oběd měl rýži z květáku místo běžné. -Jsem z Midwesternu a vždy jsem byl milý k PJ, takže jsem jí řekl, že mi to moc líto, což bylo špatné, a co tento projekt, na kterém jsme oba? -Bude také pracovat pozdě, aby to stihla do soudního termínu? -Jasně, OP. -Ušetřuji peníze na přestěhování. -To je opravdu vzácné. -Můj stát má nejnižší výstěhování ze všech států, kdykoli. -Kam se stěhuje? -Do dalšího města ve středozápadě, které hodně pracuje v hovězím průmyslu. -Doufám, že ji nepopletli s hovězím! -Ukázalo se, že mě a ostatní dokumentující všechnu její pomalou/špatnou práci, usínání u stolu, otravování ostatních a smrdění, způsobilo, že přišla o pozice u všech firem kromě jedné, které často najímají mě, ji a ostatní na dohodu o práci. -Takže se musí nějak přestěhovat tam, kde je rodina ve městě. -Ona půjde zničit jiné pracoviště, ale alespoň ne moje. -To už nezáleží, protože jsem dostal mnohem lepší vzdálenou pozici. -Ne, nemůžete zadat datum schůzky, musíte objednat a pak můžeme položky držet pro vás, můžeme je držet nejprve po dobu tří měsíců. -Ještě něco, co bych vám mohl dnes odpoledne pomoci? -Děkuji vám, že jste si našli čas mluvit se mnou dnes a doufám, že jsem dokázal vyřešit vaši otázku. Pokud byste nevadilo, abyste hodnotili naši chatovací konverzaci dnes na základě mých zákaznických dovedností, byl bych vám velmi vděčný. Tlačítko hodnocení lze nalézt v tomto chatu. -Doufám, že máte skvělý den a prosím, vraťte se k nám, pokud budete potřebovat další pomoc. -V případě, že se obrazovka zase zasekne, postupujte prosím tyto kroky: -Připojte svůj eReader k zdroji energie tím, že uděláte jednu z následujících věcí: -Nejprve zapněte počítač a připojte k němu přiložený USB napájecí kabel a poté svůj eReader. -Připojte zástrčku ze zdroje napájení (není součástí) a poté připojte svůj eReader k zástrčce ze zdroje napájení. -Stiskněte a podržte tlačítko napájení, dokud světlo napájení v horním pravém rohu vašeho eReaderu nezhasne. -Uvidíte obrazovku 'Vypnuto', když je váš eReader vypnutý. -Uvolněte tlačítko napájení. -Stiskněte a podržte tlačítko napájení na vašem eReaderu po dobu 30 sekund. -Počkejte, až se objeví obrazovka Obnovení. -Uvolněte tlačítko napájení. -Vaše obrazovka eReader se ztmaví a začne proces obnovení. -Je tu něco jiného, s čím bych vám mohl pomoci? -Libye: plán na prezidentské volby 24. prosince se blíží k zhroucení. -Šance na to, že Libye uspořádá své první prezidentské volby v dlouho plánovaném termínu 24. 
prosince, se v neděli zdály blízko zhroucení, protože orgán dohlížející nad hlasováním řekl, že nemůže oznámit schválené kandidáty kvůli stále přetrvávajícím právním pochybnostem. -S volbami méně než za týden a prakticky žádný čas na kampaně, odložení by představovalo hořkou ránu pro naděje mezinárodního společenství na sjednocení hluboce rozdělené země. -Cizí mocnosti se také obávají, že celkový momentum směrem k demokracii může vyprchávat. -V krátkodobém horizontu budou muset souhlasit, zda bude pokračovat přechodná vláda, aby se naplnil politický vakuum a zabránilo návratu do občanské války. -Řada soudních rozhodnutí zrušila rozhodnutí libyjské volební komise blokovat významné osobnosti včetně Saifa al-Islama Kaddáfího, syna bývalého diktátora, aby kandidovali na prezidenta. -Předseda vlády dočasného, Abdul Hamid Dbeibah a válečník Khalifa Haftar, hlavou sebe-styled Libyjské národní armády, byly mezitím schváleny komisí, ale následně odvolány ostatními stranami. -Ve stanovisku v sobotu uvedlo, že nemůže oznámit jména schválených kandidátů z těch téměř 100, kteří se přihlásili, protože je „odhodláno vyčerpat všechny prostředky řízení, aby se jeho rozhodnutí shodovala s vydanými rozsudky.“ -Protichůdné frakce si navzájem vyčítají, že se snaží zastrašovat nebo kupovat soudní úředníky, aby zajistili obnovení svých kandidátů, a komise se snaží zjistit, zda byla rozhodnutí platná. -V případě Dbeibah se zavázal jako podmínka stát se dočasným premiérem, že nebude kandidovat ve volbách, ale od té doby se ve soudním sporu argumentovalo, že to byl morální závazek bez právní síly. -Saif Gaddafi byl v roce 2015 odsouzen v nepřítomnosti za válečné zločiny za jeho účast na boji proti revoluci, která svržení jeho otce Muammara Gaddafiho. -Popírá jakékoli pochybení. -Přítomnost desítek tisíc cizích bojovníků, najatých vojáků a domorodých milicí činí zemi hořlavou směsí a existují obavy, že volby provedené s spornými kandidáty by pouze vedly k výsledku, který nebude uznán. -V znamení napětí kolem cizích sil Francie tlačí EU, aby se v pondělí dohodla na uvalení sankcí na ruskou soukromou vojenskou společnost Wagner Group, která podle ní působí v Libyi a Sahelu. -Moskva popírá, že Wagner je spojen s ruským státem a řekla, že se odvetí proti sankcím EU uvaleným na její občany. -Schopnost mezinárodního společenství vyžadovat, aby libyjská politická třída dodržela datum voleb 24. prosince, které bylo poprvé dohodnuto v únoru, byla omezena jmenováním speciálního vyslance OSN Jána Kubiše, který rezignoval tři týdny před volbami po méně než roce ve funkci. -Generální tajemník OSN, António Guterres, od té doby jmenoval Stephanie Williams, bývalou důraznou zástupkyni zvláštního vyslance OSN, aby působila jako jeho zvláštní poradce. -Rusko vetovalo její jmenování plným vyslancem, ale má hluboké znalosti Libye a loni projevila ochotu čelit těm v politické třídě, kteří se staví proti volbám. -Misie OSN vydala prohlášení, ve kterém vyzývá všechny strany, aby nezvrátily dosavadní úspěchy, a ukazuje na registraci téměř 3 milionů voličů, úspěšnou distribuci volebních karet a aplikace velkého počtu kandidátů na prezidenta a parlament jako známky hlubokého lidového podpory pro volby. 
-Americký velvyslanec v Libyi, Richard Norland, řekl: „Odmítnutí jít k volbám a mobilizace k blokování pouze umístí osud a budoucnost země do rukou těch uvnitř Libye a jejich zahraničních podporovatelů, kteří upřednostňují sílu střelby před sílou hlasování.“ -Omlouvám se, ale nevidím, že byste se přihlásili do svého účtu, pokud nemáte jiný účet. -Pokud je tomu tak, dejte mi prosím vědět, na jaký email jste již přihlášeni na ereaderu. -Děkuji za informace. -Budu velmi rád, když vám mohu pomoci. -Těší mě. -Doufám, že máte skvělý den! -Omlouvám se, ale nemohu najít účet pod zadanou e-mailovou adresou. -Zákazník se na mě zlobí, protože jsem nevěděl, že potřebuje pomoc. -Pracuji v obchodě se zbožím na nákup/doručení objednávek. -Často mám zákazníky, kteří se ptají, kde je položka, a ptají se ve formě pozdravu + otázky, nebo jen otázky. -Také mám zákazníky, kteří jen říkají ahoj/dobré ráno/atd. -Prošel jsem kolem zákazníka, který pozdravil, a já jsem mu pozdravil zpět, pak jsem čekal několik sekund, abych viděl, jestli má otázku. -Nic jiného neřekl, takže jsem pokračoval a pokračoval v nákupu. -Pak řekl "ahoj?" znovu, s neomaleným tónem, a naštvaně se mě zeptal, jestli tady pracuji. -Řekl jsem, že ano, a on se znovu zeptal, kde je položka, zase s neomaleným tónem. -Ukázal jsem, kde jsem si myslel, že by to mělo být umístěno, a řekl jsem, že si myslím, že by tam mělo být, ale zdá se, že jsme z toho venku. -Pak jen naštvaně řekl "zapomeň na to" a odešel. -Jak jsem měl vědět, že potřebuje pomoc? -Právě řekl "ahoj", což říká spousta zákazníků zdvořile. -Tento je jediný zákazník, kterého jsem měl, který jen řekl ahoj bez toho, aby se zeptal na otázku, a pak očekával, že budu vědět, že potřebuje pomoc. -Neřekl mi nic nevybíravého, ale jeho tón hlasu byl celou dobu extrémně naštvaný, i když jsem se snažil mu pomoci. -Díky za čekání. -Dříve byl vybrán špatný pořadí, proto jsem se dříve zmatl. -Myslel jsem, že už bylo doručeno. -Zkontroloval jsem správné pořadí a vidím, že jezdec se právě snaží to teď vyzvednout. -Bude tam za 10-15 minut. -Liz Truss slibuje dalších 75 milionů liber v humanitární pomoci Afghánistánu na zasedání G7. -Liz Truss oznámila, že Velká Británie poskytne Afghánistánu dalších 75 milionů liber v pomoci, aby pomohla řešit jeho se zhoršující se humanitární situaci. -Ministr zahraničí řekl, že závazek pomůže zachránit životy a "podpořit stabilitu v oblasti". -Následuje diskuse mezi ministry zahraničí G7 v Liverpoolu v sobotu o tom, jaké koordinované akce lze podniknout v Afghánistánu, spolu s tím, jak se zapojit do vlády Talibanu. -Militární skupina v srpnu napadla Kábul v bleskovém postupu, když 20 let okupace středoasijské země bylo ukončeno spěšným spojeneckým odchodem. -Paní Trussová řekla: "Velká Británie poskytuje v Afghánistánu v této zimě zásadní humanitární pomoc." -Fondy oznámené dnes ušetří životy, ochrání ženy a dívky a podpoří stabilitu v oblasti. -Jsme rozhodnuti udělat vše, co můžeme pro lidi v Afghánistánu. -Dodatečná finanční podpora přinese Velké Británii závazek vůči Afghánistánu ve výši 286 milionů liber letos. -Bude použito k poskytování podpory obětem násilí založeného na pohlaví a financování základních služeb ochrany dětí. -Organizace Spojených národů a humanitární agentury budou prioritně řešit ty nejvíce ohrožené, včetně domácností vedených ženami a osobami se zdravotním postižením, uvedlo Ministerstvo zahraničních věcí, Společenství a rozvoje (FCDO). 
-Úředníci řekli, že žádné financování nepůjde přímo skrze Taliban, namísto toho bude směřovat skrze Afghánský humanitární fond, Program Světové potravinové organizace (WFP) a další organizace. -WFP obdrží 34 milionů liber z financování oznámeného v neděli. -David Beasley, ředitel organizace, řekl, že dar „nám pomůže zachránit mnoho životů.“ -"Co vidíme na zemi je srdcervoucí - 23 milionů lidí čelí vážnému hladu v zemi zničené suchotou, konfliktem a ekonomickou krizí," řekl. -"Ženy a děti nesou největší tíhu tohoto utrpení a jak se blíží tvrdá zima, stále více a více lidí se každý den propadá do podvýživy a hladovění." -Tento týden varoval vrchní humanitární představitel OSN, že ekonomický kolaps Afghánistánu se "děje před našima očima" a vyzval mezinárodní společenství, aby podniklo kroky k zastavení "volného pádu" předtím, než dojde k dalším úmrtím. -Martin Griffiths řekl: "Je to stále horší a horší každý týden." -Oznámení o financování přichází po tom, co ministři tento týden čelili trapným otázkám ohledně úsilí o odchod z Afghánistánu po důkazech od whistleblowera poslanci. -Raphael Marshall, který pracoval pro Ministerstvo zahraničí během Operace Pitting, tvrdí, že pouze 5% afghánských občanů, kteří se podali o útěk pod jedním britským schématem, obdrželo pomoc v důsledku „dysfunkčního“ a „chaotického“ zacházení se situací. -Pan Marshall řekl Poslanecké sněmovně Výboru pro zahraniční záležitosti, že někteří z těch, kteří doufali, že uniknou, byli zavražděni poté, co byli necháni za sebou v Kábulu. -Také tvrdil, že Boris Johnson požádal o to, aby byla k dispozici "značná kapacita" pro evakuaci zvířat ze útulku, který provozuje bývalý královský námořník Paul "Pen" Farthing, čímž ohrozil životy vojáků, aby jim pomohl opustit soukromě financovaný letoun. -Předseda vlády označil tyto tvrzení za "úplný nesmysl". -V neděli v Muzeu Liverpoolu bude paní Trussová diskutovat s ministry zemí Asociace jihovýchodní Asie, kteří se poprvé účastní setkání G7 - většinou virtuálně. -Ministr zahraničí zdůrazní důležitost spolupráce s "ekonomikami budoucnosti" jihovýchodní Asie k řešení současných výzev, kterým čelí Západ, uvedlo FCDO. -Po oznámení integrovaného přehledu zahraniční politiky Velké Británie v březnu byl pozvání asijským ministrům vydáno s cílem „naklonit se“ k Indo-Pacifiku, což bylo vnímáno jako snaha omezit rostoucí vliv Číny v této oblasti. -Scholz, polský premiér diskutují o migraci, energetice a EU. -Nový německý kancléř Olaf Scholz přijel v neděli do Varšavy na jednání s polským premiérem Mateuszem Morawieckim o migraci, energetice, záležitostech Evropské unie a napětí na východě hranic bloku. -Byl přivítán Morawieckim, s vojenskými poctami, před kanceláří polského premiéra. -Byla to jedna z raných návštěv Scholze po tom, co byl ve středu přísahán se svou koaliční vládou. -Polsko je hlasitým odpůrcem potrubí Nord Stream 2, které přepraví ruský plyn přímo do Německa, říkajíc, že to činí Evropu závislou na dodávkách Ruska a vystavuje ji tlaku ze strany Moskvy. -Německý regulátor pozastavil schvalovací postup pro dokončenou ropovodní trasu kvůli právním otázkám. -Vláda ve Varšavě je také zapojena do stále se zostřujícího sporu s Evropskou komisí, výkonnou mocí EU, která odmítá poskytovat polské vládě pandemické obnovovací fondy s tím, že její politiky oslabují tamní soudní nezávislost. -Scholz a Morawiecki mají také diskutovat o složitých vzájemných vztazích pod novou vládou Německa. 
-Dobré sousedské vztahy jsou stále zastíněny druhou světovou válkou, zejména v současné pravicové vládě Polska, která tvrdí, že Německo Polsku dluží náhradu za škody způsobené během války. -Agnieszka Lada-Konefal, zástupce ředitele Německého institutu pro polské záležitosti v Darmstadtu v Německu, očekává, že vláda Scholze bude pokračovat v dialogu a kontaktu s Polskem, které je důležitým členem na východním příkopu EU a pátým největším obchodním partnerem Německa. -Návštěva přichází 30 let po ratifikaci obou parlamentů smlouvy o dobrých sousedských vztazích a přátelské spolupráci. -Německá nová zahraniční ministryně Annalena Baerbock byla v pátek ve Varšavě. -Vyjádřila podporu Německa pro Polsko, které uzavřelo svou východní hranici pro migranty, kteří jsou zřejmě podporováni belaruskou vládou, aby hledali nelegální průchod. -Také vyzvala k humanitárnímu zacházení s migranty, kteří jsou uvězněni na hranici. -Polsko a EU říkají, že vláda běloruského prezidenta Aleksandra Lukašenka se snaží destabilizovat blok tím, že podněcuje migraci do jeho zemí. -V pátek se Scholz setkal s francouzským prezidentem Emmanuel Macronem v Paříži a později s úředníky EU a NATO v Bruselu. -Scholz, středolevicový politik, se stal devátým německým kancléřem po druhé světové válce, otevírajícím novou éru pro nejvíce obyvatelnou zemi EU a největší ekonomiku po 16letém vládnutí Angely Merkelové. -Jeho vláda se skládá z koalice jeho středolevicových sociálních demokratů, ekologických Zelených a pro-business Svobodných demokratů. -Můžeme zkusit ruční reset. -Připojte svůj eReader k zdroji energie tím, že uděláte jednu z následujících věcí: -Nejprve zapněte počítač a připojte k němu přiložený USB napájecí kabel a poté svůj eReader. -Připojte zástrčku ze zdroje napájení (není součástí) a poté připojte svůj eReader k zástrčce ze zdroje napájení. -Stiskněte a podržte tlačítko napájení, dokud světlo napájení v horním pravém rohu vašeho eReaderu nezhasne. -Uvidíte obrazovku 'Vypnuto', když je váš eReader vypnutý. -Uvolněte tlačítko napájení. -Stiskněte a podržte tlačítko napájení na vašem eReaderu po dobu 30 sekund. -Počkejte, až se objeví obrazovka Obnovení. -Uvolněte tlačítko napájení. -Vaše obrazovka eReader se ztmaví a začne proces obnovení. -Deluxe Manuální / Bateriový poháněný vakuový penisový pumpa, vyrobená společností VVI Ltd England, vám umožňuje zvládnout vaši erektilní dysfunkci, obecně známou jako ED. -Erektilní dysfunkce může být emocionálně a finančně náročná, proto Encore poskytuje jeden z nejdostupnějších penisových pump na trhu. -Tento víceproudový vysavač má speciální úchopovou rukojeť zabudovanou do hlavy čerpadla, která uživateli poskytuje vynikající kontrolu nad čerpáním a vysáváním. -Vakuová terapie byla prokázána jako účinná při léčbě erektilní dysfunkce u více než 95% mužů bez vážných vedlejších účinků nebo léků. -Čerpadlo a válec jsou oba kryty zárukou výrobce na celý život, což znamená, že Encore nahradí buď část v případě poruchy nebo selhání. -Po troše cvičení se terapie vakuem s tímto systémem stává snadnou a pohodlnou. -Navíc VVI zahrnuje několik dalších položek v tomto kitu, který činí proces rychlým a uživatelsky přívětivým. -Patentovaný výhozní kroužek, násypka a mazivo obsažené v sadě pomáhají aplikovat napěťové pásky po čerpání. -Napěťové pásky, také známé jako penisové kroužky, pomáhají udržet erekci, jakmile byla dosažena pomocí pumpy. -Tento kit obsahuje sadu napěťových pásů ve nejpopulárnějších velikostech, aby uživatel mohl najít nejúčinnější úroveň napětí. 
-Aby toho nebylo málo, celý sada se vejde do elegantního a diskrétního přenosného pouzdra, které se skladuje prakticky kdekoli. -VVI Medical chápe, že mnoho jednotlivců chce udržet svůj sexuální život soukromý, což je důvod, proč budeme při expedici tohoto produktu dbát nejvyšší diskrétnosti. -Obdržíte svůj zásilku Encore Deluxe Manuální / Bateriově napájené vakuové čerpadlo penisu v obyčejné krabici. -K nákupu této pumpy není potřeba žádný lékařský předpis. -Můžete prosím zkusit uskutečnit nákup na počítači na webové stránce. -Platforma může mít nějaké problémy. -Byl jste schopen vyzkoušet nákup na počítači na webové stránce? -Vzhledem k nereakci a z důvodu kvality musím ukončit tento chat, neváhejte nás kontaktovat pro jakékoli dotazy nebo otázky. Budeme rádi, když vám s tím poskytneme pomoc. -Mějte krásný den, Na shledanou! -Podívejte se na kartu Platby a poštovné pro naše aktuální sazby. -Naše standardní služba je odeslání leteckou poštou. -K dispozici jsou prémiové podepsané a kurýrní služby. -Pokud nejsou uvedeny náklady pro váš stát, kontaktujte nás pro cenovou nabídku. -Dodání menších fotografií až do velikosti 16x12" do Evropy je obvykle 5 - 15 pracovních dnů od odeslání a do zbytku světa 7 - 20 pracovních dnů, prostřednictvím letecké pošty. -Dodání velkých fotografií 20x16" a 24x20" se obvykle doručuje do 7 - 20 pracovních dnů do Evropy a Zbytku světa. -Kombinujeme dopravu na objednávky pro stejného zákazníka. -Vyberte si všechny fotografie, které byste chtěli, a po dokončení se jednou zkontrolujte, abyste automaticky obdrželi slevu na poštovné. -Mezinárodní kupující si všimněte: naše velké fotografie jsou zasílány v poštovních trubkách. -Upozorňujeme, že ve některých zemích místní poštovní služby nedoručují poštovní trubky spolu s dopisy a malými balíčky. -Z tohoto důvodu zahrnuje uvedená dodací lhůta široký rozsah. -Poštovní společnosti umožňují až 25 pracovních dnů pro doručení zásilek standardním leteckým způsobem. -Proto prosím počítejte s asi 25 pracovními dny od odeslání, než se na nás obrátíte ohledně podezření na problém s dodáním. -Nabízíme premium Airmail služby s prioritním zpracováním a sledováním. -Obecně je dodání rychlejší prostřednictvím těchto služeb, ale buďte si vědomi, že nejde o časově omezené nebo zaručené služby a stejná úroveň služeb až 25 pracovních dnů dodací lhůty je uplatňována poštovními společnostmi. -Pokud potřebujete svou objednávku naléhavě, vyberte si možnost expresního kurýrního poštovného (pokud není pro vaši zemi uvedena, kontaktujte nás pro cenovou nabídku). -Vaše objednávka bude doručena FedExem během několika dnů. -Pokud byste chtěli poradit se doporučenou poštovní metodou do vaší země, kontaktujte nás - máme roky zkušeností a rádi vám poradíme. -Organizace ve stavu vysoké pohotovosti, jak technici závodí, aby opravili chybu softwaru. -Kritická zranitelnost ve široce používaném softwarovém nástroji - rychle využitá ve hře Minecraft online - se rychle stává významnou hrozbou pro organizace po celém světě. -"Internet je teď v plamenech," řekl Adam Meyers, senior viceprezident pro inteligenci ve společnosti pro bezpečnost na internetu Crowdstrike. -"Lidé se snaží opravit," řekl, "a všichni se snaží toho využít." -Řekl v pátek ráno, že za 12 hodin od zveřejnění existence chyby byla "úplně zbraňována", což znamená, že zločinci vyvinuli a distribuovali nástroje pro její využití. -Chyba může být nejhorší počítačová zranitelnost objevená za roky. 
-Bylo to odhaleno ve všudypřítomném nástroji, který se používá v cloudu servery a podnikovém softwaru používaném v průmyslu a vládě. -Pokud není opraveno, poskytuje zločincům, špionům a programátorům nováčkům snadný přístup k vnitřním sítím, kde mohou vyloupit cenná data, nainstalovat malware, smazat důležité informace a mnohem více. -Cyber útoky jsou nyní považovány za největší hrozbu pro finanční stabilitu. -„Byl bych nucen přemýšlet o společnosti, která není v ohrožení,“ řekl Joe Sullivan, hlavní bezpečnostní důstojník společnosti Cloudflare, jejíž online infrastruktura chrání webové stránky před zákeřnými účastníky. -Nečíslné miliony serverů mají nainstalováno a odborníci řekli, že dopady nebudou známy několik dní. -Amit Yoran, CEO bezpečnostní společnosti Tenable, to nazval „největším, nejkritičtějším zranitelností poslední dekády“ - a možná největším v historii moderního počítačového výpočtu. -Zranitelnost, pojmenovaná "Log4Shell", byla hodnocena 10 na stupnici od jedné do deseti Apache Software Foundation, který dohlíží na vývoj softwaru. -Kdokoli s exploitem může získat plný přístup k neopravenému počítači, který používá software. -Odborníci řekli, že extrémní snadností, s jakou zranitelnost umožňuje útočníkovi přístup k webovému serveru - bez požadavku na heslo - je to, co ji činí tak nebezpečnou. -Novozélandský tým pro pohotovostní reakce na počítače byl mezi prvními, kteří oznámili, že chyba je "aktivně využívána v divočině" jen hodiny po tom, co byla veřejně oznámena ve čtvrtek a byla vydána oprava. -Zranitelnost, která se nachází v open-source softwaru Apache, který se používá k provozování webových stránek a dalších webových služeb, byla nadaci 24. listopadu nahlášena čínskou technologickou společností Alibaba, uvedla. -Trvalo to dva týdny, než se vyvinul a vydal opravu. -Chcete-li aktualizovat své platební údaje, postupujte podle těchto kroků: -Přihlaste se do svého účtu #PRS_ORG#. -Klikněte na "Můj účet" a v menu vyberte "Nastavení účtu". -Vyberte kartu „Informace o platbě“. -V sekci „Platební informace“ vyberte typ kreditní karty a zadejte číslo karty, bezpečnostní kód (CVV), jméno na kartě a datum expirace. -Klikněte na "Uložit”. -Zkusil jsi tyto kroky? -Váš účet je anjahoehn. -Ve vašem účtu je uvedeno, že jediným odkazem (možnost přihlášení) k přístupu k vašemu účtu #PRS_ORG# je #PRS_ORG#. -Vaše uživatelské jméno je anjahoehne emailová adresa / Poslal jsem odkaz na obnovení vašeho hesla. -Prosím zkontrolujte svou poštu prosím okamžitě. -Čekám na tebe tady -Jak to šlo? -Dostal jste odkaz na obnovení vašeho hesla? -Jsi tam? -Poslal jsem další odkaz pro obnovení vašeho hesla. -Zkontrolujte prosím svou poštu. -Pro účely kvality budu muset uvolnit tento chat, pokud nebude žádná interakce během následujících 2 minut. -Děkujeme za kontaktování #PRS_ORG#, bylo mi potěšením Vám dnes pomoci. -Doufám, že máte skvělý den. -Jaký byl rozloučení s Etho? -Nejprve rychlé vysvětlení: Nejsem uživatel účtu, ale jeho manželka. -Mám povolení používat tento účet, protože jsem v technice úplně blbý a trochu mi to pomáhá s problémy se zdravím duševním (a vidím ironii technofoba se ptát na Redstone Titan :P) -Druhé a mnohem důležitější výhrada: Nechci rozpoutat dramata ani podezřívat, že něco nebylo v pořádku. -Podle mého názoru to byla jen změna větru, která nevyhovovala všem. -Jsem jen starý fanoušek uspokojující nostalgickou přímku. -S tím venku >.<... -Takže jsem býval velkým fanouškem Mindcracku za starých časů. -Nikdy jsem nezmeškal vydání od GuudeBoulderfist, miloval kolaborace atd. 
-Při sledování náhodného YouTube kanálu jsem narazil na vid, který popisuje historii kanálu Etho. -Na konci se dotklo Mindcracku, který se stal komerčním. -Jak to neviděl v pozitivním světle a nevyhnutelné odmítnutí podepsat související smlouvy. -Opět ani jít 'pro' ani chtít to udržet na úrovni je špatné rozhodnutí a vím, že lidé jdou různými směry a tak dále. -Rychlý Google mě dovedl k tomuto starému vláknu, které ukazuje jeho stranu věcí. -Většinou věci, které je třeba říci opatrně, ale co nejvíce zaujme na první pohled, je to, že jste viděli věci jiným světlem, ale celkově zůstali v dobrých vztazích. -Celý tento příběh se odehrál poté, co jsem se přesunul k jiným věcem, takže pro mě je to všechno nové. -Co hledám je druhá strana obrázku? -Jak jsem řekl, mé duševní zdraví není vynikající a vidět starou skupinu, ke které jsem byl jako fanoušek připojen, rozcházet se bez nějakých hloupých explozí, které jsou příliš běžné ve polarizované veřejné diskusi, může být trochu příjemné a podobně. -Tak jaká byla reakce od "starého gangu"? -Dělali jste stále společně něco? -Odcizili jste se pomalu? -Stále si povídat nebo pozvat jeden druhého na akce? -Opět nečekám nic dramatického nebo vidět lidi na krku jeden druhému. -Spíše naopak. -Myslím si, že v určitém smyslu je to forma nízkonákladového uzavření něčeho, co je trochu malé ve mém životě, aby to odráželo trochu pozitivního na mé psychicky trápené zadku. -P.S. Nemohl jsem si nevšimnout charitativní sbírky, kterou jste měli, a obrovského množství, které jste vybrali. -To je úžasné jako peklo! -Vysvětlení dvojitého výpadku elektrického proudu v Gabbě a proč by se to mohlo stát znovu. -Rizika výpadku napájení inherentní v nastavení kompoundu vysílání Gabba je nepravděpodobné, že se zlepší před další sérií Ashes, jelikož orgány cricketu čekají na více podrobností o plánech na významnou modernizaci stadionu pro pořádání her 2032 Olympijských her. -Zdroje The Age a The Sydney Morning Herald řekly, že Gabba je jediným velkým stadionem v australském cricketu, kde hlavní napájecí zdroj na hřišti není dostatečný k napájení obrovského množství přenosových nákladních vozů a zařízení potřebných k odesílání obrázků po celém světě. -Primární a záložní generátory napájející globální vysílání Gabba Test se vypnuly na 25 minut ve čtvrtý den. -To je proto, že základní výkon v obvodu je nutný k dodávání světelných věží Gabba - z nichž jedna infamously selhala během zápasu Big Bash League v roce 2019 - a samotné hřiště. -V důsledku toho vysílací společnosti čerpají svůj hlavní zdroj energie z obrovského, naftou poháněného generátoru najatého pro Testovací zápas, s nouzovým zdrojem energie, který má být získán z nouzového generátoru. -Nicméně ve čtvrtý den Testovacího zápasu selhala primární generátor spojený s záložním generátorem, což způsobilo, že oba selhaly současně a vedlo k úplné nebo částečné ztrátě obrazu vysílání a DRS po téměř 30 minutách. -NEP, společnost, která poskytuje externí vysílací vozy a další zařízení Fox a Seven, požádala o vysvětlení od společnosti, která poskytla generátory. -Všechny ostatní plochy, které budou použity pro Ashes - Adelaide Oval, MCG, SCG a Bellerive Oval v Hobartu - poskytnou hlavní napájení pro vysílání, zatímco dieselový generátor bude sloužit jako záložní zdroj. -Tento rozdíl, který v minulosti způsobil významné obavy u pořadatele Fox Cricket, byl během Ashes Testu zhoršen výrazně sníženým počtem produkčního a technického personálu, který byl schopen sledovat mnoho metaforických míčů, které byly během zápasu ve vzduchu. 
-Cricket Australia bylo varováno Foxem po několik měsíců, že z technického hlediska by bylo bezpečnější hrát zápas jinde, ale pokud by zůstal na Gabba, existovaly by "velká rizika" spojená s kostrou posádky povolenou do Queenslandu. -Nerezová ocel Vyrobeno rovně, údržba břitvy Učiněno snadno s vyměnitelnými čepelemi! -Tento holící strojek je blízkým příbuzným Straight / Cut Throat holícího strojku, který vám poskytuje starou barberovskou vintage atmosféru za zlomek ceny a prakticky nulovou údržbu! -Používání náhradních standardních dvojitých hraných břitů, stejně jako klasický bezpečnostní holící strojek - To znamená, že se nemusíte starat o stropping a broušení a přesto si užívat blízkost holení přímým břitovým strojkem! -Ideální pro začátečníky, kteří chtějí vyzkoušet umění holení s holicími strojky. -Tří nebo pět čepelové holící strojky dráždí pokožku mnohem více a musíte je tlačit silně proti pokožce, abyste je mohli použít. -Proto je tento holící produkt tak skvělý a často se používá pro lepší péči o pleť než průměrný holič. -Tvá tvář ti později poděkuje. -Připraveno k použití s jedním balením břitů. -Přichází v Haryali London Gift Designer Boxu. -Obrázky jsou skutečných položek, takže můžete být jisti, že to, co vidíte, je to, co dostanete. -Nástroje Haryali London mají životní záruku proti vadám materiálu a zpracování. -Jakýkoli produkt, který se ukáže jako vadný, bude opraven nebo vyměněn bezplatně. -Garantujeme proti prasknutí, selhání spoje a korozi při běžném používání. -Záruka se nevztahuje na běžné opotřebení a použití přístrojů za jejich limity. -Vylučuje také nesprávné použití nástroje tak, že byl tento konkrétní nástroj navržen a měl být použit. -Navíc jsou vyloučeny z této záruky i nástroje poškozené zneužitím nebo náhodou. -PayPal – Jediná forma platby, kterou přijímáme. -Pokud zákazníci nejsou s naším produktem plně spokojeni, prostě nám vraťte položku v nevyužitém stavu a my zpracujeme náhradu, jakmile položka bude přijata. -Pokud máte nějaké dotazy, kontaktujte nás prosím prostřednictvím karty „Položit otázku“, která se nachází na spodní straně stránky se seznamem. -Naše spokojenost zákazníka je na vrcholu naší priority. -Snažíme se poskytnout příjemný nákupní zážitek všem našim zákazníkům. -Pokud máte jakékoli otázky nebo problémy, kontaktujte nás prosím prostřednictvím zprávy „eBay“ a my se budeme snažit odpovědět na všechny dotazy, které nám byly položeny, do 24 hodin. -Pokud z nějakého důvodu není vaše nákup úplně uspokojivý, než zanechte negativní zpětnou vazbu, než se s námi spojíte, protože vyřešíme problém pro vás. -Pokud máte zájem o další produkty, podívejte se prosím do našeho obchodu na eBay. -Sen o tom, aby všechny děti byly bezpečné o Vánocích. -Její bratr (skoro dva) musel být přesvědčen, aby nešel s malým Ježíškem. -Takže tam byl obvyklý jemný chaos, který doprovází jakékoli shromáždění batolat. -Ale všichni byli tak potěšeni, že se to podařilo, když tolik dalších vánočních akcí bylo zrušeno, když se objevila další varianta Covidu. -Moje vnučka je čtyři, což znamená, že polovina jejího života - polovina jejího života! - byla zničena pandemií. -Od té doby, co se správně zorientovala, neví nic jiného než nošení masek, posedlost mytím rukou a udržování odstupu. -Na několika příležitostech (skrze různá uzamčení) když jsem ji viděl, nevěděl jsem, jestli bych ji měl políbit nebo ne. -Jaký druh zprávy to posílá do přijímajícího, super-ostrého mozku malého dítěte? -Bojím se přemýšlet. -Říkám to ne jako někdo, kdo je proti uzamčení nebo odstupu. 
-Na všechnu kritiku naší vlády se žádná země nedostala přesně. -Od začátku roku 2020 to bylo dva kroky vpřed a jeden zpět (a někdy naopak). -A vědělo se - i když mnozí z nás během těch prvních slunečných měsíců spíše užívali luxusu nevycházet ven - že po celé Británii jsou ti, pro které být doma bylo peklem, ne nebe. -Děti jako Arthur Labinjo-Hughes, který se stal neviditelným bez školního personálu, aby přemýšlel, proč je tak hubený a nemocný, bez sousedů, bez procházejících, nic. -Jsi na stránce knihy? -Můžete upravit velikost textu, fonty, řádkování a odsazení, aby čtení bylo pro vaše oči snazší. -Při čtení stiskněte střed stránky pro zobrazení nabídky čtení. -Klepněte na ikonu Text. -Bylo to náhodou rozlité naším jezdcem. -Za opětovné dodání vám nebudeme účtovat dvakrát. -Pošleme vám pouze novou objednávku. -Váš znovu-dodání je nyní připravováno restaurací. -Velmi vás prosím o trpělivost a počkejte, až bude vaše objednávka doručena do #NUMBER# minut. -Jo... my všichni kluci máme zbraně. -I děti. -Chodíme s nimi jako by to byl Divoký západ. -Ani nevím, kde začít. -Opravdu si myslíte, že by crackhead vlastnil drahou zbraň a pak si našetřil dost peněz na munici? -Crack hlava není „profesionální lupič“. -Pokud neslyšíte o tom, že lidé jsou bodáni, tak co? -Bodnutí nezískávají stejnou pozornost médií jako střelby. -Jen proto, že tisk to nezdůrazňuje, neznamená to, že se to neděje. -Co to má společného s rasou? -Plus Muscle Údržba pro podporu svalové struktury a aktivity -Společná pomoc pro psy je vysoce specifickým doplňkem kloubů a svalů s glukosaminem pro psy, navrženým pro podporu pohyblivosti. -Joint Aid pro psy může být podáváno všem psům jakéhokoli věku na úrovni „Obecná podpora“, aby se udržela volnost pohybu a stav svalů po celý jejich život. -Pro starší a pracující psy nebo ty, kteří ukazují sníženou svalovou hmotu, se doporučuje krmit Joint Aid for Dogs na úrovni „plné podpory“. -Jaké jsou klíčové výhody používání Joint Aid pro psy? -Udržuje pružnost pohybu u všech pracovních a domácích psů bez ohledu na věk, velikost a úroveň cvičení. -Podporuje tvorbu chrupavky, šlach, vazů, synoviální tekutiny a svalů. -Pomáhá udržovat přirozené protizánětlivé akce metabolismu psa. -Poskytuje unikátní kombinaci 22 aktivních nutraceutik. -Obsahuje unikátní Oatinol™ Delivery System pro udržení vysoké rychlosti absorpce živin. -Obsahuje vysoké hladiny Omega 3 pro podporu optimálního zdraví a výkonu. -Vyrábí se jako chutné a snadno krmení 2mm pelletů. --Může být podáváno všem psům bez ohledu na věk, velikost nebo úroveň cvičení. -Pro pokračující podporu se doporučuje Joint Aid krmit denně. -Měřítko je součástí balení. -Nutraceuticals jsou nutriční látky, které poskytují další zdravotní přínosy. -Díky přidání následujících nutraceutik Joint Aid poskytuje doplňkovou podporu pro všechny psy. -Vysoké hladiny 5 konkrétních stravovacích aminokyselin, nezbytných pro produkci svalové tkáně. -Chondroitin je nezbytný pro odolnost chrupavky. -Udržuje normální enzymatickou aktivitu a schopnost držet vodu, aby poskytla zdravou odolnost proti stlačení. -Kollagen má velkou tažnou sílu a poskytuje rámec, který dává tkáním jejich pevnost a pružnost. -Vidím, že detaily odpovídají. -Je mi líto, ale zdá se, že vaše původní objednávka byla náhodou rozlita, proto můj kolega musel udělat novou objednávku. -Nová objednávka je objednávka #NUMBER# a bude tam za pouhých 20 minut. -Jízdní si to bere a bude doručovat co nejdříve. -Tento arabský stát plánuje zvýšit obchod s Ruskem. 
-Spojené arabské emiráty plánují zvýšit svůj obchodní obrat s Ruskem na 20 miliard dolarů během příštích pěti let, oznámil ministr zahraničního obchodu Thani bin Ahmed Al Zeyoudi. -"Spolupracujeme s ruskou stranou na zvýšení obchodního obratu na 20 miliard dolarů během příštích pěti let a na pokračování investic do dalších oblastí [ekonomické spolupráce]," řekl Al Zeyoudi v sobotu během plenárního zasedání mezinárodního fóra Expo-2020 v Spojených arabských emirátech, které bylo kvůli pandemii Covid-19 odloženo. -Podle úředníka jsou "vztahy mezi Abú Zabí a Moskvou strategické". -Zdůraznil, že až 90% všech ruských investic do arabského světa jsou provedeny v Spojených arabských emirátech. -Spojené arabské emiráty také významně investují do Ruska, což tvoří asi 80% všech arabských investic do ekonomiky Ruska. -"Pokud mluvíme o počtu ruských společností v Spojených arabských emirátech, dosáhl téměř 4 000," uvedl Al Zeyoudi. -Podle ministra již Spojené arabské emiráty investují do několika ruských sektorů, včetně petrochemického průmyslu, ropy a plynu, automobilového průmyslu a přístavů a plánují rozšířit tento seznam. -V roce 2020 dosáhl obchodní obrat mezi oběma státy 3,3 miliardy dolarů a v prvních 10 měsících roku 2021 jeho objem překročil 4 miliardy dolarů, čímž dosáhl nového rekordu, uvedl minulý týden ruský premiér Michail Michustin. -Podle Ministerstva ekonomiky, letos Rusko hlavně vyváželo do Spojených arabských emirátů minerální produkty, drahé kameny a kovy, zatímco ruské dovozy z arabské země zahrnovaly stroje, zařízení a vozidla. -Jak dlouho trvá, než se malware infikuje do vašeho nového počítače? -Pokud používáte bezplatný nebo jiný nekvalitní bezpečnostní software, možná to nebude dlouho trvat vůbec. -Kyberzločinci jsou sofistikovanější než kdy dříve a používají rozmanitou paletu nástrojů k získání přístupu k vašim informacím. -Jiné bezpečnostní řešení prostě nemají prostředky, aby se udržovala krok s novými hrozbami, jak se objevují. -Jak se hrozby zhoršují, my se jenom zlepšujeme. -Naše týmy bezpečnostních expertů neustále analyzují nové hrozby a vymýšlejí nové způsoby, jak chránit vaše zařízení před nimi. -Soustředíme se výhradně na bezpečnost a jsme v tom nejlepší. -Naše koncentrovaná kombinace oddanosti a odbornosti prospívá našim zákazníkům. -Norton předčil konkurenci ve mnoha renomovaných hlava-na-hlava testech a pouze Norton získal PC Magazine Editors 'Choice Award 34krát, včetně 11 let v řadě - více než jakákoli jiná bezpečnostní společnost. -Co to pro vás znamená? -Když si koupíte Norton Security, dostanete jeden z nejlepších bezpečnostních produktů na trhu dnes. -Zahrnujeme pouze ochrannou slib, který může udělit pouze Norton. -Jsme tak jistí ve své schopnosti udržet vás bezpečné, nabízíme záruku vrácení peněz: -Pokud naši odborníci Norton nemohou odstranit virus na vašem PC nebo Macu, vrátíme vám peníze*. -S Norton Security Deluxe můžete rychle a snadno zabezpečit své zařízení. -Norton Security Deluxe poskytuje jednoduchý pohled, který podrobně popisuje stav ochrany vašeho zařízení. -Z jednoho přehledu můžete sledovat nastavení zabezpečení a ochrany identity a dokonce zobrazit historii skenovaných souborů a analyzovaných stahování. -Zatímco pracujeme na zajištění správnosti informací o produktech na našich webových stránkách, výrobci občas mohou měnit seznamy ingrediencí. -Skutečné obalové materiály a materiály mohou obsahovat více a/nebo jiné informace než ty, které jsou uvedeny na našich webových stránkách. 
-Všechny informace o produktech na našich webových stránkách jsou poskytovány pouze pro informační účely. -Doporučujeme, abyste se nepřímo spoléhali pouze na informace uvedené na našich webových stránkách. -Před použitím nebo konzumací produktu si vždy přečtěte štítky, varování a pokyny uvedené na produktu. -V případě jakýchkoli bezpečnostních obav nebo pro jakékoli další informace o produktu si prosím pečlivě přečtěte pokyny uvedené na etiketě nebo obalu a kontaktujte výrobce. -Obsah na tomto webu není určen jako náhrada za radu poskytnutou lékařem, lékárníkem nebo jiným licencovaným zdravotnickým pracovníkem. -Okamžitě kontaktujte svého zdravotního poskytovatele, pokud podezříváte, že máte zdravotní problém. -Informace a prohlášení o produktech nejsou určeny k diagnostice, léčbě, léčbě nebo prevenci jakéhokoli onemocnění nebo zdravotního stavu. -Organicsbeauty nepřijímá žádnou odpovědnost za nepřesnosti nebo nepravdivé informace o produktech od výrobců nebo jiných třetích stran. -Toto neovlivňuje vaše zákonná práva. -Všechny objednané položky jsou odeslány do 3-5 pracovních dnů po obdržení potvrzení platby prostřednictvím PayPal. -Používáme renomované kurýry pro odesílání našich zásilek, jako je FedEx, DHL, TNT nebo EMS. -Číslo sledování bude poskytnuto po odeslání balíků. -Normální doba dodání je 6-8 pracovních dnů od času odeslání položky. -Upozorňujeme, že čas dodání může být delší v některých jiných přepravních podmínkách, jako je proclení celního úřadu, nedostatek správné adresy, změna adresy nebo nějaké jiné důvody. -Pokud máte jakýkoli dotaz nebo problém, neváhejte nás kontaktovat prostřednictvím systému zpráv eBay nebo klikněte na kartu „Zeptat se prodejce“ níže v každém výpisu. -Odpovíme do 24 hodin. -Uvědomte si, že clo, daně, DPH, poplatky za karanténu, poplatky za změnu adresy nebo jakékoli jiné daně nejsou zahrnuty v ceně položky nebo v poplatcích za dopravu. -Tyto poplatky jsou na zodpovědnosti kupujícího. -Je vám žádáno, abyste se prosím obrátili na celní úřad vaší země, abyste zjistili, jaké jsou tyto další náklady nebo daně atd., předtím, než budete dražit / kupovat tyto položky. -Nemáme žádnou kontrolu nad celními poplatky nebo časem celního procesu nebo nad jinými poplatky; proto je čas dodání pouze pro referenci. -Prodávající nejsou zodpovědní za časy přepravy služby dopravy. -Časy přepravy se mohou lišit zejména během špičkových období. -Clo se obvykle účtuje dopravní společností nebo se vybírá při doručení balíčků. -Zpětná vazba: Pokud máte s produktem nějaký problém, okamžitě nás kontaktujte, protože zajišťujeme rychlé a nejlepší řešení pro jakýkoli problém s našimi produkty. -Omlouvám se, ale nemůžeme změnit adresu, jakmile byla již umístěna. -V tomto případě, co doporučuji, je, že můžete zavolat jezdci, jakmile je již blízko, abyste mohli upravit adresu. -Chcete-li to udělat, jednoduše přejděte na stránku s objednávkou, klepněte na „Nápověda a podpora“ a vyberte možnost „Zavolat jezdci“. -Assam Anti-CAA Outfits Vzdávají hold lidem, kteří zemřeli během protestů. -Před dvěma lety během proti-CAA vzpoury v Assamu bylo zabito pět agitátorů. -Několik organizací v Assamu v neděli vzdalo hold pěti agitátorům, kteří byli před dvěma lety zabiti během proti-CAA protestů, a rozhodlo se obnovit hnutí proti tomuto zákonu. -Památní schůze byly uspořádány v rezidenci Sama Stafforda, jednoho z agitátorů, kteří zemřeli, a na hřišti v Guwahati, s účastníky, kteří se rozhodli znovu zintenzivnit rozruch proti Zákonu o občanství (změna). 
-Krishak Mukti Sangram Samiti (KMSS), které bylo mezi prvními skupinami, které organizovaly protesty proti CAA po jeho schválení ve Sněmovně, vzdalo hold agitátorům v rezidenci Sam Stafford's Hatigaon. -Sibsagar MLA Akhil Gogoi, který byl během vzpoury vůdcem Krishak Mukti Sangram Samiti a byl za svou roli ve vzpourě uvězněn, při kladení květinových poct u fotografií těch, kteří zemřeli, řekl, že politické strany a "nacionalistické organizace" musí vést obnovení hnutí. -Komentující uměleckou bratrství, které se dostalo do centra pozornosti v roce 2019, řekl: "Nemůžeme očekávat, že budou organizovat agitace. -Jejich pomoc je klíčová, ale neměli by být obviňováni z toho, že neobnovili hnutí.” -Všechny Studentské Unie Assamu (AASU), další klíčový hráč v rozruchu, uspořádali památník na hřišti Hatigaon Higher Secondary School. -Mluvící k této příležitosti, hlavní poradce AASU Samujjal Kumar Bhattacharya řekl: "Je špatné říci, že proti-CAA hnutí zemřelo. -Ztratilo svou intenzitu kvůli zahájení zkoušek (v lednu 2020) a poté pandemie a uzávěry. -Znovu zahájíme agitaci s plnou intenzitou. -Nenecháme oběti jít nazmar," řekl. -Pan Bhattacharya řekl, že protest proti CAA bude opět pan-Northeast jako v roce 2019. -Zpěvák-skladatel Zubeen Garg, který sehrál vedoucí roli v protestech v roce 2019, také vyjádřil svou úctu na programu pořádaném AASU. -"Neuznáme CAA a to je jisté. -Vláda se snaží nás zmást, ale my jim nedovolíme, aby nás donutili to přijmout, řekl. -Několik organizací, včetně AASU, North East Students' Organisation (NESO) a Assam Jatiya Parishad (AJP), si připomnělo "černý den" 11. prosince k označení dvou let od schválení CAA ve Sněmovně. -Dobré odpoledne, děkujeme, že jste se dnes spojili s námi, jste přes #NAME#. -Můžete potvrdit číslo objednávky, jméno na účtu, e-mailovou adresu a dodací adresu, prosím? -Chvilku, nechám opravit soubory pro vás. -Ujistěte se, prosím, že provedete následující kroky> Na vašem čtečce elektronických knih... -Přejděte na svou domovskou obrazovku. -Klepněte na ikonu Více dole na obrazovce. -Klepněte na Nastavení. -Klepněte na Informace o zařízení. -Vedle „Opravit váš účet #PRS_ORG#“, klepněte na Opravit. -Opravit nyní. -Jaké tituly chybí? -Postupujte podle níže uvedených kroků pro provedení opravy synchronizace vašeho #PRS_ORG# (před zahájením budete potřebovat připojení Wi-Fi): -Přejděte na svou domovskou obrazovku. -Klepněte na ikonu Více v pravém dolním rohu obrazovky (3 vodorovné čáry). -Klepněte na Nastavení. -Klepněte na Informace o zařízení. -Vedle Oprava/obnovení vašeho #PRS_ORG# účtu, klepněte na Oprava/Obnovit. -Opravit nyní/Obnovit -Po dokončení synchronizace opět stiskněte tlačítko Sync Now, abyste nainstalovali dostupné aktualizace. -Prosím, dejte mi vědět, jestli můžete stáhnout a otevřít svou knihu nyní. -Ahoj, jsi tam ještě? -Neslyšeli jsme od vás. -Může se stát, že jsme se odpojili. -Ukončím tuto chatovací relaci. -Pokud se chcete opět spojit s zákaznickou podporou, můžete nás kontaktovat na #URL# a člen našeho týmu bude rád, že vám pomůže. -Proč byli Skyler a Walt Jr. tak naštvaní, že Walt pracuje na domě ve 2. sezóně? -Konkrétně 2.10 "Over" -Walt nahradí ohřívač vody, poté nahradí desky, které byly zřejmě možná ne nutně hnijící. -Proč se Skyler zdá tak naštvaný kvůli tomu? -Úplně znechucená se ptá "Vůbec dneska budeš pracovat?" -Před týdnem nebo dvěma byla o něm nadšená, že bude celou dobu odpočívat a uzdravovat se. -Chápu, že je ve vztahu nešťastná, ale Walter Jr. se zdá být nejasně rozhořčený a úplně zmatený Waltovými rekonstrukcemi. 
-Jsem si také vědom, že Skyler otevřeně flirtuje s Tedem s nadějí, že se někdo bude chovat k ní jako k prioritě, zatímco nosí dítě, zatímco Walt udělal všechno o sobě od svých 50. narozenin. -Přesto se mi vždy zdá divné, když se na opakovaném sledování zdá, že Sky a Jr. jsou tak neuvěřitelně naštvaní, že Walt dělá něco produktivního doma, než lže, nebo zabíjí lidi, nebo dělá drogy. -Jen opravovat dům jako by to dělal majitel domu a měl by nic jiného než volný čas. -Také chápu, že to je jen další forma zoufalství, aby se pokusil udržet svou roli manžela a rodinného muže, přestože den nebo dva předtím donutil svého náctiletého syna k tequile. -Je jasné, že se snaží získat zpět jejich přízeň tím, že vychvaluje problém, který není okamžitou prioritou, aby to vypadalo, jako by udělal skvělou práci a byl skvělou osobou! -Jeho řízení škod je jasně špatné. -Ať už je to jakkoli, reakce jeho ženy a syna mě stále štvala a v této situaci jsem se cítil nucen zdůraznit Waltův zoufalý pokus napravit ošklivé chyby. -Iowa Santa odchází do důchodu po 50 letech. -Santa z Iowa, který udělal dětem radost po dobu 50 let, říká, že je připravený odložit červený oblek a užít si klidnější Vánoce. -Dave Stoufer odhalil, že zdravotní problémy a problémy související s věkem vedly k jeho rozhodnutí odejít z "nejdelší práce", kterou kdy měl. -Jeho žena Rachel Nicola řekla, že ačkoli je velmi hrdá na práci, kterou její manžel udělal, aby přinesl radost tolika lidem, těší se na to, že bude mít více času oslavit Vánoce s ním. -Těžké sněžení způsobuje zkázu v Srbsku a většině Balkánu. -Těžké sněžení způsobilo v neděli většině Balkánu zkázu, narušující veřejnou dopravu. -Letecké spojení byly zrušeny v hlavním letišti Srbska v Bělehradě v neděli a mnoho oblastí hlásilo výpadky elektrického proudu a poškození budov. -Velká část západního Srbska byla bez elektřiny, jak varovaly úřady před nezbytnou cestou a vyzvaly lidi v Srbsku, aby šetřili energii. -V Beogradu došlo k poškození aut a budov kvůli sněhu. -Podle belgických médií bylo několik letů z a do hlavního letiště v Belgradu zrušeno kvůli počasí a krátkému výpadku proudu v hlavním terminálu. -Dálnice vedoucí k letišti byla uzavřena na několik hodin kvůli dopravní zácpě způsobené sněžením. -Cestující na místním vlaku do Bělehradu byli uvězněni ve sněhu po dobu sedmi hodin, než jim byla poskytnuta autobusová doprava do hlavního města. -Záchranné služby pomáhají orgánům při čištění, zatímco byl vydán další varování před dalším sněhem a ledem. -Mezitím v Bulharsku byly víkendem zuřící těžké deště a velké povodně, které způsobily, že tamní úřady vyhlásily stav nouze. -Nejvíce postižené oblasti byly v oblasti Smolyan, blízko hranice s Řeckem, kde řeky prorazily své břehy a způsobily přetečení silnic a zaplavení domů. -Několik kamionů bylo uvězněno v sesuvu půdy na meziměstské silnici. -Silný vítr narušil dodávky elektrické energie ve desítkách vesnic, uvedly úřady. -Další jižně v Albánii mobilizovaly orgány policie, armáda a záchranné síly k zvládnutí záplav po třech dnech neustálého deště a sněhu. -Řeka Vjosa na jihu zaplavila mnoho oblastí. -Starší pár, který přespal na střeše svého domu na jihu Albánie, byl ráno zachráněn policií. -Mnoho silnic bylo dočasně uzavřeno sesuvy půdy na jihu. -Jinak na severovýchodě a jihovýchodě země těžký sníh ztěžoval nebo dočasně blokoval dopravu. -Skvělé!! -Jsem rád, že jste nyní přistoupili k vašemu e-knize!! -Pro vaši informaci Vám pošlu transkript naší konverzace. 
-Pokud máte jakékoli další otázky nebo obavy, můžete vždy odpovědět na tento e-mail a my vám budeme moci dále pomoci. -Je tu něco jiného, s čím vám mohu dnes pomoci? -Pokud jde o nákup kvalitního vybavení, spací pytel by měl být na prvním místě seznamu. -Můžete šetřit na všech druzích vybavení, ale ne na spací pytel. -Velkou část času stráveného při kempování nebo expedicích budete trávit ve svém spacím a s Snugpak máte jistotu kvality. -Ověřený a oblíbený, tento britský spací pytel kombinuje mikro velikost balení se seriózním výkonem. -Mnoho lidí vnímá Softie 12 Osprey jako ultimátní čtyřsezónní syntetickou výplň spacího pytle, která je k dispozici. -Od roku 1987 stanovuje standard pro výkon velikosti zimního balíčku, který ostatní následují. -Ti, kteří vědí o Softie 12 Osprey, buď jeden použili nebo si přáli, aby jeden měli. -Používáno od výšin skotských hor až po dno vaší sněhové jámy. -Softie 12 Osprey, jako mnoho dalších našich spacích pytlů v sérii Softie Original, byl přidělen NATO Stock Number. -Quiltovaný vrchol tašky je šitý, plisovaný a vybavený šňůrkou, takže se stahuje do tvaru, podobně jako kapuce bundy. -Aby se zabránilo zaseknutí 2-cestného zipu buď na zipovou krytku nebo na okraje tašky, je za zipem sešita "proti zaseknutí" pásky. -Sponky pro upevnění a visací záložky Vnitřní záložky jsou poskytovány k udržení volného obložení na místě, odpovídající v poloze s záložkami, které poskytujeme na našich obloženích. -Vnější kapsy umožňují snadno pověsit tašku na větrání a sušení. -Zúžení tašky na kruhovou nohu vytváří "mumií" tvar, který je snadno zahřát a minimalizuje hmotnost použitého materiálu. -Těžko vidět na obrázku, ale zip baffle běží po celé délce tašky, za zipem, aby se zabránilo úniku tepla skrz oblast zipu. -Koupit levou a pravou ruku, aby se udělal dvojitý (prosím zkontrolujte při objednávání) -Přichází kompletní s kompresním vak na věci, aby se taška stala menší, když není v používání. -Může být použito s Snugpak Exanda panelem pro vytvoření širšího spacího pytle pro větší pohodlí. -Tento spací pytel může být udělaný extra dlouhý. -Jednoduchý profilovaný spací pytel s jednou vrstvou softie izolace. -Snugpak sídlí v seznamovaném mlejnu postaveném v 1800s na okraji krásného Yorkshire Dales. -Jsou velmi hrdí na to, že jsou jedním z posledních výrobců kvalitních spacích pytlů a izolovaného oblečení nejen v UK, ale po celé Evropě. -Máme věrný pracovní sílu ve své továrně v West Yorkshire v Severní Anglii, kteří jsou vyškoleni k používání moderních strojů a tradičních šicích technik, aby naše nápady ožily. -Kontakt Left Limited je oficiálním dodavatelem pro Snugpak a v našem EBAY obchodě nese velký sortiment jejich vybavení. -Kontakt Left LTD je vedoucím dodavatelem sady pro ozbrojené síly a průmysl osobní ochrany. -Popis Prosím, posuňte se dolů na konec seznamu pro více obrázků. -Zde máme na prodej použitý chronografický ciferník hodinek Longines. -Ciferník je černé barvy s bílými značkami a otvorem pro datum v dolním sub ciferníku. -Ciferník je ve velmi dobrém, ne-li novém starém skladu, stavu. -Zadní strana ciferníku není označena. -Ciferník měří 37mm v průměru a ciferníkové nohy jsou přibližně na 7 a 37. -Podívejte se na obrázky pro více podrobností. -Ciferník je zaručen pravý. -Platba se očekává do 5 pracovních dnů. -Přijímáme platbu pomocí Paypal, Bankovní převod nebo platbu při sběru. -Nemáme možnost přímo přijímat kreditní nebo debetní karty, ale tyto jsou přijatelné prostřednictvím Paypalu. 
-V některých případech můžeme přijmout pouze bankovní převod, například pro mezinárodní transakci, kde má kupující velmi nízkou nebo žádnou zpětnou vazbu. -Pro domácí dopravu používáme 3 různé typy. -Zadané možnosti se liší v závislosti na aukci. -Normálně používáme Royal Mail první třídy zaznamenané pro balíky do hodnoty £40 a Royal Mail speciální dodávku pro položky nad hodnotou £40. -Úrovně kompenzace za speciální doručení jsou 500 liber, 1000 liber a 2500 liber a my budeme pokrývat vaši zásilku odpovídající částkou, pokud je tento servis použit. -Třetí službu, kterou používáme ve Velké Británii, je kurýrní doručení, které bude obvykle Citylink do 5.30 hodin následující den. -Tento servis používáme pouze pro těžké nebo objemné položky. -Pro mezinárodní dopravu používáme 2 různé metody. -Hlavní způsob doručení je Royal Mail mezinárodní podepsaný. -Toto je služba, která vyžaduje podpis při doručení, ale je sledována pouze v rámci Velké Británie. -Nicméně potvrzení o doručení je k dispozici online. -Maximální úroveň náhrady za tuto službu je 500 liber a časy dodání se liší podle destinace. -K dispozici také za příplatek, pokud je to požadováno, jsou mezinárodní dodávky na druhý den prostřednictvím FEDEX globálního expresu. -Toto je na základě pouze citace a musíte nám poskytnout vaši adresu pro vypracování nabídky. -Maximální úroveň kompenzace na této službě je 1000 dolarů Obchodní podmínky. -Všechny prodeje jsou konečné a očekáváme platbu do 5 pracovních dnů. -Nabízíme 30denní politiku vrácení peněz pro položky, pokud jsou obdrženy zpět ve stejném stavu, ve kterém byly odeslány, se všemi původními obalovými materiály a nebyly poškozeny. -Vyhrazujeme si právo stanovit omezení platebních podmínek pro zboží zasílané do určitých mezinárodních destinací, jako jsou ty, kde je vysoké riziko podvodu. -Prodáváme zde na eBay již více než deset let, nabízíme vysoce kvalitní zboží za skvělé ceny. -Kupujeme i prodáváme prémiové značky hodinek online i offline a všechny naše hodinky jsou kontrolovány hodináři vyškolenými programem WOSTEP (Watches of Switzerland Training Enterprise Program). -Tam, kde je uvedeno, budou hodinky dodány s mechanickou zárukou. -Záruka nezahrnuje nesprávné používání nebo zneužívání hodinek a doporučuje se, aby byly všechny vintage hodinky před ponořením testovány na odolnost vůči vodě. -Pokud si přejete kontaktovat nás, můžete tak učinit pomocí tlačítka kontaktovat prodejce na seznamu. -Vždy nás zajímá slyšet od nových dodavatelů a můžeme také poskytnout velkoobchodní ceny na některé položky, které prodáváme kromě hodinek. -Jsme hrdí na to, že jsme nezávislí a nejsme sponzorováni, schváleni ani podporováni žádnou značkou, kterou prodáváme, včetně Rolexu. -Vážíme si naší zpětné vazby, protože si myslíme, že to hodně říká o tom, jak se staráme o zákazníky. -Vždy zanecháváme zpětnou vazbu našim zákazníkům po obdržení zpětné vazby sami pro položku, protože nám to umožňuje vědět, že byla zakoupena položka a že zákazník je s ní spokojen. -Pokud však nejste v žádném ohledu spokojeni, dejte nám prosím vědět před odchodem zpětnou vazbu, abychom mohli zkusit napravit jakékoli problémy. -Získejte Supersized Images & Free Image Hosting -Pozor Prodávající - Získejte šablony Hosting obrázků, plánování na Auctiva.com. -Sledujte počet zobrazení stránek s bezplatným počítadlem Auctiva. -Joe Biden lituje selhání v zastavení globálního oteplování po smrtelných tornádech. 
-President Joe Biden on Saturday lamented that the world has failed to stop global warming as he addressed the deadly tornadoes that tore through several states.
-We all know that everything is more intense when the climate is warming.
-Everything," he said.
-"And obviously it has some impact here."
-At least 30 tornadoes were reported across six different states, causing widespread destruction, and more than 100 people are expected to have been killed by the storm.
-The president said he did not know the full extent of global warming's contribution to the deadly storms, which he called one of the "largest tornado outbreaks in history".
-He said he would ask the Environmental Protection Agency to investigate.
-"All that I know is that the intensity of the weather across the board has some impact as a consequence of the warming of the planet," Biden said.
-The president praised the reporter who asked him about climate change.
-"As usual, you always ask the best question," he said with a wry laugh.
-"How are we going to handle it?" he continued.
-"Part of it is acknowledging that the likelihood of fewer weather catastrophes, without continuing to fight global warming, is just not going to happen."
-Biden said he had been shocked by the country's record wildfires during 2021, expressing concern that global warming was a major contributor.
-"So we have to act," he said.
-Biden said the first step is to save lives and care for the families affected by the storms.
-I promise you.
-Whatever it takes.
-Whatever it takes, the federal government is going to supply it, Biden said.
-He said he would continue to monitor the storm recovery closely and would do whatever is needed from the federal government.
-"I want folks in all these states to know.
-We will get through this.
-We will get through this together, and the federal government is not going to walk away," he said.
-"This is one of those times when we aren't Democrats or Republicans."
-The president said he would visit the storm-hit areas once it was clear he would not get in the way of local rescue efforts.
-"I plan on going," he said.
-Norton Security Deluxe includes access to online expert help from certified Norton technicians.
-Should you need help at any time, our customer support agents are ready to assist you 24 hours a day, seven days a week.
-To be eligible for the Virus Protection Promise, you must purchase, renew or upgrade your Norton subscription directly from Symantec, or enrol in the Norton Automatic Renewal service.
-If a Symantec service representative is unable to remove a virus from your device, you may receive a full refund of the actual price paid for the Norton subscription, or, if it is a Norton bundle, the total price paid for the Norton bundle (net of any discounts or refunds received and less any shipping, handling and applicable taxes, except in certain states and countries where shipping, handling and taxes are refundable), and only for the current paid subscription service or subscription bundle.
-The Norton subscription must be installed and activated on your device before it is infected by a virus.
-The refund does NOT apply to any damage caused by viruses.
-See the Norton website for further details.
-Protect what matters with the top-rated security service.
-Your online life and your real life are merging into one seamless experience, and you need security that can protect against viruses, identity theft and other digital threats before they become real headaches.
-We see more, analyse more and stop more online threats.
-'Interview with the Vampire' author Anne Rice has died at the age of 80. 
-Zemřela kvůli komplikacím vyplývajícím z mrtvice, řekl Christopher Rice. -Největším úspěchem Rice bylo její první román "Rozhovor s upírem", který byl vydán v roce 1976 a představil postavu upíra Lestata, který byl hlavní postavou ve 13 knihové sérii Kronik, z nichž nejnovější byla vydána v roce 2018. -"Měl jsem představu o Lestatovi jako o muži akce, muži, který může dělat věci, které já nemůžu," řekl Rice v přednášce na Southern Illinois University v roce 2010. -"Rozhovor s upírem" byl zfilmován úspěšným filmem v roce 1994, což pomohlo obnovit zájem o žánr upírů, který pokračoval s televizní sérií "The Vampire Diaries" a filmovou sérií "Twilight". -Ačkoli žila většinu svého života v Kalifornii, Rice byla rodilá z New Orleansu a nastavila mnoho svých příběhů tam, podle jejího webového životopisu. -Syn Rice, Christopher Rice, řekl, že byl u postele své matky, když zemřela. -Anne Rice bude pohřbena v soukromém obřadu v New Orleansu, s veřejnou památnou slavností plánovanou na příští rok, řekl. -Děkuji vám, že jste si našli čas mluvit se mnou dnes a doufám, že jsem dokázal vyřešit vaši otázku. Pokud byste nevadilo, abyste hodnotili naši chatovací konverzaci dnes na základě mých zákaznických dovedností, byl bych vám velmi vděčný. Tlačítko hodnocení lze nalézt v tomto chatu. -Doufám, že máte skvělý den a prosím, vraťte se k nám, pokud budete potřebovat další pomoc. diff --git a/spaces/zhang-wei-jian/docker/node_modules/readdirp/index.js b/spaces/zhang-wei-jian/docker/node_modules/readdirp/index.js deleted file mode 100644 index cf739b2dc5f56a2860667ce8e3f8f7f04ad551d0..0000000000000000000000000000000000000000 --- a/spaces/zhang-wei-jian/docker/node_modules/readdirp/index.js +++ /dev/null @@ -1,287 +0,0 @@ -'use strict'; - -const fs = require('fs'); -const { Readable } = require('stream'); -const sysPath = require('path'); -const { promisify } = require('util'); -const picomatch = require('picomatch'); - -const readdir = promisify(fs.readdir); -const stat = promisify(fs.stat); -const lstat = promisify(fs.lstat); -const realpath = promisify(fs.realpath); - -/** - * @typedef {Object} EntryInfo - * @property {String} path - * @property {String} fullPath - * @property {fs.Stats=} stats - * @property {fs.Dirent=} dirent - * @property {String} basename - */ - -const BANG = '!'; -const RECURSIVE_ERROR_CODE = 'READDIRP_RECURSIVE_ERROR'; -const NORMAL_FLOW_ERRORS = new Set(['ENOENT', 'EPERM', 'EACCES', 'ELOOP', RECURSIVE_ERROR_CODE]); -const FILE_TYPE = 'files'; -const DIR_TYPE = 'directories'; -const FILE_DIR_TYPE = 'files_directories'; -const EVERYTHING_TYPE = 'all'; -const ALL_TYPES = [FILE_TYPE, DIR_TYPE, FILE_DIR_TYPE, EVERYTHING_TYPE]; - -const isNormalFlowError = error => NORMAL_FLOW_ERRORS.has(error.code); -const [maj, min] = process.versions.node.split('.').slice(0, 2).map(n => Number.parseInt(n, 10)); -const wantBigintFsStats = process.platform === 'win32' && (maj > 10 || (maj === 10 && min >= 5)); - -const normalizeFilter = filter => { - if (filter === undefined) return; - if (typeof filter === 'function') return filter; - - if (typeof filter === 'string') { - const glob = picomatch(filter.trim()); - return entry => glob(entry.basename); - } - - if (Array.isArray(filter)) { - const positive = []; - const negative = []; - for (const item of filter) { - const trimmed = item.trim(); - if (trimmed.charAt(0) === BANG) { - negative.push(picomatch(trimmed.slice(1))); - } else { - positive.push(picomatch(trimmed)); - } - } - - if (negative.length > 0) { - if (positive.length > 0) { - return entry => 
- positive.some(f => f(entry.basename)) && !negative.some(f => f(entry.basename)); - } - return entry => !negative.some(f => f(entry.basename)); - } - return entry => positive.some(f => f(entry.basename)); - } -}; - -class ReaddirpStream extends Readable { - static get defaultOptions() { - return { - root: '.', - /* eslint-disable no-unused-vars */ - fileFilter: (path) => true, - directoryFilter: (path) => true, - /* eslint-enable no-unused-vars */ - type: FILE_TYPE, - lstat: false, - depth: 2147483648, - alwaysStat: false - }; - } - - constructor(options = {}) { - super({ - objectMode: true, - autoDestroy: true, - highWaterMark: options.highWaterMark || 4096 - }); - const opts = { ...ReaddirpStream.defaultOptions, ...options }; - const { root, type } = opts; - - this._fileFilter = normalizeFilter(opts.fileFilter); - this._directoryFilter = normalizeFilter(opts.directoryFilter); - - const statMethod = opts.lstat ? lstat : stat; - // Use bigint stats if it's windows and stat() supports options (node 10+). - if (wantBigintFsStats) { - this._stat = path => statMethod(path, { bigint: true }); - } else { - this._stat = statMethod; - } - - this._maxDepth = opts.depth; - this._wantsDir = [DIR_TYPE, FILE_DIR_TYPE, EVERYTHING_TYPE].includes(type); - this._wantsFile = [FILE_TYPE, FILE_DIR_TYPE, EVERYTHING_TYPE].includes(type); - this._wantsEverything = type === EVERYTHING_TYPE; - this._root = sysPath.resolve(root); - this._isDirent = ('Dirent' in fs) && !opts.alwaysStat; - this._statsProp = this._isDirent ? 'dirent' : 'stats'; - this._rdOptions = { encoding: 'utf8', withFileTypes: this._isDirent }; - - // Launch stream with one parent, the root dir. - this.parents = [this._exploreDir(root, 1)]; - this.reading = false; - this.parent = undefined; - } - - async _read(batch) { - if (this.reading) return; - this.reading = true; - - try { - while (!this.destroyed && batch > 0) { - const { path, depth, files = [] } = this.parent || {}; - - if (files.length > 0) { - const slice = files.splice(0, batch).map(dirent => this._formatEntry(dirent, path)); - for (const entry of await Promise.all(slice)) { - if (this.destroyed) return; - - const entryType = await this._getEntryType(entry); - if (entryType === 'directory' && this._directoryFilter(entry)) { - if (depth <= this._maxDepth) { - this.parents.push(this._exploreDir(entry.fullPath, depth + 1)); - } - - if (this._wantsDir) { - this.push(entry); - batch--; - } - } else if ((entryType === 'file' || this._includeAsFile(entry)) && this._fileFilter(entry)) { - if (this._wantsFile) { - this.push(entry); - batch--; - } - } - } - } else { - const parent = this.parents.pop(); - if (!parent) { - this.push(null); - break; - } - this.parent = await parent; - if (this.destroyed) return; - } - } - } catch (error) { - this.destroy(error); - } finally { - this.reading = false; - } - } - - async _exploreDir(path, depth) { - let files; - try { - files = await readdir(path, this._rdOptions); - } catch (error) { - this._onError(error); - } - return { files, depth, path }; - } - - async _formatEntry(dirent, path) { - let entry; - try { - const basename = this._isDirent ? dirent.name : dirent; - const fullPath = sysPath.resolve(sysPath.join(path, basename)); - entry = { path: sysPath.relative(this._root, fullPath), fullPath, basename }; - entry[this._statsProp] = this._isDirent ? 
dirent : await this._stat(fullPath); - } catch (err) { - this._onError(err); - } - return entry; - } - - _onError(err) { - if (isNormalFlowError(err) && !this.destroyed) { - this.emit('warn', err); - } else { - this.destroy(err); - } - } - - async _getEntryType(entry) { - // entry may be undefined, because a warning or an error were emitted - // and the statsProp is undefined - const stats = entry && entry[this._statsProp]; - if (!stats) { - return; - } - if (stats.isFile()) { - return 'file'; - } - if (stats.isDirectory()) { - return 'directory'; - } - if (stats && stats.isSymbolicLink()) { - const full = entry.fullPath; - try { - const entryRealPath = await realpath(full); - const entryRealPathStats = await lstat(entryRealPath); - if (entryRealPathStats.isFile()) { - return 'file'; - } - if (entryRealPathStats.isDirectory()) { - const len = entryRealPath.length; - if (full.startsWith(entryRealPath) && full.substr(len, 1) === sysPath.sep) { - const recursiveError = new Error( - `Circular symlink detected: "${full}" points to "${entryRealPath}"` - ); - recursiveError.code = RECURSIVE_ERROR_CODE; - return this._onError(recursiveError); - } - return 'directory'; - } - } catch (error) { - this._onError(error); - } - } - } - - _includeAsFile(entry) { - const stats = entry && entry[this._statsProp]; - - return stats && this._wantsEverything && !stats.isDirectory(); - } -} - -/** - * @typedef {Object} ReaddirpArguments - * @property {Function=} fileFilter - * @property {Function=} directoryFilter - * @property {String=} type - * @property {Number=} depth - * @property {String=} root - * @property {Boolean=} lstat - * @property {Boolean=} bigint - */ - -/** - * Main function which ends up calling readdirRec and reads all files and directories in given root recursively. - * @param {String} root Root directory - * @param {ReaddirpArguments=} options Options to specify root (start directory), filters and recursion depth - */ -const readdirp = (root, options = {}) => { - let type = options.entryType || options.type; - if (type === 'both') type = FILE_DIR_TYPE; // backwards-compatibility - if (type) options.type = type; - if (!root) { - throw new Error('readdirp: root argument is required. Usage: readdirp(root, options)'); - } else if (typeof root !== 'string') { - throw new TypeError('readdirp: root argument must be a string. Usage: readdirp(root, options)'); - } else if (type && !ALL_TYPES.includes(type)) { - throw new Error(`readdirp: Invalid type passed. 
Use one of ${ALL_TYPES.join(', ')}`); - } - - options.root = root; - return new ReaddirpStream(options); -}; - -const readdirpPromise = (root, options = {}) => { - return new Promise((resolve, reject) => { - const files = []; - readdirp(root, options) - .on('data', entry => files.push(entry)) - .on('end', () => resolve(files)) - .on('error', error => reject(error)); - }); -}; - -readdirp.promise = readdirpPromise; -readdirp.ReaddirpStream = ReaddirpStream; -readdirp.default = readdirp; - -module.exports = readdirp; diff --git a/spaces/zhuowen999/vits_chinese/commons.py b/spaces/zhuowen999/vits_chinese/commons.py deleted file mode 100644 index 21b446b6bd4dee16cbfbd26fb97d69110b410350..0000000000000000000000000000000000000000 --- a/spaces/zhuowen999/vits_chinese/commons.py +++ /dev/null @@ -1,163 +0,0 @@ -import math -import numpy as np -import torch -from torch import nn -from torch.nn import functional as F - - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size * dilation - dilation) / 2) - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def intersperse(lst, item): - result = [item] * (len(lst) * 2 + 1) - result[1::2] = lst - return result - - -def kl_divergence(m_p, logs_p, m_q, logs_q): - """KL(P||Q)""" - kl = (logs_q - logs_p) - 0.5 - kl += ( - 0.5 * (torch.exp(2.0 * logs_p) + ((m_p - m_q) ** 2)) * torch.exp(-2.0 * logs_q) - ) - return kl - - -def rand_gumbel(shape): - """Sample from the Gumbel distribution, protect from overflows.""" - uniform_samples = torch.rand(shape) * 0.99998 + 0.00001 - return -torch.log(-torch.log(uniform_samples)) - - -def rand_gumbel_like(x): - g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device) - return g - - -def slice_segments(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, :, idx_str:idx_end] - return ret - - -def rand_slice_segments(x, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size + 1 - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - return ret, ids_str - - -def get_timing_signal_1d(length, channels, min_timescale=1.0, max_timescale=1.0e4): - position = torch.arange(length, dtype=torch.float) - num_timescales = channels // 2 - log_timescale_increment = math.log(float(max_timescale) / float(min_timescale)) / ( - num_timescales - 1 - ) - inv_timescales = min_timescale * torch.exp( - torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment - ) - scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1) - signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0) - signal = F.pad(signal, [0, 0, 0, channels % 2]) - signal = signal.view(1, channels, length) - return signal - - -def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return x + signal.to(dtype=x.dtype, device=x.device) - - -def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1): - b, channels, length = x.size() - signal = 
get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis) - - -def subsequent_mask(length): - mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0) - return mask - - -@torch.jit.script -def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels): - n_channels_int = n_channels[0] - in_act = input_a + input_b - t_act = torch.tanh(in_act[:, :n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, :]) - acts = t_act * s_act - return acts - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def shift_1d(x): - x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1] - return x - - -def sequence_mask(length, max_length=None): - if max_length is None: - max_length = length.max() - x = torch.arange(max_length, dtype=length.dtype, device=length.device) - return x.unsqueeze(0) < length.unsqueeze(1) - - -def generate_path(duration, mask): - """ - duration: [b, 1, t_x] - mask: [b, 1, t_y, t_x] - """ - device = duration.device - - b, _, t_y, t_x = mask.shape - cum_duration = torch.cumsum(duration, -1) - - cum_duration_flat = cum_duration.view(b * t_x) - path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype) - path = path.view(b, t_x, t_y) - path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1] - path = path.unsqueeze(1).transpose(2, 3) * mask - return path - - -def clip_grad_value_(parameters, clip_value, norm_type=2): - if isinstance(parameters, torch.Tensor): - parameters = [parameters] - parameters = list(filter(lambda p: p.grad is not None, parameters)) - norm_type = float(norm_type) - if clip_value is not None: - clip_value = float(clip_value) - - total_norm = 0 - for p in parameters: - param_norm = p.grad.data.norm(norm_type) - total_norm += param_norm.item() ** norm_type - if clip_value is not None: - p.grad.data.clamp_(min=-clip_value, max=clip_value) - total_norm = total_norm ** (1.0 / norm_type) - return total_norm diff --git a/spaces/zideliu/styledrop/timm/models/layers/config.py b/spaces/zideliu/styledrop/timm/models/layers/config.py deleted file mode 100644 index f07b9d782ba0597c174dee81097c28280335fdba..0000000000000000000000000000000000000000 --- a/spaces/zideliu/styledrop/timm/models/layers/config.py +++ /dev/null @@ -1,115 +0,0 @@ -""" Model / Layer Config singleton state -""" -from typing import Any, Optional - -__all__ = [ - 'is_exportable', 'is_scriptable', 'is_no_jit', - 'set_exportable', 'set_scriptable', 'set_no_jit', 'set_layer_config' -] - -# Set to True if prefer to have layers with no jit optimization (includes activations) -_NO_JIT = False - -# Set to True if prefer to have activation layers with no jit optimization -# NOTE not currently used as no difference between no_jit and no_activation jit as only layers obeying -# the jit flags so far are activations. This will change as more layers are updated and/or added. 
-_NO_ACTIVATION_JIT = False - -# Set to True if exporting a model with Same padding via ONNX -_EXPORTABLE = False - -# Set to True if wanting to use torch.jit.script on a model -_SCRIPTABLE = False - - -def is_no_jit(): - return _NO_JIT - - -class set_no_jit: - def __init__(self, mode: bool) -> None: - global _NO_JIT - self.prev = _NO_JIT - _NO_JIT = mode - - def __enter__(self) -> None: - pass - - def __exit__(self, *args: Any) -> bool: - global _NO_JIT - _NO_JIT = self.prev - return False - - -def is_exportable(): - return _EXPORTABLE - - -class set_exportable: - def __init__(self, mode: bool) -> None: - global _EXPORTABLE - self.prev = _EXPORTABLE - _EXPORTABLE = mode - - def __enter__(self) -> None: - pass - - def __exit__(self, *args: Any) -> bool: - global _EXPORTABLE - _EXPORTABLE = self.prev - return False - - -def is_scriptable(): - return _SCRIPTABLE - - -class set_scriptable: - def __init__(self, mode: bool) -> None: - global _SCRIPTABLE - self.prev = _SCRIPTABLE - _SCRIPTABLE = mode - - def __enter__(self) -> None: - pass - - def __exit__(self, *args: Any) -> bool: - global _SCRIPTABLE - _SCRIPTABLE = self.prev - return False - - -class set_layer_config: - """ Layer config context manager that allows setting all layer config flags at once. - If a flag arg is None, it will not change the current value. - """ - def __init__( - self, - scriptable: Optional[bool] = None, - exportable: Optional[bool] = None, - no_jit: Optional[bool] = None, - no_activation_jit: Optional[bool] = None): - global _SCRIPTABLE - global _EXPORTABLE - global _NO_JIT - global _NO_ACTIVATION_JIT - self.prev = _SCRIPTABLE, _EXPORTABLE, _NO_JIT, _NO_ACTIVATION_JIT - if scriptable is not None: - _SCRIPTABLE = scriptable - if exportable is not None: - _EXPORTABLE = exportable - if no_jit is not None: - _NO_JIT = no_jit - if no_activation_jit is not None: - _NO_ACTIVATION_JIT = no_activation_jit - - def __enter__(self) -> None: - pass - - def __exit__(self, *args: Any) -> bool: - global _SCRIPTABLE - global _EXPORTABLE - global _NO_JIT - global _NO_ACTIVATION_JIT - _SCRIPTABLE, _EXPORTABLE, _NO_JIT, _NO_ACTIVATION_JIT = self.prev - return False diff --git a/spaces/zomehwh/sovits-tannhauser/modules/ddsp.py b/spaces/zomehwh/sovits-tannhauser/modules/ddsp.py deleted file mode 100644 index b09ac5c5c19d165e75e1780877a857be8c104ed7..0000000000000000000000000000000000000000 --- a/spaces/zomehwh/sovits-tannhauser/modules/ddsp.py +++ /dev/null @@ -1,190 +0,0 @@ -import torch -import torch.nn as nn -from torch.nn import functional as F -import torch.fft as fft -import numpy as np -import librosa as li -import math -from scipy.signal import get_window - - -def safe_log(x): - return torch.log(x + 1e-7) - - -@torch.no_grad() -def mean_std_loudness(dataset): - mean = 0 - std = 0 - n = 0 - for _, _, l in dataset: - n += 1 - mean += (l.mean().item() - mean) / n - std += (l.std().item() - std) / n - return mean, std - - -def multiscale_fft(signal, scales, overlap): - stfts = [] - for s in scales: - S = torch.stft( - signal, - s, - int(s * (1 - overlap)), - s, - torch.hann_window(s).to(signal), - True, - normalized=True, - return_complex=True, - ).abs() - stfts.append(S) - return stfts - - -def resample(x, factor: int): - batch, frame, channel = x.shape - x = x.permute(0, 2, 1).reshape(batch * channel, 1, frame) - - window = torch.hann_window( - factor * 2, - dtype=x.dtype, - device=x.device, - ).reshape(1, 1, -1) - y = torch.zeros(x.shape[0], x.shape[1], factor * x.shape[2]).to(x) - y[..., ::factor] = x - y[..., -1:] = x[..., 
-1:] - y = torch.nn.functional.pad(y, [factor, factor]) - y = torch.nn.functional.conv1d(y, window)[..., :-1] - - y = y.reshape(batch, channel, factor * frame).permute(0, 2, 1) - - return y - - -def upsample(signal, factor): - signal = signal.permute(0, 2, 1) - signal = nn.functional.interpolate(signal, size=signal.shape[-1] * factor) - return signal.permute(0, 2, 1) - - -def remove_above_nyquist(amplitudes, pitch, sampling_rate): - n_harm = amplitudes.shape[-1] - pitches = pitch * torch.arange(1, n_harm + 1).to(pitch) - aa = (pitches < sampling_rate / 2).float() + 1e-4 - return amplitudes * aa - - -def scale_function(x): - return 2 * torch.sigmoid(x) ** (math.log(10)) + 1e-7 - - -def extract_loudness(signal, sampling_rate, block_size, n_fft=2048): - S = li.stft( - signal, - n_fft=n_fft, - hop_length=block_size, - win_length=n_fft, - center=True, - ) - S = np.log(abs(S) + 1e-7) - f = li.fft_frequencies(sampling_rate, n_fft) - a_weight = li.A_weighting(f) - - S = S + a_weight.reshape(-1, 1) - - S = np.mean(S, 0)[..., :-1] - - return S - - -def extract_pitch(signal, sampling_rate, block_size): - length = signal.shape[-1] // block_size - f0 = crepe.predict( - signal, - sampling_rate, - step_size=int(1000 * block_size / sampling_rate), - verbose=1, - center=True, - viterbi=True, - ) - f0 = f0[1].reshape(-1)[:-1] - - if f0.shape[-1] != length: - f0 = np.interp( - np.linspace(0, 1, length, endpoint=False), - np.linspace(0, 1, f0.shape[-1], endpoint=False), - f0, - ) - - return f0 - - -def mlp(in_size, hidden_size, n_layers): - channels = [in_size] + (n_layers) * [hidden_size] - net = [] - for i in range(n_layers): - net.append(nn.Linear(channels[i], channels[i + 1])) - net.append(nn.LayerNorm(channels[i + 1])) - net.append(nn.LeakyReLU()) - return nn.Sequential(*net) - - -def gru(n_input, hidden_size): - return nn.GRU(n_input * hidden_size, hidden_size, batch_first=True) - - -def harmonic_synth(pitch, amplitudes, sampling_rate): - n_harmonic = amplitudes.shape[-1] - omega = torch.cumsum(2 * math.pi * pitch / sampling_rate, 1) - omegas = omega * torch.arange(1, n_harmonic + 1).to(omega) - signal = (torch.sin(omegas) * amplitudes).sum(-1, keepdim=True) - return signal - - -def amp_to_impulse_response(amp, target_size): - amp = torch.stack([amp, torch.zeros_like(amp)], -1) - amp = torch.view_as_complex(amp) - amp = fft.irfft(amp) - - filter_size = amp.shape[-1] - - amp = torch.roll(amp, filter_size // 2, -1) - win = torch.hann_window(filter_size, dtype=amp.dtype, device=amp.device) - - amp = amp * win - - amp = nn.functional.pad(amp, (0, int(target_size) - int(filter_size))) - amp = torch.roll(amp, -filter_size // 2, -1) - - return amp - - -def fft_convolve(signal, kernel): - signal = nn.functional.pad(signal, (0, signal.shape[-1])) - kernel = nn.functional.pad(kernel, (kernel.shape[-1], 0)) - - output = fft.irfft(fft.rfft(signal) * fft.rfft(kernel)) - output = output[..., output.shape[-1] // 2:] - - return output - - -def init_kernels(win_len, win_inc, fft_len, win_type=None, invers=False): - if win_type == 'None' or win_type is None: - window = np.ones(win_len) - else: - window = get_window(win_type, win_len, fftbins=True) # **0.5 - - N = fft_len - fourier_basis = np.fft.rfft(np.eye(N))[:win_len] - real_kernel = np.real(fourier_basis) - imag_kernel = np.imag(fourier_basis) - kernel = np.concatenate([real_kernel, imag_kernel], 1).T - - if invers: - kernel = np.linalg.pinv(kernel).T - - kernel = kernel * window - kernel = kernel[:, None, :] - return torch.from_numpy(kernel.astype(np.float32)), 
torch.from_numpy(window[None, :, None].astype(np.float32)) -