diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Anti Deep Freeze V6 61 020 2822 TOP.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Anti Deep Freeze V6 61 020 2822 TOP.md deleted file mode 100644 index f7e4ee2f57a962814bdd5573431f6b8458e1fdc1..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Anti Deep Freeze V6 61 020 2822 TOP.md +++ /dev/null @@ -1,20 +0,0 @@ -
-

How to Use Anti Deep Freeze V6 61 020 2822 to Unfreeze Your Computer

-

Deep Freeze is software that protects your computer from unwanted changes by restoring it to a frozen state every time you reboot. However, you may sometimes need to make permanent changes to your system or access files stored on the frozen drive. In that case, you can use Anti Deep Freeze V6 61 020 2822, a tool that can disable Deep Freeze and unfreeze your computer without requiring a password.

-

anti deep freeze v6 61 020 2822


DOWNLOAD: https://byltly.com/2uKyww



-

Anti Deep Freeze V6 61 020 2822 is compatible with Windows XP, Windows Vista, and Windows 7 (32 or 64 bit), and requires 10% free hard drive space[^1^]. It works with Deep Freeze Standard 6.61.020.2822, the edition of Deep Freeze used to protect selected computers from malware and accidental changes[^1^]. To use Anti Deep Freeze V6 61 020 2822, follow these steps:

-
  1. Download Anti Deep Freeze V6 61 020 2822 from a reliable source. You can find it on some websites or online platforms that offer software downloads[^2^] [^3^] [^4^]. Make sure you scan the file for viruses before opening it.
  2. Run Anti Deep Freeze V6 61 020 2822 as an administrator. You will see a window with a list of drives that are frozen by Deep Freeze. Select the drive that you want to unfreeze and click on "Unfreeze".
  3. Wait for the process to complete. You will see a message that says "Unfreeze Successful". Click on "OK" and restart your computer.
  4. After rebooting, you will notice that your computer is no longer frozen by Deep Freeze. You can now make any changes or access any files that you want. However, be careful not to delete or modify any important system files or settings.
  5. If you want to freeze your computer again, you can run Deep Freeze Standard 6.61.020.2822 and enable it on the drive that you want to protect. You will need to enter a password to do so.
-

Anti Deep Freeze V6 61 020 2822 is a handy tool that can help you unfreeze your computer when you need to. However, it should be used with caution and only when necessary. Deep Freeze is a useful program that can prevent your computer from being damaged by viruses, malware, or unwanted changes. Therefore, you should always keep it enabled unless you have a valid reason to disable it.

- -

Some of the benefits of using Deep Freeze are that it can save you time and money by reducing the need for IT support and maintenance. It can also improve your security and privacy by preventing unauthorized access to your data and files. Moreover, it can enhance your productivity and performance by ensuring that your computer always runs smoothly and efficiently.

-

However, there are also some drawbacks of using Deep Freeze that you should be aware of. For example, it can prevent you from installing new software or updates that may be beneficial for your system. It can also erase any personal files or settings that you may have saved on the frozen drive. Furthermore, it can cause some problems if you forget your password or lose the tool that can disable it.

-

-

Therefore, you should always use Deep Freeze with care and responsibility. You should only freeze the drives that contain your system files and applications, and leave some space for your personal files on a separate drive or partition. You should also back up your important data regularly and keep a record of your password and of the tool that can unfreeze your computer. Finally, you should only use Anti Deep Freeze V6 61 020 2822 when you absolutely need to, and not abuse it for malicious purposes.

cec2833e83
-
-
\ No newline at end of file diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Bluedio Bluetooth Headset Driver Windows 7l ((LINK)).md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Bluedio Bluetooth Headset Driver Windows 7l ((LINK)).md deleted file mode 100644 index b5a2d0913117bac1054ee8b8ad160906f7e52507..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Bluedio Bluetooth Headset Driver Windows 7l ((LINK)).md +++ /dev/null @@ -1,42 +0,0 @@ - -

How to Install Bluedio Bluetooth Headset Driver on Windows 7

-

If you have a Bluedio Bluetooth headset and want to use it with your Windows 7 computer, you may need to install the driver for it. A driver is a software program that allows your operating system to communicate with a Bluetooth device. Without the driver, your headset may not work properly or at all.

-

Bluedio Bluetooth Headset Driver Windows 7l


Download: https://byltly.com/2uKxZ6



-

In this article, we will show you how to download and install the Bluedio Bluetooth headset driver on Windows 7. We will also provide some troubleshooting tips in case you encounter any problems.

-

Step 1: Download the Driver

-

The first step is to download the driver for your Bluedio Bluetooth headset. You can find the driver on the official website of Bluedio or on a third-party website that offers drivers for various devices. For example, you can use the link below to download the driver from Dell:

-

Download and Install The Latest Wireless Bluetooth Driver | Dell US

-

-

Alternatively, you can use a driver update tool that can automatically scan your computer and find the best driver for your headset. This can save you the time and hassle of searching for the right driver manually.

-

Step 2: Install the Driver

-

Once you have downloaded the driver, you need to install it on your computer. To do this, follow these steps:

-
  1. Double-click on the downloaded file to launch the installation wizard.
  2. Follow the on-screen instructions to complete the installation process.
  3. Restart your computer if prompted.
-

After installing the driver, you should be able to use your Bluedio Bluetooth headset with your Windows 7 computer.
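If you manage several machines and prefer to script the installation instead of clicking through the wizard, a sketch like the one below stages an .inf driver package with Windows' built-in pnputil tool. The driver path and file name here are placeholders, and the Bluedio download may ship as a self-contained setup.exe rather than a raw .inf package, in which case you would simply run that installer and this snippet does not apply.

```python
import subprocess
import sys

# Hypothetical location of the unpacked driver package -- adjust this to
# wherever you extracted the downloaded archive. If the vendor provides a
# setup.exe instead of a raw .inf file, run that installer instead.
DRIVER_INF = r"C:\Drivers\Bluedio\bth_headset.inf"

def install_driver(inf_path):
    """Add and install a driver package with pnputil (run from an elevated prompt)."""
    # "pnputil -i -a <inf>" adds the package to the Windows driver store
    # and installs it; pnputil ships with Windows 7 and later.
    result = subprocess.run(
        ["pnputil", "-i", "-a", inf_path],
        capture_output=True,
        text=True,
    )
    print(result.stdout)
    if result.returncode != 0:
        print(result.stderr, file=sys.stderr)
    return result.returncode

if __name__ == "__main__":
    sys.exit(install_driver(DRIVER_INF))
```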

-

Step 3: Pair the Headset with the Computer

-

The final step is to pair your headset with your computer. This means establishing a wireless connection between them so that they can communicate with each other. To do this, follow these steps:

-
  1. Turn on your Bluedio Bluetooth headset and make sure it is in pairing mode. You can usually do this by pressing and holding a button on the headset until you hear a beep or see a flashing light.
  2. On your Windows 7 computer, click on the Start button and then click on Devices and Printers.
  3. Click on Add a Device and wait for your computer to scan for nearby Bluetooth devices.
  4. Select your Bluedio Bluetooth headset from the list of devices and click on Next.
  5. If prompted, enter a passcode or confirm a pairing request on your headset and/or computer.
  6. Click on Finish to complete the pairing process.
-

Once paired, you should be able to use your Bluedio Bluetooth headset as an audio device on your Windows 7 computer. You can adjust the volume, mute, or switch between different audio sources using the controls on your headset or computer.
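If you want to double-check from a script that Windows has registered an audio device after pairing, a small sketch like the following asks WMI (via the wmic command that ships with Windows 7) for the known sound devices. Whether the headset actually appears in this list depends on the Bluetooth audio driver, so treat a missing entry as a hint to revisit the pairing steps rather than proof of failure.

```python
import subprocess

def list_sound_devices():
    """Return the sound devices Windows reports via WMI (Win32_SoundDevice)."""
    output = subprocess.check_output(
        ["wmic", "sounddev", "get", "Name,Status"],
        text=True,
    )
    # The first non-empty line is the column header; the rest are devices.
    lines = [line.strip() for line in output.splitlines() if line.strip()]
    return lines[1:]

if __name__ == "__main__":
    for device in list_sound_devices():
        print(device)
```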

-

Troubleshooting Tips

-

If you encounter any problems while installing or using your Bluedio Bluetooth headset on Windows 7, here are some tips that may help you:

- -

Examples of molecular properties and phenomena that can be predicted by Gaussian 09

-

Gaussian 09 can help you predict various molecular properties and phenomena that can be useful for your research or application. Some examples are:

- -

Limitations and Drawbacks of Gaussian 09 Torrent 1357

-

Accuracy and reliability issues of Gaussian 09 Torrent 1357

-

Gaussian 09 Torrent 1357 is not perfect software, and it has some limitations and drawbacks that may affect its accuracy and reliability for certain types of calculations or systems. Some examples are:

- -

Legal and ethical implications of using Gaussian 09 Torrent 1357

-

Gaussian 09 Torrent 1357 is not legal software, and using it has legal and ethical implications that may affect your reputation or career as a researcher or user. Some examples are:

- -

Alternatives and competitors of Gaussian 09 Torrent 1357

-

Gaussian 09 Torrent 1357 is not the only software of its kind, and it has alternatives and competitors that may offer similar or better features or benefits for certain types of calculations or systems. Some examples are:

- -

Conclusion

-

In this article, we have told you everything you need to know about Gaussian 09 torrent 1357, including how to download it, how to install it, how to run it, what its features and benefits are, what its limitations and drawbacks are, and what some alternatives and competitors are. We hope that this article has been informative and helpful for you.

-

However, we also want to remind you that Gaussian 09 torrent 1357 is not legal or ethical software, and it may expose you to various risks and consequences that may outweigh its advantages. Therefore, we strongly advise you to use Gaussian 09 torrent 1357 with caution and discretion, or better yet, to use a legal and authorized version of Gaussian software or any other electronic structure program that suits your needs and preferences.

-

FAQs

-

Here are some frequently asked questions about Gaussian 09 torrent 1357:

-
  1. Q: What is the difference between Gaussian 09 torrent 1357 and Gaussian 16?
     A: Gaussian 09 torrent 1357 is an illegal file that contains the data of the Gaussian 09 software released in 2013. Gaussian 16 is the latest version of Gaussian software, released in 2016. Gaussian 16 introduces several new features and improvements over Gaussian 09, such as enhanced performance, accuracy, functionality, compatibility, usability, documentation and support.
  2. Q: How can I get a license for Gaussian software?
     A: You can get a license for Gaussian software by contacting Gaussian Inc., the original creator and owner of Gaussian software. You can visit their website at www.gaussian.com for more information about their products and services. You can also check if your institution or organization has a site license for Gaussian software that you can use.
  3. Q: How can I learn more about Gaussian software?
     A: You can learn more about Gaussian software by visiting their website at www.gaussian.com or by reading their manuals and publications. You can also find many tutorials and examples online that can help you learn how to use Gaussian software for various types of calculations and systems.
  4. Q: How can I cite Gaussian software in my research?
     A: You can cite Gaussian software in your research by using the following format: M. J. Frisch et al., "Gaussian XX", Wallingford CT: Gaussian Inc., YYYY (where XX is the version number and YYYY is the year of release). You can also include the specific citation for the methods or models that you used in your calculation from the output file or from the website www.gaussian.com/citation.
  5. Q: How can I get help or support for Gaussian software?
     A: You can get help or support for Gaussian software by contacting their technical support team at support@gaussian.com or by visiting their website at www.gaussian.com/support. You can also find many resources online that can help you solve your problems or answer your questions about Gaussian software.
-

0a6ba089eb
-
-
\ No newline at end of file diff --git a/spaces/raskell/livebook/README.md b/spaces/raskell/livebook/README.md deleted file mode 100644 index 7b12495942e63525fa13b91ef4673911e7b3cb26..0000000000000000000000000000000000000000 --- a/spaces/raskell/livebook/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: Livebook -emoji: 📓 -colorFrom: pink -colorTo: purple -sdk: docker -fullWidth: true -duplicated_from: livebook-dev/livebook ---- - -You can install and run [Livebook](https://livebook.dev/) inside a Hugging Face Space. Here's [a tutorial](https://huggingface.co/docs/hub/spaces-sdks-docker-livebook) on how to do that. \ No newline at end of file diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/A Bugs Life Pc Game Crack.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/A Bugs Life Pc Game Crack.md deleted file mode 100644 index 0b964b3206348b7555b21a323f137063a14913cb..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/A Bugs Life Pc Game Crack.md +++ /dev/null @@ -1,6 +0,0 @@ -

a bugs life pc game crack


Download Zip === https://urlgoal.com/2uCJKg



- -Patches and workarounds made by me: unofficial bug fixes of the bugs I found ... Half-Life x.1.1.1e (Windows and Linux) hlfreeze/hl-headnut/ ... 1fdad05405
-
-
-

diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/HD Online Player (video Ngentot Sama Ibu Kandung 3gp).md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/HD Online Player (video Ngentot Sama Ibu Kandung 3gp).md deleted file mode 100644 index 060970ee6b5082d114c6bd1a75861f6638cee70a..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/HD Online Player (video Ngentot Sama Ibu Kandung 3gp).md +++ /dev/null @@ -1,6 +0,0 @@ -

HD Online Player (video ngentot sama ibu kandung 3gp)


Download >>> https://urlgoal.com/2uCM4D



-
- 3cee63e6c2
-
-
-

diff --git a/spaces/rfrossard/Image-and-3D-Model-Creator/PIFu/lib/data/EvalDataset.py b/spaces/rfrossard/Image-and-3D-Model-Creator/PIFu/lib/data/EvalDataset.py deleted file mode 100644 index ad42b46459aa099ed48780b5cff0cb9099f82b71..0000000000000000000000000000000000000000 --- a/spaces/rfrossard/Image-and-3D-Model-Creator/PIFu/lib/data/EvalDataset.py +++ /dev/null @@ -1,166 +0,0 @@ -from torch.utils.data import Dataset -import numpy as np -import os -import random -import torchvision.transforms as transforms -from PIL import Image, ImageOps -import cv2 -import torch -from PIL.ImageFilter import GaussianBlur -import trimesh -import cv2 - - -class EvalDataset(Dataset): - @staticmethod - def modify_commandline_options(parser): - return parser - - def __init__(self, opt, root=None): - self.opt = opt - self.projection_mode = 'orthogonal' - - # Path setup - self.root = self.opt.dataroot - if root is not None: - self.root = root - self.RENDER = os.path.join(self.root, 'RENDER') - self.MASK = os.path.join(self.root, 'MASK') - self.PARAM = os.path.join(self.root, 'PARAM') - self.OBJ = os.path.join(self.root, 'GEO', 'OBJ') - - self.phase = 'val' - self.load_size = self.opt.loadSize - - self.num_views = self.opt.num_views - - self.max_view_angle = 360 - self.interval = 1 - self.subjects = self.get_subjects() - - # PIL to tensor - self.to_tensor = transforms.Compose([ - transforms.Resize(self.load_size), - transforms.ToTensor(), - transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)) - ]) - - def get_subjects(self): - var_file = os.path.join(self.root, 'val.txt') - if os.path.exists(var_file): - var_subjects = np.loadtxt(var_file, dtype=str) - return sorted(list(var_subjects)) - all_subjects = os.listdir(self.RENDER) - return sorted(list(all_subjects)) - - def __len__(self): - return len(self.subjects) * self.max_view_angle // self.interval - - def get_render(self, subject, num_views, view_id=None, random_sample=False): - ''' - Return the render data - :param subject: subject name - :param num_views: how many views to return - :param view_id: the first view_id. If None, select a random one. - :return: - 'img': [num_views, C, W, H] images - 'calib': [num_views, 4, 4] calibration matrix - 'extrinsic': [num_views, 4, 4] extrinsic matrix - 'mask': [num_views, 1, W, H] masks - ''' - # For now we only have pitch = 00. 
Hard code it here - pitch = 0 - # Select a random view_id from self.max_view_angle if not given - if view_id is None: - view_id = np.random.randint(self.max_view_angle) - # The ids are an even distribution of num_views around view_id - view_ids = [(view_id + self.max_view_angle // num_views * offset) % self.max_view_angle - for offset in range(num_views)] - if random_sample: - view_ids = np.random.choice(self.max_view_angle, num_views, replace=False) - - calib_list = [] - render_list = [] - mask_list = [] - extrinsic_list = [] - - for vid in view_ids: - param_path = os.path.join(self.PARAM, subject, '%d_%02d.npy' % (vid, pitch)) - render_path = os.path.join(self.RENDER, subject, '%d_%02d.jpg' % (vid, pitch)) - mask_path = os.path.join(self.MASK, subject, '%d_%02d.png' % (vid, pitch)) - - # loading calibration data - param = np.load(param_path) - # pixel unit / world unit - ortho_ratio = param.item().get('ortho_ratio') - # world unit / model unit - scale = param.item().get('scale') - # camera center world coordinate - center = param.item().get('center') - # model rotation - R = param.item().get('R') - - translate = -np.matmul(R, center).reshape(3, 1) - extrinsic = np.concatenate([R, translate], axis=1) - extrinsic = np.concatenate([extrinsic, np.array([0, 0, 0, 1]).reshape(1, 4)], 0) - # Match camera space to image pixel space - scale_intrinsic = np.identity(4) - scale_intrinsic[0, 0] = scale / ortho_ratio - scale_intrinsic[1, 1] = -scale / ortho_ratio - scale_intrinsic[2, 2] = -scale / ortho_ratio - # Match image pixel space to image uv space - uv_intrinsic = np.identity(4) - uv_intrinsic[0, 0] = 1.0 / float(self.opt.loadSize // 2) - uv_intrinsic[1, 1] = 1.0 / float(self.opt.loadSize // 2) - uv_intrinsic[2, 2] = 1.0 / float(self.opt.loadSize // 2) - # Transform under image pixel space - trans_intrinsic = np.identity(4) - - mask = Image.open(mask_path).convert('L') - render = Image.open(render_path).convert('RGB') - - intrinsic = np.matmul(trans_intrinsic, np.matmul(uv_intrinsic, scale_intrinsic)) - calib = torch.Tensor(np.matmul(intrinsic, extrinsic)).float() - extrinsic = torch.Tensor(extrinsic).float() - - mask = transforms.Resize(self.load_size)(mask) - mask = transforms.ToTensor()(mask).float() - mask_list.append(mask) - - render = self.to_tensor(render) - render = mask.expand_as(render) * render - - render_list.append(render) - calib_list.append(calib) - extrinsic_list.append(extrinsic) - - return { - 'img': torch.stack(render_list, dim=0), - 'calib': torch.stack(calib_list, dim=0), - 'extrinsic': torch.stack(extrinsic_list, dim=0), - 'mask': torch.stack(mask_list, dim=0) - } - - def get_item(self, index): - # In case of a missing file or IO error, switch to a random sample instead - try: - sid = index % len(self.subjects) - vid = (index // len(self.subjects)) * self.interval - # name of the subject 'rp_xxxx_xxx' - subject = self.subjects[sid] - res = { - 'name': subject, - 'mesh_path': os.path.join(self.OBJ, subject + '.obj'), - 'sid': sid, - 'vid': vid, - } - render_data = self.get_render(subject, num_views=self.num_views, view_id=vid, - random_sample=self.opt.random_multiview) - res.update(render_data) - return res - except Exception as e: - print(e) - return self.get_item(index=random.randint(0, self.__len__() - 1)) - - def __getitem__(self, index): - return self.get_item(index) diff --git a/spaces/riccorl/relik-entity-linking/relik/inference/gerbil.py b/spaces/riccorl/relik-entity-linking/relik/inference/gerbil.py deleted file mode 100644 index 
d4c3f17cacea1d5472de99d1a974ad098585fc20..0000000000000000000000000000000000000000 --- a/spaces/riccorl/relik-entity-linking/relik/inference/gerbil.py +++ /dev/null @@ -1,254 +0,0 @@ -import argparse -import json -import os -import re -import sys -from http.server import BaseHTTPRequestHandler, HTTPServer -from typing import Iterator, List, Optional, Tuple - -from relik.inference.annotator import Relik -from relik.inference.data.objects import RelikOutput - -# sys.path += ['../'] -sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), "../"))) - - -import logging - -logger = logging.getLogger(__name__) - - -class GerbilAlbyManager: - def __init__( - self, - annotator: Optional[Relik] = None, - response_logger_dir: Optional[str] = None, - ) -> None: - self.annotator = annotator - self.response_logger_dir = response_logger_dir - self.predictions_counter = 0 - self.labels_mapping = None - - def annotate(self, document: str): - relik_output: RelikOutput = self.annotator(document) - annotations = [(ss, se, l) for ss, se, l, _ in relik_output.labels] - if self.labels_mapping is not None: - return [ - (ss, se, self.labels_mapping.get(l, l)) for ss, se, l in annotations - ] - return annotations - - def set_mapping_file(self, mapping_file_path: str): - with open(mapping_file_path) as f: - labels_mapping = json.load(f) - self.labels_mapping = {v: k for k, v in labels_mapping.items()} - - def write_response_bundle( - self, - document: str, - new_document: str, - annotations: list, - mapped_annotations: list, - ) -> None: - if self.response_logger_dir is None: - return - - if not os.path.isdir(self.response_logger_dir): - os.mkdir(self.response_logger_dir) - - with open( - f"{self.response_logger_dir}/{self.predictions_counter}.json", "w" - ) as f: - out_json_obj = dict( - document=document, - new_document=new_document, - annotations=annotations, - mapped_annotations=mapped_annotations, - ) - - out_json_obj["span_annotations"] = [ - (ss, se, document[ss:se], label) for (ss, se, label) in annotations - ] - - out_json_obj["span_mapped_annotations"] = [ - (ss, se, new_document[ss:se], label) - for (ss, se, label) in mapped_annotations - ] - - json.dump(out_json_obj, f, indent=2) - - self.predictions_counter += 1 - - -manager = GerbilAlbyManager() - - -def preprocess_document(document: str) -> Tuple[str, List[Tuple[int, int]]]: - pattern_subs = { - "-LPR- ": " (", - "-RPR-": ")", - "\n\n": "\n", - "-LRB-": "(", - "-RRB-": ")", - '","': ",", - } - - document_acc = document - curr_offset = 0 - char2offset = [] - - matchings = re.finditer("({})".format("|".join(pattern_subs)), document) - for span_matching in sorted(matchings, key=lambda x: x.span()[0]): - span_start, span_end = span_matching.span() - span_start -= curr_offset - span_end -= curr_offset - - span_text = document_acc[span_start:span_end] - span_sub = pattern_subs[span_text] - document_acc = document_acc[:span_start] + span_sub + document_acc[span_end:] - - offset = len(span_text) - len(span_sub) - curr_offset += offset - - char2offset.append((span_start + len(span_sub), curr_offset)) - - return document_acc, char2offset - - -def map_back_annotations( - annotations: List[Tuple[int, int, str]], char_mapping: List[Tuple[int, int]] -) -> Iterator[Tuple[int, int, str]]: - def map_char(char_idx: int) -> int: - current_offset = 0 - for offset_idx, offset_value in char_mapping: - if char_idx >= offset_idx: - current_offset = offset_value - else: - break - return char_idx + current_offset - - for ss, se, label in annotations: - yield 
map_char(ss), map_char(se), label - - -def annotate(document: str) -> List[Tuple[int, int, str]]: - new_document, mapping = preprocess_document(document) - logger.info("Mapping: " + str(mapping)) - logger.info("Document: " + str(document)) - annotations = [ - (cs, ce, label.replace(" ", "_")) - for cs, ce, label in manager.annotate(new_document) - ] - logger.info("New document: " + str(new_document)) - mapped_annotations = ( - list(map_back_annotations(annotations, mapping)) - if len(mapping) > 0 - else annotations - ) - - logger.info( - "Annotations: " - + str([(ss, se, document[ss:se], ann) for ss, se, ann in mapped_annotations]) - ) - - manager.write_response_bundle( - document, new_document, mapped_annotations, annotations - ) - - if not all( - [ - new_document[ss:se] == document[mss:mse] - for (mss, mse, _), (ss, se, _) in zip(mapped_annotations, annotations) - ] - ): - diff_mappings = [ - (new_document[ss:se], document[mss:mse]) - for (mss, mse, _), (ss, se, _) in zip(mapped_annotations, annotations) - ] - return None - assert all( - [ - document[mss:mse] == new_document[ss:se] - for (mss, mse, _), (ss, se, _) in zip(mapped_annotations, annotations) - ] - ), (mapped_annotations, annotations) - - return [(cs, ce - cs, label) for cs, ce, label in mapped_annotations] - - -class GetHandler(BaseHTTPRequestHandler): - def do_POST(self): - content_length = int(self.headers["Content-Length"]) - post_data = self.rfile.read(content_length) - self.send_response(200) - self.end_headers() - doc_text = read_json(post_data) - # try: - response = annotate(doc_text) - - self.wfile.write(bytes(json.dumps(response), "utf-8")) - return - - -def read_json(post_data): - data = json.loads(post_data.decode("utf-8")) - # logger.info("received data:", data) - text = data["text"] - # spans = [(int(j["start"]), int(j["length"])) for j in data["spans"]] - return text - - -def parse_args() -> argparse.Namespace: - parser = argparse.ArgumentParser() - parser.add_argument("--relik-model-name", required=True) - parser.add_argument("--responses-log-dir") - parser.add_argument("--log-file", default="logs/logging.txt") - parser.add_argument("--mapping-file") - return parser.parse_args() - - -def main(): - args = parse_args() - - # init manager - manager.response_logger_dir = args.responses_log_dir - # manager.annotator = Relik.from_pretrained(args.relik_model_name) - - print("Debugging, not using you relik model but an hardcoded one.") - manager.annotator = Relik( - question_encoder="riccorl/relik-retriever-aida-blink-pretrain-omniencoder", - document_index="riccorl/index-relik-retriever-aida-blink-pretrain-omniencoder", - reader="relik/reader/models/relik-reader-deberta-base-new-data", - window_size=32, - window_stride=16, - candidates_preprocessing_fn=(lambda x: x.split("")[0].strip()), - ) - - if args.mapping_file is not None: - manager.set_mapping_file(args.mapping_file) - - port = 6654 - server = HTTPServer(("localhost", port), GetHandler) - logger.info(f"Starting server at http://localhost:{port}") - - # Create a file handler and set its level - file_handler = logging.FileHandler(args.log_file) - file_handler.setLevel(logging.DEBUG) - - # Create a log formatter and set it on the handler - formatter = logging.Formatter( - "%(asctime)s - %(name)s - %(levelname)s - %(message)s" - ) - file_handler.setFormatter(formatter) - - # Add the file handler to the logger - logger.addHandler(file_handler) - - try: - server.serve_forever() - except KeyboardInterrupt: - exit(0) - - -if __name__ == "__main__": - main() diff --git 
a/spaces/richardzhangy26/yandian_flow_classification/configs/liteflownet/liteflownet_8x1_500k_flyingthings3d_subset_384x768.py b/spaces/richardzhangy26/yandian_flow_classification/configs/liteflownet/liteflownet_8x1_500k_flyingthings3d_subset_384x768.py deleted file mode 100644 index fdd64364a44d1fab44473604462ef6f89838bdf5..0000000000000000000000000000000000000000 --- a/spaces/richardzhangy26/yandian_flow_classification/configs/liteflownet/liteflownet_8x1_500k_flyingthings3d_subset_384x768.py +++ /dev/null @@ -1,17 +0,0 @@ -_base_ = [ - '../_base_/models/liteflownet/liteflownet.py', - '../_base_/datasets/flyingthings3d_subset_384x768.py', - '../_base_/default_runtime.py' -] - -optimizer = dict(type='Adam', lr=3e-6, weight_decay=0.0004, betas=(0.9, 0.999)) -optimizer_config = dict(grad_clip=None) -# learning policy -lr_config = dict( - policy='step', by_epoch=False, gamma=0.5, step=[200000, 300000, 400000]) -runner = dict(type='IterBasedRunner', max_iters=500000) -checkpoint_config = dict(by_epoch=False, interval=50000) -evaluation = dict(interval=50000, metric='EPE') - -# Train on FlyingChairs and finetune on FlyingThings3D_subset -load_from = 'https://download.openmmlab.com/mmflow/liteflownet/liteflownet_pre_M2S2R2_8x1_flyingchairs_320x448.pth' # noqa diff --git a/spaces/robinhad/ukrainian-stt/deepspeech/README.md b/spaces/robinhad/ukrainian-stt/deepspeech/README.md deleted file mode 100644 index 9b79cac9d5c7c0ceaa78bd37e4d1439bffb239cd..0000000000000000000000000000000000000000 --- a/spaces/robinhad/ukrainian-stt/deepspeech/README.md +++ /dev/null @@ -1,32 +0,0 @@ -# How to prepare dataset for training - -1. Download Ukrainian dataset from [https://github.com/egorsmkv/speech-recognition-uk](https://github.com/egorsmkv/speech-recognition-uk). -2. Delete Common Voice folder in dataset -3. Download [import_ukrainian.py](scripts/import_ukrainian.py) and put into DeepSpeech/bin folder. -4. Run import script -5. Download Common Voice 6.1 Ukrainian dataset -6. Convert to DeepSpeech format -7. Merge train.csv from dataset and from DeepSpeech into one file -8. Put CV files into dataset files folder -9. Put dev.csv and test.csv into folder - -Note: you can also specify dataset with "," e.g. dataset1/train.csv,dataset2/train.csv. - -You have a reproducible dataset! - - -# Scorer - -1. Refer to DeepSpeech guide for further explanations. - -2. Generate scorer package. -``` -python3 generate_lm.py --input_txt ../../../voice-recognition-ua/data/all_text.txt --output_dir . \ - --top_k 500000 --kenlm_bins ../../../voice-recognition-ua/kenlm/build/bin \ - --arpa_order 5 --max_arpa_memory "85%" --arpa_prune "0|0|1" \ - --binary_a_bits 255 --binary_q_bits 8 --binary_type trie -``` -3. Run lm_optimizer to find the best scorer value. -4. Rerun step 2 to generate new scorer. - -Caution: scorer is very model-dependant, so you'll likely need to adjust it to each model. \ No newline at end of file diff --git a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/models/plugins/pixel_decoder.py b/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/models/plugins/pixel_decoder.py deleted file mode 100644 index 537a187dc5c53279afff377c548e224ac092de69..0000000000000000000000000000000000000000 --- a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/models/plugins/pixel_decoder.py +++ /dev/null @@ -1,243 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import torch -import torch.nn as nn -import torch.nn.functional as F -from mmcv.cnn import PLUGIN_LAYERS, Conv2d, ConvModule, caffe2_xavier_init -from mmcv.cnn.bricks.transformer import (build_positional_encoding, - build_transformer_layer_sequence) -from mmcv.runner import BaseModule, ModuleList - - -@PLUGIN_LAYERS.register_module() -class PixelDecoder(BaseModule): - """Pixel decoder with a structure like fpn. - - Args: - in_channels (list[int] | tuple[int]): Number of channels in the - input feature maps. - feat_channels (int): Number channels for feature. - out_channels (int): Number channels for output. - norm_cfg (:obj:`mmcv.ConfigDict` | dict): Config for normalization. - Defaults to dict(type='GN', num_groups=32). - act_cfg (:obj:`mmcv.ConfigDict` | dict): Config for activation. - Defaults to dict(type='ReLU'). - encoder (:obj:`mmcv.ConfigDict` | dict): Config for transorformer - encoder.Defaults to None. - positional_encoding (:obj:`mmcv.ConfigDict` | dict): Config for - transformer encoder position encoding. Defaults to - dict(type='SinePositionalEncoding', num_feats=128, - normalize=True). - init_cfg (:obj:`mmcv.ConfigDict` | dict): Initialization config dict. - Default: None - """ - - def __init__(self, - in_channels, - feat_channels, - out_channels, - norm_cfg=dict(type='GN', num_groups=32), - act_cfg=dict(type='ReLU'), - init_cfg=None): - super().__init__(init_cfg=init_cfg) - self.in_channels = in_channels - self.num_inputs = len(in_channels) - self.lateral_convs = ModuleList() - self.output_convs = ModuleList() - self.use_bias = norm_cfg is None - for i in range(0, self.num_inputs - 1): - lateral_conv = ConvModule( - in_channels[i], - feat_channels, - kernel_size=1, - bias=self.use_bias, - norm_cfg=norm_cfg, - act_cfg=None) - output_conv = ConvModule( - feat_channels, - feat_channels, - kernel_size=3, - stride=1, - padding=1, - bias=self.use_bias, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - self.lateral_convs.append(lateral_conv) - self.output_convs.append(output_conv) - - self.last_feat_conv = ConvModule( - in_channels[-1], - feat_channels, - kernel_size=3, - padding=1, - stride=1, - bias=self.use_bias, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - self.mask_feature = Conv2d( - feat_channels, out_channels, kernel_size=3, stride=1, padding=1) - - def init_weights(self): - """Initialize weights.""" - for i in range(0, self.num_inputs - 2): - caffe2_xavier_init(self.lateral_convs[i].conv, bias=0) - caffe2_xavier_init(self.output_convs[i].conv, bias=0) - - caffe2_xavier_init(self.mask_feature, bias=0) - caffe2_xavier_init(self.last_feat_conv, bias=0) - - def forward(self, feats, img_metas): - """ - Args: - feats (list[Tensor]): Feature maps of each level. Each has - shape of (batch_size, c, h, w). - img_metas (list[dict]): List of image information. Pass in - for creating more accurate padding mask. Not used here. - - Returns: - tuple: a tuple containing the following: - - mask_feature (Tensor): Shape (batch_size, c, h, w). - - memory (Tensor): Output of last stage of backbone.\ - Shape (batch_size, c, h, w). 
- """ - y = self.last_feat_conv(feats[-1]) - for i in range(self.num_inputs - 2, -1, -1): - x = feats[i] - cur_feat = self.lateral_convs[i](x) - y = cur_feat + \ - F.interpolate(y, size=cur_feat.shape[-2:], mode='nearest') - y = self.output_convs[i](y) - - mask_feature = self.mask_feature(y) - memory = feats[-1] - return mask_feature, memory - - -@PLUGIN_LAYERS.register_module() -class TransformerEncoderPixelDecoder(PixelDecoder): - """Pixel decoder with transormer encoder inside. - - Args: - in_channels (list[int] | tuple[int]): Number of channels in the - input feature maps. - feat_channels (int): Number channels for feature. - out_channels (int): Number channels for output. - norm_cfg (:obj:`mmcv.ConfigDict` | dict): Config for normalization. - Defaults to dict(type='GN', num_groups=32). - act_cfg (:obj:`mmcv.ConfigDict` | dict): Config for activation. - Defaults to dict(type='ReLU'). - encoder (:obj:`mmcv.ConfigDict` | dict): Config for transorformer - encoder.Defaults to None. - positional_encoding (:obj:`mmcv.ConfigDict` | dict): Config for - transformer encoder position encoding. Defaults to - dict(type='SinePositionalEncoding', num_feats=128, - normalize=True). - init_cfg (:obj:`mmcv.ConfigDict` | dict): Initialization config dict. - Default: None - """ - - def __init__(self, - in_channels, - feat_channels, - out_channels, - norm_cfg=dict(type='GN', num_groups=32), - act_cfg=dict(type='ReLU'), - encoder=None, - positional_encoding=dict( - type='SinePositionalEncoding', - num_feats=128, - normalize=True), - init_cfg=None): - super(TransformerEncoderPixelDecoder, self).__init__( - in_channels, - feat_channels, - out_channels, - norm_cfg, - act_cfg, - init_cfg=init_cfg) - self.last_feat_conv = None - - self.encoder = build_transformer_layer_sequence(encoder) - self.encoder_embed_dims = self.encoder.embed_dims - assert self.encoder_embed_dims == feat_channels, 'embed_dims({}) of ' \ - 'tranformer encoder must equal to feat_channels({})'.format( - feat_channels, self.encoder_embed_dims) - self.positional_encoding = build_positional_encoding( - positional_encoding) - self.encoder_in_proj = Conv2d( - in_channels[-1], feat_channels, kernel_size=1) - self.encoder_out_proj = ConvModule( - feat_channels, - feat_channels, - kernel_size=3, - stride=1, - padding=1, - bias=self.use_bias, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - - def init_weights(self): - """Initialize weights.""" - for i in range(0, self.num_inputs - 2): - caffe2_xavier_init(self.lateral_convs[i].conv, bias=0) - caffe2_xavier_init(self.output_convs[i].conv, bias=0) - - caffe2_xavier_init(self.mask_feature, bias=0) - caffe2_xavier_init(self.encoder_in_proj, bias=0) - caffe2_xavier_init(self.encoder_out_proj.conv, bias=0) - - for p in self.encoder.parameters(): - if p.dim() > 1: - nn.init.xavier_uniform_(p) - - def forward(self, feats, img_metas): - """ - Args: - feats (list[Tensor]): Feature maps of each level. Each has - shape of (batch_size, c, h, w). - img_metas (list[dict]): List of image information. Pass in - for creating more accurate padding mask. - - Returns: - tuple: a tuple containing the following: - - mask_feature (Tensor): shape (batch_size, c, h, w). - - memory (Tensor): shape (batch_size, c, h, w). 
- """ - feat_last = feats[-1] - bs, c, h, w = feat_last.shape - input_img_h, input_img_w = img_metas[0]['batch_input_shape'] - padding_mask = feat_last.new_ones((bs, input_img_h, input_img_w), - dtype=torch.float32) - for i in range(bs): - img_h, img_w, _ = img_metas[i]['img_shape'] - padding_mask[i, :img_h, :img_w] = 0 - padding_mask = F.interpolate( - padding_mask.unsqueeze(1), - size=feat_last.shape[-2:], - mode='nearest').to(torch.bool).squeeze(1) - - pos_embed = self.positional_encoding(padding_mask) - feat_last = self.encoder_in_proj(feat_last) - # (batch_size, c, h, w) -> (num_queries, batch_size, c) - feat_last = feat_last.flatten(2).permute(2, 0, 1) - pos_embed = pos_embed.flatten(2).permute(2, 0, 1) - # (batch_size, h, w) -> (batch_size, h*w) - padding_mask = padding_mask.flatten(1) - memory = self.encoder( - query=feat_last, - key=None, - value=None, - query_pos=pos_embed, - query_key_padding_mask=padding_mask) - # (num_queries, batch_size, c) -> (batch_size, c, h, w) - memory = memory.permute(1, 2, 0).view(bs, self.encoder_embed_dims, h, - w) - y = self.encoder_out_proj(memory) - for i in range(self.num_inputs - 2, -1, -1): - x = feats[i] - cur_feat = self.lateral_convs[i](x) - y = cur_feat + \ - F.interpolate(y, size=cur_feat.shape[-2:], mode='nearest') - y = self.output_convs[i](y) - - mask_feature = self.mask_feature(y) - return mask_feature, memory diff --git a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/models/roi_heads/mask_scoring_roi_head.py b/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/models/roi_heads/mask_scoring_roi_head.py deleted file mode 100644 index 4617988e30abebe9ede13e04dda72632724ce159..0000000000000000000000000000000000000000 --- a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/models/roi_heads/mask_scoring_roi_head.py +++ /dev/null @@ -1,113 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch - -from mmdet.core import bbox2roi -from ..builder import HEADS, build_head -from .standard_roi_head import StandardRoIHead - - -@HEADS.register_module() -class MaskScoringRoIHead(StandardRoIHead): - """Mask Scoring RoIHead for Mask Scoring RCNN. 
- - https://arxiv.org/abs/1903.00241 - """ - - def __init__(self, mask_iou_head, **kwargs): - assert mask_iou_head is not None - super(MaskScoringRoIHead, self).__init__(**kwargs) - self.mask_iou_head = build_head(mask_iou_head) - - def _mask_forward_train(self, x, sampling_results, bbox_feats, gt_masks, - img_metas): - """Run forward function and calculate loss for Mask head in - training.""" - pos_labels = torch.cat([res.pos_gt_labels for res in sampling_results]) - mask_results = super(MaskScoringRoIHead, - self)._mask_forward_train(x, sampling_results, - bbox_feats, gt_masks, - img_metas) - if mask_results['loss_mask'] is None: - return mask_results - - # mask iou head forward and loss - pos_mask_pred = mask_results['mask_pred'][ - range(mask_results['mask_pred'].size(0)), pos_labels] - mask_iou_pred = self.mask_iou_head(mask_results['mask_feats'], - pos_mask_pred) - pos_mask_iou_pred = mask_iou_pred[range(mask_iou_pred.size(0)), - pos_labels] - - mask_iou_targets = self.mask_iou_head.get_targets( - sampling_results, gt_masks, pos_mask_pred, - mask_results['mask_targets'], self.train_cfg) - loss_mask_iou = self.mask_iou_head.loss(pos_mask_iou_pred, - mask_iou_targets) - mask_results['loss_mask'].update(loss_mask_iou) - return mask_results - - def simple_test_mask(self, - x, - img_metas, - det_bboxes, - det_labels, - rescale=False): - """Obtain mask prediction without augmentation.""" - # image shapes of images in the batch - ori_shapes = tuple(meta['ori_shape'] for meta in img_metas) - scale_factors = tuple(meta['scale_factor'] for meta in img_metas) - - num_imgs = len(det_bboxes) - if all(det_bbox.shape[0] == 0 for det_bbox in det_bboxes): - num_classes = self.mask_head.num_classes - segm_results = [[[] for _ in range(num_classes)] - for _ in range(num_imgs)] - mask_scores = [[[] for _ in range(num_classes)] - for _ in range(num_imgs)] - else: - # if det_bboxes is rescaled to the original image size, we need to - # rescale it back to the testing scale to obtain RoIs. 
- if rescale and not isinstance(scale_factors[0], float): - scale_factors = [ - torch.from_numpy(scale_factor).to(det_bboxes[0].device) - for scale_factor in scale_factors - ] - _bboxes = [ - det_bboxes[i][:, :4] * - scale_factors[i] if rescale else det_bboxes[i] - for i in range(num_imgs) - ] - mask_rois = bbox2roi(_bboxes) - mask_results = self._mask_forward(x, mask_rois) - concat_det_labels = torch.cat(det_labels) - # get mask scores with mask iou head - mask_feats = mask_results['mask_feats'] - mask_pred = mask_results['mask_pred'] - mask_iou_pred = self.mask_iou_head( - mask_feats, mask_pred[range(concat_det_labels.size(0)), - concat_det_labels]) - # split batch mask prediction back to each image - num_bboxes_per_img = tuple(len(_bbox) for _bbox in _bboxes) - mask_preds = mask_pred.split(num_bboxes_per_img, 0) - mask_iou_preds = mask_iou_pred.split(num_bboxes_per_img, 0) - - # apply mask post-processing to each image individually - segm_results = [] - mask_scores = [] - for i in range(num_imgs): - if det_bboxes[i].shape[0] == 0: - segm_results.append( - [[] for _ in range(self.mask_head.num_classes)]) - mask_scores.append( - [[] for _ in range(self.mask_head.num_classes)]) - else: - segm_result = self.mask_head.get_seg_masks( - mask_preds[i], _bboxes[i], det_labels[i], - self.test_cfg, ori_shapes[i], scale_factors[i], - rescale) - # get mask scores with mask iou head - mask_score = self.mask_iou_head.get_mask_scores( - mask_iou_preds[i], det_bboxes[i], det_labels[i]) - segm_results.append(segm_result) - mask_scores.append(mask_score) - return list(zip(segm_results, mask_scores)) diff --git a/spaces/rorallitri/biomedical-language-models/logs/Buruma Pc Game Crack Downloads.md b/spaces/rorallitri/biomedical-language-models/logs/Buruma Pc Game Crack Downloads.md deleted file mode 100644 index a849012026cb7875d25a63ddafdc42b784263e62..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/Buruma Pc Game Crack Downloads.md +++ /dev/null @@ -1,15 +0,0 @@ -

buruma pc game crack downloads


Download Ziphttps://tinurll.com/2uznWe



- -November 3, 2021 - Buruma PC Game Crack Downloads jamewand. Buruma Pc Game Crack Download: ✓✓✓ ian buruma play ... Buruma PC Game Crack Download ✓✓✓ ian buruma play ... -Nov 3, 2019 ... -Buruma PC Game Crack Download. -Download Buruma Pc Game Crack. -Download Database. -Download Included ... -Buruma Game Crack Download Download Buruma PC Game Crack ... -Buruma PC Game Crack Download Download Buruma Pc Game Crack Download Buruma ... -Download Buruma PC Game Crack Download. -Download ... 8a78ff9644
-
-
-

diff --git a/spaces/rorallitri/biomedical-language-models/logs/CDRoller 11.50 Crack With Keygen A Must-Have Software for Data Recovery Professionals.md b/spaces/rorallitri/biomedical-language-models/logs/CDRoller 11.50 Crack With Keygen A Must-Have Software for Data Recovery Professionals.md deleted file mode 100644 index e511198dea795003af853dbc67a8db28ba031c40..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/CDRoller 11.50 Crack With Keygen A Must-Have Software for Data Recovery Professionals.md +++ /dev/null @@ -1,6 +0,0 @@ -

CDRoller 11.50 Crack With Keygen Free Download 2020


Download Ziphttps://tinurll.com/2uzmYh



- - aaccfb2cb3
-
-
-

diff --git a/spaces/rorallitri/biomedical-language-models/logs/How Ida Beaussart Suffered in Silence Pleure En Silence Streaming 28.md b/spaces/rorallitri/biomedical-language-models/logs/How Ida Beaussart Suffered in Silence Pleure En Silence Streaming 28.md deleted file mode 100644 index ff118bcf6f9e74f0ed2d1a70260331ccba62dc46..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/How Ida Beaussart Suffered in Silence Pleure En Silence Streaming 28.md +++ /dev/null @@ -1,16 +0,0 @@ - -

Synopsis - In 1989, in Salomé, Ida Beaussart, 17, kills her father, an active member of a neo-Nazi group. In 1992, she is acquitted. The film "Pleure en silence" takes us behind the scenes of this tragedy, in the eight days leading up to the desperate act of an abused child.

-

Pleure En Silence Streaming 28


DOWNLOAD ===== https://tinurll.com/2uzmIl



-


-

It was the Old Woman who explained to Nadia that, to know when it might happen, she had to count the days. Nadia did not want to have a baby with the Old Man. She started to cry and began refusing to go with him.

-

I only found this, which seems to work, but you have to create an account: t/checkout.html?wm=150&sub=7&filename=Pleure%20en%20silence
Sorry I couldn't be more helpful

-

-

C.B. : J'ai souffert d'un surmenage qui a affecté mes cordes vocales. J'ai dû garder le silence durant trois mois, et j'ai mis un an à récupérer ma voix. Mais ces problèmes sont derrière moi. Mon nouvel album a été retardé pour d'autres raisons. Il sortira quand je serai satisfaite du résultat. En tout cas, il sera positif, comme moi !

-

C.B.: I'm having a great time! I share everything I have learned so far with the talents. I'm overprotective of them, since I was in their place on Popstars. The battles are hard, I cry... But I'm well placed to tell them that even if you don't win, you can still make a career.

-

"J'arrive avec beaucoup d'émotion, de tristesse, mais aussi avec un sourire car il nous a aussi donné le sourire. A la Fifa, nous rendrons hommage au "Roi" et nous demandons au monde entier de respecter une minute de silence", a déclaré le patron de l'instance à son arrivée.

-

In Italy, a minute of silence will be observed in the stadiums during the next round of Serie A matches, played on January 4, the Italian football federation (FIGC) announced on Friday in a statement.

-

"La CBF pleure le décès d'Edson Arantes do Nascimento, Pelé, ce jeudi à l'hôpital Albert Einstein de São Paulo. Pelé était bien plus que le plus grand sportif de tous les temps. Notre roi du football a été le plus grand exposant d'un Brésil victorieux, gagnant qui n'a jamais eu peur face aux difficultés. Garçon noir, pauvre et né à Trois Coeurs ("Três Corações"), Pelé nous a montré qu'il y a toujours un nouveau chemin. Il a promis à son père une Coupe du monde et nous a présenté trois, en plus de marquer 95 buts en 113 matchs avec le maillot jaune. Le roi nous a donné un nouveau Brésil et nous ne pouvons que remercier son héritage."

-

When the verdict was announced, the Chilean, impeccable in his shirt and tie, did not bat an eyelid. No reaction, no tears. Absolute impassiveness, in a cathedral-like silence. Zepeda did not spare a glance for his parents, seated to his right.

-

One question remains, nagging, eternal and futile: what happened in room 106 of the Théodore-Rousseau student residence during the night of December 4-5, 2016? Narumi Kurosaki, a death without images, but not without sound. "Cries of horror," "of terror," the dull thuds of a body being struck against the wall, then that dying "rattle" heard by the student's neighbours... A sound, then a nightmarish silence.

aaccfb2cb3
-
-
\ No newline at end of file diff --git a/spaces/rorallitri/biomedical-language-models/logs/Hyena Ek Chalak Haseena Movie Hindi Dubbed LINK Download 720p Movie.md b/spaces/rorallitri/biomedical-language-models/logs/Hyena Ek Chalak Haseena Movie Hindi Dubbed LINK Download 720p Movie.md deleted file mode 100644 index 48062f2d171ae41b59b62a6b4ca97085a0c3c6e8..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/Hyena Ek Chalak Haseena Movie Hindi Dubbed LINK Download 720p Movie.md +++ /dev/null @@ -1,6 +0,0 @@ -

Hyena Ek Chalak Haseena movie hindi dubbed download 720p movie


Download Zip >>> https://tinurll.com/2uzosx



- - aaccfb2cb3
-
-
-

diff --git a/spaces/rorallitri/biomedical-language-models/logs/Interfata Windows Xp In Limba Romana Download Music Sfaturi Si Trucuri Pentru O Experienta Optima.md b/spaces/rorallitri/biomedical-language-models/logs/Interfata Windows Xp In Limba Romana Download Music Sfaturi Si Trucuri Pentru O Experienta Optima.md deleted file mode 100644 index a7b4a740910b8760b72c9a1a6ca1e4687649c08b..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/Interfata Windows Xp In Limba Romana Download Music Sfaturi Si Trucuri Pentru O Experienta Optima.md +++ /dev/null @@ -1,6 +0,0 @@ -

Interfata Windows Xp In Limba Romana Download Music


DOWNLOADhttps://tinurll.com/2uzmF3



- - aaccfb2cb3
-
-
-

diff --git a/spaces/rosenthal/chess/chessfenbot/webkit2png.py b/spaces/rosenthal/chess/chessfenbot/webkit2png.py deleted file mode 100644 index 5e507a3eaf73331a0b8d572acac842e7085ff3e4..0000000000000000000000000000000000000000 --- a/spaces/rosenthal/chess/chessfenbot/webkit2png.py +++ /dev/null @@ -1,414 +0,0 @@ -# -# webkit2png.py -# -# Creates screenshots of webpages using by QtWebkit. -# -# Copyright (c) 2014 Roland Tapken -# -# This program is free software; you can redistribute it and/or -# modify it under the terms of the GNU General Public License -# as published by the Free Software Foundation; either version 2 -# of the License, or (at your option) any later version. -# -# This program is distributed in the hope that it will be useful, -# but WITHOUT ANY WARRANTY; without even the implied warranty of -# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the -# GNU General Public License for more details. -# -# You should have received a copy of the GNU General Public License -# along with this program; if not, write to the Free Software -# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA -# -# Nice ideas "todo": -# - Add QTcpSocket support to create a "screenshot daemon" that -# can handle multiple requests at the same time. - -import time -import os - -from PyQt4.QtCore import * -from PyQt4.QtGui import * -from PyQt4.QtWebKit import * -from PyQt4.QtNetwork import * - -# Class for Website-Rendering. Uses QWebPage, which -# requires a running QtGui to work. -class WebkitRenderer(QObject): - """ - A class that helps to create 'screenshots' of webpages using - Qt's QWebkit. Requires PyQt4 library. - - Use "render()" to get a 'QImage' object, render_to_bytes() to get the - resulting image as 'str' object or render_to_file() to write the image - directly into a 'file' resource. - """ - def __init__(self,**kwargs): - """ - Sets default values for the properties. - """ - - if not QApplication.instance(): - raise RuntimeError(self.__class__.__name__ + " requires a running QApplication instance") - QObject.__init__(self) - - # Initialize default properties - self.width = kwargs.get('width', 0) - self.height = kwargs.get('height', 0) - self.timeout = kwargs.get('timeout', 0) - self.wait = kwargs.get('wait', 0) - self.scaleToWidth = kwargs.get('scaleToWidth', 0) - self.scaleToHeight = kwargs.get('scaleToHeight', 0) - self.scaleRatio = kwargs.get('scaleRatio', 'keep') - self.format = kwargs.get('format', 'png') - self.logger = kwargs.get('logger', None) - - # Set this to true if you want to capture flash. - # Not that your desktop must be large enough for - # fitting the whole window. 
- self.grabWholeWindow = kwargs.get('grabWholeWindow', False) - self.renderTransparentBackground = kwargs.get('renderTransparentBackground', False) - self.ignoreAlert = kwargs.get('ignoreAlert', True) - self.ignoreConfirm = kwargs.get('ignoreConfirm', True) - self.ignorePrompt = kwargs.get('ignorePrompt', True) - self.interruptJavaScript = kwargs.get('interruptJavaScript', True) - self.encodedUrl = kwargs.get('encodedUrl', False) - self.cookies = kwargs.get('cookies', []) - - # Set some default options for QWebPage - self.qWebSettings = { - QWebSettings.JavascriptEnabled : False, - QWebSettings.PluginsEnabled : False, - QWebSettings.PrivateBrowsingEnabled : True, - QWebSettings.JavascriptCanOpenWindows : False - } - - - def render(self, res): - """ - Renders the given URL into a QImage object - """ - # We have to use this helper object because - # QApplication.processEvents may be called, causing - # this method to get called while it has not returned yet. - helper = _WebkitRendererHelper(self) - helper._window.resize( self.width, self.height ) - image = helper.render(res) - - # Bind helper instance to this image to prevent the - # object from being cleaned up (and with it the QWebPage, etc) - # before the data has been used. - image.helper = helper - - return image - - def render_to_file(self, res, file_object): - """ - Renders the image into a File resource. - Returns the size of the data that has been written. - """ - format = self.format # this may not be constant due to processEvents() - image = self.render(res) - qBuffer = QBuffer() - image.save(qBuffer, format) - file_object.write(qBuffer.buffer().data()) - return qBuffer.size() - - def render_to_bytes(self, res): - """Renders the image into an object of type 'str'""" - format = self.format # this may not be constant due to processEvents() - image = self.render(res) - qBuffer = QBuffer() - image.save(qBuffer, format) - return qBuffer.buffer().data() - -## @brief The CookieJar class inherits QNetworkCookieJar to make a couple of functions public. -class CookieJar(QNetworkCookieJar): - def __init__(self, cookies, qtUrl, parent=None): - QNetworkCookieJar.__init__(self, parent) - for cookie in cookies: - QNetworkCookieJar.setCookiesFromUrl(self, QNetworkCookie.parseCookies(QByteArray(cookie)), qtUrl) - - def allCookies(self): - return QNetworkCookieJar.allCookies(self) - - def setAllCookies(self, cookieList): - QNetworkCookieJar.setAllCookies(self, cookieList) - -class _WebkitRendererHelper(QObject): - """ - This helper class is doing the real work. It is required to - allow WebkitRenderer.render() to be called "asynchronously" - (but always from Qt's GUI thread). - """ - - def __init__(self, parent): - """ - Copies the properties from the parent (WebkitRenderer) object, - creates the required instances of QWebPage, QWebView and QMainWindow - and registers some Slots. 
- """ - QObject.__init__(self) - - # Copy properties from parent - for key,value in parent.__dict__.items(): - setattr(self,key,value) - - # Determine Proxy settings - proxy = QNetworkProxy(QNetworkProxy.NoProxy) - if 'http_proxy' in os.environ: - proxy_url = QUrl(os.environ['http_proxy']) - if unicode(proxy_url.scheme()).startswith('http'): - protocol = QNetworkProxy.HttpProxy - else: - protocol = QNetworkProxy.Socks5Proxy - - proxy = QNetworkProxy( - protocol, - proxy_url.host(), - proxy_url.port(), - proxy_url.userName(), - proxy_url.password() - ) - - # Create and connect required PyQt4 objects - self._page = CustomWebPage(logger=self.logger, ignore_alert=self.ignoreAlert, - ignore_confirm=self.ignoreConfirm, ignore_prompt=self.ignorePrompt, - interrupt_js=self.interruptJavaScript) - self._page.networkAccessManager().setProxy(proxy) - self._view = QWebView() - self._view.setPage(self._page) - self._window = QMainWindow() - self._window.setCentralWidget(self._view) - - # Import QWebSettings - for key, value in self.qWebSettings.iteritems(): - self._page.settings().setAttribute(key, value) - - # Connect required event listeners - self.connect(self._page, SIGNAL("loadFinished(bool)"), self._on_load_finished) - self.connect(self._page, SIGNAL("loadStarted()"), self._on_load_started) - self.connect(self._page.networkAccessManager(), SIGNAL("sslErrors(QNetworkReply *,const QList&)"), self._on_ssl_errors) - self.connect(self._page.networkAccessManager(), SIGNAL("finished(QNetworkReply *)"), self._on_each_reply) - - # The way we will use this, it seems to be unesseccary to have Scrollbars enabled - self._page.mainFrame().setScrollBarPolicy(Qt.Horizontal, Qt.ScrollBarAlwaysOff) - self._page.mainFrame().setScrollBarPolicy(Qt.Vertical, Qt.ScrollBarAlwaysOff) - self._page.settings().setUserStyleSheetUrl(QUrl("data:text/css,html,body{overflow-y:hidden !important;}")) - - # Show this widget - self._window.show() - - def __del__(self): - """ - Clean up Qt4 objects. - """ - self._window.close() - del self._window - del self._view - del self._page - - def render(self, res): - """ - The real worker. Loads the page (_load_page) and awaits - the end of the given 'delay'. While it is waiting outstanding - QApplication events are processed. - After the given delay, the Window or Widget (depends - on the value of 'grabWholeWindow' is drawn into a QPixmap - and postprocessed (_post_process_image). - """ - self._load_page(res, self.width, self.height, self.timeout) - # Wait for end of timer. In this time, process - # other outstanding Qt events. - if self.wait > 0: - if self.logger: self.logger.debug("Waiting %d seconds " % self.wait) - waitToTime = time.time() + self.wait - while time.time() < waitToTime: - if QApplication.hasPendingEvents(): - QApplication.processEvents() - - if self.renderTransparentBackground: - # Another possible drawing solution - image = QImage(self._page.viewportSize(), QImage.Format_ARGB32) - image.fill(QColor(255,0,0,0).rgba()) - - # http://ariya.blogspot.com/2009/04/transparent-qwebview-and-qwebpage.html - palette = self._view.palette() - palette.setBrush(QPalette.Base, Qt.transparent) - self._page.setPalette(palette) - self._view.setAttribute(Qt.WA_OpaquePaintEvent, False) - - painter = QPainter(image) - painter.setBackgroundMode(Qt.TransparentMode) - self._page.mainFrame().render(painter) - painter.end() - else: - if self.grabWholeWindow: - # Note that this does not fully ensure that the - # window still has the focus when the screen is - # grabbed. 
This might result in a race condition. - self._view.activateWindow() - image = QPixmap.grabWindow(self._window.winId()) - else: - image = QPixmap.grabWidget(self._window) - - return self._post_process_image(image) - - def _load_page(self, res, width, height, timeout): - """ - This method implements the logic for retrieving and displaying - the requested page. - """ - - # This is an event-based application. So we have to wait until - # "loadFinished(bool)" raised. - cancelAt = time.time() + timeout - self.__loading = True - self.__loadingResult = False # Default - - # When "res" is of type tuple, it has two elements where the first - # element is the HTML code to render and the second element is a string - # setting the base URL for the interpreted HTML code. - # When resource is of type str or unicode, it is handled as URL which - # shal be loaded - if type(res) == tuple: - url = res[1] - else: - url = res - - if self.encodedUrl: - qtUrl = QUrl.fromEncoded(url) - else: - qtUrl = QUrl(url) - - # Set the required cookies, if any - self.cookieJar = CookieJar(self.cookies, qtUrl) - self._page.networkAccessManager().setCookieJar(self.cookieJar) - - # Load the page - if type(res) == tuple: - self._page.mainFrame().setHtml(res[0], qtUrl) # HTML, baseUrl - else: - self._page.mainFrame().load(qtUrl) - - while self.__loading: - if timeout > 0 and time.time() >= cancelAt: - raise RuntimeError("Request timed out on %s" % res) - while QApplication.hasPendingEvents() and self.__loading: - QCoreApplication.processEvents() - - if self.logger: self.logger.debug("Processing result") - - if self.__loading_result == False: - if self.logger: self.logger.warning("Failed to load %s" % res) - - # Set initial viewport (the size of the "window") - size = self._page.mainFrame().contentsSize() - if self.logger: self.logger.debug("contentsSize: %s", size) - if width > 0: - size.setWidth(width) - if height > 0: - size.setHeight(height) - - self._window.resize(size) - - def _post_process_image(self, qImage): - """ - If 'scaleToWidth' or 'scaleToHeight' are set to a value - greater than zero this method will scale the image - using the method defined in 'scaleRatio'. - """ - if self.scaleToWidth > 0 or self.scaleToHeight > 0: - # Scale this image - if self.scaleRatio == 'keep': - ratio = Qt.KeepAspectRatio - elif self.scaleRatio in ['expand', 'crop']: - ratio = Qt.KeepAspectRatioByExpanding - else: # 'ignore' - ratio = Qt.IgnoreAspectRatio - qImage = qImage.scaled(self.scaleToWidth, self.scaleToHeight, ratio, Qt.SmoothTransformation) - if self.scaleRatio == 'crop': - qImage = qImage.copy(0, 0, self.scaleToWidth, self.scaleToHeight) - return qImage - - def _on_each_reply(self,reply): - """ - Logs each requested uri - """ - # print "Received %s" % (reply.url().toString()) - # self.logger.debug("Received %s" % (reply.url().toString())) - - # Eventhandler for "loadStarted()" signal - def _on_load_started(self): - """ - Slot that sets the '__loading' property to true - """ - if self.logger: self.logger.debug("loading started") - self.__loading = True - - # Eventhandler for "loadFinished(bool)" signal - def _on_load_finished(self, result): - """Slot that sets the '__loading' property to false and stores - the result code in '__loading_result'. 
- """ - if self.logger: self.logger.debug("loading finished with result %s", result) - self.__loading = False - self.__loading_result = result - - # Eventhandler for "sslErrors(QNetworkReply *,const QList&)" signal - def _on_ssl_errors(self, reply, errors): - """ - Slot that writes SSL warnings into the log but ignores them. - """ - for e in errors: - if self.logger: self.logger.warn("SSL: " + e.errorString()) - reply.ignoreSslErrors() - - -class CustomWebPage(QWebPage): - def __init__(self, **kwargs): - """ - Class Initializer - """ - super(CustomWebPage, self).__init__() - self.logger = kwargs.get('logger', None) - self.ignore_alert = kwargs.get('ignore_alert', True) - self.ignore_confirm = kwargs.get('ignore_confirm', True) - self.ignore_prompt = kwargs.get('ignore_prompt', True) - self.interrupt_js = kwargs.get('interrupt_js', True) - - def javaScriptAlert(self, frame, message): - if self.logger: self.logger.debug('Alert: %s', message) - if not self.ignore_alert: - return super(CustomWebPage, self).javaScriptAlert(frame, message) - - def javaScriptConfirm(self, frame, message): - if self.logger: self.logger.debug('Confirm: %s', message) - if not self.ignore_confirm: - return super(CustomWebPage, self).javaScriptConfirm(frame, message) - else: - return False - - def javaScriptPrompt(self, frame, message, default, result): - """ - This function is called whenever a JavaScript program running inside frame tries to prompt - the user for input. The program may provide an optional message, msg, as well as a default value - for the input in defaultValue. - - If the prompt was cancelled by the user the implementation should return false; - otherwise the result should be written to result and true should be returned. - If the prompt was not cancelled by the user, the implementation should return true and - the result string must not be null. - """ - if self.logger: self.logger.debug('Prompt: %s (%s)' % (message, default)) - if not self.ignore_prompt: - return super(CustomWebPage, self).javaScriptPrompt(frame, message, default, result) - else: - return False - - def shouldInterruptJavaScript(self): - """ - This function is called when a JavaScript program is running for a long period of time. - If the user wanted to stop the JavaScript the implementation should return true; otherwise false. - """ - if self.logger: self.logger.debug("WebKit ask to interrupt JavaScript") - return self.interrupt_js diff --git a/spaces/safi842/FashionGen/netdissect/nethook.py b/spaces/safi842/FashionGen/netdissect/nethook.py deleted file mode 100644 index f36e84ee0cae2de2c3be247498408cf66db3ee8f..0000000000000000000000000000000000000000 --- a/spaces/safi842/FashionGen/netdissect/nethook.py +++ /dev/null @@ -1,266 +0,0 @@ -''' -Utilities for instrumenting a torch model. - -InstrumentedModel will wrap a pytorch model and allow hooking -arbitrary layers to monitor or modify their output directly. - -Modified by Erik Härkönen: -- 29.11.2019: Unhooking bugfix -- 25.01.2020: Offset edits, removed old API -''' - -import torch, numpy, types -from collections import OrderedDict - -class InstrumentedModel(torch.nn.Module): - ''' - A wrapper for hooking, probing and intervening in pytorch Modules. 
- Example usage: - - ``` - model = load_my_model() - with inst as InstrumentedModel(model): - inst.retain_layer(layername) - inst.edit_layer(layername, 0.5, target_features) - inst.edit_layer(layername, offset=offset_tensor) - inst(inputs) - original_features = inst.retained_layer(layername) - ``` - ''' - - def __init__(self, model): - super(InstrumentedModel, self).__init__() - self.model = model - self._retained = OrderedDict() - self._ablation = {} - self._replacement = {} - self._offset = {} - self._hooked_layer = {} - self._old_forward = {} - - def __enter__(self): - return self - - def __exit__(self, type, value, traceback): - self.close() - - def forward(self, *inputs, **kwargs): - return self.model(*inputs, **kwargs) - - def retain_layer(self, layername): - ''' - Pass a fully-qualified layer name (E.g., module.submodule.conv3) - to hook that layer and retain its output each time the model is run. - A pair (layername, aka) can be provided, and the aka will be used - as the key for the retained value instead of the layername. - ''' - self.retain_layers([layername]) - - def retain_layers(self, layernames): - ''' - Retains a list of a layers at once. - ''' - self.add_hooks(layernames) - for layername in layernames: - aka = layername - if not isinstance(aka, str): - layername, aka = layername - if aka not in self._retained: - self._retained[aka] = None - - def retained_features(self): - ''' - Returns a dict of all currently retained features. - ''' - return OrderedDict(self._retained) - - def retained_layer(self, aka=None, clear=False): - ''' - Retrieve retained data that was previously hooked by retain_layer. - Call this after the model is run. If clear is set, then the - retained value will return and also cleared. - ''' - if aka is None: - # Default to the first retained layer. - aka = next(self._retained.keys().__iter__()) - result = self._retained[aka] - if clear: - self._retained[aka] = None - return result - - def edit_layer(self, layername, ablation=None, replacement=None, offset=None): - ''' - Pass a fully-qualified layer name (E.g., module.submodule.conv3) - to hook that layer and modify its output each time the model is run. - The output of the layer will be modified to be a convex combination - of the replacement and x interpolated according to the ablation, i.e.: - `output = x * (1 - a) + (r * a)`. - Additionally or independently, an offset can be added to the output. - ''' - if not isinstance(layername, str): - layername, aka = layername - else: - aka = layername - - # The default ablation if a replacement is specified is 1.0. - if ablation is None and replacement is not None: - ablation = 1.0 - self.add_hooks([(layername, aka)]) - if ablation is not None: - self._ablation[aka] = ablation - if replacement is not None: - self._replacement[aka] = replacement - if offset is not None: - self._offset[aka] = offset - # If needed, could add an arbitrary postprocessing lambda here. - - def remove_edits(self, layername=None, remove_offset=True, remove_replacement=True): - ''' - Removes edits at the specified layer, or removes edits at all layers - if no layer name is specified. 
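A minimal usage sketch of the retain/edit API on a toy model follows; the Sequential model, the layer name `'0'` and the offset tensor are hypothetical, and the context manager is entered as `with InstrumentedModel(model) as inst:` to match `__enter__` returning `self`.

```python
# Minimal sketch (hypothetical toy model): retain a layer and add an offset edit.
import torch

toy = torch.nn.Sequential(torch.nn.Linear(8, 8), torch.nn.ReLU())
with InstrumentedModel(toy) as inst:
    inst.retain_layer('0')                       # hook the Linear layer by its module name
    inst.edit_layer('0', offset=torch.ones(8))   # add a constant offset to its output
    out = inst(torch.randn(2, 8))                # run the wrapped model as usual
    feats = inst.retained_layer('0')             # activations captured before the edit
# leaving the 'with' block calls close(), which unhooks the layer
```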
- ''' - if layername is None: - if remove_replacement: - self._ablation.clear() - self._replacement.clear() - if remove_offset: - self._offset.clear() - return - - if not isinstance(layername, str): - layername, aka = layername - else: - aka = layername - if remove_replacement and aka in self._ablation: - del self._ablation[aka] - if remove_replacement and aka in self._replacement: - del self._replacement[aka] - if remove_offset and aka in self._offset: - del self._offset[aka] - - def add_hooks(self, layernames): - ''' - Sets up a set of layers to be hooked. - - Usually not called directly: use edit_layer or retain_layer instead. - ''' - needed = set() - aka_map = {} - for name in layernames: - aka = name - if not isinstance(aka, str): - name, aka = name - if self._hooked_layer.get(aka, None) != name: - aka_map[name] = aka - needed.add(name) - if not needed: - return - for name, layer in self.model.named_modules(): - if name in aka_map: - needed.remove(name) - aka = aka_map[name] - self._hook_layer(layer, name, aka) - for name in needed: - raise ValueError('Layer %s not found in model' % name) - - def _hook_layer(self, layer, layername, aka): - ''' - Internal method to replace a forward method with a closure that - intercepts the call, and tracks the hook so that it can be reverted. - ''' - if aka in self._hooked_layer: - raise ValueError('Layer %s already hooked' % aka) - if layername in self._old_forward: - raise ValueError('Layer %s already hooked' % layername) - self._hooked_layer[aka] = layername - self._old_forward[layername] = (layer, aka, - layer.__dict__.get('forward', None)) - editor = self - original_forward = layer.forward - def new_forward(self, *inputs, **kwargs): - original_x = original_forward(*inputs, **kwargs) - x = editor._postprocess_forward(original_x, aka) - return x - layer.forward = types.MethodType(new_forward, layer) - - def _unhook_layer(self, aka): - ''' - Internal method to remove a hook, restoring the original forward method. - ''' - if aka not in self._hooked_layer: - return - layername = self._hooked_layer[aka] - layer, check, old_forward = self._old_forward[layername] - assert check == aka - if old_forward is None: - if 'forward' in layer.__dict__: - del layer.__dict__['forward'] - else: - layer.forward = old_forward - del self._old_forward[layername] - del self._hooked_layer[aka] - if aka in self._ablation: - del self._ablation[aka] - if aka in self._replacement: - del self._replacement[aka] - if aka in self._offset: - del self._offset[aka] - if aka in self._retained: - del self._retained[aka] - - def _postprocess_forward(self, x, aka): - ''' - The internal method called by the hooked layers after they are run. - ''' - # Retain output before edits, if desired. - if aka in self._retained: - self._retained[aka] = x.detach() - - # Apply replacement edit - a = make_matching_tensor(self._ablation, aka, x) - if a is not None: - x = x * (1 - a) - v = make_matching_tensor(self._replacement, aka, x) - if v is not None: - x += (v * a) - - # Apply offset edit - b = make_matching_tensor(self._offset, aka, x) - if b is not None: - x = x + b - - return x - - def close(self): - ''' - Unhooks all hooked layers in the model. - ''' - for aka in list(self._old_forward.keys()): - self._unhook_layer(aka) - assert len(self._old_forward) == 0 - - -def make_matching_tensor(valuedict, name, data): - ''' - Converts `valuedict[name]` to be a tensor with the same dtype, device, - and dimension count as `data`, and caches the converted tensor. 
- ''' - v = valuedict.get(name, None) - if v is None: - return None - if not isinstance(v, torch.Tensor): - # Accept non-torch data. - v = torch.from_numpy(numpy.array(v)) - valuedict[name] = v - if not v.device == data.device or not v.dtype == data.dtype: - # Ensure device and type matches. - assert not v.requires_grad, '%s wrong device or type' % (name) - v = v.to(device=data.device, dtype=data.dtype) - valuedict[name] = v - if len(v.shape) < len(data.shape): - # Ensure dimensions are unsqueezed as needed. - assert not v.requires_grad, '%s wrong dimensions' % (name) - v = v.view((1,) + tuple(v.shape) + - (1,) * (len(data.shape) - len(v.shape) - 1)) - valuedict[name] = v - return v diff --git a/spaces/samarthagarwal23/Scotch_recommendation/README.md b/spaces/samarthagarwal23/Scotch_recommendation/README.md deleted file mode 100644 index 8821d08f03b25edf09e509a10ed5f84e883636f5..0000000000000000000000000000000000000000 --- a/spaces/samarthagarwal23/Scotch_recommendation/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: Scotch_recommendation -emoji: 📊 -colorFrom: yellow -colorTo: green -sdk: gradio -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/samueldomdey/SentimentAnalysisSingle/app.py b/spaces/samueldomdey/SentimentAnalysisSingle/app.py deleted file mode 100644 index c799f05993b7147596e0bba09096349d809aea08..0000000000000000000000000000000000000000 --- a/spaces/samueldomdey/SentimentAnalysisSingle/app.py +++ /dev/null @@ -1,23 +0,0 @@ -# imports -from transformers import pipeline -import gradio as gr - -# define nlp mask -model = "siebert/sentiment-roberta-large-english" -nlp = pipeline(model=model) # set device=0 to use GPU (CPU default, -1) - -# Inference -def inference(sentence): - preds = nlp(sentence) - pred_sentiment = preds[0]["label"] - pred_score = preds[0]["score"] - return pred_sentiment, pred_score - -# launch app -gr.Interface(inference, - inputs=[gr.inputs.Textbox(label="Sentiment to predict", default="I love this!")], - outputs=[gr.outputs.Textbox(type="auto", label="Predicted sentiment"), - gr.outputs.Textbox(type="auto", label="Predicted score")], - description="Sentiment analysis", - allow_flagging=False, - ).launch(debug=True) \ No newline at end of file diff --git a/spaces/samuelinferences/transformers-can-do-bayesian-inference/prior-fitting/losses.py b/spaces/samuelinferences/transformers-can-do-bayesian-inference/prior-fitting/losses.py deleted file mode 100644 index bf1b6ba8b7581b139ccf4246f9ce7d67d6d89b07..0000000000000000000000000000000000000000 --- a/spaces/samuelinferences/transformers-can-do-bayesian-inference/prior-fitting/losses.py +++ /dev/null @@ -1,12 +0,0 @@ -import torch -from torch import nn - - -class ScaledSoftmaxCE(nn.Module): - def forward(self, x, label): - logits = x[..., :-10] - temp_scales = x[..., -10:] - - - - logprobs = logits.softmax(-1) diff --git a/spaces/sayakpaul/raindrop-deraining-maxim/maxim/blocks/attentions.py b/spaces/sayakpaul/raindrop-deraining-maxim/maxim/blocks/attentions.py deleted file mode 100644 index ad59022388610f775335cd3f58ba4fb5362ebd90..0000000000000000000000000000000000000000 --- a/spaces/sayakpaul/raindrop-deraining-maxim/maxim/blocks/attentions.py +++ /dev/null @@ -1,143 +0,0 @@ -import functools - -import tensorflow as tf -from tensorflow.keras import layers - -from .others import MlpBlock - -Conv3x3 = functools.partial(layers.Conv2D, kernel_size=(3, 3), padding="same") -Conv1x1 = 
functools.partial(layers.Conv2D, kernel_size=(1, 1), padding="same") - - -def CALayer( - num_channels: int, - reduction: int = 4, - use_bias: bool = True, - name: str = "channel_attention", -): - """Squeeze-and-excitation block for channel attention. - - ref: https://arxiv.org/abs/1709.01507 - """ - - def apply(x): - # 2D global average pooling - y = layers.GlobalAvgPool2D(keepdims=True)(x) - # Squeeze (in Squeeze-Excitation) - y = Conv1x1( - filters=num_channels // reduction, use_bias=use_bias, name=f"{name}_Conv_0" - )(y) - y = tf.nn.relu(y) - # Excitation (in Squeeze-Excitation) - y = Conv1x1(filters=num_channels, use_bias=use_bias, name=f"{name}_Conv_1")(y) - y = tf.nn.sigmoid(y) - return x * y - - return apply - - -def RCAB( - num_channels: int, - reduction: int = 4, - lrelu_slope: float = 0.2, - use_bias: bool = True, - name: str = "residual_ca", -): - """Residual channel attention block. Contains LN,Conv,lRelu,Conv,SELayer.""" - - def apply(x): - shortcut = x - x = layers.LayerNormalization(epsilon=1e-06, name=f"{name}_LayerNorm")(x) - x = Conv3x3(filters=num_channels, use_bias=use_bias, name=f"{name}_conv1")(x) - x = tf.nn.leaky_relu(x, alpha=lrelu_slope) - x = Conv3x3(filters=num_channels, use_bias=use_bias, name=f"{name}_conv2")(x) - x = CALayer( - num_channels=num_channels, - reduction=reduction, - use_bias=use_bias, - name=f"{name}_channel_attention", - )(x) - return x + shortcut - - return apply - - -def RDCAB( - num_channels: int, - reduction: int = 16, - use_bias: bool = True, - dropout_rate: float = 0.0, - name: str = "rdcab", -): - """Residual dense channel attention block. Used in Bottlenecks.""" - - def apply(x): - y = layers.LayerNormalization(epsilon=1e-06, name=f"{name}_LayerNorm")(x) - y = MlpBlock( - mlp_dim=num_channels, - dropout_rate=dropout_rate, - use_bias=use_bias, - name=f"{name}_channel_mixing", - )(y) - y = CALayer( - num_channels=num_channels, - reduction=reduction, - use_bias=use_bias, - name=f"{name}_channel_attention", - )(y) - x = x + y - return x - - return apply - - -def SAM( - num_channels: int, - output_channels: int = 3, - use_bias: bool = True, - name: str = "sam", -): - - """Supervised attention module for multi-stage training. - - Introduced by MPRNet [CVPR2021]: https://github.com/swz30/MPRNet - """ - - def apply(x, x_image): - """Apply the SAM module to the input and num_channels. - Args: - x: the output num_channels from UNet decoder with shape (h, w, c) - x_image: the input image with shape (h, w, 3) - Returns: - A tuple of tensors (x1, image) where (x1) is the sam num_channels used for the - next stage, and (image) is the output restored image at current stage. 
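As a quick illustration of the channel-attention block defined above, the following sketch (hypothetical tensor sizes) shows that `CALayer` gates channels without changing the spatial shape of its input.

```python
# Minimal shape check (hypothetical sizes) for the squeeze-and-excitation CALayer.
import tensorflow as tf

x = tf.random.normal((1, 32, 32, 64))         # NHWC feature map
y = CALayer(num_channels=64, reduction=4)(x)  # per-channel gating in [0, 1] applied to x
print(y.shape)                                 # -> (1, 32, 32, 64)
```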
- """ - # Get num_channels - x1 = Conv3x3(filters=num_channels, use_bias=use_bias, name=f"{name}_Conv_0")(x) - - # Output restored image X_s - if output_channels == 3: - image = ( - Conv3x3( - filters=output_channels, use_bias=use_bias, name=f"{name}_Conv_1" - )(x) - + x_image - ) - else: - image = Conv3x3( - filters=output_channels, use_bias=use_bias, name=f"{name}_Conv_1" - )(x) - - # Get attention maps for num_channels - x2 = tf.nn.sigmoid( - Conv3x3(filters=num_channels, use_bias=use_bias, name=f"{name}_Conv_2")(image) - ) - - # Get attended feature maps - x1 = x1 * x2 - - # Residual connection - x1 = x1 + x - return x1, image - - return apply diff --git a/spaces/sciling/Face_and_Plate_License_Blur/utils/face_datasets.py b/spaces/sciling/Face_and_Plate_License_Blur/utils/face_datasets.py deleted file mode 100644 index efd6f4927d7b630b9159f687befff5f6c39f02ac..0000000000000000000000000000000000000000 --- a/spaces/sciling/Face_and_Plate_License_Blur/utils/face_datasets.py +++ /dev/null @@ -1,834 +0,0 @@ -import glob -import logging -import math -import os -import random -import shutil -import time -from itertools import repeat -from multiprocessing.pool import ThreadPool -from pathlib import Path -from threading import Thread - -import cv2 -import numpy as np -import torch -from PIL import Image, ExifTags -from torch.utils.data import Dataset -from tqdm import tqdm - -from utils.general import xyxy2xywh, xywh2xyxy, clean_str -from utils.torch_utils import torch_distributed_zero_first - - -# Parameters -help_url = 'https://github.com/ultralytics/yolov5/wiki/Train-Custom-Data' -img_formats = ['bmp', 'jpg', 'jpeg', 'png', 'tif', 'tiff', 'dng'] # acceptable image suffixes -vid_formats = ['mov', 'avi', 'mp4', 'mpg', 'mpeg', 'm4v', 'wmv', 'mkv'] # acceptable video suffixes -logger = logging.getLogger(__name__) - -# Get orientation exif tag -for orientation in ExifTags.TAGS.keys(): - if ExifTags.TAGS[orientation] == 'Orientation': - break - -def get_hash(files): - # Returns a single hash value of a list of files - return sum(os.path.getsize(f) for f in files if os.path.isfile(f)) - -def img2label_paths(img_paths): - # Define label paths as a function of image paths - sa, sb = os.sep + 'images' + os.sep, os.sep + 'labels' + os.sep # /images/, /labels/ substrings - return [x.replace(sa, sb, 1).replace('.' 
+ x.split('.')[-1], '.txt') for x in img_paths] - -def exif_size(img): - # Returns exif-corrected PIL size - s = img.size # (width, height) - try: - rotation = dict(img._getexif().items())[orientation] - if rotation == 6: # rotation 270 - s = (s[1], s[0]) - elif rotation == 8: # rotation 90 - s = (s[1], s[0]) - except: - pass - - return s - -def create_dataloader(path, imgsz, batch_size, stride, opt, hyp=None, augment=False, cache=False, pad=0.0, rect=False, - rank=-1, world_size=1, workers=8, image_weights=False, quad=False, prefix=''): - # Make sure only the first process in DDP process the dataset first, and the following others can use the cache - with torch_distributed_zero_first(rank): - dataset = LoadFaceImagesAndLabels(path, imgsz, batch_size, - augment=augment, # augment images - hyp=hyp, # augmentation hyperparameters - rect=rect, # rectangular training - cache_images=cache, - single_cls=opt.single_cls, - stride=int(stride), - pad=pad, - image_weights=image_weights, - ) - - batch_size = min(batch_size, len(dataset)) - nw = min([os.cpu_count() // world_size, batch_size if batch_size > 1 else 0, workers]) # number of workers - sampler = torch.utils.data.distributed.DistributedSampler(dataset) if rank != -1 else None - loader = torch.utils.data.DataLoader if image_weights else InfiniteDataLoader - # Use torch.utils.data.DataLoader() if dataset.properties will update during training else InfiniteDataLoader() - dataloader = loader(dataset, - batch_size=batch_size, - num_workers=nw, - sampler=sampler, - pin_memory=True, - collate_fn=LoadFaceImagesAndLabels.collate_fn4 if quad else LoadFaceImagesAndLabels.collate_fn) - return dataloader, dataset -class InfiniteDataLoader(torch.utils.data.dataloader.DataLoader): - """ Dataloader that reuses workers - - Uses same syntax as vanilla DataLoader - """ - - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - object.__setattr__(self, 'batch_sampler', _RepeatSampler(self.batch_sampler)) - self.iterator = super().__iter__() - - def __len__(self): - return len(self.batch_sampler.sampler) - - def __iter__(self): - for i in range(len(self)): - yield next(self.iterator) -class _RepeatSampler(object): - """ Sampler that repeats forever - - Args: - sampler (Sampler) - """ - - def __init__(self, sampler): - self.sampler = sampler - - def __iter__(self): - while True: - yield from iter(self.sampler) - -class LoadFaceImagesAndLabels(Dataset): # for training/testing - def __init__(self, path, img_size=640, batch_size=16, augment=False, hyp=None, rect=False, image_weights=False, - cache_images=False, single_cls=False, stride=32, pad=0.0, rank=-1): - self.img_size = img_size - self.augment = augment - self.hyp = hyp - self.image_weights = image_weights - self.rect = False if image_weights else rect - self.mosaic = self.augment and not self.rect # load 4 images at a time into a mosaic (only during training) - self.mosaic_border = [-img_size // 2, -img_size // 2] - self.stride = stride - - try: - f = [] # image files - for p in path if isinstance(path, list) else [path]: - p = Path(p) # os-agnostic - if p.is_dir(): # dir - f += glob.glob(str(p / '**' / '*.*'), recursive=True) - elif p.is_file(): # file - with open(p, 'r') as t: - t = t.read().strip().splitlines() - parent = str(p.parent) + os.sep - f += [x.replace('./', parent) if x.startswith('./') else x for x in t] # local to global path - else: - raise Exception('%s does not exist' % p) - self.img_files = sorted([x.replace('/', os.sep) for x in f if x.split('.')[-1].lower() in 
img_formats]) - assert self.img_files, 'No images found' - except Exception as e: - raise Exception('Error loading data from %s: %s\nSee %s' % (path, e, help_url)) - - # Check cache - self.label_files = img2label_paths(self.img_files) # labels - cache_path = Path(self.label_files[0]).parent.with_suffix('.cache') # cached labels - if cache_path.is_file(): - cache = torch.load(cache_path) # load - if cache['hash'] != get_hash(self.label_files + self.img_files) or 'results' not in cache: # changed - cache = self.cache_labels(cache_path) # re-cache - else: - cache = self.cache_labels(cache_path) # cache - - # Display cache - [nf, nm, ne, nc, n] = cache.pop('results') # found, missing, empty, corrupted, total - desc = f"Scanning '{cache_path}' for images and labels... {nf} found, {nm} missing, {ne} empty, {nc} corrupted" - tqdm(None, desc=desc, total=n, initial=n) - assert nf > 0 or not augment, f'No labels found in {cache_path}. Can not train without labels. See {help_url}' - - # Read cache - cache.pop('hash') # remove hash - labels, shapes = zip(*cache.values()) - self.labels = list(labels) - self.shapes = np.array(shapes, dtype=np.float64) - self.img_files = list(cache.keys()) # update - self.label_files = img2label_paths(cache.keys()) # update - if single_cls: - for x in self.labels: - x[:, 0] = 0 - - n = len(shapes) # number of images - bi = np.floor(np.arange(n) / batch_size).astype(np.int) # batch index - nb = bi[-1] + 1 # number of batches - self.batch = bi # batch index of image - self.n = n - self.indices = range(n) - - # Rectangular Training - if self.rect: - # Sort by aspect ratio - s = self.shapes # wh - ar = s[:, 1] / s[:, 0] # aspect ratio - irect = ar.argsort() - self.img_files = [self.img_files[i] for i in irect] - self.label_files = [self.label_files[i] for i in irect] - self.labels = [self.labels[i] for i in irect] - self.shapes = s[irect] # wh - ar = ar[irect] - - # Set training image shapes - shapes = [[1, 1]] * nb - for i in range(nb): - ari = ar[bi == i] - mini, maxi = ari.min(), ari.max() - if maxi < 1: - shapes[i] = [maxi, 1] - elif mini > 1: - shapes[i] = [1, 1 / mini] - - self.batch_shapes = np.ceil(np.array(shapes) * img_size / stride + pad).astype(np.int) * stride - - # Cache images into memory for faster training (WARNING: large datasets may exceed system RAM) - self.imgs = [None] * n - if cache_images: - gb = 0 # Gigabytes of cached images - self.img_hw0, self.img_hw = [None] * n, [None] * n - results = ThreadPool(8).imap(lambda x: load_image(*x), zip(repeat(self), range(n))) # 8 threads - pbar = tqdm(enumerate(results), total=n) - for i, x in pbar: - self.imgs[i], self.img_hw0[i], self.img_hw[i] = x # img, hw_original, hw_resized = load_image(self, i) - gb += self.imgs[i].nbytes - pbar.desc = 'Caching images (%.1fGB)' % (gb / 1E9) - - def cache_labels(self, path=Path('./labels.cache')): - # Cache dataset labels, check images and read shapes - x = {} # dict - nm, nf, ne, nc = 0, 0, 0, 0 # number missing, found, empty, duplicate - pbar = tqdm(zip(self.img_files, self.label_files), desc='Scanning images', total=len(self.img_files)) - for i, (im_file, lb_file) in enumerate(pbar): - try: - # verify images - im = Image.open(im_file) - im.verify() # PIL verify - shape = exif_size(im) # image size - assert (shape[0] > 9) & (shape[1] > 9), 'image size <10 pixels' - - # verify labels - if os.path.isfile(lb_file): - nf += 1 # label found - with open(lb_file, 'r') as f: - l = np.array([x.split() for x in f.read().strip().splitlines()], dtype=np.float32) # labels - if len(l): 
- assert l.shape[1] == 15, 'labels require 15 columns each' - assert (l >= -1).all(), 'negative labels' - assert (l[:, 1:] <= 1).all(), 'non-normalized or out of bounds coordinate labels' - assert np.unique(l, axis=0).shape[0] == l.shape[0], 'duplicate labels' - else: - ne += 1 # label empty - l = np.zeros((0, 15), dtype=np.float32) - else: - nm += 1 # label missing - l = np.zeros((0, 15), dtype=np.float32) - x[im_file] = [l, shape] - except Exception as e: - nc += 1 - print('WARNING: Ignoring corrupted image and/or label %s: %s' % (im_file, e)) - - pbar.desc = f"Scanning '{path.parent / path.stem}' for images and labels... " \ - f"{nf} found, {nm} missing, {ne} empty, {nc} corrupted" - - if nf == 0: - print(f'WARNING: No labels found in {path}. See {help_url}') - - x['hash'] = get_hash(self.label_files + self.img_files) - x['results'] = [nf, nm, ne, nc, i + 1] - torch.save(x, path) # save for next time - logging.info(f"New cache created: {path}") - return x - - def __len__(self): - return len(self.img_files) - - # def __iter__(self): - # self.count = -1 - # print('ran dataset iter') - # #self.shuffled_vector = np.random.permutation(self.nF) if self.augment else np.arange(self.nF) - # return self - - def __getitem__(self, index): - index = self.indices[index] # linear, shuffled, or image_weights - - hyp = self.hyp - mosaic = self.mosaic and random.random() < hyp['mosaic'] - if mosaic: - # Load mosaic - img, labels = load_mosaic_face(self, index) - shapes = None - - # MixUp https://arxiv.org/pdf/1710.09412.pdf - if random.random() < hyp['mixup']: - img2, labels2 = load_mosaic_face(self, random.randint(0, self.n - 1)) - r = np.random.beta(8.0, 8.0) # mixup ratio, alpha=beta=8.0 - img = (img * r + img2 * (1 - r)).astype(np.uint8) - labels = np.concatenate((labels, labels2), 0) - - else: - # Load image - img, (h0, w0), (h, w) = load_image(self, index) - - # Letterbox - shape = self.batch_shapes[self.batch[index]] if self.rect else self.img_size # final letterboxed shape - img, ratio, pad = letterbox(img, shape, auto=False, scaleup=self.augment) - shapes = (h0, w0), ((h / h0, w / w0), pad) # for COCO mAP rescaling - - # Load labels - labels = [] - x = self.labels[index] - if x.size > 0: - # Normalized xywh to pixel xyxy format - labels = x.copy() - labels[:, 1] = ratio[0] * w * (x[:, 1] - x[:, 3] / 2) + pad[0] # pad width - labels[:, 2] = ratio[1] * h * (x[:, 2] - x[:, 4] / 2) + pad[1] # pad height - labels[:, 3] = ratio[0] * w * (x[:, 1] + x[:, 3] / 2) + pad[0] - labels[:, 4] = ratio[1] * h * (x[:, 2] + x[:, 4] / 2) + pad[1] - - #labels[:, 5] = ratio[0] * w * x[:, 5] + pad[0] # pad width - labels[:, 5] = np.array(x[:, 5] > 0, dtype=np.int32) * (ratio[0] * w * x[:, 5] + pad[0]) + ( - np.array(x[:, 5] > 0, dtype=np.int32) - 1) - labels[:, 6] = np.array(x[:, 6] > 0, dtype=np.int32) * (ratio[1] * h * x[:, 6] + pad[1]) + ( - np.array(x[:, 6] > 0, dtype=np.int32) - 1) - labels[:, 7] = np.array(x[:, 7] > 0, dtype=np.int32) * (ratio[0] * w * x[:, 7] + pad[0]) + ( - np.array(x[:, 7] > 0, dtype=np.int32) - 1) - labels[:, 8] = np.array(x[:, 8] > 0, dtype=np.int32) * (ratio[1] * h * x[:, 8] + pad[1]) + ( - np.array(x[:, 8] > 0, dtype=np.int32) - 1) - labels[:, 9] = np.array(x[:, 5] > 0, dtype=np.int32) * (ratio[0] * w * x[:, 9] + pad[0]) + ( - np.array(x[:, 9] > 0, dtype=np.int32) - 1) - labels[:, 10] = np.array(x[:, 5] > 0, dtype=np.int32) * (ratio[1] * h * x[:, 10] + pad[1]) + ( - np.array(x[:, 10] > 0, dtype=np.int32) - 1) - labels[:, 11] = np.array(x[:, 11] > 0, dtype=np.int32) * (ratio[0] * w * x[:, 
11] + pad[0]) + ( - np.array(x[:, 11] > 0, dtype=np.int32) - 1) - labels[:, 12] = np.array(x[:, 12] > 0, dtype=np.int32) * (ratio[1] * h * x[:, 12] + pad[1]) + ( - np.array(x[:, 12] > 0, dtype=np.int32) - 1) - labels[:, 13] = np.array(x[:, 13] > 0, dtype=np.int32) * (ratio[0] * w * x[:, 13] + pad[0]) + ( - np.array(x[:, 13] > 0, dtype=np.int32) - 1) - labels[:, 14] = np.array(x[:, 14] > 0, dtype=np.int32) * (ratio[1] * h * x[:, 14] + pad[1]) + ( - np.array(x[:, 14] > 0, dtype=np.int32) - 1) - - if self.augment: - # Augment imagespace - if not mosaic: - img, labels = random_perspective(img, labels, - degrees=hyp['degrees'], - translate=hyp['translate'], - scale=hyp['scale'], - shear=hyp['shear'], - perspective=hyp['perspective']) - - # Augment colorspace - augment_hsv(img, hgain=hyp['hsv_h'], sgain=hyp['hsv_s'], vgain=hyp['hsv_v']) - - # Apply cutouts - # if random.random() < 0.9: - # labels = cutout(img, labels) - - nL = len(labels) # number of labels - if nL: - labels[:, 1:5] = xyxy2xywh(labels[:, 1:5]) # convert xyxy to xywh - labels[:, [2, 4]] /= img.shape[0] # normalized height 0-1 - labels[:, [1, 3]] /= img.shape[1] # normalized width 0-1 - - labels[:, [5, 7, 9, 11, 13]] /= img.shape[1] # normalized landmark x 0-1 - labels[:, [5, 7, 9, 11, 13]] = np.where(labels[:, [5, 7, 9, 11, 13]] < 0, -1, labels[:, [5, 7, 9, 11, 13]]) - labels[:, [6, 8, 10, 12, 14]] /= img.shape[0] # normalized landmark y 0-1 - labels[:, [6, 8, 10, 12, 14]] = np.where(labels[:, [6, 8, 10, 12, 14]] < 0, -1, labels[:, [6, 8, 10, 12, 14]]) - - if self.augment: - # flip up-down - if random.random() < hyp['flipud']: - img = np.flipud(img) - if nL: - labels[:, 2] = 1 - labels[:, 2] - - labels[:, 6] = np.where(labels[:,6] < 0, -1, 1 - labels[:, 6]) - labels[:, 8] = np.where(labels[:, 8] < 0, -1, 1 - labels[:, 8]) - labels[:, 10] = np.where(labels[:, 10] < 0, -1, 1 - labels[:, 10]) - labels[:, 12] = np.where(labels[:, 12] < 0, -1, 1 - labels[:, 12]) - labels[:, 14] = np.where(labels[:, 14] < 0, -1, 1 - labels[:, 14]) - - # flip left-right - if random.random() < hyp['fliplr']: - img = np.fliplr(img) - if nL: - labels[:, 1] = 1 - labels[:, 1] - - labels[:, 5] = np.where(labels[:, 5] < 0, -1, 1 - labels[:, 5]) - labels[:, 7] = np.where(labels[:, 7] < 0, -1, 1 - labels[:, 7]) - labels[:, 9] = np.where(labels[:, 9] < 0, -1, 1 - labels[:, 9]) - labels[:, 11] = np.where(labels[:, 11] < 0, -1, 1 - labels[:, 11]) - labels[:, 13] = np.where(labels[:, 13] < 0, -1, 1 - labels[:, 13]) - - #左右镜像的时候,左眼、右眼, 左嘴角、右嘴角无法区分, 应该交换位置,便于网络学习 - eye_left = np.copy(labels[:, [5, 6]]) - mouth_left = np.copy(labels[:, [11, 12]]) - labels[:, [5, 6]] = labels[:, [7, 8]] - labels[:, [7, 8]] = eye_left - labels[:, [11, 12]] = labels[:, [13, 14]] - labels[:, [13, 14]] = mouth_left - - labels_out = torch.zeros((nL, 16)) - if nL: - labels_out[:, 1:] = torch.from_numpy(labels) - #showlabels(img, labels[:, 1:5], labels[:, 5:15]) - - # Convert - img = img[:, :, ::-1].transpose(2, 0, 1) # BGR to RGB, to 3x416x416 - img = np.ascontiguousarray(img) - #print(index, ' --- labels_out: ', labels_out) - #if nL: - #print( ' : landmarks : ', torch.max(labels_out[:, 5:15]), ' --- ', torch.min(labels_out[:, 5:15])) - return torch.from_numpy(img), labels_out, self.img_files[index], shapes - - @staticmethod - def collate_fn(batch): - img, label, path, shapes = zip(*batch) # transposed - for i, l in enumerate(label): - l[:, 0] = i # add target image index for build_targets() - return torch.stack(img, 0), torch.cat(label, 0), path, shapes - - -def showlabels(img, boxs, 
landmarks): - for box in boxs: - x,y,w,h = box[0] * img.shape[1], box[1] * img.shape[0], box[2] * img.shape[1], box[3] * img.shape[0] - #cv2.rectangle(image, (x,y), (x+w,y+h), (0,255,0), 2) - cv2.rectangle(img, (int(x - w/2), int(y - h/2)), (int(x + w/2), int(y + h/2)), (0, 255, 0), 2) - - for landmark in landmarks: - #cv2.circle(img,(60,60),30,(0,0,255)) - for i in range(5): - cv2.circle(img, (int(landmark[2*i] * img.shape[1]), int(landmark[2*i+1]*img.shape[0])), 3 ,(0,0,255), -1) - cv2.imshow('test', img) - cv2.waitKey(0) - - -def load_mosaic_face(self, index): - # loads images in a mosaic - labels4 = [] - s = self.img_size - yc, xc = [int(random.uniform(-x, 2 * s + x)) for x in self.mosaic_border] # mosaic center x, y - indices = [index] + [self.indices[random.randint(0, self.n - 1)] for _ in range(3)] # 3 additional image indices - for i, index in enumerate(indices): - # Load image - img, _, (h, w) = load_image(self, index) - - # place img in img4 - if i == 0: # top left - img4 = np.full((s * 2, s * 2, img.shape[2]), 114, dtype=np.uint8) # base image with 4 tiles - x1a, y1a, x2a, y2a = max(xc - w, 0), max(yc - h, 0), xc, yc # xmin, ymin, xmax, ymax (large image) - x1b, y1b, x2b, y2b = w - (x2a - x1a), h - (y2a - y1a), w, h # xmin, ymin, xmax, ymax (small image) - elif i == 1: # top right - x1a, y1a, x2a, y2a = xc, max(yc - h, 0), min(xc + w, s * 2), yc - x1b, y1b, x2b, y2b = 0, h - (y2a - y1a), min(w, x2a - x1a), h - elif i == 2: # bottom left - x1a, y1a, x2a, y2a = max(xc - w, 0), yc, xc, min(s * 2, yc + h) - x1b, y1b, x2b, y2b = w - (x2a - x1a), 0, w, min(y2a - y1a, h) - elif i == 3: # bottom right - x1a, y1a, x2a, y2a = xc, yc, min(xc + w, s * 2), min(s * 2, yc + h) - x1b, y1b, x2b, y2b = 0, 0, min(w, x2a - x1a), min(y2a - y1a, h) - - img4[y1a:y2a, x1a:x2a] = img[y1b:y2b, x1b:x2b] # img4[ymin:ymax, xmin:xmax] - padw = x1a - x1b - padh = y1a - y1b - - # Labels - x = self.labels[index] - labels = x.copy() - if x.size > 0: # Normalized xywh to pixel xyxy format - #box, x1,y1,x2,y2 - labels[:, 1] = w * (x[:, 1] - x[:, 3] / 2) + padw - labels[:, 2] = h * (x[:, 2] - x[:, 4] / 2) + padh - labels[:, 3] = w * (x[:, 1] + x[:, 3] / 2) + padw - labels[:, 4] = h * (x[:, 2] + x[:, 4] / 2) + padh - #10 landmarks - - labels[:, 5] = np.array(x[:, 5] > 0, dtype=np.int32) * (w * x[:, 5] + padw) + (np.array(x[:, 5] > 0, dtype=np.int32) - 1) - labels[:, 6] = np.array(x[:, 6] > 0, dtype=np.int32) * (h * x[:, 6] + padh) + (np.array(x[:, 6] > 0, dtype=np.int32) - 1) - labels[:, 7] = np.array(x[:, 7] > 0, dtype=np.int32) * (w * x[:, 7] + padw) + (np.array(x[:, 7] > 0, dtype=np.int32) - 1) - labels[:, 8] = np.array(x[:, 8] > 0, dtype=np.int32) * (h * x[:, 8] + padh) + (np.array(x[:, 8] > 0, dtype=np.int32) - 1) - labels[:, 9] = np.array(x[:, 9] > 0, dtype=np.int32) * (w * x[:, 9] + padw) + (np.array(x[:, 9] > 0, dtype=np.int32) - 1) - labels[:, 10] = np.array(x[:, 10] > 0, dtype=np.int32) * (h * x[:, 10] + padh) + (np.array(x[:, 10] > 0, dtype=np.int32) - 1) - labels[:, 11] = np.array(x[:, 11] > 0, dtype=np.int32) * (w * x[:, 11] + padw) + (np.array(x[:, 11] > 0, dtype=np.int32) - 1) - labels[:, 12] = np.array(x[:, 12] > 0, dtype=np.int32) * (h * x[:, 12] + padh) + (np.array(x[:, 12] > 0, dtype=np.int32) - 1) - labels[:, 13] = np.array(x[:, 13] > 0, dtype=np.int32) * (w * x[:, 13] + padw) + (np.array(x[:, 13] > 0, dtype=np.int32) - 1) - labels[:, 14] = np.array(x[:, 14] > 0, dtype=np.int32) * (h * x[:, 14] + padh) + (np.array(x[:, 14] > 0, dtype=np.int32) - 1) - labels4.append(labels) - - # Concat/clip 
labels - if len(labels4): - labels4 = np.concatenate(labels4, 0) - np.clip(labels4[:, 1:5], 0, 2 * s, out=labels4[:, 1:5]) # use with random_perspective - # img4, labels4 = replicate(img4, labels4) # replicate - - #landmarks - labels4[:, 5:] = np.where(labels4[:, 5:] < 0, -1, labels4[:, 5:]) - labels4[:, 5:] = np.where(labels4[:, 5:] > 2 * s, -1, labels4[:, 5:]) - - labels4[:, 5] = np.where(labels4[:, 6] == -1, -1, labels4[:, 5]) - labels4[:, 6] = np.where(labels4[:, 5] == -1, -1, labels4[:, 6]) - - labels4[:, 7] = np.where(labels4[:, 8] == -1, -1, labels4[:, 7]) - labels4[:, 8] = np.where(labels4[:, 7] == -1, -1, labels4[:, 8]) - - labels4[:, 9] = np.where(labels4[:, 10] == -1, -1, labels4[:, 9]) - labels4[:, 10] = np.where(labels4[:, 9] == -1, -1, labels4[:, 10]) - - labels4[:, 11] = np.where(labels4[:, 12] == -1, -1, labels4[:, 11]) - labels4[:, 12] = np.where(labels4[:, 11] == -1, -1, labels4[:, 12]) - - labels4[:, 13] = np.where(labels4[:, 14] == -1, -1, labels4[:, 13]) - labels4[:, 14] = np.where(labels4[:, 13] == -1, -1, labels4[:, 14]) - - # Augment - img4, labels4 = random_perspective(img4, labels4, - degrees=self.hyp['degrees'], - translate=self.hyp['translate'], - scale=self.hyp['scale'], - shear=self.hyp['shear'], - perspective=self.hyp['perspective'], - border=self.mosaic_border) # border to remove - return img4, labels4 - - -# Ancillary functions -------------------------------------------------------------------------------------------------- -def load_image(self, index): - # loads 1 image from dataset, returns img, original hw, resized hw - img = self.imgs[index] - if img is None: # not cached - path = self.img_files[index] - img = cv2.imread(path) # BGR - assert img is not None, 'Image Not Found ' + path - h0, w0 = img.shape[:2] # orig hw - r = self.img_size / max(h0, w0) # resize image to img_size - if r != 1: # always resize down, only resize up if training with augmentation - interp = cv2.INTER_AREA if r < 1 and not self.augment else cv2.INTER_LINEAR - img = cv2.resize(img, (int(w0 * r), int(h0 * r)), interpolation=interp) - return img, (h0, w0), img.shape[:2] # img, hw_original, hw_resized - else: - return self.imgs[index], self.img_hw0[index], self.img_hw[index] # img, hw_original, hw_resized - - -def augment_hsv(img, hgain=0.5, sgain=0.5, vgain=0.5): - r = np.random.uniform(-1, 1, 3) * [hgain, sgain, vgain] + 1 # random gains - hue, sat, val = cv2.split(cv2.cvtColor(img, cv2.COLOR_BGR2HSV)) - dtype = img.dtype # uint8 - - x = np.arange(0, 256, dtype=np.int16) - lut_hue = ((x * r[0]) % 180).astype(dtype) - lut_sat = np.clip(x * r[1], 0, 255).astype(dtype) - lut_val = np.clip(x * r[2], 0, 255).astype(dtype) - - img_hsv = cv2.merge((cv2.LUT(hue, lut_hue), cv2.LUT(sat, lut_sat), cv2.LUT(val, lut_val))).astype(dtype) - cv2.cvtColor(img_hsv, cv2.COLOR_HSV2BGR, dst=img) # no return needed - - # Histogram equalization - # if random.random() < 0.2: - # for i in range(3): - # img[:, :, i] = cv2.equalizeHist(img[:, :, i]) - -def replicate(img, labels): - # Replicate labels - h, w = img.shape[:2] - boxes = labels[:, 1:].astype(int) - x1, y1, x2, y2 = boxes.T - s = ((x2 - x1) + (y2 - y1)) / 2 # side length (pixels) - for i in s.argsort()[:round(s.size * 0.5)]: # smallest indices - x1b, y1b, x2b, y2b = boxes[i] - bh, bw = y2b - y1b, x2b - x1b - yc, xc = int(random.uniform(0, h - bh)), int(random.uniform(0, w - bw)) # offset x, y - x1a, y1a, x2a, y2a = [xc, yc, xc + bw, yc + bh] - img[y1a:y2a, x1a:x2a] = img[y1b:y2b, x1b:x2b] # img4[ymin:ymax, xmin:xmax] - labels = 
np.append(labels, [[labels[i, 0], x1a, y1a, x2a, y2a]], axis=0) - - return img, labels - - -def letterbox(img, new_shape=(640, 640), color=(114, 114, 114), auto=True, scaleFill=False, scaleup=True): - # Resize image to a 32-pixel-multiple rectangle https://github.com/ultralytics/yolov3/issues/232 - shape = img.shape[:2] # current shape [height, width] - if isinstance(new_shape, int): - new_shape = (new_shape, new_shape) - - # Scale ratio (new / old) - r = min(new_shape[0] / shape[0], new_shape[1] / shape[1]) - if not scaleup: # only scale down, do not scale up (for better test mAP) - r = min(r, 1.0) - - # Compute padding - ratio = r, r # width, height ratios - new_unpad = int(round(shape[1] * r)), int(round(shape[0] * r)) - dw, dh = new_shape[1] - new_unpad[0], new_shape[0] - new_unpad[1] # wh padding - if auto: # minimum rectangle - dw, dh = np.mod(dw, 64), np.mod(dh, 64) # wh padding - elif scaleFill: # stretch - dw, dh = 0.0, 0.0 - new_unpad = (new_shape[1], new_shape[0]) - ratio = new_shape[1] / shape[1], new_shape[0] / shape[0] # width, height ratios - - dw /= 2 # divide padding into 2 sides - dh /= 2 - - if shape[::-1] != new_unpad: # resize - img = cv2.resize(img, new_unpad, interpolation=cv2.INTER_LINEAR) - top, bottom = int(round(dh - 0.1)), int(round(dh + 0.1)) - left, right = int(round(dw - 0.1)), int(round(dw + 0.1)) - img = cv2.copyMakeBorder(img, top, bottom, left, right, cv2.BORDER_CONSTANT, value=color) # add border - return img, ratio, (dw, dh) - - -def random_perspective(img, targets=(), degrees=10, translate=.1, scale=.1, shear=10, perspective=0.0, border=(0, 0)): - # torchvision.transforms.RandomAffine(degrees=(-10, 10), translate=(.1, .1), scale=(.9, 1.1), shear=(-10, 10)) - # targets = [cls, xyxy] - - height = img.shape[0] + border[0] * 2 # shape(h,w,c) - width = img.shape[1] + border[1] * 2 - - # Center - C = np.eye(3) - C[0, 2] = -img.shape[1] / 2 # x translation (pixels) - C[1, 2] = -img.shape[0] / 2 # y translation (pixels) - - # Perspective - P = np.eye(3) - P[2, 0] = random.uniform(-perspective, perspective) # x perspective (about y) - P[2, 1] = random.uniform(-perspective, perspective) # y perspective (about x) - - # Rotation and Scale - R = np.eye(3) - a = random.uniform(-degrees, degrees) - # a += random.choice([-180, -90, 0, 90]) # add 90deg rotations to small rotations - s = random.uniform(1 - scale, 1 + scale) - # s = 2 ** random.uniform(-scale, scale) - R[:2] = cv2.getRotationMatrix2D(angle=a, center=(0, 0), scale=s) - - # Shear - S = np.eye(3) - S[0, 1] = math.tan(random.uniform(-shear, shear) * math.pi / 180) # x shear (deg) - S[1, 0] = math.tan(random.uniform(-shear, shear) * math.pi / 180) # y shear (deg) - - # Translation - T = np.eye(3) - T[0, 2] = random.uniform(0.5 - translate, 0.5 + translate) * width # x translation (pixels) - T[1, 2] = random.uniform(0.5 - translate, 0.5 + translate) * height # y translation (pixels) - - # Combined rotation matrix - M = T @ S @ R @ P @ C # order of operations (right to left) is IMPORTANT - if (border[0] != 0) or (border[1] != 0) or (M != np.eye(3)).any(): # image changed - if perspective: - img = cv2.warpPerspective(img, M, dsize=(width, height), borderValue=(114, 114, 114)) - else: # affine - img = cv2.warpAffine(img, M[:2], dsize=(width, height), borderValue=(114, 114, 114)) - - # Visualize - # import matplotlib.pyplot as plt - # ax = plt.subplots(1, 2, figsize=(12, 6))[1].ravel() - # ax[0].imshow(img[:, :, ::-1]) # base - # ax[1].imshow(img2[:, :, ::-1]) # warped - - # Transform label coordinates - n = 
len(targets) - if n: - # warp points - #xy = np.ones((n * 4, 3)) - xy = np.ones((n * 9, 3)) - xy[:, :2] = targets[:, [1, 2, 3, 4, 1, 4, 3, 2, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14]].reshape(n * 9, 2) # x1y1, x2y2, x1y2, x2y1 - xy = xy @ M.T # transform - if perspective: - xy = (xy[:, :2] / xy[:, 2:3]).reshape(n, 18) # rescale - else: # affine - xy = xy[:, :2].reshape(n, 18) - - # create new boxes - x = xy[:, [0, 2, 4, 6]] - y = xy[:, [1, 3, 5, 7]] - - landmarks = xy[:, [8, 9, 10, 11, 12, 13, 14, 15, 16, 17]] - mask = np.array(targets[:, 5:] > 0, dtype=np.int32) - landmarks = landmarks * mask - landmarks = landmarks + mask - 1 - - landmarks = np.where(landmarks < 0, -1, landmarks) - landmarks[:, [0, 2, 4, 6, 8]] = np.where(landmarks[:, [0, 2, 4, 6, 8]] > width, -1, landmarks[:, [0, 2, 4, 6, 8]]) - landmarks[:, [1, 3, 5, 7, 9]] = np.where(landmarks[:, [1, 3, 5, 7, 9]] > height, -1,landmarks[:, [1, 3, 5, 7, 9]]) - - landmarks[:, 0] = np.where(landmarks[:, 1] == -1, -1, landmarks[:, 0]) - landmarks[:, 1] = np.where(landmarks[:, 0] == -1, -1, landmarks[:, 1]) - - landmarks[:, 2] = np.where(landmarks[:, 3] == -1, -1, landmarks[:, 2]) - landmarks[:, 3] = np.where(landmarks[:, 2] == -1, -1, landmarks[:, 3]) - - landmarks[:, 4] = np.where(landmarks[:, 5] == -1, -1, landmarks[:, 4]) - landmarks[:, 5] = np.where(landmarks[:, 4] == -1, -1, landmarks[:, 5]) - - landmarks[:, 6] = np.where(landmarks[:, 7] == -1, -1, landmarks[:, 6]) - landmarks[:, 7] = np.where(landmarks[:, 6] == -1, -1, landmarks[:, 7]) - - landmarks[:, 8] = np.where(landmarks[:, 9] == -1, -1, landmarks[:, 8]) - landmarks[:, 9] = np.where(landmarks[:, 8] == -1, -1, landmarks[:, 9]) - - targets[:,5:] = landmarks - - xy = np.concatenate((x.min(1), y.min(1), x.max(1), y.max(1))).reshape(4, n).T - - # # apply angle-based reduction of bounding boxes - # radians = a * math.pi / 180 - # reduction = max(abs(math.sin(radians)), abs(math.cos(radians))) ** 0.5 - # x = (xy[:, 2] + xy[:, 0]) / 2 - # y = (xy[:, 3] + xy[:, 1]) / 2 - # w = (xy[:, 2] - xy[:, 0]) * reduction - # h = (xy[:, 3] - xy[:, 1]) * reduction - # xy = np.concatenate((x - w / 2, y - h / 2, x + w / 2, y + h / 2)).reshape(4, n).T - - # clip boxes - xy[:, [0, 2]] = xy[:, [0, 2]].clip(0, width) - xy[:, [1, 3]] = xy[:, [1, 3]].clip(0, height) - - # filter candidates - i = box_candidates(box1=targets[:, 1:5].T * s, box2=xy.T) - targets = targets[i] - targets[:, 1:5] = xy[i] - - return img, targets - - -def box_candidates(box1, box2, wh_thr=2, ar_thr=20, area_thr=0.1): # box1(4,n), box2(4,n) - # Compute candidate boxes: box1 before augment, box2 after augment, wh_thr (pixels), aspect_ratio_thr, area_ratio - w1, h1 = box1[2] - box1[0], box1[3] - box1[1] - w2, h2 = box2[2] - box2[0], box2[3] - box2[1] - ar = np.maximum(w2 / (h2 + 1e-16), h2 / (w2 + 1e-16)) # aspect ratio - return (w2 > wh_thr) & (h2 > wh_thr) & (w2 * h2 / (w1 * h1 + 1e-16) > area_thr) & (ar < ar_thr) # candidates - - -def cutout(image, labels): - # Applies image cutout augmentation https://arxiv.org/abs/1708.04552 - h, w = image.shape[:2] - - def bbox_ioa(box1, box2): - # Returns the intersection over box2 area given box1, box2. box1 is 4, box2 is nx4. 
boxes are x1y1x2y2 - box2 = box2.transpose() - - # Get the coordinates of bounding boxes - b1_x1, b1_y1, b1_x2, b1_y2 = box1[0], box1[1], box1[2], box1[3] - b2_x1, b2_y1, b2_x2, b2_y2 = box2[0], box2[1], box2[2], box2[3] - - # Intersection area - inter_area = (np.minimum(b1_x2, b2_x2) - np.maximum(b1_x1, b2_x1)).clip(0) * \ - (np.minimum(b1_y2, b2_y2) - np.maximum(b1_y1, b2_y1)).clip(0) - - # box2 area - box2_area = (b2_x2 - b2_x1) * (b2_y2 - b2_y1) + 1e-16 - - # Intersection over box2 area - return inter_area / box2_area - - # create random masks - scales = [0.5] * 1 + [0.25] * 2 + [0.125] * 4 + [0.0625] * 8 + [0.03125] * 16 # image size fraction - for s in scales: - mask_h = random.randint(1, int(h * s)) - mask_w = random.randint(1, int(w * s)) - - # box - xmin = max(0, random.randint(0, w) - mask_w // 2) - ymin = max(0, random.randint(0, h) - mask_h // 2) - xmax = min(w, xmin + mask_w) - ymax = min(h, ymin + mask_h) - - # apply random color mask - image[ymin:ymax, xmin:xmax] = [random.randint(64, 191) for _ in range(3)] - - # return unobscured labels - if len(labels) and s > 0.03: - box = np.array([xmin, ymin, xmax, ymax], dtype=np.float32) - ioa = bbox_ioa(box, labels[:, 1:5]) # intersection over area - labels = labels[ioa < 0.60] # remove >60% obscured labels - - return labels - - -def create_folder(path='./new'): - # Create folder - if os.path.exists(path): - shutil.rmtree(path) # delete output folder - os.makedirs(path) # make new output folder - - -def flatten_recursive(path='../coco128'): - # Flatten a recursive directory by bringing all files to top level - new_path = Path(path + '_flat') - create_folder(new_path) - for file in tqdm(glob.glob(str(Path(path)) + '/**/*.*', recursive=True)): - shutil.copyfile(file, new_path / Path(file).name) - - -def extract_boxes(path='../coco128/'): # from utils.datasets import *; extract_boxes('../coco128') - # Convert detection dataset into classification dataset, with one directory per class - - path = Path(path) # images dir - shutil.rmtree(path / 'classifier') if (path / 'classifier').is_dir() else None # remove existing - files = list(path.rglob('*.*')) - n = len(files) # number of files - for im_file in tqdm(files, total=n): - if im_file.suffix[1:] in img_formats: - # image - im = cv2.imread(str(im_file))[..., ::-1] # BGR to RGB - h, w = im.shape[:2] - - # labels - lb_file = Path(img2label_paths([str(im_file)])[0]) - if Path(lb_file).exists(): - with open(lb_file, 'r') as f: - lb = np.array([x.split() for x in f.read().strip().splitlines()], dtype=np.float32) # labels - - for j, x in enumerate(lb): - c = int(x[0]) # class - f = (path / 'classifier') / f'{c}' / f'{path.stem}_{im_file.stem}_{j}.jpg' # new filename - if not f.parent.is_dir(): - f.parent.mkdir(parents=True) - - b = x[1:] * [w, h, w, h] # box - # b[2:] = b[2:].max() # rectangle to square - b[2:] = b[2:] * 1.2 + 3 # pad - b = xywh2xyxy(b.reshape(-1, 4)).ravel().astype(np.int) - - b[[0, 2]] = np.clip(b[[0, 2]], 0, w) # clip boxes outside of image - b[[1, 3]] = np.clip(b[[1, 3]], 0, h) - assert cv2.imwrite(str(f), im[b[1]:b[3], b[0]:b[2]]), f'box failure in {f}' - - -def autosplit(path='../coco128', weights=(0.9, 0.1, 0.0)): # from utils.datasets import *; autosplit('../coco128') - """ Autosplit a dataset into train/val/test splits and save path/autosplit_*.txt files - # Arguments - path: Path to images directory - weights: Train, val, test weights (list) - """ - path = Path(path) # images dir - files = list(path.rglob('*.*')) - n = len(files) # number of files - indices = 
random.choices([0, 1, 2], weights=weights, k=n) # assign each image to a split - txt = ['autosplit_train.txt', 'autosplit_val.txt', 'autosplit_test.txt'] # 3 txt files - [(path / x).unlink() for x in txt if (path / x).exists()] # remove existing - for i, img in tqdm(zip(indices, files), total=n): - if img.suffix[1:] in img_formats: - with open(path / txt[i], 'a') as f: - f.write(str(img) + '\n') # add image to txt file diff --git a/spaces/sczhou/CodeFormer/app.py b/spaces/sczhou/CodeFormer/app.py deleted file mode 100644 index 485c29cf06adc43e1dcefe561497f53e85f73116..0000000000000000000000000000000000000000 --- a/spaces/sczhou/CodeFormer/app.py +++ /dev/null @@ -1,305 +0,0 @@ -""" -This file is used for deploying hugging face demo: -https://huggingface.co/spaces/sczhou/CodeFormer -""" - -import sys -sys.path.append('CodeFormer') -import os -import cv2 -import torch -import torch.nn.functional as F -import gradio as gr - -from torchvision.transforms.functional import normalize - -from basicsr.utils import imwrite, img2tensor, tensor2img -from basicsr.utils.download_util import load_file_from_url -from facelib.utils.face_restoration_helper import FaceRestoreHelper -from facelib.utils.misc import is_gray -from basicsr.archs.rrdbnet_arch import RRDBNet -from basicsr.utils.realesrgan_utils import RealESRGANer - -from basicsr.utils.registry import ARCH_REGISTRY - - -os.system("pip freeze") - -pretrain_model_url = { - 'codeformer': 'https://github.com/sczhou/CodeFormer/releases/download/v0.1.0/codeformer.pth', - 'detection': 'https://github.com/sczhou/CodeFormer/releases/download/v0.1.0/detection_Resnet50_Final.pth', - 'parsing': 'https://github.com/sczhou/CodeFormer/releases/download/v0.1.0/parsing_parsenet.pth', - 'realesrgan': 'https://github.com/sczhou/CodeFormer/releases/download/v0.1.0/RealESRGAN_x2plus.pth' -} -# download weights -if not os.path.exists('CodeFormer/weights/CodeFormer/codeformer.pth'): - load_file_from_url(url=pretrain_model_url['codeformer'], model_dir='CodeFormer/weights/CodeFormer', progress=True, file_name=None) -if not os.path.exists('CodeFormer/weights/facelib/detection_Resnet50_Final.pth'): - load_file_from_url(url=pretrain_model_url['detection'], model_dir='CodeFormer/weights/facelib', progress=True, file_name=None) -if not os.path.exists('CodeFormer/weights/facelib/parsing_parsenet.pth'): - load_file_from_url(url=pretrain_model_url['parsing'], model_dir='CodeFormer/weights/facelib', progress=True, file_name=None) -if not os.path.exists('CodeFormer/weights/realesrgan/RealESRGAN_x2plus.pth'): - load_file_from_url(url=pretrain_model_url['realesrgan'], model_dir='CodeFormer/weights/realesrgan', progress=True, file_name=None) - -# download images -torch.hub.download_url_to_file( - 'https://replicate.com/api/models/sczhou/codeformer/files/fa3fe3d1-76b0-4ca8-ac0d-0a925cb0ff54/06.png', - '01.png') -torch.hub.download_url_to_file( - 'https://replicate.com/api/models/sczhou/codeformer/files/a1daba8e-af14-4b00-86a4-69cec9619b53/04.jpg', - '02.jpg') -torch.hub.download_url_to_file( - 'https://replicate.com/api/models/sczhou/codeformer/files/542d64f9-1712-4de7-85f7-3863009a7c3d/03.jpg', - '03.jpg') -torch.hub.download_url_to_file( - 'https://replicate.com/api/models/sczhou/codeformer/files/a11098b0-a18a-4c02-a19a-9a7045d68426/010.jpg', - '04.jpg') -torch.hub.download_url_to_file( - 'https://replicate.com/api/models/sczhou/codeformer/files/7cf19c2c-e0cf-4712-9af8-cf5bdbb8d0ee/012.jpg', - '05.jpg') -torch.hub.download_url_to_file( - 
'https://raw.githubusercontent.com/sczhou/CodeFormer/master/inputs/cropped_faces/0729.png', - '06.png') - -def imread(img_path): - img = cv2.imread(img_path) - img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) - return img - -# set enhancer with RealESRGAN -def set_realesrgan(): - half = True if torch.cuda.is_available() else False - model = RRDBNet( - num_in_ch=3, - num_out_ch=3, - num_feat=64, - num_block=23, - num_grow_ch=32, - scale=2, - ) - upsampler = RealESRGANer( - scale=2, - model_path="CodeFormer/weights/realesrgan/RealESRGAN_x2plus.pth", - model=model, - tile=400, - tile_pad=40, - pre_pad=0, - half=half, - ) - return upsampler - -upsampler = set_realesrgan() -device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') -codeformer_net = ARCH_REGISTRY.get("CodeFormer")( - dim_embd=512, - codebook_size=1024, - n_head=8, - n_layers=9, - connect_list=["32", "64", "128", "256"], -).to(device) -ckpt_path = "CodeFormer/weights/CodeFormer/codeformer.pth" -checkpoint = torch.load(ckpt_path)["params_ema"] -codeformer_net.load_state_dict(checkpoint) -codeformer_net.eval() - -os.makedirs('output', exist_ok=True) - -def inference(image, face_align, background_enhance, face_upsample, upscale, codeformer_fidelity): - """Run a single prediction on the model""" - try: # global try - # take the default setting for the demo - only_center_face = False - draw_box = False - detection_model = "retinaface_resnet50" - - print('Inp:', image, background_enhance, face_upsample, upscale, codeformer_fidelity) - face_align = face_align if face_align is not None else True - background_enhance = background_enhance if background_enhance is not None else True - face_upsample = face_upsample if face_upsample is not None else True - upscale = upscale if (upscale is not None and upscale > 0) else 2 - - has_aligned = not face_align - upscale = 1 if has_aligned else upscale - - img = cv2.imread(str(image), cv2.IMREAD_COLOR) - print('\timage size:', img.shape) - - upscale = int(upscale) # convert type to int - if upscale > 4: # avoid memory exceeded due to too large upscale - upscale = 4 - if upscale > 2 and max(img.shape[:2])>1000: # avoid memory exceeded due to too large img resolution - upscale = 2 - if max(img.shape[:2]) > 1500: # avoid memory exceeded due to too large img resolution - upscale = 1 - background_enhance = False - face_upsample = False - - face_helper = FaceRestoreHelper( - upscale, - face_size=512, - crop_ratio=(1, 1), - det_model=detection_model, - save_ext="png", - use_parse=True, - device=device, - ) - bg_upsampler = upsampler if background_enhance else None - face_upsampler = upsampler if face_upsample else None - - if has_aligned: - # the input faces are already cropped and aligned - img = cv2.resize(img, (512, 512), interpolation=cv2.INTER_LINEAR) - face_helper.is_gray = is_gray(img, threshold=5) - if face_helper.is_gray: - print('\tgrayscale input: True') - face_helper.cropped_faces = [img] - else: - face_helper.read_image(img) - # get face landmarks for each face - num_det_faces = face_helper.get_face_landmarks_5( - only_center_face=only_center_face, resize=640, eye_dist_threshold=5 - ) - print(f'\tdetect {num_det_faces} faces') - # align and warp each face - face_helper.align_warp_face() - - # face restoration for each cropped face - for idx, cropped_face in enumerate(face_helper.cropped_faces): - # prepare data - cropped_face_t = img2tensor( - cropped_face / 255.0, bgr2rgb=True, float32=True - ) - normalize(cropped_face_t, (0.5, 0.5, 0.5), (0.5, 0.5, 0.5), inplace=True) - cropped_face_t = 
cropped_face_t.unsqueeze(0).to(device) - - try: - with torch.no_grad(): - output = codeformer_net( - cropped_face_t, w=codeformer_fidelity, adain=True - )[0] - restored_face = tensor2img(output, rgb2bgr=True, min_max=(-1, 1)) - del output - torch.cuda.empty_cache() - except RuntimeError as error: - print(f"Failed inference for CodeFormer: {error}") - restored_face = tensor2img( - cropped_face_t, rgb2bgr=True, min_max=(-1, 1) - ) - - restored_face = restored_face.astype("uint8") - face_helper.add_restored_face(restored_face) - - # paste_back - if not has_aligned: - # upsample the background - if bg_upsampler is not None: - # Now only support RealESRGAN for upsampling background - bg_img = bg_upsampler.enhance(img, outscale=upscale)[0] - else: - bg_img = None - face_helper.get_inverse_affine(None) - # paste each restored face to the input image - if face_upsample and face_upsampler is not None: - restored_img = face_helper.paste_faces_to_input_image( - upsample_img=bg_img, - draw_box=draw_box, - face_upsampler=face_upsampler, - ) - else: - restored_img = face_helper.paste_faces_to_input_image( - upsample_img=bg_img, draw_box=draw_box - ) - else: - restored_img = restored_face - - # save restored img - save_path = f'output/out.png' - imwrite(restored_img, str(save_path)) - - restored_img = cv2.cvtColor(restored_img, cv2.COLOR_BGR2RGB) - return restored_img - except Exception as error: - print('Global exception', error) - return None, None - - -title = "CodeFormer: Robust Face Restoration and Enhancement Network" - -description = r"""
CodeFormer logo
-
-Official Gradio demo for Towards Robust Blind Face Restoration with Codebook Lookup Transformer (NeurIPS 2022)
-🔥 CodeFormer is a robust face restoration algorithm for old photos or AI-generated faces.
-🤗 Try CodeFormer for improved stable-diffusion generation!
-""" - -article = r""" -If CodeFormer is helpful, please help to ⭐ the Github Repo. Thanks! -[![GitHub Stars](https://img.shields.io/github/stars/sczhou/CodeFormer?style=social)](https://github.com/sczhou/CodeFormer) - ---- - -📝 **Citation** - -If our work is useful for your research, please consider citing: -```bibtex -@inproceedings{zhou2022codeformer, - author = {Zhou, Shangchen and Chan, Kelvin C.K. and Li, Chongyi and Loy, Chen Change}, - title = {Towards Robust Blind Face Restoration with Codebook Lookup TransFormer}, - booktitle = {NeurIPS}, - year = {2022} -} -``` - -📋 **License** - -This project is licensed under S-Lab License 1.0. -Redistribution and use for non-commercial purposes should follow this license. - -📧 **Contact** - -If you have any questions, please feel free to reach me out at shangchenzhou@gmail.com. - -🤗 **Find Me:** - - - - - - - -
Github Follow Twitter Follow
- -
visitors
-""" - -demo = gr.Interface( - inference, [ - gr.Image(type="filepath", label="Input"), - gr.Checkbox(value=True, label="Pre_Face_Align"), - gr.Checkbox(value=True, label="Background_Enhance"), - gr.Checkbox(value=True, label="Face_Upsample"), - gr.Number(value=2, label="Rescaling_Factor (up to 4)"), - gr.Slider(0, 1, value=0.5, step=0.01, label='Codeformer_Fidelity (0 for better quality, 1 for better identity)') - ], [ - gr.Image(type="numpy", label="Output").style(height='auto') - ], - title=title, - description=description, - article=article, - examples=[ - ['01.png', True, True, True, 2, 0.7], - ['02.jpg', True, True, True, 2, 0.7], - ['03.jpg', True, True, True, 2, 0.7], - ['04.jpg', True, True, True, 2, 0.1], - ['05.jpg', True, True, True, 2, 0.1], - ['06.png', False, True, True, 1, 0.5] - ]) - -DEBUG = os.getenv('DEBUG') == '1' -demo.queue(api_open=False, concurrency_count=2, max_size=10) -demo.launch(debug=DEBUG) -# demo.launch(debug=DEBUG, share=True) \ No newline at end of file diff --git a/spaces/sczhou/ProPainter/RAFT/__init__.py b/spaces/sczhou/ProPainter/RAFT/__init__.py deleted file mode 100644 index e7179ea3ce4ad81425c619772d4bc47bc7ceea3a..0000000000000000000000000000000000000000 --- a/spaces/sczhou/ProPainter/RAFT/__init__.py +++ /dev/null @@ -1,2 +0,0 @@ -# from .demo import RAFT_infer -from .raft import RAFT diff --git a/spaces/segments-tobias/conex/espnet2/torch_utils/set_all_random_seed.py b/spaces/segments-tobias/conex/espnet2/torch_utils/set_all_random_seed.py deleted file mode 100644 index ebdca3f537aac53bdc6e6cea168c49805bdf2d2f..0000000000000000000000000000000000000000 --- a/spaces/segments-tobias/conex/espnet2/torch_utils/set_all_random_seed.py +++ /dev/null @@ -1,10 +0,0 @@ -import random - -import numpy as np -import torch - - -def set_all_random_seed(seed: int): - random.seed(seed) - np.random.seed(seed) - torch.random.manual_seed(seed) diff --git a/spaces/seungheondoh/LP-Music-Caps-demo/model/bart.py b/spaces/seungheondoh/LP-Music-Caps-demo/model/bart.py deleted file mode 100644 index 49b39863303bff7d25cde004cd8f2cc019847329..0000000000000000000000000000000000000000 --- a/spaces/seungheondoh/LP-Music-Caps-demo/model/bart.py +++ /dev/null @@ -1,151 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F -import numpy as np -from .modules import AudioEncoder -from transformers import BartForConditionalGeneration, BartTokenizer, BartConfig - -class BartCaptionModel(nn.Module): - def __init__(self, n_mels=128, num_of_conv=6, sr=16000, duration=10, max_length=128, label_smoothing=0.1, bart_type="facebook/bart-base", audio_dim=768): - super(BartCaptionModel, self).__init__() - # non-finetunning case - bart_config = BartConfig.from_pretrained(bart_type) - self.tokenizer = BartTokenizer.from_pretrained(bart_type) - self.bart = BartForConditionalGeneration(bart_config) - - self.n_sample = sr * duration - self.hop_length = int(0.01 * sr) # hard coding hop_size - self.n_frames = int(self.n_sample // self.hop_length) - self.num_of_stride_conv = num_of_conv - 1 - self.n_ctx = int(self.n_frames // 2**self.num_of_stride_conv) + 1 - self.audio_encoder = AudioEncoder( - n_mels = n_mels, # hard coding n_mel - n_ctx = self.n_ctx, - audio_dim = audio_dim, - text_dim = self.bart.config.hidden_size, - num_of_stride_conv = self.num_of_stride_conv - ) - - self.max_length = max_length - self.loss_fct = nn.CrossEntropyLoss(label_smoothing= label_smoothing, ignore_index=-100) - - @property - def device(self): - return list(self.parameters())[0].device - - def 
shift_tokens_right(self, input_ids: torch.Tensor, pad_token_id: int, decoder_start_token_id: int): - """ - Shift input ids one token to the right.ls - """ - shifted_input_ids = input_ids.new_zeros(input_ids.shape) - shifted_input_ids[:, 1:] = input_ids[:, :-1].clone() - shifted_input_ids[:, 0] = decoder_start_token_id - - if pad_token_id is None: - raise ValueError("self.model.config.pad_token_id has to be defined.") - # replace possible -100 values in labels by `pad_token_id` - shifted_input_ids.masked_fill_(shifted_input_ids == -100, pad_token_id) - return shifted_input_ids - - def forward_encoder(self, audio): - audio_embs = self.audio_encoder(audio) - encoder_outputs = self.bart.model.encoder( - input_ids=None, - inputs_embeds=audio_embs, - return_dict=True - )["last_hidden_state"] - return encoder_outputs, audio_embs - - def forward_decoder(self, text, encoder_outputs): - text = self.tokenizer(text, - padding='longest', - truncation=True, - max_length=self.max_length, - return_tensors="pt") - input_ids = text["input_ids"].to(self.device) - attention_mask = text["attention_mask"].to(self.device) - - decoder_targets = input_ids.masked_fill( - input_ids == self.tokenizer.pad_token_id, -100 - ) - - decoder_input_ids = self.shift_tokens_right( - decoder_targets, self.bart.config.pad_token_id, self.bart.config.decoder_start_token_id - ) - - decoder_outputs = self.bart( - input_ids=None, - attention_mask=None, - decoder_input_ids=decoder_input_ids, - decoder_attention_mask=attention_mask, - inputs_embeds=None, - labels=None, - encoder_outputs=(encoder_outputs,), - return_dict=True - ) - lm_logits = decoder_outputs["logits"] - loss = self.loss_fct(lm_logits.view(-1, self.tokenizer.vocab_size), decoder_targets.view(-1)) - return loss - - def forward(self, audio, text): - encoder_outputs, _ = self.forward_encoder(audio) - loss = self.forward_decoder(text, encoder_outputs) - return loss - - def generate(self, - samples, - use_nucleus_sampling=False, - num_beams=5, - max_length=128, - min_length=2, - top_p=0.9, - repetition_penalty=1.0, - ): - - # self.bart.force_bos_token_to_be_generated = True - audio_embs = self.audio_encoder(samples) - encoder_outputs = self.bart.model.encoder( - input_ids=None, - attention_mask=None, - head_mask=None, - inputs_embeds=audio_embs, - output_attentions=None, - output_hidden_states=None, - return_dict=True) - - input_ids = torch.zeros((encoder_outputs['last_hidden_state'].size(0), 1)).long().to(self.device) - input_ids[:, 0] = self.bart.config.decoder_start_token_id - decoder_attention_mask = torch.ones((encoder_outputs['last_hidden_state'].size(0), 1)).long().to(self.device) - if use_nucleus_sampling: - outputs = self.bart.generate( - input_ids=None, - attention_mask=None, - decoder_input_ids=input_ids, - decoder_attention_mask=decoder_attention_mask, - encoder_outputs=encoder_outputs, - max_length=max_length, - min_length=min_length, - do_sample=True, - top_p=top_p, - num_return_sequences=1, - repetition_penalty=1.1) - else: - outputs = self.bart.generate(input_ids=None, - attention_mask=None, - decoder_input_ids=input_ids, - decoder_attention_mask=decoder_attention_mask, - encoder_outputs=encoder_outputs, - head_mask=None, - decoder_head_mask=None, - inputs_embeds=None, - decoder_inputs_embeds=None, - use_cache=None, - output_attentions=None, - output_hidden_states=None, - max_length=max_length, - min_length=min_length, - num_beams=num_beams, - repetition_penalty=repetition_penalty) - - captions = self.tokenizer.batch_decode(outputs, skip_special_tokens=True) 
- return captions diff --git a/spaces/shabnam91/Sanskrit-TTS/text/cleaners.py b/spaces/shabnam91/Sanskrit-TTS/text/cleaners.py deleted file mode 100644 index 868a236f3fa483f12e7a56120834662c80e1450d..0000000000000000000000000000000000000000 --- a/spaces/shabnam91/Sanskrit-TTS/text/cleaners.py +++ /dev/null @@ -1,5 +0,0 @@ -def sanskrit_cleaners(text): - text = text.replace('॥', '।').replace('ॐ', 'ओम्') - if len(text)==0 or text[-1] != '।': - text += ' ।' - return text diff --git a/spaces/shenfangqi/Retrieval-based-Voice-Conversion-WebUI/Retrieval-based-Voice-Conversion-WebUI/docs/README.ko.md b/spaces/shenfangqi/Retrieval-based-Voice-Conversion-WebUI/Retrieval-based-Voice-Conversion-WebUI/docs/README.ko.md deleted file mode 100644 index fdaa9431d642b50cd693905ebc9e90dafb5b1be9..0000000000000000000000000000000000000000 --- a/spaces/shenfangqi/Retrieval-based-Voice-Conversion-WebUI/Retrieval-based-Voice-Conversion-WebUI/docs/README.ko.md +++ /dev/null @@ -1,102 +0,0 @@ -
- -

Retrieval-based-Voice-Conversion-WebUI

-VITS 기반의 간단하고 사용하기 쉬운 음성 변환 프레임워크.

- -[![madewithlove](https://forthebadge.com/images/badges/built-with-love.svg)](https://github.com/liujing04/Retrieval-based-Voice-Conversion-WebUI) - -
- -[![Open In Colab](https://img.shields.io/badge/Colab-F9AB00?style=for-the-badge&logo=googlecolab&color=525252)](https://colab.research.google.com/github/liujing04/Retrieval-based-Voice-Conversion-WebUI/blob/main/Retrieval_based_Voice_Conversion_WebUI.ipynb) -[![Licence](https://img.shields.io/github/license/liujing04/Retrieval-based-Voice-Conversion-WebUI?style=for-the-badge)](https://github.com/liujing04/Retrieval-based-Voice-Conversion-WebUI/blob/main/%E4%BD%BF%E7%94%A8%E9%9C%80%E9%81%B5%E5%AE%88%E7%9A%84%E5%8D%8F%E8%AE%AE-LICENSE.txt) -[![Huggingface](https://img.shields.io/badge/🤗%20-Spaces-yellow.svg?style=for-the-badge)](https://huggingface.co/lj1995/VoiceConversionWebUI/tree/main/) - -[![Discord](https://img.shields.io/badge/RVC%20Developers-Discord-7289DA?style=for-the-badge&logo=discord&logoColor=white)](https://discord.gg/HcsmBBGyVk) - -
- ------- -[**업데이트 로그**](https://github.com/liujing04/Retrieval-based-Voice-Conversion-WebUI/blob/main/Changelog_CN.md) - -[**English**](./README.en.md) | [**中文简体**](../README.md) | [**日本語**](./README.ja.md) | [**한국어**](./README.ko.md) ([**韓國語**](./README.ko.han.md)) - -> [데모 영상](https://www.bilibili.com/video/BV1pm4y1z7Gm/)을 확인해 보세요! - -> RVC를 활용한 실시간 음성변환: [w-okada/voice-changer](https://github.com/w-okada/voice-changer) - -> 기본 모델은 50시간 가량의 고퀄리티 오픈 소스 VCTK 데이터셋을 사용하였으므로, 저작권상의 염려가 없으니 안심하고 사용하시기 바랍니다. - -> 저작권 문제가 없는 고퀄리티의 노래를 이후에도 계속해서 훈련할 예정입니다. - -## 소개 -본 Repo는 다음과 같은 특징을 가지고 있습니다: -+ top1 검색을 이용하여 입력 음색 특징을 훈련 세트 음색 특징으로 대체하여 음색의 누출을 방지; -+ 상대적으로 낮은 성능의 GPU에서도 빠른 훈련 가능; -+ 적은 양의 데이터로 훈련해도 좋은 결과를 얻을 수 있음 (최소 10분 이상의 저잡음 음성 데이터를 사용하는 것을 권장); -+ 모델 융합을 통한 음색의 변조 가능 (ckpt 처리 탭->ckpt 병합 선택); -+ 사용하기 쉬운 WebUI (웹 인터페이스); -+ UVR5 모델을 이용하여 목소리와 배경음악의 빠른 분리; - -## 환경의 준비 -poetry를 통해 dependecies를 설치하는 것을 권장합니다. - -다음 명령은 Python 버전 3.8 이상의 환경에서 실행되어야 합니다: -```bash -# PyTorch 관련 주요 dependencies 설치, 이미 설치되어 있는 경우 건너뛰기 가능 -# 참조: https://pytorch.org/get-started/locally/ -pip install torch torchvision torchaudio - -# Windows + Nvidia Ampere Architecture(RTX30xx)를 사용하고 있다면, https://github.com/liujing04/Retrieval-based-Voice-Conversion-WebUI/issues/21 에서 명시된 것과 같이 PyTorch에 맞는 CUDA 버전을 지정해야 합니다. -#pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu117 - -# Poetry 설치, 이미 설치되어 있는 경우 건너뛰기 가능 -# Reference: https://python-poetry.org/docs/#installation -curl -sSL https://install.python-poetry.org | python3 - - -# Dependecies 설치 -poetry install -``` -pip를 활용하여 dependencies를 설치하여도 무방합니다. - -**공지**: `MacOS`에서 `faiss 1.7.2`를 사용하면 Segmentation Fault: 11 오류가 발생할 수 있습니다. 수동으로 pip를 사용하여 설치하는 경우 `pip install faiss-cpu==1.7.0`을 사용해야 합니다. - -```bash -pip install -r requirements.txt -``` - -## 기타 사전 모델 준비 -RVC 모델은 추론과 훈련을 위하여 다른 사전 모델이 필요합니다. - -[Huggingface space](https://huggingface.co/lj1995/VoiceConversionWebUI/tree/main/)를 통해서 다운로드 할 수 있습니다. - -다음은 RVC에 필요한 사전 모델 및 기타 파일 목록입니다: -```bash -hubert_base.pt - -./pretrained - -./uvr5_weights - -# Windows를 사용하는 경우 이 사전도 필요할 수 있습니다. FFmpeg가 설치되어 있으면 건너뛰어도 됩니다. -ffmpeg.exe -``` -그 후 이하의 명령을 사용하여 WebUI를 시작할 수 있습니다: -```bash -python infer-web.py -``` -Windows를 사용하는 경우 `RVC-beta.7z`를 다운로드 및 압축 해제하여 RVC를 직접 사용하거나 `go-web.bat`을 사용하여 WebUi를 시작할 수 있습니다. - -## 참고 -+ [ContentVec](https://github.com/auspicious3000/contentvec/) -+ [VITS](https://github.com/jaywalnut310/vits) -+ [HIFIGAN](https://github.com/jik876/hifi-gan) -+ [Gradio](https://github.com/gradio-app/gradio) -+ [FFmpeg](https://github.com/FFmpeg/FFmpeg) -+ [Ultimate Vocal Remover](https://github.com/Anjok07/ultimatevocalremovergui) -+ [audio-slicer](https://github.com/openvpi/audio-slicer) -## 모든 기여자 분들의 노력에 감사드립니다. 
- - - - - diff --git a/spaces/shi-labs/FcF-Inpainting/training/losses/ade20k/segm_lib/utils/__init__.py b/spaces/shi-labs/FcF-Inpainting/training/losses/ade20k/segm_lib/utils/__init__.py deleted file mode 100644 index abe3cbe49477fe37d4fc16249de8a10f4fb4a013..0000000000000000000000000000000000000000 --- a/spaces/shi-labs/FcF-Inpainting/training/losses/ade20k/segm_lib/utils/__init__.py +++ /dev/null @@ -1 +0,0 @@ -from .th import * diff --git a/spaces/shi-labs/Prompt-Free-Diffusion/lib/model_zoo/seecoder.py b/spaces/shi-labs/Prompt-Free-Diffusion/lib/model_zoo/seecoder.py deleted file mode 100644 index 2ad331180113db5ee33186a5abce81e871e0c7c9..0000000000000000000000000000000000000000 --- a/spaces/shi-labs/Prompt-Free-Diffusion/lib/model_zoo/seecoder.py +++ /dev/null @@ -1,576 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F -import copy - -from .seecoder_utils import with_pos_embed -from lib.model_zoo.common.get_model import get_model, register - -symbol = 'seecoder' - -########### -# helpers # -########### - -def _get_clones(module, N): - return nn.ModuleList([copy.deepcopy(module) for i in range(N)]) - -def _get_activation_fn(activation): - """Return an activation function given a string""" - if activation == "relu": - return F.relu - if activation == "gelu": - return F.gelu - if activation == "glu": - return F.glu - raise RuntimeError(f"activation should be relu/gelu, not {activation}.") - -def c2_xavier_fill(module): - # Caffe2 implementation of XavierFill in fact - nn.init.kaiming_uniform_(module.weight, a=1) - if module.bias is not None: - nn.init.constant_(module.bias, 0) - -def with_pos_embed(x, pos): - return x if pos is None else x + pos - -########### -# Modules # -########### - -class Conv2d_Convenience(nn.Conv2d): - def __init__(self, *args, **kwargs): - norm = kwargs.pop("norm", None) - activation = kwargs.pop("activation", None) - super().__init__(*args, **kwargs) - self.norm = norm - self.activation = activation - - def forward(self, x): - x = F.conv2d( - x, self.weight, self.bias, self.stride, self.padding, self.dilation, self.groups) - if self.norm is not None: - x = self.norm(x) - if self.activation is not None: - x = self.activation(x) - return x - -class DecoderLayer(nn.Module): - def __init__(self, - dim=256, - feedforward_dim=1024, - dropout=0.1, - activation="relu", - n_heads=8,): - - super().__init__() - - self.self_attn = nn.MultiheadAttention(dim, n_heads, dropout=dropout) - self.dropout1 = nn.Dropout(dropout) - self.norm1 = nn.LayerNorm(dim) - - self.linear1 = nn.Linear(dim, feedforward_dim) - self.activation = _get_activation_fn(activation) - self.dropout2 = nn.Dropout(dropout) - self.linear2 = nn.Linear(feedforward_dim, dim) - self.dropout3 = nn.Dropout(dropout) - self.norm2 = nn.LayerNorm(dim) - - def forward(self, x): - h = x - h1 = self.self_attn(x, x, x, attn_mask=None)[0] - h = h + self.dropout1(h1) - h = self.norm1(h) - - h2 = self.linear2(self.dropout2(self.activation(self.linear1(h)))) - h = h + self.dropout3(h2) - h = self.norm2(h) - return h - -class DecoderLayerStacked(nn.Module): - def __init__(self, layer, num_layers, norm=None): - super().__init__() - self.layers = _get_clones(layer, num_layers) - self.num_layers = num_layers - self.norm = norm - - def forward(self, x): - h = x - for _, layer in enumerate(self.layers): - h = layer(h) - if self.norm is not None: - h = self.norm(h) - return h - -class SelfAttentionLayer(nn.Module): - def __init__(self, channels, nhead, dropout=0.0, - activation="relu", 
normalize_before=False): - super().__init__() - self.self_attn = nn.MultiheadAttention(channels, nhead, dropout=dropout) - - self.norm = nn.LayerNorm(channels) - self.dropout = nn.Dropout(dropout) - - self.activation = _get_activation_fn(activation) - self.normalize_before = normalize_before - - self._reset_parameters() - - def _reset_parameters(self): - for p in self.parameters(): - if p.dim() > 1: - nn.init.xavier_uniform_(p) - - def forward_post(self, - qkv, - qk_pos = None, - mask = None,): - h = qkv - qk = with_pos_embed(qkv, qk_pos).transpose(0, 1) - v = qkv.transpose(0, 1) - h1 = self.self_attn(qk, qk, v, attn_mask=mask)[0] - h1 = h1.transpose(0, 1) - h = h + self.dropout(h1) - h = self.norm(h) - return h - - def forward_pre(self, tgt, - tgt_mask = None, - tgt_key_padding_mask = None, - query_pos = None): - # deprecated - assert False - tgt2 = self.norm(tgt) - q = k = self.with_pos_embed(tgt2, query_pos) - tgt2 = self.self_attn(q, k, value=tgt2, attn_mask=tgt_mask, - key_padding_mask=tgt_key_padding_mask)[0] - tgt = tgt + self.dropout(tgt2) - return tgt - - def forward(self, *args, **kwargs): - if self.normalize_before: - return self.forward_pre(*args, **kwargs) - return self.forward_post(*args, **kwargs) - -class CrossAttentionLayer(nn.Module): - def __init__(self, channels, nhead, dropout=0.0, - activation="relu", normalize_before=False): - super().__init__() - self.multihead_attn = nn.MultiheadAttention(channels, nhead, dropout=dropout) - - self.norm = nn.LayerNorm(channels) - self.dropout = nn.Dropout(dropout) - - self.activation = _get_activation_fn(activation) - self.normalize_before = normalize_before - - self._reset_parameters() - - def _reset_parameters(self): - for p in self.parameters(): - if p.dim() > 1: - nn.init.xavier_uniform_(p) - - def forward_post(self, - q, - kv, - q_pos = None, - k_pos = None, - mask = None,): - h = q - q = with_pos_embed(q, q_pos).transpose(0, 1) - k = with_pos_embed(kv, k_pos).transpose(0, 1) - v = kv.transpose(0, 1) - h1 = self.multihead_attn(q, k, v, attn_mask=mask)[0] - h1 = h1.transpose(0, 1) - h = h + self.dropout(h1) - h = self.norm(h) - return h - - def forward_pre(self, tgt, memory, - memory_mask = None, - memory_key_padding_mask = None, - pos = None, - query_pos = None): - # Deprecated - assert False - tgt2 = self.norm(tgt) - tgt2 = self.multihead_attn(query=self.with_pos_embed(tgt2, query_pos), - key=self.with_pos_embed(memory, pos), - value=memory, attn_mask=memory_mask, - key_padding_mask=memory_key_padding_mask)[0] - tgt = tgt + self.dropout(tgt2) - return tgt - - def forward(self, *args, **kwargs): - if self.normalize_before: - return self.forward_pre(*args, **kwargs) - return self.forward_post(*args, **kwargs) - -class FeedForwardLayer(nn.Module): - def __init__(self, channels, hidden_channels=2048, dropout=0.0, - activation="relu", normalize_before=False): - super().__init__() - self.linear1 = nn.Linear(channels, hidden_channels) - self.dropout = nn.Dropout(dropout) - self.linear2 = nn.Linear(hidden_channels, channels) - self.norm = nn.LayerNorm(channels) - self.activation = _get_activation_fn(activation) - self.normalize_before = normalize_before - self._reset_parameters() - - def _reset_parameters(self): - for p in self.parameters(): - if p.dim() > 1: - nn.init.xavier_uniform_(p) - - def forward_post(self, x): - h = x - h1 = self.linear2(self.dropout(self.activation(self.linear1(h)))) - h = h + self.dropout(h1) - h = self.norm(h) - return h - - def forward_pre(self, x): - xn = self.norm(x) - h = x - h1 = 
self.linear2(self.dropout(self.activation(self.linear1(xn)))) - h = h + self.dropout(h1) - return h - - def forward(self, *args, **kwargs): - if self.normalize_before: - return self.forward_pre(*args, **kwargs) - return self.forward_post(*args, **kwargs) - -class MLP(nn.Module): - def __init__(self, in_channels, channels, out_channels, num_layers): - super().__init__() - self.num_layers = num_layers - h = [channels] * (num_layers - 1) - self.layers = nn.ModuleList( - nn.Linear(n, k) - for n, k in zip([in_channels]+h, h+[out_channels])) - - def forward(self, x): - for i, layer in enumerate(self.layers): - x = F.relu(layer(x)) if i < self.num_layers - 1 else layer(x) - return x - -class PPE_MLP(nn.Module): - def __init__(self, freq_num=20, freq_max=None, out_channel=768, mlp_layer=3): - import math - super().__init__() - self.freq_num = freq_num - self.freq_max = freq_max - self.out_channel = out_channel - self.mlp_layer = mlp_layer - self.twopi = 2 * math.pi - - mlp = [] - in_channel = freq_num*4 - for idx in range(mlp_layer): - linear = nn.Linear(in_channel, out_channel, bias=True) - nn.init.xavier_normal_(linear.weight) - nn.init.constant_(linear.bias, 0) - mlp.append(linear) - if idx != mlp_layer-1: - mlp.append(nn.SiLU()) - in_channel = out_channel - self.mlp = nn.Sequential(*mlp) - nn.init.constant_(self.mlp[-1].weight, 0) - - def forward(self, x, mask=None): - assert mask is None, "Mask not implemented" - h, w = x.shape[-2:] - minlen = min(h, w) - - h_embed, w_embed = torch.meshgrid(torch.arange(h), torch.arange(w), indexing='ij') - if self.training: - import numpy.random as npr - pertube_h, pertube_w = npr.uniform(-0.5, 0.5), npr.uniform(-0.5, 0.5) - else: - pertube_h, pertube_w = 0, 0 - - h_embed = (h_embed+0.5 - h/2 + pertube_h) / (minlen) * self.twopi - w_embed = (w_embed+0.5 - w/2 + pertube_w) / (minlen) * self.twopi - h_embed, w_embed = h_embed.to(x.device).to(x.dtype), w_embed.to(x.device).to(x.dtype) - - dim_t = torch.linspace(0, 1, self.freq_num, dtype=torch.float32, device=x.device) - freq_max = self.freq_max if self.freq_max is not None else minlen/2 - dim_t = freq_max ** dim_t.to(x.dtype) - - pos_h = h_embed[:, :, None] * dim_t - pos_w = w_embed[:, :, None] * dim_t - pos = torch.cat((pos_h.sin(), pos_h.cos(), pos_w.sin(), pos_w.cos()), dim=-1) - pos = self.mlp(pos) - pos = pos.permute(2, 0, 1)[None] - return pos - - def __repr__(self, _repr_indent=4): - head = "Positional encoding " + self.__class__.__name__ - body = [ - "num_pos_feats: {}".format(self.num_pos_feats), - "temperature: {}".format(self.temperature), - "normalize: {}".format(self.normalize), - "scale: {}".format(self.scale), - ] - # _repr_indent = 4 - lines = [head] + [" " * _repr_indent + line for line in body] - return "\n".join(lines) - -########### -# Decoder # -########### - -@register('seecoder_decoder') -class Decoder(nn.Module): - def __init__( - self, - inchannels, - trans_input_tags, - trans_num_layers, - trans_dim, - trans_nheads, - trans_dropout, - trans_feedforward_dim,): - - super().__init__() - trans_inchannels = { - k: v for k, v in inchannels.items() if k in trans_input_tags} - fpn_inchannels = { - k: v for k, v in inchannels.items() if k not in trans_input_tags} - - self.trans_tags = sorted(list(trans_inchannels.keys())) - self.fpn_tags = sorted(list(fpn_inchannels.keys())) - self.all_tags = sorted(list(inchannels.keys())) - - if len(self.trans_tags)==0: - assert False # Not allowed - - self.num_trans_lvls = len(self.trans_tags) - - self.inproj_layers = nn.ModuleDict() - for tagi in 
self.trans_tags: - layeri = nn.Sequential( - nn.Conv2d(trans_inchannels[tagi], trans_dim, kernel_size=1), - nn.GroupNorm(32, trans_dim),) - nn.init.xavier_uniform_(layeri[0].weight, gain=1) - nn.init.constant_(layeri[0].bias, 0) - self.inproj_layers[tagi] = layeri - - tlayer = DecoderLayer( - dim = trans_dim, - n_heads = trans_nheads, - dropout = trans_dropout, - feedforward_dim = trans_feedforward_dim, - activation = 'relu',) - - self.transformer = DecoderLayerStacked(tlayer, trans_num_layers) - for p in self.transformer.parameters(): - if p.dim() > 1: - nn.init.xavier_uniform_(p) - self.level_embed = nn.Parameter(torch.Tensor(len(self.trans_tags), trans_dim)) - nn.init.normal_(self.level_embed) - - self.lateral_layers = nn.ModuleDict() - self.output_layers = nn.ModuleDict() - for tagi in self.all_tags: - lateral_conv = Conv2d_Convenience( - inchannels[tagi], trans_dim, kernel_size=1, - bias=False, norm=nn.GroupNorm(32, trans_dim)) - c2_xavier_fill(lateral_conv) - self.lateral_layers[tagi] = lateral_conv - - for tagi in self.fpn_tags: - output_conv = Conv2d_Convenience( - trans_dim, trans_dim, kernel_size=3, stride=1, padding=1, - bias=False, norm=nn.GroupNorm(32, trans_dim), activation=F.relu,) - c2_xavier_fill(output_conv) - self.output_layers[tagi] = output_conv - - def forward(self, features): - x = [] - spatial_shapes = {} - for idx, tagi in enumerate(self.trans_tags[::-1]): - xi = features[tagi] - xi = self.inproj_layers[tagi](xi) - bs, _, h, w = xi.shape - spatial_shapes[tagi] = (h, w) - xi = xi.flatten(2).transpose(1, 2) + self.level_embed[idx].view(1, 1, -1) - x.append(xi) - - x_length = [xi.shape[1] for xi in x] - x_concat = torch.cat(x, 1) - y_concat = self.transformer(x_concat) - y = torch.split(y_concat, x_length, dim=1) - - out = {} - for idx, tagi in enumerate(self.trans_tags[::-1]): - h, w = spatial_shapes[tagi] - yi = y[idx].transpose(1, 2).view(bs, -1, h, w) - out[tagi] = yi - - for idx, tagi in enumerate(self.all_tags[::-1]): - lconv = self.lateral_layers[tagi] - if tagi in self.trans_tags: - out[tagi] = out[tagi] + lconv(features[tagi]) - tag_save = tagi - else: - oconv = self.output_layers[tagi] - h = lconv(features[tagi]) - oprev = out[tag_save] - h = h + F.interpolate(oconv(oprev), size=h.shape[-2:], mode="bilinear", align_corners=False) - out[tagi] = h - - return out - -##################### -# Query Transformer # -##################### - -@register('seecoder_query_transformer') -class QueryTransformer(nn.Module): - def __init__(self, - in_channels, - hidden_dim, - num_queries = [8, 144], - nheads = 8, - num_layers = 9, - feedforward_dim = 2048, - mask_dim = 256, - pre_norm = False, - num_feature_levels = 3, - enforce_input_project = False, - with_fea2d_pos = True): - - super().__init__() - - if with_fea2d_pos: - self.pe_layer = PPE_MLP(freq_num=20, freq_max=None, out_channel=hidden_dim, mlp_layer=3) - else: - self.pe_layer = None - - if in_channels!=hidden_dim or enforce_input_project: - self.input_proj = nn.ModuleList() - for _ in range(num_feature_levels): - self.input_proj.append(nn.Conv2d(in_channels, hidden_dim, kernel_size=1)) - c2_xavier_fill(self.input_proj[-1]) - else: - self.input_proj = None - - self.num_heads = nheads - self.num_layers = num_layers - self.transformer_selfatt_layers = nn.ModuleList() - self.transformer_crossatt_layers = nn.ModuleList() - self.transformer_feedforward_layers = nn.ModuleList() - - for _ in range(self.num_layers): - self.transformer_selfatt_layers.append( - SelfAttentionLayer( - channels=hidden_dim, - nhead=nheads, - 
dropout=0.0, - normalize_before=pre_norm, )) - - self.transformer_crossatt_layers.append( - CrossAttentionLayer( - channels=hidden_dim, - nhead=nheads, - dropout=0.0, - normalize_before=pre_norm, )) - - self.transformer_feedforward_layers.append( - FeedForwardLayer( - channels=hidden_dim, - hidden_channels=feedforward_dim, - dropout=0.0, - normalize_before=pre_norm, )) - - self.num_queries = num_queries - num_gq, num_lq = self.num_queries - self.init_query = nn.Embedding(num_gq+num_lq, hidden_dim) - self.query_pos_embedding = nn.Embedding(num_gq+num_lq, hidden_dim) - - self.num_feature_levels = num_feature_levels - self.level_embed = nn.Embedding(num_feature_levels, hidden_dim) - - def forward(self, x): - # x is a list of multi-scale feature - assert len(x) == self.num_feature_levels - fea2d = [] - fea2d_pos = [] - size_list = [] - - for i in range(self.num_feature_levels): - size_list.append(x[i].shape[-2:]) - if self.pe_layer is not None: - pi = self.pe_layer(x[i], None).flatten(2) - pi = pi.transpose(1, 2) - else: - pi = None - xi = self.input_proj[i](x[i]) if self.input_proj is not None else x[i] - xi = xi.flatten(2) + self.level_embed.weight[i][None, :, None] - xi = xi.transpose(1, 2) - fea2d.append(xi) - fea2d_pos.append(pi) - - bs, _, _ = fea2d[0].shape - num_gq, num_lq = self.num_queries - gquery = self.init_query.weight[:num_gq].unsqueeze(0).repeat(bs, 1, 1) - lquery = self.init_query.weight[num_gq:].unsqueeze(0).repeat(bs, 1, 1) - - gquery_pos = self.query_pos_embedding.weight[:num_gq].unsqueeze(0).repeat(bs, 1, 1) - lquery_pos = self.query_pos_embedding.weight[num_gq:].unsqueeze(0).repeat(bs, 1, 1) - - for i in range(self.num_layers): - level_index = i % self.num_feature_levels - - qout = self.transformer_crossatt_layers[i]( - q = lquery, - kv = fea2d[level_index], - q_pos = lquery_pos, - k_pos = fea2d_pos[level_index], - mask = None,) - lquery = qout - - qout = self.transformer_selfatt_layers[i]( - qkv = torch.cat([gquery, lquery], dim=1), - qk_pos = torch.cat([gquery_pos, lquery_pos], dim=1),) - - qout = self.transformer_feedforward_layers[i](qout) - - gquery = qout[:, :num_gq] - lquery = qout[:, num_gq:] - - output = torch.cat([gquery, lquery], dim=1) - - return output - -################## -# Main structure # -################## - -@register('seecoder') -class SemanticExtractionEncoder(nn.Module): - def __init__(self, - imencoder_cfg, - imdecoder_cfg, - qtransformer_cfg): - super().__init__() - self.imencoder = get_model()(imencoder_cfg) - self.imdecoder = get_model()(imdecoder_cfg) - self.qtransformer = get_model()(qtransformer_cfg) - - def forward(self, x): - fea = self.imencoder(x) - hs = {'res3' : fea['res3'], - 'res4' : fea['res4'], - 'res5' : fea['res5'], } - hs = self.imdecoder(hs) - hs = [hs['res3'], hs['res4'], hs['res5']] - q = self.qtransformer(hs) - return q - - def encode(self, x): - return self(x) diff --git a/spaces/shreydan/youtube-QandA/app.py b/spaces/shreydan/youtube-QandA/app.py deleted file mode 100644 index 920cc2c2ef908095c8a616a98b7c6ebad9e54141..0000000000000000000000000000000000000000 --- a/spaces/shreydan/youtube-QandA/app.py +++ /dev/null @@ -1,39 +0,0 @@ -import streamlit as st -from streamlit_player import st_player - -from model import Engine -from fetch_transcript import fetch_transcript -from preprocessing import create_similarity_text, create_result_url - -with st.container(): - st.title('YouTube Q&A Search') - st.write('Ask YouTube videos questions and get your answers :)') - -with st.container(): - - url_input = 
st.text_input(label='Video',placeholder='enter YouTube video url') - - question_input = st.text_input(label='Question',placeholder='enter your question') - - get_ans = st.button(label='Answer!') - - if len(url_input)!='' and len(question_input)!='' and get_ans: - - with st.spinner('loading your video...'): - transcript = fetch_transcript(url_input) - model = Engine(transcript) - prev_url = url_input - - with st.spinner('finding an answer...'): - answer = model.ask(question_input) - similarity_text = create_similarity_text(question_input,answer) - groups,timestamps = model.find_similar(similarity_text) - url = create_result_url(url_input,timestamps[0]) - - with st.container(): - - st.caption('Extracted Answer:') - st.write(answer) - st.caption('In Video:') - st_player(url) - diff --git a/spaces/sklkd93/CodeFormer/CodeFormer/facelib/detection/retinaface/retinaface_net.py b/spaces/sklkd93/CodeFormer/CodeFormer/facelib/detection/retinaface/retinaface_net.py deleted file mode 100644 index ab6aa82d3e9055a838f1f9076b12f05fdfc154d0..0000000000000000000000000000000000000000 --- a/spaces/sklkd93/CodeFormer/CodeFormer/facelib/detection/retinaface/retinaface_net.py +++ /dev/null @@ -1,196 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F - - -def conv_bn(inp, oup, stride=1, leaky=0): - return nn.Sequential( - nn.Conv2d(inp, oup, 3, stride, 1, bias=False), nn.BatchNorm2d(oup), - nn.LeakyReLU(negative_slope=leaky, inplace=True)) - - -def conv_bn_no_relu(inp, oup, stride): - return nn.Sequential( - nn.Conv2d(inp, oup, 3, stride, 1, bias=False), - nn.BatchNorm2d(oup), - ) - - -def conv_bn1X1(inp, oup, stride, leaky=0): - return nn.Sequential( - nn.Conv2d(inp, oup, 1, stride, padding=0, bias=False), nn.BatchNorm2d(oup), - nn.LeakyReLU(negative_slope=leaky, inplace=True)) - - -def conv_dw(inp, oup, stride, leaky=0.1): - return nn.Sequential( - nn.Conv2d(inp, inp, 3, stride, 1, groups=inp, bias=False), - nn.BatchNorm2d(inp), - nn.LeakyReLU(negative_slope=leaky, inplace=True), - nn.Conv2d(inp, oup, 1, 1, 0, bias=False), - nn.BatchNorm2d(oup), - nn.LeakyReLU(negative_slope=leaky, inplace=True), - ) - - -class SSH(nn.Module): - - def __init__(self, in_channel, out_channel): - super(SSH, self).__init__() - assert out_channel % 4 == 0 - leaky = 0 - if (out_channel <= 64): - leaky = 0.1 - self.conv3X3 = conv_bn_no_relu(in_channel, out_channel // 2, stride=1) - - self.conv5X5_1 = conv_bn(in_channel, out_channel // 4, stride=1, leaky=leaky) - self.conv5X5_2 = conv_bn_no_relu(out_channel // 4, out_channel // 4, stride=1) - - self.conv7X7_2 = conv_bn(out_channel // 4, out_channel // 4, stride=1, leaky=leaky) - self.conv7x7_3 = conv_bn_no_relu(out_channel // 4, out_channel // 4, stride=1) - - def forward(self, input): - conv3X3 = self.conv3X3(input) - - conv5X5_1 = self.conv5X5_1(input) - conv5X5 = self.conv5X5_2(conv5X5_1) - - conv7X7_2 = self.conv7X7_2(conv5X5_1) - conv7X7 = self.conv7x7_3(conv7X7_2) - - out = torch.cat([conv3X3, conv5X5, conv7X7], dim=1) - out = F.relu(out) - return out - - -class FPN(nn.Module): - - def __init__(self, in_channels_list, out_channels): - super(FPN, self).__init__() - leaky = 0 - if (out_channels <= 64): - leaky = 0.1 - self.output1 = conv_bn1X1(in_channels_list[0], out_channels, stride=1, leaky=leaky) - self.output2 = conv_bn1X1(in_channels_list[1], out_channels, stride=1, leaky=leaky) - self.output3 = conv_bn1X1(in_channels_list[2], out_channels, stride=1, leaky=leaky) - - self.merge1 = conv_bn(out_channels, out_channels, leaky=leaky) - self.merge2 = 
conv_bn(out_channels, out_channels, leaky=leaky) - - def forward(self, input): - # names = list(input.keys()) - # input = list(input.values()) - - output1 = self.output1(input[0]) - output2 = self.output2(input[1]) - output3 = self.output3(input[2]) - - up3 = F.interpolate(output3, size=[output2.size(2), output2.size(3)], mode='nearest') - output2 = output2 + up3 - output2 = self.merge2(output2) - - up2 = F.interpolate(output2, size=[output1.size(2), output1.size(3)], mode='nearest') - output1 = output1 + up2 - output1 = self.merge1(output1) - - out = [output1, output2, output3] - return out - - -class MobileNetV1(nn.Module): - - def __init__(self): - super(MobileNetV1, self).__init__() - self.stage1 = nn.Sequential( - conv_bn(3, 8, 2, leaky=0.1), # 3 - conv_dw(8, 16, 1), # 7 - conv_dw(16, 32, 2), # 11 - conv_dw(32, 32, 1), # 19 - conv_dw(32, 64, 2), # 27 - conv_dw(64, 64, 1), # 43 - ) - self.stage2 = nn.Sequential( - conv_dw(64, 128, 2), # 43 + 16 = 59 - conv_dw(128, 128, 1), # 59 + 32 = 91 - conv_dw(128, 128, 1), # 91 + 32 = 123 - conv_dw(128, 128, 1), # 123 + 32 = 155 - conv_dw(128, 128, 1), # 155 + 32 = 187 - conv_dw(128, 128, 1), # 187 + 32 = 219 - ) - self.stage3 = nn.Sequential( - conv_dw(128, 256, 2), # 219 +3 2 = 241 - conv_dw(256, 256, 1), # 241 + 64 = 301 - ) - self.avg = nn.AdaptiveAvgPool2d((1, 1)) - self.fc = nn.Linear(256, 1000) - - def forward(self, x): - x = self.stage1(x) - x = self.stage2(x) - x = self.stage3(x) - x = self.avg(x) - # x = self.model(x) - x = x.view(-1, 256) - x = self.fc(x) - return x - - -class ClassHead(nn.Module): - - def __init__(self, inchannels=512, num_anchors=3): - super(ClassHead, self).__init__() - self.num_anchors = num_anchors - self.conv1x1 = nn.Conv2d(inchannels, self.num_anchors * 2, kernel_size=(1, 1), stride=1, padding=0) - - def forward(self, x): - out = self.conv1x1(x) - out = out.permute(0, 2, 3, 1).contiguous() - - return out.view(out.shape[0], -1, 2) - - -class BboxHead(nn.Module): - - def __init__(self, inchannels=512, num_anchors=3): - super(BboxHead, self).__init__() - self.conv1x1 = nn.Conv2d(inchannels, num_anchors * 4, kernel_size=(1, 1), stride=1, padding=0) - - def forward(self, x): - out = self.conv1x1(x) - out = out.permute(0, 2, 3, 1).contiguous() - - return out.view(out.shape[0], -1, 4) - - -class LandmarkHead(nn.Module): - - def __init__(self, inchannels=512, num_anchors=3): - super(LandmarkHead, self).__init__() - self.conv1x1 = nn.Conv2d(inchannels, num_anchors * 10, kernel_size=(1, 1), stride=1, padding=0) - - def forward(self, x): - out = self.conv1x1(x) - out = out.permute(0, 2, 3, 1).contiguous() - - return out.view(out.shape[0], -1, 10) - - -def make_class_head(fpn_num=3, inchannels=64, anchor_num=2): - classhead = nn.ModuleList() - for i in range(fpn_num): - classhead.append(ClassHead(inchannels, anchor_num)) - return classhead - - -def make_bbox_head(fpn_num=3, inchannels=64, anchor_num=2): - bboxhead = nn.ModuleList() - for i in range(fpn_num): - bboxhead.append(BboxHead(inchannels, anchor_num)) - return bboxhead - - -def make_landmark_head(fpn_num=3, inchannels=64, anchor_num=2): - landmarkhead = nn.ModuleList() - for i in range(fpn_num): - landmarkhead.append(LandmarkHead(inchannels, anchor_num)) - return landmarkhead diff --git a/spaces/sneedium/captcha_pixelplanet/modules/model_language.py b/spaces/sneedium/captcha_pixelplanet/modules/model_language.py deleted file mode 100644 index a643cd5946240548746b22fc9294db63c2dfe7a1..0000000000000000000000000000000000000000 --- 
a/spaces/sneedium/captcha_pixelplanet/modules/model_language.py +++ /dev/null @@ -1,67 +0,0 @@ -import logging -import torch.nn as nn -from fastai.vision import * - -from modules.model import _default_tfmer_cfg -from modules.model import Model -from modules.transformer import (PositionalEncoding, - TransformerDecoder, - TransformerDecoderLayer) - - -class BCNLanguage(Model): - def __init__(self, config): - super().__init__(config) - d_model = ifnone(config.model_language_d_model, _default_tfmer_cfg['d_model']) - nhead = ifnone(config.model_language_nhead, _default_tfmer_cfg['nhead']) - d_inner = ifnone(config.model_language_d_inner, _default_tfmer_cfg['d_inner']) - dropout = ifnone(config.model_language_dropout, _default_tfmer_cfg['dropout']) - activation = ifnone(config.model_language_activation, _default_tfmer_cfg['activation']) - num_layers = ifnone(config.model_language_num_layers, 4) - self.d_model = d_model - self.detach = ifnone(config.model_language_detach, True) - self.use_self_attn = ifnone(config.model_language_use_self_attn, False) - self.loss_weight = ifnone(config.model_language_loss_weight, 1.0) - self.max_length = config.dataset_max_length + 1 # additional stop token - self.debug = ifnone(config.global_debug, False) - - self.proj = nn.Linear(self.charset.num_classes, d_model, False) - self.token_encoder = PositionalEncoding(d_model, max_len=self.max_length) - self.pos_encoder = PositionalEncoding(d_model, dropout=0, max_len=self.max_length) - decoder_layer = TransformerDecoderLayer(d_model, nhead, d_inner, dropout, - activation, self_attn=self.use_self_attn, debug=self.debug) - self.model = TransformerDecoder(decoder_layer, num_layers) - - self.cls = nn.Linear(d_model, self.charset.num_classes) - - if config.model_language_checkpoint is not None: - logging.info(f'Read language model from {config.model_language_checkpoint}.') - self.load(config.model_language_checkpoint) - - def forward(self, tokens, lengths): - """ - Args: - tokens: (N, T, C) where T is length, N is batch size and C is classes number - lengths: (N,) - """ - if self.detach: tokens = tokens.detach() - embed = self.proj(tokens) # (N, T, E) - embed = embed.permute(1, 0, 2) # (T, N, E) - embed = self.token_encoder(embed) # (T, N, E) - padding_mask = self._get_padding_mask(lengths, self.max_length) - - zeros = embed.new_zeros(*embed.shape) - qeury = self.pos_encoder(zeros) - location_mask = self._get_location_mask(self.max_length, tokens.device) - output = self.model(qeury, embed, - tgt_key_padding_mask=padding_mask, - memory_mask=location_mask, - memory_key_padding_mask=padding_mask) # (T, N, E) - output = output.permute(1, 0, 2) # (N, T, E) - - logits = self.cls(output) # (N, T, C) - pt_lengths = self._get_length(logits) - - res = {'feature': output, 'logits': logits, 'pt_lengths': pt_lengths, - 'loss_weight':self.loss_weight, 'name': 'language'} - return res diff --git a/spaces/sriramelango/Social_Classification_Public/criterions/label_smoothed_cross_entropy.py b/spaces/sriramelango/Social_Classification_Public/criterions/label_smoothed_cross_entropy.py deleted file mode 100644 index 73b36e750a0037cad8403e383d790f868b509d24..0000000000000000000000000000000000000000 --- a/spaces/sriramelango/Social_Classification_Public/criterions/label_smoothed_cross_entropy.py +++ /dev/null @@ -1,343 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import math -from dataclasses import dataclass, field -from typing import Optional - -import torch -import torch.nn.functional as F -import numpy as np -from fairseq import metrics, utils -from fairseq.criterions import FairseqCriterion, register_criterion -from fairseq.dataclass import FairseqDataclass -from omegaconf import II - - -@dataclass -class AjustLabelSmoothedCrossEntropyCriterionConfig(FairseqDataclass): - label_smoothing: float = field( - default=0.0, - metadata={"help": "epsilon for label smoothing, 0 means no label smoothing"}, - ) - report_accuracy: bool = field( - default=False, - metadata={"help": "report accuracy metric"}, - ) - ignore_prefix_size: int = field( - default=0, - metadata={"help": "Ignore first N tokens"}, - ) - ignore_eos: bool = field( - default=False, - metadata={"help": "Ignore eos token"}, - ) - sentence_avg: bool = II("optimization.sentence_avg") - drop_worst_ratio: float = field( - default=0.0, - metadata={"help": "ratio for discarding bad samples"}, - ) - drop_worst_after: int = field( - default=0, - metadata={"help": "steps for discarding bad samples"}, - ) - use_rdrop: bool = field( - default=False, metadata={"help": "use R-Drop"} - ) - reg_alpha: float = field( - default=1.0, metadata={"help": "weight for R-Drop"} - ) - sample_patch_num: int = field( - default=196, metadata={"help": "sample patchs for v1"} - ) - constraint_range: Optional[str] = field( - default=None, - metadata={"help": "constraint range"} - ) - - -def construct_rdrop_sample(x): - if isinstance(x, dict): - for key in x: - x[key] = construct_rdrop_sample(x[key]) - return x - elif isinstance(x, torch.Tensor): - return x.repeat(2, *([1] * (x.dim()-1))) - elif isinstance(x, int): - return x * 2 - elif isinstance(x, np.ndarray): - return x.repeat(2) - else: - raise NotImplementedError - - -def kl_loss(p, q): - p_loss = F.kl_div(p, torch.exp(q), reduction='sum') - q_loss = F.kl_div(q, torch.exp(p), reduction='sum') - loss = (p_loss + q_loss) / 2 - return loss - - -def label_smoothed_nll_loss( - lprobs, target, epsilon, update_num, reduce=True, - drop_worst_ratio=0.0, drop_worst_after=0, use_rdrop=False, reg_alpha=1.0, - constraint_masks=None, constraint_start=None, constraint_end=None -): - if target.dim() == lprobs.dim() - 1: - target = target.unsqueeze(-1) - nll_loss = -lprobs.gather(dim=-1, index=target).squeeze(-1) - if constraint_masks is not None: - smooth_loss = -lprobs.masked_fill(~constraint_masks, 0).sum(dim=-1, keepdim=True).squeeze(-1) - eps_i = epsilon / (constraint_masks.sum(1) - 1 + 1e-6) - elif constraint_start is not None and constraint_end is not None: - constraint_range = [0, 1, 2, 3] + list(range(constraint_start, constraint_end)) - smooth_loss = -lprobs[:, constraint_range].sum(dim=-1, keepdim=True).squeeze(-1) - eps_i = epsilon / (len(constraint_range) - 1 + 1e-6) - else: - smooth_loss = -lprobs.sum(dim=-1, keepdim=True).squeeze(-1) - eps_i = epsilon / (lprobs.size(-1) - 1) - loss = (1.0 - epsilon - eps_i) * nll_loss + eps_i * smooth_loss - if drop_worst_ratio > 0 and update_num > drop_worst_after: - if use_rdrop: - true_batch_size = loss.size(0) // 2 - _, indices = torch.topk(loss[:true_batch_size], k=int(true_batch_size * (1 - drop_worst_ratio)), largest=False) - loss = torch.cat([loss[indices], loss[indices+true_batch_size]]) - nll_loss = torch.cat([nll_loss[indices], nll_loss[indices+true_batch_size]]) - lprobs = torch.cat([lprobs[indices], lprobs[indices+true_batch_size]]) - else: - loss, indices = torch.topk(loss, k=int(loss.shape[0] * (1 - 
drop_worst_ratio)), largest=False) - nll_loss = nll_loss[indices] - lprobs = lprobs[indices] - - ntokens = loss.numel() - nll_loss = nll_loss.sum() - loss = loss.sum() - if use_rdrop: - true_batch_size = lprobs.size(0) // 2 - p = lprobs[:true_batch_size] - q = lprobs[true_batch_size:] - if constraint_start is not None and constraint_end is not None: - constraint_range = [0, 1, 2, 3] + list(range(constraint_start, constraint_end)) - p = p[:, constraint_range] - q = q[:, constraint_range] - loss += kl_loss(p, q) * reg_alpha - - return loss, nll_loss, ntokens - - -@register_criterion( - "ajust_label_smoothed_cross_entropy", dataclass=AjustLabelSmoothedCrossEntropyCriterionConfig -) -class AjustLabelSmoothedCrossEntropyCriterion(FairseqCriterion): - def __init__( - self, - task, - sentence_avg, - label_smoothing, - ignore_prefix_size=0, - ignore_eos=False, - report_accuracy=False, - drop_worst_ratio=0, - drop_worst_after=0, - use_rdrop=False, - reg_alpha=1.0, - sample_patch_num=196, - constraint_range=None - ): - super().__init__(task) - self.sentence_avg = sentence_avg - self.eps = label_smoothing - self.ignore_prefix_size = ignore_prefix_size - self.ignore_eos = ignore_eos - self.report_accuracy = report_accuracy - self.drop_worst_ratio = drop_worst_ratio - self.drop_worst_after = drop_worst_after - self.use_rdrop = use_rdrop - self.reg_alpha = reg_alpha - self.sample_patch_num = sample_patch_num - - self.constraint_start = None - self.constraint_end = None - if constraint_range is not None: - constraint_start, constraint_end = constraint_range.split(',') - self.constraint_start = int(constraint_start) - self.constraint_end = int(constraint_end) - - def forward(self, model, sample, update_num=0, reduce=True): - """Compute the loss for the given sample. 
- - Returns a tuple with three elements: - 1) the loss - 2) the sample size, which is used as the denominator for the gradient - 3) logging outputs to display while training - """ - if isinstance(sample, list): - if self.sample_patch_num > 0: - sample[0]['net_input']['sample_patch_num'] = self.sample_patch_num - loss_v1, sample_size_v1, logging_output_v1 = self.forward(model, sample[0], update_num, reduce) - loss_v2, sample_size_v2, logging_output_v2 = self.forward(model, sample[1], update_num, reduce) - loss = loss_v1 / sample_size_v1 + loss_v2 / sample_size_v2 - sample_size = 1 - logging_output = { - "loss": loss.data, - "loss_v1": loss_v1.data, - "loss_v2": loss_v2.data, - "nll_loss": logging_output_v1["nll_loss"].data / sample_size_v1 + logging_output_v2["nll_loss"].data / sample_size_v2, - "ntokens": logging_output_v1["ntokens"] + logging_output_v2["ntokens"], - "nsentences": logging_output_v1["nsentences"] + logging_output_v2["nsentences"], - "sample_size": 1, - "sample_size_v1": sample_size_v1, - "sample_size_v2": sample_size_v2, - } - return loss, sample_size, logging_output - - if self.use_rdrop: - construct_rdrop_sample(sample) - - net_output = model(**sample["net_input"]) - loss, nll_loss, ntokens = self.compute_loss(model, net_output, sample, update_num, reduce=reduce) - sample_size = ( - sample["target"].size(0) if self.sentence_avg else ntokens - ) - logging_output = { - "loss": loss.data, - "nll_loss": nll_loss.data, - "ntokens": sample["ntokens"], - "nsentences": sample["nsentences"], - "sample_size": sample_size, - } - if self.report_accuracy: - n_correct, total = self.compute_accuracy(model, net_output, sample) - logging_output["n_correct"] = utils.item(n_correct.data) - logging_output["total"] = utils.item(total.data) - return loss, sample_size, logging_output - - def get_lprobs_and_target(self, model, net_output, sample): - conf = sample['conf'][:, None, None] if 'conf' in sample and sample['conf'] is not None else 1 - constraint_masks = None - if "constraint_masks" in sample and sample["constraint_masks"] is not None: - constraint_masks = sample["constraint_masks"] - net_output[0].masked_fill_(~constraint_masks, -math.inf) - if self.constraint_start is not None and self.constraint_end is not None: - net_output[0][:, :, 4:self.constraint_start] = -math.inf - net_output[0][:, :, self.constraint_end:] = -math.inf - lprobs = model.get_normalized_probs(net_output, log_probs=True) * conf - target = model.get_targets(sample, net_output) - if self.ignore_prefix_size > 0: - lprobs = lprobs[:, self.ignore_prefix_size :, :].contiguous() - target = target[:, self.ignore_prefix_size :].contiguous() - if constraint_masks is not None: - constraint_masks = constraint_masks[:, self.ignore_prefix_size :, :].contiguous() - if self.ignore_eos: - bsz, seq_len, embed_dim = lprobs.size() - eos_indices = target.eq(self.task.tgt_dict.eos()) - lprobs = lprobs[~eos_indices].reshape(bsz, seq_len-1, embed_dim) - target = target[~eos_indices].reshape(bsz, seq_len-1) - if constraint_masks is not None: - constraint_masks = constraint_masks[~eos_indices].reshape(bsz, seq_len-1, embed_dim) - if constraint_masks is not None: - constraint_masks = constraint_masks.view(-1, constraint_masks.size(-1)) - return lprobs.view(-1, lprobs.size(-1)), target.view(-1), constraint_masks - - def compute_loss(self, model, net_output, sample, update_num, reduce=True): - lprobs, target, constraint_masks = self.get_lprobs_and_target(model, net_output, sample) - if constraint_masks is not None: - constraint_masks = 
constraint_masks[target != self.padding_idx] - lprobs = lprobs[target != self.padding_idx] - target = target[target != self.padding_idx] - loss, nll_loss, ntokens = label_smoothed_nll_loss( - lprobs, - target, - self.eps, - update_num, - reduce=reduce, - drop_worst_ratio=self.drop_worst_ratio, - drop_worst_after=self.drop_worst_after, - use_rdrop=self.use_rdrop, - reg_alpha=self.reg_alpha, - constraint_masks=constraint_masks, - constraint_start=self.constraint_start, - constraint_end=self.constraint_end - ) - return loss, nll_loss, ntokens - - def compute_accuracy(self, model, net_output, sample): - lprobs, target = self.get_lprobs_and_target(model, net_output, sample) - mask = target.ne(self.padding_idx) - n_correct = torch.sum( - lprobs.argmax(1).masked_select(mask).eq(target.masked_select(mask)) - ) - total = torch.sum(mask) - return n_correct, total - - @classmethod - def reduce_metrics(cls, logging_outputs) -> None: - """Aggregate logging outputs from data parallel training.""" - loss_sum = sum(log.get("loss", 0) for log in logging_outputs) - loss_sum_v1 = sum(log.get("loss_v1", 0) for log in logging_outputs) - loss_sum_v2 = sum(log.get("loss_v2", 0) for log in logging_outputs) - nll_loss_sum = sum(log.get("nll_loss", 0) for log in logging_outputs) - ntokens = sum(log.get("ntokens", 0) for log in logging_outputs) - nsentences = sum(log.get("nsentences", 0) for log in logging_outputs) - sample_size = sum(log.get("sample_size", 0) for log in logging_outputs) - sample_size_v1 = sum(log.get("sample_size_v1", 0) for log in logging_outputs) - sample_size_v2 = sum(log.get("sample_size_v2", 0) for log in logging_outputs) - - metrics.log_scalar( - "loss", loss_sum / sample_size, sample_size, round=3 - ) - metrics.log_scalar( - "loss_v1", loss_sum_v1 / max(sample_size_v1, 1), max(sample_size_v1, 1), round=3 - ) - metrics.log_scalar( - "loss_v2", loss_sum_v2 / max(sample_size_v2, 1), max(sample_size_v2, 1), round=3 - ) - metrics.log_scalar( - "nll_loss", nll_loss_sum / sample_size, ntokens, round=3 - ) - metrics.log_derived( - "ppl", lambda meters: utils.get_perplexity(meters["nll_loss"].avg) - ) - - metrics.log_scalar( - "ntokens", ntokens, 1, round=3 - ) - metrics.log_scalar( - "nsentences", nsentences, 1, round=3 - ) - metrics.log_scalar( - "sample_size", sample_size, 1, round=3 - ) - metrics.log_scalar( - "sample_size_v1", sample_size_v1, 1, round=3 - ) - metrics.log_scalar( - "sample_size_v2", sample_size_v2, 1, round=3 - ) - - total = utils.item(sum(log.get("total", 0) for log in logging_outputs)) - if total > 0: - metrics.log_scalar("total", total) - n_correct = utils.item( - sum(log.get("n_correct", 0) for log in logging_outputs) - ) - metrics.log_scalar("n_correct", n_correct) - metrics.log_derived( - "accuracy", - lambda meters: round( - meters["n_correct"].sum * 100.0 / meters["total"].sum, 3 - ) - if meters["total"].sum > 0 - else float("nan"), - ) - - @staticmethod - def logging_outputs_can_be_summed() -> bool: - """ - Whether the logging outputs returned by `forward` can be summed - across workers prior to calling `reduce_metrics`. Setting this - to True will improves distributed training speed. 
- """ - return True diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/multilingual/data_scripts/utils/strip_sgm.sh b/spaces/sriramelango/Social_Classification_Public/fairseq/examples/multilingual/data_scripts/utils/strip_sgm.sh deleted file mode 100644 index 7f4f61d7b1a46f51a1221de6b336cb70b5a0b8b3..0000000000000000000000000000000000000000 --- a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/multilingual/data_scripts/utils/strip_sgm.sh +++ /dev/null @@ -1 +0,0 @@ -grep "seg id" | sed 's///g' | sed 's/<\/seg>//g' diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/pay_less_attention_paper/README.md b/spaces/sriramelango/Social_Classification_Public/fairseq/examples/pay_less_attention_paper/README.md deleted file mode 100644 index 5adab11f4dc3461f9e7126ac391b04e703616e6b..0000000000000000000000000000000000000000 --- a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/pay_less_attention_paper/README.md +++ /dev/null @@ -1,176 +0,0 @@ -# Pay Less Attention with Lightweight and Dynamic Convolutions (Wu et al., 2019) - -This page contains pointers to pre-trained models as well as instructions on how to train new models for [our paper](https://arxiv.org/abs/1901.10430). - -## Citation: -```bibtex -@inproceedings{wu2018pay, - title = {Pay Less Attention with Lightweight and Dynamic Convolutions}, - author = {Felix Wu and Angela Fan and Alexei Baevski and Yann Dauphin and Michael Auli}, - booktitle = {International Conference on Learning Representations}, - year = {2019}, - url = {https://arxiv.org/abs/1901.10430}, -} -``` - -## Translation - -### Pre-trained models -For some datasets we release models without GLUs which are faster at inference. - -Model | Description | Dataset | Download ----|---|---|--- -`lightconv.no_glu.iwslt14.de-en` | LightConv (without GLUs) | [IWSLT14 German-English](https://wit3.fbk.eu/archive/2014-01/texts/de/en/de-en.tgz) | model:
[download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/iwslt14.de-en.lightconv.tar.gz)
IWSLT14 test:
[download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/data/iwslt14.de-en.test.tar.bz2) -`dynamicconv.no_glu.iwslt14.de-en` | DynamicConv (without GLUs) | [IWSLT14 German-English](https://wit3.fbk.eu/archive/2014-01/texts/de/en/de-en.tgz) | model:
[download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/iwslt14.de-en.dynamicconv.tar.gz)
IWSLT14 test:
[download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/data/iwslt14.de-en.test.tar.bz2) -`lightconv.no_glu.wmt16.en-de` | LightConv (without GLUs) | [WMT16 English-German](https://drive.google.com/uc?export=download&id=0B_bZck-ksdkpM25jRUN2X2UxMm8) | model:
[download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt16.en-de.joined-dict.lightconv.tar.gz)
newstest2014 (shared vocab):
[download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/data/wmt16.en-de.joined-dict.newstest2014.tar.bz2) -`dynamicconv.no_glu.wmt16.en-de` | DynamicConv (without GLUs) | [WMT16 English-German](https://drive.google.com/uc?export=download&id=0B_bZck-ksdkpM25jRUN2X2UxMm8) | model:
[download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt16.en-de.joined-dict.dynamicconv.tar.gz)
newstest2014 (shared vocab):
[download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/data/wmt16.en-de.joined-dict.newstest2014.tar.bz2) -`lightconv.glu.wmt16.en-de` | LightConv | [WMT16 English-German](https://drive.google.com/uc?export=download&id=0B_bZck-ksdkpM25jRUN2X2UxMm8) | model:
[download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt16.en-de.joined-dict.lightconv-glu.tar.gz)
newstest2014 (shared vocab):
[download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/data/wmt16.en-de.joined-dict.newstest2014.tar.bz2) -`dynamicconv.glu.wmt16.en-de` | DynamicConv | [WMT16 English-German](https://drive.google.com/uc?export=download&id=0B_bZck-ksdkpM25jRUN2X2UxMm8) | model:
[download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt16.en-de.joined-dict.dynamicconv-glu.tar.gz)
newstest2014 (shared vocab):
[download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/data/wmt16.en-de.joined-dict.newstest2014.tar.bz2) -`lightconv.glu.wmt14.en-fr` | LightConv | [WMT14 English-French](http://statmt.org/wmt14/translation-task.html#Download) | model:
[download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt14.en-fr.joined-dict.lightconv-glu.tar.gz)
newstest2014:
[download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/data/wmt14.en-fr.joined-dict.newstest2014.tar.bz2) -`dynamicconv.glu.wmt14.en-fr` | DynamicConv | [WMT14 English-French](http://statmt.org/wmt14/translation-task.html#Download) | model:
[download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt14.en-fr.joined-dict.dynamicconv-glu.tar.gz)
newstest2014:
[download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/data/wmt14.en-fr.joined-dict.newstest2014.tar.bz2) -`lightconv.glu.wmt17.zh-en` | LightConv | [WMT17 Chinese-English](http://statmt.org/wmt17/translation-task.html#Download) | model:
[download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt17.zh-en.lightconv-glu.tar.gz)
newstest2017:
[download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/data/wmt17.zh-en.newstest2017.tar.bz2) -`dynamicconv.glu.wmt17.zh-en` | DynamicConv | [WMT17 Chinese-English](http://statmt.org/wmt17/translation-task.html#Download) | model:
[download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt17.zh-en.dynamicconv-glu.tar.gz)
newstest2017:
[download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/data/wmt17.zh-en.newstest2017.tar.bz2) - -### Memory-Efficient CUDA Kernels - -Since the PyTorch implementations of Light/Dynamic conv are quite memory intensive, we have developed CUDA kernels that implement the light and dynamic convolution operator in a memory-efficient and performant manner. For large sequence lengths, these kernels save about 50% memory compared to the PyTorch equivalent. - -To install the kernels, use the commands below. Once installed, they will automatically be used in place of the PyTorch implementations whenever a light or dynamic convolution is used. - -```sh -# to install lightconv -cd fairseq/modules/lightconv_layer -python cuda_function_gen.py -python setup.py install - -# to install dynamicconv -cd fairseq/modules/dynamicconv_layer -python cuda_function_gen.py -python setup.py install -``` - -### Example usage (torch.hub) - -We require a few additional Python dependencies for preprocessing: -```bash -pip install sacremoses subword_nmt -``` - -Interactive translation via PyTorch Hub: -```python -import torch - -# List available models -torch.hub.list('pytorch/fairseq') # [..., 'lightconv.glu.wmt17.zh-en', ... ] - -# Load a transformer trained on WMT'16 En-De -zh2en = torch.hub.load('pytorch/fairseq', 'lightconv.glu.wmt17.zh-en', tokenizer='moses', bpe='subword_nmt') - -# The underlying model is available under the *models* attribute -assert isinstance(zh2en.models[0], fairseq.models.lightconv.LightConvModel) - -# Translate a sentence -zh2en.translate('你好 世界') -# 'Hello World' -``` - -Loading custom models: -```python -from fairseq.models.lightconv import LightConvModel -en2fr = LightConvModel.from_pretrained( - '/path/to/checkpoints', - checkpoint_file='checkpoint_best.pt', - data_name_or_path='data-bin/wmt14_en_fr', - bpe='subword_nmt', - bpe_codes='data-bin/wmt14_en_fr/en.code' -) -en2fr.translate('Hello world!') -# 'Bonjour le monde' -``` - -### Preprocessing the training datasets - -Please follow the instructions in [`examples/translation/README.md`](../translation/README.md) to preprocess the data. - -### Training and evaluation options: -To use the model without GLU, please set `--encoder-glu 0 --decoder-glu 0`. -For LightConv, please use `--encoder-conv-type lightweight --decoder-conv-type lightweight`, otherwise the default is DynamicConv. -For best BLEU results, lenpen may need to be manually tuned. - -To use the CUDA kernels, first install the PyTorch modules using the commands -above. Once the CUDA modules are installed, they will automatically be used -instead of the PyTorch modules. 
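
The note above says that `lenpen` may need to be tuned manually for the best BLEU. One rough way to do this is to sweep a few values on the validation set and keep the best-scoring run. The sketch below is only an illustration: the data-bin directory, checkpoint name and beam size are placeholders borrowed from the IWSLT14 example that follows, so adjust them to your own setup.

```sh
# Illustrative lenpen sweep on the validation set (paths and beam are placeholders)
SAVE="save/dynamic_conv_iwslt"
for LENPEN in 0.5 0.7 0.9 1.1 1.3; do
    echo "lenpen=${LENPEN}"
    CUDA_VISIBLE_DEVICES=0 fairseq-generate data-bin/iwslt14.tokenized.de-en \
        --path "${SAVE}/checkpoint_last10_avg.pt" \
        --gen-subset valid --batch-size 128 --beam 4 --remove-bpe \
        --lenpen $LENPEN --quiet
done
```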
- -### IWSLT14 De-En -Training and evaluating DynamicConv (without GLU) on a GPU: -```sh -# Training -SAVE="save/dynamic_conv_iwslt" -mkdir -p $SAVE -CUDA_VISIBLE_DEVICES=0 $(which fairseq-train) data-bin/iwslt14.tokenized.de-en \ - --clip-norm 0 --optimizer adam --lr 0.0005 \ - --source-lang de --target-lang en --max-tokens 4000 --no-progress-bar \ - --log-interval 100 --stop-min-lr '1e-09' --weight-decay 0.0001 \ - --criterion label_smoothed_cross_entropy --label-smoothing 0.1 \ - --lr-scheduler inverse_sqrt \ - --ddp-backend=legacy_ddp \ - --max-update 50000 --warmup-updates 4000 --warmup-init-lr '1e-07' \ - --adam-betas '(0.9, 0.98)' --keep-last-epochs 10 \ - -a lightconv_iwslt_de_en --save-dir $SAVE \ - --dropout 0.3 --attention-dropout 0.1 --weight-dropout 0.1 \ - --encoder-glu 0 --decoder-glu 0 -python scripts/average_checkpoints.py --inputs $SAVE \ - --num-epoch-checkpoints 10 --output "${SAVE}/checkpoint_last10_avg.pt" - -# Evaluation -CUDA_VISIBLE_DEVICES=0 fairseq-generate data-bin/iwslt14.tokenized.de-en --path "${SAVE}/checkpoint_last10_avg.pt" --batch-size 128 --beam 4 --remove-bpe --lenpen 1 --gen-subset test --quiet -``` - -### WMT16 En-De -Training and evaluating DynamicConv (with GLU) on WMT16 En-De using cosine scheduler on one machine with 8 V100 GPUs: -```sh -# Training -SAVE="save/dynamic_conv_wmt16en2de" -mkdir -p $SAVE -python -m torch.distributed.launch --nproc_per_node 8 $(which fairseq-train) \ - data-bin/wmt16_en_de_bpe32k --fp16 --log-interval 100 --no-progress-bar \ - --max-update 30000 --share-all-embeddings --optimizer adam \ - --adam-betas '(0.9, 0.98)' --clip-norm 0.0 --weight-decay 0.0 \ - --criterion label_smoothed_cross_entropy --label-smoothing 0.1 \ - --stop-min-lr 1e-09 --update-freq 16 --attention-dropout 0.1 --keep-last-epochs 10 \ - --ddp-backend=legacy_ddp --max-tokens 3584 \ - --lr-scheduler cosine --warmup-init-lr 1e-7 --warmup-updates 10000 \ - --lr-shrink 1 --lr 0.001 --min-lr 1e-7 --warmup-init-lr 1e-07 \ - --t-mult 1 --lr-period-updates 20000 \ - --arch lightconv_wmt_en_de_big --save-dir $SAVE \ - --dropout 0.3 --attention-dropout 0.1 --weight-dropout 0.1 \ - --encoder-glu 1 --decoder-glu 1 - -# Evaluation -CUDA_VISIBLE_DEVICES=0 fairseq-generate data-bin/wmt16.en-de.joined-dict.newstest2014 --path "${SAVE}/checkpoint_best.pt" --batch-size 128 --beam 5 --remove-bpe --lenpen 0.5 --gen-subset test > wmt16_gen.txt -bash scripts/compound_split_bleu.sh wmt16_gen.txt -``` - -### WMT14 En-Fr -Training DynamicConv (with GLU) on WMT14 En-Fr using cosine scheduler on one machine with 8 V100 GPUs: -```sh -# Training -SAVE="save/dynamic_conv_wmt14en2fr" -mkdir -p $SAVE -python -m torch.distributed.launch --nproc_per_node 8 $(which fairseq-train) \ - data-bin/wmt14_en_fr --fp16 --log-interval 100 --no-progress-bar \ - --max-update 30000 --share-all-embeddings --optimizer adam \ - --adam-betas '(0.9, 0.98)' --clip-norm 0.0 --weight-decay 0.0 \ - --criterion label_smoothed_cross_entropy --label-smoothing 0.1 \ - --stop-min-lr 1e-09 --update-freq 16 --attention-dropout 0.1 --keep-last-epochs 10 \ - --ddp-backend=legacy_ddp --max-tokens 3584 \ - --lr-scheduler cosine --warmup-init-lr 1e-7 --warmup-updates 10000 \ - --lr-shrink 1 --lr 0.001 --min-lr 1e-7 --warmup-init-lr 1e-07 \ - --t-mult 1 --lr-period-updates 70000 \ - --arch lightconv_wmt_en_fr_big --save-dir $SAVE \ - --dropout 0.1 --attention-dropout 0.1 --weight-dropout 0.1 \ - --encoder-glu 1 --decoder-glu 1 - -# Evaluation -CUDA_VISIBLE_DEVICES=0 fairseq-generate 
data-bin/wmt14.en-fr.joined-dict.newstest2014 --path "${SAVE}/checkpoint_best.pt" --batch-size 128 --beam 5 --remove-bpe --lenpen 0.9 --gen-subset test -``` diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/stories/README.md b/spaces/sriramelango/Social_Classification_Public/fairseq/examples/stories/README.md deleted file mode 100644 index 588941eddc5f0280f5254affd40ef49de874c885..0000000000000000000000000000000000000000 --- a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/stories/README.md +++ /dev/null @@ -1,66 +0,0 @@ -# Hierarchical Neural Story Generation (Fan et al., 2018) - -The following commands provide an example of pre-processing data, training a model, and generating text for story generation with the WritingPrompts dataset. - -## Pre-trained models - -Description | Dataset | Model | Test set(s) ----|---|---|--- -Stories with Convolutional Model
([Fan et al., 2018](https://arxiv.org/abs/1805.04833)) | [WritingPrompts](https://dl.fbaipublicfiles.com/fairseq/data/writingPrompts.tar.gz) | [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/models/stories_checkpoint.tar.bz2) | [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/data/stories_test.tar.bz2) - -We provide sample stories generated by the [convolutional seq2seq model](https://dl.fbaipublicfiles.com/fairseq/data/seq2seq_stories.txt) and [fusion model](https://dl.fbaipublicfiles.com/fairseq/data/fusion_stories.txt) from [Fan et al., 2018](https://arxiv.org/abs/1805.04833). The corresponding prompts for the fusion model can be found [here](https://dl.fbaipublicfiles.com/fairseq/data/fusion_prompts.txt). Note that there are unk in the file, as we modeled a small full vocabulary (no BPE or pre-training). We did not use these unk prompts for human evaluation. - -## Dataset - -The dataset can be downloaded like this: - -```bash -cd examples/stories -curl https://dl.fbaipublicfiles.com/fairseq/data/writingPrompts.tar.gz | tar xvzf - -``` - -and contains a train, test, and valid split. The dataset is described here: https://arxiv.org/abs/1805.04833. We model only the first 1000 words of each story, including one newLine token. - -## Example usage - -First we will preprocess the dataset. Note that the dataset release is the full data, but the paper models the first 1000 words of each story. Here is example code that trims the dataset to the first 1000 words of each story: -```python -data = ["train", "test", "valid"] -for name in data: - with open(name + ".wp_target") as f: - stories = f.readlines() - stories = [" ".join(i.split()[0:1000]) for i in stories] - with open(name + ".wp_target", "w") as o: - for line in stories: - o.write(line.strip() + "\n") -``` - -Once we've trimmed the data we can binarize it and train our model: -```bash -# Binarize the dataset: -export TEXT=examples/stories/writingPrompts -fairseq-preprocess --source-lang wp_source --target-lang wp_target \ - --trainpref $TEXT/train --validpref $TEXT/valid --testpref $TEXT/test \ - --destdir data-bin/writingPrompts --padding-factor 1 --thresholdtgt 10 --thresholdsrc 10 - -# Train the model: -fairseq-train data-bin/writingPrompts -a fconv_self_att_wp --lr 0.25 --optimizer nag --clip-norm 0.1 --max-tokens 1500 --lr-scheduler reduce_lr_on_plateau --decoder-attention True --encoder-attention False --criterion label_smoothed_cross_entropy --weight-decay .0000001 --label-smoothing 0 --source-lang wp_source --target-lang wp_target --gated-attention True --self-attention True --project-input True --pretrained False - -# Train a fusion model: -# add the arguments: --pretrained True --pretrained-checkpoint path/to/checkpoint - -# Generate: -# Note: to load the pretrained model at generation time, you need to pass in a model-override argument to communicate to the fusion model at generation time where you have placed the pretrained checkpoint. By default, it will load the exact path of the fusion model's pretrained model from training time. You should use model-override if you have moved the pretrained model (or are using our provided models). If you are generating from a non-fusion model, the model-override argument is not necessary. 
- -fairseq-generate data-bin/writingPrompts --path /path/to/trained/model/checkpoint_best.pt --batch-size 32 --beam 1 --sampling --sampling-topk 10 --temperature 0.8 --nbest 1 --model-overrides "{'pretrained_checkpoint':'/path/to/pretrained/model/checkpoint'}" -``` - -## Citation -```bibtex -@inproceedings{fan2018hierarchical, - title = {Hierarchical Neural Story Generation}, - author = {Fan, Angela and Lewis, Mike and Dauphin, Yann}, - booktitle = {Conference of the Association for Computational Linguistics (ACL)}, - year = 2018, -} -``` diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/criterions/sentence_ranking.py b/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/criterions/sentence_ranking.py deleted file mode 100644 index d4c76341d4d87e6d0da21ac89e833ce0bda13a0c..0000000000000000000000000000000000000000 --- a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/criterions/sentence_ranking.py +++ /dev/null @@ -1,120 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import math - -import torch -import torch.nn.functional as F -from fairseq import metrics, utils -from fairseq.criterions import FairseqCriterion, register_criterion - - -@register_criterion("sentence_ranking") -class SentenceRankingCriterion(FairseqCriterion): - def __init__(self, task, ranking_head_name, save_predictions, num_classes): - super().__init__(task) - self.ranking_head_name = ranking_head_name - if save_predictions is not None: - self.prediction_h = open(save_predictions, "w") - else: - self.prediction_h = None - self.num_classes = num_classes - - def __del__(self): - if self.prediction_h is not None: - self.prediction_h.close() - - @staticmethod - def add_args(parser): - # fmt: off - parser.add_argument('--save-predictions', metavar='FILE', - help='file to save predictions to') - parser.add_argument('--ranking-head-name', - default='sentence_classification_head', - help='name of the ranking head to use') - # fmt: on - - def forward(self, model, sample, reduce=True): - """Compute ranking loss for the given sample. 
- - Returns a tuple with three elements: - 1) the loss - 2) the sample size, which is used as the denominator for the gradient - 3) logging outputs to display while training - """ - assert ( - hasattr(model, "classification_heads") - and self.ranking_head_name in model.classification_heads - ), "model must provide sentence ranking head for --criterion=sentence_ranking" - - scores = [] - for idx in range(self.num_classes): - score, _ = model( - **sample["net_input{idx}".format(idx=idx + 1)], - classification_head_name=self.ranking_head_name, - ) - scores.append(score) - - logits = torch.cat(scores, dim=1) - sample_size = logits.size(0) - - if "target" in sample: - targets = model.get_targets(sample, [logits]).view(-1) - lprobs = F.log_softmax(logits, dim=-1, dtype=torch.float32) - loss = F.nll_loss(lprobs, targets, reduction="sum") - else: - targets = None - loss = torch.tensor(0.0, requires_grad=True) - - if self.prediction_h is not None: - preds = logits.argmax(dim=1) - for i, (id, pred) in enumerate(zip(sample["id"].tolist(), preds.tolist())): - if targets is not None: - label = targets[i].item() - print("{}\t{}\t{}".format(id, pred, label), file=self.prediction_h) - else: - print("{}\t{}".format(id, pred), file=self.prediction_h) - - logging_output = { - "loss": loss.data, - "ntokens": sample["ntokens"], - "nsentences": sample_size, - "sample_size": sample_size, - } - if targets is not None: - logging_output["ncorrect"] = (logits.argmax(dim=1) == targets).sum() - - return loss, sample_size, logging_output - - @staticmethod - def reduce_metrics(logging_outputs) -> None: - """Aggregate logging outputs from data parallel training.""" - loss_sum = sum(log.get("loss", 0) for log in logging_outputs) - ntokens = sum(log.get("ntokens", 0) for log in logging_outputs) - nsentences = sum(log.get("nsentences", 0) for log in logging_outputs) - sample_size = sum(log.get("sample_size", 0) for log in logging_outputs) - - metrics.log_scalar( - "loss", loss_sum / sample_size / math.log(2), sample_size, round=3 - ) - if sample_size != ntokens: - metrics.log_scalar( - "nll_loss", loss_sum / ntokens / math.log(2), ntokens, round=3 - ) - - if len(logging_outputs) > 0 and "ncorrect" in logging_outputs[0]: - ncorrect = sum(log.get("ncorrect", 0) for log in logging_outputs) - metrics.log_scalar( - "accuracy", 100.0 * ncorrect / nsentences, nsentences, round=1 - ) - - @staticmethod - def logging_outputs_can_be_summed() -> bool: - """ - Whether the logging outputs returned by `forward` can be summed - across workers prior to calling `reduce_metrics`. Setting this - to True will improves distributed training speed. 
- """ - return True diff --git a/spaces/sub314xxl/MetaGPT/metagpt/tools/ut_writer.py b/spaces/sub314xxl/MetaGPT/metagpt/tools/ut_writer.py deleted file mode 100644 index 2f4e1ec217a3077d480a917627c835ac6a31a420..0000000000000000000000000000000000000000 --- a/spaces/sub314xxl/MetaGPT/metagpt/tools/ut_writer.py +++ /dev/null @@ -1,290 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8 -*- - -import json -from pathlib import Path - -from metagpt.provider.openai_api import OpenAIGPTAPI as GPTAPI - -ICL_SAMPLE = '''接口定义: -```text -接口名称:元素打标签 -接口路径:/projects/{project_key}/node-tags -Method:POST - -请求参数: -路径参数: -project_key - -Body参数: -名称 类型 是否必须 默认值 备注 -nodes array 是 节点 - node_key string 否 节点key - tags array 否 节点原标签列表 - node_type string 否 节点类型 DATASET / RECIPE -operations array 是 - tags array 否 操作标签列表 - mode string 否 操作类型 ADD / DELETE - -返回数据: -名称 类型 是否必须 默认值 备注 -code integer 是 状态码 -msg string 是 提示信息 -data object 是 返回数据 -list array 否 node列表 true / false -node_type string 否 节点类型 DATASET / RECIPE -node_key string 否 节点key -``` - -单元测试: -```python -@pytest.mark.parametrize( -"project_key, nodes, operations, expected_msg", -[ -("project_key", [{"node_key": "dataset_001", "tags": ["tag1", "tag2"], "node_type": "DATASET"}], [{"tags": ["new_tag1"], "mode": "ADD"}], "success"), -("project_key", [{"node_key": "dataset_002", "tags": ["tag1", "tag2"], "node_type": "DATASET"}], [{"tags": ["tag1"], "mode": "DELETE"}], "success"), -("", [{"node_key": "dataset_001", "tags": ["tag1", "tag2"], "node_type": "DATASET"}], [{"tags": ["new_tag1"], "mode": "ADD"}], "缺少必要的参数 project_key"), -(123, [{"node_key": "dataset_001", "tags": ["tag1", "tag2"], "node_type": "DATASET"}], [{"tags": ["new_tag1"], "mode": "ADD"}], "参数类型不正确"), -("project_key", [{"node_key": "a"*201, "tags": ["tag1", "tag2"], "node_type": "DATASET"}], [{"tags": ["new_tag1"], "mode": "ADD"}], "请求参数超出字段边界") -] -) -def test_node_tags(project_key, nodes, operations, expected_msg): - pass -``` -以上是一个 接口定义 与 单元测试 样例。 -接下来,请你扮演一个Google 20年经验的专家测试经理,在我给出 接口定义 后,回复我单元测试。有几个要求 -1. 只输出一个 `@pytest.mark.parametrize` 与对应的test_<接口名>函数(内部pass,不实现) --- 函数参数中包含expected_msg,用于结果校验 -2. 生成的测试用例使用较短的文本或数字,并且尽量紧凑 -3. 
如果需要注释,使用中文 - -如果你明白了,请等待我给出接口定义,并只回答"明白",以节省token -''' - -ACT_PROMPT_PREFIX = '''参考测试类型:如缺少请求参数,字段边界校验,字段类型不正确 -请在一个 `@pytest.mark.parametrize` 作用域内输出10个测试用例 -```text -''' - -YFT_PROMPT_PREFIX = '''参考测试类型:如SQL注入,跨站点脚本(XSS),非法访问和越权访问,认证和授权,参数验证,异常处理,文件上传和下载 -请在一个 `@pytest.mark.parametrize` 作用域内输出10个测试用例 -```text -''' - -OCR_API_DOC = '''```text -接口名称:OCR识别 -接口路径:/api/v1/contract/treaty/task/ocr -Method:POST - -请求参数: -路径参数: - -Body参数: -名称 类型 是否必须 默认值 备注 -file_id string 是 -box array 是 -contract_id number 是 合同id -start_time string 否 yyyy-mm-dd -end_time string 否 yyyy-mm-dd -extract_type number 否 识别类型 1-导入中 2-导入后 默认1 - -返回数据: -名称 类型 是否必须 默认值 备注 -code integer 是 -message string 是 -data object 是 -``` -''' - - -class UTGenerator: - """UT生成器:通过API文档构造UT""" - - def __init__(self, swagger_file: str, ut_py_path: str, questions_path: str, - chatgpt_method: str = "API", template_prefix=YFT_PROMPT_PREFIX) -> None: - """初始化UT生成器 - - Args: - swagger_file: swagger路径 - ut_py_path: 用例存放路径 - questions_path: 模版存放路径,便于后续排查 - chatgpt_method: API - template_prefix: 使用模版,默认使用YFT_UT_PROMPT - """ - self.swagger_file = swagger_file - self.ut_py_path = ut_py_path - self.questions_path = questions_path - assert chatgpt_method in ["API"], "非法chatgpt_method" - self.chatgpt_method = chatgpt_method - - # ICL: In-Context Learning,这里给出例子,要求GPT模仿例子 - self.icl_sample = ICL_SAMPLE - self.template_prefix = template_prefix - - def get_swagger_json(self) -> dict: - """从本地文件加载Swagger JSON""" - with open(self.swagger_file, "r", encoding="utf-8") as file: - swagger_json = json.load(file) - return swagger_json - - def __para_to_str(self, prop, required, name=""): - name = name or prop["name"] - ptype = prop["type"] - title = prop.get("title", "") - desc = prop.get("description", "") - return f'{name}\t{ptype}\t{"是" if required else "否"}\t{title}\t{desc}' - - def _para_to_str(self, prop): - required = prop.get("required", False) - return self.__para_to_str(prop, required) - - def para_to_str(self, name, prop, prop_object_required): - required = name in prop_object_required - return self.__para_to_str(prop, required, name) - - def build_object_properties(self, node, prop_object_required, level: int = 0) -> str: - """递归输出object和array[object]类型的子属性 - - Args: - node (_type_): 子项的值 - prop_object_required (_type_): 是否必填项 - level: 当前递归深度 - """ - - doc = "" - - def dive_into_object(node): - """如果是object类型,递归输出子属性""" - if node.get("type") == "object": - sub_properties = node.get("properties", {}) - return self.build_object_properties(sub_properties, prop_object_required, level=level + 1) - return "" - - if node.get("in", "") in ["query", "header", "formData"]: - doc += f'{" " * level}{self._para_to_str(node)}\n' - doc += dive_into_object(node) - return doc - - for name, prop in node.items(): - doc += f'{" " * level}{self.para_to_str(name, prop, prop_object_required)}\n' - doc += dive_into_object(prop) - if prop["type"] == "array": - items = prop.get("items", {}) - doc += dive_into_object(items) - return doc - - def get_tags_mapping(self) -> dict: - """处理tag与path - - Returns: - Dict: tag: path对应关系 - """ - swagger_data = self.get_swagger_json() - paths = swagger_data["paths"] - tags = {} - - for path, path_obj in paths.items(): - for method, method_obj in path_obj.items(): - for tag in method_obj["tags"]: - if tag not in tags: - tags[tag] = {} - if path not in tags[tag]: - tags[tag][path] = {} - tags[tag][path][method] = method_obj - - return tags - - def generate_ut(self, include_tags) -> bool: - """生成用例文件""" - tags = self.get_tags_mapping() - 
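        # Walk every tagged module parsed from the Swagger spec; when include_tags is None
        # all of them get unit tests generated, otherwise only the requested tags are processed.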
for tag, paths in tags.items(): - if include_tags is None or tag in include_tags: - self._generate_ut(tag, paths) - return True - - def build_api_doc(self, node: dict, path: str, method: str) -> str: - summary = node["summary"] - - doc = f"接口名称:{summary}\n接口路径:{path}\nMethod:{method.upper()}\n" - doc += "\n请求参数:\n" - if "parameters" in node: - parameters = node["parameters"] - doc += "路径参数:\n" - - # param["in"]: path / formData / body / query / header - for param in parameters: - if param["in"] == "path": - doc += f'{param["name"]} \n' - - doc += "\nBody参数:\n" - doc += "名称\t类型\t是否必须\t默认值\t备注\n" - for param in parameters: - if param["in"] == "body": - schema = param.get("schema", {}) - prop_properties = schema.get("properties", {}) - prop_required = schema.get("required", []) - doc += self.build_object_properties(prop_properties, prop_required) - else: - doc += self.build_object_properties(param, []) - - # 输出返回数据信息 - doc += "\n返回数据:\n" - doc += "名称\t类型\t是否必须\t默认值\t备注\n" - responses = node["responses"] - response = responses.get("200", {}) - schema = response.get("schema", {}) - properties = schema.get("properties", {}) - required = schema.get("required", {}) - - doc += self.build_object_properties(properties, required) - doc += "\n" - doc += "```" - - return doc - - def _store(self, data, base, folder, fname): - file_path = self.get_file_path(Path(base) / folder, fname) - with open(file_path, "w", encoding="utf-8") as file: - file.write(data) - - def ask_gpt_and_save(self, question: str, tag: str, fname: str): - """生成问题,并且存储问题与答案""" - messages = [self.icl_sample, question] - result = self.gpt_msgs_to_code(messages=messages) - - self._store(question, self.questions_path, tag, f"{fname}.txt") - self._store(result, self.ut_py_path, tag, f"{fname}.py") - - def _generate_ut(self, tag, paths): - """处理数据路径下的结构 - - Args: - tag (_type_): 模块名称 - paths (_type_): 路径Object - """ - for path, path_obj in paths.items(): - for method, node in path_obj.items(): - summary = node["summary"] - question = self.template_prefix - question += self.build_api_doc(node, path, method) - self.ask_gpt_and_save(question, tag, summary) - - def gpt_msgs_to_code(self, messages: list) -> str: - """根据不同调用方式选择""" - result = '' - if self.chatgpt_method == "API": - result = GPTAPI().ask_code(msgs=messages) - - return result - - def get_file_path(self, base: Path, fname: str): - """保存不同的文件路径 - - Args: - base (str): 路径 - fname (str): 文件名称 - """ - path = Path(base) - path.mkdir(parents=True, exist_ok=True) - file_path = path / fname - return str(file_path) diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/ATube Catcher 1.0.236 Serial Key [BEST].md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/ATube Catcher 1.0.236 Serial Key [BEST].md deleted file mode 100644 index 88118a39728df54bec29cdb2db40becc9c8b930c..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/ATube Catcher 1.0.236 Serial Key [BEST].md +++ /dev/null @@ -1,72 +0,0 @@ -
-

How to Download and Install aTube Catcher 1.0.236 with Serial Key

-

aTube Catcher is a powerful and easy-to-use software that allows you to download videos from various online platforms, such as YouTube, Vimeo, Dailymotion, etc. You can also convert the downloaded videos to different formats, such as MP4, AVI, WMV, MOV, etc., and burn them to DVDs or CDs. Moreover, you can also record your screen, audio, or webcam with aTube Catcher.

-

If you want to enjoy the full features of aTube Catcher without any limitations, you need to activate it with a serial key. In this article, we will show you how to download and install aTube Catcher 1.0.236 with serial key in a few simple steps.

-

aTube Catcher 1.0.236 Serial Key


Download File ››› https://cinurl.com/2uEXLf



-

Step 1: Download aTube Catcher 1.0.236

-

The first step is to download the latest version of aTube Catcher from its official website[^1^]. You can also use the following link to download it directly:

-https://conbluetooth.net/atube-catcher-1-0-236-serial-key-repack/ -

Once you click on the link, you will see a download button on the webpage. Click on it and save the file on your computer.

-

Step 2: Install aTube Catcher 1.0.236

-

The next step is to install aTube Catcher on your computer. To do that, follow these instructions:

-
    -
  • Locate the downloaded file and double-click on it to run it.
  • -
  • Follow the on-screen instructions and accept the terms and conditions.
  • -
  • Choose the destination folder where you want to install aTube Catcher.
  • -
  • Click on the Install button and wait for the installation process to complete.
  • -
  • Click on the Finish button when done.
  • -
-

Step 3: Activate aTube Catcher 1.0.236 with Serial Key

-

The final step is to activate aTube Catcher with a serial key. To do that, follow these steps:

-
    -
  • Launch aTube Catcher from your desktop or start menu.
  • -
  • Click on the Help menu and select Enter Registration Code.
  • -
  • Enter the following serial key in the text box:
  • -9M7KNP-CATNCA-LKBT78 -
  • Click on the OK button and enjoy your activated aTube Catcher.
  • -
-

Congratulations! You have successfully downloaded and installed aTube Catcher 1.0.236 with serial key. Now you can use it to download, convert, record, and burn videos as you wish.

- -

How to Use aTube Catcher 1.0.236

-

Now that you have activated aTube Catcher, you can start using it to download and manage your videos. Here are some of the main features and functions of aTube Catcher:

-

Download Videos

-

To download videos from online platforms, follow these steps:

-

-
    -
  • Copy the URL of the video that you want to download from your browser.
  • -
  • Paste it in the URL box of aTube Catcher.
  • -
  • Select the output format and quality that you prefer from the drop-down menus.
  • -
  • Click on the Download button and wait for the download to finish.
  • -
  • You can find the downloaded video in the destination folder that you chose during the installation.
  • -
-

Convert Videos

-

To convert videos to different formats, follow these steps:

-
    -
  • Click on the Video Converter button on the main interface of aTube Catcher.
  • -
  • Add the video files that you want to convert by clicking on the Add button or dragging and dropping them.
  • -
  • Select the output format and quality that you want from the drop-down menus.
  • -
  • Click on the Convert button and wait for the conversion to finish.
  • -
  • You can find the converted video files in the destination folder that you chose during the installation.
  • -
-

Record Screen, Audio, or Webcam

-

To record your screen, audio, or webcam with aTube Catcher, follow these steps:

-
    -
  • Click on the Screen Record button on the main interface of aTube Catcher.
  • -
  • Select the source that you want to record from the drop-down menu (Screen, Audio, or Webcam).
  • -
  • Adjust the settings and options according to your preferences (such as resolution, frame rate, audio quality, etc.).
  • -
  • Click on the Record button and start recording your activity.
  • -
  • Click on the Stop button when you are done.
  • -
  • You can find the recorded file in the destination folder that you chose during the installation.
  • -
-

Burn Videos to DVD or CD

-

To burn videos to DVD or CD with aTube Catcher, follow these steps:

-
    -
  • Click on the DVD/CD Creator button on the main interface of aTube Catcher.
  • -
  • Add the video files that you want to burn by clicking on the Add button or dragging and dropping them.
  • -
  • Select the output format and quality that you want from the drop-down menus (DVD or CD).
  • -
  • Insert a blank DVD or CD into your drive and select it from the drop-down menu.
  • -
  • Click on the Burn button and wait for the burning process to finish.
  • -
  • You can eject your DVD or CD when it is done.
  • -

d5da3c52bf
-
-
\ No newline at end of file diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Caterpillar Et 2010 Factory Password _HOT_ Keygen.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Caterpillar Et 2010 Factory Password _HOT_ Keygen.md deleted file mode 100644 index 982f308e3b2a9bf3ad140a10abc39e1ee2452eb0..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Caterpillar Et 2010 Factory Password _HOT_ Keygen.md +++ /dev/null @@ -1,6 +0,0 @@ -

caterpillar et 2010 factory password keygen


Download ->>> https://cinurl.com/2uEXG4



- -Diesel particulate filter : DPF Delete and DPF Removal Golf 4 2. ... Calibration Video EGR/DPF-related fault code appears during truck ... 1 Calterm Master Tool v7. ... isx descargar sis caterpillar 2016 + cat et2016a, inlcuye crack keygen ... 2015 I have deleted 2, 2010 and a 2011 ISX Cummins engines. 4d29de3e1b
-
-
-

diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Encase Forensic V7 Crack.iso [Extra Quality].md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Encase Forensic V7 Crack.iso [Extra Quality].md deleted file mode 100644 index 16467ec61edbc81fdc9026f9db114e5bd6169da7..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Encase Forensic V7 Crack.iso [Extra Quality].md +++ /dev/null @@ -1,8 +0,0 @@ - -

It is possible to mount an iOS device using ImgBurn. The role of ImgBurn is to burn CDs/DVDs on Windows; it is a free data-imaging tool that can be used to create and mount forensic images. Its main features are that it can mount images and create a CD/DVD image for Windows. The final CD/DVD image is created by ImgBurn and is then mounted on the Windows machine. When used this way, ImgBurn will search for the boot partition of the iOS device in order to mount it.

-

The file system of the iOS device is mounted as a FAT file system, which is a simple file system that is very easy to use. Most of the time, forensic investigators prefer to mount the iOS device using FAT. FAT has no complex algorithms, so the main reasons for using it are simplicity and ease of use. Another reason is that the iOS device stores data in a FAT file system, which means the FAT file system can be used to extract data from the device.

-

Encase Forensic V7 Crack.iso


DOWNLOAD >>>>> https://cinurl.com/2uEYWP



-

The way my forensic process works is that I review the data on the device and then select the type of analysis to perform. I would then attempt to extract data from the device; if that didn't work, I would use a recovery program. Once I have the data on a machine, I extract it and do a side-by-side comparison to determine which data is hidden and which is not. Hope that helps.

-

i have found a hidden bios on the samsung galaxy s4 gt-i9505. the link to the bios dump is: there are also links to the other phones that i have found hidden bios’s. i have a sample list of the phones and the hidden bios’s that i have found: > encase forensic v7 crack.iso

The problem with the BIOS is that it is hidden. The BIOS chip has a secret key that can open it; if you have this key, you can extract the data from the BIOS, and it is then a matter of locating the BIOS chip and extracting it. If you have the BIOS dump, you can compare it to the BIOS of a phone that has no hidden BIOS, extract the data from it, and do a side-by-side comparison.

899543212b
-
-
\ No newline at end of file diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Robot Studio 5.15.02 25 _HOT_.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Robot Studio 5.15.02 25 _HOT_.md deleted file mode 100644 index 1e400ecef162a5e95268ba1a677870fa2ea8a4fc..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Robot Studio 5.15.02 25 _HOT_.md +++ /dev/null @@ -1,42 +0,0 @@ -
-

How to use RobotStudio 5.15.02 25 for offline programming and simulation of ABB robots

-

RobotStudio is a software tool developed by ABB Robotics that allows users to create, simulate and test a complete robot installation in a virtual 3D environment without having to visit or disturb their actual production line. RobotStudio 5.15.02 25 is the latest version of RobotStudio that was released in April 2023 and includes several new features and improvements.

-

In this article, we will show you how to use RobotStudio 5.15.02 25 for offline programming and simulation of ABB robots, such as the IRB140 model. We will cover the following topics:

-

robot studio 5.15.02 25


Download ✸✸✸ https://cinurl.com/2uEXsk



-
    -
  • How to download and install RobotStudio 5.15.02 25 and RobotWare
  • -
  • How to create a new station and add a robot, a tool and a work object
  • -
  • How to program the robot using RAPID language and graphical editors
  • -
  • How to simulate the robot motion and check for collisions and errors
  • -
  • How to export the program to a real robot controller
  • -
-

How to download and install RobotStudio 5.15.02 25 and RobotWare

-

To use RobotStudio 5.15.02 25, you need to have a valid subscription and activation key from ABB Robotics. You can request a free trial or purchase a subscription from the ABB Robotics website[^1^]. You also need to have RobotWare installed on your computer, which is the software that runs on the real robot controller. RobotWare can be installed from RobotApps within RobotStudio.

-

To download and install RobotStudio 5.15.02 25, follow these steps:

-
    -
  1. Go to the ABB Robotics website[^1^] and register or log in with your account.
  2. -
  3. Select the RobotStudio 2022.3.2 image file and click on Download.
  4. -
  5. Save the file on your computer and run it as an administrator.
  6. -
  7. Follow the instructions on the screen and choose the installation type (Minimal, Complete or Custom).
  8. -
  9. When prompted, enter your activation key and click on Activate.
  10. -
  11. Wait for the installation to finish and launch RobotStudio from the Start menu or desktop shortcut.
  12. -
-

How to create a new station and add a robot, a tool and a work object

-

A station is a virtual representation of your robot installation that contains all the components and settings that you need to program and simulate your robot. To create a new station and add a robot, a tool and a work object, follow these steps:

-
    -
  1. In RobotStudio, click on File > New > Station.
  2. -
  3. In the Station Explorer panel on the left, right-click on Controllers and select Add Controller.
  4. -
  5. In the Add Controller dialog box, select the type of controller that matches your real robot controller (e.g., IRC5) and click on OK.
  6. -
  7. In the Station Explorer panel, right-click on Robots under your controller and select Add Robot.
  8. -
  9. In the Add Robot dialog box, select the type of robot that matches your real robot (e.g., IRB140) and click on OK.
  10. -
  11. In the Station Explorer panel, right-click on Tools under your robot and select Add Tool.
  12. -
  13. In the Add Tool dialog box, select a tool from the library or browse for a custom tool file and click on OK.
  14. -
  15. In the Station Explorer panel, right-click on Work Objects under your controller and select Add Work Object.
  16. -
  17. In the Add Work Object dialog box, select a work object from the library or browse for a custom work object file and click on OK.
  18. -
  19. In the Graphics Window panel on the right, you can see your station with all the components that you added. You can use the mouse buttons and scroll wheel to zoom, pan and rotate the view.
  20. -
- -

How to program the

-

d5da3c52bf
-
-
\ No newline at end of file diff --git a/spaces/svjack/ControlNet-Face-Chinese/SPIGA/spiga/demo/analyze/track/__init__.py b/spaces/svjack/ControlNet-Face-Chinese/SPIGA/spiga/demo/analyze/track/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/tappyness1/error_analysis_obj_det/README.md b/spaces/tappyness1/error_analysis_obj_det/README.md deleted file mode 100644 index c74a5a7bbad759755d9a6b511de672135a8666b0..0000000000000000000000000000000000000000 --- a/spaces/tappyness1/error_analysis_obj_det/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Error Analysis Obj Det -emoji: 🚀 -colorFrom: blue -colorTo: yellow -sdk: streamlit -python_version: 3.8.9 -sdk_version: 1.17.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/terfces0erbo/CollegeProjectV2/Autodesk AutoCAD 2018.0.2 Final (x86 X64) Keygen - [SH] Keygen LINK.md b/spaces/terfces0erbo/CollegeProjectV2/Autodesk AutoCAD 2018.0.2 Final (x86 X64) Keygen - [SH] Keygen LINK.md deleted file mode 100644 index e06a9e7bda860f1c2ec94093d38015c9d540dab2..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/Autodesk AutoCAD 2018.0.2 Final (x86 X64) Keygen - [SH] Keygen LINK.md +++ /dev/null @@ -1,6 +0,0 @@ -

Autodesk AutoCAD 2018.0.2 Final (x86 X64) Keygen - [SH] Keygen


Download Filehttps://bytlly.com/2uGiws



-
- d5da3c52bf
-
-
-

diff --git a/spaces/terfces0erbo/CollegeProjectV2/CLIP STUDIO PAINT EX 1.9 Setup License Key Full [Latest].md b/spaces/terfces0erbo/CollegeProjectV2/CLIP STUDIO PAINT EX 1.9 Setup License Key Full [Latest].md deleted file mode 100644 index da07b9afa039294ed4c0c08751f7df040eaab755..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/CLIP STUDIO PAINT EX 1.9 Setup License Key Full [Latest].md +++ /dev/null @@ -1,31 +0,0 @@ - -

How to Download and Install CLIP STUDIO PAINT EX 1.9 with License Key

-

CLIP STUDIO PAINT EX is a powerful and versatile software for creating digital art, comics, animation, and more. It offers a wide range of tools and features to suit any style and workflow. Whether you are a beginner or a professional, you can enjoy the benefits of CLIP STUDIO PAINT EX with its easy-to-use interface and customizable settings.

-

CLIP STUDIO PAINT EX 1.9 Setup License Key Full [Latest]


Download 🗹 https://bytlly.com/2uGm2a



-

In this article, we will show you how to download and install CLIP STUDIO PAINT EX 1.9 with a license key, which is the latest version of the software as of April 2023. This version includes some new and improved features, such as:

-
    -
  • A new vector eraser tool that can erase any part of a vector layer without affecting the rest.
  • -
  • A new colorize feature that can automatically color your line art based on your settings.
  • -
  • A new animation timeline that can display multiple layers and frames at once.
  • -
  • A new export option that can export your animation as a GIF file.
  • -
  • And more!
  • -
-

To download and install CLIP STUDIO PAINT EX 1.9 with a license key, follow these steps:

-
    -
  1. Go to the official website of CLIP STUDIO PAINT and click on the "Download" button.
  2. -
  3. Select your operating system (Windows or Mac) and your language.
  4. -
  5. Enter your email address and click on the "Send" button. You will receive an email with a download link and a license key.
  6. -
  7. Click on the download link and save the file to your computer.
  8. -
  9. Run the installer and follow the instructions on the screen.
  10. -
  11. When prompted, enter your license key and click on the "Activate" button.
  12. -
  13. Enjoy using CLIP STUDIO PAINT EX 1.9!
  14. -
-

If you have any questions or issues with the installation process, you can contact the customer support team of CLIP STUDIO PAINT through their website or social media channels. They will be happy to assist you with any problem you may encounter.

-

CLIP STUDIO PAINT EX 1.9 is a great software for creating stunning digital art, comics, animation, and more. It has everything you need to unleash your creativity and express your vision. Download it today and see for yourself what it can do for you!

-

- -

One of the best features of CLIP STUDIO PAINT EX 1.9 is its compatibility with various devices and formats. You can use it on your PC, tablet, or smartphone, and you can import and export files in various formats, such as PSD, PNG, JPG, BMP, TIFF, PDF, EPUB, and more. You can also sync your files across different devices using the CLIP STUDIO cloud service. This way, you can access your work anytime and anywhere.

-

Another great feature of CLIP STUDIO PAINT EX 1.9 is its extensive library of resources and materials. You can browse and download thousands of brushes, textures, patterns, fonts, 3D models, and more from the CLIP STUDIO ASSETS store. You can also create your own materials and share them with other users. You can also access tutorials and tips from professional artists and experts on the CLIP STUDIO TIPS website. You can learn new skills and techniques to improve your art and workflow.

-

CLIP STUDIO PAINT EX 1.9 is not only a software for creating digital art, comics, animation, and more. It is also a software for connecting with other artists and enthusiasts. You can join the CLIP STUDIO community and interact with millions of users from around the world. You can share your work, get feedback, join contests, participate in events, and more. You can also follow your favorite artists and discover new ones. You can also get inspired by the amazing works of others and find new ideas for your own projects.

d5da3c52bf
-
-
\ No newline at end of file diff --git a/spaces/terfces0erbo/CollegeProjectV2/Club International Magazine Online BEST.md b/spaces/terfces0erbo/CollegeProjectV2/Club International Magazine Online BEST.md deleted file mode 100644 index d8b6e629b857694a4c62d52d021f904792516821..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/Club International Magazine Online BEST.md +++ /dev/null @@ -1,9 +0,0 @@ -

Club International Magazine Online


Download File »»» https://bytlly.com/2uGjVc



- -Collectible magazine delivered to your door. A private club for inquisitive minds built around. magazine, application and . Subscribe on the site CITY. Magazine ''City''. -Download city magazine, city magazine app, city app, city magazine app, city magazine app, city magazine app, city magazine, app. -City magazine. -In the City No 02(22), 2012. Magazine City. 8a78ff9644
-
-
-

diff --git a/spaces/terfces0erbo/CollegeProjectV2/Free Sainik Full Movie Download Hindi Mp4 EXCLUSIVE.md b/spaces/terfces0erbo/CollegeProjectV2/Free Sainik Full Movie Download Hindi Mp4 EXCLUSIVE.md deleted file mode 100644 index 9663d35679ccbc04f0170e1a1fd18badfe66bcd3..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/Free Sainik Full Movie Download Hindi Mp4 EXCLUSIVE.md +++ /dev/null @@ -1,6 +0,0 @@ -

free Sainik full movie download hindi mp4


Download - https://bytlly.com/2uGkcN



- -10-Dec-2019 - Sainik (1993) Full Hindi Movie | Akshay Kumar, Ashwini ... dubbed movies to watch online and download in HD.. tamil new movie free . ... download, Sainik 1993 HD Mobile movie, Sainik 1993 HD Mp4 movie, ... 1fdad05405
-
-
-

diff --git a/spaces/terfces0erbo/CollegeProjectV2/HOT! Apostilas De Ingles Kumon .pdf.md b/spaces/terfces0erbo/CollegeProjectV2/HOT! Apostilas De Ingles Kumon .pdf.md deleted file mode 100644 index 1111f957931a3e949d170d6bf8294f7c7cd6e259..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/HOT! Apostilas De Ingles Kumon .pdf.md +++ /dev/null @@ -1,52 +0,0 @@ -

HOT! Apostilas De Ingles Kumon .pdf


Download File ••• https://bytlly.com/2uGiMZ



- -links to PDF. EOF. - -2. - -A: - -You are asking for details about the commands, but you did not specify what language or environment. - -For this reason, I give a general answer. - -You can always get the source of a Bash script by - -$ man -k "command" - -or - -$ info "#command" - -$ grep command /usr/share/doc/packagename/README.Debian.gz - -To get a list of all the commands used, either type - -$ man --list - -$ man --section - -and read from the bottom up. - -MONROE, LA (KTRK) -- A 26-year-old man is dead after he was shot while working as a bartender at a Monroe establishment. - -The incident happened at the New Orleans Lounge on Piety Street. - -The Monroe Police Department says there was a dispute inside the lounge over a parking spot. - -After a dispute, one of the suspects pulled out a gun and shot the victim twice. - -The victim was transported to a local hospital where he later died. - -The other suspect fled the scene after the shooting. - -The Monroe Police Department says the victim's identity has not been released at this time. - -An investigation into the incident is ongoing. - -[Cardiac arrest in a patient with post-traumatic stress disorder]. - -A 40-year-old male was found in cardiac arrest by his roommate. He was admitted to the emergency department of the Tokushukai Medico-Psychiatric Hospital, and resuscitation was attempted. He was diagnosed with cardiogenic shock on admission and was transferred to the intensive care unit. The patient was found to have a tracheostomy tube, a nasogastric tube and a central venous line. The cause of cardiac arrest was discussed by a team including a psychiatrist and cardiologist and the patient was diagnosed with post-traumatic stress disorder (PTSD). The patient was treated with diazepam. He was weaned from the ventilator but he was discharged from the intensive care unit on the sixth hospital day. The patient was readmitted to the hospital on the 23rd hospital day with a staphylococcus infection in the stoma site of the tracheostomy tube. He was diagnosed with cardiogenic shock again and was treated with 4fefd39f24
-
-
-

diff --git a/spaces/theekshana/boardpac_chat_app_test/README.md b/spaces/theekshana/boardpac_chat_app_test/README.md deleted file mode 100644 index 0fbd7d650584a138bdbe91bfdbf624e83f90a726..0000000000000000000000000000000000000000 --- a/spaces/theekshana/boardpac_chat_app_test/README.md +++ /dev/null @@ -1,157 +0,0 @@ ---- -title: Boardpac Chat App Test -emoji: 😻 -colorFrom: gray -colorTo: purple -sdk: streamlit -sdk_version: 1.26.0 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference - -# privateGPT -Ask questions to your documents without an internet connection, using the power of LLMs. 100% private, no data leaves your execution environment at any point. You can ingest documents and ask questions without an internet connection! - -Built with [LangChain](https://github.com/hwchase17/langchain), [GPT4All](https://github.com/nomic-ai/gpt4all), [LlamaCpp](https://github.com/ggerganov/llama.cpp), [Chroma](https://www.trychroma.com/) and [SentenceTransformers](https://www.sbert.net/). - -demo - -### how to run -python -m streamlit run app.py - -# Environment Setup -In order to set your environment up to run the code here, first install all requirements: - -```shell -pip3 install -r requirements.txt -``` - -Then, download the LLM model and place it in a directory of your choice: -- LLM: default to [ggml-gpt4all-j-v1.3-groovy.bin](https://gpt4all.io/models/ggml-gpt4all-j-v1.3-groovy.bin). If you prefer a different GPT4All-J compatible model, just download it and reference it in your `.env` file. - -Copy the `example.env` template into `.env` -```shell -cp example.env .env -``` - -and edit the variables appropriately in the `.env` file. -``` -MODEL_TYPE: supports LlamaCpp or GPT4All -PERSIST_DIRECTORY: is the folder you want your vectorstore in -MODEL_PATH: Path to your GPT4All or LlamaCpp supported LLM -MODEL_N_CTX: Maximum token limit for the LLM model -MODEL_N_BATCH: Number of tokens in the prompt that are fed into the model at a time. Optimal value differs a lot depending on the model (8 works well for GPT4All, and 1024 is better for LlamaCpp) -EMBEDDINGS_MODEL_NAME: SentenceTransformers embeddings model name (see https://www.sbert.net/docs/pretrained_models.html) -TARGET_SOURCE_CHUNKS: The amount of chunks (sources) that will be used to answer a question -``` - -Note: because of the way `langchain` loads the `SentenceTransformers` embeddings, the first time you run the script it will require internet connection to download the embeddings model itself. - -## Test dataset -This repo uses a [state of the union transcript](https://github.com/imartinez/privateGPT/blob/main/source_documents/state_of_the_union.txt) as an example. - -## Instructions for ingesting your own dataset - -Put any and all your files into the `source_documents` directory - -The supported extensions are: - - - `.csv`: CSV, - - `.docx`: Word Document, - - `.doc`: Word Document, - - `.enex`: EverNote, - - `.eml`: Email, - - `.epub`: EPub, - - `.html`: HTML File, - - `.md`: Markdown, - - `.msg`: Outlook Message, - - `.odt`: Open Document Text, - - `.pdf`: Portable Document Format (PDF), - - `.pptx` : PowerPoint Document, - - `.ppt` : PowerPoint Document, - - `.txt`: Text file (UTF-8), - -Run the following command to ingest all the data. 
- -```shell -python ingest.py -``` - -Output should look like this: - -```shell -Creating new vectorstore -Loading documents from source_documents -Loading new documents: 100%|██████████████████████| 1/1 [00:01<00:00, 1.73s/it] -Loaded 1 new documents from source_documents -Split into 90 chunks of text (max. 500 tokens each) -Creating embeddings. May take some minutes... -Using embedded DuckDB with persistence: data will be stored in: db -Ingestion complete! You can now run privateGPT.py to query your documents -``` - -It will create a `db` folder containing the local vectorstore. Will take 20-30 seconds per document, depending on the size of the document. -You can ingest as many documents as you want, and all will be accumulated in the local embeddings database. -If you want to start from an empty database, delete the `db` folder. - -Note: during the ingest process no data leaves your local environment. You could ingest without an internet connection, except for the first time you run the ingest script, when the embeddings model is downloaded. - -## Ask questions to your documents, locally! -In order to ask a question, run a command like: - -```shell -python privateGPT.py -``` - -And wait for the script to require your input. - -```plaintext -> Enter a query: -``` - -Hit enter. You'll need to wait 20-30 seconds (depending on your machine) while the LLM model consumes the prompt and prepares the answer. Once done, it will print the answer and the 4 sources it used as context from your documents; you can then ask another question without re-running the script, just wait for the prompt again. - -Note: you could turn off your internet connection, and the script inference would still work. No data gets out of your local environment. - -Type `exit` to finish the script. - - -### CLI -The script also supports optional command-line arguments to modify its behavior. You can see a full list of these arguments by running the command ```python privateGPT.py --help``` in your terminal. - - -# How does it work? -Selecting the right local models and the power of `LangChain` you can run the entire pipeline locally, without any data leaving your environment, and with reasonable performance. - -- `ingest.py` uses `LangChain` tools to parse the document and create embeddings locally using `HuggingFaceEmbeddings` (`SentenceTransformers`). It then stores the result in a local vector database using `Chroma` vector store. -- `privateGPT.py` uses a local LLM based on `GPT4All-J` or `LlamaCpp` to understand questions and create answers. The context for the answers is extracted from the local vector store using a similarity search to locate the right piece of context from the docs. -- `GPT4All-J` wrapper was introduced in LangChain 0.0.162. - -# System Requirements - -## Python Version -To use this software, you must have Python 3.10 or later installed. Earlier versions of Python will not compile. - -## C++ Compiler -If you encounter an error while building a wheel during the `pip install` process, you may need to install a C++ compiler on your computer. - -### For Windows 10/11 -To install a C++ compiler on Windows 10/11, follow these steps: - -1. Install Visual Studio 2022. -2. Make sure the following components are selected: - * Universal Windows Platform development - * C++ CMake tools for Windows -3. Download the MinGW installer from the [MinGW website](https://sourceforge.net/projects/mingw/). -4. Run the installer and select the `gcc` component. 
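
As a rough illustration of the pipeline described in the "How does it work?" section above, the same components can also be driven from a few lines of Python. This is only a sketch under assumptions: it uses the default values suggested by `example.env` (a `db` persist directory, the `all-MiniLM-L6-v2` sentence-transformers model and the GPT4All-J groovy checkpoint) and the pre-0.1 `langchain` import paths this project targets, so adjust the names to match your own `.env`.

```python
# Minimal sketch of the query pipeline (assumed defaults; adjust to your .env)
from langchain.chains import RetrievalQA
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.llms import GPT4All
from langchain.vectorstores import Chroma

# Load the persisted vectorstore created by ingest.py
embeddings = HuggingFaceEmbeddings(model_name="all-MiniLM-L6-v2")
db = Chroma(persist_directory="db", embedding_function=embeddings)
retriever = db.as_retriever(search_kwargs={"k": 4})  # TARGET_SOURCE_CHUNKS

# Local LLM; no data leaves the machine
llm = GPT4All(model="models/ggml-gpt4all-j-v1.3-groovy.bin", backend="gptj", verbose=False)
qa = RetrievalQA.from_chain_type(llm=llm, chain_type="stuff",
                                 retriever=retriever, return_source_documents=True)

res = qa("What does the ingested document say about the economy?")
print(res["result"])
for doc in res["source_documents"]:
    print(doc.metadata.get("source"))
```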
- -## Mac Running Intel -When running a Mac with Intel hardware (not M1), you may run into _clang: error: the clang compiler does not support '-march=native'_ during pip install. - -If so set your archflags during pip install. eg: _ARCHFLAGS="-arch x86_64" pip3 install -r requirements.txt_ - -# Disclaimer -This is a test project to validate the feasibility of a fully private solution for question answering using LLMs and Vector embeddings. It is not production ready, and it is not meant to be used in production. The models selection is not optimized for performance, but for privacy; but it is possible to use different models and vectorstores to improve performance. diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Avunu Subtitles Download [VERIFIED].md b/spaces/tialenAdioni/chat-gpt-api/logs/Avunu Subtitles Download [VERIFIED].md deleted file mode 100644 index bd163026935490b5af57b60d621c238df78ab945..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/Avunu Subtitles Download [VERIFIED].md +++ /dev/null @@ -1,38 +0,0 @@ - -Here is a possible title and article for the keyword "Avunu subtitles download": - -

How to Download Avunu Subtitles in Different Languages

-

Avunu is a popular Telugu horror thriller film series that has two parts: Avunu (2012) and Avunu Part 2 (2015). The films follow the story of a young couple who move into a new apartment and experience paranormal activities. The films are directed by Ravi Babu and star Poorna and Harshvardhan Rane in the lead roles.

-

If you are a fan of Avunu and want to watch it with subtitles in your preferred language, you might be wondering how to download them. There are many websites that offer subtitles for Avunu, but not all of them are reliable or safe. Some might contain malware, viruses, or inaccurate translations. To avoid these risks, you need to find a trustworthy source for Avunu subtitles download.

-

Avunu subtitles download


Download File ☆☆☆ https://urlcod.com/2uK3gz



-

One of the best websites for Avunu subtitles download is SUBDL. SUBDL is a fast and easy subtitle website that offers subtitles in various languages for movies and TV shows. You can find subtitles for Avunu in English, French, Spanish, and more. SUBDL also provides subtitles for Avunu Part 2, the sequel to the first film.

-

To download Avunu subtitles from SUBDL, you just need to follow these simple steps:

-
    -
  1. Go to https://subdl.com/subtitle/sd55624/avunu-valliddaru-ishtapaddaru for Avunu (2012) or https://subdl.com/subtitle/sd55625/avunu-part-2 for Avunu Part 2 (2015).
  2. Select your desired language from the filter menu on the left side of the page.
  3. Click on the download button next to the subtitle file that matches your video quality and format.
  4. Save the subtitle file to your device and extract it if it is compressed.
  5. Rename the subtitle file to match the name of your video file (see the small example after this list).
  6. Play your video with your preferred media player and enjoy Avunu with subtitles.
-
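If you prefer to script the renaming step instead of doing it in a file manager, a tiny Python sketch like the one below works on any platform; the file names are hypothetical examples, and the only requirement is that the subtitle ends up with the same base name as the video.

```python
from pathlib import Path

# Hypothetical file names: the subtitle must share the video's base name.
video = Path("Avunu.2012.720p.x264.mkv")
subtitle = Path("Avunu.2012.English.srt")

# Rename the subtitle so the media player picks it up automatically.
subtitle.rename(video.with_suffix(".srt"))   # -> Avunu.2012.720p.x264.srt
```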

That's it! You have successfully downloaded Avunu subtitles from SUBDL. Now you can watch this thrilling horror film series with subtitles in your preferred language. SUBDL is a reliable and safe website for Avunu subtitles download, as well as other movies and TV shows. You can also request subtitles for any content that is not available on SUBDL. SUBDL is your ultimate destination for subtitle downloads.


Why Watch Avunu with Subtitles?

-

Avunu is a film series that has received critical acclaim and commercial success for its innovative and realistic portrayal of horror. The films use minimal special effects and rely on sound design, camera angles, and acting to create a sense of dread and suspense. The films also explore themes such as marital issues, sexual harassment, and superstition.

-

Watching Avunu with subtitles can enhance your viewing experience in many ways. First of all, subtitles can help you understand the dialogues better, especially if you are not familiar with the Telugu language or the regional accents. Subtitles can also help you catch the subtle details and nuances that might be missed otherwise. Subtitles can also make you more immersed in the story and the atmosphere of the film.

-

Moreover, watching Avunu with subtitles can also help you learn a new language or improve your existing language skills. You can compare the original audio with the translated subtitles and learn new words, phrases, and expressions. You can also improve your listening comprehension and pronunciation by following along with the subtitles. Watching Avunu with subtitles can be a fun and effective way to learn Telugu or any other language.

- -

Where to Watch Avunu Online?

-

If you are looking for a way to watch Avunu online, you have several options to choose from. You can either rent or buy the films from various streaming platforms such as Amazon Prime Video, YouTube, Google Play Movies, iTunes, or Netflix. You can also watch the films for free on some websites that host pirated content, but this is not recommended as it is illegal and unethical.

-

The best way to watch Avunu online is to use a legal and safe streaming service that offers high-quality video and audio, as well as subtitles in different languages. One of the best streaming services for Avunu is Aha. Aha is a Telugu-exclusive OTT platform that offers a wide range of movies and shows in various genres. You can watch Avunu and Avunu Part 2 on Aha with subtitles in English or Hindi.

-

-

To watch Avunu on Aha, you just need to follow these simple steps:

-
    -
  1. Go to https://www.aha.video/ and sign up for an account.
  2. Select your preferred subscription plan from monthly or yearly options.
  3. Search for Avunu or Avunu Part 2 in the search bar or browse through the horror category.
  4. Click on the play button and enjoy the film with subtitles.
-

Aha is a great streaming service for Avunu fans as it offers high-quality video and audio, as well as subtitles in different languages. You can also watch other Telugu movies and shows on Aha with subtitles. Aha is your ultimate destination for Telugu entertainment online.

7196e7f11a
-
-
\ No newline at end of file diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Crez des btiments en 3D avec Archicad 13 francais gratuit avec crack le logiciel BIM incontournable.md b/spaces/tialenAdioni/chat-gpt-api/logs/Crez des btiments en 3D avec Archicad 13 francais gratuit avec crack le logiciel BIM incontournable.md deleted file mode 100644 index fb1fff7980c9a981fbf9078e1eabf8a0de2bd12b..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/Crez des btiments en 3D avec Archicad 13 francais gratuit avec crack le logiciel BIM incontournable.md +++ /dev/null @@ -1,197 +0,0 @@ - -

Archicad 13 francais gratuit avec crack: how to download and install the BIM software for 3D modeling

-

Are you looking for BIM (Building Information Modeling) software for the 3D modeling of buildings and architecture? Do you want to enjoy the advanced features of Archicad 13 without paying full price? You are in the right place! In this article, we explain how to download and install "Archicad 13 francais gratuit avec crack", a package that lets you design architectural projects in 3D with complete documentation and collaborative work. We will also cover the software's main features, advantages and risks, along with a few tips for using it effectively.

-

archicad 13 francais gratuit avec crack


Download File ··· https://urlcod.com/2uK73s



-

What is Archicad 13?

ArchiCAD is an architecture program specialized in BIM, that is, building information modeling. It lets you model a building in three dimensions and produce complete documentation that remains useful throughout an architectural project. ArchiCAD is developed by Graphisoft, part of the Nemetschek group, a world leader in software solutions for architecture, engineering and construction.

The main features of ArchiCAD 13

ArchiCAD 13 is the version of the software released in 2009. It brings several new features and improvements over previous versions, including:

- The MORPH tool, for modeling and editing free-form 3D shapes.
- The SHELL tool, for creating complex roofs and curved forms.
- The CURTAIN WALL tool, for creating customizable glazed facades.
- Integration of the CineRender rendering engine, which offers photorealistic visualization options.
- Improved collaborative work thanks to the Delta Server system, which reduces synchronization time between project participants.
- An improved user interface that makes commands and settings easier to reach.
- Improved compatibility with the DWG and DXF formats, which makes it easier to exchange data with other CAD programs.
-

The advantages of ArchiCAD 13 over other BIM software

ArchiCAD 13 has several advantages over other BIM software on the market, such as Revit or SketchUp. These include:

- Ease of use, which lets both beginners and experienced users get up to speed quickly.
- Flexibility, which lets users customize the software to their needs and preferences.
- Performance, which lets users work on complex projects without slowing down the software or the system.
- Reliability, which guarantees the security and stability of the software and of your data.
- Support, with a responsive and competent customer service.
-

How to download ArchiCAD 13 francais gratuit avec crack?

If you want to download ArchiCAD 13 francais gratuit avec crack, you should know that you are exposing yourself to legal and technical risks. This is an illegal version of the software that has not been authorized by its publisher, so you can be prosecuted for copyright infringement or counterfeiting. You can also fall victim to viruses or malware that can damage your computer or steal your personal data. We therefore strongly advise against this method. If you want to use ArchiCAD 13 legally and safely, you can opt for a free 30-day trial version or a free one-year student version, or buy an official license on the Graphisoft website or from an authorized reseller.

The prerequisites for installing ArchiCAD 13

If you decide to download ArchiCAD 13 francais gratuit avec crack anyway, you must check that your computer meets the following requirements:

- Operating system: Windows XP SP3 or later
- Processor: Intel Pentium IV or better
- RAM: 1 GB minimum
- Disk space: 5 GB minimum
- Screen resolution: 1024 x 768 minimum
- Graphics card: OpenGL compatible
-

The steps to download ArchiCAD 13 francais gratuit avec crack

If you have checked that your computer meets the requirements above, you can follow these steps to download ArchiCAD 13 francais gratuit avec crack:

1. Go to a website that offers the download of the archive containing the software and the crack. For example, you can use the following link:
2. Click on the "Download" button and wait for the download to finish.
3. Open the archive with a program such as WinRAR or WinZip and extract its contents to a folder of your choice.
4. Open the extracted folder and run the "Setup.exe" file to start the installation.
5. Follow the on-screen instructions and choose the installation options you prefer.
6. At the end of the installation, do not launch the software and close all windows.
7. Open the "Crack" folder and copy the "Archicad.exe" file into the software's installation folder (by default C:\Program Files\Graphisoft\Archicad).
8. Paste the "Archicad.exe" file into the installation folder, overwriting the existing file.
9. Run the "Archicad.exe" file from the installation folder to start the software.

The risks and precautions to consider before using ArchiCAD 13 francais gratuit avec crack

As mentioned above, using ArchiCAD 13 francais gratuit avec crack carries legal and technical risks. You must therefore be aware of the possible consequences and take precautions before using the software. Here are a few tips to follow:

- Check the reliability of the website offering the archive download. Read other users' comments and avoid sites with a bad reputation or that ask for personal information.
- Scan the archive with an antivirus before opening it. Delete the file if your antivirus detects a threat or a malicious file.
- Disable your internet connection and your antivirus during the installation of the software and when launching the crack. This prevents the software from being flagged as illegal or the crack from being removed by your antivirus.
- Do not update the software or the crack. Doing so could make the software unusable or reveal your illegal use.
- Do not share the software or the crack with other people. That would increase the chances of the software being spotted by its publisher or by the authorities.
- Do not store your projects in the cloud or on external media. This could compromise the security of your data or the confidentiality of your projects.

How to use ArchiCAD 13 for 3D modeling?

Once you have installed and launched ArchiCAD 13 francais gratuit avec crack, you can start using the software for the 3D modeling of your architectural projects. Here are a few steps to follow to create a 3D project with ArchiCAD 13:

1. Click on the "File" menu and then on "New" to create a new project.
2. Choose the basic settings of your project, such as the name, location, scale and unit of measurement.
3. Click on the "View" menu and then on "Plan" to switch to plan mode.
4. Use the drawing and construction tools to lay out the walls, slabs, columns, beams and other elements of your building.
5. Click on the "View" menu and then on "3D" to switch to 3D mode.
6. Use the modeling tools to add 3D elements to your building, such as windows, doors, stairs and roofs.
7. Use the editing tools to adjust the shape, size, position and orientation of your 3D elements.
8. Use the rendering tools to apply materials, textures, colors, shadows and lights to your 3D elements.
9. Use the navigation tools to change the viewpoint and perspective of your 3D scene.
10. Click on the "File" menu and then on "Save" to save your project.

The modeling tools of ArchiCAD 13

ArchiCAD 13 offers a wide range of modeling tools that let you create simple or complex 3D shapes. These include:

- The MORPH tool, which lets you create and edit free-form 3D shapes. You can deform, sculpt, merge or cut shapes as you wish.
- The SHELL tool, which lets you create complex roofs and curved forms. You can define the overall shape, thickness, curvature and extrusions of your shells.
- The CURTAIN WALL tool, which lets you create customizable glazed facades. You can define the structure, panels, accessories and openings of your curtain walls.
- The OBJECT LIBRARY, which gives you access to a library of predefined objects you can insert into your 3D scene. You can choose from categories such as furniture, equipment, vegetation and symbols.
- The GDL EDITOR, which lets you create and modify your own objects using the GDL language (Geometric Description Language). You can define the geometric, graphic and functional properties of your objects.

The documentation tools of ArchiCAD 13

ArchiCAD 13 also lets you produce complete and precise documentation of your 3D project. You can generate plans, sections, elevations, schedules, details and 2D and 3D views, and export your documents to the PDF, DWF, DWG and DXF formats. The documentation tools of ArchiCAD 13 include:

- The LAYOUT BOOK, which lets you organize your documents in a layout book. You can create chapters, subchapters and pages that follow the structure of your project.
- The VIEW MAP, which lets you manage your 2D and 3D views. You can create custom views from your 3D scene and modify them as needed.
- The PUBLISHER SETS, which let you publish your documents in different formats and media. You can choose the documents to publish, the output format, and the printing or delivery method.
- The ANNOTATION TOOLS, which let you add annotations to your documents. You can insert text, dimensions, labels, hatching, leader lines and more.
- The SCHEDULES AND INDEXES tool, which lets you create lists and tables from your project data. You can generate element schedules, material quantity take-offs, drawing indexes and so on.

The collaboration tools of ArchiCAD 13

ArchiCAD 13 also lets you work as a team on the same 3D project. You can share and synchronize your data with the other project participants, such as architects, engineers, clients or consultants, and communicate and exchange information with other users of the software. The collaboration tools of ArchiCAD 13 include:

- The TEAMWORK tool, which lets several users work on a shared project. You can access the project from a central server and modify the parts assigned to you.
- The DELTA SERVER, which reduces synchronization time between users. It detects and transmits only the changes made to the project.
- The BIM SERVER MANAGER, which lets you manage the project's central server. You can control access to the project, backups, versions and revisions.
- The BIM EXPLORER, which lets you visualize and present your project in 3D. You can navigate your 3D scene and create animations or virtual tours.
- BIMX, which lets you share your 3D project with your clients or partners. You can export your 3D scene to an interactive format that can be viewed on a computer or a mobile device.

Conclusion

ArchiCAD 13 is powerful and versatile BIM software that lets you model buildings in 3D with complete documentation and collaborative work. It offers a wealth of tools and features for modeling, rendering, documenting and communicating your architectural projects. However, it is paid software that requires an official license to be used legally and safely. If you want to download ArchiCAD 13 francais gratuit avec crack, you must be aware of the legal and technical risks involved. We therefore advise you to choose a legal and safe alternative, such as a free trial version or a free student version. You can also buy an official license on the Graphisoft website or from an authorized reseller.

FAQ

Here are a few frequently asked questions about ArchiCAD 13 francais gratuit avec crack:

- What is the difference between ArchiCAD 13 and ArchiCAD 24?

  ArchiCAD 24 is the latest version of the software, released in 2020. It brings several improvements and new features compared to ArchiCAD 13, including:

  - Better integration of BIMcloud, Graphisoft's cloud platform that facilitates collaborative work.
  - Better support for the IFC format (Industry Foundation Classes), which allows data exchange between different BIM programs.
  - Better software performance through the use of multi-threading and multi-processing.
  - Better rendering quality through the use of the Twinmotion engine, which offers realistic visual effects.
  - Better structural design through the integration of Archicad Structural Analysis (ASA), which allows the calculation of stresses and deformations.

- Where can I find tutorials to learn how to use ArchiCAD 13?

  You can find tutorials for ArchiCAD 13 on the Graphisoft website or on online platforms such as YouTube or Udemy. You can also consult specialized books or magazines on the subject.

- How can I get a free trial version or a free student version of ArchiCAD 13?

  To get a free trial version or a free student version of ArchiCAD 13, go to the Graphisoft website and fill in a registration form. You will then receive a download link by e-mail. The free trial version is valid for 30 days and the free student version is valid for one year.

- How can I buy an official license for ArchiCAD 13?

  To buy an official license for ArchiCAD 13, go to the Graphisoft website or to an authorized reseller. You will have to choose between a perpetual license and an annual license, depending on your needs and budget, and between an individual license and a network license, depending on the number of users you want to authorize.

- How can I contact Graphisoft customer service?

  To contact Graphisoft customer service, you can use the contact form available on the Graphisoft website or send an e-mail to support@graphisoft.com. You can also call +36-1-437-3000 or consult the FAQ on the Graphisoft website.

          -

          0a6ba089eb
          -
          -
          \ No newline at end of file diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/IBM ViaVoice Gold Arabic 4.3.rarl A Users Guide and FAQ.md b/spaces/tialenAdioni/chat-gpt-api/logs/IBM ViaVoice Gold Arabic 4.3.rarl A Users Guide and FAQ.md deleted file mode 100644 index 7bb00d13f6e0ada13a6e28897203691638941472..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/IBM ViaVoice Gold Arabic 4.3.rarl A Users Guide and FAQ.md +++ /dev/null @@ -1,173 +0,0 @@ -
          -

          IBM ViaVoice Gold Arabic 4.3.rarl: A Review

          -

          If you are looking for a voice recognition software that can help you create documents, emails, and other texts in Arabic, you might want to check out IBM ViaVoice Gold Arabic 4.3.rarl. This software is designed to provide you with a fast, accurate, and easy way to dictate and edit your texts using your voice.

          -

          In this article, we will review IBM ViaVoice Gold Arabic 4.3.rarl and tell you everything you need to know about it. We will cover its features, benefits, installation process, usage tips, pros, cons, and more. By the end of this article, you will be able to decide if this software is suitable for your needs and preferences.

          -

          IBM ViaVoice Gold Arabic 4.3.rarl


          DOWNLOADhttps://urlcod.com/2uK62U



          -

          What is IBM ViaVoice Gold Arabic 4.3.rarl?

          -

          A brief introduction to IBM ViaVoice Gold Arabic 4.3.rarl

          -

          IBM ViaVoice Gold Arabic 4.3.rarl is a voice recognition software that allows you to dictate and edit texts in Arabic using your voice. It is developed by IBM Corporation, a leading company in the field of artificial intelligence and natural language processing.

          -

          IBM ViaVoice Gold Arabic 4.3.rarl is based on the technology of IBM ViaVoice, which was first launched in 1997 as one of the first voice recognition software in the market. Since then, IBM ViaVoice has been improved and updated with various versions and languages, including Arabic.

          -

          IBM ViaVoice Gold Arabic 4.3.rarl is the latest version of IBM ViaVoice for Arabic speakers. It was released in 2022 and it is compatible with Windows XP, Vista, 7, 8, and 10 operating systems.

          -

          The features and benefits of IBM ViaVoice Gold Arabic 4.3.rarl

          -

          IBM ViaVoice Gold Arabic 4.3.rarl has many features and benefits that make it a powerful and convenient voice recognition software for Arabic speakers. Some of these features and benefits are:

          -
            -
          • It supports both Modern Standard Arabic (MSA) and Egyptian Colloquial Arabic (ECA), which are the most widely used varieties of Arabic in the world.
          • -
          • It has a high accuracy rate of over 95%, which means that it can recognize your voice and words correctly most of the time.
          • -
          • It has a fast response time of less than a second, which means that it can process your voice input quickly and display the text output on your screen without delay.
          • -
          • It has a user-friendly interface that is easy to navigate and customize according to your preferences.
          • -
          • It has a built-in text editor that allows you to edit your texts using your voice or keyboard.
          • -
          • It has a speech feedback feature that reads back your texts aloud so that you can check them for errors or corrections.
          • -
          • It has a vocabulary builder feature that allows you to add new words or phrases to its dictionary so that it can recognize them better in the future.
          • -
          • It has a voice training feature that allows you to improve its recognition accuracy by adapting it to your voice characteristics and pronunciation.
          • -
          • It has a voice command feature that allows you to control your computer applications using your voice.
          • -
          • It has a voice macro feature that allows you to create shortcuts for frequently used commands or texts using your voice.
          • -
          • It has a compatibility feature that allows you to use it with other applications such as Microsoft Word, Excel, PowerPoint, Outlook, Internet Explorer, Firefox, Chrome, Skype, WhatsApp, Facebook Messenger, etc.
          • -
          -

          How to download and install IBM ViaVoice Gold Arabic 4.3.rarl?

          -

          The system requirements for IBM ViaVoice Gold Arabic 4.3.rarl

          -

          To download and install IBM ViaVoice Gold Arabic 4.3.rarl on your computer, you need to make sure that your computer meets the following system requirements:

          - - - - - - - - -
          Operating systemWindows XP/Vista/7/8/10
          CPUPentium IV or higher
          RAM512 MB or higher
          Disk space1 GB or higher
          Sound card16-bit or higher
          MicrophoneAnalog or USB headset microphone
          Internet connectionRequired for activation and updates
          -

          The steps to download and install IBM ViaVoice Gold Arabic 4.3.rarl

          -

          To download and install IBM ViaVoice Gold Arabic 4.3.rarl on your computer, you need to follow these steps:

          -
            -
  1. Go to one of the websites that offer the download link for IBM ViaVoice Gold Arabic 4.3.rarl. Make sure that the website is trustworthy and secure before downloading anything from it.
  2. Click on the download button or link and save the file on your computer.
  3. Extract the file using a program such as WinRAR or WinZip.
  4. Run the setup.exe file as an administrator.
  5. Follow the instructions on the screen to complete the installation process.
  6. Activate the software using the serial number provided by the website or by contacting IBM customer service.
  7. Restart your computer if prompted.
  8. Launch the software from your desktop or start menu.
  9. Select your preferred language (MSA or ECA) and complete the voice training session.
  10. Enjoy using IBM ViaVoice Gold Arabic 4.3.rarl!
          -

          How to use IBM ViaVoice Gold Arabic 4.3.rarl?

          -

          The main functions and commands of IBM ViaVoice Gold Arabic 4.3.rarl

          -

          To use IBM ViaVoice Gold Arabic 4.3.rarl effectively, you need to know its main functions and commands:

          -

          -
            -
          • The dictation function allows you to create texts using your voice instead of typing them on your keyboard. To start dictating, say "بدء التحدث" (start speaking) or click on the microphone icon on the toolbar. To stop dictating, say "إيقاف التحدث" (stop speaking) or click on the microphone icon again.
          • -
          • The editing function allows you to edit your texts using your voice or keyboard after dictating them. To edit a word or phrase, say "تحرير" (edit) followed by the word or phrase you want to edit. To delete a word or phrase, say "حذف" (delete) followed by the word or phrase you want to delete. To insert a word or phrase, say "إدراج" (insert) followed by the word or phrase you want to insert.
          • -
• The speech feedback function allows you to hear your texts read back aloud by a synthetic voice after dictating or editing them. To activate this function, say "تشغيل الصوت" (turn on sound) or click on the speaker icon on the toolbar. To deactivate this function, say "إيقاف الصوت" (turn off sound) or click on the speaker icon again.
          • -
          • The vocabulary builder function allows you to add new words or phrases to the software's dictionary so that it can recognize them better in the future. To activate this function, say "إضافة كلمة" (add word) or click on the plus icon on the toolbar. To deactivate this function, say "إنهاء الإضافة" (end adding) or click on the plus icon again.
          • -
          • The voice training function allows you to improve the software's recognition accuracy by adapting it to your voice characteristics and pronunciation. To activate this function, say "تدريب الصوت" (train voice) or click on the star icon on the toolbar. To deactivate this function, say "إنهاء التدريب" (end training) or click on the star icon again.
          • -
          • The voice command function allows you to control your computer applications using your voice. To activate this function, say "أوامر الصوت" (voice commands) or click on the gear icon on the toolbar. To deactivate this function, say "إيقاف الأوامر" (stop commands) or click on the gear icon again.
          • -
          • The voice macro function allows you to create shortcuts for frequently used commands or texts using your voice. To activate this function, say "ماكرو الصوت" (voice macro) or click on the lightning icon on the toolbar. To deactivate this function, say "إيقاف الماكرو" (stop macro) or click on the lightning icon again.
          • -
          • The compatibility function allows you to use the software with other applications such as Microsoft Word, Excel, PowerPoint, Outlook, Internet Explorer, Firefox, Chrome, Skype, WhatsApp, Facebook Messenger, etc. To activate this function, say "التوافق" (compatibility) or click on the globe icon on the toolbar. To deactivate this function, say "إيقاف التوافق" (stop compatibility) or click on the globe icon again.
          • -
          -

          The tips and tricks to improve your voice recognition and dictation with IBM ViaVoice Gold Arabic 4.3.rarl

          -

          To improve your voice recognition and dictation with IBM ViaVoice Gold Arabic 4.3.rarl, you can follow these tips and tricks:

          -
            -
          • Use a good quality microphone that is compatible with your sound card and operating system.
          • -
          • Adjust the microphone volume and position so that it can capture your voice clearly and avoid background noise.
          • -
          • Speak clearly and naturally in a normal tone and speed.
          • -
          • Pronounce each word and syllable correctly and distinctly.
          • -
          • Use proper punctuation and capitalization when dictating.
          • -
          • Use short pauses between words and phrases to separate them.
          • -
          • Use longer pauses between sentences and paragraphs to indicate them.
          • -
          • Use specific commands to format, correct, or delete your texts.
          • -
          • Review your texts for errors or corrections using the speech feedback or text editor functions.
          • -
          • Add new words or phrases to the vocabulary builder function if they are not recognized by the software.
          • -
          • Train your voice regularly using the voice training function to adapt the software to your voice characteristics and pronunciation.
          • -
          • Create shortcuts for frequently used commands or texts using the voice macro function.
          • -
          • Control your computer applications using the voice command function.
          • -
          • Use the compatibility function to use the software with other applications.
          • -
          -

          What are the pros and cons of IBM ViaVoice Gold Arabic 4.3.rarl?

          -

          The advantages of IBM ViaVoice Gold Arabic 4.3.rarl

          -

          IBM ViaVoice Gold Arabic 4.3.rarl has many advantages that make it a useful and efficient voice recognition software for Arabic speakers. Some of these advantages are:

          -
            -
          • It saves you time and effort by allowing you to create texts using your voice instead of typing them on your keyboard.
          • -
          • It improves your productivity and creativity by allowing you to focus on your ideas and thoughts instead of typing errors or corrections.
          • -
          • It enhances your accessibility and mobility by allowing you to use your voice as an input device instead of a mouse or keyboard.
          • -
          • It supports both MSA and ECA, which are the most widely used varieties of Arabic in the world.
          • -
          • It has a high accuracy rate of over 95%, which means that it can recognize your voice and words correctly most of the time.
          • -
          • It has a fast response time of less than a second, which means that it can process your voice input quickly and display the text output on your screen without delay.
          • -
          • It has a user-friendly interface that is easy to navigate and customize according to your preferences.
          • -
          • It has a built-in text editor that allows you to edit your texts using your voice or keyboard.
          • -
          • It has a speech feedback feature that reads back your texts aloud so that you can check them for errors or corrections.
          • -
• It has a vocabulary builder feature that allows you to add new words or phrases to its dictionary so that it can recognize them better in the future.
• Q: How much does IBM ViaVoice Gold Arabic 4.3.rarl cost?
• A: The price varies depending on the website and the currency.
          • -
          • Q: How can I get the serial number for IBM ViaVoice Gold Arabic 4.3.rarl?
          • -
          • A: The serial number for IBM ViaVoice Gold Arabic 4.3.rarl is usually provided by the website that offers the download link. If you don't receive the serial number or if it doesn't work, you can contact IBM customer service by phone or email and provide them with your purchase details and proof of payment.
          • -
          • Q: How can I update IBM ViaVoice Gold Arabic 4.3.rarl?
          • -
          • A: To update IBM ViaVoice Gold Arabic 4.3.rarl, you need to have an internet connection and follow these steps:
          • -
              -
    1. Open the software and click on the help icon on the toolbar.
    2. Select "Check for updates" from the menu.
    3. Follow the instructions on the screen to download and install the latest updates.
    4. Restart your computer if prompted.
            -
          • Q: How can I uninstall IBM ViaVoice Gold Arabic 4.3.rarl?
          • -
          • A: To uninstall IBM ViaVoice Gold Arabic 4.3.rarl, you need to follow these steps:
          • -
              -
    1. Close the software and any other applications that are using it.
    2. Go to the control panel and select "Add or remove programs".
    3. Find and select "IBM ViaVoice Gold Arabic 4.3.rarl" from the list of programs.
    4. Click on the "Remove" button and follow the instructions on the screen to complete the uninstallation process.
    5. Delete any remaining files or folders related to the software from your computer.
            -
          • Q: How can I contact IBM customer service?
          • -
          • A: You can contact IBM customer service by phone or email using the following information:
          • - - - -
            Phone+1-800-426-4968 (USA)
            Emailsupport@us.ibm.com
            -

            0a6ba089eb
            -
            -
            \ No newline at end of file diff --git a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/DJ Studio 5 Mod APK How to Download and Install This Fantastic App on Your Android Device.md b/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/DJ Studio 5 Mod APK How to Download and Install This Fantastic App on Your Android Device.md deleted file mode 100644 index f6c777c3357a61dcf6c717f7cf1b7960633a68b2..0000000000000000000000000000000000000000 --- a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/DJ Studio 5 Mod APK How to Download and Install This Fantastic App on Your Android Device.md +++ /dev/null @@ -1,124 +0,0 @@ - -

            DJ Studio 5 Mod APK Download: A Complete Guide

            -

            Do you love mixing music and creating your own beats? Do you want to turn your Android device into a mobile DJ station? If yes, then you should check out DJ Studio 5, a free music mixer app that allows you to manipulate music in various ways. However, if you want to enjoy all the features and functions of this app without any limitations, you will need to download the mod apk version. In this article, we will tell you everything you need to know about DJ Studio 5 Mod APK, including its features, how to download and install it, its pros and cons, and some frequently asked questions.

            -

            dj studio 5 mod apk download


            DOWNLOADhttps://bltlly.com/2uOsmi



            -

            Features of DJ Studio 5 Mod APK

            -

            DJ Studio 5 is a powerful app that lets you spin, mix, and scratch music on your Android device. It has a lot of features that make it a comprehensive and fun app for both beginners and experts. However, some of these features are not available in the original version of the app, which requires an in-app purchase to unlock unlimited playback. That's why you need the mod apk version, which gives you access to all the features and functions for free. Here are some of the features of DJ Studio 5 Mod APK:

            -
              -
            • Unlimited playback and access to all functions: With the mod apk version, you can play as many songs as you want without any interruptions or ads. You can also use all the functions of the app without any restrictions or limitations.
            • -
            • Customizable interface and skins: You can choose between a single deck or twin decks mode, depending on your preference and skill level. You can also customize the interface and skins of the app according to your taste and style.
            • -
            • Support for various audio formats and external devices: You can load music from your device's library or from external sources like USB drives or SD cards. You can also use external devices like headphones, speakers, or MIDI controllers to enhance your mixing experience.
            • -
            • Live recording and sharing of mixes: You can record your mixes live and save them on your device or share them on Soundcloud or other social media platforms. You can also listen to other users' mixes and rate them.
            • -
            • Equalizer, loop, BPM, and other tools for music manipulation: You can adjust the sound levels, create loops, change the tempo, and apply various effects to your music using the tools provided by the app. You can also sync the tracks automatically or manually using the BPM feature.
            • -
            -

            How to Download and Install DJ Studio 5 Mod APK

            -

            If you are interested in downloading and installing DJ Studio 5 Mod APK on your Android device, you will need to follow these simple steps:

            -
              -
    1. Enable unknown sources on your device: To install apps from sources other than the Google Play Store, you will need to enable the unknown sources option on your device. To do this, go to Settings > Security > Unknown Sources and toggle it on.
    2. Download the mod apk file from a trusted source: You can find the mod apk file of DJ Studio 5 on various websites and blogs, but not all of them are safe and reliable. Therefore, you should always download the file from a trusted source that has positive reviews and ratings. You can use this link to download the file safely and quickly.
    3. Locate and install the file on your device: After downloading the file, you will need to locate it on your device using a file manager app. You can usually find it in the Downloads folder or the folder where you saved it. Once you find it, tap on it and follow the instructions to install it on your device.
    4. Launch the app and enjoy mixing music: After installing the app, you can launch it from your app drawer or home screen. You will see a welcome screen that will guide you through the basic features and functions of the app. You can then start loading music and mixing it as you wish.
            -

            Pros and Cons of DJ Studio 5 Mod APK

            -

            DJ Studio 5 Mod APK is a great app for anyone who loves music and wants to create their own mixes. However, like any other app, it also has some pros and cons that you should be aware of before using it. Here are some of them:

            -


| Pros | Cons |
| --- | --- |
| Free, comprehensive, fun, and easy to use | May have compatibility issues with some devices and Android versions |
| Customizable interface and skins | Steep learning curve for beginners |
| Support for various audio formats and external devices | No built-in effects like reverb, flanger, or delay |
| Live recording and sharing of mixes | May consume a lot of battery and storage space |
| Equalizer, loop, BPM, and other tools for music manipulation | May not be legal or ethical to use the mod apk version |

            Conclusion and FAQs

            -

            DJ Studio 5 Mod APK is a great app for aspiring and professional DJs who want to mix music on their Android devices. It offers a lot of features and functions that make it one of the best free DJing apps available. However, it also has some drawbacks that may affect its performance and user experience. Therefore, users should be careful when downloading and installing the mod apk version and always use it at their own risk.

            -

            If you have any questions or doubts about DJ Studio 5 Mod APK, you may find the answers in the following FAQs:

            -

            Q: Is DJ Studio 5 Mod APK safe to use?

            -

            A: DJ Studio 5 Mod APK is generally safe to use as long as you download it from a trusted source and scan it for viruses or malware before installing it. However, you should also be aware that using mod apk versions of apps may violate their terms of service and may result in legal or ethical issues.

            -

            Q: Is DJ Studio 5 Mod APK compatible with my device?

            -

            A: DJ Studio 5 Mod APK is compatible with most Android devices that run on Android 4.0 or higher. However, some devices may have compatibility issues due to different hardware or software specifications. Therefore, you should always check the compatibility of your device before downloading and installing the app.

            -

            Q: How can I update DJ Studio 5 Mod APK?

            -

            A: DJ Studio 5 Mod APK is not available on the Google Play Store, so you cannot update it automatically or manually through the store. Instead, you will need to check for updates from the source where you downloaded the app or from other websites or blogs that offer the latest version of the app.

            -

            Q: How can I uninstall DJ Studio 5 Mod APK?

            -

            A: If you want to uninstall DJ Studio 5 Mod APK from your device, you can do so by following these steps:

1. Go to Settings > Apps > DJ Studio 5 Mod APK and tap on it.
2. Tap on Uninstall and confirm your action.
3. Wait for the app to be uninstalled from your device.
4. Delete the mod apk file from your device if you still have it.

            Q: What are the alternatives to DJ Studio 5 Mod APK?

            -

            A: If you are looking for other apps that can help you mix music on your Android device, you may want to try some of these alternatives:

• edjing Mix: This is another popular and free app that lets you create amazing mixes with your music library or from various online sources. It has a lot of features and effects that make it a professional and fun app for DJs of all levels.
• Cross DJ: This is a powerful and intuitive app that allows you to mix tracks with accuracy and creativity. It has a sleek and user-friendly interface that makes it easy to use. It also supports various audio formats and external devices.
• DJ Mixer Studio: This is a simple and elegant app that enables you to mix music with ease and style. It has a minimalist and colorful design that makes it attractive and enjoyable. It also has a variety of tools and functions that make it a versatile and reliable app for mixing music.

            I hope you enjoyed reading this article and learned something new about DJ Studio 5 Mod APK. If you have any feedback or suggestions, please feel free to leave a comment below. Thank you for your time and attention.

            \ No newline at end of file diff --git a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Download Grade 12 Mathematics P1 Mock Exams and Answers.md b/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Download Grade 12 Mathematics P1 Mock Exams and Answers.md deleted file mode 100644 index 62a8b589caef0ca8a142c361f66d8c6d36fda656..0000000000000000000000000000000000000000 --- a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Download Grade 12 Mathematics P1 Mock Exams and Answers.md +++ /dev/null @@ -1,149 +0,0 @@ -
            -

            How to Download Grade 12 Mathematics P1

            -

Mathematics P1 is one of the papers that you have to write in Grade 12 if you are taking Mathematics as a subject. It covers topics such as algebra, functions, sequences and series, financial mathematics, calculus, and probability. Mathematics P1 is a challenging paper that requires a lot of practice and preparation. One of the best ways to prepare for Mathematics P1 is to download past exam papers and memos from reliable sources. By doing so, you can:

            -
• Get familiar with the exam format and structure.
• Test your knowledge and skills on various topics and questions.
• Improve your speed and accuracy in solving problems.
• Learn from your mistakes and gaps in understanding.
• Boost your confidence and reduce your anxiety.

            In this article, we will show you how to download Grade 12 Mathematics P1 from different sources, and how to use them effectively for your exam preparation. Let's get started!

            -

            download grade 12 mathematics p1


            Download Zip ✒ ✒ ✒ https://bltlly.com/2uOg9Y



            -

            Sources of Grade 12 Mathematics P1

            -

            There are many sources where you can find and download Grade 12 Mathematics P1, but not all of them are reliable and updated. Some of them may have errors, missing pages, or outdated content. Therefore, you need to be careful and selective when choosing your sources. Here are some of the sources that we recommend:

            -

            SA Exam Papers

            -

            SA Exam Papers is a website that provides a comprehensive range of past year exam papers and memos for various subjects and grades in South Africa. You can find Mathematics P1 papers from 2023 to as far back as 2009, from national, provincial, and common tests. You can also find papers in English and Afrikaans languages, as well as question papers, answer books, addendums, and memorandums.

            -

            To download Grade 12 Mathematics P1 from SA Exam Papers, you need to:


1. Go to the SA Exam Papers website.
2. Scroll down to find the table with the headings "Year" and "Exam Semester".
3. Select the year and exam semester that you want to download.
4. You will see another table with the headings "Paper", "Language", "Type", "Download".
5. Select the paper that you want to download (Mathematics P1).
6. Select the language that you want to download (English or Afrikaans).
7. Select the type that you want to download (Question Paper or Memorandum).
8. Click on the "Download" button.
9. The paper will open in a new tab or window as a PDF file.
10. You can save it on your device or print it out.

Here is a screenshot of what SA Exam Papers looks like:

[Screenshot of the SA Exam Papers website]
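If you find yourself saving many papers in one sitting, a short script can do the repetitive clicking for you. The snippet below is only an illustrative sketch and assumes you have already copied a direct PDF link from your browser after clicking "Download"; the URL and filename shown are placeholders, not real addresses.

```python
# Minimal sketch: save past papers from direct PDF links copied from the
# download page. The URL below is a placeholder - replace it with the real
# link from your browser.
import requests

papers = {
    "mathematics-p1-2023-question-paper.pdf": "https://example.com/maths-p1-2023.pdf",
}

for filename, url in papers.items():
    response = requests.get(url, timeout=30)
    response.raise_for_status()  # stop early if a link is broken
    with open(filename, "wb") as f:
        f.write(response.content)
    print(f"Saved {filename} ({len(response.content)} bytes)")
```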

            Edwardsmaths

            -

            Edwardsmaths is another website that provides past exam papers and memos for various subjects and grades in South Africa. You can find Mathematics P1 papers from 2023 to 2018, from national, provincial, and common tests. You can also find papers in English and Afrikaans languages, as well as question papers and memorandums.

            -

            To download Grade 12 Mathematics P1 from Edwardsmaths, you need to:

            -
1. Go to the Edwardsmaths website.
2. Scroll down to find the section with the title "Grade 12 Mathematics Exam Papers and Memos".
3. Select the year that you want to download.
4. You will see a table with the headings "Paper", "Question Paper", "Memo".
5. Select the paper that you want to download (Mathematics P1).
6. Select the question paper or memo that you want to download.
7. The paper will open in a new tab or window as a PDF file.
8. You can save it on your device or print it out.

Here is a screenshot of what Edwardsmaths looks like:

[Screenshot of the Edwardsmaths website]

            National Department of Basic Education

            -

            National Department of Basic Education is the official website of the government department that oversees primary and secondary education in South Africa. You can find Mathematics P1 papers from 2023 to 2014, from national and supplementary exams. You can also find papers in English and Afrikaans languages, as well as question papers and memorandums.

            -

            To download Grade 12 Mathematics P1 from National Department of Basic Education, you need to:

            -
1. Go to the National Department of Basic Education website.
2. Scroll down to find the section with the title "Past Exam Papers".
3. Select the grade that you want to download (Grade 12).
4. Select the subject that you want to download (Mathematics).
5. Select the year that you want to download.
6. You will see a list of papers with the titles "Paper 1", "Paper 2", etc.
7. Select the paper that you want to download (Mathematics P1).
8. Select the language that you want to download (English or Afrikaans).
9. Select the question paper or memo that you want to download.
10. The paper will open in a new tab or window as a PDF file.
11. You can save it on your device or print it out.

Here is a screenshot of what the National Department of Basic Education website looks like:

[Screenshot of the National Department of Basic Education website]

            Steps to Download Grade 12 Mathematics P1

            -

            Now that you know some of the sources where you can find and download Grade 12 Mathematics P1, let's go through the steps to download them from each source. As you can see, each source has a slightly different process, but they are all easy and straightforward. Here are the steps for each source:

            -

            SA Exam Papers

| Step | Description | Example |
| --- | --- | --- |
| 1 | Go to the SA Exam Papers website. | Screenshot of SA Exam Papers homepage |
| 2 | Scroll down to find the table with the headings "Year" and "Exam Semester". | Screenshot of SA Exam Papers table |
| 3 | Select the year and exam semester that you want to download. | Screenshot of SA Exam Papers selection |
| 4 | You will see another table with the headings "Paper", "Language", "Type", "Download". | Screenshot of SA Exam Papers table |
| 5 | Select the paper that you want to download (Mathematics P1). | Screenshot of SA Exam Papers selection |
| 6 | Select the language that you want to download (English or Afrikaans). | Screenshot of SA Exam Papers selection |
| 7 | Select the type that you want to download (Question Paper or Memorandum). | Screenshot of SA Exam Papers selection |
| 8 | Click on the "Download" button. | Screenshot of SA Exam Papers download |
| 9 | The paper will open in a new tab or window as a PDF file. | Screenshot of SA Exam Papers PDF file |
| 10 | You can save it on your device or print it out. | Screenshot of SA Exam Papers save or print |

            Edwardsmaths

| Step | Description | Example |
| --- | --- | --- |
| 1 | Go to the Edwardsmaths website. | Screenshot of Edwardsmaths homepage |
| 2 | Scroll down to find the section with the title "Grade 12 Mathematics Exam Papers and Memos". | Screenshot of Edwardsmaths section |
| 3 | Select the year that you want to download. | Screenshot of Edwardsmaths selection |
| 4 | You will see a table with the headings "Paper", "Question Paper", "Memo". | Screenshot of Edwardsmaths table |
| 5 | Select the paper that you want to download (Mathematics P1). | Screenshot of Edwardsmaths selection |
| 6 | Select the question paper or memo that you want to download. | Screenshot of Edwardsmaths selection |
| 7 | The paper will open in a new tab or window as a PDF file. | Screenshot of Edwardsmaths PDF file |
| 8 | You can save it on your device or print it out. | Screenshot of Edwardsmaths save or print |

            National Department of Basic Education

| Step | Description | Example |
| --- | --- | --- |
| 1 | Go to the National Department of Basic Education website. | Screenshot of National Department of Basic Education homepage |
| 2 | Scroll down to find the section with the title "Past Exam Papers". | Screenshot of National Department of Basic Education section |
| 3 | Select the grade that you want to download (Grade 12). | Screenshot of National Department of Basic Education selection |
| 4 | Select the subject that you want to download (Mathematics). | Screenshot of National Department of Basic Education selection |

            Q: How often should I download and practice Grade 12 Mathematics P1?

            -

            A: There is no definitive answer to this question, as it depends on your personal goals, preferences, and schedule. However, a general rule of thumb is to download and practice Grade 12 Mathematics P1 at least once a week, or more frequently if you have more time and motivation. You should also vary the papers that you download, so that you can expose yourself to different types and levels of questions.

            -

            Q: How can I download Grade 12 Mathematics P1 on my phone or tablet?

            -

            A: You can download Grade 12 Mathematics P1 on your phone or tablet by following the same steps as on your computer. However, you may need to install a PDF reader app on your device, such as Adobe Acrobat Reader or Google PDF Viewer, to open and view the files. You may also need to adjust the zoom and orientation of the files to fit your screen size and resolution.

            -

            Q: How can I share Grade 12 Mathematics P1 with my friends or classmates?

            -

            A: You can share Grade 12 Mathematics P1 with your friends or classmates by sending them the links or files of the papers that you downloaded. You can also create a study group or chat group where you can discuss and compare your solutions and strategies. Sharing Grade 12 Mathematics P1 with others can help you learn from each other and motivate each other.

            -

            Q: How can I get feedback or help on Grade 12 Mathematics P1?

            -

            A: You can get feedback or help on Grade 12 Mathematics P1 by asking your teachers, tutors, or mentors for guidance and clarification. You can also use online platforms or forums where you can post your questions or doubts and get answers from experts or peers. Some examples of these platforms are Quora, Reddit, Stack Exchange, and Math Help Forum.

            \ No newline at end of file diff --git a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Download King Ludo and Relive Your Childhood Memories with this Fun Board Game.md b/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Download King Ludo and Relive Your Childhood Memories with this Fun Board Game.md deleted file mode 100644 index 2dc8fac53d7cea144be53045e18314e15cd42e5b..0000000000000000000000000000000000000000 --- a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Download King Ludo and Relive Your Childhood Memories with this Fun Board Game.md +++ /dev/null @@ -1,108 +0,0 @@ - -

            How to Download King Ludo - The Most Popular Game of the Year

            -

            Do you love playing board games with your friends and family? Do you want to enjoy a classic game with a modern twist? Do you want to experience the thrill of rolling the dice and moving your tokens to the center of the board? If you answered yes to any of these questions, then you should download King Ludo, the most popular game of the year!

            -

            What is King Ludo?

            -

King Ludo is a game that has taken the world by storm. It is based on Pachisi, an ancient board game that was played by Indian kings and queens centuries ago. King Ludo is a game that combines luck, strategy, and skill. Here are some of the features that make King Ludo so amazing:

            -

            download king ludo


            DOWNLOADhttps://bltlly.com/2uOhSV



            -

            A modern version of the classic board game

            -

            King Ludo follows the traditional rules and the old school look of the board game. You have four tokens of your color that you need to move from your base to your home. You roll a dice and move your tokens accordingly. You can also capture or block your opponents' tokens. The first player to bring all their tokens to their home wins the game.

            -

            A cross-platform multiplayer game with voice chat

            -

            King Ludo is not just a game that you can play by yourself. You can also play it with your friends and family online or offline. King Ludo supports up to six players in online multiplayer mode. You can also invite and challenge your Facebook friends or make new buddies from around the world. You can also chat with your opponents using voice chat and send them emojis.

            -

            A game with various modes, themes, and features

            -

            King Ludo is a game that never gets boring. You can choose from different modes, such as quick mode, tournament mode, team up mode, and snake and ladders mode. You can also customize your game with different themes, such as disco, nature, Egypt, candy, Christmas, pirate, and more. You can also access an exciting inventory where you can get new dice, funny emojis, voice notes, rewards, and more.

            -

            Why should you download King Ludo?

            -

            King Ludo is a game that has many benefits for you. Here are some of the reasons why you should download King Ludo:

            -

            It is fun, easy, and addictive

            -

            King Ludo is a game that will keep you entertained for hours. It is easy to learn and play, but also challenging and competitive. You will enjoy rolling the dice and moving your tokens while trying to beat your opponents. You will also feel a sense of accomplishment when you win the game.

            -

            It is suitable for all ages and occasions

            -

            King Ludo is a game that everyone can enjoy. It is suitable for all ages, from kids to adults. It is also suitable for all occasions, from casual to formal. You can play King Ludo with your family at home, with your friends at a party, with your colleagues at work, or with strangers online.

            -

            It is free to play and has millions of downloads

            -

            King Ludo is a game that does not cost you anything to play. It is free to download and install on your device. It also does not require an internet connection to play offline mode. King Ludo has over 900 million downloads worldwide and has won many awards and accolades. It is one of the top-rated games on Google Play Store and App Store.

            -

            How to download King Ludo on different devices?

            -

            King Ludo is a game that you can play on any device, whether it is a smartphone, a tablet, or a computer. Here are the steps to download King Ludo on different devices:


            Download King Ludo on Android

            -

            If you have an Android device, you can download King Ludo from the Google Play Store. Here is how:

            -
1. Open the Google Play Store app on your device.
2. Search for "King Ludo" in the search bar.
3. Select the game from the list of results and tap on "Install".
4. Wait for the game to download and install on your device.
5. Open the game and enjoy playing King Ludo.

            Download King Ludo on iOS

            -

            If you have an iOS device, you can download King Ludo from the App Store. Here is how:

            -
1. Open the App Store app on your device.
2. Search for "King Ludo" in the search bar.
3. Select the game from the list of results and tap on "Get".
4. Enter your Apple ID and password if prompted.
5. Wait for the game to download and install on your device.
6. Open the game and enjoy playing King Ludo.

            Download King Ludo on PC

            -

            If you want to play King Ludo on your PC, you will need to use an emulator. An emulator is a software that allows you to run Android apps on your PC. There are many emulators available online, such as BlueStacks, NoxPlayer, MEmu, etc. Here is how to download King Ludo on PC using BlueStacks:

            -
1. Download and install BlueStacks from its official website: https://www.bluestacks.com/
2. Launch BlueStacks and sign in with your Google account.
3. Open the Google Play Store app within BlueStacks.
4. Search for "King Ludo" in the search bar.
5. Select the game from the list of results and click on "Install".
6. Wait for the game to download and install on your PC.
7. Open the game and enjoy playing King Ludo.

            Conclusion

            -

            King Ludo is a game that you should not miss. It is a game that will bring you joy, excitement, and nostalgia. It is a game that will connect you with your friends and family. It is a game that will challenge your mind and test your luck. It is a game that will make you feel like a king or a queen. So what are you waiting for? Download King Ludo today and have fun!

            -

            FAQs

            -

            Here are some of the frequently asked questions about King Ludo:

            -

            Q: How can I play King Ludo offline?

            -

            A: You can play King Ludo offline by choosing the offline mode in the main menu. You can play with up to six players using one device or with computer players.

            -

            Q: How can I earn coins in King Ludo?

            -

            A: You can earn coins in King Ludo by winning games, completing daily tasks, spinning the wheel, watching videos, or buying them with real money.

            -

            Q: How can I use coins in King Ludo?

            -

            A: You can use coins in King Ludo to buy new dice, themes, emojis, voice notes, rewards, and more from the inventory.

            -

            Q: How can I change my profile picture in King Ludo?

            -

            A: You can change your profile picture in King Ludo by tapping on your avatar in the main menu and choosing from the gallery or taking a photo.

            -

            Q: How can I report a bug or a problem in King Ludo?

            -

            A: You can report a bug or a problem in King Ludo by tapping on the settings icon in the main menu and choosing "Contact Us". You can also email them at support@kingludogame.com or visit their website at https://www.kingludogame.com/

            \ No newline at end of file diff --git a/spaces/timpal0l/chat-ui/src/routes/conversation/[id]/share/+server.ts b/spaces/timpal0l/chat-ui/src/routes/conversation/[id]/share/+server.ts deleted file mode 100644 index 5f97daa091152c8074797f1d9f48ebc93fdde718..0000000000000000000000000000000000000000 --- a/spaces/timpal0l/chat-ui/src/routes/conversation/[id]/share/+server.ts +++ /dev/null @@ -1,54 +0,0 @@ -import { base } from "$app/paths"; -import { PUBLIC_ORIGIN } from "$env/static/public"; -import { collections } from "$lib/server/database.js"; -import type { SharedConversation } from "$lib/types/SharedConversation.js"; -import { sha256 } from "$lib/utils/sha256.js"; -import { error } from "@sveltejs/kit"; -import { ObjectId } from "mongodb"; -import { nanoid } from "nanoid"; - -export async function POST({ params, url, locals }) { - const conversation = await collections.conversations.findOne({ - _id: new ObjectId(params.id), - sessionId: locals.sessionId, - }); - - if (!conversation) { - throw error(404, "Conversation not found"); - } - - const hash = await sha256(JSON.stringify(conversation.messages)); - - const existingShare = await collections.sharedConversations.findOne({ hash }); - - if (existingShare) { - return new Response( - JSON.stringify({ - url: getShareUrl(url, existingShare._id), - }), - { headers: { "Content-Type": "application/json" } } - ); - } - - const shared: SharedConversation = { - _id: nanoid(7), - createdAt: new Date(), - messages: conversation.messages, - hash, - updatedAt: new Date(), - title: conversation.title, - }; - - await collections.sharedConversations.insertOne(shared); - - return new Response( - JSON.stringify({ - url: getShareUrl(url, shared._id), - }), - { headers: { "Content-Type": "application/json" } } - ); -} - -function getShareUrl(url: URL, shareId: string): string { - return `${PUBLIC_ORIGIN || url.origin}${base}/r/${shareId}`; -} diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Advance Steel 2017 64bit Activation Code Zip File BEST.md b/spaces/tioseFevbu/cartoon-converter/scripts/Advance Steel 2017 64bit Activation Code Zip File BEST.md deleted file mode 100644 index 5f2d60330ad28f101ae367b0c08222cc4b028475..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/Advance Steel 2017 64bit Activation Code Zip File BEST.md +++ /dev/null @@ -1,131 +0,0 @@ - -

            Advance Steel 2017 64bit Activation Code Zip File: What You Need to Know

            -

            If you are a structural engineer, detailer, or fabricator who works with steel structures, you may have heard of Advance Steel 2017, a powerful software for structural design and detailing. Advance Steel 2017 is a comprehensive solution that supports a Building Information Modeling (BIM) process to help you more accurately detail structural elements and miscellaneous steel. It also enables you to generate detail drawings, bills of materials, and NC files for fabrication and erection.

            -

            Advance Steel 2017 64bit Activation Code Zip File


            Download Filehttps://urlcod.com/2uHvnd



            -

            But how do you install and activate Advance Steel 2017 64bit on your computer? And what are the benefits of using this software for your projects? In this article, we will answer these questions and more. We will show you how to download, install, and activate Advance Steel 2017 64bit using an activation code zip file. We will also give you some tips and best practices on how to use Advance Steel 2017 64bit effectively.

            -

            How to download Advance Steel 2017 64bit

            -

            Before you can install and activate Advance Steel 2017 64bit, you need to download the software from the Autodesk website. Here are the steps to follow:

            -
1. Go to the Autodesk Advance Steel product page and click on the Download Free Trial button.
2. Select your operating system (Windows 64-bit) and your preferred language. Then, enter your email address and click Next.
3. Choose whether you want to download the software directly or use a download manager. Then, click Download Now.
4. Save the file to your computer and wait for the download to complete.

            Note: You can also download Advance Steel 2017 64bit from your Autodesk Account if you have a valid subscription or license. Just sign in to your account, go to Products & Services, find Advance Steel 2017, and click on the Download button.

            -

            How to check your system requirements and compatibility

            -

            Before you install Advance Steel 2017 64bit, you should check if your computer meets the minimum system requirements for the software. You can find the system requirements on the Autodesk Advance Steel product page or on the Autodesk Knowledge Network. Here are some of the main requirements:

            -
• Operating system: Microsoft Windows 10 (64-bit only), Microsoft Windows 8.1 with Update KB2919355 (64-bit only), or Microsoft Windows 7 SP1 (64-bit only)
• CPU: Intel Core i5 or equivalent AMD processor with SSE2 technology
• Memory: 8 GB RAM (16 GB recommended)
• Disk space: 9 GB free disk space for installation
• Display: 1920 x 1080 or greater True Color video display adapter; DirectX®11 capable graphics card with Shader Model 3 as recommended by Autodesk
• Browser: Internet Explorer® version 11 or later
• .NET Framework: .NET Framework Version 4.6

            You should also check if your computer is compatible with Advance Steel 2017 64bit. You can do this by running the Autodesk Prerequisite Checker tool, which is included in the installation package. This tool will scan your computer and detect any potential issues or conflicts that may prevent a successful installation or activation of Advance Steel 2017 64bit.
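For a quick manual check before you run the installer, a few lines of Python can report the basics. This is only an illustrative sketch of my own, not an Autodesk tool; it assumes Python is installed on the target machine and covers just the operating system version and the free space on the C: drive from the list above.

```python
# Minimal sketch: report two of the requirements listed above (Windows
# version and free disk space on the system drive) before installing.
import platform
import shutil

print("Operating system:", platform.system(), platform.release())

free_gb = shutil.disk_usage("C:\\").free / (1024 ** 3)
print(f"Free space on C: drive: {free_gb:.1f} GB (at least 9 GB is required)")
```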


            How to prepare your computer for installation

            -

            Before you install Advance Steel 2017 64bit, you should prepare your computer by doing the following:

            -
• Disable any antivirus or firewall software that may interfere with the installation process.
• Close any other applications that are running on your computer.
• Make sure you have administrator rights on your computer.
• Make sure you have a stable internet connection.
• Make sure you have enough disk space for the installation.
• Make sure you have your product key and activation code zip file ready.

            Note: Your product key is a 25-character alphanumeric code that identifies your product and license type. Your activation code zip file is a compressed file that contains an XML file with your activation code. You can obtain these codes from your Autodesk Account, from an email confirmation, or from a reseller.

            How to install Advance Steel 2017 64bit

            -

            After you have downloaded and prepared your computer for installation, you can proceed to install Advance Steel 2017 64bit. Here are the steps to follow:

            -
1. Locate the setup file that you downloaded and double-click on it to run it.
2. On the Autodesk Advance Steel 2017 Setup dialog box, click on Install.
3. On the Autodesk Advance Steel 2017 Installation dialog box, read and accept the license agreement and click Next.
4. On the Product Information dialog box, enter your product key and serial number and click Next.
5. On the Configure Installation dialog box, select the components and features that you want to install and click Next.
6. On the Installation Location dialog box, choose the folder where you want to install Advance Steel 2017 64bit and click Next.
7. On the Ready to Install dialog box, review your installation settings and click Install.
8. Wait for the installation process to complete. You can monitor the progress on the Installation Progress dialog box.
9. When the installation is finished, click Finish.

            Note: You can also customize your installation by clicking on the Customize button on the Configure Installation dialog box. This will allow you to change your installation language, select your content packs, and configure your network license settings.

            -

            How to activate Advance Steel 2017 64bit

            -

            After you have installed Advance Steel 2017 64bit, you need to activate it using your activation code zip file. Here are the steps to follow:

            -
1. Launch Advance Steel 2017 64bit from your desktop or start menu.
2. On the Let's Get Started screen, select Enter a Serial Number and click Next.
3. On the Product License Activation screen, enter your product key and serial number and click Next.
4. On the License Method screen, select Stand-Alone License and click Next.
5. On the Activate screen, click on Activate Online Now.
6. On the Activation Code screen, click on Browse and locate your activation code zip file on your computer. Then, click Open.
7. The activation code will be automatically entered in the text box. Click Next.
8. Your product will be activated and registered. Click Finish.

            Note: If you encounter any problems or errors during the activation process, you can refer to the Autodesk Knowledge Network for troubleshooting tips and solutions. You can also contact Autodesk support or your reseller for assistance.

By installing and activating Advance Steel 2017 64bit, you can improve your productivity, efficiency, and the quality of your structural engineering projects. You can also collaborate better with other stakeholders and disciplines using the BIM process.

            -

            We hope you found this article helpful and informative. If you have any questions or feedback, please feel free to contact us or leave a comment below. Thank you for reading!

            -

            FAQs

            -

            Here are some of the frequently asked questions about Advance Steel 2017 64bit and their answers:

            -

            Q1: What are the system requirements for Advance Steel 2017 64bit?

            -

            A1: The minimum system requirements for Advance Steel 2017 64bit are:

            -
• Operating system: Microsoft Windows 10 (64-bit only), Microsoft Windows 8.1 with Update KB2919355 (64-bit only), or Microsoft Windows 7 SP1 (64-bit only)
• CPU: Intel Core i5 or equivalent AMD processor with SSE2 technology
• Memory: 8 GB RAM (16 GB recommended)
• Disk space: 9 GB free disk space for installation
• Display: 1920 x 1080 or greater True Color video display adapter; DirectX®11 capable graphics card with Shader Model 3 as recommended by Autodesk
• Browser: Internet Explorer® version 11 or later
• .NET Framework: .NET Framework Version 4.6

            Q2: What are the differences between Advance Steel 2017 and previous versions?

            -

            A2: Some of the main differences between Advance Steel 2017 and previous versions are:

            -
• Advance Steel 2017 supports Windows 10 operating system.
• Advance Steel 2017 has improved performance and stability.
• Advance Steel 2017 has new and enhanced features and tools, such as the new ribbon interface, the new connection vault, the new model browser, the new drawing style manager, the new BOM editor, and more.
• Advance Steel 2017 has better interoperability and integration with other Autodesk products, such as Revit, AutoCAD, Navisworks, and BIM 360.

            Q3: How can I update or upgrade my Advance Steel 2017 license?

            -

            A3: You can update or upgrade your Advance Steel 2017 license by doing the following:

            -
• If you have a subscription or maintenance plan for Advance Steel 2017, you can download and install the latest updates and service packs from your Autodesk Account or from the Autodesk Advance Steel Downloads page.
• If you want to upgrade to a newer version of Advance Steel, you can purchase a new license or renew your subscription or maintenance plan from your Autodesk Account or from an Autodesk reseller.

            Q4: How can I get support or training for Advance Steel 2017?

            -

            A4: You can get support or training for Advance Steel 2017 by accessing the following resources:

            - -

            Q5: How can I integrate Advance Steel 2017 with other Autodesk products?

            -

            A5: You can integrate Advance Steel 2017 with other Autodesk products by using the following methods:

            -
• You can import and export data between Advance Steel 2017 and Revit using the Advance Steel Extension for Revit. This allows you to synchronize structural models and data between the two software.
• You can import and export data between Advance Steel 2017 and AutoCAD using the Advance Steel Extension for AutoCAD. This allows you to create and modify structural elements and connections in AutoCAD and transfer them to Advance Steel 2017.
• You can import and export data between Advance Steel 2017 and Navisworks using the Advance Steel Extension for Navisworks. This allows you to review and coordinate structural models and data in Navisworks.
• You can import and export data between Advance Steel 2017 and BIM 360 using the Advance Steel Extension for BIM 360. This allows you to collaborate and share structural models and data in the cloud using BIM 360.

            \ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Annabelle Movie Download 720p 16 !!HOT!!.md b/spaces/tioseFevbu/cartoon-converter/scripts/Annabelle Movie Download 720p 16 !!HOT!!.md deleted file mode 100644 index 214af605fbabe2d8d41e30bd7fceeba87fe0b36d..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/Annabelle Movie Download 720p 16 !!HOT!!.md +++ /dev/null @@ -1,23 +0,0 @@ - -

            How to Download Annabelle Movie in 720p HD Quality

            -

            If you are a fan of horror movies, you might have heard of the Annabelle series, which is a spin-off of the popular Conjuring franchise. Annabelle is a haunted doll that causes terror and mayhem wherever it goes. The series consists of three movies: Annabelle (2014), Annabelle: Creation (2017), and Annabelle Comes Home (2019).

            -

            In this article, we will show you how to download Annabelle movie in 720p HD quality using torrent sites. Torrent sites are online platforms that allow users to share and download files, such as movies, music, games, etc. However, torrenting is illegal in many countries and can expose you to malware, viruses, and legal issues. Therefore, we advise you to use a VPN (Virtual Private Network) service to protect your online privacy and security when downloading torrents.

            -

            Annabelle Movie Download 720p 16


            Download ····· https://urlcod.com/2uHxSa



            -

            Steps to Download Annabelle Movie in 720p HD Quality

            -
              -
            1. Choose a reliable torrent site that has the Annabelle movie you want to download. Some of the popular torrent sites are YTS.mx[^1^], The Pirate Bay, 1337x, etc. You can also use a torrent search engine like Torrentz2 or Zooqle to find the best torrent for your movie.
            2. -
            3. Search for the keyword "Annabelle Movie Download 720p 16" on the torrent site. This will show you a list of torrents that match your query. You can sort them by seeders, leechers, size, date, etc. to find the best one for your needs. Seeders are users who have the complete file and are sharing it with others. Leechers are users who are downloading the file but have not completed it yet. The more seeders and fewer leechers a torrent has, the faster and more reliable it will be.
            4. -
            5. Download the torrent file or magnet link of the Annabelle movie you want to download. A torrent file is a small file that contains information about the file you want to download, such as its name, size, hash, trackers, etc. A magnet link is a URL that contains the same information as a torrent file but does not require downloading. You can open either of them with a torrent client, which is a software that enables you to download and upload files using the BitTorrent protocol.
            6. -
            7. Choose a reputable torrent client that supports your device and operating system. Some of the popular torrent clients are uTorrent, BitTorrent, qBittorrent, Vuze, etc. You can download them from their official websites or app stores.
            8. -
            9. Open the torrent file or magnet link with your torrent client. This will start downloading the Annabelle movie in 720p HD quality to your device. You can monitor the progress of your download on your torrent client interface. You can also pause, resume, or cancel your download at any time.
            10. -
            11. Enjoy watching the Annabelle movie in 720p HD quality on your device. You can use any media player that supports the video format of your downloaded file. Some of the common video formats are MP4, MKV, AVI, etc. You can also use subtitles or dubbing if available.
            12. -
            -

            Tips and Warnings

            -
              -
            • Always use a VPN service when downloading torrents to hide your IP address and encrypt your traffic. This will prevent your ISP (Internet Service Provider) from tracking your online activity and throttling your speed or blocking your access to torrent sites. It will also protect you from hackers, malware, viruses, and legal issues that may arise from torrenting.
            • -
            • Choose a VPN service that has fast speed, unlimited bandwidth, no logs policy, and multiple servers in different countries. Some of the best VPN services for torrenting are ExpressVPN, NordVPN, Surfshark, etc.
            • -
            • Check the comments and ratings of the torrents before downloading them to avoid fake or malicious files. You can also use antivirus software or malware scanners to scan your downloaded files for any threats.
            • -
            • Delete or seed your downloaded files after watching them to save space on your device and help other users who want to download them.
            • -
            • Do not download or share copyrighted content

              \ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Atomix VirtualDJ Pro Infinity 8.3.4787 Crack WORK.md b/spaces/tioseFevbu/cartoon-converter/scripts/Atomix VirtualDJ Pro Infinity 8.3.4787 Crack WORK.md deleted file mode 100644 index 06bf6d745aa5c64de6be9026caa105d79d32b1f0..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/Atomix VirtualDJ Pro Infinity 8.3.4787 Crack WORK.md +++ /dev/null @@ -1,27 +0,0 @@ - -

              How to Crack Atomix VirtualDJ Pro Infinity 8.3.4787 and Enjoy Its Amazing Features

              -

              Atomix VirtualDJ Pro Infinity is a professional DJ software that allows you to mix music, videos, and karaoke with ease. It has a powerful engine that lets you manipulate and combine different components of your tracks, such as vocals, instruments, kicks, hi-hats, etc. You can also use performance pads to unleash your creativity and create stunning remixes on the fly.

              -

              Atomix VirtualDJ Pro Infinity 8.3.4787 Crack


              Download File ––– https://urlcod.com/2uHvJf



              -

              If you want to enjoy the full potential of VirtualDJ Pro Infinity, you need to crack it and activate it with a license key. Here are the steps to do that:

              -
                -
              1. Turn off your anti-virus software and download the crack file from this link [^1^].
              2. -
              3. Install VirtualDJ Pro Infinity 8.3.4787 trial setup.exe from the downloaded file.
              4. -
              5. Do not run the application after installation.
              6. -
              7. Block VirtualDJ via firewall or run virtualdj_hosts_patch.cmd as an administrator to prevent it from connecting to the internet.
              8. -
              9. Copy virtualdj_pro file from Crack folder and paste it into the installation directory (C:\Program Files\VirtualDJ).
              10. -
              11. Run VirtualDJ Pro Infinity and enter any name and email address when prompted for registration.
              12. -
              13. Enjoy your cracked VirtualDJ Pro Infinity with unlimited features!
              14. -
              -

              Note: This crack is only for educational purposes and we do not support piracy. Please buy the original software from the official website [^3^] if you like it and can afford it.

              Some Tips and Tricks to Master VirtualDJ Pro Infinity

              -

              Now that you have cracked VirtualDJ Pro Infinity, you might be wondering how to use it like a pro. Here are some tips and tricks that will help you improve your DJ skills and impress your audience.

              -

              -
• Use the sync button to match the tempo and phase of two tracks automatically. This will save you time and make your transitions smoother. You can also use the pitch slider to adjust the tempo manually if you prefer.
• Use the cue points to mark specific parts of a track that you want to play or loop. You can set up to 8 cue points per track and trigger them with the performance pads or the keyboard. You can also use cue points to jump to different parts of a track or create mashups.
• Use the effects to add some spice to your mix. VirtualDJ Pro Infinity comes with a wide range of effects, such as echo, flanger, reverb, filter, etc. You can apply them to one or both decks, or to the master output. You can also chain multiple effects together and adjust their parameters with the knobs or sliders.
• Use the sampler to play short samples, such as vocals, drums, horns, etc. You can load your own samples or use the ones provided by VirtualDJ Pro Infinity. You can trigger them with the performance pads or the keyboard, and sync them with the tempo of the tracks. You can also record your own samples from any source and save them for later use.
• Use the video mixer to mix videos along with your music. VirtualDJ Pro Infinity supports various video formats, such as MP4, AVI, WMV, etc. You can load videos on each deck and mix them with crossfader and effects. You can also add text, images, logos, etc. to your video mix with the video editor.

These are just some of the tips and tricks that you can use with VirtualDJ Pro Infinity. There are many more features and functions that you can explore and customize according to your preferences. For more tutorials and guides, you can check out this video [^1^] or visit the official forum.

              -
              -
              \ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Copy My Data For Mac.md b/spaces/tioseFevbu/cartoon-converter/scripts/Copy My Data For Mac.md deleted file mode 100644 index 6a4caf5191df6a1fc5d02126964e7e0dede6b60b..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/Copy My Data For Mac.md +++ /dev/null @@ -1,221 +0,0 @@ - - - -

              Copy My Data for Mac: How to Transfer Your Data from One Mac to Another


              If you have a new Mac and want to transfer your data from your old one, you might be wondering how to do it easily and quickly. Fortunately, there is a handy app called Copy My Data that can help you with this task.

              -

              Copy My Data is a free app that allows you to copy your contacts, calendars, photos, videos, messages, notes, and more from one device to another over Wi-Fi. You can use it to transfer your data between two Macs, or between a Mac and an iPhone, iPad, iPod touch, or Android device.

              -

              Copy My Data For Mac


              DOWNLOADhttps://urlcod.com/2uHw3I



              -

              In this article, we will show you how to use Copy My Data for Mac to transfer your data from one Mac to another. We will also show you some other methods to copy your data, such as using Migration Assistant, Time Machine, or keyboard shortcuts.

              What You Need to Copy Your Data

              Before you start copying your data, you need to make sure you have everything you need. Here are some of the things you need to copy your data:

• A Wi-Fi network that both Macs can connect to.
• The Copy My Data app installed on both Macs. You can download it from the Mac App Store for free.
• The devices you want to transfer data from and to. Make sure they have enough battery power or are plugged in.
• Optionally, you can also use a Time Machine backup or a USB storage device to copy your data. You will need an appropriate adapter if your Mac does not have a USB port.

              Once you have everything ready, you can start copying your data with Copy My Data for Mac.

              How to Copy Your Data Wirelessly with Migration Assistant

              If you want to transfer all or most of your data from one Mac to another, you can use Migration Assistant, a built-in app that lets you move your user accounts, apps, files, folders, and settings over Wi-Fi. This method is recommended if you are setting up a new Mac or replacing an old one.

              -

              To use Migration Assistant, you need to follow these steps on both your new and old Macs:

              -

              How to Use Migration Assistant on Your New Mac

1. Turn on your new Mac and follow the onscreen instructions until you see the Migration Assistant screen.
2. Select the option to transfer from a Mac, Time Machine backup, or startup disk, and click Continue.
3. If prompted, enter your administrator password and click OK.
4. Choose the other Mac from the list of available devices, and click Continue.
5. A security code will appear on both Macs. Make sure they match, and click Continue on your new Mac.

              How to Use Migration Assistant on Your Old Mac

1. Open Migration Assistant from the Utilities folder in the Applications folder.
2. Select the option to transfer to another Mac, and click Continue.
3. If prompted, enter your administrator password and click OK.
4. Wait for the other Mac to appear on the screen, and click Continue.
5. A security code will appear on both Macs. Make sure they match, and click Continue on your old Mac.

              How to Select and Transfer the Information You Want

1. On your new Mac, you will see a list of information that you can transfer from your old Mac. You can select or deselect the items you want by checking or unchecking the boxes next to them.
2. You can also click the disclosure triangle next to each item to see more details and options. For example, you can choose which user accounts, apps, or folders you want to transfer.
3. If you have more than one user account on your old Mac, you will need to enter the password for each account that you want to transfer.
4. After you have selected everything you want to transfer, click Continue.
5. The transfer will begin and may take some time depending on the amount of data and the speed of your Wi-Fi network. You can see the progress and estimated time on both Macs.
6. When the transfer is complete, click Quit on both Macs. Your new Mac will restart and you will be able to log in with your transferred user accounts and access your transferred data.

              How to Copy Your Data from a Time Machine Backup or a USB Storage Device

              If you have a Time Machine backup or a USB storage device that contains your data, you can also use them to copy your data to your new Mac. This method is useful if you don't have a Wi-Fi network or if you only want to transfer some of your data.

              -

              To use this method, you need to follow these steps:

              How to Connect the Backup or Storage Device to Your New Mac

1. Connect the backup or storage device to your new Mac using an appropriate adapter if necessary. For example, if your device has a USB-A connector and your Mac has a USB-C port, you will need a USB-A to USB-C adapter.
2. Wait for the device to appear on your desktop or in the Finder sidebar. If it does not appear, you may need to format it for Mac using Disk Utility.
3. Open the device and locate the files or folders that you want to copy. You can also use Spotlight or Finder search to find them.
4. Select the files or folders and drag them to your new Mac. You can drag them to the desktop, the Documents folder, or any other location you prefer. (If you would rather script this step, see the sketch after this list.)
5. Wait for the copying process to finish. You can see the progress and estimated time in a window that pops up.
6. Eject the device by dragging it to the Trash icon or by right-clicking or control-clicking it and choosing Eject.
7. Disconnect the device from your new Mac.
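If you prefer to script the copy rather than drag items in Finder, the sketch below shows one way to do it in Python. The volume name (/Volumes/MyBackup) and the destination folder are assumptions made purely for illustration; substitute the actual paths that appear in Finder on your Mac.

```python
import shutil
from pathlib import Path

# Hypothetical paths -- replace with the real volume name and destination.
source = Path("/Volumes/MyBackup/Documents")        # external drives mount under /Volumes
destination = Path.home() / "Documents" / "FromBackup"

# Copy the whole folder tree; dirs_exist_ok lets you re-run into an existing folder.
shutil.copytree(source, destination, dirs_exist_ok=True)
print(f"Copied {source} -> {destination}")
```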

              How to Restore Your Content from a Backup

1. If you have a Time Machine backup, you can use Time Machine to restore your content. To do this, open Time Machine from the Applications folder or from the menu bar icon.
2. Select the backup that contains your data. You can use the timeline on the right side of the screen or the arrows at the bottom to navigate through different backup dates and times.
3. Find the files or folders that you want to restore. You can also use Spotlight or Finder search to find them.
4. Select the files or folders and click Restore. You can also right-click or control-click them and choose Restore.
5. Choose where you want to restore them on your new Mac. You can overwrite existing files or keep both versions.
6. Wait for the restoring process to finish. You can see the progress and estimated time in a window that pops up.
7. If you have another backup software, such as Carbon Copy Cloner or SuperDuper, you can use it to restore your content as well. Follow the instructions provided by the software developer.
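If you want to check which Time Machine backups exist before restoring, macOS also ships a command-line tool called tmutil. The short sketch below simply lists the backups it can see; it is only an illustration (newer macOS versions may require Full Disk Access for the terminal), and the restore itself is usually easier through the Time Machine interface described above.

```python
import subprocess

# List the Time Machine backups that macOS knows about.
result = subprocess.run(["tmutil", "listbackups"], capture_output=True, text=True)

if result.returncode == 0:
    print("Available backups:")
    print(result.stdout)
else:
    print("Could not list backups:", result.stderr)
```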

              How to Copy and Paste on Mac with Keyboard Shortcuts

              If you want to copy and paste a small amount of data, such as a file, a text, or an image, you can use keyboard shortcuts to do it quickly and easily. Keyboard shortcuts are combinations of keys that you press to perform certain actions. They can save you time and effort when working on your Mac.

              -

              To use keyboard shortcuts to copy and paste on Mac, you need to follow these steps:

              How to Copy on Mac

1. Select the file or text that you want to copy. You can use your mouse or trackpad to drag over the item, or use the Shift and arrow keys to highlight it.
2. Press Command + C on your keyboard. This will copy the item to your clipboard, which is a temporary storage area for copied data.
3. You will see a brief animation or a sound indicating that the item has been copied. You can also check the Edit menu in the menu bar and see that the Copy option is highlighted.

              How to Paste on Mac

1. Move the cursor to where you want to paste the item. You can use your mouse or trackpad to click on the location, or use the arrow keys to navigate.
2. Press Command + V on your keyboard. This will paste the item from your clipboard to the location.
3. You will see a brief animation or a sound indicating that the item has been pasted. You can also check the Edit menu in the menu bar and see that the Paste option is highlighted.

              How to Cut on Mac

1. Select the file or text that you want to cut. You can use your mouse or trackpad to drag over the item, or use the Shift and arrow keys to highlight it.
2. Press Command + X on your keyboard. This will cut the item from its original location and copy it to your clipboard.
3. You will see a brief animation or a sound indicating that the item has been cut. You can also check the Edit menu in the menu bar and see that the Cut option is highlighted.
4. Move the cursor to where you want to paste the item. You can use your mouse or trackpad to click on the location, or use the arrow keys to navigate.
5. Press Command + V on your keyboard. This will paste the item from your clipboard to the location.
6. You will see a brief animation or a sound indicating that the item has been pasted. You can also check the Edit menu in the menu bar and see that the Paste option is highlighted. (If you ever need to reach the same clipboard from a script, see the sketch after this list.)
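The same clipboard that Command + C and Command + V use can also be reached from scripts through the built-in pbcopy and pbpaste command-line tools. The snippet below is a small illustration of that idea; the example text is just a placeholder.

```python
import subprocess

def copy_to_clipboard(text: str) -> None:
    """Put text on the macOS clipboard via the built-in pbcopy tool."""
    subprocess.run(["pbcopy"], input=text, text=True, check=True)

def paste_from_clipboard() -> str:
    """Read the current clipboard contents via pbpaste."""
    result = subprocess.run(["pbpaste"], capture_output=True, text=True, check=True)
    return result.stdout

copy_to_clipboard("Hello from the clipboard")   # placeholder text
print(paste_from_clipboard())                   # prints: Hello from the clipboard
```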

              How to Copy and Paste on Mac with Mouse or Trackpad

              If you prefer to use your mouse or trackpad to copy and paste on Mac, you can also do that with a few clicks. You can use the right-click or the control-click to access the contextual menu that contains the copy, paste, and cut options. This method is convenient if you don't want to use the keyboard or if you want to have more control over the copying and pasting process.

              -

              To use your mouse or trackpad to copy and paste on Mac, you need to follow these steps:

              How to Copy on Mac

1. Select the file or text that you want to copy. You can use your mouse or trackpad to drag over the item, or use the Shift and arrow keys to highlight it.
2. Right-click or control-click the item. This will open a contextual menu that contains various options.
3. Choose Copy from the menu. This will copy the item to your clipboard.
4. You will see a brief animation or a sound indicating that the item has been copied. You can also check the Edit menu in the menu bar and see that the Copy option is highlighted.

              How to Paste on Mac

1. Move the cursor to where you want to paste the item. You can use your mouse or trackpad to click on the location, or use the arrow keys to navigate.
2. Right-click or control-click where you want to paste. This will open a contextual menu that contains various options.
3. Choose Paste from the menu. This will paste the item from your clipboard to the location.
4. You will see a brief animation or a sound indicating that the item has been pasted. You can also check the Edit menu in the menu bar and see that the Paste option is highlighted.

              How to Cut on Mac

1. Select the file or text that you want to cut. You can use your mouse or trackpad to drag over the item, or use the Shift and arrow keys to highlight it.
2. Right-click or control-click the item. This will open a contextual menu that contains various options.
3. Choose Cut from the menu. This will cut the item from its original location and copy it to your clipboard.
4. You will see a brief animation or a sound indicating that the item has been cut. You can also check the Edit menu in the menu bar and see that the Cut option is highlighted.
5. Move the cursor to where you want to paste the item. You can use your mouse or trackpad to click on the location, or use the arrow keys to navigate.
6. Right-click or control-click where you want to paste. This will open a contextual menu that contains various options.
7. Choose Paste from the menu. This will paste the item from your clipboard to the location.
8. You will see a brief animation or a sound indicating that the item has been pasted. You can also check the Edit menu in the menu bar and see that the Paste option is highlighted.

              Conclusion

              In this article, we have shown you how to use Copy My Data for Mac to transfer your data from one Mac to another. We have also shown you some other methods to copy your data, such as using Migration Assistant, Time Machine, or keyboard shortcuts.

              -

              Copying your data on Mac is easy and fast with these methods. You can choose the one that suits your needs and preferences. Whether you want to transfer all or some of your data, you can do it without losing any quality or information.

              -

              Here are some tips and recommendations for copying data on Mac:

• Make sure you have a reliable Wi-Fi network or a compatible backup or storage device before you start copying your data.
• Back up your data regularly to avoid losing it in case of any accidents or errors.
• Use Copy My Data for Mac to transfer your data between different devices, such as Macs, iPhones, iPads, iPods, or Androids.
• Use Migration Assistant to transfer your data between two Macs over Wi-Fi.
• Use Time Machine or another backup software to restore your data from a backup.
• Use keyboard shortcuts or mouse clicks to copy and paste small amounts of data.

              We hope this article has helped you learn how to copy your data on Mac. If you have any questions or feedback, please let us know in the comments below.

              FAQs:

1. How do I copy my data from a Mac to an iPhone?
You can use Copy My Data for Mac to transfer your data from a Mac to an iPhone over Wi-Fi. You need to install the app on both devices and follow the instructions on the screen. You can also use iTunes or Finder to sync your data from a Mac to an iPhone using a USB cable.
2. How do I copy my data from a Mac to an Android?
You can use Copy My Data for Mac to transfer your data from a Mac to an Android over Wi-Fi. You need to install the app on both devices and follow the instructions on the screen. You can also use Android File Transfer or another software to drag and drop your files from a Mac to an Android using a USB cable.
3. How do I copy my data from one user account to another on the same Mac?
You can use Migration Assistant to transfer your data from one user account to another on the same Mac. You need to log out of the current user account and log in as an administrator. Then, open Migration Assistant and select the option to transfer information to this Mac. Choose the user account that you want to transfer from and select the information that you want to transfer. Click Continue and wait for the transfer to finish.
4. How do I copy my data from a Windows PC to a Mac?
You can use Migration Assistant to transfer your data from a Windows PC to a Mac over Wi-Fi or Ethernet. You need to download and install Windows Migration Assistant on your PC and open Migration Assistant on your Mac. Select the option to transfer from a Windows PC and follow the instructions on the screen. You can also use an external hard drive or another storage device to copy your files from a PC to a Mac.
5. How do I copy my photos from a Mac to iCloud?
You can use iCloud Photos to sync your photos from a Mac to iCloud. You need to turn on iCloud Photos on your Mac and sign in with the same Apple ID that you use on your other devices. Your photos will be uploaded and stored in iCloud automatically. You can also use Photos for Mac or another app to import your photos from a camera or a memory card to your Mac and then upload them to iCloud.

              -
              -
              \ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Cpa Network Script Nulled Theme.md b/spaces/tioseFevbu/cartoon-converter/scripts/Cpa Network Script Nulled Theme.md deleted file mode 100644 index fc38ef286f8e51afe8776c91515603e01c57e430..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/Cpa Network Script Nulled Theme.md +++ /dev/null @@ -1,140 +0,0 @@ -
              -

              CPA Network Script Nulled Theme: What You Need to Know

              -

              If you are a webmaster who wants to run your own CPA/Affiliate network, you might have heard about CPA network script and nulled theme. But what are they exactly and why are they popular among some webmasters? In this article, we will explain what CPA network script is, what nulled theme is, and what you need to know about them before using them for your website.

              -

              Benefits of CPA Network Script

              -

              CPA network script is a software that allows you to create your own CPA/Affiliate network easily and efficiently. It provides you with all the tools and features you need to manage your offers, track your conversions, pay your affiliates, and more. With CPA network script, you can have full control over your network and customize it according to your preferences and needs.

              -

              cpa network script nulled theme


              Download File ☆☆☆☆☆ https://urlcod.com/2uHxOI



              -

              Features of CPA Network Script

              -

              Some of the features of CPA network script are:

• Dashboard: A user-friendly and intuitive dashboard that shows you an overview of your network's performance, such as revenue, clicks, conversions, EPC, etc.
• Offer Management: A comprehensive and flexible offer management system that allows you to add, edit, delete, approve, reject, pause, resume, and categorize your offers. You can also set different payout rates, commission types, caps, geo-targeting, tracking parameters, landing pages, etc. for your offers.
• Tracking System: A robust and accurate tracking system that tracks every click, impression, conversion, and event on your network. You can also integrate with third-party tracking platforms, such as Voluum, Binom, BeMob, etc. (A minimal sketch of how a conversion postback is typically received follows this list.)
• Payment System: A secure and reliable payment system that allows you to pay your affiliates on time and in various methods, such as PayPal, Payoneer, Wire Transfer, etc. You can also set different payment terms, thresholds, currencies, fees, etc. for your affiliates.
• Affiliate Management: A powerful and easy-to-use affiliate management system that allows you to manage your affiliates effectively. You can add, edit, delete, approve, reject, ban, suspend, and assign different roles and permissions to your affiliates. You can also communicate with your affiliates via email or chat.
• Reporting and Analytics: A detailed and insightful reporting and analytics system that allows you to monitor and optimize your network's performance. You can generate various reports and charts based on different metrics, filters, time periods, etc. You can also export or download your data in various formats.
• And more: There are many more features of CPA network script that make it a complete solution for your CPA/Affiliate network. Some of them are: API integration, fraud detection, smart link, postback, offer wall, landing page builder, etc.
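To make the tracking and postback features above more concrete, here is a minimal sketch of the kind of HTTP endpoint a CPA network script might expose so that advertisers can report conversions. It uses Flask purely for illustration; the parameter names (click_id, payout) and the in-memory store are assumptions, not the API of any particular script mentioned in this article.

```python
from flask import Flask, request

app = Flask(__name__)
conversions = {}  # toy in-memory store; a real network script would use a database

@app.route("/postback")
def postback():
    # The advertiser calls e.g. /postback?click_id=abc123&payout=1.50 when a conversion happens.
    click_id = request.args.get("click_id")
    payout = float(request.args.get("payout", 0))
    if not click_id:
        return "missing click_id", 400
    conversions[click_id] = payout
    return "OK", 200

if __name__ == "__main__":
    app.run(port=8000)
```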

              Examples of CPA Network Script

              -

              There are many CPA network scripts available in the market that you can choose from. Some of the examples are:

| Name | Description | Price |
| --- | --- | --- |
| DreamAff | A premium CPA network script that offers a fully responsive design, advanced features, lifetime updates, and 24/7 support. | $499 |
| AdFlex | A popular CPA network script that offers a multi-language interface, multiple payment gateways, custom domains, and more. | $69 |
| OfferWall | A simple and affordable CPA network script that offers a ready-made offer wall template, easy installation, and basic features. | $29 |
              -

              These are just some of the examples of CPA network script. You can find more options online or create your own custom script if you have the skills and resources.

              -

              Risks of Nulled Theme

              -

              Nulled theme is a theme that has been modified or cracked to remove the license verification or activation code from the original theme. It is usually distributed for free or at a very low price on various websites or forums. Some webmasters use nulled theme to save money or to access premium features without paying for them. However, using nulled theme is very risky and can cause serious problems for your website.

              -

              Legal Issues of Nulled Theme

              -

              Nulled theme can violate the intellectual property rights and terms of service of the original theme developers. By using nulled theme, you are infringing on their rights and breaking their rules. This can result in legal actions, such as lawsuits, fines, or penalties. You can also lose your access to the original theme and its updates, support, and features. You can also damage your reputation and credibility as a webmaster.

              -

              Security Issues of Nulled Theme

              -

              Nulled theme can contain malicious code, malware, backdoors, etc. that can compromise the security and performance of your website. These can allow hackers to access your website, steal your data, inject ads, redirect your traffic, or even take over your website. They can also harm your visitors, infect their devices, or expose their personal information. You can also face legal consequences if your website is involved in any illegal or unethical activities due to the nulled theme.

              -

              -

              Quality Issues of Nulled Theme

              -

              Nulled theme can have bugs, errors, compatibility issues, outdated features, etc. that can affect the quality and functionality of your website. These can cause your website to crash, slow down, display incorrectly, or lose some features. They can also make your website vulnerable to attacks or exploits. You can also miss out on the latest updates, improvements, and innovations from the original theme developers. You can also have difficulties in finding support or solutions for your problems with the nulled theme.

              -

              Alternatives to Nulled Theme

              -

Using a nulled theme is not worth the risk and hassle for your website. It is better to use the original theme or another legitimate alternative. Here are some of the alternatives you can consider:

              -

              Original Theme

              -

              The best alternative to nulled theme is the original theme. By using the original theme, you can enjoy more benefits than nulled theme, such as:

              -
• Support: You can get professional and timely support from the original theme developers or their team. You can also access their documentation, tutorials, forums, etc.
• Updates: You can get regular and automatic updates from the original theme developers that fix bugs, improve performance, add features, etc.
• Customization Options: You can get more customization options from the original theme that allow you to change the appearance, layout, functionality, etc. of your website according to your needs and preferences.
• And more: There are many more benefits of using the original theme that make it a worthwhile investment for your website. Some of them are: security, quality, compatibility, reputation, etc.

              The price of the original theme may vary depending on the features, quality, popularity, etc. of the theme. However, you can find some affordable options online or look for discounts or coupons that can lower the cost.

              -

              Free Theme

              -

              If you have a limited budget but still want a quality theme for your website, you can opt for a free theme. A free theme is a theme that is available for free or at no cost on various websites or platforms. Some webmasters use free themes to test their websites or to start their online presence.

              -

              However, not all free themes are created equal. Some free themes may have some drawbacks or limitations compared to premium themes, such as:

• Support: You may not get any support or assistance from the free theme developers or their team. You may have to rely on yourself or other users to solve your problems.
• Updates: You may not get any updates or improvements from the free theme developers. You may have to use the same version of the theme for a long time or look for other alternatives.
• Customization Options: You may not get many customization options from the free theme. You may have to stick with the default settings or make some changes manually.
• And more: There are some more drawbacks or limitations of using free themes that you should be aware of. Some of them are: security, quality, compatibility, reputation, etc.

              Therefore, you should be careful and selective when choosing a free theme for your website. You should check the reviews, ratings, feedback, etc. of the free theme before using it. You should also scan the free theme for any malicious code, malware, backdoors, etc. that can harm your website.
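As a rough illustration of what "scanning a theme" can mean in practice, the sketch below walks a theme folder and flags PHP files containing patterns that are commonly abused in tampered themes (such as eval combined with base64_decode). The folder path and the pattern list are assumptions for the example; a match is only a hint to inspect the file by hand, not proof of malware.

```python
import re
from pathlib import Path

# Hypothetical theme folder -- point this at the theme you downloaded.
theme_dir = Path("./downloaded-theme")

# Patterns often seen in tampered themes; matches deserve a manual look, nothing more.
suspicious = [r"eval\s*\(", r"base64_decode\s*\(", r"gzinflate\s*\(", r"assert\s*\("]
pattern = re.compile("|".join(suspicious))

for php_file in theme_dir.rglob("*.php"):
    text = php_file.read_text(errors="ignore")
    if pattern.search(text):
        print(f"Suspicious pattern found in: {php_file}")
```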

              -

              Some examples of free themes that are compatible with CPA network script are:

| Name | Description | URL |
| --- | --- | --- |
| Astra | A fast, lightweight, and customizable free theme that works well with CPA network script and other plugins. | |
| OceanWP | A versatile, responsive, and SEO-friendly free theme that offers many features and options for CPA network script and other plugins. | |
| GeneratePress | A simple, secure, and stable free theme that provides a solid foundation for CPA network script and other plugins. | |
              -

              These are just some of the examples of free themes that are compatible with CPA network script. You can find more options online or create your own custom theme if you have the skills and resources.

              -

              Premium Theme

              -

              If you want a professional and unique theme for your website, you can opt for a premium theme. A premium theme is a theme that is available for a certain price or fee on various websites or platforms. Some webmasters use premium themes to enhance their websites or to stand out from their competitors.

              -

              Premium themes usually offer more benefits than free themes, such as:

              -
• Support: You can get professional and timely support from the premium theme developers or their team. You can also access their documentation, tutorials, forums, etc.
• Updates: You can get regular and automatic updates from the premium theme developers that fix bugs, improve performance, add features, etc.
• Customization Options: You can get more customization options from the premium theme that allow you to change the appearance, layout, functionality, etc. of your website according to your needs and preferences.
• And more: There are many more benefits of using premium themes that make them a worthwhile investment for your website. Some of them are: security, quality, compatibility, reputation, etc.

              The price of the premium theme may vary depending on the features, quality, popularity, etc. of the theme. However, you can find some reasonable options online or look for discounts or coupons that can lower the cost.

              -

              Some examples of premium themes that are compatible with CPA network script are:

| Name | Description | Price | URL |
| --- | --- | --- | --- |
| Doo | A modern and stylish premium theme that is designed for CPA network script and other affiliate marketing plugins. | $59 | |
| Couponis | A sleek and elegant premium theme that is optimized for CPA network script and other coupon/deal plugins. | $49 | |
| CouponXL | A powerful and flexible premium theme that is suitable for CPA network script and other offer/cashback plugins. | $49 | |
              These are just some of the examples of premium themes that are compatible with CPA network script. You can find more options online or create your own custom theme if you have the skills and resources.

Conclusion

In conclusion, CPA network script is a software that allows you to create your own CPA/Affiliate network easily and efficiently. It provides you with all the tools and features you need to manage your offers, track your conversions, pay your affiliates, and more. However, using a nulled theme for your CPA network script is very risky and can cause serious problems for your website. A nulled theme can violate the intellectual property rights and terms of service of the original theme developers, contain malicious code, malware, backdoors, etc. that can compromise the security and performance of your website, and have bugs, errors, compatibility issues, outdated features, etc. that can affect the quality and functionality of your website.

Therefore, it is better to use the original theme or other legitimate alternatives than a nulled theme. The original theme can provide more benefits than a nulled theme, such as support, updates, customization options, etc. A free theme can be a good option for webmasters who have a limited budget but still want a quality theme for their website. A premium theme can be a worthwhile investment for webmasters who want a professional and unique theme for their website.

              -

              We hope this article has helped you understand what CPA network script nulled theme is and what you need to know about it before using it for your website. If you have any questions or comments, please feel free to leave them below. Thank you for reading!

              -

              FAQs

              -

              Here are some of the frequently asked questions related to the topic of this article and their answers:

              -
1. What is CPA network script?
CPA network script is a software that allows you to create your own CPA/Affiliate network easily and efficiently.
2. What is a nulled theme?
A nulled theme is a theme that has been modified or cracked to remove the license verification or activation code from the original theme.
3. Why is a nulled theme risky?
A nulled theme is risky because it can violate the intellectual property rights and terms of service of the original theme developers, contain malicious code, malware, backdoors, etc. that can compromise the security and performance of your website, and have bugs, errors, compatibility issues, outdated features, etc. that can affect the quality and functionality of your website.
4. What are the alternatives to a nulled theme?
The alternatives to a nulled theme are the original theme, a free theme, and a premium theme.
5. Where can I find CPA network script and themes?
You can find CPA network script and themes on various websites or platforms online. Some of them are: Codecanyon, Themeforest, WordPress, etc.

              -
              -
              \ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Day Trading Options Jeff Augen Free Pdf 32 TOP.md b/spaces/tioseFevbu/cartoon-converter/scripts/Day Trading Options Jeff Augen Free Pdf 32 TOP.md deleted file mode 100644 index d55e2489c345943531265011f1ded7b975e83141..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/Day Trading Options Jeff Augen Free Pdf 32 TOP.md +++ /dev/null @@ -1,26 +0,0 @@ - -

              How to Profit from Day Trading Options with Jeff Augen's Strategies

              -

              Day trading options can be a lucrative way to take advantage of price distortions and anomalies in very brief time frames. However, it also requires a high level of skill, discipline and risk management. In this article, we will explore some of the strategies and techniques that Jeff Augen, a veteran option trader and author of Day Trading Options: Profiting from Price Distortions in Very Brief Time Frames,[^1^] has developed and shared in his book.

              -

              day trading options jeff augen free pdf 32


              Download Ziphttps://urlcod.com/2uHwzl



              -

              What is Day Trading Options?

              -

              Day trading options is the practice of buying and selling options contracts within the same trading day, usually with the intention of closing the position before the market closes. Day traders aim to exploit short-term price movements and volatility fluctuations that occur during the day, often triggered by news events, earnings announcements, technical signals or market sentiment.

              -

              Options are contracts that give the buyer the right, but not the obligation, to buy or sell an underlying asset at a specified price (strike) before or on a certain date (expiration). Options can be classified into two types: calls and puts. A call option gives the buyer the right to buy the underlying asset, while a put option gives the buyer the right to sell the underlying asset. The seller (or writer) of an option receives a premium from the buyer in exchange for taking on the risk of being assigned (or exercised) if the option is in-the-money at expiration.
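As a quick numerical illustration of these definitions (the numbers below are invented for the example), the payoff of a call at expiration is max(S − K, 0) and the payoff of a put is max(K − S, 0), where S is the underlying price and K is the strike:

```python
# Toy numbers, purely illustrative: strike K = 100, underlying finishes at S = 108,
# and the buyer paid a premium of 3 per share for the call.
S, K, premium = 108.0, 100.0, 3.0

call_payoff = max(S - K, 0)          # 8.0: the call buyer can buy at 100 and sell at 108
put_payoff = max(K - S, 0)           # 0.0: the put expires worthless

call_profit = call_payoff - premium  # 5.0 per share after subtracting the premium paid
print(call_payoff, put_payoff, call_profit)
```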

              -

              Why Day Trade Options?

              -

              Day trading options has several advantages over other forms of trading, such as:

              -

• Leverage: Options allow traders to control a large number of shares with a relatively small amount of capital. This means that traders can potentially magnify their profits (and losses) with a small price movement in the underlying asset.
• Flexibility: Options offer a variety of strategies and combinations that can suit different market conditions and risk preferences. Traders can use options to speculate on the direction, magnitude or volatility of the underlying asset's price movement, or to hedge their existing positions.
• Liquidity: Options are traded on exchanges and have standardized specifications, which make them easy to buy and sell. Some options have high trading volume and narrow bid-ask spreads, which reduce transaction costs and slippage.

              What are Jeff Augen's Strategies?

              -

              Jeff Augen is an experienced option trader who has written several books on options trading, including Day Trading Options. In his book, he reveals insights and techniques that he has developed and tested over many years of trading. Some of his strategies include:

              -
• Trading volatility distortions: Augen introduces a concept called the implied volatility surface, which is a three-dimensional representation of how implied volatility varies across different strike prices and expiration dates for a given underlying asset. He shows how to use this tool to identify and trade situations where implied volatility is either too high or too low compared to historical volatility or fair volatility.[^3^] (A small sketch of how historical volatility can be computed follows this list.)
• Working with intraday price spike charts: Augen presents a new charting technique that uses ultra-short-term price spikes to measure volatility and identify trends at the single-minute level. He demonstrates how to use this technique to trade options based on intraday price patterns and technical indicators.
• Special events trading: Augen explains how to trade options around special events that cause significant volatility distortions in the market, such as earnings announcements, dividends, mergers and acquisitions, economic reports and political events. He provides guidelines on how to select the best option strategy, strike price and expiration date for each event.
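For readers unfamiliar with the historical-volatility side of that comparison, the sketch below shows the standard calculation: annualize the standard deviation of daily log returns. The price series is invented for the example, and this is only one common convention for realized volatility, not a description of Augen's own method.

```python
import numpy as np

# Made-up daily closing prices, oldest first (illustration only).
closes = np.array([100.0, 101.2, 100.5, 102.3, 103.0, 101.8, 102.9, 104.1])

log_returns = np.diff(np.log(closes))          # daily log returns
daily_vol = log_returns.std(ddof=1)            # sample standard deviation of returns
annualized_vol = daily_vol * np.sqrt(252)      # ~252 trading days per year

print(f"Annualized historical volatility: {annualized_vol:.1%}")
```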

              Conclusion

              -

Day trading options can be a profitable way to take advantage of short-term price movements and volatility fluctuations in the market. However, it also requires a high level of skill, discipline and risk management. Jeff Augen's book Day Trading Options provides valuable insights and techniques that can help traders improve their performance and profitability. The book is available for download as a PDF file[^2^] or as an ebook[^1^]. Traders who want to learn more about these techniques can refer to the book itself for detailed examples and further discussion.

              -
              -
              \ No newline at end of file diff --git a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_internal/resolution/resolvelib/__init__.py b/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_internal/resolution/resolvelib/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/tomg-group-umd/pez-dispenser/README.md b/spaces/tomg-group-umd/pez-dispenser/README.md deleted file mode 100644 index 08391472ca5472f9df284911c58fb49c5f8b4bfd..0000000000000000000000000000000000000000 --- a/spaces/tomg-group-umd/pez-dispenser/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Pez Dispenser -emoji: ⚡ -colorFrom: purple -colorTo: blue -sdk: gradio -sdk_version: 3.17.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/tomofi/MMOCR/configs/ner/bert_softmax/bert_softmax_cluener_18e.py b/spaces/tomofi/MMOCR/configs/ner/bert_softmax/bert_softmax_cluener_18e.py deleted file mode 100644 index 5fd85d9a858236f4feb8903e3f4bf95f9eccaf94..0000000000000000000000000000000000000000 --- a/spaces/tomofi/MMOCR/configs/ner/bert_softmax/bert_softmax_cluener_18e.py +++ /dev/null @@ -1,70 +0,0 @@ -_base_ = [ - '../../_base_/schedules/schedule_adadelta_18e.py', - '../../_base_/default_runtime.py' -] - -categories = [ - 'address', 'book', 'company', 'game', 'government', 'movie', 'name', - 'organization', 'position', 'scene' -] - -test_ann_file = 'data/cluener2020/dev.json' -train_ann_file = 'data/cluener2020/train.json' -vocab_file = 'data/cluener2020/vocab.txt' - -max_len = 128 -loader = dict( - type='HardDiskLoader', - repeat=1, - parser=dict(type='LineJsonParser', keys=['text', 'label'])) - -ner_convertor = dict( - type='NerConvertor', - annotation_type='bio', - vocab_file=vocab_file, - categories=categories, - max_len=max_len) - -test_pipeline = [ - dict(type='NerTransform', label_convertor=ner_convertor, max_len=max_len), - dict(type='ToTensorNER') -] - -train_pipeline = [ - dict(type='NerTransform', label_convertor=ner_convertor, max_len=max_len), - dict(type='ToTensorNER') -] -dataset_type = 'NerDataset' - -train = dict( - type=dataset_type, - ann_file=train_ann_file, - loader=loader, - pipeline=train_pipeline, - test_mode=False) - -test = dict( - type=dataset_type, - ann_file=test_ann_file, - loader=loader, - pipeline=test_pipeline, - test_mode=True) -data = dict( - samples_per_gpu=8, workers_per_gpu=2, train=train, val=test, test=test) - -evaluation = dict(interval=1, metric='f1-score') - -model = dict( - type='NerClassifier', - encoder=dict( - type='BertEncoder', - max_position_embeddings=512, - init_cfg=dict( - type='Pretrained', - checkpoint='https://download.openmmlab.com/mmocr/ner/' - 'bert_softmax/bert_pretrain.pth')), - decoder=dict(type='FCDecoder'), - loss=dict(type='MaskedCrossEntropyLoss'), - label_convertor=ner_convertor) - -test_cfg = None diff --git a/spaces/tomofi/MMOCR/docs/zh_cn/datasets/kie.md b/spaces/tomofi/MMOCR/docs/zh_cn/datasets/kie.md deleted file mode 100644 index 6d189bc7daffde42e6815f8f10725c6065f89240..0000000000000000000000000000000000000000 --- a/spaces/tomofi/MMOCR/docs/zh_cn/datasets/kie.md +++ /dev/null @@ -1,34 +0,0 @@ -# 关键信息提取 - -## 概览 - -关键信息提取任务的数据集,文件目录应按如下配置: - -```text -└── wildreceipt - ├── class_list.txt - ├── dict.txt - ├── image_files - ├── test.txt - └── train.txt -``` - -## 准备步骤 - -### WildReceipt - -- 下载并解压 
[wildreceipt.tar](https://download.openmmlab.com/mmocr/data/wildreceipt.tar) - -### WildReceiptOpenset - -- 准备好 [WildReceipt](#WildReceipt)。 -- 转换 WildReceipt 成 OpenSet 格式: -```bash -# 你可以运行以下命令以获取更多可用参数: -# python tools/data/kie/closeset_to_openset.py -h -python tools/data/kie/closeset_to_openset.py data/wildreceipt/train.txt data/wildreceipt/openset_train.txt -python tools/data/kie/closeset_to_openset.py data/wildreceipt/test.txt data/wildreceipt/openset_test.txt -``` -:::{note} -[这篇教程](../tutorials/kie_closeset_openset.md)里讲述了更多 CloseSet 和 OpenSet 数据格式之间的区别。 -::: diff --git a/spaces/tomofi/MMOCR/tools/data/textdet/ctw1500_converter.py b/spaces/tomofi/MMOCR/tools/data/textdet/ctw1500_converter.py deleted file mode 100644 index 40dfbc1db6ee04d8599d25cd01a43ee07361def6..0000000000000000000000000000000000000000 --- a/spaces/tomofi/MMOCR/tools/data/textdet/ctw1500_converter.py +++ /dev/null @@ -1,231 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import argparse -import glob -import os.path as osp -import xml.etree.ElementTree as ET -from functools import partial - -import mmcv -import numpy as np -from shapely.geometry import Polygon - -from mmocr.utils import convert_annotations, list_from_file - - -def collect_files(img_dir, gt_dir, split): - """Collect all images and their corresponding groundtruth files. - - Args: - img_dir(str): The image directory - gt_dir(str): The groundtruth directory - split(str): The split of dataset. Namely: training or test - - Returns: - files(list): The list of tuples (img_file, groundtruth_file) - """ - assert isinstance(img_dir, str) - assert img_dir - assert isinstance(gt_dir, str) - assert gt_dir - - # note that we handle png and jpg only. Pls convert others such as gif to - # jpg or png offline - suffixes = ['.png', '.PNG', '.jpg', '.JPG', '.jpeg', '.JPEG'] - - imgs_list = [] - for suffix in suffixes: - imgs_list.extend(glob.glob(osp.join(img_dir, '*' + suffix))) - - files = [] - if split == 'training': - for img_file in imgs_list: - gt_file = gt_dir + '/' + osp.splitext( - osp.basename(img_file))[0] + '.xml' - files.append((img_file, gt_file)) - assert len(files), f'No images found in {img_dir}' - print(f'Loaded {len(files)} images from {img_dir}') - elif split == 'test': - for img_file in imgs_list: - gt_file = gt_dir + '/000' + osp.splitext( - osp.basename(img_file))[0] + '.txt' - files.append((img_file, gt_file)) - assert len(files), f'No images found in {img_dir}' - print(f'Loaded {len(files)} images from {img_dir}') - - return files - - -def collect_annotations(files, split, nproc=1): - """Collect the annotation information. - - Args: - files(list): The list of tuples (image_file, groundtruth_file) - split(str): The split of dataset. Namely: training or test - nproc(int): The number of process to collect annotations - - Returns: - images(list): The list of image information dicts - """ - assert isinstance(files, list) - assert isinstance(split, str) - assert isinstance(nproc, int) - - load_img_info_with_split = partial(load_img_info, split=split) - if nproc > 1: - images = mmcv.track_parallel_progress( - load_img_info_with_split, files, nproc=nproc) - else: - images = mmcv.track_progress(load_img_info_with_split, files) - - return images - - -def load_txt_info(gt_file, img_info): - anno_info = [] - for line in list_from_file(gt_file): - # each line has one ploygen (n vetices), and one text. 
- # e.g., 695,885,866,888,867,1146,696,1143,####Latin 9 - line = line.strip() - strs = line.split(',') - category_id = 1 - assert strs[28][0] == '#' - xy = [int(x) for x in strs[0:28]] - assert len(xy) == 28 - coordinates = np.array(xy).reshape(-1, 2) - polygon = Polygon(coordinates) - iscrowd = 0 - area = polygon.area - # convert to COCO style XYWH format - min_x, min_y, max_x, max_y = polygon.bounds - bbox = [min_x, min_y, max_x - min_x, max_y - min_y] - text = strs[28][4:] - - anno = dict( - iscrowd=iscrowd, - category_id=category_id, - bbox=bbox, - area=area, - text=text, - segmentation=[xy]) - anno_info.append(anno) - img_info.update(anno_info=anno_info) - return img_info - - -def load_xml_info(gt_file, img_info): - - obj = ET.parse(gt_file) - anno_info = [] - for image in obj.getroot(): # image - for box in image: # image - h = box.attrib['height'] - w = box.attrib['width'] - x = box.attrib['left'] - y = box.attrib['top'] - text = box[0].text - segs = box[1].text - pts = segs.strip().split(',') - pts = [int(x) for x in pts] - assert len(pts) == 28 - # pts = [] - # for iter in range(2,len(box)): - # pts.extend([int(box[iter].attrib['x']), - # int(box[iter].attrib['y'])]) - iscrowd = 0 - category_id = 1 - bbox = [int(x), int(y), int(w), int(h)] - - coordinates = np.array(pts).reshape(-1, 2) - polygon = Polygon(coordinates) - area = polygon.area - anno = dict( - iscrowd=iscrowd, - category_id=category_id, - bbox=bbox, - area=area, - text=text, - segmentation=[pts]) - anno_info.append(anno) - - img_info.update(anno_info=anno_info) - - return img_info - - -def load_img_info(files, split): - """Load the information of one image. - - Args: - files(tuple): The tuple of (img_file, groundtruth_file) - split(str): The split of dataset: training or test - - Returns: - img_info(dict): The dict of the img and annotation information - """ - assert isinstance(files, tuple) - assert isinstance(split, str) - - img_file, gt_file = files - # read imgs with ignoring orientations - img = mmcv.imread(img_file, 'unchanged') - - split_name = osp.basename(osp.dirname(img_file)) - img_info = dict( - # remove img_prefix for filename - file_name=osp.join(split_name, osp.basename(img_file)), - height=img.shape[0], - width=img.shape[1], - # anno_info=anno_info, - segm_file=osp.join(split_name, osp.basename(gt_file))) - - if split == 'training': - img_info = load_xml_info(gt_file, img_info) - elif split == 'test': - img_info = load_txt_info(gt_file, img_info) - else: - raise NotImplementedError - - return img_info - - -def parse_args(): - parser = argparse.ArgumentParser( - description='Convert ctw1500 annotations to COCO format') - parser.add_argument('root_path', help='ctw1500 root path') - parser.add_argument('-o', '--out-dir', help='output path') - parser.add_argument( - '--split-list', - nargs='+', - help='a list of splits. 
e.g., "--split-list training test"') - - parser.add_argument( - '--nproc', default=1, type=int, help='number of process') - args = parser.parse_args() - return args - - -def main(): - args = parse_args() - root_path = args.root_path - out_dir = args.out_dir if args.out_dir else root_path - mmcv.mkdir_or_exist(out_dir) - - img_dir = osp.join(root_path, 'imgs') - gt_dir = osp.join(root_path, 'annotations') - - set_name = {} - for split in args.split_list: - set_name.update({split: 'instances_' + split + '.json'}) - assert osp.exists(osp.join(img_dir, split)) - - for split, json_name in set_name.items(): - print(f'Converting {split} into {json_name}') - with mmcv.Timer(print_tmpl='It takes {}s to convert icdar annotation'): - files = collect_files( - osp.join(img_dir, split), osp.join(gt_dir, split), split) - image_infos = collect_annotations(files, split, nproc=args.nproc) - convert_annotations(image_infos, osp.join(out_dir, json_name)) - - -if __name__ == '__main__': - main() diff --git a/spaces/tomofi/NDLOCR/cli/procs/line_ocr.py b/spaces/tomofi/NDLOCR/cli/procs/line_ocr.py deleted file mode 100644 index 1797e68a516915698769606a774316dcf5436b3c..0000000000000000000000000000000000000000 --- a/spaces/tomofi/NDLOCR/cli/procs/line_ocr.py +++ /dev/null @@ -1,86 +0,0 @@ -# Copyright (c) 2022, National Diet Library, Japan -# -# This software is released under the CC BY 4.0. -# https://creativecommons.org/licenses/by/4.0/ - - -import copy -import numpy -import subprocess -import xml.etree.ElementTree as ET - -from .base_proc import BaseInferenceProcess - - -class LineOcrProcess(BaseInferenceProcess): - """ - 行文字認識推論を実行するプロセスのクラス。 - BaseInferenceProcessを継承しています。 - """ - def __init__(self, cfg, pid): - """ - Parameters - ---------- - cfg : dict - 本推論処理における設定情報です。 - pid : int - 実行される順序を表す数値。 - """ - super().__init__(cfg, pid, '_line_ocr') - process1 = subprocess.Popen(['cat', self.cfg['line_ocr']['char_list']], stdout=subprocess.PIPE) - process2 = subprocess.Popen(['tr', '-d', '\\n'], stdin=process1.stdout, stdout=subprocess.PIPE) - self.character = '〓' + process2.stdout.read().decode() - - from src.text_recognition.text_recognition import InferencerWithCLI - self._inferencer = InferencerWithCLI(self.cfg['line_ocr'], self.character) - self._run_src_inference = self._inferencer.inference_wich_cli - - def _is_valid_input(self, input_data): - """ - 本クラスの推論処理における入力データのバリデーション。 - - Parameters - ---------- - input_data : dict - 推論処理を実行する対象の入力データ。 - - Returns - ------- - [変数なし] : bool - 入力データが正しければTrue, そうでなければFalseを返します。 - """ - if type(input_data['img']) is not numpy.ndarray: - print('LineOcrProcess: input img is not numpy.ndarray') - return False - if type(input_data['xml']) is not ET.ElementTree: - print('LineOcrProcess: input xml is not ElementTree') - return False - return True - - def _run_process(self, input_data): - """ - 推論処理の本体部分。 - - Parameters - ---------- - input_data : dict - 推論処理を実行する対象の入力データ。 - - Returns - ------- - result : dict - 推論処理の結果を保持する辞書型データ。 - 基本的にinput_dataと同じ構造です。 - """ - result = [] - print('### Line OCR Process ###') - result_xml = self._run_src_inference(input_data['img'], input_data['xml'], - accept_empty=self.cfg['line_ocr']['accept_empty'], - yield_block_page_num=self.cfg['line_ocr']['yield_block_page_num'], - yield_block_pillar=self.cfg['line_ocr']['yield_block_pillar']) - - output_data = copy.deepcopy(input_data) - output_data['xml'] = result_xml - result.append(output_data) - - return result diff --git 
a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/carafe/faster_rcnn_r50_fpn_carafe_1x_coco.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/carafe/faster_rcnn_r50_fpn_carafe_1x_coco.py deleted file mode 100644 index dedac3f46b4710d16a8bc66f00663e379b2ebdc7..0000000000000000000000000000000000000000 --- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/carafe/faster_rcnn_r50_fpn_carafe_1x_coco.py +++ /dev/null @@ -1,50 +0,0 @@ -_base_ = '../faster_rcnn/faster_rcnn_r50_fpn_1x_coco.py' -model = dict( - neck=dict( - type='FPN_CARAFE', - in_channels=[256, 512, 1024, 2048], - out_channels=256, - num_outs=5, - start_level=0, - end_level=-1, - norm_cfg=None, - act_cfg=None, - order=('conv', 'norm', 'act'), - upsample_cfg=dict( - type='carafe', - up_kernel=5, - up_group=1, - encoder_kernel=3, - encoder_dilation=1, - compressed_channels=64))) -img_norm_cfg = dict( - mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True) -train_pipeline = [ - dict(type='LoadImageFromFile'), - dict(type='LoadAnnotations', with_bbox=True), - dict(type='Resize', img_scale=(1333, 800), keep_ratio=True), - dict(type='RandomFlip', flip_ratio=0.5), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=64), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']), -] -test_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='MultiScaleFlipAug', - img_scale=(1333, 800), - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=64), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']), - ]) -] -data = dict( - train=dict(pipeline=train_pipeline), - val=dict(pipeline=test_pipeline), - test=dict(pipeline=test_pipeline)) diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/mask_rcnn/mask_rcnn_x101_32x8d_fpn_mstrain-poly_3x_coco.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/mask_rcnn/mask_rcnn_x101_32x8d_fpn_mstrain-poly_3x_coco.py deleted file mode 100644 index 93b7d51912abaaab55ceac5263737d02cd4e99fa..0000000000000000000000000000000000000000 --- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/mask_rcnn/mask_rcnn_x101_32x8d_fpn_mstrain-poly_3x_coco.py +++ /dev/null @@ -1,61 +0,0 @@ -_base_ = './mask_rcnn_r101_fpn_1x_coco.py' -model = dict( - pretrained='open-mmlab://detectron2/resnext101_32x8d', - backbone=dict( - type='ResNeXt', - depth=101, - groups=32, - base_width=8, - num_stages=4, - out_indices=(0, 1, 2, 3), - frozen_stages=1, - norm_cfg=dict(type='BN', requires_grad=False), - style='pytorch')) - -dataset_type = 'CocoDataset' -data_root = 'data/coco/' -img_norm_cfg = dict( - mean=[103.530, 116.280, 123.675], - std=[57.375, 57.120, 58.395], - to_rgb=False) -train_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='LoadAnnotations', - with_bbox=True, - with_mask=True, - poly2mask=False), - dict( - type='Resize', - img_scale=[(1333, 640), (1333, 672), (1333, 704), (1333, 736), - (1333, 768), (1333, 800)], - multiscale_mode='value', - keep_ratio=True), - dict(type='RandomFlip', flip_ratio=0.5), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels', 'gt_masks']), -] -test_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='MultiScaleFlipAug', - img_scale=(1333, 800), - flip=False, - 
transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']), - ]) -] -data = dict( - train=dict(pipeline=train_pipeline), - val=dict(pipeline=test_pipeline), - test=dict(pipeline=test_pipeline)) - -lr_config = dict(step=[28, 34]) -runner = dict(type='EpochBasedRunner', max_epochs=36) diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/tools/model_converters/upgrade_model_version.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/tools/model_converters/upgrade_model_version.py deleted file mode 100644 index 232c8bc4cf010084b817c545ab4e2ef34fdd4549..0000000000000000000000000000000000000000 --- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/tools/model_converters/upgrade_model_version.py +++ /dev/null @@ -1,209 +0,0 @@ -import argparse -import re -import tempfile -from collections import OrderedDict - -import torch -from mmcv import Config - - -def is_head(key): - valid_head_list = [ - 'bbox_head', 'mask_head', 'semantic_head', 'grid_head', 'mask_iou_head' - ] - - return any(key.startswith(h) for h in valid_head_list) - - -def parse_config(config_strings): - temp_file = tempfile.NamedTemporaryFile() - config_path = f'{temp_file.name}.py' - with open(config_path, 'w') as f: - f.write(config_strings) - - config = Config.fromfile(config_path) - is_two_stage = True - is_ssd = False - is_retina = False - reg_cls_agnostic = False - if 'rpn_head' not in config.model: - is_two_stage = False - # check whether it is SSD - if config.model.bbox_head.type == 'SSDHead': - is_ssd = True - elif config.model.bbox_head.type == 'RetinaHead': - is_retina = True - elif isinstance(config.model['bbox_head'], list): - reg_cls_agnostic = True - elif 'reg_class_agnostic' in config.model.bbox_head: - reg_cls_agnostic = config.model.bbox_head \ - .reg_class_agnostic - temp_file.close() - return is_two_stage, is_ssd, is_retina, reg_cls_agnostic - - -def reorder_cls_channel(val, num_classes=81): - # bias - if val.dim() == 1: - new_val = torch.cat((val[1:], val[:1]), dim=0) - # weight - else: - out_channels, in_channels = val.shape[:2] - # conv_cls for softmax output - if out_channels != num_classes and out_channels % num_classes == 0: - new_val = val.reshape(-1, num_classes, in_channels, *val.shape[2:]) - new_val = torch.cat((new_val[:, 1:], new_val[:, :1]), dim=1) - new_val = new_val.reshape(val.size()) - # fc_cls - elif out_channels == num_classes: - new_val = torch.cat((val[1:], val[:1]), dim=0) - # agnostic | retina_cls | rpn_cls - else: - new_val = val - - return new_val - - -def truncate_cls_channel(val, num_classes=81): - - # bias - if val.dim() == 1: - if val.size(0) % num_classes == 0: - new_val = val[:num_classes - 1] - else: - new_val = val - # weight - else: - out_channels, in_channels = val.shape[:2] - # conv_logits - if out_channels % num_classes == 0: - new_val = val.reshape(num_classes, in_channels, *val.shape[2:])[1:] - new_val = new_val.reshape(-1, *val.shape[1:]) - # agnostic - else: - new_val = val - - return new_val - - -def truncate_reg_channel(val, num_classes=81): - # bias - if val.dim() == 1: - # fc_reg | rpn_reg - if val.size(0) % num_classes == 0: - new_val = val.reshape(num_classes, -1)[:num_classes - 1] - new_val = new_val.reshape(-1) - # agnostic - else: - new_val = val - # weight - else: - out_channels, in_channels = val.shape[:2] - # fc_reg | rpn_reg - if out_channels % num_classes == 0: - new_val = 
val.reshape(num_classes, -1, in_channels, - *val.shape[2:])[1:] - new_val = new_val.reshape(-1, *val.shape[1:]) - # agnostic - else: - new_val = val - - return new_val - - -def convert(in_file, out_file, num_classes): - """Convert keys in checkpoints. - - There can be some breaking changes during the development of mmdetection, - and this tool is used for upgrading checkpoints trained with old versions - to the latest one. - """ - checkpoint = torch.load(in_file) - in_state_dict = checkpoint.pop('state_dict') - out_state_dict = OrderedDict() - meta_info = checkpoint['meta'] - is_two_stage, is_ssd, is_retina, reg_cls_agnostic = parse_config( - '#' + meta_info['config']) - if meta_info['mmdet_version'] <= '0.5.3' and is_retina: - upgrade_retina = True - else: - upgrade_retina = False - - # MMDetection v2.5.0 unifies the class order in RPN - # if the model is trained in version=2.5.0 - if meta_info['mmdet_version'] < '2.5.0': - upgrade_rpn = True - else: - upgrade_rpn = False - - for key, val in in_state_dict.items(): - new_key = key - new_val = val - if is_two_stage and is_head(key): - new_key = 'roi_head.{}'.format(key) - - # classification - if upgrade_rpn: - m = re.search( - r'(conv_cls|retina_cls|rpn_cls|fc_cls|fcos_cls|' - r'fovea_cls).(weight|bias)', new_key) - else: - m = re.search( - r'(conv_cls|retina_cls|fc_cls|fcos_cls|' - r'fovea_cls).(weight|bias)', new_key) - if m is not None: - print(f'reorder cls channels of {new_key}') - new_val = reorder_cls_channel(val, num_classes) - - # regression - if upgrade_rpn: - m = re.search(r'(fc_reg).(weight|bias)', new_key) - else: - m = re.search(r'(fc_reg|rpn_reg).(weight|bias)', new_key) - if m is not None and not reg_cls_agnostic: - print(f'truncate regression channels of {new_key}') - new_val = truncate_reg_channel(val, num_classes) - - # mask head - m = re.search(r'(conv_logits).(weight|bias)', new_key) - if m is not None: - print(f'truncate mask prediction channels of {new_key}') - new_val = truncate_cls_channel(val, num_classes) - - m = re.search(r'(cls_convs|reg_convs).\d.(weight|bias)', key) - # Legacy issues in RetinaNet since V1.x - # Use ConvModule instead of nn.Conv2d in RetinaNet - # cls_convs.0.weight -> cls_convs.0.conv.weight - if m is not None and upgrade_retina: - param = m.groups()[1] - new_key = key.replace(param, f'conv.{param}') - out_state_dict[new_key] = val - print(f'rename the name of {key} to {new_key}') - continue - - m = re.search(r'(cls_convs).\d.(weight|bias)', key) - if m is not None and is_ssd: - print(f'reorder cls channels of {new_key}') - new_val = reorder_cls_channel(val, num_classes) - - out_state_dict[new_key] = new_val - checkpoint['state_dict'] = out_state_dict - torch.save(checkpoint, out_file) - - -def main(): - parser = argparse.ArgumentParser(description='Upgrade model version') - parser.add_argument('in_file', help='input checkpoint file') - parser.add_argument('out_file', help='output checkpoint file') - parser.add_argument( - '--num-classes', - type=int, - default=81, - help='number of classes of the original model') - args = parser.parse_args() - convert(args.in_file, args.out_file, args.num_classes) - - -if __name__ == '__main__': - main() diff --git a/spaces/usbethFlerru/sovits-modelsV2/example/Alcatech BPM Studio Pro 4.91 Serial Key Keygen.md b/spaces/usbethFlerru/sovits-modelsV2/example/Alcatech BPM Studio Pro 4.91 Serial Key Keygen.md deleted file mode 100644 index 4bab00c40215efd1e7ebe736a7d4bea869534d32..0000000000000000000000000000000000000000 --- 
a/spaces/usbethFlerru/sovits-modelsV2/example/Alcatech BPM Studio Pro 4.91 Serial Key Keygen.md +++ /dev/null @@ -1,28 +0,0 @@ - -

              How to Download Alcatech BPM Studio Pro 4.91 Serial Key Keygen for Free

              -

              Alcatech BPM Studio Pro 4.91 is professional software for mixing and editing audio files. It allows you to create your own music tracks, remixes, podcasts, radio shows, and more. With Alcatech BPM Studio Pro 4.91, you can also manage your music library, record your voice, apply effects, and burn CDs or DVDs.

              -

              Alcatech BPM Studio Pro 4.91 Serial Key keygen


              Download File –––––>>> https://urlcod.com/2uyW33



              -

              However, Alcatech BPM Studio Pro 4.91 is not free software. You need to purchase a license key to activate it and enjoy its full features. But what if you don't want to spend money on it? Is there a way to get an Alcatech BPM Studio Pro 4.91 serial key keygen for free?

              -

              The answer is yes. In this article, we will show you how to download Alcatech BPM Studio Pro 4.91 serial key keygen for free from the internet. We will also provide you with some tips and warnings to avoid scams and viruses.

              -

              What is Alcatech BPM Studio Pro 4.91 Serial Key Keygen?

              -

              A serial key is a unique code that identifies a software product and verifies its authenticity. A keygen is a program that generates serial keys for various software products. A serial key keygen is a combination of both: a program that generates serial keys for a specific software product.

              -

              The Alcatech BPM Studio Pro 4.91 serial key keygen is a program that generates serial keys for the Alcatech BPM Studio Pro 4.91 software. By using this program, you can get a valid serial key for Alcatech BPM Studio Pro 4.91 without paying anything.

              -

              How to Download Alcatech BPM Studio Pro 4.91 Serial Key Keygen for Free?

              -

              There are many websites that claim to offer Alcatech BPM Studio Pro 4.91 serial key keygen for free download. However, not all of them are trustworthy or safe. Some of them may contain malware, spyware, adware, or other harmful programs that can damage your computer or steal your personal information.

              -

              To avoid such risks, you need to be careful and selective when choosing a website to download Alcatech BPM Studio Pro 4.91 serial key keygen for free. Here are some tips and warnings to help you:

              -

              -
                -
              • Do not download Alcatech BPM Studio Pro 4.91 serial key keygen from unknown or suspicious websites. They may contain viruses or other malicious programs that can harm your computer or compromise your security.
              • -
              • Do not download Alcatech BPM Studio Pro 4.91 serial key keygen from websites that require you to complete surveys, offers, or tasks before downloading. They may be scams that try to trick you into giving away your personal information or money.
              • -
              • Do not download Alcatech BPM Studio Pro 4.91 serial key keygen from websites that ask you to enter your email address or phone number before downloading. They may spam you with unwanted messages or calls.
              • -
              • Do not download Alcatech BPM Studio Pro 4.91 serial key keygen from websites that have too many pop-ups, ads, or redirects. They may be annoying or misleading.
              • -
              • Do not download Alcatech BPM Studio Pro 4.91 serial key keygen from websites that have poor ratings, reviews, or feedback from other users. They may be unreliable or fraudulent.
              • -
              • Do not download Alcatech BPM Studio Pro 4.91 serial key keygen from websites that do not provide any information about the program, such as its size, version, source, or compatibility. They may be fake or outdated.
              • -
              -

              Instead, you should download Alcatech BPM Studio Pro 4.91 serial key keygen from reputable and trusted websites that have the following features:

              -
                -
              • They provide clear and accurate information about the program, such as its size, version, source, and compatibility.
              • -
              • They have good ratings, reviews, and feedback from other users who have downloaded the program successfully.
              • -
              • They have secure and fast download links that do not

                d5da3c52bf
                -
                -
                \ No newline at end of file diff --git a/spaces/usbethFlerru/sovits-modelsV2/example/Antares Mic Mod EFX (MAC PC) -CORE.md b/spaces/usbethFlerru/sovits-modelsV2/example/Antares Mic Mod EFX (MAC PC) -CORE.md deleted file mode 100644 index a2b56e93cd15ab98f7122434150f241d3fb38944..0000000000000000000000000000000000000000 --- a/spaces/usbethFlerru/sovits-modelsV2/example/Antares Mic Mod EFX (MAC PC) -CORE.md +++ /dev/null @@ -1,133 +0,0 @@ - -

                Antares Mic Mod EFX (MAC PC) -CORE: how to transform the sound of your microphone with a plugin

                - -

                Would you like to have a collection of more than 100 legendary microphones to use in your recordings, mixes or live performances? Would you like to be able to swap the sound of your current microphone for that of a more expensive or exclusive one? Would you like to be able to control the specific options of each microphone, such as the low cut filter, the pickup pattern or the tube saturation? If the answer is yes, then you will want to know about Antares Mic Mod EFX (MAC PC) -CORE, a plugin that lets you do all that and more with just a few clicks.

                -

                Antares Mic Mod EFX (MAC PC) -CORE


                Download Zip 🗹 https://urlcod.com/2uyWfk



                - -

                What is Antares Mic Mod EFX (MAC PC) -CORE?

                - -

                Antares Mic Mod EFX (MAC PC) -CORE is a plugin that lets you model the sound of your microphone on that of a different one. It is a microphone modeling tool that uses Antares' patented Spectral Shaping Tool technology to reproduce the sonic characteristics of each microphone. With this plugin, you can expand your microphone collection with models of vintage microphones from Neumann, AKG and others, as well as a wide selection of modern and boutique microphones.

                - -

                Antares Mic Mod EFX (MAC PC) -CORE is very easy to use. You just have to select the microphone you are using (or the one you used during your original recording) and the microphone you want it to sound like. The plugin handles the sound transformation and lets you adjust the specific options of each microphone. For example, you can turn the low cut filter on or off, change the pickup pattern, or add tube saturation. Each option has the same sonic effect that it would have with the real microphone.

                - -

                Antares Mic Mod EFX (MAC PC) -CORE is a plugin that you can use both in the studio and live, to get the sound of the microphones you always wanted to have. It is also a very useful tool for broadcast and podcasting applications. Antares Mic Mod EFX (MAC PC) -CORE is available as a plugin for RTAS (Mac and PC), VST (Mac and PC) and Audio Units.

                - -

                What are the advantages of using Antares Mic Mod EFX (MAC PC) -CORE?

                - -

                Using Antares Mic Mod EFX (MAC PC) -CORE has several advantages, among which are:

                - -
                  -
                • You don't have to spend money on buying expensive or hard-to-get microphones, as the plugin offers you a great variety of precise digital models.
                • -
                • You can improve the sound of your recordings or mixes, using the most suitable microphone model for each source or style.
                • -
                • You can experiment with different sounds and options, creating unique and interesting combinations.
                • -
                • You can use the plugin live, bringing the sound of your favorite microphones to the stage without risking damaging or losing them.
                • -
                - -

                Using Antares Mic Mod EFX (MAC PC) -CORE is a very practical and creative way to make the most of your microphones and achieve professional results.

                - -

                What models of microphones does Antares Mic Mod EFX (MAC PC) -CORE include?

                - -

                Antares Mic Mod EFX (MAC PC) -CORE includes precise digital models of more than 100 legendary microphones. These are some examples:

                -

                - -
                  -
                • Akg C12A
                • -
                • Akg C414
                • -
                • Akg C414B/ULS Limited Edition Gold
                • -
                • Akg C414B/ULS Modified by Audio Upgrades
                • -
                • Akg The Tube
                • -
                • Audix D4
                • -
                • B&K 4007
                • -
                • Beyerdynamic M500
                • -
                • Groove Tubes MD1b-FET
                • -
                • Groove Tubes VELO-8
                • -
                • Mojave Audio MA-200
                • -
                • Neumann KM84
                • -
                • Neumann U47
                • -
                • Neumann U67
                • -
                • Neumann U87
                • -
                • Rode Classic II
                • -
                • Rode NT1-A
                • -
                • Rode NT2-A
                • -
                • Rode NT1000
                • -
                • Royer R-121
                • -
                • Sennheiser MD421-II
                • -
                • Sennheiser MKH40
                • -
                • Sony C-800G
                • -
                • Townsend Labs Sphere L22 Precision Microphone Modeling System
                • -
                - -

                These are just a few examples; you can check the complete list on Antares' official website: https://www.antarestech.com/product/mic-mod-efx/.

                - -

                Conclusion

                - -

                Antares Mic Mod EFX (MAC PC) -CORE is a plugin that lets you model the sound of your microphone on that of a different one. With this plugin, you can access a great variety of precise digital models of more than 100 legendary microphones. You just have to select the microphone you are using and the one you want it to sound like, and adjust the specific options of each one. The plugin handles the sound transformation and gives you a professional result. Antares Mic Mod EFX (MAC PC) -CORE is a plugin that you can use both in the studio and live, to get the sound you always wanted to have. If you are interested in this plugin, you can download it from Antares' official website: https://www.antarestech.com/product/mic-mod-efx/.

                -

                How to use Antares Mic Mod EFX (MAC PC) -CORE?

                - -

                Using Antares Mic Mod EFX (MAC PC) -CORE is very simple and intuitive. You just have to follow these steps:

                - -
                  -
                1. Install the plugin on your computer and activate it with your license code.
                2. -
                3. Open your DAW and insert the plugin on the track where you have your microphone signal or your recorded audio.
                4. -
                5. Select the source microphone from the list of available models. If your microphone is not on the list, you can select a similar one or use the generic model.
                6. -
                7. Select the modeled microphone from the list of available models. You can browse by categories or use the search function.
                8. -
                9. Adjust the proximity effect, the low cut filter, the pickup pattern and the tube saturation according to your preferences and needs.
                10. -
                11. Compare the original and modeled sound by using the bypass button or the output level control.
                12. -
                - -

                You can also use presets to quickly access different combinations of source and modeled microphones. You can save your own presets or use the ones included with the plugin.

                - -

                Who can benefit from Antares Mic Mod EFX (MAC PC) -CORE?

                - -

                Antares Mic Mod EFX (MAC PC) -CORE is a plugin that can benefit anyone who works with microphones, whether in the studio or in live situations. Some examples are:

                - -
                  -
                • Singers and vocalists who want to get the sound of their favorite microphones without having to buy them or rent them.
                • -
                • Musicians and producers who want to enhance their recordings or mixes with different microphone sounds and options.
                • -
                • Engineers and sound technicians who want to have a versatile and flexible tool for microphone modeling and processing.
                • -
                • Podcasters and broadcasters who want to improve the quality and clarity of their voice with different microphone models.
                • -
                • Live performers who want to bring the sound of their studio microphones to the stage without risking damaging or losing them.
                • -
                - -

                Antares Mic Mod EFX (MAC PC) -CORE is a plugin that can help you achieve professional results with any microphone you have or use.

                -

                How to optimize the article for SEO?

                - -

                SEO stands for Search Engine Optimization, which is the process of improving the visibility and ranking of a website or a web page in the search engines. SEO is important for attracting more traffic and potential customers to your website or blog. There are many factors that affect SEO, such as keywords, content, links, structure, speed, and more.

                - -

                One of the most important factors for SEO is the content of your article. You want to write content that is relevant, engaging, informative, and original. You also want to use keywords that match the query of your target audience and that are related to your topic. Keywords are the words or phrases that people type in the search engines to find what they are looking for. You can use tools like Google Keyword Planner or Moz Keyword Explorer to find out what keywords are popular and relevant for your topic.

                - -

                When you write your article, you want to use your keywords strategically and naturally throughout your content. You don't want to overuse or spam your keywords, as this can have a negative effect on your SEO and your readers. You want to use your keywords in the following places:

                - -
                  -
                • The title of your article: The title is the first thing that people see when they search for your topic. It should be catchy, clear, and include your main keyword.
                • -
                • The headers and subheaders of your article: The headers and subheaders help to organize your content and make it easier to read and scan. They should also include your keywords or variations of them.
                • -
                • The introduction and conclusion of your article: The introduction and conclusion are the parts that summarize your main points and capture the attention and interest of your readers. They should also include your keywords or synonyms of them.
                • -
                • The body of your article: The body is the main part of your article where you provide the information, facts, examples, and arguments that support your topic. You should use your keywords or related terms throughout your paragraphs, but not too often or too close together.
                • -
                • The meta description of your article: The meta description is a short summary of your article that appears below the title in the search results. It should be concise, compelling, and include your main keyword.
                • -
                • The URL of your article: The URL is the address of your web page that appears in the browser bar. It should be descriptive, easy to read, and include your main keyword.
                • -
                - -

                By using keywords in these places, you can optimize your article for SEO and make it more likely to rank higher in the search engines.
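
                As a rough illustration of the keyword-placement advice above, here is a minimal Python sketch that checks whether a given keyword shows up in the title, meta description, URL, headers, and body of a page. The page dictionary and its field names are hypothetical examples made up for this article, not part of any real SEO tool or API.

                def keyword_placement_report(page, keyword):
                    # Report where the keyword appears among the on-page locations listed above.
                    kw = keyword.lower()
                    return {
                        "title": kw in page.get("title", "").lower(),
                        "meta_description": kw in page.get("meta_description", "").lower(),
                        # URLs usually replace spaces with hyphens, so check the slugified form.
                        "url": kw.replace(" ", "-") in page.get("url", "").lower(),
                        "headers": any(kw in h.lower() for h in page.get("headers", [])),
                        # For the body, count occurrences instead of giving a yes/no answer.
                        "body_occurrences": page.get("body", "").lower().count(kw),
                    }

                page = {
                    "title": "Antares Mic Mod EFX Review: Model 100+ Legendary Microphones",
                    "meta_description": "What Antares Mic Mod EFX is and how to use it in the studio and live.",
                    "url": "https://example.com/antares-mic-mod-efx-review",
                    "headers": ["What is Antares Mic Mod EFX?", "How to use Antares Mic Mod EFX"],
                    "body": "Antares Mic Mod EFX is a plugin that models the sound of one microphone on another...",
                }
                print(keyword_placement_report(page, "Antares Mic Mod EFX"))

                Running the sketch prints a small report, which makes it easy to see at a glance which of the places listed above still lack the keyword.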

                - -

                What are some common mistakes to avoid when writing an article?

                - -

                Writing an article is not an easy task. It requires research, planning, writing, editing, and proofreading. Along the way, you may encounter some common mistakes that can affect the quality and effectiveness of your article. Here are some of them and how to avoid them:

                - -
                  -
                • Not knowing your audience: Before you write your article, you should know who you are writing for and what they are looking for. You should tailor your tone, style, language, and content to suit their needs and preferences.
                • -
                • Not having a clear purpose: Before you write your article, you should know what you want to achieve with it and what message you want to convey. You should have a clear thesis statement that summarizes your main point and guides your writing.
                • -
                • Not doing enough research: Before you write your article, you should do enough research on your topic and gather reliable sources of information. You should cite your sources properly and avoid plagiarism.
                • -
                • Not having a clear structure: Before you write your article, you should have a clear outline that organizes your ideas and arguments into a logical flow. You should have an introduction, a body, and a conclusion that follow a coherent structure.
                • -
                • Not using transitions: When you write your article, you should use transitions to connect your sentences and paragraphs and create a smooth flow of ideas. Transitions are words or phrases that show the relationship between different parts of your text.
                • -
                • Not using headings: When you write your article, you should use headings to divide your content into sections and sub-sections that make it easier to read and scan. Headings also help to highlight the main points of each section.
                • -
                • Not proofreading and editing: After you write your article, you should proofread and edit it carefully to check for spelling, grammar, punctuation, style, clarity, accuracy, and consistency errors. You can use tools like Grammarly or Hemingway Editor to help you with this task.
                • -
                - -

                By avoiding these common mistakes, you can improve the quality and effectiveness of your article.

                -

                Conclusion

                - -

                Antares Mic Mod EFX (MAC PC) -CORE is a plugin that lets you model the sound of your microphone on that of a different one. With this plugin, you can access a great variety of precise digital models of more than 100 legendary microphones. You just have to select the microphone you are using and the one you want it to sound like, and adjust the specific options of each one. The plugin handles the sound transformation and gives you a professional result. Antares Mic Mod EFX (MAC PC) -CORE is a plugin that you can use both in the studio and live, to get the sound you always wanted to have. If you are interested in this plugin, you can download it from Antares' official website: https://www.antarestech.com/product/mic-mod-efx/.

                - -

                In this article, we have explained what Antares Mic Mod EFX (MAC PC) -CORE is, what its advantages are, which microphone models it includes, how to use it, how to optimize an article like this one for SEO, and which common mistakes to avoid when writing an article. We hope that this article has been useful and informative for you and that you have learned something new about this amazing plugin. If you have any questions or comments, please feel free to leave them below. Thank you for reading!

                3cee63e6c2
                -
                -
                \ No newline at end of file diff --git a/spaces/usbethFlerru/sovits-modelsV2/example/Deep Freeze Standard Edition 7.71.020.4499 Final Full Version How to Freeze Your System Settings and Data.md b/spaces/usbethFlerru/sovits-modelsV2/example/Deep Freeze Standard Edition 7.71.020.4499 Final Full Version How to Freeze Your System Settings and Data.md deleted file mode 100644 index ac38849003bfde4cafb6d661a20bc31c0f2eab2d..0000000000000000000000000000000000000000 --- a/spaces/usbethFlerru/sovits-modelsV2/example/Deep Freeze Standard Edition 7.71.020.4499 Final Full Version How to Freeze Your System Settings and Data.md +++ /dev/null @@ -1,6 +0,0 @@ -

                Deep Freeze Standard Edition 7.71.020.4499 Final Full Version


                Download Zip: https://urlcod.com/2uyWzw



                - - aaccfb2cb3
                -
                -
                -

                diff --git a/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/docs/build_reference.py b/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/docs/build_reference.py deleted file mode 100644 index 65dcc6c0d623978b426a192022478bfccac6c7aa..0000000000000000000000000000000000000000 --- a/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/docs/build_reference.py +++ /dev/null @@ -1,117 +0,0 @@ -# Ultralytics YOLO 🚀, AGPL-3.0 license -""" -Helper file to build Ultralytics Docs reference section. Recursively walks through ultralytics dir and builds an MkDocs -reference section of *.md files composed of classes and functions, and also creates a nav menu for use in mkdocs.yaml. - -Note: Must be run from repository root directory. Do not run from docs directory. -""" - -import os -import re -from collections import defaultdict -from pathlib import Path -from ultralytics.yolo.utils import ROOT - -NEW_YAML_DIR = ROOT.parent -CODE_DIR = ROOT -REFERENCE_DIR = ROOT.parent / 'docs/reference' - - -def extract_classes_and_functions(filepath): - with open(filepath, 'r') as file: - content = file.read() - - class_pattern = r"(?:^|\n)class\s(\w+)(?:\(|:)" - func_pattern = r"(?:^|\n)def\s(\w+)\(" - - classes = re.findall(class_pattern, content) - functions = re.findall(func_pattern, content) - - return classes, functions - - -def create_markdown(py_filepath, module_path, classes, functions): - md_filepath = py_filepath.with_suffix('.md') - - # Read existing content and keep header content between first two --- - header_content = "" - if md_filepath.exists(): - with open(md_filepath, 'r') as file: - existing_content = file.read() - header_parts = existing_content.split('---', 2) - if len(header_parts) >= 3: - header_content = f"{header_parts[0]}---{header_parts[1]}---\n\n" - - module_path = module_path.replace('.__init__', '') - md_content = [f"## {class_name}\n---\n### ::: {module_path}.{class_name}\n

                \n" for class_name in classes] - md_content.extend(f"## {func_name}\n---\n### ::: {module_path}.{func_name}\n

                \n" for func_name in functions) - md_content = header_content + "\n".join(md_content) - - os.makedirs(os.path.dirname(md_filepath), exist_ok=True) - with open(md_filepath, 'w') as file: - file.write(md_content) - - return md_filepath.relative_to(NEW_YAML_DIR) - - -def nested_dict(): - return defaultdict(nested_dict) - - -def sort_nested_dict(d): - return { - key: sort_nested_dict(value) if isinstance(value, dict) else value - for key, value in sorted(d.items()) - } - - -def create_nav_menu_yaml(nav_items): - nav_tree = nested_dict() - - for item_str in nav_items: - item = Path(item_str) - parts = item.parts - current_level = nav_tree['reference'] - for part in parts[2:-1]: # skip the first two parts (docs and reference) and the last part (filename) - current_level = current_level[part] - - md_file_name = parts[-1].replace('.md', '') - current_level[md_file_name] = item - - nav_tree_sorted = sort_nested_dict(nav_tree) - - def _dict_to_yaml(d, level=0): - yaml_str = "" - indent = " " * level - for k, v in d.items(): - if isinstance(v, dict): - yaml_str += f"{indent}- {k}:\n{_dict_to_yaml(v, level + 1)}" - else: - yaml_str += f"{indent}- {k}: {str(v).replace('docs/', '')}\n" - return yaml_str - - with open(NEW_YAML_DIR / 'nav_menu_updated.yml', 'w') as file: - yaml_str = _dict_to_yaml(nav_tree_sorted) - file.write(yaml_str) - - -def main(): - nav_items = [] - for root, _, files in os.walk(CODE_DIR): - for file in files: - if file.endswith(".py"): - py_filepath = Path(root) / file - classes, functions = extract_classes_and_functions(py_filepath) - - if classes or functions: - py_filepath_rel = py_filepath.relative_to(CODE_DIR) - md_filepath = REFERENCE_DIR / py_filepath_rel - module_path = f"ultralytics.{py_filepath_rel.with_suffix('').as_posix().replace('/', '.')}" - md_rel_filepath = create_markdown(md_filepath, module_path, classes, functions) - nav_items.append(str(md_rel_filepath)) - - create_nav_menu_yaml(nav_items) - - -if __name__ == "__main__": - main() diff --git a/spaces/verkaDerkaDerk/face-mesh-workflow/utils.py b/spaces/verkaDerkaDerk/face-mesh-workflow/utils.py deleted file mode 100644 index 1eb4a4ad1eb59347f5ebc5478c58542986ba77e7..0000000000000000000000000000000000000000 --- a/spaces/verkaDerkaDerk/face-mesh-workflow/utils.py +++ /dev/null @@ -1,128 +0,0 @@ -# from https://huggingface.co/spaces/shariqfarooq/ZoeDepth/raw/main/utils.py - -# MIT License - -# Copyright (c) 2022 Intelligent Systems Lab Org - -# Permission is hereby granted, free of charge, to any person obtaining a copy -# of this software and associated documentation files (the "Software"), to deal -# in the Software without restriction, including without limitation the rights -# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell -# copies of the Software, and to permit persons to whom the Software is -# furnished to do so, subject to the following conditions: - -# The above copyright notice and this permission notice shall be included in all -# copies or substantial portions of the Software. - -# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR -# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, -# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE -# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER -# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE -# SOFTWARE. 
- -# File author: Shariq Farooq Bhat - -import matplotlib -import matplotlib.cm -import numpy as np -import torch - -def colorize(value, vmin=None, vmax=None, cmap='magma_r', invalid_val=-99, invalid_mask=None, background_color=(128, 128, 128, 255), gamma_corrected=False, value_transform=None): - """Converts a depth map to a color image. - - Args: - value (torch.Tensor, numpy.ndarry): Input depth map. Shape: (H, W) or (1, H, W) or (1, 1, H, W). All singular dimensions are squeezed - vmin (float, optional): vmin-valued entries are mapped to start color of cmap. If None, value.min() is used. Defaults to None. - vmax (float, optional): vmax-valued entries are mapped to end color of cmap. If None, value.max() is used. Defaults to None. - cmap (str, optional): matplotlib colormap to use. Defaults to 'magma_r'. - invalid_val (int, optional): Specifies value of invalid pixels that should be colored as 'background_color'. Defaults to -99. - invalid_mask (numpy.ndarray, optional): Boolean mask for invalid regions. Defaults to None. - background_color (tuple[int], optional): 4-tuple RGB color to give to invalid pixels. Defaults to (128, 128, 128, 255). - gamma_corrected (bool, optional): Apply gamma correction to colored image. Defaults to False. - value_transform (Callable, optional): Apply transform function to valid pixels before coloring. Defaults to None. - - Returns: - numpy.ndarray, dtype - uint8: Colored depth map. Shape: (H, W, 4) - """ - if isinstance(value, torch.Tensor): - value = value.detach().cpu().numpy() - - value = value.squeeze() - if invalid_mask is None: - invalid_mask = value == invalid_val - mask = np.logical_not(invalid_mask) - - # normalize - vmin = np.percentile(value[mask],2) if vmin is None else vmin - vmax = np.percentile(value[mask],85) if vmax is None else vmax - if vmin != vmax: - value = (value - vmin) / (vmax - vmin) # vmin..vmax - else: - # Avoid 0-division - value = value * 0. - - # squeeze last dim if it exists - # grey out the invalid values - - value[invalid_mask] = np.nan - cmapper = matplotlib.cm.get_cmap(cmap) - if value_transform: - value = value_transform(value) - # value = value / value.max() - value = cmapper(value, bytes=True) # (nxmx4) - - # img = value[:, :, :] - img = value[...] - img[invalid_mask] = background_color - - # return img.transpose((2, 0, 1)) - if gamma_corrected: - # gamma correction - img = img / 255 - img = np.power(img, 2.2) - img = img * 255 - img = img.astype(np.uint8) - return img - - -import os - -# bard... -def find_most_recently_created_directory(temp_dir): - """Finds the most recently created directory in a directory. - - Args: - temp_dir: The directory to search. - - Returns: - The path to the most recently created directory. 
- """ - - directories = os.listdir(temp_dir) - most_recently_created_directory = None - for directory in directories: - path = os.path.join(temp_dir, directory) - st = os.stat(path) - if most_recently_created_directory is None or st.mtime > most_recently_created_directory.mtime: - most_recently_created_directory = path - - if most_recently_created_directory is None: - most_recently_created_directory = temp_dir - - return most_recently_created_directory - - -#chatgpt -def get_most_recent_subdirectory(path): - if not os.path.isdir(path): - return path - - subdirectories = [f for f in os.listdir(path) if os.path.isdir(os.path.join(path, f))] - if not subdirectories: - return path - - most_recent_subdirectory = max(subdirectories, key=lambda d: os.path.getctime(os.path.join(path, d))) - return os.path.join(path, most_recent_subdirectory) - diff --git a/spaces/vjain/SemanticPlaigarismChekcer/app.py b/spaces/vjain/SemanticPlaigarismChekcer/app.py deleted file mode 100644 index db7ad4fb5033c828fa2a64194a3fedb2ebb25296..0000000000000000000000000000000000000000 --- a/spaces/vjain/SemanticPlaigarismChekcer/app.py +++ /dev/null @@ -1,37 +0,0 @@ -import gradio as gr -import openai -import pandas as pd -import numpy as np -import os -openai.api_key="openai_apiKey" -from openai.embeddings_utils import get_embedding -from openai.embeddings_utils import cosine_similarity - -def similarity(input): - df= pd.read_csv("meg_embeddings.csv") - df['embedding'] = df['embedding'].apply(eval).apply(np.array) - input = input - input_vector = get_embedding(input, engine="text-embedding-ada-002") - df["similarities"] = df['embedding'].apply(lambda x: cosine_similarity(x, input_vector)) - sorted_df =df.sort_values("similarities", ascending=False) - top_row = sorted_df.loc[0] - return sorted_df.iloc[0][["text", "similarities"]] - -input_text = gr.inputs.Textbox(label="Enter your text here") -text_output = gr.outputs.Textbox(label="Most similar text") -similarity_output = gr.outputs.Textbox(label="Similarity score") - -ui = gr.Interface(fn=similarity, - inputs=input_text, - outputs=[text_output, similarity_output], - title="Semantic Plagiarism Checker", - description="Check if your text is semantically similar to pre-existing texts to prevent plagiarism.", - theme="compact", - layout="vertical", - inputs_layout="stacked", - outputs_layout="stacked", - allow_flagging=False) - - - -ui.launch() \ No newline at end of file diff --git a/spaces/vrajeshbhatt/Automated-Ticket-Management-System/static/css/df_style.css b/spaces/vrajeshbhatt/Automated-Ticket-Management-System/static/css/df_style.css deleted file mode 100644 index 6e81d4454866a9510b54b6a5c27f612b6009148f..0000000000000000000000000000000000000000 --- a/spaces/vrajeshbhatt/Automated-Ticket-Management-System/static/css/df_style.css +++ /dev/null @@ -1,24 +0,0 @@ -.mystyle { - font-size: 11pt; - font-family: Arial; - border-collapse: collapse; - border: 1px solid silver; -} -.mystyle thead th{ - position: sticky; - top: -1; - background: #866EC7; - color:white; -} -.mystyle td, th { - padding: 5px; -} - -.mystyle tr:nth-child(even) { - background: #e0e0e0; -} - -.mystyle tr:hover { - background: silver; - cursor: pointer; -} \ No newline at end of file diff --git a/spaces/vslasor/VLS7-ClinicalTerminologyUIUX-GR/files/Readme.md b/spaces/vslasor/VLS7-ClinicalTerminologyUIUX-GR/files/Readme.md deleted file mode 100644 index 9d494f6d6336624e46e1ca6eb75996bf156099d8..0000000000000000000000000000000000000000 --- a/spaces/vslasor/VLS7-ClinicalTerminologyUIUX-GR/files/Readme.md 
+++ /dev/null @@ -1 +0,0 @@ -Files Directory - drop in examples here to ref by app.py \ No newline at end of file diff --git a/spaces/vumichien/canvas_controlnet/annotator/uniformer/configs/_base_/schedules/schedule_80k.py b/spaces/vumichien/canvas_controlnet/annotator/uniformer/configs/_base_/schedules/schedule_80k.py deleted file mode 100644 index c190cee6bdc7922b688ea75dc8f152fa15c24617..0000000000000000000000000000000000000000 --- a/spaces/vumichien/canvas_controlnet/annotator/uniformer/configs/_base_/schedules/schedule_80k.py +++ /dev/null @@ -1,9 +0,0 @@ -# optimizer -optimizer = dict(type='SGD', lr=0.01, momentum=0.9, weight_decay=0.0005) -optimizer_config = dict() -# learning policy -lr_config = dict(policy='poly', power=0.9, min_lr=1e-4, by_epoch=False) -# runtime settings -runner = dict(type='IterBasedRunner', max_iters=80000) -checkpoint_config = dict(by_epoch=False, interval=8000) -evaluation = dict(interval=8000, metric='mIoU') diff --git a/spaces/wangguanlin/vits_Kazari/shanghainese.py b/spaces/wangguanlin/vits_Kazari/shanghainese.py deleted file mode 100644 index cdff2c5056e2787f8c92da5c369636e0abbc5918..0000000000000000000000000000000000000000 --- a/spaces/wangguanlin/vits_Kazari/shanghainese.py +++ /dev/null @@ -1,64 +0,0 @@ -import os, sys, re -import cn2an -import opencc - - -converter = opencc.OpenCC('zaonhe') - -# List of (Latin alphabet, ipa) pairs: -_latin_to_ipa = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('A', 'ᴇ'), - ('B', 'bi'), - ('C', 'si'), - ('D', 'di'), - ('E', 'i'), - ('F', 'ᴇf'), - ('G', 'dʑi'), - ('H', 'ᴇtɕʰ'), - ('I', 'ᴀi'), - ('J', 'dʑᴇ'), - ('K', 'kʰᴇ'), - ('L', 'ᴇl'), - ('M', 'ᴇm'), - ('N', 'ᴇn'), - ('O', 'o'), - ('P', 'pʰi'), - ('Q', 'kʰiu'), - ('R', 'ᴀl'), - ('S', 'ᴇs'), - ('T', 'tʰi'), - ('U', 'ɦiu'), - ('V', 'vi'), - ('W', 'dᴀbɤliu'), - ('X', 'ᴇks'), - ('Y', 'uᴀi'), - ('Z', 'zᴇ') -]] - - -def _number_to_shanghainese(num): - num = cn2an.an2cn(num).replace('一十','十').replace('二十', '廿').replace('二', '两') - return re.sub(r'(?:(?:^|[^三四五六七八九])十|廿)两', lambda x: x.group()[:-1]+'二', num) - - -def number_to_shanghainese(text): - return re.sub(r'\d+(?:\.?\d+)?', lambda x: _number_to_shanghainese(x.group()), text) - - -def latin_to_ipa(text): - for regex, replacement in _latin_to_ipa: - text = re.sub(regex, replacement, text) - return text - - -def shanghainese_to_ipa(text): - text = number_to_shanghainese(text.upper()) - text = converter.convert(text).replace('-','').replace('$',' ') - text = re.sub(r'[A-Z]', lambda x: latin_to_ipa(x.group())+' ', text) - text = re.sub(r'[、;:]', ',', text) - text = re.sub(r'\s*,\s*', ', ', text) - text = re.sub(r'\s*。\s*', '. ', text) - text = re.sub(r'\s*?\s*', '? ', text) - text = re.sub(r'\s*!\s*', '! 
', text) - text = re.sub(r'\s*$', '', text) - return text diff --git a/spaces/weide/ChuanhuChatGPT2/assets/custom.css b/spaces/weide/ChuanhuChatGPT2/assets/custom.css deleted file mode 100644 index f98c7df263b11afa4ddfb5d6ed18aef2ef234226..0000000000000000000000000000000000000000 --- a/spaces/weide/ChuanhuChatGPT2/assets/custom.css +++ /dev/null @@ -1,250 +0,0 @@ -:root { - --chatbot-color-light: #F3F3F3; - --chatbot-color-dark: #121111; -} - -/* 覆盖gradio的页脚信息QAQ */ -footer { - display: none !important; -} -#footer{ - text-align: center; -} -#footer div{ - display: inline-block; -} -#footer .versions{ - font-size: 85%; - opacity: 0.85; -} - -/* user_info */ -#user_info { - white-space: nowrap; - margin-top: -1.3em !important; - padding-left: 112px !important; -} -#user_info p { - font-size: .85em; - font-family: monospace; - color: var(--body-text-color-subdued); -} - -/* status_display */ -#status_display { - display: flex; - min-height: 2em; - align-items: flex-end; - justify-content: flex-end; -} -#status_display p { - font-size: .85em; - font-family: monospace; - color: var(--body-text-color-subdued); -} - -#chuanhu_chatbot, #status_display { - transition: all 0.6s; -} - -/* usage_display */ -#usage_display { - position: relative; - margin: 0; - box-shadow: var(--block-shadow); - border-width: var(--block-border-width); - border-color: var(--block-border-color); - border-radius: var(--block-radius); - background: var(--block-background-fill); - width: 100%; - line-height: var(--line-sm); - min-height: 2em; -} -#usage_display p, #usage_display span { - margin: 0; - padding: .5em 1em; - font-size: .85em; - color: var(--body-text-color-subdued); -} -.progress-bar { - background-color: var(--input-background-fill);; - margin: 0 1em; - height: 20px; - border-radius: 10px; - overflow: hidden; -} -.progress { - background-color: var(--block-title-background-fill);; - height: 100%; - border-radius: 10px; - text-align: right; - transition: width 0.5s ease-in-out; -} -.progress-text { - /* color: white; */ - color: var(--color-accent) !important; - font-size: 1em !important; - font-weight: bold; - padding-right: 10px; - line-height: 20px; -} -/* list */ -ol:not(.options), ul:not(.options) { - padding-inline-start: 2em !important; -} - -/* 亮色 */ -@media (prefers-color-scheme: light) { - #chuanhu_chatbot { - background-color: var(--chatbot-color-light) !important; - color: #000000 !important; - } - [data-testid = "bot"] { - background-color: #FFFFFF !important; - } - [data-testid = "user"] { - background-color: #95EC69 !important; - } -} -/* 暗色 */ -@media (prefers-color-scheme: dark) { - #chuanhu_chatbot { - background-color: var(--chatbot-color-dark) !important; - color: #FFFFFF !important; - } - [data-testid = "bot"] { - background-color: #2C2C2C !important; - } - [data-testid = "user"] { - background-color: #26B561 !important; - } - body { - background-color: var(--neutral-950) !important; - } -} -/* 对话气泡 */ -[class *= "message"] { - border-radius: var(--radius-xl) !important; - border: none; - padding: var(--spacing-xl) !important; - font-size: var(--text-md) !important; - line-height: var(--line-md) !important; - min-height: calc(var(--text-md)*var(--line-md) + 2*var(--spacing-xl)); - min-width: calc(var(--text-md)*var(--line-md) + 2*var(--spacing-xl)); -} -[data-testid = "bot"] { - max-width: 85%; - border-bottom-left-radius: 0 !important; -} -[data-testid = "user"] { - max-width: 85%; - width: auto !important; - border-bottom-right-radius: 0 !important; -} -/* 表格 */ -table { - margin: 1em 
0; - border-collapse: collapse; - empty-cells: show; -} -td,th { - border: 1.2px solid var(--border-color-primary) !important; - padding: 0.2em; -} -thead { - background-color: rgba(175,184,193,0.2); -} -thead th { - padding: .5em .2em; -} -/* 行内代码 */ -code { - display: inline; - white-space: break-spaces; - border-radius: 6px; - margin: 0 2px 0 2px; - padding: .2em .4em .1em .4em; - background-color: rgba(175,184,193,0.2); -} -/* 代码块 */ -pre code { - display: block; - overflow: auto; - white-space: pre; - background-color: hsla(0, 0%, 0%, 80%)!important; - border-radius: 10px; - padding: 1.4em 1.2em 0em 1.4em; - margin: 1.2em 2em 1.2em 0.5em; - color: #FFF; - box-shadow: 6px 6px 16px hsla(0, 0%, 0%, 0.2); -} -/* 代码高亮样式 */ -.highlight .hll { background-color: #49483e } -.highlight .c { color: #75715e } /* Comment */ -.highlight .err { color: #960050; background-color: #1e0010 } /* Error */ -.highlight .k { color: #66d9ef } /* Keyword */ -.highlight .l { color: #ae81ff } /* Literal */ -.highlight .n { color: #f8f8f2 } /* Name */ -.highlight .o { color: #f92672 } /* Operator */ -.highlight .p { color: #f8f8f2 } /* Punctuation */ -.highlight .ch { color: #75715e } /* Comment.Hashbang */ -.highlight .cm { color: #75715e } /* Comment.Multiline */ -.highlight .cp { color: #75715e } /* Comment.Preproc */ -.highlight .cpf { color: #75715e } /* Comment.PreprocFile */ -.highlight .c1 { color: #75715e } /* Comment.Single */ -.highlight .cs { color: #75715e } /* Comment.Special */ -.highlight .gd { color: #f92672 } /* Generic.Deleted */ -.highlight .ge { font-style: italic } /* Generic.Emph */ -.highlight .gi { color: #a6e22e } /* Generic.Inserted */ -.highlight .gs { font-weight: bold } /* Generic.Strong */ -.highlight .gu { color: #75715e } /* Generic.Subheading */ -.highlight .kc { color: #66d9ef } /* Keyword.Constant */ -.highlight .kd { color: #66d9ef } /* Keyword.Declaration */ -.highlight .kn { color: #f92672 } /* Keyword.Namespace */ -.highlight .kp { color: #66d9ef } /* Keyword.Pseudo */ -.highlight .kr { color: #66d9ef } /* Keyword.Reserved */ -.highlight .kt { color: #66d9ef } /* Keyword.Type */ -.highlight .ld { color: #e6db74 } /* Literal.Date */ -.highlight .m { color: #ae81ff } /* Literal.Number */ -.highlight .s { color: #e6db74 } /* Literal.String */ -.highlight .na { color: #a6e22e } /* Name.Attribute */ -.highlight .nb { color: #f8f8f2 } /* Name.Builtin */ -.highlight .nc { color: #a6e22e } /* Name.Class */ -.highlight .no { color: #66d9ef } /* Name.Constant */ -.highlight .nd { color: #a6e22e } /* Name.Decorator */ -.highlight .ni { color: #f8f8f2 } /* Name.Entity */ -.highlight .ne { color: #a6e22e } /* Name.Exception */ -.highlight .nf { color: #a6e22e } /* Name.Function */ -.highlight .nl { color: #f8f8f2 } /* Name.Label */ -.highlight .nn { color: #f8f8f2 } /* Name.Namespace */ -.highlight .nx { color: #a6e22e } /* Name.Other */ -.highlight .py { color: #f8f8f2 } /* Name.Property */ -.highlight .nt { color: #f92672 } /* Name.Tag */ -.highlight .nv { color: #f8f8f2 } /* Name.Variable */ -.highlight .ow { color: #f92672 } /* Operator.Word */ -.highlight .w { color: #f8f8f2 } /* Text.Whitespace */ -.highlight .mb { color: #ae81ff } /* Literal.Number.Bin */ -.highlight .mf { color: #ae81ff } /* Literal.Number.Float */ -.highlight .mh { color: #ae81ff } /* Literal.Number.Hex */ -.highlight .mi { color: #ae81ff } /* Literal.Number.Integer */ -.highlight .mo { color: #ae81ff } /* Literal.Number.Oct */ -.highlight .sa { color: #e6db74 } /* Literal.String.Affix */ -.highlight .sb { 
color: #e6db74 } /* Literal.String.Backtick */ -.highlight .sc { color: #e6db74 } /* Literal.String.Char */ -.highlight .dl { color: #e6db74 } /* Literal.String.Delimiter */ -.highlight .sd { color: #e6db74 } /* Literal.String.Doc */ -.highlight .s2 { color: #e6db74 } /* Literal.String.Double */ -.highlight .se { color: #ae81ff } /* Literal.String.Escape */ -.highlight .sh { color: #e6db74 } /* Literal.String.Heredoc */ -.highlight .si { color: #e6db74 } /* Literal.String.Interpol */ -.highlight .sx { color: #e6db74 } /* Literal.String.Other */ -.highlight .sr { color: #e6db74 } /* Literal.String.Regex */ -.highlight .s1 { color: #e6db74 } /* Literal.String.Single */ -.highlight .ss { color: #e6db74 } /* Literal.String.Symbol */ -.highlight .bp { color: #f8f8f2 } /* Name.Builtin.Pseudo */ -.highlight .fm { color: #a6e22e } /* Name.Function.Magic */ -.highlight .vc { color: #f8f8f2 } /* Name.Variable.Class */ -.highlight .vg { color: #f8f8f2 } /* Name.Variable.Global */ -.highlight .vi { color: #f8f8f2 } /* Name.Variable.Instance */ -.highlight .vm { color: #f8f8f2 } /* Name.Variable.Magic */ -.highlight .il { color: #ae81ff } /* Literal.Number.Integer.Long */ diff --git a/spaces/weiwandaixu/ChatGPT3.5/modules/base_model.py b/spaces/weiwandaixu/ChatGPT3.5/modules/base_model.py deleted file mode 100644 index 2b55623f6b0989f60d818be6e0e77f5948484b82..0000000000000000000000000000000000000000 --- a/spaces/weiwandaixu/ChatGPT3.5/modules/base_model.py +++ /dev/null @@ -1,561 +0,0 @@ -from __future__ import annotations -from typing import TYPE_CHECKING, List - -import logging -import json -import commentjson as cjson -import os -import sys -import requests -import urllib3 -import traceback - -from tqdm import tqdm -import colorama -from duckduckgo_search import ddg -import asyncio -import aiohttp -from enum import Enum - -from .presets import * -from .llama_func import * -from .utils import * -from . 
import shared -from .config import retrieve_proxy - - -class ModelType(Enum): - Unknown = -1 - OpenAI = 0 - ChatGLM = 1 - LLaMA = 2 - XMChat = 3 - - @classmethod - def get_type(cls, model_name: str): - model_type = None - model_name_lower = model_name.lower() - if "gpt" in model_name_lower: - model_type = ModelType.OpenAI - elif "chatglm" in model_name_lower: - model_type = ModelType.ChatGLM - elif "llama" in model_name_lower or "alpaca" in model_name_lower: - model_type = ModelType.LLaMA - elif "xmchat" in model_name_lower: - model_type = ModelType.XMChat - else: - model_type = ModelType.Unknown - return model_type - - -class BaseLLMModel: - def __init__( - self, - model_name, - system_prompt="", - temperature=1.0, - top_p=1.0, - n_choices=1, - stop=None, - max_generation_token=None, - presence_penalty=0, - frequency_penalty=0, - logit_bias=None, - user="", - ) -> None: - self.history = [] - self.all_token_counts = [] - self.model_name = model_name - self.model_type = ModelType.get_type(model_name) - try: - self.token_upper_limit = MODEL_TOKEN_LIMIT[model_name] - except KeyError: - self.token_upper_limit = DEFAULT_TOKEN_LIMIT - self.interrupted = False - self.system_prompt = system_prompt - self.api_key = None - self.need_api_key = False - self.single_turn = False - - self.temperature = temperature - self.top_p = top_p - self.n_choices = n_choices - self.stop_sequence = stop - self.max_generation_token = None - self.presence_penalty = presence_penalty - self.frequency_penalty = frequency_penalty - self.logit_bias = logit_bias - self.user_identifier = user - - def get_answer_stream_iter(self): - """stream predict, need to be implemented - conversations are stored in self.history, with the most recent question, in OpenAI format - should return a generator, each time give the next word (str) in the answer - """ - logging.warning("stream predict not implemented, using at once predict instead") - response, _ = self.get_answer_at_once() - yield response - - def get_answer_at_once(self): - """predict at once, need to be implemented - conversations are stored in self.history, with the most recent question, in OpenAI format - Should return: - the answer (str) - total token count (int) - """ - logging.warning("at once predict not implemented, using stream predict instead") - response_iter = self.get_answer_stream_iter() - count = 0 - for response in response_iter: - count += 1 - return response, sum(self.all_token_counts) + count - - def billing_info(self): - """get billing infomation, inplement if needed""" - logging.warning("billing info not implemented, using default") - return BILLING_NOT_APPLICABLE_MSG - - def count_token(self, user_input): - """get token count from input, implement if needed""" - logging.warning("token count not implemented, using default") - return len(user_input) - - def stream_next_chatbot(self, inputs, chatbot, fake_input=None, display_append=""): - def get_return_value(): - return chatbot, status_text - - status_text = i18n("开始实时传输回答……") - if fake_input: - chatbot.append((fake_input, "")) - else: - chatbot.append((inputs, "")) - - user_token_count = self.count_token(inputs) - self.all_token_counts.append(user_token_count) - logging.debug(f"输入token计数: {user_token_count}") - - stream_iter = self.get_answer_stream_iter() - - for partial_text in stream_iter: - chatbot[-1] = (chatbot[-1][0], partial_text + display_append) - self.all_token_counts[-1] += 1 - status_text = self.token_message() - yield get_return_value() - if self.interrupted: - self.recover() - break - 
self.history.append(construct_assistant(partial_text)) - - def next_chatbot_at_once(self, inputs, chatbot, fake_input=None, display_append=""): - if fake_input: - chatbot.append((fake_input, "")) - else: - chatbot.append((inputs, "")) - if fake_input is not None: - user_token_count = self.count_token(fake_input) - else: - user_token_count = self.count_token(inputs) - self.all_token_counts.append(user_token_count) - ai_reply, total_token_count = self.get_answer_at_once() - self.history.append(construct_assistant(ai_reply)) - if fake_input is not None: - self.history[-2] = construct_user(fake_input) - chatbot[-1] = (chatbot[-1][0], ai_reply + display_append) - if fake_input is not None: - self.all_token_counts[-1] += count_token(construct_assistant(ai_reply)) - else: - self.all_token_counts[-1] = total_token_count - sum(self.all_token_counts) - status_text = self.token_message() - return chatbot, status_text - - def handle_file_upload(self, files, chatbot): - """if the model accepts multi modal input, implement this function""" - status = gr.Markdown.update() - if files: - construct_index(self.api_key, file_src=files) - status = "索引构建完成" - return gr.Files.update(), chatbot, status - - def prepare_inputs(self, real_inputs, use_websearch, files, reply_language, chatbot): - fake_inputs = None - display_append = [] - limited_context = False - fake_inputs = real_inputs - if files: - from llama_index.indices.vector_store.base_query import GPTVectorStoreIndexQuery - from llama_index.indices.query.schema import QueryBundle - from langchain.embeddings.huggingface import HuggingFaceEmbeddings - from langchain.chat_models import ChatOpenAI - from llama_index import ( - GPTSimpleVectorIndex, - ServiceContext, - LangchainEmbedding, - OpenAIEmbedding, - ) - limited_context = True - msg = "加载索引中……" - logging.info(msg) - # yield chatbot + [(inputs, "")], msg - index = construct_index(self.api_key, file_src=files) - assert index is not None, "获取索引失败" - msg = "索引获取成功,生成回答中……" - logging.info(msg) - if local_embedding or self.model_type != ModelType.OpenAI: - embed_model = LangchainEmbedding(HuggingFaceEmbeddings(model_name = "sentence-transformers/distiluse-base-multilingual-cased-v2")) - else: - embed_model = OpenAIEmbedding() - # yield chatbot + [(inputs, "")], msg - with retrieve_proxy(): - prompt_helper = PromptHelper( - max_input_size=4096, - num_output=5, - max_chunk_overlap=20, - chunk_size_limit=600, - ) - from llama_index import ServiceContext - - service_context = ServiceContext.from_defaults( - prompt_helper=prompt_helper, embed_model=embed_model - ) - query_object = GPTVectorStoreIndexQuery( - index.index_struct, - service_context=service_context, - similarity_top_k=5, - vector_store=index._vector_store, - docstore=index._docstore, - ) - query_bundle = QueryBundle(real_inputs) - nodes = query_object.retrieve(query_bundle) - reference_results = [n.node.text for n in nodes] - reference_results = add_source_numbers(reference_results, use_source=False) - display_append = add_details(reference_results) - display_append = "\n\n" + "".join(display_append) - real_inputs = ( - replace_today(PROMPT_TEMPLATE) - .replace("{query_str}", real_inputs) - .replace("{context_str}", "\n\n".join(reference_results)) - .replace("{reply_language}", reply_language) - ) - elif use_websearch: - limited_context = True - search_results = ddg(real_inputs, max_results=5) - reference_results = [] - for idx, result in enumerate(search_results): - logging.debug(f"搜索结果{idx + 1}:{result}") - domain_name = 
urllib3.util.parse_url(result["href"]).host - reference_results.append([result["body"], result["href"]]) - display_append.append( - # f"{idx+1}. [{domain_name}]({result['href']})\n" - f"
              • {domain_name}
              • \n" - ) - reference_results = add_source_numbers(reference_results) - display_append = "
                  \n\n" + "".join(display_append) + "
                " - real_inputs = ( - replace_today(WEBSEARCH_PTOMPT_TEMPLATE) - .replace("{query}", real_inputs) - .replace("{web_results}", "\n\n".join(reference_results)) - .replace("{reply_language}", reply_language) - ) - else: - display_append = "" - return limited_context, fake_inputs, display_append, real_inputs, chatbot - - def predict( - self, - inputs, - chatbot, - stream=False, - use_websearch=False, - files=None, - reply_language="中文", - should_check_token_count=True, - ): # repetition_penalty, top_k - - status_text = "开始生成回答……" - logging.info( - "输入为:" + colorama.Fore.BLUE + f"{inputs}" + colorama.Style.RESET_ALL - ) - if should_check_token_count: - yield chatbot + [(inputs, "")], status_text - if reply_language == "跟随问题语言(不稳定)": - reply_language = "the same language as the question, such as English, 中文, 日本語, Español, Français, or Deutsch." - - limited_context, fake_inputs, display_append, inputs, chatbot = self.prepare_inputs(real_inputs=inputs, use_websearch=use_websearch, files=files, reply_language=reply_language, chatbot=chatbot) - yield chatbot + [(fake_inputs, "")], status_text - - if ( - self.need_api_key and - self.api_key is None - and not shared.state.multi_api_key - ): - status_text = STANDARD_ERROR_MSG + NO_APIKEY_MSG - logging.info(status_text) - chatbot.append((inputs, "")) - if len(self.history) == 0: - self.history.append(construct_user(inputs)) - self.history.append("") - self.all_token_counts.append(0) - else: - self.history[-2] = construct_user(inputs) - yield chatbot + [(inputs, "")], status_text - return - elif len(inputs.strip()) == 0: - status_text = STANDARD_ERROR_MSG + NO_INPUT_MSG - logging.info(status_text) - yield chatbot + [(inputs, "")], status_text - return - - if self.single_turn: - self.history = [] - self.all_token_counts = [] - self.history.append(construct_user(inputs)) - - try: - if stream: - logging.debug("使用流式传输") - iter = self.stream_next_chatbot( - inputs, - chatbot, - fake_input=fake_inputs, - display_append=display_append, - ) - for chatbot, status_text in iter: - yield chatbot, status_text - else: - logging.debug("不使用流式传输") - chatbot, status_text = self.next_chatbot_at_once( - inputs, - chatbot, - fake_input=fake_inputs, - display_append=display_append, - ) - yield chatbot, status_text - except Exception as e: - traceback.print_exc() - status_text = STANDARD_ERROR_MSG + str(e) - yield chatbot, status_text - - if len(self.history) > 1 and self.history[-1]["content"] != inputs: - logging.info( - "回答为:" - + colorama.Fore.BLUE - + f"{self.history[-1]['content']}" - + colorama.Style.RESET_ALL - ) - - if limited_context: - # self.history = self.history[-4:] - # self.all_token_counts = self.all_token_counts[-2:] - self.history = [] - self.all_token_counts = [] - - max_token = self.token_upper_limit - TOKEN_OFFSET - - if sum(self.all_token_counts) > max_token and should_check_token_count: - count = 0 - while ( - sum(self.all_token_counts) - > self.token_upper_limit * REDUCE_TOKEN_FACTOR - and sum(self.all_token_counts) > 0 - ): - count += 1 - del self.all_token_counts[0] - del self.history[:2] - logging.info(status_text) - status_text = f"为了防止token超限,模型忘记了早期的 {count} 轮对话" - yield chatbot, status_text - - def retry( - self, - chatbot, - stream=False, - use_websearch=False, - files=None, - reply_language="中文", - ): - logging.debug("重试中……") - if len(self.history) > 0: - inputs = self.history[-2]["content"] - del self.history[-2:] - self.all_token_counts.pop() - elif len(chatbot) > 0: - inputs = chatbot[-1][0] - else: - yield chatbot, 
f"{STANDARD_ERROR_MSG}上下文是空的" - return - - iter = self.predict( - inputs, - chatbot, - stream=stream, - use_websearch=use_websearch, - files=files, - reply_language=reply_language, - ) - for x in iter: - yield x - logging.debug("重试完毕") - - # def reduce_token_size(self, chatbot): - # logging.info("开始减少token数量……") - # chatbot, status_text = self.next_chatbot_at_once( - # summarize_prompt, - # chatbot - # ) - # max_token_count = self.token_upper_limit * REDUCE_TOKEN_FACTOR - # num_chat = find_n(self.all_token_counts, max_token_count) - # logging.info(f"previous_token_count: {self.all_token_counts}, keeping {num_chat} chats") - # chatbot = chatbot[:-1] - # self.history = self.history[-2*num_chat:] if num_chat > 0 else [] - # self.all_token_counts = self.all_token_counts[-num_chat:] if num_chat > 0 else [] - # msg = f"保留了最近{num_chat}轮对话" - # logging.info(msg) - # logging.info("减少token数量完毕") - # return chatbot, msg + "," + self.token_message(self.all_token_counts if len(self.all_token_counts) > 0 else [0]) - - def interrupt(self): - self.interrupted = True - - def recover(self): - self.interrupted = False - - def set_token_upper_limit(self, new_upper_limit): - self.token_upper_limit = new_upper_limit - print(f"token上限设置为{new_upper_limit}") - - def set_temperature(self, new_temperature): - self.temperature = new_temperature - - def set_top_p(self, new_top_p): - self.top_p = new_top_p - - def set_n_choices(self, new_n_choices): - self.n_choices = new_n_choices - - def set_stop_sequence(self, new_stop_sequence: str): - new_stop_sequence = new_stop_sequence.split(",") - self.stop_sequence = new_stop_sequence - - def set_max_tokens(self, new_max_tokens): - self.max_generation_token = new_max_tokens - - def set_presence_penalty(self, new_presence_penalty): - self.presence_penalty = new_presence_penalty - - def set_frequency_penalty(self, new_frequency_penalty): - self.frequency_penalty = new_frequency_penalty - - def set_logit_bias(self, logit_bias): - logit_bias = logit_bias.split() - bias_map = {} - encoding = tiktoken.get_encoding("cl100k_base") - for line in logit_bias: - word, bias_amount = line.split(":") - if word: - for token in encoding.encode(word): - bias_map[token] = float(bias_amount) - self.logit_bias = bias_map - - def set_user_identifier(self, new_user_identifier): - self.user_identifier = new_user_identifier - - def set_system_prompt(self, new_system_prompt): - self.system_prompt = new_system_prompt - - def set_key(self, new_access_key): - self.api_key = new_access_key.strip() - msg = i18n("API密钥更改为了") + hide_middle_chars(self.api_key) - logging.info(msg) - return self.api_key, msg - - def set_single_turn(self, new_single_turn): - self.single_turn = new_single_turn - - def reset(self): - self.history = [] - self.all_token_counts = [] - self.interrupted = False - return [], self.token_message([0]) - - def delete_first_conversation(self): - if self.history: - del self.history[:2] - del self.all_token_counts[0] - return self.token_message() - - def delete_last_conversation(self, chatbot): - if len(chatbot) > 0 and STANDARD_ERROR_MSG in chatbot[-1][1]: - msg = "由于包含报错信息,只删除chatbot记录" - chatbot.pop() - return chatbot, self.history - if len(self.history) > 0: - self.history.pop() - self.history.pop() - if len(chatbot) > 0: - msg = "删除了一组chatbot对话" - chatbot.pop() - if len(self.all_token_counts) > 0: - msg = "删除了一组对话的token计数记录" - self.all_token_counts.pop() - msg = "删除了一组对话" - return chatbot, msg - - def token_message(self, token_lst=None): - if token_lst is None: - token_lst = 
self.all_token_counts - token_sum = 0 - for i in range(len(token_lst)): - token_sum += sum(token_lst[: i + 1]) - return i18n("Token 计数: ") + f"{sum(token_lst)}" + i18n(",本次对话累计消耗了 ") + f"{token_sum} tokens" - - def save_chat_history(self, filename, chatbot, user_name): - if filename == "": - return - if not filename.endswith(".json"): - filename += ".json" - return save_file(filename, self.system_prompt, self.history, chatbot, user_name) - - def export_markdown(self, filename, chatbot, user_name): - if filename == "": - return - if not filename.endswith(".md"): - filename += ".md" - return save_file(filename, self.system_prompt, self.history, chatbot, user_name) - - def load_chat_history(self, filename, chatbot, user_name): - logging.debug(f"{user_name} 加载对话历史中……") - if type(filename) != str: - filename = filename.name - try: - with open(os.path.join(HISTORY_DIR, user_name, filename), "r") as f: - json_s = json.load(f) - try: - if type(json_s["history"][0]) == str: - logging.info("历史记录格式为旧版,正在转换……") - new_history = [] - for index, item in enumerate(json_s["history"]): - if index % 2 == 0: - new_history.append(construct_user(item)) - else: - new_history.append(construct_assistant(item)) - json_s["history"] = new_history - logging.info(new_history) - except: - # 没有对话历史 - pass - logging.debug(f"{user_name} 加载对话历史完毕") - self.history = json_s["history"] - return filename, json_s["system"], json_s["chatbot"] - except FileNotFoundError: - logging.warning(f"{user_name} 没有找到对话历史文件,不执行任何操作") - return filename, self.system_prompt, chatbot - - def like(self): - """like the last response, implement if needed - """ - return gr.update() - - def dislike(self): - """dislike the last response, implement if needed - """ - return gr.update() diff --git a/spaces/whitphx/gradio-static-test/dist/assets/DropdownArrow-5fa4dd09.css b/spaces/whitphx/gradio-static-test/dist/assets/DropdownArrow-5fa4dd09.css deleted file mode 100644 index c47d6f6f010f0626b0036068fe41d683b37b2954..0000000000000000000000000000000000000000 --- a/spaces/whitphx/gradio-static-test/dist/assets/DropdownArrow-5fa4dd09.css +++ /dev/null @@ -1 +0,0 @@ -.dropdown-arrow.svelte-p5edak{fill:var(--body-text-color);margin-right:var(--size-2);width:var(--size-5)} diff --git a/spaces/wuhuik/bingo/tests/parse.ts b/spaces/wuhuik/bingo/tests/parse.ts deleted file mode 100644 index 92940fe6315f1d7cb2b267ba5e5a7e26460a1de3..0000000000000000000000000000000000000000 --- a/spaces/wuhuik/bingo/tests/parse.ts +++ /dev/null @@ -1,13 +0,0 @@ -import { promises as fs } from 'fs' -import { join } from 'path' -import { parseHeadersFromCurl } from '@/lib/utils' - -(async () => { - const content = await fs.readFile(join(__dirname, './fixtures/curl.txt'), 'utf-8') - const headers = parseHeadersFromCurl(content) - console.log(headers) - - const cmdContent = await fs.readFile(join(__dirname, './fixtures/cmd.txt'), 'utf-8') - const cmdHeaders = parseHeadersFromCurl(cmdContent) - console.log(cmdHeaders) -})() diff --git a/spaces/xfys/yolov5_tracking/trackers/strong_sort/deep/reid/projects/attribute_recognition/datasets/pa100k.py b/spaces/xfys/yolov5_tracking/trackers/strong_sort/deep/reid/projects/attribute_recognition/datasets/pa100k.py deleted file mode 100644 index 61dd26cf54eef209e3ee3a8081802a0da8a77f66..0000000000000000000000000000000000000000 --- a/spaces/xfys/yolov5_tracking/trackers/strong_sort/deep/reid/projects/attribute_recognition/datasets/pa100k.py +++ /dev/null @@ -1,59 +0,0 @@ -from __future__ import division, print_function, absolute_import -import numpy as 
np -import os.path as osp -from scipy.io import loadmat - -from .dataset import Dataset - - -class PA100K(Dataset): - """Pedestrian attribute dataset. - - 80k training images + 20k test images. - - The folder structure should be: - pa100k/ - data/ # images - annotation/ - annotation.mat - """ - dataset_dir = 'pa100k' - - def __init__(self, root='', **kwargs): - self.root = osp.abspath(osp.expanduser(root)) - self.dataset_dir = osp.join(self.root, self.dataset_dir) - self.data_dir = osp.join(self.dataset_dir, 'data') - self.anno_mat_path = osp.join( - self.dataset_dir, 'annotation', 'annotation.mat' - ) - - required_files = [self.data_dir, self.anno_mat_path] - self.check_before_run(required_files) - - train, val, test, attr_dict = self.extract_data() - super(PA100K, self).__init__(train, val, test, attr_dict, **kwargs) - - def extract_data(self): - # anno_mat is a dictionary with keys: ['test_images_name', 'val_images_name', - # 'train_images_name', 'val_label', 'attributes', 'test_label', 'train_label'] - anno_mat = loadmat(self.anno_mat_path) - - def _extract(key_name, key_label): - names = anno_mat[key_name] - labels = anno_mat[key_label] - num_imgs = names.shape[0] - data = [] - for i in range(num_imgs): - name = names[i, 0][0] - attrs = labels[i, :].astype(np.float32) - img_path = osp.join(self.data_dir, name) - data.append((img_path, attrs)) - return data - - train = _extract('train_images_name', 'train_label') - val = _extract('val_images_name', 'val_label') - test = _extract('test_images_name', 'test_label') - attrs = anno_mat['attributes'] - attr_dict = {i: str(attr[0][0]) for i, attr in enumerate(attrs)} - - return train, val, test, attr_dict diff --git a/spaces/xiaolv/claude2_xiaolv_api_file_chat/README.md b/spaces/xiaolv/claude2_xiaolv_api_file_chat/README.md deleted file mode 100644 index fbff73fc767722229d9b7ab1ee821eb776696862..0000000000000000000000000000000000000000 --- a/spaces/xiaolv/claude2_xiaolv_api_file_chat/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: New-Bing-with Your Cookies -emoji: 🐨 -colorFrom: green -colorTo: pink -sdk: gradio -sdk_version: 3.23.0 -app_file: app.py -pinned: false -license: other -duplicated_from: xiaolv/claude2_xiaolv_api_updata ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/xillegas/duolingo-bot/my_app.rb b/spaces/xillegas/duolingo-bot/my_app.rb deleted file mode 100644 index cd7e87d77e0f2847b7774797ab29fc65256aa6c6..0000000000000000000000000000000000000000 --- a/spaces/xillegas/duolingo-bot/my_app.rb +++ /dev/null @@ -1,5 +0,0 @@ -require 'sinatra' - -get '/' do - 'Hello world!' 
-end diff --git a/spaces/xwsm/gpt/crazy_functions/test_project/cpp/longcode/prod_cons.h b/spaces/xwsm/gpt/crazy_functions/test_project/cpp/longcode/prod_cons.h deleted file mode 100644 index c9004bb8043a12e32814436baa6262a00c8ef68e..0000000000000000000000000000000000000000 --- a/spaces/xwsm/gpt/crazy_functions/test_project/cpp/longcode/prod_cons.h +++ /dev/null @@ -1,433 +0,0 @@ -#pragma once - -#include -#include -#include -#include -#include - -#include "libipc/def.h" - -#include "libipc/platform/detail.h" -#include "libipc/circ/elem_def.h" -#include "libipc/utility/log.h" -#include "libipc/utility/utility.h" - -namespace ipc { - -//////////////////////////////////////////////////////////////// -/// producer-consumer implementation -//////////////////////////////////////////////////////////////// - -template -struct prod_cons_impl; - -template <> -struct prod_cons_impl> { - - template - struct elem_t { - std::aligned_storage_t data_ {}; - }; - - alignas(cache_line_size) std::atomic rd_; // read index - alignas(cache_line_size) std::atomic wt_; // write index - - constexpr circ::u2_t cursor() const noexcept { - return 0; - } - - template - bool push(W* /*wrapper*/, F&& f, E* elems) { - auto cur_wt = circ::index_of(wt_.load(std::memory_order_relaxed)); - if (cur_wt == circ::index_of(rd_.load(std::memory_order_acquire) - 1)) { - return false; // full - } - std::forward(f)(&(elems[cur_wt].data_)); - wt_.fetch_add(1, std::memory_order_release); - return true; - } - - /** - * In single-single-unicast, 'force_push' means 'no reader' or 'the only one reader is dead'. - * So we could just disconnect all connections of receiver, and return false. - */ - template - bool force_push(W* wrapper, F&&, E*) { - wrapper->elems()->disconnect_receiver(~static_cast(0u)); - return false; - } - - template - bool pop(W* /*wrapper*/, circ::u2_t& /*cur*/, F&& f, R&& out, E* elems) { - auto cur_rd = circ::index_of(rd_.load(std::memory_order_relaxed)); - if (cur_rd == circ::index_of(wt_.load(std::memory_order_acquire))) { - return false; // empty - } - std::forward(f)(&(elems[cur_rd].data_)); - std::forward(out)(true); - rd_.fetch_add(1, std::memory_order_release); - return true; - } -}; - -template <> -struct prod_cons_impl> - : prod_cons_impl> { - - template - bool force_push(W* wrapper, F&&, E*) { - wrapper->elems()->disconnect_receiver(1); - return false; - } - - template class E, std::size_t DS, std::size_t AS> - bool pop(W* /*wrapper*/, circ::u2_t& /*cur*/, F&& f, R&& out, E* elems) { - byte_t buff[DS]; - for (unsigned k = 0;;) { - auto cur_rd = rd_.load(std::memory_order_relaxed); - if (circ::index_of(cur_rd) == - circ::index_of(wt_.load(std::memory_order_acquire))) { - return false; // empty - } - std::memcpy(buff, &(elems[circ::index_of(cur_rd)].data_), sizeof(buff)); - if (rd_.compare_exchange_weak(cur_rd, cur_rd + 1, std::memory_order_release)) { - std::forward(f)(buff); - std::forward(out)(true); - return true; - } - ipc::yield(k); - } - } -}; - -template <> -struct prod_cons_impl> - : prod_cons_impl> { - - using flag_t = std::uint64_t; - - template - struct elem_t { - std::aligned_storage_t data_ {}; - std::atomic f_ct_ { 0 }; // commit flag - }; - - alignas(cache_line_size) std::atomic ct_; // commit index - - template - bool push(W* /*wrapper*/, F&& f, E* elems) { - circ::u2_t cur_ct, nxt_ct; - for (unsigned k = 0;;) { - cur_ct = ct_.load(std::memory_order_relaxed); - if (circ::index_of(nxt_ct = cur_ct + 1) == - circ::index_of(rd_.load(std::memory_order_acquire))) { - return false; // full - } - if 
(ct_.compare_exchange_weak(cur_ct, nxt_ct, std::memory_order_acq_rel)) { - break; - } - ipc::yield(k); - } - auto* el = elems + circ::index_of(cur_ct); - std::forward(f)(&(el->data_)); - // set flag & try update wt - el->f_ct_.store(~static_cast(cur_ct), std::memory_order_release); - while (1) { - auto cac_ct = el->f_ct_.load(std::memory_order_acquire); - if (cur_ct != wt_.load(std::memory_order_relaxed)) { - return true; - } - if ((~cac_ct) != cur_ct) { - return true; - } - if (!el->f_ct_.compare_exchange_strong(cac_ct, 0, std::memory_order_relaxed)) { - return true; - } - wt_.store(nxt_ct, std::memory_order_release); - cur_ct = nxt_ct; - nxt_ct = cur_ct + 1; - el = elems + circ::index_of(cur_ct); - } - return true; - } - - template - bool force_push(W* wrapper, F&&, E*) { - wrapper->elems()->disconnect_receiver(1); - return false; - } - - template class E, std::size_t DS, std::size_t AS> - bool pop(W* /*wrapper*/, circ::u2_t& /*cur*/, F&& f, R&& out, E* elems) { - byte_t buff[DS]; - for (unsigned k = 0;;) { - auto cur_rd = rd_.load(std::memory_order_relaxed); - auto cur_wt = wt_.load(std::memory_order_acquire); - auto id_rd = circ::index_of(cur_rd); - auto id_wt = circ::index_of(cur_wt); - if (id_rd == id_wt) { - auto* el = elems + id_wt; - auto cac_ct = el->f_ct_.load(std::memory_order_acquire); - if ((~cac_ct) != cur_wt) { - return false; // empty - } - if (el->f_ct_.compare_exchange_weak(cac_ct, 0, std::memory_order_relaxed)) { - wt_.store(cur_wt + 1, std::memory_order_release); - } - k = 0; - } - else { - std::memcpy(buff, &(elems[circ::index_of(cur_rd)].data_), sizeof(buff)); - if (rd_.compare_exchange_weak(cur_rd, cur_rd + 1, std::memory_order_release)) { - std::forward(f)(buff); - std::forward(out)(true); - return true; - } - ipc::yield(k); - } - } - } -}; - -template <> -struct prod_cons_impl> { - - using rc_t = std::uint64_t; - - enum : rc_t { - ep_mask = 0x00000000ffffffffull, - ep_incr = 0x0000000100000000ull - }; - - template - struct elem_t { - std::aligned_storage_t data_ {}; - std::atomic rc_ { 0 }; // read-counter - }; - - alignas(cache_line_size) std::atomic wt_; // write index - alignas(cache_line_size) rc_t epoch_ { 0 }; // only one writer - - circ::u2_t cursor() const noexcept { - return wt_.load(std::memory_order_acquire); - } - - template - bool push(W* wrapper, F&& f, E* elems) { - E* el; - for (unsigned k = 0;;) { - circ::cc_t cc = wrapper->elems()->connections(std::memory_order_relaxed); - if (cc == 0) return false; // no reader - el = elems + circ::index_of(wt_.load(std::memory_order_relaxed)); - // check all consumers have finished reading this element - auto cur_rc = el->rc_.load(std::memory_order_acquire); - circ::cc_t rem_cc = cur_rc & ep_mask; - if ((cc & rem_cc) && ((cur_rc & ~ep_mask) == epoch_)) { - return false; // has not finished yet - } - // consider rem_cc to be 0 here - if (el->rc_.compare_exchange_weak( - cur_rc, epoch_ | static_cast(cc), std::memory_order_release)) { - break; - } - ipc::yield(k); - } - std::forward(f)(&(el->data_)); - wt_.fetch_add(1, std::memory_order_release); - return true; - } - - template - bool force_push(W* wrapper, F&& f, E* elems) { - E* el; - epoch_ += ep_incr; - for (unsigned k = 0;;) { - circ::cc_t cc = wrapper->elems()->connections(std::memory_order_relaxed); - if (cc == 0) return false; // no reader - el = elems + circ::index_of(wt_.load(std::memory_order_relaxed)); - // check all consumers have finished reading this element - auto cur_rc = el->rc_.load(std::memory_order_acquire); - circ::cc_t rem_cc = cur_rc & 
ep_mask; - if (cc & rem_cc) { - ipc::log("force_push: k = %u, cc = %u, rem_cc = %u\n", k, cc, rem_cc); - cc = wrapper->elems()->disconnect_receiver(rem_cc); // disconnect all invalid readers - if (cc == 0) return false; // no reader - } - // just compare & exchange - if (el->rc_.compare_exchange_weak( - cur_rc, epoch_ | static_cast(cc), std::memory_order_release)) { - break; - } - ipc::yield(k); - } - std::forward(f)(&(el->data_)); - wt_.fetch_add(1, std::memory_order_release); - return true; - } - - template - bool pop(W* wrapper, circ::u2_t& cur, F&& f, R&& out, E* elems) { - if (cur == cursor()) return false; // acquire - auto* el = elems + circ::index_of(cur++); - std::forward(f)(&(el->data_)); - for (unsigned k = 0;;) { - auto cur_rc = el->rc_.load(std::memory_order_acquire); - if ((cur_rc & ep_mask) == 0) { - std::forward(out)(true); - return true; - } - auto nxt_rc = cur_rc & ~static_cast(wrapper->connected_id()); - if (el->rc_.compare_exchange_weak(cur_rc, nxt_rc, std::memory_order_release)) { - std::forward(out)((nxt_rc & ep_mask) == 0); - return true; - } - ipc::yield(k); - } - } -}; - -template <> -struct prod_cons_impl> { - - using rc_t = std::uint64_t; - using flag_t = std::uint64_t; - - enum : rc_t { - rc_mask = 0x00000000ffffffffull, - ep_mask = 0x00ffffffffffffffull, - ep_incr = 0x0100000000000000ull, - ic_mask = 0xff000000ffffffffull, - ic_incr = 0x0000000100000000ull - }; - - template - struct elem_t { - std::aligned_storage_t data_ {}; - std::atomic rc_ { 0 }; // read-counter - std::atomic f_ct_ { 0 }; // commit flag - }; - - alignas(cache_line_size) std::atomic ct_; // commit index - alignas(cache_line_size) std::atomic epoch_ { 0 }; - - circ::u2_t cursor() const noexcept { - return ct_.load(std::memory_order_acquire); - } - - constexpr static rc_t inc_rc(rc_t rc) noexcept { - return (rc & ic_mask) | ((rc + ic_incr) & ~ic_mask); - } - - constexpr static rc_t inc_mask(rc_t rc) noexcept { - return inc_rc(rc) & ~rc_mask; - } - - template - bool push(W* wrapper, F&& f, E* elems) { - E* el; - circ::u2_t cur_ct; - rc_t epoch = epoch_.load(std::memory_order_acquire); - for (unsigned k = 0;;) { - circ::cc_t cc = wrapper->elems()->connections(std::memory_order_relaxed); - if (cc == 0) return false; // no reader - el = elems + circ::index_of(cur_ct = ct_.load(std::memory_order_relaxed)); - // check all consumers have finished reading this element - auto cur_rc = el->rc_.load(std::memory_order_relaxed); - circ::cc_t rem_cc = cur_rc & rc_mask; - if ((cc & rem_cc) && ((cur_rc & ~ep_mask) == epoch)) { - return false; // has not finished yet - } - else if (!rem_cc) { - auto cur_fl = el->f_ct_.load(std::memory_order_acquire); - if ((cur_fl != cur_ct) && cur_fl) { - return false; // full - } - } - // consider rem_cc to be 0 here - if (el->rc_.compare_exchange_weak( - cur_rc, inc_mask(epoch | (cur_rc & ep_mask)) | static_cast(cc), std::memory_order_relaxed) && - epoch_.compare_exchange_weak(epoch, epoch, std::memory_order_acq_rel)) { - break; - } - ipc::yield(k); - } - // only one thread/process would touch here at one time - ct_.store(cur_ct + 1, std::memory_order_release); - std::forward(f)(&(el->data_)); - // set flag & try update wt - el->f_ct_.store(~static_cast(cur_ct), std::memory_order_release); - return true; - } - - template - bool force_push(W* wrapper, F&& f, E* elems) { - E* el; - circ::u2_t cur_ct; - rc_t epoch = epoch_.fetch_add(ep_incr, std::memory_order_release) + ep_incr; - for (unsigned k = 0;;) { - circ::cc_t cc = 
wrapper->elems()->connections(std::memory_order_relaxed); - if (cc == 0) return false; // no reader - el = elems + circ::index_of(cur_ct = ct_.load(std::memory_order_relaxed)); - // check all consumers have finished reading this element - auto cur_rc = el->rc_.load(std::memory_order_acquire); - circ::cc_t rem_cc = cur_rc & rc_mask; - if (cc & rem_cc) { - ipc::log("force_push: k = %u, cc = %u, rem_cc = %u\n", k, cc, rem_cc); - cc = wrapper->elems()->disconnect_receiver(rem_cc); // disconnect all invalid readers - if (cc == 0) return false; // no reader - } - // just compare & exchange - if (el->rc_.compare_exchange_weak( - cur_rc, inc_mask(epoch | (cur_rc & ep_mask)) | static_cast(cc), std::memory_order_relaxed)) { - if (epoch == epoch_.load(std::memory_order_acquire)) { - break; - } - else if (push(wrapper, std::forward(f), elems)) { - return true; - } - epoch = epoch_.fetch_add(ep_incr, std::memory_order_release) + ep_incr; - } - ipc::yield(k); - } - // only one thread/process would touch here at one time - ct_.store(cur_ct + 1, std::memory_order_release); - std::forward(f)(&(el->data_)); - // set flag & try update wt - el->f_ct_.store(~static_cast(cur_ct), std::memory_order_release); - return true; - } - - template - bool pop(W* wrapper, circ::u2_t& cur, F&& f, R&& out, E(& elems)[N]) { - auto* el = elems + circ::index_of(cur); - auto cur_fl = el->f_ct_.load(std::memory_order_acquire); - if (cur_fl != ~static_cast(cur)) { - return false; // empty - } - ++cur; - std::forward(f)(&(el->data_)); - for (unsigned k = 0;;) { - auto cur_rc = el->rc_.load(std::memory_order_acquire); - if ((cur_rc & rc_mask) == 0) { - std::forward(out)(true); - el->f_ct_.store(cur + N - 1, std::memory_order_release); - return true; - } - auto nxt_rc = inc_rc(cur_rc) & ~static_cast(wrapper->connected_id()); - bool last_one = false; - if ((last_one = (nxt_rc & rc_mask) == 0)) { - el->f_ct_.store(cur + N - 1, std::memory_order_release); - } - if (el->rc_.compare_exchange_weak(cur_rc, nxt_rc, std::memory_order_release)) { - std::forward(out)(last_one); - return true; - } - ipc::yield(k); - } - } -}; - -} // namespace ipc diff --git a/spaces/xxie92/antibody_visulization/anarci/schemes.py b/spaces/xxie92/antibody_visulization/anarci/schemes.py deleted file mode 100644 index 61f812aeae74b3d0409361a44b3768b70887afd2..0000000000000000000000000000000000000000 --- a/spaces/xxie92/antibody_visulization/anarci/schemes.py +++ /dev/null @@ -1,1691 +0,0 @@ -# ANARCI - Antibody Numbering and Antigen Receptor ClassIfication -# Copyright (C) 2016 Oxford Protein Informatics Group (OPIG) -# -# This program is free software: you can redistribute it and/or modify -# it under the terms of the GNU General Public License as published by -# the Free Software Foundation, either version 3 of the License, or -# (at your option) any later version. -# -# This program is distributed in the hope that it will be useful, -# but WITHOUT ANY WARRANTY; without even the implied warranty of -# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the -# GNU General Public License for more details.# -# -# You should have received a copy of the GNU General Public License -# along with this program. If not, see . - -''' -Module containing functions to convert hmm alignment to a numbering scheme. 
- -Currently implemented - -For IG's -IMGT -Chothia -Kabat -Martin (Extended Chothia) -Aho -Wolfguy - -For TR's -IMGT -(Aho) - ---------------------------------------------------------------------------------------------------------------------- -Functions are written to a template: - -There are 128 match states in the HMMs (these are the IMGT states). The alignment to these states must be converted to -correspond to the scheme of choice. - -We define: - - a state string consisting of 'X' and 'I' where: - X means that for the state there is an equivalent position in the numbering scheme. - I means that for the state there is not an equivalent position in the numbering scheme. It should therefore be - considered as an insertion in the scheme. - - - a region string consisting of characters (integers in the currently implemented schemes). Each character -corresponds to a contiguous region. Therefore each state can be assigned a region according to the scheme. - - - a mapping between region characters and region indices as a dictionary. e.g. the first region character maps -to 0, second to 1 ... - - - a dictionary containing the difference between state number (imgt) and scheme number at the *beginning* of -each region using the region indices as keys and the difference as values. - - - the number of regions defined - - - a list for which delete states should not be included in the numbering (typically those for the cdrs). This -will allow the length of the region to be the number of residues found instead of the number of possible states plus -insertions. - - -This all goes into the _number_regions function along with the sequence and the state_vector (the alignment from the -HMM). - -_number regions will then divide the aligned part of the sequence into as many regions as defined above. Within each -region it will give a numbering according to the input parameters. A list of lists will be returned containing the -numbered sequence for each region. - -Some of the regions will not be numbered correctly according to the scheme. For example the insertions for the CDRs -will not necessarily be on the correct residue. For each different scheme these regions are then modified (see code -for implementation) - -Finally the full numbered sequence is compiled and returned to the calling function. ---------------------------------------------------------------------------------------------------------------------- - -Other schemes can be implemented following the template above. - - -''' - -# Alphabet used for insertion (last (-1th) is a blank space for no insertion) -alphabet = ["A", "B", "C", "D", "E", "F", "G", "H", "I", "J", "K", "L", "M", "N", "O", "P", "Q", "R", "S", "T", "U", "V", "W", "X", "Y", "Z", "AA", "BB", "CC", "DD", "EE", "FF", "GG", "HH", "II", "JJ", "KK", "LL", "MM", "NN", "OO", "PP", "QQ", "RR", "SS", "TT", "UU", "VV", "WW", "XX", "YY", "ZZ", " "] - -# Blosum62 matrix. 
Used in some annotation methods to recognise pre-defined motifs -blosum62 = {('B', 'N'): 3, ('W', 'L'): -2, ('G', 'G'): 6, ('X', 'S'): 0, ('X', 'D'): -1, ('K', 'G'): -2, ('S', 'E'): 0, ('X', 'M'): -1, ('Y', 'E'): -2, ('W', 'R'): -3, ('I', 'R'): -3, ('X', 'Z'): -1, ('H', 'E'): 0, ('V', 'M'): 1, ('N', 'R'): 0, ('I', 'D'): -3, ('F', 'D'): -3, ('W', 'C'): -2, ('N', 'A'): -2, ('W', 'Q'): -2, ('L', 'Q'): -2, ('S', 'N'): 1, ('Z', 'K'): 1, ('V', 'N'): -3, ('Q', 'N'): 0, ('M', 'K'): -1, ('V', 'H'): -3, ('G', 'E'): -2, ('S', 'L'): -2, ('P', 'R'): -2, ('D', 'A'): -2, ('S', 'C'): -1, ('E', 'D'): 2, ('Y', 'G'): -3, ('W', 'P'): -4, ('X', 'X'): -1, ('Z', 'L'): -3, ('Q', 'A'): -1, ('V', 'Y'): -1, ('W', 'A'): -3, ('G', 'D'): -1, ('X', 'P'): -2, ('K', 'D'): -1, ('T', 'N'): 0, ('Y', 'F'): 3, ('W', 'W'): 11, ('Z', 'M'): -1, ('L', 'D'): -4, ('M', 'R'): -1, ('Y', 'K'): -2, ('F', 'E'): -3, ('M', 'E'): -2, ('S', 'S'): 4, ('X', 'C'): -2, ('Y', 'L'): -1, ('H', 'R'): 0, ('P', 'P'): 7, ('K', 'C'): -3, ('S', 'A'): 1, ('P', 'I'): -3, ('Q', 'Q'): 5, ('L', 'I'): 2, ('P', 'F'): -4, ('B', 'A'): -2, ('Z', 'N'): 0, ('M', 'Q'): 0, ('V', 'I'): 3, ('Q', 'C'): -3, ('I', 'H'): -3, ('Z', 'D'): 1, ('Z', 'P'): -1, ('Y', 'W'): 2, ('T', 'G'): -2, ('B', 'P'): -2, ('P', 'A'): -1, ('C', 'D'): -3, ('Y', 'H'): 2, ('X', 'V'): -1, ('B', 'B'): 4, ('Z', 'F'): -3, ('M', 'L'): 2, ('F', 'G'): -3, ('S', 'M'): -1, ('M', 'G'): -3, ('Z', 'Q'): 3, ('S', 'Q'): 0, ('X', 'A'): 0, ('V', 'T'): 0, ('W', 'F'): 1, ('S', 'H'): -1, ('X', 'N'): -1, ('B', 'Q'): 0, ('K', 'A'): -1, ('I', 'Q'): -3, ('X', 'W'): -2, ('N', 'N'): 6, ('W', 'T'): -2, ('P', 'D'): -1, ('B', 'C'): -3, ('I', 'C'): -1, ('V', 'K'): -2, ('X', 'Y'): -1, ('K', 'R'): 2, ('Z', 'R'): 0, ('W', 'E'): -3, ('T', 'E'): -1, ('B', 'R'): -1, ('L', 'R'): -2, ('Q', 'R'): 1, ('X', 'F'): -1, ('T', 'S'): 1, ('B', 'D'): 4, ('Z', 'A'): -1, ('M', 'N'): -2, ('V', 'D'): -3, ('F', 'A'): -2, ('X', 'E'): -1, ('F', 'H'): -1, ('M', 'A'): -1, ('K', 'Q'): 1, ('Z', 'S'): 0, ('X', 'G'): -1, ('V', 'V'): 4, ('W', 'D'): -4, ('X', 'H'): -1, ('S', 'F'): -2, ('X', 'L'): -1, ('B', 'S'): 0, ('S', 'G'): 0, ('P', 'M'): -2, ('Y', 'M'): -1, ('H', 'D'): -1, ('B', 'E'): 1, ('Z', 'B'): 1, ('I', 'E'): -3, ('V', 'E'): -2, ('X', 'T'): 0, ('X', 'R'): -1, ('R', 'R'): 5, ('Z', 'T'): -1, ('Y', 'D'): -3, ('V', 'W'): -3, ('F', 'L'): 0, ('T', 'C'): -1, ('X', 'Q'): -1, ('B', 'T'): -1, ('K', 'N'): 0, ('T', 'H'): -2, ('Y', 'I'): -1, ('F', 'Q'): -3, ('T', 'I'): -1, ('T', 'Q'): -1, ('P', 'L'): -3, ('R', 'A'): -1, ('B', 'F'): -3, ('Z', 'C'): -3, ('M', 'H'): -2, ('V', 'F'): -1, ('F', 'C'): -2, ('L', 'L'): 4, ('M', 'C'): -1, ('C', 'R'): -3, ('D', 'D'): 6, ('E', 'R'): 0, ('V', 'P'): -2, ('S', 'D'): 0, ('E', 'E'): 5, ('W', 'G'): -2, ('P', 'C'): -3, ('F', 'R'): -3, ('B', 'G'): -1, ('C', 'C'): 9, ('I', 'G'): -4, ('V', 'G'): -3, ('W', 'K'): -3, ('G', 'N'): 0, ('I', 'N'): -3, ('Z', 'V'): -2, ('A', 'A'): 4, ('V', 'Q'): -2, ('F', 'K'): -3, ('T', 'A'): 0, ('B', 'V'): -3, ('K', 'L'): -2, ('L', 'N'): -3, ('Y', 'N'): -2, ('F', 'F'): 6, ('L', 'G'): -4, ('B', 'H'): 0, ('Z', 'E'): 4, ('Q', 'D'): 0, ('X', 'B'): -1, ('Z', 'W'): -3, ('S', 'K'): 0, ('X', 'K'): -1, ('V', 'R'): -3, ('K', 'E'): 1, ('I', 'A'): -1, ('P', 'H'): -2, ('B', 'W'): -4, ('K', 'K'): 5, ('H', 'C'): -3, ('E', 'N'): 0, ('Y', 'Q'): -1, ('H', 'H'): 8, ('B', 'I'): -3, ('C', 'A'): 0, ('I', 'I'): 4, ('V', 'A'): 0, ('W', 'I'): -3, ('T', 'F'): -2, ('V', 'S'): -2, ('T', 'T'): 5, ('F', 'M'): 0, ('L', 'E'): -3, ('M', 'M'): 5, ('Z', 'G'): -2, ('D', 'R'): -2, ('M', 'D'): -3, ('W', 'H'): -2, ('G', 'C'): -3, ('S', 'R'): -1, 
('S', 'I'): -2, ('P', 'Q'): -1, ('Y', 'A'): -2, ('X', 'I'): -1, ('E', 'A'): -1, ('B', 'Y'): -3, ('K', 'I'): -3, ('H', 'A'): -2, ('P', 'G'): -2, ('F', 'N'): -3, ('H', 'N'): 1, ('B', 'K'): 0, ('V', 'C'): -1, ('T', 'L'): -1, ('P', 'K'): -1, ('W', 'S'): -3, ('T', 'D'): -1, ('T', 'M'): -1, ('P', 'N'): -2, ('K', 'H'): -1, ('T', 'R'): -1, ('Y', 'R'): -2, ('L', 'C'): -1, ('B', 'L'): -4, ('Z', 'Y'): -2, ('W', 'N'): -4, ('G', 'A'): 0, ('S', 'P'): -1, ('E', 'Q'): 2, ('C', 'N'): -3, ('H', 'Q'): 0, ('D', 'N'): 1, ('Y', 'C'): -2, ('L', 'H'): -3, ('E', 'C'): -4, ('Z', 'H'): 0, ('H', 'G'): -2, ('P', 'E'): -1, ('Y', 'S'): -2, ('G', 'R'): -2, ('B', 'M'): -3, ('Z', 'Z'): 4, ('W', 'M'): -1, ('Y', 'T'): -2, ('Y', 'P'): -3, ('Y', 'Y'): 7, ('T', 'K'): -1, ('Z', 'I'): -3, ('T', 'P'): -1, ('V', 'L'): 1, ('F', 'I'): 0, ('G', 'Q'): -2, ('L', 'A'): -1, ('M', 'I'): 1} - - -def smooth_insertions(state_vector): - ''' - The function aims to correct to the expected imgt alignment. Renumbering functions then translate from the imgt scheme to the - appropriate scheme. - - Handle insertions made by HMMER that we suspect may be in the wrong position. - Edge cases include: - - Insertions at the C terminal of fw1, fw3 and fw3 regions. Can occur when 'conserved' residues have been mutated and the - same amino acid appears in the the following CDR (e.g. mutate cysteine at 104 but the CDR3 has one or more cysteines) - - Same as above possible (but not observed in structure seqs) for N terminal of fw2, fw3 and fw4... TODO - - Heavily mutated N terminal regions that are partially recognised (e.g. 3gk8 chain H). Insertions should not be allowed - before N terminal deletions have been used. Preserve deletion locations that are not N terminal (e.g. 10 in IMGT H) if - the gap has been placed by the alignment. - - ''' - # Small overhead doing these corrections but worth it for reducing edge cases. - - # Enforce insertion patterns as below. The CDRs are renumbered in each case so that insertions are placed accoring to the scheme -# '11111111111111111111111111222222222222333333333333333334444444444555555555555555555555555555555555555555666666666666677777777777' -# ' mmmi mmmi mmmi ' -# ' mmmi immm mmmi immm mmmi immm ' - - # Enforce any insertions at the end and beginning of framework regions to be moved into the CDR region for renumbering. - enforced_patterns = [ [(25,'m'),(26,'m'),( 27,'m'),( 28,'i')], - [(38,'i'),(38,'m'),(39,'m'),(40,'m')], - [(54,'m'),(55,'m'),(56,'m'),(57,'i')], - [(65,'i'),(65,'m'),(66,'m'),(67,'m')], - [(103,'m'),(104,'m'),(105,'m'),(106,'i')], - [(117,'i'),(117,'m'),(118,'m'),(119,'m')] ] - - # Insertions in FW1 are only allowed if there are a fewer number of n-terminal deletions made. - - state_buffer = [] - sv = [] - for (state_id, state_type ), si in state_vector: - if state_id < 23: # Everything before the cysteine at 23. 
- state_buffer.append( ((state_id, state_type ), si) ) - reg = -1 - elif 25 <= state_id < 28: # Add to the buffer - state_buffer.append( ((state_id, state_type ), si) ) - reg = 0 - elif 37 < state_id <= 40: # Add to the buffer - state_buffer.append( ((state_id, state_type ), si) ) - reg = 1 - elif 54 <= state_id < 57: # Add to the buffer - state_buffer.append( ((state_id, state_type ), si) ) - reg = 2 - elif 64 < state_id <= 67: # Add to the buffer - state_buffer.append( ((state_id, state_type ), si) ) - reg = 3 - elif 103 <= state_id < 106: # Add to the buffer - state_buffer.append( ((state_id, state_type ), si) ) - reg = 4 - elif 116 < state_id <= 119: # Add to the buffer - state_buffer.append( ((state_id, state_type ), si) ) - reg = 5 - elif len(state_buffer) != 0: # Add the buffer and reset - - # Find the number of insertions in the buffer - nins = sum( 1 for s in state_buffer if s[0][1] == 'i' ) - - # If there are insertions, adjust the alignment - if nins > 0: # We have insertions - - if reg == -1: # FW1, only adjust if there are the same or more N terminal deletions than insertions - nt_dels = state_buffer[0][0][0] - 1 # Missing states - for (_id, _type ), _si in state_buffer: # Explicit deletion states. - if _type == 'd' or _si == None: - nt_dels +=1 - else: # First residue found - break - if nt_dels >= nins: # More n terminal deletions than insertions found. Likely misalignment. - - # Preserve the deleted states structure by using the same match annotations - new_states = [ s for s, _ in state_buffer if s[1] == 'm'] - _first = new_states[0][0] - - # Remove the deletions so that only residue positions are included - state_buffer = [ s for s in state_buffer if s[0][1] != 'd' ] - - # Extend N terminal states backwards from the first match states - _add = len( state_buffer ) - len( new_states ) - assert _add >= 0, 'Implementation logic error' # Should be adding a positive number of positions - new_states = [ (_,'m') for _ in range( _first - _add, _first ) ] + new_states - assert len(new_states)==len(state_buffer), 'Implementation logic error' # Should have the same length - - # Assign them preserving the order of the sequence. - for i in range( len(state_buffer ) ): - sv.append( ( new_states[i], state_buffer[i][1]) ) - else: - sv += state_buffer # The insertions may be incorrect but unknown what to do. Let the alignment place. - else: - # Remove any deletions in the buffer. Unlikely to happen but do anyway - state_buffer = [ s for s in state_buffer if s[0][1] != 'd' ] - - # Define the new states defined by the enforced pattern and the length of the buffer - if reg % 2: # nterm fw - new_states = [enforced_patterns[reg][0]]*max( 0, len(state_buffer)-3) + enforced_patterns[reg][ max( 4-len(state_buffer), 1):] - else: # cterm fw - new_states = enforced_patterns[reg][:3] + [enforced_patterns[reg][2]]*max( 0, len(state_buffer)-3) - # Assign them preserving the order of the sequence. - for i in range( len(state_buffer ) ): - sv.append( ( new_states[i], state_buffer[i][1]) ) - - else: # Nothing to do - either all match or deletion states. 
- sv += state_buffer - - # Add the current state - sv.append( ((state_id, state_type ), si) ) - - # Reset state buffer - state_buffer = [] - - else: # Simply append - sv.append( ((state_id, state_type ), si) ) - - - return sv - - -# General function to give annotations for regions that have direct mappings onto the hmm alignment (imgt states) -def _number_regions(sequence, state_vector, state_string , region_string, region_index_dict, rels, n_regions, exclude_deletions): - """ - General function to number a sequence and divide it into different regions - - @param sequence: The sequence string - @param state_vector: The list of states from the aligned hmm - @param state_string: A string of states for the scheme relative to IMGT (this is X for a direct equivalence, I if needs to be treated as insertion) - @param region_string: A string of characters that indicate which hmm states are in each regions for this scheme (i.e. how should the sequence be divided up) - @param region_index_dict: A dictionary converting the characters in region string to an index of the regions. - @param rels: The difference of the numbering integer at the *start* of each region - @param n_regions: The number of regions - @param exclude_deletions: A list of region indices for which deletion states should not be included. Typically the CDRs. - These will be reannotated in the scheme function. Also allows the reset of insertions. - - @return: A list of lists where each region has been numbered according to the scheme. Some regions will need renumbering. This should be taken care of after the function called. - - """ - - state_vector = smooth_insertions( state_vector ) - - _regions = [ [] for _ in range(n_regions) ] - - # Initialise the insertion index (-1 is a blank space) and the previous state. - insertion = -1 - previous_state_id = 1 - previous_state_type = 'd' - start_index, end_index = None, None - - region = None - - # Iterate over the aligned state vector - for (state_id, state_type ), si in state_vector: - - # Retrieve the region index - if state_type != "i" or region is None: # BUG_FIX - JD 9/4/15 - do not allow a new region to start as an insertion. - region = region_index_dict[region_string[state_id-1]] - - - # Check the state_types - if state_type == "m": # It is a match - - # Check whether this position is in the scheme as an independent state - if state_string[state_id-1]=="I": # No, it should be treated as an insertion - if previous_state_type != 'd': # Unless there was a deletion beforehand in which case this should be a real pos. 
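For orientation, the state_vector consumed here is the per-residue HMMER alignment: a list of ((imgt_state, state_type), sequence_index) tuples where state_type is 'm' (match), 'i' (insertion) or 'd' (deletion). The toy value below is hand-written purely to illustrate that shape; it is not a real alignment, and the assumption that deletion entries carry no usable sequence index follows from how smooth_insertions and this loop treat them.

```python
# Toy illustration of the expected input shape (hand-written, not a real alignment).
# Each element is ((imgt_state, state_type), sequence_index).
sequence = "EVQLV"
state_vector = [
    ((1, "m"), 0),     # IMGT state 1 matches sequence[0] ("E")
    ((2, "m"), 1),     # IMGT state 2 matches sequence[1] ("V")
    ((3, "d"), None),  # IMGT state 3 deleted - no residue consumed (index unused)
    ((4, "m"), 2),     # IMGT state 4 matches sequence[2] ("Q")
    ((4, "i"), 3),     # insertion after state 4 takes sequence[3] ("L")
    ((5, "m"), 4),     # IMGT state 5 matches sequence[4] ("V")
]
```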
- insertion +=1 # Increment the insertion annotation index - rels[region] -= 1 # Update the relative numbering from the imgt states - else: # Yes - insertion = -1 # Reset the insertions - - # Add the numbering annotation to the appropriate region list - _regions[region].append( ( (state_id + rels[region], alphabet[insertion] ), sequence[si] ) ) - previous_state_id = state_id # Record the previous state ID - if start_index is None: - start_index = si - end_index = si - - previous_state_type = state_type - - elif state_type == "i": # It is an insertion - insertion +=1 # Increment the insertion annotation index - - # Add the numbering annotation to the appropriate region list - _regions[region].append( ( (previous_state_id + rels[region], alphabet[insertion]), sequence[si] ) ) - if start_index is None: - start_index = si - end_index = si - - previous_state_type = state_type - - else: # It is a deletion - previous_state_type = state_type - - # Check whether this position is in the scheme as an independent state - if state_string[state_id-1]=="I": # No, therefore irrelevant to the scheme. - rels[region] -= 1 # Update the relative numbering from the imgt states - continue - - insertion = -1 # Reset the insertions - previous_state_id = state_id # Record the previous state ID, should not be needed (no delete to insert state transition) - - - # Reset the inssertion index if necessary and allowed. (Means the insertion code is meaningless and will be reannotated) - if insertion >= 25 and region in exclude_deletions: - insertion = 0 - - assert insertion < 25, "Too many insertions for numbering scheme to handle" # We ran out of letters. - - return _regions, start_index, end_index - - -# Functions to perform the numbering and the corrections for each of the implemented schemes. -# These have been written fairly verbosely so that the template of how to generate a function for a new scheme is more clear. -# They have two stages: Perform the mapping between imgt and the scheme; Renumber those regions that do not map nicely onto imgt (e.g. CDR insertions) - - - -######## -# IMGT # -######## -# - Renumbering of the CDR 1 and 2 regions in IMGT has now been implemented to ensure consistency with the gapping rules of the -# scheme. Previously gaps were defined using the HMM alignment as the underlying model was already based on the IMGT scheme. This -# worked well in original test cases but appears to give inaccurate annotations in a significant number of cases in NGS size -# sequence sets. We therefore now explicitly renumber the CDR 1 and 2 as with all the other schemes. - -def number_imgt(state_vector, sequence): - """ - Apply the IMGT numbering scheme for heavy or light chains - - Rules should be implemented using two strings - the state string and the region string. - - There are 128 states in the HMMs. Treat X as a direct match in IMGT scheme, I is an insertion. (All X's for IMGT) - XXXXXXXXXXXXXXXXXXXXXXXXXX XXXXXXXXXXXX XXXXXXXXXXXXXXXXX XXXXXXXXXX XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX XXXXXXXXXXXXX XXXXXXXXXXX - 11111111111111111111111111 222222222222 33333333333333333 4444444444 555555555555555555555555555555555555555 6666666666666 77777777777 - - Regions - (N.B These do not match up with any particular definition of CDR) - 1. All positions before CDR1 - 2. CDR1 positions - 3. Positions between CDR1/2 - 4. CDR2 positions - 5. Positions between CDR2/3 - 6. CDR positions 105 (inc) to 118 (exc) - 7. 
Positions after CDR3 - - """ - - # Set up the numbering - - # State string - 'X' means the imgt position exists in the scheme. 'I' means that it should be treated as an insertion of the previous number - state_string = 'XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX' - - # Region string - regions that should be treated separately in putting the numbering together - region_string = '11111111111111111111111111222222222222333333333333333334444444444555555555555555555555555555555555555555666666666666677777777777' - - region_index_dict = { - "1":0, - "2":1, - "3":2, - "4":3, - "5":4, - "6":5, - "7":6 - } - - # Define how the scheme's numbering differs from IMGT at the start of each region. - # This is updated in the loop below - rels = {0:0, - 1:0, - 2:0, - 3:0, - 4:0, - 5:0, - 6:0, - 7:0 - } - - n_regions = 7 - - exclude_deletions = [1,3,5] - - _regions, startindex, endindex = _number_regions(sequence, state_vector, state_string , region_string, region_index_dict, rels, n_regions, exclude_deletions) - - ############### - # Renumbering # - ############### - - _numbering = [ _regions[0], # Fw1 - [], # CDR1 - _regions[2], # Fw2 - [], # CDR2 - _regions[4], # Fw3 - [], # CDR3 - _regions[6], # Fw4 - - ] - - # The alignment from HMMER should be correct for CDRs 1 and 2. Testing has shown not always the case and 'manual' renumbering - # is required as with the other schemes. - - # CDR1 - # CDR1 has a range from 27 (inc.) to 39 (exc.) and has a theoretical maximum length of 12. - cdr1seq = "".join([ x[1] for x in _regions[1] if x[1] != "-" ]) - cdr1length = len(cdr1seq) - si = 0 - prev_state = 26 - for ann in get_imgt_cdr(cdr1length, 12, 27, 39): - if not ann: - _numbering[1].append( ((prev_state+1, ' '), '-') ) - prev_state += 1 - else: - _numbering[1].append( (ann, cdr1seq[si]) ) - prev_state = ann[0] - si += 1 - - # CDR2 - # CDR2 has a range from 56 (inc.) to 66 (exc.) and has a theoretical length of 10. - cdr2seq = "".join([ x[1] for x in _regions[3] if x[1] != "-" ]) - cdr2length = len(cdr2seq) - si = 0 - prev_state = 55 - for ann in get_imgt_cdr(cdr2length, 10, 56, 66): - if not ann: - _numbering[3].append( ((prev_state+1, ' '), '-') ) - prev_state += 1 - else: - _numbering[3].append( (ann, cdr2seq[si]) ) - prev_state = ann[0] - si += 1 - - # FW3. We allow the HMM to place insertions. Technically all insertion points are taken care of but in reality insertions can - # and do occur. No specification of where the insertions should be placed. - - - # CDR3 - # CDR3 has a range from 105 (inc.) to 118 (exc.). Insertions are placed on 112 and 111 symetrically. IMGT has a technical - # maximum length of 65 (13 positions, 26*2 insertions) . In practice ANARCI will not recognise CDR3s of this length. - cdr3seq = "".join([ x[1] for x in _regions[5] if x[1] != "-" ]) - cdr3length = len(cdr3seq) - if cdr3length > 117: return [], startindex, endindex # Too many insertions. Do not apply numbering. - si = 0 - previous_state_id = 104 - for ann in get_imgt_cdr(cdr3length, 13, 105, 118): - if ann is None: - _numbering[5].append( ((previous_state_id+1, " "), "-" ) ) - previous_state_id+=1 - else: - _numbering[5].append( (ann, cdr3seq[si] ) ) - previous_state_id = ann[0] - si+=1 - - # Return the full vector and the start and end indices of the numbered region of the sequence - return gap_missing( _numbering ), startindex, endindex - -def get_imgt_cdr(length, maxlength, start, end): - """ - Symmetrically number a CDR loop (e.g. 
CDRL1/CDRH2 for IMGT) - @param length: Define the length of target CDR - @param maxlength: Define the theoretical limit (e.g. L1 = 12 for the IMGT scheme) - @param start, end: Start and end position numbers - """ - annotations = [ None for _ in range(max(length, maxlength)) ] - if length == 0: - return annotations - elif length == 1: - annotations[0] = (start, ' ') - return annotations - - front, back = 0, -1 - #az = "ABCDEFGHIJKLMNOPQRSTUVWXYZ" - #za = "ZYXWVUTSRQPONMLKJIHGFEDCBA" - - az = alphabet[:-1] - za = az[::-1] - - for i in range(min(length, maxlength)): - if i % 2: - annotations[back] = (end + back, " ") - back -= 1 - else: - annotations[front] = (start + front, " ") - front += 1 - - # Add insertions around the centre point - centrepoint = [ i for i,v in enumerate(annotations) if v == None ] - if not centrepoint: - return annotations - - centre_left = annotations[min(centrepoint)-1][0] # Get the index right before the first None - centre_right = annotations[max(centrepoint)+1][0] # Get the index right after the first None - - # For cases with an even max length - if not maxlength % 2: - frontfactor, backfactor = maxlength//2, maxlength//2 - # For cases with an odd max length - else: - frontfactor, backfactor = (maxlength//2)+1, maxlength//2 - - for i in range(max(0, length-maxlength)): - if not i % 2: - annotations[back] = (centre_right, za[back + backfactor]) - back -= 1 - else: - annotations[front] = (centre_left, az[front - frontfactor]) - front += 1 - - return annotations - - -####### -# Aho # -####### -# Heuristic regapping based on the AHo specification as detailed on AAAAA website. Gap order depends on the chain type -def number_aho(state_vector, sequence, chain_type): - """ - Apply the Aho numbering scheme - - Rules should be implemented using two strings - the state string and the region string. - - There are 128 states in the HMMs. Treat X as a direct match in IMGT scheme, I is an insertion. (All X's for IMGT) - - XXXXXXX XXX XXXXXXXXXXXXXX XXXXXXXXXXXXXXXX XXXXXXXXXXXXXXX XXXXXXXXXXXXXXXXXXXX XXXXXXXXXXXXXXXX XXXXXXXXXXXXX XXXXXXXXXXXXX XXXXXXXXXXX - AAAAAAA BBB CCCCCCCCCCCCCC DDDDDDDDDDDDDDDD EEEEEEEEEEEEEEE FFFFFFFFFFFFFFFFFFFF HHHHHHHHHHHHHHHH IIIIIIIIIIIII JJJJJJJJJJJJJ KKKKKKKKKKK - - - Regions - (N.B These do not match up with any particular definition of CDR) - A. EMPTY (now included in B) - B. 1-10 inclusive. Indel occurs at 8 - C. 11-24 inclusive. - D. 25-42 inclusive (deletion surround 28) 32-42 inclusive (deletions surround 36) - E. 43-57 inclusive - F. 58-77 inclusive (deletions surround 63). Alpha chains have deletions at 74,75 - G. EMPTY (now included in H) - H. 78-93 inclusive gaps on 86 then 85, insertions on 85 linearly - I. 94-106 inclusive - J. 107-138 inclusive gaps on 123 symetrically. - K. 139-149 inclusive. - - """ - - # Set up the numbering - - # State string - 'X' means the imgt position exists in the scheme. 
'I' means that it should be treated as an insertion of the previous number - state_string = 'XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX' - - # Region string - regions that should be treated separately in putting the numbering together - region_string = 'BBBBBBBBBBCCCCCCCCCCCCCCDDDDDDDDDDDDDDDDEEEEEEEEEEEEEEEFFFFFFFFFFFFFFFFFFFFHHHHHHHHHHHHHHHHIIIIIIIIIIIIIJJJJJJJJJJJJJKKKKKKKKKKK' -# 1 2 3 4 5 7 8 9 10 - - - region_index_dict = dict( list(zip( "ABCDEFGHIJK", list(range(11)) )) ) - - # Define how the scheme's numbering differs from IMGT at the start of each region. - # This is updated in the loop below - rels = {0:0, - 1:0, - 2:0, - 3:0, - 4:2, - 5:2, - 6:2, - 7:2, - 8:2, - 9:2, - 10:21} - - n_regions = 11 - - exclude_deletions = [1,3,4,5,7,9] - - _regions, startindex, endindex = _number_regions(sequence, state_vector, state_string , region_string, region_index_dict, rels, n_regions, exclude_deletions) - - ############### - # Renumbering # - ############### - - _numbering = [ _regions[0], _regions[1], _regions[2],[], _regions[4], [], _regions[6], [], _regions[8],_regions[9],_regions[10] ] - - ################################## - # Move the indel in fw 1 onto 8 # - ################################## - - # Place indels on 8 - # Find the first recognised residue and change the expected length of the stretch given the starting point. - # This prevents n terminal deletions being placed at 8 incorrectly. - length = len( _regions[1] ) - if length > 0: - start = _regions[1][0][0][0] - stretch_len = 10 - (start -1) - if length > stretch_len: # Insertions are present. Place on 8 - annotations = [ (_," ") for _ in range(start,9) ] + [ (8,alphabet[_]) for _ in range( length - stretch_len ) ] + [(9," "),(10," ")] - else: - ordered_deletions = [(8," ")] + [(_," ") for _ in range(start, 11) if _ != 8] - annotations = sorted( ordered_deletions[max(stretch_len-length, 0):] ) - _numbering[1] = [ (annotations[i], _regions[1][i][1]) for i in range(length) ] - - ######### - # CDR 1 # - divided in two parts in the Aho scheme. - ######### - gaps at 28 depending on the chain type. - - # "VH domains, as well as the majority of the VA domains, have a one-residue gap in position 28, VK and VB domains a two-residue - # gap in position 27 and 28." - - # We use the link below as the reference for the scheme. - # https://www.bioc.uzh.ch/plueckthun/antibody/Numbering/Alignment.html - - # Some of the header lines in these images are offset by one (VH)! The gaps really are centered at 28 and 36 - # https://www.bioc.uzh.ch/plueckthun/antibody/Sequences/Rearranged/PDB_VK.html - # https://www.bioc.uzh.ch/plueckthun/antibody/Sequences/Rearranged/PDB_VL.html - # https://www.bioc.uzh.ch/plueckthun/antibody/Sequences/Rearranged/PDB_VH.html - # https://www.bioc.uzh.ch/plueckthun/antibody/Sequences/Rearranged/PDB_VA.html - # https://www.bioc.uzh.ch/plueckthun/antibody/Sequences/Rearranged/PDB_VB.html - # https://www.bioc.uzh.ch/plueckthun/antibody/Sequences/Rearranged/PDB_VG.html - # https://www.bioc.uzh.ch/plueckthun/antibody/Sequences/Rearranged/PDB_VD.html - - # We gap the CDR1 in a heuristic way using the gaps. - # This means that CDR1 gapping will not always be correct. For example if one grafts a Kappa CDR1 loop onto a Lambda framework - # the gapping patter might now be incorrect. - # Not a fan of being so prescriptive. 
- - # The CDR1 region included here ranges from AHo 25 to AHo 42 inclusive - - # The order in which the two loops are gapped is dependent on the chain type (see alignments in URLs above). - # Not all lengths are defined as not all lengths were crystallised in 2001 (or today). Where no example of the length was - # available the rule followed is to continue gapping the C terminal 'loop', then the N terminal 'loop', then 31 then the fw. - # In all cases I have commented where the gapping is undefined. Note that for alpha chains the gapping rules are inconsistent. - - _L = 28,36,35,37,34,38,27,29,33,39,32,40,26,30,25,31,41,42 - # |-> undefined by AHo. Gapping C terminal loop then N terminal then 31, then fw. - _K = 28,27,36,35,37,34,38,33,39,32,40,29,26,30,25,31,41,42 - # |-> undefined by AHo. Gapping C terminal loop then N terminal then fw. - _H = 28,36,35,37,34,38,27,33,39,32,40,29,26,30,25,31,41,42 - # |-> undefined by AHo. Gapping C terminal loop then N terminal then fw. - # N.B. The header on the alignment image for PDB_VH is offset by 1! - _A = 28,36,35,37,34,38,33,39,27,32,40,29,26,30,25,31,41,42 - # |-> undefined by AHo. Gapping C terminal loop then N terminal then fw. - # N.B The gapping is inconsistent for alpha chains. I follow the paper's statement that most VA have - # one gap at 28 and remove 28 and 27 before removing 40. - _B = 28,36,35,37,34,38,33,39,27,32,40,29,26,30,25,31,41,42 - # |-> undefined by AHo. Gapping C terminal loop then N terminal then 31, then fw. - _D = 28,36,35,37,34,38,27,33,39,32,40,29,26,30,25,31,41,42 - # |-> undefined by AHo. Gapping C terminal loop then N terminal then 31, then fw. - # N.B only two sequence patterns available. - _G = 28,36,35,37,34,38,27,33,39,32,40,29,26,30,25,31,41,42 - # |-> undefined by AHo. Gapping C terminal loop then N terminal then 31, then fw. - # N.B only one sequence patterns available. Delta copied. - - ordered_deletions = { 'L':_L,'K':_K, 'H':_H, 'A':_A, 'B':_B, 'D':_D, 'G':_G } - - length = len( _regions[3] ) - - annotations = [ (i, ' ') for i in sorted( ordered_deletions[chain_type][ max(18-length, 0): ] ) ] - - # Insertions are not described in the AHo scheme but must be included as there is a significant number of CDRH1s that are - # longer than the number of positions. - insertions = max( length-18 , 0 ) - if insertions > 26: - return [], startindex, endindex # Too many insertions. Do not apply numbering. - elif insertions > 0: - # They are placed on residue 36 alphabetically. - insertat = annotations.index( (36, ' ') )+1 # Always 12 - assert insertat == 12, 'AHo numbering failed' - annotations = annotations[:insertat] + [ (36, alphabet[a]) for a in range( insertions ) ] + annotations[insertat:] - - _numbering[3] = [ (annotations[i], _regions[3][i][1]) for i in range(length) ] - - ######### - # CDR 2 # - ######### - # Gaps are placed symetically at 63. - # For VA a second gap is placed at 74 and 75 according to the text in the paper. However, all the reference sequences show a - # gap at 73 and 74 see: - # https://www.bioc.uzh.ch/plueckthun/antibody/Sequences/Rearranged/PDB_VA.html - # and - # https://www.bioc.uzh.ch/plueckthun/antibody/Numbering/Alignment.html - # Either I am mis-interpreting the text in the paper or there is something a little inconsistent here... - # Given that *all* the numbered examples show the VA gap at 73 and 74 on the AAAAA website I have decided to implement this. 
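Before the CDR 2 handling, a quick worked example of the CDR 1 gapping rule above: the first 18 - length entries of the chain-specific preference order are dropped and the surviving positions are sorted. The sketch below is standalone and mirrors, rather than calls, the code in this file; the helper name is invented.

```python
# Standalone illustration of the AHo CDR1 gapping rule used above
# (mirrors the logic in number_aho; names here are local to this sketch).
_H = (28, 36, 35, 37, 34, 38, 27, 33, 39, 32, 40, 29, 26, 30, 25, 31, 41, 42)

def aho_cdr1_positions(length, order=_H):
    # Drop the first (18 - length) entries of the preference order, then
    # present the surviving positions in ascending numerical order.
    return sorted(order[max(18 - length, 0):])

print(aho_cdr1_positions(16))
# [25, 26, 27, 29, 30, 31, 32, 33, 34, 35, 37, 38, 39, 40, 41, 42]
# i.e. a 16-residue heavy-chain CDR1 is numbered with gaps at AHo 28 and 36.
```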
- # - - # This region describes 58 to 77 inclusive - - if chain_type == 'A': - ordered_deletions = [74,73,63,62,64,61,65,60,66,59,67,58,68,69,70,71,72,75,76,77] - else: - ordered_deletions = [63,62,64,61,65,60,66,59,67,58,68,69,70,71,72,73,74,75,76,77] - - length = len(_regions[5]) - - annotations = [ (i, ' ') for i in sorted( ordered_deletions[ max(20-length, 0): ] ) ] - - # Insertions are not described in the AHo scheme but must be included. - insertions = max( length-20 , 0 ) - if insertions > 26: - return [], startindex, endindex # Too many insertions. Do not apply numbering. - elif insertions > 0: - # They are placed on residue 63 alphabetically. - insertat = annotations.index( (63, ' ') )+1 # Always 6 - assert insertat == 6, 'AHo numbering failed' - annotations = annotations[:insertat] + [ (63, alphabet[a]) for a in range( insertions ) ] + annotations[insertat:] - - _numbering[5] = [ (annotations[i], _regions[5][i][1]) for i in range(length) ] - - ######### - # FW3 ############################################ - # Move deletions onto 86 then 85. Insertions on 85 # - #################################################### - ordered_deletions = [86,85,87,84,88,83,89,82,90,81,91,80,92,79,93,78] - length=len( _regions[7] ) - - annotations = [ (i, ' ') for i in sorted( ordered_deletions[ max(16-length, 0): ] ) ] - - # Insertions are not described in the AHo scheme but must be included. - insertions = max( length-16 , 0 ) - if insertions > 26: - return [], startindex, endindex # Too many insertions. Do not apply numbering. - elif insertions > 0: - # They are placed on residue 85 alphabetically. - insertat = annotations.index( (85, ' ') )+1 # Always 8 - assert insertat == 8, 'AHo numbering failed' - annotations = annotations[:insertat] + [ (85, alphabet[a]) for a in range( insertions ) ] + annotations[insertat:] - - _numbering[7] = [ (annotations[i], _regions[7][i][1]) for i in range(length) ] - - - ######### - # CDR 3 # - ######### - # Deletions on 123. - # Point of the Aho scheme is that they have accounted for all possible positions. - # Assumption is that no more insertions will occur.... - # We'll put insertions on 123 linearly.(i.e.ABCDEF...) if they ever do. - - ordered_deletions = [123,124,122,125,121,126,120,127,119,128,118,129,117,130,116,131,115,132,114,133,113,134,112,135,111, - 136,110,137,109,138,108,107] - - length=len( _regions[9] ) - - annotations = [ (i, ' ') for i in sorted( ordered_deletions[ max(32-length, 0): ] ) ] - - # Insertions are not described in the AHo scheme but must be included. - insertions = max( length-32 , 0 ) - if insertions > 26: - return [], startindex, endindex # Too many insertions. Do not apply numbering. - elif insertions > 0: - # They are placed on residue 123 alphabetically. - insertat = annotations.index( (123, ' ') )+1 # Always 17 - assert insertat == 17, 'AHo numbering failed' - annotations = annotations[:insertat] + [ (123, alphabet[a]) for a in range( insertions ) ] + annotations[insertat:] - - _numbering[9] = [ (annotations[i], _regions[9][i][1]) for i in range(length) ] - - # AHo includes one extra position than IMGT in what it considers the variable domain for light chains. - #If the last state is 148 and there is at least one more residue left, then add the residue to the numbering. 
- numbering = gap_missing( _numbering ) - if len(numbering) > 0: - if numbering[-1][0] == (148, ' ') and numbering[-1][1] != '-' and endindex+1 < len(sequence): - numbering.append( ( (149, ' '), sequence[endindex+1]) ) - endindex +=1 - - return numbering, startindex, endindex - - -########### -# Chothia # -########### - -# Heavy chains -def number_chothia_heavy(state_vector, sequence): - """ - Apply the Chothia numbering scheme for heavy chains - - Rules should be implemented using two strings - the state string and the region string. - - There are 128 states in the HMMs. Treat X as a direct match in Chothia scheme, I is an insertion. - - XXXXXXXXXI XXXXXXXXXXXXX XXXXXXXIIIIXX XXXXXXXXXXXXXXXXXX XXXIXIIXXXX XXXXXXXIXXXXXXXXXXXXXXXXXXIIIXXXXXXXXXX XXXXXXXXIIIXX XXXXXXXXXXX' - 1111111111 2222222222222 3333333333333 444444444444444444 55555555555 666666666666666666666666666666666666666 7777777777777 88888888888' - - Regions - (N.B These do not match up with any particular definition of CDR) - 1 - Put the insertions at Chothia position 6 - 2 - Simple mapping (treat "I" states as inserts and not own match states) - 3 - CDRH1 - 30 (inc) to 34 (exc) put insertions on 31 - 4 - Simple mapping (treat "I" states as inserts and not own match states) - 5 - CDRH2 - 52 (inc) 58 (exc) put insertions on 52 - 6 - Simple mapping (treat "I" states as inserts and not own match states) - 7 - CDRH3 93 (inc) to 103 (exc) put insertion on 100 - 8 - Simple mapping (treat "I" states as inserts and not own match states) - - - Regions 1,3,5 and 7 are renumbered - - """ - - # State string - 'X' means the imgt position exists in the scheme. 'I' means that it should be treated as an insertion of the previous number - state_string = 'XXXXXXXXXIXXXXXXXXXXXXXXXXXXXXIIIIXXXXXXXXXXXXXXXXXXXXXXXIXIIXXXXXXXXXXXIXXXXXXXXXXXXXXXXXXIIIXXXXXXXXXXXXXXXXXXIIIXXXXXXXXXXXXX' - - # Region string - regions that should be treated separately in putting the numbering together - region_string = '11111111112222222222222333333333333333444444444444444455555555555666666666666666666666666666666666666666777777777777788888888888' - - region_index_dict = {"1":0,"2":1,"3":2,"4":3,"5":4,"6":5,"7":6,"8":7} - - # Define how the scheme's numbering differs from IMGT at the start of each region. - # This is updated in the loop below - rels = {0:0, - 1:-1, - 2:-1, - 3:-5, - 4:-5, - 5:-8, - 6:-12, - 7:-15} - - n_regions = 8 - - exclude_deletions = [0,2,4,6] # Don't put deletions in these regions - - _regions, startindex, endindex = _number_regions(sequence, state_vector, state_string , region_string, region_index_dict, rels, n_regions, exclude_deletions) - - - ############### - # Renumbering # - ############### - - _numbering = [ [], _regions[1] , [], _regions[3] , [], _regions[5], [], _regions[7] ] - - # Chothia H region 1 (index 0) - # Insertions are placed at Chothia position 6. - # Count how many we recognised as insertion by the hmm - insertions = len( [ 1 for _ in _regions[0] if _[0][1] != " " ] ) - # We will place all insertion in this region at Chothia position 6. - if insertions: - start = _regions[0][0][0][0] # The starting Chothia number as found by the HMM (could easily start from 2 for example) - # I have a feeling this may be a source of a bug in very unusual cases. Can't break for now. Will catch mistakes in a validate function. 
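        # Illustrative sketch (added commentary, assumed example): if the HMM assigns one extra
        # residue to this N-terminal stretch and the region starts at Chothia position 1, the
        # block below produces 1, 2, 3, 4, 5, 6, 6A, 7, 8, 9 - i.e. the single insertion is
        # labelled 6A rather than shifting any framework position:
        #
        #   expected = [(i, ' ') for i in range(1, 7)] + [(6, 'A')] + [(7, ' '), (8, ' '), (9, ' ')]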
- length = len( _regions[0] ) - annotations = [ (_, " ") for _ in range(start, 7) ] + [ (6, alphabet[_]) for _ in range(insertions) ] + [(7," "),(8," "),(9," ")] - _numbering[0] = [ (annotations[i], _regions[0][i][1]) for i in range(length) ] - else: - _numbering[0] = _regions[0] - - - # CDR1 - # Chothia H region 3 (index 2) - # put insertions onto 31 - length = len( _regions[2] ) - insertions = max(length - 11, 0) # Pulled back to the cysteine as heavily engineered cdr1's are not playing nicely - - if insertions: - annotations = [(_, " ") for _ in range(23,32)] + [(31, alphabet[i]) for i in range(insertions) ] + [(32," "),(33," ")] - else: - annotations = [(_, " ") for _ in range(23,32)][:length-2] + [(32," "),(33," ")][:length] - - _numbering[2] = [ (annotations[i], _regions[2][i][1]) for i in range(length) ] - - # CDR2 - # Chothia H region 5 (index 4) - # put insertions onto 52 - length = len( _regions[4] ) - # 50 to 57 inclusive - insertions = max(length - 8, 0) # Eight positions can be accounted for, the remainder are insertions - # Delete in the order, 52, 51, 50,53, 54 ,55, 56, 57 - annotations = [(50, " "),(51, " "), (52, " ")][:max(0,length-5)] - annotations += [(52, alphabet[i]) for i in range(insertions) ] - annotations += [(53, " "),(54, " "),(55, " "),(56, " "),(57, " ")][ abs( min(0,length-5) ):] - _numbering[4] = [ (annotations[i], _regions[4][i][1]) for i in range(length) ] - - # FW3 - insertions are annotated on 82. The first three are normal positions and annotated automatically. - # Additional insertions do not occur with the kabat or the chothia numbering scheme. - # It does not make sense to place more than A, B, C on 82 as Martin and AHo work show that this is not a place that accepts - # additional insertions. - # The decision here is to allow the alignment to place additional insertions. This is in contrast to Martin where the region - # is renumbered to place insertions on 72. - - # CDR3 - # Chothia H region 7 (index 6) - # put insertions onto 100 - length = len( _regions[6] ) - if length > 36: return [], startindex, endindex # Too many insertions. Do not apply numbering. - annotations = get_cdr3_annotations(length, scheme="chothia", chain_type="heavy") - _numbering[6] = [ (annotations[i], _regions[6][i][1]) for i in range(length) ] - - # Return the full vector and the start and end indices of the numbered region of the sequence - return gap_missing( _numbering ), startindex, endindex - -# Light chains -def number_chothia_light(state_vector, sequence): - """ - Apply the Chothia numbering scheme for light chains - - Rules should be implemented using two strings - the state string and the region string. - - There are 128 states in the HMMs. Treat X as a direct match in Chothia scheme, I is an insertion. 
- XXXXXXXXXXXXXXXXXXXXXXXXXXXXX IIIIIIX XXXXXXXXXXXXXXXXXXXX XIIIIIIIXXX XXXXXIXXXXXXXIIXXXXXXXXXXXXXXXXXXXXXX XXXXXIIIIXX XXXXXXXXXXXXX - 11111111111111111111111111111 2222222 33333333333333333333 44444444444 5555555555555555555555555555555555555 66666666666 7777777777777 - - - Regions - (N.B These do not match up with any particular definition of CDR) - 1 - Simple mapping (treat "I" states as inserts and not own match states) - 2 - CDRL1 - 24 (inc) to 35 (exc) put insertions on 30 - 3 - Simple mapping (treat "I" states as inserts and not own match states) - 4 - CDRL2 - 51 (inc) 55 (exc) put insertions on 52 - 5 - Simple mapping (treat "I" states as inserts and not own match states) - 6 - CDRL3 89 (inc) to 98 (exc) put insertion on 95 - 7 - Simple mapping (treat "I" states as inserts and not own match states) - - Region 2, 3 and 5 are renumbered - - """ - - # Set up the numbering - - # State string - 'X' means the imgt position exists in the scheme. 'I' means that it should be treated as an insertion of the previous number - state_string = 'XXXXXXXXXXXXXXXXXXXXXXXXXXXXXIIIIIIXXXXXXXXXXXXXXXXXXXXXXIIIIIIIXXXXXXXXIXXXXXXXIIXXXXXXXXXXXXXXXXXXXXXXXXXXXIIIIXXXXXXXXXXXXXXX' - - # Region string - regions that should be treated separately in putting the numbering together - region_string = '11111111111111111111111222222222222222223333333333333333444444444445555555555555555555555555555555555555666666666666677777777777' - - region_index_dict = {"1":0,"2":1,"3":2,"4":3,"5":4,"6":5,"7":6} - - # Define how the scheme's numbering differs from IMGT at the start of each region. - # This is updated in the loop below - rels = {0:0, - 1: 0, - 2:-6, - 3:-6, - 4:-13, - 5:-16, - 6:-20, - } - - - n_regions = 7 - - exclude_deletions = [1,3,4,5] - - _regions, startindex, endindex = _number_regions(sequence, state_vector, state_string , region_string, region_index_dict, rels, n_regions, exclude_deletions) - - _numbering = [ _regions[0], [], _regions[2], [], _regions[4], [], _regions[6] ] - - - ############### - # Renumbering # - ############### - - # CDR1 - # Chothia L region 2 (index 1) - # put insertions onto 30 - length = len( _regions[1] ) - insertions = max(length - 11, 0) # Eleven positions can be accounted for, the remainder are insertions - # Delete forward from 31 - annotations = [(24, " "),(25, " "), (26, " "), (27, " "), (28, " "),(29, " "),(30, " ")][:max(0,length)] - annotations += [(30, alphabet[i]) for i in range(insertions) ] - annotations += [(31, " "),(32, " "),(33, " "),(34, " ")][ abs( min(0,length-11) ):] - _numbering[1] = [ (annotations[i], _regions[1][i][1]) for i in range(length) ] - - - # CDR2 - # Chothia L region 4 (index 3) - # put insertions onto 52. - length = len( _regions[3] ) - insertions = max( length - 4, 0 ) - if insertions > 0: - annotations = [(51, " "),(52, " ")] + [(52, alphabet[i]) for i in range(insertions) ] + [(53, " "),(54, " ")] - _numbering[3] = [ (annotations[i], _regions[3][i][1]) for i in range(length) ] - else: # How to gap L2 in Chothia/Kabat/Martin is unclear so we let the alignment do it. - _numbering[3] = _regions[3] - - # FW3 - # Insertions on 68. First deletion 68. 
Otherwise default to alignment - length = len( _regions[4] ) - insertions = max(length - 34, 0) - if insertions > 0: # Insertions on 68 - annotations = [(i," ") for i in range(55,69)]+[(68, alphabet[i]) for i in range(insertions) ]+[(i," ") for i in range(69,89)] - _numbering[4] = [ (annotations[i], _regions[4][i][1]) for i in range(length) ] - elif length == 33: # First deletion on 68 - annotations = [(i," ") for i in range(55,68)]+[(i," ") for i in range(69,89)] - _numbering[4] = [ (annotations[i], _regions[4][i][1]) for i in range(length) ] - else: # More deletions - allow alignment to place them - _numbering[4] = _regions[4] - - - # CDR3 - # Chothia L region 6 (index 5) - # put insertions onto 95 - length = len( _regions[5] ) - - if length > 35: return [], startindex, endindex # Too many insertions. Do not apply numbering. - annotations = get_cdr3_annotations(length, scheme="chothia", chain_type="light") - _numbering[5] = [ (annotations[i], _regions[5][i][1]) for i in range(length) ] - - # Return the full vector and the start and end indices of the numbered region of the sequence - - return gap_missing( _numbering ), startindex, endindex - - -######### -# Kabat # -######### - -# Heavy chains -def number_kabat_heavy(state_vector, sequence): - """ - Apply the Kabat numbering scheme for heavy chains - - Rules should be implemented using two strings - the state string and the region string. - - There are 128 states in the HMMs. Treat X as a direct match in Kabat scheme, I is an insertion. - XXXXXXXXXI XXXXXXXXXXXXXXXXXXXX IIIIXXXXXX XXXXXXXXXXXXXXXX XIXII XXXXXXXXXXXIXXXXXXXXXXXXXXXXXXIIIXXXXXXXXXXXX XXXXXXIII XXXXXXXXXXXXX - 1111111111 22222222222222222222 3333333333 4444444444444444 55555 666666666666666666666666666666666666666666666 777777777 8888888888888 - - - Regions - (N.B These do not match up with any particular definition of CDR) - 1 - Put the insertions at Chothia position 6 - 2 - Simple mapping (treat "I" states as inserts and not own match states) - 3 - CDRH1 - 30 (inc) to 36 (exc) put insertions on 35 - 4 - Simple mapping (treat "I" states as inserts and not own match states) - 5 - CDRH2 - 52 (inc) 58 (exc) put insertions on 52 - 6 - Simple mapping (treat "I" states as inserts and not own match states) - 7 - CDRH3 93 (inc) to 103 (exc) put insertion on 100 - 8 - Simple mapping (treat "I" states as inserts and not own match states) - - """ - - # Set up the numbering - - # State string - 'X' means the imgt position exists in the scheme. 'I' means that it should be treated as an insertion of the previous number - state_string = 'XXXXXXXXXIXXXXXXXXXXXXXXXXXXXXIIIIXXXXXXXXXXXXXXXXXXXXXXXIXIIXXXXXXXXXXXIXXXXXXXXXXXXXXXXXXIIIXXXXXXXXXXXXXXXXXXIIIXXXXXXXXXXXXX' - - # Region string - regions that should be treated separately in putting the numbering together - region_string = '11111111112222222222222333333333333333334444444444444455555555555666666666666666666666666666666666666666777777777777788888888888' - - region_index_dict = {"1":0,"2":1,"3":2,"4":3,"5":4,"6":5,"7":6,"8":7} - - # Define how the scheme's numbering differs from IMGT at the start of each region. 
- # This is updated in the loop below - rels = {0:0, - 1:-1, - 2:-1, - 3:-5, - 4:-5, - 5:-8, - 6:-12, - 7:-15} - - n_regions = 8 - - exclude_deletions = [2,4,6] - - _regions, startindex, endindex = _number_regions(sequence, state_vector, state_string , region_string, region_index_dict, rels, n_regions, exclude_deletions) - - - ############### - # Renumbering # - ############### - - # Renumbering required for 0, 2, 4, 6 regions in Chothia heavy - - _numbering = [ [], _regions[1] , [], _regions[3] , [], _regions[5], [], _regions[7] ] - - - # Kabat H region 1 (index 0) - # Insertions are placed at Kabat position 6. - # Count how many we recognised as insertion by the hmm - insertions = len( [ 1 for _ in _regions[0] if _[0][1] != " " ] ) - # We will place all insertion in this region at Kabat position 6. - if insertions: - start = _regions[0][0][0][0] # The starting Kabat number as found by the HMM (could easily start from 2 for example) - # I have a feeling this may be a source of a bug in very unusual cases. Can't break for now. Will catch mistakes in a validate function. - length = len( _regions[0] ) - annotations = [ (_, " ") for _ in range(start, 7) ] + [ (6, alphabet[_]) for _ in range(insertions) ] + [(7," "),(8," "),(9," ")] - _numbering[0] = [ (annotations[i], _regions[0][i][1]) for i in range(length) ] - else: - _numbering[0] = _regions[0] - - - # CDR1 - # Kabat H region 3 (index 2) - # Put insertions onto 35. Delete from 35 backwards - length = len( _regions[2] ) - insertions = max(0,length - 13) - annotations = [(_,' ') for _ in range(23, 36)][:length] - annotations += [(35, alphabet[i]) for i in range(insertions) ] - _numbering[2] = [ (annotations[i], _regions[2][i][1]) for i in range(length) ] - - # CDR2 - # Chothia H region 5 (index 4) - # put insertions onto 52 - length = len( _regions[4] ) - # 50 to 57 inclusive - insertions = max(length - 8, 0) # Eight positions can be accounted for, the remainder are insertions - # Delete in the order, 52, 51, 50,53, 54 ,55, 56, 57 - annotations = [(50, " "),(51, " "), (52, " ")][:max(0,length-5)] - annotations += [(52, alphabet[i]) for i in range(insertions) ] - annotations += [(53, " "),(54, " "),(55, " "),(56, " "),(57, " ")][ abs( min(0,length-5) ):] - _numbering[4] = [ (annotations[i], _regions[4][i][1]) for i in range(length) ] - - # FW3 - insertions are annotated on 82. The first three are normal positions and annotated automatically. - # Additional insertions do not occur with the kabat or the chothia numbering scheme. - # It does not make sense to place more than A, B, C on 82 as Martin and AHo work show that this is not a place that accepts - # additional insertions. - # The decision here is to allow the alignment to place additional insertions. This is in contrast to Martin where the region - # is renumbered to place insertions on 72. - - # CDR3 - # Chothia H region 7 (index 6) - # put insertions onto 100 - length = len( _regions[6] ) - if length > 36: return [], startindex, endindex # Too many insertions. Do not apply numbering. 
- annotations = get_cdr3_annotations(length, scheme="kabat", chain_type="heavy") # Chothia and Kabat the same here - _numbering[6] = [ (annotations[i], _regions[6][i][1]) for i in range(length) ] - - # Return the full vector and the start and end indices of the numbered region of the sequence - return gap_missing( _numbering ), startindex, endindex - -# Light chains -def number_kabat_light(state_vector, sequence): - """ - Apply the Kabat numbering scheme for light chains - - Rules should be implemented using two strings - the state string and the region string. - - There are 128 states in the HMMs. Treat X as a direct match in Kabat scheme, I is an insertion. - XXXXXXXXXXXXXXXXXXXXXXXXXXXXX IIIIIIX XXXXXXXXXXXXXXXXXXXX XIIIIIIIXXX XXXXXIXXXXXXXIIXXXXXXXXXXXXXXXXXXXXXX XXXXXIIIIXX XXXXXXXXXXXXX - 11111111111111111111111111111 2222222 33333333333333333333 44444444444 5555555555555555555555555555555555555 66666666666 7777777777777 - - - Regions - (N.B These do not match up with any particular definition of CDR) - 1 - Simple mapping (treat "I" states as inserts and not own match states) - 2 - CDRL1 - 24 (inc) to 35 (exc) put insertions on 27 - 3 - Simple mapping (treat "I" states as inserts and not own match states) - 4 - CDRL2 - 51 (inc) 55 (exc) put insertions on 52 - 5 - Simple mapping (treat "I" states as inserts and not own match states) - 6 - CDRL3 89 (inc) to 96 (exc) put insertion on 95 - 7 - Simple mapping (treat "I" states as inserts and not own match states) - - """ - - # Set up the numbering - - - # State string - 'X' means the imgt position exists in the scheme. 'I' means that it should be treated as an insertion of the previous number - state_string = 'XXXXXXXXXXXXXXXXXXXXXXXXXXXXXIIIIIIXXXXXXXXXXXXXXXXXXXXXXIIIIIIIXXXXXXXXIXXXXXXXIIXXXXXXXXXXXXXXXXXXXXXXXXXXXIIIIXXXXXXXXXXXXXXX' - - # Region string - regions that should be treated separately in putting the numbering together - region_string = '11111111111111111111111222222222222222223333333333333333444444444445555555555555555555555555555555555555666666666666677777777777' - - region_index_dict = {"1":0,"2":1,"3":2,"4":3,"5":4,"6":5,"7":6} - - # Define how the scheme's numbering differs from IMGT at the start of each region. - # This is updated in the loop below - rels = {0:0, - 1: 0, - 2:-6, - 3:-6, - 4:-13, - 5:-16, - 6:-20, - } - - n_regions = 7 - - exclude_deletions = [1,3,5] - - _regions, startindex, endindex = _number_regions(sequence, state_vector, state_string , region_string, region_index_dict, rels, n_regions, exclude_deletions) - - _numbering = [ _regions[0], [], _regions[2], [], _regions[4], [], _regions[6] ] - - - ############### - # Renumbering # - ############### - - # CDR1 - # Kabat L region 2 (index 1) - # put insertions onto 27 - length = len( _regions[1] ) - insertions = max(length - 11, 0) # Eleven positions can be accounted for, the remainder are insertions - # Delete forward from 28 - annotations = [(24, " "),(25, " "), (26, " "), (27, " ")][:max(0,length)] - annotations += [(27, alphabet[i]) for i in range(insertions) ] - annotations += [(28, " "),(29, " "),(30, " "),(31, " "),(32, " "),(33, " "),(34, " ")][ abs( min(0,length-11) ):] - _numbering[1] = [ (annotations[i], _regions[1][i][1]) for i in range(length) ] - - # CDR2 - # Chothia L region 4 (index 3) - # put insertions onto 52. 
- length = len( _regions[3] ) - insertions = max( length - 4, 0 ) - if insertions > 0: - annotations = [(51, " "),(52, " ")] + [(52, alphabet[i]) for i in range(insertions) ] + [(53, " "),(54, " ")] - _numbering[3] = [ (annotations[i], _regions[3][i][1]) for i in range(length) ] - else: # How to gap L2 in Chothia/Kabat/Martin is unclear so we let the alignment do it. - _numbering[3] = _regions[3] - - - # FW3 - # All insertions are placed by alignment. This is in contrast to Martin (and Chothia) where they are placed on 68. - # The kabat scheme was defined using a sequence alignment alone. In keeping with this, insertions in FW3 are also only placed - # with respect to the sequence alignment (the HMM). - - # CDR3 - # Chothia L region 6 (index 5) - # put insertions onto 95 - length = len( _regions[5] ) - - if length > 35: return [], startindex, endindex # Too many insertions. Do not apply numbering. - annotations = get_cdr3_annotations(length, scheme="kabat", chain_type="light") - _numbering[5] = [ (annotations[i], _regions[5][i][1]) for i in range(length) ] - - return gap_missing( _numbering ), startindex, endindex - - - - -############################# -# Martin (extended Chothia) # -############################# - -# Heavy chains -def number_martin_heavy(state_vector, sequence): - """ - Apply the Martin (extended Chothia) numbering scheme for heavy chains - - Rules should be implemented using two strings - the state string and the region string. - - There are 128 states in the HMMs. Treat X as a direct match in Martin scheme, I is an insertion. - XXXXXXXXXI XXXXXXXXXXXXXXXXXXXX IIIIXX XXXXXXXXXXXXXXXXXXXX XIXII XXXXXXXXXXXIXXXXXXXXIIIXXXXXXXXXXXXXXXXXXXXXX XXXXXXIII XXXXXXXXXXXXX - 1111111111 22222222222222222222 333333 44444444444444444444 55555 666666666666666666666666666666666666666666666 777777777 8888888888888 - - - Regions - (N.B These do not match up with any particular definition of CDR) - 1 - Put the insertions at Chothia position 8 - 2 - Simple mapping (treat "I" states as inserts and not own match states) - 3 - CDRH1 - 30 (inc) to 34 (exc) put insertions on 31 - 4 - Simple mapping (treat "I" states as inserts and not own match states) - 5 - CDRH2 - 52 (inc) 58 (exc) put insertions on 52 - 6 - Simple mapping (treat "I" states as inserts and not own match states) - 7 - CDRH3 93 (inc) to 103 (exc) put insertion on 100 - 8 - Simple mapping (treat "I" states as inserts and not own match states) - - - Regions 1,3,5 and 7 are renumbered - - """ - - # Set up the numbering - - - # State string - 'X' means the imgt position exists in the scheme. 'I' means that it should be treated as an insertion of the previous number - state_string = 'XXXXXXXXXIXXXXXXXXXXXXXXXXXXXXIIIIXXXXXXXXXXXXXXXXXXXXXXXIXIIXXXXXXXXXXXIXXXXXXXXIIIXXXXXXXXXXXXXXXXXXXXXXXXXXXXIIIXXXXXXXXXXXXX' - - # Region string - regions that should be treated separately in putting the numbering together - region_string = '11111111112222222222222333333333333333444444444444444455555555555666666666666666666666666666666666666666777777777777788888888888' - - region_index_dict = {"1":0,"2":1,"3":2,"4":3,"5":4,"6":5,"7":6,"8":7} - - # Define how the scheme's numbering differs from IMGT at the start of each region. 
- # This is updated in the loop below - rels = {0:0, - 1:-1, - 2:-1, - 3:-5, - 4:-5, - 5:-8, - 6:-12, - 7:-15} - - n_regions = 8 - - exclude_deletions = [2,4,5,6] - - _regions, startindex, endindex = _number_regions(sequence, state_vector, state_string , region_string, region_index_dict, rels, n_regions, exclude_deletions) - - - ############### - # Renumbering # - ############### - - # Renumbering required for 0, 2, 4, 6 regions in Chothia heavy - - _numbering = [ [], _regions[1] , [], _regions[3] , [], _regions[5], [], _regions[7] ] - - # Chothia H region 1 (index 0) - # Insertions are placed at Chothia position 8. - # Count how many we recognised as insertion by the hmm - insertions = len( [ 1 for _ in _regions[0] if _[0][1] != " " ] ) - # We will place all insertion in this region at Chothia position 8. - if insertions: - start = _regions[0][0][0][0] # The starting Chothia number as found by the HMM (could easily start from 2 for example) - # I have a feeling this may be a source of a bug in very unusual cases. Can't break for now. Will catch mistakes in a validate function. - length = len( _regions[0] ) - annotations = [ (_, " ") for _ in range(start, 9) ] + [ (8, alphabet[_]) for _ in range(insertions) ] + [(9," ")] - _numbering[0] = [ (annotations[i], _regions[0][i][1]) for i in range(length) ] - else: - _numbering[0] = _regions[0] - - - # CDR1 - # Chothia H region 3 (index 2) - # put insertions onto 31 - length = len( _regions[2] ) - insertions = max(length - 11, 0) # Pulled back to the cysteine as heavily engineered cdr1's are not playing nicely - if insertions: - annotations = [(_, " ") for _ in range(23,32)] + [(31, alphabet[i]) for i in range(insertions) ] + [(32," "),(33," ")] - else: - annotations = [(_, " ") for _ in range(23,32)][:length-2] + [(32," "),(33," ")][:length] - _numbering[2] = [ (annotations[i], _regions[2][i][1]) for i in range(length) ] - - # CDR2 - # Chothia H region 5 (index 4) - # put insertions onto 52 - length = len( _regions[4] ) - # 50 to 57 inclusive - insertions = max(length - 8, 0) # Eight positions can be accounted for, the remainder are insertions - # Delete in the order, 52, 51, 50,53, 54 ,55, 56, 57 - annotations = [(50, " "),(51, " "), (52, " ")][:max(0,length-5)] - annotations += [(52, alphabet[i]) for i in range(insertions) ] - annotations += [(53, " "),(54, " "),(55, " "),(56, " "),(57, " ")][ abs( min(0,length-5) ):] - _numbering[4] = [ (annotations[i], _regions[4][i][1]) for i in range(length) ] - - # FW3 - # Place all insertions on 72 explicitly. - # This is in contrast to Chothia implementation where 3 insertions are on 82 and then further insertions are placed by the - # alignment - # Gaps are placed according to the alignment. - length = len( _regions[5] ) - insertions = max(length - 35, 0) - if insertions > 0: # Insertions on 72 - annotations = [(i,' ') for i in range(58,73)]+[(72, alphabet[i]) for i in range(insertions) ]+[(i,' ') for i in range(73,93)] - _numbering[5] = [ (annotations[i], _regions[5][i][1]) for i in range(length) ] - else: # Deletions - all alignment to place them. - _numbering[4] = _regions[4] - - - # CDR3 - # Chothia H region 7 (index 6) - # put insertions onto 100 - length = len( _regions[6] ) - if length > 36: return [], startindex, endindex # Too many insertions. Do not apply numbering. 
- annotations = get_cdr3_annotations(length, scheme="chothia", chain_type="heavy") - _numbering[6] = [ (annotations[i], _regions[6][i][1]) for i in range(length) ] - - # Return the full vector and the start and end indices of the numbered region of the sequence - return gap_missing( _numbering ), startindex, endindex - -# Light chains -def number_martin_light(state_vector, sequence): - """ - Apply the Martin numbering scheme for light chains - - Rules should be implemented using two strings - the state string and the region string. - - There are 128 states in the HMMs. Treat X as a direct match in Martin scheme, I is an insertion. - XXXXXXXXXXXXXXXXXXXXXXXXXXXXX IIIIIIX XXXXXXXXXXXXXXXXXXXX XIIIIIIIXXX XXXXXIXXXXXXXIIXXXXXXXXXXXXXXXXXXXXXX XXXXXIIIIXX XXXXXXXXXXXXX - 11111111111111111111111111111 2222222 33333333333333333333 44444444444 5555555555555555555555555555555555555 66666666666 7777777777777 - - - Regions - (N.B These do not match up with any particular definition of CDR) - 1 - Simple mapping (treat "I" states as inserts and not own match states) - 2 - CDRL1 - 30 (inc) to 31 (exc) put insertions on 30 - 3 - Simple mapping (treat "I" states as inserts and not own match states) - 4 - CDRL2 - 51 (inc) 55 (exc) put insertions on 52 - 5 - Simple mapping (treat "I" states as inserts and not own match states) - 6 - CDRL3 89 (inc) to 96 (exc) put insertion on 95 - 7 - Simple mapping (treat "I" states as inserts and not own match states) - - Region 2, 3 and 5 are renumbered - - """ - - # The Martin and Chothia specification for light chains are very similar. Martin is more explicit in the location of indels - # but unlike the heavy chain these are additional instead of changes to the Chothia scheme. Thus, Chothia light is implemented - # as martin light. - return number_chothia_light(state_vector,sequence) - - -########### -# Wolfguy # -########### -# The Wolfguy numbering scheme is an in-house scheme used at Roche. It has been described publicly in the paper: -# Prediction of VH-VL domain orientation for antibody variable domain modeling. Bujotzek A. et al. Protein 2015 83(4) 681-95 -# -# It is similar in gapping as IMGT and is defined only for heavy and light antibody chains. -# Unlike other schemes the numbering denotes both the chain (heavy 101-499, light 501-799) and the region (less than -50 framework -# greater than -50 CDR). All CDRs of length less than 50 can be handled without the need for insertion codes. Numbering of the -# framework behaves similarly to IMGT in that all positions are assumed to be accounted for. Framework insertions are placed by -# the alignment. -# -# Numbering of all CDRs is performed symmetrically with the exception of CDRL1. In this case the CDR is numbered according to a -# pattern specific to the canonical class. This is recognised by length and by sequence similarity to a consensus sequence. If a -# length has not been observed it is numbered symmetrically. - - -def number_wolfguy_heavy(state_vector, sequence): - """ - Apply the wolfguy numbering scheme for heavy chains - - The scheme numbers the sequence using different segments so that the numbering tells you - where in the antibody the sequence is describing. 
- - XXXXXXXXXIXXXXXXXXXXXXXXXX XXXXXXXXXXXXXX XXXXXXXXXXXXXX XXXXXXXXXXXXXXXXXXIX XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX XXXXXXXXXXXXX XXXXXXXXXXX - 11111111111111111111111111 22222222222222 33333333333333 44444444444444444444 555555555555555555555555555555 6666666666666 77777777777' - - Regions - (N.B These do not match up with any particular definition of CDR) - 1 - Simple mapping (treat "I" states as inserts and not own match states) - 2 - CDRH1 - 155-199 (inc). Gap symmetrically about 175-176. - 3 - Simple mapping (treat "I" states as inserts and not own match states) - 4 - CDRH2 - 251-299 (inc). Gap symmetrically about 271-272, then gap back from 294. - 5 - Simple mapping (treat "I" states as inserts and not own match states) - 6 - CDRH3 331,332 and 351-399 (inc). Gap according to the - 7 - Simple mapping (treat "I" states as inserts and not own match states) - - Start gaps on rhs each time. - """ - # Set up the numbering - - # State string - 'X' means the imgt position exists in the scheme. 'I' means that it should be treated as an insertion of the previous number - state_string = 'XXXXXXXXXIXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXIXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX' - - # Region string - regions that should be treated separately in putting the numbering together - region_string = '11111111111111111111111111222222222222223333333333333344444444444444444444555555555555555555555555555555666666666666677777777777' - - region_index_dict = {"1":0,"2":1,"3":2,"4":3,"5":4,"6":5,"7":6} - - # Define how the scheme's numbering differs from IMGT at the start of each region. - # This is updated in the loop below - rels = {0:100, - 1:124, - 2:160, - 3:196, - 4:226, - 5:244, - 6:283} - - n_regions = 7 - - exclude_deletions = [1,3,5] - - _regions, startindex, endindex = _number_regions(sequence, state_vector, state_string , region_string, region_index_dict, rels, n_regions, exclude_deletions) - - ############### - # Renumbering # - ############### - - # Renumbering required for 1, 3, 5 regions in wolfguy heavy - _numbering = [ _regions[0], [] , _regions[2], [], _regions[4] , [], _regions[6] ] - - # CDRH1 - # Delete symmetrically about 177. Delete right first. - # May have to change this to reflect where the point of symmetry is - ordered_deletions = [151] - for p1,p2 in zip( list(range(152,176)), list(range(199, 175,-1))): ordered_deletions += [ p1,p2 ] - length = len( _regions[1] ) - annotations = sorted(ordered_deletions[:length]) - _numbering[1] = [ ((annotations[i]," "), _regions[1][i][1]) for i in range(length) ] - - # CDRH2 - # Delete symmetrically about 271. Delete right first. - # Then delete right from 288 - ordered_deletions = [251] - for p1,p2 in zip( list(range(252,271)), list(range(290, 271,-1))): ordered_deletions += [ p1,p2 ] - ordered_deletions.append( 271 ) - ordered_deletions = list(range( 299, 290, -1)) + ordered_deletions - length = len( _regions[3] ) - annotations = sorted(ordered_deletions[:length]) - _numbering[3] = [ ((annotations[i]," "), _regions[3][i][1]) for i in range(length) ] - - # CDRH3 - # Delete symmetrically about 374. Delete right first. 
- # Scheme changes at length 8 - # Scheme changes at length 12 - ordered_deletions = [] - for p1,p2 in zip( list(range(356,374)), list(range(391, 373,-1))): ordered_deletions += [ p1,p2 ] - ordered_deletions = [ 354, 394, 355, 393, 392 ] + ordered_deletions - ordered_deletions = [331,332] + [ 399, 398, 351, 352, 397, 353, 396, 395 ] + ordered_deletions - length = len( _regions[5] ) - - if length > len(ordered_deletions): return [], startindex, endindex # Too many insertions. Do not apply numbering. - annotations = sorted(ordered_deletions[:length]) - _numbering[5] = [ ((annotations[i]," "), _regions[5][i][1]) for i in range(length) ] - - # Return the full vector and the start and end indices of the numbered region of the sequence - return sum( _numbering, [] ), startindex, endindex - - -def number_wolfguy_light(state_vector, sequence): - """ - Apply the wolfguy numbering scheme for light chains - - The scheme numbers the sequence using different segments so that the numbering tells you - where in the antibody the sequence is describing. - - XXXXXXX XXX XXXXXXXXXXXXX XXXXXXXXXXXXXXXXX XXXXXXXXXXXXXXX XXXXXXXXXXXXXX XXXIXXXXXXX XXXX XXXXXXXXXXXXXXXXXXXX XXXXXXXXXXXXX XXXXXXXXXXX - 1111111 AAA BBBBBBBBBBBBB 22222222222222222 333333333333333 44444444444444 55555555555 6666 77777777777777777777 8888888888888 99999999999 - - Regions - (N.B These do not match up with any particular definition of CDR) - 1 - Simple mapping (treat "I" states as inserts and not own match states) - A - Move indels onto 508 - B - Simple mapping (treat "I" states as inserts and not own match states) - 2 - CDRL1 - 551-599 (inc). Assign via the matching consensus sequence and length. - 3 - Simple mapping (treat "I" states as inserts and not own match states) - 4 - CDRL2 - 651-699 (inc). Gap about 673 then right from 694 - 5 - Simple mapping (treat "I" states as inserts and not own match states) - 6 - Move indels onto 713 and 714 - 7 - Simple mapping (treat "I" states as inserts and not own match states) - 8 - CDRL3 751-799 (inc). Gap symmetrically about 374-375 - 9 - Simple mapping (treat "I" states as inserts and not own match states) - - """ - # Set up the numbering - - # State string - 'X' means the imgt position exists in the scheme. 'I' means that it should be treated as an insertion of the previous number - state_string = 'XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXIXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX' - - # Region string - regions that should be treated separately in putting the numbering together - region_string = '1111111AAABBBBBBBBBBBBB222222222222222223333333333333334444444444444455555555555666677777777777777777777888888888888899999999999' - - region_index_dict = {"1":0,"A":1,"B":2,"2":3,"3":4,"4":5,"5":6,"6":7,"7":8,"8":9,"9":10} - - # Define how the scheme's numbering differs from IMGT at the start of each region. 
- # This is updated in the loop below - rels = {0:500, - 1:500, - 2:500, - 3:527, - 4:560, - 5:595, - 6:631, - 7:630, - 8:630, - 9:646, - 10:683} - - n_regions = 11 - - exclude_deletions = [1,3,5,7,9] - - _regions, startindex, endindex = _number_regions(sequence, state_vector, state_string , region_string, region_index_dict, rels, n_regions, exclude_deletions) - - ############### - # Renumbering # - ############### - - # Renumbering required for 1, 3, 5 regions in wolfguy heavy - _numbering = [ _regions[0], [], _regions[2], [] , _regions[4], [], _regions[6], [], _regions[8], [], _regions[10] ] - - - # Gaps in the first section go 508 instead of the imgt 510 equivalent - length = len(_regions[1] ) - annotations = sorted([ (510,' '), (509, ' '), (508, ' ')][ :length ] + [(508,a) for a in alphabet[:max(0, length-3)]]) - _numbering[1] = [ (annotations[i], _regions[1][i][1]) for i in range(length) ] - - # CDRL1 - # Number by predicting the canonical - length = len(_regions[3] ) - annotations = _get_wolfguy_L1( _regions[3], length) - _numbering[3] = [ ((annotations[i]," "), _regions[3][i][1]) for i in range(length) ] - - # CDRL2 - # Delete about 673. Finally delete right from 694. Maintain 651 as the last deletion - ordered_deletions = [] - for p1,p2 in zip( list(range(652,673)), list(range(694, 672,-1))): ordered_deletions += [ p2,p1 ] - ordered_deletions = [651] + list(range( 699, 694, -1)) + ordered_deletions + [673] - - length = len( _regions[5] ) - annotations = sorted(ordered_deletions[:length]) - _numbering[5] = [ ((annotations[i]," "), _regions[5][i][1]) for i in range(length) ] - - - # The placement of the indel in wolfguy is different to that in imgt - length = len( _regions[7] ) - insertions = max( 0, length - 4 ) - annotations = [(711, ' '), (712, ' '), (713, ' '), (714, ' ')][:length] + [ (714, a) for a in alphabet[:insertions] ] - _numbering[7] = [ (annotations[i], _regions[7][i][1]) for i in range(length) ] - - # CDRL3 - # Delete symmetrically about 775. Delete right first. Finally delete 798 and 799 - ordered_deletions = [] - for p1,p2 in zip( list(range(751,775)), list(range(799, 775,-1))): ordered_deletions += [ p1,p2 ] - ordered_deletions.append( 775 ) - - length = len( _regions[9] ) - if length > len(ordered_deletions): return [], startindex, endindex # Too many insertions. Do not apply numbering. - annotations = sorted(ordered_deletions[:length]) - _numbering[9] = [ ((annotations[i]," "), _regions[9][i][1]) for i in range(length) ] - - # Return the full vector and the start and end indices of the numbered region of the sequence - return sum( _numbering, [] ), startindex, endindex - - -def _get_wolfguy_L1(seq, length): - """ - Wolfguy's L1 annotation is based on recognising the length and the sequence pattern defined - by a set of rules. If the length has not been characterised, we number symmetrically about the - middle of the loop. - """ - - # These are the annotations for different lengths of L1 according to the wolfguy definitions. 
- L1_sequences = { - 9: [['9', 'XXXXXXXXX', [551, 552, 554, 556, 563, 572, 597, 598, 599]]], - 10: [['10', 'XXXXXXXXXX', [551, 552, 553, 556, 561, 562, 571, 597, 598, 599]]], - 11: [['11a', 'RASQDISSYLA', [551, 552, 553, 556, 561, 562, 571, 596, 597, 598, 599]], - ['11b', 'GGNNIGSKSVH', [551, 552, 554, 556, 561, 562, 571, 572, 597, 598, 599]], - ['11b.2','SGDQLPKKYAY', [551, 552, 554, 556, 561, 562, 571, 572, 597, 598, 599]]], - 12: [['12a', 'TLSSQHSTYTIE', [551, 552, 553, 554, 555, 556, 561, 563, 572, 597, 598, 599]], - ['12b', 'TASSSVSSSYLH', [551, 552, 553, 556, 561, 562, 571, 595, 596, 597, 598, 599]], - ['12c', 'RASQSVxNNYLA', [551, 552, 553, 556, 561, 562, 571, 581, 596, 597, 598, 599]], - ['12d', 'rSShSIrSrrVh', [551, 552, 553, 556, 561, 562, 571, 581, 596, 597, 598, 599]]], - 13: [['13a', 'SGSSSNIGNNYVS', [551, 552, 554, 555, 556, 557, 561, 562, 571, 572, 597, 598, 599]], - ['13b', 'TRSSGSLANYYVQ', [551, 552, 553, 554, 556, 561, 562, 563, 571, 572, 597, 598, 599]]], - 14: [['14a', 'RSSTGAVTTSNYAN', [551, 552, 553, 554, 555, 561, 562, 563, 564, 571, 572, 597, 598, 599]], - ['14b', 'TGTSSDVGGYNYVS', [551, 552, 554, 555, 556, 557, 561, 562, 571, 572, 596, 597, 598, 599]]], - 15: [['15', 'XXXXXXXXXXXXXXX', [551, 552, 553, 556, 561, 562, 563, 581, 582, 594, 595, 596, 597, 598, 599]]], - 16: [['16', 'XXXXXXXXXXXXXXXX', [551, 552, 553, 556, 561, 562, 563, 581, 582, 583, 594, 595, 596, 597, 598, 599]]], - 17: [['17', 'XXXXXXXXXXXXXXXXX', [551, 552, 553, 556, 561, 562, 563, 581, 582, 583, 584, 594, 595, 596, 597, 598, 599]]] - } - - if length in L1_sequences: # Use the pre-defined motif - # Find the maximum scoring canonical form for this length. - curr_max = None, -10000 - for canonical in L1_sequences[length]: - sub_score = 0 - for i in range( length ): - try: - sub_score += blosum62[ (seq[i][1].upper(), canonical[1][i].upper() ) ] - except KeyError: - sub_score += blosum62[ (canonical[1][i].upper(), seq[i][1].upper() ) ] - if sub_score > curr_max[1]: - curr_max = canonical, sub_score - - # return the annotations - return curr_max[0][2] - else: # Use a symmetric numbering about the anchors. - ordered_deletions = [] - for p1,p2 in zip( list(range(551,575)), list(range(599, 575,-1))): ordered_deletions += [ p2,p1 ] - ordered_deletions.append(575) - return sorted( ordered_deletions[:length] ) - -def gap_missing( numbering ): - ''' - Place gaps when a number is missing. All except wolfguy are continuously numbered - ''' - # Gaps placed where a number is not present - num = [ ((0,' '),'-') ] - for p, a in sum( numbering, [] ): - if p[0] > num[-1][0][0]+1: - for _i in range( num[-1][0][0]+1, p[0] ): - num.append( ((_i, ' '), '-' ) ) - num.append( (p,a) ) - return num[1:] - - -###################### -# Annotation of CDR3 # -###################### - -def get_cdr3_annotations(length, scheme="imgt", chain_type=""): - """ - Given a length of a cdr3 give back a list of the annotations that should be applied to the sequence. - - This function should be depreciated - """ - az = "ABCDEFGHIJKLMNOPQRSTUVWXYZ" - za = "ZYXWVUTSRQPONMLKJIHGFEDCBA" - - if scheme=="imgt": - start, end = 105, 118 # start (inclusive) end (exclusive) - annotations = [None for _ in range(max(length,13))] - front = 0 - back = -1 - assert (length-13) < 50, "Too many insertions for numbering scheme to handle" # We ran out of letters. 
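        # The two loops below fill the 13 canonical IMGT junction positions from the outside in
        # (105 forwards, 117 backwards) and then label any extra residues as insertions on
        # 111/112. Worked example (added commentary): a length-15 CDR3 comes out as
        # 105, 106, 107, 108, 109, 110, 111, 111A, 112A, 112, 113, 114, 115, 116, 117.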
- for i in range(min(length,13)): - if i%2: - annotations[back] = (end+back, " ") - back -= 1 - else: - annotations[front] = (start+front, " ") - front += 1 - for i in range(max(0,length-13)): # add insertions onto 111 and 112 in turn - if i%2: - annotations[back] = (112, za[back+6]) - back-=1 - else: - annotations[front] = (111, az[front-7]) - front +=1 - return annotations - - elif scheme in [ "chothia", "kabat"] and chain_type=="heavy": # For chothia and kabat - # Number forwards from 93 - insertions = max(length - 10, 0) - assert insertions < 27, "Too many insertions for numbering scheme to handle" # We ran out of letters. - ordered_deletions = [ (100, ' '), (99,' '), (98,' '), (97,' '), (96,' '), (95,' '), (101,' '),(102,' '),(94,' '), (93,' ') ] - annotations = sorted( ordered_deletions[ max(0, 10-length): ] + [ (100,a) for a in az[:insertions ] ] ) - return annotations - - elif scheme in [ "chothia", "kabat"] and chain_type=="light": - # Number forwards from 89 - insertions = max(length - 9, 0) - assert insertions < 27, "Too many insertions for numbering scheme to handle" # We ran out of letters. - ordered_deletions = [ (95,' '),(94,' '),(93,' '),( 92,' '),(91,' '),(96,' '),(97,' '),(90,' '),(89,' ') ] - annotations = sorted( ordered_deletions[ max(0, 9-length): ] + [ (95,a) for a in az[:insertions ] ] ) - return annotations - - else: - raise AssertionError("Unimplemented scheme") - diff --git a/spaces/yamashiro3/Whisper-gpt-voicescribe/app.py b/spaces/yamashiro3/Whisper-gpt-voicescribe/app.py deleted file mode 100644 index 52655c5cf42bc71a4c35e8e54bb9ba373a57f3b6..0000000000000000000000000000000000000000 --- a/spaces/yamashiro3/Whisper-gpt-voicescribe/app.py +++ /dev/null @@ -1,436 +0,0 @@ -# import whisper -from faster_whisper import WhisperModel -import datetime -import subprocess -import gradio as gr -from pathlib import Path -import pandas as pd -import re -import time -import os -import numpy as np -from sklearn.cluster import AgglomerativeClustering -from sklearn.metrics import silhouette_score - -from pytube import YouTube -import yt_dlp -import torch -import pyannote.audio -from pyannote.audio.pipelines.speaker_verification import PretrainedSpeakerEmbedding -from pyannote.audio import Audio -from pyannote.core import Segment - -from gpuinfo import GPUInfo - -import wave -import contextlib -from transformers import pipeline -import psutil -import openai -import tempfile - -whisper_models = ["tiny", "base", "small", "medium", "large-v1", "large-v2"] -source_languages = { - "en": "English", - "ja": "Japanese", -} - -source_language_list = [key[0] for key in source_languages.items()] - -MODEL_NAME = "vumichien/whisper-medium-jp" -lang = "ja" - -device = 0 if torch.cuda.is_available() else "cpu" -pipe = pipeline( - task="automatic-speech-recognition", - model=MODEL_NAME, - chunk_length_s=30, - device=device, -) -os.makedirs('output', exist_ok=True) -pipe.model.config.forced_decoder_ids = pipe.tokenizer.get_decoder_prompt_ids(language=lang, task="transcribe") - -embedding_model = PretrainedSpeakerEmbedding( - "speechbrain/spkrec-ecapa-voxceleb", - device=torch.device("cuda" if torch.cuda.is_available() else "cpu")) - - -def transcribe(microphone, file_upload): - warn_output = "" - if (microphone is not None) and (file_upload is not None): - warn_output = ( - "WARNING: You've uploaded an audio file and used the microphone. 
" - "The recorded file from the microphone will be used and the uploaded audio will be discarded.\n" - ) - - elif (microphone is None) and (file_upload is None): - return "ERROR: You have to either use the microphone or upload an audio file" - - file = microphone if microphone is not None else file_upload - - text = pipe(file)["text"] - - return warn_output + text - - -def _return_yt_html_embed(yt_url): - video_id = yt_url.split("?v=")[-1] - HTML_str = ( - f'
                ' - "
                " - ) - return HTML_str - - -def yt_transcribe(yt_url): - # yt = YouTube(yt_url) - # html_embed_str = _return_yt_html_embed(yt_url) - # stream = yt.streams.filter(only_audio=True)[0] - # stream.download(filename="audio.mp3") - - ydl_opts = { - 'format': 'bestvideo*+bestaudio/best', - 'postprocessors': [{ - 'key': 'FFmpegExtractAudio', - 'preferredcodec': 'mp3', - 'preferredquality': '192', - }], - 'outtmpl': 'audio.%(ext)s', - } - - with yt_dlp.YoutubeDL(ydl_opts) as ydl: - ydl.download([yt_url]) - - text = pipe("audio.mp3")["text"] - return html_embed_str, text - - -def convert_time(secs): - return datetime.timedelta(seconds=round(secs)) - - -def get_youtube(video_url): - # yt = YouTube(video_url) - # abs_video_path = yt.streams.filter(progressive=True, file_extension='mp4').order_by('resolution').desc().first().download() - - ydl_opts = { - 'format': 'bestvideo[ext=mp4]+bestaudio[ext=m4a]/best[ext=mp4]/best', - } - - with yt_dlp.YoutubeDL(ydl_opts) as ydl: - info = ydl.extract_info(video_url, download=False) - abs_video_path = ydl.prepare_filename(info) - ydl.process_info(info) - - print("Success download video") - print(abs_video_path) - return abs_video_path - - -def speech_to_text(video_file_path, selected_source_lang, whisper_model, num_speakers): - """ - # Transcribe youtube link using OpenAI Whisper - 1. Using Open AI's Whisper model to seperate audio into segments and generate transcripts. - 2. Generating speaker embeddings for each segments. - 3. Applying agglomerative clustering on the embeddings to identify the speaker for each segment. - - Speech Recognition is based on models from OpenAI Whisper https://github.com/openai/whisper - Speaker diarization model and pipeline from by https://github.com/pyannote/pyannote-audio - """ - - # model = whisper.load_model(whisper_model) - # model = WhisperModel(whisper_model, device="cuda", compute_type="int8_float16") - model = WhisperModel(whisper_model, compute_type="int8") - time_start = time.time() - if (video_file_path == None): - raise ValueError("Error no video input") - print(video_file_path) - - try: - # Read and convert youtube video - _, file_ending = os.path.splitext(f'{video_file_path}') - print(f'file enging is {file_ending}') - audio_file = video_file_path.replace(file_ending, ".wav") - print("starting conversion to wav") - os.system(f'ffmpeg -i "{video_file_path}" -ar 16000 -ac 1 -c:a pcm_s16le "{audio_file}"') - - # Get duration - with contextlib.closing(wave.open(audio_file, 'r')) as f: - frames = f.getnframes() - rate = f.getframerate() - duration = frames / float(rate) - print(f"conversion to wav ready, duration of audio file: {duration}") - - # Transcribe audio - options = dict(language=selected_source_lang, beam_size=5, best_of=5) - transcribe_options = dict(task="transcribe", **options) - segments_raw, info = model.transcribe(audio_file, **transcribe_options) - - # Convert back to original openai format - segments = [] - i = 0 - for segment_chunk in segments_raw: - chunk = {} - chunk["start"] = segment_chunk.start - chunk["end"] = segment_chunk.end - chunk["text"] = segment_chunk.text - segments.append(chunk) - i += 1 - print("transcribe audio done with fast whisper") - except Exception as e: - raise RuntimeError("Error converting video to audio") - - try: - # Create embedding - def segment_embedding(segment): - audio = Audio() - start = segment["start"] - # Whisper overshoots the end timestamp in the last segment - end = min(duration, segment["end"]) - clip = Segment(start, end) - waveform, 
sample_rate = audio.crop(audio_file, clip) - return embedding_model(waveform[None]) - - embeddings = np.zeros(shape=(len(segments), 192)) - for i, segment in enumerate(segments): - embeddings[i] = segment_embedding(segment) - embeddings = np.nan_to_num(embeddings) - print(f'Embedding shape: {embeddings.shape}') - - if num_speakers == 0: - # Find the best number of speakers - score_num_speakers = {} - - for num_speakers in range(2, 10 + 1): - clustering = AgglomerativeClustering(num_speakers).fit(embeddings) - score = silhouette_score(embeddings, clustering.labels_, metric='euclidean') - score_num_speakers[num_speakers] = score - best_num_speaker = max(score_num_speakers, key=lambda x: score_num_speakers[x]) - print(f"The best number of speakers: {best_num_speaker} with {score_num_speakers[best_num_speaker]} score") - else: - best_num_speaker = num_speakers - - # Assign speaker label - clustering = AgglomerativeClustering(best_num_speaker).fit(embeddings) - labels = clustering.labels_ - for i in range(len(segments)): - segments[i]["speaker"] = 'SPEAKER ' + str(labels[i] + 1) - - # Make output - objects = { - 'Start': [], - 'End': [], - 'Speaker': [], - 'Text': [] - } - text = '' - for (i, segment) in enumerate(segments): - if i == 0 or segments[i - 1]["speaker"] != segment["speaker"]: - objects['Start'].append(str(convert_time(segment["start"]))) - objects['Speaker'].append(segment["speaker"]) - if i != 0: - objects['End'].append(str(convert_time(segments[i - 1]["end"]))) - objects['Text'].append(text) - text = '' - text += segment["text"] + ' ' - objects['End'].append(str(convert_time(segments[i - 1]["end"]))) - objects['Text'].append(text) - - time_end = time.time() - time_diff = time_end - time_start - memory = psutil.virtual_memory() - gpu_utilization, gpu_memory = GPUInfo.gpu_usage() - gpu_utilization = gpu_utilization[0] if len(gpu_utilization) > 0 else 0 - gpu_memory = gpu_memory[0] if len(gpu_memory) > 0 else 0 - system_info = f""" - *Memory: {memory.total / (1024 * 1024 * 1024):.2f}GB, used: {memory.percent}%, available: {memory.available / (1024 * 1024 * 1024):.2f}GB.* - *Processing time: {time_diff:.5} seconds.* - *GPU Utilization: {gpu_utilization}%, GPU Memory: {gpu_memory}MiB.* - """ - save_path = "output/transcript_result.csv" - df_results = pd.DataFrame(objects) - df_results.to_csv(save_path) - return df_results, system_info, save_path - - except Exception as e: - raise RuntimeError("Error Running inference with local model", e) - - -def create_transcription_summary(openai_key, prompt): - openai.api_key = openai_key - system_template = prompt - - with open("output/transcript_result.csv", "r") as file: - transcript_text = file.read() - - completion = openai.ChatCompletion.create( - model="gpt-3.5-turbo", - messages=[ - {"role": "system", "content": system_template}, - {"role": "user", "content": transcript_text} - ] - ) - transcript_summary = completion.choices[0].message.content - return transcript_summary - - -# ---- Gradio Layout ----- -# Inspiration from https://huggingface.co/spaces/RASMUS/Whisper-youtube-crosslingual-subtitles -video_in = gr.Video(label="Video file", mirror_webcam=False) -youtube_url_in = gr.Textbox(label="Youtube url", lines=1, interactive=True) -df_init = pd.DataFrame(columns=['Start', 'End', 'Speaker', 'Text']) -memory = psutil.virtual_memory() -selected_source_lang = gr.Dropdown(choices=source_language_list, type="value", value="en", - label="Spoken language in video", interactive=True) -selected_whisper_model = gr.Dropdown(choices=whisper_models, 
type="value", value="base", label="Selected Whisper model", - interactive=True) -number_speakers = gr.Number(precision=0, value=0, - label="Input number of speakers for better results. If value=0, model will automatic find the best number of speakers", - interactive=True) -system_info = gr.Markdown( - f"*Memory: {memory.total / (1024 * 1024 * 1024):.2f}GB, used: {memory.percent}%, available: {memory.available / (1024 * 1024 * 1024):.2f}GB*") -download_transcript = gr.File(label="Download transcript") -transcription_df = gr.DataFrame(value=df_init, label="Transcription dataframe", row_count=(0, "dynamic"), max_rows=10, - wrap=True, overflow_row_behaviour='paginate') -openai_key_in = gr.Textbox(lines=1, label="openai_key", type="password") -openai_prompt_in = gr.TextArea(label="openai_prompt", value="""音声の文字起こしが渡されます。 - -この音声のサマリーをMarkdown形式で作成してください。サマリーは、以下のような形式で書いてください。 -- 会議の目的 -- 会議の内容 -- 会議の結果""") -openai_summary_out = gr.Textbox(label="openai_summary") -save_path = "output/transcript_result.csv" - -title = "Whisper speaker diarization" -demo = gr.Blocks(title=title) -demo.encrypt = False - -with demo: - with gr.Tab("Whisper speaker diarization"): - gr.Markdown(''' -
                Whisper speaker diarization
                - This space uses Whisper models from OpenAI with CTranslate2, a fast inference engine for Transformer models, to recognize speech (4 times faster than the original OpenAI model with the same accuracy) - and the ECAPA-TDNN model from SpeechBrain to encode and classify speakers -
                - ''') - - with gr.Row(): - gr.Markdown(''' - ### Transcribe youtube link using OpenAI Whisper - ##### 1. Using Open AI's Whisper model to seperate audio into segments and generate transcripts. - ##### 2. Generating speaker embeddings for each segments. - ##### 3. Applying agglomerative clustering on the embeddings to identify the speaker for each segment. - ''') - - with gr.Row(): - gr.Markdown(''' - ### You can test by following examples: - ''') - examples = gr.Examples(examples= - ["https://www.youtube.com/watch?v=N251e97Awh4", - "https://www.youtube.com/watch?v=-UX0X45sYe4", - "https://www.youtube.com/watch?v=REMyAsPC2So"], - label="Examples", inputs=[youtube_url_in]) - - with gr.Row(): - with gr.Column(): - youtube_url_in.render() - download_youtube_btn = gr.Button("Download Youtube video") - download_youtube_btn.click(get_youtube, [youtube_url_in], [ - video_in]) - print(video_in) - - with gr.Row(): - with gr.Column(): - video_in.render() - with gr.Column(): - gr.Markdown(''' - ##### Here you can start the transcription process. - ##### Please select the source language for transcription. - ##### You can select a range of assumed numbers of speakers. - ''') - selected_source_lang.render() - selected_whisper_model.render() - number_speakers.render() - transcribe_btn = gr.Button("Transcribe audio and diarization") - transcribe_btn.click(speech_to_text, - [video_in, selected_source_lang, selected_whisper_model, number_speakers], - [transcription_df, system_info, download_transcript] - ) - - with gr.Row(): - gr.Markdown(''' - ##### Here you will get transcription output - ##### ''') - - with gr.Row(): - with gr.Column(): - download_transcript.render() - transcription_df.render() - # system_info.render() - # gr.Markdown( - # '''
                visitor badge | License: Apache 2.0
                ''') - - with gr.Row(): - with gr.Column(): - gr.Markdown(''' - From here, you can perform an evaluation analysis based on the transcription done using ChatGPT. - Feel free to change the prompt as needed. - Depending on the prompt, you can generate a summary of the conversation or an action list. - ''') - openai_key_in.render() - openai_prompt_in.render() - openai_summary_btn = gr.Button("Evaluate and analyze transcription content") - openai_summary_btn.click(create_transcription_summary, - [openai_key_in, openai_prompt_in], - [openai_summary_out] - ) - - with gr.Row(): - with gr.Column(): - openai_summary_out.render() - system_info.render() - gr.Markdown( - '''
                visitor badge | License: Apache 2.0
                ''') - - - # with gr.Tab("Whisper Transcribe Japanese Audio"): - # gr.Markdown(f''' - #
                - #

                Whisper Transcribe Japanese Audio

                - #
                - # Transcribe long-form microphone or audio inputs with the click of a button! The fine-tuned - # checkpoint
                {MODEL_NAME} to transcribe audio files of arbitrary length. - # ''') - # microphone = gr.inputs.Audio(source="microphone", type="filepath", optional=True) - # upload = gr.inputs.Audio(source="upload", type="filepath", optional=True) - # transcribe_btn = gr.Button("Transcribe Audio") - # text_output = gr.Textbox() - # with gr.Row(): - # gr.Markdown(''' - # ### You can test by following examples: - # ''') - # examples = gr.Examples(examples= - # ["sample1.wav", - # "sample2.wav", - # ], - # label="Examples", inputs=[upload]) - # transcribe_btn.click(transcribe, [microphone, upload], outputs=text_output) - # - # with gr.Tab("Whisper Transcribe Japanese YouTube"): - # gr.Markdown(f''' - #
                - #

                Whisper Transcribe Japanese YouTube

                - #
                - # Transcribe long-form YouTube videos with the click of a button! The fine-tuned checkpoint: - # {MODEL_NAME} to transcribe audio files of arbitrary length. - # ''') - # youtube_link = gr.Textbox(label="Youtube url", lines=1, interactive=True) - # yt_transcribe_btn = gr.Button("Transcribe YouTube") - # text_output2 = gr.Textbox() - # html_output = gr.Markdown() - # yt_transcribe_btn.click(yt_transcribe, [youtube_link], outputs=[html_output, text_output2]) - -demo.launch(debug=True) \ No newline at end of file diff --git a/spaces/yiningmao/metaphor-detection-baseline/run_classifier_dataset_utils.py b/spaces/yiningmao/metaphor-detection-baseline/run_classifier_dataset_utils.py deleted file mode 100644 index 49d17bf392c14f547d9e362dffa56dcccf2fc403..0000000000000000000000000000000000000000 --- a/spaces/yiningmao/metaphor-detection-baseline/run_classifier_dataset_utils.py +++ /dev/null @@ -1,669 +0,0 @@ -# coding=utf-8 -# Copyright 2018 The Google AI Language Team Authors and The HuggingFace Inc. team. -# Copyright (c) 2018, NVIDIA CORPORATION. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -""" BERT classification fine-tuning: utilities to work with GLUE tasks """ - -from __future__ import absolute_import, division, print_function - -import csv -import logging -import os -import sys -import torch -from tqdm import tqdm - -from scipy.stats import pearsonr, spearmanr, truncnorm -from sklearn.metrics import ( - matthews_corrcoef, - f1_score, - precision_score, - recall_score, - mean_squared_error, -) -import random -import nltk -from nltk.corpus import wordnet - -logger = logging.getLogger(__name__) - - -class InputExample(object): - """A single training/test example for simple sequence classification.""" - - def __init__( - self, - guid, - text_a, - text_b=None, - label=None, - POS=None, - FGPOS=None, - text_a_2=None, - text_b_2=None, - ): - """Constructs a InputExample. - - Args: - guid: Unique id for the example. - text_a: string. The untokenized text of the first sequence. For single - sequence tasks, only this sequence must be specified. - text_b: (Optional) string. The untokenized text of the second sequence. - Only must be specified for sequence pair tasks. - label: (Optional) string. The label of the example. This should be - specified for train and dev examples, but not for test examples. 
- """ - self.guid = guid - self.text_a = text_a - self.text_b = text_b - self.label = label - self.POS = POS - self.FGPOS = FGPOS - self.text_a_2 = text_a_2 - self.text_b_2 = text_b_2 - - -class InputFeatures(object): - """A single set of features of data.""" - - def __init__( - self, - input_ids, - input_mask, - segment_ids, - label_id, - guid=None, - input_ids_2=None, - input_mask_2=None, - segment_ids_2=None, - ): - self.input_ids = input_ids - self.input_mask = input_mask - self.segment_ids = segment_ids - self.label_id = label_id - self.guid = guid - self.input_ids_2 = input_ids_2 - self.input_mask_2 = input_mask_2 - self.segment_ids_2 = segment_ids_2 - - -class DataProcessor(object): - """Base class for data converters for sequence classification data sets.""" - - def get_train_examples(self, data_dir): - """Gets a collection of `InputExample`s for the train set.""" - raise NotImplementedError() - - def get_dev_examples(self, data_dir): - """Gets a collection of `InputExample`s for the dev set.""" - raise NotImplementedError() - - def get_labels(self): - """Gets the list of labels for this data set.""" - raise NotImplementedError() - - @classmethod - def _read_tsv(cls, input_file, quotechar=None): - """Reads a tab separated value file.""" - with open(input_file, "r", encoding="utf-8") as f: - reader = csv.reader(f, delimiter="\t", quotechar=quotechar) - lines = [] - for line in reader: - if sys.version_info[0] == 2: - line = list(unicode(cell, "utf-8") for cell in line) - lines.append(line) - return lines - - -class TrofiProcessor(DataProcessor): - """Processor for the TroFi and MOH-X data set.""" - - def get_train_examples(self, data_dir, k=None): - """See base class.""" - if k is not None: - return self._create_examples( - self._read_tsv(os.path.join(data_dir, "train" + str(k) + ".tsv")), "train" - ) - else: - return self._create_examples( - self._read_tsv(os.path.join(data_dir, "train.tsv")), "train" - ) - - def get_test_examples(self, data_dir, k=None): - """See base class.""" - if k is not None: - return self._create_examples( - self._read_tsv(os.path.join(data_dir, "test" + str(k) + ".tsv")), "test" - ) - else: - return self._create_examples(self._read_tsv(os.path.join(data_dir, "test.tsv")), "test") - - def get_dev_examples(self, data_dir, k=None): - """See base class.""" - if k is not None: - return self._create_examples( - self._read_tsv(os.path.join(data_dir, "dev" + str(k) + ".tsv")), "dev" - ) - else: - return self._create_examples(self._read_tsv(os.path.join(data_dir, "dev.tsv")), "dev") - - def get_labels(self): - """See base class.""" - return ["0", "1"] - - def _create_examples(self, lines, set_type): - """Creates examples for the training and dev sets.""" - examples = [] - for (i, line) in enumerate(lines): - if i == 0: - continue - guid = "%s-%s" % (set_type, line[0]) - text_a = line[2] - label = line[1] - POS = line[3] - FGPOS = line[4] - index = line[-1] - examples.append( - InputExample( - guid=guid, text_a=text_a, text_b=index, label=label, POS=POS, FGPOS=FGPOS - ) - ) - return examples - - -class VUAProcessor(DataProcessor): - """Processor for the VUA data set.""" - - def get_train_examples(self, data_dir): - """See base class.""" - return self._create_examples(self._read_tsv(os.path.join(data_dir, "train.tsv")), "train") - - def get_test_examples(self, data_dir): - """See base class.""" - return self._create_examples(self._read_tsv(os.path.join(data_dir, "test.tsv")), "test") - - def get_dev_examples(self, data_dir): - """See base class.""" - return 
self._create_examples(self._read_tsv(os.path.join(data_dir, "dev.tsv")), "dev") - - def get_labels(self): - """See base class.""" - return ["0", "1"] - - def _create_examples(self, lines, set_type): - """Creates examples for the training and dev sets.""" - examples = [] - for (i, line) in enumerate(lines): - if i == 0: - continue - guid = "%s-%s" % (set_type, line[0]) - text_a = line[2] - label = line[1] - POS = line[3] - FGPOS = line[4] - if len(line) == 8: - index = line[5] - text_a_2 = line[6] - index_2 = line[7] - examples.append( - InputExample( - guid=guid, - text_a=text_a, - text_b=index, - label=label, - POS=POS, - FGPOS=FGPOS, - text_a_2=text_a_2, - text_b_2=index_2, - ) - ) - else: - index = line[-1] - examples.append( - InputExample( - guid=guid, text_a=text_a, text_b=index, label=label, POS=POS, FGPOS=FGPOS - ) - ) - return examples - - -def convert_examples_to_features( - examples, label_list, max_seq_length, tokenizer, output_mode, args -): - """Loads a data file into a list of `InputBatch`s.""" - label_map = {label: i for i, label in enumerate(label_list)} - - features = [] - for (ex_index, example) in tqdm(enumerate(examples)): - if ex_index % 10000 == 0: - logger.info("Writing example %d of %d" % (ex_index, len(examples))) - - tokens_a = tokenizer.tokenize(example.text_a) # tokenize the sentence - tokens_b = None - - try: - text_b = int(example.text_b) # index of target word - tokens_b = text_b - - # truncate the sentence to max_seq_len - if len(tokens_a) > max_seq_length - 2: - tokens_a = tokens_a[: (max_seq_length - 2)] - - # Find the target word index - for i, w in enumerate(example.text_a.split()): - # If w is a target word, tokenize the word and save to text_b - if i == text_b: - # consider the index due to models that use a byte-level BPE as a tokenizer (e.g., GPT2, RoBERTa) - text_b = tokenizer.tokenize(w) if i == 0 else tokenizer.tokenize(" " + w) - break - w_tok = tokenizer.tokenize(w) if i == 0 else tokenizer.tokenize(" " + w) - - # Count number of tokens before the target word to get the target word index - if w_tok: - tokens_b += len(w_tok) - 1 - - except TypeError: - if example.text_b: - tokens_b = tokenizer.tokenize(example.text_b) - # Modifies `tokens_a` and `tokens_b` in place so that the total - # length is less than the specified length. - # Account for [CLS], [SEP], [SEP] with "- 3" - _truncate_seq_pair(tokens_a, tokens_b, max_seq_length - 3) - else: - # Account for [CLS] and [SEP] with "- 2" - if len(tokens_a) > max_seq_length - 2: - tokens_a = tokens_a[: (max_seq_length - 2)] - - tokens = [tokenizer.cls_token] + tokens_a + [tokenizer.sep_token] - segment_ids = [0] * len(tokens) - input_ids = tokenizer.convert_tokens_to_ids(tokens) - - # set the target word as 1 in segment ids - try: - tokens_b += 1 # add 1 to the target word index considering [CLS] - for i in range(len(text_b)): - segment_ids[tokens_b + i] = 1 - except TypeError: - pass - - # The mask has 1 for real tokens and 0 for padding tokens. Only real - # tokens are attended to. - input_mask = [1] * len(input_ids) - - # Zero-pad up to the sequence length. 
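The inline loop above maps a whitespace-word index to the position of the word's first subword token by counting the extra subwords produced before it (byte-level BPE tokenizers such as GPT-2 and RoBERTa only tokenize consistently when the leading space is kept). A standalone sketch of that alignment, with `word_to_token_index` as a hypothetical helper name and `roberta-base` as an arbitrary tokenizer choice:

```python
# Sketch of the word-index -> subword-index alignment used in the feature converters.
# `word_to_token_index` is a hypothetical helper, not part of the original file.
from transformers import AutoTokenizer


def word_to_token_index(sentence: str, word_index: int, tokenizer) -> int:
    token_index = word_index
    for i, word in enumerate(sentence.split()):
        if i == word_index:
            # Index of the target word's first subword within tokenizer.tokenize(sentence).
            return token_index
        # Byte-level BPE tokenizers need the leading space for every word after the first.
        word_tokens = tokenizer.tokenize(word) if i == 0 else tokenizer.tokenize(" " + word)
        if word_tokens:
            # Each extra subword before the target shifts its position one to the right.
            token_index += len(word_tokens) - 1
    raise IndexError("word_index out of range")


tokenizer = AutoTokenizer.from_pretrained("roberta-base")
print(word_to_token_index("He kicked the bucket yesterday", 3, tokenizer))  # index of "bucket"
```

Adding 1 to the returned index afterwards accounts for the [CLS] token, which is what the original code does before marking the target word in `segment_ids`.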
- padding = [tokenizer.convert_tokens_to_ids(tokenizer.pad_token)] * ( - max_seq_length - len(input_ids) - ) - input_ids += padding - input_mask += [0] * len(padding) - segment_ids += [0] * len(padding) - - assert len(input_ids) == max_seq_length - assert len(input_mask) == max_seq_length - assert len(segment_ids) == max_seq_length - - if output_mode == "classification": - label_id = label_map[example.label] - else: - raise KeyError(output_mode) - - if ex_index < 5: - logger.info("*** Example ***") - logger.info("guid: %s" % (example.guid)) - logger.info("tokens: %s" % " ".join([str(x) for x in tokens])) - logger.info("input_ids: %s" % " ".join([str(x) for x in input_ids])) - logger.info("input_mask: %s" % " ".join([str(x) for x in input_mask])) - logger.info("segment_ids: %s" % " ".join([str(x) for x in segment_ids])) - logger.info("label: %s (id = %s)" % (example.label, str(label_id))) - - features.append( - InputFeatures( - input_ids=input_ids, - input_mask=input_mask, - segment_ids=segment_ids, - label_id=label_id, - guid=example.guid + " " + str(example.text_b), - ) - ) - return features - - -def convert_two_examples_to_features( - examples, label_list, max_seq_length, tokenizer, output_mode, win_size=-1 -): - """Loads a data file into a list of `InputBatch`s.""" - label_map = {label: i for i, label in enumerate(label_list)} - - features = [] - for (ex_index, example) in enumerate(examples): - if ex_index % 10000 == 0: - logger.info("Writing example %d of %d" % (ex_index, len(examples))) - - tokens_a = tokenizer.tokenize(example.text_a) # tokenize the sentence - tokens_b = None - text_b = None - - try: - text_b = int(example.text_b) # index of target word - tokens_b = text_b - - # truncate the sentence to max_seq_len - if len(tokens_a) > max_seq_length - 2: - tokens_a = tokens_a[: (max_seq_length - 2)] - - # Find the target word index - for i, w in enumerate(example.text_a.split()): - # If w is a target word, tokenize the word and save to text_b - if i == text_b: - # consider the index due to models that use a byte-level BPE as a tokenizer (e.g., GPT2, RoBERTa) - text_b = tokenizer.tokenize(w) if i == 0 else tokenizer.tokenize(" " + w) - break - w_tok = tokenizer.tokenize(w) if i == 0 else tokenizer.tokenize(" " + w) - - # Count number of tokens before the target word to get the target word index - if w_tok: - tokens_b += len(w_tok) - 1 - - except TypeError: - if example.text_b: - tokens_b = tokenizer.tokenize(example.text_b) - - # Modifies `tokens_a` and `tokens_b` in place so that the total - # length is less than the specified length. - # Account for [CLS], [SEP], [SEP] with "- 3" - _truncate_seq_pair(tokens_a, tokens_b, max_seq_length - 3) - else: - # Account for [CLS] and [SEP] with "- 2" - if len(tokens_a) > max_seq_length - 2: - tokens_a = tokens_a[: (max_seq_length - 2)] - - tokens = [tokenizer.cls_token] + tokens_a + [tokenizer.sep_token] - segment_ids = [0] * len(tokens) - #import pdb; pdb.set_trace() - # set the target word as 1 in segment ids - try: - tokens_b += 1 # add 1 to the target word index considering [CLS] - for i in range(len(text_b)): - segment_ids[tokens_b + i] = 1 - - # concatentate the second sentence ( ["[CLS]"] + tokens_a + ["[SEP]"] -> ["[CLS]"] + tokens_a + ["[SEP]"] + text_b + ["[SEP]"]) - tokens = tokens + text_b + [tokenizer.sep_token] - segment_ids = segment_ids + [0] * len(text_b) - except TypeError: - pass - - # The mask has 1 for real tokens and 0 for padding tokens. Only real - # tokens are attended to. 
- input_ids = tokenizer.convert_tokens_to_ids(tokens) - input_mask = [1] * len(input_ids) - - # Zero-pad up to the sequence length. - padding = [tokenizer.convert_tokens_to_ids(tokenizer.pad_token)] * ( - max_seq_length - len(input_ids) - ) - input_ids += padding - input_mask += [0] * len(padding) - segment_ids += [0] * len(padding) - - assert len(input_ids) == max_seq_length - assert len(input_mask) == max_seq_length - assert len(segment_ids) == max_seq_length - - if output_mode == "classification": - label_id = label_map[example.label] - else: - raise KeyError(output_mode) - - if ex_index < 5: - logger.info("*** Example ***") - logger.info("guid: %s" % (example.guid)) - logger.info("tokens: %s" % " ".join([str(x) for x in tokens])) - logger.info("input_ids: %s" % " ".join([str(x) for x in input_ids])) - logger.info("input_mask: %s" % " ".join([str(x) for x in input_mask])) - logger.info("segment_ids: %s" % " ".join([str(x) for x in segment_ids])) - logger.info("label: %s (id = %s)" % (example.label, str(label_id))) - - features.append( - InputFeatures( - input_ids=input_ids, - input_mask=input_mask, - segment_ids=segment_ids, - label_id=label_id, - guid=example.guid + " " + example.text_b, - ) - ) - return features - - -def convert_examples_to_two_features( - examples, label_list, max_seq_length, tokenizer, output_mode, args -): - """Loads a data file into a list of `InputBatch`s.""" - label_map = {label: i for i, label in enumerate(label_list)} - #import pdb; pdb.set_trace() - # examples = examples[:args.max_data_num] if args.max_data_num is not None else examples - - features = [] - for (ex_index, example) in tqdm(enumerate(examples)): - if ex_index % 10000 == 0: - logger.info("Writing example %d of %d" % (ex_index, len(examples))) - - tokens_a = tokenizer.tokenize(example.text_a) # tokenize the sentence - tokens_b = None - text_b = None - #import pdb; pdb.set_trace() - try: - #import pdb; pdb.set_trace() - text_b = int(example.text_b) # index of target word - tokens_b = text_b - - # truncate the sentence to max_seq_len - if len(tokens_a) > max_seq_length - 6: - tokens_a = tokens_a[: (max_seq_length - 6)] - - # Find the target word index - for i, w in enumerate(example.text_a.split()): - # If w is a target word, tokenize the word and save to text_b - if i == text_b: - # consider the index due to models that use a byte-level BPE as a tokenizer (e.g., GPT2, RoBERTa) - text_b = tokenizer.tokenize(w) if i == 0 else tokenizer.tokenize(" " + w) - break - - w_tok = tokenizer.tokenize(w) if i == 0 else tokenizer.tokenize(" " + w) - - # Count number of tokens before the target word to get the target word index - if w_tok: - tokens_b += len(w_tok) - 1 - - if tokens_b + len(text_b) > max_seq_length - 6: - continue - - except TypeError: - #import pdb; pdb.set_trace() - print('Y|', example.text_b, tokens_b) - if example.text_b: - tokens_b = tokenizer.tokenize(example.text_b) - # Account for [CLS], [SEP], [SEP] with "- 3" - _truncate_seq_pair(tokens_a, tokens_b, max_seq_length - 3) - else: - # Account for [CLS] and [SEP] with "- 2" - if len(tokens_a) > max_seq_length - 2: - tokens_a = tokens_a[: (max_seq_length - 2)] - - tokens = [tokenizer.cls_token] + tokens_a + [tokenizer.sep_token] - print('after|', text_b, tokens_b, tokens) - #print('N|', tokens_b) - # POS tag tokens - if args.use_pos: - POS_token = tokenizer.tokenize(example.POS) - tokens += POS_token + [tokenizer.sep_token] - - # Local context - if args.use_local_context: - local_start = 1 - local_end = local_start + len(tokens_a) - comma1 = 
tokenizer.tokenize(",")[0] - comma2 = tokenizer.tokenize(" ,")[0] - for i, w in enumerate(tokens): - if i < tokens_b + 1 and (w in [comma1, comma2]): - local_start = i - if i > tokens_b + 1 and (w in [comma1, comma2]): - local_end = i - break - segment_ids = [ - 2 if i >= local_start and i <= local_end else 0 for i in range(len(tokens)) - ] - else: - segment_ids = [0] * len(tokens) - - # POS tag encoding - after_token_a = False - for i, t in enumerate(tokens): - if t == tokenizer.sep_token: - after_token_a = True - if after_token_a and t != tokenizer.sep_token: - segment_ids[i] = 3 - - input_ids = tokenizer.convert_tokens_to_ids(tokens) - - try: - tokens_b += 1 # add 1 to the target word index considering [CLS] - for i in range(len(text_b)): - segment_ids[tokens_b + i] = 1 - except TypeError: - pass - - input_mask = [1] * len(input_ids) - padding = [tokenizer.convert_tokens_to_ids(tokenizer.pad_token)] * ( - max_seq_length - len(input_ids) - ) - input_ids += padding - input_mask += [0] * len(padding) - segment_ids += [0] * len(padding) - - assert len(input_ids) == max_seq_length - assert len(input_mask) == max_seq_length - assert len(segment_ids) == max_seq_length - - if output_mode == "classification": - label_id = label_map[example.label] - else: - raise KeyError(output_mode) - - # Second features (Target word) - tokens = [tokenizer.cls_token] + text_b + [tokenizer.sep_token] - segment_ids_2 = [0] * len(tokens) - try: - tokens_b = 1 # add 1 to the target word index considering [CLS] - for i in range(len(text_b)): - segment_ids_2[tokens_b + i] = 1 - except TypeError: - pass - - # The mask has 1 for real tokens and 0 for padding tokens. Only real tokens are attended to. - input_ids_2 = tokenizer.convert_tokens_to_ids(tokens) - input_mask_2 = [1] * len(input_ids_2) - - padding = [tokenizer.convert_tokens_to_ids(tokenizer.pad_token)] * ( - max_seq_length - len(input_ids_2) - ) - input_ids_2 += padding - input_mask_2 += [0] * len(padding) - segment_ids_2 += [0] * len(padding) - - assert len(input_ids_2) == max_seq_length - assert len(input_mask_2) == max_seq_length - assert len(segment_ids_2) == max_seq_length - - features.append( - InputFeatures( - input_ids=input_ids, - input_mask=input_mask, - segment_ids=segment_ids, - label_id=label_id, - guid=example.guid + " " + str(example.text_b), - input_ids_2=input_ids_2, - input_mask_2=input_mask_2, - segment_ids_2=segment_ids_2, - ) - ) - - return features - - -def _truncate_seq_pair(tokens_a, tokens_b, max_length): - """Truncates a sequence pair in place to the maximum length.""" - - # This is a simple heuristic which will always truncate the longer sequence - # one token at a time. This makes more sense than truncating an equal percent - # of tokens from each, since if one sequence is very short then each token - # that's truncated likely contains more information than a longer sequence. 
- while True: - total_length = len(tokens_a) + len(tokens_b) - if total_length <= max_length: - break - if len(tokens_a) > len(tokens_b): - tokens_a.pop() - else: - tokens_b.pop() - - -def simple_accuracy(preds, labels): - return (preds == labels).mean() - - -def seq_accuracy(preds, labels): - acc = [] - for idx, pred in enumerate(preds): - acc.append((pred == labels[idx]).mean()) - return acc.mean() - - -def acc_and_f1(preds, labels): - acc = simple_accuracy(preds, labels) - f1 = f1_score(y_true=labels, y_pred=preds) - return { - "acc": acc, - "f1": f1, - "acc_and_f1": (acc + f1) / 2, - } - - -def all_metrics(preds, labels): - acc = simple_accuracy(preds, labels) - f1 = f1_score(y_true=labels, y_pred=preds) - pre = precision_score(y_true=labels, y_pred=preds) - rec = recall_score(y_true=labels, y_pred=preds) - return { - "acc": acc, - "precision": pre, - "recall": rec, - "f1": f1, - } - - -def compute_metrics(preds, labels): - assert len(preds) == len(labels) - return all_metrics(preds, labels) - - -processors = { - "vua": VUAProcessor, - "trofi": TrofiProcessor, -} - -output_modes = { - "vua": "classification", - "trofi": "classification", -} diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/big_bird/configuration_big_bird.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/big_bird/configuration_big_bird.py deleted file mode 100644 index 53bf1ee6f44b752543088e4163b5ad3dc00203bf..0000000000000000000000000000000000000000 --- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/big_bird/configuration_big_bird.py +++ /dev/null @@ -1,178 +0,0 @@ -# coding=utf-8 -# Copyright 2021 Google Research and The HuggingFace Inc. team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -""" BigBird model configuration""" -from collections import OrderedDict -from typing import Mapping - -from ...configuration_utils import PretrainedConfig -from ...onnx import OnnxConfig -from ...utils import logging - - -logger = logging.get_logger(__name__) - -BIG_BIRD_PRETRAINED_CONFIG_ARCHIVE_MAP = { - "google/bigbird-roberta-base": "https://huggingface.co/google/bigbird-roberta-base/resolve/main/config.json", - "google/bigbird-roberta-large": "https://huggingface.co/google/bigbird-roberta-large/resolve/main/config.json", - "google/bigbird-base-trivia-itc": "https://huggingface.co/google/bigbird-base-trivia-itc/resolve/main/config.json", - # See all BigBird models at https://huggingface.co/models?filter=big_bird -} - - -class BigBirdConfig(PretrainedConfig): - r""" - This is the configuration class to store the configuration of a [`BigBirdModel`]. It is used to instantiate an - BigBird model according to the specified arguments, defining the model architecture. Instantiating a configuration - with the defaults will yield a similar configuration to that of the BigBird - [google/bigbird-roberta-base](https://huggingface.co/google/bigbird-roberta-base) architecture. 
- - Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the - documentation from [`PretrainedConfig`] for more information. - - - Args: - vocab_size (`int`, *optional*, defaults to 50358): - Vocabulary size of the BigBird model. Defines the number of different tokens that can be represented by the - `inputs_ids` passed when calling [`BigBirdModel`]. - hidden_size (`int`, *optional*, defaults to 768): - Dimension of the encoder layers and the pooler layer. - num_hidden_layers (`int`, *optional*, defaults to 12): - Number of hidden layers in the Transformer encoder. - num_attention_heads (`int`, *optional*, defaults to 12): - Number of attention heads for each attention layer in the Transformer encoder. - intermediate_size (`int`, *optional*, defaults to 3072): - Dimension of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder. - hidden_act (`str` or `function`, *optional*, defaults to `"gelu_new"`): - The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, - `"relu"`, `"selu"` and `"gelu_new"` are supported. - hidden_dropout_prob (`float`, *optional*, defaults to 0.1): - The dropout probabilitiy for all fully connected layers in the embeddings, encoder, and pooler. - attention_probs_dropout_prob (`float`, *optional*, defaults to 0.1): - The dropout ratio for the attention probabilities. - max_position_embeddings (`int`, *optional*, defaults to 4096): - The maximum sequence length that this model might ever be used with. Typically set this to something large - just in case (e.g., 1024 or 2048 or 4096). - type_vocab_size (`int`, *optional*, defaults to 2): - The vocabulary size of the `token_type_ids` passed when calling [`BigBirdModel`]. - initializer_range (`float`, *optional*, defaults to 0.02): - The standard deviation of the truncated_normal_initializer for initializing all weight matrices. - layer_norm_eps (`float`, *optional*, defaults to 1e-12): - The epsilon used by the layer normalization layers. - is_decoder (`bool`, *optional*, defaults to `False`): - Whether the model is used as a decoder or not. If `False`, the model is used as an encoder. - use_cache (`bool`, *optional*, defaults to `True`): - Whether or not the model should return the last key/values attentions (not used by all models). Only - relevant if `config.is_decoder=True`. - attention_type (`str`, *optional*, defaults to `"block_sparse"`) - Whether to use block sparse attention (with n complexity) as introduced in paper or original attention - layer (with n^2 complexity). Possible values are `"original_full"` and `"block_sparse"`. - use_bias (`bool`, *optional*, defaults to `True`) - Whether to use bias in query, key, value. - rescale_embeddings (`bool`, *optional*, defaults to `False`) - Whether to rescale embeddings with (hidden_size ** 0.5). - block_size (`int`, *optional*, defaults to 64) - Size of each block. Useful only when `attention_type == "block_sparse"`. - num_random_blocks (`int`, *optional*, defaults to 3) - Each query is going to attend these many number of random blocks. Useful only when `attention_type == - "block_sparse"`. - classifier_dropout (`float`, *optional*): - The dropout ratio for the classification head. 
- - Example: - - ```python - >>> from transformers import BigBirdConfig, BigBirdModel - - >>> # Initializing a BigBird google/bigbird-roberta-base style configuration - >>> configuration = BigBirdConfig() - - >>> # Initializing a model (with random weights) from the google/bigbird-roberta-base style configuration - >>> model = BigBirdModel(configuration) - - >>> # Accessing the model configuration - >>> configuration = model.config - ```""" - model_type = "big_bird" - - def __init__( - self, - vocab_size=50358, - hidden_size=768, - num_hidden_layers=12, - num_attention_heads=12, - intermediate_size=3072, - hidden_act="gelu_new", - hidden_dropout_prob=0.1, - attention_probs_dropout_prob=0.1, - max_position_embeddings=4096, - type_vocab_size=2, - initializer_range=0.02, - layer_norm_eps=1e-12, - use_cache=True, - pad_token_id=0, - bos_token_id=1, - eos_token_id=2, - sep_token_id=66, - attention_type="block_sparse", - use_bias=True, - rescale_embeddings=False, - block_size=64, - num_random_blocks=3, - classifier_dropout=None, - **kwargs, - ): - super().__init__( - pad_token_id=pad_token_id, - bos_token_id=bos_token_id, - eos_token_id=eos_token_id, - sep_token_id=sep_token_id, - **kwargs, - ) - - self.vocab_size = vocab_size - self.max_position_embeddings = max_position_embeddings - self.hidden_size = hidden_size - self.num_hidden_layers = num_hidden_layers - self.num_attention_heads = num_attention_heads - self.intermediate_size = intermediate_size - self.hidden_act = hidden_act - self.hidden_dropout_prob = hidden_dropout_prob - self.attention_probs_dropout_prob = attention_probs_dropout_prob - self.initializer_range = initializer_range - self.type_vocab_size = type_vocab_size - self.layer_norm_eps = layer_norm_eps - self.use_cache = use_cache - - self.rescale_embeddings = rescale_embeddings - self.attention_type = attention_type - self.use_bias = use_bias - self.block_size = block_size - self.num_random_blocks = num_random_blocks - self.classifier_dropout = classifier_dropout - - -class BigBirdOnnxConfig(OnnxConfig): - @property - def inputs(self) -> Mapping[str, Mapping[int, str]]: - if self.task == "multiple-choice": - dynamic_axis = {0: "batch", 1: "choice", 2: "sequence"} - else: - dynamic_axis = {0: "batch", 1: "sequence"} - return OrderedDict( - [ - ("input_ids", dynamic_axis), - ("attention_mask", dynamic_axis), - ] - ) diff --git a/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/detectron2/__init__.py b/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/detectron2/__init__.py deleted file mode 100644 index bdd994b49294485c27610772f97f177741f5518f..0000000000000000000000000000000000000000 --- a/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/detectron2/__init__.py +++ /dev/null @@ -1,10 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. - -from .utils.env import setup_environment - -setup_environment() - - -# This line will be programatically read/write by setup.py. -# Leave them at the bottom of this file and don't touch them. 
-__version__ = "0.6" diff --git a/spaces/yueranseo/mygpt/README.md b/spaces/yueranseo/mygpt/README.md deleted file mode 100644 index 79790f767ded0eb77b8129f8e960c65b8d166c14..0000000000000000000000000000000000000000 --- a/spaces/yueranseo/mygpt/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: ChuanhuChatGPT -emoji: 🐯 -colorFrom: green -colorTo: red -sdk: gradio -sdk_version: 3.33.1 -app_file: ChuanhuChatbot.py -pinned: false -license: gpl-3.0 -duplicated_from: JohnSmith9982/ChuanhuChatGPT ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/yuyuyu-skst/White-box-Cartoonization/README.md b/spaces/yuyuyu-skst/White-box-Cartoonization/README.md deleted file mode 100644 index 9860239cf42c94e385faaaa75a85311e010d64f7..0000000000000000000000000000000000000000 --- a/spaces/yuyuyu-skst/White-box-Cartoonization/README.md +++ /dev/null @@ -1,15 +0,0 @@ ---- -python_version: 3.7 -title: White Box Cartoonization -emoji: 📚 -colorFrom: purple -colorTo: green -sdk: gradio -sdk_version: 2.9.4 -app_file: app.py -pinned: false -license: apache-2.0 -duplicated_from: hylee/White-box-Cartoonization ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/zhang-wei-jian/docker/node_modules/is-generator-function/README.md b/spaces/zhang-wei-jian/docker/node_modules/is-generator-function/README.md deleted file mode 100644 index 519a4235726a8d94f3c1c03dbd2cf16fe1636acd..0000000000000000000000000000000000000000 --- a/spaces/zhang-wei-jian/docker/node_modules/is-generator-function/README.md +++ /dev/null @@ -1,40 +0,0 @@ -# is-generator-function [![Version Badge][2]][1] - -[![github actions][actions-image]][actions-url] -[![coverage][codecov-image]][codecov-url] -[![dependency status][5]][6] -[![dev dependency status][7]][8] -[![License][license-image]][license-url] -[![Downloads][downloads-image]][downloads-url] - -[![npm badge][11]][1] - -Is this a native generator function? 
- -## Example - -```js -var isGeneratorFunction = require('is-generator-function'); -assert(!isGeneratorFunction(function () {})); -assert(!isGeneratorFunction(null)); -assert(isGeneratorFunction(function* () { yield 42; return Infinity; })); -``` - -## Tests -Simply clone the repo, `npm install`, and run `npm test` - -[1]: https://npmjs.org/package/is-generator-function -[2]: https://versionbadg.es/inspect-js/is-generator-function.svg -[5]: https://david-dm.org/inspect-js/is-generator-function.svg -[6]: https://david-dm.org/inspect-js/is-generator-function -[7]: https://david-dm.org/inspect-js/is-generator-function/dev-status.svg -[8]: https://david-dm.org/inspect-js/is-generator-function#info=devDependencies -[11]: https://nodei.co/npm/is-generator-function.png?downloads=true&stars=true -[license-image]: https://img.shields.io/npm/l/is-generator-function.svg -[license-url]: LICENSE -[downloads-image]: https://img.shields.io/npm/dm/is-generator-function.svg -[downloads-url]: https://npm-stat.com/charts.html?package=is-generator-function -[codecov-image]: https://codecov.io/gh/inspect-js/is-generator-function/branch/main/graphs/badge.svg -[codecov-url]: https://app.codecov.io/gh/inspect-js/is-generator-function/ -[actions-image]: https://img.shields.io/endpoint?url=https://github-actions-badge-u3jn4tfpocch.runkit.sh/inspect-js/is-generator-function -[actions-url]: https://github.com/inspect-js/is-generator-function/actions diff --git a/spaces/zhang-wei-jian/docker/node_modules/simple-update-notifier/node_modules/semver/functions/neq.js b/spaces/zhang-wei-jian/docker/node_modules/simple-update-notifier/node_modules/semver/functions/neq.js deleted file mode 100644 index f944c01576973f8c98ad4d446f7f85295a4b1d4a..0000000000000000000000000000000000000000 --- a/spaces/zhang-wei-jian/docker/node_modules/simple-update-notifier/node_modules/semver/functions/neq.js +++ /dev/null @@ -1,3 +0,0 @@ -const compare = require('./compare') -const neq = (a, b, loose) => compare(a, b, loose) !== 0 -module.exports = neq diff --git a/spaces/zhengxuan-github/NEW_bing/README.md b/spaces/zhengxuan-github/NEW_bing/README.md deleted file mode 100644 index 94c9d4ca3003a41798362ae4194c3e9cf738bd74..0000000000000000000000000000000000000000 --- a/spaces/zhengxuan-github/NEW_bing/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: NEW Bing -emoji: 🏆 -colorFrom: indigo -colorTo: indigo -sdk: docker -pinned: false -license: mit -app_port: 8080 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/zion581/sentiment_analysis_by_rohan/README.md b/spaces/zion581/sentiment_analysis_by_rohan/README.md deleted file mode 100644 index cfbb722b46704a9ef51197c8f695066ab11995b8..0000000000000000000000000000000000000000 --- a/spaces/zion581/sentiment_analysis_by_rohan/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Sentiment Analysis By Rohan -emoji: 🦀 -colorFrom: blue -colorTo: purple -sdk: streamlit -sdk_version: 1.10.0 -app_file: app.py -pinned: false -license: afl-3.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference