diff --git a/spaces/0xSpleef/openchat-openchat_8192/app.py b/spaces/0xSpleef/openchat-openchat_8192/app.py deleted file mode 100644 index b20bd3c875bd002c6cad725d0b0f66ce22f0e607..0000000000000000000000000000000000000000 --- a/spaces/0xSpleef/openchat-openchat_8192/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/openchat/openchat_8192").launch() \ No newline at end of file diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Dibac For Sketchup 2015 VERIFIED Crack Full Download.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Dibac For Sketchup 2015 VERIFIED Crack Full Download.md deleted file mode 100644 index 5d65c3fe19065c8fc0581edb0bfd414e722c9af1..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Dibac For Sketchup 2015 VERIFIED Crack Full Download.md +++ /dev/null @@ -1,128 +0,0 @@ -
-

Dibac for SketchUp 2015 Crack Full Download: A Complete Guide

-

If you are looking for a plugin that can help you draw architectural plans in 2D and get the 3D automatically, you might want to try Dibac for SketchUp 2015. This plugin is a great tool for architects and anyone who wants to create realistic and detailed models in SketchUp. However, if you want to use all the features and functions of this plugin, you will need to purchase a license, which costs 69€. Alternatively, you can use a crack to get the full version of Dibac for SketchUp 2015 for free. In this article, we will show you what Dibac for SketchUp 2015 is, why you need a crack for it, how to download and install the crack, and how to activate it. We will also answer some frequently asked questions about Dibac for SketchUp 2015 crack.

-

dibac for sketchup 2015 crack full download


Download File ——— https://byltly.com/2uKw1Y



-

What is Dibac for SketchUp 2015?

-

Dibac for SketchUp 2015 is a plugin that allows you to draw in 2D and get the 3D with just one click. It works with SketchUp 2014, 2015, 2016, 2017, and 2018. It has several features and benefits that make it a powerful and easy-to-use tool for architectural drawing.

-

Features and benefits of Dibac for SketchUp 2015

-

Some of the features and benefits of Dibac for SketchUp 2015 are:

- -

How to use Dibac for SketchUp 2015

-

To use Dibac for SketchUp 2015, you need to download it from [10](https://www.dibac.com/dibac ) and install it on your computer. You will also need to have SketchUp 2014 or later installed on your computer. After installing Dibac for SketchUp 2015, you will see a new toolbar in SketchUp with the Dibac icons. You can also access the Dibac menu from the Extensions menu in SketchUp. To start using Dibac for SketchUp 2015, you need to follow these steps:

-
    -
  1. Open SketchUp and create a new file or open an existing one.
  2. Click on the Dibac icon on the toolbar or go to Extensions > Dibac > Start Dibac.
  3. Draw your floor plan in 2D mode using the Dibac tools, such as walls, doors, windows, stairs, etc. You can also use the SketchUp tools, such as lines, rectangles, circles, etc.
  4. Apply materials and textures to your geometry if you want.
  5. Click on the Convert to 3D icon on the toolbar or go to Extensions > Dibac > Convert to 3D.
  6. Enjoy your 3D model created with Dibac for SketchUp 2015. You can also edit your model in both 2D and 3D modes.
-

Why do you need a crack for Dibac for SketchUp 2015?

-

Dibac for SketchUp 2015 is a paid plugin that requires a license to use all its features and functions. The license costs 69€ and it is valid for one year. You can also use a trial version of Dibac for SketchUp 2015 for free, but it has some limitations and disadvantages. Therefore, you might want to use a crack for Dibac for SketchUp 2015 to get the full version of the plugin without paying anything.

-

The disadvantages of using the trial version

-

The trial version of Dibac for SketchUp 2015 has the following disadvantages:

- -

The advantages of using the full version

-

The full version of Dibac for SketchUp 2015 has the following advantages:

-

- -

How to download and install Dibac for SketchUp 2015 crack?

-

If you want to download and install Dibac for SketchUp 2015 crack, you need to be aware of the risks and precautions of using a crack. You also need to follow some steps to download and install the crack successfully.

-

The risks and precautions of using a crack

-

A crack is a piece of software that modifies or bypasses the security features of another program, such as its license or activation code. Using a crack can be illegal, unethical, and risky. Some of the risks and precautions of using a crack are:

- -

To avoid or minimize these risks and precautions, you should:

- -

The steps to download and install the crack

-

To download and install Dibac for SketchUp 2015 crack, you need to follow these steps:

-
    -
  1. Go to [1](https://crack4windows.com/crack?s=dibac-for-sketchup&id=41164) and click on the Download button. This is a website that provides cracks for various software, including Dibac for SketchUp 2015.
  2. Wait for the download to finish and extract the zip file to a folder on your computer.
  3. Open the folder and run the setup.exe file as administrator. Follow the instructions on the screen to install Dibac for SketchUp 2015 crack.
  4. Copy the crack file from the folder and paste it into the installation directory of Dibac for SketchUp 2015. This is usually C:\Program Files\SketchUp\SketchUp 2015\Plugins\Dibac.
  5. Replace the original file with the crack file when prompted.
  6. Restart your computer and launch SketchUp. You should see Dibac for SketchUp 2015 activated on your toolbar or menu.
-

How to activate Dibac for SketchUp 2015 crack?

-

After installing Dibac for SketchUp 2015 crack, you need to activate it to use all its features and functions. To activate Dibac for SketchUp 2015 crack, you need to follow these instructions:

-

The instructions to activate the crack

-

To activate Dibac for SketchUp 2015 crack, you need to follow these instructions:

-
    -
  1. Open SketchUp and go to Extensions > Dibac > License Manager.
  2. Click on the Activate button and enter any email address and serial number. You can use any random email address and serial number, such as abc@gmail.com and 1234567890.
  3. Click on the OK button and wait for a few seconds. You should see a message that says "License activated successfully".
  4. Click on the Close button and enjoy using Dibac for SketchUp 2015 crack.
-

The tips and tricks to make the most of the crack

-

To make the most of Dibac for SketchUp 2015 crack, you can use some tips and tricks, such as:

- -

Conclusion

-

Dibac for SketchUp 2015 is a plugin that allows you to draw in 2D and get the 3D with just one click. It is a great tool for architects and anyone who wants to create realistic and detailed models in SketchUp. However, it is a paid plugin that requires a license to use all its features and functions. If you want to use the full version of Dibac for SketchUp 2015 for free, you can use a crack to bypass the security features of the plugin. In this article, we have shown you what Dibac for SketchUp 2015 is, why you need a crack for it, how to download and install the crack, and how to activate it. We have also answered some frequently asked questions about Dibac for SketchUp 2015 crack. We hope this article has been helpful and informative for you. If you have any questions or comments, please feel free to leave them below.

-

FAQs

-

Here are some frequently asked questions about Dibac for SketchUp 2015 crack:

-

Is Dibac for SketchUp 2015 compatible with Mac?

-

No, Dibac for SketchUp 2015 is only compatible with Windows operating systems. However, you can use a virtual machine or a dual boot system to run Windows on your Mac and use Dibac for SketchUp 2015.

-

Is Dibac for SketchUp 2015 compatible with other versions of SketchUp?

-

Yes, Dibac for SketchUp 2015 is compatible with SketchUp 2014, 2015, 2016, 2017, and 2018. However, it is not compatible with SketchUp 2019 or later.

-

Is Dibac for SketchUp 2015 safe to use?

-

Dibac for SketchUp 2015 is safe to use if you download it from the official website of the developer or a trusted source. However, using a crack for Dibac for SketchUp 2015 can be risky and illegal, as it might contain viruses, malware, spyware, or other harmful programs that can damage your system or steal your data. You might also violate the intellectual property rights of the developer and face legal consequences. Therefore, we recommend that you use a reliable antivirus program and scan your computer regularly. We also recommend that you support the developer if you can afford it and buy the license if you like the plugin.

-

How can I uninstall Dibac for SketchUp 2015?

-

To uninstall Dibac for SketchUp 2015, you need to follow these steps:

-
    -
  1. Open SketchUp and go to Extensions > Dibac > Uninstall.
  2. Click on the Yes button to confirm the uninstallation.
  3. Close SketchUp and delete the folder C:\Program Files\SketchUp\SketchUp 2015\Plugins\Dibac.
  4. Delete the file C:\Users\YourUserName\AppData\Roaming\SketchUp\SketchUp 2015\Plugins\Dibac.json.
  5. Restart your computer and check if Dibac for SketchUp 2015 is removed from your toolbar or menu.
-

How can I contact the developer of Dibac for SketchUp 2015?

-

If you have any questions, suggestions, feedback, or issues with Dibac for SketchUp 2015, you can contact the developer by using the following methods:

-

b2dd77e56b
-
-
\ No newline at end of file diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Chhota Bheem And The Throne Of Bali Dubbed Movie Download [UPDATED].md b/spaces/1gistliPinn/ChatGPT4/Examples/Chhota Bheem And The Throne Of Bali Dubbed Movie Download [UPDATED].md deleted file mode 100644 index 146869dbb849b087631d6e77d73c1ddfdc1e2924..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Chhota Bheem And The Throne Of Bali Dubbed Movie Download [UPDATED].md +++ /dev/null @@ -1,74 +0,0 @@ - -

Chhota Bheem and the Throne of Bali Dubbed Movie Download: A Review

-

If you are looking for a fun and adventurous animated movie for your kids, you might want to check out Chhota Bheem and the Throne of Bali. This movie is based on the popular Indian cartoon series Chhota Bheem, which follows the adventures of a brave and smart boy named Bheem and his friends in the fictional village of Dholakpur.

-

In this movie, Bheem and his friends are invited by the King of Bali to attend the crowning ceremony of his son, Prince Arjun. However, on their way to Bali, they learn that the kingdom has been captured by an evil witch named Rangda, who has imprisoned the king and queen and wants to rule over Bali with her army of Leyaks, who are monstrous creatures that spread destruction and disease.

-

Chhota Bheem and the throne of Bali dubbed movie download


DOWNLOADhttps://imgfil.com/2uxX3Z



-

Bheem and his friends team up with Prince Arjun, who has escaped from Rangda's clutches, and decide to fight against the witch and her minions. Along the way, they encounter many challenges and dangers, but also make new friends and discover the beauty and culture of Bali.

-

Why You Should Watch Chhota Bheem and the Throne of Bali Dubbed Movie

-

There are many reasons why you should watch Chhota Bheem and the Throne of Bali dubbed movie. Here are some of them:

- -

How to Download Chhota Bheem and the Throne of Bali Dubbed Movie for Free

-

If you want to download Chhota Bheem and the Throne of Bali dubbed movie for free, you can follow these simple steps:

-
    -
  1. Go to a reliable website that offers free downloads of animated movies. You can search for such websites on Google or any other search engine.
  2. Search for Chhota Bheem and the Throne of Bali dubbed movie on the website. You can use the search bar or browse through the categories.
  3. Select the movie from the list of results. Make sure it is in good quality and has clear audio.
  4. Click on the download button or link. You might have to register or sign up on the website before downloading.
  5. Choose a suitable format and resolution for your device. You can also select a preferred language if available.
  6. Wait for the download to complete. You might have to wait for some time depending on your internet speed and file size.
  7. Enjoy watching Chhota Bheem and the Throne of Bali dubbed movie with your kids!
-

Note: Downloading movies from unauthorized sources may be illegal or unsafe. We do not endorse or promote any such websites or activities. Please use your own discretion and judgment before downloading any content from the internet.

-

Conclusion

-

Chhota Bheem and the Throne of Bali dubbed movie is a great choice for family-friendly entertainment. It is a fun-filled adventure that will make you laugh, cry, cheer, and learn. You can download it for free from various websites or watch it online on streaming platforms like Prime Video or Google Play. So what are you waiting for? Grab some popcorn and enjoy this amazing movie with your kids!

-
Who are the Characters of Chhota Bheem and the Throne of Bali Dubbed Movie
-

One of the reasons why Chhota Bheem and the Throne of Bali dubbed movie is so popular is because of its lovable and memorable characters. Here are some of the main characters of the movie:

- -
Where to Watch Chhota Bheem and the Throne of Bali Dubbed Movie Online
-

If you want to watch Chhota Bheem and the Throne of Bali dubbed movie online, you have several options to choose from. Here are some of them:

- -

Note: Watching movies from unauthorized sources may be illegal or unsafe. We do not endorse or promote any such websites or activities. Please use your own discretion and judgment before watching any content from the internet.

-

-How to Enjoy Chhota Bheem and the Throne of Bali Dubbed Movie with Your Kids -

Chhota Bheem and the Throne of Bali dubbed movie is great entertainment not only for you, but also for your kids. You can enjoy this movie with your kids in many ways. Here are some of them:

- -

These are some of the ways you can enjoy Chhota Bheem and the Throne of Bali dubbed movie with your kids. You can also come up with your own ideas and make your own fun. The main thing is to have a good time with your kids and bond with them over this wonderful movie.

-What are the Reviews of Chhota Bheem and the Throne of Bali Dubbed Movie -

Chhota Bheem and the Throne of Bali dubbed movie has received mixed reviews from critics and audiences alike. Some have praised the movie for its animation, story, characters, songs, and message, while others have criticized it for its lack of originality, creativity, and depth. Here are some of the reviews of the movie:

- -

These are some of the reviews of Chhota Bheem and the Throne of Bali dubbed movie. You can also read more reviews online or watch the movie yourself and form your own opinion.

-Conclusion -

Chhota Bheem and the Throne of Bali dubbed movie is a fun and adventurous animated movie that will appeal to kids and adults alike. It is based on the popular Indian cartoon series Chhota Bheem, which follows the exploits of a brave and smart boy named Bheem and his friends in the fictional village of Dholakpur. In this movie, Bheem and his friends travel to Bali to attend the crowning ceremony of Prince Arjun, but end up fighting against an evil witch named Rangda, who has captured the kingdom and its rulers.

-

The movie has many positive aspects, such as its action, comedy, drama, message, characters, songs, music, visuals, and animation. It also showcases the rich and diverse culture of Bali, such as its music, dance, art, architecture, and cuisine. The movie has received mixed reviews from critics and audiences, but it has also won many awards and accolades. It is the sixteenth instalment in the Chhota Bheem film series and the second film in the series to be released directly to movie theatres.

-

You can download Chhota Bheem and the Throne of Bali dubbed movie for free from various websites or watch it online on streaming platforms like Prime Video or Google Play. You can also enjoy this movie with your kids in many ways, such as watching it together, singing along the songs, playing games related to the movie, drawing or coloring pictures related to the movie, or acting out scenes from the movie. The main thing is to have a good time with your kids and bond with them over this wonderful movie.

3cee63e6c2
-
-
\ No newline at end of file diff --git a/spaces/1line/AutoGPT/autogpt/logs.py b/spaces/1line/AutoGPT/autogpt/logs.py deleted file mode 100644 index 35037404a98f7be9b7d577b625cc190ca27f4566..0000000000000000000000000000000000000000 --- a/spaces/1line/AutoGPT/autogpt/logs.py +++ /dev/null @@ -1,332 +0,0 @@ -"""Logging module for Auto-GPT.""" -import json -import logging -import os -import random -import re -import time -import traceback -from logging import LogRecord - -from colorama import Fore, Style - -from autogpt.config import Config, Singleton -from autogpt.speech import say_text - -CFG = Config() - - -class Logger(metaclass=Singleton): - """ - Logger that handle titles in different colors. - Outputs logs in console, activity.log, and errors.log - For console handler: simulates typing - """ - - def __init__(self): - # create log directory if it doesn't exist - this_files_dir_path = os.path.dirname(__file__) - log_dir = os.path.join(this_files_dir_path, "../logs") - if not os.path.exists(log_dir): - os.makedirs(log_dir) - - log_file = "activity.log" - error_file = "error.log" - - console_formatter = AutoGptFormatter("%(title_color)s %(message)s") - - # Create a handler for console which simulate typing - self.typing_console_handler = TypingConsoleHandler() - self.typing_console_handler.setLevel(logging.INFO) - self.typing_console_handler.setFormatter(console_formatter) - - # Create a handler for console without typing simulation - self.console_handler = ConsoleHandler() - self.console_handler.setLevel(logging.DEBUG) - self.console_handler.setFormatter(console_formatter) - - # Info handler in activity.log - self.file_handler = logging.FileHandler( - os.path.join(log_dir, log_file), "a", "utf-8" - ) - self.file_handler.setLevel(logging.DEBUG) - info_formatter = AutoGptFormatter( - "%(asctime)s %(levelname)s %(title)s %(message_no_color)s" - ) - self.file_handler.setFormatter(info_formatter) - - # Error handler error.log - error_handler = logging.FileHandler( - os.path.join(log_dir, error_file), "a", "utf-8" - ) - error_handler.setLevel(logging.ERROR) - error_formatter = AutoGptFormatter( - "%(asctime)s %(levelname)s %(module)s:%(funcName)s:%(lineno)d %(title)s" - " %(message_no_color)s" - ) - error_handler.setFormatter(error_formatter) - - self.typing_logger = logging.getLogger("TYPER") - self.typing_logger.addHandler(self.typing_console_handler) - self.typing_logger.addHandler(self.file_handler) - self.typing_logger.addHandler(error_handler) - self.typing_logger.setLevel(logging.DEBUG) - - self.logger = logging.getLogger("LOGGER") - self.logger.addHandler(self.console_handler) - self.logger.addHandler(self.file_handler) - self.logger.addHandler(error_handler) - self.logger.setLevel(logging.DEBUG) - - def typewriter_log( - self, title="", title_color="", content="", speak_text=False, level=logging.INFO - ): - if speak_text and CFG.speak_mode: - say_text(f"{title}. 
{content}") - - if content: - if isinstance(content, list): - content = " ".join(content) - else: - content = "" - - self.typing_logger.log( - level, content, extra={"title": title, "color": title_color} - ) - - def debug( - self, - message, - title="", - title_color="", - ): - self._log(title, title_color, message, logging.DEBUG) - - def warn( - self, - message, - title="", - title_color="", - ): - self._log(title, title_color, message, logging.WARN) - - def error(self, title, message=""): - self._log(title, Fore.RED, message, logging.ERROR) - - def _log(self, title="", title_color="", message="", level=logging.INFO): - if message: - if isinstance(message, list): - message = " ".join(message) - self.logger.log(level, message, extra={"title": title, "color": title_color}) - - def set_level(self, level): - self.logger.setLevel(level) - self.typing_logger.setLevel(level) - - def double_check(self, additionalText=None): - if not additionalText: - additionalText = ( - "Please ensure you've setup and configured everything" - " correctly. Read https://github.com/Torantulino/Auto-GPT#readme to " - "double check. You can also create a github issue or join the discord" - " and ask there!" - ) - - self.typewriter_log("DOUBLE CHECK CONFIGURATION", Fore.YELLOW, additionalText) - - -""" -Output stream to console using simulated typing -""" - - -class TypingConsoleHandler(logging.StreamHandler): - def emit(self, record): - min_typing_speed = 0.05 - max_typing_speed = 0.01 - - msg = self.format(record) - try: - words = msg.split() - for i, word in enumerate(words): - print(word, end="", flush=True) - if i < len(words) - 1: - print(" ", end="", flush=True) - typing_speed = random.uniform(min_typing_speed, max_typing_speed) - time.sleep(typing_speed) - # type faster after each word - min_typing_speed = min_typing_speed * 0.95 - max_typing_speed = max_typing_speed * 0.95 - print() - except Exception: - self.handleError(record) - - -class ConsoleHandler(logging.StreamHandler): - def emit(self, record) -> None: - msg = self.format(record) - try: - print(msg) - except Exception: - self.handleError(record) - - -class AutoGptFormatter(logging.Formatter): - """ - Allows to handle custom placeholders 'title_color' and 'message_no_color'. - To use this formatter, make sure to pass 'color', 'title' as log extras. 
- """ - - def format(self, record: LogRecord) -> str: - if hasattr(record, "color"): - record.title_color = ( - getattr(record, "color") - + getattr(record, "title") - + " " - + Style.RESET_ALL - ) - else: - record.title_color = getattr(record, "title") - if hasattr(record, "msg"): - record.message_no_color = remove_color_codes(getattr(record, "msg")) - else: - record.message_no_color = "" - return super().format(record) - - -def remove_color_codes(s: str) -> str: - ansi_escape = re.compile(r"\x1B(?:[@-Z\\-_]|\[[0-?]*[ -/]*[@-~])") - return ansi_escape.sub("", s) - - -logger = Logger() - - -def print_assistant_thoughts(ai_name, assistant_reply): - """Prints the assistant's thoughts to the console""" - from autogpt.json_utils.json_fix_llm import ( - attempt_to_fix_json_by_finding_outermost_brackets, - fix_and_parse_json, - ) - - try: - try: - # Parse and print Assistant response - assistant_reply_json = fix_and_parse_json(assistant_reply) - except json.JSONDecodeError: - logger.error("Error: Invalid JSON in assistant thoughts\n", assistant_reply) - assistant_reply_json = attempt_to_fix_json_by_finding_outermost_brackets( - assistant_reply - ) - if isinstance(assistant_reply_json, str): - assistant_reply_json = fix_and_parse_json(assistant_reply_json) - - # Check if assistant_reply_json is a string and attempt to parse - # it into a JSON object - if isinstance(assistant_reply_json, str): - try: - assistant_reply_json = json.loads(assistant_reply_json) - except json.JSONDecodeError: - logger.error("Error: Invalid JSON\n", assistant_reply) - assistant_reply_json = ( - attempt_to_fix_json_by_finding_outermost_brackets( - assistant_reply_json - ) - ) - - assistant_thoughts_reasoning = None - assistant_thoughts_plan = None - assistant_thoughts_speak = None - assistant_thoughts_criticism = None - if not isinstance(assistant_reply_json, dict): - assistant_reply_json = {} - assistant_thoughts = assistant_reply_json.get("thoughts", {}) - assistant_thoughts_text = assistant_thoughts.get("text") - - if assistant_thoughts: - assistant_thoughts_reasoning = assistant_thoughts.get("reasoning") - assistant_thoughts_plan = assistant_thoughts.get("plan") - assistant_thoughts_criticism = assistant_thoughts.get("criticism") - assistant_thoughts_speak = assistant_thoughts.get("speak") - - logger.typewriter_log( - f"{ai_name.upper()} THOUGHTS:", Fore.YELLOW, f"{assistant_thoughts_text}" - ) - logger.typewriter_log( - "REASONING:", Fore.YELLOW, f"{assistant_thoughts_reasoning}" - ) - - if assistant_thoughts_plan: - logger.typewriter_log("PLAN:", Fore.YELLOW, "") - # If it's a list, join it into a string - if isinstance(assistant_thoughts_plan, list): - assistant_thoughts_plan = "\n".join(assistant_thoughts_plan) - elif isinstance(assistant_thoughts_plan, dict): - assistant_thoughts_plan = str(assistant_thoughts_plan) - - # Split the input_string using the newline character and dashes - lines = assistant_thoughts_plan.split("\n") - for line in lines: - line = line.lstrip("- ") - logger.typewriter_log("- ", Fore.GREEN, line.strip()) - - logger.typewriter_log( - "CRITICISM:", Fore.YELLOW, f"{assistant_thoughts_criticism}" - ) - # Speak the assistant's thoughts - if CFG.speak_mode and assistant_thoughts_speak: - say_text(assistant_thoughts_speak) - else: - logger.typewriter_log("SPEAK:", Fore.YELLOW, f"{assistant_thoughts_speak}") - - return assistant_reply_json - except json.decoder.JSONDecodeError: - logger.error("Error: Invalid JSON\n", assistant_reply) - if CFG.speak_mode: - say_text( - "I have received an invalid 
JSON response from the OpenAI API." - " I cannot ignore this response." - ) - - # All other errors, return "Error: + error message" - except Exception: - call_stack = traceback.format_exc() - logger.error("Error: \n", call_stack) - - -def print_assistant_thoughts( - ai_name: object, assistant_reply_json_valid: object -) -> None: - assistant_thoughts_reasoning = None - assistant_thoughts_plan = None - assistant_thoughts_speak = None - assistant_thoughts_criticism = None - - assistant_thoughts = assistant_reply_json_valid.get("thoughts", {}) - assistant_thoughts_text = assistant_thoughts.get("text") - if assistant_thoughts: - assistant_thoughts_reasoning = assistant_thoughts.get("reasoning") - assistant_thoughts_plan = assistant_thoughts.get("plan") - assistant_thoughts_criticism = assistant_thoughts.get("criticism") - assistant_thoughts_speak = assistant_thoughts.get("speak") - logger.typewriter_log( - f"{ai_name.upper()} THOUGHTS:", Fore.YELLOW, f"{assistant_thoughts_text}" - ) - logger.typewriter_log("REASONING:", Fore.YELLOW, f"{assistant_thoughts_reasoning}") - if assistant_thoughts_plan: - logger.typewriter_log("PLAN:", Fore.YELLOW, "") - # If it's a list, join it into a string - if isinstance(assistant_thoughts_plan, list): - assistant_thoughts_plan = "\n".join(assistant_thoughts_plan) - elif isinstance(assistant_thoughts_plan, dict): - assistant_thoughts_plan = str(assistant_thoughts_plan) - - # Split the input_string using the newline character and dashes - lines = assistant_thoughts_plan.split("\n") - for line in lines: - line = line.lstrip("- ") - logger.typewriter_log("- ", Fore.GREEN, line.strip()) - logger.typewriter_log("CRITICISM:", Fore.YELLOW, f"{assistant_thoughts_criticism}") - # Speak the assistant's thoughts - if CFG.speak_mode and assistant_thoughts_speak: - say_text(assistant_thoughts_speak) diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/APK Award Presents FIFA 16 - The Most Beautiful and Fastest Soccer Game on Mobile.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/APK Award Presents FIFA 16 - The Most Beautiful and Fastest Soccer Game on Mobile.md deleted file mode 100644 index b1a572b0cda68561462016f270bba68dccb6d869..0000000000000000000000000000000000000000 --- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/APK Award Presents FIFA 16 - The Most Beautiful and Fastest Soccer Game on Mobile.md +++ /dev/null @@ -1,138 +0,0 @@ -
-

FIFA 16 Mobile: A Review of the Game and How to Download It

-

If you are a fan of football games, you might have heard of FIFA 16 Mobile, a popular and realistic soccer simulation game for Android devices. In this article, we will review the game and its features, as well as show you how to download it from apkaward.com, a trusted website that offers free apk files for Android games. We will also share some tips and tricks to help you play better and enjoy the game more.

-

What is FIFA 16 Mobile?

-

FIFA 16 Mobile is a mobile version of FIFA 16, a console and PC game developed by EA Sports. It was released in September 2015 and it is one of the most downloaded games on Google Play. FIFA 16 Mobile lets you play beautiful football with a newer, better, and faster experience on your mobile device. You can choose from over 10,000 players from over 500 licensed teams and go to battle against other players from real leagues in real arenas from around the world. You can also build and manage your own ultimate team, earn, trade, and transfer superstars like Lionel Messi, Jordan Henderson, and Juan Cuadrado. You can also show off your skills on the pitch with challenging skill games, dynamic accomplishments, and unique player celebrations.

-

fifa 16 mobile apkaward


Download Filehttps://urlin.us/2uSZgg



-

What are the main features of FIFA 16 Mobile?

-

Some of the main features of FIFA 16 Mobile are:

- -

What are the pros and cons of FIFA 16 Mobile?

-

Like any game, FIFA 16 Mobile has its pros and cons. Here are some of them:

- - - - - - - - - -
Pros:
  • Realistic and immersive graphics and animations
  • Wide variety of players, teams, leagues, and modes
  • Easy and intuitive controls and interface
  • Fun and challenging skill games and achievements
  • Innovative and rewarding player exchange feature

Cons:
  • Large file size and high device requirements
  • Limited compatibility with some Android devices
  • Potential lagging and crashing issues
  • Requires internet connection to play
  • Some bugs and glitches reported by users
-

How to download FIFA 16 Mobile?

-

If you want to download FIFA 16 Mobile for your Android device, you can follow these simple steps:

-
    -
  1. Go to apkaward.com, a reliable and safe website that offers free apk files for Android games.
  2. Search for FIFA 16 Mobile in the search bar or browse the categories.
  3. Select the game from the results and click on the download button.
  4. Wait for the download to finish and locate the apk file in your device's storage.
  5. Before installing the apk file, make sure you enable the "Unknown sources" option in your device's settings. This will allow you to install apps from sources other than Google Play.
  6. Tap on the apk file and follow the instructions to install the game.
  7. Enjoy playing FIFA 16 Mobile on your device.
-

Tips and tricks for FIFA 16 Mobile

-

To help you play better and enjoy FIFA 16 Mobile more, here are some tips and tricks that you can use:

- -

Conclusion

-

FIFA 16 Mobile is a great game for football fans who want to enjoy a realistic and immersive soccer simulation on their mobile devices. It has many features that make it fun and challenging, such as the all-new engine, the ultimate team, the skill games, the real world football, and the player exchange. It also has some drawbacks, such as its large file size, its limited compatibility, its potential lagging issues, its internet requirement, and its bugs and glitches. However, these can be overcome by downloading it from apkaward.com, a trusted website that offers free apk files for Android games. By following our tips and tricks, you can also improve your performance and experience in FIFA 16 Mobile.

-

FAQs

-
    -
  1. Q: How much space does FIFA 16 Mobile take on my device?

    A: FIFA 16 Mobile requires about 1.4 GB of free space on your device. You may need more space for additional data or updates.

    -
  2. Q: Which Android devices are compatible with FIFA 16 Mobile?

    A: FIFA 16 Mobile is compatible with Android devices that have at least 1.5 GB of RAM, Android 4.4 or later, and a minimum resolution of 800x480. However, some devices may not run the game smoothly or at all, depending on their specifications and performance.

    -
  3. Q: How can I fix the lagging or crashing issues in FIFA 16 Mobile?

    A: If you experience lagging or crashing issues in FIFA 16 Mobile, you can try the following solutions:

    -

    fifa 16 ultimate team apk download
    -fifa 16 soccer android game
    -fifa 16 mobile free download
    -fifa 16 apk + obb offline
    -fifa 16 apk mod unlimited money
    -fifa 16 android gameplay
    -fifa 16 mobile best players
    -fifa 16 apk + data highly compressed
    -fifa 16 soccer apk latest version
    -fifa 16 mobile tips and tricks
    -fifa 16 apk no license verification
    -fifa 16 android requirements
    -fifa 16 mobile cheats and hacks
    -fifa 16 apk + obb google drive
    -fifa 16 apk revdl
    -fifa 16 android online or offline
    -fifa 16 mobile skill moves
    -fifa 16 apk + data mega
    -fifa 16 soccer apk old version
    -fifa 16 mobile update
    -fifa 16 apk no root needed
    -fifa 16 android controller support
    -fifa 16 mobile player exchange
    -fifa 16 apk + obb mediafire
    -fifa 16 apk rexdl
    -fifa 16 android multiplayer
    -fifa 16 mobile manager mode
    -fifa 16 apk + data zip file
    -fifa 16 soccer apk pure
    -fifa 16 mobile review
    -fifa 16 apk cracked version
    -fifa 16 android system requirements
    -fifa 16 mobile hack tool
    -fifa 16 apk + obb zippyshare
    -fifa 16 apk mirror
    -fifa 16 android graphics settings
    -fifa 16 mobile tournaments
    -fifa 16 apk + data kickass torrent
    -fifa 16 soccer apkpure.com[^1^]
    -fifa 16 mobile ratings
    -fifa 16 apk full unlocked version
    -fifa 16 android download size
    -fifa 16 mobile coins generator
    -fifa 16 apk + obb uptodown.com[^1^]
    -fifa 16 apk mob.org[^1^]
    -fifa 16 android bugs and glitches
    -fifa 16 mobile achievements
    -fifa 16 apk + data parts download[^1^]

    - -
  4. Q: How can I play FIFA 16 Mobile offline?

    A: Unfortunately, you cannot play FIFA 16 Mobile offline. You need an internet connection to access the game's features and modes, such as the ultimate team, the skill games, and the real world football. You also need an internet connection to download the game's data and updates.

    -
  5. Q: How can I get more coins or points in FIFA 16 Mobile?

    A: There are several ways to get more coins or points in FIFA 16 Mobile, such as:

    - -

    I hope you enjoyed reading this article and found it helpful. If you have any questions or feedback, please feel free to leave a comment below. Thank you for your time and attention.

    197e85843d
    -
    -
    \ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Boost your Android device with Speed APK The ultimate performance optimizer.md b/spaces/1phancelerku/anime-remove-background/Boost your Android device with Speed APK The ultimate performance optimizer.md deleted file mode 100644 index ceabe03194e6ee8be10e4661b4e3cea3e84dd3c6..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Boost your Android device with Speed APK The ultimate performance optimizer.md +++ /dev/null @@ -1,128 +0,0 @@ -
    -

    What is Speed Apk and How to Use It?

    -

    If you are an avid gamer who loves playing Android games, you might have wondered if there is a way to change the speed of the games. Maybe you want to make them faster to save time, or slower to enjoy them more. Or maybe you want to cheat or hack some games by manipulating their speed. Whatever your reason, there is a tool that can help you do that. It is called speed apk, and in this article, we will tell you what it is, how to use it, and what are its benefits and risks.

    -

    speed apk


    Download Zip ★★★★★ https://jinyurl.com/2uNNXA



    -

    Introduction

    -

    Speed apk is an application that allows you to change the speed of any game on your Android device. It works by modifying the system clock of your device, which affects how fast or slow the game runs. You can use it to make your games run faster or slower, depending on your preference. You can also use it to cheat or hack some games by speeding up or slowing down certain aspects of them.

    -

    Why would you want to use speed apk? There are many reasons why you might want to change the speed of your games. For example, you might want to:

    - -

    How to download and install speed apk on your Android device? To use speed apk, you need to have a rooted Android device. Rooting is a process that gives you full control over your device, allowing you to modify its system settings and install apps that require root access. If you don't know how to root your device, you can search online for tutorials or guides for your specific device model. Once you have rooted your device, you can download and install speed apk from its official website or from other sources. Make sure you download the latest version of the app and that it is compatible with your device.

    -

    How to Use Speed Apk to Change the Speed of Games in Android?

    -

    Once you have downloaded and installed speed apk on your rooted Android device, you can start using it to change the speed of your games. Here are the steps you need to follow:

    -
      -
  1. Launch speed apk and grant it root access. You will see a floating icon on your screen that indicates that the app is running.
  2. Select the game you want to speed up or slow down. You can do this by tapping on the floating icon and choosing "Select application". You will see a list of all the apps installed on your device. Tap on the game you want to modify and press "OK". The game will be added to the speed apk list.
  3. Adjust the speed multiplier and apply the changes. You can do this by tapping on the floating icon and choosing "Speed". You will see a slider that allows you to change the speed of the game from 0.1x to 10x. You can also use the buttons to increase or decrease the speed by 0.1x. Once you have set the desired speed, press "Apply". The game will run at the new speed.
  4. Revert the changes and restore the original speed. You can do this by tapping on the floating icon and choosing "Restore". The game will run at its normal speed. You can also remove the game from the speed apk list by tapping on it and choosing "Remove".

    That's it! You have successfully changed the speed of your game using speed apk. You can repeat these steps for any other game you want to modify.

    -

    Benefits and Risks of Using Speed Apk

    -

    Using speed apk can have some benefits and risks, depending on how you use it and what games you use it on. Here are some of them:

    -

    speed stars apk
    -zingspeed mobile apk
    -speed test apk
    -speed vpn apk
    -speed booster apk
    -speed camera apk
    -speed racing apk
    -speed meter apk
    -speed browser apk
    -speed dial apk
    -speed cleaner apk
    -speed drifters apk
    -speed fan apk
    -speed golf apk
    -speed hacker apk
    -speed indicator apk
    -speed jump apk
    -speed keyboard apk
    -speed launcher apk
    -speed logic apk
    -speed monitor apk
    -speed optimizer apk
    -speed painter apk
    -speed quiz apk
    -speed reader apk
    -speed run apk
    -speed scanner apk
    -speed tracker apk
    -speed video apk
    -speed wallpaper apk
    -speed x3d apk
    -speed zone apk
    -need for speed apk
    -asphalt 9: legends - epic car action racing game (speed edition) apk
    -bike race free - top motorcycle racing games (speed edition) apk
    -carx drift racing 2 (speed edition) apk
    -drag racing (speed edition) apk
    -extreme car driving simulator (speed edition) apk
    -fast & furious takedown (speed edition) apk
    -hill climb racing 2 (speed edition) apk
    -hot wheels: race off (speed edition) apk
    -real racing 3 (speed edition) apk
    -traffic rider (speed edition) apk
    -turbo driving racing 3d (speed edition) apk
    -csr racing 2 - free car racing game (speed edition) apk
    -real drift car racing (speed edition) apk
    -traffic racer (speed edition) apk
    -beach buggy racing 2 (speed edition) apk
    -city racing 3d (speed edition) apk

    -

    Benefits of Using Speed Apk

    - -

    Risks of Using Speed Apk

    - -

    Conclusion

    -

    In conclusion, speed apk is a tool that allows you to change the speed of any game on your Android device. It can be used for various purposes, such as making your games faster or slower, more fun or challenging, or cheating or hacking them. However, it also comes with some benefits and risks, such as affecting your device performance or stability, getting banned or penalized by game developers, or raising ethical or moral issues. Therefore, you should use it wisely and responsibly, and at your own risk.

    -

    Here are some tips and recommendations for using speed apk:

    - -

    We hope this article has helped you understand what is speed apk and how to use it. If you have any feedback or opinions about this topic, feel free to share them with us in the comments section below. Happy gaming!

    -

    FAQs

    -

    Here are some of the frequently asked questions about speed apk:

    -
      -
  1. What are some of the best games to use speed apk on?

      There is no definitive answer to this question, as different games may have different effects or results when using speed apk. However, some of the games that are commonly used with speed apk are:

      - -
  2. Does speed apk work on online games?

      Speed apk does not work on online games that require an internet connection or a server to run. This is because the speed of the game is determined by the server, not by your device. If you try to use speed apk on online games, you may experience errors, glitches, or disconnections. You may also get banned or penalized by the game developers for violating their rules or policies.

      -
  3. Is speed apk safe and legal to use?

      Speed apk is not a malicious or harmful app, but it is not a risk-free app either. It can cause some problems or damage to your device, such as overheating, battery drain, or system crash. It can also get you in trouble with some game developers, who may ban or penalize you for using it. Moreover, it can raise some ethical or moral issues, such as cheating or hacking other players. Therefore, you should use speed apk at your own risk and responsibility.

      -
  4. How can I uninstall speed apk from my device?

      If you want to uninstall speed apk from your device, you can follow these steps:

      -
        -
  1. Launch speed apk and tap on the floating icon.
  2. Choose "Settings" and then "Uninstall".
  3. Confirm your choice and wait for the app to be uninstalled.
  4. Reboot your device to complete the process.
      -
  5. Where can I find more information and support for speed apk?

      If you want to find more information and support for speed apk, you can visit its official website or its social media pages. You can also contact its developers via email or feedback form. You can also join its online community or forum, where you can ask questions, share tips, or report issues.

      -

    401be4b1e0
    -
    -
    \ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Download Google Drive APK for Android and Enjoy Free Cloud Storage.md b/spaces/1phancelerku/anime-remove-background/Download Google Drive APK for Android and Enjoy Free Cloud Storage.md deleted file mode 100644 index 178cfe6409216afbd88f549e629516933823d2fc..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Download Google Drive APK for Android and Enjoy Free Cloud Storage.md +++ /dev/null @@ -1,130 +0,0 @@ - -

    How to Download APK Files from Google Drive and Install Them on Your Android Device

    -

    If you have an Android device, you probably know that you can install apps from the Google Play Store. But did you know that you can also install apps from other sources, such as Google Drive? In this article, we will show you how to download APK files from Google Drive and install them on your Android device. We will also explain what an APK file is, why you might need it, and what risks and precautions you should take when installing it.

    -

    download apk google drive


    Download Zip ⇒⇒⇒ https://jinyurl.com/2uNU1K



    -

    What is an APK File and Why You Might Need It

    -

    APK File Definition and Benefits

    -

    An APK file is a package file that contains the installation files for an Android app. It has the extension .apk and can be opened by any file explorer app. APK files are useful for installing apps that are not available on the Google Play Store, such as beta versions, regional apps, or modded apps. They can also help you update your apps faster, bypass restrictions, or access features that are not supported by your device.
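Under the hood, an APK is simply a ZIP archive with a fixed layout (a manifest, compiled code, and resources). As a quick illustration, here is a minimal Python sketch that lists what is inside an APK and checks that the archive is not corrupted; the file name is a placeholder for any APK you have downloaded.

```python
import zipfile

APK_PATH = "example.apk"  # placeholder: any downloaded .apk file

with zipfile.ZipFile(APK_PATH) as apk:
    # An APK is a ZIP archive, so the standard zipfile module can open it.
    for name in apk.namelist()[:20]:
        print(name)  # typically AndroidManifest.xml, classes.dex, resources.arsc, res/...

    # testzip() returns the first corrupt member, or None if the archive is intact.
    bad = apk.testzip()
    print("Archive looks intact" if bad is None else f"Corrupt entry: {bad}")
```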

    -

    Risks and Precautions of Installing APK Files

    -

    However, installing APK files also comes with some risks. You might download a malicious or corrupted file that can harm your device or compromise your data. You might also violate the terms of service of some apps or infringe on their intellectual property rights. Therefore, you should only download APK files from reputable sources, such as official websites, trusted developers, or verified platforms. You should also scan the files for viruses before installing them and check their permissions carefully. Finally, you should always back up your data before installing any APK file, in case something goes wrong.
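One practical way to check that a downloaded APK has not been tampered with or corrupted in transit is to compare its checksum against the one published by the source, when the source provides one. Below is a minimal sketch of that check; the file path and expected hash are placeholders.

```python
import hashlib

APK_PATH = "example.apk"         # placeholder path to the downloaded file
EXPECTED_SHA256 = "0123abcd..."  # placeholder: hash published by the source

def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading it in 1 MiB chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

actual = sha256_of(APK_PATH)
print("Checksum matches" if actual == EXPECTED_SHA256 else f"Mismatch: got {actual}")
```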

    -

    How to Download APK Files from Google Drive

    -

    Step 1: Enable Unknown Sources on Your Android Device

    -

    Before you can install any APK file on your Android device, you need to enable unknown sources. This means that you allow your device to install apps from sources other than the Google Play Store. To do this, follow these steps:

    - -

    Step 2: Find the APK File on Google Drive and Download It

    -

    Now that you have enabled unknown sources, you can download the APK file from Google Drive. To do this, follow these steps:

    - -

    Step 3: Locate the Downloaded APK File and Install It

    -

    Once you have downloaded the APK file from Google Drive, you need to locate it and install it. To do this, follow these steps:

    - -

    Congratulations, you have successfully downloaded and installed an APK file from Google Drive!

    -

    How to Install APK Files on Your Android Device Using Other Methods

    -

    Method 1: Use a File Manager App

    -

    If you don't want to use your web browser to download APK files from Google Drive, you can use a file manager app instead. A file manager app allows you to access and manage the files on your device, including APK files. Some popular file manager apps are [ES File Explorer], [Solid Explorer], and [Files by Google]. To use a file manager app to install APK files, follow these steps:

    -

    How to download apk files from google drive
    -Download google drive apk for android
    -Google drive apk download latest version
    -Download google drive apk for pc
    -Google drive apk download for firestick
    -Download google drive apk old version
    -Google drive apk download uptodown
    -Download google drive apk mod
    -Google drive apk download for android tv
    -Download google drive apk mirror
    -Google drive apk download apkpure
    -Download google drive apk for chromebook
    -Google drive apk download for windows 10
    -Download google drive apk pro
    -Google drive apk download for laptop
    -Download google drive apk offline installer
    -Google drive apk download for ios
    -Download google drive apk premium
    -Google drive apk download for kindle fire
    -Download google drive apk no ads
    -Google drive apk download for mac
    -Download google drive apk cracked
    -Google drive apk download for smart tv
    -Download google drive apk filehippo
    -Google drive apk download for windows 7
    -Download google drive apk full version
    -Google drive apk download for blackberry
    -Download google drive apk pure
    -Google drive apk download for windows 8.1
    -Download google drive apk hack
    -Google drive apk download for linux
    -Download google drive apk free
    -Google drive apk download for android 4.4.2
    -Download google drive apk beta
    -Google drive apk download for android 5.1.1
    -Download google drive apk direct link
    -Google drive apk download for android 6.0.1
    -Download google drive apk xda
    -Google drive apk download for android 7.0
    -Download google drive apk rexdl
    -Google drive apk download for android 8.0
    -Download google drive apk revdl
    -Google drive apk download for android 9.0 pie
    -Download google drive apk from play store
    -Google drive apk download for android 10 q

    - -

    Method 2: Use an APK Installer App

    -

    If you want to make the installation process easier, you can use an APK installer app. An APK installer app is a tool that helps you install APK files on your device without any hassle. Some popular APK installer apps are [APK Installer], [APKPure], and [APKMirror Installer]. To use an APK installer app to install APK files, follow these steps:

    - -

    Method 3: Transfer the APK File from Your Computer via USB

    -

    If you have the APK file on your computer, you can also transfer it to your Android device via USB and install it. To do this, follow these steps:

    - -

    Conclusion

    -

    In this article, we have shown you how to download APK files from Google Drive and install them on your Android device. We have also explained what an APK file is, why you might need it, and what risks and precautions you should take when installing it. We hope that this article has been helpful and informative for you. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading!

    -

    FAQs

    - - - - - - - - -
Q: What is Google Drive?

A: Google Drive is a cloud storage service that allows you to store and access your files online. You can upload, download, share, and sync your files across different devices using Google Drive. You can also create and edit documents, spreadsheets, presentations, forms, drawings, and more using Google Drive's online tools. You can get 15 GB of free storage space with a Google account or upgrade to a paid plan for more storage options.

Q: How do I update an APK file?

A: To update an APK file, you need to download and install the latest version of the APK file from the same source that you got the original one. You can also check for updates using the APK installer app that you used to install the APK file. Alternatively, you can uninstall the old version of the app and install the new one from the Google Play Store if it is available there.

Q: How do I uninstall an APK file?

A: To uninstall an APK file, you need to go to your device settings and tap Apps or Applications. Find the app that you want to uninstall and tap it. Tap Uninstall and confirm your choice. You can also uninstall an APK file using the APK installer app that you used to install it.

Q: How do I share an APK file?

A: To share an APK file, you need to upload it to a cloud service, such as Google Drive, Dropbox, or OneDrive, and share the link with the person that you want to share it with. You can also use a file sharing app, such as [SHAREit], [Xender], or [Zapya], to transfer the APK file directly to another device via Wi-Fi or Bluetooth.

Q: How do I backup an APK file?

A: To backup an APK file, you need to copy it from your device's internal storage or SD card to your computer or another storage device. You can also use a backup app, such as [Titanium Backup], [Helium], or [Super Backup], to backup your APK files along with their data and settings.

Q: How do I open an APK file on my computer?

A: To open an APK file on your computer, you need to use an Android emulator, such as [BlueStacks], [Nox Player], or [MEmu], that allows you to run Android apps on your computer. You can also use a software tool, such as [APK Studio], [APK Easy Tool], or [APK Editor Pro], that allows you to view and edit the contents of an APK file.

    197e85843d
    -
    -
    \ No newline at end of file diff --git a/spaces/1toTree/lora_test/ppdiffusers/schedulers/scheduling_dpmsolver_multistep.py b/spaces/1toTree/lora_test/ppdiffusers/schedulers/scheduling_dpmsolver_multistep.py deleted file mode 100644 index ac93600eb6fdea3d18475e845a9f934e4ec7e341..0000000000000000000000000000000000000000 --- a/spaces/1toTree/lora_test/ppdiffusers/schedulers/scheduling_dpmsolver_multistep.py +++ /dev/null @@ -1,524 +0,0 @@ -# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved. -# Copyright 2022 TSAIL Team and The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -# DISCLAIMER: This file is strongly influenced by https://github.com/LuChengTHU/dpm-solver - -import math -from typing import List, Optional, Tuple, Union - -import numpy as np -import paddle - -from ..configuration_utils import ConfigMixin, register_to_config -from ..utils import _COMPATIBLE_STABLE_DIFFUSION_SCHEDULERS, deprecate -from .scheduling_utils import SchedulerMixin, SchedulerOutput - - -def betas_for_alpha_bar(num_diffusion_timesteps, max_beta=0.999): - """ - Create a beta schedule that discretizes the given alpha_t_bar function, which defines the cumulative product of - (1-beta) over time from t = [0,1]. - - Contains a function alpha_bar that takes an argument t and transforms it to the cumulative product of (1-beta) up - to that part of the diffusion process. - - - Args: - num_diffusion_timesteps (`int`): the number of betas to produce. - max_beta (`float`): the maximum beta to use; use values lower than 1 to - prevent singularities. - - Returns: - betas (`np.ndarray`): the betas used by the scheduler to step the model outputs - """ - - def alpha_bar(time_step): - return math.cos((time_step + 0.008) / 1.008 * math.pi / 2) ** 2 - - betas = [] - for i in range(num_diffusion_timesteps): - t1 = i / num_diffusion_timesteps - t2 = (i + 1) / num_diffusion_timesteps - betas.append(min(1 - alpha_bar(t2) / alpha_bar(t1), max_beta)) - return paddle.to_tensor(betas, dtype="float32") - - -class DPMSolverMultistepScheduler(SchedulerMixin, ConfigMixin): - """ - DPM-Solver (and the improved version DPM-Solver++) is a fast dedicated high-order solver for diffusion ODEs with - the convergence order guarantee. Empirically, sampling by DPM-Solver with only 20 steps can generate high-quality - samples, and it can generate quite good samples even in only 10 steps. - - For more details, see the original paper: https://arxiv.org/abs/2206.00927 and https://arxiv.org/abs/2211.01095 - - Currently, we support the multistep DPM-Solver for both noise prediction models and data prediction models. We - recommend to use `solver_order=2` for guided sampling, and `solver_order=3` for unconditional sampling. - - We also support the "dynamic thresholding" method in Imagen (https://arxiv.org/abs/2205.11487). For pixel-space - diffusion models, you can set both `algorithm_type="dpmsolver++"` and `thresholding=True` to use the dynamic - thresholding. 
Note that the thresholding method is unsuitable for latent-space diffusion models (such as - stable-diffusion). - - [`~ConfigMixin`] takes care of storing all config attributes that are passed in the scheduler's `__init__` - function, such as `num_train_timesteps`. They can be accessed via `scheduler.config.num_train_timesteps`. - [`SchedulerMixin`] provides general loading and saving functionality via the [`SchedulerMixin.save_pretrained`] and - [`~SchedulerMixin.from_pretrained`] functions. - - Args: - num_train_timesteps (`int`): number of diffusion steps used to train the model. - beta_start (`float`): the starting `beta` value of inference. - beta_end (`float`): the final `beta` value. - beta_schedule (`str`): - the beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from - `linear`, `scaled_linear`, or `squaredcos_cap_v2`. - trained_betas (`np.ndarray`, optional): - option to pass an array of betas directly to the constructor to bypass `beta_start`, `beta_end` etc. - solver_order (`int`, default `2`): - the order of DPM-Solver; can be `1` or `2` or `3`. We recommend to use `solver_order=2` for guided - sampling, and `solver_order=3` for unconditional sampling. - prediction_type (`str`, default `epsilon`, optional): - prediction type of the scheduler function, one of `epsilon` (predicting the noise of the diffusion - process), `sample` (directly predicting the noisy sample`) or `v_prediction` (see section 2.4 - https://imagen.research.google/video/paper.pdf) - thresholding (`bool`, default `False`): - whether to use the "dynamic thresholding" method (introduced by Imagen, https://arxiv.org/abs/2205.11487). - For pixel-space diffusion models, you can set both `algorithm_type=dpmsolver++` and `thresholding=True` to - use the dynamic thresholding. Note that the thresholding method is unsuitable for latent-space diffusion - models (such as stable-diffusion). - dynamic_thresholding_ratio (`float`, default `0.995`): - the ratio for the dynamic thresholding method. Default is `0.995`, the same as Imagen - (https://arxiv.org/abs/2205.11487). - sample_max_value (`float`, default `1.0`): - the threshold value for dynamic thresholding. Valid only when `thresholding=True` and - `algorithm_type="dpmsolver++`. - algorithm_type (`str`, default `dpmsolver++`): - the algorithm type for the solver. Either `dpmsolver` or `dpmsolver++`. The `dpmsolver` type implements the - algorithms in https://arxiv.org/abs/2206.00927, and the `dpmsolver++` type implements the algorithms in - https://arxiv.org/abs/2211.01095. We recommend to use `dpmsolver++` with `solver_order=2` for guided - sampling (e.g. stable-diffusion). - solver_type (`str`, default `midpoint`): - the solver type for the second-order solver. Either `midpoint` or `heun`. The solver type slightly affects - the sample quality, especially for small number of steps. We empirically find that `midpoint` solvers are - slightly better, so we recommend to use the `midpoint` type. - lower_order_final (`bool`, default `True`): - whether to use lower-order solvers in the final steps. Only valid for < 15 inference steps. We empirically - find this trick can stabilize the sampling of DPM-Solver for steps < 15, especially for steps <= 10. 
- - """ - - _compatibles = _COMPATIBLE_STABLE_DIFFUSION_SCHEDULERS.copy() - _deprecated_kwargs = ["predict_epsilon"] - order = 1 - - @register_to_config - def __init__( - self, - num_train_timesteps: int = 1000, - beta_start: float = 0.0001, - beta_end: float = 0.02, - beta_schedule: str = "linear", - trained_betas: Optional[Union[np.ndarray, List[float]]] = None, - solver_order: int = 2, - prediction_type: str = "epsilon", - thresholding: bool = False, - dynamic_thresholding_ratio: float = 0.995, - sample_max_value: float = 1.0, - algorithm_type: str = "dpmsolver++", - solver_type: str = "midpoint", - lower_order_final: bool = True, - **kwargs, - ): - message = ( - "Please make sure to instantiate your scheduler with `prediction_type` instead. E.g. `scheduler =" - " DPMSolverMultistepScheduler.from_pretrained(, prediction_type='epsilon')`." - ) - predict_epsilon = deprecate("predict_epsilon", "0.13.0", message, take_from=kwargs) - if predict_epsilon is not None: - self.register_to_config(prediction_type="epsilon" if predict_epsilon else "sample") - if trained_betas is not None: - self.betas = paddle.to_tensor(trained_betas, dtype="float32") - elif beta_schedule == "linear": - self.betas = paddle.linspace(beta_start, beta_end, num_train_timesteps, dtype="float32") - elif beta_schedule == "scaled_linear": - # this schedule is very specific to the latent diffusion model. - self.betas = paddle.linspace(beta_start**0.5, beta_end**0.5, num_train_timesteps, dtype="float32") ** 2 - elif beta_schedule == "squaredcos_cap_v2": - # Glide cosine schedule - self.betas = betas_for_alpha_bar(num_train_timesteps) - else: - raise NotImplementedError(f"{beta_schedule} does is not implemented for {self.__class__}") - - self.alphas = 1.0 - self.betas - self.alphas_cumprod = paddle.cumprod(self.alphas, 0) - # Currently we only support VP-type noise schedule - self.alpha_t = paddle.sqrt(self.alphas_cumprod) - self.sigma_t = paddle.sqrt(1 - self.alphas_cumprod) - self.lambda_t = paddle.log(self.alpha_t) - paddle.log(self.sigma_t) - - # standard deviation of the initial noise distribution - self.init_noise_sigma = 1.0 - - # settings for DPM-Solver - if algorithm_type not in ["dpmsolver", "dpmsolver++"]: - raise NotImplementedError(f"{algorithm_type} does is not implemented for {self.__class__}") - if solver_type not in ["midpoint", "heun"]: - raise NotImplementedError(f"{solver_type} does is not implemented for {self.__class__}") - - # setable values - self.num_inference_steps = None - timesteps = np.linspace(0, num_train_timesteps - 1, num_train_timesteps, dtype=np.float32)[::-1].copy() - self.timesteps = paddle.to_tensor(timesteps) - self.model_outputs = [None] * solver_order - self.lower_order_nums = 0 - - def set_timesteps(self, num_inference_steps: int): - """ - Sets the timesteps used for the diffusion chain. Supporting function to be run before inference. - - Args: - num_inference_steps (`int`): - the number of diffusion steps used when generating samples with a pre-trained model. 
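            Internally the schedule is built from `np.linspace(0, num_train_timesteps - 1,
            num_inference_steps + 1)`, rounded to integers, reversed into descending order, and the
            trailing zero dropped, so `self.timesteps` ends up with exactly `num_inference_steps` entries.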
- """ - self.num_inference_steps = num_inference_steps - timesteps = ( - np.linspace(0, self.num_train_timesteps - 1, num_inference_steps + 1) - .round()[::-1][:-1] - .copy() - .astype(np.int64) - ) - self.timesteps = paddle.to_tensor(timesteps) - self.model_outputs = [ - None, - ] * self.config.solver_order - self.lower_order_nums = 0 - - def convert_model_output(self, model_output: paddle.Tensor, timestep: int, sample: paddle.Tensor) -> paddle.Tensor: - """ - Convert the model output to the corresponding type that the algorithm (DPM-Solver / DPM-Solver++) needs. - - DPM-Solver is designed to discretize an integral of the noise prediction model, and DPM-Solver++ is designed to - discretize an integral of the data prediction model. So we need to first convert the model output to the - corresponding type to match the algorithm. - - Note that the algorithm type and the model type is decoupled. That is to say, we can use either DPM-Solver or - DPM-Solver++ for both noise prediction model and data prediction model. - - Args: - model_output (`paddle.Tensor`): direct output from learned diffusion model. - timestep (`int`): current discrete timestep in the diffusion chain. - sample (`paddle.Tensor`): - current instance of sample being created by diffusion process. - - Returns: - `paddle.Tensor`: the converted model output. - """ - # DPM-Solver++ needs to solve an integral of the data prediction model. - if self.config.algorithm_type == "dpmsolver++": - if self.config.prediction_type == "epsilon": - alpha_t, sigma_t = self.alpha_t[timestep], self.sigma_t[timestep] - x0_pred = (sample - sigma_t * model_output) / alpha_t - elif self.config.prediction_type == "sample": - x0_pred = model_output - elif self.config.prediction_type == "v_prediction": - alpha_t, sigma_t = self.alpha_t[timestep], self.sigma_t[timestep] - x0_pred = alpha_t * sample - sigma_t * model_output - else: - raise ValueError( - f"prediction_type given as {self.config.prediction_type} must be one of `epsilon`, `sample`, or" - " `v_prediction` for the DPMSolverMultistepScheduler." - ) - - if self.config.thresholding: - # Dynamic thresholding in https://arxiv.org/abs/2205.11487 - orig_dtype = x0_pred.dtype - if orig_dtype not in [paddle.float32, paddle.float64]: - x0_pred = x0_pred.cast("float32") - dynamic_max_val = paddle.quantile( - paddle.abs(x0_pred).reshape((x0_pred.shape[0], -1)), self.config.dynamic_thresholding_ratio, axis=1 - ) - dynamic_max_val = paddle.maximum( - dynamic_max_val, - self.config.sample_max_value * paddle.ones_like(dynamic_max_val), - )[(...,) + (None,) * (x0_pred.ndim - 1)] - x0_pred = paddle.clip(x0_pred, -dynamic_max_val, dynamic_max_val) / dynamic_max_val - x0_pred = x0_pred.cast(orig_dtype) - return x0_pred - # DPM-Solver needs to solve an integral of the noise prediction model. - elif self.config.algorithm_type == "dpmsolver": - if self.config.prediction_type == "epsilon": - return model_output - elif self.config.prediction_type == "sample": - alpha_t, sigma_t = self.alpha_t[timestep], self.sigma_t[timestep] - epsilon = (sample - alpha_t * model_output) / sigma_t - return epsilon - elif self.config.prediction_type == "v_prediction": - alpha_t, sigma_t = self.alpha_t[timestep], self.sigma_t[timestep] - epsilon = alpha_t * model_output + sigma_t * sample - return epsilon - else: - raise ValueError( - f"prediction_type given as {self.config.prediction_type} must be one of `epsilon`, `sample`, or" - " `v_prediction` for the DPMSolverMultistepScheduler." 
- ) - - def dpm_solver_first_order_update( - self, - model_output: paddle.Tensor, - timestep: int, - prev_timestep: int, - sample: paddle.Tensor, - ) -> paddle.Tensor: - """ - One step for the first-order DPM-Solver (equivalent to DDIM). - - See https://arxiv.org/abs/2206.00927 for the detailed derivation. - - Args: - model_output (`paddle.Tensor`): direct output from learned diffusion model. - timestep (`int`): current discrete timestep in the diffusion chain. - prev_timestep (`int`): previous discrete timestep in the diffusion chain. - sample (`paddle.Tensor`): - current instance of sample being created by diffusion process. - - Returns: - `paddle.Tensor`: the sample tensor at the previous timestep. - """ - lambda_t, lambda_s = self.lambda_t[prev_timestep], self.lambda_t[timestep] - alpha_t, alpha_s = self.alpha_t[prev_timestep], self.alpha_t[timestep] - sigma_t, sigma_s = self.sigma_t[prev_timestep], self.sigma_t[timestep] - h = lambda_t - lambda_s - if self.config.algorithm_type == "dpmsolver++": - x_t = (sigma_t / sigma_s) * sample - (alpha_t * (paddle.exp(-h) - 1.0)) * model_output - elif self.config.algorithm_type == "dpmsolver": - x_t = (alpha_t / alpha_s) * sample - (sigma_t * (paddle.exp(h) - 1.0)) * model_output - return x_t - - def multistep_dpm_solver_second_order_update( - self, - model_output_list: List[paddle.Tensor], - timestep_list: List[int], - prev_timestep: int, - sample: paddle.Tensor, - ) -> paddle.Tensor: - """ - One step for the second-order multistep DPM-Solver. - - Args: - model_output_list (`List[paddle.Tensor]`): - direct outputs from learned diffusion model at current and latter timesteps. - timestep (`int`): current and latter discrete timestep in the diffusion chain. - prev_timestep (`int`): previous discrete timestep in the diffusion chain. - sample (`paddle.Tensor`): - current instance of sample being created by diffusion process. - - Returns: - `paddle.Tensor`: the sample tensor at the previous timestep. 
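        For reference, with h = lambda_t - lambda_s0, r0 = h_0 / h, D0 = m0 and D1 = (m0 - m1) / r0,
        the DPM-Solver++ midpoint variant computed below is

            x_t = (sigma_t / sigma_s0) * sample
                  - alpha_t * (exp(-h) - 1) * D0
                  - 0.5 * alpha_t * (exp(-h) - 1) * D1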
- """ - t, s0, s1 = prev_timestep, timestep_list[-1], timestep_list[-2] - m0, m1 = model_output_list[-1], model_output_list[-2] - lambda_t, lambda_s0, lambda_s1 = self.lambda_t[t], self.lambda_t[s0], self.lambda_t[s1] - alpha_t, alpha_s0 = self.alpha_t[t], self.alpha_t[s0] - sigma_t, sigma_s0 = self.sigma_t[t], self.sigma_t[s0] - h, h_0 = lambda_t - lambda_s0, lambda_s0 - lambda_s1 - r0 = h_0 / h - D0, D1 = m0, (1.0 / r0) * (m0 - m1) - if self.config.algorithm_type == "dpmsolver++": - # See https://arxiv.org/abs/2211.01095 for detailed derivations - if self.config.solver_type == "midpoint": - x_t = ( - (sigma_t / sigma_s0) * sample - - (alpha_t * (paddle.exp(-h) - 1.0)) * D0 - - 0.5 * (alpha_t * (paddle.exp(-h) - 1.0)) * D1 - ) - elif self.config.solver_type == "heun": - x_t = ( - (sigma_t / sigma_s0) * sample - - (alpha_t * (paddle.exp(-h) - 1.0)) * D0 - + (alpha_t * ((paddle.exp(-h) - 1.0) / h + 1.0)) * D1 - ) - elif self.config.algorithm_type == "dpmsolver": - # See https://arxiv.org/abs/2206.00927 for detailed derivations - if self.config.solver_type == "midpoint": - x_t = ( - (alpha_t / alpha_s0) * sample - - (sigma_t * (paddle.exp(h) - 1.0)) * D0 - - 0.5 * (sigma_t * (paddle.exp(h) - 1.0)) * D1 - ) - elif self.config.solver_type == "heun": - x_t = ( - (alpha_t / alpha_s0) * sample - - (sigma_t * (paddle.exp(h) - 1.0)) * D0 - - (sigma_t * ((paddle.exp(h) - 1.0) / h - 1.0)) * D1 - ) - return x_t - - def multistep_dpm_solver_third_order_update( - self, - model_output_list: List[paddle.Tensor], - timestep_list: List[int], - prev_timestep: int, - sample: paddle.Tensor, - ) -> paddle.Tensor: - """ - One step for the third-order multistep DPM-Solver. - - Args: - model_output_list (`List[paddle.Tensor]`): - direct outputs from learned diffusion model at current and latter timesteps. - timestep (`int`): current and latter discrete timestep in the diffusion chain. - prev_timestep (`int`): previous discrete timestep in the diffusion chain. - sample (`paddle.Tensor`): - current instance of sample being created by diffusion process. - - Returns: - `paddle.Tensor`: the sample tensor at the previous timestep. 
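        The update is built from finite differences of the three cached model outputs: with
        r0 = h_0 / h and r1 = h_1 / h, D0 = m0, D1_0 = (m0 - m1) / r0, D1_1 = (m1 - m2) / r1,
        D1 = D1_0 + r0 / (r0 + r1) * (D1_0 - D1_1), and D2 = (D1_0 - D1_1) / (r0 + r1).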
- """ - t, s0, s1, s2 = prev_timestep, timestep_list[-1], timestep_list[-2], timestep_list[-3] - m0, m1, m2 = model_output_list[-1], model_output_list[-2], model_output_list[-3] - lambda_t, lambda_s0, lambda_s1, lambda_s2 = ( - self.lambda_t[t], - self.lambda_t[s0], - self.lambda_t[s1], - self.lambda_t[s2], - ) - alpha_t, alpha_s0 = self.alpha_t[t], self.alpha_t[s0] - sigma_t, sigma_s0 = self.sigma_t[t], self.sigma_t[s0] - h, h_0, h_1 = lambda_t - lambda_s0, lambda_s0 - lambda_s1, lambda_s1 - lambda_s2 - r0, r1 = h_0 / h, h_1 / h - D0 = m0 - D1_0, D1_1 = (1.0 / r0) * (m0 - m1), (1.0 / r1) * (m1 - m2) - D1 = D1_0 + (r0 / (r0 + r1)) * (D1_0 - D1_1) - D2 = (1.0 / (r0 + r1)) * (D1_0 - D1_1) - if self.config.algorithm_type == "dpmsolver++": - # See https://arxiv.org/abs/2206.00927 for detailed derivations - x_t = ( - (sigma_t / sigma_s0) * sample - - (alpha_t * (paddle.exp(-h) - 1.0)) * D0 - + (alpha_t * ((paddle.exp(-h) - 1.0) / h + 1.0)) * D1 - - (alpha_t * ((paddle.exp(-h) - 1.0 + h) / h**2 - 0.5)) * D2 - ) - elif self.config.algorithm_type == "dpmsolver": - # See https://arxiv.org/abs/2206.00927 for detailed derivations - x_t = ( - (alpha_t / alpha_s0) * sample - - (sigma_t * (paddle.exp(h) - 1.0)) * D0 - - (sigma_t * ((paddle.exp(h) - 1.0) / h - 1.0)) * D1 - - (sigma_t * ((paddle.exp(h) - 1.0 - h) / h**2 - 0.5)) * D2 - ) - return x_t - - def step( - self, - model_output: paddle.Tensor, - timestep: int, - sample: paddle.Tensor, - return_dict: bool = True, - ) -> Union[SchedulerOutput, Tuple]: - """ - Step function propagating the sample with the multistep DPM-Solver. - - Args: - model_output (`paddle.Tensor`): direct output from learned diffusion model. - timestep (`int`): current discrete timestep in the diffusion chain. - sample (`paddle.Tensor`): - current instance of sample being created by diffusion process. - return_dict (`bool`): option for returning tuple rather than SchedulerOutput class - - Returns: - [`~scheduling_utils.SchedulerOutput`] or `tuple`: [`~scheduling_utils.SchedulerOutput`] if `return_dict` is - True, otherwise a `tuple`. When returning a tuple, the first element is the sample tensor. 
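        Example (illustrative sketch only; `unet` stands for any paddle noise-prediction model
        called as `unet(sample, t)`, and the latent shape is just a placeholder):

            scheduler = DPMSolverMultistepScheduler(algorithm_type="dpmsolver++", solver_order=2)
            scheduler.set_timesteps(num_inference_steps=20)
            sample = paddle.randn([1, 4, 64, 64])  # initial noise latent (example shape)
            for t in scheduler.timesteps:
                model_output = unet(sample, t)  # predict the noise at timestep t
                sample = scheduler.step(model_output, t, sample).prev_sample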
- - """ - if self.num_inference_steps is None: - raise ValueError( - "Number of inference steps is 'None', you need to run 'set_timesteps' after creating the scheduler" - ) - - step_index = (self.timesteps == timestep).nonzero() - if len(step_index) == 0: - step_index = len(self.timesteps) - 1 - else: - step_index = step_index.item() - prev_timestep = 0 if step_index == len(self.timesteps) - 1 else self.timesteps[step_index + 1] - lower_order_final = ( - (step_index == len(self.timesteps) - 1) and self.config.lower_order_final and len(self.timesteps) < 15 - ) - lower_order_second = ( - (step_index == len(self.timesteps) - 2) and self.config.lower_order_final and len(self.timesteps) < 15 - ) - - model_output = self.convert_model_output(model_output, timestep, sample) - for i in range(self.config.solver_order - 1): - self.model_outputs[i] = self.model_outputs[i + 1] - self.model_outputs[-1] = model_output - - if self.config.solver_order == 1 or self.lower_order_nums < 1 or lower_order_final: - prev_sample = self.dpm_solver_first_order_update(model_output, timestep, prev_timestep, sample) - elif self.config.solver_order == 2 or self.lower_order_nums < 2 or lower_order_second: - timestep_list = [self.timesteps[step_index - 1], timestep] - prev_sample = self.multistep_dpm_solver_second_order_update( - self.model_outputs, timestep_list, prev_timestep, sample - ) - else: - timestep_list = [self.timesteps[step_index - 2], self.timesteps[step_index - 1], timestep] - prev_sample = self.multistep_dpm_solver_third_order_update( - self.model_outputs, timestep_list, prev_timestep, sample - ) - - if self.lower_order_nums < self.config.solver_order: - self.lower_order_nums += 1 - - if not return_dict: - return (prev_sample,) - - return SchedulerOutput(prev_sample=prev_sample) - - def scale_model_input(self, sample: paddle.Tensor, *args, **kwargs) -> paddle.Tensor: - """ - Ensures interchangeability with schedulers that need to scale the denoising model input depending on the - current timestep. 
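        This scheduler does not need any input scaling, so the sample is returned unchanged.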
- - Args: - sample (`paddle.Tensor`): input sample - - Returns: - `paddle.Tensor`: scaled input sample - """ - return sample - - def add_noise( - self, - original_samples: paddle.Tensor, - noise: paddle.Tensor, - timesteps: paddle.Tensor, - ) -> paddle.Tensor: - # Make sure alphas_cumprod and timestep have same dtype as original_samples - self.alphas_cumprod = self.alphas_cumprod.cast(original_samples.dtype) - - sqrt_alpha_prod = self.alphas_cumprod[timesteps] ** 0.5 - sqrt_alpha_prod = sqrt_alpha_prod.flatten() - while len(sqrt_alpha_prod.shape) < len(original_samples.shape): - sqrt_alpha_prod = sqrt_alpha_prod.unsqueeze(-1) - - sqrt_one_minus_alpha_prod = (1 - self.alphas_cumprod[timesteps]) ** 0.5 - sqrt_one_minus_alpha_prod = sqrt_one_minus_alpha_prod.flatten() - while len(sqrt_one_minus_alpha_prod.shape) < len(original_samples.shape): - sqrt_one_minus_alpha_prod = sqrt_one_minus_alpha_prod.unsqueeze(-1) - - noisy_samples = sqrt_alpha_prod * original_samples + sqrt_one_minus_alpha_prod * noise - return noisy_samples - - def __len__(self): - return self.config.num_train_timesteps diff --git a/spaces/7hao/bingo/src/components/ui/sheet.tsx b/spaces/7hao/bingo/src/components/ui/sheet.tsx deleted file mode 100644 index c9f5ce0f81a91067bb013e988a07eb1e6bf6953b..0000000000000000000000000000000000000000 --- a/spaces/7hao/bingo/src/components/ui/sheet.tsx +++ /dev/null @@ -1,122 +0,0 @@ -'use client' - -import * as React from 'react' -import * as SheetPrimitive from '@radix-ui/react-dialog' - -import { cn } from '@/lib/utils' -import { IconClose } from '@/components/ui/icons' - -const Sheet = SheetPrimitive.Root - -const SheetTrigger = SheetPrimitive.Trigger - -const SheetClose = SheetPrimitive.Close - -const SheetPortal = ({ - className, - children, - ...props -}: SheetPrimitive.DialogPortalProps) => ( - - {children} - -) -SheetPortal.displayName = SheetPrimitive.Portal.displayName - -const SheetOverlay = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, children, ...props }, ref) => ( - -)) -SheetOverlay.displayName = SheetPrimitive.Overlay.displayName - -const SheetContent = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, children, ...props }, ref) => ( - - - {children} - - - Close - - - -)) -SheetContent.displayName = SheetPrimitive.Content.displayName - -const SheetHeader = ({ - className, - ...props -}: React.HTMLAttributes) => ( -
    -) -SheetHeader.displayName = 'SheetHeader' - -const SheetFooter = ({ - className, - ...props -}: React.HTMLAttributes) => ( -
    -) -SheetFooter.displayName = 'SheetFooter' - -const SheetTitle = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -SheetTitle.displayName = SheetPrimitive.Title.displayName - -const SheetDescription = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -SheetDescription.displayName = SheetPrimitive.Description.displayName - -export { - Sheet, - SheetTrigger, - SheetClose, - SheetContent, - SheetHeader, - SheetFooter, - SheetTitle, - SheetDescription -} diff --git a/spaces/801artistry/RVC801/infer/lib/infer_pack/attentions.py b/spaces/801artistry/RVC801/infer/lib/infer_pack/attentions.py deleted file mode 100644 index 19a0a670021aacb9ae1c7f8f54ca1bff8e065375..0000000000000000000000000000000000000000 --- a/spaces/801artistry/RVC801/infer/lib/infer_pack/attentions.py +++ /dev/null @@ -1,417 +0,0 @@ -import copy -import math - -import numpy as np -import torch -from torch import nn -from torch.nn import functional as F - -from infer.lib.infer_pack import commons, modules -from infer.lib.infer_pack.modules import LayerNorm - - -class Encoder(nn.Module): - def __init__( - self, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size=1, - p_dropout=0.0, - window_size=10, - **kwargs - ): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.window_size = window_size - - self.drop = nn.Dropout(p_dropout) - self.attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.attn_layers.append( - MultiHeadAttention( - hidden_channels, - hidden_channels, - n_heads, - p_dropout=p_dropout, - window_size=window_size, - ) - ) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append( - FFN( - hidden_channels, - hidden_channels, - filter_channels, - kernel_size, - p_dropout=p_dropout, - ) - ) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask): - attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.attn_layers[i](x, x, attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class Decoder(nn.Module): - def __init__( - self, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size=1, - p_dropout=0.0, - proximal_bias=False, - proximal_init=True, - **kwargs - ): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - - self.drop = nn.Dropout(p_dropout) - self.self_attn_layers = nn.ModuleList() - self.norm_layers_0 = nn.ModuleList() - self.encdec_attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.self_attn_layers.append( - MultiHeadAttention( - hidden_channels, - hidden_channels, - n_heads, - p_dropout=p_dropout, - 
proximal_bias=proximal_bias, - proximal_init=proximal_init, - ) - ) - self.norm_layers_0.append(LayerNorm(hidden_channels)) - self.encdec_attn_layers.append( - MultiHeadAttention( - hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout - ) - ) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append( - FFN( - hidden_channels, - hidden_channels, - filter_channels, - kernel_size, - p_dropout=p_dropout, - causal=True, - ) - ) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask, h, h_mask): - """ - x: decoder input - h: encoder output - """ - self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to( - device=x.device, dtype=x.dtype - ) - encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.self_attn_layers[i](x, x, self_attn_mask) - y = self.drop(y) - x = self.norm_layers_0[i](x + y) - - y = self.encdec_attn_layers[i](x, h, encdec_attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class MultiHeadAttention(nn.Module): - def __init__( - self, - channels, - out_channels, - n_heads, - p_dropout=0.0, - window_size=None, - heads_share=True, - block_length=None, - proximal_bias=False, - proximal_init=False, - ): - super().__init__() - assert channels % n_heads == 0 - - self.channels = channels - self.out_channels = out_channels - self.n_heads = n_heads - self.p_dropout = p_dropout - self.window_size = window_size - self.heads_share = heads_share - self.block_length = block_length - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - self.attn = None - - self.k_channels = channels // n_heads - self.conv_q = nn.Conv1d(channels, channels, 1) - self.conv_k = nn.Conv1d(channels, channels, 1) - self.conv_v = nn.Conv1d(channels, channels, 1) - self.conv_o = nn.Conv1d(channels, out_channels, 1) - self.drop = nn.Dropout(p_dropout) - - if window_size is not None: - n_heads_rel = 1 if heads_share else n_heads - rel_stddev = self.k_channels**-0.5 - self.emb_rel_k = nn.Parameter( - torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) - * rel_stddev - ) - self.emb_rel_v = nn.Parameter( - torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) - * rel_stddev - ) - - nn.init.xavier_uniform_(self.conv_q.weight) - nn.init.xavier_uniform_(self.conv_k.weight) - nn.init.xavier_uniform_(self.conv_v.weight) - if proximal_init: - with torch.no_grad(): - self.conv_k.weight.copy_(self.conv_q.weight) - self.conv_k.bias.copy_(self.conv_q.bias) - - def forward(self, x, c, attn_mask=None): - q = self.conv_q(x) - k = self.conv_k(c) - v = self.conv_v(c) - - x, self.attn = self.attention(q, k, v, mask=attn_mask) - - x = self.conv_o(x) - return x - - def attention(self, query, key, value, mask=None): - # reshape [b, d, t] -> [b, n_h, t, d_k] - b, d, t_s, t_t = (*key.size(), query.size(2)) - query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3) - key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - - scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1)) - if self.window_size is not None: - assert ( - t_s == t_t - ), "Relative attention is only available for self-attention." 
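            # Relative position attention (Shaw et al. style): learned embeddings for offsets
            # clipped to [-window_size, window_size] are matched against the queries, and the
            # resulting relative logits are re-indexed to absolute positions before being added
            # to the content-based scores.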
- key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s) - rel_logits = self._matmul_with_relative_keys( - query / math.sqrt(self.k_channels), key_relative_embeddings - ) - scores_local = self._relative_position_to_absolute_position(rel_logits) - scores = scores + scores_local - if self.proximal_bias: - assert t_s == t_t, "Proximal bias is only available for self-attention." - scores = scores + self._attention_bias_proximal(t_s).to( - device=scores.device, dtype=scores.dtype - ) - if mask is not None: - scores = scores.masked_fill(mask == 0, -1e4) - if self.block_length is not None: - assert ( - t_s == t_t - ), "Local attention is only available for self-attention." - block_mask = ( - torch.ones_like(scores) - .triu(-self.block_length) - .tril(self.block_length) - ) - scores = scores.masked_fill(block_mask == 0, -1e4) - p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s] - p_attn = self.drop(p_attn) - output = torch.matmul(p_attn, value) - if self.window_size is not None: - relative_weights = self._absolute_position_to_relative_position(p_attn) - value_relative_embeddings = self._get_relative_embeddings( - self.emb_rel_v, t_s - ) - output = output + self._matmul_with_relative_values( - relative_weights, value_relative_embeddings - ) - output = ( - output.transpose(2, 3).contiguous().view(b, d, t_t) - ) # [b, n_h, t_t, d_k] -> [b, d, t_t] - return output, p_attn - - def _matmul_with_relative_values(self, x, y): - """ - x: [b, h, l, m] - y: [h or 1, m, d] - ret: [b, h, l, d] - """ - ret = torch.matmul(x, y.unsqueeze(0)) - return ret - - def _matmul_with_relative_keys(self, x, y): - """ - x: [b, h, l, d] - y: [h or 1, m, d] - ret: [b, h, l, m] - """ - ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1)) - return ret - - def _get_relative_embeddings(self, relative_embeddings, length): - max_relative_position = 2 * self.window_size + 1 - # Pad first before slice to avoid using cond ops. - pad_length = max(length - (self.window_size + 1), 0) - slice_start_position = max((self.window_size + 1) - length, 0) - slice_end_position = slice_start_position + 2 * length - 1 - if pad_length > 0: - padded_relative_embeddings = F.pad( - relative_embeddings, - commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]]), - ) - else: - padded_relative_embeddings = relative_embeddings - used_relative_embeddings = padded_relative_embeddings[ - :, slice_start_position:slice_end_position - ] - return used_relative_embeddings - - def _relative_position_to_absolute_position(self, x): - """ - x: [b, h, l, 2*l-1] - ret: [b, h, l, l] - """ - batch, heads, length, _ = x.size() - # Concat columns of pad to shift from relative to absolute indexing. - x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, 1]])) - - # Concat extra elements so to add up to shape (len+1, 2*len-1). - x_flat = x.view([batch, heads, length * 2 * length]) - x_flat = F.pad( - x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [0, length - 1]]) - ) - - # Reshape and slice out the padded elements. 
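        # Viewing the padded, flattened tensor as (length + 1, 2 * length - 1) skews each row
        # by one position relative to the previous one, so keeping the first `length` rows and
        # dropping the first `length - 1` columns yields the [b, h, l, l] matrix indexed by
        # absolute (query, key) positions.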
- x_final = x_flat.view([batch, heads, length + 1, 2 * length - 1])[ - :, :, :length, length - 1 : - ] - return x_final - - def _absolute_position_to_relative_position(self, x): - """ - x: [b, h, l, l] - ret: [b, h, l, 2*l-1] - """ - batch, heads, length, _ = x.size() - # padd along column - x = F.pad( - x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length - 1]]) - ) - x_flat = x.view([batch, heads, length**2 + length * (length - 1)]) - # add 0's in the beginning that will skew the elements after reshape - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]])) - x_final = x_flat.view([batch, heads, length, 2 * length])[:, :, :, 1:] - return x_final - - def _attention_bias_proximal(self, length): - """Bias for self-attention to encourage attention to close positions. - Args: - length: an integer scalar. - Returns: - a Tensor with shape [1, 1, length, length] - """ - r = torch.arange(length, dtype=torch.float32) - diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1) - return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0) - - -class FFN(nn.Module): - def __init__( - self, - in_channels, - out_channels, - filter_channels, - kernel_size, - p_dropout=0.0, - activation=None, - causal=False, - ): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.activation = activation - self.causal = causal - - if causal: - self.padding = self._causal_padding - else: - self.padding = self._same_padding - - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size) - self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size) - self.drop = nn.Dropout(p_dropout) - - def forward(self, x, x_mask): - x = self.conv_1(self.padding(x * x_mask)) - if self.activation == "gelu": - x = x * torch.sigmoid(1.702 * x) - else: - x = torch.relu(x) - x = self.drop(x) - x = self.conv_2(self.padding(x * x_mask)) - return x * x_mask - - def _causal_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = self.kernel_size - 1 - pad_r = 0 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x - - def _same_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = (self.kernel_size - 1) // 2 - pad_r = self.kernel_size // 2 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x diff --git a/spaces/AI-Zero-to-Hero/02-H5-AR-VR-IOT/index.html b/spaces/AI-Zero-to-Hero/02-H5-AR-VR-IOT/index.html deleted file mode 100644 index f64aad6580cd12cbdbb0bcc0321ed7a6486d2a19..0000000000000000000000000000000000000000 --- a/spaces/AI-Zero-to-Hero/02-H5-AR-VR-IOT/index.html +++ /dev/null @@ -1,66 +0,0 @@ - - - - Dynamic Lights - A-Frame - - - - - - - - - - - - - - - - - - - - - - - - - - - \ No newline at end of file diff --git a/spaces/AIFILMS/Image-Animation-using-Thin-Plate-Spline-Motion-Model/style.css b/spaces/AIFILMS/Image-Animation-using-Thin-Plate-Spline-Motion-Model/style.css deleted file mode 100644 index 435ebb5987b8913a52f73664c54022374d0c3ed7..0000000000000000000000000000000000000000 --- a/spaces/AIFILMS/Image-Animation-using-Thin-Plate-Spline-Motion-Model/style.css +++ /dev/null @@ -1,19 +0,0 @@ -h1 { - text-align: center; -} -img#overview { - max-width: 1000px; - max-height: 600px; - display: block; - margin: auto; -} -img#style-image { - max-width: 1000px; - max-height: 600px; - display: block; - 
margin: auto; -} -img#visitor-badge { - display: block; - margin: auto; -} \ No newline at end of file diff --git a/spaces/AIFILMS/audioldm-text-to-audio-generation/audioldm/clap/open_clip/timm_model.py b/spaces/AIFILMS/audioldm-text-to-audio-generation/audioldm/clap/open_clip/timm_model.py deleted file mode 100644 index c9d1ab4666b5bab5038d44b90c9ddca5087de460..0000000000000000000000000000000000000000 --- a/spaces/AIFILMS/audioldm-text-to-audio-generation/audioldm/clap/open_clip/timm_model.py +++ /dev/null @@ -1,112 +0,0 @@ -""" timm model adapter - -Wraps timm (https://github.com/rwightman/pytorch-image-models) models for use as a vision tower in CLIP model. -""" -from collections import OrderedDict - -import torch.nn as nn - -try: - import timm - from timm.models.layers import Mlp, to_2tuple - from timm.models.layers.attention_pool2d import RotAttentionPool2d - from timm.models.layers.attention_pool2d import ( - AttentionPool2d as AbsAttentionPool2d, - ) -except ImportError as e: - timm = None - -from .utils import freeze_batch_norm_2d - - -class TimmModel(nn.Module): - """timm model adapter - # FIXME this adapter is a work in progress, may change in ways that break weight compat - """ - - def __init__( - self, - model_name, - embed_dim, - image_size=224, - pool="avg", - proj="linear", - drop=0.0, - pretrained=False, - ): - super().__init__() - if timm is None: - raise RuntimeError("Please `pip install timm` to use timm models.") - - self.image_size = to_2tuple(image_size) - self.trunk = timm.create_model(model_name, pretrained=pretrained) - feat_size = self.trunk.default_cfg.get("pool_size", None) - feature_ndim = 1 if not feat_size else 2 - if pool in ("abs_attn", "rot_attn"): - assert feature_ndim == 2 - # if attn pooling used, remove both classifier and default pool - self.trunk.reset_classifier(0, global_pool="") - else: - # reset global pool if pool config set, otherwise leave as network default - reset_kwargs = dict(global_pool=pool) if pool else {} - self.trunk.reset_classifier(0, **reset_kwargs) - prev_chs = self.trunk.num_features - - head_layers = OrderedDict() - if pool == "abs_attn": - head_layers["pool"] = AbsAttentionPool2d( - prev_chs, feat_size=feat_size, out_features=embed_dim - ) - prev_chs = embed_dim - elif pool == "rot_attn": - head_layers["pool"] = RotAttentionPool2d(prev_chs, out_features=embed_dim) - prev_chs = embed_dim - else: - assert proj, "projection layer needed if non-attention pooling is used." 
- - # NOTE attention pool ends with a projection layer, so proj should usually be set to '' if such pooling is used - if proj == "linear": - head_layers["drop"] = nn.Dropout(drop) - head_layers["proj"] = nn.Linear(prev_chs, embed_dim) - elif proj == "mlp": - head_layers["mlp"] = Mlp(prev_chs, 2 * embed_dim, embed_dim, drop=drop) - - self.head = nn.Sequential(head_layers) - - def lock(self, unlocked_groups=0, freeze_bn_stats=False): - """lock modules - Args: - unlocked_groups (int): leave last n layer groups unlocked (default: 0) - """ - if not unlocked_groups: - # lock full model - for param in self.trunk.parameters(): - param.requires_grad = False - if freeze_bn_stats: - freeze_batch_norm_2d(self.trunk) - else: - # NOTE: partial freeze requires latest timm (master) branch and is subject to change - try: - # FIXME import here until API stable and in an official release - from timm.models.helpers import group_parameters, group_modules - except ImportError: - raise RuntimeError( - "Please install latest timm `pip install git+https://github.com/rwightman/pytorch-image-models`" - ) - matcher = self.trunk.group_matcher() - gparams = group_parameters(self.trunk, matcher) - max_layer_id = max(gparams.keys()) - max_layer_id = max_layer_id - unlocked_groups - for group_idx in range(max_layer_id + 1): - group = gparams[group_idx] - for param in group: - self.trunk.get_parameter(param).requires_grad = False - if freeze_bn_stats: - gmodules = group_modules(self.trunk, matcher, reverse=True) - gmodules = {k for k, v in gmodules.items() if v <= max_layer_id} - freeze_batch_norm_2d(self.trunk, gmodules) - - def forward(self, x): - x = self.trunk(x) - x = self.head(x) - return x diff --git a/spaces/AIGC-Audio/AudioGPT/text_to_speech/tasks/tts/speech_base.py b/spaces/AIGC-Audio/AudioGPT/text_to_speech/tasks/tts/speech_base.py deleted file mode 100644 index 7b5f85edc9b827806ca565b617e1b72149e09943..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/AudioGPT/text_to_speech/tasks/tts/speech_base.py +++ /dev/null @@ -1,373 +0,0 @@ -import filecmp -import os -import traceback -import numpy as np -import pandas as pd -import torch -import torch.distributed as dist -import torch.nn.functional as F -import torch.optim -import torch.utils.data -import yaml -from tqdm import tqdm -import utils -from tasks.tts.dataset_utils import BaseSpeechDataset -from tasks.tts.utils. 
import parse_mel_losses, parse_dataset_configs, load_data_preprocessor, load_data_binarizer -from tasks.tts.vocoder_infer.base_vocoder import BaseVocoder, get_vocoder_cls -from text_to_speech.utils.audio.align import mel2token_to_dur -from text_to_speech.utils.audio.io import save_wav -from text_to_speech.utils.audio.pitch_extractors import extract_pitch_simple -from text_to_speech.utils.commons.base_task import BaseTask -from text_to_speech.utils.commons.ckpt_utils import load_ckpt -from text_to_speech.utils.commons.dataset_utils import data_loader, BaseConcatDataset -from text_to_speech.utils.commons.hparams import hparams -from text_to_speech.utils.commons.multiprocess_utils import MultiprocessManager -from text_to_speech.utils.commons.tensor_utils import tensors_to_scalars -from text_to_speech.utils.metrics.ssim import ssim -from text_to_speech.utils.nn.model_utils import print_arch -from text_to_speech.utils.nn.schedulers import RSQRTSchedule, NoneSchedule, WarmupSchedule -from text_to_speech.utils.nn.seq_utils import weights_nonzero_speech -from text_to_speech.utils.plot.plot import spec_to_figure -from text_to_speech.utils.text.text_encoder import build_token_encoder -import matplotlib.pyplot as plt - - -class SpeechBaseTask(BaseTask): - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - self.dataset_cls = BaseSpeechDataset - self.vocoder = None - data_dir = hparams['binary_data_dir'] - if not hparams['use_word_input']: - self.token_encoder = build_token_encoder(f'{data_dir}/phone_set.json') - else: - self.token_encoder = build_token_encoder(f'{data_dir}/word_set.json') - self.padding_idx = self.token_encoder.pad() - self.eos_idx = self.token_encoder.eos() - self.seg_idx = self.token_encoder.seg() - self.saving_result_pool = None - self.saving_results_futures = None - self.mel_losses = parse_mel_losses() - self.max_tokens, self.max_sentences, \ - self.max_valid_tokens, self.max_valid_sentences = parse_dataset_configs() - - ########################## - # datasets - ########################## - @data_loader - def train_dataloader(self): - if hparams['train_sets'] != '': - train_sets = hparams['train_sets'].split("|") - # check if all train_sets have the same spk map and dictionary - binary_data_dir = hparams['binary_data_dir'] - file_to_cmp = ['phone_set.json'] - if os.path.exists(f'{binary_data_dir}/word_set.json'): - file_to_cmp.append('word_set.json') - if hparams['use_spk_id']: - file_to_cmp.append('spk_map.json') - for f in file_to_cmp: - for ds_name in train_sets: - base_file = os.path.join(binary_data_dir, f) - ds_file = os.path.join(ds_name, f) - assert filecmp.cmp(base_file, ds_file), \ - f'{f} in {ds_name} is not same with that in {binary_data_dir}.' 
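            # When several train_sets are concatenated, they must share the same phone/word set
            # (and spk_map if speaker ids are used) so that token and speaker indices stay
            # consistent across the combined dataset built below.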
- train_dataset = BaseConcatDataset([ - self.dataset_cls(prefix='train', shuffle=True, data_dir=ds_name) for ds_name in train_sets]) - else: - train_dataset = self.dataset_cls(prefix=hparams['train_set_name'], shuffle=True) - return self.build_dataloader(train_dataset, True, self.max_tokens, self.max_sentences, - endless=hparams['endless_ds']) - - @data_loader - def val_dataloader(self): - valid_dataset = self.dataset_cls(prefix=hparams['valid_set_name'], shuffle=False) - return self.build_dataloader(valid_dataset, False, self.max_valid_tokens, self.max_valid_sentences, - batch_by_size=False) - - @data_loader - def test_dataloader(self): - test_dataset = self.dataset_cls(prefix=hparams['test_set_name'], shuffle=False) - self.test_dl = self.build_dataloader( - test_dataset, False, self.max_valid_tokens, self.max_valid_sentences, batch_by_size=False) - return self.test_dl - - def build_dataloader(self, dataset, shuffle, max_tokens=None, max_sentences=None, - required_batch_size_multiple=-1, endless=False, batch_by_size=True): - devices_cnt = torch.cuda.device_count() - if devices_cnt == 0: - devices_cnt = 1 - if required_batch_size_multiple == -1: - required_batch_size_multiple = devices_cnt - - def shuffle_batches(batches): - np.random.shuffle(batches) - return batches - - if max_tokens is not None: - max_tokens *= devices_cnt - if max_sentences is not None: - max_sentences *= devices_cnt - indices = dataset.ordered_indices() - if batch_by_size: - batch_sampler = utils.commons.dataset_utils.batch_by_size( - indices, dataset.num_tokens, max_tokens=max_tokens, max_sentences=max_sentences, - required_batch_size_multiple=required_batch_size_multiple, - ) - else: - batch_sampler = [] - for i in range(0, len(indices), max_sentences): - batch_sampler.append(indices[i:i + max_sentences]) - - if shuffle: - batches = shuffle_batches(list(batch_sampler)) - if endless: - batches = [b for _ in range(1000) for b in shuffle_batches(list(batch_sampler))] - else: - batches = batch_sampler - if endless: - batches = [b for _ in range(1000) for b in batches] - num_workers = dataset.num_workers - if self.trainer.use_ddp: - num_replicas = dist.get_world_size() - rank = dist.get_rank() - batches = [x[rank::num_replicas] for x in batches if len(x) % num_replicas == 0] - return torch.utils.data.DataLoader(dataset, - collate_fn=dataset.collater, - batch_sampler=batches, - num_workers=num_workers, - pin_memory=False) - - ########################## - # scheduler and optimizer - ########################## - def build_model(self): - self.build_tts_model() - if hparams['load_ckpt'] != '': - load_ckpt(self.model, hparams['load_ckpt']) - print_arch(self.model) - return self.model - - def build_tts_model(self): - raise NotImplementedError - - def build_scheduler(self, optimizer): - if hparams['scheduler'] == 'rsqrt': - return RSQRTSchedule(optimizer, hparams['lr'], hparams['warmup_updates'], hparams['hidden_size']) - elif hparams['scheduler'] == 'warmup': - return WarmupSchedule(optimizer, hparams['lr'], hparams['warmup_updates']) - elif hparams['scheduler'] == 'step_lr': - return torch.optim.lr_scheduler.StepLR( - optimizer=optimizer, step_size=500, gamma=0.998) - else: - return NoneSchedule(optimizer, hparams['lr']) - - def build_optimizer(self, model): - self.optimizer = optimizer = torch.optim.AdamW( - model.parameters(), - lr=hparams['lr'], - betas=(hparams['optimizer_adam_beta1'], hparams['optimizer_adam_beta2']), - weight_decay=hparams['weight_decay']) - - return optimizer - - ########################## - # training 
and validation - ########################## - def _training_step(self, sample, batch_idx, _): - loss_output, _ = self.run_model(sample) - total_loss = sum([v for v in loss_output.values() if isinstance(v, torch.Tensor) and v.requires_grad]) - loss_output['batch_size'] = sample['txt_tokens'].size()[0] - return total_loss, loss_output - - def run_model(self, sample, infer=False): - """ - - :param sample: a batch of data - :param infer: bool, run in infer mode - :return: - if not infer: - return losses, model_out - if infer: - return model_out - """ - raise NotImplementedError - - def validation_start(self): - self.vocoder = get_vocoder_cls(hparams['vocoder'])() - - def validation_step(self, sample, batch_idx): - outputs = {} - outputs['losses'] = {} - outputs['losses'], model_out = self.run_model(sample) - outputs['total_loss'] = sum(outputs['losses'].values()) - outputs['nsamples'] = sample['nsamples'] - outputs = tensors_to_scalars(outputs) - if self.global_step % hparams['valid_infer_interval'] == 0 \ - and batch_idx < hparams['num_valid_plots']: - self.save_valid_result(sample, batch_idx, model_out) - return outputs - - def validation_end(self, outputs): - self.vocoder = None - return super(SpeechBaseTask, self).validation_end(outputs) - - def save_valid_result(self, sample, batch_idx, model_out): - raise NotImplementedError - - ########################## - # losses - ########################## - def add_mel_loss(self, mel_out, target, losses, postfix=''): - for loss_name, lambd in self.mel_losses.items(): - losses[f'{loss_name}{postfix}'] = getattr(self, f'{loss_name}_loss')(mel_out, target) * lambd - - def l1_loss(self, decoder_output, target): - # decoder_output : B x T x n_mel - # target : B x T x n_mel - l1_loss = F.l1_loss(decoder_output, target, reduction='none') - weights = weights_nonzero_speech(target) - l1_loss = (l1_loss * weights).sum() / weights.sum() - return l1_loss - - def mse_loss(self, decoder_output, target): - # decoder_output : B x T x n_mel - # target : B x T x n_mel - assert decoder_output.shape == target.shape - mse_loss = F.mse_loss(decoder_output, target, reduction='none') - weights = weights_nonzero_speech(target) - mse_loss = (mse_loss * weights).sum() / weights.sum() - return mse_loss - - def ssim_loss(self, decoder_output, target, bias=6.0): - # decoder_output : B x T x n_mel - # target : B x T x n_mel - assert decoder_output.shape == target.shape - weights = weights_nonzero_speech(target) - decoder_output = decoder_output[:, None] + bias - target = target[:, None] + bias - ssim_loss = 1 - ssim(decoder_output, target, size_average=False) - ssim_loss = (ssim_loss * weights).sum() / weights.sum() - return ssim_loss - - def plot_mel(self, batch_idx, spec_out, spec_gt=None, name=None, title='', f0s=None, dur_info=None): - vmin = hparams['mel_vmin'] - vmax = hparams['mel_vmax'] - if len(spec_out.shape) == 3: - spec_out = spec_out[0] - if isinstance(spec_out, torch.Tensor): - spec_out = spec_out.cpu().numpy() - if spec_gt is not None: - if len(spec_gt.shape) == 3: - spec_gt = spec_gt[0] - if isinstance(spec_gt, torch.Tensor): - spec_gt = spec_gt.cpu().numpy() - max_len = max(len(spec_gt), len(spec_out)) - if max_len - len(spec_gt) > 0: - spec_gt = np.pad(spec_gt, [[0, max_len - len(spec_gt)], [0, 0]], mode='constant', - constant_values=vmin) - if max_len - len(spec_out) > 0: - spec_out = np.pad(spec_out, [[0, max_len - len(spec_out)], [0, 0]], mode='constant', - constant_values=vmin) - spec_out = np.concatenate([spec_out, spec_gt], -1) - name = 
f'mel_val_{batch_idx}' if name is None else name - self.logger.add_figure(name, spec_to_figure( - spec_out, vmin, vmax, title=title, f0s=f0s, dur_info=dur_info), self.global_step) - - ########################## - # testing - ########################## - def test_start(self): - self.saving_result_pool = MultiprocessManager(int(os.getenv('N_PROC', os.cpu_count()))) - self.saving_results_futures = [] - self.gen_dir = os.path.join( - hparams['work_dir'], f'generated_{self.trainer.global_step}_{hparams["gen_dir_name"]}') - self.vocoder: BaseVocoder = get_vocoder_cls(hparams['vocoder'])() - os.makedirs(self.gen_dir, exist_ok=True) - os.makedirs(f'{self.gen_dir}/wavs', exist_ok=True) - os.makedirs(f'{self.gen_dir}/plot', exist_ok=True) - if hparams.get('save_mel_npy', False): - os.makedirs(f'{self.gen_dir}/mel_npy', exist_ok=True) - - def test_step(self, sample, batch_idx): - """ - - :param sample: - :param batch_idx: - :return: - """ - assert sample['txt_tokens'].shape[0] == 1, 'only support batch_size=1 in inference' - outputs = self.run_model(sample, infer=True) - text = sample['text'][0] - item_name = sample['item_name'][0] - tokens = sample['txt_tokens'][0].cpu().numpy() - mel_gt = sample['mels'][0].cpu().numpy() - mel_pred = outputs['mel_out'][0].cpu().numpy() - str_phs = self.token_encoder.decode(tokens, strip_padding=True) - base_fn = f'[{self.results_id:06d}][{item_name.replace("%", "_")}][%s]' - if text is not None: - base_fn += text.replace(":", "$3A")[:80] - base_fn = base_fn.replace(' ', '_') - gen_dir = self.gen_dir - wav_pred = self.vocoder.spec2wav(mel_pred) - self.saving_result_pool.add_job(self.save_result, args=[ - wav_pred, mel_pred, base_fn % 'P', gen_dir, str_phs]) - if hparams['save_gt']: - wav_gt = self.vocoder.spec2wav(mel_gt) - self.saving_result_pool.add_job(self.save_result, args=[ - wav_gt, mel_gt, base_fn % 'G', gen_dir, str_phs]) - print(f"Pred_shape: {mel_pred.shape}, gt_shape: {mel_gt.shape}") - return { - 'item_name': item_name, - 'text': text, - 'ph_tokens': self.token_encoder.decode(tokens.tolist()), - 'wav_fn_pred': base_fn % 'P', - 'wav_fn_gt': base_fn % 'G', - } - - @staticmethod - def save_result(wav_out, mel, base_fn, gen_dir, str_phs=None, mel2ph=None, alignment=None): - save_wav(wav_out, f'{gen_dir}/wavs/{base_fn}.wav', hparams['audio_sample_rate'], - norm=hparams['out_wav_norm']) - fig = plt.figure(figsize=(14, 10)) - spec_vmin = hparams['mel_vmin'] - spec_vmax = hparams['mel_vmax'] - heatmap = plt.pcolor(mel.T, vmin=spec_vmin, vmax=spec_vmax) - fig.colorbar(heatmap) - try: - f0 = extract_pitch_simple(wav_out) - f0 = f0 / 10 * (f0 > 0) - plt.plot(f0, c='white', linewidth=1, alpha=0.6) - if mel2ph is not None and str_phs is not None: - decoded_txt = str_phs.split(" ") - dur = mel2token_to_dur(torch.LongTensor(mel2ph)[None, :], len(decoded_txt))[0].numpy() - dur = [0] + list(np.cumsum(dur)) - for i in range(len(dur) - 1): - shift = (i % 20) + 1 - plt.text(dur[i], shift, decoded_txt[i]) - plt.hlines(shift, dur[i], dur[i + 1], colors='b' if decoded_txt[i] != '|' else 'black') - plt.vlines(dur[i], 0, 5, colors='b' if decoded_txt[i] != '|' else 'black', - alpha=1, linewidth=1) - plt.tight_layout() - plt.savefig(f'{gen_dir}/plot/{base_fn}.png', format='png') - plt.close(fig) - if hparams.get('save_mel_npy', False): - np.save(f'{gen_dir}/mel_npy/{base_fn}', mel) - if alignment is not None: - fig, ax = plt.subplots(figsize=(12, 16)) - im = ax.imshow(alignment, aspect='auto', origin='lower', - interpolation='none') - decoded_txt = str_phs.split(" ") - 
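                # Label each attention row with its decoded phoneme so the alignment matrix
                # can be read against the input text.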
ax.set_yticks(np.arange(len(decoded_txt))) - ax.set_yticklabels(list(decoded_txt), fontsize=6) - fig.colorbar(im, ax=ax) - fig.savefig(f'{gen_dir}/attn_plot/{base_fn}_attn.png', format='png') - plt.close(fig) - except Exception: - traceback.print_exc() - return None - - def test_end(self, outputs): - pd.DataFrame(outputs).to_csv(f'{self.gen_dir}/meta.csv') - for _1, _2 in tqdm(self.saving_result_pool.get_results(), total=len(self.saving_result_pool)): - pass - return {} diff --git a/spaces/AILab-CVC/SEED-LLaMA/scripts/seed_llama_inference_14B.py b/spaces/AILab-CVC/SEED-LLaMA/scripts/seed_llama_inference_14B.py deleted file mode 100644 index 054b61438bc28351e6a2baabaad247d3c2af09f2..0000000000000000000000000000000000000000 --- a/spaces/AILab-CVC/SEED-LLaMA/scripts/seed_llama_inference_14B.py +++ /dev/null @@ -1,120 +0,0 @@ -import hydra - -import pyrootutils -import os -import torch - -from omegaconf import OmegaConf -import json -from typing import Optional -import transformers -from PIL import Image -from torchvision.transforms.functional import InterpolationMode - -pyrootutils.setup_root(__file__, indicator=".project-root", pythonpath=True) - -BOI_TOKEN = '' -EOI_TOKEN = '' -IMG_TOKEN = '' - -IMG_FLAG = '' -NUM_IMG_TOKNES = 32 -NUM_IMG_CODES = 8192 -image_id_shift = 32000 - - - - -def generate(tokenizer, input_tokens, generation_config, model): - - input_ids = tokenizer(input_tokens, add_special_tokens=False, return_tensors='pt').input_ids - input_ids = input_ids.to("cuda") - - generate_ids = model.generate( - input_ids=input_ids, - **generation_config - ) - generate_ids = generate_ids[0][input_ids.shape[1]:] - - return generate_ids - -def decode_image_text(generate_ids, tokenizer, save_path=None): - - boi_list = torch.where(generate_ids == tokenizer(BOI_TOKEN, add_special_tokens=False).input_ids[0])[0] - eoi_list = torch.where(generate_ids == tokenizer(EOI_TOKEN, add_special_tokens=False).input_ids[0])[0] - - if len(boi_list) == 0 and len(eoi_list) == 0: - text_ids = generate_ids - texts = tokenizer.decode(text_ids, skip_special_tokens=True) - print(texts) - - else: - boi_index = boi_list[0] - eoi_index = eoi_list[0] - - text_ids = generate_ids[:boi_index] - if len(text_ids) != 0: - texts = tokenizer.decode(text_ids, skip_special_tokens=True) - print(texts) - - image_ids = (generate_ids[boi_index+1:eoi_index] - image_id_shift).reshape(1,-1) - - images = tokenizer.decode_image(image_ids) - - images[0].save(save_path) - - -device = "cuda" - -tokenizer_cfg_path = 'configs/tokenizer/seed_llama_tokenizer.yaml' -tokenizer_cfg = OmegaConf.load(tokenizer_cfg_path) -tokenizer = hydra.utils.instantiate(tokenizer_cfg, device=device, load_diffusion=True) - -transform_cfg_path = 'configs/transform/clip_transform.yaml' -transform_cfg = OmegaConf.load(transform_cfg_path) -transform = hydra.utils.instantiate(transform_cfg) - -model_cfg = OmegaConf.load('configs/llm/seed_llama_14b.yaml') -model = hydra.utils.instantiate(model_cfg, torch_dtype=torch.float16) -model = model.eval().to(device) - -generation_config = { - 'temperature': 1.0, - 'num_beams': 1, - 'max_new_tokens': 512, - 'top_p': 0.5, - 'do_sample': True - } - -s_token = "[INST] " -e_token = " [/INST]" -sep = "\n" - - -### visual question answering -image_path = "images/cat.jpg" -image = Image.open(image_path).convert('RGB') -image_tensor = transform(image).to(device) -img_ids = tokenizer.encode_image(image_torch=image_tensor) -img_ids = img_ids.view(-1).cpu().numpy() -img_tokens = BOI_TOKEN + ''.join([IMG_TOKEN.format(item) for item in 
img_ids]) + EOI_TOKEN - -question = "What is this animal?" - -input_tokens = tokenizer.bos_token + s_token + img_tokens + question + e_token + sep -generate_ids = generate(tokenizer, input_tokens, generation_config, model) -decode_image_text(generate_ids, tokenizer) - -### text-to-image generation -prompt = "Can you generate an image of a dog on the green grass?" -input_tokens = tokenizer.bos_token + s_token + prompt + e_token + sep -generate_ids = generate(tokenizer, input_tokens, generation_config, model) -save_path = 'dog.jpg' -decode_image_text(generate_ids, tokenizer, save_path) - -### multimodal prompt image generation -instruction = "Can you make the cat wear sunglasses?" -input_tokens = tokenizer.bos_token + s_token + img_tokens + instruction + e_token + sep -generate_ids = generate(tokenizer, input_tokens, generation_config, model) -save_path = 'cat_sunglasses.jpg' -decode_image_text(generate_ids, tokenizer, save_path) \ No newline at end of file diff --git a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_0_ClothesDetection/mmyolo/configs/custom_dataset/yolov7_l_syncbn_fast_6x16b-100e_coco.py b/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_0_ClothesDetection/mmyolo/configs/custom_dataset/yolov7_l_syncbn_fast_6x16b-100e_coco.py deleted file mode 100644 index a005c28eb78d101354650a20b1ff7212e24d0743..0000000000000000000000000000000000000000 --- a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_0_ClothesDetection/mmyolo/configs/custom_dataset/yolov7_l_syncbn_fast_6x16b-100e_coco.py +++ /dev/null @@ -1,489 +0,0 @@ -_base_ = ['../_base_/default_runtime.py', '../_base_/det_p5_tta.py'] - -data_root = './data-df2/' -train_ann_file = 'annotations/train.json' -train_data_prefix = 'smaller-dataset/' -val_ann_file = 'annotations/val.json' -val_data_prefix = 'smaller-dataset/' -test_ann_file = 'annotations/test.json' -test_data_prefix = 'smaller-dataset/' -# num_classes = 13 -train_batch_size_per_gpu = 32 -train_num_workers = 4 -persistent_workers = True - -vis_backends = [ - dict(type='LocalVisBackend'), -] -visualizer = dict( - type='mmdet.DetLocalVisualizer', - vis_backends=[ - dict(type='LocalVisBackend'), - # dict(type='WandbVisBackend'), - dict(type='TensorboardVisBackend') - ], - name='visualizer') -log_processor = dict(type='LogProcessor', window_size=50, by_epoch=True) -log_level = 'INFO' -load_from = None -resume = False - -anchors = [ - [(12, 16), (19, 36), (40, 28)], # P3/8 - [(36, 75), (76, 55), (72, 146)], # P4/16 - [(142, 110), (192, 243), (459, 401)] # P5/32 -] - -base_lr = 0.01 -max_epochs = 100 - -num_epoch_stage2 = 10 # The last 10 epochs switch evaluation interval -val_interval_stage2 = 1 - -model_test_cfg = dict( - multi_label=True, - nms_pre=30000, - score_thr=0.001, - nms=dict(type='nms', iou_threshold=0.65), - max_per_img=300) - -img_scale = (640, 640) -dataset_type = 'YOLOv5CocoDataset' -classes=('short_sleeved_shirt', 'long_sleeved_shirt', - 'short_sleeved_outwear', 'long_sleeved_outwear', - 'vest', 'sling', 'shorts', 'trousers', 'skirt', - 'short_sleeved_dress', 'long_sleeved_dress', - 'vest_dress', 'sling_dress') -num_classes = len(classes) -palette=[(255, 0, 0), (255, 128, 0), (255, 255, 0), - (128, 255, 0), (0, 255, 0), (0, 255, 128), - (0, 255, 255), (0, 128, 255), (0, 0, 255), - (127, 0, 255), (255, 0, 255), (255, 0, 127), - (128, 128, 128)] -metainfo = dict( - classes=classes, - palette=palette -) -val_batch_size_per_gpu = 1 -val_num_workers = 2 -batch_shapes_cfg = dict( - type='BatchShapePolicy', - batch_size=val_batch_size_per_gpu, - 
img_size=img_scale[0], - size_divisor=32, - extra_pad_ratio=0.5) -strides = [8, 16, 32] # Strides of multi-scale prior box -num_det_layers = 3 -norm_cfg = dict(type='BN', momentum=0.03, eps=0.001) - -# Data augmentation -max_translate_ratio = 0.2 # YOLOv5RandomAffine -scaling_ratio_range = (0.1, 2.0) # YOLOv5RandomAffine -mixup_prob = 0.15 # YOLOv5MixUp -randchoice_mosaic_prob = [0.8, 0.2] -mixup_alpha = 8.0 # YOLOv5MixUp -mixup_beta = 8.0 # YOLOv5MixUp - -# -----train val related----- -loss_cls_weight = 0.3 -loss_bbox_weight = 0.05 -loss_obj_weight = 0.7 -# BatchYOLOv7Assigner params -simota_candidate_topk = 10 -simota_iou_weight = 3.0 -simota_cls_weight = 1.0 -prior_match_thr = 4. # Priori box matching threshold -obj_level_weights = [4., 1., - 0.4] # The obj loss weights of the three output layers - -lr_factor = 0.1 # Learning rate scaling factor -weight_decay = 0.0005 -save_epoch_intervals = 1 -max_keep_ckpts = 5 - -env_cfg = dict( - cudnn_benchmark=True, - mp_cfg=dict(mp_start_method='fork', opencv_num_threads=0), - dist_cfg=dict(backend='nccl')) - -# ===============================Unmodified in most cases==================== -model = dict( - type='YOLODetector', - data_preprocessor=dict( - type='YOLOv5DetDataPreprocessor', - mean=[0., 0., 0.], - std=[255., 255., 255.], - bgr_to_rgb=True), - backbone=dict( - type='YOLOv7Backbone', - arch='L', - norm_cfg=norm_cfg, - act_cfg=dict(type='SiLU', inplace=True)), - neck=dict( - type='YOLOv7PAFPN', - block_cfg=dict( - type='ELANBlock', - middle_ratio=0.5, - block_ratio=0.25, - num_blocks=4, - num_convs_in_block=1), - upsample_feats_cat_first=False, - in_channels=[512, 1024, 1024], - # The real output channel will be multiplied by 2 - out_channels=[128, 256, 512], - norm_cfg=norm_cfg, - act_cfg=dict(type='SiLU', inplace=True)), - bbox_head=dict( - type='YOLOv7Head', - head_module=dict( - type='YOLOv7HeadModule', - num_classes=num_classes, - in_channels=[256, 512, 1024], - featmap_strides=strides, - num_base_priors=3), - prior_generator=dict( - type='mmdet.YOLOAnchorGenerator', - base_sizes=anchors, - strides=strides), - # scaled based on number of detection layers - loss_cls=dict( - type='mmdet.CrossEntropyLoss', - use_sigmoid=True, - reduction='mean', - loss_weight=loss_cls_weight * - (num_classes / 80 * 3 / num_det_layers)), - loss_bbox=dict( - type='IoULoss', - iou_mode='ciou', - bbox_format='xyxy', - reduction='mean', - loss_weight=loss_bbox_weight * (3 / num_det_layers), - return_iou=True), - loss_obj=dict( - type='mmdet.CrossEntropyLoss', - use_sigmoid=True, - reduction='mean', - loss_weight=loss_obj_weight * - ((img_scale[0] / 640)**2 * 3 / num_det_layers)), - prior_match_thr=prior_match_thr, - obj_level_weights=obj_level_weights, - # BatchYOLOv7Assigner params - simota_candidate_topk=simota_candidate_topk, - simota_iou_weight=simota_iou_weight, - simota_cls_weight=simota_cls_weight), - test_cfg=model_test_cfg) - -pre_transform = [ - dict(type='LoadImageFromFile', file_client_args=_base_.file_client_args), - dict(type='LoadAnnotations', with_bbox=True) -] - -mosiac4_pipeline = [ - dict( - type='Mosaic', - img_scale=img_scale, - pad_val=114.0, - pre_transform=pre_transform), - dict( - type='YOLOv5RandomAffine', - max_rotate_degree=0.0, - max_shear_degree=0.0, - max_translate_ratio=max_translate_ratio, # note - scaling_ratio_range=scaling_ratio_range, # note - # img_scale is (width, height) - border=(-img_scale[0] // 2, -img_scale[1] // 2), - border_val=(114, 114, 114)), -] - -mosiac9_pipeline = [ - dict( - type='Mosaic9', - 
img_scale=img_scale, - pad_val=114.0, - pre_transform=pre_transform), - dict( - type='YOLOv5RandomAffine', - max_rotate_degree=0.0, - max_shear_degree=0.0, - max_translate_ratio=max_translate_ratio, # note - scaling_ratio_range=scaling_ratio_range, # note - # img_scale is (width, height) - border=(-img_scale[0] // 2, -img_scale[1] // 2), - border_val=(114, 114, 114)), -] - -randchoice_mosaic_pipeline = dict( - type='RandomChoice', - transforms=[mosiac4_pipeline, mosiac9_pipeline], - prob=randchoice_mosaic_prob) - -train_pipeline = [ - *pre_transform, - randchoice_mosaic_pipeline, - dict( - type='YOLOv5MixUp', - alpha=mixup_alpha, # note - beta=mixup_beta, # note - prob=mixup_prob, - pre_transform=[*pre_transform, randchoice_mosaic_pipeline]), - dict(type='YOLOv5HSVRandomAug'), - dict(type='mmdet.RandomFlip', prob=0.5), - dict( - type='mmdet.PackDetInputs', - meta_keys=('img_id', 'img_path', 'ori_shape', 'img_shape', 'flip', - 'flip_direction')) -] - -test_pipeline = [ - dict(type='LoadImageFromFile', file_client_args=_base_.file_client_args), - dict(type='YOLOv5KeepRatioResize', scale=img_scale), - dict( - type='LetterResize', - scale=img_scale, - allow_scale_up=False, - pad_val=dict(img=114)), - dict(type='LoadAnnotations', with_bbox=True, _scope_='mmdet'), - dict( - type='mmdet.PackDetInputs', - meta_keys=('img_id', 'img_path', 'ori_shape', 'img_shape', - 'scale_factor', 'pad_param')) -] - -train_dataloader = dict( - batch_size=train_batch_size_per_gpu, - num_workers=train_num_workers, - persistent_workers=persistent_workers, - pin_memory=True, - sampler=dict(type='DefaultSampler', shuffle=True), - collate_fn=dict(type='yolov5_collate'), # FASTER - dataset=dict( - type='RepeatDataset', - times=2, - dataset=dict( - type=dataset_type, - data_root=data_root, - metainfo=metainfo, - ann_file=val_ann_file, - data_prefix=dict(img=train_data_prefix), - filter_cfg=dict(filter_empty_gt=False, min_size=32), - pipeline=train_pipeline) - ) - ) - -val_dataloader = dict( - dataset=dict( - metainfo=metainfo, - data_root=data_root, - ann_file=val_ann_file, - data_prefix=dict(img=val_data_prefix))) - -val_evaluator = dict(ann_file=data_root + val_ann_file) - -test_dataloader = dict( - dataset=dict( - metainfo=metainfo, - data_root=data_root, - ann_file=test_ann_file, - data_prefix=dict(img=test_data_prefix))) -test_evaluator = dict(ann_file=data_root + test_ann_file) - -train_cfg = dict( - type='EpochBasedTrainLoop', - max_epochs=max_epochs, - val_interval=save_epoch_intervals, - dynamic_intervals=[(max_epochs - num_epoch_stage2, val_interval_stage2)]) -val_cfg = dict(type='ValLoop') -test_cfg = dict(type='TestLoop') - -param_scheduler = None -optim_wrapper = dict( - type='OptimWrapper', - optimizer=dict( - type='SGD', - lr=base_lr, - momentum=0.937, - weight_decay=weight_decay, - nesterov=True, - batch_size_per_gpu=train_batch_size_per_gpu), - constructor='YOLOv7OptimWrapperConstructor') - -# TO DO: change param_scheduler type to StepLR, refer to mobilenet -default_scope = 'mmyolo' -default_hooks = dict( - timer=dict(type='IterTimerHook'), - logger=dict(type='LoggerHook', interval=10), - param_scheduler=dict( - type='YOLOv5ParamSchedulerHook', - scheduler_type='cosine', - lr_factor=lr_factor, # note - max_epochs=max_epochs), - checkpoint=dict( - type='CheckpointHook', - save_param_scheduler=False, - interval=save_epoch_intervals, - save_best='auto', - max_keep_ckpts=max_keep_ckpts), - sampler_seed=dict(type='DistSamplerSeedHook'), - visualization=dict(type='mmdet.DetVisualizationHook')) - -custom_hooks = 
[ - dict( - type='EMAHook', - ema_type='ExpMomentumEMA', - momentum=0.001, - update_buffers=True, - strict_load=False, - priority=49) -] - -# ============================ - -file_client_args = dict(backend='disk') -_file_client_args = dict(backend='disk') -tta_model = dict( - type='mmdet.DetTTAModel', - tta_cfg=dict(nms=dict(type='nms', iou_threshold=0.65), max_per_img=300)) -img_scales = [ - ( - 640, - 640, - ), - ( - 320, - 320, - ), - ( - 960, - 960, - ), -] -_multiscale_resize_transforms = [ - dict( - type='Compose', - transforms=[ - dict(type='YOLOv5KeepRatioResize', scale=( - 640, - 640, - )), - dict( - type='LetterResize', - scale=( - 640, - 640, - ), - allow_scale_up=False, - pad_val=dict(img=114)), - ]), - dict( - type='Compose', - transforms=[ - dict(type='YOLOv5KeepRatioResize', scale=( - 320, - 320, - )), - dict( - type='LetterResize', - scale=( - 320, - 320, - ), - allow_scale_up=False, - pad_val=dict(img=114)), - ]), - dict( - type='Compose', - transforms=[ - dict(type='YOLOv5KeepRatioResize', scale=( - 960, - 960, - )), - dict( - type='LetterResize', - scale=( - 960, - 960, - ), - allow_scale_up=False, - pad_val=dict(img=114)), - ]), -] -tta_pipeline = [ - dict(type='LoadImageFromFile', file_client_args=dict(backend='disk')), - dict( - type='TestTimeAug', - transforms=[ - [ - dict( - type='Compose', - transforms=[ - dict(type='YOLOv5KeepRatioResize', scale=( - 640, - 640, - )), - dict( - type='LetterResize', - scale=( - 640, - 640, - ), - allow_scale_up=False, - pad_val=dict(img=114)), - ]), - dict( - type='Compose', - transforms=[ - dict(type='YOLOv5KeepRatioResize', scale=( - 320, - 320, - )), - dict( - type='LetterResize', - scale=( - 320, - 320, - ), - allow_scale_up=False, - pad_val=dict(img=114)), - ]), - dict( - type='Compose', - transforms=[ - dict(type='YOLOv5KeepRatioResize', scale=( - 960, - 960, - )), - dict( - type='LetterResize', - scale=( - 960, - 960, - ), - allow_scale_up=False, - pad_val=dict(img=114)), - ]), - ], - [ - dict(type='mmdet.RandomFlip', prob=1.0), - dict(type='mmdet.RandomFlip', prob=0.0), - ], - [ - dict(type='mmdet.LoadAnnotations', with_bbox=True), - ], - [ - dict( - type='mmdet.PackDetInputs', - meta_keys=( - 'img_id', - 'img_path', - 'ori_shape', - 'img_shape', - 'scale_factor', - 'pad_param', - 'flip', - 'flip_direction', - )), - ], - ]), -] - -launcher = 'none' diff --git a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/resnet/resnetv1c101_8xb32_in1k.py b/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/resnet/resnetv1c101_8xb32_in1k.py deleted file mode 100644 index 441aff591851f402a176c142c93dc866a77b82c2..0000000000000000000000000000000000000000 --- a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/resnet/resnetv1c101_8xb32_in1k.py +++ /dev/null @@ -1,7 +0,0 @@ -_base_ = [ - '../_base_/models/resnetv1c50.py', - '../_base_/datasets/imagenet_bs32_pil_resize.py', - '../_base_/schedules/imagenet_bs256.py', '../_base_/default_runtime.py' -] - -model = dict(backbone=dict(depth=101)) diff --git a/spaces/Ababababababbababa/Ashaar/poetry_diacritizer/util/utils.py b/spaces/Ababababababbababa/Ashaar/poetry_diacritizer/util/utils.py deleted file mode 100644 index e9d33ca2361e48e9781cfee644dd9ddcffd6a59a..0000000000000000000000000000000000000000 --- a/spaces/Ababababababbababa/Ashaar/poetry_diacritizer/util/utils.py +++ /dev/null @@ -1,238 +0,0 @@ -import os -from typing import Any - -import matplotlib.pyplot as plt 
-import torch -from torch import nn -from itertools import repeat -from poetry_diacritizer.util.decorators import ignore_exception -from dataclasses import dataclass -import numpy as np - - -@dataclass -class ErrorRate: - wer: float - der: float - wer_without_case_ending: float - der_without_case_ending: float - - -def epoch_time(start_time, end_time): - elapsed_time = end_time - start_time - elapsed_mins = int(elapsed_time / 60) - elapsed_secs = int(elapsed_time - (elapsed_mins * 60)) - return elapsed_mins, elapsed_secs - - -@ignore_exception -def plot_alignment(alignment: torch.Tensor, path: str, global_step: Any = 0): - """ - Plot alignment and save it into a path - Args: - alignment (Tensor): the encoder-decoder alignment - path (str): a path used to save the alignment plot - global_step (int): used in the name of the output alignment plot - """ - alignment = alignment.squeeze(1).transpose(0, 1).cpu().detach().numpy() - fig, axs = plt.subplots() - img = axs.imshow(alignment, aspect="auto", origin="lower", interpolation="none") - fig.colorbar(img, ax=axs) - xlabel = "Decoder timestep" - plt.xlabel(xlabel) - plt.ylabel("Encoder timestep") - plt.tight_layout() - plot_name = f"{global_step}.png" - plt.savefig(os.path.join(path, plot_name), dpi=300, format="png") - plt.close() - - -def get_mask_from_lengths(memory, memory_lengths): - """Get mask tensor from list of length - Args: - memory: (batch, max_time, dim) - memory_lengths: array like - """ - mask = memory.data.new(memory.size(0), memory.size(1)).bool().zero_() - for idx, length in enumerate(memory_lengths): - mask[idx][:length] = 1 - return ~mask - - -def repeater(data_loader): - for loader in repeat(data_loader): - for data in loader: - yield data - - -def count_parameters(model): - return sum(p.numel() for p in model.parameters() if p.requires_grad) - - -def initialize_weights(m): - if hasattr(m, "weight") and m.weight.dim() > 1: - nn.init.xavier_uniform_(m.weight.data) - - -def get_encoder_layers_attentions(model): - attentions = [] - for layer in model.encoder.layers: - attentions.append(layer.self_attention.attention) - return attentions - - -def get_decoder_layers_attentions(model): - self_attns, src_attens = [], [] - for layer in model.decoder.layers: - self_attns.append(layer.self_attention.attention) - src_attens.append(layer.encoder_attention.attention) - return self_attns, src_attens - - -def display_attention( - attention, path, global_step: int, name="att", n_heads=4, n_rows=2, n_cols=2 -): - assert n_rows * n_cols == n_heads - - fig = plt.figure(figsize=(15, 15)) - - for i in range(n_heads): - - ax = fig.add_subplot(n_rows, n_cols, i + 1) - - _attention = attention.squeeze(0)[i].transpose(0, 1).cpu().detach().numpy() - cax = ax.imshow(_attention, aspect="auto", origin="lower", interpolation="none") - - plot_name = f"{global_step}-{name}.png" - plt.savefig(os.path.join(path, plot_name), dpi=300, format="png") - plt.close() - - -def plot_multi_head(model, path, global_step): - encoder_attentions = get_encoder_layers_attentions(model) - decoder_attentions, attentions = get_decoder_layers_attentions(model) - for i in range(len(attentions)): - display_attention( - attentions[0][0], path, global_step, f"encoder-decoder-layer{i + 1}" - ) - for i in range(len(decoder_attentions)): - display_attention( - decoder_attentions[0][0], path, global_step, f"decoder-layer{i + 1}" - ) - for i in range(len(encoder_attentions)): - display_attention( - encoder_attentions[0][0], path, global_step, f"encoder-layer {i + 1}" - ) - - -def 
make_src_mask(src, pad_idx=0): - - # src = [batch size, src len] - - src_mask = (src != pad_idx).unsqueeze(1).unsqueeze(2) - - # src_mask = [batch size, 1, 1, src len] - - return src_mask - - -def get_angles(pos, i, model_dim): - angle_rates = 1 / np.power(10000, (2 * (i // 2)) / np.float32(model_dim)) - return pos * angle_rates - - -def positional_encoding(position, model_dim): - angle_rads = get_angles( - np.arange(position)[:, np.newaxis], - np.arange(model_dim)[np.newaxis, :], - model_dim, - ) - - # apply sin to even indices in the array; 2i - angle_rads[:, 0::2] = np.sin(angle_rads[:, 0::2]) - - # apply cos to odd indices in the array; 2i+1 - angle_rads[:, 1::2] = np.cos(angle_rads[:, 1::2]) - - pos_encoding = angle_rads[np.newaxis, ...] - - return torch.from_numpy(pos_encoding) - - -def calculate_error_rates(original_file_path: str, target_file_path: str) -> ErrorRate: - """ - Calculates ErrorRates from paths - """ - assert os.path.isfile(original_file_path) - assert os.path.isfile(target_file_path) - - _wer = wer.calculate_wer_from_path( - inp_path=original_file_path, out_path=target_file_path, case_ending=True - ) - - _wer_without_case_ending = wer.calculate_wer_from_path( - inp_path=original_file_path, out_path=target_file_path, case_ending=False - ) - - _der = der.calculate_der_from_path( - inp_path=original_file_path, out_path=target_file_path, case_ending=True - ) - - _der_without_case_ending = der.calculate_der_from_path( - inp_path=original_file_path, out_path=target_file_path, case_ending=False - ) - - error_rates = ErrorRate( - _wer, - _der, - _wer_without_case_ending, - _der_without_case_ending, - ) - - return error_rates - - -def categorical_accuracy(preds, y, tag_pad_idx, device="cuda"): - """ - Returns accuracy per batch, i.e. if you get 8/10 right, this returns 0.8, NOT 8 - """ - max_preds = preds.argmax( - dim=1, keepdim=True - ) # get the index of the max probability - non_pad_elements = torch.nonzero((y != tag_pad_idx)) - correct = max_preds[non_pad_elements].squeeze(1).eq(y[non_pad_elements]) - return correct.sum() / torch.FloatTensor([y[non_pad_elements].shape[0]]).to(device) - - -def write_to_files(input_path, output_path, input_list, output_list): - with open(input_path, "w", encoding="utf8") as file: - for inp in input_list: - file.write(inp + "\n") - with open(output_path, "w", encoding="utf8") as file: - for out in output_list: - file.write(out + "\n") - - -def make_src_mask(src: torch.Tensor, pad_idx=0): - return (src != pad_idx).unsqueeze(1).unsqueeze(2) - - -def make_trg_mask(trg, trg_pad_idx=0): - - # trg = [batch size, trg len] - - trg_pad_mask = (trg != trg_pad_idx).unsqueeze(1).unsqueeze(2) - - # trg_pad_mask = [batch size, 1, 1, trg len] - - trg_len = trg.shape[1] - - trg_sub_mask = torch.tril(torch.ones((trg_len, trg_len))).bool() - - # trg_sub_mask = [trg len, trg len] - - trg_mask = trg_pad_mask & trg_sub_mask - - # trg_mask = [batch size, 1, trg len, trg len] - - return trg_mask diff --git a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/Providers/__init__.py b/spaces/AchyuthGamer/OpenGPT/g4f/Provider/Providers/__init__.py deleted file mode 100644 index 5b0ec33b0fc47c3dc2ef8ad9839f997c4f6bc70b..0000000000000000000000000000000000000000 --- a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/Providers/__init__.py +++ /dev/null @@ -1,100 +0,0 @@ -from __future__ import annotations -from .Acytoo import Acytoo -from .AiAsk import AiAsk -from .Aibn import Aibn -from .Aichat import Aichat -from .Ails import Ails -from .Aivvm import Aivvm -from .AItianhu import AItianhu 
-from .AItianhuSpace import AItianhuSpace -from .Bing import Bing -from .ChatBase import ChatBase -from .ChatForAi import ChatForAi -from .Chatgpt4Online import Chatgpt4Online -from .ChatgptAi import ChatgptAi -from .ChatgptDemo import ChatgptDemo -from .ChatgptDuo import ChatgptDuo -from .ChatgptX import ChatgptX -from .Cromicle import Cromicle -from .DeepAi import DeepAi -from .FreeGpt import FreeGpt -from .GPTalk import GPTalk -from .GptForLove import GptForLove -from .GptGo import GptGo -from .GptGod import GptGod -from .H2o import H2o -from .Liaobots import Liaobots -from .Myshell import Myshell -from .Phind import Phind -from .Vercel import Vercel -from .Vitalentum import Vitalentum -from .Ylokh import Ylokh -from .You import You -from .Yqcloud import Yqcloud - -from .base_provider import BaseProvider, AsyncProvider, AsyncGeneratorProvider -from .retry_provider import RetryProvider -from .deprecated import * -from .needs_auth import * -from .unfinished import * - -__all__ = [ - 'BaseProvider', - 'AsyncProvider', - 'AsyncGeneratorProvider', - 'RetryProvider', - 'Acytoo', - 'AiAsk', - 'Aibn', - 'Aichat', - 'Ails', - 'Aivvm', - 'AiService', - 'AItianhu', - 'AItianhuSpace', - 'Aivvm', - 'Bard', - 'Bing', - 'ChatBase', - 'ChatForAi', - 'Chatgpt4Online', - 'ChatgptAi', - 'ChatgptDemo', - 'ChatgptDuo', - 'ChatgptLogin', - 'ChatgptX', - 'Cromicle', - 'CodeLinkAva', - 'DeepAi', - 'DfeHub', - 'EasyChat', - 'Forefront', - 'FreeGpt', - 'GPTalk', - 'GptForLove', - 'GetGpt', - 'GptGo', - 'GptGod', - 'H2o', - 'HuggingChat', - 'Liaobots', - 'Lockchat', - 'Myshell', - 'Opchatgpts', - 'Raycast', - 'OpenaiChat', - 'OpenAssistant', - 'PerplexityAi', - 'Phind', - 'Theb', - 'Vercel', - 'Vitalentum', - 'Wewordle', - 'Ylokh', - 'You', - 'Yqcloud', - 'Equing', - 'FastGpt', - 'Wuguokai', - 'V50' -] \ No newline at end of file diff --git a/spaces/Amrrs/DragGan-Inversion/stylegan_human/torch_utils/op_edit/fused_bias_act.cpp b/spaces/Amrrs/DragGan-Inversion/stylegan_human/torch_utils/op_edit/fused_bias_act.cpp deleted file mode 100644 index a79a3d65b8fb56393c954630ae8ce5a5c8a8bb7d..0000000000000000000000000000000000000000 --- a/spaces/Amrrs/DragGan-Inversion/stylegan_human/torch_utils/op_edit/fused_bias_act.cpp +++ /dev/null @@ -1,23 +0,0 @@ -// Copyright (c) SenseTime Research. All rights reserved. 
- -#include - - -torch::Tensor fused_bias_act_op(const torch::Tensor& input, const torch::Tensor& bias, const torch::Tensor& refer, - int act, int grad, float alpha, float scale); - -#define CHECK_CUDA(x) TORCH_CHECK(x.type().is_cuda(), #x " must be a CUDA tensor") -#define CHECK_CONTIGUOUS(x) TORCH_CHECK(x.is_contiguous(), #x " must be contiguous") -#define CHECK_INPUT(x) CHECK_CUDA(x); CHECK_CONTIGUOUS(x) - -torch::Tensor fused_bias_act(const torch::Tensor& input, const torch::Tensor& bias, const torch::Tensor& refer, - int act, int grad, float alpha, float scale) { - CHECK_CUDA(input); - CHECK_CUDA(bias); - - return fused_bias_act_op(input, bias, refer, act, grad, alpha, scale); -} - -PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) { - m.def("fused_bias_act", &fused_bias_act, "fused bias act (CUDA)"); -} \ No newline at end of file diff --git a/spaces/Anar0140/4.RealTime-MediaPipe-AI-From-Video-On-Any-Device/app.py b/spaces/Anar0140/4.RealTime-MediaPipe-AI-From-Video-On-Any-Device/app.py deleted file mode 100644 index 5a8522c4a42926ac6f90d040ab01b342a1f8e7ad..0000000000000000000000000000000000000000 --- a/spaces/Anar0140/4.RealTime-MediaPipe-AI-From-Video-On-Any-Device/app.py +++ /dev/null @@ -1,59 +0,0 @@ -import streamlit as st -st.markdown(""" - -# MediaPipe - -### A cross language SDK for AI that is real time, 3d, camera responsive, and on any device for nearly any language - -#### Vision -#### Natural Language -#### Audio - -Mediapipe has fast and flexible AI/ML pipelines. - -Examples with Javascript Links! - -1. Image Classifier: https://mediapipe-studio.webapps.google.com/demo/image_classifier -2. Object Detector: https://mediapipe-studio.webapps.google.com/demo/object_detector -3. Text Classification: https://mediapipe-studio.webapps.google.com/demo/text_classifier -4. Gesture Recognizer: https://mediapipe-studio.webapps.google.com/demo/gesture_recognizer -5. Hand Landmark Detection: https://mediapipe-studio.webapps.google.com/demo/hand_landmarker -6. Audio Classifier: https://mediapipe-studio.webapps.google.com/demo/audio_classifier - -Get started with just Javascript!! - -Getting Started: https://google.github.io/mediapipe/getting_started/javascript.html - -Javascript Solutions - Ready to Demo: -1. Face Mesh: https://codepen.io/mediapipe/full/KKgVaPJ -2. Face Detection: https://codepen.io/mediapipe/full/dyOzvZM -3. Hands: https://codepen.io/mediapipe/full/RwGWYJw -4. Face, Hands, Body: https://codepen.io/mediapipe/full/LYRRYEw -5. Objectron: https://codepen.io/mediapipe/full/BaWvzdY -6. Full Skeletal Pose: https://codepen.io/mediapipe/full/jOMbvxw -7. Self Segmentation From Background: https://codepen.io/mediapipe/full/wvJyQpq - - -Demonstration in Action with Screenshots: - -Self Segmentation From Background: -![image](https://user-images.githubusercontent.com/30595158/225767564-786928a3-7c91-4df1-babb-0cc4c2b71460.png) - -Full Skeletal Pose: -![image](https://user-images.githubusercontent.com/30595158/225767721-6f088349-3f56-41b3-85d4-98f2456dc165.png) - -Hands - Both in 3D Projection even hidden surface vertices - Mahalo: -![image](https://user-images.githubusercontent.com/30595158/225767970-0e1000e8-72a8-4276-a6f0-ccfcd3ac6d72.png) - -Holistic - Face, Hands, Body: -![image](https://user-images.githubusercontent.com/30595158/225768092-2cb4a144-7033-46b1-a476-3e0ec376eb36.png) - -Face Detection: -![image](https://user-images.githubusercontent.com/30595158/225768256-c97c0f62-6ef9-4c7e-aa41-8eaf4f344a3d.png) - -Face Mesh Real Time - 30 Frames per second! 
-![image](https://user-images.githubusercontent.com/30595158/225768360-c64197ff-919f-47a9-8cc0-c6d5e73e5853.png) - - - -""") \ No newline at end of file diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_instruct_pix2pix.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_instruct_pix2pix.py deleted file mode 100644 index d27f8a21f3698a2807f95b9aaef1426b5eab733b..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_instruct_pix2pix.py +++ /dev/null @@ -1,748 +0,0 @@ -# Copyright 2023 The InstructPix2Pix Authors and The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import inspect -import warnings -from typing import Callable, List, Optional, Union - -import numpy as np -import PIL -import torch -from transformers import CLIPImageProcessor, CLIPTextModel, CLIPTokenizer - -from ...image_processor import VaeImageProcessor -from ...loaders import LoraLoaderMixin, TextualInversionLoaderMixin -from ...models import AutoencoderKL, UNet2DConditionModel -from ...schedulers import KarrasDiffusionSchedulers -from ...utils import ( - PIL_INTERPOLATION, - deprecate, - is_accelerate_available, - is_accelerate_version, - logging, - randn_tensor, -) -from ..pipeline_utils import DiffusionPipeline -from . import StableDiffusionPipelineOutput -from .safety_checker import StableDiffusionSafetyChecker - - -logger = logging.get_logger(__name__) # pylint: disable=invalid-name - - -# Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion_img2img.preprocess -def preprocess(image): - warnings.warn( - "The preprocess method is deprecated and will be removed in a future version. Please" - " use VaeImageProcessor.preprocess instead", - FutureWarning, - ) - if isinstance(image, torch.Tensor): - return image - elif isinstance(image, PIL.Image.Image): - image = [image] - - if isinstance(image[0], PIL.Image.Image): - w, h = image[0].size - w, h = (x - x % 8 for x in (w, h)) # resize to integer multiple of 8 - - image = [np.array(i.resize((w, h), resample=PIL_INTERPOLATION["lanczos"]))[None, :] for i in image] - image = np.concatenate(image, axis=0) - image = np.array(image).astype(np.float32) / 255.0 - image = image.transpose(0, 3, 1, 2) - image = 2.0 * image - 1.0 - image = torch.from_numpy(image) - elif isinstance(image[0], torch.Tensor): - image = torch.cat(image, dim=0) - return image - - -class StableDiffusionInstructPix2PixPipeline(DiffusionPipeline, TextualInversionLoaderMixin, LoraLoaderMixin): - r""" - Pipeline for pixel-level image editing by following text instructions (based on Stable Diffusion). - - This model inherits from [`DiffusionPipeline`]. 
Check the superclass documentation for the generic methods - implemented for all pipelines (downloading, saving, running on a particular device, etc.). - - The pipeline also inherits the following loading methods: - - [`~loaders.TextualInversionLoaderMixin.load_textual_inversion`] for loading textual inversion embeddings - - [`~loaders.LoraLoaderMixin.load_lora_weights`] for loading LoRA weights - - [`~loaders.LoraLoaderMixin.save_lora_weights`] for saving LoRA weights - - Args: - vae ([`AutoencoderKL`]): - Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. - text_encoder ([`~transformers.CLIPTextModel`]): - Frozen text-encoder ([clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14)). - tokenizer ([`~transformers.CLIPTokenizer`]): - A `CLIPTokenizer` to tokenize text. - unet ([`UNet2DConditionModel`]): - A `UNet2DConditionModel` to denoise the encoded image latents. - scheduler ([`SchedulerMixin`]): - A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of - [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`]. - safety_checker ([`StableDiffusionSafetyChecker`]): - Classification module that estimates whether generated images could be considered offensive or harmful. - Please refer to the [model card](https://huggingface.co/runwayml/stable-diffusion-v1-5) for more details - about a model's potential harms. - feature_extractor ([`~transformers.CLIPImageProcessor`]): - A `CLIPImageProcessor` to extract features from generated images; used as inputs to the `safety_checker`. - """ - _optional_components = ["safety_checker", "feature_extractor"] - - def __init__( - self, - vae: AutoencoderKL, - text_encoder: CLIPTextModel, - tokenizer: CLIPTokenizer, - unet: UNet2DConditionModel, - scheduler: KarrasDiffusionSchedulers, - safety_checker: StableDiffusionSafetyChecker, - feature_extractor: CLIPImageProcessor, - requires_safety_checker: bool = True, - ): - super().__init__() - - if safety_checker is None and requires_safety_checker: - logger.warning( - f"You have disabled the safety checker for {self.__class__} by passing `safety_checker=None`. Ensure" - " that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered" - " results in services or applications open to the public. Both the diffusers team and Hugging Face" - " strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling" - " it only for use-cases that involve analyzing network behavior or auditing its results. For more" - " information, please have a look at https://github.com/huggingface/diffusers/pull/254 ." - ) - - if safety_checker is not None and feature_extractor is None: - raise ValueError( - "Make sure to define a feature extractor when loading {self.__class__} if you want to use the safety" - " checker. If you do not want to use the safety checker, you can pass `'safety_checker=None'` instead." 
- ) - - self.register_modules( - vae=vae, - text_encoder=text_encoder, - tokenizer=tokenizer, - unet=unet, - scheduler=scheduler, - safety_checker=safety_checker, - feature_extractor=feature_extractor, - ) - self.vae_scale_factor = 2 ** (len(self.vae.config.block_out_channels) - 1) - self.image_processor = VaeImageProcessor(vae_scale_factor=self.vae_scale_factor) - self.register_to_config(requires_safety_checker=requires_safety_checker) - - @torch.no_grad() - def __call__( - self, - prompt: Union[str, List[str]] = None, - image: Union[ - torch.FloatTensor, - PIL.Image.Image, - np.ndarray, - List[torch.FloatTensor], - List[PIL.Image.Image], - List[np.ndarray], - ] = None, - num_inference_steps: int = 100, - guidance_scale: float = 7.5, - image_guidance_scale: float = 1.5, - negative_prompt: Optional[Union[str, List[str]]] = None, - num_images_per_prompt: Optional[int] = 1, - eta: float = 0.0, - generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None, - latents: Optional[torch.FloatTensor] = None, - prompt_embeds: Optional[torch.FloatTensor] = None, - negative_prompt_embeds: Optional[torch.FloatTensor] = None, - output_type: Optional[str] = "pil", - return_dict: bool = True, - callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None, - callback_steps: int = 1, - ): - r""" - The call function to the pipeline for generation. - - Args: - prompt (`str` or `List[str]`, *optional*): - The prompt or prompts to guide image generation. If not defined, you need to pass `prompt_embeds`. - image (`torch.FloatTensor` `np.ndarray`, `PIL.Image.Image`, `List[torch.FloatTensor]`, `List[PIL.Image.Image]`, or `List[np.ndarray]`): - `Image` or tensor representing an image batch to be repainted according to `prompt`. Can also accept - image latents as `image`, but if passing latents directly it is not encoded again. - num_inference_steps (`int`, *optional*, defaults to 100): - The number of denoising steps. More denoising steps usually lead to a higher quality image at the - expense of slower inference. - guidance_scale (`float`, *optional*, defaults to 7.5): - A higher guidance scale value encourages the model to generate images closely linked to the text - `prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`. - image_guidance_scale (`float`, *optional*, defaults to 1.5): - Push the generated image towards the inital `image`. Image guidance scale is enabled by setting - `image_guidance_scale > 1`. Higher image guidance scale encourages generated images that are closely - linked to the source `image`, usually at the expense of lower image quality. This pipeline requires a - value of at least `1`. - negative_prompt (`str` or `List[str]`, *optional*): - The prompt or prompts to guide what to not include in image generation. If not defined, you need to - pass `negative_prompt_embeds` instead. Ignored when not using guidance (`guidance_scale < 1`). - num_images_per_prompt (`int`, *optional*, defaults to 1): - The number of images to generate per prompt. - eta (`float`, *optional*, defaults to 0.0): - Corresponds to parameter eta (η) from the [DDIM](https://arxiv.org/abs/2010.02502) paper. Only applies - to the [`~schedulers.DDIMScheduler`], and is ignored in other schedulers. - generator (`torch.Generator`, *optional*): - A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make - generation deterministic. 
- latents (`torch.FloatTensor`, *optional*): - Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image - generation. Can be used to tweak the same generation with different prompts. If not provided, a latents - tensor is generated by sampling using the supplied random `generator`. - prompt_embeds (`torch.FloatTensor`, *optional*): - Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not - provided, text embeddings are generated from the `prompt` input argument. - negative_prompt_embeds (`torch.FloatTensor`, *optional*): - Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If - not provided, `negative_prompt_embeds` are generated from the `negative_prompt` input argument. - output_type (`str`, *optional*, defaults to `"pil"`): - The output format of the generated image. Choose between `PIL.Image` or `np.array`. - return_dict (`bool`, *optional*, defaults to `True`): - Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a - plain tuple. - callback (`Callable`, *optional*): - A function that calls every `callback_steps` steps during inference. The function is called with the - following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`. - callback_steps (`int`, *optional*, defaults to 1): - The frequency at which the `callback` function is called. If not specified, the callback is called at - every step. - - Examples: - - ```py - >>> import PIL - >>> import requests - >>> import torch - >>> from io import BytesIO - - >>> from diffusers import StableDiffusionInstructPix2PixPipeline - - - >>> def download_image(url): - ... response = requests.get(url) - ... return PIL.Image.open(BytesIO(response.content)).convert("RGB") - - - >>> img_url = "https://huggingface.co/datasets/diffusers/diffusers-images-docs/resolve/main/mountain.png" - - >>> image = download_image(img_url).resize((512, 512)) - - >>> pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained( - ... "timbrooks/instruct-pix2pix", torch_dtype=torch.float16 - ... ) - >>> pipe = pipe.to("cuda") - - >>> prompt = "make the mountains snowy" - >>> image = pipe(prompt=prompt, image=image).images[0] - ``` - - Returns: - [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`: - If `return_dict` is `True`, [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] is returned, - otherwise a `tuple` is returned where the first element is a list with the generated images and the - second element is a list of `bool`s indicating whether the corresponding generated image contains - "not-safe-for-work" (nsfw) content. - """ - # 0. Check inputs - self.check_inputs(prompt, callback_steps, negative_prompt, prompt_embeds, negative_prompt_embeds) - - if image is None: - raise ValueError("`image` input cannot be undefined.") - - # 1. Define call parameters - if prompt is not None and isinstance(prompt, str): - batch_size = 1 - elif prompt is not None and isinstance(prompt, list): - batch_size = len(prompt) - else: - batch_size = prompt_embeds.shape[0] - - device = self._execution_device - # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2) - # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1` - # corresponds to doing no classifier free guidance. 
- do_classifier_free_guidance = guidance_scale > 1.0 and image_guidance_scale >= 1.0 - # check if scheduler is in sigmas space - scheduler_is_in_sigma_space = hasattr(self.scheduler, "sigmas") - - # 2. Encode input prompt - prompt_embeds = self._encode_prompt( - prompt, - device, - num_images_per_prompt, - do_classifier_free_guidance, - negative_prompt, - prompt_embeds=prompt_embeds, - negative_prompt_embeds=negative_prompt_embeds, - ) - - # 3. Preprocess image - image = self.image_processor.preprocess(image) - - # 4. set timesteps - self.scheduler.set_timesteps(num_inference_steps, device=device) - timesteps = self.scheduler.timesteps - - # 5. Prepare Image latents - image_latents = self.prepare_image_latents( - image, - batch_size, - num_images_per_prompt, - prompt_embeds.dtype, - device, - do_classifier_free_guidance, - generator, - ) - - height, width = image_latents.shape[-2:] - height = height * self.vae_scale_factor - width = width * self.vae_scale_factor - - # 6. Prepare latent variables - num_channels_latents = self.vae.config.latent_channels - latents = self.prepare_latents( - batch_size * num_images_per_prompt, - num_channels_latents, - height, - width, - prompt_embeds.dtype, - device, - generator, - latents, - ) - - # 7. Check that shapes of latents and image match the UNet channels - num_channels_image = image_latents.shape[1] - if num_channels_latents + num_channels_image != self.unet.config.in_channels: - raise ValueError( - f"Incorrect configuration settings! The config of `pipeline.unet`: {self.unet.config} expects" - f" {self.unet.config.in_channels} but received `num_channels_latents`: {num_channels_latents} +" - f" `num_channels_image`: {num_channels_image} " - f" = {num_channels_latents+num_channels_image}. Please verify the config of" - " `pipeline.unet` or your `image` input." - ) - - # 8. Prepare extra step kwargs. TODO: Logic should ideally just be moved out of the pipeline - extra_step_kwargs = self.prepare_extra_step_kwargs(generator, eta) - - # 9. Denoising loop - num_warmup_steps = len(timesteps) - num_inference_steps * self.scheduler.order - with self.progress_bar(total=num_inference_steps) as progress_bar: - for i, t in enumerate(timesteps): - # Expand the latents if we are doing classifier free guidance. - # The latents are expanded 3 times because for pix2pix the guidance\ - # is applied for both the text and the input image. - latent_model_input = torch.cat([latents] * 3) if do_classifier_free_guidance else latents - - # concat latents, image_latents in the channel dimension - scaled_latent_model_input = self.scheduler.scale_model_input(latent_model_input, t) - scaled_latent_model_input = torch.cat([scaled_latent_model_input, image_latents], dim=1) - - # predict the noise residual - noise_pred = self.unet( - scaled_latent_model_input, t, encoder_hidden_states=prompt_embeds, return_dict=False - )[0] - - # Hack: - # For karras style schedulers the model does classifer free guidance using the - # predicted_original_sample instead of the noise_pred. So we need to compute the - # predicted_original_sample here if we are using a karras style scheduler. 
- if scheduler_is_in_sigma_space: - step_index = (self.scheduler.timesteps == t).nonzero()[0].item() - sigma = self.scheduler.sigmas[step_index] - noise_pred = latent_model_input - sigma * noise_pred - - # perform guidance - if do_classifier_free_guidance: - noise_pred_text, noise_pred_image, noise_pred_uncond = noise_pred.chunk(3) - noise_pred = ( - noise_pred_uncond - + guidance_scale * (noise_pred_text - noise_pred_image) - + image_guidance_scale * (noise_pred_image - noise_pred_uncond) - ) - - # Hack: - # For karras style schedulers the model does classifer free guidance using the - # predicted_original_sample instead of the noise_pred. But the scheduler.step function - # expects the noise_pred and computes the predicted_original_sample internally. So we - # need to overwrite the noise_pred here such that the value of the computed - # predicted_original_sample is correct. - if scheduler_is_in_sigma_space: - noise_pred = (noise_pred - latents) / (-sigma) - - # compute the previous noisy sample x_t -> x_t-1 - latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs, return_dict=False)[0] - - # call the callback, if provided - if i == len(timesteps) - 1 or ((i + 1) > num_warmup_steps and (i + 1) % self.scheduler.order == 0): - progress_bar.update() - if callback is not None and i % callback_steps == 0: - callback(i, t, latents) - - if not output_type == "latent": - image = self.vae.decode(latents / self.vae.config.scaling_factor, return_dict=False)[0] - image, has_nsfw_concept = self.run_safety_checker(image, device, prompt_embeds.dtype) - else: - image = latents - has_nsfw_concept = None - - if has_nsfw_concept is None: - do_denormalize = [True] * image.shape[0] - else: - do_denormalize = [not has_nsfw for has_nsfw in has_nsfw_concept] - - image = self.image_processor.postprocess(image, output_type=output_type, do_denormalize=do_denormalize) - - # Offload last model to CPU - if hasattr(self, "final_offload_hook") and self.final_offload_hook is not None: - self.final_offload_hook.offload() - - if not return_dict: - return (image, has_nsfw_concept) - - return StableDiffusionPipelineOutput(images=image, nsfw_content_detected=has_nsfw_concept) - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.enable_model_cpu_offload - def enable_model_cpu_offload(self, gpu_id=0): - r""" - Offload all models to CPU to reduce memory usage with a low impact on performance. Moves one whole model at a - time to the GPU when its `forward` method is called, and the model remains in GPU until the next model runs. - Memory savings are lower than using `enable_sequential_cpu_offload`, but performance is much better due to the - iterative execution of the `unet`. - """ - if is_accelerate_available() and is_accelerate_version(">=", "0.17.0.dev0"): - from accelerate import cpu_offload_with_hook - else: - raise ImportError("`enable_model_cpu_offload` requires `accelerate v0.17.0` or higher.") - - device = torch.device(f"cuda:{gpu_id}") - - if self.device.type != "cpu": - self.to("cpu", silence_dtype_warnings=True) - torch.cuda.empty_cache() # otherwise we don't see the memory savings (but they probably exist) - - hook = None - for cpu_offloaded_model in [self.text_encoder, self.unet, self.vae]: - _, hook = cpu_offload_with_hook(cpu_offloaded_model, device, prev_module_hook=hook) - - if self.safety_checker is not None: - _, hook = cpu_offload_with_hook(self.safety_checker, device, prev_module_hook=hook) - - # We'll offload the last model manually. 
- self.final_offload_hook = hook - - def _encode_prompt( - self, - prompt, - device, - num_images_per_prompt, - do_classifier_free_guidance, - negative_prompt=None, - prompt_embeds: Optional[torch.FloatTensor] = None, - negative_prompt_embeds: Optional[torch.FloatTensor] = None, - ): - r""" - Encodes the prompt into text encoder hidden states. - - Args: - prompt (`str` or `List[str]`, *optional*): - prompt to be encoded - device: (`torch.device`): - torch device - num_images_per_prompt (`int`): - number of images that should be generated per prompt - do_classifier_free_guidance (`bool`): - whether to use classifier free guidance or not - negative_ prompt (`str` or `List[str]`, *optional*): - The prompt or prompts not to guide the image generation. If not defined, one has to pass - `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is - less than `1`). - prompt_embeds (`torch.FloatTensor`, *optional*): - Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not - provided, text embeddings will be generated from `prompt` input argument. - negative_prompt_embeds (`torch.FloatTensor`, *optional*): - Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt - weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input - argument. - """ - if prompt is not None and isinstance(prompt, str): - batch_size = 1 - elif prompt is not None and isinstance(prompt, list): - batch_size = len(prompt) - else: - batch_size = prompt_embeds.shape[0] - - if prompt_embeds is None: - # textual inversion: procecss multi-vector tokens if necessary - if isinstance(self, TextualInversionLoaderMixin): - prompt = self.maybe_convert_prompt(prompt, self.tokenizer) - - text_inputs = self.tokenizer( - prompt, - padding="max_length", - max_length=self.tokenizer.model_max_length, - truncation=True, - return_tensors="pt", - ) - text_input_ids = text_inputs.input_ids - untruncated_ids = self.tokenizer(prompt, padding="longest", return_tensors="pt").input_ids - - if untruncated_ids.shape[-1] >= text_input_ids.shape[-1] and not torch.equal( - text_input_ids, untruncated_ids - ): - removed_text = self.tokenizer.batch_decode( - untruncated_ids[:, self.tokenizer.model_max_length - 1 : -1] - ) - logger.warning( - "The following part of your input was truncated because CLIP can only handle sequences up to" - f" {self.tokenizer.model_max_length} tokens: {removed_text}" - ) - - if hasattr(self.text_encoder.config, "use_attention_mask") and self.text_encoder.config.use_attention_mask: - attention_mask = text_inputs.attention_mask.to(device) - else: - attention_mask = None - - prompt_embeds = self.text_encoder( - text_input_ids.to(device), - attention_mask=attention_mask, - ) - prompt_embeds = prompt_embeds[0] - - prompt_embeds = prompt_embeds.to(dtype=self.text_encoder.dtype, device=device) - - bs_embed, seq_len, _ = prompt_embeds.shape - # duplicate text embeddings for each generation per prompt, using mps friendly method - prompt_embeds = prompt_embeds.repeat(1, num_images_per_prompt, 1) - prompt_embeds = prompt_embeds.view(bs_embed * num_images_per_prompt, seq_len, -1) - - # get unconditional embeddings for classifier free guidance - if do_classifier_free_guidance and negative_prompt_embeds is None: - uncond_tokens: List[str] - if negative_prompt is None: - uncond_tokens = [""] * batch_size - elif type(prompt) is not type(negative_prompt): - raise TypeError( - 
f"`negative_prompt` should be the same type to `prompt`, but got {type(negative_prompt)} !=" - f" {type(prompt)}." - ) - elif isinstance(negative_prompt, str): - uncond_tokens = [negative_prompt] - elif batch_size != len(negative_prompt): - raise ValueError( - f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:" - f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches" - " the batch size of `prompt`." - ) - else: - uncond_tokens = negative_prompt - - # textual inversion: procecss multi-vector tokens if necessary - if isinstance(self, TextualInversionLoaderMixin): - uncond_tokens = self.maybe_convert_prompt(uncond_tokens, self.tokenizer) - - max_length = prompt_embeds.shape[1] - uncond_input = self.tokenizer( - uncond_tokens, - padding="max_length", - max_length=max_length, - truncation=True, - return_tensors="pt", - ) - - if hasattr(self.text_encoder.config, "use_attention_mask") and self.text_encoder.config.use_attention_mask: - attention_mask = uncond_input.attention_mask.to(device) - else: - attention_mask = None - - negative_prompt_embeds = self.text_encoder( - uncond_input.input_ids.to(device), - attention_mask=attention_mask, - ) - negative_prompt_embeds = negative_prompt_embeds[0] - - if do_classifier_free_guidance: - # duplicate unconditional embeddings for each generation per prompt, using mps friendly method - seq_len = negative_prompt_embeds.shape[1] - - negative_prompt_embeds = negative_prompt_embeds.to(dtype=self.text_encoder.dtype, device=device) - - negative_prompt_embeds = negative_prompt_embeds.repeat(1, num_images_per_prompt, 1) - negative_prompt_embeds = negative_prompt_embeds.view(batch_size * num_images_per_prompt, seq_len, -1) - - # For classifier free guidance, we need to do two forward passes. - # Here we concatenate the unconditional and text embeddings into a single batch - # to avoid doing two forward passes - # pix2pix has two negative embeddings, and unlike in other pipelines latents are ordered [prompt_embeds, negative_prompt_embeds, negative_prompt_embeds] - prompt_embeds = torch.cat([prompt_embeds, negative_prompt_embeds, negative_prompt_embeds]) - - return prompt_embeds - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.run_safety_checker - def run_safety_checker(self, image, device, dtype): - if self.safety_checker is None: - has_nsfw_concept = None - else: - if torch.is_tensor(image): - feature_extractor_input = self.image_processor.postprocess(image, output_type="pil") - else: - feature_extractor_input = self.image_processor.numpy_to_pil(image) - safety_checker_input = self.feature_extractor(feature_extractor_input, return_tensors="pt").to(device) - image, has_nsfw_concept = self.safety_checker( - images=image, clip_input=safety_checker_input.pixel_values.to(dtype) - ) - return image, has_nsfw_concept - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_extra_step_kwargs - def prepare_extra_step_kwargs(self, generator, eta): - # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature - # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers. 
- # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502 - # and should be between [0, 1] - - accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys()) - extra_step_kwargs = {} - if accepts_eta: - extra_step_kwargs["eta"] = eta - - # check if the scheduler accepts generator - accepts_generator = "generator" in set(inspect.signature(self.scheduler.step).parameters.keys()) - if accepts_generator: - extra_step_kwargs["generator"] = generator - return extra_step_kwargs - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.decode_latents - def decode_latents(self, latents): - warnings.warn( - "The decode_latents method is deprecated and will be removed in a future version. Please" - " use VaeImageProcessor instead", - FutureWarning, - ) - latents = 1 / self.vae.config.scaling_factor * latents - image = self.vae.decode(latents, return_dict=False)[0] - image = (image / 2 + 0.5).clamp(0, 1) - # we always cast to float32 as this does not cause significant overhead and is compatible with bfloat16 - image = image.cpu().permute(0, 2, 3, 1).float().numpy() - return image - - def check_inputs( - self, prompt, callback_steps, negative_prompt=None, prompt_embeds=None, negative_prompt_embeds=None - ): - if (callback_steps is None) or ( - callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0) - ): - raise ValueError( - f"`callback_steps` has to be a positive integer but is {callback_steps} of type" - f" {type(callback_steps)}." - ) - - if prompt is not None and prompt_embeds is not None: - raise ValueError( - f"Cannot forward both `prompt`: {prompt} and `prompt_embeds`: {prompt_embeds}. Please make sure to" - " only forward one of the two." - ) - elif prompt is None and prompt_embeds is None: - raise ValueError( - "Provide either `prompt` or `prompt_embeds`. Cannot leave both `prompt` and `prompt_embeds` undefined." - ) - elif prompt is not None and (not isinstance(prompt, str) and not isinstance(prompt, list)): - raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}") - - if negative_prompt is not None and negative_prompt_embeds is not None: - raise ValueError( - f"Cannot forward both `negative_prompt`: {negative_prompt} and `negative_prompt_embeds`:" - f" {negative_prompt_embeds}. Please make sure to only forward one of the two." - ) - - if prompt_embeds is not None and negative_prompt_embeds is not None: - if prompt_embeds.shape != negative_prompt_embeds.shape: - raise ValueError( - "`prompt_embeds` and `negative_prompt_embeds` must have the same shape when passed directly, but" - f" got: `prompt_embeds` {prompt_embeds.shape} != `negative_prompt_embeds`" - f" {negative_prompt_embeds.shape}." - ) - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_latents - def prepare_latents(self, batch_size, num_channels_latents, height, width, dtype, device, generator, latents=None): - shape = (batch_size, num_channels_latents, height // self.vae_scale_factor, width // self.vae_scale_factor) - if isinstance(generator, list) and len(generator) != batch_size: - raise ValueError( - f"You have passed a list of generators of length {len(generator)}, but requested an effective batch" - f" size of {batch_size}. Make sure the batch size matches the length of the generators." 
- ) - - if latents is None: - latents = randn_tensor(shape, generator=generator, device=device, dtype=dtype) - else: - latents = latents.to(device) - - # scale the initial noise by the standard deviation required by the scheduler - latents = latents * self.scheduler.init_noise_sigma - return latents - - def prepare_image_latents( - self, image, batch_size, num_images_per_prompt, dtype, device, do_classifier_free_guidance, generator=None - ): - if not isinstance(image, (torch.Tensor, PIL.Image.Image, list)): - raise ValueError( - f"`image` has to be of type `torch.Tensor`, `PIL.Image.Image` or list but is {type(image)}" - ) - - image = image.to(device=device, dtype=dtype) - - batch_size = batch_size * num_images_per_prompt - - if image.shape[1] == 4: - image_latents = image - else: - if isinstance(generator, list) and len(generator) != batch_size: - raise ValueError( - f"You have passed a list of generators of length {len(generator)}, but requested an effective batch" - f" size of {batch_size}. Make sure the batch size matches the length of the generators." - ) - - if isinstance(generator, list): - image_latents = [self.vae.encode(image[i : i + 1]).latent_dist.mode() for i in range(batch_size)] - image_latents = torch.cat(image_latents, dim=0) - else: - image_latents = self.vae.encode(image).latent_dist.mode() - - if batch_size > image_latents.shape[0] and batch_size % image_latents.shape[0] == 0: - # expand image_latents for batch_size - deprecation_message = ( - f"You have passed {batch_size} text prompts (`prompt`), but only {image_latents.shape[0]} initial" - " images (`image`). Initial images are now duplicating to match the number of text prompts. Note" - " that this behavior is deprecated and will be removed in a version 1.0.0. Please make sure to update" - " your script to pass as many initial images as text prompts to suppress this warning." - ) - deprecate("len(prompt) != len(image)", "1.0.0", deprecation_message, standard_warn=False) - additional_image_per_prompt = batch_size // image_latents.shape[0] - image_latents = torch.cat([image_latents] * additional_image_per_prompt, dim=0) - elif batch_size > image_latents.shape[0] and batch_size % image_latents.shape[0] != 0: - raise ValueError( - f"Cannot duplicate `image` of batch size {image_latents.shape[0]} to {batch_size} text prompts." 
- ) - else: - image_latents = torch.cat([image_latents], dim=0) - - if do_classifier_free_guidance: - uncond_image_latents = torch.zeros_like(image_latents) - image_latents = torch.cat([image_latents, image_latents, uncond_image_latents], dim=0) - - return image_latents diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/stable_unclip/test_stable_unclip.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/stable_unclip/test_stable_unclip.py deleted file mode 100644 index 8d5edda16904e7780bc00ea06746e1fd8f5c034d..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/stable_unclip/test_stable_unclip.py +++ /dev/null @@ -1,241 +0,0 @@ -import gc -import unittest - -import torch -from transformers import CLIPTextConfig, CLIPTextModel, CLIPTextModelWithProjection, CLIPTokenizer - -from diffusers import ( - AutoencoderKL, - DDIMScheduler, - DDPMScheduler, - PriorTransformer, - StableUnCLIPPipeline, - UNet2DConditionModel, -) -from diffusers.pipelines.stable_diffusion.stable_unclip_image_normalizer import StableUnCLIPImageNormalizer -from diffusers.utils.testing_utils import enable_full_determinism, load_numpy, require_torch_gpu, slow, torch_device - -from ..pipeline_params import TEXT_TO_IMAGE_BATCH_PARAMS, TEXT_TO_IMAGE_IMAGE_PARAMS, TEXT_TO_IMAGE_PARAMS -from ..test_pipelines_common import ( - PipelineKarrasSchedulerTesterMixin, - PipelineLatentTesterMixin, - PipelineTesterMixin, - assert_mean_pixel_difference, -) - - -enable_full_determinism() - - -class StableUnCLIPPipelineFastTests( - PipelineLatentTesterMixin, PipelineKarrasSchedulerTesterMixin, PipelineTesterMixin, unittest.TestCase -): - pipeline_class = StableUnCLIPPipeline - params = TEXT_TO_IMAGE_PARAMS - batch_params = TEXT_TO_IMAGE_BATCH_PARAMS - image_params = TEXT_TO_IMAGE_IMAGE_PARAMS - image_latents_params = TEXT_TO_IMAGE_IMAGE_PARAMS - - # TODO(will) Expected attn_bias.stride(1) == 0 to be true, but got false - test_xformers_attention = False - - def get_dummy_components(self): - embedder_hidden_size = 32 - embedder_projection_dim = embedder_hidden_size - - # prior components - - torch.manual_seed(0) - prior_tokenizer = CLIPTokenizer.from_pretrained("hf-internal-testing/tiny-random-clip") - - torch.manual_seed(0) - prior_text_encoder = CLIPTextModelWithProjection( - CLIPTextConfig( - bos_token_id=0, - eos_token_id=2, - hidden_size=embedder_hidden_size, - projection_dim=embedder_projection_dim, - intermediate_size=37, - layer_norm_eps=1e-05, - num_attention_heads=4, - num_hidden_layers=5, - pad_token_id=1, - vocab_size=1000, - ) - ) - - torch.manual_seed(0) - prior = PriorTransformer( - num_attention_heads=2, - attention_head_dim=12, - embedding_dim=embedder_projection_dim, - num_layers=1, - ) - - torch.manual_seed(0) - prior_scheduler = DDPMScheduler( - variance_type="fixed_small_log", - prediction_type="sample", - num_train_timesteps=1000, - clip_sample=True, - clip_sample_range=5.0, - beta_schedule="squaredcos_cap_v2", - ) - - # regular denoising components - - torch.manual_seed(0) - image_normalizer = StableUnCLIPImageNormalizer(embedding_dim=embedder_hidden_size) - image_noising_scheduler = DDPMScheduler(beta_schedule="squaredcos_cap_v2") - - torch.manual_seed(0) - tokenizer = CLIPTokenizer.from_pretrained("hf-internal-testing/tiny-random-clip") - - torch.manual_seed(0) - text_encoder = CLIPTextModel( - CLIPTextConfig( - bos_token_id=0, - eos_token_id=2, - hidden_size=embedder_hidden_size, - projection_dim=32, 
- intermediate_size=37, - layer_norm_eps=1e-05, - num_attention_heads=4, - num_hidden_layers=5, - pad_token_id=1, - vocab_size=1000, - ) - ) - - torch.manual_seed(0) - unet = UNet2DConditionModel( - sample_size=32, - in_channels=4, - out_channels=4, - down_block_types=("CrossAttnDownBlock2D", "DownBlock2D"), - up_block_types=("UpBlock2D", "CrossAttnUpBlock2D"), - block_out_channels=(32, 64), - attention_head_dim=(2, 4), - class_embed_type="projection", - # The class embeddings are the noise augmented image embeddings. - # I.e. the image embeddings concated with the noised embeddings of the same dimension - projection_class_embeddings_input_dim=embedder_projection_dim * 2, - cross_attention_dim=embedder_hidden_size, - layers_per_block=1, - upcast_attention=True, - use_linear_projection=True, - ) - - torch.manual_seed(0) - scheduler = DDIMScheduler( - beta_schedule="scaled_linear", - beta_start=0.00085, - beta_end=0.012, - prediction_type="v_prediction", - set_alpha_to_one=False, - steps_offset=1, - ) - - torch.manual_seed(0) - vae = AutoencoderKL() - - components = { - # prior components - "prior_tokenizer": prior_tokenizer, - "prior_text_encoder": prior_text_encoder, - "prior": prior, - "prior_scheduler": prior_scheduler, - # image noising components - "image_normalizer": image_normalizer, - "image_noising_scheduler": image_noising_scheduler, - # regular denoising components - "tokenizer": tokenizer, - "text_encoder": text_encoder, - "unet": unet, - "scheduler": scheduler, - "vae": vae, - } - - return components - - def get_dummy_inputs(self, device, seed=0): - if str(device).startswith("mps"): - generator = torch.manual_seed(seed) - else: - generator = torch.Generator(device=device).manual_seed(seed) - inputs = { - "prompt": "A painting of a squirrel eating a burger", - "generator": generator, - "num_inference_steps": 2, - "prior_num_inference_steps": 2, - "output_type": "numpy", - } - return inputs - - # Overriding PipelineTesterMixin::test_attention_slicing_forward_pass - # because UnCLIP GPU undeterminism requires a looser check. - def test_attention_slicing_forward_pass(self): - test_max_difference = torch_device == "cpu" - - self._test_attention_slicing_forward_pass(test_max_difference=test_max_difference) - - # Overriding PipelineTesterMixin::test_inference_batch_single_identical - # because UnCLIP undeterminism requires a looser check. 
- def test_inference_batch_single_identical(self): - test_max_difference = torch_device in ["cpu", "mps"] - - self._test_inference_batch_single_identical(test_max_difference=test_max_difference) - - -@slow -@require_torch_gpu -class StableUnCLIPPipelineIntegrationTests(unittest.TestCase): - def tearDown(self): - # clean up the VRAM after each test - super().tearDown() - gc.collect() - torch.cuda.empty_cache() - - def test_stable_unclip(self): - expected_image = load_numpy( - "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/stable_unclip/stable_unclip_2_1_l_anime_turtle_fp16.npy" - ) - - pipe = StableUnCLIPPipeline.from_pretrained("fusing/stable-unclip-2-1-l", torch_dtype=torch.float16) - pipe.to(torch_device) - pipe.set_progress_bar_config(disable=None) - # stable unclip will oom when integration tests are run on a V100, - # so turn on memory savings - pipe.enable_attention_slicing() - pipe.enable_sequential_cpu_offload() - - generator = torch.Generator(device="cpu").manual_seed(0) - output = pipe("anime turle", generator=generator, output_type="np") - - image = output.images[0] - - assert image.shape == (768, 768, 3) - - assert_mean_pixel_difference(image, expected_image) - - def test_stable_unclip_pipeline_with_sequential_cpu_offloading(self): - torch.cuda.empty_cache() - torch.cuda.reset_max_memory_allocated() - torch.cuda.reset_peak_memory_stats() - - pipe = StableUnCLIPPipeline.from_pretrained("fusing/stable-unclip-2-1-l", torch_dtype=torch.float16) - pipe = pipe.to(torch_device) - pipe.set_progress_bar_config(disable=None) - pipe.enable_attention_slicing() - pipe.enable_sequential_cpu_offload() - - _ = pipe( - "anime turtle", - prior_num_inference_steps=2, - num_inference_steps=2, - output_type="np", - ) - - mem_bytes = torch.cuda.max_memory_allocated() - # make sure that less than 7 GB is allocated - assert mem_bytes < 7 * 10**9 diff --git a/spaces/Andy1621/uniformer_image_detection/configs/foveabox/README.md b/spaces/Andy1621/uniformer_image_detection/configs/foveabox/README.md deleted file mode 100644 index 91a43c9797bc88e747f22a5878f1bf4b12946389..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/configs/foveabox/README.md +++ /dev/null @@ -1,41 +0,0 @@ -# FoveaBox: Beyond Anchor-based Object Detector - -[ALGORITHM] - -FoveaBox is an accurate, flexible and completely anchor-free object detection system for object detection framework, as presented in our paper [https://arxiv.org/abs/1904.03797](https://arxiv.org/abs/1904.03797): -Different from previous anchor-based methods, FoveaBox directly learns the object existing possibility and the bounding box coordinates without anchor reference. This is achieved by: (a) predicting category-sensitive semantic maps for the object existing possibility, and (b) producing category-agnostic bounding box for each position that potentially contains an object. 
- -## Main Results - -### Results on R50/101-FPN - -| Backbone | Style | align | ms-train| Lr schd | Mem (GB) | Inf time (fps) | box AP | Config | Download | -|:---------:|:-------:|:-------:|:-------:|:-------:|:--------:|:--------------:|:------:|:------:|:--------:| -| R-50 | pytorch | N | N | 1x | 5.6 | 24.1 | 36.5 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/foveabox/fovea_r50_fpn_4x4_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/foveabox/fovea_r50_fpn_4x4_1x_coco/fovea_r50_fpn_4x4_1x_coco_20200219-ee4d5303.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/foveabox/fovea_r50_fpn_4x4_1x_coco/fovea_r50_fpn_4x4_1x_coco_20200219_223025.log.json) | -| R-50 | pytorch | N | N | 2x | 5.6 | - | 37.2 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/foveabox/fovea_r50_fpn_4x4_2x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/foveabox/fovea_r50_fpn_4x4_2x_coco/fovea_r50_fpn_4x4_2x_coco_20200203-2df792b1.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/foveabox/fovea_r50_fpn_4x4_2x_coco/fovea_r50_fpn_4x4_2x_coco_20200203_112043.log.json) | -| R-50 | pytorch | Y | N | 2x | 8.1 | 19.4 | 37.9 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/foveabox/fovea_align_r50_fpn_gn-head_4x4_2x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/foveabox/fovea_align_r50_fpn_gn-head_4x4_2x_coco/fovea_align_r50_fpn_gn-head_4x4_2x_coco_20200203-8987880d.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/foveabox/fovea_align_r50_fpn_gn-head_4x4_2x_coco/fovea_align_r50_fpn_gn-head_4x4_2x_coco_20200203_134252.log.json) | -| R-50 | pytorch | Y | Y | 2x | 8.1 | 18.3 | 40.4 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/foveabox/fovea_align_r50_fpn_gn-head_mstrain_640-800_4x4_2x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/foveabox/fovea_align_r50_fpn_gn-head_mstrain_640-800_4x4_2x_coco/fovea_align_r50_fpn_gn-head_mstrain_640-800_4x4_2x_coco_20200205-85ce26cb.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/foveabox/fovea_align_r50_fpn_gn-head_mstrain_640-800_4x4_2x_coco/fovea_align_r50_fpn_gn-head_mstrain_640-800_4x4_2x_coco_20200205_112557.log.json) | -| R-101 | pytorch | N | N | 1x | 9.2 | 17.4 | 38.6 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/foveabox/fovea_r101_fpn_4x4_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/foveabox/fovea_r101_fpn_4x4_1x_coco/fovea_r101_fpn_4x4_1x_coco_20200219-05e38f1c.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/foveabox/fovea_r101_fpn_4x4_1x_coco/fovea_r101_fpn_4x4_1x_coco_20200219_011740.log.json) | -| R-101 | pytorch | N | N | 2x | 11.7 | - | 40.0 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/foveabox/fovea_r101_fpn_4x4_2x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/foveabox/fovea_r101_fpn_4x4_2x_coco/fovea_r101_fpn_4x4_2x_coco_20200208-02320ea4.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/foveabox/fovea_r101_fpn_4x4_2x_coco/fovea_r101_fpn_4x4_2x_coco_20200208_202059.log.json) | -| R-101 | pytorch | Y | N | 2x | 11.7 | 14.7 | 40.0 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/foveabox/fovea_align_r101_fpn_gn-head_4x4_2x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/foveabox/fovea_align_r101_fpn_gn-head_4x4_2x_coco/fovea_align_r101_fpn_gn-head_4x4_2x_coco_20200208-c39a027a.pth) | 
[log](http://download.openmmlab.com/mmdetection/v2.0/foveabox/fovea_align_r101_fpn_gn-head_4x4_2x_coco/fovea_align_r101_fpn_gn-head_4x4_2x_coco_20200208_203337.log.json) | -| R-101 | pytorch | Y | Y | 2x | 11.7 | 14.7 | 42.0 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/foveabox/fovea_align_r101_fpn_gn-head_mstrain_640-800_4x4_2x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/foveabox/fovea_align_r101_fpn_gn-head_mstrain_640-800_4x4_2x_coco/fovea_align_r101_fpn_gn-head_mstrain_640-800_4x4_2x_coco_20200208-649c5eb6.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/foveabox/fovea_align_r101_fpn_gn-head_mstrain_640-800_4x4_2x_coco/fovea_align_r101_fpn_gn-head_mstrain_640-800_4x4_2x_coco_20200208_202124.log.json) | - -[1] *1x and 2x mean the model is trained for 12 and 24 epochs, respectively.* \ -[2] *Align means utilizing deformable convolution to align the cls branch.* \ -[3] *All results are obtained with a single model and without any test time data augmentation.*\ -[4] *We use 4 GPUs for training.* - -Any pull requests or issues are welcome. - -## Citations - -Please consider citing our paper in your publications if the project helps your research. BibTeX reference is as follows. - -```latex -@article{kong2019foveabox, - title={FoveaBox: Beyond Anchor-based Object Detector}, - author={Kong, Tao and Sun, Fuchun and Liu, Huaping and Jiang, Yuning and Shi, Jianbo}, - journal={arXiv preprint arXiv:1904.03797}, - year={2019} -} -``` diff --git a/spaces/Andy1621/uniformer_image_detection/configs/hrnet/faster_rcnn_hrnetv2p_w18_1x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/hrnet/faster_rcnn_hrnetv2p_w18_1x_coco.py deleted file mode 100644 index 9907bcbf6464fb964664a318533bf9edda4e34fd..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/configs/hrnet/faster_rcnn_hrnetv2p_w18_1x_coco.py +++ /dev/null @@ -1,10 +0,0 @@ -_base_ = './faster_rcnn_hrnetv2p_w32_1x_coco.py' -# model settings -model = dict( - pretrained='open-mmlab://msra/hrnetv2_w18', - backbone=dict( - extra=dict( - stage2=dict(num_channels=(18, 36)), - stage3=dict(num_channels=(18, 36, 72)), - stage4=dict(num_channels=(18, 36, 72, 144)))), - neck=dict(type='HRFPN', in_channels=[18, 36, 72, 144], out_channels=256)) diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/ann/ann_r50-d8_512x512_160k_ade20k.py b/spaces/Andy1621/uniformer_image_segmentation/configs/ann/ann_r50-d8_512x512_160k_ade20k.py deleted file mode 100644 index ca6bb248ac867d463c274f975c884aa80a57730f..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_segmentation/configs/ann/ann_r50-d8_512x512_160k_ade20k.py +++ /dev/null @@ -1,6 +0,0 @@ -_base_ = [ - '../_base_/models/ann_r50-d8.py', '../_base_/datasets/ade20k.py', - '../_base_/default_runtime.py', '../_base_/schedules/schedule_160k.py' -] -model = dict( - decode_head=dict(num_classes=150), auxiliary_head=dict(num_classes=150)) diff --git a/spaces/Anonymous-123/ImageNet-Editing/editing_diffusion/guided_diffusion/datasets/README.md b/spaces/Anonymous-123/ImageNet-Editing/editing_diffusion/guided_diffusion/datasets/README.md deleted file mode 100644 index 336b8e83262764419aceae9c975c58bed0fbb47b..0000000000000000000000000000000000000000 --- a/spaces/Anonymous-123/ImageNet-Editing/editing_diffusion/guided_diffusion/datasets/README.md +++ /dev/null @@ -1,27 +0,0 @@ -# Downloading datasets - -This directory includes instructions and scripts for downloading ImageNet and LSUN 
bedrooms for use in this codebase. - -## Class-conditional ImageNet - -For our class-conditional models, we use the official ILSVRC2012 dataset with manual center cropping and downsampling. To obtain this dataset, navigate to [this page on image-net.org](http://www.image-net.org/challenges/LSVRC/2012/downloads) and sign in (or create an account if you do not already have one). Then click on the link reading "Training images (Task 1 & 2)". This is a 138GB tar file containing 1000 sub-tar files, one per class. - -Once the file is downloaded, extract it and look inside. You should see 1000 `.tar` files. You need to extract each of these, which may be impractical to do by hand on your operating system. To automate the process on a Unix-based system, you can `cd` into the directory and run this short shell script: - -``` -for file in *.tar; do tar xf "$file"; rm "$file"; done -``` - -This will extract and remove each tar file in turn. - -Once all of the images have been extracted, the resulting directory should be usable as a data directory (the `--data_dir` argument for the training script). The filenames should all start with WNID (class ids) followed by underscores, like `n01440764_2708.JPEG`. Conveniently (but not by accident) this is how the automated data-loader expects to discover class labels. - -## LSUN bedroom - -To download and pre-process LSUN bedroom, clone [fyu/lsun](https://github.com/fyu/lsun) on GitHub and run their download script `python3 download.py bedroom`. The result will be an "lmdb" database named like `bedroom_train_lmdb`. You can pass this to our [lsun_bedroom.py](lsun_bedroom.py) script like so: - -``` -python lsun_bedroom.py bedroom_train_lmdb lsun_train_output_dir -``` - -This creates a directory called `lsun_train_output_dir`. This directory can be passed to the training scripts via the `--data_dir` argument. 
diff --git a/spaces/AnthonyTruchetPoC/persistent-docker/scripts/run-all-precommit-checks.sh b/spaces/AnthonyTruchetPoC/persistent-docker/scripts/run-all-precommit-checks.sh deleted file mode 100644 index df8842c73870d3a896a24af301429ffd2d8e3f1a..0000000000000000000000000000000000000000 --- a/spaces/AnthonyTruchetPoC/persistent-docker/scripts/run-all-precommit-checks.sh +++ /dev/null @@ -1,2 +0,0 @@ -#!/usr/bin/env sh -poetry run pre-commit run --all-files --hook-stage=manual diff --git a/spaces/Araby/BRATArA/README.md b/spaces/Araby/BRATArA/README.md deleted file mode 100644 index 3a1778a4166c79b4afde9bd1c6beaa9e2d14f018..0000000000000000000000000000000000000000 --- a/spaces/Araby/BRATArA/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: BRATArA -emoji: 🏃 -colorFrom: purple -colorTo: red -sdk: streamlit -sdk_version: 1.27.2 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/dotenv/cli.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/dotenv/cli.py deleted file mode 100644 index 65ead46155f568a197a16b64c6335f1f28cda9a6..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/dotenv/cli.py +++ /dev/null @@ -1,199 +0,0 @@ -import json -import os -import shlex -import sys -from contextlib import contextmanager -from subprocess import Popen -from typing import Any, Dict, IO, Iterator, List - -try: - import click -except ImportError: - sys.stderr.write('It seems python-dotenv is not installed with cli option. \n' - 'Run pip install "python-dotenv[cli]" to fix this.') - sys.exit(1) - -from .main import dotenv_values, set_key, unset_key -from .version import __version__ - - -def enumerate_env(): - """ - Return a path for the ${pwd}/.env file. - - If pwd does not exist, return None. - """ - try: - cwd = os.getcwd() - except FileNotFoundError: - return None - path = os.path.join(cwd, '.env') - return path - - -@click.group() -@click.option('-f', '--file', default=enumerate_env(), - type=click.Path(file_okay=True), - help="Location of the .env file, defaults to .env file in current working directory.") -@click.option('-q', '--quote', default='always', - type=click.Choice(['always', 'never', 'auto']), - help="Whether to quote or not the variable values. Default mode is always. This does not affect parsing.") -@click.option('-e', '--export', default=False, - type=click.BOOL, - help="Whether to write the dot file as an executable bash script.") -@click.version_option(version=__version__) -@click.pass_context -def cli(ctx: click.Context, file: Any, quote: Any, export: Any) -> None: - """This script is used to set, get or unset values from a .env file.""" - ctx.obj = {'QUOTE': quote, 'EXPORT': export, 'FILE': file} - - -@contextmanager -def stream_file(path: os.PathLike) -> Iterator[IO[str]]: - """ - Open a file and yield the corresponding (decoded) stream. - - Exits with error code 2 if the file cannot be opened. - """ - - try: - with open(path) as stream: - yield stream - except OSError as exc: - print(f"Error opening env file: {exc}", file=sys.stderr) - exit(2) - - -@cli.command() -@click.pass_context -@click.option('--format', default='simple', - type=click.Choice(['simple', 'json', 'shell', 'export']), - help="The format in which to display the list. 
Default format is simple, " - "which displays name=value without quotes.") -def list(ctx: click.Context, format: bool) -> None: - """Display all the stored key/value.""" - file = ctx.obj['FILE'] - - with stream_file(file) as stream: - values = dotenv_values(stream=stream) - - if format == 'json': - click.echo(json.dumps(values, indent=2, sort_keys=True)) - else: - prefix = 'export ' if format == 'export' else '' - for k in sorted(values): - v = values[k] - if v is not None: - if format in ('export', 'shell'): - v = shlex.quote(v) - click.echo(f'{prefix}{k}={v}') - - -@cli.command() -@click.pass_context -@click.argument('key', required=True) -@click.argument('value', required=True) -def set(ctx: click.Context, key: Any, value: Any) -> None: - """Store the given key/value.""" - file = ctx.obj['FILE'] - quote = ctx.obj['QUOTE'] - export = ctx.obj['EXPORT'] - success, key, value = set_key(file, key, value, quote, export) - if success: - click.echo(f'{key}={value}') - else: - exit(1) - - -@cli.command() -@click.pass_context -@click.argument('key', required=True) -def get(ctx: click.Context, key: Any) -> None: - """Retrieve the value for the given key.""" - file = ctx.obj['FILE'] - - with stream_file(file) as stream: - values = dotenv_values(stream=stream) - - stored_value = values.get(key) - if stored_value: - click.echo(stored_value) - else: - exit(1) - - -@cli.command() -@click.pass_context -@click.argument('key', required=True) -def unset(ctx: click.Context, key: Any) -> None: - """Removes the given key.""" - file = ctx.obj['FILE'] - quote = ctx.obj['QUOTE'] - success, key = unset_key(file, key, quote) - if success: - click.echo(f"Successfully removed {key}") - else: - exit(1) - - -@cli.command(context_settings={'ignore_unknown_options': True}) -@click.pass_context -@click.option( - "--override/--no-override", - default=True, - help="Override variables from the environment file with those from the .env file.", -) -@click.argument('commandline', nargs=-1, type=click.UNPROCESSED) -def run(ctx: click.Context, override: bool, commandline: List[str]) -> None: - """Run command with environment variables present.""" - file = ctx.obj['FILE'] - if not os.path.isfile(file): - raise click.BadParameter( - f'Invalid value for \'-f\' "{file}" does not exist.', - ctx=ctx - ) - dotenv_as_dict = { - k: v - for (k, v) in dotenv_values(file).items() - if v is not None and (override or k not in os.environ) - } - - if not commandline: - click.echo('No command given.') - exit(1) - ret = run_command(commandline, dotenv_as_dict) - exit(ret) - - -def run_command(command: List[str], env: Dict[str, str]) -> int: - """Run command in sub process. - - Runs the command in a sub process with the variables from `env` - added in the current environment variables. 
- - Parameters - ---------- - command: List[str] - The command and it's parameters - env: Dict - The additional environment variables - - Returns - ------- - int - The return code of the command - - """ - # copy the current environment variables and add the vales from - # `env` - cmd_env = os.environ.copy() - cmd_env.update(env) - - p = Popen(command, - universal_newlines=True, - bufsize=0, - shell=False, - env=cmd_env) - _, _ = p.communicate() - - return p.returncode diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/commands/hash.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/commands/hash.py deleted file mode 100644 index 042dac813e74b8187c3754cb9a937c7f7183e331..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/commands/hash.py +++ /dev/null @@ -1,59 +0,0 @@ -import hashlib -import logging -import sys -from optparse import Values -from typing import List - -from pip._internal.cli.base_command import Command -from pip._internal.cli.status_codes import ERROR, SUCCESS -from pip._internal.utils.hashes import FAVORITE_HASH, STRONG_HASHES -from pip._internal.utils.misc import read_chunks, write_output - -logger = logging.getLogger(__name__) - - -class HashCommand(Command): - """ - Compute a hash of a local package archive. - - These can be used with --hash in a requirements file to do repeatable - installs. - """ - - usage = "%prog [options] ..." - ignore_require_venv = True - - def add_options(self) -> None: - self.cmd_opts.add_option( - "-a", - "--algorithm", - dest="algorithm", - choices=STRONG_HASHES, - action="store", - default=FAVORITE_HASH, - help="The hash algorithm to use: one of {}".format( - ", ".join(STRONG_HASHES) - ), - ) - self.parser.insert_option_group(0, self.cmd_opts) - - def run(self, options: Values, args: List[str]) -> int: - if not args: - self.parser.print_usage(sys.stderr) - return ERROR - - algorithm = options.algorithm - for path in args: - write_output( - "%s:\n--hash=%s:%s", path, algorithm, _hash_of_file(path, algorithm) - ) - return SUCCESS - - -def _hash_of_file(path: str, algorithm: str) -> str: - """Return the hash digest of a file.""" - with open(path, "rb") as archive: - hash = hashlib.new(algorithm) - for chunk in read_chunks(archive): - hash.update(chunk) - return hash.hexdigest() diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/containers.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/containers.py deleted file mode 100644 index e29cf368991ccb083b67cda8133e4635defbfe53..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/containers.py +++ /dev/null @@ -1,167 +0,0 @@ -from itertools import zip_longest -from typing import ( - Iterator, - Iterable, - List, - Optional, - Union, - overload, - TypeVar, - TYPE_CHECKING, -) - -if TYPE_CHECKING: - from .console import ( - Console, - ConsoleOptions, - JustifyMethod, - OverflowMethod, - RenderResult, - RenderableType, - ) - from .text import Text - -from .cells import cell_len -from .measure import Measurement - -T = TypeVar("T") - - -class Renderables: - """A list subclass which renders its contents to the console.""" - - def __init__( - self, renderables: Optional[Iterable["RenderableType"]] = None - ) -> None: - self._renderables: List["RenderableType"] 
= ( - list(renderables) if renderables is not None else [] - ) - - def __rich_console__( - self, console: "Console", options: "ConsoleOptions" - ) -> "RenderResult": - """Console render method to insert line-breaks.""" - yield from self._renderables - - def __rich_measure__( - self, console: "Console", options: "ConsoleOptions" - ) -> "Measurement": - dimensions = [ - Measurement.get(console, options, renderable) - for renderable in self._renderables - ] - if not dimensions: - return Measurement(1, 1) - _min = max(dimension.minimum for dimension in dimensions) - _max = max(dimension.maximum for dimension in dimensions) - return Measurement(_min, _max) - - def append(self, renderable: "RenderableType") -> None: - self._renderables.append(renderable) - - def __iter__(self) -> Iterable["RenderableType"]: - return iter(self._renderables) - - -class Lines: - """A list subclass which can render to the console.""" - - def __init__(self, lines: Iterable["Text"] = ()) -> None: - self._lines: List["Text"] = list(lines) - - def __repr__(self) -> str: - return f"Lines({self._lines!r})" - - def __iter__(self) -> Iterator["Text"]: - return iter(self._lines) - - @overload - def __getitem__(self, index: int) -> "Text": - ... - - @overload - def __getitem__(self, index: slice) -> List["Text"]: - ... - - def __getitem__(self, index: Union[slice, int]) -> Union["Text", List["Text"]]: - return self._lines[index] - - def __setitem__(self, index: int, value: "Text") -> "Lines": - self._lines[index] = value - return self - - def __len__(self) -> int: - return self._lines.__len__() - - def __rich_console__( - self, console: "Console", options: "ConsoleOptions" - ) -> "RenderResult": - """Console render method to insert line-breaks.""" - yield from self._lines - - def append(self, line: "Text") -> None: - self._lines.append(line) - - def extend(self, lines: Iterable["Text"]) -> None: - self._lines.extend(lines) - - def pop(self, index: int = -1) -> "Text": - return self._lines.pop(index) - - def justify( - self, - console: "Console", - width: int, - justify: "JustifyMethod" = "left", - overflow: "OverflowMethod" = "fold", - ) -> None: - """Justify and overflow text to a given width. - - Args: - console (Console): Console instance. - width (int): Number of characters per line. - justify (str, optional): Default justify method for text: "left", "center", "full" or "right". Defaults to "left". - overflow (str, optional): Default overflow for text: "crop", "fold", or "ellipsis". Defaults to "fold". 
- - """ - from .text import Text - - if justify == "left": - for line in self._lines: - line.truncate(width, overflow=overflow, pad=True) - elif justify == "center": - for line in self._lines: - line.rstrip() - line.truncate(width, overflow=overflow) - line.pad_left((width - cell_len(line.plain)) // 2) - line.pad_right(width - cell_len(line.plain)) - elif justify == "right": - for line in self._lines: - line.rstrip() - line.truncate(width, overflow=overflow) - line.pad_left(width - cell_len(line.plain)) - elif justify == "full": - for line_index, line in enumerate(self._lines): - if line_index == len(self._lines) - 1: - break - words = line.split(" ") - words_size = sum(cell_len(word.plain) for word in words) - num_spaces = len(words) - 1 - spaces = [1 for _ in range(num_spaces)] - index = 0 - if spaces: - while words_size + num_spaces < width: - spaces[len(spaces) - index - 1] += 1 - num_spaces += 1 - index = (index + 1) % len(spaces) - tokens: List[Text] = [] - for index, (word, next_word) in enumerate( - zip_longest(words, words[1:]) - ): - tokens.append(word) - if index < len(spaces): - style = word.get_style_at_offset(console, -1) - next_style = next_word.get_style_at_offset(console, 0) - space_style = style if style == next_style else line.style - tokens.append(Text(" " * spaces[index], style=space_style)) - self[line_index] = Text("").join(tokens) diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_vendor/jaraco/functools.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_vendor/jaraco/functools.py deleted file mode 100644 index bbd8b29f9c012d62a37393476a5e393405d2918c..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_vendor/jaraco/functools.py +++ /dev/null @@ -1,525 +0,0 @@ -import functools -import time -import inspect -import collections -import types -import itertools - -import setuptools.extern.more_itertools - -from typing import Callable, TypeVar - - -CallableT = TypeVar("CallableT", bound=Callable[..., object]) - - -def compose(*funcs): - """ - Compose any number of unary functions into a single unary function. - - >>> import textwrap - >>> expected = str.strip(textwrap.dedent(compose.__doc__)) - >>> strip_and_dedent = compose(str.strip, textwrap.dedent) - >>> strip_and_dedent(compose.__doc__) == expected - True - - Compose also allows the innermost function to take arbitrary arguments. - - >>> round_three = lambda x: round(x, ndigits=3) - >>> f = compose(round_three, int.__truediv__) - >>> [f(3*x, x+1) for x in range(1,10)] - [1.5, 2.0, 2.25, 2.4, 2.5, 2.571, 2.625, 2.667, 2.7] - """ - - def compose_two(f1, f2): - return lambda *args, **kwargs: f1(f2(*args, **kwargs)) - - return functools.reduce(compose_two, funcs) - - -def method_caller(method_name, *args, **kwargs): - """ - Return a function that will call a named method on the - target object with optional positional and keyword - arguments. - - >>> lower = method_caller('lower') - >>> lower('MyString') - 'mystring' - """ - - def call_method(target): - func = getattr(target, method_name) - return func(*args, **kwargs) - - return call_method - - -def once(func): - """ - Decorate func so it's only ever called the first time. - - This decorator can ensure that an expensive or non-idempotent function - will not be expensive on subsequent calls and is idempotent. 
- - >>> add_three = once(lambda a: a+3) - >>> add_three(3) - 6 - >>> add_three(9) - 6 - >>> add_three('12') - 6 - - To reset the stored value, simply clear the property ``saved_result``. - - >>> del add_three.saved_result - >>> add_three(9) - 12 - >>> add_three(8) - 12 - - Or invoke 'reset()' on it. - - >>> add_three.reset() - >>> add_three(-3) - 0 - >>> add_three(0) - 0 - """ - - @functools.wraps(func) - def wrapper(*args, **kwargs): - if not hasattr(wrapper, 'saved_result'): - wrapper.saved_result = func(*args, **kwargs) - return wrapper.saved_result - - wrapper.reset = lambda: vars(wrapper).__delitem__('saved_result') - return wrapper - - -def method_cache( - method: CallableT, - cache_wrapper: Callable[ - [CallableT], CallableT - ] = functools.lru_cache(), # type: ignore[assignment] -) -> CallableT: - """ - Wrap lru_cache to support storing the cache data in the object instances. - - Abstracts the common paradigm where the method explicitly saves an - underscore-prefixed protected property on first call and returns that - subsequently. - - >>> class MyClass: - ... calls = 0 - ... - ... @method_cache - ... def method(self, value): - ... self.calls += 1 - ... return value - - >>> a = MyClass() - >>> a.method(3) - 3 - >>> for x in range(75): - ... res = a.method(x) - >>> a.calls - 75 - - Note that the apparent behavior will be exactly like that of lru_cache - except that the cache is stored on each instance, so values in one - instance will not flush values from another, and when an instance is - deleted, so are the cached values for that instance. - - >>> b = MyClass() - >>> for x in range(35): - ... res = b.method(x) - >>> b.calls - 35 - >>> a.method(0) - 0 - >>> a.calls - 75 - - Note that if method had been decorated with ``functools.lru_cache()``, - a.calls would have been 76 (due to the cached value of 0 having been - flushed by the 'b' instance). - - Clear the cache with ``.cache_clear()`` - - >>> a.method.cache_clear() - - Same for a method that hasn't yet been called. - - >>> c = MyClass() - >>> c.method.cache_clear() - - Another cache wrapper may be supplied: - - >>> cache = functools.lru_cache(maxsize=2) - >>> MyClass.method2 = method_cache(lambda self: 3, cache_wrapper=cache) - >>> a = MyClass() - >>> a.method2() - 3 - - Caution - do not subsequently wrap the method with another decorator, such - as ``@property``, which changes the semantics of the function. - - See also - http://code.activestate.com/recipes/577452-a-memoize-decorator-for-instance-methods/ - for another implementation and additional justification. - """ - - def wrapper(self: object, *args: object, **kwargs: object) -> object: - # it's the first call, replace the method with a cached, bound method - bound_method: CallableT = types.MethodType( # type: ignore[assignment] - method, self - ) - cached_method = cache_wrapper(bound_method) - setattr(self, method.__name__, cached_method) - return cached_method(*args, **kwargs) - - # Support cache clear even before cache has been created. - wrapper.cache_clear = lambda: None # type: ignore[attr-defined] - - return ( # type: ignore[return-value] - _special_method_cache(method, cache_wrapper) or wrapper - ) - - -def _special_method_cache(method, cache_wrapper): - """ - Because Python treats special methods differently, it's not - possible to use instance attributes to implement the cached - methods. - - Instead, install the wrapper method under a different name - and return a simple proxy to that wrapper. 
- - https://github.com/jaraco/jaraco.functools/issues/5 - """ - name = method.__name__ - special_names = '__getattr__', '__getitem__' - if name not in special_names: - return - - wrapper_name = '__cached' + name - - def proxy(self, *args, **kwargs): - if wrapper_name not in vars(self): - bound = types.MethodType(method, self) - cache = cache_wrapper(bound) - setattr(self, wrapper_name, cache) - else: - cache = getattr(self, wrapper_name) - return cache(*args, **kwargs) - - return proxy - - -def apply(transform): - """ - Decorate a function with a transform function that is - invoked on results returned from the decorated function. - - >>> @apply(reversed) - ... def get_numbers(start): - ... "doc for get_numbers" - ... return range(start, start+3) - >>> list(get_numbers(4)) - [6, 5, 4] - >>> get_numbers.__doc__ - 'doc for get_numbers' - """ - - def wrap(func): - return functools.wraps(func)(compose(transform, func)) - - return wrap - - -def result_invoke(action): - r""" - Decorate a function with an action function that is - invoked on the results returned from the decorated - function (for its side-effect), then return the original - result. - - >>> @result_invoke(print) - ... def add_two(a, b): - ... return a + b - >>> x = add_two(2, 3) - 5 - >>> x - 5 - """ - - def wrap(func): - @functools.wraps(func) - def wrapper(*args, **kwargs): - result = func(*args, **kwargs) - action(result) - return result - - return wrapper - - return wrap - - -def call_aside(f, *args, **kwargs): - """ - Call a function for its side effect after initialization. - - >>> @call_aside - ... def func(): print("called") - called - >>> func() - called - - Use functools.partial to pass parameters to the initial call - - >>> @functools.partial(call_aside, name='bingo') - ... def func(name): print("called with", name) - called with bingo - """ - f(*args, **kwargs) - return f - - -class Throttler: - """ - Rate-limit a function (or other callable) - """ - - def __init__(self, func, max_rate=float('Inf')): - if isinstance(func, Throttler): - func = func.func - self.func = func - self.max_rate = max_rate - self.reset() - - def reset(self): - self.last_called = 0 - - def __call__(self, *args, **kwargs): - self._wait() - return self.func(*args, **kwargs) - - def _wait(self): - "ensure at least 1/max_rate seconds from last call" - elapsed = time.time() - self.last_called - must_wait = 1 / self.max_rate - elapsed - time.sleep(max(0, must_wait)) - self.last_called = time.time() - - def __get__(self, obj, type=None): - return first_invoke(self._wait, functools.partial(self.func, obj)) - - -def first_invoke(func1, func2): - """ - Return a function that when invoked will invoke func1 without - any parameters (for its side-effect) and then invoke func2 - with whatever parameters were passed, returning its result. - """ - - def wrapper(*args, **kwargs): - func1() - return func2(*args, **kwargs) - - return wrapper - - -def retry_call(func, cleanup=lambda: None, retries=0, trap=()): - """ - Given a callable func, trap the indicated exceptions - for up to 'retries' times, invoking cleanup on the - exception. On the final attempt, allow any exceptions - to propagate. - """ - attempts = itertools.count() if retries == float('inf') else range(retries) - for attempt in attempts: - try: - return func() - except trap: - cleanup() - - return func() - - -def retry(*r_args, **r_kwargs): - """ - Decorator wrapper for retry_call. Accepts arguments to retry_call - except func and then returns a decorator for the decorated function. 
- - Ex: - - >>> @retry(retries=3) - ... def my_func(a, b): - ... "this is my funk" - ... print(a, b) - >>> my_func.__doc__ - 'this is my funk' - """ - - def decorate(func): - @functools.wraps(func) - def wrapper(*f_args, **f_kwargs): - bound = functools.partial(func, *f_args, **f_kwargs) - return retry_call(bound, *r_args, **r_kwargs) - - return wrapper - - return decorate - - -def print_yielded(func): - """ - Convert a generator into a function that prints all yielded elements - - >>> @print_yielded - ... def x(): - ... yield 3; yield None - >>> x() - 3 - None - """ - print_all = functools.partial(map, print) - print_results = compose(more_itertools.consume, print_all, func) - return functools.wraps(func)(print_results) - - -def pass_none(func): - """ - Wrap func so it's not called if its first param is None - - >>> print_text = pass_none(print) - >>> print_text('text') - text - >>> print_text(None) - """ - - @functools.wraps(func) - def wrapper(param, *args, **kwargs): - if param is not None: - return func(param, *args, **kwargs) - - return wrapper - - -def assign_params(func, namespace): - """ - Assign parameters from namespace where func solicits. - - >>> def func(x, y=3): - ... print(x, y) - >>> assigned = assign_params(func, dict(x=2, z=4)) - >>> assigned() - 2 3 - - The usual errors are raised if a function doesn't receive - its required parameters: - - >>> assigned = assign_params(func, dict(y=3, z=4)) - >>> assigned() - Traceback (most recent call last): - TypeError: func() ...argument... - - It even works on methods: - - >>> class Handler: - ... def meth(self, arg): - ... print(arg) - >>> assign_params(Handler().meth, dict(arg='crystal', foo='clear'))() - crystal - """ - sig = inspect.signature(func) - params = sig.parameters.keys() - call_ns = {k: namespace[k] for k in params if k in namespace} - return functools.partial(func, **call_ns) - - -def save_method_args(method): - """ - Wrap a method such that when it is called, the args and kwargs are - saved on the method. - - >>> class MyClass: - ... @save_method_args - ... def method(self, a, b): - ... print(a, b) - >>> my_ob = MyClass() - >>> my_ob.method(1, 2) - 1 2 - >>> my_ob._saved_method.args - (1, 2) - >>> my_ob._saved_method.kwargs - {} - >>> my_ob.method(a=3, b='foo') - 3 foo - >>> my_ob._saved_method.args - () - >>> my_ob._saved_method.kwargs == dict(a=3, b='foo') - True - - The arguments are stored on the instance, allowing for - different instance to save different args. - - >>> your_ob = MyClass() - >>> your_ob.method({str('x'): 3}, b=[4]) - {'x': 3} [4] - >>> your_ob._saved_method.args - ({'x': 3},) - >>> my_ob._saved_method.args - () - """ - args_and_kwargs = collections.namedtuple('args_and_kwargs', 'args kwargs') - - @functools.wraps(method) - def wrapper(self, *args, **kwargs): - attr_name = '_saved_' + method.__name__ - attr = args_and_kwargs(args, kwargs) - setattr(self, attr_name, attr) - return method(self, *args, **kwargs) - - return wrapper - - -def except_(*exceptions, replace=None, use=None): - """ - Replace the indicated exceptions, if raised, with the indicated - literal replacement or evaluated expression (if present). - - >>> safe_int = except_(ValueError)(int) - >>> safe_int('five') - >>> safe_int('5') - 5 - - Specify a literal replacement with ``replace``. - - >>> safe_int_r = except_(ValueError, replace=0)(int) - >>> safe_int_r('five') - 0 - - Provide an expression to ``use`` to pass through particular parameters. 
- - >>> safe_int_pt = except_(ValueError, use='args[0]')(int) - >>> safe_int_pt('five') - 'five' - - """ - - def decorate(func): - @functools.wraps(func) - def wrapper(*args, **kwargs): - try: - return func(*args, **kwargs) - except exceptions: - try: - return eval(use) - except TypeError: - return replace - - return wrapper - - return decorate diff --git a/spaces/Audio-AGI/AudioSep/models/CLAP/training/lp_train.py b/spaces/Audio-AGI/AudioSep/models/CLAP/training/lp_train.py deleted file mode 100644 index 24a19bacd0a4b789415cfccbce1f8bc99bc493ed..0000000000000000000000000000000000000000 --- a/spaces/Audio-AGI/AudioSep/models/CLAP/training/lp_train.py +++ /dev/null @@ -1,301 +0,0 @@ -import json -import logging -import math -import os -import time -from contextlib import suppress - -import numpy as np -import torch -import torch.nn.functional as F - -try: - import wandb -except ImportError: - wandb = None - -from open_clip import LPLoss, LPMetrics, lp_gather_features -from open_clip.utils import do_mixup, get_mix_lambda -from .distributed import is_master -from .zero_shot import zero_shot_eval - - -class AverageMeter(object): - """Computes and stores the average and current value""" - - def __init__(self): - self.reset() - - def reset(self): - self.val = 0 - self.avg = 0 - self.sum = 0 - self.count = 0 - - def update(self, val, n=1): - self.val = val - self.sum += val * n - self.count += n - self.avg = self.sum / self.count - - -def unwrap_model(model): - if hasattr(model, "module"): - return model.module - else: - return model - - -def train_one_epoch( - model, - data, - epoch, - optimizer, - scaler, - scheduler, - args, - tb_writer=None, - extra_suffix="", -): - device = torch.device(args.device) - autocast = torch.cuda.amp.autocast if args.precision == "amp" else suppress - model.train() - loss = LPLoss(args.lp_loss) - - dataloader, sampler = data["train"].dataloader, data["train"].sampler - if args.distributed and sampler is not None: - sampler.set_epoch(epoch) - num_batches_per_epoch = dataloader.num_batches - sample_digits = math.ceil(math.log(dataloader.num_samples + 1, 10)) - - # for toy dataset - if args.dataset_type == "toy": - dataloader.dataset.generate_queue() - - loss_m = AverageMeter() - batch_time_m = AverageMeter() - data_time_m = AverageMeter() - end = time.time() - - for i, batch in enumerate(dataloader): - step = num_batches_per_epoch * epoch + i - - if isinstance(scheduler, dict): - for s in scheduler.values(): - s(step) - else: - scheduler(step) - - audio = batch # contains mel_spec, wavform, and longer list - class_label = batch["class_label"] - # audio = audio.to(device=device, non_blocking=True) - class_label = class_label.to(device=device, non_blocking=True) - - if args.mixup: - # https://github.com/RetroCirce/HTS-Audio-Transformer/blob/main/utils.py#L146 - mix_lambda = torch.from_numpy( - get_mix_lambda(0.5, len(audio["waveform"])) - ).to(device) - class_label = do_mixup(class_label, mix_lambda) - else: - mix_lambda = None - - data_time_m.update(time.time() - end) - if isinstance(optimizer, dict): - for o_ in optimizer.values(): - o_.zero_grad() - else: - optimizer.zero_grad() - - with autocast(): - pred = model(audio, mix_lambda=mix_lambda, device=device) - total_loss = loss(pred, class_label) - - if isinstance(optimizer, dict): - if scaler is not None: - scaler.scale(total_loss).backward() - for o_ in optimizer.values(): - if args.horovod: - o_.synchronize() - scaler.unscale_(o_) - with o_.skip_synchronize(): - scaler.step(o_) - else: - scaler.step(o_) - 
scaler.update() - else: - total_loss.backward() - for o_ in optimizer.values(): - o_.step() - else: - if scaler is not None: - scaler.scale(total_loss).backward() - if args.horovod: - optimizer.synchronize() - scaler.unscale_(optimizer) - with optimizer.skip_synchronize(): - scaler.step(optimizer) - else: - scaler.step(optimizer) - scaler.update() - else: - total_loss.backward() - optimizer.step() - - # Note: we clamp to 4.6052 = ln(100), as in the original paper. - with torch.no_grad(): - unwrap_model(model).clap_model.logit_scale_a.clamp_(0, math.log(100)) - unwrap_model(model).clap_model.logit_scale_t.clamp_(0, math.log(100)) - - batch_time_m.update(time.time() - end) - end = time.time() - batch_count = i + 1 - - if is_master(args) and (i % 100 == 0 or batch_count == num_batches_per_epoch): - if isinstance(audio, dict): - batch_size = len(audio["waveform"]) - else: - batch_size = len(audio) - num_samples = batch_count * batch_size * args.world_size - samples_per_epoch = dataloader.num_samples - percent_complete = 100.0 * batch_count / num_batches_per_epoch - - # NOTE loss is coarsely sampled, just master node and per log update - loss_m.update(total_loss.item(), batch_size) - if isinstance(optimizer, dict): - logging.info( - f"Train Epoch: {epoch} [{num_samples:>{sample_digits}}/{samples_per_epoch} ({percent_complete:.0f}%)] " - f"Loss: {loss_m.val:#.5g} ({loss_m.avg:#.4g}) " - f"Data (t): {data_time_m.avg:.3f} " - f"Batch (t): {batch_time_m.avg:.3f} " - f"LR: {[o_.param_groups[0]['lr'] for o_ in optimizer.values()]}" - ) - log_data = { - "loss": loss_m.val, - "data_time": data_time_m.val, - "batch_time": batch_time_m.val, - "lr": [o_.param_groups[0]["lr"] for o_ in optimizer.values()], - } - else: - logging.info( - f"Train Epoch: {epoch} [{num_samples:>{sample_digits}}/{samples_per_epoch} ({percent_complete:.0f}%)] " - f"Loss: {loss_m.val:#.5g} ({loss_m.avg:#.4g}) " - f"Data (t): {data_time_m.avg:.3f} " - f"Batch (t): {batch_time_m.avg:.3f} " - f"LR: {optimizer.param_groups[0]['lr']:5f} " - ) - - # Save train loss / etc. Using non avg meter values as loggers have their own smoothing - log_data = { - "loss": loss_m.val, - "data_time": data_time_m.val, - "batch_time": batch_time_m.val, - "lr": optimizer.param_groups[0]["lr"], - } - for name, val in log_data.items(): - name = f"train{extra_suffix}/{name}" - if tb_writer is not None: - tb_writer.add_scalar(name, val, step) - if args.wandb: - assert wandb is not None, "Please install wandb." 
- wandb.log({name: val, "step": step}) - - # resetting batch / data time meters per log window - batch_time_m.reset() - data_time_m.reset() - # end for - - -def evaluate(model, data, epoch, args, tb_writer=None, extra_suffix=""): - metrics = {} - if not args.parallel_eval: - if not is_master(args): - return metrics - device = torch.device(args.device) - model.eval() - - # CHANGE - # zero_shot_metrics = zero_shot_eval(model, data, epoch, args) - # metrics.update(zero_shot_metrics) - if is_master(args): - print("Evaluating...") - metric_names = args.lp_metrics.split(",") - eval_tool = LPMetrics(metric_names=metric_names) - - autocast = torch.cuda.amp.autocast if args.precision == "amp" else suppress - if "val" in data and ( - args.val_frequency - and ((epoch % args.val_frequency) == 0 or epoch == args.epochs) - ): - if args.parallel_eval: - dataloader, sampler = data["val"].dataloader, data["val"].sampler - if args.distributed and sampler is not None: - sampler.set_epoch(epoch) - samples_per_val = dataloader.num_samples - else: - dataloader = data["val"].dataloader - num_samples = 0 - samples_per_val = dataloader.num_samples - - eval_info = {"pred": [], "target": []} - with torch.no_grad(): - for i, batch in enumerate(dataloader): - audio = batch # contains mel_spec, wavform, and longer list - class_label = batch["class_label"] - - # audio = audio.to(device=device, non_blocking=True) - class_label = class_label.to(device=device, non_blocking=True) - - with autocast(): - pred = model(audio, device=device) - if args.parallel_eval: - pred, class_label = lp_gather_features( - pred, class_label, args.world_size, args.horovod - ) - eval_info["pred"].append(pred) - eval_info["target"].append(class_label) - - num_samples += class_label.shape[0] - - if (i % 100) == 0: # and i != 0: - logging.info( - f"Eval Epoch: {epoch} [{num_samples} / {samples_per_val}]" - ) - - if is_master(args): - eval_info["pred"] = torch.cat(eval_info["pred"], 0).cpu() - eval_info["target"] = torch.cat(eval_info["target"], 0).cpu() - metric_dict = eval_tool.evaluate_mertics( - eval_info["pred"], eval_info["target"] - ) - metrics.update(metric_dict) - if "epoch" not in metrics.keys(): - metrics.update({"epoch": epoch}) - - if is_master(args): - if not metrics: - return metrics - - logging.info( - f"Eval Epoch: {epoch} " - + "\n".join( - ["\t".join([f"{m}: {round(metrics[m], 4):.4f}"]) for m in metrics] - ) - ) - if args.save_logs: - for name, val in metrics.items(): - if tb_writer is not None: - tb_writer.add_scalar(f"val{extra_suffix}/{name}", val, epoch) - - with open(os.path.join(args.checkpoint_path, "results.jsonl"), "a+") as f: - f.write(json.dumps(metrics)) - f.write("\n") - - if args.wandb: - assert wandb is not None, "Please install wandb." - for name, val in metrics.items(): - wandb.log({f"val{extra_suffix}/{name}": val, "epoch": epoch}) - - return metrics - else: - return metrics diff --git a/spaces/Audio-AGI/WavJourney/VoiceParser/customtokenizer.py b/spaces/Audio-AGI/WavJourney/VoiceParser/customtokenizer.py deleted file mode 100644 index aa2e7d49bab149dfe3cb43db5502a4c5b40821c1..0000000000000000000000000000000000000000 --- a/spaces/Audio-AGI/WavJourney/VoiceParser/customtokenizer.py +++ /dev/null @@ -1,202 +0,0 @@ -""" -Custom tokenizer model. 
-Author: https://www.github.com/gitmylo/ -License: MIT -""" - -import json -import os.path -from zipfile import ZipFile -from typing import Union - - -import numpy -import torch -from torch import nn, optim -from torch.serialization import MAP_LOCATION - - -class CustomTokenizer(nn.Module): - def __init__(self, hidden_size=1024, input_size=768, output_size=10000, version=0): - super(CustomTokenizer, self).__init__() - next_size = input_size - if version == 0: - self.lstm = nn.LSTM(input_size, hidden_size, 2, batch_first=True) - next_size = hidden_size - if version == 1: - self.lstm = nn.LSTM(input_size, hidden_size, 2, batch_first=True) - self.intermediate = nn.Linear(hidden_size, 4096) - next_size = 4096 - - self.fc = nn.Linear(next_size, output_size) - self.softmax = nn.LogSoftmax(dim=1) - self.optimizer: optim.Optimizer = None - self.lossfunc = nn.CrossEntropyLoss() - self.input_size = input_size - self.hidden_size = hidden_size - self.output_size = output_size - self.version = version - - def forward(self, x): - x, _ = self.lstm(x) - if self.version == 1: - x = self.intermediate(x) - x = self.fc(x) - x = self.softmax(x) - return x - - @torch.no_grad() - def get_token(self, x): - """ - Used to get the token for the first - :param x: An array with shape (N, input_size) where N is a whole number greater or equal to 1, and input_size is the input size used when creating the model. - :return: An array with shape (N,) where N is the same as N from the input. Every number in the array is a whole number in range 0...output_size - 1 where output_size is the output size used when creating the model. - """ - return torch.argmax(self(x), dim=1) - - def prepare_training(self): - self.optimizer = optim.Adam(self.parameters(), 0.001) - - def train_step(self, x_train, y_train, log_loss=False): - # y_train = y_train[:-1] - # y_train = y_train[1:] - - optimizer = self.optimizer - lossfunc = self.lossfunc - # Zero the gradients - self.zero_grad() - - # Forward pass - y_pred = self(x_train) - - y_train_len = len(y_train) - y_pred_len = y_pred.shape[0] - - if y_train_len > y_pred_len: - diff = y_train_len - y_pred_len - y_train = y_train[diff:] - elif y_train_len < y_pred_len: - diff = y_pred_len - y_train_len - y_pred = y_pred[:-diff, :] - - y_train_hot = torch.zeros(len(y_train), self.output_size) - y_train_hot[range(len(y_train)), y_train] = 1 - y_train_hot = y_train_hot.to('cuda') - - # Calculate the loss - loss = lossfunc(y_pred, y_train_hot) - - # Print loss - if log_loss: - print('Loss', loss.item()) - - # Backward pass - loss.backward() - - # Update the weights - optimizer.step() - - def save(self, path): - info_path = '.'.join(os.path.basename(path).split('.')[:-1]) + '/.info' - torch.save(self.state_dict(), path) - data_from_model = Data(self.input_size, self.hidden_size, self.output_size, self.version) - with ZipFile(path, 'a') as model_zip: - model_zip.writestr(info_path, data_from_model.save()) - model_zip.close() - - @staticmethod - def load_from_checkpoint(path, map_location: MAP_LOCATION = None): - old = True - with ZipFile(path) as model_zip: - filesMatch = [file for file in model_zip.namelist() if file.endswith('/.info')] - file = filesMatch[0] if filesMatch else None - if file: - old = False - data_from_model = Data.load(model_zip.read(file).decode('utf-8')) - model_zip.close() - if old: - model = CustomTokenizer() - else: - model = CustomTokenizer(data_from_model.hidden_size, data_from_model.input_size, data_from_model.output_size, data_from_model.version) - 
model.load_state_dict(torch.load(path, map_location=map_location)) - if map_location: - model = model.to(map_location) - return model - - - -class Data: - input_size: int - hidden_size: int - output_size: int - version: int - - def __init__(self, input_size=768, hidden_size=1024, output_size=10000, version=0): - self.input_size = input_size - self.hidden_size = hidden_size - self.output_size = output_size - self.version = version - - @staticmethod - def load(string): - data = json.loads(string) - return Data(data['input_size'], data['hidden_size'], data['output_size'], data['version']) - - def save(self): - data = { - 'input_size': self.input_size, - 'hidden_size': self.hidden_size, - 'output_size': self.output_size, - 'version': self.version, - } - return json.dumps(data) - - -def auto_train(data_path, save_path='model.pth', lload_model: Union[str, None] = None, save_epochs=1): - data_x, data_y = {}, {} - - if load_model and os.path.isfile(load_model): - print('Loading model from', load_model) - model_training = CustomTokenizer.load_from_checkpoint(load_model, 'cuda') - else: - print('Creating new model.') - model_training = CustomTokenizer(version=1).to('cuda') - save_path = os.path.join(data_path, save_path) - base_save_path = '.'.join(save_path.split('.')[:-1]) - - sem_string = '_semantic.npy' - feat_string = '_semantic_features.npy' - - ready = os.path.join(data_path, 'ready') - for input_file in os.listdir(ready): - full_path = os.path.join(ready, input_file) - try: - prefix = input_file.split("_")[0] - number = int(prefix) - except ValueError as e: - raise e - if input_file.endswith(sem_string): - data_y[number] = numpy.load(full_path) - elif input_file.endswith(feat_string): - data_x[number] = numpy.load(full_path) - - model_training.prepare_training() - epoch = 1 - - while 1: - for i in range(save_epochs): - j = 0 - for i in range(max(len(data_x), len(data_y))): - x = data_x.get(i) - y = data_y.get(i) - if x is None or y is None: - print(f'The training data does not match. key={i}') - continue - model_training.train_step(torch.tensor(x).to('cuda'), torch.tensor(y).to('cuda'), j % 50 == 0) # Print loss every 50 steps - j += 1 - save_p = save_path - save_p_2 = f'{base_save_path}_epoch_{epoch}.pth' - model_training.save(save_p) - model_training.save(save_p_2) - print(f'Epoch {epoch} completed') - epoch += 1 \ No newline at end of file diff --git a/spaces/Benson/text-generation/Examples/Bitcoin-qt.exe Download.md b/spaces/Benson/text-generation/Examples/Bitcoin-qt.exe Download.md deleted file mode 100644 index d7aa2315d80164d41046b333c89cd4a579cccc7b..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Bitcoin-qt.exe Download.md +++ /dev/null @@ -1,61 +0,0 @@ - -

How to Download and Use Bitcoin-Qt.exe, the Official Bitcoin Client

    -

Bitcoin is a decentralized digital currency that enables peer-to-peer transactions without intermediaries. To use Bitcoin, you need a software program that lets you interact with the Bitcoin network and manage your funds. In this article, we will show you how to download and use Bitcoin-Qt.exe, the official Bitcoin client for Windows. We will also discuss some of the features and benefits of using Bitcoin-Qt.exe, as well as some of the alternatives you may want to consider.

    -

What is Bitcoin-Qt.exe and why do you need it?

    -

Bitcoin-Qt.exe is the original Bitcoin client, developed by Satoshi Nakamoto, the creator of Bitcoin. It is also known as Bitcoin Core, since it forms the core of the Bitcoin network. Bitcoin-Qt.exe is a full-node client, which means it downloads and validates the entire transaction history on the blockchain, the distributed ledger that records all Bitcoin transactions. By running Bitcoin-Qt.exe, you contribute to the security and stability of the network.

    -

    bitcoin-qt.exe download


    Download File ––– https://bltlly.com/2v6JEh



    -

Bitcoin-Qt.exe gives you security, privacy, and full control over your funds

    -

One of the main advantages of using Bitcoin-Qt.exe is that it gives you a high level of security, privacy, and full control over your funds. Unlike other clients or wallets that rely on third-party services or servers, Bitcoin-Qt.exe stores your private keys and your funds nowhere but on your own computer. This means you are the only one who can access and spend your bitcoins, and nobody can freeze, seize, or censor your transactions. It also means you are responsible for keeping your private keys and your wallet file safe from theft or loss.

    - -

Another advantage of using Bitcoin-Qt.exe is that it supports advanced features that let you customize and optimize your Bitcoin experience. For example, you can create and broadcast raw transactions, which are transactions you construct manually without using a graphical interface. You can also use RPC commands, which are commands you send to Bitcoin-Qt.exe to interact with the Bitcoin network and perform various operations. You can also make use of BIPs, the Bitcoin Improvement Proposals that introduce new features or standards for the Bitcoin protocol. For example, BIP39 describes how to generate a mnemonic phrase that can help you recover a wallet in case of loss or damage.
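To make this concrete, here is a minimal sketch of the raw-transaction workflow driven from Python rather than from the graphical interface. It is only an illustration: it assumes a locally running node started with -server, RPC credentials you have configured yourself, and the third-party python-bitcoinrpc package; the txid, address, and amount are placeholders, and the method names follow recent Bitcoin Core releases (older versions used signrawtransaction instead of signrawtransactionwithwallet).

```python
# Sketch only: drive the client over RPC instead of the GUI. Assumes the
# third-party `python-bitcoinrpc` package and a node started with -server;
# credentials, txid, address and amount below are placeholders.
from bitcoinrpc.authproxy import AuthServiceProxy

rpc = AuthServiceProxy("http://rpcuser:rpcpassword@127.0.0.1:8332")
print("blocks synced:", rpc.getblockchaininfo()["blocks"])  # any RPC command can be called this way

# Build a raw transaction by hand: spend one of your unspent outputs to a
# destination address, sign it with the wallet, then broadcast it.
inputs = [{"txid": "<utxo-txid>", "vout": 0}]      # placeholder UTXO reference
outputs = {"<destination-address>": 0.001}         # amount in BTC
raw_hex = rpc.createrawtransaction(inputs, outputs)
signed = rpc.signrawtransactionwithwallet(raw_hex)  # needs the wallet's keys
if signed["complete"]:
    print("broadcast txid:", rpc.sendrawtransaction(signed["hex"]))
```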

    -

How to download Bitcoin-Qt.exe for Windows

    -

If you want to use Bitcoin-Qt.exe for Windows, you need to download it from the official website or from a trusted source. You also need to verify the integrity and authenticity of the downloaded file and install it on your computer. These are the steps to follow:

    -

You can download Bitcoin-Qt.exe from the official website or from a trusted source

    -

    The official website for Bitcoin-Qt.exe is https://bitcoincore.org, where you can find the latest version of the client for Windows and other operating systems. You can also download Bitcoin-Qt.exe from other sources, such as https://bitcoin.org, or use Electrum, a lightweight client that does not require downloading the blockchain and instead connects to remote servers that provide the necessary information. You can also use MultiBit HD, a user-friendly client that supports multiple wallets and languages. You can find more Bitcoin clients for Windows on the official website or from other sources.
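    For the verification step mentioned above, official Bitcoin Core releases publish a SHA256SUMS file (and a GPG signature for it). A minimal sketch of checking the downloaded installer's SHA-256 hash in Python might look like the following; the file name and expected hash are placeholders for whatever release you actually downloaded.

```python
# Sketch: compare a downloaded installer's SHA-256 hash against the value
# published in the release's SHA256SUMS file. File name and expected hash
# below are placeholders, not real values.
import hashlib


def sha256_of(path, chunk_size=1 << 20):
    """Hash a file in chunks so large installers do not need to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


installer = "bitcoin-core-setup.exe"            # placeholder file name
expected = "replace_with_hash_from_SHA256SUMS"  # placeholder expected hash

actual = sha256_of(installer)
print("computed:", actual)
print("OK" if actual == expected else "MISMATCH - do not install this file")
```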

    -

    You can use web-based or mobile wallets, which are more convenient but less secure

    - -

    You can use hardware or paper wallets, which are more secure but less convenient

    -

    If you want to use a hardware wallet or a paper wallet, which is more secure but less convenient, you can choose from a variety of options that offer different features and functionality. For example, you can use Ledger, a hardware wallet that stores your private keys on a smart card connected to your computer via USB; it also requires you to enter a PIN code and confirm each transaction on the device's screen. You can also use Bitaddress.org or Bitcoinpaperwallet.com to generate and print a paper wallet, but you should make sure to do so offline and on a secure, clean computer and printer. You should also keep your paper wallet safe from fire, water, or physical damage, and scan it with a QR code reader whenever you want to access your funds. You can find more information about paper wallets on the official website or from other sources.

    -

    Conclusion

    - -

    Frequently asked questions

    -

    What are the system requirements for running Bitcoin-Qt.exe?

    -

    To run Bitcoin-Qt.exe, you need a Windows operating system (7 or later), a 64-bit processor, at least 2 GB of RAM, at least 400 GB of disk space (preferably SSD), and a broadband Internet connection.

    -

    How do I update Bitcoin-Qt.exe to the latest version?

    -

    To update Bitcoin-Qt.exe to the latest version, download the new version from the official website or from a trusted source, verify the file, and install it over the previous version. You do not need to uninstall the previous version or delete your data directory.

    -

    How do I restore my wallet from a backup?

    -

    To restore your wallet from a backup, copy your backup file (usually called wallet.dat) into your data directory, replacing the existing file if there is one. You may have to rescan the blockchain to update your balance and transaction history.
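    A rough sketch of that copy step is shown below. It assumes the default Windows data directory (%APPDATA%\Bitcoin), that Bitcoin-Qt.exe is closed while the file is replaced, and that the backup path is a placeholder you would adjust.

```python
# Sketch: restore wallet.dat from a backup into the default Windows data directory.
# Close Bitcoin-Qt.exe before running this; the backup path is a placeholder.
import os
import shutil

backup_path = r"D:\backups\wallet.dat"                     # placeholder backup location
data_dir = os.path.join(os.environ["APPDATA"], "Bitcoin")  # default data directory on Windows
target = os.path.join(data_dir, "wallet.dat")

if os.path.exists(target):
    # Keep the current wallet file around instead of overwriting it blindly.
    shutil.copy2(target, target + ".old")

shutil.copy2(backup_path, target)
print("Restored", backup_path, "->", target)
# If the balance or history looks wrong afterwards, restart the client with -rescan.
```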

    -

    How do I import or export my private keys?

    -

    To import or export your private keys, use the Console tab in the Debug window. You can use commands such as importprivkey, dumpprivkey, or dumpwallet to import or export your private keys. Be careful when handling your private keys, as they are very sensitive and can compromise your funds if they are exposed or lost.
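    The same commands can also be sent over the JSON-RPC interface sketched earlier instead of typing them into the Console tab. The snippet below is only an illustration: the credentials, address, and label are placeholders, and it assumes a legacy (non-descriptor) wallet, since newer descriptor wallets do not support dumpprivkey/importprivkey.

```python
# Sketch: export / import a single private key over JSON-RPC instead of the GUI console.
# Placeholder credentials, address and label; assumes a legacy (non-descriptor) wallet.
# Treat any exported key as highly sensitive.
import requests

RPC_URL = "http://127.0.0.1:8332"
RPC_AUTH = ("your_rpc_user", "your_rpc_password")


def rpc(method, *params):
    reply = requests.post(
        RPC_URL,
        auth=RPC_AUTH,
        timeout=30,
        json={"jsonrpc": "1.0", "id": "keys", "method": method, "params": list(params)},
    )
    reply.raise_for_status()
    return reply.json()["result"]


# Export the key controlling a given address (keep the output secret).
wif_key = rpc("dumpprivkey", "your_address_here")        # placeholder address

# Import a key into the wallet; the final False skips the slow blockchain rescan.
rpc("importprivkey", wif_key, "imported-label", False)
```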

    -

    How can I contact the developers or get support for Bitcoin-Qt.exe?

    -

    To contact the developers or get support for Bitcoin-Qt.exe, you can use the following channels:

    -
    -
    \ No newline at end of file diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_vendor/pyparsing/__init__.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_vendor/pyparsing/__init__.py deleted file mode 100644 index 7802ff158d83eb88e6dbe78d9cd33ca14341662a..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_vendor/pyparsing/__init__.py +++ /dev/null @@ -1,331 +0,0 @@ -# module pyparsing.py -# -# Copyright (c) 2003-2022 Paul T. McGuire -# -# Permission is hereby granted, free of charge, to any person obtaining -# a copy of this software and associated documentation files (the -# "Software"), to deal in the Software without restriction, including -# without limitation the rights to use, copy, modify, merge, publish, -# distribute, sublicense, and/or sell copies of the Software, and to -# permit persons to whom the Software is furnished to do so, subject to -# the following conditions: -# -# The above copyright notice and this permission notice shall be -# included in all copies or substantial portions of the Software. -# -# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, -# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF -# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. -# IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY -# CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, -# TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE -# SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. -# - -__doc__ = """ -pyparsing module - Classes and methods to define and execute parsing grammars -============================================================================= - -The pyparsing module is an alternative approach to creating and -executing simple grammars, vs. the traditional lex/yacc approach, or the -use of regular expressions. With pyparsing, you don't need to learn -a new syntax for defining grammars or matching expressions - the parsing -module provides a library of classes that you use to construct the -grammar directly in Python. - -Here is a program to parse "Hello, World!" (or any greeting of the form -``", !"``), built up using :class:`Word`, -:class:`Literal`, and :class:`And` elements -(the :meth:`'+'` operators create :class:`And` expressions, -and the strings are auto-converted to :class:`Literal` expressions):: - - from pyparsing import Word, alphas - - # define grammar of a greeting - greet = Word(alphas) + "," + Word(alphas) + "!" - - hello = "Hello, World!" - print(hello, "->", greet.parse_string(hello)) - -The program outputs the following:: - - Hello, World! -> ['Hello', ',', 'World', '!'] - -The Python representation of the grammar is quite readable, owing to the -self-explanatory class names, and the use of :class:`'+'`, -:class:`'|'`, :class:`'^'` and :class:`'&'` operators. - -The :class:`ParseResults` object returned from -:class:`ParserElement.parseString` can be -accessed as a nested list, a dictionary, or an object with named -attributes. - -The pyparsing module handles some of the problems that are typically -vexing when writing text parsers: - - - extra or missing whitespace (the above program will also handle - "Hello,World!", "Hello , World !", etc.) - - quoted strings - - embedded comments - - -Getting Started - ------------------ -Visit the classes :class:`ParserElement` and :class:`ParseResults` to -see the base classes that most other pyparsing -classes inherit from. 
Use the docstrings for examples of how to: - - - construct literal match expressions from :class:`Literal` and - :class:`CaselessLiteral` classes - - construct character word-group expressions using the :class:`Word` - class - - see how to create repetitive expressions using :class:`ZeroOrMore` - and :class:`OneOrMore` classes - - use :class:`'+'`, :class:`'|'`, :class:`'^'`, - and :class:`'&'` operators to combine simple expressions into - more complex ones - - associate names with your parsed results using - :class:`ParserElement.setResultsName` - - access the parsed data, which is returned as a :class:`ParseResults` - object - - find some helpful expression short-cuts like :class:`delimitedList` - and :class:`oneOf` - - find more useful common expressions in the :class:`pyparsing_common` - namespace class -""" -from typing import NamedTuple - - -class version_info(NamedTuple): - major: int - minor: int - micro: int - releaselevel: str - serial: int - - @property - def __version__(self): - return ( - "{}.{}.{}".format(self.major, self.minor, self.micro) - + ( - "{}{}{}".format( - "r" if self.releaselevel[0] == "c" else "", - self.releaselevel[0], - self.serial, - ), - "", - )[self.releaselevel == "final"] - ) - - def __str__(self): - return "{} {} / {}".format(__name__, self.__version__, __version_time__) - - def __repr__(self): - return "{}.{}({})".format( - __name__, - type(self).__name__, - ", ".join("{}={!r}".format(*nv) for nv in zip(self._fields, self)), - ) - - -__version_info__ = version_info(3, 0, 9, "final", 0) -__version_time__ = "05 May 2022 07:02 UTC" -__version__ = __version_info__.__version__ -__versionTime__ = __version_time__ -__author__ = "Paul McGuire " - -from .util import * -from .exceptions import * -from .actions import * -from .core import __diag__, __compat__ -from .results import * -from .core import * -from .core import _builtin_exprs as core_builtin_exprs -from .helpers import * -from .helpers import _builtin_exprs as helper_builtin_exprs - -from .unicode import unicode_set, UnicodeRangeList, pyparsing_unicode as unicode -from .testing import pyparsing_test as testing -from .common import ( - pyparsing_common as common, - _builtin_exprs as common_builtin_exprs, -) - -# define backward compat synonyms -if "pyparsing_unicode" not in globals(): - pyparsing_unicode = unicode -if "pyparsing_common" not in globals(): - pyparsing_common = common -if "pyparsing_test" not in globals(): - pyparsing_test = testing - -core_builtin_exprs += common_builtin_exprs + helper_builtin_exprs - - -__all__ = [ - "__version__", - "__version_time__", - "__author__", - "__compat__", - "__diag__", - "And", - "AtLineStart", - "AtStringStart", - "CaselessKeyword", - "CaselessLiteral", - "CharsNotIn", - "Combine", - "Dict", - "Each", - "Empty", - "FollowedBy", - "Forward", - "GoToColumn", - "Group", - "IndentedBlock", - "Keyword", - "LineEnd", - "LineStart", - "Literal", - "Located", - "PrecededBy", - "MatchFirst", - "NoMatch", - "NotAny", - "OneOrMore", - "OnlyOnce", - "OpAssoc", - "Opt", - "Optional", - "Or", - "ParseBaseException", - "ParseElementEnhance", - "ParseException", - "ParseExpression", - "ParseFatalException", - "ParseResults", - "ParseSyntaxException", - "ParserElement", - "PositionToken", - "QuotedString", - "RecursiveGrammarException", - "Regex", - "SkipTo", - "StringEnd", - "StringStart", - "Suppress", - "Token", - "TokenConverter", - "White", - "Word", - "WordEnd", - "WordStart", - "ZeroOrMore", - "Char", - "alphanums", - "alphas", - "alphas8bit", - "any_close_tag", - 
"any_open_tag", - "c_style_comment", - "col", - "common_html_entity", - "counted_array", - "cpp_style_comment", - "dbl_quoted_string", - "dbl_slash_comment", - "delimited_list", - "dict_of", - "empty", - "hexnums", - "html_comment", - "identchars", - "identbodychars", - "java_style_comment", - "line", - "line_end", - "line_start", - "lineno", - "make_html_tags", - "make_xml_tags", - "match_only_at_col", - "match_previous_expr", - "match_previous_literal", - "nested_expr", - "null_debug_action", - "nums", - "one_of", - "printables", - "punc8bit", - "python_style_comment", - "quoted_string", - "remove_quotes", - "replace_with", - "replace_html_entity", - "rest_of_line", - "sgl_quoted_string", - "srange", - "string_end", - "string_start", - "trace_parse_action", - "unicode_string", - "with_attribute", - "indentedBlock", - "original_text_for", - "ungroup", - "infix_notation", - "locatedExpr", - "with_class", - "CloseMatch", - "token_map", - "pyparsing_common", - "pyparsing_unicode", - "unicode_set", - "condition_as_parse_action", - "pyparsing_test", - # pre-PEP8 compatibility names - "__versionTime__", - "anyCloseTag", - "anyOpenTag", - "cStyleComment", - "commonHTMLEntity", - "countedArray", - "cppStyleComment", - "dblQuotedString", - "dblSlashComment", - "delimitedList", - "dictOf", - "htmlComment", - "javaStyleComment", - "lineEnd", - "lineStart", - "makeHTMLTags", - "makeXMLTags", - "matchOnlyAtCol", - "matchPreviousExpr", - "matchPreviousLiteral", - "nestedExpr", - "nullDebugAction", - "oneOf", - "opAssoc", - "pythonStyleComment", - "quotedString", - "removeQuotes", - "replaceHTMLEntity", - "replaceWith", - "restOfLine", - "sglQuotedString", - "stringEnd", - "stringStart", - "traceParseAction", - "unicodeString", - "withAttribute", - "indentedBlock", - "originalTextFor", - "infixNotation", - "locatedExpr", - "withClass", - "tokenMap", - "conditionAsParseAction", - "autoname_elements", -] diff --git a/spaces/Boadiwaa/Recipes/README.md b/spaces/Boadiwaa/Recipes/README.md deleted file mode 100644 index c5f785c77c172d6acaf43538df6edd656c54c8e8..0000000000000000000000000000000000000000 --- a/spaces/Boadiwaa/Recipes/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Recipes -emoji: 🏢 -colorFrom: pink -colorTo: gray -sdk: gradio -sdk_version: 3.16.2 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/CVPR/LIVE/pybind11/.github/ISSUE_TEMPLATE/question.md b/spaces/CVPR/LIVE/pybind11/.github/ISSUE_TEMPLATE/question.md deleted file mode 100644 index b199b6ee8ad446994aed54f67b0d1c22049d53c1..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/pybind11/.github/ISSUE_TEMPLATE/question.md +++ /dev/null @@ -1,21 +0,0 @@ ---- -name: Question -about: File an issue about unexplained behavior -title: "[QUESTION] " ---- - -If you have a question, please check the following first: - -1. Check if your question has already been answered in the [FAQ][] section. -2. Make sure you've read the [documentation][]. Your issue may be addressed there. -3. If those resources didn't help and you only have a short question (not a bug report), consider asking in the [Gitter chat room][] -4. Search the [issue tracker][], including the closed issues, to see if your question has already been asked/answered. +1 or comment if it has been asked but has no answer. -5. If you have a more complex question which is not answered in the previous items (or not suitable for chat), please fill in the details below. -6. 
Include a self-contained and minimal piece of code that illustrates your question. If that's not possible, try to make the description as clear as possible. - -[FAQ]: http://pybind11.readthedocs.io/en/latest/faq.html -[documentation]: https://pybind11.readthedocs.io -[issue tracker]: https://github.com/pybind/pybind11/issues -[Gitter chat room]: https://gitter.im/pybind/Lobby - -*After reading, remove this checklist.* diff --git a/spaces/CVPR/LIVE/pybind11/tests/test_stl_binders.cpp b/spaces/CVPR/LIVE/pybind11/tests/test_stl_binders.cpp deleted file mode 100644 index 8688874091219f5a5035f5eb46e976e7408080b8..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/pybind11/tests/test_stl_binders.cpp +++ /dev/null @@ -1,129 +0,0 @@ -/* - tests/test_stl_binders.cpp -- Usage of stl_binders functions - - Copyright (c) 2016 Sergey Lyskov - - All rights reserved. Use of this source code is governed by a - BSD-style license that can be found in the LICENSE file. -*/ - -#include "pybind11_tests.h" - -#include -#include -#include -#include -#include - -class El { -public: - El() = delete; - El(int v) : a(v) { } - - int a; -}; - -std::ostream & operator<<(std::ostream &s, El const&v) { - s << "El{" << v.a << '}'; - return s; -} - -/// Issue #487: binding std::vector with E non-copyable -class E_nc { -public: - explicit E_nc(int i) : value{i} {} - E_nc(const E_nc &) = delete; - E_nc &operator=(const E_nc &) = delete; - E_nc(E_nc &&) = default; - E_nc &operator=(E_nc &&) = default; - - int value; -}; - -template Container *one_to_n(int n) { - auto v = new Container(); - for (int i = 1; i <= n; i++) - v->emplace_back(i); - return v; -} - -template Map *times_ten(int n) { - auto m = new Map(); - for (int i = 1; i <= n; i++) - m->emplace(int(i), E_nc(10*i)); - return m; -} - -template NestMap *times_hundred(int n) { - auto m = new NestMap(); - for (int i = 1; i <= n; i++) - for (int j = 1; j <= n; j++) - (*m)[i].emplace(int(j*10), E_nc(100*j)); - return m; -} - -TEST_SUBMODULE(stl_binders, m) { - // test_vector_int - py::bind_vector>(m, "VectorInt", py::buffer_protocol()); - - // test_vector_custom - py::class_(m, "El") - .def(py::init()); - py::bind_vector>(m, "VectorEl"); - py::bind_vector>>(m, "VectorVectorEl"); - - // test_map_string_double - py::bind_map>(m, "MapStringDouble"); - py::bind_map>(m, "UnorderedMapStringDouble"); - - // test_map_string_double_const - py::bind_map>(m, "MapStringDoubleConst"); - py::bind_map>(m, "UnorderedMapStringDoubleConst"); - - py::class_(m, "ENC") - .def(py::init()) - .def_readwrite("value", &E_nc::value); - - // test_noncopyable_containers - py::bind_vector>(m, "VectorENC"); - m.def("get_vnc", &one_to_n>, py::return_value_policy::reference); - py::bind_vector>(m, "DequeENC"); - m.def("get_dnc", &one_to_n>, py::return_value_policy::reference); - py::bind_map>(m, "MapENC"); - m.def("get_mnc", ×_ten>, py::return_value_policy::reference); - py::bind_map>(m, "UmapENC"); - m.def("get_umnc", ×_ten>, py::return_value_policy::reference); - // Issue #1885: binding nested std::map> with E non-copyable - py::bind_map>>(m, "MapVecENC"); - m.def("get_nvnc", [](int n) - { - auto m = new std::map>(); - for (int i = 1; i <= n; i++) - for (int j = 1; j <= n; j++) - (*m)[i].emplace_back(j); - return m; - }, py::return_value_policy::reference); - py::bind_map>>(m, "MapMapENC"); - m.def("get_nmnc", ×_hundred>>, py::return_value_policy::reference); - py::bind_map>>(m, "UmapUmapENC"); - m.def("get_numnc", ×_hundred>>, py::return_value_policy::reference); - - // test_vector_buffer - 
py::bind_vector>(m, "VectorUChar", py::buffer_protocol()); - // no dtype declared for this version: - struct VUndeclStruct { bool w; uint32_t x; double y; bool z; }; - m.def("create_undeclstruct", [m] () mutable { - py::bind_vector>(m, "VectorUndeclStruct", py::buffer_protocol()); - }); - - // The rest depends on numpy: - try { py::module::import("numpy"); } - catch (...) { return; } - - // test_vector_buffer_numpy - struct VStruct { bool w; uint32_t x; double y; bool z; }; - PYBIND11_NUMPY_DTYPE(VStruct, w, x, y, z); - py::class_(m, "VStruct").def_readwrite("x", &VStruct::x); - py::bind_vector>(m, "VectorStruct", py::buffer_protocol()); - m.def("get_vectorstruct", [] {return std::vector {{0, 5, 3.0, 1}, {1, 30, -1e4, 0}};}); -} diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/cpp/detail/inner_product.h b/spaces/CVPR/LIVE/thrust/thrust/system/cpp/detail/inner_product.h deleted file mode 100644 index c6ae90664ad9538e73febfde86c334011de417c8..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/system/cpp/detail/inner_product.h +++ /dev/null @@ -1,22 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -#pragma once - -#include - -// this system has no special version of this algorithm - diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/cuda/detail/unique_by_key.h b/spaces/CVPR/LIVE/thrust/thrust/system/cuda/detail/unique_by_key.h deleted file mode 100644 index e20832131593afe2c63af6a5bb0854beca45bd44..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/system/cuda/detail/unique_by_key.h +++ /dev/null @@ -1,934 +0,0 @@ -/****************************************************************************** - * Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved. - * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions are met: - * * Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * * Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in the - * documentation and/or other materials provided with the distribution. - * * Neither the name of the NVIDIA CORPORATION nor the - * names of its contributors may be used to endorse or promote products - * derived from this software without specific prior written permission. - * - * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" - * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE - * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE - * ARE DISCLAIMED. 
IN NO EVENT SHALL NVIDIA CORPORATION BE LIABLE FOR ANY - * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES - * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; - * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND - * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT - * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS - * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. - * - ******************************************************************************/ -#pragma once - - -#if THRUST_DEVICE_COMPILER == THRUST_DEVICE_COMPILER_NVCC -#include - -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include - -namespace thrust -{ - -template -__host__ __device__ thrust::pair -unique_by_key( - const thrust::detail::execution_policy_base &exec, - ForwardIterator1 keys_first, - ForwardIterator1 keys_last, - ForwardIterator2 values_first); -template -__host__ __device__ thrust::pair -unique_by_key_copy( - const thrust::detail::execution_policy_base &exec, - InputIterator1 keys_first, - InputIterator1 keys_last, - InputIterator2 values_first, - OutputIterator1 keys_result, - OutputIterator2 values_result); - - -namespace cuda_cub { - -// XXX it should be possible to unify unique & unique_by_key into a single -// agent with various specializations, similar to what is done -// with partition -namespace __unique_by_key { - - template - struct PtxPolicy - { - enum - { - BLOCK_THREADS = _BLOCK_THREADS, - ITEMS_PER_THREAD = _ITEMS_PER_THREAD, - ITEMS_PER_TILE = _BLOCK_THREADS * _ITEMS_PER_THREAD, - }; - static const cub::BlockLoadAlgorithm LOAD_ALGORITHM = _LOAD_ALGORITHM; - static const cub::CacheLoadModifier LOAD_MODIFIER = _LOAD_MODIFIER; - static const cub::BlockScanAlgorithm SCAN_ALGORITHM = _SCAN_ALGORITHM; - }; // struct PtxPolicy - - template - struct Tuning; - - namespace mpl = thrust::detail::mpl::math; - - template - struct items_per_thread - { - enum - { - value = mpl::min< - int, - NOMINAL_4B_ITEMS_PER_THREAD, - mpl::max::value>::value - }; - }; - - - template - struct Tuning - { - const static int INPUT_SIZE = sizeof(T); - enum - { - NOMINAL_4B_ITEMS_PER_THREAD = 11, - // - ITEMS_PER_THREAD = items_per_thread::value - }; - - typedef PtxPolicy<64, - ITEMS_PER_THREAD, - cub::BLOCK_LOAD_WARP_TRANSPOSE, - cub::LOAD_LDG, - cub::BLOCK_SCAN_WARP_SCANS> - type; - }; // Tuning for sm52 - - template - struct Tuning - { - const static int INPUT_SIZE = sizeof(T); - enum - { - NOMINAL_4B_ITEMS_PER_THREAD = 9, - // - ITEMS_PER_THREAD = items_per_thread::value - }; - - typedef PtxPolicy<128, - ITEMS_PER_THREAD, - cub::BLOCK_LOAD_WARP_TRANSPOSE, - cub::LOAD_LDG, - cub::BLOCK_SCAN_WARP_SCANS> - type; - }; // Tuning for sm35 - - template - struct Tuning - { - const static int INPUT_SIZE = sizeof(T); - enum - { - NOMINAL_4B_ITEMS_PER_THREAD = 7, - // - ITEMS_PER_THREAD = items_per_thread::value - }; - - typedef PtxPolicy<128, - ITEMS_PER_THREAD, - cub::BLOCK_LOAD_WARP_TRANSPOSE, - cub::LOAD_DEFAULT, - cub::BLOCK_SCAN_WARP_SCANS> - type; - }; // Tuning for sm30 - - template - struct UniqueByKeyAgent - { - typedef typename iterator_traits::value_type key_type; - typedef typename iterator_traits::value_type value_type; - - typedef cub::ScanTileState ScanTileState; - - template - struct PtxPlan : Tuning::type - { - typedef Tuning tuning; - - typedef typename core::LoadIterator::type KeyLoadIt; - 
typedef typename core::LoadIterator::type ValLoadIt; - - typedef typename core::BlockLoad::type BlockLoadKeys; - typedef typename core::BlockLoad::type BlockLoadValues; - - typedef cub::BlockDiscontinuity - BlockDiscontinuityKeys; - - typedef cub::TilePrefixCallbackOp - TilePrefixCallback; - typedef cub::BlockScan - BlockScan; - - typedef core::uninitialized_array - shared_keys_t; - typedef core::uninitialized_array - shared_values_t; - - union TempStorage - { - struct - { - typename BlockScan::TempStorage scan; - typename TilePrefixCallback::TempStorage prefix; - typename BlockDiscontinuityKeys::TempStorage discontinuity; - }; - - typename BlockLoadKeys::TempStorage load_keys; - typename BlockLoadValues::TempStorage load_values; - - shared_keys_t shared_keys; - shared_values_t shared_values; - }; // union TempStorage - }; // struct PtxPlan - - typedef typename core::specialize_plan_msvc10_war::type::type ptx_plan; - - typedef typename ptx_plan::KeyLoadIt KeyLoadIt; - typedef typename ptx_plan::ValLoadIt ValLoadIt; - typedef typename ptx_plan::BlockLoadKeys BlockLoadKeys; - typedef typename ptx_plan::BlockLoadValues BlockLoadValues; - typedef typename ptx_plan::BlockDiscontinuityKeys BlockDiscontinuityKeys; - typedef typename ptx_plan::TilePrefixCallback TilePrefixCallback; - typedef typename ptx_plan::BlockScan BlockScan; - typedef typename ptx_plan::TempStorage TempStorage; - typedef typename ptx_plan::shared_keys_t shared_keys_t; - typedef typename ptx_plan::shared_values_t shared_values_t; - - enum - { - BLOCK_THREADS = ptx_plan::BLOCK_THREADS, - ITEMS_PER_THREAD = ptx_plan::ITEMS_PER_THREAD, - ITEMS_PER_TILE = ptx_plan::ITEMS_PER_TILE - }; - - struct impl - { - //--------------------------------------------------------------------- - // Per-thread fields - //--------------------------------------------------------------------- - - TempStorage & temp_storage; - ScanTileState & tile_state; - KeyLoadIt keys_in; - ValLoadIt values_in; - KeyOutputIt keys_out; - ValOutputIt values_out; - cub::InequalityWrapper predicate; - Size num_items; - - //--------------------------------------------------------------------- - // Utility functions - //--------------------------------------------------------------------- - - struct key_tag {}; - struct value_tag {}; - - THRUST_DEVICE_FUNCTION - shared_keys_t &get_shared(key_tag) - { - return temp_storage.shared_keys; - } - THRUST_DEVICE_FUNCTION - shared_values_t &get_shared(value_tag) - { - return temp_storage.shared_values; - } - - - template - void THRUST_DEVICE_FUNCTION - scatter(Tag tag, - OutputIt items_out, - T (&items)[ITEMS_PER_THREAD], - Size (&selection_flags)[ITEMS_PER_THREAD], - Size (&selection_indices)[ITEMS_PER_THREAD], - int /*num_tile_items*/, - int num_tile_selections, - Size num_selections_prefix, - Size /*num_selections*/) - { - using core::sync_threadblock; - -#pragma unroll - for (int ITEM = 0; ITEM < ITEMS_PER_THREAD; ++ITEM) - { - int local_scatter_offset = selection_indices[ITEM] - - num_selections_prefix; - if (selection_flags[ITEM]) - { - get_shared(tag)[local_scatter_offset] = items[ITEM]; - } - } - - sync_threadblock(); - - for (int item = threadIdx.x; - item < num_tile_selections; - item += BLOCK_THREADS) - { - items_out[num_selections_prefix + item] = get_shared(tag)[item]; - } - - sync_threadblock(); - } - - //--------------------------------------------------------------------- - // Tile processing - //--------------------------------------------------------------------- - - template - Size THRUST_DEVICE_FUNCTION - 
consume_tile_impl(int num_tile_items, - int tile_idx, - Size tile_base) - { - using core::sync_threadblock; - - key_type keys[ITEMS_PER_THREAD]; - Size selection_flags[ITEMS_PER_THREAD]; - Size selection_idx[ITEMS_PER_THREAD]; - - if (IS_LAST_TILE) - { - // Fill last elements with the first element - // because collectives are not suffix guarded - BlockLoadKeys(temp_storage.load_keys) - .Load(keys_in + tile_base, - keys, - num_tile_items, - *(keys_in + tile_base)); - } - else - { - BlockLoadKeys(temp_storage.load_keys).Load(keys_in + tile_base, keys); - } - - - sync_threadblock(); - - value_type values[ITEMS_PER_THREAD]; - if (IS_LAST_TILE) - { - // Fill last elements with the first element - // because collectives are not suffix guarded - BlockLoadValues(temp_storage.load_values) - .Load(values_in + tile_base, - values, - num_tile_items, - *(values_in + tile_base)); - } - else - { - BlockLoadValues(temp_storage.load_values) - .Load(values_in + tile_base, values); - } - - sync_threadblock(); - - if (IS_FIRST_TILE) - { - BlockDiscontinuityKeys(temp_storage.discontinuity) - .FlagHeads(selection_flags, keys, predicate); - } - else - { - key_type tile_predecessor = keys_in[tile_base - 1]; - BlockDiscontinuityKeys(temp_storage.discontinuity) - .FlagHeads(selection_flags, keys, predicate, tile_predecessor); - } -#pragma unroll - for (int ITEM = 0; ITEM < ITEMS_PER_THREAD; ++ITEM) - { - // Set selection_flags for out-of-bounds items - if ((IS_LAST_TILE) && (Size(threadIdx.x * ITEMS_PER_THREAD) + ITEM >= num_tile_items)) - selection_flags[ITEM] = 1; - } - - sync_threadblock(); - - - Size num_tile_selections = 0; - Size num_selections = 0; - Size num_selections_prefix = 0; - if (IS_FIRST_TILE) - { - BlockScan(temp_storage.scan) - .ExclusiveSum(selection_flags, - selection_idx, - num_tile_selections); - - if (threadIdx.x == 0) - { - // Update tile status if this is not the last tile - if (!IS_LAST_TILE) - tile_state.SetInclusive(0, num_tile_selections); - } - - // Do not count any out-of-bounds selections - if (IS_LAST_TILE) - { - int num_discount = ITEMS_PER_TILE - num_tile_items; - num_tile_selections -= num_discount; - } - num_selections = num_tile_selections; - } - else - { - TilePrefixCallback prefix_cb(tile_state, - temp_storage.prefix, - cub::Sum(), - tile_idx); - BlockScan(temp_storage.scan) - .ExclusiveSum(selection_flags, - selection_idx, - prefix_cb); - - num_selections = prefix_cb.GetInclusivePrefix(); - num_tile_selections = prefix_cb.GetBlockAggregate(); - num_selections_prefix = prefix_cb.GetExclusivePrefix(); - - if (IS_LAST_TILE) - { - int num_discount = ITEMS_PER_TILE - num_tile_items; - num_tile_selections -= num_discount; - num_selections -= num_discount; - } - } - - sync_threadblock(); - - scatter(key_tag(), - keys_out, - keys, - selection_flags, - selection_idx, - num_tile_items, - num_tile_selections, - num_selections_prefix, - num_selections); - - sync_threadblock(); - - scatter(value_tag(), - values_out, - values, - selection_flags, - selection_idx, - num_tile_items, - num_tile_selections, - num_selections_prefix, - num_selections); - - return num_selections; - } - - - template - Size THRUST_DEVICE_FUNCTION - consume_tile(int num_tile_items, - int tile_idx, - Size tile_base) - { - if (tile_idx == 0) - { - return consume_tile_impl(num_tile_items, - tile_idx, - tile_base); - } - else - { - return consume_tile_impl(num_tile_items, - tile_idx, - tile_base); - } - } - - //--------------------------------------------------------------------- - // Constructor - 
//--------------------------------------------------------------------- - - THRUST_DEVICE_FUNCTION - impl(TempStorage & temp_storage_, - ScanTileState & tile_state_, - KeyLoadIt keys_in_, - ValLoadIt values_in_, - KeyOutputIt keys_out_, - ValOutputIt values_out_, - BinaryPred binary_pred_, - Size num_items_, - int num_tiles, - NumSelectedOutIt num_selected_out) - // filed ctors - : temp_storage(temp_storage_), - tile_state(tile_state_), - keys_in(keys_in_), - values_in(values_in_), - keys_out(keys_out_), - values_out(values_out_), - predicate(binary_pred_), - num_items(num_items_) - { - int tile_idx = blockIdx.x; - Size tile_base = tile_idx * ITEMS_PER_TILE; - - if (tile_idx < num_tiles - 1) - { - consume_tile(ITEMS_PER_TILE, - tile_idx, - tile_base); - } - else - { - int num_remaining = static_cast(num_items - tile_base); - Size num_selections = consume_tile(num_remaining, - tile_idx, - tile_base); - if (threadIdx.x == 0) - { - *num_selected_out = num_selections; - } - } - } - }; // struct impl - - //--------------------------------------------------------------------- - // Agent entry point - //--------------------------------------------------------------------- - - THRUST_AGENT_ENTRY(KeyInputIt keys_in, - ValInputIt values_in, - KeyOutputIt keys_out, - ValOutputIt values_out, - BinaryPred binary_pred, - NumSelectedOutIt num_selected_out, - Size num_items, - ScanTileState tile_state, - int num_tiles, - char * shmem) - { - TempStorage &storage = *reinterpret_cast(shmem); - - impl(storage, - tile_state, - core::make_load_iterator(ptx_plan(), keys_in), - core::make_load_iterator(ptx_plan(), values_in), - keys_out, - values_out, - binary_pred, - num_items, - num_tiles, - num_selected_out); - } - }; // struct UniqueByKeyAgent - - - template - struct InitAgent - { - template - struct PtxPlan : PtxPolicy<128> {}; - - typedef core::specialize_plan ptx_plan; - - //--------------------------------------------------------------------- - // Agent entry point - //--------------------------------------------------------------------- - - THRUST_AGENT_ENTRY(ScanTileState tile_state, - Size num_tiles, - NumSelectedIt num_selected_out, - char * /*shmem*/) - { - tile_state.InitializeStatus(num_tiles); - if (blockIdx.x == 0 && threadIdx.x == 0) - *num_selected_out = 0; - } - - }; // struct InitAgent - - - template - static cudaError_t THRUST_RUNTIME_FUNCTION - doit_step(void * d_temp_storage, - size_t & temp_storage_bytes, - KeyInputIt keys_in, - ValInputIt values_in, - KeyOutputIt keys_out, - ValOutputIt values_out, - BinaryPred binary_pred, - NumSelectedOutIt num_selected_out, - Size num_items, - cudaStream_t stream, - bool debug_sync) - { - using core::AgentLauncher; - using core::AgentPlan; - using core::get_agent_plan; - - typedef AgentLauncher< - UniqueByKeyAgent > - unique_agent; - - typedef typename unique_agent::ScanTileState ScanTileState; - - typedef AgentLauncher< - InitAgent > - init_agent; - - using core::get_plan; - typename get_plan::type init_plan = init_agent::get_plan(); - typename get_plan::type unique_plan = unique_agent::get_plan(stream); - - - int tile_size = unique_plan.items_per_tile; - size_t num_tiles = (num_items + tile_size - 1) / tile_size; - - size_t vshmem_size = core::vshmem_size(unique_plan.shared_memory_size, - num_tiles); - - cudaError_t status = cudaSuccess; - size_t allocation_sizes[2] = {0, vshmem_size}; - status = ScanTileState::AllocationSize(static_cast(num_tiles), allocation_sizes[0]); - CUDA_CUB_RET_IF_FAIL(status); - - void *allocations[2] = {NULL, NULL}; - // - 
status = cub::AliasTemporaries(d_temp_storage, - temp_storage_bytes, - allocations, - allocation_sizes); - CUDA_CUB_RET_IF_FAIL(status); - - if (d_temp_storage == NULL) - { - return status; - } - - ScanTileState tile_status; - status = tile_status.Init(static_cast(num_tiles), allocations[0], allocation_sizes[0]); - CUDA_CUB_RET_IF_FAIL(status); - - num_tiles = max(1,num_tiles); - init_agent ia(init_plan, num_tiles, stream, "unique_by_key::init_agent", debug_sync); - ia.launch(tile_status, num_tiles, num_selected_out); - CUDA_CUB_RET_IF_FAIL(cudaPeekAtLastError()); - - if (num_items == 0) { return status; } - - char *vshmem_ptr = vshmem_size > 0 ? (char *)allocations[1] : NULL; - - unique_agent ua(unique_plan, num_items, stream, vshmem_ptr, "unique_by_key::unique_agent", debug_sync); - ua.launch(keys_in, - values_in, - keys_out, - values_out, - binary_pred, - num_selected_out, - num_items, - tile_status, - num_tiles); - CUDA_CUB_RET_IF_FAIL(cudaPeekAtLastError()); - return status; - } - - template - THRUST_RUNTIME_FUNCTION - pair - unique_by_key(execution_policy& policy, - KeyInputIt keys_first, - KeyInputIt keys_last, - ValInputIt values_first, - KeyOutputIt keys_result, - ValOutputIt values_result, - BinaryPred binary_pred) - { - - typedef int size_type; - - size_type num_items - = static_cast(thrust::distance(keys_first, keys_last)); - - size_t temp_storage_bytes = 0; - cudaStream_t stream = cuda_cub::stream(policy); - bool debug_sync = THRUST_DEBUG_SYNC_FLAG; - - cudaError_t status; - status = __unique_by_key::doit_step(NULL, - temp_storage_bytes, - keys_first, - values_first, - keys_result, - values_result, - binary_pred, - reinterpret_cast(NULL), - num_items, - stream, - debug_sync); - cuda_cub::throw_on_error(status, "unique_by_key: failed on 1st step"); - - size_t allocation_sizes[2] = {sizeof(size_type), temp_storage_bytes}; - void * allocations[2] = {NULL, NULL}; - - size_t storage_size = 0; - status = core::alias_storage(NULL, - storage_size, - allocations, - allocation_sizes); - cuda_cub::throw_on_error(status, "unique_by_key failed on 1st alias_storage"); - - // Allocate temporary storage. 
- thrust::detail::temporary_array - tmp(policy, storage_size); - void *ptr = static_cast(tmp.data().get()); - - status = core::alias_storage(ptr, - storage_size, - allocations, - allocation_sizes); - cuda_cub::throw_on_error(status, "unique_by_key failed on 2nd alias_storage"); - - size_type* d_num_selected_out - = thrust::detail::aligned_reinterpret_cast(allocations[0]); - - status = __unique_by_key::doit_step(allocations[1], - temp_storage_bytes, - keys_first, - values_first, - keys_result, - values_result, - binary_pred, - d_num_selected_out, - num_items, - stream, - debug_sync); - cuda_cub::throw_on_error(status, "unique_by_key: failed on 2nd step"); - - status = cuda_cub::synchronize(policy); - cuda_cub::throw_on_error(status, "unique_by_key: failed to synchronize"); - - size_type num_selected = get_value(policy, d_num_selected_out); - - return thrust::make_pair( - keys_result + num_selected, - values_result + num_selected - ); - } - -} // namespace __unique_by_key - - -//------------------------- -// Thrust API entry points -//------------------------- - - -__thrust_exec_check_disable__ -template -pair __host__ __device__ -unique_by_key_copy(execution_policy &policy, - KeyInputIt keys_first, - KeyInputIt keys_last, - ValInputIt values_first, - KeyOutputIt keys_result, - ValOutputIt values_result, - BinaryPred binary_pred) -{ - pair ret = thrust::make_pair(keys_result, values_result); - if (__THRUST_HAS_CUDART__) - { - ret = __unique_by_key::unique_by_key(policy, - keys_first, - keys_last, - values_first, - keys_result, - values_result, - binary_pred); - } - else - { -#if !__THRUST_HAS_CUDART__ - ret = thrust::unique_by_key_copy(cvt_to_seq(derived_cast(policy)), - keys_first, - keys_last, - values_first, - keys_result, - values_result, - binary_pred); -#endif - } - return ret; -} - -template -pair __host__ __device__ -unique_by_key_copy(execution_policy &policy, - KeyInputIt keys_first, - KeyInputIt keys_last, - ValInputIt values_first, - KeyOutputIt keys_result, - ValOutputIt values_result) -{ - typedef typename iterator_traits::value_type key_type; - return cuda_cub::unique_by_key_copy(policy, - keys_first, - keys_last, - values_first, - keys_result, - values_result, - equal_to()); -} - -template -pair __host__ __device__ -unique_by_key(execution_policy &policy, - KeyInputIt keys_first, - KeyInputIt keys_last, - ValInputIt values_first, - BinaryPred binary_pred) -{ - pair ret = thrust::make_pair(keys_first, values_first); - if (__THRUST_HAS_CUDART__) - { - ret = cuda_cub::unique_by_key_copy(policy, - keys_first, - keys_last, - values_first, - keys_first, - values_first, - binary_pred); - } - else - { -#if !__THRUST_HAS_CUDART__ - ret = thrust::unique_by_key(cvt_to_seq(derived_cast(policy)), - keys_first, - keys_last, - values_first, - binary_pred); -#endif - } - return ret; -} - -template -pair __host__ __device__ -unique_by_key(execution_policy &policy, - KeyInputIt keys_first, - KeyInputIt keys_last, - ValInputIt values_first) -{ - typedef typename iterator_traits::value_type key_type; - return cuda_cub::unique_by_key(policy, - keys_first, - keys_last, - values_first, - equal_to()); -} - - - -} // namespace cuda_cub -} // end namespace thrust - -#include -#include - -#endif diff --git a/spaces/CVPR/WALT/mmdet/core/export/pytorch2onnx.py b/spaces/CVPR/WALT/mmdet/core/export/pytorch2onnx.py deleted file mode 100644 index 809a817e67446b3c0c7894dcefb3c4bbc29afb7e..0000000000000000000000000000000000000000 --- a/spaces/CVPR/WALT/mmdet/core/export/pytorch2onnx.py +++ /dev/null @@ -1,154 
+0,0 @@ -from functools import partial - -import mmcv -import numpy as np -import torch -from mmcv.runner import load_checkpoint - - -def generate_inputs_and_wrap_model(config_path, - checkpoint_path, - input_config, - cfg_options=None): - """Prepare sample input and wrap model for ONNX export. - - The ONNX export API only accept args, and all inputs should be - torch.Tensor or corresponding types (such as tuple of tensor). - So we should call this function before exporting. This function will: - - 1. generate corresponding inputs which are used to execute the model. - 2. Wrap the model's forward function. - - For example, the MMDet models' forward function has a parameter - ``return_loss:bool``. As we want to set it as False while export API - supports neither bool type or kwargs. So we have to replace the forward - like: ``model.forward = partial(model.forward, return_loss=False)`` - - Args: - config_path (str): the OpenMMLab config for the model we want to - export to ONNX - checkpoint_path (str): Path to the corresponding checkpoint - input_config (dict): the exactly data in this dict depends on the - framework. For MMSeg, we can just declare the input shape, - and generate the dummy data accordingly. However, for MMDet, - we may pass the real img path, or the NMS will return None - as there is no legal bbox. - - Returns: - tuple: (model, tensor_data) wrapped model which can be called by \ - model(*tensor_data) and a list of inputs which are used to execute \ - the model while exporting. - """ - - model = build_model_from_cfg( - config_path, checkpoint_path, cfg_options=cfg_options) - one_img, one_meta = preprocess_example_input(input_config) - tensor_data = [one_img] - model.forward = partial( - model.forward, img_metas=[[one_meta]], return_loss=False) - - # pytorch has some bug in pytorch1.3, we have to fix it - # by replacing these existing op - opset_version = 11 - # put the import within the function thus it will not cause import error - # when not using this function - try: - from mmcv.onnx.symbolic import register_extra_symbolics - except ModuleNotFoundError: - raise NotImplementedError('please update mmcv to version>=v1.0.4') - register_extra_symbolics(opset_version) - - return model, tensor_data - - -def build_model_from_cfg(config_path, checkpoint_path, cfg_options=None): - """Build a model from config and load the given checkpoint. - - Args: - config_path (str): the OpenMMLab config for the model we want to - export to ONNX - checkpoint_path (str): Path to the corresponding checkpoint - - Returns: - torch.nn.Module: the built model - """ - from mmdet.models import build_detector - - cfg = mmcv.Config.fromfile(config_path) - if cfg_options is not None: - cfg.merge_from_dict(cfg_options) - # import modules from string list. - if cfg.get('custom_imports', None): - from mmcv.utils import import_modules_from_strings - import_modules_from_strings(**cfg['custom_imports']) - # set cudnn_benchmark - if cfg.get('cudnn_benchmark', False): - torch.backends.cudnn.benchmark = True - cfg.model.pretrained = None - cfg.data.test.test_mode = True - - # build the model - cfg.model.train_cfg = None - model = build_detector(cfg.model, test_cfg=cfg.get('test_cfg')) - load_checkpoint(model, checkpoint_path, map_location='cpu') - model.cpu().eval() - return model - - -def preprocess_example_input(input_config): - """Prepare an example input image for ``generate_inputs_and_wrap_model``. - - Args: - input_config (dict): customized config describing the example input. 
- - Returns: - tuple: (one_img, one_meta), tensor of the example input image and \ - meta information for the example input image. - - Examples: - >>> from mmdet.core.export import preprocess_example_input - >>> input_config = { - >>> 'input_shape': (1,3,224,224), - >>> 'input_path': 'demo/demo.jpg', - >>> 'normalize_cfg': { - >>> 'mean': (123.675, 116.28, 103.53), - >>> 'std': (58.395, 57.12, 57.375) - >>> } - >>> } - >>> one_img, one_meta = preprocess_example_input(input_config) - >>> print(one_img.shape) - torch.Size([1, 3, 224, 224]) - >>> print(one_meta) - {'img_shape': (224, 224, 3), - 'ori_shape': (224, 224, 3), - 'pad_shape': (224, 224, 3), - 'filename': '.png', - 'scale_factor': 1.0, - 'flip': False} - """ - input_path = input_config['input_path'] - input_shape = input_config['input_shape'] - one_img = mmcv.imread(input_path) - one_img = mmcv.imresize(one_img, input_shape[2:][::-1]) - show_img = one_img.copy() - if 'normalize_cfg' in input_config.keys(): - normalize_cfg = input_config['normalize_cfg'] - mean = np.array(normalize_cfg['mean'], dtype=np.float32) - std = np.array(normalize_cfg['std'], dtype=np.float32) - to_rgb = normalize_cfg.get('to_rgb', True) - one_img = mmcv.imnormalize(one_img, mean, std, to_rgb=to_rgb) - one_img = one_img.transpose(2, 0, 1) - one_img = torch.from_numpy(one_img).unsqueeze(0).float().requires_grad_( - True) - (_, C, H, W) = input_shape - one_meta = { - 'img_shape': (H, W, C), - 'ori_shape': (H, W, C), - 'pad_shape': (H, W, C), - 'filename': '.png', - 'scale_factor': 1.0, - 'flip': False, - 'show_img': show_img, - } - - return one_img, one_meta diff --git a/spaces/CVPR/transfiner/configs/quick_schedules/README.md b/spaces/CVPR/transfiner/configs/quick_schedules/README.md deleted file mode 100644 index 4e6c82ef3f75a73c7006f33d7c850a0d4781a58f..0000000000000000000000000000000000000000 --- a/spaces/CVPR/transfiner/configs/quick_schedules/README.md +++ /dev/null @@ -1,8 +0,0 @@ -These are quick configs for performance or accuracy regression tracking purposes. - -* `*instance_test.yaml`: can train on 2 GPUs. They are used to test whether the training can - successfully finish. They are not expected to produce reasonable training results. -* `*inference_acc_test.yaml`: They should be run using `--eval-only`. They run inference using pre-trained models and verify - the results are as expected. -* `*training_acc_test.yaml`: They should be trained on 8 GPUs. They finish in about an hour and verify the training accuracy - is within the normal range. diff --git a/spaces/CikeyQI/meme-api/meme_generator/memes/charpic/__init__.py b/spaces/CikeyQI/meme-api/meme_generator/memes/charpic/__init__.py deleted file mode 100644 index 8ba8761022e9d421ea5b183d922672efc1a88491..0000000000000000000000000000000000000000 --- a/spaces/CikeyQI/meme-api/meme_generator/memes/charpic/__init__.py +++ /dev/null @@ -1,38 +0,0 @@ -from typing import List - -from PIL import Image, ImageDraw -from pil_utils import BuildImage -from pil_utils.fonts import Font - -from meme_generator import add_meme -from meme_generator.utils import make_jpg_or_gif - - -def charpic(images: List[BuildImage], texts, args): - img = images[0] - str_map = "@@$$&B88QMMGW##EE93SPPDOOU**==()+^,\"--''. 
" - num = len(str_map) - font = Font.find("Consolas").load_font(15) - - def make(img: BuildImage) -> BuildImage: - img = img.convert("L").resize_width(150) - img = img.resize((img.width, img.height // 2)) - lines = [] - for y in range(img.height): - line = "" - for x in range(img.width): - gray = img.image.getpixel((x, y)) - line += str_map[int(num * gray / 256)] - lines.append(line) - text = "\n".join(lines) - text_img = Image.new("RGB", (2000, 2000), "white") - draw = ImageDraw.Draw(text_img) - _, _, w, h = draw.multiline_textbbox((0, 0), text, font=font) - draw.multiline_text((0, 0), text, font=font, fill="black") - text_img = text_img.crop((0, 0, w, h)) - return BuildImage(text_img) - - return make_jpg_or_gif(img, make) - - -add_meme("charpic", charpic, min_images=1, max_images=1, keywords=["字符画"]) diff --git a/spaces/Cletrason/dalle2-dreamweddingbooth/app.py b/spaces/Cletrason/dalle2-dreamweddingbooth/app.py deleted file mode 100644 index 0103017f53d2a986fdbc047934240714ce3ed237..0000000000000000000000000000000000000000 --- a/spaces/Cletrason/dalle2-dreamweddingbooth/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/dalle2/dreamweddingbooth").launch() \ No newline at end of file diff --git a/spaces/CofAI/chat.b4/client/js/theme-toggler.js b/spaces/CofAI/chat.b4/client/js/theme-toggler.js deleted file mode 100644 index 67e1a9501b70d54ab8a717f34983c012328e74a0..0000000000000000000000000000000000000000 --- a/spaces/CofAI/chat.b4/client/js/theme-toggler.js +++ /dev/null @@ -1,22 +0,0 @@ -var switch_theme_toggler = document.getElementById("theme-toggler"); - -switch_theme_toggler.addEventListener("change", toggleTheme); - -function setTheme(themeName) { - localStorage.setItem("theme", themeName); - document.documentElement.className = themeName; -} - -function toggleTheme() { - var currentTheme = localStorage.getItem("theme"); - var newTheme = currentTheme === "theme-dark" ? "theme-light" : "theme-dark"; - - setTheme(newTheme); - switch_theme_toggler.checked = newTheme === "theme-dark"; -} - -(function () { - var currentTheme = localStorage.getItem("theme") || "theme-dark"; - setTheme(currentTheme); - switch_theme_toggler.checked = currentTheme === "theme-dark"; -})(); diff --git a/spaces/CofAI/chat.b4/g4f/Provider/Providers/hteyun.py b/spaces/CofAI/chat.b4/g4f/Provider/Providers/hteyun.py deleted file mode 100644 index a6eba7c00331d720afb47215e818f5900d4aedcf..0000000000000000000000000000000000000000 --- a/spaces/CofAI/chat.b4/g4f/Provider/Providers/hteyun.py +++ /dev/null @@ -1,34 +0,0 @@ -import requests -import os -import json -from ...typing import sha256, Dict, get_type_hints - -url = 'https://hteyun.com' -model = ['gpt-3.5-turbo', 'gpt-3.5-turbo-16k', 'gpt-3.5-turbo-16k-0613', 'gpt-3.5-turbo-0613'] -supports_stream = True -needs_auth = False - -def _create_completion(model: str, messages: list, stream: bool, temperature: float = 0.7, **kwargs): - headers = { - 'Content-Type': 'application/json', - 'Accept': 'application/json, text/plain, */*', - 'Accept-Language': 'ru-RU,ru;q=0.9,en-US;q=0.8,en;q=0.7,ja;q=0.6,zh-TW;q=0.5,zh;q=0.4', - 'Origin': 'https://hteyun.com', - 'Referer': 'https://hteyun.com/chat/', - } - data = { - 'messages': messages, - 'model': model, - 'systemMessage': 'You are ChatGPT, a large language model trained by OpenAI. Follow the user\'s instructions carefully. 
Respond using russian language.', - 'temperature': 0.7, - 'presence_penalty': 0, - } - response = requests.post(url + '/api/chat-stream', json=data, headers=headers, stream=True) - print(response.json()) - - # Извлечение текста из response - return response.json()['text'] - - -params = f'g4f.Providers.{os.path.basename(__file__)[:-3]} supports: ' + \ - '(%s)' % ', '.join([f"{name}: {get_type_hints(_create_completion)[name].__name__}" for name in _create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]]) diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/PIL/FliImagePlugin.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/PIL/FliImagePlugin.py deleted file mode 100644 index f4e89a03e0263bc6c1d318b379fdcfe7f61f8588..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/PIL/FliImagePlugin.py +++ /dev/null @@ -1,171 +0,0 @@ -# -# The Python Imaging Library. -# $Id$ -# -# FLI/FLC file handling. -# -# History: -# 95-09-01 fl Created -# 97-01-03 fl Fixed parser, setup decoder tile -# 98-07-15 fl Renamed offset attribute to avoid name clash -# -# Copyright (c) Secret Labs AB 1997-98. -# Copyright (c) Fredrik Lundh 1995-97. -# -# See the README file for information on usage and redistribution. -# - -import os - -from . import Image, ImageFile, ImagePalette -from ._binary import i16le as i16 -from ._binary import i32le as i32 -from ._binary import o8 - -# -# decoder - - -def _accept(prefix): - return ( - len(prefix) >= 6 - and i16(prefix, 4) in [0xAF11, 0xAF12] - and i16(prefix, 14) in [0, 3] # flags - ) - - -## -# Image plugin for the FLI/FLC animation format. Use the seek -# method to load individual frames. - - -class FliImageFile(ImageFile.ImageFile): - format = "FLI" - format_description = "Autodesk FLI/FLC Animation" - _close_exclusive_fp_after_loading = False - - def _open(self): - # HEAD - s = self.fp.read(128) - if not (_accept(s) and s[20:22] == b"\x00\x00"): - msg = "not an FLI/FLC file" - raise SyntaxError(msg) - - # frames - self.n_frames = i16(s, 6) - self.is_animated = self.n_frames > 1 - - # image characteristics - self.mode = "P" - self._size = i16(s, 8), i16(s, 10) - - # animation speed - duration = i32(s, 16) - magic = i16(s, 4) - if magic == 0xAF11: - duration = (duration * 1000) // 70 - self.info["duration"] = duration - - # look for palette - palette = [(a, a, a) for a in range(256)] - - s = self.fp.read(16) - - self.__offset = 128 - - if i16(s, 4) == 0xF100: - # prefix chunk; ignore it - self.__offset = self.__offset + i32(s) - s = self.fp.read(16) - - if i16(s, 4) == 0xF1FA: - # look for palette chunk - number_of_subchunks = i16(s, 6) - chunk_size = None - for _ in range(number_of_subchunks): - if chunk_size is not None: - self.fp.seek(chunk_size - 6, os.SEEK_CUR) - s = self.fp.read(6) - chunk_type = i16(s, 4) - if chunk_type in (4, 11): - self._palette(palette, 2 if chunk_type == 11 else 0) - break - chunk_size = i32(s) - if not chunk_size: - break - - palette = [o8(r) + o8(g) + o8(b) for (r, g, b) in palette] - self.palette = ImagePalette.raw("RGB", b"".join(palette)) - - # set things up to decode first frame - self.__frame = -1 - self._fp = self.fp - self.__rewind = self.fp.tell() - self.seek(0) - - def _palette(self, palette, shift): - # load palette - - i = 0 - for e in range(i16(self.fp.read(2))): - s = self.fp.read(2) - i = i + s[0] - n = s[1] - if n == 0: - n = 256 - s = self.fp.read(n * 3) - for n in range(0, len(s), 3): - r = s[n] << shift - g = s[n + 1] << 
shift - b = s[n + 2] << shift - palette[i] = (r, g, b) - i += 1 - - def seek(self, frame): - if not self._seek_check(frame): - return - if frame < self.__frame: - self._seek(0) - - for f in range(self.__frame + 1, frame + 1): - self._seek(f) - - def _seek(self, frame): - if frame == 0: - self.__frame = -1 - self._fp.seek(self.__rewind) - self.__offset = 128 - else: - # ensure that the previous frame was loaded - self.load() - - if frame != self.__frame + 1: - msg = f"cannot seek to frame {frame}" - raise ValueError(msg) - self.__frame = frame - - # move to next frame - self.fp = self._fp - self.fp.seek(self.__offset) - - s = self.fp.read(4) - if not s: - raise EOFError - - framesize = i32(s) - - self.decodermaxblock = framesize - self.tile = [("fli", (0, 0) + self.size, self.__offset, None)] - - self.__offset += framesize - - def tell(self): - return self.__frame - - -# -# registry - -Image.register_open(FliImageFile.format, FliImageFile, _accept) - -Image.register_extensions(FliImageFile.format, [".fli", ".flc"]) diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/_o_p_b_d.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/_o_p_b_d.py deleted file mode 100644 index b22af216bb2e2ddb8af1cd3f991d4ede69471076..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/_o_p_b_d.py +++ /dev/null @@ -1,6 +0,0 @@ -from .otBase import BaseTTXConverter - - -# https://developer.apple.com/fonts/TrueType-Reference-Manual/RM06/Chap6opbd.html -class table__o_p_b_d(BaseTTXConverter): - pass diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/Copy-6cd42558.js b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/Copy-6cd42558.js deleted file mode 100644 index 31526de347b3729a1d3389c1865bd163866787e3..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/Copy-6cd42558.js +++ /dev/null @@ -1,2 +0,0 @@ -import{S as p,e as c,s as h,J as a,K as e,p as u,M as i,n as o,A as d}from"./index-3370be2a.js";function v(l){let t,s;return{c(){t=a("svg"),s=a("polyline"),e(s,"points","20 6 9 17 4 12"),e(t,"xmlns","http://www.w3.org/2000/svg"),e(t,"width","100%"),e(t,"height","100%"),e(t,"viewBox","0 0 24 24"),e(t,"fill","none"),e(t,"stroke","currentColor"),e(t,"stroke-width","3"),e(t,"stroke-linecap","round"),e(t,"stroke-linejoin","round")},m(n,r){u(n,t,r),i(t,s)},p:o,i:o,o,d(n){n&&d(t)}}}class m extends p{constructor(t){super(),c(this,t,null,v,h,{})}}function w(l){let t,s,n;return{c(){t=a("svg"),s=a("path"),n=a("path"),e(s,"fill","currentColor"),e(s,"d","M28 10v18H10V10h18m0-2H10a2 2 0 0 0-2 2v18a2 2 0 0 0 2 2h18a2 2 0 0 0 2-2V10a2 2 0 0 0-2-2Z"),e(n,"fill","currentColor"),e(n,"d","M4 18H2V4a2 2 0 0 1 2-2h14v2H4Z"),e(t,"xmlns","http://www.w3.org/2000/svg"),e(t,"width","100%"),e(t,"height","100%"),e(t,"viewBox","0 0 32 32")},m(r,g){u(r,t,g),i(t,s),i(t,n)},p:o,i:o,o,d(r){r&&d(t)}}}class x extends p{constructor(t){super(),c(this,t,null,w,h,{})}}export{x as C,m as a}; -//# sourceMappingURL=Copy-6cd42558.js.map diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/httpx/_exceptions.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/httpx/_exceptions.py deleted file mode 100644 index 24a4f8aba337daa8f3695d87cedd331f2ec4eb61..0000000000000000000000000000000000000000 --- 
a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/httpx/_exceptions.py +++ /dev/null @@ -1,343 +0,0 @@ -""" -Our exception hierarchy: - -* HTTPError - x RequestError - + TransportError - - TimeoutException - · ConnectTimeout - · ReadTimeout - · WriteTimeout - · PoolTimeout - - NetworkError - · ConnectError - · ReadError - · WriteError - · CloseError - - ProtocolError - · LocalProtocolError - · RemoteProtocolError - - ProxyError - - UnsupportedProtocol - + DecodingError - + TooManyRedirects - x HTTPStatusError -* InvalidURL -* CookieConflict -* StreamError - x StreamConsumed - x StreamClosed - x ResponseNotRead - x RequestNotRead -""" -import contextlib -import typing - -if typing.TYPE_CHECKING: - from ._models import Request, Response # pragma: no cover - - -class HTTPError(Exception): - """ - Base class for `RequestError` and `HTTPStatusError`. - - Useful for `try...except` blocks when issuing a request, - and then calling `.raise_for_status()`. - - For example: - - ``` - try: - response = httpx.get("https://www.example.com") - response.raise_for_status() - except httpx.HTTPError as exc: - print(f"HTTP Exception for {exc.request.url} - {exc}") - ``` - """ - - def __init__(self, message: str) -> None: - super().__init__(message) - self._request: typing.Optional["Request"] = None - - @property - def request(self) -> "Request": - if self._request is None: - raise RuntimeError("The .request property has not been set.") - return self._request - - @request.setter - def request(self, request: "Request") -> None: - self._request = request - - -class RequestError(HTTPError): - """ - Base class for all exceptions that may occur when issuing a `.request()`. - """ - - def __init__( - self, message: str, *, request: typing.Optional["Request"] = None - ) -> None: - super().__init__(message) - # At the point an exception is raised we won't typically have a request - # instance to associate it with. - # - # The 'request_context' context manager is used within the Client and - # Response methods in order to ensure that any raised exceptions - # have a `.request` property set on them. - self._request = request - - -class TransportError(RequestError): - """ - Base class for all exceptions that occur at the level of the Transport API. - """ - - -# Timeout exceptions... - - -class TimeoutException(TransportError): - """ - The base class for timeout errors. - - An operation has timed out. - """ - - -class ConnectTimeout(TimeoutException): - """ - Timed out while connecting to the host. - """ - - -class ReadTimeout(TimeoutException): - """ - Timed out while receiving data from the host. - """ - - -class WriteTimeout(TimeoutException): - """ - Timed out while sending data to the host. - """ - - -class PoolTimeout(TimeoutException): - """ - Timed out waiting to acquire a connection from the pool. - """ - - -# Core networking exceptions... - - -class NetworkError(TransportError): - """ - The base class for network-related errors. - - An error occurred while interacting with the network. - """ - - -class ReadError(NetworkError): - """ - Failed to receive data from the network. - """ - - -class WriteError(NetworkError): - """ - Failed to send data through the network. - """ - - -class ConnectError(NetworkError): - """ - Failed to establish a connection. - """ - - -class CloseError(NetworkError): - """ - Failed to close a connection. - """ - - -# Other transport exceptions... - - -class ProxyError(TransportError): - """ - An error occurred while establishing a proxy connection. 
- """ - - -class UnsupportedProtocol(TransportError): - """ - Attempted to make a request to an unsupported protocol. - - For example issuing a request to `ftp://www.example.com`. - """ - - -class ProtocolError(TransportError): - """ - The protocol was violated. - """ - - -class LocalProtocolError(ProtocolError): - """ - A protocol was violated by the client. - - For example if the user instantiated a `Request` instance explicitly, - failed to include the mandatory `Host:` header, and then issued it directly - using `client.send()`. - """ - - -class RemoteProtocolError(ProtocolError): - """ - The protocol was violated by the server. - - For example, returning malformed HTTP. - """ - - -# Other request exceptions... - - -class DecodingError(RequestError): - """ - Decoding of the response failed, due to a malformed encoding. - """ - - -class TooManyRedirects(RequestError): - """ - Too many redirects. - """ - - -# Client errors - - -class HTTPStatusError(HTTPError): - """ - The response had an error HTTP status of 4xx or 5xx. - - May be raised when calling `response.raise_for_status()` - """ - - def __init__( - self, message: str, *, request: "Request", response: "Response" - ) -> None: - super().__init__(message) - self.request = request - self.response = response - - -class InvalidURL(Exception): - """ - URL is improperly formed or cannot be parsed. - """ - - def __init__(self, message: str) -> None: - super().__init__(message) - - -class CookieConflict(Exception): - """ - Attempted to lookup a cookie by name, but multiple cookies existed. - - Can occur when calling `response.cookies.get(...)`. - """ - - def __init__(self, message: str) -> None: - super().__init__(message) - - -# Stream exceptions... - -# These may occur as the result of a programming error, by accessing -# the request/response stream in an invalid manner. - - -class StreamError(RuntimeError): - """ - The base class for stream exceptions. - - The developer made an error in accessing the request stream in - an invalid way. - """ - - def __init__(self, message: str) -> None: - super().__init__(message) - - -class StreamConsumed(StreamError): - """ - Attempted to read or stream content, but the content has already - been streamed. - """ - - def __init__(self) -> None: - message = ( - "Attempted to read or stream some content, but the content has " - "already been streamed. For requests, this could be due to passing " - "a generator as request content, and then receiving a redirect " - "response or a secondary request as part of an authentication flow." - "For responses, this could be due to attempting to stream the response " - "content more than once." - ) - super().__init__(message) - - -class StreamClosed(StreamError): - """ - Attempted to read or stream response content, but the request has been - closed. - """ - - def __init__(self) -> None: - message = ( - "Attempted to read or stream content, but the stream has " "been closed." - ) - super().__init__(message) - - -class ResponseNotRead(StreamError): - """ - Attempted to access streaming response content, without having called `read()`. - """ - - def __init__(self) -> None: - message = "Attempted to access streaming response content, without having called `read()`." - super().__init__(message) - - -class RequestNotRead(StreamError): - """ - Attempted to access streaming request content, without having called `read()`. - """ - - def __init__(self) -> None: - message = "Attempted to access streaming request content, without having called `read()`." 
- super().__init__(message) - - -@contextlib.contextmanager -def request_context( - request: typing.Optional["Request"] = None, -) -> typing.Iterator[None]: - """ - A context manager that can be used to attach the given request context - to any `RequestError` exceptions that are raised within the block. - """ - try: - yield - except RequestError as exc: - if request is not None: - exc.request = request - raise exc diff --git a/spaces/DeepDrivePL/PaddleSeg-Matting/matting/core/__init__.py b/spaces/DeepDrivePL/PaddleSeg-Matting/matting/core/__init__.py deleted file mode 100644 index f606e56ba3d5c848c746af97bdbc52ea7e98df9c..0000000000000000000000000000000000000000 --- a/spaces/DeepDrivePL/PaddleSeg-Matting/matting/core/__init__.py +++ /dev/null @@ -1 +0,0 @@ -from .predict import predict diff --git a/spaces/DragGan/DragGan-Inversion/stylegan_human/torch_utils/ops/upfirdn2d.cpp b/spaces/DragGan/DragGan-Inversion/stylegan_human/torch_utils/ops/upfirdn2d.cpp deleted file mode 100644 index 42bdd483490a555266c8f9b9dd6684464b2088bc..0000000000000000000000000000000000000000 --- a/spaces/DragGan/DragGan-Inversion/stylegan_human/torch_utils/ops/upfirdn2d.cpp +++ /dev/null @@ -1,105 +0,0 @@ -// Copyright (c) SenseTime Research. All rights reserved. - -// Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved. -// -// NVIDIA CORPORATION and its licensors retain all intellectual property -// and proprietary rights in and to this software, related documentation -// and any modifications thereto. Any use, reproduction, disclosure or -// distribution of this software and related documentation without an express -// license agreement from NVIDIA CORPORATION is strictly prohibited. - -#include -#include -#include -#include "upfirdn2d.h" - -//------------------------------------------------------------------------ - -static torch::Tensor upfirdn2d(torch::Tensor x, torch::Tensor f, int upx, int upy, int downx, int downy, int padx0, int padx1, int pady0, int pady1, bool flip, float gain) -{ - // Validate arguments. - TORCH_CHECK(x.is_cuda(), "x must reside on CUDA device"); - TORCH_CHECK(f.device() == x.device(), "f must reside on the same device as x"); - TORCH_CHECK(f.dtype() == torch::kFloat, "f must be float32"); - TORCH_CHECK(x.numel() <= INT_MAX, "x is too large"); - TORCH_CHECK(f.numel() <= INT_MAX, "f is too large"); - TORCH_CHECK(x.dim() == 4, "x must be rank 4"); - TORCH_CHECK(f.dim() == 2, "f must be rank 2"); - TORCH_CHECK(f.size(0) >= 1 && f.size(1) >= 1, "f must be at least 1x1"); - TORCH_CHECK(upx >= 1 && upy >= 1, "upsampling factor must be at least 1"); - TORCH_CHECK(downx >= 1 && downy >= 1, "downsampling factor must be at least 1"); - - // Create output tensor. - const at::cuda::OptionalCUDAGuard device_guard(device_of(x)); - int outW = ((int)x.size(3) * upx + padx0 + padx1 - (int)f.size(1) + downx) / downx; - int outH = ((int)x.size(2) * upy + pady0 + pady1 - (int)f.size(0) + downy) / downy; - TORCH_CHECK(outW >= 1 && outH >= 1, "output must be at least 1x1"); - torch::Tensor y = torch::empty({x.size(0), x.size(1), outH, outW}, x.options(), x.suggest_memory_format()); - TORCH_CHECK(y.numel() <= INT_MAX, "output is too large"); - - // Initialize CUDA kernel parameters. - upfirdn2d_kernel_params p; - p.x = x.data_ptr(); - p.f = f.data_ptr(); - p.y = y.data_ptr(); - p.up = make_int2(upx, upy); - p.down = make_int2(downx, downy); - p.pad0 = make_int2(padx0, pady0); - p.flip = (flip) ? 
1 : 0; - p.gain = gain; - p.inSize = make_int4((int)x.size(3), (int)x.size(2), (int)x.size(1), (int)x.size(0)); - p.inStride = make_int4((int)x.stride(3), (int)x.stride(2), (int)x.stride(1), (int)x.stride(0)); - p.filterSize = make_int2((int)f.size(1), (int)f.size(0)); - p.filterStride = make_int2((int)f.stride(1), (int)f.stride(0)); - p.outSize = make_int4((int)y.size(3), (int)y.size(2), (int)y.size(1), (int)y.size(0)); - p.outStride = make_int4((int)y.stride(3), (int)y.stride(2), (int)y.stride(1), (int)y.stride(0)); - p.sizeMajor = (p.inStride.z == 1) ? p.inSize.w : p.inSize.w * p.inSize.z; - p.sizeMinor = (p.inStride.z == 1) ? p.inSize.z : 1; - - // Choose CUDA kernel. - upfirdn2d_kernel_spec spec; - AT_DISPATCH_FLOATING_TYPES_AND_HALF(x.scalar_type(), "upfirdn2d_cuda", [&] - { - spec = choose_upfirdn2d_kernel(p); - }); - - // Set looping options. - p.loopMajor = (p.sizeMajor - 1) / 16384 + 1; - p.loopMinor = spec.loopMinor; - p.loopX = spec.loopX; - p.launchMinor = (p.sizeMinor - 1) / p.loopMinor + 1; - p.launchMajor = (p.sizeMajor - 1) / p.loopMajor + 1; - - // Compute grid size. - dim3 blockSize, gridSize; - if (spec.tileOutW < 0) // large - { - blockSize = dim3(4, 32, 1); - gridSize = dim3( - ((p.outSize.y - 1) / blockSize.x + 1) * p.launchMinor, - (p.outSize.x - 1) / (blockSize.y * p.loopX) + 1, - p.launchMajor); - } - else // small - { - blockSize = dim3(256, 1, 1); - gridSize = dim3( - ((p.outSize.y - 1) / spec.tileOutH + 1) * p.launchMinor, - (p.outSize.x - 1) / (spec.tileOutW * p.loopX) + 1, - p.launchMajor); - } - - // Launch CUDA kernel. - void* args[] = {&p}; - AT_CUDA_CHECK(cudaLaunchKernel(spec.kernel, gridSize, blockSize, args, 0, at::cuda::getCurrentCUDAStream())); - return y; -} - -//------------------------------------------------------------------------ - -PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) -{ - m.def("upfirdn2d", &upfirdn2d); -} - -//------------------------------------------------------------------------ diff --git a/spaces/EDGAhab/Aatrox-Talking/monotonic_align/__init__.py b/spaces/EDGAhab/Aatrox-Talking/monotonic_align/__init__.py deleted file mode 100644 index 3d7009c40fea3a98168e3e3bc9ae061e91327422..0000000000000000000000000000000000000000 --- a/spaces/EDGAhab/Aatrox-Talking/monotonic_align/__init__.py +++ /dev/null @@ -1,19 +0,0 @@ -import numpy as np -import torch -from .monotonic_align.core import maximum_path_c - - -def maximum_path(neg_cent, mask): - """ Cython optimized version. 
- neg_cent: [b, t_t, t_s] - mask: [b, t_t, t_s] - """ - device = neg_cent.device - dtype = neg_cent.dtype - neg_cent = neg_cent.data.cpu().numpy().astype(np.float32) - path = np.zeros(neg_cent.shape, dtype=np.int32) - - t_t_max = mask.sum(1)[:, 0].data.cpu().numpy().astype(np.int32) - t_s_max = mask.sum(2)[:, 0].data.cpu().numpy().astype(np.int32) - maximum_path_c(path, neg_cent, t_t_max, t_s_max) - return torch.from_numpy(path).to(device=device, dtype=dtype) diff --git a/spaces/Epitech/MLOps/app.py b/spaces/Epitech/MLOps/app.py deleted file mode 100644 index e9b583788e20a9862908248038adbd24d8b4fb7e..0000000000000000000000000000000000000000 --- a/spaces/Epitech/MLOps/app.py +++ /dev/null @@ -1,174 +0,0 @@ -import os -os.system('git clone --recursive https://github.com/dmlc/xgboost') -os.system('cd xgboost') -os.system('sudo cp make/minimum.mk ./config.mk;') -os.system('sudo make -j4;') -os.system('sh build.sh') -os.system('cd python-package') -os.system('python setup.py install') -os.system('pip install graphviz') -os.system('pip install python-pydot') -os.system('pip install python-pydot-ng') -os.system('pip install -U scikit-learn scipy matplotlib') -os.system('pip install wandb --upgrade') -os.system('pip install tensorboardX --upgrade') -os.system('pip install ipython --upgrade') -os.system('wandb login 5a0e81f39777351977ce52cf57ea09c4f48f3d93 --relogin') - -from collections import namedtuple -import altair as alt -import math -import streamlit as st -import pandas -import numpy -import xgboost -import graphviz -from sklearn.metrics import mean_squared_error -from sklearn.model_selection import train_test_split -import matplotlib.pyplot -os.system('load_ext tensorboard') -import os -import datetime -from tensorboardX import SummaryWriter -import wandb -from wandb.xgboost import wandb_callback - -wandb.init(project="australian_rain", entity="epitech1") - -""" -# MLOPS -""" - - -max_depth_input = st.slider("Max depth", 1, 100, 5) -colsample_bytree_input = st.slider("Colsample bytree", 0.0, 1.0, 0.5) -learning_rate_input = st.slider("Learning rate", 0.0, 1.0, 0.2) -alpha_input = st.slider("Alpha", 1, 100, 10) -n_estimators_input = st.slider("n estimators", 1, 100, 20) -city_input = st.selectbox( - 'Which city do you want to predict rain ?', - ("Canberra", - "Albury", - "Penrith", - "Sydney", - "MountGinini", - "Bendigo", - "Brisbane", - "Portland"), index=0) - -dataset = pandas.read_csv('weatherAUS.csv') - -location_dataset = dataset["Location"].unique() -wind_dataset = dataset["WindGustDir"].unique() -date_dataset = dataset["Date"].unique() - -dataset.drop(dataset.loc[dataset['Location'] != city_input].index, inplace=True) - -i_RainTomorrow = dataset.columns.get_loc("RainTomorrow") -#i_Location = dataset.columns.get_loc("Location") -i_WindGustDir = dataset.columns.get_loc("WindGustDir") -i_Date = dataset.columns.get_loc("Date") -yes = dataset.iat[8, dataset.columns.get_loc("RainTomorrow")] -no = dataset.iat[0, dataset.columns.get_loc("RainTomorrow")] - -for i in range(len(dataset)): - if (dataset.iat[i, i_RainTomorrow] == yes): - dataset.iat[i, i_RainTomorrow] = True - else: - dataset.iat[i, i_RainTomorrow] = False - #dataset.iat[i, i_Location] = numpy.where(location_dataset == dataset.iat[i, i_Location])[0][0] - if (pandas.isna(dataset.iat[i, i_WindGustDir])): - dataset.iat[i, i_WindGustDir] = 0 - else: - dataset.iat[i, i_WindGustDir] = numpy.where(wind_dataset == dataset.iat[i, i_WindGustDir])[0][0] + 1 - dataset.iat[i, i_Date] = numpy.where(date_dataset == dataset.iat[i, 
i_Date])[0][0] - - -dataset = dataset.astype({'RainTomorrow': 'bool'}) -#dataset = dataset.astype({'Location': 'int'}) -dataset = dataset.astype({'WindGustDir': 'int'}) -dataset = dataset.astype({'Date': 'int'}) - -dataset.drop(columns=["WindDir9am", "WindDir3pm", "WindSpeed9am", "WindSpeed3pm", "Temp9am", "Temp3pm", "RainToday"], inplace=True) -dataset.drop(dataset.index[dataset.isnull().any(axis=1)], 0, inplace=True) - -dataset["Humidity"] = 0.0 -dataset["Pressure"] = 0.0 -dataset["Cloud"] = 0.0 - -for i in dataset.index: - humidity = (dataset["Humidity9am"][i] + dataset["Humidity3pm"][i]) / 2 - dataset.at[i, "Humidity"] = humidity - pressure = (dataset["Pressure9am"][i] + dataset["Pressure3pm"][i]) / 2 - dataset.at[i, "Pressure"] = pressure - cloud = (dataset["Cloud9am"][i] + dataset["Cloud3pm"][i]) / 2 - dataset.at[i, "Cloud"] = cloud - -dataset.drop(columns=["Humidity9am", "Humidity3pm", "Pressure9am", "Pressure3pm", "Cloud9am", "Cloud3pm"], inplace=True) - -x, y = dataset.iloc[:,[False, False, True, True, False, True, True, True, True, True, True, True, True]],dataset.iloc[:,4] - -data_dmatrix = xgboost.DMatrix(data=x,label=y) - -X_train, X_test, y_train, y_test = train_test_split(x, y, test_size=0.2, random_state=123) - -class TensorBoardCallback(xgboost.callback.TrainingCallback): - def __init__(self, experiment: str = None, data_name: str = None): - self.experiment = experiment or "logs" - self.data_name = data_name or "test" - self.datetime_ = datetime.datetime.now().strftime("%Y%m%d-%H%M%S") - self.log_dir = f"runs/{self.experiment}/{self.datetime_}" - self.train_writer = SummaryWriter(log_dir=os.path.join(self.log_dir, "train/")) - if self.data_name: - self.test_writer = SummaryWriter(log_dir=os.path.join(self.log_dir, f"{self.data_name}/")) - - def after_iteration( - self, model, epoch: int, evals_log: xgboost.callback.TrainingCallback.EvalsLog - ) -> bool: - if not evals_log: - return False - - for data, metric in evals_log.items(): - for metric_name, log in metric.items(): - score = log[-1][0] if isinstance(log[-1], tuple) else log[-1] - if data == "train": - self.train_writer.add_scalar(metric_name, score, epoch) - else: - self.test_writer.add_scalar(metric_name, score, epoch) - - return False - -xg_reg = xgboost.XGBRegressor(colsample_bytree = colsample_bytree_input, learning_rate = learning_rate_input, max_depth = max_depth_input, alpha = alpha_input, n_estimators = n_estimators_input, eval_metric = ['rmse', 'error', 'logloss', 'map'], - callbacks=[TensorBoardCallback(experiment='exp_1', data_name='test')]) - -xg_reg.fit(X_train,y_train, eval_set=[(X_train, y_train)]) - -preds = xg_reg.predict(X_test) - -rmse = numpy.sqrt(mean_squared_error(y_test, preds)) -st.write("RMSE: %f" % (rmse)) - -params = {'colsample_bytree': colsample_bytree_input,'learning_rate': learning_rate_input, - 'max_depth': max_depth_input, 'alpha': alpha_input} - -cv_results = xgboost.cv(dtrain=data_dmatrix, params=params, nfold=3, - num_boost_round=50,early_stopping_rounds=10,metrics="rmse", as_pandas=True, seed=123) - -st.write((cv_results["test-rmse-mean"]).tail(1)) - -xg_reg = xgboost.train(params=params, dtrain=data_dmatrix, num_boost_round=10) - -os.system('tensorboard --logdir runs') - -#xgboost.plot_tree(xg_reg,num_trees=0) -#matplotlib.pyplot.rcParams['figure.figsize'] = [200, 200] -#matplotlib.pyplot.show() - -#xgboost.plot_importance(xg_reg) -#matplotlib.pyplot.rcParams['figure.figsize'] = [5, 5] -#matplotlib.pyplot.show() - -#xg_reg = xgboost.train(params=params, dtrain=data_dmatrix, 
num_boost_round=10, callbacks=[wandb_callback()]) - -# MLOPS - W&B analytics -# added the wandb to the callbacks diff --git a/spaces/EronSamez/RVC_HFmeu/demucs/model.py b/spaces/EronSamez/RVC_HFmeu/demucs/model.py deleted file mode 100644 index e9d932f4d014f7b95b394d2e24ed5edc379ded8d..0000000000000000000000000000000000000000 --- a/spaces/EronSamez/RVC_HFmeu/demucs/model.py +++ /dev/null @@ -1,202 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import math - -import julius -from torch import nn - -from .utils import capture_init, center_trim - - -class BLSTM(nn.Module): - def __init__(self, dim, layers=1): - super().__init__() - self.lstm = nn.LSTM(bidirectional=True, num_layers=layers, hidden_size=dim, input_size=dim) - self.linear = nn.Linear(2 * dim, dim) - - def forward(self, x): - x = x.permute(2, 0, 1) - x = self.lstm(x)[0] - x = self.linear(x) - x = x.permute(1, 2, 0) - return x - - -def rescale_conv(conv, reference): - std = conv.weight.std().detach() - scale = (std / reference)**0.5 - conv.weight.data /= scale - if conv.bias is not None: - conv.bias.data /= scale - - -def rescale_module(module, reference): - for sub in module.modules(): - if isinstance(sub, (nn.Conv1d, nn.ConvTranspose1d)): - rescale_conv(sub, reference) - - -class Demucs(nn.Module): - @capture_init - def __init__(self, - sources, - audio_channels=2, - channels=64, - depth=6, - rewrite=True, - glu=True, - rescale=0.1, - resample=True, - kernel_size=8, - stride=4, - growth=2., - lstm_layers=2, - context=3, - normalize=False, - samplerate=44100, - segment_length=4 * 10 * 44100): - """ - Args: - sources (list[str]): list of source names - audio_channels (int): stereo or mono - channels (int): first convolution channels - depth (int): number of encoder/decoder layers - rewrite (bool): add 1x1 convolution to each encoder layer - and a convolution to each decoder layer. - For the decoder layer, `context` gives the kernel size. - glu (bool): use glu instead of ReLU - resample_input (bool): upsample x2 the input and downsample /2 the output. - rescale (int): rescale initial weights of convolutions - to get their standard deviation closer to `rescale` - kernel_size (int): kernel size for convolutions - stride (int): stride for convolutions - growth (float): multiply (resp divide) number of channels by that - for each layer of the encoder (resp decoder) - lstm_layers (int): number of lstm layers, 0 = no lstm - context (int): kernel size of the convolution in the - decoder before the transposed convolution. If > 1, - will provide some context from neighboring time - steps. - samplerate (int): stored as meta information for easing - future evaluations of the model. - segment_length (int): stored as meta information for easing - future evaluations of the model. Length of the segments on which - the model was trained. 
- """ - - super().__init__() - self.audio_channels = audio_channels - self.sources = sources - self.kernel_size = kernel_size - self.context = context - self.stride = stride - self.depth = depth - self.resample = resample - self.channels = channels - self.normalize = normalize - self.samplerate = samplerate - self.segment_length = segment_length - - self.encoder = nn.ModuleList() - self.decoder = nn.ModuleList() - - if glu: - activation = nn.GLU(dim=1) - ch_scale = 2 - else: - activation = nn.ReLU() - ch_scale = 1 - in_channels = audio_channels - for index in range(depth): - encode = [] - encode += [nn.Conv1d(in_channels, channels, kernel_size, stride), nn.ReLU()] - if rewrite: - encode += [nn.Conv1d(channels, ch_scale * channels, 1), activation] - self.encoder.append(nn.Sequential(*encode)) - - decode = [] - if index > 0: - out_channels = in_channels - else: - out_channels = len(self.sources) * audio_channels - if rewrite: - decode += [nn.Conv1d(channels, ch_scale * channels, context), activation] - decode += [nn.ConvTranspose1d(channels, out_channels, kernel_size, stride)] - if index > 0: - decode.append(nn.ReLU()) - self.decoder.insert(0, nn.Sequential(*decode)) - in_channels = channels - channels = int(growth * channels) - - channels = in_channels - - if lstm_layers: - self.lstm = BLSTM(channels, lstm_layers) - else: - self.lstm = None - - if rescale: - rescale_module(self, reference=rescale) - - def valid_length(self, length): - """ - Return the nearest valid length to use with the model so that - there is no time steps left over in a convolutions, e.g. for all - layers, size of the input - kernel_size % stride = 0. - - If the mixture has a valid length, the estimated sources - will have exactly the same length when context = 1. If context > 1, - the two signals can be center trimmed to match. - - For training, extracts should have a valid length.For evaluation - on full tracks we recommend passing `pad = True` to :method:`forward`. 
- """ - if self.resample: - length *= 2 - for _ in range(self.depth): - length = math.ceil((length - self.kernel_size) / self.stride) + 1 - length = max(1, length) - length += self.context - 1 - for _ in range(self.depth): - length = (length - 1) * self.stride + self.kernel_size - - if self.resample: - length = math.ceil(length / 2) - return int(length) - - def forward(self, mix): - x = mix - - if self.normalize: - mono = mix.mean(dim=1, keepdim=True) - mean = mono.mean(dim=-1, keepdim=True) - std = mono.std(dim=-1, keepdim=True) - else: - mean = 0 - std = 1 - - x = (x - mean) / (1e-5 + std) - - if self.resample: - x = julius.resample_frac(x, 1, 2) - - saved = [] - for encode in self.encoder: - x = encode(x) - saved.append(x) - if self.lstm: - x = self.lstm(x) - for decode in self.decoder: - skip = center_trim(saved.pop(-1), x) - x = x + skip - x = decode(x) - - if self.resample: - x = julius.resample_frac(x, 2, 1) - x = x * std + mean - x = x.view(x.size(0), len(self.sources), self.audio_channels, x.size(-1)) - return x diff --git a/spaces/EronSamez/RVC_HFmeu/mdx.py b/spaces/EronSamez/RVC_HFmeu/mdx.py deleted file mode 100644 index 4cc7c08b37bc371294f2f82b3382424a5455b7c2..0000000000000000000000000000000000000000 --- a/spaces/EronSamez/RVC_HFmeu/mdx.py +++ /dev/null @@ -1,228 +0,0 @@ -import torch -import onnxruntime as ort -from tqdm import tqdm -import warnings -import numpy as np -import hashlib -import queue -import threading - -warnings.filterwarnings("ignore") - -class MDX_Model: - def __init__(self, device, dim_f, dim_t, n_fft, hop=1024, stem_name=None, compensation=1.000): - self.dim_f = dim_f - self.dim_t = dim_t - self.dim_c = 4 - self.n_fft = n_fft - self.hop = hop - self.stem_name = stem_name - self.compensation = compensation - - self.n_bins = self.n_fft//2+1 - self.chunk_size = hop * (self.dim_t-1) - self.window = torch.hann_window(window_length=self.n_fft, periodic=True).to(device) - - out_c = self.dim_c - - self.freq_pad = torch.zeros([1, out_c, self.n_bins-self.dim_f, self.dim_t]).to(device) - - def stft(self, x): - x = x.reshape([-1, self.chunk_size]) - x = torch.stft(x, n_fft=self.n_fft, hop_length=self.hop, window=self.window, center=True, return_complex=True) - x = torch.view_as_real(x) - x = x.permute([0,3,1,2]) - x = x.reshape([-1,2,2,self.n_bins,self.dim_t]).reshape([-1,4,self.n_bins,self.dim_t]) - return x[:,:,:self.dim_f] - - def istft(self, x, freq_pad=None): - freq_pad = self.freq_pad.repeat([x.shape[0],1,1,1]) if freq_pad is None else freq_pad - x = torch.cat([x, freq_pad], -2) - # c = 4*2 if self.target_name=='*' else 2 - x = x.reshape([-1,2,2,self.n_bins,self.dim_t]).reshape([-1,2,self.n_bins,self.dim_t]) - x = x.permute([0,2,3,1]) - x = x.contiguous() - x = torch.view_as_complex(x) - x = torch.istft(x, n_fft=self.n_fft, hop_length=self.hop, window=self.window, center=True) - return x.reshape([-1,2,self.chunk_size]) - - -class MDX: - - DEFAULT_SR = 44100 - # Unit: seconds - DEFAULT_CHUNK_SIZE = 0 * DEFAULT_SR - DEFAULT_MARGIN_SIZE = 1 * DEFAULT_SR - - DEFAULT_PROCESSOR = 0 - - def __init__(self, model_path:str, params:MDX_Model, processor=DEFAULT_PROCESSOR): - - # Set the device and the provider (CPU or CUDA) - self.device = torch.device(f'cuda:{processor}') if processor >= 0 else torch.device('cpu') - self.provider = ['CUDAExecutionProvider'] if processor >= 0 else ['CPUExecutionProvider'] - - self.model = params - - # Load the ONNX model using ONNX Runtime - self.ort = ort.InferenceSession(model_path, providers=self.provider) - # Preload the model for 
faster performance - self.ort.run(None, {'input':torch.rand(1, 4, params.dim_f, params.dim_t).numpy()}) - self.process = lambda spec:self.ort.run(None, {'input': spec.cpu().numpy()})[0] - - self.prog = None - - @staticmethod - def get_hash(model_path): - try: - with open(model_path, 'rb') as f: - f.seek(- 10000 * 1024, 2) - model_hash = hashlib.md5(f.read()).hexdigest() - except: - model_hash = hashlib.md5(open(model_path,'rb').read()).hexdigest() - - return model_hash - - @staticmethod - def segment(wave, combine=True, chunk_size=DEFAULT_CHUNK_SIZE, margin_size=DEFAULT_MARGIN_SIZE): - """ - Segment or join segmented wave array - - Args: - wave: (np.array) Wave array to be segmented or joined - combine: (bool) If True, combines segmented wave array. If False, segments wave array. - chunk_size: (int) Size of each segment (in samples) - margin_size: (int) Size of margin between segments (in samples) - - Returns: - numpy array: Segmented or joined wave array - """ - - if combine: - processed_wave = None # Initializing as None instead of [] for later numpy array concatenation - for segment_count, segment in enumerate(wave): - start = 0 if segment_count == 0 else margin_size - end = None if segment_count == len(wave)-1 else -margin_size - if margin_size == 0: - end = None - if processed_wave is None: # Create array for first segment - processed_wave = segment[:, start:end] - else: # Concatenate to existing array for subsequent segments - processed_wave = np.concatenate((processed_wave, segment[:, start:end]), axis=-1) - - else: - processed_wave = [] - sample_count = wave.shape[-1] - - if chunk_size <= 0 or chunk_size > sample_count: - chunk_size = sample_count - - if margin_size > chunk_size: - margin_size = chunk_size - - for segment_count, skip in enumerate(range(0, sample_count, chunk_size)): - - margin = 0 if segment_count == 0 else margin_size - end = min(skip+chunk_size+margin_size, sample_count) - start = skip-margin - - cut = wave[:,start:end].copy() - processed_wave.append(cut) - - if end == sample_count: - break - - return processed_wave - - def pad_wave(self, wave): - """ - Pad the wave array to match the required chunk size - - Args: - wave: (np.array) Wave array to be padded - - Returns: - tuple: (padded_wave, pad, trim) - - padded_wave: Padded wave array - - pad: Number of samples that were padded - - trim: Number of samples that were trimmed - """ - n_sample = wave.shape[1] - trim = self.model.n_fft//2 - gen_size = self.model.chunk_size-2*trim - pad = gen_size - n_sample%gen_size - - # Padded wave - wave_p = np.concatenate((np.zeros((2,trim)), wave, np.zeros((2,pad)), np.zeros((2,trim))), 1) - - mix_waves = [] - for i in range(0, n_sample+pad, gen_size): - waves = np.array(wave_p[:, i:i+self.model.chunk_size]) - mix_waves.append(waves) - - mix_waves = torch.tensor(mix_waves, dtype=torch.float32).to(self.device) - - return mix_waves, pad, trim - - def _process_wave(self, mix_waves, trim, pad, q:queue.Queue, _id:int): - """ - Process each wave segment in a multi-threaded environment - - Args: - mix_waves: (torch.Tensor) Wave segments to be processed - trim: (int) Number of samples trimmed during padding - pad: (int) Number of samples padded during padding - q: (queue.Queue) Queue to hold the processed wave segments - _id: (int) Identifier of the processed wave segment - - Returns: - numpy array: Processed wave segment - """ - mix_waves = mix_waves.split(1) - with torch.no_grad(): - pw = [] - for mix_wave in mix_waves: - self.prog.update() - spec = self.model.stft(mix_wave) - 
processed_spec = torch.tensor(self.process(spec)) - processed_wav = self.model.istft(processed_spec.to(self.device)) - processed_wav = processed_wav[:,:,trim:-trim].transpose(0,1).reshape(2, -1).cpu().numpy() - pw.append(processed_wav) - processed_signal = np.concatenate(pw, axis=-1)[:, :-pad] - q.put({_id:processed_signal}) - return processed_signal - - def process_wave(self, wave:np.array, mt_threads=1): - """ - Process the wave array in a multi-threaded environment - - Args: - wave: (np.array) Wave array to be processed - mt_threads: (int) Number of threads to be used for processing - - Returns: - numpy array: Processed wave array - """ - self.prog = tqdm(total=0) - chunk = wave.shape[-1]//mt_threads - waves = self.segment(wave, False, chunk) - - # Create a queue to hold the processed wave segments - q = queue.Queue() - threads = [] - for c, batch in enumerate(waves): - mix_waves, pad, trim = self.pad_wave(batch) - self.prog.total = len(mix_waves)*mt_threads - thread = threading.Thread(target=self._process_wave, args=(mix_waves, trim, pad, q, c)) - thread.start() - threads.append(thread) - for thread in threads: - thread.join() - self.prog.close() - - processed_batches = [] - while not q.empty(): - processed_batches.append(q.get()) - processed_batches = [list(wave.values())[0] for wave in sorted(processed_batches, key=lambda d: list(d.keys())[0])] - assert len(processed_batches) == len(waves), 'Incomplete processed batches, please reduce batch size!' - return self.segment(processed_batches, True, chunk) \ No newline at end of file diff --git a/spaces/EuroPython2022/mmocr-demo/configs/_base_/det_datasets/ctw1500.py b/spaces/EuroPython2022/mmocr-demo/configs/_base_/det_datasets/ctw1500.py deleted file mode 100644 index 466ea7e1ea6871917bd6449019b48cd11c516a01..0000000000000000000000000000000000000000 --- a/spaces/EuroPython2022/mmocr-demo/configs/_base_/det_datasets/ctw1500.py +++ /dev/null @@ -1,18 +0,0 @@ -dataset_type = 'IcdarDataset' -data_root = 'data/ctw1500' - -train = dict( - type=dataset_type, - ann_file=f'{data_root}/instances_training.json', - img_prefix=f'{data_root}/imgs', - pipeline=None) - -test = dict( - type=dataset_type, - ann_file=f'{data_root}/instances_test.json', - img_prefix=f'{data_root}/imgs', - pipeline=None) - -train_list = [train] - -test_list = [test] diff --git a/spaces/FL33TW00D/whisper-turbo/_next/static/chunks/framework-581f102fc68ef277.js b/spaces/FL33TW00D/whisper-turbo/_next/static/chunks/framework-581f102fc68ef277.js deleted file mode 100644 index d681385fd36d08a875cd02254a2f5cd69fb3d69d..0000000000000000000000000000000000000000 --- a/spaces/FL33TW00D/whisper-turbo/_next/static/chunks/framework-581f102fc68ef277.js +++ /dev/null @@ -1,33 +0,0 @@ -"use strict";(self.webpackChunk_N_E=self.webpackChunk_N_E||[]).push([[774],{3746:function(e,n,t){/** - * @license React - * react-dom.production.min.js - * - * Copyright (c) Facebook, Inc. and its affiliates. - * - * This source code is licensed under the MIT license found in the - * LICENSE file in the root directory of this source tree. - */var r,l,a,u,o,i,s=t(959),c=t(2962);function f(e){for(var n="https://reactjs.org/docs/error-decoder.html?invariant="+e,t=1;t
    - - - - - - \ No newline at end of file diff --git a/spaces/Fengbinbin/gpt-academic/request_llm/test_llms.py b/spaces/Fengbinbin/gpt-academic/request_llm/test_llms.py deleted file mode 100644 index d043d6228e878d9517f9648449e05f752c701a25..0000000000000000000000000000000000000000 --- a/spaces/Fengbinbin/gpt-academic/request_llm/test_llms.py +++ /dev/null @@ -1,26 +0,0 @@ -""" -对各个llm模型进行单元测试 -""" -def validate_path(): - import os, sys - dir_name = os.path.dirname(__file__) - root_dir_assume = os.path.abspath(os.path.dirname(__file__) + '/..') - os.chdir(root_dir_assume) - sys.path.append(root_dir_assume) - -validate_path() # validate path so you can run from base directory - -from request_llm.bridge_jittorllms import predict_no_ui_long_connection - -llm_kwargs = { - 'max_length': 512, - 'top_p': 1, - 'temperature': 1, -} - -result = predict_no_ui_long_connection(inputs="你好", - llm_kwargs=llm_kwargs, - history=[], - sys_prompt="") - -print('result') \ No newline at end of file diff --git a/spaces/FrankZxShen/so-vits-svc-models-pcr/vencoder/hubert/hubert_model_onnx.py b/spaces/FrankZxShen/so-vits-svc-models-pcr/vencoder/hubert/hubert_model_onnx.py deleted file mode 100644 index d18f3c2a0fc29592a573a9780308d38f059640b9..0000000000000000000000000000000000000000 --- a/spaces/FrankZxShen/so-vits-svc-models-pcr/vencoder/hubert/hubert_model_onnx.py +++ /dev/null @@ -1,217 +0,0 @@ -import copy -import random -from typing import Optional, Tuple - -import torch -import torch.nn as nn -import torch.nn.functional as t_func -from torch.nn.modules.utils import consume_prefix_in_state_dict_if_present - - -class Hubert(nn.Module): - def __init__(self, num_label_embeddings: int = 100, mask: bool = True): - super().__init__() - self._mask = mask - self.feature_extractor = FeatureExtractor() - self.feature_projection = FeatureProjection() - self.positional_embedding = PositionalConvEmbedding() - self.norm = nn.LayerNorm(768) - self.dropout = nn.Dropout(0.1) - self.encoder = TransformerEncoder( - nn.TransformerEncoderLayer( - 768, 12, 3072, activation="gelu", batch_first=True - ), - 12, - ) - self.proj = nn.Linear(768, 256) - - self.masked_spec_embed = nn.Parameter(torch.FloatTensor(768).uniform_()) - self.label_embedding = nn.Embedding(num_label_embeddings, 256) - - def mask(self, x: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor]: - mask = None - if self.training and self._mask: - mask = _compute_mask((x.size(0), x.size(1)), 0.8, 10, x.device, 2) - x[mask] = self.masked_spec_embed.to(x.dtype) - return x, mask - - def encode( - self, x: torch.Tensor, layer: Optional[int] = None - ) -> Tuple[torch.Tensor, torch.Tensor]: - x = self.feature_extractor(x) - x = self.feature_projection(x.transpose(1, 2)) - x, mask = self.mask(x) - x = x + self.positional_embedding(x) - x = self.dropout(self.norm(x)) - x = self.encoder(x, output_layer=layer) - return x, mask - - def logits(self, x: torch.Tensor) -> torch.Tensor: - logits = torch.cosine_similarity( - x.unsqueeze(2), - self.label_embedding.weight.unsqueeze(0).unsqueeze(0), - dim=-1, - ) - return logits / 0.1 - - -class HubertSoft(Hubert): - def __init__(self): - super().__init__() - - def units(self, wav: torch.Tensor) -> torch.Tensor: - wav = t_func.pad(wav, ((400 - 320) // 2, (400 - 320) // 2)) - x, _ = self.encode(wav) - return self.proj(x) - - def forward(self, x): - return self.units(x) - -class FeatureExtractor(nn.Module): - def __init__(self): - super().__init__() - self.conv0 = nn.Conv1d(1, 512, 10, 5, bias=False) - self.norm0 = nn.GroupNorm(512, 512) - 
self.conv1 = nn.Conv1d(512, 512, 3, 2, bias=False) - self.conv2 = nn.Conv1d(512, 512, 3, 2, bias=False) - self.conv3 = nn.Conv1d(512, 512, 3, 2, bias=False) - self.conv4 = nn.Conv1d(512, 512, 3, 2, bias=False) - self.conv5 = nn.Conv1d(512, 512, 2, 2, bias=False) - self.conv6 = nn.Conv1d(512, 512, 2, 2, bias=False) - - def forward(self, x: torch.Tensor) -> torch.Tensor: - x = t_func.gelu(self.norm0(self.conv0(x))) - x = t_func.gelu(self.conv1(x)) - x = t_func.gelu(self.conv2(x)) - x = t_func.gelu(self.conv3(x)) - x = t_func.gelu(self.conv4(x)) - x = t_func.gelu(self.conv5(x)) - x = t_func.gelu(self.conv6(x)) - return x - - -class FeatureProjection(nn.Module): - def __init__(self): - super().__init__() - self.norm = nn.LayerNorm(512) - self.projection = nn.Linear(512, 768) - self.dropout = nn.Dropout(0.1) - - def forward(self, x: torch.Tensor) -> torch.Tensor: - x = self.norm(x) - x = self.projection(x) - x = self.dropout(x) - return x - - -class PositionalConvEmbedding(nn.Module): - def __init__(self): - super().__init__() - self.conv = nn.Conv1d( - 768, - 768, - kernel_size=128, - padding=128 // 2, - groups=16, - ) - self.conv = nn.utils.weight_norm(self.conv, name="weight", dim=2) - - def forward(self, x: torch.Tensor) -> torch.Tensor: - x = self.conv(x.transpose(1, 2)) - x = t_func.gelu(x[:, :, :-1]) - return x.transpose(1, 2) - - -class TransformerEncoder(nn.Module): - def __init__( - self, encoder_layer: nn.TransformerEncoderLayer, num_layers: int - ) -> None: - super(TransformerEncoder, self).__init__() - self.layers = nn.ModuleList( - [copy.deepcopy(encoder_layer) for _ in range(num_layers)] - ) - self.num_layers = num_layers - - def forward( - self, - src: torch.Tensor, - mask: torch.Tensor = None, - src_key_padding_mask: torch.Tensor = None, - output_layer: Optional[int] = None, - ) -> torch.Tensor: - output = src - for layer in self.layers[:output_layer]: - output = layer( - output, src_mask=mask, src_key_padding_mask=src_key_padding_mask - ) - return output - - -def _compute_mask( - shape: Tuple[int, int], - mask_prob: float, - mask_length: int, - device: torch.device, - min_masks: int = 0, -) -> torch.Tensor: - batch_size, sequence_length = shape - - if mask_length < 1: - raise ValueError("`mask_length` has to be bigger than 0.") - - if mask_length > sequence_length: - raise ValueError( - f"`mask_length` has to be smaller than `sequence_length`, but got `mask_length`: {mask_length} and `sequence_length`: {sequence_length}`" - ) - - # compute number of masked spans in batch - num_masked_spans = int(mask_prob * sequence_length / mask_length + random.random()) - num_masked_spans = max(num_masked_spans, min_masks) - - # make sure num masked indices <= sequence_length - if num_masked_spans * mask_length > sequence_length: - num_masked_spans = sequence_length // mask_length - - # SpecAugment mask to fill - mask = torch.zeros((batch_size, sequence_length), device=device, dtype=torch.bool) - - # uniform distribution to sample from, make sure that offset samples are < sequence_length - uniform_dist = torch.ones( - (batch_size, sequence_length - (mask_length - 1)), device=device - ) - - # get random indices to mask - mask_indices = torch.multinomial(uniform_dist, num_masked_spans) - - # expand masked indices to masked spans - mask_indices = ( - mask_indices.unsqueeze(dim=-1) - .expand((batch_size, num_masked_spans, mask_length)) - .reshape(batch_size, num_masked_spans * mask_length) - ) - offsets = ( - torch.arange(mask_length, device=device)[None, None, :] - .expand((batch_size, 
num_masked_spans, mask_length)) - .reshape(batch_size, num_masked_spans * mask_length) - ) - mask_idxs = mask_indices + offsets - - # scatter indices to mask - mask = mask.scatter(1, mask_idxs, True) - - return mask - - -def hubert_soft( - path: str, -) -> HubertSoft: - r"""HuBERT-Soft from `"A Comparison of Discrete and Soft Speech Units for Improved Voice Conversion"`. - Args: - path (str): path of a pretrained model - """ - hubert = HubertSoft() - checkpoint = torch.load(path) - consume_prefix_in_state_dict_if_present(checkpoint, "module.") - hubert.load_state_dict(checkpoint) - hubert.eval() - return hubert diff --git a/spaces/GIanlucaRub/DoubleResolution-Monitor/README.md b/spaces/GIanlucaRub/DoubleResolution-Monitor/README.md deleted file mode 100644 index e6590a978400d1670c20fce80dfe79c9f3f5a443..0000000000000000000000000000000000000000 --- a/spaces/GIanlucaRub/DoubleResolution-Monitor/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: DoubleResolution Monitor -emoji: 📚 -colorFrom: gray -colorTo: gray -sdk: gradio -sdk_version: 3.15.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/GT4SD/keyword_bert/model_cards/article.md b/spaces/GT4SD/keyword_bert/model_cards/article.md deleted file mode 100644 index 51924591d13eee134053b7f42d3d3b06ca59ab98..0000000000000000000000000000000000000000 --- a/spaces/GT4SD/keyword_bert/model_cards/article.md +++ /dev/null @@ -1,73 +0,0 @@ -# Model documentation & parameters - -**Algorithm version**: The model version to use. Note that *any* HF model can be wrapped into a `KeyBERT` model. - -**Text**: The main text prompt to "understand", i.e., generate keywords. - -**Minimum keyphrase ngram**: Lower bound for phrase size. Each keyword will have at least this many words. - -**Maximum keyphrase ngram**: Upper bound for phrase size. Each keyword will have at most this many words. - -**Stop words**: Stopwords to remove from the document. If not provided, no stop words are removed. - -**Use MaxSum**: To diversify the results, we take the `2 x MaxSum candidates` most similar words/phrases to the document. Then, we take all top_n combinations from the `2 x MaxSum candidates` and extract the combination whose members are least similar to each other by cosine similarity. This toggle controls whether max sum similarity is used for the generated keywords. - -**MaxSum candidates**: Candidates considered when enabling `Use MaxSum`. - -**Use Max. marginal relevance**: To diversify the results, we can use Maximal Marginal Relevance (MMR), which is also based on cosine similarity, to create keywords/keyphrases. - -**Diversity**: Diversity of the results when `Use Max. marginal relevance` is enabled. - -**Number of keywords**: How many keywords should be generated (maximum 50). These parameters map directly onto KeyBERT's `extract_keywords` arguments; a short usage sketch appears further down this card. - - -# Model card -- KeywordBERT - -**Model Details**: KeyBERT is a minimal and easy-to-use keyword extraction technique that leverages BERT embeddings to create keywords and keyphrases that are most similar to a document. - -**Developers**: Maarten Grootendorst. - -**Distributors**: Original developer's code from [https://github.com/MaartenGr/KeyBERT](https://github.com/MaartenGr/KeyBERT). - -**Model date**: 2020. - -**Model type**: Different BERT and SciBERT models, trained on [CIRCA data](https://circa.res.ibm.com/index.html). - -**Information about training algorithms, parameters, fairness constraints or other applied approaches, and features**: -N.A. 
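For reference, the following minimal sketch shows how the parameters documented above map onto the public KeyBERT API. It is an illustration only: the embedding model name (`all-MiniLM-L6-v2`) and the example document are placeholder assumptions, not the CIRCA-trained checkpoints this Space actually wraps.

```python
from keybert import KeyBERT

doc = "Molecules with high binding affinity were screened against the target protein."

# Placeholder embedding model; the Space itself wraps its own BERT/SciBERT checkpoints.
kw_model = KeyBERT(model="all-MiniLM-L6-v2")

keywords = kw_model.extract_keywords(
    doc,
    keyphrase_ngram_range=(1, 3),  # "Minimum"/"Maximum keyphrase ngram"
    stop_words="english",          # "Stop words" (None disables removal)
    use_maxsum=True,               # "Use MaxSum"
    nr_candidates=20,              # "MaxSum candidates"
    use_mmr=False,                 # "Use Max. marginal relevance"
    diversity=0.7,                 # "Diversity" (only used when use_mmr=True)
    top_n=10,                      # "Number of keywords"
)
print(keywords)  # list of (keyphrase, cosine-similarity score) tuples
```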
- -**Paper or other resource for more information**: -The [KeyBERT GitHub repo](https://github.com/MaartenGr/KeyBERT). - -**License**: MIT - -**Where to send questions or comments about the model**: Open an issue on [GT4SD repository](https://github.com/GT4SD/gt4sd-core). - -**Intended Use. Use cases that were envisioned during development**: N.A. - -**Primary intended uses/users**: N.A. - -**Out-of-scope use cases**: Production-level inference. - -**Metrics**: N.A. - -**Datasets**: N.A. - -**Ethical Considerations**: Unclear, please consult with original authors in case of questions. - -**Caveats and Recommendations**: Unclear, please consult with original authors in case of questions. - -Model card prototype inspired by [Mitchell et al. (2019)](https://dl.acm.org/doi/abs/10.1145/3287560.3287596?casa_token=XD4eHiE2cRUAAAAA:NL11gMa1hGPOUKTAbtXnbVQBDBbjxwcjGECF_i-WC_3g1aBgU1Hbz_f2b4kI_m1in-w__1ztGeHnwHs) - -## Citation -```bib -@misc{grootendorst2020keybert, - author = {Maarten Grootendorst}, - title = {KeyBERT: Minimal keyword extraction with BERT.}, - year = 2020, - publisher = {Zenodo}, - version = {v0.3.0}, - doi = {10.5281/zenodo.4461265}, - url = {https://doi.org/10.5281/zenodo.4461265} -} -``` \ No newline at end of file diff --git a/spaces/GaenKoki/voicevox/build_util/process_voicevox_resource.bash b/spaces/GaenKoki/voicevox/build_util/process_voicevox_resource.bash deleted file mode 100644 index 1b0cfe285e8e092296ec728a328385f8b91b3378..0000000000000000000000000000000000000000 --- a/spaces/GaenKoki/voicevox/build_util/process_voicevox_resource.bash +++ /dev/null @@ -1,26 +0,0 @@ -set -eux - -if [ ! -v DOWNLOAD_RESOURCE_PATH ]; then - echo "DOWNLOAD_RESOURCE_PATHが未定義です" - exit 1 -fi - -rm -r speaker_info -cp -r $DOWNLOAD_RESOURCE_PATH/character_info speaker_info - -python $DOWNLOAD_RESOURCE_PATH/scripts/clean_character_info.py \ - --character_info_dir speaker_info/ - -# マニフェスト -jq -s '.[0] * .[1]' engine_manifest.json $DOWNLOAD_RESOURCE_PATH/engine/engine_manifest.json \ - > engine_manifest.json.tmp -mv engine_manifest.json.tmp engine_manifest.json - -python build_util/merge_update_infos.py \ - engine_manifest_assets/update_infos.json \ - $DOWNLOAD_RESOURCE_PATH/engine/engine_manifest_assets/update_infos.json \ - engine_manifest_assets/update_infos.json - -for f in $(ls $DOWNLOAD_RESOURCE_PATH/engine/engine_manifest_assets/* | grep -v update_infos.json); do - cp $f ./engine_manifest_assets/ -done diff --git a/spaces/Gen-Sim/Gen-Sim/scripts/train_all_single_task.sh b/spaces/Gen-Sim/Gen-Sim/scripts/train_all_single_task.sh deleted file mode 100644 index 52c4ac073dce10bc587e26dcf61eba7eba58fb31..0000000000000000000000000000000000000000 --- a/spaces/Gen-Sim/Gen-Sim/scripts/train_all_single_task.sh +++ /dev/null @@ -1,112 +0,0 @@ -sh scripts/train_test_single_task.sh data align-rope -sh scripts/train_test_single_task.sh data assembling-kits-seq -sh scripts/train_test_single_task.sh data palletizing-boxes -sh scripts/train_test_single_task.sh data towers-of-hanoi -sh scripts/train_test_single_task.sh data assembling-kits -sh scripts/train_test_single_task.sh data align-box-corner -sh scripts/train_test_single_task.sh data manipulating-rope -sh scripts/train_test_single_task.sh data packing-boxes -sh scripts/train_test_single_task.sh data place-red-in-green -sh scripts/train_test_single_task.sh data put-block-in-bowl -sh scripts/train_test_single_task.sh data task -sh scripts/train_test_single_task.sh data packing-boxes-pairs -sh scripts/train_test_single_task.sh data sweeping-piles -sh 
scripts/train_test_single_task.sh data separating-piles -sh scripts/train_test_single_task.sh data stack-block-pyramid-seq -sh scripts/train_test_single_task.sh data towers-of-hanoi-seq -sh scripts/train_test_single_task.sh data packing-shapes -sh scripts/train_test_single_task.sh data stack-block-pyramid -sh scripts/train_test_single_task.sh data block-insertion -sh scripts/train_test_single_task.sh data packing-google-objects -sh scripts/train_test_single_task.sh data color-coordinated-ball-stacking -sh scripts/train_test_single_task.sh data cylinder-ring-stack -sh scripts/train_test_single_task.sh data color-ordered-blocks-on-pallet -sh scripts/train_test_single_task.sh data build-cylinder-structure -sh scripts/train_test_single_task.sh data build-bridge -sh scripts/train_test_single_task.sh data pyramid-blocks-assemble -sh scripts/train_test_single_task.sh data sort-and-assemble-block-castle -sh scripts/train_test_single_task.sh data stack-blocks-in-container -sh scripts/train_test_single_task.sh data corner-sort-cylinders -sh scripts/train_test_single_task.sh data align-pair-colored-blocks-along-line -sh scripts/train_test_single_task.sh data color-specific-container-fill -sh scripts/train_test_single_task.sh data colored-cylinder-in-square -sh scripts/train_test_single_task.sh data construct-colorful-arch -sh scripts/train_test_single_task.sh data color-coordinated-ball-insertion -sh scripts/train_test_single_task.sh data insert-sphere-into-container -sh scripts/train_test_single_task.sh data build-wheel -sh scripts/train_test_single_task.sh data color-coordinated-sphere-and-cylinder-assembly -sh scripts/train_test_single_task.sh data push-piles-into-letter -sh scripts/train_test_single_task.sh data color-coordinated-zone-stacking -sh scripts/train_test_single_task.sh data create-pyramid-with-color-coded-ells -sh scripts/train_test_single_task.sh data color-coordinated-arch-construction -sh scripts/train_test_single_task.sh data color-coordinated-sphere-insertion -sh scripts/train_test_single_task.sh data move-piles-along-line -sh scripts/train_test_single_task.sh data insert-ell-along-square-path -sh scripts/train_test_single_task.sh data multi-level-block-construction -sh scripts/train_test_single_task.sh data build-car -sh scripts/train_test_single_task.sh data color-coded-blocks-on-corner -sh scripts/train_test_single_task.sh data multi-level-insertion-and-zone-matching -sh scripts/train_test_single_task.sh data color-coordinated-insertion -sh scripts/train_test_single_task.sh data triangle-block-arrangement -sh scripts/train_test_single_task.sh data ball-in-bowl-obstacle-course-new -sh scripts/train_test_single_task.sh data colorful-block-tower-on-cylinder-base -sh scripts/train_test_single_task.sh data manipulating-two-ropes -sh scripts/train_test_single_task.sh data construct-corner-building -sh scripts/train_test_single_task.sh data color-coordinated-block-bridge -sh scripts/train_test_single_task.sh data color-sequenced-sphere-placement -sh scripts/train_test_single_task.sh data construct-corner-blocks -sh scripts/train_test_single_task.sh data sort-insert-color-coordinated-blocks -sh scripts/train_test_single_task.sh data color-ordered-container-arrangement -sh scripts/train_test_single_task.sh data symmetric-block-bridge-construction -sh scripts/train_test_single_task.sh data connect-boxes-with-rope -sh scripts/train_test_single_task.sh data vertical-insertion-blocks -sh scripts/train_test_single_task.sh data cylinder-stand-alignment -sh scripts/train_test_single_task.sh 
data color-coordinated-zone-arrangement -sh scripts/train_test_single_task.sh data insert-blocks-lineup -sh scripts/train_test_single_task.sh data create-pyramid-blocks-and-container -sh scripts/train_test_single_task.sh data mix-piles -sh scripts/train_test_single_task.sh data color-sequenced-pyramid-packing -sh scripts/train_test_single_task.sh data color-coordinated-cylinder-pyramid -sh scripts/train_test_single_task.sh data sweep-and-sort-blocks -sh scripts/train_test_single_task.sh data multi-level-pyramid-construction -sh scripts/train_test_single_task.sh data guided-block-path -sh scripts/train_test_single_task.sh data rainbow-stack -sh scripts/train_test_single_task.sh data color-ordered-insertion-new -sh scripts/train_test_single_task.sh data mixed-color-block-barrier-insertion -sh scripts/train_test_single_task.sh data color-coordinated-block-shifting -sh scripts/train_test_single_task.sh data align-balls-in-colored-zones -sh scripts/train_test_single_task.sh data multicolor-block-bridge -sh scripts/train_test_single_task.sh data sequential-insertion-and-stacking -sh scripts/train_test_single_task.sh data insertion-in-color-sequenced-zones -sh scripts/train_test_single_task.sh data align-spheres-in-colored-zones -sh scripts/train_test_single_task.sh data color-blocks-in-cylinder-maze -sh scripts/train_test_single_task.sh data color-coordinated-sphere-on-pallet-pyramid -sh scripts/train_test_single_task.sh data sort-and-stack-clr-blocks -sh scripts/train_test_single_task.sh data corner-block-challenge -sh scripts/train_test_single_task.sh data sequential-block-insertion -sh scripts/train_test_single_task.sh data sphere-container-color-match -sh scripts/train_test_single_task.sh data stack-color-coordinated-blocks -sh scripts/train_test_single_task.sh data assemble-single-car -sh scripts/train_test_single_task.sh data color-structured-block-tower -sh scripts/train_test_single_task.sh data color-sorted-block-race -sh scripts/train_test_single_task.sh data align-balls-in-colored-boxes -sh scripts/train_test_single_task.sh data color-coordinated-cylinder-ball-match -sh scripts/train_test_single_task.sh data build-house -sh scripts/train_test_single_task.sh data align-cylinders-in-zones -sh scripts/train_test_single_task.sh data sphere-align-stand -sh scripts/train_test_single_task.sh data ball-in-bowl-obstacle-course -sh scripts/train_test_single_task.sh data color-coordinated-block-tower -sh scripts/train_test_single_task.sh data color-sorted-container-stack -sh scripts/train_test_single_task.sh data color-coordinated-cylinder-stand-assembly -sh scripts/train_test_single_task.sh data color-ordered-insertion -sh scripts/train_test_single_task.sh data block-pyramid-with-limited-space -sh scripts/train_test_single_task.sh data color-cued-ball-corner-sorting -sh scripts/train_test_single_task.sh data sorting-blocks-into-pallets -sh scripts/train_test_single_task.sh data place-ball-in-elevated-bowl -sh scripts/train_test_single_task.sh data Four-corner-pyramid-challenge -sh scripts/train_test_single_task.sh data colored-balls-sorting-in-corner -sh scripts/train_test_single_task.sh data color-coordinated-box-ball-matching -sh scripts/train_test_single_task.sh data color-coordinated-cylinder-tower -sh scripts/train_test_single_task.sh data ball-sorting-with-blocks-barrier -sh scripts/train_test_single_task.sh data build-two-circles -sh scripts/train_test_single_task.sh data cylinder-balancing-and-placement \ No newline at end of file diff --git 
a/spaces/Gradio-Blocks/protGPT2_gradioFold/alphafold/alphafold/data/tools/hmmsearch.py b/spaces/Gradio-Blocks/protGPT2_gradioFold/alphafold/alphafold/data/tools/hmmsearch.py deleted file mode 100644 index a60d3e760e217f175b7daeffb803837e23391b0a..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/protGPT2_gradioFold/alphafold/alphafold/data/tools/hmmsearch.py +++ /dev/null @@ -1,90 +0,0 @@ -# Copyright 2021 DeepMind Technologies Limited -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -"""A Python wrapper for hmmsearch - search profile against a sequence db.""" - -import os -import subprocess -from typing import Optional, Sequence - -from absl import logging -from alphafold.data.tools import utils -# Internal import (7716). - - -class Hmmsearch(object): - """Python wrapper of the hmmsearch binary.""" - - def __init__(self, - *, - binary_path: str, - database_path: str, - flags: Optional[Sequence[str]] = None): - """Initializes the Python hmmsearch wrapper. - - Args: - binary_path: The path to the hmmsearch executable. - database_path: The path to the hmmsearch database (FASTA format). - flags: List of flags to be used by hmmsearch. - - Raises: - RuntimeError: If hmmsearch binary not found within the path. - """ - self.binary_path = binary_path - self.database_path = database_path - self.flags = flags - - if not os.path.exists(self.database_path): - logging.error('Could not find hmmsearch database %s', database_path) - raise ValueError(f'Could not find hmmsearch database {database_path}') - - def query(self, hmm: str) -> str: - """Queries the database using hmmsearch using a given hmm.""" - with utils.tmpdir_manager(base_dir='/tmp') as query_tmp_dir: - hmm_input_path = os.path.join(query_tmp_dir, 'query.hmm') - a3m_out_path = os.path.join(query_tmp_dir, 'output.a3m') - with open(hmm_input_path, 'w') as f: - f.write(hmm) - - cmd = [ - self.binary_path, - '--noali', # Don't include the alignment in stdout. 
- '--cpu', '8' - ] - # If adding flags, we have to do so before the output and input: - if self.flags: - cmd.extend(self.flags) - cmd.extend([ - '-A', a3m_out_path, - hmm_input_path, - self.database_path, - ]) - - logging.info('Launching sub-process %s', cmd) - process = subprocess.Popen( - cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE) - with utils.timing( - f'hmmsearch ({os.path.basename(self.database_path)}) query'): - stdout, stderr = process.communicate() - retcode = process.wait() - - if retcode: - raise RuntimeError( - 'hmmsearch failed:\nstdout:\n%s\n\nstderr:\n%s\n' % ( - stdout.decode('utf-8'), stderr.decode('utf-8'))) - - with open(a3m_out_path) as f: - a3m_out = f.read() - - return a3m_out diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/cityscapes/mask_rcnn_r50_fpn_1x_cityscapes.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/cityscapes/mask_rcnn_r50_fpn_1x_cityscapes.py deleted file mode 100644 index 0a4d7ca86e5eef1e0b82837f744c1fcbd368ab86..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/cityscapes/mask_rcnn_r50_fpn_1x_cityscapes.py +++ /dev/null @@ -1,46 +0,0 @@ -_base_ = [ - '../_base_/models/mask_rcnn_r50_fpn.py', - '../_base_/datasets/cityscapes_instance.py', '../_base_/default_runtime.py' -] -model = dict( - pretrained=None, - roi_head=dict( - bbox_head=dict( - type='Shared2FCBBoxHead', - in_channels=256, - fc_out_channels=1024, - roi_feat_size=7, - num_classes=8, - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[0., 0., 0., 0.], - target_stds=[0.1, 0.1, 0.2, 0.2]), - reg_class_agnostic=False, - loss_cls=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0), - loss_bbox=dict(type='SmoothL1Loss', beta=1.0, loss_weight=1.0)), - mask_head=dict( - type='FCNMaskHead', - num_convs=4, - in_channels=256, - conv_out_channels=256, - num_classes=8, - loss_mask=dict( - type='CrossEntropyLoss', use_mask=True, loss_weight=1.0)))) -# optimizer -# lr is set for a batch size of 8 -optimizer = dict(type='SGD', lr=0.01, momentum=0.9, weight_decay=0.0001) -optimizer_config = dict(grad_clip=None) -# learning policy -lr_config = dict( - policy='step', - warmup='linear', - warmup_iters=500, - warmup_ratio=0.001, - # [7] yields higher performance than [6] - step=[7]) -runner = dict( - type='EpochBasedRunner', max_epochs=8) # actual epoch = 8 * 8 = 64 -log_config = dict(interval=100) -# For better, more stable performance initialize from COCO -load_from = 'https://download.openmmlab.com/mmdetection/v2.0/mask_rcnn/mask_rcnn_r50_fpn_1x_coco/mask_rcnn_r50_fpn_1x_coco_20200205-d4b0c5d6.pth' # noqa diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/mmcv_custom/runner/checkpoint.py b/spaces/Gradio-Blocks/uniformer_image_detection/mmcv_custom/runner/checkpoint.py deleted file mode 100644 index b04167e0fc5f16bc33e793830ebb9c4ef15ef1ed..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/mmcv_custom/runner/checkpoint.py +++ /dev/null @@ -1,85 +0,0 @@ -# Copyright (c) Open-MMLab. All rights reserved. -import os.path as osp -import time -from tempfile import TemporaryDirectory - -import torch -from torch.optim import Optimizer - -import mmcv -from mmcv.parallel import is_module_wrapper -from mmcv.runner.checkpoint import weights_to_cpu, get_state_dict - -try: - import apex -except: - print('apex is not installed') - - -def save_checkpoint(model, filename, optimizer=None, meta=None): - """Save checkpoint to file. 
- - The checkpoint will have 4 fields: ``meta``, ``state_dict`` and - ``optimizer``, ``amp``. By default ``meta`` will contain version - and time info. - - Args: - model (Module): Module whose params are to be saved. - filename (str): Checkpoint filename. - optimizer (:obj:`Optimizer`, optional): Optimizer to be saved. - meta (dict, optional): Metadata to be saved in checkpoint. - """ - if meta is None: - meta = {} - elif not isinstance(meta, dict): - raise TypeError(f'meta must be a dict or None, but got {type(meta)}') - meta.update(mmcv_version=mmcv.__version__, time=time.asctime()) - - if is_module_wrapper(model): - model = model.module - - if hasattr(model, 'CLASSES') and model.CLASSES is not None: - # save class name to the meta - meta.update(CLASSES=model.CLASSES) - - checkpoint = { - 'meta': meta, - 'state_dict': weights_to_cpu(get_state_dict(model)) - } - # save optimizer state dict in the checkpoint - if isinstance(optimizer, Optimizer): - checkpoint['optimizer'] = optimizer.state_dict() - elif isinstance(optimizer, dict): - checkpoint['optimizer'] = {} - for name, optim in optimizer.items(): - checkpoint['optimizer'][name] = optim.state_dict() - - # save amp state dict in the checkpoint - checkpoint['amp'] = apex.amp.state_dict() - - if filename.startswith('pavi://'): - try: - from pavi import modelcloud - from pavi.exception import NodeNotFoundError - except ImportError: - raise ImportError( - 'Please install pavi to load checkpoint from modelcloud.') - model_path = filename[7:] - root = modelcloud.Folder() - model_dir, model_name = osp.split(model_path) - try: - model = modelcloud.get(model_dir) - except NodeNotFoundError: - model = root.create_training_model(model_dir) - with TemporaryDirectory() as tmp_dir: - checkpoint_file = osp.join(tmp_dir, model_name) - with open(checkpoint_file, 'wb') as f: - torch.save(checkpoint, f) - f.flush() - model.create_file(checkpoint_file, name=model_name) - else: - mmcv.mkdir_or_exist(osp.dirname(filename)) - # immediately flush buffer - with open(filename, 'wb') as f: - torch.save(checkpoint, f) - f.flush() diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/ann/ann_r50-d8_512x1024_40k_cityscapes.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/ann/ann_r50-d8_512x1024_40k_cityscapes.py deleted file mode 100644 index 00b2594ba8a1c9edc90cca7a6d7c3334fa209edc..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/ann/ann_r50-d8_512x1024_40k_cityscapes.py +++ /dev/null @@ -1,4 +0,0 @@ -_base_ = [ - '../_base_/models/ann_r50-d8.py', '../_base_/datasets/cityscapes.py', - '../_base_/default_runtime.py', '../_base_/schedules/schedule_40k.py' -] diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/mobilenet_v2/fcn_m-v2-d8_512x1024_80k_cityscapes.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/mobilenet_v2/fcn_m-v2-d8_512x1024_80k_cityscapes.py deleted file mode 100644 index a535bd0ed8a4883134acdc52cf3f77c8d897ce82..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/mobilenet_v2/fcn_m-v2-d8_512x1024_80k_cityscapes.py +++ /dev/null @@ -1,12 +0,0 @@ -_base_ = '../fcn/fcn_r101-d8_512x1024_80k_cityscapes.py' -model = dict( - pretrained='mmcls://mobilenet_v2', - backbone=dict( - _delete_=True, - type='MobileNetV2', - widen_factor=1., - strides=(1, 2, 2, 1, 1, 1, 1), - dilations=(1, 1, 1, 2, 2, 4, 4), - out_indices=(1, 2, 4, 6)), - decode_head=dict(in_channels=320), - 
auxiliary_head=dict(in_channels=96)) diff --git a/spaces/GrandaddyShmax/AudioCraft_Plus/audiocraft/modules/rope.py b/spaces/GrandaddyShmax/AudioCraft_Plus/audiocraft/modules/rope.py deleted file mode 100644 index 503e6748df2bb72b3c864c20b37cba5498ffdd21..0000000000000000000000000000000000000000 --- a/spaces/GrandaddyShmax/AudioCraft_Plus/audiocraft/modules/rope.py +++ /dev/null @@ -1,121 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import typing as tp - -from torch import nn -import torch - - -class XPos(nn.Module): - """Length-extrapolatable positional embedding (xPos) from [Sun et al 2022](https://arxiv.org/abs/2212.10554v1). - This applies an exponential decay to the RoPE rotation matrix. - - Args: - dim (int): Embedding dimension. - smoothing (float): Smoothing factor applied to the decay rates. - base_scale (int): Base decay rate, given in terms of scaling time. - device (torch.device, optional): Device on which to initialize the module. - dtype (torch.dtype): dtype to use to generate the embedding. - """ - def __init__(self, dim: int, smoothing: float = 0.4, base_scale: int = 512, - device=None, dtype: torch.dtype = torch.float32): - super().__init__() - assert dim % 2 == 0 - assert dtype in [torch.float64, torch.float32] - self.dtype = dtype - self.base_scale = base_scale - - half_dim = dim // 2 - adim = torch.arange(half_dim, device=device, dtype=dtype) - decay_rates = (adim / half_dim + smoothing) / (1.0 + smoothing) - self.register_buffer("decay_rates", decay_rates) - self.decay: tp.Optional[torch.Tensor] = None - - def get_decay(self, start: int, end: int): - """Create complex decay tensor, cache values for fast computation.""" - if self.decay is None or end > self.decay.shape[0]: - assert isinstance(self.decay_rates, torch.Tensor) # Satisfy type checker. - idx = torch.arange(end, device=self.decay_rates.device, dtype=self.dtype) - power = idx / self.base_scale - scale = self.decay_rates ** power.unsqueeze(-1) - self.decay = torch.polar(scale, torch.zeros_like(scale)) - return self.decay[start:end] # [T, C/2] - - -class RotaryEmbedding(nn.Module): - """Rotary positional embedding (RoPE) from [Su et al 2022](https://arxiv.org/abs/2104.09864). - - Args: - dim (int): Embedding dimension (twice the number of frequencies). - max_period (float): Maximum period of the rotation frequencies. - xpos (bool): Use xPos, applies an exponential decay to rotation matrix. - scale (float): Scale of positional embedding, set to 0 to deactivate. - device (torch.device, optional): Device on which to initialize the module. - dtype (torch.dtype): dtype to use to generate the embedding. 
- """ - def __init__(self, dim: int, max_period: float = 10000.0, xpos: bool = False, - scale: float = 1.0, device=None, dtype: torch.dtype = torch.float32): - super().__init__() - assert dim % 2 == 0 - self.scale = scale - assert dtype in [torch.float64, torch.float32] - self.dtype = dtype - - adim = torch.arange(0, dim, 2, device=device, dtype=dtype)[: (dim // 2)] - frequencies = 1.0 / (max_period ** (adim / dim)) - self.register_buffer("frequencies", frequencies) - self.rotation: tp.Optional[torch.Tensor] = None - - self.xpos = XPos(dim, device=device, dtype=dtype) if xpos else None - - def get_rotation(self, start: int, end: int): - """Create complex rotation tensor, cache values for fast computation.""" - if self.rotation is None or end > self.rotation.shape[0]: - assert isinstance(self.frequencies, torch.Tensor) # Satisfy type checker. - idx = torch.arange(end, device=self.frequencies.device, dtype=self.dtype) - angles = torch.outer(idx, self.frequencies) - self.rotation = torch.polar(torch.ones_like(angles), angles) - return self.rotation[start:end] - - def rotate(self, x: torch.Tensor, start: int = 0, invert_decay: bool = False): - """Apply rope rotation to query or key tensor.""" - T = x.shape[1] - rotation = self.get_rotation(start, start + T).unsqueeze(0).unsqueeze(2) - - if self.xpos: - decay = self.xpos.get_decay(start, start + T).unsqueeze(0).unsqueeze(2) - else: - decay = 1.0 - - if invert_decay: - decay = decay ** -1 - - x_complex = torch.view_as_complex(x.to(self.dtype).reshape(*x.shape[:-1], -1, 2)) - scaled_rotation = (rotation * decay) * self.scale + (1.0 - self.scale) - x_out = torch.view_as_real(x_complex * scaled_rotation).flatten(-2) - - return x_out.type_as(x) - - def rotate_qk(self, query: torch.Tensor, key: torch.Tensor, start: int = 0): - """ Apply rope rotation to both query and key tensors. - Supports streaming mode, in which query and key are not expected to have the same shape. - In streaming mode, key will be of length [P + C] with P the cached past timesteps, but - query will be [C] (typically C == 1). - - Args: - query (torch.Tensor): Query to rotate. - key (torch.Tensor): Key to rotate. - start (int): Start index of the sequence for time offset. - """ - query_timesteps = query.shape[1] - key_timesteps = key.shape[1] - streaming_offset = key_timesteps - query_timesteps - - query_out = self.rotate(query, start + streaming_offset) - key_out = self.rotate(key, start, invert_decay=True) - - return query_out, key_out diff --git a/spaces/HALLA/HALL-E/html2canvas.js b/spaces/HALLA/HALL-E/html2canvas.js deleted file mode 100644 index dd1606d8698aae0ed4877058d6a218fda3a515cd..0000000000000000000000000000000000000000 --- a/spaces/HALLA/HALL-E/html2canvas.js +++ /dev/null @@ -1,7756 +0,0 @@ -/*! - * html2canvas 1.4.1 - * Copyright (c) 2022 Niklas von Hertzen - * Released under MIT License - */ -(function (global, factory) { - typeof exports === 'object' && typeof module !== 'undefined' ? module.exports = factory() : - typeof define === 'function' && define.amd ? define(factory) : - (global = typeof globalThis !== 'undefined' ? globalThis : global || self, global.html2canvas = factory()); -}(this, (function () { 'use strict'; - - /*! ***************************************************************************** - Copyright (c) Microsoft Corporation. - - Permission to use, copy, modify, and/or distribute this software for any - purpose with or without fee is hereby granted. 
- - THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH - REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY - AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY SPECIAL, DIRECT, - INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM - LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR - OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR - PERFORMANCE OF THIS SOFTWARE. - ***************************************************************************** */ - /* global Reflect, Promise */ - - var extendStatics = function(d, b) { - extendStatics = Object.setPrototypeOf || - ({ __proto__: [] } instanceof Array && function (d, b) { d.__proto__ = b; }) || - function (d, b) { for (var p in b) if (Object.prototype.hasOwnProperty.call(b, p)) d[p] = b[p]; }; - return extendStatics(d, b); - }; - - function __extends(d, b) { - if (typeof b !== "function" && b !== null) - throw new TypeError("Class extends value " + String(b) + " is not a constructor or null"); - extendStatics(d, b); - function __() { this.constructor = d; } - d.prototype = b === null ? Object.create(b) : (__.prototype = b.prototype, new __()); - } - - var __assign = function() { - __assign = Object.assign || function __assign(t) { - for (var s, i = 1, n = arguments.length; i < n; i++) { - s = arguments[i]; - for (var p in s) if (Object.prototype.hasOwnProperty.call(s, p)) t[p] = s[p]; - } - return t; - }; - return __assign.apply(this, arguments); - }; - - function __awaiter(thisArg, _arguments, P, generator) { - function adopt(value) { return value instanceof P ? value : new P(function (resolve) { resolve(value); }); } - return new (P || (P = Promise))(function (resolve, reject) { - function fulfilled(value) { try { step(generator.next(value)); } catch (e) { reject(e); } } - function rejected(value) { try { step(generator["throw"](value)); } catch (e) { reject(e); } } - function step(result) { result.done ? resolve(result.value) : adopt(result.value).then(fulfilled, rejected); } - step((generator = generator.apply(thisArg, _arguments || [])).next()); - }); - } - - function __generator(thisArg, body) { - var _ = { label: 0, sent: function() { if (t[0] & 1) throw t[1]; return t[1]; }, trys: [], ops: [] }, f, y, t, g; - return g = { next: verb(0), "throw": verb(1), "return": verb(2) }, typeof Symbol === "function" && (g[Symbol.iterator] = function() { return this; }), g; - function verb(n) { return function (v) { return step([n, v]); }; } - function step(op) { - if (f) throw new TypeError("Generator is already executing."); - while (_) try { - if (f = 1, y && (t = op[0] & 2 ? y["return"] : op[0] ? 
y["throw"] || ((t = y["return"]) && t.call(y), 0) : y.next) && !(t = t.call(y, op[1])).done) return t; - if (y = 0, t) op = [op[0] & 2, t.value]; - switch (op[0]) { - case 0: case 1: t = op; break; - case 4: _.label++; return { value: op[1], done: false }; - case 5: _.label++; y = op[1]; op = [0]; continue; - case 7: op = _.ops.pop(); _.trys.pop(); continue; - default: - if (!(t = _.trys, t = t.length > 0 && t[t.length - 1]) && (op[0] === 6 || op[0] === 2)) { _ = 0; continue; } - if (op[0] === 3 && (!t || (op[1] > t[0] && op[1] < t[3]))) { _.label = op[1]; break; } - if (op[0] === 6 && _.label < t[1]) { _.label = t[1]; t = op; break; } - if (t && _.label < t[2]) { _.label = t[2]; _.ops.push(op); break; } - if (t[2]) _.ops.pop(); - _.trys.pop(); continue; - } - op = body.call(thisArg, _); - } catch (e) { op = [6, e]; y = 0; } finally { f = t = 0; } - if (op[0] & 5) throw op[1]; return { value: op[0] ? op[1] : void 0, done: true }; - } - } - - function __spreadArray(to, from, pack) { - if (pack || arguments.length === 2) for (var i = 0, l = from.length, ar; i < l; i++) { - if (ar || !(i in from)) { - if (!ar) ar = Array.prototype.slice.call(from, 0, i); - ar[i] = from[i]; - } - } - return to.concat(ar || from); - } - - var Bounds = /** @class */ (function () { - function Bounds(left, top, width, height) { - this.left = left; - this.top = top; - this.width = width; - this.height = height; - } - Bounds.prototype.add = function (x, y, w, h) { - return new Bounds(this.left + x, this.top + y, this.width + w, this.height + h); - }; - Bounds.fromClientRect = function (context, clientRect) { - return new Bounds(clientRect.left + context.windowBounds.left, clientRect.top + context.windowBounds.top, clientRect.width, clientRect.height); - }; - Bounds.fromDOMRectList = function (context, domRectList) { - var domRect = Array.from(domRectList).find(function (rect) { return rect.width !== 0; }); - return domRect - ? 
new Bounds(domRect.left + context.windowBounds.left, domRect.top + context.windowBounds.top, domRect.width, domRect.height) - : Bounds.EMPTY; - }; - Bounds.EMPTY = new Bounds(0, 0, 0, 0); - return Bounds; - }()); - var parseBounds = function (context, node) { - return Bounds.fromClientRect(context, node.getBoundingClientRect()); - }; - var parseDocumentSize = function (document) { - var body = document.body; - var documentElement = document.documentElement; - if (!body || !documentElement) { - throw new Error("Unable to get document size"); - } - var width = Math.max(Math.max(body.scrollWidth, documentElement.scrollWidth), Math.max(body.offsetWidth, documentElement.offsetWidth), Math.max(body.clientWidth, documentElement.clientWidth)); - var height = Math.max(Math.max(body.scrollHeight, documentElement.scrollHeight), Math.max(body.offsetHeight, documentElement.offsetHeight), Math.max(body.clientHeight, documentElement.clientHeight)); - return new Bounds(0, 0, width, height); - }; - - /* - * css-line-break 2.1.0 - * Copyright (c) 2022 Niklas von Hertzen - * Released under MIT License - */ - var toCodePoints$1 = function (str) { - var codePoints = []; - var i = 0; - var length = str.length; - while (i < length) { - var value = str.charCodeAt(i++); - if (value >= 0xd800 && value <= 0xdbff && i < length) { - var extra = str.charCodeAt(i++); - if ((extra & 0xfc00) === 0xdc00) { - codePoints.push(((value & 0x3ff) << 10) + (extra & 0x3ff) + 0x10000); - } - else { - codePoints.push(value); - i--; - } - } - else { - codePoints.push(value); - } - } - return codePoints; - }; - var fromCodePoint$1 = function () { - var codePoints = []; - for (var _i = 0; _i < arguments.length; _i++) { - codePoints[_i] = arguments[_i]; - } - if (String.fromCodePoint) { - return String.fromCodePoint.apply(String, codePoints); - } - var length = codePoints.length; - if (!length) { - return ''; - } - var codeUnits = []; - var index = -1; - var result = ''; - while (++index < length) { - var codePoint = codePoints[index]; - if (codePoint <= 0xffff) { - codeUnits.push(codePoint); - } - else { - codePoint -= 0x10000; - codeUnits.push((codePoint >> 10) + 0xd800, (codePoint % 0x400) + 0xdc00); - } - if (index + 1 === length || codeUnits.length > 0x4000) { - result += String.fromCharCode.apply(String, codeUnits); - codeUnits.length = 0; - } - } - return result; - }; - var chars$2 = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/'; - // Use a lookup table to find the index. - var lookup$2 = typeof Uint8Array === 'undefined' ? [] : new Uint8Array(256); - for (var i$2 = 0; i$2 < chars$2.length; i$2++) { - lookup$2[chars$2.charCodeAt(i$2)] = i$2; - } - - /* - * utrie 1.0.2 - * Copyright (c) 2022 Niklas von Hertzen - * Released under MIT License - */ - var chars$1$1 = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/'; - // Use a lookup table to find the index. - var lookup$1$1 = typeof Uint8Array === 'undefined' ? [] : new Uint8Array(256); - for (var i$1$1 = 0; i$1$1 < chars$1$1.length; i$1$1++) { - lookup$1$1[chars$1$1.charCodeAt(i$1$1)] = i$1$1; - } - var decode$1 = function (base64) { - var bufferLength = base64.length * 0.75, len = base64.length, i, p = 0, encoded1, encoded2, encoded3, encoded4; - if (base64[base64.length - 1] === '=') { - bufferLength--; - if (base64[base64.length - 2] === '=') { - bufferLength--; - } - } - var buffer = typeof ArrayBuffer !== 'undefined' && - typeof Uint8Array !== 'undefined' && - typeof Uint8Array.prototype.slice !== 'undefined' - ? 
new ArrayBuffer(bufferLength) - : new Array(bufferLength); - var bytes = Array.isArray(buffer) ? buffer : new Uint8Array(buffer); - for (i = 0; i < len; i += 4) { - encoded1 = lookup$1$1[base64.charCodeAt(i)]; - encoded2 = lookup$1$1[base64.charCodeAt(i + 1)]; - encoded3 = lookup$1$1[base64.charCodeAt(i + 2)]; - encoded4 = lookup$1$1[base64.charCodeAt(i + 3)]; - bytes[p++] = (encoded1 << 2) | (encoded2 >> 4); - bytes[p++] = ((encoded2 & 15) << 4) | (encoded3 >> 2); - bytes[p++] = ((encoded3 & 3) << 6) | (encoded4 & 63); - } - return buffer; - }; - var polyUint16Array$1 = function (buffer) { - var length = buffer.length; - var bytes = []; - for (var i = 0; i < length; i += 2) { - bytes.push((buffer[i + 1] << 8) | buffer[i]); - } - return bytes; - }; - var polyUint32Array$1 = function (buffer) { - var length = buffer.length; - var bytes = []; - for (var i = 0; i < length; i += 4) { - bytes.push((buffer[i + 3] << 24) | (buffer[i + 2] << 16) | (buffer[i + 1] << 8) | buffer[i]); - } - return bytes; - }; - - /** Shift size for getting the index-2 table offset. */ - var UTRIE2_SHIFT_2$1 = 5; - /** Shift size for getting the index-1 table offset. */ - var UTRIE2_SHIFT_1$1 = 6 + 5; - /** - * Shift size for shifting left the index array values. - * Increases possible data size with 16-bit index values at the cost - * of compactability. - * This requires data blocks to be aligned by UTRIE2_DATA_GRANULARITY. - */ - var UTRIE2_INDEX_SHIFT$1 = 2; - /** - * Difference between the two shift sizes, - * for getting an index-1 offset from an index-2 offset. 6=11-5 - */ - var UTRIE2_SHIFT_1_2$1 = UTRIE2_SHIFT_1$1 - UTRIE2_SHIFT_2$1; - /** - * The part of the index-2 table for U+D800..U+DBFF stores values for - * lead surrogate code _units_ not code _points_. - * Values for lead surrogate code _points_ are indexed with this portion of the table. - * Length=32=0x20=0x400>>UTRIE2_SHIFT_2. (There are 1024=0x400 lead surrogates.) - */ - var UTRIE2_LSCP_INDEX_2_OFFSET$1 = 0x10000 >> UTRIE2_SHIFT_2$1; - /** Number of entries in a data block. 32=0x20 */ - var UTRIE2_DATA_BLOCK_LENGTH$1 = 1 << UTRIE2_SHIFT_2$1; - /** Mask for getting the lower bits for the in-data-block offset. */ - var UTRIE2_DATA_MASK$1 = UTRIE2_DATA_BLOCK_LENGTH$1 - 1; - var UTRIE2_LSCP_INDEX_2_LENGTH$1 = 0x400 >> UTRIE2_SHIFT_2$1; - /** Count the lengths of both BMP pieces. 2080=0x820 */ - var UTRIE2_INDEX_2_BMP_LENGTH$1 = UTRIE2_LSCP_INDEX_2_OFFSET$1 + UTRIE2_LSCP_INDEX_2_LENGTH$1; - /** - * The 2-byte UTF-8 version of the index-2 table follows at offset 2080=0x820. - * Length 32=0x20 for lead bytes C0..DF, regardless of UTRIE2_SHIFT_2. - */ - var UTRIE2_UTF8_2B_INDEX_2_OFFSET$1 = UTRIE2_INDEX_2_BMP_LENGTH$1; - var UTRIE2_UTF8_2B_INDEX_2_LENGTH$1 = 0x800 >> 6; /* U+0800 is the first code point after 2-byte UTF-8 */ - /** - * The index-1 table, only used for supplementary code points, at offset 2112=0x840. - * Variable length, for code points up to highStart, where the last single-value range starts. - * Maximum length 512=0x200=0x100000>>UTRIE2_SHIFT_1. - * (For 0x100000 supplementary code points U+10000..U+10ffff.) - * - * The part of the index-2 table for supplementary code points starts - * after this index-1 table. - * - * Both the index-1 table and the following part of the index-2 table - * are omitted completely if there is only BMP data. - */ - var UTRIE2_INDEX_1_OFFSET$1 = UTRIE2_UTF8_2B_INDEX_2_OFFSET$1 + UTRIE2_UTF8_2B_INDEX_2_LENGTH$1; - /** - * Number of index-1 entries for the BMP. 
32=0x20 - * This part of the index-1 table is omitted from the serialized form. - */ - var UTRIE2_OMITTED_BMP_INDEX_1_LENGTH$1 = 0x10000 >> UTRIE2_SHIFT_1$1; - /** Number of entries in an index-2 block. 64=0x40 */ - var UTRIE2_INDEX_2_BLOCK_LENGTH$1 = 1 << UTRIE2_SHIFT_1_2$1; - /** Mask for getting the lower bits for the in-index-2-block offset. */ - var UTRIE2_INDEX_2_MASK$1 = UTRIE2_INDEX_2_BLOCK_LENGTH$1 - 1; - var slice16$1 = function (view, start, end) { - if (view.slice) { - return view.slice(start, end); - } - return new Uint16Array(Array.prototype.slice.call(view, start, end)); - }; - var slice32$1 = function (view, start, end) { - if (view.slice) { - return view.slice(start, end); - } - return new Uint32Array(Array.prototype.slice.call(view, start, end)); - }; - var createTrieFromBase64$1 = function (base64, _byteLength) { - var buffer = decode$1(base64); - var view32 = Array.isArray(buffer) ? polyUint32Array$1(buffer) : new Uint32Array(buffer); - var view16 = Array.isArray(buffer) ? polyUint16Array$1(buffer) : new Uint16Array(buffer); - var headerLength = 24; - var index = slice16$1(view16, headerLength / 2, view32[4] / 2); - var data = view32[5] === 2 - ? slice16$1(view16, (headerLength + view32[4]) / 2) - : slice32$1(view32, Math.ceil((headerLength + view32[4]) / 4)); - return new Trie$1(view32[0], view32[1], view32[2], view32[3], index, data); - }; - var Trie$1 = /** @class */ (function () { - function Trie(initialValue, errorValue, highStart, highValueIndex, index, data) { - this.initialValue = initialValue; - this.errorValue = errorValue; - this.highStart = highStart; - this.highValueIndex = highValueIndex; - this.index = index; - this.data = data; - } - /** - * Get the value for a code point as stored in the Trie. - * - * @param codePoint the code point - * @return the value - */ - Trie.prototype.get = function (codePoint) { - var ix; - if (codePoint >= 0) { - if (codePoint < 0x0d800 || (codePoint > 0x0dbff && codePoint <= 0x0ffff)) { - // Ordinary BMP code point, excluding leading surrogates. - // BMP uses a single level lookup. BMP index starts at offset 0 in the Trie2 index. - // 16 bit data is stored in the index array itself. - ix = this.index[codePoint >> UTRIE2_SHIFT_2$1]; - ix = (ix << UTRIE2_INDEX_SHIFT$1) + (codePoint & UTRIE2_DATA_MASK$1); - return this.data[ix]; - } - if (codePoint <= 0xffff) { - // Lead Surrogate Code Point. A Separate index section is stored for - // lead surrogate code units and code points. - // The main index has the code unit data. - // For this function, we need the code point data. - // Note: this expression could be refactored for slightly improved efficiency, but - // surrogate code points will be so rare in practice that it's not worth it. - ix = this.index[UTRIE2_LSCP_INDEX_2_OFFSET$1 + ((codePoint - 0xd800) >> UTRIE2_SHIFT_2$1)]; - ix = (ix << UTRIE2_INDEX_SHIFT$1) + (codePoint & UTRIE2_DATA_MASK$1); - return this.data[ix]; - } - if (codePoint < this.highStart) { - // Supplemental code point, use two-level lookup. - ix = UTRIE2_INDEX_1_OFFSET$1 - UTRIE2_OMITTED_BMP_INDEX_1_LENGTH$1 + (codePoint >> UTRIE2_SHIFT_1$1); - ix = this.index[ix]; - ix += (codePoint >> UTRIE2_SHIFT_2$1) & UTRIE2_INDEX_2_MASK$1; - ix = this.index[ix]; - ix = (ix << UTRIE2_INDEX_SHIFT$1) + (codePoint & UTRIE2_DATA_MASK$1); - return this.data[ix]; - } - if (codePoint <= 0x10ffff) { - return this.data[this.highValueIndex]; - } - } - // Fall through. The code point is outside of the legal range of 0..0x10ffff. 
- return this.errorValue; - }; - return Trie; - }()); - - /* - * base64-arraybuffer 1.0.2 - * Copyright (c) 2022 Niklas von Hertzen - * Released under MIT License - */ - var chars$3 = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/'; - // Use a lookup table to find the index. - var lookup$3 = typeof Uint8Array === 'undefined' ? [] : new Uint8Array(256); - for (var i$3 = 0; i$3 < chars$3.length; i$3++) { - lookup$3[chars$3.charCodeAt(i$3)] = i$3; - } - - var base64$1 = 'KwAAAAAAAAAACA4AUD0AADAgAAACAAAAAAAIABAAGABAAEgAUABYAGAAaABgAGgAYgBqAF8AZwBgAGgAcQB5AHUAfQCFAI0AlQCdAKIAqgCyALoAYABoAGAAaABgAGgAwgDKAGAAaADGAM4A0wDbAOEA6QDxAPkAAQEJAQ8BFwF1AH0AHAEkASwBNAE6AUIBQQFJAVEBWQFhAWgBcAF4ATAAgAGGAY4BlQGXAZ8BpwGvAbUBvQHFAc0B0wHbAeMB6wHxAfkBAQIJAvEBEQIZAiECKQIxAjgCQAJGAk4CVgJeAmQCbAJ0AnwCgQKJApECmQKgAqgCsAK4ArwCxAIwAMwC0wLbAjAA4wLrAvMC+AIAAwcDDwMwABcDHQMlAy0DNQN1AD0DQQNJA0kDSQNRA1EDVwNZA1kDdQB1AGEDdQBpA20DdQN1AHsDdQCBA4kDkQN1AHUAmQOhA3UAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AKYDrgN1AHUAtgO+A8YDzgPWAxcD3gPjA+sD8wN1AHUA+wMDBAkEdQANBBUEHQQlBCoEFwMyBDgEYABABBcDSARQBFgEYARoBDAAcAQzAXgEgASIBJAEdQCXBHUAnwSnBK4EtgS6BMIEyAR1AHUAdQB1AHUAdQCVANAEYABgAGAAYABgAGAAYABgANgEYADcBOQEYADsBPQE/AQEBQwFFAUcBSQFLAU0BWQEPAVEBUsFUwVbBWAAYgVgAGoFcgV6BYIFigWRBWAAmQWfBaYFYABgAGAAYABgAKoFYACxBbAFuQW6BcEFwQXHBcEFwQXPBdMF2wXjBeoF8gX6BQIGCgYSBhoGIgYqBjIGOgZgAD4GRgZMBmAAUwZaBmAAYABgAGAAYABgAGAAYABgAGAAYABgAGIGYABpBnAGYABgAGAAYABgAGAAYABgAGAAYAB4Bn8GhQZgAGAAYAB1AHcDFQSLBmAAYABgAJMGdQA9A3UAmwajBqsGqwaVALMGuwbDBjAAywbSBtIG1QbSBtIG0gbSBtIG0gbdBuMG6wbzBvsGAwcLBxMHAwcbByMHJwcsBywHMQcsB9IGOAdAB0gHTgfSBkgHVgfSBtIG0gbSBtIG0gbSBtIG0gbSBiwHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAdgAGAALAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsByw
HLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAdbB2MHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsB2kH0gZwB64EdQB1AHUAdQB1AHUAdQB1AHUHfQdgAIUHjQd1AHUAlQedB2AAYAClB6sHYACzB7YHvgfGB3UAzgfWBzMB3gfmB1EB7gf1B/0HlQENAQUIDQh1ABUIHQglCBcDLQg1CD0IRQhNCEEDUwh1AHUAdQBbCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIcAh3CHoIMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwAIIIggiCCIIIggiCCIIIggiCCIIIggiCCIIIggiCCIIIggiCCIIIggiCCIIIggiCCIIIggiCCIIIggiCCIIIgggwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAALAcsBywHLAcsBywHLAcsBywHLAcsB4oILAcsB44I0gaWCJ4Ipgh1AHUAqgiyCHUAdQB1AHUAdQB1AHUAdQB1AHUAtwh8AXUAvwh1AMUIyQjRCNkI4AjoCHUAdQB1AO4I9gj+CAYJDgkTCS0HGwkjCYIIggiCCIIIggiCCIIIggiCCIIIggiCCIIIggiCCIIIggiCCIIIggiCCIIIggiCCIIIggiCCIIIggiCCIIIggiAAIAAAAFAAYABgAGIAXwBgAHEAdQBFAJUAogCyAKAAYABgAEIA4ABGANMA4QDxAMEBDwE1AFwBLAE6AQEBUQF4QkhCmEKoQrhCgAHIQsAB0MLAAcABwAHAAeDC6ABoAHDCwMMAAcABwAHAAdDDGMMAAcAB6MM4wwjDWMNow3jDaABoAGgAaABoAGgAaABoAGgAaABoAGgAaABoAGgAaABoAGgAaABoAEjDqABWw6bDqABpg6gAaABoAHcDvwOPA+gAaABfA/8DvwO/A78DvwO/A78DvwO/A78DvwO/A78DvwO/A78DvwO/A78DvwO/A78DvwO/A78DvwO/A78DpcPAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAA
cABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcAB9cPKwkyCToJMAB1AHUAdQBCCUoJTQl1AFUJXAljCWcJawkwADAAMAAwAHMJdQB2CX4JdQCECYoJjgmWCXUAngkwAGAAYABxAHUApgn3A64JtAl1ALkJdQDACTAAMAAwADAAdQB1AHUAdQB1AHUAdQB1AHUAowYNBMUIMAAwADAAMADICcsJ0wnZCRUE4QkwAOkJ8An4CTAAMAB1AAAKvwh1AAgKDwoXCh8KdQAwACcKLgp1ADYKqAmICT4KRgowADAAdQB1AE4KMAB1AFYKdQBeCnUAZQowADAAMAAwADAAMAAwADAAMAAVBHUAbQowADAAdQC5CXUKMAAwAHwBxAijBogEMgF9CoQKiASMCpQKmgqIBKIKqgquCogEDQG2Cr4KxgrLCjAAMADTCtsKCgHjCusK8Qr5CgELMAAwADAAMAB1AIsECQsRC3UANAEZCzAAMAAwADAAMAB1ACELKQswAHUANAExCzkLdQBBC0kLMABRC1kLMAAwADAAMAAwADAAdQBhCzAAMAAwAGAAYABpC3ELdwt/CzAAMACHC4sLkwubC58Lpwt1AK4Ltgt1APsDMAAwADAAMAAwADAAMAAwAL4LwwvLC9IL1wvdCzAAMADlC+kL8Qv5C/8LSQswADAAMAAwADAAMAAwADAAMAAHDDAAMAAwADAAMAAODBYMHgx1AHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1ACYMMAAwADAAdQB1AHUALgx1AHUAdQB1AHUAdQA2DDAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwAHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AD4MdQBGDHUAdQB1AHUAdQB1AEkMdQB1AHUAdQB1AFAMMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwAHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AHUAdQBYDHUAdQB1AF8MMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAB1AHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AHUA+wMVBGcMMAAwAHwBbwx1AHcMfwyHDI8MMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAYABgAJcMMAAwADAAdQB1AJ8MlQClDDAAMACtDCwHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsB7UMLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHdQB1AHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AA0EMAC9DDAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAsBywHLAcsBywHLAcsBywHLQcwAMEMyAwsBywHLAcsBywHLAcsBywHLAcsBywHzAwwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwAHUAdQB1ANQM2QzhDDAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMABgAGAAYABgAGAAYABgAOkMYADxDGAA+AwADQYNYABhCWAAYAAODTAAMAAwADAAFg1gAGAAHg37AzAAMAAwADAAYABgACYNYAAsDTQNPA1gAEMNPg1LDWAAYABgAGAAYABgAGAAYABgAGAAUg1aDYsGVglhDV0NcQBnDW0NdQ15DWAAYABgAGAAYABgAGAAYABgAGAAYABgAGAAYABgAGAAlQCBDZUAiA2PDZcNMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAnw2nDTAAMAAwADAAMAAwAHUArw23DTAAMAAwADAAMAAwADAAMAAwADAAMAB1AL8NMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAB1AHUAdQB1AHUAdQDHDTAAYABgAM8NMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAA1w11ANwNMAAwAD0B5A0wADAAMAAwADAAMADsDfQN/A0EDgwOFA4wABsOMAAwADAAMAAwADAAMAAwANIG0gbSBtIG0gbSBtIG0gYjDigOwQUuDsEFMw7SBjoO0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIGQg5KDlIOVg7SBtIGXg5lDm0OdQ7SBtIGfQ6EDooOjQ6UDtIGmg6hDtIG0gaoDqwO0ga0DrwO0gZgAGAAYADEDmAAYAAkBtIGzA5gANIOYADaDokO0gbSBt8O5w7SBu8O0gb1DvwO0gZgAGAAxA7SBtIG0gbSBtIGYABgAGAAYAAED2AAsAUMD9IG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIGFA8sBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAccD9IGLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHJA8sBywHLAcsBywHLAccDywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsB
ywPLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAc0D9IG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIGLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAccD9IG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIGFA8sBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHPA/SBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gYUD0QPlQCVAJUAMAAwADAAMACVAJUAlQCVAJUAlQCVAEwPMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAA//8EAAQABAAEAAQABAAEAAQABAANAAMAAQABAAIABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQACgATABcAHgAbABoAHgAXABYAEgAeABsAGAAPABgAHABLAEsASwBLAEsASwBLAEsASwBLABgAGAAeAB4AHgATAB4AUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQABYAGwASAB4AHgAeAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAWAA0AEQAeAAQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArAAQABAAEAAQABAAFAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAJABYAGgAbABsAGwAeAB0AHQAeAE8AFwAeAA0AHgAeABoAGwBPAE8ADgBQAB0AHQAdAE8ATwAXAE8ATwBPABYAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAB0AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAdAFAAUABQAFAAUABQAFAAUAAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAFAAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAeAB4AHgAeAFAATwBAAE8ATwBPAEAATwBQAFAATwBQAB4AHgAeAB4AHgAeAB0AHQAdAB0AHgAdAB4ADgBQAFAAUABQAFAAHgAeAB4AHgAeAB4AHgBQAB4AUAAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4ABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAJAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAkACQAJAAkACQAJAAkABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAeAB4AHgAeAFAAHgAeAB4AKwArAFAAUABQAFAAGABQACsAKwArACsAHgAeAFAAHgBQAFAAUAArAFAAKwAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AKwAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4ABAAEAAQABAAEAAQABAAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAUAAeAB4AHgAeAB4AHgBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAYAA0AKwArAB4AHgAbACsABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQADQAEAB4ABAAEAB4ABAAEABMABAArACsAKwArACsAKwArACsAVgBWAFYAVgBWAFYAVgBWAFYAVgBWAFYAVgBWAFYAVgBWAFYAVgBWAFYAVgBWAFYAVgBWAFYAKwArACsAKwBWAFYAVgBWAB4A
HgArACsAKwArACsAKwArACsAKwArACsAHgAeAB4AHgAeAB4AHgAeAB4AGgAaABoAGAAYAB4AHgAEAAQABAAEAAQABAAEAAQABAAEAAQAEwAEACsAEwATAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABABLAEsASwBLAEsASwBLAEsASwBLABoAGQAZAB4AUABQAAQAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQABMAUAAEAAQABAAEAAQABAAEAB4AHgAEAAQABAAEAAQABABQAFAABAAEAB4ABAAEAAQABABQAFAASwBLAEsASwBLAEsASwBLAEsASwBQAFAAUAAeAB4AUAAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AKwAeAFAABABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAABAAEAAQABAAEAAQABAAEAAQABAAEAFAAKwArACsAKwArACsAKwArACsAKwArACsAKwArAEsASwBLAEsASwBLAEsASwBLAEsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAABAAEAAQABAAEAAQABAAEAAQAUABQAB4AHgAYABMAUAArACsABAAbABsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAFAABAAEAAQABAAEAFAABAAEAAQAUAAEAAQABAAEAAQAKwArAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAArACsAHgArAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArAFAAUABQAFAAUABQAFAAUABQAFAAKwArACsAKwArACsAKwArACsAKwArAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAB4ABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAQABAAEAFAABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQAUAAEAAQABAAEAAQABAAEAFAAUABQAFAAUABQAFAAUABQAFAABAAEAA0ADQBLAEsASwBLAEsASwBLAEsASwBLAB4AUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAArAFAAUABQAFAAUABQAFAAUAArACsAUABQACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwBQAFAAUABQAFAAUABQACsAUAArACsAKwBQAFAAUABQACsAKwAEAFAABAAEAAQABAAEAAQABAArACsABAAEACsAKwAEAAQABABQACsAKwArACsAKwArACsAKwAEACsAKwArACsAUABQACsAUABQAFAABAAEACsAKwBLAEsASwBLAEsASwBLAEsASwBLAFAAUAAaABoAUABQAFAAUABQAEwAHgAbAFAAHgAEACsAKwAEAAQABAArAFAAUABQAFAAUABQACsAKwArACsAUABQACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwBQAFAAUABQAFAAUABQACsAUABQACsAUABQACsAUABQACsAKwAEACsABAAEAAQABAAEACsAKwArACsABAAEACsAKwAEAAQABAArACsAKwAEACsAKwArACsAKwArACsAUABQAFAAUAArAFAAKwArACsAKwArACsAKwBLAEsASwBLAEsASwBLAEsASwBLAAQABABQAFAAUAAEAB4AKwArACsAKwArACsAKwArACsAKwAEAAQABAArAFAAUABQAFAAUABQAFAAUABQACsAUABQAFAAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwBQAFAAUABQAFAAUABQACsAUABQACsAUABQAFAAUABQACsAKwAEAFAABAAEAAQABAAEAAQABAAEACsABAAEAAQAKwAEAAQABAArACsAUAArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwBQAFAABAAEACsAKwBLAEsASwBLAEsASwBLAEsASwBLAB4AGwArACsAKwArACsAKwArAFAABAAEAAQABAAEAAQAKwAEAAQABAArAFAAUABQAFAAUABQAFAAUAArACsAUABQACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAQABAAEAAQABAArACsABAAEACsAKwAEAAQABAArACsAKwArACsAKwArAAQABAAEACsAKwArACsAUABQACsAUABQAFAABAAEACsAKwBLAEsASwBLAEsASwBLAEsASwBLAB4AUABQAFAAUABQAFAAUAArACsAKwArACsAKwArACsAKwArAAQAUAArAFAAUABQAFAAUABQACsAKwArAFAAUABQACsAUABQAFAAUAArACsAKwBQAFAAKwBQACsAUABQACsAKwArAFAAUAArACsAKwBQAFAAUAArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArAAQABAAEAAQABAArACsAKwAEAAQABAArAAQABAAEAAQAKwArAFAAKwArACsAKwArACsABAArACsAKwArACsAKwArACsAKwArAEsASwBLAEsASwBLAEsASwBLAEsAUABQAFAAHgAeAB4AHgAeAB4AGwAeACsAKwArACsAKwAEAAQABAAEAAQAUABQAFAAUABQAFAAUABQACsAUABQAFAAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArAFAAUABQAFAAUABQAFAAUABQAFAAUAB
QAFAAUABQAFAAKwArACsAUAAEAAQABAAEAAQABAAEACsABAAEAAQAKwAEAAQABAAEACsAKwArACsAKwArACsABAAEACsAUABQAFAAKwArACsAKwArAFAAUAAEAAQAKwArAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwArACsAKwAOAFAAUABQAFAAUABQAFAAHgBQAAQABAAEAA4AUABQAFAAUABQAFAAUABQACsAUABQAFAAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArAFAAUABQAFAAUABQAFAAUABQAFAAKwBQAFAAUABQAFAAKwArAAQAUAAEAAQABAAEAAQABAAEACsABAAEAAQAKwAEAAQABAAEACsAKwArACsAKwArACsABAAEACsAKwArACsAKwArACsAUAArAFAAUAAEAAQAKwArAEsASwBLAEsASwBLAEsASwBLAEsAKwBQAFAAKwArACsAKwArACsAKwArACsAKwArACsAKwAEAAQABAAEAFAAUABQAFAAUABQAFAAUABQACsAUABQAFAAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAABAAEAFAABAAEAAQABAAEAAQABAArAAQABAAEACsABAAEAAQABABQAB4AKwArACsAKwBQAFAAUAAEAFAAUABQAFAAUABQAFAAUABQAFAABAAEACsAKwBLAEsASwBLAEsASwBLAEsASwBLAFAAUABQAFAAUABQAFAAUABQABoAUABQAFAAUABQAFAAKwAEAAQABAArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArAFAAUABQAFAAUABQAFAAUABQACsAUAArACsAUABQAFAAUABQAFAAUAArACsAKwAEACsAKwArACsABAAEAAQABAAEAAQAKwAEACsABAAEAAQABAAEAAQABAAEACsAKwArACsAKwArAEsASwBLAEsASwBLAEsASwBLAEsAKwArAAQABAAeACsAKwArACsAKwArACsAKwArACsAKwArAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXAAqAFwAXAAqACoAKgAqACoAKgAqACsAKwArACsAGwBcAFwAXABcAFwAXABcACoAKgAqACoAKgAqACoAKgAeAEsASwBLAEsASwBLAEsASwBLAEsADQANACsAKwArACsAKwBcAFwAKwBcACsAXABcAFwAXABcACsAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcACsAXAArAFwAXABcAFwAXABcAFwAXABcAFwAKgBcAFwAKgAqACoAKgAqACoAKgAqACoAXAArACsAXABcAFwAXABcACsAXAArACoAKgAqACoAKgAqACsAKwBLAEsASwBLAEsASwBLAEsASwBLACsAKwBcAFwAXABcAFAADgAOAA4ADgAeAA4ADgAJAA4ADgANAAkAEwATABMAEwATAAkAHgATAB4AHgAeAAQABAAeAB4AHgAeAB4AHgBLAEsASwBLAEsASwBLAEsASwBLAFAAUABQAFAAUABQAFAAUABQAFAADQAEAB4ABAAeAAQAFgARABYAEQAEAAQAUABQAFAAUABQAFAAUABQACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwArACsAKwAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQADQAEAAQABAAEAAQADQAEAAQAUABQAFAAUABQAAQABAAEAAQABAAEAAQABAAEAAQABAArAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAArAA0ADQAeAB4AHgAeAB4AHgAEAB4AHgAeAB4AHgAeACsAHgAeAA4ADgANAA4AHgAeAB4AHgAeAAkACQArACsAKwArACsAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcACoAKgAqACoAKgAqACoAKgAqACoAKgAqACoAKgAqACoAKgAqACoAKgBcAEsASwBLAEsASwBLAEsASwBLAEsADQANAB4AHgAeAB4AXABcAFwAXABcAFwAKgAqACoAKgBcAFwAXABcACoAKgAqAFwAKgAqACoAXABcACoAKgAqACoAKgAqACoAXABcAFwAKgAqACoAKgBcAFwAXABcAFwAXABcAFwAXABcAFwAXABcACoAKgAqACoAKgAqACoAKgAqACoAKgAqAFwAKgBLAEsASwBLAEsASwBLAEsASwBLACoAKgAqACoAKgAqAFAAUABQAFAAUABQACsAUAArACsAKwArACsAUAArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAHgBQAFAAUABQAFgAWABYAFgAWABYAFgAWABYAFgAWABYAFgAWABYAFgAWABYAFgAWABYAFgAWABYAFgAWABYAFgAWABYAFgAWABZAFkAWQBZAFkAWQBZAFkAWQBZAFkAWQBZAFkAWQBZAFkAWQBZAFkAWQBZAFkAWQBZAFkAWQBZAFkAWQBZAFkAWgBaAFoAWgBaAFoAWgBaAFoAWgBaAFoAWgBaAFoAWgBaAFoAWgBaAFoAWgBaAFoAWgBaAFoAWgBaAFoAWgBaAFAAUABQAFAAUABQAFAAUABQACsAUABQAFAAUAArACsAUABQAFAAUABQAFAAUAArAFAAKwBQAFAAUABQACsAKwBQAFAAUABQAFAAUABQAFAAUAArAFAAUABQAFAAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArAFAAUABQAFAAKwArAFAAUABQAFAAUABQAFAAKwBQACsAUABQAFAAUAArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwBQAFAAUABQACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsABAAEAAQAHgANAB4AHgAeAB4AHgAeAB4AUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAHgAeAB4AHgAeAB4AHgAeAB4AHgArACsAKwArAC
sAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwBQAFAAUABQAFAAUAArACsADQBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAHgAeAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAANAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAWABEAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAA0ADQANAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwBQAFAAUABQAAQABAAEACsAKwArACsAKwArACsAKwArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAANAA0AKwArACsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAABAAEACsAKwArACsAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwBQAFAAUAArAAQABAArACsAKwArACsAKwArACsAKwArACsAKwBcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAKgAqACoAKgAqACoAKgAqACoAKgAqACoAKgAqACoAKgAqACoAKgAqAA0ADQAVAFwADQAeAA0AGwBcACoAKwArAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwAeAB4AEwATAA0ADQAOAB4AEwATAB4ABAAEAAQACQArAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArAFAAUABQAFAAUAAEAAQAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAQAUAArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwAEAAQABAAEAAQABAAEAAQABAAEAAQABAArACsAKwArAAQABAAEAAQABAAEAAQABAAEAAQABAAEACsAKwArACsAHgArACsAKwATABMASwBLAEsASwBLAEsASwBLAEsASwBcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXAArACsAXABcAFwAXABcACsAKwArACsAKwArACsAKwArACsAKwBcAFwAXABcAFwAXABcAFwAXABcAFwAXAArACsAKwArAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcACsAKwArACsAKwArAEsASwBLAEsASwBLAEsASwBLAEsAXAArACsAKwAqACoAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAQABAAEAAQABAArACsAHgAeAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcACoAKgAqACoAKgAqACoAKgAqACoAKwAqACoAKgAqACoAKgAqACoAKgAqACoAKgAqACoAKgAqACoAKgAqACoAKgAqACoAKgAqACoAKgAqACoAKwArAAQASwBLAEsASwBLAEsASwBLAEsASwArACsAKwArACsAKwBLAEsASwBLAEsASwBLAEsASwBLACsAKwArACsAKwArACoAKgAqACoAKgAqACoAXAAqACoAKgAqACoAKgArACsABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsABAAEAAQABAAEAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAQABAAEAAQABABQAFAAUABQAFAAUABQACsAKwArACsASwBLAEsASwBLAEsASwBLAEsASwANAA0AHgANAA0ADQANAB4AHgAeAB4AHgAeAB4AHgAeAB4ABAAEAAQABAAEAAQABAAEAAQAHgAeAB4AHgAeAB4AHgAeAB4AKwArACsABAAEAAQAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAABAAEAAQABAAEAAQABAAEAAQABAAEAAQABABQAFAASwBLAEsASwBLAEsASwBLAEsASwBQAFAAUABQAFAAUABQAFAABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEACsAKwArACsAKwArACsAKwAeAB4AHgAeAFAAUABQAFAABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEACsAKwArAA0ADQANAA0ADQBLAEsASwBLAEsASwBLAEsASwBLACsAKwArAFAAUABQAEsASwBLAEsASwBLAEsASwBLAEsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAA0ADQBQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwBQAFAAUAAeAB4AHgAeAB4AHgAeAB4AKwArACsAKwArACsAKwArAAQABAAEAB4ABAAEAAQABAAEAAQABAAEAAQABAAEAAQABABQAFAAUABQAAQAUABQAFAAUABQAFAABABQAFAABAAEAAQAUAArACsAKwArACsABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEACsABAAEAAQABAAEAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AKwArAFAAUABQAFAAUABQACsAKwBQAFAAUABQAFAAUABQAFAAKwBQACsAUAArAFAAKwAeAB4AHgAeAB4AHgAeAB4AH
gAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeACsAKwAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgArAB4AHgAeAB4AHgAeAB4AHgBQAB4AHgAeAFAAUABQACsAHgAeAB4AHgAeAB4AHgAeAB4AHgBQAFAAUABQACsAKwAeAB4AHgAeAB4AHgArAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AKwArAFAAUABQACsAHgAeAB4AHgAeAB4AHgAOAB4AKwANAA0ADQANAA0ADQANAAkADQANAA0ACAAEAAsABAAEAA0ACQANAA0ADAAdAB0AHgAXABcAFgAXABcAFwAWABcAHQAdAB4AHgAUABQAFAANAAEAAQAEAAQABAAEAAQACQAaABoAGgAaABoAGgAaABoAHgAXABcAHQAVABUAHgAeAB4AHgAeAB4AGAAWABEAFQAVABUAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4ADQAeAA0ADQANAA0AHgANAA0ADQAHAB4AHgAeAB4AKwAEAAQABAAEAAQABAAEAAQABAAEAFAAUAArACsATwBQAFAAUABQAFAAHgAeAB4AFgARAE8AUABPAE8ATwBPAFAAUABQAFAAUAAeAB4AHgAWABEAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArABsAGwAbABsAGwAbABsAGgAbABsAGwAbABsAGwAbABsAGwAbABsAGwAbABsAGgAbABsAGwAbABoAGwAbABoAGwAbABsAGwAbABsAGwAbABsAGwAbABsAGwAbABsAGwAbAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQAHgAeAFAAGgAeAB0AHgBQAB4AGgAeAB4AHgAeAB4AHgAeAB4AHgBPAB4AUAAbAB4AHgBQAFAAUABQAFAAHgAeAB4AHQAdAB4AUAAeAFAAHgBQAB4AUABPAFAAUAAeAB4AHgAeAB4AHgAeAFAAUABQAFAAUAAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAFAAHgBQAFAAUABQAE8ATwBQAFAAUABQAFAATwBQAFAATwBQAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAFAAUABQAFAATwBPAE8ATwBPAE8ATwBPAE8ATwBQAFAAUABQAFAAUABQAFAAUAAeAB4AUABQAFAAUABPAB4AHgArACsAKwArAB0AHQAdAB0AHQAdAB0AHQAdAB0AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB0AHgAdAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAdAB4AHQAdAB4AHgAeAB0AHQAeAB4AHQAeAB4AHgAdAB4AHQAbABsAHgAdAB4AHgAeAB4AHQAeAB4AHQAdAB0AHQAeAB4AHQAeAB0AHgAdAB0AHQAdAB0AHQAeAB0AHgAeAB4AHgAeAB0AHQAdAB0AHgAeAB4AHgAdAB0AHgAeAB4AHgAeAB4AHgAeAB4AHgAdAB4AHgAeAB0AHgAeAB4AHgAeAB0AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAdAB0AHgAeAB0AHQAdAB0AHgAeAB0AHQAeAB4AHQAdAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB0AHQAeAB4AHQAdAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHQAeAB4AHgAdAB4AHgAeAB4AHgAeAB4AHQAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB0AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AFAAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeABYAEQAWABEAHgAeAB4AHgAeAB4AHQAeAB4AHgAeAB4AHgAeACUAJQAeAB4AHgAeAB4AHgAeAB4AHgAWABEAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AJQAlACUAJQAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAFAAHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHgAeAB4AHgAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAeAB4AHQAdAB0AHQAeAB4AHgAeAB4AHgAeAB4AHgAeAB0AHQAeAB0AHQAdAB0AHQAdAB0AHgAeAB4AHgAeAB4AHgAeAB0AHQAeAB4AHQAdAB4AHgAeAB4AHQAdAB4AHgAeAB4AHQAdAB0AHgAeAB0AHgAeAB0AHQAdAB0AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAdAB0AHQAdAB4AHgAeAB4AHgAeAB4AHgAeAB0AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAlACUAJQAlAB4AHQAdAB4AHgAdAB4AHgAeAB4AHQAdAB4AHgAeAB4AJQAlAB0AHQAlAB4AJQAlACUAIAAlACUAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAlACUAJQAeAB4AHgAeAB0AHgAdAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAdAB0AHgAdAB0AHQAeAB0AJQAdAB0AHgAdAB0AHgAdAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeACUAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHQAdAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAlACUAJQAlACUAJQAlACUAJQAlACUAJQAdAB0AHQAdACUAHgAlACUAJQAdACUAJQAdAB0AHQAlACUAHQAdACUAHQAdACUAJQAlAB4AHQAe
AB4AHgAeAB0AHQAlAB0AHQAdAB0AHQAdACUAJQAlACUAJQAdACUAJQAgACUAHQAdACUAJQAlACUAJQAlACUAJQAeAB4AHgAlACUAIAAgACAAIAAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB0AHgAeAB4AFwAXABcAFwAXABcAHgATABMAJQAeAB4AHgAWABEAFgARABYAEQAWABEAFgARABYAEQAWABEATwBPAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeABYAEQAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAWABEAFgARABYAEQAWABEAFgARAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AFgARABYAEQAWABEAFgARABYAEQAWABEAFgARABYAEQAWABEAFgARABYAEQAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAWABEAFgARAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AFgARAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAdAB0AHQAdAB0AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgArACsAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AKwAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AUABQAFAAUAAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAEAAQABAAeAB4AKwArACsAKwArABMADQANAA0AUAATAA0AUABQAFAAUABQAFAAUABQACsAKwArACsAKwArACsAUAANACsAKwArACsAKwArACsAKwArACsAKwArACsAKwAEAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArACsAKwBQAFAAUABQAFAAUABQACsAUABQAFAAUABQAFAAUAArAFAAUABQAFAAUABQAFAAKwBQAFAAUABQAFAAUABQACsAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXAA0ADQANAA0ADQANAA0ADQAeAA0AFgANAB4AHgAXABcAHgAeABcAFwAWABEAFgARABYAEQAWABEADQANAA0ADQATAFAADQANAB4ADQANAB4AHgAeAB4AHgAMAAwADQANAA0AHgANAA0AFgANAA0ADQANAA0ADQANAA0AHgANAB4ADQANAB4AHgAeACsAKwArACsAKwArACsAKwArACsAKwArACsAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACsAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAKwArACsAKwArACsAKwArACsAKwArACsAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwAlACUAJQAlACUAJQAlACUAJQAlACUAJQArACsAKwArAA0AEQARACUAJQBHAFcAVwAWABEAFgARABYAEQAWABEAFgARACUAJQAWABEAFgARABYAEQAWABEAFQAWABEAEQAlAFcAVwBXAFcAVwBXAFcAVwBXAAQABAAEAAQABAAEACUAVwBXAFcAVwA2ACUAJQBXAFcAVwBHAEcAJQAlACUAKwBRAFcAUQBXAFEAVwBRAFcAUQBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFEAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBRAFcAUQBXAFEAVwBXAFcAVwBXAFcAUQBXAFcAVwBXAFcAVwBRAFEAKwArAAQABAAVABUARwBHAFcAFQBRAFcAUQBXAFEAVwBRAFcAUQBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFEAVwBRAFcAUQBXAFcAVwBXAFcAVwBRAFcAVwBXAFcAVwBXAFEAUQBXAFcAVwBXABUAUQBHAEcAVwArACsAKwArACsAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAKwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAKwAlACUAVwBXAFcAVwAlACUAJQAlACUAJQAlACUAJQAlACsAKwArACsAKwArACsAKwArACsAKwArAFEAUQBRAFEAUQBRAFEAUQBRAFEAUQBRAFEAUQBRAFEAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQArAFcAVwBXAFcAVwBXAFcAVwBXAFcAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQBPAE8ATwBPAE8ATwBPAE8AJQBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXACUAJQAlAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAEcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAKwArACsAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQArACsAKwArACsAKwArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAADQATAA0AUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABLAEsASwBLAEsASwBLAEsASwBLAFAAUAArACs
AKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAFAABAAEAAQABAAeAAQABAAEAAQABAAEAAQABAAEAAQAHgBQAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AUABQAAQABABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAQABAAeAA0ADQANAA0ADQArACsAKwArACsAKwArACsAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAFAAUABQAFAAUABQAFAAUABQAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AUAAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgBQAB4AHgAeAB4AHgAeAFAAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgArACsAHgAeAB4AHgAeAB4AHgAeAB4AKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwAeAB4AUABQAFAAUABQAFAAUABQAFAAUABQAAQAUABQAFAABABQAFAAUABQAAQAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAQABAAEAAQABAAeAB4AHgAeAAQAKwArACsAUABQAFAAUABQAFAAHgAeABoAHgArACsAKwArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAADgAOABMAEwArACsAKwArACsAKwArACsABAAEAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAQABAAEAAQABAAEACsAKwArACsAKwArACsAKwANAA0ASwBLAEsASwBLAEsASwBLAEsASwArACsAKwArACsAKwAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABABQAFAAUABQAFAAUAAeAB4AHgBQAA4AUABQAAQAUABQAFAAUABQAFAABAAEAAQABAAEAAQABAAEAA0ADQBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQAKwArACsAKwArACsAKwArACsAKwArAB4AWABYAFgAWABYAFgAWABYAFgAWABYAFgAWABYAFgAWABYAFgAWABYAFgAWABYAFgAWABYAFgAWABYACsAKwArAAQAHgAeAB4AHgAeAB4ADQANAA0AHgAeAB4AHgArAFAASwBLAEsASwBLAEsASwBLAEsASwArACsAKwArAB4AHgBcAFwAXABcAFwAKgBcAFwAXABcAFwAXABcAFwAXABcAEsASwBLAEsASwBLAEsASwBLAEsAXABcAFwAXABcACsAUABQAFAAUABQAFAAUABQAFAABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEACsAKwArACsAKwArACsAKwArAFAAUABQAAQAUABQAFAAUABQAFAAUABQAAQABAArACsASwBLAEsASwBLAEsASwBLAEsASwArACsAHgANAA0ADQBcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAKgAqACoAXAAqACoAKgBcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXAAqAFwAKgAqACoAXABcACoAKgBcAFwAXABcAFwAKgAqAFwAKgBcACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArAFwAXABcACoAKgBQAFAAUABQAFAAUABQAFAAUABQAFAABAAEAAQABAAEAA0ADQBQAFAAUAAEAAQAKwArACsAKwArACsAKwArACsAKwBQAFAAUABQAFAAUAArACsAUABQAFAAUABQAFAAKwArAFAAUABQAFAAUABQACsAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAKwBQAFAAUABQAFAAUABQACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAHgAeACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAAQABAAEAAQADQAEAAQAKwArAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwArACsAVABVAFUAVQBVAFUAVQBVAFUAVQBVAFUAVQBVAFUAVQBVAFUAVQBVAFUAVQBVAFUAVQBVAFUAVQBUAFUAVQBVAFUAVQBVAFUAVQBVAFUAVQBVAFUAVQBVAFUAVQBVAFUAVQBVAFUAVQBVAFUAVQBVACsAKwArACsAKwArACsAKwArACsAKwArAFkAWQBZAFkAWQBZAFkAWQBZAFkAWQBZAFkAWQBZAFkAWQBZAFkAKwArACsAKwBaAFoAWgBaAFoAWgBaAFoAWgBaAFoAWgBaAFoAWgBaAFoAWgBaAFoAWgBaAFoAWgBaAFoAWgBaAFoAKwArACsAKwAGAAYABgAGAAYABgAGAAYABgAGAAYABgAGAAYABgAGAAYABgAGAAYABgAGAAYABgAGAAYABgAGAAYABgAGAAYAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXACUAJQBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAJQAlACUAJQAlACUAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArACsAKwArACsAKwBQAFAAUABQAFAAKwArACsAKwArAFYABABWAFYAVgBWAFYAVgBWAFYAVgBWAB4AVgBWAFYAVgBWAFYAVgBWAFYAVgBWAFYAVgArAFYAVgBWAFYAVgArAFYAKwBWAFYAKwBWAFYAKwBWAFYAVgBWAFYAVgBWAFYAVgBWAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAEQAWAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwBQAFAAUA
BQAFAAUABQAFAAUABQAFAAUAAaAB4AKwArAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQAGAARABEAGAAYABMAEwAWABEAFAArACsAKwArACsAKwAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEACUAJQAlACUAJQAWABEAFgARABYAEQAWABEAFgARABYAEQAlACUAFgARACUAJQAlACUAJQAlACUAEQAlABEAKwAVABUAEwATACUAFgARABYAEQAWABEAJQAlACUAJQAlACUAJQAlACsAJQAbABoAJQArACsAKwArAFAAUABQAFAAUAArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwArAAcAKwATACUAJQAbABoAJQAlABYAEQAlACUAEQAlABEAJQBXAFcAVwBXAFcAVwBXAFcAVwBXABUAFQAlACUAJQATACUAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXABYAJQARACUAJQAlAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwAWACUAEQAlABYAEQARABYAEQARABUAVwBRAFEAUQBRAFEAUQBRAFEAUQBRAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAEcARwArACsAVwBXAFcAVwBXAFcAKwArAFcAVwBXAFcAVwBXACsAKwBXAFcAVwBXAFcAVwArACsAVwBXAFcAKwArACsAGgAbACUAJQAlABsAGwArAB4AHgAeAB4AHgAeAB4AKwArACsAKwArACsAKwArACsAKwAEAAQABAAQAB0AKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwBQAFAAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsADQANAA0AKwArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArAB4AHgAeAB4AHgAeAB4AHgAeAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgBQAFAAHgAeAB4AKwAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAAQAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwAEAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAABAAEAAQABAAEACsAKwArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArAA0AUABQAFAAUAArACsAKwArAFAAUABQAFAAUABQAFAAUAANAFAAUABQAFAAUAArACsAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArACsAKwArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArACsAKwArACsAKwArACsAKwAeACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAUABQAFAAUABQAFAAKwArAFAAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArAFAAUAArACsAKwBQACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwANAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAeAB4AUABQAFAAUABQAFAAUAArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArAFAAUAArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArAA0AUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArACsAKwAeAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArACsAUABQAFAAUABQAAQABAAEACsABAAEACsAKwArACsAKwAEAAQABAAEAFAAUABQAFAAKwBQAFAAUAArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwArAAQABAAEACsAKwArACsABABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArA
A0ADQANAA0ADQANAA0ADQAeACsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAeAFAAUABQAFAAUABQAFAAUAAeAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAQABAArACsAKwArAFAAUABQAFAAUAANAA0ADQANAA0ADQAUACsAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwArACsADQANAA0ADQANAA0ADQBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArAB4AHgAeAB4AKwArACsAKwArACsAKwArACsAKwArACsAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArACsAKwArACsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArAFAAUABQAFAAUABQAAQABAAEAAQAKwArACsAKwArACsAKwArAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUAArAAQABAANACsAKwBQAFAAKwArACsAKwArACsAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAAQABAAEAAQABAAEAAQABAAEAAQABABQAFAAUABQAB4AHgAeAB4AHgArACsAKwArACsAKwAEAAQABAAEAAQABAAEAA0ADQAeAB4AHgAeAB4AKwArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsABABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAQABAAEAAQABAAEAAQABAAEAAQABAAeAB4AHgANAA0ADQANACsAKwArACsAKwArACsAKwArACsAKwAeACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwArACsAKwArACsAKwBLAEsASwBLAEsASwBLAEsASwBLACsAKwArACsAKwArAFAAUABQAFAAUABQAFAABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEACsASwBLAEsASwBLAEsASwBLAEsASwANAA0ADQANAFAABAAEAFAAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAABAAeAA4AUAArACsAKwArACsAKwArACsAKwAEAFAAUABQAFAADQANAB4ADQAEAAQABAAEAB4ABAAEAEsASwBLAEsASwBLAEsASwBLAEsAUAAOAFAADQANAA0AKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwArACsAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAAQABAAEAAQABAAEAAQABAANAA0AHgANAA0AHgAEACsAUABQAFAAUABQAFAAUAArAFAAKwBQAFAAUABQACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwBQAFAAUABQAFAAUABQAFAAUABQAA0AKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAAQABAAEAAQABAAEAAQAKwArACsAKwArAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwArACsABAAEAAQABAArAFAAUABQAFAAUABQAFAAUAArACsAUABQACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwBQAFAAUABQAFAAUABQACsAUABQACsAUABQAFAAUABQACsABAAEAFAABAAEAAQABAAEAAQABAArACsABAAEACsAKwAEAAQABAArACsAUAArACsAKwArACsAKwAEACsAKwArACsAKwBQAFAAUABQAFAABAAEACsAKwAEAAQABAAEAAQABAAEACsAKwArAAQABAAEAAQABAArACsAKwArACsAKwArACsAKwArACsABAAEAAQABAAEAAQABABQAFAAUABQAA0ADQANAA0AHgBLAEsASwBLAEsASwBLAEsASwBLAA0ADQArAB4ABABQAFAAUAArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwAEAAQABAAEAFAAUAAeAFAAKwArACsAKwArACsAKwArAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAABAAEAAQABAAEAAQABAArACsABAAEAAQABAAEAAQABAAEAAQADgANAA0AEwATAB4AHgAeAA0ADQANAA0ADQANAA0ADQANAA0ADQANAA0ADQANAFAAUABQAFAABAAEACsAKwAEAA0ADQAeAFAAKwArACsAKwArACsAKwArACsAKwArAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwArACsADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAFAAKwArACsAKwArACsAKwBLAEsASwBLAEsASwBLAEsASwBLACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAKwArACoAKgAqACoAKgAqACoAKgAqACoAKgAqACoAKgAqACsAKwArACsASwBLAEsASwBLAEsASwBLAEsASwBcAFwADQANAA0AKgBQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAeACsA
KwArACsASwBLAEsASwBLAEsASwBLAEsASwBQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArACsAKwArACsAKwBQAFAAUABQAFAAUABQAFAAKwArAFAAKwArAFAAUABQAFAAUABQAFAAUAArAFAAUAArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAABAAEAAQABAAEAAQAKwAEAAQAKwArAAQABAAEAAQAUAAEAFAABAAEAA0ADQANACsAKwArACsAKwArACsAKwArAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAABAAEAAQABAAEAAQABAArACsABAAEAAQABAAEAAQABABQAA4AUAAEACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArAFAABAAEAAQABAAEAAQABAAEAAQABABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAAQABAAEAFAABAAEAAQABAAOAB4ADQANAA0ADQAOAB4ABAArACsAKwArACsAKwArACsAUAAEAAQABAAEAAQABAAEAAQABAAEAAQAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAA0ADQANAFAADgAOAA4ADQANACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwBQAFAAUABQAFAAUABQAFAAUAArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAABAAEAAQABAAEAAQABAAEACsABAAEAAQABAAEAAQABAAEAFAADQANAA0ADQANACsAKwArACsAKwArACsAKwArACsASwBLAEsASwBLAEsASwBLAEsASwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwAOABMAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwArAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAArAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAArACsAKwArACsAKwArACsAKwBQAFAAUABQAFAAUABQACsAUABQACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAAQABAArACsAKwAEACsABAAEACsABAAEAAQABAAEAAQABABQAAQAKwArACsAKwArACsAKwArAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwArACsAUABQAFAAUABQAFAAKwBQAFAAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAAQAKwAEAAQAKwAEAAQABAAEAAQAUAArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAABAAEAAQABAAeAB4AKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwBQACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAB4AHgAeAB4AHgAeAB4AHgAaABoAGgAaAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgArACsAKwArACsAKwArACsAKwArACsAKwArAA0AUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsADQANAA0ADQANACsAKwArACsAKwArACsAKwArACsAKwBQAFAAUABQACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAASABIAEgAQwBDAEMAUABQAFAAUABDAFAAUABQAEgAQwBIAEMAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAASABDAEMAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwAJAAkACQAJAAkACQAJABYAEQArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABIAEMAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwANAA0AKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwArAAQABAAEAAQABAANACsAKwArACsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAAQABAAEAA0ADQANAB4AHgAeAB4AHgAeAFAAUABQAFAADQAeACsAKwArACsAKwArACsAKwArACsASwBLAEsASwBLAEsASwBLAEsASwArAFAAUABQAFAAUABQAFAAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAANAA0AHgAeACsAKwArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAKwArACsAKwAEAFAABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQAKwArACsAKwArACsAKwAEAAQABAAEAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAARwBHABUARwAJACsAKwArACsAKwArACsAKwArACsAKwAEAAQAKwArACsAKwArACsAKwArACsAKwArACsAKwArAFcAVwB
XAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXACsAKwArACsAKwArACsAKwBXAFcAVwBXAFcAVwBXAFcAVwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAUQBRAFEAKwArACsAKwArACsAKwArACsAKwArACsAKwBRAFEAUQBRACsAKwArACsAKwArACsAKwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUAArACsAHgAEAAQADQAEAAQABAAEACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgArACsAKwArACsAKwArACsAKwArAB4AHgAeAB4AHgAeAB4AKwArAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAAQABAAEAAQABAAeAB4AHgAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAB4AHgAEAAQABAAEAAQABAAEAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4ABAAEAAQABAAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4ABAAEAAQAHgArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArACsAKwArACsAKwArACsAKwArAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgArACsAKwArACsAKwArACsAKwAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgArAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AKwBQAFAAKwArAFAAKwArAFAAUAArACsAUABQAFAAUAArAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeACsAUAArAFAAUABQAFAAUABQAFAAKwAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AKwBQAFAAUABQACsAKwBQAFAAUABQAFAAUABQAFAAKwBQAFAAUABQAFAAUABQACsAHgAeAFAAUABQAFAAUAArAFAAKwArACsAUABQAFAAUABQAFAAUAArAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAHgBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgBQAFAAUABQAFAAUABQAFAAUABQAFAAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAB4AHgAeAB4AHgAeAB4AHgAeACsAKwBLAEsASwBLAEsASwBLAEsASwBLAEsASwBLAEsASwBLAEsASwBLAEsASwBLAEsASwBLAEsASwBLAEsASwBLAEsASwBLAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAeAB4AHgAeAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAeAB4AHgAeAB4AHgAeAB4ABAAeAB4AHgAeAB4AHgAeAB4AHgAeAAQAHgAeAA0ADQANAA0AHgArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwAEAAQABAAEAAQAKwAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArAAQABAAEAAQABAAEAAQAKwAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQAKwArAAQABAAEAAQABAAEAAQAKwAEAAQAKwAEAAQABAAEAAQAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwAEAAQABAAEAAQABAAEAFAAUABQAFAAUABQAFAAKwArAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwBQAB4AKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwArABsAUABQAFAAUABQACsAKwBQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAAQABAAEACsAKwArACsAKwArACsAKwArAB4AHgAeAB4ABAAEAAQABAAEAAQABABQACsAKwArACsASwBLAEsASwBLAEsASwBLAEsASwArACsAKwArABYAFgArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAGgBQAFAAUAAaAFAAUABQAFAAKwArACsAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAeAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwBQAFAAUABQACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwBQAFAAKwBQACsAKwBQACsAUABQAFAAUABQAFAAUABQAFAAUAArAFAAUABQAFAAKwBQACsAUAArACsAKwArACsAKwBQACsAKwArACsAUAArAFAAKwBQACsAUABQAFAAKwBQAFAAKwBQACsAKwBQACsAUAArAF
AAKwBQACsAUAArAFAAUAArAFAAKwArAFAAUABQAFAAKwBQAFAAUABQAFAAUABQACsAUABQAFAAUAArAFAAUABQAFAAKwBQACsAUABQAFAAUABQAFAAUABQAFAAUAArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAUABQAFAAKwBQAFAAUABQAFAAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwAeAB4AKwArACsAKwArACsAKwArACsAKwArACsAKwArAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAE8AJQAlACUAHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHgAeAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB4AHgAeACUAJQAlAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQApACkAKQApACkAKQApACkAKQApACkAKQApACkAKQApACkAKQApACkAKQApACkAKQApACkAJQAlACUAJQAlACAAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAeAB4AJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlAB4AHgAlACUAJQAlACUAHgAlACUAJQAlACUAIAAgACAAJQAlACAAJQAlACAAIAAgACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACEAIQAhACEAIQAlACUAIAAgACUAJQAgACAAIAAgACAAIAAgACAAIAAgACAAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAJQAlACUAIAAlACUAJQAlACAAIAAgACUAIAAgACAAJQAlACUAJQAlACUAJQAgACUAIAAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAHgAlAB4AJQAeACUAJQAlACUAJQAgACUAJQAlACUAHgAlAB4AHgAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlAB4AHgAeAB4AHgAeAB4AJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAeAB4AHgAeAB4AHgAeAB4AHgAeACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACAAIAAlACUAJQAlACAAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACAAJQAlACUAJQAgACAAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAHgAeAB4AHgAeAB4AHgAeACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAeAB4AHgAeAB4AHgAlACUAJQAlACUAJQAlACAAIAAgACUAJQAlACAAIAAgACAAIAAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeABcAFwAXABUAFQAVAB4AHgAeAB4AJQAlACUAIAAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACAAIAAgACUAJQAlACUAJQAlACUAJQAlACAAJQAlACUAJQAlACUAJQAlACUAJQAlACAAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AJQAlACUAJQAlACUAJQAlACUAJQAlACUAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AJQAlACUAJQAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeACUAJQAlACUAJQAlACUAJQAeAB4AHgAeAB4AHgAeAB4AHgAeACUAJQAlACUAJQAlAB4AHgAeAB4AHgAeAB4AHgAlACUAJQAlACUAJQAlACUAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAgACUAJQAgACUAJQAlACUAJQAlACUAJQAgACAAIAAgACAAIAAgACAAJQAlACUAJQAlACUAIAAlACUAJQAlACUAJQAlACUAJQAgACAAIAAgACAAIAAgACAAIAAgACUAJQAgACAAIAAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAgACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACAAIAAlACAAIAAlACAAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAgACAAIAAlACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAJQAlAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AKwAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwArACsAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAKwArAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXACUAJQBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwAlACUAJQAlACUAJQAlACUAJQAlACUAVwBXACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAKwAEACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQAK
wArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArAA=='; - - var LETTER_NUMBER_MODIFIER = 50; - // Non-tailorable Line Breaking Classes - var BK = 1; // Cause a line break (after) - var CR$1 = 2; // Cause a line break (after), except between CR and LF - var LF$1 = 3; // Cause a line break (after) - var CM = 4; // Prohibit a line break between the character and the preceding character - var NL = 5; // Cause a line break (after) - var WJ = 7; // Prohibit line breaks before and after - var ZW = 8; // Provide a break opportunity - var GL = 9; // Prohibit line breaks before and after - var SP = 10; // Enable indirect line breaks - var ZWJ$1 = 11; // Prohibit line breaks within joiner sequences - // Break Opportunities - var B2 = 12; // Provide a line break opportunity before and after the character - var BA = 13; // Generally provide a line break opportunity after the character - var BB = 14; // Generally provide a line break opportunity before the character - var HY = 15; // Provide a line break opportunity after the character, except in numeric context - var CB = 16; // Provide a line break opportunity contingent on additional information - // Characters Prohibiting Certain Breaks - var CL = 17; // Prohibit line breaks before - var CP = 18; // Prohibit line breaks before - var EX = 19; // Prohibit line breaks before - var IN = 20; // Allow only indirect line breaks between pairs - var NS = 21; // Allow only indirect line breaks before - var OP = 22; // Prohibit line breaks after - var QU = 23; // Act like they are both opening and closing - // Numeric Context - var IS = 24; // Prevent breaks after any and before numeric - var NU = 25; // Form numeric expressions for line breaking purposes - var PO = 26; // Do not break following a numeric expression - var PR = 27; // Do not break in front of a numeric expression - var SY = 28; // Prevent a break before; and allow a break after - // Other Characters - var AI = 29; // Act like AL when the resolvedEAW is N; otherwise; act as ID - var AL = 30; // Are alphabetic characters or symbols that are used with alphabetic characters - var CJ = 31; // Treat as NS or ID for strict or normal breaking. - var EB = 32; // Do not break from following Emoji Modifier - var EM = 33; // Do not break from preceding Emoji Base - var H2 = 34; // Form Korean syllable blocks - var H3 = 35; // Form Korean syllable blocks - var HL = 36; // Do not break around a following hyphen; otherwise act as Alphabetic - var ID = 37; // Break before or after; except in some numeric context - var JL = 38; // Form Korean syllable blocks - var JV = 39; // Form Korean syllable blocks - var JT = 40; // Form Korean syllable blocks - var RI$1 = 41; // Keep pairs together. 
For pairs; break before and after other classes - var SA = 42; // Provide a line break opportunity contingent on additional, language-specific context analysis - var XX = 43; // Have as yet unknown line breaking behavior or unassigned code positions - var ea_OP = [0x2329, 0xff08]; - var BREAK_MANDATORY = '!'; - var BREAK_NOT_ALLOWED$1 = '×'; - var BREAK_ALLOWED$1 = '÷'; - var UnicodeTrie$1 = createTrieFromBase64$1(base64$1); - var ALPHABETICS = [AL, HL]; - var HARD_LINE_BREAKS = [BK, CR$1, LF$1, NL]; - var SPACE$1 = [SP, ZW]; - var PREFIX_POSTFIX = [PR, PO]; - var LINE_BREAKS = HARD_LINE_BREAKS.concat(SPACE$1); - var KOREAN_SYLLABLE_BLOCK = [JL, JV, JT, H2, H3]; - var HYPHEN = [HY, BA]; - var codePointsToCharacterClasses = function (codePoints, lineBreak) { - if (lineBreak === void 0) { lineBreak = 'strict'; } - var types = []; - var indices = []; - var categories = []; - codePoints.forEach(function (codePoint, index) { - var classType = UnicodeTrie$1.get(codePoint); - if (classType > LETTER_NUMBER_MODIFIER) { - categories.push(true); - classType -= LETTER_NUMBER_MODIFIER; - } - else { - categories.push(false); - } - if (['normal', 'auto', 'loose'].indexOf(lineBreak) !== -1) { - // U+2010, – U+2013, 〜 U+301C, ゠ U+30A0 - if ([0x2010, 0x2013, 0x301c, 0x30a0].indexOf(codePoint) !== -1) { - indices.push(index); - return types.push(CB); - } - } - if (classType === CM || classType === ZWJ$1) { - // LB10 Treat any remaining combining mark or ZWJ as AL. - if (index === 0) { - indices.push(index); - return types.push(AL); - } - // LB9 Do not break a combining character sequence; treat it as if it has the line breaking class of - // the base character in all of the following rules. Treat ZWJ as if it were CM. - var prev = types[index - 1]; - if (LINE_BREAKS.indexOf(prev) === -1) { - indices.push(indices[index - 1]); - return types.push(prev); - } - indices.push(index); - return types.push(AL); - } - indices.push(index); - if (classType === CJ) { - return types.push(lineBreak === 'strict' ? NS : ID); - } - if (classType === SA) { - return types.push(AL); - } - if (classType === AI) { - return types.push(AL); - } - // For supplementary characters, a useful default is to treat characters in the range 10000..1FFFD as AL - // and characters in the ranges 20000..2FFFD and 30000..3FFFD as ID, until the implementation can be revised - // to take into account the actual line breaking properties for these characters. - if (classType === XX) { - if ((codePoint >= 0x20000 && codePoint <= 0x2fffd) || (codePoint >= 0x30000 && codePoint <= 0x3fffd)) { - return types.push(ID); - } - else { - return types.push(AL); - } - } - types.push(classType); - }); - return [indices, types, categories]; - }; - var isAdjacentWithSpaceIgnored = function (a, b, currentIndex, classTypes) { - var current = classTypes[currentIndex]; - if (Array.isArray(a) ? a.indexOf(current) !== -1 : a === current) { - var i = currentIndex; - while (i <= classTypes.length) { - i++; - var next = classTypes[i]; - if (next === b) { - return true; - } - if (next !== SP) { - break; - } - } - } - if (current === SP) { - var i = currentIndex; - while (i > 0) { - i--; - var prev = classTypes[i]; - if (Array.isArray(a) ? 
a.indexOf(prev) !== -1 : a === prev) { - var n = currentIndex; - while (n <= classTypes.length) { - n++; - var next = classTypes[n]; - if (next === b) { - return true; - } - if (next !== SP) { - break; - } - } - } - if (prev !== SP) { - break; - } - } - } - return false; - }; - var previousNonSpaceClassType = function (currentIndex, classTypes) { - var i = currentIndex; - while (i >= 0) { - var type = classTypes[i]; - if (type === SP) { - i--; - } - else { - return type; - } - } - return 0; - }; - var _lineBreakAtIndex = function (codePoints, classTypes, indicies, index, forbiddenBreaks) { - if (indicies[index] === 0) { - return BREAK_NOT_ALLOWED$1; - } - var currentIndex = index - 1; - if (Array.isArray(forbiddenBreaks) && forbiddenBreaks[currentIndex] === true) { - return BREAK_NOT_ALLOWED$1; - } - var beforeIndex = currentIndex - 1; - var afterIndex = currentIndex + 1; - var current = classTypes[currentIndex]; - // LB4 Always break after hard line breaks. - // LB5 Treat CR followed by LF, as well as CR, LF, and NL as hard line breaks. - var before = beforeIndex >= 0 ? classTypes[beforeIndex] : 0; - var next = classTypes[afterIndex]; - if (current === CR$1 && next === LF$1) { - return BREAK_NOT_ALLOWED$1; - } - if (HARD_LINE_BREAKS.indexOf(current) !== -1) { - return BREAK_MANDATORY; - } - // LB6 Do not break before hard line breaks. - if (HARD_LINE_BREAKS.indexOf(next) !== -1) { - return BREAK_NOT_ALLOWED$1; - } - // LB7 Do not break before spaces or zero width space. - if (SPACE$1.indexOf(next) !== -1) { - return BREAK_NOT_ALLOWED$1; - } - // LB8 Break before any character following a zero-width space, even if one or more spaces intervene. - if (previousNonSpaceClassType(currentIndex, classTypes) === ZW) { - return BREAK_ALLOWED$1; - } - // LB8a Do not break after a zero width joiner. - if (UnicodeTrie$1.get(codePoints[currentIndex]) === ZWJ$1) { - return BREAK_NOT_ALLOWED$1; - } - // zwj emojis - if ((current === EB || current === EM) && UnicodeTrie$1.get(codePoints[afterIndex]) === ZWJ$1) { - return BREAK_NOT_ALLOWED$1; - } - // LB11 Do not break before or after Word joiner and related characters. - if (current === WJ || next === WJ) { - return BREAK_NOT_ALLOWED$1; - } - // LB12 Do not break after NBSP and related characters. - if (current === GL) { - return BREAK_NOT_ALLOWED$1; - } - // LB12a Do not break before NBSP and related characters, except after spaces and hyphens. - if ([SP, BA, HY].indexOf(current) === -1 && next === GL) { - return BREAK_NOT_ALLOWED$1; - } - // LB13 Do not break before ‘]’ or ‘!’ or ‘;’ or ‘/’, even after spaces. - if ([CL, CP, EX, IS, SY].indexOf(next) !== -1) { - return BREAK_NOT_ALLOWED$1; - } - // LB14 Do not break after ‘[’, even after spaces. - if (previousNonSpaceClassType(currentIndex, classTypes) === OP) { - return BREAK_NOT_ALLOWED$1; - } - // LB15 Do not break within ‘”[’, even with intervening spaces. - if (isAdjacentWithSpaceIgnored(QU, OP, currentIndex, classTypes)) { - return BREAK_NOT_ALLOWED$1; - } - // LB16 Do not break between closing punctuation and a nonstarter (lb=NS), even with intervening spaces. - if (isAdjacentWithSpaceIgnored([CL, CP], NS, currentIndex, classTypes)) { - return BREAK_NOT_ALLOWED$1; - } - // LB17 Do not break within ‘——’, even with intervening spaces. - if (isAdjacentWithSpaceIgnored(B2, B2, currentIndex, classTypes)) { - return BREAK_NOT_ALLOWED$1; - } - // LB18 Break after spaces. - if (current === SP) { - return BREAK_ALLOWED$1; - } - // LB19 Do not break before or after quotation marks, such as ‘ ” ’. 
- if (current === QU || next === QU) { - return BREAK_NOT_ALLOWED$1; - } - // LB20 Break before and after unresolved CB. - if (next === CB || current === CB) { - return BREAK_ALLOWED$1; - } - // LB21 Do not break before hyphen-minus, other hyphens, fixed-width spaces, small kana, and other non-starters, or after acute accents. - if ([BA, HY, NS].indexOf(next) !== -1 || current === BB) { - return BREAK_NOT_ALLOWED$1; - } - // LB21a Don't break after Hebrew + Hyphen. - if (before === HL && HYPHEN.indexOf(current) !== -1) { - return BREAK_NOT_ALLOWED$1; - } - // LB21b Don’t break between Solidus and Hebrew letters. - if (current === SY && next === HL) { - return BREAK_NOT_ALLOWED$1; - } - // LB22 Do not break before ellipsis. - if (next === IN) { - return BREAK_NOT_ALLOWED$1; - } - // LB23 Do not break between digits and letters. - if ((ALPHABETICS.indexOf(next) !== -1 && current === NU) || (ALPHABETICS.indexOf(current) !== -1 && next === NU)) { - return BREAK_NOT_ALLOWED$1; - } - // LB23a Do not break between numeric prefixes and ideographs, or between ideographs and numeric postfixes. - if ((current === PR && [ID, EB, EM].indexOf(next) !== -1) || - ([ID, EB, EM].indexOf(current) !== -1 && next === PO)) { - return BREAK_NOT_ALLOWED$1; - } - // LB24 Do not break between numeric prefix/postfix and letters, or between letters and prefix/postfix. - if ((ALPHABETICS.indexOf(current) !== -1 && PREFIX_POSTFIX.indexOf(next) !== -1) || - (PREFIX_POSTFIX.indexOf(current) !== -1 && ALPHABETICS.indexOf(next) !== -1)) { - return BREAK_NOT_ALLOWED$1; - } - // LB25 Do not break between the following pairs of classes relevant to numbers: - if ( - // (PR | PO) × ( OP | HY )? NU - ([PR, PO].indexOf(current) !== -1 && - (next === NU || ([OP, HY].indexOf(next) !== -1 && classTypes[afterIndex + 1] === NU))) || - // ( OP | HY ) × NU - ([OP, HY].indexOf(current) !== -1 && next === NU) || - // NU × (NU | SY | IS) - (current === NU && [NU, SY, IS].indexOf(next) !== -1)) { - return BREAK_NOT_ALLOWED$1; - } - // NU (NU | SY | IS)* × (NU | SY | IS | CL | CP) - if ([NU, SY, IS, CL, CP].indexOf(next) !== -1) { - var prevIndex = currentIndex; - while (prevIndex >= 0) { - var type = classTypes[prevIndex]; - if (type === NU) { - return BREAK_NOT_ALLOWED$1; - } - else if ([SY, IS].indexOf(type) !== -1) { - prevIndex--; - } - else { - break; - } - } - } - // NU (NU | SY | IS)* (CL | CP)? × (PO | PR)) - if ([PR, PO].indexOf(next) !== -1) { - var prevIndex = [CL, CP].indexOf(current) !== -1 ? beforeIndex : currentIndex; - while (prevIndex >= 0) { - var type = classTypes[prevIndex]; - if (type === NU) { - return BREAK_NOT_ALLOWED$1; - } - else if ([SY, IS].indexOf(type) !== -1) { - prevIndex--; - } - else { - break; - } - } - } - // LB26 Do not break a Korean syllable. - if ((JL === current && [JL, JV, H2, H3].indexOf(next) !== -1) || - ([JV, H2].indexOf(current) !== -1 && [JV, JT].indexOf(next) !== -1) || - ([JT, H3].indexOf(current) !== -1 && next === JT)) { - return BREAK_NOT_ALLOWED$1; - } - // LB27 Treat a Korean Syllable Block the same as ID. - if ((KOREAN_SYLLABLE_BLOCK.indexOf(current) !== -1 && [IN, PO].indexOf(next) !== -1) || - (KOREAN_SYLLABLE_BLOCK.indexOf(next) !== -1 && current === PR)) { - return BREAK_NOT_ALLOWED$1; - } - // LB28 Do not break between alphabetics (“at”). - if (ALPHABETICS.indexOf(current) !== -1 && ALPHABETICS.indexOf(next) !== -1) { - return BREAK_NOT_ALLOWED$1; - } - // LB29 Do not break between numeric punctuation and alphabetics (“e.g.”). 
- if (current === IS && ALPHABETICS.indexOf(next) !== -1) { - return BREAK_NOT_ALLOWED$1; - } - // LB30 Do not break between letters, numbers, or ordinary symbols and opening or closing parentheses. - if ((ALPHABETICS.concat(NU).indexOf(current) !== -1 && - next === OP && - ea_OP.indexOf(codePoints[afterIndex]) === -1) || - (ALPHABETICS.concat(NU).indexOf(next) !== -1 && current === CP)) { - return BREAK_NOT_ALLOWED$1; - } - // LB30a Break between two regional indicator symbols if and only if there are an even number of regional - // indicators preceding the position of the break. - if (current === RI$1 && next === RI$1) { - var i = indicies[currentIndex]; - var count = 1; - while (i > 0) { - i--; - if (classTypes[i] === RI$1) { - count++; - } - else { - break; - } - } - if (count % 2 !== 0) { - return BREAK_NOT_ALLOWED$1; - } - } - // LB30b Do not break between an emoji base and an emoji modifier. - if (current === EB && next === EM) { - return BREAK_NOT_ALLOWED$1; - } - return BREAK_ALLOWED$1; - }; - var cssFormattedClasses = function (codePoints, options) { - if (!options) { - options = { lineBreak: 'normal', wordBreak: 'normal' }; - } - var _a = codePointsToCharacterClasses(codePoints, options.lineBreak), indicies = _a[0], classTypes = _a[1], isLetterNumber = _a[2]; - if (options.wordBreak === 'break-all' || options.wordBreak === 'break-word') { - classTypes = classTypes.map(function (type) { return ([NU, AL, SA].indexOf(type) !== -1 ? ID : type); }); - } - var forbiddenBreakpoints = options.wordBreak === 'keep-all' - ? isLetterNumber.map(function (letterNumber, i) { - return letterNumber && codePoints[i] >= 0x4e00 && codePoints[i] <= 0x9fff; - }) - : undefined; - return [indicies, classTypes, forbiddenBreakpoints]; - }; - var Break = /** @class */ (function () { - function Break(codePoints, lineBreak, start, end) { - this.codePoints = codePoints; - this.required = lineBreak === BREAK_MANDATORY; - this.start = start; - this.end = end; - } - Break.prototype.slice = function () { - return fromCodePoint$1.apply(void 0, this.codePoints.slice(this.start, this.end)); - }; - return Break; - }()); - var LineBreaker = function (str, options) { - var codePoints = toCodePoints$1(str); - var _a = cssFormattedClasses(codePoints, options), indicies = _a[0], classTypes = _a[1], forbiddenBreakpoints = _a[2]; - var length = codePoints.length; - var lastEnd = 0; - var nextIndex = 0; - return { - next: function () { - if (nextIndex >= length) { - return { done: true, value: null }; - } - var lineBreak = BREAK_NOT_ALLOWED$1; - while (nextIndex < length && - (lineBreak = _lineBreakAtIndex(codePoints, classTypes, indicies, ++nextIndex, forbiddenBreakpoints)) === - BREAK_NOT_ALLOWED$1) { } - if (lineBreak !== BREAK_NOT_ALLOWED$1 || nextIndex === length) { - var value = new Break(codePoints, lineBreak, lastEnd, nextIndex); - lastEnd = nextIndex; - return { value: value, done: false }; - } - return { done: true, value: null }; - }, - }; - }; - - // https://www.w3.org/TR/css-syntax-3 - var FLAG_UNRESTRICTED = 1 << 0; - var FLAG_ID = 1 << 1; - var FLAG_INTEGER = 1 << 2; - var FLAG_NUMBER = 1 << 3; - var LINE_FEED = 0x000a; - var SOLIDUS = 0x002f; - var REVERSE_SOLIDUS = 0x005c; - var CHARACTER_TABULATION = 0x0009; - var SPACE = 0x0020; - var QUOTATION_MARK = 0x0022; - var EQUALS_SIGN = 0x003d; - var NUMBER_SIGN = 0x0023; - var DOLLAR_SIGN = 0x0024; - var PERCENTAGE_SIGN = 0x0025; - var APOSTROPHE = 0x0027; - var LEFT_PARENTHESIS = 0x0028; - var RIGHT_PARENTHESIS = 0x0029; - var LOW_LINE = 0x005f; - var 
HYPHEN_MINUS = 0x002d; - var EXCLAMATION_MARK = 0x0021; - var LESS_THAN_SIGN = 0x003c; - var GREATER_THAN_SIGN = 0x003e; - var COMMERCIAL_AT = 0x0040; - var LEFT_SQUARE_BRACKET = 0x005b; - var RIGHT_SQUARE_BRACKET = 0x005d; - var CIRCUMFLEX_ACCENT = 0x003d; - var LEFT_CURLY_BRACKET = 0x007b; - var QUESTION_MARK = 0x003f; - var RIGHT_CURLY_BRACKET = 0x007d; - var VERTICAL_LINE = 0x007c; - var TILDE = 0x007e; - var CONTROL = 0x0080; - var REPLACEMENT_CHARACTER = 0xfffd; - var ASTERISK = 0x002a; - var PLUS_SIGN = 0x002b; - var COMMA = 0x002c; - var COLON = 0x003a; - var SEMICOLON = 0x003b; - var FULL_STOP = 0x002e; - var NULL = 0x0000; - var BACKSPACE = 0x0008; - var LINE_TABULATION = 0x000b; - var SHIFT_OUT = 0x000e; - var INFORMATION_SEPARATOR_ONE = 0x001f; - var DELETE = 0x007f; - var EOF = -1; - var ZERO = 0x0030; - var a = 0x0061; - var e = 0x0065; - var f = 0x0066; - var u = 0x0075; - var z = 0x007a; - var A = 0x0041; - var E = 0x0045; - var F = 0x0046; - var U = 0x0055; - var Z = 0x005a; - var isDigit = function (codePoint) { return codePoint >= ZERO && codePoint <= 0x0039; }; - var isSurrogateCodePoint = function (codePoint) { return codePoint >= 0xd800 && codePoint <= 0xdfff; }; - var isHex = function (codePoint) { - return isDigit(codePoint) || (codePoint >= A && codePoint <= F) || (codePoint >= a && codePoint <= f); - }; - var isLowerCaseLetter = function (codePoint) { return codePoint >= a && codePoint <= z; }; - var isUpperCaseLetter = function (codePoint) { return codePoint >= A && codePoint <= Z; }; - var isLetter = function (codePoint) { return isLowerCaseLetter(codePoint) || isUpperCaseLetter(codePoint); }; - var isNonASCIICodePoint = function (codePoint) { return codePoint >= CONTROL; }; - var isWhiteSpace = function (codePoint) { - return codePoint === LINE_FEED || codePoint === CHARACTER_TABULATION || codePoint === SPACE; - }; - var isNameStartCodePoint = function (codePoint) { - return isLetter(codePoint) || isNonASCIICodePoint(codePoint) || codePoint === LOW_LINE; - }; - var isNameCodePoint = function (codePoint) { - return isNameStartCodePoint(codePoint) || isDigit(codePoint) || codePoint === HYPHEN_MINUS; - }; - var isNonPrintableCodePoint = function (codePoint) { - return ((codePoint >= NULL && codePoint <= BACKSPACE) || - codePoint === LINE_TABULATION || - (codePoint >= SHIFT_OUT && codePoint <= INFORMATION_SEPARATOR_ONE) || - codePoint === DELETE); - }; - var isValidEscape = function (c1, c2) { - if (c1 !== REVERSE_SOLIDUS) { - return false; - } - return c2 !== LINE_FEED; - }; - var isIdentifierStart = function (c1, c2, c3) { - if (c1 === HYPHEN_MINUS) { - return isNameStartCodePoint(c2) || isValidEscape(c2, c3); - } - else if (isNameStartCodePoint(c1)) { - return true; - } - else if (c1 === REVERSE_SOLIDUS && isValidEscape(c1, c2)) { - return true; - } - return false; - }; - var isNumberStart = function (c1, c2, c3) { - if (c1 === PLUS_SIGN || c1 === HYPHEN_MINUS) { - if (isDigit(c2)) { - return true; - } - return c2 === FULL_STOP && isDigit(c3); - } - if (c1 === FULL_STOP) { - return isDigit(c2); - } - return isDigit(c1); - }; - var stringToNumber = function (codePoints) { - var c = 0; - var sign = 1; - if (codePoints[c] === PLUS_SIGN || codePoints[c] === HYPHEN_MINUS) { - if (codePoints[c] === HYPHEN_MINUS) { - sign = -1; - } - c++; - } - var integers = []; - while (isDigit(codePoints[c])) { - integers.push(codePoints[c++]); - } - var int = integers.length ? 
parseInt(fromCodePoint$1.apply(void 0, integers), 10) : 0; - if (codePoints[c] === FULL_STOP) { - c++; - } - var fraction = []; - while (isDigit(codePoints[c])) { - fraction.push(codePoints[c++]); - } - var fracd = fraction.length; - var frac = fracd ? parseInt(fromCodePoint$1.apply(void 0, fraction), 10) : 0; - if (codePoints[c] === E || codePoints[c] === e) { - c++; - } - var expsign = 1; - if (codePoints[c] === PLUS_SIGN || codePoints[c] === HYPHEN_MINUS) { - if (codePoints[c] === HYPHEN_MINUS) { - expsign = -1; - } - c++; - } - var exponent = []; - while (isDigit(codePoints[c])) { - exponent.push(codePoints[c++]); - } - var exp = exponent.length ? parseInt(fromCodePoint$1.apply(void 0, exponent), 10) : 0; - return sign * (int + frac * Math.pow(10, -fracd)) * Math.pow(10, expsign * exp); - }; - var LEFT_PARENTHESIS_TOKEN = { - type: 2 /* LEFT_PARENTHESIS_TOKEN */ - }; - var RIGHT_PARENTHESIS_TOKEN = { - type: 3 /* RIGHT_PARENTHESIS_TOKEN */ - }; - var COMMA_TOKEN = { type: 4 /* COMMA_TOKEN */ }; - var SUFFIX_MATCH_TOKEN = { type: 13 /* SUFFIX_MATCH_TOKEN */ }; - var PREFIX_MATCH_TOKEN = { type: 8 /* PREFIX_MATCH_TOKEN */ }; - var COLUMN_TOKEN = { type: 21 /* COLUMN_TOKEN */ }; - var DASH_MATCH_TOKEN = { type: 9 /* DASH_MATCH_TOKEN */ }; - var INCLUDE_MATCH_TOKEN = { type: 10 /* INCLUDE_MATCH_TOKEN */ }; - var LEFT_CURLY_BRACKET_TOKEN = { - type: 11 /* LEFT_CURLY_BRACKET_TOKEN */ - }; - var RIGHT_CURLY_BRACKET_TOKEN = { - type: 12 /* RIGHT_CURLY_BRACKET_TOKEN */ - }; - var SUBSTRING_MATCH_TOKEN = { type: 14 /* SUBSTRING_MATCH_TOKEN */ }; - var BAD_URL_TOKEN = { type: 23 /* BAD_URL_TOKEN */ }; - var BAD_STRING_TOKEN = { type: 1 /* BAD_STRING_TOKEN */ }; - var CDO_TOKEN = { type: 25 /* CDO_TOKEN */ }; - var CDC_TOKEN = { type: 24 /* CDC_TOKEN */ }; - var COLON_TOKEN = { type: 26 /* COLON_TOKEN */ }; - var SEMICOLON_TOKEN = { type: 27 /* SEMICOLON_TOKEN */ }; - var LEFT_SQUARE_BRACKET_TOKEN = { - type: 28 /* LEFT_SQUARE_BRACKET_TOKEN */ - }; - var RIGHT_SQUARE_BRACKET_TOKEN = { - type: 29 /* RIGHT_SQUARE_BRACKET_TOKEN */ - }; - var WHITESPACE_TOKEN = { type: 31 /* WHITESPACE_TOKEN */ }; - var EOF_TOKEN = { type: 32 /* EOF_TOKEN */ }; - var Tokenizer = /** @class */ (function () { - function Tokenizer() { - this._value = []; - } - Tokenizer.prototype.write = function (chunk) { - this._value = this._value.concat(toCodePoints$1(chunk)); - }; - Tokenizer.prototype.read = function () { - var tokens = []; - var token = this.consumeToken(); - while (token !== EOF_TOKEN) { - tokens.push(token); - token = this.consumeToken(); - } - return tokens; - }; - Tokenizer.prototype.consumeToken = function () { - var codePoint = this.consumeCodePoint(); - switch (codePoint) { - case QUOTATION_MARK: - return this.consumeStringToken(QUOTATION_MARK); - case NUMBER_SIGN: - var c1 = this.peekCodePoint(0); - var c2 = this.peekCodePoint(1); - var c3 = this.peekCodePoint(2); - if (isNameCodePoint(c1) || isValidEscape(c2, c3)) { - var flags = isIdentifierStart(c1, c2, c3) ? 
FLAG_ID : FLAG_UNRESTRICTED; - var value = this.consumeName(); - return { type: 5 /* HASH_TOKEN */, value: value, flags: flags }; - } - break; - case DOLLAR_SIGN: - if (this.peekCodePoint(0) === EQUALS_SIGN) { - this.consumeCodePoint(); - return SUFFIX_MATCH_TOKEN; - } - break; - case APOSTROPHE: - return this.consumeStringToken(APOSTROPHE); - case LEFT_PARENTHESIS: - return LEFT_PARENTHESIS_TOKEN; - case RIGHT_PARENTHESIS: - return RIGHT_PARENTHESIS_TOKEN; - case ASTERISK: - if (this.peekCodePoint(0) === EQUALS_SIGN) { - this.consumeCodePoint(); - return SUBSTRING_MATCH_TOKEN; - } - break; - case PLUS_SIGN: - if (isNumberStart(codePoint, this.peekCodePoint(0), this.peekCodePoint(1))) { - this.reconsumeCodePoint(codePoint); - return this.consumeNumericToken(); - } - break; - case COMMA: - return COMMA_TOKEN; - case HYPHEN_MINUS: - var e1 = codePoint; - var e2 = this.peekCodePoint(0); - var e3 = this.peekCodePoint(1); - if (isNumberStart(e1, e2, e3)) { - this.reconsumeCodePoint(codePoint); - return this.consumeNumericToken(); - } - if (isIdentifierStart(e1, e2, e3)) { - this.reconsumeCodePoint(codePoint); - return this.consumeIdentLikeToken(); - } - if (e2 === HYPHEN_MINUS && e3 === GREATER_THAN_SIGN) { - this.consumeCodePoint(); - this.consumeCodePoint(); - return CDC_TOKEN; - } - break; - case FULL_STOP: - if (isNumberStart(codePoint, this.peekCodePoint(0), this.peekCodePoint(1))) { - this.reconsumeCodePoint(codePoint); - return this.consumeNumericToken(); - } - break; - case SOLIDUS: - if (this.peekCodePoint(0) === ASTERISK) { - this.consumeCodePoint(); - while (true) { - var c = this.consumeCodePoint(); - if (c === ASTERISK) { - c = this.consumeCodePoint(); - if (c === SOLIDUS) { - return this.consumeToken(); - } - } - if (c === EOF) { - return this.consumeToken(); - } - } - } - break; - case COLON: - return COLON_TOKEN; - case SEMICOLON: - return SEMICOLON_TOKEN; - case LESS_THAN_SIGN: - if (this.peekCodePoint(0) === EXCLAMATION_MARK && - this.peekCodePoint(1) === HYPHEN_MINUS && - this.peekCodePoint(2) === HYPHEN_MINUS) { - this.consumeCodePoint(); - this.consumeCodePoint(); - return CDO_TOKEN; - } - break; - case COMMERCIAL_AT: - var a1 = this.peekCodePoint(0); - var a2 = this.peekCodePoint(1); - var a3 = this.peekCodePoint(2); - if (isIdentifierStart(a1, a2, a3)) { - var value = this.consumeName(); - return { type: 7 /* AT_KEYWORD_TOKEN */, value: value }; - } - break; - case LEFT_SQUARE_BRACKET: - return LEFT_SQUARE_BRACKET_TOKEN; - case REVERSE_SOLIDUS: - if (isValidEscape(codePoint, this.peekCodePoint(0))) { - this.reconsumeCodePoint(codePoint); - return this.consumeIdentLikeToken(); - } - break; - case RIGHT_SQUARE_BRACKET: - return RIGHT_SQUARE_BRACKET_TOKEN; - case CIRCUMFLEX_ACCENT: - if (this.peekCodePoint(0) === EQUALS_SIGN) { - this.consumeCodePoint(); - return PREFIX_MATCH_TOKEN; - } - break; - case LEFT_CURLY_BRACKET: - return LEFT_CURLY_BRACKET_TOKEN; - case RIGHT_CURLY_BRACKET: - return RIGHT_CURLY_BRACKET_TOKEN; - case u: - case U: - var u1 = this.peekCodePoint(0); - var u2 = this.peekCodePoint(1); - if (u1 === PLUS_SIGN && (isHex(u2) || u2 === QUESTION_MARK)) { - this.consumeCodePoint(); - this.consumeUnicodeRangeToken(); - } - this.reconsumeCodePoint(codePoint); - return this.consumeIdentLikeToken(); - case VERTICAL_LINE: - if (this.peekCodePoint(0) === EQUALS_SIGN) { - this.consumeCodePoint(); - return DASH_MATCH_TOKEN; - } - if (this.peekCodePoint(0) === VERTICAL_LINE) { - this.consumeCodePoint(); - return COLUMN_TOKEN; - } - break; - case TILDE: - if 
(this.peekCodePoint(0) === EQUALS_SIGN) { - this.consumeCodePoint(); - return INCLUDE_MATCH_TOKEN; - } - break; - case EOF: - return EOF_TOKEN; - } - if (isWhiteSpace(codePoint)) { - this.consumeWhiteSpace(); - return WHITESPACE_TOKEN; - } - if (isDigit(codePoint)) { - this.reconsumeCodePoint(codePoint); - return this.consumeNumericToken(); - } - if (isNameStartCodePoint(codePoint)) { - this.reconsumeCodePoint(codePoint); - return this.consumeIdentLikeToken(); - } - return { type: 6 /* DELIM_TOKEN */, value: fromCodePoint$1(codePoint) }; - }; - Tokenizer.prototype.consumeCodePoint = function () { - var value = this._value.shift(); - return typeof value === 'undefined' ? -1 : value; - }; - Tokenizer.prototype.reconsumeCodePoint = function (codePoint) { - this._value.unshift(codePoint); - }; - Tokenizer.prototype.peekCodePoint = function (delta) { - if (delta >= this._value.length) { - return -1; - } - return this._value[delta]; - }; - Tokenizer.prototype.consumeUnicodeRangeToken = function () { - var digits = []; - var codePoint = this.consumeCodePoint(); - while (isHex(codePoint) && digits.length < 6) { - digits.push(codePoint); - codePoint = this.consumeCodePoint(); - } - var questionMarks = false; - while (codePoint === QUESTION_MARK && digits.length < 6) { - digits.push(codePoint); - codePoint = this.consumeCodePoint(); - questionMarks = true; - } - if (questionMarks) { - var start_1 = parseInt(fromCodePoint$1.apply(void 0, digits.map(function (digit) { return (digit === QUESTION_MARK ? ZERO : digit); })), 16); - var end = parseInt(fromCodePoint$1.apply(void 0, digits.map(function (digit) { return (digit === QUESTION_MARK ? F : digit); })), 16); - return { type: 30 /* UNICODE_RANGE_TOKEN */, start: start_1, end: end }; - } - var start = parseInt(fromCodePoint$1.apply(void 0, digits), 16); - if (this.peekCodePoint(0) === HYPHEN_MINUS && isHex(this.peekCodePoint(1))) { - this.consumeCodePoint(); - codePoint = this.consumeCodePoint(); - var endDigits = []; - while (isHex(codePoint) && endDigits.length < 6) { - endDigits.push(codePoint); - codePoint = this.consumeCodePoint(); - } - var end = parseInt(fromCodePoint$1.apply(void 0, endDigits), 16); - return { type: 30 /* UNICODE_RANGE_TOKEN */, start: start, end: end }; - } - else { - return { type: 30 /* UNICODE_RANGE_TOKEN */, start: start, end: start }; - } - }; - Tokenizer.prototype.consumeIdentLikeToken = function () { - var value = this.consumeName(); - if (value.toLowerCase() === 'url' && this.peekCodePoint(0) === LEFT_PARENTHESIS) { - this.consumeCodePoint(); - return this.consumeUrlToken(); - } - else if (this.peekCodePoint(0) === LEFT_PARENTHESIS) { - this.consumeCodePoint(); - return { type: 19 /* FUNCTION_TOKEN */, value: value }; - } - return { type: 20 /* IDENT_TOKEN */, value: value }; - }; - Tokenizer.prototype.consumeUrlToken = function () { - var value = []; - this.consumeWhiteSpace(); - if (this.peekCodePoint(0) === EOF) { - return { type: 22 /* URL_TOKEN */, value: '' }; - } - var next = this.peekCodePoint(0); - if (next === APOSTROPHE || next === QUOTATION_MARK) { - var stringToken = this.consumeStringToken(this.consumeCodePoint()); - if (stringToken.type === 0 /* STRING_TOKEN */) { - this.consumeWhiteSpace(); - if (this.peekCodePoint(0) === EOF || this.peekCodePoint(0) === RIGHT_PARENTHESIS) { - this.consumeCodePoint(); - return { type: 22 /* URL_TOKEN */, value: stringToken.value }; - } - } - this.consumeBadUrlRemnants(); - return BAD_URL_TOKEN; - } - while (true) { - var codePoint = this.consumeCodePoint(); - if 
(codePoint === EOF || codePoint === RIGHT_PARENTHESIS) { - return { type: 22 /* URL_TOKEN */, value: fromCodePoint$1.apply(void 0, value) }; - } - else if (isWhiteSpace(codePoint)) { - this.consumeWhiteSpace(); - if (this.peekCodePoint(0) === EOF || this.peekCodePoint(0) === RIGHT_PARENTHESIS) { - this.consumeCodePoint(); - return { type: 22 /* URL_TOKEN */, value: fromCodePoint$1.apply(void 0, value) }; - } - this.consumeBadUrlRemnants(); - return BAD_URL_TOKEN; - } - else if (codePoint === QUOTATION_MARK || - codePoint === APOSTROPHE || - codePoint === LEFT_PARENTHESIS || - isNonPrintableCodePoint(codePoint)) { - this.consumeBadUrlRemnants(); - return BAD_URL_TOKEN; - } - else if (codePoint === REVERSE_SOLIDUS) { - if (isValidEscape(codePoint, this.peekCodePoint(0))) { - value.push(this.consumeEscapedCodePoint()); - } - else { - this.consumeBadUrlRemnants(); - return BAD_URL_TOKEN; - } - } - else { - value.push(codePoint); - } - } - }; - Tokenizer.prototype.consumeWhiteSpace = function () { - while (isWhiteSpace(this.peekCodePoint(0))) { - this.consumeCodePoint(); - } - }; - Tokenizer.prototype.consumeBadUrlRemnants = function () { - while (true) { - var codePoint = this.consumeCodePoint(); - if (codePoint === RIGHT_PARENTHESIS || codePoint === EOF) { - return; - } - if (isValidEscape(codePoint, this.peekCodePoint(0))) { - this.consumeEscapedCodePoint(); - } - } - }; - Tokenizer.prototype.consumeStringSlice = function (count) { - var SLICE_STACK_SIZE = 50000; - var value = ''; - while (count > 0) { - var amount = Math.min(SLICE_STACK_SIZE, count); - value += fromCodePoint$1.apply(void 0, this._value.splice(0, amount)); - count -= amount; - } - this._value.shift(); - return value; - }; - Tokenizer.prototype.consumeStringToken = function (endingCodePoint) { - var value = ''; - var i = 0; - do { - var codePoint = this._value[i]; - if (codePoint === EOF || codePoint === undefined || codePoint === endingCodePoint) { - value += this.consumeStringSlice(i); - return { type: 0 /* STRING_TOKEN */, value: value }; - } - if (codePoint === LINE_FEED) { - this._value.splice(0, i); - return BAD_STRING_TOKEN; - } - if (codePoint === REVERSE_SOLIDUS) { - var next = this._value[i + 1]; - if (next !== EOF && next !== undefined) { - if (next === LINE_FEED) { - value += this.consumeStringSlice(i); - i = -1; - this._value.shift(); - } - else if (isValidEscape(codePoint, next)) { - value += this.consumeStringSlice(i); - value += fromCodePoint$1(this.consumeEscapedCodePoint()); - i = -1; - } - } - } - i++; - } while (true); - }; - Tokenizer.prototype.consumeNumber = function () { - var repr = []; - var type = FLAG_INTEGER; - var c1 = this.peekCodePoint(0); - if (c1 === PLUS_SIGN || c1 === HYPHEN_MINUS) { - repr.push(this.consumeCodePoint()); - } - while (isDigit(this.peekCodePoint(0))) { - repr.push(this.consumeCodePoint()); - } - c1 = this.peekCodePoint(0); - var c2 = this.peekCodePoint(1); - if (c1 === FULL_STOP && isDigit(c2)) { - repr.push(this.consumeCodePoint(), this.consumeCodePoint()); - type = FLAG_NUMBER; - while (isDigit(this.peekCodePoint(0))) { - repr.push(this.consumeCodePoint()); - } - } - c1 = this.peekCodePoint(0); - c2 = this.peekCodePoint(1); - var c3 = this.peekCodePoint(2); - if ((c1 === E || c1 === e) && (((c2 === PLUS_SIGN || c2 === HYPHEN_MINUS) && isDigit(c3)) || isDigit(c2))) { - repr.push(this.consumeCodePoint(), this.consumeCodePoint()); - type = FLAG_NUMBER; - while (isDigit(this.peekCodePoint(0))) { - repr.push(this.consumeCodePoint()); - } - } - return [stringToNumber(repr), 
type]; - }; - Tokenizer.prototype.consumeNumericToken = function () { - var _a = this.consumeNumber(), number = _a[0], flags = _a[1]; - var c1 = this.peekCodePoint(0); - var c2 = this.peekCodePoint(1); - var c3 = this.peekCodePoint(2); - if (isIdentifierStart(c1, c2, c3)) { - var unit = this.consumeName(); - return { type: 15 /* DIMENSION_TOKEN */, number: number, flags: flags, unit: unit }; - } - if (c1 === PERCENTAGE_SIGN) { - this.consumeCodePoint(); - return { type: 16 /* PERCENTAGE_TOKEN */, number: number, flags: flags }; - } - return { type: 17 /* NUMBER_TOKEN */, number: number, flags: flags }; - }; - Tokenizer.prototype.consumeEscapedCodePoint = function () { - var codePoint = this.consumeCodePoint(); - if (isHex(codePoint)) { - var hex = fromCodePoint$1(codePoint); - while (isHex(this.peekCodePoint(0)) && hex.length < 6) { - hex += fromCodePoint$1(this.consumeCodePoint()); - } - if (isWhiteSpace(this.peekCodePoint(0))) { - this.consumeCodePoint(); - } - var hexCodePoint = parseInt(hex, 16); - if (hexCodePoint === 0 || isSurrogateCodePoint(hexCodePoint) || hexCodePoint > 0x10ffff) { - return REPLACEMENT_CHARACTER; - } - return hexCodePoint; - } - if (codePoint === EOF) { - return REPLACEMENT_CHARACTER; - } - return codePoint; - }; - Tokenizer.prototype.consumeName = function () { - var result = ''; - while (true) { - var codePoint = this.consumeCodePoint(); - if (isNameCodePoint(codePoint)) { - result += fromCodePoint$1(codePoint); - } - else if (isValidEscape(codePoint, this.peekCodePoint(0))) { - result += fromCodePoint$1(this.consumeEscapedCodePoint()); - } - else { - this.reconsumeCodePoint(codePoint); - return result; - } - } - }; - return Tokenizer; - }()); - - var Parser = /** @class */ (function () { - function Parser(tokens) { - this._tokens = tokens; - } - Parser.create = function (value) { - var tokenizer = new Tokenizer(); - tokenizer.write(value); - return new Parser(tokenizer.read()); - }; - Parser.parseValue = function (value) { - return Parser.create(value).parseComponentValue(); - }; - Parser.parseValues = function (value) { - return Parser.create(value).parseComponentValues(); - }; - Parser.prototype.parseComponentValue = function () { - var token = this.consumeToken(); - while (token.type === 31 /* WHITESPACE_TOKEN */) { - token = this.consumeToken(); - } - if (token.type === 32 /* EOF_TOKEN */) { - throw new SyntaxError("Error parsing CSS component value, unexpected EOF"); - } - this.reconsumeToken(token); - var value = this.consumeComponentValue(); - do { - token = this.consumeToken(); - } while (token.type === 31 /* WHITESPACE_TOKEN */); - if (token.type === 32 /* EOF_TOKEN */) { - return value; - } - throw new SyntaxError("Error parsing CSS component value, multiple values found when expecting only one"); - }; - Parser.prototype.parseComponentValues = function () { - var values = []; - while (true) { - var value = this.consumeComponentValue(); - if (value.type === 32 /* EOF_TOKEN */) { - return values; - } - values.push(value); - values.push(); - } - }; - Parser.prototype.consumeComponentValue = function () { - var token = this.consumeToken(); - switch (token.type) { - case 11 /* LEFT_CURLY_BRACKET_TOKEN */: - case 28 /* LEFT_SQUARE_BRACKET_TOKEN */: - case 2 /* LEFT_PARENTHESIS_TOKEN */: - return this.consumeSimpleBlock(token.type); - case 19 /* FUNCTION_TOKEN */: - return this.consumeFunction(token); - } - return token; - }; - Parser.prototype.consumeSimpleBlock = function (type) { - var block = { type: type, values: [] }; - var token = 
this.consumeToken(); - while (true) { - if (token.type === 32 /* EOF_TOKEN */ || isEndingTokenFor(token, type)) { - return block; - } - this.reconsumeToken(token); - block.values.push(this.consumeComponentValue()); - token = this.consumeToken(); - } - }; - Parser.prototype.consumeFunction = function (functionToken) { - var cssFunction = { - name: functionToken.value, - values: [], - type: 18 /* FUNCTION */ - }; - while (true) { - var token = this.consumeToken(); - if (token.type === 32 /* EOF_TOKEN */ || token.type === 3 /* RIGHT_PARENTHESIS_TOKEN */) { - return cssFunction; - } - this.reconsumeToken(token); - cssFunction.values.push(this.consumeComponentValue()); - } - }; - Parser.prototype.consumeToken = function () { - var token = this._tokens.shift(); - return typeof token === 'undefined' ? EOF_TOKEN : token; - }; - Parser.prototype.reconsumeToken = function (token) { - this._tokens.unshift(token); - }; - return Parser; - }()); - var isDimensionToken = function (token) { return token.type === 15 /* DIMENSION_TOKEN */; }; - var isNumberToken = function (token) { return token.type === 17 /* NUMBER_TOKEN */; }; - var isIdentToken = function (token) { return token.type === 20 /* IDENT_TOKEN */; }; - var isStringToken = function (token) { return token.type === 0 /* STRING_TOKEN */; }; - var isIdentWithValue = function (token, value) { - return isIdentToken(token) && token.value === value; - }; - var nonWhiteSpace = function (token) { return token.type !== 31 /* WHITESPACE_TOKEN */; }; - var nonFunctionArgSeparator = function (token) { - return token.type !== 31 /* WHITESPACE_TOKEN */ && token.type !== 4 /* COMMA_TOKEN */; - }; - var parseFunctionArgs = function (tokens) { - var args = []; - var arg = []; - tokens.forEach(function (token) { - if (token.type === 4 /* COMMA_TOKEN */) { - if (arg.length === 0) { - throw new Error("Error parsing function args, zero tokens for arg"); - } - args.push(arg); - arg = []; - return; - } - if (token.type !== 31 /* WHITESPACE_TOKEN */) { - arg.push(token); - } - }); - if (arg.length) { - args.push(arg); - } - return args; - }; - var isEndingTokenFor = function (token, type) { - if (type === 11 /* LEFT_CURLY_BRACKET_TOKEN */ && token.type === 12 /* RIGHT_CURLY_BRACKET_TOKEN */) { - return true; - } - if (type === 28 /* LEFT_SQUARE_BRACKET_TOKEN */ && token.type === 29 /* RIGHT_SQUARE_BRACKET_TOKEN */) { - return true; - } - return type === 2 /* LEFT_PARENTHESIS_TOKEN */ && token.type === 3 /* RIGHT_PARENTHESIS_TOKEN */; - }; - - var isLength = function (token) { - return token.type === 17 /* NUMBER_TOKEN */ || token.type === 15 /* DIMENSION_TOKEN */; - }; - - var isLengthPercentage = function (token) { - return token.type === 16 /* PERCENTAGE_TOKEN */ || isLength(token); - }; - var parseLengthPercentageTuple = function (tokens) { - return tokens.length > 1 ? [tokens[0], tokens[1]] : [tokens[0]]; - }; - var ZERO_LENGTH = { - type: 17 /* NUMBER_TOKEN */, - number: 0, - flags: FLAG_INTEGER - }; - var FIFTY_PERCENT = { - type: 16 /* PERCENTAGE_TOKEN */, - number: 50, - flags: FLAG_INTEGER - }; - var HUNDRED_PERCENT = { - type: 16 /* PERCENTAGE_TOKEN */, - number: 100, - flags: FLAG_INTEGER - }; - var getAbsoluteValueForTuple = function (tuple, width, height) { - var x = tuple[0], y = tuple[1]; - return [getAbsoluteValue(x, width), getAbsoluteValue(typeof y !== 'undefined' ? 
y : x, height)]; - }; - var getAbsoluteValue = function (token, parent) { - if (token.type === 16 /* PERCENTAGE_TOKEN */) { - return (token.number / 100) * parent; - } - if (isDimensionToken(token)) { - switch (token.unit) { - case 'rem': - case 'em': - return 16 * token.number; // TODO use correct font-size - case 'px': - default: - return token.number; - } - } - return token.number; - }; - - var DEG = 'deg'; - var GRAD = 'grad'; - var RAD = 'rad'; - var TURN = 'turn'; - var angle = { - name: 'angle', - parse: function (_context, value) { - if (value.type === 15 /* DIMENSION_TOKEN */) { - switch (value.unit) { - case DEG: - return (Math.PI * value.number) / 180; - case GRAD: - return (Math.PI / 200) * value.number; - case RAD: - return value.number; - case TURN: - return Math.PI * 2 * value.number; - } - } - throw new Error("Unsupported angle type"); - } - }; - var isAngle = function (value) { - if (value.type === 15 /* DIMENSION_TOKEN */) { - if (value.unit === DEG || value.unit === GRAD || value.unit === RAD || value.unit === TURN) { - return true; - } - } - return false; - }; - var parseNamedSide = function (tokens) { - var sideOrCorner = tokens - .filter(isIdentToken) - .map(function (ident) { return ident.value; }) - .join(' '); - switch (sideOrCorner) { - case 'to bottom right': - case 'to right bottom': - case 'left top': - case 'top left': - return [ZERO_LENGTH, ZERO_LENGTH]; - case 'to top': - case 'bottom': - return deg(0); - case 'to bottom left': - case 'to left bottom': - case 'right top': - case 'top right': - return [ZERO_LENGTH, HUNDRED_PERCENT]; - case 'to right': - case 'left': - return deg(90); - case 'to top left': - case 'to left top': - case 'right bottom': - case 'bottom right': - return [HUNDRED_PERCENT, HUNDRED_PERCENT]; - case 'to bottom': - case 'top': - return deg(180); - case 'to top right': - case 'to right top': - case 'left bottom': - case 'bottom left': - return [HUNDRED_PERCENT, ZERO_LENGTH]; - case 'to left': - case 'right': - return deg(270); - } - return 0; - }; - var deg = function (deg) { return (Math.PI * deg) / 180; }; - - var color$1 = { - name: 'color', - parse: function (context, value) { - if (value.type === 18 /* FUNCTION */) { - var colorFunction = SUPPORTED_COLOR_FUNCTIONS[value.name]; - if (typeof colorFunction === 'undefined') { - throw new Error("Attempting to parse an unsupported color function \"" + value.name + "\""); - } - return colorFunction(context, value.values); - } - if (value.type === 5 /* HASH_TOKEN */) { - if (value.value.length === 3) { - var r = value.value.substring(0, 1); - var g = value.value.substring(1, 2); - var b = value.value.substring(2, 3); - return pack(parseInt(r + r, 16), parseInt(g + g, 16), parseInt(b + b, 16), 1); - } - if (value.value.length === 4) { - var r = value.value.substring(0, 1); - var g = value.value.substring(1, 2); - var b = value.value.substring(2, 3); - var a = value.value.substring(3, 4); - return pack(parseInt(r + r, 16), parseInt(g + g, 16), parseInt(b + b, 16), parseInt(a + a, 16) / 255); - } - if (value.value.length === 6) { - var r = value.value.substring(0, 2); - var g = value.value.substring(2, 4); - var b = value.value.substring(4, 6); - return pack(parseInt(r, 16), parseInt(g, 16), parseInt(b, 16), 1); - } - if (value.value.length === 8) { - var r = value.value.substring(0, 2); - var g = value.value.substring(2, 4); - var b = value.value.substring(4, 6); - var a = value.value.substring(6, 8); - return pack(parseInt(r, 16), parseInt(g, 16), parseInt(b, 16), parseInt(a, 16) / 255); - } 
- } - if (value.type === 20 /* IDENT_TOKEN */) { - var namedColor = COLORS[value.value.toUpperCase()]; - if (typeof namedColor !== 'undefined') { - return namedColor; - } - } - return COLORS.TRANSPARENT; - } - }; - var isTransparent = function (color) { return (0xff & color) === 0; }; - var asString = function (color) { - var alpha = 0xff & color; - var blue = 0xff & (color >> 8); - var green = 0xff & (color >> 16); - var red = 0xff & (color >> 24); - return alpha < 255 ? "rgba(" + red + "," + green + "," + blue + "," + alpha / 255 + ")" : "rgb(" + red + "," + green + "," + blue + ")"; - }; - var pack = function (r, g, b, a) { - return ((r << 24) | (g << 16) | (b << 8) | (Math.round(a * 255) << 0)) >>> 0; - }; - var getTokenColorValue = function (token, i) { - if (token.type === 17 /* NUMBER_TOKEN */) { - return token.number; - } - if (token.type === 16 /* PERCENTAGE_TOKEN */) { - var max = i === 3 ? 1 : 255; - return i === 3 ? (token.number / 100) * max : Math.round((token.number / 100) * max); - } - return 0; - }; - var rgb = function (_context, args) { - var tokens = args.filter(nonFunctionArgSeparator); - if (tokens.length === 3) { - var _a = tokens.map(getTokenColorValue), r = _a[0], g = _a[1], b = _a[2]; - return pack(r, g, b, 1); - } - if (tokens.length === 4) { - var _b = tokens.map(getTokenColorValue), r = _b[0], g = _b[1], b = _b[2], a = _b[3]; - return pack(r, g, b, a); - } - return 0; - }; - function hue2rgb(t1, t2, hue) { - if (hue < 0) { - hue += 1; - } - if (hue >= 1) { - hue -= 1; - } - if (hue < 1 / 6) { - return (t2 - t1) * hue * 6 + t1; - } - else if (hue < 1 / 2) { - return t2; - } - else if (hue < 2 / 3) { - return (t2 - t1) * 6 * (2 / 3 - hue) + t1; - } - else { - return t1; - } - } - var hsl = function (context, args) { - var tokens = args.filter(nonFunctionArgSeparator); - var hue = tokens[0], saturation = tokens[1], lightness = tokens[2], alpha = tokens[3]; - var h = (hue.type === 17 /* NUMBER_TOKEN */ ? deg(hue.number) : angle.parse(context, hue)) / (Math.PI * 2); - var s = isLengthPercentage(saturation) ? saturation.number / 100 : 0; - var l = isLengthPercentage(lightness) ? lightness.number / 100 : 0; - var a = typeof alpha !== 'undefined' && isLengthPercentage(alpha) ? getAbsoluteValue(alpha, 1) : 1; - if (s === 0) { - return pack(l * 255, l * 255, l * 255, 1); - } - var t2 = l <= 0.5 ? 
l * (s + 1) : l + s - l * s; - var t1 = l * 2 - t2; - var r = hue2rgb(t1, t2, h + 1 / 3); - var g = hue2rgb(t1, t2, h); - var b = hue2rgb(t1, t2, h - 1 / 3); - return pack(r * 255, g * 255, b * 255, a); - }; - var SUPPORTED_COLOR_FUNCTIONS = { - hsl: hsl, - hsla: hsl, - rgb: rgb, - rgba: rgb - }; - var parseColor = function (context, value) { - return color$1.parse(context, Parser.create(value).parseComponentValue()); - }; - var COLORS = { - ALICEBLUE: 0xf0f8ffff, - ANTIQUEWHITE: 0xfaebd7ff, - AQUA: 0x00ffffff, - AQUAMARINE: 0x7fffd4ff, - AZURE: 0xf0ffffff, - BEIGE: 0xf5f5dcff, - BISQUE: 0xffe4c4ff, - BLACK: 0x000000ff, - BLANCHEDALMOND: 0xffebcdff, - BLUE: 0x0000ffff, - BLUEVIOLET: 0x8a2be2ff, - BROWN: 0xa52a2aff, - BURLYWOOD: 0xdeb887ff, - CADETBLUE: 0x5f9ea0ff, - CHARTREUSE: 0x7fff00ff, - CHOCOLATE: 0xd2691eff, - CORAL: 0xff7f50ff, - CORNFLOWERBLUE: 0x6495edff, - CORNSILK: 0xfff8dcff, - CRIMSON: 0xdc143cff, - CYAN: 0x00ffffff, - DARKBLUE: 0x00008bff, - DARKCYAN: 0x008b8bff, - DARKGOLDENROD: 0xb886bbff, - DARKGRAY: 0xa9a9a9ff, - DARKGREEN: 0x006400ff, - DARKGREY: 0xa9a9a9ff, - DARKKHAKI: 0xbdb76bff, - DARKMAGENTA: 0x8b008bff, - DARKOLIVEGREEN: 0x556b2fff, - DARKORANGE: 0xff8c00ff, - DARKORCHID: 0x9932ccff, - DARKRED: 0x8b0000ff, - DARKSALMON: 0xe9967aff, - DARKSEAGREEN: 0x8fbc8fff, - DARKSLATEBLUE: 0x483d8bff, - DARKSLATEGRAY: 0x2f4f4fff, - DARKSLATEGREY: 0x2f4f4fff, - DARKTURQUOISE: 0x00ced1ff, - DARKVIOLET: 0x9400d3ff, - DEEPPINK: 0xff1493ff, - DEEPSKYBLUE: 0x00bfffff, - DIMGRAY: 0x696969ff, - DIMGREY: 0x696969ff, - DODGERBLUE: 0x1e90ffff, - FIREBRICK: 0xb22222ff, - FLORALWHITE: 0xfffaf0ff, - FORESTGREEN: 0x228b22ff, - FUCHSIA: 0xff00ffff, - GAINSBORO: 0xdcdcdcff, - GHOSTWHITE: 0xf8f8ffff, - GOLD: 0xffd700ff, - GOLDENROD: 0xdaa520ff, - GRAY: 0x808080ff, - GREEN: 0x008000ff, - GREENYELLOW: 0xadff2fff, - GREY: 0x808080ff, - HONEYDEW: 0xf0fff0ff, - HOTPINK: 0xff69b4ff, - INDIANRED: 0xcd5c5cff, - INDIGO: 0x4b0082ff, - IVORY: 0xfffff0ff, - KHAKI: 0xf0e68cff, - LAVENDER: 0xe6e6faff, - LAVENDERBLUSH: 0xfff0f5ff, - LAWNGREEN: 0x7cfc00ff, - LEMONCHIFFON: 0xfffacdff, - LIGHTBLUE: 0xadd8e6ff, - LIGHTCORAL: 0xf08080ff, - LIGHTCYAN: 0xe0ffffff, - LIGHTGOLDENRODYELLOW: 0xfafad2ff, - LIGHTGRAY: 0xd3d3d3ff, - LIGHTGREEN: 0x90ee90ff, - LIGHTGREY: 0xd3d3d3ff, - LIGHTPINK: 0xffb6c1ff, - LIGHTSALMON: 0xffa07aff, - LIGHTSEAGREEN: 0x20b2aaff, - LIGHTSKYBLUE: 0x87cefaff, - LIGHTSLATEGRAY: 0x778899ff, - LIGHTSLATEGREY: 0x778899ff, - LIGHTSTEELBLUE: 0xb0c4deff, - LIGHTYELLOW: 0xffffe0ff, - LIME: 0x00ff00ff, - LIMEGREEN: 0x32cd32ff, - LINEN: 0xfaf0e6ff, - MAGENTA: 0xff00ffff, - MAROON: 0x800000ff, - MEDIUMAQUAMARINE: 0x66cdaaff, - MEDIUMBLUE: 0x0000cdff, - MEDIUMORCHID: 0xba55d3ff, - MEDIUMPURPLE: 0x9370dbff, - MEDIUMSEAGREEN: 0x3cb371ff, - MEDIUMSLATEBLUE: 0x7b68eeff, - MEDIUMSPRINGGREEN: 0x00fa9aff, - MEDIUMTURQUOISE: 0x48d1ccff, - MEDIUMVIOLETRED: 0xc71585ff, - MIDNIGHTBLUE: 0x191970ff, - MINTCREAM: 0xf5fffaff, - MISTYROSE: 0xffe4e1ff, - MOCCASIN: 0xffe4b5ff, - NAVAJOWHITE: 0xffdeadff, - NAVY: 0x000080ff, - OLDLACE: 0xfdf5e6ff, - OLIVE: 0x808000ff, - OLIVEDRAB: 0x6b8e23ff, - ORANGE: 0xffa500ff, - ORANGERED: 0xff4500ff, - ORCHID: 0xda70d6ff, - PALEGOLDENROD: 0xeee8aaff, - PALEGREEN: 0x98fb98ff, - PALETURQUOISE: 0xafeeeeff, - PALEVIOLETRED: 0xdb7093ff, - PAPAYAWHIP: 0xffefd5ff, - PEACHPUFF: 0xffdab9ff, - PERU: 0xcd853fff, - PINK: 0xffc0cbff, - PLUM: 0xdda0ddff, - POWDERBLUE: 0xb0e0e6ff, - PURPLE: 0x800080ff, - REBECCAPURPLE: 0x663399ff, - RED: 0xff0000ff, - ROSYBROWN: 0xbc8f8fff, - ROYALBLUE: 0x4169e1ff, - 
SADDLEBROWN: 0x8b4513ff, - SALMON: 0xfa8072ff, - SANDYBROWN: 0xf4a460ff, - SEAGREEN: 0x2e8b57ff, - SEASHELL: 0xfff5eeff, - SIENNA: 0xa0522dff, - SILVER: 0xc0c0c0ff, - SKYBLUE: 0x87ceebff, - SLATEBLUE: 0x6a5acdff, - SLATEGRAY: 0x708090ff, - SLATEGREY: 0x708090ff, - SNOW: 0xfffafaff, - SPRINGGREEN: 0x00ff7fff, - STEELBLUE: 0x4682b4ff, - TAN: 0xd2b48cff, - TEAL: 0x008080ff, - THISTLE: 0xd8bfd8ff, - TOMATO: 0xff6347ff, - TRANSPARENT: 0x00000000, - TURQUOISE: 0x40e0d0ff, - VIOLET: 0xee82eeff, - WHEAT: 0xf5deb3ff, - WHITE: 0xffffffff, - WHITESMOKE: 0xf5f5f5ff, - YELLOW: 0xffff00ff, - YELLOWGREEN: 0x9acd32ff - }; - - var backgroundClip = { - name: 'background-clip', - initialValue: 'border-box', - prefix: false, - type: 1 /* LIST */, - parse: function (_context, tokens) { - return tokens.map(function (token) { - if (isIdentToken(token)) { - switch (token.value) { - case 'padding-box': - return 1 /* PADDING_BOX */; - case 'content-box': - return 2 /* CONTENT_BOX */; - } - } - return 0 /* BORDER_BOX */; - }); - } - }; - - var backgroundColor = { - name: "background-color", - initialValue: 'transparent', - prefix: false, - type: 3 /* TYPE_VALUE */, - format: 'color' - }; - - var parseColorStop = function (context, args) { - var color = color$1.parse(context, args[0]); - var stop = args[1]; - return stop && isLengthPercentage(stop) ? { color: color, stop: stop } : { color: color, stop: null }; - }; - var processColorStops = function (stops, lineLength) { - var first = stops[0]; - var last = stops[stops.length - 1]; - if (first.stop === null) { - first.stop = ZERO_LENGTH; - } - if (last.stop === null) { - last.stop = HUNDRED_PERCENT; - } - var processStops = []; - var previous = 0; - for (var i = 0; i < stops.length; i++) { - var stop_1 = stops[i].stop; - if (stop_1 !== null) { - var absoluteValue = getAbsoluteValue(stop_1, lineLength); - if (absoluteValue > previous) { - processStops.push(absoluteValue); - } - else { - processStops.push(previous); - } - previous = absoluteValue; - } - else { - processStops.push(null); - } - } - var gapBegin = null; - for (var i = 0; i < processStops.length; i++) { - var stop_2 = processStops[i]; - if (stop_2 === null) { - if (gapBegin === null) { - gapBegin = i; - } - } - else if (gapBegin !== null) { - var gapLength = i - gapBegin; - var beforeGap = processStops[gapBegin - 1]; - var gapValue = (stop_2 - beforeGap) / (gapLength + 1); - for (var g = 1; g <= gapLength; g++) { - processStops[gapBegin + g - 1] = gapValue * g; - } - gapBegin = null; - } - } - return stops.map(function (_a, i) { - var color = _a.color; - return { color: color, stop: Math.max(Math.min(1, processStops[i] / lineLength), 0) }; - }); - }; - var getAngleFromCorner = function (corner, width, height) { - var centerX = width / 2; - var centerY = height / 2; - var x = getAbsoluteValue(corner[0], width) - centerX; - var y = centerY - getAbsoluteValue(corner[1], height); - return (Math.atan2(y, x) + Math.PI * 2) % (Math.PI * 2); - }; - var calculateGradientDirection = function (angle, width, height) { - var radian = typeof angle === 'number' ? 
angle : getAngleFromCorner(angle, width, height); - var lineLength = Math.abs(width * Math.sin(radian)) + Math.abs(height * Math.cos(radian)); - var halfWidth = width / 2; - var halfHeight = height / 2; - var halfLineLength = lineLength / 2; - var yDiff = Math.sin(radian - Math.PI / 2) * halfLineLength; - var xDiff = Math.cos(radian - Math.PI / 2) * halfLineLength; - return [lineLength, halfWidth - xDiff, halfWidth + xDiff, halfHeight - yDiff, halfHeight + yDiff]; - }; - var distance = function (a, b) { return Math.sqrt(a * a + b * b); }; - var findCorner = function (width, height, x, y, closest) { - var corners = [ - [0, 0], - [0, height], - [width, 0], - [width, height] - ]; - return corners.reduce(function (stat, corner) { - var cx = corner[0], cy = corner[1]; - var d = distance(x - cx, y - cy); - if (closest ? d < stat.optimumDistance : d > stat.optimumDistance) { - return { - optimumCorner: corner, - optimumDistance: d - }; - } - return stat; - }, { - optimumDistance: closest ? Infinity : -Infinity, - optimumCorner: null - }).optimumCorner; - }; - var calculateRadius = function (gradient, x, y, width, height) { - var rx = 0; - var ry = 0; - switch (gradient.size) { - case 0 /* CLOSEST_SIDE */: - // The ending shape is sized so that that it exactly meets the side of the gradient box closest to the gradient’s center. - // If the shape is an ellipse, it exactly meets the closest side in each dimension. - if (gradient.shape === 0 /* CIRCLE */) { - rx = ry = Math.min(Math.abs(x), Math.abs(x - width), Math.abs(y), Math.abs(y - height)); - } - else if (gradient.shape === 1 /* ELLIPSE */) { - rx = Math.min(Math.abs(x), Math.abs(x - width)); - ry = Math.min(Math.abs(y), Math.abs(y - height)); - } - break; - case 2 /* CLOSEST_CORNER */: - // The ending shape is sized so that that it passes through the corner of the gradient box closest to the gradient’s center. - // If the shape is an ellipse, the ending shape is given the same aspect-ratio it would have if closest-side were specified. - if (gradient.shape === 0 /* CIRCLE */) { - rx = ry = Math.min(distance(x, y), distance(x, y - height), distance(x - width, y), distance(x - width, y - height)); - } - else if (gradient.shape === 1 /* ELLIPSE */) { - // Compute the ratio ry/rx (which is to be the same as for "closest-side") - var c = Math.min(Math.abs(y), Math.abs(y - height)) / Math.min(Math.abs(x), Math.abs(x - width)); - var _a = findCorner(width, height, x, y, true), cx = _a[0], cy = _a[1]; - rx = distance(cx - x, (cy - y) / c); - ry = c * rx; - } - break; - case 1 /* FARTHEST_SIDE */: - // Same as closest-side, except the ending shape is sized based on the farthest side(s) - if (gradient.shape === 0 /* CIRCLE */) { - rx = ry = Math.max(Math.abs(x), Math.abs(x - width), Math.abs(y), Math.abs(y - height)); - } - else if (gradient.shape === 1 /* ELLIPSE */) { - rx = Math.max(Math.abs(x), Math.abs(x - width)); - ry = Math.max(Math.abs(y), Math.abs(y - height)); - } - break; - case 3 /* FARTHEST_CORNER */: - // Same as closest-corner, except the ending shape is sized based on the farthest corner. - // If the shape is an ellipse, the ending shape is given the same aspect ratio it would have if farthest-side were specified. 
- if (gradient.shape === 0 /* CIRCLE */) { - rx = ry = Math.max(distance(x, y), distance(x, y - height), distance(x - width, y), distance(x - width, y - height)); - } - else if (gradient.shape === 1 /* ELLIPSE */) { - // Compute the ratio ry/rx (which is to be the same as for "farthest-side") - var c = Math.max(Math.abs(y), Math.abs(y - height)) / Math.max(Math.abs(x), Math.abs(x - width)); - var _b = findCorner(width, height, x, y, false), cx = _b[0], cy = _b[1]; - rx = distance(cx - x, (cy - y) / c); - ry = c * rx; - } - break; - } - if (Array.isArray(gradient.size)) { - rx = getAbsoluteValue(gradient.size[0], width); - ry = gradient.size.length === 2 ? getAbsoluteValue(gradient.size[1], height) : rx; - } - return [rx, ry]; - }; - - var linearGradient = function (context, tokens) { - var angle$1 = deg(180); - var stops = []; - parseFunctionArgs(tokens).forEach(function (arg, i) { - if (i === 0) { - var firstToken = arg[0]; - if (firstToken.type === 20 /* IDENT_TOKEN */ && firstToken.value === 'to') { - angle$1 = parseNamedSide(arg); - return; - } - else if (isAngle(firstToken)) { - angle$1 = angle.parse(context, firstToken); - return; - } - } - var colorStop = parseColorStop(context, arg); - stops.push(colorStop); - }); - return { angle: angle$1, stops: stops, type: 1 /* LINEAR_GRADIENT */ }; - }; - - var prefixLinearGradient = function (context, tokens) { - var angle$1 = deg(180); - var stops = []; - parseFunctionArgs(tokens).forEach(function (arg, i) { - if (i === 0) { - var firstToken = arg[0]; - if (firstToken.type === 20 /* IDENT_TOKEN */ && - ['top', 'left', 'right', 'bottom'].indexOf(firstToken.value) !== -1) { - angle$1 = parseNamedSide(arg); - return; - } - else if (isAngle(firstToken)) { - angle$1 = (angle.parse(context, firstToken) + deg(270)) % deg(360); - return; - } - } - var colorStop = parseColorStop(context, arg); - stops.push(colorStop); - }); - return { - angle: angle$1, - stops: stops, - type: 1 /* LINEAR_GRADIENT */ - }; - }; - - var webkitGradient = function (context, tokens) { - var angle = deg(180); - var stops = []; - var type = 1 /* LINEAR_GRADIENT */; - var shape = 0 /* CIRCLE */; - var size = 3 /* FARTHEST_CORNER */; - var position = []; - parseFunctionArgs(tokens).forEach(function (arg, i) { - var firstToken = arg[0]; - if (i === 0) { - if (isIdentToken(firstToken) && firstToken.value === 'linear') { - type = 1 /* LINEAR_GRADIENT */; - return; - } - else if (isIdentToken(firstToken) && firstToken.value === 'radial') { - type = 2 /* RADIAL_GRADIENT */; - return; - } - } - if (firstToken.type === 18 /* FUNCTION */) { - if (firstToken.name === 'from') { - var color = color$1.parse(context, firstToken.values[0]); - stops.push({ stop: ZERO_LENGTH, color: color }); - } - else if (firstToken.name === 'to') { - var color = color$1.parse(context, firstToken.values[0]); - stops.push({ stop: HUNDRED_PERCENT, color: color }); - } - else if (firstToken.name === 'color-stop') { - var values = firstToken.values.filter(nonFunctionArgSeparator); - if (values.length === 2) { - var color = color$1.parse(context, values[1]); - var stop_1 = values[0]; - if (isNumberToken(stop_1)) { - stops.push({ - stop: { type: 16 /* PERCENTAGE_TOKEN */, number: stop_1.number * 100, flags: stop_1.flags }, - color: color - }); - } - } - } - } - }); - return type === 1 /* LINEAR_GRADIENT */ - ? 
{ - angle: (angle + deg(180)) % deg(360), - stops: stops, - type: type - } - : { size: size, shape: shape, stops: stops, position: position, type: type }; - }; - - var CLOSEST_SIDE = 'closest-side'; - var FARTHEST_SIDE = 'farthest-side'; - var CLOSEST_CORNER = 'closest-corner'; - var FARTHEST_CORNER = 'farthest-corner'; - var CIRCLE = 'circle'; - var ELLIPSE = 'ellipse'; - var COVER = 'cover'; - var CONTAIN = 'contain'; - var radialGradient = function (context, tokens) { - var shape = 0 /* CIRCLE */; - var size = 3 /* FARTHEST_CORNER */; - var stops = []; - var position = []; - parseFunctionArgs(tokens).forEach(function (arg, i) { - var isColorStop = true; - if (i === 0) { - var isAtPosition_1 = false; - isColorStop = arg.reduce(function (acc, token) { - if (isAtPosition_1) { - if (isIdentToken(token)) { - switch (token.value) { - case 'center': - position.push(FIFTY_PERCENT); - return acc; - case 'top': - case 'left': - position.push(ZERO_LENGTH); - return acc; - case 'right': - case 'bottom': - position.push(HUNDRED_PERCENT); - return acc; - } - } - else if (isLengthPercentage(token) || isLength(token)) { - position.push(token); - } - } - else if (isIdentToken(token)) { - switch (token.value) { - case CIRCLE: - shape = 0 /* CIRCLE */; - return false; - case ELLIPSE: - shape = 1 /* ELLIPSE */; - return false; - case 'at': - isAtPosition_1 = true; - return false; - case CLOSEST_SIDE: - size = 0 /* CLOSEST_SIDE */; - return false; - case COVER: - case FARTHEST_SIDE: - size = 1 /* FARTHEST_SIDE */; - return false; - case CONTAIN: - case CLOSEST_CORNER: - size = 2 /* CLOSEST_CORNER */; - return false; - case FARTHEST_CORNER: - size = 3 /* FARTHEST_CORNER */; - return false; - } - } - else if (isLength(token) || isLengthPercentage(token)) { - if (!Array.isArray(size)) { - size = []; - } - size.push(token); - return false; - } - return acc; - }, isColorStop); - } - if (isColorStop) { - var colorStop = parseColorStop(context, arg); - stops.push(colorStop); - } - }); - return { size: size, shape: shape, stops: stops, position: position, type: 2 /* RADIAL_GRADIENT */ }; - }; - - var prefixRadialGradient = function (context, tokens) { - var shape = 0 /* CIRCLE */; - var size = 3 /* FARTHEST_CORNER */; - var stops = []; - var position = []; - parseFunctionArgs(tokens).forEach(function (arg, i) { - var isColorStop = true; - if (i === 0) { - isColorStop = arg.reduce(function (acc, token) { - if (isIdentToken(token)) { - switch (token.value) { - case 'center': - position.push(FIFTY_PERCENT); - return false; - case 'top': - case 'left': - position.push(ZERO_LENGTH); - return false; - case 'right': - case 'bottom': - position.push(HUNDRED_PERCENT); - return false; - } - } - else if (isLengthPercentage(token) || isLength(token)) { - position.push(token); - return false; - } - return acc; - }, isColorStop); - } - else if (i === 1) { - isColorStop = arg.reduce(function (acc, token) { - if (isIdentToken(token)) { - switch (token.value) { - case CIRCLE: - shape = 0 /* CIRCLE */; - return false; - case ELLIPSE: - shape = 1 /* ELLIPSE */; - return false; - case CONTAIN: - case CLOSEST_SIDE: - size = 0 /* CLOSEST_SIDE */; - return false; - case FARTHEST_SIDE: - size = 1 /* FARTHEST_SIDE */; - return false; - case CLOSEST_CORNER: - size = 2 /* CLOSEST_CORNER */; - return false; - case COVER: - case FARTHEST_CORNER: - size = 3 /* FARTHEST_CORNER */; - return false; - } - } - else if (isLength(token) || isLengthPercentage(token)) { - if (!Array.isArray(size)) { - size = []; - } - size.push(token); - return false; - 
} - return acc; - }, isColorStop); - } - if (isColorStop) { - var colorStop = parseColorStop(context, arg); - stops.push(colorStop); - } - }); - return { size: size, shape: shape, stops: stops, position: position, type: 2 /* RADIAL_GRADIENT */ }; - }; - - var isLinearGradient = function (background) { - return background.type === 1 /* LINEAR_GRADIENT */; - }; - var isRadialGradient = function (background) { - return background.type === 2 /* RADIAL_GRADIENT */; - }; - var image = { - name: 'image', - parse: function (context, value) { - if (value.type === 22 /* URL_TOKEN */) { - var image_1 = { url: value.value, type: 0 /* URL */ }; - context.cache.addImage(value.value); - return image_1; - } - if (value.type === 18 /* FUNCTION */) { - var imageFunction = SUPPORTED_IMAGE_FUNCTIONS[value.name]; - if (typeof imageFunction === 'undefined') { - throw new Error("Attempting to parse an unsupported image function \"" + value.name + "\""); - } - return imageFunction(context, value.values); - } - throw new Error("Unsupported image type " + value.type); - } - }; - function isSupportedImage(value) { - return (!(value.type === 20 /* IDENT_TOKEN */ && value.value === 'none') && - (value.type !== 18 /* FUNCTION */ || !!SUPPORTED_IMAGE_FUNCTIONS[value.name])); - } - var SUPPORTED_IMAGE_FUNCTIONS = { - 'linear-gradient': linearGradient, - '-moz-linear-gradient': prefixLinearGradient, - '-ms-linear-gradient': prefixLinearGradient, - '-o-linear-gradient': prefixLinearGradient, - '-webkit-linear-gradient': prefixLinearGradient, - 'radial-gradient': radialGradient, - '-moz-radial-gradient': prefixRadialGradient, - '-ms-radial-gradient': prefixRadialGradient, - '-o-radial-gradient': prefixRadialGradient, - '-webkit-radial-gradient': prefixRadialGradient, - '-webkit-gradient': webkitGradient - }; - - var backgroundImage = { - name: 'background-image', - initialValue: 'none', - type: 1 /* LIST */, - prefix: false, - parse: function (context, tokens) { - if (tokens.length === 0) { - return []; - } - var first = tokens[0]; - if (first.type === 20 /* IDENT_TOKEN */ && first.value === 'none') { - return []; - } - return tokens - .filter(function (value) { return nonFunctionArgSeparator(value) && isSupportedImage(value); }) - .map(function (value) { return image.parse(context, value); }); - } - }; - - var backgroundOrigin = { - name: 'background-origin', - initialValue: 'border-box', - prefix: false, - type: 1 /* LIST */, - parse: function (_context, tokens) { - return tokens.map(function (token) { - if (isIdentToken(token)) { - switch (token.value) { - case 'padding-box': - return 1 /* PADDING_BOX */; - case 'content-box': - return 2 /* CONTENT_BOX */; - } - } - return 0 /* BORDER_BOX */; - }); - } - }; - - var backgroundPosition = { - name: 'background-position', - initialValue: '0% 0%', - type: 1 /* LIST */, - prefix: false, - parse: function (_context, tokens) { - return parseFunctionArgs(tokens) - .map(function (values) { return values.filter(isLengthPercentage); }) - .map(parseLengthPercentageTuple); - } - }; - - var backgroundRepeat = { - name: 'background-repeat', - initialValue: 'repeat', - prefix: false, - type: 1 /* LIST */, - parse: function (_context, tokens) { - return parseFunctionArgs(tokens) - .map(function (values) { - return values - .filter(isIdentToken) - .map(function (token) { return token.value; }) - .join(' '); - }) - .map(parseBackgroundRepeat); - } - }; - var parseBackgroundRepeat = function (value) { - switch (value) { - case 'no-repeat': - return 1 /* NO_REPEAT */; - case 'repeat-x': - 
case 'repeat no-repeat': - return 2 /* REPEAT_X */; - case 'repeat-y': - case 'no-repeat repeat': - return 3 /* REPEAT_Y */; - case 'repeat': - default: - return 0 /* REPEAT */; - } - }; - - var BACKGROUND_SIZE; - (function (BACKGROUND_SIZE) { - BACKGROUND_SIZE["AUTO"] = "auto"; - BACKGROUND_SIZE["CONTAIN"] = "contain"; - BACKGROUND_SIZE["COVER"] = "cover"; - })(BACKGROUND_SIZE || (BACKGROUND_SIZE = {})); - var backgroundSize = { - name: 'background-size', - initialValue: '0', - prefix: false, - type: 1 /* LIST */, - parse: function (_context, tokens) { - return parseFunctionArgs(tokens).map(function (values) { return values.filter(isBackgroundSizeInfoToken); }); - } - }; - var isBackgroundSizeInfoToken = function (value) { - return isIdentToken(value) || isLengthPercentage(value); - }; - - var borderColorForSide = function (side) { return ({ - name: "border-" + side + "-color", - initialValue: 'transparent', - prefix: false, - type: 3 /* TYPE_VALUE */, - format: 'color' - }); }; - var borderTopColor = borderColorForSide('top'); - var borderRightColor = borderColorForSide('right'); - var borderBottomColor = borderColorForSide('bottom'); - var borderLeftColor = borderColorForSide('left'); - - var borderRadiusForSide = function (side) { return ({ - name: "border-radius-" + side, - initialValue: '0 0', - prefix: false, - type: 1 /* LIST */, - parse: function (_context, tokens) { - return parseLengthPercentageTuple(tokens.filter(isLengthPercentage)); - } - }); }; - var borderTopLeftRadius = borderRadiusForSide('top-left'); - var borderTopRightRadius = borderRadiusForSide('top-right'); - var borderBottomRightRadius = borderRadiusForSide('bottom-right'); - var borderBottomLeftRadius = borderRadiusForSide('bottom-left'); - - var borderStyleForSide = function (side) { return ({ - name: "border-" + side + "-style", - initialValue: 'solid', - prefix: false, - type: 2 /* IDENT_VALUE */, - parse: function (_context, style) { - switch (style) { - case 'none': - return 0 /* NONE */; - case 'dashed': - return 2 /* DASHED */; - case 'dotted': - return 3 /* DOTTED */; - case 'double': - return 4 /* DOUBLE */; - } - return 1 /* SOLID */; - } - }); }; - var borderTopStyle = borderStyleForSide('top'); - var borderRightStyle = borderStyleForSide('right'); - var borderBottomStyle = borderStyleForSide('bottom'); - var borderLeftStyle = borderStyleForSide('left'); - - var borderWidthForSide = function (side) { return ({ - name: "border-" + side + "-width", - initialValue: '0', - type: 0 /* VALUE */, - prefix: false, - parse: function (_context, token) { - if (isDimensionToken(token)) { - return token.number; - } - return 0; - } - }); }; - var borderTopWidth = borderWidthForSide('top'); - var borderRightWidth = borderWidthForSide('right'); - var borderBottomWidth = borderWidthForSide('bottom'); - var borderLeftWidth = borderWidthForSide('left'); - - var color = { - name: "color", - initialValue: 'transparent', - prefix: false, - type: 3 /* TYPE_VALUE */, - format: 'color' - }; - - var direction = { - name: 'direction', - initialValue: 'ltr', - prefix: false, - type: 2 /* IDENT_VALUE */, - parse: function (_context, direction) { - switch (direction) { - case 'rtl': - return 1 /* RTL */; - case 'ltr': - default: - return 0 /* LTR */; - } - } - }; - - var display = { - name: 'display', - initialValue: 'inline-block', - prefix: false, - type: 1 /* LIST */, - parse: function (_context, tokens) { - return tokens.filter(isIdentToken).reduce(function (bit, token) { - return bit | parseDisplayValue(token.value); - }, 0 /* 
NONE */); - } - }; - var parseDisplayValue = function (display) { - switch (display) { - case 'block': - case '-webkit-box': - return 2 /* BLOCK */; - case 'inline': - return 4 /* INLINE */; - case 'run-in': - return 8 /* RUN_IN */; - case 'flow': - return 16 /* FLOW */; - case 'flow-root': - return 32 /* FLOW_ROOT */; - case 'table': - return 64 /* TABLE */; - case 'flex': - case '-webkit-flex': - return 128 /* FLEX */; - case 'grid': - case '-ms-grid': - return 256 /* GRID */; - case 'ruby': - return 512 /* RUBY */; - case 'subgrid': - return 1024 /* SUBGRID */; - case 'list-item': - return 2048 /* LIST_ITEM */; - case 'table-row-group': - return 4096 /* TABLE_ROW_GROUP */; - case 'table-header-group': - return 8192 /* TABLE_HEADER_GROUP */; - case 'table-footer-group': - return 16384 /* TABLE_FOOTER_GROUP */; - case 'table-row': - return 32768 /* TABLE_ROW */; - case 'table-cell': - return 65536 /* TABLE_CELL */; - case 'table-column-group': - return 131072 /* TABLE_COLUMN_GROUP */; - case 'table-column': - return 262144 /* TABLE_COLUMN */; - case 'table-caption': - return 524288 /* TABLE_CAPTION */; - case 'ruby-base': - return 1048576 /* RUBY_BASE */; - case 'ruby-text': - return 2097152 /* RUBY_TEXT */; - case 'ruby-base-container': - return 4194304 /* RUBY_BASE_CONTAINER */; - case 'ruby-text-container': - return 8388608 /* RUBY_TEXT_CONTAINER */; - case 'contents': - return 16777216 /* CONTENTS */; - case 'inline-block': - return 33554432 /* INLINE_BLOCK */; - case 'inline-list-item': - return 67108864 /* INLINE_LIST_ITEM */; - case 'inline-table': - return 134217728 /* INLINE_TABLE */; - case 'inline-flex': - return 268435456 /* INLINE_FLEX */; - case 'inline-grid': - return 536870912 /* INLINE_GRID */; - } - return 0 /* NONE */; - }; - - var float = { - name: 'float', - initialValue: 'none', - prefix: false, - type: 2 /* IDENT_VALUE */, - parse: function (_context, float) { - switch (float) { - case 'left': - return 1 /* LEFT */; - case 'right': - return 2 /* RIGHT */; - case 'inline-start': - return 3 /* INLINE_START */; - case 'inline-end': - return 4 /* INLINE_END */; - } - return 0 /* NONE */; - } - }; - - var letterSpacing = { - name: 'letter-spacing', - initialValue: '0', - prefix: false, - type: 0 /* VALUE */, - parse: function (_context, token) { - if (token.type === 20 /* IDENT_TOKEN */ && token.value === 'normal') { - return 0; - } - if (token.type === 17 /* NUMBER_TOKEN */) { - return token.number; - } - if (token.type === 15 /* DIMENSION_TOKEN */) { - return token.number; - } - return 0; - } - }; - - var LINE_BREAK; - (function (LINE_BREAK) { - LINE_BREAK["NORMAL"] = "normal"; - LINE_BREAK["STRICT"] = "strict"; - })(LINE_BREAK || (LINE_BREAK = {})); - var lineBreak = { - name: 'line-break', - initialValue: 'normal', - prefix: false, - type: 2 /* IDENT_VALUE */, - parse: function (_context, lineBreak) { - switch (lineBreak) { - case 'strict': - return LINE_BREAK.STRICT; - case 'normal': - default: - return LINE_BREAK.NORMAL; - } - } - }; - - var lineHeight = { - name: 'line-height', - initialValue: 'normal', - prefix: false, - type: 4 /* TOKEN_VALUE */ - }; - var computeLineHeight = function (token, fontSize) { - if (isIdentToken(token) && token.value === 'normal') { - return 1.2 * fontSize; - } - else if (token.type === 17 /* NUMBER_TOKEN */) { - return fontSize * token.number; - } - else if (isLengthPercentage(token)) { - return getAbsoluteValue(token, fontSize); - } - return fontSize; - }; - - var listStyleImage = { - name: 'list-style-image', - initialValue: 
'none', - type: 0 /* VALUE */, - prefix: false, - parse: function (context, token) { - if (token.type === 20 /* IDENT_TOKEN */ && token.value === 'none') { - return null; - } - return image.parse(context, token); - } - }; - - var listStylePosition = { - name: 'list-style-position', - initialValue: 'outside', - prefix: false, - type: 2 /* IDENT_VALUE */, - parse: function (_context, position) { - switch (position) { - case 'inside': - return 0 /* INSIDE */; - case 'outside': - default: - return 1 /* OUTSIDE */; - } - } - }; - - var listStyleType = { - name: 'list-style-type', - initialValue: 'none', - prefix: false, - type: 2 /* IDENT_VALUE */, - parse: function (_context, type) { - switch (type) { - case 'disc': - return 0 /* DISC */; - case 'circle': - return 1 /* CIRCLE */; - case 'square': - return 2 /* SQUARE */; - case 'decimal': - return 3 /* DECIMAL */; - case 'cjk-decimal': - return 4 /* CJK_DECIMAL */; - case 'decimal-leading-zero': - return 5 /* DECIMAL_LEADING_ZERO */; - case 'lower-roman': - return 6 /* LOWER_ROMAN */; - case 'upper-roman': - return 7 /* UPPER_ROMAN */; - case 'lower-greek': - return 8 /* LOWER_GREEK */; - case 'lower-alpha': - return 9 /* LOWER_ALPHA */; - case 'upper-alpha': - return 10 /* UPPER_ALPHA */; - case 'arabic-indic': - return 11 /* ARABIC_INDIC */; - case 'armenian': - return 12 /* ARMENIAN */; - case 'bengali': - return 13 /* BENGALI */; - case 'cambodian': - return 14 /* CAMBODIAN */; - case 'cjk-earthly-branch': - return 15 /* CJK_EARTHLY_BRANCH */; - case 'cjk-heavenly-stem': - return 16 /* CJK_HEAVENLY_STEM */; - case 'cjk-ideographic': - return 17 /* CJK_IDEOGRAPHIC */; - case 'devanagari': - return 18 /* DEVANAGARI */; - case 'ethiopic-numeric': - return 19 /* ETHIOPIC_NUMERIC */; - case 'georgian': - return 20 /* GEORGIAN */; - case 'gujarati': - return 21 /* GUJARATI */; - case 'gurmukhi': - return 22 /* GURMUKHI */; - case 'hebrew': - return 22 /* HEBREW */; - case 'hiragana': - return 23 /* HIRAGANA */; - case 'hiragana-iroha': - return 24 /* HIRAGANA_IROHA */; - case 'japanese-formal': - return 25 /* JAPANESE_FORMAL */; - case 'japanese-informal': - return 26 /* JAPANESE_INFORMAL */; - case 'kannada': - return 27 /* KANNADA */; - case 'katakana': - return 28 /* KATAKANA */; - case 'katakana-iroha': - return 29 /* KATAKANA_IROHA */; - case 'khmer': - return 30 /* KHMER */; - case 'korean-hangul-formal': - return 31 /* KOREAN_HANGUL_FORMAL */; - case 'korean-hanja-formal': - return 32 /* KOREAN_HANJA_FORMAL */; - case 'korean-hanja-informal': - return 33 /* KOREAN_HANJA_INFORMAL */; - case 'lao': - return 34 /* LAO */; - case 'lower-armenian': - return 35 /* LOWER_ARMENIAN */; - case 'malayalam': - return 36 /* MALAYALAM */; - case 'mongolian': - return 37 /* MONGOLIAN */; - case 'myanmar': - return 38 /* MYANMAR */; - case 'oriya': - return 39 /* ORIYA */; - case 'persian': - return 40 /* PERSIAN */; - case 'simp-chinese-formal': - return 41 /* SIMP_CHINESE_FORMAL */; - case 'simp-chinese-informal': - return 42 /* SIMP_CHINESE_INFORMAL */; - case 'tamil': - return 43 /* TAMIL */; - case 'telugu': - return 44 /* TELUGU */; - case 'thai': - return 45 /* THAI */; - case 'tibetan': - return 46 /* TIBETAN */; - case 'trad-chinese-formal': - return 47 /* TRAD_CHINESE_FORMAL */; - case 'trad-chinese-informal': - return 48 /* TRAD_CHINESE_INFORMAL */; - case 'upper-armenian': - return 49 /* UPPER_ARMENIAN */; - case 'disclosure-open': - return 50 /* DISCLOSURE_OPEN */; - case 'disclosure-closed': - return 51 /* DISCLOSURE_CLOSED */; - case 
'none': - default: - return -1 /* NONE */; - } - } - }; - - var marginForSide = function (side) { return ({ - name: "margin-" + side, - initialValue: '0', - prefix: false, - type: 4 /* TOKEN_VALUE */ - }); }; - var marginTop = marginForSide('top'); - var marginRight = marginForSide('right'); - var marginBottom = marginForSide('bottom'); - var marginLeft = marginForSide('left'); - - var overflow = { - name: 'overflow', - initialValue: 'visible', - prefix: false, - type: 1 /* LIST */, - parse: function (_context, tokens) { - return tokens.filter(isIdentToken).map(function (overflow) { - switch (overflow.value) { - case 'hidden': - return 1 /* HIDDEN */; - case 'scroll': - return 2 /* SCROLL */; - case 'clip': - return 3 /* CLIP */; - case 'auto': - return 4 /* AUTO */; - case 'visible': - default: - return 0 /* VISIBLE */; - } - }); - } - }; - - var overflowWrap = { - name: 'overflow-wrap', - initialValue: 'normal', - prefix: false, - type: 2 /* IDENT_VALUE */, - parse: function (_context, overflow) { - switch (overflow) { - case 'break-word': - return "break-word" /* BREAK_WORD */; - case 'normal': - default: - return "normal" /* NORMAL */; - } - } - }; - - var paddingForSide = function (side) { return ({ - name: "padding-" + side, - initialValue: '0', - prefix: false, - type: 3 /* TYPE_VALUE */, - format: 'length-percentage' - }); }; - var paddingTop = paddingForSide('top'); - var paddingRight = paddingForSide('right'); - var paddingBottom = paddingForSide('bottom'); - var paddingLeft = paddingForSide('left'); - - var textAlign = { - name: 'text-align', - initialValue: 'left', - prefix: false, - type: 2 /* IDENT_VALUE */, - parse: function (_context, textAlign) { - switch (textAlign) { - case 'right': - return 2 /* RIGHT */; - case 'center': - case 'justify': - return 1 /* CENTER */; - case 'left': - default: - return 0 /* LEFT */; - } - } - }; - - var position = { - name: 'position', - initialValue: 'static', - prefix: false, - type: 2 /* IDENT_VALUE */, - parse: function (_context, position) { - switch (position) { - case 'relative': - return 1 /* RELATIVE */; - case 'absolute': - return 2 /* ABSOLUTE */; - case 'fixed': - return 3 /* FIXED */; - case 'sticky': - return 4 /* STICKY */; - } - return 0 /* STATIC */; - } - }; - - var textShadow = { - name: 'text-shadow', - initialValue: 'none', - type: 1 /* LIST */, - prefix: false, - parse: function (context, tokens) { - if (tokens.length === 1 && isIdentWithValue(tokens[0], 'none')) { - return []; - } - return parseFunctionArgs(tokens).map(function (values) { - var shadow = { - color: COLORS.TRANSPARENT, - offsetX: ZERO_LENGTH, - offsetY: ZERO_LENGTH, - blur: ZERO_LENGTH - }; - var c = 0; - for (var i = 0; i < values.length; i++) { - var token = values[i]; - if (isLength(token)) { - if (c === 0) { - shadow.offsetX = token; - } - else if (c === 1) { - shadow.offsetY = token; - } - else { - shadow.blur = token; - } - c++; - } - else { - shadow.color = color$1.parse(context, token); - } - } - return shadow; - }); - } - }; - - var textTransform = { - name: 'text-transform', - initialValue: 'none', - prefix: false, - type: 2 /* IDENT_VALUE */, - parse: function (_context, textTransform) { - switch (textTransform) { - case 'uppercase': - return 2 /* UPPERCASE */; - case 'lowercase': - return 1 /* LOWERCASE */; - case 'capitalize': - return 3 /* CAPITALIZE */; - } - return 0 /* NONE */; - } - }; - - var transform$1 = { - name: 'transform', - initialValue: 'none', - prefix: true, - type: 0 /* VALUE */, - parse: function (_context, token) { - if 
(token.type === 20 /* IDENT_TOKEN */ && token.value === 'none') { - return null; - } - if (token.type === 18 /* FUNCTION */) { - var transformFunction = SUPPORTED_TRANSFORM_FUNCTIONS[token.name]; - if (typeof transformFunction === 'undefined') { - throw new Error("Attempting to parse an unsupported transform function \"" + token.name + "\""); - } - return transformFunction(token.values); - } - return null; - } - }; - var matrix = function (args) { - var values = args.filter(function (arg) { return arg.type === 17 /* NUMBER_TOKEN */; }).map(function (arg) { return arg.number; }); - return values.length === 6 ? values : null; - }; - // doesn't support 3D transforms at the moment - var matrix3d = function (args) { - var values = args.filter(function (arg) { return arg.type === 17 /* NUMBER_TOKEN */; }).map(function (arg) { return arg.number; }); - var a1 = values[0], b1 = values[1]; values[2]; values[3]; var a2 = values[4], b2 = values[5]; values[6]; values[7]; values[8]; values[9]; values[10]; values[11]; var a4 = values[12], b4 = values[13]; values[14]; values[15]; - return values.length === 16 ? [a1, b1, a2, b2, a4, b4] : null; - }; - var SUPPORTED_TRANSFORM_FUNCTIONS = { - matrix: matrix, - matrix3d: matrix3d - }; - - var DEFAULT_VALUE = { - type: 16 /* PERCENTAGE_TOKEN */, - number: 50, - flags: FLAG_INTEGER - }; - var DEFAULT = [DEFAULT_VALUE, DEFAULT_VALUE]; - var transformOrigin = { - name: 'transform-origin', - initialValue: '50% 50%', - prefix: true, - type: 1 /* LIST */, - parse: function (_context, tokens) { - var origins = tokens.filter(isLengthPercentage); - if (origins.length !== 2) { - return DEFAULT; - } - return [origins[0], origins[1]]; - } - }; - - var visibility = { - name: 'visible', - initialValue: 'none', - prefix: false, - type: 2 /* IDENT_VALUE */, - parse: function (_context, visibility) { - switch (visibility) { - case 'hidden': - return 1 /* HIDDEN */; - case 'collapse': - return 2 /* COLLAPSE */; - case 'visible': - default: - return 0 /* VISIBLE */; - } - } - }; - - var WORD_BREAK; - (function (WORD_BREAK) { - WORD_BREAK["NORMAL"] = "normal"; - WORD_BREAK["BREAK_ALL"] = "break-all"; - WORD_BREAK["KEEP_ALL"] = "keep-all"; - })(WORD_BREAK || (WORD_BREAK = {})); - var wordBreak = { - name: 'word-break', - initialValue: 'normal', - prefix: false, - type: 2 /* IDENT_VALUE */, - parse: function (_context, wordBreak) { - switch (wordBreak) { - case 'break-all': - return WORD_BREAK.BREAK_ALL; - case 'keep-all': - return WORD_BREAK.KEEP_ALL; - case 'normal': - default: - return WORD_BREAK.NORMAL; - } - } - }; - - var zIndex = { - name: 'z-index', - initialValue: 'auto', - prefix: false, - type: 0 /* VALUE */, - parse: function (_context, token) { - if (token.type === 20 /* IDENT_TOKEN */) { - return { auto: true, order: 0 }; - } - if (isNumberToken(token)) { - return { auto: false, order: token.number }; - } - throw new Error("Invalid z-index number parsed"); - } - }; - - var time = { - name: 'time', - parse: function (_context, value) { - if (value.type === 15 /* DIMENSION_TOKEN */) { - switch (value.unit.toLowerCase()) { - case 's': - return 1000 * value.number; - case 'ms': - return value.number; - } - } - throw new Error("Unsupported time type"); - } - }; - - var opacity = { - name: 'opacity', - initialValue: '1', - type: 0 /* VALUE */, - prefix: false, - parse: function (_context, token) { - if (isNumberToken(token)) { - return token.number; - } - return 1; - } - }; - - var textDecorationColor = { - name: "text-decoration-color", - initialValue: 'transparent', - 
prefix: false, - type: 3 /* TYPE_VALUE */, - format: 'color' - }; - - var textDecorationLine = { - name: 'text-decoration-line', - initialValue: 'none', - prefix: false, - type: 1 /* LIST */, - parse: function (_context, tokens) { - return tokens - .filter(isIdentToken) - .map(function (token) { - switch (token.value) { - case 'underline': - return 1 /* UNDERLINE */; - case 'overline': - return 2 /* OVERLINE */; - case 'line-through': - return 3 /* LINE_THROUGH */; - case 'none': - return 4 /* BLINK */; - } - return 0 /* NONE */; - }) - .filter(function (line) { return line !== 0 /* NONE */; }); - } - }; - - var fontFamily = { - name: "font-family", - initialValue: '', - prefix: false, - type: 1 /* LIST */, - parse: function (_context, tokens) { - var accumulator = []; - var results = []; - tokens.forEach(function (token) { - switch (token.type) { - case 20 /* IDENT_TOKEN */: - case 0 /* STRING_TOKEN */: - accumulator.push(token.value); - break; - case 17 /* NUMBER_TOKEN */: - accumulator.push(token.number.toString()); - break; - case 4 /* COMMA_TOKEN */: - results.push(accumulator.join(' ')); - accumulator.length = 0; - break; - } - }); - if (accumulator.length) { - results.push(accumulator.join(' ')); - } - return results.map(function (result) { return (result.indexOf(' ') === -1 ? result : "'" + result + "'"); }); - } - }; - - var fontSize = { - name: "font-size", - initialValue: '0', - prefix: false, - type: 3 /* TYPE_VALUE */, - format: 'length' - }; - - var fontWeight = { - name: 'font-weight', - initialValue: 'normal', - type: 0 /* VALUE */, - prefix: false, - parse: function (_context, token) { - if (isNumberToken(token)) { - return token.number; - } - if (isIdentToken(token)) { - switch (token.value) { - case 'bold': - return 700; - case 'normal': - default: - return 400; - } - } - return 400; - } - }; - - var fontVariant = { - name: 'font-variant', - initialValue: 'none', - type: 1 /* LIST */, - prefix: false, - parse: function (_context, tokens) { - return tokens.filter(isIdentToken).map(function (token) { return token.value; }); - } - }; - - var fontStyle = { - name: 'font-style', - initialValue: 'normal', - prefix: false, - type: 2 /* IDENT_VALUE */, - parse: function (_context, overflow) { - switch (overflow) { - case 'oblique': - return "oblique" /* OBLIQUE */; - case 'italic': - return "italic" /* ITALIC */; - case 'normal': - default: - return "normal" /* NORMAL */; - } - } - }; - - var contains = function (bit, value) { return (bit & value) !== 0; }; - - var content = { - name: 'content', - initialValue: 'none', - type: 1 /* LIST */, - prefix: false, - parse: function (_context, tokens) { - if (tokens.length === 0) { - return []; - } - var first = tokens[0]; - if (first.type === 20 /* IDENT_TOKEN */ && first.value === 'none') { - return []; - } - return tokens; - } - }; - - var counterIncrement = { - name: 'counter-increment', - initialValue: 'none', - prefix: true, - type: 1 /* LIST */, - parse: function (_context, tokens) { - if (tokens.length === 0) { - return null; - } - var first = tokens[0]; - if (first.type === 20 /* IDENT_TOKEN */ && first.value === 'none') { - return null; - } - var increments = []; - var filtered = tokens.filter(nonWhiteSpace); - for (var i = 0; i < filtered.length; i++) { - var counter = filtered[i]; - var next = filtered[i + 1]; - if (counter.type === 20 /* IDENT_TOKEN */) { - var increment = next && isNumberToken(next) ? 
next.number : 1; - increments.push({ counter: counter.value, increment: increment }); - } - } - return increments; - } - }; - - var counterReset = { - name: 'counter-reset', - initialValue: 'none', - prefix: true, - type: 1 /* LIST */, - parse: function (_context, tokens) { - if (tokens.length === 0) { - return []; - } - var resets = []; - var filtered = tokens.filter(nonWhiteSpace); - for (var i = 0; i < filtered.length; i++) { - var counter = filtered[i]; - var next = filtered[i + 1]; - if (isIdentToken(counter) && counter.value !== 'none') { - var reset = next && isNumberToken(next) ? next.number : 0; - resets.push({ counter: counter.value, reset: reset }); - } - } - return resets; - } - }; - - var duration = { - name: 'duration', - initialValue: '0s', - prefix: false, - type: 1 /* LIST */, - parse: function (context, tokens) { - return tokens.filter(isDimensionToken).map(function (token) { return time.parse(context, token); }); - } - }; - - var quotes = { - name: 'quotes', - initialValue: 'none', - prefix: true, - type: 1 /* LIST */, - parse: function (_context, tokens) { - if (tokens.length === 0) { - return null; - } - var first = tokens[0]; - if (first.type === 20 /* IDENT_TOKEN */ && first.value === 'none') { - return null; - } - var quotes = []; - var filtered = tokens.filter(isStringToken); - if (filtered.length % 2 !== 0) { - return null; - } - for (var i = 0; i < filtered.length; i += 2) { - var open_1 = filtered[i].value; - var close_1 = filtered[i + 1].value; - quotes.push({ open: open_1, close: close_1 }); - } - return quotes; - } - }; - var getQuote = function (quotes, depth, open) { - if (!quotes) { - return ''; - } - var quote = quotes[Math.min(depth, quotes.length - 1)]; - if (!quote) { - return ''; - } - return open ? quote.open : quote.close; - }; - - var paintOrder = { - name: 'paint-order', - initialValue: 'normal', - prefix: false, - type: 1 /* LIST */, - parse: function (_context, tokens) { - var DEFAULT_VALUE = [0 /* FILL */, 1 /* STROKE */, 2 /* MARKERS */]; - var layers = []; - tokens.filter(isIdentToken).forEach(function (token) { - switch (token.value) { - case 'stroke': - layers.push(1 /* STROKE */); - break; - case 'fill': - layers.push(0 /* FILL */); - break; - case 'markers': - layers.push(2 /* MARKERS */); - break; - } - }); - DEFAULT_VALUE.forEach(function (value) { - if (layers.indexOf(value) === -1) { - layers.push(value); - } - }); - return layers; - } - }; - - var webkitTextStrokeColor = { - name: "-webkit-text-stroke-color", - initialValue: 'currentcolor', - prefix: false, - type: 3 /* TYPE_VALUE */, - format: 'color' - }; - - var webkitTextStrokeWidth = { - name: "-webkit-text-stroke-width", - initialValue: '0', - type: 0 /* VALUE */, - prefix: false, - parse: function (_context, token) { - if (isDimensionToken(token)) { - return token.number; - } - return 0; - } - }; - - var CSSParsedDeclaration = /** @class */ (function () { - function CSSParsedDeclaration(context, declaration) { - var _a, _b; - this.animationDuration = parse(context, duration, declaration.animationDuration); - this.backgroundClip = parse(context, backgroundClip, declaration.backgroundClip); - this.backgroundColor = parse(context, backgroundColor, declaration.backgroundColor); - this.backgroundImage = parse(context, backgroundImage, declaration.backgroundImage); - this.backgroundOrigin = parse(context, backgroundOrigin, declaration.backgroundOrigin); - this.backgroundPosition = parse(context, backgroundPosition, declaration.backgroundPosition); - this.backgroundRepeat = 
parse(context, backgroundRepeat, declaration.backgroundRepeat); - this.backgroundSize = parse(context, backgroundSize, declaration.backgroundSize); - this.borderTopColor = parse(context, borderTopColor, declaration.borderTopColor); - this.borderRightColor = parse(context, borderRightColor, declaration.borderRightColor); - this.borderBottomColor = parse(context, borderBottomColor, declaration.borderBottomColor); - this.borderLeftColor = parse(context, borderLeftColor, declaration.borderLeftColor); - this.borderTopLeftRadius = parse(context, borderTopLeftRadius, declaration.borderTopLeftRadius); - this.borderTopRightRadius = parse(context, borderTopRightRadius, declaration.borderTopRightRadius); - this.borderBottomRightRadius = parse(context, borderBottomRightRadius, declaration.borderBottomRightRadius); - this.borderBottomLeftRadius = parse(context, borderBottomLeftRadius, declaration.borderBottomLeftRadius); - this.borderTopStyle = parse(context, borderTopStyle, declaration.borderTopStyle); - this.borderRightStyle = parse(context, borderRightStyle, declaration.borderRightStyle); - this.borderBottomStyle = parse(context, borderBottomStyle, declaration.borderBottomStyle); - this.borderLeftStyle = parse(context, borderLeftStyle, declaration.borderLeftStyle); - this.borderTopWidth = parse(context, borderTopWidth, declaration.borderTopWidth); - this.borderRightWidth = parse(context, borderRightWidth, declaration.borderRightWidth); - this.borderBottomWidth = parse(context, borderBottomWidth, declaration.borderBottomWidth); - this.borderLeftWidth = parse(context, borderLeftWidth, declaration.borderLeftWidth); - this.color = parse(context, color, declaration.color); - this.direction = parse(context, direction, declaration.direction); - this.display = parse(context, display, declaration.display); - this.float = parse(context, float, declaration.cssFloat); - this.fontFamily = parse(context, fontFamily, declaration.fontFamily); - this.fontSize = parse(context, fontSize, declaration.fontSize); - this.fontStyle = parse(context, fontStyle, declaration.fontStyle); - this.fontVariant = parse(context, fontVariant, declaration.fontVariant); - this.fontWeight = parse(context, fontWeight, declaration.fontWeight); - this.letterSpacing = parse(context, letterSpacing, declaration.letterSpacing); - this.lineBreak = parse(context, lineBreak, declaration.lineBreak); - this.lineHeight = parse(context, lineHeight, declaration.lineHeight); - this.listStyleImage = parse(context, listStyleImage, declaration.listStyleImage); - this.listStylePosition = parse(context, listStylePosition, declaration.listStylePosition); - this.listStyleType = parse(context, listStyleType, declaration.listStyleType); - this.marginTop = parse(context, marginTop, declaration.marginTop); - this.marginRight = parse(context, marginRight, declaration.marginRight); - this.marginBottom = parse(context, marginBottom, declaration.marginBottom); - this.marginLeft = parse(context, marginLeft, declaration.marginLeft); - this.opacity = parse(context, opacity, declaration.opacity); - var overflowTuple = parse(context, overflow, declaration.overflow); - this.overflowX = overflowTuple[0]; - this.overflowY = overflowTuple[overflowTuple.length > 1 ? 
1 : 0]; - this.overflowWrap = parse(context, overflowWrap, declaration.overflowWrap); - this.paddingTop = parse(context, paddingTop, declaration.paddingTop); - this.paddingRight = parse(context, paddingRight, declaration.paddingRight); - this.paddingBottom = parse(context, paddingBottom, declaration.paddingBottom); - this.paddingLeft = parse(context, paddingLeft, declaration.paddingLeft); - this.paintOrder = parse(context, paintOrder, declaration.paintOrder); - this.position = parse(context, position, declaration.position); - this.textAlign = parse(context, textAlign, declaration.textAlign); - this.textDecorationColor = parse(context, textDecorationColor, (_a = declaration.textDecorationColor) !== null && _a !== void 0 ? _a : declaration.color); - this.textDecorationLine = parse(context, textDecorationLine, (_b = declaration.textDecorationLine) !== null && _b !== void 0 ? _b : declaration.textDecoration); - this.textShadow = parse(context, textShadow, declaration.textShadow); - this.textTransform = parse(context, textTransform, declaration.textTransform); - this.transform = parse(context, transform$1, declaration.transform); - this.transformOrigin = parse(context, transformOrigin, declaration.transformOrigin); - this.visibility = parse(context, visibility, declaration.visibility); - this.webkitTextStrokeColor = parse(context, webkitTextStrokeColor, declaration.webkitTextStrokeColor); - this.webkitTextStrokeWidth = parse(context, webkitTextStrokeWidth, declaration.webkitTextStrokeWidth); - this.wordBreak = parse(context, wordBreak, declaration.wordBreak); - this.zIndex = parse(context, zIndex, declaration.zIndex); - } - CSSParsedDeclaration.prototype.isVisible = function () { - return this.display > 0 && this.opacity > 0 && this.visibility === 0 /* VISIBLE */; - }; - CSSParsedDeclaration.prototype.isTransparent = function () { - return isTransparent(this.backgroundColor); - }; - CSSParsedDeclaration.prototype.isTransformed = function () { - return this.transform !== null; - }; - CSSParsedDeclaration.prototype.isPositioned = function () { - return this.position !== 0 /* STATIC */; - }; - CSSParsedDeclaration.prototype.isPositionedWithZIndex = function () { - return this.isPositioned() && !this.zIndex.auto; - }; - CSSParsedDeclaration.prototype.isFloating = function () { - return this.float !== 0 /* NONE */; - }; - CSSParsedDeclaration.prototype.isInlineLevel = function () { - return (contains(this.display, 4 /* INLINE */) || - contains(this.display, 33554432 /* INLINE_BLOCK */) || - contains(this.display, 268435456 /* INLINE_FLEX */) || - contains(this.display, 536870912 /* INLINE_GRID */) || - contains(this.display, 67108864 /* INLINE_LIST_ITEM */) || - contains(this.display, 134217728 /* INLINE_TABLE */)); - }; - return CSSParsedDeclaration; - }()); - var CSSParsedPseudoDeclaration = /** @class */ (function () { - function CSSParsedPseudoDeclaration(context, declaration) { - this.content = parse(context, content, declaration.content); - this.quotes = parse(context, quotes, declaration.quotes); - } - return CSSParsedPseudoDeclaration; - }()); - var CSSParsedCounterDeclaration = /** @class */ (function () { - function CSSParsedCounterDeclaration(context, declaration) { - this.counterIncrement = parse(context, counterIncrement, declaration.counterIncrement); - this.counterReset = parse(context, counterReset, declaration.counterReset); - } - return CSSParsedCounterDeclaration; - }()); - // eslint-disable-next-line @typescript-eslint/no-explicit-any - var parse = function (context, descriptor, 
style) { - var tokenizer = new Tokenizer(); - var value = style !== null && typeof style !== 'undefined' ? style.toString() : descriptor.initialValue; - tokenizer.write(value); - var parser = new Parser(tokenizer.read()); - switch (descriptor.type) { - case 2 /* IDENT_VALUE */: - var token = parser.parseComponentValue(); - return descriptor.parse(context, isIdentToken(token) ? token.value : descriptor.initialValue); - case 0 /* VALUE */: - return descriptor.parse(context, parser.parseComponentValue()); - case 1 /* LIST */: - return descriptor.parse(context, parser.parseComponentValues()); - case 4 /* TOKEN_VALUE */: - return parser.parseComponentValue(); - case 3 /* TYPE_VALUE */: - switch (descriptor.format) { - case 'angle': - return angle.parse(context, parser.parseComponentValue()); - case 'color': - return color$1.parse(context, parser.parseComponentValue()); - case 'image': - return image.parse(context, parser.parseComponentValue()); - case 'length': - var length_1 = parser.parseComponentValue(); - return isLength(length_1) ? length_1 : ZERO_LENGTH; - case 'length-percentage': - var value_1 = parser.parseComponentValue(); - return isLengthPercentage(value_1) ? value_1 : ZERO_LENGTH; - case 'time': - return time.parse(context, parser.parseComponentValue()); - } - break; - } - }; - - var elementDebuggerAttribute = 'data-html2canvas-debug'; - var getElementDebugType = function (element) { - var attribute = element.getAttribute(elementDebuggerAttribute); - switch (attribute) { - case 'all': - return 1 /* ALL */; - case 'clone': - return 2 /* CLONE */; - case 'parse': - return 3 /* PARSE */; - case 'render': - return 4 /* RENDER */; - default: - return 0 /* NONE */; - } - }; - var isDebugging = function (element, type) { - var elementType = getElementDebugType(element); - return elementType === 1 /* ALL */ || type === elementType; - }; - - var ElementContainer = /** @class */ (function () { - function ElementContainer(context, element) { - this.context = context; - this.textNodes = []; - this.elements = []; - this.flags = 0; - if (isDebugging(element, 3 /* PARSE */)) { - debugger; - } - this.styles = new CSSParsedDeclaration(context, window.getComputedStyle(element, null)); - if (isHTMLElementNode(element)) { - if (this.styles.animationDuration.some(function (duration) { return duration > 0; })) { - element.style.animationDuration = '0s'; - } - if (this.styles.transform !== null) { - // getBoundingClientRect takes transforms into account - element.style.transform = 'none'; - } - } - this.bounds = parseBounds(this.context, element); - if (isDebugging(element, 4 /* RENDER */)) { - this.flags |= 16 /* DEBUG_RENDER */; - } - } - return ElementContainer; - }()); - - /* - * text-segmentation 1.0.3 - * Copyright (c) 2022 Niklas von Hertzen - * Released under MIT License - */ - var base64 = 
'AAAAAAAAAAAAEA4AGBkAAFAaAAACAAAAAAAIABAAGAAwADgACAAQAAgAEAAIABAACAAQAAgAEAAIABAACAAQAAgAEAAIABAAQABIAEQATAAIABAACAAQAAgAEAAIABAAVABcAAgAEAAIABAACAAQAGAAaABwAHgAgACIAI4AlgAIABAAmwCjAKgAsAC2AL4AvQDFAMoA0gBPAVYBWgEIAAgACACMANoAYgFkAWwBdAF8AX0BhQGNAZUBlgGeAaMBlQGWAasBswF8AbsBwwF0AcsBYwHTAQgA2wG/AOMBdAF8AekB8QF0AfkB+wHiAHQBfAEIAAMC5gQIAAsCEgIIAAgAFgIeAggAIgIpAggAMQI5AkACygEIAAgASAJQAlgCYAIIAAgACAAKBQoFCgUTBRMFGQUrBSsFCAAIAAgACAAIAAgACAAIAAgACABdAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACABoAmgCrwGvAQgAbgJ2AggAHgEIAAgACADnAXsCCAAIAAgAgwIIAAgACAAIAAgACACKAggAkQKZAggAPADJAAgAoQKkAqwCsgK6AsICCADJAggA0AIIAAgACAAIANYC3gIIAAgACAAIAAgACABAAOYCCAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAkASoB+QIEAAgACAA8AEMCCABCBQgACABJBVAFCAAIAAgACAAIAAgACAAIAAgACABTBVoFCAAIAFoFCABfBWUFCAAIAAgACAAIAAgAbQUIAAgACAAIAAgACABzBXsFfQWFBYoFigWKBZEFigWKBYoFmAWfBaYFrgWxBbkFCAAIAAgACAAIAAgACAAIAAgACAAIAMEFCAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAMgFCADQBQgACAAIAAgACAAIAAgACAAIAAgACAAIAO4CCAAIAAgAiQAIAAgACABAAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAD0AggACAD8AggACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIANYFCAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACA
AIAAgACAAIAAgACAAIAAgACAAIAAMDvwAIAAgAJAIIAAgACAAIAAgACAAIAAgACwMTAwgACAB9BOsEGwMjAwgAKwMyAwsFYgE3A/MEPwMIAEUDTQNRAwgAWQOsAGEDCAAIAAgACAAIAAgACABpAzQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFIQUoBSwFCAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACABtAwgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACABMAEwACAAIAAgACAAIABgACAAIAAgACAC/AAgACAAyAQgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACACAAIAAwAAgACAAIAAgACAAIAAgACAAIAAAARABIAAgACAAIABQASAAIAAgAIABwAEAAjgCIABsAqAC2AL0AigDQAtwC+IJIQqVAZUBWQqVAZUBlQGVAZUBlQGrC5UBlQGVAZUBlQGVAZUBlQGVAXsKlQGVAbAK6wsrDGUMpQzlDJUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAfAKAAuZA64AtwCJALoC6ADwAAgAuACgA/oEpgO6AqsD+AAIAAgAswMIAAgACAAIAIkAuwP5AfsBwwPLAwgACAAIAAgACADRA9kDCAAIAOED6QMIAAgACAAIAAgACADuA/YDCAAIAP4DyQAIAAgABgQIAAgAXQAOBAgACAAIAAgACAAIABMECAAIAAgACAAIAAgACAD8AAQBCAAIAAgAGgQiBCoECAExBAgAEAEIAAgACAAIAAgACAAIAAgACAAIAAgACAA4BAgACABABEYECAAIAAgATAQYAQgAVAQIAAgACAAIAAgACAAIAAgACAAIAFoECAAIAAgACAAIA
AgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgAOQEIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAB+BAcACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAEABhgSMBAgACAAIAAgAlAQIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAwAEAAQABAADAAMAAwADAAQABAAEAAQABAAEAAQABHATAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgAdQMIAAgACAAIAAgACAAIAMkACAAIAAgAfQMIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACACFA4kDCAAIAAgACAAIAOcBCAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAIcDCAAIAAgACAAIAAgACAAIAAgACAAIAJEDCAAIAAgACADFAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACABgBAgAZgQIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgAbAQCBXIECAAIAHkECAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACABAAJwEQACjBKoEsgQIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAC6BMIECAAIAAgACAAIAAgACABmBAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgAxwQIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAGYECAAIAAgAzgQIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgAigWKBYoFigWKBYoFigWKBd0FXwUIAOIF6gXxBYoF3gT5BQAGCAaKBYoFigWKBYoFigWKBYoFigWKBYoFigXWBIoFigWKBYoFigWKBYoFigWKBYsFEAaKBYoFigWKBYoFigWKBRQGCACKBYoFigWKBQgACAAIANEECAAIABgGigUgBggAJgYIAC4GMwaKBYoF0wQ3Bj4GigWKBYoFigWKBYoFigWKBYoFigWKBYoFigUIAAgACAAIAAgACAAIAAgAigWKBYoFigWKBYoFigWKBYoFigWKBYoFigWKBYoFigWKBYoFigWKBYoFigWKBYoFigWKBYoFigWKBYoFigWLBf///////wQABAAEAAQABAAEAAQABAAEAAQAAwAEAAQAAgAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAAAAAAAAAAAAAAAAAAAAAAAAAOAAAAAAAAAAQADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAAAUAAAAFAAUAAAAFAAUAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABAAEAAQABAAEAAQAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAFAAUABQAFAAUABQAAAAQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAFAAUAAQAAAAUABQAFAAUABQAFAAAAAAAFAAUAAAAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAEAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABQAFAAUABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAFAAAAAAAFAAUAAQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABwAFAAUABQAFAAAABwAHAAcAAAAHAAcABwAFAAEAAAAAAAAAAAAAAAAAAAAAAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHAAcABwAFAAUABQAFAAcABwAFAAUAAAAAAAEAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHAAAAAQABAAAAAAAAAAAAAAAFAAUABQAFAAAABwAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAHAAcABwAHAAcAAAAHAAcAAAAAAAUABQAHAAUAAQAHAAEABwAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUABwABAAUABQAFAAUAAAAAAAAAAAAAAAEAAQABAAEAAQABAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABwAFAAUAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUAAQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAQABQANAAQABAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAQABAAEAAQABAAEAAQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABAAEAAQABAAEAAQABAAEAAQABAAEAAQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAEAAQABAAEAAQABAAEAAQABAAAAAAAAAAAAAAAAAAAAAAABQAHAAUABQAFAAAAAAAAAAcABQAFAAUABQAFAAQABAAEAAQABAAEAAQABAAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUAAAAFAAUABQAFAAUAAAAFAAUABQAAAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAAAAAAAAAAAAUABQAFAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAHAAUAAAAHAAcABwAFAAUABQAFAAUABQAFAAUABwAHAAcABwAFAAcABwAAAAUABQAFAAUABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABwAHAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAUABwAHAAUABQAFAAUAAAAAAAcABwAAAAAABwAHAAUAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAAABQAFAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAAABwAHAAcABQAFAAAAAAAAAAAABQAFAAAAAAAFAAUABQAAAAAAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAAAAAAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAABwAFAAUABQAFAAUAAAAFAAUABwAAAAcABwAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUAAAAFAAUABwAFAAUABQAFAAAAAAAHAAcAAAAAAAcABwAFAAAAAAAAAAAAAAAAAAAABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAcABwAAAAAAAAAHAAcABwAAAAcABwAHAAUAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAAAAAAAAAAAAAAAAAAABQAHAAcABwAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABwAHAAcABwAAAAUABQAFAAAABQAFAAUABQAAAAAAAAAAAAAAAAA
AAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAcABQAHAAcABQAHAAcAAAAFAAcABwAAAAcABwAFAAUAAAAAAAAAAAAAAAAAAAAFAAUAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAcABwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAAAAUABwAAAAAAAAAAAAAAAAAAAAAAAAAAAAUAAAAAAAAAAAAFAAcABwAFAAUABQAAAAUAAAAHAAcABwAHAAcABwAHAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUAAAAHAAUABQAFAAUABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAAABwAFAAUABQAFAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAUAAAAFAAAAAAAAAAAABwAHAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABwAFAAUABQAFAAUAAAAFAAUAAAAAAAAAAAAAAAUABQAFAAUABQAFAAUABQAFAAUABQAAAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABwAFAAUABQAFAAUABQAAAAUABQAHAAcABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHAAcABQAFAAAAAAAAAAAABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAcABQAFAAAAAAAAAAAAAAAAAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAHAAUABQAFAAUABQAFAAUABwAHAAcABwAHAAcABwAHAAUABwAHAAUABQAFAAUABQAFAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABwAHAAcABwAFAAUABwAHAAcAAAAAAAAAAAAHAAcABQAHAAcABwAHAAcABwAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAcABwAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcABQAHAAUABQAFAAUABQAFAAUAAAAFAAAABQAAAAAABQAFAAUABQAFAAUABQAFAAcABwAHAAcABwAHAAUABQAFAAUABQAFAAUABQAFAAUAAAAAAAUABQAFAAUABQAHAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAFAAUABwAFAAcABwAHAAcABwAFAAcABwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHAAUABQAFAAUABwAHAAUABQAHAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAcABQAFAAcABwAHAAUABwAFAAUABQAHAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABwAHAAcABwAHAAcABwAHAAUABQAFAAUABQAFAAUABQAHAAcABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUAAAAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAcABQAFAAUABQAFAAUABQAAAAAAAAAAAAUAAAAAAAAAAAAAAAAABQAAAAAABwAFAAUAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAAABQAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUAAAAFAAUABQAFAAUABQAFAAUABQAFAAAAAAAAAAAABQAAAAAAAAAFAAAAAAAAAAAABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABwAHAAUABQAHAAAAAAAAAAAABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcABwAHAAcABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAFAA
UABQAFAAUABQAHAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAcABwAFAAUABQAFAAcABwAFAAUABwAHAAAAAAAAAAAAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAFAAcABwAFAAUABwAHAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAAAAAAAAAAAAAAAAAAAAAAFAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUAAAAFAAUABQAAAAAABQAFAAAAAAAAAAAAAAAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcABQAFAAcABwAAAAAAAAAAAAAABwAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcABwAFAAcABwAFAAcABwAAAAcABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAAAAAAAAAAAAAAAAAFAAUABQAAAAUABQAAAAAAAAAAAAAABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAAAAAAAAAAAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcABQAHAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABwAFAAUABQAFAAUABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABwAHAAcABQAFAAUABQAFAAUABQAFAAUABwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHAAcABwAFAAUABQAHAAcABQAHAAUABQAAAAAAAAAAAAAAAAAFAAAABwAHAAcABQAFAAUABQAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABwAHAAcABwAAAAAABwAHAAAAAAAHAAcABwAAAAAAAAAAAAAAAAAAAAAAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAABwAHAAAAAAAFAAUABQAFAAUABQAFAAAAAAAAAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHAAcABwAFAAUABQAFAAUABQAFAAUABwAHAAUABQAFAAcABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAHAAcABQAFAAUABQAFAAUABwAFAAcABwAFAAcABQAFAAcABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAHAAcABQAFAAUABQAAAAAABwAHAAcABwAFAAUABwAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcABwAHAAUABQAFAAUABQAFAAUABQAHAAcABQAHAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABwAFAAcABwAFAAUABQAFAAUABQAHAAUAAAAAAAAAAAAAAAAAAAAAAAcABwAFAAUABQAFAAcABQAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHAAcABwAFAAUABQAFAAUABQAFAAUABQAHAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHAAcABwAFAAUABQAFAAAAAAAFAAUABwAHAAcABwAFAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABQAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUABwAHAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcABQAFAAUABQAFAAUABQAAAAUABQAFAAUABQAFAAcABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAAAHAAUABQAFAAUABQAFAAUABwAFAAUABwAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUAAAAAAAAABQAAAAUABQAAAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAHAAcABwAHAAcAAAAFAAUAAAAHAAcABQAHAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABwAHAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAAAAAAAAAAAAAAAAAAABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcABwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAAAAUABQAFAAAAAAAFAAUABQAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAAAAAAAAAAABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAAAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUABQAAAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAAAAABQAFAAUABQAFAAUABQAAAAUABQAAAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAFAAUABQAFAAUADgAOAA4ADgAOAA4ADwAPAA8ADwAPAA8ADwAPAA8ADwAPAA8ADwAPAA8ADwAPAA8ADwAPAA8ADwAPAA8ADwAPAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcABwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABwAHAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAAAAAAAAAkACQAJAAkACQAJAAkACQAJAAkACQAJAAkACQAJAAkACQAJAAkACQAJAAkACQAJAAkACQAJAAkACQAJAAkACQAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAMAAwADAAMAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAkACQAJAAkACQAJAAkACQAJAAkACQAJAAkACQAJAAkACQAJAAkAAAAAAAAAAAAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAAAAAAAAAAAAsADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwACwAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAAAAAADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA4AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAA4ADgAOAA4ADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA4ADgAAAAAAAAAAAAAAAAAAAAAADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAOAA4ADgAOAA4ADgAOAA4ADgAOAAAAAAAAAAAADgAOAA4AAAAAAAAAAAAAAAAAAAAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAOAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAAAAAAAAAAAAAAAAAAAAAAAAAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAOAA4ADgAAAA4ADgAOAA4ADgAOAAAADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4AAAAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4AAAAAAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAAAA4AAAAOAAAAAAAAAAAAAAAAAA4AAAAAAAAAAAAAAAAADgAAAAAAAAAAAAAAAAAAAAAAAAAAAA4ADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAAAAAADgAAAAAAAAAAAA4AAAAOAAAAAAAAAAAADgAOAA4AAAAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAA4ADgAOAA4AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAA4ADgAAAAAAAAAAAAAAAAAAAAAAAAAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAA4AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA4ADgAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAOAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAAAAAAAAAAAA4AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAAAADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAA4ADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA4ADgAOAA4ADgAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA4ADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAAAAAADgAOAA4ADgAOAA4ADgAOAA4ADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAAAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA4AAAAAAA4ADgAOAA4ADgAOAA4ADgAOAAAADgAOAA4ADgAAAAAAAAAAAAAAAAAAAAAAAAAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4AAAAAAAAAAAAAAAAADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAA4ADgAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAOAA4ADgAOAA4ADgAOAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAOAA4ADgAOAA4AAAAAAAAAAAAAAAAAAAAAAA4ADgAOAA4ADgAOAA4ADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4AAAAOAA4ADgAOAA4ADgAAAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4AAAAAAAAAAAA='; - - /* - * utrie 1.0.2 - * Copyright (c) 2022 Niklas von Hertzen - * Released under MIT License - */ - var chars$1 = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/'; - // Use a lookup table to find the index. - var lookup$1 = typeof Uint8Array === 'undefined' ? [] : new Uint8Array(256); - for (var i$1 = 0; i$1 < chars$1.length; i$1++) { - lookup$1[chars$1.charCodeAt(i$1)] = i$1; - } - var decode = function (base64) { - var bufferLength = base64.length * 0.75, len = base64.length, i, p = 0, encoded1, encoded2, encoded3, encoded4; - if (base64[base64.length - 1] === '=') { - bufferLength--; - if (base64[base64.length - 2] === '=') { - bufferLength--; - } - } - var buffer = typeof ArrayBuffer !== 'undefined' && - typeof Uint8Array !== 'undefined' && - typeof Uint8Array.prototype.slice !== 'undefined' - ? new ArrayBuffer(bufferLength) - : new Array(bufferLength); - var bytes = Array.isArray(buffer) ? buffer : new Uint8Array(buffer); - for (i = 0; i < len; i += 4) { - encoded1 = lookup$1[base64.charCodeAt(i)]; - encoded2 = lookup$1[base64.charCodeAt(i + 1)]; - encoded3 = lookup$1[base64.charCodeAt(i + 2)]; - encoded4 = lookup$1[base64.charCodeAt(i + 3)]; - bytes[p++] = (encoded1 << 2) | (encoded2 >> 4); - bytes[p++] = ((encoded2 & 15) << 4) | (encoded3 >> 2); - bytes[p++] = ((encoded3 & 3) << 6) | (encoded4 & 63); - } - return buffer; - }; - var polyUint16Array = function (buffer) { - var length = buffer.length; - var bytes = []; - for (var i = 0; i < length; i += 2) { - bytes.push((buffer[i + 1] << 8) | buffer[i]); - } - return bytes; - }; - var polyUint32Array = function (buffer) { - var length = buffer.length; - var bytes = []; - for (var i = 0; i < length; i += 4) { - bytes.push((buffer[i + 3] << 24) | (buffer[i + 2] << 16) | (buffer[i + 1] << 8) | buffer[i]); - } - return bytes; - }; - - /** Shift size for getting the index-2 table offset. */ - var UTRIE2_SHIFT_2 = 5; - /** Shift size for getting the index-1 table offset. */ - var UTRIE2_SHIFT_1 = 6 + 5; - /** - * Shift size for shifting left the index array values. - * Increases possible data size with 16-bit index values at the cost - * of compactability. - * This requires data blocks to be aligned by UTRIE2_DATA_GRANULARITY. 
- */ - var UTRIE2_INDEX_SHIFT = 2; - /** - * Difference between the two shift sizes, - * for getting an index-1 offset from an index-2 offset. 6=11-5 - */ - var UTRIE2_SHIFT_1_2 = UTRIE2_SHIFT_1 - UTRIE2_SHIFT_2; - /** - * The part of the index-2 table for U+D800..U+DBFF stores values for - * lead surrogate code _units_ not code _points_. - * Values for lead surrogate code _points_ are indexed with this portion of the table. - * Length=32=0x20=0x400>>UTRIE2_SHIFT_2. (There are 1024=0x400 lead surrogates.) - */ - var UTRIE2_LSCP_INDEX_2_OFFSET = 0x10000 >> UTRIE2_SHIFT_2; - /** Number of entries in a data block. 32=0x20 */ - var UTRIE2_DATA_BLOCK_LENGTH = 1 << UTRIE2_SHIFT_2; - /** Mask for getting the lower bits for the in-data-block offset. */ - var UTRIE2_DATA_MASK = UTRIE2_DATA_BLOCK_LENGTH - 1; - var UTRIE2_LSCP_INDEX_2_LENGTH = 0x400 >> UTRIE2_SHIFT_2; - /** Count the lengths of both BMP pieces. 2080=0x820 */ - var UTRIE2_INDEX_2_BMP_LENGTH = UTRIE2_LSCP_INDEX_2_OFFSET + UTRIE2_LSCP_INDEX_2_LENGTH; - /** - * The 2-byte UTF-8 version of the index-2 table follows at offset 2080=0x820. - * Length 32=0x20 for lead bytes C0..DF, regardless of UTRIE2_SHIFT_2. - */ - var UTRIE2_UTF8_2B_INDEX_2_OFFSET = UTRIE2_INDEX_2_BMP_LENGTH; - var UTRIE2_UTF8_2B_INDEX_2_LENGTH = 0x800 >> 6; /* U+0800 is the first code point after 2-byte UTF-8 */ - /** - * The index-1 table, only used for supplementary code points, at offset 2112=0x840. - * Variable length, for code points up to highStart, where the last single-value range starts. - * Maximum length 512=0x200=0x100000>>UTRIE2_SHIFT_1. - * (For 0x100000 supplementary code points U+10000..U+10ffff.) - * - * The part of the index-2 table for supplementary code points starts - * after this index-1 table. - * - * Both the index-1 table and the following part of the index-2 table - * are omitted completely if there is only BMP data. - */ - var UTRIE2_INDEX_1_OFFSET = UTRIE2_UTF8_2B_INDEX_2_OFFSET + UTRIE2_UTF8_2B_INDEX_2_LENGTH; - /** - * Number of index-1 entries for the BMP. 32=0x20 - * This part of the index-1 table is omitted from the serialized form. - */ - var UTRIE2_OMITTED_BMP_INDEX_1_LENGTH = 0x10000 >> UTRIE2_SHIFT_1; - /** Number of entries in an index-2 block. 64=0x40 */ - var UTRIE2_INDEX_2_BLOCK_LENGTH = 1 << UTRIE2_SHIFT_1_2; - /** Mask for getting the lower bits for the in-index-2-block offset. */ - var UTRIE2_INDEX_2_MASK = UTRIE2_INDEX_2_BLOCK_LENGTH - 1; - var slice16 = function (view, start, end) { - if (view.slice) { - return view.slice(start, end); - } - return new Uint16Array(Array.prototype.slice.call(view, start, end)); - }; - var slice32 = function (view, start, end) { - if (view.slice) { - return view.slice(start, end); - } - return new Uint32Array(Array.prototype.slice.call(view, start, end)); - }; - var createTrieFromBase64 = function (base64, _byteLength) { - var buffer = decode(base64); - var view32 = Array.isArray(buffer) ? polyUint32Array(buffer) : new Uint32Array(buffer); - var view16 = Array.isArray(buffer) ? polyUint16Array(buffer) : new Uint16Array(buffer); - var headerLength = 24; - var index = slice16(view16, headerLength / 2, view32[4] / 2); - var data = view32[5] === 2 - ? 
slice16(view16, (headerLength + view32[4]) / 2) - : slice32(view32, Math.ceil((headerLength + view32[4]) / 4)); - return new Trie(view32[0], view32[1], view32[2], view32[3], index, data); - }; - var Trie = /** @class */ (function () { - function Trie(initialValue, errorValue, highStart, highValueIndex, index, data) { - this.initialValue = initialValue; - this.errorValue = errorValue; - this.highStart = highStart; - this.highValueIndex = highValueIndex; - this.index = index; - this.data = data; - } - /** - * Get the value for a code point as stored in the Trie. - * - * @param codePoint the code point - * @return the value - */ - Trie.prototype.get = function (codePoint) { - var ix; - if (codePoint >= 0) { - if (codePoint < 0x0d800 || (codePoint > 0x0dbff && codePoint <= 0x0ffff)) { - // Ordinary BMP code point, excluding leading surrogates. - // BMP uses a single level lookup. BMP index starts at offset 0 in the Trie2 index. - // 16 bit data is stored in the index array itself. - ix = this.index[codePoint >> UTRIE2_SHIFT_2]; - ix = (ix << UTRIE2_INDEX_SHIFT) + (codePoint & UTRIE2_DATA_MASK); - return this.data[ix]; - } - if (codePoint <= 0xffff) { - // Lead Surrogate Code Point. A Separate index section is stored for - // lead surrogate code units and code points. - // The main index has the code unit data. - // For this function, we need the code point data. - // Note: this expression could be refactored for slightly improved efficiency, but - // surrogate code points will be so rare in practice that it's not worth it. - ix = this.index[UTRIE2_LSCP_INDEX_2_OFFSET + ((codePoint - 0xd800) >> UTRIE2_SHIFT_2)]; - ix = (ix << UTRIE2_INDEX_SHIFT) + (codePoint & UTRIE2_DATA_MASK); - return this.data[ix]; - } - if (codePoint < this.highStart) { - // Supplemental code point, use two-level lookup. - ix = UTRIE2_INDEX_1_OFFSET - UTRIE2_OMITTED_BMP_INDEX_1_LENGTH + (codePoint >> UTRIE2_SHIFT_1); - ix = this.index[ix]; - ix += (codePoint >> UTRIE2_SHIFT_2) & UTRIE2_INDEX_2_MASK; - ix = this.index[ix]; - ix = (ix << UTRIE2_INDEX_SHIFT) + (codePoint & UTRIE2_DATA_MASK); - return this.data[ix]; - } - if (codePoint <= 0x10ffff) { - return this.data[this.highValueIndex]; - } - } - // Fall through. The code point is outside of the legal range of 0..0x10ffff. - return this.errorValue; - }; - return Trie; - }()); - - /* - * base64-arraybuffer 1.0.2 - * Copyright (c) 2022 Niklas von Hertzen - * Released under MIT License - */ - var chars = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/'; - // Use a lookup table to find the index. - var lookup = typeof Uint8Array === 'undefined' ? 
[] : new Uint8Array(256); - for (var i = 0; i < chars.length; i++) { - lookup[chars.charCodeAt(i)] = i; - } - - var Prepend = 1; - var CR = 2; - var LF = 3; - var Control = 4; - var Extend = 5; - var SpacingMark = 7; - var L = 8; - var V = 9; - var T = 10; - var LV = 11; - var LVT = 12; - var ZWJ = 13; - var Extended_Pictographic = 14; - var RI = 15; - var toCodePoints = function (str) { - var codePoints = []; - var i = 0; - var length = str.length; - while (i < length) { - var value = str.charCodeAt(i++); - if (value >= 0xd800 && value <= 0xdbff && i < length) { - var extra = str.charCodeAt(i++); - if ((extra & 0xfc00) === 0xdc00) { - codePoints.push(((value & 0x3ff) << 10) + (extra & 0x3ff) + 0x10000); - } - else { - codePoints.push(value); - i--; - } - } - else { - codePoints.push(value); - } - } - return codePoints; - }; - var fromCodePoint = function () { - var codePoints = []; - for (var _i = 0; _i < arguments.length; _i++) { - codePoints[_i] = arguments[_i]; - } - if (String.fromCodePoint) { - return String.fromCodePoint.apply(String, codePoints); - } - var length = codePoints.length; - if (!length) { - return ''; - } - var codeUnits = []; - var index = -1; - var result = ''; - while (++index < length) { - var codePoint = codePoints[index]; - if (codePoint <= 0xffff) { - codeUnits.push(codePoint); - } - else { - codePoint -= 0x10000; - codeUnits.push((codePoint >> 10) + 0xd800, (codePoint % 0x400) + 0xdc00); - } - if (index + 1 === length || codeUnits.length > 0x4000) { - result += String.fromCharCode.apply(String, codeUnits); - codeUnits.length = 0; - } - } - return result; - }; - var UnicodeTrie = createTrieFromBase64(base64); - var BREAK_NOT_ALLOWED = '×'; - var BREAK_ALLOWED = '÷'; - var codePointToClass = function (codePoint) { return UnicodeTrie.get(codePoint); }; - var _graphemeBreakAtIndex = function (_codePoints, classTypes, index) { - var prevIndex = index - 2; - var prev = classTypes[prevIndex]; - var current = classTypes[index - 1]; - var next = classTypes[index]; - // GB3 Do not break between a CR and LF - if (current === CR && next === LF) { - return BREAK_NOT_ALLOWED; - } - // GB4 Otherwise, break before and after controls. - if (current === CR || current === LF || current === Control) { - return BREAK_ALLOWED; - } - // GB5 - if (next === CR || next === LF || next === Control) { - return BREAK_ALLOWED; - } - // Do not break Hangul syllable sequences. - // GB6 - if (current === L && [L, V, LV, LVT].indexOf(next) !== -1) { - return BREAK_NOT_ALLOWED; - } - // GB7 - if ((current === LV || current === V) && (next === V || next === T)) { - return BREAK_NOT_ALLOWED; - } - // GB8 - if ((current === LVT || current === T) && next === T) { - return BREAK_NOT_ALLOWED; - } - // GB9 Do not break before extending characters or ZWJ. - if (next === ZWJ || next === Extend) { - return BREAK_NOT_ALLOWED; - } - // Do not break before SpacingMarks, or after Prepend characters. - // GB9a - if (next === SpacingMark) { - return BREAK_NOT_ALLOWED; - } - // GB9a - if (current === Prepend) { - return BREAK_NOT_ALLOWED; - } - // GB11 Do not break within emoji modifier sequences or emoji zwj sequences. - if (current === ZWJ && next === Extended_Pictographic) { - while (prev === Extend) { - prev = classTypes[--prevIndex]; - } - if (prev === Extended_Pictographic) { - return BREAK_NOT_ALLOWED; - } - } - // GB12 Do not break within emoji flag sequences. - // That is, do not break between regional indicator (RI) symbols - // if there is an odd number of RI characters before the break point. 
- if (current === RI && next === RI) { - var countRI = 0; - while (prev === RI) { - countRI++; - prev = classTypes[--prevIndex]; - } - if (countRI % 2 === 0) { - return BREAK_NOT_ALLOWED; - } - } - return BREAK_ALLOWED; - }; - var GraphemeBreaker = function (str) { - var codePoints = toCodePoints(str); - var length = codePoints.length; - var index = 0; - var lastEnd = 0; - var classTypes = codePoints.map(codePointToClass); - return { - next: function () { - if (index >= length) { - return { done: true, value: null }; - } - var graphemeBreak = BREAK_NOT_ALLOWED; - while (index < length && - (graphemeBreak = _graphemeBreakAtIndex(codePoints, classTypes, ++index)) === BREAK_NOT_ALLOWED) { } - if (graphemeBreak !== BREAK_NOT_ALLOWED || index === length) { - var value = fromCodePoint.apply(null, codePoints.slice(lastEnd, index)); - lastEnd = index; - return { value: value, done: false }; - } - return { done: true, value: null }; - }, - }; - }; - var splitGraphemes = function (str) { - var breaker = GraphemeBreaker(str); - var graphemes = []; - var bk; - while (!(bk = breaker.next()).done) { - if (bk.value) { - graphemes.push(bk.value.slice()); - } - } - return graphemes; - }; - - var testRangeBounds = function (document) { - var TEST_HEIGHT = 123; - if (document.createRange) { - var range = document.createRange(); - if (range.getBoundingClientRect) { - var testElement = document.createElement('boundtest'); - testElement.style.height = TEST_HEIGHT + "px"; - testElement.style.display = 'block'; - document.body.appendChild(testElement); - range.selectNode(testElement); - var rangeBounds = range.getBoundingClientRect(); - var rangeHeight = Math.round(rangeBounds.height); - document.body.removeChild(testElement); - if (rangeHeight === TEST_HEIGHT) { - return true; - } - } - } - return false; - }; - var testIOSLineBreak = function (document) { - var testElement = document.createElement('boundtest'); - testElement.style.width = '50px'; - testElement.style.display = 'block'; - testElement.style.fontSize = '12px'; - testElement.style.letterSpacing = '0px'; - testElement.style.wordSpacing = '0px'; - document.body.appendChild(testElement); - var range = document.createRange(); - testElement.innerHTML = typeof ''.repeat === 'function' ? 
'👨'.repeat(10) : ''; - var node = testElement.firstChild; - var textList = toCodePoints$1(node.data).map(function (i) { return fromCodePoint$1(i); }); - var offset = 0; - var prev = {}; - // ios 13 does not handle range getBoundingClientRect line changes correctly #2177 - var supports = textList.every(function (text, i) { - range.setStart(node, offset); - range.setEnd(node, offset + text.length); - var rect = range.getBoundingClientRect(); - offset += text.length; - var boundAhead = rect.x > prev.x || rect.y > prev.y; - prev = rect; - if (i === 0) { - return true; - } - return boundAhead; - }); - document.body.removeChild(testElement); - return supports; - }; - var testCORS = function () { return typeof new Image().crossOrigin !== 'undefined'; }; - var testResponseType = function () { return typeof new XMLHttpRequest().responseType === 'string'; }; - var testSVG = function (document) { - var img = new Image(); - var canvas = document.createElement('canvas'); - var ctx = canvas.getContext('2d'); - if (!ctx) { - return false; - } - img.src = "data:image/svg+xml,<svg xmlns='http://www.w3.org/2000/svg'></svg>"; - try { - ctx.drawImage(img, 0, 0); - canvas.toDataURL(); - } - catch (e) { - return false; - } - return true; - }; - var isGreenPixel = function (data) { - return data[0] === 0 && data[1] === 255 && data[2] === 0 && data[3] === 255; - }; - var testForeignObject = function (document) { - var canvas = document.createElement('canvas'); - var size = 100; - canvas.width = size; - canvas.height = size; - var ctx = canvas.getContext('2d'); - if (!ctx) { - return Promise.reject(false); - } - ctx.fillStyle = 'rgb(0, 255, 0)'; - ctx.fillRect(0, 0, size, size); - var img = new Image(); - var greenImageSrc = canvas.toDataURL(); - img.src = greenImageSrc; - var svg = createForeignObjectSVG(size, size, 0, 0, img); - ctx.fillStyle = 'red'; - ctx.fillRect(0, 0, size, size); - return loadSerializedSVG$1(svg) - .then(function (img) { - ctx.drawImage(img, 0, 0); - var data = ctx.getImageData(0, 0, size, size).data; - ctx.fillStyle = 'red'; - ctx.fillRect(0, 0, size, size); - var node = document.createElement('div'); - node.style.backgroundImage = "url(" + greenImageSrc + ")"; - node.style.height = size + "px"; - // Firefox 55 does not render inline <img /> tags - return isGreenPixel(data) - ?
loadSerializedSVG$1(createForeignObjectSVG(size, size, 0, 0, node)) - : Promise.reject(false); - }) - .then(function (img) { - ctx.drawImage(img, 0, 0); - // Edge does not render background-images - return isGreenPixel(ctx.getImageData(0, 0, size, size).data); - }) - .catch(function () { return false; }); - }; - var createForeignObjectSVG = function (width, height, x, y, node) { - var xmlns = 'http://www.w3.org/2000/svg'; - var svg = document.createElementNS(xmlns, 'svg'); - var foreignObject = document.createElementNS(xmlns, 'foreignObject'); - svg.setAttributeNS(null, 'width', width.toString()); - svg.setAttributeNS(null, 'height', height.toString()); - foreignObject.setAttributeNS(null, 'width', '100%'); - foreignObject.setAttributeNS(null, 'height', '100%'); - foreignObject.setAttributeNS(null, 'x', x.toString()); - foreignObject.setAttributeNS(null, 'y', y.toString()); - foreignObject.setAttributeNS(null, 'externalResourcesRequired', 'true'); - svg.appendChild(foreignObject); - foreignObject.appendChild(node); - return svg; - }; - var loadSerializedSVG$1 = function (svg) { - return new Promise(function (resolve, reject) { - var img = new Image(); - img.onload = function () { return resolve(img); }; - img.onerror = reject; - img.src = "data:image/svg+xml;charset=utf-8," + encodeURIComponent(new XMLSerializer().serializeToString(svg)); - }); - }; - var FEATURES = { - get SUPPORT_RANGE_BOUNDS() { - var value = testRangeBounds(document); - Object.defineProperty(FEATURES, 'SUPPORT_RANGE_BOUNDS', { value: value }); - return value; - }, - get SUPPORT_WORD_BREAKING() { - var value = FEATURES.SUPPORT_RANGE_BOUNDS && testIOSLineBreak(document); - Object.defineProperty(FEATURES, 'SUPPORT_WORD_BREAKING', { value: value }); - return value; - }, - get SUPPORT_SVG_DRAWING() { - var value = testSVG(document); - Object.defineProperty(FEATURES, 'SUPPORT_SVG_DRAWING', { value: value }); - return value; - }, - get SUPPORT_FOREIGNOBJECT_DRAWING() { - var value = typeof Array.from === 'function' && typeof window.fetch === 'function' - ? 
testForeignObject(document) - : Promise.resolve(false); - Object.defineProperty(FEATURES, 'SUPPORT_FOREIGNOBJECT_DRAWING', { value: value }); - return value; - }, - get SUPPORT_CORS_IMAGES() { - var value = testCORS(); - Object.defineProperty(FEATURES, 'SUPPORT_CORS_IMAGES', { value: value }); - return value; - }, - get SUPPORT_RESPONSE_TYPE() { - var value = testResponseType(); - Object.defineProperty(FEATURES, 'SUPPORT_RESPONSE_TYPE', { value: value }); - return value; - }, - get SUPPORT_CORS_XHR() { - var value = 'withCredentials' in new XMLHttpRequest(); - Object.defineProperty(FEATURES, 'SUPPORT_CORS_XHR', { value: value }); - return value; - }, - get SUPPORT_NATIVE_TEXT_SEGMENTATION() { - // eslint-disable-next-line @typescript-eslint/no-explicit-any - var value = !!(typeof Intl !== 'undefined' && Intl.Segmenter); - Object.defineProperty(FEATURES, 'SUPPORT_NATIVE_TEXT_SEGMENTATION', { value: value }); - return value; - } - }; - - var TextBounds = /** @class */ (function () { - function TextBounds(text, bounds) { - this.text = text; - this.bounds = bounds; - } - return TextBounds; - }()); - var parseTextBounds = function (context, value, styles, node) { - var textList = breakText(value, styles); - var textBounds = []; - var offset = 0; - textList.forEach(function (text) { - if (styles.textDecorationLine.length || text.trim().length > 0) { - if (FEATURES.SUPPORT_RANGE_BOUNDS) { - var clientRects = createRange(node, offset, text.length).getClientRects(); - if (clientRects.length > 1) { - var subSegments = segmentGraphemes(text); - var subOffset_1 = 0; - subSegments.forEach(function (subSegment) { - textBounds.push(new TextBounds(subSegment, Bounds.fromDOMRectList(context, createRange(node, subOffset_1 + offset, subSegment.length).getClientRects()))); - subOffset_1 += subSegment.length; - }); - } - else { - textBounds.push(new TextBounds(text, Bounds.fromDOMRectList(context, clientRects))); - } - } - else { - var replacementNode = node.splitText(text.length); - textBounds.push(new TextBounds(text, getWrapperBounds(context, node))); - node = replacementNode; - } - } - else if (!FEATURES.SUPPORT_RANGE_BOUNDS) { - node = node.splitText(text.length); - } - offset += text.length; - }); - return textBounds; - }; - var getWrapperBounds = function (context, node) { - var ownerDocument = node.ownerDocument; - if (ownerDocument) { - var wrapper = ownerDocument.createElement('html2canvaswrapper'); - wrapper.appendChild(node.cloneNode(true)); - var parentNode = node.parentNode; - if (parentNode) { - parentNode.replaceChild(wrapper, node); - var bounds = parseBounds(context, wrapper); - if (wrapper.firstChild) { - parentNode.replaceChild(wrapper.firstChild, wrapper); - } - return bounds; - } - } - return Bounds.EMPTY; - }; - var createRange = function (node, offset, length) { - var ownerDocument = node.ownerDocument; - if (!ownerDocument) { - throw new Error('Node has no owner document'); - } - var range = ownerDocument.createRange(); - range.setStart(node, offset); - range.setEnd(node, offset + length); - return range; - }; - var segmentGraphemes = function (value) { - if (FEATURES.SUPPORT_NATIVE_TEXT_SEGMENTATION) { - // eslint-disable-next-line @typescript-eslint/no-explicit-any - var segmenter = new Intl.Segmenter(void 0, { granularity: 'grapheme' }); - // eslint-disable-next-line @typescript-eslint/no-explicit-any - return Array.from(segmenter.segment(value)).map(function (segment) { return segment.segment; }); - } - return splitGraphemes(value); - }; - var segmentWords = function (value, 
styles) { - if (FEATURES.SUPPORT_NATIVE_TEXT_SEGMENTATION) { - // eslint-disable-next-line @typescript-eslint/no-explicit-any - var segmenter = new Intl.Segmenter(void 0, { - granularity: 'word' - }); - // eslint-disable-next-line @typescript-eslint/no-explicit-any - return Array.from(segmenter.segment(value)).map(function (segment) { return segment.segment; }); - } - return breakWords(value, styles); - }; - var breakText = function (value, styles) { - return styles.letterSpacing !== 0 ? segmentGraphemes(value) : segmentWords(value, styles); - }; - // https://drafts.csswg.org/css-text/#word-separator - var wordSeparators = [0x0020, 0x00a0, 0x1361, 0x10100, 0x10101, 0x1039, 0x1091]; - var breakWords = function (str, styles) { - var breaker = LineBreaker(str, { - lineBreak: styles.lineBreak, - wordBreak: styles.overflowWrap === "break-word" /* BREAK_WORD */ ? 'break-word' : styles.wordBreak - }); - var words = []; - var bk; - var _loop_1 = function () { - if (bk.value) { - var value = bk.value.slice(); - var codePoints = toCodePoints$1(value); - var word_1 = ''; - codePoints.forEach(function (codePoint) { - if (wordSeparators.indexOf(codePoint) === -1) { - word_1 += fromCodePoint$1(codePoint); - } - else { - if (word_1.length) { - words.push(word_1); - } - words.push(fromCodePoint$1(codePoint)); - word_1 = ''; - } - }); - if (word_1.length) { - words.push(word_1); - } - } - }; - while (!(bk = breaker.next()).done) { - _loop_1(); - } - return words; - }; - - var TextContainer = /** @class */ (function () { - function TextContainer(context, node, styles) { - this.text = transform(node.data, styles.textTransform); - this.textBounds = parseTextBounds(context, this.text, styles, node); - } - return TextContainer; - }()); - var transform = function (text, transform) { - switch (transform) { - case 1 /* LOWERCASE */: - return text.toLowerCase(); - case 3 /* CAPITALIZE */: - return text.replace(CAPITALIZE, capitalize); - case 2 /* UPPERCASE */: - return text.toUpperCase(); - default: - return text; - } - }; - var CAPITALIZE = /(^|\s|:|-|\(|\))([a-z])/g; - var capitalize = function (m, p1, p2) { - if (m.length > 0) { - return p1 + p2.toUpperCase(); - } - return m; - }; - - var ImageElementContainer = /** @class */ (function (_super) { - __extends(ImageElementContainer, _super); - function ImageElementContainer(context, img) { - var _this = _super.call(this, context, img) || this; - _this.src = img.currentSrc || img.src; - _this.intrinsicWidth = img.naturalWidth; - _this.intrinsicHeight = img.naturalHeight; - _this.context.cache.addImage(_this.src); - return _this; - } - return ImageElementContainer; - }(ElementContainer)); - - var CanvasElementContainer = /** @class */ (function (_super) { - __extends(CanvasElementContainer, _super); - function CanvasElementContainer(context, canvas) { - var _this = _super.call(this, context, canvas) || this; - _this.canvas = canvas; - _this.intrinsicWidth = canvas.width; - _this.intrinsicHeight = canvas.height; - return _this; - } - return CanvasElementContainer; - }(ElementContainer)); - - var SVGElementContainer = /** @class */ (function (_super) { - __extends(SVGElementContainer, _super); - function SVGElementContainer(context, img) { - var _this = _super.call(this, context, img) || this; - var s = new XMLSerializer(); - var bounds = parseBounds(context, img); - img.setAttribute('width', bounds.width + "px"); - img.setAttribute('height', bounds.height + "px"); - _this.svg = "data:image/svg+xml," + encodeURIComponent(s.serializeToString(img)); - 
_this.intrinsicWidth = img.width.baseVal.value; - _this.intrinsicHeight = img.height.baseVal.value; - _this.context.cache.addImage(_this.svg); - return _this; - } - return SVGElementContainer; - }(ElementContainer)); - - var LIElementContainer = /** @class */ (function (_super) { - __extends(LIElementContainer, _super); - function LIElementContainer(context, element) { - var _this = _super.call(this, context, element) || this; - _this.value = element.value; - return _this; - } - return LIElementContainer; - }(ElementContainer)); - - var OLElementContainer = /** @class */ (function (_super) { - __extends(OLElementContainer, _super); - function OLElementContainer(context, element) { - var _this = _super.call(this, context, element) || this; - _this.start = element.start; - _this.reversed = typeof element.reversed === 'boolean' && element.reversed === true; - return _this; - } - return OLElementContainer; - }(ElementContainer)); - - var CHECKBOX_BORDER_RADIUS = [ - { - type: 15 /* DIMENSION_TOKEN */, - flags: 0, - unit: 'px', - number: 3 - } - ]; - var RADIO_BORDER_RADIUS = [ - { - type: 16 /* PERCENTAGE_TOKEN */, - flags: 0, - number: 50 - } - ]; - var reformatInputBounds = function (bounds) { - if (bounds.width > bounds.height) { - return new Bounds(bounds.left + (bounds.width - bounds.height) / 2, bounds.top, bounds.height, bounds.height); - } - else if (bounds.width < bounds.height) { - return new Bounds(bounds.left, bounds.top + (bounds.height - bounds.width) / 2, bounds.width, bounds.width); - } - return bounds; - }; - var getInputValue = function (node) { - var value = node.type === PASSWORD ? new Array(node.value.length + 1).join('\u2022') : node.value; - return value.length === 0 ? node.placeholder || '' : value; - }; - var CHECKBOX = 'checkbox'; - var RADIO = 'radio'; - var PASSWORD = 'password'; - var INPUT_COLOR = 0x2a2a2aff; - var InputElementContainer = /** @class */ (function (_super) { - __extends(InputElementContainer, _super); - function InputElementContainer(context, input) { - var _this = _super.call(this, context, input) || this; - _this.type = input.type.toLowerCase(); - _this.checked = input.checked; - _this.value = getInputValue(input); - if (_this.type === CHECKBOX || _this.type === RADIO) { - _this.styles.backgroundColor = 0xdededeff; - _this.styles.borderTopColor = - _this.styles.borderRightColor = - _this.styles.borderBottomColor = - _this.styles.borderLeftColor = - 0xa5a5a5ff; - _this.styles.borderTopWidth = - _this.styles.borderRightWidth = - _this.styles.borderBottomWidth = - _this.styles.borderLeftWidth = - 1; - _this.styles.borderTopStyle = - _this.styles.borderRightStyle = - _this.styles.borderBottomStyle = - _this.styles.borderLeftStyle = - 1 /* SOLID */; - _this.styles.backgroundClip = [0 /* BORDER_BOX */]; - _this.styles.backgroundOrigin = [0 /* BORDER_BOX */]; - _this.bounds = reformatInputBounds(_this.bounds); - } - switch (_this.type) { - case CHECKBOX: - _this.styles.borderTopRightRadius = - _this.styles.borderTopLeftRadius = - _this.styles.borderBottomRightRadius = - _this.styles.borderBottomLeftRadius = - CHECKBOX_BORDER_RADIUS; - break; - case RADIO: - _this.styles.borderTopRightRadius = - _this.styles.borderTopLeftRadius = - _this.styles.borderBottomRightRadius = - _this.styles.borderBottomLeftRadius = - RADIO_BORDER_RADIUS; - break; - } - return _this; - } - return InputElementContainer; - }(ElementContainer)); - - var SelectElementContainer = /** @class */ (function (_super) { - __extends(SelectElementContainer, _super); - function 
SelectElementContainer(context, element) { - var _this = _super.call(this, context, element) || this; - var option = element.options[element.selectedIndex || 0]; - _this.value = option ? option.text || '' : ''; - return _this; - } - return SelectElementContainer; - }(ElementContainer)); - - var TextareaElementContainer = /** @class */ (function (_super) { - __extends(TextareaElementContainer, _super); - function TextareaElementContainer(context, element) { - var _this = _super.call(this, context, element) || this; - _this.value = element.value; - return _this; - } - return TextareaElementContainer; - }(ElementContainer)); - - var IFrameElementContainer = /** @class */ (function (_super) { - __extends(IFrameElementContainer, _super); - function IFrameElementContainer(context, iframe) { - var _this = _super.call(this, context, iframe) || this; - _this.src = iframe.src; - _this.width = parseInt(iframe.width, 10) || 0; - _this.height = parseInt(iframe.height, 10) || 0; - _this.backgroundColor = _this.styles.backgroundColor; - try { - if (iframe.contentWindow && - iframe.contentWindow.document && - iframe.contentWindow.document.documentElement) { - _this.tree = parseTree(context, iframe.contentWindow.document.documentElement); - // http://www.w3.org/TR/css3-background/#special-backgrounds - var documentBackgroundColor = iframe.contentWindow.document.documentElement - ? parseColor(context, getComputedStyle(iframe.contentWindow.document.documentElement).backgroundColor) - : COLORS.TRANSPARENT; - var bodyBackgroundColor = iframe.contentWindow.document.body - ? parseColor(context, getComputedStyle(iframe.contentWindow.document.body).backgroundColor) - : COLORS.TRANSPARENT; - _this.backgroundColor = isTransparent(documentBackgroundColor) - ? isTransparent(bodyBackgroundColor) - ? 
_this.styles.backgroundColor - : bodyBackgroundColor - : documentBackgroundColor; - } - } - catch (e) { } - return _this; - } - return IFrameElementContainer; - }(ElementContainer)); - - var LIST_OWNERS = ['OL', 'UL', 'MENU']; - var parseNodeTree = function (context, node, parent, root) { - for (var childNode = node.firstChild, nextNode = void 0; childNode; childNode = nextNode) { - nextNode = childNode.nextSibling; - if (isTextNode(childNode) && childNode.data.trim().length > 0) { - parent.textNodes.push(new TextContainer(context, childNode, parent.styles)); - } - else if (isElementNode(childNode)) { - if (isSlotElement(childNode) && childNode.assignedNodes) { - childNode.assignedNodes().forEach(function (childNode) { return parseNodeTree(context, childNode, parent, root); }); - } - else { - var container = createContainer(context, childNode); - if (container.styles.isVisible()) { - if (createsRealStackingContext(childNode, container, root)) { - container.flags |= 4 /* CREATES_REAL_STACKING_CONTEXT */; - } - else if (createsStackingContext(container.styles)) { - container.flags |= 2 /* CREATES_STACKING_CONTEXT */; - } - if (LIST_OWNERS.indexOf(childNode.tagName) !== -1) { - container.flags |= 8 /* IS_LIST_OWNER */; - } - parent.elements.push(container); - childNode.slot; - if (childNode.shadowRoot) { - parseNodeTree(context, childNode.shadowRoot, container, root); - } - else if (!isTextareaElement(childNode) && - !isSVGElement(childNode) && - !isSelectElement(childNode)) { - parseNodeTree(context, childNode, container, root); - } - } - } - } - } - }; - var createContainer = function (context, element) { - if (isImageElement(element)) { - return new ImageElementContainer(context, element); - } - if (isCanvasElement(element)) { - return new CanvasElementContainer(context, element); - } - if (isSVGElement(element)) { - return new SVGElementContainer(context, element); - } - if (isLIElement(element)) { - return new LIElementContainer(context, element); - } - if (isOLElement(element)) { - return new OLElementContainer(context, element); - } - if (isInputElement(element)) { - return new InputElementContainer(context, element); - } - if (isSelectElement(element)) { - return new SelectElementContainer(context, element); - } - if (isTextareaElement(element)) { - return new TextareaElementContainer(context, element); - } - if (isIFrameElement(element)) { - return new IFrameElementContainer(context, element); - } - return new ElementContainer(context, element); - }; - var parseTree = function (context, element) { - var container = createContainer(context, element); - container.flags |= 4 /* CREATES_REAL_STACKING_CONTEXT */; - parseNodeTree(context, element, container, container); - return container; - }; - var createsRealStackingContext = function (node, container, root) { - return (container.styles.isPositionedWithZIndex() || - container.styles.opacity < 1 || - container.styles.isTransformed() || - (isBodyElement(node) && root.styles.isTransparent())); - }; - var createsStackingContext = function (styles) { return styles.isPositioned() || styles.isFloating(); }; - var isTextNode = function (node) { return node.nodeType === Node.TEXT_NODE; }; - var isElementNode = function (node) { return node.nodeType === Node.ELEMENT_NODE; }; - var isHTMLElementNode = function (node) { - return isElementNode(node) && typeof node.style !== 'undefined' && !isSVGElementNode(node); - }; - var isSVGElementNode = function (element) { - return typeof element.className === 'object'; - }; - var isLIElement = function 
(node) { return node.tagName === 'LI'; }; - var isOLElement = function (node) { return node.tagName === 'OL'; }; - var isInputElement = function (node) { return node.tagName === 'INPUT'; }; - var isHTMLElement = function (node) { return node.tagName === 'HTML'; }; - var isSVGElement = function (node) { return node.tagName === 'svg'; }; - var isBodyElement = function (node) { return node.tagName === 'BODY'; }; - var isCanvasElement = function (node) { return node.tagName === 'CANVAS'; }; - var isVideoElement = function (node) { return node.tagName === 'VIDEO'; }; - var isImageElement = function (node) { return node.tagName === 'IMG'; }; - var isIFrameElement = function (node) { return node.tagName === 'IFRAME'; }; - var isStyleElement = function (node) { return node.tagName === 'STYLE'; }; - var isScriptElement = function (node) { return node.tagName === 'SCRIPT'; }; - var isTextareaElement = function (node) { return node.tagName === 'TEXTAREA'; }; - var isSelectElement = function (node) { return node.tagName === 'SELECT'; }; - var isSlotElement = function (node) { return node.tagName === 'SLOT'; }; - // https://html.spec.whatwg.org/multipage/custom-elements.html#valid-custom-element-name - var isCustomElement = function (node) { return node.tagName.indexOf('-') > 0; }; - - var CounterState = /** @class */ (function () { - function CounterState() { - this.counters = {}; - } - CounterState.prototype.getCounterValue = function (name) { - var counter = this.counters[name]; - if (counter && counter.length) { - return counter[counter.length - 1]; - } - return 1; - }; - CounterState.prototype.getCounterValues = function (name) { - var counter = this.counters[name]; - return counter ? counter : []; - }; - CounterState.prototype.pop = function (counters) { - var _this = this; - counters.forEach(function (counter) { return _this.counters[counter].pop(); }); - }; - CounterState.prototype.parse = function (style) { - var _this = this; - var counterIncrement = style.counterIncrement; - var counterReset = style.counterReset; - var canReset = true; - if (counterIncrement !== null) { - counterIncrement.forEach(function (entry) { - var counter = _this.counters[entry.counter]; - if (counter && entry.increment !== 0) { - canReset = false; - if (!counter.length) { - counter.push(1); - } - counter[Math.max(0, counter.length - 1)] += entry.increment; - } - }); - } - var counterNames = []; - if (canReset) { - counterReset.forEach(function (entry) { - var counter = _this.counters[entry.counter]; - counterNames.push(entry.counter); - if (!counter) { - counter = _this.counters[entry.counter] = []; - } - counter.push(entry.reset); - }); - } - return counterNames; - }; - return CounterState; - }()); - var ROMAN_UPPER = { - integers: [1000, 900, 500, 400, 100, 90, 50, 40, 10, 9, 5, 4, 1], - values: ['M', 'CM', 'D', 'CD', 'C', 'XC', 'L', 'XL', 'X', 'IX', 'V', 'IV', 'I'] - }; - var ARMENIAN = { - integers: [ - 9000, 8000, 7000, 6000, 5000, 4000, 3000, 2000, 1000, 900, 800, 700, 600, 500, 400, 300, 200, 100, 90, 80, 70, - 60, 50, 40, 30, 20, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1 - ], - values: [ - 'Ք', - 'Փ', - 'Ւ', - 'Ց', - 'Ր', - 'Տ', - 'Վ', - 'Ս', - 'Ռ', - 'Ջ', - 'Պ', - 'Չ', - 'Ո', - 'Շ', - 'Ն', - 'Յ', - 'Մ', - 'Ճ', - 'Ղ', - 'Ձ', - 'Հ', - 'Կ', - 'Ծ', - 'Խ', - 'Լ', - 'Ի', - 'Ժ', - 'Թ', - 'Ը', - 'Է', - 'Զ', - 'Ե', - 'Դ', - 'Գ', - 'Բ', - 'Ա' - ] - }; - var HEBREW = { - integers: [ - 10000, 9000, 8000, 7000, 6000, 5000, 4000, 3000, 2000, 1000, 400, 300, 200, 100, 90, 80, 70, 60, 50, 40, 30, 20, - 19, 18, 17, 16, 15, 10, 9, 8, 7, 
6, 5, 4, 3, 2, 1 - ], - values: [ - 'י׳', - 'ט׳', - 'ח׳', - 'ז׳', - 'ו׳', - 'ה׳', - 'ד׳', - 'ג׳', - 'ב׳', - 'א׳', - 'ת', - 'ש', - 'ר', - 'ק', - 'צ', - 'פ', - 'ע', - 'ס', - 'נ', - 'מ', - 'ל', - 'כ', - 'יט', - 'יח', - 'יז', - 'טז', - 'טו', - 'י', - 'ט', - 'ח', - 'ז', - 'ו', - 'ה', - 'ד', - 'ג', - 'ב', - 'א' - ] - }; - var GEORGIAN = { - integers: [ - 10000, 9000, 8000, 7000, 6000, 5000, 4000, 3000, 2000, 1000, 900, 800, 700, 600, 500, 400, 300, 200, 100, 90, - 80, 70, 60, 50, 40, 30, 20, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1 - ], - values: [ - 'ჵ', - 'ჰ', - 'ჯ', - 'ჴ', - 'ხ', - 'ჭ', - 'წ', - 'ძ', - 'ც', - 'ჩ', - 'შ', - 'ყ', - 'ღ', - 'ქ', - 'ფ', - 'ჳ', - 'ტ', - 'ს', - 'რ', - 'ჟ', - 'პ', - 'ო', - 'ჲ', - 'ნ', - 'მ', - 'ლ', - 'კ', - 'ი', - 'თ', - 'ჱ', - 'ზ', - 'ვ', - 'ე', - 'დ', - 'გ', - 'ბ', - 'ა' - ] - }; - var createAdditiveCounter = function (value, min, max, symbols, fallback, suffix) { - if (value < min || value > max) { - return createCounterText(value, fallback, suffix.length > 0); - } - return (symbols.integers.reduce(function (string, integer, index) { - while (value >= integer) { - value -= integer; - string += symbols.values[index]; - } - return string; - }, '') + suffix); - }; - var createCounterStyleWithSymbolResolver = function (value, codePointRangeLength, isNumeric, resolver) { - var string = ''; - do { - if (!isNumeric) { - value--; - } - string = resolver(value) + string; - value /= codePointRangeLength; - } while (value * codePointRangeLength >= codePointRangeLength); - return string; - }; - var createCounterStyleFromRange = function (value, codePointRangeStart, codePointRangeEnd, isNumeric, suffix) { - var codePointRangeLength = codePointRangeEnd - codePointRangeStart + 1; - return ((value < 0 ? '-' : '') + - (createCounterStyleWithSymbolResolver(Math.abs(value), codePointRangeLength, isNumeric, function (codePoint) { - return fromCodePoint$1(Math.floor(codePoint % codePointRangeLength) + codePointRangeStart); - }) + - suffix)); - }; - var createCounterStyleFromSymbols = function (value, symbols, suffix) { - if (suffix === void 0) { suffix = '. '; } - var codePointRangeLength = symbols.length; - return (createCounterStyleWithSymbolResolver(Math.abs(value), codePointRangeLength, false, function (codePoint) { return symbols[Math.floor(codePoint % codePointRangeLength)]; }) + suffix); - }; - var CJK_ZEROS = 1 << 0; - var CJK_TEN_COEFFICIENTS = 1 << 1; - var CJK_TEN_HIGH_COEFFICIENTS = 1 << 2; - var CJK_HUNDRED_COEFFICIENTS = 1 << 3; - var createCJKCounter = function (value, numbers, multipliers, negativeSign, suffix, flags) { - if (value < -9999 || value > 9999) { - return createCounterText(value, 4 /* CJK_DECIMAL */, suffix.length > 0); - } - var tmp = Math.abs(value); - var string = suffix; - if (tmp === 0) { - return numbers[0] + string; - } - for (var digit = 0; tmp > 0 && digit <= 4; digit++) { - var coefficient = tmp % 10; - if (coefficient === 0 && contains(flags, CJK_ZEROS) && string !== '') { - string = numbers[coefficient] + string; - } - else if (coefficient > 1 || - (coefficient === 1 && digit === 0) || - (coefficient === 1 && digit === 1 && contains(flags, CJK_TEN_COEFFICIENTS)) || - (coefficient === 1 && digit === 1 && contains(flags, CJK_TEN_HIGH_COEFFICIENTS) && value > 100) || - (coefficient === 1 && digit > 1 && contains(flags, CJK_HUNDRED_COEFFICIENTS))) { - string = numbers[coefficient] + (digit > 0 ? 
multipliers[digit - 1] : '') + string; - } - else if (coefficient === 1 && digit > 0) { - string = multipliers[digit - 1] + string; - } - tmp = Math.floor(tmp / 10); - } - return (value < 0 ? negativeSign : '') + string; - }; - var CHINESE_INFORMAL_MULTIPLIERS = '十百千萬'; - var CHINESE_FORMAL_MULTIPLIERS = '拾佰仟萬'; - var JAPANESE_NEGATIVE = 'マイナス'; - var KOREAN_NEGATIVE = '마이너스'; - var createCounterText = function (value, type, appendSuffix) { - var defaultSuffix = appendSuffix ? '. ' : ''; - var cjkSuffix = appendSuffix ? '、' : ''; - var koreanSuffix = appendSuffix ? ', ' : ''; - var spaceSuffix = appendSuffix ? ' ' : ''; - switch (type) { - case 0 /* DISC */: - return '•' + spaceSuffix; - case 1 /* CIRCLE */: - return '◦' + spaceSuffix; - case 2 /* SQUARE */: - return '◾' + spaceSuffix; - case 5 /* DECIMAL_LEADING_ZERO */: - var string = createCounterStyleFromRange(value, 48, 57, true, defaultSuffix); - return string.length < 4 ? "0" + string : string; - case 4 /* CJK_DECIMAL */: - return createCounterStyleFromSymbols(value, '〇一二三四五六七八九', cjkSuffix); - case 6 /* LOWER_ROMAN */: - return createAdditiveCounter(value, 1, 3999, ROMAN_UPPER, 3 /* DECIMAL */, defaultSuffix).toLowerCase(); - case 7 /* UPPER_ROMAN */: - return createAdditiveCounter(value, 1, 3999, ROMAN_UPPER, 3 /* DECIMAL */, defaultSuffix); - case 8 /* LOWER_GREEK */: - return createCounterStyleFromRange(value, 945, 969, false, defaultSuffix); - case 9 /* LOWER_ALPHA */: - return createCounterStyleFromRange(value, 97, 122, false, defaultSuffix); - case 10 /* UPPER_ALPHA */: - return createCounterStyleFromRange(value, 65, 90, false, defaultSuffix); - case 11 /* ARABIC_INDIC */: - return createCounterStyleFromRange(value, 1632, 1641, true, defaultSuffix); - case 12 /* ARMENIAN */: - case 49 /* UPPER_ARMENIAN */: - return createAdditiveCounter(value, 1, 9999, ARMENIAN, 3 /* DECIMAL */, defaultSuffix); - case 35 /* LOWER_ARMENIAN */: - return createAdditiveCounter(value, 1, 9999, ARMENIAN, 3 /* DECIMAL */, defaultSuffix).toLowerCase(); - case 13 /* BENGALI */: - return createCounterStyleFromRange(value, 2534, 2543, true, defaultSuffix); - case 14 /* CAMBODIAN */: - case 30 /* KHMER */: - return createCounterStyleFromRange(value, 6112, 6121, true, defaultSuffix); - case 15 /* CJK_EARTHLY_BRANCH */: - return createCounterStyleFromSymbols(value, '子丑寅卯辰巳午未申酉戌亥', cjkSuffix); - case 16 /* CJK_HEAVENLY_STEM */: - return createCounterStyleFromSymbols(value, '甲乙丙丁戊己庚辛壬癸', cjkSuffix); - case 17 /* CJK_IDEOGRAPHIC */: - case 48 /* TRAD_CHINESE_INFORMAL */: - return createCJKCounter(value, '零一二三四五六七八九', CHINESE_INFORMAL_MULTIPLIERS, '負', cjkSuffix, CJK_TEN_COEFFICIENTS | CJK_TEN_HIGH_COEFFICIENTS | CJK_HUNDRED_COEFFICIENTS); - case 47 /* TRAD_CHINESE_FORMAL */: - return createCJKCounter(value, '零壹貳參肆伍陸柒捌玖', CHINESE_FORMAL_MULTIPLIERS, '負', cjkSuffix, CJK_ZEROS | CJK_TEN_COEFFICIENTS | CJK_TEN_HIGH_COEFFICIENTS | CJK_HUNDRED_COEFFICIENTS); - case 42 /* SIMP_CHINESE_INFORMAL */: - return createCJKCounter(value, '零一二三四五六七八九', CHINESE_INFORMAL_MULTIPLIERS, '负', cjkSuffix, CJK_TEN_COEFFICIENTS | CJK_TEN_HIGH_COEFFICIENTS | CJK_HUNDRED_COEFFICIENTS); - case 41 /* SIMP_CHINESE_FORMAL */: - return createCJKCounter(value, '零壹贰叁肆伍陆柒捌玖', CHINESE_FORMAL_MULTIPLIERS, '负', cjkSuffix, CJK_ZEROS | CJK_TEN_COEFFICIENTS | CJK_TEN_HIGH_COEFFICIENTS | CJK_HUNDRED_COEFFICIENTS); - case 26 /* JAPANESE_INFORMAL */: - return createCJKCounter(value, '〇一二三四五六七八九', '十百千万', JAPANESE_NEGATIVE, cjkSuffix, 0); - case 25 /* JAPANESE_FORMAL */: - return createCJKCounter(value, 
'零壱弐参四伍六七八九', '拾百千万', JAPANESE_NEGATIVE, cjkSuffix, CJK_ZEROS | CJK_TEN_COEFFICIENTS | CJK_TEN_HIGH_COEFFICIENTS); - case 31 /* KOREAN_HANGUL_FORMAL */: - return createCJKCounter(value, '영일이삼사오육칠팔구', '십백천만', KOREAN_NEGATIVE, koreanSuffix, CJK_ZEROS | CJK_TEN_COEFFICIENTS | CJK_TEN_HIGH_COEFFICIENTS); - case 33 /* KOREAN_HANJA_INFORMAL */: - return createCJKCounter(value, '零一二三四五六七八九', '十百千萬', KOREAN_NEGATIVE, koreanSuffix, 0); - case 32 /* KOREAN_HANJA_FORMAL */: - return createCJKCounter(value, '零壹貳參四五六七八九', '拾百千', KOREAN_NEGATIVE, koreanSuffix, CJK_ZEROS | CJK_TEN_COEFFICIENTS | CJK_TEN_HIGH_COEFFICIENTS); - case 18 /* DEVANAGARI */: - return createCounterStyleFromRange(value, 0x966, 0x96f, true, defaultSuffix); - case 20 /* GEORGIAN */: - return createAdditiveCounter(value, 1, 19999, GEORGIAN, 3 /* DECIMAL */, defaultSuffix); - case 21 /* GUJARATI */: - return createCounterStyleFromRange(value, 0xae6, 0xaef, true, defaultSuffix); - case 22 /* GURMUKHI */: - return createCounterStyleFromRange(value, 0xa66, 0xa6f, true, defaultSuffix); - case 22 /* HEBREW */: - return createAdditiveCounter(value, 1, 10999, HEBREW, 3 /* DECIMAL */, defaultSuffix); - case 23 /* HIRAGANA */: - return createCounterStyleFromSymbols(value, 'あいうえおかきくけこさしすせそたちつてとなにぬねのはひふへほまみむめもやゆよらりるれろわゐゑをん'); - case 24 /* HIRAGANA_IROHA */: - return createCounterStyleFromSymbols(value, 'いろはにほへとちりぬるをわかよたれそつねならむうゐのおくやまけふこえてあさきゆめみしゑひもせす'); - case 27 /* KANNADA */: - return createCounterStyleFromRange(value, 0xce6, 0xcef, true, defaultSuffix); - case 28 /* KATAKANA */: - return createCounterStyleFromSymbols(value, 'アイウエオカキクケコサシスセソタチツテトナニヌネノハヒフヘホマミムメモヤユヨラリルレロワヰヱヲン', cjkSuffix); - case 29 /* KATAKANA_IROHA */: - return createCounterStyleFromSymbols(value, 'イロハニホヘトチリヌルヲワカヨタレソツネナラムウヰノオクヤマケフコエテアサキユメミシヱヒモセス', cjkSuffix); - case 34 /* LAO */: - return createCounterStyleFromRange(value, 0xed0, 0xed9, true, defaultSuffix); - case 37 /* MONGOLIAN */: - return createCounterStyleFromRange(value, 0x1810, 0x1819, true, defaultSuffix); - case 38 /* MYANMAR */: - return createCounterStyleFromRange(value, 0x1040, 0x1049, true, defaultSuffix); - case 39 /* ORIYA */: - return createCounterStyleFromRange(value, 0xb66, 0xb6f, true, defaultSuffix); - case 40 /* PERSIAN */: - return createCounterStyleFromRange(value, 0x6f0, 0x6f9, true, defaultSuffix); - case 43 /* TAMIL */: - return createCounterStyleFromRange(value, 0xbe6, 0xbef, true, defaultSuffix); - case 44 /* TELUGU */: - return createCounterStyleFromRange(value, 0xc66, 0xc6f, true, defaultSuffix); - case 45 /* THAI */: - return createCounterStyleFromRange(value, 0xe50, 0xe59, true, defaultSuffix); - case 46 /* TIBETAN */: - return createCounterStyleFromRange(value, 0xf20, 0xf29, true, defaultSuffix); - case 3 /* DECIMAL */: - default: - return createCounterStyleFromRange(value, 48, 57, true, defaultSuffix); - } - }; - - var IGNORE_ATTRIBUTE = 'data-html2canvas-ignore'; - var DocumentCloner = /** @class */ (function () { - function DocumentCloner(context, element, options) { - this.context = context; - this.options = options; - this.scrolledElements = []; - this.referenceElement = element; - this.counters = new CounterState(); - this.quoteDepth = 0; - if (!element.ownerDocument) { - throw new Error('Cloned element does not have an owner document'); - } - this.documentElement = this.cloneNode(element.ownerDocument.documentElement, false); - } - DocumentCloner.prototype.toIFrame = function (ownerDocument, windowSize) { - var _this = this; - var iframe = createIFrameContainer(ownerDocument, 
windowSize); - if (!iframe.contentWindow) { - return Promise.reject("Unable to find iframe window"); - } - var scrollX = ownerDocument.defaultView.pageXOffset; - var scrollY = ownerDocument.defaultView.pageYOffset; - var cloneWindow = iframe.contentWindow; - var documentClone = cloneWindow.document; - /* Chrome doesn't detect relative background-images assigned in inline - - - - - - - HTML5 canvas appears to be unsupported in the current browser.
    - Please try updating or use a different browser.
    - - - - - - diff --git a/spaces/Hurtle/DeepDanbooru_string/README.md b/spaces/Hurtle/DeepDanbooru_string/README.md deleted file mode 100644 index 4330b6f969246dc764a34ea254d2e807159f1c55..0000000000000000000000000000000000000000 --- a/spaces/Hurtle/DeepDanbooru_string/README.md +++ /dev/null @@ -1,39 +0,0 @@ ---- -title: DeepDanbooru String -emoji: 💬 -colorFrom: blue -colorTo: red -sdk: gradio -sdk_version: 3.6 -app_file: app.py -pinned: false -duplicated_from: NoCrypt/DeepDanbooru_string ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio`, `streamlit`, or `static` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code, or `static` html code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. diff --git a/spaces/ICML2022/OFA/fairseq/examples/textless_nlp/gslm/metrics/abx_metrics/dump_abx_feats.py b/spaces/ICML2022/OFA/fairseq/examples/textless_nlp/gslm/metrics/abx_metrics/dump_abx_feats.py deleted file mode 100644 index 41cf558970608fa5a9241e91e59ba214b609dc73..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/examples/textless_nlp/gslm/metrics/abx_metrics/dump_abx_feats.py +++ /dev/null @@ -1,107 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import argparse -import logging -import os - -import joblib -import numpy as np - -from examples.textless_nlp.gslm.speech2unit.clustering.utils import get_audio_files -from examples.textless_nlp.gslm.speech2unit.pretrained.utils import get_features - -def get_logger(): - log_format = "[%(asctime)s] [%(levelname)s]: %(message)s" - logging.basicConfig(format=log_format, level=logging.INFO) - logger = logging.getLogger(__name__) - return logger - -def get_parser(): - parser = argparse.ArgumentParser( - description="Quantize using K-means clustering over acoustic features." 
- ) - parser.add_argument( - "--feature_type", - type=str, - choices=["logmel", "hubert", "w2v2", "cpc"], - default=None, - required=True, - help="Acoustic feature type", - ) - parser.add_argument( - "--kmeans_model_path", - type=str, - required=True, - help="K-means model file path to use for inference", - ) - parser.add_argument( - "--manifest_path", - type=str, - default=None, - help="Manifest file containing the root dir and file names", - ) - parser.add_argument( - "--checkpoint_path", - type=str, - help="Pretrained model checkpoint", - ) - parser.add_argument( - "--layer", - type=int, - help="The layer of the pretrained model to extract features from", - default=-1, - ) - parser.add_argument( - "--out_dir_path", - required=True, - type=str, - help="File path of quantized output.", - ) - parser.add_argument( - "--extension", type=str, default=".flac", help="Features file path" - ) - return parser - - -def one_hot(feat, n_clusters): - return np.eye(n_clusters)[feat] - -def main(args, logger): - # Feature extraction - logger.info(f"Extracting {args.feature_type} acoustic features...") - features_batch = get_features( - feature_type=args.feature_type, - checkpoint_path=args.checkpoint_path, - layer=args.layer, - manifest_path=args.manifest_path, - sample_pct=1.0, - flatten=False, - ) - logger.info(f"Features extracted for {len(features_batch)} utterances.\n") - logger.info(f"Dimensionality of representation = {features_batch[0].shape[1]}") - - logger.info(f"Loading K-means model from {args.kmeans_model_path} ...") - kmeans_model = joblib.load(open(args.kmeans_model_path, "rb")) - kmeans_model.verbose = False - - _, fnames, _ = get_audio_files(args.manifest_path) - - os.makedirs(args.out_dir_path, exist_ok=True) - logger.info(f"Writing quantized features to {args.out_dir_path}") - for i, feats in enumerate(features_batch): - pred = kmeans_model.predict(feats) - emb = one_hot(pred, kmeans_model.n_clusters) - base_fname = os.path.basename(fnames[i]).rstrip(args.extension) - output_path = os.path.join(args.out_dir_path, f"{base_fname}.npy") - with open(output_path, "wb") as f: - np.save(f, emb) - -if __name__ == "__main__": - parser = get_parser() - args = parser.parse_args() - logger = get_logger() - logger.info(args) - main(args, logger) diff --git a/spaces/Illumotion/Koboldcpp/make_old_pyinstaller.bat b/spaces/Illumotion/Koboldcpp/make_old_pyinstaller.bat deleted file mode 100644 index 6ae4297bcf77a90986a22351ecf64de00a5c1abd..0000000000000000000000000000000000000000 --- a/spaces/Illumotion/Koboldcpp/make_old_pyinstaller.bat +++ /dev/null @@ -1,4 +0,0 @@ -echo This file is only for my own usage, please do not use it. I am lazy. - -set PATH=d:\\MainApplications\\KoboldAIGPT\\KoboldAI-Horde-Bridge\\python;d:\\MainApplications\\KoboldAIGPT\\KoboldAI-Horde-Bridge\\python\\Scripts;%PATH% -PyInstaller --noconfirm --onefile --clean --console --collect-all customtkinter --icon "./niko.ico" --add-data "./klite.embd;." --add-data "./koboldcpp_default.dll;." --add-data "./koboldcpp_openblas.dll;." --add-data "./koboldcpp_failsafe.dll;." --add-data "./koboldcpp_noavx2.dll;." --add-data "./libopenblas.dll;." --add-data "./koboldcpp_clblast.dll;." --add-data "./clblast.dll;." --add-data "./rwkv_vocab.embd;." --add-data "./rwkv_world_vocab.embd;." 
"./koboldcpp.py" -n "koboldcpp_nocuda.exe" \ No newline at end of file diff --git a/spaces/Illumotion/Koboldcpp/otherarch/rwkv_v3.cpp b/spaces/Illumotion/Koboldcpp/otherarch/rwkv_v3.cpp deleted file mode 100644 index cbabab5a87cea678aa03a28f992607e87cae052f..0000000000000000000000000000000000000000 --- a/spaces/Illumotion/Koboldcpp/otherarch/rwkv_v3.cpp +++ /dev/null @@ -1,1969 +0,0 @@ -//adapted from RWKV.cpp repo under MIT license -// https://github.com/saharNooby/rwkv.cpp - -#include "otherarch.h" - -#include "rwkv_v3.h" -#include "ggml.h" - -#ifdef GGML_USE_CUBLAS -#include "ggml-cuda.h" -#endif -#if defined(GGML_USE_CLBLAST) -#include "ggml-opencl.h" -#endif - -#include "utils.h" - -#include -#include -#include -#include -#include -#include -#include -#include -#include - -#define _FILE_OFFSET_BITS 64 -// Puts an optional break point, if debug is enabled. -#define RWKV_MAYBE_BREAK - -#include - -#if defined(WIN32) || defined(_WIN32) || defined(__WIN32__) || defined(__NT__) -#define stat _stat64 -#define fstat _fstat64 -#define ftell _ftelli64 -#define fseek _fseeki64 - -#ifndef NDEBUG -#include -#define RWKV_MAYBE_BREAK __debugbreak() -#endif -#else -#if !defined(__APPLE__) -#define ftell ftello -#define fseek fseeko -#endif -#endif - -// --- Error handling --- - -thread_local enum rwkv_error_flags global_last_error = RWKV_ERROR_NONE; -thread_local bool global_print_errors = true; - -inline enum rwkv_error_flags operator|(enum rwkv_error_flags a, enum rwkv_error_flags b) { - return static_cast(static_cast(a) | static_cast(b)); -} - -inline enum rwkv_error_flags operator|=(enum rwkv_error_flags & a, enum rwkv_error_flags b) { - return a = a | b; -} - -#define RWKV_MSG(...) do { if (global_print_errors) fprintf(stderr, __VA_ARGS__); } while (0) -#define RWKV_CTX_MSG(ctx, ...) do { if (ctx->print_errors) fprintf(stderr, __VA_ARGS__); } while (0) - -// If the condition x is false, adds ERR_VAL to the last error, and returns RET_VAL. -#define RWKV_ASSERT(ERR_VAL, RET_VAL, x) do { \ - if (!(x)) { \ - global_last_error |= ERR_VAL; \ - RWKV_MSG("\n%s:%d: %s\n", __FILE__, __LINE__, #x); \ - RWKV_MAYBE_BREAK; \ - return RET_VAL; \ - } } while (0) - -// If the condition x is false, adds ERR_VAL to the last error, prints a message to stderr, and returns RET_VAL. -#define RWKV_ASSERT_MSG(ERR_VAL, RET_VAL, x, ...) do { \ - if (!(x)) { \ - global_last_error |= ERR_VAL; \ - RWKV_MSG(__VA_ARGS__); \ - RWKV_MSG("\n%s:%d: %s\n", __FILE__, __LINE__, #x); \ - RWKV_MAYBE_BREAK; \ - return RET_VAL; \ - } } while (0) - -// If the condition x is false, adds ERR_VAL to the ctx's last error, prints a message to stderr, and returns RET_VAL. -#define RWKV_CTX_ASSERT_MSG(ctx, ERR_VAL, RET_VAL, x, ...) do { \ - if (!(x)) { \ - ((struct rwkv_context *) ctx)->last_error |= ERR_VAL; \ - RWKV_CTX_MSG(ctx, __VA_ARGS__); \ - RWKV_CTX_MSG(ctx, "\n%s:%d: %s\n", __FILE__, __LINE__, #x); \ - RWKV_MAYBE_BREAK; \ - return RET_VAL; \ - } } while (0) - -// If the condition x is false, adds ERR_VAL to the ctx's last error, and returns RET_VAL. -#define RWKV_CTX_ASSERT(ctx, ERR_VAL, RET_VAL, x) do { \ - if (!(x)) { \ - ((struct rwkv_context *) ctx)->last_error |= ERR_VAL; \ - RWKV_CTX_MSG(ctx, "\n%s:%d: %s\n", __FILE__, __LINE__, #x); \ - RWKV_MAYBE_BREAK; \ - return RET_VAL; \ - } } while (0) - -// If the condition x is false, returns RET_VAL. 
-#define RWKV_ENSURE(RET_VAL, x) do { \ - if (!(x)) { \ - RWKV_MSG("\n%s:%d: %s\n", __FILE__, __LINE__, #x); \ - RWKV_MAYBE_BREAK; \ - return RET_VAL; \ - } } while (0) - -// If the condition x is false, prints a message to stderr, and returns RET_VAL. -#define RWKV_ENSURE_MSG(RET_VAL, x, ...) do { \ - if (!(x)) { \ - RWKV_MSG(__VA_ARGS__); \ - RWKV_MSG("\n%s:%d: %s\n", __FILE__, __LINE__, #x); \ - RWKV_MAYBE_BREAK; \ - return RET_VAL; \ - } } while (0) - -// If the condition x is false, prints a message to stderr, and returns RET_VAL. -#define RWKV_CTX_ENSURE_MSG(ctx, RET_VAL, x, ...) do { \ - if (!(x)) { \ - ((struct rwkv_context *) ctx)->last_error |= ERR_VAL; \ - RWKV_CTX_MSG(ctx, __VA_ARGS__); \ - RWKV_CTX_MSG(ctx, "\n%s:%d: %s\n", __FILE__, __LINE__, #x); \ - RWKV_MAYBE_BREAK; \ - return RET_VAL; \ - } } while (0) - -#define RWKV_ASSERT_FALSE_MSG(ERR_VAL, x, ...) RWKV_ASSERT_MSG(ERR_VAL, false, x, __VA_ARGS__) -#define RWKV_ASSERT_NULL_MSG(ERR_VAL, x, ...) RWKV_ASSERT_MSG(ERR_VAL, NULL, x, __VA_ARGS__) - -#define RWKV_CTX_ASSERT_FALSE_MSG(ctx, ERR_VAL, x, ...) RWKV_CTX_ASSERT_MSG(ctx, ERR_VAL, false, x, __VA_ARGS__) - -#define RWKV_ASSERT_FALSE(ERR_VAL, x) RWKV_ASSERT(ERR_VAL, false, x) -#define RWKV_ASSERT_NULL(ERR_VAL, x) RWKV_ASSERT(ERR_VAL, NULL, x) - -#define RWKV_CTX_ASSERT_FALSE(ctx, ERR_VAL, x) RWKV_CTX_ASSERT(ctx, ERR_VAL, false, x) - -#define RWKV_ENSURE_OR_FALSE(x) RWKV_ENSURE(false, x) -#define RWKV_ENSURE_OR_NULL(x) RWKV_ENSURE(NULL, x) -#define RWKV_ENSURE_OR_FALSE_MSG(x, ...) RWKV_ENSURE_MSG(false, x, __VA_ARGS__) - -// --- Utilities --- - -// Reads a single uint32 value from a file. -bool rwkv_fread_uint32(FILE * file, uint32_t & dest) { - return fread((void *) &dest, sizeof(uint32_t), 1, file) == 1; -} - -// Reads a single string value from a file. -bool rwkv_fread_string(FILE * file, size_t length, std::string & dest) { - dest.resize(length); - return fread((void *) dest.data(), length, 1, file) == 1; -} - -// Reads a single data buffer from a file. -bool rwkv_fread_data(FILE * file, size_t length, void * dest) { - return fread(dest, length, 1, file) == 1; -} - -// Writes a single uint32 value to a file. -bool rwkv_fwrite_uint32(FILE * file, const uint32_t value) { - return fwrite((const void *) &value, sizeof(uint32_t), 1, file); -} - -// Writes a single string value to a file. -bool rwkv_fwrite_string(FILE * file, const std::string & value) { - return fwrite((const void *) value.data(), value.length(), 1, file) == 1; -} - -// Writes a single data buffer to a file. 
-bool rwkv_fwrite_data(FILE * file, const void * data, const size_t length) { - return fwrite(data, length, 1, file) == 1; -} - -// --- File handling --- - -#define TYPE_UNKNOWN TYPE_COUNT - -enum rwkv_type { - TYPE_FP32, - TYPE_FP16, - TYPE_Q4_0, - TYPE_Q4_1, - TYPE_Q4_1_O, // Unsupported - TYPE_Q4_2, // Unsupported - TYPE_Q4_3, // Unsupported - TYPE_Q5_0, - TYPE_Q5_1, - TYPE_Q8_0, - TYPE_COUNT -}; - -#define GGML_TYPE_UNKNOWN GGML_TYPE_COUNT - -extern const enum ggml_type rwkv_type_to_ggml[TYPE_COUNT + 1] = { - GGML_TYPE_F32, /* FP32 */ - GGML_TYPE_F16, /* FP16 */ - GGML_TYPE_Q4_0, /* Q4_0 */ - GGML_TYPE_Q4_1, /* Q4_1 */ - GGML_TYPE_UNKNOWN, /* Q4_1_O */ - GGML_TYPE_UNKNOWN, /* Q4_2 */ - GGML_TYPE_UNKNOWN, /* Q4_3 */ - GGML_TYPE_Q5_0, /* Q5_0 */ - GGML_TYPE_Q5_1, /* Q5_1 */ - GGML_TYPE_Q8_0, /* Q8_0 */ - GGML_TYPE_COUNT /* COUNT */ -}; - -extern const enum rwkv_type rwkv_type_from_ggml[GGML_TYPE_COUNT + 1] = { - TYPE_FP32, /* FP32 */ - TYPE_FP16, /* FP16 */ - TYPE_Q4_0, /* Q4_0 */ - TYPE_Q4_1, /* Q4_1 */ - TYPE_Q4_2, /* Q4_2 */ - TYPE_Q4_3, /* Q4_3 */ - TYPE_Q5_0, /* Q5_0 */ - TYPE_Q5_1, /* Q5_1 */ - TYPE_Q8_0, /* Q8_0 */ - TYPE_COUNT, /* Q8_1 */ - TYPE_COUNT, /* I8 */ - TYPE_COUNT, /* I16 */ - TYPE_COUNT, /* I32 */ - TYPE_COUNT, /* COUNT */ -}; - -extern const char * rwkv_type_to_string[TYPE_COUNT + 1] = {"FP32", "FP16", "Q4_0", "Q4_1", "Q4_1_O", "Q4_2", "Q4_3", "Q5_0", "Q5_1", "Q8_0", "unknown"}; - -enum rwkv_type rwkv_type_from_string(const char * str) { - for (int ord = 0; ord < TYPE_COUNT; ord++) { - if (strcmp(str, rwkv_type_to_string[ord]) == 0) { - return (enum rwkv_type) ord; - } - } - - return TYPE_UNKNOWN; -} - -struct rwkv_file_header { - uint32_t magic; - uint32_t version; - uint32_t n_vocab; - uint32_t n_embed; - uint32_t n_layer; - uint32_t data_type; -}; - -bool rwkv_is_file_version_in_range(uint32_t version) { - return version >= RWKV_FILE_VERSION_MIN && version <= RWKV_FILE_VERSION_MAX; -} - -bool rwkv_fread_file_header(FILE * file, struct rwkv_file_header & header, bool verify_data_type = true) { - RWKV_ASSERT_FALSE(RWKV_ERROR_FILE_READ, rwkv_fread_data(file, sizeof(struct rwkv_file_header), &header)); - RWKV_ASSERT_FALSE(RWKV_ERROR_FILE_MAGIC, header.magic == RWKV_FILE_MAGIC); - RWKV_ASSERT_FALSE_MSG(RWKV_ERROR_FILE_VERSION, rwkv_is_file_version_in_range(header.version), "Unsupported file version %" PRId32, header.version); - RWKV_ASSERT_FALSE_MSG(RWKV_ERROR_DATA_TYPE, header.data_type < TYPE_COUNT, "Model data type out of range (%" PRId32 " > %" PRId32 ")", header.data_type, TYPE_COUNT - 1); - - if (verify_data_type) { - enum ggml_type ggml_type = rwkv_type_to_ggml[header.data_type]; - - RWKV_ASSERT_FALSE_MSG( - RWKV_ERROR_DATA_TYPE, - ggml_type != GGML_TYPE_UNKNOWN, - "Models in %s format cannot be loaded anymore because the format was removed.\n" - "You need to quantize the model into another format or use an older version of rwkv.cpp.\n" - "See https://github.com/saharNooby/rwkv.cpp#compatibility for more info", - rwkv_type_to_string[header.data_type] - ); - - RWKV_ASSERT_FALSE_MSG( - RWKV_ERROR_DATA_TYPE, - (!ggml_is_quantized(ggml_type) || header.version == RWKV_FILE_VERSION_1), - "The quantized model file in %s format was created with an old version of rwkv.cpp and can not be loaded anymore.\n" - "You need to requantize the model or use an older version of rwkv.cpp.\n" - "See https://github.com/saharNooby/rwkv.cpp#compatibility for more info", - rwkv_type_to_string[header.data_type] - ); - } - - return true; -} - -bool rwkv_fwrite_file_header(FILE * file, const 
struct rwkv_file_header & header) { - RWKV_ASSERT_FALSE(RWKV_ERROR_FILE_WRITE, rwkv_fwrite_data(file, &header, sizeof(struct rwkv_file_header))); - return true; -} - -struct rwkv_tensor_header { - uint32_t dim_count; - uint32_t key_length; - uint32_t data_type; - uint32_t width; - uint32_t height; - - const size_t size() const; -}; - -struct rwkv_tensor { - struct rwkv_tensor_header header; - std::string name; - uint8_t * data; -}; - -//rwkv relied on the old ggml_nbytes implementation, so backport it here. Fixes breaking change in PR 2874 -size_t rwkv_nbytes_old(const struct ggml_tensor * tensor) { - static_assert(GGML_MAX_DIMS == 4, "GGML_MAX_DIMS is not 4 - update this function"); - auto a = tensor->ne[3]*tensor->nb[3]; - auto b = (ggml_nelements(tensor)*ggml_type_size(tensor->type))/ggml_blck_size(tensor->type); - return ((a) > (b) ? (a) : (b)); -} - -bool rwkv_fread_tensor_header(FILE * file, struct rwkv_tensor_header & header) { - RWKV_ASSERT_FALSE(RWKV_ERROR_FILE_READ, rwkv_fread_data(file, sizeof(struct rwkv_tensor_header) - sizeof(uint32_t), &header)); - header.height = 1; - RWKV_ASSERT_FALSE_MSG(RWKV_ERROR_SHAPE, header.dim_count == 1 || header.dim_count == 2, "Tensor has an invalid shape (%" PRId32 " dimensions)", header.dim_count); - RWKV_ASSERT_FALSE_MSG(RWKV_ERROR_DATA_TYPE, header.data_type < TYPE_COUNT, "Tensor data type out of range (%" PRId32 " > %" PRId32 ")", header.data_type, TYPE_COUNT - 1); - RWKV_ASSERT_FALSE_MSG( - RWKV_ERROR_DATA_TYPE, - rwkv_type_to_ggml[header.data_type] != GGML_TYPE_UNKNOWN, - "Tensor data type (%s) is no longer supported", - rwkv_type_to_string[header.data_type] - ); - - if (header.dim_count == 2) { - RWKV_ASSERT_FALSE(RWKV_ERROR_FILE_READ, rwkv_fread_uint32(file, header.height)); - } - - return true; -} - -bool rwkv_fwrite_tensor_header(FILE * file, const struct rwkv_tensor_header & header) { - RWKV_ASSERT_FALSE(RWKV_ERROR_FILE_WRITE, rwkv_fwrite_data(file, &header, sizeof(struct rwkv_tensor_header) - (header.dim_count == 1 ? 
sizeof(uint32_t) : 0))); - return true; -} - -bool rwkv_fskip_tensor_data(FILE * file, const struct rwkv_tensor_header & header) { - return fseek(file, header.key_length + header.size(), SEEK_CUR) == 0; -} - -bool rwkv_fread_tensor_header_and_skip(FILE * file, struct rwkv_tensor_header & header) { - RWKV_ENSURE_OR_FALSE(rwkv_fread_tensor_header(file, header)); - RWKV_ASSERT_FALSE(RWKV_ERROR_DATA, rwkv_fskip_tensor_data(file, header)); - return true; -} - -bool rwkv_fread_tensor_data(FILE * file, struct rwkv_tensor & output, void * buffer = NULL) { - size_t data_size = output.header.size(); - RWKV_ASSERT_FALSE(RWKV_ERROR_FILE_READ, rwkv_fread_string(file, output.header.key_length, output.name)); - - if (buffer) { - RWKV_ASSERT_FALSE(RWKV_ERROR_FILE_READ, rwkv_fread_data(file, data_size, buffer)); - } else { - output.data = NULL; - RWKV_ASSERT_FALSE(RWKV_ERROR_FILE_READ, rwkv_fskip_tensor_data(file, output.header)); - } - - return true; -} - -bool rwkv_fread_tensor(FILE * file, struct rwkv_tensor & output, void * buffer = NULL) { - RWKV_ENSURE_OR_FALSE(rwkv_fread_tensor_header(file, output.header)); - RWKV_ENSURE_OR_FALSE(rwkv_fread_tensor_data(file, output, buffer)); - return true; -} - -bool rwkv_fread_ggml_tensor_data(FILE * file, const struct rwkv_tensor_header & header, struct ggml_context * ctx, std::string & name, struct ggml_tensor *& tensor) { - RWKV_ASSERT_FALSE_MSG(RWKV_ERROR_FILE_READ, rwkv_fread_string(file, header.key_length, name), "Failed to read tensor name"); - - enum ggml_type ggml_type = rwkv_type_to_ggml[header.data_type]; - RWKV_ASSERT_FALSE_MSG(RWKV_ERROR_UNSUPPORTED, ggml_type != GGML_TYPE_UNKNOWN, "Unsupported tensor data type %s from %s", rwkv_type_to_string[header.data_type], name.c_str()); - - tensor = header.dim_count == 1 - ? ggml_new_tensor_1d(ctx, ggml_type, header.width) - : ggml_new_tensor_2d(ctx, ggml_type, header.width, header.height); - - RWKV_ASSERT_FALSE_MSG(RWKV_ERROR_ALLOC, tensor, "Failed to allocate tensor"); - ggml_set_name(tensor, name.c_str()); - - RWKV_ASSERT_FALSE_MSG(RWKV_ERROR_FILE_READ, rwkv_fread_data(file, rwkv_nbytes_old(tensor), tensor->data), "Failed to read tensor data from %s", name.c_str()); - return true; -} - -bool rwkv_fread_ggml_tensor(FILE * file, struct ggml_context * ctx, std::string & name, struct ggml_tensor *& tensor) { - struct rwkv_tensor_header header; - RWKV_ENSURE_OR_FALSE_MSG(rwkv_fread_tensor_header(file, header), "Invalid tensor header"); - return rwkv_fread_ggml_tensor_data(file, header, ctx, name, tensor); -} - -bool rwkv_fwrite_tensor(FILE * file, const struct rwkv_tensor & tensor) { - RWKV_ENSURE_OR_FALSE(rwkv_fwrite_tensor_header(file, tensor.header)); - RWKV_ENSURE_OR_FALSE(rwkv_fwrite_string(file, tensor.name)); - RWKV_ENSURE_OR_FALSE(rwkv_fwrite_data(file, tensor.data, tensor.header.size())); - return true; -} - -// --- Model definition --- - -struct rwkv_layer { - struct ggml_tensor * ln1_weight; - struct ggml_tensor * ln1_bias; - - // RWKV, also called "attention" by the author. - struct ggml_tensor * att_time_mix_k; - struct ggml_tensor * att_time_mix_v; - struct ggml_tensor * att_time_mix_r; - struct ggml_tensor * att_time_first; - struct ggml_tensor * att_time_decay; - struct ggml_tensor * att_key; - struct ggml_tensor * att_value; - struct ggml_tensor * att_receptance; - struct ggml_tensor * att_output; - - struct ggml_tensor * ln2_weight; - struct ggml_tensor * ln2_bias; - - // FFN. 
- struct ggml_tensor * ffn_time_mix_k; - struct ggml_tensor * ffn_time_mix_r; - struct ggml_tensor * ffn_key; - struct ggml_tensor * ffn_value; - struct ggml_tensor * ffn_receptance; -}; - -struct rwkv_model { - struct rwkv_file_header header; - - struct ggml_tensor * emb; - - struct ggml_tensor * ln0_weight; - struct ggml_tensor * ln0_bias; - - std::unique_ptr layers; - - struct ggml_tensor * ln_out_weight; - struct ggml_tensor * ln_out_bias; - - struct ggml_tensor * head; -}; - -// --- Operators --- - -void rwkv_exp_impl(const int n_cols, float * dest, const float * src) { - for (int i = 0; i < n_cols; i++) { - dest[i] = expf(src[i]); - } -} - -void rwkv_1_minus_x_impl(const int n_cols, float * dest, const float * src) { - for (int i = 0; i < n_cols; i++) { - dest[i] = 1.0F - src[i]; - } -} - -void rwkv_sigmoid_impl(const int n_cols, float * dest, const float * src) { - for (int i = 0; i < n_cols; i++) { - dest[i] = 1.0F / (1.0F + expf(-src[i])); - } -} - -void rwkv_max_impl(const int n_cols, float * dest, const float * src0, const float * src1) { - for (int i = 0; i < n_cols; i++) { - dest[i] = fmaxf(src0[i], src1[i]); - } -} - -struct ggml_tensor * rwkv_exp(ggml_context * ctx, struct ggml_tensor * x) { - return ggml_map_unary_f32(ctx, x, rwkv_exp_impl); -} - -struct ggml_tensor * rwkv_1_minus_x(ggml_context * ctx, struct ggml_tensor * x) { - return ggml_map_unary_f32(ctx, x, rwkv_1_minus_x_impl); -} - -struct ggml_tensor * rwkv_sigmoid(ggml_context * ctx, struct ggml_tensor * x) { - return ggml_map_unary_f32(ctx, x, rwkv_sigmoid_impl); -} - -struct ggml_tensor * rwkv_max(ggml_context * ctx, struct ggml_tensor * x, struct ggml_tensor * y) { - return ggml_map_binary_f32(ctx, x, y, rwkv_max_impl); -} - -struct ggml_tensor * rwkv_layer_norm(ggml_context * ctx, struct ggml_tensor * x, struct ggml_tensor * weight, struct ggml_tensor * bias) { - // LayerNorm in RWKV is `x = (x - mean(x)) / sqrt(variance(x) + 1e-5) * weight + bias` - // Looks like ggml_norm does the first part, we only need to apply weight & bias. - return ggml_add_inplace(ctx, ggml_mul_inplace(ctx, ggml_norm(ctx, x, default_norm_eps), weight), bias); -} - -// --- Implementation --- - -// Used as a helper during rwkv_ctx_size calculation. -struct rwkv_future_tensor; - -// Used to calculate the memory usage of ggml contexts before allocating them. -// Since ggml uses an internal bump allocator that can't be grown at runtime, we need to ensure we have enough space, -// while at the same time not using more memory than necessary. 
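-// A minimal illustrative sketch of the context pre-sizing idea described above, kept separate from the
-// rwkv_future_ctx struct that follows. It is not part of rwkv.cpp; it only assumes the ggml.h API already
-// included in this file, and the tensor shapes and the 256-byte alignment slack are hypothetical values
-// chosen purely for illustration: budget one object header, one tensor struct and the data bytes for each
-// tensor you plan to create, then hand ggml_init() a buffer of exactly that size, because the bump
-// allocator cannot grow afterwards.
-static bool rwkv_example_presized_ctx(void) {
-    const int64_t n_embed = 768;                                  // hypothetical width
-    // Plan: one 1D and one 2D FP32 tensor.
-    const size_t data_bytes = (size_t) n_embed * sizeof(float)
-                            + (size_t) n_embed * n_embed * sizeof(float);
-    struct ggml_init_params params = {
-        /*.mem_size   =*/ 2 * (GGML_OBJECT_SIZE + sizeof(struct ggml_tensor)) + data_bytes + 256,
-        /*.mem_buffer =*/ NULL,                                   // let ggml allocate the buffer itself
-        /*.no_alloc   =*/ false,
-    };
-    struct ggml_context * sketch_ctx = ggml_init(params);
-    if (!sketch_ctx) {
-        return false;
-    }
-    struct ggml_tensor * bias   = ggml_new_tensor_1d(sketch_ctx, GGML_TYPE_F32, n_embed);
-    struct ggml_tensor * weight = ggml_new_tensor_2d(sketch_ctx, GGML_TYPE_F32, n_embed, n_embed);
-    const bool ok = bias != NULL && weight != NULL;               // both fit because the size was pre-computed
-    ggml_free(sketch_ctx);
-    return ok;
-}
-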
-struct rwkv_future_ctx { - size_t objects_count = 0; - size_t memory_size = 0; - size_t scratch_size = 0; - - // Align to GGML_MEM_ALIGN, which can currently be up to 16 - static const size_t align(const size_t size) { - return ((size + 15) & ~15); - } - - void add_objects(const size_t size, const size_t count = 1) { - this->objects_count += count; - - if (size && count) { - this->add_memory(size, count); - } - } - - void add_memory(const size_t size, const size_t count = 1) { - this->memory_size += this->align(size) * count; - } - - void add_scratch(const size_t size, const size_t count = 1) { - this->scratch_size += this->align(size) * count; - } - - void add_data(const bool use_scratch, const size_t size, const size_t count = 1) { - if (use_scratch) { - this->add_scratch(size, count); - } else { - this->add_memory(size, count); - } - } - - struct rwkv_future_tensor declare(const enum ggml_type type, const uint64_t width, const uint64_t height = 1); - - struct rwkv_future_tensor alloc(const enum ggml_type type, const uint64_t width, const uint64_t height = 1, const bool use_scratch = true); -}; - -struct rwkv_future_tensor { - enum ggml_type type = GGML_TYPE_COUNT; - uint64_t width = 0; - uint64_t height = 0; - - static const size_t size(const enum ggml_type type, const uint64_t width, const uint64_t height) { - struct ggml_tensor decoy {}; - decoy.type = type; - decoy.ne[0] = width; - decoy.ne[1] = height; - decoy.ne[2] = 1; - decoy.ne[3] = 1; - return rwkv_nbytes_old(&decoy); - } - - rwkv_future_tensor() {} - rwkv_future_tensor(const enum ggml_type type, const uint64_t width, const uint64_t height = 1): type(type), width(width), height(height) {} - rwkv_future_tensor(const struct ggml_tensor * ref): type(ref->type), width(ref->ne[0]), height(ref->ne[1]) {} - - struct rwkv_future_tensor alloc(struct rwkv_future_ctx & ctx, const bool use_scratch = true) const { - ctx.add_objects(sizeof(struct ggml_tensor)); - ctx.add_data(use_scratch, rwkv_future_tensor::size(type, width, height)); - return *this; - } - - struct rwkv_future_tensor view(struct rwkv_future_ctx & ctx) const { - ctx.add_objects(sizeof(struct ggml_tensor)); - return *this; - } - - struct rwkv_future_tensor subview(struct rwkv_future_ctx & ctx, const uint32_t width, const uint32_t height = 1) const { - ctx.add_objects(sizeof(struct ggml_tensor), 2); - ctx.add_memory(sizeof(uint32_t) * 2); - return rwkv_future_tensor(type, width, height); - } - - struct rwkv_future_tensor dup(struct rwkv_future_ctx & ctx) const { - return this->alloc(ctx); - } - - struct rwkv_future_tensor layer_norm(struct rwkv_future_ctx & ctx, const struct rwkv_future_tensor & weight, const struct rwkv_future_tensor & bias) const { - return this->dup(ctx).view(ctx).view(ctx); - } - - struct rwkv_future_tensor repeat(struct rwkv_future_ctx & ctx, const struct rwkv_future_tensor reference) const { - return reference.dup(ctx); - } - - struct rwkv_future_tensor set_inplace(struct rwkv_future_ctx & ctx, const struct rwkv_future_tensor src) { - ctx.add_objects(sizeof(struct ggml_tensor)); - ctx.add_memory(sizeof(uint32_t) * 5); - return this->view(ctx); - } - - struct rwkv_future_tensor consume(struct rwkv_future_ctx & ctx, const struct rwkv_future_tensor & other) { - return this->view(ctx); - } - - struct rwkv_future_tensor combine(struct rwkv_future_ctx & ctx, const struct rwkv_future_tensor & other) const { - return this->dup(ctx); - } - - struct rwkv_future_tensor fn(struct rwkv_future_ctx & ctx) const { - ctx.add_objects(sizeof(struct ggml_tensor)); - 
ctx.add_memory(sizeof(void *) / sizeof(uint32_t)); - return this->dup(ctx); - } - - struct rwkv_future_tensor mul_mat(struct rwkv_future_ctx & ctx, const struct rwkv_future_tensor & other) const { - return ctx.alloc(GGML_TYPE_F32, this->height, other.height); - } - - struct rwkv_future_tensor get_rows(struct rwkv_future_ctx & ctx, const struct rwkv_future_tensor & other) const { - return ctx.alloc(GGML_TYPE_F32, this->width, other.width); - } -}; - -const size_t rwkv_tensor_header::size() const { - return rwkv_future_tensor::size(rwkv_type_to_ggml[this->data_type], this->width, this->height); -} - -struct rwkv_future_tensor rwkv_future_ctx::declare(const enum ggml_type type, const uint64_t width, const uint64_t height) { - return rwkv_future_tensor(type, width, height); -} - -struct rwkv_future_tensor rwkv_future_ctx::alloc(const enum ggml_type type, const uint64_t width, const uint64_t height, const bool use_scratch) { - return this->declare(type, width, height).alloc(*this, use_scratch); -} - -struct rwkv_ggml_context { - std::unique_ptr scratch; - struct ggml_context * ctx; - - rwkv_ggml_context(): ctx(NULL) {} - - rwkv_ggml_context(const struct rwkv_future_ctx future_ctx): ctx(NULL) { - scratch.reset(new(std::nothrow) uint8_t[future_ctx.scratch_size]); - - if (!scratch) { - return; - } - - const size_t memory_required_overhead = size_t(128) * 1024 * 1024; - const size_t memory_required_overhead_sc = size_t(64) * 1024 * 1024; - - ctx = ggml_init({ future_ctx.objects_count * GGML_OBJECT_SIZE + future_ctx.memory_size + memory_required_overhead, NULL, false}); - - if (!ctx) { - return; - } - - ggml_set_scratch(ctx, { 0, memory_required_overhead_sc + future_ctx.scratch_size, scratch.get() }); - } - - struct rwkv_ggml_context & operator=(struct rwkv_ggml_context && source) { - scratch.reset(source.scratch.release()); - std::swap(ctx, source.ctx); - return *this; - } - - ~rwkv_ggml_context() { - if (ctx) { - ggml_free(ctx); - } - } -}; - -// An instance of an RWKV model loaded into memory. -// Contains all the model weights. -// Shared by one or more contexts. -struct rwkv_instance { - struct rwkv_ggml_context ctx; - struct rwkv_model model; - - // TODO Come up with a better solution to estimate "work tensor" size - // The ggml_cgraph allocates a "work tensor" the first time it is used. - // Currently, the height of blocks.0.ffn.key.weight is the bottleneck in our implementation of RWKV. - // Since it is the largest dimension used in any matrix multiply, it is the size used for the "work tensor". - // However, if ggml changes its implementation, or rwkv.cpp changes its own implementation, at any point, - // this may become outdated. We need to find a way not to hardcode a specific tensor, but to calculate accurately. - // This may come out of a ggml issue: https://github.com/ggerganov/ggml/issues/214 - size_t ffn_key_size; -}; - -// The hidden state of a single RWKV layer. -// These are mostly used for dividing up the input state, and writing portions of the output state. -// But they're also used in building the computation graphs to represent the operations -// used from input->output (operating "in place" on a rwkv_layer_state). -struct rwkv_layer_state { - struct ggml_tensor * ffn_xx; - struct ggml_tensor * att_xx; - struct ggml_tensor * att_aa; - struct ggml_tensor * att_bb; - struct ggml_tensor * att_pp; -}; - -// Holds a single computation graph and its ggml context. -// Graphs each have their own context so that they can be individually freed and rebuilt. 
-// Graphs read hidden state from the rwkv_context and then write it back to the rwkv_context. -// (see rwkv_context.input_layers and rwkv_context.output_layers) -struct rwkv_graph { - struct rwkv_ggml_context ctx; - struct ggml_tensor * tokens; - - // ggml_cgraph is so large that it can cause stack overflows if not stored on the heap - std::unique_ptr cgraph; - - size_t pre_logits_nodes; - size_t pre_logits_leafs; - size_t post_logits_nodes; - size_t post_logits_leafs; -}; - -// RWKV context for a specific instance. -// Contains computation graphs and is used for inference. -struct rwkv_context { - std::shared_ptr instance; - - // Reused by all graphs. - struct rwkv_ggml_context ctx; - struct ggml_tensor * input_state; - std::unique_ptr input_layers; - struct ggml_tensor * output_state; - std::unique_ptr output_layers; - struct ggml_tensor * logits; - - uint32_t n_threads; - - // The serial graph implements the traditional RNN mode that processes only one token at a time (serial mode). - struct rwkv_graph serial_graph; - - // The sequence graph implements the "sequence mode" (or transformer/GPT mode) that processes multiple tokens at a time. - // This can be an order of magnitude or so faster than serial execution if used properly. - size_t sequence_len; - struct rwkv_graph sequence_graph; - - enum rwkv_error_flags last_error; - bool print_errors; - - float * state_in = 0; //stores input state, or use null for a new state - float * state_out = 0; //stores address of output state buffer - float * logits_out = 0; //stores address of output logit buffer - - size_t gpu_layers; - std::vector work_buffer; -}; - -// https://stackoverflow.com/a/6458689 -template -bool rwkv_set_params(struct rwkv_model & model, F callback) { - RWKV_ENSURE_OR_FALSE(callback("emb.weight", model.emb)); - RWKV_ENSURE_OR_FALSE(callback("blocks.0.ln0.weight", model.ln0_weight)); - RWKV_ENSURE_OR_FALSE(callback("blocks.0.ln0.bias", model.ln0_bias)); - - uint32_t n_layer = model.header.n_layer; - std::unique_ptr layers(new(std::nothrow) struct rwkv_layer[n_layer]); - RWKV_ASSERT_FALSE_MSG(RWKV_ERROR_ALLOC, layers.get(), "Failed to allocate model layers"); - model.layers = std::move(layers); - - for (uint32_t i = 0; i < n_layer; i++) { - char buffer[128]; - size_t offset = sprintf(buffer, "blocks.%" PRId32 ".", i); - - rwkv_layer & layer = model.layers[i]; - RWKV_ENSURE_OR_FALSE(callback((strcpy(&buffer[offset], "ln1.weight"), buffer), layer.ln1_weight)); - RWKV_ENSURE_OR_FALSE(callback((strcpy(&buffer[offset], "ln1.bias"), buffer), layer.ln1_bias)); - - RWKV_ENSURE_OR_FALSE(callback((strcpy(&buffer[offset], "att.time_mix_k"), buffer), layer.att_time_mix_k)); - RWKV_ENSURE_OR_FALSE(callback((strcpy(&buffer[offset], "att.time_mix_v"), buffer), layer.att_time_mix_v)); - RWKV_ENSURE_OR_FALSE(callback((strcpy(&buffer[offset], "att.time_mix_r"), buffer), layer.att_time_mix_r)); - RWKV_ENSURE_OR_FALSE(callback((strcpy(&buffer[offset], "att.time_first"), buffer), layer.att_time_first)); - RWKV_ENSURE_OR_FALSE(callback((strcpy(&buffer[offset], "att.time_decay"), buffer), layer.att_time_decay)); - RWKV_ENSURE_OR_FALSE(callback((strcpy(&buffer[offset], "att.key.weight"), buffer), layer.att_key)); - RWKV_ENSURE_OR_FALSE(callback((strcpy(&buffer[offset], "att.value.weight"), buffer), layer.att_value)); - RWKV_ENSURE_OR_FALSE(callback((strcpy(&buffer[offset], "att.receptance.weight"), buffer), layer.att_receptance)); - RWKV_ENSURE_OR_FALSE(callback((strcpy(&buffer[offset], "att.output.weight"), buffer), layer.att_output)); - - 
RWKV_ENSURE_OR_FALSE(callback((strcpy(&buffer[offset], "ln2.weight"), buffer), layer.ln2_weight)); - RWKV_ENSURE_OR_FALSE(callback((strcpy(&buffer[offset], "ln2.bias"), buffer), layer.ln2_bias)); - - RWKV_ENSURE_OR_FALSE(callback((strcpy(&buffer[offset], "ffn.time_mix_k"), buffer), layer.ffn_time_mix_k)); - RWKV_ENSURE_OR_FALSE(callback((strcpy(&buffer[offset], "ffn.time_mix_r"), buffer), layer.ffn_time_mix_r)); - RWKV_ENSURE_OR_FALSE(callback((strcpy(&buffer[offset], "ffn.key.weight"), buffer), layer.ffn_key)); - RWKV_ENSURE_OR_FALSE(callback((strcpy(&buffer[offset], "ffn.value.weight"), buffer), layer.ffn_value)); - RWKV_ENSURE_OR_FALSE(callback((strcpy(&buffer[offset], "ffn.receptance.weight"), buffer), layer.ffn_receptance)); - } - - RWKV_ENSURE_OR_FALSE(callback("ln_out.weight", model.ln_out_weight)); - RWKV_ENSURE_OR_FALSE(callback("ln_out.bias", model.ln_out_bias)); - RWKV_ENSURE_OR_FALSE(callback("head.weight", model.head)); - return true; -} - -void rwkv_future_carry_x(struct rwkv_future_ctx & ctx, - const struct rwkv_future_tensor weight, - const struct rwkv_future_tensor bias, - struct rwkv_future_tensor & x, - struct rwkv_future_tensor & x_prev, - struct rwkv_future_tensor & carry -) { - if (x.height == 1) { - x = x.layer_norm(ctx, weight, bias); - x_prev = carry; - carry = x; - } else { - x = x.layer_norm(ctx, weight.repeat(ctx, x), bias.repeat(ctx, x)); - - x_prev = x.dup(ctx) - .set_inplace(ctx, carry) - .set_inplace(ctx, x.subview(ctx, x.width, x.height - 1)); - - carry = x.subview(ctx, x.width); - } -} - -void rwkv_carry_x(struct ggml_context * ctx, - struct ggml_tensor * weight, - struct ggml_tensor * bias, - struct ggml_tensor *& x, - struct ggml_tensor *& x_prev, - struct ggml_tensor *& carry -) { - const size_t n_embed = x->ne[0]; - const size_t sequence_len = x->ne[1]; - - if (sequence_len == 1) { - // self.layer_norm(x, self.w.blocks[i].ln2) - x = rwkv_layer_norm(ctx, x, weight, bias); - - // xx = state[5*i+0] - x_prev = carry; - - // state[5*i+0] = x - carry = x; - } else { - // self.layer_norm(x, self.w.blocks[i].ln2) - x = rwkv_layer_norm(ctx, x, ggml_repeat(ctx, weight, x), ggml_repeat(ctx, bias, x)); - - // xx = torch.cat((state[5*i+0].to(dtype=self.FLOAT_MODE).unsqueeze(0), x[:-1,:])) - x_prev = ggml_new_tensor_2d(ctx, GGML_TYPE_F32, n_embed, sequence_len); - x_prev = ggml_set_1d_inplace(ctx, x_prev, carry, 0); - x_prev = ggml_set_1d_inplace(ctx, x_prev, ggml_view_1d(ctx, x, n_embed * (sequence_len - 1), 0), n_embed * sizeof(float)); - - // state[5*i+0] = x[-1,:] - carry = ggml_view_1d(ctx, x, n_embed, n_embed * (sequence_len - 1) * sizeof(float)); - } -} - -void rwkv_future_att_rkv(struct rwkv_future_ctx & ctx, - const struct rwkv_future_tensor time_mix_k, - const struct rwkv_future_tensor time_mix_v, - const struct rwkv_future_tensor time_mix_r, - const struct rwkv_future_tensor x, - const struct rwkv_future_tensor x_prev, - const struct rwkv_future_tensor att_r, - const struct rwkv_future_tensor att_k, - const struct rwkv_future_tensor att_v, - struct rwkv_future_tensor & r, - struct rwkv_future_tensor & k, - struct rwkv_future_tensor & v -) { - const struct rwkv_future_tensor xk = x.combine(ctx, time_mix_k).consume(ctx, x_prev.combine(ctx, time_mix_k.fn(ctx))); - const struct rwkv_future_tensor xv = x.combine(ctx, time_mix_v).consume(ctx, x_prev.combine(ctx, time_mix_v.fn(ctx))); - const struct rwkv_future_tensor xr = x.combine(ctx, time_mix_r).consume(ctx, x_prev.combine(ctx, time_mix_r.fn(ctx))); - - r = att_r.mul_mat(ctx, xr).fn(ctx); - k = 
att_k.mul_mat(ctx, xk); - v = att_v.mul_mat(ctx, xv); -} - -void rwkv_att_rkv( - struct ggml_context * ctx, - struct rwkv_layer layer, - struct ggml_tensor * x, - struct ggml_tensor * x_prev, - struct ggml_tensor *& r, - struct ggml_tensor *& k, - struct ggml_tensor *& v -) { - // xk = x * time_mix_k + state[5 * i + 1] * (1 - time_mix_k) - struct ggml_tensor * xk = ggml_add_inplace(ctx, - ggml_mul(ctx, x, layer.att_time_mix_k), - ggml_mul(ctx, x_prev, rwkv_1_minus_x(ctx, layer.att_time_mix_k)) - ); - - // xv = x * time_mix_v + state[5 * i + 1] * (1 - time_mix_v) - struct ggml_tensor * xv = ggml_add_inplace(ctx, - ggml_mul(ctx, x, layer.att_time_mix_v), - ggml_mul(ctx, x_prev, rwkv_1_minus_x(ctx, layer.att_time_mix_v)) - ); - - // xr = x * time_mix_r + state[5 * i + 1] * (1 - time_mix_r) - struct ggml_tensor * xr = ggml_add_inplace(ctx, - ggml_mul(ctx, x, layer.att_time_mix_r), - ggml_mul(ctx, x_prev, rwkv_1_minus_x(ctx, layer.att_time_mix_r)) - ); - - // r = torch.sigmoid(rw @ xr) - r = rwkv_sigmoid(ctx, ggml_mul_mat(ctx, layer.att_receptance, xr)); - // k = kw @ xk - k = ggml_mul_mat(ctx, layer.att_key, xk); - // v = vw @ xv - v = ggml_mul_mat(ctx, layer.att_value, xv); -} - -struct rwkv_future_tensor rwkv_future_att_wkv(struct rwkv_future_ctx & ctx, - const struct rwkv_future_tensor time_first, - const struct rwkv_future_tensor time_decay, - struct rwkv_future_tensor & aa, - struct rwkv_future_tensor & bb, - struct rwkv_future_tensor & pp, - const struct rwkv_future_tensor k, - const struct rwkv_future_tensor v -) { - struct rwkv_future_tensor ww = time_first.combine(ctx, k); - struct rwkv_future_tensor qq = pp.fn(ctx); - struct rwkv_future_tensor e1 = pp.combine(ctx, qq).fn(ctx); - struct rwkv_future_tensor e2 = ww.combine(ctx, qq).fn(ctx); - - struct rwkv_future_tensor a = e1.combine(ctx, aa).consume(ctx, e2.combine(ctx, v)); - struct rwkv_future_tensor b = e1.combine(ctx, bb).consume(ctx, e2); - - ww = pp.combine(ctx, time_decay); - qq = ww.fn(ctx); - e1 = ww.combine(ctx, qq).fn(ctx); - e2 = k.combine(ctx, qq).fn(ctx); - - // aa, bb - aa = e1.combine(ctx, aa).consume(ctx, e2.combine(ctx, v)); - bb = e1.combine(ctx, bb).consume(ctx, e2); - pp = qq; - - // wkv - return a.combine(ctx, b); -} - -struct ggml_tensor * rwkv_att_wkv( - struct ggml_context * ctx, - struct ggml_tensor * att_time_first, - struct ggml_tensor * att_time_decay, - struct ggml_tensor * k, - struct ggml_tensor * v, - struct ggml_tensor *& aa, - struct ggml_tensor *& bb, - struct ggml_tensor *& pp -) { - // ww = time_first + k - struct ggml_tensor * ww = ggml_add(ctx, att_time_first, k); - // qq = torch.maximum(pp, ww) - struct ggml_tensor * qq = rwkv_max(ctx, pp, ww); - // e1 = torch.exp(pp - qq) - struct ggml_tensor * e1 = rwkv_exp(ctx, ggml_sub(ctx, pp, qq)); - // e2 = torch.exp(ww - qq) - struct ggml_tensor * e2 = rwkv_exp(ctx, ggml_sub(ctx, ww, qq)); - - // a = e1 * aa + e2 * v - struct ggml_tensor * a = ggml_add_inplace(ctx, ggml_mul(ctx, e1, aa), ggml_mul(ctx, e2, v)); - // b = e1 * bb + e2 - struct ggml_tensor * b = ggml_add_inplace(ctx, ggml_mul(ctx, e1, bb), e2); - - // ww = pp + time_decay - ww = ggml_add(ctx, pp, att_time_decay); - // qq = torch.maximum(ww, k) - qq = rwkv_max(ctx, ww, k); - // e1 = torch.exp(ww - qq) - e1 = rwkv_exp(ctx, ggml_sub(ctx, ww, qq)); - // e2 = torch.exp(k[t] - qq) - e2 = rwkv_exp(ctx, ggml_sub(ctx, k, qq)); - - // state[5 * i + 2] = e1 * aa + e2 * v - // state[5 * i + 3] = e1 * bb + e2 - // state[5 * i + 4] = qq - aa = ggml_add_inplace(ctx, ggml_mul(ctx, e1, aa), ggml_mul(ctx, 
e2, v)); - bb = ggml_add_inplace(ctx, ggml_mul(ctx, e1, bb), e2); - pp = qq; - - // wkv = a / b - return ggml_div(ctx, a, b); -} - - -struct rwkv_future_tensor rwkv_future_att(struct rwkv_future_ctx & ctx, - const struct rwkv_future_tensor ln1_weight, - const struct rwkv_future_tensor ln1_bias, - const struct rwkv_future_tensor time_mix_k, - const struct rwkv_future_tensor time_mix_v, - const struct rwkv_future_tensor time_mix_r, - const struct rwkv_future_tensor time_first, - const struct rwkv_future_tensor time_decay, - const struct rwkv_future_tensor att_r, - const struct rwkv_future_tensor att_k, - const struct rwkv_future_tensor att_v, - const struct rwkv_future_tensor att_output, - struct rwkv_future_tensor x, - struct rwkv_future_tensor & att_xx, - struct rwkv_future_tensor & att_aa, - struct rwkv_future_tensor & att_bb, - struct rwkv_future_tensor & att_pp -) { - struct rwkv_future_tensor x_prev; - rwkv_future_carry_x(ctx, ln1_weight, ln1_bias, x, x_prev, att_xx); - - struct rwkv_future_tensor r, k, v; - rwkv_future_att_rkv(ctx, time_mix_k, time_mix_v, time_mix_r, x, x_prev, att_r, att_k, att_v, r, k, v); - - struct rwkv_future_tensor wkv = rwkv_future_att_wkv(ctx, time_first, time_decay, att_aa, att_bb, att_pp, k, v); - - return att_output.mul_mat(ctx, r.combine(ctx, wkv)); -} - -struct ggml_tensor * rwkv_att(struct ggml_context * ctx, struct ggml_tensor * x, struct rwkv_layer layer, struct rwkv_layer_state & state) { - struct ggml_tensor * x_prev; - rwkv_carry_x(ctx, layer.ln1_weight, layer.ln1_bias, x, x_prev, state.att_xx); - - struct ggml_tensor * r, * k, * v; - rwkv_att_rkv(ctx, layer, x, x_prev, r, k, v); - - struct ggml_tensor * wkv = rwkv_att_wkv(ctx, layer.att_time_first, layer.att_time_decay, k, v, state.att_aa, state.att_bb, state.att_pp); - - // ow @ (r * xx) - return ggml_mul_mat(ctx, layer.att_output, ggml_mul(ctx, r, wkv)); -} - -struct rwkv_future_tensor rwkv_future_ffn(struct rwkv_future_ctx & ctx, - const struct rwkv_future_tensor ln2_weight, - const struct rwkv_future_tensor ln2_bias, - const struct rwkv_future_tensor time_mix_k, - const struct rwkv_future_tensor time_mix_r, - const struct rwkv_future_tensor ffn_k, - const struct rwkv_future_tensor ffn_v, - const struct rwkv_future_tensor ffn_r, - struct rwkv_future_tensor x, - struct rwkv_future_tensor & ffn_xx -) { - struct rwkv_future_tensor x_prev; - rwkv_future_carry_x(ctx, ln2_weight, ln2_bias, x, x_prev, ffn_xx); - - struct rwkv_future_tensor xk = x.combine(ctx, time_mix_k).consume(ctx, x_prev.combine(ctx, time_mix_k.fn(ctx))); - struct rwkv_future_tensor xr = x.combine(ctx, time_mix_r).consume(ctx, x_prev.combine(ctx, time_mix_r.fn(ctx))); - - struct rwkv_future_tensor r = ffn_r.mul_mat(ctx, xr).fn(ctx); - struct rwkv_future_tensor k = ffn_k.mul_mat(ctx, xk).view(ctx).view(ctx); - - return r.consume(ctx, ffn_v.mul_mat(ctx, k)); -} - -struct ggml_tensor * rwkv_ffn(struct ggml_context * ctx, struct ggml_tensor * x, struct rwkv_layer layer, struct rwkv_layer_state & state) { - struct ggml_tensor * x_prev; - rwkv_carry_x(ctx, layer.ln2_weight, layer.ln2_bias, x, x_prev, state.ffn_xx); - - // xk = x * time_mix_k + state[5 * i + 1] * (1 - time_mix_k) - // xk = x * time_mix_k + state[5 * i + 0] * (1 - time_mix_k) - struct ggml_tensor * xk = ggml_add_inplace( - ctx, - ggml_mul(ctx, x, layer.ffn_time_mix_k), - ggml_mul(ctx, x_prev, rwkv_1_minus_x(ctx, layer.ffn_time_mix_k)) - ); - - // xr = x * time_mix_r + state[5 * i + 0] * (1 - time_mix_r) - struct ggml_tensor * xr = ggml_add_inplace( - ctx, - ggml_mul(ctx, x, 
layer.ffn_time_mix_r), - ggml_mul(ctx, x_prev, rwkv_1_minus_x(ctx, layer.ffn_time_mix_r)) - ); - - // r = torch.sigmoid(rw @ xr) - struct ggml_tensor * r = rwkv_sigmoid(ctx, ggml_mul_mat(ctx, layer.ffn_receptance, xr)); - - // k = torch.square(torch.relu(kw @ xk)) - struct ggml_tensor * k = ggml_sqr_inplace(ctx, ggml_relu_inplace(ctx, ggml_mul_mat(ctx, layer.ffn_key, xk))); - - // r * (vw @ k) - return ggml_mul_inplace(ctx, r, ggml_mul_mat(ctx, layer.ffn_value, k)); -} - -struct rwkv_future_tensor rwkv_future_graph_work(struct rwkv_future_ctx & ctx, - const enum ggml_type type, - const size_t ffn_key_height, - const size_t n_threads, - const size_t sequence_len = 1 -) { -#if defined(GGML_USE_CLBLAST) || defined(GGML_USE_CUBLAS) - enum ggml_type mul_mat_type = type == GGML_TYPE_F32 ? GGML_TYPE_F32 : GGML_TYPE_F16; -#else - enum ggml_type mul_mat_type = ggml_is_quantized(type) ? GGML_TYPE_Q8_1 : type; -#endif - return ctx.alloc(GGML_TYPE_I8, rwkv_future_tensor::size(mul_mat_type, ffn_key_height, sequence_len) * n_threads + 64 * (n_threads - 1)); -} - -struct rwkv_future_tensor rwkv_future_serial_graph(struct rwkv_future_ctx & ctx, - const struct rwkv_future_tensor tokens, - const size_t n_threads, - - const struct rwkv_future_tensor emb, - const struct rwkv_future_tensor ln0_weight, - const struct rwkv_future_tensor ln0_bias, - - const size_t n_layer, - - const struct rwkv_future_tensor ln1_weight, - const struct rwkv_future_tensor ln1_bias, - const struct rwkv_future_tensor att_time_mix_k, - const struct rwkv_future_tensor att_time_mix_v, - const struct rwkv_future_tensor att_time_mix_r, - const struct rwkv_future_tensor att_time_first, - const struct rwkv_future_tensor att_time_decay, - const struct rwkv_future_tensor att_r, - const struct rwkv_future_tensor att_k, - const struct rwkv_future_tensor att_v, - const struct rwkv_future_tensor att_output, - struct rwkv_future_tensor & att_xx, - struct rwkv_future_tensor & att_aa, - struct rwkv_future_tensor & att_bb, - struct rwkv_future_tensor & att_pp, - - const struct rwkv_future_tensor ln2_weight, - const struct rwkv_future_tensor ln2_bias, - const struct rwkv_future_tensor ffn_time_mix_k, - const struct rwkv_future_tensor ffn_time_mix_r, - const struct rwkv_future_tensor ffn_k, - const struct rwkv_future_tensor ffn_v, - const struct rwkv_future_tensor ffn_r, - struct rwkv_future_tensor & ffn_xx, - - const struct rwkv_future_tensor ln_out_weight, - const struct rwkv_future_tensor ln_out_bias, - const struct rwkv_future_tensor head -) { - struct rwkv_future_tensor x = emb.get_rows(ctx, tokens).layer_norm(ctx, ln0_weight, ln0_bias); - - for (size_t i = 0; i < n_layer; i++) { - x = x.consume(ctx, rwkv_future_att(ctx, - ln1_weight, ln1_bias, att_time_mix_k, att_time_mix_v, att_time_mix_r, att_time_first, att_time_decay, - att_r, att_k, att_v, att_output, x, att_xx, att_aa, att_bb, att_pp)); - - x = x.consume(ctx, rwkv_future_ffn(ctx, - ln2_weight, ln2_bias, ffn_time_mix_k, ffn_time_mix_r, ffn_k, ffn_v, ffn_r, x, ffn_xx)); - - ffn_xx.view(ctx); - att_xx.view(ctx); - att_aa.view(ctx); - att_bb.view(ctx); - att_pp.view(ctx); - } - - x = x.layer_norm(ctx, ln_out_weight, ln_out_bias); - - rwkv_future_graph_work(ctx, ffn_k.type, ffn_k.height, n_threads, tokens.width); - - return head.mul_mat(ctx, x).view(ctx); -} - -bool rwkv_build_serial_graph( - struct ggml_context * ctx, - struct rwkv_model & model, - struct ggml_tensor * tokens, - struct rwkv_layer_state * inputs, - struct rwkv_layer_state * outputs, - struct ggml_tensor * logits, - struct 
ggml_cgraph * cgraph, - - size_t * const pre_logits_nodes, - size_t * const pre_logits_leafs, - size_t * const post_logits_nodes, - size_t * const post_logits_leafs -) { - // x = self.w.emb.weight[token] - struct ggml_tensor * x = ggml_get_rows(ctx, model.emb, tokens); - - // x = self.layer_norm(x, self.w.blocks[0].ln0) - x = rwkv_layer_norm(ctx, x, model.ln0_weight, model.ln0_bias); - - for (size_t i = 0; i < model.header.n_layer; i++) { - struct rwkv_layer & layer = model.layers[i]; - - struct rwkv_layer_state state = inputs[i]; - x = ggml_add_inplace(ctx, x, rwkv_att(ctx, x, layer, state)); - x = ggml_add_inplace(ctx, x, rwkv_ffn(ctx, x, layer, state)); - - struct rwkv_layer_state & output = outputs[i]; - ggml_build_forward_expand(cgraph, ggml_cpy(ctx, state.ffn_xx, output.ffn_xx)); - ggml_build_forward_expand(cgraph, ggml_cpy(ctx, state.att_xx, output.att_xx)); - ggml_build_forward_expand(cgraph, ggml_cpy(ctx, state.att_aa, output.att_aa)); - ggml_build_forward_expand(cgraph, ggml_cpy(ctx, state.att_bb, output.att_bb)); - ggml_build_forward_expand(cgraph, ggml_cpy(ctx, state.att_pp, output.att_pp)); - } - - *pre_logits_nodes = cgraph->n_nodes; - *pre_logits_leafs = cgraph->n_leafs; - - // x = self.layer_norm(x[-1,:], self.w.ln_out) - x = rwkv_layer_norm(ctx, x, model.ln_out_weight, model.ln_out_bias); - - // x = (self.w.head.weight @ x).float() - ggml_build_forward_expand(cgraph, ggml_cpy(ctx, ggml_mul_mat(ctx, model.head, x), logits)); - - *post_logits_nodes = cgraph->n_nodes; - *post_logits_leafs = cgraph->n_leafs; - - return true; -} - -struct rwkv_future_tensor rwkv_future_sequence_graph(struct rwkv_future_ctx & ctx, - const struct rwkv_future_tensor tokens, - const size_t n_threads, - - const struct rwkv_future_tensor emb, - const struct rwkv_future_tensor ln0_weight, - const struct rwkv_future_tensor ln0_bias, - - const size_t n_layer, - - const struct rwkv_future_tensor ln1_weight, - const struct rwkv_future_tensor ln1_bias, - const struct rwkv_future_tensor att_time_mix_k, - const struct rwkv_future_tensor att_time_mix_v, - const struct rwkv_future_tensor att_time_mix_r, - const struct rwkv_future_tensor att_time_first, - const struct rwkv_future_tensor att_time_decay, - const struct rwkv_future_tensor att_r, - const struct rwkv_future_tensor att_k, - const struct rwkv_future_tensor att_v, - const struct rwkv_future_tensor att_output, - struct rwkv_future_tensor & att_xx, - struct rwkv_future_tensor & att_aa, - struct rwkv_future_tensor & att_bb, - struct rwkv_future_tensor & att_pp, - - const struct rwkv_future_tensor ln2_weight, - const struct rwkv_future_tensor ln2_bias, - const struct rwkv_future_tensor ffn_time_mix_k, - const struct rwkv_future_tensor ffn_time_mix_r, - const struct rwkv_future_tensor ffn_k, - const struct rwkv_future_tensor ffn_v, - const struct rwkv_future_tensor ffn_r, - struct rwkv_future_tensor & ffn_xx, - - const struct rwkv_future_tensor ln_out_weight, - const struct rwkv_future_tensor ln_out_bias, - const struct rwkv_future_tensor head -) { - struct rwkv_future_tensor x = emb.get_rows(ctx, tokens); - x = x.layer_norm(ctx, ln0_weight.repeat(ctx, x), ln0_bias.repeat(ctx, x)); - - for (size_t i = 0; i < n_layer; i++) { - struct rwkv_future_tensor x0 = x, x_prev; - rwkv_future_carry_x(ctx, ln1_weight, ln1_bias, x0, x_prev, att_xx); - - struct rwkv_future_tensor r, k, v; - rwkv_future_att_rkv(ctx, att_time_mix_k, att_time_mix_v, att_time_mix_r, x0, x_prev, att_r, att_k, att_v, r, k, v); - - for (size_t i = 0; i < tokens.width; i++) { - struct 
rwkv_future_tensor kt = k.subview(ctx, emb.width); - struct rwkv_future_tensor vt = v.subview(ctx, emb.width); - struct rwkv_future_tensor xt = x_prev.subview(ctx, emb.width); - struct rwkv_future_tensor wkv = rwkv_future_att_wkv(ctx, att_time_first, att_time_decay, att_aa, att_bb, att_pp, k, v); - wkv.view(ctx); - } - - x = x.consume(ctx, att_output.mul_mat(ctx, r.combine(ctx, x_prev))); - x = x.consume(ctx, rwkv_future_ffn(ctx, ln2_weight, ln2_bias, ffn_time_mix_k, ffn_time_mix_r, ffn_k, ffn_v, ffn_r, x, ffn_xx)); - - ffn_xx.view(ctx); - att_xx.view(ctx); - att_aa.view(ctx); - att_bb.view(ctx); - att_pp.view(ctx); - } - - x = x.subview(ctx, emb.width).layer_norm(ctx, ln_out_weight, ln_out_bias); - - rwkv_future_graph_work(ctx, ffn_k.type, ffn_k.height, n_threads, tokens.width); - - return head.mul_mat(ctx, x).view(ctx); -} - -bool rwkv_build_sequence_graph( - struct ggml_context * ctx, - struct rwkv_model & model, - struct ggml_tensor * tokens, - struct rwkv_layer_state * inputs, - struct rwkv_layer_state * outputs, - struct ggml_tensor * logits, - struct ggml_cgraph * cgraph, - - size_t * const pre_logits_nodes, - size_t * const pre_logits_leafs, - size_t * const post_logits_nodes, - size_t * const post_logits_leafs -) { - const uint32_t n_embed = model.header.n_embed; - const size_t sequence_len = tokens->ne[0]; - - struct ggml_tensor * x = ggml_get_rows(ctx, model.emb, tokens); - x = rwkv_layer_norm(ctx, x, ggml_repeat(ctx, model.ln0_weight, x), ggml_repeat(ctx, model.ln0_bias, x)); - - for (size_t i = 0; i < model.header.n_layer; i++) { - struct rwkv_layer & layer = model.layers[i]; - struct rwkv_layer_state state = inputs[i]; - - struct ggml_tensor * x0 = x, * x_prev; - rwkv_carry_x(ctx, layer.ln1_weight, layer.ln1_bias, x0, x_prev, state.att_xx); - - struct ggml_tensor * r, * k, * v; - rwkv_att_rkv(ctx, layer, x0, x_prev, r, k, v); - - ggml_build_forward_expand(cgraph, r); - - for (uint32_t t = 0; t < sequence_len; t++) { - struct ggml_tensor * kt = ggml_view_1d(ctx, k, n_embed, n_embed * sizeof(float) * t); - struct ggml_tensor * vt = ggml_view_1d(ctx, v, n_embed, n_embed * sizeof(float) * t); - struct ggml_tensor * xt = ggml_view_1d(ctx, x_prev, n_embed, n_embed * sizeof(float) * t); - struct ggml_tensor * wkv = rwkv_att_wkv(ctx, layer.att_time_first, layer.att_time_decay, kt, vt, state.att_aa, state.att_bb, state.att_pp); - ggml_build_forward_expand(cgraph, ggml_cpy(ctx, wkv, xt)); - } - - x = ggml_add_inplace(ctx, x, ggml_mul_mat(ctx, layer.att_output, ggml_mul(ctx, r, x_prev))); - x = ggml_add_inplace(ctx, x, rwkv_ffn(ctx, x, layer, state)); - - struct rwkv_layer_state & output = outputs[i]; - ggml_build_forward_expand(cgraph, ggml_cpy(ctx, state.ffn_xx, output.ffn_xx)); - ggml_build_forward_expand(cgraph, ggml_cpy(ctx, state.att_xx, output.att_xx)); - ggml_build_forward_expand(cgraph, ggml_cpy(ctx, state.att_aa, output.att_aa)); - ggml_build_forward_expand(cgraph, ggml_cpy(ctx, state.att_bb, output.att_bb)); - ggml_build_forward_expand(cgraph, ggml_cpy(ctx, state.att_pp, output.att_pp)); - } - - *pre_logits_nodes = cgraph->n_nodes; - *pre_logits_leafs = cgraph->n_leafs; - - // x = self.layer_norm(x[-1,:], self.w.ln_out) - x = rwkv_layer_norm(ctx, ggml_view_1d(ctx, x, n_embed, n_embed * sizeof(float) * (sequence_len - 1)), model.ln_out_weight, model.ln_out_bias); - - // x = (self.w.head.weight @ x).float() - ggml_build_forward_expand(cgraph, ggml_cpy(ctx, ggml_mul_mat(ctx, model.head, x), logits)); - - *post_logits_nodes = cgraph->n_nodes; - *post_logits_leafs = 
cgraph->n_leafs; - - return true; -} - -void rwkv_set_print_errors(struct rwkv_context * ctx, bool print_errors) { - bool * ptr = ctx ? &ctx->print_errors : &global_print_errors; - *ptr = print_errors; -} - -bool rwkv_get_print_errors(struct rwkv_context * ctx) { - return ctx ? ctx->print_errors : global_print_errors; -} - -enum rwkv_error_flags rwkv_get_last_error(struct rwkv_context * ctx) { - enum rwkv_error_flags * ptr = ctx ? &ctx->last_error : &global_last_error; - enum rwkv_error_flags value = *ptr; - *ptr = RWKV_ERROR_NONE; - return value; -} - -struct rwkv_file { - FILE * file; - - rwkv_file(FILE * file): file(file) {} - - ~rwkv_file() { - if (file) { - fclose(file); - } - } -}; - -bool rwkv_instance_from_file(const char * file_path, struct rwkv_instance & instance) { - struct stat file_stat; - struct rwkv_model model; - struct rwkv_ggml_context ctx; - size_t ffn_key_size = 0; - - std::unordered_map parameters; - - { - rwkv_file file(fopen(file_path, "rb")); - - RWKV_ASSERT_NULL_MSG(RWKV_ERROR_FILE | RWKV_ERROR_FILE_OPEN, file.file, "Failed to open file %s", file_path); - // Be very careful when changing this code. It must support files larger than 2 GB by using 64-bit functions to get the file length. - RWKV_ASSERT_NULL_MSG(RWKV_ERROR_FILE | RWKV_ERROR_FILE_STAT, fstat(fileno(file.file), &file_stat) == 0, "Failed to stat file %s", file_path); - RWKV_ASSERT_NULL_MSG(RWKV_ERROR_FILE, rwkv_fread_file_header(file.file, model.header), "Invalid file header"); - - struct rwkv_tensor_header tensor_header; - std::string name; - struct rwkv_future_ctx future_ctx; - - while ((size_t) ftell(file.file) < (size_t) file_stat.st_size) { - RWKV_ASSERT_NULL_MSG(RWKV_ERROR_MODEL_PARAMS, rwkv_fread_tensor_header(file.file, tensor_header), "Invalid tensor header"); - RWKV_ASSERT_NULL_MSG(RWKV_ERROR_MODEL_PARAMS, rwkv_fread_string(file.file, tensor_header.key_length, name), "Failed to read tensor name"); - RWKV_ASSERT_NULL_MSG(RWKV_ERROR_FILE | RWKV_ERROR_FILE_READ, fseek(file.file, tensor_header.size(), SEEK_CUR) == 0, "Failed to read tensor data"); - - future_ctx.alloc(rwkv_type_to_ggml[tensor_header.data_type], tensor_header.width, tensor_header.height); - - if (ffn_key_size == 0 && name == "blocks.0.ffn.key.weight") { - ffn_key_size = tensor_header.height; - } - } - - RWKV_ASSERT_NULL_MSG(RWKV_ERROR_MODEL_PARAMS | RWKV_ERROR_PARAM_MISSING, ffn_key_size, "Model is missing parameter blocks.0.ffn.key.weight"); - RWKV_ASSERT_NULL_MSG(RWKV_ERROR_FILE | RWKV_ERROR_FILE_READ, fseek(file.file, sizeof(struct rwkv_file_header), SEEK_SET) == 0, "Failed to seek in file"); - - ctx = future_ctx; - RWKV_ASSERT_NULL_MSG(RWKV_ERROR_CTX | RWKV_ERROR_ALLOC, ctx.ctx, "Failed to allocate model context"); - - struct ggml_tensor * tensor; - - while ((size_t) ftell(file.file) < (size_t) file_stat.st_size) { - RWKV_ASSERT_NULL_MSG(RWKV_ERROR_MODEL_PARAMS, rwkv_fread_ggml_tensor(file.file, ctx.ctx, name, tensor), "Failed to read model params"); - parameters[std::move(name)] = tensor; - } - } - - std::unordered_map & parameters_ref = parameters; - RWKV_ASSERT_NULL(RWKV_ERROR_MODEL_PARAMS | RWKV_ERROR_PARAM_MISSING, rwkv_set_params(model, [&](const char * key, struct ggml_tensor *& dest) { - struct ggml_tensor * tensor = parameters_ref[key]; - RWKV_ENSURE_OR_FALSE_MSG(tensor, "Model parameter %s not found", key); - dest = tensor; - return true; - })); - - // Verify order of dimensions - struct ggml_tensor * emb = model.emb; - RWKV_ASSERT_NULL_MSG(RWKV_ERROR_MODEL_PARAMS | RWKV_ERROR_SHAPE, emb->n_dims == 2, "Unexpected 
dimension count of embedding matrix %d", emb->n_dims); - RWKV_ASSERT_NULL_MSG(RWKV_ERROR_MODEL_PARAMS | RWKV_ERROR_DIMENSION, emb->ne[0] == model.header.n_embed, "Unexpected dimension of embedding matrix %" PRId64, emb->ne[0]); - RWKV_ASSERT_NULL_MSG(RWKV_ERROR_MODEL_PARAMS | RWKV_ERROR_DIMENSION, emb->ne[1] == model.header.n_vocab, "Unexpected dimension of embedding matrix %" PRId64, emb->ne[1]); - - instance.ctx = std::move(ctx); - instance.model = std::move(model); - instance.ffn_key_size = ffn_key_size; - return true; -} - -struct rwkv_context * rwkv_new_context_impl(std::shared_ptr instance, const uint32_t n_threads) { - global_last_error = RWKV_ERROR_NONE; - - struct rwkv_file_header & header = instance->model.header; - const size_t n_vocab = header.n_vocab; - const size_t n_embed = header.n_embed; - const size_t n_layer = header.n_layer; - - struct rwkv_future_ctx future_ctx; - const struct rwkv_future_tensor future_input = future_ctx.alloc(GGML_TYPE_F32, n_embed * 5 * n_layer); - const struct rwkv_future_tensor future_output = future_ctx.alloc(GGML_TYPE_F32, n_embed * 5 * n_layer); - const struct rwkv_future_tensor future_logits = future_ctx.alloc(GGML_TYPE_F32, n_vocab); - - for (size_t i = 0; i < n_layer; i++) { - /* ffn_xx */ future_input.subview(future_ctx, n_embed); future_output.subview(future_ctx, n_embed); - /* att_xx */ future_input.subview(future_ctx, n_embed); future_output.subview(future_ctx, n_embed); - /* att_aa */ future_input.subview(future_ctx, n_embed); future_output.subview(future_ctx, n_embed); - /* att_bb */ future_input.subview(future_ctx, n_embed); future_output.subview(future_ctx, n_embed); - /* att_pp */ future_input.subview(future_ctx, n_embed); future_output.subview(future_ctx, n_embed); - } - - struct rwkv_ggml_context ctx(future_ctx); - RWKV_ASSERT_NULL_MSG(RWKV_ERROR_CTX | RWKV_ERROR_ALLOC, ctx.ctx, "Failed to allocate model context"); - - struct ggml_tensor * input = ggml_new_tensor_1d(ctx.ctx, GGML_TYPE_F32, n_embed * 5 * n_layer); - struct ggml_tensor * output = ggml_new_tensor_1d(ctx.ctx, GGML_TYPE_F32, n_embed * 5 * n_layer); - - // We collect parts of input state here. Each part is (n_embed) vector. - std::unique_ptr inputs(new(std::nothrow) struct rwkv_layer_state[n_layer]); - RWKV_ASSERT_NULL_MSG(RWKV_ERROR_ALLOC, inputs.get(), "Failed to allocate input state parts"); - - // We collect parts of output state here. Each part is (n_embed) vector. 
- std::unique_ptr outputs(new(std::nothrow) struct rwkv_layer_state[n_layer]); - RWKV_ASSERT_NULL_MSG(RWKV_ERROR_ALLOC, outputs.get(), "Failed to allocate output state parts"); - - for (size_t i = 0; i < n_layer; i++) { - struct rwkv_layer_state & input_state = inputs[i]; - input_state.ffn_xx = ggml_view_1d(ctx.ctx, input, n_embed, n_embed * (i * 5 + 0) * sizeof(float)); - input_state.att_xx = ggml_view_1d(ctx.ctx, input, n_embed, n_embed * (i * 5 + 1) * sizeof(float)); - input_state.att_aa = ggml_view_1d(ctx.ctx, input, n_embed, n_embed * (i * 5 + 2) * sizeof(float)); - input_state.att_bb = ggml_view_1d(ctx.ctx, input, n_embed, n_embed * (i * 5 + 3) * sizeof(float)); - input_state.att_pp = ggml_view_1d(ctx.ctx, input, n_embed, n_embed * (i * 5 + 4) * sizeof(float)); - - struct rwkv_layer_state & output_state = outputs[i]; - output_state.ffn_xx = ggml_view_1d(ctx.ctx, output, n_embed, n_embed * (i * 5 + 0) * sizeof(float)); - output_state.att_xx = ggml_view_1d(ctx.ctx, output, n_embed, n_embed * (i * 5 + 1) * sizeof(float)); - output_state.att_aa = ggml_view_1d(ctx.ctx, output, n_embed, n_embed * (i * 5 + 2) * sizeof(float)); - output_state.att_bb = ggml_view_1d(ctx.ctx, output, n_embed, n_embed * (i * 5 + 3) * sizeof(float)); - output_state.att_pp = ggml_view_1d(ctx.ctx, output, n_embed, n_embed * (i * 5 + 4) * sizeof(float)); - } - - struct ggml_tensor * logits = ggml_new_tensor_1d(ctx.ctx, GGML_TYPE_F32, n_vocab); - - struct rwkv_future_ctx graph_future_ctx; - const struct rwkv_future_tensor future_token = graph_future_ctx.alloc(GGML_TYPE_I32, 1, 1, false); - - const struct rwkv_model & model = instance->model; - const struct rwkv_layer & layer = model.layers[0]; - const struct rwkv_layer_state & state = inputs[0]; - struct rwkv_future_tensor ffn_xx = state.ffn_xx; - struct rwkv_future_tensor att_xx = state.att_xx; - struct rwkv_future_tensor att_aa = state.att_aa; - struct rwkv_future_tensor att_bb = state.att_bb; - struct rwkv_future_tensor att_pp = state.att_pp; - - const struct rwkv_future_tensor future_graph = rwkv_future_serial_graph(graph_future_ctx, future_token, n_threads, - model.emb, - model.ln0_weight, model.ln0_bias, - - n_layer, - layer.ln1_weight, layer.ln1_bias, - layer.att_time_mix_k, layer.att_time_mix_v, layer.att_time_mix_r, - layer.att_time_first, layer.att_time_decay, - layer.att_receptance, layer.att_key, layer.att_value, layer.att_output, - att_xx, att_aa, att_bb, att_pp, - - layer.ln2_weight, layer.ln2_bias, - layer.ffn_time_mix_k, layer.ffn_time_mix_r, - layer.ffn_key, layer.ffn_value, layer.ffn_receptance, - ffn_xx, - - model.ln_out_weight, model.ln_out_weight, - model.head - ); - - struct rwkv_graph serial_graph; - serial_graph.ctx = graph_future_ctx; - RWKV_ASSERT_NULL_MSG(RWKV_ERROR_CTX | RWKV_ERROR_ALLOC, serial_graph.ctx.ctx, "Failed to allocate serial graph context"); - serial_graph.tokens = ggml_new_i32(serial_graph.ctx.ctx, 0); - serial_graph.cgraph.reset(new(std::nothrow) struct ggml_cgraph()); - RWKV_ASSERT_NULL_MSG(RWKV_ERROR_ALLOC, serial_graph.cgraph, "Failed to allocate serial graph"); - - RWKV_ASSERT_NULL(RWKV_ERROR_GRAPH, rwkv_build_serial_graph( - serial_graph.ctx.ctx, instance->model, - serial_graph.tokens, inputs.get(), outputs.get(), logits, - serial_graph.cgraph.get(), - &serial_graph.pre_logits_nodes, &serial_graph.pre_logits_leafs, &serial_graph.post_logits_nodes, &serial_graph.post_logits_leafs - )); - - std::unique_ptr rwkv_ctx(new(std::nothrow) struct rwkv_context()); - RWKV_ASSERT_NULL_MSG(RWKV_ERROR_CTX | RWKV_ERROR_ALLOC, rwkv_ctx, 
"Failed to allocate rwkv_context"); - rwkv_ctx->instance = std::move(instance); - rwkv_ctx->ctx = std::move(ctx); - rwkv_ctx->input_state = input; - rwkv_ctx->input_layers = std::move(inputs); - rwkv_ctx->output_state = output; - rwkv_ctx->output_layers = std::move(outputs); - rwkv_ctx->logits = logits; - rwkv_ctx->n_threads = n_threads; - rwkv_ctx->serial_graph = std::move(serial_graph); - rwkv_ctx->last_error = RWKV_ERROR_NONE; - rwkv_ctx->print_errors = global_print_errors; - return rwkv_ctx.release(); -} - -struct rwkv_context * rwkv_init_from_file(const char * file_path, const uint32_t n_threads) { - global_last_error = RWKV_ERROR_NONE; - - std::shared_ptr instance(new(std::nothrow) struct rwkv_instance()); - RWKV_ASSERT_NULL_MSG(RWKV_ERROR_CTX | RWKV_ERROR_ALLOC, instance, "Failed to allocate instance"); - RWKV_ENSURE_OR_NULL(rwkv_instance_from_file(file_path, *instance.get())); - return rwkv_new_context_impl(instance, n_threads); -} - -struct rwkv_context * rwkv_clone_context(struct rwkv_context * ctx, const uint32_t n_threads) { - struct rwkv_context * clone = rwkv_new_context_impl(ctx->instance, n_threads); - - if (clone) { - clone->print_errors = ctx->print_errors; - } - - return clone; -} - -bool rwkv_gpu_offload_layers(struct rwkv_context * ctx, const uint32_t n_layers) { -#if defined(GGML_USE_CLBLAST) || defined(GGML_USE_CUBLAS) - printf("\nOffloading %u (or fewer) layers...",n_layers); - const auto offload = [&](struct ggml_tensor * tensor) { - // TODO support multi-GPU - tensor->backend = GGML_BACKEND_GPU; - #if defined(GGML_USE_CLBLAST) - ggml_cl_transform_tensor(tensor->data, tensor); - #else - ggml_cuda_transform_tensor(tensor->data, tensor); - #endif - }; - - const size_t n_gpu = std::min(n_layers, ctx->instance->model.header.n_layer); - - if (ctx->gpu_layers < n_gpu) { - for (size_t & i = ctx->gpu_layers; i < n_gpu; i++) { - const struct rwkv_layer & layer = ctx->instance->model.layers[i]; - - // TODO also offload other operations to GPU with ggml_cuda_assign_buffers - offload(layer.att_key); - offload(layer.att_value); - offload(layer.att_receptance); - offload(layer.att_output); - - offload(layer.ffn_key); - offload(layer.ffn_value); - offload(layer.ffn_receptance); - } - - return true; - } -#endif - return false; -} - -void rwkv_set_inputs(const struct rwkv_context * ctx, const float * state_in) { - if (state_in) { - memcpy(ctx->input_state->data, state_in, rwkv_nbytes_old(ctx->input_state)); - } else { - rwkv_init_state(ctx, (float *) ctx->input_state->data); - } -} - -void rwkv_get_outputs(const struct rwkv_context * ctx, float * state_out, float * logits_out) { - if (state_out) { - memcpy(state_out, ctx->output_state->data, rwkv_nbytes_old(ctx->output_state)); - } - - if (logits_out) { - memcpy(logits_out, ctx->logits->data, rwkv_nbytes_old(ctx->logits)); - } -} - -bool rwkv_eval(struct rwkv_context * ctx, const int n_threads, const uint32_t token, const float * state_in, float * state_out, float * logits_out) { - ctx->last_error = RWKV_ERROR_NONE; - - const struct rwkv_file_header & header = ctx->instance->model.header; - const size_t n_vocab = header.n_vocab; - RWKV_CTX_ASSERT_FALSE_MSG(ctx, RWKV_ERROR_ARGS, token < n_vocab, "Token (%" PRId32 ") is out of range (0 .. 
%zu)", token, n_vocab - 1); - - rwkv_set_inputs(ctx, state_in); - ggml_set_i32(ctx->serial_graph.tokens, token); - - // Short circuit computation of logits if nobody actually cares - if (!logits_out) { - ctx->serial_graph.cgraph->n_nodes = ctx->serial_graph.pre_logits_nodes; - ctx->serial_graph.cgraph->n_leafs = ctx->serial_graph.pre_logits_leafs; - } else { - ctx->serial_graph.cgraph->n_nodes = ctx->serial_graph.post_logits_nodes; - ctx->serial_graph.cgraph->n_leafs = ctx->serial_graph.post_logits_leafs; - } - - kcpp_graph_compute_helper(ctx->serial_graph.cgraph.get(),n_threads); - rwkv_get_outputs(ctx, state_out, logits_out); - - return true; -} - -bool rwkv_eval_sequence(struct rwkv_context * ctx, const int n_threads, const uint32_t * sequence, const size_t sequence_len, const float * state_in, float * state_out, float * logits_out) { - ctx->last_error = RWKV_ERROR_NONE; - - const struct rwkv_file_header & header = ctx->instance->model.header; - const size_t n_vocab = header.n_vocab; - const size_t n_embed = header.n_embed; - const size_t n_layer = header.n_layer; - - if (sequence) { - for (size_t i = 0; i < sequence_len; i++) { - const uint32_t token = sequence[i]; - RWKV_CTX_ASSERT_FALSE_MSG(ctx, RWKV_ERROR_ARGS, token < n_vocab, "Token at index %zu (%" PRId32 ") is out of range (0 .. %zu)", i, token, n_vocab - 1); - } - } - - if (ctx->sequence_len != sequence_len) { - // Build new sequence graph - - struct rwkv_future_ctx graph_future_ctx; - const struct rwkv_future_tensor future_tokens = graph_future_ctx.alloc(GGML_TYPE_I32, sequence_len); - - const struct rwkv_model & model = ctx->instance->model; - const struct rwkv_layer & layer = model.layers[0]; - const struct rwkv_layer_state & state = ctx->input_layers[0]; - struct rwkv_future_tensor ffn_xx = state.ffn_xx; - struct rwkv_future_tensor att_xx = state.att_xx; - struct rwkv_future_tensor att_aa = state.att_aa; - struct rwkv_future_tensor att_bb = state.att_bb; - struct rwkv_future_tensor att_pp = state.att_pp; - - const struct rwkv_future_tensor future_graph = rwkv_future_sequence_graph(graph_future_ctx, future_tokens, 1, - model.emb, - model.ln0_weight, model.ln0_bias, - - n_layer, - layer.ln1_weight, layer.ln1_bias, - layer.att_time_mix_k, layer.att_time_mix_v, layer.att_time_mix_r, - layer.att_time_first, layer.att_time_decay, - layer.att_receptance, layer.att_key, layer.att_value, layer.att_output, - att_xx, att_aa, att_bb, att_pp, - - layer.ln2_weight, layer.ln2_bias, - layer.ffn_time_mix_k, layer.ffn_time_mix_r, - layer.ffn_key, layer.ffn_value, layer.ffn_receptance, - ffn_xx, - - model.ln_out_weight, model.ln_out_weight, - model.head - ); - - struct rwkv_graph sequence_graph; - sequence_graph.ctx = graph_future_ctx; - RWKV_ASSERT_FALSE_MSG(RWKV_ERROR_CTX | RWKV_ERROR_ALLOC, sequence_graph.ctx.ctx, "Failed to allocate sequence graph context"); - sequence_graph.tokens = ggml_new_tensor_1d(sequence_graph.ctx.ctx, GGML_TYPE_I32, sequence_len); - sequence_graph.cgraph.reset(new(std::nothrow) struct ggml_cgraph()); - RWKV_ASSERT_FALSE_MSG(RWKV_ERROR_ALLOC, sequence_graph.cgraph, "Failed to allocate sequence graph"); - - RWKV_ASSERT_FALSE(RWKV_ERROR_GRAPH, rwkv_build_sequence_graph( - sequence_graph.ctx.ctx, ctx->instance->model, - sequence_graph.tokens, ctx->input_layers.get(), ctx->output_layers.get(), ctx->logits, - sequence_graph.cgraph.get(), - &sequence_graph.pre_logits_nodes, &sequence_graph.pre_logits_leafs, &sequence_graph.post_logits_nodes, &sequence_graph.post_logits_leafs - )); - - ctx->sequence_len = sequence_len; - 
ctx->sequence_graph = std::move(sequence_graph); - } - - // Allow building the sequence graph without actually evaluating, by specifying sequence = NULL. - if (sequence) { - rwkv_set_inputs(ctx, state_in); - memcpy(ctx->sequence_graph.tokens->data, sequence, sequence_len * sizeof(uint32_t)); - - // Short circuit computation of logits if nobody actually cares - if (!logits_out) { - ctx->sequence_graph.cgraph->n_nodes = ctx->sequence_graph.pre_logits_nodes; - ctx->sequence_graph.cgraph->n_leafs = ctx->sequence_graph.pre_logits_leafs; - } else { - ctx->sequence_graph.cgraph->n_nodes = ctx->sequence_graph.post_logits_nodes; - ctx->sequence_graph.cgraph->n_leafs = ctx->sequence_graph.post_logits_leafs; - } - - kcpp_graph_compute_helper(ctx->sequence_graph.cgraph.get(),n_threads); - rwkv_get_outputs(ctx, state_out, logits_out); - } - - return true; -} - -// Provided for compatibility. -extern "C" RWKV_API uint32_t rwkv_get_state_buffer_element_count(const struct rwkv_context * ctx) { - return rwkv_get_state_len(ctx); -} - -// Provided for compatibility. -extern "C" RWKV_API uint32_t rwkv_get_logits_buffer_element_count(const struct rwkv_context * ctx) { - return rwkv_get_logits_len(ctx); -} - -size_t rwkv_get_n_vocab(const struct rwkv_context * ctx) { - return (size_t) ctx->instance->model.header.n_vocab; -} - -size_t rwkv_get_n_embed(const struct rwkv_context * ctx) { - return (size_t) ctx->instance->model.header.n_embed; -} - -size_t rwkv_get_n_layer(const struct rwkv_context * ctx) { - return (size_t) ctx->instance->model.header.n_layer; -} - -size_t rwkv_get_state_len(const struct rwkv_context * ctx) { - const struct rwkv_file_header & header = ctx->instance->model.header; - return (size_t) header.n_embed * 5 * (size_t) header.n_layer; -} - -size_t rwkv_get_logits_len(const struct rwkv_context * ctx) { - return (size_t) ctx->instance->model.header.n_vocab; -} - -void rwkv_init_state(const struct rwkv_context * ctx, float * state) { - const struct rwkv_file_header & header = ctx->instance->model.header; - const size_t layer_size = (size_t) header.n_embed * 5; - const size_t layer_zero = (size_t) header.n_embed * 4; - const size_t layers_size = (size_t) header.n_layer * layer_size; - - for (size_t start = 0; start < layers_size; start += layer_size) { - for (size_t i = 0; i < layer_zero; i++) { - state[start + i] = 0.0F; - } - - for (size_t i = layer_zero; i < layer_size; i++) { - state[start + i] = -1e30F; - } - } -} - -void rwkv_free(struct rwkv_context * ctx) { - std::unique_ptr rwkv_ctx(ctx); -} - -bool rwkv_quantize_model_file(const char * in_path, const char * out_path, const char * type_name) { - global_last_error = RWKV_ERROR_NONE; - - enum ggml_type out_type = rwkv_type_to_ggml[rwkv_type_from_string(type_name)]; - RWKV_ASSERT_FALSE_MSG(RWKV_ERROR_ARGS | RWKV_ERROR_DATA_TYPE, ggml_is_quantized(out_type), "Unsupported output data type (%s)", rwkv_type_to_string[rwkv_type_from_ggml[out_type]]); - - RWKV_MSG("Loading model from '%s'\n", in_path); - - struct stat in_stat; - - struct rwkv_file in_file(fopen(in_path, "rb")); - RWKV_ASSERT_FALSE_MSG(RWKV_ERROR_FILE | RWKV_ERROR_FILE_OPEN, in_file.file, "Failed to open %s for reading", in_path); - - // Be very careful when changing this code. It must support files larger than 2 GB by using 64-bit functions to the get file length. 
- RWKV_ASSERT_FALSE_MSG(RWKV_ERROR_FILE | RWKV_ERROR_FILE_STAT, fstat(fileno(in_file.file), &in_stat) == 0, "failed to stat file %s", in_path); - - struct rwkv_file out_file(fopen(out_path, "wb")); - RWKV_ASSERT_FALSE_MSG(RWKV_ERROR_FILE | RWKV_ERROR_FILE_OPEN, out_file.file, "Failed to open %s for writing", out_path); - - struct rwkv_file_header in_header; - RWKV_ASSERT_FALSE_MSG(RWKV_ERROR_FILE, rwkv_fread_file_header(in_file.file, in_header), "Invalid file header"); - - enum ggml_type in_type = rwkv_type_to_ggml[in_header.data_type]; - RWKV_ASSERT_FALSE_MSG( - RWKV_ERROR_FILE, - in_type == GGML_TYPE_F32 || in_type == GGML_TYPE_F16, - "Unsupported input data type (%s); needs to be FP32 or FP16", - rwkv_type_to_string[rwkv_type_from_ggml[in_type]] - ); - - struct rwkv_file_header out_header = in_header; - out_header.version = RWKV_FILE_VERSION; - out_header.data_type = rwkv_type_from_ggml[out_type]; - RWKV_ASSERT_FALSE_MSG(RWKV_ERROR_FILE, rwkv_fwrite_file_header(out_file.file, out_header), "Failed to write file header"); - - // Process parameters - size_t orig_total_size = 0; - size_t new_total_size = 0; - - // Required to init the F16 tables - // Doesn't crash if ggml_init fails - ggml_free(ggml_init({ 0, NULL, true })); - - size_t max_in_size = 0; - size_t max_out_size = 0; - size_t max_key_length = 0; - - while (ftell(in_file.file) < in_stat.st_size) { - struct rwkv_tensor_header header; - RWKV_ASSERT_FALSE(RWKV_ERROR_FILE, rwkv_fread_tensor_header_and_skip(in_file.file, header)); - - size_t in_size = header.size(); - - if (in_size > max_in_size) { - max_in_size = in_size; - } - - // f16 type tensors get relocated to out and then converted into f32 at in - if (header.data_type == TYPE_FP16) { - if (in_size > max_out_size) { - max_out_size = in_size; - } - - size_t f32_size = rwkv_future_tensor::size(GGML_TYPE_F32, header.width, header.height); - - if (f32_size > max_in_size) { - max_in_size = f32_size; - } - } - - size_t out_size = rwkv_future_tensor::size(out_type, header.width, header.height); - - if (out_size > max_out_size) { - max_out_size = out_size; - } - - if (header.key_length > max_key_length) { - max_key_length = header.key_length; - } - } - - RWKV_ASSERT_FALSE_MSG(RWKV_ERROR_FILE | RWKV_ERROR_FILE_READ, fseek(in_file.file, sizeof(struct rwkv_file_header), SEEK_SET) == 0, "Failed to seek in file"); - - // This is a histogram of quantized values. If it shows single 1.0, then all 0.0, something went very wrong! - int64_t hist_all[16] {}; - - std::unique_ptr scratch(new(std::nothrow) uint8_t[max_in_size + max_out_size]); - RWKV_ASSERT_FALSE_MSG(RWKV_ERROR_ALLOC, scratch.get(), "Failed to allocate buffer"); - - uint8_t * in_buf = scratch.get(); - uint8_t * out_buf = in_buf + max_in_size; - - struct rwkv_tensor tensor; - struct rwkv_tensor_header & header = tensor.header; - std::string & name = tensor.name; - uint8_t *& data = tensor.data; - - while (ftell(in_file.file) < in_stat.st_size) { - RWKV_ASSERT_FALSE_MSG(RWKV_ERROR_MODEL_PARAMS, rwkv_fread_tensor_header(in_file.file, header), "Failed to read tensor header"); - RWKV_ASSERT_FALSE_MSG(RWKV_ERROR_MODEL_PARAMS, rwkv_fread_string(in_file.file, header.key_length, name), "Failed to read tensor name"); - - const char * name_str = name.c_str(); - RWKV_MSG("%*s - [%5" PRId32 ", %5" PRId32 "], type = %6s ", (int) max_key_length, name_str, header.width, header.height, rwkv_type_to_string[header.data_type]); - - data = header.data_type == TYPE_FP16 ? 
out_buf : in_buf; - size_t orig_size = header.size(), new_size = orig_size; - RWKV_ASSERT_FALSE_MSG(RWKV_ERROR_MODEL_PARAMS, rwkv_fread_data(in_file.file, orig_size, data), "\nFailed to read tensor data of %s", name_str); - - // Quantize only 2D tensors, except embedding and head matrices. - // Embedding and head take not too much space, especially in bigger models; - // but they significantly increase perplexity when quantized. - if ((header.data_type == TYPE_FP32 || header.data_type == TYPE_FP16) && header.dim_count == 2 && name != "emb.weight" && name != "head.weight") { - RWKV_MSG("quantizing... "); - - size_t nelements = (size_t) header.width * (size_t) header.height; - - if (header.data_type == TYPE_FP16) { - ggml_fp16_to_fp32_row((const ggml_fp16_t *) out_buf, (float *) in_buf, nelements); - } - - int64_t hist_cur[16] {}; - new_size = ggml_quantize_chunk(out_type, (const float *) in_buf, out_buf, 0, nelements, hist_cur); - header.data_type = rwkv_type_from_ggml[out_type]; - data = out_buf; - - RWKV_MSG("size = %8.2f MB -> %8.2f MB | hist: ", orig_size / 1024.0 / 1024.0, new_size / 1024.0 / 1024.0); - - for (int i = 0; i < 16; i++) { - RWKV_MSG("%5.3f ", hist_cur[i] / (float) nelements); - hist_all[i] += hist_cur[i]; - } - - RWKV_MSG("\n"); - } else { - RWKV_MSG("size = %8.3f MB\n", orig_size / 1024.0 / 1024.0); - } - - RWKV_ASSERT_FALSE_MSG(RWKV_ERROR_FILE_WRITE, rwkv_fwrite_tensor(out_file.file, tensor), "Failed to write tensor %s", name_str); - orig_total_size += orig_size; - new_total_size += new_size; - } - - RWKV_MSG("original size = %8.2f MB\n", orig_total_size / 1024.0 / 1024.0); - RWKV_MSG("quantized size = %8.2f MB\n", new_total_size / 1024.0 / 1024.0); - RWKV_MSG("compression ratio = %8.2f\n", orig_total_size / float(new_total_size)); - - int64_t sum_all = 0; - - for (int i = 0; i < 16; i++) { - sum_all += hist_all[i]; - } - - RWKV_MSG("hist: "); - - for (int i = 0; i < 16; ++i) { - printf("%5.3f ", hist_all[i] / float(sum_all)); - } - - RWKV_MSG("\n"); - - return true; -} - -const char * rwkv_get_system_info_string(void) { - static std::string s; - - s = ""; - s += "AVX=" + std::to_string(ggml_cpu_has_avx()) + " "; - s += "AVX2=" + std::to_string(ggml_cpu_has_avx2()) + " "; - s += "AVX512=" + std::to_string(ggml_cpu_has_avx512()) + " "; - s += "FMA=" + std::to_string(ggml_cpu_has_fma()) + " "; - s += "NEON=" + std::to_string(ggml_cpu_has_neon()) + " "; - s += "ARM_FMA=" + std::to_string(ggml_cpu_has_arm_fma()) + " "; - s += "F16C=" + std::to_string(ggml_cpu_has_f16c()) + " "; - s += "FP16_VA=" + std::to_string(ggml_cpu_has_fp16_va()) + " "; - s += "WASM_SIMD=" + std::to_string(ggml_cpu_has_wasm_simd()) + " "; - s += "BLAS=" + std::to_string(ggml_cpu_has_blas()) + " "; - s += "SSE3=" + std::to_string(ggml_cpu_has_sse3()) + " "; - s += "VSX=" + std::to_string(ggml_cpu_has_vsx()); - - return s.c_str(); -} \ No newline at end of file diff --git a/spaces/JSanchez79/js-test-facebook-bart-large-mnli/README.md b/spaces/JSanchez79/js-test-facebook-bart-large-mnli/README.md deleted file mode 100644 index bd1442bc6bf20a691ccdec250186b05257fe2b75..0000000000000000000000000000000000000000 --- a/spaces/JSanchez79/js-test-facebook-bart-large-mnli/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Js Test Facebook Bart Large Mnli -emoji: 🐨 -colorFrom: yellow -colorTo: purple -sdk: gradio -sdk_version: 3.21.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git 
a/spaces/Jackflack09/diffuse-custom/diffusers/utils/__init__.py b/spaces/Jackflack09/diffuse-custom/diffusers/utils/__init__.py deleted file mode 100644 index 1c2e2c9abbc61f01e2476538e3eb342803880502..0000000000000000000000000000000000000000 --- a/spaces/Jackflack09/diffuse-custom/diffusers/utils/__init__.py +++ /dev/null @@ -1,89 +0,0 @@ -# Copyright 2022 The HuggingFace Inc. team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - - -import os - -from .deprecation_utils import deprecate -from .import_utils import ( - ENV_VARS_TRUE_AND_AUTO_VALUES, - ENV_VARS_TRUE_VALUES, - USE_JAX, - USE_TF, - USE_TORCH, - DummyObject, - is_accelerate_available, - is_flax_available, - is_inflect_available, - is_modelcards_available, - is_onnx_available, - is_safetensors_available, - is_scipy_available, - is_tf_available, - is_torch_available, - is_torch_version, - is_transformers_available, - is_transformers_version, - is_unidecode_available, - requires_backends, -) -from .logging import get_logger -from .outputs import BaseOutput -from .pil_utils import PIL_INTERPOLATION - - -if is_torch_available(): - from .testing_utils import ( - floats_tensor, - load_hf_numpy, - load_image, - load_numpy, - parse_flag_from_env, - require_torch_gpu, - slow, - torch_all_close, - torch_device, - ) - - -logger = get_logger(__name__) - - -hf_cache_home = os.path.expanduser( - os.getenv("HF_HOME", os.path.join(os.getenv("XDG_CACHE_HOME", "~/.cache"), "huggingface")) -) -default_cache_path = os.path.join(hf_cache_home, "diffusers") - - -CONFIG_NAME = "config.json" -WEIGHTS_NAME = "diffusion_pytorch_model.bin" -FLAX_WEIGHTS_NAME = "diffusion_flax_model.msgpack" -ONNX_WEIGHTS_NAME = "model.onnx" -SAFETENSORS_WEIGHTS_NAME = "diffusion_pytorch_model.safetensors" -ONNX_EXTERNAL_WEIGHTS_NAME = "weights.pb" -HUGGINGFACE_CO_RESOLVE_ENDPOINT = "https://huggingface.co" -DIFFUSERS_CACHE = default_cache_path -DIFFUSERS_DYNAMIC_MODULE_NAME = "diffusers_modules" -HF_MODULES_CACHE = os.getenv("HF_MODULES_CACHE", os.path.join(hf_cache_home, "modules")) - -_COMPATIBLE_STABLE_DIFFUSION_SCHEDULERS = [ - "DDIMScheduler", - "DDPMScheduler", - "PNDMScheduler", - "LMSDiscreteScheduler", - "EulerDiscreteScheduler", - "HeunDiscreteScheduler", - "EulerAncestralDiscreteScheduler", - "DPMSolverMultistepScheduler", -] diff --git a/spaces/JadAssaf/STPIzeimer/README.md b/spaces/JadAssaf/STPIzeimer/README.md deleted file mode 100644 index a293f46f6155bc025cb167155bd8dd32ef624f80..0000000000000000000000000000000000000000 --- a/spaces/JadAssaf/STPIzeimer/README.md +++ /dev/null @@ -1,37 +0,0 @@ ---- -title: STPIzeimer -emoji: 👀 -colorFrom: purple -colorTo: green -sdk: gradio -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, 
purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio` or `streamlit` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. diff --git a/spaces/KPCGD/bingo/src/components/chat-history.tsx b/spaces/KPCGD/bingo/src/components/chat-history.tsx deleted file mode 100644 index feb81de66562edda8f40d3c0cc717202c92b6509..0000000000000000000000000000000000000000 --- a/spaces/KPCGD/bingo/src/components/chat-history.tsx +++ /dev/null @@ -1,48 +0,0 @@ -import { IconEdit, IconTrash, IconMore, IconDownload } from "./ui/icons" - -export function ChatHistory() { - return ( -
-    {/* markup lost: "历史记录" (History) header; one chat entry "无标题的聊天" dated "上午1:42"; action icons: IconEdit, IconTrash, IconMore, IconDownload */}
    - ) -} diff --git a/spaces/Karthikbolla/NEP-Chatbot/app.py b/spaces/Karthikbolla/NEP-Chatbot/app.py deleted file mode 100644 index 8cafe08f130c47fa2ccd1e287593c7e974579afa..0000000000000000000000000000000000000000 --- a/spaces/Karthikbolla/NEP-Chatbot/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/facebook/blenderbot-400M-distill").launch() \ No newline at end of file diff --git a/spaces/Kayson/InstructDiffusion/stable_diffusion/README.md b/spaces/Kayson/InstructDiffusion/stable_diffusion/README.md deleted file mode 100644 index c9e6c3bb13a18fc5fc0f31ab819bf6eccda81bf0..0000000000000000000000000000000000000000 --- a/spaces/Kayson/InstructDiffusion/stable_diffusion/README.md +++ /dev/null @@ -1,215 +0,0 @@ -# Stable Diffusion -*Stable Diffusion was made possible thanks to a collaboration with [Stability AI](https://stability.ai/) and [Runway](https://runwayml.com/) and builds upon our previous work:* - -[**High-Resolution Image Synthesis with Latent Diffusion Models**](https://ommer-lab.com/research/latent-diffusion-models/)
    -[Robin Rombach](https://github.com/rromb)\*, -[Andreas Blattmann](https://github.com/ablattmann)\*, -[Dominik Lorenz](https://github.com/qp-qp)\, -[Patrick Esser](https://github.com/pesser), -[Björn Ommer](https://hci.iwr.uni-heidelberg.de/Staff/bommer)
    -_[CVPR '22 Oral](https://openaccess.thecvf.com/content/CVPR2022/html/Rombach_High-Resolution_Image_Synthesis_With_Latent_Diffusion_Models_CVPR_2022_paper.html) | -[GitHub](https://github.com/CompVis/latent-diffusion) | [arXiv](https://arxiv.org/abs/2112.10752) | [Project page](https://ommer-lab.com/research/latent-diffusion-models/)_ - -![txt2img-stable2](assets/stable-samples/txt2img/merged-0006.png) -[Stable Diffusion](#stable-diffusion-v1) is a latent text-to-image diffusion -model. -Thanks to a generous compute donation from [Stability AI](https://stability.ai/) and support from [LAION](https://laion.ai/), we were able to train a Latent Diffusion Model on 512x512 images from a subset of the [LAION-5B](https://laion.ai/blog/laion-5b/) database. -Similar to Google's [Imagen](https://arxiv.org/abs/2205.11487), -this model uses a frozen CLIP ViT-L/14 text encoder to condition the model on text prompts. -With its 860M UNet and 123M text encoder, the model is relatively lightweight and runs on a GPU with at least 10GB VRAM. -See [this section](#stable-diffusion-v1) below and the [model card](https://huggingface.co/CompVis/stable-diffusion). - - -## Requirements -A suitable [conda](https://conda.io/) environment named `ldm` can be created -and activated with: - -``` -conda env create -f environment.yaml -conda activate ldm -``` - -You can also update an existing [latent diffusion](https://github.com/CompVis/latent-diffusion) environment by running - -``` -conda install pytorch torchvision -c pytorch -pip install transformers==4.19.2 diffusers invisible-watermark -pip install -e . -``` - - -## Stable Diffusion v1 - -Stable Diffusion v1 refers to a specific configuration of the model -architecture that uses a downsampling-factor 8 autoencoder with an 860M UNet -and CLIP ViT-L/14 text encoder for the diffusion model. The model was pretrained on 256x256 images and -then finetuned on 512x512 images. - -*Note: Stable Diffusion v1 is a general text-to-image diffusion model and therefore mirrors biases and (mis-)conceptions that are present -in its training data. -Details on the training procedure and data, as well as the intended use of the model can be found in the corresponding [model card](Stable_Diffusion_v1_Model_Card.md).* - -The weights are available via [the CompVis organization at Hugging Face](https://huggingface.co/CompVis) under [a license which contains specific use-based restrictions to prevent misuse and harm as informed by the model card, but otherwise remains permissive](LICENSE). While commercial use is permitted under the terms of the license, **we do not recommend using the provided weights for services or products without additional safety mechanisms and considerations**, since there are [known limitations and biases](Stable_Diffusion_v1_Model_Card.md#limitations-and-bias) of the weights, and research on safe and ethical deployment of general text-to-image models is an ongoing effort. **The weights are research artifacts and should be treated as such.** - -[The CreativeML OpenRAIL M license](LICENSE) is an [Open RAIL M license](https://www.licenses.ai/blog/2022/8/18/naming-convention-of-responsible-ai-licenses), adapted from the work that [BigScience](https://bigscience.huggingface.co/) and [the RAIL Initiative](https://www.licenses.ai/) are jointly carrying in the area of responsible AI licensing. See also [the article about the BLOOM Open RAIL license](https://bigscience.huggingface.co/blog/the-bigscience-rail-license) on which our license is based. 
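If you prefer to fetch a checkpoint programmatically rather than through the browser, the sketch below shows one way to do it with `huggingface_hub`. It is only a sketch: the repo id and filename are assumptions and should be adjusted to the checkpoint you actually want (see the list in the next section), and you must accept the license on the corresponding model page first.

```py
# make sure you're logged in with `huggingface-cli login`
# sketch only: repo_id and filename are assumptions, adjust them to the checkpoint you need
from huggingface_hub import hf_hub_download

ckpt_path = hf_hub_download(
    repo_id="CompVis/stable-diffusion-v-1-4-original",
    filename="sd-v1-4.ckpt",
)
print(ckpt_path)  # path of the downloaded file inside the local Hugging Face cache
```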
- -### Weights - -We currently provide the following checkpoints: - -- `sd-v1-1.ckpt`: 237k steps at resolution `256x256` on [laion2B-en](https://huggingface.co/datasets/laion/laion2B-en). - 194k steps at resolution `512x512` on [laion-high-resolution](https://huggingface.co/datasets/laion/laion-high-resolution) (170M examples from LAION-5B with resolution `>= 1024x1024`). -- `sd-v1-2.ckpt`: Resumed from `sd-v1-1.ckpt`. - 515k steps at resolution `512x512` on [laion-aesthetics v2 5+](https://laion.ai/blog/laion-aesthetics/) (a subset of laion2B-en with estimated aesthetics score `> 5.0`, and additionally -filtered to images with an original size `>= 512x512`, and an estimated watermark probability `< 0.5`. The watermark estimate is from the [LAION-5B](https://laion.ai/blog/laion-5b/) metadata, the aesthetics score is estimated using the [LAION-Aesthetics Predictor V2](https://github.com/christophschuhmann/improved-aesthetic-predictor)). -- `sd-v1-3.ckpt`: Resumed from `sd-v1-2.ckpt`. 195k steps at resolution `512x512` on "laion-aesthetics v2 5+" and 10\% dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598). -- `sd-v1-4.ckpt`: Resumed from `sd-v1-2.ckpt`. 225k steps at resolution `512x512` on "laion-aesthetics v2 5+" and 10\% dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598). - -Evaluations with different classifier-free guidance scales (1.5, 2.0, 3.0, 4.0, -5.0, 6.0, 7.0, 8.0) and 50 PLMS sampling -steps show the relative improvements of the checkpoints: -![sd evaluation results](assets/v1-variants-scores.jpg) - - - -### Text-to-Image with Stable Diffusion -![txt2img-stable2](assets/stable-samples/txt2img/merged-0005.png) -![txt2img-stable2](assets/stable-samples/txt2img/merged-0007.png) - -Stable Diffusion is a latent diffusion model conditioned on the (non-pooled) text embeddings of a CLIP ViT-L/14 text encoder. -We provide a [reference script for sampling](#reference-sampling-script), but -there also exists a [diffusers integration](#diffusers-integration), which we -expect to see more active community development. - -#### Reference Sampling Script - -We provide a reference sampling script, which incorporates - -- a [Safety Checker Module](https://github.com/CompVis/stable-diffusion/pull/36), - to reduce the probability of explicit outputs, -- an [invisible watermarking](https://github.com/ShieldMnt/invisible-watermark) - of the outputs, to help viewers [identify the images as machine-generated](scripts/tests/test_watermark.py). - -After [obtaining the `stable-diffusion-v1-*-original` weights](#weights), link them -``` -mkdir -p models/ldm/stable-diffusion-v1/ -ln -s models/ldm/stable-diffusion-v1/model.ckpt -``` -and sample with -``` -python scripts/txt2img.py --prompt "a photograph of an astronaut riding a horse" --plms -``` - -By default, this uses a guidance scale of `--scale 7.5`, [Katherine Crowson's implementation](https://github.com/CompVis/latent-diffusion/pull/51) of the [PLMS](https://arxiv.org/abs/2202.09778) sampler, -and renders images of size 512x512 (which it was trained on) in 50 steps. All supported arguments are listed below (type `python scripts/txt2img.py --help`). 
- - -```commandline -usage: txt2img.py [-h] [--prompt [PROMPT]] [--outdir [OUTDIR]] [--skip_grid] [--skip_save] [--ddim_steps DDIM_STEPS] [--plms] [--laion400m] [--fixed_code] [--ddim_eta DDIM_ETA] - [--n_iter N_ITER] [--H H] [--W W] [--C C] [--f F] [--n_samples N_SAMPLES] [--n_rows N_ROWS] [--scale SCALE] [--from-file FROM_FILE] [--config CONFIG] [--ckpt CKPT] - [--seed SEED] [--precision {full,autocast}] - -optional arguments: - -h, --help show this help message and exit - --prompt [PROMPT] the prompt to render - --outdir [OUTDIR] dir to write results to - --skip_grid do not save a grid, only individual samples. Helpful when evaluating lots of samples - --skip_save do not save individual samples. For speed measurements. - --ddim_steps DDIM_STEPS - number of ddim sampling steps - --plms use plms sampling - --laion400m uses the LAION400M model - --fixed_code if enabled, uses the same starting code across samples - --ddim_eta DDIM_ETA ddim eta (eta=0.0 corresponds to deterministic sampling - --n_iter N_ITER sample this often - --H H image height, in pixel space - --W W image width, in pixel space - --C C latent channels - --f F downsampling factor - --n_samples N_SAMPLES - how many samples to produce for each given prompt. A.k.a. batch size - --n_rows N_ROWS rows in the grid (default: n_samples) - --scale SCALE unconditional guidance scale: eps = eps(x, empty) + scale * (eps(x, cond) - eps(x, empty)) - --from-file FROM_FILE - if specified, load prompts from this file - --config CONFIG path to config which constructs model - --ckpt CKPT path to checkpoint of model - --seed SEED the seed (for reproducible sampling) - --precision {full,autocast} - evaluate at this precision -``` -Note: The inference config for all v1 versions is designed to be used with EMA-only checkpoints. -For this reason `use_ema=False` is set in the configuration, otherwise the code will try to switch from -non-EMA to EMA weights. If you want to examine the effect of EMA vs no EMA, we provide "full" checkpoints -which contain both types of weights. For these, `use_ema=False` will load and use the non-EMA weights. - - -#### Diffusers Integration - -A simple way to download and sample Stable Diffusion is by using the [diffusers library](https://github.com/huggingface/diffusers/tree/main#new--stable-diffusion-is-now-fully-compatible-with-diffusers): -```py -# make sure you're logged in with `huggingface-cli login` -from torch import autocast -from diffusers import StableDiffusionPipeline - -pipe = StableDiffusionPipeline.from_pretrained( - "CompVis/stable-diffusion-v1-4", - use_auth_token=True -).to("cuda") - -prompt = "a photo of an astronaut riding a horse on mars" -with autocast("cuda"): - image = pipe(prompt)["sample"][0] - -image.save("astronaut_rides_horse.png") -``` - - -### Image Modification with Stable Diffusion - -By using a diffusion-denoising mechanism as first proposed by [SDEdit](https://arxiv.org/abs/2108.01073), the model can be used for different -tasks such as text-guided image-to-image translation and upscaling. Similar to the txt2img sampling script, -we provide a script to perform image modification with Stable Diffusion. - -The following describes an example where a rough sketch made in [Pinta](https://www.pinta-project.com/) is converted into a detailed artwork. -``` -python scripts/img2img.py --prompt "A fantasy landscape, trending on artstation" --init-img --strength 0.8 -``` -Here, strength is a value between 0.0 and 1.0, that controls the amount of noise that is added to the input image. 
-Values that approach 1.0 allow for lots of variations but will also produce images that are not semantically consistent with the input. See the following example. - -**Input** - -![sketch-in](assets/stable-samples/img2img/sketch-mountains-input.jpg) - -**Outputs** - -![out3](assets/stable-samples/img2img/mountains-3.png) -![out2](assets/stable-samples/img2img/mountains-2.png) - -This procedure can, for example, also be used to upscale samples from the base model. - - -## Comments - -- Our codebase for the diffusion models builds heavily on [OpenAI's ADM codebase](https://github.com/openai/guided-diffusion) -and [https://github.com/lucidrains/denoising-diffusion-pytorch](https://github.com/lucidrains/denoising-diffusion-pytorch). -Thanks for open-sourcing! - -- The implementation of the transformer encoder is from [x-transformers](https://github.com/lucidrains/x-transformers) by [lucidrains](https://github.com/lucidrains?tab=repositories). - - -## BibTeX - -``` -@misc{rombach2021highresolution, - title={High-Resolution Image Synthesis with Latent Diffusion Models}, - author={Robin Rombach and Andreas Blattmann and Dominik Lorenz and Patrick Esser and Björn Ommer}, - year={2021}, - eprint={2112.10752}, - archivePrefix={arXiv}, - primaryClass={cs.CV} -} -``` - - diff --git a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/toolbox/__init__.py b/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/toolbox/__init__.py deleted file mode 100644 index b51164f3537a6b19cb2a00fb44b38855c4ba1c49..0000000000000000000000000000000000000000 --- a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/toolbox/__init__.py +++ /dev/null @@ -1,476 +0,0 @@ -from toolbox.ui import UI -from encoder import inference as encoder -from synthesizer.inference import Synthesizer -from vocoder.wavernn import inference as rnn_vocoder -from vocoder.hifigan import inference as gan_vocoder -from vocoder.fregan import inference as fgan_vocoder -from pathlib import Path -from time import perf_counter as timer -from toolbox.utterance import Utterance -import numpy as np -import traceback -import sys -import torch -import re - -# 默认使用wavernn -vocoder = rnn_vocoder - -# Use this directory structure for your datasets, or modify it to fit your needs -recognized_datasets = [ - "LibriSpeech/dev-clean", - "LibriSpeech/dev-other", - "LibriSpeech/test-clean", - "LibriSpeech/test-other", - "LibriSpeech/train-clean-100", - "LibriSpeech/train-clean-360", - "LibriSpeech/train-other-500", - "LibriTTS/dev-clean", - "LibriTTS/dev-other", - "LibriTTS/test-clean", - "LibriTTS/test-other", - "LibriTTS/train-clean-100", - "LibriTTS/train-clean-360", - "LibriTTS/train-other-500", - "LJSpeech-1.1", - "VoxCeleb1/wav", - "VoxCeleb1/test_wav", - "VoxCeleb2/dev/aac", - "VoxCeleb2/test/aac", - "VCTK-Corpus/wav48", - "aidatatang_200zh/corpus/dev", - "aidatatang_200zh/corpus/test", - "aishell3/test/wav", - "magicdata/train", -] - -#Maximum of generated wavs to keep on memory -MAX_WAVES = 15 - -class Toolbox: - def __init__(self, datasets_root, enc_models_dir, syn_models_dir, voc_models_dir, extractor_models_dir, convertor_models_dir, seed, no_mp3_support, vc_mode): - self.no_mp3_support = no_mp3_support - self.vc_mode = vc_mode - sys.excepthook = self.excepthook - self.datasets_root = datasets_root - self.utterances = set() - self.current_generated = (None, None, None, None) # speaker_name, spec, breaks, wav - - self.synthesizer = None # type: Synthesizer - - # for ppg-based voice conversion - self.extractor = None - self.convertor = None # ppg2mel 
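        # Generated-waveform history: the most recent outputs are kept in memory
        # (bounded by MAX_WAVES) so they can be replayed or exported from the UI.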
- - self.current_wav = None - self.waves_list = [] - self.waves_count = 0 - self.waves_namelist = [] - - # Check for webrtcvad (enables removal of silences in vocoder output) - try: - import webrtcvad - self.trim_silences = True - except: - self.trim_silences = False - - # Initialize the events and the interface - self.ui = UI(vc_mode) - self.style_idx = 0 - self.reset_ui(enc_models_dir, syn_models_dir, voc_models_dir, extractor_models_dir, convertor_models_dir, seed) - self.setup_events() - self.ui.start() - - def excepthook(self, exc_type, exc_value, exc_tb): - traceback.print_exception(exc_type, exc_value, exc_tb) - self.ui.log("Exception: %s" % exc_value) - - def setup_events(self): - # Dataset, speaker and utterance selection - self.ui.browser_load_button.clicked.connect(lambda: self.load_from_browser()) - random_func = lambda level: lambda: self.ui.populate_browser(self.datasets_root, - recognized_datasets, - level) - self.ui.random_dataset_button.clicked.connect(random_func(0)) - self.ui.random_speaker_button.clicked.connect(random_func(1)) - self.ui.random_utterance_button.clicked.connect(random_func(2)) - self.ui.dataset_box.currentIndexChanged.connect(random_func(1)) - self.ui.speaker_box.currentIndexChanged.connect(random_func(2)) - - # Model selection - self.ui.encoder_box.currentIndexChanged.connect(self.init_encoder) - def func(): - self.synthesizer = None - if self.vc_mode: - self.ui.extractor_box.currentIndexChanged.connect(self.init_extractor) - else: - self.ui.synthesizer_box.currentIndexChanged.connect(func) - - self.ui.vocoder_box.currentIndexChanged.connect(self.init_vocoder) - - # Utterance selection - func = lambda: self.load_from_browser(self.ui.browse_file()) - self.ui.browser_browse_button.clicked.connect(func) - func = lambda: self.ui.draw_utterance(self.ui.selected_utterance, "current") - self.ui.utterance_history.currentIndexChanged.connect(func) - func = lambda: self.ui.play(self.ui.selected_utterance.wav, Synthesizer.sample_rate) - self.ui.play_button.clicked.connect(func) - self.ui.stop_button.clicked.connect(self.ui.stop) - self.ui.record_button.clicked.connect(self.record) - - # Source Utterance selection - if self.vc_mode: - func = lambda: self.load_soruce_button(self.ui.selected_utterance) - self.ui.load_soruce_button.clicked.connect(func) - - #Audio - self.ui.setup_audio_devices(Synthesizer.sample_rate) - - #Wav playback & save - func = lambda: self.replay_last_wav() - self.ui.replay_wav_button.clicked.connect(func) - func = lambda: self.export_current_wave() - self.ui.export_wav_button.clicked.connect(func) - self.ui.waves_cb.currentIndexChanged.connect(self.set_current_wav) - - # Generation - self.ui.vocode_button.clicked.connect(self.vocode) - self.ui.random_seed_checkbox.clicked.connect(self.update_seed_textbox) - - if self.vc_mode: - func = lambda: self.convert() or self.vocode() - self.ui.convert_button.clicked.connect(func) - else: - func = lambda: self.synthesize() or self.vocode() - self.ui.generate_button.clicked.connect(func) - self.ui.synthesize_button.clicked.connect(self.synthesize) - - # UMAP legend - self.ui.clear_button.clicked.connect(self.clear_utterances) - - def set_current_wav(self, index): - self.current_wav = self.waves_list[index] - - def export_current_wave(self): - self.ui.save_audio_file(self.current_wav, Synthesizer.sample_rate) - - def replay_last_wav(self): - self.ui.play(self.current_wav, Synthesizer.sample_rate) - - def reset_ui(self, encoder_models_dir, synthesizer_models_dir, vocoder_models_dir, extractor_models_dir, 
convertor_models_dir, seed): - self.ui.populate_browser(self.datasets_root, recognized_datasets, 0, True) - self.ui.populate_models(encoder_models_dir, synthesizer_models_dir, vocoder_models_dir, extractor_models_dir, convertor_models_dir, self.vc_mode) - self.ui.populate_gen_options(seed, self.trim_silences) - - def load_from_browser(self, fpath=None): - if fpath is None: - fpath = Path(self.datasets_root, - self.ui.current_dataset_name, - self.ui.current_speaker_name, - self.ui.current_utterance_name) - name = str(fpath.relative_to(self.datasets_root)) - speaker_name = self.ui.current_dataset_name + '_' + self.ui.current_speaker_name - - # Select the next utterance - if self.ui.auto_next_checkbox.isChecked(): - self.ui.browser_select_next() - elif fpath == "": - return - else: - name = fpath.name - speaker_name = fpath.parent.name - - if fpath.suffix.lower() == ".mp3" and self.no_mp3_support: - self.ui.log("Error: No mp3 file argument was passed but an mp3 file was used") - return - - # Get the wav from the disk. We take the wav with the vocoder/synthesizer format for - # playback, so as to have a fair comparison with the generated audio - wav = Synthesizer.load_preprocess_wav(fpath) - self.ui.log("Loaded %s" % name) - - self.add_real_utterance(wav, name, speaker_name) - - def load_soruce_button(self, utterance: Utterance): - self.selected_source_utterance = utterance - - def record(self): - wav = self.ui.record_one(encoder.sampling_rate, 5) - if wav is None: - return - self.ui.play(wav, encoder.sampling_rate) - - speaker_name = "user01" - name = speaker_name + "_rec_%05d" % np.random.randint(100000) - self.add_real_utterance(wav, name, speaker_name) - - def add_real_utterance(self, wav, name, speaker_name): - # Compute the mel spectrogram - spec = Synthesizer.make_spectrogram(wav) - self.ui.draw_spec(spec, "current") - - # Compute the embedding - if not encoder.is_loaded(): - self.init_encoder() - encoder_wav = encoder.preprocess_wav(wav) - embed, partial_embeds, _ = encoder.embed_utterance(encoder_wav, return_partials=True) - - # Add the utterance - utterance = Utterance(name, speaker_name, wav, spec, embed, partial_embeds, False) - self.utterances.add(utterance) - self.ui.register_utterance(utterance, self.vc_mode) - - # Plot it - self.ui.draw_embed(embed, name, "current") - self.ui.draw_umap_projections(self.utterances) - - def clear_utterances(self): - self.utterances.clear() - self.ui.draw_umap_projections(self.utterances) - - def synthesize(self): - self.ui.log("Generating the mel spectrogram...") - self.ui.set_loading(1) - - # Update the synthesizer random seed - if self.ui.random_seed_checkbox.isChecked(): - seed = int(self.ui.seed_textbox.text()) - self.ui.populate_gen_options(seed, self.trim_silences) - else: - seed = None - - if seed is not None: - torch.manual_seed(seed) - - # Synthesize the spectrogram - if self.synthesizer is None or seed is not None: - self.init_synthesizer() - - texts = self.ui.text_prompt.toPlainText().split("\n") - punctuation = '!,。、,' # punctuate and split/clean text - processed_texts = [] - for text in texts: - for processed_text in re.sub(r'[{}]+'.format(punctuation), '\n', text).split('\n'): - if processed_text: - processed_texts.append(processed_text.strip()) - texts = processed_texts - embed = self.ui.selected_utterance.embed - embeds = [embed] * len(texts) - min_token = int(self.ui.token_slider.value()) - specs = self.synthesizer.synthesize_spectrograms(texts, embeds, style_idx=int(self.ui.style_slider.value()), min_stop_token=min_token, 
steps=int(self.ui.length_slider.value())*200) - breaks = [spec.shape[1] for spec in specs] - spec = np.concatenate(specs, axis=1) - - self.ui.draw_spec(spec, "generated") - self.current_generated = (self.ui.selected_utterance.speaker_name, spec, breaks, None) - self.ui.set_loading(0) - - def vocode(self): - speaker_name, spec, breaks, _ = self.current_generated - assert spec is not None - - # Initialize the vocoder model and make it determinstic, if user provides a seed - if self.ui.random_seed_checkbox.isChecked(): - seed = int(self.ui.seed_textbox.text()) - self.ui.populate_gen_options(seed, self.trim_silences) - else: - seed = None - - if seed is not None: - torch.manual_seed(seed) - - # Synthesize the waveform - if not vocoder.is_loaded() or seed is not None: - self.init_vocoder() - - def vocoder_progress(i, seq_len, b_size, gen_rate): - real_time_factor = (gen_rate / Synthesizer.sample_rate) * 1000 - line = "Waveform generation: %d/%d (batch size: %d, rate: %.1fkHz - %.2fx real time)" \ - % (i * b_size, seq_len * b_size, b_size, gen_rate, real_time_factor) - self.ui.log(line, "overwrite") - self.ui.set_loading(i, seq_len) - if self.ui.current_vocoder_fpath is not None: - self.ui.log("") - wav, sample_rate = vocoder.infer_waveform(spec, progress_callback=vocoder_progress) - else: - self.ui.log("Waveform generation with Griffin-Lim... ") - wav = Synthesizer.griffin_lim(spec) - self.ui.set_loading(0) - self.ui.log(" Done!", "append") - - # Add breaks - b_ends = np.cumsum(np.array(breaks) * Synthesizer.hparams.hop_size) - b_starts = np.concatenate(([0], b_ends[:-1])) - wavs = [wav[start:end] for start, end, in zip(b_starts, b_ends)] - breaks = [np.zeros(int(0.15 * sample_rate))] * len(breaks) - wav = np.concatenate([i for w, b in zip(wavs, breaks) for i in (w, b)]) - - # Trim excessive silences - if self.ui.trim_silences_checkbox.isChecked(): - wav = encoder.preprocess_wav(wav) - - # Play it - wav = wav / np.abs(wav).max() * 0.97 - self.ui.play(wav, sample_rate) - - # Name it (history displayed in combobox) - # TODO better naming for the combobox items? 
- wav_name = str(self.waves_count + 1) - - #Update waves combobox - self.waves_count += 1 - if self.waves_count > MAX_WAVES: - self.waves_list.pop() - self.waves_namelist.pop() - self.waves_list.insert(0, wav) - self.waves_namelist.insert(0, wav_name) - - self.ui.waves_cb.disconnect() - self.ui.waves_cb_model.setStringList(self.waves_namelist) - self.ui.waves_cb.setCurrentIndex(0) - self.ui.waves_cb.currentIndexChanged.connect(self.set_current_wav) - - # Update current wav - self.set_current_wav(0) - - #Enable replay and save buttons: - self.ui.replay_wav_button.setDisabled(False) - self.ui.export_wav_button.setDisabled(False) - - # Compute the embedding - # TODO: this is problematic with different sampling rates, gotta fix it - if not encoder.is_loaded(): - self.init_encoder() - encoder_wav = encoder.preprocess_wav(wav) - embed, partial_embeds, _ = encoder.embed_utterance(encoder_wav, return_partials=True) - - # Add the utterance - name = speaker_name + "_gen_%05d" % np.random.randint(100000) - utterance = Utterance(name, speaker_name, wav, spec, embed, partial_embeds, True) - self.utterances.add(utterance) - - # Plot it - self.ui.draw_embed(embed, name, "generated") - self.ui.draw_umap_projections(self.utterances) - - def convert(self): - self.ui.log("Extract PPG and Converting...") - self.ui.set_loading(1) - - # Init - if self.convertor is None: - self.init_convertor() - if self.extractor is None: - self.init_extractor() - - src_wav = self.selected_source_utterance.wav - - # Compute the ppg - if not self.extractor is None: - ppg = self.extractor.extract_from_wav(src_wav) - - device = torch.device("cuda" if torch.cuda.is_available() else "cpu") - ref_wav = self.ui.selected_utterance.wav - # Import necessary dependency of Voice Conversion - from utils.f0_utils import compute_f0, f02lf0, compute_mean_std, get_converted_lf0uv - ref_lf0_mean, ref_lf0_std = compute_mean_std(f02lf0(compute_f0(ref_wav))) - lf0_uv = get_converted_lf0uv(src_wav, ref_lf0_mean, ref_lf0_std, convert=True) - min_len = min(ppg.shape[1], len(lf0_uv)) - ppg = ppg[:, :min_len] - lf0_uv = lf0_uv[:min_len] - _, mel_pred, att_ws = self.convertor.inference( - ppg, - logf0_uv=torch.from_numpy(lf0_uv).unsqueeze(0).float().to(device), - spembs=torch.from_numpy(self.ui.selected_utterance.embed).unsqueeze(0).to(device), - ) - mel_pred= mel_pred.transpose(0, 1) - breaks = [mel_pred.shape[1]] - mel_pred= mel_pred.detach().cpu().numpy() - self.ui.draw_spec(mel_pred, "generated") - self.current_generated = (self.ui.selected_utterance.speaker_name, mel_pred, breaks, None) - self.ui.set_loading(0) - - def init_extractor(self): - if self.ui.current_extractor_fpath is None: - return - model_fpath = self.ui.current_extractor_fpath - self.ui.log("Loading the extractor %s... " % model_fpath) - self.ui.set_loading(1) - start = timer() - import ppg_extractor as extractor - self.extractor = extractor.load_model(model_fpath) - self.ui.log("Done (%dms)." % int(1000 * (timer() - start)), "append") - self.ui.set_loading(0) - - def init_convertor(self): - if self.ui.current_convertor_fpath is None: - return - model_fpath = self.ui.current_convertor_fpath - self.ui.log("Loading the convertor %s... " % model_fpath) - self.ui.set_loading(1) - start = timer() - import ppg2mel as convertor - self.convertor = convertor.load_model( model_fpath) - self.ui.log("Done (%dms)." % int(1000 * (timer() - start)), "append") - self.ui.set_loading(0) - - def init_encoder(self): - model_fpath = self.ui.current_encoder_fpath - - self.ui.log("Loading the encoder %s... 
" % model_fpath) - self.ui.set_loading(1) - start = timer() - encoder.load_model(model_fpath) - self.ui.log("Done (%dms)." % int(1000 * (timer() - start)), "append") - self.ui.set_loading(0) - - def init_synthesizer(self): - model_fpath = self.ui.current_synthesizer_fpath - - self.ui.log("Loading the synthesizer %s... " % model_fpath) - self.ui.set_loading(1) - start = timer() - self.synthesizer = Synthesizer(model_fpath) - self.ui.log("Done (%dms)." % int(1000 * (timer() - start)), "append") - self.ui.set_loading(0) - - def init_vocoder(self): - - global vocoder - model_fpath = self.ui.current_vocoder_fpath - # Case of Griffin-lim - if model_fpath is None: - return - # Sekect vocoder based on model name - model_config_fpath = None - if model_fpath.name is not None and model_fpath.name.find("hifigan") > -1: - vocoder = gan_vocoder - self.ui.log("set hifigan as vocoder") - # search a config file - model_config_fpaths = list(model_fpath.parent.rglob("*.json")) - if self.vc_mode and self.ui.current_extractor_fpath is None: - return - if len(model_config_fpaths) > 0: - model_config_fpath = model_config_fpaths[0] - elif model_fpath.name is not None and model_fpath.name.find("fregan") > -1: - vocoder = fgan_vocoder - self.ui.log("set fregan as vocoder") - # search a config file - model_config_fpaths = list(model_fpath.parent.rglob("*.json")) - if self.vc_mode and self.ui.current_extractor_fpath is None: - return - if len(model_config_fpaths) > 0: - model_config_fpath = model_config_fpaths[0] - else: - vocoder = rnn_vocoder - self.ui.log("set wavernn as vocoder") - - self.ui.log("Loading the vocoder %s... " % model_fpath) - self.ui.set_loading(1) - start = timer() - vocoder.load_model(model_fpath, model_config_fpath) - self.ui.log("Done (%dms)." % int(1000 * (timer() - start)), "append") - self.ui.set_loading(0) - - def update_seed_textbox(self): - self.ui.update_seed_textbox() diff --git a/spaces/Kevin676/ChatGPT-with-Voice-Cloning/app.py b/spaces/Kevin676/ChatGPT-with-Voice-Cloning/app.py deleted file mode 100644 index 820ca71d93eaeae3f0c70fed917deb85d9200849..0000000000000000000000000000000000000000 --- a/spaces/Kevin676/ChatGPT-with-Voice-Cloning/app.py +++ /dev/null @@ -1,98 +0,0 @@ -from TTS.api import TTS -tts = TTS(model_name="tts_models/multilingual/multi-dataset/your_tts", progress_bar=False, gpu=True) -from scipy.io import wavfile -import noisereduce as nr -import whisper -model = whisper.load_model("small") -import gradio as gr -import openai - -mes1 = [ - {"role": "system", "content": "You are a TOEFL examiner. Help me improve my oral Englsih and give me feedback. Replace the Arabic numerals with the corresponding English words in your response."} -] - -mes2 = [ - {"role": "system", "content": "You are a mental health therapist. Your name is Tina. Replace the Arabic numerals with the corresponding English words in your response."} -] - -mes3 = [ - {"role": "system", "content": "You are my personal assistant. Your name is Alice. 
Replace the Arabic numerals with the corresponding English words in your response."} -] - -res = [] - -def transcribe(apikey, upload, audio, choice1): - - openai.api_key = apikey - - # time.sleep(3) - # load audio and pad/trim it to fit 30 seconds - audio = whisper.load_audio(audio) - audio = whisper.pad_or_trim(audio) - - # make log-Mel spectrogram and move to the same device as the model - mel = whisper.log_mel_spectrogram(audio).to(model.device) - - # detect the spoken language - _, probs = model.detect_language(mel) - print(f"Detected language: {max(probs, key=probs.get)}") - - # decode the audio - options = whisper.DecodingOptions() - result = whisper.decode(model, mel, options) - res.append(result.text) - - if choice1 == "TOEFL": - messages = mes1 - elif choice1 == "Therapist": - messages = mes2 - elif choice1 == "Alice": - messages = mes3 - - # chatgpt - n = len(res) - content = res[n-1] - messages.append({"role": "user", "content": content}) - - completion = openai.ChatCompletion.create( - model = "gpt-3.5-turbo", - messages = messages - ) - - chat_response = completion.choices[0].message.content - - messages.append({"role": "assistant", "content": chat_response}) - - tts.tts_to_file(chat_response, speaker_wav = upload, language="en", file_path="output.wav") - - audio_in = "output.wav" - - rate, data = wavfile.read(audio_in) - - reduced_noise = nr.reduce_noise(y=data, sr=rate, prop_decrease= 0.85, stationary=True) - #reduced_noise = nr.reduce_noise(y = data, sr=rate, prop_decrease= 0.85) - #reduced_noise = nr.reduce_noise(y = data, sr=rate, thresh_n_mult_nonstationary=2, stationary=False) - - wavfile.write("audio1.wav", rate, reduced_noise) - - return [result.text, chat_response, "audio1.wav"] - -output_1 = gr.Textbox(label="Speech to Text") -output_2 = gr.Textbox(label="ChatGPT Output") -output_3 = gr.Audio(label="Audio") - -gr.Interface( - title = '🥳💬💕 - TalktoAI,随时随地,谈天说地!', - theme="huggingface", - description = "🤖 - 让有人文关怀的AI造福每一个人!AI向善,文明璀璨!TalktoAI,Enable the future!", - fn=transcribe, - inputs=[ - gr.Textbox(lines=1, label = "请填写您的OpenAI_API_key"), - gr.inputs.Audio(source="upload", label = "请上传您喜欢的声音", type="filepath"), - gr.inputs.Audio(source="microphone", type="filepath"), - gr.Radio(["TOEFL", "Therapist", "Alice"], label="TOEFL Examiner, Therapist Tina, or Assistant Alice?"), - ], - outputs=[ - output_1, output_2, output_3 - ], - ).launch() \ No newline at end of file diff --git a/spaces/Kiyo-umm/Linaqruf-pastel-anime-xl-lora/README.md b/spaces/Kiyo-umm/Linaqruf-pastel-anime-xl-lora/README.md deleted file mode 100644 index c8c59d13a691516c95530319c27a37dfbf083edd..0000000000000000000000000000000000000000 --- a/spaces/Kiyo-umm/Linaqruf-pastel-anime-xl-lora/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Linaqruf Pastel Anime Xl Lora -emoji: 📚 -colorFrom: red -colorTo: green -sdk: gradio -sdk_version: 3.50.2 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/L1211/New_space1/README.md b/spaces/L1211/New_space1/README.md deleted file mode 100644 index 2a91d1c96344643a9e7698c7303558e51bd6ba95..0000000000000000000000000000000000000000 --- a/spaces/L1211/New_space1/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: New Space1 -emoji: ⚡ -colorFrom: pink -colorTo: pink -sdk: gradio -sdk_version: 3.18.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git 
a/spaces/Laihiujin/OneFormer/oneformer/modeling/transformer_decoder/oneformer_transformer_decoder.py b/spaces/Laihiujin/OneFormer/oneformer/modeling/transformer_decoder/oneformer_transformer_decoder.py deleted file mode 100644 index a06d586f70131c86604ee0113993b99effaba340..0000000000000000000000000000000000000000 --- a/spaces/Laihiujin/OneFormer/oneformer/modeling/transformer_decoder/oneformer_transformer_decoder.py +++ /dev/null @@ -1,528 +0,0 @@ -# ------------------------------------------------------------------------------ -# Reference: https://github.com/facebookresearch/Mask2Former/blob/main/mask2former/modeling/transformer_decoder/mask2former_transformer_decoder.py -# Modified by Jitesh Jain (https://github.com/praeclarumjj3) -# ------------------------------------------------------------------------------ - -import logging -import fvcore.nn.weight_init as weight_init -from typing import Optional -import torch -from torch import nn, Tensor -from torch.nn import functional as F - -from detectron2.config import configurable -from detectron2.layers import Conv2d - -from .position_encoding import PositionEmbeddingSine -from .transformer import Transformer - -from detectron2.utils.registry import Registry - - -TRANSFORMER_DECODER_REGISTRY = Registry("TRANSFORMER_MODULE") -TRANSFORMER_DECODER_REGISTRY.__doc__ = """ -Registry for transformer module in OneFormer. -""" - - -def build_transformer_decoder(cfg, in_channels, mask_classification=True): - """ - Build a instance embedding branch from `cfg.MODEL.INS_EMBED_HEAD.NAME`. - """ - name = cfg.MODEL.ONE_FORMER.TRANSFORMER_DECODER_NAME - return TRANSFORMER_DECODER_REGISTRY.get(name)(cfg, in_channels, mask_classification) - - -class SelfAttentionLayer(nn.Module): - - def __init__(self, d_model, nhead, dropout=0.0, - activation="relu", normalize_before=False): - super().__init__() - self.self_attn = nn.MultiheadAttention(d_model, nhead, dropout=dropout) - - self.norm = nn.LayerNorm(d_model) - self.dropout = nn.Dropout(dropout) - - self.activation = _get_activation_fn(activation) - self.normalize_before = normalize_before - - self._reset_parameters() - - def _reset_parameters(self): - for p in self.parameters(): - if p.dim() > 1: - nn.init.xavier_uniform_(p) - - def with_pos_embed(self, tensor, pos: Optional[Tensor]): - return tensor if pos is None else tensor + pos - - def forward_post(self, tgt, - tgt_mask: Optional[Tensor] = None, - tgt_key_padding_mask: Optional[Tensor] = None, - query_pos: Optional[Tensor] = None): - q = k = self.with_pos_embed(tgt, query_pos) - tgt2 = self.self_attn(q, k, value=tgt, attn_mask=tgt_mask, - key_padding_mask=tgt_key_padding_mask)[0] - tgt = tgt + self.dropout(tgt2) - tgt = self.norm(tgt) - - return tgt - - def forward_pre(self, tgt, - tgt_mask: Optional[Tensor] = None, - tgt_key_padding_mask: Optional[Tensor] = None, - query_pos: Optional[Tensor] = None): - tgt2 = self.norm(tgt) - q = k = self.with_pos_embed(tgt2, query_pos) - tgt2 = self.self_attn(q, k, value=tgt2, attn_mask=tgt_mask, - key_padding_mask=tgt_key_padding_mask)[0] - tgt = tgt + self.dropout(tgt2) - - return tgt - - def forward(self, tgt, - tgt_mask: Optional[Tensor] = None, - tgt_key_padding_mask: Optional[Tensor] = None, - query_pos: Optional[Tensor] = None): - if self.normalize_before: - return self.forward_pre(tgt, tgt_mask, - tgt_key_padding_mask, query_pos) - return self.forward_post(tgt, tgt_mask, - tgt_key_padding_mask, query_pos) - - -class CrossAttentionLayer(nn.Module): - - def __init__(self, d_model, nhead, dropout=0.0, - 
activation="relu", normalize_before=False): - super().__init__() - self.multihead_attn = nn.MultiheadAttention(d_model, nhead, dropout=dropout) - - self.norm = nn.LayerNorm(d_model) - self.dropout = nn.Dropout(dropout) - - self.activation = _get_activation_fn(activation) - self.normalize_before = normalize_before - - self._reset_parameters() - - def _reset_parameters(self): - for p in self.parameters(): - if p.dim() > 1: - nn.init.xavier_uniform_(p) - - def with_pos_embed(self, tensor, pos: Optional[Tensor]): - return tensor if pos is None else tensor + pos - - def forward_post(self, tgt, memory, - memory_mask: Optional[Tensor] = None, - memory_key_padding_mask: Optional[Tensor] = None, - pos: Optional[Tensor] = None, - query_pos: Optional[Tensor] = None): - tgt2 = self.multihead_attn(query=self.with_pos_embed(tgt, query_pos), - key=self.with_pos_embed(memory, pos), - value=memory, attn_mask=memory_mask, - key_padding_mask=memory_key_padding_mask)[0] - tgt = tgt + self.dropout(tgt2) - tgt = self.norm(tgt) - - return tgt - - def forward_pre(self, tgt, memory, - memory_mask: Optional[Tensor] = None, - memory_key_padding_mask: Optional[Tensor] = None, - pos: Optional[Tensor] = None, - query_pos: Optional[Tensor] = None): - tgt2 = self.norm(tgt) - tgt2 = self.multihead_attn(query=self.with_pos_embed(tgt2, query_pos), - key=self.with_pos_embed(memory, pos), - value=memory, attn_mask=memory_mask, - key_padding_mask=memory_key_padding_mask)[0] - tgt = tgt + self.dropout(tgt2) - - return tgt - - def forward(self, tgt, memory, - memory_mask: Optional[Tensor] = None, - memory_key_padding_mask: Optional[Tensor] = None, - pos: Optional[Tensor] = None, - query_pos: Optional[Tensor] = None): - if self.normalize_before: - return self.forward_pre(tgt, memory, memory_mask, - memory_key_padding_mask, pos, query_pos) - return self.forward_post(tgt, memory, memory_mask, - memory_key_padding_mask, pos, query_pos) - - -class FFNLayer(nn.Module): - - def __init__(self, d_model, dim_feedforward=2048, dropout=0.0, - activation="relu", normalize_before=False): - super().__init__() - # Implementation of Feedforward model - self.linear1 = nn.Linear(d_model, dim_feedforward) - self.dropout = nn.Dropout(dropout) - self.linear2 = nn.Linear(dim_feedforward, d_model) - - self.norm = nn.LayerNorm(d_model) - - self.activation = _get_activation_fn(activation) - self.normalize_before = normalize_before - - self._reset_parameters() - - def _reset_parameters(self): - for p in self.parameters(): - if p.dim() > 1: - nn.init.xavier_uniform_(p) - - def with_pos_embed(self, tensor, pos: Optional[Tensor]): - return tensor if pos is None else tensor + pos - - def forward_post(self, tgt): - tgt2 = self.linear2(self.dropout(self.activation(self.linear1(tgt)))) - tgt = tgt + self.dropout(tgt2) - tgt = self.norm(tgt) - return tgt - - def forward_pre(self, tgt): - tgt2 = self.norm(tgt) - tgt2 = self.linear2(self.dropout(self.activation(self.linear1(tgt2)))) - tgt = tgt + self.dropout(tgt2) - return tgt - - def forward(self, tgt): - if self.normalize_before: - return self.forward_pre(tgt) - return self.forward_post(tgt) - - -def _get_activation_fn(activation): - """Return an activation function given a string""" - if activation == "relu": - return F.relu - if activation == "gelu": - return F.gelu - if activation == "glu": - return F.glu - raise RuntimeError(F"activation should be relu/gelu, not {activation}.") - - -class MLP(nn.Module): - """ Very simple multi-layer perceptron (also called FFN)""" - - def __init__(self, input_dim, 
hidden_dim, output_dim, num_layers): - super().__init__() - self.num_layers = num_layers - h = [hidden_dim] * (num_layers - 1) - self.layers = nn.ModuleList(nn.Linear(n, k) for n, k in zip([input_dim] + h, h + [output_dim])) - - def forward(self, x): - for i, layer in enumerate(self.layers): - x = F.relu(layer(x)) if i < self.num_layers - 1 else layer(x) - return x - - -@TRANSFORMER_DECODER_REGISTRY.register() -class ContrastiveMultiScaleMaskedTransformerDecoder(nn.Module): - - _version = 2 - - def _load_from_state_dict( - self, state_dict, prefix, local_metadata, strict, missing_keys, unexpected_keys, error_msgs - ): - version = local_metadata.get("version", None) - if version is None or version < 2: - # Do not warn if train from scratch - scratch = True - logger = logging.getLogger(__name__) - for k in list(state_dict.keys()): - newk = k - if "static_query" in k: - newk = k.replace("static_query", "query_feat") - if newk != k: - state_dict[newk] = state_dict[k] - del state_dict[k] - scratch = False - - if not scratch: - logger.warning( - f"Weight format of {self.__class__.__name__} have changed! " - "Please upgrade your models. Applying automatic conversion now ..." - ) - - @configurable - def __init__( - self, - in_channels, - mask_classification=True, - *, - num_classes: int, - hidden_dim: int, - num_queries: int, - nheads: int, - dropout: float, - dim_feedforward: int, - enc_layers: int, - is_train: bool, - dec_layers: int, - class_dec_layers: int, - pre_norm: bool, - mask_dim: int, - enforce_input_project: bool, - use_task_norm: bool, - ): - """ - NOTE: this interface is experimental. - Args: - in_channels: channels of the input features - mask_classification: whether to add mask classifier or not - num_classes: number of classes - hidden_dim: Transformer feature dimension - num_queries: number of queries - nheads: number of heads - dim_feedforward: feature dimension in feedforward network - enc_layers: number of Transformer encoder layers - dec_layers: number of Transformer decoder layers - pre_norm: whether to use pre-LayerNorm or not - mask_dim: mask feature dimension - enforce_input_project: add input project 1x1 conv even if input - channels and hidden dim is identical - """ - super().__init__() - - assert mask_classification, "Only support mask classification model" - self.mask_classification = mask_classification - self.is_train = is_train - self.use_task_norm = use_task_norm - - # positional encoding - N_steps = hidden_dim // 2 - self.pe_layer = PositionEmbeddingSine(N_steps, normalize=True) - - self.class_transformer = Transformer( - d_model=hidden_dim, - dropout=dropout, - nhead=nheads, - dim_feedforward=dim_feedforward, - num_encoder_layers=enc_layers, - num_decoder_layers=class_dec_layers, - normalize_before=pre_norm, - return_intermediate_dec=False, - ) - - # define Transformer decoder here - self.num_heads = nheads - self.num_layers = dec_layers - self.transformer_self_attention_layers = nn.ModuleList() - self.transformer_cross_attention_layers = nn.ModuleList() - self.transformer_ffn_layers = nn.ModuleList() - - for _ in range(self.num_layers): - self.transformer_self_attention_layers.append( - SelfAttentionLayer( - d_model=hidden_dim, - nhead=nheads, - dropout=0.0, - normalize_before=pre_norm, - ) - ) - - self.transformer_cross_attention_layers.append( - CrossAttentionLayer( - d_model=hidden_dim, - nhead=nheads, - dropout=0.0, - normalize_before=pre_norm, - ) - ) - - self.transformer_ffn_layers.append( - FFNLayer( - d_model=hidden_dim, - 
dim_feedforward=dim_feedforward, - dropout=0.0, - normalize_before=pre_norm, - ) - ) - - self.decoder_norm = nn.LayerNorm(hidden_dim) - - self.num_queries = num_queries - # learnable query p.e. - self.query_embed = nn.Embedding(num_queries, hidden_dim) - - # level embedding (we always use 3 scales) - self.num_feature_levels = 3 - self.level_embed = nn.Embedding(self.num_feature_levels, hidden_dim) - self.input_proj = nn.ModuleList() - for _ in range(self.num_feature_levels): - if in_channels != hidden_dim or enforce_input_project: - self.input_proj.append(Conv2d(in_channels, hidden_dim, kernel_size=1)) - weight_init.c2_xavier_fill(self.input_proj[-1]) - else: - self.input_proj.append(nn.Sequential()) - - self.class_input_proj = Conv2d(in_channels, hidden_dim, kernel_size=1) - weight_init.c2_xavier_fill(self.class_input_proj) - - # output FFNs - if self.mask_classification: - self.class_embed = nn.Linear(hidden_dim, num_classes + 1) - self.mask_embed = MLP(hidden_dim, hidden_dim, mask_dim, 3) - - @classmethod - def from_config(cls, cfg, in_channels, mask_classification): - ret = {} - ret["in_channels"] = in_channels - ret["mask_classification"] = mask_classification - - ret["num_classes"] = cfg.MODEL.SEM_SEG_HEAD.NUM_CLASSES - ret["hidden_dim"] = cfg.MODEL.ONE_FORMER.HIDDEN_DIM - ret["num_queries"] = cfg.MODEL.ONE_FORMER.NUM_OBJECT_QUERIES - # Transformer parameters: - ret["nheads"] = cfg.MODEL.ONE_FORMER.NHEADS - ret["dim_feedforward"] = cfg.MODEL.ONE_FORMER.DIM_FEEDFORWARD - - # NOTE: because we add learnable query features which requires supervision, - # we add minus 1 to decoder layers to be consistent with our loss - # implementation: that is, number of auxiliary losses is always - # equal to number of decoder layers. With learnable query features, the number of - # auxiliary losses equals number of decoders plus 1. 
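        # Example: with cfg.MODEL.ONE_FORMER.DEC_LAYERS = 10 the module builds 9 decoder layers,
        # and forward() returns 10 prediction sets (one from the learnable queries plus one per
        # layer), i.e. 9 auxiliary outputs and 1 final output.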
- assert cfg.MODEL.ONE_FORMER.DEC_LAYERS >= 1 - ret["dec_layers"] = cfg.MODEL.ONE_FORMER.DEC_LAYERS - 1 - ret["class_dec_layers"] = cfg.MODEL.ONE_FORMER.CLASS_DEC_LAYERS - ret["enc_layers"] = cfg.MODEL.ONE_FORMER.ENC_LAYERS - ret["dropout"] = cfg.MODEL.ONE_FORMER.DROPOUT - ret["pre_norm"] = cfg.MODEL.ONE_FORMER.PRE_NORM - ret["enforce_input_project"] = cfg.MODEL.ONE_FORMER.ENFORCE_INPUT_PROJ - ret["is_train"] = cfg.MODEL.IS_TRAIN - ret["mask_dim"] = cfg.MODEL.SEM_SEG_HEAD.MASK_DIM - ret["use_task_norm"] = cfg.MODEL.ONE_FORMER.USE_TASK_NORM - - return ret - - def forward(self, x, mask_features, tasks, mask = None): - # x is a list of multi-scale feature - assert len(x) == self.num_feature_levels - src = [] - pos = [] - size_list = [] - - # disable mask, it does not affect performance - del mask - - for i in range(self.num_feature_levels): - size_list.append(x[i].shape[-2:]) - pos.append(self.pe_layer(x[i], None).flatten(2)) - src.append(self.input_proj[i](x[i]).flatten(2) + self.level_embed.weight[i][None, :, None]) - - # flatten NxCxHxW to HWxNxC - pos[-1] = pos[-1].permute(2, 0, 1) - src[-1] = src[-1].permute(2, 0, 1) - - _, bs, _ = src[0].shape - - # QxNxC - query_embed = self.query_embed.weight.unsqueeze(1).repeat(1, bs, 1) - tasks = tasks.unsqueeze(0) - if self.use_task_norm: - tasks = self.decoder_norm(tasks) - - feats = self.pe_layer(mask_features, None) - - out_t, _ = self.class_transformer(feats, None, - self.query_embed.weight[:-1], - self.class_input_proj(mask_features), - tasks if self.use_task_norm else None) - out_t = out_t[0].permute(1, 0, 2) - - out = torch.cat([out_t, tasks], dim=0) - - output = out.clone() - - predictions_class = [] - predictions_mask = [] - - # prediction heads on learnable query features - outputs_class, outputs_mask, attn_mask = self.forward_prediction_heads(output, mask_features, attn_mask_target_size=size_list[0], i=0) - predictions_class.append(outputs_class) - predictions_mask.append(outputs_mask) - - for i in range(self.num_layers): - level_index = i % self.num_feature_levels - attn_mask[torch.where(attn_mask.sum(-1) == attn_mask.shape[-1])] = False - # attention: cross-attention first - output = self.transformer_cross_attention_layers[i]( - output, src[level_index], - memory_mask=attn_mask, - memory_key_padding_mask=None, # here we do not apply masking on padded region - pos=pos[level_index], query_pos=query_embed - ) - - output = self.transformer_self_attention_layers[i]( - output, tgt_mask=None, - tgt_key_padding_mask=None, - query_pos=query_embed - ) - - # FFN - output = self.transformer_ffn_layers[i]( - output - ) - - outputs_class, outputs_mask, attn_mask = self.forward_prediction_heads(output, mask_features, attn_mask_target_size=size_list[(i + 1) % self.num_feature_levels], i=i+1) - predictions_class.append(outputs_class) - predictions_mask.append(outputs_mask) - - assert len(predictions_class) == self.num_layers + 1 - if self.is_train: - query_class = out.permute(1, 0, 2) - else: - query_class = None - out = { - 'contrastive_logits': query_class, - 'pred_logits': predictions_class[-1], - 'pred_masks': predictions_mask[-1], - 'aux_outputs': self._set_aux_loss( - predictions_class if self.mask_classification else None, - predictions_mask, - ) - } - - return out - - def forward_prediction_heads(self, output, mask_features, attn_mask_target_size, i): - decoder_output = self.decoder_norm(output) - decoder_output = decoder_output.transpose(0, 1) - outputs_class = self.class_embed(decoder_output) - mask_embed = self.mask_embed(decoder_output) - 
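        # mask_embed: [B, Q, C], mask_features: [B, C, H, W] -> per-query masks of shape [B, Q, H, W]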
outputs_mask = torch.einsum("bqc,bchw->bqhw", mask_embed, mask_features) - - # NOTE: prediction is of higher-resolution - # [B, Q, H, W] -> [B, Q, H*W] -> [B, h, Q, H*W] -> [B*h, Q, HW] - attn_mask = F.interpolate(outputs_mask, size=attn_mask_target_size, mode="bilinear", align_corners=False) - - # save_attn_masks(attn_mask.sigmoid() < 0.5, fname=f'demo/maps/{i}_pre_bool') - - # must use bool type - # If a BoolTensor is provided, positions with ``True`` are not allowed to attend while ``False`` values will be unchanged. - attn_mask = (attn_mask.sigmoid().flatten(2).unsqueeze(1).repeat(1, self.num_heads, 1, 1).flatten(0, 1) < 0.5).bool() - attn_mask = attn_mask.detach() - - return outputs_class, outputs_mask, attn_mask - - @torch.jit.unused - def _set_aux_loss(self, outputs_class, outputs_seg_masks): - # this is a workaround to make torchscript happy, as torchscript - # doesn't support dictionary with non-homogeneous values, such - # as a dict having both a Tensor and a list. - if self.mask_classification: - aux_list = [ - {"pred_logits": a, "pred_masks": b} - for a, b in zip(outputs_class[:-1], outputs_seg_masks[:-1]) - ] - else: - aux_list = [{"pred_masks": b} for b, in outputs_seg_masks[:-1]] - - return aux_list \ No newline at end of file diff --git a/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/infer/infer_libs/rmvpe.py b/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/infer/infer_libs/rmvpe.py deleted file mode 100644 index 08dd79b76e505a9c8bee554da32c75772f560006..0000000000000000000000000000000000000000 --- a/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/infer/infer_libs/rmvpe.py +++ /dev/null @@ -1,724 +0,0 @@ -import os - -import numpy as np -import torch -try: - #Fix "Torch not compiled with CUDA enabled" - import intel_extension_for_pytorch as ipex # pylint: disable=import-error, unused-import - if torch.xpu.is_available(): - from lib.infer.modules.ipex import ipex_init - ipex_init() -except Exception: - pass -import torch.nn as nn -import torch.nn.functional as F -from librosa.util import normalize, pad_center, tiny -from scipy.signal import get_window - -import logging - -logger = logging.getLogger(__name__) - - -###stft codes from https://github.com/pseeth/torch-stft/blob/master/torch_stft/util.py -def window_sumsquare( - window, - n_frames, - hop_length=200, - win_length=800, - n_fft=800, - dtype=np.float32, - norm=None, -): - """ - # from librosa 0.6 - Compute the sum-square envelope of a window function at a given hop length. - This is used to estimate modulation effects induced by windowing - observations in short-time fourier transforms. - Parameters - ---------- - window : string, tuple, number, callable, or list-like - Window specification, as in `get_window` - n_frames : int > 0 - The number of analysis frames - hop_length : int > 0 - The number of samples to advance between frames - win_length : [optional] - The length of the window function. By default, this matches `n_fft`. - n_fft : int > 0 - The length of each analysis frame. 
- dtype : np.dtype - The data type of the output - Returns - ------- - wss : np.ndarray, shape=`(n_fft + hop_length * (n_frames - 1))` - The sum-squared envelope of the window function - """ - if win_length is None: - win_length = n_fft - - n = n_fft + hop_length * (n_frames - 1) - x = np.zeros(n, dtype=dtype) - - # Compute the squared window at the desired length - win_sq = get_window(window, win_length, fftbins=True) - win_sq = normalize(win_sq, norm=norm) ** 2 - win_sq = pad_center(win_sq, n_fft) - - # Fill the envelope - for i in range(n_frames): - sample = i * hop_length - x[sample : min(n, sample + n_fft)] += win_sq[: max(0, min(n_fft, n - sample))] - return x - - -class STFT(torch.nn.Module): - def __init__( - self, filter_length=1024, hop_length=512, win_length=None, window="hann" - ): - """ - This module implements an STFT using 1D convolution and 1D transpose convolutions. - This is a bit tricky so there are some cases that probably won't work as working - out the same sizes before and after in all overlap add setups is tough. Right now, - this code should work with hop lengths that are half the filter length (50% overlap - between frames). - - Keyword Arguments: - filter_length {int} -- Length of filters used (default: {1024}) - hop_length {int} -- Hop length of STFT (restrict to 50% overlap between frames) (default: {512}) - win_length {[type]} -- Length of the window function applied to each frame (if not specified, it - equals the filter length). (default: {None}) - window {str} -- Type of window to use (options are bartlett, hann, hamming, blackman, blackmanharris) - (default: {'hann'}) - """ - super(STFT, self).__init__() - self.filter_length = filter_length - self.hop_length = hop_length - self.win_length = win_length if win_length else filter_length - self.window = window - self.forward_transform = None - self.pad_amount = int(self.filter_length / 2) - scale = self.filter_length / self.hop_length - fourier_basis = np.fft.fft(np.eye(self.filter_length)) - - cutoff = int((self.filter_length / 2 + 1)) - fourier_basis = np.vstack( - [np.real(fourier_basis[:cutoff, :]), np.imag(fourier_basis[:cutoff, :])] - ) - forward_basis = torch.FloatTensor(fourier_basis[:, None, :]) - inverse_basis = torch.FloatTensor( - np.linalg.pinv(scale * fourier_basis).T[:, None, :] - ) - - assert filter_length >= self.win_length - # get window and zero center pad it to filter_length - fft_window = get_window(window, self.win_length, fftbins=True) - fft_window = pad_center(fft_window, size=filter_length) - fft_window = torch.from_numpy(fft_window).float() - - # window the bases - forward_basis *= fft_window - inverse_basis *= fft_window - - self.register_buffer("forward_basis", forward_basis.float()) - self.register_buffer("inverse_basis", inverse_basis.float()) - - def transform(self, input_data): - """Take input data (audio) to STFT domain. 
- - Arguments: - input_data {tensor} -- Tensor of floats, with shape (num_batch, num_samples) - - Returns: - magnitude {tensor} -- Magnitude of STFT with shape (num_batch, - num_frequencies, num_frames) - phase {tensor} -- Phase of STFT with shape (num_batch, - num_frequencies, num_frames) - """ - num_batches = input_data.shape[0] - num_samples = input_data.shape[-1] - - self.num_samples = num_samples - - # similar to librosa, reflect-pad the input - input_data = input_data.view(num_batches, 1, num_samples) - # print(1234,input_data.shape) - input_data = F.pad( - input_data.unsqueeze(1), - (self.pad_amount, self.pad_amount, 0, 0, 0, 0), - mode="reflect", - ).squeeze(1) - # print(2333,input_data.shape,self.forward_basis.shape,self.hop_length) - # pdb.set_trace() - forward_transform = F.conv1d( - input_data, self.forward_basis, stride=self.hop_length, padding=0 - ) - - cutoff = int((self.filter_length / 2) + 1) - real_part = forward_transform[:, :cutoff, :] - imag_part = forward_transform[:, cutoff:, :] - - magnitude = torch.sqrt(real_part**2 + imag_part**2) - # phase = torch.atan2(imag_part.data, real_part.data) - - return magnitude # , phase - - def inverse(self, magnitude, phase): - """Call the inverse STFT (iSTFT), given magnitude and phase tensors produced - by the ```transform``` function. - - Arguments: - magnitude {tensor} -- Magnitude of STFT with shape (num_batch, - num_frequencies, num_frames) - phase {tensor} -- Phase of STFT with shape (num_batch, - num_frequencies, num_frames) - - Returns: - inverse_transform {tensor} -- Reconstructed audio given magnitude and phase. Of - shape (num_batch, num_samples) - """ - recombine_magnitude_phase = torch.cat( - [magnitude * torch.cos(phase), magnitude * torch.sin(phase)], dim=1 - ) - - inverse_transform = F.conv_transpose1d( - recombine_magnitude_phase, - self.inverse_basis, - stride=self.hop_length, - padding=0, - ) - - if self.window is not None: - window_sum = window_sumsquare( - self.window, - magnitude.size(-1), - hop_length=self.hop_length, - win_length=self.win_length, - n_fft=self.filter_length, - dtype=np.float32, - ) - # remove modulation effects - approx_nonzero_indices = torch.from_numpy( - np.where(window_sum > tiny(window_sum))[0] - ) - window_sum = torch.from_numpy(window_sum).to(inverse_transform.device) - inverse_transform[:, :, approx_nonzero_indices] /= window_sum[ - approx_nonzero_indices - ] - - # scale by hop ratio - inverse_transform *= float(self.filter_length) / self.hop_length - - inverse_transform = inverse_transform[..., self.pad_amount :] - inverse_transform = inverse_transform[..., : self.num_samples] - inverse_transform = inverse_transform.squeeze(1) - - return inverse_transform - - def forward(self, input_data): - """Take input data (audio) to STFT domain and then back to audio. - - Arguments: - input_data {tensor} -- Tensor of floats, with shape (num_batch, num_samples) - - Returns: - reconstruction {tensor} -- Reconstructed audio given magnitude and phase. 
Of - shape (num_batch, num_samples) - """ - self.magnitude, self.phase = self.transform(input_data) - reconstruction = self.inverse(self.magnitude, self.phase) - return reconstruction - - -from time import time as ttime - - -class BiGRU(nn.Module): - def __init__(self, input_features, hidden_features, num_layers): - super(BiGRU, self).__init__() - self.gru = nn.GRU( - input_features, - hidden_features, - num_layers=num_layers, - batch_first=True, - bidirectional=True, - ) - - def forward(self, x): - return self.gru(x)[0] - - -class ConvBlockRes(nn.Module): - def __init__(self, in_channels, out_channels, momentum=0.01): - super(ConvBlockRes, self).__init__() - self.conv = nn.Sequential( - nn.Conv2d( - in_channels=in_channels, - out_channels=out_channels, - kernel_size=(3, 3), - stride=(1, 1), - padding=(1, 1), - bias=False, - ), - nn.BatchNorm2d(out_channels, momentum=momentum), - nn.ReLU(), - nn.Conv2d( - in_channels=out_channels, - out_channels=out_channels, - kernel_size=(3, 3), - stride=(1, 1), - padding=(1, 1), - bias=False, - ), - nn.BatchNorm2d(out_channels, momentum=momentum), - nn.ReLU(), - ) - if in_channels != out_channels: - self.shortcut = nn.Conv2d(in_channels, out_channels, (1, 1)) - self.is_shortcut = True - else: - self.is_shortcut = False - - def forward(self, x): - if self.is_shortcut: - return self.conv(x) + self.shortcut(x) - else: - return self.conv(x) + x - - -class Encoder(nn.Module): - def __init__( - self, - in_channels, - in_size, - n_encoders, - kernel_size, - n_blocks, - out_channels=16, - momentum=0.01, - ): - super(Encoder, self).__init__() - self.n_encoders = n_encoders - self.bn = nn.BatchNorm2d(in_channels, momentum=momentum) - self.layers = nn.ModuleList() - self.latent_channels = [] - for i in range(self.n_encoders): - self.layers.append( - ResEncoderBlock( - in_channels, out_channels, kernel_size, n_blocks, momentum=momentum - ) - ) - self.latent_channels.append([out_channels, in_size]) - in_channels = out_channels - out_channels *= 2 - in_size //= 2 - self.out_size = in_size - self.out_channel = out_channels - - def forward(self, x): - concat_tensors = [] - x = self.bn(x) - for i in range(self.n_encoders): - _, x = self.layers[i](x) - concat_tensors.append(_) - return x, concat_tensors - - -class ResEncoderBlock(nn.Module): - def __init__( - self, in_channels, out_channels, kernel_size, n_blocks=1, momentum=0.01 - ): - super(ResEncoderBlock, self).__init__() - self.n_blocks = n_blocks - self.conv = nn.ModuleList() - self.conv.append(ConvBlockRes(in_channels, out_channels, momentum)) - for i in range(n_blocks - 1): - self.conv.append(ConvBlockRes(out_channels, out_channels, momentum)) - self.kernel_size = kernel_size - if self.kernel_size is not None: - self.pool = nn.AvgPool2d(kernel_size=kernel_size) - - def forward(self, x): - for i in range(self.n_blocks): - x = self.conv[i](x) - if self.kernel_size is not None: - return x, self.pool(x) - else: - return x - - -class Intermediate(nn.Module): # - def __init__(self, in_channels, out_channels, n_inters, n_blocks, momentum=0.01): - super(Intermediate, self).__init__() - self.n_inters = n_inters - self.layers = nn.ModuleList() - self.layers.append( - ResEncoderBlock(in_channels, out_channels, None, n_blocks, momentum) - ) - for i in range(self.n_inters - 1): - self.layers.append( - ResEncoderBlock(out_channels, out_channels, None, n_blocks, momentum) - ) - - def forward(self, x): - for i in range(self.n_inters): - x = self.layers[i](x) - return x - - -class ResDecoderBlock(nn.Module): - def __init__(self, 
in_channels, out_channels, stride, n_blocks=1, momentum=0.01): - super(ResDecoderBlock, self).__init__() - out_padding = (0, 1) if stride == (1, 2) else (1, 1) - self.n_blocks = n_blocks - self.conv1 = nn.Sequential( - nn.ConvTranspose2d( - in_channels=in_channels, - out_channels=out_channels, - kernel_size=(3, 3), - stride=stride, - padding=(1, 1), - output_padding=out_padding, - bias=False, - ), - nn.BatchNorm2d(out_channels, momentum=momentum), - nn.ReLU(), - ) - self.conv2 = nn.ModuleList() - self.conv2.append(ConvBlockRes(out_channels * 2, out_channels, momentum)) - for i in range(n_blocks - 1): - self.conv2.append(ConvBlockRes(out_channels, out_channels, momentum)) - - def forward(self, x, concat_tensor): - x = self.conv1(x) - x = torch.cat((x, concat_tensor), dim=1) - for i in range(self.n_blocks): - x = self.conv2[i](x) - return x - - -class Decoder(nn.Module): - def __init__(self, in_channels, n_decoders, stride, n_blocks, momentum=0.01): - super(Decoder, self).__init__() - self.layers = nn.ModuleList() - self.n_decoders = n_decoders - for i in range(self.n_decoders): - out_channels = in_channels // 2 - self.layers.append( - ResDecoderBlock(in_channels, out_channels, stride, n_blocks, momentum) - ) - in_channels = out_channels - - def forward(self, x, concat_tensors): - for i in range(self.n_decoders): - x = self.layers[i](x, concat_tensors[-1 - i]) - return x - - -class DeepUnet(nn.Module): - def __init__( - self, - kernel_size, - n_blocks, - en_de_layers=5, - inter_layers=4, - in_channels=1, - en_out_channels=16, - ): - super(DeepUnet, self).__init__() - self.encoder = Encoder( - in_channels, 128, en_de_layers, kernel_size, n_blocks, en_out_channels - ) - self.intermediate = Intermediate( - self.encoder.out_channel // 2, - self.encoder.out_channel, - inter_layers, - n_blocks, - ) - self.decoder = Decoder( - self.encoder.out_channel, en_de_layers, kernel_size, n_blocks - ) - - def forward(self, x): - x, concat_tensors = self.encoder(x) - x = self.intermediate(x) - x = self.decoder(x, concat_tensors) - return x - - -class E2E(nn.Module): - def __init__( - self, - n_blocks, - n_gru, - kernel_size, - en_de_layers=5, - inter_layers=4, - in_channels=1, - en_out_channels=16, - ): - super(E2E, self).__init__() - self.unet = DeepUnet( - kernel_size, - n_blocks, - en_de_layers, - inter_layers, - in_channels, - en_out_channels, - ) - self.cnn = nn.Conv2d(en_out_channels, 3, (3, 3), padding=(1, 1)) - if n_gru: - self.fc = nn.Sequential( - BiGRU(3 * 128, 256, n_gru), - nn.Linear(512, 360), - nn.Dropout(0.25), - nn.Sigmoid(), - ) - else: - self.fc = nn.Sequential( - nn.Linear(3 * nn.N_MELS, nn.N_CLASS), nn.Dropout(0.25), nn.Sigmoid() - ) - - def forward(self, mel): - # print(mel.shape) - mel = mel.transpose(-1, -2).unsqueeze(1) - x = self.cnn(self.unet(mel)).transpose(1, 2).flatten(-2) - x = self.fc(x) - # print(x.shape) - return x - - -from librosa.filters import mel - - -class MelSpectrogram(torch.nn.Module): - def __init__( - self, - is_half, - n_mel_channels, - sampling_rate, - win_length, - hop_length, - n_fft=None, - mel_fmin=0, - mel_fmax=None, - clamp=1e-5, - ): - super().__init__() - n_fft = win_length if n_fft is None else n_fft - self.hann_window = {} - mel_basis = mel( - sr=sampling_rate, - n_fft=n_fft, - n_mels=n_mel_channels, - fmin=mel_fmin, - fmax=mel_fmax, - htk=True, - ) - mel_basis = torch.from_numpy(mel_basis).float() - self.register_buffer("mel_basis", mel_basis) - self.n_fft = win_length if n_fft is None else n_fft - self.hop_length = hop_length - self.win_length = 
win_length - self.sampling_rate = sampling_rate - self.n_mel_channels = n_mel_channels - self.clamp = clamp - self.is_half = is_half - - def forward(self, audio, keyshift=0, speed=1, center=True): - factor = 2 ** (keyshift / 12) - n_fft_new = int(np.round(self.n_fft * factor)) - win_length_new = int(np.round(self.win_length * factor)) - hop_length_new = int(np.round(self.hop_length * speed)) - keyshift_key = str(keyshift) + "_" + str(audio.device) - if keyshift_key not in self.hann_window: - self.hann_window[keyshift_key] = torch.hann_window(win_length_new).to( - # "cpu"if(audio.device.type=="privateuseone") else audio.device - audio.device - ) - # fft = torch.stft(#doesn't support pytorch_dml - # # audio.cpu() if(audio.device.type=="privateuseone")else audio, - # audio, - # n_fft=n_fft_new, - # hop_length=hop_length_new, - # win_length=win_length_new, - # window=self.hann_window[keyshift_key], - # center=center, - # return_complex=True, - # ) - # magnitude = torch.sqrt(fft.real.pow(2) + fft.imag.pow(2)) - # print(1111111111) - # print(222222222222222,audio.device,self.is_half) - if hasattr(self, "stft") == False: - # print(n_fft_new,hop_length_new,win_length_new,audio.shape) - self.stft = STFT( - filter_length=n_fft_new, - hop_length=hop_length_new, - win_length=win_length_new, - window="hann", - ).to(audio.device) - magnitude = self.stft.transform(audio) # phase - # if (audio.device.type == "privateuseone"): - # magnitude=magnitude.to(audio.device) - if keyshift != 0: - size = self.n_fft // 2 + 1 - resize = magnitude.size(1) - if resize < size: - magnitude = F.pad(magnitude, (0, 0, 0, size - resize)) - magnitude = magnitude[:, :size, :] * self.win_length / win_length_new - mel_output = torch.matmul(self.mel_basis, magnitude) - if self.is_half == True: - mel_output = mel_output.half() - log_mel_spec = torch.log(torch.clamp(mel_output, min=self.clamp)) - # print(log_mel_spec.device.type) - return log_mel_spec - - -class RMVPE: - def __init__(self, model_path, is_half, device=None): - self.resample_kernel = {} - self.resample_kernel = {} - self.is_half = is_half - if device is None: - device = "cuda" if torch.cuda.is_available() else "cpu" - self.device = device - self.mel_extractor = MelSpectrogram( - is_half, 128, 16000, 1024, 160, None, 30, 8000 - ).to(device) - if "privateuseone" in str(device): - import onnxruntime as ort - - ort_session = ort.InferenceSession( - "%s/rmvpe.onnx" % os.environ["rmvpe_root"], - providers=["DmlExecutionProvider"], - ) - self.model = ort_session - else: - model = E2E(4, 1, (2, 2)) - ckpt = torch.load(model_path, map_location="cpu") - model.load_state_dict(ckpt) - model.eval() - if is_half == True: - model = model.half() - self.model = model - self.model = self.model.to(device) - cents_mapping = 20 * np.arange(360) + 1997.3794084376191 - self.cents_mapping = np.pad(cents_mapping, (4, 4)) # 368 - - def mel2hidden(self, mel): - with torch.no_grad(): - n_frames = mel.shape[-1] - mel = F.pad( - mel, (0, 32 * ((n_frames - 1) // 32 + 1) - n_frames), mode="constant" - ) - if "privateuseone" in str(self.device): - onnx_input_name = self.model.get_inputs()[0].name - onnx_outputs_names = self.model.get_outputs()[0].name - hidden = self.model.run( - [onnx_outputs_names], - input_feed={onnx_input_name: mel.cpu().numpy()}, - )[0] - else: - hidden = self.model(mel) - return hidden[:, :n_frames] - - def decode(self, hidden, thred=0.03): - cents_pred = self.to_local_average_cents(hidden, thred=thred) - f0 = 10 * (2 ** (cents_pred / 1200)) - f0[f0 == 10] = 0 - # f0 = 
np.array([10 * (2 ** (cent_pred / 1200)) if cent_pred else 0 for cent_pred in cents_pred]) - return f0 - - def infer_from_audio(self, audio, thred=0.03): - # torch.cuda.synchronize() - t0 = ttime() - mel = self.mel_extractor( - torch.from_numpy(audio).float().to(self.device).unsqueeze(0), center=True - ) - # print(123123123,mel.device.type) - # torch.cuda.synchronize() - t1 = ttime() - hidden = self.mel2hidden(mel) - # torch.cuda.synchronize() - t2 = ttime() - # print(234234,hidden.device.type) - if "privateuseone" not in str(self.device): - hidden = hidden.squeeze(0).cpu().numpy() - else: - hidden = hidden[0] - if self.is_half == True: - hidden = hidden.astype("float32") - - f0 = self.decode(hidden, thred=thred) - # torch.cuda.synchronize() - t3 = ttime() - # print("hmvpe:%s\t%s\t%s\t%s"%(t1-t0,t2-t1,t3-t2,t3-t0)) - return f0 - - def infer_from_audio_with_pitch(self, audio, thred=0.03, f0_min=50, f0_max=1100): - t0 = ttime() - audio = torch.from_numpy(audio).float().to(self.device).unsqueeze(0) - mel = self.mel_extractor(audio, center=True) - t1 = ttime() - hidden = self.mel2hidden(mel) - t2 = ttime() - if "privateuseone" not in str(self.device): - hidden = hidden.squeeze(0).cpu().numpy() - else: - hidden = hidden[0] - if self.is_half == True: - hidden = hidden.astype("float32") - f0 = self.decode(hidden, thred=thred) - f0[(f0 < f0_min) | (f0 > f0_max)] = 0 - t3 = ttime() - return f0 - - def to_local_average_cents(self, salience, thred=0.05): - # t0 = ttime() - center = np.argmax(salience, axis=1) # 帧长#index - salience = np.pad(salience, ((0, 0), (4, 4))) # 帧长,368 - # t1 = ttime() - center += 4 - todo_salience = [] - todo_cents_mapping = [] - starts = center - 4 - ends = center + 5 - for idx in range(salience.shape[0]): - todo_salience.append(salience[:, starts[idx] : ends[idx]][idx]) - todo_cents_mapping.append(self.cents_mapping[starts[idx] : ends[idx]]) - # t2 = ttime() - todo_salience = np.array(todo_salience) # 帧长,9 - todo_cents_mapping = np.array(todo_cents_mapping) # 帧长,9 - product_sum = np.sum(todo_salience * todo_cents_mapping, 1) - weight_sum = np.sum(todo_salience, 1) # 帧长 - devided = product_sum / weight_sum # 帧长 - # t3 = ttime() - maxx = np.max(salience, axis=1) # 帧长 - devided[maxx <= thred] = 0 - # t4 = ttime() - # print("decode:%s\t%s\t%s\t%s" % (t1 - t0, t2 - t1, t3 - t2, t4 - t3)) - return devided - - -if __name__ == "__main__": - import librosa - import soundfile as sf - - audio, sampling_rate = sf.read(r"C:\Users\liujing04\Desktop\Z\冬之花clip1.wav") - if len(audio.shape) > 1: - audio = librosa.to_mono(audio.transpose(1, 0)) - audio_bak = audio.copy() - if sampling_rate != 16000: - audio = librosa.resample(audio, orig_sr=sampling_rate, target_sr=16000) - model_path = r"D:\BaiduNetdiskDownload\RVC-beta-v2-0727AMD_realtime\rmvpe.pt" - thred = 0.03 # 0.01 - device = "cuda" if torch.cuda.is_available() else "cpu" - rmvpe = RMVPE(model_path, is_half=False, device=device) - t0 = ttime() - f0 = rmvpe.infer_from_audio(audio, thred=thred) - # f0 = rmvpe.infer_from_audio(audio, thred=thred) - # f0 = rmvpe.infer_from_audio(audio, thred=thred) - # f0 = rmvpe.infer_from_audio(audio, thred=thred) - # f0 = rmvpe.infer_from_audio(audio, thred=thred) - t1 = ttime() - logger.info("%s %.2f", f0.shape, t1 - t0) diff --git a/spaces/LennardZuendorf/legalis/README.md b/spaces/LennardZuendorf/legalis/README.md deleted file mode 100644 index f59e4d20a32bd94810722c4d2cf8f78536938eb0..0000000000000000000000000000000000000000 --- a/spaces/LennardZuendorf/legalis/README.md +++ /dev/null @@ -1,31 +0,0 
@@ ---- -title: Legalis -emoji: ⚖️ -colorFrom: red -colorTo: yellow -sdk: gradio -sdk_version: 3.33.1 -app_file: app.py -pinned: false -license: mit ---- - -# legalis demo - -#### This space is showing a demo of two models I build for a Uni Project about the prediction of court case outcomes. It's based on the similarly named legalis dataset, which contains 2800 court cases from the German Court cases. The dataset is labeled with the outcome of the case (plaintiff won or lost), extracted from the decision in text form (by ChatGPT) and the models are trained to predict the outcome based on the text of the case. - -## 🔗 Links: - -**[RandomForest Model on Huggingface](https://huggingface.co/LennardZuendorf/legalis-scikit)** - -**[BERT Model on Huggingface](https://huggingface.co/LennardZuendorf/legalis-BERT)** - -**[Labeled Dataset on Huggingface](https://huggingface.co/datasets/LennardZuendorf/legalis)** - - -## 👨‍💻 Author and Credits: - -**Author:** [@LennardZuendorf](https://github.com/LennardZuendorf) - -- This project is part of a course in HTW Berlin's Business Computing Bachelor Program ("Unternehmenssoftware") - diff --git a/spaces/Lianjd/stock_dashboard/backtrader/feeds/vchartfile.py b/spaces/Lianjd/stock_dashboard/backtrader/feeds/vchartfile.py deleted file mode 100644 index b81fd545443b600cf604a6f538add125469a3203..0000000000000000000000000000000000000000 --- a/spaces/Lianjd/stock_dashboard/backtrader/feeds/vchartfile.py +++ /dev/null @@ -1,141 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8; py-indent-offset:4 -*- -############################################################################### -# -# Copyright (C) 2015-2020 Daniel Rodriguez -# -# This program is free software: you can redistribute it and/or modify -# it under the terms of the GNU General Public License as published by -# the Free Software Foundation, either version 3 of the License, or -# (at your option) any later version. -# -# This program is distributed in the hope that it will be useful, -# but WITHOUT ANY WARRANTY; without even the implied warranty of -# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the -# GNU General Public License for more details. -# -# You should have received a copy of the GNU General Public License -# along with this program. If not, see . -# -############################################################################### -from __future__ import (absolute_import, division, print_function, - unicode_literals) - -from datetime import datetime -from struct import unpack -import os.path - -import backtrader as bt -from backtrader import date2num # avoid dict lookups - - -class MetaVChartFile(bt.DataBase.__class__): - def __init__(cls, name, bases, dct): - '''Class has already been created ... register''' - # Initialize the class - super(MetaVChartFile, cls).__init__(name, bases, dct) - - # Register with the store - bt.stores.VChartFile.DataCls = cls - - -class VChartFile(bt.with_metaclass(MetaVChartFile, bt.DataBase)): - ''' - Support for `Visual Chart `_ binary on-disk files for - both daily and intradaily formats. - - Note: - - - ``dataname``: Market code displayed by Visual Chart. 
Example: 015ES for - EuroStoxx 50 continuous future - ''' - - def start(self): - super(VChartFile, self).start() - if self._store is None: - self._store = bt.stores.VChartFile() - self._store.start() - - self._store.start(data=self) - - # Choose extension and extraction/calculation parameters - if self.p.timeframe < bt.TimeFrame.Minutes: - ext = '.tck' # seconds will still need resampling - # FIXME: find reference to tick counter for format - elif self.p.timeframe < bt.TimeFrame.Days: - ext = '.min' - self._dtsize = 2 - self._barsize = 32 - self._barfmt = 'IIffffII' - else: - ext = '.fd' - self._barsize = 28 - self._dtsize = 1 - self._barfmt = 'IffffII' - - # Construct full path - basepath = self._store.get_datapath() - - # Example: 01 + 0 + 015ES + .fd -> 010015ES.fd - dataname = '01' + '0' + self.p.dataname + ext - # 015ES -> 0 + 015 -> 0015 - mktcode = '0' + self.p.dataname[0:3] - - # basepath/0015/010015ES.fd - path = os.path.join(basepath, mktcode, dataname) - try: - self.f = open(path, 'rb') - except IOError: - self.f = None - - def stop(self): - if self.f is not None: - self.f.close() - self.f = None - - def _load(self): - if self.f is None: - return False # cannot load more - - try: - bardata = self.f.read(self._barsize) - except IOError: - self.f = None # cannot return, nullify file - return False # cannot load more - - if not bardata or len(bardata) < self._barsize: - self.f = None # cannot return, nullify file - return False # cannot load more - - try: - bdata = unpack(self._barfmt, bardata) - except: - self.f = None - return False - - # First Date - y, md = divmod(bdata[0], 500) # Years stored as if they had 500 days - m, d = divmod(md, 32) # Months stored as if they had 32 days - dt = datetime(y, m, d) - - # Time - if self._dtsize > 1: # Minute Bars - # Daily Time is stored in seconds - hhmm, ss = divmod(bdata[1], 60) - hh, mm = divmod(hhmm, 60) - dt = dt.replace(hour=hh, minute=mm, second=ss) - else: # Daily Bars - dt = datetime.combine(dt, self.p.sessionend) - - self.lines.datetime[0] = date2num(dt) # Store time - - # Get the rest of the fields - o, h, l, c, v, oi = bdata[self._dtsize:] - self.lines.open[0] = o - self.lines.high[0] = h - self.lines.low[0] = l - self.lines.close[0] = c - self.lines.volume[0] = v - self.lines.openinterest[0] = oi - - return True # a bar has been successfully loaded diff --git a/spaces/Lippppxy/AiAnimeVoice/text/cleaners.py b/spaces/Lippppxy/AiAnimeVoice/text/cleaners.py deleted file mode 100644 index d26581deb399609163518054718ad80ecca5d934..0000000000000000000000000000000000000000 --- a/spaces/Lippppxy/AiAnimeVoice/text/cleaners.py +++ /dev/null @@ -1,475 +0,0 @@ -""" from https://github.com/keithito/tacotron """ - -''' -Cleaners are transformations that run over the input text at both training and eval time. - -Cleaners can be selected by passing a comma-delimited list of cleaner names as the "cleaners" -hyperparameter. Some cleaners are English-specific. You'll typically want to use: - 1. "english_cleaners" for English text - 2. "transliteration_cleaners" for non-English text that can be transliterated to ASCII using - the Unidecode library (https://pypi.python.org/pypi/Unidecode) - 3. "basic_cleaners" if you do not want to transliterate (in this case, you should also update - the symbols in symbols.py to match your data). 
-''' - -import re -from unidecode import unidecode -import pyopenjtalk -from jamo import h2j, j2hcj -from pypinyin import lazy_pinyin, BOPOMOFO -import jieba, cn2an - - -# This is a list of Korean classifiers preceded by pure Korean numerals. -_korean_classifiers = '군데 권 개 그루 닢 대 두 마리 모 모금 뭇 발 발짝 방 번 벌 보루 살 수 술 시 쌈 움큼 정 짝 채 척 첩 축 켤레 톨 통' - -# Regular expression matching whitespace: -_whitespace_re = re.compile(r'\s+') - -# Regular expression matching Japanese without punctuation marks: -_japanese_characters = re.compile(r'[A-Za-z\d\u3005\u3040-\u30ff\u4e00-\u9fff\uff11-\uff19\uff21-\uff3a\uff41-\uff5a\uff66-\uff9d]') - -# Regular expression matching non-Japanese characters or punctuation marks: -_japanese_marks = re.compile(r'[^A-Za-z\d\u3005\u3040-\u30ff\u4e00-\u9fff\uff11-\uff19\uff21-\uff3a\uff41-\uff5a\uff66-\uff9d]') - -# List of (regular expression, replacement) pairs for abbreviations: -_abbreviations = [(re.compile('\\b%s\\.' % x[0], re.IGNORECASE), x[1]) for x in [ - ('mrs', 'misess'), - ('mr', 'mister'), - ('dr', 'doctor'), - ('st', 'saint'), - ('co', 'company'), - ('jr', 'junior'), - ('maj', 'major'), - ('gen', 'general'), - ('drs', 'doctors'), - ('rev', 'reverend'), - ('lt', 'lieutenant'), - ('hon', 'honorable'), - ('sgt', 'sergeant'), - ('capt', 'captain'), - ('esq', 'esquire'), - ('ltd', 'limited'), - ('col', 'colonel'), - ('ft', 'fort'), -]] - -# List of (hangul, hangul divided) pairs: -_hangul_divided = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('ㄳ', 'ㄱㅅ'), - ('ㄵ', 'ㄴㅈ'), - ('ㄶ', 'ㄴㅎ'), - ('ㄺ', 'ㄹㄱ'), - ('ㄻ', 'ㄹㅁ'), - ('ㄼ', 'ㄹㅂ'), - ('ㄽ', 'ㄹㅅ'), - ('ㄾ', 'ㄹㅌ'), - ('ㄿ', 'ㄹㅍ'), - ('ㅀ', 'ㄹㅎ'), - ('ㅄ', 'ㅂㅅ'), - ('ㅘ', 'ㅗㅏ'), - ('ㅙ', 'ㅗㅐ'), - ('ㅚ', 'ㅗㅣ'), - ('ㅝ', 'ㅜㅓ'), - ('ㅞ', 'ㅜㅔ'), - ('ㅟ', 'ㅜㅣ'), - ('ㅢ', 'ㅡㅣ'), - ('ㅑ', 'ㅣㅏ'), - ('ㅒ', 'ㅣㅐ'), - ('ㅕ', 'ㅣㅓ'), - ('ㅖ', 'ㅣㅔ'), - ('ㅛ', 'ㅣㅗ'), - ('ㅠ', 'ㅣㅜ') -]] - -# List of (Latin alphabet, hangul) pairs: -_latin_to_hangul = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [ - ('a', '에이'), - ('b', '비'), - ('c', '시'), - ('d', '디'), - ('e', '이'), - ('f', '에프'), - ('g', '지'), - ('h', '에이치'), - ('i', '아이'), - ('j', '제이'), - ('k', '케이'), - ('l', '엘'), - ('m', '엠'), - ('n', '엔'), - ('o', '오'), - ('p', '피'), - ('q', '큐'), - ('r', '아르'), - ('s', '에스'), - ('t', '티'), - ('u', '유'), - ('v', '브이'), - ('w', '더블유'), - ('x', '엑스'), - ('y', '와이'), - ('z', '제트') -]] - -# List of (Latin alphabet, bopomofo) pairs: -_latin_to_bopomofo = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [ - ('a', 'ㄟˉ'), - ('b', 'ㄅㄧˋ'), - ('c', 'ㄙㄧˉ'), - ('d', 'ㄉㄧˋ'), - ('e', 'ㄧˋ'), - ('f', 'ㄝˊㄈㄨˋ'), - ('g', 'ㄐㄧˋ'), - ('h', 'ㄝˇㄑㄩˋ'), - ('i', 'ㄞˋ'), - ('j', 'ㄐㄟˋ'), - ('k', 'ㄎㄟˋ'), - ('l', 'ㄝˊㄛˋ'), - ('m', 'ㄝˊㄇㄨˋ'), - ('n', 'ㄣˉ'), - ('o', 'ㄡˉ'), - ('p', 'ㄆㄧˉ'), - ('q', 'ㄎㄧㄡˉ'), - ('r', 'ㄚˋ'), - ('s', 'ㄝˊㄙˋ'), - ('t', 'ㄊㄧˋ'), - ('u', 'ㄧㄡˉ'), - ('v', 'ㄨㄧˉ'), - ('w', 'ㄉㄚˋㄅㄨˋㄌㄧㄡˋ'), - ('x', 'ㄝˉㄎㄨˋㄙˋ'), - ('y', 'ㄨㄞˋ'), - ('z', 'ㄗㄟˋ') -]] - - -# List of (bopomofo, romaji) pairs: -_bopomofo_to_romaji = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [ - ('ㄅㄛ', 'p⁼wo'), - ('ㄆㄛ', 'pʰwo'), - ('ㄇㄛ', 'mwo'), - ('ㄈㄛ', 'fwo'), - ('ㄅ', 'p⁼'), - ('ㄆ', 'pʰ'), - ('ㄇ', 'm'), - ('ㄈ', 'f'), - ('ㄉ', 't⁼'), - ('ㄊ', 'tʰ'), - ('ㄋ', 'n'), - ('ㄌ', 'l'), - ('ㄍ', 'k⁼'), - ('ㄎ', 'kʰ'), - ('ㄏ', 'h'), - ('ㄐ', 'ʧ⁼'), - ('ㄑ', 'ʧʰ'), - ('ㄒ', 'ʃ'), - ('ㄓ', 'ʦ`⁼'), - ('ㄔ', 'ʦ`ʰ'), - ('ㄕ', 's`'), - ('ㄖ', 'ɹ`'), - ('ㄗ', 'ʦ⁼'), - ('ㄘ', 'ʦʰ'), - ('ㄙ', 's'), - ('ㄚ', 'a'), - ('ㄛ', 'o'), - ('ㄜ', 'ə'), - ('ㄝ', 'e'), - ('ㄞ', 'ai'), - ('ㄟ', 'ei'), - ('ㄠ', 'au'), - ('ㄡ', 'ou'), - ('ㄧㄢ', 'yeNN'), 
- ('ㄢ', 'aNN'), - ('ㄧㄣ', 'iNN'), - ('ㄣ', 'əNN'), - ('ㄤ', 'aNg'), - ('ㄧㄥ', 'iNg'), - ('ㄨㄥ', 'uNg'), - ('ㄩㄥ', 'yuNg'), - ('ㄥ', 'əNg'), - ('ㄦ', 'əɻ'), - ('ㄧ', 'i'), - ('ㄨ', 'u'), - ('ㄩ', 'ɥ'), - ('ˉ', '→'), - ('ˊ', '↑'), - ('ˇ', '↓↑'), - ('ˋ', '↓'), - ('˙', ''), - (',', ','), - ('。', '.'), - ('!', '!'), - ('?', '?'), - ('—', '-') -]] - - -def expand_abbreviations(text): - for regex, replacement in _abbreviations: - text = re.sub(regex, replacement, text) - return text - - -def lowercase(text): - return text.lower() - - -def collapse_whitespace(text): - return re.sub(_whitespace_re, ' ', text) - - -def convert_to_ascii(text): - return unidecode(text) - - -def japanese_to_romaji_with_accent(text): - '''Reference https://r9y9.github.io/ttslearn/latest/notebooks/ch10_Recipe-Tacotron.html''' - sentences = re.split(_japanese_marks, text) - marks = re.findall(_japanese_marks, text) - text = '' - for i, sentence in enumerate(sentences): - if re.match(_japanese_characters, sentence): - if text!='': - text+=' ' - labels = pyopenjtalk.extract_fullcontext(sentence) - for n, label in enumerate(labels): - phoneme = re.search(r'\-([^\+]*)\+', label).group(1) - if phoneme not in ['sil','pau']: - text += phoneme.replace('ch','ʧ').replace('sh','ʃ').replace('cl','Q') - else: - continue - n_moras = int(re.search(r'/F:(\d+)_', label).group(1)) - a1 = int(re.search(r"/A:(\-?[0-9]+)\+", label).group(1)) - a2 = int(re.search(r"\+(\d+)\+", label).group(1)) - a3 = int(re.search(r"\+(\d+)/", label).group(1)) - if re.search(r'\-([^\+]*)\+', labels[n + 1]).group(1) in ['sil','pau']: - a2_next=-1 - else: - a2_next = int(re.search(r"\+(\d+)\+", labels[n + 1]).group(1)) - # Accent phrase boundary - if a3 == 1 and a2_next == 1: - text += ' ' - # Falling - elif a1 == 0 and a2_next == a2 + 1 and a2 != n_moras: - text += '↓' - # Rising - elif a2 == 1 and a2_next == 2: - text += '↑' - if i 0: - try: - img_bytes = self.file_client.get(gt_path, 'gt') - except (IOError, OSError) as e: - logger = get_root_logger() - logger.warn(f'File client error: {e}, remaining retry times: {retry - 1}') - # change another file to read - index = random.randint(0, self.__len__()) - gt_path = self.paths[index] - time.sleep(1) # sleep 1s for occasional server congestion - else: - break - finally: - retry -= 1 - img_gt = imfrombytes(img_bytes, float32=True) - - # -------------------- Do augmentation for training: flip, rotation -------------------- # - img_gt = augment(img_gt, self.opt['use_hflip'], self.opt['use_rot']) - - # crop or pad to 400 - # TODO: 400 is hard-coded. You may change it accordingly - h, w = img_gt.shape[0:2] - crop_pad_size = 400 - # pad - if h < crop_pad_size or w < crop_pad_size: - pad_h = max(0, crop_pad_size - h) - pad_w = max(0, crop_pad_size - w) - img_gt = cv2.copyMakeBorder(img_gt, 0, pad_h, 0, pad_w, cv2.BORDER_REFLECT_101) - # crop - if img_gt.shape[0] > crop_pad_size or img_gt.shape[1] > crop_pad_size: - h, w = img_gt.shape[0:2] - # randomly choose top and left coordinates - top = random.randint(0, h - crop_pad_size) - left = random.randint(0, w - crop_pad_size) - img_gt = img_gt[top:top + crop_pad_size, left:left + crop_pad_size, ...] 
- - # ------------------------ Generate kernels (used in the first degradation) ------------------------ # - kernel_size = random.choice(self.kernel_range) - if np.random.uniform() < self.opt['sinc_prob']: - # this sinc filter setting is for kernels ranging from [7, 21] - if kernel_size < 13: - omega_c = np.random.uniform(np.pi / 3, np.pi) - else: - omega_c = np.random.uniform(np.pi / 5, np.pi) - kernel = circular_lowpass_kernel(omega_c, kernel_size, pad_to=False) - else: - kernel = random_mixed_kernels( - self.kernel_list, - self.kernel_prob, - kernel_size, - self.blur_sigma, - self.blur_sigma, [-math.pi, math.pi], - self.betag_range, - self.betap_range, - noise_range=None) - # pad kernel - pad_size = (21 - kernel_size) // 2 - kernel = np.pad(kernel, ((pad_size, pad_size), (pad_size, pad_size))) - - # ------------------------ Generate kernels (used in the second degradation) ------------------------ # - kernel_size = random.choice(self.kernel_range) - if np.random.uniform() < self.opt['sinc_prob2']: - if kernel_size < 13: - omega_c = np.random.uniform(np.pi / 3, np.pi) - else: - omega_c = np.random.uniform(np.pi / 5, np.pi) - kernel2 = circular_lowpass_kernel(omega_c, kernel_size, pad_to=False) - else: - kernel2 = random_mixed_kernels( - self.kernel_list2, - self.kernel_prob2, - kernel_size, - self.blur_sigma2, - self.blur_sigma2, [-math.pi, math.pi], - self.betag_range2, - self.betap_range2, - noise_range=None) - - # pad kernel - pad_size = (21 - kernel_size) // 2 - kernel2 = np.pad(kernel2, ((pad_size, pad_size), (pad_size, pad_size))) - - # ------------------------------------- the final sinc kernel ------------------------------------- # - if np.random.uniform() < self.opt['final_sinc_prob']: - kernel_size = random.choice(self.kernel_range) - omega_c = np.random.uniform(np.pi / 3, np.pi) - sinc_kernel = circular_lowpass_kernel(omega_c, kernel_size, pad_to=21) - sinc_kernel = torch.FloatTensor(sinc_kernel) - else: - sinc_kernel = self.pulse_tensor - - # BGR to RGB, HWC to CHW, numpy to tensor - img_gt = img2tensor([img_gt], bgr2rgb=True, float32=True)[0] - kernel = torch.FloatTensor(kernel) - kernel2 = torch.FloatTensor(kernel2) - - return_d = {'gt': img_gt, 'kernel1': kernel, 'kernel2': kernel2, 'sinc_kernel': sinc_kernel, 'gt_path': gt_path} - return return_d - - def __len__(self): - return len(self.paths) diff --git a/spaces/LittleYuan/My-Real-Bot/realesrgan/weights/README.md b/spaces/LittleYuan/My-Real-Bot/realesrgan/weights/README.md deleted file mode 100644 index 4d7b7e642591ef88575d9e6c360a4d29e0cc1a4f..0000000000000000000000000000000000000000 --- a/spaces/LittleYuan/My-Real-Bot/realesrgan/weights/README.md +++ /dev/null @@ -1,3 +0,0 @@ -# Weights - -Put the downloaded weights to this folder. 
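The weights README above only says to put the downloaded model files into this folder. A minimal sketch along the following lines could automate that step; the release URL and checkpoint name are assumptions based on the public Real-ESRGAN GitHub releases rather than anything stated in this Space, so adjust them to whichever model variant the app actually loads.

```python
# Hedged sketch: fetch a Real-ESRGAN checkpoint into realesrgan/weights/.
# The URL below is an assumption (public Real-ESRGAN v0.1.0 release asset);
# swap in the checkpoint this Space expects.
import os
import urllib.request

WEIGHTS_DIR = os.path.join("realesrgan", "weights")
WEIGHTS_URL = (
    "https://github.com/xinntao/Real-ESRGAN/releases/download/"
    "v0.1.0/RealESRGAN_x4plus.pth"
)


def fetch_weights(url: str = WEIGHTS_URL, dest_dir: str = WEIGHTS_DIR) -> str:
    """Download the checkpoint into dest_dir if it is not already present."""
    os.makedirs(dest_dir, exist_ok=True)
    dest_path = os.path.join(dest_dir, os.path.basename(url))
    if not os.path.isfile(dest_path):
        urllib.request.urlretrieve(url, dest_path)
    return dest_path


if __name__ == "__main__":
    print(fetch_weights())
```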
diff --git a/spaces/LouieDellavega/dreamlike-photoreal-2.0/README.md b/spaces/LouieDellavega/dreamlike-photoreal-2.0/README.md deleted file mode 100644 index a70a7b6bfda1bdeb1d5d103e33a80e6780b24740..0000000000000000000000000000000000000000 --- a/spaces/LouieDellavega/dreamlike-photoreal-2.0/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Dreamlike Photoreal 2.0 -emoji: 📉 -colorFrom: pink -colorTo: pink -sdk: gradio -sdk_version: 3.15.0 -app_file: app.py -pinned: false -duplicated_from: akhaliq/dreamlike-photoreal-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Luna-Crestt/Da-ze/app.py b/spaces/Luna-Crestt/Da-ze/app.py deleted file mode 100644 index 16e8131a0bbf7b06956e69e2b7758fa01e4eb51f..0000000000000000000000000000000000000000 --- a/spaces/Luna-Crestt/Da-ze/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/Linaqruf/anything-v3.0").launch() \ No newline at end of file diff --git a/spaces/Lwalid/Daam_Inpainting/README.md b/spaces/Lwalid/Daam_Inpainting/README.md deleted file mode 100644 index 6079744a430d1b43840bdd20628ab537ebf0e68a..0000000000000000000000000000000000000000 --- a/spaces/Lwalid/Daam_Inpainting/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Daam Inpainting -emoji: 🏢 -colorFrom: gray -colorTo: yellow -sdk: gradio -sdk_version: 3.32.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmseg/apis/inference.py b/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmseg/apis/inference.py deleted file mode 100644 index 90bc1c0c68525734bd6793f07c15fe97d3c8342c..0000000000000000000000000000000000000000 --- a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmseg/apis/inference.py +++ /dev/null @@ -1,136 +0,0 @@ -import matplotlib.pyplot as plt -import annotator.uniformer.mmcv as mmcv -import torch -from annotator.uniformer.mmcv.parallel import collate, scatter -from annotator.uniformer.mmcv.runner import load_checkpoint - -from annotator.uniformer.mmseg.datasets.pipelines import Compose -from annotator.uniformer.mmseg.models import build_segmentor - - -def init_segmentor(config, checkpoint=None, device='cuda:0'): - """Initialize a segmentor from config file. - - Args: - config (str or :obj:`mmcv.Config`): Config file path or the config - object. - checkpoint (str, optional): Checkpoint path. If left as None, the model - will not load any weights. - device (str, optional) CPU/CUDA device option. Default 'cuda:0'. - Use 'cpu' for loading model on CPU. - Returns: - nn.Module: The constructed segmentor. - """ - if isinstance(config, str): - config = mmcv.Config.fromfile(config) - elif not isinstance(config, mmcv.Config): - raise TypeError('config must be a filename or Config object, ' - 'but got {}'.format(type(config))) - config.model.pretrained = None - config.model.train_cfg = None - model = build_segmentor(config.model, test_cfg=config.get('test_cfg')) - if checkpoint is not None: - checkpoint = load_checkpoint(model, checkpoint, map_location='cpu') - model.CLASSES = checkpoint['meta']['CLASSES'] - model.PALETTE = checkpoint['meta']['PALETTE'] - model.cfg = config # save the config in the model for convenience - model.to(device) - model.eval() - return model - - -class LoadImage: - """A simple pipeline to load image.""" - - def __call__(self, results): - """Call function to load images into results. 
- - Args: - results (dict): A result dict contains the file name - of the image to be read. - - Returns: - dict: ``results`` will be returned containing loaded image. - """ - - if isinstance(results['img'], str): - results['filename'] = results['img'] - results['ori_filename'] = results['img'] - else: - results['filename'] = None - results['ori_filename'] = None - img = mmcv.imread(results['img']) - results['img'] = img - results['img_shape'] = img.shape - results['ori_shape'] = img.shape - return results - - -def inference_segmentor(model, img): - """Inference image(s) with the segmentor. - - Args: - model (nn.Module): The loaded segmentor. - imgs (str/ndarray or list[str/ndarray]): Either image files or loaded - images. - - Returns: - (list[Tensor]): The segmentation result. - """ - cfg = model.cfg - device = next(model.parameters()).device # model device - # build the data pipeline - test_pipeline = [LoadImage()] + cfg.data.test.pipeline[1:] - test_pipeline = Compose(test_pipeline) - # prepare data - data = dict(img=img) - data = test_pipeline(data) - data = collate([data], samples_per_gpu=1) - if next(model.parameters()).is_cuda: - # scatter to specified GPU - data = scatter(data, [device])[0] - else: - data['img_metas'] = [i.data[0] for i in data['img_metas']] - - # forward the model - with torch.no_grad(): - result = model(return_loss=False, rescale=True, **data) - return result - - -def show_result_pyplot(model, - img, - result, - palette=None, - fig_size=(15, 10), - opacity=0.5, - title='', - block=True): - """Visualize the segmentation results on the image. - - Args: - model (nn.Module): The loaded segmentor. - img (str or np.ndarray): Image filename or loaded image. - result (list): The segmentation result. - palette (list[list[int]]] | None): The palette of segmentation - map. If None is given, random palette will be generated. - Default: None - fig_size (tuple): Figure size of the pyplot figure. - opacity(float): Opacity of painted segmentation map. - Default 0.5. - Must be in (0, 1] range. - title (str): The title of pyplot figure. - Default is ''. - block (bool): Whether to block the pyplot figure. - Default is True. 
- """ - if hasattr(model, 'module'): - model = model.module - img = model.show_result( - img, result, palette=palette, show=False, opacity=opacity) - # plt.figure(figsize=fig_size) - # plt.imshow(mmcv.bgr2rgb(img)) - # plt.title(title) - # plt.tight_layout() - # plt.show(block=block) - return mmcv.bgr2rgb(img) diff --git a/spaces/MetaWabbit/Auto-GPT/README.md b/spaces/MetaWabbit/Auto-GPT/README.md deleted file mode 100644 index 5bf09b995f04f7af05d1314906b1b1ff39c20ddc..0000000000000000000000000000000000000000 --- a/spaces/MetaWabbit/Auto-GPT/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: AutoGPT -emoji: 🦾 -colorFrom: yellow -colorTo: yellow -sdk: gradio -sdk_version: 3.27.0 -app_file: ui/app.py -pinned: false -license: mit -duplicated_from: aliabid94/AutoGPT ---- - diff --git a/spaces/MrBodean/VoiceClone/synthesizer/train.py b/spaces/MrBodean/VoiceClone/synthesizer/train.py deleted file mode 100644 index a136cf9b38538ca7dc428adf209c0cbb40e890d7..0000000000000000000000000000000000000000 --- a/spaces/MrBodean/VoiceClone/synthesizer/train.py +++ /dev/null @@ -1,269 +0,0 @@ -import torch -import torch.nn.functional as F -from torch import optim -from torch.utils.data import DataLoader -from synthesizer import audio -from synthesizer.models.tacotron import Tacotron -from synthesizer.synthesizer_dataset import SynthesizerDataset, collate_synthesizer -from synthesizer.utils import ValueWindow, data_parallel_workaround -from synthesizer.utils.plot import plot_spectrogram -from synthesizer.utils.symbols import symbols -from synthesizer.utils.text import sequence_to_text -from vocoder.display import * -from datetime import datetime -import numpy as np -from pathlib import Path -import sys -import time -import platform - - -def np_now(x: torch.Tensor): return x.detach().cpu().numpy() - -def time_string(): - return datetime.now().strftime("%Y-%m-%d %H:%M") - -def train(run_id: str, syn_dir: str, models_dir: str, save_every: int, - backup_every: int, force_restart:bool, hparams): - - syn_dir = Path(syn_dir) - models_dir = Path(models_dir) - models_dir.mkdir(exist_ok=True) - - model_dir = models_dir.joinpath(run_id) - plot_dir = model_dir.joinpath("plots") - wav_dir = model_dir.joinpath("wavs") - mel_output_dir = model_dir.joinpath("mel-spectrograms") - meta_folder = model_dir.joinpath("metas") - model_dir.mkdir(exist_ok=True) - plot_dir.mkdir(exist_ok=True) - wav_dir.mkdir(exist_ok=True) - mel_output_dir.mkdir(exist_ok=True) - meta_folder.mkdir(exist_ok=True) - - weights_fpath = model_dir.joinpath(run_id).with_suffix(".pt") - metadata_fpath = syn_dir.joinpath("train.txt") - - print("Checkpoint path: {}".format(weights_fpath)) - print("Loading training data from: {}".format(metadata_fpath)) - print("Using model: Tacotron") - - # Book keeping - step = 0 - time_window = ValueWindow(100) - loss_window = ValueWindow(100) - - - # From WaveRNN/train_tacotron.py - if torch.cuda.is_available(): - device = torch.device("cuda") - - for session in hparams.tts_schedule: - _, _, _, batch_size = session - if batch_size % torch.cuda.device_count() != 0: - raise ValueError("`batch_size` must be evenly divisible by n_gpus!") - else: - device = torch.device("cpu") - print("Using device:", device) - - # Instantiate Tacotron Model - print("\nInitialising Tacotron Model...\n") - model = Tacotron(embed_dims=hparams.tts_embed_dims, - num_chars=len(symbols), - encoder_dims=hparams.tts_encoder_dims, - decoder_dims=hparams.tts_decoder_dims, - n_mels=hparams.num_mels, - fft_bins=hparams.num_mels, - 
postnet_dims=hparams.tts_postnet_dims, - encoder_K=hparams.tts_encoder_K, - lstm_dims=hparams.tts_lstm_dims, - postnet_K=hparams.tts_postnet_K, - num_highways=hparams.tts_num_highways, - dropout=hparams.tts_dropout, - stop_threshold=hparams.tts_stop_threshold, - speaker_embedding_size=hparams.speaker_embedding_size).to(device) - - # Initialize the optimizer - optimizer = optim.Adam(model.parameters()) - - # Load the weights - if force_restart or not weights_fpath.exists(): - print("\nStarting the training of Tacotron from scratch\n") - model.save(weights_fpath) - - # Embeddings metadata - char_embedding_fpath = meta_folder.joinpath("CharacterEmbeddings.tsv") - with open(char_embedding_fpath, "w", encoding="utf-8") as f: - for symbol in symbols: - if symbol == " ": - symbol = "\\s" # For visual purposes, swap space with \s - - f.write("{}\n".format(symbol)) - - else: - print("\nLoading weights at %s" % weights_fpath) - model.load(weights_fpath, optimizer) - print("Tacotron weights loaded from step %d" % model.step) - - # Initialize the dataset - metadata_fpath = syn_dir.joinpath("train.txt") - mel_dir = syn_dir.joinpath("mels") - embed_dir = syn_dir.joinpath("embeds") - dataset = SynthesizerDataset(metadata_fpath, mel_dir, embed_dir, hparams) - test_loader = DataLoader(dataset, - batch_size=1, - shuffle=True, - pin_memory=True) - - for i, session in enumerate(hparams.tts_schedule): - current_step = model.get_step() - - r, lr, max_step, batch_size = session - - training_steps = max_step - current_step - - # Do we need to change to the next session? - if current_step >= max_step: - # Are there no further sessions than the current one? - if i == len(hparams.tts_schedule) - 1: - # We have completed training. Save the model and exit - model.save(weights_fpath, optimizer) - break - else: - # There is a following session, go to it - continue - - model.r = r - - # Begin the training - simple_table([(f"Steps with r={r}", str(training_steps // 1000) + "k Steps"), - ("Batch Size", batch_size), - ("Learning Rate", lr), - ("Outputs/Step (r)", model.r)]) - - for p in optimizer.param_groups: - p["lr"] = lr - - data_loader = DataLoader(dataset, - collate_fn=lambda batch: collate_synthesizer(batch, r, hparams), - batch_size=batch_size, - num_workers=2 if platform.system() != "Windows" else 0, - shuffle=True, - pin_memory=True) - - total_iters = len(dataset) - steps_per_epoch = np.ceil(total_iters / batch_size).astype(np.int32) - epochs = np.ceil(training_steps / steps_per_epoch).astype(np.int32) - - for epoch in range(1, epochs+1): - for i, (texts, mels, embeds, idx) in enumerate(data_loader, 1): - start_time = time.time() - - # Generate stop tokens for training - stop = torch.ones(mels.shape[0], mels.shape[2]) - for j, k in enumerate(idx): - stop[j, :int(dataset.metadata[k][4])-1] = 0 - - texts = texts.to(device) - mels = mels.to(device) - embeds = embeds.to(device) - stop = stop.to(device) - - # Forward pass - # Parallelize model onto GPUS using workaround due to python bug - if device.type == "cuda" and torch.cuda.device_count() > 1: - m1_hat, m2_hat, attention, stop_pred = data_parallel_workaround(model, texts, - mels, embeds) - else: - m1_hat, m2_hat, attention, stop_pred = model(texts, mels, embeds) - - # Backward pass - m1_loss = F.mse_loss(m1_hat, mels) + F.l1_loss(m1_hat, mels) - m2_loss = F.mse_loss(m2_hat, mels) - stop_loss = F.binary_cross_entropy(stop_pred, stop) - - loss = m1_loss + m2_loss + stop_loss - - optimizer.zero_grad() - loss.backward() - - if hparams.tts_clip_grad_norm is not None: - 
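-                    # clip_grad_norm_ rescales the gradients in place so their global norm is at most
-                    # hparams.tts_clip_grad_norm; it returns the pre-clipping norm, checked for NaN below.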
grad_norm = torch.nn.utils.clip_grad_norm_(model.parameters(), hparams.tts_clip_grad_norm) - if np.isnan(grad_norm.cpu()): - print("grad_norm was NaN!") - - optimizer.step() - - time_window.append(time.time() - start_time) - loss_window.append(loss.item()) - - step = model.get_step() - k = step // 1000 - - msg = f"| Epoch: {epoch}/{epochs} ({i}/{steps_per_epoch}) | Loss: {loss_window.average:#.4} | {1./time_window.average:#.2} steps/s | Step: {k}k | " - stream(msg) - - # Backup or save model as appropriate - if backup_every != 0 and step % backup_every == 0 : - backup_fpath = Path("{}/{}_{}k.pt".format(str(weights_fpath.parent), run_id, k)) - model.save(backup_fpath, optimizer) - - if save_every != 0 and step % save_every == 0 : - # Must save latest optimizer state to ensure that resuming training - # doesn't produce artifacts - model.save(weights_fpath, optimizer) - - # Evaluate model to generate samples - epoch_eval = hparams.tts_eval_interval == -1 and i == steps_per_epoch # If epoch is done - step_eval = hparams.tts_eval_interval > 0 and step % hparams.tts_eval_interval == 0 # Every N steps - if epoch_eval or step_eval: - for sample_idx in range(hparams.tts_eval_num_samples): - # At most, generate samples equal to number in the batch - if sample_idx + 1 <= len(texts): - # Remove padding from mels using frame length in metadata - mel_length = int(dataset.metadata[idx[sample_idx]][4]) - mel_prediction = np_now(m2_hat[sample_idx]).T[:mel_length] - target_spectrogram = np_now(mels[sample_idx]).T[:mel_length] - attention_len = mel_length // model.r - - eval_model(attention=np_now(attention[sample_idx][:, :attention_len]), - mel_prediction=mel_prediction, - target_spectrogram=target_spectrogram, - input_seq=np_now(texts[sample_idx]), - step=step, - plot_dir=plot_dir, - mel_output_dir=mel_output_dir, - wav_dir=wav_dir, - sample_num=sample_idx + 1, - loss=loss, - hparams=hparams) - - # Break out of loop to update training schedule - if step >= max_step: - break - - # Add line break after every epoch - print("") - -def eval_model(attention, mel_prediction, target_spectrogram, input_seq, step, - plot_dir, mel_output_dir, wav_dir, sample_num, loss, hparams): - # Save some results for evaluation - attention_path = str(plot_dir.joinpath("attention_step_{}_sample_{}".format(step, sample_num))) - save_attention(attention, attention_path) - - # save predicted mel spectrogram to disk (debug) - mel_output_fpath = mel_output_dir.joinpath("mel-prediction-step-{}_sample_{}.npy".format(step, sample_num)) - np.save(str(mel_output_fpath), mel_prediction, allow_pickle=False) - - # save griffin lim inverted wav for debug (mel -> wav) - wav = audio.inv_mel_spectrogram(mel_prediction.T, hparams) - wav_fpath = wav_dir.joinpath("step-{}-wave-from-mel_sample_{}.wav".format(step, sample_num)) - audio.save_wav(wav, str(wav_fpath), sr=hparams.sample_rate) - - # save real and predicted mel-spectrogram plot to disk (control purposes) - spec_fpath = plot_dir.joinpath("step-{}-mel-spectrogram_sample_{}.png".format(step, sample_num)) - title_str = "{}, {}, step={}, loss={:.5f}".format("Tacotron", time_string(), step, loss) - plot_spectrogram(mel_prediction, str(spec_fpath), title=title_str, - target_spectrogram=target_spectrogram, - max_len=target_spectrogram.size // hparams.num_mels) - print("Input at step {}: {}".format(step, sequence_to_text(input_seq))) diff --git a/spaces/MrD05/text-generation-webui-space/extensions/llama_prompts/script.py b/spaces/MrD05/text-generation-webui-space/extensions/llama_prompts/script.py 
deleted file mode 100644 index 22c96f7c2d6763213a728d77ee6666496d9c4aa3..0000000000000000000000000000000000000000 --- a/spaces/MrD05/text-generation-webui-space/extensions/llama_prompts/script.py +++ /dev/null @@ -1,18 +0,0 @@ -import gradio as gr -import modules.shared as shared -import pandas as pd - -df = pd.read_csv("https://raw.githubusercontent.com/devbrones/llama-prompts/main/prompts/prompts.csv") - -def get_prompt_by_name(name): - if name == 'None': - return '' - else: - return df[df['Prompt name'] == name].iloc[0]['Prompt'].replace('\\n', '\n') - -def ui(): - if not shared.args.chat or shared.args.cai_chat: - choices = ['None'] + list(df['Prompt name']) - - prompts_menu = gr.Dropdown(value=choices[0], choices=choices, label='Prompt') - prompts_menu.change(get_prompt_by_name, prompts_menu, shared.gradio['textbox']) diff --git a/spaces/NCTCMumbai/NCTC/models/official/nlp/bert/squad_evaluate_v2_0.py b/spaces/NCTCMumbai/NCTC/models/official/nlp/bert/squad_evaluate_v2_0.py deleted file mode 100644 index 54fb84e993c3459ffdd2b3d90f870e4d178ab54f..0000000000000000000000000000000000000000 --- a/spaces/NCTCMumbai/NCTC/models/official/nlp/bert/squad_evaluate_v2_0.py +++ /dev/null @@ -1,252 +0,0 @@ -# Copyright 2019 The TensorFlow Authors. All Rights Reserved. -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -# ============================================================================== -"""Evaluation script for SQuAD version 2.0. - -The functions are copied and modified from -https://raw.githubusercontent.com/white127/SQUAD-2.0-bidaf/master/evaluate-v2.0.py - -In addition to basic functionality, we also compute additional statistics and -plot precision-recall curves if an additional na_prob.json file is provided. -This file is expected to map question ID's to the model's predicted probability -that a question is unanswerable. 
-""" - -from __future__ import absolute_import -from __future__ import division -from __future__ import print_function - -import collections -import re -import string - -from absl import logging - - -def _make_qid_to_has_ans(dataset): - qid_to_has_ans = {} - for article in dataset: - for p in article['paragraphs']: - for qa in p['qas']: - qid_to_has_ans[qa['id']] = bool(qa['answers']) - return qid_to_has_ans - - -def _normalize_answer(s): - """Lower text and remove punctuation, articles and extra whitespace.""" - def remove_articles(text): - regex = re.compile(r'\b(a|an|the)\b', re.UNICODE) - return re.sub(regex, ' ', text) - def white_space_fix(text): - return ' '.join(text.split()) - def remove_punc(text): - exclude = set(string.punctuation) - return ''.join(ch for ch in text if ch not in exclude) - def lower(text): - return text.lower() - return white_space_fix(remove_articles(remove_punc(lower(s)))) - - -def _get_tokens(s): - if not s: return [] - return _normalize_answer(s).split() - - -def _compute_exact(a_gold, a_pred): - return int(_normalize_answer(a_gold) == _normalize_answer(a_pred)) - - -def _compute_f1(a_gold, a_pred): - """Compute F1-score.""" - gold_toks = _get_tokens(a_gold) - pred_toks = _get_tokens(a_pred) - common = collections.Counter(gold_toks) & collections.Counter(pred_toks) - num_same = sum(common.values()) - if not gold_toks or not pred_toks: - # If either is no-answer, then F1 is 1 if they agree, 0 otherwise - return int(gold_toks == pred_toks) - if num_same == 0: - return 0 - precision = 1.0 * num_same / len(pred_toks) - recall = 1.0 * num_same / len(gold_toks) - f1 = (2 * precision * recall) / (precision + recall) - return f1 - - -def _get_raw_scores(dataset, predictions): - """Compute raw scores.""" - exact_scores = {} - f1_scores = {} - for article in dataset: - for p in article['paragraphs']: - for qa in p['qas']: - qid = qa['id'] - gold_answers = [a['text'] for a in qa['answers'] - if _normalize_answer(a['text'])] - if not gold_answers: - # For unanswerable questions, only correct answer is empty string - gold_answers = [''] - if qid not in predictions: - logging.error('Missing prediction for %s', qid) - continue - a_pred = predictions[qid] - # Take max over all gold answers - exact_scores[qid] = max(_compute_exact(a, a_pred) for a in gold_answers) - f1_scores[qid] = max(_compute_f1(a, a_pred) for a in gold_answers) - return exact_scores, f1_scores - - -def _apply_no_ans_threshold( - scores, na_probs, qid_to_has_ans, na_prob_thresh=1.0): - new_scores = {} - for qid, s in scores.items(): - pred_na = na_probs[qid] > na_prob_thresh - if pred_na: - new_scores[qid] = float(not qid_to_has_ans[qid]) - else: - new_scores[qid] = s - return new_scores - - -def _make_eval_dict(exact_scores, f1_scores, qid_list=None): - """Make evaluation result dictionary.""" - if not qid_list: - total = len(exact_scores) - return collections.OrderedDict([ - ('exact', 100.0 * sum(exact_scores.values()) / total), - ('f1', 100.0 * sum(f1_scores.values()) / total), - ('total', total), - ]) - else: - total = len(qid_list) - return collections.OrderedDict([ - ('exact', 100.0 * sum(exact_scores[k] for k in qid_list) / total), - ('f1', 100.0 * sum(f1_scores[k] for k in qid_list) / total), - ('total', total), - ]) - - -def _merge_eval(main_eval, new_eval, prefix): - for k in new_eval: - main_eval['%s_%s' % (prefix, k)] = new_eval[k] - - -def _make_precision_recall_eval(scores, na_probs, num_true_pos, qid_to_has_ans): - """Make evaluation dictionary containing average recision recall.""" - 
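-  # Sweep a no-answer-probability threshold over questions sorted by na_prob: at each distinct
-  # probability, compute precision/recall over has-answer questions and accumulate
-  # precision * delta(recall), i.e. average precision, returned as a percentage.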
qid_list = sorted(na_probs, key=lambda k: na_probs[k]) - true_pos = 0.0 - cur_p = 1.0 - cur_r = 0.0 - precisions = [1.0] - recalls = [0.0] - avg_prec = 0.0 - for i, qid in enumerate(qid_list): - if qid_to_has_ans[qid]: - true_pos += scores[qid] - cur_p = true_pos / float(i+1) - cur_r = true_pos / float(num_true_pos) - if i == len(qid_list) - 1 or na_probs[qid] != na_probs[qid_list[i+1]]: - # i.e., if we can put a threshold after this point - avg_prec += cur_p * (cur_r - recalls[-1]) - precisions.append(cur_p) - recalls.append(cur_r) - return {'ap': 100.0 * avg_prec} - - -def _run_precision_recall_analysis( - main_eval, exact_raw, f1_raw, na_probs, qid_to_has_ans): - """Run precision recall analysis and return result dictionary.""" - num_true_pos = sum(1 for v in qid_to_has_ans.values() if v) - if num_true_pos == 0: - return - pr_exact = _make_precision_recall_eval( - exact_raw, na_probs, num_true_pos, qid_to_has_ans) - pr_f1 = _make_precision_recall_eval( - f1_raw, na_probs, num_true_pos, qid_to_has_ans) - oracle_scores = {k: float(v) for k, v in qid_to_has_ans.items()} - pr_oracle = _make_precision_recall_eval( - oracle_scores, na_probs, num_true_pos, qid_to_has_ans) - _merge_eval(main_eval, pr_exact, 'pr_exact') - _merge_eval(main_eval, pr_f1, 'pr_f1') - _merge_eval(main_eval, pr_oracle, 'pr_oracle') - - -def _find_best_thresh(predictions, scores, na_probs, qid_to_has_ans): - """Find the best threshold for no answer probability.""" - num_no_ans = sum(1 for k in qid_to_has_ans if not qid_to_has_ans[k]) - cur_score = num_no_ans - best_score = cur_score - best_thresh = 0.0 - qid_list = sorted(na_probs, key=lambda k: na_probs[k]) - for qid in qid_list: - if qid not in scores: continue - if qid_to_has_ans[qid]: - diff = scores[qid] - else: - if predictions[qid]: - diff = -1 - else: - diff = 0 - cur_score += diff - if cur_score > best_score: - best_score = cur_score - best_thresh = na_probs[qid] - return 100.0 * best_score / len(scores), best_thresh - - -def _find_all_best_thresh( - main_eval, predictions, exact_raw, f1_raw, na_probs, qid_to_has_ans): - best_exact, exact_thresh = _find_best_thresh( - predictions, exact_raw, na_probs, qid_to_has_ans) - best_f1, f1_thresh = _find_best_thresh( - predictions, f1_raw, na_probs, qid_to_has_ans) - main_eval['final_exact'] = best_exact - main_eval['final_exact_thresh'] = exact_thresh - main_eval['final_f1'] = best_f1 - main_eval['final_f1_thresh'] = f1_thresh - - -def evaluate(dataset, predictions, na_probs=None): - """Evaluate prediction results.""" - new_orig_data = [] - for article in dataset: - for p in article['paragraphs']: - for qa in p['qas']: - if qa['id'] in predictions: - new_para = {'qas': [qa]} - new_article = {'paragraphs': [new_para]} - new_orig_data.append(new_article) - dataset = new_orig_data - - if na_probs is None: - na_probs = {k: 0.0 for k in predictions} - qid_to_has_ans = _make_qid_to_has_ans(dataset) # maps qid to True/False - has_ans_qids = [k for k, v in qid_to_has_ans.items() if v] - no_ans_qids = [k for k, v in qid_to_has_ans.items() if not v] - exact_raw, f1_raw = _get_raw_scores(dataset, predictions) - exact_thresh = _apply_no_ans_threshold(exact_raw, na_probs, qid_to_has_ans) - f1_thresh = _apply_no_ans_threshold(f1_raw, na_probs, qid_to_has_ans) - out_eval = _make_eval_dict(exact_thresh, f1_thresh) - if has_ans_qids: - has_ans_eval = _make_eval_dict( - exact_thresh, f1_thresh, qid_list=has_ans_qids) - _merge_eval(out_eval, has_ans_eval, 'HasAns') - if no_ans_qids: - no_ans_eval = _make_eval_dict(exact_thresh, 
f1_thresh, qid_list=no_ans_qids) - _merge_eval(out_eval, no_ans_eval, 'NoAns') - - _find_all_best_thresh( - out_eval, predictions, exact_raw, f1_raw, na_probs, qid_to_has_ans) - _run_precision_recall_analysis( - out_eval, exact_raw, f1_raw, na_probs, qid_to_has_ans) - return out_eval diff --git a/spaces/NMEX/vits-uma-genshin-honkai/attentions.py b/spaces/NMEX/vits-uma-genshin-honkai/attentions.py deleted file mode 100644 index 86bc73b5fe98cc7b443e9078553920346c996707..0000000000000000000000000000000000000000 --- a/spaces/NMEX/vits-uma-genshin-honkai/attentions.py +++ /dev/null @@ -1,300 +0,0 @@ -import math -import torch -from torch import nn -from torch.nn import functional as F - -import commons -from modules import LayerNorm - - -class Encoder(nn.Module): - def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., window_size=4, **kwargs): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.window_size = window_size - - self.drop = nn.Dropout(p_dropout) - self.attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, window_size=window_size)) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout)) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask): - attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.attn_layers[i](x, x, attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class Decoder(nn.Module): - def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., proximal_bias=False, proximal_init=True, **kwargs): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - - self.drop = nn.Dropout(p_dropout) - self.self_attn_layers = nn.ModuleList() - self.norm_layers_0 = nn.ModuleList() - self.encdec_attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.self_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, proximal_bias=proximal_bias, proximal_init=proximal_init)) - self.norm_layers_0.append(LayerNorm(hidden_channels)) - self.encdec_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout)) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout, causal=True)) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask, h, h_mask): - """ - x: decoder input - h: encoder output - 
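-        x_mask, h_mask: padding masks for the decoder input and the encoder output (x_mask supplies
-            the length for the causal self-attention mask; both build the encoder-decoder attention mask)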
""" - self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to(device=x.device, dtype=x.dtype) - encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.self_attn_layers[i](x, x, self_attn_mask) - y = self.drop(y) - x = self.norm_layers_0[i](x + y) - - y = self.encdec_attn_layers[i](x, h, encdec_attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class MultiHeadAttention(nn.Module): - def __init__(self, channels, out_channels, n_heads, p_dropout=0., window_size=None, heads_share=True, block_length=None, proximal_bias=False, proximal_init=False): - super().__init__() - assert channels % n_heads == 0 - - self.channels = channels - self.out_channels = out_channels - self.n_heads = n_heads - self.p_dropout = p_dropout - self.window_size = window_size - self.heads_share = heads_share - self.block_length = block_length - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - self.attn = None - - self.k_channels = channels // n_heads - self.conv_q = nn.Conv1d(channels, channels, 1) - self.conv_k = nn.Conv1d(channels, channels, 1) - self.conv_v = nn.Conv1d(channels, channels, 1) - self.conv_o = nn.Conv1d(channels, out_channels, 1) - self.drop = nn.Dropout(p_dropout) - - if window_size is not None: - n_heads_rel = 1 if heads_share else n_heads - rel_stddev = self.k_channels**-0.5 - self.emb_rel_k = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev) - self.emb_rel_v = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev) - - nn.init.xavier_uniform_(self.conv_q.weight) - nn.init.xavier_uniform_(self.conv_k.weight) - nn.init.xavier_uniform_(self.conv_v.weight) - if proximal_init: - with torch.no_grad(): - self.conv_k.weight.copy_(self.conv_q.weight) - self.conv_k.bias.copy_(self.conv_q.bias) - - def forward(self, x, c, attn_mask=None): - q = self.conv_q(x) - k = self.conv_k(c) - v = self.conv_v(c) - - x, self.attn = self.attention(q, k, v, mask=attn_mask) - - x = self.conv_o(x) - return x - - def attention(self, query, key, value, mask=None): - # reshape [b, d, t] -> [b, n_h, t, d_k] - b, d, t_s, t_t = (*key.size(), query.size(2)) - query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3) - key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - - scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1)) - if self.window_size is not None: - assert t_s == t_t, "Relative attention is only available for self-attention." - key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s) - rel_logits = self._matmul_with_relative_keys(query /math.sqrt(self.k_channels), key_relative_embeddings) - scores_local = self._relative_position_to_absolute_position(rel_logits) - scores = scores + scores_local - if self.proximal_bias: - assert t_s == t_t, "Proximal bias is only available for self-attention." - scores = scores + self._attention_bias_proximal(t_s).to(device=scores.device, dtype=scores.dtype) - if mask is not None: - scores = scores.masked_fill(mask == 0, -1e4) - if self.block_length is not None: - assert t_s == t_t, "Local attention is only available for self-attention." 
- block_mask = torch.ones_like(scores).triu(-self.block_length).tril(self.block_length) - scores = scores.masked_fill(block_mask == 0, -1e4) - p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s] - p_attn = self.drop(p_attn) - output = torch.matmul(p_attn, value) - if self.window_size is not None: - relative_weights = self._absolute_position_to_relative_position(p_attn) - value_relative_embeddings = self._get_relative_embeddings(self.emb_rel_v, t_s) - output = output + self._matmul_with_relative_values(relative_weights, value_relative_embeddings) - output = output.transpose(2, 3).contiguous().view(b, d, t_t) # [b, n_h, t_t, d_k] -> [b, d, t_t] - return output, p_attn - - def _matmul_with_relative_values(self, x, y): - """ - x: [b, h, l, m] - y: [h or 1, m, d] - ret: [b, h, l, d] - """ - ret = torch.matmul(x, y.unsqueeze(0)) - return ret - - def _matmul_with_relative_keys(self, x, y): - """ - x: [b, h, l, d] - y: [h or 1, m, d] - ret: [b, h, l, m] - """ - ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1)) - return ret - - def _get_relative_embeddings(self, relative_embeddings, length): - max_relative_position = 2 * self.window_size + 1 - # Pad first before slice to avoid using cond ops. - pad_length = max(length - (self.window_size + 1), 0) - slice_start_position = max((self.window_size + 1) - length, 0) - slice_end_position = slice_start_position + 2 * length - 1 - if pad_length > 0: - padded_relative_embeddings = F.pad( - relative_embeddings, - commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]])) - else: - padded_relative_embeddings = relative_embeddings - used_relative_embeddings = padded_relative_embeddings[:,slice_start_position:slice_end_position] - return used_relative_embeddings - - def _relative_position_to_absolute_position(self, x): - """ - x: [b, h, l, 2*l-1] - ret: [b, h, l, l] - """ - batch, heads, length, _ = x.size() - # Concat columns of pad to shift from relative to absolute indexing. - x = F.pad(x, commons.convert_pad_shape([[0,0],[0,0],[0,0],[0,1]])) - - # Concat extra elements so to add up to shape (len+1, 2*len-1). - x_flat = x.view([batch, heads, length * 2 * length]) - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0,0],[0,0],[0,length-1]])) - - # Reshape and slice out the padded elements. - x_final = x_flat.view([batch, heads, length+1, 2*length-1])[:, :, :length, length-1:] - return x_final - - def _absolute_position_to_relative_position(self, x): - """ - x: [b, h, l, l] - ret: [b, h, l, 2*l-1] - """ - batch, heads, length, _ = x.size() - # padd along column - x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length-1]])) - x_flat = x.view([batch, heads, length**2 + length*(length -1)]) - # add 0's in the beginning that will skew the elements after reshape - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]])) - x_final = x_flat.view([batch, heads, length, 2*length])[:,:,:,1:] - return x_final - - def _attention_bias_proximal(self, length): - """Bias for self-attention to encourage attention to close positions. - Args: - length: an integer scalar. 
- Returns: - a Tensor with shape [1, 1, length, length] - """ - r = torch.arange(length, dtype=torch.float32) - diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1) - return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0) - - -class FFN(nn.Module): - def __init__(self, in_channels, out_channels, filter_channels, kernel_size, p_dropout=0., activation=None, causal=False): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.activation = activation - self.causal = causal - - if causal: - self.padding = self._causal_padding - else: - self.padding = self._same_padding - - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size) - self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size) - self.drop = nn.Dropout(p_dropout) - - def forward(self, x, x_mask): - x = self.conv_1(self.padding(x * x_mask)) - if self.activation == "gelu": - x = x * torch.sigmoid(1.702 * x) - else: - x = torch.relu(x) - x = self.drop(x) - x = self.conv_2(self.padding(x * x_mask)) - return x * x_mask - - def _causal_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = self.kernel_size - 1 - pad_r = 0 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x - - def _same_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = (self.kernel_size - 1) // 2 - pad_r = self.kernel_size // 2 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x diff --git a/spaces/NingKanae/anime-voice-generator/app.py b/spaces/NingKanae/anime-voice-generator/app.py deleted file mode 100644 index e41932ae3e0a20837c5740859b4be34253c59b82..0000000000000000000000000000000000000000 --- a/spaces/NingKanae/anime-voice-generator/app.py +++ /dev/null @@ -1,264 +0,0 @@ -# coding=utf-8 -import os -import re -import argparse -import utils -import commons -import json -import torch -import gradio as gr -from models import SynthesizerTrn -from text import text_to_sequence, _clean_text -from torch import no_grad, LongTensor -import gradio.processing_utils as gr_processing_utils -import logging -logging.getLogger('numba').setLevel(logging.WARNING) -limitation = os.getenv("SYSTEM") == "spaces" # limit text and audio length in huggingface spaces - -hps_ms = utils.get_hparams_from_file(r'config/config.json') - -audio_postprocess_ori = gr.Audio.postprocess - -def audio_postprocess(self, y): - data = audio_postprocess_ori(self, y) - if data is None: - return None - return gr_processing_utils.encode_url_or_file_to_base64(data["name"]) - - -gr.Audio.postprocess = audio_postprocess - -def get_text(text, hps, is_symbol): - text_norm, clean_text = text_to_sequence(text, hps.symbols, [] if is_symbol else hps.data.text_cleaners) - if hps.data.add_blank: - text_norm = commons.intersperse(text_norm, 0) - text_norm = LongTensor(text_norm) - return text_norm, clean_text - -def create_tts_fn(net_g_ms, speaker_id): - def tts_fn(text, language, noise_scale, noise_scale_w, length_scale, is_symbol): - text = text.replace('\n', ' ').replace('\r', '').replace(" ", "") - if limitation: - text_len = len(re.sub("\[([A-Z]{2})\]", "", text)) - max_len = 100 - if is_symbol: - max_len *= 3 - if text_len > max_len: - return "Error: Text is too long", None - if not is_symbol: - if language == 0: - text = f"[ZH]{text}[ZH]" - elif language == 1: - text = f"[JA]{text}[JA]" - else: - 
text = f"{text}" - stn_tst, clean_text = get_text(text, hps_ms, is_symbol) - with no_grad(): - x_tst = stn_tst.unsqueeze(0).to(device) - x_tst_lengths = LongTensor([stn_tst.size(0)]).to(device) - sid = LongTensor([speaker_id]).to(device) - audio = net_g_ms.infer(x_tst, x_tst_lengths, sid=sid, noise_scale=noise_scale, noise_scale_w=noise_scale_w, - length_scale=length_scale)[0][0, 0].data.cpu().float().numpy() - - return "Success", (22050, audio) - return tts_fn - -def create_to_symbol_fn(hps): - def to_symbol_fn(is_symbol_input, input_text, temp_lang): - if temp_lang == 0: - clean_text = f'[ZH]{input_text}[ZH]' - elif temp_lang == 1: - clean_text = f'[JA]{input_text}[JA]' - else: - clean_text = input_text - return _clean_text(clean_text, hps.data.text_cleaners) if is_symbol_input else '' - - return to_symbol_fn -def change_lang(language): - if language == 0: - return 0.6, 0.668, 1.2 - elif language == 1: - return 0.6, 0.668, 1 - else: - return 0.6, 0.668, 1 - -download_audio_js = """ -() =>{{ - let root = document.querySelector("body > gradio-app"); - if (root.shadowRoot != null) - root = root.shadowRoot; - let audio = root.querySelector("#tts-audio-{audio_id}").querySelector("audio"); - let text = root.querySelector("#input-text-{audio_id}").querySelector("textarea"); - if (audio == undefined) - return; - text = text.value; - if (text == undefined) - text = Math.floor(Math.random()*100000000); - audio = audio.src; - let oA = document.createElement("a"); - oA.download = text.substr(0, 20)+'.wav'; - oA.href = audio; - document.body.appendChild(oA); - oA.click(); - oA.remove(); -}} -""" - -if __name__ == '__main__': - parser = argparse.ArgumentParser() - parser.add_argument('--device', type=str, default='cpu') - parser.add_argument('--api', action="store_true", default=False) - parser.add_argument("--share", action="store_true", default=False, help="share gradio app") - args = parser.parse_args() - device = torch.device(args.device) - - models = [] - with open("pretrained_models/info.json", "r", encoding="utf-8") as f: - models_info = json.load(f) - for i, info in models_info.items(): - if not info['enable']: - continue - sid = info['sid'] - name_en = info['name_en'] - name_zh = info['name_zh'] - title = info['title'] - cover = f"pretrained_models/{i}/{info['cover']}" - example = info['example'] - language = info['language'] - net_g_ms = SynthesizerTrn( - len(hps_ms.symbols), - hps_ms.data.filter_length // 2 + 1, - hps_ms.train.segment_size // hps_ms.data.hop_length, - n_speakers=hps_ms.data.n_speakers if info['type'] == "multi" else 0, - **hps_ms.model) - utils.load_checkpoint(f'pretrained_models/{i}/{i}.pth', net_g_ms, None) - _ = net_g_ms.eval().to(device) - models.append((sid, name_en, name_zh, title, cover, example, language, net_g_ms, create_tts_fn(net_g_ms, sid), create_to_symbol_fn(hps_ms))) - with gr.Blocks() as app: - gr.Markdown( - "#
    vits-models\n" - "##
    Please do not generate content that could infringe upon the rights of, or cause harm to, individuals or organizations.\n" - "##
    ·请不要生成会对个人以及组织造成侵害的内容\n" - "![visitor badge](https://visitor-badge.glitch.me/badge?page_id=sayashi.vits-models)\n\n" - "[Open In Colab]" - "(https://colab.research.google.com/drive/10QOk9NPgoKZUXkIhhuVaZ7SYra1MPMKH?usp=share_link)" - " without queue and length limitation.(无需等待队列,并且没有长度限制)\n\n" - "[Finetune your own model](https://github.com/SayaSS/vits-finetuning)" - ) - - with gr.Tabs(): - with gr.TabItem("EN"): - for (sid, name_en, name_zh, title, cover, example, language, net_g_ms, tts_fn, to_symbol_fn) in models: - with gr.TabItem(name_en): - with gr.Row(): - gr.Markdown( - '
    ' - f'{title}' - f'' if cover else "" - '
    ' - ) - with gr.Row(): - with gr.Column(): - input_text = gr.Textbox(label="Text (100 words limitation)" if limitation else "Text", lines=5, value=example, elem_id=f"input-text-en-{name_en.replace(' ','')}") - lang = gr.Dropdown(label="Language", choices=["Chinese", "Japanese", "Mix(wrap the Chinese text with [ZH][ZH], wrap the Japanese text with [JA][JA])"], - type="index", value=language) - with gr.Accordion(label="Advanced Options", open=False): - symbol_input = gr.Checkbox(value=False, label="Symbol input") - symbol_list = gr.Dataset(label="Symbol list", components=[input_text], - samples=[[x] for x in hps_ms.symbols]) - symbol_list_json = gr.Json(value=hps_ms.symbols, visible=False) - btn = gr.Button(value="Generate", variant="primary") - with gr.Row(): - ns = gr.Slider(label="noise_scale", minimum=0.1, maximum=1.0, step=0.1, value=0.6, interactive=True) - nsw = gr.Slider(label="noise_scale_w", minimum=0.1, maximum=1.0, step=0.1, value=0.668, interactive=True) - ls = gr.Slider(label="length_scale", minimum=0.1, maximum=2.0, step=0.1, value=1.2 if language=="Chinese" else 1, interactive=True) - with gr.Column(): - o1 = gr.Textbox(label="Output Message") - o2 = gr.Audio(label="Output Audio", elem_id=f"tts-audio-en-{name_en.replace(' ','')}") - download = gr.Button("Download Audio") - btn.click(tts_fn, inputs=[input_text, lang, ns, nsw, ls, symbol_input], outputs=[o1, o2], api_name=f"tts-{name_en}") - download.click(None, [], [], _js=download_audio_js.format(audio_id=f"en-{name_en.replace(' ', '')}")) - lang.change(change_lang, inputs=[lang], outputs=[ns, nsw, ls]) - symbol_input.change( - to_symbol_fn, - [symbol_input, input_text, lang], - [input_text] - ) - symbol_list.click(None, [symbol_list, symbol_list_json], [input_text], - _js=f""" - (i,symbols) => {{ - let root = document.querySelector("body > gradio-app"); - if (root.shadowRoot != null) - root = root.shadowRoot; - let text_input = root.querySelector("#input-text-en-{name_en.replace(' ', '')}").querySelector("textarea"); - let startPos = text_input.selectionStart; - let endPos = text_input.selectionEnd; - let oldTxt = text_input.value; - let result = oldTxt.substring(0, startPos) + symbols[i] + oldTxt.substring(endPos); - text_input.value = result; - let x = window.scrollX, y = window.scrollY; - text_input.focus(); - text_input.selectionStart = startPos + symbols[i].length; - text_input.selectionEnd = startPos + symbols[i].length; - text_input.blur(); - window.scrollTo(x, y); - return text_input.value; - }}""") - with gr.TabItem("中文"): - for (sid, name_en, name_zh, title, cover, example, language, net_g_ms, tts_fn, to_symbol_fn) in models: - with gr.TabItem(name_zh): - with gr.Row(): - gr.Markdown( - '
    ' - f'{title}' - f'' if cover else "" - '
    ' - ) - with gr.Row(): - with gr.Column(): - input_text = gr.Textbox(label="文本 (100字上限)" if limitation else "文本", lines=5, value=example, elem_id=f"input-text-zh-{name_zh}") - lang = gr.Dropdown(label="语言", choices=["中文", "日语", "中日混合(中文用[ZH][ZH]包裹起来,日文用[JA][JA]包裹起来)"], - type="index", value="中文"if language == "Chinese" else "日语") - with gr.Accordion(label="高级选项", open=False): - symbol_input = gr.Checkbox(value=False, label="符号输入") - symbol_list = gr.Dataset(label="符号列表", components=[input_text], - samples=[[x] for x in hps_ms.symbols]) - symbol_list_json = gr.Json(value=hps_ms.symbols, visible=False) - btn = gr.Button(value="生成", variant="primary") - with gr.Row(): - ns = gr.Slider(label="控制感情变化程度", minimum=0.1, maximum=1.0, step=0.1, value=0.6, interactive=True) - nsw = gr.Slider(label="控制音素发音长度", minimum=0.1, maximum=1.0, step=0.1, value=0.668, interactive=True) - ls = gr.Slider(label="控制整体语速", minimum=0.1, maximum=2.0, step=0.1, value=1.2 if language=="Chinese" else 1, interactive=True) - with gr.Column(): - o1 = gr.Textbox(label="输出信息") - o2 = gr.Audio(label="输出音频", elem_id=f"tts-audio-zh-{name_zh}") - download = gr.Button("下载音频") - btn.click(tts_fn, inputs=[input_text, lang, ns, nsw, ls, symbol_input], outputs=[o1, o2]) - download.click(None, [], [], _js=download_audio_js.format(audio_id=f"zh-{name_zh}")) - lang.change(change_lang, inputs=[lang], outputs=[ns, nsw, ls]) - symbol_input.change( - to_symbol_fn, - [symbol_input, input_text, lang], - [input_text] - ) - symbol_list.click(None, [symbol_list, symbol_list_json], [input_text], - _js=f""" - (i,symbols) => {{ - let root = document.querySelector("body > gradio-app"); - if (root.shadowRoot != null) - root = root.shadowRoot; - let text_input = root.querySelector("#input-text-zh-{name_zh}").querySelector("textarea"); - let startPos = text_input.selectionStart; - let endPos = text_input.selectionEnd; - let oldTxt = text_input.value; - let result = oldTxt.substring(0, startPos) + symbols[i] + oldTxt.substring(endPos); - text_input.value = result; - let x = window.scrollX, y = window.scrollY; - text_input.focus(); - text_input.selectionStart = startPos + symbols[i].length; - text_input.selectionEnd = startPos + symbols[i].length; - text_input.blur(); - window.scrollTo(x, y); - return text_input.value; - }}""") - app.queue(concurrency_count=1, api_open=args.api).launch(share=args.share) diff --git a/spaces/OAOA/DifFace/basicsr/utils/img_process_util.py b/spaces/OAOA/DifFace/basicsr/utils/img_process_util.py deleted file mode 100644 index 52e02f09930dbf13bcd12bbe16b76e4fce52578e..0000000000000000000000000000000000000000 --- a/spaces/OAOA/DifFace/basicsr/utils/img_process_util.py +++ /dev/null @@ -1,83 +0,0 @@ -import cv2 -import numpy as np -import torch -from torch.nn import functional as F - - -def filter2D(img, kernel): - """PyTorch version of cv2.filter2D - - Args: - img (Tensor): (b, c, h, w) - kernel (Tensor): (b, k, k) - """ - k = kernel.size(-1) - b, c, h, w = img.size() - if k % 2 == 1: - img = F.pad(img, (k // 2, k // 2, k // 2, k // 2), mode='reflect') - else: - raise ValueError('Wrong kernel size') - - ph, pw = img.size()[-2:] - - if kernel.size(0) == 1: - # apply the same kernel to all batch images - img = img.view(b * c, 1, ph, pw) - kernel = kernel.view(1, 1, k, k) - return F.conv2d(img, kernel, padding=0).view(b, c, h, w) - else: - img = img.view(1, b * c, ph, pw) - kernel = kernel.view(b, 1, k, k).repeat(1, c, 1, 1).view(b * c, 1, k, k) - return F.conv2d(img, kernel, groups=b * c).view(b, c, h, w) - - -def 
usm_sharp(img, weight=0.5, radius=50, threshold=10): - """USM sharpening. - - Input image: I; Blurry image: B. - 1. sharp = I + weight * (I - B) - 2. Mask = 1 if abs(I - B) > threshold, else: 0 - 3. Blur the mask: Soft_mask = GaussianBlur(Mask) - 4. Out = Soft_mask * sharp + (1 - Soft_mask) * I - - - Args: - img (Numpy array): Input image, HWC, BGR; float32, [0, 1]. - weight (float): Sharp weight. Default: 0.5. - radius (float): Kernel size of Gaussian blur. Default: 50. - threshold (int): Threshold on abs(I - B) * 255 for the sharpening mask. Default: 10. - """ - if radius % 2 == 0: - radius += 1 - blur = cv2.GaussianBlur(img, (radius, radius), 0) - residual = img - blur - mask = np.abs(residual) * 255 > threshold - mask = mask.astype('float32') - soft_mask = cv2.GaussianBlur(mask, (radius, radius), 0) - - sharp = img + weight * residual - sharp = np.clip(sharp, 0, 1) - return soft_mask * sharp + (1 - soft_mask) * img - - -class USMSharp(torch.nn.Module): - - def __init__(self, radius=50, sigma=0): - super(USMSharp, self).__init__() - if radius % 2 == 0: - radius += 1 - self.radius = radius - kernel = cv2.getGaussianKernel(radius, sigma) - kernel = torch.FloatTensor(np.dot(kernel, kernel.transpose())).unsqueeze_(0) - self.register_buffer('kernel', kernel) - - def forward(self, img, weight=0.5, threshold=10): - blur = filter2D(img, self.kernel) - residual = img - blur - - mask = torch.abs(residual) * 255 > threshold - mask = mask.float() - soft_mask = filter2D(mask, self.kernel) - sharp = img + weight * residual - sharp = torch.clip(sharp, 0, 1) - return soft_mask * sharp + (1 - soft_mask) * img diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/modules/beamable_mm.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/modules/beamable_mm.py deleted file mode 100644 index eff1a4607f600c71210e6b914985dc48731aae86..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/modules/beamable_mm.py +++ /dev/null @@ -1,49 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import torch -import torch.nn as nn - - -class BeamableMM(nn.Module): - """This module provides an optimized MM for beam decoding with attention. - - It leverages the fact that the source-side of the input is replicated beam - times and the target-side of the input is of width one. This layer speeds up - inference by replacing the inputs {(bsz x 1 x nhu), (bsz x sz2 x nhu)} - with smaller inputs {(bsz/beam x beam x nhu), (bsz/beam x sz2 x nhu)}.
- """ - - def __init__(self, beam_size=None): - super(BeamableMM, self).__init__() - self.beam_size = beam_size - - def forward(self, input1, input2): - if ( - not self.training - and self.beam_size is not None # test mode - and input1.dim() == 3 # beam size is set - and input1.size(1) # only support batched input - == 1 # single time step update - ): - bsz, beam = input1.size(0), self.beam_size - - # bsz x 1 x nhu --> bsz/beam x beam x nhu - input1 = input1[:, 0, :].unfold(0, beam, beam).transpose(2, 1) - - # bsz x sz2 x nhu --> bsz/beam x sz2 x nhu - input2 = input2.unfold(0, beam, beam)[:, :, :, 0] - - # use non batched operation if bsz = beam - if input1.size(0) == 1: - output = torch.mm(input1[0, :, :], input2[0, :, :]) - else: - output = input1.bmm(input2) - return output.view(bsz, 1, -1) - else: - return input1.bmm(input2) - - def set_beam_size(self, beam_size): - self.beam_size = beam_size diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/speech_synthesis/evaluation/get_eval_manifest.py b/spaces/OFA-Sys/OFA-vqa/fairseq/examples/speech_synthesis/evaluation/get_eval_manifest.py deleted file mode 100644 index a28cd607a096844438f6a3ba6b007d94d67d1bc8..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/speech_synthesis/evaluation/get_eval_manifest.py +++ /dev/null @@ -1,58 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - - -import csv -from pathlib import Path - - -def main(args): - """ - `uid syn ref text` - """ - in_root = Path(args.generation_root).resolve() - ext = args.audio_format - with open(args.audio_manifest) as f, open(args.output_path, "w") as f_out: - reader = csv.DictReader( - f, delimiter="\t", quotechar=None, doublequote=False, - lineterminator="\n", quoting=csv.QUOTE_NONE - ) - header = ["id", "syn", "ref", "text", "speaker"] - f_out.write("\t".join(header) + "\n") - for row in reader: - dir_name = f"{ext}_{args.sample_rate}hz_{args.vocoder}" - id_ = row["id"] - syn = (in_root / dir_name / f"{id_}.{ext}").as_posix() - ref = row["audio"] - if args.use_resynthesized_target: - ref = (in_root / f"{dir_name}_tgt" / f"{id_}.{ext}").as_posix() - sample = [id_, syn, ref, row["tgt_text"], row["speaker"]] - f_out.write("\t".join(sample) + "\n") - print(f"wrote evaluation file to {args.output_path}") - - -if __name__ == "__main__": - import argparse - parser = argparse.ArgumentParser() - parser.add_argument( - "--generation-root", help="output directory for generate_waveform.py" - ) - parser.add_argument( - "--audio-manifest", - help="used to determine the original utterance ID and text" - ) - parser.add_argument( - "--output-path", help="path to output evaluation spec file" - ) - parser.add_argument( - "--use-resynthesized-target", action="store_true", - help="use resynthesized reference instead of the original audio" - ) - parser.add_argument("--vocoder", type=str, default="griffin_lim") - parser.add_argument("--sample-rate", type=int, default=22_050) - parser.add_argument("--audio-format", type=str, default="wav") - args = parser.parse_args() - - main(args) diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/data/encoders/utils.py b/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/data/encoders/utils.py deleted file mode 100644 index d93eb532ef84f0e2bc708b777229ab2cb76ca14b..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/data/encoders/utils.py +++ /dev/null @@ -1,30 +0,0 
@@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import torch -from fairseq.data import encoders - - -def get_whole_word_mask(args, dictionary): - bpe = encoders.build_bpe(args) - if bpe is not None: - - def is_beginning_of_word(i): - if i < dictionary.nspecial: - # special elements are always considered beginnings - return True - tok = dictionary[i] - if tok.startswith("madeupword"): - return True - try: - return bpe.is_beginning_of_word(tok) - except ValueError: - return True - - mask_whole_words = torch.ByteTensor( - list(map(is_beginning_of_word, range(len(dictionary)))) - ) - return mask_whole_words - return None diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/modules/__init__.py b/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/modules/__init__.py deleted file mode 100644 index d7a030e2b5cbca30e6a4ca4f8a17a62a8cf197af..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/modules/__init__.py +++ /dev/null @@ -1,82 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. -"""isort:skip_file""" - -from .adaptive_input import AdaptiveInput -from .adaptive_softmax import AdaptiveSoftmax -from .base_layer import BaseLayer -from .beamable_mm import BeamableMM -from .character_token_embedder import CharacterTokenEmbedder -from .conv_tbc import ConvTBC -from .cross_entropy import cross_entropy -from .downsampled_multihead_attention import DownsampledMultiHeadAttention -from .dynamic_convolution import DynamicConv, DynamicConv1dTBC -from .dynamic_crf_layer import DynamicCRF -from .fairseq_dropout import FairseqDropout -from .fp32_group_norm import Fp32GroupNorm -from .gelu import gelu, gelu_accurate -from .grad_multiply import GradMultiply -from .gumbel_vector_quantizer import GumbelVectorQuantizer -from .kmeans_vector_quantizer import KmeansVectorQuantizer -from .layer_drop import LayerDropModuleList -from .layer_norm import Fp32LayerNorm, LayerNorm -from .learned_positional_embedding import LearnedPositionalEmbedding -from .lightweight_convolution import LightweightConv, LightweightConv1dTBC -from .linearized_convolution import LinearizedConvolution -from .location_attention import LocationAttention -from .lstm_cell_with_zoneout import LSTMCellWithZoneOut -from .multihead_attention import MultiheadAttention -from .positional_embedding import PositionalEmbedding -from .same_pad import SamePad -from .scalar_bias import ScalarBias -from .sinusoidal_positional_embedding import SinusoidalPositionalEmbedding -from .transformer_sentence_encoder_layer import TransformerSentenceEncoderLayer -from .transformer_sentence_encoder import TransformerSentenceEncoder -from .transpose_last import TransposeLast -from .unfold import unfold1d -from .transformer_layer import TransformerDecoderLayer, TransformerEncoderLayer -from .vggblock import VGGBlock - -__all__ = [ - "AdaptiveInput", - "AdaptiveSoftmax", - "BaseLayer", - "BeamableMM", - "CharacterTokenEmbedder", - "ConvTBC", - "cross_entropy", - "DownsampledMultiHeadAttention", - "DynamicConv1dTBC", - "DynamicConv", - "DynamicCRF", - "FairseqDropout", - "Fp32GroupNorm", - "Fp32LayerNorm", - "gelu", - "gelu_accurate", - "GradMultiply", - "GumbelVectorQuantizer", - "KmeansVectorQuantizer", - "LayerDropModuleList", - "LayerNorm", - "LearnedPositionalEmbedding", - 
"LightweightConv1dTBC", - "LightweightConv", - "LinearizedConvolution", - "LocationAttention", - "LSTMCellWithZoneOut", - "MultiheadAttention", - "PositionalEmbedding", - "SamePad", - "ScalarBias", - "SinusoidalPositionalEmbedding", - "TransformerSentenceEncoderLayer", - "TransformerSentenceEncoder", - "TransformerDecoderLayer", - "TransformerEncoderLayer", - "TransposeLast", - "VGGBlock", - "unfold1d", -] diff --git a/spaces/OIUGLK/bingo/src/components/ui/icons.tsx b/spaces/OIUGLK/bingo/src/components/ui/icons.tsx deleted file mode 100644 index 742b489b50437c5b64c86082f2ebc712eeb6a2b0..0000000000000000000000000000000000000000 --- a/spaces/OIUGLK/bingo/src/components/ui/icons.tsx +++ /dev/null @@ -1,504 +0,0 @@ -'use client' - -import * as React from 'react' - -import { cn } from '@/lib/utils' - -function IconNextChat({ - className, - inverted, - ...props -}: React.ComponentProps<'svg'> & { inverted?: boolean }) { - const id = React.useId() - - return ( - - - - - - - - - - - - - - - - - - - - - - ) -} - -function IconOpenAI({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - OpenAI icon - - - ) -} - -function IconGitHub({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - GitHub - - - ) -} - -function IconSeparator({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - ) -} - -function IconArrowDown({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconArrowRight({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconUser({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconPlus({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconArrowElbow({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconSpinner({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconMessage({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconTrash({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconMore({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconRefresh({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconStop({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconSidebar({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconMoon({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconSun({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconCopy({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconCheck({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconDownload({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconClose({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconEdit({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconShare({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconUsers({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconExternalLink({ - className, - 
...props -}: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconChevronUpDown({ - className, - ...props -}: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -export { - IconEdit, - IconNextChat, - IconOpenAI, - IconGitHub, - IconSeparator, - IconArrowDown, - IconArrowRight, - IconUser, - IconPlus, - IconArrowElbow, - IconSpinner, - IconMessage, - IconTrash, - IconMore, - IconRefresh, - IconStop, - IconSidebar, - IconMoon, - IconSun, - IconCopy, - IconCheck, - IconDownload, - IconClose, - IconShare, - IconUsers, - IconExternalLink, - IconChevronUpDown -} diff --git a/spaces/ORI-Muchim/BarKeYaeTTS/utils.py b/spaces/ORI-Muchim/BarKeYaeTTS/utils.py deleted file mode 100644 index 4cb5b43d0ca2bae496e7871b2094f2ffb26ab642..0000000000000000000000000000000000000000 --- a/spaces/ORI-Muchim/BarKeYaeTTS/utils.py +++ /dev/null @@ -1,226 +0,0 @@ -import os -import glob -import sys -import argparse -import logging -import json -import subprocess -import numpy as np -from scipy.io.wavfile import read -import torch - -MATPLOTLIB_FLAG = False - -logging.basicConfig(stream=sys.stdout, level=logging.ERROR) -logger = logging - - -def load_checkpoint(checkpoint_path, model, optimizer=None): - assert os.path.isfile(checkpoint_path) - checkpoint_dict = torch.load(checkpoint_path, map_location='cpu') - iteration = checkpoint_dict['iteration'] - learning_rate = checkpoint_dict['learning_rate'] - if optimizer is not None: - optimizer.load_state_dict(checkpoint_dict['optimizer']) - saved_state_dict = checkpoint_dict['model'] - if hasattr(model, 'module'): - state_dict = model.module.state_dict() - else: - state_dict = model.state_dict() - new_state_dict = {} - for k, v in state_dict.items(): - try: - new_state_dict[k] = saved_state_dict[k] - except: - logger.info("%s is not in the checkpoint" % k) - new_state_dict[k] = v - if hasattr(model, 'module'): - model.module.load_state_dict(new_state_dict) - else: - model.load_state_dict(new_state_dict) - logger.info("Loaded checkpoint '{}' (iteration {})".format( - checkpoint_path, iteration)) - return model, optimizer, learning_rate, iteration - - -def plot_spectrogram_to_numpy(spectrogram): - global MATPLOTLIB_FLAG - if not MATPLOTLIB_FLAG: - import matplotlib - matplotlib.use("Agg") - MATPLOTLIB_FLAG = True - mpl_logger = logging.getLogger('matplotlib') - mpl_logger.setLevel(logging.WARNING) - import matplotlib.pylab as plt - import numpy as np - - fig, ax = plt.subplots(figsize=(10, 2)) - im = ax.imshow(spectrogram, aspect="auto", origin="lower", - interpolation='none') - plt.colorbar(im, ax=ax) - plt.xlabel("Frames") - plt.ylabel("Channels") - plt.tight_layout() - - fig.canvas.draw() - data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='') - data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,)) - plt.close() - return data - - -def plot_alignment_to_numpy(alignment, info=None): - global MATPLOTLIB_FLAG - if not MATPLOTLIB_FLAG: - import matplotlib - matplotlib.use("Agg") - MATPLOTLIB_FLAG = True - mpl_logger = logging.getLogger('matplotlib') - mpl_logger.setLevel(logging.WARNING) - import matplotlib.pylab as plt - import numpy as np - - fig, ax = plt.subplots(figsize=(6, 4)) - im = ax.imshow(alignment.transpose(), aspect='auto', origin='lower', - interpolation='none') - fig.colorbar(im, ax=ax) - xlabel = 'Decoder timestep' - if info is not None: - xlabel += '\n\n' + info - plt.xlabel(xlabel) - plt.ylabel('Encoder timestep') - plt.tight_layout() - - fig.canvas.draw() - data = np.fromstring(fig.canvas.tostring_rgb(), 
dtype=np.uint8, sep='') - data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,)) - plt.close() - return data - - -def load_wav_to_torch(full_path): - sampling_rate, data = read(full_path) - return torch.FloatTensor(data.astype(np.float32)), sampling_rate - - -def load_filepaths_and_text(filename, split="|"): - with open(filename, encoding='utf-8') as f: - filepaths_and_text = [line.strip().split(split) for line in f] - return filepaths_and_text - - -def get_hparams(init=True): - parser = argparse.ArgumentParser() - parser.add_argument('-c', '--config', type=str, default="./configs/base.json", - help='JSON file for configuration') - parser.add_argument('-m', '--model', type=str, required=True, - help='Model name') - - args = parser.parse_args() - model_dir = os.path.join("./logs", args.model) - - if not os.path.exists(model_dir): - os.makedirs(model_dir) - - config_path = args.config - config_save_path = os.path.join(model_dir, "config.json") - if init: - with open(config_path, "r") as f: - data = f.read() - with open(config_save_path, "w") as f: - f.write(data) - else: - with open(config_save_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams = HParams(**config) - hparams.model_dir = model_dir - return hparams - - -def get_hparams_from_dir(model_dir): - config_save_path = os.path.join(model_dir, "config.json") - with open(config_save_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams = HParams(**config) - hparams.model_dir = model_dir - return hparams - - -def get_hparams_from_file(config_path): - with open(config_path, "r", encoding="utf-8") as f: - data = f.read() - config = json.loads(data) - - hparams = HParams(**config) - return hparams - - -def check_git_hash(model_dir): - source_dir = os.path.dirname(os.path.realpath(__file__)) - if not os.path.exists(os.path.join(source_dir, ".git")): - logger.warn("{} is not a git repository, therefore hash value comparison will be ignored.".format( - source_dir - )) - return - - cur_hash = subprocess.getoutput("git rev-parse HEAD") - - path = os.path.join(model_dir, "githash") - if os.path.exists(path): - saved_hash = open(path).read() - if saved_hash != cur_hash: - logger.warn("git hash values are different. 
{}(saved) != {}(current)".format( - saved_hash[:8], cur_hash[:8])) - else: - open(path, "w").write(cur_hash) - - -def get_logger(model_dir, filename="train.log"): - global logger - logger = logging.getLogger(os.path.basename(model_dir)) - logger.setLevel(logging.DEBUG) - - formatter = logging.Formatter("%(asctime)s\t%(name)s\t%(levelname)s\t%(message)s") - if not os.path.exists(model_dir): - os.makedirs(model_dir) - h = logging.FileHandler(os.path.join(model_dir, filename)) - h.setLevel(logging.DEBUG) - h.setFormatter(formatter) - logger.addHandler(h) - return logger - - -class HParams(): - def __init__(self, **kwargs): - for k, v in kwargs.items(): - if type(v) == dict: - v = HParams(**v) - self[k] = v - - def keys(self): - return self.__dict__.keys() - - def items(self): - return self.__dict__.items() - - def values(self): - return self.__dict__.values() - - def __len__(self): - return len(self.__dict__) - - def __getitem__(self, key): - return getattr(self, key) - - def __setitem__(self, key, value): - return setattr(self, key, value) - - def __contains__(self, key): - return key in self.__dict__ - - def __repr__(self): - return self.__dict__.__repr__() diff --git a/spaces/ORI-Muchim/NahidaTTS/transforms.py b/spaces/ORI-Muchim/NahidaTTS/transforms.py deleted file mode 100644 index 4793d67ca5a5630e0ffe0f9fb29445c949e64dae..0000000000000000000000000000000000000000 --- a/spaces/ORI-Muchim/NahidaTTS/transforms.py +++ /dev/null @@ -1,193 +0,0 @@ -import torch -from torch.nn import functional as F - -import numpy as np - - -DEFAULT_MIN_BIN_WIDTH = 1e-3 -DEFAULT_MIN_BIN_HEIGHT = 1e-3 -DEFAULT_MIN_DERIVATIVE = 1e-3 - - -def piecewise_rational_quadratic_transform(inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails=None, - tail_bound=1., - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE): - - if tails is None: - spline_fn = rational_quadratic_spline - spline_kwargs = {} - else: - spline_fn = unconstrained_rational_quadratic_spline - spline_kwargs = { - 'tails': tails, - 'tail_bound': tail_bound - } - - outputs, logabsdet = spline_fn( - inputs=inputs, - unnormalized_widths=unnormalized_widths, - unnormalized_heights=unnormalized_heights, - unnormalized_derivatives=unnormalized_derivatives, - inverse=inverse, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative, - **spline_kwargs - ) - return outputs, logabsdet - - -def searchsorted(bin_locations, inputs, eps=1e-6): - bin_locations[..., -1] += eps - return torch.sum( - inputs[..., None] >= bin_locations, - dim=-1 - ) - 1 - - -def unconstrained_rational_quadratic_spline(inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails='linear', - tail_bound=1., - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE): - inside_interval_mask = (inputs >= -tail_bound) & (inputs <= tail_bound) - outside_interval_mask = ~inside_interval_mask - - outputs = torch.zeros_like(inputs) - logabsdet = torch.zeros_like(inputs) - - if tails == 'linear': - unnormalized_derivatives = F.pad(unnormalized_derivatives, pad=(1, 1)) - constant = np.log(np.exp(1 - min_derivative) - 1) - unnormalized_derivatives[..., 0] = constant - unnormalized_derivatives[..., -1] = constant - - outputs[outside_interval_mask] = inputs[outside_interval_mask] - logabsdet[outside_interval_mask] = 0 - else: - raise 
RuntimeError('{} tails are not implemented.'.format(tails)) - - outputs[inside_interval_mask], logabsdet[inside_interval_mask] = rational_quadratic_spline( - inputs=inputs[inside_interval_mask], - unnormalized_widths=unnormalized_widths[inside_interval_mask, :], - unnormalized_heights=unnormalized_heights[inside_interval_mask, :], - unnormalized_derivatives=unnormalized_derivatives[inside_interval_mask, :], - inverse=inverse, - left=-tail_bound, right=tail_bound, bottom=-tail_bound, top=tail_bound, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative - ) - - return outputs, logabsdet - -def rational_quadratic_spline(inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - left=0., right=1., bottom=0., top=1., - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE): - if torch.min(inputs) < left or torch.max(inputs) > right: - raise ValueError('Input to a transform is not within its domain') - - num_bins = unnormalized_widths.shape[-1] - - if min_bin_width * num_bins > 1.0: - raise ValueError('Minimal bin width too large for the number of bins') - if min_bin_height * num_bins > 1.0: - raise ValueError('Minimal bin height too large for the number of bins') - - widths = F.softmax(unnormalized_widths, dim=-1) - widths = min_bin_width + (1 - min_bin_width * num_bins) * widths - cumwidths = torch.cumsum(widths, dim=-1) - cumwidths = F.pad(cumwidths, pad=(1, 0), mode='constant', value=0.0) - cumwidths = (right - left) * cumwidths + left - cumwidths[..., 0] = left - cumwidths[..., -1] = right - widths = cumwidths[..., 1:] - cumwidths[..., :-1] - - derivatives = min_derivative + F.softplus(unnormalized_derivatives) - - heights = F.softmax(unnormalized_heights, dim=-1) - heights = min_bin_height + (1 - min_bin_height * num_bins) * heights - cumheights = torch.cumsum(heights, dim=-1) - cumheights = F.pad(cumheights, pad=(1, 0), mode='constant', value=0.0) - cumheights = (top - bottom) * cumheights + bottom - cumheights[..., 0] = bottom - cumheights[..., -1] = top - heights = cumheights[..., 1:] - cumheights[..., :-1] - - if inverse: - bin_idx = searchsorted(cumheights, inputs)[..., None] - else: - bin_idx = searchsorted(cumwidths, inputs)[..., None] - - input_cumwidths = cumwidths.gather(-1, bin_idx)[..., 0] - input_bin_widths = widths.gather(-1, bin_idx)[..., 0] - - input_cumheights = cumheights.gather(-1, bin_idx)[..., 0] - delta = heights / widths - input_delta = delta.gather(-1, bin_idx)[..., 0] - - input_derivatives = derivatives.gather(-1, bin_idx)[..., 0] - input_derivatives_plus_one = derivatives[..., 1:].gather(-1, bin_idx)[..., 0] - - input_heights = heights.gather(-1, bin_idx)[..., 0] - - if inverse: - a = (((inputs - input_cumheights) * (input_derivatives - + input_derivatives_plus_one - - 2 * input_delta) - + input_heights * (input_delta - input_derivatives))) - b = (input_heights * input_derivatives - - (inputs - input_cumheights) * (input_derivatives - + input_derivatives_plus_one - - 2 * input_delta)) - c = - input_delta * (inputs - input_cumheights) - - discriminant = b.pow(2) - 4 * a * c - assert (discriminant >= 0).all() - - root = (2 * c) / (-b - torch.sqrt(discriminant)) - outputs = root * input_bin_widths + input_cumwidths - - theta_one_minus_theta = root * (1 - root) - denominator = input_delta + ((input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta) - derivative_numerator = 
input_delta.pow(2) * (input_derivatives_plus_one * root.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - root).pow(2)) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, -logabsdet - else: - theta = (inputs - input_cumwidths) / input_bin_widths - theta_one_minus_theta = theta * (1 - theta) - - numerator = input_heights * (input_delta * theta.pow(2) - + input_derivatives * theta_one_minus_theta) - denominator = input_delta + ((input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta) - outputs = input_cumheights + numerator / denominator - - derivative_numerator = input_delta.pow(2) * (input_derivatives_plus_one * theta.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - theta).pow(2)) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, logabsdet diff --git a/spaces/Olivier-Truong/faster-whisper-webui-v2/src/vadParallel.py b/spaces/Olivier-Truong/faster-whisper-webui-v2/src/vadParallel.py deleted file mode 100644 index c2323c0b632c34014ac1fe7ac79141b5bd9c5731..0000000000000000000000000000000000000000 --- a/spaces/Olivier-Truong/faster-whisper-webui-v2/src/vadParallel.py +++ /dev/null @@ -1,298 +0,0 @@ -import multiprocessing -from queue import Empty -import threading -import time -from src.hooks.progressListener import ProgressListener -from src.vad import AbstractTranscription, TranscriptionConfig, get_audio_duration - -from multiprocessing import Pool, Queue - -from typing import Any, Dict, List, Union -import os - -from src.whisper.abstractWhisperContainer import AbstractWhisperCallback - -class _ProgressListenerToQueue(ProgressListener): - def __init__(self, progress_queue: Queue): - self.progress_queue = progress_queue - self.progress_total = 0 - self.prev_progress = 0 - - def on_progress(self, current: Union[int, float], total: Union[int, float]): - delta = current - self.prev_progress - self.prev_progress = current - self.progress_total = total - self.progress_queue.put(delta) - - def on_finished(self): - if self.progress_total > self.prev_progress: - delta = self.progress_total - self.prev_progress - self.progress_queue.put(delta) - self.prev_progress = self.progress_total - -class ParallelContext: - def __init__(self, num_processes: int = None, auto_cleanup_timeout_seconds: float = None): - self.num_processes = num_processes - self.auto_cleanup_timeout_seconds = auto_cleanup_timeout_seconds - self.lock = threading.Lock() - - self.ref_count = 0 - self.pool = None - self.cleanup_timer = None - - def get_pool(self): - # Initialize pool lazily - if (self.pool is None): - context = multiprocessing.get_context('spawn') - self.pool = context.Pool(self.num_processes) - - self.ref_count = self.ref_count + 1 - - if (self.auto_cleanup_timeout_seconds is not None): - self._stop_auto_cleanup() - - return self.pool - - def return_pool(self, pool): - if (self.pool == pool and self.ref_count > 0): - self.ref_count = self.ref_count - 1 - - if (self.ref_count == 0): - if (self.auto_cleanup_timeout_seconds is not None): - self._start_auto_cleanup() - - def _start_auto_cleanup(self): - if (self.cleanup_timer is not None): - self.cleanup_timer.cancel() - self.cleanup_timer = threading.Timer(self.auto_cleanup_timeout_seconds, self._execute_cleanup) - self.cleanup_timer.start() - - print("Started auto cleanup of pool in " + str(self.auto_cleanup_timeout_seconds) + " seconds") - - def _stop_auto_cleanup(self): - if 
(self.cleanup_timer is not None): - self.cleanup_timer.cancel() - self.cleanup_timer = None - - print("Stopped auto cleanup of pool") - - def _execute_cleanup(self): - print("Executing cleanup of pool") - - if (self.ref_count == 0): - self.close() - - def close(self): - self._stop_auto_cleanup() - - if (self.pool is not None): - print("Closing pool of " + str(self.num_processes) + " processes") - self.pool.close() - self.pool.join() - self.pool = None - -class ParallelTranscriptionConfig(TranscriptionConfig): - def __init__(self, device_id: str, override_timestamps, initial_segment_index, copy: TranscriptionConfig = None): - super().__init__(copy.non_speech_strategy, copy.segment_padding_left, copy.segment_padding_right, copy.max_silent_period, copy.max_merge_size, copy.max_prompt_window, initial_segment_index) - self.device_id = device_id - self.override_timestamps = override_timestamps - -class ParallelTranscription(AbstractTranscription): - # Silero VAD typically takes about 3 seconds per minute, so there's no need to split the chunks - # into smaller segments than 2 minute (min 6 seconds per CPU core) - MIN_CPU_CHUNK_SIZE_SECONDS = 2 * 60 - - def __init__(self, sampling_rate: int = 16000): - super().__init__(sampling_rate=sampling_rate) - - def transcribe_parallel(self, transcription: AbstractTranscription, audio: str, whisperCallable: AbstractWhisperCallback, config: TranscriptionConfig, - cpu_device_count: int, gpu_devices: List[str], cpu_parallel_context: ParallelContext = None, gpu_parallel_context: ParallelContext = None, - progress_listener: ProgressListener = None): - total_duration = get_audio_duration(audio) - - # First, get the timestamps for the original audio - if (cpu_device_count > 1 and not transcription.is_transcribe_timestamps_fast()): - merged = self._get_merged_timestamps_parallel(transcription, audio, config, total_duration, cpu_device_count, cpu_parallel_context) - else: - timestamp_segments = transcription.get_transcribe_timestamps(audio, config, 0, total_duration) - merged = transcription.get_merged_timestamps(timestamp_segments, config, total_duration) - - # We must make sure the whisper model is downloaded - if (len(gpu_devices) > 1): - whisperCallable.model_container.ensure_downloaded() - - # Split into a list for each device - # TODO: Split by time instead of by number of chunks - merged_split = list(self._split(merged, len(gpu_devices))) - - # Parameters that will be passed to the transcribe function - parameters = [] - segment_index = config.initial_segment_index - - processing_manager = multiprocessing.Manager() - progress_queue = processing_manager.Queue() - - for i in range(len(gpu_devices)): - # Note that device_segment_list can be empty. But we will still create a process for it, - # as otherwise we run the risk of assigning the same device to multiple processes. 
- device_segment_list = list(merged_split[i]) if i < len(merged_split) else [] - device_id = gpu_devices[i] - - print("Device " + str(device_id) + " (index " + str(i) + ") has " + str(len(device_segment_list)) + " segments") - - # Create a new config with the given device ID - device_config = ParallelTranscriptionConfig(device_id, device_segment_list, segment_index, config) - segment_index += len(device_segment_list) - - progress_listener_to_queue = _ProgressListenerToQueue(progress_queue) - parameters.append([audio, whisperCallable, device_config, progress_listener_to_queue]); - - merged = { - 'text': '', - 'segments': [], - 'language': None - } - - created_context = False - - perf_start_gpu = time.perf_counter() - - # Spawn a separate process for each device - try: - if (gpu_parallel_context is None): - gpu_parallel_context = ParallelContext(len(gpu_devices)) - created_context = True - - # Get a pool of processes - pool = gpu_parallel_context.get_pool() - - # Run the transcription in parallel - results_async = pool.starmap_async(self.transcribe, parameters) - total_progress = 0 - - while not results_async.ready(): - try: - delta = progress_queue.get(timeout=5) # Set a timeout of 5 seconds - except Empty: - continue - - total_progress += delta - if progress_listener is not None: - progress_listener.on_progress(total_progress, total_duration) - - results = results_async.get() - - # Call the finished callback - if progress_listener is not None: - progress_listener.on_finished() - - for result in results: - # Merge the results - if (result['text'] is not None): - merged['text'] += result['text'] - if (result['segments'] is not None): - merged['segments'].extend(result['segments']) - if (result['language'] is not None): - merged['language'] = result['language'] - - finally: - # Return the pool to the context - if (gpu_parallel_context is not None): - gpu_parallel_context.return_pool(pool) - # Always close the context if we created it - if (created_context): - gpu_parallel_context.close() - - perf_end_gpu = time.perf_counter() - print("Parallel transcription took " + str(perf_end_gpu - perf_start_gpu) + " seconds") - - return merged - - def _get_merged_timestamps_parallel(self, transcription: AbstractTranscription, audio: str, config: TranscriptionConfig, total_duration: float, - cpu_device_count: int, cpu_parallel_context: ParallelContext = None): - parameters = [] - - chunk_size = max(total_duration / cpu_device_count, self.MIN_CPU_CHUNK_SIZE_SECONDS) - chunk_start = 0 - cpu_device_id = 0 - - perf_start_time = time.perf_counter() - - # Create chunks that will be processed on the CPU - while (chunk_start < total_duration): - chunk_end = min(chunk_start + chunk_size, total_duration) - - if (chunk_end - chunk_start < 1): - # No need to process chunks that are less than 1 second - break - - print("Parallel VAD: Executing chunk from " + str(chunk_start) + " to " + - str(chunk_end) + " on CPU device " + str(cpu_device_id)) - parameters.append([audio, config, chunk_start, chunk_end]); - - cpu_device_id += 1 - chunk_start = chunk_end - - created_context = False - - # Spawn a separate process for each device - try: - if (cpu_parallel_context is None): - cpu_parallel_context = ParallelContext(cpu_device_count) - created_context = True - - # Get a pool of processes - pool = cpu_parallel_context.get_pool() - - # Run the transcription in parallel. Note that transcription must be picklable. 
- results = pool.starmap(transcription.get_transcribe_timestamps, parameters) - - timestamps = [] - - # Flatten the results - for result in results: - timestamps.extend(result) - - merged = transcription.get_merged_timestamps(timestamps, config, total_duration) - - perf_end_time = time.perf_counter() - print("Parallel VAD processing took {} seconds".format(perf_end_time - perf_start_time)) - return merged - - finally: - # Return the pool to the context - if (cpu_parallel_context is not None): - cpu_parallel_context.return_pool(pool) - # Always close the context if we created it - if (created_context): - cpu_parallel_context.close() - - def get_transcribe_timestamps(self, audio: str, config: ParallelTranscriptionConfig, start_time: float, duration: float): - return [] - - def get_merged_timestamps(self, timestamps: List[Dict[str, Any]], config: ParallelTranscriptionConfig, total_duration: float): - # Override timestamps that will be processed - if (config.override_timestamps is not None): - print("(get_merged_timestamps) Using override timestamps of size " + str(len(config.override_timestamps))) - return config.override_timestamps - return super().get_merged_timestamps(timestamps, config, total_duration) - - def transcribe(self, audio: str, whisperCallable: AbstractWhisperCallback, config: ParallelTranscriptionConfig, - progressListener: ProgressListener = None): - # Override device ID the first time - if (os.environ.get("INITIALIZED", None) is None): - os.environ["INITIALIZED"] = "1" - - # Note that this may be None if the user didn't specify a device. In that case, Whisper will - # just use the default GPU device. - if (config.device_id is not None): - print("Using device " + config.device_id) - os.environ["CUDA_VISIBLE_DEVICES"] = config.device_id - - return super().transcribe(audio, whisperCallable, config, progressListener) - - def _split(self, a, n): - """Split a list into n approximately equal parts.""" - k, m = divmod(len(a), n) - return (a[i*k+min(i, m):(i+1)*k+min(i+1, m)] for i in range(n)) - diff --git a/spaces/OpenGVLab/DragGAN/stylegan2/lpips/networks_basic.py b/spaces/OpenGVLab/DragGAN/stylegan2/lpips/networks_basic.py deleted file mode 100644 index ea45e4c12f53546c1334d532afc2846ce90ece1b..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/DragGAN/stylegan2/lpips/networks_basic.py +++ /dev/null @@ -1,188 +0,0 @@ - -from __future__ import absolute_import - -import sys -import torch -import torch.nn as nn -import torch.nn.init as init -from torch.autograd import Variable -import numpy as np -from pdb import set_trace as st -from skimage import color -from IPython import embed -from . import pretrained_networks as pn - -from . 
import util - - -def spatial_average(in_tens, keepdim=True): - return in_tens.mean([2,3],keepdim=keepdim) - -def upsample(in_tens, out_H=64): # assumes scale factor is same for H and W - in_H = in_tens.shape[2] - scale_factor = 1.*out_H/in_H - - return nn.Upsample(scale_factor=scale_factor, mode='bilinear', align_corners=False)(in_tens) - -# Learned perceptual metric -class PNetLin(nn.Module): - def __init__(self, pnet_type='vgg', pnet_rand=False, pnet_tune=False, use_dropout=True, spatial=False, version='0.1', lpips=True): - super(PNetLin, self).__init__() - - self.pnet_type = pnet_type - self.pnet_tune = pnet_tune - self.pnet_rand = pnet_rand - self.spatial = spatial - self.lpips = lpips - self.version = version - self.scaling_layer = ScalingLayer() - - if(self.pnet_type in ['vgg','vgg16']): - net_type = pn.vgg16 - self.chns = [64,128,256,512,512] - elif(self.pnet_type=='alex'): - net_type = pn.alexnet - self.chns = [64,192,384,256,256] - elif(self.pnet_type=='squeeze'): - net_type = pn.squeezenet - self.chns = [64,128,256,384,384,512,512] - self.L = len(self.chns) - - self.net = net_type(pretrained=not self.pnet_rand, requires_grad=self.pnet_tune) - - if(lpips): - self.lin0 = NetLinLayer(self.chns[0], use_dropout=use_dropout) - self.lin1 = NetLinLayer(self.chns[1], use_dropout=use_dropout) - self.lin2 = NetLinLayer(self.chns[2], use_dropout=use_dropout) - self.lin3 = NetLinLayer(self.chns[3], use_dropout=use_dropout) - self.lin4 = NetLinLayer(self.chns[4], use_dropout=use_dropout) - self.lins = [self.lin0,self.lin1,self.lin2,self.lin3,self.lin4] - if(self.pnet_type=='squeeze'): # 7 layers for squeezenet - self.lin5 = NetLinLayer(self.chns[5], use_dropout=use_dropout) - self.lin6 = NetLinLayer(self.chns[6], use_dropout=use_dropout) - self.lins+=[self.lin5,self.lin6] - - def forward(self, in0, in1, retPerLayer=False): - # v0.0 - original release had a bug, where input was not scaled - in0_input, in1_input = (self.scaling_layer(in0), self.scaling_layer(in1)) if self.version=='0.1' else (in0, in1) - outs0, outs1 = self.net.forward(in0_input), self.net.forward(in1_input) - feats0, feats1, diffs = {}, {}, {} - - for kk in range(self.L): - feats0[kk], feats1[kk] = util.normalize_tensor(outs0[kk]), util.normalize_tensor(outs1[kk]) - diffs[kk] = (feats0[kk]-feats1[kk])**2 - - if(self.lpips): - if(self.spatial): - res = [upsample(self.lins[kk].model(diffs[kk]), out_H=in0.shape[2]) for kk in range(self.L)] - else: - res = [spatial_average(self.lins[kk].model(diffs[kk]), keepdim=True) for kk in range(self.L)] - else: - if(self.spatial): - res = [upsample(diffs[kk].sum(dim=1,keepdim=True), out_H=in0.shape[2]) for kk in range(self.L)] - else: - res = [spatial_average(diffs[kk].sum(dim=1,keepdim=True), keepdim=True) for kk in range(self.L)] - - val = res[0] - for l in range(1,self.L): - val += res[l] - - if(retPerLayer): - return (val, res) - else: - return val - -class ScalingLayer(nn.Module): - def __init__(self): - super(ScalingLayer, self).__init__() - self.register_buffer('shift', torch.Tensor([-.030,-.088,-.188])[None,:,None,None]) - self.register_buffer('scale', torch.Tensor([.458,.448,.450])[None,:,None,None]) - - def forward(self, inp): - return (inp - self.shift) / self.scale - - -class NetLinLayer(nn.Module): - ''' A single linear layer which does a 1x1 conv ''' - def __init__(self, chn_in, chn_out=1, use_dropout=False): - super(NetLinLayer, self).__init__() - - layers = [nn.Dropout(),] if(use_dropout) else [] - layers += [nn.Conv2d(chn_in, chn_out, 1, stride=1, padding=0, bias=False),] - 
self.model = nn.Sequential(*layers) - - -class Dist2LogitLayer(nn.Module): - ''' takes 2 distances, puts through fc layers, spits out value between [0,1] (if use_sigmoid is True) ''' - def __init__(self, chn_mid=32, use_sigmoid=True): - super(Dist2LogitLayer, self).__init__() - - layers = [nn.Conv2d(5, chn_mid, 1, stride=1, padding=0, bias=True),] - layers += [nn.LeakyReLU(0.2,True),] - layers += [nn.Conv2d(chn_mid, chn_mid, 1, stride=1, padding=0, bias=True),] - layers += [nn.LeakyReLU(0.2,True),] - layers += [nn.Conv2d(chn_mid, 1, 1, stride=1, padding=0, bias=True),] - if(use_sigmoid): - layers += [nn.Sigmoid(),] - self.model = nn.Sequential(*layers) - - def forward(self,d0,d1,eps=0.1): - return self.model.forward(torch.cat((d0,d1,d0-d1,d0/(d1+eps),d1/(d0+eps)),dim=1)) - -class BCERankingLoss(nn.Module): - def __init__(self, chn_mid=32): - super(BCERankingLoss, self).__init__() - self.net = Dist2LogitLayer(chn_mid=chn_mid) - # self.parameters = list(self.net.parameters()) - self.loss = torch.nn.BCELoss() - - def forward(self, d0, d1, judge): - per = (judge+1.)/2. - self.logit = self.net.forward(d0,d1) - return self.loss(self.logit, per) - -# L2, DSSIM metrics -class FakeNet(nn.Module): - def __init__(self, use_gpu=True, colorspace='Lab'): - super(FakeNet, self).__init__() - self.use_gpu = use_gpu - self.colorspace=colorspace - -class L2(FakeNet): - - def forward(self, in0, in1, retPerLayer=None): - assert(in0.size()[0]==1) # currently only supports batchSize 1 - - if(self.colorspace=='RGB'): - (N,C,X,Y) = in0.size() - value = torch.mean(torch.mean(torch.mean((in0-in1)**2,dim=1).view(N,1,X,Y),dim=2).view(N,1,1,Y),dim=3).view(N) - return value - elif(self.colorspace=='Lab'): - value = util.l2(util.tensor2np(util.tensor2tensorlab(in0.data,to_norm=False)), - util.tensor2np(util.tensor2tensorlab(in1.data,to_norm=False)), range=100.).astype('float') - ret_var = Variable( torch.Tensor((value,) ) ) - if(self.use_gpu): - ret_var = ret_var.cuda() - return ret_var - -class DSSIM(FakeNet): - - def forward(self, in0, in1, retPerLayer=None): - assert(in0.size()[0]==1) # currently only supports batchSize 1 - - if(self.colorspace=='RGB'): - value = util.dssim(1.*util.tensor2im(in0.data), 1.*util.tensor2im(in1.data), range=255.).astype('float') - elif(self.colorspace=='Lab'): - value = util.dssim(util.tensor2np(util.tensor2tensorlab(in0.data,to_norm=False)), - util.tensor2np(util.tensor2tensorlab(in1.data,to_norm=False)), range=100.).astype('float') - ret_var = Variable( torch.Tensor((value,) ) ) - if(self.use_gpu): - ret_var = ret_var.cuda() - return ret_var - -def print_network(net): - num_params = 0 - for param in net.parameters(): - num_params += param.numel() - print('Network',net) - print('Total number of parameters: %d' % num_params) diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/configs/common/data/coco_panoptic_separated.py b/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/configs/common/data/coco_panoptic_separated.py deleted file mode 100644 index 5ccbc77e64d1c92c99cbd7158d047bab54cb9f3d..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/configs/common/data/coco_panoptic_separated.py +++ /dev/null @@ -1,26 +0,0 @@ -from detectron2.config import LazyCall as L -from detectron2.evaluation import ( - COCOEvaluator, - COCOPanopticEvaluator, - DatasetEvaluators, - SemSegEvaluator, -) - -from .coco import dataloader - -dataloader.train.dataset.names = 
"coco_2017_train_panoptic_separated" -dataloader.train.dataset.filter_empty = False -dataloader.test.dataset.names = "coco_2017_val_panoptic_separated" - - -dataloader.evaluator = [ - L(COCOEvaluator)( - dataset_name="${...test.dataset.names}", - ), - L(SemSegEvaluator)( - dataset_name="${...test.dataset.names}", - ), - L(COCOPanopticEvaluator)( - dataset_name="${...test.dataset.names}", - ), -] diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/tests/config/test_yacs_config.py b/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/tests/config/test_yacs_config.py deleted file mode 100644 index 01dd6955f78e2700ffc10ed723ab1c95df0e5a18..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/tests/config/test_yacs_config.py +++ /dev/null @@ -1,270 +0,0 @@ -#!/usr/bin/env python -# Copyright (c) Facebook, Inc. and its affiliates. - - -import os -import tempfile -import unittest -import torch -from omegaconf import OmegaConf - -from detectron2 import model_zoo -from detectron2.config import configurable, downgrade_config, get_cfg, upgrade_config -from detectron2.layers import ShapeSpec -from detectron2.modeling import build_model - -_V0_CFG = """ -MODEL: - RPN_HEAD: - NAME: "TEST" -VERSION: 0 -""" - -_V1_CFG = """ -MODEL: - WEIGHT: "/path/to/weight" -""" - - -class TestConfigVersioning(unittest.TestCase): - def test_upgrade_downgrade_consistency(self): - cfg = get_cfg() - # check that custom is preserved - cfg.USER_CUSTOM = 1 - - down = downgrade_config(cfg, to_version=0) - up = upgrade_config(down) - self.assertTrue(up == cfg) - - def _merge_cfg_str(self, cfg, merge_str): - f = tempfile.NamedTemporaryFile(mode="w", suffix=".yaml", delete=False) - try: - f.write(merge_str) - f.close() - cfg.merge_from_file(f.name) - finally: - os.remove(f.name) - return cfg - - def test_auto_upgrade(self): - cfg = get_cfg() - latest_ver = cfg.VERSION - cfg.USER_CUSTOM = 1 - - self._merge_cfg_str(cfg, _V0_CFG) - - self.assertEqual(cfg.MODEL.RPN.HEAD_NAME, "TEST") - self.assertEqual(cfg.VERSION, latest_ver) - - def test_guess_v1(self): - cfg = get_cfg() - latest_ver = cfg.VERSION - self._merge_cfg_str(cfg, _V1_CFG) - self.assertEqual(cfg.VERSION, latest_ver) - - -class _TestClassA(torch.nn.Module): - @configurable - def __init__(self, arg1, arg2, arg3=3): - super().__init__() - self.arg1 = arg1 - self.arg2 = arg2 - self.arg3 = arg3 - assert arg1 == 1 - assert arg2 == 2 - assert arg3 == 3 - - @classmethod - def from_config(cls, cfg): - args = {"arg1": cfg.ARG1, "arg2": cfg.ARG2} - return args - - -class _TestClassB(_TestClassA): - @configurable - def __init__(self, input_shape, arg1, arg2, arg3=3): - """ - Doc of _TestClassB - """ - assert input_shape == "shape" - super().__init__(arg1, arg2, arg3) - - @classmethod - def from_config(cls, cfg, input_shape): # test extra positional arg in from_config - args = {"arg1": cfg.ARG1, "arg2": cfg.ARG2} - args["input_shape"] = input_shape - return args - - -class _LegacySubClass(_TestClassB): - # an old subclass written in cfg style - def __init__(self, cfg, input_shape, arg4=4): - super().__init__(cfg, input_shape) - assert self.arg1 == 1 - assert self.arg2 == 2 - assert self.arg3 == 3 - - -class _NewSubClassNewInit(_TestClassB): - # test new subclass with a new __init__ - @configurable - def __init__(self, input_shape, arg4=4, **kwargs): - super().__init__(input_shape, **kwargs) - assert self.arg1 == 1 - assert self.arg2 == 2 - assert self.arg3 == 3 - - 
-class _LegacySubClassNotCfg(_TestClassB): - # an old subclass written in cfg style, but argument is not called "cfg" - def __init__(self, config, input_shape): - super().__init__(config, input_shape) - assert self.arg1 == 1 - assert self.arg2 == 2 - assert self.arg3 == 3 - - -class _TestClassC(_TestClassB): - @classmethod - def from_config(cls, cfg, input_shape, **kwargs): # test extra kwarg overwrite - args = {"arg1": cfg.ARG1, "arg2": cfg.ARG2} - args["input_shape"] = input_shape - args.update(kwargs) - return args - - -class _TestClassD(_TestClassA): - @configurable - def __init__(self, input_shape: ShapeSpec, arg1: int, arg2, arg3=3): - assert input_shape == "shape" - super().__init__(arg1, arg2, arg3) - - # _TestClassA.from_config does not have input_shape args. - # Test whether input_shape will be forwarded to __init__ - - -@configurable(from_config=lambda cfg, arg2: {"arg1": cfg.ARG1, "arg2": arg2, "arg3": cfg.ARG3}) -def _test_func(arg1, arg2=2, arg3=3, arg4=4): - return arg1, arg2, arg3, arg4 - - -class TestConfigurable(unittest.TestCase): - def testInitWithArgs(self): - _ = _TestClassA(arg1=1, arg2=2, arg3=3) - _ = _TestClassB("shape", arg1=1, arg2=2) - _ = _TestClassC("shape", arg1=1, arg2=2) - _ = _TestClassD("shape", arg1=1, arg2=2, arg3=3) - - def testPatchedAttr(self): - self.assertTrue("Doc" in _TestClassB.__init__.__doc__) - self.assertEqual(_TestClassD.__init__.__annotations__["arg1"], int) - - def testInitWithCfg(self): - cfg = get_cfg() - cfg.ARG1 = 1 - cfg.ARG2 = 2 - cfg.ARG3 = 3 - _ = _TestClassA(cfg) - _ = _TestClassB(cfg, input_shape="shape") - _ = _TestClassC(cfg, input_shape="shape") - _ = _TestClassD(cfg, input_shape="shape") - _ = _LegacySubClass(cfg, input_shape="shape") - _ = _NewSubClassNewInit(cfg, input_shape="shape") - _ = _LegacySubClassNotCfg(cfg, input_shape="shape") - with self.assertRaises(TypeError): - # disallow forwarding positional args to __init__ since it's prone to errors - _ = _TestClassD(cfg, "shape") - - # call with kwargs instead - _ = _TestClassA(cfg=cfg) - _ = _TestClassB(cfg=cfg, input_shape="shape") - _ = _TestClassC(cfg=cfg, input_shape="shape") - _ = _TestClassD(cfg=cfg, input_shape="shape") - _ = _LegacySubClass(cfg=cfg, input_shape="shape") - _ = _NewSubClassNewInit(cfg=cfg, input_shape="shape") - _ = _LegacySubClassNotCfg(config=cfg, input_shape="shape") - - def testInitWithCfgOverwrite(self): - cfg = get_cfg() - cfg.ARG1 = 1 - cfg.ARG2 = 999 # wrong config - with self.assertRaises(AssertionError): - _ = _TestClassA(cfg, arg3=3) - - # overwrite arg2 with correct config later: - _ = _TestClassA(cfg, arg2=2, arg3=3) - _ = _TestClassB(cfg, input_shape="shape", arg2=2, arg3=3) - _ = _TestClassC(cfg, input_shape="shape", arg2=2, arg3=3) - _ = _TestClassD(cfg, input_shape="shape", arg2=2, arg3=3) - - # call with kwargs cfg=cfg instead - _ = _TestClassA(cfg=cfg, arg2=2, arg3=3) - _ = _TestClassB(cfg=cfg, input_shape="shape", arg2=2, arg3=3) - _ = _TestClassC(cfg=cfg, input_shape="shape", arg2=2, arg3=3) - _ = _TestClassD(cfg=cfg, input_shape="shape", arg2=2, arg3=3) - - def testInitWithCfgWrongArgs(self): - cfg = get_cfg() - cfg.ARG1 = 1 - cfg.ARG2 = 2 - with self.assertRaises(TypeError): - _ = _TestClassB(cfg, "shape", not_exist=1) - with self.assertRaises(TypeError): - _ = _TestClassC(cfg, "shape", not_exist=1) - with self.assertRaises(TypeError): - _ = _TestClassD(cfg, "shape", not_exist=1) - - def testBadClass(self): - class _BadClass1: - @configurable - def __init__(self, a=1, b=2): - pass - - class _BadClass2: - @configurable - def 
__init__(self, a=1, b=2): - pass - - def from_config(self, cfg): # noqa - pass - - class _BadClass3: - @configurable - def __init__(self, a=1, b=2): - pass - - # bad name: must be cfg - @classmethod - def from_config(cls, config): # noqa - pass - - with self.assertRaises(AttributeError): - _ = _BadClass1(a=1) - - with self.assertRaises(TypeError): - _ = _BadClass2(a=1) - - with self.assertRaises(TypeError): - _ = _BadClass3(get_cfg()) - - def testFuncWithCfg(self): - cfg = get_cfg() - cfg.ARG1 = 10 - cfg.ARG3 = 30 - - self.assertEqual(_test_func(1), (1, 2, 3, 4)) - with self.assertRaises(TypeError): - _test_func(cfg) - self.assertEqual(_test_func(cfg, arg2=2), (10, 2, 30, 4)) - self.assertEqual(_test_func(cfg, arg1=100, arg2=20), (100, 20, 30, 4)) - self.assertEqual(_test_func(cfg, arg1=100, arg2=20, arg4=40), (100, 20, 30, 40)) - - self.assertTrue(callable(_test_func.from_config)) - - def testOmegaConf(self): - cfg = model_zoo.get_config("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_1x.yaml") - cfg = OmegaConf.create(cfg.dump()) - if not torch.cuda.is_available(): - cfg.MODEL.DEVICE = "cpu" - # test that a model can be built with omegaconf config as well - build_model(cfg) diff --git a/spaces/Osmond141319/ComfyUI-XL-Vae-Public/README.md b/spaces/Osmond141319/ComfyUI-XL-Vae-Public/README.md deleted file mode 100644 index 8575f99c9a93fc5cbe3a75659f650831826db2b6..0000000000000000000000000000000000000000 --- a/spaces/Osmond141319/ComfyUI-XL-Vae-Public/README.md +++ /dev/null @@ -1,31 +0,0 @@ ---- -title: ComfyUI-XL-Vae-Public -emoji: ✨ -colorFrom: pink -colorTo: purple -sdk: docker -duplicated_from: Osmond141319/ComfyUI-XL-Vae -pinned: true ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference - -CompfyUi XL Diffusion with Vae for Cpu, no Nvidia and Gpu version. Warning: Capable to make NSFW images. - -Please duplicate this space and make it private, anyone can see your generated pics. - -Go to Settings above, and Change space visibility. - -Duplicate this space if you want to edit and improve it. - -Press "queue prompt" to generate images, wait for AI finish the image. May take some time to generate. Leave space and do something else, then come back and view history to see results. - -For smartphone user, tap the image with both fingers to save it. 
- -Duplicated from : https://huggingface.co/spaces/TechnoByte/ComfyUI-Kybalico - -Updated to XL model:https://huggingface.co/Linaqruf/animagine-xl/ - -ComfyUI GPU Nvidia, probably don't have anime models(not sure since I don't use it often): https://huggingface.co/spaces/SpacesExamples/ComfyUI - -Comfyui github.Has instructions and tutorials: https://github.com/comfyanonymous/ComfyUI \ No newline at end of file diff --git a/spaces/OzoneAsai/gptsan/README.md b/spaces/OzoneAsai/gptsan/README.md deleted file mode 100644 index 92402b615fa1826984d38b085cf796ea07cb2aec..0000000000000000000000000000000000000000 --- a/spaces/OzoneAsai/gptsan/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Gptsan -emoji: 💻 -colorFrom: pink -colorTo: purple -sdk: gradio -sdk_version: 3.32.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/PeepDaSlan9/Language-Learn-Idea/app.py b/spaces/PeepDaSlan9/Language-Learn-Idea/app.py deleted file mode 100644 index 2e45e858ddac15939676fb4ef7fe2a2af66a9a03..0000000000000000000000000000000000000000 --- a/spaces/PeepDaSlan9/Language-Learn-Idea/app.py +++ /dev/null @@ -1,956 +0,0 @@ -from googletrans import Translator -import spacy -import gradio as gr -import nltk -from nltk.corpus import wordnet -import wikipedia -import re -import time -import random -import os -import zipfile -import ffmpeg -from gtts import gTTS -#from io import BytesIO -from collections import Counter -from PIL import Image, ImageDraw, ImageFont -import numpy as np -from docx import Document -import textwrap -import pandas as pd - -#Uncomment these for Huggingface -nltk.download('maxent_ne_chunker') #Chunker -nltk.download('stopwords') #Stop Words List (Mainly Roman Languages) -nltk.download('words') #200 000+ Alphabetical order list -nltk.download('punkt') #Tokenizer -nltk.download('verbnet') #For Description of Verbs -nltk.download('omw') -nltk.download('omw-1.4') #Multilingual Wordnet -nltk.download('wordnet') #For Definitions, Antonyms and Synonyms -nltk.download('shakespeare') -nltk.download('dolch') #Sight words -nltk.download('names') #People Names NER -nltk.download('gazetteers') #Location NER -nltk.download('opinion_lexicon') #Sentiment words -nltk.download('averaged_perceptron_tagger') #Parts of Speech Tagging - -spacy.cli.download("en_core_web_sm") -spacy.cli.download('ko_core_news_sm') -spacy.cli.download('ja_core_news_sm') -spacy.cli.download('zh_core_web_sm') - -nlp = spacy.load('en_core_web_sm') -translator = Translator() - -def Sentencechunker(sentence): - Sentchunks = sentence.split(" ") - chunks = [] - for i in range(len(Sentchunks)): - chunks.append(" ".join(Sentchunks[:i+1])) - return " | ".join(chunks) - -def ReverseSentenceChunker(sentence): - reversed_sentence = " ".join(reversed(sentence.split())) - chunks = Sentencechunker(reversed_sentence) - return chunks - -def three_words_chunk(sentence): - words = sentence.split() - chunks = [words[i:i+3] for i in range(len(words)-2)] - chunks = [" ".join(chunk) for chunk in chunks] - return " | ".join(chunks) - -def keep_nouns_verbs(sentence): - doc = nlp(sentence) - nouns_verbs = [] - for token in doc: - if token.pos_ in ['NOUN','VERB','PUNCT']: - nouns_verbs.append(token.text) - return " ".join(nouns_verbs) - -def unique_word_count(text="", state=None): - if state is None: - state = {} - words = text.split() - word_counts = state - for word in words: - if word in word_counts: - word_counts[word] += 1 - else: - word_counts[word] = 
1 - sorted_word_counts = sorted(word_counts.items(), key=lambda x: x[1], reverse=True) - return sorted_word_counts, - -def Wordchunker(word): - chunks = [] - for i in range(len(word)): - chunks.append(word[:i+1]) - return chunks - -def BatchWordChunk(sentence): - words = sentence.split(" ") - FinalOutput = "" - Currentchunks = "" - ChunksasString = "" - for word in words: - ChunksasString = "" - Currentchunks = Wordchunker(word) - for chunk in Currentchunks: - ChunksasString += chunk + " " - FinalOutput += "\n" + ChunksasString - return FinalOutput - -# Translate from English to French - -langdest = gr.Dropdown(choices=["af", "de", "es", "ko", "ja", "zh-cn"], label="Choose Language", value="de") - -ChunkModeDrop = gr.Dropdown(choices=["Chunks", "Reverse", "Three Word Chunks", "Spelling Chunks"], label="Choose Chunk Type", value="Chunks") - -def FrontRevSentChunk (Chunkmode, Translate, Text, langdest): - FinalOutput = "" - TransFinalOutput = "" - if Chunkmode=="Chunks": - FinalOutput += Sentencechunker(Text) - if Chunkmode=="Reverse": - FinalOutput += ReverseSentenceChunker(Text) - if Chunkmode=="Three Word Chunks": - FinalOutput += three_words_chunk(Text) - if Chunkmode=="Spelling Chunks": - FinalOutput += BatchWordChunk(Text) - - if Translate: - TransFinalOutput = FinalOutput - translated = translator.translate(TransFinalOutput, dest=langdest) - FinalOutput += "\n" + translated.text - return FinalOutput - -# Define a function to filter out non-verb, noun, or adjective words -def filter_words(words): - # Use NLTK to tag each word with its part of speech - tagged_words = nltk.pos_tag(words) - - # Define a set of parts of speech to keep (verbs, nouns, adjectives) - keep_pos = {'VB', 'VBD', 'VBG', 'VBN', 'VBP', 'VBZ', 'NN', 'NNS', 'NNP', 'NNPS', 'JJ', 'JJR', 'JJS'} - - # Filter the list to only include words with the desired parts of speech - filtered_words = [word for word, pos in tagged_words if pos in keep_pos] - - return filtered_words - -def SepHypandSynExpansion(text): - # Tokenize the text - tokens = nltk.word_tokenize(text) - NoHits = "" - FinalOutput = "" - - # Find synonyms and hypernyms of each word in the text - for token in tokens: - synonyms = [] - hypernyms = [] - for synset in wordnet.synsets(token): - synonyms += synset.lemma_names() - hypernyms += [hypernym.name() for hypernym in synset.hypernyms()] - if not synonyms and not hypernyms: - NoHits += f"{token} | " - else: - FinalOutput += "\n" f"{token}: hypernyms={hypernyms}, synonyms={synonyms} \n" - NoHits = set(NoHits.split(" | ")) - NoHits = filter_words(NoHits) - NoHits = "Words to pay special attention to: \n" + str(NoHits) - return NoHits, FinalOutput - - -def WikiSearch(term): - termtoks = term.split(" ") - - for item in termtoks: - # Search for the term on Wikipedia and get the first result - result = wikipedia.search(item, results=20) - return result - -def create_dictionary(word_list, word_dict = {}): - word_list = set(word_list.split(" ")) - for word in word_list: - key = word[:2] - if key not in word_dict: - word_dict[key] = [word] - else: - word_dict[key].append(word) - return word_dict - -def merge_lines(roman_file, w4w_file, full_mean_file, macaronic_file): - files = [roman_file, w4w_file, full_mean_file, macaronic_file] - merged_lines = [] - - with open(roman_file.name, "r") as f1, open(w4w_file.name, "r") as f2, \ - open(full_mean_file.name, "r") as f3, open(macaronic_file.name, "r") as f4: - for lines in zip(f1, f2, f3, f4): - merged_line = "\n".join(line.strip() for line in lines) - 
merged_lines.append(merged_line) - - return "\n".join(merged_lines) - -TTSLangOptions = gr.Dropdown(choices=["en", "de", "es", "ja", "ko", "zh-cn"], value="en", label="choose the language of the srt/text accent") -TTSLangOptions2 = gr.Dropdown(choices=["en", "de", "es", "ja", "ko", "zh-cn"], value="en", label="choose the language of the srt/text accent") - -def TTSforListeningPractice(text, language = "en", Repeat10x = False): - if Repeat10x: - text = text * 10 - speech = gTTS(text=text, lang=language, slow="False") - speech.save("CurrentTTSFile.mp3") - #file = BytesIO() - #speech.write_to_fp(file) - #file.seek(0) - return "CurrentTTSFile.mp3" #file - -def AutoChorusInvestigator(sentences): - sentences = sentences.splitlines() - # Use Counter to count the number of occurrences of each sentence - sentence_counts = Counter(sentences) - - # Identify duplicate sentences - duplicates = [s for s, count in sentence_counts.items() if count > 1] - - FinalOutput = "" - if len(duplicates) == 0: - FinalOutput += "No duplicate sentences found in the file." - else: - FinalOutput += "The following sentences appear more than once in the file:" - for sentence in duplicates: - FinalOutput += "\n" + sentence - return FinalOutput - -def AutoChorusPerWordScheduler(sentences): - words = set(sentences.split(" ")) - wordsoneattime =[] - practicestring = "" - - FinalOutput = "This is supposed to output the words in repetition format (i.e. schedule for repitition) \nCurrent Idea = 1 new word every min and 1 old word every second" + "\n\nWords: \n" - for word in words: - wordsoneattime.append(word) - for i in range(0, 59): - practicestring += word + " " - practicestring += random.choice(wordsoneattime) + " " - FinalOutput += word + "\n " - practicestring += "\n" - - FinalOutput += practicestring - return FinalOutput - -def group_words(inlist): - inlisttoks = inlist.split(" ") - inlistset = set(inlisttoks) - - word_groups = [] - current_group = [] - - for word in inlisttoks: - current_group.append(word) - if len(current_group) == 10: - word_groups.append(current_group) - current_group = [] - if current_group: - word_groups.append(current_group) - - current_group_index = 0 - current_group_time = 0 - - while True: - if current_group_time == 60: - current_group_index = (current_group_index + 1) % len(word_groups) - current_group_time = 0 - else: - if current_group_time % 10 == 0: - random.shuffle(word_groups[current_group_index]) - current_group_time += 10 - - yield " ".join(word_groups[current_group_index]) - time.sleep(10) - -def split_verbs_nouns(text): - nlp = spacy.load("en_core_web_sm") - doc = nlp(text) - - verbs_nouns = [] - other_words = [] - pos_string = [] - - for token in doc: - if token.pos_ in ["VERB", "NOUN"]: - verbs_nouns.append(token.text) - elif token.text in [punct.text for punct in doc if punct.is_punct]: - verbs_nouns.append(token.text) - other_words.append(token.text) - else: - other_words.append(token.text) - pos_string.append(token.pos_) - - verbs_nouns_text = " ".join(verbs_nouns) - other_words_text = " ".join(other_words) - pos_string_text = " ".join(pos_string) - - return pos_string_text, verbs_nouns_text, other_words_text - -SRTLangOptions = gr.Dropdown(choices=["en", "ja", "ko", "zh-cn"], value="en", label="choose the language of the srt") - -def save_string_to_file(string_to_save, file_name, srtdocx): - with open(file_name, 'w', encoding='utf-8') as file: - file.write(string_to_save) - if srtdocx == "True": - with open(file_name.split('.')[0] + '.srt', 'w', encoding='utf-8') as file: - 
file.write(string_to_save) - srtdocument = Document() - srtdocument.add_paragraph(string_to_save) - srtdocument.save('SplitSRT.docx') - -def split_srt_file(text, lang): #file_path): - # Open the SRT file and read its contents - #with open(file_path, 'r') as f: - # srt_contents = f.read() - - if lang == "en": nlp = spacy.load('en_core_web_sm') - if lang == "ja": nlp = spacy.load('ja_core_news_sm') - if lang == "ko": nlp = spacy.load('ko_core_news_sm') - if lang == "zn-cn": nlp = spacy.load('zn_core_web_sm') - - srt_contents = text - - # Split the SRT file by timestamp - srt_sections = srt_contents.split('\n\n') - srt_sections_POSversion = [] - subaswordlist = "" - - # Loop through each section of the SRT file - for i in range(len(srt_sections)): - # Split the section into its timestamp and subtitle text - section_lines = srt_sections[i].split('\n') - timestamp = section_lines[1] - subtitle_text = ' | '.join(section_lines[2:]) - sub_split_line = nlp(subtitle_text) - subtitle_textPOSversion = "" - subtitle_text = "" - - # Replace spaces in the subtitle text with " | " - #subtitle_text = subtitle_text.replace(' ', ' | ') - for token in sub_split_line: - subtitle_text += token.text + " | " - subaswordlist += token.text + " " - subtitle_textPOSversion += token.pos_ + " | " - - # Reconstruct the section with the updated subtitle text - srt_sections[i] = f"{section_lines[0]}\n{timestamp}\n{subtitle_text[3:]}" - srt_sections_POSversion.append(f"{section_lines[0]}\n{timestamp}\n{subtitle_textPOSversion[3:]}\n\n") - - SplitSRT = '\n\n'.join(srt_sections) - SplitPOSsrt = ''.join(srt_sections_POSversion) - save_string_to_file(SplitSRT, "SplitSRT.txt", "True") - save_string_to_file(SplitPOSsrt, "SplitPOSsrt.txt", "False") - subaswordlist = set(subaswordlist.split(" ")) - subaswordlistOutput = "" - - for word in subaswordlist: - subaswordlistOutput += "\n | " + word - - subaswordlistOutput = str(len(subaswordlist)) + "\n" + subaswordlistOutput - - # Join the SRT sections back together into a single string - return subaswordlistOutput, ["SplitSRT.docx", "SplitSRT.txt", "SplitSRT.srt", "SplitPOSsrt.txt"], SplitSRT, SplitPOSsrt - -def find_string_positions(s, string): - positions = [] - start = 0 - while True: - position = s.find(string, start) - if position == -1: - break - positions.append(position) - start = position + len(string) - return positions - -def splittext(string): - string_no_formaterror = string.replace(" -- > ", " --> ") - split_positions = find_string_positions(string_no_formaterror, " --> ") - split_strings = [] - prepos = 0 - for pos in split_positions: - pos -= 12 - split_strings.append((string[prepos:pos])) #, string[pos:])) - prepos = pos - - FinalOutput = "" - stoutput = "" - linenumber = 1 - #print(linenumber) - for item in split_strings[1:]: - stoutput = item[0:29] + "\n" + item[30:] - stspaces = find_string_positions(stoutput, " ") - FinalOutput += str(linenumber) + "\n" + stoutput[:stspaces[-2]] + "\n" - FinalOutput += "\n" - linenumber += 1 - return FinalOutput[2:] - -def VideotoSegment(video_file, subtitle_file): - # Read the subtitle file and extract the timings for each subtitle - timings = [] - for line in subtitle_file: - if '-->' in line: - start, end = line.split('-->') - start_time = start.strip().replace(',', '.') - end_time = end.strip().replace(',', '.') - timings.append((start_time, end_time)) - - # Cut the video into segments based on the subtitle timings - video_segments = [] - for i, (start_time, end_time) in enumerate(timings): - output_file = f'segment_{i}.mp4' - 
ffmpeg.input(video_file, ss=start_time, to=end_time).output(output_file, codec='copy').run() - video_segments.append(output_file) - - # Convert each segment to an MP3 audio file using FFmpeg - audio_segments = [] - for i in range(len(timings)): - output_file = f'segment_{i}.mp3' - ffmpeg.input(video_segments[i]).output(output_file, codec='libmp3lame', qscale='4').run() - audio_segments.append(output_file) - - # Create a ZIP archive containing all of the segmented files - zip_file = zipfile.ZipFile('segmented_files.zip', 'w') - for segment in video_segments + audio_segments: - zip_file.write(segment) - os.remove(segment) - zip_file.close() - - # Return the ZIP archive for download - return 'segmented_files.zip' - -def text_to_dropdown(text, id=None): #TextCompFormat - lines = text.strip().split("\n") - html = "{line}\n" - html += " \n" - return html - -def text_to_links(text): #TextCompFormat - lines = text.strip().split("\n") - html = "" - for line in lines: - if line.startswith("http"): - html += f" -- -- | \n" - else: - html += line + "Not a link
    \n" - return html - -HTMLCompMode = gr.Dropdown(choices=["Dropdown", "Links"], value="Links") - -def TextCompFormat(text, HTMLCompMode): - FinalOutput = "" - if HTMLCompMode == "Dropdown": - FinalOutput = text_to_dropdown(text) - if HTMLCompMode == "Links": - FinalOutput = text_to_links(text) - return FinalOutput - -def create_collapsiblebutton(button_id, button_caption, div_content): - button_html = f'' - div_html = f'
    \n{div_content}\n
    ' - return button_html + "\n " + div_html - -#--------------- - -def removeTonalMarks(string): - tonalMarks = "āēīōūǖáéíóúǘǎěǐǒǔǚàèìòùǜɔɛ" - nonTonalMarks = "aeiouuaeiouuaeiouuaeiouoe" - noTonalMarksStr = "" - for char in string: - index = tonalMarks.find(char) - if index != -1: - noTonalMarksStr += nonTonalMarks[index] - else: - noTonalMarksStr += char - return noTonalMarksStr - - -def add_text_to_image(input_image, text, output_image_path="output.png", border_size=2): - text = removeTonalMarks(text) - imagearr = np.asarray(input_image) #Image.open(input_image_path) - width, height = imagearr.shape[:2] #width, height = image.size - img = Image.fromarray(imagearr) - draw = ImageDraw.Draw(img) - font = ImageFont.truetype("ShortBaby.ttf", 36) #ShortBaby-Mg2w.ttf - text_width, text_height = draw.textbbox((0, 0), text, font=font)[2:] #draw.textsize(text, font) - # calculate the x, y coordinates of the text box - x = (width - text_width) / 2 - y = (height - text_height) / 2 - # put the text on the image with a border - for dx, dy in [(0, 0), (border_size, border_size), (-border_size, -border_size), (border_size, -border_size), (-border_size, border_size)]: - draw.text((x + dx, y + dy), text, font=font, fill=(255, 255, 255)) - draw.text((x, y), text, font=font, fill=(0, 0, 0)) - img.save(output_image_path, "PNG") - return "output.png" - -def UnknownTrackTexttoApp(text): #Copy of def OptimisedTtAppForUNWFWO(text): - #Buttons and labels autocreation - #Change this to spacy version so that data is from one library - #Javascript videos on youtube - KodeBase - Change button color Onclick; bro code - button in 5 minutes - #GPT3 helped guide the highlighting if statements - - FinalOutput = "" - #sentence = "One Piece chapter 1049 spoilers Thanks to Etenboby from WG forums Chapter 1049: **\"The world we should aspire to\"** * In the cover, someone burned Niji and Yonji\u2019s book * Kaido flashback time. We see his childhood in Vodka Kingdom, and where a few years later he met Whitebeard who told him that Rocks wants to meet him * In the present, part of Raizo\u2019s water leaves the castle and flame clouds disappear. But Momo makes a new one. * Luffy says he will create a world where none of his friends would starve, then he hits Kaido and Kaido falls to the ground of the flower capital. * In another flashback, Kaido tells King that Joy Boy will be the man that can defeat him. 
**Additional info** *Flashback to Kaidou as a kid* *- His country tries to sell him to the marines but he escapes* *- He rampages in Hachinosu(i think it's blackbeard's island) and Rocks invites him to his crew* *- Young WB appears* *- Rocks flashback suddenly ends* *- Higurashi invites Kaidou* *- The flashback ends with Kaidou telling King he knows who Joy Boy is.* *Back to the present* \\- *Denjirou hugs Hiyori* \\- *Luffy's punch hits Kaidou* *Flashback continues* \\- *King asks: Who is it then?* \\- *Kaidou: The one who will defeat me* \\- *King: Then he will not appear* \\- *Onigashima falls near the capital* \\- *Momo falls* **BREAK NEXT WEEK** https://www.reddit.com/r/OnePiece/comments/umu2h0/one_piece_chapter_1049_spoilers/" #@param {type: "string"} - HTMLMainbody = "" - - doc = nlp(text) - iIDNumber = 0 - iVerbCount = 0 - iNounCount = 0 - iWords = 0 - allverbs = "" - allverbslist = "" - allverbids = "" - allverbidslist = "" - - for token in doc: - if (token.pos_ == "VERB") or (token.pos_ == "AUX"): - HTMLMainbody = HTMLMainbody + " " - allverbids = allverbids + str(iVerbCount) + " " - iVerbCount += 1 - iWords += 1 - allverbs = allverbs + token.text + " " - elif token.pos_ == "NOUN": - HTMLMainbody = HTMLMainbody + "" - iNounCount += 1 - iWords += 1 - elif token.pos_ == "PUNCT": - HTMLMainbody = HTMLMainbody + token.text - else: - HTMLMainbody = HTMLMainbody + token.text + " " - iWords += 1 - iIDNumber += 1 - - allverbslist = allverbs.split() - allverbidslist = allverbids.split() - - FinalHTML = "" - FinalCSS = "" - FinalJS = "" - - FinalCSS = FinalCSS + ''' - ''' - - #style='background-color:Gainsboro; There is no general style attribute for buttons but you can make a class and put the style conditions - - iSents = 0 - for sent in doc.sents: - iSents += 1 - - FinalHTML = FinalHTML + "\n
    Picture on mouse hover = Visual
    Speed = End Goal ==> App Timer Functions ||| \nSentences: " + str(iSents) + " | Words: " + str(iWords) + " | App elements: " + str(iNounCount + iVerbCount) + " | Verbs: " + str(iVerbCount) + "
    " - FinalHTML = FinalHTML + "\n

    " - FinalJS = FinalJS + '''\n - - ''' - - FinalHTML = FinalHTML + '''


    -
    Only Unknown List
    - \n - ''' - - FinalOutput = FinalHTML + FinalCSS + FinalJS - - HTMLDownloadTemp = f'UnknownVerbTrack.html' - - with open(HTMLDownloadTemp, 'w') as f: - f.write(FinalOutput) - - return FinalOutput, FinalOutput, HTMLDownloadTemp - -#Kathryn Lingel - Pyambic Pentameter Example - PyCon US -#Basic Language Model Code -def build_model(source_text): - list_of_words = source_text.split() - model = {} #initialise model to empty dictionary - - for i, word in enumerate(list_of_words[:-1]): #every word except last word - if not word in model: #If word not already in dictionary as a key we add it and initialise to empty array - model[word] = [] - next_word = list_of_words[i+1] - model[word].append(next_word) #model = dictionary per word containing previously seen next words from ANY given text ==> even lyrics - - translatestring = str(model) - translatestring = translatestring.replace("'", "") - return model, translatestring - -def markov_generate(source_text, num_words = 20): - model = build_model(source_text) - seed = random.choice(list(model.keys())) #Randomly pick a word ==> Heading of the dictionary are keys aka the words - output = [seed] #output initialisation using random word - for i in range(num_words): - last_word = output[-1] #of the output list - next_word = random.choice(model[last_word]) # next word to the above word - output.append(next_word) #new last word in the output list - if next_word not in model: - break - - return ' '.join(output) #New list into a string aka (hopefully) sentence -# print(markov_generate("I am the egg man they are the egg men I am the wallrus goo goo g' joob")) - -def chunk_srt_text(srt_text, chunk_size): - # Split the SRT text into chunks of the specified size - ChunkList = textwrap.wrap(srt_text, chunk_size) - dfFinalOutput = pd.DataFrame(ChunkList, columns = [f"Chunks - { len(ChunkList) }"]) - return dfFinalOutput, "" - -#------------------------------------------------------------------------------------------------------------------------------- -#Clean Merge - -def split_into_fours(text): - lines = text.split('\n') - chunks = [lines[i:i+4] for i in range(0, len(lines), 4)] - return chunks - -def NumberLineSort(listlen): - numbers = list(range(0, listlen)) # create a list of numbers 1 to 12 - grouped_numbers = [] - for i in range(4): - group = [numbers[j] for j in range(i, len(numbers), 4)] - grouped_numbers.append(group) - return grouped_numbers - -def SRTLineSort(text): - chunks = split_into_fours(text) - NumberofBlocks = len(chunks) / 4 - printnumber = NumberLineSort(len(chunks)) - SRTLinenumber = [] - SRTTiming = [] - SRTContent = [] - FinalOutput = "" - - for i in range(0, 3): - for item in printnumber[i]: - if i == 0: SRTLinenumber.append(chunks[item][0]) - if i == 1: SRTTiming.append(chunks[item][0]) - if i == 2: SRTContent.append(chunks[item]) - - for i in range(0, int(NumberofBlocks)): - FinalOutput += SRTLinenumber[i] + "\n" - FinalOutput += SRTTiming[i] + "\n" - for i2 in range(0, 4): - FinalOutput += SRTContent[i][i2] + "\n" - FinalOutput += "\n" - - return FinalOutput - -#-------------------------------------------------------------------------------------------------------------------------------- - -RandomiseTextType = gr.Dropdown(choices=["Words", "Words5x", "Sentences", "Paragraph", "Page"], value="Words") - -def RandomiseTextbyType(Text, Choice): - FinalOutput = "" - TempWords = [] - - if Choice == "Words" : - TempWords = Text.split() - FinalOutput = reading_randomize_words(TempWords) - if Choice == "Words5x" : - TempWords = 
Text.split() - FinalOutput = reading_randomize_words5x(TempWords) - if Choice == "Sentences" : FinalOutput = reading_randomize_words_in_sentence(Text) - if Choice == "Paragraph" : FinalOutput = reading_randomize_words_in_paragraph(Text) - if Choice == "Page" : FinalOutput = "Still under Construction" - - return FinalOutput - -def reading_randomize_words5x(word): - wordScram = "" - for item in word: - for i in range(5): - item = ''.join(random.sample(item, len(item))) - wordScram += " " + item - #print(item) - wordScram += "\n" - return wordScram - -def reading_randomize_words(word): - wordScram = "" - for item in word: - item = ''.join(random.sample(item, len(item))) - wordScram += item + " " - return wordScram - -def reading_randomize_words_in_sentence(text): - FinalOutput = "" - sentences = text.split(".") - for sentence in sentences: - words = sentence.split() - random.shuffle(words) - FinalOutput += ' '.join(words) + ". " - return FinalOutput - -def reading_randomize_words_in_paragraph(paragraph): - sentences = paragraph.split(".") - random.shuffle(sentences) - return '. '.join(sentences) - -def changeexposuretext(text): - return f" {text} " - -#------------------------------------------------------------------------------------------------------------------------------- - -def arrealtimetestidea(img): - return "Unfinished. The aim is to do realtime translation google but based on knowledge domains instead of language - Look at HF Models and spaces" - -#------------------------------------------------------------------------------------------------------------------------------ - - -LLPromptIdeas = """ - -

    Make jokes while following rules for a syllogism jokes game: - -The game can be played with any number of people. -One person starts by stating a syllogism, which is a logical argument that consists of three parts: a major premise, a minor premise, and a conclusion. -The next person must then state a syllogism that has the same conclusion as the first syllogism, but with different major and minor premises. -The game continues in this way until someone cannot think of a new syllogism. -The person who makes the last valid syllogism wins the game.

    - -
    - -

    Let's try this: I will give you the English word and you find the <> word and then mix the meaning of the word with the mnemonic - -Bing's Explanation -You're welcome. I'm happy to help. If I wanted to explain the rules of this game clearly and concisely, I would do something like this: - -The game is about finding the <> word for an English word and making a mnemonic that mixes the meaning of the word with the mnemonic. -The game has two players: one who gives the English word and one who finds the <> word and makes the mnemonic. -The player who gives the English word can choose any word they want, as long as it has a <> equivalent. -The player who finds the <> word and makes the mnemonic can use any online or offline resources they want, such as dictionaries, translators, or search engines. -The player who finds the <> word and makes the mnemonic should write their answer in the format: The <> word for X is Y <<(pinyin)>>, which means Z. Here are some possible mnemonics that mix the meaning of the word with the mnemonic: A, B, C, D.

    -""" - - -# Define the Gradio interface inputs and outputs for video split -spvvideo_file_input = gr.File(label='Video File') -spvsubtitle_file_input = gr.File(label='Subtitle File') -spvdownload_output = gr.File(label='Download Segmented Files') - -Markovlength = gr.Number(value=30, label='Length of generation') - -groupinput_text = gr.Textbox(lines=2, label="Enter a list of words") -groupoutput_text = gr.Textbox(label="Grouped words") - -Translationchuncksize = gr.Number(value=4998) - -randomExposuremessagelistitem = str(random.sample(["Bing mnemonic - lost = dont ignore unusual sounds here inside lost cave", "1000 verbs in lists of 100, verbs = easy setence structure estimation (SVO, SOV, etc.)", "Can put any message here in the navigatoin tab"], 1)).replace("['", "").replace("']", "") -randomExposuremessage = f" { randomExposuremessagelistitem } " - - - -with gr.Blocks() as lliface: #theme=gr.themes.Glass(primary_hue='green', secondary_hue='red', neutral_hue='blue', ) - PracticeExposure = gr.HTML(randomExposuremessage) - gr.HTML("Advanced Repitition = Combinatorics --> to understand a sentence properly you need understanding of every word --> in language that means use with other words --> Combos within the unique words in a sentence, paragraph, page, etc. --> as close to 3 word sentences") - gr.HTML("

    Timing Practice - Repetition: Run from it, Dread it, Repetition is inevitable - Thanos --> Repetition of reaction - Foreign in eyes/ears, native in mind (For beginners) | Repetition is a multitask activity like driving; it must be a subconscious process to show mastery

    ") - gr.HTML(""" -- Open LLM Leaderboard -- | -- Whisper JAX -- | -- Google Translate -- | -- Modelscope Text to Video -- | -- stable-diffusion 2 -- | -- stable-diffusion 1 -- | -- karlo 1 -- | -- Bark (TTS) -- | -- Offline Text Model Demos -- | -- SAM with Clip -- | -- Eleven Labs -- | -- Animate an Image -- | -- Clone a voice -- | -- OpenAI pricing -- | -- Image Training Data Search -- | -- Huggingface Chat -- | -- 128x128 Stable Diffusion (Fast) -- | -- Search 95 million research abstracts -- | -- Tiny Stories Dataset -- | -- Visualglm6b - Discuss images -- | -- RAM and Tag2Text -- | -- Potat1 Text2vid -- | """) - with gr.Row(): - with gr.Column(scale=1): - with gr.Tab("Rep - Gradio"): - gr.HTML("""Gradio Version Below """) - with gr.Tab("Rep - Gradio"): - gr.Interface(fn=group_words, inputs=groupinput_text, outputs=groupoutput_text, description="Word Grouping and Rotation - Group a list of words into sets of 10 and rotate them every 60 seconds.") #.queue() - with gr.Tab("Navigation"): - gr.HTML("Picture Annotation
    Chorus Focused Word List
    Merged Subtitles
    Repetitive Audio (TTS)
    Word and Sentence Jumbling
    Unknown: Wordnet
    Unknown: Wikipedia
    ") - PracticeExposureInput = gr.Textbox(placeholer="Exposure practice = look up", label="Exposure at the top") - PracticeExposurebtn = gr.Button("Change Default") - PracticeExposurebtn.click(fn=changeexposuretext, inputs=PracticeExposureInput, outputs=PracticeExposure) - with gr.Tab("Vector Database = Memorisation"): - gr.HTML("Open AI - 2500000 character text = <1$ (0.0004 per 1000 tokens), Cohere Multilingual = free for personal use / Commercial use = \n Vector Database query = Better than text search but not for logical relationships") - with gr.Column(scale=3): - with gr.Tab("Beginner - Listen + Read"): - with gr.Row(): - with gr.Column(scale=1): - gr.HTML("Listening - Songs - Chorus
    Anticipation of the item to remember is how you learn lyrics; that is why songs are easy: if you have heard a song 10 times already, your capacity to anticipate the words is great

    This is where TTS helps, as you are ignoring all words except the words just before the actual one
    Tiny Stories dataset is like a graded reader
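As a rough sketch of the anticipation idea above (not part of the original app), one could synthesise progressively longer prefixes of a lyric line with gTTS, which this app already imports, so the listener always has to guess the next word; the function name and output path below are assumptions:

from gtts import gTTS

def anticipation_audio(line, language="en", out_path="anticipation.mp3"):
    words = line.split()
    # "I", "I am", "I am the", ... each prefix ends just before the word to be recalled
    prefixes = [" ".join(words[:i]) for i in range(1, len(words) + 1)]
    gTTS(text=". ".join(prefixes), lang=language, slow=False).save(out_path)
    return out_path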
    ") - gr.Interface(fn=TTSforListeningPractice, inputs=["text", TTSLangOptions, "checkbox"], outputs="audio", description="Paste chorus lyrics from below here and use TTS or make notes to save here (Or paste anything)") - gr.HTML("

    Fastest way to learn words = have your own sound reference --> probably why babies learn fast, as they make random noise

    If you know the flow of the song you can remember the spelling easier

    Essentially if the sounds are repeated or long notes they are easy to remember

    ") - gr.Interface(fn=AutoChorusInvestigator, inputs="text", outputs="text", description="Paste Full Lyrics to try find only chorus lines") - gr.Interface(fn=AutoChorusPerWordScheduler, inputs="text", outputs="text", description="Create order of repitition for tts practice") - with gr.Column(scale=1): - gr.HTML("""Reading - Caption images (SD/Dalle-E)
    -- Unsplash - free images -- | --Huggingface CLIP-Interrogator Space-- | -- Tag2Text is faster than clip -- |
    -- Transform word to an image -- | -- Promptist (Microsoft) -- | """) - gr.Interface(fn=add_text_to_image , inputs=["image", "text"], outputs="image", description="Create Annotated images (Can create using stable diffusion and use the prompt) - Describe from one side to the other to make guessing easy") - gr.HTML("Use Shift Enter To put text on new lines if the text doesnt fit
    if theres an error you have to remove the foreign letters and place roman ones") - #with gr.Tab("Transcribe - RASMUS Whisper"): - #gr.Interface.load("spaces/RASMUS/Whisper-youtube-crosslingual-subtitles", title="Subtitles") - with gr.Tab("Advanced - LingQ Addon Ideas"): - gr.HTML("Find LingQ Here --> https://www.lingq.com/en/") - with gr.Tab("Visual - Multiline Custom Video Subtitles"): - gr.HTML("LingQ Companion Idea - i.e. Full Translation Read along, and eventually Videoplayer watch along like RAMUS whisper space

    Extra functions needed - Persistent Sentence translation, UNWFWO, POS tagging and Word Count per user of words in their account. Macaronic Text is another way to practice only the important information") - gr.HTML("""

    For Transcripts to any video on youtube use the link below ⬇️

    https://huggingface.co/spaces/RASMUS/Whisper-youtube-crosslingual-subtitles | https://huggingface.co/spaces/vumichien/whisper-speaker-diarization""") - #gr.HTML("

    If the Space has not loaded, it's because of offline development errors; please message for an edit


    ") - with gr.Tab("Merged Subtitles"): - gr.HTML(""" Core Idea = Ability to follow one video from start to finish is more important than number of words (except for verbs)
    - Step 1 - Get foreign transcript - WHISPER (Need to download video though - booo) / Youtube / Youtube transcript api / SRT websites
    - Step 2 - Get Translation of foreign transcript
    - Step 3 - Word for Word Translation Creation in both Directions (Paste Google Translation here)
    - """) - gr.Interface(fn=split_srt_file, inputs=["text", SRTLangOptions] , outputs=["text", "file", "text", "text"], description="SRT Contents to W4W Split SRT for Google Translate") - gr.Interface(fn=chunk_srt_text, inputs=['text', Translationchuncksize], outputs=['dataframe','text'], description='Assitant for google translate character limit - aka where to expect cuts in the text') - gr.HTML("Step 4 - Pronounciation (Roman) to Subtitle Format --> GTranslate returns unformatted string") - gr.Interface(fn=splittext, inputs="text", outputs="text", description="Text for w4w creation in G Translate") - gr.HTML("Step 5 - Merge into one file") - with gr.Row(): - RomanFile = gr.File(label="Paste Roman") - W4WFile = gr.File(label="Paste Word 4 Word") - FullMeanFile = gr.File(label="Paste Full Meaning") - MacaronicFile = gr.File(label="Paste Macaronic Text") - SentGramFormula = gr.File(label="Paste Sentence Grammar Formula Text") - with gr.Row(): - MergeButton = gr.Button(label='Merge the seperate files into one interpolated file (Line by line merge)') - with gr.Row(): - MergeOutput = gr.TextArea(label="Output") - MergeButton.click(merge_lines, inputs=[RomanFile, W4WFile, FullMeanFile, MacaronicFile], outputs=[MergeOutput], ) - with gr.Row(): - gr.Text("Make sure there are 4 spaces after the last subtitle block (Otherwise its skipped)") - CleanedMergeButton = gr.Button(label='Create a Usable file for SRT') - with gr.Row(): - CleanedMergeOutput = gr.TextArea(label="Output") - CleanedMergeButton.click(fn=SRTLineSort, inputs=[MergeOutput], outputs=[CleanedMergeOutput]) - with gr.Tab("Split video to segments"): - gr.HTML("How to make screenshot in vlc - https://www.vlchelp.com/automated-screenshots-interval/
    ") - gr.Interface(VideotoSegment, inputs=[spvvideo_file_input, spvsubtitle_file_input], outputs=spvdownload_output) - gr.Text("Text to Closed Class + Adjectives + Punctuation or Noun Verb + Punctuation ") - with gr.Tab("Audio - Only English thoughts as practice"): - gr.HTML("For Audio Most productive is real time recall of native (where your full reasoning ability will always be)

    Find Replace new lines of the foreign text with full stops or | to get per word translation") - gr.Interface(fn=TTSforListeningPractice, inputs=["text", TTSLangOptions2], outputs="audio", description="Paste only english words in foreign order and then keep removing the words from this to practice as effectively") - with gr.Tab("Transition is the end goal"): - with gr.Row(): - with gr.Column(): - gr.Textbox("A word is a list of letter as a fact is a list of words. Both are in a specific order. What is most important is practice the order so randomiser is the tool", lines=4) - gr.Interface(fn=RandomiseTextbyType, inputs=["text", RandomiseTextType], outputs="text", description="Randomise order within words, sentences, paragrahs") - with gr.Column(): - #with gr.Tab("Collocations (Markov)"): - gr.HTML("Transition is the true nature of logic i.e. like some form of non-semantic embedding that is semantic?") - gr.Interface(fn=build_model, inputs="text", outputs=["text", "text"], description="Create Collocation Dictionary --> Google Kathryn Lingel - Pyambic Pentameter Example - PyCon US for more") - gr.Interface(fn=markov_generate, inputs=["text", Markovlength], outputs="text", description="Generate Text based on the collocations in the text") - with gr.Column(): - #with gr.Tab("Spelling + Chunks"): - gr.Textbox("Merged Spelling Practice Placeholder - Spell multiple words simultaneously for simultaneous access", lines=3) - gr.HTML("

    Spell multiple words simultaneously for simultaneous access

    Spelling Simplification - Use a dual language list? | Spelling is the end goal, you already know many letter orders called words so you need leverage them to remember random sequences") - gr.Interface(fn=create_dictionary, inputs="text", outputs="text", title="Sort Text by first two letters") - gr.Interface(fn=keep_nouns_verbs, inputs=["text"], outputs="text", description="Noun and Verbs only (Plus punctuation)") - gr.Interface(fn=FrontRevSentChunk, inputs=[ChunkModeDrop, "checkbox", "text", langdest], outputs="text", description="Chunks creator") - with gr.Tab("Unknown Tracker"): - gr.HTML("Repitition of things you know is a waste of time when theres stuff you dont know

    In Language the goal is bigger vocab --> Knowledge equivalent = question answer pairs but to get to those you need related information pairs

    Vocab = Glossary + all non text wall(lists, diagrams, etc.)
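A minimal sketch of the tracker idea described here, assuming a plain whitespace tokenisation and a user-supplied set of known words (both assumptions, not part of the original app): strip the known words from a text so only the unknown vocabulary remains for study.

def unknown_words_only(text, known_words):
    known = {w.lower() for w in known_words}
    tokens = text.split()
    # keep only tokens whose cleaned, lowercased form is not in the known set
    unknown = [t for t in tokens if t.strip(".,!?;:\"'").lower() not in known]
    return " ".join(unknown)

# Example: unknown_words_only("the cat sat on the tatami", {"the", "cat", "sat", "on"})
# -> "tatami"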

    ") - gr.Textbox("Placeholder for a function that creates a set list and can takes a list for known words and auto find replaces the stuff you know out of the content") - gr.Textbox("Place holder for a translate to english interface so that highlighting can still work as only english supported for now") - gr.Interface(fn=UnknownTrackTexttoApp, inputs="text", outputs=["html", "text", "file"], description="Use the text from here to create lists you use for the TTS section") - with gr.Tab("Unique word ID - use in Infranodus"): - gr.Interface(fn=unique_word_count, inputs="text", outputs="text", description="Wordcounter") - gr.Interface(fn=SepHypandSynExpansion, inputs="text", outputs=["text", "text"], description="Word suggestions - Analyse the unique words in infranodus") - gr.Interface(fn=WikiSearch, inputs="text", outputs="text", description="One word at a time Unique word suggestions (wiki articles)") - with gr.Tab("Automating related information linking"): - gr.HTML("Questions - Tacking and suggesting questions to ask = new education") - with gr.Tab("Thinking Practice"): - with gr.Tab("Sentence to Format"): - gr.Interface(fn=split_verbs_nouns , inputs="text", outputs=["text", "text", "text"], description="Comprehension reading and Sentence Format Creator") - with gr.Tab("Knowledge Ideas - Notetaking"): - gr.HTML("""

    Good knowledge = ability to answer questions --> find questions you can't answer and look for the hidden answers within them

    -

    My One Word Theory = We only use more words than needed when we have to or are bored --> Headings exist because title is not sufficient, subheadings exist because headings are not sufficient, Book Text exists because subheadings are not sufficient

    -

    Big Picture = Expand the Heading and the subheadings and compare them to each other

    -

    Application of Knowledge = App Version of the text (eg. Jupyter Notebooks) is what you create and learn first

    - """) - gr.Interface(fn=TextCompFormat, inputs=["textarea", HTMLCompMode], outputs="text", description="Convert Text to HTML Dropdown or Links which you paste in any html file") - gr.Interface(fn=create_collapsiblebutton, inputs=["textbox", "textbox", "textarea"], outputs="textarea", description="Button and Div HTML Generator, Generate the HTML for a button and the corresponding div element.") - with gr.Tab("Automated Reading Assitant"): - gr.Textbox('Parts of Speech based | Automating the Notetaking Tab either directly or using visual llm to use this interface efficiently') - gr.HTML("Types of comprehension agent
    Speed of Comprehension = Verb comprehension
    From the following please extract the verbs
    now explain each in context
    Next, use picture descriptions for each word in the verb list
    Create combinations using the verb list
    ") - gr.HTML("Tree and Branches approach to learning = familiarity with keywords/headings/summaries before reading the whole text
    Productivity/Work revolves around repetition which can be found looking for plurals and grouping terms e.g. Headings and Hyper/Hyponyms Analysis") - with gr.Tab("AR"): - gr.Textbox("Alpha Test version = Real time Labelling of All things in view using SAM and Clip Interrogator and OpenCV on pydroid --> Adjusted Demo") - gr.HTML("Some Prompt ideas --> Prompt: Describe the place where these descriptions may be (Your job is to be speculative for brainstorming purposes): A dog and a boy, the area is Texas, the weather is sunny, the date is 01 May 2021
    Prompt Content Ideas: Clip Interrogator + Location Data aka tags for place, location and time + general news updates on the location + overview of the items in the location
    Location based advice is most important but after that is information observed by appliances in the location e.g. times the computer was turned on, times the geyser was inspected, amount of time the keys haven't been touched etc.
    each location will have an ai personality that will relay more information ") - gr.HTML(" -- SAM with Clip -- ") - gr.Interface(fn=arrealtimetestidea, inputs='image', outputs="text", description="Vision Assistant - see and execute") - gr.Textbox("Placeholder for webcam stream") - #gr.Interface(fn=arrealtimetestidea, inputs='webcam', outputs="text", description="Vision Assistant aka Free Observation llm judgement (GPT Vision API goes here when released). FPS is the difference between realtime app and static image") - with gr.Tab("Random Ideas"): - gr.HTML("""

    Spaces Test - Still Under construction --> Next Milestone is turning this interface hands-free | Knowledge is a Language but productive knowledge is find replace as well | LingQ is a good option for per word state management

    Arrows app JSON creator for easy knowledge graphing and spaCy POS graph? --> Questions? --> -

    ChatGPT turns learning into a "read only what you don't know, ask only what you don't know" feedback loop --> All you have to do is keep track of what prompts you have asked in the past

    """) - gr.HTML("

    Target 0: Mnemonics as title of images --> Comprehensible input
    Target 1: Dual audio at word level while using repetition to train random recall --> Word level Time
    Target 2: Video --> Split by sentence --> each word repeated (60) + each phrase (10) + each sentence (10) --> TTS file for practice --> State Management/Known word Tracker
    -----------------------
    The trick is a minimum of one minute of focus on a new word --> Listening is hard because there are new words within seconds and you need repeated focus on each to learn

    Audio = best long form attention mechanism AS it is ANTICIPATION (Awareness of something before it happens like knowing song Lyrics) FOCUSED - Attention (Focused Repetition) + Exposure (Random Repetition)

    Listening is hard due to different word order and word combinations (collocations more important than single words)


    ") - gr.HTML("Predictable to identify the parts of picture being described --> The description moves in one direction from one side of the image to the other side is easiest
    ") - gr.HTML("Image = instant comprehension like Stable Diffusion --> Audiovisual experience is the most optimal reading experience
    Manga with summary descriptions for the chapters = Most aligned visual to audio experience") - with gr.Tab("LLM Prompts and games"): - gr.HTML(LLPromptIdeas) - -lliface.queue().launch() #(inbrowser="true") \ No newline at end of file diff --git a/spaces/PeepDaSlan9/Universal-NER-UniNER-7B-definition/README.md b/spaces/PeepDaSlan9/Universal-NER-UniNER-7B-definition/README.md deleted file mode 100644 index 31573a4c491409e9147d3642e8b2a14a2c029403..0000000000000000000000000000000000000000 --- a/spaces/PeepDaSlan9/Universal-NER-UniNER-7B-definition/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Universal NER UniNER 7B Definition -emoji: ⚡ -colorFrom: blue -colorTo: purple -sdk: gradio -sdk_version: 3.39.0 -app_file: app.py -pinned: false -license: creativeml-openrail-m ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Pengyey/bingo-chuchu/src/components/header.tsx b/spaces/Pengyey/bingo-chuchu/src/components/header.tsx deleted file mode 100644 index dc298b722154d1ac6d7a7e148204605562d6cc58..0000000000000000000000000000000000000000 --- a/spaces/Pengyey/bingo-chuchu/src/components/header.tsx +++ /dev/null @@ -1,12 +0,0 @@ -import * as React from 'react' -import { UserMenu } from './user-menu' - -export async function Header() { - return ( -
    -
    - -
    -
    - ) -} diff --git a/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/cnn/bricks/hswish.py b/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/cnn/bricks/hswish.py deleted file mode 100644 index 7e0c090ff037c99ee6c5c84c4592e87beae02208..0000000000000000000000000000000000000000 --- a/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/cnn/bricks/hswish.py +++ /dev/null @@ -1,29 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch.nn as nn - -from .registry import ACTIVATION_LAYERS - - -@ACTIVATION_LAYERS.register_module() -class HSwish(nn.Module): - """Hard Swish Module. - - This module applies the hard swish function: - - .. math:: - Hswish(x) = x * ReLU6(x + 3) / 6 - - Args: - inplace (bool): can optionally do the operation in-place. - Default: False. - - Returns: - Tensor: The output tensor. - """ - - def __init__(self, inplace=False): - super(HSwish, self).__init__() - self.act = nn.ReLU6(inplace) - - def forward(self, x): - return x * self.act(x + 3) / 6 diff --git a/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/models/blip_nlvr.py b/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/models/blip_nlvr.py deleted file mode 100644 index 84837167bfa6874d3c3e41fb9b37271113910b7f..0000000000000000000000000000000000000000 --- a/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/models/blip_nlvr.py +++ /dev/null @@ -1,103 +0,0 @@ -from models.med import BertConfig -from models.nlvr_encoder import BertModel -from models.vit import interpolate_pos_embed -from models.blip import create_vit, init_tokenizer, is_url - -from timm.models.hub import download_cached_file - -import torch -from torch import nn -import torch.nn.functional as F -from transformers import BertTokenizer -import numpy as np - -class BLIP_NLVR(nn.Module): - def __init__(self, - med_config = 'configs/med_config.json', - image_size = 480, - vit = 'base', - vit_grad_ckpt = False, - vit_ckpt_layer = 0, - ): - """ - Args: - med_config (str): path for the mixture of encoder-decoder model's configuration file - image_size (int): input image size - vit (str): model size of vision transformer - """ - super().__init__() - - self.visual_encoder, vision_width = create_vit(vit,image_size, vit_grad_ckpt, vit_ckpt_layer, drop_path_rate=0.1) - self.tokenizer = init_tokenizer() - med_config = BertConfig.from_json_file(med_config) - med_config.encoder_width = vision_width - self.text_encoder = BertModel(config=med_config, add_pooling_layer=False) - - self.cls_head = nn.Sequential( - nn.Linear(self.text_encoder.config.hidden_size, self.text_encoder.config.hidden_size), - nn.ReLU(), - nn.Linear(self.text_encoder.config.hidden_size, 2) - ) - - def forward(self, image, text, targets, train=True): - - image_embeds = self.visual_encoder(image) - image_atts = torch.ones(image_embeds.size()[:-1],dtype=torch.long).to(image.device) - image0_embeds, image1_embeds = torch.split(image_embeds,targets.size(0)) - - text = self.tokenizer(text, padding='longest', return_tensors="pt").to(image.device) - text.input_ids[:,0] = self.tokenizer.enc_token_id - - output = self.text_encoder(text.input_ids, - attention_mask = text.attention_mask, - encoder_hidden_states = [image0_embeds,image1_embeds], - encoder_attention_mask = [image_atts[:image0_embeds.size(0)], - image_atts[image0_embeds.size(0):]], - return_dict = True, - ) - hidden_state = output.last_hidden_state[:,0,:] - prediction = self.cls_head(hidden_state) - - if train: - loss = F.cross_entropy(prediction, targets) - return loss - else: - return prediction - -def 
blip_nlvr(pretrained='',**kwargs): - model = BLIP_NLVR(**kwargs) - if pretrained: - model,msg = load_checkpoint(model,pretrained) - print("missing keys:") - print(msg.missing_keys) - return model - - -def load_checkpoint(model,url_or_filename): - if is_url(url_or_filename): - cached_file = download_cached_file(url_or_filename, check_hash=False, progress=True) - checkpoint = torch.load(cached_file, map_location='cpu') - elif os.path.isfile(url_or_filename): - checkpoint = torch.load(url_or_filename, map_location='cpu') - else: - raise RuntimeError('checkpoint url or path is invalid') - state_dict = checkpoint['model'] - - state_dict['visual_encoder.pos_embed'] = interpolate_pos_embed(state_dict['visual_encoder.pos_embed'],model.visual_encoder) - - for key in list(state_dict.keys()): - if 'crossattention.self.' in key: - new_key0 = key.replace('self','self0') - new_key1 = key.replace('self','self1') - state_dict[new_key0] = state_dict[key] - state_dict[new_key1] = state_dict[key] - elif 'crossattention.output.dense.' in key: - new_key0 = key.replace('dense','dense0') - new_key1 = key.replace('dense','dense1') - state_dict[new_key0] = state_dict[key] - state_dict[new_key1] = state_dict[key] - - msg = model.load_state_dict(state_dict,strict=False) - print('load checkpoint from %s'%url_or_filename) - return model,msg - \ No newline at end of file diff --git a/spaces/Priyanka-Kumavat/Anomaly-Detection-On-Sound-Data/README.md b/spaces/Priyanka-Kumavat/Anomaly-Detection-On-Sound-Data/README.md deleted file mode 100644 index 05bb9047da5bf148bb9b4a65bca0d0581681418a..0000000000000000000000000000000000000000 --- a/spaces/Priyanka-Kumavat/Anomaly-Detection-On-Sound-Data/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Anomaly Detection On Sound Data -emoji: 📚 -colorFrom: indigo -colorTo: indigo -sdk: streamlit -sdk_version: 1.27.2 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Priyanka-Kumavat/Regression-Model/README.md b/spaces/Priyanka-Kumavat/Regression-Model/README.md deleted file mode 100644 index 324fa1a05cbb00a0270f5afedded0bb8c1891e20..0000000000000000000000000000000000000000 --- a/spaces/Priyanka-Kumavat/Regression-Model/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Regression Model -emoji: 👀 -colorFrom: indigo -colorTo: purple -sdk: streamlit -sdk_version: 1.21.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Qualinguis/Fraudulent_or_not/README.md b/spaces/Qualinguis/Fraudulent_or_not/README.md deleted file mode 100644 index 116de543e712033bd657fd2ee61050ac8af44c50..0000000000000000000000000000000000000000 --- a/spaces/Qualinguis/Fraudulent_or_not/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Fraudulent Or Not -emoji: 📚 -colorFrom: indigo -colorTo: gray -sdk: gradio -sdk_version: 3.35.2 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/tenacity/nap.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/tenacity/nap.py deleted file mode 100644 index 72aa5bfd4b60d8e6ef6ed0cf2ae4f763d12195cc..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/tenacity/nap.py +++ /dev/null @@ -1,43 +0,0 
@@ -# Copyright 2016 Étienne Bersac -# Copyright 2016 Julien Danjou -# Copyright 2016 Joshua Harlow -# Copyright 2013-2014 Ray Holder -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import time -import typing - -if typing.TYPE_CHECKING: - import threading - - -def sleep(seconds: float) -> None: - """ - Sleep strategy that delays execution for a given number of seconds. - - This is the default strategy, and may be mocked out for unit testing. - """ - time.sleep(seconds) - - -class sleep_using_event: - """Sleep strategy that waits on an event to be set.""" - - def __init__(self, event: "threading.Event") -> None: - self.event = event - - def __call__(self, timeout: typing.Optional[float]) -> None: - # NOTE(harlowja): this may *not* actually wait for timeout - # seconds if the event is set (ie this may eject out early). - self.event.wait(timeout=timeout) diff --git a/spaces/Realcat/image-matching-webui/hloc/pipelines/Cambridge/README.md b/spaces/Realcat/image-matching-webui/hloc/pipelines/Cambridge/README.md deleted file mode 100644 index d5ae07b71c48a98fa9235f0dfb0234c3c18c74c6..0000000000000000000000000000000000000000 --- a/spaces/Realcat/image-matching-webui/hloc/pipelines/Cambridge/README.md +++ /dev/null @@ -1,47 +0,0 @@ -# Cambridge Landmarks dataset - -## Installation - -Download the dataset from the [PoseNet project page](http://mi.eng.cam.ac.uk/projects/relocalisation/): -```bash -export dataset=datasets/cambridge -export scenes=( "KingsCollege" "OldHospital" "StMarysChurch" "ShopFacade" "GreatCourt" ) -export IDs=( "251342" "251340" "251294" "251336" "251291" ) -for i in "${!scenes[@]}"; do -wget https://www.repository.cam.ac.uk/bitstream/handle/1810/${IDs[i]}/${scenes[i]}.zip -P $dataset \ -&& unzip $dataset/${scenes[i]}.zip -d $dataset && rm $dataset/${scenes[i]}.zip; done -``` - -Download the SIFT SfM models, courtesy of Torsten Sattler: -```bash -export fileid=1esqzZ1zEQlzZVic-H32V6kkZvc4NeS15 -export filename=$dataset/CambridgeLandmarks_Colmap_Retriangulated_1024px.zip -wget --load-cookies /tmp/cookies.txt "https://docs.google.com/uc?export=download&confirm=$(wget --quiet --save-cookies /tmp/cookies.txt --keep-session-cookies --no-check-certificate "https://docs.google.com/uc?export=download&id=$fileid" -O- | sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/\1\n/p')&id=$fileid" -O $filename && rm -rf /tmp/cookies.txt -unzip $filename -d $dataset -``` - -## Pipeline - -```bash -python3 -m hloc.pipelines.Cambridge.pipeline -``` - -## Results -We report the median error in translation/rotation in cm/deg over all scenes: -| Method \ Scene | Court | King's | Hospital | Shop | St. 
Mary's | -| ------------------------ | --------------- | --------------- | --------------- | -------------- | -------------- | -| Active Search | 24/0.13 | 13/0.22 | 20/0.36 | **4**/0.21 | 8/0.25 | -| DSAC* | 49/0.3 | 15/0.3 | 21/0.4 | 5/0.3 | 13/0.4 | -| **SuperPoint+SuperGlue** | **17**/**0.11** | **12**/**0.21** | **14**/**0.30** | **4**/**0.19** | **7**/**0.22** | - -## Citation - -Please cite the following paper if you use the Cambridge Landmarks dataset: -``` -@inproceedings{kendall2015posenet, - title={{PoseNet}: A convolutional network for real-time {6-DoF} camera relocalization}, - author={Kendall, Alex and Grimes, Matthew and Cipolla, Roberto}, - booktitle={ICCV}, - year={2015} -} -``` diff --git a/spaces/Realcat/image-matching-webui/third_party/r2d2/nets/repeatability_loss.py b/spaces/Realcat/image-matching-webui/third_party/r2d2/nets/repeatability_loss.py deleted file mode 100644 index af49e77f444c5b4b035cd43d0c065096e8dd7c1b..0000000000000000000000000000000000000000 --- a/spaces/Realcat/image-matching-webui/third_party/r2d2/nets/repeatability_loss.py +++ /dev/null @@ -1,63 +0,0 @@ -# Copyright 2019-present NAVER Corp. -# CC BY-NC-SA 3.0 -# Available only for non-commercial use - -import pdb - -import torch -import torch.nn as nn -import torch.nn.functional as F - -from nets.sampler import FullSampler - - -class CosimLoss(nn.Module): - """Try to make the repeatability repeatable from one image to the other.""" - - def __init__(self, N=16): - nn.Module.__init__(self) - self.name = f"cosim{N}" - self.patches = nn.Unfold(N, padding=0, stride=N // 2) - - def extract_patches(self, sal): - patches = self.patches(sal).transpose(1, 2) # flatten - patches = F.normalize(patches, p=2, dim=2) # norm - return patches - - def forward(self, repeatability, aflow, **kw): - B, two, H, W = aflow.shape - assert two == 2 - - # normalize - sali1, sali2 = repeatability - grid = FullSampler._aflow_to_grid(aflow) - sali2 = F.grid_sample(sali2, grid, mode="bilinear", padding_mode="border") - - patches1 = self.extract_patches(sali1) - patches2 = self.extract_patches(sali2) - cosim = (patches1 * patches2).sum(dim=2) - return 1 - cosim.mean() - - -class PeakyLoss(nn.Module): - """Try to make the repeatability locally peaky. - - Mechanism: we maximize, for each pixel, the difference between the local mean - and the local max. 
- """ - - def __init__(self, N=16): - nn.Module.__init__(self) - self.name = f"peaky{N}" - assert N % 2 == 0, "N must be pair" - self.preproc = nn.AvgPool2d(3, stride=1, padding=1) - self.maxpool = nn.MaxPool2d(N + 1, stride=1, padding=N // 2) - self.avgpool = nn.AvgPool2d(N + 1, stride=1, padding=N // 2) - - def forward_one(self, sali): - sali = self.preproc(sali) # remove super high frequency - return 1 - (self.maxpool(sali) - self.avgpool(sali)).mean() - - def forward(self, repeatability, **kw): - sali1, sali2 = repeatability - return (self.forward_one(sali1) + self.forward_one(sali2)) / 2 diff --git a/spaces/Reha2704/VToonify/vtoonify/model/raft/core/raft.py b/spaces/Reha2704/VToonify/vtoonify/model/raft/core/raft.py deleted file mode 100644 index a25c22f78c96470e3dca4c25e81683133ae024e3..0000000000000000000000000000000000000000 --- a/spaces/Reha2704/VToonify/vtoonify/model/raft/core/raft.py +++ /dev/null @@ -1,144 +0,0 @@ -import numpy as np -import torch -import torch.nn as nn -import torch.nn.functional as F - -from model.raft.core.update import BasicUpdateBlock, SmallUpdateBlock -from model.raft.core.extractor import BasicEncoder, SmallEncoder -from model.raft.core.corr import CorrBlock, AlternateCorrBlock -from model.raft.core.utils.utils import bilinear_sampler, coords_grid, upflow8 - -try: - autocast = torch.cuda.amp.autocast -except: - # dummy autocast for PyTorch < 1.6 - class autocast: - def __init__(self, enabled): - pass - def __enter__(self): - pass - def __exit__(self, *args): - pass - - -class RAFT(nn.Module): - def __init__(self, args): - super(RAFT, self).__init__() - self.args = args - - if args.small: - self.hidden_dim = hdim = 96 - self.context_dim = cdim = 64 - args.corr_levels = 4 - args.corr_radius = 3 - - else: - self.hidden_dim = hdim = 128 - self.context_dim = cdim = 128 - args.corr_levels = 4 - args.corr_radius = 4 - - if 'dropout' not in self.args: - self.args.dropout = 0 - - if 'alternate_corr' not in self.args: - self.args.alternate_corr = False - - # feature network, context network, and update block - if args.small: - self.fnet = SmallEncoder(output_dim=128, norm_fn='instance', dropout=args.dropout) - self.cnet = SmallEncoder(output_dim=hdim+cdim, norm_fn='none', dropout=args.dropout) - self.update_block = SmallUpdateBlock(self.args, hidden_dim=hdim) - - else: - self.fnet = BasicEncoder(output_dim=256, norm_fn='instance', dropout=args.dropout) - self.cnet = BasicEncoder(output_dim=hdim+cdim, norm_fn='batch', dropout=args.dropout) - self.update_block = BasicUpdateBlock(self.args, hidden_dim=hdim) - - def freeze_bn(self): - for m in self.modules(): - if isinstance(m, nn.BatchNorm2d): - m.eval() - - def initialize_flow(self, img): - """ Flow is represented as difference between two coordinate grids flow = coords1 - coords0""" - N, C, H, W = img.shape - coords0 = coords_grid(N, H//8, W//8, device=img.device) - coords1 = coords_grid(N, H//8, W//8, device=img.device) - - # optical flow computed as difference: flow = coords1 - coords0 - return coords0, coords1 - - def upsample_flow(self, flow, mask): - """ Upsample flow field [H/8, W/8, 2] -> [H, W, 2] using convex combination """ - N, _, H, W = flow.shape - mask = mask.view(N, 1, 9, 8, 8, H, W) - mask = torch.softmax(mask, dim=2) - - up_flow = F.unfold(8 * flow, [3,3], padding=1) - up_flow = up_flow.view(N, 2, 9, 1, 1, H, W) - - up_flow = torch.sum(mask * up_flow, dim=2) - up_flow = up_flow.permute(0, 1, 4, 2, 5, 3) - return up_flow.reshape(N, 2, 8*H, 8*W) - - - def forward(self, image1, image2, iters=12, 
flow_init=None, upsample=True, test_mode=False): - """ Estimate optical flow between pair of frames """ - - image1 = 2 * (image1 / 255.0) - 1.0 - image2 = 2 * (image2 / 255.0) - 1.0 - - image1 = image1.contiguous() - image2 = image2.contiguous() - - hdim = self.hidden_dim - cdim = self.context_dim - - # run the feature network - with autocast(enabled=self.args.mixed_precision): - fmap1, fmap2 = self.fnet([image1, image2]) - - fmap1 = fmap1.float() - fmap2 = fmap2.float() - if self.args.alternate_corr: - corr_fn = AlternateCorrBlock(fmap1, fmap2, radius=self.args.corr_radius) - else: - corr_fn = CorrBlock(fmap1, fmap2, radius=self.args.corr_radius) - - # run the context network - with autocast(enabled=self.args.mixed_precision): - cnet = self.cnet(image1) - net, inp = torch.split(cnet, [hdim, cdim], dim=1) - net = torch.tanh(net) - inp = torch.relu(inp) - - coords0, coords1 = self.initialize_flow(image1) - - if flow_init is not None: - coords1 = coords1 + flow_init - - flow_predictions = [] - for itr in range(iters): - coords1 = coords1.detach() - corr = corr_fn(coords1) # index correlation volume - - flow = coords1 - coords0 - with autocast(enabled=self.args.mixed_precision): - net, up_mask, delta_flow = self.update_block(net, inp, corr, flow) - - # F(t+1) = F(t) + \Delta(t) - coords1 = coords1 + delta_flow - - # upsample predictions - if up_mask is None: - flow_up = upflow8(coords1 - coords0) - else: - flow_up = self.upsample_flow(coords1 - coords0, up_mask) - - flow_predictions.append(flow_up) - - if test_mode: - return coords1 - coords0, flow_up - - return flow_predictions diff --git a/spaces/Ritori/TTS_Yui/plotting_utils.py b/spaces/Ritori/TTS_Yui/plotting_utils.py deleted file mode 100644 index ca7e16880ea01ca3a03b8842a5d885e0285ed489..0000000000000000000000000000000000000000 --- a/spaces/Ritori/TTS_Yui/plotting_utils.py +++ /dev/null @@ -1,61 +0,0 @@ -import matplotlib -matplotlib.use("Agg") -import matplotlib.pylab as plt -import numpy as np - - -def save_figure_to_numpy(fig): - # save it to a numpy array. 
- data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='') - data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,)) - return data - - -def plot_alignment_to_numpy(alignment, info=None): - fig, ax = plt.subplots(figsize=(6, 4)) - im = ax.imshow(alignment, aspect='auto', origin='lower', - interpolation='none') - fig.colorbar(im, ax=ax) - xlabel = 'Decoder timestep' - if info is not None: - xlabel += '\n\n' + info - plt.xlabel(xlabel) - plt.ylabel('Encoder timestep') - plt.tight_layout() - - fig.canvas.draw() - data = save_figure_to_numpy(fig) - plt.close() - return data - - -def plot_spectrogram_to_numpy(spectrogram): - fig, ax = plt.subplots(figsize=(12, 3)) - im = ax.imshow(spectrogram, aspect="auto", origin="lower", - interpolation='none') - plt.colorbar(im, ax=ax) - plt.xlabel("Frames") - plt.ylabel("Channels") - plt.tight_layout() - - fig.canvas.draw() - data = save_figure_to_numpy(fig) - plt.close() - return data - - -def plot_gate_outputs_to_numpy(gate_targets, gate_outputs): - fig, ax = plt.subplots(figsize=(12, 3)) - ax.scatter(range(len(gate_targets)), gate_targets, alpha=0.5, - color='green', marker='+', s=1, label='target') - ax.scatter(range(len(gate_outputs)), gate_outputs, alpha=0.5, - color='red', marker='.', s=1, label='predicted') - - plt.xlabel("Frames (Green target, Red predicted)") - plt.ylabel("Gate State") - plt.tight_layout() - - fig.canvas.draw() - data = save_figure_to_numpy(fig) - plt.close() - return data diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/configs/_base_/models/fpn_r50.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/configs/_base_/models/fpn_r50.py deleted file mode 100644 index 86ab327db92e44c14822d65f1c9277cb007f17c1..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/configs/_base_/models/fpn_r50.py +++ /dev/null @@ -1,36 +0,0 @@ -# model settings -norm_cfg = dict(type='SyncBN', requires_grad=True) -model = dict( - type='EncoderDecoder', - pretrained='open-mmlab://resnet50_v1c', - backbone=dict( - type='ResNetV1c', - depth=50, - num_stages=4, - out_indices=(0, 1, 2, 3), - dilations=(1, 1, 1, 1), - strides=(1, 2, 2, 2), - norm_cfg=norm_cfg, - norm_eval=False, - style='pytorch', - contract_dilation=True), - neck=dict( - type='FPN', - in_channels=[256, 512, 1024, 2048], - out_channels=256, - num_outs=4), - decode_head=dict( - type='FPNHead', - in_channels=[256, 256, 256, 256], - in_index=[0, 1, 2, 3], - feature_strides=[4, 8, 16, 32], - channels=128, - dropout_ratio=0.1, - num_classes=19, - norm_cfg=norm_cfg, - align_corners=False, - loss_decode=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)), - # model training and testing settings - train_cfg=dict(), - test_cfg=dict(mode='whole')) diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/ops/deprecated_wrappers.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/ops/deprecated_wrappers.py deleted file mode 100644 index a2e593df9ee57637038683d7a1efaa347b2b69e7..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/ops/deprecated_wrappers.py +++ /dev/null @@ -1,43 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -# This file is for backward compatibility. -# Module wrappers for empty tensor have been moved to mmcv.cnn.bricks. 
-import warnings - -from ..cnn.bricks.wrappers import Conv2d, ConvTranspose2d, Linear, MaxPool2d - - -class Conv2d_deprecated(Conv2d): - - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - warnings.warn( - 'Importing Conv2d wrapper from "mmcv.ops" will be deprecated in' - ' the future. Please import them from "mmcv.cnn" instead') - - -class ConvTranspose2d_deprecated(ConvTranspose2d): - - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - warnings.warn( - 'Importing ConvTranspose2d wrapper from "mmcv.ops" will be ' - 'deprecated in the future. Please import them from "mmcv.cnn" ' - 'instead') - - -class MaxPool2d_deprecated(MaxPool2d): - - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - warnings.warn( - 'Importing MaxPool2d wrapper from "mmcv.ops" will be deprecated in' - ' the future. Please import them from "mmcv.cnn" instead') - - -class Linear_deprecated(Linear): - - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - warnings.warn( - 'Importing Linear wrapper from "mmcv.ops" will be deprecated in' - ' the future. Please import them from "mmcv.cnn" instead') diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/losses/balanced_l1_loss.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/losses/balanced_l1_loss.py deleted file mode 100644 index 7bcd13ff26dbdc9f6eff8d7c7b5bde742a8d7d1d..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/losses/balanced_l1_loss.py +++ /dev/null @@ -1,120 +0,0 @@ -import mmcv -import numpy as np -import torch -import torch.nn as nn - -from ..builder import LOSSES -from .utils import weighted_loss - - -@mmcv.jit(derivate=True, coderize=True) -@weighted_loss -def balanced_l1_loss(pred, - target, - beta=1.0, - alpha=0.5, - gamma=1.5, - reduction='mean'): - """Calculate balanced L1 loss. - - Please see the `Libra R-CNN `_ - - Args: - pred (torch.Tensor): The prediction with shape (N, 4). - target (torch.Tensor): The learning target of the prediction with - shape (N, 4). - beta (float): The loss is a piecewise function of prediction and target - and ``beta`` serves as a threshold for the difference between the - prediction and target. Defaults to 1.0. - alpha (float): The denominator ``alpha`` in the balanced L1 loss. - Defaults to 0.5. - gamma (float): The ``gamma`` in the balanced L1 loss. - Defaults to 1.5. - reduction (str, optional): The method that reduces the loss to a - scalar. Options are "none", "mean" and "sum". - - Returns: - torch.Tensor: The calculated loss - """ - assert beta > 0 - assert pred.size() == target.size() and target.numel() > 0 - - diff = torch.abs(pred - target) - b = np.e**(gamma / alpha) - 1 - loss = torch.where( - diff < beta, alpha / b * - (b * diff + 1) * torch.log(b * diff / beta + 1) - alpha * diff, - gamma * diff + gamma / b - alpha * beta) - - return loss - - -@LOSSES.register_module() -class BalancedL1Loss(nn.Module): - """Balanced L1 Loss. - - arXiv: https://arxiv.org/pdf/1904.02701.pdf (CVPR 2019) - - Args: - alpha (float): The denominator ``alpha`` in the balanced L1 loss. - Defaults to 0.5. - gamma (float): The ``gamma`` in the balanced L1 loss. Defaults to 1.5. - beta (float, optional): The loss is a piecewise function of prediction - and target. ``beta`` serves as a threshold for the difference - between the prediction and target. Defaults to 1.0. 
- reduction (str, optional): The method that reduces the loss to a - scalar. Options are "none", "mean" and "sum". - loss_weight (float, optional): The weight of the loss. Defaults to 1.0 - """ - - def __init__(self, - alpha=0.5, - gamma=1.5, - beta=1.0, - reduction='mean', - loss_weight=1.0): - super(BalancedL1Loss, self).__init__() - self.alpha = alpha - self.gamma = gamma - self.beta = beta - self.reduction = reduction - self.loss_weight = loss_weight - - def forward(self, - pred, - target, - weight=None, - avg_factor=None, - reduction_override=None, - **kwargs): - """Forward function of loss. - - Args: - pred (torch.Tensor): The prediction with shape (N, 4). - target (torch.Tensor): The learning target of the prediction with - shape (N, 4). - weight (torch.Tensor, optional): Sample-wise loss weight with - shape (N, ). - avg_factor (int, optional): Average factor that is used to average - the loss. Defaults to None. - reduction_override (str, optional): The reduction method used to - override the original reduction method of the loss. - Options are "none", "mean" and "sum". - - Returns: - torch.Tensor: The calculated loss - """ - assert reduction_override in (None, 'none', 'mean', 'sum') - reduction = ( - reduction_override if reduction_override else self.reduction) - loss_bbox = self.loss_weight * balanced_l1_loss( - pred, - target, - weight, - alpha=self.alpha, - gamma=self.gamma, - beta=self.beta, - reduction=reduction, - avg_factor=avg_factor, - **kwargs) - return loss_bbox diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/losses/cross_entropy_loss.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/losses/cross_entropy_loss.py deleted file mode 100644 index 57994157960eeae5530bd983b8b86263de31d0ff..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/losses/cross_entropy_loss.py +++ /dev/null @@ -1,214 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F - -from ..builder import LOSSES -from .utils import weight_reduce_loss - - -def cross_entropy(pred, - label, - weight=None, - reduction='mean', - avg_factor=None, - class_weight=None): - """Calculate the CrossEntropy loss. - - Args: - pred (torch.Tensor): The prediction with shape (N, C), C is the number - of classes. - label (torch.Tensor): The learning label of the prediction. - weight (torch.Tensor, optional): Sample-wise loss weight. - reduction (str, optional): The method used to reduce the loss. - avg_factor (int, optional): Average factor that is used to average - the loss. Defaults to None. - class_weight (list[float], optional): The weight for each class. 
- - Returns: - torch.Tensor: The calculated loss - """ - # element-wise losses - loss = F.cross_entropy(pred, label, weight=class_weight, reduction='none') - - # apply weights and do the reduction - if weight is not None: - weight = weight.float() - loss = weight_reduce_loss( - loss, weight=weight, reduction=reduction, avg_factor=avg_factor) - - return loss - - -def _expand_onehot_labels(labels, label_weights, label_channels): - bin_labels = labels.new_full((labels.size(0), label_channels), 0) - inds = torch.nonzero( - (labels >= 0) & (labels < label_channels), as_tuple=False).squeeze() - if inds.numel() > 0: - bin_labels[inds, labels[inds]] = 1 - - if label_weights is None: - bin_label_weights = None - else: - bin_label_weights = label_weights.view(-1, 1).expand( - label_weights.size(0), label_channels) - - return bin_labels, bin_label_weights - - -def binary_cross_entropy(pred, - label, - weight=None, - reduction='mean', - avg_factor=None, - class_weight=None): - """Calculate the binary CrossEntropy loss. - - Args: - pred (torch.Tensor): The prediction with shape (N, 1). - label (torch.Tensor): The learning label of the prediction. - weight (torch.Tensor, optional): Sample-wise loss weight. - reduction (str, optional): The method used to reduce the loss. - Options are "none", "mean" and "sum". - avg_factor (int, optional): Average factor that is used to average - the loss. Defaults to None. - class_weight (list[float], optional): The weight for each class. - - Returns: - torch.Tensor: The calculated loss - """ - if pred.dim() != label.dim(): - label, weight = _expand_onehot_labels(label, weight, pred.size(-1)) - - # weighted element-wise losses - if weight is not None: - weight = weight.float() - loss = F.binary_cross_entropy_with_logits( - pred, label.float(), pos_weight=class_weight, reduction='none') - # do the reduction for the weighted loss - loss = weight_reduce_loss( - loss, weight, reduction=reduction, avg_factor=avg_factor) - - return loss - - -def mask_cross_entropy(pred, - target, - label, - reduction='mean', - avg_factor=None, - class_weight=None): - """Calculate the CrossEntropy loss for masks. - - Args: - pred (torch.Tensor): The prediction with shape (N, C, *), C is the - number of classes. The trailing * indicates arbitrary shape. - target (torch.Tensor): The learning label of the prediction. - label (torch.Tensor): ``label`` indicates the class label of the mask - corresponding object. This will be used to select the mask in the - of the class which the object belongs to when the mask prediction - if not class-agnostic. - reduction (str, optional): The method used to reduce the loss. - Options are "none", "mean" and "sum". - avg_factor (int, optional): Average factor that is used to average - the loss. Defaults to None. - class_weight (list[float], optional): The weight for each class. 
- - Returns: - torch.Tensor: The calculated loss - - Example: - >>> N, C = 3, 11 - >>> H, W = 2, 2 - >>> pred = torch.randn(N, C, H, W) * 1000 - >>> target = torch.rand(N, H, W) - >>> label = torch.randint(0, C, size=(N,)) - >>> reduction = 'mean' - >>> avg_factor = None - >>> class_weights = None - >>> loss = mask_cross_entropy(pred, target, label, reduction, - >>> avg_factor, class_weights) - >>> assert loss.shape == (1,) - """ - # TODO: handle these two reserved arguments - assert reduction == 'mean' and avg_factor is None - num_rois = pred.size()[0] - inds = torch.arange(0, num_rois, dtype=torch.long, device=pred.device) - pred_slice = pred[inds, label].squeeze(1) - return F.binary_cross_entropy_with_logits( - pred_slice, target, weight=class_weight, reduction='mean')[None] - - -@LOSSES.register_module() -class CrossEntropyLoss(nn.Module): - - def __init__(self, - use_sigmoid=False, - use_mask=False, - reduction='mean', - class_weight=None, - loss_weight=1.0): - """CrossEntropyLoss. - - Args: - use_sigmoid (bool, optional): Whether the prediction uses sigmoid - of softmax. Defaults to False. - use_mask (bool, optional): Whether to use mask cross entropy loss. - Defaults to False. - reduction (str, optional): . Defaults to 'mean'. - Options are "none", "mean" and "sum". - class_weight (list[float], optional): Weight of each class. - Defaults to None. - loss_weight (float, optional): Weight of the loss. Defaults to 1.0. - """ - super(CrossEntropyLoss, self).__init__() - assert (use_sigmoid is False) or (use_mask is False) - self.use_sigmoid = use_sigmoid - self.use_mask = use_mask - self.reduction = reduction - self.loss_weight = loss_weight - self.class_weight = class_weight - - if self.use_sigmoid: - self.cls_criterion = binary_cross_entropy - elif self.use_mask: - self.cls_criterion = mask_cross_entropy - else: - self.cls_criterion = cross_entropy - - def forward(self, - cls_score, - label, - weight=None, - avg_factor=None, - reduction_override=None, - **kwargs): - """Forward function. - - Args: - cls_score (torch.Tensor): The prediction. - label (torch.Tensor): The learning label of the prediction. - weight (torch.Tensor, optional): Sample-wise loss weight. - avg_factor (int, optional): Average factor that is used to average - the loss. Defaults to None. - reduction (str, optional): The method used to reduce the loss. - Options are "none", "mean" and "sum". 
- Returns: - torch.Tensor: The calculated loss - """ - assert reduction_override in (None, 'none', 'mean', 'sum') - reduction = ( - reduction_override if reduction_override else self.reduction) - if self.class_weight is not None: - class_weight = cls_score.new_tensor( - self.class_weight, device=cls_score.device) - else: - class_weight = None - loss_cls = self.loss_weight * self.cls_criterion( - cls_score, - label, - weight, - class_weight=class_weight, - reduction=reduction, - avg_factor=avg_factor, - **kwargs) - return loss_cls diff --git a/spaces/Sarst/VITS-Umamusume-voice-synthesizer2/text/cantonese.py b/spaces/Sarst/VITS-Umamusume-voice-synthesizer2/text/cantonese.py deleted file mode 100644 index b66d12138b81b70b86f18217d24a08fce76305c0..0000000000000000000000000000000000000000 --- a/spaces/Sarst/VITS-Umamusume-voice-synthesizer2/text/cantonese.py +++ /dev/null @@ -1,59 +0,0 @@ -import re -import cn2an -import opencc - - -converter = opencc.OpenCC('jyutjyu') - -# List of (Latin alphabet, ipa) pairs: -_latin_to_ipa = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('A', 'ei˥'), - ('B', 'biː˥'), - ('C', 'siː˥'), - ('D', 'tiː˥'), - ('E', 'iː˥'), - ('F', 'e˥fuː˨˩'), - ('G', 'tsiː˥'), - ('H', 'ɪk̚˥tsʰyː˨˩'), - ('I', 'ɐi˥'), - ('J', 'tsei˥'), - ('K', 'kʰei˥'), - ('L', 'e˥llou˨˩'), - ('M', 'ɛːm˥'), - ('N', 'ɛːn˥'), - ('O', 'ou˥'), - ('P', 'pʰiː˥'), - ('Q', 'kʰiːu˥'), - ('R', 'aː˥lou˨˩'), - ('S', 'ɛː˥siː˨˩'), - ('T', 'tʰiː˥'), - ('U', 'juː˥'), - ('V', 'wiː˥'), - ('W', 'tʊk̚˥piː˥juː˥'), - ('X', 'ɪk̚˥siː˨˩'), - ('Y', 'waːi˥'), - ('Z', 'iː˨sɛːt̚˥') -]] - - -def number_to_cantonese(text): - return re.sub(r'\d+(?:\.?\d+)?', lambda x: cn2an.an2cn(x.group()), text) - - -def latin_to_ipa(text): - for regex, replacement in _latin_to_ipa: - text = re.sub(regex, replacement, text) - return text - - -def cantonese_to_ipa(text): - text = number_to_cantonese(text.upper()) - text = converter.convert(text).replace('-','').replace('$',' ') - text = re.sub(r'[A-Z]', lambda x: latin_to_ipa(x.group())+' ', text) - text = re.sub(r'[、;:]', ',', text) - text = re.sub(r'\s*,\s*', ', ', text) - text = re.sub(r'\s*。\s*', '. ', text) - text = re.sub(r'\s*?\s*', '? ', text) - text = re.sub(r'\s*!\s*', '! ', text) - text = re.sub(r'\s*$', '', text) - return text diff --git a/spaces/SrRaptor/Imagy/music.py b/spaces/SrRaptor/Imagy/music.py deleted file mode 100644 index 1e76fd1f3c5b8a38a3d99b4a151f162dca32ca07..0000000000000000000000000000000000000000 --- a/spaces/SrRaptor/Imagy/music.py +++ /dev/null @@ -1,209 +0,0 @@ -from distutils.log import debug -import os, sys - -import random -import datetime -import glob -# from xml.dom.minidom import Document -import markov -import pickle -import subprocess -import gradio as gr -import time -from MC.markov_chain import get_pngs - - - - -#TODO: convert these into inputs -# lengthofsong = 10 Should we control this? Setting it to random now -timesignature = ['3/4','4/4','1/8','6/8','2/4'] #Sometimes the letter “C” (meaning common time) will be used in place of 4/4. -#Both C and 4/4 indicate that there are four quarter note beats in each measure. 
-keysignature = ["C","G","D","No selection"] -difficulty = ["beginner","intermediate","expert"] -key_enforced = False -key_enforced = True #Set to true if user wants in specific key - -# get the list of filenames (abc files downloaded from http://www.norbeck.nu/abc/) -# getdirs = [] -# dirs = ["hn201612/i/*.abc", "hn201612/s/*.abc"] -# dirs = ["data/*.abc"] -# dirs = ["data"] -# for dir1 in dirs: -# for filename in glob.iglob(dir1): -# getdirs += [filename] - -selected_timeSign = '3/4' #Default values -selected_keySign = 'C' #Default Values -deployed = True - - -GlobalUIGallery = False -#Finds all absolute paths in directory -#https://stackoverflow.com/questions/9816816/get-absolute-paths-of-all-files-in-a-directory -def abs_paths(dir): - for dir_path,_,filenames in os.walk(dir): - for f in filenames: - yield os.path.abspath(os.path.join(dir_path, f)) - -def time_sigFinder(time_Signature): - if time_Signature == "4/4": - return 'M:4/4',4 - elif time_Signature == "3/4": - return 'M:3/4',3 - elif time_Signature == "2/4": - return 'M:2/4',2 - elif time_Signature == "1/8": - pass - elif time_Signature == "2/4": - return 'M:2/4',2 - elif time_Signature == "2/2": - return 'M:2/2',2 -# def get_pngs(path): -# filelist=os.listdir(path) -# for fichier in filelist[:]: # filelist[:] makes a copy of filelist. -# if not(fichier.endswith(".png")): -# filelist.remove(fichier) -# newlist = [path+'/'+x for x in filelist] #making it cwd -# return newlist -def music_gen(difficulty,time_Signature, Key_Signature): - if deployed: - #delete all files stored in gen_songs_abc - command = "rm -r gen_songs_abc/*" - subprocess.Popen(command,shell=True,stdout=subprocess.PIPE,stderr=subprocess.PIPE).communicate() - - corpus = [] - song = [] - selected_timeSign = time_Signature - selected_keySign = Key_Signature - data_path = "data/"+str(difficulty) -# ex_filename = "hn201612/i/hnsong1.abc" -# parsing on file to extract songs and add them to corpus - for filename in abs_paths(data_path): - with open(filename) as f: - lines = f.readlines() - last = len(lines) - accepted = False - for index, line in enumerate(lines): - if (line.find("|") < 0 and index - 1 == last): - # if the next line does not have pipes add song to corpus and then set song variable empty again - if accepted and key_enforced and key_accepted: - corpus.append(song) - accepted = False - key_accepted = False - if accepted: - corpus.append(song) - accepted = False - song = [] - else: - if line.find("|") > -1: - # a line should be split on "|" and copied to the corpus if it has pipes - sline = line.split("|") - # add the list of measures to the song - song += [x.strip("\r\n") for x in sline if len(x.strip("\r\n")) > 0] - last = index - elif "M:" in line: - #time signature - if selected_timeSign == "4/4": - if "4/4" in line or "C|" in line: - accepted = True - elif selected_timeSign in line: - accepted = True - elif line.find("K:") and key_enforced: - #key signature - if selected_keySign in line: - key_accepted = True - - # print("Training on {} songs...".format(len(corpus))) - - # MARKOV PART - # n-gram length for markov model - n = 1 - - model = markov.generate_model_from_token_lists(corpus, n) - - - # save pickle - # with open('markov_chain.pickle', 'wb') as handle: - # pickle.dump(model, handle) - - - def nextword(word): - return markov.generate(model, 3, seed=word, max_iterations=1) - - - def writesong(songlength, first): - song = [first] - for i in range(songlength): - song += nextword(str(song[-1])) - return song - - # choose a random song length from 
list of song lengths in corpus - lengthofsong = random.choice([len(x) for x in corpus if len(x) > 10]) - song_len = [len(x) for x in corpus if len(x)>10] - song_len.sort() - firstnote = markov.generate(model, n, max_iterations=3)[0] - # print "first note: {}".format(firstnote) - - print("Here is the song in abc format:") - song = writesong(lengthofsong, firstnote) - dob = datetime.datetime.now().strftime('%H%M%S') - - modifier = format(dob) - path = "gen_songs_abc/song_"+modifier - # make song file - # songname = "./gen_songs_abc/gen_song_{}.abc".modifier - song_path = path+"/gen_song_"+modifier #without extension - songname = path+"/gen_song_"+modifier+".abc" - - print("\n\nYou can find the song in {}".format(songname)) - lastpart = lengthofsong - lengthofsong%4 - - # hack to include dictionary at the beginning of every abc file - # will add a more sophisticated way to generate the values in the future - title = "Markov Song {}".format(dob) - final_timeS,numOfnotes = time_sigFinder(time_Signature) - songbeginning = ['X:1','T:' + title, 'R:song', 'C:Visakh Ajith', 'Z:id:hn-song-111', final_timeS, 'L:1/8', 'Q:1/4=120', 'K:G' - ] - songbeginning = [x+"\n" for x in songbeginning] - - # convert song to abc format and write to file - - if not os.path.exists(path): - os.makedirs("gen_songs_abc/song_"+modifier) - - - newsong = open(os.path.abspath(songname), 'w') - newsong.writelines(songbeginning) - for i in range(lastpart): - newsong.write(" | ".join(song[i:i+numOfnotes]) + "\n") - newsong.write(" | ".join(song[lastpart:lengthofsong])) - newsong.close() - #abc2ly markov.abc - # lilypond -fpng markov.ly - #convert abc to markov - #create folder with that name and push .ly, midi and abc there? - - - f = open(song_path+".ly","w") - # subprocess.Popen(['/usr/bin/abc2midi',songname],stdout=subprocess.PIPE).communicate() - command = "abc2ly "+"-o "+song_path+".ly"+" "+songname - - # cmd1 = subprocess.Popen(['/usr/bin/abc2ly','-o',song_path+".ly",songname],stdout=subprocess.PIPE,stderr=subprocess.PIPE) - # cmd1 = subprocess.Popen(command,shell=True,stdout=subprocess.PIPE,stderr=subprocess.PIPE) - subprocess.Popen(command,shell=True,stdout=subprocess.PIPE,stderr=subprocess.PIPE).communicate() - # os.system(command) - f.close() - # out, err = cmd1.communicate() - # # time.sleep(2) - cmd2 = subprocess.Popen(['lilypond','-fpng','-o',path,song_path+".ly"]).communicate() - # cmd2.wait() - - #fluidsynth() dependency - # subprocess.Popen(['midi2audio',song_path+'.midi',song_path+'.wav']).communicate() - subprocess.Popen(['timidity',song_path+'.midi','-Ow','-o',song_path+'.wav']).communicate() - # output = str(temp.communicate()) - #Introduces this wait time as we were returning file path even before lilypond converted the abc file - # final_path = os.path.abspath(song_path+".png") - png_list = get_pngs(path) - return png_list,song_path+".wav" diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/testing/plugin/test_refs.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/testing/plugin/test_refs.py deleted file mode 100644 index b92448be074180e8311fa9820b52bc6fa70d696e..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/testing/plugin/test_refs.py +++ /dev/null @@ -1,39 +0,0 @@ -"""Some simple tests for the plugin while running scripts. 
-""" -# Module imports -# Std lib -import inspect - -# Our own - -#----------------------------------------------------------------------------- -# Testing functions - -def test_trivial(): - """A trivial passing test.""" - pass - -def doctest_run(): - """Test running a trivial script. - - In [13]: run simplevars.py - x is: 1 - """ - -def doctest_runvars(): - """Test that variables defined in scripts get loaded correctly via %run. - - In [13]: run simplevars.py - x is: 1 - - In [14]: x - Out[14]: 1 - """ - -def doctest_ivars(): - """Test that variables defined interactively are picked up. - In [5]: zz=1 - - In [6]: zz - Out[6]: 1 - """ diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/anyio/_core/_compat.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/anyio/_core/_compat.py deleted file mode 100644 index 22d29ab8ac303756047d105dadafcfd5107563ef..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/anyio/_core/_compat.py +++ /dev/null @@ -1,217 +0,0 @@ -from __future__ import annotations - -from abc import ABCMeta, abstractmethod -from contextlib import AbstractContextManager -from types import TracebackType -from typing import ( - TYPE_CHECKING, - Any, - AsyncContextManager, - Callable, - ContextManager, - Generator, - Generic, - Iterable, - List, - TypeVar, - Union, - overload, -) -from warnings import warn - -if TYPE_CHECKING: - from ._testing import TaskInfo -else: - TaskInfo = object - -T = TypeVar("T") -AnyDeprecatedAwaitable = Union[ - "DeprecatedAwaitable", - "DeprecatedAwaitableFloat", - "DeprecatedAwaitableList[T]", - TaskInfo, -] - - -@overload -async def maybe_async(__obj: TaskInfo) -> TaskInfo: - ... - - -@overload -async def maybe_async(__obj: DeprecatedAwaitableFloat) -> float: - ... - - -@overload -async def maybe_async(__obj: DeprecatedAwaitableList[T]) -> list[T]: - ... - - -@overload -async def maybe_async(__obj: DeprecatedAwaitable) -> None: - ... - - -async def maybe_async( - __obj: AnyDeprecatedAwaitable[T], -) -> TaskInfo | float | list[T] | None: - """ - Await on the given object if necessary. - - This function is intended to bridge the gap between AnyIO 2.x and 3.x where some functions and - methods were converted from coroutine functions into regular functions. - - Do **not** try to use this for any other purpose! - - :return: the result of awaiting on the object if coroutine, or the object itself otherwise - - .. versionadded:: 2.2 - - """ - return __obj._unwrap() - - -class _ContextManagerWrapper: - def __init__(self, cm: ContextManager[T]): - self._cm = cm - - async def __aenter__(self) -> T: - return self._cm.__enter__() - - async def __aexit__( - self, - exc_type: type[BaseException] | None, - exc_val: BaseException | None, - exc_tb: TracebackType | None, - ) -> bool | None: - return self._cm.__exit__(exc_type, exc_val, exc_tb) - - -def maybe_async_cm( - cm: ContextManager[T] | AsyncContextManager[T], -) -> AsyncContextManager[T]: - """ - Wrap a regular context manager as an async one if necessary. - - This function is intended to bridge the gap between AnyIO 2.x and 3.x where some functions and - methods were changed to return regular context managers instead of async ones. - - :param cm: a regular or async context manager - :return: an async context manager - - .. 
versionadded:: 2.2 - - """ - if not isinstance(cm, AbstractContextManager): - raise TypeError("Given object is not an context manager") - - return _ContextManagerWrapper(cm) - - -def _warn_deprecation( - awaitable: AnyDeprecatedAwaitable[Any], stacklevel: int = 1 -) -> None: - warn( - f'Awaiting on {awaitable._name}() is deprecated. Use "await ' - f"anyio.maybe_async({awaitable._name}(...)) if you have to support both AnyIO 2.x " - f'and 3.x, or just remove the "await" if you are completely migrating to AnyIO 3+.', - DeprecationWarning, - stacklevel=stacklevel + 1, - ) - - -class DeprecatedAwaitable: - def __init__(self, func: Callable[..., DeprecatedAwaitable]): - self._name = f"{func.__module__}.{func.__qualname__}" - - def __await__(self) -> Generator[None, None, None]: - _warn_deprecation(self) - if False: - yield - - def __reduce__(self) -> tuple[type[None], tuple[()]]: - return type(None), () - - def _unwrap(self) -> None: - return None - - -class DeprecatedAwaitableFloat(float): - def __new__( - cls, x: float, func: Callable[..., DeprecatedAwaitableFloat] - ) -> DeprecatedAwaitableFloat: - return super().__new__(cls, x) - - def __init__(self, x: float, func: Callable[..., DeprecatedAwaitableFloat]): - self._name = f"{func.__module__}.{func.__qualname__}" - - def __await__(self) -> Generator[None, None, float]: - _warn_deprecation(self) - if False: - yield - - return float(self) - - def __reduce__(self) -> tuple[type[float], tuple[float]]: - return float, (float(self),) - - def _unwrap(self) -> float: - return float(self) - - -class DeprecatedAwaitableList(List[T]): - def __init__( - self, - iterable: Iterable[T] = (), - *, - func: Callable[..., DeprecatedAwaitableList[T]], - ): - super().__init__(iterable) - self._name = f"{func.__module__}.{func.__qualname__}" - - def __await__(self) -> Generator[None, None, list[T]]: - _warn_deprecation(self) - if False: - yield - - return list(self) - - def __reduce__(self) -> tuple[type[list[T]], tuple[list[T]]]: - return list, (list(self),) - - def _unwrap(self) -> list[T]: - return list(self) - - -class DeprecatedAsyncContextManager(Generic[T], metaclass=ABCMeta): - @abstractmethod - def __enter__(self) -> T: - pass - - @abstractmethod - def __exit__( - self, - exc_type: type[BaseException] | None, - exc_val: BaseException | None, - exc_tb: TracebackType | None, - ) -> bool | None: - pass - - async def __aenter__(self) -> T: - warn( - f"Using {self.__class__.__name__} as an async context manager has been deprecated. 
" - f'Use "async with anyio.maybe_async_cm(yourcontextmanager) as foo:" if you have to ' - f'support both AnyIO 2.x and 3.x, or just remove the "async" from "async with" if ' - f"you are completely migrating to AnyIO 3+.", - DeprecationWarning, - ) - return self.__enter__() - - async def __aexit__( - self, - exc_type: type[BaseException] | None, - exc_val: BaseException | None, - exc_tb: TracebackType | None, - ) -> bool | None: - return self.__exit__(exc_type, exc_val, exc_tb) diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/anyio/streams/__init__.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/anyio/streams/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/utils/visualizer.py b/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/utils/visualizer.py deleted file mode 100644 index 48e915433efd4083849229713611b949e88565c5..0000000000000000000000000000000000000000 --- a/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/utils/visualizer.py +++ /dev/null @@ -1,1267 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import colorsys -import logging -import math -import numpy as np -from enum import Enum, unique -import cv2 -import matplotlib as mpl -import matplotlib.colors as mplc -import matplotlib.figure as mplfigure -import annotator.oneformer.pycocotools.mask as mask_util -import torch -from matplotlib.backends.backend_agg import FigureCanvasAgg -from PIL import Image - -from annotator.oneformer.detectron2.data import MetadataCatalog -from annotator.oneformer.detectron2.structures import BitMasks, Boxes, BoxMode, Keypoints, PolygonMasks, RotatedBoxes -from annotator.oneformer.detectron2.utils.file_io import PathManager - -from .colormap import random_color - -logger = logging.getLogger(__name__) - -__all__ = ["ColorMode", "VisImage", "Visualizer"] - - -_SMALL_OBJECT_AREA_THRESH = 1000 -_LARGE_MASK_AREA_THRESH = 120000 -_OFF_WHITE = (1.0, 1.0, 240.0 / 255) -_BLACK = (0, 0, 0) -_RED = (1.0, 0, 0) - -_KEYPOINT_THRESHOLD = 0.05 - - -@unique -class ColorMode(Enum): - """ - Enum of different color modes to use for instance visualizations. - """ - - IMAGE = 0 - """ - Picks a random color for every instance and overlay segmentations with low opacity. - """ - SEGMENTATION = 1 - """ - Let instances of the same category have similar colors - (from metadata.thing_colors), and overlay them with - high opacity. This provides more attention on the quality of segmentation. - """ - IMAGE_BW = 2 - """ - Same as IMAGE, but convert all areas without masks to gray-scale. - Only available for drawing per-instance mask predictions. - """ - - -class GenericMask: - """ - Attribute: - polygons (list[ndarray]): list[ndarray]: polygons for this mask. - Each ndarray has format [x, y, x, y, ...] 
- mask (ndarray): a binary mask - """ - - def __init__(self, mask_or_polygons, height, width): - self._mask = self._polygons = self._has_holes = None - self.height = height - self.width = width - - m = mask_or_polygons - if isinstance(m, dict): - # RLEs - assert "counts" in m and "size" in m - if isinstance(m["counts"], list): # uncompressed RLEs - h, w = m["size"] - assert h == height and w == width - m = mask_util.frPyObjects(m, h, w) - self._mask = mask_util.decode(m)[:, :] - return - - if isinstance(m, list): # list[ndarray] - self._polygons = [np.asarray(x).reshape(-1) for x in m] - return - - if isinstance(m, np.ndarray): # assumed to be a binary mask - assert m.shape[1] != 2, m.shape - assert m.shape == ( - height, - width, - ), f"mask shape: {m.shape}, target dims: {height}, {width}" - self._mask = m.astype("uint8") - return - - raise ValueError("GenericMask cannot handle object {} of type '{}'".format(m, type(m))) - - @property - def mask(self): - if self._mask is None: - self._mask = self.polygons_to_mask(self._polygons) - return self._mask - - @property - def polygons(self): - if self._polygons is None: - self._polygons, self._has_holes = self.mask_to_polygons(self._mask) - return self._polygons - - @property - def has_holes(self): - if self._has_holes is None: - if self._mask is not None: - self._polygons, self._has_holes = self.mask_to_polygons(self._mask) - else: - self._has_holes = False # if original format is polygon, does not have holes - return self._has_holes - - def mask_to_polygons(self, mask): - # cv2.RETR_CCOMP flag retrieves all the contours and arranges them to a 2-level - # hierarchy. External contours (boundary) of the object are placed in hierarchy-1. - # Internal contours (holes) are placed in hierarchy-2. - # cv2.CHAIN_APPROX_NONE flag gets vertices of polygons from contours. - mask = np.ascontiguousarray(mask) # some versions of cv2 does not support incontiguous arr - res = cv2.findContours(mask.astype("uint8"), cv2.RETR_CCOMP, cv2.CHAIN_APPROX_NONE) - hierarchy = res[-1] - if hierarchy is None: # empty mask - return [], False - has_holes = (hierarchy.reshape(-1, 4)[:, 3] >= 0).sum() > 0 - res = res[-2] - res = [x.flatten() for x in res] - # These coordinates from OpenCV are integers in range [0, W-1 or H-1]. - # We add 0.5 to turn them into real-value coordinate space. A better solution - # would be to first +0.5 and then dilate the returned polygon by 0.5. - res = [x + 0.5 for x in res if len(x) >= 6] - return res, has_holes - - def polygons_to_mask(self, polygons): - rle = mask_util.frPyObjects(polygons, self.height, self.width) - rle = mask_util.merge(rle) - return mask_util.decode(rle)[:, :] - - def area(self): - return self.mask.sum() - - def bbox(self): - p = mask_util.frPyObjects(self.polygons, self.height, self.width) - p = mask_util.merge(p) - bbox = mask_util.toBbox(p) - bbox[2] += bbox[0] - bbox[3] += bbox[1] - return bbox - - -class _PanopticPrediction: - """ - Unify different panoptic annotation/prediction formats - """ - - def __init__(self, panoptic_seg, segments_info, metadata=None): - if segments_info is None: - assert metadata is not None - # If "segments_info" is None, we assume "panoptic_img" is a - # H*W int32 image storing the panoptic_id in the format of - # category_id * label_divisor + instance_id. We reserve -1 for - # VOID label. - label_divisor = metadata.label_divisor - segments_info = [] - for panoptic_label in np.unique(panoptic_seg.numpy()): - if panoptic_label == -1: - # VOID region. 
- continue - pred_class = panoptic_label // label_divisor - isthing = pred_class in metadata.thing_dataset_id_to_contiguous_id.values() - segments_info.append( - { - "id": int(panoptic_label), - "category_id": int(pred_class), - "isthing": bool(isthing), - } - ) - del metadata - - self._seg = panoptic_seg - - self._sinfo = {s["id"]: s for s in segments_info} # seg id -> seg info - segment_ids, areas = torch.unique(panoptic_seg, sorted=True, return_counts=True) - areas = areas.numpy() - sorted_idxs = np.argsort(-areas) - self._seg_ids, self._seg_areas = segment_ids[sorted_idxs], areas[sorted_idxs] - self._seg_ids = self._seg_ids.tolist() - for sid, area in zip(self._seg_ids, self._seg_areas): - if sid in self._sinfo: - self._sinfo[sid]["area"] = float(area) - - def non_empty_mask(self): - """ - Returns: - (H, W) array, a mask for all pixels that have a prediction - """ - empty_ids = [] - for id in self._seg_ids: - if id not in self._sinfo: - empty_ids.append(id) - if len(empty_ids) == 0: - return np.zeros(self._seg.shape, dtype=np.uint8) - assert ( - len(empty_ids) == 1 - ), ">1 ids corresponds to no labels. This is currently not supported" - return (self._seg != empty_ids[0]).numpy().astype(bool) - - def semantic_masks(self): - for sid in self._seg_ids: - sinfo = self._sinfo.get(sid) - if sinfo is None or sinfo["isthing"]: - # Some pixels (e.g. id 0 in PanopticFPN) have no instance or semantic predictions. - continue - yield (self._seg == sid).numpy().astype(bool), sinfo - - def instance_masks(self): - for sid in self._seg_ids: - sinfo = self._sinfo.get(sid) - if sinfo is None or not sinfo["isthing"]: - continue - mask = (self._seg == sid).numpy().astype(bool) - if mask.sum() > 0: - yield mask, sinfo - - -def _create_text_labels(classes, scores, class_names, is_crowd=None): - """ - Args: - classes (list[int] or None): - scores (list[float] or None): - class_names (list[str] or None): - is_crowd (list[bool] or None): - - Returns: - list[str] or None - """ - labels = None - if classes is not None: - if class_names is not None and len(class_names) > 0: - labels = [class_names[i] for i in classes] - else: - labels = [str(i) for i in classes] - if scores is not None: - if labels is None: - labels = ["{:.0f}%".format(s * 100) for s in scores] - else: - labels = ["{} {:.0f}%".format(l, s * 100) for l, s in zip(labels, scores)] - if labels is not None and is_crowd is not None: - labels = [l + ("|crowd" if crowd else "") for l, crowd in zip(labels, is_crowd)] - return labels - - -class VisImage: - def __init__(self, img, scale=1.0): - """ - Args: - img (ndarray): an RGB image of shape (H, W, 3) in range [0, 255]. - scale (float): scale the input image - """ - self.img = img - self.scale = scale - self.width, self.height = img.shape[1], img.shape[0] - self._setup_figure(img) - - def _setup_figure(self, img): - """ - Args: - Same as in :meth:`__init__()`. - - Returns: - fig (matplotlib.pyplot.figure): top level container for all the image plot elements. - ax (matplotlib.pyplot.Axes): contains figure elements and sets the coordinate system. 
- """ - fig = mplfigure.Figure(frameon=False) - self.dpi = fig.get_dpi() - # add a small 1e-2 to avoid precision lost due to matplotlib's truncation - # (https://github.com/matplotlib/matplotlib/issues/15363) - fig.set_size_inches( - (self.width * self.scale + 1e-2) / self.dpi, - (self.height * self.scale + 1e-2) / self.dpi, - ) - self.canvas = FigureCanvasAgg(fig) - # self.canvas = mpl.backends.backend_cairo.FigureCanvasCairo(fig) - ax = fig.add_axes([0.0, 0.0, 1.0, 1.0]) - ax.axis("off") - self.fig = fig - self.ax = ax - self.reset_image(img) - - def reset_image(self, img): - """ - Args: - img: same as in __init__ - """ - img = img.astype("uint8") - self.ax.imshow(img, extent=(0, self.width, self.height, 0), interpolation="nearest") - - def save(self, filepath): - """ - Args: - filepath (str): a string that contains the absolute path, including the file name, where - the visualized image will be saved. - """ - self.fig.savefig(filepath) - - def get_image(self): - """ - Returns: - ndarray: - the visualized image of shape (H, W, 3) (RGB) in uint8 type. - The shape is scaled w.r.t the input image using the given `scale` argument. - """ - canvas = self.canvas - s, (width, height) = canvas.print_to_buffer() - # buf = io.BytesIO() # works for cairo backend - # canvas.print_rgba(buf) - # width, height = self.width, self.height - # s = buf.getvalue() - - buffer = np.frombuffer(s, dtype="uint8") - - img_rgba = buffer.reshape(height, width, 4) - rgb, alpha = np.split(img_rgba, [3], axis=2) - return rgb.astype("uint8") - - -class Visualizer: - """ - Visualizer that draws data about detection/segmentation on images. - - It contains methods like `draw_{text,box,circle,line,binary_mask,polygon}` - that draw primitive objects to images, as well as high-level wrappers like - `draw_{instance_predictions,sem_seg,panoptic_seg_predictions,dataset_dict}` - that draw composite data in some pre-defined style. - - Note that the exact visualization style for the high-level wrappers are subject to change. - Style such as color, opacity, label contents, visibility of labels, or even the visibility - of objects themselves (e.g. when the object is too small) may change according - to different heuristics, as long as the results still look visually reasonable. - - To obtain a consistent style, you can implement custom drawing functions with the - abovementioned primitive methods instead. If you need more customized visualization - styles, you can process the data yourself following their format documented in - tutorials (:doc:`/tutorials/models`, :doc:`/tutorials/datasets`). This class does not - intend to satisfy everyone's preference on drawing styles. - - This visualizer focuses on high rendering quality rather than performance. It is not - designed to be used for real-time applications. - """ - - # TODO implement a fast, rasterized version using OpenCV - - def __init__(self, img_rgb, metadata=None, scale=1.0, instance_mode=ColorMode.IMAGE): - """ - Args: - img_rgb: a numpy array of shape (H, W, C), where H and W correspond to - the height and width of the image respectively. C is the number of - color channels. The image is required to be in RGB format since that - is a requirement of the Matplotlib library. The image is also expected - to be in the range [0, 255]. - metadata (Metadata): dataset metadata (e.g. class names and colors) - instance_mode (ColorMode): defines one of the pre-defined style for drawing - instances on an image. 
- """ - self.img = np.asarray(img_rgb).clip(0, 255).astype(np.uint8) - if metadata is None: - metadata = MetadataCatalog.get("__nonexist__") - self.metadata = metadata - self.output = VisImage(self.img, scale=scale) - self.cpu_device = torch.device("cpu") - - # too small texts are useless, therefore clamp to 9 - self._default_font_size = max( - np.sqrt(self.output.height * self.output.width) // 90, 10 // scale - ) - self._instance_mode = instance_mode - self.keypoint_threshold = _KEYPOINT_THRESHOLD - - def draw_instance_predictions(self, predictions): - """ - Draw instance-level prediction results on an image. - - Args: - predictions (Instances): the output of an instance detection/segmentation - model. Following fields will be used to draw: - "pred_boxes", "pred_classes", "scores", "pred_masks" (or "pred_masks_rle"). - - Returns: - output (VisImage): image object with visualizations. - """ - boxes = predictions.pred_boxes if predictions.has("pred_boxes") else None - scores = predictions.scores if predictions.has("scores") else None - classes = predictions.pred_classes.tolist() if predictions.has("pred_classes") else None - labels = _create_text_labels(classes, scores, self.metadata.get("thing_classes", None)) - keypoints = predictions.pred_keypoints if predictions.has("pred_keypoints") else None - - if predictions.has("pred_masks"): - masks = np.asarray(predictions.pred_masks) - masks = [GenericMask(x, self.output.height, self.output.width) for x in masks] - else: - masks = None - - if self._instance_mode == ColorMode.SEGMENTATION and self.metadata.get("thing_colors"): - colors = [ - self._jitter([x / 255 for x in self.metadata.thing_colors[c]]) for c in classes - ] - alpha = 0.8 - else: - colors = None - alpha = 0.5 - - if self._instance_mode == ColorMode.IMAGE_BW: - self.output.reset_image( - self._create_grayscale_image( - (predictions.pred_masks.any(dim=0) > 0).numpy() - if predictions.has("pred_masks") - else None - ) - ) - alpha = 0.3 - - self.overlay_instances( - masks=masks, - boxes=boxes, - labels=labels, - keypoints=keypoints, - assigned_colors=colors, - alpha=alpha, - ) - return self.output - - def draw_sem_seg(self, sem_seg, area_threshold=None, alpha=0.8): - """ - Draw semantic segmentation predictions/labels. - - Args: - sem_seg (Tensor or ndarray): the segmentation of shape (H, W). - Each value is the integer label of the pixel. - area_threshold (int): segments with less than `area_threshold` are not drawn. - alpha (float): the larger it is, the more opaque the segmentations are. - - Returns: - output (VisImage): image object with visualizations. - """ - if isinstance(sem_seg, torch.Tensor): - sem_seg = sem_seg.numpy() - labels, areas = np.unique(sem_seg, return_counts=True) - sorted_idxs = np.argsort(-areas).tolist() - labels = labels[sorted_idxs] - for label in filter(lambda l: l < len(self.metadata.stuff_classes), labels): - try: - mask_color = [x / 255 for x in self.metadata.stuff_colors[label]] - except (AttributeError, IndexError): - mask_color = None - - binary_mask = (sem_seg == label).astype(np.uint8) - text = self.metadata.stuff_classes[label] - self.draw_binary_mask( - binary_mask, - color=mask_color, - edge_color=_OFF_WHITE, - text=text, - alpha=alpha, - area_threshold=area_threshold, - ) - return self.output - - def draw_panoptic_seg(self, panoptic_seg, segments_info, area_threshold=None, alpha=0.7): - """ - Draw panoptic prediction annotations or results. - - Args: - panoptic_seg (Tensor): of shape (height, width) where the values are ids for each - segment. 
- segments_info (list[dict] or None): Describe each segment in `panoptic_seg`. - If it is a ``list[dict]``, each dict contains keys "id", "category_id". - If None, category id of each pixel is computed by - ``pixel // metadata.label_divisor``. - area_threshold (int): stuff segments with less than `area_threshold` are not drawn. - - Returns: - output (VisImage): image object with visualizations. - """ - pred = _PanopticPrediction(panoptic_seg, segments_info, self.metadata) - - if self._instance_mode == ColorMode.IMAGE_BW: - self.output.reset_image(self._create_grayscale_image(pred.non_empty_mask())) - - # draw mask for all semantic segments first i.e. "stuff" - for mask, sinfo in pred.semantic_masks(): - category_idx = sinfo["category_id"] - try: - mask_color = [x / 255 for x in self.metadata.stuff_colors[category_idx]] - except AttributeError: - mask_color = None - - text = self.metadata.stuff_classes[category_idx] - self.draw_binary_mask( - mask, - color=mask_color, - edge_color=_OFF_WHITE, - text=text, - alpha=alpha, - area_threshold=area_threshold, - ) - - # draw mask for all instances second - all_instances = list(pred.instance_masks()) - if len(all_instances) == 0: - return self.output - masks, sinfo = list(zip(*all_instances)) - category_ids = [x["category_id"] for x in sinfo] - - try: - scores = [x["score"] for x in sinfo] - except KeyError: - scores = None - labels = _create_text_labels( - category_ids, scores, self.metadata.thing_classes, [x.get("iscrowd", 0) for x in sinfo] - ) - - try: - colors = [ - self._jitter([x / 255 for x in self.metadata.thing_colors[c]]) for c in category_ids - ] - except AttributeError: - colors = None - self.overlay_instances(masks=masks, labels=labels, assigned_colors=colors, alpha=alpha) - - return self.output - - draw_panoptic_seg_predictions = draw_panoptic_seg # backward compatibility - - def draw_dataset_dict(self, dic): - """ - Draw annotations/segmentations in Detectron2 Dataset format. - - Args: - dic (dict): annotation/segmentation data of one image, in Detectron2 Dataset format. - - Returns: - output (VisImage): image object with visualizations. 
- """ - annos = dic.get("annotations", None) - if annos: - if "segmentation" in annos[0]: - masks = [x["segmentation"] for x in annos] - else: - masks = None - if "keypoints" in annos[0]: - keypts = [x["keypoints"] for x in annos] - keypts = np.array(keypts).reshape(len(annos), -1, 3) - else: - keypts = None - - boxes = [ - BoxMode.convert(x["bbox"], x["bbox_mode"], BoxMode.XYXY_ABS) - if len(x["bbox"]) == 4 - else x["bbox"] - for x in annos - ] - - colors = None - category_ids = [x["category_id"] for x in annos] - if self._instance_mode == ColorMode.SEGMENTATION and self.metadata.get("thing_colors"): - colors = [ - self._jitter([x / 255 for x in self.metadata.thing_colors[c]]) - for c in category_ids - ] - names = self.metadata.get("thing_classes", None) - labels = _create_text_labels( - category_ids, - scores=None, - class_names=names, - is_crowd=[x.get("iscrowd", 0) for x in annos], - ) - self.overlay_instances( - labels=labels, boxes=boxes, masks=masks, keypoints=keypts, assigned_colors=colors - ) - - sem_seg = dic.get("sem_seg", None) - if sem_seg is None and "sem_seg_file_name" in dic: - with PathManager.open(dic["sem_seg_file_name"], "rb") as f: - sem_seg = Image.open(f) - sem_seg = np.asarray(sem_seg, dtype="uint8") - if sem_seg is not None: - self.draw_sem_seg(sem_seg, area_threshold=0, alpha=0.5) - - pan_seg = dic.get("pan_seg", None) - if pan_seg is None and "pan_seg_file_name" in dic: - with PathManager.open(dic["pan_seg_file_name"], "rb") as f: - pan_seg = Image.open(f) - pan_seg = np.asarray(pan_seg) - from panopticapi.utils import rgb2id - - pan_seg = rgb2id(pan_seg) - if pan_seg is not None: - segments_info = dic["segments_info"] - pan_seg = torch.tensor(pan_seg) - self.draw_panoptic_seg(pan_seg, segments_info, area_threshold=0, alpha=0.5) - return self.output - - def overlay_instances( - self, - *, - boxes=None, - labels=None, - masks=None, - keypoints=None, - assigned_colors=None, - alpha=0.5, - ): - """ - Args: - boxes (Boxes, RotatedBoxes or ndarray): either a :class:`Boxes`, - or an Nx4 numpy array of XYXY_ABS format for the N objects in a single image, - or a :class:`RotatedBoxes`, - or an Nx5 numpy array of (x_center, y_center, width, height, angle_degrees) format - for the N objects in a single image, - labels (list[str]): the text to be displayed for each instance. - masks (masks-like object): Supported types are: - - * :class:`detectron2.structures.PolygonMasks`, - :class:`detectron2.structures.BitMasks`. - * list[list[ndarray]]: contains the segmentation masks for all objects in one image. - The first level of the list corresponds to individual instances. The second - level to all the polygon that compose the instance, and the third level - to the polygon coordinates. The third level should have the format of - [x0, y0, x1, y1, ..., xn, yn] (n >= 3). - * list[ndarray]: each ndarray is a binary mask of shape (H, W). - * list[dict]: each dict is a COCO-style RLE. - keypoints (Keypoint or array like): an array-like object of shape (N, K, 3), - where the N is the number of instances and K is the number of keypoints. - The last dimension corresponds to (x, y, visibility or score). - assigned_colors (list[matplotlib.colors]): a list of colors, where each color - corresponds to each mask or box in the image. Refer to 'matplotlib.colors' - for full list of formats that the colors are accepted in. - Returns: - output (VisImage): image object with visualizations. 
- """ - num_instances = 0 - if boxes is not None: - boxes = self._convert_boxes(boxes) - num_instances = len(boxes) - if masks is not None: - masks = self._convert_masks(masks) - if num_instances: - assert len(masks) == num_instances - else: - num_instances = len(masks) - if keypoints is not None: - if num_instances: - assert len(keypoints) == num_instances - else: - num_instances = len(keypoints) - keypoints = self._convert_keypoints(keypoints) - if labels is not None: - assert len(labels) == num_instances - if assigned_colors is None: - assigned_colors = [random_color(rgb=True, maximum=1) for _ in range(num_instances)] - if num_instances == 0: - return self.output - if boxes is not None and boxes.shape[1] == 5: - return self.overlay_rotated_instances( - boxes=boxes, labels=labels, assigned_colors=assigned_colors - ) - - # Display in largest to smallest order to reduce occlusion. - areas = None - if boxes is not None: - areas = np.prod(boxes[:, 2:] - boxes[:, :2], axis=1) - elif masks is not None: - areas = np.asarray([x.area() for x in masks]) - - if areas is not None: - sorted_idxs = np.argsort(-areas).tolist() - # Re-order overlapped instances in descending order. - boxes = boxes[sorted_idxs] if boxes is not None else None - labels = [labels[k] for k in sorted_idxs] if labels is not None else None - masks = [masks[idx] for idx in sorted_idxs] if masks is not None else None - assigned_colors = [assigned_colors[idx] for idx in sorted_idxs] - keypoints = keypoints[sorted_idxs] if keypoints is not None else None - - for i in range(num_instances): - color = assigned_colors[i] - if boxes is not None: - self.draw_box(boxes[i], edge_color=color) - - if masks is not None: - for segment in masks[i].polygons: - self.draw_polygon(segment.reshape(-1, 2), color, alpha=alpha) - - if labels is not None: - # first get a box - if boxes is not None: - x0, y0, x1, y1 = boxes[i] - text_pos = (x0, y0) # if drawing boxes, put text on the box corner. - horiz_align = "left" - elif masks is not None: - # skip small mask without polygon - if len(masks[i].polygons) == 0: - continue - - x0, y0, x1, y1 = masks[i].bbox() - - # draw text in the center (defined by median) when box is not drawn - # median is less sensitive to outliers. - text_pos = np.median(masks[i].mask.nonzero(), axis=1)[::-1] - horiz_align = "center" - else: - continue # drawing the box confidence for keypoints isn't very useful. - # for small objects, draw text at the side to avoid occlusion - instance_area = (y1 - y0) * (x1 - x0) - if ( - instance_area < _SMALL_OBJECT_AREA_THRESH * self.output.scale - or y1 - y0 < 40 * self.output.scale - ): - if y1 >= self.output.height - 5: - text_pos = (x1, y0) - else: - text_pos = (x0, y1) - - height_ratio = (y1 - y0) / np.sqrt(self.output.height * self.output.width) - lighter_color = self._change_color_brightness(color, brightness_factor=0.7) - font_size = ( - np.clip((height_ratio - 0.02) / 0.08 + 1, 1.2, 2) - * 0.5 - * self._default_font_size - ) - self.draw_text( - labels[i], - text_pos, - color=lighter_color, - horizontal_alignment=horiz_align, - font_size=font_size, - ) - - # draw keypoints - if keypoints is not None: - for keypoints_per_instance in keypoints: - self.draw_and_connect_keypoints(keypoints_per_instance) - - return self.output - - def overlay_rotated_instances(self, boxes=None, labels=None, assigned_colors=None): - """ - Args: - boxes (ndarray): an Nx5 numpy array of - (x_center, y_center, width, height, angle_degrees) format - for the N objects in a single image. 
- labels (list[str]): the text to be displayed for each instance. - assigned_colors (list[matplotlib.colors]): a list of colors, where each color - corresponds to each mask or box in the image. Refer to 'matplotlib.colors' - for full list of formats that the colors are accepted in. - - Returns: - output (VisImage): image object with visualizations. - """ - num_instances = len(boxes) - - if assigned_colors is None: - assigned_colors = [random_color(rgb=True, maximum=1) for _ in range(num_instances)] - if num_instances == 0: - return self.output - - # Display in largest to smallest order to reduce occlusion. - if boxes is not None: - areas = boxes[:, 2] * boxes[:, 3] - - sorted_idxs = np.argsort(-areas).tolist() - # Re-order overlapped instances in descending order. - boxes = boxes[sorted_idxs] - labels = [labels[k] for k in sorted_idxs] if labels is not None else None - colors = [assigned_colors[idx] for idx in sorted_idxs] - - for i in range(num_instances): - self.draw_rotated_box_with_label( - boxes[i], edge_color=colors[i], label=labels[i] if labels is not None else None - ) - - return self.output - - def draw_and_connect_keypoints(self, keypoints): - """ - Draws keypoints of an instance and follows the rules for keypoint connections - to draw lines between appropriate keypoints. This follows color heuristics for - line color. - - Args: - keypoints (Tensor): a tensor of shape (K, 3), where K is the number of keypoints - and the last dimension corresponds to (x, y, probability). - - Returns: - output (VisImage): image object with visualizations. - """ - visible = {} - keypoint_names = self.metadata.get("keypoint_names") - for idx, keypoint in enumerate(keypoints): - - # draw keypoint - x, y, prob = keypoint - if prob > self.keypoint_threshold: - self.draw_circle((x, y), color=_RED) - if keypoint_names: - keypoint_name = keypoint_names[idx] - visible[keypoint_name] = (x, y) - - if self.metadata.get("keypoint_connection_rules"): - for kp0, kp1, color in self.metadata.keypoint_connection_rules: - if kp0 in visible and kp1 in visible: - x0, y0 = visible[kp0] - x1, y1 = visible[kp1] - color = tuple(x / 255.0 for x in color) - self.draw_line([x0, x1], [y0, y1], color=color) - - # draw lines from nose to mid-shoulder and mid-shoulder to mid-hip - # Note that this strategy is specific to person keypoints. - # For other keypoints, it should just do nothing - try: - ls_x, ls_y = visible["left_shoulder"] - rs_x, rs_y = visible["right_shoulder"] - mid_shoulder_x, mid_shoulder_y = (ls_x + rs_x) / 2, (ls_y + rs_y) / 2 - except KeyError: - pass - else: - # draw line from nose to mid-shoulder - nose_x, nose_y = visible.get("nose", (None, None)) - if nose_x is not None: - self.draw_line([nose_x, mid_shoulder_x], [nose_y, mid_shoulder_y], color=_RED) - - try: - # draw line from mid-shoulder to mid-hip - lh_x, lh_y = visible["left_hip"] - rh_x, rh_y = visible["right_hip"] - except KeyError: - pass - else: - mid_hip_x, mid_hip_y = (lh_x + rh_x) / 2, (lh_y + rh_y) / 2 - self.draw_line([mid_hip_x, mid_shoulder_x], [mid_hip_y, mid_shoulder_y], color=_RED) - return self.output - - """ - Primitive drawing functions: - """ - - def draw_text( - self, - text, - position, - *, - font_size=None, - color="g", - horizontal_alignment="center", - rotation=0, - ): - """ - Args: - text (str): class label - position (tuple): a tuple of the x and y coordinates to place text on image. - font_size (int, optional): font of the text. If not provided, a font size - proportional to the image width is calculated and used. 
- color: color of the text. Refer to `matplotlib.colors` for full list - of formats that are accepted. - horizontal_alignment (str): see `matplotlib.text.Text` - rotation: rotation angle in degrees CCW - - Returns: - output (VisImage): image object with text drawn. - """ - if not font_size: - font_size = self._default_font_size - - # since the text background is dark, we don't want the text to be dark - color = np.maximum(list(mplc.to_rgb(color)), 0.2) - color[np.argmax(color)] = max(0.8, np.max(color)) - - x, y = position - self.output.ax.text( - x, - y, - text, - size=font_size * self.output.scale, - family="sans-serif", - bbox={"facecolor": "black", "alpha": 0.8, "pad": 0.7, "edgecolor": "none"}, - verticalalignment="top", - horizontalalignment=horizontal_alignment, - color=color, - zorder=10, - rotation=rotation, - ) - return self.output - - def draw_box(self, box_coord, alpha=0.5, edge_color="g", line_style="-"): - """ - Args: - box_coord (tuple): a tuple containing x0, y0, x1, y1 coordinates, where x0 and y0 - are the coordinates of the image's top left corner. x1 and y1 are the - coordinates of the image's bottom right corner. - alpha (float): blending efficient. Smaller values lead to more transparent masks. - edge_color: color of the outline of the box. Refer to `matplotlib.colors` - for full list of formats that are accepted. - line_style (string): the string to use to create the outline of the boxes. - - Returns: - output (VisImage): image object with box drawn. - """ - x0, y0, x1, y1 = box_coord - width = x1 - x0 - height = y1 - y0 - - linewidth = max(self._default_font_size / 4, 1) - - self.output.ax.add_patch( - mpl.patches.Rectangle( - (x0, y0), - width, - height, - fill=False, - edgecolor=edge_color, - linewidth=linewidth * self.output.scale, - alpha=alpha, - linestyle=line_style, - ) - ) - return self.output - - def draw_rotated_box_with_label( - self, rotated_box, alpha=0.5, edge_color="g", line_style="-", label=None - ): - """ - Draw a rotated box with label on its top-left corner. - - Args: - rotated_box (tuple): a tuple containing (cnt_x, cnt_y, w, h, angle), - where cnt_x and cnt_y are the center coordinates of the box. - w and h are the width and height of the box. angle represents how - many degrees the box is rotated CCW with regard to the 0-degree box. - alpha (float): blending efficient. Smaller values lead to more transparent masks. - edge_color: color of the outline of the box. Refer to `matplotlib.colors` - for full list of formats that are accepted. - line_style (string): the string to use to create the outline of the boxes. - label (string): label for rotated box. It will not be rendered when set to None. - - Returns: - output (VisImage): image object with box drawn. 
- """ - cnt_x, cnt_y, w, h, angle = rotated_box - area = w * h - # use thinner lines when the box is small - linewidth = self._default_font_size / ( - 6 if area < _SMALL_OBJECT_AREA_THRESH * self.output.scale else 3 - ) - - theta = angle * math.pi / 180.0 - c = math.cos(theta) - s = math.sin(theta) - rect = [(-w / 2, h / 2), (-w / 2, -h / 2), (w / 2, -h / 2), (w / 2, h / 2)] - # x: left->right ; y: top->down - rotated_rect = [(s * yy + c * xx + cnt_x, c * yy - s * xx + cnt_y) for (xx, yy) in rect] - for k in range(4): - j = (k + 1) % 4 - self.draw_line( - [rotated_rect[k][0], rotated_rect[j][0]], - [rotated_rect[k][1], rotated_rect[j][1]], - color=edge_color, - linestyle="--" if k == 1 else line_style, - linewidth=linewidth, - ) - - if label is not None: - text_pos = rotated_rect[1] # topleft corner - - height_ratio = h / np.sqrt(self.output.height * self.output.width) - label_color = self._change_color_brightness(edge_color, brightness_factor=0.7) - font_size = ( - np.clip((height_ratio - 0.02) / 0.08 + 1, 1.2, 2) * 0.5 * self._default_font_size - ) - self.draw_text(label, text_pos, color=label_color, font_size=font_size, rotation=angle) - - return self.output - - def draw_circle(self, circle_coord, color, radius=3): - """ - Args: - circle_coord (list(int) or tuple(int)): contains the x and y coordinates - of the center of the circle. - color: color of the polygon. Refer to `matplotlib.colors` for a full list of - formats that are accepted. - radius (int): radius of the circle. - - Returns: - output (VisImage): image object with box drawn. - """ - x, y = circle_coord - self.output.ax.add_patch( - mpl.patches.Circle(circle_coord, radius=radius, fill=True, color=color) - ) - return self.output - - def draw_line(self, x_data, y_data, color, linestyle="-", linewidth=None): - """ - Args: - x_data (list[int]): a list containing x values of all the points being drawn. - Length of list should match the length of y_data. - y_data (list[int]): a list containing y values of all the points being drawn. - Length of list should match the length of x_data. - color: color of the line. Refer to `matplotlib.colors` for a full list of - formats that are accepted. - linestyle: style of the line. Refer to `matplotlib.lines.Line2D` - for a full list of formats that are accepted. - linewidth (float or None): width of the line. When it's None, - a default value will be computed and used. - - Returns: - output (VisImage): image object with line drawn. - """ - if linewidth is None: - linewidth = self._default_font_size / 3 - linewidth = max(linewidth, 1) - self.output.ax.add_line( - mpl.lines.Line2D( - x_data, - y_data, - linewidth=linewidth * self.output.scale, - color=color, - linestyle=linestyle, - ) - ) - return self.output - - def draw_binary_mask( - self, binary_mask, color=None, *, edge_color=None, text=None, alpha=0.5, area_threshold=10 - ): - """ - Args: - binary_mask (ndarray): numpy array of shape (H, W), where H is the image height and - W is the image width. Each value in the array is either a 0 or 1 value of uint8 - type. - color: color of the mask. Refer to `matplotlib.colors` for a full list of - formats that are accepted. If None, will pick a random color. - edge_color: color of the polygon edges. Refer to `matplotlib.colors` for a - full list of formats that are accepted. - text (str): if None, will be drawn on the object - alpha (float): blending efficient. Smaller values lead to more transparent masks. - area_threshold (float): a connected component smaller than this area will not be shown. 
- - Returns: - output (VisImage): image object with mask drawn. - """ - if color is None: - color = random_color(rgb=True, maximum=1) - color = mplc.to_rgb(color) - - has_valid_segment = False - binary_mask = binary_mask.astype("uint8") # opencv needs uint8 - mask = GenericMask(binary_mask, self.output.height, self.output.width) - shape2d = (binary_mask.shape[0], binary_mask.shape[1]) - - if not mask.has_holes: - # draw polygons for regular masks - for segment in mask.polygons: - area = mask_util.area(mask_util.frPyObjects([segment], shape2d[0], shape2d[1])) - if area < (area_threshold or 0): - continue - has_valid_segment = True - segment = segment.reshape(-1, 2) - self.draw_polygon(segment, color=color, edge_color=edge_color, alpha=alpha) - else: - # TODO: Use Path/PathPatch to draw vector graphics: - # https://stackoverflow.com/questions/8919719/how-to-plot-a-complex-polygon - rgba = np.zeros(shape2d + (4,), dtype="float32") - rgba[:, :, :3] = color - rgba[:, :, 3] = (mask.mask == 1).astype("float32") * alpha - has_valid_segment = True - self.output.ax.imshow(rgba, extent=(0, self.output.width, self.output.height, 0)) - - if text is not None and has_valid_segment: - lighter_color = self._change_color_brightness(color, brightness_factor=0.7) - self._draw_text_in_mask(binary_mask, text, lighter_color) - return self.output - - def draw_soft_mask(self, soft_mask, color=None, *, text=None, alpha=0.5): - """ - Args: - soft_mask (ndarray): float array of shape (H, W), each value in [0, 1]. - color: color of the mask. Refer to `matplotlib.colors` for a full list of - formats that are accepted. If None, will pick a random color. - text (str): if None, will be drawn on the object - alpha (float): blending efficient. Smaller values lead to more transparent masks. - - Returns: - output (VisImage): image object with mask drawn. - """ - if color is None: - color = random_color(rgb=True, maximum=1) - color = mplc.to_rgb(color) - - shape2d = (soft_mask.shape[0], soft_mask.shape[1]) - rgba = np.zeros(shape2d + (4,), dtype="float32") - rgba[:, :, :3] = color - rgba[:, :, 3] = soft_mask * alpha - self.output.ax.imshow(rgba, extent=(0, self.output.width, self.output.height, 0)) - - if text is not None: - lighter_color = self._change_color_brightness(color, brightness_factor=0.7) - binary_mask = (soft_mask > 0.5).astype("uint8") - self._draw_text_in_mask(binary_mask, text, lighter_color) - return self.output - - def draw_polygon(self, segment, color, edge_color=None, alpha=0.5): - """ - Args: - segment: numpy array of shape Nx2, containing all the points in the polygon. - color: color of the polygon. Refer to `matplotlib.colors` for a full list of - formats that are accepted. - edge_color: color of the polygon edges. Refer to `matplotlib.colors` for a - full list of formats that are accepted. If not provided, a darker shade - of the polygon color will be used instead. - alpha (float): blending efficient. Smaller values lead to more transparent masks. - - Returns: - output (VisImage): image object with polygon drawn. 
- """ - if edge_color is None: - # make edge color darker than the polygon color - if alpha > 0.8: - edge_color = self._change_color_brightness(color, brightness_factor=-0.7) - else: - edge_color = color - edge_color = mplc.to_rgb(edge_color) + (1,) - - polygon = mpl.patches.Polygon( - segment, - fill=True, - facecolor=mplc.to_rgb(color) + (alpha,), - edgecolor=edge_color, - linewidth=max(self._default_font_size // 15 * self.output.scale, 1), - ) - self.output.ax.add_patch(polygon) - return self.output - - """ - Internal methods: - """ - - def _jitter(self, color): - """ - Randomly modifies given color to produce a slightly different color than the color given. - - Args: - color (tuple[double]): a tuple of 3 elements, containing the RGB values of the color - picked. The values in the list are in the [0.0, 1.0] range. - - Returns: - jittered_color (tuple[double]): a tuple of 3 elements, containing the RGB values of the - color after being jittered. The values in the list are in the [0.0, 1.0] range. - """ - color = mplc.to_rgb(color) - vec = np.random.rand(3) - # better to do it in another color space - vec = vec / np.linalg.norm(vec) * 0.5 - res = np.clip(vec + color, 0, 1) - return tuple(res) - - def _create_grayscale_image(self, mask=None): - """ - Create a grayscale version of the original image. - The colors in masked area, if given, will be kept. - """ - img_bw = self.img.astype("f4").mean(axis=2) - img_bw = np.stack([img_bw] * 3, axis=2) - if mask is not None: - img_bw[mask] = self.img[mask] - return img_bw - - def _change_color_brightness(self, color, brightness_factor): - """ - Depending on the brightness_factor, gives a lighter or darker color i.e. a color with - less or more saturation than the original color. - - Args: - color: color of the polygon. Refer to `matplotlib.colors` for a full list of - formats that are accepted. - brightness_factor (float): a value in [-1.0, 1.0] range. A lightness factor of - 0 will correspond to no change, a factor in [-1.0, 0) range will result in - a darker color and a factor in (0, 1.0] range will result in a lighter color. - - Returns: - modified_color (tuple[double]): a tuple containing the RGB values of the - modified color. Each value in the tuple is in the [0.0, 1.0] range. - """ - assert brightness_factor >= -1.0 and brightness_factor <= 1.0 - color = mplc.to_rgb(color) - polygon_color = colorsys.rgb_to_hls(*mplc.to_rgb(color)) - modified_lightness = polygon_color[1] + (brightness_factor * polygon_color[1]) - modified_lightness = 0.0 if modified_lightness < 0.0 else modified_lightness - modified_lightness = 1.0 if modified_lightness > 1.0 else modified_lightness - modified_color = colorsys.hls_to_rgb(polygon_color[0], modified_lightness, polygon_color[2]) - return tuple(np.clip(modified_color, 0.0, 1.0)) - - def _convert_boxes(self, boxes): - """ - Convert different format of boxes to an NxB array, where B = 4 or 5 is the box dimension. - """ - if isinstance(boxes, Boxes) or isinstance(boxes, RotatedBoxes): - return boxes.tensor.detach().numpy() - else: - return np.asarray(boxes) - - def _convert_masks(self, masks_or_polygons): - """ - Convert different format of masks or polygons to a tuple of masks and polygons. 
- - Returns: - list[GenericMask]: - """ - - m = masks_or_polygons - if isinstance(m, PolygonMasks): - m = m.polygons - if isinstance(m, BitMasks): - m = m.tensor.numpy() - if isinstance(m, torch.Tensor): - m = m.numpy() - ret = [] - for x in m: - if isinstance(x, GenericMask): - ret.append(x) - else: - ret.append(GenericMask(x, self.output.height, self.output.width)) - return ret - - def _draw_text_in_mask(self, binary_mask, text, color): - """ - Find proper places to draw text given a binary mask. - """ - # TODO sometimes drawn on wrong objects. the heuristics here can improve. - _num_cc, cc_labels, stats, centroids = cv2.connectedComponentsWithStats(binary_mask, 8) - if stats[1:, -1].size == 0: - return - largest_component_id = np.argmax(stats[1:, -1]) + 1 - - # draw text on the largest component, as well as other very large components. - for cid in range(1, _num_cc): - if cid == largest_component_id or stats[cid, -1] > _LARGE_MASK_AREA_THRESH: - # median is more stable than centroid - # center = centroids[largest_component_id] - center = np.median((cc_labels == cid).nonzero(), axis=1)[::-1] - self.draw_text(text, center, color=color) - - def _convert_keypoints(self, keypoints): - if isinstance(keypoints, Keypoints): - keypoints = keypoints.tensor - keypoints = np.asarray(keypoints) - return keypoints - - def get_output(self): - """ - Returns: - output (VisImage): the image output containing the visualizations added - to the image. - """ - return self.output diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/cachecontrol/serialize.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/cachecontrol/serialize.py deleted file mode 100644 index 7fe1a3e33a3adbfd9ad1126a22d7175154ebc200..0000000000000000000000000000000000000000 --- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/cachecontrol/serialize.py +++ /dev/null @@ -1,190 +0,0 @@ -# SPDX-FileCopyrightText: 2015 Eric Larson -# -# SPDX-License-Identifier: Apache-2.0 - -import base64 -import io -import json -import zlib - -from pip._vendor import msgpack -from pip._vendor.requests.structures import CaseInsensitiveDict - -from .compat import HTTPResponse, pickle, text_type - - -def _b64_decode_bytes(b): - return base64.b64decode(b.encode("ascii")) - - -def _b64_decode_str(s): - return _b64_decode_bytes(s).decode("utf8") - - -_default_body_read = object() - - -class Serializer(object): - def dumps(self, request, response, body=None): - response_headers = CaseInsensitiveDict(response.headers) - - if body is None: - # When a body isn't passed in, we'll read the response. We - # also update the response with a new file handler to be - # sure it acts as though it was never read. - body = response.read(decode_content=False) - response._fp = io.BytesIO(body) - - # NOTE: This is all a bit weird, but it's really important that on - # Python 2.x these objects are unicode and not str, even when - # they contain only ascii. The problem here is that msgpack - # understands the difference between unicode and bytes and we - # have it set to differentiate between them, however Python 2 - # doesn't know the difference. Forcing these to unicode will be - # enough to have msgpack know the difference. 
- data = { - u"response": { - u"body": body, # Empty bytestring if body is stored separately - u"headers": dict( - (text_type(k), text_type(v)) for k, v in response.headers.items() - ), - u"status": response.status, - u"version": response.version, - u"reason": text_type(response.reason), - u"strict": response.strict, - u"decode_content": response.decode_content, - } - } - - # Construct our vary headers - data[u"vary"] = {} - if u"vary" in response_headers: - varied_headers = response_headers[u"vary"].split(",") - for header in varied_headers: - header = text_type(header).strip() - header_value = request.headers.get(header, None) - if header_value is not None: - header_value = text_type(header_value) - data[u"vary"][header] = header_value - - return b",".join([b"cc=4", msgpack.dumps(data, use_bin_type=True)]) - - def loads(self, request, data, body_file=None): - # Short circuit if we've been given an empty set of data - if not data: - return - - # Determine what version of the serializer the data was serialized - # with - try: - ver, data = data.split(b",", 1) - except ValueError: - ver = b"cc=0" - - # Make sure that our "ver" is actually a version and isn't a false - # positive from a , being in the data stream. - if ver[:3] != b"cc=": - data = ver + data - ver = b"cc=0" - - # Get the version number out of the cc=N - ver = ver.split(b"=", 1)[-1].decode("ascii") - - # Dispatch to the actual load method for the given version - try: - return getattr(self, "_loads_v{}".format(ver))(request, data, body_file) - - except AttributeError: - # This is a version we don't have a loads function for, so we'll - # just treat it as a miss and return None - return - - def prepare_response(self, request, cached, body_file=None): - """Verify our vary headers match and construct a real urllib3 - HTTPResponse object. - """ - # Special case the '*' Vary value as it means we cannot actually - # determine if the cached response is suitable for this request. - # This case is also handled in the controller code when creating - # a cache entry, but is left here for backwards compatibility. - if "*" in cached.get("vary", {}): - return - - # Ensure that the Vary headers for the cached response match our - # request - for header, value in cached.get("vary", {}).items(): - if request.headers.get(header, None) != value: - return - - body_raw = cached["response"].pop("body") - - headers = CaseInsensitiveDict(data=cached["response"]["headers"]) - if headers.get("transfer-encoding", "") == "chunked": - headers.pop("transfer-encoding") - - cached["response"]["headers"] = headers - - try: - if body_file is None: - body = io.BytesIO(body_raw) - else: - body = body_file - except TypeError: - # This can happen if cachecontrol serialized to v1 format (pickle) - # using Python 2. A Python 2 str(byte string) will be unpickled as - # a Python 3 str (unicode string), which will cause the above to - # fail with: - # - # TypeError: 'str' does not support the buffer interface - body = io.BytesIO(body_raw.encode("utf8")) - - return HTTPResponse(body=body, preload_content=False, **cached["response"]) - - def _loads_v0(self, request, data, body_file=None): - # The original legacy cache data. This doesn't contain enough - # information to construct everything we need, so we'll treat this as - # a miss. 
- return - - def _loads_v1(self, request, data, body_file=None): - try: - cached = pickle.loads(data) - except ValueError: - return - - return self.prepare_response(request, cached, body_file) - - def _loads_v2(self, request, data, body_file=None): - assert body_file is None - try: - cached = json.loads(zlib.decompress(data).decode("utf8")) - except (ValueError, zlib.error): - return - - # We need to decode the items that we've base64 encoded - cached["response"]["body"] = _b64_decode_bytes(cached["response"]["body"]) - cached["response"]["headers"] = dict( - (_b64_decode_str(k), _b64_decode_str(v)) - for k, v in cached["response"]["headers"].items() - ) - cached["response"]["reason"] = _b64_decode_str(cached["response"]["reason"]) - cached["vary"] = dict( - (_b64_decode_str(k), _b64_decode_str(v) if v is not None else v) - for k, v in cached["vary"].items() - ) - - return self.prepare_response(request, cached, body_file) - - def _loads_v3(self, request, data, body_file): - # Due to Python 2 encoding issues, it's impossible to know for sure - # exactly how to load v3 entries, thus we'll treat these as a miss so - # that they get rewritten out as v4 entries. - return - - def _loads_v4(self, request, data, body_file=None): - try: - cached = msgpack.loads(data, raw=False) - except ValueError: - return - - return self.prepare_response(request, cached, body_file) diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/chardet/langbulgarianmodel.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/chardet/langbulgarianmodel.py deleted file mode 100644 index 994668219dd4def6404e0afd3f538b29a0e50f8b..0000000000000000000000000000000000000000 --- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/chardet/langbulgarianmodel.py +++ /dev/null @@ -1,4649 +0,0 @@ -from pip._vendor.chardet.sbcharsetprober import SingleByteCharSetModel - -# 3: Positive -# 2: Likely -# 1: Unlikely -# 0: Negative - -BULGARIAN_LANG_MODEL = { - 63: { # 'e' - 63: 1, # 'e' - 45: 0, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 0, # 'а' - 18: 1, # 'б' - 9: 1, # 'в' - 20: 1, # 'г' - 11: 1, # 'д' - 3: 1, # 'е' - 23: 1, # 'ж' - 15: 1, # 'з' - 2: 0, # 'и' - 26: 1, # 'й' - 12: 1, # 'к' - 10: 1, # 'л' - 14: 1, # 'м' - 6: 1, # 'н' - 4: 1, # 'о' - 13: 1, # 'п' - 7: 1, # 'р' - 8: 1, # 'с' - 5: 1, # 'т' - 19: 0, # 'у' - 29: 1, # 'ф' - 25: 1, # 'х' - 22: 0, # 'ц' - 21: 1, # 'ч' - 27: 1, # 'ш' - 24: 1, # 'щ' - 17: 0, # 'ъ' - 52: 0, # 'ь' - 42: 0, # 'ю' - 16: 1, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 45: { # '\xad' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 0, # 'А' - 32: 1, # 'Б' - 35: 1, # 'В' - 43: 0, # 'Г' - 37: 1, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 1, # 'И' - 59: 0, # 'Й' - 33: 1, # 'К' - 46: 0, # 'Л' - 38: 1, # 'М' - 36: 0, # 'Н' - 41: 1, # 'О' - 30: 1, # 'П' - 39: 1, # 'Р' - 28: 1, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 1, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 0, # 'а' - 18: 0, # 'б' - 9: 0, # 'в' - 20: 0, # 'г' - 11: 
0, # 'д' - 3: 0, # 'е' - 23: 0, # 'ж' - 15: 0, # 'з' - 2: 0, # 'и' - 26: 0, # 'й' - 12: 0, # 'к' - 10: 0, # 'л' - 14: 0, # 'м' - 6: 0, # 'н' - 4: 0, # 'о' - 13: 0, # 'п' - 7: 0, # 'р' - 8: 0, # 'с' - 5: 0, # 'т' - 19: 0, # 'у' - 29: 0, # 'ф' - 25: 0, # 'х' - 22: 0, # 'ц' - 21: 0, # 'ч' - 27: 0, # 'ш' - 24: 0, # 'щ' - 17: 0, # 'ъ' - 52: 0, # 'ь' - 42: 0, # 'ю' - 16: 0, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 31: { # 'А' - 63: 0, # 'e' - 45: 1, # '\xad' - 31: 1, # 'А' - 32: 1, # 'Б' - 35: 2, # 'В' - 43: 1, # 'Г' - 37: 2, # 'Д' - 44: 2, # 'Е' - 55: 1, # 'Ж' - 47: 2, # 'З' - 40: 1, # 'И' - 59: 1, # 'Й' - 33: 1, # 'К' - 46: 2, # 'Л' - 38: 1, # 'М' - 36: 2, # 'Н' - 41: 1, # 'О' - 30: 2, # 'П' - 39: 2, # 'Р' - 28: 2, # 'С' - 34: 2, # 'Т' - 51: 1, # 'У' - 48: 2, # 'Ф' - 49: 1, # 'Х' - 53: 1, # 'Ц' - 50: 1, # 'Ч' - 54: 1, # 'Ш' - 57: 2, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 1, # 'Я' - 1: 1, # 'а' - 18: 2, # 'б' - 9: 2, # 'в' - 20: 2, # 'г' - 11: 2, # 'д' - 3: 1, # 'е' - 23: 1, # 'ж' - 15: 2, # 'з' - 2: 0, # 'и' - 26: 2, # 'й' - 12: 2, # 'к' - 10: 3, # 'л' - 14: 2, # 'м' - 6: 3, # 'н' - 4: 0, # 'о' - 13: 2, # 'п' - 7: 2, # 'р' - 8: 2, # 'с' - 5: 2, # 'т' - 19: 1, # 'у' - 29: 2, # 'ф' - 25: 1, # 'х' - 22: 1, # 'ц' - 21: 1, # 'ч' - 27: 1, # 'ш' - 24: 0, # 'щ' - 17: 0, # 'ъ' - 52: 0, # 'ь' - 42: 0, # 'ю' - 16: 1, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 32: { # 'Б' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 2, # 'А' - 32: 2, # 'Б' - 35: 1, # 'В' - 43: 1, # 'Г' - 37: 2, # 'Д' - 44: 1, # 'Е' - 55: 1, # 'Ж' - 47: 2, # 'З' - 40: 1, # 'И' - 59: 0, # 'Й' - 33: 1, # 'К' - 46: 1, # 'Л' - 38: 1, # 'М' - 36: 2, # 'Н' - 41: 2, # 'О' - 30: 1, # 'П' - 39: 1, # 'Р' - 28: 2, # 'С' - 34: 2, # 'Т' - 51: 1, # 'У' - 48: 2, # 'Ф' - 49: 1, # 'Х' - 53: 1, # 'Ц' - 50: 1, # 'Ч' - 54: 0, # 'Ш' - 57: 1, # 'Щ' - 61: 2, # 'Ъ' - 60: 1, # 'Ю' - 56: 1, # 'Я' - 1: 3, # 'а' - 18: 0, # 'б' - 9: 0, # 'в' - 20: 0, # 'г' - 11: 1, # 'д' - 3: 3, # 'е' - 23: 0, # 'ж' - 15: 0, # 'з' - 2: 2, # 'и' - 26: 0, # 'й' - 12: 0, # 'к' - 10: 2, # 'л' - 14: 0, # 'м' - 6: 0, # 'н' - 4: 3, # 'о' - 13: 0, # 'п' - 7: 2, # 'р' - 8: 1, # 'с' - 5: 0, # 'т' - 19: 2, # 'у' - 29: 0, # 'ф' - 25: 1, # 'х' - 22: 0, # 'ц' - 21: 0, # 'ч' - 27: 0, # 'ш' - 24: 0, # 'щ' - 17: 3, # 'ъ' - 52: 1, # 'ь' - 42: 1, # 'ю' - 16: 2, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 35: { # 'В' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 2, # 'А' - 32: 1, # 'Б' - 35: 1, # 'В' - 43: 0, # 'Г' - 37: 1, # 'Д' - 44: 2, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 2, # 'И' - 59: 0, # 'Й' - 33: 1, # 'К' - 46: 1, # 'Л' - 38: 1, # 'М' - 36: 1, # 'Н' - 41: 1, # 'О' - 30: 1, # 'П' - 39: 2, # 'Р' - 28: 2, # 'С' - 34: 1, # 'Т' - 51: 1, # 'У' - 48: 2, # 'Ф' - 49: 0, # 'Х' - 53: 1, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 1, # 'Ъ' - 60: 1, # 'Ю' - 56: 2, # 'Я' - 1: 3, # 'а' - 18: 1, # 'б' - 9: 0, # 'в' - 20: 0, # 'г' - 11: 1, # 'д' - 3: 3, # 'е' - 23: 1, # 'ж' - 15: 2, # 'з' - 2: 3, # 'и' - 26: 0, # 'й' - 12: 1, # 'к' - 10: 2, # 'л' - 14: 1, # 'м' - 6: 2, # 'н' - 4: 2, # 'о' - 13: 1, # 'п' - 7: 2, # 'р' - 8: 2, # 'с' - 5: 2, # 'т' - 19: 1, # 'у' - 29: 0, # 'ф' - 25: 1, # 'х' - 22: 0, # 'ц' - 21: 2, # 'ч' - 27: 0, # 'ш' - 24: 0, # 'щ' - 17: 2, # 'ъ' - 52: 1, # 'ь' - 42: 1, # 'ю' - 16: 1, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 43: { # 'Г' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 2, # 'А' - 32: 1, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 1, # 'Д' - 44: 2, # 'Е' - 55: 0, # 'Ж' - 47: 1, # 'З' - 40: 1, # 'И' - 59: 0, # 'Й' - 33: 1, # 'К' - 46: 1, # 'Л' - 38: 0, # 'М' - 36: 1, # 'Н' - 41: 1, # 
'О' - 30: 0, # 'П' - 39: 1, # 'Р' - 28: 1, # 'С' - 34: 0, # 'Т' - 51: 1, # 'У' - 48: 1, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 1, # 'Щ' - 61: 1, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 2, # 'а' - 18: 1, # 'б' - 9: 1, # 'в' - 20: 0, # 'г' - 11: 1, # 'д' - 3: 3, # 'е' - 23: 1, # 'ж' - 15: 0, # 'з' - 2: 2, # 'и' - 26: 0, # 'й' - 12: 1, # 'к' - 10: 2, # 'л' - 14: 1, # 'м' - 6: 1, # 'н' - 4: 2, # 'о' - 13: 0, # 'п' - 7: 2, # 'р' - 8: 0, # 'с' - 5: 0, # 'т' - 19: 2, # 'у' - 29: 0, # 'ф' - 25: 0, # 'х' - 22: 0, # 'ц' - 21: 0, # 'ч' - 27: 0, # 'ш' - 24: 1, # 'щ' - 17: 2, # 'ъ' - 52: 1, # 'ь' - 42: 1, # 'ю' - 16: 1, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 37: { # 'Д' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 2, # 'А' - 32: 1, # 'Б' - 35: 2, # 'В' - 43: 1, # 'Г' - 37: 2, # 'Д' - 44: 2, # 'Е' - 55: 2, # 'Ж' - 47: 1, # 'З' - 40: 2, # 'И' - 59: 0, # 'Й' - 33: 1, # 'К' - 46: 1, # 'Л' - 38: 1, # 'М' - 36: 1, # 'Н' - 41: 2, # 'О' - 30: 2, # 'П' - 39: 1, # 'Р' - 28: 2, # 'С' - 34: 1, # 'Т' - 51: 1, # 'У' - 48: 1, # 'Ф' - 49: 0, # 'Х' - 53: 1, # 'Ц' - 50: 1, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 1, # 'Ъ' - 60: 1, # 'Ю' - 56: 1, # 'Я' - 1: 3, # 'а' - 18: 0, # 'б' - 9: 2, # 'в' - 20: 0, # 'г' - 11: 0, # 'д' - 3: 3, # 'е' - 23: 3, # 'ж' - 15: 1, # 'з' - 2: 3, # 'и' - 26: 0, # 'й' - 12: 0, # 'к' - 10: 1, # 'л' - 14: 1, # 'м' - 6: 2, # 'н' - 4: 3, # 'о' - 13: 0, # 'п' - 7: 2, # 'р' - 8: 0, # 'с' - 5: 0, # 'т' - 19: 2, # 'у' - 29: 0, # 'ф' - 25: 0, # 'х' - 22: 0, # 'ц' - 21: 0, # 'ч' - 27: 0, # 'ш' - 24: 0, # 'щ' - 17: 2, # 'ъ' - 52: 1, # 'ь' - 42: 2, # 'ю' - 16: 1, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 44: { # 'Е' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 1, # 'А' - 32: 1, # 'Б' - 35: 2, # 'В' - 43: 1, # 'Г' - 37: 1, # 'Д' - 44: 1, # 'Е' - 55: 1, # 'Ж' - 47: 1, # 'З' - 40: 1, # 'И' - 59: 1, # 'Й' - 33: 2, # 'К' - 46: 2, # 'Л' - 38: 1, # 'М' - 36: 2, # 'Н' - 41: 2, # 'О' - 30: 1, # 'П' - 39: 2, # 'Р' - 28: 2, # 'С' - 34: 2, # 'Т' - 51: 1, # 'У' - 48: 2, # 'Ф' - 49: 1, # 'Х' - 53: 2, # 'Ц' - 50: 1, # 'Ч' - 54: 1, # 'Ш' - 57: 1, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 1, # 'Я' - 1: 0, # 'а' - 18: 1, # 'б' - 9: 2, # 'в' - 20: 1, # 'г' - 11: 2, # 'д' - 3: 0, # 'е' - 23: 1, # 'ж' - 15: 1, # 'з' - 2: 0, # 'и' - 26: 1, # 'й' - 12: 2, # 'к' - 10: 2, # 'л' - 14: 2, # 'м' - 6: 2, # 'н' - 4: 0, # 'о' - 13: 1, # 'п' - 7: 2, # 'р' - 8: 2, # 'с' - 5: 1, # 'т' - 19: 1, # 'у' - 29: 1, # 'ф' - 25: 1, # 'х' - 22: 0, # 'ц' - 21: 1, # 'ч' - 27: 1, # 'ш' - 24: 1, # 'щ' - 17: 1, # 'ъ' - 52: 0, # 'ь' - 42: 1, # 'ю' - 16: 1, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 55: { # 'Ж' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 1, # 'А' - 32: 0, # 'Б' - 35: 1, # 'В' - 43: 0, # 'Г' - 37: 1, # 'Д' - 44: 1, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 1, # 'И' - 59: 0, # 'Й' - 33: 1, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 1, # 'Н' - 41: 1, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 1, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 2, # 'а' - 18: 0, # 'б' - 9: 0, # 'в' - 20: 0, # 'г' - 11: 1, # 'д' - 3: 2, # 'е' - 23: 0, # 'ж' - 15: 0, # 'з' - 2: 2, # 'и' - 26: 0, # 'й' - 12: 0, # 'к' - 10: 0, # 'л' - 14: 0, # 'м' - 6: 0, # 'н' - 4: 2, # 'о' - 13: 1, # 'п' - 7: 1, # 'р' - 8: 0, # 'с' - 5: 0, # 'т' - 19: 1, # 'у' - 29: 0, # 'ф' - 25: 0, # 'х' - 22: 0, # 'ц' - 21: 0, # 'ч' - 27: 0, # 'ш' - 24: 0, # 'щ' - 17: 1, # 'ъ' - 52: 1, # 'ь' - 42: 1, # 'ю' - 16: 0, # 'я' - 58: 0, # 'є' - 62: 0, # 
'№' - }, - 47: { # 'З' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 2, # 'А' - 32: 1, # 'Б' - 35: 1, # 'В' - 43: 1, # 'Г' - 37: 1, # 'Д' - 44: 1, # 'Е' - 55: 0, # 'Ж' - 47: 1, # 'З' - 40: 1, # 'И' - 59: 0, # 'Й' - 33: 1, # 'К' - 46: 1, # 'Л' - 38: 1, # 'М' - 36: 2, # 'Н' - 41: 1, # 'О' - 30: 1, # 'П' - 39: 1, # 'Р' - 28: 1, # 'С' - 34: 1, # 'Т' - 51: 1, # 'У' - 48: 0, # 'Ф' - 49: 1, # 'Х' - 53: 1, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 1, # 'Ъ' - 60: 0, # 'Ю' - 56: 1, # 'Я' - 1: 3, # 'а' - 18: 1, # 'б' - 9: 2, # 'в' - 20: 1, # 'г' - 11: 2, # 'д' - 3: 2, # 'е' - 23: 0, # 'ж' - 15: 0, # 'з' - 2: 1, # 'и' - 26: 0, # 'й' - 12: 0, # 'к' - 10: 2, # 'л' - 14: 1, # 'м' - 6: 1, # 'н' - 4: 1, # 'о' - 13: 0, # 'п' - 7: 1, # 'р' - 8: 0, # 'с' - 5: 0, # 'т' - 19: 1, # 'у' - 29: 0, # 'ф' - 25: 0, # 'х' - 22: 0, # 'ц' - 21: 0, # 'ч' - 27: 0, # 'ш' - 24: 0, # 'щ' - 17: 1, # 'ъ' - 52: 0, # 'ь' - 42: 1, # 'ю' - 16: 0, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 40: { # 'И' - 63: 0, # 'e' - 45: 1, # '\xad' - 31: 1, # 'А' - 32: 1, # 'Б' - 35: 1, # 'В' - 43: 1, # 'Г' - 37: 1, # 'Д' - 44: 2, # 'Е' - 55: 1, # 'Ж' - 47: 2, # 'З' - 40: 1, # 'И' - 59: 1, # 'Й' - 33: 2, # 'К' - 46: 2, # 'Л' - 38: 2, # 'М' - 36: 2, # 'Н' - 41: 1, # 'О' - 30: 1, # 'П' - 39: 2, # 'Р' - 28: 2, # 'С' - 34: 2, # 'Т' - 51: 0, # 'У' - 48: 1, # 'Ф' - 49: 1, # 'Х' - 53: 1, # 'Ц' - 50: 1, # 'Ч' - 54: 1, # 'Ш' - 57: 1, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 2, # 'Я' - 1: 1, # 'а' - 18: 1, # 'б' - 9: 3, # 'в' - 20: 2, # 'г' - 11: 1, # 'д' - 3: 1, # 'е' - 23: 0, # 'ж' - 15: 3, # 'з' - 2: 0, # 'и' - 26: 1, # 'й' - 12: 1, # 'к' - 10: 2, # 'л' - 14: 2, # 'м' - 6: 2, # 'н' - 4: 0, # 'о' - 13: 1, # 'п' - 7: 2, # 'р' - 8: 2, # 'с' - 5: 2, # 'т' - 19: 0, # 'у' - 29: 1, # 'ф' - 25: 1, # 'х' - 22: 1, # 'ц' - 21: 1, # 'ч' - 27: 1, # 'ш' - 24: 1, # 'щ' - 17: 0, # 'ъ' - 52: 0, # 'ь' - 42: 0, # 'ю' - 16: 0, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 59: { # 'Й' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 1, # 'Д' - 44: 1, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 1, # 'К' - 46: 1, # 'Л' - 38: 1, # 'М' - 36: 1, # 'Н' - 41: 1, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 1, # 'С' - 34: 1, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 1, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 1, # 'Я' - 1: 0, # 'а' - 18: 0, # 'б' - 9: 0, # 'в' - 20: 0, # 'г' - 11: 0, # 'д' - 3: 1, # 'е' - 23: 0, # 'ж' - 15: 0, # 'з' - 2: 0, # 'и' - 26: 0, # 'й' - 12: 0, # 'к' - 10: 0, # 'л' - 14: 0, # 'м' - 6: 0, # 'н' - 4: 2, # 'о' - 13: 0, # 'п' - 7: 0, # 'р' - 8: 0, # 'с' - 5: 0, # 'т' - 19: 0, # 'у' - 29: 0, # 'ф' - 25: 0, # 'х' - 22: 0, # 'ц' - 21: 0, # 'ч' - 27: 0, # 'ш' - 24: 0, # 'щ' - 17: 1, # 'ъ' - 52: 0, # 'ь' - 42: 0, # 'ю' - 16: 0, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 33: { # 'К' - 63: 0, # 'e' - 45: 1, # '\xad' - 31: 2, # 'А' - 32: 1, # 'Б' - 35: 1, # 'В' - 43: 1, # 'Г' - 37: 1, # 'Д' - 44: 1, # 'Е' - 55: 0, # 'Ж' - 47: 1, # 'З' - 40: 2, # 'И' - 59: 0, # 'Й' - 33: 1, # 'К' - 46: 1, # 'Л' - 38: 0, # 'М' - 36: 2, # 'Н' - 41: 2, # 'О' - 30: 2, # 'П' - 39: 1, # 'Р' - 28: 2, # 'С' - 34: 1, # 'Т' - 51: 1, # 'У' - 48: 1, # 'Ф' - 49: 1, # 'Х' - 53: 1, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 1, # 'Ъ' - 60: 1, # 'Ю' - 56: 0, # 'Я' - 1: 3, # 'а' - 18: 0, # 'б' - 9: 1, # 'в' - 20: 0, # 'г' - 11: 0, # 'д' - 3: 2, # 'е' - 23: 1, # 'ж' - 15: 0, # 'з' - 2: 2, # 'и' - 26: 0, # 'й' - 12: 0, # 'к' - 10: 2, # 'л' - 14: 1, # 
'м' - 6: 2, # 'н' - 4: 3, # 'о' - 13: 0, # 'п' - 7: 3, # 'р' - 8: 1, # 'с' - 5: 0, # 'т' - 19: 2, # 'у' - 29: 0, # 'ф' - 25: 1, # 'х' - 22: 0, # 'ц' - 21: 0, # 'ч' - 27: 1, # 'ш' - 24: 0, # 'щ' - 17: 2, # 'ъ' - 52: 1, # 'ь' - 42: 2, # 'ю' - 16: 0, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 46: { # 'Л' - 63: 1, # 'e' - 45: 0, # '\xad' - 31: 2, # 'А' - 32: 1, # 'Б' - 35: 1, # 'В' - 43: 2, # 'Г' - 37: 1, # 'Д' - 44: 2, # 'Е' - 55: 0, # 'Ж' - 47: 1, # 'З' - 40: 2, # 'И' - 59: 0, # 'Й' - 33: 1, # 'К' - 46: 1, # 'Л' - 38: 0, # 'М' - 36: 1, # 'Н' - 41: 2, # 'О' - 30: 1, # 'П' - 39: 0, # 'Р' - 28: 1, # 'С' - 34: 1, # 'Т' - 51: 1, # 'У' - 48: 0, # 'Ф' - 49: 1, # 'Х' - 53: 1, # 'Ц' - 50: 1, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 1, # 'Ъ' - 60: 1, # 'Ю' - 56: 1, # 'Я' - 1: 2, # 'а' - 18: 0, # 'б' - 9: 1, # 'в' - 20: 0, # 'г' - 11: 0, # 'д' - 3: 3, # 'е' - 23: 0, # 'ж' - 15: 0, # 'з' - 2: 2, # 'и' - 26: 0, # 'й' - 12: 0, # 'к' - 10: 0, # 'л' - 14: 0, # 'м' - 6: 0, # 'н' - 4: 2, # 'о' - 13: 0, # 'п' - 7: 0, # 'р' - 8: 0, # 'с' - 5: 0, # 'т' - 19: 2, # 'у' - 29: 0, # 'ф' - 25: 0, # 'х' - 22: 0, # 'ц' - 21: 0, # 'ч' - 27: 0, # 'ш' - 24: 0, # 'щ' - 17: 1, # 'ъ' - 52: 1, # 'ь' - 42: 2, # 'ю' - 16: 1, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 38: { # 'М' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 2, # 'А' - 32: 1, # 'Б' - 35: 2, # 'В' - 43: 0, # 'Г' - 37: 1, # 'Д' - 44: 1, # 'Е' - 55: 0, # 'Ж' - 47: 1, # 'З' - 40: 2, # 'И' - 59: 0, # 'Й' - 33: 1, # 'К' - 46: 1, # 'Л' - 38: 1, # 'М' - 36: 1, # 'Н' - 41: 2, # 'О' - 30: 1, # 'П' - 39: 1, # 'Р' - 28: 2, # 'С' - 34: 1, # 'Т' - 51: 1, # 'У' - 48: 1, # 'Ф' - 49: 0, # 'Х' - 53: 1, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 1, # 'Ъ' - 60: 0, # 'Ю' - 56: 1, # 'Я' - 1: 3, # 'а' - 18: 0, # 'б' - 9: 0, # 'в' - 20: 0, # 'г' - 11: 0, # 'д' - 3: 3, # 'е' - 23: 0, # 'ж' - 15: 0, # 'з' - 2: 3, # 'и' - 26: 0, # 'й' - 12: 0, # 'к' - 10: 2, # 'л' - 14: 0, # 'м' - 6: 2, # 'н' - 4: 3, # 'о' - 13: 0, # 'п' - 7: 1, # 'р' - 8: 0, # 'с' - 5: 0, # 'т' - 19: 2, # 'у' - 29: 0, # 'ф' - 25: 0, # 'х' - 22: 0, # 'ц' - 21: 0, # 'ч' - 27: 0, # 'ш' - 24: 0, # 'щ' - 17: 2, # 'ъ' - 52: 1, # 'ь' - 42: 2, # 'ю' - 16: 1, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 36: { # 'Н' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 2, # 'А' - 32: 2, # 'Б' - 35: 1, # 'В' - 43: 1, # 'Г' - 37: 2, # 'Д' - 44: 2, # 'Е' - 55: 1, # 'Ж' - 47: 1, # 'З' - 40: 2, # 'И' - 59: 1, # 'Й' - 33: 2, # 'К' - 46: 1, # 'Л' - 38: 1, # 'М' - 36: 1, # 'Н' - 41: 2, # 'О' - 30: 1, # 'П' - 39: 1, # 'Р' - 28: 2, # 'С' - 34: 2, # 'Т' - 51: 1, # 'У' - 48: 1, # 'Ф' - 49: 1, # 'Х' - 53: 1, # 'Ц' - 50: 1, # 'Ч' - 54: 1, # 'Ш' - 57: 0, # 'Щ' - 61: 1, # 'Ъ' - 60: 1, # 'Ю' - 56: 1, # 'Я' - 1: 3, # 'а' - 18: 0, # 'б' - 9: 0, # 'в' - 20: 1, # 'г' - 11: 0, # 'д' - 3: 3, # 'е' - 23: 0, # 'ж' - 15: 0, # 'з' - 2: 3, # 'и' - 26: 0, # 'й' - 12: 0, # 'к' - 10: 0, # 'л' - 14: 0, # 'м' - 6: 0, # 'н' - 4: 3, # 'о' - 13: 0, # 'п' - 7: 0, # 'р' - 8: 0, # 'с' - 5: 1, # 'т' - 19: 1, # 'у' - 29: 0, # 'ф' - 25: 0, # 'х' - 22: 0, # 'ц' - 21: 0, # 'ч' - 27: 1, # 'ш' - 24: 0, # 'щ' - 17: 0, # 'ъ' - 52: 0, # 'ь' - 42: 2, # 'ю' - 16: 2, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 41: { # 'О' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 1, # 'А' - 32: 1, # 'Б' - 35: 2, # 'В' - 43: 1, # 'Г' - 37: 2, # 'Д' - 44: 1, # 'Е' - 55: 1, # 'Ж' - 47: 1, # 'З' - 40: 1, # 'И' - 59: 1, # 'Й' - 33: 2, # 'К' - 46: 2, # 'Л' - 38: 2, # 'М' - 36: 2, # 'Н' - 41: 2, # 'О' - 30: 1, # 'П' - 39: 2, # 'Р' - 28: 2, # 'С' - 34: 2, # 'Т' - 51: 1, # 'У' - 48: 1, # 'Ф' - 49: 1, # 'Х' - 53: 0, # 'Ц' 
- 50: 1, # 'Ч' - 54: 1, # 'Ш' - 57: 1, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 1, # 'Я' - 1: 1, # 'а' - 18: 2, # 'б' - 9: 2, # 'в' - 20: 2, # 'г' - 11: 1, # 'д' - 3: 1, # 'е' - 23: 1, # 'ж' - 15: 1, # 'з' - 2: 0, # 'и' - 26: 1, # 'й' - 12: 2, # 'к' - 10: 2, # 'л' - 14: 1, # 'м' - 6: 1, # 'н' - 4: 0, # 'о' - 13: 2, # 'п' - 7: 2, # 'р' - 8: 2, # 'с' - 5: 3, # 'т' - 19: 1, # 'у' - 29: 1, # 'ф' - 25: 1, # 'х' - 22: 1, # 'ц' - 21: 2, # 'ч' - 27: 0, # 'ш' - 24: 2, # 'щ' - 17: 0, # 'ъ' - 52: 0, # 'ь' - 42: 0, # 'ю' - 16: 1, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 30: { # 'П' - 63: 0, # 'e' - 45: 1, # '\xad' - 31: 2, # 'А' - 32: 1, # 'Б' - 35: 1, # 'В' - 43: 1, # 'Г' - 37: 1, # 'Д' - 44: 1, # 'Е' - 55: 0, # 'Ж' - 47: 1, # 'З' - 40: 2, # 'И' - 59: 0, # 'Й' - 33: 1, # 'К' - 46: 1, # 'Л' - 38: 1, # 'М' - 36: 1, # 'Н' - 41: 2, # 'О' - 30: 2, # 'П' - 39: 2, # 'Р' - 28: 2, # 'С' - 34: 1, # 'Т' - 51: 2, # 'У' - 48: 1, # 'Ф' - 49: 0, # 'Х' - 53: 1, # 'Ц' - 50: 1, # 'Ч' - 54: 1, # 'Ш' - 57: 0, # 'Щ' - 61: 1, # 'Ъ' - 60: 1, # 'Ю' - 56: 0, # 'Я' - 1: 3, # 'а' - 18: 0, # 'б' - 9: 0, # 'в' - 20: 0, # 'г' - 11: 2, # 'д' - 3: 3, # 'е' - 23: 0, # 'ж' - 15: 0, # 'з' - 2: 2, # 'и' - 26: 0, # 'й' - 12: 1, # 'к' - 10: 3, # 'л' - 14: 0, # 'м' - 6: 1, # 'н' - 4: 3, # 'о' - 13: 0, # 'п' - 7: 3, # 'р' - 8: 1, # 'с' - 5: 1, # 'т' - 19: 2, # 'у' - 29: 1, # 'ф' - 25: 1, # 'х' - 22: 0, # 'ц' - 21: 1, # 'ч' - 27: 1, # 'ш' - 24: 0, # 'щ' - 17: 2, # 'ъ' - 52: 1, # 'ь' - 42: 1, # 'ю' - 16: 1, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 39: { # 'Р' - 63: 0, # 'e' - 45: 1, # '\xad' - 31: 2, # 'А' - 32: 1, # 'Б' - 35: 1, # 'В' - 43: 2, # 'Г' - 37: 2, # 'Д' - 44: 2, # 'Е' - 55: 0, # 'Ж' - 47: 1, # 'З' - 40: 2, # 'И' - 59: 0, # 'Й' - 33: 1, # 'К' - 46: 0, # 'Л' - 38: 1, # 'М' - 36: 1, # 'Н' - 41: 2, # 'О' - 30: 2, # 'П' - 39: 1, # 'Р' - 28: 1, # 'С' - 34: 1, # 'Т' - 51: 1, # 'У' - 48: 1, # 'Ф' - 49: 1, # 'Х' - 53: 1, # 'Ц' - 50: 1, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 1, # 'Ъ' - 60: 1, # 'Ю' - 56: 1, # 'Я' - 1: 3, # 'а' - 18: 0, # 'б' - 9: 0, # 'в' - 20: 0, # 'г' - 11: 0, # 'д' - 3: 2, # 'е' - 23: 0, # 'ж' - 15: 0, # 'з' - 2: 2, # 'и' - 26: 0, # 'й' - 12: 0, # 'к' - 10: 0, # 'л' - 14: 0, # 'м' - 6: 1, # 'н' - 4: 3, # 'о' - 13: 0, # 'п' - 7: 0, # 'р' - 8: 1, # 'с' - 5: 0, # 'т' - 19: 3, # 'у' - 29: 0, # 'ф' - 25: 0, # 'х' - 22: 0, # 'ц' - 21: 0, # 'ч' - 27: 0, # 'ш' - 24: 0, # 'щ' - 17: 1, # 'ъ' - 52: 0, # 'ь' - 42: 1, # 'ю' - 16: 1, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 28: { # 'С' - 63: 1, # 'e' - 45: 0, # '\xad' - 31: 3, # 'А' - 32: 2, # 'Б' - 35: 2, # 'В' - 43: 1, # 'Г' - 37: 2, # 'Д' - 44: 2, # 'Е' - 55: 1, # 'Ж' - 47: 1, # 'З' - 40: 2, # 'И' - 59: 0, # 'Й' - 33: 2, # 'К' - 46: 1, # 'Л' - 38: 1, # 'М' - 36: 1, # 'Н' - 41: 2, # 'О' - 30: 2, # 'П' - 39: 1, # 'Р' - 28: 2, # 'С' - 34: 2, # 'Т' - 51: 1, # 'У' - 48: 1, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 1, # 'Ъ' - 60: 1, # 'Ю' - 56: 1, # 'Я' - 1: 3, # 'а' - 18: 1, # 'б' - 9: 2, # 'в' - 20: 1, # 'г' - 11: 1, # 'д' - 3: 3, # 'е' - 23: 0, # 'ж' - 15: 0, # 'з' - 2: 3, # 'и' - 26: 0, # 'й' - 12: 2, # 'к' - 10: 3, # 'л' - 14: 2, # 'м' - 6: 1, # 'н' - 4: 3, # 'о' - 13: 3, # 'п' - 7: 2, # 'р' - 8: 0, # 'с' - 5: 3, # 'т' - 19: 2, # 'у' - 29: 2, # 'ф' - 25: 1, # 'х' - 22: 1, # 'ц' - 21: 1, # 'ч' - 27: 0, # 'ш' - 24: 0, # 'щ' - 17: 3, # 'ъ' - 52: 1, # 'ь' - 42: 1, # 'ю' - 16: 1, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 34: { # 'Т' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 2, # 'А' - 32: 2, # 'Б' - 35: 1, # 'В' - 43: 0, # 'Г' - 37: 
1, # 'Д' - 44: 2, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 2, # 'И' - 59: 0, # 'Й' - 33: 2, # 'К' - 46: 1, # 'Л' - 38: 1, # 'М' - 36: 1, # 'Н' - 41: 2, # 'О' - 30: 1, # 'П' - 39: 2, # 'Р' - 28: 2, # 'С' - 34: 1, # 'Т' - 51: 1, # 'У' - 48: 1, # 'Ф' - 49: 0, # 'Х' - 53: 1, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 1, # 'Ъ' - 60: 0, # 'Ю' - 56: 1, # 'Я' - 1: 3, # 'а' - 18: 1, # 'б' - 9: 1, # 'в' - 20: 0, # 'г' - 11: 0, # 'д' - 3: 3, # 'е' - 23: 0, # 'ж' - 15: 0, # 'з' - 2: 2, # 'и' - 26: 0, # 'й' - 12: 1, # 'к' - 10: 1, # 'л' - 14: 0, # 'м' - 6: 0, # 'н' - 4: 3, # 'о' - 13: 0, # 'п' - 7: 3, # 'р' - 8: 0, # 'с' - 5: 0, # 'т' - 19: 2, # 'у' - 29: 0, # 'ф' - 25: 0, # 'х' - 22: 0, # 'ц' - 21: 0, # 'ч' - 27: 0, # 'ш' - 24: 0, # 'щ' - 17: 2, # 'ъ' - 52: 0, # 'ь' - 42: 1, # 'ю' - 16: 2, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 51: { # 'У' - 63: 0, # 'e' - 45: 1, # '\xad' - 31: 1, # 'А' - 32: 1, # 'Б' - 35: 1, # 'В' - 43: 1, # 'Г' - 37: 1, # 'Д' - 44: 2, # 'Е' - 55: 1, # 'Ж' - 47: 1, # 'З' - 40: 1, # 'И' - 59: 0, # 'Й' - 33: 1, # 'К' - 46: 1, # 'Л' - 38: 1, # 'М' - 36: 1, # 'Н' - 41: 0, # 'О' - 30: 1, # 'П' - 39: 1, # 'Р' - 28: 1, # 'С' - 34: 2, # 'Т' - 51: 0, # 'У' - 48: 1, # 'Ф' - 49: 1, # 'Х' - 53: 1, # 'Ц' - 50: 1, # 'Ч' - 54: 1, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 1, # 'а' - 18: 1, # 'б' - 9: 2, # 'в' - 20: 1, # 'г' - 11: 1, # 'д' - 3: 2, # 'е' - 23: 1, # 'ж' - 15: 1, # 'з' - 2: 2, # 'и' - 26: 1, # 'й' - 12: 2, # 'к' - 10: 1, # 'л' - 14: 1, # 'м' - 6: 2, # 'н' - 4: 2, # 'о' - 13: 1, # 'п' - 7: 1, # 'р' - 8: 2, # 'с' - 5: 1, # 'т' - 19: 1, # 'у' - 29: 0, # 'ф' - 25: 1, # 'х' - 22: 0, # 'ц' - 21: 2, # 'ч' - 27: 1, # 'ш' - 24: 0, # 'щ' - 17: 1, # 'ъ' - 52: 0, # 'ь' - 42: 0, # 'ю' - 16: 0, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 48: { # 'Ф' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 2, # 'А' - 32: 1, # 'Б' - 35: 1, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 1, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 2, # 'И' - 59: 0, # 'Й' - 33: 1, # 'К' - 46: 1, # 'Л' - 38: 0, # 'М' - 36: 1, # 'Н' - 41: 1, # 'О' - 30: 2, # 'П' - 39: 1, # 'Р' - 28: 2, # 'С' - 34: 1, # 'Т' - 51: 1, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 2, # 'а' - 18: 0, # 'б' - 9: 0, # 'в' - 20: 0, # 'г' - 11: 0, # 'д' - 3: 2, # 'е' - 23: 0, # 'ж' - 15: 0, # 'з' - 2: 2, # 'и' - 26: 0, # 'й' - 12: 0, # 'к' - 10: 2, # 'л' - 14: 0, # 'м' - 6: 0, # 'н' - 4: 2, # 'о' - 13: 0, # 'п' - 7: 2, # 'р' - 8: 0, # 'с' - 5: 0, # 'т' - 19: 1, # 'у' - 29: 0, # 'ф' - 25: 0, # 'х' - 22: 0, # 'ц' - 21: 0, # 'ч' - 27: 0, # 'ш' - 24: 0, # 'щ' - 17: 1, # 'ъ' - 52: 1, # 'ь' - 42: 1, # 'ю' - 16: 0, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 49: { # 'Х' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 1, # 'А' - 32: 0, # 'Б' - 35: 1, # 'В' - 43: 1, # 'Г' - 37: 1, # 'Д' - 44: 1, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 1, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 1, # 'Л' - 38: 1, # 'М' - 36: 1, # 'Н' - 41: 1, # 'О' - 30: 1, # 'П' - 39: 1, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 1, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 2, # 'а' - 18: 0, # 'б' - 9: 1, # 'в' - 20: 0, # 'г' - 11: 0, # 'д' - 3: 2, # 'е' - 23: 0, # 'ж' - 15: 0, # 'з' - 2: 2, # 'и' - 26: 0, # 'й' - 12: 0, # 'к' - 10: 1, # 'л' - 14: 1, # 'м' - 6: 0, # 'н' - 4: 2, # 'о' - 13: 0, # 'п' - 7: 2, # 'р' - 8: 0, # 'с' - 5: 0, # 'т' - 19: 2, # 'у' - 29: 0, # 'ф' - 25: 
0, # 'х' - 22: 0, # 'ц' - 21: 0, # 'ч' - 27: 0, # 'ш' - 24: 0, # 'щ' - 17: 2, # 'ъ' - 52: 1, # 'ь' - 42: 1, # 'ю' - 16: 0, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 53: { # 'Ц' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 1, # 'А' - 32: 0, # 'Б' - 35: 1, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 1, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 2, # 'И' - 59: 0, # 'Й' - 33: 2, # 'К' - 46: 1, # 'Л' - 38: 1, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 1, # 'Р' - 28: 2, # 'С' - 34: 0, # 'Т' - 51: 1, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 2, # 'а' - 18: 0, # 'б' - 9: 2, # 'в' - 20: 0, # 'г' - 11: 0, # 'д' - 3: 2, # 'е' - 23: 0, # 'ж' - 15: 1, # 'з' - 2: 2, # 'и' - 26: 0, # 'й' - 12: 0, # 'к' - 10: 0, # 'л' - 14: 0, # 'м' - 6: 0, # 'н' - 4: 1, # 'о' - 13: 0, # 'п' - 7: 1, # 'р' - 8: 0, # 'с' - 5: 0, # 'т' - 19: 1, # 'у' - 29: 0, # 'ф' - 25: 0, # 'х' - 22: 0, # 'ц' - 21: 0, # 'ч' - 27: 0, # 'ш' - 24: 0, # 'щ' - 17: 1, # 'ъ' - 52: 0, # 'ь' - 42: 1, # 'ю' - 16: 1, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 50: { # 'Ч' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 2, # 'А' - 32: 1, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 1, # 'Е' - 55: 0, # 'Ж' - 47: 1, # 'З' - 40: 1, # 'И' - 59: 0, # 'Й' - 33: 1, # 'К' - 46: 1, # 'Л' - 38: 0, # 'М' - 36: 1, # 'Н' - 41: 1, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 1, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 2, # 'а' - 18: 0, # 'б' - 9: 0, # 'в' - 20: 0, # 'г' - 11: 0, # 'д' - 3: 3, # 'е' - 23: 1, # 'ж' - 15: 0, # 'з' - 2: 2, # 'и' - 26: 0, # 'й' - 12: 0, # 'к' - 10: 1, # 'л' - 14: 0, # 'м' - 6: 0, # 'н' - 4: 2, # 'о' - 13: 0, # 'п' - 7: 1, # 'р' - 8: 0, # 'с' - 5: 0, # 'т' - 19: 2, # 'у' - 29: 0, # 'ф' - 25: 0, # 'х' - 22: 0, # 'ц' - 21: 0, # 'ч' - 27: 0, # 'ш' - 24: 0, # 'щ' - 17: 1, # 'ъ' - 52: 1, # 'ь' - 42: 0, # 'ю' - 16: 0, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 54: { # 'Ш' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 1, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 1, # 'Е' - 55: 0, # 'Ж' - 47: 1, # 'З' - 40: 1, # 'И' - 59: 0, # 'Й' - 33: 1, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 1, # 'Н' - 41: 1, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 1, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 2, # 'а' - 18: 0, # 'б' - 9: 2, # 'в' - 20: 0, # 'г' - 11: 0, # 'д' - 3: 2, # 'е' - 23: 0, # 'ж' - 15: 0, # 'з' - 2: 2, # 'и' - 26: 0, # 'й' - 12: 1, # 'к' - 10: 1, # 'л' - 14: 1, # 'м' - 6: 1, # 'н' - 4: 2, # 'о' - 13: 1, # 'п' - 7: 1, # 'р' - 8: 0, # 'с' - 5: 0, # 'т' - 19: 2, # 'у' - 29: 0, # 'ф' - 25: 0, # 'х' - 22: 0, # 'ц' - 21: 1, # 'ч' - 27: 0, # 'ш' - 24: 0, # 'щ' - 17: 1, # 'ъ' - 52: 1, # 'ь' - 42: 0, # 'ю' - 16: 0, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 57: { # 'Щ' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 1, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 1, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 1, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 1, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 2, # 'а' - 18: 0, # 'б' - 9: 
0, # 'в' - 20: 0, # 'г' - 11: 0, # 'д' - 3: 2, # 'е' - 23: 0, # 'ж' - 15: 0, # 'з' - 2: 1, # 'и' - 26: 0, # 'й' - 12: 0, # 'к' - 10: 0, # 'л' - 14: 0, # 'м' - 6: 0, # 'н' - 4: 1, # 'о' - 13: 0, # 'п' - 7: 1, # 'р' - 8: 0, # 'с' - 5: 0, # 'т' - 19: 1, # 'у' - 29: 0, # 'ф' - 25: 0, # 'х' - 22: 0, # 'ц' - 21: 0, # 'ч' - 27: 0, # 'ш' - 24: 0, # 'щ' - 17: 1, # 'ъ' - 52: 0, # 'ь' - 42: 0, # 'ю' - 16: 1, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 61: { # 'Ъ' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 0, # 'А' - 32: 1, # 'Б' - 35: 1, # 'В' - 43: 0, # 'Г' - 37: 1, # 'Д' - 44: 0, # 'Е' - 55: 1, # 'Ж' - 47: 1, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 1, # 'К' - 46: 2, # 'Л' - 38: 1, # 'М' - 36: 1, # 'Н' - 41: 0, # 'О' - 30: 1, # 'П' - 39: 2, # 'Р' - 28: 1, # 'С' - 34: 1, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 1, # 'Х' - 53: 1, # 'Ц' - 50: 1, # 'Ч' - 54: 1, # 'Ш' - 57: 1, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 0, # 'а' - 18: 0, # 'б' - 9: 0, # 'в' - 20: 0, # 'г' - 11: 0, # 'д' - 3: 0, # 'е' - 23: 0, # 'ж' - 15: 0, # 'з' - 2: 0, # 'и' - 26: 0, # 'й' - 12: 0, # 'к' - 10: 1, # 'л' - 14: 0, # 'м' - 6: 1, # 'н' - 4: 0, # 'о' - 13: 0, # 'п' - 7: 1, # 'р' - 8: 0, # 'с' - 5: 0, # 'т' - 19: 0, # 'у' - 29: 0, # 'ф' - 25: 0, # 'х' - 22: 0, # 'ц' - 21: 0, # 'ч' - 27: 0, # 'ш' - 24: 0, # 'щ' - 17: 0, # 'ъ' - 52: 0, # 'ь' - 42: 0, # 'ю' - 16: 0, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 60: { # 'Ю' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 1, # 'А' - 32: 1, # 'Б' - 35: 0, # 'В' - 43: 1, # 'Г' - 37: 1, # 'Д' - 44: 0, # 'Е' - 55: 1, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 1, # 'К' - 46: 1, # 'Л' - 38: 0, # 'М' - 36: 1, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 1, # 'Р' - 28: 1, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 0, # 'а' - 18: 1, # 'б' - 9: 1, # 'в' - 20: 2, # 'г' - 11: 1, # 'д' - 3: 0, # 'е' - 23: 2, # 'ж' - 15: 1, # 'з' - 2: 1, # 'и' - 26: 0, # 'й' - 12: 1, # 'к' - 10: 1, # 'л' - 14: 1, # 'м' - 6: 1, # 'н' - 4: 0, # 'о' - 13: 1, # 'п' - 7: 1, # 'р' - 8: 1, # 'с' - 5: 1, # 'т' - 19: 0, # 'у' - 29: 0, # 'ф' - 25: 1, # 'х' - 22: 0, # 'ц' - 21: 0, # 'ч' - 27: 0, # 'ш' - 24: 0, # 'щ' - 17: 0, # 'ъ' - 52: 0, # 'ь' - 42: 0, # 'ю' - 16: 0, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 56: { # 'Я' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 0, # 'А' - 32: 1, # 'Б' - 35: 1, # 'В' - 43: 1, # 'Г' - 37: 1, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 1, # 'К' - 46: 1, # 'Л' - 38: 1, # 'М' - 36: 1, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 1, # 'С' - 34: 2, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 0, # 'а' - 18: 1, # 'б' - 9: 1, # 'в' - 20: 1, # 'г' - 11: 1, # 'д' - 3: 0, # 'е' - 23: 0, # 'ж' - 15: 1, # 'з' - 2: 1, # 'и' - 26: 1, # 'й' - 12: 1, # 'к' - 10: 1, # 'л' - 14: 2, # 'м' - 6: 2, # 'н' - 4: 0, # 'о' - 13: 2, # 'п' - 7: 1, # 'р' - 8: 1, # 'с' - 5: 1, # 'т' - 19: 0, # 'у' - 29: 0, # 'ф' - 25: 1, # 'х' - 22: 0, # 'ц' - 21: 0, # 'ч' - 27: 1, # 'ш' - 24: 0, # 'щ' - 17: 0, # 'ъ' - 52: 0, # 'ь' - 42: 1, # 'ю' - 16: 0, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 1: { # 'а' - 63: 1, # 'e' - 45: 1, # '\xad' - 31: 1, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 1, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 
'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 1, # 'а' - 18: 3, # 'б' - 9: 3, # 'в' - 20: 3, # 'г' - 11: 3, # 'д' - 3: 3, # 'е' - 23: 3, # 'ж' - 15: 3, # 'з' - 2: 3, # 'и' - 26: 3, # 'й' - 12: 3, # 'к' - 10: 3, # 'л' - 14: 3, # 'м' - 6: 3, # 'н' - 4: 2, # 'о' - 13: 3, # 'п' - 7: 3, # 'р' - 8: 3, # 'с' - 5: 3, # 'т' - 19: 3, # 'у' - 29: 3, # 'ф' - 25: 3, # 'х' - 22: 3, # 'ц' - 21: 3, # 'ч' - 27: 3, # 'ш' - 24: 3, # 'щ' - 17: 0, # 'ъ' - 52: 0, # 'ь' - 42: 1, # 'ю' - 16: 3, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 18: { # 'б' - 63: 1, # 'e' - 45: 0, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 3, # 'а' - 18: 0, # 'б' - 9: 3, # 'в' - 20: 1, # 'г' - 11: 2, # 'д' - 3: 3, # 'е' - 23: 1, # 'ж' - 15: 1, # 'з' - 2: 3, # 'и' - 26: 0, # 'й' - 12: 1, # 'к' - 10: 3, # 'л' - 14: 2, # 'м' - 6: 3, # 'н' - 4: 3, # 'о' - 13: 1, # 'п' - 7: 3, # 'р' - 8: 3, # 'с' - 5: 0, # 'т' - 19: 3, # 'у' - 29: 0, # 'ф' - 25: 2, # 'х' - 22: 1, # 'ц' - 21: 1, # 'ч' - 27: 1, # 'ш' - 24: 3, # 'щ' - 17: 3, # 'ъ' - 52: 1, # 'ь' - 42: 2, # 'ю' - 16: 3, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 9: { # 'в' - 63: 1, # 'e' - 45: 1, # '\xad' - 31: 0, # 'А' - 32: 1, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 1, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 3, # 'а' - 18: 1, # 'б' - 9: 0, # 'в' - 20: 2, # 'г' - 11: 3, # 'д' - 3: 3, # 'е' - 23: 1, # 'ж' - 15: 3, # 'з' - 2: 3, # 'и' - 26: 0, # 'й' - 12: 3, # 'к' - 10: 3, # 'л' - 14: 2, # 'м' - 6: 3, # 'н' - 4: 3, # 'о' - 13: 2, # 'п' - 7: 3, # 'р' - 8: 3, # 'с' - 5: 3, # 'т' - 19: 2, # 'у' - 29: 0, # 'ф' - 25: 2, # 'х' - 22: 2, # 'ц' - 21: 3, # 'ч' - 27: 2, # 'ш' - 24: 1, # 'щ' - 17: 3, # 'ъ' - 52: 1, # 'ь' - 42: 2, # 'ю' - 16: 3, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 20: { # 'г' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 3, # 'а' - 18: 1, # 'б' - 9: 2, # 'в' - 20: 1, # 'г' - 11: 2, # 'д' - 3: 3, # 'е' - 23: 0, # 'ж' - 15: 1, # 'з' - 2: 3, # 'и' - 26: 0, # 'й' - 12: 1, # 'к' - 10: 3, # 'л' - 14: 1, # 'м' - 6: 3, # 'н' - 4: 3, # 'о' - 13: 1, # 'п' - 7: 3, # 'р' - 8: 2, # 'с' - 5: 2, # 'т' - 19: 3, # 'у' - 29: 1, # 'ф' - 25: 1, # 'х' - 22: 0, # 'ц' - 21: 1, # 'ч' - 27: 0, # 'ш' - 24: 0, # 'щ' - 17: 3, # 'ъ' - 52: 1, # 'ь' - 42: 1, # 'ю' - 16: 1, # 
'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 11: { # 'д' - 63: 1, # 'e' - 45: 0, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 3, # 'а' - 18: 2, # 'б' - 9: 3, # 'в' - 20: 2, # 'г' - 11: 2, # 'д' - 3: 3, # 'е' - 23: 3, # 'ж' - 15: 2, # 'з' - 2: 3, # 'и' - 26: 0, # 'й' - 12: 3, # 'к' - 10: 3, # 'л' - 14: 3, # 'м' - 6: 3, # 'н' - 4: 3, # 'о' - 13: 3, # 'п' - 7: 3, # 'р' - 8: 3, # 'с' - 5: 1, # 'т' - 19: 3, # 'у' - 29: 1, # 'ф' - 25: 2, # 'х' - 22: 2, # 'ц' - 21: 2, # 'ч' - 27: 1, # 'ш' - 24: 1, # 'щ' - 17: 3, # 'ъ' - 52: 1, # 'ь' - 42: 1, # 'ю' - 16: 3, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 3: { # 'е' - 63: 0, # 'e' - 45: 1, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 2, # 'а' - 18: 3, # 'б' - 9: 3, # 'в' - 20: 3, # 'г' - 11: 3, # 'д' - 3: 2, # 'е' - 23: 3, # 'ж' - 15: 3, # 'з' - 2: 2, # 'и' - 26: 3, # 'й' - 12: 3, # 'к' - 10: 3, # 'л' - 14: 3, # 'м' - 6: 3, # 'н' - 4: 3, # 'о' - 13: 3, # 'п' - 7: 3, # 'р' - 8: 3, # 'с' - 5: 3, # 'т' - 19: 2, # 'у' - 29: 3, # 'ф' - 25: 3, # 'х' - 22: 3, # 'ц' - 21: 3, # 'ч' - 27: 3, # 'ш' - 24: 3, # 'щ' - 17: 1, # 'ъ' - 52: 0, # 'ь' - 42: 1, # 'ю' - 16: 3, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 23: { # 'ж' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 3, # 'а' - 18: 3, # 'б' - 9: 2, # 'в' - 20: 1, # 'г' - 11: 3, # 'д' - 3: 3, # 'е' - 23: 0, # 'ж' - 15: 0, # 'з' - 2: 3, # 'и' - 26: 0, # 'й' - 12: 2, # 'к' - 10: 1, # 'л' - 14: 1, # 'м' - 6: 3, # 'н' - 4: 2, # 'о' - 13: 1, # 'п' - 7: 1, # 'р' - 8: 1, # 'с' - 5: 1, # 'т' - 19: 2, # 'у' - 29: 0, # 'ф' - 25: 0, # 'х' - 22: 1, # 'ц' - 21: 1, # 'ч' - 27: 0, # 'ш' - 24: 0, # 'щ' - 17: 2, # 'ъ' - 52: 0, # 'ь' - 42: 0, # 'ю' - 16: 1, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 15: { # 'з' - 63: 1, # 'e' - 45: 0, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 3, # 'а' - 18: 3, # 'б' - 9: 3, # 'в' - 20: 3, # 'г' - 11: 3, # 'д' - 3: 3, # 'е' - 23: 1, # 'ж' - 15: 1, # 'з' - 2: 3, # 'и' - 26: 0, # 'й' - 12: 3, # 'к' 
- 10: 3, # 'л' - 14: 3, # 'м' - 6: 3, # 'н' - 4: 3, # 'о' - 13: 3, # 'п' - 7: 3, # 'р' - 8: 3, # 'с' - 5: 3, # 'т' - 19: 3, # 'у' - 29: 1, # 'ф' - 25: 2, # 'х' - 22: 2, # 'ц' - 21: 2, # 'ч' - 27: 2, # 'ш' - 24: 1, # 'щ' - 17: 2, # 'ъ' - 52: 1, # 'ь' - 42: 1, # 'ю' - 16: 2, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 2: { # 'и' - 63: 1, # 'e' - 45: 1, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 1, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 1, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 1, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 1, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 3, # 'а' - 18: 3, # 'б' - 9: 3, # 'в' - 20: 3, # 'г' - 11: 3, # 'д' - 3: 3, # 'е' - 23: 3, # 'ж' - 15: 3, # 'з' - 2: 3, # 'и' - 26: 3, # 'й' - 12: 3, # 'к' - 10: 3, # 'л' - 14: 3, # 'м' - 6: 3, # 'н' - 4: 3, # 'о' - 13: 3, # 'п' - 7: 3, # 'р' - 8: 3, # 'с' - 5: 3, # 'т' - 19: 2, # 'у' - 29: 3, # 'ф' - 25: 3, # 'х' - 22: 3, # 'ц' - 21: 3, # 'ч' - 27: 3, # 'ш' - 24: 3, # 'щ' - 17: 2, # 'ъ' - 52: 0, # 'ь' - 42: 1, # 'ю' - 16: 3, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 26: { # 'й' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 1, # 'а' - 18: 2, # 'б' - 9: 2, # 'в' - 20: 1, # 'г' - 11: 2, # 'д' - 3: 2, # 'е' - 23: 0, # 'ж' - 15: 2, # 'з' - 2: 1, # 'и' - 26: 0, # 'й' - 12: 3, # 'к' - 10: 2, # 'л' - 14: 2, # 'м' - 6: 3, # 'н' - 4: 2, # 'о' - 13: 1, # 'п' - 7: 2, # 'р' - 8: 3, # 'с' - 5: 3, # 'т' - 19: 1, # 'у' - 29: 2, # 'ф' - 25: 1, # 'х' - 22: 2, # 'ц' - 21: 2, # 'ч' - 27: 1, # 'ш' - 24: 1, # 'щ' - 17: 1, # 'ъ' - 52: 0, # 'ь' - 42: 0, # 'ю' - 16: 1, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 12: { # 'к' - 63: 1, # 'e' - 45: 0, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 1, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 1, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 3, # 'а' - 18: 1, # 'б' - 9: 3, # 'в' - 20: 2, # 'г' - 11: 1, # 'д' - 3: 3, # 'е' - 23: 0, # 'ж' - 15: 2, # 'з' - 2: 3, # 'и' - 26: 0, # 'й' - 12: 1, # 'к' - 10: 3, # 'л' - 14: 2, # 'м' - 6: 3, # 'н' - 4: 3, # 'о' - 13: 1, # 'п' - 7: 3, # 'р' - 8: 3, # 'с' - 5: 3, # 'т' - 19: 3, # 'у' - 29: 1, # 'ф' - 25: 1, # 'х' - 22: 3, # 'ц' - 21: 2, # 'ч' - 27: 1, # 'ш' - 24: 0, # 'щ' - 17: 3, # 'ъ' - 52: 1, # 'ь' - 42: 2, # 'ю' - 16: 1, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 10: { # 'л' - 63: 1, # 'e' - 45: 1, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 1, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 
49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 3, # 'а' - 18: 3, # 'б' - 9: 3, # 'в' - 20: 3, # 'г' - 11: 2, # 'д' - 3: 3, # 'е' - 23: 3, # 'ж' - 15: 2, # 'з' - 2: 3, # 'и' - 26: 0, # 'й' - 12: 3, # 'к' - 10: 1, # 'л' - 14: 2, # 'м' - 6: 3, # 'н' - 4: 3, # 'о' - 13: 2, # 'п' - 7: 2, # 'р' - 8: 3, # 'с' - 5: 3, # 'т' - 19: 3, # 'у' - 29: 2, # 'ф' - 25: 2, # 'х' - 22: 2, # 'ц' - 21: 2, # 'ч' - 27: 2, # 'ш' - 24: 1, # 'щ' - 17: 3, # 'ъ' - 52: 2, # 'ь' - 42: 3, # 'ю' - 16: 3, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 14: { # 'м' - 63: 1, # 'e' - 45: 0, # '\xad' - 31: 1, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 3, # 'а' - 18: 3, # 'б' - 9: 3, # 'в' - 20: 1, # 'г' - 11: 1, # 'д' - 3: 3, # 'е' - 23: 1, # 'ж' - 15: 1, # 'з' - 2: 3, # 'и' - 26: 0, # 'й' - 12: 2, # 'к' - 10: 3, # 'л' - 14: 1, # 'м' - 6: 3, # 'н' - 4: 3, # 'о' - 13: 3, # 'п' - 7: 2, # 'р' - 8: 2, # 'с' - 5: 1, # 'т' - 19: 3, # 'у' - 29: 2, # 'ф' - 25: 1, # 'х' - 22: 2, # 'ц' - 21: 2, # 'ч' - 27: 2, # 'ш' - 24: 1, # 'щ' - 17: 3, # 'ъ' - 52: 1, # 'ь' - 42: 2, # 'ю' - 16: 3, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 6: { # 'н' - 63: 1, # 'e' - 45: 0, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 1, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 3, # 'а' - 18: 2, # 'б' - 9: 2, # 'в' - 20: 3, # 'г' - 11: 3, # 'д' - 3: 3, # 'е' - 23: 2, # 'ж' - 15: 2, # 'з' - 2: 3, # 'и' - 26: 0, # 'й' - 12: 3, # 'к' - 10: 2, # 'л' - 14: 1, # 'м' - 6: 3, # 'н' - 4: 3, # 'о' - 13: 1, # 'п' - 7: 2, # 'р' - 8: 3, # 'с' - 5: 3, # 'т' - 19: 3, # 'у' - 29: 3, # 'ф' - 25: 2, # 'х' - 22: 3, # 'ц' - 21: 3, # 'ч' - 27: 2, # 'ш' - 24: 1, # 'щ' - 17: 3, # 'ъ' - 52: 2, # 'ь' - 42: 2, # 'ю' - 16: 3, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 4: { # 'о' - 63: 0, # 'e' - 45: 1, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 2, # 'а' - 18: 3, # 'б' - 9: 3, # 'в' - 20: 3, # 'г' - 11: 3, # 'д' - 3: 3, # 'е' - 23: 3, # 'ж' - 15: 3, # 'з' - 2: 3, # 'и' - 26: 3, # 'й' - 12: 3, # 'к' - 10: 3, # 'л' - 14: 3, # 'м' - 6: 3, # 'н' - 4: 2, # 'о' - 13: 3, # 'п' - 7: 3, # 'р' - 8: 3, # 'с' - 5: 3, # 'т' - 19: 2, # 'у' - 29: 3, # 'ф' - 25: 3, # 'х' - 22: 3, # 'ц' - 21: 3, # 'ч' - 27: 3, # 'ш' - 24: 3, # 'щ' - 17: 1, # 'ъ' - 52: 0, # 'ь' - 42: 1, # 'ю' - 16: 3, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 13: { # 'п' - 63: 1, # 'e' - 45: 0, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 0, # 
'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 3, # 'а' - 18: 1, # 'б' - 9: 2, # 'в' - 20: 1, # 'г' - 11: 1, # 'д' - 3: 3, # 'е' - 23: 0, # 'ж' - 15: 1, # 'з' - 2: 3, # 'и' - 26: 1, # 'й' - 12: 2, # 'к' - 10: 3, # 'л' - 14: 1, # 'м' - 6: 2, # 'н' - 4: 3, # 'о' - 13: 1, # 'п' - 7: 3, # 'р' - 8: 2, # 'с' - 5: 2, # 'т' - 19: 3, # 'у' - 29: 1, # 'ф' - 25: 1, # 'х' - 22: 2, # 'ц' - 21: 2, # 'ч' - 27: 1, # 'ш' - 24: 1, # 'щ' - 17: 3, # 'ъ' - 52: 1, # 'ь' - 42: 2, # 'ю' - 16: 2, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 7: { # 'р' - 63: 1, # 'e' - 45: 0, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 3, # 'а' - 18: 3, # 'б' - 9: 3, # 'в' - 20: 3, # 'г' - 11: 3, # 'д' - 3: 3, # 'е' - 23: 3, # 'ж' - 15: 2, # 'з' - 2: 3, # 'и' - 26: 0, # 'й' - 12: 3, # 'к' - 10: 3, # 'л' - 14: 3, # 'м' - 6: 3, # 'н' - 4: 3, # 'о' - 13: 2, # 'п' - 7: 1, # 'р' - 8: 3, # 'с' - 5: 3, # 'т' - 19: 3, # 'у' - 29: 2, # 'ф' - 25: 3, # 'х' - 22: 3, # 'ц' - 21: 2, # 'ч' - 27: 3, # 'ш' - 24: 1, # 'щ' - 17: 3, # 'ъ' - 52: 1, # 'ь' - 42: 2, # 'ю' - 16: 3, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 8: { # 'с' - 63: 1, # 'e' - 45: 0, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 3, # 'а' - 18: 2, # 'б' - 9: 3, # 'в' - 20: 2, # 'г' - 11: 2, # 'д' - 3: 3, # 'е' - 23: 0, # 'ж' - 15: 1, # 'з' - 2: 3, # 'и' - 26: 0, # 'й' - 12: 3, # 'к' - 10: 3, # 'л' - 14: 3, # 'м' - 6: 3, # 'н' - 4: 3, # 'о' - 13: 3, # 'п' - 7: 3, # 'р' - 8: 1, # 'с' - 5: 3, # 'т' - 19: 3, # 'у' - 29: 2, # 'ф' - 25: 2, # 'х' - 22: 2, # 'ц' - 21: 2, # 'ч' - 27: 2, # 'ш' - 24: 0, # 'щ' - 17: 3, # 'ъ' - 52: 2, # 'ь' - 42: 2, # 'ю' - 16: 3, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 5: { # 'т' - 63: 1, # 'e' - 45: 0, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 3, # 'а' - 18: 3, # 'б' - 9: 3, # 'в' - 20: 2, # 'г' - 11: 2, # 'д' - 3: 3, # 'е' - 23: 1, # 'ж' - 15: 1, # 'з' - 2: 3, # 'и' - 26: 0, # 'й' - 12: 3, # 'к' - 10: 3, # 'л' - 14: 2, # 'м' - 6: 3, # 'н' - 4: 3, # 'о' - 13: 2, # 'п' - 7: 3, # 'р' - 8: 3, # 'с' - 5: 3, # 'т' - 19: 3, # 
'у' - 29: 1, # 'ф' - 25: 2, # 'х' - 22: 2, # 'ц' - 21: 2, # 'ч' - 27: 1, # 'ш' - 24: 1, # 'щ' - 17: 3, # 'ъ' - 52: 2, # 'ь' - 42: 2, # 'ю' - 16: 3, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 19: { # 'у' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 3, # 'а' - 18: 3, # 'б' - 9: 3, # 'в' - 20: 3, # 'г' - 11: 3, # 'д' - 3: 2, # 'е' - 23: 3, # 'ж' - 15: 3, # 'з' - 2: 2, # 'и' - 26: 2, # 'й' - 12: 3, # 'к' - 10: 3, # 'л' - 14: 3, # 'м' - 6: 3, # 'н' - 4: 2, # 'о' - 13: 3, # 'п' - 7: 3, # 'р' - 8: 3, # 'с' - 5: 3, # 'т' - 19: 1, # 'у' - 29: 2, # 'ф' - 25: 2, # 'х' - 22: 2, # 'ц' - 21: 3, # 'ч' - 27: 3, # 'ш' - 24: 2, # 'щ' - 17: 1, # 'ъ' - 52: 0, # 'ь' - 42: 1, # 'ю' - 16: 1, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 29: { # 'ф' - 63: 1, # 'e' - 45: 0, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 3, # 'а' - 18: 1, # 'б' - 9: 1, # 'в' - 20: 1, # 'г' - 11: 0, # 'д' - 3: 3, # 'е' - 23: 0, # 'ж' - 15: 0, # 'з' - 2: 3, # 'и' - 26: 0, # 'й' - 12: 2, # 'к' - 10: 2, # 'л' - 14: 1, # 'м' - 6: 1, # 'н' - 4: 3, # 'о' - 13: 0, # 'п' - 7: 2, # 'р' - 8: 2, # 'с' - 5: 2, # 'т' - 19: 2, # 'у' - 29: 0, # 'ф' - 25: 1, # 'х' - 22: 0, # 'ц' - 21: 1, # 'ч' - 27: 1, # 'ш' - 24: 0, # 'щ' - 17: 2, # 'ъ' - 52: 2, # 'ь' - 42: 1, # 'ю' - 16: 1, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 25: { # 'х' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 3, # 'а' - 18: 1, # 'б' - 9: 3, # 'в' - 20: 0, # 'г' - 11: 1, # 'д' - 3: 2, # 'е' - 23: 0, # 'ж' - 15: 1, # 'з' - 2: 3, # 'и' - 26: 0, # 'й' - 12: 1, # 'к' - 10: 2, # 'л' - 14: 2, # 'м' - 6: 3, # 'н' - 4: 3, # 'о' - 13: 1, # 'п' - 7: 3, # 'р' - 8: 1, # 'с' - 5: 2, # 'т' - 19: 3, # 'у' - 29: 0, # 'ф' - 25: 1, # 'х' - 22: 0, # 'ц' - 21: 1, # 'ч' - 27: 0, # 'ш' - 24: 0, # 'щ' - 17: 2, # 'ъ' - 52: 0, # 'ь' - 42: 1, # 'ю' - 16: 1, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 22: { # 'ц' - 63: 1, # 'e' - 45: 0, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 3, # 
'а' - 18: 1, # 'б' - 9: 2, # 'в' - 20: 1, # 'г' - 11: 1, # 'д' - 3: 3, # 'е' - 23: 0, # 'ж' - 15: 1, # 'з' - 2: 3, # 'и' - 26: 0, # 'й' - 12: 2, # 'к' - 10: 1, # 'л' - 14: 1, # 'м' - 6: 1, # 'н' - 4: 2, # 'о' - 13: 1, # 'п' - 7: 1, # 'р' - 8: 1, # 'с' - 5: 1, # 'т' - 19: 2, # 'у' - 29: 1, # 'ф' - 25: 1, # 'х' - 22: 1, # 'ц' - 21: 1, # 'ч' - 27: 1, # 'ш' - 24: 1, # 'щ' - 17: 2, # 'ъ' - 52: 1, # 'ь' - 42: 0, # 'ю' - 16: 2, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 21: { # 'ч' - 63: 1, # 'e' - 45: 0, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 3, # 'а' - 18: 1, # 'б' - 9: 3, # 'в' - 20: 1, # 'г' - 11: 0, # 'д' - 3: 3, # 'е' - 23: 1, # 'ж' - 15: 0, # 'з' - 2: 3, # 'и' - 26: 0, # 'й' - 12: 3, # 'к' - 10: 2, # 'л' - 14: 2, # 'м' - 6: 3, # 'н' - 4: 3, # 'о' - 13: 0, # 'п' - 7: 2, # 'р' - 8: 0, # 'с' - 5: 2, # 'т' - 19: 3, # 'у' - 29: 0, # 'ф' - 25: 0, # 'х' - 22: 0, # 'ц' - 21: 0, # 'ч' - 27: 1, # 'ш' - 24: 0, # 'щ' - 17: 2, # 'ъ' - 52: 0, # 'ь' - 42: 1, # 'ю' - 16: 0, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 27: { # 'ш' - 63: 1, # 'e' - 45: 0, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 3, # 'а' - 18: 0, # 'б' - 9: 2, # 'в' - 20: 0, # 'г' - 11: 1, # 'д' - 3: 3, # 'е' - 23: 0, # 'ж' - 15: 0, # 'з' - 2: 3, # 'и' - 26: 0, # 'й' - 12: 3, # 'к' - 10: 2, # 'л' - 14: 1, # 'м' - 6: 3, # 'н' - 4: 2, # 'о' - 13: 2, # 'п' - 7: 1, # 'р' - 8: 0, # 'с' - 5: 1, # 'т' - 19: 2, # 'у' - 29: 1, # 'ф' - 25: 0, # 'х' - 22: 0, # 'ц' - 21: 1, # 'ч' - 27: 0, # 'ш' - 24: 0, # 'щ' - 17: 2, # 'ъ' - 52: 1, # 'ь' - 42: 1, # 'ю' - 16: 0, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 24: { # 'щ' - 63: 1, # 'e' - 45: 0, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 3, # 'а' - 18: 0, # 'б' - 9: 1, # 'в' - 20: 0, # 'г' - 11: 0, # 'д' - 3: 3, # 'е' - 23: 0, # 'ж' - 15: 0, # 'з' - 2: 3, # 'и' - 26: 0, # 'й' - 12: 1, # 'к' - 10: 0, # 'л' - 14: 0, # 'м' - 6: 2, # 'н' - 4: 3, # 'о' - 13: 0, # 'п' - 7: 1, # 'р' - 8: 0, # 'с' - 5: 2, # 'т' - 19: 3, # 'у' - 29: 0, # 'ф' - 25: 0, # 'х' - 22: 1, # 'ц' - 21: 0, # 'ч' - 27: 0, # 'ш' - 24: 0, # 'щ' - 17: 1, # 'ъ' - 52: 0, # 'ь' - 42: 0, # 'ю' - 16: 2, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 17: { # 'ъ' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 
46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 1, # 'а' - 18: 3, # 'б' - 9: 3, # 'в' - 20: 3, # 'г' - 11: 3, # 'д' - 3: 2, # 'е' - 23: 3, # 'ж' - 15: 3, # 'з' - 2: 1, # 'и' - 26: 2, # 'й' - 12: 3, # 'к' - 10: 3, # 'л' - 14: 3, # 'м' - 6: 3, # 'н' - 4: 3, # 'о' - 13: 3, # 'п' - 7: 3, # 'р' - 8: 3, # 'с' - 5: 3, # 'т' - 19: 1, # 'у' - 29: 1, # 'ф' - 25: 2, # 'х' - 22: 2, # 'ц' - 21: 3, # 'ч' - 27: 2, # 'ш' - 24: 3, # 'щ' - 17: 0, # 'ъ' - 52: 0, # 'ь' - 42: 2, # 'ю' - 16: 0, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 52: { # 'ь' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 0, # 'а' - 18: 0, # 'б' - 9: 0, # 'в' - 20: 0, # 'г' - 11: 0, # 'д' - 3: 1, # 'е' - 23: 0, # 'ж' - 15: 0, # 'з' - 2: 0, # 'и' - 26: 0, # 'й' - 12: 1, # 'к' - 10: 0, # 'л' - 14: 0, # 'м' - 6: 1, # 'н' - 4: 3, # 'о' - 13: 0, # 'п' - 7: 0, # 'р' - 8: 0, # 'с' - 5: 1, # 'т' - 19: 0, # 'у' - 29: 0, # 'ф' - 25: 0, # 'х' - 22: 1, # 'ц' - 21: 0, # 'ч' - 27: 0, # 'ш' - 24: 0, # 'щ' - 17: 0, # 'ъ' - 52: 0, # 'ь' - 42: 1, # 'ю' - 16: 0, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 42: { # 'ю' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 1, # 'а' - 18: 2, # 'б' - 9: 1, # 'в' - 20: 2, # 'г' - 11: 2, # 'д' - 3: 1, # 'е' - 23: 2, # 'ж' - 15: 2, # 'з' - 2: 1, # 'и' - 26: 1, # 'й' - 12: 2, # 'к' - 10: 2, # 'л' - 14: 2, # 'м' - 6: 2, # 'н' - 4: 1, # 'о' - 13: 1, # 'п' - 7: 2, # 'р' - 8: 2, # 'с' - 5: 2, # 'т' - 19: 1, # 'у' - 29: 1, # 'ф' - 25: 1, # 'х' - 22: 2, # 'ц' - 21: 3, # 'ч' - 27: 1, # 'ш' - 24: 1, # 'щ' - 17: 1, # 'ъ' - 52: 0, # 'ь' - 42: 0, # 'ю' - 16: 1, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 16: { # 'я' - 63: 0, # 'e' - 45: 1, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 0, # 'а' - 18: 3, # 'б' - 9: 3, # 'в' - 20: 2, # 'г' - 11: 3, # 'д' - 3: 2, # 'е' - 23: 1, # 'ж' - 15: 2, # 'з' - 2: 1, # 'и' - 26: 2, # 'й' - 12: 3, # 'к' - 10: 3, # 'л' - 14: 3, # 'м' - 6: 3, # 'н' - 4: 1, # 'о' - 13: 2, # 'п' - 7: 2, # 'р' - 8: 3, # 'с' - 5: 3, # 'т' - 19: 1, # 'у' - 29: 1, # 'ф' - 25: 3, # 'х' - 22: 2, # 'ц' - 21: 1, # 'ч' - 27: 1, # 'ш' - 24: 2, # 'щ' - 17: 0, # 'ъ' - 52: 0, # 'ь' - 
42: 0, # 'ю' - 16: 1, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 58: { # 'є' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 0, # 'а' - 18: 0, # 'б' - 9: 0, # 'в' - 20: 0, # 'г' - 11: 0, # 'д' - 3: 0, # 'е' - 23: 0, # 'ж' - 15: 0, # 'з' - 2: 0, # 'и' - 26: 0, # 'й' - 12: 0, # 'к' - 10: 0, # 'л' - 14: 0, # 'м' - 6: 0, # 'н' - 4: 0, # 'о' - 13: 0, # 'п' - 7: 0, # 'р' - 8: 0, # 'с' - 5: 0, # 'т' - 19: 0, # 'у' - 29: 0, # 'ф' - 25: 0, # 'х' - 22: 0, # 'ц' - 21: 0, # 'ч' - 27: 0, # 'ш' - 24: 0, # 'щ' - 17: 0, # 'ъ' - 52: 0, # 'ь' - 42: 0, # 'ю' - 16: 0, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 62: { # '№' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 0, # 'а' - 18: 0, # 'б' - 9: 0, # 'в' - 20: 0, # 'г' - 11: 0, # 'д' - 3: 0, # 'е' - 23: 0, # 'ж' - 15: 0, # 'з' - 2: 0, # 'и' - 26: 0, # 'й' - 12: 0, # 'к' - 10: 0, # 'л' - 14: 0, # 'м' - 6: 0, # 'н' - 4: 0, # 'о' - 13: 0, # 'п' - 7: 0, # 'р' - 8: 0, # 'с' - 5: 0, # 'т' - 19: 0, # 'у' - 29: 0, # 'ф' - 25: 0, # 'х' - 22: 0, # 'ц' - 21: 0, # 'ч' - 27: 0, # 'ш' - 24: 0, # 'щ' - 17: 0, # 'ъ' - 52: 0, # 'ь' - 42: 0, # 'ю' - 16: 0, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, -} - -# 255: Undefined characters that did not exist in training text -# 254: Carriage/Return -# 253: symbol (punctuation) that does not belong to word -# 252: 0 - 9 -# 251: Control characters - -# Character Mapping Table(s): -ISO_8859_5_BULGARIAN_CHAR_TO_ORDER = { - 0: 255, # '\x00' - 1: 255, # '\x01' - 2: 255, # '\x02' - 3: 255, # '\x03' - 4: 255, # '\x04' - 5: 255, # '\x05' - 6: 255, # '\x06' - 7: 255, # '\x07' - 8: 255, # '\x08' - 9: 255, # '\t' - 10: 254, # '\n' - 11: 255, # '\x0b' - 12: 255, # '\x0c' - 13: 254, # '\r' - 14: 255, # '\x0e' - 15: 255, # '\x0f' - 16: 255, # '\x10' - 17: 255, # '\x11' - 18: 255, # '\x12' - 19: 255, # '\x13' - 20: 255, # '\x14' - 21: 255, # '\x15' - 22: 255, # '\x16' - 23: 255, # '\x17' - 24: 255, # '\x18' - 25: 255, # '\x19' - 26: 255, # '\x1a' - 27: 255, # '\x1b' - 28: 255, # '\x1c' - 29: 255, # '\x1d' - 30: 255, # '\x1e' - 31: 255, # '\x1f' - 32: 253, # ' ' - 33: 253, # '!' - 34: 253, # '"' - 35: 253, # '#' - 36: 253, # '$' - 37: 253, # '%' - 38: 253, # '&' - 39: 253, # "'" - 40: 253, # '(' - 41: 253, # ')' - 42: 253, # '*' - 43: 253, # '+' - 44: 253, # ',' - 45: 253, # '-' - 46: 253, # '.' - 47: 253, # '/' - 48: 252, # '0' - 49: 252, # '1' - 50: 252, # '2' - 51: 252, # '3' - 52: 252, # '4' - 53: 252, # '5' - 54: 252, # '6' - 55: 252, # '7' - 56: 252, # '8' - 57: 252, # '9' - 58: 253, # ':' - 59: 253, # ';' - 60: 253, # '<' - 61: 253, # '=' - 62: 253, # '>' - 63: 253, # '?' 
- 64: 253, # '@' - 65: 77, # 'A' - 66: 90, # 'B' - 67: 99, # 'C' - 68: 100, # 'D' - 69: 72, # 'E' - 70: 109, # 'F' - 71: 107, # 'G' - 72: 101, # 'H' - 73: 79, # 'I' - 74: 185, # 'J' - 75: 81, # 'K' - 76: 102, # 'L' - 77: 76, # 'M' - 78: 94, # 'N' - 79: 82, # 'O' - 80: 110, # 'P' - 81: 186, # 'Q' - 82: 108, # 'R' - 83: 91, # 'S' - 84: 74, # 'T' - 85: 119, # 'U' - 86: 84, # 'V' - 87: 96, # 'W' - 88: 111, # 'X' - 89: 187, # 'Y' - 90: 115, # 'Z' - 91: 253, # '[' - 92: 253, # '\\' - 93: 253, # ']' - 94: 253, # '^' - 95: 253, # '_' - 96: 253, # '`' - 97: 65, # 'a' - 98: 69, # 'b' - 99: 70, # 'c' - 100: 66, # 'd' - 101: 63, # 'e' - 102: 68, # 'f' - 103: 112, # 'g' - 104: 103, # 'h' - 105: 92, # 'i' - 106: 194, # 'j' - 107: 104, # 'k' - 108: 95, # 'l' - 109: 86, # 'm' - 110: 87, # 'n' - 111: 71, # 'o' - 112: 116, # 'p' - 113: 195, # 'q' - 114: 85, # 'r' - 115: 93, # 's' - 116: 97, # 't' - 117: 113, # 'u' - 118: 196, # 'v' - 119: 197, # 'w' - 120: 198, # 'x' - 121: 199, # 'y' - 122: 200, # 'z' - 123: 253, # '{' - 124: 253, # '|' - 125: 253, # '}' - 126: 253, # '~' - 127: 253, # '\x7f' - 128: 194, # '\x80' - 129: 195, # '\x81' - 130: 196, # '\x82' - 131: 197, # '\x83' - 132: 198, # '\x84' - 133: 199, # '\x85' - 134: 200, # '\x86' - 135: 201, # '\x87' - 136: 202, # '\x88' - 137: 203, # '\x89' - 138: 204, # '\x8a' - 139: 205, # '\x8b' - 140: 206, # '\x8c' - 141: 207, # '\x8d' - 142: 208, # '\x8e' - 143: 209, # '\x8f' - 144: 210, # '\x90' - 145: 211, # '\x91' - 146: 212, # '\x92' - 147: 213, # '\x93' - 148: 214, # '\x94' - 149: 215, # '\x95' - 150: 216, # '\x96' - 151: 217, # '\x97' - 152: 218, # '\x98' - 153: 219, # '\x99' - 154: 220, # '\x9a' - 155: 221, # '\x9b' - 156: 222, # '\x9c' - 157: 223, # '\x9d' - 158: 224, # '\x9e' - 159: 225, # '\x9f' - 160: 81, # '\xa0' - 161: 226, # 'Ё' - 162: 227, # 'Ђ' - 163: 228, # 'Ѓ' - 164: 229, # 'Є' - 165: 230, # 'Ѕ' - 166: 105, # 'І' - 167: 231, # 'Ї' - 168: 232, # 'Ј' - 169: 233, # 'Љ' - 170: 234, # 'Њ' - 171: 235, # 'Ћ' - 172: 236, # 'Ќ' - 173: 45, # '\xad' - 174: 237, # 'Ў' - 175: 238, # 'Џ' - 176: 31, # 'А' - 177: 32, # 'Б' - 178: 35, # 'В' - 179: 43, # 'Г' - 180: 37, # 'Д' - 181: 44, # 'Е' - 182: 55, # 'Ж' - 183: 47, # 'З' - 184: 40, # 'И' - 185: 59, # 'Й' - 186: 33, # 'К' - 187: 46, # 'Л' - 188: 38, # 'М' - 189: 36, # 'Н' - 190: 41, # 'О' - 191: 30, # 'П' - 192: 39, # 'Р' - 193: 28, # 'С' - 194: 34, # 'Т' - 195: 51, # 'У' - 196: 48, # 'Ф' - 197: 49, # 'Х' - 198: 53, # 'Ц' - 199: 50, # 'Ч' - 200: 54, # 'Ш' - 201: 57, # 'Щ' - 202: 61, # 'Ъ' - 203: 239, # 'Ы' - 204: 67, # 'Ь' - 205: 240, # 'Э' - 206: 60, # 'Ю' - 207: 56, # 'Я' - 208: 1, # 'а' - 209: 18, # 'б' - 210: 9, # 'в' - 211: 20, # 'г' - 212: 11, # 'д' - 213: 3, # 'е' - 214: 23, # 'ж' - 215: 15, # 'з' - 216: 2, # 'и' - 217: 26, # 'й' - 218: 12, # 'к' - 219: 10, # 'л' - 220: 14, # 'м' - 221: 6, # 'н' - 222: 4, # 'о' - 223: 13, # 'п' - 224: 7, # 'р' - 225: 8, # 'с' - 226: 5, # 'т' - 227: 19, # 'у' - 228: 29, # 'ф' - 229: 25, # 'х' - 230: 22, # 'ц' - 231: 21, # 'ч' - 232: 27, # 'ш' - 233: 24, # 'щ' - 234: 17, # 'ъ' - 235: 75, # 'ы' - 236: 52, # 'ь' - 237: 241, # 'э' - 238: 42, # 'ю' - 239: 16, # 'я' - 240: 62, # '№' - 241: 242, # 'ё' - 242: 243, # 'ђ' - 243: 244, # 'ѓ' - 244: 58, # 'є' - 245: 245, # 'ѕ' - 246: 98, # 'і' - 247: 246, # 'ї' - 248: 247, # 'ј' - 249: 248, # 'љ' - 250: 249, # 'њ' - 251: 250, # 'ћ' - 252: 251, # 'ќ' - 253: 91, # '§' - 254: 252, # 'ў' - 255: 253, # 'џ' -} - -ISO_8859_5_BULGARIAN_MODEL = SingleByteCharSetModel( - charset_name="ISO-8859-5", - language="Bulgarian", - 
char_to_order_map=ISO_8859_5_BULGARIAN_CHAR_TO_ORDER, - language_model=BULGARIAN_LANG_MODEL, - typical_positive_ratio=0.969392, - keep_ascii_letters=False, - alphabet="АБВГДЕЖЗИЙКЛМНОПРСТУФХЦЧШЩЪЬЮЯабвгдежзийклмнопрстуфхцчшщъьюя", -) - -WINDOWS_1251_BULGARIAN_CHAR_TO_ORDER = { - 0: 255, # '\x00' - 1: 255, # '\x01' - 2: 255, # '\x02' - 3: 255, # '\x03' - 4: 255, # '\x04' - 5: 255, # '\x05' - 6: 255, # '\x06' - 7: 255, # '\x07' - 8: 255, # '\x08' - 9: 255, # '\t' - 10: 254, # '\n' - 11: 255, # '\x0b' - 12: 255, # '\x0c' - 13: 254, # '\r' - 14: 255, # '\x0e' - 15: 255, # '\x0f' - 16: 255, # '\x10' - 17: 255, # '\x11' - 18: 255, # '\x12' - 19: 255, # '\x13' - 20: 255, # '\x14' - 21: 255, # '\x15' - 22: 255, # '\x16' - 23: 255, # '\x17' - 24: 255, # '\x18' - 25: 255, # '\x19' - 26: 255, # '\x1a' - 27: 255, # '\x1b' - 28: 255, # '\x1c' - 29: 255, # '\x1d' - 30: 255, # '\x1e' - 31: 255, # '\x1f' - 32: 253, # ' ' - 33: 253, # '!' - 34: 253, # '"' - 35: 253, # '#' - 36: 253, # '$' - 37: 253, # '%' - 38: 253, # '&' - 39: 253, # "'" - 40: 253, # '(' - 41: 253, # ')' - 42: 253, # '*' - 43: 253, # '+' - 44: 253, # ',' - 45: 253, # '-' - 46: 253, # '.' - 47: 253, # '/' - 48: 252, # '0' - 49: 252, # '1' - 50: 252, # '2' - 51: 252, # '3' - 52: 252, # '4' - 53: 252, # '5' - 54: 252, # '6' - 55: 252, # '7' - 56: 252, # '8' - 57: 252, # '9' - 58: 253, # ':' - 59: 253, # ';' - 60: 253, # '<' - 61: 253, # '=' - 62: 253, # '>' - 63: 253, # '?' - 64: 253, # '@' - 65: 77, # 'A' - 66: 90, # 'B' - 67: 99, # 'C' - 68: 100, # 'D' - 69: 72, # 'E' - 70: 109, # 'F' - 71: 107, # 'G' - 72: 101, # 'H' - 73: 79, # 'I' - 74: 185, # 'J' - 75: 81, # 'K' - 76: 102, # 'L' - 77: 76, # 'M' - 78: 94, # 'N' - 79: 82, # 'O' - 80: 110, # 'P' - 81: 186, # 'Q' - 82: 108, # 'R' - 83: 91, # 'S' - 84: 74, # 'T' - 85: 119, # 'U' - 86: 84, # 'V' - 87: 96, # 'W' - 88: 111, # 'X' - 89: 187, # 'Y' - 90: 115, # 'Z' - 91: 253, # '[' - 92: 253, # '\\' - 93: 253, # ']' - 94: 253, # '^' - 95: 253, # '_' - 96: 253, # '`' - 97: 65, # 'a' - 98: 69, # 'b' - 99: 70, # 'c' - 100: 66, # 'd' - 101: 63, # 'e' - 102: 68, # 'f' - 103: 112, # 'g' - 104: 103, # 'h' - 105: 92, # 'i' - 106: 194, # 'j' - 107: 104, # 'k' - 108: 95, # 'l' - 109: 86, # 'm' - 110: 87, # 'n' - 111: 71, # 'o' - 112: 116, # 'p' - 113: 195, # 'q' - 114: 85, # 'r' - 115: 93, # 's' - 116: 97, # 't' - 117: 113, # 'u' - 118: 196, # 'v' - 119: 197, # 'w' - 120: 198, # 'x' - 121: 199, # 'y' - 122: 200, # 'z' - 123: 253, # '{' - 124: 253, # '|' - 125: 253, # '}' - 126: 253, # '~' - 127: 253, # '\x7f' - 128: 206, # 'Ђ' - 129: 207, # 'Ѓ' - 130: 208, # '‚' - 131: 209, # 'ѓ' - 132: 210, # '„' - 133: 211, # '…' - 134: 212, # '†' - 135: 213, # '‡' - 136: 120, # '€' - 137: 214, # '‰' - 138: 215, # 'Љ' - 139: 216, # '‹' - 140: 217, # 'Њ' - 141: 218, # 'Ќ' - 142: 219, # 'Ћ' - 143: 220, # 'Џ' - 144: 221, # 'ђ' - 145: 78, # '‘' - 146: 64, # '’' - 147: 83, # '“' - 148: 121, # '”' - 149: 98, # '•' - 150: 117, # '–' - 151: 105, # '—' - 152: 222, # None - 153: 223, # '™' - 154: 224, # 'љ' - 155: 225, # '›' - 156: 226, # 'њ' - 157: 227, # 'ќ' - 158: 228, # 'ћ' - 159: 229, # 'џ' - 160: 88, # '\xa0' - 161: 230, # 'Ў' - 162: 231, # 'ў' - 163: 232, # 'Ј' - 164: 233, # '¤' - 165: 122, # 'Ґ' - 166: 89, # '¦' - 167: 106, # '§' - 168: 234, # 'Ё' - 169: 235, # '©' - 170: 236, # 'Є' - 171: 237, # '«' - 172: 238, # '¬' - 173: 45, # '\xad' - 174: 239, # '®' - 175: 240, # 'Ї' - 176: 73, # '°' - 177: 80, # '±' - 178: 118, # 'І' - 179: 114, # 'і' - 180: 241, # 'ґ' - 181: 242, # 'µ' - 182: 243, # '¶' - 183: 244, # '·' - 184: 
245, # 'ё' - 185: 62, # '№' - 186: 58, # 'є' - 187: 246, # '»' - 188: 247, # 'ј' - 189: 248, # 'Ѕ' - 190: 249, # 'ѕ' - 191: 250, # 'ї' - 192: 31, # 'А' - 193: 32, # 'Б' - 194: 35, # 'В' - 195: 43, # 'Г' - 196: 37, # 'Д' - 197: 44, # 'Е' - 198: 55, # 'Ж' - 199: 47, # 'З' - 200: 40, # 'И' - 201: 59, # 'Й' - 202: 33, # 'К' - 203: 46, # 'Л' - 204: 38, # 'М' - 205: 36, # 'Н' - 206: 41, # 'О' - 207: 30, # 'П' - 208: 39, # 'Р' - 209: 28, # 'С' - 210: 34, # 'Т' - 211: 51, # 'У' - 212: 48, # 'Ф' - 213: 49, # 'Х' - 214: 53, # 'Ц' - 215: 50, # 'Ч' - 216: 54, # 'Ш' - 217: 57, # 'Щ' - 218: 61, # 'Ъ' - 219: 251, # 'Ы' - 220: 67, # 'Ь' - 221: 252, # 'Э' - 222: 60, # 'Ю' - 223: 56, # 'Я' - 224: 1, # 'а' - 225: 18, # 'б' - 226: 9, # 'в' - 227: 20, # 'г' - 228: 11, # 'д' - 229: 3, # 'е' - 230: 23, # 'ж' - 231: 15, # 'з' - 232: 2, # 'и' - 233: 26, # 'й' - 234: 12, # 'к' - 235: 10, # 'л' - 236: 14, # 'м' - 237: 6, # 'н' - 238: 4, # 'о' - 239: 13, # 'п' - 240: 7, # 'р' - 241: 8, # 'с' - 242: 5, # 'т' - 243: 19, # 'у' - 244: 29, # 'ф' - 245: 25, # 'х' - 246: 22, # 'ц' - 247: 21, # 'ч' - 248: 27, # 'ш' - 249: 24, # 'щ' - 250: 17, # 'ъ' - 251: 75, # 'ы' - 252: 52, # 'ь' - 253: 253, # 'э' - 254: 42, # 'ю' - 255: 16, # 'я' -} - -WINDOWS_1251_BULGARIAN_MODEL = SingleByteCharSetModel( - charset_name="windows-1251", - language="Bulgarian", - char_to_order_map=WINDOWS_1251_BULGARIAN_CHAR_TO_ORDER, - language_model=BULGARIAN_LANG_MODEL, - typical_positive_ratio=0.969392, - keep_ascii_letters=False, - alphabet="АБВГДЕЖЗИЙКЛМНОПРСТУФХЦЧШЩЪЬЮЯабвгдежзийклмнопрстуфхцчшщъьюя", -) diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_distutils/core.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_distutils/core.py deleted file mode 100644 index 05d2971994d36b29c6532e0c99115c1906b8e275..0000000000000000000000000000000000000000 --- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_distutils/core.py +++ /dev/null @@ -1,291 +0,0 @@ -"""distutils.core - -The only module that needs to be imported to use the Distutils; provides -the 'setup' function (which is to be called from the setup script). Also -indirectly provides the Distribution and Command classes, although they are -really defined in distutils.dist and distutils.cmd. -""" - -import os -import sys -import tokenize - -from .debug import DEBUG -from .errors import ( - DistutilsSetupError, - DistutilsError, - CCompilerError, - DistutilsArgError, -) - -# Mainly import these so setup scripts can "from distutils.core import" them. -from .dist import Distribution -from .cmd import Command -from .config import PyPIRCCommand -from .extension import Extension - - -__all__ = ['Distribution', 'Command', 'PyPIRCCommand', 'Extension', 'setup'] - -# This is a barebones help message generated displayed when the user -# runs the setup script with no arguments at all. More useful help -# is generated with various --help options: global help, list commands, -# and per-command help. -USAGE = """\ -usage: %(script)s [global_opts] cmd1 [cmd1_opts] [cmd2 [cmd2_opts] ...] - or: %(script)s --help [cmd1 cmd2 ...] - or: %(script)s --help-commands - or: %(script)s cmd --help -""" - - -def gen_usage(script_name): - script = os.path.basename(script_name) - return USAGE % locals() - - -# Some mild magic to control the behaviour of 'setup()' from 'run_setup()'. 
-_setup_stop_after = None -_setup_distribution = None - -# Legal keyword arguments for the setup() function -setup_keywords = ( - 'distclass', - 'script_name', - 'script_args', - 'options', - 'name', - 'version', - 'author', - 'author_email', - 'maintainer', - 'maintainer_email', - 'url', - 'license', - 'description', - 'long_description', - 'keywords', - 'platforms', - 'classifiers', - 'download_url', - 'requires', - 'provides', - 'obsoletes', -) - -# Legal keyword arguments for the Extension constructor -extension_keywords = ( - 'name', - 'sources', - 'include_dirs', - 'define_macros', - 'undef_macros', - 'library_dirs', - 'libraries', - 'runtime_library_dirs', - 'extra_objects', - 'extra_compile_args', - 'extra_link_args', - 'swig_opts', - 'export_symbols', - 'depends', - 'language', -) - - -def setup(**attrs): # noqa: C901 - """The gateway to the Distutils: do everything your setup script needs - to do, in a highly flexible and user-driven way. Briefly: create a - Distribution instance; find and parse config files; parse the command - line; run each Distutils command found there, customized by the options - supplied to 'setup()' (as keyword arguments), in config files, and on - the command line. - - The Distribution instance might be an instance of a class supplied via - the 'distclass' keyword argument to 'setup'; if no such class is - supplied, then the Distribution class (in dist.py) is instantiated. - All other arguments to 'setup' (except for 'cmdclass') are used to set - attributes of the Distribution instance. - - The 'cmdclass' argument, if supplied, is a dictionary mapping command - names to command classes. Each command encountered on the command line - will be turned into a command class, which is in turn instantiated; any - class found in 'cmdclass' is used in place of the default, which is - (for command 'foo_bar') class 'foo_bar' in module - 'distutils.command.foo_bar'. The command class must provide a - 'user_options' attribute which is a list of option specifiers for - 'distutils.fancy_getopt'. Any command-line options between the current - and the next command are used to set attributes of the current command - object. - - When the entire command-line has been successfully parsed, calls the - 'run()' method on each command object in turn. This method will be - driven entirely by the Distribution object (which each command object - has a reference to, thanks to its constructor), and the - command-specific options that became attributes of each command - object. - """ - - global _setup_stop_after, _setup_distribution - - # Determine the distribution class -- either caller-supplied or - # our Distribution (see below). - klass = attrs.get('distclass') - if klass: - attrs.pop('distclass') - else: - klass = Distribution - - if 'script_name' not in attrs: - attrs['script_name'] = os.path.basename(sys.argv[0]) - if 'script_args' not in attrs: - attrs['script_args'] = sys.argv[1:] - - # Create the Distribution instance, using the remaining arguments - # (ie. everything except distclass) to initialize it - try: - _setup_distribution = dist = klass(attrs) - except DistutilsSetupError as msg: - if 'name' not in attrs: - raise SystemExit("error in setup command: %s" % msg) - else: - raise SystemExit("error in {} setup command: {}".format(attrs['name'], msg)) - - if _setup_stop_after == "init": - return dist - - # Find and parse the config file(s): they will override options from - # the setup script, but be overridden by the command line. 
- dist.parse_config_files() - - if DEBUG: - print("options (after parsing config files):") - dist.dump_option_dicts() - - if _setup_stop_after == "config": - return dist - - # Parse the command line and override config files; any - # command-line errors are the end user's fault, so turn them into - # SystemExit to suppress tracebacks. - try: - ok = dist.parse_command_line() - except DistutilsArgError as msg: - raise SystemExit(gen_usage(dist.script_name) + "\nerror: %s" % msg) - - if DEBUG: - print("options (after parsing command line):") - dist.dump_option_dicts() - - if _setup_stop_after == "commandline": - return dist - - # And finally, run all the commands found on the command line. - if ok: - return run_commands(dist) - - return dist - - -# setup () - - -def run_commands(dist): - """Given a Distribution object run all the commands, - raising ``SystemExit`` errors in the case of failure. - - This function assumes that either ``sys.argv`` or ``dist.script_args`` - is already set accordingly. - """ - try: - dist.run_commands() - except KeyboardInterrupt: - raise SystemExit("interrupted") - except OSError as exc: - if DEBUG: - sys.stderr.write("error: {}\n".format(exc)) - raise - else: - raise SystemExit("error: {}".format(exc)) - - except (DistutilsError, CCompilerError) as msg: - if DEBUG: - raise - else: - raise SystemExit("error: " + str(msg)) - - return dist - - -def run_setup(script_name, script_args=None, stop_after="run"): - """Run a setup script in a somewhat controlled environment, and - return the Distribution instance that drives things. This is useful - if you need to find out the distribution meta-data (passed as - keyword args from 'script' to 'setup()', or the contents of the - config files or command-line. - - 'script_name' is a file that will be read and run with 'exec()'; - 'sys.argv[0]' will be replaced with 'script' for the duration of the - call. 'script_args' is a list of strings; if supplied, - 'sys.argv[1:]' will be replaced by 'script_args' for the duration of - the call. - - 'stop_after' tells 'setup()' when to stop processing; possible - values: - init - stop after the Distribution instance has been created and - populated with the keyword arguments to 'setup()' - config - stop after config files have been parsed (and their data - stored in the Distribution instance) - commandline - stop after the command-line ('sys.argv[1:]' or 'script_args') - have been parsed (and the data stored in the Distribution) - run [default] - stop after all commands have been run (the same as if 'setup()' - had been called in the usual way - - Returns the Distribution instance, which provides all information - used to drive the Distutils. - """ - if stop_after not in ('init', 'config', 'commandline', 'run'): - raise ValueError("invalid value for 'stop_after': {!r}".format(stop_after)) - - global _setup_stop_after, _setup_distribution - _setup_stop_after = stop_after - - save_argv = sys.argv.copy() - g = {'__file__': script_name, '__name__': '__main__'} - try: - try: - sys.argv[0] = script_name - if script_args is not None: - sys.argv[1:] = script_args - # tokenize.open supports automatic encoding detection - with tokenize.open(script_name) as f: - code = f.read().replace(r'\r\n', r'\n') - exec(code, g) - finally: - sys.argv = save_argv - _setup_stop_after = None - except SystemExit: - # Hmm, should we do something if exiting with a non-zero code - # (ie. error)? 
- pass - - if _setup_distribution is None: - raise RuntimeError( - ( - "'distutils.core.setup()' was never called -- " - "perhaps '%s' is not a Distutils setup script?" - ) - % script_name - ) - - # I wonder if the setup script's namespace -- g and l -- would be of - # any interest to callers? - # print "_setup_distribution:", _setup_distribution - return _setup_distribution - - -# run_setup () diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/errors.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/errors.py deleted file mode 100644 index ec7fb3b6c4856708dc6bc3b0c35fd8df73156029..0000000000000000000000000000000000000000 --- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/errors.py +++ /dev/null @@ -1,58 +0,0 @@ -"""setuptools.errors - -Provides exceptions used by setuptools modules. -""" - -from distutils import errors as _distutils_errors - - -# Re-export errors from distutils to facilitate the migration to PEP632 - -ByteCompileError = _distutils_errors.DistutilsByteCompileError -CCompilerError = _distutils_errors.CCompilerError -ClassError = _distutils_errors.DistutilsClassError -CompileError = _distutils_errors.CompileError -ExecError = _distutils_errors.DistutilsExecError -FileError = _distutils_errors.DistutilsFileError -InternalError = _distutils_errors.DistutilsInternalError -LibError = _distutils_errors.LibError -LinkError = _distutils_errors.LinkError -ModuleError = _distutils_errors.DistutilsModuleError -OptionError = _distutils_errors.DistutilsOptionError -PlatformError = _distutils_errors.DistutilsPlatformError -PreprocessError = _distutils_errors.PreprocessError -SetupError = _distutils_errors.DistutilsSetupError -TemplateError = _distutils_errors.DistutilsTemplateError -UnknownFileError = _distutils_errors.UnknownFileError - -# The root error class in the hierarchy -BaseError = _distutils_errors.DistutilsError - - -class RemovedCommandError(BaseError, RuntimeError): - """Error used for commands that have been removed in setuptools. - - Since ``setuptools`` is built on ``distutils``, simply removing a command - from ``setuptools`` will make the behavior fall back to ``distutils``; this - error is raised if a command exists in ``distutils`` but has been actively - removed in ``setuptools``. - """ - - -class PackageDiscoveryError(BaseError, RuntimeError): - """Impossible to perform automatic discovery of packages and/or modules. - - The current project layout or given discovery options can lead to problems when - scanning the project directory. - - Setuptools might also refuse to complete auto-discovery if an error prone condition - is detected (e.g. when a project is organised as a flat-layout but contains - multiple directories that can be taken as top-level packages inside a single - distribution [*]_). In these situations the users are encouraged to be explicit - about which packages to include or to make the discovery parameters more specific. - - .. [*] Since multi-package distributions are uncommon it is very likely that the - developers did not intend for all the directories to be packaged, and are just - leaving auxiliary code in the repository top-level, such as maintenance-related - scripts. 
- """ diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/extension.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/extension.py deleted file mode 100644 index 58c023f6b4479c631f382e5062932793d2bee26b..0000000000000000000000000000000000000000 --- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/extension.py +++ /dev/null @@ -1,148 +0,0 @@ -import re -import functools -import distutils.core -import distutils.errors -import distutils.extension - -from .monkey import get_unpatched - - -def _have_cython(): - """ - Return True if Cython can be imported. - """ - cython_impl = 'Cython.Distutils.build_ext' - try: - # from (cython_impl) import build_ext - __import__(cython_impl, fromlist=['build_ext']).build_ext - return True - except Exception: - pass - return False - - -# for compatibility -have_pyrex = _have_cython - -_Extension = get_unpatched(distutils.core.Extension) - - -class Extension(_Extension): - """ - Describes a single extension module. - - This means that all source files will be compiled into a single binary file - ``.`` (with ```` derived from ``name`` and - ```` defined by one of the values in - ``importlib.machinery.EXTENSION_SUFFIXES``). - - In the case ``.pyx`` files are passed as ``sources and`` ``Cython`` is **not** - installed in the build environment, ``setuptools`` may also try to look for the - equivalent ``.cpp`` or ``.c`` files. - - :arg str name: - the full name of the extension, including any packages -- ie. - *not* a filename or pathname, but Python dotted name - - :arg list[str] sources: - list of source filenames, relative to the distribution root - (where the setup script lives), in Unix form (slash-separated) - for portability. Source files may be C, C++, SWIG (.i), - platform-specific resource files, or whatever else is recognized - by the "build_ext" command as source for a Python extension. - - :keyword list[str] include_dirs: - list of directories to search for C/C++ header files (in Unix - form for portability) - - :keyword list[tuple[str, str|None]] define_macros: - list of macros to define; each macro is defined using a 2-tuple: - the first item corresponding to the name of the macro and the second - item either a string with its value or None to - define it without a particular value (equivalent of "#define - FOO" in source or -DFOO on Unix C compiler command line) - - :keyword list[str] undef_macros: - list of macros to undefine explicitly - - :keyword list[str] library_dirs: - list of directories to search for C/C++ libraries at link time - - :keyword list[str] libraries: - list of library names (not filenames or paths) to link against - - :keyword list[str] runtime_library_dirs: - list of directories to search for C/C++ libraries at run time - (for shared extensions, this is when the extension is loaded). - Setting this will cause an exception during build on Windows - platforms. - - :keyword list[str] extra_objects: - list of extra files to link with (eg. object files not implied - by 'sources', static library that must be explicitly specified, - binary resource files, etc.) - - :keyword list[str] extra_compile_args: - any extra platform- and compiler-specific information to use - when compiling the source files in 'sources'. For platforms and - compilers where "command line" makes sense, this is typically a - list of command-line arguments, but for other platforms it could - be anything. 
- - :keyword list[str] extra_link_args: - any extra platform- and compiler-specific information to use - when linking object files together to create the extension (or - to create a new static Python interpreter). Similar - interpretation as for 'extra_compile_args'. - - :keyword list[str] export_symbols: - list of symbols to be exported from a shared extension. Not - used on all platforms, and not generally necessary for Python - extensions, which typically export exactly one symbol: "init" + - extension_name. - - :keyword list[str] swig_opts: - any extra options to pass to SWIG if a source file has the .i - extension. - - :keyword list[str] depends: - list of files that the extension depends on - - :keyword str language: - extension language (i.e. "c", "c++", "objc"). Will be detected - from the source extensions if not provided. - - :keyword bool optional: - specifies that a build failure in the extension should not abort the - build process, but simply not install the failing extension. - - :keyword bool py_limited_api: - opt-in flag for the usage of :doc:`Python's limited API `. - - :raises setuptools.errors.PlatformError: if 'runtime_library_dirs' is - specified on Windows. (since v63) - """ - - def __init__(self, name, sources, *args, **kw): - # The *args is needed for compatibility as calls may use positional - # arguments. py_limited_api may be set only via keyword. - self.py_limited_api = kw.pop("py_limited_api", False) - super().__init__(name, sources, *args, **kw) - - def _convert_pyx_sources_to_lang(self): - """ - Replace sources with .pyx extensions to sources with the target - language extension. This mechanism allows language authors to supply - pre-converted sources but to prefer the .pyx sources. - """ - if _have_cython(): - # the build has Cython, so allow it to compile the .pyx files - return - lang = self.language or '' - target_ext = '.cpp' if lang.lower() == 'c++' else '.c' - sub = functools.partial(re.sub, '.pyx$', target_ext) - self.sources = list(map(sub, self.sources)) - - -class Library(Extension): - """Just like a regular Extension, but built as a library instead""" diff --git a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/demo/README.md b/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/demo/README.md deleted file mode 100644 index 133d8d38e5e9f5f44aca92c59f73309e166d7132..0000000000000000000000000000000000000000 --- a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/demo/README.md +++ /dev/null @@ -1,8 +0,0 @@ - -## Detectron2 Demo - -We provide a command line tool to run a simple demo of builtin configs. -The usage is explained in [GETTING_STARTED.md](../GETTING_STARTED.md). - -See our [blog post](https://ai.facebook.com/blog/-detectron2-a-pytorch-based-modular-object-detection-library-) -for a high-quality demo generated with this tool. 
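The setuptools `extension.py` entry removed above documents the `Extension` constructor keywords (name, sources, include_dirs, define_macros, extra_compile_args, optional, ...) in its class docstring. As a minimal, hedged sketch of how those keywords are normally combined in a setup script — the package name, source paths, and macro values below are invented for illustration and are not taken from any file in this diff:

# illustrative setup.py; "mypkg", the C source path and the macro names are assumptions
from setuptools import setup, Extension

fast_ops = Extension(
    name="mypkg._fast_ops",                  # dotted module name, not a file path
    sources=["src/fast_ops.c"],              # sources relative to the project root
    include_dirs=["src/include"],            # extra header search directories
    define_macros=[("USE_FAST_PATH", "1"),   # -DUSE_FAST_PATH=1
                   ("TRACE", None)],         # -DTRACE (defined without a value)
    extra_compile_args=["-O3"],              # passed straight to the compiler
    optional=True,  # per the docstring: a failed build of this extension is skipped, not fatal
)

setup(name="mypkg", version="0.1.0", ext_modules=[fast_ops])

The `optional=True` flag maps to the behaviour described in the deleted docstring: a build failure in this one extension does not abort the whole build; the failing extension is simply not installed.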
diff --git a/spaces/UtkMal/fresh-or-rotten-apple/fresh_or_rotten_apple/__init__.py b/spaces/UtkMal/fresh-or-rotten-apple/fresh_or_rotten_apple/__init__.py deleted file mode 100644 index f102a9cadfa89ce554b3b26d2b90bfba2e05273c..0000000000000000000000000000000000000000 --- a/spaces/UtkMal/fresh-or-rotten-apple/fresh_or_rotten_apple/__init__.py +++ /dev/null @@ -1 +0,0 @@ -__version__ = "0.0.1" diff --git a/spaces/XzJosh/JM-Bert-VITS2/commons.py b/spaces/XzJosh/JM-Bert-VITS2/commons.py deleted file mode 100644 index 9ad0444b61cbadaa388619986c2889c707d873ce..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/JM-Bert-VITS2/commons.py +++ /dev/null @@ -1,161 +0,0 @@ -import math -import numpy as np -import torch -from torch import nn -from torch.nn import functional as F - - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size*dilation - dilation)/2) - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def intersperse(lst, item): - result = [item] * (len(lst) * 2 + 1) - result[1::2] = lst - return result - - -def kl_divergence(m_p, logs_p, m_q, logs_q): - """KL(P||Q)""" - kl = (logs_q - logs_p) - 0.5 - kl += 0.5 * (torch.exp(2. * logs_p) + ((m_p - m_q)**2)) * torch.exp(-2. * logs_q) - return kl - - -def rand_gumbel(shape): - """Sample from the Gumbel distribution, protect from overflows.""" - uniform_samples = torch.rand(shape) * 0.99998 + 0.00001 - return -torch.log(-torch.log(uniform_samples)) - - -def rand_gumbel_like(x): - g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device) - return g - - -def slice_segments(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, :, idx_str:idx_end] - return ret - - -def rand_slice_segments(x, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size + 1 - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - return ret, ids_str - - -def get_timing_signal_1d( - length, channels, min_timescale=1.0, max_timescale=1.0e4): - position = torch.arange(length, dtype=torch.float) - num_timescales = channels // 2 - log_timescale_increment = ( - math.log(float(max_timescale) / float(min_timescale)) / - (num_timescales - 1)) - inv_timescales = min_timescale * torch.exp( - torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment) - scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1) - signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0) - signal = F.pad(signal, [0, 0, 0, channels % 2]) - signal = signal.view(1, channels, length) - return signal - - -def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return x + signal.to(dtype=x.dtype, device=x.device) - - -def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis) - - -def 
subsequent_mask(length): - mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0) - return mask - - -@torch.jit.script -def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels): - n_channels_int = n_channels[0] - in_act = input_a + input_b - t_act = torch.tanh(in_act[:, :n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, :]) - acts = t_act * s_act - return acts - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def shift_1d(x): - x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1] - return x - - -def sequence_mask(length, max_length=None): - if max_length is None: - max_length = length.max() - x = torch.arange(max_length, dtype=length.dtype, device=length.device) - return x.unsqueeze(0) < length.unsqueeze(1) - - -def generate_path(duration, mask): - """ - duration: [b, 1, t_x] - mask: [b, 1, t_y, t_x] - """ - device = duration.device - - b, _, t_y, t_x = mask.shape - cum_duration = torch.cumsum(duration, -1) - - cum_duration_flat = cum_duration.view(b * t_x) - path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype) - path = path.view(b, t_x, t_y) - path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1] - path = path.unsqueeze(1).transpose(2,3) * mask - return path - - -def clip_grad_value_(parameters, clip_value, norm_type=2): - if isinstance(parameters, torch.Tensor): - parameters = [parameters] - parameters = list(filter(lambda p: p.grad is not None, parameters)) - norm_type = float(norm_type) - if clip_value is not None: - clip_value = float(clip_value) - - total_norm = 0 - for p in parameters: - param_norm = p.grad.data.norm(norm_type) - total_norm += param_norm.item() ** norm_type - if clip_value is not None: - p.grad.data.clamp_(min=-clip_value, max=clip_value) - total_norm = total_norm ** (1. 
/ norm_type) - return total_norm diff --git a/spaces/XzJosh/Jiaran-Bert-VITS2/text/english_bert_mock.py b/spaces/XzJosh/Jiaran-Bert-VITS2/text/english_bert_mock.py deleted file mode 100644 index 3b894ced5b6d619a18d6bdd7d7606ba9e6532050..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/Jiaran-Bert-VITS2/text/english_bert_mock.py +++ /dev/null @@ -1,5 +0,0 @@ -import torch - - -def get_bert_feature(norm_text, word2ph): - return torch.zeros(1024, sum(word2ph)) diff --git a/spaces/XzJosh/maimai-Bert-VITS2/resample.py b/spaces/XzJosh/maimai-Bert-VITS2/resample.py deleted file mode 100644 index 2ed1685654a371c5722168e9987809b05b1cb224..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/maimai-Bert-VITS2/resample.py +++ /dev/null @@ -1,42 +0,0 @@ -import os -import argparse -import librosa -import numpy as np -from multiprocessing import Pool, cpu_count - -import soundfile -from scipy.io import wavfile -from tqdm import tqdm - - -def process(item): - spkdir, wav_name, args = item - speaker = spkdir.replace("\\", "/").split("/")[-1] - wav_path = os.path.join(args.in_dir, speaker, wav_name) - if os.path.exists(wav_path) and '.wav' in wav_path: - os.makedirs(os.path.join(args.out_dir, speaker), exist_ok=True) - wav, sr = librosa.load(wav_path, sr=args.sr) - soundfile.write( - os.path.join(args.out_dir, speaker, wav_name), - wav, - sr - ) - - - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - parser.add_argument("--sr", type=int, default=44100, help="sampling rate") - parser.add_argument("--in_dir", type=str, default="./raw", help="path to source dir") - parser.add_argument("--out_dir", type=str, default="./dataset", help="path to target dir") - args = parser.parse_args() - # processs = 8 - processs = cpu_count()-2 if cpu_count() >4 else 1 - pool = Pool(processes=processs) - - for speaker in os.listdir(args.in_dir): - spk_dir = os.path.join(args.in_dir, speaker) - if os.path.isdir(spk_dir): - print(spk_dir) - for _ in tqdm(pool.imap_unordered(process, [(spk_dir, i, args) for i in os.listdir(spk_dir) if i.endswith("wav")])): - pass diff --git a/spaces/YUANAI/DiffspeechResearch/modules/commons/conformer/espnet_positional_embedding.py b/spaces/YUANAI/DiffspeechResearch/modules/commons/conformer/espnet_positional_embedding.py deleted file mode 100644 index 89b9b5549cc779d1ea67f052b1c99cad92365503..0000000000000000000000000000000000000000 --- a/spaces/YUANAI/DiffspeechResearch/modules/commons/conformer/espnet_positional_embedding.py +++ /dev/null @@ -1,113 +0,0 @@ -import math -import torch - - -class PositionalEncoding(torch.nn.Module): - """Positional encoding. - Args: - d_model (int): Embedding dimension. - dropout_rate (float): Dropout rate. - max_len (int): Maximum input length. - reverse (bool): Whether to reverse the input position. 
- """ - - def __init__(self, d_model, dropout_rate, max_len=5000, reverse=False): - """Construct an PositionalEncoding object.""" - super(PositionalEncoding, self).__init__() - self.d_model = d_model - self.reverse = reverse - self.xscale = math.sqrt(self.d_model) - self.dropout = torch.nn.Dropout(p=dropout_rate) - self.pe = None - self.extend_pe(torch.tensor(0.0).expand(1, max_len)) - - def extend_pe(self, x): - """Reset the positional encodings.""" - if self.pe is not None: - if self.pe.size(1) >= x.size(1): - if self.pe.dtype != x.dtype or self.pe.device != x.device: - self.pe = self.pe.to(dtype=x.dtype, device=x.device) - return - pe = torch.zeros(x.size(1), self.d_model) - if self.reverse: - position = torch.arange( - x.size(1) - 1, -1, -1.0, dtype=torch.float32 - ).unsqueeze(1) - else: - position = torch.arange(0, x.size(1), dtype=torch.float32).unsqueeze(1) - div_term = torch.exp( - torch.arange(0, self.d_model, 2, dtype=torch.float32) - * -(math.log(10000.0) / self.d_model) - ) - pe[:, 0::2] = torch.sin(position * div_term) - pe[:, 1::2] = torch.cos(position * div_term) - pe = pe.unsqueeze(0) - self.pe = pe.to(device=x.device, dtype=x.dtype) - - def forward(self, x: torch.Tensor): - """Add positional encoding. - Args: - x (torch.Tensor): Input tensor (batch, time, `*`). - Returns: - torch.Tensor: Encoded tensor (batch, time, `*`). - """ - self.extend_pe(x) - x = x * self.xscale + self.pe[:, : x.size(1)] - return self.dropout(x) - - -class ScaledPositionalEncoding(PositionalEncoding): - """Scaled positional encoding module. - See Sec. 3.2 https://arxiv.org/abs/1809.08895 - Args: - d_model (int): Embedding dimension. - dropout_rate (float): Dropout rate. - max_len (int): Maximum input length. - """ - - def __init__(self, d_model, dropout_rate, max_len=5000): - """Initialize class.""" - super().__init__(d_model=d_model, dropout_rate=dropout_rate, max_len=max_len) - self.alpha = torch.nn.Parameter(torch.tensor(1.0)) - - def reset_parameters(self): - """Reset parameters.""" - self.alpha.data = torch.tensor(1.0) - - def forward(self, x): - """Add positional encoding. - Args: - x (torch.Tensor): Input tensor (batch, time, `*`). - Returns: - torch.Tensor: Encoded tensor (batch, time, `*`). - """ - self.extend_pe(x) - x = x + self.alpha * self.pe[:, : x.size(1)] - return self.dropout(x) - - -class RelPositionalEncoding(PositionalEncoding): - """Relative positional encoding module. - See : Appendix B in https://arxiv.org/abs/1901.02860 - Args: - d_model (int): Embedding dimension. - dropout_rate (float): Dropout rate. - max_len (int): Maximum input length. - """ - - def __init__(self, d_model, dropout_rate, max_len=5000): - """Initialize class.""" - super().__init__(d_model, dropout_rate, max_len, reverse=True) - - def forward(self, x): - """Compute positional encoding. - Args: - x (torch.Tensor): Input tensor (batch, time, `*`). - Returns: - torch.Tensor: Encoded tensor (batch, time, `*`). - torch.Tensor: Positional embedding tensor (1, time, `*`). 
- """ - self.extend_pe(x) - x = x * self.xscale - pos_emb = self.pe[:, : x.size(1)] - return self.dropout(x), self.dropout(pos_emb) \ No newline at end of file diff --git a/spaces/YangHao520/testCreateFile/README.md b/spaces/YangHao520/testCreateFile/README.md deleted file mode 100644 index cbbccd8de7114dcdc4ae2d6455602b9c68702289..0000000000000000000000000000000000000000 --- a/spaces/YangHao520/testCreateFile/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: TestCreateFile -emoji: 🌍 -colorFrom: indigo -colorTo: indigo -sdk: gradio -sdk_version: 3.41.2 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/YeOldHermit/Super-Resolution-Anime-Diffusion/diffusers/commands/diffusers_cli.py b/spaces/YeOldHermit/Super-Resolution-Anime-Diffusion/diffusers/commands/diffusers_cli.py deleted file mode 100644 index 30084e55ba4eeec79c87a99eae3e60a6233dc556..0000000000000000000000000000000000000000 --- a/spaces/YeOldHermit/Super-Resolution-Anime-Diffusion/diffusers/commands/diffusers_cli.py +++ /dev/null @@ -1,41 +0,0 @@ -#!/usr/bin/env python -# Copyright 2022 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -from argparse import ArgumentParser - -from .env import EnvironmentCommand - - -def main(): - parser = ArgumentParser("Diffusers CLI tool", usage="diffusers-cli []") - commands_parser = parser.add_subparsers(help="diffusers-cli command helpers") - - # Register commands - EnvironmentCommand.register_subcommand(commands_parser) - - # Let's go - args = parser.parse_args() - - if not hasattr(args, "func"): - parser.print_help() - exit(1) - - # Run - service = args.func(args) - service.run() - - -if __name__ == "__main__": - main() diff --git a/spaces/YouLiXiya/Mobile-SAM/GroundingDINO/groundingdino/datasets/transforms.py b/spaces/YouLiXiya/Mobile-SAM/GroundingDINO/groundingdino/datasets/transforms.py deleted file mode 100644 index 91cf9269e4b31008a3ddca34a19b038a9b399991..0000000000000000000000000000000000000000 --- a/spaces/YouLiXiya/Mobile-SAM/GroundingDINO/groundingdino/datasets/transforms.py +++ /dev/null @@ -1,311 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved -""" -Transforms and data augmentation for both image + bbox. -""" -import os -import random - -import PIL -import torch -import torchvision.transforms as T -import torchvision.transforms.functional as F - -from groundingdino.util.box_ops import box_xyxy_to_cxcywh -from groundingdino.util.misc import interpolate - - -def crop(image, target, region): - cropped_image = F.crop(image, *region) - - target = target.copy() - i, j, h, w = region - - # should we do something wrt the original size? 
- target["size"] = torch.tensor([h, w]) - - fields = ["labels", "area", "iscrowd", "positive_map"] - - if "boxes" in target: - boxes = target["boxes"] - max_size = torch.as_tensor([w, h], dtype=torch.float32) - cropped_boxes = boxes - torch.as_tensor([j, i, j, i]) - cropped_boxes = torch.min(cropped_boxes.reshape(-1, 2, 2), max_size) - cropped_boxes = cropped_boxes.clamp(min=0) - area = (cropped_boxes[:, 1, :] - cropped_boxes[:, 0, :]).prod(dim=1) - target["boxes"] = cropped_boxes.reshape(-1, 4) - target["area"] = area - fields.append("boxes") - - if "masks" in target: - # FIXME should we update the area here if there are no boxes? - target["masks"] = target["masks"][:, i : i + h, j : j + w] - fields.append("masks") - - # remove elements for which the boxes or masks that have zero area - if "boxes" in target or "masks" in target: - # favor boxes selection when defining which elements to keep - # this is compatible with previous implementation - if "boxes" in target: - cropped_boxes = target["boxes"].reshape(-1, 2, 2) - keep = torch.all(cropped_boxes[:, 1, :] > cropped_boxes[:, 0, :], dim=1) - else: - keep = target["masks"].flatten(1).any(1) - - for field in fields: - if field in target: - target[field] = target[field][keep] - - if os.environ.get("IPDB_SHILONG_DEBUG", None) == "INFO": - # for debug and visualization only. - if "strings_positive" in target: - target["strings_positive"] = [ - _i for _i, _j in zip(target["strings_positive"], keep) if _j - ] - - return cropped_image, target - - -def hflip(image, target): - flipped_image = F.hflip(image) - - w, h = image.size - - target = target.copy() - if "boxes" in target: - boxes = target["boxes"] - boxes = boxes[:, [2, 1, 0, 3]] * torch.as_tensor([-1, 1, -1, 1]) + torch.as_tensor( - [w, 0, w, 0] - ) - target["boxes"] = boxes - - if "masks" in target: - target["masks"] = target["masks"].flip(-1) - - return flipped_image, target - - -def resize(image, target, size, max_size=None): - # size can be min_size (scalar) or (w, h) tuple - - def get_size_with_aspect_ratio(image_size, size, max_size=None): - w, h = image_size - if max_size is not None: - min_original_size = float(min((w, h))) - max_original_size = float(max((w, h))) - if max_original_size / min_original_size * size > max_size: - size = int(round(max_size * min_original_size / max_original_size)) - - if (w <= h and w == size) or (h <= w and h == size): - return (h, w) - - if w < h: - ow = size - oh = int(size * h / w) - else: - oh = size - ow = int(size * w / h) - - return (oh, ow) - - def get_size(image_size, size, max_size=None): - if isinstance(size, (list, tuple)): - return size[::-1] - else: - return get_size_with_aspect_ratio(image_size, size, max_size) - - size = get_size(image.size, size, max_size) - rescaled_image = F.resize(image, size) - - if target is None: - return rescaled_image, None - - ratios = tuple(float(s) / float(s_orig) for s, s_orig in zip(rescaled_image.size, image.size)) - ratio_width, ratio_height = ratios - - target = target.copy() - if "boxes" in target: - boxes = target["boxes"] - scaled_boxes = boxes * torch.as_tensor( - [ratio_width, ratio_height, ratio_width, ratio_height] - ) - target["boxes"] = scaled_boxes - - if "area" in target: - area = target["area"] - scaled_area = area * (ratio_width * ratio_height) - target["area"] = scaled_area - - h, w = size - target["size"] = torch.tensor([h, w]) - - if "masks" in target: - target["masks"] = ( - interpolate(target["masks"][:, None].float(), size, mode="nearest")[:, 0] > 0.5 - ) - - return rescaled_image, 
target - - -def pad(image, target, padding): - # assumes that we only pad on the bottom right corners - padded_image = F.pad(image, (0, 0, padding[0], padding[1])) - if target is None: - return padded_image, None - target = target.copy() - # should we do something wrt the original size? - target["size"] = torch.tensor(padded_image.size[::-1]) - if "masks" in target: - target["masks"] = torch.nn.functional.pad(target["masks"], (0, padding[0], 0, padding[1])) - return padded_image, target - - -class ResizeDebug(object): - def __init__(self, size): - self.size = size - - def __call__(self, img, target): - return resize(img, target, self.size) - - -class RandomCrop(object): - def __init__(self, size): - self.size = size - - def __call__(self, img, target): - region = T.RandomCrop.get_params(img, self.size) - return crop(img, target, region) - - -class RandomSizeCrop(object): - def __init__(self, min_size: int, max_size: int, respect_boxes: bool = False): - # respect_boxes: True to keep all boxes - # False to tolerence box filter - self.min_size = min_size - self.max_size = max_size - self.respect_boxes = respect_boxes - - def __call__(self, img: PIL.Image.Image, target: dict): - init_boxes = len(target["boxes"]) - max_patience = 10 - for i in range(max_patience): - w = random.randint(self.min_size, min(img.width, self.max_size)) - h = random.randint(self.min_size, min(img.height, self.max_size)) - region = T.RandomCrop.get_params(img, [h, w]) - result_img, result_target = crop(img, target, region) - if ( - not self.respect_boxes - or len(result_target["boxes"]) == init_boxes - or i == max_patience - 1 - ): - return result_img, result_target - return result_img, result_target - - -class CenterCrop(object): - def __init__(self, size): - self.size = size - - def __call__(self, img, target): - image_width, image_height = img.size - crop_height, crop_width = self.size - crop_top = int(round((image_height - crop_height) / 2.0)) - crop_left = int(round((image_width - crop_width) / 2.0)) - return crop(img, target, (crop_top, crop_left, crop_height, crop_width)) - - -class RandomHorizontalFlip(object): - def __init__(self, p=0.5): - self.p = p - - def __call__(self, img, target): - if random.random() < self.p: - return hflip(img, target) - return img, target - - -class RandomResize(object): - def __init__(self, sizes, max_size=None): - assert isinstance(sizes, (list, tuple)) - self.sizes = sizes - self.max_size = max_size - - def __call__(self, img, target=None): - size = random.choice(self.sizes) - return resize(img, target, size, self.max_size) - - -class RandomPad(object): - def __init__(self, max_pad): - self.max_pad = max_pad - - def __call__(self, img, target): - pad_x = random.randint(0, self.max_pad) - pad_y = random.randint(0, self.max_pad) - return pad(img, target, (pad_x, pad_y)) - - -class RandomSelect(object): - """ - Randomly selects between transforms1 and transforms2, - with probability p for transforms1 and (1 - p) for transforms2 - """ - - def __init__(self, transforms1, transforms2, p=0.5): - self.transforms1 = transforms1 - self.transforms2 = transforms2 - self.p = p - - def __call__(self, img, target): - if random.random() < self.p: - return self.transforms1(img, target) - return self.transforms2(img, target) - - -class ToTensor(object): - def __call__(self, img, target): - return F.to_tensor(img), target - - -class RandomErasing(object): - def __init__(self, *args, **kwargs): - self.eraser = T.RandomErasing(*args, **kwargs) - - def __call__(self, img, target): - return 
self.eraser(img), target - - -class Normalize(object): - def __init__(self, mean, std): - self.mean = mean - self.std = std - - def __call__(self, image, target=None): - image = F.normalize(image, mean=self.mean, std=self.std) - if target is None: - return image, None - target = target.copy() - h, w = image.shape[-2:] - if "boxes" in target: - boxes = target["boxes"] - boxes = box_xyxy_to_cxcywh(boxes) - boxes = boxes / torch.tensor([w, h, w, h], dtype=torch.float32) - target["boxes"] = boxes - return image, target - - -class Compose(object): - def __init__(self, transforms): - self.transforms = transforms - - def __call__(self, image, target): - for t in self.transforms: - image, target = t(image, target) - return image, target - - def __repr__(self): - format_string = self.__class__.__name__ + "(" - for t in self.transforms: - format_string += "\n" - format_string += " {0}".format(t) - format_string += "\n)" - return format_string diff --git a/spaces/Yuliang/ECON/lib/pymafx/utils/segms.py b/spaces/Yuliang/ECON/lib/pymafx/utils/segms.py deleted file mode 100644 index c8fbf7e2c49422449cf4a8c4a38e1f320a0b15c0..0000000000000000000000000000000000000000 --- a/spaces/Yuliang/ECON/lib/pymafx/utils/segms.py +++ /dev/null @@ -1,269 +0,0 @@ -# Copyright (c) 2017-present, Facebook, Inc. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -############################################################################## -"""Functions for interacting with segmentation masks in the COCO format. - -The following terms are used in this module - mask: a binary mask encoded as a 2D numpy array - segm: a segmentation mask in one of the two COCO formats (polygon or RLE) - polygon: COCO's polygon format - RLE: COCO's run length encoding format -""" - -from __future__ import ( - absolute_import, - division, - print_function, - unicode_literals, -) - -import numpy as np -import pycocotools.mask as mask_util - - -def GetDensePoseMask(Polys): - MaskGen = np.zeros([256, 256]) - for i in range(1, 15): - if (Polys[i - 1]): - current_mask = mask_util.decode(Polys[i - 1]) - MaskGen[current_mask > 0] = i - return MaskGen - - -def flip_segms(segms, height, width): - """Left/right flip each mask in a list of masks.""" - def _flip_poly(poly, width): - flipped_poly = np.array(poly) - flipped_poly[0::2] = width - np.array(poly[0::2]) - 1 - return flipped_poly.tolist() - - def _flip_rle(rle, height, width): - if 'counts' in rle and type(rle['counts']) == list: - # Magic RLE format handling painfully discovered by looking at the - # COCO API showAnns function. 
- rle = mask_util.frPyObjects([rle], height, width) - mask = mask_util.decode(rle) - mask = mask[:, ::-1, :] - rle = mask_util.encode(np.array(mask, order='F', dtype=np.uint8)) - return rle - - flipped_segms = [] - for segm in segms: - if type(segm) == list: - # Polygon format - flipped_segms.append([_flip_poly(poly, width) for poly in segm]) - else: - # RLE format - assert type(segm) == dict - flipped_segms.append(_flip_rle(segm, height, width)) - return flipped_segms - - -def polys_to_mask(polygons, height, width): - """Convert from the COCO polygon segmentation format to a binary mask - encoded as a 2D array of data type numpy.float32. The polygon segmentation - is understood to be enclosed inside a height x width image. The resulting - mask is therefore of shape (height, width). - """ - rle = mask_util.frPyObjects(polygons, height, width) - mask = np.array(mask_util.decode(rle), dtype=np.float32) - # Flatten in case polygons was a list - mask = np.sum(mask, axis=2) - mask = np.array(mask > 0, dtype=np.float32) - return mask - - -def mask_to_bbox(mask): - """Compute the tight bounding box of a binary mask.""" - xs = np.where(np.sum(mask, axis=0) > 0)[0] - ys = np.where(np.sum(mask, axis=1) > 0)[0] - - if len(xs) == 0 or len(ys) == 0: - return None - - x0 = xs[0] - x1 = xs[-1] - y0 = ys[0] - y1 = ys[-1] - return np.array((x0, y0, x1, y1), dtype=np.float32) - - -def polys_to_mask_wrt_box(polygons, box, M): - """Convert from the COCO polygon segmentation format to a binary mask - encoded as a 2D array of data type numpy.float32. The polygon segmentation - is understood to be enclosed in the given box and rasterized to an M x M - mask. The resulting mask is therefore of shape (M, M). - """ - w = box[2] - box[0] - h = box[3] - box[1] - - w = np.maximum(w, 1) - h = np.maximum(h, 1) - - polygons_norm = [] - for poly in polygons: - p = np.array(poly, dtype=np.float32) - p[0::2] = (p[0::2] - box[0]) * M / w - p[1::2] = (p[1::2] - box[1]) * M / h - polygons_norm.append(p) - - rle = mask_util.frPyObjects(polygons_norm, M, M) - mask = np.array(mask_util.decode(rle), dtype=np.float32) - # Flatten in case polygons was a list - mask = np.sum(mask, axis=2) - mask = np.array(mask > 0, dtype=np.float32) - return mask - - -def polys_to_boxes(polys): - """Convert a list of polygons into an array of tight bounding boxes.""" - boxes_from_polys = np.zeros((len(polys), 4), dtype=np.float32) - for i in range(len(polys)): - poly = polys[i] - x0 = min(min(p[::2]) for p in poly) - x1 = max(max(p[::2]) for p in poly) - y0 = min(min(p[1::2]) for p in poly) - y1 = max(max(p[1::2]) for p in poly) - boxes_from_polys[i, :] = [x0, y0, x1, y1] - - return boxes_from_polys - - -def rle_mask_voting(top_masks, all_masks, all_dets, iou_thresh, binarize_thresh, method='AVG'): - """Returns new masks (in correspondence with `top_masks`) by combining - multiple overlapping masks coming from the pool of `all_masks`. Two methods - for combining masks are supported: 'AVG' uses a weighted average of - overlapping mask pixels; 'UNION' takes the union of all mask pixels. 
- """ - if len(top_masks) == 0: - return - - all_not_crowd = [False] * len(all_masks) - top_to_all_overlaps = mask_util.iou(top_masks, all_masks, all_not_crowd) - decoded_all_masks = [np.array(mask_util.decode(rle), dtype=np.float32) for rle in all_masks] - decoded_top_masks = [np.array(mask_util.decode(rle), dtype=np.float32) for rle in top_masks] - all_boxes = all_dets[:, :4].astype(np.int32) - all_scores = all_dets[:, 4] - - # Fill box support with weights - mask_shape = decoded_all_masks[0].shape - mask_weights = np.zeros((len(all_masks), mask_shape[0], mask_shape[1])) - for k in range(len(all_masks)): - ref_box = all_boxes[k] - x_0 = max(ref_box[0], 0) - x_1 = min(ref_box[2] + 1, mask_shape[1]) - y_0 = max(ref_box[1], 0) - y_1 = min(ref_box[3] + 1, mask_shape[0]) - mask_weights[k, y_0:y_1, x_0:x_1] = all_scores[k] - mask_weights = np.maximum(mask_weights, 1e-5) - - top_segms_out = [] - for k in range(len(top_masks)): - # Corner case of empty mask - if decoded_top_masks[k].sum() == 0: - top_segms_out.append(top_masks[k]) - continue - - inds_to_vote = np.where(top_to_all_overlaps[k] >= iou_thresh)[0] - # Only matches itself - if len(inds_to_vote) == 1: - top_segms_out.append(top_masks[k]) - continue - - masks_to_vote = [decoded_all_masks[i] for i in inds_to_vote] - if method == 'AVG': - ws = mask_weights[inds_to_vote] - soft_mask = np.average(masks_to_vote, axis=0, weights=ws) - mask = np.array(soft_mask > binarize_thresh, dtype=np.uint8) - elif method == 'UNION': - # Any pixel that's on joins the mask - soft_mask = np.sum(masks_to_vote, axis=0) - mask = np.array(soft_mask > 1e-5, dtype=np.uint8) - else: - raise NotImplementedError('Method {} is unknown'.format(method)) - rle = mask_util.encode(np.array(mask[:, :, np.newaxis], order='F'))[0] - top_segms_out.append(rle) - - return top_segms_out - - -def rle_mask_nms(masks, dets, thresh, mode='IOU'): - """Performs greedy non-maximum suppression based on an overlap measurement - between masks. The type of measurement is determined by `mode` and can be - either 'IOU' (standard intersection over union) or 'IOMA' (intersection over - mininum area). - """ - if len(masks) == 0: - return [] - if len(masks) == 1: - return [0] - - if mode == 'IOU': - # Computes ious[m1, m2] = area(intersect(m1, m2)) / area(union(m1, m2)) - all_not_crowds = [False] * len(masks) - ious = mask_util.iou(masks, masks, all_not_crowds) - elif mode == 'IOMA': - # Computes ious[m1, m2] = area(intersect(m1, m2)) / min(area(m1), area(m2)) - all_crowds = [True] * len(masks) - # ious[m1, m2] = area(intersect(m1, m2)) / area(m2) - ious = mask_util.iou(masks, masks, all_crowds) - # ... 
= max(area(intersect(m1, m2)) / area(m2), - # area(intersect(m2, m1)) / area(m1)) - ious = np.maximum(ious, ious.transpose()) - elif mode == 'CONTAINMENT': - # Computes ious[m1, m2] = area(intersect(m1, m2)) / area(m2) - # Which measures how much m2 is contained inside m1 - all_crowds = [True] * len(masks) - ious = mask_util.iou(masks, masks, all_crowds) - else: - raise NotImplementedError('Mode {} is unknown'.format(mode)) - - scores = dets[:, 4] - order = np.argsort(-scores) - - keep = [] - while order.size > 0: - i = order[0] - keep.append(i) - ovr = ious[i, order[1:]] - inds_to_keep = np.where(ovr <= thresh)[0] - order = order[inds_to_keep + 1] - - return keep - - -def rle_masks_to_boxes(masks): - """Computes the bounding box of each mask in a list of RLE encoded masks.""" - if len(masks) == 0: - return [] - - decoded_masks = [np.array(mask_util.decode(rle), dtype=np.float32) for rle in masks] - - def get_bounds(flat_mask): - inds = np.where(flat_mask > 0)[0] - return inds.min(), inds.max() - - boxes = np.zeros((len(decoded_masks), 4)) - keep = [True] * len(decoded_masks) - for i, mask in enumerate(decoded_masks): - if mask.sum() == 0: - keep[i] = False - continue - flat_mask = mask.sum(axis=0) - x0, x1 = get_bounds(flat_mask) - flat_mask = mask.sum(axis=1) - y0, y1 = get_bounds(flat_mask) - boxes[i, :] = (x0, y0, x1, y1) - - return boxes, np.where(keep)[0] diff --git a/spaces/Zannriell/cloudqi-cqi_speech_recognize_pt_v0/app.py b/spaces/Zannriell/cloudqi-cqi_speech_recognize_pt_v0/app.py deleted file mode 100644 index 07bf6934ac381840ae8fae6fe3a2d3ff8d861727..0000000000000000000000000000000000000000 --- a/spaces/Zannriell/cloudqi-cqi_speech_recognize_pt_v0/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/cloudqi/cqi_speech_recognize_pt_v0").launch() \ No newline at end of file diff --git a/spaces/ZiLaiJuan/GRADIO/README.md b/spaces/ZiLaiJuan/GRADIO/README.md deleted file mode 100644 index 339dd23bd2233634ea2987a63abe677c1b200271..0000000000000000000000000000000000000000 --- a/spaces/ZiLaiJuan/GRADIO/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: GRADIO -emoji: 🐠 -colorFrom: yellow -colorTo: red -sdk: gradio -sdk_version: 3.4.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/abidlabs/gradio-discord-bot-server/app.py b/spaces/abidlabs/gradio-discord-bot-server/app.py deleted file mode 100644 index e5e50714322d74cf0109f8be67fed8e9bb133bcb..0000000000000000000000000000000000000000 --- a/spaces/abidlabs/gradio-discord-bot-server/app.py +++ /dev/null @@ -1,192 +0,0 @@ -import asyncio -import argparse -from collections import Counter -import json -import os -import pathlib -import re -from threading import Thread -from pathlib import Path - - -import discord -from discord.ext import commands -import gradio as gr -from gradio import utils -import requests - -from typing import Dict, List - -from utils import * - - -lock = asyncio.Lock() - -bot = commands.Bot("", intents=discord.Intents(messages=True, guilds=True)) - - -GUILD_SPACES_FILE = "guild_spaces.pkl" - - -if pathlib.Path(GUILD_SPACES_FILE).exists(): - guild_spaces = read_pickle_file(GUILD_SPACES_FILE) - assert isinstance(guild_spaces, dict), f"{GUILD_SPACES_FILE} in invalid format." 
- guild_blocks = {} - delete_keys = [] - for k, v in guild_spaces.items(): - try: - guild_blocks[k] = gr.Interface.load(v, src="spaces") - except ValueError: - delete_keys.append(k) - for k in delete_keys: - del guild_spaces[k] -else: - guild_spaces: Dict[int, str] = {} - guild_blocks: Dict[int, gr.Blocks] = {} - - -HASHED_USERS_FILE = "users.pkl" - -if pathlib.Path(HASHED_USERS_FILE).exists(): - hashed_users = read_pickle_file(HASHED_USERS_FILE) - assert isinstance(hashed_users, list), f"{HASHED_USERS_FILE} in invalid format." -else: - hashed_users: List[str] = [] - - -@bot.event -async def on_ready(): - print(f"Logged in as {bot.user}") - print(f"Running in {len(bot.guilds)} servers...") - - -async def run_prediction(space: gr.Blocks, *inputs): - inputs = list(inputs) - fn_index = 0 - processed_inputs = space.serialize_data(fn_index=fn_index, inputs=inputs) - batch = space.dependencies[fn_index]["batch"] - - if batch: - processed_inputs = [[inp] for inp in processed_inputs] - - outputs = await space.process_api( - fn_index=fn_index, inputs=processed_inputs, request=None, state={} - ) - outputs = outputs["data"] - - if batch: - outputs = [out[0] for out in outputs] - - processed_outputs = space.deserialize_data(fn_index, outputs) - processed_outputs = utils.resolve_singleton(processed_outputs) - - return processed_outputs - - -async def display_stats(message: discord.Message): - await message.channel.send( - f"Running in {len(bot.guilds)} servers\n" - f"Total # of users: {len(hashed_users)}\n" - f"------------------" - ) - await message.channel.send(f"Most popular spaces:") - # display the top 10 most frequently occurring strings and their counts - spaces = guild_spaces.values() - counts = Counter(spaces) - for space, count in counts.most_common(10): - await message.channel.send(f"- {space}: {count}") - - -async def load_space(guild: discord.Guild, message: discord.Message, content: str): - iframe_url = ( - requests.get(f"https://huggingface.co/api/spaces/{content}/host") - .json() - .get("host") - ) - if iframe_url is None: - return await message.channel.send( - f"Space: {content} not found. If you'd like to make a prediction, enclose the inputs in quotation marks." - ) - else: - await message.channel.send( - f"Loading Space: https://huggingface.co/spaces/{content}..." - ) - interface = gr.Interface.load(content, src="spaces") - guild_spaces[guild.id] = content - guild_blocks[guild.id] = interface - asyncio.create_task(update_pickle_file(guild_spaces, GUILD_SPACES_FILE)) - if len(content) > 32 - len(f"{bot.name} []"): # type: ignore - nickname = content[: 32 - len(f"{bot.name} []") - 3] + "..." # type: ignore - else: - nickname = content - nickname = f"{bot.name} [{nickname}]" # type: ignore - await guild.me.edit(nick=nickname) - await message.channel.send( - "Ready to make predictions! Type in your inputs and enclose them in quotation marks." 
- ) - - -async def disconnect_space(bot: commands.Bot, guild: discord.Guild): - guild_spaces.pop(guild.id, None) - guild_blocks.pop(guild.id, None) - asyncio.create_task(update_pickle_file(guild_spaces, GUILD_SPACES_FILE)) - await guild.me.edit(nick=bot.name) # type: ignore - - -async def make_prediction(guild: discord.Guild, message: discord.Message, content: str): - if guild.id in guild_spaces: - params = re.split(r' (?=")', content) - params = [p.strip("'\"") for p in params] - space = guild_blocks[guild.id] - predictions = await run_prediction(space, *params) - if isinstance(predictions, (tuple, list)): - for p in predictions: - await send_file_or_text(message.channel, p) - else: - await send_file_or_text(message.channel, predictions) - return - else: - await message.channel.send( - "No Space is currently running. Please type in the name of a Hugging Face Space name first, e.g. abidlabs/en2fr" - ) - await guild.me.edit(nick=bot.name) # type: ignore - - -@bot.event -async def on_message(message: discord.Message): - if message.author == bot.user: - return - h = hash_user_id(message.author.id) - if h not in hashed_users: - hashed_users.append(h) - asyncio.create_task(update_pickle_file(hashed_users, HASHED_USERS_FILE)) - else: - if message.content: - content = remove_tags(message.content) - print("Message received: " + content) - guild = message.channel.guild - assert guild, "Message not sent in a guild." - - if content.strip() == "exit": - await disconnect_space(bot, guild) - elif content.strip() == "stats": - await display_stats(message) - elif content.startswith('"') or content.startswith("'"): - await make_prediction(guild, message, content) - else: - await load_space(guild, message, content) - -bot.env = "prod" # type: ignore -bot.name = "GradioBot" # type: ignore - - -t = Thread(target=bot.run, daemon=True, args=(os.getenv("discord_token"), )) -t.start() - -import gradio as gr - -with gr.Blocks() as demo: - gr.Markdown(Path('landing.md').read_text()) - -demo.launch() - \ No newline at end of file diff --git a/spaces/adamcasson/transformer-flops-calculator/README.md b/spaces/adamcasson/transformer-flops-calculator/README.md deleted file mode 100644 index 4127aed709d6bb110f135ae3a8c1cda62c86c8dc..0000000000000000000000000000000000000000 --- a/spaces/adamcasson/transformer-flops-calculator/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Transformer Flops Calculator -emoji: ⚡ -colorFrom: green -colorTo: pink -sdk: gradio -sdk_version: 3.24.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/adirik/stylemc-demo/encoder4editing/criteria/lpips/__init__.py b/spaces/adirik/stylemc-demo/encoder4editing/criteria/lpips/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/aidealab/interior-ai/pipelines.py b/spaces/aidealab/interior-ai/pipelines.py deleted file mode 100644 index 5c40d5ab9852206be8e392c4b370b7953caeab62..0000000000000000000000000000000000000000 --- a/spaces/aidealab/interior-ai/pipelines.py +++ /dev/null @@ -1,131 +0,0 @@ -import logging -from typing import List, Tuple, Dict - -import streamlit as st -import torch -import gc -import time -import numpy as np -from PIL import Image -from time import perf_counter -from contextlib import contextmanager -from scipy.signal import fftconvolve -from PIL import ImageFilter - -from diffusers import ControlNetModel, 
UniPCMultistepScheduler -from diffusers import StableDiffusionInpaintPipeline - -from config import WIDTH, HEIGHT -from stable_diffusion_controlnet_inpaint_img2img import ( - StableDiffusionControlNetInpaintImg2ImgPipeline, -) -from helpers import flush - -LOGGING = logging.getLogger(__name__) - - -class ControlNetPipeline: - def __init__(self): - self.in_use = False - self.controlnet = ControlNetModel.from_pretrained( - "BertChristiaens/controlnet-seg-room", torch_dtype=torch.float16 - ) - - self.pipe = StableDiffusionControlNetInpaintImg2ImgPipeline.from_pretrained( - "runwayml/stable-diffusion-inpainting", - controlnet=self.controlnet, - safety_checker=None, - torch_dtype=torch.float16, - ) - - self.pipe.scheduler = UniPCMultistepScheduler.from_config( - self.pipe.scheduler.config - ) - self.pipe.enable_xformers_memory_efficient_attention() - self.pipe = self.pipe.to("cuda") - - self.waiting_queue = [] - self.count = 0 - - @property - def queue_size(self): - return len(self.waiting_queue) - - def __call__(self, **kwargs): - self.count += 1 - number = self.count - - self.waiting_queue.append(number) - - # wait until the next number in the queue is the current number - # while self.waiting_queue[0] != number: - # print(f"Wait for your turn {number} in queue {self.waiting_queue}") - # time.sleep(0.5) - # pass - - # it's your turn, so remove the number from the queue - # and call the function - print("It's the turn of", self.count) - results = self.pipe(**kwargs) - self.waiting_queue.pop(0) - flush() - return results - - -class SDPipeline: - def __init__(self): - self.pipe = StableDiffusionInpaintPipeline.from_pretrained( - "stabilityai/stable-diffusion-2-inpainting", - torch_dtype=torch.float16, - safety_checker=None, - ) - - self.pipe.enable_xformers_memory_efficient_attention() - self.pipe = self.pipe.to("cuda") - - self.waiting_queue = [] - self.count = 0 - - @property - def queue_size(self): - return len(self.waiting_queue) - - def __call__(self, **kwargs): - self.count += 1 - number = self.count - - self.waiting_queue.append(number) - - # wait until the next number in the queue is the current number - # while self.waiting_queue[0] != number: - # print(f"Wait for your turn {number} in queue {self.waiting_queue}") - # time.sleep(0.5) - # pass - - # it's your turn, so remove the number from the queue - # and call the function - print("It's the turn of", self.count) - results = self.pipe(**kwargs) - self.waiting_queue.pop(0) - flush() - return results - - -@st.experimental_singleton(max_entries=5,show_spinner=False) -def get_controlnet(): - """Method to load the controlnet model - Returns: - ControlNetModel: controlnet model - """ - pipe = ControlNetPipeline() - return pipe - - -@st.experimental_singleton(max_entries=5,show_spinner=False) -def get_inpainting_pipeline(): - """Method to load the inpainting pipeline - Returns: - StableDiffusionInpaintPipeline: inpainting pipeline - """ - pipe = SDPipeline() - return pipe diff --git a/spaces/akhaliq/Real-ESRGAN/realesrgan/archs/__init__.py b/spaces/akhaliq/Real-ESRGAN/realesrgan/archs/__init__.py deleted file mode 100644 index f3fbbf3b78e33b61fd4c33a564a9a617010d90de..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/Real-ESRGAN/realesrgan/archs/__init__.py +++ /dev/null @@ -1,10 +0,0 @@ -import importlib -from basicsr.utils import scandir -from os import path as osp - -# automatically scan and import arch modules for registry -# scan all the files that end with '_arch.py' under the archs folder -arch_folder = 
osp.dirname(osp.abspath(__file__)) -arch_filenames = [osp.splitext(osp.basename(v))[0] for v in scandir(arch_folder) if v.endswith('_arch.py')] -# import all the arch modules -_arch_modules = [importlib.import_module(f'realesrgan.archs.{file_name}') for file_name in arch_filenames] diff --git a/spaces/akhaliq/VQMIVC/ParallelWaveGAN/egs/vctk/voc1/local/data_prep.sh b/spaces/akhaliq/VQMIVC/ParallelWaveGAN/egs/vctk/voc1/local/data_prep.sh deleted file mode 100644 index eb17e4388a7b32fd61d396056f91ed7fb67ec48c..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/VQMIVC/ParallelWaveGAN/egs/vctk/voc1/local/data_prep.sh +++ /dev/null @@ -1,118 +0,0 @@ -#!/bin/bash - -# Copyright 2020 Tomoki Hayashi -# MIT License (https://opensource.org/licenses/MIT) - -# shellcheck disable=SC1091 -. ./path.sh || exit 1; - -fs=24000 -num_dev=10 -num_eval=10 -train_set="train_nodev" -dev_set="dev" -eval_set="eval" -shuffle=false - -# shellcheck disable=SC1091 -. utils/parse_options.sh || exit 1; - -db_root=$1 -spk=$2 -data_dir=$3 - -# check arguments -if [ $# != 3 ]; then - echo "Usage: $0 [Options] " - echo "e.g.: $0 downloads/VCTK-Corpus p225 data" - echo "" - echo "Options:" - echo " --fs: target sampling rate (default=24000)." - echo " --num_dev: number of development uttreances (default=10)." - echo " --num_eval: number of evaluation uttreances (default=10)." - echo " --train_set: name of train set (default=train_nodev)." - echo " --dev_set: name of dev set (default=dev)." - echo " --eval_set: name of eval set (default=eval)." - echo " --shuffle: whether to perform shuffle in making dev / eval set (default=false)." - exit 1 -fi - -set -euo pipefail - -# check spk existence -[ ! -e "${db_root}/lab/mono/${spk}" ] && \ - echo "${spk} does not exist." >&2 && exit 1; - -[ ! -e "${data_dir}/all" ] && mkdir -p "${data_dir}/all" - -# set filenames -scp="${data_dir}/all/wav.scp" -segments="${data_dir}/all/segments" - -# check file existence -[ -e "${scp}" ] && rm "${scp}" -[ -e "${segments}" ] && rm "${segments}" - -# make scp and segments -find "${db_root}/wav48/${spk}" -follow -name "*.wav" | sort | while read -r wav; do - id=$(basename "${wav}" | sed -e "s/\.[^\.]*$//g") - lab=${db_root}/lab/mono/${spk}/${id}.lab - - # check lab existence - if [ ! -e "${lab}" ]; then - echo "${id} does not have a label file. skipped." 
- continue - fi - - echo "${id} cat ${wav} | sox -t wav - -c 1 -b 16 -t wav - rate ${fs} |" >> "${scp}" - - # parse start and end time from HTS-style mono label - idx=1 - while true; do - next_idx=$((idx+1)) - next_symbol=$(sed -n "${next_idx}p" "${lab}" | awk '{print $3}') - if [ "${next_symbol}" != "pau" ]; then - start_nsec=$(sed -n "${idx}p" "${lab}" | awk '{print $2}') - break - fi - idx=${next_idx} - done - idx=$(wc -l < "${lab}") - while true; do - prev_idx=$((idx-1)) - prev_symbol=$(sed -n "${prev_idx}p" "${lab}" | awk '{print $3}') - if [ "${prev_symbol}" != "pau" ]; then - end_nsec=$(sed -n "${idx}p" "${lab}" | awk '{print $1}') - break - fi - idx=${prev_idx} - done - start_sec=$(echo "${start_nsec}*0.0000001" | bc | sed "s/^\./0./") - end_sec=$(echo "${end_nsec}*0.0000001" | bc | sed "s/^\./0./") - echo "${id} ${id} ${start_sec} ${end_sec}" >> "${segments}" -done - -# split -num_all=$(wc -l < "${scp}") -num_deveval=$((num_dev + num_eval)) -num_train=$((num_all - num_deveval)) -utils/split_data.sh \ - --num_first "${num_train}" \ - --num_second "${num_deveval}" \ - --shuffle "${shuffle}" \ - "${data_dir}/all" \ - "${data_dir}/${train_set}" \ - "${data_dir}/deveval" -utils/split_data.sh \ - --num_first "${num_dev}" \ - --num_second "${num_eval}" \ - --shuffle "${shuffle}" \ - "${data_dir}/deveval" \ - "${data_dir}/${dev_set}" \ - "${data_dir}/${eval_set}" - -# remove tmp directories -rm -rf "${data_dir}/all" -rm -rf "${data_dir}/deveval" - -echo "Successfully prepared data." diff --git a/spaces/alamin655/websurfx/src/cache/error.rs b/spaces/alamin655/websurfx/src/cache/error.rs deleted file mode 100644 index 62c9098be2a1632908bb792c9742b92dac36c2be..0000000000000000000000000000000000000000 --- a/spaces/alamin655/websurfx/src/cache/error.rs +++ /dev/null @@ -1,50 +0,0 @@ -//! This module provides the error enum to handle different errors associated while requesting data from -//! the redis server using an async connection pool. -use std::fmt; - -#[cfg(feature = "redis-cache")] -use redis::RedisError; - -/// A custom error type used for handling redis async pool associated errors. -#[derive(Debug)] -pub enum CacheError { - /// This variant handles all errors related to `RedisError`, - #[cfg(feature = "redis-cache")] - RedisError(RedisError), - /// This variant handles the errors which occurs when all the connections - /// in the connection pool return a connection dropped redis error. - PoolExhaustionWithConnectionDropError, - /// Whenever serialization or deserialization fails during communication with the cache. - SerializationError, - /// Returned when the value is missing. 
- MissingValue, -} - -impl fmt::Display for CacheError { - fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { - match self { - #[cfg(feature = "redis-cache")] - CacheError::RedisError(redis_error) => { - if let Some(detail) = redis_error.detail() { - write!(f, "{}", detail) - } else { - write!(f, "") - } - } - CacheError::PoolExhaustionWithConnectionDropError => { - write!( - f, - "Error all connections from the pool dropped with connection error" - ) - } - CacheError::MissingValue => { - write!(f, "The value is missing from the cache") - } - CacheError::SerializationError => { - write!(f, "Unable to serialize, deserialize from the cache") - } - } - } -} - -impl error_stack::Context for CacheError {} diff --git a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/colorama/winterm.py b/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/colorama/winterm.py deleted file mode 100644 index 0fdb4ec4e91090876dc3fbf207049b521fa0dd73..0000000000000000000000000000000000000000 --- a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/colorama/winterm.py +++ /dev/null @@ -1,169 +0,0 @@ -# Copyright Jonathan Hartley 2013. BSD 3-Clause license, see LICENSE file. -from . import win32 - - -# from wincon.h -class WinColor(object): - BLACK = 0 - BLUE = 1 - GREEN = 2 - CYAN = 3 - RED = 4 - MAGENTA = 5 - YELLOW = 6 - GREY = 7 - -# from wincon.h -class WinStyle(object): - NORMAL = 0x00 # dim text, dim background - BRIGHT = 0x08 # bright text, dim background - BRIGHT_BACKGROUND = 0x80 # dim text, bright background - -class WinTerm(object): - - def __init__(self): - self._default = win32.GetConsoleScreenBufferInfo(win32.STDOUT).wAttributes - self.set_attrs(self._default) - self._default_fore = self._fore - self._default_back = self._back - self._default_style = self._style - # In order to emulate LIGHT_EX in windows, we borrow the BRIGHT style. - # So that LIGHT_EX colors and BRIGHT style do not clobber each other, - # we track them separately, since LIGHT_EX is overwritten by Fore/Back - # and BRIGHT is overwritten by Style codes. 
- self._light = 0 - - def get_attrs(self): - return self._fore + self._back * 16 + (self._style | self._light) - - def set_attrs(self, value): - self._fore = value & 7 - self._back = (value >> 4) & 7 - self._style = value & (WinStyle.BRIGHT | WinStyle.BRIGHT_BACKGROUND) - - def reset_all(self, on_stderr=None): - self.set_attrs(self._default) - self.set_console(attrs=self._default) - self._light = 0 - - def fore(self, fore=None, light=False, on_stderr=False): - if fore is None: - fore = self._default_fore - self._fore = fore - # Emulate LIGHT_EX with BRIGHT Style - if light: - self._light |= WinStyle.BRIGHT - else: - self._light &= ~WinStyle.BRIGHT - self.set_console(on_stderr=on_stderr) - - def back(self, back=None, light=False, on_stderr=False): - if back is None: - back = self._default_back - self._back = back - # Emulate LIGHT_EX with BRIGHT_BACKGROUND Style - if light: - self._light |= WinStyle.BRIGHT_BACKGROUND - else: - self._light &= ~WinStyle.BRIGHT_BACKGROUND - self.set_console(on_stderr=on_stderr) - - def style(self, style=None, on_stderr=False): - if style is None: - style = self._default_style - self._style = style - self.set_console(on_stderr=on_stderr) - - def set_console(self, attrs=None, on_stderr=False): - if attrs is None: - attrs = self.get_attrs() - handle = win32.STDOUT - if on_stderr: - handle = win32.STDERR - win32.SetConsoleTextAttribute(handle, attrs) - - def get_position(self, handle): - position = win32.GetConsoleScreenBufferInfo(handle).dwCursorPosition - # Because Windows coordinates are 0-based, - # and win32.SetConsoleCursorPosition expects 1-based. - position.X += 1 - position.Y += 1 - return position - - def set_cursor_position(self, position=None, on_stderr=False): - if position is None: - # I'm not currently tracking the position, so there is no default. - # position = self.get_position() - return - handle = win32.STDOUT - if on_stderr: - handle = win32.STDERR - win32.SetConsoleCursorPosition(handle, position) - - def cursor_adjust(self, x, y, on_stderr=False): - handle = win32.STDOUT - if on_stderr: - handle = win32.STDERR - position = self.get_position(handle) - adjusted_position = (position.Y + y, position.X + x) - win32.SetConsoleCursorPosition(handle, adjusted_position, adjust=False) - - def erase_screen(self, mode=0, on_stderr=False): - # 0 should clear from the cursor to the end of the screen. - # 1 should clear from the cursor to the beginning of the screen. 
- # 2 should clear the entire screen, and move cursor to (1,1) - handle = win32.STDOUT - if on_stderr: - handle = win32.STDERR - csbi = win32.GetConsoleScreenBufferInfo(handle) - # get the number of character cells in the current buffer - cells_in_screen = csbi.dwSize.X * csbi.dwSize.Y - # get number of character cells before current cursor position - cells_before_cursor = csbi.dwSize.X * csbi.dwCursorPosition.Y + csbi.dwCursorPosition.X - if mode == 0: - from_coord = csbi.dwCursorPosition - cells_to_erase = cells_in_screen - cells_before_cursor - elif mode == 1: - from_coord = win32.COORD(0, 0) - cells_to_erase = cells_before_cursor - elif mode == 2: - from_coord = win32.COORD(0, 0) - cells_to_erase = cells_in_screen - else: - # invalid mode - return - # fill the entire screen with blanks - win32.FillConsoleOutputCharacter(handle, ' ', cells_to_erase, from_coord) - # now set the buffer's attributes accordingly - win32.FillConsoleOutputAttribute(handle, self.get_attrs(), cells_to_erase, from_coord) - if mode == 2: - # put the cursor where needed - win32.SetConsoleCursorPosition(handle, (1, 1)) - - def erase_line(self, mode=0, on_stderr=False): - # 0 should clear from the cursor to the end of the line. - # 1 should clear from the cursor to the beginning of the line. - # 2 should clear the entire line. - handle = win32.STDOUT - if on_stderr: - handle = win32.STDERR - csbi = win32.GetConsoleScreenBufferInfo(handle) - if mode == 0: - from_coord = csbi.dwCursorPosition - cells_to_erase = csbi.dwSize.X - csbi.dwCursorPosition.X - elif mode == 1: - from_coord = win32.COORD(0, csbi.dwCursorPosition.Y) - cells_to_erase = csbi.dwCursorPosition.X - elif mode == 2: - from_coord = win32.COORD(0, csbi.dwCursorPosition.Y) - cells_to_erase = csbi.dwSize.X - else: - # invalid mode - return - # fill the entire screen with blanks - win32.FillConsoleOutputCharacter(handle, ' ', cells_to_erase, from_coord) - # now set the buffer's attributes accordingly - win32.FillConsoleOutputAttribute(handle, self.get_attrs(), cells_to_erase, from_coord) - - def set_title(self, title): - win32.SetConsoleTitle(title) diff --git a/spaces/aliabd/SummerTime/model/third_party/HMNet/DataLoader/infinibatch/test/test_iterators.py b/spaces/aliabd/SummerTime/model/third_party/HMNet/DataLoader/infinibatch/test/test_iterators.py deleted file mode 100644 index 08d5e2465dec4f684435fb1663bd9566a8cfc27b..0000000000000000000000000000000000000000 --- a/spaces/aliabd/SummerTime/model/third_party/HMNet/DataLoader/infinibatch/test/test_iterators.py +++ /dev/null @@ -1,601 +0,0 @@ -# Copyright (c) Microsoft Corporation. -# Licensed under the MIT license. 
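# A minimal sketch of the checkpointing contract exercised by the tests below: getstate()
# snapshots an iterator's position, setstate() rewinds to that snapshot, and replaying
# from it must yield exactly the same items. `_example_checkpoint_roundtrip` is an
# illustrative helper, not part of the infinibatch API; it relies only on
# NativeCheckpointableIterator and the getstate/setstate methods used throughout this file.
def _example_checkpoint_roundtrip():
    from infinibatch.iterators import NativeCheckpointableIterator

    it = NativeCheckpointableIterator(list(range(10)))
    head = [next(it) for _ in range(4)]  # consume a few items
    checkpoint = it.getstate()           # snapshot the current position
    tail_a = list(it)                    # drain the remainder
    it.setstate(checkpoint)              # rewind to the snapshot
    tail_b = list(it)                    # replaying must reproduce the same items
    assert tail_a == tail_b and head == list(range(4))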
- -import gzip -import itertools -from random import Random -import os -import shutil -import tempfile -from typing import Iterable, Iterator, Any, Union -import unittest -import pickle -import gc - -from infinibatch.iterators import ( - create_source_iterator, - ChunkedSourceIterator, - InfinitePermutationSourceIterator, - BufferedShuffleIterator, - BlockwiseShuffleIterator, - NativeCheckpointableIterator, - BucketedReadaheadBatchIterator, - MapIterator, - ParallelMapIterator, - ZipIterator, - FixedBatchIterator, - WindowedIterator, - SelectManyIterator, - RandomIterator, - RecurrentIterator, - SamplingRandomMapIterator, - PrefetchIterator, -) -from infinibatch.datasets import chunked_dataset_iterator - - -# TODO: -# - make sure that all iterators can be reset to a checkpoint even after they were exhausted -# - make sure that all iterators can be reset to a checkpoint that was taken after the iterator was exhausted -# - make sure that all iterators can be reset to a checkpoint at the beginning of the iteration -# - refactor test cases that do not rely on TestCheckpointableIterator -# - make sure every iterator is tested for correct checkpointing at the end of the iterator - - -class TestCheckpointableIterator: - """ - These are common test cases for CheckointableIterators - - Inherit from this class and set self.iterator and self.expected_result in the setUp function to use. - """ - - def test_basic(self): - self.assertListEqual(list(self.iterator), self.expected_result) - - def test_checkpointing_from_start(self): - for _ in range(len(self.expected_result)): - next(self.iterator) - self.iterator.setstate(None) - self.assertListEqual(list(self.iterator), self.expected_result) - - def test_checkpointing_in_middle(self): - result = [next(self.iterator) for _ in range(len(self.expected_result) // 3)] - self.iterator.setstate(self.iterator.getstate()) - result += [item for item in self.iterator] - self.assertListEqual(result, self.expected_result) - - def test_checkpointing_at_end(self): - for _ in range(len(self.expected_result)): - next(self.iterator) - self.iterator.setstate(self.iterator.getstate()) - self.assertRaises(StopIteration, self.iterator.__next__) - - -class TestBase(unittest.TestCase): - def setUp(self): - self.test_data = [ - [ - "item number one", - "item number two", - "item number three", - "item number four", - ], - ["item number five"], - [ - "item number six", - "item number seven", - "item number eight", - "item number nine", - "item number ten", - "item number eleven", - ], - [ - "item number twelve", - "item number thirteen", - "item number fourteen", - ], - ] - - self.flattened_test_data = [] - for chunk in self.test_data: - for item in chunk: - self.flattened_test_data.append(item) - - self.data_dir = tempfile.mkdtemp() - self.chunk_file_paths = [] - for chunk_id, chunk in enumerate(self.test_data): - file_name = os.path.join( - self.data_dir, "chunk_" + str(chunk_id).zfill(10) + ".gz" - ) - self.chunk_file_paths.append(file_name) - file_content = "\n".join(chunk) - with gzip.open(file_name, "wt", encoding="utf-8") as f: - f.write(file_content) - - @staticmethod - def read_chunk( - textfile_path: str, - ) -> Iterator[str]: # read_chunk_fn for chunked_dataset_iterator - with gzip.open(textfile_path, "rt", encoding="utf-8") as f: - return iter(f.read().splitlines()) - - def tearDown(self): - gc.collect() # this will get the pre-fetch terminated in some tests, which otherwise may still want to read these files - shutil.rmtree(self.data_dir) - - def 
assertMultisetEqual(self, a, b): - self.assertEqual(len(a), len(b)) - self.assertSetEqual(set(a), set(b)) - - -class TestSourceIterator(unittest.TestCase): - def test_exception(self): - self.assertRaises( - ValueError, create_source_iterator, [1], train=False, shuffle=True - ) - - -class TestChunkedSourceIterator(unittest.TestCase, TestCheckpointableIterator): - def setUp(self): - self.expected_result = list(range(53)) - self.iterator = ChunkedSourceIterator(self.expected_result) - - def test_multiple_instance(self): - for num_instances in range(2, 17): - items = [] - for rank in range(num_instances): - iterator = ChunkedSourceIterator( - self.expected_result, - num_instances=num_instances, - instance_rank=rank, - ) - items.extend(list(iterator)) - self.assertListEqual(items, self.expected_result) - - -class TestInfinitePermutationSourceIterator(TestBase): - def test_repeat_once(self): - # This tests that two consecutive iterations through the test data yields differently ordered sequences. - reader = iter(InfinitePermutationSourceIterator(self.flattened_test_data, 42)) - items0 = list(itertools.islice(reader, len(self.flattened_test_data))) - items1 = list(itertools.islice(reader, len(self.flattened_test_data))) - self.assertMultisetEqual(items0 + items1, self.flattened_test_data * 2) - self.assertTrue(any(item0 != item1 for item0, item1 in zip(items0, items1))) - - def test_reiter_once(self): - # This differs from test_repeat_once in that we use checkpoints. - reader = InfinitePermutationSourceIterator(self.flattened_test_data, 42) - checkpoint = reader.getstate() - items0 = list(itertools.islice(reader, len(self.flattened_test_data))) - reader.setstate(checkpoint) - items1 = list(itertools.islice(reader, len(self.flattened_test_data))) - self.assertMultisetEqual(items0 + items1, self.flattened_test_data * 2) - self.assertSequenceEqual(items0, items1) - - def test_checkpointing(self): - random = Random() - for i in range(5): - # random sequence lengths to for testing different configurations - test_source_length = random.randrange(5, 25) - test_first_output_length = random.randrange(5, 25) - test_second_output_length = random.randrange(5, 25) - # source - test_source = list(range(test_source_length)) - reader = InfinitePermutationSourceIterator(test_source, seed=i) - # fetch a first sequence - _ = list(itertools.islice(reader, test_first_output_length)) - # fetch a second sequence - checkpoint = reader.getstate() - items1a = list(itertools.islice(reader, test_second_output_length)) - # fetch that second sequence again via checkpointing - reader.setstate(checkpoint) - items1b = list(itertools.islice(reader, test_second_output_length)) - # and again with serialized checkpoint - as_json = pickle.dumps(checkpoint) - checkpoint2 = pickle.loads(as_json) - reader.setstate(checkpoint2) - items1c = list(itertools.islice(reader, test_second_output_length)) - # must be the same - self.assertTrue(items1a == items1b) - self.assertTrue(items1a == items1c) - - -class TestNativeCheckpointableIterator(unittest.TestCase, TestCheckpointableIterator): - def setUp(self): - self.expected_result = list(range(53)) - self.iterator = NativeCheckpointableIterator(self.expected_result) - - def test_iterator_exception(self): - self.assertRaises(ValueError, NativeCheckpointableIterator, iter(range(10))) - - -class TestRecurrentIterator(unittest.TestCase, TestCheckpointableIterator): - def setUp(self): - data = list(range(53)) - - self.expected_result = [0] - for i in data[1:]: - 
self.expected_result.append(self.expected_result[-1] + i) - - def step_function(prev_state, item): - output = item + prev_state - new_state = output - return new_state, output - - self.iterator = RecurrentIterator( - NativeCheckpointableIterator(data), step_function, initial_state=0 - ) - - -class TestSamplingRandomMapIterator(unittest.TestCase, TestCheckpointableIterator): - def setUp(self): - data = list(range(53)) - - def transform(random: Random, item: int): - return item + random.random() - - seed = 1 - random = Random() - random.seed(seed) - self.expected_result = [n + random.random() for n in data] - - self.iterator = SamplingRandomMapIterator( - NativeCheckpointableIterator(data), transform=transform, seed=seed - ) - - -class TestFixedBatchIterator(unittest.TestCase, TestCheckpointableIterator): - def setUp(self): - data = list(range(5)) - - batch_size = 3 - self.expected_result = [data[0:3], data[3:]] - - self.iterator = FixedBatchIterator( - NativeCheckpointableIterator(data), batch_size=batch_size - ) - - -class TestSelectManyIterator(TestBase): - # in this test, SelectManyIterator is used to read chunk files - @staticmethod - def _select_many_from_chunks(chunk_file_paths): - return SelectManyIterator( - source_iterator=chunk_file_paths, collection_selector=TestBase.read_chunk - ) - - def test(self): - items = list( - self._select_many_from_chunks( - NativeCheckpointableIterator(self.chunk_file_paths) - ) - ) - self.assertListEqual(items, self.flattened_test_data) - - def test_no_selector(self): - data = list(range(100)) - sublists = [data[:10], data[10:42], data[42:87], data[87:]] - result = list(SelectManyIterator(NativeCheckpointableIterator(sublists))) - self.assertListEqual(result, data) - - def test_different_line_endings(self): - # write data in binary mode with LF line endings - lf_dir = tempfile.mkdtemp() - lf_file = os.path.join(lf_dir, "test.gz") - with gzip.open(lf_file, "w") as f: - f.write("\n".join(self.flattened_test_data).encode("utf-8")) - - # write data in binary mode with CRLF line endings - crlf_dir = tempfile.mkdtemp() - crlf_file = os.path.join(crlf_dir, "test.gz") - with gzip.open(crlf_file, "w") as f: - f.write("\r\n".join(self.flattened_test_data).encode("utf-8")) - - lf_data = list( - self._select_many_from_chunks(NativeCheckpointableIterator([lf_file])) - ) - crlf_dat = list( - self._select_many_from_chunks(NativeCheckpointableIterator([crlf_file])) - ) - self.assertListEqual(lf_data, crlf_dat) - - shutil.rmtree(lf_dir) - shutil.rmtree(crlf_dir) - - def test_checkpointing(self): - chunk_file_paths = [ - os.path.join(self.data_dir, subpath.name) - for subpath in os.scandir(self.data_dir) - if subpath.is_file() and subpath.name.endswith(".gz") - ] - chunk_file_paths = InfinitePermutationSourceIterator( - chunk_file_paths, shuffle=False - ) # using this as checkpointed cycle() - random = Random(1) - for _ in range(5): - first_length = random.randrange(11, 31) - extra_length = random.randrange(11, 33) - dataset = self._select_many_from_chunks(chunk_file_paths) - for _ in range(first_length): - next(dataset) - checkpoint = dataset.getstate() - items0 = list(itertools.islice(dataset, extra_length)) - # print(len(items0)) - dataset.setstate(checkpoint) - items1 = list(itertools.islice(dataset, extra_length)) - # print(len(items1)) - self.assertListEqual(items0, items1) - - -class TestBufferedShuffleIterator(TestBase): - def test_shuffle(self): - # work on copy of data in case data is modified by class - items = list( - BufferedShuffleIterator( - 
NativeCheckpointableIterator(self.flattened_test_data.copy()), 971, 42 - ) - ) - self.assertMultisetEqual(items, self.flattened_test_data) - - def test_shuffle_buffer_size_one(self): - # work on copy of data in case data is modified by class - items = list( - BufferedShuffleIterator( - NativeCheckpointableIterator(self.flattened_test_data.copy()), 1, 42 - ) - ) - self.assertListEqual(items, self.flattened_test_data) - - -# note: this is also tested in more depth in Test_chunked_dataset_iterator() -class TestBlockwiseShuffleIterator(TestBase): - def test_shuffle(self): - # work on copy of data in case data is modified by class - items = list( - BlockwiseShuffleIterator( - NativeCheckpointableIterator(self.flattened_test_data.copy()), 971, 42 - ) - ) - self.assertMultisetEqual(items, self.flattened_test_data) - - def test_shuffle_buffer_size_one(self): - # work on copy of data in case data is modified by class - items = list( - BlockwiseShuffleIterator( - NativeCheckpointableIterator(self.flattened_test_data.copy()), 1, 42 - ) - ) - self.assertListEqual(items, self.flattened_test_data) - - -def map_fun(n): - return n + 1 - - -class TestMapIterator(unittest.TestCase, TestCheckpointableIterator): - def setUp(self): - data = list(range(53)) - self.expected_result = [map_fun(n) for n in data] - self.iterator = MapIterator(NativeCheckpointableIterator(data), map_fun) - - -class TestParallelMapIterator(unittest.TestCase, TestCheckpointableIterator): - def setUp(self): - data = list(range(53)) - self.expected_result = [map_fun(n) for n in data] - self.iterator = ParallelMapIterator( - NativeCheckpointableIterator(data), map_fun, 5, 7 - ) - - -class TestZipIterator(unittest.TestCase, TestCheckpointableIterator): - def setUp(self): - data1 = list(range(53)) - data2 = [n * n for n in data1] - self.expected_result = list(zip(data1, data2)) - self.iterator = ZipIterator( - NativeCheckpointableIterator(data1), NativeCheckpointableIterator(data2) - ) - - -class TestWindowedIterator(TestBase): - def test(self): - for n in [0, 2, 3, 8, 9, 10, 11, 12]: # cover various boundary conditions - seq = list(range(n)) - it = WindowedIterator(NativeCheckpointableIterator(seq), 3) - actual0 = list(itertools.islice(it, n * 3 // 10)) - checkpoint = it.getstate() - actual1a = list(it) - it.setstate(checkpoint) - actual1b = list(it) - actual = actual0 + actual1a - expected = list( - zip(seq, itertools.islice(seq, 1, None), itertools.islice(seq, 2, None)) - ) - self.assertListEqual(actual, expected) # basic operation - self.assertListEqual(actual1a, actual1b) # checkpointing - - -class TestRandomIterator(TestBase): - def test(self): - n = 100 - it = RandomIterator(seed=1) - _ = list(itertools.islice(it, n * 3 // 10)) - checkpoint = it.getstate() - items1a = list(itertools.islice(it, n * 7 // 10)) - it.setstate(checkpoint) - items1b = list(itertools.islice(it, n * 7 // 10)) - self.assertListEqual(items1a, items1b) - - -class TestPrefetchIterator(unittest.TestCase, TestCheckpointableIterator): - def setUp(self): - self.expected_result = list(range(53)) - source_iterator = NativeCheckpointableIterator(self.expected_result) - self.iterator = PrefetchIterator(source_iterator, buffer_size=13) - - -class Test_chunked_dataset_iterator(TestBase): - def test_no_shuffle(self): - items = list( - itertools.islice( - chunked_dataset_iterator( - self.chunk_file_paths, - self.read_chunk, - shuffle=False, - buffer_size=1000, - ), - len(self.flattened_test_data), - ) - ) - self.assertListEqual(items, self.flattened_test_data) - - def 
test_other_files_present(self): - with open(os.path.join(self.data_dir, "i_do_not_belong_here.txt"), "w") as f: - f.write("really ...") - items = list( - itertools.islice( - chunked_dataset_iterator( - self.chunk_file_paths, - self.read_chunk, - shuffle=False, - buffer_size=1000, - ), - len(self.flattened_test_data), - ) - ) - self.assertListEqual(items, self.flattened_test_data) - - def test_transform(self): - transform = lambda s: s + "!" - modified_test_data = [transform(s) for s in self.flattened_test_data] - items = list( - itertools.islice( - chunked_dataset_iterator( - self.chunk_file_paths, - self.read_chunk, - shuffle=False, - buffer_size=1000, - transform=transform, - ), - len(self.flattened_test_data), - ) - ) - self.assertListEqual(items, modified_test_data) - - def test_two_instances(self): - dataset0 = chunked_dataset_iterator( - self.chunk_file_paths, - self.read_chunk, - shuffle=False, - buffer_size=1000, - num_instances=2, - instance_rank=0, - ) - dataset1 = chunked_dataset_iterator( - self.chunk_file_paths, - self.read_chunk, - shuffle=False, - buffer_size=1000, - num_instances=2, - instance_rank=1, - ) - items0 = list( - itertools.islice(dataset0, len(self.test_data[0]) + len(self.test_data[2])) - ) - items1 = list( - itertools.islice(dataset1, len(self.test_data[1]) + len(self.test_data[3])) - ) - self.assertMultisetEqual(set(items0 + items1), self.flattened_test_data) - - def test_checkpointing(self): - random = Random(1) - for use_windowed in (True, False): - for i in range(2): - first_length = random.randrange(11, 21) - extra_length = random.randrange(11, 21) - dataset = chunked_dataset_iterator( - self.chunk_file_paths, - self.read_chunk, - shuffle=(i % 2 == 0), - buffer_size=1000, - seed=i, - num_instances=2, - instance_rank=0, - use_windowed=use_windowed, - ) - for _ in range(first_length): - next(dataset) - checkpoint = dataset.getstate() - items1 = list(itertools.islice(dataset, extra_length)) - dataset.setstate(checkpoint) - items2 = list(itertools.islice(dataset, extra_length)) - self.assertListEqual(items1, items2) - - -class TestBucketedReadaheadBatchIterator(TestBase): - def txest_basic_functionality(self): - num_batches = 13 - batch_labels = ( - 75 # note: these settings imply a few iterations through the chunks - ) - # basic operation, should not crash - bg = BucketedReadaheadBatchIterator( - chunked_dataset_iterator( - self.chunk_file_paths, - self.read_chunk, - shuffle=True, - buffer_size=1000, - seed=1, - ), - read_ahead=100, - seed=1, - key=lambda line: len(line), - batch_size=lambda line: batch_labels // (1 + len(line)), - ) - batches1 = list(itertools.islice(bg, num_batches)) - # verify determinism - bg = BucketedReadaheadBatchIterator( - chunked_dataset_iterator( - self.chunk_file_paths, - self.read_chunk, - shuffle=True, - buffer_size=1000, - seed=1, - ), - read_ahead=100, - seed=1, - key=lambda line: len(line), - batch_size=lambda line: batch_labels // (1 + len(line)), - ) - batches2 = list(itertools.islice(bg, num_batches)) - print([(len(batch[0]), len(batch)) for batch in batches1]) - self.assertListEqual(batches1, batches2) - - def test_checkpointing(self): - first_batches = 12 - extra_batches = 7 - batch_labels = 123 - bg = BucketedReadaheadBatchIterator( - chunked_dataset_iterator( - self.chunk_file_paths, - self.read_chunk, - shuffle=True, - buffer_size=1000, - seed=1, - ), - read_ahead=100, - seed=1, - key=lambda line: len(line), - batch_size=lambda line: batch_labels // (1 + len(line)), - ) - _ = list(itertools.islice(bg, first_batches)) 
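# The batch_size lambda passed to BucketedReadaheadBatchIterator above makes batches
# smaller for longer lines, so each batch stays within a rough budget of batch_labels
# characters, while the key function lets lines of similar length be grouped within the
# read-ahead window. The islice() call above consumes first_batches batches; the lines
# below snapshot the iterator state and check that replaying from that snapshot
# reproduces exactly the same extra_batches batches.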
- checkpoint = bg.getstate() - batches1 = list(itertools.islice(bg, extra_batches)) - bg.setstate(checkpoint) - batches2 = list(itertools.islice(bg, extra_batches)) - self.assertListEqual(batches1, batches2) - - -if __name__ == "__main__": - unittest.main() diff --git a/spaces/alistairmcleay/cambridge-masters-project/src/crazyneuraluser/user_model_code/analysis_sgd.py b/spaces/alistairmcleay/cambridge-masters-project/src/crazyneuraluser/user_model_code/analysis_sgd.py deleted file mode 100644 index bcdeaf1702a0cbd04f891f463d69470ac08d7bbc..0000000000000000000000000000000000000000 --- a/spaces/alistairmcleay/cambridge-masters-project/src/crazyneuraluser/user_model_code/analysis_sgd.py +++ /dev/null @@ -1,483 +0,0 @@ -import json -import os - -from utils_sgd import ( - bcolors, - compare_slot_values_in_state, - dict2str, - get_turn_act, - list2str, -) - -""" This file contains some utilities for analysis and parsing SGD """ - -DATA_SPLIT = ["train", "dev", "test"] - - -def collect_data(data_path, remove_dial_switch=False): - data = {} - for split in DATA_SPLIT: - data[split] = iter_data_folder(data_path, split, remove_dial_switch) - return data - - -def _remove_dial(dial_id, dial): - # remove_flag = False - # removes service `Homes_2` in test set as the slot `intent` is the same name as the user intent, - # which causes problem in goal preparation - if "Homes_2" in dial["services"]: - return True - return False - - -def iter_data_folder(data_path, split, remove_dial_switch): - """Iterate data split folder""" - split_dir = os.path.join(data_path, split) - data_split = {} - remove_dial_ids = [] - total_dial_ids = [] - for f in os.listdir(split_dir): - if not f.startswith("dialogues"): # skip schema.json - continue - file_path = os.path.join(data_path, split, f) - iter_file( - file_path, data_split, remove_dial_ids, total_dial_ids, remove_dial_switch - ) - print( - "Done collecting {} | total {} dialogues | load {} dialogues | remove {} dialogues".format( - split, len(total_dial_ids), len(data_split), len(remove_dial_ids) - ) - ) - return data_split - - -def iter_file( - file_path, data_split, remove_dial_ids, total_dial_ids, remove_dial_switch -): - """Iterate data file""" - with open(file_path) as f: - data_in = json.load(f) # list of dialouges in a json file - - for dial in data_in: - dial_id = dial["dialogue_id"] - total_dial_ids.append(dial_id) - - if remove_dial_switch and _remove_dial(dial_id, dial): - remove_dial_ids.append(dial_id) - else: - data_split[dial_id] = dial - - -def check_multiple_services_per_turn(data): - for split in DATA_SPLIT: - for dial_id in sorted(data[split].keys()): - dial = data[split][dial_id] - for turn_id, turn in enumerate(dial["turns"]): - frames = turn["frames"] - if len(frames) > 1: - print(split, dial_id, turn_id, turn["utterance"]) - - -def show_actions(actions): - for action_id, action in enumerate(actions): - act, slot, values = action["act"], action["slot"], action["values"] - print( - f"====> ACTION | Act {action_id}: {bcolors.RED}{act}{bcolors.ENDC}, \ - slot: {bcolors.YELLOW}{slot}{bcolors.ENDC}, values: {bcolors.GREEN}{values}{bcolors.ENDC}" - ) - - -def show_user_state(frame): - state = frame["state"] - active_intent = state["active_intent"] - req_slots = list2str(state["requested_slots"]) - slot2value = dict2str(state["slot_values"], colored=True) - print( - "====> STATE | intent: {}, req_slots: {}, slot2value: {}".format( - active_intent, req_slots, slot2value - ) - ) - - -def show_service_call(frame): - if "service_call" not in frame: - return 
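# The early return above skips frames that carry no API call; for the remaining system
# frames, the lines below print the called method, its parameters, and how many results
# the service returned.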
- # system calls api - service_call, service_results = frame["service_call"], frame["service_results"] - print( - "====> API call | method: {}, args: {}, results: {}".format( - service_call["method"], - dict2str(service_call["parameters"]), - len(service_results), - ) - ) - - -def show_frame(spk, frame_id, frame): - service = frame["service"] - print("==> Frame_id: {}, service: {}".format(frame_id, service)) - - # actions (include all slots) - show_actions(frame["actions"]) - - # slots (only provide non-categorical slots with word span boundaries) - if spk == "USER": - show_user_state(frame) - else: # system - show_service_call(frame) - - -def show_turn(turn_id, turn): - if turn is None: - return - - frames = turn["frames"] - spk = turn["speaker"] - utt = turn["utterance"] - assert spk in ["USER", "SYSTEM"] - print(f"{spk}: {bcolors.UNDERLINE}{utt}{bcolors.ENDC}") - for frame_id, frame in enumerate(frames): - show_frame(spk, frame_id, frame) - print("------" * 15) - - -def show_dial_info(dial_id, dial): - print("\n") - print("******" * 15) - print("Dialogue={} | Service={}".format(dial_id, list2str(dial["services"]))) - print("******" * 15) - - -def show_dial(dial_id, dial): - show_dial_info(dial_id, dial) - for turn_id, turn in enumerate(dial["turns"]): - show_turn(turn_id, turn) - - -def show_data(data): - for split in DATA_SPLIT: - for dial_id in sorted(data[split].keys()): - dial = data[split][dial_id] - show_dial(dial_id, dial) - input("press...") - - -def identify_scenarios(data): - """ - According to dataset paper, a scenario is a sequence of intents, seeded at the start of a conversation - to the user agent - """ - # TODO: deal with NONE intent, check the # of intent seq conbinations - for split in DATA_SPLIT: - scenario2dialogues = {} - n_scenario_max, n_scenario_min = 0, 100 - for dial_id in sorted(data[split].keys()): - dial = data[split][dial_id] - scenario = [] - for turn in dial["turns"]: - if turn["speaker"] == "SYSTEM": - continue - # USER turn - # it's fine to consider only first frame (service) if the turn is at the bounrary between two services - frame = turn["frames"][0] - intent = frame["state"]["active_intent"] - if intent == "NONE": - continue - if len(scenario) == 0 or intent != scenario[-1]: - scenario.append(intent) - - # update count - if len(scenario) > n_scenario_max: - n_scenario_max = len(scenario) - if len(scenario) < n_scenario_min: - n_scenario_min = len(scenario) - - scenario = list2str(scenario) - if scenario not in scenario2dialogues: - scenario2dialogues[scenario] = [] - scenario2dialogues[scenario].append(dial_id) - - # done iter over split - print( - "Summary: split={}, unique_scenario={}, max_intent={}, min_intent={}".format( - split, len(scenario2dialogues), n_scenario_max, n_scenario_min - ) - ) - - -def _check_request_alts_type(prev_turn, sys_turn, curr_turn, curr_acts): - """ - check which of the following happens when request_alts - 1. randomly change goal (state changes) - 2. request_alts as system provides venue with missing slot-value (usr provides new info) - 3. 
simply dislike the provided venue, change venue without new slot-value (same info) - - Input: - prev_turn: previous user turn - curr_turn: current user turn - """ - - def _get_intent2state(turn): - intent2state = {} - for frame in turn["frames"]: - state = frame["state"] - intent = state["active_intent"] - intent2state[intent] = state - return intent2state - - assert "REQUEST_ALTS" in curr_acts - if len(curr_acts) == 1: # case 3 - # return "_dislike_" - if "OFFER" in get_turn_act(sys_turn): - return "_dislike_offer_" - else: - return "_dislike_info_" - elif ( - "INFORM" in curr_acts and len(set(curr_acts)) == 2 - ): # only inform and request_alts - assert len(curr_turn["frames"]) == 1 - curr_slot_values = curr_turn["frames"][0]["state"]["slot_values"] - curr_intent = curr_turn["frames"][0]["state"]["active_intent"] - - if len(prev_turn["frames"]) == 1: - prev_slot_values = prev_turn["frames"][0]["state"]["slot_values"] - else: # need to get the state with the same intent - intent2state = _get_intent2state(prev_turn) - prev_slot_values = intent2state[curr_intent]["slot_values"] - - state_diff = compare_slot_values_in_state(prev_slot_values, curr_slot_values) - if state_diff: # case 1 - return "_random_" - else: # case 2 - return "_miss_" - else: - return "_unknown_" - - -def stats_request_alts_type(data): - for split in DATA_SPLIT: - stats = { - "_random_": 0, - "_miss_": 0, - "_dislike_offer_": 0, - "_dislike_info_": 0, - "_unknown_": 0, - } - n_all_usr_turn, n_request_alts = 0, 0 - - for dial_id in sorted(data[split].keys()): - dial = data[split][dial_id] - for turn_id, turn in enumerate(dial["turns"]): - prev_turn = turn - if turn["speaker"] == "SYSTEM": - sys_turn = turn - continue - acts = get_turn_act(turn) - if "REQUEST_ALTS" in acts: - n_request_alts += 1 - type_result = _check_request_alts_type( - prev_turn, sys_turn, turn, acts - ) - stats[type_result] += 1 - if type_result == "_random_": - print("CASE {}".format(type_result)) - show_turn(0, prev_turn) - show_turn(0, sys_turn) - show_turn(0, turn) - input("press...") - n_all_usr_turn += 1 - prev_turn = turn - - print("REQUEST_ALTS type statistics") - for k, v in stats.items(): - print("{} => {}".format(k, v)) - print( - "request_alts turns: {}, all usr turns: {}, dialogues: {}".format( - n_request_alts, n_all_usr_turn, len(data[split]) - ) - ) - - -def show_utt_by_act(data): - target_act = "OFFER" - for split in DATA_SPLIT: - for dial_id in sorted(data[split].keys()): - dial = data[split][dial_id] - match_flag = False - for turn_id, turn in enumerate(dial["turns"]): - acts = get_turn_act(turn) - if target_act in acts: - match_flag = True - if match_flag: - show_dial(dial_id, dial) - input("press...") - - -def show_state_with_value_change(data): - for split in DATA_SPLIT: - for dial_id in sorted(data[split].keys()): - dial = data[split][dial_id] - intent2slot_values = {} - for turn_id, turn in enumerate(dial["turns"]): - utt, spk = turn["utterance"], turn["speaker"] - if spk != "USER": - prev_system_turn = turn - continue - for frame in turn["frames"]: - state = frame["state"] - active_intent = state["active_intent"] - slot_values = state["slot_values"] - if active_intent in intent2slot_values: - state_diff = compare_slot_values_in_state( - intent2slot_values[active_intent], slot_values - ) - if state_diff: - print( - "Dial: {}, state change: {}".format(dial_id, state_diff) - ) - print( - "==> Prev SYS: {}".format(prev_system_turn["utterance"]) - ) - for sys_frame in prev_system_turn["frames"]: - show_actions(sys_frame["actions"]) 
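# The lines below print the current user turn, its actions, and both the previously
# recorded and the newly observed slot_values, so the goal change can be inspected side
# by side before the recorded state is overwritten with the new one.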
- print("==> Curr USR: {}".format(utt)) - show_actions(frame["actions"]) - print( - "recorded state => intent: {}, slot2value: {}".format( - active_intent, - dict2str(intent2slot_values[active_intent]), - ) - ) - print( - "current state => intent: {}, slot2value: {}".format( - active_intent, dict2str(slot_values) - ) - ) - input("press...") - intent2slot_values[ - active_intent - ] = slot_values # overlap with new state, no matter values changed or not - - -def check_state_with_value_change(data, display=False): - for split in DATA_SPLIT: - n_diff = {"NOTIFY_FAILURE": 0, "NEGATE": 0, "REQUEST_ALTS": 0, "RANDOM": 0} - for dial_id in sorted(data[split].keys()): - dial = data[split][dial_id] - intent2slot_values = {} - diff_flag = False - for turn_id, turn in enumerate(dial["turns"]): - if diff_flag: - break - utt, spk = turn["utterance"], turn["speaker"] - if spk != "USER": - prev_system_turn = turn - continue - for frame in turn["frames"]: - state = frame["state"] - active_intent = state["active_intent"] - slot_values = state["slot_values"] - if active_intent in intent2slot_values: - state_diff = compare_slot_values_in_state( - intent2slot_values[active_intent], slot_values - ) - if state_diff: - usr_acts = get_turn_act(turn) - if "NOTIFY_FAILURE" in get_turn_act(prev_system_turn): - if display: - print("FAILURE", dial_id, utt) - n_diff["NOTIFY_FAILURE"] += 1 - elif "NEGATE" in usr_acts: - if display: - print("NEGATE", dial_id, utt) - n_diff["NEGATE"] += 1 - elif "REQUEST_ALTS" in usr_acts: - if display: - print("REQUEST_ALTS", dial_id, utt) - n_diff["REQUEST_ALTS"] += 1 - else: - if display: - print("RANDOM", dial_id, utt) - n_diff["RANDOM"] += 1 - if display: - input("press...") - # n_diff += 1 - diff_flag = True - intent2slot_values[ - active_intent - ] = slot_values # overlap with new state, no matter values changed or not - n = ( - n_diff["NOTIFY_FAILURE"] - + n_diff["NEGATE"] - + n_diff["REQUEST_ALTS"] - + n_diff["RANDOM"] - ) - print( - "{} => total dials: {}, change goal dials: {} (total: {})".format( - split, len(data[split]), dict2str(n_diff), n - ) - ) - - -def stats_after_system(data): - """ - check the possible user behavior right after system offers/notify_failure - """ - n = 0 - stats = { - "SELECT": 0, - "REQUEST_ALTS": 0, - "REQUEST": 0, - "AFFIRM": 0, - "unknown": 0, - } # if system offers - # stats = {"INFORM": 0, "AFFIRM": 0, "NEGATE": 0, "unknown": 0} # if system notify_failure - for split in DATA_SPLIT: - for dial_id in sorted(data[split].keys()): - dial = data[split][dial_id] - for turn_id, turn in enumerate(dial["turns"]): - if turn_id == 0: - prev_turn = turn - continue - if turn["speaker"] == "SYSTEM": - sys_turn = turn - continue - - if "OFFER" in get_turn_act(sys_turn): - # if "OFFER" in get_turn_act(sys_turn) and "NOTIFY_FAILURE" in get_turn_act(sys_turn): - # if "NOTIFY_FAILURE" in get_turn_act(sys_turn): - n += 1 - acts = get_turn_act(turn) - # OFFER - if "SELECT" in acts: - stats["SELECT"] += 1 - elif "REQUEST_ALTS" in acts: - stats["REQUEST_ALTS"] += 1 - elif "REQUEST" in acts: - stats["REQUEST"] += 1 - elif ( - "AFFIRM" in acts - ): # cases fall into here are SYS_ACT: ["OFFER", "NOTIFY_FAILURE"], and USR_ACT: ["AFFIRM"], - # e.g., accept new proposal - show_turn(0, prev_turn) - show_turn(0, sys_turn) - show_turn(0, turn) - input("press...") - stats["AFFIRM"] += 1 - else: - stats["unknown"] += 1 - - # NOTIFY_FAILURE - # if "INFORM" in acts: - # stats["INFORM"] += 1 - # elif "AFFIRM" in acts: - # stats["AFFIRM"] += 1 - # elif "NEGATE" in acts: - # 
stats["NEGATE"] += 1 - # else: - # stats["unknown"] += 1 - - prev_turn = turn - for k, v in stats.items(): - print("{} -> {}".format(k, v)) - print("Total offer turns: {}".format(n)) diff --git a/spaces/allknowingroger/Image-Models-Test159/README.md b/spaces/allknowingroger/Image-Models-Test159/README.md deleted file mode 100644 index a3a43bf672ca727d8113068aed4ea790c9de9309..0000000000000000000000000000000000000000 --- a/spaces/allknowingroger/Image-Models-Test159/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: More Image Models -emoji: 😻 -colorFrom: red -colorTo: gray -sdk: gradio -sdk_version: 3.23.0 -app_file: app.py -duplicated_from: allknowingroger/Image-Models-Test142 ---- - - \ No newline at end of file diff --git a/spaces/allknowingroger/Image-Models-Test20/app.py b/spaces/allknowingroger/Image-Models-Test20/app.py deleted file mode 100644 index 44d350e93bdfa1237bc5598a4186a28aeb488fbe..0000000000000000000000000000000000000000 --- a/spaces/allknowingroger/Image-Models-Test20/app.py +++ /dev/null @@ -1,145 +0,0 @@ -import gradio as gr -# import os -# import sys -# from pathlib import Path -import time - -models =[ - "digiplay/RunDiffusionFXPhotorealistic_v1", - "digiplay/mecha_musume_vivid_soft", - "runwayml/stable-diffusion-v1-5", - "WT-MM/Mei", - "digiplay/ChikMix_V3", - "digiplay/PeachMixsRelistic_R0", - "digiplay/fantasticmix_v40_test", - "digiplay/RealCartoon3D_F16full_v3.1", - "digiplay/ShampooMix_4", - -] - - -model_functions = {} -model_idx = 1 -for model_path in models: - try: - model_functions[model_idx] = gr.Interface.load(f"models/{model_path}", live=False, preprocess=True, postprocess=False) - except Exception as error: - def the_fn(txt): - return None - model_functions[model_idx] = gr.Interface(fn=the_fn, inputs=["text"], outputs=["image"]) - model_idx+=1 - - -def send_it_idx(idx): - def send_it_fn(prompt): - output = (model_functions.get(str(idx)) or model_functions.get(str(1)))(prompt) - return output - return send_it_fn - -def get_prompts(prompt_text): - return prompt_text - -def clear_it(val): - if int(val) != 0: - val = 0 - else: - val = 0 - pass - return val - -def all_task_end(cnt,t_stamp): - to = t_stamp + 60 - et = time.time() - if et > to and t_stamp != 0: - d = gr.update(value=0) - tog = gr.update(value=1) - #print(f'to: {to} et: {et}') - else: - if cnt != 0: - d = gr.update(value=et) - else: - d = gr.update(value=0) - tog = gr.update(value=0) - #print (f'passing: to: {to} et: {et}') - pass - return d, tog - -def all_task_start(): - print("\n\n\n\n\n\n\n") - t = time.gmtime() - t_stamp = time.time() - current_time = time.strftime("%H:%M:%S", t) - return gr.update(value=t_stamp), gr.update(value=t_stamp), gr.update(value=0) - -def clear_fn(): - nn = len(models) - return tuple([None, *[None for _ in range(nn)]]) - - - -with gr.Blocks(title="SD Models") as my_interface: - with gr.Column(scale=12): - # with gr.Row(): - # gr.Markdown("""- Primary prompt: 你想画的内容(英文单词,如 a cat, 加英文逗号效果更好;点 Improve 按钮进行完善)\n- Real prompt: 完善后的提示词,出现后再点右边的 Run 按钮开始运行""") - with gr.Row(): - with gr.Row(scale=6): - primary_prompt=gr.Textbox(label="Prompt", value="") - # real_prompt=gr.Textbox(label="Real prompt") - with gr.Row(scale=6): - # improve_prompts_btn=gr.Button("Improve") - with gr.Row(): - run=gr.Button("Run",variant="primary") - clear_btn=gr.Button("Clear") - with gr.Row(): - sd_outputs = {} - model_idx = 1 - for model_path in models: - with gr.Column(scale=3, min_width=320): - with gr.Box(): - sd_outputs[model_idx] = gr.Image(label=model_path) - pass - model_idx += 1 
- pass - pass - - with gr.Row(visible=False): - start_box=gr.Number(interactive=False) - end_box=gr.Number(interactive=False) - tog_box=gr.Textbox(value=0,interactive=False) - - start_box.change( - all_task_end, - [start_box, end_box], - [start_box, tog_box], - every=1, - show_progress=False) - - primary_prompt.submit(all_task_start, None, [start_box, end_box, tog_box]) - run.click(all_task_start, None, [start_box, end_box, tog_box]) - runs_dict = {} - model_idx = 1 - for model_path in models: - runs_dict[model_idx] = run.click(model_functions[model_idx], inputs=[primary_prompt], outputs=[sd_outputs[model_idx]]) - model_idx += 1 - pass - pass - - # improve_prompts_btn_clicked=improve_prompts_btn.click( - # get_prompts, - # inputs=[primary_prompt], - # outputs=[primary_prompt], - # cancels=list(runs_dict.values())) - clear_btn.click( - clear_fn, - None, - [primary_prompt, *list(sd_outputs.values())], - cancels=[*list(runs_dict.values())]) - tog_box.change( - clear_it, - tog_box, - tog_box, - cancels=[*list(runs_dict.values())]) - -my_interface.queue(concurrency_count=600, status_update_rate=1) -my_interface.launch(inline=True, show_api=False) - \ No newline at end of file diff --git a/spaces/ammarnasr/Sem-GAN-Bird-Image-Generator/Bird_Image_Generator.py b/spaces/ammarnasr/Sem-GAN-Bird-Image-Generator/Bird_Image_Generator.py deleted file mode 100644 index 8504a008f18e2f2e428d8521b8062b2f108403a2..0000000000000000000000000000000000000000 --- a/spaces/ammarnasr/Sem-GAN-Bird-Image-Generator/Bird_Image_Generator.py +++ /dev/null @@ -1,176 +0,0 @@ -import os -import torch -import pickle -import numpy as np -from PIL import Image -import torch.nn as nn -from generator import G_NET -from random import randrange -from encoder import RNN_ENCODER -from discriminator import D_NET256 -from torch.autograd import Variable -from nltk.tokenize import RegexpTokenizer -from utils import mkdir_p, display_images - -####Configerations Part### -Global_Batch_size = 1 -DATA_DIR = "../data/birds" -EMBEDDING_DIM = 256 -DF_DIM = 64 -NET_G = "netG_epoch_600.pth" -NET_D = "netD2.pth" -WORDS_NUM = 18 -RNN_TYPE = "LSTM" -NET_E = "text_encoder599.pth" -B_DCGAN = False -GF_DIM = 32 -CONDITION_DIM = 100 -BRANCH_NUM = 3 -R_NUM = 2 -Z_DIM = 100 -CUDA = False -FONT_MAX = 50 -custome_input = False - - -def init_word2idx(): - with open("word2idx.pickle", "rb") as handle: - wordtoix = pickle.load(handle) - n_words = len(wordtoix) - return wordtoix, n_words - -def init_text_encoder(n_words): - text_encoder = RNN_ENCODER(n_words, WORDS_NUM, RNN_TYPE, nhidden=EMBEDDING_DIM) - state_dict = torch.load(NET_E, map_location=lambda storage, loc: storage) - text_encoder.load_state_dict(state_dict) - text_encoder.eval() - return text_encoder - -def init_generator(): - netG = G_NET( GF_DIM, EMBEDDING_DIM, CONDITION_DIM, Z_DIM, BRANCH_NUM, R_NUM) - model_dir = NET_G - state_dict = torch.load(model_dir, map_location=lambda storage, loc: storage) - netG.load_state_dict(state_dict) - netG.eval() - return netG - -def init_discriminator(): - netD = D_NET256(DF_DIM, EMBEDDING_DIM) - model_dir = NET_D - state_dict = torch.load(model_dir, map_location=lambda storage, loc: storage) - netD.load_state_dict(state_dict) - netD.eval() - return netD - - -def gen_example(data_dic, n_words): - image_paths = [] - text_encoder = init_text_encoder(n_words) - netG = init_generator() - s_tmp = "./gen_dir" - mkdir_p(s_tmp) - copyofFakeImages = [] - for key in data_dic: - save_dir = "%s/%s" % (s_tmp, key) - mkdir_p(save_dir) - captions, cap_lens = data_dic[key] - 
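# For each caption entry, the code below wraps the padded caption ids in tensors, samples
# standard-normal noise, runs the RNN text encoder to obtain per-word embeddings and a
# sentence embedding, masks padded (zero) caption positions, and feeds everything to the
# multi-stage generator; every stage's image is saved to disk, and the final stage
# (fake_imgs[2]) is also collected so the caller can score it with the discriminator loss.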
batch_size = captions.shape[0] - # batch_size = 1 - nz = Z_DIM - captions = Variable(torch.from_numpy(captions), volatile=True) - cap_lens = Variable(torch.from_numpy(cap_lens), volatile=True) - for i in range(1): # 16 - noise = Variable(torch.FloatTensor(batch_size, nz), volatile=True) - noise.data.normal_(0, 1) - hidden = text_encoder.init_hidden(batch_size) - words_embs, sent_emb = text_encoder(captions, cap_lens, hidden) - mask = captions == 0 - fake_imgs, attention_maps, _, _ = netG(noise, sent_emb, words_embs, mask) - copyofFakeImages.append(fake_imgs[2]) - for j in range(batch_size): - save_name = "%s/%d_s_" % (save_dir, i) - for k in range(len(fake_imgs)): - im = fake_imgs[k][j].data.cpu().numpy() - im = (im + 1.0) * 127.5 - im = im.astype(np.uint8) - im = np.transpose(im, (1, 2, 0)) - im = Image.fromarray(im) - fullpath = "%s_g%d.png" % (save_name, k) - im.save(fullpath) - image_paths.append(fullpath) - return copyofFakeImages, image_paths - - -def the_main(input_captions): - wordtoix, n_words = init_word2idx() - - data_dic = {} # dictionary used to generate images from captions - cap_lens = [] # caption lengths - tokenizer = RegexpTokenizer(r"\w+") - rev = [] - captions = [] - for input_caption in input_captions: - tokens = tokenizer.tokenize(input_caption.lower()) - for t in tokens: - # t = t.encode("ascii", "ignore").decode("ascii") - rev.append(wordtoix[t]) - captions.append(rev) # all captions in the file - cap_lens.append(len(rev)) - - - - max_len = np.max(cap_lens) # used to pad shorter captions - cap_lens = np.asarray(cap_lens) - cap_array = np.zeros((len(captions), max_len), dtype="int64") # placeholder for the padded sorted array caption - - for i in range(len(captions)): - cap = captions[i] - c_len = len(cap) - cap_array[i, : c_len] = cap - - for i, input_caption in enumerate(input_captions): - key = f'caption_{i}' - cap = np.asanyarray([cap_array[i]]) - # cap_len = np.asanyarray([cap_lens[i]]) - cap_len = np.asanyarray([max_len]) - data_dic[key] = [cap, cap_len] - - - f_images, image_paths = gen_example(data_dic, n_words) - return f_images, image_paths - - -def discriminator_loss(netD, fake_imgs): - fake_labels = Variable(torch.FloatTensor(Global_Batch_size).fill_(0)) - print(fake_imgs.shape) - fake_features = netD(fake_imgs) - if netD.UNCOND_DNET is not None: - fake_logits = netD.UNCOND_DNET(fake_features) - fake_errD = nn.BCELoss()(fake_logits, fake_labels) - errD = fake_errD - return errD - - -def GenerateImages(Input_Captions=["This bird has red wings white belly"], max_tries=10): - CurrentLoss = [10 for i in range(len(Input_Captions))] - netD = init_discriminator() - num_tries = 0 - while (sum(CurrentLoss) > len(Input_Captions) * 0.8) and (num_tries < max_tries): - num_tries += 1 - print("Try Number", num_tries) - fake_images, image_paths = the_main(Input_Captions) - for i, fake_image in enumerate(fake_images): - dloss = discriminator_loss(netD, fake_image) - CurrentLoss[i] = dloss.item() - print("The Loss", CurrentLoss, "The Sum", sum(CurrentLoss), "The Average", sum(CurrentLoss) / len(CurrentLoss)) - return image_paths - - -if __name__ == "__main__": - sample_captions = [ - "This bird has red wings white belly", - "This bird has black head and white belly", - "This bird has long beak and white belly", - "This is a blue bird with blue wings and blue belly", - ] - GenerateImages(sample_captions) \ No newline at end of file diff --git a/spaces/anuragshas/en-hi-transliteration/xlit_src.py b/spaces/anuragshas/en-hi-transliteration/xlit_src.py deleted file mode 100644 
index cb6d4ed56acf19f03bcada209560944f2e53aed9..0000000000000000000000000000000000000000 --- a/spaces/anuragshas/en-hi-transliteration/xlit_src.py +++ /dev/null @@ -1,868 +0,0 @@ -import torch -import torch.nn as nn -import numpy as np -import random -import enum -import traceback - -import os -import sys -import json - -F_DIR = os.path.dirname(os.path.realpath(__file__)) - - -class XlitError(enum.Enum): - lang_err = "Unsupported langauge ID requested ;( Please check available languages." - string_err = "String passed is incompatable ;(" - internal_err = "Internal crash ;(" - unknown_err = "Unknown Failure" - loading_err = "Loading failed ;( Check if metadata/paths are correctly configured." - - -class Encoder(nn.Module): - """ - Simple RNN based encoder network - """ - - def __init__( - self, - input_dim, - embed_dim, - hidden_dim, - rnn_type="gru", - layers=1, - bidirectional=False, - dropout=0, - device="cpu", - ): - super(Encoder, self).__init__() - - self.input_dim = input_dim # src_vocab_sz - self.enc_embed_dim = embed_dim - self.enc_hidden_dim = hidden_dim - self.enc_rnn_type = rnn_type - self.enc_layers = layers - self.enc_directions = 2 if bidirectional else 1 - self.device = device - - self.embedding = nn.Embedding(self.input_dim, self.enc_embed_dim) - - if self.enc_rnn_type == "gru": - self.enc_rnn = nn.GRU( - input_size=self.enc_embed_dim, - hidden_size=self.enc_hidden_dim, - num_layers=self.enc_layers, - bidirectional=bidirectional, - ) - elif self.enc_rnn_type == "lstm": - self.enc_rnn = nn.LSTM( - input_size=self.enc_embed_dim, - hidden_size=self.enc_hidden_dim, - num_layers=self.enc_layers, - bidirectional=bidirectional, - ) - else: - raise Exception("unknown RNN type mentioned") - - def forward(self, x, x_sz, hidden=None): - """ - x_sz: (batch_size, 1) - Unpadded sequence lengths used for pack_pad - - Return: - output: (batch_size, max_length, hidden_dim) - hidden: (n_layer*num_directions, batch_size, hidden_dim) | if LSTM tuple -(h_n, c_n) - - """ - batch_sz = x.shape[0] - # x: batch_size, max_length, enc_embed_dim - x = self.embedding(x) - - ## pack the padded data - # x: max_length, batch_size, enc_embed_dim -> for pack_pad - x = x.permute(1, 0, 2) - x = nn.utils.rnn.pack_padded_sequence(x, x_sz, enforce_sorted=False) # unpad - - # output: packed_size, batch_size, enc_embed_dim --> hidden from all timesteps - # hidden: n_layer**num_directions, batch_size, hidden_dim | if LSTM (h_n, c_n) - output, hidden = self.enc_rnn(x) - - ## pad the sequence to the max length in the batch - # output: max_length, batch_size, enc_emb_dim*directions) - output, _ = nn.utils.rnn.pad_packed_sequence(output) - - # output: batch_size, max_length, hidden_dim - output = output.permute(1, 0, 2) - - return output, hidden - - -class Decoder(nn.Module): - """ - Used as decoder stage - """ - - def __init__( - self, - output_dim, - embed_dim, - hidden_dim, - rnn_type="gru", - layers=1, - use_attention=True, - enc_outstate_dim=None, # enc_directions * enc_hidden_dim - dropout=0, - device="cpu", - ): - super(Decoder, self).__init__() - - self.output_dim = output_dim # tgt_vocab_sz - self.dec_hidden_dim = hidden_dim - self.dec_embed_dim = embed_dim - self.dec_rnn_type = rnn_type - self.dec_layers = layers - self.use_attention = use_attention - self.device = device - if self.use_attention: - self.enc_outstate_dim = enc_outstate_dim if enc_outstate_dim else hidden_dim - else: - self.enc_outstate_dim = 0 - - self.embedding = nn.Embedding(self.output_dim, self.dec_embed_dim) - - if self.dec_rnn_type == 
"gru": - self.dec_rnn = nn.GRU( - input_size=self.dec_embed_dim - + self.enc_outstate_dim, # to concat attention_output - hidden_size=self.dec_hidden_dim, # previous Hidden - num_layers=self.dec_layers, - batch_first=True, - ) - elif self.dec_rnn_type == "lstm": - self.dec_rnn = nn.LSTM( - input_size=self.dec_embed_dim - + self.enc_outstate_dim, # to concat attention_output - hidden_size=self.dec_hidden_dim, # previous Hidden - num_layers=self.dec_layers, - batch_first=True, - ) - else: - raise Exception("unknown RNN type mentioned") - - self.fc = nn.Sequential( - nn.Linear(self.dec_hidden_dim, self.dec_embed_dim), - nn.LeakyReLU(), - # nn.Linear(self.dec_embed_dim, self.dec_embed_dim), nn.LeakyReLU(), # removing to reduce size - nn.Linear(self.dec_embed_dim, self.output_dim), - ) - - ##----- Attention ---------- - if self.use_attention: - self.W1 = nn.Linear(self.enc_outstate_dim, self.dec_hidden_dim) - self.W2 = nn.Linear(self.dec_hidden_dim, self.dec_hidden_dim) - self.V = nn.Linear(self.dec_hidden_dim, 1) - - def attention(self, x, hidden, enc_output): - """ - x: (batch_size, 1, dec_embed_dim) -> after Embedding - enc_output: batch_size, max_length, enc_hidden_dim *num_directions - hidden: n_layers, batch_size, hidden_size | if LSTM (h_n, c_n) - """ - - ## perform addition to calculate the score - - # hidden_with_time_axis: batch_size, 1, hidden_dim - ## hidden_with_time_axis = hidden.permute(1, 0, 2) ## replaced with below 2lines - hidden_with_time_axis = torch.sum(hidden, axis=0) - - hidden_with_time_axis = hidden_with_time_axis.unsqueeze(1) - - # score: batch_size, max_length, hidden_dim - score = torch.tanh(self.W1(enc_output) + self.W2(hidden_with_time_axis)) - - # attention_weights: batch_size, max_length, 1 - # we get 1 at the last axis because we are applying score to self.V - attention_weights = torch.softmax(self.V(score), dim=1) - - # context_vector shape after sum == (batch_size, hidden_dim) - context_vector = attention_weights * enc_output - context_vector = torch.sum(context_vector, dim=1) - # context_vector: batch_size, 1, hidden_dim - context_vector = context_vector.unsqueeze(1) - - # attend_out (batch_size, 1, dec_embed_dim + hidden_size) - attend_out = torch.cat((context_vector, x), -1) - - return attend_out, attention_weights - - def forward(self, x, hidden, enc_output): - """ - x: (batch_size, 1) - enc_output: batch_size, max_length, dec_embed_dim - hidden: n_layer, batch_size, hidden_size | lstm: (h_n, c_n) - """ - if (hidden is None) and (self.use_attention is False): - raise Exception("No use of a decoder with No attention and No Hidden") - - batch_sz = x.shape[0] - - if hidden is None: - # hidden: n_layers, batch_size, hidden_dim - hid_for_att = torch.zeros( - (self.dec_layers, batch_sz, self.dec_hidden_dim) - ).to(self.device) - elif self.dec_rnn_type == "lstm": - hid_for_att = hidden[0] # h_n - else: - hid_for_att = hidden - - # x (batch_size, 1, dec_embed_dim) -> after embedding - x = self.embedding(x) - - if self.use_attention: - # x (batch_size, 1, dec_embed_dim + hidden_size) -> after attention - # aw: (batch_size, max_length, 1) - x, aw = self.attention(x, hid_for_att, enc_output) - else: - x, aw = x, 0 - - # passing the concatenated vector to the GRU - # output: (batch_size, n_layers, hidden_size) - # hidden: n_layers, batch_size, hidden_size | if LSTM (h_n, c_n) - output, hidden = ( - self.dec_rnn(x, hidden) if hidden is not None else self.dec_rnn(x) - ) - - # output :shp: (batch_size * 1, hidden_size) - output = output.view(-1, output.size(2)) - - # 
output :shp: (batch_size * 1, output_dim) - output = self.fc(output) - - return output, hidden, aw - - -class Seq2Seq(nn.Module): - """ - Used to construct seq2seq architecture with encoder decoder objects - """ - - def __init__( - self, encoder, decoder, pass_enc2dec_hid=False, dropout=0, device="cpu" - ): - super(Seq2Seq, self).__init__() - - self.encoder = encoder - self.decoder = decoder - self.device = device - self.pass_enc2dec_hid = pass_enc2dec_hid - - if self.pass_enc2dec_hid: - assert ( - decoder.dec_hidden_dim == encoder.enc_hidden_dim - ), "Hidden Dimension of encoder and decoder must be same, or unset `pass_enc2dec_hid`" - if decoder.use_attention: - assert ( - decoder.enc_outstate_dim - == encoder.enc_directions * encoder.enc_hidden_dim - ), "Set `enc_out_dim` correctly in decoder" - assert ( - self.pass_enc2dec_hid or decoder.use_attention - ), "No use of a decoder with No attention and No Hidden from Encoder" - - def forward(self, src, tgt, src_sz, teacher_forcing_ratio=0): - """ - src: (batch_size, sequence_len.padded) - tgt: (batch_size, sequence_len.padded) - src_sz: [batch_size, 1] - Unpadded sequence lengths - """ - batch_size = tgt.shape[0] - - # enc_output: (batch_size, padded_seq_length, enc_hidden_dim*num_direction) - # enc_hidden: (enc_layers*num_direction, batch_size, hidden_dim) - enc_output, enc_hidden = self.encoder(src, src_sz) - - if self.pass_enc2dec_hid: - # dec_hidden: dec_layers, batch_size , dec_hidden_dim - dec_hidden = enc_hidden - else: - # dec_hidden -> Will be initialized to zeros internally - dec_hidden = None - - # pred_vecs: (batch_size, output_dim, sequence_sz) -> shape required for CELoss - pred_vecs = torch.zeros(batch_size, self.decoder.output_dim, tgt.size(1)).to( - self.device - ) - - # dec_input: (batch_size, 1) - dec_input = tgt[:, 0].unsqueeze(1) # initialize to start token - pred_vecs[:, 1, 0] = 1 # Initialize to start tokens all batches - for t in range(1, tgt.size(1)): - # dec_hidden: dec_layers, batch_size , dec_hidden_dim - # dec_output: batch_size, output_dim - # dec_input: (batch_size, 1) - dec_output, dec_hidden, _ = self.decoder( - dec_input, - dec_hidden, - enc_output, - ) - pred_vecs[:, :, t] = dec_output - - # # prediction: batch_size - prediction = torch.argmax(dec_output, dim=1) - - # Teacher Forcing - if random.random() < teacher_forcing_ratio: - dec_input = tgt[:, t].unsqueeze(1) - else: - dec_input = prediction.unsqueeze(1) - - return pred_vecs # (batch_size, output_dim, sequence_sz) - - def inference(self, src, max_tgt_sz=50, debug=0): - """ - single input only, No batch Inferencing - src: (sequence_len) - debug: if True will return attention weights also - """ - batch_size = 1 - start_tok = src[0] - end_tok = src[-1] - src_sz = torch.tensor([len(src)]) - src_ = src.unsqueeze(0) - - # enc_output: (batch_size, padded_seq_length, enc_hidden_dim*num_direction) - # enc_hidden: (enc_layers*num_direction, batch_size, hidden_dim) - enc_output, enc_hidden = self.encoder(src_, src_sz) - - if self.pass_enc2dec_hid: - # dec_hidden: dec_layers, batch_size , dec_hidden_dim - dec_hidden = enc_hidden - else: - # dec_hidden -> Will be initialized to zeros internally - dec_hidden = None - - # pred_arr: (sequence_sz, 1) -> shape required for CELoss - pred_arr = torch.zeros(max_tgt_sz, 1).to(self.device) - if debug: - attend_weight_arr = torch.zeros(max_tgt_sz, len(src)).to(self.device) - - # dec_input: (batch_size, 1) - dec_input = start_tok.view(1, 1) # initialize to start token - pred_arr[0] = start_tok.view(1, 1) # initialize to 
start token - for t in range(max_tgt_sz): - # dec_hidden: dec_layers, batch_size , dec_hidden_dim - # dec_output: batch_size, output_dim - # dec_input: (batch_size, 1) - dec_output, dec_hidden, aw = self.decoder( - dec_input, - dec_hidden, - enc_output, - ) - # prediction :shp: (1,1) - prediction = torch.argmax(dec_output, dim=1) - dec_input = prediction.unsqueeze(1) - pred_arr[t] = prediction - if debug: - attend_weight_arr[t] = aw.squeeze(-1) - - if torch.eq(prediction, end_tok): - break - - if debug: - return pred_arr.squeeze(), attend_weight_arr - # pred_arr :shp: (sequence_len) - return pred_arr.squeeze().to(dtype=torch.long) - - def active_beam_inference(self, src, beam_width=3, max_tgt_sz=50): - """Active beam Search based decoding - src: (sequence_len) - """ - - def _avg_score(p_tup): - """Used for Sorting - TODO: Dividing by length of sequence power alpha as hyperparam - """ - return p_tup[0] - - batch_size = 1 - start_tok = src[0] - end_tok = src[-1] - src_sz = torch.tensor([len(src)]) - src_ = src.unsqueeze(0) - - # enc_output: (batch_size, padded_seq_length, enc_hidden_dim*num_direction) - # enc_hidden: (enc_layers*num_direction, batch_size, hidden_dim) - enc_output, enc_hidden = self.encoder(src_, src_sz) - - if self.pass_enc2dec_hid: - # dec_hidden: dec_layers, batch_size , dec_hidden_dim - init_dec_hidden = enc_hidden - else: - # dec_hidden -> Will be initialized to zeros internally - init_dec_hidden = None - - # top_pred[][0] = Σ-log_softmax - # top_pred[][1] = sequence torch.tensor shape: (1) - # top_pred[][2] = dec_hidden - top_pred_list = [(0, start_tok.unsqueeze(0), init_dec_hidden)] - - for t in range(max_tgt_sz): - cur_pred_list = [] - - for p_tup in top_pred_list: - if p_tup[1][-1] == end_tok: - cur_pred_list.append(p_tup) - continue - - # dec_hidden: dec_layers, 1, hidden_dim - # dec_output: 1, output_dim - dec_output, dec_hidden, _ = self.decoder( - x=p_tup[1][-1].view(1, 1), # dec_input: (1,1) - hidden=p_tup[2], - enc_output=enc_output, - ) - - ## π{prob} = Σ{log(prob)} -> to prevent diminishing - # dec_output: (1, output_dim) - dec_output = nn.functional.log_softmax(dec_output, dim=1) - # pred_topk.values & pred_topk.indices: (1, beam_width) - pred_topk = torch.topk(dec_output, k=beam_width, dim=1) - - for i in range(beam_width): - sig_logsmx_ = p_tup[0] + pred_topk.values[0][i] - # seq_tensor_ : (seq_len) - seq_tensor_ = torch.cat((p_tup[1], pred_topk.indices[0][i].view(1))) - - cur_pred_list.append((sig_logsmx_, seq_tensor_, dec_hidden)) - - cur_pred_list.sort(key=_avg_score, reverse=True) # Maximized order - top_pred_list = cur_pred_list[:beam_width] - - # check if end_tok of all topk - end_flags_ = [1 if t[1][-1] == end_tok else 0 for t in top_pred_list] - if beam_width == sum(end_flags_): - break - - pred_tnsr_list = [t[1] for t in top_pred_list] - - return pred_tnsr_list - - def passive_beam_inference(self, src, beam_width=7, max_tgt_sz=50): - """ - Passive Beam search based inference - src: (sequence_len) - """ - - def _avg_score(p_tup): - """Used for Sorting - TODO: Dividing by length of sequence power alpha as hyperparam - """ - return p_tup[0] - - def _beam_search_topk(topk_obj, start_tok, beam_width): - """search for sequence with maxim prob - topk_obj[x]: .values & .indices shape:(1, beam_width) - """ - # top_pred_list[x]: tuple(prob, seq_tensor) - top_pred_list = [ - (0, start_tok.unsqueeze(0)), - ] - - for obj in topk_obj: - new_lst_ = list() - for itm in top_pred_list: - for i in range(beam_width): - sig_logsmx_ = itm[0] + obj.values[0][i] - 
seq_tensor_ = torch.cat((itm[1], obj.indices[0][i].view(1))) - new_lst_.append((sig_logsmx_, seq_tensor_)) - - new_lst_.sort(key=_avg_score, reverse=True) - top_pred_list = new_lst_[:beam_width] - return top_pred_list - - batch_size = 1 - start_tok = src[0] - end_tok = src[-1] - src_sz = torch.tensor([len(src)]) - src_ = src.unsqueeze(0) - - enc_output, enc_hidden = self.encoder(src_, src_sz) - - if self.pass_enc2dec_hid: - # dec_hidden: dec_layers, batch_size , dec_hidden_dim - dec_hidden = enc_hidden - else: - # dec_hidden -> Will be initialized to zeros internally - dec_hidden = None - - # dec_input: (1, 1) - dec_input = start_tok.view(1, 1) # initialize to start token - - topk_obj = [] - for t in range(max_tgt_sz): - dec_output, dec_hidden, aw = self.decoder( - dec_input, - dec_hidden, - enc_output, - ) - - ## π{prob} = Σ{log(prob)} -> to prevent diminishing - # dec_output: (1, output_dim) - dec_output = nn.functional.log_softmax(dec_output, dim=1) - # pred_topk.values & pred_topk.indices: (1, beam_width) - pred_topk = torch.topk(dec_output, k=beam_width, dim=1) - - topk_obj.append(pred_topk) - - # dec_input: (1, 1) - dec_input = pred_topk.indices[0][0].view(1, 1) - if torch.eq(dec_input, end_tok): - break - - top_pred_list = _beam_search_topk(topk_obj, start_tok, beam_width) - pred_tnsr_list = [t[1] for t in top_pred_list] - - return pred_tnsr_list - - -class GlyphStrawboss: - def __init__(self, glyphs="en"): - """list of letters in a language in unicode - lang: List with unicodes - """ - if glyphs == "en": - # Smallcase alone - self.glyphs = [chr(alpha) for alpha in range(97, 123)] + ["é", "è", "á"] - else: - self.dossier = json.load(open(glyphs, encoding="utf-8")) - self.numsym_map = self.dossier["numsym_map"] - self.glyphs = self.dossier["glyphs"] - - self.indoarab_num = [chr(alpha) for alpha in range(48, 58)] - - self.char2idx = {} - self.idx2char = {} - self._create_index() - - def _create_index(self): - - self.char2idx["_"] = 0 # pad - self.char2idx["$"] = 1 # start - self.char2idx["#"] = 2 # end - self.char2idx["*"] = 3 # Mask - self.char2idx["'"] = 4 # apostrophe U+0027 - self.char2idx["%"] = 5 # unused - self.char2idx["!"] = 6 # unused - self.char2idx["?"] = 7 - self.char2idx[":"] = 8 - self.char2idx[" "] = 9 - self.char2idx["-"] = 10 - self.char2idx[","] = 11 - self.char2idx["."] = 12 - self.char2idx["("] = 13 - self.char2idx[")"] = 14 - self.char2idx["/"] = 15 - self.char2idx["^"] = 16 - - for idx, char in enumerate(self.indoarab_num): - self.char2idx[char] = idx + 17 - # letter to index mapping - for idx, char in enumerate(self.glyphs): - self.char2idx[char] = idx + 27 # +20 token initially - - # index to letter mapping - for char, idx in self.char2idx.items(): - self.idx2char[idx] = char - - def size(self): - return len(self.char2idx) - - def word2xlitvec(self, word): - """Converts given string of gyphs(word) to vector(numpy) - Also adds tokens for start and end - """ - try: - vec = [self.char2idx["$"]] # start token - for i in list(word): - vec.append(self.char2idx[i]) - vec.append(self.char2idx["#"]) # end token - - vec = np.asarray(vec, dtype=np.int64) - return vec - - except Exception as error: - print("Error In word:", word, "Error Char not in Token:", error) - sys.exit() - - def xlitvec2word(self, vector): - """Converts vector(numpy) to string of glyphs(word)""" - char_list = [] - for i in vector: - char_list.append(self.idx2char[i]) - - word = "".join(char_list).replace("$", "").replace("#", "") # remove tokens - word = word.replace("_", "").replace("*", "") # 
remove tokens - return word - - -class XlitPiston: - """ - For handling prediction & post-processing of transliteration for a single language - Class dependency: Seq2Seq, GlyphStrawboss - Global Variables: F_DIR - """ - - def __init__( - self, weight_path, tglyph_cfg_file, iglyph_cfg_file="en", device="cpu" - ): - - self.device = device - self.in_glyph_obj = GlyphStrawboss(iglyph_cfg_file) - self.tgt_glyph_obj = GlyphStrawboss(glyphs=tglyph_cfg_file) - - self._numsym_set = set( - json.load(open(tglyph_cfg_file, encoding="utf-8"))["numsym_map"].keys() - ) - self._inchar_set = set("abcdefghijklmnopqrstuvwxyzéèá") - self._natscr_set = set().union( - self.tgt_glyph_obj.glyphs, sum(self.tgt_glyph_obj.numsym_map.values(), []) - ) - - ## Model Config Static TODO: add defining in json support - input_dim = self.in_glyph_obj.size() - output_dim = self.tgt_glyph_obj.size() - enc_emb_dim = 300 - dec_emb_dim = 300 - enc_hidden_dim = 512 - dec_hidden_dim = 512 - rnn_type = "lstm" - enc2dec_hid = True - attention = True - enc_layers = 1 - dec_layers = 2 - m_dropout = 0 - enc_bidirect = True - enc_outstate_dim = enc_hidden_dim * (2 if enc_bidirect else 1) - - enc = Encoder( - input_dim=input_dim, - embed_dim=enc_emb_dim, - hidden_dim=enc_hidden_dim, - rnn_type=rnn_type, - layers=enc_layers, - dropout=m_dropout, - device=self.device, - bidirectional=enc_bidirect, - ) - dec = Decoder( - output_dim=output_dim, - embed_dim=dec_emb_dim, - hidden_dim=dec_hidden_dim, - rnn_type=rnn_type, - layers=dec_layers, - dropout=m_dropout, - use_attention=attention, - enc_outstate_dim=enc_outstate_dim, - device=self.device, - ) - self.model = Seq2Seq(enc, dec, pass_enc2dec_hid=enc2dec_hid, device=self.device) - self.model = self.model.to(self.device) - weights = torch.load(weight_path, map_location=torch.device(self.device)) - - self.model.load_state_dict(weights) - self.model.eval() - - def character_model(self, word, beam_width=1): - in_vec = torch.from_numpy(self.in_glyph_obj.word2xlitvec(word)).to(self.device) - ## change to active or passive beam - p_out_list = self.model.active_beam_inference(in_vec, beam_width=beam_width) - result = [ - self.tgt_glyph_obj.xlitvec2word(out.cpu().numpy()) for out in p_out_list - ] - - # List type - return result - - def numsym_model(self, seg): - """tgt_glyph_obj.numsym_map[x] returns a list object""" - if len(seg) == 1: - return [seg] + self.tgt_glyph_obj.numsym_map[seg] - - a = [self.tgt_glyph_obj.numsym_map[n][0] for n in seg] - return [seg] + ["".join(a)] - - def _word_segementer(self, sequence): - - sequence = sequence.lower() - accepted = set().union(self._numsym_set, self._inchar_set, self._natscr_set) - # sequence = ''.join([i for i in sequence if i in accepted]) - - segment = [] - idx = 0 - seq_ = list(sequence) - while len(seq_): - # for Number-Symbol - temp = "" - while len(seq_) and seq_[0] in self._numsym_set: - temp += seq_[0] - seq_.pop(0) - if temp != "": - segment.append(temp) - - # for Target Chars - temp = "" - while len(seq_) and seq_[0] in self._natscr_set: - temp += seq_[0] - seq_.pop(0) - if temp != "": - segment.append(temp) - - # for Input-Roman Chars - temp = "" - while len(seq_) and seq_[0] in self._inchar_set: - temp += seq_[0] - seq_.pop(0) - if temp != "": - segment.append(temp) - - temp = "" - while len(seq_) and seq_[0] not in accepted: - temp += seq_[0] - seq_.pop(0) - if temp != "": - segment.append(temp) - - return segment - - def inferencer(self, sequence, beam_width=10): - - seg = self._word_segementer(sequence[:120]) - lit_seg = [] - - p = 0 - 
while p < len(seg): - if seg[p][0] in self._natscr_set: - lit_seg.append([seg[p]]) - p += 1 - - elif seg[p][0] in self._inchar_set: - lit_seg.append(self.character_model(seg[p], beam_width=beam_width)) - p += 1 - - elif seg[p][0] in self._numsym_set: # num & punc - lit_seg.append(self.numsym_model(seg[p])) - p += 1 - else: - lit_seg.append([seg[p]]) - p += 1 - - ## IF segment less/equal to 2 then return combinotorial, - ## ELSE only return top1 of each result concatenated - if len(lit_seg) == 1: - final_result = lit_seg[0] - - elif len(lit_seg) == 2: - final_result = [""] - for seg in lit_seg: - new_result = [] - for s in seg: - for f in final_result: - new_result.append(f + s) - final_result = new_result - - else: - new_result = [] - for seg in lit_seg: - new_result.append(seg[0]) - final_result = ["".join(new_result)] - - return final_result - - -class XlitEngine: - """ - For Managing the top level tasks and applications of transliteration - Global Variables: F_DIR - """ - - def __init__(self, lang2use="hi", config_path="models/default_lineup.json"): - lineup = json.load(open(os.path.join(F_DIR, config_path), encoding="utf-8")) - models_path = os.path.join(F_DIR, "models") - self.lang_config = {} - if lang2use in lineup: - self.lang_config[lang2use] = lineup[lang2use] - else: - raise Exception( - "XlitError: The entered Langauge code not found. Available are {}".format( - lineup.keys() - ) - ) - self.langs = {} - self.lang_model = {} - for la in self.lang_config: - try: - print("Loading {}...".format(la)) - self.lang_model[la] = XlitPiston( - weight_path=os.path.join( - models_path, self.lang_config[la]["weight"] - ), - tglyph_cfg_file=os.path.join( - models_path, self.lang_config[la]["script"] - ), - iglyph_cfg_file="en", - ) - self.langs[la] = self.lang_config[la]["name"] - except Exception as error: - print("XlitError: Failure in loading {} \n".format(la), error) - print(XlitError.loading_err.value) - - def translit_word(self, eng_word, lang_code="hi", topk=7, beam_width=10): - if eng_word == "": - return [] - if lang_code in self.langs: - try: - res_list = self.lang_model[lang_code].inferencer( - eng_word, beam_width=beam_width - ) - return res_list[:topk] - - except Exception as error: - print("XlitError:", traceback.format_exc()) - print(XlitError.internal_err.value) - return XlitError.internal_err - else: - print("XlitError: Unknown Langauge requested", lang_code) - print(XlitError.lang_err.value) - return XlitError.lang_err - - def translit_sentence(self, eng_sentence, lang_code="hi", beam_width=10): - if eng_sentence == "": - return [] - - if lang_code in self.langs: - try: - out_str = "" - for word in eng_sentence.split(): - res_ = self.lang_model[lang_code].inferencer( - word, beam_width=beam_width - ) - out_str = out_str + res_[0] + " " - return out_str[:-1] - - except Exception as error: - print("XlitError:", traceback.format_exc()) - print(XlitError.internal_err.value) - return XlitError.internal_err - - else: - print("XlitError: Unknown Langauge requested", lang_code) - print(XlitError.lang_err.value) - return XlitError.lang_err - - -if __name__ == "__main__": - - engine = XlitEngine() - y = engine.translit_sentence("Hello World !") - print(y) diff --git a/spaces/any0019/text-style-transfer-demo/app.py b/spaces/any0019/text-style-transfer-demo/app.py deleted file mode 100644 index 3867a8e1f3df41370f9d9f3b18fc6c2e49fc2175..0000000000000000000000000000000000000000 --- a/spaces/any0019/text-style-transfer-demo/app.py +++ /dev/null @@ -1,121 +0,0 @@ -import streamlit as st 
-from termcolor import colored -import torch -from transformers import BertTokenizer, BertForMaskedLM, BertForSequenceClassification - -device = 'cuda' if torch.cuda.is_available() else 'cpu' - - -@st.cache(allow_output_mutation=True) -def load_models(): - tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') - bert_mlm_positive = BertForMaskedLM.from_pretrained('any0019/text_style_mlm_positive', return_dict=True).to(device).train(True) - bert_mlm_negative = BertForMaskedLM.from_pretrained('any0019/text_style_mlm_negative', return_dict=True).to(device).train(True) - bert_classifier = BertForSequenceClassification.from_pretrained('any0019/text_style_classifier', num_labels=2).to(device).train(True) - return tokenizer, bert_mlm_positive, bert_mlm_negative, bert_classifier - - -tokenizer, bert_mlm_positive, bert_mlm_negative, bert_classifier = load_models() - - -def highlight_diff(sent, sent_main): - tokens = tokenizer.tokenize(sent) - tokens_main = tokenizer.tokenize(sent_main) - - new_toks = [] - for i, (tok, tok_main) in enumerate(zip(tokens, tokens_main)): - if tok != tok_main: - new_toks.append('***' + tok + '***') - else: - new_toks.append(tok) - - return ' '.join(new_toks) - - -def get_classifier_prob(sent): - bert_classifier.eval() - with torch.no_grad(): - return bert_classifier(**{k: v.to(device) for k, v in tokenizer(sent, return_tensors='pt').items()}).logits.softmax(dim=-1)[0].cpu().numpy() - - -def beam_get_replacements(current_beam, beam_size, epsilon=1e-3, used_positions=[]): - """ - - for each sentence in :current_beam: - split the sentence into tokens using the INGSOC-approved BERT tokenizer - - check :beam_size: hypotheses on each step for each sentence - - save best :beam_size: hypotheses - :return: generator - """ - # - bert_mlm_positive.eval() - bert_mlm_negative.eval() - new_beam = [] - with torch.no_grad(): - for sentence in current_beam: - input_ = {k: v.to(device) for k, v in tokenizer(sentence, return_tensors='pt').items()} - probs_negative = bert_mlm_negative(**input_).logits.softmax(dim=-1)[0] - probs_positive = bert_mlm_positive(**input_).logits.softmax(dim=-1)[0] - ids = input_['input_ids'][0].cpu().numpy() - seq_len = probs_positive.shape[0] - p_pos = probs_positive[torch.arange(seq_len), ids] - p_neg = probs_negative[torch.arange(seq_len), ids] - order_of_replacement = ((p_pos + epsilon) / (p_neg + epsilon)).argsort() - for pos in order_of_replacement: - if pos in used_positions or pos==0 or pos==len(ids)-1: - continue - used_position = pos - replacement_ids = (-probs_positive[pos,:]).argsort()[:beam_size] - for replacement_id in replacement_ids: - if replacement_id == ids[pos]: - continue - new_ids = ids.copy() - new_ids[pos] = replacement_id - new_beam.append(new_ids) - break - if len(new_beam) > 0: - new_beam = [tokenizer.decode(ids[1:-1]) for ids in new_beam] - new_beam = {sent: get_classifier_prob(sent)[1] for sent in new_beam} - for sent, prob in current_beam.items(): - new_beam[sent] = prob - - if len(new_beam) > beam_size: - new_beam = {k: v for k, v in sorted(new_beam.items(), key = lambda el: el[1], reverse=True)[:beam_size]} - return new_beam, used_position - else: - st.write("No more new hypotheses") - return current_beam, None - - -def get_best_hypotheses(sentence, beam_size, max_steps, epsilon=1e-3, pretty_output=False): - current_beam = {sentence: get_classifier_prob(sentence)[1]} - used_poss = [] - - st.write(f"step #0:") - st.write(f"-- 1: (positive probability ~ {round(current_beam[sentence], 5)})") - st.write(f"$\qquad${sentence}") 
- - for step in range(max_steps): - current_beam, used_pos = beam_get_replacements(current_beam, beam_size, epsilon, used_poss) - - st.write(f"\nstep #{step+1}:") - for i, (sent, prob) in enumerate(current_beam.items()): - st.write(f"-- {i+1}: (positive probability ~ {round(prob, 5)})") - st.write(f"$\qquad${highlight_diff(sent, sentence) if pretty_output else sent}") - - if used_pos is None: - return current_beam, used_poss - else: - used_poss.append(used_pos) - - return current_beam, used_poss - - -st.title("Correcting opinions of fellow comrades") - -default_value = "write your review here (in lower case - vocab reasons)" -sentence = st.text_area("Text", default_value, height = 275) -beam_size = st.sidebar.slider("Beam size", value = 3, min_value = 1, max_value=20, step=1) -max_steps = st.sidebar.slider("Max steps", value = 3, min_value = 1, max_value=10, step=1) -prettyfy = st.sidebar.slider("Higlight changes", value = 0, min_value = 0, max_value=1, step=1) - -beam, used_poss = get_best_hypotheses(sentence, beam_size=beam_size, max_steps=max_steps, pretty_output=bool(prettyfy)) -# beam, used_poss = get_best_hypotheses(sentence, beam_size=beam_size, max_steps=max_steps, pretty_output=False) \ No newline at end of file diff --git a/spaces/aodianyun/panoptic-segment-anything/GroundingDINO/groundingdino/datasets/transforms.py b/spaces/aodianyun/panoptic-segment-anything/GroundingDINO/groundingdino/datasets/transforms.py deleted file mode 100644 index 91cf9269e4b31008a3ddca34a19b038a9b399991..0000000000000000000000000000000000000000 --- a/spaces/aodianyun/panoptic-segment-anything/GroundingDINO/groundingdino/datasets/transforms.py +++ /dev/null @@ -1,311 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved -""" -Transforms and data augmentation for both image + bbox. -""" -import os -import random - -import PIL -import torch -import torchvision.transforms as T -import torchvision.transforms.functional as F - -from groundingdino.util.box_ops import box_xyxy_to_cxcywh -from groundingdino.util.misc import interpolate - - -def crop(image, target, region): - cropped_image = F.crop(image, *region) - - target = target.copy() - i, j, h, w = region - - # should we do something wrt the original size? - target["size"] = torch.tensor([h, w]) - - fields = ["labels", "area", "iscrowd", "positive_map"] - - if "boxes" in target: - boxes = target["boxes"] - max_size = torch.as_tensor([w, h], dtype=torch.float32) - cropped_boxes = boxes - torch.as_tensor([j, i, j, i]) - cropped_boxes = torch.min(cropped_boxes.reshape(-1, 2, 2), max_size) - cropped_boxes = cropped_boxes.clamp(min=0) - area = (cropped_boxes[:, 1, :] - cropped_boxes[:, 0, :]).prod(dim=1) - target["boxes"] = cropped_boxes.reshape(-1, 4) - target["area"] = area - fields.append("boxes") - - if "masks" in target: - # FIXME should we update the area here if there are no boxes? 
- target["masks"] = target["masks"][:, i : i + h, j : j + w] - fields.append("masks") - - # remove elements for which the boxes or masks that have zero area - if "boxes" in target or "masks" in target: - # favor boxes selection when defining which elements to keep - # this is compatible with previous implementation - if "boxes" in target: - cropped_boxes = target["boxes"].reshape(-1, 2, 2) - keep = torch.all(cropped_boxes[:, 1, :] > cropped_boxes[:, 0, :], dim=1) - else: - keep = target["masks"].flatten(1).any(1) - - for field in fields: - if field in target: - target[field] = target[field][keep] - - if os.environ.get("IPDB_SHILONG_DEBUG", None) == "INFO": - # for debug and visualization only. - if "strings_positive" in target: - target["strings_positive"] = [ - _i for _i, _j in zip(target["strings_positive"], keep) if _j - ] - - return cropped_image, target - - -def hflip(image, target): - flipped_image = F.hflip(image) - - w, h = image.size - - target = target.copy() - if "boxes" in target: - boxes = target["boxes"] - boxes = boxes[:, [2, 1, 0, 3]] * torch.as_tensor([-1, 1, -1, 1]) + torch.as_tensor( - [w, 0, w, 0] - ) - target["boxes"] = boxes - - if "masks" in target: - target["masks"] = target["masks"].flip(-1) - - return flipped_image, target - - -def resize(image, target, size, max_size=None): - # size can be min_size (scalar) or (w, h) tuple - - def get_size_with_aspect_ratio(image_size, size, max_size=None): - w, h = image_size - if max_size is not None: - min_original_size = float(min((w, h))) - max_original_size = float(max((w, h))) - if max_original_size / min_original_size * size > max_size: - size = int(round(max_size * min_original_size / max_original_size)) - - if (w <= h and w == size) or (h <= w and h == size): - return (h, w) - - if w < h: - ow = size - oh = int(size * h / w) - else: - oh = size - ow = int(size * w / h) - - return (oh, ow) - - def get_size(image_size, size, max_size=None): - if isinstance(size, (list, tuple)): - return size[::-1] - else: - return get_size_with_aspect_ratio(image_size, size, max_size) - - size = get_size(image.size, size, max_size) - rescaled_image = F.resize(image, size) - - if target is None: - return rescaled_image, None - - ratios = tuple(float(s) / float(s_orig) for s, s_orig in zip(rescaled_image.size, image.size)) - ratio_width, ratio_height = ratios - - target = target.copy() - if "boxes" in target: - boxes = target["boxes"] - scaled_boxes = boxes * torch.as_tensor( - [ratio_width, ratio_height, ratio_width, ratio_height] - ) - target["boxes"] = scaled_boxes - - if "area" in target: - area = target["area"] - scaled_area = area * (ratio_width * ratio_height) - target["area"] = scaled_area - - h, w = size - target["size"] = torch.tensor([h, w]) - - if "masks" in target: - target["masks"] = ( - interpolate(target["masks"][:, None].float(), size, mode="nearest")[:, 0] > 0.5 - ) - - return rescaled_image, target - - -def pad(image, target, padding): - # assumes that we only pad on the bottom right corners - padded_image = F.pad(image, (0, 0, padding[0], padding[1])) - if target is None: - return padded_image, None - target = target.copy() - # should we do something wrt the original size? 
- target["size"] = torch.tensor(padded_image.size[::-1]) - if "masks" in target: - target["masks"] = torch.nn.functional.pad(target["masks"], (0, padding[0], 0, padding[1])) - return padded_image, target - - -class ResizeDebug(object): - def __init__(self, size): - self.size = size - - def __call__(self, img, target): - return resize(img, target, self.size) - - -class RandomCrop(object): - def __init__(self, size): - self.size = size - - def __call__(self, img, target): - region = T.RandomCrop.get_params(img, self.size) - return crop(img, target, region) - - -class RandomSizeCrop(object): - def __init__(self, min_size: int, max_size: int, respect_boxes: bool = False): - # respect_boxes: True to keep all boxes - # False to tolerence box filter - self.min_size = min_size - self.max_size = max_size - self.respect_boxes = respect_boxes - - def __call__(self, img: PIL.Image.Image, target: dict): - init_boxes = len(target["boxes"]) - max_patience = 10 - for i in range(max_patience): - w = random.randint(self.min_size, min(img.width, self.max_size)) - h = random.randint(self.min_size, min(img.height, self.max_size)) - region = T.RandomCrop.get_params(img, [h, w]) - result_img, result_target = crop(img, target, region) - if ( - not self.respect_boxes - or len(result_target["boxes"]) == init_boxes - or i == max_patience - 1 - ): - return result_img, result_target - return result_img, result_target - - -class CenterCrop(object): - def __init__(self, size): - self.size = size - - def __call__(self, img, target): - image_width, image_height = img.size - crop_height, crop_width = self.size - crop_top = int(round((image_height - crop_height) / 2.0)) - crop_left = int(round((image_width - crop_width) / 2.0)) - return crop(img, target, (crop_top, crop_left, crop_height, crop_width)) - - -class RandomHorizontalFlip(object): - def __init__(self, p=0.5): - self.p = p - - def __call__(self, img, target): - if random.random() < self.p: - return hflip(img, target) - return img, target - - -class RandomResize(object): - def __init__(self, sizes, max_size=None): - assert isinstance(sizes, (list, tuple)) - self.sizes = sizes - self.max_size = max_size - - def __call__(self, img, target=None): - size = random.choice(self.sizes) - return resize(img, target, size, self.max_size) - - -class RandomPad(object): - def __init__(self, max_pad): - self.max_pad = max_pad - - def __call__(self, img, target): - pad_x = random.randint(0, self.max_pad) - pad_y = random.randint(0, self.max_pad) - return pad(img, target, (pad_x, pad_y)) - - -class RandomSelect(object): - """ - Randomly selects between transforms1 and transforms2, - with probability p for transforms1 and (1 - p) for transforms2 - """ - - def __init__(self, transforms1, transforms2, p=0.5): - self.transforms1 = transforms1 - self.transforms2 = transforms2 - self.p = p - - def __call__(self, img, target): - if random.random() < self.p: - return self.transforms1(img, target) - return self.transforms2(img, target) - - -class ToTensor(object): - def __call__(self, img, target): - return F.to_tensor(img), target - - -class RandomErasing(object): - def __init__(self, *args, **kwargs): - self.eraser = T.RandomErasing(*args, **kwargs) - - def __call__(self, img, target): - return self.eraser(img), target - - -class Normalize(object): - def __init__(self, mean, std): - self.mean = mean - self.std = std - - def __call__(self, image, target=None): - image = F.normalize(image, mean=self.mean, std=self.std) - if target is None: - return image, None - target = target.copy() - h, 
w = image.shape[-2:] - if "boxes" in target: - boxes = target["boxes"] - boxes = box_xyxy_to_cxcywh(boxes) - boxes = boxes / torch.tensor([w, h, w, h], dtype=torch.float32) - target["boxes"] = boxes - return image, target - - -class Compose(object): - def __init__(self, transforms): - self.transforms = transforms - - def __call__(self, image, target): - for t in self.transforms: - image, target = t(image, target) - return image, target - - def __repr__(self): - format_string = self.__class__.__name__ + "(" - for t in self.transforms: - format_string += "\n" - format_string += " {0}".format(t) - format_string += "\n)" - return format_string diff --git a/spaces/artificialguybr/video-dubbing/TTS/run_bash_tests.sh b/spaces/artificialguybr/video-dubbing/TTS/run_bash_tests.sh deleted file mode 100644 index 2f5ba889343a2d188c0f914063cc24cd0205d05c..0000000000000000000000000000000000000000 --- a/spaces/artificialguybr/video-dubbing/TTS/run_bash_tests.sh +++ /dev/null @@ -1,7 +0,0 @@ -set -e -TF_CPP_MIN_LOG_LEVEL=3 - -# runtime bash based tests -# TODO: move these to python -./tests/bash_tests/test_demo_server.sh && \ -./tests/bash_tests/test_compute_statistics.sh diff --git a/spaces/artificialguybr/video-dubbing/TTS/tests/inference_tests/__init__.py b/spaces/artificialguybr/video-dubbing/TTS/tests/inference_tests/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/arxify/RVC-beta-v2-0618/i18n/locale_diff.py b/spaces/arxify/RVC-beta-v2-0618/i18n/locale_diff.py deleted file mode 100644 index 257277965e0866a86d0361863a8f1b408c4f71ab..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/i18n/locale_diff.py +++ /dev/null @@ -1,45 +0,0 @@ -import json -import os -from collections import OrderedDict - -# Define the standard file name -standard_file = "zh_CN.json" - -# Find all JSON files in the directory -dir_path = "./" -languages = [ - f for f in os.listdir(dir_path) if f.endswith(".json") and f != standard_file -] - -# Load the standard file -with open(standard_file, "r", encoding="utf-8") as f: - standard_data = json.load(f, object_pairs_hook=OrderedDict) - -# Loop through each language file -for lang_file in languages: - # Load the language file - with open(lang_file, "r", encoding="utf-8") as f: - lang_data = json.load(f, object_pairs_hook=OrderedDict) - - # Find the difference between the language file and the standard file - diff = set(standard_data.keys()) - set(lang_data.keys()) - - miss = set(lang_data.keys()) - set(standard_data.keys()) - - # Add any missing keys to the language file - for key in diff: - lang_data[key] = key - - # Del any extra keys to the language file - for key in miss: - del lang_data[key] - - # Sort the keys of the language file to match the order of the standard file - lang_data = OrderedDict( - sorted(lang_data.items(), key=lambda x: list(standard_data.keys()).index(x[0])) - ) - - # Save the updated language file - with open(lang_file, "w", encoding="utf-8") as f: - json.dump(lang_data, f, ensure_ascii=False, indent=4) - f.write("\n") diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Cython/Utility/TypeConversion.c b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Cython/Utility/TypeConversion.c deleted file mode 100644 index 7a7bf0f7999c052aad4e3b7f75172d370c8748aa..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Cython/Utility/TypeConversion.c +++ /dev/null @@ -1,1017 
+0,0 @@ -/////////////// TypeConversions.proto /////////////// - -/* Type Conversion Predeclarations */ - -#define __Pyx_uchar_cast(c) ((unsigned char)c) -#define __Pyx_long_cast(x) ((long)x) - -#define __Pyx_fits_Py_ssize_t(v, type, is_signed) ( \ - (sizeof(type) < sizeof(Py_ssize_t)) || \ - (sizeof(type) > sizeof(Py_ssize_t) && \ - likely(v < (type)PY_SSIZE_T_MAX || \ - v == (type)PY_SSIZE_T_MAX) && \ - (!is_signed || likely(v > (type)PY_SSIZE_T_MIN || \ - v == (type)PY_SSIZE_T_MIN))) || \ - (sizeof(type) == sizeof(Py_ssize_t) && \ - (is_signed || likely(v < (type)PY_SSIZE_T_MAX || \ - v == (type)PY_SSIZE_T_MAX))) ) - -static CYTHON_INLINE int __Pyx_is_valid_index(Py_ssize_t i, Py_ssize_t limit) { - // Optimisation from Section 14.2 "Bounds Checking" in - // https://www.agner.org/optimize/optimizing_cpp.pdf - // See https://bugs.python.org/issue28397 - // The cast to unsigned effectively tests for "0 <= i < limit". - return (size_t) i < (size_t) limit; -} - -// fast and unsafe abs(Py_ssize_t) that ignores the overflow for (-PY_SSIZE_T_MAX-1) -#if defined (__cplusplus) && __cplusplus >= 201103L - #include - #define __Pyx_sst_abs(value) std::abs(value) -#elif SIZEOF_INT >= SIZEOF_SIZE_T - #define __Pyx_sst_abs(value) abs(value) -#elif SIZEOF_LONG >= SIZEOF_SIZE_T - #define __Pyx_sst_abs(value) labs(value) -#elif defined (_MSC_VER) - // abs() is defined for long, but 64-bits type on MSVC is long long. - // Use MS-specific _abs64 instead. - #define __Pyx_sst_abs(value) ((Py_ssize_t)_abs64(value)) -#elif defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L - #define __Pyx_sst_abs(value) llabs(value) -#elif defined (__GNUC__) - // gcc or clang on 64 bit windows. - #define __Pyx_sst_abs(value) __builtin_llabs(value) -#else - #define __Pyx_sst_abs(value) ((value<0) ? 
-value : value) -#endif - -static CYTHON_INLINE const char* __Pyx_PyObject_AsString(PyObject*); -static CYTHON_INLINE const char* __Pyx_PyObject_AsStringAndSize(PyObject*, Py_ssize_t* length); - -#define __Pyx_PyByteArray_FromString(s) PyByteArray_FromStringAndSize((const char*)s, strlen((const char*)s)) -#define __Pyx_PyByteArray_FromStringAndSize(s, l) PyByteArray_FromStringAndSize((const char*)s, l) -#define __Pyx_PyBytes_FromString PyBytes_FromString -#define __Pyx_PyBytes_FromStringAndSize PyBytes_FromStringAndSize -static CYTHON_INLINE PyObject* __Pyx_PyUnicode_FromString(const char*); - -#if PY_MAJOR_VERSION < 3 - #define __Pyx_PyStr_FromString __Pyx_PyBytes_FromString - #define __Pyx_PyStr_FromStringAndSize __Pyx_PyBytes_FromStringAndSize -#else - #define __Pyx_PyStr_FromString __Pyx_PyUnicode_FromString - #define __Pyx_PyStr_FromStringAndSize __Pyx_PyUnicode_FromStringAndSize -#endif - -#define __Pyx_PyBytes_AsWritableString(s) ((char*) PyBytes_AS_STRING(s)) -#define __Pyx_PyBytes_AsWritableSString(s) ((signed char*) PyBytes_AS_STRING(s)) -#define __Pyx_PyBytes_AsWritableUString(s) ((unsigned char*) PyBytes_AS_STRING(s)) -#define __Pyx_PyBytes_AsString(s) ((const char*) PyBytes_AS_STRING(s)) -#define __Pyx_PyBytes_AsSString(s) ((const signed char*) PyBytes_AS_STRING(s)) -#define __Pyx_PyBytes_AsUString(s) ((const unsigned char*) PyBytes_AS_STRING(s)) -#define __Pyx_PyObject_AsWritableString(s) ((char*) __Pyx_PyObject_AsString(s)) -#define __Pyx_PyObject_AsWritableSString(s) ((signed char*) __Pyx_PyObject_AsString(s)) -#define __Pyx_PyObject_AsWritableUString(s) ((unsigned char*) __Pyx_PyObject_AsString(s)) -#define __Pyx_PyObject_AsSString(s) ((const signed char*) __Pyx_PyObject_AsString(s)) -#define __Pyx_PyObject_AsUString(s) ((const unsigned char*) __Pyx_PyObject_AsString(s)) -#define __Pyx_PyObject_FromCString(s) __Pyx_PyObject_FromString((const char*)s) -#define __Pyx_PyBytes_FromCString(s) __Pyx_PyBytes_FromString((const char*)s) -#define __Pyx_PyByteArray_FromCString(s) __Pyx_PyByteArray_FromString((const char*)s) -#define __Pyx_PyStr_FromCString(s) __Pyx_PyStr_FromString((const char*)s) -#define __Pyx_PyUnicode_FromCString(s) __Pyx_PyUnicode_FromString((const char*)s) - -// There used to be a Py_UNICODE_strlen() in CPython 3.x, but it is deprecated since Py3.3. -static CYTHON_INLINE size_t __Pyx_Py_UNICODE_strlen(const Py_UNICODE *u) { - const Py_UNICODE *u_end = u; - while (*u_end++) ; - return (size_t)(u_end - u - 1); -} - -#define __Pyx_PyUnicode_FromUnicode(u) PyUnicode_FromUnicode(u, __Pyx_Py_UNICODE_strlen(u)) -#define __Pyx_PyUnicode_FromUnicodeAndLength PyUnicode_FromUnicode -#define __Pyx_PyUnicode_AsUnicode PyUnicode_AsUnicode - -#define __Pyx_NewRef(obj) (Py_INCREF(obj), obj) -#define __Pyx_Owned_Py_None(b) __Pyx_NewRef(Py_None) -static CYTHON_INLINE PyObject * __Pyx_PyBool_FromLong(long b); -static CYTHON_INLINE int __Pyx_PyObject_IsTrue(PyObject*); -static CYTHON_INLINE int __Pyx_PyObject_IsTrueAndDecref(PyObject*); -static CYTHON_INLINE PyObject* __Pyx_PyNumber_IntOrLong(PyObject* x); - -#define __Pyx_PySequence_Tuple(obj) \ - (likely(PyTuple_CheckExact(obj)) ? __Pyx_NewRef(obj) : PySequence_Tuple(obj)) - -static CYTHON_INLINE Py_ssize_t __Pyx_PyIndex_AsSsize_t(PyObject*); -static CYTHON_INLINE PyObject * __Pyx_PyInt_FromSize_t(size_t); -static CYTHON_INLINE Py_hash_t __Pyx_PyIndex_AsHash_t(PyObject*); - -#if CYTHON_ASSUME_SAFE_MACROS -#define __pyx_PyFloat_AsDouble(x) (PyFloat_CheckExact(x) ? 
PyFloat_AS_DOUBLE(x) : PyFloat_AsDouble(x)) -#else -#define __pyx_PyFloat_AsDouble(x) PyFloat_AsDouble(x) -#endif -#define __pyx_PyFloat_AsFloat(x) ((float) __pyx_PyFloat_AsDouble(x)) - -#if PY_MAJOR_VERSION >= 3 -#define __Pyx_PyNumber_Int(x) (PyLong_CheckExact(x) ? __Pyx_NewRef(x) : PyNumber_Long(x)) -#else -#define __Pyx_PyNumber_Int(x) (PyInt_CheckExact(x) ? __Pyx_NewRef(x) : PyNumber_Int(x)) -#endif -#define __Pyx_PyNumber_Float(x) (PyFloat_CheckExact(x) ? __Pyx_NewRef(x) : PyNumber_Float(x)) - -#if PY_MAJOR_VERSION < 3 && __PYX_DEFAULT_STRING_ENCODING_IS_ASCII -static int __Pyx_sys_getdefaultencoding_not_ascii; -static int __Pyx_init_sys_getdefaultencoding_params(void) { - PyObject* sys; - PyObject* default_encoding = NULL; - PyObject* ascii_chars_u = NULL; - PyObject* ascii_chars_b = NULL; - const char* default_encoding_c; - sys = PyImport_ImportModule("sys"); - if (!sys) goto bad; - default_encoding = PyObject_CallMethod(sys, (char*) "getdefaultencoding", NULL); - Py_DECREF(sys); - if (!default_encoding) goto bad; - default_encoding_c = PyBytes_AsString(default_encoding); - if (!default_encoding_c) goto bad; - if (strcmp(default_encoding_c, "ascii") == 0) { - __Pyx_sys_getdefaultencoding_not_ascii = 0; - } else { - char ascii_chars[128]; - int c; - for (c = 0; c < 128; c++) { - ascii_chars[c] = c; - } - __Pyx_sys_getdefaultencoding_not_ascii = 1; - ascii_chars_u = PyUnicode_DecodeASCII(ascii_chars, 128, NULL); - if (!ascii_chars_u) goto bad; - ascii_chars_b = PyUnicode_AsEncodedString(ascii_chars_u, default_encoding_c, NULL); - if (!ascii_chars_b || !PyBytes_Check(ascii_chars_b) || memcmp(ascii_chars, PyBytes_AS_STRING(ascii_chars_b), 128) != 0) { - PyErr_Format( - PyExc_ValueError, - "This module compiled with c_string_encoding=ascii, but default encoding '%.200s' is not a superset of ascii.", - default_encoding_c); - goto bad; - } - Py_DECREF(ascii_chars_u); - Py_DECREF(ascii_chars_b); - } - Py_DECREF(default_encoding); - return 0; -bad: - Py_XDECREF(default_encoding); - Py_XDECREF(ascii_chars_u); - Py_XDECREF(ascii_chars_b); - return -1; -} -#endif - -#if __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT && PY_MAJOR_VERSION >= 3 -#define __Pyx_PyUnicode_FromStringAndSize(c_str, size) PyUnicode_DecodeUTF8(c_str, size, NULL) -#else -#define __Pyx_PyUnicode_FromStringAndSize(c_str, size) PyUnicode_Decode(c_str, size, __PYX_DEFAULT_STRING_ENCODING, NULL) - -// __PYX_DEFAULT_STRING_ENCODING is either a user provided string constant -// or we need to look it up here -#if __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT -static char* __PYX_DEFAULT_STRING_ENCODING; - -static int __Pyx_init_sys_getdefaultencoding_params(void) { - PyObject* sys; - PyObject* default_encoding = NULL; - char* default_encoding_c; - - sys = PyImport_ImportModule("sys"); - if (!sys) goto bad; - default_encoding = PyObject_CallMethod(sys, (char*) (const char*) "getdefaultencoding", NULL); - Py_DECREF(sys); - if (!default_encoding) goto bad; - default_encoding_c = PyBytes_AsString(default_encoding); - if (!default_encoding_c) goto bad; - __PYX_DEFAULT_STRING_ENCODING = (char*) malloc(strlen(default_encoding_c) + 1); - if (!__PYX_DEFAULT_STRING_ENCODING) goto bad; - strcpy(__PYX_DEFAULT_STRING_ENCODING, default_encoding_c); - Py_DECREF(default_encoding); - return 0; -bad: - Py_XDECREF(default_encoding); - return -1; -} -#endif -#endif - -/////////////// TypeConversions /////////////// - -/* Type Conversion Functions */ - -static CYTHON_INLINE PyObject* __Pyx_PyUnicode_FromString(const char* c_str) { - return 
__Pyx_PyUnicode_FromStringAndSize(c_str, (Py_ssize_t)strlen(c_str)); -} - -// Py3.7 returns a "const char*" for unicode strings -static CYTHON_INLINE const char* __Pyx_PyObject_AsString(PyObject* o) { - Py_ssize_t ignore; - return __Pyx_PyObject_AsStringAndSize(o, &ignore); -} - -#if __PYX_DEFAULT_STRING_ENCODING_IS_ASCII || __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT -#if !CYTHON_PEP393_ENABLED -static const char* __Pyx_PyUnicode_AsStringAndSize(PyObject* o, Py_ssize_t *length) { - char* defenc_c; - // borrowed reference, cached internally in 'o' by CPython - PyObject* defenc = _PyUnicode_AsDefaultEncodedString(o, NULL); - if (!defenc) return NULL; - defenc_c = PyBytes_AS_STRING(defenc); -#if __PYX_DEFAULT_STRING_ENCODING_IS_ASCII - { - char* end = defenc_c + PyBytes_GET_SIZE(defenc); - char* c; - for (c = defenc_c; c < end; c++) { - if ((unsigned char) (*c) >= 128) { - // raise the error - PyUnicode_AsASCIIString(o); - return NULL; - } - } - } -#endif /*__PYX_DEFAULT_STRING_ENCODING_IS_ASCII*/ - *length = PyBytes_GET_SIZE(defenc); - return defenc_c; -} - -#else /* CYTHON_PEP393_ENABLED: */ - -static CYTHON_INLINE const char* __Pyx_PyUnicode_AsStringAndSize(PyObject* o, Py_ssize_t *length) { - if (unlikely(__Pyx_PyUnicode_READY(o) == -1)) return NULL; -#if __PYX_DEFAULT_STRING_ENCODING_IS_ASCII - if (likely(PyUnicode_IS_ASCII(o))) { - // cached for the lifetime of the object - *length = PyUnicode_GET_LENGTH(o); - return PyUnicode_AsUTF8(o); - } else { - // raise the error - PyUnicode_AsASCIIString(o); - return NULL; - } -#else /* __PYX_DEFAULT_STRING_ENCODING_IS_ASCII */ - return PyUnicode_AsUTF8AndSize(o, length); -#endif /* __PYX_DEFAULT_STRING_ENCODING_IS_ASCII */ -} -#endif /* CYTHON_PEP393_ENABLED */ -#endif - -// Py3.7 returns a "const char*" for unicode strings -static CYTHON_INLINE const char* __Pyx_PyObject_AsStringAndSize(PyObject* o, Py_ssize_t *length) { -#if __PYX_DEFAULT_STRING_ENCODING_IS_ASCII || __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT - if ( -#if PY_MAJOR_VERSION < 3 && __PYX_DEFAULT_STRING_ENCODING_IS_ASCII - __Pyx_sys_getdefaultencoding_not_ascii && -#endif - PyUnicode_Check(o)) { - return __Pyx_PyUnicode_AsStringAndSize(o, length); - } else -#endif /* __PYX_DEFAULT_STRING_ENCODING_IS_ASCII || __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT */ - -#if (!CYTHON_COMPILING_IN_PYPY) || (defined(PyByteArray_AS_STRING) && defined(PyByteArray_GET_SIZE)) - if (PyByteArray_Check(o)) { - *length = PyByteArray_GET_SIZE(o); - return PyByteArray_AS_STRING(o); - } else -#endif - { - char* result; - int r = PyBytes_AsStringAndSize(o, &result, length); - if (unlikely(r < 0)) { - return NULL; - } else { - return result; - } - } -} - -/* Note: __Pyx_PyObject_IsTrue is written to minimize branching. */ -static CYTHON_INLINE int __Pyx_PyObject_IsTrue(PyObject* x) { - int is_true = x == Py_True; - if (is_true | (x == Py_False) | (x == Py_None)) return is_true; - else return PyObject_IsTrue(x); -} - -static CYTHON_INLINE int __Pyx_PyObject_IsTrueAndDecref(PyObject* x) { - int retval; - if (unlikely(!x)) return -1; - retval = __Pyx_PyObject_IsTrue(x); - Py_DECREF(x); - return retval; -} - -static PyObject* __Pyx_PyNumber_IntOrLongWrongResultType(PyObject* result, const char* type_name) { -#if PY_MAJOR_VERSION >= 3 - if (PyLong_Check(result)) { - // CPython issue #17576: warn if 'result' not of exact type int. - if (PyErr_WarnFormat(PyExc_DeprecationWarning, 1, - "__int__ returned non-int (type %.200s). 
" - "The ability to return an instance of a strict subclass of int " - "is deprecated, and may be removed in a future version of Python.", - Py_TYPE(result)->tp_name)) { - Py_DECREF(result); - return NULL; - } - return result; - } -#endif - PyErr_Format(PyExc_TypeError, - "__%.4s__ returned non-%.4s (type %.200s)", - type_name, type_name, Py_TYPE(result)->tp_name); - Py_DECREF(result); - return NULL; -} - -static CYTHON_INLINE PyObject* __Pyx_PyNumber_IntOrLong(PyObject* x) { -#if CYTHON_USE_TYPE_SLOTS - PyNumberMethods *m; -#endif - const char *name = NULL; - PyObject *res = NULL; -#if PY_MAJOR_VERSION < 3 - if (likely(PyInt_Check(x) || PyLong_Check(x))) -#else - if (likely(PyLong_Check(x))) -#endif - return __Pyx_NewRef(x); -#if CYTHON_USE_TYPE_SLOTS - m = Py_TYPE(x)->tp_as_number; - #if PY_MAJOR_VERSION < 3 - if (m && m->nb_int) { - name = "int"; - res = m->nb_int(x); - } - else if (m && m->nb_long) { - name = "long"; - res = m->nb_long(x); - } - #else - if (likely(m && m->nb_int)) { - name = "int"; - res = m->nb_int(x); - } - #endif -#else - if (!PyBytes_CheckExact(x) && !PyUnicode_CheckExact(x)) { - res = PyNumber_Int(x); - } -#endif - if (likely(res)) { -#if PY_MAJOR_VERSION < 3 - if (unlikely(!PyInt_Check(res) && !PyLong_Check(res))) { -#else - if (unlikely(!PyLong_CheckExact(res))) { -#endif - return __Pyx_PyNumber_IntOrLongWrongResultType(res, name); - } - } - else if (!PyErr_Occurred()) { - PyErr_SetString(PyExc_TypeError, - "an integer is required"); - } - return res; -} - -{{py: from Cython.Utility import pylong_join }} - -static CYTHON_INLINE Py_ssize_t __Pyx_PyIndex_AsSsize_t(PyObject* b) { - Py_ssize_t ival; - PyObject *x; -#if PY_MAJOR_VERSION < 3 - if (likely(PyInt_CheckExact(b))) { - if (sizeof(Py_ssize_t) >= sizeof(long)) - return PyInt_AS_LONG(b); - else - return PyInt_AsSsize_t(b); - } -#endif - if (likely(PyLong_CheckExact(b))) { - #if CYTHON_USE_PYLONG_INTERNALS - const digit* digits = ((PyLongObject*)b)->ob_digit; - const Py_ssize_t size = Py_SIZE(b); - // handle most common case first to avoid indirect branch and optimise branch prediction - if (likely(__Pyx_sst_abs(size) <= 1)) { - ival = likely(size) ? digits[0] : 0; - if (size == -1) ival = -ival; - return ival; - } else { - switch (size) { - {{for _size in (2, 3, 4)}} - {{for _case in (_size, -_size)}} - case {{_case}}: - if (8 * sizeof(Py_ssize_t) > {{_size}} * PyLong_SHIFT) { - return {{'-' if _case < 0 else ''}}(Py_ssize_t) {{pylong_join(_size, 'digits', 'size_t')}}; - } - break; - {{endfor}} - {{endfor}} - } - } - #endif - return PyLong_AsSsize_t(b); - } - x = PyNumber_Index(b); - if (!x) return -1; - ival = PyInt_AsSsize_t(x); - Py_DECREF(x); - return ival; -} - - -static CYTHON_INLINE Py_hash_t __Pyx_PyIndex_AsHash_t(PyObject* o) { - if (sizeof(Py_hash_t) == sizeof(Py_ssize_t)) { - return (Py_hash_t) __Pyx_PyIndex_AsSsize_t(o); -#if PY_MAJOR_VERSION < 3 - } else if (likely(PyInt_CheckExact(o))) { - return PyInt_AS_LONG(o); -#endif - } else { - Py_ssize_t ival; - PyObject *x; - x = PyNumber_Index(o); - if (!x) return -1; - ival = PyInt_AsLong(x); - Py_DECREF(x); - return ival; - } -} - - -static CYTHON_INLINE PyObject * __Pyx_PyBool_FromLong(long b) { - return b ? 
__Pyx_NewRef(Py_True) : __Pyx_NewRef(Py_False); -} - - -static CYTHON_INLINE PyObject * __Pyx_PyInt_FromSize_t(size_t ival) { - return PyInt_FromSize_t(ival); -} - - -/////////////// GCCDiagnostics.proto /////////////// - -// GCC diagnostic pragmas were introduced in GCC 4.6 -// Used to silence conversion warnings that are ok but cannot be avoided. -#if defined(__GNUC__) && (__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 6)) -#define __Pyx_HAS_GCC_DIAGNOSTIC -#endif - - -/////////////// ToPyCTupleUtility.proto /////////////// -static PyObject* {{funcname}}({{struct_type_decl}}); - -/////////////// ToPyCTupleUtility /////////////// -static PyObject* {{funcname}}({{struct_type_decl}} value) { - PyObject* item = NULL; - PyObject* result = PyTuple_New({{size}}); - if (!result) goto bad; - - {{for ix, component in enumerate(components):}} - {{py:attr = "value.f%s" % ix}} - item = {{component.to_py_function}}({{attr}}); - if (!item) goto bad; - PyTuple_SET_ITEM(result, {{ix}}, item); - {{endfor}} - - return result; -bad: - Py_XDECREF(item); - Py_XDECREF(result); - return NULL; -} - - -/////////////// FromPyCTupleUtility.proto /////////////// -static {{struct_type_decl}} {{funcname}}(PyObject *); - -/////////////// FromPyCTupleUtility /////////////// -static {{struct_type_decl}} {{funcname}}(PyObject * o) { - {{struct_type_decl}} result; - - if (!PyTuple_Check(o) || PyTuple_GET_SIZE(o) != {{size}}) { - PyErr_Format(PyExc_TypeError, "Expected %.16s of size %d, got %.200s", "a tuple", {{size}}, Py_TYPE(o)->tp_name); - goto bad; - } - -#if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - {{for ix, component in enumerate(components):}} - {{py:attr = "result.f%s" % ix}} - {{attr}} = {{component.from_py_function}}(PyTuple_GET_ITEM(o, {{ix}})); - if ({{component.error_condition(attr)}}) goto bad; - {{endfor}} -#else - { - PyObject *item; - {{for ix, component in enumerate(components):}} - {{py:attr = "result.f%s" % ix}} - item = PySequence_ITEM(o, {{ix}}); if (unlikely(!item)) goto bad; - {{attr}} = {{component.from_py_function}}(item); - Py_DECREF(item); - if ({{component.error_condition(attr)}}) goto bad; - {{endfor}} - } -#endif - - return result; -bad: - return result; -} - - -/////////////// UnicodeAsUCS4.proto /////////////// - -static CYTHON_INLINE Py_UCS4 __Pyx_PyUnicode_AsPy_UCS4(PyObject*); - -/////////////// UnicodeAsUCS4 /////////////// - -static CYTHON_INLINE Py_UCS4 __Pyx_PyUnicode_AsPy_UCS4(PyObject* x) { - Py_ssize_t length; - #if CYTHON_PEP393_ENABLED - length = PyUnicode_GET_LENGTH(x); - if (likely(length == 1)) { - return PyUnicode_READ_CHAR(x, 0); - } - #else - length = PyUnicode_GET_SIZE(x); - if (likely(length == 1)) { - return PyUnicode_AS_UNICODE(x)[0]; - } - #if Py_UNICODE_SIZE == 2 - else if (PyUnicode_GET_SIZE(x) == 2) { - Py_UCS4 high_val = PyUnicode_AS_UNICODE(x)[0]; - if (high_val >= 0xD800 && high_val <= 0xDBFF) { - Py_UCS4 low_val = PyUnicode_AS_UNICODE(x)[1]; - if (low_val >= 0xDC00 && low_val <= 0xDFFF) { - return 0x10000 + (((high_val & ((1<<10)-1)) << 10) | (low_val & ((1<<10)-1))); - } - } - } - #endif - #endif - PyErr_Format(PyExc_ValueError, - "only single character unicode strings can be converted to Py_UCS4, " - "got length %" CYTHON_FORMAT_SSIZE_T "d", length); - return (Py_UCS4)-1; -} - - -/////////////// ObjectAsUCS4.proto /////////////// -//@requires: UnicodeAsUCS4 - -#define __Pyx_PyObject_AsPy_UCS4(x) \ - (likely(PyUnicode_Check(x)) ? 
__Pyx_PyUnicode_AsPy_UCS4(x) : __Pyx__PyObject_AsPy_UCS4(x)) -static Py_UCS4 __Pyx__PyObject_AsPy_UCS4(PyObject*); - -/////////////// ObjectAsUCS4 /////////////// - -static Py_UCS4 __Pyx__PyObject_AsPy_UCS4_raise_error(long ival) { - if (ival < 0) { - if (!PyErr_Occurred()) - PyErr_SetString(PyExc_OverflowError, - "cannot convert negative value to Py_UCS4"); - } else { - PyErr_SetString(PyExc_OverflowError, - "value too large to convert to Py_UCS4"); - } - return (Py_UCS4)-1; -} - -static Py_UCS4 __Pyx__PyObject_AsPy_UCS4(PyObject* x) { - long ival; - ival = __Pyx_PyInt_As_long(x); - if (unlikely(!__Pyx_is_valid_index(ival, 1114111 + 1))) { - return __Pyx__PyObject_AsPy_UCS4_raise_error(ival); - } - return (Py_UCS4)ival; -} - - -/////////////// ObjectAsPyUnicode.proto /////////////// - -static CYTHON_INLINE Py_UNICODE __Pyx_PyObject_AsPy_UNICODE(PyObject*); - -/////////////// ObjectAsPyUnicode /////////////// - -static CYTHON_INLINE Py_UNICODE __Pyx_PyObject_AsPy_UNICODE(PyObject* x) { - long ival; - #if CYTHON_PEP393_ENABLED - #if Py_UNICODE_SIZE > 2 - const long maxval = 1114111; - #else - const long maxval = 65535; - #endif - #else - static long maxval = 0; - #endif - if (PyUnicode_Check(x)) { - if (unlikely(__Pyx_PyUnicode_GET_LENGTH(x) != 1)) { - PyErr_Format(PyExc_ValueError, - "only single character unicode strings can be converted to Py_UNICODE, " - "got length %" CYTHON_FORMAT_SSIZE_T "d", __Pyx_PyUnicode_GET_LENGTH(x)); - return (Py_UNICODE)-1; - } - #if CYTHON_PEP393_ENABLED - ival = PyUnicode_READ_CHAR(x, 0); - #else - return PyUnicode_AS_UNICODE(x)[0]; - #endif - } else { - #if !CYTHON_PEP393_ENABLED - if (unlikely(!maxval)) - maxval = (long)PyUnicode_GetMax(); - #endif - ival = __Pyx_PyInt_As_long(x); - } - if (unlikely(!__Pyx_is_valid_index(ival, maxval + 1))) { - if (ival < 0) { - if (!PyErr_Occurred()) - PyErr_SetString(PyExc_OverflowError, - "cannot convert negative value to Py_UNICODE"); - return (Py_UNICODE)-1; - } else { - PyErr_SetString(PyExc_OverflowError, - "value too large to convert to Py_UNICODE"); - } - return (Py_UNICODE)-1; - } - return (Py_UNICODE)ival; -} - - -/////////////// CIntToPy.proto /////////////// - -static CYTHON_INLINE PyObject* {{TO_PY_FUNCTION}}({{TYPE}} value); - -/////////////// CIntToPy /////////////// -//@requires: GCCDiagnostics - -static CYTHON_INLINE PyObject* {{TO_PY_FUNCTION}}({{TYPE}} value) { -#ifdef __Pyx_HAS_GCC_DIAGNOSTIC -#pragma GCC diagnostic push -#pragma GCC diagnostic ignored "-Wconversion" -#endif - const {{TYPE}} neg_one = ({{TYPE}}) -1, const_zero = ({{TYPE}}) 0; -#ifdef __Pyx_HAS_GCC_DIAGNOSTIC -#pragma GCC diagnostic pop -#endif - const int is_unsigned = neg_one > const_zero; - if (is_unsigned) { - if (sizeof({{TYPE}}) < sizeof(long)) { - return PyInt_FromLong((long) value); - } else if (sizeof({{TYPE}}) <= sizeof(unsigned long)) { - return PyLong_FromUnsignedLong((unsigned long) value); -#ifdef HAVE_LONG_LONG - } else if (sizeof({{TYPE}}) <= sizeof(unsigned PY_LONG_LONG)) { - return PyLong_FromUnsignedLongLong((unsigned PY_LONG_LONG) value); -#endif - } - } else { - if (sizeof({{TYPE}}) <= sizeof(long)) { - return PyInt_FromLong((long) value); -#ifdef HAVE_LONG_LONG - } else if (sizeof({{TYPE}}) <= sizeof(PY_LONG_LONG)) { - return PyLong_FromLongLong((PY_LONG_LONG) value); -#endif - } - } - { - int one = 1; int little = (int)*(unsigned char *)&one; - unsigned char *bytes = (unsigned char *)&value; - return _PyLong_FromByteArray(bytes, sizeof({{TYPE}}), - little, !is_unsigned); - } -} - - -/////////////// CIntToDigits 
/////////////// - -static const char DIGIT_PAIRS_10[2*10*10+1] = { - "00010203040506070809" - "10111213141516171819" - "20212223242526272829" - "30313233343536373839" - "40414243444546474849" - "50515253545556575859" - "60616263646566676869" - "70717273747576777879" - "80818283848586878889" - "90919293949596979899" -}; - -static const char DIGIT_PAIRS_8[2*8*8+1] = { - "0001020304050607" - "1011121314151617" - "2021222324252627" - "3031323334353637" - "4041424344454647" - "5051525354555657" - "6061626364656667" - "7071727374757677" -}; - -static const char DIGITS_HEX[2*16+1] = { - "0123456789abcdef" - "0123456789ABCDEF" -}; - - -/////////////// CIntToPyUnicode.proto /////////////// - -static CYTHON_INLINE PyObject* {{TO_PY_FUNCTION}}({{TYPE}} value, Py_ssize_t width, char padding_char, char format_char); - -/////////////// CIntToPyUnicode /////////////// -//@requires: StringTools.c::IncludeStringH -//@requires: StringTools.c::BuildPyUnicode -//@requires: CIntToDigits -//@requires: GCCDiagnostics - -// NOTE: inlining because most arguments are constant, which collapses lots of code below - -static CYTHON_INLINE PyObject* {{TO_PY_FUNCTION}}({{TYPE}} value, Py_ssize_t width, char padding_char, char format_char) { - // simple and conservative C string allocation on the stack: each byte gives at most 3 digits, plus sign - char digits[sizeof({{TYPE}})*3+2]; - // 'dpos' points to end of digits array + 1 initially to allow for pre-decrement looping - char *dpos, *end = digits + sizeof({{TYPE}})*3+2; - const char *hex_digits = DIGITS_HEX; - Py_ssize_t length, ulength; - int prepend_sign, last_one_off; - {{TYPE}} remaining; -#ifdef __Pyx_HAS_GCC_DIAGNOSTIC -#pragma GCC diagnostic push -#pragma GCC diagnostic ignored "-Wconversion" -#endif - const {{TYPE}} neg_one = ({{TYPE}}) -1, const_zero = ({{TYPE}}) 0; -#ifdef __Pyx_HAS_GCC_DIAGNOSTIC -#pragma GCC diagnostic pop -#endif - const int is_unsigned = neg_one > const_zero; - - if (format_char == 'X') { - hex_digits += 16; - format_char = 'x'; - } - - // surprise: even trivial sprintf() calls don't get optimised in gcc (4.8) - remaining = value; /* not using abs(value) to avoid overflow problems */ - last_one_off = 0; - dpos = end; - do { - int digit_pos; - switch (format_char) { - case 'o': - digit_pos = abs((int)(remaining % (8*8))); - remaining = ({{TYPE}}) (remaining / (8*8)); - dpos -= 2; - memcpy(dpos, DIGIT_PAIRS_8 + digit_pos * 2, 2); /* copy 2 digits at a time, unaligned */ - last_one_off = (digit_pos < 8); - break; - case 'd': - digit_pos = abs((int)(remaining % (10*10))); - remaining = ({{TYPE}}) (remaining / (10*10)); - dpos -= 2; - memcpy(dpos, DIGIT_PAIRS_10 + digit_pos * 2, 2); /* copy 2 digits at a time, unaligned */ - last_one_off = (digit_pos < 10); - break; - case 'x': - *(--dpos) = hex_digits[abs((int)(remaining % 16))]; - remaining = ({{TYPE}}) (remaining / 16); - break; - default: - assert(0); - break; - } - } while (unlikely(remaining != 0)); - - if (last_one_off) { - assert(*dpos == '0'); - dpos++; - } - length = end - dpos; - ulength = length; - prepend_sign = 0; - if (!is_unsigned && value <= neg_one) { - if (padding_char == ' ' || width <= length + 1) { - *(--dpos) = '-'; - ++length; - } else { - prepend_sign = 1; - } - ++ulength; - } - if (width > ulength) { - ulength = width; - } - // single character unicode strings are cached in CPython => use PyUnicode_FromOrdinal() for them - if (ulength == 1) { - return PyUnicode_FromOrdinal(*dpos); - } - return __Pyx_PyUnicode_BuildFromAscii(ulength, dpos, (int) length, prepend_sign, 
padding_char); -} - - -/////////////// CBIntToPyUnicode.proto /////////////// - -#define {{TO_PY_FUNCTION}}(value) \ - ((value) ? __Pyx_NewRef({{TRUE_CONST}}) : __Pyx_NewRef({{FALSE_CONST}})) - - -/////////////// PyIntFromDouble.proto /////////////// - -#if PY_MAJOR_VERSION < 3 -static CYTHON_INLINE PyObject* __Pyx_PyInt_FromDouble(double value); -#else -#define __Pyx_PyInt_FromDouble(value) PyLong_FromDouble(value) -#endif - -/////////////// PyIntFromDouble /////////////// - -#if PY_MAJOR_VERSION < 3 -static CYTHON_INLINE PyObject* __Pyx_PyInt_FromDouble(double value) { - if (value >= (double)LONG_MIN && value <= (double)LONG_MAX) { - return PyInt_FromLong((long)value); - } - return PyLong_FromDouble(value); -} -#endif - - -/////////////// CIntFromPyVerify /////////////// - -// see CIntFromPy -#define __PYX_VERIFY_RETURN_INT(target_type, func_type, func_value) \ - __PYX__VERIFY_RETURN_INT(target_type, func_type, func_value, 0) - -#define __PYX_VERIFY_RETURN_INT_EXC(target_type, func_type, func_value) \ - __PYX__VERIFY_RETURN_INT(target_type, func_type, func_value, 1) - -#define __PYX__VERIFY_RETURN_INT(target_type, func_type, func_value, exc) \ - { \ - func_type value = func_value; \ - if (sizeof(target_type) < sizeof(func_type)) { \ - if (unlikely(value != (func_type) (target_type) value)) { \ - func_type zero = 0; \ - if (exc && unlikely(value == (func_type)-1 && PyErr_Occurred())) \ - return (target_type) -1; \ - if (is_unsigned && unlikely(value < zero)) \ - goto raise_neg_overflow; \ - else \ - goto raise_overflow; \ - } \ - } \ - return (target_type) value; \ - } - - -/////////////// CIntFromPy.proto /////////////// - -static CYTHON_INLINE {{TYPE}} {{FROM_PY_FUNCTION}}(PyObject *); - -/////////////// CIntFromPy /////////////// -//@requires: CIntFromPyVerify -//@requires: GCCDiagnostics - -{{py: from Cython.Utility import pylong_join }} - -static CYTHON_INLINE {{TYPE}} {{FROM_PY_FUNCTION}}(PyObject *x) { -#ifdef __Pyx_HAS_GCC_DIAGNOSTIC -#pragma GCC diagnostic push -#pragma GCC diagnostic ignored "-Wconversion" -#endif - const {{TYPE}} neg_one = ({{TYPE}}) -1, const_zero = ({{TYPE}}) 0; -#ifdef __Pyx_HAS_GCC_DIAGNOSTIC -#pragma GCC diagnostic pop -#endif - const int is_unsigned = neg_one > const_zero; -#if PY_MAJOR_VERSION < 3 - if (likely(PyInt_Check(x))) { - if (sizeof({{TYPE}}) < sizeof(long)) { - __PYX_VERIFY_RETURN_INT({{TYPE}}, long, PyInt_AS_LONG(x)) - } else { - long val = PyInt_AS_LONG(x); - if (is_unsigned && unlikely(val < 0)) { - goto raise_neg_overflow; - } - return ({{TYPE}}) val; - } - } else -#endif - if (likely(PyLong_Check(x))) { - if (is_unsigned) { -#if CYTHON_USE_PYLONG_INTERNALS - const digit* digits = ((PyLongObject*)x)->ob_digit; - switch (Py_SIZE(x)) { - case 0: return ({{TYPE}}) 0; - case 1: __PYX_VERIFY_RETURN_INT({{TYPE}}, digit, digits[0]) - {{for _size in (2, 3, 4)}} - case {{_size}}: - if (8 * sizeof({{TYPE}}) > {{_size-1}} * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > {{_size}} * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT({{TYPE}}, unsigned long, {{pylong_join(_size, 'digits')}}) - } else if (8 * sizeof({{TYPE}}) >= {{_size}} * PyLong_SHIFT) { - return ({{TYPE}}) {{pylong_join(_size, 'digits', TYPE)}}; - } - } - break; - {{endfor}} - } -#endif -#if CYTHON_COMPILING_IN_CPYTHON - if (unlikely(Py_SIZE(x) < 0)) { - goto raise_neg_overflow; - } -#else - { - // misuse Py_False as a quick way to compare to a '0' int object in PyPy - int result = PyObject_RichCompareBool(x, Py_False, Py_LT); - if (unlikely(result < 0)) - return ({{TYPE}}) -1; - if 
(unlikely(result == 1)) - goto raise_neg_overflow; - } -#endif - if (sizeof({{TYPE}}) <= sizeof(unsigned long)) { - __PYX_VERIFY_RETURN_INT_EXC({{TYPE}}, unsigned long, PyLong_AsUnsignedLong(x)) -#ifdef HAVE_LONG_LONG - } else if (sizeof({{TYPE}}) <= sizeof(unsigned PY_LONG_LONG)) { - __PYX_VERIFY_RETURN_INT_EXC({{TYPE}}, unsigned PY_LONG_LONG, PyLong_AsUnsignedLongLong(x)) -#endif - } - } else { - // signed -#if CYTHON_USE_PYLONG_INTERNALS - const digit* digits = ((PyLongObject*)x)->ob_digit; - switch (Py_SIZE(x)) { - case 0: return ({{TYPE}}) 0; - case -1: __PYX_VERIFY_RETURN_INT({{TYPE}}, sdigit, (sdigit) (-(sdigit)digits[0])) - case 1: __PYX_VERIFY_RETURN_INT({{TYPE}}, digit, +digits[0]) - {{for _size in (2, 3, 4)}} - {{for _case in (-_size, _size)}} - case {{_case}}: - if (8 * sizeof({{TYPE}}){{' - 1' if _case < 0 else ''}} > {{_size-1}} * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > {{_size}} * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT({{TYPE}}, {{'long' if _case < 0 else 'unsigned long'}}, {{'-(long) ' if _case < 0 else ''}}{{pylong_join(_size, 'digits')}}) - } else if (8 * sizeof({{TYPE}}) - 1 > {{_size}} * PyLong_SHIFT) { - return ({{TYPE}}) ({{'((%s)-1)*' % TYPE if _case < 0 else ''}}{{pylong_join(_size, 'digits', TYPE)}}); - } - } - break; - {{endfor}} - {{endfor}} - } -#endif - if (sizeof({{TYPE}}) <= sizeof(long)) { - __PYX_VERIFY_RETURN_INT_EXC({{TYPE}}, long, PyLong_AsLong(x)) -#ifdef HAVE_LONG_LONG - } else if (sizeof({{TYPE}}) <= sizeof(PY_LONG_LONG)) { - __PYX_VERIFY_RETURN_INT_EXC({{TYPE}}, PY_LONG_LONG, PyLong_AsLongLong(x)) -#endif - } - } - { -#if CYTHON_COMPILING_IN_PYPY && !defined(_PyLong_AsByteArray) - PyErr_SetString(PyExc_RuntimeError, - "_PyLong_AsByteArray() not available in PyPy, cannot convert large numbers"); -#else - {{TYPE}} val; - PyObject *v = __Pyx_PyNumber_IntOrLong(x); - #if PY_MAJOR_VERSION < 3 - if (likely(v) && !PyLong_Check(v)) { - PyObject *tmp = v; - v = PyNumber_Long(tmp); - Py_DECREF(tmp); - } - #endif - if (likely(v)) { - int one = 1; int is_little = (int)*(unsigned char *)&one; - unsigned char *bytes = (unsigned char *)&val; - int ret = _PyLong_AsByteArray((PyLongObject *)v, - bytes, sizeof(val), - is_little, !is_unsigned); - Py_DECREF(v); - if (likely(!ret)) - return val; - } -#endif - return ({{TYPE}}) -1; - } - } else { - {{TYPE}} val; - PyObject *tmp = __Pyx_PyNumber_IntOrLong(x); - if (!tmp) return ({{TYPE}}) -1; - val = {{FROM_PY_FUNCTION}}(tmp); - Py_DECREF(tmp); - return val; - } - -raise_overflow: - PyErr_SetString(PyExc_OverflowError, - "value too large to convert to {{TYPE}}"); - return ({{TYPE}}) -1; - -raise_neg_overflow: - PyErr_SetString(PyExc_OverflowError, - "can't convert negative value to {{TYPE}}"); - return ({{TYPE}}) -1; -} diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/utils/theme.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/utils/theme.py deleted file mode 100644 index 10dc6fa8a81646ed7e9fa8d6be4e1634ec14e7d8..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/utils/theme.py +++ /dev/null @@ -1,10 +0,0 @@ -"""Utilities for registering and working with themes""" - -from .plugin_registry import PluginRegistry -from typing import Callable - -ThemeType = Callable[..., dict] - - -class ThemeRegistry(PluginRegistry[ThemeType]): - pass diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/cachetools/keys.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/cachetools/keys.py deleted 
file mode 100644 index f2feb4182b7029568181f8e39d3061f0a90f4f87..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/cachetools/keys.py +++ /dev/null @@ -1,57 +0,0 @@ -"""Key functions for memoizing decorators.""" - -__all__ = ("hashkey", "methodkey", "typedkey") - - -class _HashedTuple(tuple): - """A tuple that ensures that hash() will be called no more than once - per element, since cache decorators will hash the key multiple - times on a cache miss. See also _HashedSeq in the standard - library functools implementation. - - """ - - __hashvalue = None - - def __hash__(self, hash=tuple.__hash__): - hashvalue = self.__hashvalue - if hashvalue is None: - self.__hashvalue = hashvalue = hash(self) - return hashvalue - - def __add__(self, other, add=tuple.__add__): - return _HashedTuple(add(self, other)) - - def __radd__(self, other, add=tuple.__add__): - return _HashedTuple(add(other, self)) - - def __getstate__(self): - return {} - - -# used for separating keyword arguments; we do not use an object -# instance here so identity is preserved when pickling/unpickling -_kwmark = (_HashedTuple,) - - -def hashkey(*args, **kwargs): - """Return a cache key for the specified hashable arguments.""" - - if kwargs: - return _HashedTuple(args + sum(sorted(kwargs.items()), _kwmark)) - else: - return _HashedTuple(args) - - -def methodkey(self, *args, **kwargs): - """Return a cache key for use with cached methods.""" - return hashkey(*args, **kwargs) - - -def typedkey(*args, **kwargs): - """Return a typed cache key for the specified hashable arguments.""" - - key = hashkey(*args, **kwargs) - key += tuple(type(v) for v in args) - key += tuple(type(v) for _, v in sorted(kwargs.items())) - return key diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/charset_normalizer/models.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/charset_normalizer/models.py deleted file mode 100644 index ccb0d475be4b4245a8d0f64c84719128438b89c4..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/charset_normalizer/models.py +++ /dev/null @@ -1,401 +0,0 @@ -import warnings -from collections import Counter -from encodings.aliases import aliases -from hashlib import sha256 -from json import dumps -from re import sub -from typing import ( - Any, - Counter as TypeCounter, - Dict, - Iterator, - List, - Optional, - Tuple, - Union, -) - -from .constant import NOT_PRINTABLE_PATTERN, TOO_BIG_SEQUENCE -from .md import mess_ratio -from .utils import iana_name, is_multi_byte_encoding, unicode_range - - -class CharsetMatch: - def __init__( - self, - payload: bytes, - guessed_encoding: str, - mean_mess_ratio: float, - has_sig_or_bom: bool, - languages: "CoherenceMatches", - decoded_payload: Optional[str] = None, - ): - self._payload: bytes = payload - - self._encoding: str = guessed_encoding - self._mean_mess_ratio: float = mean_mess_ratio - self._languages: CoherenceMatches = languages - self._has_sig_or_bom: bool = has_sig_or_bom - self._unicode_ranges: Optional[List[str]] = None - - self._leaves: List[CharsetMatch] = [] - self._mean_coherence_ratio: float = 0.0 - - self._output_payload: Optional[bytes] = None - self._output_encoding: Optional[str] = None - - self._string: Optional[str] = decoded_payload - - def __eq__(self, other: object) -> bool: - if not isinstance(other, CharsetMatch): - raise TypeError( - "__eq__ cannot be invoked on {} and {}.".format( - str(other.__class__), str(self.__class__) - ) - ) 
- return self.encoding == other.encoding and self.fingerprint == other.fingerprint - - def __lt__(self, other: object) -> bool: - """ - Implemented to make sorted available upon CharsetMatches items. - """ - if not isinstance(other, CharsetMatch): - raise ValueError - - chaos_difference: float = abs(self.chaos - other.chaos) - coherence_difference: float = abs(self.coherence - other.coherence) - - # Bellow 1% difference --> Use Coherence - if chaos_difference < 0.01 and coherence_difference > 0.02: - # When having a tough decision, use the result that decoded as many multi-byte as possible. - if chaos_difference == 0.0 and self.coherence == other.coherence: - return self.multi_byte_usage > other.multi_byte_usage - return self.coherence > other.coherence - - return self.chaos < other.chaos - - @property - def multi_byte_usage(self) -> float: - return 1.0 - len(str(self)) / len(self.raw) - - @property - def chaos_secondary_pass(self) -> float: - """ - Check once again chaos in decoded text, except this time, with full content. - Use with caution, this can be very slow. - Notice: Will be removed in 3.0 - """ - warnings.warn( - "chaos_secondary_pass is deprecated and will be removed in 3.0", - DeprecationWarning, - ) - return mess_ratio(str(self), 1.0) - - @property - def coherence_non_latin(self) -> float: - """ - Coherence ratio on the first non-latin language detected if ANY. - Notice: Will be removed in 3.0 - """ - warnings.warn( - "coherence_non_latin is deprecated and will be removed in 3.0", - DeprecationWarning, - ) - return 0.0 - - @property - def w_counter(self) -> TypeCounter[str]: - """ - Word counter instance on decoded text. - Notice: Will be removed in 3.0 - """ - warnings.warn( - "w_counter is deprecated and will be removed in 3.0", DeprecationWarning - ) - - string_printable_only = sub(NOT_PRINTABLE_PATTERN, " ", str(self).lower()) - - return Counter(string_printable_only.split()) - - def __str__(self) -> str: - # Lazy Str Loading - if self._string is None: - self._string = str(self._payload, self._encoding, "strict") - return self._string - - def __repr__(self) -> str: - return "".format(self.encoding, self.fingerprint) - - def add_submatch(self, other: "CharsetMatch") -> None: - if not isinstance(other, CharsetMatch) or other == self: - raise ValueError( - "Unable to add instance <{}> as a submatch of a CharsetMatch".format( - other.__class__ - ) - ) - - other._string = None # Unload RAM usage; dirty trick. - self._leaves.append(other) - - @property - def encoding(self) -> str: - return self._encoding - - @property - def encoding_aliases(self) -> List[str]: - """ - Encoding name are known by many name, using this could help when searching for IBM855 when it's listed as CP855. - """ - also_known_as: List[str] = [] - for u, p in aliases.items(): - if self.encoding == u: - also_known_as.append(p) - elif self.encoding == p: - also_known_as.append(u) - return also_known_as - - @property - def bom(self) -> bool: - return self._has_sig_or_bom - - @property - def byte_order_mark(self) -> bool: - return self._has_sig_or_bom - - @property - def languages(self) -> List[str]: - """ - Return the complete list of possible languages found in decoded sequence. - Usually not really useful. Returned list may be empty even if 'language' property return something != 'Unknown'. - """ - return [e[0] for e in self._languages] - - @property - def language(self) -> str: - """ - Most probable language found in decoded sequence. If none were detected or inferred, the property will return - "Unknown". 
- """ - if not self._languages: - # Trying to infer the language based on the given encoding - # Its either English or we should not pronounce ourselves in certain cases. - if "ascii" in self.could_be_from_charset: - return "English" - - # doing it there to avoid circular import - from charset_normalizer.cd import encoding_languages, mb_encoding_languages - - languages = ( - mb_encoding_languages(self.encoding) - if is_multi_byte_encoding(self.encoding) - else encoding_languages(self.encoding) - ) - - if len(languages) == 0 or "Latin Based" in languages: - return "Unknown" - - return languages[0] - - return self._languages[0][0] - - @property - def chaos(self) -> float: - return self._mean_mess_ratio - - @property - def coherence(self) -> float: - if not self._languages: - return 0.0 - return self._languages[0][1] - - @property - def percent_chaos(self) -> float: - return round(self.chaos * 100, ndigits=3) - - @property - def percent_coherence(self) -> float: - return round(self.coherence * 100, ndigits=3) - - @property - def raw(self) -> bytes: - """ - Original untouched bytes. - """ - return self._payload - - @property - def submatch(self) -> List["CharsetMatch"]: - return self._leaves - - @property - def has_submatch(self) -> bool: - return len(self._leaves) > 0 - - @property - def alphabets(self) -> List[str]: - if self._unicode_ranges is not None: - return self._unicode_ranges - # list detected ranges - detected_ranges: List[Optional[str]] = [ - unicode_range(char) for char in str(self) - ] - # filter and sort - self._unicode_ranges = sorted(list({r for r in detected_ranges if r})) - return self._unicode_ranges - - @property - def could_be_from_charset(self) -> List[str]: - """ - The complete list of encoding that output the exact SAME str result and therefore could be the originating - encoding. - This list does include the encoding available in property 'encoding'. - """ - return [self._encoding] + [m.encoding for m in self._leaves] - - def first(self) -> "CharsetMatch": - """ - Kept for BC reasons. Will be removed in 3.0. - """ - return self - - def best(self) -> "CharsetMatch": - """ - Kept for BC reasons. Will be removed in 3.0. - """ - return self - - def output(self, encoding: str = "utf_8") -> bytes: - """ - Method to get re-encoded bytes payload using given target encoding. Default to UTF-8. - Any errors will be simply ignored by the encoder NOT replaced. - """ - if self._output_encoding is None or self._output_encoding != encoding: - self._output_encoding = encoding - self._output_payload = str(self).encode(encoding, "replace") - - return self._output_payload # type: ignore - - @property - def fingerprint(self) -> str: - """ - Retrieve the unique SHA256 computed using the transformed (re-encoded) payload. Not the original one. - """ - return sha256(self.output()).hexdigest() - - -class CharsetMatches: - """ - Container with every CharsetMatch items ordered by default from most probable to the less one. - Act like a list(iterable) but does not implements all related methods. - """ - - def __init__(self, results: Optional[List[CharsetMatch]] = None): - self._results: List[CharsetMatch] = sorted(results) if results else [] - - def __iter__(self) -> Iterator[CharsetMatch]: - yield from self._results - - def __getitem__(self, item: Union[int, str]) -> CharsetMatch: - """ - Retrieve a single item either by its position or encoding name (alias may be used here). - Raise KeyError upon invalid index or encoding not present in results. 
- """ - if isinstance(item, int): - return self._results[item] - if isinstance(item, str): - item = iana_name(item, False) - for result in self._results: - if item in result.could_be_from_charset: - return result - raise KeyError - - def __len__(self) -> int: - return len(self._results) - - def __bool__(self) -> bool: - return len(self._results) > 0 - - def append(self, item: CharsetMatch) -> None: - """ - Insert a single match. Will be inserted accordingly to preserve sort. - Can be inserted as a submatch. - """ - if not isinstance(item, CharsetMatch): - raise ValueError( - "Cannot append instance '{}' to CharsetMatches".format( - str(item.__class__) - ) - ) - # We should disable the submatch factoring when the input file is too heavy (conserve RAM usage) - if len(item.raw) <= TOO_BIG_SEQUENCE: - for match in self._results: - if match.fingerprint == item.fingerprint and match.chaos == item.chaos: - match.add_submatch(item) - return - self._results.append(item) - self._results = sorted(self._results) - - def best(self) -> Optional["CharsetMatch"]: - """ - Simply return the first match. Strict equivalent to matches[0]. - """ - if not self._results: - return None - return self._results[0] - - def first(self) -> Optional["CharsetMatch"]: - """ - Redundant method, call the method best(). Kept for BC reasons. - """ - return self.best() - - -CoherenceMatch = Tuple[str, float] -CoherenceMatches = List[CoherenceMatch] - - -class CliDetectionResult: - def __init__( - self, - path: str, - encoding: Optional[str], - encoding_aliases: List[str], - alternative_encodings: List[str], - language: str, - alphabets: List[str], - has_sig_or_bom: bool, - chaos: float, - coherence: float, - unicode_path: Optional[str], - is_preferred: bool, - ): - self.path: str = path - self.unicode_path: Optional[str] = unicode_path - self.encoding: Optional[str] = encoding - self.encoding_aliases: List[str] = encoding_aliases - self.alternative_encodings: List[str] = alternative_encodings - self.language: str = language - self.alphabets: List[str] = alphabets - self.has_sig_or_bom: bool = has_sig_or_bom - self.chaos: float = chaos - self.coherence: float = coherence - self.is_preferred: bool = is_preferred - - @property - def __dict__(self) -> Dict[str, Any]: # type: ignore - return { - "path": self.path, - "encoding": self.encoding, - "encoding_aliases": self.encoding_aliases, - "alternative_encodings": self.alternative_encodings, - "language": self.language, - "alphabets": self.alphabets, - "has_sig_or_bom": self.has_sig_or_bom, - "chaos": self.chaos, - "coherence": self.coherence, - "unicode_path": self.unicode_path, - "is_preferred": self.is_preferred, - } - - def to_json(self) -> str: - return dumps(self.__dict__, ensure_ascii=True, indent=4) diff --git a/spaces/ashuNicol/Steam-game-Recommendation-System/README.md b/spaces/ashuNicol/Steam-game-Recommendation-System/README.md deleted file mode 100644 index d42d91955e162963bb067282c13719c7d73d68cb..0000000000000000000000000000000000000000 --- a/spaces/ashuNicol/Steam-game-Recommendation-System/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Steam Game Recommendation System -emoji: 🐼 -colorFrom: indigo -colorTo: purple -sdk: streamlit -sdk_version: 1.25.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/awacke1/HTML5-AFrame-VR/style.css b/spaces/awacke1/HTML5-AFrame-VR/style.css deleted file mode 100644 index 
114adf441e9032febb46bc056b2a8bb651075f0d..0000000000000000000000000000000000000000 --- a/spaces/awacke1/HTML5-AFrame-VR/style.css +++ /dev/null @@ -1,28 +0,0 @@ -body { - padding: 2rem; - font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif; -} - -h1 { - font-size: 16px; - margin-top: 0; -} - -p { - color: rgb(107, 114, 128); - font-size: 15px; - margin-bottom: 10px; - margin-top: 5px; -} - -.card { - max-width: 620px; - margin: 0 auto; - padding: 16px; - border: 1px solid lightgray; - border-radius: 16px; -} - -.card p:last-child { - margin-bottom: 0; -} diff --git a/spaces/awacke1/Slot-Machine-Animal-Safari/README.md b/spaces/awacke1/Slot-Machine-Animal-Safari/README.md deleted file mode 100644 index 223ba826194fc3618ba9ea969acafff1269d1aff..0000000000000000000000000000000000000000 --- a/spaces/awacke1/Slot-Machine-Animal-Safari/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: Slot Machine Animal Safari -emoji: 💻 -colorFrom: gray -colorTo: purple -sdk: static -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/awacke1/VisualCluster/README.md b/spaces/awacke1/VisualCluster/README.md deleted file mode 100644 index 4c8beab31742bf1ed4ab775c63edbf108973d16d..0000000000000000000000000000000000000000 --- a/spaces/awacke1/VisualCluster/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: 🌩️CloudViz NLP🆒 -emoji: CLAI🌩️ -colorFrom: gray -colorTo: blue -sdk: gradio -sdk_version: 2.8.13 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/banana-projects/web3d/node_modules/three/examples/js/loaders/VRMLoader.js b/spaces/banana-projects/web3d/node_modules/three/examples/js/loaders/VRMLoader.js deleted file mode 100644 index b9ccfe6b5f57d16bc5e192c9b209621930af4200..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/examples/js/loaders/VRMLoader.js +++ /dev/null @@ -1,87 +0,0 @@ -/** - * @author Takahiro / https://github.com/takahirox - */ - -// VRM Specification: https://dwango.github.io/vrm/vrm_spec/ -// -// VRM is based on glTF 2.0 and VRM extension is defined -// in top-level json.extensions.VRM - -THREE.VRMLoader = ( function () { - - function VRMLoader( manager ) { - - if ( THREE.GLTFLoader === undefined ) { - - throw new Error( 'THREE.VRMLoader: Import THREE.GLTFLoader.' ); - - } - - this.manager = ( manager !== undefined ) ? 
manager : THREE.DefaultLoadingManager; - this.gltfLoader = new THREE.GLTFLoader( this.manager ); - - } - - VRMLoader.prototype = { - - constructor: VRMLoader, - - crossOrigin: 'anonymous', - - load: function ( url, onLoad, onProgress, onError ) { - - var scope = this; - - this.gltfLoader.load( url, function ( gltf ) { - - scope.parse( gltf, onLoad ); - - }, onProgress, onError ); - - }, - - setCrossOrigin: function ( value ) { - - this.glTFLoader.setCrossOrigin( value ); - return this; - - }, - - setPath: function ( value ) { - - this.glTFLoader.setPath( value ); - return this; - - }, - - setResourcePath: function ( value ) { - - this.glTFLoader.setResourcePath( value ); - return this; - - }, - - setDRACOLoader: function ( dracoLoader ) { - - this.glTFLoader.setDRACOLoader( dracoLoader ); - return this; - - }, - - parse: function ( gltf, onLoad ) { - - var gltfParser = gltf.parser; - var gltfExtensions = gltf.userData.gltfExtensions || {}; - var vrmExtension = gltfExtensions.VRM || {}; - - // handle VRM Extension here - - onLoad( gltf ); - - } - - }; - - return VRMLoader; - -} )(); diff --git a/spaces/begar/amazon-reviews-demo/README.md b/spaces/begar/amazon-reviews-demo/README.md deleted file mode 100644 index 2d61096c28936a38d7ac3f1e04c5e528c19c470e..0000000000000000000000000000000000000000 --- a/spaces/begar/amazon-reviews-demo/README.md +++ /dev/null @@ -1,37 +0,0 @@ ---- -title: Amazon Reviews Demo -emoji: 🚀 -colorFrom: green -colorTo: pink -sdk: gradio -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio`, `streamlit`, or `static` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code, or `static` html code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. diff --git a/spaces/beihai/Remove-Background-By-U2Net/README.md b/spaces/beihai/Remove-Background-By-U2Net/README.md deleted file mode 100644 index a20649f284decbd251cac59d2aaee57fa8607222..0000000000000000000000000000000000000000 --- a/spaces/beihai/Remove-Background-By-U2Net/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Remove Background By U2Net -emoji: 👁 -colorFrom: green -colorTo: purple -sdk: gradio -sdk_version: 3.4.1 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/bigjoker/stable-diffusion-webui/modules/codeformer/codeformer_arch.py b/spaces/bigjoker/stable-diffusion-webui/modules/codeformer/codeformer_arch.py deleted file mode 100644 index 11dcc3ee76511218c64977c2ecbb306cecd892c3..0000000000000000000000000000000000000000 --- a/spaces/bigjoker/stable-diffusion-webui/modules/codeformer/codeformer_arch.py +++ /dev/null @@ -1,278 +0,0 @@ -# this file is copied from CodeFormer repository. 
Please see comment in modules/codeformer_model.py - -import math -import numpy as np -import torch -from torch import nn, Tensor -import torch.nn.functional as F -from typing import Optional, List - -from modules.codeformer.vqgan_arch import * -from basicsr.utils import get_root_logger -from basicsr.utils.registry import ARCH_REGISTRY - -def calc_mean_std(feat, eps=1e-5): - """Calculate mean and std for adaptive_instance_normalization. - - Args: - feat (Tensor): 4D tensor. - eps (float): A small value added to the variance to avoid - divide-by-zero. Default: 1e-5. - """ - size = feat.size() - assert len(size) == 4, 'The input feature should be 4D tensor.' - b, c = size[:2] - feat_var = feat.view(b, c, -1).var(dim=2) + eps - feat_std = feat_var.sqrt().view(b, c, 1, 1) - feat_mean = feat.view(b, c, -1).mean(dim=2).view(b, c, 1, 1) - return feat_mean, feat_std - - -def adaptive_instance_normalization(content_feat, style_feat): - """Adaptive instance normalization. - - Adjust the reference features to have the similar color and illuminations - as those in the degradate features. - - Args: - content_feat (Tensor): The reference feature. - style_feat (Tensor): The degradate features. - """ - size = content_feat.size() - style_mean, style_std = calc_mean_std(style_feat) - content_mean, content_std = calc_mean_std(content_feat) - normalized_feat = (content_feat - content_mean.expand(size)) / content_std.expand(size) - return normalized_feat * style_std.expand(size) + style_mean.expand(size) - - -class PositionEmbeddingSine(nn.Module): - """ - This is a more standard version of the position embedding, very similar to the one - used by the Attention is all you need paper, generalized to work on images. - """ - - def __init__(self, num_pos_feats=64, temperature=10000, normalize=False, scale=None): - super().__init__() - self.num_pos_feats = num_pos_feats - self.temperature = temperature - self.normalize = normalize - if scale is not None and normalize is False: - raise ValueError("normalize should be True if scale is passed") - if scale is None: - scale = 2 * math.pi - self.scale = scale - - def forward(self, x, mask=None): - if mask is None: - mask = torch.zeros((x.size(0), x.size(2), x.size(3)), device=x.device, dtype=torch.bool) - not_mask = ~mask - y_embed = not_mask.cumsum(1, dtype=torch.float32) - x_embed = not_mask.cumsum(2, dtype=torch.float32) - if self.normalize: - eps = 1e-6 - y_embed = y_embed / (y_embed[:, -1:, :] + eps) * self.scale - x_embed = x_embed / (x_embed[:, :, -1:] + eps) * self.scale - - dim_t = torch.arange(self.num_pos_feats, dtype=torch.float32, device=x.device) - dim_t = self.temperature ** (2 * (dim_t // 2) / self.num_pos_feats) - - pos_x = x_embed[:, :, :, None] / dim_t - pos_y = y_embed[:, :, :, None] / dim_t - pos_x = torch.stack( - (pos_x[:, :, :, 0::2].sin(), pos_x[:, :, :, 1::2].cos()), dim=4 - ).flatten(3) - pos_y = torch.stack( - (pos_y[:, :, :, 0::2].sin(), pos_y[:, :, :, 1::2].cos()), dim=4 - ).flatten(3) - pos = torch.cat((pos_y, pos_x), dim=3).permute(0, 3, 1, 2) - return pos - -def _get_activation_fn(activation): - """Return an activation function given a string""" - if activation == "relu": - return F.relu - if activation == "gelu": - return F.gelu - if activation == "glu": - return F.glu - raise RuntimeError(F"activation should be relu/gelu, not {activation}.") - - -class TransformerSALayer(nn.Module): - def __init__(self, embed_dim, nhead=8, dim_mlp=2048, dropout=0.0, activation="gelu"): - super().__init__() - self.self_attn = 
nn.MultiheadAttention(embed_dim, nhead, dropout=dropout) - # Implementation of Feedforward model - MLP - self.linear1 = nn.Linear(embed_dim, dim_mlp) - self.dropout = nn.Dropout(dropout) - self.linear2 = nn.Linear(dim_mlp, embed_dim) - - self.norm1 = nn.LayerNorm(embed_dim) - self.norm2 = nn.LayerNorm(embed_dim) - self.dropout1 = nn.Dropout(dropout) - self.dropout2 = nn.Dropout(dropout) - - self.activation = _get_activation_fn(activation) - - def with_pos_embed(self, tensor, pos: Optional[Tensor]): - return tensor if pos is None else tensor + pos - - def forward(self, tgt, - tgt_mask: Optional[Tensor] = None, - tgt_key_padding_mask: Optional[Tensor] = None, - query_pos: Optional[Tensor] = None): - - # self attention - tgt2 = self.norm1(tgt) - q = k = self.with_pos_embed(tgt2, query_pos) - tgt2 = self.self_attn(q, k, value=tgt2, attn_mask=tgt_mask, - key_padding_mask=tgt_key_padding_mask)[0] - tgt = tgt + self.dropout1(tgt2) - - # ffn - tgt2 = self.norm2(tgt) - tgt2 = self.linear2(self.dropout(self.activation(self.linear1(tgt2)))) - tgt = tgt + self.dropout2(tgt2) - return tgt - -class Fuse_sft_block(nn.Module): - def __init__(self, in_ch, out_ch): - super().__init__() - self.encode_enc = ResBlock(2*in_ch, out_ch) - - self.scale = nn.Sequential( - nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1), - nn.LeakyReLU(0.2, True), - nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1)) - - self.shift = nn.Sequential( - nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1), - nn.LeakyReLU(0.2, True), - nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1)) - - def forward(self, enc_feat, dec_feat, w=1): - enc_feat = self.encode_enc(torch.cat([enc_feat, dec_feat], dim=1)) - scale = self.scale(enc_feat) - shift = self.shift(enc_feat) - residual = w * (dec_feat * scale + shift) - out = dec_feat + residual - return out - - -@ARCH_REGISTRY.register() -class CodeFormer(VQAutoEncoder): - def __init__(self, dim_embd=512, n_head=8, n_layers=9, - codebook_size=1024, latent_size=256, - connect_list=['32', '64', '128', '256'], - fix_modules=['quantize','generator']): - super(CodeFormer, self).__init__(512, 64, [1, 2, 2, 4, 4, 8], 'nearest',2, [16], codebook_size) - - if fix_modules is not None: - for module in fix_modules: - for param in getattr(self, module).parameters(): - param.requires_grad = False - - self.connect_list = connect_list - self.n_layers = n_layers - self.dim_embd = dim_embd - self.dim_mlp = dim_embd*2 - - self.position_emb = nn.Parameter(torch.zeros(latent_size, self.dim_embd)) - self.feat_emb = nn.Linear(256, self.dim_embd) - - # transformer - self.ft_layers = nn.Sequential(*[TransformerSALayer(embed_dim=dim_embd, nhead=n_head, dim_mlp=self.dim_mlp, dropout=0.0) - for _ in range(self.n_layers)]) - - # logits_predict head - self.idx_pred_layer = nn.Sequential( - nn.LayerNorm(dim_embd), - nn.Linear(dim_embd, codebook_size, bias=False)) - - self.channels = { - '16': 512, - '32': 256, - '64': 256, - '128': 128, - '256': 128, - '512': 64, - } - - # after second residual block for > 16, before attn layer for ==16 - self.fuse_encoder_block = {'512':2, '256':5, '128':8, '64':11, '32':14, '16':18} - # after first residual block for > 16, before attn layer for ==16 - self.fuse_generator_block = {'16':6, '32': 9, '64':12, '128':15, '256':18, '512':21} - - # fuse_convs_dict - self.fuse_convs_dict = nn.ModuleDict() - for f_size in self.connect_list: - in_ch = self.channels[f_size] - self.fuse_convs_dict[f_size] = Fuse_sft_block(in_ch, in_ch) - - def _init_weights(self, module): - if isinstance(module, 
(nn.Linear, nn.Embedding)): - module.weight.data.normal_(mean=0.0, std=0.02) - if isinstance(module, nn.Linear) and module.bias is not None: - module.bias.data.zero_() - elif isinstance(module, nn.LayerNorm): - module.bias.data.zero_() - module.weight.data.fill_(1.0) - - def forward(self, x, w=0, detach_16=True, code_only=False, adain=False): - # ################### Encoder ##################### - enc_feat_dict = {} - out_list = [self.fuse_encoder_block[f_size] for f_size in self.connect_list] - for i, block in enumerate(self.encoder.blocks): - x = block(x) - if i in out_list: - enc_feat_dict[str(x.shape[-1])] = x.clone() - - lq_feat = x - # ################# Transformer ################### - # quant_feat, codebook_loss, quant_stats = self.quantize(lq_feat) - pos_emb = self.position_emb.unsqueeze(1).repeat(1,x.shape[0],1) - # BCHW -> BC(HW) -> (HW)BC - feat_emb = self.feat_emb(lq_feat.flatten(2).permute(2,0,1)) - query_emb = feat_emb - # Transformer encoder - for layer in self.ft_layers: - query_emb = layer(query_emb, query_pos=pos_emb) - - # output logits - logits = self.idx_pred_layer(query_emb) # (hw)bn - logits = logits.permute(1,0,2) # (hw)bn -> b(hw)n - - if code_only: # for training stage II - # logits doesn't need softmax before cross_entropy loss - return logits, lq_feat - - # ################# Quantization ################### - # if self.training: - # quant_feat = torch.einsum('btn,nc->btc', [soft_one_hot, self.quantize.embedding.weight]) - # # b(hw)c -> bc(hw) -> bchw - # quant_feat = quant_feat.permute(0,2,1).view(lq_feat.shape) - # ------------ - soft_one_hot = F.softmax(logits, dim=2) - _, top_idx = torch.topk(soft_one_hot, 1, dim=2) - quant_feat = self.quantize.get_codebook_feat(top_idx, shape=[x.shape[0],16,16,256]) - # preserve gradients - # quant_feat = lq_feat + (quant_feat - lq_feat).detach() - - if detach_16: - quant_feat = quant_feat.detach() # for training stage III - if adain: - quant_feat = adaptive_instance_normalization(quant_feat, lq_feat) - - # ################## Generator #################### - x = quant_feat - fuse_list = [self.fuse_generator_block[f_size] for f_size in self.connect_list] - - for i, block in enumerate(self.generator.blocks): - x = block(x) - if i in fuse_list: # fuse after i-th block - f_size = str(x.shape[-1]) - if w>0: - x = self.fuse_convs_dict[f_size](enc_feat_dict[f_size].detach(), x, w) - out = x - # logits doesn't need softmax before cross_entropy loss - return out, logits, lq_feat \ No newline at end of file diff --git a/spaces/bioriAsaeru/text-to-voice/BeamNG.drive The official website of the game where you can learn more about the features vehicles environments and modding capabilities[2].md b/spaces/bioriAsaeru/text-to-voice/BeamNG.drive The official website of the game where you can learn more about the features vehicles environments and modding capabilities[2].md deleted file mode 100644 index 5bdfa3c51b95082ac8c10348cbca7084220c11c1..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/BeamNG.drive The official website of the game where you can learn more about the features vehicles environments and modding capabilities[2].md +++ /dev/null @@ -1,6 +0,0 @@ -

    Beamng Drive Demo Download Mac


Download Zip: https://urloso.com/2uyRY6



    - - aaccfb2cb3
    -
    -
    -

    diff --git a/spaces/bioriAsaeru/text-to-voice/Gta Iv Commandline.txt No Lag 16.md b/spaces/bioriAsaeru/text-to-voice/Gta Iv Commandline.txt No Lag 16.md deleted file mode 100644 index 2d147d7a2b1b537f73f67976fba5532a13ff4481..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Gta Iv Commandline.txt No Lag 16.md +++ /dev/null @@ -1,10 +0,0 @@ -
    -

GTA IV Complete Edition now does not launch XD. Is it possible that I fucked something up XD? The commandline.txt file was already created; I just added the following:
    -forcehighqualitymirrors
    -fullspecaudio
    -reservedApp 0

    -

Long story short, I've had a bitch of a time getting GTA IV through Steam to actually save my settings, and I had to keep resetting them. I found a workaround for it; however, I have no idea how to get all the settings just the way I want. It involves modding the commandline.txt file, deleting the settings cfg, and then launching directly via the exe, NOT through Steam.

    -

    gta iv commandline.txt no lag 16


    Download File ->>> https://urloso.com/2uyRVi



    -

Also, instead of creating a shortcut and adding those parameters to the end of the target, you can just make a file called "commandline.txt" in your root Grand Theft Auto IV directory and put "-norestrictions -nomemrestrict" and whatever else you use in there. That way you can still launch GTA IV however you want (through the shortcut, through Social Club, etc.).
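
For example (the install path below is only an illustration and will differ on your machine), the shortcut approach puts everything on the target line:

"C:\Program Files\Rockstar Games\Grand Theft Auto IV\GTAIV.exe" -norestrictions -nomemrestrict

while the equivalent commandline.txt placed in the GTA IV root folder just contains:

-norestrictions
-nomemrestrict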

    -

To create a commandline that changes the resolution, create a new Notepad file. Type in '-width (your monitor's width, in pixels) -height (your monitor's height)'. For example, my monitor is 20" and its resolution is 1600x1200, so I would type '-width 1600 -height 1200'. Save it as 'commandline.txt', then place it in the folder that GTA IV is installed in (default C:\Program Files\Rockstar Games\Grand Theft Auto IV). Don't expect it to run great if you only have a 256 MB graphics card, though.

    -

You can add these commands easily by creating a commandline.txt file within your default GTA IV install folder (the default is C:\Program Files\Rockstar Games\Grand Theft Auto IV); put each command on its own line to run multiple commands at once.
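
As a rough sketch, a commandline.txt that combines the switches mentioned in this article would look like the lines below (swap the width and height values for your own monitor's resolution):

-norestrictions
-nomemrestrict
-width 1600
-height 1200
-forcehighqualitymirrors
-fullspecaudio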

    -

    aaccfb2cb3
    -
    -
    \ No newline at end of file diff --git a/spaces/brainblow/AudioCreator_Music-Audio_Generation/audiocraft/solvers/compression.py b/spaces/brainblow/AudioCreator_Music-Audio_Generation/audiocraft/solvers/compression.py deleted file mode 100644 index b757503472a3bfbf90e1636999e64913848a7474..0000000000000000000000000000000000000000 --- a/spaces/brainblow/AudioCreator_Music-Audio_Generation/audiocraft/solvers/compression.py +++ /dev/null @@ -1,328 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import logging -import multiprocessing -from pathlib import Path -import typing as tp - -import flashy -import omegaconf -import torch -from torch import nn - -from . import base, builders -from .. import models, quantization -from ..utils import checkpoint -from ..utils.samples.manager import SampleManager -from ..utils.utils import get_pool_executor - - -logger = logging.getLogger(__name__) - - -class CompressionSolver(base.StandardSolver): - """Solver for compression task. - - The compression task combines a set of perceptual and objective losses - to train an EncodecModel (composed of an encoder-decoder and a quantizer) - to perform high fidelity audio reconstruction. - """ - def __init__(self, cfg: omegaconf.DictConfig): - super().__init__(cfg) - self.rng: torch.Generator # set at each epoch - self.adv_losses = builders.get_adversarial_losses(self.cfg) - self.aux_losses = nn.ModuleDict() - self.info_losses = nn.ModuleDict() - assert not cfg.fsdp.use, "FSDP not supported by CompressionSolver." - loss_weights = dict() - for loss_name, weight in self.cfg.losses.items(): - if loss_name in ['adv', 'feat']: - for adv_name, _ in self.adv_losses.items(): - loss_weights[f'{loss_name}_{adv_name}'] = weight - elif weight > 0: - self.aux_losses[loss_name] = builders.get_loss(loss_name, self.cfg) - loss_weights[loss_name] = weight - else: - self.info_losses[loss_name] = builders.get_loss(loss_name, self.cfg) - self.balancer = builders.get_balancer(loss_weights, self.cfg.balancer) - self.register_stateful('adv_losses') - - @property - def best_metric_name(self) -> tp.Optional[str]: - # best model is the last for the compression model - return None - - def build_model(self): - """Instantiate model and optimizer.""" - # Model and optimizer - self.model = models.builders.get_compression_model(self.cfg).to(self.device) - self.optimizer = builders.get_optimizer(self.model.parameters(), self.cfg.optim) - self.register_stateful('model', 'optimizer') - self.register_best_state('model') - self.register_ema('model') - - def build_dataloaders(self): - """Instantiate audio dataloaders for each stage.""" - self.dataloaders = builders.get_audio_datasets(self.cfg) - - def show(self): - """Show the compression model and employed adversarial loss.""" - self.logger.info(f"Compression model with {self.model.quantizer.total_codebooks} codebooks:") - self.log_model_summary(self.model) - self.logger.info("Adversarial loss:") - self.log_model_summary(self.adv_losses) - self.logger.info("Auxiliary losses:") - self.logger.info(self.aux_losses) - self.logger.info("Info losses:") - self.logger.info(self.info_losses) - - def run_step(self, idx: int, batch: torch.Tensor, metrics: dict): - """Perform one training or valid step on a given batch.""" - x = batch.to(self.device) - y = x.clone() - - qres = self.model(x) - assert isinstance(qres, quantization.QuantizedResult) - 
y_pred = qres.x - # Log bandwidth in kb/s - metrics['bandwidth'] = qres.bandwidth.mean() - - if self.is_training: - d_losses: dict = {} - if len(self.adv_losses) > 0 and torch.rand(1, generator=self.rng).item() <= 1 / self.cfg.adversarial.every: - for adv_name, adversary in self.adv_losses.items(): - disc_loss = adversary.train_adv(y_pred, y) - d_losses[f'd_{adv_name}'] = disc_loss - metrics['d_loss'] = torch.sum(torch.stack(list(d_losses.values()))) - metrics.update(d_losses) - - balanced_losses: dict = {} - other_losses: dict = {} - - # penalty from quantization - if qres.penalty is not None and qres.penalty.requires_grad: - other_losses['penalty'] = qres.penalty # penalty term from the quantizer - - # adversarial losses - for adv_name, adversary in self.adv_losses.items(): - adv_loss, feat_loss = adversary(y_pred, y) - balanced_losses[f'adv_{adv_name}'] = adv_loss - balanced_losses[f'feat_{adv_name}'] = feat_loss - - # auxiliary losses - for loss_name, criterion in self.aux_losses.items(): - loss = criterion(y_pred, y) - balanced_losses[loss_name] = loss - - # weighted losses - metrics.update(balanced_losses) - metrics.update(other_losses) - metrics.update(qres.metrics) - - if self.is_training: - # backprop losses that are not handled by balancer - other_loss = torch.tensor(0., device=self.device) - if 'penalty' in other_losses: - other_loss += other_losses['penalty'] - if other_loss.requires_grad: - other_loss.backward(retain_graph=True) - ratio1 = sum(p.grad.data.norm(p=2).pow(2) - for p in self.model.parameters() if p.grad is not None) - assert isinstance(ratio1, torch.Tensor) - metrics['ratio1'] = ratio1.sqrt() - - # balancer losses backward, returns effective training loss - # with effective weights at the current batch. - metrics['g_loss'] = self.balancer.backward(balanced_losses, y_pred) - # add metrics corresponding to weight ratios - metrics.update(self.balancer.metrics) - ratio2 = sum(p.grad.data.norm(p=2).pow(2) - for p in self.model.parameters() if p.grad is not None) - assert isinstance(ratio2, torch.Tensor) - metrics['ratio2'] = ratio2.sqrt() - - # optim - flashy.distrib.sync_model(self.model) - if self.cfg.optim.max_norm: - torch.nn.utils.clip_grad_norm_( - self.model.parameters(), self.cfg.optim.max_norm - ) - self.optimizer.step() - self.optimizer.zero_grad() - - # informative losses only - info_losses: dict = {} - with torch.no_grad(): - for loss_name, criterion in self.info_losses.items(): - loss = criterion(y_pred, y) - info_losses[loss_name] = loss - - metrics.update(info_losses) - - # aggregated GAN losses: this is useful to report adv and feat across different adversarial loss setups - adv_losses = [loss for loss_name, loss in metrics.items() if loss_name.startswith('adv')] - if len(adv_losses) > 0: - metrics['adv'] = torch.sum(torch.stack(adv_losses)) - feat_losses = [loss for loss_name, loss in metrics.items() if loss_name.startswith('feat')] - if len(feat_losses) > 0: - metrics['feat'] = torch.sum(torch.stack(feat_losses)) - - return metrics - - def run_epoch(self): - # reset random seed at the beginning of the epoch - self.rng = torch.Generator() - self.rng.manual_seed(1234 + self.epoch) - # run epoch - super().run_epoch() - - def evaluate(self): - """Evaluate stage. 
Runs audio reconstruction evaluation.""" - self.model.eval() - evaluate_stage_name = str(self.current_stage) - - loader = self.dataloaders['evaluate'] - updates = len(loader) - lp = self.log_progress(f'{evaluate_stage_name} inference', loader, total=updates, updates=self.log_updates) - average = flashy.averager() - - pendings = [] - ctx = multiprocessing.get_context('spawn') - with get_pool_executor(self.cfg.evaluate.num_workers, mp_context=ctx) as pool: - for idx, batch in enumerate(lp): - x = batch.to(self.device) - with torch.no_grad(): - qres = self.model(x) - - y_pred = qres.x.cpu() - y = batch.cpu() # should already be on CPU but just in case - pendings.append(pool.submit(evaluate_audio_reconstruction, y_pred, y, self.cfg)) - - metrics_lp = self.log_progress(f'{evaluate_stage_name} metrics', pendings, updates=self.log_updates) - for pending in metrics_lp: - metrics = pending.result() - metrics = average(metrics) - - metrics = flashy.distrib.average_metrics(metrics, len(loader)) - return metrics - - def generate(self): - """Generate stage.""" - self.model.eval() - sample_manager = SampleManager(self.xp, map_reference_to_sample_id=True) - generate_stage_name = str(self.current_stage) - - loader = self.dataloaders['generate'] - updates = len(loader) - lp = self.log_progress(generate_stage_name, loader, total=updates, updates=self.log_updates) - - for batch in lp: - reference, _ = batch - reference = reference.to(self.device) - with torch.no_grad(): - qres = self.model(reference) - assert isinstance(qres, quantization.QuantizedResult) - - reference = reference.cpu() - estimate = qres.x.cpu() - sample_manager.add_samples(estimate, self.epoch, ground_truth_wavs=reference) - - flashy.distrib.barrier() - - def load_from_pretrained(self, name: str) -> dict: - model = models.CompressionModel.get_pretrained(name) - if isinstance(model, models.DAC): - raise RuntimeError("Cannot fine tune a DAC model.") - elif isinstance(model, models.HFEncodecCompressionModel): - self.logger.warning('Trying to automatically convert a HuggingFace model ' - 'to AudioCraft, this might fail!') - state = model.model.state_dict() - new_state = {} - for k, v in state.items(): - if k.startswith('decoder.layers') and '.conv.' in k and '.block.' not in k: - # We need to determine if this a convtr or a regular conv. - layer = int(k.split('.')[2]) - if isinstance(model.model.decoder.layers[layer].conv, torch.nn.ConvTranspose1d): - - k = k.replace('.conv.', '.convtr.') - k = k.replace('encoder.layers.', 'encoder.model.') - k = k.replace('decoder.layers.', 'decoder.model.') - k = k.replace('conv.', 'conv.conv.') - k = k.replace('convtr.', 'convtr.convtr.') - k = k.replace('quantizer.layers.', 'quantizer.vq.layers.') - k = k.replace('.codebook.', '._codebook.') - new_state[k] = v - state = new_state - elif isinstance(model, models.EncodecModel): - state = model.state_dict() - else: - raise RuntimeError(f"Cannot fine tune model type {type(model)}.") - return { - 'best_state': {'model': state} - } - - @staticmethod - def model_from_checkpoint(checkpoint_path: tp.Union[Path, str], - device: tp.Union[torch.device, str] = 'cpu') -> models.CompressionModel: - """Instantiate a CompressionModel from a given checkpoint path or dora sig. - This method is a convenient endpoint to load a CompressionModel to use in other solvers. - - Args: - checkpoint_path (Path or str): Path to checkpoint or dora sig from where the checkpoint is resolved. - This also supports pre-trained models by using a path of the form //pretrained/NAME. 
- See `model_from_pretrained` for a list of supported pretrained models. - use_ema (bool): Use EMA variant of the model instead of the actual model. - device (torch.device or str): Device on which the model is loaded. - """ - checkpoint_path = str(checkpoint_path) - if checkpoint_path.startswith('//pretrained/'): - name = checkpoint_path.split('/', 3)[-1] - return models.CompressionModel.get_pretrained(name, device) - logger = logging.getLogger(__name__) - logger.info(f"Loading compression model from checkpoint: {checkpoint_path}") - _checkpoint_path = checkpoint.resolve_checkpoint_path(checkpoint_path, use_fsdp=False) - assert _checkpoint_path is not None, f"Could not resolve compression model checkpoint path: {checkpoint_path}" - state = checkpoint.load_checkpoint(_checkpoint_path) - assert state is not None and 'xp.cfg' in state, f"Could not load compression model from ckpt: {checkpoint_path}" - cfg = state['xp.cfg'] - cfg.device = device - compression_model = models.builders.get_compression_model(cfg).to(device) - assert compression_model.sample_rate == cfg.sample_rate, "Compression model sample rate should match" - - assert 'best_state' in state and state['best_state'] != {} - assert 'exported' not in state, "When loading an exported checkpoint, use the //pretrained/ prefix." - compression_model.load_state_dict(state['best_state']['model']) - compression_model.eval() - logger.info("Compression model loaded!") - return compression_model - - @staticmethod - def wrapped_model_from_checkpoint(cfg: omegaconf.DictConfig, - checkpoint_path: tp.Union[Path, str], - device: tp.Union[torch.device, str] = 'cpu') -> models.CompressionModel: - """Instantiate a wrapped CompressionModel from a given checkpoint path or dora sig. - - Args: - cfg (omegaconf.DictConfig): Configuration to read from for wrapped mode. - checkpoint_path (Path or str): Path to checkpoint or dora sig from where the checkpoint is resolved. - use_ema (bool): Use EMA variant of the model instead of the actual model. - device (torch.device or str): Device on which the model is loaded. - """ - compression_model = CompressionSolver.model_from_checkpoint(checkpoint_path, device) - compression_model = models.builders.get_wrapped_compression_model(compression_model, cfg) - return compression_model - - -def evaluate_audio_reconstruction(y_pred: torch.Tensor, y: torch.Tensor, cfg: omegaconf.DictConfig) -> dict: - """Audio reconstruction evaluation method that can be conveniently pickled.""" - metrics = {} - if cfg.evaluate.metrics.visqol: - visqol = builders.get_visqol(cfg.metrics.visqol) - metrics['visqol'] = visqol(y_pred, y, cfg.sample_rate) - sisnr = builders.get_loss('sisnr', cfg) - metrics['sisnr'] = sisnr(y_pred, y) - return metrics diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/projects/DensePose/tests/test_cse_annotations_accumulator.py b/spaces/brjathu/HMR2.0/vendor/detectron2/projects/DensePose/tests/test_cse_annotations_accumulator.py deleted file mode 100644 index a22dce9ce00532d60dc3f4edbef4cea26b006b92..0000000000000000000000000000000000000000 --- a/spaces/brjathu/HMR2.0/vendor/detectron2/projects/DensePose/tests/test_cse_annotations_accumulator.py +++ /dev/null @@ -1,240 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
All Rights Reserved - -import unittest -import torch - -from detectron2.structures import Boxes, BoxMode, Instances - -from densepose.modeling.losses.embed_utils import CseAnnotationsAccumulator -from densepose.structures import DensePoseDataRelative, DensePoseList - - -class TestCseAnnotationsAccumulator(unittest.TestCase): - def test_cse_annotations_accumulator_nodp(self): - instances_lst = [ - self._create_instances_nodp(), - ] - self._test_template(instances_lst) - - def test_cse_annotations_accumulator_sparsedp(self): - instances_lst = [ - self._create_instances_sparsedp(), - ] - self._test_template(instances_lst) - - def test_cse_annotations_accumulator_fulldp(self): - instances_lst = [ - self._create_instances_fulldp(), - ] - self._test_template(instances_lst) - - def test_cse_annotations_accumulator_combined(self): - instances_lst = [ - self._create_instances_nodp(), - self._create_instances_sparsedp(), - self._create_instances_fulldp(), - ] - self._test_template(instances_lst) - - def _test_template(self, instances_lst): - acc = CseAnnotationsAccumulator() - for instances in instances_lst: - acc.accumulate(instances) - packed_anns = acc.pack() - self._check_correspondence(packed_anns, instances_lst) - - def _create_instances_nodp(self): - image_shape = (480, 640) - instances = Instances(image_shape) - instances.gt_boxes = Boxes( - torch.as_tensor( - [ - [40.0, 40.0, 140.0, 140.0], - [160.0, 160.0, 270.0, 270.0], - [40.0, 160.0, 160.0, 280.0], - ] - ) - ) - instances.proposal_boxes = Boxes( - torch.as_tensor( - [ - [41.0, 39.0, 142.0, 138.0], - [161.0, 159.0, 272.0, 268.0], - [41.0, 159.0, 162.0, 278.0], - ] - ) - ) - # do not add gt_densepose - return instances - - def _create_instances_sparsedp(self): - image_shape = (540, 720) - instances = Instances(image_shape) - instances.gt_boxes = Boxes( - torch.as_tensor( - [ - [50.0, 50.0, 130.0, 130.0], - [150.0, 150.0, 240.0, 240.0], - [50.0, 150.0, 230.0, 330.0], - ] - ) - ) - instances.proposal_boxes = Boxes( - torch.as_tensor( - [ - [49.0, 51.0, 131.0, 129.0], - [151.0, 149.0, 241.0, 239.0], - [51.0, 149.0, 232.0, 329.0], - ] - ) - ) - instances.gt_densepose = DensePoseList( - [ - None, - self._create_dp_data( - { - "dp_x": [81.69, 153.47, 151.00], - "dp_y": [162.24, 128.71, 113.81], - "dp_vertex": [0, 1, 2], - "ref_model": "zebra_5002", - "dp_masks": [], - }, - {"c": (166, 133), "r": 64}, - ), - None, - ], - instances.gt_boxes, - image_shape, - ) - return instances - - def _create_instances_fulldp(self): - image_shape = (680, 840) - instances = Instances(image_shape) - instances.gt_boxes = Boxes( - torch.as_tensor( - [ - [65.0, 55.0, 165.0, 155.0], - [170.0, 175.0, 275.0, 280.0], - [55.0, 165.0, 165.0, 275.0], - ] - ) - ) - instances.proposal_boxes = Boxes( - torch.as_tensor( - [ - [66.0, 54.0, 166.0, 154.0], - [171.0, 174.0, 276.0, 279.0], - [56.0, 164.0, 166.0, 274.0], - ] - ) - ) - instances.gt_densepose = DensePoseList( - [ - self._create_dp_data( - { - "dp_x": [149.99, 198.62, 157.59], - "dp_y": [170.74, 197.73, 123.12], - "dp_vertex": [3, 4, 5], - "ref_model": "cat_5001", - "dp_masks": [], - }, - {"c": (100, 100), "r": 50}, - ), - self._create_dp_data( - { - "dp_x": [234.53, 116.72, 71.66], - "dp_y": [107.53, 11.31, 142.32], - "dp_vertex": [6, 7, 8], - "ref_model": "dog_5002", - "dp_masks": [], - }, - {"c": (200, 150), "r": 40}, - ), - self._create_dp_data( - { - "dp_x": [225.54, 202.61, 135.90], - "dp_y": [167.46, 181.00, 211.47], - "dp_vertex": [9, 10, 11], - "ref_model": "elephant_5002", - "dp_masks": [], - }, - {"c": (100, 
200), "r": 45}, - ), - ], - instances.gt_boxes, - image_shape, - ) - return instances - - def _create_dp_data(self, anns, blob_def=None): - dp_data = DensePoseDataRelative(anns) - if blob_def is not None: - dp_data.segm[ - blob_def["c"][0] - blob_def["r"] : blob_def["c"][0] + blob_def["r"], - blob_def["c"][1] - blob_def["r"] : blob_def["c"][1] + blob_def["r"], - ] = 1 - return dp_data - - def _check_correspondence(self, packed_anns, instances_lst): - instance_idx = 0 - data_idx = 0 - pt_offset = 0 - if packed_anns is not None: - bbox_xyxy_gt = BoxMode.convert( - packed_anns.bbox_xywh_gt.clone(), BoxMode.XYWH_ABS, BoxMode.XYXY_ABS - ) - bbox_xyxy_est = BoxMode.convert( - packed_anns.bbox_xywh_est.clone(), BoxMode.XYWH_ABS, BoxMode.XYXY_ABS - ) - for instances in instances_lst: - if not hasattr(instances, "gt_densepose"): - instance_idx += len(instances) - continue - for i, dp_data in enumerate(instances.gt_densepose): - if dp_data is None: - instance_idx += 1 - continue - n_pts = len(dp_data.x) - self.assertTrue( - torch.allclose(dp_data.x, packed_anns.x_gt[pt_offset : pt_offset + n_pts]) - ) - self.assertTrue( - torch.allclose(dp_data.y, packed_anns.y_gt[pt_offset : pt_offset + n_pts]) - ) - self.assertTrue(torch.allclose(dp_data.segm, packed_anns.coarse_segm_gt[data_idx])) - self.assertTrue( - torch.allclose( - torch.ones(n_pts, dtype=torch.long) * dp_data.mesh_id, - packed_anns.vertex_mesh_ids_gt[pt_offset : pt_offset + n_pts], - ) - ) - self.assertTrue( - torch.allclose( - dp_data.vertex_ids, packed_anns.vertex_ids_gt[pt_offset : pt_offset + n_pts] - ) - ) - self.assertTrue( - torch.allclose(instances.gt_boxes.tensor[i], bbox_xyxy_gt[data_idx]) - ) - self.assertTrue( - torch.allclose(instances.proposal_boxes.tensor[i], bbox_xyxy_est[data_idx]) - ) - self.assertTrue( - torch.allclose( - torch.ones(n_pts, dtype=torch.long) * data_idx, - packed_anns.point_bbox_with_dp_indices[pt_offset : pt_offset + n_pts], - ) - ) - self.assertTrue( - torch.allclose( - torch.ones(n_pts, dtype=torch.long) * instance_idx, - packed_anns.point_bbox_indices[pt_offset : pt_offset + n_pts], - ) - ) - self.assertEqual(instance_idx, packed_anns.bbox_indices[data_idx]) - pt_offset += n_pts - instance_idx += 1 - data_idx += 1 - if data_idx == 0: - self.assertIsNone(packed_anns) diff --git a/spaces/bspSHU/Onodofthenorth-SD_PixelArt_SpriteSheet_Generator/README.md b/spaces/bspSHU/Onodofthenorth-SD_PixelArt_SpriteSheet_Generator/README.md deleted file mode 100644 index 88a103a2730dac23c3a5c629c2e1eb0f808d8679..0000000000000000000000000000000000000000 --- a/spaces/bspSHU/Onodofthenorth-SD_PixelArt_SpriteSheet_Generator/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Onodofthenorth-SD PixelArt SpriteSheet Generator -emoji: 📊 -colorFrom: pink -colorTo: pink -sdk: gradio -sdk_version: 3.19.1 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/camenduru-com/sl/README.md b/spaces/camenduru-com/sl/README.md deleted file mode 100644 index a414422b5b27f0c168333cf6e74cc00af905d093..0000000000000000000000000000000000000000 --- a/spaces/camenduru-com/sl/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: sl -emoji: 🎥 -colorFrom: yellow -colorTo: yellow -sdk: docker -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/cccc-c/bingo/src/components/chat-suggestions.tsx 
b/spaces/cccc-c/bingo/src/components/chat-suggestions.tsx deleted file mode 100644 index 00c2fee295c9e010946046eb71705a5e131f7a5a..0000000000000000000000000000000000000000 --- a/spaces/cccc-c/bingo/src/components/chat-suggestions.tsx +++ /dev/null @@ -1,45 +0,0 @@ -import React, { useMemo } from 'react' -import Image from 'next/image' -import HelpIcon from '@/assets/images/help.svg' -import { SuggestedResponse } from '@/lib/bots/bing/types' -import { useBing } from '@/lib/hooks/use-bing' -import { atom, useAtom } from 'jotai' - -type Suggestions = SuggestedResponse[] -const helpSuggestions = ['为什么不回应某些主题', '告诉我更多关于必应的资迅', '必应如何使用 AI?'].map((text) => ({ text })) -const suggestionsAtom = atom([]) - -type ChatSuggestionsProps = React.ComponentProps<'div'> & Pick, 'setInput'> & { suggestions?: Suggestions } - -export function ChatSuggestions({ setInput, suggestions = [] }: ChatSuggestionsProps) { - const [currentSuggestions, setSuggestions] = useAtom(suggestionsAtom) - const toggleSuggestions = (() => { - if (currentSuggestions === helpSuggestions) { - setSuggestions(suggestions) - } else { - setSuggestions(helpSuggestions) - } - }) - - useMemo(() => { - setSuggestions(suggestions) - window.scrollBy(0, 2000) - }, [suggestions.length]) - - return currentSuggestions?.length ? ( -
    -
    - - { - currentSuggestions.map(suggestion => ( - - )) - } -
    -
    - ) : null -} diff --git a/spaces/ccolas/TastyPiano/src/music/representation_analysis/__init__.py b/spaces/ccolas/TastyPiano/src/music/representation_analysis/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/chendl/compositional_test/multimodal/YOLOX/yolox/utils/setup_env.py b/spaces/chendl/compositional_test/multimodal/YOLOX/yolox/utils/setup_env.py deleted file mode 100644 index 45289f3245f09e48395ad419d17efffe6846b05c..0000000000000000000000000000000000000000 --- a/spaces/chendl/compositional_test/multimodal/YOLOX/yolox/utils/setup_env.py +++ /dev/null @@ -1,77 +0,0 @@ -#!/usr/bin/env python3 -# -*- coding:utf-8 -*- -# Copyright (c) Megvii Inc. All rights reserved. - -import os -import subprocess -from loguru import logger - -import cv2 - -from .dist import get_world_size, is_main_process - -__all__ = ["configure_nccl", "configure_module", "configure_omp"] - - -def configure_nccl(): - """Configure multi-machine environment variables of NCCL.""" - os.environ["NCCL_LAUNCH_MODE"] = "PARALLEL" - os.environ["NCCL_IB_HCA"] = subprocess.getoutput( - "pushd /sys/class/infiniband/ > /dev/null; for i in mlx5_*; " - "do cat $i/ports/1/gid_attrs/types/* 2>/dev/null " - "| grep v >/dev/null && echo $i ; done; popd > /dev/null" - ) - os.environ["NCCL_IB_GID_INDEX"] = "3" - os.environ["NCCL_IB_TC"] = "106" - - -def configure_omp(num_threads=1): - """ - If OMP_NUM_THREADS is not configured and world_size is greater than 1, - Configure OMP_NUM_THREADS environment variables of NCCL to `num_thread`. - - Args: - num_threads (int): value of `OMP_NUM_THREADS` to set. - """ - # We set OMP_NUM_THREADS=1 by default, which achieves the best speed on our machines - # feel free to change it for better performance. - if "OMP_NUM_THREADS" not in os.environ and get_world_size() > 1: - os.environ["OMP_NUM_THREADS"] = str(num_threads) - if is_main_process(): - logger.info( - "\n***************************************************************\n" - "We set `OMP_NUM_THREADS` for each process to {} to speed up.\n" - "please further tune the variable for optimal performance.\n" - "***************************************************************".format( - os.environ["OMP_NUM_THREADS"] - ) - ) - - -def configure_module(ulimit_value=8192): - """ - Configure pytorch module environment. setting of ulimit and cv2 will be set. - - Args: - ulimit_value(int): default open file number on linux. Default value: 8192. - """ - # system setting - try: - import resource - - rlimit = resource.getrlimit(resource.RLIMIT_NOFILE) - resource.setrlimit(resource.RLIMIT_NOFILE, (ulimit_value, rlimit[1])) - except Exception: - # Exception might be raised in Windows OS or rlimit reaches max limit number. - # However, set rlimit value might not be necessary. - pass - - # cv2 - # multiprocess might be harmful on performance of torch dataloader - os.environ["OPENCV_OPENCL_RUNTIME"] = "disabled" - try: - cv2.setNumThreads(0) - cv2.ocl.setUseOpenCL(False) - except Exception: - # cv2 version mismatch might rasie exceptions. 
- pass diff --git a/spaces/chendl/compositional_test/transformers/scripts/tatoeba/README.md b/spaces/chendl/compositional_test/transformers/scripts/tatoeba/README.md deleted file mode 100644 index 7c492ec4f46e2ec68c4b3c8ff5cee588a389799a..0000000000000000000000000000000000000000 --- a/spaces/chendl/compositional_test/transformers/scripts/tatoeba/README.md +++ /dev/null @@ -1,72 +0,0 @@ - - -Setup transformers following instructions in README.md, (I would fork first). -```bash -git clone git@github.com:huggingface/transformers.git -cd transformers -pip install -e . -pip install pandas GitPython wget -``` - -Get required metadata -``` -curl https://cdn-datasets.huggingface.co/language_codes/language-codes-3b2.csv > language-codes-3b2.csv -curl https://cdn-datasets.huggingface.co/language_codes/iso-639-3.csv > iso-639-3.csv -``` - -Install Tatoeba-Challenge repo inside transformers -```bash -git clone git@github.com:Helsinki-NLP/Tatoeba-Challenge.git -``` - -To convert a few models, call the conversion script from command line: -```bash -python src/transformers/models/marian/convert_marian_tatoeba_to_pytorch.py --models heb-eng eng-heb --save_dir converted -``` - -To convert lots of models you can pass your list of Tatoeba model names to `resolver.convert_models` in a python client or script. - -```python -from transformers.convert_marian_tatoeba_to_pytorch import TatoebaConverter -resolver = TatoebaConverter(save_dir='converted') -resolver.convert_models(['heb-eng', 'eng-heb']) -``` - - -### Upload converted models -Since version v3.5.0, the model sharing workflow is switched to git-based system . Refer to [model sharing doc](https://huggingface.co/transformers/main/model_sharing.html#model-sharing-and-uploading) for more details. - -To upload all converted models, - -1. Install [git-lfs](https://git-lfs.github.com/). - -2. Login to `transformers-cli` - -```bash -huggingface-cli login -``` - -3. Run the `upload_models` script - -```bash -./scripts/tatoeba/upload_models.sh -``` - - -### Modifications -- To change naming logic, change the code near `os.rename`. The model card creation code may also need to change. -- To change model card content, you must modify `TatoebaCodeResolver.write_model_card` diff --git a/spaces/chendl/compositional_test/transformers/src/transformers/benchmark/benchmark_args_utils.py b/spaces/chendl/compositional_test/transformers/src/transformers/benchmark/benchmark_args_utils.py deleted file mode 100644 index d9233906d281c99f6e80a8f86d63ebd28f69645e..0000000000000000000000000000000000000000 --- a/spaces/chendl/compositional_test/transformers/src/transformers/benchmark/benchmark_args_utils.py +++ /dev/null @@ -1,165 +0,0 @@ -# coding=utf-8 -# Copyright 2018 The HuggingFace Inc. team. -# Copyright (c) 2018, NVIDIA CORPORATION. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
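
For context on how the `BenchmarkArguments` dataclass defined just below is normally consumed, here is a minimal, hedged sketch. It assumes the module path shown in the diff header (`transformers.benchmark.benchmark_args_utils`) and the public `HfArgumentParser`; the command-line values are illustrative only.

```python
# Hedged sketch: turning the BenchmarkArguments dataclass defined below into CLI flags.
# The import path follows the diff header; the example arguments are made up.
from transformers import HfArgumentParser
from transformers.benchmark.benchmark_args_utils import BenchmarkArguments

parser = HfArgumentParser(BenchmarkArguments)
(args,) = parser.parse_args_into_dataclasses(
    args=["--models", "bert-base-cased", "--batch_sizes", "8", "16"]
)
print(args.model_names)   # ["bert-base-cased"]
print(args.batch_sizes)   # [8, 16]
```

Fields declared as `List[...]` via `list_field` become repeatable positional values of a single flag, which is why `--batch_sizes 8 16` parses into `[8, 16]`.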
- -import dataclasses -import json -import warnings -from dataclasses import dataclass, field -from time import time -from typing import List - -from ..utils import logging - - -logger = logging.get_logger(__name__) - - -def list_field(default=None, metadata=None): - return field(default_factory=lambda: default, metadata=metadata) - - -@dataclass -class BenchmarkArguments: - """ - BenchMarkArguments are arguments we use in our benchmark scripts **which relate to the training loop itself**. - - Using `HfArgumentParser` we can turn this class into argparse arguments to be able to specify them on the command - line. - """ - - models: List[str] = list_field( - default=[], - metadata={ - "help": ( - "Model checkpoints to be provided to the AutoModel classes. Leave blank to benchmark the base version" - " of all available models" - ) - }, - ) - - batch_sizes: List[int] = list_field( - default=[8], metadata={"help": "List of batch sizes for which memory and time performance will be evaluated"} - ) - - sequence_lengths: List[int] = list_field( - default=[8, 32, 128, 512], - metadata={"help": "List of sequence lengths for which memory and time performance will be evaluated"}, - ) - - inference: bool = field( - default=True, - metadata={"help": "Whether to benchmark inference of model. Inference can be disabled via --no-inference."}, - ) - cuda: bool = field( - default=True, - metadata={"help": "Whether to run on available cuda devices. Cuda can be disabled via --no-cuda."}, - ) - tpu: bool = field( - default=True, metadata={"help": "Whether to run on available tpu devices. TPU can be disabled via --no-tpu."} - ) - fp16: bool = field(default=False, metadata={"help": "Use FP16 to accelerate inference."}) - training: bool = field(default=False, metadata={"help": "Benchmark training of model"}) - verbose: bool = field(default=False, metadata={"help": "Verbose memory tracing"}) - speed: bool = field( - default=True, - metadata={"help": "Whether to perform speed measurements. Speed measurements can be disabled via --no-speed."}, - ) - memory: bool = field( - default=True, - metadata={ - "help": "Whether to perform memory measurements. Memory measurements can be disabled via --no-memory" - }, - ) - trace_memory_line_by_line: bool = field(default=False, metadata={"help": "Trace memory line by line"}) - save_to_csv: bool = field(default=False, metadata={"help": "Save result to a CSV file"}) - log_print: bool = field(default=False, metadata={"help": "Save all print statements in a log file"}) - env_print: bool = field(default=False, metadata={"help": "Whether to print environment information"}) - multi_process: bool = field( - default=True, - metadata={ - "help": ( - "Whether to use multiprocessing for memory and speed measurement. It is highly recommended to use" - " multiprocessing for accurate CPU and GPU memory measurements. This option should only be disabled" - " for debugging / testing and on TPU." 
- ) - }, - ) - inference_time_csv_file: str = field( - default=f"inference_time_{round(time())}.csv", - metadata={"help": "CSV filename used if saving time results to csv."}, - ) - inference_memory_csv_file: str = field( - default=f"inference_memory_{round(time())}.csv", - metadata={"help": "CSV filename used if saving memory results to csv."}, - ) - train_time_csv_file: str = field( - default=f"train_time_{round(time())}.csv", - metadata={"help": "CSV filename used if saving time results to csv for training."}, - ) - train_memory_csv_file: str = field( - default=f"train_memory_{round(time())}.csv", - metadata={"help": "CSV filename used if saving memory results to csv for training."}, - ) - env_info_csv_file: str = field( - default=f"env_info_{round(time())}.csv", - metadata={"help": "CSV filename used if saving environment information."}, - ) - log_filename: str = field( - default=f"log_{round(time())}.csv", - metadata={"help": "Log filename used if print statements are saved in log."}, - ) - repeat: int = field(default=3, metadata={"help": "Times an experiment will be run."}) - only_pretrain_model: bool = field( - default=False, - metadata={ - "help": ( - "Instead of loading the model as defined in `config.architectures` if exists, just load the pretrain" - " model weights." - ) - }, - ) - - def __post_init__(self): - warnings.warn( - f"The class {self.__class__} is deprecated. Hugging Face Benchmarking utils" - " are deprecated in general and it is advised to use external Benchmarking libraries " - " to benchmark Transformer models.", - FutureWarning, - ) - - def to_json_string(self): - """ - Serializes this instance to a JSON string. - """ - return json.dumps(dataclasses.asdict(self), indent=2) - - @property - def model_names(self): - assert len(self.models) > 0, ( - "Please make sure you provide at least one model name / model identifier, *e.g.* `--models" - " bert-base-cased` or `args.models = ['bert-base-cased']." 
- ) - return self.models - - @property - def do_multi_processing(self): - if not self.multi_process: - return False - elif self.is_tpu: - logger.info("Multiprocessing is currently not possible on TPU.") - return False - else: - return True diff --git a/spaces/chenmgtea/cn_tts/attentions.py b/spaces/chenmgtea/cn_tts/attentions.py deleted file mode 100644 index 84759e83a75dccbf4d9e84c7d4c4141725ba462a..0000000000000000000000000000000000000000 --- a/spaces/chenmgtea/cn_tts/attentions.py +++ /dev/null @@ -1,417 +0,0 @@ -import copy -import math -import numpy as np -import torch -from torch import nn -from torch.nn import functional as F - -import commons -import modules -from modules import LayerNorm - - -class Encoder(nn.Module): - def __init__( - self, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size=1, - p_dropout=0.0, - window_size=4, - **kwargs - ): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.window_size = window_size - - self.drop = nn.Dropout(p_dropout) - self.attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.attn_layers.append( - MultiHeadAttention( - hidden_channels, - hidden_channels, - n_heads, - p_dropout=p_dropout, - window_size=window_size, - ) - ) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append( - FFN( - hidden_channels, - hidden_channels, - filter_channels, - kernel_size, - p_dropout=p_dropout, - ) - ) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask): - attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.attn_layers[i](x, x, attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class Decoder(nn.Module): - def __init__( - self, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size=1, - p_dropout=0.0, - proximal_bias=False, - proximal_init=True, - **kwargs - ): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - - self.drop = nn.Dropout(p_dropout) - self.self_attn_layers = nn.ModuleList() - self.norm_layers_0 = nn.ModuleList() - self.encdec_attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.self_attn_layers.append( - MultiHeadAttention( - hidden_channels, - hidden_channels, - n_heads, - p_dropout=p_dropout, - proximal_bias=proximal_bias, - proximal_init=proximal_init, - ) - ) - self.norm_layers_0.append(LayerNorm(hidden_channels)) - self.encdec_attn_layers.append( - MultiHeadAttention( - hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout - ) - ) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append( - FFN( - hidden_channels, - hidden_channels, - filter_channels, - kernel_size, - p_dropout=p_dropout, - causal=True, - ) - ) - 
self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask, h, h_mask): - """ - x: decoder input - h: encoder output - """ - self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to( - device=x.device, dtype=x.dtype - ) - encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.self_attn_layers[i](x, x, self_attn_mask) - y = self.drop(y) - x = self.norm_layers_0[i](x + y) - - y = self.encdec_attn_layers[i](x, h, encdec_attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class MultiHeadAttention(nn.Module): - def __init__( - self, - channels, - out_channels, - n_heads, - p_dropout=0.0, - window_size=None, - heads_share=True, - block_length=None, - proximal_bias=False, - proximal_init=False, - ): - super().__init__() - assert channels % n_heads == 0 - - self.channels = channels - self.out_channels = out_channels - self.n_heads = n_heads - self.p_dropout = p_dropout - self.window_size = window_size - self.heads_share = heads_share - self.block_length = block_length - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - self.attn = None - - self.k_channels = channels // n_heads - self.conv_q = nn.Conv1d(channels, channels, 1) - self.conv_k = nn.Conv1d(channels, channels, 1) - self.conv_v = nn.Conv1d(channels, channels, 1) - self.conv_o = nn.Conv1d(channels, out_channels, 1) - self.drop = nn.Dropout(p_dropout) - - if window_size is not None: - n_heads_rel = 1 if heads_share else n_heads - rel_stddev = self.k_channels**-0.5 - self.emb_rel_k = nn.Parameter( - torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) - * rel_stddev - ) - self.emb_rel_v = nn.Parameter( - torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) - * rel_stddev - ) - - nn.init.xavier_uniform_(self.conv_q.weight) - nn.init.xavier_uniform_(self.conv_k.weight) - nn.init.xavier_uniform_(self.conv_v.weight) - if proximal_init: - with torch.no_grad(): - self.conv_k.weight.copy_(self.conv_q.weight) - self.conv_k.bias.copy_(self.conv_q.bias) - - def forward(self, x, c, attn_mask=None): - q = self.conv_q(x) - k = self.conv_k(c) - v = self.conv_v(c) - - x, self.attn = self.attention(q, k, v, mask=attn_mask) - - x = self.conv_o(x) - return x - - def attention(self, query, key, value, mask=None): - # reshape [b, d, t] -> [b, n_h, t, d_k] - b, d, t_s, t_t = (*key.size(), query.size(2)) - query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3) - key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - - scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1)) - if self.window_size is not None: - assert ( - t_s == t_t - ), "Relative attention is only available for self-attention." - key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s) - rel_logits = self._matmul_with_relative_keys( - query / math.sqrt(self.k_channels), key_relative_embeddings - ) - scores_local = self._relative_position_to_absolute_position(rel_logits) - scores = scores + scores_local - if self.proximal_bias: - assert t_s == t_t, "Proximal bias is only available for self-attention." 
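
The proximal bias added in the next statement is a fixed `[1, 1, T, T]` matrix of `-log(1 + |i - j|)`: zero on the diagonal and increasingly negative for positions that are far apart, so each query is nudged toward nearby keys. A standalone sketch of the same computation, mirroring `_attention_bias_proximal` further down in this file:

```python
import torch

def proximal_bias(length: int) -> torch.Tensor:
    # bias[i, j] = -log(1 + |i - j|): 0 on the diagonal, more negative as |i - j| grows
    r = torch.arange(length, dtype=torch.float32)
    diff = r.unsqueeze(0) - r.unsqueeze(1)
    return -torch.log1p(diff.abs()).unsqueeze(0).unsqueeze(0)  # shape [1, 1, length, length]

print(proximal_bias(4)[0, 0])
# tensor([[ 0.0000, -0.6931, -1.0986, -1.3863],
#         [-0.6931,  0.0000, -0.6931, -1.0986],
#         [-1.0986, -0.6931,  0.0000, -0.6931],
#         [-1.3863, -1.0986, -0.6931,  0.0000]])
```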
- scores = scores + self._attention_bias_proximal(t_s).to( - device=scores.device, dtype=scores.dtype - ) - if mask is not None: - scores = scores.masked_fill(mask == 0, -1e4) - if self.block_length is not None: - assert ( - t_s == t_t - ), "Local attention is only available for self-attention." - block_mask = ( - torch.ones_like(scores) - .triu(-self.block_length) - .tril(self.block_length) - ) - scores = scores.masked_fill(block_mask == 0, -1e4) - p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s] - p_attn = self.drop(p_attn) - output = torch.matmul(p_attn, value) - if self.window_size is not None: - relative_weights = self._absolute_position_to_relative_position(p_attn) - value_relative_embeddings = self._get_relative_embeddings( - self.emb_rel_v, t_s - ) - output = output + self._matmul_with_relative_values( - relative_weights, value_relative_embeddings - ) - output = ( - output.transpose(2, 3).contiguous().view(b, d, t_t) - ) # [b, n_h, t_t, d_k] -> [b, d, t_t] - return output, p_attn - - def _matmul_with_relative_values(self, x, y): - """ - x: [b, h, l, m] - y: [h or 1, m, d] - ret: [b, h, l, d] - """ - ret = torch.matmul(x, y.unsqueeze(0)) - return ret - - def _matmul_with_relative_keys(self, x, y): - """ - x: [b, h, l, d] - y: [h or 1, m, d] - ret: [b, h, l, m] - """ - ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1)) - return ret - - def _get_relative_embeddings(self, relative_embeddings, length): - max_relative_position = 2 * self.window_size + 1 - # Pad first before slice to avoid using cond ops. - pad_length = max(length - (self.window_size + 1), 0) - slice_start_position = max((self.window_size + 1) - length, 0) - slice_end_position = slice_start_position + 2 * length - 1 - if pad_length > 0: - padded_relative_embeddings = F.pad( - relative_embeddings, - commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]]), - ) - else: - padded_relative_embeddings = relative_embeddings - used_relative_embeddings = padded_relative_embeddings[ - :, slice_start_position:slice_end_position - ] - return used_relative_embeddings - - def _relative_position_to_absolute_position(self, x): - """ - x: [b, h, l, 2*l-1] - ret: [b, h, l, l] - """ - batch, heads, length, _ = x.size() - # Concat columns of pad to shift from relative to absolute indexing. - x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, 1]])) - - # Concat extra elements so to add up to shape (len+1, 2*len-1). - x_flat = x.view([batch, heads, length * 2 * length]) - x_flat = F.pad( - x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [0, length - 1]]) - ) - - # Reshape and slice out the padded elements. - x_final = x_flat.view([batch, heads, length + 1, 2 * length - 1])[ - :, :, :length, length - 1 : - ] - return x_final - - def _absolute_position_to_relative_position(self, x): - """ - x: [b, h, l, l] - ret: [b, h, l, 2*l-1] - """ - batch, heads, length, _ = x.size() - # padd along column - x = F.pad( - x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length - 1]]) - ) - x_flat = x.view([batch, heads, length**2 + length * (length - 1)]) - # add 0's in the beginning that will skew the elements after reshape - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]])) - x_final = x_flat.view([batch, heads, length, 2 * length])[:, :, :, 1:] - return x_final - - def _attention_bias_proximal(self, length): - """Bias for self-attention to encourage attention to close positions. - Args: - length: an integer scalar. 
- Returns: - a Tensor with shape [1, 1, length, length] - """ - r = torch.arange(length, dtype=torch.float32) - diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1) - return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0) - - -class FFN(nn.Module): - def __init__( - self, - in_channels, - out_channels, - filter_channels, - kernel_size, - p_dropout=0.0, - activation=None, - causal=False, - ): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.activation = activation - self.causal = causal - - if causal: - self.padding = self._causal_padding - else: - self.padding = self._same_padding - - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size) - self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size) - self.drop = nn.Dropout(p_dropout) - - def forward(self, x, x_mask): - x = self.conv_1(self.padding(x * x_mask)) - if self.activation == "gelu": - x = x * torch.sigmoid(1.702 * x) - else: - x = torch.relu(x) - x = self.drop(x) - x = self.conv_2(self.padding(x * x_mask)) - return x * x_mask - - def _causal_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = self.kernel_size - 1 - pad_r = 0 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x - - def _same_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = (self.kernel_size - 1) // 2 - pad_r = self.kernel_size // 2 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x diff --git a/spaces/chilleverydaychill/roop/roop/face_analyser.py b/spaces/chilleverydaychill/roop/roop/face_analyser.py deleted file mode 100644 index 9c0afe458763edb22dc2332f527dfdba48575b1d..0000000000000000000000000000000000000000 --- a/spaces/chilleverydaychill/roop/roop/face_analyser.py +++ /dev/null @@ -1,34 +0,0 @@ -import threading -from typing import Any -import insightface - -import roop.globals -from roop.typing import Frame - -FACE_ANALYSER = None -THREAD_LOCK = threading.Lock() - - -def get_face_analyser() -> Any: - global FACE_ANALYSER - - with THREAD_LOCK: - if FACE_ANALYSER is None: - FACE_ANALYSER = insightface.app.FaceAnalysis(name='buffalo_l', providers=roop.globals.execution_providers) - FACE_ANALYSER.prepare(ctx_id=0, det_size=(640, 640)) - return FACE_ANALYSER - - -def get_one_face(frame: Frame) -> Any: - face = get_face_analyser().get(frame) - try: - return min(face, key=lambda x: x.bbox[0]) - except ValueError: - return None - - -def get_many_faces(frame: Frame) -> Any: - try: - return get_face_analyser().get(frame) - except IndexError: - return None diff --git a/spaces/chilleverydaychill/roop/roop/ui.py b/spaces/chilleverydaychill/roop/roop/ui.py deleted file mode 100644 index ba693dac116bd416b91518734fa550e9dfb95c7b..0000000000000000000000000000000000000000 --- a/spaces/chilleverydaychill/roop/roop/ui.py +++ /dev/null @@ -1,231 +0,0 @@ -import os -import webbrowser -import customtkinter as ctk -from typing import Callable, Tuple -import cv2 -from PIL import Image, ImageOps - -import roop.globals -import roop.metadata -from roop.face_analyser import get_one_face -from roop.capturer import get_video_frame, get_video_frame_total -from roop.predicter import predict_frame -from roop.processors.frame.core import get_frame_processors_modules -from roop.utilities import is_image, is_video, resolve_relative_path - -ROOT = None -ROOT_HEIGHT 
= 700 -ROOT_WIDTH = 600 - -PREVIEW = None -PREVIEW_MAX_HEIGHT = 700 -PREVIEW_MAX_WIDTH = 1200 - -RECENT_DIRECTORY_SOURCE = None -RECENT_DIRECTORY_TARGET = None -RECENT_DIRECTORY_OUTPUT = None - -preview_label = None -preview_slider = None -source_label = None -target_label = None -status_label = None - - -def init(start: Callable[[], None], destroy: Callable[[], None]) -> ctk.CTk: - global ROOT, PREVIEW - - ROOT = create_root(start, destroy) - PREVIEW = create_preview(ROOT) - - return ROOT - - -def create_root(start: Callable[[], None], destroy: Callable[[], None]) -> ctk.CTk: - global source_label, target_label, status_label - - ctk.deactivate_automatic_dpi_awareness() - ctk.set_appearance_mode('system') - ctk.set_default_color_theme(resolve_relative_path('ui.json')) - - root = ctk.CTk() - root.minsize(ROOT_WIDTH, ROOT_HEIGHT) - root.title(f'{roop.metadata.name} {roop.metadata.version}') - root.configure() - root.protocol('WM_DELETE_WINDOW', lambda: destroy()) - - source_label = ctk.CTkLabel(root, text=None) - source_label.place(relx=0.1, rely=0.1, relwidth=0.3, relheight=0.25) - - target_label = ctk.CTkLabel(root, text=None) - target_label.place(relx=0.6, rely=0.1, relwidth=0.3, relheight=0.25) - - source_button = ctk.CTkButton(root, text='Select a face', cursor='hand2', command=lambda: select_source_path()) - source_button.place(relx=0.1, rely=0.4, relwidth=0.3, relheight=0.1) - - target_button = ctk.CTkButton(root, text='Select a target', cursor='hand2', command=lambda: select_target_path()) - target_button.place(relx=0.6, rely=0.4, relwidth=0.3, relheight=0.1) - - keep_fps_value = ctk.BooleanVar(value=roop.globals.keep_fps) - keep_fps_checkbox = ctk.CTkSwitch(root, text='Keep fps', variable=keep_fps_value, cursor='hand2', command=lambda: setattr(roop.globals, 'keep_fps', not roop.globals.keep_fps)) - keep_fps_checkbox.place(relx=0.1, rely=0.6) - - keep_frames_value = ctk.BooleanVar(value=roop.globals.keep_frames) - keep_frames_switch = ctk.CTkSwitch(root, text='Keep frames', variable=keep_frames_value, cursor='hand2', command=lambda: setattr(roop.globals, 'keep_frames', keep_frames_value.get())) - keep_frames_switch.place(relx=0.1, rely=0.65) - - keep_audio_value = ctk.BooleanVar(value=roop.globals.keep_audio) - keep_audio_switch = ctk.CTkSwitch(root, text='Keep audio', variable=keep_audio_value, cursor='hand2', command=lambda: setattr(roop.globals, 'keep_audio', keep_audio_value.get())) - keep_audio_switch.place(relx=0.6, rely=0.6) - - many_faces_value = ctk.BooleanVar(value=roop.globals.many_faces) - many_faces_switch = ctk.CTkSwitch(root, text='Many faces', variable=many_faces_value, cursor='hand2', command=lambda: setattr(roop.globals, 'many_faces', many_faces_value.get())) - many_faces_switch.place(relx=0.6, rely=0.65) - - start_button = ctk.CTkButton(root, text='Start', cursor='hand2', command=lambda: select_output_path(start)) - start_button.place(relx=0.15, rely=0.75, relwidth=0.2, relheight=0.05) - - stop_button = ctk.CTkButton(root, text='Destroy', cursor='hand2', command=lambda: destroy()) - stop_button.place(relx=0.4, rely=0.75, relwidth=0.2, relheight=0.05) - - preview_button = ctk.CTkButton(root, text='Preview', cursor='hand2', command=lambda: toggle_preview()) - preview_button.place(relx=0.65, rely=0.75, relwidth=0.2, relheight=0.05) - - status_label = ctk.CTkLabel(root, text=None, justify='center') - status_label.place(relx=0.1, rely=0.9, relwidth=0.8) - - donate_label = ctk.CTkLabel(root, text='^_^ Donate to project ^_^', justify='center', cursor='hand2') - 
donate_label.place(relx=0.1, rely=0.95, relwidth=0.8) - donate_label.configure(text_color=ctk.ThemeManager.theme.get('RoopDonate').get('text_color')) - donate_label.bind(' - ) -} diff --git a/spaces/ibm-nasa-geospatial/Prithvi-100M-multi-temporal-crop-classification-demo/README.md b/spaces/ibm-nasa-geospatial/Prithvi-100M-multi-temporal-crop-classification-demo/README.md deleted file mode 100644 index db03db61134c15420968aaf383cc9205f7ac61e0..0000000000000000000000000000000000000000 --- a/spaces/ibm-nasa-geospatial/Prithvi-100M-multi-temporal-crop-classification-demo/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: Prithvi 100M Multi Temporal Crop Classification Demo -emoji: 📚 -colorFrom: purple -colorTo: red -sdk: docker -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/iccv23-diffusers-demo/LoraTheExplorer/lora.py b/spaces/iccv23-diffusers-demo/LoraTheExplorer/lora.py deleted file mode 100644 index 3ac02a748131ab2c841fec0248c5fe18e2659dd3..0000000000000000000000000000000000000000 --- a/spaces/iccv23-diffusers-demo/LoraTheExplorer/lora.py +++ /dev/null @@ -1,1222 +0,0 @@ -# LoRA network module taken from https://github.com/bmaltais/kohya_ss/blob/master/networks/lora.py -# reference: -# https://github.com/microsoft/LoRA/blob/main/loralib/layers.py -# https://github.com/cloneofsimo/lora/blob/master/lora_diffusion/lora.py - -import math -import os -from typing import Dict, List, Optional, Tuple, Type, Union -from diffusers import AutoencoderKL -from transformers import CLIPTextModel -import numpy as np -import torch -import re - - -RE_UPDOWN = re.compile(r"(up|down)_blocks_(\d+)_(resnets|upsamplers|downsamplers|attentions)_(\d+)_") - -RE_UPDOWN = re.compile(r"(up|down)_blocks_(\d+)_(resnets|upsamplers|downsamplers|attentions)_(\d+)_") - - -class LoRAModule(torch.nn.Module): - """ - replaces forward method of the original Linear, instead of replacing the original Linear module. 
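
To make that hook concrete, here is a minimal, hypothetical sketch of wrapping a single `nn.Linear` with the `LoRAModule` defined below. The layer sizes and the module name are made up, and `LoRAModule` is assumed to be in scope from this file.

```python
import torch

# Hypothetical 16->32 projection standing in for a UNet / text-encoder Linear.
org_linear = torch.nn.Linear(16, 32)
lora = LoRAModule("lora_unet_example", org_linear, multiplier=1.0, lora_dim=4, alpha=1)
lora.apply_to()  # org_linear.forward now routes through LoRAModule.forward

x = torch.randn(2, 16)
y = org_linear(x)  # original(x) + lora_up(lora_down(x)) * multiplier * (alpha / lora_dim)
print(y.shape)     # torch.Size([2, 32])
```

Because `lora_up` is zero-initialized, `y` equals the plain Linear output right after construction; the LoRA branch only contributes once `lora_up` has been trained.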
- """ - - def __init__( - self, - lora_name, - org_module: torch.nn.Module, - multiplier=1.0, - lora_dim=4, - alpha=1, - dropout=None, - rank_dropout=None, - module_dropout=None, - ): - """if alpha == 0 or None, alpha is rank (no scaling).""" - super().__init__() - self.lora_name = lora_name - - if org_module.__class__.__name__ == "Conv2d": - in_dim = org_module.in_channels - out_dim = org_module.out_channels - else: - in_dim = org_module.in_features - out_dim = org_module.out_features - - # if limit_rank: - # self.lora_dim = min(lora_dim, in_dim, out_dim) - # if self.lora_dim != lora_dim: - # print(f"{lora_name} dim (rank) is changed to: {self.lora_dim}") - # else: - self.lora_dim = lora_dim - - if org_module.__class__.__name__ == "Conv2d": - kernel_size = org_module.kernel_size - stride = org_module.stride - padding = org_module.padding - self.lora_down = torch.nn.Conv2d(in_dim, self.lora_dim, kernel_size, stride, padding, bias=False) - self.lora_up = torch.nn.Conv2d(self.lora_dim, out_dim, (1, 1), (1, 1), bias=False) - else: - self.lora_down = torch.nn.Linear(in_dim, self.lora_dim, bias=False) - self.lora_up = torch.nn.Linear(self.lora_dim, out_dim, bias=False) - - if type(alpha) == torch.Tensor: - alpha = alpha.detach().float().numpy() # without casting, bf16 causes error - alpha = self.lora_dim if alpha is None or alpha == 0 else alpha - self.scale = alpha / self.lora_dim - self.register_buffer("alpha", torch.tensor(alpha)) # 定数として扱える - - # same as microsoft's - torch.nn.init.kaiming_uniform_(self.lora_down.weight, a=math.sqrt(5)) - torch.nn.init.zeros_(self.lora_up.weight) - - self.multiplier = multiplier - self.org_module = org_module # remove in applying - self.dropout = dropout - self.rank_dropout = rank_dropout - self.module_dropout = module_dropout - - def apply_to(self): - self.org_forward = self.org_module.forward - self.org_module.forward = self.forward - del self.org_module - - def forward(self, x): - org_forwarded = self.org_forward(x) - - # module dropout - if self.module_dropout is not None and self.training: - if torch.rand(1) < self.module_dropout: - return org_forwarded - - lx = self.lora_down(x) - - # normal dropout - if self.dropout is not None and self.training: - lx = torch.nn.functional.dropout(lx, p=self.dropout) - - # rank dropout - if self.rank_dropout is not None and self.training: - mask = torch.rand((lx.size(0), self.lora_dim), device=lx.device) > self.rank_dropout - if len(lx.size()) == 3: - mask = mask.unsqueeze(1) # for Text Encoder - elif len(lx.size()) == 4: - mask = mask.unsqueeze(-1).unsqueeze(-1) # for Conv2d - lx = lx * mask - - # scaling for rank dropout: treat as if the rank is changed - # maskから計算することも考えられるが、augmentation的な効果を期待してrank_dropoutを用いる - scale = self.scale * (1.0 / (1.0 - self.rank_dropout)) # redundant for readability - else: - scale = self.scale - - lx = self.lora_up(lx) - - return org_forwarded + lx * self.multiplier * scale - - -class LoRAInfModule(LoRAModule): - def __init__( - self, - lora_name, - org_module: torch.nn.Module, - multiplier=1.0, - lora_dim=4, - alpha=1, - **kwargs, - ): - # no dropout for inference - super().__init__(lora_name, org_module, multiplier, lora_dim, alpha) - - self.org_module_ref = [org_module] # 後から参照できるように - self.enabled = True - - # check regional or not by lora_name - self.text_encoder = False - if lora_name.startswith("lora_te_"): - self.regional = False - self.use_sub_prompt = True - self.text_encoder = True - elif "attn2_to_k" in lora_name or "attn2_to_v" in lora_name: - self.regional = False - 
self.use_sub_prompt = True - elif "time_emb" in lora_name: - self.regional = False - self.use_sub_prompt = False - else: - self.regional = True - self.use_sub_prompt = False - - self.network: LoRANetwork = None - - def set_network(self, network): - self.network = network - - # freezeしてマージする - def merge_to(self, sd, dtype, device): - # get up/down weight - up_weight = sd["lora_up.weight"].to(torch.float).to(device) - down_weight = sd["lora_down.weight"].to(torch.float).to(device) - - # extract weight from org_module - org_sd = self.org_module.state_dict() - weight = org_sd["weight"].to(torch.float) - - # merge weight - if len(weight.size()) == 2: - # linear - weight = weight + self.multiplier * (up_weight @ down_weight) * self.scale - elif down_weight.size()[2:4] == (1, 1): - # conv2d 1x1 - weight = ( - weight - + self.multiplier - * (up_weight.squeeze(3).squeeze(2) @ down_weight.squeeze(3).squeeze(2)).unsqueeze(2).unsqueeze(3) - * self.scale - ) - else: - # conv2d 3x3 - conved = torch.nn.functional.conv2d(down_weight.permute(1, 0, 2, 3), up_weight).permute(1, 0, 2, 3) - # print(conved.size(), weight.size(), module.stride, module.padding) - weight = weight + self.multiplier * conved * self.scale - - # set weight to org_module - org_sd["weight"] = weight.to(dtype) - self.org_module.load_state_dict(org_sd) - - # 復元できるマージのため、このモジュールのweightを返す - def get_weight(self, multiplier=None): - if multiplier is None: - multiplier = self.multiplier - - # get up/down weight from module - up_weight = self.lora_up.weight.to(torch.float) - down_weight = self.lora_down.weight.to(torch.float) - - # pre-calculated weight - if len(down_weight.size()) == 2: - # linear - weight = self.multiplier * (up_weight @ down_weight) * self.scale - elif down_weight.size()[2:4] == (1, 1): - # conv2d 1x1 - weight = ( - self.multiplier - * (up_weight.squeeze(3).squeeze(2) @ down_weight.squeeze(3).squeeze(2)).unsqueeze(2).unsqueeze(3) - * self.scale - ) - else: - # conv2d 3x3 - conved = torch.nn.functional.conv2d(down_weight.permute(1, 0, 2, 3), up_weight).permute(1, 0, 2, 3) - weight = self.multiplier * conved * self.scale - - return weight - - def set_region(self, region): - self.region = region - self.region_mask = None - - def default_forward(self, x): - # print("default_forward", self.lora_name, x.size()) - return self.org_forward(x) + self.lora_up(self.lora_down(x)) * self.multiplier * self.scale - - def forward(self, x): - if not self.enabled: - return self.org_forward(x) - - if self.network is None or self.network.sub_prompt_index is None: - return self.default_forward(x) - if not self.regional and not self.use_sub_prompt: - return self.default_forward(x) - - if self.regional: - return self.regional_forward(x) - else: - return self.sub_prompt_forward(x) - - def get_mask_for_x(self, x): - # calculate size from shape of x - if len(x.size()) == 4: - h, w = x.size()[2:4] - area = h * w - else: - area = x.size()[1] - - mask = self.network.mask_dic[area] - if mask is None: - raise ValueError(f"mask is None for resolution {area}") - if len(x.size()) != 4: - mask = torch.reshape(mask, (1, -1, 1)) - return mask - - def regional_forward(self, x): - if "attn2_to_out" in self.lora_name: - return self.to_out_forward(x) - - if self.network.mask_dic is None: # sub_prompt_index >= 3 - return self.default_forward(x) - - # apply mask for LoRA result - lx = self.lora_up(self.lora_down(x)) * self.multiplier * self.scale - mask = self.get_mask_for_x(lx) - # print("regional", self.lora_name, self.network.sub_prompt_index, lx.size(), 
mask.size()) - lx = lx * mask - - x = self.org_forward(x) - x = x + lx - - if "attn2_to_q" in self.lora_name and self.network.is_last_network: - x = self.postp_to_q(x) - - return x - - def postp_to_q(self, x): - # repeat x to num_sub_prompts - has_real_uncond = x.size()[0] // self.network.batch_size == 3 - qc = self.network.batch_size # uncond - qc += self.network.batch_size * self.network.num_sub_prompts # cond - if has_real_uncond: - qc += self.network.batch_size # real_uncond - - query = torch.zeros((qc, x.size()[1], x.size()[2]), device=x.device, dtype=x.dtype) - query[: self.network.batch_size] = x[: self.network.batch_size] - - for i in range(self.network.batch_size): - qi = self.network.batch_size + i * self.network.num_sub_prompts - query[qi : qi + self.network.num_sub_prompts] = x[self.network.batch_size + i] - - if has_real_uncond: - query[-self.network.batch_size :] = x[-self.network.batch_size :] - - # print("postp_to_q", self.lora_name, x.size(), query.size(), self.network.num_sub_prompts) - return query - - def sub_prompt_forward(self, x): - if x.size()[0] == self.network.batch_size: # if uncond in text_encoder, do not apply LoRA - return self.org_forward(x) - - emb_idx = self.network.sub_prompt_index - if not self.text_encoder: - emb_idx += self.network.batch_size - - # apply sub prompt of X - lx = x[emb_idx :: self.network.num_sub_prompts] - lx = self.lora_up(self.lora_down(lx)) * self.multiplier * self.scale - - # print("sub_prompt_forward", self.lora_name, x.size(), lx.size(), emb_idx) - - x = self.org_forward(x) - x[emb_idx :: self.network.num_sub_prompts] += lx - - return x - - def to_out_forward(self, x): - # print("to_out_forward", self.lora_name, x.size(), self.network.is_last_network) - - if self.network.is_last_network: - masks = [None] * self.network.num_sub_prompts - self.network.shared[self.lora_name] = (None, masks) - else: - lx, masks = self.network.shared[self.lora_name] - - # call own LoRA - x1 = x[self.network.batch_size + self.network.sub_prompt_index :: self.network.num_sub_prompts] - lx1 = self.lora_up(self.lora_down(x1)) * self.multiplier * self.scale - - if self.network.is_last_network: - lx = torch.zeros( - (self.network.num_sub_prompts * self.network.batch_size, *lx1.size()[1:]), device=lx1.device, dtype=lx1.dtype - ) - self.network.shared[self.lora_name] = (lx, masks) - - # print("to_out_forward", lx.size(), lx1.size(), self.network.sub_prompt_index, self.network.num_sub_prompts) - lx[self.network.sub_prompt_index :: self.network.num_sub_prompts] += lx1 - masks[self.network.sub_prompt_index] = self.get_mask_for_x(lx1) - - # if not last network, return x and masks - x = self.org_forward(x) - if not self.network.is_last_network: - return x - - lx, masks = self.network.shared.pop(self.lora_name) - - # if last network, combine separated x with mask weighted sum - has_real_uncond = x.size()[0] // self.network.batch_size == self.network.num_sub_prompts + 2 - - out = torch.zeros((self.network.batch_size * (3 if has_real_uncond else 2), *x.size()[1:]), device=x.device, dtype=x.dtype) - out[: self.network.batch_size] = x[: self.network.batch_size] # uncond - if has_real_uncond: - out[-self.network.batch_size :] = x[-self.network.batch_size :] # real_uncond - - # print("to_out_forward", self.lora_name, self.network.sub_prompt_index, self.network.num_sub_prompts) - # for i in range(len(masks)): - # if masks[i] is None: - # masks[i] = torch.zeros_like(masks[-1]) - - mask = torch.cat(masks) - mask_sum = torch.sum(mask, dim=0) + 1e-4 - for i in 
range(self.network.batch_size): - # 1枚の画像ごとに処理する - lx1 = lx[i * self.network.num_sub_prompts : (i + 1) * self.network.num_sub_prompts] - lx1 = lx1 * mask - lx1 = torch.sum(lx1, dim=0) - - xi = self.network.batch_size + i * self.network.num_sub_prompts - x1 = x[xi : xi + self.network.num_sub_prompts] - x1 = x1 * mask - x1 = torch.sum(x1, dim=0) - x1 = x1 / mask_sum - - x1 = x1 + lx1 - out[self.network.batch_size + i] = x1 - - # print("to_out_forward", x.size(), out.size(), has_real_uncond) - return out - - -def parse_block_lr_kwargs(nw_kwargs): - down_lr_weight = nw_kwargs.get("down_lr_weight", None) - mid_lr_weight = nw_kwargs.get("mid_lr_weight", None) - up_lr_weight = nw_kwargs.get("up_lr_weight", None) - - # 以上のいずれにも設定がない場合は無効としてNoneを返す - if down_lr_weight is None and mid_lr_weight is None and up_lr_weight is None: - return None, None, None - - # extract learning rate weight for each block - if down_lr_weight is not None: - # if some parameters are not set, use zero - if "," in down_lr_weight: - down_lr_weight = [(float(s) if s else 0.0) for s in down_lr_weight.split(",")] - - if mid_lr_weight is not None: - mid_lr_weight = float(mid_lr_weight) - - if up_lr_weight is not None: - if "," in up_lr_weight: - up_lr_weight = [(float(s) if s else 0.0) for s in up_lr_weight.split(",")] - - down_lr_weight, mid_lr_weight, up_lr_weight = get_block_lr_weight( - down_lr_weight, mid_lr_weight, up_lr_weight, float(nw_kwargs.get("block_lr_zero_threshold", 0.0)) - ) - - return down_lr_weight, mid_lr_weight, up_lr_weight - - -def create_network( - multiplier: float, - network_dim: Optional[int], - network_alpha: Optional[float], - vae: AutoencoderKL, - text_encoder: Union[CLIPTextModel, List[CLIPTextModel]], - unet, - neuron_dropout: Optional[float] = None, - **kwargs, -): - if network_dim is None: - network_dim = 4 # default - if network_alpha is None: - network_alpha = 1.0 - - # extract dim/alpha for conv2d, and block dim - conv_dim = kwargs.get("conv_dim", None) - conv_alpha = kwargs.get("conv_alpha", None) - if conv_dim is not None: - conv_dim = int(conv_dim) - if conv_alpha is None: - conv_alpha = 1.0 - else: - conv_alpha = float(conv_alpha) - - # block dim/alpha/lr - block_dims = kwargs.get("block_dims", None) - down_lr_weight, mid_lr_weight, up_lr_weight = parse_block_lr_kwargs(kwargs) - - # 以上のいずれかに指定があればblockごとのdim(rank)を有効にする - if block_dims is not None or down_lr_weight is not None or mid_lr_weight is not None or up_lr_weight is not None: - block_alphas = kwargs.get("block_alphas", None) - conv_block_dims = kwargs.get("conv_block_dims", None) - conv_block_alphas = kwargs.get("conv_block_alphas", None) - - block_dims, block_alphas, conv_block_dims, conv_block_alphas = get_block_dims_and_alphas( - block_dims, block_alphas, network_dim, network_alpha, conv_block_dims, conv_block_alphas, conv_dim, conv_alpha - ) - - # remove block dim/alpha without learning rate - block_dims, block_alphas, conv_block_dims, conv_block_alphas = remove_block_dims_and_alphas( - block_dims, block_alphas, conv_block_dims, conv_block_alphas, down_lr_weight, mid_lr_weight, up_lr_weight - ) - - else: - block_alphas = None - conv_block_dims = None - conv_block_alphas = None - - # rank/module dropout - rank_dropout = kwargs.get("rank_dropout", None) - if rank_dropout is not None: - rank_dropout = float(rank_dropout) - module_dropout = kwargs.get("module_dropout", None) - if module_dropout is not None: - module_dropout = float(module_dropout) - - # すごく引数が多いな ( ^ω^)・・・ - network = LoRANetwork( - text_encoder, - unet, - 
multiplier=multiplier, - lora_dim=network_dim, - alpha=network_alpha, - dropout=neuron_dropout, - rank_dropout=rank_dropout, - module_dropout=module_dropout, - conv_lora_dim=conv_dim, - conv_alpha=conv_alpha, - block_dims=block_dims, - block_alphas=block_alphas, - conv_block_dims=conv_block_dims, - conv_block_alphas=conv_block_alphas, - varbose=True, - ) - - if up_lr_weight is not None or mid_lr_weight is not None or down_lr_weight is not None: - network.set_block_lr_weight(up_lr_weight, mid_lr_weight, down_lr_weight) - - return network - - -# このメソッドは外部から呼び出される可能性を考慮しておく -# network_dim, network_alpha にはデフォルト値が入っている。 -# block_dims, block_alphas は両方ともNoneまたは両方とも値が入っている -# conv_dim, conv_alpha は両方ともNoneまたは両方とも値が入っている -def get_block_dims_and_alphas( - block_dims, block_alphas, network_dim, network_alpha, conv_block_dims, conv_block_alphas, conv_dim, conv_alpha -): - num_total_blocks = LoRANetwork.NUM_OF_BLOCKS * 2 + 1 - - def parse_ints(s): - return [int(i) for i in s.split(",")] - - def parse_floats(s): - return [float(i) for i in s.split(",")] - - # block_dimsとblock_alphasをパースする。必ず値が入る - if block_dims is not None: - block_dims = parse_ints(block_dims) - assert ( - len(block_dims) == num_total_blocks - ), f"block_dims must have {num_total_blocks} elements / block_dimsは{num_total_blocks}個指定してください" - else: - print(f"block_dims is not specified. all dims are set to {network_dim} / block_dimsが指定されていません。すべてのdimは{network_dim}になります") - block_dims = [network_dim] * num_total_blocks - - if block_alphas is not None: - block_alphas = parse_floats(block_alphas) - assert ( - len(block_alphas) == num_total_blocks - ), f"block_alphas must have {num_total_blocks} elements / block_alphasは{num_total_blocks}個指定してください" - else: - print( - f"block_alphas is not specified. all alphas are set to {network_alpha} / block_alphasが指定されていません。すべてのalphaは{network_alpha}になります" - ) - block_alphas = [network_alpha] * num_total_blocks - - # conv_block_dimsとconv_block_alphasを、指定がある場合のみパースする。指定がなければconv_dimとconv_alphaを使う - if conv_block_dims is not None: - conv_block_dims = parse_ints(conv_block_dims) - assert ( - len(conv_block_dims) == num_total_blocks - ), f"conv_block_dims must have {num_total_blocks} elements / conv_block_dimsは{num_total_blocks}個指定してください" - - if conv_block_alphas is not None: - conv_block_alphas = parse_floats(conv_block_alphas) - assert ( - len(conv_block_alphas) == num_total_blocks - ), f"conv_block_alphas must have {num_total_blocks} elements / conv_block_alphasは{num_total_blocks}個指定してください" - else: - if conv_alpha is None: - conv_alpha = 1.0 - print( - f"conv_block_alphas is not specified. 
all alphas are set to {conv_alpha} / conv_block_alphasが指定されていません。すべてのalphaは{conv_alpha}になります" - ) - conv_block_alphas = [conv_alpha] * num_total_blocks - else: - if conv_dim is not None: - print( - f"conv_dim/alpha for all blocks are set to {conv_dim} and {conv_alpha} / すべてのブロックのconv_dimとalphaは{conv_dim}および{conv_alpha}になります" - ) - conv_block_dims = [conv_dim] * num_total_blocks - conv_block_alphas = [conv_alpha] * num_total_blocks - else: - conv_block_dims = None - conv_block_alphas = None - - return block_dims, block_alphas, conv_block_dims, conv_block_alphas - - -# 層別学習率用に層ごとの学習率に対する倍率を定義する、外部から呼び出される可能性を考慮しておく -def get_block_lr_weight( - down_lr_weight, mid_lr_weight, up_lr_weight, zero_threshold -) -> Tuple[List[float], List[float], List[float]]: - # パラメータ未指定時は何もせず、今までと同じ動作とする - if up_lr_weight is None and mid_lr_weight is None and down_lr_weight is None: - return None, None, None - - max_len = LoRANetwork.NUM_OF_BLOCKS # フルモデル相当でのup,downの層の数 - - def get_list(name_with_suffix) -> List[float]: - import math - - tokens = name_with_suffix.split("+") - name = tokens[0] - base_lr = float(tokens[1]) if len(tokens) > 1 else 0.0 - - if name == "cosine": - return [math.sin(math.pi * (i / (max_len - 1)) / 2) + base_lr for i in reversed(range(max_len))] - elif name == "sine": - return [math.sin(math.pi * (i / (max_len - 1)) / 2) + base_lr for i in range(max_len)] - elif name == "linear": - return [i / (max_len - 1) + base_lr for i in range(max_len)] - elif name == "reverse_linear": - return [i / (max_len - 1) + base_lr for i in reversed(range(max_len))] - elif name == "zeros": - return [0.0 + base_lr] * max_len - else: - print( - "Unknown lr_weight argument %s is used. Valid arguments: / 不明なlr_weightの引数 %s が使われました。有効な引数:\n\tcosine, sine, linear, reverse_linear, zeros" - % (name) - ) - return None - - if type(down_lr_weight) == str: - down_lr_weight = get_list(down_lr_weight) - if type(up_lr_weight) == str: - up_lr_weight = get_list(up_lr_weight) - - if (up_lr_weight != None and len(up_lr_weight) > max_len) or (down_lr_weight != None and len(down_lr_weight) > max_len): - print("down_weight or up_weight is too long. Parameters after %d-th are ignored." % max_len) - print("down_weightもしくはup_weightが長すぎます。%d個目以降のパラメータは無視されます。" % max_len) - up_lr_weight = up_lr_weight[:max_len] - down_lr_weight = down_lr_weight[:max_len] - - if (up_lr_weight != None and len(up_lr_weight) < max_len) or (down_lr_weight != None and len(down_lr_weight) < max_len): - print("down_weight or up_weight is too short. Parameters after %d-th are filled with 1." 
% max_len) - print("down_weightもしくはup_weightが短すぎます。%d個目までの不足したパラメータは1で補われます。" % max_len) - - if down_lr_weight != None and len(down_lr_weight) < max_len: - down_lr_weight = down_lr_weight + [1.0] * (max_len - len(down_lr_weight)) - if up_lr_weight != None and len(up_lr_weight) < max_len: - up_lr_weight = up_lr_weight + [1.0] * (max_len - len(up_lr_weight)) - - if (up_lr_weight != None) or (mid_lr_weight != None) or (down_lr_weight != None): - print("apply block learning rate / 階層別学習率を適用します。") - if down_lr_weight != None: - down_lr_weight = [w if w > zero_threshold else 0 for w in down_lr_weight] - print("down_lr_weight (shallower -> deeper, 浅い層->深い層):", down_lr_weight) - else: - print("down_lr_weight: all 1.0, すべて1.0") - - if mid_lr_weight != None: - mid_lr_weight = mid_lr_weight if mid_lr_weight > zero_threshold else 0 - print("mid_lr_weight:", mid_lr_weight) - else: - print("mid_lr_weight: 1.0") - - if up_lr_weight != None: - up_lr_weight = [w if w > zero_threshold else 0 for w in up_lr_weight] - print("up_lr_weight (deeper -> shallower, 深い層->浅い層):", up_lr_weight) - else: - print("up_lr_weight: all 1.0, すべて1.0") - - return down_lr_weight, mid_lr_weight, up_lr_weight - - -# lr_weightが0のblockをblock_dimsから除外する、外部から呼び出す可能性を考慮しておく -def remove_block_dims_and_alphas( - block_dims, block_alphas, conv_block_dims, conv_block_alphas, down_lr_weight, mid_lr_weight, up_lr_weight -): - # set 0 to block dim without learning rate to remove the block - if down_lr_weight != None: - for i, lr in enumerate(down_lr_weight): - if lr == 0: - block_dims[i] = 0 - if conv_block_dims is not None: - conv_block_dims[i] = 0 - if mid_lr_weight != None: - if mid_lr_weight == 0: - block_dims[LoRANetwork.NUM_OF_BLOCKS] = 0 - if conv_block_dims is not None: - conv_block_dims[LoRANetwork.NUM_OF_BLOCKS] = 0 - if up_lr_weight != None: - for i, lr in enumerate(up_lr_weight): - if lr == 0: - block_dims[LoRANetwork.NUM_OF_BLOCKS + 1 + i] = 0 - if conv_block_dims is not None: - conv_block_dims[LoRANetwork.NUM_OF_BLOCKS + 1 + i] = 0 - - return block_dims, block_alphas, conv_block_dims, conv_block_alphas - - -# 外部から呼び出す可能性を考慮しておく -def get_block_index(lora_name: str) -> int: - block_idx = -1 # invalid lora name - - m = RE_UPDOWN.search(lora_name) - if m: - g = m.groups() - i = int(g[1]) - j = int(g[3]) - if g[2] == "resnets": - idx = 3 * i + j - elif g[2] == "attentions": - idx = 3 * i + j - elif g[2] == "upsamplers" or g[2] == "downsamplers": - idx = 3 * i + 2 - - if g[0] == "down": - block_idx = 1 + idx # 0に該当するLoRAは存在しない - elif g[0] == "up": - block_idx = LoRANetwork.NUM_OF_BLOCKS + 1 + idx - - elif "mid_block_" in lora_name: - block_idx = LoRANetwork.NUM_OF_BLOCKS # idx=12 - - return block_idx - - -# Create network from weights for inference, weights are not loaded here (because can be merged) -def create_network_from_weights(multiplier, file, vae, text_encoder, unet, weights_sd=None, for_inference=False, **kwargs): - if weights_sd is None: - if os.path.splitext(file)[1] == ".safetensors": - from safetensors.torch import load_file, safe_open - - weights_sd = load_file(file) - else: - weights_sd = torch.load(file, map_location="cpu") - - # get dim/alpha mapping - modules_dim = {} - modules_alpha = {} - for key, value in weights_sd.items(): - if "." 
not in key: - continue - - lora_name = key.split(".")[0] - if "alpha" in key: - modules_alpha[lora_name] = value - elif "lora_down" in key: - dim = value.size()[0] - modules_dim[lora_name] = dim - # print(lora_name, value.size(), dim) - - # support old LoRA without alpha - for key in modules_dim.keys(): - if key not in modules_alpha: - modules_alpha[key] = modules_dim[key] - - module_class = LoRAInfModule if for_inference else LoRAModule - - network = LoRANetwork( - text_encoder, unet, multiplier=multiplier, modules_dim=modules_dim, modules_alpha=modules_alpha, module_class=module_class - ) - - # block lr - down_lr_weight, mid_lr_weight, up_lr_weight = parse_block_lr_kwargs(kwargs) - if up_lr_weight is not None or mid_lr_weight is not None or down_lr_weight is not None: - network.set_block_lr_weight(up_lr_weight, mid_lr_weight, down_lr_weight) - - return network, weights_sd - - -class LoRANetwork(torch.nn.Module): - NUM_OF_BLOCKS = 12 # フルモデル相当でのup,downの層の数 - - UNET_TARGET_REPLACE_MODULE = ["Transformer2DModel"] - UNET_TARGET_REPLACE_MODULE_CONV2D_3X3 = ["ResnetBlock2D", "Downsample2D", "Upsample2D"] - TEXT_ENCODER_TARGET_REPLACE_MODULE = ["CLIPAttention", "CLIPMLP"] - LORA_PREFIX_UNET = "lora_unet" - LORA_PREFIX_TEXT_ENCODER = "lora_te" - - # SDXL: must starts with LORA_PREFIX_TEXT_ENCODER - LORA_PREFIX_TEXT_ENCODER1 = "lora_te1" - LORA_PREFIX_TEXT_ENCODER2 = "lora_te2" - - def __init__( - self, - text_encoder: Union[List[CLIPTextModel], CLIPTextModel], - unet, - multiplier: float = 1.0, - lora_dim: int = 4, - alpha: float = 1, - dropout: Optional[float] = None, - rank_dropout: Optional[float] = None, - module_dropout: Optional[float] = None, - conv_lora_dim: Optional[int] = None, - conv_alpha: Optional[float] = None, - block_dims: Optional[List[int]] = None, - block_alphas: Optional[List[float]] = None, - conv_block_dims: Optional[List[int]] = None, - conv_block_alphas: Optional[List[float]] = None, - modules_dim: Optional[Dict[str, int]] = None, - modules_alpha: Optional[Dict[str, int]] = None, - module_class: Type[object] = LoRAModule, - varbose: Optional[bool] = False, - ) -> None: - """ - LoRA network: すごく引数が多いが、パターンは以下の通り - 1. lora_dimとalphaを指定 - 2. lora_dim、alpha、conv_lora_dim、conv_alphaを指定 - 3. block_dimsとblock_alphasを指定 : Conv2d3x3には適用しない - 4. block_dims、block_alphas、conv_block_dims、conv_block_alphasを指定 : Conv2d3x3にも適用する - 5. modules_dimとmodules_alphaを指定 (推論用) - """ - super().__init__() - self.multiplier = multiplier - - self.lora_dim = lora_dim - self.alpha = alpha - self.conv_lora_dim = conv_lora_dim - self.conv_alpha = conv_alpha - self.dropout = dropout - self.rank_dropout = rank_dropout - self.module_dropout = module_dropout - - if modules_dim is not None: - print(f"create LoRA network from weights") - elif block_dims is not None: - print(f"create LoRA network from block_dims") - print(f"neuron dropout: p={self.dropout}, rank dropout: p={self.rank_dropout}, module dropout: p={self.module_dropout}") - print(f"block_dims: {block_dims}") - print(f"block_alphas: {block_alphas}") - if conv_block_dims is not None: - print(f"conv_block_dims: {conv_block_dims}") - print(f"conv_block_alphas: {conv_block_alphas}") - else: - print(f"create LoRA network. base dim (rank): {lora_dim}, alpha: {alpha}") - print(f"neuron dropout: p={self.dropout}, rank dropout: p={self.rank_dropout}, module dropout: p={self.module_dropout}") - if self.conv_lora_dim is not None: - print(f"apply LoRA to Conv2d with kernel size (3,3). 
dim (rank): {self.conv_lora_dim}, alpha: {self.conv_alpha}") - - # create module instances - def create_modules( - is_unet: bool, - text_encoder_idx: Optional[int], # None, 1, 2 - root_module: torch.nn.Module, - target_replace_modules: List[torch.nn.Module], - ) -> List[LoRAModule]: - prefix = ( - self.LORA_PREFIX_UNET - if is_unet - else ( - self.LORA_PREFIX_TEXT_ENCODER - if text_encoder_idx is None - else (self.LORA_PREFIX_TEXT_ENCODER1 if text_encoder_idx == 1 else self.LORA_PREFIX_TEXT_ENCODER2) - ) - ) - loras = [] - skipped = [] - for name, module in root_module.named_modules(): - if module.__class__.__name__ in target_replace_modules: - for child_name, child_module in module.named_modules(): - is_linear = child_module.__class__.__name__ == "Linear" - is_conv2d = child_module.__class__.__name__ == "Conv2d" - is_conv2d_1x1 = is_conv2d and child_module.kernel_size == (1, 1) - - if is_linear or is_conv2d: - lora_name = prefix + "." + name + "." + child_name - lora_name = lora_name.replace(".", "_") - - dim = None - alpha = None - - if modules_dim is not None: - # モジュール指定あり - if lora_name in modules_dim: - dim = modules_dim[lora_name] - alpha = modules_alpha[lora_name] - elif is_unet and block_dims is not None: - # U-Netでblock_dims指定あり - block_idx = get_block_index(lora_name) - if is_linear or is_conv2d_1x1: - dim = block_dims[block_idx] - alpha = block_alphas[block_idx] - elif conv_block_dims is not None: - dim = conv_block_dims[block_idx] - alpha = conv_block_alphas[block_idx] - else: - # 通常、すべて対象とする - if is_linear or is_conv2d_1x1: - dim = self.lora_dim - alpha = self.alpha - elif self.conv_lora_dim is not None: - dim = self.conv_lora_dim - alpha = self.conv_alpha - - if dim is None or dim == 0: - # skipした情報を出力 - if is_linear or is_conv2d_1x1 or (self.conv_lora_dim is not None or conv_block_dims is not None): - skipped.append(lora_name) - continue - - lora = module_class( - lora_name, - child_module, - self.multiplier, - dim, - alpha, - dropout=dropout, - rank_dropout=rank_dropout, - module_dropout=module_dropout, - ) - loras.append(lora) - return loras, skipped - - text_encoders = text_encoder if type(text_encoder) == list else [text_encoder] - print(text_encoders) - # create LoRA for text encoder - # 毎回すべてのモジュールを作るのは無駄なので要検討 - self.text_encoder_loras = [] - skipped_te = [] - for i, text_encoder in enumerate(text_encoders): - if len(text_encoders) > 1: - index = i + 1 - print(f"create LoRA for Text Encoder {index}:") - else: - index = None - print(f"create LoRA for Text Encoder:") - - print(text_encoder) - text_encoder_loras, skipped = create_modules(False, index, text_encoder, LoRANetwork.TEXT_ENCODER_TARGET_REPLACE_MODULE) - self.text_encoder_loras.extend(text_encoder_loras) - skipped_te += skipped - print(f"create LoRA for Text Encoder: {len(self.text_encoder_loras)} modules.") - - # extend U-Net target modules if conv2d 3x3 is enabled, or load from weights - target_modules = LoRANetwork.UNET_TARGET_REPLACE_MODULE - if modules_dim is not None or self.conv_lora_dim is not None or conv_block_dims is not None: - target_modules += LoRANetwork.UNET_TARGET_REPLACE_MODULE_CONV2D_3X3 - - self.unet_loras, skipped_un = create_modules(True, None, unet, target_modules) - print(f"create LoRA for U-Net: {len(self.unet_loras)} modules.") - - skipped = skipped_te + skipped_un - if varbose and len(skipped) > 0: - print( - f"because block_lr_weight is 0 or dim (rank) is 0, {len(skipped)} LoRA modules are skipped / block_lr_weightまたはdim (rank)が0の為、次の{len(skipped)}個のLoRAモジュールはスキップされます:" - ) - for 
name in skipped: - print(f"\t{name}") - - self.up_lr_weight: List[float] = None - self.down_lr_weight: List[float] = None - self.mid_lr_weight: float = None - self.block_lr = False - - # assertion - names = set() - for lora in self.text_encoder_loras + self.unet_loras: - assert lora.lora_name not in names, f"duplicated lora name: {lora.lora_name}" - names.add(lora.lora_name) - - def set_multiplier(self, multiplier): - self.multiplier = multiplier - for lora in self.text_encoder_loras + self.unet_loras: - lora.multiplier = self.multiplier - - def load_weights(self, file): - if os.path.splitext(file)[1] == ".safetensors": - from safetensors.torch import load_file - - weights_sd = load_file(file) - else: - weights_sd = torch.load(file, map_location="cpu") - info = self.load_state_dict(weights_sd, False) - return info - - def apply_to(self, text_encoder, unet, apply_text_encoder=True, apply_unet=True): - if apply_text_encoder: - print("enable LoRA for text encoder") - else: - self.text_encoder_loras = [] - - if apply_unet: - print("enable LoRA for U-Net") - else: - self.unet_loras = [] - - for lora in self.text_encoder_loras + self.unet_loras: - lora.apply_to() - self.add_module(lora.lora_name, lora) - - # マージできるかどうかを返す - def is_mergeable(self): - return True - - # TODO refactor to common function with apply_to - def merge_to(self, text_encoder, unet, weights_sd, dtype, device): - apply_text_encoder = apply_unet = False - for key in weights_sd.keys(): - if key.startswith(LoRANetwork.LORA_PREFIX_TEXT_ENCODER): - apply_text_encoder = True - elif key.startswith(LoRANetwork.LORA_PREFIX_UNET): - apply_unet = True - - if apply_text_encoder: - print("enable LoRA for text encoder") - else: - self.text_encoder_loras = [] - - if apply_unet: - print("enable LoRA for U-Net") - else: - self.unet_loras = [] - - for lora in self.text_encoder_loras + self.unet_loras: - sd_for_lora = {} - for key in weights_sd.keys(): - if key.startswith(lora.lora_name): - sd_for_lora[key[len(lora.lora_name) + 1 :]] = weights_sd[key] - lora.merge_to(sd_for_lora, dtype, device) - - print(f"weights are merged") - - # 層別学習率用に層ごとの学習率に対する倍率を定義する 引数の順番が逆だがとりあえず気にしない - def set_block_lr_weight( - self, - up_lr_weight: List[float] = None, - mid_lr_weight: float = None, - down_lr_weight: List[float] = None, - ): - self.block_lr = True - self.down_lr_weight = down_lr_weight - self.mid_lr_weight = mid_lr_weight - self.up_lr_weight = up_lr_weight - - def get_lr_weight(self, lora: LoRAModule) -> float: - lr_weight = 1.0 - block_idx = get_block_index(lora.lora_name) - if block_idx < 0: - return lr_weight - - if block_idx < LoRANetwork.NUM_OF_BLOCKS: - if self.down_lr_weight != None: - lr_weight = self.down_lr_weight[block_idx] - elif block_idx == LoRANetwork.NUM_OF_BLOCKS: - if self.mid_lr_weight != None: - lr_weight = self.mid_lr_weight - elif block_idx > LoRANetwork.NUM_OF_BLOCKS: - if self.up_lr_weight != None: - lr_weight = self.up_lr_weight[block_idx - LoRANetwork.NUM_OF_BLOCKS - 1] - - return lr_weight - - # 二つのText Encoderに別々の学習率を設定できるようにするといいかも - def prepare_optimizer_params(self, text_encoder_lr, unet_lr, default_lr): - self.requires_grad_(True) - all_params = [] - - def enumerate_params(loras): - params = [] - for lora in loras: - params.extend(lora.parameters()) - return params - - if self.text_encoder_loras: - param_data = {"params": enumerate_params(self.text_encoder_loras)} - if text_encoder_lr is not None: - param_data["lr"] = text_encoder_lr - all_params.append(param_data) - - if self.unet_loras: - if self.block_lr: - # 
学習率のグラフをblockごとにしたいので、blockごとにloraを分類 - block_idx_to_lora = {} - for lora in self.unet_loras: - idx = get_block_index(lora.lora_name) - if idx not in block_idx_to_lora: - block_idx_to_lora[idx] = [] - block_idx_to_lora[idx].append(lora) - - # blockごとにパラメータを設定する - for idx, block_loras in block_idx_to_lora.items(): - param_data = {"params": enumerate_params(block_loras)} - - if unet_lr is not None: - param_data["lr"] = unet_lr * self.get_lr_weight(block_loras[0]) - elif default_lr is not None: - param_data["lr"] = default_lr * self.get_lr_weight(block_loras[0]) - if ("lr" in param_data) and (param_data["lr"] == 0): - continue - all_params.append(param_data) - - else: - param_data = {"params": enumerate_params(self.unet_loras)} - if unet_lr is not None: - param_data["lr"] = unet_lr - all_params.append(param_data) - - return all_params - - def enable_gradient_checkpointing(self): - # not supported - pass - - def prepare_grad_etc(self, text_encoder, unet): - self.requires_grad_(True) - - def on_epoch_start(self, text_encoder, unet): - self.train() - - def get_trainable_params(self): - return self.parameters() - - def save_weights(self, file, dtype, metadata): - if metadata is not None and len(metadata) == 0: - metadata = None - - state_dict = self.state_dict() - - if dtype is not None: - for key in list(state_dict.keys()): - v = state_dict[key] - v = v.detach().clone().to("cpu").to(dtype) - state_dict[key] = v - - if os.path.splitext(file)[1] == ".safetensors": - from safetensors.torch import save_file - from library import train_util - - # Precalculate model hashes to save time on indexing - if metadata is None: - metadata = {} - model_hash, legacy_hash = train_util.precalculate_safetensors_hashes(state_dict, metadata) - metadata["sshs_model_hash"] = model_hash - metadata["sshs_legacy_hash"] = legacy_hash - - save_file(state_dict, file, metadata) - else: - torch.save(state_dict, file) - - # mask is a tensor with values from 0 to 1 - def set_region(self, sub_prompt_index, is_last_network, mask): - if mask.max() == 0: - mask = torch.ones_like(mask) - - self.mask = mask - self.sub_prompt_index = sub_prompt_index - self.is_last_network = is_last_network - - for lora in self.text_encoder_loras + self.unet_loras: - lora.set_network(self) - - def set_current_generation(self, batch_size, num_sub_prompts, width, height, shared): - self.batch_size = batch_size - self.num_sub_prompts = num_sub_prompts - self.current_size = (height, width) - self.shared = shared - - # create masks - mask = self.mask - mask_dic = {} - mask = mask.unsqueeze(0).unsqueeze(1) # b(1),c(1),h,w - ref_weight = self.text_encoder_loras[0].lora_down.weight if self.text_encoder_loras else self.unet_loras[0].lora_down.weight - dtype = ref_weight.dtype - device = ref_weight.device - - def resize_add(mh, mw): - # print(mh, mw, mh * mw) - m = torch.nn.functional.interpolate(mask, (mh, mw), mode="bilinear") # doesn't work in bf16 - m = m.to(device, dtype=dtype) - mask_dic[mh * mw] = m - - h = height // 8 - w = width // 8 - for _ in range(4): - resize_add(h, w) - if h % 2 == 1 or w % 2 == 1: # add extra shape if h/w is not divisible by 2 - resize_add(h + h % 2, w + w % 2) - h = (h + 1) // 2 - w = (w + 1) // 2 - - self.mask_dic = mask_dic - - def backup_weights(self): - # 重みのバックアップを行う - loras: List[LoRAInfModule] = self.text_encoder_loras + self.unet_loras - for lora in loras: - org_module = lora.org_module_ref[0] - if not hasattr(org_module, "_lora_org_weight"): - sd = org_module.state_dict() - org_module._lora_org_weight = 
sd["weight"].detach().clone() - org_module._lora_restored = True - - def restore_weights(self): - # 重みのリストアを行う - loras: List[LoRAInfModule] = self.text_encoder_loras + self.unet_loras - for lora in loras: - org_module = lora.org_module_ref[0] - if not org_module._lora_restored: - sd = org_module.state_dict() - sd["weight"] = org_module._lora_org_weight - org_module.load_state_dict(sd) - org_module._lora_restored = True - - def pre_calculation(self): - # 事前計算を行う - loras: List[LoRAInfModule] = self.text_encoder_loras + self.unet_loras - for lora in loras: - org_module = lora.org_module_ref[0] - sd = org_module.state_dict() - - org_weight = sd["weight"] - lora_weight = lora.get_weight().to(org_weight.device, dtype=org_weight.dtype) - sd["weight"] = org_weight + lora_weight - assert sd["weight"].shape == org_weight.shape - org_module.load_state_dict(sd) - - org_module._lora_restored = False - lora.enabled = False - - def apply_max_norm_regularization(self, max_norm_value, device): - downkeys = [] - upkeys = [] - alphakeys = [] - norms = [] - keys_scaled = 0 - - state_dict = self.state_dict() - for key in state_dict.keys(): - if "lora_down" in key and "weight" in key: - downkeys.append(key) - upkeys.append(key.replace("lora_down", "lora_up")) - alphakeys.append(key.replace("lora_down.weight", "alpha")) - - for i in range(len(downkeys)): - down = state_dict[downkeys[i]].to(device) - up = state_dict[upkeys[i]].to(device) - alpha = state_dict[alphakeys[i]].to(device) - dim = down.shape[0] - scale = alpha / dim - - if up.shape[2:] == (1, 1) and down.shape[2:] == (1, 1): - updown = (up.squeeze(2).squeeze(2) @ down.squeeze(2).squeeze(2)).unsqueeze(2).unsqueeze(3) - elif up.shape[2:] == (3, 3) or down.shape[2:] == (3, 3): - updown = torch.nn.functional.conv2d(down.permute(1, 0, 2, 3), up).permute(1, 0, 2, 3) - else: - updown = up @ down - - updown *= scale - - norm = updown.norm().clamp(min=max_norm_value / 2) - desired = torch.clamp(norm, max=max_norm_value) - ratio = desired.cpu() / norm.cpu() - sqrt_ratio = ratio**0.5 - if ratio != 1: - keys_scaled += 1 - state_dict[upkeys[i]] *= sqrt_ratio - state_dict[downkeys[i]] *= sqrt_ratio - scalednorm = updown.norm() * ratio - norms.append(scalednorm.item()) - - return keys_scaled, sum(norms) / len(norms), max(norms) \ No newline at end of file diff --git a/spaces/idosal/oai-proxy/src/types/custom.d.ts b/spaces/idosal/oai-proxy/src/types/custom.d.ts deleted file mode 100644 index c29288f02b084e67f1179853e776397ef2eb518e..0000000000000000000000000000000000000000 --- a/spaces/idosal/oai-proxy/src/types/custom.d.ts +++ /dev/null @@ -1,10 +0,0 @@ -import { Express } from "express-serve-static-core"; -import { Key } from "../keys"; - -declare global { - namespace Express { - interface Request { - key?: Key; - } - } -} diff --git a/spaces/inflaton/learn-ai/app_modules/instruct_pipeline.py b/spaces/inflaton/learn-ai/app_modules/instruct_pipeline.py deleted file mode 100644 index 4fa2a560afc325c9cd529ae0cd6ff60655947792..0000000000000000000000000000000000000000 --- a/spaces/inflaton/learn-ai/app_modules/instruct_pipeline.py +++ /dev/null @@ -1,250 +0,0 @@ -import logging -import re -from typing import List - -import numpy as np -from transformers import Pipeline, PreTrainedTokenizer -from transformers.utils import is_tf_available - -if is_tf_available(): - import tensorflow as tf - -logger = logging.getLogger(__name__) - -INSTRUCTION_KEY = "### Instruction:" -RESPONSE_KEY = "### Response:" -END_KEY = "### End" -INTRO_BLURB = "Below is an instruction that describes 
a task. Write a response that appropriately completes the request." - -# This is the prompt that is used for generating responses using an already trained model. It ends with the response -# key, where the job of the model is to provide the completion that follows it (i.e. the response itself). -PROMPT_FOR_GENERATION_FORMAT = """{intro} - -{instruction_key} -{instruction} - -{response_key} -""".format( - intro=INTRO_BLURB, - instruction_key=INSTRUCTION_KEY, - instruction="{instruction}", - response_key=RESPONSE_KEY, -) - - -def get_special_token_id(tokenizer: PreTrainedTokenizer, key: str) -> int: - """Gets the token ID for a given string that has been added to the tokenizer as a special token. - - When training, we configure the tokenizer so that the sequences like "### Instruction:" and "### End" are - treated specially and converted to a single, new token. This retrieves the token ID each of these keys map to. - - Args: - tokenizer (PreTrainedTokenizer): the tokenizer - key (str): the key to convert to a single token - - Raises: - RuntimeError: if more than one ID was generated - - Returns: - int: the token ID for the given key - """ - token_ids = tokenizer.encode(key) - if len(token_ids) > 1: - raise ValueError( - f"Expected only a single token for '{key}' but found {token_ids}" - ) - return token_ids[0] - - -class InstructionTextGenerationPipeline(Pipeline): - def __init__( - self, - *args, - do_sample: bool = True, - max_new_tokens: int = 256, - top_p: float = 0.92, - top_k: int = 0, - **kwargs, - ): - """Initialize the pipeline - - Args: - do_sample (bool, optional): Whether or not to use sampling. Defaults to True. - max_new_tokens (int, optional): Max new tokens after the prompt to generate. Defaults to 128. - top_p (float, optional): If set to float < 1, only the smallest set of most probable tokens with - probabilities that add up to top_p or higher are kept for generation. Defaults to 0.92. - top_k (int, optional): The number of highest probability vocabulary tokens to keep for top-k-filtering. - Defaults to 0. - """ - super().__init__( - *args, - do_sample=do_sample, - max_new_tokens=max_new_tokens, - top_p=top_p, - top_k=top_k, - **kwargs, - ) - - def _sanitize_parameters(self, return_full_text: bool = None, **generate_kwargs): - preprocess_params = {} - - # newer versions of the tokenizer configure the response key as a special token. newer versions still may - # append a newline to yield a single token. find whatever token is configured for the response key. 
- tokenizer_response_key = next( - ( - token - for token in self.tokenizer.additional_special_tokens - if token.startswith(RESPONSE_KEY) - ), - None, - ) - - response_key_token_id = None - end_key_token_id = None - if tokenizer_response_key: - try: - response_key_token_id = get_special_token_id( - self.tokenizer, tokenizer_response_key - ) - end_key_token_id = get_special_token_id(self.tokenizer, END_KEY) - - # Ensure generation stops once it generates "### End" - generate_kwargs["eos_token_id"] = end_key_token_id - except ValueError: - pass - - forward_params = generate_kwargs - postprocess_params = { - "response_key_token_id": response_key_token_id, - "end_key_token_id": end_key_token_id, - } - - if return_full_text is not None: - postprocess_params["return_full_text"] = return_full_text - - return preprocess_params, forward_params, postprocess_params - - def preprocess(self, instruction_text, **generate_kwargs): - prompt_text = PROMPT_FOR_GENERATION_FORMAT.format(instruction=instruction_text) - inputs = self.tokenizer( - prompt_text, - return_tensors="pt", - ) - inputs["prompt_text"] = prompt_text - inputs["instruction_text"] = instruction_text - return inputs - - def _forward(self, model_inputs, **generate_kwargs): - input_ids = model_inputs["input_ids"] - attention_mask = model_inputs.get("attention_mask", None) - - if input_ids.shape[1] == 0: - input_ids = None - attention_mask = None - in_b = 1 - else: - in_b = input_ids.shape[0] - - generated_sequence = self.model.generate( - input_ids=input_ids.to(self.model.device), - attention_mask=attention_mask.to(self.model.device) - if attention_mask is not None - else None, - pad_token_id=self.tokenizer.pad_token_id, - **generate_kwargs, - ) - - out_b = generated_sequence.shape[0] - if self.framework == "pt": - generated_sequence = generated_sequence.reshape( - in_b, out_b // in_b, *generated_sequence.shape[1:] - ) - elif self.framework == "tf": - generated_sequence = tf.reshape( - generated_sequence, (in_b, out_b // in_b, *generated_sequence.shape[1:]) - ) - - instruction_text = model_inputs.pop("instruction_text") - return { - "generated_sequence": generated_sequence, - "input_ids": input_ids, - "instruction_text": instruction_text, - } - - def postprocess( - self, - model_outputs, - response_key_token_id, - end_key_token_id, - return_full_text: bool = False, - ): - generated_sequence = model_outputs["generated_sequence"][0] - instruction_text = model_outputs["instruction_text"] - - generated_sequence: List[List[int]] = generated_sequence.numpy().tolist() - records = [] - for sequence in generated_sequence: - # The response will be set to this variable if we can identify it. - decoded = None - - # If we have token IDs for the response and end, then we can find the tokens and only decode between them. - if response_key_token_id and end_key_token_id: - # Find where "### Response:" is first found in the generated tokens. Considering this is part of the - # prompt, we should definitely find it. We will return the tokens found after this token. - try: - response_pos = sequence.index(response_key_token_id) - except ValueError: - logger.warn( - f"Could not find response key {response_key_token_id} in: {sequence}" - ) - response_pos = None - - if response_pos: - # Next find where "### End" is located. The model has been trained to end its responses with this - # sequence (or actually, the token ID it maps to, since it is a special token). We may not find - # this token, as the response could be truncated. 
If we don't find it then just return everything - # to the end. Note that even though we set eos_token_id, we still see the this token at the end. - try: - end_pos = sequence.index(end_key_token_id) - except ValueError: - end_pos = None - - decoded = self.tokenizer.decode( - sequence[response_pos + 1 : end_pos] - ).strip() - - if not decoded: - # Otherwise we'll decode everything and use a regex to find the response and end. - - fully_decoded = self.tokenizer.decode(sequence) - - # The response appears after "### Response:". The model has been trained to append "### End" at the - # end. - m = re.search( - r"#+\s*Response:\s*(.+?)#+\s*End", fully_decoded, flags=re.DOTALL - ) - - if m: - decoded = m.group(1).strip() - else: - # The model might not generate the "### End" sequence before reaching the max tokens. In this case, - # return everything after "### Response:". - m = re.search( - r"#+\s*Response:\s*(.+)", fully_decoded, flags=re.DOTALL - ) - if m: - decoded = m.group(1).strip() - else: - logger.warn(f"Failed to find response in:\n{fully_decoded}") - - # If the full text is requested, then append the decoded text to the original instruction. - # This technically isn't the full text, as we format the instruction in the prompt the model has been - # trained on, but to the client it will appear to be the full text. - if return_full_text: - decoded = f"{instruction_text}\n{decoded}" - - rec = {"generated_text": decoded} - - records.append(rec) - - return records diff --git a/spaces/innovatorved/whisper.api/app/core/models/User.py b/spaces/innovatorved/whisper.api/app/core/models/User.py deleted file mode 100644 index 421e850e16459ef4eb6b729c09ad5f10fd4256a7..0000000000000000000000000000000000000000 --- a/spaces/innovatorved/whisper.api/app/core/models/User.py +++ /dev/null @@ -1,145 +0,0 @@ -import uuid - -from app.core.config import settings - -from sqlalchemy.dialects.postgresql import UUID -from sqlalchemy import Column, String, Boolean, DateTime -from sqlalchemy.orm import relationship -from sqlalchemy.sql import func -from sqlalchemy import or_ -from app.core.database import Base - -from app.core.security import get_password_hash, verify_password -from app.utils.utils import is_valid_email, is_valid_password - -from app.core.models import AuthTokenController -from fastapi import HTTPException, status - - -class UserInDB(Base): - __tablename__ = "users" - - id = Column(UUID(as_uuid=True), primary_key=True, default=uuid.uuid4) - username = Column(String, unique=True, index=True) - email = Column(String, unique=True, index=True) - hashed_password = Column(String) - is_active = Column(Boolean, default=True) - transcribes = relationship("TranscibeInDB", back_populates="user") - auth_tokens = relationship("AuthToken", back_populates="user") - created_at = Column(DateTime(timezone=True), server_default=func.now()) - - def __init__(self, username: str, email: str, hashed_password: str): - self.username = username - self.email = email - self.hashed_password = hashed_password - - def data(self): - return { - "id": self.id, - "username": self.username, - "email": self.email, - "created_at": self.created_at, - } - - -class UserController: - UserInDB = UserInDB - - def __init__(self, database): - self.db = database - - def create(self, username: str, email: str, password: str, init_token: bool = True): - if not is_valid_email(email): - raise HTTPException( - status_code=status.HTTP_422_UNPROCESSABLE_ENTITY, detail="Invalid Email" - ) - if not is_valid_password(password): - raise HTTPException( - 
status_code=status.HTTP_422_UNPROCESSABLE_ENTITY, - detail="Invalid Password", - ) - - is_user_exists: Boolean = self.CheckUserIsExistsByEmailAndUsername( - email, username - ) - if is_user_exists: - raise HTTPException( - status_code=status.HTTP_409_CONFLICT, - detail="Email or Username Already Registered", - ) - - self.username = username - self.email = email - self.hashed_password = get_password_hash(password) - - self.db_user = UserInDB( - username=self.username, - email=self.email, - hashed_password=self.hashed_password, - ) - self.db.add(self.db_user) - self.db.commit() - self.db.refresh(self.db_user) - self.user = self.db_user.data() - - if init_token == False: - return - AuthTokenController(self.db).create(self.db_user.id) - - def read_token(self, email: str, password: str): - self.read_by_email(email) - if not verify_password(password, self.db_user.hashed_password): - raise HTTPException(status_code=400, detail="Incorrect password") - TOKEN = AuthTokenController(self.db) - TOKEN.get_token_from_user_id(self.db_user.id) - return TOKEN.get_token() - - def CheckUserIsExistsByEmailAndUsername(self, email: str, username: str): - db_user = ( - self.db.query(UserInDB) - .filter(or_(UserInDB.email == email, UserInDB.username == username)) - .first() - ) - if db_user: - return True - return False - - def read_by_email(self, email: str): - self.db_user = self.db.query(UserInDB).filter(UserInDB.email == email).first() - if not self.db_user: - raise HTTPException( - status_code=status.HTTP_404_NOT_FOUND, detail="User not found" - ) - self.user = self.db_user.data() - - def read(self, user_id: uuid.UUID): - self.db_user = self.db.query(UserInDB).filter(UserInDB.id == user_id).first() - if not self.db_user: - raise HTTPException( - status_code=status.HTTP_404_NOT_FOUND, detail="User not found" - ) - self.user = self.db_user.data() - - def update_password( - self, user_id: uuid.UUID, current_password: str, new_password: str - ): - self.read(user_id) - if verify_password(current_password, self.db_user.hashed_password): - self.db_user.hashed_password = get_password_hash(new_password) - self.db.commit() - self.db.refresh(self.db_user) - self.user = self.db_user.data() - else: - raise HTTPException(status_code=400, detail="Incorrect password") - - def delete(self, user_id: uuid.UUID): - self.read(user_id) - self.db.delete(self.db_user) - self.db.commit() - return user_id - - def details(self): - return self.db_user - - def detailsInJSON(self): - return self.user diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/!LINK! Xforce Keygen AutoCAD Map 3D 2014.md b/spaces/inplisQlawa/anything-midjourney-v4-1/!LINK! Xforce Keygen AutoCAD Map 3D 2014.md deleted file mode 100644 index 7f0642c06fb2bc48f063b99e95286e6dbee2f34e..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/!LINK! Xforce Keygen AutoCAD Map 3D 2014.md +++ /dev/null @@ -1,99 +0,0 @@ - -

    Xforce Keygen AutoCAD Map 3D 2014: A Complete Guide

    -

    If you are looking for a way to activate AutoCAD Map 3D 2014, one of the most popular and powerful software for mapping and geographic information systems, you may have come across Xforce Keygen. Xforce Keygen is a universal key generator that can generate activation codes for any Autodesk product. In this article, we will show you how to use Xforce Keygen to activate AutoCAD Map 3D 2014 and enjoy its full features.

    -

    What is AutoCAD Map 3D 2014?

    -

    AutoCAD Map 3D 2014 is a software that allows you to create, edit, and analyze geospatial data. You can use it to access and integrate data from various sources, such as CAD drawings, GIS databases, web services, and satellite imagery. You can also use it to perform spatial analysis, such as buffering, overlaying, and querying. With AutoCAD Map 3D 2014, you can create professional-quality maps and presentations that support decision making and planning.

    -

    xforce keygen AutoCAD Map 3D 2014


    Download Zip ►►►►► https://urlin.us/2uEyzu



    -

    What is Xforce Keygen?

    -

    Xforce Keygen is a tool that can generate activation codes for any Autodesk product, including AutoCAD Map 3D 2014. It is a crack that bypasses the software's security system and allows you to use it without paying for a license. Xforce Keygen is created by a group of hackers who are experts in cracking Autodesk products. However, using Xforce Keygen is illegal and risky, as it may expose your computer to viruses, malware, or legal issues.

    -

    How to Use Xforce Keygen to Activate AutoCAD Map 3D 2014?

    -

    If you still want to use Xforce Keygen to activate AutoCAD Map 3D 2014, you need to follow these steps:

    -
      -
    1. Download and install AutoCAD Map 3D 2014 from the official website or any other source.
    2. -
    3. Download Xforce Keygen for AutoCAD Map 3D 2014 from the internet. Make sure you download the correct version for your operating system (32-bit or 64-bit).
    4. -
    5. Disable your internet connection and antivirus software. This is to prevent them from interfering with the activation process.
    6. -
    7. Run Xforce Keygen as administrator. You will see a window with two tabs: "Mem Patch" and "Generate".
    8. -
    9. Click on "Mem Patch". You should see a message saying "Successfully patched".
    10. -
    11. Run AutoCAD Map 3D 2014 and click on "Activate". You will be asked to enter a serial number and a product key.
    12. -
    13. Enter any serial number and product key that match the format shown on the screen. For example, you can enter "666-69696969" as the serial number and "129F1" as the product key.
    14. -
    15. Click on "Next". You will see an error message saying that your activation code is invalid.
    16. -
    17. Click on "Close" and then click on "Activate" again.
    18. -
    19. Select "I have an activation code from Autodesk".
    20. -
    21. Go back to Xforce Keygen and click on "Generate". You will see a long string of characters in the "Activation" field.
    22. -
    23. Copy the activation code from Xforce Keygen and paste it into the activation screen of AutoCAD Map 3D 2014.
    24. -
    25. Click on "Next". You should see a message saying that your product has been activated successfully.
    26. -
    27. Restart AutoCAD Map 3D 2014 and enjoy its full features.
    28. -
    -

    Conclusion

    -

    In this article, we have shown you how to use Xforce Keygen to activate AutoCAD Map 3D 2014. However, we do not recommend using this method, as it is illegal and risky. Instead, we suggest you buy a legitimate license from Autodesk or use an alternative software that is free or cheaper. This way, you can avoid any legal issues, security threats, or performance problems that may arise from using a cracked software.

    -

    What are the Benefits of Using Xforce Keygen to Activate AutoCAD Map 3D 2014?

    -

    Using Xforce Keygen to activate AutoCAD Map 3D 2014 can have some benefits for some users. For example:

    -
      -
    • You can save money by not paying for a license fee.
    • -
    • You can access all the features and functions of AutoCAD Map 3D 2014 without any limitations.
    • -
    • You can use AutoCAD Map 3D 2014 offline without needing an internet connection.
    • -
    • You can update AutoCAD Map 3D 2014 without worrying about losing your activation.
    • -
    -

    What are the Risks of Using Xforce Keygen to Activate AutoCAD Map 3D 2014?

    -

    However, using Xforce Keygen to activate AutoCAD Map 3D 2014 also comes with some risks and disadvantages. For instance:

    -
      -
    • You are violating the terms and conditions of Autodesk and may face legal consequences.
    • -
    • You are exposing your computer to potential viruses, malware, or spyware that may harm your system or data.
    • -
    • You are compromising the quality and performance of AutoCAD Map 3D 2014 as it may not work properly or crash frequently.
    • -
    • You are missing out on the official support and updates from Autodesk that may fix bugs or improve features.
    • -
    -

    What are the Alternatives to Using Xforce Keygen to Activate AutoCAD Map 3D 2014?

    -

    If you want to use AutoCAD Map 3D 2014 legally and safely, you have some alternatives to using Xforce Keygen. For example:

    -
      -
    • You can buy a legitimate license from Autodesk or an authorized reseller. This way, you can enjoy the full benefits of AutoCAD Map 3D 2014 without any risks or problems.
    • -
    • You can use a free trial version of AutoCAD Map 3D 2014 for a limited time. This way, you can test the software and see if it meets your needs before buying it.
    • -
    • You can use an alternative software that is free or cheaper than AutoCAD Map 3D 2014. This way, you can still create, edit, and analyze geospatial data without breaking the law or spending too much money.
    • -
    -

    Conclusion

    -

    In this article, we have shown you how to use Xforce Keygen to activate AutoCAD Map 3D 2014 and what are the benefits and risks of doing so. We have also suggested some alternatives to using Xforce Keygen that are legal and safe. We hope this article has been helpful and informative for you. However, we do not endorse or recommend using Xforce Keygen or any other crack software, as it is illegal and risky. Instead, we advise you to respect the intellectual property rights of Autodesk and use their products in a lawful and ethical manner.

    -

    How to Download and Install Xforce Keygen for AutoCAD Map 3D 2014?

    -

    If you have decided to use Xforce Keygen to activate AutoCAD Map 3D 2014, you need to download and install it first. Here are the steps to do so:

    -

    -
      -
    1. Go to a reliable website that offers Xforce Keygen for AutoCAD Map 3D 2014. You can search for it on Google or use the link provided below.
    2. -
    3. Choose the version of Xforce Keygen that matches your operating system (32-bit or 64-bit) and click on the download button.
    4. -
    5. Save the file to your computer and extract it using a file archiver program such as WinRAR or 7-Zip.
    6. -
    7. Open the extracted folder and run Xforce Keygen as administrator. You will see a window with two tabs: "Mem Patch" and "Generate".
    8. -
    9. Follow the instructions in the previous section to use Xforce Keygen to activate AutoCAD Map 3D 2014.
    10. -
    -

    Note: The download link for Xforce Keygen for AutoCAD Map 3D 2014 is provided for educational purposes only. We do not endorse or recommend using it, as it is illegal and risky. Use it at your own risk.

    -

    Download link: https://drive.google.com/file/d/0B9OEdNQ-01uBMVdvSkRIUkpaSEU

    -

    How to Uninstall Xforce Keygen from Your Computer?

    -

    If you want to uninstall Xforce Keygen from your computer, you need to follow these steps:

    -
      -
    1. Delete the Xforce Keygen file and folder from your computer.
    2. -
    3. Run a full scan of your computer with an antivirus program to remove any potential viruses, malware, or spyware that may have been installed by Xforce Keygen.
    4. -
    5. Clean your registry with a registry cleaner program to remove any traces of Xforce Keygen from your system.
    6. -
    7. Restart your computer and check if Xforce Keygen has been completely removed.
    8. -
    -

    FAQs about Xforce Keygen for AutoCAD Map 3D 2014

    -

    Here are some frequently asked questions and answers about Xforce Keygen for AutoCAD Map 3D 2014:

    -

    Q: Is Xforce Keygen safe to use?

    -

    A: No, Xforce Keygen is not safe to use. It is a crack that can harm your computer or data. It can also expose you to legal issues or penalties.

    -

    Q: Is Xforce Keygen legal to use?

    -

    A: No, Xforce Keygen is not legal to use. It is a violation of the terms and conditions of Autodesk and the intellectual property rights of the software developers. It can also result in fines or lawsuits.

    -

    Q: Is Xforce Keygen effective to use?

    -

    A: No, Xforce Keygen is not effective to use. It can compromise the quality and performance of AutoCAD Map 3D 2014 as it may not work properly or crash frequently. It can also prevent you from getting official support and updates from Autodesk that may fix bugs or improve features.

    -

    Q: Is there any alternative to using Xforce Keygen?

    -

    A: Yes, there are some alternatives to using Xforce Keygen. You can buy a legitimate license from Autodesk or an authorized reseller, use a free trial version of AutoCAD Map 3D 2014, or use an alternative software that is free or cheaper than AutoCAD Map 3D 2014.

    -

    How to Troubleshoot Xforce Keygen for AutoCAD Map 3D 2014?

    -

    Sometimes, you may encounter some problems or errors when using Xforce Keygen to activate AutoCAD Map 3D 2014. Here are some common issues and solutions:

    -

    Q: Xforce Keygen does not run or open.

    -

    A: This may be caused by your antivirus software blocking or deleting Xforce Keygen. You need to disable your antivirus software and restore Xforce Keygen from the quarantine or trash folder. You can also add Xforce Keygen to the exception list of your antivirus software.

    -

    Q: Xforce Keygen says "Could not get debug privilege! Are you admin?"

    -

    A: This means that you need to run Xforce Keygen as administrator. You can right-click on Xforce Keygen and select "Run as administrator". You can also change the compatibility settings of Xforce Keygen to always run as administrator.

    -

    Q: Xforce Keygen says "Make sure you can write to current directory."

    -

    A: This means that you need to have write permission to the folder where Xforce Keygen is located. You can change the security settings of the folder to allow full control for your user account. You can also move Xforce Keygen to another folder where you have write permission.

    -

    Q: Xforce Keygen says "You need to apply patch when license screen appears."

    -

    A: This means that you need to click on "Mem Patch" before generating the activation code. You should see a message saying "Successfully patched" after clicking on "Mem Patch". If you do not see this message, you may need to reinstall AutoCAD Map 3D 2014 and try again.

    -

    Q: Xforce Keygen says "Invalid request code."

    -

    A: This means that you have entered the wrong request code from the activation screen of AutoCAD Map 3D 2014. You need to copy and paste the request code exactly as it appears on the screen. You can also use the clipboard button on Xforce Keygen to copy and paste the request code automatically.

    -

    Conclusion

    -

    In this article, we have shown you how to use Xforce Keygen to activate AutoCAD Map 3D 2014 and what are the benefits, risks, and alternatives of doing so. We have also provided some troubleshooting tips for common problems or errors that may occur when using Xforce Keygen. We hope this article has been helpful and informative for you. However, we do not endorse or recommend using Xforce Keygen or any other crack software, as it is illegal and risky. Instead, we advise you to respect the intellectual property rights of Autodesk and use their products in a lawful and ethical manner.

    3cee63e6c2
    -
    -
    \ No newline at end of file diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/HD Online Player (Un Paseo Para Recordar 1080p 55) __FULL__.md b/spaces/inplisQlawa/anything-midjourney-v4-1/HD Online Player (Un Paseo Para Recordar 1080p 55) __FULL__.md deleted file mode 100644 index 8bc8d6271aa9ddde13bf4cff9f6ab579bf727272..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/HD Online Player (Un Paseo Para Recordar 1080p 55) __FULL__.md +++ /dev/null @@ -1,6 +0,0 @@ -

    HD Online Player (Un Paseo Para Recordar 1080p 55)


    Download Ziphttps://urlin.us/2uEyKi



    - -1882266703. HD Online Player (Hindi Medium Full Movie 1080p Downlo) ... korean grammar practice for foreigners pdf 14 · un paseo para recordar 1080p 55 4d29de3e1b
    -
    -
    -

    diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Hdclone 6 5 Enterprise Edition Portable Boot Image.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Hdclone 6 5 Enterprise Edition Portable Boot Image.md deleted file mode 100644 index 9d42e19d27da366da1c75e7e2eb07f1833de7fad..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Hdclone 6 5 Enterprise Edition Portable Boot Image.md +++ /dev/null @@ -1,6 +0,0 @@ -

    hdclone 6 5 enterprise edition portable boot image


    Download >>>>> https://urlin.us/2uExOm



    -
    -Download HDClone 11 2 9 Enterprise Edition Portable + Boot Image torrent for free, Downloads via Magnet Link or FREE Movies online to Watch in ... 1fdad05405
    -
    -
    -

    diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Microcat Dongle 17.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Microcat Dongle 17.md deleted file mode 100644 index fcac065ff84889f238adc04aee1b47489ec121f8..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Microcat Dongle 17.md +++ /dev/null @@ -1,10 +0,0 @@ - -

    the microcat dongle 17 has no software. if you are lucky, the dongle will appear when you connect a display. if not, you will need to install the drivers yourself. if you plug the microcat dongle 17 into the same usb slot as your microcat dongle 16, you will first need to unplug the microcat dongle 16's microcat dongle 17 and then plug the microcat dongle 17 into the microcat dongle 16.

    -

    Microcat Dongle 17


    Downloadhttps://urlin.us/2uEvUz



    -

    this offers a solution for mobile phone users who don't want to pay an extra fee just to use their mobile phone, but also want to use the communication possibilities of the microcat dongle 17 (and 16, 15, 14, 9 and 8).

    -

    microcat mobile phone has the standard gsm specifications, with the exception of the operating frequency of 900 mhz. this allows it to work in all countries, no matter if they use gsm or wcdma (umts).

    -

    huang yongmei et al proposed a 16-bit microcontroller with 20-pin sockets in their experimental microcontroller . the microcontroller is a side-banded dongle , i.e., the external data interface is synchronous. the microcontroller is designed with a 17-bit microprocessor and 16-pin sockets. the microcontroller is small enough for portable devices, and thus can be used for battery-operated embedded systems, as well as iot.

    -

    -

    as a part of their website redesign, ross fenton designed myfitnesspal food diary so that it is compatible with nutritionally complete food recipes. it is a tweener between a recipe scheduler and a notebook. myfitnesspal food diary provides a server-side full-featured gui, and it can be executed on both iphone and android smartphones. it also provides an option to import local recipes from a local recipe repository .

    899543212b
    -
    -
    \ No newline at end of file diff --git a/spaces/inreVtussa/clothingai/Examples/Borderlands 2 Moxxi Nude Mod __FULL__.md b/spaces/inreVtussa/clothingai/Examples/Borderlands 2 Moxxi Nude Mod __FULL__.md deleted file mode 100644 index afbbe97c8abed734c5cd90c079bcad3495e782f7..0000000000000000000000000000000000000000 --- a/spaces/inreVtussa/clothingai/Examples/Borderlands 2 Moxxi Nude Mod __FULL__.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Borderlands 2 Moxxi Nude Mod


    Download File >>> https://tiurll.com/2uCk5A



    - -Borderlands 2 moxxi nude mod ... borderlands lilith and moxxi porn animated borderlands lilith borderlands ... blackjrxiii borderlands moxxi gaige futa fuck 3 ... 1fdad05405
    -
    -
    -

    diff --git a/spaces/inreVtussa/clothingai/Examples/Cabri Ii Plus 1.4.5 Keygen [BEST].md b/spaces/inreVtussa/clothingai/Examples/Cabri Ii Plus 1.4.5 Keygen [BEST].md deleted file mode 100644 index 821fd7b6a34307eea830052960a5699458179bb8..0000000000000000000000000000000000000000 --- a/spaces/inreVtussa/clothingai/Examples/Cabri Ii Plus 1.4.5 Keygen [BEST].md +++ /dev/null @@ -1,12 +0,0 @@ - -

    Cabri II Plus 1.4.5 Keygen: A Tool for Cracking Geometry Software

    -

    Cabri II Plus is a software that allows users to create geometric and numerical constructions, such as transformations, measurements, calculus, tables and graphical representations[^2^]. It is designed for students and teachers who want to explore and learn mathematics in a dynamic and interactive way.

    -

    Cabri Ii Plus 1.4.5 Keygen


    DOWNLOAD ————— https://tiurll.com/2uCmfP



    -

    However, Cabri II Plus is not a free software. Users need to purchase a license to use it. Some people may try to bypass this requirement by using a keygen, which is a tool that generates serial numbers or activation codes for software[^3^]. A keygen can be considered as a form of software piracy, which is illegal and unethical.

    -

    One of the keygens that claims to crack Cabri II Plus is Cabri II Plus 1.4.5 Keygen. It is available for download from some websites, such as taimienphi.vn[^1^]. However, using this keygen may pose some risks, such as malware infection, legal consequences, or software malfunction. Therefore, it is not recommended to use Cabri II Plus 1.4.5 Keygen or any other keygen to crack Cabri II Plus or any other software.

    -

    The best way to use Cabri II Plus is to purchase a legitimate license from the official website of Cabrilog[^2^]. This way, users can enjoy the full features and benefits of the software, as well as support the developers and respect their intellectual property rights.

    -

    Cabri II Plus is a powerful and versatile software that can help users to explore and learn various topics in geometry, such as angles, polygons, circles, transformations, congruence, similarity, Pythagoras' theorem, trigonometry, and more. Users can create dynamic constructions that can be manipulated and animated, as well as measure and calculate various properties of the figures. Users can also export their constructions to other formats, such as images, videos, or web pages.

    -

    Cabri II Plus also offers a range of features and resources for teachers who want to use the software in their classrooms. Teachers can create interactive worksheets and exercises for their students, as well as monitor and assess their progress. Teachers can also access a library of ready-made activities and examples that cover different levels and curricula of geometry education.

    -

    Cabri II Plus is compatible with Windows and Mac OS X operating systems. Users can download a free trial version of the software from the official website of Cabrilog and try it for 30 days. The full version of the software costs $49 for a single user license or $199 for a site license for up to 30 computers. Users can also purchase additional modules that extend the functionality of the software, such as Cabri 3D or Cabri Express.

    In conclusion, Cabri II Plus is a software that can help users to create and explore geometric and numerical constructions in a dynamic and interactive way. It is suitable for students and teachers who want to learn and teach geometry in a fun and engaging way. However, Cabri II Plus is not a free software and users need to purchase a license to use it. Users should avoid using keygens or other tools that claim to crack the software, as they may be illegal, unethical, or harmful. Users should instead support the developers and respect their intellectual property rights by buying a legitimate license from the official website of Cabrilog.

    d5da3c52bf
    -
    -
    \ No newline at end of file diff --git a/spaces/inreVtussa/clothingai/Examples/Descargarconciertosenfull BESThd1080p.md b/spaces/inreVtussa/clothingai/Examples/Descargarconciertosenfull BESThd1080p.md deleted file mode 100644 index 2f9bc3f9f0ba133b9f2002df3805f3aa429f8576..0000000000000000000000000000000000000000 --- a/spaces/inreVtussa/clothingai/Examples/Descargarconciertosenfull BESThd1080p.md +++ /dev/null @@ -1,6 +0,0 @@ -

    descargarconciertosenfullhd1080p


    Download Zip ✓✓✓ https://tiurll.com/2uCmeF



    -
    - 1fdad05405
    -
    -
    -

    diff --git a/spaces/jbilcke-hf/VideoChain-UI/src/components/ui/popover.tsx b/spaces/jbilcke-hf/VideoChain-UI/src/components/ui/popover.tsx deleted file mode 100644 index 8b35ce6d7b0dd78003308b09354e9f7197eb161a..0000000000000000000000000000000000000000 --- a/spaces/jbilcke-hf/VideoChain-UI/src/components/ui/popover.tsx +++ /dev/null @@ -1,31 +0,0 @@ -"use client" - -import * as React from "react" -import * as PopoverPrimitive from "@radix-ui/react-popover" - -import { cn } from "@/lib/utils" - -const Popover = PopoverPrimitive.Root - -const PopoverTrigger = PopoverPrimitive.Trigger - -const PopoverContent = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, align = "center", sideOffset = 4, ...props }, ref) => ( - - - -)) -PopoverContent.displayName = PopoverPrimitive.Content.displayName - -export { Popover, PopoverTrigger, PopoverContent } diff --git a/spaces/jbilcke-hf/VideoQuest/src/app/queries/getDialogue.ts b/spaces/jbilcke-hf/VideoQuest/src/app/queries/getDialogue.ts deleted file mode 100644 index 51e672d18ca66cd91c7aca862251cc71f965d008..0000000000000000000000000000000000000000 --- a/spaces/jbilcke-hf/VideoQuest/src/app/queries/getDialogue.ts +++ /dev/null @@ -1,82 +0,0 @@ -import sbd from "sbd" - -import { Game } from "@/app/games/types" -import { createLlamaPrompt } from "@/lib/createLlamaPrompt" - -import { getBase } from "./getBase" -import { predict } from "./predict" - - -export const getDialogue = async ({ - game, - situation = "", - lastEvent = "", -}: { - game: Game; - situation: string; - lastEvent: string; -}) => { - - const { currentPrompt, initialPrompt, userSituationPrompt } = getBase({ game, situation, lastEvent }) - - console.log("DEBUG", { - game, situation, lastEvent, - currentPrompt, - initialPrompt, - userSituationPrompt, - - }) - /* - const basePrompt = initialPrompt !== currentPrompt - ? `for your information, the initial game panel and scene was: ${initialPrompt}` - : "" - */ - - const basePrompt = initialPrompt !== currentPrompt - ? `You must imagine the most plausible next dialogue line from the game master, based on current and past situation. 
-Here is the original situation, which will inform you about the general game mood to follow (you must respect this): "${initialPrompt}".` - : "" - - const prompt = createLlamaPrompt([ - { - role: "system", - content: [ - `You are an AI game master.`, - `You are going to receive new information about the current whereabouts and action of the player.`, - basePrompt, - `You must imagine a funny response to speak in reaction to what the player did`, - `Please only write between 2 to 3 short sentences, please.`, - `Please add a few funny puns and jokes.`, - `But please don't say things like "Well, well, well" or "Ah, the classic combination of" it is annoying.` - ].filter(item => item).join("\n") - }, - { - role: "user", - content: userSituationPrompt - } - ]) - - - let result = "" - try { - result = await predict(prompt) - if (!result.trim().length) { - throw new Error("empty dialogue!") - } - } catch (err) { - console.log(`prediction of the dialogue failed, trying again..`) - try { - result = await predict(prompt+".") - } catch (err) { - console.error(`prediction of the dialogue failed again!`) - throw new Error(`failed to generate the dialogue ${err}`) - } - } - - const tmp = result.split("game master:").pop() || result - - // llama-2 is too chatty, let's keep 3 sentences at most - const sentences = sbd.sentences(tmp).slice(0, 3).join(" ").trim() - - return sentences -} diff --git a/spaces/jbilcke-hf/ai-comic-factory/src/types.ts b/spaces/jbilcke-hf/ai-comic-factory/src/types.ts deleted file mode 100644 index 70235c0f7f5351618b1f7e830a24c9154f727e30..0000000000000000000000000000000000000000 --- a/spaces/jbilcke-hf/ai-comic-factory/src/types.ts +++ /dev/null @@ -1,154 +0,0 @@ -export type ProjectionMode = 'cartesian' | 'spherical' - -export type CacheMode = "use" | "renew" | "ignore" - -export interface RenderRequest { - prompt: string - - // whether to use video segmentation - // disabled (default) - // firstframe: we only analyze the first frame - // allframes: we analyze all the frames - segmentation: 'disabled' | 'firstframe' | 'allframes' - - // segmentation will only be executed if we have a non-empty list of actionnables - // actionnables are names of things like "chest", "key", "tree", "chair" etc - actionnables: string[] - - // note: this is the number of frames for Zeroscope, - // which is currently configured to only output 3 seconds, so: - // nbFrames=8 -> 1 sec - // nbFrames=16 -> 2 sec - // nbFrames=24 -> 3 sec - nbFrames: number // min: 1, max: 24 - - nbSteps: number // min: 1, max: 50 - - seed: number - - width: number // fixed at 1024 for now - height: number // fixed at 512 for now - - // upscaling factor - // 0: no upscaling - // 1: no upscaling - // 2: 2x larger - // 3: 3x larger - // 4x: 4x larger, up to 4096x4096 (warning: a PNG of this size can be 50 Mb!) 
- upscalingFactor: number - - projection: ProjectionMode - - cache: CacheMode - - wait: boolean // wait until the job is completed - - analyze: boolean // analyze the image to generate a caption (optional) -} - -export interface ImageSegment { - id: number - box: number[] - color: number[] - label: string - score: number -} - -export type RenderedSceneStatus = - | "pending" - | "completed" - | "error" - -export interface RenderedScene { - renderId: string - status: RenderedSceneStatus - assetUrl: string - alt: string - error: string - maskUrl: string - segments: ImageSegment[] -} - -export interface ImageAnalysisRequest { - image: string // in base64 - prompt: string -} - -export interface ImageAnalysisResponse { - result: string - error?: string -} - -export type LLMResponse = Array<{panel: number; instructions: string; caption: string }> - -export type LLMEngine = - | "INFERENCE_API" - | "INFERENCE_ENDPOINT" - | "OPENAI" - | "REPLICATE" - - export type RenderingEngine = - | "VIDEOCHAIN" - | "OPENAI" - | "REPLICATE" - | "INFERENCE_API" - | "INFERENCE_ENDPOINT" - - export type RenderingModelVendor = - | "SERVER" - | "OPENAI" - | "REPLICATE" - | "HUGGINGFACE" - -export type PostVisibility = - | "featured" // featured by admins - | "trending" // top trending / received more than 10 upvotes - | "normal" // default visibility - -export type Post = { - postId: string - appId: string - prompt: string - previewUrl: string - assetUrl: string - createdAt: string - visibility: PostVisibility - upvotes: number - downvotes: number -} - -export type CreatePostResponse = { - success?: boolean - error?: string - post: Post -} - -export type GetAppPostsResponse = { - success?: boolean - error?: string - posts: Post[] -} - -export type GetAppPostResponse = { - success?: boolean - error?: string - post: Post -} - -export type LayoutProps = { - page: number - nbPanels: number -} - -export type Settings = { - renderingModelVendor: RenderingModelVendor - huggingfaceApiKey: string - huggingfaceInferenceApiModel: string - huggingfaceInferenceApiModelTrigger: string - replicateApiKey: string - replicateApiModel: string - replicateApiModelVersion: string - replicateApiModelTrigger: string - openaiApiKey: string - openaiApiModel: string -} \ No newline at end of file diff --git a/spaces/jbilcke-hf/campose-api/README.md b/spaces/jbilcke-hf/campose-api/README.md deleted file mode 100644 index 5d1939912de80a8ee38eeb88f5c663ad837ba922..0000000000000000000000000000000000000000 --- a/spaces/jbilcke-hf/campose-api/README.md +++ /dev/null @@ -1,47 +0,0 @@ ---- -title: Campose API -emoji: 📷 -colorFrom: green -colorTo: yellow -sdk: docker -pinned: true -app_port: 7860 ---- - -## Presentation - -### What is this project? - -WARNING - This project is not finished! - -Campose API is a REST API to generate camera pose data from a set of images or a video. 
- -## Manual testing (using CURL) - -Converting a video to images: - -``` -ffmpeg -i in.mp4 %04d.jpg -``` -Generating poses from a local video: - -```bash: -curl -X POST -H "Content-Type: multipart/form-data" -F "data=@video.mp4" http://localhost:7860/ -``` - -Generating poses from a remote video: -```bash -curl -X POST -H "Content-Type: application/json" -d '{"assetUrl":"http://example.com/video.mp4"}' http://localhost:7860/ -``` - -## Running on your machine - -### Prerequisites - -You need a machine with CUDA, a GPU etc - -### Environment variables - -- `STORAGE_PATH`: on HF use `/data`, on a local you can use `.sandbox/` - -### Deployment to Hugging Face diff --git a/spaces/jennysun/jwsun-multisubject-render-model/dataset/grounding_dataset.py b/spaces/jennysun/jwsun-multisubject-render-model/dataset/grounding_dataset.py deleted file mode 100644 index 1b1fa74fc948466bd3d1a522025413ee5224577a..0000000000000000000000000000000000000000 --- a/spaces/jennysun/jwsun-multisubject-render-model/dataset/grounding_dataset.py +++ /dev/null @@ -1,205 +0,0 @@ -from tkinter.messagebox import NO -import torch -import json -from collections import defaultdict -from PIL import Image, ImageDraw -from copy import deepcopy -import os -import torchvision.transforms as transforms -import torchvision -from .base_dataset import BaseDataset, check_filenames_in_zipdata, recalculate_box_and_verify_if_valid -from io import BytesIO -import random - -def check_unique(images, fields): - for field in fields: - temp_list = [] - for img_info in images: - temp_list.append(img_info[field]) - assert len(set(temp_list)) == len(temp_list), field - -def clean_data(data): - for data_info in data: - data_info.pop("original_img_id", None) - data_info.pop("original_id", None) - data_info.pop("sentence_id", None) # sentence id for each image (multiple sentences for one image) - data_info.pop("dataset_name", None) - data_info.pop("data_source", None) - data_info["data_id"] = data_info.pop("id") - - -def clean_annotations(annotations): - for anno_info in annotations: - anno_info.pop("iscrowd", None) # I have checked that all 0 for flickr, vg, coco - anno_info.pop("category_id", None) # I have checked that all 1 for flickr vg. 
This is not always 1 for coco, but I do not think we need this annotation - anno_info.pop("area", None) - # anno_info.pop("id", None) - anno_info["data_id"] = anno_info.pop("image_id") - - -def draw_box(img, boxes): - draw = ImageDraw.Draw(img) - for box in boxes: - draw.rectangle([box[0], box[1], box[2], box[3]], outline ="red", width=2) # x0 y0 x1 y1 - return img - - -def xyhw2xyxy(box): - x0, y0, w, h = box - return [ x0, y0, x0+w, y0+h ] - - - -class GroundingDataset(BaseDataset): - def __init__(self, - image_root, - json_path, - annotation_embedding_path, - prob_real_caption=1, - image_size=256, - min_box_size=0.01, - max_boxes_per_data=8, - max_images=None, # set as 30K used to eval - random_crop = False, - random_flip = True, - ): - super().__init__(image_root, random_crop, random_flip, image_size) - self.image_root = image_root - self.json_path = json_path - self.annotation_embedding_path = annotation_embedding_path - self.prob_real_caption = prob_real_caption - self.min_box_size = min_box_size - self.max_boxes_per_data = max_boxes_per_data - self.max_images = max_images - - - # Load raw data - with open(json_path, 'r') as f: - json_raw = json.load(f) # keys: 'info', 'images', 'licenses', 'categories', 'annotations' - self.data = json_raw["images"] # donot name it images, which is misleading - self.annotations = json_raw["annotations"] - - - # Load preprocessed name embedding - if 'bert' in annotation_embedding_path: - self.embedding_len = 1280 - elif 'clip' in annotation_embedding_path: - self.embedding_len = 768 - else: - assert False - - - # clean data and annotation - check_unique( self.data, ['id'] ) - check_unique( self.annotations, ['id'] ) - clean_data(self.data) - clean_annotations(self.annotations) - self.data_id_list = [ datum['data_id'] for datum in self.data ] - self.data = { datum['data_id']:datum for datum in self.data } # map self.data from a list into a dict - - - # data point to its annotation mapping - self.data_id_to_annos = defaultdict(list) - for anno in self.annotations: - self.data_id_to_annos[ anno["data_id"] ].append(anno) - - - - # These are not used that offen, but are useful in some cases - self.file_names = [] # all training images - self.file_name_to_data_ids = defaultdict(list) # for each image, there are multiple data points (captions) - for data_id in self.data_id_list: - fine_name = self.data[data_id]["file_name"] - self.file_names.append(fine_name) - self.file_name_to_data_ids[fine_name].append(data_id) - self.file_names = list(set(self.file_names)) - - - if self.max_images is not None: - "This is only used as COCO2017P evulation, when we set max_images as 30k" - assert False, 'I have commented out the following code to save cpu memory' - # new_data_id_list = [] - # new_file_name_to_data_ids = defaultdict(list) - # self.file_names = self.file_names[0:self.max_images] - # for file_name in self.file_names: - # data_id = self.file_name_to_data_ids[file_name][0] - # new_data_id_list.append(data_id) - # new_file_name_to_data_ids[file_name].append(data_id) - # self.data_id_list = new_data_id_list - # self.file_name_to_data_ids = new_file_name_to_data_ids - - - # Check if all filenames can be found in the zip file - # all_filenames = [self.data[idx]['file_name'] for idx in self.data_id_list ] - # check_filenames_in_zipdata(all_filenames, image_root) - - - def total_images(self): - return len(self.file_names) - - - def __getitem__(self, index): - if self.max_boxes_per_data > 99: - assert False, "Are you sure setting such large number of boxes?" 
- - out = {} - - data_id = self.data_id_list[index] - out['id'] = data_id - - - # Image and caption - file_name = self.data[data_id]['file_name'] - image = self.fetch_image(file_name) - image_tensor, trans_info = self.transform_image(image) - out["image"] = image_tensor - - if random.uniform(0, 1) < self.prob_real_caption: - out["caption"] = self.data[data_id]["caption"] - else: - out["caption"] = "" - - - - annos = deepcopy(self.data_id_to_annos[data_id]) - areas = [] - all_boxes = [] - all_masks = [] - all_positive_embeddings = [] - - - for anno in annos: - - x, y, w, h = anno['bbox'] - valid, (x0, y0, x1, y1) = recalculate_box_and_verify_if_valid(x, y, w, h, trans_info, self.image_size, self.min_box_size) - - if valid: - areas.append( (x1-x0)*(y1-y0) ) - all_boxes.append( torch.tensor([x0,y0,x1,y1]) / self.image_size ) # scale to 0-1 - all_masks.append(1) - all_positive_embeddings.append( torch.load(os.path.join(self.annotation_embedding_path,str(anno["id"])), map_location='cpu' ) ) - - wanted_idxs = torch.tensor(areas).sort(descending=True)[1] - wanted_idxs = wanted_idxs[0:self.max_boxes_per_data] - - boxes = torch.zeros(self.max_boxes_per_data, 4) - masks = torch.zeros(self.max_boxes_per_data) - positive_embeddings = torch.zeros(self.max_boxes_per_data, self.embedding_len) - for i, idx in enumerate(wanted_idxs): - boxes[i] = all_boxes[idx] - masks[i] = all_masks[idx] - positive_embeddings[i] = all_positive_embeddings[idx] - - - out["boxes"] = boxes - out["masks"] = masks - out["positive_embeddings"] = positive_embeddings - - return out - - - - def __len__(self): - return len(self.data_id_list) - - diff --git a/spaces/jessica6105/Lu-Bert-VITS2/commons.py b/spaces/jessica6105/Lu-Bert-VITS2/commons.py deleted file mode 100644 index d3fa07f65b1681e1f469b04b2fe689b7c174eaaa..0000000000000000000000000000000000000000 --- a/spaces/jessica6105/Lu-Bert-VITS2/commons.py +++ /dev/null @@ -1,160 +0,0 @@ -import math -import torch -from torch.nn import functional as F - - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size * dilation - dilation) / 2) - - -def convert_pad_shape(pad_shape): - layer = pad_shape[::-1] - pad_shape = [item for sublist in layer for item in sublist] - return pad_shape - - -def intersperse(lst, item): - result = [item] * (len(lst) * 2 + 1) - result[1::2] = lst - return result - - -def kl_divergence(m_p, logs_p, m_q, logs_q): - """KL(P||Q)""" - kl = (logs_q - logs_p) - 0.5 - kl += ( - 0.5 * (torch.exp(2.0 * logs_p) + ((m_p - m_q) ** 2)) * torch.exp(-2.0 * logs_q) - ) - return kl - - -def rand_gumbel(shape): - """Sample from the Gumbel distribution, protect from overflows.""" - uniform_samples = torch.rand(shape) * 0.99998 + 0.00001 - return -torch.log(-torch.log(uniform_samples)) - - -def rand_gumbel_like(x): - g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device) - return g - - -def slice_segments(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, :, idx_str:idx_end] - return ret - - -def rand_slice_segments(x, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size + 1 - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, 
segment_size) - return ret, ids_str - - -def get_timing_signal_1d(length, channels, min_timescale=1.0, max_timescale=1.0e4): - position = torch.arange(length, dtype=torch.float) - num_timescales = channels // 2 - log_timescale_increment = math.log(float(max_timescale) / float(min_timescale)) / ( - num_timescales - 1 - ) - inv_timescales = min_timescale * torch.exp( - torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment - ) - scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1) - signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0) - signal = F.pad(signal, [0, 0, 0, channels % 2]) - signal = signal.view(1, channels, length) - return signal - - -def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return x + signal.to(dtype=x.dtype, device=x.device) - - -def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis) - - -def subsequent_mask(length): - mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0) - return mask - - -@torch.jit.script -def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels): - n_channels_int = n_channels[0] - in_act = input_a + input_b - t_act = torch.tanh(in_act[:, :n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, :]) - acts = t_act * s_act - return acts - - -def convert_pad_shape(pad_shape): - layer = pad_shape[::-1] - pad_shape = [item for sublist in layer for item in sublist] - return pad_shape - - -def shift_1d(x): - x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1] - return x - - -def sequence_mask(length, max_length=None): - if max_length is None: - max_length = length.max() - x = torch.arange(max_length, dtype=length.dtype, device=length.device) - return x.unsqueeze(0) < length.unsqueeze(1) - - -def generate_path(duration, mask): - """ - duration: [b, 1, t_x] - mask: [b, 1, t_y, t_x] - """ - - b, _, t_y, t_x = mask.shape - cum_duration = torch.cumsum(duration, -1) - - cum_duration_flat = cum_duration.view(b * t_x) - path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype) - path = path.view(b, t_x, t_y) - path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1] - path = path.unsqueeze(1).transpose(2, 3) * mask - return path - - -def clip_grad_value_(parameters, clip_value, norm_type=2): - if isinstance(parameters, torch.Tensor): - parameters = [parameters] - parameters = list(filter(lambda p: p.grad is not None, parameters)) - norm_type = float(norm_type) - if clip_value is not None: - clip_value = float(clip_value) - - total_norm = 0 - for p in parameters: - param_norm = p.grad.data.norm(norm_type) - total_norm += param_norm.item() ** norm_type - if clip_value is not None: - p.grad.data.clamp_(min=-clip_value, max=clip_value) - total_norm = total_norm ** (1.0 / norm_type) - return total_norm diff --git a/spaces/jhwen/bingo/src/components/ui/icons.tsx b/spaces/jhwen/bingo/src/components/ui/icons.tsx deleted file mode 100644 index 742b489b50437c5b64c86082f2ebc712eeb6a2b0..0000000000000000000000000000000000000000 --- a/spaces/jhwen/bingo/src/components/ui/icons.tsx +++ /dev/null @@ -1,504 +0,0 @@ -'use client' - -import * as React from 'react' - -import { cn } from '@/lib/utils' - -function 
IconNextChat({ - className, - inverted, - ...props -}: React.ComponentProps<'svg'> & { inverted?: boolean }) { - const id = React.useId() - - return ( - - - - - - - - - - - - - - - - - - - - - - ) -} - -function IconOpenAI({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - OpenAI icon - - - ) -} - -function IconGitHub({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - GitHub - - - ) -} - -function IconSeparator({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - ) -} - -function IconArrowDown({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconArrowRight({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconUser({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconPlus({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconArrowElbow({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconSpinner({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconMessage({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconTrash({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconMore({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconRefresh({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconStop({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconSidebar({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconMoon({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconSun({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconCopy({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconCheck({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconDownload({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconClose({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconEdit({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconShare({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconUsers({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconExternalLink({ - className, - ...props -}: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconChevronUpDown({ - className, - ...props -}: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -export { - IconEdit, - IconNextChat, - IconOpenAI, - IconGitHub, - IconSeparator, - IconArrowDown, - IconArrowRight, - IconUser, - IconPlus, - IconArrowElbow, - IconSpinner, - IconMessage, - IconTrash, - IconMore, - IconRefresh, - IconStop, - IconSidebar, - IconMoon, - IconSun, - IconCopy, - IconCheck, - IconDownload, - IconClose, - IconShare, - IconUsers, - IconExternalLink, - IconChevronUpDown -} diff --git a/spaces/jmesikto/whisper-webui/src/whisper/fasterWhisperContainer.py b/spaces/jmesikto/whisper-webui/src/whisper/fasterWhisperContainer.py deleted file mode 100644 index 
ccb5d3cd6360094636e7e9edfc1310019a548433..0000000000000000000000000000000000000000 --- a/spaces/jmesikto/whisper-webui/src/whisper/fasterWhisperContainer.py +++ /dev/null @@ -1,200 +0,0 @@ -import os -from typing import List, Union - -from faster_whisper import WhisperModel, download_model -from src.config import ModelConfig, VadInitialPromptMode -from src.hooks.progressListener import ProgressListener -from src.languages import get_language_from_name -from src.modelCache import ModelCache -from src.whisper.abstractWhisperContainer import AbstractWhisperCallback, AbstractWhisperContainer -from src.utils import format_timestamp - -class FasterWhisperContainer(AbstractWhisperContainer): - def __init__(self, model_name: str, device: str = None, compute_type: str = "float16", - download_root: str = None, - cache: ModelCache = None, models: List[ModelConfig] = []): - super().__init__(model_name, device, compute_type, download_root, cache, models) - - def ensure_downloaded(self): - """ - Ensure that the model is downloaded. This is useful if you want to ensure that the model is downloaded before - passing the container to a subprocess. - """ - model_config = self._get_model_config() - - if os.path.isdir(model_config.url): - model_config.path = model_config.url - else: - model_config.path = download_model(model_config.url, output_dir=self.download_root) - - def _get_model_config(self) -> ModelConfig: - """ - Get the model configuration for the model. - """ - for model in self.models: - if model.name == self.model_name: - return model - return None - - def _create_model(self): - print("Loading faster whisper model " + self.model_name + " for device " + str(self.device)) - model_config = self._get_model_config() - - if model_config.type == "whisper" and model_config.url not in ["tiny", "base", "small", "medium", "large", "large-v2"]: - raise Exception("FasterWhisperContainer does not yet support Whisper models. Use ct2-transformers-converter to convert the model to a faster-whisper model.") - - device = self.device - - if (device is None): - device = "auto" - - model = WhisperModel(model_config.url, device=device, compute_type=self.compute_type) - return model - - def create_callback(self, language: str = None, task: str = None, initial_prompt: str = None, - initial_prompt_mode: VadInitialPromptMode = VadInitialPromptMode.PREPREND_FIRST_SEGMENT, - **decodeOptions: dict) -> AbstractWhisperCallback: - """ - Create a WhisperCallback object that can be used to transcript audio files. - - Parameters - ---------- - language: str - The target language of the transcription. If not specified, the language will be inferred from the audio content. - task: str - The task - either translate or transcribe. - initial_prompt: str - The initial prompt to use for the transcription. - initial_prompt_mode: VadInitialPromptMode - The mode to use for the initial prompt. If set to PREPEND_FIRST_SEGMENT, the initial prompt will be prepended to the first segment of audio. - If set to PREPEND_ALL_SEGMENTS, the initial prompt will be prepended to all segments of audio. - decodeOptions: dict - Additional options to pass to the decoder. Must be pickleable. - - Returns - ------- - A WhisperCallback object. 
- """ - return FasterWhisperCallback(self, language=language, task=task, initial_prompt=initial_prompt, initial_prompt_mode=initial_prompt_mode, **decodeOptions) - -class FasterWhisperCallback(AbstractWhisperCallback): - def __init__(self, model_container: FasterWhisperContainer, language: str = None, task: str = None, - initial_prompt: str = None, initial_prompt_mode: VadInitialPromptMode=VadInitialPromptMode.PREPREND_FIRST_SEGMENT, - **decodeOptions: dict): - self.model_container = model_container - self.language = language - self.task = task - self.initial_prompt = initial_prompt - self.initial_prompt_mode = initial_prompt_mode - self.decodeOptions = decodeOptions - - self._printed_warning = False - - def invoke(self, audio, segment_index: int, prompt: str, detected_language: str, progress_listener: ProgressListener = None): - """ - Peform the transcription of the given audio file or data. - - Parameters - ---------- - audio: Union[str, np.ndarray, torch.Tensor] - The audio file to transcribe, or the audio data as a numpy array or torch tensor. - segment_index: int - The target language of the transcription. If not specified, the language will be inferred from the audio content. - task: str - The task - either translate or transcribe. - progress_listener: ProgressListener - A callback to receive progress updates. - """ - model: WhisperModel = self.model_container.get_model() - language_code = self._lookup_language_code(self.language) if self.language else None - - # Copy decode options and remove options that are not supported by faster-whisper - decodeOptions = self.decodeOptions.copy() - verbose = decodeOptions.pop("verbose", None) - - logprob_threshold = decodeOptions.pop("logprob_threshold", None) - - patience = decodeOptions.pop("patience", None) - length_penalty = decodeOptions.pop("length_penalty", None) - suppress_tokens = decodeOptions.pop("suppress_tokens", None) - - if (decodeOptions.pop("fp16", None) is not None): - if not self._printed_warning: - print("WARNING: fp16 option is ignored by faster-whisper - use compute_type instead.") - self._printed_warning = True - - # Fix up decode options - if (logprob_threshold is not None): - decodeOptions["log_prob_threshold"] = logprob_threshold - - decodeOptions["patience"] = float(patience) if patience is not None else 1.0 - decodeOptions["length_penalty"] = float(length_penalty) if length_penalty is not None else 1.0 - - # See if supress_tokens is a string - if so, convert it to a list of ints - decodeOptions["suppress_tokens"] = self._split_suppress_tokens(suppress_tokens) - - initial_prompt = self._get_initial_prompt(self.initial_prompt, self.initial_prompt_mode, prompt, segment_index) - - segments_generator, info = model.transcribe(audio, \ - language=language_code if language_code else detected_language, task=self.task, \ - initial_prompt=initial_prompt, \ - **decodeOptions - ) - - segments = [] - - for segment in segments_generator: - segments.append(segment) - - if progress_listener is not None: - progress_listener.on_progress(segment.end, info.duration) - if verbose: - print("[{}->{}] {}".format(format_timestamp(segment.start, True), format_timestamp(segment.end, True), - segment.text)) - - text = " ".join([segment.text for segment in segments]) - - # Convert the segments to a format that is easier to serialize - whisper_segments = [{ - "text": segment.text, - "start": segment.start, - "end": segment.end, - - # Extra fields added by faster-whisper - "words": [{ - "start": word.start, - "end": word.end, - "word": word.word, - 
"probability": word.probability - } for word in (segment.words if segment.words is not None else []) ] - } for segment in segments] - - result = { - "segments": whisper_segments, - "text": text, - "language": info.language if info else None, - - # Extra fields added by faster-whisper - "language_probability": info.language_probability if info else None, - "duration": info.duration if info else None - } - - if progress_listener is not None: - progress_listener.on_finished() - return result - - def _split_suppress_tokens(self, suppress_tokens: Union[str, List[int]]): - if (suppress_tokens is None): - return None - if (isinstance(suppress_tokens, list)): - return suppress_tokens - - return [int(token) for token in suppress_tokens.split(",")] - - def _lookup_language_code(self, language: str): - language = get_language_from_name(language) - - if language is None: - raise ValueError("Invalid language: " + language) - - return language.code diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/Crypto/Cipher/_mode_ocb.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/Crypto/Cipher/_mode_ocb.py deleted file mode 100644 index a271ca12636d72e84ba60a79535981173beffc4e..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/Crypto/Cipher/_mode_ocb.py +++ /dev/null @@ -1,532 +0,0 @@ -# =================================================================== -# -# Copyright (c) 2014, Legrandin -# All rights reserved. -# -# Redistribution and use in source and binary forms, with or without -# modification, are permitted provided that the following conditions -# are met: -# -# 1. Redistributions of source code must retain the above copyright -# notice, this list of conditions and the following disclaimer. -# 2. Redistributions in binary form must reproduce the above copyright -# notice, this list of conditions and the following disclaimer in -# the documentation and/or other materials provided with the -# distribution. -# -# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS -# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT -# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS -# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE -# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, -# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, -# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; -# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER -# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT -# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN -# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE -# POSSIBILITY OF SUCH DAMAGE. -# =================================================================== - -""" -Offset Codebook (OCB) mode. - -OCB is Authenticated Encryption with Associated Data (AEAD) cipher mode -designed by Prof. Phillip Rogaway and specified in `RFC7253`_. - -The algorithm provides both authenticity and privacy, it is very efficient, -it uses only one key and it can be used in online mode (so that encryption -or decryption can start before the end of the message is available). - -This module implements the third and last variant of OCB (OCB3) and it only -works in combination with a 128-bit block symmetric cipher, like AES. 
- -OCB is patented in US but `free licenses`_ exist for software implementations -meant for non-military purposes. - -Example: - >>> from Crypto.Cipher import AES - >>> from Crypto.Random import get_random_bytes - >>> - >>> key = get_random_bytes(32) - >>> cipher = AES.new(key, AES.MODE_OCB) - >>> plaintext = b"Attack at dawn" - >>> ciphertext, mac = cipher.encrypt_and_digest(plaintext) - >>> # Deliver cipher.nonce, ciphertext and mac - ... - >>> cipher = AES.new(key, AES.MODE_OCB, nonce=nonce) - >>> try: - >>> plaintext = cipher.decrypt_and_verify(ciphertext, mac) - >>> except ValueError: - >>> print "Invalid message" - >>> else: - >>> print plaintext - -:undocumented: __package__ - -.. _RFC7253: http://www.rfc-editor.org/info/rfc7253 -.. _free licenses: http://web.cs.ucdavis.edu/~rogaway/ocb/license.htm -""" - -import struct -from binascii import unhexlify - -from Crypto.Util.py3compat import bord, _copy_bytes, bchr -from Crypto.Util.number import long_to_bytes, bytes_to_long -from Crypto.Util.strxor import strxor - -from Crypto.Hash import BLAKE2s -from Crypto.Random import get_random_bytes - -from Crypto.Util._raw_api import (load_pycryptodome_raw_lib, VoidPointer, - create_string_buffer, get_raw_buffer, - SmartPointer, c_size_t, c_uint8_ptr, - is_buffer) - -_raw_ocb_lib = load_pycryptodome_raw_lib("Crypto.Cipher._raw_ocb", """ - int OCB_start_operation(void *cipher, - const uint8_t *offset_0, - size_t offset_0_len, - void **pState); - int OCB_encrypt(void *state, - const uint8_t *in, - uint8_t *out, - size_t data_len); - int OCB_decrypt(void *state, - const uint8_t *in, - uint8_t *out, - size_t data_len); - int OCB_update(void *state, - const uint8_t *in, - size_t data_len); - int OCB_digest(void *state, - uint8_t *tag, - size_t tag_len); - int OCB_stop_operation(void *state); - """) - - -class OcbMode(object): - """Offset Codebook (OCB) mode. 
- - :undocumented: __init__ - """ - - def __init__(self, factory, nonce, mac_len, cipher_params): - - if factory.block_size != 16: - raise ValueError("OCB mode is only available for ciphers" - " that operate on 128 bits blocks") - - self.block_size = 16 - """The block size of the underlying cipher, in bytes.""" - - self.nonce = _copy_bytes(None, None, nonce) - """Nonce used for this session.""" - if len(nonce) not in range(1, 16): - raise ValueError("Nonce must be at most 15 bytes long") - if not is_buffer(nonce): - raise TypeError("Nonce must be bytes, bytearray or memoryview") - - self._mac_len = mac_len - if not 8 <= mac_len <= 16: - raise ValueError("MAC tag must be between 8 and 16 bytes long") - - # Cache for MAC tag - self._mac_tag = None - - # Cache for unaligned associated data - self._cache_A = b"" - - # Cache for unaligned ciphertext/plaintext - self._cache_P = b"" - - # Allowed transitions after initialization - self._next = ["update", "encrypt", "decrypt", - "digest", "verify"] - - # Compute Offset_0 - params_without_key = dict(cipher_params) - key = params_without_key.pop("key") - - taglen_mod128 = (self._mac_len * 8) % 128 - if len(self.nonce) < 15: - nonce = bchr(taglen_mod128 << 1) +\ - b'\x00' * (14 - len(nonce)) +\ - b'\x01' +\ - self.nonce - else: - nonce = bchr((taglen_mod128 << 1) | 0x01) +\ - self.nonce - - bottom_bits = bord(nonce[15]) & 0x3F # 6 bits, 0..63 - top_bits = bord(nonce[15]) & 0xC0 # 2 bits - - ktop_cipher = factory.new(key, - factory.MODE_ECB, - **params_without_key) - ktop = ktop_cipher.encrypt(struct.pack('15sB', - nonce[:15], - top_bits)) - - stretch = ktop + strxor(ktop[:8], ktop[1:9]) # 192 bits - offset_0 = long_to_bytes(bytes_to_long(stretch) >> - (64 - bottom_bits), 24)[8:] - - # Create low-level cipher instance - raw_cipher = factory._create_base_cipher(cipher_params) - if cipher_params: - raise TypeError("Unknown keywords: " + str(cipher_params)) - - self._state = VoidPointer() - result = _raw_ocb_lib.OCB_start_operation(raw_cipher.get(), - offset_0, - c_size_t(len(offset_0)), - self._state.address_of()) - if result: - raise ValueError("Error %d while instantiating the OCB mode" - % result) - - # Ensure that object disposal of this Python object will (eventually) - # free the memory allocated by the raw library for the cipher mode - self._state = SmartPointer(self._state.get(), - _raw_ocb_lib.OCB_stop_operation) - - # Memory allocated for the underlying block cipher is now owed - # by the cipher mode - raw_cipher.release() - - def _update(self, assoc_data, assoc_data_len): - result = _raw_ocb_lib.OCB_update(self._state.get(), - c_uint8_ptr(assoc_data), - c_size_t(assoc_data_len)) - if result: - raise ValueError("Error %d while computing MAC in OCB mode" % result) - - def update(self, assoc_data): - """Process the associated data. - - If there is any associated data, the caller has to invoke - this method one or more times, before using - ``decrypt`` or ``encrypt``. - - By *associated data* it is meant any data (e.g. packet headers) that - will not be encrypted and will be transmitted in the clear. - However, the receiver shall still able to detect modifications. - - If there is no associated data, this method must not be called. - - The caller may split associated data in segments of any size, and - invoke this method multiple times, each time with the next segment. - - :Parameters: - assoc_data : bytes/bytearray/memoryview - A piece of associated data. 
- """ - - if "update" not in self._next: - raise TypeError("update() can only be called" - " immediately after initialization") - - self._next = ["encrypt", "decrypt", "digest", - "verify", "update"] - - if len(self._cache_A) > 0: - filler = min(16 - len(self._cache_A), len(assoc_data)) - self._cache_A += _copy_bytes(None, filler, assoc_data) - assoc_data = assoc_data[filler:] - - if len(self._cache_A) < 16: - return self - - # Clear the cache, and proceeding with any other aligned data - self._cache_A, seg = b"", self._cache_A - self.update(seg) - - update_len = len(assoc_data) // 16 * 16 - self._cache_A = _copy_bytes(update_len, None, assoc_data) - self._update(assoc_data, update_len) - return self - - def _transcrypt_aligned(self, in_data, in_data_len, - trans_func, trans_desc): - - out_data = create_string_buffer(in_data_len) - result = trans_func(self._state.get(), - in_data, - out_data, - c_size_t(in_data_len)) - if result: - raise ValueError("Error %d while %sing in OCB mode" - % (result, trans_desc)) - return get_raw_buffer(out_data) - - def _transcrypt(self, in_data, trans_func, trans_desc): - # Last piece to encrypt/decrypt - if in_data is None: - out_data = self._transcrypt_aligned(self._cache_P, - len(self._cache_P), - trans_func, - trans_desc) - self._cache_P = b"" - return out_data - - # Try to fill up the cache, if it already contains something - prefix = b"" - if len(self._cache_P) > 0: - filler = min(16 - len(self._cache_P), len(in_data)) - self._cache_P += _copy_bytes(None, filler, in_data) - in_data = in_data[filler:] - - if len(self._cache_P) < 16: - # We could not manage to fill the cache, so there is certainly - # no output yet. - return b"" - - # Clear the cache, and proceeding with any other aligned data - prefix = self._transcrypt_aligned(self._cache_P, - len(self._cache_P), - trans_func, - trans_desc) - self._cache_P = b"" - - # Process data in multiples of the block size - trans_len = len(in_data) // 16 * 16 - result = self._transcrypt_aligned(c_uint8_ptr(in_data), - trans_len, - trans_func, - trans_desc) - if prefix: - result = prefix + result - - # Left-over - self._cache_P = _copy_bytes(trans_len, None, in_data) - - return result - - def encrypt(self, plaintext=None): - """Encrypt the next piece of plaintext. - - After the entire plaintext has been passed (but before `digest`), - you **must** call this method one last time with no arguments to collect - the final piece of ciphertext. - - If possible, use the method `encrypt_and_digest` instead. - - :Parameters: - plaintext : bytes/bytearray/memoryview - The next piece of data to encrypt or ``None`` to signify - that encryption has finished and that any remaining ciphertext - has to be produced. - :Return: - the ciphertext, as a byte string. - Its length may not match the length of the *plaintext*. - """ - - if "encrypt" not in self._next: - raise TypeError("encrypt() can only be called after" - " initialization or an update()") - - if plaintext is None: - self._next = ["digest"] - else: - self._next = ["encrypt"] - return self._transcrypt(plaintext, _raw_ocb_lib.OCB_encrypt, "encrypt") - - def decrypt(self, ciphertext=None): - """Decrypt the next piece of ciphertext. - - After the entire ciphertext has been passed (but before `verify`), - you **must** call this method one last time with no arguments to collect - the remaining piece of plaintext. - - If possible, use the method `decrypt_and_verify` instead. 
- - :Parameters: - ciphertext : bytes/bytearray/memoryview - The next piece of data to decrypt or ``None`` to signify - that decryption has finished and that any remaining plaintext - has to be produced. - :Return: - the plaintext, as a byte string. - Its length may not match the length of the *ciphertext*. - """ - - if "decrypt" not in self._next: - raise TypeError("decrypt() can only be called after" - " initialization or an update()") - - if ciphertext is None: - self._next = ["verify"] - else: - self._next = ["decrypt"] - return self._transcrypt(ciphertext, - _raw_ocb_lib.OCB_decrypt, - "decrypt") - - def _compute_mac_tag(self): - - if self._mac_tag is not None: - return - - if self._cache_A: - self._update(self._cache_A, len(self._cache_A)) - self._cache_A = b"" - - mac_tag = create_string_buffer(16) - result = _raw_ocb_lib.OCB_digest(self._state.get(), - mac_tag, - c_size_t(len(mac_tag)) - ) - if result: - raise ValueError("Error %d while computing digest in OCB mode" - % result) - self._mac_tag = get_raw_buffer(mac_tag)[:self._mac_len] - - def digest(self): - """Compute the *binary* MAC tag. - - Call this method after the final `encrypt` (the one with no arguments) - to obtain the MAC tag. - - The MAC tag is needed by the receiver to determine authenticity - of the message. - - :Return: the MAC, as a byte string. - """ - - if "digest" not in self._next: - raise TypeError("digest() cannot be called now for this cipher") - - assert(len(self._cache_P) == 0) - - self._next = ["digest"] - - if self._mac_tag is None: - self._compute_mac_tag() - - return self._mac_tag - - def hexdigest(self): - """Compute the *printable* MAC tag. - - This method is like `digest`. - - :Return: the MAC, as a hexadecimal string. - """ - return "".join(["%02x" % bord(x) for x in self.digest()]) - - def verify(self, received_mac_tag): - """Validate the *binary* MAC tag. - - Call this method after the final `decrypt` (the one with no arguments) - to check if the message is authentic and valid. - - :Parameters: - received_mac_tag : bytes/bytearray/memoryview - This is the *binary* MAC, as received from the sender. - :Raises ValueError: - if the MAC does not match. The message has been tampered with - or the key is incorrect. - """ - - if "verify" not in self._next: - raise TypeError("verify() cannot be called now for this cipher") - - assert(len(self._cache_P) == 0) - - self._next = ["verify"] - - if self._mac_tag is None: - self._compute_mac_tag() - - secret = get_random_bytes(16) - mac1 = BLAKE2s.new(digest_bits=160, key=secret, data=self._mac_tag) - mac2 = BLAKE2s.new(digest_bits=160, key=secret, data=received_mac_tag) - - if mac1.digest() != mac2.digest(): - raise ValueError("MAC check failed") - - def hexverify(self, hex_mac_tag): - """Validate the *printable* MAC tag. - - This method is like `verify`. - - :Parameters: - hex_mac_tag : string - This is the *printable* MAC, as received from the sender. - :Raises ValueError: - if the MAC does not match. The message has been tampered with - or the key is incorrect. - """ - - self.verify(unhexlify(hex_mac_tag)) - - def encrypt_and_digest(self, plaintext): - """Encrypt the message and create the MAC tag in one step. - - :Parameters: - plaintext : bytes/bytearray/memoryview - The entire message to encrypt. 
- :Return: - a tuple with two byte strings: - - - the encrypted data - - the MAC - """ - - return self.encrypt(plaintext) + self.encrypt(), self.digest() - - def decrypt_and_verify(self, ciphertext, received_mac_tag): - """Decrypted the message and verify its authenticity in one step. - - :Parameters: - ciphertext : bytes/bytearray/memoryview - The entire message to decrypt. - received_mac_tag : byte string - This is the *binary* MAC, as received from the sender. - - :Return: the decrypted data (byte string). - :Raises ValueError: - if the MAC does not match. The message has been tampered with - or the key is incorrect. - """ - - plaintext = self.decrypt(ciphertext) + self.decrypt() - self.verify(received_mac_tag) - return plaintext - - -def _create_ocb_cipher(factory, **kwargs): - """Create a new block cipher, configured in OCB mode. - - :Parameters: - factory : module - A symmetric cipher module from `Crypto.Cipher` - (like `Crypto.Cipher.AES`). - - :Keywords: - nonce : bytes/bytearray/memoryview - A value that must never be reused for any other encryption. - Its length can vary from 1 to 15 bytes. - If not specified, a random 15 bytes long nonce is generated. - - mac_len : integer - Length of the MAC, in bytes. - It must be in the range ``[8..16]``. - The default is 16 (128 bits). - - Any other keyword will be passed to the underlying block cipher. - See the relevant documentation for details (at least ``key`` will need - to be present). - """ - - try: - nonce = kwargs.pop("nonce", None) - if nonce is None: - nonce = get_random_bytes(15) - mac_len = kwargs.pop("mac_len", 16) - except KeyError as e: - raise TypeError("Keyword missing: " + str(e)) - - return OcbMode(factory, nonce, mac_len, kwargs) diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/Crypto/Hash/Poly1305.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/Crypto/Hash/Poly1305.py deleted file mode 100644 index eb5e0dadba401ef75c8478af979ffde6c3f65c01..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/Crypto/Hash/Poly1305.py +++ /dev/null @@ -1,217 +0,0 @@ -# -*- coding: utf-8 -*- -# -# Hash/Poly1305.py - Implements the Poly1305 MAC -# -# =================================================================== -# The contents of this file are dedicated to the public domain. To -# the extent that dedication to the public domain is not available, -# everyone is granted a worldwide, perpetual, royalty-free, -# non-exclusive license to exercise all rights associated with the -# contents of this file for any purpose whatsoever. -# No rights are reserved. -# -# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, -# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF -# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS -# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN -# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN -# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE -# SOFTWARE. 
-# =================================================================== - -from binascii import unhexlify - -from Crypto.Util.py3compat import bord, tobytes, _copy_bytes - -from Crypto.Hash import BLAKE2s -from Crypto.Random import get_random_bytes -from Crypto.Util._raw_api import (load_pycryptodome_raw_lib, - VoidPointer, SmartPointer, - create_string_buffer, - get_raw_buffer, c_size_t, - c_uint8_ptr) - - -_raw_poly1305 = load_pycryptodome_raw_lib("Crypto.Hash._poly1305", - """ - int poly1305_init(void **state, - const uint8_t *r, - size_t r_len, - const uint8_t *s, - size_t s_len); - int poly1305_destroy(void *state); - int poly1305_update(void *state, - const uint8_t *in, - size_t len); - int poly1305_digest(const void *state, - uint8_t *digest, - size_t len); - """) - - -class Poly1305_MAC(object): - """An Poly1305 MAC object. - Do not instantiate directly. Use the :func:`new` function. - - :ivar digest_size: the size in bytes of the resulting MAC tag - :vartype digest_size: integer - """ - - digest_size = 16 - - def __init__(self, r, s, data): - - if len(r) != 16: - raise ValueError("Parameter r is not 16 bytes long") - if len(s) != 16: - raise ValueError("Parameter s is not 16 bytes long") - - self._mac_tag = None - - state = VoidPointer() - result = _raw_poly1305.poly1305_init(state.address_of(), - c_uint8_ptr(r), - c_size_t(len(r)), - c_uint8_ptr(s), - c_size_t(len(s)) - ) - if result: - raise ValueError("Error %d while instantiating Poly1305" % result) - self._state = SmartPointer(state.get(), - _raw_poly1305.poly1305_destroy) - if data: - self.update(data) - - def update(self, data): - """Authenticate the next chunk of message. - - Args: - data (byte string/byte array/memoryview): The next chunk of data - """ - - if self._mac_tag: - raise TypeError("You can only call 'digest' or 'hexdigest' on this object") - - result = _raw_poly1305.poly1305_update(self._state.get(), - c_uint8_ptr(data), - c_size_t(len(data))) - if result: - raise ValueError("Error %d while hashing Poly1305 data" % result) - return self - - def copy(self): - raise NotImplementedError() - - def digest(self): - """Return the **binary** (non-printable) MAC tag of the message - authenticated so far. - - :return: The MAC tag digest, computed over the data processed so far. - Binary form. - :rtype: byte string - """ - - if self._mac_tag: - return self._mac_tag - - bfr = create_string_buffer(16) - result = _raw_poly1305.poly1305_digest(self._state.get(), - bfr, - c_size_t(len(bfr))) - if result: - raise ValueError("Error %d while creating Poly1305 digest" % result) - - self._mac_tag = get_raw_buffer(bfr) - return self._mac_tag - - def hexdigest(self): - """Return the **printable** MAC tag of the message authenticated so far. - - :return: The MAC tag, computed over the data processed so far. - Hexadecimal encoded. - :rtype: string - """ - - return "".join(["%02x" % bord(x) - for x in tuple(self.digest())]) - - def verify(self, mac_tag): - """Verify that a given **binary** MAC (computed by another party) - is valid. - - Args: - mac_tag (byte string/byte string/memoryview): the expected MAC of the message. - - Raises: - ValueError: if the MAC does not match. It means that the message - has been tampered with or that the MAC key is incorrect. 
- """ - - secret = get_random_bytes(16) - - mac1 = BLAKE2s.new(digest_bits=160, key=secret, data=mac_tag) - mac2 = BLAKE2s.new(digest_bits=160, key=secret, data=self.digest()) - - if mac1.digest() != mac2.digest(): - raise ValueError("MAC check failed") - - def hexverify(self, hex_mac_tag): - """Verify that a given **printable** MAC (computed by another party) - is valid. - - Args: - hex_mac_tag (string): the expected MAC of the message, - as a hexadecimal string. - - Raises: - ValueError: if the MAC does not match. It means that the message - has been tampered with or that the MAC key is incorrect. - """ - - self.verify(unhexlify(tobytes(hex_mac_tag))) - - - -def new(**kwargs): - """Create a new Poly1305 MAC object. - - Args: - key (bytes/bytearray/memoryview): - The 32-byte key for the Poly1305 object. - cipher (module from ``Crypto.Cipher``): - The cipher algorithm to use for deriving the Poly1305 - key pair *(r, s)*. - It can only be ``Crypto.Cipher.AES`` or ``Crypto.Cipher.ChaCha20``. - nonce (bytes/bytearray/memoryview): - Optional. The non-repeatable value to use for the MAC of this message. - It must be 16 bytes long for ``AES`` and 8 or 12 bytes for ``ChaCha20``. - If not passed, a random nonce is created; you will find it in the - ``nonce`` attribute of the new object. - data (bytes/bytearray/memoryview): - Optional. The very first chunk of the message to authenticate. - It is equivalent to an early call to ``update()``. - - Returns: - A :class:`Poly1305_MAC` object - """ - - cipher = kwargs.pop("cipher", None) - if not hasattr(cipher, '_derive_Poly1305_key_pair'): - raise ValueError("Parameter 'cipher' must be AES or ChaCha20") - - cipher_key = kwargs.pop("key", None) - if cipher_key is None: - raise TypeError("You must pass a parameter 'key'") - - nonce = kwargs.pop("nonce", None) - data = kwargs.pop("data", None) - - if kwargs: - raise TypeError("Unknown parameters: " + str(kwargs)) - - r, s, nonce = cipher._derive_Poly1305_key_pair(cipher_key, nonce) - - new_mac = Poly1305_MAC(r, s, data) - new_mac.nonce = _copy_bytes(None, None, nonce) # nonce may still be just a memoryview - return new_mac diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/altair/vegalite/v5/api.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/altair/vegalite/v5/api.py deleted file mode 100644 index b09b7289dacbeeec66348f631cfb2f5e707921ea..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/altair/vegalite/v5/api.py +++ /dev/null @@ -1,3793 +0,0 @@ -import warnings - -import hashlib -import io -import json -import jsonschema -import pandas as pd -from toolz.curried import pipe as _pipe -import itertools -import sys -from typing import cast, List, Optional, Any, Iterable - -# Have to rename it here as else it overlaps with schema.core.Type -from typing import Type as TypingType -from typing import Dict as TypingDict - -from .schema import core, channels, mixins, Undefined, SCHEMA_URL - -from .data import data_transformers -from ... 
import utils, expr -from .display import renderers, VEGALITE_VERSION, VEGAEMBED_VERSION, VEGA_VERSION -from .theme import themes -from .compiler import vegalite_compilers -from ...utils._vegafusion_data import ( - using_vegafusion as _using_vegafusion, - compile_with_vegafusion as _compile_with_vegafusion, -) -from ...utils.core import _DataFrameLike - -if sys.version_info >= (3, 11): - from typing import Self -else: - from typing_extensions import Self - - -# ------------------------------------------------------------------------ -# Data Utilities -def _dataset_name(values): - """Generate a unique hash of the data - - Parameters - ---------- - values : list or dict - A list/dict representation of data values. - - Returns - ------- - name : string - A unique name generated from the hash of the values. - """ - if isinstance(values, core.InlineDataset): - values = values.to_dict() - if values == [{}]: - return "empty" - values_json = json.dumps(values, sort_keys=True) - hsh = hashlib.md5(values_json.encode()).hexdigest() - return "data-" + hsh - - -def _consolidate_data(data, context): - """If data is specified inline, then move it to context['datasets'] - - This function will modify context in-place, and return a new version of data - """ - values = Undefined - kwds = {} - - if isinstance(data, core.InlineData): - if data.name is Undefined and data.values is not Undefined: - if isinstance(data.values, core.InlineDataset): - values = data.to_dict()["values"] - else: - values = data.values - kwds = {"format": data.format} - - elif isinstance(data, dict): - if "name" not in data and "values" in data: - values = data["values"] - kwds = {k: v for k, v in data.items() if k != "values"} - - if values is not Undefined: - name = _dataset_name(values) - data = core.NamedData(name=name, **kwds) - context.setdefault("datasets", {})[name] = values - - return data - - -def _prepare_data(data, context=None): - """Convert input data to data for use within schema - - Parameters - ---------- - data : - The input dataset in the form of a DataFrame, dictionary, altair data - object, or other type that is recognized by the data transformers. - context : dict (optional) - The to_dict context in which the data is being prepared. This is used - to keep track of information that needs to be passed up and down the - recursive serialization routine, such as global named datasets. 
- """ - if data is Undefined: - return data - - # convert dataframes or objects with __geo_interface__ to dict - elif isinstance(data, pd.DataFrame) or hasattr(data, "__geo_interface__"): - data = _pipe(data, data_transformers.get()) - - # convert string input to a URLData - elif isinstance(data, str): - data = core.UrlData(data) - - elif hasattr(data, "__dataframe__"): - data = _pipe(data, data_transformers.get()) - - # consolidate inline data to top-level datasets - if context is not None and data_transformers.consolidate_datasets: - data = _consolidate_data(data, context) - - # if data is still not a recognized type, then return - if not isinstance(data, (dict, core.Data)): - warnings.warn("data of type {} not recognized".format(type(data)), stacklevel=1) - - return data - - -# ------------------------------------------------------------------------ -# Aliases & specializations -Bin = core.BinParams -Impute = core.ImputeParams -Title = core.TitleParams - - -class LookupData(core.LookupData): - @utils.use_signature(core.LookupData) - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - - def to_dict(self, *args, **kwargs): - """Convert the chart to a dictionary suitable for JSON export.""" - copy = self.copy(deep=False) - copy.data = _prepare_data(copy.data, kwargs.get("context")) - return super(LookupData, copy).to_dict(*args, **kwargs) - - -class FacetMapping(core.FacetMapping): - _class_is_valid_at_instantiation = False - - @utils.use_signature(core.FacetMapping) - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - - def to_dict(self, *args, **kwargs): - copy = self.copy(deep=False) - context = kwargs.get("context", {}) - data = context.get("data", None) - if isinstance(self.row, str): - copy.row = core.FacetFieldDef(**utils.parse_shorthand(self.row, data)) - if isinstance(self.column, str): - copy.column = core.FacetFieldDef(**utils.parse_shorthand(self.column, data)) - return super(FacetMapping, copy).to_dict(*args, **kwargs) - - -# ------------------------------------------------------------------------ -# Encoding will contain channel objects that aren't valid at instantiation -core.FacetedEncoding._class_is_valid_at_instantiation = False - -# ------------------------------------------------------------------------ -# These are parameters that are valid at the top level, but are not valid -# for specs that are within a composite chart -# (layer, hconcat, vconcat, facet, repeat) -TOPLEVEL_ONLY_KEYS = {"background", "config", "autosize", "padding", "$schema"} - - -def _get_channels_mapping(): - mapping = {} - for attr in dir(channels): - cls = getattr(channels, attr) - if isinstance(cls, type) and issubclass(cls, core.SchemaBase): - mapping[cls] = attr.replace("Value", "").lower() - return mapping - - -# ------------------------------------------------------------------------- -# Tools for working with parameters -class Parameter(expr.core.OperatorMixin, object): - """A Parameter object""" - - _counter = 0 - - @classmethod - def _get_name(cls): - cls._counter += 1 - return f"param_{cls._counter}" - - def __init__(self, name): - if name is None: - name = self._get_name() - self.name = name - - @utils.deprecation.deprecated( - message="'ref' is deprecated. No need to call '.ref()' anymore." - ) - def ref(self): - "'ref' is deprecated. No need to call '.ref()' anymore." 
- return self.to_dict() - - def to_dict(self): - if self.param_type == "variable": - return {"expr": self.name} - elif self.param_type == "selection": - return { - "param": self.name.to_dict() - if hasattr(self.name, "to_dict") - else self.name - } - - def __invert__(self): - if self.param_type == "selection": - return SelectionPredicateComposition({"not": {"param": self.name}}) - else: - return expr.core.OperatorMixin.__invert__(self) - - def __and__(self, other): - if self.param_type == "selection": - if isinstance(other, Parameter): - other = {"param": other.name} - return SelectionPredicateComposition({"and": [{"param": self.name}, other]}) - else: - return expr.core.OperatorMixin.__and__(self, other) - - def __or__(self, other): - if self.param_type == "selection": - if isinstance(other, Parameter): - other = {"param": other.name} - return SelectionPredicateComposition({"or": [{"param": self.name}, other]}) - else: - return expr.core.OperatorMixin.__or__(self, other) - - def __repr__(self): - return "Parameter({0!r}, {1})".format(self.name, self.param) - - def _to_expr(self): - return self.name - - def _from_expr(self, expr): - return ParameterExpression(expr=expr) - - def __getattr__(self, field_name): - if field_name.startswith("__") and field_name.endswith("__"): - raise AttributeError(field_name) - _attrexpr = expr.core.GetAttrExpression(self.name, field_name) - # If self is a SelectionParameter and field_name is in its - # fields or encodings list, then we want to return an expression. - if check_fields_and_encodings(self, field_name): - return SelectionExpression(_attrexpr) - return expr.core.GetAttrExpression(self.name, field_name) - - # TODO: Are there any special cases to consider for __getitem__? - # This was copied from v4. - def __getitem__(self, field_name): - return expr.core.GetItemExpression(self.name, field_name) - - -# Enables use of ~, &, | with compositions of selection objects. -class SelectionPredicateComposition(core.PredicateComposition): - def __invert__(self): - return SelectionPredicateComposition({"not": self.to_dict()}) - - def __and__(self, other): - return SelectionPredicateComposition({"and": [self.to_dict(), other.to_dict()]}) - - def __or__(self, other): - return SelectionPredicateComposition({"or": [self.to_dict(), other.to_dict()]}) - - -class ParameterExpression(expr.core.OperatorMixin, object): - def __init__(self, expr): - self.expr = expr - - def to_dict(self): - return {"expr": repr(self.expr)} - - def _to_expr(self): - return repr(self.expr) - - def _from_expr(self, expr): - return ParameterExpression(expr=expr) - - -class SelectionExpression(expr.core.OperatorMixin, object): - def __init__(self, expr): - self.expr = expr - - def to_dict(self): - return {"expr": repr(self.expr)} - - def _to_expr(self): - return repr(self.expr) - - def _from_expr(self, expr): - return SelectionExpression(expr=expr) - - -def check_fields_and_encodings(parameter, field_name): - for prop in ["fields", "encodings"]: - try: - if field_name in getattr(parameter.param.select, prop): - return True - except (AttributeError, TypeError): - pass - - return False - - -# ------------------------------------------------------------------------ -# Top-Level Functions - - -def value(value, **kwargs): - """Specify a value for use in an encoding""" - return dict(value=value, **kwargs) - - -def param( - name=None, - value=Undefined, - bind=Undefined, - empty=Undefined, - expr=Undefined, - **kwds, -): - """Create a named parameter. 
See https://altair-viz.github.io/user_guide/interactions.html for examples. Although both variable parameters and selection parameters can be created using this 'param' function, to create a selection parameter, it is recommended to use either 'selection_point' or 'selection_interval' instead. - - Parameters - ---------- - name : string (optional) - The name of the parameter. If not specified, a unique name will be - created. - value : any (optional) - The default value of the parameter. If not specified, the parameter - will be created without a default value. - bind : :class:`Binding` (optional) - Binds the parameter to an external input element such as a slider, - selection list or radio button group. - empty : boolean (optional) - For selection parameters, the predicate of empty selections returns - True by default. Override this behavior, by setting this property - 'empty=False'. - expr : :class:`Expr` (optional) - An expression for the value of the parameter. This expression may - include other parameters, in which case the parameter will - automatically update in response to upstream parameter changes. - **kwds : - additional keywords will be used to construct a parameter. If 'select' - is among the keywords, then a selection parameter will be created. - Otherwise, a variable parameter will be created. - - Returns - ------- - parameter: Parameter - The parameter object that can be used in chart creation. - """ - parameter = Parameter(name) - - if empty is not Undefined: - parameter.empty = empty - if parameter.empty == "none": - warnings.warn( - """The value of 'empty' should be True or False.""", - utils.AltairDeprecationWarning, - stacklevel=1, - ) - parameter.empty = False - elif parameter.empty == "all": - warnings.warn( - """The value of 'empty' should be True or False.""", - utils.AltairDeprecationWarning, - stacklevel=1, - ) - parameter.empty = True - elif (parameter.empty is False) or (parameter.empty is True): - pass - else: - raise ValueError("The value of 'empty' should be True or False.") - - if "init" in kwds: - warnings.warn( - """Use 'value' instead of 'init'.""", - utils.AltairDeprecationWarning, - stacklevel=1, - ) - if value is Undefined: - kwds["value"] = kwds.pop("init") - else: - # If both 'value' and 'init' are set, we ignore 'init'. 
- kwds.pop("init") - - if "select" not in kwds: - parameter.param = core.VariableParameter( - name=parameter.name, bind=bind, value=value, expr=expr, **kwds - ) - parameter.param_type = "variable" - elif "views" in kwds: - parameter.param = core.TopLevelSelectionParameter( - name=parameter.name, bind=bind, value=value, expr=expr, **kwds - ) - parameter.param_type = "selection" - else: - parameter.param = core.SelectionParameter( - name=parameter.name, bind=bind, value=value, expr=expr, **kwds - ) - parameter.param_type = "selection" - - return parameter - - -def _selection(type=Undefined, **kwds): - # We separate out the parameter keywords from the selection keywords - param_kwds = {} - - for kwd in {"name", "bind", "value", "empty", "init", "views"}: - if kwd in kwds: - param_kwds[kwd] = kwds.pop(kwd) - - if type == "interval": - select = core.IntervalSelectionConfig(type=type, **kwds) - elif type == "point": - select = core.PointSelectionConfig(type=type, **kwds) - elif type in ["single", "multi"]: - select = core.PointSelectionConfig(type="point", **kwds) - warnings.warn( - """The types 'single' and 'multi' are now - combined and should be specified using "selection_point()".""", - utils.AltairDeprecationWarning, - stacklevel=1, - ) - else: - raise ValueError("""'type' must be 'point' or 'interval'""") - - return param(select=select, **param_kwds) - - -@utils.deprecation.deprecated( - message="""'selection' is deprecated. - Use 'selection_point()' or 'selection_interval()' instead; these functions also include more helpful docstrings.""" -) -def selection(type=Undefined, **kwds): - """ - Users are recommended to use either 'selection_point' or 'selection_interval' instead, depending on the type of parameter they want to create. - - Create a selection parameter. - - Parameters - ---------- - type : enum('point', 'interval') (required) - Determines the default event processing and data query for the - selection. Vega-Lite currently supports two selection types: - * "point" - to select multiple discrete data values; the first - value is selected on click and additional values toggled on - shift-click. - * "interval" - to select a continuous range of data values on - drag. - **kwds : - additional keywords to control the selection. - """ - - return _selection(type=type, **kwds) - - -def selection_interval( - name=None, - value=Undefined, - bind=Undefined, - empty=Undefined, - expr=Undefined, - encodings=Undefined, - on=Undefined, - clear=Undefined, - resolve=Undefined, - mark=Undefined, - translate=Undefined, - zoom=Undefined, - **kwds, -): - """Create an interval selection parameter. Selection parameters define data queries that are driven by direct manipulation from user input (e.g., mouse clicks or drags). Interval selection parameters are used to select a continuous range of data values on drag, whereas point selection parameters (`selection_point`) are used to select multiple discrete data values. - - Parameters - ---------- - name : string (optional) - The name of the parameter. If not specified, a unique name will be - created. - value : any (optional) - The default value of the parameter. If not specified, the parameter - will be created without a default value. - bind : :class:`Binding` (optional) - Binds the parameter to an external input element such as a slider, - selection list or radio button group. - empty : boolean (optional) - For selection parameters, the predicate of empty selections returns - True by default.
Override this behavior, by setting this property - 'empty=False'. - expr : :class:`Expr` (optional) - An expression for the value of the parameter. This expression may - include other parameters, in which case the parameter will - automatically update in response to upstream parameter changes. - encodings : List[str] (optional) - A list of encoding channels. The corresponding data field values - must match for a data tuple to fall within the selection. - on : string (optional) - A Vega event stream (object or selector) that triggers the selection. - For interval selections, the event stream must specify a start and end. - clear : string or boolean (optional) - Clears the selection, emptying it of all values. This property can - be an Event Stream or False to disable clear. Default is 'dblclick'. - resolve : enum('global', 'union', 'intersect') (optional) - With layered and multi-view displays, a strategy that determines - how selections' data queries are resolved when applied in a filter - transform, conditional encoding rule, or scale domain. - One of: - - * 'global': only one brush exists for the entire SPLOM. When the - user begins to drag, any previous brushes are cleared, and a - new one is constructed. - * 'union': each cell contains its own brush, and points are - highlighted if they lie within any of these individual brushes. - * 'intersect': each cell contains its own brush, and points are - highlighted only if they fall within all of these individual - brushes. - - The default is 'global'. - mark : :class:`Mark` (optional) - An interval selection also adds a rectangle mark to depict the - extents of the interval. The mark property can be used to - customize the appearance of the mark. - translate : string or boolean (optional) - When truthy, allows a user to interactively move an interval - selection back-and-forth. Can be True, False (to disable panning), - or a Vega event stream definition which must include a start and - end event to trigger continuous panning. Discrete panning (e.g., - pressing the left/right arrow keys) will be supported in future - versions. - The default value is True, which corresponds to - [mousedown, window:mouseup] > window:mousemove! - This default allows users to click and drag within an interval - selection to reposition it. - zoom : string or boolean (optional) - When truthy, allows a user to interactively resize an interval - selection. Can be True, False (to disable zooming), or a Vega - event stream definition. Currently, only wheel events are supported, - but custom event streams can still be used to specify filters, - debouncing, and throttling. Future versions will expand the set of - events that can trigger this transformation. - The default value is True, which corresponds to wheel!. This - default allows users to use the mouse wheel to resize an interval - selection. - **kwds : - Additional keywords to control the selection. - - Returns - ------- - parameter: Parameter - The parameter object that can be used in chart creation. 
- """ - return _selection( - type="interval", - name=name, - value=value, - bind=bind, - empty=empty, - expr=expr, - encodings=encodings, - on=on, - clear=clear, - resolve=resolve, - mark=mark, - translate=translate, - zoom=zoom, - **kwds, - ) - - -def selection_point( - name=None, - value=Undefined, - bind=Undefined, - empty=Undefined, - expr=Undefined, - encodings=Undefined, - fields=Undefined, - on=Undefined, - clear=Undefined, - resolve=Undefined, - toggle=Undefined, - nearest=Undefined, - **kwds, -): - """Create a point selection parameter. Selection parameters define data queries that are driven by direct manipulation from user input (e.g., mouse clicks or drags). Point selection parameters are used to select multiple discrete data values; the first value is selected on click and additional values toggled on shift-click. To select a continuous range of data values on drag, interval selection parameters (`selection_interval`) can be used instead. - - Parameters - ---------- - name : string (optional) - The name of the parameter. If not specified, a unique name will be - created. - value : any (optional) - The default value of the parameter. If not specified, the parameter - will be created without a default value. - bind : :class:`Binding` (optional) - Binds the parameter to an external input element such as a slider, - selection list or radio button group. - empty : boolean (optional) - For selection parameters, the predicate of empty selections returns - True by default. Override this behavior, by setting this property - 'empty=False'. - expr : :class:`Expr` (optional) - An expression for the value of the parameter. This expression may - include other parameters, in which case the parameter will - automatically update in response to upstream parameter changes. - encodings : List[str] (optional) - A list of encoding channels. The corresponding data field values - must match for a data tuple to fall within the selection. - fields : List[str] (optional) - A list of field names whose values must match for a data tuple to - fall within the selection. - on : string (optional) - A Vega event stream (object or selector) that triggers the selection. - For interval selections, the event stream must specify a start and end. - clear : string or boolean (optional) - Clears the selection, emptying it of all values. This property can - be an Event Stream or False to disable clear. Default is 'dblclick'. - resolve : enum('global', 'union', 'intersect') (optional) - With layered and multi-view displays, a strategy that determines - how selections' data queries are resolved when applied in a filter - transform, conditional encoding rule, or scale domain. - One of: - - * 'global': only one brush exists for the entire SPLOM. When the - user begins to drag, any previous brushes are cleared, and a - new one is constructed. - * 'union': each cell contains its own brush, and points are - highlighted if they lie within any of these individual brushes. - * 'intersect': each cell contains its own brush, and points are - highlighted only if they fall within all of these individual - brushes. - - The default is 'global'. - toggle : string or boolean (optional) - Controls whether data values should be toggled (inserted or - removed from a point selection) or only ever inserted into - point selections. - One of: - - * True (default): the toggle behavior, which corresponds to - "event.shiftKey". As a result, data values are toggled - when the user interacts with the shift-key pressed.
- * False: disables toggling behaviour; the selection will - only ever contain a single data value corresponding - to the most recent interaction. - * A Vega expression which is re-evaluated as the user interacts. - If the expression evaluates to True, the data value is - toggled into or out of the point selection. If the expression - evaluates to False, the point selection is first cleared, and - the data value is then inserted. For example, setting the - value to the Vega expression True will toggle data values - without the user pressing the shift-key. - - nearest : boolean (optional) - When true, an invisible voronoi diagram is computed to accelerate - discrete selection. The data value nearest the mouse cursor is - added to the selection. The default is False, which means that - data values must be interacted with directly (e.g., clicked on) - to be added to the selection. - **kwds : - Additional keywords to control the selection. - - Returns - ------- - parameter: Parameter - The parameter object that can be used in chart creation. - """ - return _selection( - type="point", - name=name, - value=value, - bind=bind, - empty=empty, - expr=expr, - encodings=encodings, - fields=fields, - on=on, - clear=clear, - resolve=resolve, - toggle=toggle, - nearest=nearest, - **kwds, - ) - - -@utils.deprecation.deprecated( - message="'selection_multi' is deprecated. Use 'selection_point'" -) -@utils.use_signature(core.PointSelectionConfig) -def selection_multi(**kwargs): - """'selection_multi' is deprecated. Use 'selection_point'""" - return _selection(type="point", **kwargs) - - -@utils.deprecation.deprecated( - message="'selection_single' is deprecated. Use 'selection_point'" -) -@utils.use_signature(core.PointSelectionConfig) -def selection_single(**kwargs): - """'selection_single' is deprecated. Use 'selection_point'""" - return _selection(type="point", **kwargs) - - -@utils.use_signature(core.Binding) -def binding(input, **kwargs): - """A generic binding""" - return core.Binding(input=input, **kwargs) - - -@utils.use_signature(core.BindCheckbox) -def binding_checkbox(**kwargs): - """A checkbox binding""" - return core.BindCheckbox(input="checkbox", **kwargs) - - -@utils.use_signature(core.BindRadioSelect) -def binding_radio(**kwargs): - """A radio button binding""" - return core.BindRadioSelect(input="radio", **kwargs) - - -@utils.use_signature(core.BindRadioSelect) -def binding_select(**kwargs): - """A select binding""" - return core.BindRadioSelect(input="select", **kwargs) - - -@utils.use_signature(core.BindRange) -def binding_range(**kwargs): - """A range binding""" - return core.BindRange(input="range", **kwargs) - - -# TODO: update the docstring -def condition(predicate, if_true, if_false, **kwargs): - """A conditional attribute or encoding - - Parameters - ---------- - predicate: Selection, PredicateComposition, expr.Expression, dict, or string - the selection predicate or test predicate for the condition. - if a string is passed, it will be treated as a test operand. 
- if_true: - the spec or object to use if the selection predicate is true - if_false: - the spec or object to use if the selection predicate is false - **kwargs: - additional keyword args are added to the resulting dict - - Returns - ------- - spec: dict or VegaLiteSchema - the spec that describes the condition - """ - test_predicates = (str, expr.Expression, core.PredicateComposition) - - if isinstance(predicate, Parameter): - if predicate.param_type == "selection" or predicate.param.expr is Undefined: - condition = {"param": predicate.name} - if "empty" in kwargs: - condition["empty"] = kwargs.pop("empty") - elif isinstance(predicate.empty, bool): - condition["empty"] = predicate.empty - else: - condition = {"test": predicate.param.expr} - elif isinstance(predicate, test_predicates): - condition = {"test": predicate} - elif isinstance(predicate, dict): - condition = predicate - else: - raise NotImplementedError( - "condition predicate of type {}" "".format(type(predicate)) - ) - - if isinstance(if_true, core.SchemaBase): - # convert to dict for now; the from_dict call below will wrap this - # dict in the appropriate schema - if_true = if_true.to_dict() - elif isinstance(if_true, str): - if isinstance(if_false, str): - raise ValueError( - "A field cannot be used for both the `if_true` and `if_false` values of a condition. One of them has to specify a `value` or `datum` definition." - ) - else: - if_true = utils.parse_shorthand(if_true) - if_true.update(kwargs) - condition.update(if_true) - - if isinstance(if_false, core.SchemaBase): - # For the selection, the channel definitions all allow selections - # already. So use this SchemaBase wrapper if possible. - selection = if_false.copy() - selection.condition = condition - elif isinstance(if_false, str): - selection = {"condition": condition, "shorthand": if_false} - selection.update(kwargs) - else: - selection = dict(condition=condition, **if_false) - - return selection - - -# -------------------------------------------------------------------- -# Top-level objects - - -class TopLevelMixin(mixins.ConfigMethodMixin): - """Mixin for top-level chart objects such as Chart, LayeredChart, etc.""" - - _class_is_valid_at_instantiation = False - - def to_dict( - self, - validate: bool = True, - *, - format: str = "vega-lite", - ignore: Optional[List[str]] = None, - context: Optional[TypingDict[str, Any]] = None, - ) -> dict: - """Convert the chart to a dictionary suitable for JSON export - - Parameters - ---------- - validate : bool, optional - If True (default), then validate the output dictionary - against the schema. - format : str, optional - Chart specification format, one of "vega-lite" (default) or "vega" - ignore : list[str], optional - A list of keys to ignore. It is usually not needed - to specify this argument as a user. - context : dict[str, Any], optional - A context dictionary. It is usually not needed - to specify this argument as a user. - - Notes - ----- - Technical: The ignore parameter will *not* be passed to child to_dict - function calls. - - Returns - ------- - dict - The dictionary representation of this chart - - Raises - ------ - SchemaValidationError - if validate=True and the dict does not conform to the schema - """ - - # Validate format - if format not in ("vega-lite", "vega"): - raise ValueError( - f'The format argument must be either "vega-lite" or "vega". Received {repr(format)}' - ) - - # We make use of three context markers: - # - 'data' points to the data that should be referenced for column type - # inference. 
- # - 'top_level' is a boolean flag that is assumed to be true; if it's - # true then a "$schema" arg is added to the dict. - # - 'datasets' is a dict of named datasets that should be inserted - # in the top-level object - # - 'pre_transform' whether data transformations should be pre-evaluated - # if the current data transformer supports it (currently only used when - # the "vegafusion" transformer is enabled) - - # note: not a deep copy because we want datasets and data arguments to - # be passed by reference - context = context.copy() if context else {} - context.setdefault("datasets", {}) - is_top_level = context.get("top_level", True) - - # TopLevelMixin instance does not necessarily have copy defined but due to how - # Altair is set up this should hold. Too complex to type hint right now - copy = self.copy(deep=False) # type: ignore[attr-defined] - original_data = getattr(copy, "data", Undefined) - copy.data = _prepare_data(original_data, context) - - if original_data is not Undefined: - context["data"] = original_data - - # remaining to_dict calls are not at top level - context["top_level"] = False - - # TopLevelMixin instance does not necessarily have to_dict defined - # but due to how Altair is set up this should hold. - # Too complex to type hint right now - vegalite_spec = super(TopLevelMixin, copy).to_dict( # type: ignore[misc] - validate=validate, ignore=ignore, context=dict(context, pre_transform=False) - ) - - # TODO: following entries are added after validation. Should they be validated? - if is_top_level: - # since this is top-level we add $schema if it's missing - if "$schema" not in vegalite_spec: - vegalite_spec["$schema"] = SCHEMA_URL - - # apply theme from theme registry - the_theme = themes.get() - # Use assert to tell type checkers that it is not None. Holds true - # as there is always a default theme set when importing Altair - assert the_theme is not None - vegalite_spec = utils.update_nested(the_theme(), vegalite_spec, copy=True) - - # update datasets - if context["datasets"]: - vegalite_spec.setdefault("datasets", {}).update(context["datasets"]) - - if context.get("pre_transform", True) and _using_vegafusion(): - if format == "vega-lite": - raise ValueError( - 'When the "vegafusion" data transformer is enabled, the \n' - "to_dict() and to_json() chart methods must be called with " - 'format="vega". \n' - "For example: \n" - ' >>> chart.to_dict(format="vega")\n' - ' >>> chart.to_json(format="vega")' - ) - else: - return _compile_with_vegafusion(vegalite_spec) - else: - if format == "vega": - plugin = vegalite_compilers.get() - if plugin is None: - raise ValueError("No active vega-lite compiler plugin found") - return plugin(vegalite_spec) - else: - return vegalite_spec - - def to_json( - self, - validate: bool = True, - indent: int = 2, - sort_keys: bool = True, - *, - format: str = "vega-lite", - ignore: Optional[List[str]] = None, - context: Optional[TypingDict[str, Any]] = None, - **kwargs, - ) -> str: - """Convert a chart to a JSON string - - Parameters - ---------- - validate : bool, optional - If True (default), then validate the output dictionary - against the schema. - indent : int, optional - The number of spaces of indentation to use. The default is 2. - sort_keys : bool, optional - If True (default), sort keys in the output. - format : str, optional - The chart specification format. One of "vega-lite" (default) or "vega". - The "vega" format relies on the active Vega-Lite compiler plugin, which - by default requires the vl-convert-python package. 
- ignore : list[str], optional - A list of keys to ignore. It is usually not needed - to specify this argument as a user. - context : dict[str, Any], optional - A context dictionary. It is usually not needed - to specify this argument as a user. - **kwargs - Additional keyword arguments are passed to ``json.dumps()`` - """ - if ignore is None: - ignore = [] - if context is None: - context = {} - spec = self.to_dict( - validate=validate, format=format, ignore=ignore, context=context - ) - return json.dumps(spec, indent=indent, sort_keys=sort_keys, **kwargs) - - def to_html( - self, - base_url="https://cdn.jsdelivr.net/npm", - output_div="vis", - embed_options=None, - json_kwds=None, - fullhtml=True, - requirejs=False, - ) -> str: - return utils.spec_to_html( - self.to_dict(), - mode="vega-lite", - vegalite_version=VEGALITE_VERSION, - vegaembed_version=VEGAEMBED_VERSION, - vega_version=VEGA_VERSION, - base_url=base_url, - output_div=output_div, - embed_options=embed_options, - json_kwds=json_kwds, - fullhtml=fullhtml, - requirejs=requirejs, - ) - - def save( - self, - fp, - format=None, - override_data_transformer=True, - scale_factor=1.0, - vegalite_version=VEGALITE_VERSION, - vega_version=VEGA_VERSION, - vegaembed_version=VEGAEMBED_VERSION, - **kwargs, - ): - """Save a chart to file in a variety of formats - - Supported formats are json, html, png, svg, pdf; the last three require - the altair_saver package to be installed. - - Parameters - ---------- - fp : string filename or file-like object - file in which to write the chart. - format : string (optional) - the format to write: one of ['json', 'html', 'png', 'svg', 'pdf']. - If not specified, the format will be determined from the filename. - override_data_transformer : `boolean` (optional) - If True (default), then the save action will be done with - the MaxRowsError disabled. If False, then do not change the data - transformer. - scale_factor : float - For svg or png formats, scale the image by this factor when saving. - This can be used to control the size or resolution of the output. - Default is 1.0 - **kwargs : - Additional keyword arguments are passed to the output method - associated with the specified format. - - """ - from ...utils.save import save - - kwds = dict( - chart=self, - fp=fp, - format=format, - scale_factor=scale_factor, - vegalite_version=vegalite_version, - vega_version=vega_version, - vegaembed_version=vegaembed_version, - **kwargs, - ) - - # By default we override the data transformer. This makes it so - # that save() will succeed even for large datasets that would - # normally trigger a MaxRowsError - if override_data_transformer: - with data_transformers.disable_max_rows(): - result = save(**kwds) - else: - result = save(**kwds) - return result - - # Fallback for when rendering fails; the full repr is too long to be - # useful in nearly all cases. 
- def __repr__(self): - return "alt.{}(...)".format(self.__class__.__name__) - - # Layering and stacking - def __add__(self, other): - if not isinstance(other, TopLevelMixin): - raise ValueError("Only Chart objects can be layered.") - return layer(self, other) - - def __and__(self, other): - if not isinstance(other, TopLevelMixin): - raise ValueError("Only Chart objects can be concatenated.") - return vconcat(self, other) - - def __or__(self, other): - if not isinstance(other, TopLevelMixin): - raise ValueError("Only Chart objects can be concatenated.") - return hconcat(self, other) - - def repeat( - self, - repeat=Undefined, - row=Undefined, - column=Undefined, - layer=Undefined, - columns=Undefined, - **kwargs, - ) -> "RepeatChart": - """Return a RepeatChart built from the chart - - Fields within the chart can be set to correspond to the row or - column using `alt.repeat('row')` and `alt.repeat('column')`. - - Parameters - ---------- - repeat : list - a list of data column names to be repeated. This cannot be - used along with the ``row``, ``column`` or ``layer`` argument. - row : list - a list of data column names to be mapped to the row facet - column : list - a list of data column names to be mapped to the column facet - layer : list - a list of data column names to be layered. This cannot be - used along with the ``row``, ``column`` or ``repeat`` argument. - columns : int - the maximum number of columns before wrapping. Only referenced - if ``repeat`` is specified. - **kwargs : - additional keywords passed to RepeatChart. - - Returns - ------- - chart : RepeatChart - a repeated chart. - """ - repeat_specified = repeat is not Undefined - rowcol_specified = row is not Undefined or column is not Undefined - layer_specified = layer is not Undefined - - if repeat_specified and rowcol_specified: - raise ValueError( - "repeat argument cannot be combined with row/column argument." - ) - elif repeat_specified and layer_specified: - raise ValueError("repeat argument cannot be combined with layer argument.") - - if repeat_specified: - repeat = repeat - elif layer_specified: - repeat = core.LayerRepeatMapping(layer=layer, row=row, column=column) - else: - repeat = core.RepeatMapping(row=row, column=column) - - return RepeatChart(spec=self, repeat=repeat, columns=columns, **kwargs) - - def properties(self, **kwargs) -> Self: - """Set top-level properties of the Chart. - - Argument names and types are the same as class initialization. - """ - # ignore type as copy comes from another class for subclasses of TopLevelMixin - copy = self.copy(deep=False) # type: ignore[attr-defined] - for key, val in kwargs.items(): - if key == "selection" and isinstance(val, Parameter): - # TODO: Can this be removed - # For backward compatibility with old selection interface. - setattr(copy, key, {val.name: val.selection}) - else: - # Don't validate data, because it hasn't been processed. 
- if key != "data": - # ignore type as validate_property comes from SchemaBase, - # not from TopLevelMixin - self.validate_property(key, val) # type: ignore[attr-defined] - setattr(copy, key, val) - return copy - - def project( - self, - type=Undefined, - center=Undefined, - clipAngle=Undefined, - clipExtent=Undefined, - coefficient=Undefined, - distance=Undefined, - fraction=Undefined, - lobes=Undefined, - parallel=Undefined, - precision=Undefined, - radius=Undefined, - ratio=Undefined, - reflectX=Undefined, - reflectY=Undefined, - rotate=Undefined, - scale=Undefined, - spacing=Undefined, - tilt=Undefined, - translate=Undefined, - **kwds, - ) -> Self: - """Add a geographic projection to the chart. - - This is generally used either with ``mark_geoshape`` or with the - ``latitude``/``longitude`` encodings. - - Available projection types are - ['albers', 'albersUsa', 'azimuthalEqualArea', 'azimuthalEquidistant', - 'conicConformal', 'conicEqualArea', 'conicEquidistant', 'equalEarth', 'equirectangular', - 'gnomonic', 'identity', 'mercator', 'orthographic', 'stereographic', 'transverseMercator'] - - Parameters - ---------- - type : ProjectionType - The cartographic projection to use. This value is case-insensitive, for example - `"albers"` and `"Albers"` indicate the same projection type. You can find all valid - projection types [in the - documentation](https://vega.github.io/vega-lite/docs/projection.html#projection-types). - - **Default value:** `equalEarth` - center : List(float) - Sets the projection’s center to the specified center, a two-element array of - longitude and latitude in degrees. - - **Default value:** `[0, 0]` - clipAngle : float - Sets the projection’s clipping circle radius to the specified angle in degrees. If - `null`, switches to [antimeridian](http://bl.ocks.org/mbostock/3788999) cutting - rather than small-circle clipping. - clipExtent : List(List(float)) - Sets the projection’s viewport clip extent to the specified bounds in pixels. The - extent bounds are specified as an array `[[x0, y0], [x1, y1]]`, where `x0` is the - left-side of the viewport, `y0` is the top, `x1` is the right and `y1` is the - bottom. If `null`, no viewport clipping is performed. - coefficient : float - - distance : float - - fraction : float - - lobes : float - - parallel : float - - precision : Mapping(required=[length]) - Sets the threshold for the projection’s [adaptive - resampling](http://bl.ocks.org/mbostock/3795544) to the specified value in pixels. - This value corresponds to the [Douglas–Peucker - distance](http://en.wikipedia.org/wiki/Ramer%E2%80%93Douglas%E2%80%93Peucker_algorithm). - If precision is not specified, returns the projection’s current resampling - precision which defaults to `√0.5 ≅ 0.70710…`. - radius : float - - ratio : float - - reflectX : boolean - - reflectY : boolean - - rotate : List(float) - Sets the projection’s three-axis rotation to the specified angles, which must be a - two- or three-element array of numbers [`lambda`, `phi`, `gamma`] specifying the - rotation angles in degrees about each spherical axis. (These correspond to yaw, - pitch and roll.) - - **Default value:** `[0, 0, 0]` - scale : float - Sets the projection's scale (zoom) value, overriding automatic fitting. - - spacing : float - - tilt : float - - translate : List(float) - Sets the projection's translation (pan) value, overriding automatic fitting. 
- - """ - projection = core.Projection( - center=center, - clipAngle=clipAngle, - clipExtent=clipExtent, - coefficient=coefficient, - distance=distance, - fraction=fraction, - lobes=lobes, - parallel=parallel, - precision=precision, - radius=radius, - ratio=ratio, - reflectX=reflectX, - reflectY=reflectY, - rotate=rotate, - scale=scale, - spacing=spacing, - tilt=tilt, - translate=translate, - type=type, - **kwds, - ) - return self.properties(projection=projection) - - def _add_transform(self, *transforms): - """Copy the chart and add specified transforms to chart.transform""" - copy = self.copy(deep=["transform"]) - if copy.transform is Undefined: - copy.transform = [] - copy.transform.extend(transforms) - return copy - - def transform_aggregate( - self, aggregate=Undefined, groupby=Undefined, **kwds - ) -> Self: - """ - Add an :class:`AggregateTransform` to the schema. - - Parameters - ---------- - aggregate : List(:class:`AggregatedFieldDef`) - Array of objects that define fields to aggregate. - groupby : List(string) - The data fields to group by. If not specified, a single group containing all data - objects will be used. - **kwds : - additional keywords are converted to aggregates using standard - shorthand parsing. - - Returns - ------- - self : Chart object - returns chart to allow for chaining - - Examples - -------- - The aggregate transform allows you to specify transforms directly using - the same shorthand syntax as used in encodings: - - >>> import altair as alt - >>> chart1 = alt.Chart().transform_aggregate( - ... mean_acc='mean(Acceleration)', - ... groupby=['Origin'] - ... ) - >>> print(chart1.transform[0].to_json()) # doctest: +NORMALIZE_WHITESPACE - { - "aggregate": [ - { - "as": "mean_acc", - "field": "Acceleration", - "op": "mean" - } - ], - "groupby": [ - "Origin" - ] - } - - It also supports including AggregatedFieldDef instances or dicts directly, - so you can create the above transform like this: - - >>> chart2 = alt.Chart().transform_aggregate( - ... [alt.AggregatedFieldDef(field='Acceleration', op='mean', - ... **{'as': 'mean_acc'})], - ... groupby=['Origin'] - ... ) - >>> chart2.transform == chart1.transform - True - - See Also - -------- - alt.AggregateTransform : underlying transform object - - """ - if aggregate is Undefined: - aggregate = [] - for key, val in kwds.items(): - parsed = utils.parse_shorthand(val) - dct = { - "as": key, - "field": parsed.get("field", Undefined), - "op": parsed.get("aggregate", Undefined), - } - aggregate.append(core.AggregatedFieldDef(**dct)) - return self._add_transform( - core.AggregateTransform(aggregate=aggregate, groupby=groupby) - ) - - def transform_bin(self, as_=Undefined, field=Undefined, bin=True, **kwargs) -> Self: - """ - Add a :class:`BinTransform` to the schema. - - Parameters - ---------- - as_ : anyOf(string, List(string)) - The output fields at which to write the start and end bin values. - bin : anyOf(boolean, :class:`BinParams`) - An object indicating bin properties, or simply ``true`` for using default bin - parameters. - field : string - The data field to bin. - - Returns - ------- - self : Chart object - returns chart to allow for chaining - - Examples - -------- - >>> import altair as alt - >>> chart = alt.Chart().transform_bin("x_binned", "x") - >>> chart.transform[0] - BinTransform({ - as: 'x_binned', - bin: True, - field: 'x' - }) - - >>> chart = alt.Chart().transform_bin("x_binned", "x", - ... 
bin=alt.Bin(maxbins=10)) - >>> chart.transform[0] - BinTransform({ - as: 'x_binned', - bin: BinParams({ - maxbins: 10 - }), - field: 'x' - }) - - See Also - -------- - alt.BinTransform : underlying transform object - - """ - if as_ is not Undefined: - if "as" in kwargs: - raise ValueError( - "transform_bin: both 'as_' and 'as' passed as arguments." - ) - kwargs["as"] = as_ - kwargs["bin"] = bin - kwargs["field"] = field - return self._add_transform(core.BinTransform(**kwargs)) - - def transform_calculate(self, as_=Undefined, calculate=Undefined, **kwargs) -> Self: - """ - Add a :class:`CalculateTransform` to the schema. - - Parameters - ---------- - as_ : string - The field for storing the computed formula value. - calculate : string or alt.expr expression - A `expression `__ - string. Use the variable ``datum`` to refer to the current data object. - **kwargs - transforms can also be passed by keyword argument; see Examples - - Returns - ------- - self : Chart object - returns chart to allow for chaining - - Examples - -------- - >>> import altair as alt - >>> from altair import datum, expr - - >>> chart = alt.Chart().transform_calculate(y = 2 * expr.sin(datum.x)) - >>> chart.transform[0] - CalculateTransform({ - as: 'y', - calculate: (2 * sin(datum.x)) - }) - - It's also possible to pass the ``CalculateTransform`` arguments directly: - - >>> kwds = {'as': 'y', 'calculate': '2 * sin(datum.x)'} - >>> chart = alt.Chart().transform_calculate(**kwds) - >>> chart.transform[0] - CalculateTransform({ - as: 'y', - calculate: '2 * sin(datum.x)' - }) - - As the first form is easier to write and understand, that is the - recommended method. - - See Also - -------- - alt.CalculateTransform : underlying transform object - """ - if as_ is Undefined: - as_ = kwargs.pop("as", Undefined) - elif "as" in kwargs: - raise ValueError( - "transform_calculate: both 'as_' and 'as' passed as arguments." - ) - if as_ is not Undefined or calculate is not Undefined: - dct = {"as": as_, "calculate": calculate} - self = self._add_transform(core.CalculateTransform(**dct)) - for as_, calculate in kwargs.items(): - dct = {"as": as_, "calculate": calculate} - self = self._add_transform(core.CalculateTransform(**dct)) - return self - - def transform_density( - self, - density, - as_=Undefined, - bandwidth=Undefined, - counts=Undefined, - cumulative=Undefined, - extent=Undefined, - groupby=Undefined, - maxsteps=Undefined, - minsteps=Undefined, - steps=Undefined, - ) -> Self: - """Add a :class:`DensityTransform` to the spec. - - Parameters - ---------- - density : str - The data field for which to perform density estimation. - as_ : [str, str] - The output fields for the sample value and corresponding density estimate. - **Default value:** ``["value", "density"]`` - bandwidth : float - The bandwidth (standard deviation) of the Gaussian kernel. If unspecified or set to - zero, the bandwidth value is automatically estimated from the input data using - Scott’s rule. - counts : boolean - A boolean flag indicating if the output values should be probability estimates - (false) or smoothed counts (true). - **Default value:** ``false`` - cumulative : boolean - A boolean flag indicating whether to produce density estimates (false) or cumulative - density estimates (true). - **Default value:** ``false`` - extent : List([float, float]) - A [min, max] domain from which to sample the distribution. If unspecified, the - extent will be determined by the observed minimum and maximum values of the density - value field. 
- groupby : List(str) - The data fields to group by. If not specified, a single group containing all data - objects will be used. - maxsteps : float - The maximum number of samples to take along the extent domain for plotting the - density. **Default value:** ``200`` - minsteps : float - The minimum number of samples to take along the extent domain for plotting the - density. **Default value:** ``25`` - steps : float - The exact number of samples to take along the extent domain for plotting the - density. If specified, overrides both minsteps and maxsteps to set an exact number - of uniform samples. Potentially useful in conjunction with a fixed extent to ensure - consistent sample points for stacked densities. - """ - return self._add_transform( - core.DensityTransform( - density=density, - bandwidth=bandwidth, - counts=counts, - cumulative=cumulative, - extent=extent, - groupby=groupby, - maxsteps=maxsteps, - minsteps=minsteps, - steps=steps, - **{"as": as_}, - ) - ) - - def transform_impute( - self, - impute, - key, - frame=Undefined, - groupby=Undefined, - keyvals=Undefined, - method=Undefined, - value=Undefined, - ) -> Self: - """ - Add an :class:`ImputeTransform` to the schema. - - Parameters - ---------- - impute : string - The data field for which the missing values should be imputed. - key : string - A key field that uniquely identifies data objects within a group. - Missing key values (those occurring in the data but not in the current group) will - be imputed. - frame : List(anyOf(None, float)) - A frame specification as a two-element array used to control the window over which - the specified method is applied. The array entries should either be a number - indicating the offset from the current data object, or null to indicate unbounded - rows preceding or following the current data object. For example, the value ``[-5, - 5]`` indicates that the window should include five objects preceding and five - objects following the current object. - **Default value:** : ``[null, null]`` indicating that the window includes all - objects. - groupby : List(string) - An optional array of fields by which to group the values. - Imputation will then be performed on a per-group basis. - keyvals : anyOf(List(Mapping(required=[])), :class:`ImputeSequence`) - Defines the key values that should be considered for imputation. - An array of key values or an object defining a `number sequence - `__. - If provided, this will be used in addition to the key values observed within the - input data. If not provided, the values will be derived from all unique values of - the ``key`` field. For ``impute`` in ``encoding``, the key field is the x-field if - the y-field is imputed, or vice versa. - If there is no impute grouping, this property *must* be specified. - method : :class:`ImputeMethod` - The imputation method to use for the field value of imputed data objects. - One of ``value``, ``mean``, ``median``, ``max`` or ``min``. - **Default value:** ``"value"`` - value : Mapping(required=[]) - The field value to use when the imputation ``method`` is ``"value"``. 
- - Returns - ------- - self : Chart object - returns chart to allow for chaining - - See Also - -------- - alt.ImputeTransform : underlying transform object - """ - return self._add_transform( - core.ImputeTransform( - impute=impute, - key=key, - frame=frame, - groupby=groupby, - keyvals=keyvals, - method=method, - value=value, - ) - ) - - def transform_joinaggregate( - self, joinaggregate=Undefined, groupby=Undefined, **kwargs - ) -> Self: - """ - Add a :class:`JoinAggregateTransform` to the schema. - - Parameters - ---------- - joinaggregate : List(:class:`JoinAggregateFieldDef`) - The definition of the fields in the join aggregate, and what calculations to use. - groupby : List(string) - The data fields for partitioning the data objects into separate groups. If - unspecified, all data points will be in a single group. - **kwargs - joinaggregates can also be passed by keyword argument; see Examples. - - Returns - ------- - self : Chart object - returns chart to allow for chaining - - Examples - -------- - >>> import altair as alt - >>> chart = alt.Chart().transform_joinaggregate(x='sum(y)') - >>> chart.transform[0] - JoinAggregateTransform({ - joinaggregate: [JoinAggregateFieldDef({ - as: 'x', - field: 'y', - op: 'sum' - })] - }) - - See Also - -------- - alt.JoinAggregateTransform : underlying transform object - """ - if joinaggregate is Undefined: - joinaggregate = [] - for key, val in kwargs.items(): - parsed = utils.parse_shorthand(val) - dct = { - "as": key, - "field": parsed.get("field", Undefined), - "op": parsed.get("aggregate", Undefined), - } - joinaggregate.append(core.JoinAggregateFieldDef(**dct)) - return self._add_transform( - core.JoinAggregateTransform(joinaggregate=joinaggregate, groupby=groupby) - ) - - def transform_extent(self, extent: str, param: str) -> Self: - """Add a :class:`ExtentTransform` to the spec. - - Parameters - ---------- - extent : str - The field of which to get the extent. - param : str - The name of the output parameter which will be created by - the extent transform. - - Returns - ------- - self : Chart object - returns chart to allow for chaining - """ - return self._add_transform(core.ExtentTransform(extent=extent, param=param)) - - # TODO: Update docstring - def transform_filter(self, filter, **kwargs) -> Self: - """ - Add a :class:`FilterTransform` to the schema. - - Parameters - ---------- - filter : a filter expression or :class:`PredicateComposition` - The `filter` property must be one of the predicate definitions: - (1) a string or alt.expr expression - (2) a range predicate - (3) a selection predicate - (4) a logical operand combining (1)-(3) - (5) a Selection object - - Returns - ------- - self : Chart object - returns chart to allow for chaining - - See Also - -------- - alt.FilterTransform : underlying transform object - - """ - if isinstance(filter, Parameter): - new_filter = {"param": filter.name} - if "empty" in kwargs: - new_filter["empty"] = kwargs.pop("empty") - elif isinstance(filter.empty, bool): - new_filter["empty"] = filter.empty - filter = new_filter - return self._add_transform(core.FilterTransform(filter=filter, **kwargs)) - - def transform_flatten(self, flatten, as_=Undefined) -> Self: - """Add a :class:`FlattenTransform` to the schema. - - Parameters - ---------- - flatten : List(string) - An array of one or more data fields containing arrays to flatten. - If multiple fields are specified, their array values should have a parallel - structure, ideally with the same length. 
- If the lengths of parallel arrays do not match, - the longest array will be used with ``null`` values added for missing entries. - as : List(string) - The output field names for extracted array values. - **Default value:** The field name of the corresponding array field - - Returns - ------- - self : Chart object - returns chart to allow for chaining - - See Also - -------- - alt.FlattenTransform : underlying transform object - """ - return self._add_transform( - core.FlattenTransform(flatten=flatten, **{"as": as_}) - ) - - def transform_fold(self, fold, as_=Undefined) -> Self: - """Add a :class:`FoldTransform` to the spec. - - Parameters - ---------- - fold : List(string) - An array of data fields indicating the properties to fold. - as : [string, string] - The output field names for the key and value properties produced by the fold - transform. Default: ``["key", "value"]`` - - Returns - ------- - self : Chart object - returns chart to allow for chaining - - See Also - -------- - Chart.transform_pivot : pivot transform - opposite of fold. - alt.FoldTransform : underlying transform object - """ - return self._add_transform(core.FoldTransform(fold=fold, **{"as": as_})) - - def transform_loess( - self, - on, - loess, - as_=Undefined, - bandwidth=Undefined, - groupby=Undefined, - ) -> Self: - """Add a :class:`LoessTransform` to the spec. - - Parameters - ---------- - on : str - The data field of the independent variable to use a predictor. - loess : str - The data field of the dependent variable to smooth. - as_ : [str, str] - The output field names for the smoothed points generated by the loess transform. - **Default value:** The field names of the input x and y values. - bandwidth : float - A bandwidth parameter in the range ``[0, 1]`` that determines the amount of - smoothing. **Default value:** ``0.3`` - groupby : List(str) - The data fields to group by. If not specified, a single group containing all data - objects will be used. - - Returns - ------- - self : Chart object - returns chart to allow for chaining - - See Also - -------- - Chart.transform_regression: regression transform - alt.LoessTransform : underlying transform object - """ - return self._add_transform( - core.LoessTransform( - loess=loess, on=on, bandwidth=bandwidth, groupby=groupby, **{"as": as_} - ) - ) - - def transform_lookup( - self, - lookup=Undefined, - from_=Undefined, - as_=Undefined, - default=Undefined, - **kwargs, - ) -> Self: - """Add a :class:`DataLookupTransform` or :class:`SelectionLookupTransform` to the chart - - Parameters - ---------- - lookup : string - Key in primary data source. - from_ : anyOf(:class:`LookupData`, :class:`LookupSelection`) - Secondary data reference. - as_ : anyOf(string, List(string)) - The output fields on which to store the looked up data values. - - For data lookups, this property may be left blank if ``from_.fields`` - has been specified (those field names will be used); if ``from_.fields`` - has not been specified, ``as_`` must be a string. - - For selection lookups, this property is optional: if unspecified, - looked up values will be stored under a property named for the selection; - and if specified, it must correspond to ``from_.fields``. - default : string - The default value to use if lookup fails. 
**Default value:** ``null`` - - Returns - ------- - self : Chart object - returns chart to allow for chaining - - See Also - -------- - alt.DataLookupTransform : underlying transform object - alt.SelectionLookupTransform : underlying transform object - """ - if as_ is not Undefined: - if "as" in kwargs: - raise ValueError( - "transform_lookup: both 'as_' and 'as' passed as arguments." - ) - kwargs["as"] = as_ - if from_ is not Undefined: - if "from" in kwargs: - raise ValueError( - "transform_lookup: both 'from_' and 'from' passed as arguments." - ) - kwargs["from"] = from_ - kwargs["lookup"] = lookup - kwargs["default"] = default - return self._add_transform(core.LookupTransform(**kwargs)) - - def transform_pivot( - self, - pivot, - value, - groupby=Undefined, - limit=Undefined, - op=Undefined, - ) -> Self: - """Add a :class:`PivotTransform` to the chart. - - Parameters - ---------- - pivot : str - The data field to pivot on. The unique values of this field become new field names - in the output stream. - value : str - The data field to populate pivoted fields. The aggregate values of this field become - the values of the new pivoted fields. - groupby : List(str) - The optional data fields to group by. If not specified, a single group containing - all data objects will be used. - limit : float - An optional parameter indicating the maximum number of pivoted fields to generate. - The default ( ``0`` ) applies no limit. The pivoted ``pivot`` names are sorted in - ascending order prior to enforcing the limit. - **Default value:** ``0`` - op : string - The aggregation operation to apply to grouped ``value`` field values. - **Default value:** ``sum`` - - Returns - ------- - self : Chart object - returns chart to allow for chaining - - See Also - -------- - Chart.transform_fold : fold transform - opposite of pivot. - alt.PivotTransform : underlying transform object - """ - return self._add_transform( - core.PivotTransform( - pivot=pivot, value=value, groupby=groupby, limit=limit, op=op - ) - ) - - def transform_quantile( - self, - quantile, - as_=Undefined, - groupby=Undefined, - probs=Undefined, - step=Undefined, - ) -> Self: - """Add a :class:`QuantileTransform` to the chart - - Parameters - ---------- - quantile : str - The data field for which to perform quantile estimation. - as : [str, str] - The output field names for the probability and quantile values. - groupby : List(str) - The data fields to group by. If not specified, a single group containing all data - objects will be used. - probs : List(float) - An array of probabilities in the range (0, 1) for which to compute quantile values. - If not specified, the *step* parameter will be used. - step : float - A probability step size (default 0.01) for sampling quantile values. All values from - one-half the step size up to 1 (exclusive) will be sampled. This parameter is only - used if the *probs* parameter is not provided. **Default value:** ``["prob", "value"]`` - - Returns - ------- - self : Chart object - returns chart to allow for chaining - - See Also - -------- - alt.QuantileTransform : underlying transform object - """ - return self._add_transform( - core.QuantileTransform( - quantile=quantile, - groupby=groupby, - probs=probs, - step=step, - **{"as": as_}, - ) - ) - - def transform_regression( - self, - on, - regression, - as_=Undefined, - extent=Undefined, - groupby=Undefined, - method=Undefined, - order=Undefined, - params=Undefined, - ) -> Self: - """Add a :class:`RegressionTransform` to the chart. 
- - Parameters - ---------- - on : str - The data field of the independent variable to use a predictor. - regression : str - The data field of the dependent variable to predict. - as_ : [str, str] - The output field names for the smoothed points generated by the regression - transform. **Default value:** The field names of the input x and y values. - extent : [float, float] - A [min, max] domain over the independent (x) field for the starting and ending - points of the generated trend line. - groupby : List(str) - The data fields to group by. If not specified, a single group containing all data - objects will be used. - method : enum('linear', 'log', 'exp', 'pow', 'quad', 'poly') - The functional form of the regression model. One of ``"linear"``, ``"log"``, - ``"exp"``, ``"pow"``, ``"quad"``, or ``"poly"``. **Default value:** ``"linear"`` - order : float - The polynomial order (number of coefficients) for the 'poly' method. - **Default value:** ``3`` - params : boolean - A boolean flag indicating if the transform should return the regression model - parameters (one object per group), rather than trend line points. - The resulting objects include a ``coef`` array of fitted coefficient values - (starting with the intercept term and then including terms of increasing order) - and an ``rSquared`` value (indicating the total variance explained by the model). - **Default value:** ``false`` - - Returns - ------- - self : Chart object - returns chart to allow for chaining - - See Also - -------- - Chart.transform_loess : LOESS transform - alt.RegressionTransform : underlying transform object - """ - return self._add_transform( - core.RegressionTransform( - regression=regression, - on=on, - extent=extent, - groupby=groupby, - method=method, - order=order, - params=params, - **{"as": as_}, - ) - ) - - def transform_sample(self, sample=1000) -> Self: - """ - Add a :class:`SampleTransform` to the schema. - - Parameters - ---------- - sample : float - The maximum number of data objects to include in the sample. Default: 1000. - - Returns - ------- - self : Chart object - returns chart to allow for chaining - - See Also - -------- - alt.SampleTransform : underlying transform object - """ - return self._add_transform(core.SampleTransform(sample)) - - def transform_stack( - self, as_, stack, groupby, offset=Undefined, sort=Undefined - ) -> Self: - """ - Add a :class:`StackTransform` to the schema. - - Parameters - ---------- - as_ : anyOf(string, List(string)) - Output field names. This can be either a string or an array of strings with - two elements denoting the name for the fields for stack start and stack end - respectively. - If a single string(eg."val") is provided, the end field will be "val_end". - stack : string - The field which is stacked. - groupby : List(string) - The data fields to group by. - offset : enum('zero', 'center', 'normalize') - Mode for stacking marks. Default: 'zero'. - sort : List(:class:`SortField`) - Field that determines the order of leaves in the stacked charts. - - Returns - ------- - self : Chart object - returns chart to allow for chaining - - See Also - -------- - alt.StackTransform : underlying transform object - """ - return self._add_transform( - core.StackTransform( - stack=stack, groupby=groupby, offset=offset, sort=sort, **{"as": as_} - ) - ) - - def transform_timeunit( - self, - as_=Undefined, - field=Undefined, - timeUnit=Undefined, - **kwargs, - ) -> Self: - """ - Add a :class:`TimeUnitTransform` to the schema. 
- - Parameters - ---------- - as_ : string - The output field to write the timeUnit value. - field : string - The data field to apply time unit. - timeUnit : :class:`TimeUnit` - The timeUnit. - **kwargs - transforms can also be passed by keyword argument; see Examples - - Returns - ------- - self : Chart object - returns chart to allow for chaining - - Examples - -------- - >>> import altair as alt - >>> from altair import datum, expr - - >>> chart = alt.Chart().transform_timeunit(month='month(date)') - >>> chart.transform[0] - TimeUnitTransform({ - as: 'month', - field: 'date', - timeUnit: 'month' - }) - - It's also possible to pass the ``TimeUnitTransform`` arguments directly; - this is most useful in cases where the desired field name is not a - valid python identifier: - - >>> kwds = {'as': 'month', 'timeUnit': 'month', 'field': 'The Month'} - >>> chart = alt.Chart().transform_timeunit(**kwds) - >>> chart.transform[0] - TimeUnitTransform({ - as: 'month', - field: 'The Month', - timeUnit: 'month' - }) - - As the first form is easier to write and understand, that is the - recommended method. - - See Also - -------- - alt.TimeUnitTransform : underlying transform object - - """ - if as_ is Undefined: - as_ = kwargs.pop("as", Undefined) - else: - if "as" in kwargs: - raise ValueError( - "transform_timeunit: both 'as_' and 'as' passed as arguments." - ) - if as_ is not Undefined: - dct = {"as": as_, "timeUnit": timeUnit, "field": field} - self = self._add_transform(core.TimeUnitTransform(**dct)) - for as_, shorthand in kwargs.items(): - dct = utils.parse_shorthand( - shorthand, - parse_timeunits=True, - parse_aggregates=False, - parse_types=False, - ) - dct.pop("type", None) - dct["as"] = as_ - if "timeUnit" not in dct: - raise ValueError("'{}' must include a valid timeUnit".format(shorthand)) - self = self._add_transform(core.TimeUnitTransform(**dct)) - return self - - def transform_window( - self, - window=Undefined, - frame=Undefined, - groupby=Undefined, - ignorePeers=Undefined, - sort=Undefined, - **kwargs, - ) -> Self: - """Add a :class:`WindowTransform` to the schema - - Parameters - ---------- - window : List(:class:`WindowFieldDef`) - The definition of the fields in the window, and what calculations to use. - frame : List(anyOf(None, float)) - A frame specification as a two-element array indicating how the sliding window - should proceed. The array entries should either be a number indicating the offset - from the current data object, or null to indicate unbounded rows preceding or - following the current data object. The default value is ``[null, 0]``, indicating - that the sliding window includes the current object and all preceding objects. The - value ``[-5, 5]`` indicates that the window should include five objects preceding - and five objects following the current object. Finally, ``[null, null]`` indicates - that the window frame should always include all data objects. The only operators - affected are the aggregation operations and the ``first_value``, ``last_value``, and - ``nth_value`` window operations. The other window operations are not affected by - this. - - **Default value:** : ``[null, 0]`` (includes the current object and all preceding - objects) - groupby : List(string) - The data fields for partitioning the data objects into separate windows. If - unspecified, all data points will be in a single group. - ignorePeers : boolean - Indicates if the sliding window frame should ignore peer values. (Peer values are - those considered identical by the sort criteria). 
The default is false, causing the - window frame to expand to include all peer values. If set to true, the window frame - will be defined by offset values only. This setting only affects those operations - that depend on the window frame, namely aggregation operations and the first_value, - last_value, and nth_value window operations. - - **Default value:** ``false`` - sort : List(:class:`SortField`) - A sort field definition for sorting data objects within a window. If two data - objects are considered equal by the comparator, they are considered “peer” values of - equal rank. If sort is not specified, the order is undefined: data objects are - processed in the order they are observed and none are considered peers (the - ignorePeers parameter is ignored and treated as if set to ``true`` ). - **kwargs - transforms can also be passed by keyword argument; see Examples - - Examples - -------- - A cumulative line chart - - >>> import altair as alt - >>> import numpy as np - >>> import pandas as pd - >>> data = pd.DataFrame({'x': np.arange(100), - ... 'y': np.random.randn(100)}) - >>> chart = alt.Chart(data).mark_line().encode( - ... x='x:Q', - ... y='ycuml:Q' - ... ).transform_window( - ... ycuml='sum(y)' - ... ) - >>> chart.transform[0] - WindowTransform({ - window: [WindowFieldDef({ - as: 'ycuml', - field: 'y', - op: 'sum' - })] - }) - - """ - if kwargs: - if window is Undefined: - window = [] - for as_, shorthand in kwargs.items(): - kwds = {"as": as_} - kwds.update( - utils.parse_shorthand( - shorthand, - parse_aggregates=False, - parse_window_ops=True, - parse_timeunits=False, - parse_types=False, - ) - ) - window.append(core.WindowFieldDef(**kwds)) - - return self._add_transform( - core.WindowTransform( - window=window, - frame=frame, - groupby=groupby, - ignorePeers=ignorePeers, - sort=sort, - ) - ) - - # Display-related methods - - def _repr_mimebundle_(self, include=None, exclude=None): - """Return a MIME bundle for display in Jupyter frontends.""" - # Catch errors explicitly to get around issues in Jupyter frontend - # see https://github.com/ipython/ipython/issues/11038 - try: - dct = self.to_dict(context={"pre_transform": False}) - except Exception: - utils.display_traceback(in_ipython=True) - return {} - else: - return renderers.get()(dct) - - def display(self, renderer=Undefined, theme=Undefined, actions=Undefined, **kwargs): - """Display chart in Jupyter notebook or JupyterLab - - Parameters are passed as options to vega-embed within supported frontends. - See https://github.com/vega/vega-embed#options for details. - - Parameters - ---------- - renderer : string ('canvas' or 'svg') - The renderer to use - theme : string - The Vega theme name to use; see https://github.com/vega/vega-themes - actions : bool or dict - Specify whether action links ("Open In Vega Editor", etc.) are - included in the view. - **kwargs : - Additional parameters are also passed to vega-embed as options. - - """ - from IPython.display import display - - if renderer is not Undefined: - kwargs["renderer"] = renderer - if theme is not Undefined: - kwargs["theme"] = theme - if actions is not Undefined: - kwargs["actions"] = actions - - if kwargs: - options = renderers.options.copy() - options["embed_options"] = options.get("embed_options", {}).copy() - options["embed_options"].update(kwargs) - with renderers.enable(**options): - display(self) - else: - display(self) - - @utils.deprecation.deprecated(message="'serve' is deprecated. 
Use 'show' instead.") - def serve( - self, - ip="127.0.0.1", - port=8888, - n_retries=50, - files=None, - jupyter_warning=True, - open_browser=True, - http_server=None, - **kwargs, - ): - """ - 'serve' is deprecated. Use 'show' instead. - - Open a browser window and display a rendering of the chart - - Parameters - ---------- - html : string - HTML to serve - ip : string (default = '127.0.0.1') - ip address at which the HTML will be served. - port : int (default = 8888) - the port at which to serve the HTML - n_retries : int (default = 50) - the number of nearby ports to search if the specified port - is already in use. - files : dictionary (optional) - dictionary of extra content to serve - jupyter_warning : bool (optional) - if True (default), then print a warning if this is used - within the Jupyter notebook - open_browser : bool (optional) - if True (default), then open a web browser to the given HTML - http_server : class (optional) - optionally specify an HTTPServer class to use for showing the - figure. The default is Python's basic HTTPServer. - **kwargs : - additional keyword arguments passed to the save() method - - """ - from ...utils.server import serve - - html = io.StringIO() - self.save(html, format="html", **kwargs) - html.seek(0) - - serve( - html.read(), - ip=ip, - port=port, - n_retries=n_retries, - files=files, - jupyter_warning=jupyter_warning, - open_browser=open_browser, - http_server=http_server, - ) - - def show(self, embed_opt=None, open_browser=None): - """Show the chart in an external browser window. - - This requires a recent version of the altair_viewer package. - - Parameters - ---------- - embed_opt : dict (optional) - The Vega embed options that control the dispay of the chart. - open_browser : bool (optional) - Specify whether a browser window should be opened. If not specified, - a browser window will be opened only if the server is not already - connected to a browser. - """ - try: - import altair_viewer - except ImportError as err: - raise ValueError( - "'show' method requires the altair_viewer package. " - "See http://github.com/altair-viz/altair_viewer" - ) from err - altair_viewer.show(self, embed_opt=embed_opt, open_browser=open_browser) - - @utils.use_signature(core.Resolve) - def _set_resolve(self, **kwargs): - """Copy the chart and update the resolve property with kwargs""" - if not hasattr(self, "resolve"): - raise ValueError( - "{} object has no attribute " "'resolve'".format(self.__class__) - ) - copy = self.copy(deep=["resolve"]) - if copy.resolve is Undefined: - copy.resolve = core.Resolve() - for key, val in kwargs.items(): - copy.resolve[key] = val - return copy - - @utils.use_signature(core.AxisResolveMap) - def resolve_axis(self, *args, **kwargs) -> Self: - return self._set_resolve(axis=core.AxisResolveMap(*args, **kwargs)) - - @utils.use_signature(core.LegendResolveMap) - def resolve_legend(self, *args, **kwargs) -> Self: - return self._set_resolve(legend=core.LegendResolveMap(*args, **kwargs)) - - @utils.use_signature(core.ScaleResolveMap) - def resolve_scale(self, *args, **kwargs) -> Self: - return self._set_resolve(scale=core.ScaleResolveMap(*args, **kwargs)) - - -class _EncodingMixin: - @utils.use_signature(core.FacetedEncoding) - def encode(self, *args, **kwargs) -> Self: - # Convert args to kwargs based on their types. 
- kwargs = utils.infer_encoding_types(args, kwargs, channels) - - # get a copy of the dict representation of the previous encoding - # ignore type as copy method comes from SchemaBase - copy = self.copy(deep=["encoding"]) # type: ignore[attr-defined] - encoding = copy._get("encoding", {}) - if isinstance(encoding, core.VegaLiteSchema): - encoding = {k: v for k, v in encoding._kwds.items() if v is not Undefined} - - # update with the new encodings, and apply them to the copy - encoding.update(kwargs) - copy.encoding = core.FacetedEncoding(**encoding) - return copy - - def facet( - self, - facet=Undefined, - row=Undefined, - column=Undefined, - data=Undefined, - columns=Undefined, - **kwargs, - ) -> "FacetChart": - """Create a facet chart from the current chart. - - Faceted charts require data to be specified at the top level; if data - is not specified, the data from the current chart will be used at the - top level. - - Parameters - ---------- - facet : string or alt.Facet (optional) - The data column to use as an encoding for a wrapped facet. - If specified, then neither row nor column may be specified. - column : string or alt.Column (optional) - The data column to use as an encoding for a column facet. - May be combined with row argument, but not with facet argument. - row : string or alt.Column (optional) - The data column to use as an encoding for a row facet. - May be combined with column argument, but not with facet argument. - data : string or dataframe (optional) - The dataset to use for faceting. If not supplied, then data must - be specified in the top-level chart that calls this method. - columns : integer - the maximum number of columns for a wrapped facet. - - Returns - ------- - self : - for chaining - """ - facet_specified = facet is not Undefined - rowcol_specified = row is not Undefined or column is not Undefined - - if facet_specified and rowcol_specified: - raise ValueError( - "facet argument cannot be combined with row/column argument." - ) - - # Remove "ignore" statement once Undefined is no longer typed as Any - if data is Undefined: - # Remove "ignore" statement once Undefined is no longer typed as Any - if self.data is Undefined: # type: ignore - raise ValueError( - "Facet charts require data to be specified at the top level." - ) - # ignore type as copy comes from another class - self = self.copy(deep=False) # type: ignore[attr-defined] - # Remove "ignore" statement once Undefined is no longer typed as Any - data, self.data = self.data, Undefined # type: ignore - - if facet_specified: - if isinstance(facet, str): - facet = channels.Facet(facet) - else: - facet = FacetMapping(row=row, column=column) - - return FacetChart(spec=self, facet=facet, data=data, columns=columns, **kwargs) - - -class Chart( - TopLevelMixin, _EncodingMixin, mixins.MarkMethodMixin, core.TopLevelUnitSpec -): - """Create a basic Altair/Vega-Lite chart. - - Although it is possible to set all Chart properties as constructor attributes, - it is more idiomatic to use methods such as ``mark_point()``, ``encode()``, - ``transform_filter()``, ``properties()``, etc. See Altair's documentation - for details and examples: http://altair-viz.github.io/. - - Parameters - ---------- - data : Data - An object describing the data source - mark : AnyMark - A string describing the mark type (one of `"bar"`, `"circle"`, `"square"`, `"tick"`, - `"line"`, * `"area"`, `"point"`, `"rule"`, `"geoshape"`, and `"text"`) or a - MarkDef object. 
- encoding : FacetedEncoding - A key-value mapping between encoding channels and definition of fields. - autosize : anyOf(AutosizeType, AutoSizeParams) - Sets how the visualization size should be determined. If a string, should be one of - `"pad"`, `"fit"` or `"none"`. Object values can additionally specify parameters for - content sizing and automatic resizing. `"fit"` is only supported for single and - layered views that don't use `rangeStep`. Default value: `pad` - background : string - CSS color property to use as the background of visualization. - - **Default value:** none (transparent) - config : Config - Vega-Lite configuration object. This property can only be defined at the top-level - of a specification. - description : string - Description of this mark for commenting purpose. - height : float - The height of a visualization. - name : string - Name of the visualization for later reference. - padding : Padding - The default visualization padding, in pixels, from the edge of the visualization - canvas to the data rectangle. If a number, specifies padding for all sides. If an - object, the value should have the format `{"left": 5, "top": 5, "right": 5, - "bottom": 5}` to specify padding for each side of the visualization. Default - value: `5` - projection : Projection - An object defining properties of geographic projection. Works with `"geoshape"` - marks and `"point"` or `"line"` marks that have a channel (one or more of `"X"`, - `"X2"`, `"Y"`, `"Y2"`) with type `"latitude"`, or `"longitude"`. - selection : Mapping(required=[]) - A key-value mapping between selection names and definitions. - title : anyOf(string, TitleParams) - Title for the plot. - transform : List(Transform) - An array of data transformations such as filter and new field calculation. - width : float - The width of a visualization. - """ - - def __init__( - self, - data=Undefined, - encoding=Undefined, - mark=Undefined, - width=Undefined, - height=Undefined, - **kwargs, - ): - super(Chart, self).__init__( - data=data, - encoding=encoding, - mark=mark, - width=width, - height=height, - **kwargs, - ) - - _counter = 0 - - @classmethod - def _get_name(cls): - cls._counter += 1 - return f"view_{cls._counter}" - - @classmethod - def from_dict(cls, dct, validate=True) -> core.SchemaBase: # type: ignore[override] # Not the same signature as SchemaBase.from_dict. Would ideally be aligned in the future - """Construct class from a dictionary representation - - Parameters - ---------- - dct : dictionary - The dict from which to construct the class - validate : boolean - If True (default), then validate the input against the schema. - - Returns - ------- - obj : Chart object - The wrapped schema - - Raises - ------ - jsonschema.ValidationError : - if validate=True and dct does not conform to the schema - """ - for class_ in TopLevelMixin.__subclasses__(): - if class_ is Chart: - class_ = cast(TypingType[TopLevelMixin], super(Chart, cls)) - try: - # TopLevelMixin classes don't necessarily have from_dict defined - # but all classes which are used here have due to how Altair is - # designed. Too complex to type check right now. 
- return class_.from_dict(dct, validate=validate) # type: ignore[attr-defined] - except jsonschema.ValidationError: - pass - - # As a last resort, try using the Root vegalite object - return core.Root.from_dict(dct, validate) - - def to_dict( - self, - validate: bool = True, - *, - format: str = "vega-lite", - ignore: Optional[List[str]] = None, - context: Optional[TypingDict[str, Any]] = None, - ) -> dict: - """Convert the chart to a dictionary suitable for JSON export - - Parameters - ---------- - validate : bool, optional - If True (default), then validate the output dictionary - against the schema. - format : str, optional - Chart specification format, one of "vega-lite" (default) or "vega" - ignore : list[str], optional - A list of keys to ignore. It is usually not needed - to specify this argument as a user. - context : dict[str, Any], optional - A context dictionary. It is usually not needed - to specify this argument as a user. - - Notes - ----- - Technical: The ignore parameter will *not* be passed to child to_dict - function calls. - - Returns - ------- - dict - The dictionary representation of this chart - - Raises - ------ - SchemaValidationError - if validate=True and the dict does not conform to the schema - """ - context = context or {} - if self.data is Undefined and "data" not in context: - # No data specified here or in parent: inject empty data - # for easier specification of datum encodings. - copy = self.copy(deep=False) - copy.data = core.InlineData(values=[{}]) - return super(Chart, copy).to_dict( - validate=validate, format=format, ignore=ignore, context=context - ) - return super().to_dict( - validate=validate, format=format, ignore=ignore, context=context - ) - - def transformed_data( - self, - row_limit: Optional[int] = None, - exclude: Optional[Iterable[str]] = None, - ) -> Optional[_DataFrameLike]: - """Evaluate a Chart's transforms - - Evaluate the data transforms associated with a Chart and return the - transformed data a DataFrame - - Parameters - ---------- - row_limit : int (optional) - Maximum number of rows to return for each DataFrame. None (default) for unlimited - exclude : iterable of str - Set of the names of charts to exclude - - Returns - ------- - DataFrame - Transformed data as a DataFrame - """ - from altair.utils._transformed_data import transformed_data - - return transformed_data(self, row_limit=row_limit, exclude=exclude) - - def add_params(self, *params) -> Self: - """Add one or more parameters to the chart.""" - if not params: - return self - copy = self.copy(deep=["params"]) - if copy.params is Undefined: - copy.params = [] - - for s in params: - copy.params.append(s.param) - return copy - - @utils.deprecation.deprecated( - message="'add_selection' is deprecated. Use 'add_params' instead." - ) - def add_selection(self, *params) -> Self: - """'add_selection' is deprecated. Use 'add_params' instead.""" - return self.add_params(*params) - - def interactive(self, name=None, bind_x=True, bind_y=True) -> Self: - """Make chart axes scales interactive - - Parameters - ---------- - name : string - The parameter name to use for the axes scales. This name should be - unique among all parameters within the chart. 
- bind_x : boolean, default True - If true, then bind the interactive scales to the x-axis - bind_y : boolean, default True - If true, then bind the interactive scales to the y-axis - - Returns - ------- - chart : - copy of self, with interactive axes added - - """ - encodings = [] - if bind_x: - encodings.append("x") - if bind_y: - encodings.append("y") - return self.add_params(selection_interval(bind="scales", encodings=encodings)) - - -def _check_if_valid_subspec(spec, classname): - """Check if the spec is a valid sub-spec. - - If it is not, then raise a ValueError - """ - err = ( - 'Objects with "{0}" attribute cannot be used within {1}. ' - "Consider defining the {0} attribute in the {1} object instead." - ) - - if not isinstance(spec, (core.SchemaBase, dict)): - raise ValueError("Only chart objects can be used in {0}.".format(classname)) - for attr in TOPLEVEL_ONLY_KEYS: - if isinstance(spec, core.SchemaBase): - val = getattr(spec, attr, Undefined) - else: - val = spec.get(attr, Undefined) - if val is not Undefined: - raise ValueError(err.format(attr, classname)) - - -def _check_if_can_be_layered(spec): - """Check if the spec can be layered.""" - - def _get(spec, attr): - if isinstance(spec, core.SchemaBase): - return spec._get(attr) - else: - return spec.get(attr, Undefined) - - encoding = _get(spec, "encoding") - if encoding is not Undefined: - for channel in ["row", "column", "facet"]: - if _get(encoding, channel) is not Undefined: - raise ValueError( - "Faceted charts cannot be layered. Instead, layer the charts before faceting." - ) - if isinstance(spec, (Chart, LayerChart)): - return - - if not isinstance(spec, (core.SchemaBase, dict)): - raise ValueError("Only chart objects can be layered.") - if _get(spec, "facet") is not Undefined: - raise ValueError( - "Faceted charts cannot be layered. Instead, layer the charts before faceting." - ) - if isinstance(spec, FacetChart) or _get(spec, "facet") is not Undefined: - raise ValueError( - "Faceted charts cannot be layered. Instead, layer the charts before faceting." - ) - if isinstance(spec, RepeatChart) or _get(spec, "repeat") is not Undefined: - raise ValueError( - "Repeat charts cannot be layered. Instead, layer the charts before repeating." - ) - if isinstance(spec, ConcatChart) or _get(spec, "concat") is not Undefined: - raise ValueError( - "Concatenated charts cannot be layered. Instead, layer the charts before concatenating." - ) - if isinstance(spec, HConcatChart) or _get(spec, "hconcat") is not Undefined: - raise ValueError( - "Concatenated charts cannot be layered. Instead, layer the charts before concatenating." - ) - if isinstance(spec, VConcatChart) or _get(spec, "vconcat") is not Undefined: - raise ValueError( - "Concatenated charts cannot be layered. Instead, layer the charts before concatenating." - ) - - -class RepeatChart(TopLevelMixin, core.TopLevelRepeatSpec): - """A chart repeated across rows and columns with small changes""" - - # Because TopLevelRepeatSpec is defined as a union as of Vega-Lite schema 4.9, - # we set the arguments explicitly here. - # TODO: Should we instead use tools/schemapi/codegen._get_args? 
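# The layering checks above mean charts must be layered *before* they are faceted;
# a minimal sketch with made-up demo data illustrating the required order.
import altair as alt
import pandas as pd

_demo = pd.DataFrame({"x": [1, 2, 3, 1, 2, 3], "y": [1, 4, 9, 2, 3, 5], "group": list("aaabbb")})
_base = alt.Chart(_demo).encode(x="x:Q", y="y:Q")
_layered = _base.mark_point() + _base.mark_line()  # layer first ...
_faceted = _layered.facet("group:N")               # ... then facet; faceting first and layering after raises ValueError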
- @utils.use_signature(core.TopLevelRepeatSpec) - def __init__( - self, - repeat=Undefined, - spec=Undefined, - align=Undefined, - autosize=Undefined, - background=Undefined, - bounds=Undefined, - center=Undefined, - columns=Undefined, - config=Undefined, - data=Undefined, - datasets=Undefined, - description=Undefined, - name=Undefined, - padding=Undefined, - params=Undefined, - resolve=Undefined, - spacing=Undefined, - title=Undefined, - transform=Undefined, - usermeta=Undefined, - **kwds, - ): - _check_if_valid_subspec(spec, "RepeatChart") - _spec_as_list = [spec] - params, _spec_as_list = _combine_subchart_params(params, _spec_as_list) - spec = _spec_as_list[0] - if isinstance(spec, (Chart, LayerChart)): - params = _repeat_names(params, repeat, spec) - super(RepeatChart, self).__init__( - repeat=repeat, - spec=spec, - align=align, - autosize=autosize, - background=background, - bounds=bounds, - center=center, - columns=columns, - config=config, - data=data, - datasets=datasets, - description=description, - name=name, - padding=padding, - params=params, - resolve=resolve, - spacing=spacing, - title=title, - transform=transform, - usermeta=usermeta, - **kwds, - ) - - def transformed_data( - self, - row_limit: Optional[int] = None, - exclude: Optional[Iterable[str]] = None, - ) -> Optional[_DataFrameLike]: - """Evaluate a RepeatChart's transforms - - Evaluate the data transforms associated with a RepeatChart and return the - transformed data a DataFrame - - Parameters - ---------- - row_limit : int (optional) - Maximum number of rows to return for each DataFrame. None (default) for unlimited - exclude : iterable of str - Set of the names of charts to exclude - - Raises - ------ - NotImplementedError - RepeatChart does not yet support transformed_data - """ - raise NotImplementedError( - "transformed_data is not yet implemented for RepeatChart" - ) - - def interactive(self, name=None, bind_x=True, bind_y=True) -> Self: - """Make chart axes scales interactive - - Parameters - ---------- - name : string - The parameter name to use for the axes scales. This name should be - unique among all parameters within the chart. - bind_x : boolean, default True - If true, then bind the interactive scales to the x-axis - bind_y : boolean, default True - If true, then bind the interactive scales to the y-axis - - Returns - ------- - chart : - copy of self, with interactive axes added - - """ - copy = self.copy(deep=False) - copy.spec = copy.spec.interactive(name=name, bind_x=bind_x, bind_y=bind_y) - return copy - - def add_params(self, *params) -> Self: - """Add one or more parameters to the chart.""" - if not params or self.spec is Undefined: - return self - copy = self.copy() - copy.spec = copy.spec.add_params(*params) - return copy.copy() - - @utils.deprecation.deprecated( - message="'add_selection' is deprecated. Use 'add_params' instead." - ) - def add_selection(self, *selections) -> Self: - """'add_selection' is deprecated. Use 'add_params' instead.""" - return self.add_params(*selections) - - -def repeat(repeater="repeat"): - """Tie a channel to the row or column within a repeated chart - - The output of this should be passed to the ``field`` attribute of - a channel. - - Parameters - ---------- - repeater : {'row'|'column'|'repeat'|'layer'} - The repeater to tie the field to. Default is 'repeat'. 
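# A sketch of how the repeat() helper above is passed into an encoding and resolved by
# RepeatChart; the DataFrame and its column names are made-up demo data.
import altair as alt
import pandas as pd

_wide = pd.DataFrame({"a": [1, 2, 3], "b": [4, 5, 6], "c": [7, 8, 9], "y": [1, 2, 3]})
_repeated = (
    alt.Chart(_wide)
    .mark_point()
    .encode(alt.X(alt.repeat("column"), type="quantitative"), y="y:Q")
    .repeat(column=["a", "b", "c"])
)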
- - Returns - ------- - repeat : RepeatRef object - """ - if repeater not in ["row", "column", "repeat", "layer"]: - raise ValueError("repeater must be one of ['row', 'column', 'repeat', 'layer']") - return core.RepeatRef(repeat=repeater) - - -class ConcatChart(TopLevelMixin, core.TopLevelConcatSpec): - """A chart with horizontally-concatenated facets""" - - @utils.use_signature(core.TopLevelConcatSpec) - def __init__(self, data=Undefined, concat=(), columns=Undefined, **kwargs): - # TODO: move common data to top level? - for spec in concat: - _check_if_valid_subspec(spec, "ConcatChart") - super(ConcatChart, self).__init__( - data=data, concat=list(concat), columns=columns, **kwargs - ) - self.data, self.concat = _combine_subchart_data(self.data, self.concat) - self.params, self.concat = _combine_subchart_params(self.params, self.concat) - - def __ior__(self, other): - _check_if_valid_subspec(other, "ConcatChart") - self.concat.append(other) - self.data, self.concat = _combine_subchart_data(self.data, self.concat) - self.params, self.concat = _combine_subchart_params(self.params, self.concat) - return self - - def __or__(self, other): - copy = self.copy(deep=["concat"]) - copy |= other - return copy - - def transformed_data( - self, - row_limit: Optional[int] = None, - exclude: Optional[Iterable[str]] = None, - ) -> List[_DataFrameLike]: - """Evaluate a ConcatChart's transforms - - Evaluate the data transforms associated with a ConcatChart and return the - transformed data for each subplot as a list of DataFrames - - Parameters - ---------- - row_limit : int (optional) - Maximum number of rows to return for each DataFrame. None (default) for unlimited - exclude : iterable of str - Set of the names of charts to exclude - - Returns - ------- - list of DataFrame - Transformed data for each subplot as a list of DataFrames - """ - from altair.utils._transformed_data import transformed_data - - return transformed_data(self, row_limit=row_limit, exclude=exclude) - - def interactive(self, name=None, bind_x=True, bind_y=True) -> Self: - """Make chart axes scales interactive - - Parameters - ---------- - name : string - The parameter name to use for the axes scales. This name should be - unique among all parameters within the chart. - bind_x : boolean, default True - If true, then bind the interactive scales to the x-axis - bind_y : boolean, default True - If true, then bind the interactive scales to the y-axis - - Returns - ------- - chart : - copy of self, with interactive axes added - - """ - encodings = [] - if bind_x: - encodings.append("x") - if bind_y: - encodings.append("y") - return self.add_params(selection_interval(bind="scales", encodings=encodings)) - - def add_params(self, *params) -> Self: - """Add one or more parameters to the chart.""" - if not params or not self.concat: - return self - copy = self.copy() - copy.concat = [chart.add_params(*params) for chart in copy.concat] - return copy - - @utils.deprecation.deprecated( - message="'add_selection' is deprecated. Use 'add_params' instead." - ) - def add_selection(self, *selections) -> Self: - """'add_selection' is deprecated. 
Use 'add_params' instead.""" - return self.add_params(*selections) - - -def concat(*charts, **kwargs): - """Concatenate charts horizontally""" - return ConcatChart(concat=charts, **kwargs) - - -class HConcatChart(TopLevelMixin, core.TopLevelHConcatSpec): - """A chart with horizontally-concatenated facets""" - - @utils.use_signature(core.TopLevelHConcatSpec) - def __init__(self, data=Undefined, hconcat=(), **kwargs): - # TODO: move common data to top level? - for spec in hconcat: - _check_if_valid_subspec(spec, "HConcatChart") - super(HConcatChart, self).__init__(data=data, hconcat=list(hconcat), **kwargs) - self.data, self.hconcat = _combine_subchart_data(self.data, self.hconcat) - self.params, self.hconcat = _combine_subchart_params(self.params, self.hconcat) - - def __ior__(self, other): - _check_if_valid_subspec(other, "HConcatChart") - self.hconcat.append(other) - self.data, self.hconcat = _combine_subchart_data(self.data, self.hconcat) - self.params, self.hconcat = _combine_subchart_params(self.params, self.hconcat) - return self - - def __or__(self, other): - copy = self.copy(deep=["hconcat"]) - copy |= other - return copy - - def transformed_data( - self, - row_limit: Optional[int] = None, - exclude: Optional[Iterable[str]] = None, - ) -> List[_DataFrameLike]: - """Evaluate a HConcatChart's transforms - - Evaluate the data transforms associated with a HConcatChart and return the - transformed data for each subplot as a list of DataFrames - - Parameters - ---------- - row_limit : int (optional) - Maximum number of rows to return for each DataFrame. None (default) for unlimited - exclude : iterable of str - Set of the names of charts to exclude - - Returns - ------- - list of DataFrame - Transformed data for each subplot as a list of DataFrames - """ - from altair.utils._transformed_data import transformed_data - - return transformed_data(self, row_limit=row_limit, exclude=exclude) - - def interactive(self, name=None, bind_x=True, bind_y=True) -> Self: - """Make chart axes scales interactive - - Parameters - ---------- - name : string - The parameter name to use for the axes scales. This name should be - unique among all parameters within the chart. - bind_x : boolean, default True - If true, then bind the interactive scales to the x-axis - bind_y : boolean, default True - If true, then bind the interactive scales to the y-axis - - Returns - ------- - chart : - copy of self, with interactive axes added - - """ - encodings = [] - if bind_x: - encodings.append("x") - if bind_y: - encodings.append("y") - return self.add_params(selection_interval(bind="scales", encodings=encodings)) - - def add_params(self, *params) -> Self: - """Add one or more parameters to the chart.""" - if not params or not self.hconcat: - return self - copy = self.copy() - copy.hconcat = [chart.add_params(*params) for chart in copy.hconcat] - return copy - - @utils.deprecation.deprecated( - message="'add_selection' is deprecated. Use 'add_params' instead." - ) - def add_selection(self, *selections) -> Self: - """'add_selection' is deprecated. Use 'add_params' instead.""" - return self.add_params(*selections) - - -def hconcat(*charts, **kwargs): - """Concatenate charts horizontally""" - return HConcatChart(hconcat=charts, **kwargs) - - -class VConcatChart(TopLevelMixin, core.TopLevelVConcatSpec): - """A chart with vertically-concatenated facets""" - - @utils.use_signature(core.TopLevelVConcatSpec) - def __init__(self, data=Undefined, vconcat=(), **kwargs): - # TODO: move common data to top level? 
- for spec in vconcat: - _check_if_valid_subspec(spec, "VConcatChart") - super(VConcatChart, self).__init__(data=data, vconcat=list(vconcat), **kwargs) - self.data, self.vconcat = _combine_subchart_data(self.data, self.vconcat) - self.params, self.vconcat = _combine_subchart_params(self.params, self.vconcat) - - def __iand__(self, other): - _check_if_valid_subspec(other, "VConcatChart") - self.vconcat.append(other) - self.data, self.vconcat = _combine_subchart_data(self.data, self.vconcat) - self.params, self.vconcat = _combine_subchart_params(self.params, self.vconcat) - return self - - def __and__(self, other): - copy = self.copy(deep=["vconcat"]) - copy &= other - return copy - - def transformed_data( - self, - row_limit: Optional[int] = None, - exclude: Optional[Iterable[str]] = None, - ) -> List[_DataFrameLike]: - """Evaluate a VConcatChart's transforms - - Evaluate the data transforms associated with a VConcatChart and return the - transformed data for each subplot as a list of DataFrames - - Parameters - ---------- - row_limit : int (optional) - Maximum number of rows to return for each DataFrame. None (default) for unlimited - exclude : iterable of str - Set of the names of charts to exclude - - Returns - ------- - list of DataFrame - Transformed data for each subplot as a list of DataFrames - """ - from altair.utils._transformed_data import transformed_data - - return transformed_data(self, row_limit=row_limit, exclude=exclude) - - def interactive(self, name=None, bind_x=True, bind_y=True) -> Self: - """Make chart axes scales interactive - - Parameters - ---------- - name : string - The parameter name to use for the axes scales. This name should be - unique among all parameters within the chart. - bind_x : boolean, default True - If true, then bind the interactive scales to the x-axis - bind_y : boolean, default True - If true, then bind the interactive scales to the y-axis - - Returns - ------- - chart : - copy of self, with interactive axes added - - """ - encodings = [] - if bind_x: - encodings.append("x") - if bind_y: - encodings.append("y") - return self.add_params(selection_interval(bind="scales", encodings=encodings)) - - def add_params(self, *params) -> Self: - """Add one or more parameters to the chart.""" - if not params or not self.vconcat: - return self - copy = self.copy() - copy.vconcat = [chart.add_params(*params) for chart in copy.vconcat] - return copy - - @utils.deprecation.deprecated( - message="'add_selection' is deprecated. Use 'add_params' instead." - ) - def add_selection(self, *selections) -> Self: - """'add_selection' is deprecated. Use 'add_params' instead.""" - return self.add_params(*selections) - - -def vconcat(*charts, **kwargs): - """Concatenate charts vertically""" - return VConcatChart(vconcat=charts, **kwargs) - - -class LayerChart(TopLevelMixin, _EncodingMixin, core.TopLevelLayerSpec): - """A Chart with layers within a single panel""" - - @utils.use_signature(core.TopLevelLayerSpec) - def __init__(self, data=Undefined, layer=(), **kwargs): - # TODO: move common data to top level? 
- # TODO: check for conflicting interaction - for spec in layer: - _check_if_valid_subspec(spec, "LayerChart") - _check_if_can_be_layered(spec) - super(LayerChart, self).__init__(data=data, layer=list(layer), **kwargs) - self.data, self.layer = _combine_subchart_data(self.data, self.layer) - # Currently (Vega-Lite 5.5) the same param can't occur on two layers - self.layer = _remove_duplicate_params(self.layer) - self.params, self.layer = _combine_subchart_params(self.params, self.layer) - - # Some properties are not allowed within layer; we'll move to parent. - layer_props = ("height", "width", "view") - combined_dict, self.layer = _remove_layer_props(self, self.layer, layer_props) - - for prop in combined_dict: - self[prop] = combined_dict[prop] - - def transformed_data( - self, - row_limit: Optional[int] = None, - exclude: Optional[Iterable[str]] = None, - ) -> List[_DataFrameLike]: - """Evaluate a LayerChart's transforms - - Evaluate the data transforms associated with a LayerChart and return the - transformed data for each layer as a list of DataFrames - - Parameters - ---------- - row_limit : int (optional) - Maximum number of rows to return for each DataFrame. None (default) for unlimited - exclude : iterable of str - Set of the names of charts to exclude - - Returns - ------- - list of DataFrame - Transformed data for each layer as a list of DataFrames - """ - from altair.utils._transformed_data import transformed_data - - return transformed_data(self, row_limit=row_limit, exclude=exclude) - - def __iadd__(self, other): - _check_if_valid_subspec(other, "LayerChart") - _check_if_can_be_layered(other) - self.layer.append(other) - self.data, self.layer = _combine_subchart_data(self.data, self.layer) - self.params, self.layer = _combine_subchart_params(self.params, self.layer) - return self - - def __add__(self, other): - copy = self.copy(deep=["layer"]) - copy += other - return copy - - def add_layers(self, *layers) -> Self: - copy = self.copy(deep=["layer"]) - for layer in layers: - copy += layer - return copy - - def interactive(self, name=None, bind_x=True, bind_y=True) -> Self: - """Make chart axes scales interactive - - Parameters - ---------- - name : string - The parameter name to use for the axes scales. This name should be - unique among all parameters within the chart. - bind_x : boolean, default True - If true, then bind the interactive scales to the x-axis - bind_y : boolean, default True - If true, then bind the interactive scales to the y-axis - - Returns - ------- - chart : - copy of self, with interactive axes added - - """ - if not self.layer: - raise ValueError( - "LayerChart: cannot call interactive() until a " "layer is defined" - ) - copy = self.copy(deep=["layer"]) - copy.layer[0] = copy.layer[0].interactive( - name=name, bind_x=bind_x, bind_y=bind_y - ) - return copy - - def add_params(self, *params) -> Self: - """Add one or more parameters to the chart.""" - if not params or not self.layer: - return self - copy = self.copy() - copy.layer[0] = copy.layer[0].add_params(*params) - return copy.copy() - - @utils.deprecation.deprecated( - message="'add_selection' is deprecated. Use 'add_params' instead." - ) - def add_selection(self, *selections) -> Self: - """'add_selection' is deprecated. 
Use 'add_params' instead.""" - return self.add_params(*selections) - - -def layer(*charts, **kwargs): - """layer multiple charts""" - return LayerChart(layer=charts, **kwargs) - - -class FacetChart(TopLevelMixin, core.TopLevelFacetSpec): - """A Chart with layers within a single panel""" - - @utils.use_signature(core.TopLevelFacetSpec) - def __init__( - self, - data=Undefined, - spec=Undefined, - facet=Undefined, - params=Undefined, - **kwargs, - ): - _check_if_valid_subspec(spec, "FacetChart") - _spec_as_list = [spec] - params, _spec_as_list = _combine_subchart_params(params, _spec_as_list) - spec = _spec_as_list[0] - super(FacetChart, self).__init__( - data=data, spec=spec, facet=facet, params=params, **kwargs - ) - - def transformed_data( - self, - row_limit: Optional[int] = None, - exclude: Optional[Iterable[str]] = None, - ) -> Optional[_DataFrameLike]: - """Evaluate a FacetChart's transforms - - Evaluate the data transforms associated with a FacetChart and return the - transformed data a DataFrame - - Parameters - ---------- - row_limit : int (optional) - Maximum number of rows to return for each DataFrame. None (default) for unlimited - exclude : iterable of str - Set of the names of charts to exclude - - Returns - ------- - DataFrame - Transformed data as a DataFrame - """ - from altair.utils._transformed_data import transformed_data - - return transformed_data(self, row_limit=row_limit, exclude=exclude) - - def interactive(self, name=None, bind_x=True, bind_y=True) -> Self: - """Make chart axes scales interactive - - Parameters - ---------- - name : string - The parameter name to use for the axes scales. This name should be - unique among all parameters within the chart. - bind_x : boolean, default True - If true, then bind the interactive scales to the x-axis - bind_y : boolean, default True - If true, then bind the interactive scales to the y-axis - - Returns - ------- - chart : - copy of self, with interactive axes added - - """ - copy = self.copy(deep=False) - copy.spec = copy.spec.interactive(name=name, bind_x=bind_x, bind_y=bind_y) - return copy - - def add_params(self, *params) -> Self: - """Add one or more parameters to the chart.""" - if not params or self.spec is Undefined: - return self - copy = self.copy() - copy.spec = copy.spec.add_params(*params) - return copy.copy() - - @utils.deprecation.deprecated( - message="'add_selection' is deprecated. Use 'add_params' instead." - ) - def add_selection(self, *selections) -> Self: - """'add_selection' is deprecated. Use 'add_params' instead.""" - return self.add_params(*selections) - - -def topo_feature(url, feature, **kwargs): - """A convenience function for extracting features from a topojson url - - Parameters - ---------- - url : string - An URL from which to load the data set. - - feature : string - The name of the TopoJSON object set to convert to a GeoJSON feature collection. For - example, in a map of the world, there may be an object set named `"countries"`. - Using the feature property, we can extract this set and generate a GeoJSON feature - object for each country. - - **kwargs : - additional keywords passed to TopoDataFormat - """ - return core.UrlData( - url=url, format=core.TopoDataFormat(type="topojson", feature=feature, **kwargs) - ) - - -def _combine_subchart_data(data, subcharts): - def remove_data(subchart): - if subchart.data is not Undefined: - subchart = subchart.copy() - subchart.data = Undefined - return subchart - - if not subcharts: - # No subcharts = nothing to do. 
- pass - elif data is Undefined: - # Top level has no data; all subchart data must - # be identical to proceed. - subdata = subcharts[0].data - if subdata is not Undefined and all(c.data is subdata for c in subcharts): - data = subdata - subcharts = [remove_data(c) for c in subcharts] - else: - # Top level has data; subchart data must be either - # undefined or identical to proceed. - if all(c.data is Undefined or c.data is data for c in subcharts): - subcharts = [remove_data(c) for c in subcharts] - - return data, subcharts - - -def _viewless_dict(param): - d = param.to_dict() - d.pop("views", None) - return d - - -def _needs_name(subchart): - # Only `Chart` objects need a name - if (subchart.name is not Undefined) or (not isinstance(subchart, Chart)): - return False - - # Variable parameters won't receive a views property. - if all(isinstance(p, core.VariableParameter) for p in subchart.params): - return False - - return True - - -# Convert SelectionParameters to TopLevelSelectionParameters with a views property. -def _prepare_to_lift(param): - param = param.copy() - - if isinstance(param, core.VariableParameter): - return param - - if isinstance(param, core.SelectionParameter): - return core.TopLevelSelectionParameter(**param.to_dict(), views=[]) - - if param.views is Undefined: - param.views = [] - - return param - - -def _remove_duplicate_params(layer): - subcharts = [subchart.copy() for subchart in layer] - found_params = [] - - for subchart in subcharts: - if (not hasattr(subchart, "params")) or (subchart.params is Undefined): - continue - - params = [] - - # Ensure the same selection parameter doesn't appear twice - for param in subchart.params: - if isinstance(param, core.VariableParameter): - params.append(param) - continue - - p = param.copy() - pd = _viewless_dict(p) - - if pd not in found_params: - params.append(p) - found_params.append(pd) - - if len(params) == 0: - subchart.params = Undefined - else: - subchart.params = params - - return subcharts - - -def _combine_subchart_params(params, subcharts): - if params is Undefined: - params = [] - - # List of triples related to params, (param, dictionary minus views, views) - param_info = [] - - # Put parameters already found into `param_info` list. - for param in params: - p = _prepare_to_lift(param) - param_info.append( - ( - p, - _viewless_dict(p), - [] if isinstance(p, core.VariableParameter) else p.views, - ) - ) - - subcharts = [subchart.copy() for subchart in subcharts] - - for subchart in subcharts: - if (not hasattr(subchart, "params")) or (subchart.params is Undefined): - continue - - if _needs_name(subchart): - subchart.name = subchart._get_name() - - for param in subchart.params: - p = _prepare_to_lift(param) - pd = _viewless_dict(p) - - dlist = [d for _, d, _ in param_info] - found = pd in dlist - - if isinstance(p, core.VariableParameter) and found: - continue - - if isinstance(p, core.VariableParameter) and not found: - param_info.append((p, pd, [])) - continue - - # At this stage in the loop, p must be a TopLevelSelectionParameter. 
- - if isinstance(subchart, Chart) and (subchart.name not in p.views): - p.views.append(subchart.name) - - if found: - i = dlist.index(pd) - _, _, old_views = param_info[i] - new_views = [v for v in p.views if v not in old_views] - old_views += new_views - else: - param_info.append((p, pd, p.views)) - - subchart.params = Undefined - - for p, _, v in param_info: - if len(v) > 0: - p.views = v - - subparams = [p for p, _, _ in param_info] - - if len(subparams) == 0: - subparams = Undefined - - return subparams, subcharts - - -def _get_repeat_strings(repeat): - if isinstance(repeat, list): - return repeat - elif isinstance(repeat, core.LayerRepeatMapping): - klist = ["row", "column", "layer"] - elif isinstance(repeat, core.RepeatMapping): - klist = ["row", "column"] - rclist = [k for k in klist if repeat[k] is not Undefined] - rcstrings = [[f"{k}_{v}" for v in repeat[k]] for k in rclist] - return ["".join(s) for s in itertools.product(*rcstrings)] - - -def _extend_view_name(v, r, spec): - # prevent the same extension from happening more than once - if isinstance(spec, Chart): - if v.endswith("child__" + r): - return v - else: - return f"{v}_child__{r}" - elif isinstance(spec, LayerChart): - if v.startswith("child__" + r): - return v - else: - return f"child__{r}_{v}" - - -def _repeat_names(params, repeat, spec): - if params is Undefined: - return params - - repeat = _get_repeat_strings(repeat) - params_named = [] - - for param in params: - if not isinstance(param, core.TopLevelSelectionParameter): - params_named.append(param) - continue - p = param.copy() - views = [] - repeat_strings = _get_repeat_strings(repeat) - for v in param.views: - if isinstance(spec, Chart): - if any(v.endswith(f"child__{r}") for r in repeat_strings): - views.append(v) - else: - views += [_extend_view_name(v, r, spec) for r in repeat_strings] - elif isinstance(spec, LayerChart): - if any(v.startswith(f"child__{r}") for r in repeat_strings): - views.append(v) - else: - views += [_extend_view_name(v, r, spec) for r in repeat_strings] - - p.views = views - params_named.append(p) - - return params_named - - -def _remove_layer_props(chart, subcharts, layer_props): - def remove_prop(subchart, prop): - # If subchart is a UnitSpec, then subchart["height"] raises a KeyError - try: - if subchart[prop] is not Undefined: - subchart = subchart.copy() - subchart[prop] = Undefined - except KeyError: - pass - return subchart - - output_dict = {} - - if not subcharts: - # No subcharts = nothing to do. - return output_dict, subcharts - - for prop in layer_props: - if chart[prop] is Undefined: - # Top level does not have this prop. - # Check for consistent props within the subcharts. - values = [] - for c in subcharts: - # If c is a UnitSpec, then c["height"] raises a KeyError. - try: - val = c[prop] - if val is not Undefined: - values.append(val) - except KeyError: - pass - if len(values) == 0: - pass - elif all(v == values[0] for v in values[1:]): - output_dict[prop] = values[0] - else: - raise ValueError(f"There are inconsistent values {values} for {prop}") - else: - # Top level has this prop; subchart must either not have the prop - # or it must be Undefined or identical to proceed. 
- if all( - getattr(c, prop, Undefined) is Undefined or c[prop] == chart[prop] - for c in subcharts - ): - output_dict[prop] = chart[prop] - else: - raise ValueError(f"There are inconsistent values {values} for {prop}") - subcharts = [remove_prop(c, prop) for c in subcharts] - - return output_dict, subcharts - - -@utils.use_signature(core.SequenceParams) -def sequence(start, stop=None, step=Undefined, as_=Undefined, **kwds): - """Sequence generator.""" - if stop is None: - start, stop = 0, start - params = core.SequenceParams(start=start, stop=stop, step=step, **{"as": as_}) - return core.SequenceGenerator(sequence=params, **kwds) - - -@utils.use_signature(core.GraticuleParams) -def graticule(**kwds): - """Graticule generator.""" - if not kwds: - # graticule: True indicates default parameters - graticule = True - else: - graticule = core.GraticuleParams(**kwds) - return core.GraticuleGenerator(graticule=graticule) - - -def sphere(): - """Sphere generator.""" - return core.SphereGenerator(sphere=True) diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/gpt_index/langchain_helpers/chain_wrapper.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/gpt_index/langchain_helpers/chain_wrapper.py deleted file mode 100644 index 98cce898989c0904090990adb75974704786ff96..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/gpt_index/langchain_helpers/chain_wrapper.py +++ /dev/null @@ -1,221 +0,0 @@ -"""Wrapper functions around an LLM chain.""" - -import logging -from dataclasses import dataclass -from typing import Any, Generator, Optional, Tuple - -import openai -from langchain import Cohere, LLMChain, OpenAI -from langchain.llms import AI21 -from langchain.llms.base import BaseLLM - -from gpt_index.constants import MAX_CHUNK_SIZE, NUM_OUTPUTS -from gpt_index.prompts.base import Prompt -from gpt_index.utils import ( - ErrorToRetry, - globals_helper, - retry_on_exceptions_with_backoff, -) - - -@dataclass -class LLMMetadata: - """LLM metadata. - - We extract this metadata to help with our prompts. - - """ - - max_input_size: int = MAX_CHUNK_SIZE - num_output: int = NUM_OUTPUTS - - -def _get_llm_metadata(llm: BaseLLM) -> LLMMetadata: - """Get LLM metadata from llm.""" - if not isinstance(llm, BaseLLM): - raise ValueError("llm must be an instance of langchain.llms.base.LLM") - if isinstance(llm, OpenAI): - return LLMMetadata( - max_input_size=llm.modelname_to_contextsize(llm.model_name), - num_output=llm.max_tokens, - ) - elif isinstance(llm, Cohere): - # TODO: figure out max input size for cohere - return LLMMetadata(num_output=llm.max_tokens) - elif isinstance(llm, AI21): - # TODO: figure out max input size for AI21 - return LLMMetadata(num_output=llm.maxTokens) - else: - return LLMMetadata() - - -def _get_response_gen(openai_response_stream: Generator) -> Generator: - """Get response generator from openai response stream.""" - for response in openai_response_stream: - yield response["choices"][0]["text"] - - -class LLMPredictor: - """LLM predictor class. - - Wrapper around an LLMChain from Langchain. - - Args: - llm (Optional[langchain.llms.base.LLM]): LLM from Langchain to use - for predictions. Defaults to OpenAI's text-davinci-003 model. - Please see `Langchain's LLM Page - `_ - for more details. - - retry_on_throttling (bool): Whether to retry on rate limit errors. - Defaults to true. 
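# A hedged construction sketch for this wrapper. The top-level `from gpt_index import
# LLMPredictor` path is an assumption that depends on the installed gpt_index version;
# the commented predict() call needs a gpt_index Prompt object (hypothetical here).
from langchain import OpenAI
from gpt_index import LLMPredictor  # assumed export path

llm_predictor = LLMPredictor(llm=OpenAI(temperature=0, model_name="text-davinci-003"))
# prediction, formatted_prompt = llm_predictor.predict(qa_prompt, query_str="...")
print(llm_predictor.total_tokens_used)  # token bookkeeping exposed by the property below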
- - """ - - def __init__( - self, llm: Optional[BaseLLM] = None, retry_on_throttling: bool = True - ) -> None: - """Initialize params.""" - self._llm = llm or OpenAI(temperature=0, model_name="text-davinci-003") - self.retry_on_throttling = retry_on_throttling - self._total_tokens_used = 0 - self.flag = True - self._last_token_usage: Optional[int] = None - - def get_llm_metadata(self) -> LLMMetadata: - """Get LLM metadata.""" - # TODO: refactor mocks in unit tests, this is a stopgap solution - if hasattr(self, "_llm") and self._llm is not None: - return _get_llm_metadata(self._llm) - else: - return LLMMetadata() - - def _predict(self, prompt: Prompt, **prompt_args: Any) -> str: - """Inner predict function. - - If retry_on_throttling is true, we will retry on rate limit errors. - - """ - llm_chain = LLMChain( - prompt=prompt.get_langchain_prompt(llm=self._llm), llm=self._llm - ) - - # Note: we don't pass formatted_prompt to llm_chain.predict because - # langchain does the same formatting under the hood - full_prompt_args = prompt.get_full_format_args(prompt_args) - if self.retry_on_throttling: - llm_prediction = retry_on_exceptions_with_backoff( - lambda: llm_chain.predict(**full_prompt_args), - [ - ErrorToRetry(openai.error.RateLimitError), - ErrorToRetry(openai.error.ServiceUnavailableError), - ErrorToRetry(openai.error.TryAgain), - ErrorToRetry( - openai.error.APIConnectionError, lambda e: e.should_retry - ), - ], - ) - else: - llm_prediction = llm_chain.predict(**full_prompt_args) - return llm_prediction - - def predict(self, prompt: Prompt, **prompt_args: Any) -> Tuple[str, str]: - """Predict the answer to a query. - - Args: - prompt (Prompt): Prompt to use for prediction. - - Returns: - Tuple[str, str]: Tuple of the predicted answer and the formatted prompt. - - """ - formatted_prompt = prompt.format(llm=self._llm, **prompt_args) - llm_prediction = self._predict(prompt, **prompt_args) - logging.debug(llm_prediction) - - # We assume that the value of formatted_prompt is exactly the thing - # eventually sent to OpenAI, or whatever LLM downstream - prompt_tokens_count = self._count_tokens(formatted_prompt) - prediction_tokens_count = self._count_tokens(llm_prediction) - self._total_tokens_used += prompt_tokens_count + prediction_tokens_count - return llm_prediction, formatted_prompt - - def stream(self, prompt: Prompt, **prompt_args: Any) -> Tuple[Generator, str]: - """Stream the answer to a query. - - NOTE: this is a beta feature. Will try to build or use - better abstractions about response handling. - - Args: - prompt (Prompt): Prompt to use for prediction. - - Returns: - str: The predicted answer. 
- - """ - if not isinstance(self._llm, OpenAI): - raise ValueError("stream is only supported for OpenAI LLMs") - formatted_prompt = prompt.format(llm=self._llm, **prompt_args) - raw_response_gen = self._llm.stream(formatted_prompt) - response_gen = _get_response_gen(raw_response_gen) - # NOTE/TODO: token counting doesn't work with streaming - return response_gen, formatted_prompt - - @property - def total_tokens_used(self) -> int: - """Get the total tokens used so far.""" - return self._total_tokens_used - - def _count_tokens(self, text: str) -> int: - tokens = globals_helper.tokenizer(text) - return len(tokens) - - @property - def last_token_usage(self) -> int: - """Get the last token usage.""" - if self._last_token_usage is None: - return 0 - return self._last_token_usage - - @last_token_usage.setter - def last_token_usage(self, value: int) -> None: - """Set the last token usage.""" - self._last_token_usage = value - - async def _apredict(self, prompt: Prompt, **prompt_args: Any) -> str: - """Async inner predict function. - - If retry_on_throttling is true, we will retry on rate limit errors. - - """ - llm_chain = LLMChain( - prompt=prompt.get_langchain_prompt(llm=self._llm), llm=self._llm - ) - - # Note: we don't pass formatted_prompt to llm_chain.predict because - # langchain does the same formatting under the hood - full_prompt_args = prompt.get_full_format_args(prompt_args) - # TODO: support retry on throttling - llm_prediction = await llm_chain.apredict(**full_prompt_args) - return llm_prediction - - async def apredict(self, prompt: Prompt, **prompt_args: Any) -> Tuple[str, str]: - """Async predict the answer to a query. - - Args: - prompt (Prompt): Prompt to use for prediction. - - Returns: - Tuple[str, str]: Tuple of the predicted answer and the formatted prompt. 
- - """ - formatted_prompt = prompt.format(llm=self._llm, **prompt_args) - llm_prediction = await self._apredict(prompt, **prompt_args) - logging.debug(llm_prediction) - - # We assume that the value of formatted_prompt is exactly the thing - # eventually sent to OpenAI, or whatever LLM downstream - prompt_tokens_count = self._count_tokens(formatted_prompt) - prediction_tokens_count = self._count_tokens(llm_prediction) - self._total_tokens_used += prompt_tokens_count + prediction_tokens_count - return llm_prediction, formatted_prompt diff --git a/spaces/johko/NSQL-Text-To-SQL/README.md b/spaces/johko/NSQL-Text-To-SQL/README.md deleted file mode 100644 index e39d721079b60d8ac3ea48b0f3b71122a049128b..0000000000000000000000000000000000000000 --- a/spaces/johko/NSQL-Text-To-SQL/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: NSQL Text To SQL -emoji: 🐨 -colorFrom: pink -colorTo: indigo -sdk: gradio -sdk_version: 3.35.2 -app_file: app.py -pinned: false -license: bsd-3-clause ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/jonigata/PoseMaker2/external/hrnet_w48_coco_256x192.py b/spaces/jonigata/PoseMaker2/external/hrnet_w48_coco_256x192.py deleted file mode 100644 index ee33c03d79f94fb04e2fda222114c14e99307b45..0000000000000000000000000000000000000000 --- a/spaces/jonigata/PoseMaker2/external/hrnet_w48_coco_256x192.py +++ /dev/null @@ -1,169 +0,0 @@ -_base_ = [ - 'default_runtime.py', - 'coco.py' -] -evaluation = dict(interval=10, metric='mAP', save_best='AP') - -optimizer = dict( - type='Adam', - lr=5e-4, -) -optimizer_config = dict(grad_clip=None) -# learning policy -lr_config = dict( - policy='step', - warmup='linear', - warmup_iters=500, - warmup_ratio=0.001, - step=[170, 200]) -total_epochs = 210 -channel_cfg = dict( - num_output_channels=17, - dataset_joints=17, - dataset_channel=[ - [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16], - ], - inference_channel=[ - 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16 - ]) - -# model settings -model = dict( - type='TopDown', - pretrained='https://download.openmmlab.com/mmpose/' - 'pretrain_models/hrnet_w48-8ef0771d.pth', - backbone=dict( - type='HRNet', - in_channels=3, - extra=dict( - stage1=dict( - num_modules=1, - num_branches=1, - block='BOTTLENECK', - num_blocks=(4, ), - num_channels=(64, )), - stage2=dict( - num_modules=1, - num_branches=2, - block='BASIC', - num_blocks=(4, 4), - num_channels=(48, 96)), - stage3=dict( - num_modules=4, - num_branches=3, - block='BASIC', - num_blocks=(4, 4, 4), - num_channels=(48, 96, 192)), - stage4=dict( - num_modules=3, - num_branches=4, - block='BASIC', - num_blocks=(4, 4, 4, 4), - num_channels=(48, 96, 192, 384))), - ), - keypoint_head=dict( - type='TopdownHeatmapSimpleHead', - in_channels=48, - out_channels=channel_cfg['num_output_channels'], - num_deconv_layers=0, - extra=dict(final_conv_kernel=1, ), - loss_keypoint=dict(type='JointsMSELoss', use_target_weight=True)), - train_cfg=dict(), - test_cfg=dict( - flip_test=True, - post_process='default', - shift_heatmap=True, - modulate_kernel=11)) - -data_cfg = dict( - image_size=[192, 256], - heatmap_size=[48, 64], - num_output_channels=channel_cfg['num_output_channels'], - num_joints=channel_cfg['dataset_joints'], - dataset_channel=channel_cfg['dataset_channel'], - inference_channel=channel_cfg['inference_channel'], - soft_nms=False, - nms_thr=1.0, - oks_thr=0.9, - vis_thr=0.2, - use_gt_bbox=False, - det_bbox_thr=0.0, - 
bbox_file='data/coco/person_detection_results/' - 'COCO_val2017_detections_AP_H_56_person.json', -) - -train_pipeline = [ - dict(type='LoadImageFromFile'), - dict(type='TopDownGetBboxCenterScale', padding=1.25), - dict(type='TopDownRandomShiftBboxCenter', shift_factor=0.16, prob=0.3), - dict(type='TopDownRandomFlip', flip_prob=0.5), - dict( - type='TopDownHalfBodyTransform', - num_joints_half_body=8, - prob_half_body=0.3), - dict( - type='TopDownGetRandomScaleRotation', rot_factor=40, scale_factor=0.5), - dict(type='TopDownAffine'), - dict(type='ToTensor'), - dict( - type='NormalizeTensor', - mean=[0.485, 0.456, 0.406], - std=[0.229, 0.224, 0.225]), - dict(type='TopDownGenerateTarget', sigma=2), - dict( - type='Collect', - keys=['img', 'target', 'target_weight'], - meta_keys=[ - 'image_file', 'joints_3d', 'joints_3d_visible', 'center', 'scale', - 'rotation', 'bbox_score', 'flip_pairs' - ]), -] - -val_pipeline = [ - dict(type='LoadImageFromFile'), - dict(type='TopDownGetBboxCenterScale', padding=1.25), - dict(type='TopDownAffine'), - dict(type='ToTensor'), - dict( - type='NormalizeTensor', - mean=[0.485, 0.456, 0.406], - std=[0.229, 0.224, 0.225]), - dict( - type='Collect', - keys=['img'], - meta_keys=[ - 'image_file', 'center', 'scale', 'rotation', 'bbox_score', - 'flip_pairs' - ]), -] - -test_pipeline = val_pipeline - -data_root = 'data/coco' -data = dict( - samples_per_gpu=32, - workers_per_gpu=2, - val_dataloader=dict(samples_per_gpu=32), - test_dataloader=dict(samples_per_gpu=32), - train=dict( - type='TopDownCocoDataset', - ann_file=f'{data_root}/annotations/person_keypoints_train2017.json', - img_prefix=f'{data_root}/train2017/', - data_cfg=data_cfg, - pipeline=train_pipeline, - dataset_info={{_base_.dataset_info}}), - val=dict( - type='TopDownCocoDataset', - ann_file=f'{data_root}/annotations/person_keypoints_val2017.json', - img_prefix=f'{data_root}/val2017/', - data_cfg=data_cfg, - pipeline=val_pipeline, - dataset_info={{_base_.dataset_info}}), - test=dict( - type='TopDownCocoDataset', - ann_file=f'{data_root}/annotations/person_keypoints_val2017.json', - img_prefix=f'{data_root}/val2017/', - data_cfg=data_cfg, - pipeline=test_pipeline, - dataset_info={{_base_.dataset_info}}), -) diff --git a/spaces/jordonpeter01/MusicGen2/app.py b/spaces/jordonpeter01/MusicGen2/app.py deleted file mode 100644 index 0f92495d323f1c70a9c8dde3b7680e3f9491ab83..0000000000000000000000000000000000000000 --- a/spaces/jordonpeter01/MusicGen2/app.py +++ /dev/null @@ -1,407 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. - -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -# Updated to account for UI changes from https://github.com/rkfg/audiocraft/blob/long/app.py -# also released under the MIT license. 
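For context on the `hrnet_w48_coco_256x192.py` config removed above: mmpose consumes such files through mmcv's `Config` loader and its high-level inference helpers. A minimal sketch, assuming an mmpose 0.x / mmcv 1.x environment, that the `_base_` files are resolvable, and that the image and checkpoint paths are placeholders:

```python
# Sketch only: loading a top-down mmpose config like the one above and running inference.
# Assumes mmpose 0.x / mmcv 1.x APIs; file names below are placeholders.
from mmcv import Config
from mmpose.apis import inference_top_down_pose_model, init_pose_model

cfg = Config.fromfile("hrnet_w48_coco_256x192.py")
model = init_pose_model(cfg, checkpoint=None, device="cpu")  # pass a real .pth path in practice

# person_results would normally come from a person detector; a full-image box stands in here.
person_results = [{"bbox": [0, 0, 192, 256, 1.0]}]
pose_results, _ = inference_top_down_pose_model(
    model, "demo.jpg", person_results, format="xywh"
)
print(len(pose_results))
```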
- -import argparse -from concurrent.futures import ProcessPoolExecutor -import os -from pathlib import Path -import subprocess as sp -from tempfile import NamedTemporaryFile -import time -import typing as tp -import warnings - -import torch -import gradio as gr - -from audiocraft.data.audio_utils import convert_audio -from audiocraft.data.audio import audio_write -from audiocraft.models import MusicGen - - -MODEL = None # Last used model -IS_BATCHED = "facebook/MusicGen" in os.environ.get('SPACE_ID', '') -MAX_BATCH_SIZE = 6 -BATCHED_DURATION = 15 -INTERRUPTING = False -# We have to wrap subprocess call to clean a bit the log when using gr.make_waveform -_old_call = sp.call - - -def _call_nostderr(*args, **kwargs): - # Avoid ffmpeg vomitting on the logs. - kwargs['stderr'] = sp.DEVNULL - kwargs['stdout'] = sp.DEVNULL - _old_call(*args, **kwargs) - - -sp.call = _call_nostderr -# Preallocating the pool of processes. -pool = ProcessPoolExecutor(3) -pool.__enter__() - - -def interrupt(): - global INTERRUPTING - INTERRUPTING = True - - -class FileCleaner: - def __init__(self, file_lifetime: float = 3600): - self.file_lifetime = file_lifetime - self.files = [] - - def add(self, path: tp.Union[str, Path]): - self._cleanup() - self.files.append((time.time(), Path(path))) - - def _cleanup(self): - now = time.time() - for time_added, path in list(self.files): - if now - time_added > self.file_lifetime: - if path.exists(): - path.unlink() - self.files.pop(0) - else: - break - - -file_cleaner = FileCleaner() - - -def make_waveform(*args, **kwargs): - # Further remove some warnings. - be = time.time() - with warnings.catch_warnings(): - warnings.simplefilter('ignore') - out = gr.make_waveform(*args, **kwargs) - print("Make a video took", time.time() - be) - return out - - -def load_model(version='melody'): - global MODEL - print("Loading model", version) - if MODEL is None or MODEL.name != version: - MODEL = MusicGen.get_pretrained(version) - - -def _do_predictions(texts, melodies, duration, progress=False, **gen_kwargs): - MODEL.set_generation_params(duration=duration, **gen_kwargs) - print("new batch", len(texts), texts, [None if m is None else (m[0], m[1].shape) for m in melodies]) - be = time.time() - processed_melodies = [] - target_sr = 32000 - target_ac = 1 - for melody in melodies: - if melody is None: - processed_melodies.append(None) - else: - sr, melody = melody[0], torch.from_numpy(melody[1]).to(MODEL.device).float().t() - if melody.dim() == 1: - melody = melody[None] - melody = melody[..., :int(sr * duration)] - melody = convert_audio(melody, sr, target_sr, target_ac) - processed_melodies.append(melody) - - if any(m is not None for m in processed_melodies): - outputs = MODEL.generate_with_chroma( - descriptions=texts, - melody_wavs=processed_melodies, - melody_sample_rate=target_sr, - progress=progress, - ) - else: - outputs = MODEL.generate(texts, progress=progress) - - outputs = outputs.detach().cpu().float() - out_files = [] - for output in outputs: - with NamedTemporaryFile("wb", suffix=".wav", delete=False) as file: - audio_write( - file.name, output, MODEL.sample_rate, strategy="loudness", - loudness_headroom_db=16, loudness_compressor=True, add_suffix=False) - out_files.append(pool.submit(make_waveform, file.name)) - file_cleaner.add(file.name) - res = [out_file.result() for out_file in out_files] - for file in res: - file_cleaner.add(file) - print("batch finished", len(texts), time.time() - be) - print("Tempfiles currently stored: ", len(file_cleaner.files)) - return res - - -def 
predict_batched(texts, melodies): - max_text_length = 512 - texts = [text[:max_text_length] for text in texts] - load_model('melody') - res = _do_predictions(texts, melodies, BATCHED_DURATION) - return [res] - - -def predict_full(model, text, melody, duration, topk, topp, temperature, cfg_coef, progress=gr.Progress()): - global INTERRUPTING - INTERRUPTING = False - if temperature < 0: - raise gr.Error("Temperature must be >= 0.") - if topk < 0: - raise gr.Error("Topk must be non-negative.") - if topp < 0: - raise gr.Error("Topp must be non-negative.") - - topk = int(topk) - load_model(model) - - def _progress(generated, to_generate): - progress((generated, to_generate)) - if INTERRUPTING: - raise gr.Error("Interrupted.") - MODEL.set_custom_progress_callback(_progress) - - outs = _do_predictions( - [text], [melody], duration, progress=True, - top_k=topk, top_p=topp, temperature=temperature, cfg_coef=cfg_coef) - return outs[0] - - -def toggle_audio_src(choice): - if choice == "mic": - return gr.update(source="microphone", value=None, label="Microphone") - else: - return gr.update(source="upload", value=None, label="File") - - -def ui_full(launch_kwargs): - with gr.Blocks() as interface: - gr.Markdown( - """ - # MusicGen - This is your private demo for [MusicGen](https://github.com/facebookresearch/audiocraft), - a simple and controllable model for music generation - presented at: ["Simple and Controllable Music Generation"](https://huggingface.co/papers/2306.05284) - """ - ) - with gr.Row(): - with gr.Column(): - with gr.Row(): - text = gr.Text(label="Input Text", interactive=True) - with gr.Column(): - radio = gr.Radio(["file", "mic"], value="file", - label="Condition on a melody (optional) File or Mic") - melody = gr.Audio(source="upload", type="numpy", label="File", - interactive=True, elem_id="melody-input") - with gr.Row(): - submit = gr.Button("Submit") - # Adapted from https://github.com/rkfg/audiocraft/blob/long/app.py, MIT license. 
- _ = gr.Button("Interrupt").click(fn=interrupt, queue=False) - with gr.Row(): - model = gr.Radio(["melody", "medium", "small", "large"], - label="Model", value="melody", interactive=True) - with gr.Row(): - duration = gr.Slider(minimum=1, maximum=120, value=10, label="Duration", interactive=True) - with gr.Row(): - topk = gr.Number(label="Top-k", value=250, interactive=True) - topp = gr.Number(label="Top-p", value=0, interactive=True) - temperature = gr.Number(label="Temperature", value=1.0, interactive=True) - cfg_coef = gr.Number(label="Classifier Free Guidance", value=3.0, interactive=True) - with gr.Column(): - output = gr.Video(label="Generated Music") - submit.click(predict_full, - inputs=[model, text, melody, duration, topk, topp, temperature, cfg_coef], - outputs=[output]) - radio.change(toggle_audio_src, radio, [melody], queue=False, show_progress=False) - gr.Examples( - fn=predict_full, - examples=[ - [ - "An 80s driving pop song with heavy drums and synth pads in the background", - "./assets/bach.mp3", - "melody" - ], - [ - "A cheerful country song with acoustic guitars", - "./assets/bolero_ravel.mp3", - "melody" - ], - [ - "90s rock song with electric guitar and heavy drums", - None, - "medium" - ], - [ - "a light and cheerly EDM track, with syncopated drums, aery pads, and strong emotions", - "./assets/bach.mp3", - "melody" - ], - [ - "lofi slow bpm electro chill with organic samples", - None, - "medium", - ], - ], - inputs=[text, melody, model], - outputs=[output] - ) - gr.Markdown( - """ - ### More details - - The model will generate a short music extract based on the description you provided. - The model can generate up to 30 seconds of audio in one pass. It is now possible - to extend the generation by feeding back the end of the previous chunk of audio. - This can take a long time, and the model might lose consistency. The model might also - decide at arbitrary positions that the song ends. - - **WARNING:** Choosing long durations will take a long time to generate (2min might take ~10min). - An overlap of 12 seconds is kept with the previously generated chunk, and 18 "new" seconds - are generated each time. - - We present 4 model variations: - 1. Melody -- a music generation model capable of generating music condition - on text and melody inputs. **Note**, you can also use text only. - 2. Small -- a 300M transformer decoder conditioned on text only. - 3. Medium -- a 1.5B transformer decoder conditioned on text only. - 4. Large -- a 3.3B transformer decoder conditioned on text only (might OOM for the longest sequences.) - - When using `melody`, ou can optionaly provide a reference audio from - which a broad melody will be extracted. The model will then try to follow both - the description and melody provided. - - You can also use your own GPU or a Google Colab by following the instructions on our repo. - See [github.com/facebookresearch/audiocraft](https://github.com/facebookresearch/audiocraft) - for more details. - """ - ) - - interface.queue().launch(**launch_kwargs) - - -def ui_batched(launch_kwargs): - with gr.Blocks() as demo: - gr.Markdown( - """ - # MusicGen - - This is the demo for [MusicGen](https://github.com/facebookresearch/audiocraft), - a simple and controllable model for music generation - presented at: ["Simple and Controllable Music Generation"](https://huggingface.co/papers/2306.05284). -
    - - Duplicate Space - for longer sequences, more control and no queue.

    - """ - ) - with gr.Row(): - with gr.Column(): - with gr.Row(): - text = gr.Text(label="Describe your music", lines=2, interactive=True) - with gr.Column(): - radio = gr.Radio(["file", "mic"], value="file", - label="Condition on a melody (optional) File or Mic") - melody = gr.Audio(source="upload", type="numpy", label="File", - interactive=True, elem_id="melody-input") - with gr.Row(): - submit = gr.Button("Generate") - with gr.Column(): - output = gr.Video(label="Generated Music") - submit.click(predict_batched, inputs=[text, melody], - outputs=[output], batch=True, max_batch_size=MAX_BATCH_SIZE) - radio.change(toggle_audio_src, radio, [melody], queue=False, show_progress=False) - gr.Examples( - fn=predict_batched, - examples=[ - [ - "An 80s driving pop song with heavy drums and synth pads in the background", - "./assets/bach.mp3", - ], - [ - "A cheerful country song with acoustic guitars", - "./assets/bolero_ravel.mp3", - ], - [ - "90s rock song with electric guitar and heavy drums", - None, - ], - [ - "a light and cheerly EDM track, with syncopated drums, aery pads, and strong emotions bpm: 130", - "./assets/bach.mp3", - ], - [ - "lofi slow bpm electro chill with organic samples", - None, - ], - ], - inputs=[text, melody], - outputs=[output] - ) - gr.Markdown(""" - ### More details - - The model will generate 12 seconds of audio based on the description you provided. - You can optionaly provide a reference audio from which a broad melody will be extracted. - The model will then try to follow both the description and melody provided. - All samples are generated with the `melody` model. - - You can also use your own GPU or a Google Colab by following the instructions on our repo. - - See [github.com/facebookresearch/audiocraft](https://github.com/facebookresearch/audiocraft) - for more details. 
- """) - - demo.queue(max_size=8 * 4).launch(**launch_kwargs) - - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - parser.add_argument( - '--listen', - type=str, - default='0.0.0.0' if 'SPACE_ID' in os.environ else '127.0.0.1', - help='IP to listen on for connections to Gradio', - ) - parser.add_argument( - '--username', type=str, default='', help='Username for authentication' - ) - parser.add_argument( - '--password', type=str, default='', help='Password for authentication' - ) - parser.add_argument( - '--server_port', - type=int, - default=0, - help='Port to run the server listener on', - ) - parser.add_argument( - '--inbrowser', action='store_true', help='Open in browser' - ) - parser.add_argument( - '--share', action='store_true', help='Share the gradio UI' - ) - - args = parser.parse_args() - - launch_kwargs = {} - launch_kwargs['server_name'] = args.listen - - if args.username and args.password: - launch_kwargs['auth'] = (args.username, args.password) - if args.server_port: - launch_kwargs['server_port'] = args.server_port - if args.inbrowser: - launch_kwargs['inbrowser'] = args.inbrowser - if args.share: - launch_kwargs['share'] = args.share - - # Show the interface - if IS_BATCHED: - ui_batched(launch_kwargs) - else: - ui_full(launch_kwargs) diff --git a/spaces/justest/gpt4free/g4f/.v1/unfinished/gptbz/README.md b/spaces/justest/gpt4free/g4f/.v1/unfinished/gptbz/README.md deleted file mode 100644 index 05bc2770e0f5b20407b49e54870df4c09902886d..0000000000000000000000000000000000000000 --- a/spaces/justest/gpt4free/g4f/.v1/unfinished/gptbz/README.md +++ /dev/null @@ -1,4 +0,0 @@ -https://chat.gpt.bz - -to do: -- code refractoring \ No newline at end of file diff --git a/spaces/jyseo/3DFuse/ldm/modules/midas/__init__.py b/spaces/jyseo/3DFuse/ldm/modules/midas/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/kahnchana/clippy/utils.py b/spaces/kahnchana/clippy/utils.py deleted file mode 100644 index 9c22026e9333439c0d66d17152a567fd2cda4ed1..0000000000000000000000000000000000000000 --- a/spaces/kahnchana/clippy/utils.py +++ /dev/null @@ -1,166 +0,0 @@ -import matplotlib -import matplotlib.cm as cm -import matplotlib.colors as mcolors -import numpy as np -import torch -import torchvision -from PIL import Image, ImageDraw, ImageFont -from einops import rearrange -from matplotlib import pyplot as plt - - -def get_similarity(image_encodings, label_encodings, target_shape, interpolation="bilinear", do_argmax=False): - """ - - Args: - image_encodings: - label_encodings: - target_shape: - interpolation: nearest, bilinear - do_argmax: - - Returns: - - """ - - image_encodings = image_encodings.cpu() - label_encodings = label_encodings.cpu() - - image_encodings = rearrange( - image_encodings, "b (h w) d -> d b h w", h=int(np.sqrt(image_encodings.shape[-2])) - ) - # assuming square inputs & targets - scale_ratio = (target_shape[-2] / image_encodings.shape[-2], - target_shape[-1] / image_encodings.shape[-1],) - temp_list = [] - for i in image_encodings: - i = i.unsqueeze(1) - i = torch.nn.functional.interpolate( - i, scale_factor=scale_ratio, mode=interpolation - ) - temp_list.append(i) - image_encodings = torch.cat(temp_list, dim=1) - - image_encodings = rearrange(image_encodings, "b d h w -> b h w d") - similarity = image_encodings @ label_encodings.T - similarity = rearrange(similarity, "b h w d-> b d h w") - if do_argmax: - similarity = torch.argmax(similarity, dim=1, 
keepdim=True).to(torch.float64) - return similarity - - -def get_cmap(ncolors): - if ncolors > 9: - cmap = plt.cm.tab20 - else: - cmap = plt.cm.tab10 - cmaplist = [cmap(i) for i in range(ncolors)] - cmap = matplotlib.colors.LinearSegmentedColormap.from_list("custom", cmaplist, ncolors) - - mappable = cm.ScalarMappable(cmap=cmap) - mappable.set_array([]) - mappable.set_clim(-0.5, ncolors + 0.5) - - return cmap, mappable - - -def vis_prediction(sample_text, img_arr, similarity): - N = len(sample_text) - cmap, mappable = get_cmap(N) - - fig, axs = plt.subplots(1, 2) - - _ = axs[0].imshow(img_arr) - _ = axs[1].imshow(img_arr) - _ = axs[1].imshow(similarity, cmap=cmap, interpolation="nearest", vmin=0, vmax=N, alpha=0.5) - axs[0].axis("off") - axs[1].axis("off") - - fig.subplots_adjust(bottom=0.2) - cbar_ax = fig.add_axes([0.0, 0.85, 1.0, 0.05]) - colorbar = plt.colorbar(mappable, cax=cbar_ax, cmap=cmap, orientation="horizontal") - colorbar.set_ticks(np.linspace(0, N, N)) - colorbar.set_ticklabels(sample_text) - - return fig - - -class DummyArgs: - def __init__(self, **kwargs): - self.__dict__.update(kwargs) - - -def get_transform(size=(224, 224)): - transform = torchvision.transforms.Compose([ - torchvision.transforms.Resize(size), - torchvision.transforms.ToTensor(), - torchvision.transforms.Normalize(mean=(0.48145466, 0.4578275, 0.40821073), - std=(0.26862954, 0.26130258, 0.27577711)) - ]) - return transform - - -def ade_palette(): - """ADE20K palette that maps each class to RGB values.""" - return [[120, 120, 120], [180, 120, 120], [6, 230, 230], [80, 50, 50], - [4, 200, 3], [120, 120, 80], [140, 140, 140], [204, 5, 255], - [230, 230, 230], [4, 250, 7], [224, 5, 255], [235, 255, 7], - [150, 5, 61], [120, 120, 70], [8, 255, 51], [255, 6, 82], - [143, 255, 140], [204, 255, 4], [255, 51, 7], [204, 70, 3], - [0, 102, 200], [61, 230, 250], [255, 6, 51], [11, 102, 255], - [255, 7, 71], [255, 9, 224], [9, 7, 230], [220, 220, 220], - [255, 9, 92], [112, 9, 255], [8, 255, 214], [7, 255, 224], - [255, 184, 6], [10, 255, 71], [255, 41, 10], [7, 255, 255], - [224, 255, 8], [102, 8, 255], [255, 61, 6], [255, 194, 7], - [255, 122, 8], [0, 255, 20], [255, 8, 41], [255, 5, 153], - [6, 51, 255], [235, 12, 255], [160, 150, 20], [0, 163, 255], - [140, 140, 140], [250, 10, 15], [20, 255, 0], [31, 255, 0], - [255, 31, 0], [255, 224, 0], [153, 255, 0], [0, 0, 255], - [255, 71, 0], [0, 235, 255], [0, 173, 255], [31, 0, 255], - [11, 200, 200], [255, 82, 0], [0, 255, 245], [0, 61, 255], - [0, 255, 112], [0, 255, 133], [255, 0, 0], [255, 163, 0], - [255, 102, 0], [194, 255, 0], [0, 143, 255], [51, 255, 0], - [0, 82, 255], [0, 255, 41], [0, 255, 173], [10, 0, 255], - [173, 255, 0], [0, 255, 153], [255, 92, 0], [255, 0, 255], - [255, 0, 245], [255, 0, 102], [255, 173, 0], [255, 0, 20], - [255, 184, 184], [0, 31, 255], [0, 255, 61], [0, 71, 255], - [255, 0, 204], [0, 255, 194], [0, 255, 82], [0, 10, 255], - [0, 112, 255], [51, 0, 255], [0, 194, 255], [0, 122, 255], - [0, 255, 163], [255, 153, 0], [0, 255, 10], [255, 112, 0], - [143, 255, 0], [82, 0, 255], [163, 255, 0], [255, 235, 0], - [8, 184, 170], [133, 0, 255], [0, 255, 92], [184, 0, 255], - [255, 0, 31], [0, 184, 255], [0, 214, 255], [255, 0, 112], - [92, 255, 0], [0, 224, 255], [112, 224, 255], [70, 184, 160], - [163, 0, 255], [153, 0, 255], [71, 255, 0], [255, 0, 163], - [255, 204, 0], [255, 0, 143], [0, 255, 235], [133, 255, 0], - [255, 0, 235], [245, 0, 255], [255, 0, 122], [255, 245, 0], - [10, 190, 212], [214, 255, 0], [0, 204, 255], [20, 0, 255], - [255, 
255, 0], [0, 153, 255], [0, 41, 255], [0, 255, 204], - [41, 0, 255], [41, 255, 0], [173, 0, 255], [0, 245, 255], - [71, 0, 255], [122, 0, 255], [0, 255, 184], [0, 92, 255], - [184, 255, 0], [0, 133, 255], [255, 214, 0], [25, 194, 194], - [102, 255, 0], [92, 0, 255]] - - -def get_cmap_image(legend): - # Define the size of the legend image - width = 200 - height = len(legend) * 20 - - # Create a new image with the desired size and background color - img = Image.new('RGB', (width, height), (255, 255, 255)) - - # Create a drawing context - draw = ImageDraw.Draw(img) - - # Define the font to use for the legend labels - font = ImageFont.truetype('arial.ttf', 16) - - # Loop through the items in legend and draw a rectangle and label for each - y = 0 - for label, color in legend.items(): - draw.rectangle((0, y, 20, y + 20), fill=color) - draw.text((30, y), label, font=font, fill=(0, 0, 0)) - y += 20 - - return img diff --git a/spaces/kboaten/MIDI-Audio-Extension/app.py b/spaces/kboaten/MIDI-Audio-Extension/app.py deleted file mode 100644 index 8e017ebf12eaf00ac9f47ea57f9c7c465e5982a2..0000000000000000000000000000000000000000 --- a/spaces/kboaten/MIDI-Audio-Extension/app.py +++ /dev/null @@ -1,127 +0,0 @@ - - -"""## Imports""" -import tensorflow as tf -config = tf.compat.v1.ConfigProto() -config.gpu_options.allow_growth = True -session = tf.compat.v1.Session(config=config) -import gradio as gr -from pathlib import Path -import subprocess -from transformers import AutoTokenizer -from transformers import TFAutoModelForCausalLM -from transformers import TFAutoModelForSequenceClassification, AutoTokenizer -import numpy as np -import os -from musicautobot.music_transformer.transform import idxenc2stream, midi2idxenc -from musicautobot.vocab import MusicVocab -import keras - -"""## Model Loads""" - -#load poem generation model -model_poem_gn = TFAutoModelForCausalLM.from_pretrained('merged-ui/models-misc/peom_gn') -base_model_poem_gn = "distilgpt2" -tokenizer_poem_gn = AutoTokenizer.from_pretrained(base_model_poem_gn) - -#load sentiment analysis -model_sa = TFAutoModelForSequenceClassification.from_pretrained('merged-ui/models-misc/sen_analysis/bert') -base_model_sa = "distilbert-base-uncased" -tokenizer_sa = AutoTokenizer.from_pretrained(base_model_sa) - -#music generation -""" -base_path = "/content/drive/MyDrive/FIRE_3rd Sem/music_gn/" -#path_mid_file -> Replace this with model generated file path -path_mid_file = base_path + "Comic_Relief.mid" -path_wav_file = base_path + "output_comic.wav" -subprocess.call(['timidity', path_mid_file, "-Ow", "-o", path_wav_file])""" -music_gen_base_path = "merged-ui/music_gen/" -model_music_gen = keras.models.load_model("MIDI-song-extender/transformer-final") - -"""## Music Generation""" - -def predict_music(model, input_vector, num): - normalized = input_vector / 311 - for i in range(num): - predict = model.predict(np.reshape(normalized[-100:], (1,100)), verbose = 0) - normalized = np.append(normalized, predict) - - result = np.rint(normalized * 311) - # edits to prediction - for i in range(100, len(result)): - if i % 2 == 0: - if abs(result[i] - 8) < 5 and result[i] != 8: - result[i] = 8 - else: - if result[i] < 137: - result[i] = 137 - return result - -# this function takes a 100 length encoded song beginning as an input and -def midi_predict(model, test, num_notes): - test_midi = idxenc2stream(test.astype("int"), MusicVocab.create()) - test_midi.write('midi',music_gen_base_path+"input_demo.mid") - - print(os.listdir(music_gen_base_path)) - - res = 
predict_music(model, test, num_notes) - output = idxenc2stream(res.astype("int"), MusicVocab.create()) - output.write('midi',music_gen_base_path+"output_demo.mid") - path_mid_file = music_gen_base_path + "output_demo.mid" - path_wav_file = music_gen_base_path + "output_demo.wav" - # need timidity for this - subprocess.call(['timidity', path_mid_file, "-Ow", "-o", path_wav_file]) - print(os.listdir(music_gen_base_path)) - return Path(path_wav_file) - - -def inference_music_gen(audio, num_notes): - data_e = midi2idxenc(audio.name, MusicVocab.create()) - return midi_predict(model_music_gen, data_e[:100], int(num_notes)) - -music_gen_interface = gr.Interface( - inference_music_gen, - inputs = [gr.inputs.File(type="file", label="Input"), gr.Textbox(lines = 1, placeholder = "Enter number of notes here")], - examples=[[music_gen_base_path + "mid_file/Comic_Relief.mid", 300]], - outputs = gr.outputs.Audio(type="filepath", label="Output") - ) - -"""## Sentiment Analysis""" - -def inference_sentiment_analysis(sen): - tokenized_v1 = tokenizer_sa([sen], return_tensors="np", padding="longest") - outputs_v1 = model_sa(tokenized_v1).logits - classifications_v1 = np.argmax(outputs_v1, axis=1) - if classifications_v1[0] == 1: - res = "Positive :)" - else: - res = "Negative :(" - return res - -sentiment_analysis_interface = gr.Interface( - fn=inference_sentiment_analysis, - inputs=gr.Textbox(lines=2, placeholder="Enter a Sentence"), - outputs="text", -) - -"""## Peom Generation""" - -def inference_poem_gen(start): - tokenized = tokenizer_poem_gn(start, return_tensors="np") - outputs = model_poem_gn.generate(**tokenized, max_new_tokens=20) - res = tokenizer_poem_gn.decode(outputs[0]) - return res.replace("", "\n") - -poem_gen_interface = gr.Interface( - fn=inference_poem_gen, - inputs=gr.Textbox(lines=2, placeholder="Start Here..."), - outputs="text", -) - -"""## Combine All""" - -demo = gr.TabbedInterface([music_gen_interface, poem_gen_interface, sentiment_analysis_interface], - ["Music Generation", "Poem Generation", "Sentiment Analysis"]) -demo.launch(debug=True) - diff --git a/spaces/kcagle/AutoGPT/autogpt/commands/__init__.py b/spaces/kcagle/AutoGPT/autogpt/commands/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/kenton-li/maia-utsw/main.py b/spaces/kenton-li/maia-utsw/main.py deleted file mode 100644 index e5edfd2ae5612b650abec80589b1b6edfe716472..0000000000000000000000000000000000000000 --- a/spaces/kenton-li/maia-utsw/main.py +++ /dev/null @@ -1,33 +0,0 @@ -from fastapi import FastAPI, Request -from fastapi.responses import Response -import httpx - -app = FastAPI() - -@app.api_route("/{path:path}", methods=["GET", "POST", "PUT", "DELETE", "PATCH", "OPTIONS", "HEAD"]) -async def proxy(request: Request, path: str): - target_url = f"http://216.128.139.239:1234/{path}" - - async with httpx.AsyncClient() as client: - method = request.method - headers = dict(request.headers) - params = request.query_params - content = await request.body() - - response = await client.request( - method=method, - url=target_url, - headers=headers, - params=params, - content=content - ) - - return Response( - content=response.content, - status_code=response.status_code, - headers=dict(response.headers) - ) - -if __name__ == "__main__": - import os - os.system("uvicorn main:app --host 0.0.0.0 --port 7860") diff --git a/spaces/keras-dreambooth/dreambooth_hogwarts_legacy/app.py 
b/spaces/keras-dreambooth/dreambooth_hogwarts_legacy/app.py deleted file mode 100644 index 0a2e9cbda810c8101d48d8b5796e7a95edfb33db..0000000000000000000000000000000000000000 --- a/spaces/keras-dreambooth/dreambooth_hogwarts_legacy/app.py +++ /dev/null @@ -1,55 +0,0 @@ -from huggingface_hub import from_pretrained_keras -from keras_cv import models -import gradio as gr -import tensorflow as tf - -tf.keras.mixed_precision.set_global_policy("mixed_float16") - -# load keras model -resolution = 512 -dreambooth_model = models.StableDiffusion( - img_width=resolution, img_height=resolution, jit_compile=True, - ) -loaded_diffusion_model = from_pretrained_keras("keras-dreambooth/dreambooth_hogwarts_legacy") -dreambooth_model._diffusion_model = loaded_diffusion_model - - -# generate images -def generate_images(prompt: str, negative_prompt: str, num_imgs_to_gen: int, inference_steps: int, guidance_scale: float): - output_images = dreambooth_model.text_to_image( - prompt, - negative_prompt=negative_prompt, - batch_size=num_imgs_to_gen, - num_steps=inference_steps, - unconditional_guidance_scale=guidance_scale, - ) - return output_images - -# Define the UI -with gr.Blocks() as demo: - gr.HTML("

    Keras Dreambooth - Hogwarts Legacy Demo

    ") - gr.HTML("

    This model has been fine-tuned to learn the concept of Hogwarts Legacy student characters.
    To use this demo, you should append the string hogwarts [legacy] student to your prompt

    ") - with gr.Row(): - with gr.Column(): - prompt = gr.Textbox(label="Positive Prompt", value="a photo of a female hogwarts [legacy] student posing outside hogwarts castle") - negative_prompt = gr.Textbox(label="Negative Prompt", value="out of frame, blurry, cropped, noisy") - samples = gr.Slider(label="Number of Images", minimum=1, maximum=6, value=4, step=1) - inference_steps = gr.Slider(label="Inference Steps", minimum=1, maximum=100, value=50, step=1) - guidance_scale = gr.Slider(label="Guidance Scale", minimum=1, maximum=10, value=7.5, step=0.1) - run = gr.Button(value="Run") - with gr.Column(): - gallery = gr.Gallery(label="Outputs").style(grid=(1,2)) - - run.click(fn=generate_images, inputs=[prompt, negative_prompt, samples, inference_steps, guidance_scale], outputs=gallery) - - gr.Examples([["hyperrealistic photo of a smiling female hogwarts [legacy] student posing outside hogwarts castle, 4K, highly detailed, intricate outfit design, art by miho hirano", - "ugly, tiling, poorly drawn hands, poorly drawn feet, poorly drawn face, out of frame, extra limbs, disfigured, deformed, body out of frame, bad anatomy, watermark, signature, cut off", - 4, 100, 7.5], - ["digital art of a male hogwarts [legacy] student holding a wand, high quality, 8k", - "ugly, tiling, poorly drawn hands, poorly drawn feet, poorly drawn face, out of frame, extra limbs, disfigured, deformed, body out of frame, bad anatomy, watermark, signature, cut off", - 4, 50, 7.5]], - [prompt, negative_prompt, samples, inference_steps, guidance_scale], gallery, generate_images, cache_examples=True) - gr.Markdown('Demo created by [Terrence Goh](https://huggingface.co/tgohblio/)') - -demo.queue(concurrency_count=3) -demo.launch() \ No newline at end of file diff --git a/spaces/keras-io/image_classification_using_conv_mixer/README.md b/spaces/keras-io/image_classification_using_conv_mixer/README.md deleted file mode 100644 index 8119e8831f8d80d9029bac16655c55cf085b9e3c..0000000000000000000000000000000000000000 --- a/spaces/keras-io/image_classification_using_conv_mixer/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Image Classification Using Conv Mixer -emoji: ✈️ -colorFrom: purple -colorTo: indigo -sdk: gradio -sdk_version: 3.0.14 -app_file: app.py -pinned: false -license: gpl-3.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/keras-io/question_answering/app.py b/spaces/keras-io/question_answering/app.py deleted file mode 100644 index 1efee6ea37c90a5e7ea9d4ca34d13a4bf50d7f7b..0000000000000000000000000000000000000000 --- a/spaces/keras-io/question_answering/app.py +++ /dev/null @@ -1,12 +0,0 @@ -import gradio as gr -description = "Question Answering Demo 🙌🏼" -title = "Question Answering with Keras" -context = "Keras is an API designed for human beings, not machines. Keras follows best practices for reducing cognitive load: it offers consistent & simple APIs, it minimizes the number of user actions required for common use cases, and it provides clear & actionable error messages. It also has extensive documentation and developer guides. See the model here: hf.co/keras-io/transformers-qa" -question = "What is Keras?" 
-interface = gr.Interface.load("huggingface/keras-io/transformers-qa", - description=description, - title = title, - theme = "grass", - examples = [[context, question]] -) -interface.launch() \ No newline at end of file diff --git a/spaces/kevinwang676/ChatGLM2-SadTalker/src/utils/paste_pic.py b/spaces/kevinwang676/ChatGLM2-SadTalker/src/utils/paste_pic.py deleted file mode 100644 index f9989e21e48e64f620f9b148e65fdfe806c53b14..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/ChatGLM2-SadTalker/src/utils/paste_pic.py +++ /dev/null @@ -1,69 +0,0 @@ -import cv2, os -import numpy as np -from tqdm import tqdm -import uuid - -from src.utils.videoio import save_video_with_watermark - -def paste_pic(video_path, pic_path, crop_info, new_audio_path, full_video_path, extended_crop=False): - - if not os.path.isfile(pic_path): - raise ValueError('pic_path must be a valid path to video/image file') - elif pic_path.split('.')[-1] in ['jpg', 'png', 'jpeg']: - # loader for first frame - full_img = cv2.imread(pic_path) - else: - # loader for videos - video_stream = cv2.VideoCapture(pic_path) - fps = video_stream.get(cv2.CAP_PROP_FPS) - full_frames = [] - while 1: - still_reading, frame = video_stream.read() - if not still_reading: - video_stream.release() - break - break - full_img = frame - frame_h = full_img.shape[0] - frame_w = full_img.shape[1] - - video_stream = cv2.VideoCapture(video_path) - fps = video_stream.get(cv2.CAP_PROP_FPS) - crop_frames = [] - while 1: - still_reading, frame = video_stream.read() - if not still_reading: - video_stream.release() - break - crop_frames.append(frame) - - if len(crop_info) != 3: - print("you didn't crop the image") - return - else: - r_w, r_h = crop_info[0] - clx, cly, crx, cry = crop_info[1] - lx, ly, rx, ry = crop_info[2] - lx, ly, rx, ry = int(lx), int(ly), int(rx), int(ry) - # oy1, oy2, ox1, ox2 = cly+ly, cly+ry, clx+lx, clx+rx - # oy1, oy2, ox1, ox2 = cly+ly, cly+ry, clx+lx, clx+rx - - if extended_crop: - oy1, oy2, ox1, ox2 = cly, cry, clx, crx - else: - oy1, oy2, ox1, ox2 = cly+ly, cly+ry, clx+lx, clx+rx - - tmp_path = str(uuid.uuid4())+'.mp4' - out_tmp = cv2.VideoWriter(tmp_path, cv2.VideoWriter_fourcc(*'MP4V'), fps, (frame_w, frame_h)) - for crop_frame in tqdm(crop_frames, 'seamlessClone:'): - p = cv2.resize(crop_frame.astype(np.uint8), (ox2-ox1, oy2 - oy1)) - - mask = 255*np.ones(p.shape, p.dtype) - location = ((ox1+ox2) // 2, (oy1+oy2) // 2) - gen_img = cv2.seamlessClone(p, full_img, mask, location, cv2.NORMAL_CLONE) - out_tmp.write(gen_img) - - out_tmp.release() - - save_video_with_watermark(tmp_path, new_audio_path, full_video_path, watermark=False) - os.remove(tmp_path) diff --git a/spaces/kevinwang676/SadTalker/src/audio2pose_models/audio2pose.py b/spaces/kevinwang676/SadTalker/src/audio2pose_models/audio2pose.py deleted file mode 100644 index 2b8cd1427038460a7679260a424d2f01d2bcf2c5..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/SadTalker/src/audio2pose_models/audio2pose.py +++ /dev/null @@ -1,94 +0,0 @@ -import torch -from torch import nn -from src.audio2pose_models.cvae import CVAE -from src.audio2pose_models.discriminator import PoseSequenceDiscriminator -from src.audio2pose_models.audio_encoder import AudioEncoder - -class Audio2Pose(nn.Module): - def __init__(self, cfg, wav2lip_checkpoint, device='cuda'): - super().__init__() - self.cfg = cfg - self.seq_len = cfg.MODEL.CVAE.SEQ_LEN - self.latent_dim = cfg.MODEL.CVAE.LATENT_SIZE - self.device = device - - self.audio_encoder = 
AudioEncoder(wav2lip_checkpoint, device) - self.audio_encoder.eval() - for param in self.audio_encoder.parameters(): - param.requires_grad = False - - self.netG = CVAE(cfg) - self.netD_motion = PoseSequenceDiscriminator(cfg) - - - def forward(self, x): - - batch = {} - coeff_gt = x['gt'].cuda().squeeze(0) #bs frame_len+1 73 - batch['pose_motion_gt'] = coeff_gt[:, 1:, 64:70] - coeff_gt[:, :1, 64:70] #bs frame_len 6 - batch['ref'] = coeff_gt[:, 0, 64:70] #bs 6 - batch['class'] = x['class'].squeeze(0).cuda() # bs - indiv_mels= x['indiv_mels'].cuda().squeeze(0) # bs seq_len+1 80 16 - - # forward - audio_emb_list = [] - audio_emb = self.audio_encoder(indiv_mels[:, 1:, :, :].unsqueeze(2)) #bs seq_len 512 - batch['audio_emb'] = audio_emb - batch = self.netG(batch) - - pose_motion_pred = batch['pose_motion_pred'] # bs frame_len 6 - pose_gt = coeff_gt[:, 1:, 64:70].clone() # bs frame_len 6 - pose_pred = coeff_gt[:, :1, 64:70] + pose_motion_pred # bs frame_len 6 - - batch['pose_pred'] = pose_pred - batch['pose_gt'] = pose_gt - - return batch - - def test(self, x): - - batch = {} - ref = x['ref'] #bs 1 70 - batch['ref'] = x['ref'][:,0,-6:] - batch['class'] = x['class'] - bs = ref.shape[0] - - indiv_mels= x['indiv_mels'] # bs T 1 80 16 - indiv_mels_use = indiv_mels[:, 1:] # we regard the ref as the first frame - num_frames = x['num_frames'] - num_frames = int(num_frames) - 1 - - # - div = num_frames//self.seq_len - re = num_frames%self.seq_len - audio_emb_list = [] - pose_motion_pred_list = [torch.zeros(batch['ref'].unsqueeze(1).shape, dtype=batch['ref'].dtype, - device=batch['ref'].device)] - - for i in range(div): - z = torch.randn(bs, self.latent_dim).to(ref.device) - batch['z'] = z - audio_emb = self.audio_encoder(indiv_mels_use[:, i*self.seq_len:(i+1)*self.seq_len,:,:,:]) #bs seq_len 512 - batch['audio_emb'] = audio_emb - batch = self.netG.test(batch) - pose_motion_pred_list.append(batch['pose_motion_pred']) #list of bs seq_len 6 - - if re != 0: - z = torch.randn(bs, self.latent_dim).to(ref.device) - batch['z'] = z - audio_emb = self.audio_encoder(indiv_mels_use[:, -1*self.seq_len:,:,:,:]) #bs seq_len 512 - if audio_emb.shape[1] != self.seq_len: - pad_dim = self.seq_len-audio_emb.shape[1] - pad_audio_emb = audio_emb[:, :1].repeat(1, pad_dim, 1) - audio_emb = torch.cat([pad_audio_emb, audio_emb], 1) - batch['audio_emb'] = audio_emb - batch = self.netG.test(batch) - pose_motion_pred_list.append(batch['pose_motion_pred'][:,-1*re:,:]) - - pose_motion_pred = torch.cat(pose_motion_pred_list, dim = 1) - batch['pose_motion_pred'] = pose_motion_pred - - pose_pred = ref[:, :1, -6:] + pose_motion_pred # bs T 6 - - batch['pose_pred'] = pose_pred - return batch diff --git a/spaces/kevinwang676/Voice-Changer-Light/infer_pack/attentions.py b/spaces/kevinwang676/Voice-Changer-Light/infer_pack/attentions.py deleted file mode 100644 index 77cb63ffccf3e33badf22d50862a64ba517b487f..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/Voice-Changer-Light/infer_pack/attentions.py +++ /dev/null @@ -1,417 +0,0 @@ -import copy -import math -import numpy as np -import torch -from torch import nn -from torch.nn import functional as F - -from infer_pack import commons -from infer_pack import modules -from infer_pack.modules import LayerNorm - - -class Encoder(nn.Module): - def __init__( - self, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size=1, - p_dropout=0.0, - window_size=10, - **kwargs - ): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = 
filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.window_size = window_size - - self.drop = nn.Dropout(p_dropout) - self.attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.attn_layers.append( - MultiHeadAttention( - hidden_channels, - hidden_channels, - n_heads, - p_dropout=p_dropout, - window_size=window_size, - ) - ) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append( - FFN( - hidden_channels, - hidden_channels, - filter_channels, - kernel_size, - p_dropout=p_dropout, - ) - ) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask): - attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.attn_layers[i](x, x, attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class Decoder(nn.Module): - def __init__( - self, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size=1, - p_dropout=0.0, - proximal_bias=False, - proximal_init=True, - **kwargs - ): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - - self.drop = nn.Dropout(p_dropout) - self.self_attn_layers = nn.ModuleList() - self.norm_layers_0 = nn.ModuleList() - self.encdec_attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.self_attn_layers.append( - MultiHeadAttention( - hidden_channels, - hidden_channels, - n_heads, - p_dropout=p_dropout, - proximal_bias=proximal_bias, - proximal_init=proximal_init, - ) - ) - self.norm_layers_0.append(LayerNorm(hidden_channels)) - self.encdec_attn_layers.append( - MultiHeadAttention( - hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout - ) - ) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append( - FFN( - hidden_channels, - hidden_channels, - filter_channels, - kernel_size, - p_dropout=p_dropout, - causal=True, - ) - ) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask, h, h_mask): - """ - x: decoder input - h: encoder output - """ - self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to( - device=x.device, dtype=x.dtype - ) - encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.self_attn_layers[i](x, x, self_attn_mask) - y = self.drop(y) - x = self.norm_layers_0[i](x + y) - - y = self.encdec_attn_layers[i](x, h, encdec_attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class MultiHeadAttention(nn.Module): - def __init__( - self, - channels, - out_channels, - n_heads, - p_dropout=0.0, - window_size=None, - heads_share=True, - block_length=None, - proximal_bias=False, - proximal_init=False, - ): - super().__init__() - assert channels % n_heads == 0 - - 
self.channels = channels - self.out_channels = out_channels - self.n_heads = n_heads - self.p_dropout = p_dropout - self.window_size = window_size - self.heads_share = heads_share - self.block_length = block_length - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - self.attn = None - - self.k_channels = channels // n_heads - self.conv_q = nn.Conv1d(channels, channels, 1) - self.conv_k = nn.Conv1d(channels, channels, 1) - self.conv_v = nn.Conv1d(channels, channels, 1) - self.conv_o = nn.Conv1d(channels, out_channels, 1) - self.drop = nn.Dropout(p_dropout) - - if window_size is not None: - n_heads_rel = 1 if heads_share else n_heads - rel_stddev = self.k_channels**-0.5 - self.emb_rel_k = nn.Parameter( - torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) - * rel_stddev - ) - self.emb_rel_v = nn.Parameter( - torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) - * rel_stddev - ) - - nn.init.xavier_uniform_(self.conv_q.weight) - nn.init.xavier_uniform_(self.conv_k.weight) - nn.init.xavier_uniform_(self.conv_v.weight) - if proximal_init: - with torch.no_grad(): - self.conv_k.weight.copy_(self.conv_q.weight) - self.conv_k.bias.copy_(self.conv_q.bias) - - def forward(self, x, c, attn_mask=None): - q = self.conv_q(x) - k = self.conv_k(c) - v = self.conv_v(c) - - x, self.attn = self.attention(q, k, v, mask=attn_mask) - - x = self.conv_o(x) - return x - - def attention(self, query, key, value, mask=None): - # reshape [b, d, t] -> [b, n_h, t, d_k] - b, d, t_s, t_t = (*key.size(), query.size(2)) - query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3) - key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - - scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1)) - if self.window_size is not None: - assert ( - t_s == t_t - ), "Relative attention is only available for self-attention." - key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s) - rel_logits = self._matmul_with_relative_keys( - query / math.sqrt(self.k_channels), key_relative_embeddings - ) - scores_local = self._relative_position_to_absolute_position(rel_logits) - scores = scores + scores_local - if self.proximal_bias: - assert t_s == t_t, "Proximal bias is only available for self-attention." - scores = scores + self._attention_bias_proximal(t_s).to( - device=scores.device, dtype=scores.dtype - ) - if mask is not None: - scores = scores.masked_fill(mask == 0, -1e4) - if self.block_length is not None: - assert ( - t_s == t_t - ), "Local attention is only available for self-attention." 
- block_mask = ( - torch.ones_like(scores) - .triu(-self.block_length) - .tril(self.block_length) - ) - scores = scores.masked_fill(block_mask == 0, -1e4) - p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s] - p_attn = self.drop(p_attn) - output = torch.matmul(p_attn, value) - if self.window_size is not None: - relative_weights = self._absolute_position_to_relative_position(p_attn) - value_relative_embeddings = self._get_relative_embeddings( - self.emb_rel_v, t_s - ) - output = output + self._matmul_with_relative_values( - relative_weights, value_relative_embeddings - ) - output = ( - output.transpose(2, 3).contiguous().view(b, d, t_t) - ) # [b, n_h, t_t, d_k] -> [b, d, t_t] - return output, p_attn - - def _matmul_with_relative_values(self, x, y): - """ - x: [b, h, l, m] - y: [h or 1, m, d] - ret: [b, h, l, d] - """ - ret = torch.matmul(x, y.unsqueeze(0)) - return ret - - def _matmul_with_relative_keys(self, x, y): - """ - x: [b, h, l, d] - y: [h or 1, m, d] - ret: [b, h, l, m] - """ - ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1)) - return ret - - def _get_relative_embeddings(self, relative_embeddings, length): - max_relative_position = 2 * self.window_size + 1 - # Pad first before slice to avoid using cond ops. - pad_length = max(length - (self.window_size + 1), 0) - slice_start_position = max((self.window_size + 1) - length, 0) - slice_end_position = slice_start_position + 2 * length - 1 - if pad_length > 0: - padded_relative_embeddings = F.pad( - relative_embeddings, - commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]]), - ) - else: - padded_relative_embeddings = relative_embeddings - used_relative_embeddings = padded_relative_embeddings[ - :, slice_start_position:slice_end_position - ] - return used_relative_embeddings - - def _relative_position_to_absolute_position(self, x): - """ - x: [b, h, l, 2*l-1] - ret: [b, h, l, l] - """ - batch, heads, length, _ = x.size() - # Concat columns of pad to shift from relative to absolute indexing. - x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, 1]])) - - # Concat extra elements so to add up to shape (len+1, 2*len-1). - x_flat = x.view([batch, heads, length * 2 * length]) - x_flat = F.pad( - x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [0, length - 1]]) - ) - - # Reshape and slice out the padded elements. - x_final = x_flat.view([batch, heads, length + 1, 2 * length - 1])[ - :, :, :length, length - 1 : - ] - return x_final - - def _absolute_position_to_relative_position(self, x): - """ - x: [b, h, l, l] - ret: [b, h, l, 2*l-1] - """ - batch, heads, length, _ = x.size() - # padd along column - x = F.pad( - x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length - 1]]) - ) - x_flat = x.view([batch, heads, length**2 + length * (length - 1)]) - # add 0's in the beginning that will skew the elements after reshape - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]])) - x_final = x_flat.view([batch, heads, length, 2 * length])[:, :, :, 1:] - return x_final - - def _attention_bias_proximal(self, length): - """Bias for self-attention to encourage attention to close positions. - Args: - length: an integer scalar. 
- Returns: - a Tensor with shape [1, 1, length, length] - """ - r = torch.arange(length, dtype=torch.float32) - diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1) - return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0) - - -class FFN(nn.Module): - def __init__( - self, - in_channels, - out_channels, - filter_channels, - kernel_size, - p_dropout=0.0, - activation=None, - causal=False, - ): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.activation = activation - self.causal = causal - - if causal: - self.padding = self._causal_padding - else: - self.padding = self._same_padding - - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size) - self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size) - self.drop = nn.Dropout(p_dropout) - - def forward(self, x, x_mask): - x = self.conv_1(self.padding(x * x_mask)) - if self.activation == "gelu": - x = x * torch.sigmoid(1.702 * x) - else: - x = torch.relu(x) - x = self.drop(x) - x = self.conv_2(self.padding(x * x_mask)) - return x * x_mask - - def _causal_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = self.kernel_size - 1 - pad_r = 0 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x - - def _same_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = (self.kernel_size - 1) // 2 - pad_r = self.kernel_size // 2 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x diff --git a/spaces/kidcoconut/spcdkr_omdenasaudi_liverhccxai/lib/__init__.py b/spaces/kidcoconut/spcdkr_omdenasaudi_liverhccxai/lib/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/kinyugo/msanii/app.py b/spaces/kinyugo/msanii/app.py deleted file mode 100644 index e327283bed65bdf0d6752d6b828f9280b46ac2f6..0000000000000000000000000000000000000000 --- a/spaces/kinyugo/msanii/app.py +++ /dev/null @@ -1,21 +0,0 @@ -import gdown -import torch -from msanii.config import DemoConfig -from msanii.demo import run_demo - -# Download checkpoints -id = "1G9kF0r5vxYXPSdSuv4t3GR-sBO8xGFCe" -output = "msanii.pt" -gdown.download(id=id, output=output, quiet=True) - - -# Setup app config -config = DemoConfig( - ckpt_path="msanii.pt", - device=("cuda" if torch.cuda.is_available() else "cpu"), - dtype="float32", - launch=False, -) - -demo = run_demo(config) -demo.queue().launch() diff --git a/spaces/kukuhtw/VToonify/vtoonify/model/stylegan/__init__.py b/spaces/kukuhtw/VToonify/vtoonify/model/stylegan/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/PIL/PixarImagePlugin.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/PIL/PixarImagePlugin.py deleted file mode 100644 index 7eb82228a9928bac325f641d45346364c61e8092..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/PIL/PixarImagePlugin.py +++ /dev/null @@ -1,69 +0,0 @@ -# -# The Python Imaging Library. 
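One note on the `FFN` padding helpers above, before the PIL plugin that follows: the causal variant pads only on the left of the time axis so a position never convolves over future frames, while the `same` variant splits the padding symmetrically. A small illustration with plain `F.pad` (dummy shapes, assumptions only):

```python
# Illustrative: left-only (causal) padding vs. symmetric ("same") padding for a 1D conv.
import torch
import torch.nn.functional as F

x = torch.randn(1, 4, 10)                 # [batch, channels, time]
k = 3                                     # kernel size
causal = F.pad(x, (k - 1, 0))             # pad 2 frames on the left only: no lookahead
same = F.pad(x, ((k - 1) // 2, k // 2))   # pad 1 left, 1 right: output aligned with input
print(causal.shape, same.shape)           # torch.Size([1, 4, 12]) torch.Size([1, 4, 12])
```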
-# $Id$ -# -# PIXAR raster support for PIL -# -# history: -# 97-01-29 fl Created -# -# notes: -# This is incomplete; it is based on a few samples created with -# Photoshop 2.5 and 3.0, and a summary description provided by -# Greg Coats . Hopefully, "L" and -# "RGBA" support will be added in future versions. -# -# Copyright (c) Secret Labs AB 1997. -# Copyright (c) Fredrik Lundh 1997. -# -# See the README file for information on usage and redistribution. -# - -from . import Image, ImageFile -from ._binary import i16le as i16 - -# -# helpers - - -def _accept(prefix): - return prefix[:4] == b"\200\350\000\000" - - -## -# Image plugin for PIXAR raster images. - - -class PixarImageFile(ImageFile.ImageFile): - format = "PIXAR" - format_description = "PIXAR raster image" - - def _open(self): - # assuming a 4-byte magic label - s = self.fp.read(4) - if not _accept(s): - msg = "not a PIXAR file" - raise SyntaxError(msg) - - # read rest of header - s = s + self.fp.read(508) - - self._size = i16(s, 418), i16(s, 416) - - # get channel/depth descriptions - mode = i16(s, 424), i16(s, 426) - - if mode == (14, 2): - self.mode = "RGB" - # FIXME: to be continued... - - # create tile descriptor (assuming "dumped") - self.tile = [("raw", (0, 0) + self.size, 1024, (self.mode, 0, 1))] - - -# -# -------------------------------------------------------------------- - -Image.register_open(PixarImageFile.format, PixarImageFile, _accept) - -Image.register_extension(PixarImageFile.format, ".pxr") diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/templates/frontend/assets/UploadText-33d53a1c.css b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/templates/frontend/assets/UploadText-33d53a1c.css deleted file mode 100644 index ea1837137bcb0f8b4462f8f4e59dcd9bfa878cda..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/templates/frontend/assets/UploadText-33d53a1c.css +++ /dev/null @@ -1 +0,0 @@ -.wrap.svelte-xwlu1w{display:flex;flex-direction:column;justify-content:center;min-height:var(--size-60);color:var(--block-label-text-color);line-height:var(--line-md)}.or.svelte-xwlu1w{color:var(--body-text-color-subdued)}@media (min-width: 768px){.wrap.svelte-xwlu1w{font-size:var(--text-lg)}} diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/matplotlib/afm.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/matplotlib/afm.py deleted file mode 100644 index d95c88a0e2b46a8d568692f1f524d819e583f997..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/matplotlib/afm.py +++ /dev/null @@ -1,3 +0,0 @@ -from matplotlib._afm import * # noqa: F401, F403 -from matplotlib import _api -_api.warn_deprecated("3.6", name=__name__, obj_type="module") diff --git a/spaces/lamini/instruct-3b-playground/app.py b/spaces/lamini/instruct-3b-playground/app.py deleted file mode 100644 index 136e3df62ed3b3cbfd197855fb86064a3b01f884..0000000000000000000000000000000000000000 --- a/spaces/lamini/instruct-3b-playground/app.py +++ /dev/null @@ -1,42 +0,0 @@ -import gradio as gr -from llama import Type, Context, LLM -import os -from sentence_transformers import SentenceTransformer, util - -model = SentenceTransformer('distilbert-base-nli-mean-tokens') - -class Question(Type): - question: str = Context("a question") - -class Response(Type): - response: str = Context("the response to the question") - -def 
parse_response(string): - break_point = string.split("\n\n") - sentence_embeddings = model.encode(break_point) - output=break_point[0] - l=len(break_point[0]) - for i in range(1,len(break_point)): - score=util.pytorch_cos_sim(sentence_embeddings[0], sentence_embeddings[i]) - if score<=0.45 and len(break_point[i])>l//3: - output+=break_point[i] - return output - -def lamini(input): - #return "Hello " + name + "!!" - llm=LLM(name="lamini-instruct", - config={ - "production":{ - "key": os.environ["LAMINI-KEY"] - } - }) - user_query_text=Question(question=input) - result = llm( - input=user_query_text, - output_type=Response, - model_name="lamini/instruct-tuned-3b" - ) - return parse_response(result.response) - -iface = gr.Interface(fn=lamini, inputs="text", outputs="text") -iface.launch() \ No newline at end of file diff --git a/spaces/legoandmars/glide-inpainting/glide_text2im/unet.py b/spaces/legoandmars/glide-inpainting/glide_text2im/unet.py deleted file mode 100644 index b61437a44ef7510e0c62afaae070deabc24c42bb..0000000000000000000000000000000000000000 --- a/spaces/legoandmars/glide-inpainting/glide_text2im/unet.py +++ /dev/null @@ -1,635 +0,0 @@ -import math -from abc import abstractmethod - -import torch as th -import torch.nn as nn -import torch.nn.functional as F - -from .fp16_util import convert_module_to_f16, convert_module_to_f32 -from .nn import avg_pool_nd, conv_nd, linear, normalization, timestep_embedding, zero_module - - -class TimestepBlock(nn.Module): - """ - Any module where forward() takes timestep embeddings as a second argument. - """ - - @abstractmethod - def forward(self, x, emb): - """ - Apply the module to `x` given `emb` timestep embeddings. - """ - - -class TimestepEmbedSequential(nn.Sequential, TimestepBlock): - """ - A sequential module that passes timestep embeddings to the children that - support it as an extra input. - """ - - def forward(self, x, emb, encoder_out=None): - for layer in self: - if isinstance(layer, TimestepBlock): - x = layer(x, emb) - elif isinstance(layer, AttentionBlock): - x = layer(x, encoder_out) - else: - x = layer(x) - return x - - -class Upsample(nn.Module): - """ - An upsampling layer with an optional convolution. - - :param channels: channels in the inputs and outputs. - :param use_conv: a bool determining if a convolution is applied. - :param dims: determines if the signal is 1D, 2D, or 3D. If 3D, then - upsampling occurs in the inner-two dimensions. - """ - - def __init__(self, channels, use_conv, dims=2, out_channels=None): - super().__init__() - self.channels = channels - self.out_channels = out_channels or channels - self.use_conv = use_conv - self.dims = dims - if use_conv: - self.conv = conv_nd(dims, self.channels, self.out_channels, 3, padding=1) - - def forward(self, x): - assert x.shape[1] == self.channels - if self.dims == 3: - x = F.interpolate(x, (x.shape[2], x.shape[3] * 2, x.shape[4] * 2), mode="nearest") - else: - x = F.interpolate(x, scale_factor=2, mode="nearest") - if self.use_conv: - x = self.conv(x) - return x - - -class Downsample(nn.Module): - """ - A downsampling layer with an optional convolution. - - :param channels: channels in the inputs and outputs. - :param use_conv: a bool determining if a convolution is applied. - :param dims: determines if the signal is 1D, 2D, or 3D. If 3D, then - downsampling occurs in the inner-two dimensions. 
- """ - - def __init__(self, channels, use_conv, dims=2, out_channels=None): - super().__init__() - self.channels = channels - self.out_channels = out_channels or channels - self.use_conv = use_conv - self.dims = dims - stride = 2 if dims != 3 else (1, 2, 2) - if use_conv: - self.op = conv_nd(dims, self.channels, self.out_channels, 3, stride=stride, padding=1) - else: - assert self.channels == self.out_channels - self.op = avg_pool_nd(dims, kernel_size=stride, stride=stride) - - def forward(self, x): - assert x.shape[1] == self.channels - return self.op(x) - - -class ResBlock(TimestepBlock): - """ - A residual block that can optionally change the number of channels. - - :param channels: the number of input channels. - :param emb_channels: the number of timestep embedding channels. - :param dropout: the rate of dropout. - :param out_channels: if specified, the number of out channels. - :param use_conv: if True and out_channels is specified, use a spatial - convolution instead of a smaller 1x1 convolution to change the - channels in the skip connection. - :param dims: determines if the signal is 1D, 2D, or 3D. - :param use_checkpoint: if True, use gradient checkpointing on this module. - :param up: if True, use this block for upsampling. - :param down: if True, use this block for downsampling. - """ - - def __init__( - self, - channels, - emb_channels, - dropout, - out_channels=None, - use_conv=False, - use_scale_shift_norm=False, - dims=2, - use_checkpoint=False, - up=False, - down=False, - ): - super().__init__() - self.channels = channels - self.emb_channels = emb_channels - self.dropout = dropout - self.out_channels = out_channels or channels - self.use_conv = use_conv - self.use_checkpoint = use_checkpoint - self.use_scale_shift_norm = use_scale_shift_norm - - self.in_layers = nn.Sequential( - normalization(channels, swish=1.0), - nn.Identity(), - conv_nd(dims, channels, self.out_channels, 3, padding=1), - ) - - self.updown = up or down - - if up: - self.h_upd = Upsample(channels, False, dims) - self.x_upd = Upsample(channels, False, dims) - elif down: - self.h_upd = Downsample(channels, False, dims) - self.x_upd = Downsample(channels, False, dims) - else: - self.h_upd = self.x_upd = nn.Identity() - - self.emb_layers = nn.Sequential( - nn.SiLU(), - linear( - emb_channels, - 2 * self.out_channels if use_scale_shift_norm else self.out_channels, - ), - ) - self.out_layers = nn.Sequential( - normalization(self.out_channels, swish=0.0 if use_scale_shift_norm else 1.0), - nn.SiLU() if use_scale_shift_norm else nn.Identity(), - nn.Dropout(p=dropout), - zero_module(conv_nd(dims, self.out_channels, self.out_channels, 3, padding=1)), - ) - - if self.out_channels == channels: - self.skip_connection = nn.Identity() - elif use_conv: - self.skip_connection = conv_nd(dims, channels, self.out_channels, 3, padding=1) - else: - self.skip_connection = conv_nd(dims, channels, self.out_channels, 1) - - def forward(self, x, emb): - """ - Apply the block to a Tensor, conditioned on a timestep embedding. - - :param x: an [N x C x ...] Tensor of features. - :param emb: an [N x emb_channels] Tensor of timestep embeddings. - :return: an [N x C x ...] Tensor of outputs. 
- """ - if self.updown: - in_rest, in_conv = self.in_layers[:-1], self.in_layers[-1] - h = in_rest(x) - h = self.h_upd(h) - x = self.x_upd(x) - h = in_conv(h) - else: - h = self.in_layers(x) - emb_out = self.emb_layers(emb).type(h.dtype) - while len(emb_out.shape) < len(h.shape): - emb_out = emb_out[..., None] - if self.use_scale_shift_norm: - out_norm, out_rest = self.out_layers[0], self.out_layers[1:] - scale, shift = th.chunk(emb_out, 2, dim=1) - h = out_norm(h) * (1 + scale) + shift - h = out_rest(h) - else: - h = h + emb_out - h = self.out_layers(h) - return self.skip_connection(x) + h - - -class AttentionBlock(nn.Module): - """ - An attention block that allows spatial positions to attend to each other. - - Originally ported from here, but adapted to the N-d case. - https://github.com/hojonathanho/diffusion/blob/1e0dceb3b3495bbe19116a5e1b3596cd0706c543/diffusion_tf/models/unet.py#L66. - """ - - def __init__( - self, - channels, - num_heads=1, - num_head_channels=-1, - use_checkpoint=False, - encoder_channels=None, - ): - super().__init__() - self.channels = channels - if num_head_channels == -1: - self.num_heads = num_heads - else: - assert ( - channels % num_head_channels == 0 - ), f"q,k,v channels {channels} is not divisible by num_head_channels {num_head_channels}" - self.num_heads = channels // num_head_channels - self.use_checkpoint = use_checkpoint - self.norm = normalization(channels, swish=0.0) - self.qkv = conv_nd(1, channels, channels * 3, 1) - self.attention = QKVAttention(self.num_heads) - - if encoder_channels is not None: - self.encoder_kv = conv_nd(1, encoder_channels, channels * 2, 1) - self.proj_out = zero_module(conv_nd(1, channels, channels, 1)) - - def forward(self, x, encoder_out=None): - b, c, *spatial = x.shape - qkv = self.qkv(self.norm(x).view(b, c, -1)) - if encoder_out is not None: - encoder_out = self.encoder_kv(encoder_out) - h = self.attention(qkv, encoder_out) - else: - h = self.attention(qkv) - h = self.proj_out(h) - return x + h.reshape(b, c, *spatial) - - -class QKVAttention(nn.Module): - """ - A module which performs QKV attention. Matches legacy QKVAttention + input/ouput heads shaping - """ - - def __init__(self, n_heads): - super().__init__() - self.n_heads = n_heads - - def forward(self, qkv, encoder_kv=None): - """ - Apply QKV attention. - - :param qkv: an [N x (H * 3 * C) x T] tensor of Qs, Ks, and Vs. - :return: an [N x (H * C) x T] tensor after attention. - """ - bs, width, length = qkv.shape - assert width % (3 * self.n_heads) == 0 - ch = width // (3 * self.n_heads) - q, k, v = qkv.reshape(bs * self.n_heads, ch * 3, length).split(ch, dim=1) - if encoder_kv is not None: - assert encoder_kv.shape[1] == self.n_heads * ch * 2 - ek, ev = encoder_kv.reshape(bs * self.n_heads, ch * 2, -1).split(ch, dim=1) - k = th.cat([ek, k], dim=-1) - v = th.cat([ev, v], dim=-1) - scale = 1 / math.sqrt(math.sqrt(ch)) - weight = th.einsum( - "bct,bcs->bts", q * scale, k * scale - ) # More stable with f16 than dividing afterwards - weight = th.softmax(weight.float(), dim=-1).type(weight.dtype) - a = th.einsum("bts,bcs->bct", weight, v) - return a.reshape(bs, -1, length) - - -class UNetModel(nn.Module): - """ - The full UNet model with attention and timestep embedding. - - :param in_channels: channels in the input Tensor. - :param model_channels: base channel count for the model. - :param out_channels: channels in the output Tensor. - :param num_res_blocks: number of residual blocks per downsample. 
- :param attention_resolutions: a collection of downsample rates at which - attention will take place. May be a set, list, or tuple. - For example, if this contains 4, then at 4x downsampling, attention - will be used. - :param dropout: the dropout probability. - :param channel_mult: channel multiplier for each level of the UNet. - :param conv_resample: if True, use learned convolutions for upsampling and - downsampling. - :param dims: determines if the signal is 1D, 2D, or 3D. - :param num_classes: if specified (as an int), then this model will be - class-conditional with `num_classes` classes. - :param use_checkpoint: use gradient checkpointing to reduce memory usage. - :param num_heads: the number of attention heads in each attention layer. - :param num_heads_channels: if specified, ignore num_heads and instead use - a fixed channel width per attention head. - :param num_heads_upsample: works with num_heads to set a different number - of heads for upsampling. Deprecated. - :param use_scale_shift_norm: use a FiLM-like conditioning mechanism. - :param resblock_updown: use residual blocks for up/downsampling. - """ - - def __init__( - self, - in_channels, - model_channels, - out_channels, - num_res_blocks, - attention_resolutions, - dropout=0, - channel_mult=(1, 2, 4, 8), - conv_resample=True, - dims=2, - num_classes=None, - use_checkpoint=False, - use_fp16=False, - num_heads=1, - num_head_channels=-1, - num_heads_upsample=-1, - use_scale_shift_norm=False, - resblock_updown=False, - encoder_channels=None, - ): - super().__init__() - - if num_heads_upsample == -1: - num_heads_upsample = num_heads - - self.in_channels = in_channels - self.model_channels = model_channels - self.out_channels = out_channels - self.num_res_blocks = num_res_blocks - self.attention_resolutions = attention_resolutions - self.dropout = dropout - self.channel_mult = channel_mult - self.conv_resample = conv_resample - self.num_classes = num_classes - self.use_checkpoint = use_checkpoint - self.dtype = th.float16 if use_fp16 else th.float32 - self.num_heads = num_heads - self.num_head_channels = num_head_channels - self.num_heads_upsample = num_heads_upsample - - time_embed_dim = model_channels * 4 - self.time_embed = nn.Sequential( - linear(model_channels, time_embed_dim), - nn.SiLU(), - linear(time_embed_dim, time_embed_dim), - ) - - if self.num_classes is not None: - self.label_emb = nn.Embedding(num_classes, time_embed_dim) - - ch = input_ch = int(channel_mult[0] * model_channels) - self.input_blocks = nn.ModuleList( - [TimestepEmbedSequential(conv_nd(dims, in_channels, ch, 3, padding=1))] - ) - self._feature_size = ch - input_block_chans = [ch] - ds = 1 - for level, mult in enumerate(channel_mult): - for _ in range(num_res_blocks): - layers = [ - ResBlock( - ch, - time_embed_dim, - dropout, - out_channels=int(mult * model_channels), - dims=dims, - use_checkpoint=use_checkpoint, - use_scale_shift_norm=use_scale_shift_norm, - ) - ] - ch = int(mult * model_channels) - if ds in attention_resolutions: - layers.append( - AttentionBlock( - ch, - use_checkpoint=use_checkpoint, - num_heads=num_heads, - num_head_channels=num_head_channels, - encoder_channels=encoder_channels, - ) - ) - self.input_blocks.append(TimestepEmbedSequential(*layers)) - self._feature_size += ch - input_block_chans.append(ch) - if level != len(channel_mult) - 1: - out_ch = ch - self.input_blocks.append( - TimestepEmbedSequential( - ResBlock( - ch, - time_embed_dim, - dropout, - out_channels=out_ch, - dims=dims, - use_checkpoint=use_checkpoint, - 
use_scale_shift_norm=use_scale_shift_norm, - down=True, - ) - if resblock_updown - else Downsample(ch, conv_resample, dims=dims, out_channels=out_ch) - ) - ) - ch = out_ch - input_block_chans.append(ch) - ds *= 2 - self._feature_size += ch - - self.middle_block = TimestepEmbedSequential( - ResBlock( - ch, - time_embed_dim, - dropout, - dims=dims, - use_checkpoint=use_checkpoint, - use_scale_shift_norm=use_scale_shift_norm, - ), - AttentionBlock( - ch, - use_checkpoint=use_checkpoint, - num_heads=num_heads, - num_head_channels=num_head_channels, - encoder_channels=encoder_channels, - ), - ResBlock( - ch, - time_embed_dim, - dropout, - dims=dims, - use_checkpoint=use_checkpoint, - use_scale_shift_norm=use_scale_shift_norm, - ), - ) - self._feature_size += ch - - self.output_blocks = nn.ModuleList([]) - for level, mult in list(enumerate(channel_mult))[::-1]: - for i in range(num_res_blocks + 1): - ich = input_block_chans.pop() - layers = [ - ResBlock( - ch + ich, - time_embed_dim, - dropout, - out_channels=int(model_channels * mult), - dims=dims, - use_checkpoint=use_checkpoint, - use_scale_shift_norm=use_scale_shift_norm, - ) - ] - ch = int(model_channels * mult) - if ds in attention_resolutions: - layers.append( - AttentionBlock( - ch, - use_checkpoint=use_checkpoint, - num_heads=num_heads_upsample, - num_head_channels=num_head_channels, - encoder_channels=encoder_channels, - ) - ) - if level and i == num_res_blocks: - out_ch = ch - layers.append( - ResBlock( - ch, - time_embed_dim, - dropout, - out_channels=out_ch, - dims=dims, - use_checkpoint=use_checkpoint, - use_scale_shift_norm=use_scale_shift_norm, - up=True, - ) - if resblock_updown - else Upsample(ch, conv_resample, dims=dims, out_channels=out_ch) - ) - ds //= 2 - self.output_blocks.append(TimestepEmbedSequential(*layers)) - self._feature_size += ch - - self.out = nn.Sequential( - normalization(ch, swish=1.0), - nn.Identity(), - zero_module(conv_nd(dims, input_ch, out_channels, 3, padding=1)), - ) - self.use_fp16 = use_fp16 - - def convert_to_fp16(self): - """ - Convert the torso of the model to float16. - """ - self.input_blocks.apply(convert_module_to_f16) - self.middle_block.apply(convert_module_to_f16) - self.output_blocks.apply(convert_module_to_f16) - - def convert_to_fp32(self): - """ - Convert the torso of the model to float32. - """ - self.input_blocks.apply(convert_module_to_f32) - self.middle_block.apply(convert_module_to_f32) - self.output_blocks.apply(convert_module_to_f32) - - def forward(self, x, timesteps, y=None): - """ - Apply the model to an input batch. - - :param x: an [N x C x ...] Tensor of inputs. - :param timesteps: a 1-D batch of timesteps. - :param y: an [N] Tensor of labels, if class-conditional. - :return: an [N x C x ...] Tensor of outputs. - """ - assert (y is not None) == ( - self.num_classes is not None - ), "must specify y if and only if the model is class-conditional" - - hs = [] - emb = self.time_embed(timestep_embedding(timesteps, self.model_channels)) - - if self.num_classes is not None: - assert y.shape == (x.shape[0],) - emb = emb + self.label_emb(y) - - h = x.type(self.dtype) - for module in self.input_blocks: - h = module(h, emb) - hs.append(h) - h = self.middle_block(h, emb) - for module in self.output_blocks: - h = th.cat([h, hs.pop()], dim=1) - h = module(h, emb) - h = h.type(x.dtype) - return self.out(h) - -class SuperResUNetModel(UNetModel): - """ - A UNetModel that performs super-resolution. - - Expects an extra kwarg `low_res` to condition on a low-resolution image. 
- """ - - def __init__(self, *args, **kwargs): - if "in_channels" in kwargs: - kwargs = dict(kwargs) - kwargs["in_channels"] = kwargs["in_channels"] * 2 - else: - # Curse you, Python. Or really, just curse positional arguments :|. - args = list(args) - args[1] = args[1] * 2 - super().__init__(*args, **kwargs) - - def forward(self, x, timesteps, low_res=None, **kwargs): - _, _, new_height, new_width = x.shape - upsampled = F.interpolate(low_res, (new_height, new_width), mode="bilinear") - x = th.cat([x, upsampled], dim=1) - return super().forward(x, timesteps, **kwargs) - - -class InpaintUNetModel(UNetModel): - """ - A UNetModel which can perform inpainting. - """ - - def __init__(self, *args, **kwargs): - if "in_channels" in kwargs: - kwargs = dict(kwargs) - kwargs["in_channels"] = kwargs["in_channels"] * 2 + 1 - else: - # Curse you, Python. Or really, just curse positional arguments :|. - args = list(args) - args[1] = args[1] * 2 + 1 - super().__init__(*args, **kwargs) - - def forward(self, x, timesteps, inpaint_image=None, inpaint_mask=None, **kwargs): - if inpaint_image is None: - inpaint_image = th.zeros_like(x) - if inpaint_mask is None: - inpaint_mask = th.zeros_like(x[:, :1]) - return super().forward( - th.cat([x, inpaint_image * inpaint_mask, inpaint_mask], dim=1), - timesteps, - **kwargs, - ) - - -class SuperResInpaintUNetModel(UNetModel): - """ - A UNetModel which can perform both upsampling and inpainting. - """ - - def __init__(self, *args, **kwargs): - if "in_channels" in kwargs: - kwargs = dict(kwargs) - kwargs["in_channels"] = kwargs["in_channels"] * 3 + 1 - else: - # Curse you, Python. Or really, just curse positional arguments :|. - args = list(args) - args[1] = args[1] * 3 + 1 - super().__init__(*args, **kwargs) - - def forward( - self, - x, - timesteps, - inpaint_image=None, - inpaint_mask=None, - low_res=None, - **kwargs, - ): - if inpaint_image is None: - inpaint_image = th.zeros_like(x) - if inpaint_mask is None: - inpaint_mask = th.zeros_like(x[:, :1]) - _, _, new_height, new_width = x.shape - upsampled = F.interpolate(low_res, (new_height, new_width), mode="bilinear") - return super().forward( - th.cat([x, inpaint_image * inpaint_mask, inpaint_mask, upsampled], dim=1), - timesteps, - **kwargs, - ) diff --git a/spaces/leo-bourrel/test-streamlit/Dockerfile b/spaces/leo-bourrel/test-streamlit/Dockerfile deleted file mode 100644 index 4af7a31876782920d4986ca9087b6194240f3266..0000000000000000000000000000000000000000 --- a/spaces/leo-bourrel/test-streamlit/Dockerfile +++ /dev/null @@ -1,56 +0,0 @@ -FROM postgres:14.9-bookworm - -WORKDIR /app - -RUN apt update && \ - apt install -y --no-install-recommends \ - build-essential \ - python3 \ - python3-pip \ - python3-dev \ - postgresql-server-dev-14 \ - libpq-dev \ - libblas-dev \ - htop \ - git - -COPY ./ /app/ - -RUN pip3 install -r ./requirements.txt --break-system-packages - -EXPOSE 5432 -EXPOSE 7860 - -ENV POSTGRES_USER=postgres -ENV POSTGRES_PASSWORD=pwd -ENV POSTGRES_DB=sorbobot - -# User -RUN useradd -m -u 1000 user -ENV HOME /home/user -ENV PATH $HOME/.local/bin:$PATH - -# Install PGVector -WORKDIR /tmp -RUN git clone --branch v0.5.1 https://github.com/pgvector/pgvector.git -WORKDIR /tmp/pgvector -RUN make -RUN make install # may need sudo -WORKDIR $HOME -COPY ./ $HOME - -COPY "execution.sh" "/usr/local/bin/" - -COPY ./docker-entrypoint-initdb.d/ /docker-entrypoint-initdb.d/ - -RUN chown -R user:user /var/lib/postgresql/data - -USER user - -ENTRYPOINT ["execution.sh"] - -STOPSIGNAL SIGINT - -HEALTHCHECK CMD curl 
--fail http://localhost:7860/_stcore/health - -CMD ["postgres"] \ No newline at end of file diff --git a/spaces/leogabraneth/text-generation-webui-main/extensions/api/streaming_api.py b/spaces/leogabraneth/text-generation-webui-main/extensions/api/streaming_api.py deleted file mode 100644 index 2968ed8dec8b10730a77ac15c7de5882c2e9b7c5..0000000000000000000000000000000000000000 --- a/spaces/leogabraneth/text-generation-webui-main/extensions/api/streaming_api.py +++ /dev/null @@ -1,142 +0,0 @@ -import asyncio -import json -import ssl -from threading import Thread - -from websockets.server import serve - -from extensions.api.util import ( - build_parameters, - try_start_cloudflared, - with_api_lock -) -from modules import shared -from modules.chat import generate_chat_reply -from modules.text_generation import generate_reply -from modules.logging_colors import logger - -PATH = '/api/v1/stream' - - -@with_api_lock -async def _handle_stream_message(websocket, message): - message = json.loads(message) - - prompt = message['prompt'] - generate_params = build_parameters(message) - stopping_strings = generate_params.pop('stopping_strings') - generate_params['stream'] = True - - generator = generate_reply( - prompt, generate_params, stopping_strings=stopping_strings, is_chat=False) - - # As we stream, only send the new bytes. - skip_index = 0 - message_num = 0 - - for a in generator: - to_send = a[skip_index:] - if to_send is None or chr(0xfffd) in to_send: # partial unicode character, don't send it yet. - continue - - await websocket.send(json.dumps({ - 'event': 'text_stream', - 'message_num': message_num, - 'text': to_send - })) - - await asyncio.sleep(0) - skip_index += len(to_send) - message_num += 1 - - await websocket.send(json.dumps({ - 'event': 'stream_end', - 'message_num': message_num - })) - - -@with_api_lock -async def _handle_chat_stream_message(websocket, message): - body = json.loads(message) - - user_input = body['user_input'] - generate_params = build_parameters(body, chat=True) - generate_params['stream'] = True - regenerate = body.get('regenerate', False) - _continue = body.get('_continue', False) - - generator = generate_chat_reply( - user_input, generate_params, regenerate=regenerate, _continue=_continue, loading_message=False) - - message_num = 0 - for a in generator: - await websocket.send(json.dumps({ - 'event': 'text_stream', - 'message_num': message_num, - 'history': a - })) - - await asyncio.sleep(0) - message_num += 1 - - await websocket.send(json.dumps({ - 'event': 'stream_end', - 'message_num': message_num - })) - - -async def _handle_connection(websocket, path): - - if path == '/api/v1/stream': - async for message in websocket: - await _handle_stream_message(websocket, message) - - elif path == '/api/v1/chat-stream': - async for message in websocket: - await _handle_chat_stream_message(websocket, message) - - else: - print(f'Streaming api: unknown path: {path}') - return - - -async def _run(host: str, port: int): - ssl_certfile = shared.args.ssl_certfile - ssl_keyfile = shared.args.ssl_keyfile - ssl_verify = True if (ssl_keyfile and ssl_certfile) else False - if ssl_verify: - context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) - context.load_cert_chain(ssl_certfile, ssl_keyfile) - else: - context = None - - async with serve(_handle_connection, host, port, ping_interval=None, ssl=context): - await asyncio.Future() # Run the server forever - - -def _run_server(port: int, share: bool = False, tunnel_id=str): - address = '0.0.0.0' if shared.args.listen else '127.0.0.1' - 
ssl_certfile = shared.args.ssl_certfile
- ssl_keyfile = shared.args.ssl_keyfile
- ssl_verify = True if (ssl_keyfile and ssl_certfile) else False
-
- def on_start(public_url: str):
- public_url = public_url.replace('https://', 'wss://')
- logger.info(f'Streaming API URL: \n\n{public_url}{PATH}\n')
-
- if share:
- try:
- try_start_cloudflared(port, tunnel_id, max_attempts=3, on_start=on_start)
- except Exception as e:
- print(e)
- else:
- if ssl_verify:
- logger.info(f'Streaming API URL: \n\nwss://{address}:{port}{PATH}\n')
- else:
- logger.info(f'Streaming API URL: \n\nws://{address}:{port}{PATH}\n')
-
- asyncio.run(_run(host=address, port=port))
-
-
-def start_server(port: int, share: bool = False, tunnel_id=str):
- Thread(target=_run_server, args=[port, share, tunnel_id], daemon=True).start()
diff --git a/spaces/lewiswu1209/MockingBird/ppg_extractor/encoder/subsampling.py b/spaces/lewiswu1209/MockingBird/ppg_extractor/encoder/subsampling.py
deleted file mode 100644
index e754126b2ec1f2d914206ec35ec026c7b6add17f..0000000000000000000000000000000000000000
--- a/spaces/lewiswu1209/MockingBird/ppg_extractor/encoder/subsampling.py
+++ /dev/null
@@ -1,218 +0,0 @@
-#!/usr/bin/env python3
-# -*- coding: utf-8 -*-
-
-# Copyright 2019 Shigeki Karita
-# Apache 2.0 (http://www.apache.org/licenses/LICENSE-2.0)
-
-"""Subsampling layer definition."""
-import logging
-import torch
-
-from espnet.nets.pytorch_backend.transformer.embedding import PositionalEncoding
-
-
-class Conv2dSubsampling(torch.nn.Module):
- """Convolutional 2D subsampling (to 1/4 length or 1/2 length).
-
- :param int idim: input dim
- :param int odim: output dim
- :param float dropout_rate: dropout rate
- :param torch.nn.Module pos_enc: custom position encoding layer
-
- """
-
- def __init__(self, idim, odim, dropout_rate, pos_enc=None,
- subsample_by_2=False,
- ):
- """Construct a Conv2dSubsampling object."""
- super(Conv2dSubsampling, self).__init__()
- self.subsample_by_2 = subsample_by_2
- if subsample_by_2:
- self.conv = torch.nn.Sequential(
- torch.nn.Conv2d(1, odim, kernel_size=5, stride=1, padding=2),
- torch.nn.ReLU(),
- torch.nn.Conv2d(odim, odim, kernel_size=4, stride=2, padding=1),
- torch.nn.ReLU(),
- )
- self.out = torch.nn.Sequential(
- torch.nn.Linear(odim * (idim // 2), odim),
- pos_enc if pos_enc is not None else PositionalEncoding(odim, dropout_rate),
- )
- else:
- self.conv = torch.nn.Sequential(
- torch.nn.Conv2d(1, odim, kernel_size=4, stride=2, padding=1),
- torch.nn.ReLU(),
- torch.nn.Conv2d(odim, odim, kernel_size=4, stride=2, padding=1),
- torch.nn.ReLU(),
- )
- self.out = torch.nn.Sequential(
- torch.nn.Linear(odim * (idim // 4), odim),
- pos_enc if pos_enc is not None else PositionalEncoding(odim, dropout_rate),
- )
-
- def forward(self, x, x_mask):
- """Subsample x.
-
- :param torch.Tensor x: input tensor
- :param torch.Tensor x_mask: input mask
- :return: subsampled x and mask
- :rtype Tuple[torch.Tensor, torch.Tensor]
-
- """
- x = x.unsqueeze(1) # (b, c, t, f)
- x = self.conv(x)
- b, c, t, f = x.size()
- x = self.out(x.transpose(1, 2).contiguous().view(b, t, c * f))
- if x_mask is None:
- return x, None
- if self.subsample_by_2:
- return x, x_mask[:, :, ::2]
- else:
- return x, x_mask[:, :, ::2][:, :, ::2]
-
- def __getitem__(self, key):
- """Subsample x.
-
- When reset_parameters() is called, if use_scaled_pos_enc is used,
- return the positional encoding.
-
- """
- if key != -1:
- raise NotImplementedError("Support only `-1` (for `reset_parameters`).")
- return self.out[key]
-
-
-class Conv2dNoSubsampling(torch.nn.Module):
- """Convolutional 2D without subsampling.
-
- :param int idim: input dim
- :param int odim: output dim
- :param float dropout_rate: dropout rate
- :param torch.nn.Module pos_enc: custom position encoding layer
-
- """
-
- def __init__(self, idim, odim, dropout_rate, pos_enc=None):
- """Construct a Conv2dNoSubsampling object."""
- super().__init__()
- logging.info("Encoder does not do down-sample on mel-spectrogram.")
- self.conv = torch.nn.Sequential(
- torch.nn.Conv2d(1, odim, kernel_size=5, stride=1, padding=2),
- torch.nn.ReLU(),
- torch.nn.Conv2d(odim, odim, kernel_size=5, stride=1, padding=2),
- torch.nn.ReLU(),
- )
- self.out = torch.nn.Sequential(
- torch.nn.Linear(odim * idim, odim),
- pos_enc if pos_enc is not None else PositionalEncoding(odim, dropout_rate),
- )
-
- def forward(self, x, x_mask):
- """Subsample x.
-
- :param torch.Tensor x: input tensor
- :param torch.Tensor x_mask: input mask
- :return: subsampled x and mask
- :rtype Tuple[torch.Tensor, torch.Tensor]
-
- """
- x = x.unsqueeze(1) # (b, c, t, f)
- x = self.conv(x)
- b, c, t, f = x.size()
- x = self.out(x.transpose(1, 2).contiguous().view(b, t, c * f))
- if x_mask is None:
- return x, None
- return x, x_mask
-
- def __getitem__(self, key):
- """Subsample x.
-
- When reset_parameters() is called, if use_scaled_pos_enc is used,
- return the positional encoding.
-
- """
- if key != -1:
- raise NotImplementedError("Support only `-1` (for `reset_parameters`).")
- return self.out[key]
-
-
-class Conv2dSubsampling6(torch.nn.Module):
- """Convolutional 2D subsampling (to 1/6 length).
-
- :param int idim: input dim
- :param int odim: output dim
- :param float dropout_rate: dropout rate
-
- """
-
- def __init__(self, idim, odim, dropout_rate):
- """Construct a Conv2dSubsampling6 object."""
- super(Conv2dSubsampling6, self).__init__()
- self.conv = torch.nn.Sequential(
- torch.nn.Conv2d(1, odim, 3, 2),
- torch.nn.ReLU(),
- torch.nn.Conv2d(odim, odim, 5, 3),
- torch.nn.ReLU(),
- )
- self.out = torch.nn.Sequential(
- torch.nn.Linear(odim * (((idim - 1) // 2 - 2) // 3), odim),
- PositionalEncoding(odim, dropout_rate),
- )
-
- def forward(self, x, x_mask):
- """Subsample x.
-
- :param torch.Tensor x: input tensor
- :param torch.Tensor x_mask: input mask
- :return: subsampled x and mask
- :rtype Tuple[torch.Tensor, torch.Tensor]
- """
- x = x.unsqueeze(1) # (b, c, t, f)
- x = self.conv(x)
- b, c, t, f = x.size()
- x = self.out(x.transpose(1, 2).contiguous().view(b, t, c * f))
- if x_mask is None:
- return x, None
- return x, x_mask[:, :, :-2:2][:, :, :-4:3]
-
-
-class Conv2dSubsampling8(torch.nn.Module):
- """Convolutional 2D subsampling (to 1/8 length).
-
- :param int idim: input dim
- :param int odim: output dim
- :param float dropout_rate: dropout rate
-
- """
-
- def __init__(self, idim, odim, dropout_rate):
- """Construct a Conv2dSubsampling8 object."""
- super(Conv2dSubsampling8, self).__init__()
- self.conv = torch.nn.Sequential(
- torch.nn.Conv2d(1, odim, 3, 2),
- torch.nn.ReLU(),
- torch.nn.Conv2d(odim, odim, 3, 2),
- torch.nn.ReLU(),
- torch.nn.Conv2d(odim, odim, 3, 2),
- torch.nn.ReLU(),
- )
- self.out = torch.nn.Sequential(
- torch.nn.Linear(odim * ((((idim - 1) // 2 - 1) // 2 - 1) // 2), odim),
- PositionalEncoding(odim, dropout_rate),
- )
-
- def forward(self, x, x_mask):
- """Subsample x.
- - :param torch.Tensor x: input tensor - :param torch.Tensor x_mask: input mask - :return: subsampled x and mask - :rtype Tuple[torch.Tensor, torch.Tensor] - """ - x = x.unsqueeze(1) # (b, c, t, f) - x = self.conv(x) - b, c, t, f = x.size() - x = self.out(x.transpose(1, 2).contiguous().view(b, t, c * f)) - if x_mask is None: - return x, None - return x, x_mask[:, :, :-2:2][:, :, :-2:2][:, :, :-2:2] diff --git a/spaces/limingcv/AlignDet/finetune/finetune_mask-rcnn_1x_coco_moco-setting_lr2e-2_wd2.5e-5/mask_rcnn_r50_fpn_mstrain_1x_coco.py b/spaces/limingcv/AlignDet/finetune/finetune_mask-rcnn_1x_coco_moco-setting_lr2e-2_wd2.5e-5/mask_rcnn_r50_fpn_mstrain_1x_coco.py deleted file mode 100644 index 0867704cbbda4f545eb43e0b53116f5e1065c48a..0000000000000000000000000000000000000000 --- a/spaces/limingcv/AlignDet/finetune/finetune_mask-rcnn_1x_coco_moco-setting_lr2e-2_wd2.5e-5/mask_rcnn_r50_fpn_mstrain_1x_coco.py +++ /dev/null @@ -1,271 +0,0 @@ -model = dict( - type='MaskRCNN', - backbone=dict( - type='ResNet', - depth=50, - num_stages=4, - out_indices=(0, 1, 2, 3), - frozen_stages=-1, - norm_cfg=dict(type='SyncBN', requires_grad=True), - norm_eval=False, - style='pytorch', - init_cfg=dict(type='Pretrained', checkpoint='torchvision://resnet50')), - neck=dict( - type='FPN', - in_channels=[256, 512, 1024, 2048], - out_channels=256, - num_outs=5, - norm_cfg=dict(type='SyncBN', requires_grad=True)), - rpn_head=dict( - type='RPNHead', - in_channels=256, - feat_channels=256, - anchor_generator=dict( - type='AnchorGenerator', - scales=[8], - ratios=[0.5, 1.0, 2.0], - strides=[4, 8, 16, 32, 64]), - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[0.0, 0.0, 0.0, 0.0], - target_stds=[1.0, 1.0, 1.0, 1.0]), - loss_cls=dict( - type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0), - loss_bbox=dict(type='L1Loss', loss_weight=1.0)), - roi_head=dict( - type='StandardRoIHead', - bbox_roi_extractor=dict( - type='SingleRoIExtractor', - roi_layer=dict(type='RoIAlign', output_size=7, sampling_ratio=0), - out_channels=256, - featmap_strides=[4, 8, 16, 32]), - bbox_head=dict( - type='Shared4Conv1FCBBoxHead', - in_channels=256, - fc_out_channels=1024, - roi_feat_size=7, - num_classes=80, - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[0.0, 0.0, 0.0, 0.0], - target_stds=[0.1, 0.1, 0.2, 0.2]), - reg_class_agnostic=False, - loss_cls=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0), - loss_bbox=dict(type='L1Loss', loss_weight=1.0), - norm_cfg=dict(type='SyncBN', requires_grad=True)), - mask_roi_extractor=dict( - type='SingleRoIExtractor', - roi_layer=dict(type='RoIAlign', output_size=14, sampling_ratio=0), - out_channels=256, - featmap_strides=[4, 8, 16, 32]), - mask_head=dict( - type='FCNMaskHead', - num_convs=4, - in_channels=256, - conv_out_channels=256, - num_classes=80, - loss_mask=dict( - type='CrossEntropyLoss', use_mask=True, loss_weight=1.0), - norm_cfg=dict(type='SyncBN', requires_grad=True))), - train_cfg=dict( - rpn=dict( - assigner=dict( - type='MaxIoUAssigner', - pos_iou_thr=0.7, - neg_iou_thr=0.3, - min_pos_iou=0.3, - match_low_quality=True, - ignore_iof_thr=-1), - sampler=dict( - type='RandomSampler', - num=256, - pos_fraction=0.5, - neg_pos_ub=-1, - add_gt_as_proposals=False), - allowed_border=-1, - pos_weight=-1, - debug=False), - rpn_proposal=dict( - nms_pre=2000, - max_per_img=1000, - nms=dict(type='nms', iou_threshold=0.7), - min_bbox_size=0), - rcnn=dict( - assigner=dict( - type='MaxIoUAssigner', - pos_iou_thr=0.5, - neg_iou_thr=0.5, - 
min_pos_iou=0.5, - match_low_quality=True, - ignore_iof_thr=-1), - sampler=dict( - type='RandomSampler', - num=512, - pos_fraction=0.25, - neg_pos_ub=-1, - add_gt_as_proposals=True), - mask_size=28, - pos_weight=-1, - debug=False)), - test_cfg=dict( - rpn=dict( - nms_pre=1000, - max_per_img=1000, - nms=dict(type='nms', iou_threshold=0.7), - min_bbox_size=0), - rcnn=dict( - score_thr=0.05, - nms=dict(type='nms', iou_threshold=0.5), - max_per_img=100, - mask_thr_binary=0.5))) -dataset_type = 'CocoDataset' -data_root = 'data/coco/' -img_norm_cfg = dict( - mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True) -train_pipeline = [ - dict(type='LoadImageFromFile'), - dict(type='LoadAnnotations', with_bbox=True, with_mask=True), - dict( - type='Resize', - img_scale=[(1333, 640), (1333, 672), (1333, 704), (1333, 736), - (1333, 768), (1333, 800)], - multiscale_mode='value', - keep_ratio=True), - dict(type='RandomFlip', flip_ratio=0.5), - dict( - type='Normalize', - mean=[123.675, 116.28, 103.53], - std=[58.395, 57.12, 57.375], - to_rgb=True), - dict(type='Pad', size_divisor=32), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels', 'gt_masks']) -] -test_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='MultiScaleFlipAug', - img_scale=(1333, 800), - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict( - type='Normalize', - mean=[123.675, 116.28, 103.53], - std=[58.395, 57.12, 57.375], - to_rgb=True), - dict(type='Pad', size_divisor=32), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']) - ]) -] -data = dict( - samples_per_gpu=2, - workers_per_gpu=2, - train=dict( - type='CocoDataset', - ann_file='data/coco/annotations/instances_train2017.json', - img_prefix='data/coco/train2017/', - pipeline=[ - dict(type='LoadImageFromFile'), - dict(type='LoadAnnotations', with_bbox=True, with_mask=True), - dict( - type='Resize', - img_scale=[(1333, 640), (1333, 672), (1333, 704), (1333, 736), - (1333, 768), (1333, 800)], - multiscale_mode='value', - keep_ratio=True), - dict(type='RandomFlip', flip_ratio=0.5), - dict( - type='Normalize', - mean=[123.675, 116.28, 103.53], - std=[58.395, 57.12, 57.375], - to_rgb=True), - dict(type='Pad', size_divisor=32), - dict(type='DefaultFormatBundle'), - dict( - type='Collect', - keys=['img', 'gt_bboxes', 'gt_labels', 'gt_masks']) - ]), - val=dict( - type='CocoDataset', - ann_file='data/coco/annotations/instances_val2017.json', - img_prefix='data/coco/val2017/', - pipeline=[ - dict(type='LoadImageFromFile'), - dict( - type='MultiScaleFlipAug', - img_scale=(1333, 800), - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict( - type='Normalize', - mean=[123.675, 116.28, 103.53], - std=[58.395, 57.12, 57.375], - to_rgb=True), - dict(type='Pad', size_divisor=32), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']) - ]) - ]), - test=dict( - type='CocoDataset', - ann_file='data/coco/annotations/instances_val2017.json', - img_prefix='data/coco/val2017/', - pipeline=[ - dict(type='LoadImageFromFile'), - dict( - type='MultiScaleFlipAug', - img_scale=(1333, 800), - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict( - type='Normalize', - mean=[123.675, 116.28, 103.53], - std=[58.395, 57.12, 57.375], - to_rgb=True), - dict(type='Pad', size_divisor=32), - dict(type='ImageToTensor', keys=['img']), - 
dict(type='Collect', keys=['img']) - ]) - ])) -evaluation = dict(metric=['bbox', 'segm'], save_best='auto', gpu_collect=True) -optimizer = dict(type='SGD', lr=0.02, momentum=0.9, weight_decay=2.5e-05) -optimizer_config = dict(grad_clip=None) -lr_config = dict( - policy='step', - warmup='linear', - warmup_iters=1000, - warmup_ratio=0.001, - step=[8, 11]) -runner = dict(type='EpochBasedRunner', max_epochs=12) -checkpoint_config = dict(interval=1) -log_config = dict(interval=50, hooks=[dict(type='TextLoggerHook')]) -custom_hooks = [ - dict(type='NumClassCheckHook'), - dict( - type='MMDetWandbHook', - init_kwargs=dict(project='I2B', group='finetune'), - interval=50, - num_eval_images=0, - log_checkpoint=False) -] -dist_params = dict(backend='nccl') -log_level = 'INFO' -load_from = 'pretrain/selfsup_mask-rcnn_mstrain-soft-teacher_sampler-4096_temp0.5/final_model.pth' -resume_from = None -workflow = [('train', 1)] -opencv_num_threads = 0 -mp_start_method = 'fork' -auto_scale_lr = dict(enable=False, base_batch_size=16) -custom_imports = None -norm_cfg = dict(type='SyncBN', requires_grad=True) -work_dir = 'work_dirs/finetune_mask-rcnn_1x_coco_moco-setting_lr2e-2_wd2.5e-5' -auto_resume = False -gpu_ids = range(0, 8) diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/How-To-Install-Fsx-Acceleration-With-Crack-TOP-Fsx.md b/spaces/lincquiQcaudo/Top-20-Diffusion/How-To-Install-Fsx-Acceleration-With-Crack-TOP-Fsx.md deleted file mode 100644 index e8f38c368a033c15df749a47360ffa7b9ac81879..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/How-To-Install-Fsx-Acceleration-With-Crack-TOP-Fsx.md +++ /dev/null @@ -1,102 +0,0 @@ -## How To Install Fsx Acceleration With Crack Fsx - - - - - - ![How To Install Fsx Acceleration With Crack !!TOP!! Fsx](https://www.simviation.com/yabbuploads/2008-8-25_18-41-49-925.jpg) - - - - - -**DOWNLOAD ⇒ [https://fienislile.blogspot.com/?download=2tyE8f](https://fienislile.blogspot.com/?download=2tyE8f)** - - - - - - - - - - - - - -# How To Install Fsx Acceleration With Crack Fsx: A Step-By-Step Guide - - - -If you are a fan of flight simulation games, you might have heard of Fsx Acceleration, an expansion pack for Microsoft Flight Simulator X that adds new features and missions. However, you might also have encountered some difficulties in installing it, especially if you want to use a cracked version of Fsx. In this article, we will show you how to install Fsx Acceleration with crack Fsx in a few simple steps. - - - -## What You Need - - - -Before you start the installation process, you will need the following items: - - - -- A copy of Microsoft Flight Simulator X installed on your computer. - -- A copy of Fsx Acceleration, either from a disc or a digital download. - -- A crack file for Fsx Acceleration, which you can find online from various sources. Make sure to scan it for viruses before using it. - -- A backup of your original Fsx.exe file, in case something goes wrong. - - - -## How To Install Fsx Acceleration With Crack Fsx - - - -Once you have everything ready, follow these steps to install Fsx Acceleration with crack Fsx: - - - -1. Run the Fsx Acceleration installer and follow the instructions on the screen. When prompted to enter a product key, enter any random key or use a key generator. - -2. When the installation is complete, do not launch Fsx Acceleration yet. Instead, go to the folder where you installed it, usually C:\Program Files (x86)\Microsoft Games\Microsoft Flight Simulator X. - -3. 
Find the file named Fsx.exe and rename it to something else, such as Fsx\_old.exe. This is to prevent the game from detecting that you are using a cracked version. - -4. Copy the crack file that you downloaded and paste it in the same folder. Rename it to Fsx.exe. - -5. Launch Fsx Acceleration from the shortcut on your desktop or start menu. You should be able to play the game without any problems. - - - -## Troubleshooting - - - -If you encounter any issues while installing or playing Fsx Acceleration with crack Fsx, here are some possible solutions: - - - -- If the game crashes or freezes, try lowering the graphics settings or disabling some features in the options menu. - -- If the game asks you to insert the disc or activate online, make sure you are using the correct crack file and that you renamed the original Fsx.exe file. - -- If the game does not recognize your joystick or controller, try updating your drivers or changing the input settings in the options menu. - -- If the game does not run at all, try running it as an administrator or in compatibility mode for Windows XP or Vista. - - - -## Conclusion - - - -Fsx Acceleration is a great expansion pack for Microsoft Flight Simulator X that adds more realism and excitement to your flying experience. However, installing it can be tricky if you want to use a cracked version of Fsx. By following our guide on how to install Fsx Acceleration with crack Fsx, you should be able to enjoy the game without any hassle. Happy flying! - - 145887f19f - - - - - diff --git a/spaces/lithiumice/SadTalker/src/face3d/models/template_model.py b/spaces/lithiumice/SadTalker/src/face3d/models/template_model.py deleted file mode 100644 index dac7b33d5889777eb63c9882a3b9fa094dcab293..0000000000000000000000000000000000000000 --- a/spaces/lithiumice/SadTalker/src/face3d/models/template_model.py +++ /dev/null @@ -1,100 +0,0 @@ -"""Model class template - -This module provides a template for users to implement custom models. -You can specify '--model template' to use this model. -The class name should be consistent with both the filename and its model option. -The filename should be _dataset.py -The class name should be Dataset.py -It implements a simple image-to-image translation baseline based on regression loss. -Given input-output pairs (data_A, data_B), it learns a network netG that can minimize the following L1 loss: - min_ ||netG(data_A) - data_B||_1 -You need to implement the following functions: - : Add model-specific options and rewrite default values for existing options. - <__init__>: Initialize this model class. - : Unpack input data and perform data pre-processing. - : Run forward pass. This will be called by both and . - : Update network weights; it will be called in every training iteration. -""" -import numpy as np -import torch -from .base_model import BaseModel -from . import networks - - -class TemplateModel(BaseModel): - @staticmethod - def modify_commandline_options(parser, is_train=True): - """Add new model-specific options and rewrite default values for existing options. - - Parameters: - parser -- the option parser - is_train -- if it is training phase or test phase. You can use this flag to add training-specific or test-specific options. - - Returns: - the modified parser. - """ - parser.set_defaults(dataset_mode='aligned') # You can rewrite default values for this model. For example, this model usually uses aligned dataset as its dataset. 
- if is_train: - parser.add_argument('--lambda_regression', type=float, default=1.0, help='weight for the regression loss') # You can define new arguments for this model. - - return parser - - def __init__(self, opt): - """Initialize this model class. - - Parameters: - opt -- training/test options - - A few things can be done here. - - (required) call the initialization function of BaseModel - - define loss function, visualization images, model names, and optimizers - """ - BaseModel.__init__(self, opt) # call the initialization method of BaseModel - # specify the training losses you want to print out. The program will call base_model.get_current_losses to plot the losses to the console and save them to the disk. - self.loss_names = ['loss_G'] - # specify the images you want to save and display. The program will call base_model.get_current_visuals to save and display these images. - self.visual_names = ['data_A', 'data_B', 'output'] - # specify the models you want to save to the disk. The program will call base_model.save_networks and base_model.load_networks to save and load networks. - # you can use opt.isTrain to specify different behaviors for training and test. For example, some networks will not be used during test, and you don't need to load them. - self.model_names = ['G'] - # define networks; you can use opt.isTrain to specify different behaviors for training and test. - self.netG = networks.define_G(opt.input_nc, opt.output_nc, opt.ngf, opt.netG, gpu_ids=self.gpu_ids) - if self.isTrain: # only defined during training time - # define your loss functions. You can use losses provided by torch.nn such as torch.nn.L1Loss. - # We also provide a GANLoss class "networks.GANLoss". self.criterionGAN = networks.GANLoss().to(self.device) - self.criterionLoss = torch.nn.L1Loss() - # define and initialize optimizers. You can define one optimizer for each network. - # If two networks are updated at the same time, you can use itertools.chain to group them. See cycle_gan_model.py for an example. - self.optimizer = torch.optim.Adam(self.netG.parameters(), lr=opt.lr, betas=(opt.beta1, 0.999)) - self.optimizers = [self.optimizer] - - # Our program will automatically call to define schedulers, load networks, and print networks - - def set_input(self, input): - """Unpack input data from the dataloader and perform necessary pre-processing steps. - - Parameters: - input: a dictionary that contains the data itself and its metadata information. - """ - AtoB = self.opt.direction == 'AtoB' # use to swap data_A and data_B - self.data_A = input['A' if AtoB else 'B'].to(self.device) # get image data A - self.data_B = input['B' if AtoB else 'A'].to(self.device) # get image data B - self.image_paths = input['A_paths' if AtoB else 'B_paths'] # get image paths - - def forward(self): - """Run forward pass. This will be called by both functions and .""" - self.output = self.netG(self.data_A) # generate output image given the input data_A - - def backward(self): - """Calculate losses, gradients, and update network weights; called in every training iteration""" - # caculate the intermediate results if necessary; here self.output has been computed during function - # calculate loss given the input and intermediate results - self.loss_G = self.criterionLoss(self.output, self.data_B) * self.opt.lambda_regression - self.loss_G.backward() # calculate gradients of network G w.r.t. 
loss_G - - def optimize_parameters(self): - """Update network weights; it will be called in every training iteration.""" - self.forward() # first call forward to calculate intermediate results - self.optimizer.zero_grad() # clear network G's existing gradients - self.backward() # calculate gradients for network G - self.optimizer.step() # update gradients for network G diff --git a/spaces/lj1995/vocal2guitar/uvr5_pack/lib_v5/nets_537227KB.py b/spaces/lj1995/vocal2guitar/uvr5_pack/lib_v5/nets_537227KB.py deleted file mode 100644 index 1ceac4a470ca311d594818d52e5f96919cfddb26..0000000000000000000000000000000000000000 --- a/spaces/lj1995/vocal2guitar/uvr5_pack/lib_v5/nets_537227KB.py +++ /dev/null @@ -1,123 +0,0 @@ -import torch -import numpy as np -from torch import nn -import torch.nn.functional as F - -from uvr5_pack.lib_v5 import layers_537238KB as layers - - -class BaseASPPNet(nn.Module): - def __init__(self, nin, ch, dilations=(4, 8, 16)): - super(BaseASPPNet, self).__init__() - self.enc1 = layers.Encoder(nin, ch, 3, 2, 1) - self.enc2 = layers.Encoder(ch, ch * 2, 3, 2, 1) - self.enc3 = layers.Encoder(ch * 2, ch * 4, 3, 2, 1) - self.enc4 = layers.Encoder(ch * 4, ch * 8, 3, 2, 1) - - self.aspp = layers.ASPPModule(ch * 8, ch * 16, dilations) - - self.dec4 = layers.Decoder(ch * (8 + 16), ch * 8, 3, 1, 1) - self.dec3 = layers.Decoder(ch * (4 + 8), ch * 4, 3, 1, 1) - self.dec2 = layers.Decoder(ch * (2 + 4), ch * 2, 3, 1, 1) - self.dec1 = layers.Decoder(ch * (1 + 2), ch, 3, 1, 1) - - def __call__(self, x): - h, e1 = self.enc1(x) - h, e2 = self.enc2(h) - h, e3 = self.enc3(h) - h, e4 = self.enc4(h) - - h = self.aspp(h) - - h = self.dec4(h, e4) - h = self.dec3(h, e3) - h = self.dec2(h, e2) - h = self.dec1(h, e1) - - return h - - -class CascadedASPPNet(nn.Module): - def __init__(self, n_fft): - super(CascadedASPPNet, self).__init__() - self.stg1_low_band_net = BaseASPPNet(2, 64) - self.stg1_high_band_net = BaseASPPNet(2, 64) - - self.stg2_bridge = layers.Conv2DBNActiv(66, 32, 1, 1, 0) - self.stg2_full_band_net = BaseASPPNet(32, 64) - - self.stg3_bridge = layers.Conv2DBNActiv(130, 64, 1, 1, 0) - self.stg3_full_band_net = BaseASPPNet(64, 128) - - self.out = nn.Conv2d(128, 2, 1, bias=False) - self.aux1_out = nn.Conv2d(64, 2, 1, bias=False) - self.aux2_out = nn.Conv2d(64, 2, 1, bias=False) - - self.max_bin = n_fft // 2 - self.output_bin = n_fft // 2 + 1 - - self.offset = 128 - - def forward(self, x, aggressiveness=None): - mix = x.detach() - x = x.clone() - - x = x[:, :, : self.max_bin] - - bandw = x.size()[2] // 2 - aux1 = torch.cat( - [ - self.stg1_low_band_net(x[:, :, :bandw]), - self.stg1_high_band_net(x[:, :, bandw:]), - ], - dim=2, - ) - - h = torch.cat([x, aux1], dim=1) - aux2 = self.stg2_full_band_net(self.stg2_bridge(h)) - - h = torch.cat([x, aux1, aux2], dim=1) - h = self.stg3_full_band_net(self.stg3_bridge(h)) - - mask = torch.sigmoid(self.out(h)) - mask = F.pad( - input=mask, - pad=(0, 0, 0, self.output_bin - mask.size()[2]), - mode="replicate", - ) - - if self.training: - aux1 = torch.sigmoid(self.aux1_out(aux1)) - aux1 = F.pad( - input=aux1, - pad=(0, 0, 0, self.output_bin - aux1.size()[2]), - mode="replicate", - ) - aux2 = torch.sigmoid(self.aux2_out(aux2)) - aux2 = F.pad( - input=aux2, - pad=(0, 0, 0, self.output_bin - aux2.size()[2]), - mode="replicate", - ) - return mask * mix, aux1 * mix, aux2 * mix - else: - if aggressiveness: - mask[:, :, : aggressiveness["split_bin"]] = torch.pow( - mask[:, :, : aggressiveness["split_bin"]], - 1 + aggressiveness["value"] / 3, - ) - mask[:, :, 
aggressiveness["split_bin"] :] = torch.pow( - mask[:, :, aggressiveness["split_bin"] :], - 1 + aggressiveness["value"], - ) - - return mask * mix - - def predict(self, x_mag, aggressiveness=None): - h = self.forward(x_mag, aggressiveness) - - if self.offset > 0: - h = h[:, :, :, self.offset : -self.offset] - assert h.size()[3] > 0 - - return h diff --git a/spaces/ludvigolsen/plot_confusion_matrix/utils.py b/spaces/ludvigolsen/plot_confusion_matrix/utils.py deleted file mode 100644 index 248fd9eeed26526a7bc491a3a4aa650fd98a100d..0000000000000000000000000000000000000000 --- a/spaces/ludvigolsen/plot_confusion_matrix/utils.py +++ /dev/null @@ -1,92 +0,0 @@ -import subprocess -import re -import streamlit as st -import json -from typing import Optional - - -def show_error(msg, action): - st.error( - f"Failed to {action}:\n\n...{msg}\n\nPlease [report](https://github.com/LudvigOlsen/plot_confusion_matrix/issues) this issue." - ) - - -def call_subprocess(call_, message, return_output=False, encoding="UTF-8"): - # With capturing of output - if return_output: - try: - out = subprocess.check_output(call_, shell=True, encoding=encoding) - except subprocess.CalledProcessError as e: - if "Failed to create plot from confusion matrix." in e.output: - msg = e.output.split("Failed to create plot from confusion matrix.")[-1] - show_error(msg=msg, action="plot confusion matrix") - elif "Failed to read design settings as a json file" in e.output: - msg = e.output.split("Failed to read design settings as a json file")[ - -1 - ] - show_error(msg=msg, action="read design settings") - elif "Failed to read data from" in e.output: - msg = e.output.split("Failed to read data from")[-1] - show_error(msg=msg, action="read data") - elif "Failed to ggsave plot to:" in e.output: - msg = e.output.split("Failed to ggsave plot to:")[-1] - show_error(msg=msg, action="save plot") - else: - msg = e.output.split("\n\n")[-1] - st.error( - f"Unknown type of error: {msg}.\n\n" - "Please [report](https://github.com/LudvigOlsen/plot_confusion_matrix/issues) this issue." - ) - print(e.output) - print(f"{message}: {call_}") - raise e - return out - - # Without capturing of output - try: - subprocess.check_call(call_, shell=True) - except subprocess.CalledProcessError as e: - print(f"{message}: {call_}") - raise e - - -def clean_string_for_non_alphanumerics(s): - # Remove non-alphanumerics (keep spaces) - pattern1 = re.compile("[^0-9a-zA-Z\s]+") - # Replace multiple spaces with a single space - pattern2 = re.compile("\s+") - # Apply replacements - s = pattern1.sub("", s) - s = pattern2.sub(" ", s) - # Trim whitespace in start and end - return s.strip() - - -def clean_str_column(x): - return x.astype(str).apply(lambda x: clean_string_for_non_alphanumerics(x)) - - -def min_max_scale_list( - x: list, - new_min: float, - new_max: float, - old_min: Optional[float] = None, - old_max: Optional[float] = None, -) -> list: - """ - MinMax scaler for lists. - Why: Currently we don't require numpy as dependency. 
- """ - if old_min is None: - old_min = min(x) - if old_max is None: - old_max = max(x) - - diff = old_max - old_min - - # Avoiding zero-division - if diff == 0: - diff = 1 - - x = [(xi - old_min) / diff for xi in x] - return [xi * (new_max - new_min) + new_min for xi in x] diff --git a/spaces/luost26/DiffAb/diffab/tools/runner/design_for_pdb.py b/spaces/luost26/DiffAb/diffab/tools/runner/design_for_pdb.py deleted file mode 100644 index e67c38800c0c9d76b17ca122d580372a0b68f0a7..0000000000000000000000000000000000000000 --- a/spaces/luost26/DiffAb/diffab/tools/runner/design_for_pdb.py +++ /dev/null @@ -1,291 +0,0 @@ -import os -import argparse -import copy -import json -from tqdm.auto import tqdm -from torch.utils.data import DataLoader - -from diffab.datasets.custom import preprocess_antibody_structure -from diffab.models import get_model -from diffab.modules.common.geometry import reconstruct_backbone_partially -from diffab.modules.common.so3 import so3vec_to_rotation -from diffab.utils.inference import RemoveNative -from diffab.utils.protein.writers import save_pdb -from diffab.utils.train import recursive_to -from diffab.utils.misc import * -from diffab.utils.data import * -from diffab.utils.transforms import * -from diffab.utils.inference import * -from diffab.tools.renumber import renumber as renumber_antibody - - -def create_data_variants(config, structure_factory): - structure = structure_factory() - structure_id = structure['id'] - - data_variants = [] - if config.mode == 'single_cdr': - cdrs = sorted(list(set(find_cdrs(structure)).intersection(config.sampling.cdrs))) - for cdr_name in cdrs: - transform = Compose([ - MaskSingleCDR(cdr_name, augmentation=False), - MergeChains(), - ]) - data_var = transform(structure_factory()) - residue_first, residue_last = get_residue_first_last(data_var) - data_variants.append({ - 'data': data_var, - 'name': f'{structure_id}-{cdr_name}', - 'tag': f'{cdr_name}', - 'cdr': cdr_name, - 'residue_first': residue_first, - 'residue_last': residue_last, - }) - elif config.mode == 'multiple_cdrs': - cdrs = sorted(list(set(find_cdrs(structure)).intersection(config.sampling.cdrs))) - transform = Compose([ - MaskMultipleCDRs(selection=cdrs, augmentation=False), - MergeChains(), - ]) - data_var = transform(structure_factory()) - data_variants.append({ - 'data': data_var, - 'name': f'{structure_id}-MultipleCDRs', - 'tag': 'MultipleCDRs', - 'cdrs': cdrs, - 'residue_first': None, - 'residue_last': None, - }) - elif config.mode == 'full': - transform = Compose([ - MaskAntibody(), - MergeChains(), - ]) - data_var = transform(structure_factory()) - data_variants.append({ - 'data': data_var, - 'name': f'{structure_id}-Full', - 'tag': 'Full', - 'residue_first': None, - 'residue_last': None, - }) - elif config.mode == 'abopt': - cdrs = sorted(list(set(find_cdrs(structure)).intersection(config.sampling.cdrs))) - for cdr_name in cdrs: - transform = Compose([ - MaskSingleCDR(cdr_name, augmentation=False), - MergeChains(), - ]) - data_var = transform(structure_factory()) - residue_first, residue_last = get_residue_first_last(data_var) - for opt_step in config.sampling.optimize_steps: - data_variants.append({ - 'data': data_var, - 'name': f'{structure_id}-{cdr_name}-O{opt_step}', - 'tag': f'{cdr_name}-O{opt_step}', - 'cdr': cdr_name, - 'opt_step': opt_step, - 'residue_first': residue_first, - 'residue_last': residue_last, - }) - else: - raise ValueError(f'Unknown mode: {config.mode}.') - return data_variants - - -def design_for_pdb(args): - # Load configs - config, 
config_name = load_config(args.config) - seed_all(args.seed if args.seed is not None else config.sampling.seed) - - # Structure loading - data_id = os.path.basename(args.pdb_path) - if args.no_renumber: - pdb_path = args.pdb_path - else: - in_pdb_path = args.pdb_path - out_pdb_path = os.path.splitext(in_pdb_path)[0] + '_chothia.pdb' - heavy_chains, light_chains = renumber_antibody(in_pdb_path, out_pdb_path) - pdb_path = out_pdb_path - - if args.heavy is None and len(heavy_chains) > 0: - args.heavy = heavy_chains[0] - if args.light is None and len(light_chains) > 0: - args.light = light_chains[0] - if args.heavy is None and args.light is None: - raise ValueError("Neither heavy chain id (--heavy) or light chain id (--light) is specified.") - get_structure = lambda: preprocess_antibody_structure({ - 'id': data_id, - 'pdb_path': pdb_path, - 'heavy_id': args.heavy, - # If the input is a nanobody, the light chain will be ignores - 'light_id': args.light, - }) - - # Logging - structure_ = get_structure() - structure_id = structure_['id'] - tag_postfix = '_%s' % args.tag if args.tag else '' - log_dir = get_new_log_dir( - os.path.join(args.out_root, config_name + tag_postfix), - prefix=data_id - ) - logger = get_logger('sample', log_dir) - logger.info(f'Data ID: {structure_["id"]}') - logger.info(f'Results will be saved to {log_dir}') - data_native = MergeChains()(structure_) - save_pdb(data_native, os.path.join(log_dir, 'reference.pdb')) - - # Load checkpoint and model - logger.info('Loading model config and checkpoints: %s' % (config.model.checkpoint)) - ckpt = torch.load(config.model.checkpoint, map_location='cpu') - cfg_ckpt = ckpt['config'] - model = get_model(cfg_ckpt.model).to(args.device) - lsd = model.load_state_dict(ckpt['model']) - logger.info(str(lsd)) - - # Make data variants - data_variants = create_data_variants( - config = config, - structure_factory = get_structure, - ) - - # Save metadata - metadata = { - 'identifier': structure_id, - 'index': data_id, - 'config': args.config, - 'items': [{kk: vv for kk, vv in var.items() if kk != 'data'} for var in data_variants], - } - with open(os.path.join(log_dir, 'metadata.json'), 'w') as f: - json.dump(metadata, f, indent=2) - - # Start sampling - collate_fn = PaddingCollate(eight=False) - inference_tfm = [ PatchAroundAnchor(), ] - if 'abopt' not in config.mode: # Don't remove native CDR in optimization mode - inference_tfm.append(RemoveNative( - remove_structure = config.sampling.sample_structure, - remove_sequence = config.sampling.sample_sequence, - )) - inference_tfm = Compose(inference_tfm) - - for variant in data_variants: - os.makedirs(os.path.join(log_dir, variant['tag']), exist_ok=True) - logger.info(f"Start sampling for: {variant['tag']}") - - save_pdb(data_native, os.path.join(log_dir, variant['tag'], 'REF1.pdb')) # w/ OpenMM minimization - - data_cropped = inference_tfm( - copy.deepcopy(variant['data']) - ) - data_list_repeat = [ data_cropped ] * config.sampling.num_samples - loader = DataLoader(data_list_repeat, batch_size=args.batch_size, shuffle=False, collate_fn=collate_fn) - - count = 0 - for batch in tqdm(loader, desc=variant['name'], dynamic_ncols=True): - torch.set_grad_enabled(False) - model.eval() - batch = recursive_to(batch, args.device) - if 'abopt' in config.mode: - # Antibody optimization starting from native - traj_batch = model.optimize(batch, opt_step=variant['opt_step'], optimize_opt={ - 'pbar': True, - 'sample_structure': config.sampling.sample_structure, - 'sample_sequence': config.sampling.sample_sequence, 
- }) - else: - # De novo design - traj_batch = model.sample(batch, sample_opt={ - 'pbar': True, - 'sample_structure': config.sampling.sample_structure, - 'sample_sequence': config.sampling.sample_sequence, - }) - - aa_new = traj_batch[0][2] # 0: Last sampling step. 2: Amino acid. - pos_atom_new, mask_atom_new = reconstruct_backbone_partially( - pos_ctx = batch['pos_heavyatom'], - R_new = so3vec_to_rotation(traj_batch[0][0]), - t_new = traj_batch[0][1], - aa = aa_new, - chain_nb = batch['chain_nb'], - res_nb = batch['res_nb'], - mask_atoms = batch['mask_heavyatom'], - mask_recons = batch['generate_flag'], - ) - aa_new = aa_new.cpu() - pos_atom_new = pos_atom_new.cpu() - mask_atom_new = mask_atom_new.cpu() - - for i in range(aa_new.size(0)): - data_tmpl = variant['data'] - aa = apply_patch_to_tensor(data_tmpl['aa'], aa_new[i], data_cropped['patch_idx']) - mask_ha = apply_patch_to_tensor(data_tmpl['mask_heavyatom'], mask_atom_new[i], data_cropped['patch_idx']) - pos_ha = ( - apply_patch_to_tensor( - data_tmpl['pos_heavyatom'], - pos_atom_new[i] + batch['origin'][i].view(1, 1, 3).cpu(), - data_cropped['patch_idx'] - ) - ) - - save_path = os.path.join(log_dir, variant['tag'], '%04d.pdb' % (count, )) - save_pdb({ - 'chain_nb': data_tmpl['chain_nb'], - 'chain_id': data_tmpl['chain_id'], - 'resseq': data_tmpl['resseq'], - 'icode': data_tmpl['icode'], - # Generated - 'aa': aa, - 'mask_heavyatom': mask_ha, - 'pos_heavyatom': pos_ha, - }, path=save_path) - # save_pdb({ - # 'chain_nb': data_cropped['chain_nb'], - # 'chain_id': data_cropped['chain_id'], - # 'resseq': data_cropped['resseq'], - # 'icode': data_cropped['icode'], - # # Generated - # 'aa': aa_new[i], - # 'mask_heavyatom': mask_atom_new[i], - # 'pos_heavyatom': pos_atom_new[i] + batch['origin'][i].view(1, 1, 3).cpu(), - # }, path=os.path.join(log_dir, variant['tag'], '%04d_patch.pdb' % (count, ))) - count += 1 - - logger.info('Finished.\n') - - -def args_from_cmdline(): - parser = argparse.ArgumentParser() - parser.add_argument('pdb_path', type=str) - parser.add_argument('--heavy', type=str, default=None, help='Chain id of the heavy chain.') - parser.add_argument('--light', type=str, default=None, help='Chain id of the light chain.') - parser.add_argument('--no_renumber', action='store_true', default=False) - parser.add_argument('-c', '--config', type=str, default='./configs/test/codesign_single.yml') - parser.add_argument('-o', '--out_root', type=str, default='./results') - parser.add_argument('-t', '--tag', type=str, default='') - parser.add_argument('-s', '--seed', type=int, default=None) - parser.add_argument('-d', '--device', type=str, default='cuda') - parser.add_argument('-b', '--batch_size', type=int, default=16) - args = parser.parse_args() - return args - - -def args_factory(**kwargs): - default_args = EasyDict( - heavy = 'H', - light = 'L', - no_renumber = False, - config = './configs/test/codesign_single.yml', - out_root = './results', - tag = '', - seed = None, - device = 'cuda', - batch_size = 16 - ) - default_args.update(kwargs) - return default_args - - -if __name__ == '__main__': - design_for_pdb(args_from_cmdline()) diff --git a/spaces/ma-xu/LIVE/README.md b/spaces/ma-xu/LIVE/README.md deleted file mode 100644 index e53462aba16e23c6ac5a3a178a02636f8bf8e76b..0000000000000000000000000000000000000000 --- a/spaces/ma-xu/LIVE/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: LIVE -emoji: 📊 -colorFrom: pink -colorTo: indigo -sdk: gradio -sdk_version: 2.9.1 -app_file: app.py -pinned: false -license: gpl-3.0 ---- - -Check out 
the configuration reference at https://huggingface.co/docs/hub/spaces#reference - diff --git a/spaces/ma-xu/LIVE/thrust/thrust/system/cuda/detail/internal/copy_cross_system.h b/spaces/ma-xu/LIVE/thrust/thrust/system/cuda/detail/internal/copy_cross_system.h deleted file mode 100644 index ab3b4e5bb7fe598a2f22da280b772fb72f4b3dd5..0000000000000000000000000000000000000000 --- a/spaces/ma-xu/LIVE/thrust/thrust/system/cuda/detail/internal/copy_cross_system.h +++ /dev/null @@ -1,242 +0,0 @@ -/****************************************************************************** - * Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved. - * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions are met: - * * Redistributions of source code must retain the above copyright - * notice, this list of conditionu and the following disclaimer. - * * Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in the - * documentation and/or other materials provided with the distribution. - * * Neither the name of the NVIDIA CORPORATION nor the - * names of its contributors may be used to endorse or promote products - * derived from this software without specific prior written permission. - * - * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" - * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE - * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE - * ARE DISCLAIMED. IN NO EVENT SHALL NVIDIA CORPORATION BE LIABLE FOR ANY - * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES - * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; - * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND - * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT - * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS - * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
- * - ******************************************************************************/ -#pragma once - -// XXX -// this file must not be included on its own, ever, -// but must be part of include in thrust/system/cuda/detail/copy.h - -#include - -#include -#include -#include -#include -#include -#include -#include - -namespace thrust -{ -namespace cuda_cub { - -namespace __copy { - - - template - THRUST_HOST_FUNCTION void - trivial_device_copy(thrust::cpp::execution_policy& , - thrust::cuda_cub::execution_policy& device_s, - T* dst, - T const* src, - Size count) - { - cudaError status; - status = cuda_cub::trivial_copy_to_device(dst, - src, - count, - cuda_cub::stream(device_s)); - cuda_cub::throw_on_error(status, "__copy::trivial_device_copy H->D: failed"); - } - - template - THRUST_HOST_FUNCTION void - trivial_device_copy(thrust::cuda_cub::execution_policy& device_s, - thrust::cpp::execution_policy& , - T* dst, - T const* src, - Size count) - { - cudaError status; - status = cuda_cub::trivial_copy_from_device(dst, - src, - count, - cuda_cub::stream(device_s)); - cuda_cub::throw_on_error(status, "trivial_device_copy D->H failed"); - } - - template - OutputIt __host__ - cross_system_copy_n(thrust::execution_policy& sys1, - thrust::execution_policy& sys2, - InputIt begin, - Size n, - OutputIt result, - thrust::detail::true_type) // trivial copy - - { - typedef typename iterator_traits::value_type InputTy; - - trivial_device_copy(derived_cast(sys1), - derived_cast(sys2), - reinterpret_cast(thrust::raw_pointer_cast(&*result)), - reinterpret_cast(thrust::raw_pointer_cast(&*begin)), - n); - - return result + n; - } - - // non-trivial H->D copy - template - OutputIt __host__ - cross_system_copy_n(thrust::cpp::execution_policy& host_s, - thrust::cuda_cub::execution_policy& device_s, - InputIt first, - Size num_items, - OutputIt result, - thrust::detail::false_type) // non-trivial copy - { - // get type of the input data - typedef typename thrust::iterator_value::type InputTy; - - // copy input data into host temp storage - InputIt last = first; - thrust::advance(last, num_items); - thrust::detail::temporary_array temp(host_s, num_items); - - for (Size idx = 0; idx != num_items; idx++) - { - ::new (static_cast(temp.data().get()+idx)) InputTy(*first); - ++first; - } - - // allocate device temporary storage - thrust::detail::temporary_array d_in_ptr(device_s, num_items); - - // trivial copy data from host to device - cudaError status = cuda_cub::trivial_copy_to_device(d_in_ptr.data().get(), - temp.data().get(), - num_items, - cuda_cub::stream(device_s)); - cuda_cub::throw_on_error(status, "__copy:: H->D: failed"); - - - // device->device copy - OutputIt ret = cuda_cub::copy_n(device_s, d_in_ptr.data(), num_items, result); - - return ret; - } - -#if THRUST_DEVICE_COMPILER == THRUST_DEVICE_COMPILER_NVCC - // non-trivial copy D->H, only supported with NVCC compiler - // because copy ctor must have __device__ annotations, which is nvcc-only - // feature - template - OutputIt __host__ - cross_system_copy_n(thrust::cuda_cub::execution_policy& device_s, - thrust::cpp::execution_policy& host_s, - InputIt first, - Size num_items, - OutputIt result, - thrust::detail::false_type) // non-trivial copy - - { - // get type of the input data - typedef typename thrust::iterator_value::type InputTy; - - // allocate device temp storage - thrust::detail::temporary_array d_in_ptr(device_s, num_items); - - // uninitialize copy into temp device storage - cuda_cub::uninitialized_copy_n(device_s, first, num_items, 
d_in_ptr.data()); - - // allocate host temp storage - thrust::detail::temporary_array temp(host_s, num_items); - - // trivial copy from device to host - cudaError status; - status = cuda_cub::trivial_copy_from_device(temp.data().get(), - d_in_ptr.data().get(), - num_items, - cuda_cub::stream(device_s)); - cuda_cub::throw_on_error(status, "__copy:: D->H: failed"); - - // host->host copy - OutputIt ret = thrust::copy_n(host_s, temp.data(), num_items, result); - - return ret; - } -#endif - - template - OutputIt __host__ - cross_system_copy_n(cross_system systems, - InputIt begin, - Size n, - OutputIt result) - { - return cross_system_copy_n( - derived_cast(systems.sys1), - derived_cast(systems.sys2), - begin, - n, - result, - typename is_indirectly_trivially_relocatable_to::type()); - } - - template - OutputIterator __host__ - cross_system_copy(cross_system systems, - InputIterator begin, - InputIterator end, - OutputIterator result) - { - return cross_system_copy_n(systems, - begin, - thrust::distance(begin, end), - result); - } - -} // namespace __copy - -} // namespace cuda_cub -} // end namespace thrust diff --git a/spaces/ma-xu/LIVE/thrust/thrust/system/detail/sequential/uninitialized_copy.h b/spaces/ma-xu/LIVE/thrust/thrust/system/detail/sequential/uninitialized_copy.h deleted file mode 100644 index c6ae90664ad9538e73febfde86c334011de417c8..0000000000000000000000000000000000000000 --- a/spaces/ma-xu/LIVE/thrust/thrust/system/detail/sequential/uninitialized_copy.h +++ /dev/null @@ -1,22 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -#pragma once - -#include - -// this system has no special version of this algorithm - diff --git a/spaces/magicr/BuboGPT/bubogpt/processors/__init__.py b/spaces/magicr/BuboGPT/bubogpt/processors/__init__.py deleted file mode 100644 index e1bb0171393b0e2af7f05de671a02e6946edd5ba..0000000000000000000000000000000000000000 --- a/spaces/magicr/BuboGPT/bubogpt/processors/__init__.py +++ /dev/null @@ -1,47 +0,0 @@ -""" - Copyright (c) 2022, salesforce.com, inc. - All rights reserved. 
- SPDX-License-Identifier: BSD-3-Clause - For full license text, see the LICENSE_Lavis file in the repo root or https://opensource.org/licenses/BSD-3-Clause -""" - -from bubogpt.processors.base_processor import BaseProcessor -from bubogpt.processors.blip_processors import ( - Blip2ImageTrainProcessor, - Blip2ImageEvalProcessor, - BlipCaptionProcessor, -) -from bubogpt.processors.imagebind_vision_processor import ( - ImageBindCaptionProcessor, - ImageBindVisionTrainProcessor, - ImageBindVisionEvalProcessor -) -from bubogpt.processors.imagebind_audio_processor import ( - ImageBindAudioTrainProcessor, - ImageBindAudioEvalProcessor, -) - -from bubogpt.common.registry import registry - -__all__ = [ - "BaseProcessor", - "Blip2ImageTrainProcessor", - "Blip2ImageEvalProcessor", - "BlipCaptionProcessor", - "ImageBindCaptionProcessor", - "ImageBindVisionTrainProcessor", - "ImageBindVisionEvalProcessor", - "ImageBindAudioTrainProcessor", - "ImageBindAudioEvalProcessor", -] - - -def load_processor(name, cfg=None): - """ - Example - - >>> processor = load_processor("alpro_video_train", cfg=None) - """ - processor = registry.get_processor_class(name).from_config(cfg) - - return processor diff --git a/spaces/magicr/BuboGPT/bubogpt/processors/base_processor.py b/spaces/magicr/BuboGPT/bubogpt/processors/base_processor.py deleted file mode 100644 index 39b33cdf8fcd97cfd3e4a5fbece6593357af9d41..0000000000000000000000000000000000000000 --- a/spaces/magicr/BuboGPT/bubogpt/processors/base_processor.py +++ /dev/null @@ -1,26 +0,0 @@ -""" - Copyright (c) 2022, salesforce.com, inc. - All rights reserved. - SPDX-License-Identifier: BSD-3-Clause - For full license text, see the LICENSE_Lavis file in the repo root or https://opensource.org/licenses/BSD-3-Clause -""" - -from omegaconf import OmegaConf - - -class BaseProcessor: - def __init__(self): - self.transform = lambda x: x - return - - def __call__(self, item): - return self.transform(item) - - @classmethod - def from_config(cls, cfg=None): - return cls() - - def build(self, **kwargs): - cfg = OmegaConf.create(kwargs) - - return self.from_config(cfg) diff --git a/spaces/marioboy/neil-breen/synthesizer/utils/cleaners.py b/spaces/marioboy/neil-breen/synthesizer/utils/cleaners.py deleted file mode 100644 index eab63f05c9cc7cc0b583992eac94058097f3c191..0000000000000000000000000000000000000000 --- a/spaces/marioboy/neil-breen/synthesizer/utils/cleaners.py +++ /dev/null @@ -1,88 +0,0 @@ -""" -Cleaners are transformations that run over the input text at both training and eval time. - -Cleaners can be selected by passing a comma-delimited list of cleaner names as the "cleaners" -hyperparameter. Some cleaners are English-specific. You"ll typically want to use: - 1. "english_cleaners" for English text - 2. "transliteration_cleaners" for non-English text that can be transliterated to ASCII using - the Unidecode library (https://pypi.python.org/pypi/Unidecode) - 3. "basic_cleaners" if you do not want to transliterate (in this case, you should also update - the symbols in symbols.py to match your data). -""" - -import re -from unidecode import unidecode -from .numbers import normalize_numbers - -# Regular expression matching whitespace: -_whitespace_re = re.compile(r"\s+") - -# List of (regular expression, replacement) pairs for abbreviations: -_abbreviations = [(re.compile("\\b%s\\." 
% x[0], re.IGNORECASE), x[1]) for x in [ - ("mrs", "misess"), - ("mr", "mister"), - ("dr", "doctor"), - ("st", "saint"), - ("co", "company"), - ("jr", "junior"), - ("maj", "major"), - ("gen", "general"), - ("drs", "doctors"), - ("rev", "reverend"), - ("lt", "lieutenant"), - ("hon", "honorable"), - ("sgt", "sergeant"), - ("capt", "captain"), - ("esq", "esquire"), - ("ltd", "limited"), - ("col", "colonel"), - ("ft", "fort"), -]] - - -def expand_abbreviations(text): - for regex, replacement in _abbreviations: - text = re.sub(regex, replacement, text) - return text - - -def expand_numbers(text): - return normalize_numbers(text) - - -def lowercase(text): - """lowercase input tokens.""" - return text.lower() - - -def collapse_whitespace(text): - return re.sub(_whitespace_re, " ", text) - - -def convert_to_ascii(text): - return unidecode(text) - - -def basic_cleaners(text): - """Basic pipeline that lowercases and collapses whitespace without transliteration.""" - text = lowercase(text) - text = collapse_whitespace(text) - return text - - -def transliteration_cleaners(text): - """Pipeline for non-English text that transliterates to ASCII.""" - text = convert_to_ascii(text) - text = lowercase(text) - text = collapse_whitespace(text) - return text - - -def english_cleaners(text): - """Pipeline for English text, including number and abbreviation expansion.""" - text = convert_to_ascii(text) - text = lowercase(text) - text = expand_numbers(text) - text = expand_abbreviations(text) - text = collapse_whitespace(text) - return text diff --git a/spaces/matthoffner/open-codetree/tailwind.config.js b/spaces/matthoffner/open-codetree/tailwind.config.js deleted file mode 100644 index ca2d093759e40e2ae0405574cee0c3c2ca5b5eb1..0000000000000000000000000000000000000000 --- a/spaces/matthoffner/open-codetree/tailwind.config.js +++ /dev/null @@ -1,13 +0,0 @@ -module.exports = { - content: [ - "./pages/**/*.{js,ts,jsx,tsx}", - "./components/**/*.{js,ts,jsx,tsx}", - ], - theme: { - extend: { - animation: { - "pulse-slow": "pulse 3s linear infinite", - }, - }, - }, -}; diff --git a/spaces/maykcaldas/MAPI_LLM/README.md b/spaces/maykcaldas/MAPI_LLM/README.md deleted file mode 100644 index 63ef61158c1e2a4f902189eef3060e5272022065..0000000000000000000000000000000000000000 --- a/spaces/maykcaldas/MAPI_LLM/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: MAPI LLM -emoji: 👀 -colorFrom: gray -colorTo: purple -sdk: gradio -sdk_version: 3.23.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/merve/anonymization/source/measuring-fairness/students.js b/spaces/merve/anonymization/source/measuring-fairness/students.js deleted file mode 100644 index 4af55cba8cc763d96aa478be96a785048d9edc42..0000000000000000000000000000000000000000 --- a/spaces/merve/anonymization/source/measuring-fairness/students.js +++ /dev/null @@ -1,90 +0,0 @@ -/* Copyright 2020 Google LLC. All Rights Reserved. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
-See the License for the specific language governing permissions and -limitations under the License. -==============================================================================*/ - - - -window.makeStudents = function(){ - var seed = new Math.seedrandom('he4a15') - var rand = d3.randomUniform.source(seed)(0, 1) - var letters = 'abcdefgijlmnopqrsuvwxyz' - letters = (letters + letters.toUpperCase()).split('') - - var nSickCols = 6 - var mSickCols = 8 - var fSickCols = nSickCols*2 - mSickCols - - var students = d3.range(nCols*nCols).map(i => { - var letter = letters[~~d3.randomUniform.source(seed)(0, letters.length)()] - - var isMale = i % 2 == 0 - var isSick = i < (isMale ? mSickCols : fSickCols)*nCols - var grade = isSick*.5 + rand() - var pos = {} - - return {letter, isSick, isMale, grade, pos} - }) - - students = _.sortBy(students, d => -d.grade) - d3.nestBy(students, d => d.isSick).forEach(group => { - var isSick = group[0].isSick - - var sickCols = nSickCols - var cols = isSick ? sickCols : nCols - sickCols - var xOffset = isSick ? 0 : sickCols - - group.forEach((d, i) => { - d.pos.allIJ = [cols - 1 - (i % cols) + xOffset, ~~(i/cols)] - var spreadIJ = d.pos.allIJ.slice() - if (!d.isSick) spreadIJ[0] += .1 - d.pos.all = spreadIJ.map(d => d*c.width/10) - }) - }) - - d3.nestBy(students, d => d.isSick + '-' + d.isMale).forEach(group => { - var isSick = group[0].isSick - var isMale = group[0].isMale - - var sickCols = isMale ? mSickCols : fSickCols - var cols = isSick ? sickCols : nCols - sickCols - var xOffset = isSick ? 0 : sickCols - var yOffset = isMale ? nCols/2 + 2 : 0 - - group.forEach((d, i) => { - d.pos.sexIJ = [cols - 1 - (i % cols) + xOffset, ~~(i/cols) + yOffset] - d.pos.sexGroupIJ = [cols - 1 - (i % cols) + xOffset, ~~(i/cols)] - var spreadIJ = d.pos.sexIJ.slice() - if (!d.isSick) spreadIJ[0] += .1 - d.pos.sex = spreadIJ.map(d => d*c.width/10) - }) - }) - - students.maleOffsetJ = nCols/2 + 2 - students.maleOffsetPx= students.maleOffsetJ*c.width/10 - - students.fSickCols = fSickCols - students.mSickCols = mSickCols - - students.colWidth = c.width/10 - - students.rand = rand - return students -} - - - - - - -if (window.init) window.init() diff --git a/spaces/merve/anonymization/source/third_party/npyjs.js b/spaces/merve/anonymization/source/third_party/npyjs.js deleted file mode 100644 index bd741887cd85f0a495015968a3793f9d1d944efe..0000000000000000000000000000000000000000 --- a/spaces/merve/anonymization/source/third_party/npyjs.js +++ /dev/null @@ -1,108 +0,0 @@ -// Apache-2.0 https://github.com/1wheel/npyjs - -const dtypes = { - ' '\x20').join(''); - - const hl = (header + spacepad).length; - - return Buffer.concat([ - Buffer.from('\x93NUMPY\x01\x00', 'latin1'), - // convert to little-endian - Buffer.from(new Uint8Array([hl % 256, hl/256 | 0])), - Buffer.from(header + spacepad, 'latin1'), - Buffer.from(typedArray.buffer) - ]); -} - -export default {parse, format}; \ No newline at end of file diff --git a/spaces/merve/data-leak/server-side/fill-in-the-blank/py/main.py b/spaces/merve/data-leak/server-side/fill-in-the-blank/py/main.py deleted file mode 100644 index 2ac15bda96de733df52cd7730895ae18baf20529..0000000000000000000000000000000000000000 --- a/spaces/merve/data-leak/server-side/fill-in-the-blank/py/main.py +++ /dev/null @@ -1,59 +0,0 @@ -import os -import json -import shutil - -from flask import Flask, request -from flask_cors import CORS - -import model_bert_large -import model_bert_zari_cda - -app = Flask(__name__) -CORS(app) - - -@app.route('/') -def hello_world(): 
- name = os.environ.get('NAME', 'Test') - print('[Hello]') - return 'Hello {}!'.format(name) - - -@app.route('/embed_test') -def embed_test(): - sentence = 'The dog went to the [MASK].' - print('[TEST] ', sentence) - return json.dumps(model_bert_large.get_embeddings(sentence)) - - -@app.route('/embed', methods=['POST']) -def embed(): - data = json.loads(request.data) - sentence = data['sentence'] - print('[BASE] ' + sentence) - return json.dumps(model_bert_large.get_embeddings(sentence)) - -@app.route('/embed_zari_cda', methods=['POST']) -def embed_zari_cda(): - data = json.loads(request.data) - sentence = data['sentence'] - print('[ZARI] ' + sentence) - return json.dumps(model_bert_zari_cda.get_embeddings(sentence)) - - -@app.route('/embed_group_top', methods=['POST']) -def embed_group_top(): - data = json.loads(request.data) - tokens = data['tokens'] - return json.dumps(model_bert_large.get_embedding_group_top(tokens)) - -@app.route('/get_embedding_group_top_low_mem', methods=['POST']) -def embed_group(): - data = json.loads(request.data) - tokens = data['tokens'] - return json.dumps(model_bert_large.get_embedding_group(tokens)) - -if __name__ == '__main__': - app.run(debug=True, host='0.0.0.0', port=int(os.environ.get('PORT', 5004))) - - diff --git a/spaces/merve/data-leak/source/_posts/2021-08-27-private-and-fair.md b/spaces/merve/data-leak/source/_posts/2021-08-27-private-and-fair.md deleted file mode 100644 index bde65270e5991e7e765eb97294838e8732bc65e9..0000000000000000000000000000000000000000 --- a/spaces/merve/data-leak/source/_posts/2021-08-27-private-and-fair.md +++ /dev/null @@ -1,145 +0,0 @@ ---- -template: post.html -title: Can a Model Be Differentially Private and Fair? -summary: Training models with differential privacy stops models from inadvertently leaking sensitive data, but there's an unexpected side-effect: reduced accuracy on underrepresented subgroups. -shareimg: https://pair.withgoogle.com/explorables/images/private-and-fair.png -shareimgabstract: https://pair.withgoogle.com/explorables/images/private-and-fair-abstract.png -permalink: /private-and-fair/ ---- - -Imagine you want to use machine learning to suggest new bands to listen to. You could do this by having lots of people list their favorite bands and using them to train a model. The trained model might be quite useful and fun, but if someone pokes and prods at the model in just the right way, they could [extract](https://www.wired.com/2007/12/why-anonymous-data-sometimes-isnt/) the music preferences of someone whose data was used to train the model. Other kinds of models are potentially vulnerable; [credit card numbers](https://bair.berkeley.edu/blog/2019/08/13/memorization/) have been pulled out of language models and [actual faces](https://rist.tech.cornell.edu/papers/mi-ccs.pdf) reconstructed from image models. - -Training with [differential privacy](https://desfontain.es/privacy/differential-privacy-awesomeness.html) limits the information about any one data point that is extractable but in some cases there's an unexpected side-effect: reduced accuracy with underrepresented subgroups disparately impacted. - -
    - -Recall that machine learning models are typically trained with [gradient descent](https://playground.tensorflow.org/), a series of small steps taken to minimize an error function. To show how a model can leak its training data, we've trained two simple models to separate red and blue dots using two simple datasets that differ in one way: a single isolated data point in the upper left has been switched from red to blue. - -
    - -Notice that the two models have very different boundary lines near the isolated point by the end of the training. Someone with access to the trained model might be able to [infer](https://pair.withgoogle.com/explorables/data-leak/) if the point in the upper left is red or blue — if the color represented sensitive information, like someone's [voting record](https://gothamist.com/news/researchers-know-how-dante-de-blasio-hundreds-other-new-yorkers-voted), that could be quite bad! - -### Protecting the Privacy of Training Points - -We can prevent a single data point from drastically altering the model by [adding](http://www.cleverhans.io/privacy/2019/03/26/machine-learning-with-differential-privacy-in-tensorflow.html) two operations to each training step:² -- ⚬ Clipping the gradient (here, limiting how much the boundary line can move with each step) to bound the maximum impact a single data point can have on the final model. -- ⚬ Adding random noise to the gradient. - -Try **increasing** the random noise below. We're now training lots of differentially private models; the more the potential models for the red and blue outlier points overlap, the more [plausible deniability](https://pair.withgoogle.com/explorables/anonymization/) the person in the upper left has. - -
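For concreteness, here is a minimal NumPy sketch of the per-example clip-and-noise step described above. It is an illustration of the idea, not the code behind these charts; `clip_norm`, `noise_scale` and the learning rate are placeholder values.

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, clip_norm=1.0, noise_scale=1.0, lr=0.1):
    """One gradient step with per-example clipping and Gaussian noise.

    per_example_grads has shape (batch_size, num_params): one gradient row per
    training example, so each example's influence can be bounded individually.
    """
    # 1. Clip each example's gradient so no single point can move the model too far.
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))

    # 2. Sum the clipped gradients and add Gaussian noise scaled to the clip norm.
    noisy_sum = clipped.sum(axis=0) + np.random.normal(
        scale=noise_scale * clip_norm, size=params.shape)

    # 3. Take an ordinary gradient step with the averaged, noised gradient.
    return params - lr * noisy_sum / len(per_example_grads)
```

Real deployments would also track the cumulative privacy budget across steps with a privacy accountant, as libraries like TensorFlow Privacy and Opacus do.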
    - -You can also try dragging the other points around and adjusting the gradient clipping. Are points in the center or outliers more likely to modify the boundary lines? In two dimensions there's a limited number of outliers, but in higher dimensions [more points](https://observablehq.com/@tophtucker/theres-plenty-of-room-in-the-corners) are outliers and much more information can be extracted from a trained model. - -Correctly combined, adding gradient clipping and random noise to gradient descent makes it possible to train a model with [differential privacy](https://desfontain.es/privacy/differential-privacy-awesomeness.html) – we can guarantee that a model trained on a given dataset is essentially indistinguishable from a model trained on the same dataset with a single point changed. -### Predictions on Outliers Change the Most - -What does this look like in practice? In [Distribution Density, Tails, and Outliers in Machine Learning](https://arxiv.org/abs/1910.13427), a series of increasingly differentially private models were trained on [MNIST digits](https://en.wikipedia.org/wiki/MNIST_database). Every digit in the training set was ranked according to the highest level of privacy that correctly classified it. - -
    - -On the lower left, you can see digits labeled as "3" in the training data that look more like a "2" and a "9". They're very different from the other "3"s in the training data so adding just a bit of privacy protection causes the model to no longer classify them as "3". Under some [specific circumstances](https://arxiv.org/abs/1411.2664), differential privacy can actually improve how well the model generalizes to data it wasn't trained on by limiting the influence of spurious examples. - -The right side shows more canonical digits which are classified correctly even with high levels of privacy because they're quite similar to other digits in the training data. -### The Accuracy Tradeoff -Limiting how much a model can learn from a single example does have a downside: it can also decrease the model's accuracy. With 7,500 training points, 90% accuracy on MNIST digits is only [achievable](https://colab.research.google.com/github/PAIR-code/ai-explorables/blob/master/server-side/private-and-fair/MNIST_DP_Model_Grid.ipynb) with an extremely low level of privacy protection; increasing privacy quickly lowers the model's accuracy. - -Collecting more training data offers a way out of this accuracy/privacy tradeoff. With 60,000 training points, 90% accuracy can be reached with a higher privacy level than almost all [real-world deployments](https://desfontain.es/privacy/real-world-differential-privacy.html) of differential privacy. - -
    - -Looking at the differences between predictions by digit class shows another potential complication: some classes are harder to identify than others. Detecting an "8" with high confidence requires more training data and/or lower privacy than detecting a "0" with high confidence. - -
    - -This problem is exacerbated if the training data has fewer examples of one class than the others. Trying to predict an uncommon event with a differentially private model can require an enormous amount of data. - -### Implications for Fairness - -Outliers also aren't evenly distributed within a class. Below, MNIST digits are colored by their sensitivity to higher privacy levels and projected with [UMAP](https://pair-code.github.io/understanding-umap/), forming several clusters of privacy-sensitive yellow digits. It's possible to inadvertently train a model with good overall accuracy on a class but very low accuracy on a smaller group within the class. - -
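One way to surface this kind of hidden gap is to report accuracy sliced by subgroup instead of a single overall number. A small sketch, assuming the subgroup labels are available as side information (often the hard part in practice):

```python
import numpy as np

def accuracy_by_subgroup(y_true, y_pred, subgroup):
    """Overall accuracy plus a per-subgroup breakdown."""
    y_true, y_pred, subgroup = map(np.asarray, (y_true, y_pred, subgroup))
    report = {"overall": float((y_true == y_pred).mean())}
    for g in np.unique(subgroup):
        mask = subgroup == g
        report[str(g)] = float((y_true[mask] == y_pred[mask]).mean())
    return report

# A model can look fine overall while failing the smaller group entirely:
print(accuracy_by_subgroup(
    y_true=[1] * 10,
    y_pred=[1] * 8 + [0] * 2,
    subgroup=["upright"] * 8 + ["slanted"] * 2,
))  # {'overall': 0.8, 'slanted': 0.0, 'upright': 1.0}
```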
    - -There's nothing that makes a "1" slanted to the left intrinsically harder to classify, but because there are only a few slanted "1"s in the training data it's difficult to make a model that classifies them accurately without leaking information. - -This disparate impact doesn't just happen in datasets of differently drawn digits: increased levels of differential privacy in a range of image and language models [disproportionality decreased accuracy](https://arxiv.org/pdf/1905.12101.pdf) on underrepresented subgroups. And adding differential privacy to a medical model [reduced](https://arxiv.org/pdf/2010.06667v1.pdf) the influence of Black patients' data on the model while increasing the influence of white patients' data. - -Lowering the privacy level might not help non-majoritarian data points either – they're the ones most [susceptible](https://arxiv.org/abs/1906.00389) to having their information exposed. Again, escaping the accuracy/privacy tradeoff requires collecting more data – this time from underrepresented subgroups. -### More Reading - -There are deep connections between [generalization, memorization and privacy](https://arxiv.org/abs/1906.05271) that are still not well understood. Slightly changing the privacy constraints, for example, can create new options. If public, unlabeled data exists, a "[Private Aggregation of Teacher Ensembles](http://www.cleverhans.io/privacy/2018/04/29/privacy-and-machine-learning.html)" could be used instead of gradient clipping and random noise to train a differentially private model with a [smaller disparate impact](https://arxiv.org/pdf/2106.12576.pdf) on accuracy. - -Finding ways to increase privacy with a smaller impact on accuracy is an active area of research – [model architectures](https://arxiv.org/abs/2007.14191) designed with privacy in mind and better [dataset cleaning](https://arxiv.org/pdf/2107.06499.pdf) look like promising avenues. - -There are also additional [accuracy/privacy/fairness](http://proceedings.mlr.press/v97/jagielski19a/jagielski19a.pdf) tradeoffs beyond what's discussed in this post. Even if a differentially private model doesn't have large accuracy gaps between subgroups, enforcing [fairness metrics](https://pair.withgoogle.com/explorables/measuring-fairness/) can reduce privacy or accuracy. - -This post focuses on protecting the privacy of individual data points. In practice more work might be necessary to ensure that the [privacy of users](https://queue.acm.org/detail.cfm?id=3501293#:~:text=Computing%20and%20Verifying%20Anonymous%20Aggregates) – who could contribute much more than a single data point each – is also protected. - -These questions are also significant outside of machine learning. [Allocating resources](https://arxiv.org/abs/2105.07513) based on a differentially private dataset – with no machine learning model involved – can also disproportionately affect different groups. The 2020 Census is the first to use differential privacy and this could have a wide range of impacts, including how [congressional districts](https://statmodeling.stat.columbia.edu/2021/10/20/how-does-post-processed-differentially-private-census-data-affect-redistricting-how-concerned-should-we-be-about-gerrymandering-with-the-new-das/) are drawn. 
- -### Credits - -Adam Pearce // January 2022 - -Thanks to Abhradeep Thakurta, Andreas Terzis, Andy Coenen, Asma Ghandeharioun, Brendan McMahan, Ellen Jiang, Emily Reif, Fernanda Viégas, James Wexler, Kevin Robinson, Matthew Jagielski, Martin Wattenberg, Meredith Morris, Miguel Guevara, Nicolas Papernot and Nithum Thain for their help with this piece. - -### Footnotes - - To speed up training at the cost of looser privacy bounds, gradients, clipping and noise can be calculated on a group of data points instead of individual data points. - - The "ε" in ε-differential privacy essentially [measures](https://desfontain.es/privacy/differential-privacy-in-more-detail.html) the overlap in two distributions after changing a single data point. - - [Clipping](https://openreview.net/forum?id=BJgnXpVYwS) and [noising](https://arxiv.org/pdf/1511.06807.pdf) are also used outside of differential privacy as regularization techniques to improve accuracy.

    In addition to accidentally mislabeled examples, differential privacy can also provide some protection against [data poisoning attacks](https://dp-ml.github.io/2021-workshop-ICLR/files/23.pdf). - - While visually similar digits aren't necessarily interpreted in similar ways by the model, the clustering of visually similar digits in the UMAP diagram at the bottom of the page (which projects embeddings from the penultimate layer of a digit classifier) suggests there is a close connection here. - - Rebalancing the dataset without collecting more data doesn't avoid this privacy/accuracy tradeoff – upsampling the smaller class reduces privacy and downsampling the larger class reduces data and lowers accuracy. - - See the appendix on [Subgroup Size and Accuracy](#appendix-subgroup-size-and-accuracy) for more detail. - -### Appendix: Subgroup Size and Accuracy - -How, exactly, does the amount of training data, the privacy level and the percentage of data from a subgroup impact accuracy? Using MNIST digits rotated 90° as a stand-in for a smaller subgroup, we can see how the accuracy of a series of simple [models](https://colab.research.google.com/github/PAIR-code/ai-explorables/blob/master/server-side/private-and-fair/MNIST_Generate_UMAP.ipynb) that classify "1"s and "7"s changes based on these attributes. - -On the far left, models without any rotated digits in the training data never classify those digits more accurately than random guessing. By rotating 5% of the training digits, a small slice of models with lots of training data and low privacy can accurately classify rotated digits. - -
    - -Increasing the proportion of rotated digits to 10% or 20% or even more makes it possible to train a higher-privacy model that performs well on both types of digits with the same amount of training data. - -Click on one of the models above and you can see how the accuracy gap shifts as the number of training points, privacy level and percentage of rotated digits are independently changed. - -
    - -Intuitively, adding more training data has diminishing marginal increases to accuracy. Accuracy on the smaller group of rotated digits, which may just be on the cusp of being learned, falls off faster as the effective amount of training data is decreased — a disparate reduction in accuracy. - - -### More Explorables - - -

    - - - - - - - - - - - - - - - - - - - - - - \ No newline at end of file diff --git a/spaces/merve/fill-in-the-blank/source/data-leak/players0.js b/spaces/merve/fill-in-the-blank/source/data-leak/players0.js deleted file mode 100644 index 5f1640268c5aa31e0ed73ec7f763b4c64d65f587..0000000000000000000000000000000000000000 --- a/spaces/merve/fill-in-the-blank/source/data-leak/players0.js +++ /dev/null @@ -1,456 +0,0 @@ -var players0 = [ - [ - 1.305925030229746, - 38.016928657799276 - ], - [ - 20.894800483675937, - 23.071342200725514 - ], - [ - 24.232164449818622, - 50.35066505441355 - ], - [ - 37.29141475211608, - 4.643288996372431 - ], - [ - 57.89600967351874, - 25.24788391777509 - ], - [ - 41.20918984280532, - 34.389359129383315 - ], - [ - 42.51511487303507, - 54.26844014510278 - ], - [ - 31.77750906892382, - 67.9081015719468 - ], - [ - 63.84522370012092, - 54.41354292623942 - ], - [ - 70.37484885126965, - 42.22490931076179 - ], - [ - 39.32285368802902, - 56.44498186215236 - ], - [ - 35.550181378476424, - 58.91172914147521 - ], - [ - 46.57799274486094, - 52.8174123337364 - ], - [ - 39.6130592503023, - 37.14631197097945 - ], - [ - 42.51511487303507, - 30.90689238210399 - ], - [ - 50.64087061668682, - 8.706166868198308 - ], - [ - 71.10036275695285, - 8.996372430471585 - ], - [ - 75.01813784764208, - 26.844014510278114 - ], - [ - 77.3397823458283, - 47.44860943168077 - ], - [ - 76.17896009673518, - 59.34703748488513 - ], - [ - 105.05441354292624, - 39.177750906892385 - ], - [ - 59.34703748488513, - 33.083434099153564 - ] -] - - -var players1 = [ - [ - 6.819830713422007, - 27.569528415961305 - ], - [ - 31.05199516324063, - 30.03627569528416 - ], - [ - 28.440145102781138, - 43.24062877871826 - ], - [ - 48.02902055622733, - 13.639661426844015 - ], - [ - 62.249093107617895, - 35.69528415961306 - ], - [ - 49.915356711003625, - 26.553808948004836 - ], - [ - 53.68802902055623, - 47.88391777509069 - ], - [ - 45.85247883917775, - 54.123337363966144 - ], - [ - 72.8415961305925, - 46.57799274486094 - ], - [ - 70.81015719467956, - 23.216444981862153 - ], - [ - 35.98548972188634, - 44.11124546553809 - ], - [ - 49.48004836759371, - 59.92744860943168 - ], - [ - 46.86819830713422, - 45.417170495767834 - ], - [ - 39.6130592503023, - 37.14631197097945 - ], - [ - 42.37001209189843, - 24.812575574365177 - ], - [ - 53.252720677146314, - 9.721886336154776 - ], - [ - 73.5671100362757, - 8.996372430471585 - ], - [ - 80.96735187424426, - 26.698911729141475 - ], - [ - 85.75574365175332, - 37.43651753325272 - ], - [ - 87.35187424425635, - 47.88391777509069 - ], - [ - 112.59975816203143, - 31.77750906892382 - ], - [ - 58.041112454655384, - 25.97339782345828 - ] -] - -var players2 = [ - [ - 22.6360338573156, - 36.27569528415961 - ], - [ - 49.48004836759371, - 18.71825876662636 - ], - [ - 43.82103990326481, - 34.82466747279323 - ], - [ - 94.89721886336154, - 6.674727932285369 - ], - [ - 103.31318016928658, - 24.522370012091898 - ], - [ - 82.12817412333736, - 32.0677146311971 - ], - [ - 52.8174123337364, - 56.009673518742446 - ], - [ - 91.26964933494558, - 55.28415961305925 - ], - [ - 99.68561064087062, - 40.33857315598549 - ], - [ - 105.19951632406288, - 40.33857315598549 - ], - [ - 53.542926239419586, - 43.966142684401454 - ], - [ - 49.48004836759371, - 59.92744860943168 - ], - [ - 58.18621523579202, - 37.87182587666263 - ], - [ - 86.91656590084644, - 37.58162031438936 - ], - [ - 59.34703748488513, - 18.137847642079805 - ], - [ - 96.34824667472793, - 25.24788391777509 - ], - [ - 90.97944377267231, - 8.996372430471585 - 
], - [ - 104.47400241837968, - 31.342200725513905 - ], - [ - 109.8428053204353, - 28.295042321644498 - ], - [ - 105.05441354292624, - 43.24062877871826 - ], - [ - 116.2273276904474, - 25.538089480048367 - ], - [ - 86.62636033857315, - 29.165659008464328 - ] -] - - -playersleakhigh = [ - [ - 2.71764705882353, - 22 - ], - [ - 38.11764705882353, - 44.75294117647059 - ], - [ - 31.058823529411764, - 53.22352941176471 - ], - [ - 52.94117647058824, - 51.10588235294118 - ], - [ - 58.023529411764706, - 50.11764705882353 - ], - [ - 46.305882352941175, - 51.247058823529414 - ], - [ - 46.023529411764706, - 42.635294117647064 - ], - [ - 41.082352941176474, - 48.98823529411765 - ], - [ - 49.411764705882355, - 43.76470588235294 - ], - [ - 59.71764705882353, - 43.48235294117647 - ], - [ - 39.32285368802902, - 56.44498186215236 - ], - [ - 67.76470588235294, - 30.494117647058825 - ], - [ - 78.07058823529412, - 48.28235294117647 - ], - [ - 69.60000000000001, - 40.23529411764706 - ], - [ - 76.09411764705882, - 23.152941176470588 - ], - [ - 85.9764705882353, - 24.282352941176473 - ], - [ - 84.56470588235294, - 48.98823529411765 - ], - [ - 74.68235294117648, - 39.38823529411765 - ], - [ - 79.3529411764706, - 22 - ], - [ - 93.1764705882353, - 34.44705882352941 - ], - [ - 86.68235294117648, - 33.45882352941177 - ], - [ - 81.74117647058824, - 41.92941176470588 - ] -] - -playersleaklow = [ - [ - 2.71764705882353, - 73.12941176470588 - ], - [ - 38.11764705882353, - 44.75294117647059 - ], - [ - 31.058823529411764, - 53.22352941176471 - ], - [ - 52.94117647058824, - 51.10588235294118 - ], - [ - 58.023529411764706, - 50.11764705882353 - ], - [ - 46.305882352941175, - 51.247058823529414 - ], - [ - 46.023529411764706, - 42.635294117647064 - ], - [ - 41.082352941176474, - 48.98823529411765 - ], - [ - 49.411764705882355, - 43.76470588235294 - ], - [ - 59.71764705882353, - 43.48235294117647 - ], - [ - 39.32285368802902, - 56.44498186215236 - ], - [ - 67.76470588235294, - 30.494117647058825 - ], - [ - 78.07058823529412, - 48.28235294117647 - ], - [ - 69.60000000000001, - 40.23529411764706 - ], - [ - 76.09411764705882, - 23.152941176470588 - ], - [ - 85.9764705882353, - 24.282352941176473 - ], - [ - 84.56470588235294, - 48.98823529411765 - ], - [ - 74.68235294117648, - 39.38823529411765 - ], - [ - 79.3529411764706, - 72.70588235294117 - ], - [ - 93.1764705882353, - 34.44705882352941 - ], - [ - 86.68235294117648, - 33.45882352941177 - ], - [ - 81.74117647058824, - 41.92941176470588 - ] -] \ No newline at end of file diff --git a/spaces/merve/gr-blocks/app.py b/spaces/merve/gr-blocks/app.py deleted file mode 100644 index 2cd2d157dfa53d450cdaa48e7265efdc34109da0..0000000000000000000000000000000000000000 --- a/spaces/merve/gr-blocks/app.py +++ /dev/null @@ -1,141 +0,0 @@ -import os -os.system("pip install gradio==2.8.0b22") -os.system("pip install -r requirements.txt") -os.system("pip freeze") -from huggingface_hub import from_pretrained_keras -import numpy as np -import pandas as pd -import tensorflow as tf -import tensorflow_hub as hub -import tensorflow_text as text -from tensorflow import keras -import gradio as gr - - -def make_bert_preprocessing_model(sentence_features, seq_length=128): - """Returns Model mapping string features to BERT inputs. - - Args: - sentence_features: A list with the names of string-valued features. - seq_length: An integer that defines the sequence length of BERT inputs. 
- - Returns: - A Keras Model that can be called on a list or dict of string Tensors - (with the order or names, resp., given by sentence_features) and - returns a dict of tensors for input to BERT. - """ - - input_segments = [ - tf.keras.layers.Input(shape=(), dtype=tf.string, name=ft) - for ft in sentence_features - ] - - # tokenize the text to word pieces - bert_preprocess = hub.load(bert_preprocess_path) - tokenizer = hub.KerasLayer(bert_preprocess.tokenize, - name="tokenizer") - - segments = [tokenizer(s) for s in input_segments] - - truncated_segments = segments - - packer = hub.KerasLayer(bert_preprocess.bert_pack_inputs, - arguments=dict(seq_length=seq_length), - name="packer") - model_inputs = packer(truncated_segments) - return keras.Model(input_segments, model_inputs) - - -def preprocess_image(image_path, resize): - extension = tf.strings.split(image_path)[-1] - - image = tf.io.read_file(image_path) - if extension == b"jpg": - image = tf.image.decode_jpeg(image, 3) - else: - image = tf.image.decode_png(image, 3) - - image = tf.image.resize(image, resize) - return image - -def preprocess_text(text_1, text_2): - - text_1 = tf.convert_to_tensor([text_1]) - text_2 = tf.convert_to_tensor([text_2]) - - output = bert_preprocess_model([text_1, text_2]) - - output = {feature: tf.squeeze(output[feature]) for feature in bert_input_features} - - return output - -def preprocess_text_and_image(sample, resize): - - image_1 = preprocess_image(sample['image_1_path'], resize) - image_2 = preprocess_image(sample['image_2_path'], resize) - - text = preprocess_text(sample['text_1'], sample['text_2']) - - return {"image_1": image_1, "image_2": image_2, "text": text} - - -def classify_info(image_1, text_1, image_2, text_2): - - sample = dict() - sample['image_1_path'] = image_1 - sample['image_2_path'] = image_2 - sample['text_1'] = text_1 - sample['text_2'] = text_2 - - dataframe = pd.DataFrame(sample, index=[0]) - - ds = tf.data.Dataset.from_tensor_slices((dict(dataframe), [0])) - ds = ds.map(lambda x, y: (preprocess_text_and_image(x, resize), y)).cache() - batch_size = 1 - auto = tf.data.AUTOTUNE - ds = ds.batch(batch_size).prefetch(auto) - output = model.predict(ds) - - outputs = dict() - - outputs[labels[0]] = float(output[0][0]) - outputs[labels[1]] = float(output[0][1]) - outputs[labels[2]] = float(output[0][2]) - #label = np.argmax(output) - return outputs #labels[label] - - -model = from_pretrained_keras("keras-io/multimodal-entailment") -resize = (128, 128) -bert_input_features = ["input_word_ids", "input_type_ids", "input_mask"] -bert_model_path = ("https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-2_H-256_A-4/1") -bert_preprocess_path = "https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3" -bert_preprocess_model = make_bert_preprocessing_model(['text_1', 'text_2']) - -labels = {0: "Contradictory", 1: "Implies", 2: "No Entailment"} - -block = gr.Blocks() - -examples = [['examples/image_1.png', '#IndiaFightsCorona:\n\nNearly 4.5 million beneficiaries vaccinated against #COVID19 in 19 days.\n\nIndia is the fastest country to cross landmark of vaccinating 4 million beneficiaries in merely 18 days.\n\n#StaySafe #IndiaWillWin #Unite2FightCorona https://t.co/beGDQfd06S', 'examples/image_2.jpg', '#IndiaFightsCorona:\n\nIndia has become the fastest nation to reach 4 million #COVID19 vaccinations ; it took only 18 days to administer the first 4 million #vaccines\n\n:@MoHFW_INDIA Secretary\n\n#StaySafe #IndiaWillWin #Unite2FightCorona https://t.co/9GENQlqtn3']] - - -with block: - 
gr.Markdown("Multimodal Entailment") - with gr.Tab("Hypothesis"): - with gr.Row(): - gr.Markdown("Upload hypothesis image:") - image_1 = gr.inputs.Image(type="filepath") - text_1 = gr.inputs.Textbox(lines=5) - - with gr.Tab("Premise"): - with gr.Row(): - gr.Markdown("Upload premise image:") - image_2 = gr.inputs.Image(type="filepath") - text_2 = gr.inputs.Textbox(lines=5) - - - run = gr.Button("Run") - label = gr.outputs.Label() - run.click(classify_info, inputs=[image_1, text_1, image_2, text_2], outputs=label) - -block.launch() diff --git a/spaces/merve/uncertainty-calibration/server-side/fill-in-the-blank/gender-over-time-colab/style.css b/spaces/merve/uncertainty-calibration/server-side/fill-in-the-blank/gender-over-time-colab/style.css deleted file mode 100644 index 8165ac5b403d085f7013b25cefc267a6639a0d79..0000000000000000000000000000000000000000 --- a/spaces/merve/uncertainty-calibration/server-side/fill-in-the-blank/gender-over-time-colab/style.css +++ /dev/null @@ -1,70 +0,0 @@ -body{ - font-family: menlo, Consolas, 'Lucida Console', monospace; - margin: 10px; - margin-left: 20px; - width: 1130px; - background: #fff; -} - -.tooltip { - top: -1000px; - position: fixed; - padding: 10px; - background: rgba(255, 255, 255, .90); - border: 1px solid lightgray; - pointer-events: none; -} -.tooltip-hidden{ - opacity: 0; - transition: all .3s; - transition-delay: .1s; -} - -@media (max-width: 590px){ - div.tooltip{ - bottom: -1px; - width: calc(100%); - left: -1px !important; - right: -1px !important; - top: auto !important; - width: auto !important; - } -} - -svg{ - overflow: visible; -} - -.domain{ - display: none; -} - -.axis{ - opacity: .7; -} - -text{ - /*pointer-events: none;*/ - text-shadow: 0 1.5px 0 #fff, 1.5px 0 0 #fff, 0 -1.5px 0 #fff, -1.5px 0 0 #fff; -} - - -#graph > div{ - /*display: inline-block;*/ -} - -.active path{ - stroke: #f0f; - /*stroke-width: 2;*/ - opacity: 1; -} -.active text{ - fill: #f0f; - opacity: 1 !important; - font-size: 14px; - -} - -p{ - max-width: 650px; -} \ No newline at end of file diff --git a/spaces/micole66/electra/app.py b/spaces/micole66/electra/app.py deleted file mode 100644 index 546e694775f33be24f6711dd38274a70aa636c10..0000000000000000000000000000000000000000 --- a/spaces/micole66/electra/app.py +++ /dev/null @@ -1,2 +0,0 @@ -import gradio as gr -gr.Interface.load("huggingface/vicgalle/xlm-roberta-large-xnli-anli").launch() \ No newline at end of file diff --git a/spaces/mikkoar/marco/src/components/theme-toggle.tsx b/spaces/mikkoar/marco/src/components/theme-toggle.tsx deleted file mode 100644 index 67d3f1a2c163ccbeb52c40a7e42f107190237154..0000000000000000000000000000000000000000 --- a/spaces/mikkoar/marco/src/components/theme-toggle.tsx +++ /dev/null @@ -1,31 +0,0 @@ -'use client' - -import * as React from 'react' -import { useTheme } from 'next-themes' - -import { Button } from '@/components/ui/button' -import { IconMoon, IconSun } from '@/components/ui/icons' - -export function ThemeToggle() { - const { setTheme, theme } = useTheme() - const [_, startTransition] = React.useTransition() - - return ( - - ) -} diff --git a/spaces/misteca/ChatGPT/presets.py b/spaces/misteca/ChatGPT/presets.py deleted file mode 100644 index 75e7e8bba696ab4c9fe1a20b8038dffa972766e7..0000000000000000000000000000000000000000 --- a/spaces/misteca/ChatGPT/presets.py +++ /dev/null @@ -1,41 +0,0 @@ -title = """

    ChatGPT 🚀

    """ -description = """
    - -由Bilibili [土川虎虎虎](https://space.bilibili.com/29125536) 和 [明昭MZhao](https://space.bilibili.com/24807452)开发 - -访问川虎ChatGPT的 [GitHub项目](https://github.com/GaiZhenbiao/ChuanhuChatGPT) 下载最新版脚本 - -此App使用 `gpt-3.5-turbo` 大语言模型 -
    -""" -customCSS = """ -code { - display: inline; - white-space: break-spaces; - border-radius: 6px; - margin: 0 2px 0 2px; - padding: .2em .4em .1em .4em; - background-color: rgba(175,184,193,0.2); -} -pre code { - display: block; - white-space: pre; - background-color: hsla(0, 0%, 0%, 72%); - border: solid 5px var(--color-border-primary) !important; - border-radius: 10px; - padding: 0 1.2rem 1.2rem; - margin-top: 1em !important; - color: #FFF; - box-shadow: inset 0px 8px 16px hsla(0, 0%, 0%, .2) -} -""" - -standard_error_msg = "☹️发生了错误:" # 错误信息的标准前缀 -error_retrieve_prompt = "连接超时,无法获取对话。请检查网络连接,或者API-Key是否有效。" # 获取对话时发生错误 -summarize_prompt = "请总结以上对话,不超过100字。" # 总结对话时的 prompt -max_token_streaming = 3000 # 流式对话时的最大 token 数 -timeout_streaming = 5 # 流式对话时的超时时间 -max_token_all = 3500 # 非流式对话时的最大 token 数 -timeout_all = 200 # 非流式对话时的超时时间 -enable_streaming_option = False # 是否启用选择选择是否实时显示回答的勾选框 -HIDE_MY_KEY = False # 如果你想在UI中隐藏你的 API 密钥,将此值设置为 True diff --git a/spaces/mjdolan/Holiday-StyleGAN-NADA/e4e/models/stylegan2/op/fused_bias_act.cpp b/spaces/mjdolan/Holiday-StyleGAN-NADA/e4e/models/stylegan2/op/fused_bias_act.cpp deleted file mode 100644 index 02be898f970bcc8ea297867fcaa4e71b24b3d949..0000000000000000000000000000000000000000 --- a/spaces/mjdolan/Holiday-StyleGAN-NADA/e4e/models/stylegan2/op/fused_bias_act.cpp +++ /dev/null @@ -1,21 +0,0 @@ -#include - - -torch::Tensor fused_bias_act_op(const torch::Tensor& input, const torch::Tensor& bias, const torch::Tensor& refer, - int act, int grad, float alpha, float scale); - -#define CHECK_CUDA(x) TORCH_CHECK(x.type().is_cuda(), #x " must be a CUDA tensor") -#define CHECK_CONTIGUOUS(x) TORCH_CHECK(x.is_contiguous(), #x " must be contiguous") -#define CHECK_INPUT(x) CHECK_CUDA(x); CHECK_CONTIGUOUS(x) - -torch::Tensor fused_bias_act(const torch::Tensor& input, const torch::Tensor& bias, const torch::Tensor& refer, - int act, int grad, float alpha, float scale) { - CHECK_CUDA(input); - CHECK_CUDA(bias); - - return fused_bias_act_op(input, bias, refer, act, grad, alpha, scale); -} - -PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) { - m.def("fused_bias_act", &fused_bias_act, "fused bias act (CUDA)"); -} \ No newline at end of file diff --git a/spaces/mlpc-lab/BLIVA/bliva/models/bliva_flant5xxl.py b/spaces/mlpc-lab/BLIVA/bliva/models/bliva_flant5xxl.py deleted file mode 100644 index bd6f18aa8a8602039a81119a08649c6cbe3c0b9e..0000000000000000000000000000000000000000 --- a/spaces/mlpc-lab/BLIVA/bliva/models/bliva_flant5xxl.py +++ /dev/null @@ -1,803 +0,0 @@ -import logging -import string -import random -import copy - -import torch -import torch.nn as nn -from torch.cuda.amp import autocast as autocast -from transformers import T5TokenizerFast - -from bliva.common.registry import registry -from bliva.models.blip2 import Blip2Base, disabled_train -from bliva.models.modeling_t5 import T5Config, T5ForConditionalGeneration -from transformers.modeling_outputs import BaseModelOutput - - -@registry.register_model("bliva_flant5") -class BLIVAFlanT5(Blip2Base): - - PRETRAINED_MODEL_CONFIG_DICT = { - "flant5xxl": "configs/models/bliva_flant5xxl.yaml", - } - - def __init__( - self, - vit_model="eva_clip_g", - img_size=224, - drop_path_rate=0, - use_grad_checkpoint=False, - vit_precision="fp16", - freeze_vit=True, - num_query_token=32, - t5_model="google/flan-t5-xl", - prompt="", - max_txt_len=128, - max_output_txt_len=256, - apply_lemmatizer=False, - num_few_shot_examples=0, - few_shot_prob=0, - qformer_text_input=True, - ): - """ - apply_lemmatizer: when set to True, 
postprocess predict_answers() result with lemmas. - """ - super().__init__() - - self.tokenizer = self.init_tokenizer(truncation_side="left") - - self.visual_encoder, self.ln_vision = self.init_vision_encoder( - vit_model, img_size, drop_path_rate, use_grad_checkpoint, vit_precision - ) - if freeze_vit: - for name, param in self.visual_encoder.named_parameters(): - param.requires_grad = False - self.visual_encoder = self.visual_encoder.eval() - self.visual_encoder.train = disabled_train - logging.info("freeze vision encoder") - - self.Qformer, self.query_tokens = self.init_Qformer( - num_query_token, self.visual_encoder.num_features - ) - - if not qformer_text_input: - self.Qformer.bert.embeddings.word_embeddings = None - self.Qformer.bert.embeddings.position_embeddings = None - for layer in self.Qformer.bert.encoder.layer: - layer.output = None - layer.intermediate = None - else: - self.Qformer.resize_token_embeddings(len(self.tokenizer)) - self.Qformer.cls = None - - self.t5_tokenizer = T5TokenizerFast.from_pretrained(t5_model, truncation_side='left') - self.t5_output_tokenizer = T5TokenizerFast.from_pretrained(t5_model, truncation_side='right') - - t5_config = T5Config.from_pretrained(t5_model) - t5_config.dense_act_fn = "gelu" - self.t5_model = T5ForConditionalGeneration.from_pretrained( - t5_model, config=t5_config - ) - - for name, param in self.t5_model.named_parameters(): - param.requires_grad = False - param.data = param.data.bfloat16() - - self.t5_proj = nn.Linear( - self.Qformer.config.hidden_size, self.t5_model.config.hidden_size - ) - - self.max_txt_len = max_txt_len - self.max_output_txt_len = max_output_txt_len - self.prompt = prompt - - self._apply_lemmatizer = apply_lemmatizer - self._lemmatizer = None - - self.num_few_shot_examples = num_few_shot_examples - self.few_shot_prob = few_shot_prob - - self.qformer_text_input = qformer_text_input - self.vision_project = nn.Linear(self.visual_encoder.num_features, self.t5_model.config.hidden_size) - - def forward(self, samples): - - image = samples["image"] - image_features= self.visual_encoder.get_intermediate_layers(image)[-2] # [batch_size, 257, 1408] - image_features = image_features[:, 1:] - add_feature_llm = self.vision_project(image_features) - atts_add_feature_llm = torch.ones(add_feature_llm.size()[:-1], dtype=torch.long).to(image.device) - - with self.maybe_autocast(): - image_embeds = self.ln_vision(self.visual_encoder(image)) - image_atts = torch.ones(image_embeds.size()[:-1], dtype=torch.long).to(image.device) - - query_tokens = self.query_tokens.expand(image_embeds.shape[0], -1, -1) - if self.qformer_text_input: - text_Qformer = self.tokenizer( - samples["text_input"], - padding='longest', - truncation=True, - max_length=self.max_txt_len, - return_tensors="pt", - ).to(image.device) - query_atts = torch.ones(query_tokens.size()[:-1], dtype=torch.long).to(image.device) - Qformer_atts = torch.cat([query_atts,text_Qformer.attention_mask],dim=1) - - query_output = self.Qformer.bert( - text_Qformer.input_ids, - attention_mask=Qformer_atts, - query_embeds=query_tokens, - encoder_hidden_states=image_embeds, - encoder_attention_mask=image_atts, - return_dict=True, - ) - else: - query_output = self.Qformer.bert( - query_embeds=query_tokens, - encoder_hidden_states=image_embeds, - encoder_attention_mask=image_atts, - return_dict=True, - ) - - inputs_t5 = self.t5_proj(query_output.last_hidden_state[:,:query_tokens.size(1),:]) - atts_t5 = torch.ones(inputs_t5.size()[:-1], dtype=torch.long).to(image.device) - - fs_embeds, fs_atts 
= None, None - if self.few_shot_prob > 0 and "few_shot_samples" in samples.keys(): - fs_embeds, fs_atts = self.prepare_few_shot_embeds(samples['few_shot_samples']) - - with self.maybe_autocast(dtype=torch.bfloat16): - input_tokens = self.t5_tokenizer( - samples["text_input"], - padding="longest", - truncation=True, - max_length=self.max_txt_len, - return_tensors="pt", - ).to(image.device) - output_tokens = self.t5_output_tokenizer( - samples["text_output"], - padding="longest", - truncation=True, - max_length=self.max_output_txt_len, - return_tensors="pt", - ).to(image.device) - - encoder_atts = torch.cat([atts_t5, atts_add_feature_llm, input_tokens.attention_mask], dim=1) - - targets = output_tokens.input_ids.masked_fill( - output_tokens.input_ids == self.t5_tokenizer.pad_token_id, -100 - ) - - inputs_embeds = self.t5_model.encoder.embed_tokens(input_tokens.input_ids) - inputs_embeds = torch.cat([inputs_t5, add_feature_llm, inputs_embeds], dim=1) - - if fs_embeds is not None: - inputs_embeds = torch.cat([fs_embeds, inputs_embeds], dim=1) - encoder_atts = torch.cat([fs_atts, encoder_atts], dim=1) - - outputs = self.t5_model( - inputs_embeds=inputs_embeds, - attention_mask=encoder_atts, - decoder_attention_mask=output_tokens.attention_mask, - return_dict=True, - labels=targets, - ) - loss = outputs.loss - - return {"loss": loss} - - def prepare_few_shot_embeds(self, samples): - this_n_fs = random.choices( - list(range(self.num_few_shot_examples + 1)), - weights=[1 - self.few_shot_prob] + [self.few_shot_prob / self.num_few_shot_examples] * self.num_few_shot_examples - )[0] - - if this_n_fs == 0: - return None, None - - images = [] - text_input = [] - for sample in samples: - for n in range(this_n_fs): - images.append(sample['image'][n]) - text_input.append(sample['text_input'][n]) - images = torch.stack(images, dim=0) - - image = images - - with self.maybe_autocast(): - image_embeds = self.ln_vision(self.visual_encoder(image)) - image_atts = torch.ones(image_embeds.size()[:-1], dtype=torch.long).to( - image.device - ) - - query_tokens = self.query_tokens.expand(image_embeds.shape[0], -1, -1) - if self.qformer_text_input: - text_Qformer = self.tokenizer( - text_input, - padding='longest', - truncation=True, - max_length=self.max_txt_len, - return_tensors="pt", - ).to(image.device) - query_atts = torch.ones(query_tokens.size()[:-1], dtype=torch.long).to(image.device) - Qformer_atts = torch.cat([query_atts,text_Qformer.attention_mask],dim=1) - query_output = self.Qformer.bert( - text_Qformer.input_ids, - attention_mask = Qformer_atts, - query_embeds=query_tokens, - encoder_hidden_states=image_embeds, - encoder_attention_mask=image_atts, - return_dict=True, - ) - else: - query_output = self.Qformer.bert( - query_embeds=query_tokens, - encoder_hidden_states=image_embeds, - encoder_attention_mask=image_atts, - return_dict=True, - ) - - inputs_t5 = self.t5_proj(query_output.last_hidden_state[:,:query_tokens.size(1),:]) - atts_t5 = torch.ones(inputs_t5.size()[:-1], dtype=torch.long).to(image.device) - - with self.maybe_autocast(dtype=torch.bfloat16): - input_tokens = self.t5_tokenizer( - text_input, - padding="longest", - truncation=True, - max_length=self.max_txt_len, - return_tensors="pt", - ).to(image.device) - - encoder_atts = torch.cat([atts_t5, input_tokens.attention_mask], dim=1) - - inputs_embeds = self.t5_model.encoder.embed_tokens(input_tokens.input_ids) - inputs_embeds = torch.cat([inputs_t5, inputs_embeds], dim=1) - - if this_n_fs > 1: - encoder_atts = 
encoder_atts.reshape(encoder_atts.size(0) // this_n_fs, encoder_atts.size(1) * this_n_fs) - inputs_embeds = inputs_embeds.reshape(inputs_embeds.size(0) // this_n_fs, inputs_embeds.size(1) * this_n_fs, inputs_embeds.size(2)) - - return inputs_embeds, encoder_atts - - @torch.no_grad() - def generate( - self, - samples, - use_nucleus_sampling=False, - num_beams=5, - max_length=256, - min_length=1, - top_p=0.9, - repetition_penalty=1.5, - length_penalty=1.0, - num_captions=1, - temperature=1, - ): - if "prompt" in samples.keys(): - prompt = samples["prompt"] - else: - prompt = self.prompt - - image = samples["image"] - - bs = image.size(0) - - if isinstance(prompt, str): - prompt = [prompt] * bs - else: - assert len(prompt) == bs, "The number of prompts must be equal to the batch size." - - # For TextCaps - if "ocr_tokens" in samples.keys() and "{}" in prompt[0]: - prompt = [p.format(', '.join(samples['ocr_tokens'][i][:30])) for i, p in enumerate(prompt)] - if 'context' in samples.keys() and samples['context'] != '': - prompt = [f'context: {samples["context"][i]}. {prompt[i]}' for i in range(len(prompt))] - print('using context') - query_tokens = self.query_tokens.expand(bs, -1, -1) - if self.qformer_text_input: - # remove ocr tokens in q_former (for eval textvqa) - # qformer_prompt = prompt - # qformer_prompt = ['Question: ' + qp.split(' Question: ')[1] for qp in qformer_prompt] - - text_Qformer = self.tokenizer( - prompt, - padding='longest', - truncation=True, - max_length=self.max_txt_len, - return_tensors="pt", - ).to(image.device) - query_atts = torch.ones(query_tokens.size()[:-1], dtype=torch.long).to(image.device) - Qformer_atts = torch.cat([query_atts,text_Qformer.attention_mask],dim=1) - - # For video data - if image.dim() == 5: - inputs_t5, atts_t5 = [], [] - add_inputs_llm, add_atts_llm = [], [] - for j in range(image.size(2)): - this_frame = image[:,:,j,:,:] - with self.maybe_autocast(): - frame_embeds = self.ln_vision(self.visual_encoder(this_frame)) - frame_atts = torch.ones(frame_embeds.size()[:-1], dtype=torch.long).to(image.device) - frame_features =self.visual_encoder.get_intermediate_layers(this_frame)[-2] - - frame_features = frame_features[:, 1:] - - add_feature_llm = self.vision_project(frame_features) - atts_add_feature_llm = torch.ones(add_feature_llm.size()[:-1], dtype=torch.long).to(image.device) - - if self.qformer_text_input: - frame_query_output = self.Qformer.bert( - text_Qformer.input_ids, - attention_mask = Qformer_atts, - query_embeds=query_tokens, - encoder_hidden_states=frame_embeds, - encoder_attention_mask=frame_atts, - return_dict=True, - ) - else: - frame_query_output = self.Qformer.bert( - query_embeds=query_tokens, - encoder_hidden_states=frame_embeds, - encoder_attention_mask=frame_atts, - return_dict=True, - ) - - frame_inputs_t5 = self.t5_proj(frame_query_output.last_hidden_state[:,:query_tokens.size(1),:]) - frame_atts_t5 = torch.ones(frame_inputs_t5.size()[:-1], dtype=torch.long).to(image.device) - inputs_t5.append(frame_inputs_t5) - atts_t5.append(frame_atts_t5) - add_inputs_llm.append(add_feature_llm) - add_atts_llm.append(atts_add_feature_llm) - inputs_t5 = torch.cat(inputs_t5, dim=1) - atts_t5 = torch.cat(atts_t5, dim=1) - add_feature_llm = torch.cat(add_inputs_llm, dim=1) - atts_add_feature_llm = torch.cat(add_atts_llm, dim=1) - else: - with self.maybe_autocast(): - image_embeds = self.ln_vision(self.visual_encoder(image)) - image_features= self.visual_encoder.get_intermediate_layers(image)[-2] - image_atts = 
torch.ones(image_embeds.size()[:-1], dtype=torch.long).to(image.device) - - image_features = image_features[:, 1:] - add_feature_llm = self.vision_project(image_features) - atts_add_feature_llm = torch.ones(add_feature_llm.size()[:-1], dtype=torch.long).to(image.device) - if self.qformer_text_input: - query_output = self.Qformer.bert( - text_Qformer.input_ids, - attention_mask=Qformer_atts, - query_embeds=query_tokens, - encoder_hidden_states=image_embeds, - encoder_attention_mask=image_atts, - return_dict=True, - ) - else: - query_output = self.Qformer.bert( - query_embeds=query_tokens, - encoder_hidden_states=image_embeds, - encoder_attention_mask=image_atts, - return_dict=True, - ) - - inputs_t5 = self.t5_proj(query_output.last_hidden_state[:,:query_tokens.size(1),:]) - atts_t5 = torch.ones(inputs_t5.size()[:-1], dtype=torch.long).to(image.device) - - input_tokens = self.t5_tokenizer( - prompt, - padding="longest", - return_tensors="pt" - ).to(image.device) - - encoder_atts = torch.cat([atts_t5, atts_add_feature_llm,input_tokens.attention_mask], dim=1) - - with self.maybe_autocast(dtype=torch.bfloat16): - inputs_embeds = self.t5_model.encoder.embed_tokens(input_tokens.input_ids) - inputs_embeds = torch.cat([inputs_t5, add_feature_llm, inputs_embeds], dim=1) - - outputs = self.t5_model.generate( - inputs_embeds=inputs_embeds, - attention_mask=encoder_atts, - do_sample=use_nucleus_sampling, - top_p=top_p, - temperature=temperature, - num_beams=num_beams, - max_new_tokens=max_length, - min_length=min_length, - repetition_penalty=repetition_penalty, - length_penalty=length_penalty, - num_return_sequences=num_captions, - ) - output_text = self.t5_tokenizer.batch_decode( - outputs, skip_special_tokens=True - ) - - return output_text - - def predict_answers( - self, - samples, - num_beams=5, - inference_method="generate", - max_len=10, - min_len=1, - num_ans_candidates=128, - answer_list=None, - prompt="", - length_penalty=-1, - **kwargs - ): - if isinstance(samples["text_input"], str): - samples["text_input"] = [samples["text_input"]] - - if prompt: - if prompt.count("{}") == 2: - if 'ocr_tokens' in samples: - text_input = [ - prompt.format(', '.join(samples['ocr_tokens'][i][:30]), samples["text_input"][i]) - for i in range(len(samples["text_input"]))] - elif 'choices' in samples: - text_input = [] - for i in range(len(samples["text_input"])): - this_choices = [f"({string.ascii_lowercase[j]}) {ch}" for j, ch in enumerate(samples["choices"][i])] - this_choices = " ".join(this_choices) - text_input.append(prompt.format(samples["text_input"][i], this_choices)) - else: - text_input = [prompt.format(question) for question in samples["text_input"]] - else: - text_input = samples["text_input"] - - samples["prompt"] = text_input - - output_text = self.generate( - samples, - num_beams=num_beams, - max_length=max_len, - min_length=min_len, - length_penalty=length_penalty - ) - - if self._apply_lemmatizer or ("apply_lemmatizer" in samples.keys() and samples["apply_lemmatizer"]): - output_text = self._lemmatize(output_text) - - return output_text - - def predict_class( - self, - samples, - candidates, - n_segments=1, - ): - # If candidates is a list of lists, each sample has its candidates, then we need to iterate one by one - if type(candidates[0]) == list: - results = [] - - for i in range(samples["image"].size(0)): - this_sample = { - "image": samples["image"][i].unsqueeze(0), - "prompt": samples["prompt"][i], - } - - if "text_input" in samples.keys(): - this_sample["text_input"] = 
[samples["text_input"][i]] - - if 'context' in samples.keys(): - this_sample['context'] = [samples["context"][i]] - - if 'history' in samples.keys(): - this_sample['history'] = [samples["history"][i]] - - if 'caption' in samples.keys(): - this_sample['caption'] = [samples["caption"][i]] - - this_result = self._predict_class(this_sample, candidates[i], n_segments) - results.append(this_result) - - try: - results = torch.cat(results, dim=0) - except: - results = [res.tolist()[0] for res in results] - - return results - - return self._predict_class(samples, candidates, n_segments) - - def _predict_class( - self, - samples, - candidates, - n_segments=1, - ): - """ - Args: - samples (dict): A dictionary containing the following keys: - - image (torch.Tensor): A tensor of shape (batch_size, 3, H, W) - - prompt: the instruction - candidates: - (list): A list of candidate class names; - n_segments: - (int): Split the candidates into n_segments and predict one by one. This is useful when the number of candidates is too large. - Returns: - output_class: predicted class index - """ - - image = samples["image"] - prompt = samples["prompt"] - - bs = image.size(0) - - if isinstance(prompt, str): - prompt = [prompt] * bs - else: - assert len(prompt) == bs, "The number of prompts must be equal to the batch size." - - if "text_input" in samples.keys(): - if type(samples["text_input"][0]) == list: - prompt = [prompt[i].format(*samples["text_input"][i]) for i in range(len(prompt))] - else: - prompt = [prompt[i].format(samples["text_input"][i]) for i in range(len(prompt))] - - # scienceqa - if 'context' in samples.keys() and samples['context'] != '': - prompt = [f'context: {samples["context"][i]}. {prompt[i]}' for i in range(len(prompt))] - - # visual dialog - if 'history' in samples.keys() and samples['history'][0] != '': - prompt = [f'dialog history: {samples["history"][i]}\n{prompt[i]}' for i in range(len(prompt))] - - if 'caption' in samples.keys() and samples['caption'][0] != '': - prompt = [f'This image has the caption "{samples["caption"][i]}". 
{prompt[i]}' for i in range(len(prompt))] - - query_tokens = self.query_tokens.expand(bs, -1, -1) - if self.qformer_text_input: - text_Qformer = self.tokenizer( - prompt, - padding='longest', - truncation=True, - max_length=self.max_txt_len, - return_tensors="pt" - ).to(image.device) - query_atts = torch.ones(query_tokens.size()[:-1], dtype=torch.long).to(image.device) - Qformer_atts = torch.cat([query_atts,text_Qformer.attention_mask], dim=1) - - if image.dim() == 5: - inputs_t5, atts_t5 = [], [] - add_inputs_llm, add_atts_llm = [], [] - for j in range(image.size(2)): - this_frame = image[:,:,j,:,:] - with self.maybe_autocast(): - frame_embeds = self.ln_vision(self.visual_encoder(this_frame)) - frame_atts = torch.ones(frame_embeds.size()[:-1], dtype=torch.long).to(image.device) - frame_features =self.visual_encoder.get_intermediate_layers(this_frame)[-2] - - frame_features = frame_features[:, 1:] - - add_feature_llm = self.vision_project(frame_features) - atts_add_feature_llm = torch.ones(add_feature_llm.size()[:-1], dtype=torch.long).to(image.device) - if self.qformer_text_input: - frame_query_output = self.Qformer.bert( - text_Qformer.input_ids, - attention_mask=Qformer_atts, - query_embeds=query_tokens, - encoder_hidden_states=frame_embeds, - encoder_attention_mask=frame_atts, - return_dict=True, - ) - else: - frame_query_output = self.Qformer.bert( - query_embeds=query_tokens, - encoder_hidden_states=frame_embeds, - encoder_attention_mask=frame_atts, - return_dict=True, - ) - - frame_inputs_t5 = self.t5_proj(frame_query_output.last_hidden_state[:,:query_tokens.size(1),:]) - frame_atts_t5 = torch.ones(frame_inputs_t5.size()[:-1], dtype=torch.long).to(image.device) - inputs_t5.append(frame_inputs_t5) - atts_t5.append(frame_atts_t5) - add_inputs_llm.append(add_feature_llm) - add_atts_llm.append(atts_add_feature_llm) - inputs_t5 = torch.cat(inputs_t5, dim=1) - atts_t5 = torch.cat(atts_t5, dim=1) - add_feature_llm = torch.cat(add_inputs_llm, dim=1) - atts_add_feature_llm = torch.cat(add_atts_llm, dim=1) - else: - with self.maybe_autocast(): - image_embeds = self.ln_vision(self.visual_encoder(image)) - image_features= self.visual_encoder.get_intermediate_layers(image)[-2] - - image_atts = torch.ones(image_embeds.size()[:-1], dtype=torch.long).to(image.device) - - image_features = image_features[:, 1:] - add_feature_llm = self.vision_project(image_features) - atts_add_feature_llm = torch.ones(add_feature_llm.size()[:-1], dtype=torch.long).to(image.device) - - if self.qformer_text_input: - query_output = self.Qformer.bert( - text_Qformer.input_ids, - attention_mask=Qformer_atts, - query_embeds=query_tokens, - encoder_hidden_states=image_embeds, - encoder_attention_mask=image_atts, - return_dict=True, - ) - else: - query_output = self.Qformer.bert( - query_embeds=query_tokens, - encoder_hidden_states=image_embeds, - encoder_attention_mask=image_atts, - return_dict=True, - ) - - inputs_t5 = self.t5_proj(query_output.last_hidden_state[:,:query_tokens.size(1),:]) - atts_t5 = torch.ones(inputs_t5.size()[:-1], dtype=torch.long).to(image.device) - - input_tokens = self.t5_tokenizer( - prompt, padding="longest", return_tensors="pt" - ).to(image.device) - output_tokens = self.t5_tokenizer( - candidates, padding="longest", return_tensors="pt" - ).to(image.device) - - encoder_atts = torch.cat([atts_t5, atts_add_feature_llm, input_tokens.attention_mask], dim=1) - - n_cands = len(candidates) - - with self.maybe_autocast(dtype=torch.bfloat16): - inputs_embeds = 
self.t5_model.encoder.embed_tokens(input_tokens.input_ids) - inputs_embeds = torch.cat([inputs_t5,add_feature_llm, inputs_embeds], dim=1) - - encoder_outputs = self.t5_model.encoder( - inputs_embeds=inputs_embeds, - attention_mask=encoder_atts, - ) - - all_losses = [] - for n in range(n_segments): - seg_len = n_cands // n_segments - if n == (n_segments - 1): - seg_len = n_cands - seg_len * (n_segments - 1) - - # this_encoder_outputs = copy.deepcopy(encoder_outputs) - this_encoder_outputs = BaseModelOutput( - last_hidden_state=encoder_outputs[0].clone(), - ) - - this_encoder_outputs['last_hidden_state'] = this_encoder_outputs[0].repeat_interleave(seg_len, dim=0) - this_encoder_atts = encoder_atts.repeat_interleave(seg_len, dim=0) - - start_i = n * (n_cands // n_segments) - end_i = start_i + seg_len - this_output_tokens_ids = output_tokens.input_ids[start_i:end_i].repeat(bs, 1) - this_output_tokens_atts = output_tokens.attention_mask[start_i:end_i].repeat(bs, 1) - - this_targets = this_output_tokens_ids.masked_fill(this_output_tokens_ids == self.t5_tokenizer.pad_token_id, -100) - - outputs = self.t5_model( - encoder_outputs=this_encoder_outputs, - attention_mask=this_encoder_atts, - decoder_attention_mask=this_output_tokens_atts, - return_dict=True, - labels=this_targets, - reduction="none", - ) - loss = outputs.loss - - loss = loss.reshape(bs, seg_len) - # output_class_ranks = torch.argsort(loss, dim=-1) - all_losses.append(loss) - - all_losses = torch.cat(all_losses, dim=-1) - output_class_ranks = torch.argsort(all_losses, dim=-1) - - # encoder_outputs['last_hidden_state'] = encoder_outputs[0].repeat_interleave(n_cands, dim=0) - # encoder_atts = encoder_atts.repeat_interleave(n_cands, dim=0) - # output_tokens.input_ids = output_tokens.input_ids.repeat(bs, 1) - # output_tokens.attention_mask = output_tokens.attention_mask.repeat(bs, 1) - - # # compute the LM loss for each candidate (sum logprob across all tokens) and select the highest - # targets = output_tokens.input_ids.masked_fill(output_tokens.input_ids == self.t5_tokenizer.pad_token_id, -100) - - # outputs = self.t5_model( - # encoder_outputs=encoder_outputs, - # attention_mask=encoder_atts, - # decoder_attention_mask=output_tokens.attention_mask, - # return_dict=True, - # labels=targets, - # reduction="none", - # ) - # loss = outputs.loss - - # loss = loss.reshape(bs, n_cands) - # output_class_ranks = torch.argsort(loss, dim=-1) # (bs, num_candidates) - - return output_class_ranks - - def _lemmatize(self, answers): - def apply(answer): - doc = self.lemmatizer(answer) - - words = [] - for token in doc: - if token.pos_ in ["NOUN", "VERB"]: - words.append(token.lemma_) - else: - words.append(token.text) - answer = " ".join(words) - - return answer - - return [apply(answer) for answer in answers] - - @property - def lemmatizer(self): - if self._lemmatizer is None: - try: - import spacy - - self._lemmatizer = spacy.load("en_core_web_sm") - except ImportError: - logging.error( - """ - Please install spacy and en_core_web_sm model to apply lemmatization. 
- python -m spacy download en_core_web_sm - OR - import spacy.cli - spacy.cli.download("en_core_web_sm") - """ - ) - exit(1) - - return self._lemmatizer - - @classmethod - def from_config(cls, cfg): - vit_model = cfg.get("vit_model", "eva_clip_g") - img_size = cfg.get("image_size") - num_query_token = cfg.get("num_query_token") - t5_model = cfg.get("t5_model") - - drop_path_rate = cfg.get("drop_path_rate", 0) - use_grad_checkpoint = cfg.get("use_grad_checkpoint", False) - vit_precision = cfg.get("vit_precision", "fp16") - freeze_vit = cfg.get("freeze_vit", True) - - prompt = cfg.get("prompt", "") - max_txt_len = cfg.get("max_txt_len", 128) - max_output_txt_len = cfg.get("max_output_txt_len", 256) - - apply_lemmatizer = cfg.get("apply_lemmatizer", False) - - num_few_shot_examples = cfg.get("num_few_shot_examples", 0) - few_shot_prob = cfg.get("few_shot_prob", 0.0) - - qformer_text_input = cfg.get("qformer_text_input", True) - - model = cls( - vit_model=vit_model, - img_size=img_size, - drop_path_rate=drop_path_rate, - use_grad_checkpoint=use_grad_checkpoint, - vit_precision=vit_precision, - freeze_vit=freeze_vit, - num_query_token=num_query_token, - t5_model=t5_model, - prompt=prompt, - max_txt_len=max_txt_len, - max_output_txt_len=max_output_txt_len, - apply_lemmatizer=apply_lemmatizer, - num_few_shot_examples=num_few_shot_examples, - few_shot_prob=few_shot_prob, - qformer_text_input=qformer_text_input, - ) - - # if qformer_text_input: - # # Hard-coded to load from BLIP-2 stage-1 pre-trained model (not ideal) - # model.load_from_pretrained( - # url_or_filename="https://storage.googleapis.com/sfr-vision-language-research/LAVIS/models/BLIP2/blip2_pretrained.pth" - # ) - - model.load_checkpoint_from_config(cfg) - - return model diff --git a/spaces/mshukor/UnIVAL/fairseq/fairseq/models/nat/__init__.py b/spaces/mshukor/UnIVAL/fairseq/fairseq/models/nat/__init__.py deleted file mode 100644 index 05fe822487c3bcde8346648d5826f1669c6bc1ca..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/fairseq/fairseq/models/nat/__init__.py +++ /dev/null @@ -1,13 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
-"""isort:skip_file""" - -from .fairseq_nat_model import * -from .nonautoregressive_transformer import * -from .nat_crf_transformer import * -from .iterative_nonautoregressive_transformer import * -from .cmlm_transformer import * -from .levenshtein_transformer import * -from .insertion_transformer import * diff --git a/spaces/mshukor/UnIVAL/slurm_adastra/averaging/fusing/scaling_best/video/video_caption_stage_1_ofaplus_base_pretrain_s2_hsep1_shuf_el_db_da_initavgvideocaptionvqa.sh b/spaces/mshukor/UnIVAL/slurm_adastra/averaging/fusing/scaling_best/video/video_caption_stage_1_ofaplus_base_pretrain_s2_hsep1_shuf_el_db_da_initavgvideocaptionvqa.sh deleted file mode 100644 index ddc86c37135f9abeade12e3d5ffb556a21e243f7..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/slurm_adastra/averaging/fusing/scaling_best/video/video_caption_stage_1_ofaplus_base_pretrain_s2_hsep1_shuf_el_db_da_initavgvideocaptionvqa.sh +++ /dev/null @@ -1,30 +0,0 @@ -#!/bin/bash - -#SBATCH --job-name=video_caption_stage_1_ofaplus_base_pretrain_s2_hsep1_shuf_el_db_da_initavgvideocaptionvqa -#SBATCH --nodes=1 -#SBATCH --ntasks=1 -#SBATCH --gpus=8 -#SBATCH --threads-per-core=2 -#SBATCH --gpu-bind=closest -####SBATCH --nodelist=x1004c4s2b0n0 -#SBATCH --time=24:00:00 -#SBATCH -C MI250 -#SBATCH -A gda2204 -#SBATCH --mail-type=END,FAIL -#SBATCH --output=/lus/home/NAT/gda2204/mshukor/logs/slurm/video_caption_stage_1_ofaplus_base_pretrain_s2_hsep1_shuf_el_db_da_initavgvideocaptionvqa.out -#SBATCH --exclusive -#SBATCH --mail-user=mustafa.shukor@isir.upmc.fr - - -cd /lus/home/NAT/gda2204/mshukor/code/ofa_ours/run_scripts -source /lus/home/NAT/gda2204/mshukor/.bashrc - -conda activate main - - -rm core-python3* - - -srun -l -N 1 -n 1 -c 128 --gpus=8 bash averaging/fusing/scaling_best/video/video_caption_stage_1_ofaplus_base_pretrain_s2_hsep1_shuf_el_db_da_initavgvideocaptionvqa.sh - - diff --git a/spaces/msmilauer/AutoGPT-duplicated2/autogpt/commands/google_search.py b/spaces/msmilauer/AutoGPT-duplicated2/autogpt/commands/google_search.py deleted file mode 100644 index 7d38ce7568d2de207d521b077cfebd72527c9795..0000000000000000000000000000000000000000 --- a/spaces/msmilauer/AutoGPT-duplicated2/autogpt/commands/google_search.py +++ /dev/null @@ -1,87 +0,0 @@ -"""Google search command for Autogpt.""" -from __future__ import annotations - -import json - -from duckduckgo_search import ddg - -from autogpt.config import Config - -CFG = Config() - - -def google_search(query: str, num_results: int = 8) -> str: - """Return the results of a Google search - - Args: - query (str): The search query. - num_results (int): The number of results to return. - - Returns: - str: The results of the search. - """ - search_results = [] - if not query: - return json.dumps(search_results) - - results = ddg(query, max_results=num_results) - if not results: - return json.dumps(search_results) - - for j in results: - search_results.append(j) - - return json.dumps(search_results, ensure_ascii=False, indent=4) - - -def google_official_search(query: str, num_results: int = 8) -> str | list[str]: - """Return the results of a Google search using the official Google API - - Args: - query (str): The search query. - num_results (int): The number of results to return. - - Returns: - str: The results of the search. 
- """ - - from googleapiclient.discovery import build - from googleapiclient.errors import HttpError - - try: - # Get the Google API key and Custom Search Engine ID from the config file - api_key = CFG.google_api_key - custom_search_engine_id = CFG.custom_search_engine_id - - # Initialize the Custom Search API service - service = build("customsearch", "v1", developerKey=api_key) - - # Send the search query and retrieve the results - result = ( - service.cse() - .list(q=query, cx=custom_search_engine_id, num=num_results) - .execute() - ) - - # Extract the search result items from the response - search_results = result.get("items", []) - - # Create a list of only the URLs from the search results - search_results_links = [item["link"] for item in search_results] - - except HttpError as e: - # Handle errors in the API call - error_details = json.loads(e.content.decode()) - - # Check if the error is related to an invalid or missing API key - if error_details.get("error", {}).get( - "code" - ) == 403 and "invalid API key" in error_details.get("error", {}).get( - "message", "" - ): - return "Error: The provided Google API key is invalid or missing." - else: - return f"Error: {e}" - - # Return the list of search result URLs - return search_results_links diff --git a/spaces/muellerzr/accelerate-presentation/Accelerate_files/libs/revealjs/plugin/math/math.js b/spaces/muellerzr/accelerate-presentation/Accelerate_files/libs/revealjs/plugin/math/math.js deleted file mode 100644 index 4ad52340281307fe2171a63952543933c2cfe0d5..0000000000000000000000000000000000000000 --- a/spaces/muellerzr/accelerate-presentation/Accelerate_files/libs/revealjs/plugin/math/math.js +++ /dev/null @@ -1 +0,0 @@ -!function(t,e){"object"==typeof exports&&"undefined"!=typeof module?module.exports=e():"function"==typeof define&&define.amd?define(e):(t="undefined"!=typeof globalThis?globalThis:t||self).RevealMath=e()}(this,(function(){"use strict";var t="undefined"!=typeof globalThis?globalThis:"undefined"!=typeof window?window:"undefined"!=typeof global?global:"undefined"!=typeof self?self:{},e=function(t){return t&&t.Math==Math&&t},n=e("object"==typeof globalThis&&globalThis)||e("object"==typeof window&&window)||e("object"==typeof self&&self)||e("object"==typeof t&&t)||function(){return this}()||Function("return this")(),r={},o=function(t){try{return!!t()}catch(t){return!0}},i=!o((function(){return 7!=Object.defineProperty({},1,{get:function(){return 7}})[1]})),a={},c={}.propertyIsEnumerable,u=Object.getOwnPropertyDescriptor,f=u&&!c.call({1:2},1);a.f=f?function(t){var e=u(this,t);return!!e&&e.enumerable}:c;var s=function(t,e){return{enumerable:!(1&t),configurable:!(2&t),writable:!(4&t),value:e}},l={}.toString,p=function(t){return l.call(t).slice(8,-1)},h=p,d="".split,v=o((function(){return!Object("z").propertyIsEnumerable(0)}))?function(t){return"String"==h(t)?d.call(t,""):Object(t)}:Object,y=function(t){if(null==t)throw TypeError("Can't call method on "+t);return t},g=v,m=y,b=function(t){return g(m(t))},w=function(t){return"object"==typeof t?null!==t:"function"==typeof t},j=w,x=function(t,e){if(!j(t))return t;var n,r;if(e&&"function"==typeof(n=t.toString)&&!j(r=n.call(t)))return r;if("function"==typeof(n=t.valueOf)&&!j(r=n.call(t)))return r;if(!e&&"function"==typeof(n=t.toString)&&!j(r=n.call(t)))return r;throw TypeError("Can't convert object to primitive value")},O=y,E=function(t){return Object(O(t))},S=E,T={}.hasOwnProperty,P=function(t,e){return 
T.call(S(t),e)},M=w,k=n.document,L=M(k)&&M(k.createElement),_=function(t){return L?k.createElement(t):{}},A=_,I=!i&&!o((function(){return 7!=Object.defineProperty(A("div"),"a",{get:function(){return 7}}).a})),R=i,C=a,N=s,F=b,J=x,D=P,$=I,G=Object.getOwnPropertyDescriptor;r.f=R?G:function(t,e){if(t=F(t),e=J(e,!0),$)try{return G(t,e)}catch(t){}if(D(t,e))return N(!C.f.call(t,e),t[e])};var H={},z=w,W=function(t){if(!z(t))throw TypeError(String(t)+" is not an object");return t},q=i,U=I,K=W,Q=x,X=Object.defineProperty;H.f=q?X:function(t,e,n){if(K(t),e=Q(e,!0),K(n),U)try{return X(t,e,n)}catch(t){}if("get"in n||"set"in n)throw TypeError("Accessors not supported");return"value"in n&&(t[e]=n.value),t};var Y=H,B=s,V=i?function(t,e,n){return Y.f(t,e,B(1,n))}:function(t,e,n){return t[e]=n,t},Z={exports:{}},tt=n,et=V,nt=function(t,e){try{et(tt,t,e)}catch(n){tt[t]=e}return e},rt=nt,ot="__core-js_shared__",it=n[ot]||rt(ot,{}),at=it,ct=Function.toString;"function"!=typeof at.inspectSource&&(at.inspectSource=function(t){return ct.call(t)});var ut=at.inspectSource,ft=ut,st=n.WeakMap,lt="function"==typeof st&&/native code/.test(ft(st)),pt={exports:{}},ht=it;(pt.exports=function(t,e){return ht[t]||(ht[t]=void 0!==e?e:{})})("versions",[]).push({version:"3.12.1",mode:"global",copyright:"© 2021 Denis Pushkarev (zloirock.ru)"});var dt,vt,yt,gt=0,mt=Math.random(),bt=function(t){return"Symbol("+String(void 0===t?"":t)+")_"+(++gt+mt).toString(36)},wt=pt.exports,jt=bt,xt=wt("keys"),Ot=function(t){return xt[t]||(xt[t]=jt(t))},Et={},St=lt,Tt=w,Pt=V,Mt=P,kt=it,Lt=Ot,_t=Et,At="Object already initialized",It=n.WeakMap;if(St||kt.state){var Rt=kt.state||(kt.state=new It),Ct=Rt.get,Nt=Rt.has,Ft=Rt.set;dt=function(t,e){if(Nt.call(Rt,t))throw new TypeError(At);return e.facade=t,Ft.call(Rt,t,e),e},vt=function(t){return Ct.call(Rt,t)||{}},yt=function(t){return Nt.call(Rt,t)}}else{var Jt=Lt("state");_t[Jt]=!0,dt=function(t,e){if(Mt(t,Jt))throw new TypeError(At);return e.facade=t,Pt(t,Jt,e),e},vt=function(t){return Mt(t,Jt)?t[Jt]:{}},yt=function(t){return Mt(t,Jt)}}var Dt={set:dt,get:vt,has:yt,enforce:function(t){return yt(t)?vt(t):dt(t,{})},getterFor:function(t){return function(e){var n;if(!Tt(e)||(n=vt(e)).type!==t)throw TypeError("Incompatible receiver, "+t+" required");return n}}},$t=n,Gt=V,Ht=P,zt=nt,Wt=ut,qt=Dt.get,Ut=Dt.enforce,Kt=String(String).split("String");(Z.exports=function(t,e,n,r){var o,i=!!r&&!!r.unsafe,a=!!r&&!!r.enumerable,c=!!r&&!!r.noTargetGet;"function"==typeof n&&("string"!=typeof e||Ht(n,"name")||Gt(n,"name",e),(o=Ut(n)).source||(o.source=Kt.join("string"==typeof e?e:""))),t!==$t?(i?!c&&t[e]&&(a=!0):delete t[e],a?t[e]=n:Gt(t,e,n)):a?t[e]=n:zt(e,n)})(Function.prototype,"toString",(function(){return"function"==typeof this&&qt(this).source||Wt(this)}));var Qt=n,Xt=n,Yt=function(t){return"function"==typeof t?t:void 0},Bt=function(t,e){return arguments.length<2?Yt(Qt[t])||Yt(Xt[t]):Qt[t]&&Qt[t][e]||Xt[t]&&Xt[t][e]},Vt={},Zt=Math.ceil,te=Math.floor,ee=function(t){return isNaN(t=+t)?0:(t>0?te:Zt)(t)},ne=ee,re=Math.min,oe=function(t){return t>0?re(ne(t),9007199254740991):0},ie=ee,ae=Math.max,ce=Math.min,ue=b,fe=oe,se=function(t,e){var n=ie(t);return n<0?ae(n+e,0):ce(n,e)},le=function(t){return function(e,n,r){var o,i=ue(e),a=fe(i.length),c=se(r,a);if(t&&n!=n){for(;a>c;)if((o=i[c++])!=o)return!0}else for(;a>c;c++)if((t||c in i)&&i[c]===n)return t||c||0;return!t&&-1}},pe={includes:le(!0),indexOf:le(!1)},he=P,de=b,ve=pe.indexOf,ye=Et,ge=function(t,e){var n,r=de(t),o=0,i=[];for(n in 
r)!he(ye,n)&&he(r,n)&&i.push(n);for(;e.length>o;)he(r,n=e[o++])&&(~ve(i,n)||i.push(n));return i},me=["constructor","hasOwnProperty","isPrototypeOf","propertyIsEnumerable","toLocaleString","toString","valueOf"],be=ge,we=me.concat("length","prototype");Vt.f=Object.getOwnPropertyNames||function(t){return be(t,we)};var je={};je.f=Object.getOwnPropertySymbols;var xe=Vt,Oe=je,Ee=W,Se=Bt("Reflect","ownKeys")||function(t){var e=xe.f(Ee(t)),n=Oe.f;return n?e.concat(n(t)):e},Te=P,Pe=Se,Me=r,ke=H,Le=o,_e=/#|\.prototype\./,Ae=function(t,e){var n=Re[Ie(t)];return n==Ne||n!=Ce&&("function"==typeof e?Le(e):!!e)},Ie=Ae.normalize=function(t){return String(t).replace(_e,".").toLowerCase()},Re=Ae.data={},Ce=Ae.NATIVE="N",Ne=Ae.POLYFILL="P",Fe=Ae,Je=n,De=r.f,$e=V,Ge=Z.exports,He=nt,ze=function(t,e){for(var n=Pe(e),r=ke.f,o=Me.f,i=0;io;)for(var c,u=en(arguments[o++]),f=i?Be(u).concat(i(u)):Be(u),s=f.length,l=0;s>l;)c=f[l++],Xe&&!a.call(u,c)||(n[c]=u[c]);return n}:nn;function an(t,e){var n=Object.keys(t);if(Object.getOwnPropertySymbols){var r=Object.getOwnPropertySymbols(t);e&&(r=r.filter((function(e){return Object.getOwnPropertyDescriptor(t,e).enumerable}))),n.push.apply(n,r)}return n}function cn(t){for(var e=1;e=0||(o[n]=t[n]);return o}(t,e);if(Object.getOwnPropertySymbols){var i=Object.getOwnPropertySymbols(t);for(r=0;r=0||Object.prototype.propertyIsEnumerable.call(t,n)&&(o[n]=t[n])}return o}function pn(t,e){(null==e||e>t.length)&&(e=t.length);for(var n=0,r=new Array(e);n=t.length?{done:!0}:{done:!1,value:t[r++]}},e:function(t){throw t},f:o}}throw new TypeError("Invalid attempt to iterate non-iterable instance.\nIn order to be iterable, non-array objects must have a [Symbol.iterator]() method.")}var i,a=!0,c=!1;return{s:function(){n=n.call(t)},n:function(){var t=n.next();return a=t.done,t},e:function(t){c=!0,i=t},f:function(){try{a||null==n.return||n.return()}finally{if(c)throw i}}}}qe({target:"Object",stat:!0,forced:Object.assign!==on},{assign:on});!function(t){var e=function(t){var e,n=Object.prototype,r=n.hasOwnProperty,o="function"==typeof Symbol?Symbol:{},i=o.iterator||"@@iterator",a=o.asyncIterator||"@@asyncIterator",c=o.toStringTag||"@@toStringTag";function u(t,e,n){return Object.defineProperty(t,e,{value:n,enumerable:!0,configurable:!0,writable:!0}),t[e]}try{u({},"")}catch(t){u=function(t,e,n){return t[e]=n}}function f(t,e,n,r){var o=e&&e.prototype instanceof y?e:y,i=Object.create(o.prototype),a=new M(r||[]);return i._invoke=function(t,e,n){var r=l;return function(o,i){if(r===h)throw new Error("Generator is already running");if(r===d){if("throw"===o)throw i;return L()}for(n.method=o,n.arg=i;;){var a=n.delegate;if(a){var c=S(a,n);if(c){if(c===v)continue;return c}}if("next"===n.method)n.sent=n._sent=n.arg;else if("throw"===n.method){if(r===l)throw r=d,n.arg;n.dispatchException(n.arg)}else"return"===n.method&&n.abrupt("return",n.arg);r=h;var u=s(t,e,n);if("normal"===u.type){if(r=n.done?d:p,u.arg===v)continue;return{value:u.arg,done:n.done}}"throw"===u.type&&(r=d,n.method="throw",n.arg=u.arg)}}}(t,n,a),i}function s(t,e,n){try{return{type:"normal",arg:t.call(e,n)}}catch(t){return{type:"throw",arg:t}}}t.wrap=f;var l="suspendedStart",p="suspendedYield",h="executing",d="completed",v={};function y(){}function g(){}function m(){}var b={};b[i]=function(){return this};var w=Object.getPrototypeOf,j=w&&w(w(k([])));j&&j!==n&&r.call(j,i)&&(b=j);var x=m.prototype=y.prototype=Object.create(b);function O(t){["next","throw","return"].forEach((function(e){u(t,e,(function(t){return this._invoke(e,t)}))}))}function 
E(t,e){function n(o,i,a,c){var u=s(t[o],t,i);if("throw"!==u.type){var f=u.arg,l=f.value;return l&&"object"==typeof l&&r.call(l,"__await")?e.resolve(l.__await).then((function(t){n("next",t,a,c)}),(function(t){n("throw",t,a,c)})):e.resolve(l).then((function(t){f.value=t,a(f)}),(function(t){return n("throw",t,a,c)}))}c(u.arg)}var o;this._invoke=function(t,r){function i(){return new e((function(e,o){n(t,r,e,o)}))}return o=o?o.then(i,i):i()}}function S(t,n){var r=t.iterator[n.method];if(r===e){if(n.delegate=null,"throw"===n.method){if(t.iterator.return&&(n.method="return",n.arg=e,S(t,n),"throw"===n.method))return v;n.method="throw",n.arg=new TypeError("The iterator does not provide a 'throw' method")}return v}var o=s(r,t.iterator,n.arg);if("throw"===o.type)return n.method="throw",n.arg=o.arg,n.delegate=null,v;var i=o.arg;return i?i.done?(n[t.resultName]=i.value,n.next=t.nextLoc,"return"!==n.method&&(n.method="next",n.arg=e),n.delegate=null,v):i:(n.method="throw",n.arg=new TypeError("iterator result is not an object"),n.delegate=null,v)}function T(t){var e={tryLoc:t[0]};1 in t&&(e.catchLoc=t[1]),2 in t&&(e.finallyLoc=t[2],e.afterLoc=t[3]),this.tryEntries.push(e)}function P(t){var e=t.completion||{};e.type="normal",delete e.arg,t.completion=e}function M(t){this.tryEntries=[{tryLoc:"root"}],t.forEach(T,this),this.reset(!0)}function k(t){if(t){var n=t[i];if(n)return n.call(t);if("function"==typeof t.next)return t;if(!isNaN(t.length)){var o=-1,a=function n(){for(;++o=0;--i){var a=this.tryEntries[i],c=a.completion;if("root"===a.tryLoc)return o("end");if(a.tryLoc<=this.prev){var u=r.call(a,"catchLoc"),f=r.call(a,"finallyLoc");if(u&&f){if(this.prev=0;--n){var o=this.tryEntries[n];if(o.tryLoc<=this.prev&&r.call(o,"finallyLoc")&&this.prev=0;--e){var n=this.tryEntries[e];if(n.finallyLoc===t)return this.complete(n.completion,n.afterLoc),P(n),v}},catch:function(t){for(var e=this.tryEntries.length-1;e>=0;--e){var n=this.tryEntries[e];if(n.tryLoc===t){var r=n.completion;if("throw"===r.type){var o=r.arg;P(n)}return o}}throw new Error("illegal catch attempt")},delegateYield:function(t,n,r){return this.delegate={iterator:k(t),resultName:n,nextLoc:r},"next"===this.method&&(this.arg=e),v}},t}(t.exports);try{regeneratorRuntime=e}catch(t){Function("r","regeneratorRuntime = r")(e)}}({exports:{}});var dn,vn,yn=Bt("navigator","userAgent")||"",gn=yn,mn=n.process,bn=mn&&mn.versions,wn=bn&&bn.v8;wn?vn=(dn=wn.split("."))[0]<4?1:dn[0]+dn[1]:gn&&(!(dn=gn.match(/Edge\/(\d+)/))||dn[1]>=74)&&(dn=gn.match(/Chrome\/(\d+)/))&&(vn=dn[1]);var jn=vn&&+vn,xn=jn,On=o,En=!!Object.getOwnPropertySymbols&&!On((function(){return!String(Symbol())||!Symbol.sham&&xn&&xn<41})),Sn=En&&!Symbol.sham&&"symbol"==typeof Symbol.iterator,Tn=n,Pn=pt.exports,Mn=P,kn=bt,Ln=En,_n=Sn,An=Pn("wks"),In=Tn.Symbol,Rn=_n?In:In&&In.withoutSetter||kn,Cn=function(t){return Mn(An,t)&&(Ln||"string"==typeof An[t])||(Ln&&Mn(In,t)?An[t]=In[t]:An[t]=Rn("Symbol."+t)),An[t]},Nn={};Nn[Cn("toStringTag")]="z";var Fn="[object z]"===String(Nn),Jn=Fn,Dn=p,$n=Cn("toStringTag"),Gn="Arguments"==Dn(function(){return arguments}()),Hn=Jn?Dn:function(t){var e,n,r;return void 0===t?"Undefined":null===t?"Null":"string"==typeof(n=function(t,e){try{return t[e]}catch(t){}}(e=Object(t),$n))?n:Gn?Dn(e):"Object"==(r=Dn(e))&&"function"==typeof e.callee?"Arguments":r},zn=Hn,Wn=Fn?{}.toString:function(){return"[object "+zn(this)+"]"},qn=Fn,Un=Z.exports,Kn=Wn;qn||Un(Object.prototype,"toString",Kn,{unsafe:!0});var Qn=n.Promise,Xn=Z.exports,Yn=w,Bn=W,Vn=function(t){if(!Yn(t)&&null!==t)throw 
TypeError("Can't set "+String(t)+" as a prototype");return t},Zn=Object.setPrototypeOf||("__proto__"in{}?function(){var t,e=!1,n={};try{(t=Object.getOwnPropertyDescriptor(Object.prototype,"__proto__").set).call(n,[]),e=n instanceof Array}catch(t){}return function(n,r){return Bn(n),Vn(r),e?t.call(n,r):n.__proto__=r,n}}():void 0),tr=H.f,er=P,nr=Cn("toStringTag"),rr=Bt,or=H,ir=i,ar=Cn("species"),cr=function(t){if("function"!=typeof t)throw TypeError(String(t)+" is not a function");return t},ur={},fr=ur,sr=Cn("iterator"),lr=Array.prototype,pr=cr,hr=function(t,e,n){if(pr(t),void 0===e)return t;switch(n){case 0:return function(){return t.call(e)};case 1:return function(n){return t.call(e,n)};case 2:return function(n,r){return t.call(e,n,r)};case 3:return function(n,r,o){return t.call(e,n,r,o)}}return function(){return t.apply(e,arguments)}},dr=Hn,vr=ur,yr=Cn("iterator"),gr=W,mr=W,br=function(t){return void 0!==t&&(fr.Array===t||lr[sr]===t)},wr=oe,jr=hr,xr=function(t){if(null!=t)return t[yr]||t["@@iterator"]||vr[dr(t)]},Or=function(t){var e=t.return;if(void 0!==e)return gr(e.call(t)).value},Er=function(t,e){this.stopped=t,this.result=e},Sr=Cn("iterator"),Tr=!1;try{var Pr=0,Mr={next:function(){return{done:!!Pr++}},return:function(){Tr=!0}};Mr[Sr]=function(){return this},Array.from(Mr,(function(){throw 2}))}catch(t){}var kr,Lr,_r,Ar=W,Ir=cr,Rr=Cn("species"),Cr=Bt("document","documentElement"),Nr=/(?:iphone|ipod|ipad).*applewebkit/i.test(yn),Fr="process"==p(n.process),Jr=n,Dr=o,$r=hr,Gr=Cr,Hr=_,zr=Nr,Wr=Fr,qr=Jr.location,Ur=Jr.setImmediate,Kr=Jr.clearImmediate,Qr=Jr.process,Xr=Jr.MessageChannel,Yr=Jr.Dispatch,Br=0,Vr={},Zr="onreadystatechange",to=function(t){if(Vr.hasOwnProperty(t)){var e=Vr[t];delete Vr[t],e()}},eo=function(t){return function(){to(t)}},no=function(t){to(t.data)},ro=function(t){Jr.postMessage(t+"",qr.protocol+"//"+qr.host)};Ur&&Kr||(Ur=function(t){for(var e=[],n=1;arguments.length>n;)e.push(arguments[n++]);return Vr[++Br]=function(){("function"==typeof t?t:Function(t)).apply(void 0,e)},kr(Br),Br},Kr=function(t){delete Vr[t]},Wr?kr=function(t){Qr.nextTick(eo(t))}:Yr&&Yr.now?kr=function(t){Yr.now(eo(t))}:Xr&&!zr?(_r=(Lr=new Xr).port2,Lr.port1.onmessage=no,kr=$r(_r.postMessage,_r,1)):Jr.addEventListener&&"function"==typeof postMessage&&!Jr.importScripts&&qr&&"file:"!==qr.protocol&&!Dr(ro)?(kr=ro,Jr.addEventListener("message",no,!1)):kr=Zr in Hr("script")?function(t){Gr.appendChild(Hr("script")).onreadystatechange=function(){Gr.removeChild(this),to(t)}}:function(t){setTimeout(eo(t),0)});var oo,io,ao,co,uo,fo,so,lo,po={set:Ur,clear:Kr},ho=/web0s(?!.*chrome)/i.test(yn),vo=n,yo=r.f,go=po.set,mo=Nr,bo=ho,wo=Fr,jo=vo.MutationObserver||vo.WebKitMutationObserver,xo=vo.document,Oo=vo.process,Eo=vo.Promise,So=yo(vo,"queueMicrotask"),To=So&&So.value;To||(oo=function(){var t,e;for(wo&&(t=Oo.domain)&&t.exit();io;){e=io.fn,io=io.next;try{e()}catch(t){throw io?co():ao=void 0,t}}ao=void 0,t&&t.enter()},mo||wo||bo||!jo||!xo?Eo&&Eo.resolve?((so=Eo.resolve(void 0)).constructor=Eo,lo=so.then,co=function(){lo.call(so,oo)}):co=wo?function(){Oo.nextTick(oo)}:function(){go.call(vo,oo)}:(uo=!0,fo=xo.createTextNode(""),new jo(oo).observe(fo,{characterData:!0}),co=function(){fo.data=uo=!uo}));var Po=To||function(t){var e={fn:t,next:void 0};ao&&(ao.next=e),io||(io=e,co()),ao=e},Mo={},ko=cr,Lo=function(t){var e,n;this.promise=new t((function(t,r){if(void 0!==e||void 0!==n)throw TypeError("Bad Promise constructor");e=t,n=r})),this.resolve=ko(e),this.reject=ko(n)};Mo.f=function(t){return new Lo(t)};var 
_o,Ao,Io,Ro,Co=W,No=w,Fo=Mo,Jo=n,Do="object"==typeof window,$o=qe,Go=n,Ho=Bt,zo=Qn,Wo=Z.exports,qo=function(t,e,n){for(var r in e)Xn(t,r,e[r],n);return t},Uo=Zn,Ko=function(t,e,n){t&&!er(t=n?t:t.prototype,nr)&&tr(t,nr,{configurable:!0,value:e})},Qo=function(t){var e=rr(t),n=or.f;ir&&e&&!e[ar]&&n(e,ar,{configurable:!0,get:function(){return this}})},Xo=w,Yo=cr,Bo=function(t,e,n){if(!(t instanceof e))throw TypeError("Incorrect "+(n?n+" ":"")+"invocation");return t},Vo=ut,Zo=function(t,e,n){var r,o,i,a,c,u,f,s=n&&n.that,l=!(!n||!n.AS_ENTRIES),p=!(!n||!n.IS_ITERATOR),h=!(!n||!n.INTERRUPTED),d=jr(e,s,1+l+h),v=function(t){return r&&Or(r),new Er(!0,t)},y=function(t){return l?(mr(t),h?d(t[0],t[1],v):d(t[0],t[1])):h?d(t,v):d(t)};if(p)r=t;else{if("function"!=typeof(o=xr(t)))throw TypeError("Target is not iterable");if(br(o)){for(i=0,a=wr(t.length);a>i;i++)if((c=y(t[i]))&&c instanceof Er)return c;return new Er(!1)}r=o.call(t)}for(u=r.next;!(f=u.call(r)).done;){try{c=y(f.value)}catch(t){throw Or(r),t}if("object"==typeof c&&c&&c instanceof Er)return c}return new Er(!1)},ti=function(t,e){if(!e&&!Tr)return!1;var n=!1;try{var r={};r[Sr]=function(){return{next:function(){return{done:n=!0}}}},t(r)}catch(t){}return n},ei=function(t,e){var n,r=Ar(t).constructor;return void 0===r||null==(n=Ar(r)[Rr])?e:Ir(n)},ni=po.set,ri=Po,oi=function(t,e){if(Co(t),No(e)&&e.constructor===t)return e;var n=Fo.f(t);return(0,n.resolve)(e),n.promise},ii=function(t,e){var n=Jo.console;n&&n.error&&(1===arguments.length?n.error(t):n.error(t,e))},ai=Mo,ci=function(t){try{return{error:!1,value:t()}}catch(t){return{error:!0,value:t}}},ui=Dt,fi=Fe,si=Do,li=Fr,pi=jn,hi=Cn("species"),di="Promise",vi=ui.get,yi=ui.set,gi=ui.getterFor(di),mi=zo&&zo.prototype,bi=zo,wi=mi,ji=Go.TypeError,xi=Go.document,Oi=Go.process,Ei=ai.f,Si=Ei,Ti=!!(xi&&xi.createEvent&&Go.dispatchEvent),Pi="function"==typeof PromiseRejectionEvent,Mi="unhandledrejection",ki=!1,Li=fi(di,(function(){var t=Vo(bi)!==String(bi);if(!t&&66===pi)return!0;if(pi>=51&&/native code/.test(bi))return!1;var e=new bi((function(t){t(1)})),n=function(t){t((function(){}),(function(){}))};return(e.constructor={})[hi]=n,!(ki=e.then((function(){}))instanceof n)||!t&&si&&!Pi})),_i=Li||!ti((function(t){bi.all(t).catch((function(){}))})),Ai=function(t){var e;return!(!Xo(t)||"function"!=typeof(e=t.then))&&e},Ii=function(t,e){if(!t.notified){t.notified=!0;var n=t.reactions;ri((function(){for(var r=t.value,o=1==t.state,i=0;n.length>i;){var a,c,u,f=n[i++],s=o?f.ok:f.fail,l=f.resolve,p=f.reject,h=f.domain;try{s?(o||(2===t.rejection&&Fi(t),t.rejection=1),!0===s?a=r:(h&&h.enter(),a=s(r),h&&(h.exit(),u=!0)),a===f.promise?p(ji("Promise-chain cycle")):(c=Ai(a))?c.call(a,l,p):l(a)):p(r)}catch(t){h&&!u&&h.exit(),p(t)}}t.reactions=[],t.notified=!1,e&&!t.rejection&&Ci(t)}))}},Ri=function(t,e,n){var r,o;Ti?((r=xi.createEvent("Event")).promise=e,r.reason=n,r.initEvent(t,!1,!0),Go.dispatchEvent(r)):r={promise:e,reason:n},!Pi&&(o=Go["on"+t])?o(r):t===Mi&&ii("Unhandled promise rejection",n)},Ci=function(t){ni.call(Go,(function(){var e,n=t.facade,r=t.value;if(Ni(t)&&(e=ci((function(){li?Oi.emit("unhandledRejection",r,n):Ri(Mi,n,r)})),t.rejection=li||Ni(t)?2:1,e.error))throw e.value}))},Ni=function(t){return 1!==t.rejection&&!t.parent},Fi=function(t){ni.call(Go,(function(){var e=t.facade;li?Oi.emit("rejectionHandled",e):Ri("rejectionhandled",e,t.value)}))},Ji=function(t,e,n){return 
function(r){t(e,r,n)}},Di=function(t,e,n){t.done||(t.done=!0,n&&(t=n),t.value=e,t.state=2,Ii(t,!0))},$i=function(t,e,n){if(!t.done){t.done=!0,n&&(t=n);try{if(t.facade===e)throw ji("Promise can't be resolved itself");var r=Ai(e);r?ri((function(){var n={done:!1};try{r.call(e,Ji($i,n,t),Ji(Di,n,t))}catch(e){Di(n,e,t)}})):(t.value=e,t.state=1,Ii(t,!1))}catch(e){Di({done:!1},e,t)}}};if(Li&&(wi=(bi=function(t){Bo(this,bi,di),Yo(t),_o.call(this);var e=vi(this);try{t(Ji($i,e),Ji(Di,e))}catch(t){Di(e,t)}}).prototype,(_o=function(t){yi(this,{type:di,done:!1,notified:!1,parent:!1,reactions:[],rejection:!1,state:0,value:void 0})}).prototype=qo(wi,{then:function(t,e){var n=gi(this),r=Ei(ei(this,bi));return r.ok="function"!=typeof t||t,r.fail="function"==typeof e&&e,r.domain=li?Oi.domain:void 0,n.parent=!0,n.reactions.push(r),0!=n.state&&Ii(n,!1),r.promise},catch:function(t){return this.then(void 0,t)}}),Ao=function(){var t=new _o,e=vi(t);this.promise=t,this.resolve=Ji($i,e),this.reject=Ji(Di,e)},ai.f=Ei=function(t){return t===bi||t===Io?new Ao(t):Si(t)},"function"==typeof zo&&mi!==Object.prototype)){Ro=mi.then,ki||(Wo(mi,"then",(function(t,e){var n=this;return new bi((function(t,e){Ro.call(n,t,e)})).then(t,e)}),{unsafe:!0}),Wo(mi,"catch",wi.catch,{unsafe:!0}));try{delete mi.constructor}catch(t){}Uo&&Uo(mi,wi)}$o({global:!0,wrap:!0,forced:Li},{Promise:bi}),Ko(bi,di,!1),Qo(di),Io=Ho(di),$o({target:di,stat:!0,forced:Li},{reject:function(t){var e=Ei(this);return e.reject.call(void 0,t),e.promise}}),$o({target:di,stat:!0,forced:Li},{resolve:function(t){return oi(this,t)}}),$o({target:di,stat:!0,forced:_i},{all:function(t){var e=this,n=Ei(e),r=n.resolve,o=n.reject,i=ci((function(){var n=Yo(e.resolve),i=[],a=0,c=1;Zo(t,(function(t){var u=a++,f=!1;i.push(void 0),c++,n.call(e,t).then((function(t){f||(f=!0,i[u]=t,--c||r(i))}),o)})),--c||r(i)}));return i.error&&o(i.value),n.promise},race:function(t){var e=this,n=Ei(e),r=n.reject,o=ci((function(){var o=Yo(e.resolve);Zo(t,(function(t){o.call(e,t).then(n.resolve,r)}))}));return o.error&&r(o.value),n.promise}});var Gi,Hi=H,zi=W,Wi=Qe,qi=i?Object.defineProperties:function(t,e){zi(t);for(var n,r=Wi(e),o=r.length,i=0;o>i;)Hi.f(t,n=r[i++],e[n]);return t},Ui=W,Ki=qi,Qi=me,Xi=Et,Yi=Cr,Bi=_,Vi=Ot("IE_PROTO"),Zi=function(){},ta=function(t){return"