Update your sound drivers: A missing, corrupted, or outdated sound driver could be why you're experiencing crackling audio. Updating or reinstalling the driver can fix the issue. To update your sound driver manually, go to Device Manager
, expand
-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/FL Studio 20 Keygen Reddit The Ultimate Guide to Unlocking All Features and Plugins.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/FL Studio 20 Keygen Reddit The Ultimate Guide to Unlocking All Features and Plugins.md
deleted file mode 100644
index 392a4efed6642bcb3490bd253e89393158223a06..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/FL Studio 20 Keygen Reddit The Ultimate Guide to Unlocking All Features and Plugins.md
+++ /dev/null
@@ -1,40 +0,0 @@
-
-FL Studio Keygen Reddit: How to Crack FL Studio 20 for Free
-If you are looking for a way to crack FL Studio 20 for free, you might have come across some torrents or links that claim to offer a keygen or a patch for the popular music production software. But are they safe and reliable? And how do you use them?
-In this article, we will explain what a keygen is, how it works, and what the risks and benefits of using one are. We will also show you how to use a keygen from a reputable source, R2R, to unlock FL Studio 20 and enjoy its full features.
-What is a Keygen?
-A keygen, short for key generator, is a program that can generate valid serial numbers or license keys for a software application. A keygen can be used to activate a software without paying for it or going through the official registration process.
-A keygen usually works by exploiting a flaw or a weakness in the software's protection system, such as a weak encryption algorithm or a hardcoded key. A keygen can also emulate the server-side validation process and generate keys that match the expected format.
-How to Use R2R Keygen for FL Studio 20?
-R2R is a well-known group of crackers that release high-quality keygens and patches for various software applications, including FL Studio. R2R's keygen for FL Studio 20 can unlock all the features and plugins of the software, such as Edison, Gross Beat, Harmor, Sytrus, Maximus, and more.
-To use R2R's keygen for FL Studio 20, you need to follow these steps:
-
-- Download the torrent file of FL Studio 20 from this Reddit post by u/orbital_malice42. Make sure you download the one uploaded by Deepstatus, who is a verified uploader on Piratebay.
-- Extract the .7z file using 7-Zip or WinRAR. You will get two folders: FL Studio 20 and Shared.
-- Install FL Studio 20 by running the setup.exe file in the FL Studio 20 folder. Choose your preferred language and location. Do not run FL Studio after installation.
-- Copy the Keygen.exe file from the Shared folder and paste it into the installation directory of FL Studio 20. The default location is C:\Program Files (x86)\Image-Line\FL Studio 20.
-- Run the Keygen.exe file as administrator. You will see a window with a button that says Register. Click on it and wait for a few seconds. You will see a message that says "Successfully registered!"
-- Open FL Studio 20 by running the fl.exe file in the installation directory. You should see that it is unlocked and activated. You can now use all the features and plugins of FL Studio 20 without any limitations.
-
-What are the Risks and Benefits of Using a Keygen?
-Using a keygen can have some advantages and disadvantages. Here are some of them:
-Benefits
-
-- You can save money by not paying for the software license.
-- You can access all the features and plugins of the software without any restrictions.
-- You can use the software offline without needing an internet connection or an account.
-
-Risks
-
-- You may violate the terms and conditions of the software developer and face legal consequences.
-- You may expose your computer to malware or viruses that may be hidden in the keygen or the torrent file.
-- You may not receive any updates or support from the software developer.
-- You may experience some bugs or errors in the software that may affect its performance or functionality.
-
-Conclusion
-FL Studio 20 is a powerful and versatile music production software that can help you create amazing beats and songs. However, it is also quite expensive and requires a license key to activate it.
-If you want to crack FL Studio 20 for free, you can use R2R's keygen by following the steps above. However, keep in mind the risks involved, and consider buying a license if you want official updates and support.
-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/GSG HDRI Studio Pack 1.8 for Cinema 4D How to Achieve Realistic Reflections and Shadows.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/GSG HDRI Studio Pack 1.8 for Cinema 4D How to Achieve Realistic Reflections and Shadows.md
deleted file mode 100644
index 4db5b6f4cbbe95c68e4e717f34a9a6601906233b..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/GSG HDRI Studio Pack 1.8 for Cinema 4D How to Achieve Realistic Reflections and Shadows.md
+++ /dev/null
@@ -1,150 +0,0 @@
-
-GSG HDRI Studio Pack 1.8 for Cinema 4D: A Review
-If you are looking for a way to create stunning lighting and reflections in Cinema 4D, you might have heard of GSG HDRI Studio Pack 1.8. This is a bundle of two plugins from Greyscalegorilla that allow you to browse and apply hundreds of high-quality HDRI (High Dynamic Range Images) in seconds. But what exactly is GSG HDRI Studio Pack 1.8, and why should you use it? In this article, we will review this product and show you how it can help you improve your 3D renders.
- What is GSG HDRI Studio Pack 1.8?
-GSG HDRI Studio Pack 1.8 is a collection of two plugins for Cinema 4D that make it easy to use HDRI lighting and reflections in your scenes. HDRI stands for High Dynamic Range Images, which are images that capture a wide range of brightness values, from very dark to very bright. By using HDRI as your light source, you can create realistic and natural lighting effects that mimic the real world.
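-To make the idea of a "wide range of brightness values" concrete, here is a minimal Python sketch that compares the pixel range of an HDR image with that of an ordinary 8-bit photo. This is an illustration only, assuming OpenCV is installed and using hypothetical file names; it is not part of the plugin:
-```python
-import cv2  # OpenCV can decode Radiance .hdr files into float32 arrays
-
-# Hypothetical sample files, used only for illustration.
-hdr = cv2.imread("studio.hdr", cv2.IMREAD_ANYDEPTH | cv2.IMREAD_ANYCOLOR)
-ldr = cv2.imread("photo.jpg")
-
-# An 8-bit image is capped at 255 per channel, while an HDRI stores
-# unclamped radiance, so light sources can be orders of magnitude
-# brighter than the mid-tones. That extra range is what HDRI lighting uses.
-print("LDR range:", ldr.min(), "to", ldr.max())
-print("HDR range:", float(hdr.min()), "to", float(hdr.max()))
-```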
-HDRI Studio Rig
-HDRI Studio Rig is a plugin that lets you browse and apply HDRI from Greyscalegorilla's library or your own collection. You can then rotate, adjust, and place them in the perfect position for your scene. You can also create professional studio quality backdrops and seamless floors with this plugin. HDRI Studio Rig works with Cinema 4D's Standard and Physical renderers, and it is ideal for product shots, motion graphics, and animations.
-HDRI Link
-HDRI Link is a plugin that lets you connect any third-party render engine to Greyscalegorilla's library of HDRI or your own collection. You can instantly browse and apply HDRI with a simple drag-and-drop interface, without having to deal with complex settings or file paths. HDRI Link is compatible with popular render engines like Redshift, Octane, and Arnold, and it is ideal for photorealistic renders, architectural visualization, and VFX.
- Why use GSG HDRI Studio Pack 1.8?
-GSG HDRI Studio Pack 1.8 offers many benefits for Cinema 4D users who want to create better looking renders with less hassle.
-Benefits of HDRI lighting
-HDRI lighting is one of the most realistic and natural ways to light your scenes in Cinema 4D. By using HDRI as your light source, you can achieve:
-
-- Accurate color representation
-- Soft shadows and reflections
-- Global illumination effects
-- Ambient occlusion effects
-- Mood and atmosphere
-
-HDRI lighting can also save you time and resources by eliminating the need for multiple lights, complex setups, and long render times.
-Features of GSG HDRI Studio Pack 1.8
-GSG HDRI Studio Pack 1.8 offers many features that make it easy and fun to use HDRI lighting in Cinema 4D.
-
-- Browse hundreds of HDRI in seconds with a simple interface
-- Add beautiful reflections in seconds with a click
-- Adjust brightness and reflections separately to find the perfect look
-- Blur the HDRI to create better global illumination effects
-- Use the C4D shadow catcher to adjust or remove shadows from your scene
-- Preview your lighting before hitting render with rotation preview
-- Create professional studio quality backdrops and seamless floors
-- Switch between render engines without losing your settings
-- Access exclusive Greyscalegorilla HDRI collections or use your own
-
- How to use GSG HDRI Studio Pack 1.8?
-GSG HDRI Studio Pack 1.8 is very easy to use in Cinema 4D.
-Installation and compatibility
-To install GSG HDRI Studio Pack 1.8, you need to have Cinema 4D R20 or higher installed on your computer. You also need to have a Greyscalegorilla Plus membership account, which gives you access to all of their products and training for one low price.
- To install the plugins, you need to download them from your Greyscalegorilla account page, unzip them, and copy them to your Cinema 4D plugins folder.
- To use the plugins, you need to activate them with your Greyscalegorilla Plus account credentials.
- GSG HDRI Studio Rig works with Cinema 4D's Standard and Physical renderers, while GSG HDRI Link works with third-party render engines like Redshift, Octane, and Arnold.
- Browsing and applying HDRI
-To browse and apply HDRI with GSG HDRI Studio Rig, you need to add an object to your scene (such as a sphere or a cube), then add an HDRI Studio Rig object from the plugins menu.
- This will open up the HDRI Browser window, where you can see all the available HDRI collections from Greyscalegorilla or your own folder.
- You can then drag any HDRI image onto the HDRI Preview window or double-click on it to apply it to your scene.
- You can also use the search bar or the filters to find the HDRI that suits your needs.
- To browse and apply HDRI with GSG HDRI Link, you need to add an HDRI Link Tag to your object from the plugins menu. You will see a small icon on the tag that indicates which render engine you are using. You can change it by clicking on it and selecting another one.
- This will open up the HDRI Browser window, where you can see all the available HDRI collections from Greyscalegorilla or your own folder.
- You can then drag any HDRI image onto the HDRI Link Tag or double-click on it to apply it to your scene.
- You can also use the search bar or the filters to find the HDRI that suits your needs.
- Adjusting and customizing HDRI
-To adjust and customize HDRI with GSG HDRI Studio Rig, you need to select the HDRI Studio Rig object and go to the attributes panel.
- There you will find several options to tweak your lighting and reflections, such as:
-
-- Brightness: Adjusts the overall intensity of the HDRI
-- Reflection: Adjusts the intensity of the reflections on your object
-- Rotation: Rotates the HDRI around your scene
-- Blur: Blurs the HDRI to create softer shadows and global illumination effects
-- Fill Light: Adds a secondary light source to fill in the dark areas of your scene
-- Color Correction: Applies hue, saturation, contrast, and gamma adjustments to the HDRI
-- Floor: Enables or disables the seamless floor option and lets you change its color, height, and reflection
-- Backdrop: Enables or disables the studio backdrop option and lets you change its color, height, and width
-
- To adjust and customize HDRI with GSG HDRI Link, you need to select the HDRI Link Tag and go to the attributes panel.
- There you will find a few options to tweak your lighting and reflections, such as:
-
-- Brightness: Adjusts the overall intensity of the HDRI
-- Reflection: Adjusts the intensity of the reflections on your object
-- Rotation: Rotates the HDRI around your scene
-- Blur: Blurs the HDRI to create softer shadows and global illumination effects
-- Preview Mode: Enables or disables a low-resolution preview of the HDRI for faster feedback
-
- Where to get GSG HDRI Studio Pack 1.8?
-If you are interested in getting GSG HDRI Studio Pack 1.8, you have two options:
-Pricing and plans
-You can buy GSG HDRI Studio Pack 1.8 as a standalone product for $129. This will give you access to both plugins and 10 sample HDRI images. You can also buy additional HDRI collections from Greyscalegorilla's website, ranging from $49 to $99 each.
- You can also get GSG HDRI Studio Pack 1.8 as part of Greyscalegorilla Plus membership for $399 per year or $64 per month. This will give you access to all of Greyscalegorilla's products and training, including over 3,000 materials, HDRIs, and other 3D assets, all of their time-saving plugins for Cinema 4D, and 500+ hours of pro training.
- Greyscalegorilla Plus membership
-Greyscalegorilla Plus is a subscription service that gives you unlimited access to all of Greyscalegorilla's products and training for one low price. You can get over $13,000 worth of tools and training for only $399 per year or $64 per month.
- With Greyscalegorilla Plus, you can:
-
-- Leverage over 3,000 materials, HDRIs, and other 3D assets to create stunning renders in Cinema 4D
-- Use all of Greyscalegorilla's time-saving plugins for Cinema 4D, such as Signal, Transform, Light Kit Pro, GorillaCam, City Kit, Topcoat, Texture Kit Pro, and more
-- Learn from over 500 hours of pro training on Cinema 4D, After Effects, Redshift, Octane, Arnold, X-Particles, Houdini, RealFlow, and more
-- Get instant updates and new releases as soon as they are available
-- Enjoy a 60-day money-back guarantee if you are not satisfied with your membership
-
- Conclusion
-GSG HDRI Studio Pack 1.8 is a bundle of two plugins for Cinema 4D that let you browse and apply hundreds of high-quality HDRI in seconds. You can use them to create realistic and natural lighting and reflections in your scenes with ease. Whether you are using Cinema 4D's Standard and Physical renderers or third-party render engines like Redshift, Octane, or Arnold, GSG HDRI Studio Pack 1.8 can help you improve your renders.
- If you want to get GSG HDRI Studio Pack 1.8, you can buy it as a standalone product for $129 or as part of Greyscalegorilla Plus membership for $399 per year or $64 per month. Greyscalegorilla Plus gives you unlimited access to all of Greyscalegorilla's products and training for one low price.
- GSG HDRI Studio Pack 1.8 is a great product for Cinema 4D users who want to create better looking renders with less hassle. If you are interested in trying it out, you can visit Greyscalegorilla's website for more information.
- FAQs
-What is HDRI?
-HDRI stands for High Dynamic Range Images, which are images that capture a wide range of brightness values, from very dark to very bright. By using HDRI as your light source, you can create realistic and natural lighting effects that mimic the real world.
- What is GSG HDRI Studio Pack 1.8?
-GSG HDRI Studio Pack 1.8 is a collection of two plugins for Cinema 4D that make it easy to use HDRI lighting and reflections in your scenes. They are:
-
-- HDRI Studio Rig: A plugin that works with Cinema 4D's Standard and Physical renderers
-- HDRI Link: A plugin that works with third-party render engines like Redshift, Octane, and Arnold
-
- How do I use GSG HDRI Studio Pack 1.8?
-To use GSG HDRI Studio Pack 1.8, you need to add an object to your scene (such as a sphere or a cube), then add an HDRI Studio Rig object or an HDRI Link Tag from the plugins menu. This will open up the HDRI Browser window, where you can browse and apply any HDRI image from Greyscalegorilla's library or your own folder. You can then adjust and customize your lighting and reflections with various options in the attributes panel.
- Where do I get GSG HDRI Studio Pack 1.8?
-You can get GSG HDRI Studio Pack 1.8 from Greyscalegorilla's website. You can buy it as a standalone product for $129 or as part of Greyscalegorilla Plus membership for $399 per year or $64 per month.
- What is Greyscalegorilla Plus?
-Greyscalegorilla Plus is a subscription service that gives you unlimited access to all of Greyscalegorilla's products and training for one low price. You can get over $13,000 worth of tools and training for only $399 per year or $64 per month.
-
-
\ No newline at end of file
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Designing With Type 5th Edition - The Essential Guide To Typography By James Craig.pdf !NEW!.md b/spaces/1gistliPinn/ChatGPT4/Examples/Designing With Type 5th Edition - The Essential Guide To Typography By James Craig.pdf !NEW!.md
deleted file mode 100644
index dfc1c5166054bc73e68d398e5c2fd2daad0634b1..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Designing With Type 5th Edition - The Essential Guide To Typography By James Craig.pdf !NEW!.md
+++ /dev/null
@@ -1,7 +0,0 @@
-
-Typography is often regarded as an uninteresting topic. It is no less important than any other medium or discipline, but it is often not given the attention that it deserves. Sadly, some designers today seem to think that just calling oneself a "typographer" will get one a job. It is time to dispel that myth. This book is designed for anyone in any field who wants to know how to design with type. It is true that not all designers think of type as being important or useful, but even that attitude has changed. Designing books, pamphlets, posters, mailings, logos, advertisements, and even Web pages means knowing something about type. And now that type is commonplace in both the print and the electronic media, the need for designers who know type is greater than it has ever been. This book presents the essentials of type and shows how they can be utilized in the design of anything printed, from books to pamphlets to logos. It is to be hoped that this text will be the first of many on this crucial subject.
-Designing with Type continues to be a perennial best seller, as well as one of the most frequently cited textbooks ever written on the subject. Where typography was once considered a dry, obscure subject of little relevance to graphic designers, the book is now accepted as an essential text for designers and design students alike. For the new generation of designers, this book still offers the most complete, current overview of the subject.
-This book is about a way of looking at typography that helps us think about type as a form and about the people who designed it, as well as its use in mass-communication. The book begins by defining what typography is and isn't, with particular emphasis on the rationale behind it. Next, the book outlines types of design and typographic criteria, with sections on structure, form, function, illustration, language, visual and verbal communication, and media placement and organization. The authors show how these methods can be applied to any kind of type-based communication, from books, posters, logos, magazines, and broadsheets to websites, advertisements, and classified listings.
-
-
\ No newline at end of file
diff --git a/spaces/1line/AutoGPT/autogpt/commands/audio_text.py b/spaces/1line/AutoGPT/autogpt/commands/audio_text.py
deleted file mode 100644
index cae32d4eb78c4268bf6ef1bae3c15a399af046bf..0000000000000000000000000000000000000000
--- a/spaces/1line/AutoGPT/autogpt/commands/audio_text.py
+++ /dev/null
@@ -1,36 +0,0 @@
-import json
-
-import requests
-
-from autogpt.config import Config
-from autogpt.workspace import path_in_workspace
-
-cfg = Config()
-
-
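-# Resolve the given path inside the agent workspace, read the raw audio bytes, and transcribe them.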
-def read_audio_from_file(audio_path):
- audio_path = path_in_workspace(audio_path)
- with open(audio_path, "rb") as audio_file:
- audio = audio_file.read()
- return read_audio(audio)
-
-
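-# POST the raw audio bytes to the configured Hugging Face Inference API model and return the transcribed text.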
-def read_audio(audio):
- model = cfg.huggingface_audio_to_text_model
- api_url = f"https://api-inference.huggingface.co/models/{model}"
- api_token = cfg.huggingface_api_token
- headers = {"Authorization": f"Bearer {api_token}"}
-
- if api_token is None:
- raise ValueError(
- "You need to set your Hugging Face API token in the config file."
- )
-
- response = requests.post(
- api_url,
- headers=headers,
- data=audio,
- )
-
- text = json.loads(response.content.decode("utf-8"))["text"]
- return "The audio says: " + text
diff --git a/spaces/1phancelerku/anime-remove-background/60 Seconds! Reatomized - A Crazy and Funny Adventure in a Nuclear Wasteland - Play Online for Free.md b/spaces/1phancelerku/anime-remove-background/60 Seconds! Reatomized - A Crazy and Funny Adventure in a Nuclear Wasteland - Play Online for Free.md
deleted file mode 100644
index dd78971091718bd8e4c9cc98f07b721c8af77379..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/60 Seconds! Reatomized - A Crazy and Funny Adventure in a Nuclear Wasteland - Play Online for Free.md
+++ /dev/null
@@ -1,129 +0,0 @@
-
-How to Play 60 Seconds! Reatomized for Free Online Without Downloading Anything
- Do you like survival games? Do you enjoy dark humor and quirky characters? Do you want to experience a nuclear apocalypse without risking your life? If you answered yes to any of these questions, then you might want to try 60 Seconds! Reatomized, a game that lets you play as a suburban dad who has to save his family and himself from a nuclear blast. And the best part is, you can play it for free online without downloading anything. Here's how.
- What is 60 Seconds! Reatomized?
- 60 Seconds! Reatomized is a game that combines two genres: survival simulator and dark comedy adventure. It was developed by Robot Gentleman and released in 2019 as a remastered version of the original 60 Seconds! game from 2015. Here are some of the features of the game:
- A post-apocalyptic survival simulator
- In this game, you have to face the consequences of a nuclear war that has destroyed most of the world. You have to scavenge for supplies, ration food and water, deal with illnesses and injuries, and make tough decisions that will affect your survival. You also have to deal with random events and visitors that can either help or harm you. The game has five different modes: Atomic Drill, Apocalypse, Scavenge, Survival, and Challenge. Each mode has its own rules and challenges.
- A dark comedy adventure game
- While the game has a serious theme, it also has a lot of humor and absurdity. The game is narrated by a sarcastic robot named Dolores, who comments on your actions and choices. The game also has a lot of references to pop culture, such as movies, books, games, and celebrities. The game also has a lot of funny scenarios and outcomes, such as turning into mutants, becoming cannibals, or joining cults. The game does not take itself too seriously and encourages you to have fun with it.
- A remastered version of the original 60 Seconds!
- 60 Seconds! Reatomized is an improved version of the original game, with new features and content. Some of the improvements include:
-
-- Better graphics and sound effects
-- New endings and achievements
-- New characters and items
-- New challenges and events
-- New mini-games and quests
-- New game mode: Challenge
-
- How to play 60 seconds free no download unblocked?
- If you want to play 60 Seconds! Reatomized for free online without downloading anything, you need to find a reliable website that offers the game. One such website is [Gameroze.com], which lets you play the game in your browser without any registration or installation. Another website is [60secondsreatomizedgame.com], which also offers the game for free online. Here are the steps to play the game:
- Find a reliable website that offers the game
- Go to one of the websites mentioned above or search for other websites that offer the game. Make sure that the website is safe and secure, and does not contain any viruses or malware. You can use antivirus software or a browser extension to check the website's reputation and safety.
- Choose your mode and difficulty level
-
-Once you have accessed the game on the website, you can choose the mode you want to play. The game has five modes: Atomic Drill, Apocalypse, Scavenge, Survival, and Challenge. Each mode has a different objective and gameplay. Here is a brief description of each mode:
-
-| Mode | Description |
-| --- | --- |
-| Atomic Drill | This is the tutorial mode, where you can learn the basics of the game. You have to collect supplies and family members in 60 seconds and then survive in the bunker for a few days. |
-| Apocalypse | This is the main mode, where you have to complete the full game. You have to collect supplies and family members in 60 seconds and then survive in the bunker as long as you can. You can choose from four difficulty levels: Little Boy, Fat Man, Tsar Bomba, and Scavenger. |
-| Scavenge | This is a mode where you only have to collect supplies and family members in 60 seconds. You can choose from four difficulty levels: Easy, Normal, Hard, and Impossible. |
-| Survival | This is a mode where you only have to survive in the bunker with the supplies and family members you have. You can choose from four difficulty levels: Easy, Normal, Hard, and Impossible. |
-| Challenge | This is a mode where you have to complete a specific scenario with a set of rules and conditions. You can choose from 12 challenges, such as Cat Lady, Twins, or Soup Only. |
-
- Collect supplies and family members in 60 seconds
- After choosing your mode and difficulty level, you will start the game in your house. You will have 60 seconds to grab as many supplies and family members as you can and bring them to the fallout shelter in your backyard. You can use the arrow keys or the WASD keys to move around, and the spacebar or the left mouse button to pick up items or people. You can also use the E key or the right mouse button to drop items or people. You can only carry up to four items or people at a time, so you have to plan carefully what you need and what you can leave behind. Some of the items you can find are:
-
-- Food: Canned soup and water bottles. You need these to feed yourself and your family.
-- Medicine: First aid kit and medkit. You need these to heal yourself and your family from injuries or illnesses.
-- Weapons: Axe, rifle, shotgun, pistol, ammo, padlock. You need these to defend yourself from raiders or other threats.
-- Tools: Flashlight, radio, map, gas mask, bug spray, suitcase, Boy Scout book. You need these to communicate with others, explore outside, or fix things.
-- Luxuries: Playing cards, chess board, harmonica, checkers board. You need these to entertain yourself and your family and prevent boredom or insanity.
-- Pets: Cat or dog. You can choose to bring one of them with you for companionship.
-
- You also have to bring your family members with you. They are:
-
-- Ted: The protagonist and father of the family. He is strong and brave.
-- Dolores: The wife of Ted and mother of the family. She is smart and resourceful.
-- Mary Jane: The daughter of Ted and Dolores. She is adventurous and curious.
-- Timmy: The son of Ted and Dolores. He is clever and optimistic.
-
- You have to decide who and what to bring with you before the time runs out. If you don't make it to the shelter in time, you will die in the blast. If you don't bring enough supplies or family members with you, you will have a harder time surviving in the bunker.
- Survive in the bunker as long as you can
- After collecting supplies and family members in 60 seconds, you will enter the bunker. This is where the survival part of the game begins. You will have to manage your resources, make decisions, and deal with events that will affect your survival. Here are some of the things you have to do:
-- Feed yourself and your family every few days with soup and water.
-- Use medicine to treat injuries or illnesses that may occur.
-- Use weapons to fend off raiders or other enemies that may attack.
-- Use tools to communicate with other survivors or explore outside.
-- Use luxuries to keep yourself and your family happy and sane.
-- Follow the instructions or requests of Dolores, the robot narrator, who will guide you through the game.
-- Make choices that will affect your survival, such as who to send outside, who to trust, or what to trade.
-- Face random events and visitors that will have positive or negative consequences for you.
-- Try to find a way to escape the bunker or get rescued by the military or other survivors.
-
- The game will end when you either die, escape, or get rescued. The game will also show you your stats, such as how many days you survived, how many items you used, and how many endings you unlocked. The game has over 100 endings, some of which are funny, sad, or bizarre.
- Why play 60 seconds free no download unblocked?
- There are many reasons why you might want to play 60 Seconds! Reatomized for free online without downloading anything. Here are some of them:
- It's fun and challenging
- The game is a mix of strategy, luck, and humor. You have to think fast and smart when collecting supplies and family members in 60 seconds. You also have to adapt to different situations and scenarios that will test your survival skills. The game is not easy, but it's rewarding when you manage to survive or achieve a good ending. The game also has a lot of humor and absurdity that will make you laugh or smile.
- It's different every time you play
- The game is randomly generated, which means that every time you play, you will have a different experience. The items and people you find in your house, the events and visitors you encounter in the bunker, and the endings you unlock will vary each time. The game also has different modes and difficulty levels that will change the gameplay and the challenge. The game has a lot of replay value and surprises.
- It's compatible with any device and browser
- The game is designed to run on any device and browser that supports HTML5. You don't need to download anything or install anything to play the game. You just need an internet connection and a web browser. You can play the game on your computer, laptop, tablet, or smartphone. You can also play the game on any operating system, such as Windows, Mac, Linux, Android, or iOS. The game is accessible and convenient for anyone.
- Conclusion
- 60 Seconds! Reatomized is a game that lets you play as a suburban dad who has to save his family and himself from a nuclear blast. You can play it for free online without downloading anything on websites like [Gameroze.com] or [60secondsreatomizedgame.com]. The game is a combination of survival simulator and dark comedy adventure. It has five different modes: Atomic Drill, Apocalypse, Scavenge, Survival, and Challenge. It also has over 100 endings and new features and content. The game is fun and challenging, different every time you play, and compatible with any device and browser. If you are looking for a game that will test your survival skills and make you laugh at the same time, then you should try 60 Seconds! Reatomized.
- FAQs
- Here are some of the frequently asked questions about 60 Seconds! Reatomized:
-
-- Q: How long does it take to finish the game?
-- A: It depends on how well you play and what mode and difficulty level you choose. The game can last from a few minutes to several hours.
-- Q: How can I save my progress in the game?
-- A: The game has an auto-save feature that saves your progress every day in the bunker. You can also manually save your progress by clicking on the floppy disk icon in the top right corner of the screen.
-- Q: How can I unlock more endings in the game?
-- A: You can unlock more endings by playing different modes and difficulty levels, making different choices, using different items, interacting with different visitors, and exploring different locations.
-- Q: How can I get more supplies in the game?
-- A: You can get more supplies by scavenging outside the bunker with a gas mask and a map. You can also trade with other survivors or raiders who may visit your bunker.
-- Q: How can I get rid of the cockroaches in the game?
-- A: You can get rid of the cockroaches by using bug spray or fire. However, be careful not to burn your supplies or your family or yourself. You can also prevent the cockroaches from appearing by keeping your bunker clean and tidy.
-
-
-
\ No newline at end of file
diff --git a/spaces/2ndelement/voicevox/test/test_kana_parser.py b/spaces/2ndelement/voicevox/test/test_kana_parser.py
deleted file mode 100644
index ef800b60003b5d14b90a8eeb86e0fa29a919f878..0000000000000000000000000000000000000000
--- a/spaces/2ndelement/voicevox/test/test_kana_parser.py
+++ /dev/null
@@ -1,688 +0,0 @@
-from typing import List
-from unittest import TestCase
-
-from voicevox_engine import kana_parser
-from voicevox_engine.kana_parser import create_kana
-from voicevox_engine.model import AccentPhrase, Mora, ParseKanaError, ParseKanaErrorCode
-
-
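-# Notation exercised by these tests: ' marks the accented mora, / separates accent
-# phrases, 、 inserts a pause mora, a leading _ devoices the following mora, and a
-# trailing ? marks the accent phrase as interrogative.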
-def parse_kana(text: str) -> List[AccentPhrase]:
- accent_phrases = kana_parser.parse_kana(text)
- return accent_phrases
-
-
-class TestParseKana(TestCase):
- def test_phrase_length(self):
- self.assertEqual(len(parse_kana("ア'/ア'")), 2)
- self.assertEqual(len(parse_kana("ア'、ア'")), 2)
- self.assertEqual(len(parse_kana("ア'/ア'/ア'/ア'/ア'")), 5)
- self.assertEqual(len(parse_kana("ス'")), 1)
- self.assertEqual(len(parse_kana("_ス'")), 1)
- self.assertEqual(len(parse_kana("ギェ'")), 1)
- self.assertEqual(len(parse_kana("ギェ'、ギェ'/ギェ'")), 3)
-
- def test_accent(self):
- self.assertEqual(parse_kana("シャ'シシュシェショ")[0].accent, 1)
- self.assertEqual(parse_kana("シャ'_シシュシェショ")[0].accent, 1)
- self.assertEqual(parse_kana("シャシ'シュシェショ")[0].accent, 2)
- self.assertEqual(parse_kana("シャ_シ'シュシェショ")[0].accent, 2)
- self.assertEqual(parse_kana("シャシシュ'シェショ")[0].accent, 3)
- self.assertEqual(parse_kana("シャ_シシュ'シェショ")[0].accent, 3)
- self.assertEqual(parse_kana("シャシシュシェショ'")[0].accent, 5)
- self.assertEqual(parse_kana("シャ_シシュシェショ'")[0].accent, 5)
-
- def test_mora_length(self):
- self.assertEqual(len(parse_kana("シャ'シシュシェショ")[0].moras), 5)
- self.assertEqual(len(parse_kana("シャ'_シシュシェショ")[0].moras), 5)
- self.assertEqual(len(parse_kana("シャシ'シュシェショ")[0].moras), 5)
- self.assertEqual(len(parse_kana("シャ_シ'シュシェショ")[0].moras), 5)
- self.assertEqual(len(parse_kana("シャシシュシェショ'")[0].moras), 5)
- self.assertEqual(len(parse_kana("シャ_シシュシェショ'")[0].moras), 5)
-
- def test_pause(self):
- self.assertIsNone(parse_kana("ア'/ア'")[0].pause_mora)
- self.assertIsNone(parse_kana("ア'/ア'")[1].pause_mora)
- self.assertIsNotNone(parse_kana("ア'、ア'")[0].pause_mora)
- self.assertIsNone(parse_kana("ア'、ア'")[1].pause_mora)
-
- def test_unvoice(self):
- self.assertEqual(parse_kana("ス'")[0].moras[0].vowel, "u")
- self.assertEqual(parse_kana("_ス'")[0].moras[0].vowel, "U")
-
- def test_roundtrip(self):
- for text in ["コンニチワ'", "ワタシワ'/シャチョオデ'_ス", "トテモ'、エラ'インデス"]:
- self.assertEqual(create_kana(parse_kana(text)), text)
-
- for text in ["ヲ'", "ェ'"]:
- self.assertEqual(create_kana(parse_kana(text)), text)
-
- def _accent_phrase_marks_base(
- self, text: str, expected_accent_phrases: List[AccentPhrase]
- ) -> None:
- accent_phrases = kana_parser.parse_kana(text)
- self.assertEqual(expected_accent_phrases, accent_phrases)
-
- def test_accent_phrase_marks(self):
- def a_slash_a_accent_phrases():
- return [
- AccentPhrase(
- moras=[
- Mora(
- text="ア",
- consonant=None,
- consonant_length=None,
- vowel="a",
- vowel_length=0.0,
- pitch=0.0,
- ),
- ],
- accent=1,
- pause_mora=None,
- ),
- AccentPhrase(
- moras=[
- Mora(
- text="ア",
- consonant=None,
- consonant_length=None,
- vowel="a",
- vowel_length=0.0,
- pitch=0.0,
- ),
- ],
- accent=1,
- pause_mora=None,
- ),
- ]
-
- expected_accent_phrases = a_slash_a_accent_phrases()
- self._accent_phrase_marks_base(
- text="ア'/ア'",
- expected_accent_phrases=expected_accent_phrases,
- )
-
- def a_jp_comma_a_accent_phrases():
- return [
- AccentPhrase(
- moras=[
- Mora(
- text="ア",
- consonant=None,
- consonant_length=None,
- vowel="a",
- vowel_length=0.0,
- pitch=0.0,
- ),
- ],
- accent=1,
- pause_mora=Mora(
- text="、",
- consonant=None,
- consonant_length=None,
- vowel="pau",
- vowel_length=0.0,
- pitch=0.0,
- ),
- ),
- AccentPhrase(
- moras=[
- Mora(
- text="ア",
- consonant=None,
- consonant_length=None,
- vowel="a",
- vowel_length=0.0,
- pitch=0.0,
- ),
- ],
- accent=1,
- pause_mora=None,
- ),
- ]
-
- expected_accent_phrases = a_jp_comma_a_accent_phrases()
- self._accent_phrase_marks_base(
- text="ア'、ア'",
- expected_accent_phrases=expected_accent_phrases,
- )
-
- def a_slash_a_slash_a_slash_a_slash_a_accent_phrases():
- return [
- AccentPhrase(
- moras=[
- Mora(
- text="ア",
- consonant=None,
- consonant_length=None,
- vowel="a",
- vowel_length=0.0,
- pitch=0.0,
- ),
- ],
- accent=1,
- pause_mora=None,
- ),
- AccentPhrase(
- moras=[
- Mora(
- text="ア",
- consonant=None,
- consonant_length=None,
- vowel="a",
- vowel_length=0.0,
- pitch=0.0,
- ),
- ],
- accent=1,
- pause_mora=None,
- ),
- AccentPhrase(
- moras=[
- Mora(
- text="ア",
- consonant=None,
- consonant_length=None,
- vowel="a",
- vowel_length=0.0,
- pitch=0.0,
- ),
- ],
- accent=1,
- pause_mora=None,
- ),
- AccentPhrase(
- moras=[
- Mora(
- text="ア",
- consonant=None,
- consonant_length=None,
- vowel="a",
- vowel_length=0.0,
- pitch=0.0,
- ),
- ],
- accent=1,
- pause_mora=None,
- ),
- AccentPhrase(
- moras=[
- Mora(
- text="ア",
- consonant=None,
- consonant_length=None,
- vowel="a",
- vowel_length=0.0,
- pitch=0.0,
- ),
- ],
- accent=1,
- pause_mora=None,
- ),
- ]
-
- expected_accent_phrases = a_slash_a_slash_a_slash_a_slash_a_accent_phrases()
- self._accent_phrase_marks_base(
- text="ア'/ア'/ア'/ア'/ア'",
- expected_accent_phrases=expected_accent_phrases,
- )
-
- def su_accent_phrases():
- return [
- AccentPhrase(
- moras=[
- Mora(
- text="ス",
- consonant="s",
- consonant_length=0.0,
- vowel="u",
- vowel_length=0.0,
- pitch=0.0,
- ),
- ],
- accent=1,
- pause_mora=None,
- ),
- ]
-
- expected_accent_phrases = su_accent_phrases()
- self._accent_phrase_marks_base(
- text="ス'",
- expected_accent_phrases=expected_accent_phrases,
- )
-
- def under_score_su_accent_phrases():
- return [
- AccentPhrase(
- moras=[
- Mora(
- text="ス",
- consonant="s",
- consonant_length=0.0,
- vowel="U",
- vowel_length=0.0,
- pitch=0.0,
- ),
- ],
- accent=1,
- pause_mora=None,
- ),
- ]
-
- expected_accent_phrases = under_score_su_accent_phrases()
- self._accent_phrase_marks_base(
- text="_ス'",
- expected_accent_phrases=expected_accent_phrases,
- )
-
- def gye_accent_phrases():
- return [
- AccentPhrase(
- moras=[
- Mora(
- text="ギェ",
- consonant="gy",
- consonant_length=0.0,
- vowel="e",
- vowel_length=0.0,
- pitch=0.0,
- ),
- ],
- accent=1,
- pause_mora=None,
- ),
- ]
-
- expected_accent_phrases = gye_accent_phrases()
- self._accent_phrase_marks_base(
- text="ギェ'",
- expected_accent_phrases=expected_accent_phrases,
- )
-
- def gye_gye_gye_accent_phrases():
- return [
- AccentPhrase(
- moras=[
- Mora(
- text="ギェ",
- consonant="gy",
- consonant_length=0.0,
- vowel="e",
- vowel_length=0.0,
- pitch=0.0,
- ),
- ],
- accent=1,
- pause_mora=Mora(
- text="、",
- consonant=None,
- consonant_length=None,
- vowel="pau",
- vowel_length=0.0,
- pitch=0.0,
- ),
- ),
- AccentPhrase(
- moras=[
- Mora(
- text="ギェ",
- consonant="gy",
- consonant_length=0.0,
- vowel="e",
- vowel_length=0.0,
- pitch=0.0,
- ),
- ],
- accent=1,
- pause_mora=None,
- ),
- AccentPhrase(
- moras=[
- Mora(
- text="ギェ",
- consonant="gy",
- consonant_length=0.0,
- vowel="e",
- vowel_length=0.0,
- pitch=0.0,
- ),
- ],
- accent=1,
- pause_mora=None,
- ),
- ]
-
- expected_accent_phrases = gye_gye_gye_accent_phrases()
- self._accent_phrase_marks_base(
- text="ギェ'、ギェ'/ギェ'",
- expected_accent_phrases=expected_accent_phrases,
- )
-
- def test_interrogative_accent_phrase_marks(self):
- def a_question_mark_accent_phrases():
- return [
- AccentPhrase(
- moras=[
- Mora(
- text="ア",
- consonant=None,
- consonant_length=None,
- vowel="a",
- vowel_length=0.0,
- pitch=0.0,
- ),
- ],
- accent=1,
- pause_mora=None,
- is_interrogative=True,
- ),
- ]
-
- expected_accent_phrases = a_question_mark_accent_phrases()
- self._accent_phrase_marks_base(
- text="ア'?",
- expected_accent_phrases=expected_accent_phrases,
- )
-
- def gye_gye_gye_question_mark_accent_phrases():
- return [
- AccentPhrase(
- moras=[
- Mora(
- text="ギェ",
- consonant="gy",
- consonant_length=0.0,
- vowel="e",
- vowel_length=0.0,
- pitch=0.0,
- ),
- ],
- accent=1,
- pause_mora=Mora(
- text="、",
- consonant=None,
- consonant_length=None,
- vowel="pau",
- vowel_length=0.0,
- pitch=0.0,
- ),
- ),
- AccentPhrase(
- moras=[
- Mora(
- text="ギェ",
- consonant="gy",
- consonant_length=0.0,
- vowel="e",
- vowel_length=0.0,
- pitch=0.0,
- ),
- ],
- accent=1,
- pause_mora=None,
- ),
- AccentPhrase(
- moras=[
- Mora(
- text="ギェ",
- consonant="gy",
- consonant_length=0.0,
- vowel="e",
- vowel_length=0.0,
- pitch=0.0,
- ),
- ],
- accent=1,
- pause_mora=None,
- is_interrogative=True,
- ),
- ]
-
- expected_accent_phrases = gye_gye_gye_question_mark_accent_phrases()
- self._accent_phrase_marks_base(
- text="ギェ'、ギェ'/ギェ'?",
- expected_accent_phrases=expected_accent_phrases,
- )
-
- def a_pause_a_question_pause_a_question_a_question_mark_accent_phrases():
- return [
- AccentPhrase(
- moras=[
- Mora(
- text="ア",
- consonant=None,
- consonant_length=None,
- vowel="a",
- vowel_length=0.0,
- pitch=0.0,
- ),
- ],
- accent=1,
- pause_mora=Mora(
- text="、",
- consonant=None,
- consonant_length=None,
- vowel="pau",
- vowel_length=0.0,
- pitch=0.0,
- ),
- ),
- AccentPhrase(
- moras=[
- Mora(
- text="ア",
- consonant=None,
- consonant_length=None,
- vowel="a",
- vowel_length=0.0,
- pitch=0.0,
- ),
- ],
- accent=1,
- pause_mora=Mora(
- text="、",
- consonant=None,
- consonant_length=None,
- vowel="pau",
- vowel_length=0.0,
- pitch=0.0,
- ),
- is_interrogative=True,
- ),
- AccentPhrase(
- moras=[
- Mora(
- text="ア",
- consonant=None,
- consonant_length=None,
- vowel="a",
- vowel_length=0.0,
- pitch=0.0,
- ),
- ],
- accent=1,
- pause_mora=None,
- is_interrogative=True,
- ),
- AccentPhrase(
- moras=[
- Mora(
- text="ア",
- consonant=None,
- consonant_length=None,
- vowel="a",
- vowel_length=0.0,
- pitch=0.0,
- ),
- ],
- accent=1,
- pause_mora=None,
- is_interrogative=True,
- ),
- ]
-
- expected_accent_phrases = (
- a_pause_a_question_pause_a_question_a_question_mark_accent_phrases()
- )
- self._accent_phrase_marks_base(
- text="ア'、ア'?、ア'?/ア'?",
- expected_accent_phrases=expected_accent_phrases,
- )
-
-
-class TestParseKanaException(TestCase):
- def _assert_error_code(self, kana: str, code: ParseKanaErrorCode):
- with self.assertRaises(ParseKanaError) as err:
- parse_kana(kana)
- self.assertEqual(err.exception.errcode, code)
-
- def test_exceptions(self):
- self._assert_error_code("アクセント", ParseKanaErrorCode.ACCENT_NOTFOUND)
- self._assert_error_code("'アクセント", ParseKanaErrorCode.ACCENT_TOP)
- self._assert_error_code("ア'ク'セント", ParseKanaErrorCode.ACCENT_TWICE)
- self._assert_error_code("ひ'らがな", ParseKanaErrorCode.UNKNOWN_TEXT)
- self._assert_error_code("__ス'", ParseKanaErrorCode.UNKNOWN_TEXT)
- self._assert_error_code("ア'/", ParseKanaErrorCode.EMPTY_PHRASE)
- self._assert_error_code("/ア'", ParseKanaErrorCode.EMPTY_PHRASE)
- self._assert_error_code("", ParseKanaErrorCode.EMPTY_PHRASE)
-
- with self.assertRaises(ParseKanaError) as err:
- parse_kana("ヒト'ツメ/フタツメ")
- self.assertEqual(err.exception.errcode, ParseKanaErrorCode.ACCENT_NOTFOUND)
- self.assertEqual(err.exception.kwargs, {"text": "フタツメ"})
-
- with self.assertRaises(ParseKanaError) as err:
- parse_kana("ア'/")
- self.assertEqual(err.exception.errcode, ParseKanaErrorCode.EMPTY_PHRASE)
- self.assertEqual(err.exception.kwargs, {"position": "2"})
-
- with self.assertRaises(ParseKanaError) as err:
- kana_parser.parse_kana("ア?ア'")
- self.assertEqual(
- err.exception.errcode, ParseKanaErrorCode.INTERROGATION_MARK_NOT_AT_END
- )
-
-
-class TestCreateKana(TestCase):
- def test_create_kana_interrogative(self):
- def koreha_arimasuka_accent_phrases():
- return [
- AccentPhrase(
- moras=[
- Mora(
- text="コ",
- consonant="k",
- consonant_length=2.5,
- vowel="o",
- vowel_length=2.5,
- pitch=2.5,
- ),
- Mora(
- text="レ",
- consonant="r",
- consonant_length=2.5,
- vowel="e",
- vowel_length=2.5,
- pitch=2.5,
- ),
- Mora(
- text="ワ",
- consonant="w",
- consonant_length=2.5,
- vowel="a",
- vowel_length=2.5,
- pitch=2.5,
- ),
- ],
- accent=3,
- pause_mora=None,
- is_interrogative=False,
- ),
- AccentPhrase(
- moras=[
- Mora(
- text="ア",
- consonant=None,
- consonant_length=None,
- vowel="a",
- vowel_length=2.5,
- pitch=2.5,
- ),
- Mora(
- text="リ",
- consonant="r",
- consonant_length=2.5,
- vowel="i",
- vowel_length=2.5,
- pitch=2.5,
- ),
- Mora(
- text="マ",
- consonant="m",
- consonant_length=2.5,
- vowel="a",
- vowel_length=2.5,
- pitch=2.5,
- ),
- Mora(
- text="ス",
- consonant="s",
- consonant_length=2.5,
- vowel="U",
- vowel_length=2.5,
- pitch=2.5,
- ),
- Mora(
- text="カ",
- consonant="k",
- consonant_length=2.5,
- vowel="a",
- vowel_length=2.5,
- pitch=2.5,
- ),
- ],
- accent=3,
- pause_mora=None,
- is_interrogative=False,
- ),
- ]
-
- accent_phrases = koreha_arimasuka_accent_phrases()
- self.assertEqual(create_kana(accent_phrases), "コレワ'/アリマ'_スカ")
-
- accent_phrases = koreha_arimasuka_accent_phrases()
- accent_phrases[-1].is_interrogative = True
- self.assertEqual(create_kana(accent_phrases), "コレワ'/アリマ'_スカ?")
-
- def kya_accent_phrases():
- return [
- AccentPhrase(
- moras=[
- Mora(
- text="キャ",
- consonant="ky",
- consonant_length=2.5,
- vowel="a",
- vowel_length=2.5,
- pitch=2.5,
- ),
- Mora(
- text="ッ",
- consonant=None,
- consonant_length=None,
- vowel="cl",
- vowel_length=0.1,
- pitch=0,
- ),
- ],
- accent=1,
- pause_mora=None,
- is_interrogative=False,
- ),
- ]
-
- accent_phrases = kya_accent_phrases()
- self.assertEqual(create_kana(accent_phrases), "キャ'ッ")
-
- accent_phrases = kya_accent_phrases()
- accent_phrases[-1].is_interrogative = True
- self.assertEqual(create_kana(accent_phrases), "キャ'ッ?")
diff --git a/spaces/3laa2/Text2img/README.md b/spaces/3laa2/Text2img/README.md
deleted file mode 100644
index 4d60b6f1a40e69db3a64878d7e1684b9d909eda4..0000000000000000000000000000000000000000
--- a/spaces/3laa2/Text2img/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Text2img
-emoji: 🔥
-colorFrom: yellow
-colorTo: indigo
-sdk: streamlit
-sdk_version: 1.19.0
-app_file: app.py
-pinned: false
-license: openrail
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/42digital/DeepFashion_Classification/README.md b/spaces/42digital/DeepFashion_Classification/README.md
deleted file mode 100644
index 02067ee98ac4740840bebbe632b4bc3c28c5716b..0000000000000000000000000000000000000000
--- a/spaces/42digital/DeepFashion_Classification/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: DeepFashion Classification
-emoji: 🏆
-colorFrom: purple
-colorTo: blue
-sdk: gradio
-sdk_version: 3.39.0
-app_file: app.py
-pinned: true
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
\ No newline at end of file
diff --git a/spaces/AI-ANK/PaLM-Kosmos-Vision/app.py b/spaces/AI-ANK/PaLM-Kosmos-Vision/app.py
deleted file mode 100644
index 6b3915c20163b4499d64970e55f1a880869c3aa0..0000000000000000000000000000000000000000
--- a/spaces/AI-ANK/PaLM-Kosmos-Vision/app.py
+++ /dev/null
@@ -1,165 +0,0 @@
-import streamlit as st
-import extra_streamlit_components as stx
-import requests
-from PIL import Image
-from transformers import AutoProcessor, AutoModelForVision2Seq
-from io import BytesIO
-import replicate
-from llama_index.llms.palm import PaLM
-from llama_index import ServiceContext, VectorStoreIndex, Document
-from llama_index.memory import ChatMemoryBuffer
-import os
-import datetime
-
-# Set the Replicate and Google API keys up front, before any code below reads them
-os.environ['REPLICATE_API_TOKEN'] = st.secrets['REPLICATE_API_TOKEN']
-os.environ["GOOGLE_API_KEY"] = st.secrets['GOOGLE_API_KEY']
-
-# Set up the title of the application
-#st.title("PaLM-Kosmos-Vision")
-st.set_page_config(layout="wide")
-st.write("My version of ChatGPT vision. You can upload an image and start chatting with the LLM about the image")
-
-# Sidebar
-st.sidebar.markdown('## Created By')
-st.sidebar.markdown("""
-[Harshad Suryawanshi](https://www.linkedin.com/in/harshadsuryawanshi/)
-""")
-
-st.sidebar.markdown('## Other Projects')
-st.sidebar.markdown("""
-- [AI Equity Research Analyst](https://ai-eqty-rsrch-anlyst.streamlit.app/)
-- [Recasting "The Office" Scene](https://blackmirroroffice.streamlit.app/)
-- [Story Generator](https://appstorycombined-agaf9j4ceit.streamlit.app/)
-""")
-
-st.sidebar.markdown('## Disclaimer')
-st.sidebar.markdown("""
-This application is a conceptual prototype created to demonstrate the potential of Large Language Models (LLMs) in chatting about images. The contents generated by this application are purely illustrative and should not be construed as advice, endorsements, or recommendations. The author and the application do not provide any guarantee regarding the accuracy, completeness, or timeliness of the information provided.
-""")
-
-# Initialize the cookie manager
-cookie_manager = stx.CookieManager()
-
-# Function to get image caption via Kosmos2.
-@st.cache_data
-def get_image_caption(image_data):
- input_data = {
- "image": image_data,
- "description_type": "Brief"
- }
- output = replicate.run(
- "lucataco/kosmos-2:3e7b211c29c092f4bcc8853922cc986baa52efe255876b80cac2c2fbb4aff805",
- input=input_data
- )
- # Split the output string on the newline character and take the first item
- text_description = output.split('\n\n')[0]
- return text_description
-
-# Function to create the chat engine.
-@st.cache_resource
-def create_chat_engine(img_desc, api_key):
- llm = PaLM(api_key=api_key)
- service_context = ServiceContext.from_defaults(llm=llm)
- doc = Document(text=img_desc)
- index = VectorStoreIndex.from_documents([doc], service_context=service_context)
- chatmemory = ChatMemoryBuffer.from_defaults(token_limit=1500)
-
- chat_engine = index.as_chat_engine(
- chat_mode="context",
- system_prompt=(
-            f"You are a chatbot, able to have normal interactions, as well as talk. "
-            f"You always answer in great detail and are polite. Your responses are always descriptive. "
-            f"Your job is to talk about an image the user has uploaded. Image description: {img_desc}."
- ),
- verbose=True,
- memory=chatmemory
- )
- return chat_engine
-
-# Clear chat function
-def clear_chat():
- if "messages" in st.session_state:
- del st.session_state.messages
- if "image_file" in st.session_state:
- del st.session_state.image_file
-
-# Callback function to clear the chat when a new image is uploaded
-def on_image_upload():
- clear_chat()
-
-# Retrieve the message count from cookies
-message_count = cookie_manager.get(cookie='message_count')
-if message_count is None:
- message_count = 0
-else:
- message_count = int(message_count)
-
-# If the message limit has been reached, disable the inputs
-if message_count >= 20:
- st.error("Notice: The maximum message limit for this demo version has been reached.")
- # Disabling the uploader and input by not displaying them
- image_uploader_placeholder = st.empty() # Placeholder for the uploader
- chat_input_placeholder = st.empty() # Placeholder for the chat input
-else:
- # Add a clear chat button
- if st.button("Clear Chat"):
- clear_chat()
-
- # Image upload section.
- image_file = st.file_uploader("Upload an image", type=["jpg", "jpeg", "png"], key="uploaded_image", on_change=on_image_upload)
- if image_file:
- # Display the uploaded image at a standard width.
- st.image(image_file, caption='Uploaded Image.', width=200)
- # Process the uploaded image to get a caption.
- image_data = BytesIO(image_file.getvalue())
- img_desc = get_image_caption(image_data)
- st.write("Image Uploaded Successfully. Ask me anything about it.")
-
- # Initialize the chat engine with the image description.
- chat_engine = create_chat_engine(img_desc, os.environ["GOOGLE_API_KEY"])
-
- # Initialize session state for messages if it doesn't exist
- if "messages" not in st.session_state:
- st.session_state.messages = []
-
- # Display previous messages
- for message in st.session_state.messages:
- with st.chat_message(message["role"]):
- st.markdown(message["content"])
-
- # Handle new user input
- user_input = st.chat_input("Ask me about the image:", key="chat_input")
- if user_input:
- # Append user message to the session state
- st.session_state.messages.append({"role": "user", "content": user_input})
-
- # Display user message immediately
- with st.chat_message("user"):
- st.markdown(user_input)
-
- # Call the chat engine to get the response if an image has been uploaded
- if image_file and user_input:
- try:
- with st.spinner('Waiting for the chat engine to respond...'):
- # Get the response from your chat engine
- response = chat_engine.chat(user_input)
-
- # Append assistant message to the session state
- st.session_state.messages.append({"role": "assistant", "content": response})
-
- # Display the assistant message
- with st.chat_message("assistant"):
- st.markdown(response)
-
- except Exception as e:
- st.error(f'An error occurred: {e}')
- # Optionally, you can choose to break the flow here if a critical error happens
- # return
-
- # Increment the message count and update the cookie
- message_count += 1
- cookie_manager.set('message_count', str(message_count), expires_at=datetime.datetime.now() + datetime.timedelta(days=30))
-
-
-
-
-# Set Replicate and Google API keys (note: in a top-to-bottom Streamlit script these
-# assignments should run before their first use above, e.g. near the top of the file)
-os.environ['REPLICATE_API_TOKEN'] = st.secrets['REPLICATE_API_TOKEN']
-os.environ["GOOGLE_API_KEY"] = st.secrets['GOOGLE_API_KEY']
diff --git a/spaces/AIConsultant/MusicGen/audiocraft/models/unet.py b/spaces/AIConsultant/MusicGen/audiocraft/models/unet.py
deleted file mode 100644
index db4a6df8e309c21fede37abdbe3c862932027641..0000000000000000000000000000000000000000
--- a/spaces/AIConsultant/MusicGen/audiocraft/models/unet.py
+++ /dev/null
@@ -1,214 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-"""
-PyTorch U-Net module used for diffusion.
-"""
-
-from dataclasses import dataclass
-import typing as tp
-
-import torch
-from torch import nn
-from torch.nn import functional as F
-from audiocraft.modules.transformer import StreamingTransformer, create_sin_embedding
-
-
-@dataclass
-class Output:
- sample: torch.Tensor
-
-
-def get_model(cfg, channels: int, side: int, num_steps: int):
- if cfg.model == 'unet':
- return DiffusionUnet(
- chin=channels, num_steps=num_steps, **cfg.diffusion_unet)
- else:
- raise RuntimeError('Not Implemented')
-
-
-class ResBlock(nn.Module):
- def __init__(self, channels: int, kernel: int = 3, norm_groups: int = 4,
- dilation: int = 1, activation: tp.Type[nn.Module] = nn.ReLU,
- dropout: float = 0.):
- super().__init__()
- stride = 1
- padding = dilation * (kernel - stride) // 2
- Conv = nn.Conv1d
- Drop = nn.Dropout1d
- self.norm1 = nn.GroupNorm(norm_groups, channels)
- self.conv1 = Conv(channels, channels, kernel, 1, padding, dilation=dilation)
- self.activation1 = activation()
- self.dropout1 = Drop(dropout)
-
- self.norm2 = nn.GroupNorm(norm_groups, channels)
- self.conv2 = Conv(channels, channels, kernel, 1, padding, dilation=dilation)
- self.activation2 = activation()
- self.dropout2 = Drop(dropout)
-
- def forward(self, x):
- h = self.dropout1(self.conv1(self.activation1(self.norm1(x))))
- h = self.dropout2(self.conv2(self.activation2(self.norm2(h))))
- return x + h
-
-
-class DecoderLayer(nn.Module):
- def __init__(self, chin: int, chout: int, kernel: int = 4, stride: int = 2,
- norm_groups: int = 4, res_blocks: int = 1, activation: tp.Type[nn.Module] = nn.ReLU,
- dropout: float = 0.):
- super().__init__()
- padding = (kernel - stride) // 2
- self.res_blocks = nn.Sequential(
- *[ResBlock(chin, norm_groups=norm_groups, dilation=2**idx, dropout=dropout)
- for idx in range(res_blocks)])
- self.norm = nn.GroupNorm(norm_groups, chin)
- ConvTr = nn.ConvTranspose1d
- self.convtr = ConvTr(chin, chout, kernel, stride, padding, bias=False)
- self.activation = activation()
-
- def forward(self, x: torch.Tensor) -> torch.Tensor:
- x = self.res_blocks(x)
- x = self.norm(x)
- x = self.activation(x)
- x = self.convtr(x)
- return x
-
-
-class EncoderLayer(nn.Module):
- def __init__(self, chin: int, chout: int, kernel: int = 4, stride: int = 2,
- norm_groups: int = 4, res_blocks: int = 1, activation: tp.Type[nn.Module] = nn.ReLU,
- dropout: float = 0.):
- super().__init__()
- padding = (kernel - stride) // 2
- Conv = nn.Conv1d
- self.conv = Conv(chin, chout, kernel, stride, padding, bias=False)
- self.norm = nn.GroupNorm(norm_groups, chout)
- self.activation = activation()
- self.res_blocks = nn.Sequential(
- *[ResBlock(chout, norm_groups=norm_groups, dilation=2**idx, dropout=dropout)
- for idx in range(res_blocks)])
-
- def forward(self, x: torch.Tensor) -> torch.Tensor:
- B, C, T = x.shape
- stride, = self.conv.stride
- pad = (stride - (T % stride)) % stride
- x = F.pad(x, (0, pad))
-
- x = self.conv(x)
- x = self.norm(x)
- x = self.activation(x)
- x = self.res_blocks(x)
- return x
-
-
-class BLSTM(nn.Module):
- """BiLSTM with same hidden units as input dim.
- """
- def __init__(self, dim, layers=2):
- super().__init__()
- self.lstm = nn.LSTM(bidirectional=True, num_layers=layers, hidden_size=dim, input_size=dim)
- self.linear = nn.Linear(2 * dim, dim)
-
- def forward(self, x):
- x = x.permute(2, 0, 1)
- x = self.lstm(x)[0]
- x = self.linear(x)
- x = x.permute(1, 2, 0)
- return x
-
-
-class DiffusionUnet(nn.Module):
- def __init__(self, chin: int = 3, hidden: int = 24, depth: int = 3, growth: float = 2.,
- max_channels: int = 10_000, num_steps: int = 1000, emb_all_layers=False, cross_attention: bool = False,
- bilstm: bool = False, transformer: bool = False,
- codec_dim: tp.Optional[int] = None, **kwargs):
- super().__init__()
- self.encoders = nn.ModuleList()
- self.decoders = nn.ModuleList()
- self.embeddings: tp.Optional[nn.ModuleList] = None
- self.embedding = nn.Embedding(num_steps, hidden)
- if emb_all_layers:
- self.embeddings = nn.ModuleList()
- self.condition_embedding: tp.Optional[nn.Module] = None
- for d in range(depth):
- encoder = EncoderLayer(chin, hidden, **kwargs)
- decoder = DecoderLayer(hidden, chin, **kwargs)
- self.encoders.append(encoder)
- self.decoders.insert(0, decoder)
- if emb_all_layers and d > 0:
- assert self.embeddings is not None
- self.embeddings.append(nn.Embedding(num_steps, hidden))
- chin = hidden
- hidden = min(int(chin * growth), max_channels)
- self.bilstm: tp.Optional[nn.Module]
- if bilstm:
- self.bilstm = BLSTM(chin)
- else:
- self.bilstm = None
- self.use_transformer = transformer
- self.cross_attention = False
- if transformer:
- self.cross_attention = cross_attention
- self.transformer = StreamingTransformer(chin, 8, 6, bias_ff=False, bias_attn=False,
- cross_attention=cross_attention)
-
- self.use_codec = False
- if codec_dim is not None:
- self.conv_codec = nn.Conv1d(codec_dim, chin, 1)
- self.use_codec = True
-
- def forward(self, x: torch.Tensor, step: tp.Union[int, torch.Tensor], condition: tp.Optional[torch.Tensor] = None):
- skips = []
- bs = x.size(0)
- z = x
- view_args = [1]
- if isinstance(step, torch.Tensor):
- step_tensor = step
- else:
- step_tensor = torch.tensor([step], device=x.device, dtype=torch.long).expand(bs)
-
- for idx, encoder in enumerate(self.encoders):
- z = encoder(z)
- if idx == 0:
- z = z + self.embedding(step_tensor).view(bs, -1, *view_args).expand_as(z)
- elif self.embeddings is not None:
- z = z + self.embeddings[idx - 1](step_tensor).view(bs, -1, *view_args).expand_as(z)
-
- skips.append(z)
-
- if self.use_codec: # insert condition in the bottleneck
- assert condition is not None, "Model defined for conditional generation"
- condition_emb = self.conv_codec(condition) # reshape to the bottleneck dim
- assert condition_emb.size(-1) <= 2 * z.size(-1), \
- f"You are downsampling the conditioning with factor >= 2: {condition_emb.size(-1)=} and {z.size(-1)=}"
- if not self.cross_attention:
-
- condition_emb = torch.nn.functional.interpolate(condition_emb, z.size(-1))
- assert z.size() == condition_emb.size()
- z += condition_emb
- cross_attention_src = None
- else:
- cross_attention_src = condition_emb.permute(0, 2, 1) # B, T, C
- B, T, C = cross_attention_src.shape
- positions = torch.arange(T, device=x.device).view(1, -1, 1)
- pos_emb = create_sin_embedding(positions, C, max_period=10_000, dtype=cross_attention_src.dtype)
- cross_attention_src = cross_attention_src + pos_emb
- if self.use_transformer:
- z = self.transformer(z.permute(0, 2, 1), cross_attention_src=cross_attention_src).permute(0, 2, 1)
- else:
- if self.bilstm is None:
- z = torch.zeros_like(z)
- else:
- z = self.bilstm(z)
-
- for decoder in self.decoders:
- s = skips.pop(-1)
- z = z[:, :, :s.shape[2]]
- z = z + s
- z = decoder(z)
-
- z = z[:, :, :x.shape[2]]
- return Output(z)
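For orientation, the core of `DiffusionUnet.forward` is standard U-Net skip bookkeeping: every encoder output is pushed onto a stack, and each decoder pops one, crops the running activation to match, and adds it back before upsampling. A stripped-down, runnable sketch of just that wiring (illustrative module, not part of the file):

```python
import torch
from torch import nn


class TinyUnet1d(nn.Module):
    """Minimal 1D U-Net skeleton showing the skip-connection bookkeeping."""

    def __init__(self, chin: int = 3, hidden: int = 8, depth: int = 2):
        super().__init__()
        self.encoders = nn.ModuleList()
        self.decoders = nn.ModuleList()
        for _ in range(depth):
            self.encoders.append(nn.Conv1d(chin, hidden, kernel_size=4, stride=2, padding=1))
            # insert(0, ...) mirrors the file above: decoders run in reverse order.
            self.decoders.insert(0, nn.ConvTranspose1d(hidden, chin, kernel_size=4, stride=2, padding=1))
            chin, hidden = hidden, hidden * 2

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        skips = []
        z = x
        for enc in self.encoders:
            z = enc(z)
            skips.append(z)                 # remember each resolution
        for dec in self.decoders:
            s = skips.pop()
            z = z[:, :, : s.shape[2]] + s   # crop, then add the skip
            z = dec(z)
        return z[:, :, : x.shape[2]]        # crop back to the input length


out = TinyUnet1d()(torch.randn(2, 3, 64))  # -> shape (2, 3, 64)
```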
diff --git a/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/modules/ema.py b/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/modules/ema.py
deleted file mode 100644
index c8c75af43565f6e140287644aaaefa97dd6e67c5..0000000000000000000000000000000000000000
--- a/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/modules/ema.py
+++ /dev/null
@@ -1,76 +0,0 @@
-import torch
-from torch import nn
-
-
-class LitEma(nn.Module):
- def __init__(self, model, decay=0.9999, use_num_updates=True):
- super().__init__()
- if decay < 0.0 or decay > 1.0:
- raise ValueError('Decay must be between 0 and 1')
-
- self.m_name2s_name = {}
- self.register_buffer('decay', torch.tensor(decay, dtype=torch.float32))
- self.register_buffer('num_updates', torch.tensor(0, dtype=torch.int) if use_num_updates
- else torch.tensor(-1, dtype=torch.int))
-
- for name, p in model.named_parameters():
- if p.requires_grad:
- # remove '.' characters, which are not allowed in buffer names
- s_name = name.replace('.', '')
- self.m_name2s_name.update({name: s_name})
- self.register_buffer(s_name, p.clone().detach().data)
-
- self.collected_params = []
-
- def forward(self, model):
- decay = self.decay
-
- if self.num_updates >= 0:
- self.num_updates += 1
- decay = min(self.decay, (1 + self.num_updates) / (10 + self.num_updates))
-
- one_minus_decay = 1.0 - decay
-
- with torch.no_grad():
- m_param = dict(model.named_parameters())
- shadow_params = dict(self.named_buffers())
-
- for key in m_param:
- if m_param[key].requires_grad:
- sname = self.m_name2s_name[key]
- shadow_params[sname] = shadow_params[sname].type_as(m_param[key])
- shadow_params[sname].sub_(one_minus_decay * (shadow_params[sname] - m_param[key]))
- else:
- assert key not in self.m_name2s_name
-
- def copy_to(self, model):
- m_param = dict(model.named_parameters())
- shadow_params = dict(self.named_buffers())
- for key in m_param:
- if m_param[key].requires_grad:
- m_param[key].data.copy_(shadow_params[self.m_name2s_name[key]].data)
- else:
- assert key not in self.m_name2s_name
-
- def store(self, parameters):
- """
- Save the current parameters for restoring later.
- Args:
- parameters: Iterable of `torch.nn.Parameter`; the parameters to be
- temporarily stored.
- """
- self.collected_params = [param.clone() for param in parameters]
-
- def restore(self, parameters):
- """
- Restore the parameters stored with the `store` method.
- Useful to validate the model with EMA parameters without affecting the
- original optimization process. Store the parameters before the
- `copy_to` method. After validation (or model saving), use this to
- restore the former parameters.
- Args:
- parameters: Iterable of `torch.nn.Parameter`; the parameters to be
- updated with the stored parameters.
- """
- for c_param, param in zip(self.collected_params, parameters):
- param.data.copy_(c_param.data)
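A typical way to drive this helper in a training loop, sketched with a toy model; only the `LitEma` methods come from the module above, the loop itself is hypothetical:

```python
import torch
from torch import nn

from ldm.modules.ema import LitEma  # module path per the file above

model = nn.Linear(4, 2)
ema = LitEma(model, decay=0.999)

for _ in range(3):  # toy "training" steps
    model(torch.randn(8, 4)).sum().backward()
    for p in model.parameters():  # stand-in for optimizer.step()
        p.data -= 0.1 * p.grad
        p.grad = None
    ema(model)  # fold the new weights into the shadow copy

# Swap in the EMA weights for evaluation, then restore the raw ones.
ema.store(model.parameters())
ema.copy_to(model)
# ... run validation here ...
ema.restore(model.parameters())
```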
diff --git a/spaces/Ababababababbababa/Ashaar/poetry_diacritizer/test.py b/spaces/Ababababababbababa/Ashaar/poetry_diacritizer/test.py
deleted file mode 100644
index b230ddf5ba4901aee0cf5e5d102fcca328038eeb..0000000000000000000000000000000000000000
--- a/spaces/Ababababababbababa/Ashaar/poetry_diacritizer/test.py
+++ /dev/null
@@ -1,31 +0,0 @@
-import argparse
-import random
-from tester import DiacritizationTester
-
-import numpy as np
-import torch
-
-
-SEED = 1234
-random.seed(SEED)
-np.random.seed(SEED)
-torch.manual_seed(SEED)
-torch.cuda.manual_seed(SEED)
-torch.backends.cudnn.deterministic = True
-torch.backends.cudnn.benchmark = False
-
-
-def train_parser():
- parser = argparse.ArgumentParser()
- parser.add_argument("--model", dest="model_kind", type=str, required=True)
- parser.add_argument("--config", dest="config", type=str, required=True)
- parser.add_argument("--model_path", dest="model_path", type=str, required=False)
- parser.add_argument("--test", dest="test", type=bool)
- return parser
-
-
-parser = train_parser()
-args = parser.parse_args()
-
-tester = DiacritizationTester(args.config, args.model_kind)
-tester.run()
diff --git a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/You.py b/spaces/AchyuthGamer/OpenGPT/g4f/Provider/You.py
deleted file mode 100644
index 1afd18be4560fe684744d10e34ddcdd833238178..0000000000000000000000000000000000000000
--- a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/You.py
+++ /dev/null
@@ -1,40 +0,0 @@
-from __future__ import annotations
-
-import json
-
-from ..requests import StreamSession
-from ..typing import AsyncGenerator, Messages
-from .base_provider import AsyncGeneratorProvider, format_prompt
-
-
-class You(AsyncGeneratorProvider):
- url = "https://you.com"
- working = True
- supports_gpt_35_turbo = True
-
-
- @classmethod
- async def create_async_generator(
- cls,
- model: str,
- messages: Messages,
- proxy: str = None,
- timeout: int = 120,
- **kwargs,
- ) -> AsyncGenerator:
- async with StreamSession(proxies={"https": proxy}, impersonate="chrome107", timeout=timeout) as session:
- headers = {
- "Accept": "text/event-stream",
- "Referer": f"{cls.url}/search?fromSearchBar=true&tbm=youchat",
- }
- data = {"q": format_prompt(messages), "domain": "youchat", "chat": ""}
- async with session.get(
- f"{cls.url}/api/streamingSearch",
- params=data,
- headers=headers
- ) as response:
- response.raise_for_status()
- start = b'data: {"youChatToken": '
- async for line in response.iter_lines():
- if line.startswith(start):
- yield json.loads(line[len(start):-1])
\ No newline at end of file
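The streaming loop above recovers each token by slicing off a fixed `data:` prefix and the trailing brace, then JSON-decoding what remains. A self-contained sketch of that parsing step on canned bytes (no network; the sample lines are fabricated):

```python
import json

# Example event-stream lines of the shape the provider expects (fabricated).
lines = [
    b'data: {"youChatToken": "Hel"}',
    b'event: ping',
    b'data: {"youChatToken": "lo!"}',
]

start = b'data: {"youChatToken": '
tokens = []
for line in lines:
    if line.startswith(start):
        # Strip the prefix and the trailing "}" to leave a JSON string literal.
        tokens.append(json.loads(line[len(start):-1]))

print("".join(tokens))  # -> Hello!
```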
diff --git a/spaces/Adapter/T2I-Adapter/train_seg.py b/spaces/Adapter/T2I-Adapter/train_seg.py
deleted file mode 100644
index 82ed0724ef757a93e9f9fdd4ef3ada4a0203f906..0000000000000000000000000000000000000000
--- a/spaces/Adapter/T2I-Adapter/train_seg.py
+++ /dev/null
@@ -1,372 +0,0 @@
-import cv2
-import torch
-import os
-from basicsr.utils import img2tensor, tensor2img, scandir, get_time_str, get_root_logger, get_env_info
-from ldm.data.dataset_coco import dataset_coco_mask_color
-import argparse
-from ldm.models.diffusion.ddim import DDIMSampler
-from ldm.models.diffusion.plms import PLMSSampler
-from ldm.models.diffusion.dpm_solver import DPMSolverSampler
-from omegaconf import OmegaConf
-from ldm.util import instantiate_from_config
-from ldm.modules.encoders.adapter import Adapter
-from PIL import Image
-import numpy as np
-import torch.nn as nn
-import matplotlib.pyplot as plt
-import time
-import os.path as osp
-from basicsr.utils.options import copy_opt_file, dict2str
-import logging
-from dist_util import init_dist, master_only, get_bare_model, get_dist_info
-
-def load_model_from_config(config, ckpt, verbose=False):
- print(f"Loading model from {ckpt}")
- pl_sd = torch.load(ckpt, map_location="cpu")
- if "global_step" in pl_sd:
- print(f"Global Step: {pl_sd['global_step']}")
- sd = pl_sd["state_dict"]
- model = instantiate_from_config(config.model)
- m, u = model.load_state_dict(sd, strict=False)
- if len(m) > 0 and verbose:
- print("missing keys:")
- print(m)
- if len(u) > 0 and verbose:
- print("unexpected keys:")
- print(u)
-
- model.cuda()
- model.eval()
- return model
-
-@master_only
-def mkdir_and_rename(path):
- """mkdirs. If path exists, rename it with timestamp and create a new one.
-
- Args:
- path (str): Folder path.
- """
- if osp.exists(path):
- new_name = path + '_archived_' + get_time_str()
- print(f'Path already exists. Rename it to {new_name}', flush=True)
- os.rename(path, new_name)
- os.makedirs(path, exist_ok=True)
- os.makedirs(osp.join(experiments_root, 'models'))
- os.makedirs(osp.join(experiments_root, 'training_states'))
- os.makedirs(osp.join(experiments_root, 'visualization'))
-
-def load_resume_state(opt):
- resume_state_path = None
- if opt.auto_resume:
- state_path = osp.join('experiments', opt.name, 'training_states')
- if osp.isdir(state_path):
- states = list(scandir(state_path, suffix='state', recursive=False, full_path=False))
- if len(states) != 0:
- states = [float(v.split('.state')[0]) for v in states]
- resume_state_path = osp.join(state_path, f'{max(states):.0f}.state')
- opt.resume_state_path = resume_state_path
- # else:
- # if opt['path'].get('resume_state'):
- # resume_state_path = opt['path']['resume_state']
-
- if resume_state_path is None:
- resume_state = None
- else:
- device_id = torch.cuda.current_device()
- resume_state = torch.load(resume_state_path, map_location=lambda storage, loc: storage.cuda(device_id))
- # check_resume(opt, resume_state['iter'])
- return resume_state
-
-parser = argparse.ArgumentParser()
-parser.add_argument(
- "--bsize",
- type=int,
- default=8,
- help="the prompt to render"
-)
-parser.add_argument(
- "--epochs",
- type=int,
- default=10000,
- help="the prompt to render"
-)
-parser.add_argument(
- "--num_workers",
- type=int,
- default=8,
- help="the prompt to render"
-)
-parser.add_argument(
- "--use_shuffle",
- type=bool,
- default=True,
- help="the prompt to render"
-)
-parser.add_argument(
- "--dpm_solver",
- action='store_true',
- help="use dpm_solver sampling",
-)
-parser.add_argument(
- "--plms",
- action='store_true',
- help="use plms sampling",
-)
-parser.add_argument(
- "--auto_resume",
- action='store_true',
- help="use plms sampling",
-)
-parser.add_argument(
- "--ckpt",
- type=str,
- default="ckp/sd-v1-4.ckpt",
- help="path to checkpoint of model",
-)
-parser.add_argument(
- "--config",
- type=str,
- default="configs/stable-diffusion/train_mask.yaml",
- help="path to config which constructs model",
-)
-parser.add_argument(
- "--print_fq",
- type=int,
- default=100,
- help="path to config which constructs model",
-)
-parser.add_argument(
- "--H",
- type=int,
- default=512,
- help="image height, in pixel space",
-)
-parser.add_argument(
- "--W",
- type=int,
- default=512,
- help="image width, in pixel space",
-)
-parser.add_argument(
- "--C",
- type=int,
- default=4,
- help="latent channels",
-)
-parser.add_argument(
- "--f",
- type=int,
- default=8,
- help="downsampling factor",
-)
-parser.add_argument(
- "--ddim_steps",
- type=int,
- default=50,
- help="number of ddim sampling steps",
-)
-parser.add_argument(
- "--n_samples",
- type=int,
- default=1,
- help="how many samples to produce for each given prompt. A.k.a. batch size",
-)
-parser.add_argument(
- "--ddim_eta",
- type=float,
- default=0.0,
- help="ddim eta (eta=0.0 corresponds to deterministic sampling",
-)
-parser.add_argument(
- "--scale",
- type=float,
- default=7.5,
- help="unconditional guidance scale: eps = eps(x, empty) + scale * (eps(x, cond) - eps(x, empty))",
-)
-parser.add_argument(
- "--gpus",
- default=[0,1,2,3],
- help="gpu idx",
-)
-parser.add_argument(
- '--local_rank',
- default=0,
- type=int,
- help='node rank for distributed training'
-)
-parser.add_argument(
- '--launcher',
- default='pytorch',
- type=str,
- help='job launcher for distributed training'
-)
-opt = parser.parse_args()
-
-if __name__ == '__main__':
- config = OmegaConf.load(f"{opt.config}")
- opt.name = config['name']
-
- # distributed setting
- init_dist(opt.launcher)
- torch.backends.cudnn.benchmark = True
- device='cuda'
- torch.cuda.set_device(opt.local_rank)
-
- # dataset
- path_json_train = 'coco_stuff/mask/annotations/captions_train2017.json'
- path_json_val = 'coco_stuff/mask/annotations/captions_val2017.json'
- train_dataset = dataset_coco_mask_color(path_json_train,
- root_path_im='coco/train2017',
- root_path_mask='coco_stuff/mask/train2017_color',
- image_size=512
- )
- train_sampler = torch.utils.data.distributed.DistributedSampler(train_dataset)
- val_dataset = dataset_coco_mask_color(path_json_val,
- root_path_im='coco/val2017',
- root_path_mask='coco_stuff/mask/val2017_color',
- image_size=512
- )
- train_dataloader = torch.utils.data.DataLoader(
- train_dataset,
- batch_size=opt.bsize,
- shuffle=(train_sampler is None),
- num_workers=opt.num_workers,
- pin_memory=True,
- sampler=train_sampler)
- val_dataloader = torch.utils.data.DataLoader(
- val_dataset,
- batch_size=1,
- shuffle=False,
- num_workers=1,
- pin_memory=False)
-
- # stable diffusion
- model = load_model_from_config(config, f"{opt.ckpt}").to(device)
-
- # segmentation-mask condition encoder (adapter)
- model_ad = Adapter(cin=int(3*64), channels=[320, 640, 1280, 1280][:4], nums_rb=2, ksize=1, sk=True, use_conv=False).to(device)
-
-
- # to gpus
- model_ad = torch.nn.parallel.DistributedDataParallel(
- model_ad,
- device_ids=[opt.local_rank],
- output_device=opt.local_rank)
- model = torch.nn.parallel.DistributedDataParallel(
- model,
- device_ids=[opt.local_rank],
- output_device=opt.local_rank)
- # device_ids=[torch.cuda.current_device()])
-
- # optimizer
- params = list(model_ad.parameters())
- optimizer = torch.optim.AdamW(params, lr=config['training']['lr'])
-
- experiments_root = osp.join('experiments', opt.name)
-
- # resume state
- resume_state = load_resume_state(opt)
- if resume_state is None:
- mkdir_and_rename(experiments_root)
- start_epoch = 0
- current_iter = 0
- # WARNING: should not use get_root_logger in the above codes, including the called functions
- # Otherwise the logger will not be properly initialized
- log_file = osp.join(experiments_root, f"train_{opt.name}_{get_time_str()}.log")
- logger = get_root_logger(logger_name='basicsr', log_level=logging.INFO, log_file=log_file)
- logger.info(get_env_info())
- logger.info(dict2str(config))
- else:
- # WARNING: should not use get_root_logger in the above codes, including the called functions
- # Otherwise the logger will not be properly initialized
- log_file = osp.join(experiments_root, f"train_{opt.name}_{get_time_str()}.log")
- logger = get_root_logger(logger_name='basicsr', log_level=logging.INFO, log_file=log_file)
- logger.info(get_env_info())
- logger.info(dict2str(config))
- resume_optimizers = resume_state['optimizers']
- optimizer.load_state_dict(resume_optimizers)
- logger.info(f"Resuming training from epoch: {resume_state['epoch']}, " f"iter: {resume_state['iter']}.")
- start_epoch = resume_state['epoch']
- current_iter = resume_state['iter']
-
- # copy the yml file to the experiment root
- copy_opt_file(opt.config, experiments_root)
-
- # training
- logger.info(f'Start training from epoch: {start_epoch}, iter: {current_iter}')
- for epoch in range(start_epoch, opt.epochs):
- train_dataloader.sampler.set_epoch(epoch)
- # train
- for _, data in enumerate(train_dataloader):
- current_iter += 1
- with torch.no_grad():
- c = model.module.get_learned_conditioning(data['sentence'])
- z = model.module.encode_first_stage((data['im']*2-1.).cuda(non_blocking=True))
- z = model.module.get_first_stage_encoding(z)
-
- mask = data['mask'].cuda(non_blocking=True)  # move the condition to the adapter's device
- optimizer.zero_grad()
- model.zero_grad()
- features_adapter = model_ad(mask)
- l_pixel, loss_dict = model(z, c=c, features_adapter = features_adapter)
- l_pixel.backward()
- optimizer.step()
-
- if (current_iter+1)%opt.print_fq == 0:
- logger.info(loss_dict)
-
- # save checkpoint
- rank, _ = get_dist_info()
- if (rank==0) and ((current_iter+1)%config['training']['save_freq'] == 0):
- save_filename = f'model_ad_{current_iter+1}.pth'
- save_path = os.path.join(experiments_root, 'models', save_filename)
- save_dict = {}
- model_ad_bare = get_bare_model(model_ad)
- state_dict = model_ad_bare.state_dict()
- for key, param in state_dict.items():
- if key.startswith('module.'): # remove unnecessary 'module.'
- key = key[7:]
- save_dict[key] = param.cpu()
- torch.save(save_dict, save_path)
- # save state
- state = {'epoch': epoch, 'iter': current_iter+1, 'optimizers': optimizer.state_dict()}
- save_filename = f'{current_iter+1}.state'
- save_path = os.path.join(experiments_root, 'training_states', save_filename)
- torch.save(state, save_path)
-
- # val
- rank, _ = get_dist_info()
- if rank==0:
- for data in val_dataloader:
- with torch.no_grad():
- if opt.dpm_solver:
- sampler = DPMSolverSampler(model.module)
- elif opt.plms:
- sampler = PLMSSampler(model.module)
- else:
- sampler = DDIMSampler(model.module)
- c = model.module.get_learned_conditioning(data['sentence'])
- mask = data['mask'].cuda(non_blocking=True)
- im_mask = tensor2img(mask)
- cv2.imwrite(os.path.join(experiments_root, 'visualization', 'mask_%04d.png'%epoch), im_mask)
- features_adapter = model_ad(mask)
- shape = [opt.C, opt.H // opt.f, opt.W // opt.f]
- samples_ddim, _ = sampler.sample(S=opt.ddim_steps,
- conditioning=c,
- batch_size=opt.n_samples,
- shape=shape,
- verbose=False,
- unconditional_guidance_scale=opt.scale,
- unconditional_conditioning=model.module.get_learned_conditioning(opt.n_samples * [""]),
- eta=opt.ddim_eta,
- x_T=None,
- features_adapter=features_adapter)
- x_samples_ddim = model.module.decode_first_stage(samples_ddim)
- x_samples_ddim = torch.clamp((x_samples_ddim + 1.0) / 2.0, min=0.0, max=1.0)
- x_samples_ddim = x_samples_ddim.cpu().permute(0, 2, 3, 1).numpy()
- for id_sample, x_sample in enumerate(x_samples_ddim):
- x_sample = 255.*x_sample
- img = x_sample.astype(np.uint8)
- img = cv2.putText(img.copy(), data['sentence'][0], (10,30), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0,255,0), 2)
- cv2.imwrite(os.path.join(experiments_root, 'visualization', 'sample_e%04d_s%04d.png'%(epoch, id_sample)), img[:,:,::-1])
- break
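One piece worth isolating is how `load_resume_state` picks the newest checkpoint: it parses the iteration number out of each `*.state` filename and takes the numeric maximum, rather than sorting lexically. A standalone sketch of that selection logic (pathlib-based; directory layout assumed):

```python
from pathlib import Path
from typing import Optional


def latest_state_file(state_dir: str) -> Optional[Path]:
    """Return the training-state file with the highest iteration number."""
    files = list(Path(state_dir).glob("*.state"))
    if not files:
        return None
    # Filenames look like "12000.state"; compare numerically, not lexically,
    # so "9000.state" does not beat "12000.state".
    return max(files, key=lambda p: float(p.stem))


print(latest_state_file("experiments/demo/training_states"))
```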
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/buttons/RemoveChildMethods.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/buttons/RemoveChildMethods.js
deleted file mode 100644
index f07d12dc1e9506ed3a7027ddb9878f0bbd0d9596..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/buttons/RemoveChildMethods.js
+++ /dev/null
@@ -1,55 +0,0 @@
-import Sizer from '../sizer/Sizer.js';
-import IsArray from '../../../plugins/utils/object/IsArray.js';
-
-const SizerRemove = Sizer.prototype.remove;
-const SizerClear = Sizer.prototype.clear;
-
-var Remove = function (gameObject, destroyChild) {
- if (this.getParentSizer(gameObject) !== this) {
- return this;
- }
-
- this.buttonGroup.remove(gameObject);
- SizerRemove.call(this, gameObject, destroyChild);
- return this;
-};
-
-export default {
- remove(gameObject, destroyChild) {
- // Remove gameObject whether it is a button or not
- if (IsArray(gameObject)) {
- var gameObjects = gameObject;
- for (var i = 0, cnt = gameObjects.length; i < cnt; i++) {
- Remove.call(this, gameObjects[i], destroyChild);
- }
- } else {
- Remove.call(this, gameObject, destroyChild);
- }
- return this;
- },
-
- clear(destroyChild) {
- var buttons = this.buttonGroup.buttons;
- buttons.length = 0;
- SizerClear.call(this, destroyChild);
- return this;
- },
-
- removeButton(gameObject, destroyChild) {
- var gameObject = this.getButton(gameObject);
- // Don't remove this gameObject, it is not a button
- if (!gameObject) {
- return this;
- }
- this.remove(gameObject, destroyChild);
- return this;
- },
-
- clearButtons(destroyChild) {
- var buttons = this.buttonGroup.buttons;
- for (var i = buttons.length - 1; i >= 0; i--) {
- Remove.call(this, buttons[i], destroyChild);
- }
- return this;
- }
-}
\ No newline at end of file
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/customprogress/Factory.d.ts b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/customprogress/Factory.d.ts
deleted file mode 100644
index 54a5919382c93759cb9fd74af8b877dc93e24903..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/customprogress/Factory.d.ts
+++ /dev/null
@@ -1,5 +0,0 @@
-import CustomProgress from "./CustomProgress";
-
-export default function (
- config?: CustomProgress.IConfig
-): CustomProgress;
\ No newline at end of file
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/customprogress/Factory.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/customprogress/Factory.js
deleted file mode 100644
index 9be40cdee19d776e5eed254fa0be4cff4cf02d78..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/customprogress/Factory.js
+++ /dev/null
@@ -1,13 +0,0 @@
-import CustomProgress from './CustomProgress.js';
-import ObjectFactory from '../ObjectFactory.js';
-import SetValue from '../../../plugins/utils/object/SetValue.js';
-
-ObjectFactory.register('customProgress', function (x, y, width, height, config) {
- var gameObject = new CustomProgress(this.scene, x, y, width, height, config);
- this.scene.add.existing(gameObject);
- return gameObject;
-});
-
-SetValue(window, 'RexPlugins.UI.CustomProgress', CustomProgress);
-
-export default CustomProgress;
\ No newline at end of file
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/space/Factory.d.ts b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/space/Factory.d.ts
deleted file mode 100644
index 2a7230ccc2713f61640536d2e64d94d858c3c48a..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/space/Factory.d.ts
+++ /dev/null
@@ -1,3 +0,0 @@
-import Space from './Space';
-
-export default function (): Space;
diff --git a/spaces/Andy1621/uniformer_image_detection/configs/cascade_rcnn/cascade_mask_rcnn_r101_caffe_fpn_1x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/cascade_rcnn/cascade_mask_rcnn_r101_caffe_fpn_1x_coco.py
deleted file mode 100644
index f42165d9fd14600858681e695de7927aac865652..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/configs/cascade_rcnn/cascade_mask_rcnn_r101_caffe_fpn_1x_coco.py
+++ /dev/null
@@ -1,4 +0,0 @@
-_base_ = './cascade_mask_rcnn_r50_caffe_fpn_1x_coco.py'
-model = dict(
- pretrained='open-mmlab://detectron2/resnet101_caffe',
- backbone=dict(depth=101))
diff --git a/spaces/Andy1621/uniformer_image_detection/configs/dcn/mask_rcnn_r50_fpn_dconv_c3-c5_1x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/dcn/mask_rcnn_r50_fpn_dconv_c3-c5_1x_coco.py
deleted file mode 100644
index ababe58dc3fdfbbc6c366f48271db31bf6e2e9e2..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/configs/dcn/mask_rcnn_r50_fpn_dconv_c3-c5_1x_coco.py
+++ /dev/null
@@ -1,5 +0,0 @@
-_base_ = '../mask_rcnn/mask_rcnn_r50_fpn_1x_coco.py'
-model = dict(
- backbone=dict(
- dcn=dict(type='DCN', deform_groups=1, fallback_on_stride=False),
- stage_with_dcn=(False, True, True, True)))
diff --git a/spaces/Andy1621/uniformer_image_detection/mmdet/models/losses/gfocal_loss.py b/spaces/Andy1621/uniformer_image_detection/mmdet/models/losses/gfocal_loss.py
deleted file mode 100644
index 9d3b8833dc50c76f6741db5341dbf8da3402d07b..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/mmdet/models/losses/gfocal_loss.py
+++ /dev/null
@@ -1,188 +0,0 @@
-import mmcv
-import torch.nn as nn
-import torch.nn.functional as F
-
-from ..builder import LOSSES
-from .utils import weighted_loss
-
-
-@mmcv.jit(derivate=True, coderize=True)
-@weighted_loss
-def quality_focal_loss(pred, target, beta=2.0):
- r"""Quality Focal Loss (QFL) is from `Generalized Focal Loss: Learning
- Qualified and Distributed Bounding Boxes for Dense Object Detection
- <https://arxiv.org/abs/2006.04388>`_.
-
- Args:
- pred (torch.Tensor): Predicted joint representation of classification
- and quality (IoU) estimation with shape (N, C), C is the number of
- classes.
- target (tuple([torch.Tensor])): Target category label with shape (N,)
- and target quality label with shape (N,).
- beta (float): The beta parameter for calculating the modulating factor.
- Defaults to 2.0.
-
- Returns:
- torch.Tensor: Loss tensor with shape (N,).
- """
- assert len(target) == 2, """target for QFL must be a tuple of two elements,
- including category label and quality label, respectively"""
- # label denotes the category id, score denotes the quality score
- label, score = target
-
- # negatives are supervised by 0 quality score
- pred_sigmoid = pred.sigmoid()
- scale_factor = pred_sigmoid
- zerolabel = scale_factor.new_zeros(pred.shape)
- loss = F.binary_cross_entropy_with_logits(
- pred, zerolabel, reduction='none') * scale_factor.pow(beta)
-
- # FG cat_id: [0, num_classes -1], BG cat_id: num_classes
- bg_class_ind = pred.size(1)
- pos = ((label >= 0) & (label < bg_class_ind)).nonzero().squeeze(1)
- pos_label = label[pos].long()
- # positives are supervised by bbox quality (IoU) score
- scale_factor = score[pos] - pred_sigmoid[pos, pos_label]
- loss[pos, pos_label] = F.binary_cross_entropy_with_logits(
- pred[pos, pos_label], score[pos],
- reduction='none') * scale_factor.abs().pow(beta)
-
- loss = loss.sum(dim=1, keepdim=False)
- return loss
-
-
-@mmcv.jit(derivate=True, coderize=True)
-@weighted_loss
-def distribution_focal_loss(pred, label):
- r"""Distribution Focal Loss (DFL) is from `Generalized Focal Loss: Learning
- Qualified and Distributed Bounding Boxes for Dense Object Detection
- <https://arxiv.org/abs/2006.04388>`_.
-
- Args:
- pred (torch.Tensor): Predicted general distribution of bounding boxes
- (before softmax) with shape (N, n+1), n is the max value of the
- integral set `{0, ..., n}` in paper.
- label (torch.Tensor): Target distance label for bounding boxes with
- shape (N,).
-
- Returns:
- torch.Tensor: Loss tensor with shape (N,).
- """
- dis_left = label.long()
- dis_right = dis_left + 1
- weight_left = dis_right.float() - label
- weight_right = label - dis_left.float()
- loss = F.cross_entropy(pred, dis_left, reduction='none') * weight_left \
- + F.cross_entropy(pred, dis_right, reduction='none') * weight_right
- return loss
-
-
-@LOSSES.register_module()
-class QualityFocalLoss(nn.Module):
- r"""Quality Focal Loss (QFL) is a variant of `Generalized Focal Loss:
- Learning Qualified and Distributed Bounding Boxes for Dense Object
- Detection <https://arxiv.org/abs/2006.04388>`_.
-
- Args:
- use_sigmoid (bool): Whether sigmoid operation is conducted in QFL.
- Defaults to True.
- beta (float): The beta parameter for calculating the modulating factor.
- Defaults to 2.0.
- reduction (str): Options are "none", "mean" and "sum".
- loss_weight (float): Loss weight of current loss.
- """
-
- def __init__(self,
- use_sigmoid=True,
- beta=2.0,
- reduction='mean',
- loss_weight=1.0):
- super(QualityFocalLoss, self).__init__()
- assert use_sigmoid is True, 'Only sigmoid in QFL supported now.'
- self.use_sigmoid = use_sigmoid
- self.beta = beta
- self.reduction = reduction
- self.loss_weight = loss_weight
-
- def forward(self,
- pred,
- target,
- weight=None,
- avg_factor=None,
- reduction_override=None):
- """Forward function.
-
- Args:
- pred (torch.Tensor): Predicted joint representation of
- classification and quality (IoU) estimation with shape (N, C),
- C is the number of classes.
- target (tuple([torch.Tensor])): Target category label with shape
- (N,) and target quality label with shape (N,).
- weight (torch.Tensor, optional): The weight of loss for each
- prediction. Defaults to None.
- avg_factor (int, optional): Average factor that is used to average
- the loss. Defaults to None.
- reduction_override (str, optional): The reduction method used to
- override the original reduction method of the loss.
- Defaults to None.
- """
- assert reduction_override in (None, 'none', 'mean', 'sum')
- reduction = (
- reduction_override if reduction_override else self.reduction)
- if self.use_sigmoid:
- loss_cls = self.loss_weight * quality_focal_loss(
- pred,
- target,
- weight,
- beta=self.beta,
- reduction=reduction,
- avg_factor=avg_factor)
- else:
- raise NotImplementedError
- return loss_cls
-
-
-@LOSSES.register_module()
-class DistributionFocalLoss(nn.Module):
- r"""Distribution Focal Loss (DFL) is a variant of `Generalized Focal Loss:
- Learning Qualified and Distributed Bounding Boxes for Dense Object
- Detection <https://arxiv.org/abs/2006.04388>`_.
-
- Args:
- reduction (str): Options are `'none'`, `'mean'` and `'sum'`.
- loss_weight (float): Loss weight of current loss.
- """
-
- def __init__(self, reduction='mean', loss_weight=1.0):
- super(DistributionFocalLoss, self).__init__()
- self.reduction = reduction
- self.loss_weight = loss_weight
-
- def forward(self,
- pred,
- target,
- weight=None,
- avg_factor=None,
- reduction_override=None):
- """Forward function.
-
- Args:
- pred (torch.Tensor): Predicted general distribution of bounding
- boxes (before softmax) with shape (N, n+1), n is the max value
- of the integral set `{0, ..., n}` in paper.
- target (torch.Tensor): Target distance label for bounding boxes
- with shape (N,).
- weight (torch.Tensor, optional): The weight of loss for each
- prediction. Defaults to None.
- avg_factor (int, optional): Average factor that is used to average
- the loss. Defaults to None.
- reduction_override (str, optional): The reduction method used to
- override the original reduction method of the loss.
- Defaults to None.
- """
- assert reduction_override in (None, 'none', 'mean', 'sum')
- reduction = (
- reduction_override if reduction_override else self.reduction)
- loss_cls = self.loss_weight * distribution_focal_loss(
- pred, target, weight, reduction=reduction, avg_factor=avg_factor)
- return loss_cls
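To make the expected shapes concrete, here is a toy invocation of `quality_focal_loss` (random inputs; assumes mmcv/mmdet are installed so the `@weighted_loss` wrapper adds the usual `weight`/`reduction`/`avg_factor` arguments and mean-reduces by default):

```python
import torch

from mmdet.models.losses.gfocal_loss import quality_focal_loss  # path per the file above

num_classes, n = 4, 6
pred = torch.randn(n, num_classes)               # joint cls-quality logits
label = torch.randint(0, num_classes + 1, (n,))  # value == num_classes means background
score = torch.rand(n)                            # IoU quality targets for positives

# target is the (label, score) tuple the docstring describes.
loss = quality_focal_loss(pred, (label, score), beta=2.0)
print(loss)  # scalar, mean-reduced over the N predictions
```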
diff --git a/spaces/AngoHF/ANGO-Leaderboard/components/top.py b/spaces/AngoHF/ANGO-Leaderboard/components/top.py
deleted file mode 100644
index 941ed3874abc2f05b433dbf35411c75422b842f1..0000000000000000000000000000000000000000
--- a/spaces/AngoHF/ANGO-Leaderboard/components/top.py
+++ /dev/null
@@ -1,13 +0,0 @@
-import gradio as gr
-
-from assets.content import TITLE, INTRODUCTION_TEXT
-from assets.path import SEASON
-
-
-def create_top():
- gr.HTML(TITLE)
- gr.Markdown(INTRODUCTION_TEXT, elem_classes="markdown-text")
- with gr.Row():
- season_dropdown = gr.Dropdown(choices=list(SEASON), value="latest", label="Season Select")
- language_dropdown = gr.Dropdown(choices=['en', 'zh'], value='en', label='Language Select')
- return {"season": season_dropdown, "language": language_dropdown}
diff --git a/spaces/AnimalEquality/chatbot/app.py b/spaces/AnimalEquality/chatbot/app.py
deleted file mode 100644
index 4f256467dc20074ea0ddbbbcf267323ef180170e..0000000000000000000000000000000000000000
--- a/spaces/AnimalEquality/chatbot/app.py
+++ /dev/null
@@ -1,21 +0,0 @@
-from lv_recipe_chatbot.app import create_demo, ConversationBot
-from lv_recipe_chatbot.ingredient_vision import (
- VeganIngredientFinder,
- BlipImageCaptioning,
-)
-import os
-
-
-# for Hugging Face
-
-if __name__ == "__main__":
- vegan_ingred_finder = VeganIngredientFinder()
- img_cap = BlipImageCaptioning("cpu")
- demo = create_demo(
- ConversationBot(
- vegan_ingred_finder=vegan_ingred_finder, img_cap=img_cap, verbose=True
- )
- )
- demo.launch(
- auth=(os.environ["GRADIO_DEMO_USERNAME"], os.environ["GRADIO_DEMO_PASSWORD"])
- )
diff --git a/spaces/Anindya/Marketing_Campaign_LLM/README.md b/spaces/Anindya/Marketing_Campaign_LLM/README.md
deleted file mode 100644
index 00c52c28dda23b424f347d6cae756ad951464356..0000000000000000000000000000000000000000
--- a/spaces/Anindya/Marketing_Campaign_LLM/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Marketing Campaign LLM
-emoji: 📚
-colorFrom: red
-colorTo: yellow
-sdk: streamlit
-sdk_version: 1.26.0
-app_file: app.py
-pinned: false
----
-
-# Marketing_Campaign_LLM
-A simple marketing campaign app using an LLM
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/dotenv/main.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/dotenv/main.py
deleted file mode 100644
index f40c20ea202b283260a278bc38b0c63a8e3efc1e..0000000000000000000000000000000000000000
--- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/dotenv/main.py
+++ /dev/null
@@ -1,382 +0,0 @@
-import io
-import logging
-import os
-import shutil
-import sys
-import tempfile
-from collections import OrderedDict
-from contextlib import contextmanager
-from typing import (IO, Dict, Iterable, Iterator, Mapping, Optional, Tuple,
- Union)
-
-from .parser import Binding, parse_stream
-from .variables import parse_variables
-
-# A type alias for a string path to be used for the paths in this file.
-# These paths may flow to `open()` and `shutil.move()`; `shutil.move()`
-# only accepts string paths, not byte paths or file descriptors. See
-# https://github.com/python/typeshed/pull/6832.
-StrPath = Union[str, 'os.PathLike[str]']
-
-logger = logging.getLogger(__name__)
-
-
-def with_warn_for_invalid_lines(mappings: Iterator[Binding]) -> Iterator[Binding]:
- for mapping in mappings:
- if mapping.error:
- logger.warning(
- "Python-dotenv could not parse statement starting at line %s",
- mapping.original.line,
- )
- yield mapping
-
-
-class DotEnv:
- def __init__(
- self,
- dotenv_path: Optional[StrPath],
- stream: Optional[IO[str]] = None,
- verbose: bool = False,
- encoding: Optional[str] = None,
- interpolate: bool = True,
- override: bool = True,
- ) -> None:
- self.dotenv_path: Optional[StrPath] = dotenv_path
- self.stream: Optional[IO[str]] = stream
- self._dict: Optional[Dict[str, Optional[str]]] = None
- self.verbose: bool = verbose
- self.encoding: Optional[str] = encoding
- self.interpolate: bool = interpolate
- self.override: bool = override
-
- @contextmanager
- def _get_stream(self) -> Iterator[IO[str]]:
- if self.dotenv_path and os.path.isfile(self.dotenv_path):
- with open(self.dotenv_path, encoding=self.encoding) as stream:
- yield stream
- elif self.stream is not None:
- yield self.stream
- else:
- if self.verbose:
- logger.info(
- "Python-dotenv could not find configuration file %s.",
- self.dotenv_path or '.env',
- )
- yield io.StringIO('')
-
- def dict(self) -> Dict[str, Optional[str]]:
- """Return dotenv as dict"""
- if self._dict:
- return self._dict
-
- raw_values = self.parse()
-
- if self.interpolate:
- self._dict = OrderedDict(resolve_variables(raw_values, override=self.override))
- else:
- self._dict = OrderedDict(raw_values)
-
- return self._dict
-
- def parse(self) -> Iterator[Tuple[str, Optional[str]]]:
- with self._get_stream() as stream:
- for mapping in with_warn_for_invalid_lines(parse_stream(stream)):
- if mapping.key is not None:
- yield mapping.key, mapping.value
-
- def set_as_environment_variables(self) -> bool:
- """
- Load the current dotenv as system environment variables.
- """
- if not self.dict():
- return False
-
- for k, v in self.dict().items():
- if k in os.environ and not self.override:
- continue
- if v is not None:
- os.environ[k] = v
-
- return True
-
- def get(self, key: str) -> Optional[str]:
- """
- """
- data = self.dict()
-
- if key in data:
- return data[key]
-
- if self.verbose:
- logger.warning("Key %s not found in %s.", key, self.dotenv_path)
-
- return None
-
-
-def get_key(
- dotenv_path: StrPath,
- key_to_get: str,
- encoding: Optional[str] = "utf-8",
-) -> Optional[str]:
- """
- Get the value of a given key from the given .env.
-
- Returns `None` if the key isn't found or doesn't have a value.
- """
- return DotEnv(dotenv_path, verbose=True, encoding=encoding).get(key_to_get)
-
-
-@contextmanager
-def rewrite(
- path: StrPath,
- encoding: Optional[str],
-) -> Iterator[Tuple[IO[str], IO[str]]]:
- if not os.path.isfile(path):
- with open(path, mode="w", encoding=encoding) as source:
- source.write("")
- with tempfile.NamedTemporaryFile(mode="w", encoding=encoding, delete=False) as dest:
- try:
- with open(path, encoding=encoding) as source:
- yield (source, dest)
- except BaseException:
- os.unlink(dest.name)
- raise
- shutil.move(dest.name, path)
-
-
-def set_key(
- dotenv_path: StrPath,
- key_to_set: str,
- value_to_set: str,
- quote_mode: str = "always",
- export: bool = False,
- encoding: Optional[str] = "utf-8",
-) -> Tuple[Optional[bool], str, str]:
- """
- Add or update a key/value pair in the given .env file.
-
- If the given .env path doesn't exist, an empty file is created at that
- path first (see `rewrite`).
- """
- if quote_mode not in ("always", "auto", "never"):
- raise ValueError(f"Unknown quote_mode: {quote_mode}")
-
- quote = (
- quote_mode == "always"
- or (quote_mode == "auto" and not value_to_set.isalnum())
- )
-
- if quote:
- value_out = "'{}'".format(value_to_set.replace("'", "\\'"))
- else:
- value_out = value_to_set
- if export:
- line_out = f'export {key_to_set}={value_out}\n'
- else:
- line_out = f"{key_to_set}={value_out}\n"
-
- with rewrite(dotenv_path, encoding=encoding) as (source, dest):
- replaced = False
- missing_newline = False
- for mapping in with_warn_for_invalid_lines(parse_stream(source)):
- if mapping.key == key_to_set:
- dest.write(line_out)
- replaced = True
- else:
- dest.write(mapping.original.string)
- missing_newline = not mapping.original.string.endswith("\n")
- if not replaced:
- if missing_newline:
- dest.write("\n")
- dest.write(line_out)
-
- return True, key_to_set, value_to_set
-
-
-def unset_key(
- dotenv_path: StrPath,
- key_to_unset: str,
- quote_mode: str = "always",
- encoding: Optional[str] = "utf-8",
-) -> Tuple[Optional[bool], str]:
- """
- Removes a given key from the given `.env` file.
-
- If the .env path given doesn't exist, fails.
- If the given key doesn't exist in the .env, fails.
- """
- if not os.path.exists(dotenv_path):
- logger.warning("Can't delete from %s - it doesn't exist.", dotenv_path)
- return None, key_to_unset
-
- removed = False
- with rewrite(dotenv_path, encoding=encoding) as (source, dest):
- for mapping in with_warn_for_invalid_lines(parse_stream(source)):
- if mapping.key == key_to_unset:
- removed = True
- else:
- dest.write(mapping.original.string)
-
- if not removed:
- logger.warning("Key %s not removed from %s - key doesn't exist.", key_to_unset, dotenv_path)
- return None, key_to_unset
-
- return removed, key_to_unset
-
-
-def resolve_variables(
- values: Iterable[Tuple[str, Optional[str]]],
- override: bool,
-) -> Mapping[str, Optional[str]]:
- new_values: Dict[str, Optional[str]] = {}
-
- for (name, value) in values:
- if value is None:
- result = None
- else:
- atoms = parse_variables(value)
- env: Dict[str, Optional[str]] = {}
- if override:
- env.update(os.environ) # type: ignore
- env.update(new_values)
- else:
- env.update(new_values)
- env.update(os.environ) # type: ignore
- result = "".join(atom.resolve(env) for atom in atoms)
-
- new_values[name] = result
-
- return new_values
-
-
-def _walk_to_root(path: str) -> Iterator[str]:
- """
- Yield directories starting from the given directory up to the root
- """
- if not os.path.exists(path):
- raise IOError('Starting path not found')
-
- if os.path.isfile(path):
- path = os.path.dirname(path)
-
- last_dir = None
- current_dir = os.path.abspath(path)
- while last_dir != current_dir:
- yield current_dir
- parent_dir = os.path.abspath(os.path.join(current_dir, os.path.pardir))
- last_dir, current_dir = current_dir, parent_dir
-
-
-def find_dotenv(
- filename: str = '.env',
- raise_error_if_not_found: bool = False,
- usecwd: bool = False,
-) -> str:
- """
- Search in increasingly higher folders for the given file
-
- Returns path to the file if found, or an empty string otherwise
- """
-
- def _is_interactive():
- """ Decide whether this is running in a REPL or IPython notebook """
- main = __import__('__main__', None, None, fromlist=['__file__'])
- return not hasattr(main, '__file__')
-
- if usecwd or _is_interactive() or getattr(sys, 'frozen', False):
- # Should work without __file__, e.g. in REPL or IPython notebook.
- path = os.getcwd()
- else:
- # will work for .py files
- frame = sys._getframe()
- current_file = __file__
-
- while frame.f_code.co_filename == current_file:
- assert frame.f_back is not None
- frame = frame.f_back
- frame_filename = frame.f_code.co_filename
- path = os.path.dirname(os.path.abspath(frame_filename))
-
- for dirname in _walk_to_root(path):
- check_path = os.path.join(dirname, filename)
- if os.path.isfile(check_path):
- return check_path
-
- if raise_error_if_not_found:
- raise IOError('File not found')
-
- return ''
-
-
-def load_dotenv(
- dotenv_path: Optional[StrPath] = None,
- stream: Optional[IO[str]] = None,
- verbose: bool = False,
- override: bool = False,
- interpolate: bool = True,
- encoding: Optional[str] = "utf-8",
-) -> bool:
- """Parse a .env file and then load all the variables found as environment variables.
-
- Parameters:
- dotenv_path: Absolute or relative path to .env file.
- stream: Text stream (such as `io.StringIO`) with .env content, used if
- `dotenv_path` is `None`.
- verbose: Whether to output a warning if the .env file is missing.
- override: Whether to override the system environment variables with the variables
- from the `.env` file.
- encoding: Encoding to be used to read the file.
- Returns:
- Bool: True if at least one environment variable is set else False
-
- If both `dotenv_path` and `stream` are `None`, `find_dotenv()` is used to find the
- .env file.
- """
- if dotenv_path is None and stream is None:
- dotenv_path = find_dotenv()
-
- dotenv = DotEnv(
- dotenv_path=dotenv_path,
- stream=stream,
- verbose=verbose,
- interpolate=interpolate,
- override=override,
- encoding=encoding,
- )
- return dotenv.set_as_environment_variables()
-
-
-def dotenv_values(
- dotenv_path: Optional[StrPath] = None,
- stream: Optional[IO[str]] = None,
- verbose: bool = False,
- interpolate: bool = True,
- encoding: Optional[str] = "utf-8",
-) -> Dict[str, Optional[str]]:
- """
- Parse a .env file and return its content as a dict.
-
- The returned dict will have `None` values for keys without values in the .env file.
- For example, `foo=bar` results in `{"foo": "bar"}` whereas `foo` alone results in
- `{"foo": None}`
-
- Parameters:
- dotenv_path: Absolute or relative path to the .env file.
- stream: `StringIO` object with .env content, used if `dotenv_path` is `None`.
- verbose: Whether to output a warning if the .env file is missing.
- encoding: Encoding to be used to read the file.
-
- If both `dotenv_path` and `stream` are `None`, `find_dotenv()` is used to find the
- .env file.
- """
- if dotenv_path is None and stream is None:
- dotenv_path = find_dotenv()
-
- return DotEnv(
- dotenv_path=dotenv_path,
- stream=stream,
- verbose=verbose,
- interpolate=interpolate,
- override=True,
- encoding=encoding,
- ).dict()
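A quick round-trip through the public API defined above (the scratch file name is arbitrary):

```python
import os

from dotenv import set_key, get_key, load_dotenv, dotenv_values, unset_key

set_key(".env.demo", "API_URL", "https://example.com")  # creates the file if missing
print(get_key(".env.demo", "API_URL"))                  # -> https://example.com

load_dotenv(".env.demo", override=True)                 # export into os.environ
print(os.environ["API_URL"])

print(dotenv_values(".env.demo"))                       # -> {'API_URL': 'https://example.com'}
unset_key(".env.demo", "API_URL")
```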
diff --git a/spaces/AzumaSeren100/XuanShen-Bert-VITS2/monotonic_align/core.py b/spaces/AzumaSeren100/XuanShen-Bert-VITS2/monotonic_align/core.py
deleted file mode 100644
index dddc688d76172b880054e544b7a217acd013f14f..0000000000000000000000000000000000000000
--- a/spaces/AzumaSeren100/XuanShen-Bert-VITS2/monotonic_align/core.py
+++ /dev/null
@@ -1,35 +0,0 @@
-import numba
-
-
-@numba.jit(numba.void(numba.int32[:,:,::1], numba.float32[:,:,::1], numba.int32[::1], numba.int32[::1]), nopython=True, nogil=True)
-def maximum_path_jit(paths, values, t_ys, t_xs):
- b = paths.shape[0]
- max_neg_val = -1e9
- for i in range(int(b)):
- path = paths[i]
- value = values[i]
- t_y = t_ys[i]
- t_x = t_xs[i]
-
- v_prev = v_cur = 0.0
- index = t_x - 1
-
- for y in range(t_y):
- for x in range(max(0, t_x + y - t_y), min(t_x, y + 1)):
- if x == y:
- v_cur = max_neg_val
- else:
- v_cur = value[y-1, x]
- if x == 0:
- if y == 0:
- v_prev = 0.
- else:
- v_prev = max_neg_val
- else:
- v_prev = value[y-1, x-1]
- value[y, x] += max(v_prev, v_cur)
-
- for y in range(t_y - 1, -1, -1):
- path[y, index] = 1
- if index != 0 and (index == y or value[y-1, index] < value[y-1, index-1]):
- index = index - 1
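Calling the kernel requires pre-allocated, C-contiguous arrays with exactly the dtypes in the numba signature; `paths` is filled in place. A minimal driver, with batch items padded to the largest `(t_y, t_x)` as the layout implies:

```python
import numpy as np

from monotonic_align.core import maximum_path_jit  # path per the file above

b, max_ty, max_tx = 2, 5, 4
values = np.random.rand(b, max_ty, max_tx).astype(np.float32)  # log-likelihood grid
paths = np.zeros((b, max_ty, max_tx), dtype=np.int32)          # output buffer
t_ys = np.array([5, 3], dtype=np.int32)                        # valid lengths per item
t_xs = np.array([4, 2], dtype=np.int32)

maximum_path_jit(paths, values, t_ys, t_xs)  # fills `paths` in place
print(paths[0])  # a monotonic 0/1 alignment over the first item's valid region
```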
diff --git a/spaces/Benson/text-generation/Examples/Colinas De Acero 2.md b/spaces/Benson/text-generation/Examples/Colinas De Acero 2.md
deleted file mode 100644
index 898a38ed15645761e4b81d922173cddc039811d8..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Colinas De Acero 2.md
+++ /dev/null
@@ -1,92 +0,0 @@
-
-Hills of Steel 2: Un juego de tanques basado en la física con batallas por equipos 3vs3 en tiempo real
-Si usted está buscando un divertido y adictivo juego de tanques que se puede jugar con sus amigos u otros jugadores en línea, entonces usted debe echa un vistazo a Hills of Steel 2. Este es un juego de tanques basado en la física que es una secuela del popular juego Hills of Steel. En este juego, puedes elegir entre 18 tanques diferentes, cada uno con sus propias habilidades y objetos únicos, y competir en batallas de equipo en tiempo real 3vs3 en varias colinas. También puedes unirte o crear un clan, participar en ocho eventos en línea, subir las tablas de clasificación, ganar recompensas gratis y chatear con otros jugadores en el servidor activo de Discord. En este artículo, te contaremos más sobre las características, la jugabilidad, los pros y los contras, y las preguntas frecuentes de Hills of Steel 2.
-colinas de acero 2
Download File ✓ https://bltlly.com/2v6LXY
- Características de Hills of Steel 2
-Hills of Steel 2 es un juego gratuito que ofrece muchas características para los amantes de los tanques. Estos son algunos de ellos:
-Clanes
-Puedes crear tu propio clan o unirte a uno existente y competir con otros clanes en las tablas de clasificación. También puedes chatear con los miembros de tu clan, invitarlos a jugar contigo y compartir tus mejores momentos. Los clanes son una gran manera de hacer nuevos amigos y divertirse más en el juego.
-Eventos
-Puedes participar en ocho eventos en línea que tienen diferentes modos y objetivos. Algunos de los eventos son:
-
-- Equipo de supervivencia: El último equipo de pie gana.
-- bunker bash: destruir el búnker enemigo antes de que destruyan el tuyo.
-- Captura de estrellas: recoge tantas estrellas como sea posible evitando el fuego enemigo.
-- Batalla del jefe: Equipo con otros jugadores para derrotar a un tanque poderoso jefe.
-- Duelo raro: Lucha contra otro jugador usando un tanque raro.
-- Epic Duel: Fight against another player using an epic tank.
-- Domination: Capture and hold as many flags as possible.
-- Rampage: Destroy as many enemy tanks as possible in a limited time.
-
-
-Tanks
-You can unlock and customize 18 unique tanks with different abilities and items. Some of the tanks are:
-
-
-- Joker: A small tank that packs a big punch.
-- Morty: A bomb-lobbing healer tank.
-- Stinger: A tank that wreaks havoc with rocket salvos.
-- Buck: A fierce short-range fighter.
-- Titan: A big, sturdy tank that can call in airstrikes.
-- Wally: A tank that drills through enemy lines.
-- Sparky: A supercharged lightning tank.
-- Ninja: A stealth tank with a deadly blade.
-- Gatlyn: A rapid-fire tank with deployable turrets.
-- Phoenix: A fiery tank that can rise from the ashes.
-- Reaper: A deadly tank that can harvest souls.
-- Arachno: A spider-like tank that can lay mines and webs.
-- Blaze: A flamethrower tank that can set enemies on fire.
-- Frosty: An icy tank that can freeze enemies and build walls of ice.
-- Thor: A thundering tank that can summon lightning and storms.
-- Draco: A dragon-like tank that can breathe fire and fly.
-- Scorpio: A scorpion-like tank that can sting enemies and burrow underground.
-
-You can upgrade your tanks with coins and gems, and equip them with different items such as shields, magnets, boosters, and more. You can also change the look of your tanks with skins and stickers. Tanks are a great way to express your personality and style in the game.
-Leaderboards
-You can climb the ranks and become the best in your country or in the world by winning battles and earning trophies. You can also compare your stats and achievements with other players and see how you stack up. Leaderboards are a great way to challenge yourself and show off your skills in the game.
-Rewards
-
-Community
-You can chat with other players on the active Discord server, where you can find tips, guides, news, updates, memes, fan art, and more. You can also join the official Facebook page and Instagram account to see the latest posts from the developers and other players. The community is a great way to connect with other fans and stay up to date on the game.
- How to play Hills of Steel 2
-Hills of Steel 2 is a physics-based tank game that is easy to learn but hard to master. Here are some basic instructions on how to play:
-Controls
-You control your tank with two on-screen buttons: one to move forward or backward, and one to aim and shoot. You can also tap your tank to activate its special ability or item. You can adjust the control sensitivity in the settings menu.
-Strategy
-You can improve your chances of winning by following a few simple tips and tricks:
-
-- Choose a tank that suits your play style and the event mode. For example, if you like to play aggressively and deal a lot of damage, you might want to use Buck or Stinger. If you prefer to support and heal your teammates, you might want to use Morty or Phoenix.
-- Use the terrain to your advantage. For example, you can hide behind hills or obstacles to avoid enemy fire, or use ramps and slopes to build momentum or jump over enemies.
-- Work together with your teammates. For example, you can coordinate your attacks, cover each other's backs, or share items and abilities.
-- Stay aware of your surroundings. For example, keep an eye on enemy movements, projectiles, mines, flags, stars, and other objects on the map.
-- Have fun and experiment. For example, try different combinations of tanks, items, skins, stickers, and strategies to see what works best for you.
-
- Pros and cons of Hills of Steel 2
-
-Pros
-
-- The game has colorful graphics and smooth animations that make it visually appealing.
-- The game has realistic physics and dynamic gameplay that make it challenging and exciting.
-- The game has a wide variety of tanks, items, skins, stickers, events, modes, and maps.
-- The game has plenty of features, rewards, and updates that keep it rewarding and engaging.
-- The game has a friendly, active community that makes it social and fun.
-
-Cons
-
-- The game can feel frustrating and unfair at times because of lag, glitches, hackers, or unbalanced tanks and items.
-- The game can become repetitive and boring after a while due to a lack of variety or innovation.
-- The game can get expensive and pay-to-win if you want to unlock or upgrade everything faster or more easily.
-- The game can become addictive and unhealthy if you play it too much or neglect other parts of your life.
-
- Conclusion
-Hills of Steel 2 is a physics-based tank game and the sequel to the popular game Hills of Steel. It is a free-to-play game that offers plenty of features for tank lovers, such as clans, events, tanks, leaderboards, rewards, and a community. It is a fun and addictive game that you can play with your friends or with other players online in real-time 3vs3 team battles across various hills. However, it also has some drawbacks, such as lag, glitches, hackers, unbalanced tanks or items, repetitiveness, expense, and the risk of addiction. We therefore recommend trying it for yourself and seeing whether you like it. You can download it for free from the Google Play Store or the App Store.
- FAQ
-Here are some frequently asked questions and answers about Hills of Steel 2:
-
-
-- Q: How can I unlock more tanks in the game?
A: You can unlock more tanks by reaching certain trophy levels, opening chests, taking part in events, or buying them with gems.
-- Q: How can I upgrade my tanks in the game?
A: You can upgrade your tanks by spending coins and gems on them. You can also equip them with different items, which you can buy with coins or gems.
-- Q: How can I change the look of my tanks in the game?
A: You can change the look of your tanks by applying skins and stickers, which can be unlocked from chests, the season road, the trophy road, or events, or bought with gems.
-- Q: How can I contact the developers or report a problem in the game?
A: You can contact the developers or report a problem by emailing support@superplusgames.com or joining their Discord server at https://discord.gg/hillsofsteel2.
-
-
-
\ No newline at end of file
diff --git a/spaces/BernardoOlisan/vqganclip/taming-transformers/taming/data/faceshq.py b/spaces/BernardoOlisan/vqganclip/taming-transformers/taming/data/faceshq.py
deleted file mode 100644
index 6912d04b66a6d464c1078e4b51d5da290f5e767e..0000000000000000000000000000000000000000
--- a/spaces/BernardoOlisan/vqganclip/taming-transformers/taming/data/faceshq.py
+++ /dev/null
@@ -1,134 +0,0 @@
-import os
-import numpy as np
-import albumentations
-from torch.utils.data import Dataset
-
-from taming.data.base import ImagePaths, NumpyPaths, ConcatDatasetWithIndex
-
-
-class FacesBase(Dataset):
- def __init__(self, *args, **kwargs):
- super().__init__()
- self.data = None
- self.keys = None
-
- def __len__(self):
- return len(self.data)
-
- def __getitem__(self, i):
- example = self.data[i]
- ex = {}
- if self.keys is not None:
- for k in self.keys:
- ex[k] = example[k]
- else:
- ex = example
- return ex
-
-
-class CelebAHQTrain(FacesBase):
- def __init__(self, size, keys=None):
- super().__init__()
- root = "data/celebahq"
- with open("data/celebahqtrain.txt", "r") as f:
- relpaths = f.read().splitlines()
- paths = [os.path.join(root, relpath) for relpath in relpaths]
- self.data = NumpyPaths(paths=paths, size=size, random_crop=False)
- self.keys = keys
-
-
-class CelebAHQValidation(FacesBase):
- def __init__(self, size, keys=None):
- super().__init__()
- root = "data/celebahq"
- with open("data/celebahqvalidation.txt", "r") as f:
- relpaths = f.read().splitlines()
- paths = [os.path.join(root, relpath) for relpath in relpaths]
- self.data = NumpyPaths(paths=paths, size=size, random_crop=False)
- self.keys = keys
-
-
-class FFHQTrain(FacesBase):
- def __init__(self, size, keys=None):
- super().__init__()
- root = "data/ffhq"
- with open("data/ffhqtrain.txt", "r") as f:
- relpaths = f.read().splitlines()
- paths = [os.path.join(root, relpath) for relpath in relpaths]
- self.data = ImagePaths(paths=paths, size=size, random_crop=False)
- self.keys = keys
-
-
-class FFHQValidation(FacesBase):
- def __init__(self, size, keys=None):
- super().__init__()
- root = "data/ffhq"
- with open("data/ffhqvalidation.txt", "r") as f:
- relpaths = f.read().splitlines()
- paths = [os.path.join(root, relpath) for relpath in relpaths]
- self.data = ImagePaths(paths=paths, size=size, random_crop=False)
- self.keys = keys
-
-
-class FacesHQTrain(Dataset):
- # CelebAHQ [0] + FFHQ [1]
- def __init__(self, size, keys=None, crop_size=None, coord=False):
- d1 = CelebAHQTrain(size=size, keys=keys)
- d2 = FFHQTrain(size=size, keys=keys)
- self.data = ConcatDatasetWithIndex([d1, d2])
- self.coord = coord
- if crop_size is not None:
- self.cropper = albumentations.RandomCrop(height=crop_size,width=crop_size)
- if self.coord:
- self.cropper = albumentations.Compose([self.cropper],
- additional_targets={"coord": "image"})
-
- def __len__(self):
- return len(self.data)
-
- def __getitem__(self, i):
- ex, y = self.data[i]
- if hasattr(self, "cropper"):
- if not self.coord:
- out = self.cropper(image=ex["image"])
- ex["image"] = out["image"]
- else:
- h,w,_ = ex["image"].shape
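- # per-pixel index map normalized to [0, 1), so the crop keeps absolute position information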
- coord = np.arange(h*w).reshape(h,w,1)/(h*w)
- out = self.cropper(image=ex["image"], coord=coord)
- ex["image"] = out["image"]
- ex["coord"] = out["coord"]
- ex["class"] = y
- return ex
-
-
-class FacesHQValidation(Dataset):
- # CelebAHQ [0] + FFHQ [1]
- def __init__(self, size, keys=None, crop_size=None, coord=False):
- d1 = CelebAHQValidation(size=size, keys=keys)
- d2 = FFHQValidation(size=size, keys=keys)
- self.data = ConcatDatasetWithIndex([d1, d2])
- self.coord = coord
- if crop_size is not None:
- self.cropper = albumentations.CenterCrop(height=crop_size,width=crop_size)
- if self.coord:
- self.cropper = albumentations.Compose([self.cropper],
- additional_targets={"coord": "image"})
-
- def __len__(self):
- return len(self.data)
-
- def __getitem__(self, i):
- ex, y = self.data[i]
- if hasattr(self, "cropper"):
- if not self.coord:
- out = self.cropper(image=ex["image"])
- ex["image"] = out["image"]
- else:
- h,w,_ = ex["image"].shape
- coord = np.arange(h*w).reshape(h,w,1)/(h*w)
- out = self.cropper(image=ex["image"], coord=coord)
- ex["image"] = out["image"]
- ex["coord"] = out["coord"]
- ex["class"] = y
- return ex
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/botocore/errorfactory.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/botocore/errorfactory.py
deleted file mode 100644
index d9a1e9cd9cf542c9d01cb35bb934711799e45aac..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/botocore/errorfactory.py
+++ /dev/null
@@ -1,90 +0,0 @@
-# Copyright 2016 Amazon.com, Inc. or its affiliates. All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"). You
-# may not use this file except in compliance with the License. A copy of
-# the License is located at
-#
-# http://aws.amazon.com/apache2.0/
-#
-# or in the "license" file accompanying this file. This file is
-# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
-# ANY KIND, either express or implied. See the License for the specific
-# language governing permissions and limitations under the License.
-from botocore.exceptions import ClientError
-from botocore.utils import get_service_module_name
-
-
-class BaseClientExceptions:
- ClientError = ClientError
-
- def __init__(self, code_to_exception):
- """Base class for exceptions object on a client
-
- :type code_to_exception: dict
- :param code_to_exception: Mapping of error codes (strings) to exception
- class that should be raised when encountering a particular
- error code.
- """
- self._code_to_exception = code_to_exception
-
- def from_code(self, error_code):
- """Retrieves the error class based on the error code
-
- This is helpful for identifying the exception class needing to be
- caught based on the ClientError.response['Error']['Code'] value
-
- :type error_code: string
- :param error_code: The error code associated to a ClientError exception
-
- :rtype: ClientError or a subclass of ClientError
- :returns: The appropriate modeled exception class for that error
- code. If the error code does not match any of the known
- modeled exceptions then return a generic ClientError.
- """
- return self._code_to_exception.get(error_code, self.ClientError)
-
- def __getattr__(self, name):
- exception_cls_names = [
- exception_cls.__name__
- for exception_cls in self._code_to_exception.values()
- ]
- raise AttributeError(
- fr"{self} object has no attribute {name}. "
- fr"Valid exceptions are: {', '.join(exception_cls_names)}"
- )
-
-
-class ClientExceptionsFactory:
- def __init__(self):
- self._client_exceptions_cache = {}
-
- def create_client_exceptions(self, service_model):
- """Creates a ClientExceptions object for the particular service client
-
- :type service_model: botocore.model.ServiceModel
- :param service_model: The service model for the client
-
- :rtype: object that subclasses from BaseClientExceptions
- :returns: The exceptions object of a client that can be used
- to grab the various different modeled exceptions.
- """
- service_name = service_model.service_name
- if service_name not in self._client_exceptions_cache:
- client_exceptions = self._create_client_exceptions(service_model)
- self._client_exceptions_cache[service_name] = client_exceptions
- return self._client_exceptions_cache[service_name]
-
- def _create_client_exceptions(self, service_model):
- cls_props = {}
- code_to_exception = {}
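- # dynamically create one ClientError subclass per modeled error shape, keyed by error code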
- for error_shape in service_model.error_shapes:
- exception_name = str(error_shape.name)
- exception_cls = type(exception_name, (ClientError,), {})
- cls_props[exception_name] = exception_cls
- code = str(error_shape.error_code)
- code_to_exception[code] = exception_cls
- cls_name = str(get_service_module_name(service_model) + 'Exceptions')
- client_exceptions_cls = type(
- cls_name, (BaseClientExceptions,), cls_props
- )
- return client_exceptions_cls(code_to_exception)
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/idna/__init__.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/idna/__init__.py
deleted file mode 100644
index a40eeafcc914108ca79c5d83d6e81da1b29c6e80..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/idna/__init__.py
+++ /dev/null
@@ -1,44 +0,0 @@
-from .package_data import __version__
-from .core import (
- IDNABidiError,
- IDNAError,
- InvalidCodepoint,
- InvalidCodepointContext,
- alabel,
- check_bidi,
- check_hyphen_ok,
- check_initial_combiner,
- check_label,
- check_nfc,
- decode,
- encode,
- ulabel,
- uts46_remap,
- valid_contextj,
- valid_contexto,
- valid_label_length,
- valid_string_length,
-)
-from .intranges import intranges_contain
-
-__all__ = [
- "IDNABidiError",
- "IDNAError",
- "InvalidCodepoint",
- "InvalidCodepointContext",
- "alabel",
- "check_bidi",
- "check_hyphen_ok",
- "check_initial_combiner",
- "check_label",
- "check_nfc",
- "decode",
- "encode",
- "intranges_contain",
- "ulabel",
- "uts46_remap",
- "valid_contextj",
- "valid_contexto",
- "valid_label_length",
- "valid_string_length",
-]
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_distutils/log.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_distutils/log.py
deleted file mode 100644
index be25f6cabd839af772dd74399c57991c222d3da8..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_distutils/log.py
+++ /dev/null
@@ -1,80 +0,0 @@
-"""A simple log mechanism styled after PEP 282."""
-
-# The class here is styled after PEP 282 so that it could later be
-# replaced with a standard Python logging implementation.
-
-import sys
-
-DEBUG = 1
-INFO = 2
-WARN = 3
-ERROR = 4
-FATAL = 5
-
-
-class Log:
- def __init__(self, threshold=WARN):
- self.threshold = threshold
-
- def _log(self, level, msg, args):
- if level not in (DEBUG, INFO, WARN, ERROR, FATAL):
- raise ValueError('%s wrong log level' % str(level))
-
- if level >= self.threshold:
- if args:
- msg = msg % args
- if level in (WARN, ERROR, FATAL):
- stream = sys.stderr
- else:
- stream = sys.stdout
- try:
- stream.write('%s\n' % msg)
- except UnicodeEncodeError:
- # emulate backslashreplace error handler
- encoding = stream.encoding
- msg = msg.encode(encoding, "backslashreplace").decode(encoding)
- stream.write('%s\n' % msg)
- stream.flush()
-
- def log(self, level, msg, *args):
- self._log(level, msg, args)
-
- def debug(self, msg, *args):
- self._log(DEBUG, msg, args)
-
- def info(self, msg, *args):
- self._log(INFO, msg, args)
-
- def warn(self, msg, *args):
- self._log(WARN, msg, args)
-
- def error(self, msg, *args):
- self._log(ERROR, msg, args)
-
- def fatal(self, msg, *args):
- self._log(FATAL, msg, args)
-
-
-_global_log = Log()
-log = _global_log.log
-debug = _global_log.debug
-info = _global_log.info
-warn = _global_log.warn
-error = _global_log.error
-fatal = _global_log.fatal
-
-
-def set_threshold(level):
- # return the old threshold for use from tests
- old = _global_log.threshold
- _global_log.threshold = level
- return old
-
-
-def set_verbosity(v):
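- # map the command-line verbosity count onto a threshold: 0 -> WARN, 1 -> INFO, 2+ -> DEBUG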
- if v <= 0:
- set_threshold(WARN)
- elif v == 1:
- set_threshold(INFO)
- elif v >= 2:
- set_threshold(DEBUG)
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_vendor/importlib_metadata/_adapters.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_vendor/importlib_metadata/_adapters.py
deleted file mode 100644
index aa460d3eda50fbb174623a1b5bbca54645fd588a..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_vendor/importlib_metadata/_adapters.py
+++ /dev/null
@@ -1,68 +0,0 @@
-import re
-import textwrap
-import email.message
-
-from ._text import FoldedCase
-
-
-class Message(email.message.Message):
- multiple_use_keys = set(
- map(
- FoldedCase,
- [
- 'Classifier',
- 'Obsoletes-Dist',
- 'Platform',
- 'Project-URL',
- 'Provides-Dist',
- 'Provides-Extra',
- 'Requires-Dist',
- 'Requires-External',
- 'Supported-Platform',
- 'Dynamic',
- ],
- )
- )
- """
- Keys that may be indicated multiple times per PEP 566.
- """
-
- def __new__(cls, orig: email.message.Message):
- res = super().__new__(cls)
- vars(res).update(vars(orig))
- return res
-
- def __init__(self, *args, **kwargs):
- self._headers = self._repair_headers()
-
- # suppress spurious error from mypy
- def __iter__(self):
- return super().__iter__()
-
- def _repair_headers(self):
- def redent(value):
- "Correct for RFC822 indentation"
- if not value or '\n' not in value:
- return value
- return textwrap.dedent(' ' * 8 + value)
-
- headers = [(key, redent(value)) for key, value in vars(self)['_headers']]
- if self._payload:
- headers.append(('Description', self.get_payload()))
- return headers
-
- @property
- def json(self):
- """
- Convert PackageMetadata to a JSON-compatible format
- per PEP 0566.
- """
-
- def transform(key):
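- # multiple-use keys become lists; JSON field names are lower-cased with dashes replaced by underscores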
- value = self.get_all(key) if key in self.multiple_use_keys else self[key]
- if key == 'Keywords':
- value = re.split(r'\s+', value)
- tk = key.lower().replace('-', '_')
- return tk, value
-
- return dict(map(transform, map(FoldedCase, self)))
diff --git a/spaces/Blaise-g/summarize-biomedical-papers-long-summary-or-tldr/app.py b/spaces/Blaise-g/summarize-biomedical-papers-long-summary-or-tldr/app.py
deleted file mode 100644
index ac74e825792a0c6435930a2b7b33b848327e7f9c..0000000000000000000000000000000000000000
--- a/spaces/Blaise-g/summarize-biomedical-papers-long-summary-or-tldr/app.py
+++ /dev/null
@@ -1,280 +0,0 @@
-import logging
-import time
-from pathlib import Path
-
-import gradio as gr
-import nltk
-from cleantext import clean
-from summarize import load_model_and_tokenizer, summarize_via_tokenbatches
-from utils import load_example_filenames, truncate_word_count
-
-_here = Path(__file__).parent
-
-nltk.download("stopwords")
-
-logging.basicConfig(
- level=logging.INFO, format="%(asctime)s - %(levelname)s - %(message)s"
-)
-
-
-def proc_submission(
- input_text: str,
- model_size: str,
- num_beams,
- token_batch_length,
- length_penalty,
- max_input_length: int = 3060,
-):
- """
- proc_submission - a helper function for the gradio module to process submissions
- Args:
- input_text (str): the input text to summarize
- model_size (str): the summary type to use, either "tldr" or "detailed"
- num_beams (int): the number of beams to use
- token_batch_length (int): the length of the token batches to use
- length_penalty (float): the length penalty to use
- max_input_length (int, optional): the maximum input length to use. Defaults to 3060.
- Returns:
- str in HTML format, string of the summary, str of the compression rates
- """
-
- settings_det = {
- "length_penalty": float(length_penalty),
- "repetition_penalty": 3.5,
- "no_repeat_ngram_size": 3,
- "encoder_no_repeat_ngram_size": 4,
- "num_beams": int(num_beams),
- "min_length": 100,
- "max_length": 512,#int(token_batch_length // 4),
- "early_stopping": True,
- "do_sample": False,
- }
- settings_tldr = {
- "length_penalty": float(length_penalty),
- "repetition_penalty": 3.5,
- "no_repeat_ngram_size": 3,
- "encoder_no_repeat_ngram_size": 4,
- "num_beams": int(num_beams),
- "min_length": 11,
- "max_length": 62,
- "early_stopping": True,
- "do_sample": False,
- }
-
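- # pick the decoding preset that matches the requested summary type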
- if model_size == "tldr":
- settings = settings_tldr
- else:
- settings = settings_det
-
- st = time.perf_counter()
- history = {}
- clean_text = clean(input_text, extra_spaces=True, lowercase=True, reg=r"\b(?!(?:Although|Also)\b)(?:[A-Z][A-Za-z'`-]+)(?:,? (?:(?:and |& )?(?:[A-Z][A-Za-z'`-]+)|(?:et al.?)))*(?:, *(?:19|20)[0-9][0-9](?:, p\.? [0-9]+)?| *\((?:19|20)[0-9][0-9](?:, p\.? [0-9]+)?\))", reg_replace="")
- #max_input_length = 2048 if model_size == "tldr" else max_input_length
- processed = truncate_word_count(clean_text, max_input_length)
-
- if processed["was_truncated"]:
- tr_in = processed["truncated_text"]
- msg = f"Input text was truncated to {max_input_length} words to fit within the computational constraints of the inference API"
- logging.warning(msg)
- history["WARNING"] = msg
- else:
- tr_in = input_text
- msg = None
-
- _summaries = summarize_via_tokenbatches(
- tr_in,
- model_sm if model_size == "tldr" else model,
- tokenizer_sm if model_size == "tldr" else tokenizer,
- batch_length=token_batch_length,
- **settings,
- )
- sum_text = [f"Section {i}: " + s["summary"][0] for i, s in enumerate(_summaries)]
- rates = [
- f" - Section {i}: {round(s['compression_rate'],3)}"
- for i, s in enumerate(_summaries)
- ]
-
- sum_text_out = "\n".join(sum_text)
- history["Compression Rates"] = "
"
- rates_out = "\n".join(rates)
- rt = round((time.perf_counter() - st) / 60, 2)
- print(f"Runtime: {rt} minutes")
- html = ""
- html += f"Runtime: {rt} minutes on CPU
"
- if msg is not None:
- html += f"WARNING:
{msg}
"
-
- html += ""
-
- return html, sum_text_out, rates_out
-
-
-def load_single_example_text(
- example_path: str or Path,
-):
- """
- load_single_example - a helper function for the gradio module to load examples
- Returns:
- list of str, the examples
- """
- global name_to_path
- full_ex_path = name_to_path[example_path]
- full_ex_path = Path(full_ex_path)
- # load the examples into a list
- with open(full_ex_path, "r", encoding="utf-8", errors="ignore") as f:
- raw_text = f.read()
- text = clean(raw_text, extra_spaces=True, lowercase=False) #see if it works
- return text
-
-
-def load_uploaded_file(file_obj):
- """
- load_uploaded_file - process an uploaded file
- Args:
- file_obj (POTENTIALLY list): Gradio file object inside a list
- Returns:
- str, the uploaded file contents
- """
-
- # file_path = Path(file_obj[0].name)
-
- # check if mysterious file object is a list
- if isinstance(file_obj, list):
- file_obj = file_obj[0]
- file_path = Path(file_obj.name)
- try:
- with open(file_path, "r", encoding="utf-8", errors="ignore") as f:
- raw_text = f.read()
- text = clean(raw_text, extra_spaces=True, lowercase=True, reg=r"\s(?=[\,.':;!?])", reg_replace="")
- return text
- except Exception as e:
- logging.info(f"Trying to load file with path {file_path}, error: {e}")
- return "Error: Could not read file. Ensure that it is a valid text file with encoding UTF-8."
-
-
-if __name__ == "__main__":
-
- model, tokenizer = load_model_and_tokenizer("Blaise-g/longt5_tglobal_large_sumpubmed")
- model_sm, tokenizer_sm = load_model_and_tokenizer("Blaise-g/longt5_tglobal_large_scitldr")
-
- name_to_path = load_example_filenames(_here / "examples")
- logging.info(f"Loaded {len(name_to_path)} examples")
- demo = gr.Blocks()
-
- with demo:
-
- gr.Markdown("# Automatic summarization of biomedical research papers with neural abstractive methods into a long and comprehensive synopsis or extreme TLDR summary version")
- gr.Markdown(
- "A demo developed for my Master Thesis project using ad-hoc fine-tuned abstractive summarization models to summarize long biomedical articles into a detailed, explanatory synopsis or extreme TLDR summary."
- )
- with gr.Column():
-
- gr.Markdown("### Select Summary type and text generation parameters then load input text")
- gr.Markdown(
- "Enter text below in the text area or alternatively load an example below or upload a file."
- )
- with gr.Row():
- model_size = gr.Radio(
- choices=["tldr", "detailed"], label="Summary type", value="detailed"
- )
- num_beams = gr.Radio(
- choices=[2, 3, 4],
- label="Beam Search: Number of Beams",
- value=2,
- )
- gr.Markdown(
- "_For optimal results use a GPU as the hosted CPU inference is lacking at times and hinders the output summary quality as well as forcing to divide the input text into batches._"
- )
- with gr.Row():
- length_penalty = gr.inputs.Slider(
- minimum=0.5,
- maximum=1.0,
- label="length penalty",
- default=0.7,
- step=0.05,
- )
- token_batch_length = gr.Radio(
- choices=[1024, 2048, 3060],
- label="token batch length",
- value=2048,
- )
- with gr.Row():
- example_name = gr.Dropdown(
- list(name_to_path.keys()),
- label="Choose an Example",
- )
- load_examples_button = gr.Button(
- "Load Example",
- )
- input_text = gr.Textbox(
- lines=6,
- label="Input Text (for summarization)",
- placeholder="Enter any scientific text to be condensed into a detailed, explanatory synopsis or TLDR summary version. The input text is divided into batches of the selected token lengths to fit within the memory constraints, pre-processed and fed into the model of choice. The models were trained to handle long scientific papers but generalize reasonably well also to shorter text documents like scientific abstracts. Might take a while to produce long summaries :)",
- )
- gr.Markdown("Upload your own file:")
- with gr.Row():
- uploaded_file = gr.File(
- label="Upload a text file",
- file_count="single",
- type="file",
- )
- load_file_button = gr.Button("Load Uploaded File")
-
- gr.Markdown("---")
-
- with gr.Column():
- gr.Markdown("## Generate Summary")
- gr.Markdown(
- "Summary generation should take approximately 2-3 minutes for most generation settings but can take significantly more time for very long documents with a high beam number."
- )
- summarize_button = gr.Button(
- "Summarize!",
- variant="primary",
- )
-
- output_text = gr.HTML("Output will appear below:
")
- gr.Markdown("### Summary Output")
- summary_text = gr.Textbox(
- label="Summary 📝", placeholder="The generated 📝 will appear here"
- )
- gr.Markdown(
- "The compression rate 🗜 indicates the ratio between the machine-generated summary length and the input text (from 0% to 100%). The higher the 🗜 the more extreme the summary is."
- )
- compression_rate = gr.Textbox(
- label="Compression rate 🗜", placeholder="The 🗜 will appear here"
- )
- gr.Markdown("---")
-
- with gr.Column():
- gr.Markdown("## About the Models")
- gr.Markdown(
- "- [Blaise-g/longt5_tglobal_large_sumpubmed](https://huggingface.co/Blaise-g/longt5_tglobal_large_sumpubmed) is a fine-tuned checkpoint of [Stancld/longt5-tglobal-large-16384-pubmed-3k_steps](https://huggingface.co/Stancld/longt5-tglobal-large-16384-pubmed-3k_steps) on the [SumPubMed dataset](https://aclanthology.org/2021.acl-srw.30/). [Blaise-g/longt5_tglobal_large_scitldr](https://huggingface.co/Blaise-g/longt5_tglobal_large_scitldr) is a fine-tuned checkpoint of [Blaise-g/longt5_tglobal_large_sumpubmed](https://huggingface.co/Blaise-g/longt5_tglobal_large_sumpubmed) on the [Scitldr dataset](https://arxiv.org/abs/2004.15011). The goal was to create two models capable of handling the complex information contained in long biomedical documents and subsequently producing scientific summaries according to one of the two possible levels of conciseness: 1) A long explanatory synopsis that retains the majority of domain-specific language used in the original source text. 2)A one sentence long, TLDR style summary."
- )
- gr.Markdown(
- "- The two most important text generation parameters are the number of beams and length penalty : 1) Choosing a higher number of beams for the beam search algorithm results in generating a summary with higher probability (hence theoretically higher quality) at the cost of increasing computation times and memory usage. 2) The length penalty encourages the model to generate longer (with values closer to 1.0) or shorter (with values closer to 0.0) summary sequences by placing an exponential penalty on the beam score according to the current sequence length."
- )
- gr.Markdown("---")
-
- load_examples_button.click(
- fn=load_single_example_text, inputs=[example_name], outputs=[input_text]
- )
-
- load_file_button.click(
- fn=load_uploaded_file, inputs=uploaded_file, outputs=[input_text]
- )
-
- summarize_button.click(
- fn=proc_submission,
- inputs=[
- input_text,
- model_size,
- num_beams,
- token_batch_length,
- length_penalty,
- ],
- outputs=[output_text, summary_text, compression_rate],
- )
-
- demo.launch(enable_queue=True, share=False)
\ No newline at end of file
diff --git a/spaces/CVPR/LIVE/pybind11/tools/pybind11Tools.cmake b/spaces/CVPR/LIVE/pybind11/tools/pybind11Tools.cmake
deleted file mode 100644
index 10f15a30917056f8d69cff833e2c905aede08e50..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/pybind11/tools/pybind11Tools.cmake
+++ /dev/null
@@ -1,188 +0,0 @@
-# tools/pybind11Tools.cmake -- Build system for the pybind11 modules
-#
-# Copyright (c) 2015 Wenzel Jakob
-#
-# All rights reserved. Use of this source code is governed by a
-# BSD-style license that can be found in the LICENSE file.
-
-# Built-in in CMake 3.5+
-include(CMakeParseArguments)
-
-if(pybind11_FIND_QUIETLY)
- set(_pybind11_quiet QUIET)
-endif()
-
-# If this is the first run, PYTHON_VERSION can stand in for PYBIND11_PYTHON_VERSION
-if(NOT DEFINED PYBIND11_PYTHON_VERSION AND DEFINED PYTHON_VERSION)
- message(WARNING "Set PYBIND11_PYTHON_VERSION to search for a specific version, not "
- "PYTHON_VERSION (which is an output). Assuming that is what you "
- "meant to do and continuing anyway.")
- set(PYBIND11_PYTHON_VERSION
- "${PYTHON_VERSION}"
- CACHE STRING "Python version to use for compiling modules")
- unset(PYTHON_VERSION)
- unset(PYTHON_VERSION CACHE)
-else()
- # If this is set as a normal variable, promote it, otherwise, make an empty cache variable.
- set(PYBIND11_PYTHON_VERSION
- "${PYBIND11_PYTHON_VERSION}"
- CACHE STRING "Python version to use for compiling modules")
-endif()
-
-# A user can set versions manually too
-set(Python_ADDITIONAL_VERSIONS
- "3.9;3.8;3.7;3.6;3.5;3.4"
- CACHE INTERNAL "")
-
-list(APPEND CMAKE_MODULE_PATH "${CMAKE_CURRENT_LIST_DIR}")
-find_package(PythonLibsNew ${PYBIND11_PYTHON_VERSION} MODULE REQUIRED ${_pybind11_quiet})
-list(REMOVE_AT CMAKE_MODULE_PATH -1)
-
-# Cache variables so pybind11_add_module can be used in parent projects
-set(PYTHON_INCLUDE_DIRS
- ${PYTHON_INCLUDE_DIRS}
- CACHE INTERNAL "")
-set(PYTHON_LIBRARIES
- ${PYTHON_LIBRARIES}
- CACHE INTERNAL "")
-set(PYTHON_MODULE_PREFIX
- ${PYTHON_MODULE_PREFIX}
- CACHE INTERNAL "")
-set(PYTHON_MODULE_EXTENSION
- ${PYTHON_MODULE_EXTENSION}
- CACHE INTERNAL "")
-set(PYTHON_VERSION_MAJOR
- ${PYTHON_VERSION_MAJOR}
- CACHE INTERNAL "")
-set(PYTHON_VERSION_MINOR
- ${PYTHON_VERSION_MINOR}
- CACHE INTERNAL "")
-set(PYTHON_VERSION
- ${PYTHON_VERSION}
- CACHE INTERNAL "")
-set(PYTHON_IS_DEBUG
- "${PYTHON_IS_DEBUG}"
- CACHE INTERNAL "")
-
-if(PYBIND11_MASTER_PROJECT)
- if(PYTHON_MODULE_EXTENSION MATCHES "pypy")
- if(NOT DEFINED PYPY_VERSION)
- execute_process(
- COMMAND ${PYTHON_EXECUTABLE} -c
- [=[import sys; print(".".join(map(str, sys.pypy_version_info[:3])))]=]
- OUTPUT_VARIABLE pypy_version)
- set(PYPY_VERSION
- ${pypy_version}
- CACHE INTERNAL "")
- endif()
- message(STATUS "PYPY ${PYPY_VERSION} (Py ${PYTHON_VERSION})")
- else()
- message(STATUS "PYTHON ${PYTHON_VERSION}")
- endif()
-endif()
-
-# Only add Python for build - must be added during the import for config since it has to be re-discovered.
-set_property(
- TARGET pybind11::pybind11
- APPEND
- PROPERTY INTERFACE_INCLUDE_DIRECTORIES $<BUILD_INTERFACE:${PYTHON_INCLUDE_DIRS}>)
-
-# Python debug libraries expose slightly different objects before 3.8
-# https://docs.python.org/3.6/c-api/intro.html#debugging-builds
-# https://stackoverflow.com/questions/39161202/how-to-work-around-missing-pymodule-create2-in-amd64-win-python35-d-lib
-if(PYTHON_IS_DEBUG)
- set_property(
- TARGET pybind11::pybind11
- APPEND
- PROPERTY INTERFACE_COMPILE_DEFINITIONS Py_DEBUG)
-endif()
-
-set_property(
- TARGET pybind11::module
- APPEND
- PROPERTY
- INTERFACE_LINK_LIBRARIES pybind11::python_link_helper
- "$<$,$>:$>")
-
-if(PYTHON_VERSION VERSION_LESS 3)
- set_property(
- TARGET pybind11::pybind11
- APPEND
- PROPERTY INTERFACE_LINK_LIBRARIES pybind11::python2_no_register)
-endif()
-
-set_property(
- TARGET pybind11::embed
- APPEND
- PROPERTY INTERFACE_LINK_LIBRARIES pybind11::pybind11 $<BUILD_INTERFACE:${PYTHON_LIBRARIES}>)
-
-function(pybind11_extension name)
- # The prefix and extension are provided by FindPythonLibsNew.cmake
- set_target_properties(${name} PROPERTIES PREFIX "${PYTHON_MODULE_PREFIX}"
- SUFFIX "${PYTHON_MODULE_EXTENSION}")
-endfunction()
-
-# Build a Python extension module:
-# pybind11_add_module(<name> [MODULE | SHARED] [EXCLUDE_FROM_ALL]
-# [NO_EXTRAS] [THIN_LTO] source1 [source2 ...])
-#
-function(pybind11_add_module target_name)
- set(options MODULE SHARED EXCLUDE_FROM_ALL NO_EXTRAS SYSTEM THIN_LTO)
- cmake_parse_arguments(ARG "${options}" "" "" ${ARGN})
-
- if(ARG_MODULE AND ARG_SHARED)
- message(FATAL_ERROR "Can't be both MODULE and SHARED")
- elseif(ARG_SHARED)
- set(lib_type SHARED)
- else()
- set(lib_type MODULE)
- endif()
-
- if(ARG_EXCLUDE_FROM_ALL)
- set(exclude_from_all EXCLUDE_FROM_ALL)
- else()
- set(exclude_from_all "")
- endif()
-
- add_library(${target_name} ${lib_type} ${exclude_from_all} ${ARG_UNPARSED_ARGUMENTS})
-
- target_link_libraries(${target_name} PRIVATE pybind11::module)
-
- if(ARG_SYSTEM)
- message(
- STATUS
- "Warning: this does not have an effect - use NO_SYSTEM_FROM_IMPORTED if using imported targets"
- )
- endif()
-
- pybind11_extension(${target_name})
-
- # -fvisibility=hidden is required to allow multiple modules compiled against
- # different pybind versions to work properly, and for some features (e.g.
- # py::module_local). We force it on everything inside the `pybind11`
- # namespace; also turning it on for a pybind module compilation here avoids
- # potential warnings or issues from having mixed hidden/non-hidden types.
- set_target_properties(${target_name} PROPERTIES CXX_VISIBILITY_PRESET "hidden"
- CUDA_VISIBILITY_PRESET "hidden")
-
- if(ARG_NO_EXTRAS)
- return()
- endif()
-
- if(NOT DEFINED CMAKE_INTERPROCEDURAL_OPTIMIZATION)
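- # only add pybind11's own LTO flags when no global IPO policy has been set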
- if(ARG_THIN_LTO)
- target_link_libraries(${target_name} PRIVATE pybind11::thin_lto)
- else()
- target_link_libraries(${target_name} PRIVATE pybind11::lto)
- endif()
- endif()
-
- if(NOT MSVC AND NOT ${CMAKE_BUILD_TYPE} MATCHES Debug|RelWithDebInfo)
- pybind11_strip(${target_name})
- endif()
-
- if(MSVC)
- target_link_libraries(${target_name} PRIVATE pybind11::windows_extras)
- endif()
-
-endfunction()
diff --git a/spaces/CVPR/LIVE/thrust/thrust/detail/numeric_traits.h b/spaces/CVPR/LIVE/thrust/thrust/detail/numeric_traits.h
deleted file mode 100644
index 168b9ad0f4b63657845915ba1718737773be687a..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/thrust/detail/numeric_traits.h
+++ /dev/null
@@ -1,130 +0,0 @@
-/*
- * Copyright 2008-2013 NVIDIA Corporation
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#pragma once
-
-#include <thrust/detail/type_traits.h>
-#include <limits>
-
-//#include <stdint.h> // for intmax_t (not provided on MSVS 2005)
-
-namespace thrust
-{
-
-namespace detail
-{
-
-// XXX good enough for the platforms we care about
-typedef long long intmax_t;
-
-template<typename T>
- struct is_signed
- : integral_constant<bool, std::numeric_limits<T>::is_signed>
-{}; // end is_signed
-
-
-template<typename T>
- struct num_digits
- : eval_if<
- std::numeric_limits<T>::is_specialized,
- integral_constant<
- int,
- std::numeric_limits<T>::digits
- >,
- integral_constant<
- int,
- sizeof(T) * std::numeric_limits<unsigned char>::digits - (is_signed<T>::value ? 1 : 0)
- >
- >::type
-{}; // end num_digits
-
-
-template<typename Integer>
- struct integer_difference
- //: eval_if<
- // sizeof(Integer) >= sizeof(intmax_t),
- // eval_if<
- // is_signed<Integer>::value,
- // identity_<Integer>,
- // identity_<intmax_t>
- // >,
- // eval_if<
- // sizeof(Integer) < sizeof(std::ptrdiff_t),
- // identity_<std::ptrdiff_t>,
- // identity_<Integer>
- // >
- // >
-{
- private:
- // XXX workaround a pedantic warning in old versions of g++
- // which complains about &&ing with a constant value
- template<bool x, bool y>
- struct and_
- {
- static const bool value = false;
- };
-
- template<bool y>
- struct and_<true, y>
- {
- static const bool value = y;
- };
-
- public:
- typedef typename
- eval_if<
- and_<
- std::numeric_limits<Integer>::is_signed,
- // digits is the number of no-sign bits
- (!std::numeric_limits<Integer>::is_bounded || (int(std::numeric_limits<Integer>::digits) + 1 >= num_digits<intmax_t>::value))
- >::value,
- identity_<intmax_t>,
- eval_if<
- int(std::numeric_limits<Integer>::digits) + 1 < num_digits<int>::value,
- identity_<int>,
- eval_if<
- int(std::numeric_limits<Integer>::digits) + 1 < num_digits<long>::value,
- identity_<long>,
- identity_<intmax_t>
- >
- >
- >::type type;
-}; // end integer_difference
-
-
-template<typename Number>
- struct numeric_difference
- : eval_if<
- is_integral<Number>::value,
- integer_difference<Number>,
- identity_<double>
- >
-{}; // end numeric_difference
-
-
-template<typename Number>
-__host__ __device__
-typename numeric_difference<Number>::type
-numeric_distance(Number x, Number y)
-{
- typedef typename numeric_difference<Number>::type difference_type;
- return difference_type(y) - difference_type(x);
-} // end numeric_distance
-
-} // end detail
-
-} // end thrust
-
diff --git a/spaces/CVPR/WALT/mmdet/models/backbones/ssd_vgg.py b/spaces/CVPR/WALT/mmdet/models/backbones/ssd_vgg.py
deleted file mode 100644
index cbc4fbb2301afc002f47abb9ed133a500d6cf23f..0000000000000000000000000000000000000000
--- a/spaces/CVPR/WALT/mmdet/models/backbones/ssd_vgg.py
+++ /dev/null
@@ -1,169 +0,0 @@
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from mmcv.cnn import VGG, constant_init, kaiming_init, normal_init, xavier_init
-from mmcv.runner import load_checkpoint
-
-from mmdet.utils import get_root_logger
-from ..builder import BACKBONES
-
-
-@BACKBONES.register_module()
-class SSDVGG(VGG):
- """VGG Backbone network for single-shot-detection.
-
- Args:
- input_size (int): width and height of input, from {300, 512}.
- depth (int): Depth of vgg, from {11, 13, 16, 19}.
- out_indices (Sequence[int]): Output from which stages.
-
- Example:
- >>> self = SSDVGG(input_size=300, depth=11)
- >>> self.eval()
- >>> inputs = torch.rand(1, 3, 300, 300)
- >>> level_outputs = self.forward(inputs)
- >>> for level_out in level_outputs:
- ... print(tuple(level_out.shape))
- (1, 1024, 19, 19)
- (1, 512, 10, 10)
- (1, 256, 5, 5)
- (1, 256, 3, 3)
- (1, 256, 1, 1)
- """
- extra_setting = {
- 300: (256, 'S', 512, 128, 'S', 256, 128, 256, 128, 256),
- 512: (256, 'S', 512, 128, 'S', 256, 128, 'S', 256, 128, 'S', 256, 128),
- }
-
- def __init__(self,
- input_size,
- depth,
- with_last_pool=False,
- ceil_mode=True,
- out_indices=(3, 4),
- out_feature_indices=(22, 34),
- l2_norm_scale=20.):
- # TODO: in_channels for mmcv.VGG
- super(SSDVGG, self).__init__(
- depth,
- with_last_pool=with_last_pool,
- ceil_mode=ceil_mode,
- out_indices=out_indices)
- assert input_size in (300, 512)
- self.input_size = input_size
-
- self.features.add_module(
- str(len(self.features)),
- nn.MaxPool2d(kernel_size=3, stride=1, padding=1))
- self.features.add_module(
- str(len(self.features)),
- nn.Conv2d(512, 1024, kernel_size=3, padding=6, dilation=6))
- self.features.add_module(
- str(len(self.features)), nn.ReLU(inplace=True))
- self.features.add_module(
- str(len(self.features)), nn.Conv2d(1024, 1024, kernel_size=1))
- self.features.add_module(
- str(len(self.features)), nn.ReLU(inplace=True))
- self.out_feature_indices = out_feature_indices
-
- self.inplanes = 1024
- self.extra = self._make_extra_layers(self.extra_setting[input_size])
- self.l2_norm = L2Norm(
- self.features[out_feature_indices[0] - 1].out_channels,
- l2_norm_scale)
-
- def init_weights(self, pretrained=None):
- """Initialize the weights in backbone.
-
- Args:
- pretrained (str, optional): Path to pre-trained weights.
- Defaults to None.
- """
- if isinstance(pretrained, str):
- logger = get_root_logger()
- load_checkpoint(self, pretrained, strict=False, logger=logger)
- elif pretrained is None:
- for m in self.features.modules():
- if isinstance(m, nn.Conv2d):
- kaiming_init(m)
- elif isinstance(m, nn.BatchNorm2d):
- constant_init(m, 1)
- elif isinstance(m, nn.Linear):
- normal_init(m, std=0.01)
- else:
- raise TypeError('pretrained must be a str or None')
-
- for m in self.extra.modules():
- if isinstance(m, nn.Conv2d):
- xavier_init(m, distribution='uniform')
-
- constant_init(self.l2_norm, self.l2_norm.scale)
-
- def forward(self, x):
- """Forward function."""
- outs = []
- for i, layer in enumerate(self.features):
- x = layer(x)
- if i in self.out_feature_indices:
- outs.append(x)
- for i, layer in enumerate(self.extra):
- x = F.relu(layer(x), inplace=True)
- if i % 2 == 1:
- outs.append(x)
- outs[0] = self.l2_norm(outs[0])
- if len(outs) == 1:
- return outs[0]
- else:
- return tuple(outs)
-
- def _make_extra_layers(self, outplanes):
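- # extra layers alternate 1x1 and 3x3 convs; an 'S' entry marks a stride-2 conv whose output channels follow it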
- layers = []
- kernel_sizes = (1, 3)
- num_layers = 0
- outplane = None
- for i in range(len(outplanes)):
- if self.inplanes == 'S':
- self.inplanes = outplane
- continue
- k = kernel_sizes[num_layers % 2]
- if outplanes[i] == 'S':
- outplane = outplanes[i + 1]
- conv = nn.Conv2d(
- self.inplanes, outplane, k, stride=2, padding=1)
- else:
- outplane = outplanes[i]
- conv = nn.Conv2d(
- self.inplanes, outplane, k, stride=1, padding=0)
- layers.append(conv)
- self.inplanes = outplanes[i]
- num_layers += 1
- if self.input_size == 512:
- layers.append(nn.Conv2d(self.inplanes, 256, 4, padding=1))
-
- return nn.Sequential(*layers)
-
-
-class L2Norm(nn.Module):
-
- def __init__(self, n_dims, scale=20., eps=1e-10):
- """L2 normalization layer.
-
- Args:
- n_dims (int): Number of dimensions to be normalized
- scale (float, optional): Defaults to 20..
- eps (float, optional): Used to avoid division by zero.
- Defaults to 1e-10.
- """
- super(L2Norm, self).__init__()
- self.n_dims = n_dims
- self.weight = nn.Parameter(torch.Tensor(self.n_dims))
- self.eps = eps
- self.scale = scale
-
- def forward(self, x):
- """Forward function."""
- # normalization layer convert to FP32 in FP16 training
- x_float = x.float()
- norm = x_float.pow(2).sum(1, keepdim=True).sqrt() + self.eps
- return (self.weight[None, :, None, None].float().expand_as(x_float) *
- x_float / norm).type_as(x)
diff --git a/spaces/CVPR/WALT/mmdet/models/roi_heads/bbox_heads/double_bbox_head.py b/spaces/CVPR/WALT/mmdet/models/roi_heads/bbox_heads/double_bbox_head.py
deleted file mode 100644
index 6c154cb3c0d9d7639c3d4a2a1272406d3fab8acd..0000000000000000000000000000000000000000
--- a/spaces/CVPR/WALT/mmdet/models/roi_heads/bbox_heads/double_bbox_head.py
+++ /dev/null
@@ -1,172 +0,0 @@
-import torch.nn as nn
-from mmcv.cnn import ConvModule, normal_init, xavier_init
-
-from mmdet.models.backbones.resnet import Bottleneck
-from mmdet.models.builder import HEADS
-from .bbox_head import BBoxHead
-
-
-class BasicResBlock(nn.Module):
- """Basic residual block.
-
- This block is a little different from the block in the ResNet backbone.
- The kernel size of conv1 is 1 in this block while 3 in ResNet BasicBlock.
-
- Args:
- in_channels (int): Channels of the input feature map.
- out_channels (int): Channels of the output feature map.
- conv_cfg (dict): The config dict for convolution layers.
- norm_cfg (dict): The config dict for normalization layers.
- """
-
- def __init__(self,
- in_channels,
- out_channels,
- conv_cfg=None,
- norm_cfg=dict(type='BN')):
- super(BasicResBlock, self).__init__()
-
- # main path
- self.conv1 = ConvModule(
- in_channels,
- in_channels,
- kernel_size=3,
- padding=1,
- bias=False,
- conv_cfg=conv_cfg,
- norm_cfg=norm_cfg)
- self.conv2 = ConvModule(
- in_channels,
- out_channels,
- kernel_size=1,
- bias=False,
- conv_cfg=conv_cfg,
- norm_cfg=norm_cfg,
- act_cfg=None)
-
- # identity path
- self.conv_identity = ConvModule(
- in_channels,
- out_channels,
- kernel_size=1,
- conv_cfg=conv_cfg,
- norm_cfg=norm_cfg,
- act_cfg=None)
-
- self.relu = nn.ReLU(inplace=True)
-
- def forward(self, x):
- identity = x
-
- x = self.conv1(x)
- x = self.conv2(x)
-
- identity = self.conv_identity(identity)
- out = x + identity
-
- out = self.relu(out)
- return out
-
-
-@HEADS.register_module()
-class DoubleConvFCBBoxHead(BBoxHead):
- r"""Bbox head used in Double-Head R-CNN
-
- .. code-block:: none
-
- /-> cls
- /-> shared convs ->
- \-> reg
- roi features
- /-> cls
- \-> shared fc ->
- \-> reg
- """ # noqa: W605
-
- def __init__(self,
- num_convs=0,
- num_fcs=0,
- conv_out_channels=1024,
- fc_out_channels=1024,
- conv_cfg=None,
- norm_cfg=dict(type='BN'),
- **kwargs):
- kwargs.setdefault('with_avg_pool', True)
- super(DoubleConvFCBBoxHead, self).__init__(**kwargs)
- assert self.with_avg_pool
- assert num_convs > 0
- assert num_fcs > 0
- self.num_convs = num_convs
- self.num_fcs = num_fcs
- self.conv_out_channels = conv_out_channels
- self.fc_out_channels = fc_out_channels
- self.conv_cfg = conv_cfg
- self.norm_cfg = norm_cfg
-
- # increase the channel of input features
- self.res_block = BasicResBlock(self.in_channels,
- self.conv_out_channels)
-
- # add conv heads
- self.conv_branch = self._add_conv_branch()
- # add fc heads
- self.fc_branch = self._add_fc_branch()
-
- out_dim_reg = 4 if self.reg_class_agnostic else 4 * self.num_classes
- self.fc_reg = nn.Linear(self.conv_out_channels, out_dim_reg)
-
- self.fc_cls = nn.Linear(self.fc_out_channels, self.num_classes + 1)
- self.relu = nn.ReLU(inplace=True)
-
- def _add_conv_branch(self):
- """Add the fc branch which consists of a sequential of conv layers."""
- branch_convs = nn.ModuleList()
- for i in range(self.num_convs):
- branch_convs.append(
- Bottleneck(
- inplanes=self.conv_out_channels,
- planes=self.conv_out_channels // 4,
- conv_cfg=self.conv_cfg,
- norm_cfg=self.norm_cfg))
- return branch_convs
-
- def _add_fc_branch(self):
- """Add the fc branch which consists of a sequential of fc layers."""
- branch_fcs = nn.ModuleList()
- for i in range(self.num_fcs):
- fc_in_channels = (
- self.in_channels *
- self.roi_feat_area if i == 0 else self.fc_out_channels)
- branch_fcs.append(nn.Linear(fc_in_channels, self.fc_out_channels))
- return branch_fcs
-
- def init_weights(self):
- # conv layers are already initialized by ConvModule
- normal_init(self.fc_cls, std=0.01)
- normal_init(self.fc_reg, std=0.001)
-
- for m in self.fc_branch.modules():
- if isinstance(m, nn.Linear):
- xavier_init(m, distribution='uniform')
-
- def forward(self, x_cls, x_reg):
- # conv head
- x_conv = self.res_block(x_reg)
-
- for conv in self.conv_branch:
- x_conv = conv(x_conv)
-
- if self.with_avg_pool:
- x_conv = self.avg_pool(x_conv)
-
- x_conv = x_conv.view(x_conv.size(0), -1)
- bbox_pred = self.fc_reg(x_conv)
-
- # fc head
- x_fc = x_cls.view(x_cls.size(0), -1)
- for fc in self.fc_branch:
- x_fc = self.relu(fc(x_fc))
-
- cls_score = self.fc_cls(x_fc)
-
- return cls_score, bbox_pred
diff --git a/spaces/CVPR/lama-example/predict.py b/spaces/CVPR/lama-example/predict.py
deleted file mode 100644
index 878b7988c113778f48ec3f940d2031a30c12e03f..0000000000000000000000000000000000000000
--- a/spaces/CVPR/lama-example/predict.py
+++ /dev/null
@@ -1,89 +0,0 @@
-#!/usr/bin/env python3
-
-# Example command:
-# ./bin/predict.py \
-# model.path=<path to checkpoint, prepared by make_checkpoint.py> \
-# indir=<path to input data> \
-# outdir=<where to store predicts>
-
-import logging
-import os
-import sys
-import traceback
-
-from saicinpainting.evaluation.utils import move_to_device
-
-os.environ['OMP_NUM_THREADS'] = '1'
-os.environ['OPENBLAS_NUM_THREADS'] = '1'
-os.environ['MKL_NUM_THREADS'] = '1'
-os.environ['VECLIB_MAXIMUM_THREADS'] = '1'
-os.environ['NUMEXPR_NUM_THREADS'] = '1'
-
-import cv2
-import hydra
-import numpy as np
-import torch
-import tqdm
-import yaml
-from omegaconf import OmegaConf
-from torch.utils.data._utils.collate import default_collate
-
-from saicinpainting.training.data.datasets import make_default_val_dataset
-from saicinpainting.training.trainers import load_checkpoint
-from saicinpainting.utils import register_debug_signal_handlers
-
-LOGGER = logging.getLogger(__name__)
-
-
-@hydra.main(config_path='configs/prediction', config_name='default.yaml')
-def main(predict_config: OmegaConf):
- try:
- register_debug_signal_handlers() # kill -10 will result in traceback dumped into log
-
- device = torch.device(predict_config.device)
-
- train_config_path = os.path.join(predict_config.model.path, 'config.yaml')
- with open(train_config_path, 'r') as f:
- train_config = OmegaConf.create(yaml.safe_load(f))
-
- train_config.training_model.predict_only = True
-
- out_ext = predict_config.get('out_ext', '.png')
-
- checkpoint_path = os.path.join(predict_config.model.path,
- 'models',
- predict_config.model.checkpoint)
- model = load_checkpoint(train_config, checkpoint_path, strict=False, map_location='cpu')
- model.freeze()
- model.to(device)
-
- if not predict_config.indir.endswith('/'):
- predict_config.indir += '/'
-
- dataset = make_default_val_dataset(predict_config.indir, **predict_config.dataset)
- with torch.no_grad():
- for img_i in tqdm.trange(len(dataset)):
- mask_fname = dataset.mask_filenames[img_i]
- cur_out_fname = os.path.join(
- predict_config.outdir,
- os.path.splitext(mask_fname[len(predict_config.indir):])[0] + out_ext
- )
- os.makedirs(os.path.dirname(cur_out_fname), exist_ok=True)
-
- batch = move_to_device(default_collate([dataset[img_i]]), device)
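- # binarize the mask: any positive value is treated as a region to inpaint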
- batch['mask'] = (batch['mask'] > 0) * 1
- batch = model(batch)
- cur_res = batch[predict_config.out_key][0].permute(1, 2, 0).detach().cpu().numpy()
-
- cur_res = np.clip(cur_res * 255, 0, 255).astype('uint8')
- cur_res = cv2.cvtColor(cur_res, cv2.COLOR_RGB2BGR)
- cv2.imwrite(cur_out_fname, cur_res)
- except KeyboardInterrupt:
- LOGGER.warning('Interrupted by user')
- except Exception as ex:
- LOGGER.critical(f'Prediction failed due to {ex}:\n{traceback.format_exc()}')
- sys.exit(1)
-
-
-if __name__ == '__main__':
- main()
diff --git a/spaces/CikeyQI/Yunzai/Yunzai/lib/bot.js b/spaces/CikeyQI/Yunzai/Yunzai/lib/bot.js
deleted file mode 100644
index 0c478ffd702713f8e2a1e0002553843feb445d45..0000000000000000000000000000000000000000
--- a/spaces/CikeyQI/Yunzai/Yunzai/lib/bot.js
+++ /dev/null
@@ -1,231 +0,0 @@
-import "./config/init.js"
-import cfg from "./config/config.js"
-import PluginsLoader from "./plugins/loader.js"
-import ListenerLoader from "./listener/loader.js"
-import { EventEmitter } from "events"
-import express from "express"
-import http from "http"
-import { WebSocketServer } from "ws"
-import _ from "lodash"
-
-export default class Yunzai extends EventEmitter {
- constructor() {
- super()
- this.uin = []
- this.adapter = []
- this.express = express()
- this.server = http.createServer(this.express)
- this.server.on("upgrade", (req, socket, head) => {
- this.wss.handleUpgrade(req, socket, head, conn => {
- conn.id = `${req.connection.remoteAddress}-${req.headers["sec-websocket-key"]}`
- this.makeLog("mark", `${logger.blue(`[${conn.id} <=> ${req.url}]`)} 建立连接:${JSON.stringify(req.headers)}`)
- conn.on("error", logger.error)
- conn.on("close", () => this.makeLog("mark", `${logger.blue(`[${conn.id} <≠> ${req.url}]`)} 断开连接`))
- conn.on("message", msg => this.makeLog("debug", `${logger.blue(`[${conn.id} => ${req.url}]`)} 消息:${String(msg).trim()}`))
- conn.sendMsg = msg => {
- if (typeof msg == "object")
- msg = JSON.stringify(msg)
- this.makeLog("debug", `${logger.blue(`[${conn.id} <= ${req.url}]`)} 消息:${msg}`)
- return conn.send(msg)
- }
- for (const i of this.wsf[req.url.split("/")[1]] || [])
- i(conn, req, socket, head)
- })
- })
- this.wss = new WebSocketServer({ noServer: true })
- this.wsf = {}
- }
-
- makeLog(level, msg) {
- logger[level](_.truncate(msg, { length: cfg.bot.logLength }))
- }
-
- em(name = "", data = {}) {
- if (data.self_id)
- Object.defineProperty(data, "bot", { value: Bot[data.self_id] })
- while (true) {
- this.emit(name, data)
- const i = name.lastIndexOf(".")
- if (i == -1) break
- name = name.slice(0, i)
- }
- }
-
- async run() {
- await import("./plugins/stdin.js")
- await PluginsLoader.load()
- await ListenerLoader.load()
- this.serverLoad()
- this.emit("online", this)
- }
-
- serverLoad() {
- this.express.use(req => {
- logger.mark(`${logger.blue(`[${req.ip} => ${req.url}]`)} HTTP ${req.method} request: ${JSON.stringify(req.headers)}`)
- req.res.redirect("https://github.com/TimeRainStarSky/Yunzai")
- })
-
- this.server.listen(cfg.bot.port, () => {
- const host = this.server.address().address
- const port = this.server.address().port
- logger.mark(`HTTP server started: ${logger.green(`http://[${host}]:${port}`)}`)
- for (const i of Object.keys(this.wsf))
- logger.info(`local ${i} endpoint: ${logger.blue(`ws://localhost:${port}/${i}`)}`)
- })
- }
-
- getFriendArray() {
- const array = []
- for (const bot_id of this.uin)
- for (const [id, i] of this[bot_id].fl || [])
- array.push({ ...i, bot_id })
- return array
- }
-
- getFriendList() {
- const array = []
- for (const bot_id of this.uin)
- for (const [id, i] of this[bot_id].fl || [])
- array.push(id)
- return array
- }
-
- getFriendMap() {
- const map = new Map
- for (const bot_id of this.uin)
- for (const [id, i] of this[bot_id].fl || [])
- map.set(id, { ...i, bot_id })
- return map
- }
- get fl() { return this.getFriendMap() }
-
- getGroupArray() {
- const array = []
- for (const bot_id of this.uin)
- for (const [id, i] of this[bot_id].gl || [])
- array.push({ ...i, bot_id })
- return array
- }
-
- getGroupList() {
- const array = []
- for (const bot_id of this.uin)
- for (const [id, i] of this[bot_id].gl || [])
- array.push(id)
- return array
- }
-
- getGroupMap() {
- const map = new Map
- for (const bot_id of this.uin)
- for (const [id, i] of this[bot_id].gl || [])
- map.set(id, { ...i, bot_id })
- return map
- }
- get gl() { return this.getGroupMap() }
- get gml() {
- const map = new Map
- for (const bot_id of this.uin)
- for (const [id, i] of this[bot_id].gml || [])
- map.set(id, i)
- return map
- }
-
- pickFriend(user_id) {
- user_id = Number(user_id) || String(user_id)
- const user = this.fl.get(user_id)
- if (user) return this[user.bot_id].pickFriend(user_id)
- logger.error(`failed to get user object: user ${logger.red(user_id)} not found`)
- }
- get pickUser() { return this.pickFriend }
-
- pickGroup(group_id) {
- group_id = Number(group_id) || String(group_id)
- const group = this.gl.get(group_id)
- if (group) return this[group.bot_id].pickGroup(group_id)
- logger.error(`failed to get group object: group ${logger.red(group_id)} not found`)
- }
-
- pickMember(group_id, user_id) {
- const group = this.pickGroup(group_id)
- if (group) return group.pickMember(user_id)
- }
-
- sendFriendMsg(bot_id, user_id, msg) {
- try {
- if (!bot_id)
- return this.pickFriend(user_id).sendMsg(msg)
-
- if (this[bot_id])
- return this[bot_id].pickFriend(user_id).sendMsg(msg)
-
- return new Promise(resolve =>
- this.once(`connect.${bot_id}`, data =>
- resolve(data.bot.pickFriend(user_id).sendMsg(msg))))
- } catch (err) {
- logger.error(`${logger.blue(`[${bot_id}]`)} 发送好友消息失败:[$${user_id}] ${err}`)
- }
- return false
- }
-
- sendGroupMsg(bot_id, group_id, msg) {
- try {
- if (!bot_id)
- return this.pickGroup(group_id).sendMsg(msg)
-
- if (this[bot_id])
- return this[bot_id].pickGroup(group_id).sendMsg(msg)
-
- return new Promise(resolve =>
- this.once(`connect.${bot_id}`, data =>
- resolve(data.bot.pickGroup(group_id).sendMsg(msg))))
- } catch (err) {
- logger.error(`${logger.blue(`[${bot_id}]`)} failed to send group message: [$${group_id}] ${err}`)
- }
- return false
- }
-
- async getFriendMsg(fnc = () => true) {
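- // resolve with the trimmed text of the next incoming message that satisfies fnc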
- if (typeof fnc != "function") {
- const { self_id, user_id } = fnc
- fnc = data => data.self_id == self_id && data.user_id == user_id
- }
-
- while (true) {
- const msg = await new Promise(resolve => {
- this.once("message", data => {
- if (data.message && fnc(data)) {
- let msg = ""
- for (const i of data.message)
- if (i.type = "text")
- msg += i.text.trim()
- resolve(msg)
- } else {
- resolve(false)
- }
- })
- })
- if (msg) return msg
- }
- }
-
- getMasterMsg() {
- return this.getFriendMsg(data =>
- cfg.master[data.self_id]?.includes(String(data.user_id)))
- }
-
- sendMasterMsg(msg) {
- for (const bot_id in cfg.master)
- for (const user_id of cfg.master[bot_id])
- this.sendFriendMsg(bot_id, user_id, msg)
- }
-
- makeForwardMsg(msg) { return { type: "node", data: msg } }
-
- async sendForwardMsg(send, msg) {
- const messages = []
- for (const { message } of msg)
- messages.push(await send(message))
- return messages
- }
-}
\ No newline at end of file
diff --git a/spaces/Codecooker/rvcapi/src/trainset_preprocess_pipeline_print.py b/spaces/Codecooker/rvcapi/src/trainset_preprocess_pipeline_print.py
deleted file mode 100644
index 7b19e3e9a5788552b6acb9cd6747bda7ae93146b..0000000000000000000000000000000000000000
--- a/spaces/Codecooker/rvcapi/src/trainset_preprocess_pipeline_print.py
+++ /dev/null
@@ -1,146 +0,0 @@
-import sys, os, multiprocessing
-from scipy import signal
-
-now_dir = os.getcwd()
-sys.path.append(now_dir)
-
-inp_root = sys.argv[1]
-sr = int(sys.argv[2])
-n_p = int(sys.argv[3])
-exp_dir = sys.argv[4]
-noparallel = sys.argv[5] == "True"
-import numpy as np, traceback
-from slicer2 import Slicer
-import librosa
-from scipy.io import wavfile
-from my_utils import load_audio
-import tqdm
-
-DoFormant = False
-Quefrency = 1.0
-Timbre = 1.0
-
-mutex = multiprocessing.Lock()
-f = open("%s/preprocess.log" % exp_dir, "a+")
-
-
-def println(strr):
- mutex.acquire()
- print(strr)
- f.write("%s\n" % strr)
- f.flush()
- mutex.release()
-
-
-class PreProcess:
- def __init__(self, sr, exp_dir):
- self.slicer = Slicer(
- sr=sr,
- threshold=-42,
- min_length=1500,
- min_interval=400,
- hop_size=15,
- max_sil_kept=500,
- )
- self.sr = sr
- self.bh, self.ah = signal.butter(N=5, Wn=48, btype="high", fs=self.sr)
- self.per = 3.0
- self.overlap = 0.3
- self.tail = self.per + self.overlap
- self.max = 0.9
- self.alpha = 0.75
- self.exp_dir = exp_dir
- self.gt_wavs_dir = "%s/0_gt_wavs" % exp_dir
- self.wavs16k_dir = "%s/1_16k_wavs" % exp_dir
- os.makedirs(self.exp_dir, exist_ok=True)
- os.makedirs(self.gt_wavs_dir, exist_ok=True)
- os.makedirs(self.wavs16k_dir, exist_ok=True)
-
- def norm_write(self, tmp_audio, idx0, idx1):
- tmp_max = np.abs(tmp_audio).max()
- if tmp_max > 2.5:
- print("%s-%s-%s-filtered" % (idx0, idx1, tmp_max))
- return
- tmp_audio = (tmp_audio / tmp_max * (self.max * self.alpha)) + (
- 1 - self.alpha
- ) * tmp_audio
- wavfile.write(
- "%s/%s_%s.wav" % (self.gt_wavs_dir, idx0, idx1),
- self.sr,
- tmp_audio.astype(np.float32),
- )
- tmp_audio = librosa.resample(
- tmp_audio, orig_sr=self.sr, target_sr=16000
- ) # , res_type="soxr_vhq"
- wavfile.write(
- "%s/%s_%s.wav" % (self.wavs16k_dir, idx0, idx1),
- 16000,
- tmp_audio.astype(np.float32),
- )
-
- def pipeline(self, path, idx0):
- try:
- audio = load_audio(path, self.sr, DoFormant, Quefrency, Timbre)
-            # a zero-phase digital filter causes pre-ringing noise...
- # audio = signal.filtfilt(self.bh, self.ah, audio)
- audio = signal.lfilter(self.bh, self.ah, audio)
-
- idx1 = 0
- for audio in self.slicer.slice(audio):
- i = 0
- while 1:
- start = int(self.sr * (self.per - self.overlap) * i)
- i += 1
- if len(audio[start:]) > self.tail * self.sr:
- tmp_audio = audio[start : start + int(self.per * self.sr)]
- self.norm_write(tmp_audio, idx0, idx1)
- idx1 += 1
- else:
- tmp_audio = audio[start:]
- idx1 += 1
- break
- self.norm_write(tmp_audio, idx0, idx1)
- # println("%s->Suc." % path)
- except:
- println("%s->%s" % (path, traceback.format_exc()))
-
- def pipeline_mp(self, infos, thread_n):
- for path, idx0 in tqdm.tqdm(
- infos, position=thread_n, leave=True, desc="thread:%s" % thread_n
- ):
- self.pipeline(path, idx0)
-
- def pipeline_mp_inp_dir(self, inp_root, n_p):
- try:
- infos = [
- ("%s/%s" % (inp_root, name), idx)
- for idx, name in enumerate(sorted(list(os.listdir(inp_root))))
- ]
- if noparallel:
- for i in range(n_p):
-                    self.pipeline_mp(infos[i::n_p], i)
- else:
- ps = []
- for i in range(n_p):
- p = multiprocessing.Process(
- target=self.pipeline_mp, args=(infos[i::n_p], i)
- )
- ps.append(p)
- p.start()
- for i in range(n_p):
- ps[i].join()
- except:
- println("Fail. %s" % traceback.format_exc())
-
-
-def preprocess_trainset(inp_root, sr, n_p, exp_dir):
- pp = PreProcess(sr, exp_dir)
- println("start preprocess")
- println(sys.argv)
- pp.pipeline_mp_inp_dir(inp_root, n_p)
- println("end preprocess")
-
-
-if __name__ == "__main__":
- preprocess_trainset(inp_root, sr, n_p, exp_dir)
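The inner loop of `pipeline` above advances a window of `per` seconds by `per - overlap` seconds per step and writes whatever remains (anything shorter than `tail = per + overlap` seconds) as one final segment. A standalone sketch of the boundaries it produces, with an invented sample rate and clip length:

```python
sr, per, overlap = 40000, 3.0, 0.3   # sr is illustrative; per/overlap match the class defaults
tail = per + overlap
n_samples = int(10.0 * sr)           # pretend the slicer returned a 10-second region

i = 0
while True:
    start = int(sr * (per - overlap) * i)
    i += 1
    if n_samples - start > tail * sr:
        print(f"segment {i - 1}: samples [{start}, {start + int(per * sr)})")
    else:
        print(f"segment {i - 1}: samples [{start}, {n_samples})  (short tail)")
        break
```

With the class defaults (per=3.0, overlap=0.3), windows advance 2.7 s per step and anything shorter than 3.3 s is flushed as the final tail segment.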
diff --git a/spaces/DCandE/rvc-models/config.py b/spaces/DCandE/rvc-models/config.py
deleted file mode 100644
index c0c16e0017efbcaf250cb539a1d0edb4e83575e4..0000000000000000000000000000000000000000
--- a/spaces/DCandE/rvc-models/config.py
+++ /dev/null
@@ -1,88 +0,0 @@
-######################## Hardware parameters ########################
-
-# Set to cuda:x, cpu or mps; x is the GPU index. Only NVIDIA GPUs / Apple Silicon are accelerated
-device = "cuda:0"
-
-# Safe to set True on 9/10/20/30/40-series GPUs; quality is unaffected, and 20-series or newer get a speedup
-is_half = True
-
-# 0 (default) uses all CPU threads; set a number to cap CPU usage
-n_cpu = 0
-
-######################## Hardware parameters ########################
-
-
-################## Parameter-handling logic below, do not edit ##################
-
-######################## Command-line arguments ########################
-import argparse
-
-parser = argparse.ArgumentParser()
-parser.add_argument("--port", type=int, default=7865, help="Listen port")
-parser.add_argument("--pycmd", type=str, default="python", help="Python command")
-parser.add_argument("--colab", action="store_true", help="Launch in colab")
-parser.add_argument(
- "--noparallel", action="store_true", help="Disable parallel processing"
-)
-parser.add_argument(
- "--noautoopen", action="store_true", help="Do not open in browser automatically"
-)
-cmd_opts, unknown = parser.parse_known_args()
-
-python_cmd = cmd_opts.pycmd
-listen_port = cmd_opts.port
-iscolab = cmd_opts.colab
-noparallel = cmd_opts.noparallel
-noautoopen = cmd_opts.noautoopen
-######################## Command-line arguments ########################
-
-import sys
-import torch
-
-
-# has_mps is only available in nightly PyTorch (for now) and macOS 12.3+,
-# so check via getattr and a trial tensor for compatibility.
-def has_mps() -> bool:
- if sys.platform != "darwin":
- return False
- else:
- if not getattr(torch, "has_mps", False):
- return False
- try:
- torch.zeros(1).to(torch.device("mps"))
- return True
- except Exception:
- return False
-
-
-if not torch.cuda.is_available():
- if has_mps():
-        print("No supported NVIDIA GPU found, using MPS for inference")
- device = "mps"
- else:
-        print("No supported NVIDIA GPU found, using CPU for inference")
- device = "cpu"
- is_half = False
-
-if device not in ["cpu", "mps"]:
- gpu_name = torch.cuda.get_device_name(int(device.split(":")[-1]))
- if "16" in gpu_name or "MX" in gpu_name:
-        print("16-series / MX-series GPUs are forced to single precision")
- is_half = False
-
-from multiprocessing import cpu_count
-
-if n_cpu == 0:
- n_cpu = cpu_count()
-if is_half:
-    # preset for 6 GB of VRAM
- x_pad = 3
- x_query = 10
- x_center = 60
- x_max = 65
-else:
-    # preset for 5 GB of VRAM
- x_pad = 1
- x_query = 6
- x_center = 38
- x_max = 41
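One detail worth noting in the config above: it parses with `parse_known_args`, so flags aimed at other tools pass through as leftovers instead of raising an error. A quick self-contained demonstration:

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--port", type=int, default=7865)
parser.add_argument("--noparallel", action="store_true")

opts, unknown = parser.parse_known_args(["--port", "8000", "--some-upstream-flag"])
print(opts.port)        # 8000
print(opts.noparallel)  # False
print(unknown)          # ['--some-upstream-flag'], tolerated rather than an error
```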
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/index-7648fc8d.js b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/index-7648fc8d.js
deleted file mode 100644
index 161b8d1ff63da7d47bbec48045861e2c83fb9097..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/index-7648fc8d.js
+++ /dev/null
@@ -1,7 +0,0 @@
-[~7 lines of minified, bundled JavaScript from this Gradio CDN asset omitted: unreadable build output]

[The diff header for the next file was lost during extraction; the hunk below deletes a fractal-generation Python module and resumes partway through its Julia-set method.]

-                    if np.abs(z) >= self.thr:
- z = self.thr
- break
- fractal[ix, iy] = z
- self.fractal = np.abs(fractal)
- self.type_ = FractalType.Julia
- return self
-
- def create_mandelbrot(self):
- """Creates a fractal of the Mandelbrot family, the fractal is stored inside self.fractal"""
- fractal = np.zeros((self.n, self.n), dtype="complex")
- x_space = np.linspace(self.xlim[0], self.xlim[1], self.n)
- y_space = np.linspace(self.ylim[0], self.ylim[1], self.n)
- for ix, x in enumerate(x_space):
- for iy, y in enumerate(y_space):
-                z = 0
-                for _ in range(self.max_iter):
-                    z = z ** 2 + complex(x, y)
- if np.abs(z) >= self.thr:
- z = self.thr
- break
- fractal[ix, iy] = z
- self.fractal = np.abs(fractal.transpose())
- self.type_ = FractalType.Mandelbrot
- return self
-
- def plot(self, **kwargs):
- if self.fractal is None:
- print("Nothing to plot. Generate a fractal first.")
- return None
- random_colormap = np.random.choice(
- ["orrd", "inferno_r", "hot_r", "jet_r", "purples", "agsunset_r"]
- )
- fig = px.imshow(
- img=self.fractal, color_continuous_scale=random_colormap, **kwargs
- )
- return fig
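The `create_mandelbrot` method above is the classic escape-time iteration, z = z**2 + c, with z clipped to the threshold once it escapes. A self-contained single-point version of the same inner loop (function name and sample points are invented):

```python
import numpy as np

def mandelbrot_value(c: complex, max_iter: int = 50, thr: float = 2.0) -> float:
    """Escape-time value for one point, mirroring the deleted inner loop."""
    z = 0
    for _ in range(max_iter):
        z = z ** 2 + c
        if np.abs(z) >= thr:
            return thr          # clipped, exactly as the class clips z to self.thr
    return np.abs(z)

print(mandelbrot_value(complex(-0.5, 0.0)))  # interior point: stays bounded
print(mandelbrot_value(complex(1.0, 1.0)))   # escapes on the second step, returns 2.0
```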
diff --git a/spaces/Eddycrack864/Applio-Inference/extract_locale.py b/spaces/Eddycrack864/Applio-Inference/extract_locale.py
deleted file mode 100644
index a4ff5ea3ddd7c612c640544099ab98a861b8fe35..0000000000000000000000000000000000000000
--- a/spaces/Eddycrack864/Applio-Inference/extract_locale.py
+++ /dev/null
@@ -1,34 +0,0 @@
-import json
-import re
-
-# Define the regular expression pattern
-pattern = r"""i18n\([\s\n\t]*(["'][^"']+["'])[\s\n\t]*\)"""
-
-# Initialize the dictionary to store key-value pairs
-data = {}
-
-
-def process(fn: str):
- global data
- with open(fn, "r", encoding="utf-8") as f:
- contents = f.read()
- matches = re.findall(pattern, contents)
- for key in matches:
-        key = eval(key)  # the capture still includes its quotes; eval unwraps the literal
- print("extract:", key)
- data[key] = key
-
-
-print("processing infer-web.py")
-process("infer-web.py")
-
-print("processing gui_v0.py")
-process("gui_v0.py")
-
-print("processing gui_v1.py")
-process("gui_v1.py")
-
-# Save as a JSON file
-with open("./i18n/en_US.json", "w", encoding="utf-8") as f:
- json.dump(data, f, ensure_ascii=False, indent=4)
- f.write("\n")
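The pattern above matches `i18n("...")` / `i18n('...')` calls, tolerating whitespace inside the parentheses. Note that the quotes themselves are part of the capture, which is why the script evaluates each match back to a plain string. A quick demonstration on an invented source line:

```python
import re

pattern = r"""i18n\([\s\n\t]*(["'][^"']+["'])[\s\n\t]*\)"""
line = 'btn = gr.Button(i18n( "开始训练" ))'   # invented example input
for key in re.findall(pattern, line):
    print(key)        # "开始训练"  (quotes included in the capture)
    print(eval(key))  # 开始训练    (unwrapped string, as in process())
```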
diff --git a/spaces/Eddycrack864/Applio-Inference/get-pip.py b/spaces/Eddycrack864/Applio-Inference/get-pip.py
deleted file mode 100644
index cf68a2b3426decf01bde021a50c47a2115af303a..0000000000000000000000000000000000000000
--- a/spaces/Eddycrack864/Applio-Inference/get-pip.py
+++ /dev/null
@@ -1,32657 +0,0 @@
-#!/usr/bin/env python
-#
-# Hi There!
-#
-# You may be wondering what this giant blob of binary data here is, you might
-# even be worried that we're up to something nefarious (good for you for being
-# paranoid!). This is a base85 encoding of a zip file, this zip file contains
-# an entire copy of pip (version 23.2.1).
-#
-# Pip is a thing that installs packages, pip itself is a package that someone
-# might want to install, especially if they're looking to run this get-pip.py
-# script. Pip has a lot of code to deal with the security of installing
-# packages, various edge cases on various platforms, and other such sort of
-# "tribal knowledge" that has been encoded in its code base. Because of this
-# we basically include an entire copy of pip inside this blob. We do this
-# because the alternatives are to attempt to implement a "minipip" that
-# probably doesn't do things correctly and has weird edge cases, or to
-# compress pip itself down into a single file.
-#
-# If you're wondering how this is created, it is generated using
-# `scripts/generate.py` in https://github.com/pypa/get-pip.
-
-import sys
-
-this_python = sys.version_info[:2]
-min_version = (3, 7)
-if this_python < min_version:
- message_parts = [
- "This script does not work on Python {}.{}".format(*this_python),
- "The minimum supported Python version is {}.{}.".format(*min_version),
- "Please use https://bootstrap.pypa.io/pip/{}.{}/get-pip.py instead.".format(*this_python),
- ]
- print("ERROR: " + " ".join(message_parts))
- sys.exit(1)
-
-
-import os.path
-import pkgutil
-import shutil
-import tempfile
-import argparse
-import importlib
-from base64 import b85decode
-
-
-def include_setuptools(args):
- """
- Install setuptools only if absent and not excluded.
- """
- cli = not args.no_setuptools
- env = not os.environ.get("PIP_NO_SETUPTOOLS")
- absent = not importlib.util.find_spec("setuptools")
- return cli and env and absent
-
-
-def include_wheel(args):
- """
- Install wheel only if absent and not excluded.
- """
- cli = not args.no_wheel
- env = not os.environ.get("PIP_NO_WHEEL")
- absent = not importlib.util.find_spec("wheel")
- return cli and env and absent
-
-
-def determine_pip_install_arguments():
- pre_parser = argparse.ArgumentParser()
- pre_parser.add_argument("--no-setuptools", action="store_true")
- pre_parser.add_argument("--no-wheel", action="store_true")
- pre, args = pre_parser.parse_known_args()
-
- args.append("pip")
-
- if include_setuptools(pre):
- args.append("setuptools")
-
- if include_wheel(pre):
- args.append("wheel")
-
- return ["install", "--upgrade", "--force-reinstall"] + args
-
-
-def monkeypatch_for_cert(tmpdir):
- """Patches `pip install` to provide default certificate with the lowest priority.
-
- This ensures that the bundled certificates are used unless the user specifies a
- custom cert via any of pip's option passing mechanisms (config, env-var, CLI).
-
- A monkeypatch is the easiest way to achieve this, without messing too much with
- the rest of pip's internals.
- """
- from pip._internal.commands.install import InstallCommand
-
- # We want to be using the internal certificates.
- cert_path = os.path.join(tmpdir, "cacert.pem")
- with open(cert_path, "wb") as cert:
- cert.write(pkgutil.get_data("pip._vendor.certifi", "cacert.pem"))
-
- install_parse_args = InstallCommand.parse_args
-
- def cert_parse_args(self, args):
- if not self.parser.get_default_values().cert:
-            # There is no user-provided cert -- force use of the bundled cert
- self.parser.defaults["cert"] = cert_path # calculated above
- return install_parse_args(self, args)
-
- InstallCommand.parse_args = cert_parse_args
-
-
-def bootstrap(tmpdir):
- monkeypatch_for_cert(tmpdir)
-
- # Execute the included pip and use it to install the latest pip and
- # setuptools from PyPI
- from pip._internal.cli.main import main as pip_entry_point
- args = determine_pip_install_arguments()
- sys.exit(pip_entry_point(args))
-
-
-def main():
- tmpdir = None
- try:
- # Create a temporary working directory
- tmpdir = tempfile.mkdtemp()
-
- # Unpack the zipfile into the temporary directory
- pip_zip = os.path.join(tmpdir, "pip.zip")
- with open(pip_zip, "wb") as fp:
- fp.write(b85decode(DATA.replace(b"\n", b"")))
-
- # Add the zipfile to sys.path so that we can import it
- sys.path.insert(0, pip_zip)
-
- # Run the bootstrap
- bootstrap(tmpdir=tmpdir)
- finally:
- # Clean up our temporary working directory
- if tmpdir:
- shutil.rmtree(tmpdir, ignore_errors=True)
-
-
-DATA = b"""
-[base85 blob truncated here: roughly 32,000 further lines encoding the embedded zip of pip 23.2.1 described in the header comment]