diff --git a/spaces/101-5/gpt4free/g4f/.v1/testing/useless_test.py b/spaces/101-5/gpt4free/g4f/.v1/testing/useless_test.py
deleted file mode 100644
index 47c92386ae925c79aec64891281041cd693077d5..0000000000000000000000000000000000000000
--- a/spaces/101-5/gpt4free/g4f/.v1/testing/useless_test.py
+++ /dev/null
@@ -1,25 +0,0 @@
-from gpt4free import usesless
-
-message_id = ""
-while True:
- prompt = input("Question: ")
- if prompt == "!stop":
- break
-
- req = usesless.Completion.create(prompt=prompt, parentMessageId=message_id)
-
- print(f"Answer: {req['text']}")
- message_id = req["id"]
-
-import gpt4free
-
-message_id = ""
-while True:
- prompt = input("Question: ")
- if prompt == "!stop":
- break
-
- req = gpt4free.Completion.create(provider=gpt4free.Provider.UseLess, prompt=prompt, parentMessageId=message_id)
-
- print(f"Answer: {req['text']}")
- message_id = req["id"]
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Enscape3D System Requirements for Windows and MacOS A Comparison.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Enscape3D System Requirements for Windows and MacOS A Comparison.md
deleted file mode 100644
index 5839ce1535e49940630495643f5252a4b4d3290d..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Enscape3D System Requirements for Windows and MacOS A Comparison.md
+++ /dev/null
@@ -1,54 +0,0 @@
-
-
What You Need to Know About Enscape3D System Requirements
-
-
Enscape3D is powerful real-time rendering software that works with popular CAD/BIM applications such as Revit, SketchUp, Rhino, Archicad, and Vectorworks. Enscape3D allows you to create stunning visualizations, animations and virtual reality experiences with ease and speed. But what are the system requirements to run Enscape3D smoothly and efficiently?
In this article, we will explain the technical requirements to run Enscape3D on Windows and MacOS operating systems, as well as the recommended specifications for optimal performance and VR compatibility. We will also provide some tips on how to optimize your system and project settings for better rendering quality and speed.
-
-
Enscape3D System Requirements for Windows
-
-
Enscape3D uses ray tracing for its real-time rendering, and almost all of the calculations it performs are handled on the graphics card (GPU). For this reason, your computer must at least meet the minimum system requirements set out below. Furthermore, although not a requirement, we recommend using Enscape3D with dual monitors, as it is optimized for a dual-monitor setup.
-
-
The system requirements for Enscape3D and for the standalone executable files it can export are identical. A fast and stable internet connection is also recommended; use a direct cable connection and avoid Wi-Fi where possible, as a slow connection can increase asset library loading times.
-
-
| Windows OS | Minimum Requirements | Recommended Requirements | VR Requirements |
| --- | --- | --- | --- |
| Operating System | Windows 10 or higher (Enscape3D will possibly also run where Windows 10 is installed on certain Intel Macs via Bootcamp) | Windows 10 or higher (Enscape3D will possibly also run where Windows 10 is installed on certain Intel Macs via Bootcamp) | Windows 10 or higher (Enscape3D will possibly also run where Windows 10 is installed on certain Intel Macs via Bootcamp) |
| Graphics Card | NVIDIA or AMD dedicated GPU with 4GB VRAM that supports Vulkan 1.1: NVIDIA GeForce GTX 900 series / Quadro M series and newer; AMD Radeon RX 400 series / equivalent Radeon Pro series and newer. Unsupported hardware: Radeon 6000 mobile GPUs, Intel Integrated Graphics onboard GPUs, SLI | NVIDIA or AMD dedicated GPU with 8GB VRAM that supports Vulkan 1.1: NVIDIA GeForce RTX 2000 series / Quadro RTX series and newer; AMD Radeon RX 5000 series / equivalent Radeon Pro series and newer | NVIDIA or AMD dedicated GPU with 8GB VRAM that supports Vulkan 1.1: NVIDIA GeForce RTX 3000 series / Quadro RTX series and newer; AMD Radeon RX 6000 series / equivalent Radeon Pro series and newer |
| CPU | Dual core processor (e.g. Intel Core i5) with at least 2.5 GHz clock speed | Quad core processor (e.g. Intel Core i7) with at least 3.5 GHz clock speed | Six core processor (e.g. Intel Core i9) with at least 4 GHz clock speed |
| RAM | 8 GB RAM or more | 16 GB RAM or more | 32 GB RAM or more |
| CAD/BIM Software | The Enscape3D plug-in is provided for the following host applications: Revit (2019, 2020, 2021, 2022, and 2023) *SketchUp (2019, 2020, 2021, ddb901b051 | | |
\ No newline at end of file
diff --git a/spaces/1nferno/Single_Digit_Detection/README.md b/spaces/1nferno/Single_Digit_Detection/README.md
deleted file mode 100644
index e373474168c5904a8f89abc696d107fc91c27b7c..0000000000000000000000000000000000000000
--- a/spaces/1nferno/Single_Digit_Detection/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Single Digit Detection
-emoji: 📚
-colorFrom: purple
-colorTo: yellow
-sdk: gradio
-sdk_version: 3.1.7
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/1phancelerku/anime-remove-background/Blade Idle A Fun and Easy Idle RPG with Customizable Skills and Equipment.md b/spaces/1phancelerku/anime-remove-background/Blade Idle A Fun and Easy Idle RPG with Customizable Skills and Equipment.md
deleted file mode 100644
index ef647761cd077138a60b33a9e5faf3ac6a22a770..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Blade Idle A Fun and Easy Idle RPG with Customizable Skills and Equipment.md
+++ /dev/null
@@ -1,104 +0,0 @@
-
-
How to Download Blade Idle: A Guide for Android Users
-
If you're looking for a new idle RPG game to play on your Android device, you might want to check out Blade Idle, a simulation game developed by mobirix. In this game, you'll follow the story of a common herb collector who stumbles onto a legendary sword and becomes a great hero. You'll adventure through the main stages and dungeons, and grow your character through farming, merging, and upgrading your equipment. You'll also be able to customize your skills, collect various pets, relics, and insignias, and challenge the upgrade dungeon for different skins.
-
Blade Idle is a game that combines the thrill of action with the convenience of idle gameplay. You can enjoy the game at your own pace, without worrying about missing out on anything. You can also play with other players from around the world, and compete for rankings and rewards. Blade Idle is a game that will keep you entertained for hours, with its rich content, stunning graphics, and engaging storyline.
So how can you download Blade Idle and start playing it right away? There are two main ways to do so: from the Google Play Store or from the BlueStacks emulator. In this article, we'll explain both methods in detail, and also show you how to play Blade Idle on your PC or Mac if you prefer a bigger screen. Let's get started!
-
How to download Blade Idle from Google Play Store
-
The easiest way to download Blade Idle is from the Google Play Store, the official app store for Android devices. Here are the steps you need to follow:
-
-
Open the Google Play Store app on your Android device.
-
Search for "Blade Idle" in the search bar.
-
Tap on the game icon that says "Blade Idle" by mobirix.
-
Tap on the green "Install" button and wait for the download to finish.
-
Tap on the "Open" button or find the game icon on your home screen or app drawer.
-
Enjoy playing Blade Idle!
-
-
That's it! You've successfully downloaded Blade Idle from the Google Play Store. You can now start playing the game and enjoy its features. However, if you don't have access to the Google Play Store or you want to try a different way of downloading Blade Idle, you can also use an emulator.
-
How to download Blade Idle from BlueStacks emulator
-
An emulator is a software that allows you to run Android apps on your PC or Mac. One of the most popular emulators is BlueStacks, which is free and easy to use. With BlueStacks, you can download Blade Idle and play it on your computer with better performance and graphics. Here are the steps you need to follow:
-
-
Download and install BlueStacks on your PC or Mac from [the official BlueStacks page](https://www.bluestacks.com/apps/simulation/blade-idle-on-pc.html).
-
Launch BlueStacks and sign in with your Google account.
-
Search for "Blade Idle" in the search bar at the top right corner.
-
Click on the game icon that says "Blade Idle" by mobirix.
-
Click on the green "Install" button and wait for the download to finish.
-
Click on the "Open" button or find the game icon on your home screen or app drawer.
-
Enjoy playing Blade Idle!
-
-
Congratulations! You've successfully downloaded Blade Idle from BlueStacks emulator. You can now play the game on your PC or Mac with better controls and features. However, if you want to switch between your Android device and your computer, you can also sync your progress using Facebook or Google Play Games.
-
How to play Blade Idle on your PC or Mac
-
If you've downloaded Blade Idle from BlueStacks emulator, you can also play it on your PC or Mac with better graphics and performance. However, you might need to adjust some settings and controls to optimize your gaming experience. Here are some tips and tricks you can use:
-
-
Change the resolution and graphics quality of the game from the settings menu. You can choose from low, medium, high, or ultra settings depending on your device's specifications.
-
Use the keyboard and mouse to control the game. You can customize the key mapping from the BlueStacks settings menu. You can also use the gamepad if you have one connected to your computer.
-
Enable the eco mode to reduce CPU and battery consumption. This will make the game run smoother and faster. You can also enable the multi-instance mode to run multiple games or apps at the same time.
-
Use the screen recorder and screenshot tools to capture your gameplay and share it with your friends. You can also stream your game live on Twitch or YouTube using the BlueStacks streaming mode.
-
Access the in-game chat and social features to communicate with other players and join guilds. You can also use the BlueStacks chat app to chat with other BlueStacks users.
-
-
With these tips and tricks, you can enjoy playing Blade Idle on your PC or Mac with better graphics and performance. You can also switch between your Android device and your computer anytime you want, as long as you sync your progress using Facebook or Google Play Games.
-
download blade idle game for android
-download blade idle game for pc
-download blade idle game apk
-download blade idle game mod
-download blade idle game guide
-download blade idle game tips and tricks
-download blade idle game coupon codes
-download blade idle game best skills
-download blade idle game emulator
-download blade idle game bluestacks
-download blade idle game review
-download blade idle game hack
-download blade idle game cheats
-download blade idle game update
-download blade idle game offline
-download blade idle game online
-download blade idle game free
-download blade idle game no ads
-download blade idle game costumes
-download blade idle game weapons
-download blade idle game armor
-download blade idle game dungeons
-download blade idle game farming
-download blade idle game pets
-download blade idle game relics
-download blade idle game insignia
-download blade idle game skins
-download blade idle game adventure
-download blade idle game simulation
-download blade idle game role playing
-download blade idle game casual
-download blade idle game multiplayer
-download blade idle game single player
-download blade idle game anime
-download blade idle game story
-download blade idle game hero
-download blade idle game sword
-download blade idle game fusion
-download blade idle game merge
-download blade idle game challenge
-download blade idle mobirix
-how to play/download/install/blade/idle/game
-where to find/download/get/blade/idle/game
-what is/download/blade/idle/game
-why play/download/blade/idle/game
-
Conclusion
-
Blade Idle is a fun and addictive idle RPG game that you can play on your Android device or your PC or Mac. In this game, you'll follow the story of a common herb collector who becomes a great hero with a legendary sword. You'll adventure through various stages and dungeons, and grow your character through farming, merging, and upgrading your equipment. You'll also be able to customize your skills, collect various pets, relics, and insignias, and challenge the upgrade dungeon for different skins.
-
You can download Blade Idle from the Google Play Store or from the BlueStacks emulator. Both methods are easy and fast, and will allow you to start playing the game right away. You can also play Blade Idle on your PC or Mac with better graphics and performance, using some tips and tricks to optimize your gaming experience.
-
Blade Idle is a game that will keep you entertained for hours, with its rich content, stunning graphics, and engaging storyline. You can also play with other players from around the world, and compete for rankings and rewards. Blade Idle is a game that you don't want to miss out on!
-
FAQs
-
Here are some common questions and answers about Blade Idle:
-
-
What are the system requirements for Blade Idle?
-Blade Idle requires Android 4.4 or higher for mobile devices, and Windows 7 or higher or Mac OS X 10.11 or higher for computers. You also need at least 2 GB of RAM and 500 MB of free storage space.
-
How can I get more gold and gems in Blade Idle?
-You can get more gold and gems by completing quests, achievements, daily missions, events, and dungeons. You can also watch ads, spin the roulette wheel, open chests, or buy them with real money.
-
How can I merge and upgrade my equipment in Blade Idle?
-You can merge and upgrade your equipment by dragging two items of the same grade onto each other. This will create a higher grade item with better stats. You can also use upgrade stones to increase the level of your equipment.
-
How can I unlock more skills in Blade Idle?
-You can unlock more skills by reaching certain levels or completing certain stages. You can also use skill books to learn new skills or upgrade existing ones.
-
How can I change my character's appearance in Blade Idle?
-You can change your character's appearance by using different skins. You can get skins by challenging the upgrade dungeon or buying them with gems.
- 401be4b1e0
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Enjoy Music Movies and TV Shows with Black Video Player APK.md b/spaces/1phancelerku/anime-remove-background/Enjoy Music Movies and TV Shows with Black Video Player APK.md
deleted file mode 100644
index 6024408b0edcc1f0fb0724b2439a07edef169f4b..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Enjoy Music Movies and TV Shows with Black Video Player APK.md
+++ /dev/null
@@ -1,94 +0,0 @@
-
-
Black Video Player APK: A Powerful and Elegant Media Player for Android
-
If you are looking for a media player that can play any video or audio file on your Android device, you should try Black Video Player APK. This is a free and ad-free app that offers a lot of features and benefits for your media enjoyment. In this article, we will tell you what Black Video Player APK is, how to download and install it, how to use it, and why you should choose it.
Black Video Player APK is an app that allows you to play any video or audio file on your Android device. It supports all formats, including MKV, MP4, AVI, MOV, Ogg, FLAC, TS, M2TS, Wv, and AAC. It can also play files from your local storage or from network streams. It has a simple and elegant user interface that makes it easy to use and navigate. It also has a lot of features that enhance your media experience.
-
Features of Black Video Player APK
-
Supports all video and audio formats
-
Black Video Player APK can play any video or audio file that you have on your device or online. You don't need to worry about compatibility issues or converting files. You can enjoy any media content with this app.
-
Plays local and network files
-
Black Video Player APK can play files from your internal or external storage, as well as from network streams. You can access your media library easily with this app. You can also stream videos from online sources, such as YouTube, Vimeo, Dailymotion, etc.
-
Offers gesture controls and subtitles
-
Black Video Player APK gives you full control over your playback with gesture controls. You can swipe left or right to seek forward or backward, swipe up or down to adjust the volume or brightness, double tap to pause or resume, etc. You can also enable subtitles for your videos, and adjust the size, color, position, and timing of them.
-
Customizes playback speed and aspect ratio
-
Black Video Player APK lets you customize your playback speed and aspect ratio according to your preference. You can speed up or slow down the video or audio playback, or change the aspect ratio to fit your screen size. You can also rotate the screen orientation if you want.
-
Enhances video quality and sound effects
-
Black Video Player APK improves the video quality and sound effects of your media files. It has a built-in equalizer that lets you adjust the bass, treble, balance, etc. of your audio output. It also has a video enhancer that enhances the brightness, contrast, saturation, etc. of your video output.
-
black video player apk download
-black video player apk for android
-black video player apk free
-black video player apk pro
-black video player apk mod
-black video player apk latest version
-black video player apk no ads
-black video player apk offline
-black video player apk online
-black video player apk premium
-black video player apk full
-black video player apk cracked
-black video player apk hd
-black video player apk 4k
-black video player apk 2023
-black video player apk update
-black video player apk best
-black video player apk new
-black video player apk old
-black video player apk beta
-black video player apk review
-black video player apk features
-black video player apk install
-black video player apk uninstall
-black video player apk alternative
-black video player apk comparison
-black video player apk ranking
-black video player apk rating
-black video player apk feedback
-black video player apk support
-black video player apk help
-black video player apk guide
-black video player apk tutorial
-black video player apk tips
-black video player apk tricks
-black video player apk hacks
-black video player apk cheats
-black video player apk codes
-black video player apk coupons
-black video player apk deals
-black video player apk discounts
-black video player apk offers
-black video player apk promotions
-black video player apk sales
-black video player apk free trial
-black video player apk subscription
-black video player apk license key
-black video player apk activation code
-
How to download and install Black Video Player APK?
-
Download the APK file from a trusted source
-
To download Black Video Player APK, you need to find a trusted source that offers the latest version of the app. You can use [this link] to download the APK file.
-
Enable unknown sources on your device settings
-
To install Black Video Player APK, you need to enable unknown sources on your device settings. This will allow you to install apps from sources other than the Google Play Store. To do this, go to Settings > Security > Unknown Sources and toggle it on.
-
Install the APK file and launch the app
-
After you have downloaded and enabled unknown sources, you can install the APK file. To do this, locate the APK file on your device and tap on it. Follow the instructions on the screen to complete the installation. Once the app is installed, you can launch it from your app drawer or home screen.
-
How to use Black Video Player APK?
-
Browse and select the media file you want to play
-
To use Black Video Player APK, you need to browse and select the media file you want to play. You can do this by tapping on the menu icon on the top left corner of the app and choosing the folder where your media files are stored. You can also tap on the network icon on the top right corner of the app and enter the URL of the online video or audio stream you want to play.
-
Adjust the settings and preferences according to your needs
-
Once you have selected the media file you want to play, you can adjust the settings and preferences according to your needs. You can do this by tapping on the gear icon on the top right corner of the app and choosing the options you want. You can change the playback speed, aspect ratio, subtitle settings, equalizer settings, video enhancer settings, etc.
-
Enjoy your media experience with Black Video Player APK
-
After you have adjusted the settings and preferences, you can enjoy your media experience with Black Video Player APK. You can use the gesture controls to control your playback, or use the buttons on the bottom of the screen. You can also switch between portrait and landscape mode by rotating your device.
-
Why choose Black Video Player APK?
-
Benefits of Black Video Player APK
-
Simple and elegant user interface
-
Black Video Player APK has a simple and elegant user interface that makes it easy to use and navigate. It has a black theme that is pleasing to the eye and reduces eye strain. It also has a minimalistic design that focuses on your media content.
-
Smooth and stable performance
-
Black Video Player APK has a smooth and stable performance that ensures a high-quality media experience. It has a powerful engine that can handle any video or audio format without lagging or crashing. It also has a low battery consumption that saves your device's power.
-
Free and ad-free app
-
Black Video Player APK is a free and ad-free app that does not require any registration or subscription. You can download and use it without any limitations or interruptions. You can also enjoy all its features and benefits without paying anything.
-
Compatible with most Android devices
-
Black Video Player APK is compatible with most Android devices that run on Android 5.0 or higher. It can work on any device size, from smartphones to tablets. It can also adapt to any screen resolution, from HD to 4K.
-
Conclusion
-
In conclusion, Black Video Player APK is a powerful and elegant media player for Android that can play any video or audio file on your device or online. It has a lot of features that enhance your media experience, such as gesture controls, subtitles, playback speed, aspect ratio, equalizer, video enhancer, etc. It also has a simple and elegant user interface, a smooth and stable performance, a free and ad-free app, and a compatibility with most Android devices. If you are looking for a media player that can meet all your needs, you should try Black Video Player APK.
FAQs
Q: Is Black Video Player APK safe to use?
A: Yes, Black Video Player APK is safe to use as long as you download it from a trusted source. It does not contain any malware or viruses that can harm your device or data.
Q: How can I update Black Video Player APK?
A: You can update Black Video Player APK by downloading the latest version of the app from [this link]. You can also check for updates within the app by tapping on the menu icon > About > Check for updates.
Q: How can I share my media files with others using Black Video Player APK?
A: You can share your media files with others using Black Video Player APK by tapping on the share icon on the bottom of the screen. You can choose the app or platform you want to share your media file with, such as WhatsApp, Facebook, Twitter, etc.
Q: How can I delete or uninstall Black Video Player APK?
A: You can delete or uninstall Black Video Player APK by going to your device settings > Apps > Black Video Player APK > Uninstall. You can also long-press the app icon on your home screen or app drawer and drag it to the uninstall option.
Q: How can I contact the developer of Black Video Player APK?
A: You can contact the developer of Black Video Player APK by tapping on the menu icon > About > Contact us. You can also send an email to [this address] or visit [this website].
401be4b1e0
-
-
\ No newline at end of file
diff --git a/spaces/20four60/Auto-GPT/Dockerfile b/spaces/20four60/Auto-GPT/Dockerfile
deleted file mode 100644
index 29ec24bfb63cdbf2c92fc41c33e24b329aa6e1ca..0000000000000000000000000000000000000000
--- a/spaces/20four60/Auto-GPT/Dockerfile
+++ /dev/null
@@ -1,65 +0,0 @@
-FROM zenmldocker/zenml-server:latest
-
-ENV ZENML_ANALYTICS_OPT_IN=true
-ENV ZENML_SERVER_DEPLOYMENT_TYPE="hf_spaces"
-ENV ZENML_LOGGING_VERBOSITY=DEBUG
-
-################################################################################
-#
-# CONFIGURING YOUR ZENML HF SPACES SERVER
-# ---------------------------------------
-# By default this space is not persistent. All ZenML metadata is stored in
-# localstorage in a SQLite database. If you would like to make your storage
-# persistent, use the appropriate environment variables below to configure the
-# image to use a MySQL-compatible database service that is reachable from the
-# container. See https://docs.zenml.io/getting-started/deploying-zenml/docker
-# for more information on how to configure these environment variables.
-
-# You can also configure the secrets store to use for your ZenML server. Be
-# sure to use Huggingface Spaces' 'Repository Secrets' feature to store any
-# secrets referenced here. See
-# https://huggingface.co/docs/hub/spaces-overview#managing-secrets for more
-# information on how to configure these environment variables.
-
-# ENV ZENML_DEFAULT_PROJECT_NAME=""
-# ENV ZENML_DEFAULT_USER_NAME=""
-# ENV ZENML_DEFAULT_USER_PASSWORD=""
-# ENV ZENML_STORE_URL=""
-# ENV ZENML_STORE_SSL_CA=""
-# ENV ZENML_STORE_SSL_CERT=""
-# ENV ZENML_STORE_SSL_KEY=""
-# ENV ZENML_STORE_SSL_VERIFY_SERVER_CERT=""
-
-# ENV ZENML_LOGGING_VERBOSITY=""
-
-# # SECRETS STORE CONFIGURATION
-# ENV ZENML_SECRETS_STORE_TYPE=""
-# ENV ZENML_SECRETS_STORE_ENCRYPTION_KEY=""
-# ENV ZENML_SECRETS_STORE_CLASS_PATH=""
-# ENV ZENML_JWT_SECRET_KEY=""
-
-# # AWS Secrets Store Configuration
-# ENV ZENML_SECRETS_STORE_REGION_NAME=""
-# ENV ZENML_SECRETS_STORE_AWS_ACCESS_KEY_ID=""
-# ENV ZENML_SECRETS_STORE_AWS_SECRET_ACCESS_KEY=""
-# ENV ZENML_SECRETS_STORE_AWS_SESSION_TOKEN=""
-# ENV ZENML_SECRETS_STORE_SECRET_LIST_REFRESH_TIMEOUT=""
-
-# # GCP Secrets Store Configuration
-# ENV ZENML_SECRETS_STORE_PROJECT_ID=""
-# ENV GOOGLE_APPLICATION_CREDENTIALS=""
-
-# # Azure Secrets Store Configuration
-# ENV ZENML_SECRETS_STORE_KEY_VAULT_NAME=""
-# ENV ZENML_SECRETS_STORE_AZURE_CLIENT_ID=""
-# ENV ZENML_SECRETS_STORE_AZURE_CLIENT_SECRET=""
-# ENV ZENML_SECRETS_STORE_AZURE_TENANT_ID=""
-
-# # Hashicorp Secrets Store Configuration
-# ENV ZENML_SECRETS_STORE_VAULT_ADDR=""
-# ENV ZENML_SECRETS_STORE_VAULT_TOKEN=""
-# ENV ZENML_SECRETS_STORE_VAULT_NAMESPACE=""
-# ENV ZENML_SECRETS_STORE_MAX_VERSIONS=""
-
-ENTRYPOINT ["uvicorn", "zenml.zen_server.zen_server_api:app", "--log-level", "debug"]
-CMD ["--proxy-headers", "--port", "8080", "--host", "0.0.0.0"]
diff --git a/spaces/AIFILMS/generate_human_motion/pyrender/pyrender/camera.py b/spaces/AIFILMS/generate_human_motion/pyrender/pyrender/camera.py
deleted file mode 100644
index e019358039033c3a372c990ebad3151258c3651d..0000000000000000000000000000000000000000
--- a/spaces/AIFILMS/generate_human_motion/pyrender/pyrender/camera.py
+++ /dev/null
@@ -1,437 +0,0 @@
-"""Virtual cameras compliant with the glTF 2.0 specification as described at
-https://github.com/KhronosGroup/glTF/tree/master/specification/2.0#reference-camera
-
-Author: Matthew Matl
-"""
-import abc
-import numpy as np
-import six
-import sys
-
-from .constants import DEFAULT_Z_NEAR, DEFAULT_Z_FAR
-
-
-@six.add_metaclass(abc.ABCMeta)
-class Camera(object):
- """Abstract base class for all cameras.
-
- Note
- ----
- Camera poses are specified in the OpenGL format,
- where the z axis points away from the view direction and the
- x and y axes point to the right and up in the image plane, respectively.
-
- Parameters
- ----------
- znear : float
- The floating-point distance to the near clipping plane.
- zfar : float
- The floating-point distance to the far clipping plane.
- ``zfar`` must be greater than ``znear``.
- name : str, optional
- The user-defined name of this object.
- """
-
- def __init__(self,
- znear=DEFAULT_Z_NEAR,
- zfar=DEFAULT_Z_FAR,
- name=None):
- self.name = name
- self.znear = znear
- self.zfar = zfar
-
- @property
- def name(self):
- """str : The user-defined name of this object.
- """
- return self._name
-
- @name.setter
- def name(self, value):
- if value is not None:
- value = str(value)
- self._name = value
-
- @property
- def znear(self):
- """float : The distance to the near clipping plane.
- """
- return self._znear
-
- @znear.setter
- def znear(self, value):
- value = float(value)
- if value < 0:
- raise ValueError('z-near must be >= 0.0')
- self._znear = value
-
- @property
- def zfar(self):
- """float : The distance to the far clipping plane.
- """
- return self._zfar
-
- @zfar.setter
- def zfar(self, value):
- value = float(value)
- if value <= 0 or value <= self.znear:
- raise ValueError('zfar must be >0 and >znear')
- self._zfar = value
-
- @abc.abstractmethod
- def get_projection_matrix(self, width=None, height=None):
- """Return the OpenGL projection matrix for this camera.
-
- Parameters
- ----------
- width : int
- Width of the current viewport, in pixels.
- height : int
- Height of the current viewport, in pixels.
- """
- pass
-
-
-class PerspectiveCamera(Camera):
-
- """A perspective camera for perspective projection.
-
- Parameters
- ----------
- yfov : float
- The floating-point vertical field of view in radians.
- znear : float
- The floating-point distance to the near clipping plane.
- If not specified, defaults to 0.05.
- zfar : float, optional
- The floating-point distance to the far clipping plane.
- ``zfar`` must be greater than ``znear``.
- If None, the camera uses an infinite projection matrix.
- aspectRatio : float, optional
- The floating-point aspect ratio of the field of view.
- If not specified, the camera uses the viewport's aspect ratio.
- name : str, optional
- The user-defined name of this object.
- """
-
- def __init__(self,
- yfov,
- znear=DEFAULT_Z_NEAR,
- zfar=None,
- aspectRatio=None,
- name=None):
- super(PerspectiveCamera, self).__init__(
- znear=znear,
- zfar=zfar,
- name=name,
- )
-
- self.yfov = yfov
- self.aspectRatio = aspectRatio
-
- @property
- def yfov(self):
- """float : The vertical field of view in radians.
- """
- return self._yfov
-
- @yfov.setter
- def yfov(self, value):
- value = float(value)
- if value <= 0.0:
- raise ValueError('Field of view must be positive')
- self._yfov = value
-
- @property
- def zfar(self):
- """float : The distance to the far clipping plane.
- """
- return self._zfar
-
- @zfar.setter
- def zfar(self, value):
- if value is not None:
- value = float(value)
- if value <= 0 or value <= self.znear:
- raise ValueError('zfar must be >0 and >znear')
- self._zfar = value
-
- @property
- def aspectRatio(self):
- """float : The ratio of the width to the height of the field of view.
- """
- return self._aspectRatio
-
- @aspectRatio.setter
- def aspectRatio(self, value):
- if value is not None:
- value = float(value)
- if value <= 0.0:
- raise ValueError('Aspect ratio must be positive')
- self._aspectRatio = value
-
- def get_projection_matrix(self, width=None, height=None):
- """Return the OpenGL projection matrix for this camera.
-
- Parameters
- ----------
- width : int
- Width of the current viewport, in pixels.
- height : int
- Height of the current viewport, in pixels.
- """
- aspect_ratio = self.aspectRatio
- if aspect_ratio is None:
- if width is None or height is None:
- raise ValueError('Aspect ratio of camera must be defined')
- aspect_ratio = float(width) / float(height)
-
- a = aspect_ratio
- t = np.tan(self.yfov / 2.0)
- n = self.znear
- f = self.zfar
-
- P = np.zeros((4,4))
- P[0][0] = 1.0 / (a * t)
- P[1][1] = 1.0 / t
- P[3][2] = -1.0
-
- if f is None:
- P[2][2] = -1.0
- P[2][3] = -2.0 * n
- else:
- P[2][2] = (f + n) / (n - f)
- P[2][3] = (2 * f * n) / (n - f)
-
- return P
-
-
-class OrthographicCamera(Camera):
- """An orthographic camera for orthographic projection.
-
- Parameters
- ----------
- xmag : float
- The floating-point horizontal magnification of the view.
- ymag : float
- The floating-point vertical magnification of the view.
- znear : float
- The floating-point distance to the near clipping plane.
- If not specified, defaults to 0.05.
- zfar : float
- The floating-point distance to the far clipping plane.
- ``zfar`` must be greater than ``znear``.
- If not specified, defaults to 100.0.
- name : str, optional
- The user-defined name of this object.
- """
-
- def __init__(self,
- xmag,
- ymag,
- znear=DEFAULT_Z_NEAR,
- zfar=DEFAULT_Z_FAR,
- name=None):
- super(OrthographicCamera, self).__init__(
- znear=znear,
- zfar=zfar,
- name=name,
- )
-
- self.xmag = xmag
- self.ymag = ymag
-
- @property
- def xmag(self):
- """float : The horizontal magnification of the view.
- """
- return self._xmag
-
- @xmag.setter
- def xmag(self, value):
- value = float(value)
- if value <= 0.0:
- raise ValueError('X magnification must be positive')
- self._xmag = value
-
- @property
- def ymag(self):
- """float : The vertical magnification of the view.
- """
- return self._ymag
-
- @ymag.setter
- def ymag(self, value):
- value = float(value)
- if value <= 0.0:
- raise ValueError('Y magnification must be positive')
- self._ymag = value
-
- @property
- def znear(self):
- """float : The distance to the near clipping plane.
- """
- return self._znear
-
- @znear.setter
- def znear(self, value):
- value = float(value)
- if value <= 0:
- raise ValueError('z-near must be > 0.0')
- self._znear = value
-
- def get_projection_matrix(self, width=None, height=None):
- """Return the OpenGL projection matrix for this camera.
-
- Parameters
- ----------
- width : int
- Width of the current viewport, in pixels.
- Unused in this function.
- height : int
- Height of the current viewport, in pixels.
- Unused in this function.
- """
- xmag = self.xmag
- ymag = self.ymag
-
- # If screen width/height defined, rescale xmag
- if width is not None and height is not None:
- xmag = width / height * ymag
-
- n = self.znear
- f = self.zfar
- P = np.zeros((4,4))
- P[0][0] = 1.0 / xmag
- P[1][1] = 1.0 / ymag
- P[2][2] = 2.0 / (n - f)
- P[2][3] = (f + n) / (n - f)
- P[3][3] = 1.0
- return P
-
-
-class IntrinsicsCamera(Camera):
- """A perspective camera with custom intrinsics.
-
- Parameters
- ----------
- fx : float
- X-axis focal length in pixels.
- fy : float
- Y-axis focal length in pixels.
- cx : float
- X-axis optical center in pixels.
- cy : float
- Y-axis optical center in pixels.
- znear : float
- The floating-point distance to the near clipping plane.
- If not specified, defaults to 0.05.
- zfar : float
- The floating-point distance to the far clipping plane.
- ``zfar`` must be greater than ``znear``.
- If not specified, defaults to 100.0.
- name : str, optional
- The user-defined name of this object.
- """
-
- def __init__(self,
- fx,
- fy,
- cx,
- cy,
- znear=DEFAULT_Z_NEAR,
- zfar=DEFAULT_Z_FAR,
- name=None):
- super(IntrinsicsCamera, self).__init__(
- znear=znear,
- zfar=zfar,
- name=name,
- )
-
- self.fx = fx
- self.fy = fy
- self.cx = cx
- self.cy = cy
-
- @property
- def fx(self):
- """float : X-axis focal length in meters.
- """
- return self._fx
-
- @fx.setter
- def fx(self, value):
- self._fx = float(value)
-
- @property
- def fy(self):
- """float : Y-axis focal length in meters.
- """
- return self._fy
-
- @fy.setter
- def fy(self, value):
- self._fy = float(value)
-
- @property
- def cx(self):
- """float : X-axis optical center in pixels.
- """
- return self._cx
-
- @cx.setter
- def cx(self, value):
- self._cx = float(value)
-
- @property
- def cy(self):
- """float : Y-axis optical center in pixels.
- """
- return self._cy
-
- @cy.setter
- def cy(self, value):
- self._cy = float(value)
-
- def get_projection_matrix(self, width, height):
- """Return the OpenGL projection matrix for this camera.
-
- Parameters
- ----------
- width : int
- Width of the current viewport, in pixels.
- height : int
- Height of the current viewport, in pixels.
- """
- width = float(width)
- height = float(height)
-
- cx, cy = self.cx, self.cy
- fx, fy = self.fx, self.fy
- if sys.platform == 'darwin':
- cx = self.cx * 2.0
- cy = self.cy * 2.0
- fx = self.fx * 2.0
- fy = self.fy * 2.0
-
- P = np.zeros((4,4))
- P[0][0] = 2.0 * fx / width
- P[1][1] = 2.0 * fy / height
- P[0][2] = 1.0 - 2.0 * cx / width
- P[1][2] = 2.0 * cy / height - 1.0
- P[3][2] = -1.0
-
- n = self.znear
- f = self.zfar
- if f is None:
- P[2][2] = -1.0
- P[2][3] = -2.0 * n
- else:
- P[2][2] = (f + n) / (n - f)
- P[2][3] = (2 * f * n) / (n - f)
-
- return P
-
-
-__all__ = ['Camera', 'PerspectiveCamera', 'OrthographicCamera',
- 'IntrinsicsCamera']
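For reference, the camera classes defined in the module above are used roughly as follows. This is a minimal sketch; the field of view, focal lengths, and viewport size are illustrative assumptions, not values taken from the Space.

```python
# Minimal usage sketch of the pyrender camera classes defined above.
import numpy as np
from pyrender.camera import PerspectiveCamera, OrthographicCamera, IntrinsicsCamera

# Perspective camera: aspectRatio falls back to the viewport when not given.
persp = PerspectiveCamera(yfov=np.pi / 3.0, znear=0.05)
P = persp.get_projection_matrix(width=640, height=480)
print(P.shape)  # (4, 4)

# Orthographic camera with unit magnification.
ortho = OrthographicCamera(xmag=1.0, ymag=1.0)
P_ortho = ortho.get_projection_matrix(width=640, height=480)

# Intrinsics-based camera, e.g. from a calibrated 640x480 sensor (illustrative numbers).
intr = IntrinsicsCamera(fx=525.0, fy=525.0, cx=319.5, cy=239.5)
P_intr = intr.get_projection_matrix(width=640, height=480)
```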
diff --git a/spaces/AIGC-Audio/AudioGPT/text_to_speech/utils/audio/pitch_extractors.py b/spaces/AIGC-Audio/AudioGPT/text_to_speech/utils/audio/pitch_extractors.py
deleted file mode 100644
index 6b0e9e1300344e9c2e680e21dca79c20457df9d7..0000000000000000000000000000000000000000
--- a/spaces/AIGC-Audio/AudioGPT/text_to_speech/utils/audio/pitch_extractors.py
+++ /dev/null
@@ -1,85 +0,0 @@
-import numpy as np
-from text_to_speech.utils.audio.pitch.utils import denorm_f0, norm_f0, f0_to_coarse
-import parselmouth
-
-PITCH_EXTRACTOR = {}
-
-
-def register_pitch_extractor(name):
- def register_pitch_extractor_(cls):
- PITCH_EXTRACTOR[name] = cls
- return cls
-
- return register_pitch_extractor_
-
-
-def get_pitch_extractor(name):
- return PITCH_EXTRACTOR[name]
-
-
-def extract_pitch_simple(wav):
- from text_to_speech.utils.commons.hparams import hparams
- return extract_pitch(hparams['pitch_extractor'], wav,
- hparams['hop_size'], hparams['audio_sample_rate'],
- f0_min=hparams['f0_min'], f0_max=hparams['f0_max'])
-
-
-def extract_pitch(extractor_name, wav_data, hop_size, audio_sample_rate, f0_min=75, f0_max=800, **kwargs):
- return get_pitch_extractor(extractor_name)(wav_data, hop_size, audio_sample_rate, f0_min, f0_max, **kwargs)
-
-
-@register_pitch_extractor('parselmouth')
-def parselmouth_pitch(wav_data, hop_size, audio_sample_rate, f0_min, f0_max,
- voicing_threshold=0.6, *args, **kwargs):
- import parselmouth
- time_step = hop_size / audio_sample_rate * 1000
- n_mel_frames = int(len(wav_data) // hop_size)
- f0_pm = parselmouth.Sound(wav_data, audio_sample_rate).to_pitch_ac(
- time_step=time_step / 1000, voicing_threshold=voicing_threshold,
- pitch_floor=f0_min, pitch_ceiling=f0_max).selected_array['frequency']
- pad_size = (n_mel_frames - len(f0_pm) + 1) // 2
- f0 = np.pad(f0_pm, [[pad_size, n_mel_frames - len(f0_pm) - pad_size]], mode='constant')
- return f0
-
-
-def get_pitch(wav_data, mel, hparams):
- """
- :param wav_data: [T]
- :param mel: [T, 80]
- :param hparams:
- :return:
- """
- time_step = hparams['hop_size'] / hparams['audio_sample_rate'] * 1000
- f0_min = 80
- f0_max = 750
-
- if hparams['pitch_extractor'] in ('harvest', 'dio'):
- import pyworld as pw  # imported lazily so pyworld is only required for these extractors
- if hparams['pitch_extractor'] == 'harvest':
- f0, t = pw.harvest(wav_data.astype(np.double), hparams['audio_sample_rate'],
- frame_period=hparams['hop_size'] / hparams['audio_sample_rate'] * 1000)
- elif hparams['pitch_extractor'] == 'dio':
- _f0, t = pw.dio(wav_data.astype(np.double), hparams['audio_sample_rate'],
- frame_period=hparams['hop_size'] / hparams['audio_sample_rate'] * 1000)
- f0 = pw.stonemask(wav_data.astype(np.double), _f0, t, hparams['audio_sample_rate'])  # pitch refinement
- elif hparams['pitch_extractor'] == 'parselmouth':
- if hparams['hop_size'] == 128:
- pad_size = 4
- elif hparams['hop_size'] == 256:
- pad_size = 2
- else:
- assert False
- f0 = parselmouth.Sound(wav_data, hparams['audio_sample_rate']).to_pitch_ac(
- time_step=time_step / 1000, voicing_threshold=0.6,
- pitch_floor=f0_min, pitch_ceiling=f0_max).selected_array['frequency']
- lpad = pad_size * 2
- rpad = len(mel) - len(f0) - lpad
- f0 = np.pad(f0, [[lpad, rpad]], mode='constant')
-
- # mel and f0 are extracted by two different libraries, so make sure their lengths match
- delta_l = len(mel) - len(f0)
- assert np.abs(delta_l) <= 8
- if delta_l > 0:
- f0 = np.concatenate([f0, [f0[-1]] * delta_l], 0)
- f0 = f0[:len(mel)]
- pitch_coarse = f0_to_coarse(f0)
- return f0, pitch_coarse
\ No newline at end of file
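The `register_pitch_extractor` decorator above implements a simple name-to-function registry, so new extractors can be plugged in without touching `extract_pitch`. A hedged sketch of registering a dummy extractor and calling it; the 'constant' name, sample rate, and hop size are invented for illustration:

```python
# Hypothetical example of the registry above; 'constant' is not a real extractor.
import numpy as np

@register_pitch_extractor('constant')
def constant_pitch(wav_data, hop_size, audio_sample_rate, f0_min, f0_max, **kwargs):
    # Flat 220 Hz contour, one value per hop-sized frame.
    n_frames = int(len(wav_data) // hop_size)
    return np.full(n_frames, 220.0)

wav = np.random.randn(22050).astype(np.float32)  # 1 second of noise at 22.05 kHz
f0 = extract_pitch('constant', wav, hop_size=256, audio_sample_rate=22050)
print(f0.shape)  # (86,) frames
```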
diff --git a/spaces/ASJMO/freegpt/client/css/dropdown.css b/spaces/ASJMO/freegpt/client/css/dropdown.css
deleted file mode 100644
index 302e911e84d171c55384732f759a79ce195abca5..0000000000000000000000000000000000000000
--- a/spaces/ASJMO/freegpt/client/css/dropdown.css
+++ /dev/null
@@ -1,10 +0,0 @@
-.dropdown {
- border: 1px solid var(--conversations);
-}
-
-@media screen and (max-width: 990px) {
- .dropdown {
- padding: 4px 8px;
- font-size: 0.75rem;
- }
-}
diff --git a/spaces/Adapter/T2I-Adapter/ldm/modules/extra_condition/utils.py b/spaces/Adapter/T2I-Adapter/ldm/modules/extra_condition/utils.py
deleted file mode 100644
index af6bcb9e1116a431a39579f4bbdde3a9e868e0b4..0000000000000000000000000000000000000000
--- a/spaces/Adapter/T2I-Adapter/ldm/modules/extra_condition/utils.py
+++ /dev/null
@@ -1,72 +0,0 @@
-# -*- coding: utf-8 -*-
-import cv2
-import numpy as np
-
-skeleton = [[15, 13], [13, 11], [16, 14], [14, 12], [11, 12], [5, 11], [6, 12], [5, 6], [5, 7], [6, 8], [7, 9], [8, 10],
- [1, 2], [0, 1], [0, 2], [1, 3], [2, 4], [3, 5], [4, 6]]
-
-pose_kpt_color = [[51, 153, 255], [51, 153, 255], [51, 153, 255], [51, 153, 255], [51, 153, 255], [0, 255, 0],
- [255, 128, 0], [0, 255, 0], [255, 128, 0], [0, 255, 0], [255, 128, 0], [0, 255, 0], [255, 128, 0],
- [0, 255, 0], [255, 128, 0], [0, 255, 0], [255, 128, 0]]
-
-pose_link_color = [[0, 255, 0], [0, 255, 0], [255, 128, 0], [255, 128, 0],
- [51, 153, 255], [51, 153, 255], [51, 153, 255], [51, 153, 255], [0, 255, 0], [255, 128, 0],
- [0, 255, 0], [255, 128, 0], [51, 153, 255], [51, 153, 255], [51, 153, 255], [51, 153, 255],
- [51, 153, 255], [51, 153, 255], [51, 153, 255]]
-
-
-def imshow_keypoints(img,
- pose_result,
- kpt_score_thr=0.1,
- radius=2,
- thickness=2):
- """Draw keypoints and links on an image.
-
- Args:
- img (ndarry): The image to draw poses on.
- pose_result (list[kpts]): The poses to draw. Each element kpts is
- a set of K keypoints as an Kx3 numpy.ndarray, where each
- keypoint is represented as x, y, score.
- kpt_score_thr (float, optional): Minimum score of keypoints
- to be shown. Default: 0.1.
- radius (int): Radius of the keypoint circles.
- thickness (int): Thickness of lines.
- """
-
- img_h, img_w, _ = img.shape
- img = np.zeros(img.shape)
-
- for idx, kpts in enumerate(pose_result):
- if idx > 1:
- continue
- kpts = kpts['keypoints']
- # print(kpts)
- kpts = np.array(kpts, copy=False)
-
- # draw each point on image
- assert len(pose_kpt_color) == len(kpts)
-
- for kid, kpt in enumerate(kpts):
- x_coord, y_coord, kpt_score = int(kpt[0]), int(kpt[1]), kpt[2]
-
- if kpt_score < kpt_score_thr or pose_kpt_color[kid] is None:
- # skip the point that should not be drawn
- continue
-
- color = tuple(int(c) for c in pose_kpt_color[kid])
- cv2.circle(img, (int(x_coord), int(y_coord)), radius, color, -1)
-
- # draw links
-
- for sk_id, sk in enumerate(skeleton):
- pos1 = (int(kpts[sk[0], 0]), int(kpts[sk[0], 1]))
- pos2 = (int(kpts[sk[1], 0]), int(kpts[sk[1], 1]))
-
- if (pos1[0] <= 0 or pos1[0] >= img_w or pos1[1] <= 0 or pos1[1] >= img_h or pos2[0] <= 0
- or pos2[0] >= img_w or pos2[1] <= 0 or pos2[1] >= img_h or kpts[sk[0], 2] < kpt_score_thr
- or kpts[sk[1], 2] < kpt_score_thr or pose_link_color[sk_id] is None):
- # skip the link that should not be drawn
- continue
- color = tuple(int(c) for c in pose_link_color[sk_id])
- cv2.line(img, pos1, pos2, color, thickness=thickness)
-
- return img
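The helper above draws onto a fresh black canvas of the same shape as the input image, so it returns a pose-only visualization. A minimal sketch of calling it with one synthetic 17-keypoint COCO-style pose; the image size and random coordinates are assumptions made for the example:

```python
# Minimal sketch: draw a single synthetic pose with imshow_keypoints above.
import numpy as np
import cv2

img = np.zeros((256, 256, 3), dtype=np.uint8)   # blank 256x256 canvas
xy = np.random.randint(20, 236, size=(17, 2))    # 17 COCO keypoint positions
scores = np.ones((17, 1))                        # all keypoints confidently detected
pose_result = [{'keypoints': np.concatenate([xy, scores], axis=1).astype(np.float32)}]

canvas = imshow_keypoints(img, pose_result, kpt_score_thr=0.1, radius=2, thickness=2)
cv2.imwrite('pose_preview.png', canvas.astype(np.uint8))
```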
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/basesizer/GetChildWidth.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/basesizer/GetChildWidth.js
deleted file mode 100644
index 0cc8c826c1dcec66e4c2684b245a1ee4376b88cf..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/basesizer/GetChildWidth.js
+++ /dev/null
@@ -1,18 +0,0 @@
-import { GetDisplayWidth } from '../../../plugins/utils/size/GetDisplaySize.js';
-
-var GetChildWidth = function (child) {
- var childWidth;
- if (child.isRexSizer) { // Sizer game object
- childWidth = Math.max(child.minWidth, child.childrenWidth);
- } else { // Normal game object
- if (child.minWidth !== undefined) { // Force minWidth
- childWidth = child.minWidth;
- } else {
- childWidth = GetDisplayWidth(child);
- }
- }
-
- return childWidth;
-}
-
-export default GetChildWidth;
\ No newline at end of file
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/menu/methods/CollapseSubMenu.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/menu/methods/CollapseSubMenu.js
deleted file mode 100644
index c6271fd41b423d6d63beac03f53c945bb9a98037..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/menu/methods/CollapseSubMenu.js
+++ /dev/null
@@ -1,12 +0,0 @@
-var CollapseSubMenu = function () {
- var subMenu = this.childrenMap.subMenu;
- if (subMenu === undefined) {
- return this;
- }
-
- this.childrenMap.subMenu = undefined;
- this.remove(subMenu);
- subMenu.collapse();
- return this;
-}
-export default CollapseSubMenu;
\ No newline at end of file
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/press/Press.d.ts b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/press/Press.d.ts
deleted file mode 100644
index b42e7eb6b330b1b624772512bf4a657e0e005ee4..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/press/Press.d.ts
+++ /dev/null
@@ -1,2 +0,0 @@
-import { Press } from '../../../plugins/gestures';
-export default Press;
\ No newline at end of file
diff --git a/spaces/AkashKhamkar/Job_Search_Engine/loader.py b/spaces/AkashKhamkar/Job_Search_Engine/loader.py
deleted file mode 100644
index 12d2dad83751e3f2624e67def3a83c6d69e8b976..0000000000000000000000000000000000000000
--- a/spaces/AkashKhamkar/Job_Search_Engine/loader.py
+++ /dev/null
@@ -1,11 +0,0 @@
-from sentence_transformers import SentenceTransformer, CrossEncoder, util
-import pandas as pd
-import pickle
-
-bi_encoder = SentenceTransformer("multi-qa-MiniLM-L6-cos-v1")
-cross_encoder = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
-df = pd.read_csv('job_corpus_dataframe.csv')
-pickle_in = open("job_corpus.pickle","rb")
-job_corpus = pickle.load(pickle_in)
-pickle_in = open("job_corpus_encoded.pickle","rb")
-job_corpus_ecoded = pickle.load(pickle_in)
\ No newline at end of file
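The loader above only prepares the models, dataframe, and pickled corpus; the actual search logic lives elsewhere in the Space. As a hedged sketch of the usual retrieve-then-rerank flow with these objects, assuming `job_corpus` is a list of job texts aligned with the dataframe rows and `job_corpus_ecoded` holds their bi-encoder embeddings (the query and `top_k` are invented for illustration):

```python
# Hypothetical retrieve-and-rerank step using the objects loaded above.
query = "data engineer with Spark experience"  # example query, not from the Space

q_emb = bi_encoder.encode(query, convert_to_tensor=True)
hits = util.semantic_search(q_emb, job_corpus_ecoded, top_k=20)[0]   # dense retrieval

pairs = [[query, job_corpus[hit['corpus_id']]] for hit in hits]       # candidate pairs
scores = cross_encoder.predict(pairs)                                 # rerank with cross-encoder

best = hits[int(scores.argmax())]
print(df.iloc[best['corpus_id']])
```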
diff --git a/spaces/Alpaca233/SadTalker/src/facerender/modules/make_animation.py b/spaces/Alpaca233/SadTalker/src/facerender/modules/make_animation.py
deleted file mode 100644
index 3360c53501a064f35d7db21a5361f89aa9658b42..0000000000000000000000000000000000000000
--- a/spaces/Alpaca233/SadTalker/src/facerender/modules/make_animation.py
+++ /dev/null
@@ -1,170 +0,0 @@
-from scipy.spatial import ConvexHull
-import torch
-import torch.nn.functional as F
-import numpy as np
-from tqdm import tqdm
-
-def normalize_kp(kp_source, kp_driving, kp_driving_initial, adapt_movement_scale=False,
- use_relative_movement=False, use_relative_jacobian=False):
- if adapt_movement_scale:
- source_area = ConvexHull(kp_source['value'][0].data.cpu().numpy()).volume
- driving_area = ConvexHull(kp_driving_initial['value'][0].data.cpu().numpy()).volume
- adapt_movement_scale = np.sqrt(source_area) / np.sqrt(driving_area)
- else:
- adapt_movement_scale = 1
-
- kp_new = {k: v for k, v in kp_driving.items()}
-
- if use_relative_movement:
- kp_value_diff = (kp_driving['value'] - kp_driving_initial['value'])
- kp_value_diff *= adapt_movement_scale
- kp_new['value'] = kp_value_diff + kp_source['value']
-
- if use_relative_jacobian:
- jacobian_diff = torch.matmul(kp_driving['jacobian'], torch.inverse(kp_driving_initial['jacobian']))
- kp_new['jacobian'] = torch.matmul(jacobian_diff, kp_source['jacobian'])
-
- return kp_new
-
-def headpose_pred_to_degree(pred):
- device = pred.device
- idx_tensor = [idx for idx in range(66)]
- idx_tensor = torch.FloatTensor(idx_tensor).type_as(pred).to(device)
- pred = F.softmax(pred)
- degree = torch.sum(pred*idx_tensor, 1) * 3 - 99
- return degree
-
-def get_rotation_matrix(yaw, pitch, roll):
- yaw = yaw / 180 * 3.14
- pitch = pitch / 180 * 3.14
- roll = roll / 180 * 3.14
-
- roll = roll.unsqueeze(1)
- pitch = pitch.unsqueeze(1)
- yaw = yaw.unsqueeze(1)
-
- pitch_mat = torch.cat([torch.ones_like(pitch), torch.zeros_like(pitch), torch.zeros_like(pitch),
- torch.zeros_like(pitch), torch.cos(pitch), -torch.sin(pitch),
- torch.zeros_like(pitch), torch.sin(pitch), torch.cos(pitch)], dim=1)
- pitch_mat = pitch_mat.view(pitch_mat.shape[0], 3, 3)
-
- yaw_mat = torch.cat([torch.cos(yaw), torch.zeros_like(yaw), torch.sin(yaw),
- torch.zeros_like(yaw), torch.ones_like(yaw), torch.zeros_like(yaw),
- -torch.sin(yaw), torch.zeros_like(yaw), torch.cos(yaw)], dim=1)
- yaw_mat = yaw_mat.view(yaw_mat.shape[0], 3, 3)
-
- roll_mat = torch.cat([torch.cos(roll), -torch.sin(roll), torch.zeros_like(roll),
- torch.sin(roll), torch.cos(roll), torch.zeros_like(roll),
- torch.zeros_like(roll), torch.zeros_like(roll), torch.ones_like(roll)], dim=1)
- roll_mat = roll_mat.view(roll_mat.shape[0], 3, 3)
-
- rot_mat = torch.einsum('bij,bjk,bkm->bim', pitch_mat, yaw_mat, roll_mat)
-
- return rot_mat
-
-def keypoint_transformation(kp_canonical, he, wo_exp=False):
- kp = kp_canonical['value'] # (bs, k, 3)
- yaw, pitch, roll= he['yaw'], he['pitch'], he['roll']
- yaw = headpose_pred_to_degree(yaw)
- pitch = headpose_pred_to_degree(pitch)
- roll = headpose_pred_to_degree(roll)
-
- if 'yaw_in' in he:
- yaw = he['yaw_in']
- if 'pitch_in' in he:
- pitch = he['pitch_in']
- if 'roll_in' in he:
- roll = he['roll_in']
-
- rot_mat = get_rotation_matrix(yaw, pitch, roll) # (bs, 3, 3)
-
- t, exp = he['t'], he['exp']
- if wo_exp:
- exp = exp*0
-
- # keypoint rotation
- kp_rotated = torch.einsum('bmp,bkp->bkm', rot_mat, kp)
-
- # keypoint translation
- t[:, 0] = t[:, 0]*0
- t[:, 2] = t[:, 2]*0
- t = t.unsqueeze(1).repeat(1, kp.shape[1], 1)
- kp_t = kp_rotated + t
-
- # add expression deviation
- exp = exp.view(exp.shape[0], -1, 3)
- kp_transformed = kp_t + exp
-
- return {'value': kp_transformed}
-
-
-
-def make_animation(source_image, source_semantics, target_semantics,
- generator, kp_detector, he_estimator, mapping,
- yaw_c_seq=None, pitch_c_seq=None, roll_c_seq=None,
- use_exp=True, use_half=False):
- with torch.no_grad():
- predictions = []
-
- kp_canonical = kp_detector(source_image)
- he_source = mapping(source_semantics)
- kp_source = keypoint_transformation(kp_canonical, he_source)
-
- for frame_idx in tqdm(range(target_semantics.shape[1]), 'Face Renderer:'):
- # still check the dimension
- # print(target_semantics.shape, source_semantics.shape)
- target_semantics_frame = target_semantics[:, frame_idx]
- he_driving = mapping(target_semantics_frame)
- if yaw_c_seq is not None:
- he_driving['yaw_in'] = yaw_c_seq[:, frame_idx]
- if pitch_c_seq is not None:
- he_driving['pitch_in'] = pitch_c_seq[:, frame_idx]
- if roll_c_seq is not None:
- he_driving['roll_in'] = roll_c_seq[:, frame_idx]
-
- kp_driving = keypoint_transformation(kp_canonical, he_driving)
-
- kp_norm = kp_driving
- out = generator(source_image, kp_source=kp_source, kp_driving=kp_norm)
- '''
- source_image_new = out['prediction'].squeeze(1)
- kp_canonical_new = kp_detector(source_image_new)
- he_source_new = he_estimator(source_image_new)
- kp_source_new = keypoint_transformation(kp_canonical_new, he_source_new, wo_exp=True)
- kp_driving_new = keypoint_transformation(kp_canonical_new, he_driving, wo_exp=True)
- out = generator(source_image_new, kp_source=kp_source_new, kp_driving=kp_driving_new)
- '''
- predictions.append(out['prediction'])
- predictions_ts = torch.stack(predictions, dim=1)
- return predictions_ts
-
-class AnimateModel(torch.nn.Module):
- """
- Merge all generator related updates into single model for better multi-gpu usage
- """
-
- def __init__(self, generator, kp_extractor, mapping):
- super(AnimateModel, self).__init__()
- self.kp_extractor = kp_extractor
- self.generator = generator
- self.mapping = mapping
-
- self.kp_extractor.eval()
- self.generator.eval()
- self.mapping.eval()
-
- def forward(self, x):
-
- source_image = x['source_image']
- source_semantics = x['source_semantics']
- target_semantics = x['target_semantics']
- yaw_c_seq = x['yaw_c_seq']
- pitch_c_seq = x['pitch_c_seq']
- roll_c_seq = x['roll_c_seq']
-
- predictions_video = make_animation(source_image, source_semantics, target_semantics,
- self.generator, self.kp_extractor,
- self.mapping, use_exp = True,
- yaw_c_seq=yaw_c_seq, pitch_c_seq=pitch_c_seq, roll_c_seq=roll_c_seq)
-
- return predictions_video
\ No newline at end of file
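For reference, the Euler-angle construction in `get_rotation_matrix` above composes three elementary rotations (angles are converted from degrees to radians in the code). Written out, the matrices the einsum multiplies are:

```latex
R_{\mathrm{pitch}}(p)=\begin{pmatrix}1&0&0\\0&\cos p&-\sin p\\0&\sin p&\cos p\end{pmatrix},\quad
R_{\mathrm{yaw}}(y)=\begin{pmatrix}\cos y&0&\sin y\\0&1&0\\-\sin y&0&\cos y\end{pmatrix},\quad
R_{\mathrm{roll}}(r)=\begin{pmatrix}\cos r&-\sin r&0\\\sin r&\cos r&0\\0&0&1\end{pmatrix},\qquad
R = R_{\mathrm{pitch}}\,R_{\mathrm{yaw}}\,R_{\mathrm{roll}}
```

Each canonical keypoint k is then rotated as k' = R k (the `'bmp,bkp->bkm'` einsum in `keypoint_transformation`), translated by t with its x and z components zeroed, and offset by the reshaped expression deviation.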
diff --git a/spaces/Alpaca233/SadTalker/webui.sh b/spaces/Alpaca233/SadTalker/webui.sh
deleted file mode 100644
index 245750237954e140777c0bd20e6d26a1f9d1f74e..0000000000000000000000000000000000000000
--- a/spaces/Alpaca233/SadTalker/webui.sh
+++ /dev/null
@@ -1,140 +0,0 @@
-#!/usr/bin/env bash
-
-
-# If run from macOS, load defaults from webui-macos-env.sh
-if [[ "$OSTYPE" == "darwin"* ]]; then
- export TORCH_COMMAND="pip install torch==1.12.1 torchvision==0.13.1"
-fi
-
-# python3 executable
-if [[ -z "${python_cmd}" ]]
-then
- python_cmd="python3"
-fi
-
-# git executable
-if [[ -z "${GIT}" ]]
-then
- export GIT="git"
-fi
-
-# python3 venv without trailing slash (defaults to ${install_dir}/${clone_dir}/venv)
-if [[ -z "${venv_dir}" ]]
-then
- venv_dir="venv"
-fi
-
-if [[ -z "${LAUNCH_SCRIPT}" ]]
-then
- LAUNCH_SCRIPT="launcher.py"
-fi
-
-# whether this script may be run as root (1 allows it; the -f flag also enables it)
-can_run_as_root=1
-
-# read any command line flags to the webui.sh script
-while getopts "f" flag > /dev/null 2>&1
-do
- case ${flag} in
- f) can_run_as_root=1;;
- *) break;;
- esac
-done
-
-# Disable sentry logging
-export ERROR_REPORTING=FALSE
-
-# Do not reinstall existing pip packages on Debian/Ubuntu
-export PIP_IGNORE_INSTALLED=0
-
-# Pretty print
-delimiter="################################################################"
-
-printf "\n%s\n" "${delimiter}"
-printf "\e[1m\e[32mInstall script for SadTalker + Web UI\n"
-printf "\e[1m\e[34mTested on Debian 11 (Bullseye)\e[0m"
-printf "\n%s\n" "${delimiter}"
-
-# Do not run as root
-if [[ $(id -u) -eq 0 && can_run_as_root -eq 0 ]]
-then
- printf "\n%s\n" "${delimiter}"
- printf "\e[1m\e[31mERROR: This script must not be launched as root, aborting...\e[0m"
- printf "\n%s\n" "${delimiter}"
- exit 1
-else
- printf "\n%s\n" "${delimiter}"
- printf "Running on \e[1m\e[32m%s\e[0m user" "$(whoami)"
- printf "\n%s\n" "${delimiter}"
-fi
-
-if [[ -d .git ]]
-then
- printf "\n%s\n" "${delimiter}"
- printf "Repo already cloned, using it as install directory"
- printf "\n%s\n" "${delimiter}"
- install_dir="${PWD}/../"
- clone_dir="${PWD##*/}"
-fi
-
-# Check prerequisites
-gpu_info=$(lspci 2>/dev/null | grep VGA)
-case "$gpu_info" in
- *"Navi 1"*|*"Navi 2"*) export HSA_OVERRIDE_GFX_VERSION=10.3.0
- ;;
- *"Renoir"*) export HSA_OVERRIDE_GFX_VERSION=9.0.0
- printf "\n%s\n" "${delimiter}"
- printf "Experimental support for Renoir: make sure to have at least 4GB of VRAM and 10GB of RAM or enable cpu mode: --use-cpu all --no-half"
- printf "\n%s\n" "${delimiter}"
- ;;
- *)
- ;;
-esac
-if echo "$gpu_info" | grep -q "AMD" && [[ -z "${TORCH_COMMAND}" ]]
-then
- export TORCH_COMMAND="pip install torch torchvision --extra-index-url https://download.pytorch.org/whl/rocm5.2"
-fi
-
-for preq in "${GIT}" "${python_cmd}"
-do
- if ! hash "${preq}" &>/dev/null
- then
- printf "\n%s\n" "${delimiter}"
- printf "\e[1m\e[31mERROR: %s is not installed, aborting...\e[0m" "${preq}"
- printf "\n%s\n" "${delimiter}"
- exit 1
- fi
-done
-
-if ! "${python_cmd}" -c "import venv" &>/dev/null
-then
- printf "\n%s\n" "${delimiter}"
- printf "\e[1m\e[31mERROR: python3-venv is not installed, aborting...\e[0m"
- printf "\n%s\n" "${delimiter}"
- exit 1
-fi
-
-printf "\n%s\n" "${delimiter}"
-printf "Create and activate python venv"
-printf "\n%s\n" "${delimiter}"
-cd "${install_dir}"/"${clone_dir}"/ || { printf "\e[1m\e[31mERROR: Can't cd to %s/%s/, aborting...\e[0m" "${install_dir}" "${clone_dir}"; exit 1; }
-if [[ ! -d "${venv_dir}" ]]
-then
- "${python_cmd}" -m venv "${venv_dir}"
- first_launch=1
-fi
-# shellcheck source=/dev/null
-if [[ -f "${venv_dir}"/bin/activate ]]
-then
- source "${venv_dir}"/bin/activate
-else
- printf "\n%s\n" "${delimiter}"
- printf "\e[1m\e[31mERROR: Cannot activate python venv, aborting...\e[0m"
- printf "\n%s\n" "${delimiter}"
- exit 1
-fi
-
-printf "\n%s\n" "${delimiter}"
-printf "Launching launcher.py..."
-printf "\n%s\n" "${delimiter}"
-exec "${python_cmd}" "${LAUNCH_SCRIPT}" "$@"
\ No newline at end of file
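For orientation, a rough Python sketch of what the venv-bootstrap half of webui.sh above does: create the virtual environment if it is missing, then hand control to launcher.py through the venv's own interpreter. The function name and defaults are illustrative stand-ins, not part of SadTalker.

```python
# Hypothetical equivalent of the webui.sh venv bootstrap, for illustration only.
import os
import subprocess
import sys
from pathlib import Path

def bootstrap_and_launch(venv_dir: str = "venv", launch_script: str = "launcher.py") -> None:
    venv_path = Path(venv_dir)
    if not venv_path.exists():
        # Equivalent of: "${python_cmd}" -m venv "${venv_dir}"
        subprocess.run([sys.executable, "-m", "venv", str(venv_path)], check=True)

    # Instead of `source venv/bin/activate`, call the venv's interpreter directly.
    python_bin = venv_path / ("Scripts/python.exe" if os.name == "nt" else "bin/python")
    if not python_bin.exists():
        sys.exit("ERROR: Cannot activate python venv, aborting...")

    # Equivalent of: exec "${python_cmd}" "${LAUNCH_SCRIPT}" "$@"
    os.execv(str(python_bin), [str(python_bin), launch_script, *sys.argv[1:]])

if __name__ == "__main__":
    bootstrap_and_launch()
```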
diff --git a/spaces/Aman30577/imageTool1/README.md b/spaces/Aman30577/imageTool1/README.md
deleted file mode 100644
index 875ae261917b9752674a6ae00492c9540b53b55a..0000000000000000000000000000000000000000
--- a/spaces/Aman30577/imageTool1/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: ImageTool1
-emoji: 🚀
-colorFrom: green
-colorTo: pink
-sdk: gradio
-sdk_version: 3.37.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Amrrs/DragGan-Inversion/PTI/models/StyleCLIP/criteria/__init__.py b/spaces/Amrrs/DragGan-Inversion/PTI/models/StyleCLIP/criteria/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/Amrrs/DragGan-Inversion/stylegan_human/PP_HumanSeg/pretrained_model/download_pretrained_model.py b/spaces/Amrrs/DragGan-Inversion/stylegan_human/PP_HumanSeg/pretrained_model/download_pretrained_model.py
deleted file mode 100644
index 363dc74adda3a3cebc7f610dc51eccda35fcd083..0000000000000000000000000000000000000000
--- a/spaces/Amrrs/DragGan-Inversion/stylegan_human/PP_HumanSeg/pretrained_model/download_pretrained_model.py
+++ /dev/null
@@ -1,44 +0,0 @@
-# coding: utf8
-# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserve.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-from paddleseg.utils.download import download_file_and_uncompress
-import sys
-import os
-
-LOCAL_PATH = os.path.dirname(os.path.abspath(__file__))
-TEST_PATH = os.path.join(LOCAL_PATH, "../../../", "test")
-sys.path.append(TEST_PATH)
-
-
-model_urls = {
- "pphumanseg_lite_portrait_398x224":
- "https://paddleseg.bj.bcebos.com/dygraph/ppseg/ppseg_lite_portrait_398x224.tar.gz",
- "deeplabv3p_resnet50_os8_humanseg_512x512_100k":
- "https://paddleseg.bj.bcebos.com/dygraph/humanseg/train/deeplabv3p_resnet50_os8_humanseg_512x512_100k.zip",
- "fcn_hrnetw18_small_v1_humanseg_192x192":
- "https://paddleseg.bj.bcebos.com/dygraph/humanseg/train/fcn_hrnetw18_small_v1_humanseg_192x192.zip",
- "pphumanseg_lite_generic_human_192x192":
- "https://paddleseg.bj.bcebos.com/dygraph/humanseg/train/pphumanseg_lite_generic_192x192.zip",
-}
-
-if __name__ == "__main__":
- for model_name, url in model_urls.items():
- download_file_and_uncompress(
- url=url,
- savepath=LOCAL_PATH,
- extrapath=LOCAL_PATH,
- extraname=model_name)
-
- print("Pretrained model download success!")
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/utils/print_env.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/utils/print_env.py
deleted file mode 100644
index 88cb674bf31ace69122b925c0b31eddf812fcdb4..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/utils/print_env.py
+++ /dev/null
@@ -1,48 +0,0 @@
-#!/usr/bin/env python3
-
-# coding=utf-8
-# Copyright 2023 The HuggingFace Inc. team.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-# this script dumps information about the environment
-
-import os
-import platform
-import sys
-
-
-os.environ["TF_CPP_MIN_LOG_LEVEL"] = "3"
-
-print("Python version:", sys.version)
-
-print("OS platform:", platform.platform())
-print("OS architecture:", platform.machine())
-
-try:
- import torch
-
- print("Torch version:", torch.__version__)
- print("Cuda available:", torch.cuda.is_available())
- print("Cuda version:", torch.version.cuda)
- print("CuDNN version:", torch.backends.cudnn.version())
- print("Number of GPUs available:", torch.cuda.device_count())
-except ImportError:
- print("Torch version:", None)
-
-try:
- import transformers
-
- print("transformers version:", transformers.__version__)
-except ImportError:
- print("transformers version:", None)
diff --git a/spaces/Andy1621/uniformer_image_detection/configs/resnest/faster_rcnn_s101_fpn_syncbn-backbone+head_mstrain-range_1x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/resnest/faster_rcnn_s101_fpn_syncbn-backbone+head_mstrain-range_1x_coco.py
deleted file mode 100644
index 1915ab1b4f013efacaedcdae08e93176cfe3bd55..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/configs/resnest/faster_rcnn_s101_fpn_syncbn-backbone+head_mstrain-range_1x_coco.py
+++ /dev/null
@@ -1,4 +0,0 @@
-_base_ = './faster_rcnn_s50_fpn_syncbn-backbone+head_mstrain-range_1x_coco.py'
-model = dict(
- pretrained='open-mmlab://resnest101',
- backbone=dict(stem_channels=128, depth=101))
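The deleted config above is only four lines because MMDetection resolves `_base_` inheritance before building the model: keys named here override the s50 base file, and everything else is inherited unchanged. A minimal sketch of that recursive merge on placeholder dicts (the base values below are invented for illustration, not the real s50 settings):

```python
# Illustrative recursive dict merge, mimicking how _base_ config overrides behave.
def merge_cfg(base: dict, override: dict) -> dict:
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = merge_cfg(merged[key], value)  # nested dicts merge key by key
        else:
            merged[key] = value  # scalar values simply replace the base value
    return merged

# Placeholder stand-in for the s50 base config (not the actual file contents).
base_model = dict(pretrained='open-mmlab://resnest50',
                  backbone=dict(type='ResNeSt', stem_channels=64, depth=50))
override = dict(pretrained='open-mmlab://resnest101',
                backbone=dict(stem_channels=128, depth=101))

merged = merge_cfg(base_model, override)
print(merged['backbone'])  # keeps type='ResNeSt', picks up stem_channels=128 and depth=101
```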
diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/_base_/models/fcn_unet_s5-d16.py b/spaces/Andy1621/uniformer_image_segmentation/configs/_base_/models/fcn_unet_s5-d16.py
deleted file mode 100644
index a33e7972877f902d0e7d18401ca675e3e4e60a18..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_segmentation/configs/_base_/models/fcn_unet_s5-d16.py
+++ /dev/null
@@ -1,51 +0,0 @@
-# model settings
-norm_cfg = dict(type='SyncBN', requires_grad=True)
-model = dict(
- type='EncoderDecoder',
- pretrained=None,
- backbone=dict(
- type='UNet',
- in_channels=3,
- base_channels=64,
- num_stages=5,
- strides=(1, 1, 1, 1, 1),
- enc_num_convs=(2, 2, 2, 2, 2),
- dec_num_convs=(2, 2, 2, 2),
- downsamples=(True, True, True, True),
- enc_dilations=(1, 1, 1, 1, 1),
- dec_dilations=(1, 1, 1, 1),
- with_cp=False,
- conv_cfg=None,
- norm_cfg=norm_cfg,
- act_cfg=dict(type='ReLU'),
- upsample_cfg=dict(type='InterpConv'),
- norm_eval=False),
- decode_head=dict(
- type='FCNHead',
- in_channels=64,
- in_index=4,
- channels=64,
- num_convs=1,
- concat_input=False,
- dropout_ratio=0.1,
- num_classes=2,
- norm_cfg=norm_cfg,
- align_corners=False,
- loss_decode=dict(
- type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)),
- auxiliary_head=dict(
- type='FCNHead',
- in_channels=128,
- in_index=3,
- channels=64,
- num_convs=1,
- concat_input=False,
- dropout_ratio=0.1,
- num_classes=2,
- norm_cfg=norm_cfg,
- align_corners=False,
- loss_decode=dict(
- type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)),
- # model training and testing settings
- train_cfg=dict(),
- test_cfg=dict(mode='slide', crop_size=256, stride=170))
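A quick arithmetic sketch of why `decode_head` reads 64 channels at `in_index=4` while `auxiliary_head` reads 128 at `in_index=3`, assuming the mmseg UNet backbone returns the deepest encoder feature first and then its decoder outputs from coarse to fine (that ordering is an assumption about the backbone, not something stated in this config):

```python
# Channel widths for the 5-stage UNet configured above (base_channels=64).
base_channels = 64
num_stages = 5

# Encoder widths double at each of the four downsamples: 64, 128, 256, 512, 1024.
encoder_channels = [base_channels * 2 ** i for i in range(num_stages)]

# Assumed backbone output order: deepest feature, then decoder outputs coarse-to-fine.
output_channels = encoder_channels[::-1]  # [1024, 512, 256, 128, 64]

print(output_channels[3])  # in_index=3 -> 128, matches auxiliary_head in_channels
print(output_channels[4])  # in_index=4 -> 64,  matches decode_head in_channels
```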
diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/utils/version_utils.py b/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/utils/version_utils.py
deleted file mode 100644
index 963c45a2e8a86a88413ab6c18c22481fb9831985..0000000000000000000000000000000000000000
--- a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/utils/version_utils.py
+++ /dev/null
@@ -1,90 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import os
-import subprocess
-import warnings
-
-from packaging.version import parse
-
-
-def digit_version(version_str: str, length: int = 4):
- """Convert a version string into a tuple of integers.
-
- This method is usually used for comparing two versions. For pre-release
- versions: alpha < beta < rc.
-
- Args:
- version_str (str): The version string.
- length (int): The maximum number of version levels. Default: 4.
-
- Returns:
- tuple[int]: The version info in digits (integers).
- """
- assert 'parrots' not in version_str
- version = parse(version_str)
- assert version.release, f'failed to parse version {version_str}'
- release = list(version.release)
- release = release[:length]
- if len(release) < length:
- release = release + [0] * (length - len(release))
- if version.is_prerelease:
- mapping = {'a': -3, 'b': -2, 'rc': -1}
- val = -4
- # version.pre can be None
- if version.pre:
- if version.pre[0] not in mapping:
- warnings.warn(f'unknown prerelease version {version.pre[0]}, '
- 'version checking may go wrong')
- else:
- val = mapping[version.pre[0]]
- release.extend([val, version.pre[-1]])
- else:
- release.extend([val, 0])
-
- elif version.is_postrelease:
- release.extend([1, version.post])
- else:
- release.extend([0, 0])
- return tuple(release)
-
-
-def _minimal_ext_cmd(cmd):
- # construct minimal environment
- env = {}
- for k in ['SYSTEMROOT', 'PATH', 'HOME']:
- v = os.environ.get(k)
- if v is not None:
- env[k] = v
- # LANGUAGE is used on win32
- env['LANGUAGE'] = 'C'
- env['LANG'] = 'C'
- env['LC_ALL'] = 'C'
- out = subprocess.Popen(
- cmd, stdout=subprocess.PIPE, env=env).communicate()[0]
- return out
-
-
-def get_git_hash(fallback='unknown', digits=None):
- """Get the git hash of the current repo.
-
- Args:
- fallback (str, optional): The fallback string when git hash is
- unavailable. Defaults to 'unknown'.
- digits (int, optional): kept digits of the hash. Defaults to None,
- meaning all digits are kept.
-
- Returns:
- str: Git commit hash.
- """
-
- if digits is not None and not isinstance(digits, int):
- raise TypeError('digits must be None or an integer')
-
- try:
- out = _minimal_ext_cmd(['git', 'rev-parse', 'HEAD'])
- sha = out.strip().decode('ascii')
- if digits is not None:
- sha = sha[:digits]
- except OSError:
- sha = fallback
-
- return sha
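A short usage sketch of the deleted `digit_version` helper, assuming the function above is in scope. The tuples follow directly from the pre-release mapping in the code (alpha -3, beta -2, rc -1) and the zero padding to four release fields:

```python
# Minimal usage sketch for digit_version; values follow the implementation above.
assert digit_version('1.3.0a1') == (1, 3, 0, 0, -3, 1)   # alpha  -> -3
assert digit_version('1.3.0rc1') == (1, 3, 0, 0, -1, 1)  # rc     -> -1
assert digit_version('1.3.0') == (1, 3, 0, 0, 0, 0)      # final release, zero padded

# Plain tuple comparison then yields the documented ordering: alpha < rc < final.
assert digit_version('1.3.0a1') < digit_version('1.3.0rc1') < digit_version('1.3.0')

# Typical use: gate a feature on a minimum dependency version.
assert digit_version('1.12.1') >= digit_version('1.8.0')
```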
diff --git a/spaces/ArkanDash/rvc-models/app-full.py b/spaces/ArkanDash/rvc-models/app-full.py
deleted file mode 100644
index 0819327e3c5f775ca1f76b04d1e470abaae726c2..0000000000000000000000000000000000000000
--- a/spaces/ArkanDash/rvc-models/app-full.py
+++ /dev/null
@@ -1,254 +0,0 @@
-import os
-import json
-import argparse
-import traceback
-import logging
-import gradio as gr
-import numpy as np
-import librosa
-import torch
-import asyncio
-import edge_tts
-import yt_dlp
-import ffmpeg
-import subprocess
-import sys
-import io
-import wave
-from datetime import datetime
-from fairseq import checkpoint_utils
-from infer_pack.models import SynthesizerTrnMs256NSFsid, SynthesizerTrnMs256NSFsid_nono
-from vc_infer_pipeline import VC
-from config import (
- is_half,
- device
-)
-logging.getLogger("numba").setLevel(logging.WARNING)
-limitation = os.getenv("SYSTEM") == "spaces" # limit audio length in huggingface spaces
-
-def create_vc_fn(tgt_sr, net_g, vc, if_f0, file_index, file_big_npy):
- def vc_fn(
- input_audio,
- upload_audio,
- upload_mode,
- f0_up_key,
- f0_method,
- index_rate,
- tts_mode,
- tts_text,
- tts_voice
- ):
- try:
- if tts_mode:
- if len(tts_text) > 100 and limitation:
- return "Text is too long", None
- if tts_text is None or tts_voice is None:
- return "You need to enter text and select a voice", None
- asyncio.run(edge_tts.Communicate(tts_text, "-".join(tts_voice.split('-')[:-1])).save("tts.mp3"))
- audio, sr = librosa.load("tts.mp3", sr=16000, mono=True)
- else:
- if upload_mode:
- if input_audio is None:
- return "You need to upload an audio", None
- sampling_rate, audio = upload_audio
- duration = audio.shape[0] / sampling_rate
- audio = (audio / np.iinfo(audio.dtype).max).astype(np.float32)
- if len(audio.shape) > 1:
- audio = librosa.to_mono(audio.transpose(1, 0))
- if sampling_rate != 16000:
- audio = librosa.resample(audio, orig_sr=sampling_rate, target_sr=16000)
- else:
- audio, sr = librosa.load(input_audio, sr=16000, mono=True)
- times = [0, 0, 0]
- f0_up_key = int(f0_up_key)
- audio_opt = vc.pipeline(
- hubert_model,
- net_g,
- 0,
- audio,
- times,
- f0_up_key,
- f0_method,
- file_index,
- file_big_npy,
- index_rate,
- if_f0,
- )
- print(
- f"[{datetime.now().strftime('%Y-%m-%d %H:%M')}]: npy: {times[0]}, f0: {times[1]}s, infer: {times[2]}s"
- )
- return "Success", (tgt_sr, audio_opt)
- except Exception:
- info = traceback.format_exc()
- print(info)
- return info, (None, None)
- return vc_fn
-
-def cut_vocal_and_inst(yt_url):
- if yt_url != "":
- if not os.path.exists("youtube_audio"):
- os.mkdir("youtube_audio")
- ydl_opts = {
- 'format': 'bestaudio/best',
- 'postprocessors': [{
- 'key': 'FFmpegExtractAudio',
- 'preferredcodec': 'wav',
- }],
- "outtmpl": 'youtube_audio/audio',
- }
- with yt_dlp.YoutubeDL(ydl_opts) as ydl:
- ydl.download([yt_url])
- yt_audio_path = "youtube_audio/audio.wav"
- command = f"demucs --two-stems=vocals {yt_audio_path}"
- result = subprocess.run(command.split(), stdout=subprocess.PIPE)
- print(result.stdout.decode())
- return ("separated/htdemucs/audio/vocals.wav", "separated/htdemucs/audio/no_vocals.wav", yt_audio_path, "separated/htdemucs/audio/vocals.wav")
-
-def combine_vocal_and_inst(audio_data, audio_volume):
- print(audio_data)
- if not os.path.exists("result"):
- os.mkdir("result")
- vocal_path = "result/output.wav"
- inst_path = "separated/htdemucs/audio/no_vocals.wav"
- output_path = "result/combine.mp3"
- with wave.open(vocal_path, "w") as wave_file:
- wave_file.setnchannels(1)
- wave_file.setsampwidth(2)
- wave_file.setframerate(audio_data[0])
- wave_file.writeframes(audio_data[1].tobytes())
- command = f'ffmpeg -y -i {inst_path} -i {vocal_path} -filter_complex [1:a]volume={audio_volume}dB[v];[0:a][v]amix=inputs=2:duration=longest -b:a 320k -c:a libmp3lame {output_path}'
- result = subprocess.run(command.split(), stdout=subprocess.PIPE)
- return output_path
-
-def load_hubert():
- global hubert_model
- models, _, _ = checkpoint_utils.load_model_ensemble_and_task(
- ["hubert_base.pt"],
- suffix="",
- )
- hubert_model = models[0]
- hubert_model = hubert_model.to(device)
- if is_half:
- hubert_model = hubert_model.half()
- else:
- hubert_model = hubert_model.float()
- hubert_model.eval()
-
-def change_to_tts_mode(tts_mode, upload_mode):
- if tts_mode:
- return gr.Textbox.update(visible=False), gr.Audio.update(visible=False), gr.Checkbox.update(visible=False), gr.Textbox.update(visible=True), gr.Dropdown.update(visible=True)
- else:
- if upload_mode:
- return gr.Textbox.update(visible=False), gr.Audio.update(visible=True), gr.Checkbox.update(visible=True), gr.Textbox.update(visible=False), gr.Dropdown.update(visible=False)
- else:
- return gr.Textbox.update(visible=True), gr.Audio.update(visible=False), gr.Checkbox.update(visible=True), gr.Textbox.update(visible=False), gr.Dropdown.update(visible=False)
-
-def change_to_upload_mode(upload_mode):
- if upload_mode:
- return gr.Textbox().update(visible=False), gr.Audio().update(visible=True)
- else:
- return gr.Textbox().update(visible=True), gr.Audio().update(visible=False)
-
-if __name__ == '__main__':
- parser = argparse.ArgumentParser()
- parser.add_argument('--api', action="store_true", default=False)
- parser.add_argument("--colab", action="store_true", default=False, help="share gradio app")
- args, unknown = parser.parse_known_args()
- load_hubert()
- models = []
- tts_voice_list = asyncio.get_event_loop().run_until_complete(edge_tts.list_voices())
- voices = [f"{v['ShortName']}-{v['Gender']}" for v in tts_voice_list]
- with open("weights/model_info.json", "r", encoding="utf-8") as f:
- models_info = json.load(f)
- for name, info in models_info.items():
- if not info['enable']:
- continue
- title = info['title']
- author = info.get("author", None)
- cover = f"weights/{name}/{info['cover']}"
- index = f"weights/{name}/{info['feature_retrieval_library']}"
- npy = f"weights/{name}/{info['feature_file']}"
- cpt = torch.load(f"weights/{name}/{name}.pth", map_location="cpu")
- tgt_sr = cpt["config"][-1]
- cpt["config"][-3] = cpt["weight"]["emb_g.weight"].shape[0] # n_spk
- if_f0 = cpt.get("f0", 1)
- if if_f0 == 1:
- net_g = SynthesizerTrnMs256NSFsid(*cpt["config"], is_half=is_half)
- else:
- net_g = SynthesizerTrnMs256NSFsid_nono(*cpt["config"])
- del net_g.enc_q
- print(net_g.load_state_dict(cpt["weight"], strict=False)) # without this line it does not clean up properly, which is odd
- net_g.eval().to(device)
- if is_half:
- net_g = net_g.half()
- else:
- net_g = net_g.float()
- vc = VC(tgt_sr, device, is_half)
- models.append((name, title, author, cover, create_vc_fn(tgt_sr, net_g, vc, if_f0, index, npy)))
- with gr.Blocks() as app:
- gr.Markdown(
- "#
RVC Models\n"
- "##
The input audio should be clean and pure voice without background music.\n"
- "###
More feature will be added soon... \n"
- "[](https://colab.research.google.com/drive/1hx6kKvIuv5XNY1Gai2PEuZhpO5z6xpVh?usp=sharing)\n\n"
- "[](https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI)"
- )
- with gr.Tabs():
- for (name, title, author, cover, vc_fn) in models:
- with gr.TabItem(name):
- with gr.Row():
- gr.Markdown(
- '<div>'
- f'<div>{title}</div>\n'+
- (f'<div>Model author: {author}</div>' if author else "")+
- (f'<img src="file/{cover}">' if cover else "")+
- '</div>'
- )
-Downloading European Luxury Cars to your device has many benefits. You can choose from a wide range of models and customize them to your preferences. You can also drive them across different terrains and environments, such as roads, off-road tracks, or even a private island. You can also enjoy realistic sounds, graphics, and physics that simulate the real performance and behavior of these cars. In addition, you can have fun with your friends or other players online by racing or cruising together.
-
-However, downloading European Luxury Cars also comes with some challenges and risks. You need to find a reliable, safe source to download it from, since some websites or apps may contain viruses or malware that can damage your device or steal your personal information. You also need to make sure your device has enough storage space and meets the minimum requirements for the game to run smoothly. Finally, you should be aware of the legal issues and ethical implications of downloading these cars, as some of them may be protected by intellectual property rights or may promote irresponsible driving.
-
-How to download European Luxury Cars on Android devices
-
-Using the Google Play Store
-
-1. Open the Google Play Store app on your device.
-2. Sign in with your Google account if you have not already done so.
-3. Search for "European Luxury Cars" in the search bar.
-4. Find the app by DMNK Studio in the search results and tap it.
-5. Tap the "Install" button to start the download and installation process.
-6. Wait for the app to finish installing, then tap "Open" to launch it.
-
-Congratulations! You have successfully downloaded European Luxury Cars on your Android device using the Google Play Store. You can now pick your favorite luxury car and drive it with friends or solo across a private island.
-
-Using other methods
-
-If you do not want to use the GameLoop emulator, or if you want to try other ways to download European Luxury Cars on your PC, you can also use other websites or software that offer PC versions of the app. However, be careful when downloading these files, as they may not be official or safe. Here are the steps to download European Luxury Cars using other methods:
-
-1. Go to a website that offers PC versions of Android apps, such as Twitscoop. You can also use other websites, but make sure they are trustworthy and safe.
-2. Search for "European Luxury Cars" in the website's search bar.
-3. Find the app by DMNK Studio in the search results and click it.
-4. Click the "Download" button to start downloading the PC version of the app to your computer.
-5. Once the download is complete, run the file and follow the instructions to install the app on your PC.
-6. Launch the app and enjoy playing the game.
-
-Great! You have successfully downloaded European Luxury Cars on your PC using other methods. You can now have fun with your favorite luxury car on a private island with realistic graphics and physics.
-
-Conclusion
-
-Downloading European Luxury Cars can be a great way to experience the thrill and excitement of owning and driving a European luxury car. You can choose from a wide range of models and customize them to your preferences. You can also drive them across different terrains and environments, such as roads, off-road tracks, or even a private island. You can also enjoy realistic sounds, graphics, and physics that simulate the real performance and behavior of these cars. In addition, you can have fun with your friends or other players online by racing or cruising together.
-
-However, you also need to be careful when downloading European Luxury Cars, as some sources may not be reliable or safe. Make sure your device has enough storage space and meets the minimum requirements for the game to run smoothly, and be aware of the legal issues and ethical implications of downloading these cars, as some of them may be protected by intellectual property rights or may promote irresponsible driving.
-
-We invite you to try the game and share your feedback with us. What are your favorite European luxury car brands? How do you like the game's graphics and physics? What are some of the features and options available in the game? Let us know in the comments below!
-
-Frequently asked questions
-
-What are some of the best European luxury car brands?
-
-Some of the best European luxury car brands are BMW, Audi, Mercedes-Benz, Porsche, Ferrari, Lamborghini, and more. These brands offer spectacular speed, sophisticated style, and unmatched comfort. They also have a long history and reputation for excellence and innovation in the automotive industry.
-
-How can I update or uninstall the European Luxury Cars app?
-
-How can I play with friends or other players online?
-
-The European Luxury Cars app by DMNK Studio offers an online mode where you can play with friends or other players over the Internet. You can join or create a room and invite others to join you. You can also chat with them using voice or text messages. Together you can race or cruise around a private island with realistic graphics and physics.
-
-What are some of the features and options available in the game?
-
-The game offers many features and options for you to enjoy. You can choose from a wide range of models and customize them to your preferences. You can change the color, wheels, spoilers, lights, decals, license plates, and more. You can also drive on different terrains and environments, such as roads, off-road tracks, or even a private island. You can adjust the camera angle, sound effects, music volume, steering sensitivity, brake force, traction control, and more. You can also enjoy realistic sounds, graphics, and physics that simulate the real performance and behavior of these cars.
-
-How can I contact the developer or report a problem with the game?
-
-If you have any questions, suggestions, or problems with the game, you can contact the developer or report an issue from within the app itself. Simply go to the settings menu and tap "Contact Us" or "Report a Problem". You can also email the developer at dmnkstudio@gmail.com or visit their website at https://dmnkstudio.com/.
-
-
\ No newline at end of file
diff --git a/spaces/Benson/text-generation/Examples/Descargar Granja Hroes Sper Saga Para Pc.md b/spaces/Benson/text-generation/Examples/Descargar Granja Hroes Sper Saga Para Pc.md
deleted file mode 100644
index 786da88c7e35179cb4d9cfeef81b86e37582b967..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Descargar Granja Hroes Sper Saga Para Pc.md
+++ /dev/null
@@ -1,79 +0,0 @@
-
-How to download Farm Heroes Super Saga for PC
-
-Farm Heroes Super Saga is a fun and addictive match-3 puzzle game that challenges you to grow and harvest the biggest cropsies and defeat the evil Rancid Raccoon. The game features hundreds of levels, cute characters, and exciting game modes. If you love playing Farm Heroes Super Saga on your mobile device, you may be wondering whether you can play it on your PC as well. The answer is yes, you can!
-
-Playing Android games on a PC has many benefits, such as enjoying a bigger screen, better graphics, improved controls, and more storage space. In addition, you can sync your progress and achievements across devices with your Google account. In this article, we will show you three ways to download and play Farm Heroes Super Saga for PC: using Google Play Games, an Android emulator, or the Your Phone app. We will also compare the pros and cons of each method and help you decide which one is best for you.
-Before you can play Android games on a PC, you need to make sure your PC meets the minimum requirements to run them. Here is some of what you need:
-
-* A Windows 10 or 11 operating system
-* A solid-state drive (SSD) with at least 10 GB of available storage space
-* An Intel UHD Graphics 630 GPU or similar
-* A processor with at least four physical CPU cores
-* 8 GB of RAM
-* A Windows administrator account
-* Hardware virtualization enabled
-
-You also need an Internet connection and a Google account to access the Google Play Store and download games.
-
-How to use Google Play Games to play Android games on PC
-
-
-1. Go to [5](https://play.google.com/googleplaygames) and click Download Beta.
-2. Once downloaded, right-click the file and click Run as administrator.
-3. Wait for the app to install.
-4. Once installed, an in-app prompt will ask you to sign in to your Google account.
-5. After signing in, click the Games tab in the left sidebar.
-6. Find Farm Heroes Super Saga in the list of games and select it.
-7. Click Install on the info page. The game will download and then install.
-8. Once installed, click Play to launch the game.
-
-How to use an Android emulator to play Android games on PC
-
-Another way to play Android games on a PC is to use an Android emulator, software that mimics the Android operating system on your PC. An Android emulator gives you access to the full Google Play Store and lets you download any game or app you want. Many Android emulators are available, but one of the most popular and reliable is BlueStacks. BlueStacks offers a fast and smooth gaming experience with customizable controls, multi-instance mode, and game optimization. Here are the steps to use BlueStacks to play Farm Heroes Super Saga for PC:
-
-1. Go to [4](https://www.bluestacks.com/) and click Download BlueStacks.
-2. Once downloaded, double-click the file and follow the instructions to install BlueStacks.
-3. Once installed, launch BlueStacks and sign in to your Google account.
-4. Click the Google Play icon on the home screen.
-5. Search for Farm Heroes Super Saga in the search bar and select it.
-6. Click Install on the info page. The game will download and then install.
-7. Once installed, click Open to launch the game.
-
-How to use the Your Phone app to play Android games on PC
-
-
-1. On your PC, open the Start menu and look for the Your Phone app. If you do not have it, you can download it from [3](https://www.microsoft.com/en-us/p/your-phone/9nmpj99vjbwv).
-2. On your phone, go to Settings > System > About phone and tap Build number seven times to enable developer options.
-3. Go back to Settings > System > Developer options and enable USB debugging.
-4. Connect your phone to your PC with a USB cable.
-5. On your PC, launch the Your Phone app and sign in with your Microsoft account.
-6. Follow the instructions to link your phone and grant permissions.
-7. In the Your Phone app, click Apps in the left sidebar.
-8. Find Farm Heroes Super Saga in the list of apps and select it.
-9. The game will launch on your phone and mirror to your PC. You can play it with your mouse and keyboard.
-
-Pros and cons of each method
-
-Now that you know how to download Farm Heroes Super Saga for PC using three different methods, you may be wondering which one is best for you. To help you decide, here are some of the pros and cons of each method:
-
-| Method | Pros | Cons |
-| --- | --- | --- |
-| Google Play Games | Official Google experience; seamless sync across devices; enhanced controls; rewards as you play | Limited game selection; requires Windows 11; may not support every feature of some games |
-| Android emulator | Access to the full Google Play Store; fast and smooth gaming experience; customizable controls; multi-instance mode; game optimization | Requires more storage space; may slow down your PC; may have compatibility issues with some games |
-
-Conclusion
-
-Farm Heroes Super Saga is a fun and addictive match-3 puzzle game that you can play on your PC using Google Play Games, an Android emulator, or the Your Phone app. Each method has its own pros and cons, so you should choose the one that suits your preferences and needs. We recommend Google Play Games if you want an official Google experience with seamless sync across devices, enhanced controls, and rewards as you play. We recommend an Android emulator such as BlueStacks if you want access to the full Google Play Store and a fast, smooth gaming experience with customizable controls, multi-instance mode, and game optimization. We recommend the Your Phone app if you want to use your phone's apps on your PC without downloading anything extra, by mirroring your phone's screen. We hope this article helped you learn how to download Farm Heroes Super Saga for PC and enjoy this amazing game on a bigger screen. If you have any questions or comments, let us know in the comments below. Happy farming!
-
-FAQs
-
-Here are some of the most frequently asked questions about downloading Farm Heroes Super Saga for PC:
-
-Is Farm Heroes Super Saga free?
-
-Yes, Farm Heroes Super Saga is free to play, but it offers in-app purchases for extra lives, boosters, and other items.
-
-Can I play Farm Heroes Super Saga offline?
-
-No, Farm Heroes Super Saga requires an Internet connection to play.
-
-How can I save my progress in Farm Heroes Super Saga?
-
-You can save your progress in Farm Heroes Super Saga by signing in with your Google or Facebook account. That way, you can sync your progress and achievements across devices.
-
-How can I get more lives in Farm Heroes Super Saga?
-
-You can get more lives in Farm Heroes Super Saga by waiting for them to refill, asking your friends for help, watching ads, or buying them with gold bars.
-
-You can contact the Farm Heroes Super Saga support team by going to the game's settings and tapping the Help Center button. You can also visit [2](https://community.king.com/en/farm-heroes-super-saga) to join the community and get help from other players.
-
-
\ No newline at end of file
diff --git a/spaces/BetterAPI/BetterChat_new/src/lib/utils/trimSuffix.ts b/spaces/BetterAPI/BetterChat_new/src/lib/utils/trimSuffix.ts
deleted file mode 100644
index 729107942ebaa2d7e1281dd77f8e52e8b135a5ad..0000000000000000000000000000000000000000
--- a/spaces/BetterAPI/BetterChat_new/src/lib/utils/trimSuffix.ts
+++ /dev/null
@@ -1,6 +0,0 @@
-export function trimSuffix(input: string, end: string): string {
- if (input.endsWith(end)) {
- return input.slice(0, input.length - end.length);
- }
- return input;
-}
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/_distutils_hack/override.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/_distutils_hack/override.py
deleted file mode 100644
index 2cc433a4a55e3b41fa31089918fb62096092f89f..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/_distutils_hack/override.py
+++ /dev/null
@@ -1 +0,0 @@
-__import__('_distutils_hack').do_override()
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pkg_resources/_vendor/pyparsing/util.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pkg_resources/_vendor/pyparsing/util.py
deleted file mode 100644
index 34ce092c6d08d9cdc2704840b7539de7b5ae1dcc..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pkg_resources/_vendor/pyparsing/util.py
+++ /dev/null
@@ -1,235 +0,0 @@
-# util.py
-import warnings
-import types
-import collections
-import itertools
-from functools import lru_cache
-from typing import List, Union, Iterable
-
-_bslash = chr(92)
-
-
-class __config_flags:
- """Internal class for defining compatibility and debugging flags"""
-
- _all_names: List[str] = []
- _fixed_names: List[str] = []
- _type_desc = "configuration"
-
- @classmethod
- def _set(cls, dname, value):
- if dname in cls._fixed_names:
- warnings.warn(
- "{}.{} {} is {} and cannot be overridden".format(
- cls.__name__,
- dname,
- cls._type_desc,
- str(getattr(cls, dname)).upper(),
- )
- )
- return
- if dname in cls._all_names:
- setattr(cls, dname, value)
- else:
- raise ValueError("no such {} {!r}".format(cls._type_desc, dname))
-
- enable = classmethod(lambda cls, name: cls._set(name, True))
- disable = classmethod(lambda cls, name: cls._set(name, False))
-
-
-@lru_cache(maxsize=128)
-def col(loc: int, strg: str) -> int:
- """
- Returns current column within a string, counting newlines as line separators.
- The first column is number 1.
-
- Note: the default parsing behavior is to expand tabs in the input string
- before starting the parsing process. See
- :class:`ParserElement.parseString` for more
-    information on parsing strings containing ``<TAB>`` s, and suggested
- methods to maintain a consistent view of the parsed string, the parse
- location, and line and column positions within the parsed string.
- """
- s = strg
- return 1 if 0 < loc < len(s) and s[loc - 1] == "\n" else loc - s.rfind("\n", 0, loc)
-
-
-@lru_cache(maxsize=128)
-def lineno(loc: int, strg: str) -> int:
- """Returns current line number within a string, counting newlines as line separators.
- The first line is number 1.
-
- Note - the default parsing behavior is to expand tabs in the input string
- before starting the parsing process. See :class:`ParserElement.parseString`
- for more information on parsing strings containing ```` s, and
- suggested methods to maintain a consistent view of the parsed string, the
- parse location, and line and column positions within the parsed string.
- """
- return strg.count("\n", 0, loc) + 1
-
-
-@lru_cache(maxsize=128)
-def line(loc: int, strg: str) -> str:
- """
- Returns the line of text containing loc within a string, counting newlines as line separators.
- """
- last_cr = strg.rfind("\n", 0, loc)
- next_cr = strg.find("\n", loc)
- return strg[last_cr + 1 : next_cr] if next_cr >= 0 else strg[last_cr + 1 :]
-
-
-class _UnboundedCache:
- def __init__(self):
- cache = {}
- cache_get = cache.get
- self.not_in_cache = not_in_cache = object()
-
- def get(_, key):
- return cache_get(key, not_in_cache)
-
- def set_(_, key, value):
- cache[key] = value
-
- def clear(_):
- cache.clear()
-
- self.size = None
- self.get = types.MethodType(get, self)
- self.set = types.MethodType(set_, self)
- self.clear = types.MethodType(clear, self)
-
-
-class _FifoCache:
- def __init__(self, size):
- self.not_in_cache = not_in_cache = object()
- cache = collections.OrderedDict()
- cache_get = cache.get
-
- def get(_, key):
- return cache_get(key, not_in_cache)
-
- def set_(_, key, value):
- cache[key] = value
- while len(cache) > size:
- cache.popitem(last=False)
-
- def clear(_):
- cache.clear()
-
- self.size = size
- self.get = types.MethodType(get, self)
- self.set = types.MethodType(set_, self)
- self.clear = types.MethodType(clear, self)
-
-
-class LRUMemo:
- """
- A memoizing mapping that retains `capacity` deleted items
-
- The memo tracks retained items by their access order; once `capacity` items
- are retained, the least recently used item is discarded.
- """
-
- def __init__(self, capacity):
- self._capacity = capacity
- self._active = {}
- self._memory = collections.OrderedDict()
-
- def __getitem__(self, key):
- try:
- return self._active[key]
- except KeyError:
- self._memory.move_to_end(key)
- return self._memory[key]
-
- def __setitem__(self, key, value):
- self._memory.pop(key, None)
- self._active[key] = value
-
- def __delitem__(self, key):
- try:
- value = self._active.pop(key)
- except KeyError:
- pass
- else:
- while len(self._memory) >= self._capacity:
- self._memory.popitem(last=False)
- self._memory[key] = value
-
- def clear(self):
- self._active.clear()
- self._memory.clear()
-
-
-class UnboundedMemo(dict):
- """
- A memoizing mapping that retains all deleted items
- """
-
- def __delitem__(self, key):
- pass
-
-
-def _escape_regex_range_chars(s: str) -> str:
- # escape these chars: ^-[]
- for c in r"\^-[]":
- s = s.replace(c, _bslash + c)
- s = s.replace("\n", r"\n")
- s = s.replace("\t", r"\t")
- return str(s)
-
-
-def _collapse_string_to_ranges(
- s: Union[str, Iterable[str]], re_escape: bool = True
-) -> str:
- def is_consecutive(c):
- c_int = ord(c)
- is_consecutive.prev, prev = c_int, is_consecutive.prev
- if c_int - prev > 1:
- is_consecutive.value = next(is_consecutive.counter)
- return is_consecutive.value
-
- is_consecutive.prev = 0
- is_consecutive.counter = itertools.count()
- is_consecutive.value = -1
-
- def escape_re_range_char(c):
- return "\\" + c if c in r"\^-][" else c
-
- def no_escape_re_range_char(c):
- return c
-
- if not re_escape:
- escape_re_range_char = no_escape_re_range_char
-
- ret = []
- s = "".join(sorted(set(s)))
- if len(s) > 3:
- for _, chars in itertools.groupby(s, key=is_consecutive):
- first = last = next(chars)
- last = collections.deque(
- itertools.chain(iter([last]), chars), maxlen=1
- ).pop()
- if first == last:
- ret.append(escape_re_range_char(first))
- else:
- sep = "" if ord(last) == ord(first) + 1 else "-"
- ret.append(
- "{}{}{}".format(
- escape_re_range_char(first), sep, escape_re_range_char(last)
- )
- )
- else:
- ret = [escape_re_range_char(c) for c in s]
-
- return "".join(ret)
-
-
-def _flatten(ll: list) -> list:
- ret = []
- for i in ll:
- if isinstance(i, list):
- ret.extend(_flatten(i))
- else:
- ret.append(i)
- return ret
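A brief usage sketch for the deleted pyparsing helpers, assuming the module-level (private) names above are in scope: `_collapse_string_to_ranges` folds a character set into regex character-class ranges, and `_flatten` flattens nested lists.

```python
import re

# Consecutive runs collapse into ranges; short inputs are left as-is.
assert _collapse_string_to_ranges("abcdxyz") == "a-dx-z"
assert _collapse_string_to_ranges("0123456789") == "0-9"
assert _collapse_string_to_ranges("ab") == "ab"  # 3 chars or fewer: no collapsing

# The result is intended to be dropped into a regex character class.
char_class = re.compile(f"[{_collapse_string_to_ranges('abcdxyz')}]+")
assert char_class.fullmatch("cab") is not None

# _flatten recursively flattens nested lists.
assert _flatten([1, [2, [3]], 4]) == [1, 2, 3, 4]
```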
diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/structures/image_list.py b/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/structures/image_list.py
deleted file mode 100644
index 706c1c1dabcc5dbb401c2c0384e9957b5ecf745d..0000000000000000000000000000000000000000
--- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/structures/image_list.py
+++ /dev/null
@@ -1,102 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.
-from __future__ import division
-from typing import Any, List, Sequence, Tuple, Union
-import torch
-from torch.nn import functional as F
-
-
-class ImageList(object):
- """
- Structure that holds a list of images (of possibly
- varying sizes) as a single tensor.
- This works by padding the images to the same size,
- and storing in a field the original sizes of each image
-
- Attributes:
- image_sizes (list[tuple[int, int]]): each tuple is (h, w)
- """
-
- def __init__(self, tensor: torch.Tensor, image_sizes: List[Tuple[int, int]]):
- """
- Arguments:
- tensor (Tensor): of shape (N, H, W) or (N, C_1, ..., C_K, H, W) where K >= 1
- image_sizes (list[tuple[int, int]]): Each tuple is (h, w). It can
- be smaller than (H, W) due to padding.
- """
- self.tensor = tensor
- self.image_sizes = image_sizes
-
- def __len__(self) -> int:
- return len(self.image_sizes)
-
- def __getitem__(self, idx: Union[int, slice]) -> torch.Tensor:
- """
- Access the individual image in its original size.
-
- Returns:
- Tensor: an image of shape (H, W) or (C_1, ..., C_K, H, W) where K >= 1
- """
- size = self.image_sizes[idx]
- return self.tensor[idx, ..., : size[0], : size[1]] # type: ignore
-
- def to(self, *args: Any, **kwargs: Any) -> "ImageList":
- cast_tensor = self.tensor.to(*args, **kwargs)
- return ImageList(cast_tensor, self.image_sizes)
-
- @property
- def device(self) -> torch.device:
- return self.tensor.device
-
- @staticmethod
- def from_tensors(
- tensors: Sequence[torch.Tensor], size_divisibility: int = 0, pad_value: float = 0.0
- ) -> "ImageList":
- """
- Args:
- tensors: a tuple or list of `torch.Tensors`, each of shape (Hi, Wi) or
- (C_1, ..., C_K, Hi, Wi) where K >= 1. The Tensors will be padded
- to the same shape with `pad_value`.
- size_divisibility (int): If `size_divisibility > 0`, add padding to ensure
- the common height and width is divisible by `size_divisibility`.
- This depends on the model and many models need a divisibility of 32.
- pad_value (float): value to pad
-
- Returns:
- an `ImageList`.
- """
- assert len(tensors) > 0
- assert isinstance(tensors, (tuple, list))
- for t in tensors:
- assert isinstance(t, torch.Tensor), type(t)
- assert t.shape[1:-2] == tensors[0].shape[1:-2], t.shape
- # per dimension maximum (H, W) or (C_1, ..., C_K, H, W) where K >= 1 among all tensors
- max_size = tuple(max(s) for s in zip(*[img.shape for img in tensors]))
-
- if size_divisibility > 0:
- import math
-
- stride = size_divisibility
- max_size = list(max_size) # type: ignore
- max_size[-2] = int(math.ceil(max_size[-2] / stride) * stride) # type: ignore
- max_size[-1] = int(math.ceil(max_size[-1] / stride) * stride) # type: ignore
- max_size = tuple(max_size)
-
- image_sizes = [tuple(im.shape[-2:]) for im in tensors]
-
- if len(tensors) == 1:
- # This seems slightly (2%) faster.
- # TODO: check whether it's faster for multiple images as well
- image_size = image_sizes[0]
- padding_size = [0, max_size[-1] - image_size[1], 0, max_size[-2] - image_size[0]]
- if all(x == 0 for x in padding_size): # https://github.com/pytorch/pytorch/issues/31734
- batched_imgs = tensors[0].unsqueeze(0)
- else:
- padded = F.pad(tensors[0], padding_size, value=pad_value)
- batched_imgs = padded.unsqueeze_(0)
- else:
- batch_shape = (len(tensors),) + max_size
- batched_imgs = tensors[0].new_full(batch_shape, pad_value)
- for img, pad_img in zip(tensors, batched_imgs):
- pad_img[..., : img.shape[-2], : img.shape[-1]].copy_(img)
-
- return ImageList(batched_imgs.contiguous(), image_sizes)
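A short usage sketch of `ImageList.from_tensors`, assuming the class above is importable and two CHW images of different sizes; the padded shape follows the divisibility rounding implemented in the method (a divisibility of 32 is typical for FPN-style models):

```python
import torch

# Two images of different spatial sizes, each of shape (C, H, W).
imgs = [torch.rand(3, 300, 400), torch.rand(3, 280, 500)]

batch = ImageList.from_tensors(imgs, size_divisibility=32, pad_value=0.0)

# Common max size is (300, 500); rounded up to multiples of 32 -> (320, 512).
assert tuple(batch.tensor.shape) == (2, 3, 320, 512)

# Original (pre-padding) sizes are kept per image ...
assert batch.image_sizes == [(300, 400), (280, 500)]

# ... and indexing crops each image back to its original size.
assert tuple(batch[0].shape) == (3, 300, 400)
```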
diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/tbb/detail/malloc_and_free.h b/spaces/CVPR/LIVE/thrust/thrust/system/tbb/detail/malloc_and_free.h
deleted file mode 100644
index 01ab1e6dbe1732da1f8606b7a9121c1b404edb6f..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/thrust/system/tbb/detail/malloc_and_free.h
+++ /dev/null
@@ -1,23 +0,0 @@
-/*
- * Copyright 2008-2013 NVIDIA Corporation
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#pragma once
-
-#include <thrust/detail/config.h>
-
-// this system inherits malloc and free
-#include <thrust/system/cpp/detail/malloc_and_free.h>
-
diff --git a/spaces/ChenWu98/Stable-CycleDiffusion/README.md b/spaces/ChenWu98/Stable-CycleDiffusion/README.md
deleted file mode 100644
index 0101c471ee3496219485619774c0126edab829b6..0000000000000000000000000000000000000000
--- a/spaces/ChenWu98/Stable-CycleDiffusion/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Stable CycleDiffusion
-emoji: 🚀
-colorFrom: green
-colorTo: blue
-sdk: gradio
-sdk_version: 3.9
-app_file: app.py
-pinned: false
-license: apache-2.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/CofAI/chat/client/js/highlight.min.js b/spaces/CofAI/chat/client/js/highlight.min.js
deleted file mode 100644
index d410b45b38119606525a0a7c0c60c428c5ee6eb7..0000000000000000000000000000000000000000
--- a/spaces/CofAI/chat/client/js/highlight.min.js
+++ /dev/null
@@ -1 +0,0 @@
-var hljs=function(){"use strict";var e={exports:{}};function n(e){return e instanceof Map?e.clear=e.delete=e.set=()=>{throw Error("map is read-only")}:e instanceof Set&&(e.add=e.clear=e.delete=()=>{throw Error("set is read-only")}),Object.freeze(e),Object.getOwnPropertyNames(e).forEach(t=>{var a=e[t];"object"!=typeof a||Object.isFrozen(a)||n(a)}),e}e.exports=n,e.exports.default=n;class t{constructor(e){void 0===e.data&&(e.data={}),this.data=e.data,this.isMatchIgnored=!1}ignoreMatch(){this.isMatchIgnored=!0}}function a(e){return e.replace(/&/g,"&").replace(//g,">").replace(/"/g,""").replace(/'/g,"'")}function i(e,...n){let t=Object.create(null);for(let a in e)t[a]=e[a];return n.forEach(e=>{for(let n in e)t[n]=e[n]}),t}let r=e=>!!e.scope||e.sublanguage&&e.language;class s{constructor(e,n){this.buffer="",this.classPrefix=n.classPrefix,e.walk(this)}addText(e){this.buffer+=a(e)}openNode(e){if(!r(e))return;let n="";n=e.sublanguage?"language-"+e.language:((e,{prefix:n})=>{if(e.includes(".")){let t=e.split(".");return[`${n}${t.shift()}`,...t.map((e,n)=>`${e}${"_".repeat(n+1)}`),].join(" ")}return`${n}${e}`})(e.scope,{prefix:this.classPrefix}),this.span(n)}closeNode(e){r(e)&&(this.buffer+="")}value(){return this.buffer}span(e){this.buffer+=``}}let l=(e={})=>{let n={children:[]};return Object.assign(n,e),n};class o{constructor(){this.rootNode=l(),this.stack=[this.rootNode]}get top(){return this.stack[this.stack.length-1]}get root(){return this.rootNode}add(e){this.top.children.push(e)}openNode(e){let n=l({scope:e});this.add(n),this.stack.push(n)}closeNode(){if(this.stack.length>1)return this.stack.pop()}closeAllNodes(){for(;this.closeNode(););}toJSON(){return JSON.stringify(this.rootNode,null,4)}walk(e){return this.constructor._walk(e,this.rootNode)}static _walk(e,n){return"string"==typeof n?e.addText(n):n.children&&(e.openNode(n),n.children.forEach(n=>this._walk(e,n)),e.closeNode(n)),e}static _collapse(e){"string"!=typeof e&&e.children&&(e.children.every(e=>"string"==typeof e)?e.children=[e.children.join("")]:e.children.forEach(e=>{o._collapse(e)}))}}class c extends o{constructor(e){super(),this.options=e}addKeyword(e,n){""!==e&&(this.openNode(n),this.addText(e),this.closeNode())}addText(e){""!==e&&this.add(e)}addSublanguage(e,n){let t=e.root;t.sublanguage=!0,t.language=n,this.add(t)}toHTML(){return new s(this,this.options).value()}finalize(){return!0}}function d(e){return e?"string"==typeof e?e:e.source:null}function g(e){return m("(?=",e,")")}function u(e){return m("(?:",e,")*")}function b(e){return m("(?:",e,")?")}function m(...e){return e.map(e=>d(e)).join("")}function p(...e){let n=(e=>{let n=e[e.length-1];return"object"==typeof n&&n.constructor===Object?(e.splice(e.length-1,1),n):{}})(e);return"("+(n.capture?"":"?:")+e.map(e=>d(e)).join("|")+")"}function h(e){return RegExp(e.toString()+"|").exec("").length-1}let f=/\[(?:[^\\\]]|\\.)*\]|\(\??|\\([1-9][0-9]*)|\\./;function E(e,{joinWith:n}){let t=0;return e.map(e=>{t+=1;let n=t,a=d(e),i="";for(;a.length>0;){let r=f.exec(a);if(!r){i+=a;break}i+=a.substring(0,r.index),a=a.substring(r.index+r[0].length),"\\"===r[0][0]&&r[1]?i+="\\"+(Number(r[1])+n):(i+=r[0],"("===r[0]&&t++)}return i}).map(e=>`(${e})`).join(n)}let $="[a-zA-Z]\\w*",y="[a-zA-Z_]\\w*",N="\\b\\d+(\\.\\d+)?",w="(-?)(\\b0[xX][a-fA-F0-9]+|(\\b\\d+(\\.\\d*)?|\\.\\d+)([eE][-+]?\\d+)?)",v="\\b(0b[01]+)",x={begin:"\\\\[\\s\\S]",relevance:0},k=(e,n,t={})=>{let a=i({scope:"comment",begin:e,end:n,contains:[]},t);a.contains.push({scope:"doctag",begin:"[ 
]*(?=(TODO|FIXME|NOTE|BUG|OPTIMIZE|HACK|XXX):)",end:/(TODO|FIXME|NOTE|BUG|OPTIMIZE|HACK|XXX):/,excludeBegin:!0,relevance:0});let r=p("I","a","is","so","us","to","at","if","in","it","on",/[A-Za-z]+['](d|ve|re|ll|t|s|n)/,/[A-Za-z]+[-][a-z]+/,/[A-Za-z][a-z]{2,}/);return a.contains.push({begin:m(/[ ]+/,"(",r,/[.]?[:]?([.][ ]|[ ])/,"){3}")}),a},M=k("//","$"),O=k("/\\*","\\*/"),S=k("#","$");var A=Object.freeze({__proto__:null,MATCH_NOTHING_RE:/\b\B/,IDENT_RE:$,UNDERSCORE_IDENT_RE:y,NUMBER_RE:N,C_NUMBER_RE:w,BINARY_NUMBER_RE:v,RE_STARTERS_RE:"!|!=|!==|%|%=|&|&&|&=|\\*|\\*=|\\+|\\+=|,|-|-=|/=|/|:|;|<<|<<=|<=|<|===|==|=|>>>=|>>=|>=|>>>|>>|>|\\?|\\[|\\{|\\(|\\^|\\^=|\\||\\|=|\\|\\||~",SHEBANG(e={}){let n=/^#![ ]*\//;return e.binary&&(e.begin=m(n,/.*\b/,e.binary,/\b.*/)),i({scope:"meta",begin:n,end:/$/,relevance:0,"on:begin"(e,n){0!==e.index&&n.ignoreMatch()}},e)},BACKSLASH_ESCAPE:x,APOS_STRING_MODE:{scope:"string",begin:"'",end:"'",illegal:"\\n",contains:[x]},QUOTE_STRING_MODE:{scope:"string",begin:'"',end:'"',illegal:"\\n",contains:[x]},PHRASAL_WORDS_MODE:{begin:/\b(a|an|the|are|I'm|isn't|don't|doesn't|won't|but|just|should|pretty|simply|enough|gonna|going|wtf|so|such|will|you|your|they|like|more)\b/},COMMENT:k,C_LINE_COMMENT_MODE:M,C_BLOCK_COMMENT_MODE:O,HASH_COMMENT_MODE:S,NUMBER_MODE:{scope:"number",begin:N,relevance:0},C_NUMBER_MODE:{scope:"number",begin:w,relevance:0},BINARY_NUMBER_MODE:{scope:"number",begin:v,relevance:0},REGEXP_MODE:{begin:/(?=\/[^/\n]*\/)/,contains:[{scope:"regexp",begin:/\//,end:/\/[gimuy]*/,illegal:/\n/,contains:[x,{begin:/\[/,end:/\]/,relevance:0,contains:[x]},]},]},TITLE_MODE:{scope:"title",begin:$,relevance:0},UNDERSCORE_TITLE_MODE:{scope:"title",begin:y,relevance:0},METHOD_GUARD:{begin:"\\.\\s*[a-zA-Z_]\\w*",relevance:0},END_SAME_AS_BEGIN:e=>Object.assign(e,{"on:begin"(e,n){n.data._beginMatch=e[1]},"on:end"(e,n){n.data._beginMatch!==e[1]&&n.ignoreMatch()}})});function C(e,n){"."===e.input[e.index-1]&&n.ignoreMatch()}function T(e,n){void 0!==e.className&&(e.scope=e.className,delete e.className)}function R(e,n){n&&e.beginKeywords&&(e.begin="\\b("+e.beginKeywords.split(" ").join("|")+")(?!\\.)(?=\\b|\\s)",e.__beforeBegin=C,e.keywords=e.keywords||e.beginKeywords,delete e.beginKeywords,void 0===e.relevance&&(e.relevance=0))}function D(e,n){Array.isArray(e.illegal)&&(e.illegal=p(...e.illegal))}function I(e,n){if(e.match){if(e.begin||e.end)throw Error("begin & end are not supported with match");e.begin=e.match,delete e.match}}function L(e,n){void 0===e.relevance&&(e.relevance=1)}let B=(e,n)=>{if(!e.beforeMatch)return;if(e.starts)throw Error("beforeMatch cannot be used with starts");let t=Object.assign({},e);Object.keys(e).forEach(n=>{delete e[n]}),e.keywords=t.keywords,e.begin=m(t.beforeMatch,g(t.begin)),e.starts={relevance:0,contains:[Object.assign(t,{endsParent:!0})]},e.relevance=0,delete t.beforeMatch},_=["of","and","for","in","not","or","if","then","parent","list","value",],z={},F=e=>{console.error(e)},U=(e,...n)=>{},P=(e,n)=>{z[`${e}/${n}`]||(console.log(`Deprecated as of ${e}. 
${n}`),z[`${e}/${n}`]=!0)},j=Error();function K(e,n,{key:t}){let a=0,i=e[t],r={},s={};for(let l=1;l<=n.length;l++)s[l+a]=i[l],r[l+a]=!0,a+=h(n[l-1]);e[t]=s,e[t]._emit=r,e[t]._multi=!0}function q(e){var n;(n=e).scope&&"object"==typeof n.scope&&null!==n.scope&&(n.beginScope=n.scope,delete n.scope),"string"==typeof e.beginScope&&(e.beginScope={_wrap:e.beginScope}),"string"==typeof e.endScope&&(e.endScope={_wrap:e.endScope}),(e=>{if(Array.isArray(e.begin)){if(e.skip||e.excludeBegin||e.returnBegin)throw F("skip, excludeBegin, returnBegin not compatible with beginScope: {}"),j;if("object"!=typeof e.beginScope||null===e.beginScope)throw F("beginScope must be object"),j;K(e,e.begin,{key:"beginScope"}),e.begin=E(e.begin,{joinWith:""})}})(e),(e=>{if(Array.isArray(e.end)){if(e.skip||e.excludeEnd||e.returnEnd)throw F("skip, excludeEnd, returnEnd not compatible with endScope: {}"),j;if("object"!=typeof e.endScope||null===e.endScope)throw F("endScope must be object"),j;K(e,e.end,{key:"endScope"}),e.end=E(e.end,{joinWith:""})}})(e)}class H extends Error{constructor(e,n){super(e),this.name="HTMLInjectionError",this.html=n}}let Z=a,G=i,W=Symbol("nomatch");var Q=(n=>{let a=Object.create(null),r=Object.create(null),s=[],l=!0,o="Could not find the language '{}', did you forget to load/include a language module?",f={disableAutodetect:!0,name:"Plain text",contains:[]},$={ignoreUnescapedHTML:!1,throwUnescapedHTML:!1,noHighlightRe:/^(no-?highlight)$/i,languageDetectRe:/\blang(?:uage)?-([\w-]+)\b/i,classPrefix:"hljs-",cssSelector:"pre code",languages:null,__emitter:c};function y(e){return $.noHighlightRe.test(e)}function N(e,n,t){let a="",i="";"object"==typeof n?(a=e,t=n.ignoreIllegals,i=n.language):(P("10.7.0","highlight(lang, code, ...args) has been deprecated."),P("10.7.0","Please use highlight(code, options) instead.\nhttps://github.com/highlightjs/highlight.js/issues/2277"),i=e,a=n),void 0===t&&(t=!0);let r={code:a,language:i};z("before:highlight",r);let s=r.result?r.result:w(r.language,r.code,t);return s.code=r.code,z("after:highlight",s),s}function w(e,n,r,s){let c=Object.create(null);function g(){var e;if(!M.keywords)return void A.addText(C);let n=0;M.keywordPatternRe.lastIndex=0;let t=M.keywordPatternRe.exec(C),a="";for(;t;){a+=C.substring(n,t.index);let i=N.case_insensitive?t[0].toLowerCase():t[0],r=(e=i,M.keywords[e]);if(r){let[s,l]=r;if(A.addText(a),a="",c[i]=(c[i]||0)+1,c[i]<=7&&(z+=l),s.startsWith("_"))a+=t[0];else{let o=N.classNameAliases[s]||s;A.addKeyword(t[0],o)}}else a+=t[0];n=M.keywordPatternRe.lastIndex,t=M.keywordPatternRe.exec(C)}a+=C.substring(n),A.addText(a)}function u(){null!=M.subLanguage?(()=>{if(""===C)return;let e=null;if("string"==typeof M.subLanguage){if(!a[M.subLanguage])return void A.addText(C);e=w(M.subLanguage,C,!0,S[M.subLanguage]),S[M.subLanguage]=e._top}else e=v(C,M.subLanguage.length?M.subLanguage:null);M.relevance>0&&(z+=e.relevance),A.addSublanguage(e._emitter,e.language)})():g(),C=""}function b(e,n){let t=1,a=n.length-1;for(;t<=a;){if(!e._emit[t]){t++;continue}let i=N.classNameAliases[e[t]]||e[t],r=n[t];i?A.addKeyword(r,i):(C=r,g(),C=""),t++}}function m(e,n){return e.scope&&"string"==typeof e.scope&&A.openNode(N.classNameAliases[e.scope]||e.scope),e.beginScope&&(e.beginScope._wrap?(A.addKeyword(C,N.classNameAliases[e.beginScope._wrap]||e.beginScope._wrap),C=""):e.beginScope._multi&&(b(e.beginScope,n),C="")),M=Object.create(e,{parent:{value:M}})}function p(e){return 0===M.matcher.regexIndex?(C+=e[0],1):(j=!0,0)}let f={};function y(a,i){let s=i&&i[0];if(C+=a,null==s)return 
u(),0;if("begin"===f.type&&"end"===i.type&&f.index===i.index&&""===s){if(C+=n.slice(i.index,i.index+1),!l){let o=Error(`0 width match regex (${e})`);throw o.languageName=e,o.badRule=f.rule,o}return 1}if(f=i,"begin"===i.type)return(e=>{let n=e[0],a=e.rule,i=new t(a),r=[a.__beforeBegin,a["on:begin"]];for(let s of r)if(s&&(s(e,i),i.isMatchIgnored))return p(n);return a.skip?C+=n:(a.excludeBegin&&(C+=n),u(),a.returnBegin||a.excludeBegin||(C=n)),m(a,e),a.returnBegin?0:n.length})(i);if("illegal"===i.type&&!r){let c=Error('Illegal lexeme "'+s+'" for mode "'+(M.scope||"")+'"');throw c.mode=M,c}if("end"===i.type){let d=function e(a){let i=a[0],r=n.substring(a.index),s=function e(n,a,i){let r=((e,n)=>{let t=e&&e.exec(n);return t&&0===t.index})(n.endRe,i);if(r){if(n["on:end"]){let s=new t(n);n["on:end"](a,s),s.isMatchIgnored&&(r=!1)}if(r){for(;n.endsParent&&n.parent;)n=n.parent;return n}}if(n.endsWithParent)return e(n.parent,a,i)}(M,a,r);if(!s)return W;let l=M;M.endScope&&M.endScope._wrap?(u(),A.addKeyword(i,M.endScope._wrap)):M.endScope&&M.endScope._multi?(u(),b(M.endScope,a)):l.skip?C+=i:(l.returnEnd||l.excludeEnd||(C+=i),u(),l.excludeEnd&&(C=i));do M.scope&&A.closeNode(),M.skip||M.subLanguage||(z+=M.relevance),M=M.parent;while(M!==s.parent);return s.starts&&m(s.starts,a),l.returnEnd?0:i.length}(i);if(d!==W)return d}if("illegal"===i.type&&""===s)return 1;if(P>1e5&&P>3*i.index)throw Error("potential infinite loop, way more iterations than matches");return C+=s,s.length}let N=O(e);if(!N)throw F(o.replace("{}",e)),Error('Unknown language: "'+e+'"');let x=function e(n){function t(e,t){return RegExp(d(e),"m"+(n.case_insensitive?"i":"")+(n.unicodeRegex?"u":"")+(t?"g":""))}class a{constructor(){this.matchIndexes={},this.regexes=[],this.matchAt=1,this.position=0}addRule(e,n){n.position=this.position++,this.matchIndexes[this.matchAt]=n,this.regexes.push([n,e]),this.matchAt+=h(e)+1}compile(){0===this.regexes.length&&(this.exec=()=>null);let e=this.regexes.map(e=>e[1]);this.matcherRe=t(E(e,{joinWith:"|"}),!0),this.lastIndex=0}exec(e){this.matcherRe.lastIndex=this.lastIndex;let n=this.matcherRe.exec(e);if(!n)return null;let t=n.findIndex((e,n)=>n>0&&void 0!==e),a=this.matchIndexes[t];return n.splice(0,t),Object.assign(n,a)}}class r{constructor(){this.rules=[],this.multiRegexes=[],this.count=0,this.lastIndex=0,this.regexIndex=0}getMatcher(e){if(this.multiRegexes[e])return this.multiRegexes[e];let n=new a;return this.rules.slice(e).forEach(([e,t])=>n.addRule(e,t)),n.compile(),this.multiRegexes[e]=n,n}resumingScanAtSamePosition(){return 0!==this.regexIndex}considerAll(){this.regexIndex=0}addRule(e,n){this.rules.push([e,n]),"begin"===n.type&&this.count++}exec(e){let n=this.getMatcher(this.regexIndex);n.lastIndex=this.lastIndex;let t=n.exec(e);if(this.resumingScanAtSamePosition()){if(t&&t.index===this.lastIndex);else{let a=this.getMatcher(0);a.lastIndex=this.lastIndex+1,t=a.exec(e)}}return t&&(this.regexIndex+=t.position+1,this.regexIndex===this.count&&this.considerAll()),t}}if(n.compilerExtensions||(n.compilerExtensions=[]),n.contains&&n.contains.includes("self"))throw Error("ERR: contains `self` is not supported at the top-level of a language. 
See documentation.");return n.classNameAliases=i(n.classNameAliases||{}),function e(a,s){let l=a;if(a.isCompiled)return l;[T,I,q,B].forEach(e=>e(a,s)),n.compilerExtensions.forEach(e=>e(a,s)),a.__beforeBegin=null,[R,D,L].forEach(e=>e(a,s)),a.isCompiled=!0;let o=null;return"object"==typeof a.keywords&&a.keywords.$pattern&&(a.keywords=Object.assign({},a.keywords),o=a.keywords.$pattern,delete a.keywords.$pattern),o=o||/\w+/,a.keywords&&(a.keywords=function e(n,t,a="keyword"){let i=Object.create(null);return"string"==typeof n?r(a,n.split(" ")):Array.isArray(n)?r(a,n):Object.keys(n).forEach(a=>{Object.assign(i,e(n[a],t,a))}),i;function r(e,n){t&&(n=n.map(e=>e.toLowerCase())),n.forEach(n=>{var t,a,r;let s=n.split("|");i[s[0]]=[e,(t=s[0],a=s[1],a?Number(a):(r=t,_.includes(r.toLowerCase()))?0:1)]})}}(a.keywords,n.case_insensitive)),l.keywordPatternRe=t(o,!0),s&&(a.begin||(a.begin=/\B|\b/),l.beginRe=t(l.begin),a.end||a.endsWithParent||(a.end=/\B|\b/),a.end&&(l.endRe=t(l.end)),l.terminatorEnd=d(l.end)||"",a.endsWithParent&&s.terminatorEnd&&(l.terminatorEnd+=(a.end?"|":"")+s.terminatorEnd)),a.illegal&&(l.illegalRe=t(a.illegal)),a.contains||(a.contains=[]),a.contains=[].concat(...a.contains.map(e=>{var n;return(n="self"===e?a:e).variants&&!n.cachedVariants&&(n.cachedVariants=n.variants.map(e=>i(n,{variants:null},e))),n.cachedVariants?n.cachedVariants:!function e(n){return!!n&&(n.endsWithParent||e(n.starts))}(n)?Object.isFrozen(n)?i(n):n:i(n,{starts:n.starts?i(n.starts):null})})),a.contains.forEach(n=>{e(n,l)}),a.starts&&e(a.starts,s),l.matcher=(e=>{let n=new r;return e.contains.forEach(e=>n.addRule(e.begin,{rule:e,type:"begin"})),e.terminatorEnd&&n.addRule(e.terminatorEnd,{type:"end"}),e.illegal&&n.addRule(e.illegal,{type:"illegal"}),n})(l),l}(n)}(N),k="",M=s||x,S={},A=new $.__emitter($);(()=>{let e=[];for(let n=M;n!==N;n=n.parent)n.scope&&e.unshift(n.scope);e.forEach(e=>A.openNode(e))})();let C="",z=0,U=0,P=0,j=!1;try{for(M.matcher.considerAll();;){P++,j?j=!1:M.matcher.considerAll(),M.matcher.lastIndex=U;let K=M.matcher.exec(n);if(!K)break;let H=y(n.substring(U,K.index),K);U=K.index+H}return y(n.substring(U)),A.closeAllNodes(),A.finalize(),k=A.toHTML(),{language:e,value:k,relevance:z,illegal:!1,_emitter:A,_top:M}}catch(G){if(G.message&&G.message.includes("Illegal"))return{language:e,value:Z(n),illegal:!0,relevance:0,_illegalBy:{message:G.message,index:U,context:n.slice(U-100,U+100),mode:G.mode,resultSoFar:k},_emitter:A};if(l)return{language:e,value:Z(n),illegal:!1,relevance:0,errorRaised:G,_emitter:A,_top:M};throw G}}function v(e,n){n=n||$.languages||Object.keys(a);let t=(e=>{let n={value:Z(e),illegal:!1,relevance:0,_top:f,_emitter:new $.__emitter($)};return n._emitter.addText(e),n})(e),i=n.filter(O).filter(C).map(n=>w(n,e,!1));i.unshift(t);let r=i.sort((e,n)=>{if(e.relevance!==n.relevance)return n.relevance-e.relevance;if(e.language&&n.language){if(O(e.language).supersetOf===n.language)return 1;if(O(n.language).supersetOf===e.language)return -1}return 0}),[s,l]=r,o=s;return o.secondBest=l,o}function x(e){let n=null,t=(e=>{let n=e.className+" ";n+=e.parentNode?e.parentNode.className:"";let t=$.languageDetectRe.exec(n);if(t){let a=O(t[1]);return a||(U(o.replace("{}",t[1])),U("Falling back to no-highlight mode for this block.",e)),a?t[1]:"no-highlight"}return n.split(/\s+/).find(e=>y(e)||O(e))})(e);if(y(t))return;if(z("before:highlightElement",{el:e,language:t}),e.children.length>0&&($.ignoreUnescapedHTML||$.throwUnescapedHTML))throw new H("One of your code blocks includes unescaped 
HTML.",e.innerHTML);n=e;let a=n.textContent,i=t?N(a,{language:t,ignoreIllegals:!0}):v(a);e.innerHTML=i.value,((e,n,t)=>{let a=n&&r[n]||t;e.classList.add("hljs"),e.classList.add("language-"+a)})(e,t,i.language),e.result={language:i.language,re:i.relevance,relevance:i.relevance},i.secondBest&&(e.secondBest={language:i.secondBest.language,relevance:i.secondBest.relevance}),z("after:highlightElement",{el:e,result:i,text:a})}let k=!1;function M(){"loading"!==document.readyState?document.querySelectorAll($.cssSelector).forEach(x):k=!0}function O(e){return a[e=(e||"").toLowerCase()]||a[r[e]]}function S(e,{languageName:n}){"string"==typeof e&&(e=[e]),e.forEach(e=>{r[e.toLowerCase()]=n})}function C(e){let n=O(e);return n&&!n.disableAutodetect}function z(e,n){let t=e;s.forEach(e=>{e[t]&&e[t](n)})}for(let j in"undefined"!=typeof window&&window.addEventListener&&window.addEventListener("DOMContentLoaded",()=>{k&&M()},!1),Object.assign(n,{highlight:N,highlightAuto:v,highlightAll:M,highlightElement:x,highlightBlock:e=>(P("10.7.0","highlightBlock will be removed entirely in v12.0"),P("10.7.0","Please use highlightElement now."),x(e)),configure(e){$=G($,e)},initHighlighting(){M(),P("10.6.0","initHighlighting() deprecated. Use highlightAll() now.")},initHighlightingOnLoad(){M(),P("10.6.0","initHighlightingOnLoad() deprecated. Use highlightAll() now.")},registerLanguage(e,t){let i=null;try{i=t(n)}catch(r){if(F("Language definition for '{}' could not be registered.".replace("{}",e)),!l)throw r;F(r),i=f}i.name||(i.name=e),a[e]=i,i.rawDefinition=t.bind(null,n),i.aliases&&S(i.aliases,{languageName:e})},unregisterLanguage(e){for(let n of(delete a[e],Object.keys(r)))r[n]===e&&delete r[n]},listLanguages:()=>Object.keys(a),getLanguage:O,registerAliases:S,autoDetection:C,inherit:G,addPlugin(e){var n;(n=e)["before:highlightBlock"]&&!n["before:highlightElement"]&&(n["before:highlightElement"]=e=>{n["before:highlightBlock"](Object.assign({block:e.el},e))}),n["after:highlightBlock"]&&!n["after:highlightElement"]&&(n["after:highlightElement"]=e=>{n["after:highlightBlock"](Object.assign({block:e.el},e))}),s.push(e)}}),n.debugMode=()=>{l=!1},n.safeMode=()=>{l=!0},n.versionString="11.7.0",n.regex={concat:m,lookahead:g,either:p,optional:b,anyNumberOfTimes:u},A)"object"==typeof A[j]&&e.exports(A[j]);return Object.assign(n,A),n})({});let 
X=e=>({IMPORTANT:{scope:"meta",begin:"!important"},BLOCK_COMMENT:e.C_BLOCK_COMMENT_MODE,HEXCOLOR:{scope:"number",begin:/#(([0-9a-fA-F]{3,4})|(([0-9a-fA-F]{2}){3,4}))\b/},FUNCTION_DISPATCH:{className:"built_in",begin:/[\w-]+(?=\()/},ATTRIBUTE_SELECTOR_MODE:{scope:"selector-attr",begin:/\[/,end:/\]/,illegal:"$",contains:[e.APOS_STRING_MODE,e.QUOTE_STRING_MODE]},CSS_NUMBER_MODE:{scope:"number",begin:e.NUMBER_RE+"(%|em|ex|ch|rem|vw|vh|vmin|vmax|cm|mm|in|pt|pc|px|deg|grad|rad|turn|s|ms|Hz|kHz|dpi|dpcm|dppx)?",relevance:0},CSS_VARIABLE:{className:"attr",begin:/--[A-Za-z][A-Za-z0-9_-]*/}}),V=["a","abbr","address","article","aside","audio","b","blockquote","body","button","canvas","caption","cite","code","dd","del","details","dfn","div","dl","dt","em","fieldset","figcaption","figure","footer","form","h1","h2","h3","h4","h5","h6","header","hgroup","html","i","iframe","img","input","ins","kbd","label","legend","li","main","mark","menu","nav","object","ol","p","q","quote","samp","section","span","strong","summary","sup","table","tbody","td","textarea","tfoot","th","thead","time","tr","ul","var","video",],J=["any-hover","any-pointer","aspect-ratio","color","color-gamut","color-index","device-aspect-ratio","device-height","device-width","display-mode","forced-colors","grid","height","hover","inverted-colors","monochrome","orientation","overflow-block","overflow-inline","pointer","prefers-color-scheme","prefers-contrast","prefers-reduced-motion","prefers-reduced-transparency","resolution","scan","scripting","update","width","min-width","max-width","min-height","max-height",],Y=["active","any-link","blank","checked","current","default","defined","dir","disabled","drop","empty","enabled","first","first-child","first-of-type","fullscreen","future","focus","focus-visible","focus-within","has","host","host-context","hover","indeterminate","in-range","invalid","is","lang","last-child","last-of-type","left","link","local-link","not","nth-child","nth-col","nth-last-child","nth-last-col","nth-last-of-type","nth-of-type","only-child","only-of-type","optional","out-of-range","past","placeholder-shown","read-only","read-write","required","right","root","scope","target","target-within","user-invalid","valid","visited","where",],ee=["after","backdrop","before","cue","cue-region","first-letter","first-line","grammar-error","marker","part","placeholder","selection","slotted","spelling-error",],en=["align-content","align-items","align-self","all","animation","animation-delay","animation-direction","animation-duration","animation-fill-mode","animation-iteration-count","animation-name","animation-play-state","animation-timing-function","backface-visibility","background","background-attachment","background-blend-mode","background-clip","background-color","background-image","background-origin","background-position","background-repeat","background-size","block-size","border","border-block","border-block-color","border-block-end","border-block-end-color","border-block-end-style","border-block-end-width","border-block-start","border-block-start-color","border-block-start-style","border-block-start-width","border-block-style","border-block-width","border-bottom","border-bottom-color","border-bottom-left-radius","border-bottom-right-radius","border-bottom-style","border-bottom-width","border-collapse","border-color","border-image","border-image-outset","border-image-repeat","border-image-slice","border-image-source","border-image-width","border-inline","border-inline-color","border-inline-end","border-inline-end-color","border-in
line-end-style","border-inline-end-width","border-inline-start","border-inline-start-color","border-inline-start-style","border-inline-start-width","border-inline-style","border-inline-width","border-left","border-left-color","border-left-style","border-left-width","border-radius","border-right","border-right-color","border-right-style","border-right-width","border-spacing","border-style","border-top","border-top-color","border-top-left-radius","border-top-right-radius","border-top-style","border-top-width","border-width","bottom","box-decoration-break","box-shadow","box-sizing","break-after","break-before","break-inside","caption-side","caret-color","clear","clip","clip-path","clip-rule","color","column-count","column-fill","column-gap","column-rule","column-rule-color","column-rule-style","column-rule-width","column-span","column-width","columns","contain","content","content-visibility","counter-increment","counter-reset","cue","cue-after","cue-before","cursor","direction","display","empty-cells","filter","flex","flex-basis","flex-direction","flex-flow","flex-grow","flex-shrink","flex-wrap","float","flow","font","font-display","font-family","font-feature-settings","font-kerning","font-language-override","font-size","font-size-adjust","font-smoothing","font-stretch","font-style","font-synthesis","font-variant","font-variant-caps","font-variant-east-asian","font-variant-ligatures","font-variant-numeric","font-variant-position","font-variation-settings","font-weight","gap","glyph-orientation-vertical","grid","grid-area","grid-auto-columns","grid-auto-flow","grid-auto-rows","grid-column","grid-column-end","grid-column-start","grid-gap","grid-row","grid-row-end","grid-row-start","grid-template","grid-template-areas","grid-template-columns","grid-template-rows","hanging-punctuation","height","hyphens","icon","image-orientation","image-rendering","image-resolution","ime-mode","inline-size","isolation","justify-content","left","letter-spacing","line-break","line-height","list-style","list-style-image","list-style-position","list-style-type","margin","margin-block","margin-block-end","margin-block-start","margin-bottom","margin-inline","margin-inline-end","margin-inline-start","margin-left","margin-right","margin-top","marks","mask","mask-border","mask-border-mode","mask-border-outset","mask-border-repeat","mask-border-slice","mask-border-source","mask-border-width","mask-clip","mask-composite","mask-image","mask-mode","mask-origin","mask-position","mask-repeat","mask-size","mask-type","max-block-size","max-height","max-inline-size","max-width","min-block-size","min-height","min-inline-size","min-width","mix-blend-mode","nav-down","nav-index","nav-left","nav-right","nav-up","none","normal","object-fit","object-position","opacity","order","orphans","outline","outline-color","outline-offset","outline-style","outline-width","overflow","overflow-wrap","overflow-x","overflow-y","padding","padding-block","padding-block-end","padding-block-start","padding-bottom","padding-inline","padding-inline-end","padding-inline-start","padding-left","padding-right","padding-top","page-break-after","page-break-before","page-break-inside","pause","pause-after","pause-before","perspective","perspective-origin","pointer-events","position","quotes","resize","rest","rest-after","rest-before","right","row-gap","scroll-margin","scroll-margin-block","scroll-margin-block-end","scroll-margin-block-start","scroll-margin-bottom","scroll-margin-inline","scroll-margin-inline-end","scroll-margin-inline-start","scroll-margin-left","
scroll-margin-right","scroll-margin-top","scroll-padding","scroll-padding-block","scroll-padding-block-end","scroll-padding-block-start","scroll-padding-bottom","scroll-padding-inline","scroll-padding-inline-end","scroll-padding-inline-start","scroll-padding-left","scroll-padding-right","scroll-padding-top","scroll-snap-align","scroll-snap-stop","scroll-snap-type","scrollbar-color","scrollbar-gutter","scrollbar-width","shape-image-threshold","shape-margin","shape-outside","speak","speak-as","src","tab-size","table-layout","text-align","text-align-all","text-align-last","text-combine-upright","text-decoration","text-decoration-color","text-decoration-line","text-decoration-style","text-emphasis","text-emphasis-color","text-emphasis-position","text-emphasis-style","text-indent","text-justify","text-orientation","text-overflow","text-rendering","text-shadow","text-transform","text-underline-position","top","transform","transform-box","transform-origin","transform-style","transition","transition-delay","transition-duration","transition-property","transition-timing-function","unicode-bidi","vertical-align","visibility","voice-balance","voice-duration","voice-family","voice-pitch","voice-range","voice-rate","voice-stress","voice-volume","white-space","widows","width","will-change","word-break","word-spacing","word-wrap","writing-mode","z-index",].reverse(),et=Y.concat(ee);var ea="\\.([0-9](_*[0-9])*)",ei="[0-9a-fA-F](_*[0-9a-fA-F])*",er={className:"number",variants:[{begin:`(\\b([0-9](_*[0-9])*)((${ea})|\\.)?|(${ea}))[eE][+-]?([0-9](_*[0-9])*)[fFdD]?\\b`},{begin:`\\b([0-9](_*[0-9])*)((${ea})[fFdD]?\\b|\\.([fFdD]\\b)?)`},{begin:`(${ea})[fFdD]?\\b`},{begin:"\\b([0-9](_*[0-9])*)[fFdD]\\b"},{begin:`\\b0[xX]((${ei})\\.?|(${ei})?\\.(${ei}))[pP][+-]?([0-9](_*[0-9])*)[fFdD]?\\b`},{begin:"\\b(0|[1-9](_*[0-9])*)[lL]?\\b"},{begin:`\\b0[xX](${ei})[lL]?\\b`},{begin:"\\b0(_*[0-7])*[lL]?\\b"},{begin:"\\b0[bB][01](_*[01])*[lL]?\\b"},],relevance:0};let es="[A-Za-z$_][0-9A-Za-z$_]*",el=["as","in","of","if","for","while","finally","var","new","function","do","return","void","else","break","catch","instanceof","with","throw","case","default","try","switch","continue","typeof","delete","let","yield","const","class","debugger","async","await","static","import","from","export","extends",],eo=["true","false","null","undefined","NaN","Infinity"],ec=["Object","Function","Boolean","Symbol","Math","Date","Number","BigInt","String","RegExp","Array","Float32Array","Float64Array","Int8Array","Uint8Array","Uint8ClampedArray","Int16Array","Int32Array","Uint16Array","Uint32Array","BigInt64Array","BigUint64Array","Set","Map","WeakSet","WeakMap","ArrayBuffer","SharedArrayBuffer","Atomics","DataView","JSON","Promise","Generator","GeneratorFunction","AsyncFunction","Reflect","Proxy","Intl","WebAssembly",],ed=["Error","EvalError","InternalError","RangeError","ReferenceError","SyntaxError","TypeError","URIError",],eg=["setInterval","setTimeout","clearInterval","clearTimeout","require","exports","eval","isFinite","isNaN","parseFloat","parseInt","decodeURI","decodeURIComponent","encodeURI","encodeURIComponent","escape","unescape",],eu=["arguments","this","super","console","window","document","localStorage","module","global",],eb=[].concat(eg,ec,ed);function em(e){var n;let t=e.regex,a=es,i={begin:/<[A-Za-z0-9\\._:-]+/,end:/\/[A-Za-z0-9\\._:-]+>|\/>/,isTrulyOpeningTag(e,n){let t=e[0].length+e.index,a=e.input[t];if("<"===a||","===a)return void n.ignoreMatch();let i;">"===a&&(((e,{after:n})=>{let t=""+e[0].slice(1);return 
-1!==e.input.indexOf(t,n)})(e,{after:t})||n.ignoreMatch());let r=e.input.substring(t);((i=r.match(/^\s*=/))||(i=r.match(/^\s+extends\s+/))&&0===i.index)&&n.ignoreMatch()}},r={$pattern:es,keyword:el,literal:eo,built_in:eb,"variable.language":eu},s="\\.([0-9](_?[0-9])*)",l="0|[1-9](_?[0-9])*|0[0-7]*[89][0-9]*",o={className:"number",variants:[{begin:`(\\b(${l})((${s})|\\.)?|(${s}))[eE][+-]?([0-9](_?[0-9])*)\\b`},{begin:`\\b(${l})\\b((${s})\\b|\\.)?|(${s})\\b`},{begin:"\\b(0|[1-9](_?[0-9])*)n\\b"},{begin:"\\b0[xX][0-9a-fA-F](_?[0-9a-fA-F])*n?\\b"},{begin:"\\b0[bB][0-1](_?[0-1])*n?\\b"},{begin:"\\b0[oO][0-7](_?[0-7])*n?\\b"},{begin:"\\b0[0-7]+n?\\b"},],relevance:0},c={className:"subst",begin:"\\$\\{",end:"\\}",keywords:r,contains:[]},d={begin:"html`",end:"",starts:{end:"`",returnEnd:!1,contains:[e.BACKSLASH_ESCAPE,c],subLanguage:"xml"}},g={begin:"css`",end:"",starts:{end:"`",returnEnd:!1,contains:[e.BACKSLASH_ESCAPE,c],subLanguage:"css"}},u={className:"string",begin:"`",end:"`",contains:[e.BACKSLASH_ESCAPE,c]},b={className:"comment",variants:[e.COMMENT(/\/\*\*(?!\/)/,"\\*/",{relevance:0,contains:[{begin:"(?=@[A-Za-z]+)",relevance:0,contains:[{className:"doctag",begin:"@[A-Za-z]+"},{className:"type",begin:"\\{",end:"\\}",excludeEnd:!0,excludeBegin:!0,relevance:0},{className:"variable",begin:a+"(?=\\s*(-)|$)",endsParent:!0,relevance:0},{begin:/(?=[^\n])\s/,relevance:0},]},]}),e.C_BLOCK_COMMENT_MODE,e.C_LINE_COMMENT_MODE,]},m=[e.APOS_STRING_MODE,e.QUOTE_STRING_MODE,d,g,u,{match:/\$\d+/},o,];c.contains=m.concat({begin:/\{/,end:/\}/,keywords:r,contains:["self"].concat(m)});let p=[].concat(b,c.contains),h=p.concat([{begin:/\(/,end:/\)/,keywords:r,contains:["self"].concat(p)},]),f={className:"params",begin:/\(/,end:/\)/,excludeBegin:!0,excludeEnd:!0,keywords:r,contains:h},E={variants:[{match:[/class/,/\s+/,a,/\s+/,/extends/,/\s+/,t.concat(a,"(",t.concat(/\./,a),")*"),],scope:{1:"keyword",3:"title.class",5:"keyword",7:"title.class.inherited"}},{match:[/class/,/\s+/,a],scope:{1:"keyword",3:"title.class"}},]},$={relevance:0,match:t.either(/\bJSON/,/\b[A-Z][a-z]+([A-Z][a-z]*|\d)*/,/\b[A-Z]{2,}([A-Z][a-z]+|\d)+([A-Z][a-z]*)*/,/\b[A-Z]{2,}[a-z]+([A-Z][a-z]+|\d)*([A-Z][a-z]*)*/),className:"title.class",keywords:{_:[...ec,...ed]}},y={match:t.concat(/\b/,(n=[...eg,"super","import"],t.concat("(?!",n.join("|"),")")),a,t.lookahead(/\(/)),className:"title.function",relevance:0},N={begin:t.concat(/\./,t.lookahead(t.concat(a,/(?![0-9A-Za-z$_(])/))),end:a,excludeBegin:!0,keywords:"prototype",className:"property",relevance:0},w="(\\([^()]*(\\([^()]*(\\([^()]*\\)[^()]*)*\\)[^()]*)*\\)|"+e.UNDERSCORE_IDENT_RE+")\\s*=>",v={match:[/const|var|let/,/\s+/,a,/\s*/,/=\s*/,/(async\s*)?/,t.lookahead(w),],keywords:"async",className:{1:"keyword",3:"title.function"},contains:[f]};return{name:"Javascript",aliases:["js","jsx","mjs","cjs"],keywords:r,exports:{PARAMS_CONTAINS:h,CLASS_REFERENCE:$},illegal:/#(?![$_A-z])/,contains:[e.SHEBANG({label:"shebang",binary:"node",relevance:5}),{label:"use_strict",className:"meta",relevance:10,begin:/^\s*['"]use (strict|asm)['"]/},e.APOS_STRING_MODE,e.QUOTE_STRING_MODE,d,g,u,b,{match:/\$\d+/},o,$,{className:"attr",begin:a+t.lookahead(":"),relevance:0},v,{begin:"("+e.RE_STARTERS_RE+"|\\b(case|return|throw)\\b)\\s*",keywords:"return throw 
case",relevance:0,contains:[b,e.REGEXP_MODE,{className:"function",begin:w,returnBegin:!0,end:"\\s*=>",contains:[{className:"params",variants:[{begin:e.UNDERSCORE_IDENT_RE,relevance:0},{className:null,begin:/\(\s*\)/,skip:!0},{begin:/\(/,end:/\)/,excludeBegin:!0,excludeEnd:!0,keywords:r,contains:h},]},]},{begin:/,/,relevance:0},{match:/\s+/,relevance:0},{variants:[{begin:"<>",end:">"},{match:/<[A-Za-z0-9\\._:-]+\s*\/>/},{begin:i.begin,"on:begin":i.isTrulyOpeningTag,end:i.end},],subLanguage:"xml",contains:[{begin:i.begin,end:i.end,skip:!0,contains:["self"]},]},]},{variants:[{match:[/function/,/\s+/,a,/(?=\s*\()/]},{match:[/function/,/\s*(?=\()/]},],className:{1:"keyword",3:"title.function"},label:"func.def",contains:[f],illegal:/%/},{beginKeywords:"while if switch catch for"},{begin:"\\b(?!function)"+e.UNDERSCORE_IDENT_RE+"\\([^()]*(\\([^()]*(\\([^()]*\\)[^()]*)*\\)[^()]*)*\\)\\s*\\{",returnBegin:!0,label:"func.def",contains:[f,e.inherit(e.TITLE_MODE,{begin:a,className:"title.function"}),]},{match:/\.\.\./,relevance:0},N,{match:"\\$"+a,relevance:0},{match:[/\bconstructor(?=\s*\()/],className:{1:"title.function"},contains:[f]},y,{relevance:0,match:/\b[A-Z][A-Z_0-9]+\b/,className:"variable.constant"},E,{match:[/get|set/,/\s+/,a,/(?=\()/],className:{1:"keyword",3:"title.function"},contains:[{begin:/\(\)/},f]},{match:/\$[(.]/},]}}let ep=e=>m(/\b/,e,/\w$/.test(e)?/\b/:/\B/),e8=["Protocol","Type"].map(ep),eh=["init","self"].map(ep),ef=["Any","Self"],eE=["actor","any","associatedtype","async","await",/as\?/,/as!/,"as","break","case","catch","class","continue","convenience","default","defer","deinit","didSet","distributed","do","dynamic","else","enum","extension","fallthrough",/fileprivate\(set\)/,"fileprivate","final","for","func","get","guard","if","import","indirect","infix",/init\?/,/init!/,"inout",/internal\(set\)/,"internal","in","is","isolated","nonisolated","lazy","let","mutating","nonmutating",/open\(set\)/,"open","operator","optional","override","postfix","precedencegroup","prefix",/private\(set\)/,"private","protocol",/public\(set\)/,"public","repeat","required","rethrows","return","set","some","static","struct","subscript","super","switch","throws","throw",/try\?/,/try!/,"try","typealias",/unowned\(safe\)/,/unowned\(unsafe\)/,"unowned","var","weak","where","while","willSet",],e$=["false","nil","true"],ey=["assignment","associativity","higherThan","left","lowerThan","none","right",],eN=["#colorLiteral","#column","#dsohandle","#else","#elseif","#endif","#error","#file","#fileID","#fileLiteral","#filePath","#function","#if","#imageLiteral","#keyPath","#line","#selector","#sourceLocation","#warn_unqualified_access","#warning",],ew=["abs","all","any","assert","assertionFailure","debugPrint","dump","fatalError","getVaList","isKnownUniquelyReferenced","max","min","numericCast","pointwiseMax","pointwiseMin","precondition","preconditionFailure","print","readLine","repeatElement","sequence","stride","swap","swift_unboxFromSwiftValueWithType","transcode","type","unsafeBitCast","unsafeDowncast","withExtendedLifetime","withUnsafeMutablePointer","withUnsafePointer","withVaList","withoutActuallyEscaping","zip",],ev=p(/[/=\-+!*%<>&|^~?]/,/[\u00A1-\u00A7]/,/[\u00A9\u00AB]/,/[\u00AC\u00AE]/,/[\u00B0\u00B1]/,/[\u00B6\u00BB\u00BF\u00D7\u00F7]/,/[\u2016-\u2017]/,/[\u2020-\u2027]/,/[\u2030-\u203E]/,/[\u2041-\u2053]/,/[\u2055-\u205E]/,/[\u2190-\u23FF]/,/[\u2500-\u2775]/,/[\u2794-\u2BFF]/,/[\u2E00-\u2E7F]/,/[\u3001-\u3003]/,/[\u3008-\u3020]/,/[\u3030]/),ex=p(ev,/[\u0300-\u036F]/,/[\u1DC0-\u1DFF]/,/[\u20D0-\u20FF
]/,/[\uFE00-\uFE0F]/,/[\uFE20-\uFE2F]/),ek=m(ev,ex,"*"),eM=p(/[a-zA-Z_]/,/[\u00A8\u00AA\u00AD\u00AF\u00B2-\u00B5\u00B7-\u00BA]/,/[\u00BC-\u00BE\u00C0-\u00D6\u00D8-\u00F6\u00F8-\u00FF]/,/[\u0100-\u02FF\u0370-\u167F\u1681-\u180D\u180F-\u1DBF]/,/[\u1E00-\u1FFF]/,/[\u200B-\u200D\u202A-\u202E\u203F-\u2040\u2054\u2060-\u206F]/,/[\u2070-\u20CF\u2100-\u218F\u2460-\u24FF\u2776-\u2793]/,/[\u2C00-\u2DFF\u2E80-\u2FFF]/,/[\u3004-\u3007\u3021-\u302F\u3031-\u303F\u3040-\uD7FF]/,/[\uF900-\uFD3D\uFD40-\uFDCF\uFDF0-\uFE1F\uFE30-\uFE44]/,/[\uFE47-\uFEFE\uFF00-\uFFFD]/),eO=p(eM,/\d/,/[\u0300-\u036F\u1DC0-\u1DFF\u20D0-\u20FF\uFE20-\uFE2F]/),eS=m(eM,eO,"*"),eA=m(/[A-Z]/,eO,"*"),eC=["autoclosure",m(/convention\(/,p("swift","block","c"),/\)/),"discardableResult","dynamicCallable","dynamicMemberLookup","escaping","frozen","GKInspectable","IBAction","IBDesignable","IBInspectable","IBOutlet","IBSegueAction","inlinable","main","nonobjc","NSApplicationMain","NSCopying","NSManaged",m(/objc\(/,eS,/\)/),"objc","objcMembers","propertyWrapper","requires_stored_property_inits","resultBuilder","testable","UIApplicationMain","unknown","usableFromInline",],eT=["iOS","iOSApplicationExtension","macOS","macOSApplicationExtension","macCatalyst","macCatalystApplicationExtension","watchOS","watchOSApplicationExtension","tvOS","tvOSApplicationExtension","swift",];var eR=Object.freeze({__proto__:null,grmr_bash(e){let n=e.regex,t={};Object.assign(t,{className:"variable",variants:[{begin:n.concat(/\$[\w\d#@][\w\d_]*/,"(?![\\w\\d])(?![$])")},{begin:/\$\{/,end:/\}/,contains:["self",{begin:/:-/,contains:[t]}]},]});let a={className:"subst",begin:/\$\(/,end:/\)/,contains:[e.BACKSLASH_ESCAPE]},i={begin:/<<-?\s*(?=\w+)/,starts:{contains:[e.END_SAME_AS_BEGIN({begin:/(\w+)/,end:/(\w+)/,className:"string"}),]}},r={className:"string",begin:/"/,end:/"/,contains:[e.BACKSLASH_ESCAPE,t,a]};a.contains.push(r);let 
s={begin:/\$?\(\(/,end:/\)\)/,contains:[{begin:/\d+#[0-9a-f]+/,className:"number"},e.NUMBER_MODE,t,]},l=e.SHEBANG({binary:"(fish|bash|zsh|sh|csh|ksh|tcsh|dash|scsh)",relevance:10}),o={className:"function",begin:/\w[\w\d_]*\s*\(\s*\)\s*\{/,returnBegin:!0,contains:[e.inherit(e.TITLE_MODE,{begin:/\w[\w\d_]*/})],relevance:0};return{name:"Bash",aliases:["sh"],keywords:{$pattern:/\b[a-z][a-z0-9._-]+\b/,keyword:["if","then","else","elif","fi","for","while","in","do","done","case","esac","function",],literal:["true","false"],built_in:["break","cd","continue","eval","exec","exit","export","getopts","hash","pwd","readonly","return","shift","test","times","trap","umask","unset","alias","bind","builtin","caller","command","declare","echo","enable","help","let","local","logout","mapfile","printf","read","readarray","source","type","typeset","ulimit","unalias","set","shopt","autoload","bg","bindkey","bye","cap","chdir","clone","comparguments","compcall","compctl","compdescribe","compfiles","compgroups","compquote","comptags","comptry","compvalues","dirs","disable","disown","echotc","echoti","emulate","fc","fg","float","functions","getcap","getln","history","integer","jobs","kill","limit","log","noglob","popd","print","pushd","pushln","rehash","sched","setcap","setopt","stat","suspend","ttyctl","unfunction","unhash","unlimit","unsetopt","vared","wait","whence","where","which","zcompile","zformat","zftp","zle","zmodload","zparseopts","zprof","zpty","zregexparse","zsocket","zstyle","ztcp","chcon","chgrp","chown","chmod","cp","dd","df","dir","dircolors","ln","ls","mkdir","mkfifo","mknod","mktemp","mv","realpath","rm","rmdir","shred","sync","touch","truncate","vdir","b2sum","base32","base64","cat","cksum","comm","csplit","cut","expand","fmt","fold","head","join","md5sum","nl","numfmt","od","paste","ptx","pr","sha1sum","sha224sum","sha256sum","sha384sum","sha512sum","shuf","sort","split","sum","tac","tail","tr","tsort","unexpand","uniq","wc","arch","basename","chroot","date","dirname","du","echo","env","expr","factor","groups","hostid","id","link","logname","nice","nohup","nproc","pathchk","pinky","printenv","printf","pwd","readlink","runcon","seq","sleep","stat","stdbuf","stty","tee","test","timeout","tty","uname","unlink","uptime","users","who","whoami","yes",]},contains:[l,e.SHEBANG(),o,s,e.HASH_COMMENT_MODE,i,{match:/(\/[a-z._-]+)+/},r,{className:"",begin:/\\"/},{className:"string",begin:/'/,end:/'/},t,]}},grmr_c(e){let n=e.regex,t=e.COMMENT("//","$",{contains:[{begin:/\\\n/}]}),a="[a-zA-Z_]\\w*::",i="(decltype\\(auto\\)|"+n.optional(a)+"[a-zA-Z_]\\w*"+n.optional("<[^<>]+>")+")",r={className:"type",variants:[{begin:"\\b[a-z\\d_]*_t\\b"},{match:/\batomic_[a-z]{3,6}\b/},]},s={className:"string",variants:[{begin:'(u8?|U|L)?"',end:'"',illegal:"\\n",contains:[e.BACKSLASH_ESCAPE]},{begin:"(u8?|U|L)?'(\\\\(x[0-9A-Fa-f]{2}|u[0-9A-Fa-f]{4,8}|[0-7]{3}|\\S)|.)",end:"'",illegal:"."},e.END_SAME_AS_BEGIN({begin:/(?:u8?|U|L)?R"([^()\\ ]{0,16})\(/,end:/\)([^()\\ ]{0,16})"/}),]},l={className:"number",variants:[{begin:"\\b(0b[01']+)"},{begin:"(-?)\\b([\\d']+(\\.[\\d']*)?|\\.[\\d']+)((ll|LL|l|L)(u|U)?|(u|U)(ll|LL|l|L)?|f|F|b|B)"},{begin:"(-?)(\\b0[xX][a-fA-F0-9']+|(\\b[\\d']+(\\.[\\d']*)?|\\.[\\d']+)([eE][-+]?[\\d']+)?)"},],relevance:0},o={className:"meta",begin:/#\s*[a-z]+\b/,end:/$/,keywords:{keyword:"if else elif endif define undef warning error line pragma _Pragma ifdef ifndef 
include"},contains:[{begin:/\\\n/,relevance:0},e.inherit(s,{className:"string"}),{className:"string",begin:/<.*?>/},t,e.C_BLOCK_COMMENT_MODE,]},c={className:"title",begin:n.optional(a)+e.IDENT_RE,relevance:0},d=n.optional(a)+e.IDENT_RE+"\\s*\\(",g={keyword:["asm","auto","break","case","continue","default","do","else","enum","extern","for","fortran","goto","if","inline","register","restrict","return","sizeof","struct","switch","typedef","union","volatile","while","_Alignas","_Alignof","_Atomic","_Generic","_Noreturn","_Static_assert","_Thread_local","alignas","alignof","noreturn","static_assert","thread_local","_Pragma",],type:["float","double","signed","unsigned","int","short","long","char","void","_Bool","_Complex","_Imaginary","_Decimal32","_Decimal64","_Decimal128","const","static","complex","bool","imaginary",],literal:"true false NULL",built_in:"std string wstring cin cout cerr clog stdin stdout stderr stringstream istringstream ostringstream auto_ptr deque list queue stack vector map set pair bitset multiset multimap unordered_set unordered_map unordered_multiset unordered_multimap priority_queue make_pair array shared_ptr abort terminate abs acos asin atan2 atan calloc ceil cosh cos exit exp fabs floor fmod fprintf fputs free frexp fscanf future isalnum isalpha iscntrl isdigit isgraph islower isprint ispunct isspace isupper isxdigit tolower toupper labs ldexp log10 log malloc realloc memchr memcmp memcpy memset modf pow printf putchar puts scanf sinh sin snprintf sprintf sqrt sscanf strcat strchr strcmp strcpy strcspn strlen strncat strncmp strncpy strpbrk strrchr strspn strstr tanh tan vfprintf vprintf vsprintf endl initializer_list unique_ptr"},u=[o,r,t,e.C_BLOCK_COMMENT_MODE,l,s],b={variants:[{begin:/=/,end:/;/},{begin:/\(/,end:/\)/},{beginKeywords:"new throw return else",end:/;/},],keywords:g,contains:u.concat([{begin:/\(/,end:/\)/,keywords:g,contains:u.concat(["self"]),relevance:0},]),relevance:0},m={begin:"("+i+"[\\*&\\s]+)+"+d,returnBegin:!0,end:/[{;=]/,excludeEnd:!0,keywords:g,illegal:/[^\w\s\*&:<>.]/,contains:[{begin:"decltype\\(auto\\)",keywords:g,relevance:0},{begin:d,returnBegin:!0,contains:[e.inherit(c,{className:"title.function"}),],relevance:0},{relevance:0,match:/,/},{className:"params",begin:/\(/,end:/\)/,keywords:g,relevance:0,contains:[t,e.C_BLOCK_COMMENT_MODE,s,l,r,{begin:/\(/,end:/\)/,keywords:g,relevance:0,contains:["self",t,e.C_BLOCK_COMMENT_MODE,s,l,r]},]},r,t,e.C_BLOCK_COMMENT_MODE,o,]};return{name:"C",aliases:["h"],keywords:g,disableAutodetect:!0,illegal:"",contains:[].concat(b,m,u,[o,{begin:e.IDENT_RE+"::",keywords:g},{className:"class",beginKeywords:"enum class struct union",end:/[{;:<>=]/,contains:[{beginKeywords:"final class struct"},e.TITLE_MODE,]},]),exports:{preprocessor:o,strings:s,keywords:g}}},grmr_cpp(e){let n=e.regex,t=e.COMMENT("//","$",{contains:[{begin:/\\\n/}]}),a="[a-zA-Z_]\\w*::",i="(?!struct)(decltype\\(auto\\)|"+n.optional(a)+"[a-zA-Z_]\\w*"+n.optional("<[^<>]+>")+")",r={className:"type",begin:"\\b[a-z\\d_]*_t\\b"},s={className:"string",variants:[{begin:'(u8?|U|L)?"',end:'"',illegal:"\\n",contains:[e.BACKSLASH_ESCAPE]},{begin:"(u8?|U|L)?'(\\\\(x[0-9A-Fa-f]{2}|u[0-9A-Fa-f]{4,8}|[0-7]{3}|\\S)|.)",end:"'",illegal:"."},e.END_SAME_AS_BEGIN({begin:/(?:u8?|U|L)?R"([^()\\ ]{0,16})\(/,end:/\)([^()\\ 
]{0,16})"/}),]},l={className:"number",variants:[{begin:"\\b(0b[01']+)"},{begin:"(-?)\\b([\\d']+(\\.[\\d']*)?|\\.[\\d']+)((ll|LL|l|L)(u|U)?|(u|U)(ll|LL|l|L)?|f|F|b|B)"},{begin:"(-?)(\\b0[xX][a-fA-F0-9']+|(\\b[\\d']+(\\.[\\d']*)?|\\.[\\d']+)([eE][-+]?[\\d']+)?)"},],relevance:0},o={className:"meta",begin:/#\s*[a-z]+\b/,end:/$/,keywords:{keyword:"if else elif endif define undef warning error line pragma _Pragma ifdef ifndef include"},contains:[{begin:/\\\n/,relevance:0},e.inherit(s,{className:"string"}),{className:"string",begin:/<.*?>/},t,e.C_BLOCK_COMMENT_MODE,]},c={className:"title",begin:n.optional(a)+e.IDENT_RE,relevance:0},d=n.optional(a)+e.IDENT_RE+"\\s*\\(",g={type:["bool","char","char16_t","char32_t","char8_t","double","float","int","long","short","void","wchar_t","unsigned","signed","const","static",],keyword:["alignas","alignof","and","and_eq","asm","atomic_cancel","atomic_commit","atomic_noexcept","auto","bitand","bitor","break","case","catch","class","co_await","co_return","co_yield","compl","concept","const_cast|10","consteval","constexpr","constinit","continue","decltype","default","delete","do","dynamic_cast|10","else","enum","explicit","export","extern","false","final","for","friend","goto","if","import","inline","module","mutable","namespace","new","noexcept","not","not_eq","nullptr","operator","or","or_eq","override","private","protected","public","reflexpr","register","reinterpret_cast|10","requires","return","sizeof","static_assert","static_cast|10","struct","switch","synchronized","template","this","thread_local","throw","transaction_safe","transaction_safe_dynamic","true","try","typedef","typeid","typename","union","using","virtual","volatile","while","xor","xor_eq",],literal:["NULL","false","nullopt","nullptr","true"],built_in:["_Pragma"],_type_hints:["any","auto_ptr","barrier","binary_semaphore","bitset","complex","condition_variable","condition_variable_any","counting_semaphore","deque","false_type","future","imaginary","initializer_list","istringstream","jthread","latch","lock_guard","multimap","multiset","mutex","optional","ostringstream","packaged_task","pair","promise","priority_queue","queue","recursive_mutex","recursive_timed_mutex","scoped_lock","set","shared_future","shared_lock","shared_mutex","shared_timed_mutex","shared_ptr","stack","string_view","stringstream","timed_mutex","thread","true_type","tuple","unique_lock","unique_ptr","unordered_map","unordered_multimap","unordered_multiset","unordered_set","variant","vector","weak_ptr","wstring","wstring_view",]},u={className:"function.dispatch",relevance:0,keywords:{_hint:["abort","abs","acos","apply","as_const","asin","atan","atan2","calloc","ceil","cerr","cin","clog","cos","cosh","cout","declval","endl","exchange","exit","exp","fabs","floor","fmod","forward","fprintf","fputs","free","frexp","fscanf","future","invoke","isalnum","isalpha","iscntrl","isdigit","isgraph","islower","isprint","ispunct","isspace","isupper","isxdigit","labs","launder","ldexp","log","log10","make_pair","make_shared","make_shared_for_overwrite","make_tuple","make_unique","malloc","memchr","memcmp","memcpy","memset","modf","move","pow","printf","putchar","puts","realloc","scanf","sin","sinh","snprintf","sprintf","sqrt","sscanf","std","stderr","stdin","stdout","strcat","strchr","strcmp","strcpy","strcspn","strlen","strncat","strncmp","strncpy","strpbrk","strrchr","strspn","strstr","swap","tan","tanh","terminate","to_underlying","tolower","toupper","vfprintf","visit","vprintf","vsprintf",]},begin:n.concat(/\b/,/(?!decltype)/,/(?!if)/,/(?!f
or)/,/(?!switch)/,/(?!while)/,e.IDENT_RE,n.lookahead(/(<[^<>]+>|)\s*\(/))},b=[u,o,r,t,e.C_BLOCK_COMMENT_MODE,l,s],m={variants:[{begin:/=/,end:/;/},{begin:/\(/,end:/\)/},{beginKeywords:"new throw return else",end:/;/},],keywords:g,contains:b.concat([{begin:/\(/,end:/\)/,keywords:g,contains:b.concat(["self"]),relevance:0},]),relevance:0},p={className:"function",begin:"("+i+"[\\*&\\s]+)+"+d,returnBegin:!0,end:/[{;=]/,excludeEnd:!0,keywords:g,illegal:/[^\w\s\*&:<>.]/,contains:[{begin:"decltype\\(auto\\)",keywords:g,relevance:0},{begin:d,returnBegin:!0,contains:[c],relevance:0},{begin:/::/,relevance:0},{begin:/:/,endsWithParent:!0,contains:[s,l]},{relevance:0,match:/,/},{className:"params",begin:/\(/,end:/\)/,keywords:g,relevance:0,contains:[t,e.C_BLOCK_COMMENT_MODE,s,l,r,{begin:/\(/,end:/\)/,keywords:g,relevance:0,contains:["self",t,e.C_BLOCK_COMMENT_MODE,s,l,r]},]},r,t,e.C_BLOCK_COMMENT_MODE,o,]};return{name:"C++",aliases:["cc","c++","h++","hpp","hh","hxx","cxx"],keywords:g,illegal:"",classNameAliases:{"function.dispatch":"built_in"},contains:[].concat(m,p,u,b,[o,{begin:"\\b(deque|list|queue|priority_queue|pair|stack|vector|map|set|bitset|multiset|multimap|unordered_map|unordered_set|unordered_multiset|unordered_multimap|array|tuple|optional|variant|function)\\s*<(?!<)",end:">",keywords:g,contains:["self",r]},{begin:e.IDENT_RE+"::",keywords:g},{match:[/\b(?:enum(?:\s+(?:class|struct))?|class|struct|union)/,/\s+/,/\w+/,],className:{1:"keyword",3:"title.class"}},])}},grmr_csharp(e){let n={keyword:["abstract","as","base","break","case","catch","class","const","continue","do","else","event","explicit","extern","finally","fixed","for","foreach","goto","if","implicit","in","interface","internal","is","lock","namespace","new","operator","out","override","params","private","protected","public","readonly","record","ref","return","scoped","sealed","sizeof","stackalloc","static","struct","switch","this","throw","try","typeof","unchecked","unsafe","using","virtual","void","volatile","while",].concat(["add","alias","and","ascending","async","await","by","descending","equals","from","get","global","group","init","into","join","let","nameof","not","notnull","on","or","orderby","partial","remove","select","set","unmanaged","value|0","var","when","where","with","yield",]),built_in:["bool","byte","char","decimal","delegate","double","dynamic","enum","float","int","long","nint","nuint","object","sbyte","short","string","ulong","uint","ushort",],literal:["default","false","null","true"]},t=e.inherit(e.TITLE_MODE,{begin:"[a-zA-Z](\\.?\\w)*"}),a={className:"number",variants:[{begin:"\\b(0b[01']+)"},{begin:"(-?)\\b([\\d']+(\\.[\\d']*)?|\\.[\\d']+)(u|U|l|L|ul|UL|f|F|b|B)"},{begin:"(-?)(\\b0[xX][a-fA-F0-9']+|(\\b[\\d']+(\\.[\\d']*)?|\\.[\\d']+)([eE][-+]?[\\d']+)?)"},],relevance:0},i={className:"string",begin:'@"',end:'"',contains:[{begin:'""'}]},r=e.inherit(i,{illegal:/\n/}),s={className:"subst",begin:/\{/,end:/\}/,keywords:n},l=e.inherit(s,{illegal:/\n/}),o={className:"string",begin:/\$"/,end:'"',illegal:/\n/,contains:[{begin:/\{\{/},{begin:/\}\}/},e.BACKSLASH_ESCAPE,l,]},c={className:"string",begin:/\$@"/,end:'"',contains:[{begin:/\{\{/},{begin:/\}\}/},{begin:'""'},s,]},d=e.inherit(c,{illegal:/\n/,contains:[{begin:/\{\{/},{begin:/\}\}/},{begin:'""'},l]});s.contains=[c,o,i,e.APOS_STRING_MODE,e.QUOTE_STRING_MODE,a,e.C_BLOCK_COMMENT_MODE,],l.contains=[d,o,r,e.APOS_STRING_MODE,e.QUOTE_STRING_MODE,a,e.inherit(e.C_BLOCK_COMMENT_MODE,{illegal:/\n/}),];let 
g={variants:[c,o,i,e.APOS_STRING_MODE,e.QUOTE_STRING_MODE]},u={begin:"<",end:">",contains:[{beginKeywords:"in out"},t]},b=e.IDENT_RE+"(<"+e.IDENT_RE+"(\\s*,\\s*"+e.IDENT_RE+")*>)?(\\[\\])?",m={begin:"@"+e.IDENT_RE,relevance:0};return{name:"C#",aliases:["cs","c#"],keywords:n,illegal:/::/,contains:[e.COMMENT("///","$",{returnBegin:!0,contains:[{className:"doctag",variants:[{begin:"///",relevance:0},{begin:""},{begin:"?",end:">"},]},]}),e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE,{className:"meta",begin:"#",end:"$",keywords:{keyword:"if else elif endif define undef warning error line region endregion pragma checksum"}},g,a,{beginKeywords:"class interface",relevance:0,end:/[{;=]/,illegal:/[^\s:,]/,contains:[{beginKeywords:"where class"},t,u,e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE,]},{beginKeywords:"namespace",relevance:0,end:/[{;=]/,illegal:/[^\s:]/,contains:[t,e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE]},{beginKeywords:"record",relevance:0,end:/[{;=]/,illegal:/[^\s:]/,contains:[t,u,e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE]},{className:"meta",begin:"^\\s*\\[(?=[\\w])",excludeBegin:!0,end:"\\]",excludeEnd:!0,contains:[{className:"string",begin:/"/,end:/"/},]},{beginKeywords:"new return throw await else",relevance:0},{className:"function",begin:"("+b+"\\s+)+"+e.IDENT_RE+"\\s*(<[^=]+>\\s*)?\\(",returnBegin:!0,end:/\s*[{;=]/,excludeEnd:!0,keywords:n,contains:[{beginKeywords:"public private protected static internal protected abstract async extern override unsafe virtual new sealed partial",relevance:0},{begin:e.IDENT_RE+"\\s*(<[^=]+>\\s*)?\\(",returnBegin:!0,contains:[e.TITLE_MODE,u],relevance:0},{match:/\(\)/},{className:"params",begin:/\(/,end:/\)/,excludeBegin:!0,excludeEnd:!0,keywords:n,relevance:0,contains:[g,a,e.C_BLOCK_COMMENT_MODE]},e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE,]},m,]}},grmr_css(e){let n=e.regex,t=X(e),a=[e.APOS_STRING_MODE,e.QUOTE_STRING_MODE];return{name:"CSS",case_insensitive:!0,illegal:/[=|'\$]/,keywords:{keyframePosition:"from to"},classNameAliases:{keyframePosition:"selector-tag"},contains:[t.BLOCK_COMMENT,{begin:/-(webkit|moz|ms|o)-(?=[a-z])/},t.CSS_NUMBER_MODE,{className:"selector-id",begin:/#[A-Za-z0-9_-]+/,relevance:0},{className:"selector-class",begin:"\\.[a-zA-Z-][a-zA-Z0-9_-]*",relevance:0},t.ATTRIBUTE_SELECTOR_MODE,{className:"selector-pseudo",variants:[{begin:":("+Y.join("|")+")"},{begin:":(:)?("+ee.join("|")+")"},]},t.CSS_VARIABLE,{className:"attribute",begin:"\\b("+en.join("|")+")\\b"},{begin:/:/,end:/[;}{]/,contains:[t.BLOCK_COMMENT,t.HEXCOLOR,t.IMPORTANT,t.CSS_NUMBER_MODE,...a,{begin:/(url|data-uri)\(/,end:/\)/,relevance:0,keywords:{built_in:"url data-uri"},contains:[...a,{className:"string",begin:/[^)]/,endsWithParent:!0,excludeEnd:!0},]},t.FUNCTION_DISPATCH,]},{begin:n.lookahead(/@/),end:"[{;]",relevance:0,illegal:/:/,contains:[{className:"keyword",begin:/@-?\w[\w]*(-\w+)*/},{begin:/\s/,endsWithParent:!0,excludeEnd:!0,relevance:0,keywords:{$pattern:/[a-z-]+/,keyword:"and or not only",attribute:J.join(" ")},contains:[{begin:/[a-z-]+(?=:)/,className:"attribute"},...a,t.CSS_NUMBER_MODE,]},]},{className:"selector-tag",begin:"\\b("+V.join("|")+")\\b"},]}},grmr_diff(e){let n=e.regex;return{name:"Diff",aliases:["patch"],contains:[{className:"meta",relevance:10,match:n.either(/^@@ +-\d+,\d+ +\+\d+,\d+ +@@/,/^\*\*\* +\d+,\d+ +\*\*\*\*$/,/^--- +\d+,\d+ +----$/)},{className:"comment",variants:[{begin:n.either(/Index: /,/^index/,/={3,}/,/^-{3}/,/^\*{3} /,/^\+{3}/,/^diff 
--git/),end:/$/},{match:/^\*{15}$/},]},{className:"addition",begin:/^\+/,end:/$/},{className:"deletion",begin:/^-/,end:/$/},{className:"addition",begin:/^!/,end:/$/},]}},grmr_go(e){let n={keyword:["break","case","chan","const","continue","default","defer","else","fallthrough","for","func","go","goto","if","import","interface","map","package","range","return","select","struct","switch","type","var",],type:["bool","byte","complex64","complex128","error","float32","float64","int8","int16","int32","int64","string","uint8","uint16","uint32","uint64","int","uint","uintptr","rune",],literal:["true","false","iota","nil"],built_in:["append","cap","close","complex","copy","imag","len","make","new","panic","print","println","real","recover","delete",]};return{name:"Go",aliases:["golang"],keywords:n,illegal:"",contains:[e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE,{className:"string",variants:[e.QUOTE_STRING_MODE,e.APOS_STRING_MODE,{begin:"`",end:"`"},]},{className:"number",variants:[{begin:e.C_NUMBER_RE+"[i]",relevance:1},e.C_NUMBER_MODE,]},{begin:/:=/},{className:"function",beginKeywords:"func",end:"\\s*(\\{|$)",excludeEnd:!0,contains:[e.TITLE_MODE,{className:"params",begin:/\(/,end:/\)/,endsParent:!0,keywords:n,illegal:/["']/},]},]}},grmr_graphql(e){let n=e.regex;return{name:"GraphQL",aliases:["gql"],case_insensitive:!0,disableAutodetect:!1,keywords:{keyword:["query","mutation","subscription","type","input","schema","directive","interface","union","scalar","fragment","enum","on",],literal:["true","false","null"]},contains:[e.HASH_COMMENT_MODE,e.QUOTE_STRING_MODE,e.NUMBER_MODE,{scope:"punctuation",match:/[.]{3}/,relevance:0},{scope:"punctuation",begin:/[\!\(\)\:\=\[\]\{\|\}]{1}/,relevance:0},{scope:"variable",begin:/\$/,end:/\W/,excludeEnd:!0,relevance:0},{scope:"meta",match:/@\w+/,excludeEnd:!0},{scope:"symbol",begin:n.concat(/[_A-Za-z][_0-9A-Za-z]*/,n.lookahead(/\s*:/)),relevance:0},],illegal:[/[;<']/,/BEGIN/]}},grmr_ini(e){let n=e.regex,t={className:"number",relevance:0,variants:[{begin:/([+-]+)?[\d]+_[\d_]+/},{begin:e.NUMBER_RE},]},a=e.COMMENT();a.variants=[{begin:/;/,end:/$/},{begin:/#/,end:/$/},];let i={className:"variable",variants:[{begin:/\$[\w\d"][\w\d_]*/},{begin:/\$\{(.*?)\}/},]},r={className:"literal",begin:/\bon|off|true|false|yes|no\b/},s={className:"string",contains:[e.BACKSLASH_ESCAPE],variants:[{begin:"'''",end:"'''",relevance:10},{begin:'"""',end:'"""',relevance:10},{begin:'"',end:'"'},{begin:"'",end:"'"},]},l=n.either(/[A-Za-z0-9_-]+/,/"(\\"|[^"])*"/,/'[^']*'/);return{name:"TOML, also INI",aliases:["toml"],case_insensitive:!0,illegal:/\S/,contains:[a,{className:"section",begin:/\[+/,end:/\]+/},{begin:n.concat(l,"(\\s*\\.\\s*",l,")*",n.lookahead(/\s*=\s*[^#\s]/)),className:"attr",starts:{end:/$/,contains:[a,{begin:/\[/,end:/\]/,contains:[a,r,i,s,t,"self"],relevance:0},r,i,s,t]}},]}},grmr_java(e){let n=e.regex,t="[\xc0-ʸa-zA-Z_$][\xc0-ʸa-zA-Z_$0-9]*",a=t+function e(n,t,a){return -1===a?"":n.replace(t,i=>e(n,t,a-1))}("(?:<"+t+"~~~(?:\\s*,\\s*"+t+"~~~)*>)?",/~~~/g,2),i={keyword:["synchronized","abstract","private","var","static","if","const 
","for","while","strictfp","finally","protected","import","native","final","void","enum","else","break","transient","catch","instanceof","volatile","case","assert","package","default","public","try","switch","continue","throws","protected","public","private","module","requires","exports","do","sealed","yield","permits",],literal:["false","true","null"],type:["char","boolean","long","float","int","byte","short","double",],built_in:["super","this"]},r={className:"meta",begin:"@"+t,contains:[{begin:/\(/,end:/\)/,contains:["self"]},]},s={className:"params",begin:/\(/,end:/\)/,keywords:i,relevance:0,contains:[e.C_BLOCK_COMMENT_MODE],endsParent:!0};return{name:"Java",aliases:["jsp"],keywords:i,illegal:/<\/|#/,contains:[e.COMMENT("/\\*\\*","\\*/",{relevance:0,contains:[{begin:/\w+@/,relevance:0},{className:"doctag",begin:"@[A-Za-z]+"},]}),{begin:/import java\.[a-z]+\./,keywords:"import",relevance:2},e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE,{begin:/"""/,end:/"""/,className:"string",contains:[e.BACKSLASH_ESCAPE]},e.APOS_STRING_MODE,e.QUOTE_STRING_MODE,{match:[/\b(?:class|interface|enum|extends|implements|new)/,/\s+/,t,],className:{1:"keyword",3:"title.class"}},{match:/non-sealed/,scope:"keyword"},{begin:[n.concat(/(?!else)/,t),/\s+/,t,/\s+/,/=(?!=)/],className:{1:"type",3:"variable",5:"operator"}},{begin:[/record/,/\s+/,t],className:{1:"keyword",3:"title.class"},contains:[s,e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE]},{beginKeywords:"new throw return else",relevance:0},{begin:["(?:"+a+"\\s+)",e.UNDERSCORE_IDENT_RE,/\s*(?=\()/],className:{2:"title.function"},keywords:i,contains:[{className:"params",begin:/\(/,end:/\)/,keywords:i,relevance:0,contains:[r,e.APOS_STRING_MODE,e.QUOTE_STRING_MODE,er,e.C_BLOCK_COMMENT_MODE,]},e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE,]},er,r,]}},grmr_javascript:em,grmr_json(e){let n=["true","false","null"],t={scope:"literal",beginKeywords:n.join(" ")};return{name:"JSON",keywords:{literal:n},contains:[{className:"attr",begin:/"(\\.|[^\\"\r\n])*"(?=\s*:)/,relevance:1.01},{match:/[{}[\],:]/,className:"punctuation",relevance:0},e.QUOTE_STRING_MODE,t,e.C_NUMBER_MODE,e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE,],illegal:"\\S"}},grmr_kotlin(e){let n={keyword:"abstract as val var vararg get set class object open private protected public noinline crossinline dynamic final enum if else do while for when throw try catch finally import package is in fun override companion reified inline lateinit init interface annotation data sealed internal infix operator out by constructor super tailrec where const inner suspend typealias external expect actual",built_in:"Byte Short Char Int Long Boolean Float Double Void Unit Nothing",literal:"true false null"},t={className:"symbol",begin:e.UNDERSCORE_IDENT_RE+"@"},a={className:"subst",begin:/\$\{/,end:/\}/,contains:[e.C_NUMBER_MODE]},i={className:"variable",begin:"\\$"+e.UNDERSCORE_IDENT_RE},r={className:"string",variants:[{begin:'"""',end:'"""(?=[^"])',contains:[i,a]},{begin:"'",end:"'",illegal:/\n/,contains:[e.BACKSLASH_ESCAPE]},{begin:'"',end:'"',illegal:/\n/,contains:[e.BACKSLASH_ESCAPE,i,a]},]};a.contains.push(r);let 
s={className:"meta",begin:"@(?:file|property|field|get|set|receiver|param|setparam|delegate)\\s*:(?:\\s*"+e.UNDERSCORE_IDENT_RE+")?"},l={className:"meta",begin:"@"+e.UNDERSCORE_IDENT_RE,contains:[{begin:/\(/,end:/\)/,contains:[e.inherit(r,{className:"string"}),"self"]},]},o=e.COMMENT("/\\*","\\*/",{contains:[e.C_BLOCK_COMMENT_MODE]}),c={variants:[{className:"type",begin:e.UNDERSCORE_IDENT_RE},{begin:/\(/,end:/\)/,contains:[]},]},d=c;return d.variants[1].contains=[c],c.variants[1].contains=[d],{name:"Kotlin",aliases:["kt","kts"],keywords:n,contains:[e.COMMENT("/\\*\\*","\\*/",{relevance:0,contains:[{className:"doctag",begin:"@[A-Za-z]+"}]}),e.C_LINE_COMMENT_MODE,o,{className:"keyword",begin:/\b(break|continue|return|this)\b/,starts:{contains:[{className:"symbol",begin:/@\w+/}]}},t,s,l,{className:"function",beginKeywords:"fun",end:"[(]|$",returnBegin:!0,excludeEnd:!0,keywords:n,relevance:5,contains:[{begin:e.UNDERSCORE_IDENT_RE+"\\s*\\(",returnBegin:!0,relevance:0,contains:[e.UNDERSCORE_TITLE_MODE]},{className:"type",begin:/,end:/>/,keywords:"reified",relevance:0},{className:"params",begin:/\(/,end:/\)/,endsParent:!0,keywords:n,relevance:0,contains:[{begin:/:/,end:/[=,\/]/,endsWithParent:!0,contains:[c,e.C_LINE_COMMENT_MODE,o],relevance:0},e.C_LINE_COMMENT_MODE,o,s,l,r,e.C_NUMBER_MODE,]},o,]},{begin:[/class|interface|trait/,/\s+/,e.UNDERSCORE_IDENT_RE],beginScope:{3:"title.class"},keywords:"class interface trait",end:/[:\{(]|$/,excludeEnd:!0,illegal:"extends implements",contains:[{beginKeywords:"public protected internal private constructor"},e.UNDERSCORE_TITLE_MODE,{className:"type",begin:/,end:/>/,excludeBegin:!0,excludeEnd:!0,relevance:0},{className:"type",begin:/[,:]\s*/,end:/[<\(,){\s]|$/,excludeBegin:!0,returnEnd:!0},s,l,]},r,{className:"meta",begin:"^#!/usr/bin/env",end:"$",illegal:"\n"},er,]}},grmr_less(e){let n=X(e),t="([\\w-]+|@\\{[\\w-]+\\})",a=[],i=[],r=e=>({className:"string",begin:"~?"+e+".*?"+e}),s=(e,n,t)=>({className:e,begin:n,relevance:t}),l={$pattern:/[a-z-]+/,keyword:"and or not only",attribute:J.join(" ")};i.push(e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE,r("'"),r('"'),n.CSS_NUMBER_MODE,{begin:"(url|data-uri)\\(",starts:{className:"string",end:"[\\)\\n]",excludeEnd:!0}},n.HEXCOLOR,{begin:"\\(",end:"\\)",contains:i,keywords:l,relevance:0},s("variable","@@?[\\w-]+",10),s("variable","@\\{[\\w-]+\\}"),s("built_in","~?`[^`]*?`"),{className:"attribute",begin:"[\\w-]+\\s*:",end:":",returnBegin:!0,excludeEnd:!0},n.IMPORTANT,{beginKeywords:"and not"},n.FUNCTION_DISPATCH);let o=i.concat({begin:/\{/,end:/\}/,contains:a}),c={beginKeywords:"when",endsWithParent:!0,contains:[{beginKeywords:"and 
not"}].concat(i)},d={begin:t+"\\s*:",returnBegin:!0,end:/[;}]/,relevance:0,contains:[{begin:/-(webkit|moz|ms|o)-/},n.CSS_VARIABLE,{className:"attribute",begin:"\\b("+en.join("|")+")\\b",end:/(?=:)/,starts:{endsWithParent:!0,illegal:"[<=$]",relevance:0,contains:i}},]},g={variants:[{begin:"[\\.#:&\\[>]",end:"[;{}]"},{begin:t,end:/\{/},],returnBegin:!0,returnEnd:!0,illegal:"[<='$\"]",relevance:0,contains:[e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE,c,s("keyword","all\\b"),s("variable","@\\{[\\w-]+\\}"),{begin:"\\b("+V.join("|")+")\\b",className:"selector-tag"},n.CSS_NUMBER_MODE,s("selector-tag",t,0),s("selector-id","#"+t),s("selector-class","\\."+t,0),s("selector-tag","&",0),n.ATTRIBUTE_SELECTOR_MODE,{className:"selector-pseudo",begin:":("+Y.join("|")+")"},{className:"selector-pseudo",begin:":(:)?("+ee.join("|")+")"},{begin:/\(/,end:/\)/,relevance:0,contains:o},{begin:"!important"},n.FUNCTION_DISPATCH,]},u={begin:`[\\w-]+:(:)?(${et.join("|")})`,returnBegin:!0,contains:[g]};return a.push(e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE,{className:"keyword",begin:"@(import|media|charset|font-face|(-[a-z]+-)?keyframes|supports|document|namespace|page|viewport|host)\\b",starts:{end:"[;{}]",keywords:l,returnEnd:!0,contains:i,relevance:0}},{className:"variable",variants:[{begin:"@[\\w-]+\\s*:",relevance:15},{begin:"@[\\w-]+"},],starts:{end:"[;}]",returnEnd:!0,contains:o}},u,d,g,c,n.FUNCTION_DISPATCH),{name:"Less",case_insensitive:!0,illegal:"[=>'/<($\"]",contains:a}},grmr_lua(e){let n="\\[=*\\[",t="\\]=*\\]",a={begin:n,end:t,contains:["self"]},i=[e.COMMENT("--(?!\\[=*\\[)","$"),e.COMMENT("--\\[=*\\[",t,{contains:[a],relevance:10}),];return{name:"Lua",keywords:{$pattern:e.UNDERSCORE_IDENT_RE,literal:"true false nil",keyword:"and break do else elseif end for goto if in local not or repeat return then until while",built_in:"_G _ENV _VERSION __index __newindex __mode __call __metatable __tostring __len __gc __add __sub __mul __div __mod __pow __concat __unm __eq __lt __le assert collectgarbage dofile error getfenv getmetatable ipairs load loadfile loadstring module next pairs pcall print rawequal rawget rawset require select setfenv setmetatable tonumber tostring type unpack xpcall arg self coroutine resume yield status wrap create running debug getupvalue debug sethook getmetatable gethook setmetatable setlocal traceback setfenv getinfo setupvalue getlocal getregistry getfenv io lines write close flush open output type read stderr stdin input stdout popen tmpfile math log max acos huge ldexp pi cos tanh pow deg tan cosh sinh random randomseed frexp ceil floor rad abs sqrt modf asin min mod fmod log10 atan2 exp sin atan os exit setlocale date getenv difftime remove time clock tmpname rename execute package preload loadlib loaded loaders cpath config path seeall string sub upper len gfind rep find match char dump gmatch reverse byte format gsub lower table setn insert getn foreachi maxn foreach concat sort remove"},contains:i.concat([{className:"function",beginKeywords:"function",end:"\\)",contains:[e.inherit(e.TITLE_MODE,{begin:"([_a-zA-Z]\\w*\\.)*([_a-zA-Z]\\w*:)?[_a-zA-Z]\\w*"}),{className:"params",begin:"\\(",endsWithParent:!0,contains:i},].concat(i)},e.C_NUMBER_MODE,e.APOS_STRING_MODE,e.QUOTE_STRING_MODE,{className:"string",begin:n,end:t,contains:[a],relevance:5},])}},grmr_makefile(e){let 
n={className:"variable",variants:[{begin:"\\$\\("+e.UNDERSCORE_IDENT_RE+"\\)",contains:[e.BACKSLASH_ESCAPE]},{begin:/\$[@%\^\+\*]/},]},t={className:"string",begin:/"/,end:/"/,contains:[e.BACKSLASH_ESCAPE,n]},a={begin:"^"+e.UNDERSCORE_IDENT_RE+"\\s*(?=[:+?]?=)"};return{name:"Makefile",aliases:["mk","mak","make"],keywords:{$pattern:/[\w-]+/,keyword:"define endef undefine ifdef ifndef ifeq ifneq else endif include -include sinclude override export unexport private vpath"},contains:[e.HASH_COMMENT_MODE,n,t,{className:"variable",begin:/\$\([\w-]+\s/,end:/\)/,keywords:{built_in:"subst patsubst strip findstring filter filter-out sort word wordlist firstword lastword dir notdir suffix basename addsuffix addprefix join wildcard realpath abspath error warning shell origin flavor foreach if or and call eval file value"},contains:[n]},a,{className:"meta",begin:/^\.PHONY:/,end:/$/,keywords:{$pattern:/[\.\w]+/,keyword:".PHONY"}},{className:"section",begin:/^[^\s]+:/,end:/$/,contains:[n]},]}},grmr_xml(e){let n=e.regex,t=n.concat(/[\p{L}_]/u,n.optional(/[\p{L}0-9_.-]*:/u),/[\p{L}0-9_.-]*/u),a={className:"symbol",begin:/&[a-z]+;|[0-9]+;|[a-f0-9]+;/},i={begin:/\s/,contains:[{className:"keyword",begin:/#?[a-z_][a-z1-9_-]+/,illegal:/\n/},]},r=e.inherit(i,{begin:/\(/,end:/\)/}),s=e.inherit(e.APOS_STRING_MODE,{className:"string"}),l=e.inherit(e.QUOTE_STRING_MODE,{className:"string"}),o={endsWithParent:!0,illegal:/,relevance:0,contains:[{className:"attr",begin:/[\p{L}0-9._:-]+/u,relevance:0},{begin:/=\s*/,relevance:0,contains:[{className:"string",endsParent:!0,variants:[{begin:/"/,end:/"/,contains:[a]},{begin:/'/,end:/'/,contains:[a]},{begin:/[^\s"'=<>`]+/},]},]},]};return{name:"HTML, XML",aliases:["html","xhtml","rss","atom","xjb","xsd","xsl","plist","wsf","svg",],case_insensitive:!0,unicodeRegex:!0,contains:[{className:"meta",begin://,relevance:10,contains:[i,l,s,r,{begin:/\[/,end:/\]/,contains:[{className:"meta",begin://,contains:[i,r,l,s]},]},]},e.COMMENT(//,{relevance:10}),{begin://,relevance:10},a,{className:"meta",end:/\?>/,variants:[{begin:/<\?xml/,relevance:10,contains:[l]},{begin:/<\?[a-z][a-z0-9]+/},]},{className:"tag",begin:/
- """
-
-hide_menu_style = """
-
- """
-st.markdown(hide_menu_style, unsafe_allow_html=True)
-
-if __name__ == '__main__':
- main()
\ No newline at end of file
diff --git a/spaces/Erala/QQsign/README.md b/spaces/Erala/QQsign/README.md
deleted file mode 100644
index bd56881a2a7709591343e2f15af9a6a8133e115b..0000000000000000000000000000000000000000
--- a/spaces/Erala/QQsign/README.md
+++ /dev/null
@@ -1,11 +0,0 @@
----
-title: QQsign
-emoji: 🦀
-colorFrom: blue
-colorTo: purple
-sdk: docker
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
\ No newline at end of file
diff --git a/spaces/EronSamez/RVC_HFmeu/i18n.py b/spaces/EronSamez/RVC_HFmeu/i18n.py
deleted file mode 100644
index b958c6f7244c4b920e097a9a9e67e81990d03f59..0000000000000000000000000000000000000000
--- a/spaces/EronSamez/RVC_HFmeu/i18n.py
+++ /dev/null
@@ -1,43 +0,0 @@
-import json
-
-def load_language_list(language):
- try:
- with open(f"./i18n/locale/{language}.json", "r", encoding="utf-8") as f:
- return json.load(f)
- except FileNotFoundError:
- raise FileNotFoundError(
- f"Failed to load language file for {language}. Check if the correct .json file exists."
- )
-
-
-class I18nAuto:
- """
- A class used for internationalization using JSON language files.
-
- Examples
- --------
- >>> i18n = I18nAuto('en_US')
- >>> i18n.print()
- Using Language: en_US
- """
- def __init__(self, language=None):
- from locale import getdefaultlocale
- language = language or getdefaultlocale()[0]
- if not self._language_exists(language):
- language = "en_US"
-
- self.language_map = load_language_list(language)
- self.language = language
-
- @staticmethod
- def _language_exists(language):
- from os.path import exists
- return exists(f"./i18n/locale/{language}.json")
-
- def __call__(self, key):
- """Returns the translation of the given key if it exists, else returns the key itself."""
- return self.language_map.get(key, key)
-
- def print(self):
- """Prints the language currently in use."""
- print(f"Using Language: {self.language}")
\ No newline at end of file
diff --git a/spaces/FathomNet/MBARI_Monterey_Bay_Benthic/app.py b/spaces/FathomNet/MBARI_Monterey_Bay_Benthic/app.py
deleted file mode 100644
index eaa71410cd5c18bcefc21ca87b90170807c9f279..0000000000000000000000000000000000000000
--- a/spaces/FathomNet/MBARI_Monterey_Bay_Benthic/app.py
+++ /dev/null
@@ -1,43 +0,0 @@
-import glob
-import gradio as gr
-from inference import *
-from PIL import Image
-
-
-def gradio_app(image_path):
- """A function that send the file to the inference pipeline, and filters
- some predictions before outputting to gradio interface."""
-
- predictions = run_inference(image_path)
-
- out_img = Image.fromarray(predictions.render()[0])
-
- return out_img
-
-
-title = "MBARI Monterey Bay Benthic"
-description = "Gradio demo for MBARI Monterey Bay Benthic: This model was " \
- "trained on 691 classes using 33,667 localized images from " \
- "MBARI’s Video Annotation and Reference System (VARS). Note: " \
- "only a subset of the VARS database is uploaded to FathomNet " \
- "because of institutional concept embargos. For training, " \
- "images were split 80/20 train/test. Classes were selected " \
- "because they are commonly observed concepts (primarily " \
- "benthic organisms, along with equipment and marine litter or " \
- "trash) within the Monterey Bay and Submarine Canyon system " \
- "from 500 to 4000 m deep. Many of these organisms will be seen " \
- "throughout the entire NE Pacific within the continental " \
- "slope, shelf, and abyssal regions. We used the PyTorch " \
- "framework and the yolov5 ‘YOLOv5x’ pretrained checkpoint to " \
- "train for 28 epochs with a batch size of 18 and image size of " \
- "640 pixels. DOI: 10.5281/zenodo.5539915 "
-
-examples = glob.glob("images/*.png")
-
-gr.Interface(gradio_app,
- inputs=[gr.inputs.Image(type="filepath")],
- outputs=gr.outputs.Image(type="pil"),
- enable_queue=True,
- title=title,
- description=description,
- examples=examples).launch()
\ No newline at end of file
diff --git a/spaces/Freiburg-AI-Research/dermoscopic_image_generation/glide_text2im/tokenizer/bpe.py b/spaces/Freiburg-AI-Research/dermoscopic_image_generation/glide_text2im/tokenizer/bpe.py
deleted file mode 100644
index 5dcd56586a9c7bd974c1dd264152ecb70f909619..0000000000000000000000000000000000000000
--- a/spaces/Freiburg-AI-Research/dermoscopic_image_generation/glide_text2im/tokenizer/bpe.py
+++ /dev/null
@@ -1,151 +0,0 @@
-"""
-Byte pair encoding utilities adapted from:
-https://github.com/openai/gpt-2/blob/master/src/encoder.py
-"""
-
-import gzip
-import json
-import os
-from functools import lru_cache
-from typing import List, Tuple
-
-import regex as re
-
-
-@lru_cache()
-def bytes_to_unicode():
- """
- Returns a dict mapping utf-8 bytes to corresponding unicode strings.
- The reversible bpe codes work on unicode strings.
- This means you need a large # of unicode characters in your vocab if you want to avoid UNKs.
- When you're at something like a 10B token dataset you end up needing around 5K for decent coverage.
- This is a significant percentage of your normal, say, 32K bpe vocab.
- To avoid that, we want lookup tables between utf-8 bytes and unicode strings
- that avoid mapping to whitespace/control characters the bpe code barfs on.
- """
- bs = (
- list(range(ord("!"), ord("~") + 1))
- + list(range(ord("¡"), ord("¬") + 1))
- + list(range(ord("®"), ord("ÿ") + 1))
- )
- cs = bs[:]
- n = 0
- for b in range(2 ** 8):
- if b not in bs:
- bs.append(b)
- cs.append(2 ** 8 + n)
- n += 1
- cs = [chr(n) for n in cs]
- return dict(zip(bs, cs))
-
-
-def get_pairs(word):
- """Return set of symbol pairs in a word.
- Word is represented as tuple of symbols (symbols being variable-length strings).
- """
- pairs = set()
- prev_char = word[0]
- for char in word[1:]:
- pairs.add((prev_char, char))
- prev_char = char
- return pairs
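-# Example (illustrative): get_pairs(("l", "o", "w")) returns {("l", "o"), ("o", "w")}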
-
-
-class Encoder:
- def __init__(self, encoder, bpe_merges, errors="replace"):
- self.encoder = encoder
- self.decoder = {v: k for k, v in self.encoder.items()}
- self.errors = errors # how to handle errors in decoding
- self.byte_encoder = bytes_to_unicode()
- self.byte_decoder = {v: k for k, v in self.byte_encoder.items()}
- self.bpe_ranks = dict(zip(bpe_merges, range(len(bpe_merges))))
- self.cache = {}
-
- # Should have added re.IGNORECASE so BPE merges can happen for capitalized versions of contractions
- self.pat = re.compile(
- r"""'s|'t|'re|'ve|'m|'ll|'d| ?\p{L}+| ?\p{N}+| ?[^\s\p{L}\p{N}]+|\s+(?!\S)|\s+"""
- )
-
- @property
- def n_vocab(self) -> int:
- return len(self.encoder)
-
- @property
- def end_token(self) -> int:
- return self.n_vocab - 1
-
- def padded_tokens_and_mask(
- self, tokens: List[int], text_ctx: int
- ) -> Tuple[List[int], List[bool]]:
- tokens = tokens[:text_ctx]
- padding = text_ctx - len(tokens)
- padded_tokens = tokens + [self.end_token] * padding
- mask = [True] * len(tokens) + [False] * padding
- return padded_tokens, mask
-
- def bpe(self, token):
- if token in self.cache:
- return self.cache[token]
- word = tuple(token)
- pairs = get_pairs(word)
-
- if not pairs:
- return token
-
- while True:
- bigram = min(pairs, key=lambda pair: self.bpe_ranks.get(pair, float("inf")))
- if bigram not in self.bpe_ranks:
- break
- first, second = bigram
- new_word = []
- i = 0
- while i < len(word):
- try:
- j = word.index(first, i)
- new_word.extend(word[i:j])
- i = j
- except: # pylint: disable=bare-except
- new_word.extend(word[i:])
- break
-
- if word[i] == first and i < len(word) - 1 and word[i + 1] == second:
- new_word.append(first + second)
- i += 2
- else:
- new_word.append(word[i])
- i += 1
- new_word = tuple(new_word)
- word = new_word
- if len(word) == 1:
- break
- else:
- pairs = get_pairs(word)
- word = " ".join(word)
- self.cache[token] = word
- return word
-
- def encode(self, text):
- text = text.lower()
- bpe_tokens = []
- for token in re.findall(self.pat, text):
- token = "".join(self.byte_encoder[b] for b in token.encode("utf-8"))
- bpe_tokens.extend(self.encoder[bpe_token] for bpe_token in self.bpe(token).split(" "))
- return bpe_tokens
-
- def decode(self, tokens):
- text = "".join([self.decoder[token] for token in tokens])
- text = bytearray([self.byte_decoder[c] for c in text]).decode("utf-8", errors=self.errors)
- return text
-
-
-def get_encoder():
- root_dir = os.path.dirname(os.path.abspath(__file__))
- with gzip.open(os.path.join(root_dir, "encoder.json.gz"), "r") as f:
- encoder = json.load(f)
- with gzip.open(os.path.join(root_dir, "vocab.bpe.gz"), "r") as f:
- bpe_data = str(f.read(), "utf-8")
- bpe_merges = [tuple(merge_str.split()) for merge_str in bpe_data.split("\n")[1:-1]]
- return Encoder(
- encoder=encoder,
- bpe_merges=bpe_merges,
- )
diff --git a/spaces/FridaZuley/RVC_HFKawaii/infer/modules/ipex/hijacks.py b/spaces/FridaZuley/RVC_HFKawaii/infer/modules/ipex/hijacks.py
deleted file mode 100644
index b06f3a9c1a70ef515c30d0e7d749923ecb8d0bfe..0000000000000000000000000000000000000000
--- a/spaces/FridaZuley/RVC_HFKawaii/infer/modules/ipex/hijacks.py
+++ /dev/null
@@ -1,196 +0,0 @@
-import contextlib
-import importlib
-import torch
-import intel_extension_for_pytorch as ipex # pylint: disable=import-error, unused-import
-
-# pylint: disable=protected-access, missing-function-docstring, line-too-long, unnecessary-lambda, no-else-return
-
-class CondFunc: # pylint: disable=missing-class-docstring
- def __new__(cls, orig_func, sub_func, cond_func):
- self = super(CondFunc, cls).__new__(cls)
- if isinstance(orig_func, str):
- func_path = orig_func.split('.')
- for i in range(len(func_path)-1, -1, -1):
- try:
- resolved_obj = importlib.import_module('.'.join(func_path[:i]))
- break
- except ImportError:
- pass
- for attr_name in func_path[i:-1]:
- resolved_obj = getattr(resolved_obj, attr_name)
- orig_func = getattr(resolved_obj, func_path[-1])
- setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
- self.__init__(orig_func, sub_func, cond_func)
- return lambda *args, **kwargs: self(*args, **kwargs)
- def __init__(self, orig_func, sub_func, cond_func):
- self.__orig_func = orig_func
- self.__sub_func = sub_func
- self.__cond_func = cond_func
- def __call__(self, *args, **kwargs):
- if not self.__cond_func or self.__cond_func(self.__orig_func, *args, **kwargs):
- return self.__sub_func(self.__orig_func, *args, **kwargs)
- else:
- return self.__orig_func(*args, **kwargs)
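-# Illustrative note: CondFunc patches a function given by its dotted path so that, whenever the
-# condition lambda returns True, the substitute lambda runs (receiving the original function as
-# its first argument); otherwise the original is called unchanged. For example, the hijack
-# registered below for 'torch.empty' reroutes CUDA device arguments to XPU:
-# CondFunc('torch.empty',
-#          lambda orig_func, *args, device=None, **kwargs: orig_func(*args, device=return_xpu(device), **kwargs),
-#          lambda orig_func, *args, device=None, **kwargs: check_device(device))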
-
-_utils = torch.utils.data._utils
-def _shutdown_workers(self):
- if torch.utils.data._utils is None or torch.utils.data._utils.python_exit_status is True or torch.utils.data._utils.python_exit_status is None:
- return
- if hasattr(self, "_shutdown") and not self._shutdown:
- self._shutdown = True
- try:
- if hasattr(self, '_pin_memory_thread'):
- self._pin_memory_thread_done_event.set()
- self._worker_result_queue.put((None, None))
- self._pin_memory_thread.join()
- self._worker_result_queue.cancel_join_thread()
- self._worker_result_queue.close()
- self._workers_done_event.set()
- for worker_id in range(len(self._workers)):
- if self._persistent_workers or self._workers_status[worker_id]:
- self._mark_worker_as_unavailable(worker_id, shutdown=True)
- for w in self._workers: # pylint: disable=invalid-name
- w.join(timeout=torch.utils.data._utils.MP_STATUS_CHECK_INTERVAL)
- for q in self._index_queues: # pylint: disable=invalid-name
- q.cancel_join_thread()
- q.close()
- finally:
- if self._worker_pids_set:
- torch.utils.data._utils.signal_handling._remove_worker_pids(id(self))
- self._worker_pids_set = False
- for w in self._workers: # pylint: disable=invalid-name
- if w.is_alive():
- w.terminate()
-
-class DummyDataParallel(torch.nn.Module): # pylint: disable=missing-class-docstring, unused-argument, too-few-public-methods
- def __new__(cls, module, device_ids=None, output_device=None, dim=0): # pylint: disable=unused-argument
- if isinstance(device_ids, list) and len(device_ids) > 1:
- print("IPEX backend doesn't support DataParallel on multiple XPU devices")
- return module.to("xpu")
-
-def return_null_context(*args, **kwargs): # pylint: disable=unused-argument
- return contextlib.nullcontext()
-
-def check_device(device):
- return bool((isinstance(device, torch.device) and device.type == "cuda") or (isinstance(device, str) and "cuda" in device) or isinstance(device, int))
-
-def return_xpu(device):
- return f"xpu:{device[-1]}" if isinstance(device, str) and ":" in device else f"xpu:{device}" if isinstance(device, int) else torch.device("xpu") if isinstance(device, torch.device) else "xpu"
-
-def ipex_no_cuda(orig_func, *args, **kwargs):
- torch.cuda.is_available = lambda: False
- orig_func(*args, **kwargs)
- torch.cuda.is_available = torch.xpu.is_available
-
-original_autocast = torch.autocast
-def ipex_autocast(*args, **kwargs):
- if len(args) > 0 and args[0] == "cuda":
- return original_autocast("xpu", *args[1:], **kwargs)
- else:
- return original_autocast(*args, **kwargs)
-
-original_torch_cat = torch.cat
-def torch_cat(tensor, *args, **kwargs):
- if len(tensor) == 3 and (tensor[0].dtype != tensor[1].dtype or tensor[2].dtype != tensor[1].dtype):
- return original_torch_cat([tensor[0].to(tensor[1].dtype), tensor[1], tensor[2].to(tensor[1].dtype)], *args, **kwargs)
- else:
- return original_torch_cat(tensor, *args, **kwargs)
-
-original_interpolate = torch.nn.functional.interpolate
-def interpolate(tensor, size=None, scale_factor=None, mode='nearest', align_corners=None, recompute_scale_factor=None, antialias=False): # pylint: disable=too-many-arguments
- if antialias or align_corners is not None:
- return_device = tensor.device
- return_dtype = tensor.dtype
- return original_interpolate(tensor.to("cpu", dtype=torch.float32), size=size, scale_factor=scale_factor, mode=mode,
- align_corners=align_corners, recompute_scale_factor=recompute_scale_factor, antialias=antialias).to(return_device, dtype=return_dtype)
- else:
- return original_interpolate(tensor, size=size, scale_factor=scale_factor, mode=mode,
- align_corners=align_corners, recompute_scale_factor=recompute_scale_factor, antialias=antialias)
-
-original_linalg_solve = torch.linalg.solve
-def linalg_solve(A, B, *args, **kwargs): # pylint: disable=invalid-name
- if A.device != torch.device("cpu") or B.device != torch.device("cpu"):
- return_device = A.device
- return original_linalg_solve(A.to("cpu"), B.to("cpu"), *args, **kwargs).to(return_device)
- else:
- return original_linalg_solve(A, B, *args, **kwargs)
-
-def ipex_hijacks():
- CondFunc('torch.Tensor.to',
- lambda orig_func, self, device=None, *args, **kwargs: orig_func(self, return_xpu(device), *args, **kwargs),
- lambda orig_func, self, device=None, *args, **kwargs: check_device(device))
- CondFunc('torch.Tensor.cuda',
- lambda orig_func, self, device=None, *args, **kwargs: orig_func(self, return_xpu(device), *args, **kwargs),
- lambda orig_func, self, device=None, *args, **kwargs: check_device(device))
- CondFunc('torch.empty',
- lambda orig_func, *args, device=None, **kwargs: orig_func(*args, device=return_xpu(device), **kwargs),
- lambda orig_func, *args, device=None, **kwargs: check_device(device))
- CondFunc('torch.load',
- lambda orig_func, *args, map_location=None, **kwargs: orig_func(*args, return_xpu(map_location), **kwargs),
- lambda orig_func, *args, map_location=None, **kwargs: map_location is None or check_device(map_location))
- CondFunc('torch.randn',
- lambda orig_func, *args, device=None, **kwargs: orig_func(*args, device=return_xpu(device), **kwargs),
- lambda orig_func, *args, device=None, **kwargs: check_device(device))
- CondFunc('torch.ones',
- lambda orig_func, *args, device=None, **kwargs: orig_func(*args, device=return_xpu(device), **kwargs),
- lambda orig_func, *args, device=None, **kwargs: check_device(device))
- CondFunc('torch.zeros',
- lambda orig_func, *args, device=None, **kwargs: orig_func(*args, device=return_xpu(device), **kwargs),
- lambda orig_func, *args, device=None, **kwargs: check_device(device))
- CondFunc('torch.tensor',
- lambda orig_func, *args, device=None, **kwargs: orig_func(*args, device=return_xpu(device), **kwargs),
- lambda orig_func, *args, device=None, **kwargs: check_device(device))
- CondFunc('torch.linspace',
- lambda orig_func, *args, device=None, **kwargs: orig_func(*args, device=return_xpu(device), **kwargs),
- lambda orig_func, *args, device=None, **kwargs: check_device(device))
-
- CondFunc('torch.Generator',
- lambda orig_func, device=None: torch.xpu.Generator(device),
- lambda orig_func, device=None: device is not None and device != torch.device("cpu") and device != "cpu")
-
- CondFunc('torch.batch_norm',
- lambda orig_func, input, weight, bias, *args, **kwargs: orig_func(input,
- weight if weight is not None else torch.ones(input.size()[1], device=input.device),
- bias if bias is not None else torch.zeros(input.size()[1], device=input.device), *args, **kwargs),
- lambda orig_func, input, *args, **kwargs: input.device != torch.device("cpu"))
- CondFunc('torch.instance_norm',
- lambda orig_func, input, weight, bias, *args, **kwargs: orig_func(input,
- weight if weight is not None else torch.ones(input.size()[1], device=input.device),
- bias if bias is not None else torch.zeros(input.size()[1], device=input.device), *args, **kwargs),
- lambda orig_func, input, *args, **kwargs: input.device != torch.device("cpu"))
-
- #Functions with dtype errors:
- CondFunc('torch.nn.modules.GroupNorm.forward',
- lambda orig_func, self, input: orig_func(self, input.to(self.weight.data.dtype)),
- lambda orig_func, self, input: input.dtype != self.weight.data.dtype)
- CondFunc('torch.nn.modules.linear.Linear.forward',
- lambda orig_func, self, input: orig_func(self, input.to(self.weight.data.dtype)),
- lambda orig_func, self, input: input.dtype != self.weight.data.dtype)
- CondFunc('torch.nn.modules.conv.Conv2d.forward',
- lambda orig_func, self, input: orig_func(self, input.to(self.weight.data.dtype)),
- lambda orig_func, self, input: input.dtype != self.weight.data.dtype)
- CondFunc('torch.nn.functional.layer_norm',
- lambda orig_func, input, normalized_shape=None, weight=None, *args, **kwargs:
- orig_func(input.to(weight.data.dtype), normalized_shape, weight, *args, **kwargs),
- lambda orig_func, input, normalized_shape=None, weight=None, *args, **kwargs:
- weight is not None and input.dtype != weight.data.dtype)
-
- # Diffusers Float64 (ARC GPUs don't support double / Float64):
- if not torch.xpu.has_fp64_dtype():
- CondFunc('torch.from_numpy',
- lambda orig_func, ndarray: orig_func(ndarray.astype('float32')),
- lambda orig_func, ndarray: ndarray.dtype == float)
-
- #Broken functions when torch.cuda.is_available is True:
- CondFunc('torch.utils.data.dataloader._BaseDataLoaderIter.__init__',
- lambda orig_func, *args, **kwargs: ipex_no_cuda(orig_func, *args, **kwargs),
- lambda orig_func, *args, **kwargs: True)
-
- #Functions that make compile mad with CondFunc:
- torch.utils.data.dataloader._MultiProcessingDataLoaderIter._shutdown_workers = _shutdown_workers
- torch.nn.DataParallel = DummyDataParallel
- torch.autocast = ipex_autocast
- torch.cat = torch_cat
- torch.linalg.solve = linalg_solve
- torch.nn.functional.interpolate = interpolate
- torch.backends.cuda.sdp_kernel = return_null_context
\ No newline at end of file
diff --git a/spaces/GEM/DatasetCardForm/datacards/streamlit_utils.py b/spaces/GEM/DatasetCardForm/datacards/streamlit_utils.py
deleted file mode 100644
index d69815f8d0de9cfe0d05a7a33d1d7c2ff1f3e54d..0000000000000000000000000000000000000000
--- a/spaces/GEM/DatasetCardForm/datacards/streamlit_utils.py
+++ /dev/null
@@ -1,139 +0,0 @@
-import streamlit as st
-
-
-# Streamlit widgets with persistence
-def is_filled(key_list):
- state_filled_key = "_".join(key_list) + "_filled"
-
- def on_change_action():
- st.session_state.save_state[state_filled_key] = True
-
- return on_change_action
-
-
-def update_card_dict(key_list, use_default=None):
- state_key = "_".join(key_list)
- if st.session_state.save_state.get(state_key + "_filled", False) or use_default:
- card_key = key_list[-1]
- current_dict = st.session_state.card_dict
- for key in key_list[:-1]:
- current_dict = current_dict[key]
- current_dict[card_key] = st.session_state.save_state.get(state_key, use_default)
-
-
-def make_multiselect(
- key_list, label, options, format_func=lambda x: x, help="", default=None
-):
- key = "_".join(key_list)
- if key in st.session_state:
- st.session_state.save_state[key] = st.session_state[key]
- elif default is not None:
- st.session_state.save_state[key] = default
- res = st.multiselect(
- label=label,
- options=options,
- format_func=format_func,
- key=key,
- default=st.session_state.save_state.get(key, []),
- on_change=is_filled(key_list),
- help=help,
- )
- update_card_dict(key_list)
- return res
-
-
-def make_selectbox(
- key_list, label, options, format_func=lambda x: x, help="", index=None
-):
- key = "_".join(key_list)
- if key in st.session_state:
- st.session_state.save_state[key] = st.session_state[key]
- elif index is not None:
- st.session_state.save_state[key] = options[index]
- res = st.selectbox(
- label=label,
- options=options,
- format_func=format_func,
- key=key,
- index=options.index(
- st.session_state.save_state.get(key, options[0])
- ), # if st.session_state.save_state.get(key, options[0]) in options else 0,
- on_change=is_filled(key_list),
- help=help,
- )
- update_card_dict(
- key_list, use_default=st.session_state.save_state.get(key, options[0])
- ) # use the default value even without interactions
- return res
-
-
-def make_radio(key_list, label, options, format_func=lambda x: x, help="", index=None):
- key = "_".join(key_list)
- if key in st.session_state:
- st.session_state.save_state[key] = st.session_state[key]
- elif index is not None:
- st.session_state.save_state[key] = options[index]
- res = st.radio(
- label=label,
- options=options,
- format_func=format_func,
- key=key,
- index=options.index(st.session_state.save_state.get(key, options[0])),
- on_change=is_filled(key_list),
- help=help,
- )
- update_card_dict(
- key_list, use_default=st.session_state.save_state.get(key, options[0])
- ) # use the default value even without interactions
- return res
-
-
-def make_text_input(key_list, label, help="", value=None):
- key = "_".join(key_list)
- if key in st.session_state:
- st.session_state.save_state[key] = st.session_state[key]
- elif value is not None:
- st.session_state.save_state[key] = value
- res = st.text_input(
- label=label,
- key=key,
- value=st.session_state.save_state.get(key, ""),
- on_change=is_filled(key_list),
- help=help,
- )
- update_card_dict(key_list)
- return res
-
-
-def make_text_area(key_list, label, help="", value=None):
- key = "_".join(key_list)
- if key in st.session_state:
- st.session_state.save_state[key] = st.session_state[key]
- elif value is not None:
- st.session_state.save_state[key] = value
- res = st.text_area(
- label=label,
- key=key,
- value=st.session_state.save_state.get(key, ""),
- on_change=is_filled(key_list),
- help=help,
- )
- update_card_dict(key_list)
- return res
-
-
-def make_checkbox(key_list, label, help="", value=None):
- key = "_".join(key_list)
- if key in st.session_state:
- st.session_state.save_state[key] = st.session_state[key]
- elif value is not None:
- st.session_state.save_state[key] = value
- res = st.checkbox(
- label=label,
- key=key,
- value=st.session_state.save_state.get(key, False),
- on_change=is_filled(key_list),
- help=help,
- )
- update_card_dict(key_list)
- return res
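-# Illustrative usage (sketch): the widgets above assume the calling app has initialized
-# st.session_state.save_state and a nested st.session_state.card_dict matching the key path.
-# The key path below is hypothetical:
-# name = make_text_input(["overview", "where", "website"], "What is the dataset's website?")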
diff --git a/spaces/Gasi/White-box-Cartoonization/app.py b/spaces/Gasi/White-box-Cartoonization/app.py
deleted file mode 100644
index c55ced56bd87a85f59d1c8ef84b7eca87422720f..0000000000000000000000000000000000000000
--- a/spaces/Gasi/White-box-Cartoonization/app.py
+++ /dev/null
@@ -1,108 +0,0 @@
-#!/usr/bin/env python
-
-from __future__ import annotations
-import argparse
-import functools
-import os
-import pathlib
-import sys
-from typing import Callable
-import uuid
-
-import gradio as gr
-import huggingface_hub
-import numpy as np
-import PIL.Image
-
-from io import BytesIO
-from wbc.cartoonize import Cartoonize
-
-ORIGINAL_REPO_URL = 'https://github.com/SystemErrorWang/White-box-Cartoonization'
-TITLE = 'SystemErrorWang/White-box-Cartoonization'
-DESCRIPTION = f"""This is a demo for {ORIGINAL_REPO_URL}.
-
-"""
-ARTICLE = """
-
-"""
-
-SAFEHASH = [x for x in "0123456789-abcdefghijklmnopqrstuvwxyz_ABCDEFGHIJKLMNOPQRSTUVWXYZ"]
-def compress_UUID():
- '''
- Per http://www.ietf.org/rfc/rfc1738.txt, generate a short string from a UUID by expanding the character set.
- Character set: [0-9a-zA-Z\-_], 64 characters in total.
- Length: (32-2)/3*2 = 20 characters.
- Note: practically collision-free (2^120 possibilities), even if everyone on Earth used it for 100 years.
- :return: String
- '''
- row = str(uuid.uuid4()).replace('-', '')
- safe_code = ''
- for i in range(10):
- enbin = "%012d" % int(bin(int(row[i * 3] + row[i * 3 + 1] + row[i * 3 + 2], 16))[2:], 10)
- safe_code += (SAFEHASH[int(enbin[0:6], 2)] + SAFEHASH[int(enbin[6:12], 2)])
- safe_code = safe_code.replace('-', '')
- return safe_code
-
-
-def parse_args() -> argparse.Namespace:
- parser = argparse.ArgumentParser()
- parser.add_argument('--device', type=str, default='cpu')
- parser.add_argument('--theme', type=str)
- parser.add_argument('--live', action='store_true')
- parser.add_argument('--share', action='store_true')
- parser.add_argument('--port', type=int)
- parser.add_argument('--disable-queue',
- dest='enable_queue',
- action='store_false')
- parser.add_argument('--allow-flagging', type=str, default='never')
- parser.add_argument('--allow-screenshot', action='store_true')
- return parser.parse_args()
-
-def run(
- image,
- cartoonize : Cartoonize
-) -> tuple[PIL.Image.Image]:
-
- out_path = compress_UUID()+'.png'
- cartoonize.run_sigle(image.name, out_path)
-
- return PIL.Image.open(out_path)
-
-
-def main():
- gr.close_all()
-
- args = parse_args()
-
- cartoonize = Cartoonize(os.path.join(os.path.dirname(os.path.abspath(__file__)),'wbc/saved_models/'))
-
- func = functools.partial(run, cartoonize=cartoonize)
- func = functools.update_wrapper(func, run)
-
- gr.Interface(
- func,
- [
- gr.inputs.Image(type='file', label='Input Image'),
- ],
- [
- gr.outputs.Image(
- type='pil',
- label='Result'),
- ],
- # examples=examples,
- theme=args.theme,
- title=TITLE,
- description=DESCRIPTION,
- article=ARTICLE,
- allow_screenshot=args.allow_screenshot,
- allow_flagging=args.allow_flagging,
- live=args.live,
- ).launch(
- enable_queue=args.enable_queue,
- server_port=args.port,
- share=args.share,
- )
-
-
-if __name__ == '__main__':
- main()
diff --git a/spaces/Geonmo/laion-aesthetic-predictor/app.py b/spaces/Geonmo/laion-aesthetic-predictor/app.py
deleted file mode 100644
index ed36edf67809e1a415b4f40791524e9c219f0e8e..0000000000000000000000000000000000000000
--- a/spaces/Geonmo/laion-aesthetic-predictor/app.py
+++ /dev/null
@@ -1,133 +0,0 @@
-import os
-import numpy as np
-import torch
-import pytorch_lightning as pl
-import torch.nn as nn
-import torch.nn.functional as F  # needed for F.mse_loss in training_step/validation_step
-import clip
-from PIL import Image, ImageFile
-import gradio as gr
-
-# if you changed the MLP architecture during training, change it also here:
-class MLP(pl.LightningModule):
- def __init__(self, input_size, xcol='emb', ycol='avg_rating'):
- super().__init__()
- self.input_size = input_size
- self.xcol = xcol
- self.ycol = ycol
- self.layers = nn.Sequential(
- nn.Linear(self.input_size, 1024),
- #nn.ReLU(),
- nn.Dropout(0.2),
- nn.Linear(1024, 128),
- #nn.ReLU(),
- nn.Dropout(0.2),
- nn.Linear(128, 64),
- #nn.ReLU(),
- nn.Dropout(0.1),
-
- nn.Linear(64, 16),
- #nn.ReLU(),
-
- nn.Linear(16, 1)
- )
-
- def forward(self, x):
- return self.layers(x)
-
- def training_step(self, batch, batch_idx):
- x = batch[self.xcol]
- y = batch[self.ycol].reshape(-1, 1)
- x_hat = self.layers(x)
- loss = F.mse_loss(x_hat, y)
- return loss
-
- def validation_step(self, batch, batch_idx):
- x = batch[self.xcol]
- y = batch[self.ycol].reshape(-1, 1)
- x_hat = self.layers(x)
- loss = F.mse_loss(x_hat, y)
- return loss
-
- def configure_optimizers(self):
- optimizer = torch.optim.Adam(self.parameters(), lr=1e-3)
- return optimizer
-
-def normalized(a, axis=-1, order=2):
- import numpy as np # pylint: disable=import-outside-toplevel
-
- l2 = np.atleast_1d(np.linalg.norm(a, order, axis))
- l2[l2 == 0] = 1
- return a / np.expand_dims(l2, axis)
-
-def load_models():
- model = MLP(768)
-
- device = "cuda" if torch.cuda.is_available() else "cpu"
-
- s = torch.load("sac+logos+ava1-l14-linearMSE.pth", map_location=device)
-
- model.load_state_dict(s)
- model.to(device)
- model.eval()
-
- model2, preprocess = clip.load("ViT-L/14", device=device)
-
- model_dict = {}
- model_dict['classifier'] = model
- model_dict['clip_model'] = model2
- model_dict['clip_preprocess'] = preprocess
- model_dict['device'] = device
-
- return model_dict
-
-def predict(image):
- image_input = model_dict['clip_preprocess'](image).unsqueeze(0).to(model_dict['device'])
- with torch.no_grad():
- image_features = model_dict['clip_model'].encode_image(image_input)
- if model_dict['device'] == 'cuda':
- im_emb_arr = normalized(image_features.detach().cpu().numpy())
- im_emb = torch.from_numpy(im_emb_arr).to(model_dict['device']).type(torch.cuda.FloatTensor)
- else:
- im_emb_arr = normalized(image_features.detach().numpy())
- im_emb = torch.from_numpy(im_emb_arr).to(model_dict['device']).type(torch.FloatTensor)
-
- prediction = model_dict['classifier'](im_emb)
- score = prediction.item()
-
- return {'aesthetic score': score}
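-# Illustrative call (assumes load_models() has populated model_dict and example1.jpg exists):
-# predict(Image.open("example1.jpg"))  # -> {'aesthetic score': <float, roughly 0-10>}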
-
-if __name__ == '__main__':
- print('\tinit models')
-
- global model_dict
-
- model_dict = load_models()
-
- inputs = [gr.inputs.Image(type='pil', label='Image')]
-
- outputs = gr.outputs.JSON()
-
- title = 'image aesthetic predictor'
-
- examples = ['example1.jpg', 'example2.jpg', 'example3.jpg']
-
- description = """
- # Image Aesthetic Predictor Demo
- This model (Image Aesthetic Predictor) is trained by LAION Team. See [https://github.com/christophschuhmann/improved-aesthetic-predictor](https://github.com/christophschuhmann/improved-aesthetic-predictor)
- 1. This model is designed by adding five MLP layers on top of a (frozen) CLIP ViT-L/14; only the MLP layers are fine-tuned on a large set of images with a regression loss such as MSE or MAE.
- 2. The output is bounded between 0 and 10; the higher, the better.
- """
-
- article = "
-
-## Usage Tips
-
-- You can use a System Prompt to control ChatGPT more precisely.
-- To use a prompt template, select a prompt template collection and then pick a specific prompt from the dropdown menu. If the answer is unsatisfactory, retry with the `🔄Regenerate` button.
-- To insert a line break in the input box, press Shift + Enter.
-- To quickly cycle through your input history, press the ↑ and ↓ keys in the input box.
-- To deploy the program on a server, change the last line of the program to `demo.launch(server_name="0.0.0.0", server_port=)`.
-- To get a public share link, change the last line of the program to `demo.launch(share=True)`. Note that the program must be running for the public link to be accessible.
-- When using it on Hugging Face Spaces: for faster and more secure use, we recommend using **Duplicate Space** and running the program in your own Space.
-
-## Installation
-
-```shell
-git clone https://github.com/GaiZhenbiao/ChuanhuChatGPT.git
-cd ChuanhuChatGPT
-pip install -r requirements.txt
-```
-
-Next, copy `config_example.json`, rename it to `config.json`, and fill in your API key and other settings in that file.
-
-```shell
-python ChuanhuChatbot.py
-```
-
-A browser window will open, and you can start chatting with ChatGPT.
-
-> **Note**
->
-> For detailed instructions, please see the [wiki page](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/使用教程).
-
-## Troubleshooting
-
-If you run into a problem, it is usually best to first manually pull the latest changes of this project. The steps are as follows:
-
-1. Click `Download ZIP` on the web page to download the latest code archive, or
- ```shell
- git pull https://github.com/GaiZhenbiao/ChuanhuChatGPT.git main -f
- ```
-2. Try reinstalling the dependencies, since new ones may have been introduced.
- ```
- pip install -r requirements.txt
- ```
-3. Update Gradio
- ```
- pip install gradio --upgrade --force-reinstall
- ```
-
-In general, most problems can be solved by following the steps above.
-
-If the problem still persists, please refer to this page: [FAQ](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/常见问题)
-
-That page lists almost every conceivable issue and its solution, so please read it carefully.
-
-## More Information
-
-For more detailed information, please see the [wiki](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki):
-
-- [How to contribute a translation](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/Localization)
-- [How to make a contribution](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/贡献指南)
-- [How to cite the project](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/使用许可#如何引用该项目)
-- [Project changelog](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/更新日志)
-- [Project license](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/使用许可)
-
-## Starchart
-
-[](https://star-history.com/#GaiZhenbiao/ChuanhuChatGPT&Date)
-
-## Contributors
-
-
-
-
-
-## Sponsor
-
-🐯 If you find this project helpful, feel free to buy me a Coke or a coffee~
-
-
-
-
diff --git a/spaces/KyanChen/BuildingExtraction/Test.py b/spaces/KyanChen/BuildingExtraction/Test.py
deleted file mode 100644
index a3d33174ef8d5888fee19f095823636e82db6b9a..0000000000000000000000000000000000000000
--- a/spaces/KyanChen/BuildingExtraction/Test.py
+++ /dev/null
@@ -1,126 +0,0 @@
-import os
-# Change the numbers when you want to test with specific gpus
-# os.environ['CUDA_VISIBLE_DEVICES'] = '0, 1, 2, 3'
-import torch
-from STTNet import STTNet
-import torch.nn.functional as F
-from Utils.Datasets import get_data_loader
-from Utils.Utils import make_numpy_img, inv_normalize_img, encode_onehot_to_mask, get_metrics, Logger
-import matplotlib.pyplot as plt
-import numpy as np
-from collections import OrderedDict
-
-if __name__ == '__main__':
- model_infos = {
- # vgg16_bn, resnet50, resnet18
- 'backbone': 'resnet50',
- 'pretrained': True,
- 'out_keys': ['block4'],
- 'in_channel': 3,
- 'n_classes': 2,
- 'top_k_s': 64,
- 'top_k_c': 16,
- 'encoder_pos': True,
- 'decoder_pos': True,
- 'model_pattern': ['X', 'A', 'S', 'C'],
-
- 'log_path': 'Results',
- 'NUM_WORKERS': 0,
- # if you need the validation process.
- 'IS_VAL': True,
- 'VAL_BATCH_SIZE': 4,
- 'VAL_DATASET': 'Tools/generate_dep_info/val_data.csv',
- # if you need the test process.
- 'IS_TEST': True,
- 'TEST_DATASET': 'Tools/generate_dep_info/test_data.csv',
- 'IMG_SIZE': [512, 512],
- 'PHASE': 'seg',
-
- # INRIA Dataset
- 'PRIOR_MEAN': [0.40672500537632994, 0.42829032416229895, 0.39331840468605667],
- 'PRIOR_STD': [0.029498464618176873, 0.027740088491668233, 0.028246722411879095],
- # # # WHU Dataset
- # 'PRIOR_MEAN': [0.4352682576428411, 0.44523221318154493, 0.41307610541534784],
- # 'PRIOR_STD': [0.026973196780331585, 0.026424642808887323, 0.02791246590291434],
-
- # load state dict path
- 'load_checkpoint_path': r'E:\BuildingExtractionDataset\INRIA_ckpt_latest.pt',
- }
- if model_infos['IS_VAL']:
- os.makedirs(model_infos['log_path']+'/val', exist_ok=True)
- if model_infos['IS_TEST']:
- os.makedirs(model_infos['log_path']+'/test', exist_ok=True)
- logger = Logger(model_infos['log_path'] + '/log.log')
-
- data_loaders = get_data_loader(model_infos, test_mode=True)
- loss_weight = 0.1
- model = STTNet(**model_infos)
-
- logger.write(f'load checkpoint from {model_infos["load_checkpoint_path"]}\n')
- state_dict = torch.load(model_infos['load_checkpoint_path'], map_location='cpu')
- model_dict = state_dict['model_state_dict']
- try:
- model_dict = OrderedDict({k.replace('module.', ''): v for k, v in model_dict.items()})
- model.load_state_dict(model_dict)
- except Exception as e:
- model.load_state_dict(model_dict)
- model = model.cuda()
- device_ids = range(torch.cuda.device_count())
- if len(device_ids) > 1:
- model = torch.nn.DataParallel(model, device_ids=device_ids)
- logger.write(f'Use GPUs: {device_ids}\n')
- else:
- logger.write(f'Use GPUs: 1\n')
-
- patterns = ['val', 'test']
- for pattern_id, is_pattern in enumerate([model_infos['IS_VAL'], model_infos['IS_TEST']]):
- if is_pattern:
- # pred: logits, tensor, nBatch * nClass * W * H
- # target: labels, tensor, nBatch * nClass * W * H
- # output, batch['label']
- collect_result = {'pred': [], 'target': []}
- pattern = patterns[pattern_id]
- model.eval()
- for batch_id, batch in enumerate(data_loaders[pattern]):
- # Get data
- img_batch = batch['img'].cuda()
- label_batch = batch['label'].cuda()
- img_names = batch['img_name']
- collect_result['target'].append(label_batch.data.cpu())
-
- # inference
- with torch.no_grad():
- logits, att_branch_output = model(img_batch)
-
- collect_result['pred'].append(logits.data.cpu())
- # get segmentation result, when the phase is test.
- pred_label = torch.argmax(logits, 1)
- pred_label *= 255
-
- # output the segmentation result
- if pattern == 'test' or batch_id % 5 == 1:
- batch_size = pred_label.size(0)
- # k = np.clip(int(0.3 * batch_size), a_min=1, a_max=batch_size)
- # ids = np.random.choice(range(batch_size), k, replace=False)
- ids = range(batch_size)
- for img_id in ids:
- img = img_batch[img_id].detach().cpu()
- target = label_batch[img_id].detach().cpu()
- pred = pred_label[img_id].detach().cpu()
- img_name = img_names[img_id]
-
- img = make_numpy_img(
- inv_normalize_img(img, model_infos['PRIOR_MEAN'], model_infos['PRIOR_STD']))
- target = make_numpy_img(encode_onehot_to_mask(target)) * 255
- pred = make_numpy_img(pred)
-
- vis = np.concatenate([img / 255., target / 255., pred / 255.], axis=0)
- vis = np.clip(vis, a_min=0, a_max=1)
- file_name = os.path.join(model_infos['log_path'], pattern, f'{img_name.split(".")[0]}.png')
- plt.imsave(file_name, vis)
-
- collect_result['pred'] = torch.cat(collect_result['pred'], dim=0)
- collect_result['target'] = torch.cat(collect_result['target'], dim=0)
- IoU, OA, F1_score = get_metrics('seg', **collect_result)
- logger.write(f'{pattern}: Iou:{IoU[-1]:.4f} OA:{OA[-1]:.4f} F1:{F1_score[-1]:.4f}\n')
-
diff --git a/spaces/KyanChen/RSPrompter/mmdet/utils/compat_config.py b/spaces/KyanChen/RSPrompter/mmdet/utils/compat_config.py
deleted file mode 100644
index 133adb65c2276401eca947e223e5b7c1760de418..0000000000000000000000000000000000000000
--- a/spaces/KyanChen/RSPrompter/mmdet/utils/compat_config.py
+++ /dev/null
@@ -1,139 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import copy
-import warnings
-
-from mmengine.config import ConfigDict
-
-
-def compat_cfg(cfg):
- """This function would modify some filed to keep the compatibility of
- config.
-
- For example, it will move some args which will be deprecated to the correct
- fields.
- """
- cfg = copy.deepcopy(cfg)
- cfg = compat_imgs_per_gpu(cfg)
- cfg = compat_loader_args(cfg)
- cfg = compat_runner_args(cfg)
- return cfg
-
-
-def compat_runner_args(cfg):
- if 'runner' not in cfg:
- cfg.runner = ConfigDict({
- 'type': 'EpochBasedRunner',
- 'max_epochs': cfg.total_epochs
- })
- warnings.warn(
- 'config is now expected to have a `runner` section, '
- 'please set `runner` in your config.', UserWarning)
- else:
- if 'total_epochs' in cfg:
- assert cfg.total_epochs == cfg.runner.max_epochs
- return cfg
-
-
-def compat_imgs_per_gpu(cfg):
- cfg = copy.deepcopy(cfg)
- if 'imgs_per_gpu' in cfg.data:
- warnings.warn('"imgs_per_gpu" is deprecated in MMDet V2.0. '
- 'Please use "samples_per_gpu" instead')
- if 'samples_per_gpu' in cfg.data:
- warnings.warn(
- f'Got "imgs_per_gpu"={cfg.data.imgs_per_gpu} and '
- f'"samples_per_gpu"={cfg.data.samples_per_gpu}, "imgs_per_gpu"'
- f'={cfg.data.imgs_per_gpu} is used in this experiments')
- else:
- warnings.warn('Automatically set "samples_per_gpu"="imgs_per_gpu"='
- f'{cfg.data.imgs_per_gpu} in this experiments')
- cfg.data.samples_per_gpu = cfg.data.imgs_per_gpu
- return cfg
-
-
-def compat_loader_args(cfg):
- """Deprecated sample_per_gpu in cfg.data."""
-
- cfg = copy.deepcopy(cfg)
- if 'train_dataloader' not in cfg.data:
- cfg.data['train_dataloader'] = ConfigDict()
- if 'val_dataloader' not in cfg.data:
- cfg.data['val_dataloader'] = ConfigDict()
- if 'test_dataloader' not in cfg.data:
- cfg.data['test_dataloader'] = ConfigDict()
-
- # special process for train_dataloader
- if 'samples_per_gpu' in cfg.data:
-
- samples_per_gpu = cfg.data.pop('samples_per_gpu')
- assert 'samples_per_gpu' not in \
- cfg.data.train_dataloader, ('`samples_per_gpu` are set '
- 'in `data` field and ` '
- 'data.train_dataloader` '
- 'at the same time. '
- 'Please only set it in '
- '`data.train_dataloader`. ')
- cfg.data.train_dataloader['samples_per_gpu'] = samples_per_gpu
-
- if 'persistent_workers' in cfg.data:
-
- persistent_workers = cfg.data.pop('persistent_workers')
- assert 'persistent_workers' not in \
- cfg.data.train_dataloader, ('`persistent_workers` are set '
- 'in `data` field and ` '
- 'data.train_dataloader` '
- 'at the same time. '
- 'Please only set it in '
- '`data.train_dataloader`. ')
- cfg.data.train_dataloader['persistent_workers'] = persistent_workers
-
- if 'workers_per_gpu' in cfg.data:
-
- workers_per_gpu = cfg.data.pop('workers_per_gpu')
- cfg.data.train_dataloader['workers_per_gpu'] = workers_per_gpu
- cfg.data.val_dataloader['workers_per_gpu'] = workers_per_gpu
- cfg.data.test_dataloader['workers_per_gpu'] = workers_per_gpu
-
- # special process for val_dataloader
- if 'samples_per_gpu' in cfg.data.val:
- # keep the default value of `samples_per_gpu` at 1
- assert 'samples_per_gpu' not in \
- cfg.data.val_dataloader, ('`samples_per_gpu` are set '
- 'in `data.val` field and ` '
- 'data.val_dataloader` at '
- 'the same time. '
- 'Please only set it in '
- '`data.val_dataloader`. ')
- cfg.data.val_dataloader['samples_per_gpu'] = \
- cfg.data.val.pop('samples_per_gpu')
- # special process for test_dataloader
-
- # in case the test dataset is concatenated
- if isinstance(cfg.data.test, dict):
- if 'samples_per_gpu' in cfg.data.test:
- assert 'samples_per_gpu' not in \
- cfg.data.test_dataloader, ('`samples_per_gpu` are set '
- 'in `data.test` field and ` '
- 'data.test_dataloader` '
- 'at the same time. '
- 'Please only set it in '
- '`data.test_dataloader`. ')
-
- cfg.data.test_dataloader['samples_per_gpu'] = \
- cfg.data.test.pop('samples_per_gpu')
-
- elif isinstance(cfg.data.test, list):
- for ds_cfg in cfg.data.test:
- if 'samples_per_gpu' in ds_cfg:
- assert 'samples_per_gpu' not in \
- cfg.data.test_dataloader, ('`samples_per_gpu` are set '
- 'in `data.test` field and ` '
- 'data.test_dataloader` at'
- ' the same time. '
- 'Please only set it in '
- '`data.test_dataloader`. ')
- samples_per_gpu = max(
- [ds_cfg.pop('samples_per_gpu', 1) for ds_cfg in cfg.data.test])
- cfg.data.test_dataloader['samples_per_gpu'] = samples_per_gpu
-
- return cfg
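-# Illustrative usage (sketch; the config path is hypothetical):
-# from mmengine.config import Config
-# cfg = compat_cfg(Config.fromfile("configs/some_old_style_config.py"))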
diff --git a/spaces/LEBEI/00002/wbc/cartoonize.py b/spaces/LEBEI/00002/wbc/cartoonize.py
deleted file mode 100644
index 25faf1ceb95aaed9a3f7a7982d17a03dc6bc32b1..0000000000000000000000000000000000000000
--- a/spaces/LEBEI/00002/wbc/cartoonize.py
+++ /dev/null
@@ -1,112 +0,0 @@
-import os
-import cv2
-import numpy as np
-import tensorflow as tf
-import wbc.network as network
-import wbc.guided_filter as guided_filter
-from tqdm import tqdm
-
-
-def resize_crop(image):
- h, w, c = np.shape(image)
- if min(h, w) > 720:
- if h > w:
- h, w = int(720 * h / w), 720
- else:
- h, w = 720, int(720 * w / h)
- image = cv2.resize(image, (w, h),
- interpolation=cv2.INTER_AREA)
- h, w = (h // 8) * 8, (w // 8) * 8
- image = image[:h, :w, :]
- return image
-
-
-def cartoonize(load_folder, save_folder, model_path):
- print(model_path)
- input_photo = tf.placeholder(tf.float32, [1, None, None, 3])
- network_out = network.unet_generator(input_photo)
- final_out = guided_filter.guided_filter(input_photo, network_out, r=1, eps=5e-3)
-
- all_vars = tf.trainable_variables()
- gene_vars = [var for var in all_vars if 'generator' in var.name]
- saver = tf.train.Saver(var_list=gene_vars)
-
- config = tf.ConfigProto()
- config.gpu_options.allow_growth = True
- sess = tf.Session(config=config)
-
- sess.run(tf.global_variables_initializer())
- saver.restore(sess, tf.train.latest_checkpoint(model_path))
- name_list = os.listdir(load_folder)
- for name in tqdm(name_list):
- try:
- load_path = os.path.join(load_folder, name)
- save_path = os.path.join(save_folder, name)
- image = cv2.imread(load_path)
- image = resize_crop(image)
- batch_image = image.astype(np.float32) / 127.5 - 1
- batch_image = np.expand_dims(batch_image, axis=0)
- output = sess.run(final_out, feed_dict={input_photo: batch_image})
- output = (np.squeeze(output) + 1) * 127.5
- output = np.clip(output, 0, 255).astype(np.uint8)
- cv2.imwrite(save_path, output)
- except:
- print('cartoonize {} failed'.format(load_path))
-
-
-class Cartoonize:
- def __init__(self, model_path):
- print(model_path)
- self.input_photo = tf.placeholder(tf.float32, [1, None, None, 3])
- network_out = network.unet_generator(self.input_photo)
- self.final_out = guided_filter.guided_filter(self.input_photo, network_out, r=1, eps=5e-3)
-
- all_vars = tf.trainable_variables()
- gene_vars = [var for var in all_vars if 'generator' in var.name]
- saver = tf.train.Saver(var_list=gene_vars)
-
- config = tf.ConfigProto()
- config.gpu_options.allow_growth = True
- self.sess = tf.Session(config=config)
-
- self.sess.run(tf.global_variables_initializer())
- saver.restore(self.sess, tf.train.latest_checkpoint(model_path))
-
- def run(self, load_folder, save_folder):
- name_list = os.listdir(load_folder)
- for name in tqdm(name_list):
- try:
- load_path = os.path.join(load_folder, name)
- save_path = os.path.join(save_folder, name)
- image = cv2.imread(load_path)
- image = resize_crop(image)
- batch_image = image.astype(np.float32) / 127.5 - 1
- batch_image = np.expand_dims(batch_image, axis=0)
- output = self.sess.run(self.final_out, feed_dict={self.input_photo: batch_image})
- output = (np.squeeze(output) + 1) * 127.5
- output = np.clip(output, 0, 255).astype(np.uint8)
- cv2.imwrite(save_path, output)
- except:
- print('cartoonize {} failed'.format(load_path))
-
- def run_sigle(self, load_path, save_path):
- try:
- image = cv2.imread(load_path)
- image = resize_crop(image)
- batch_image = image.astype(np.float32) / 127.5 - 1
- batch_image = np.expand_dims(batch_image, axis=0)
- output = self.sess.run(self.final_out, feed_dict={self.input_photo: batch_image})
- output = (np.squeeze(output) + 1) * 127.5
- output = np.clip(output, 0, 255).astype(np.uint8)
- cv2.imwrite(save_path, output)
- except:
- print('cartoonize {} failed'.format(load_path))
-
-
-if __name__ == '__main__':
- model_path = 'saved_models'
- load_folder = 'test_images'
- save_folder = 'cartoonized_images'
- if not os.path.exists(save_folder):
- os.mkdir(save_folder)
- cartoonize(load_folder, save_folder, model_path)
diff --git a/spaces/LightSY/W2L-TD/facelib/detection/yolov5face/utils/__init__.py b/spaces/LightSY/W2L-TD/facelib/detection/yolov5face/utils/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/Lijiahui/bingAI/README.md b/spaces/Lijiahui/bingAI/README.md
deleted file mode 100644
index 5311af4ab2d532d4387893b76b11661af253d141..0000000000000000000000000000000000000000
--- a/spaces/Lijiahui/bingAI/README.md
+++ /dev/null
@@ -1,11 +0,0 @@
----
-title: BingAI
-emoji: 🦀
-colorFrom: yellow
-colorTo: blue
-sdk: docker
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Loren/Streamlit_OCR_comparator/configs/_base_/schedules/schedule_adadelta_18e.py b/spaces/Loren/Streamlit_OCR_comparator/configs/_base_/schedules/schedule_adadelta_18e.py
deleted file mode 100644
index 33f7960c51bf7d0f2b5bc03e8707a85a01e000fd..0000000000000000000000000000000000000000
--- a/spaces/Loren/Streamlit_OCR_comparator/configs/_base_/schedules/schedule_adadelta_18e.py
+++ /dev/null
@@ -1,8 +0,0 @@
-# optimizer
-optimizer = dict(type='Adadelta', lr=0.5)
-optimizer_config = dict(grad_clip=dict(max_norm=0.5))
-# learning policy
-lr_config = dict(policy='step', step=[8, 14, 16])
-# running settings
-runner = dict(type='EpochBasedRunner', max_epochs=18)
-checkpoint_config = dict(interval=1)
diff --git a/spaces/LucasCodeBreak/MusicGen/audiocraft/models/musicgen.py b/spaces/LucasCodeBreak/MusicGen/audiocraft/models/musicgen.py
deleted file mode 100644
index 007dd9e0ed1cfd359fb4889e7f4108248e189941..0000000000000000000000000000000000000000
--- a/spaces/LucasCodeBreak/MusicGen/audiocraft/models/musicgen.py
+++ /dev/null
@@ -1,362 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-"""
-Main model for using MusicGen. This will combine all the required components
-and provide easy access to the generation API.
-"""
-
-import os
-import typing as tp
-
-import torch
-
-from .encodec import CompressionModel
-from .lm import LMModel
-from .builders import get_debug_compression_model, get_debug_lm_model
-from .loaders import load_compression_model, load_lm_model, HF_MODEL_CHECKPOINTS_MAP
-from ..data.audio_utils import convert_audio
-from ..modules.conditioners import ConditioningAttributes, WavCondition
-from ..utils.autocast import TorchAutocast
-
-
-MelodyList = tp.List[tp.Optional[torch.Tensor]]
-MelodyType = tp.Union[torch.Tensor, MelodyList]
-
-
-class MusicGen:
- """MusicGen main model with convenient generation API.
-
- Args:
- name (str): name of the model.
- compression_model (CompressionModel): Compression model
- used to map audio to invertible discrete representations.
- lm (LMModel): Language model over discrete representations.
- """
- def __init__(self, name: str, compression_model: CompressionModel, lm: LMModel,
- max_duration: float = 30):
- self.name = name
- self.compression_model = compression_model
- self.lm = lm
- self.max_duration = max_duration
- self.device = next(iter(lm.parameters())).device
- self.generation_params: dict = {}
- self.set_generation_params(duration=15) # 15 seconds by default
- self._progress_callback: tp.Optional[tp.Callable[[int, int], None]] = None
- if self.device.type == 'cpu':
- self.autocast = TorchAutocast(enabled=False)
- else:
- self.autocast = TorchAutocast(
- enabled=True, device_type=self.device.type, dtype=torch.float16)
-
- @property
- def frame_rate(self) -> int:
- """Roughly the number of AR steps per seconds."""
- return self.compression_model.frame_rate
-
- @property
- def sample_rate(self) -> int:
- """Sample rate of the generated audio."""
- return self.compression_model.sample_rate
-
- @property
- def audio_channels(self) -> int:
- """Audio channels of the generated audio."""
- return self.compression_model.channels
-
- @staticmethod
- def get_pretrained(name: str = 'melody', device=None):
- """Return pretrained model, we provide four models:
- - small (300M), text to music, # see: https://huggingface.co/facebook/musicgen-small
- - medium (1.5B), text to music, # see: https://huggingface.co/facebook/musicgen-medium
- - melody (1.5B) text to music and text+melody to music, # see: https://huggingface.co/facebook/musicgen-melody
- - large (3.3B), text to music, # see: https://huggingface.co/facebook/musicgen-large
- """
-
- if device is None:
- if torch.cuda.device_count():
- device = 'cuda'
- else:
- device = 'cpu'
-
- if name == 'debug':
- # used only for unit tests
- compression_model = get_debug_compression_model(device)
- lm = get_debug_lm_model(device)
- return MusicGen(name, compression_model, lm)
-
- if name not in HF_MODEL_CHECKPOINTS_MAP:
- if not os.path.isfile(name) and not os.path.isdir(name):
- raise ValueError(
- f"{name} is not a valid checkpoint name. "
- f"Choose one of {', '.join(HF_MODEL_CHECKPOINTS_MAP.keys())}"
- )
-
- cache_dir = os.environ.get('MUSICGEN_ROOT', None)
- compression_model = load_compression_model(name, device=device, cache_dir=cache_dir)
- lm = load_lm_model(name, device=device, cache_dir=cache_dir)
- if name == 'melody':
- lm.condition_provider.conditioners['self_wav'].match_len_on_eval = True
-
- return MusicGen(name, compression_model, lm)
-
- def set_generation_params(self, use_sampling: bool = True, top_k: int = 250,
- top_p: float = 0.0, temperature: float = 1.0,
- duration: float = 30.0, cfg_coef: float = 3.0,
- two_step_cfg: bool = False, extend_stride: float = 18):
- """Set the generation parameters for MusicGen.
-
- Args:
- use_sampling (bool, optional): Use sampling if True, else do argmax decoding. Defaults to True.
- top_k (int, optional): top_k used for sampling. Defaults to 250.
- top_p (float, optional): top_p used for sampling, when set to 0 top_k is used. Defaults to 0.0.
- temperature (float, optional): Softmax temperature parameter. Defaults to 1.0.
- duration (float, optional): Duration of the generated waveform. Defaults to 30.0.
- cfg_coef (float, optional): Coefficient used for classifier free guidance. Defaults to 3.0.
- two_step_cfg (bool, optional): If True, performs 2 forward passes for Classifier Free Guidance
- instead of batching the two together. This has some impact on how things
- are padded but seems to have little impact in practice.
- extend_stride: when doing extended generation (i.e. more than 30 seconds), by how much
- should we extend the audio each time. Larger values mean less context is
- preserved, and shorter values require extra computation.
- """
- assert extend_stride < self.max_duration, "Cannot stride by more than max generation duration."
- self.extend_stride = extend_stride
- self.duration = duration
- self.generation_params = {
- 'use_sampling': use_sampling,
- 'temp': temperature,
- 'top_k': top_k,
- 'top_p': top_p,
- 'cfg_coef': cfg_coef,
- 'two_step_cfg': two_step_cfg,
- }
-
- def set_custom_progress_callback(self, progress_callback: tp.Optional[tp.Callable[[int, int], None]] = None):
- """Override the default progress callback."""
- self._progress_callback = progress_callback
-
- def generate_unconditional(self, num_samples: int, progress: bool = False) -> torch.Tensor:
- """Generate samples in an unconditional manner.
-
- Args:
- num_samples (int): Number of samples to be generated.
- progress (bool, optional): Flag to display progress of the generation process. Defaults to False.
- """
- descriptions: tp.List[tp.Optional[str]] = [None] * num_samples
- attributes, prompt_tokens = self._prepare_tokens_and_attributes(descriptions, None)
- return self._generate_tokens(attributes, prompt_tokens, progress)
-
- def generate(self, descriptions: tp.List[str], progress: bool = False) -> torch.Tensor:
- """Generate samples conditioned on text.
-
- Args:
- descriptions (tp.List[str]): A list of strings used as text conditioning.
- progress (bool, optional): Flag to display progress of the generation process. Defaults to False.
- """
- attributes, prompt_tokens = self._prepare_tokens_and_attributes(descriptions, None)
- assert prompt_tokens is None
- return self._generate_tokens(attributes, prompt_tokens, progress)
-
- def generate_with_chroma(self, descriptions: tp.List[str], melody_wavs: MelodyType,
- melody_sample_rate: int, progress: bool = False) -> torch.Tensor:
- """Generate samples conditioned on text and melody.
-
- Args:
- descriptions (tp.List[str]): A list of strings used as text conditioning.
- melody_wavs: (torch.Tensor or list of Tensor): A batch of waveforms used as
- melody conditioning. Should have shape [B, C, T] with B matching the description length,
- C=1 or 2. It can be [C, T] if there is a single description. It can also be
- a list of [C, T] tensors.
- melody_sample_rate: (int): Sample rate of the melody waveforms.
- progress (bool, optional): Flag to display progress of the generation process. Defaults to False.
- """
- if isinstance(melody_wavs, torch.Tensor):
- if melody_wavs.dim() == 2:
- melody_wavs = melody_wavs[None]
- if melody_wavs.dim() != 3:
- raise ValueError("Melody wavs should have a shape [B, C, T].")
- melody_wavs = list(melody_wavs)
- else:
- for melody in melody_wavs:
- if melody is not None:
- assert melody.dim() == 2, "One melody in the list has the wrong number of dims."
-
- melody_wavs = [
- convert_audio(wav, melody_sample_rate, self.sample_rate, self.audio_channels)
- if wav is not None else None
- for wav in melody_wavs]
- attributes, prompt_tokens = self._prepare_tokens_and_attributes(descriptions=descriptions, prompt=None,
- melody_wavs=melody_wavs)
- assert prompt_tokens is None
- return self._generate_tokens(attributes, prompt_tokens, progress)
-
- def generate_continuation(self, prompt: torch.Tensor, prompt_sample_rate: int,
- descriptions: tp.Optional[tp.List[tp.Optional[str]]] = None,
- progress: bool = False) -> torch.Tensor:
- """Generate samples conditioned on audio prompts.
-
- Args:
- prompt (torch.Tensor): A batch of waveforms used for continuation.
- Prompt should be [B, C, T], or [C, T] if only one sample is generated.
- prompt_sample_rate (int): Sampling rate of the given audio waveforms.
- descriptions (tp.List[str], optional): A list of strings used as text conditioning. Defaults to None.
- progress (bool, optional): Flag to display progress of the generation process. Defaults to False.
- """
- if prompt.dim() == 2:
- prompt = prompt[None]
- if prompt.dim() != 3:
- raise ValueError("prompt should have 3 dimensions: [B, C, T] (C = 1).")
- prompt = convert_audio(prompt, prompt_sample_rate, self.sample_rate, self.audio_channels)
- if descriptions is None:
- descriptions = [None] * len(prompt)
- attributes, prompt_tokens = self._prepare_tokens_and_attributes(descriptions, prompt)
- assert prompt_tokens is not None
- return self._generate_tokens(attributes, prompt_tokens, progress)
-
- @torch.no_grad()
- def _prepare_tokens_and_attributes(
- self,
- descriptions: tp.Sequence[tp.Optional[str]],
- prompt: tp.Optional[torch.Tensor],
- melody_wavs: tp.Optional[MelodyList] = None,
- ) -> tp.Tuple[tp.List[ConditioningAttributes], tp.Optional[torch.Tensor]]:
- """Prepare model inputs.
-
- Args:
- descriptions (tp.List[str]): A list of strings used as text conditioning.
- prompt (torch.Tensor): A batch of waveforms used for continuation.
- melody_wavs (tp.Optional[MelodyList], optional): A batch of waveforms
- used as melody conditioning. Defaults to None.
- """
- attributes = [
- ConditioningAttributes(text={'description': description})
- for description in descriptions]
-
- if melody_wavs is None:
- for attr in attributes:
- attr.wav['self_wav'] = WavCondition(
- torch.zeros((1, 1), device=self.device),
- torch.tensor([0], device=self.device),
- path='null_wav') # type: ignore
- else:
- if self.name != "melody":
- raise RuntimeError("This model doesn't support melody conditioning. "
- "Use the `melody` model.")
- assert len(melody_wavs) == len(descriptions), \
- f"number of melody wavs must match number of descriptions! " \
- f"got melody len={len(melody_wavs)}, and descriptions len={len(descriptions)}"
- for attr, melody in zip(attributes, melody_wavs):
- if melody is None:
- attr.wav['self_wav'] = WavCondition(
- torch.zeros((1, 1), device=self.device),
- torch.tensor([0], device=self.device),
- path='null_wav') # type: ignore
- else:
- attr.wav['self_wav'] = WavCondition(
- melody.to(device=self.device),
- torch.tensor([melody.shape[-1]], device=self.device))
-
- if prompt is not None:
- if descriptions is not None:
- assert len(descriptions) == len(prompt), "Number of descriptions doesn't match the prompt batch size"
- prompt = prompt.to(self.device)
- prompt_tokens, scale = self.compression_model.encode(prompt)
- assert scale is None
- else:
- prompt_tokens = None
- return attributes, prompt_tokens
-
- def _generate_tokens(self, attributes: tp.List[ConditioningAttributes],
- prompt_tokens: tp.Optional[torch.Tensor], progress: bool = False) -> torch.Tensor:
- """Generate discrete audio tokens given audio prompt and/or conditions.
-
- Args:
- attributes (tp.List[ConditioningAttributes]): Conditions used for generation (text/melody).
- prompt_tokens (tp.Optional[torch.Tensor]): Audio prompt used for continuation.
- progress (bool, optional): Flag to display progress of the generation process. Defaults to False.
- Returns:
- torch.Tensor: Generated audio, of shape [B, C, T], T is defined by the generation params.
- """
- total_gen_len = int(self.duration * self.frame_rate)
- max_prompt_len = int(min(self.duration, self.max_duration) * self.frame_rate)
- current_gen_offset: int = 0
-
- def _progress_callback(generated_tokens: int, tokens_to_generate: int):
- generated_tokens += current_gen_offset
- if self._progress_callback is not None:
- # Note that total_gen_len might be quite wrong depending on the
- # codebook pattern used, but with delay it is almost accurate.
- self._progress_callback(generated_tokens, total_gen_len)
- else:
- print(f'{generated_tokens: 6d} / {total_gen_len: 6d}', end='\r')
-
- if prompt_tokens is not None:
- assert max_prompt_len >= prompt_tokens.shape[-1], \
- "Prompt is longer than audio to generate"
-
- callback = None
- if progress:
- callback = _progress_callback
-
- if self.duration <= self.max_duration:
- # generate by sampling from LM, simple case.
- with self.autocast:
- gen_tokens = self.lm.generate(
- prompt_tokens, attributes,
- callback=callback, max_gen_len=total_gen_len, **self.generation_params)
-
- else:
- # now this gets a bit messier, we need to handle prompts,
- # melody conditioning etc.
- ref_wavs = [attr.wav['self_wav'] for attr in attributes]
- all_tokens = []
- if prompt_tokens is None:
- prompt_length = 0
- else:
- all_tokens.append(prompt_tokens)
- prompt_length = prompt_tokens.shape[-1]
-
- stride_tokens = int(self.frame_rate * self.extend_stride)
-
- while current_gen_offset + prompt_length < total_gen_len:
- time_offset = current_gen_offset / self.frame_rate
- chunk_duration = min(self.duration - time_offset, self.max_duration)
- max_gen_len = int(chunk_duration * self.frame_rate)
- for attr, ref_wav in zip(attributes, ref_wavs):
- wav_length = ref_wav.length.item()
- if wav_length == 0:
- continue
- # We will extend the wav periodically if it is not long enough.
- # we have to do it here rather than in conditioners.py as otherwise
- # we wouldn't have the full wav.
- initial_position = int(time_offset * self.sample_rate)
- wav_target_length = int(self.max_duration * self.sample_rate)
- # (debug) print(initial_position / self.sample_rate, wav_target_length / self.sample_rate)
- positions = torch.arange(initial_position,
- initial_position + wav_target_length, device=self.device)
- attr.wav['self_wav'] = WavCondition(
- ref_wav[0][:, positions % wav_length],
- torch.full_like(ref_wav[1], wav_target_length))
- with self.autocast:
- gen_tokens = self.lm.generate(
- prompt_tokens, attributes,
- callback=callback, max_gen_len=max_gen_len, **self.generation_params)
- if prompt_tokens is None:
- all_tokens.append(gen_tokens)
- else:
- all_tokens.append(gen_tokens[:, :, prompt_tokens.shape[-1]:])
- prompt_tokens = gen_tokens[:, :, stride_tokens:]
- prompt_length = prompt_tokens.shape[-1]
- current_gen_offset += stride_tokens
-
- gen_tokens = torch.cat(all_tokens, dim=-1)
-
- # generate audio
- assert gen_tokens.dim() == 3
- with torch.no_grad():
- gen_audio = self.compression_model.decode(gen_tokens, None)
- return gen_audio
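
For reference, here is a minimal usage sketch of the generation API deleted above. It is not part of the original file: the `MusicGen.get_pretrained` constructor, the import path, the model name 'melody', and the audio path are assumptions about the wrapper this module belongs to, and the parameter values are illustrative only.

# Hedged usage sketch for the generation methods above (assumed wrapper: MusicGen).
import torchaudio
from audiocraft.models import MusicGen  # assumed import path

model = MusicGen.get_pretrained('melody')   # assumed constructor and model name
model.set_generation_params(duration=8)     # seconds of audio to generate

# Text-only conditioning.
wavs = model.generate(['lo-fi beat with warm piano'], progress=True)

# Text + melody conditioning; the melody is resampled/remixed inside generate_with_chroma.
melody, sr = torchaudio.load('melody_clip.wav')  # placeholder path, shape [C, T]
wavs = model.generate_with_chroma(
    ['orchestral rendition of the same theme'], melody[None], sr, progress=True)

print(wavs.shape)  # [B, C, T] at model.sample_rate
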
diff --git a/spaces/LuxOAI/ChatGpt-Web/app/api/auth.ts b/spaces/LuxOAI/ChatGpt-Web/app/api/auth.ts
deleted file mode 100644
index 1005c5fff6309402b96b2add88644d649d87d3d9..0000000000000000000000000000000000000000
--- a/spaces/LuxOAI/ChatGpt-Web/app/api/auth.ts
+++ /dev/null
@@ -1,71 +0,0 @@
-import { NextRequest } from "next/server";
-import { getServerSideConfig } from "../config/server";
-import md5 from "spark-md5";
-import { ACCESS_CODE_PREFIX } from "../constant";
-
-const serverConfig = getServerSideConfig();
-
-function getIP(req: NextRequest) {
- let ip = req.ip ?? req.headers.get("x-real-ip");
- const forwardedFor = req.headers.get("x-forwarded-for");
-
- if (!ip && forwardedFor) {
- ip = forwardedFor.split(",").at(0) ?? "";
- }
-
- return ip;
-}
-
-function parseApiKey(bearToken: string) {
- const token = bearToken.trim().replaceAll("Bearer ", "").trim();
- const isOpenAiKey = !token.startsWith(ACCESS_CODE_PREFIX);
-
- return {
- accessCode: isOpenAiKey ? "" : token.slice(ACCESS_CODE_PREFIX.length),
- apiKey: isOpenAiKey ? token : "",
- };
-}
-
-export function auth(req: NextRequest) {
- const authToken = req.headers.get("Authorization") ?? "";
-
- // check whether it is an OpenAI API key or a user access code
- const { accessCode, apiKey: token } = parseApiKey(authToken);
-
- const hashedCode = md5.hash(accessCode ?? "").trim();
-
- console.log("[Auth] allowed hashed codes: ", [...serverConfig.codes]);
- console.log("[Auth] got access code:", accessCode);
- console.log("[Auth] hashed access code:", hashedCode);
- console.log("[User IP] ", getIP(req));
- console.log("[Time] ", new Date().toLocaleString());
-
- if (serverConfig.needCode && !serverConfig.codes.has(hashedCode) && !token) {
- return {
- error: true,
- needAccessCode: true,
- msg: "Please go settings page and fill your access code.",
- };
- }
-
- // if user does not provide an api key, inject system api key
- if (!token) {
- const apiKey = serverConfig.apiKey;
- if (apiKey) {
- console.log("[Auth] use system api key");
- req.headers.set("Authorization", `Bearer ${apiKey}`);
- } else {
- console.log("[Auth] admin did not provide an api key");
- return {
- error: true,
- msg: "Empty Api Key",
- };
- }
- } else {
- console.log("[Auth] use user api key");
- }
-
- return {
- error: false,
- };
-}
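
To make the access-control flow above easier to follow, here is a hedged Python sketch of the same check (the real route is TypeScript). The prefix value, the sample code, and the helper names are stand-ins for the server config; spark-md5 in the route corresponds to hashlib.md5 here.

# Python sketch of the access-code check implemented by auth.ts above.
import hashlib

ACCESS_CODE_PREFIX = "nk-"  # illustrative; the real prefix comes from ../constant
ALLOWED_HASHED_CODES = {hashlib.md5(b"my-secret-code").hexdigest()}

def parse_api_key(bearer_token):
    token = bearer_token.strip().replace("Bearer ", "").strip()
    if token.startswith(ACCESS_CODE_PREFIX):
        return token[len(ACCESS_CODE_PREFIX):], ""  # (access_code, api_key)
    return "", token

def is_authorized(authorization_header):
    access_code, api_key = parse_api_key(authorization_header)
    if api_key:  # the caller brought their own OpenAI key
        return True
    hashed = hashlib.md5(access_code.encode()).hexdigest()
    return hashed in ALLOWED_HASHED_CODES

print(is_authorized("Bearer nk-my-secret-code"))  # True for the sample code above
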
diff --git a/spaces/MBZ/LoRA-DreamBooth-Training-UI/app_upload.py b/spaces/MBZ/LoRA-DreamBooth-Training-UI/app_upload.py
deleted file mode 100644
index b2465fa1f13425e05bd638cfe330b47ed7bd53e2..0000000000000000000000000000000000000000
--- a/spaces/MBZ/LoRA-DreamBooth-Training-UI/app_upload.py
+++ /dev/null
@@ -1,100 +0,0 @@
-#!/usr/bin/env python
-
-from __future__ import annotations
-
-import pathlib
-
-import gradio as gr
-import slugify
-
-from constants import UploadTarget
-from uploader import Uploader
-from utils import find_exp_dirs
-
-
-class LoRAModelUploader(Uploader):
- def upload_lora_model(
- self,
- folder_path: str,
- repo_name: str,
- upload_to: str,
- private: bool,
- delete_existing_repo: bool,
- ) -> str:
- if not folder_path:
- raise ValueError('The model folder path must not be empty.')
- if not repo_name:
- repo_name = pathlib.Path(folder_path).name
- repo_name = slugify.slugify(repo_name)
-
- if upload_to == UploadTarget.PERSONAL_PROFILE.value:
- organization = ''
- elif upload_to == UploadTarget.LORA_LIBRARY.value:
- organization = 'lora-library'
- else:
- raise ValueError(f'Unknown upload destination: {upload_to}')
-
- return self.upload(folder_path,
- repo_name,
- organization=organization,
- private=private,
- delete_existing_repo=delete_existing_repo)
-
-
-def load_local_lora_model_list() -> dict:
- choices = find_exp_dirs(ignore_repo=True)
- return gr.update(choices=choices, value=choices[0] if choices else None)
-
-
-def create_upload_demo(hf_token: str | None) -> gr.Blocks:
- uploader = LoRAModelUploader(hf_token)
- model_dirs = find_exp_dirs(ignore_repo=True)
-
- with gr.Blocks() as demo:
- with gr.Box():
- gr.Markdown('Local Models')
- reload_button = gr.Button('Reload Model List')
- model_dir = gr.Dropdown(
- label='Model names',
- choices=model_dirs,
- value=model_dirs[0] if model_dirs else None)
- with gr.Box():
- gr.Markdown('Upload Settings')
- with gr.Row():
- use_private_repo = gr.Checkbox(label='Private', value=True)
- delete_existing_repo = gr.Checkbox(
- label='Delete existing repo of the same name', value=False)
- upload_to = gr.Radio(label='Upload to',
- choices=[_.value for _ in UploadTarget],
- value=UploadTarget.LORA_LIBRARY.value)
- model_name = gr.Textbox(label='Model Name')
- upload_button = gr.Button('Upload')
- gr.Markdown('''
- - You can upload your trained model to your personal profile (i.e. https://huggingface.co/{your_username}/{model_name}) or to the public [LoRA Concepts Library](https://huggingface.co/lora-library) (i.e. https://huggingface.co/lora-library/{model_name}).
- ''')
- with gr.Box():
- gr.Markdown('Output message')
- output_message = gr.Markdown()
-
- reload_button.click(fn=load_local_lora_model_list,
- inputs=None,
- outputs=model_dir)
- upload_button.click(fn=uploader.upload_lora_model,
- inputs=[
- model_dir,
- model_name,
- upload_to,
- use_private_repo,
- delete_existing_repo,
- ],
- outputs=output_message)
-
- return demo
-
-
-if __name__ == '__main__':
- import os
-
- hf_token = os.getenv('HF_TOKEN')
- demo = create_upload_demo(hf_token)
- demo.queue(max_size=1).launch(share=False)
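
The `Uploader.upload` call used above comes from a separate `uploader` module that is not part of this diff. As rough orientation only, a plausible minimal implementation on top of huggingface_hub could look like the hypothetical sketch below; the class body and repo-naming logic are assumptions, not the Space's actual code.

# Hypothetical sketch of the Uploader helper assumed by app_upload.py above.
from __future__ import annotations

from huggingface_hub import HfApi

class Uploader:
    def __init__(self, hf_token: str | None):
        self.api = HfApi(token=hf_token)

    def upload(self, folder_path: str, repo_name: str, organization: str = '',
               private: bool = True, delete_existing_repo: bool = False) -> str:
        username = self.api.whoami()['name']
        repo_id = f'{organization or username}/{repo_name}'
        if delete_existing_repo:
            try:
                self.api.delete_repo(repo_id, repo_type='model')
            except Exception:
                pass  # repo did not exist yet
        self.api.create_repo(repo_id, repo_type='model', private=private, exist_ok=True)
        self.api.upload_folder(repo_id=repo_id, folder_path=folder_path, repo_type='model')
        return f'Uploaded to https://huggingface.co/{repo_id}'
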
diff --git a/spaces/MCkernick/Image_Restoration_Colorization/Face_Detection/align_warp_back_multiple_dlib.py b/spaces/MCkernick/Image_Restoration_Colorization/Face_Detection/align_warp_back_multiple_dlib.py
deleted file mode 100644
index 4b82139e4a81201b16fdfe56bc1cdb2b97bac398..0000000000000000000000000000000000000000
--- a/spaces/MCkernick/Image_Restoration_Colorization/Face_Detection/align_warp_back_multiple_dlib.py
+++ /dev/null
@@ -1,437 +0,0 @@
-# Copyright (c) Microsoft Corporation.
-# Licensed under the MIT License.
-
-import torch
-import numpy as np
-import skimage.io as io
-
-# from face_sdk import FaceDetection
-import matplotlib.pyplot as plt
-from matplotlib.patches import Rectangle
-from skimage.transform import SimilarityTransform
-from skimage.transform import warp
-from PIL import Image, ImageFilter
-import torch.nn.functional as F
-import torchvision as tv
-import torchvision.utils as vutils
-import time
-import cv2
-import os
-from skimage import img_as_ubyte
-import json
-import argparse
-import dlib
-
-
-def calculate_cdf(histogram):
- """
- This method calculates the cumulative distribution function
- :param array histogram: The values of the histogram
- :return: normalized_cdf: The normalized cumulative distribution function
- :rtype: array
- """
- # Get the cumulative sum of the elements
- cdf = histogram.cumsum()
-
- # Normalize the cdf
- normalized_cdf = cdf / float(cdf.max())
-
- return normalized_cdf
-
-
-def calculate_lookup(src_cdf, ref_cdf):
- """
- This method creates the lookup table
- :param array src_cdf: The cdf for the source image
- :param array ref_cdf: The cdf for the reference image
- :return: lookup_table: The lookup table
- :rtype: array
- """
- lookup_table = np.zeros(256)
- lookup_val = 0
- for src_pixel_val in range(len(src_cdf)):
- for ref_pixel_val in range(len(ref_cdf)):
- if ref_cdf[ref_pixel_val] >= src_cdf[src_pixel_val]:
- lookup_val = ref_pixel_val
- break
- lookup_table[src_pixel_val] = lookup_val
- return lookup_table
-
-
-def match_histograms(src_image, ref_image):
- """
- This method matches the source image histogram to the
- histogram of the reference image
- :param image src_image: The original source image
- :param image ref_image: The reference image
- :return: image_after_matching
- :rtype: image (array)
- """
- # Split the images into the different color channels
- # b means blue, g means green and r means red
- src_b, src_g, src_r = cv2.split(src_image)
- ref_b, ref_g, ref_r = cv2.split(ref_image)
-
- # Compute the b, g, and r histograms separately
- # The flatten() NumPy method returns a copy of the array
- # collapsed into one dimension.
- src_hist_blue, bin_0 = np.histogram(src_b.flatten(), 256, [0, 256])
- src_hist_green, bin_1 = np.histogram(src_g.flatten(), 256, [0, 256])
- src_hist_red, bin_2 = np.histogram(src_r.flatten(), 256, [0, 256])
- ref_hist_blue, bin_3 = np.histogram(ref_b.flatten(), 256, [0, 256])
- ref_hist_green, bin_4 = np.histogram(ref_g.flatten(), 256, [0, 256])
- ref_hist_red, bin_5 = np.histogram(ref_r.flatten(), 256, [0, 256])
-
- # Compute the normalized cdf for the source and reference image
- src_cdf_blue = calculate_cdf(src_hist_blue)
- src_cdf_green = calculate_cdf(src_hist_green)
- src_cdf_red = calculate_cdf(src_hist_red)
- ref_cdf_blue = calculate_cdf(ref_hist_blue)
- ref_cdf_green = calculate_cdf(ref_hist_green)
- ref_cdf_red = calculate_cdf(ref_hist_red)
-
- # Make a separate lookup table for each color
- blue_lookup_table = calculate_lookup(src_cdf_blue, ref_cdf_blue)
- green_lookup_table = calculate_lookup(src_cdf_green, ref_cdf_green)
- red_lookup_table = calculate_lookup(src_cdf_red, ref_cdf_red)
-
- # Use the lookup function to transform the colors of the original
- # source image
- blue_after_transform = cv2.LUT(src_b, blue_lookup_table)
- green_after_transform = cv2.LUT(src_g, green_lookup_table)
- red_after_transform = cv2.LUT(src_r, red_lookup_table)
-
- # Put the image back together
- image_after_matching = cv2.merge([blue_after_transform, green_after_transform, red_after_transform])
- image_after_matching = cv2.convertScaleAbs(image_after_matching)
-
- return image_after_matching
-
-
-def _standard_face_pts():
- pts = (
- np.array([196.0, 226.0, 316.0, 226.0, 256.0, 286.0, 220.0, 360.4, 292.0, 360.4], np.float32) / 256.0
- - 1.0
- )
-
- return np.reshape(pts, (5, 2))
-
-
-def _origin_face_pts():
- pts = np.array([196.0, 226.0, 316.0, 226.0, 256.0, 286.0, 220.0, 360.4, 292.0, 360.4], np.float32)
-
- return np.reshape(pts, (5, 2))
-
-
-def compute_transformation_matrix(img, landmark, normalize, target_face_scale=1.0):
-
- std_pts = _standard_face_pts() # [-1,1]
- target_pts = (std_pts * target_face_scale + 1) / 2 * 256.0
-
- # print(target_pts)
-
- h, w, c = img.shape
- if normalize == True:
- landmark[:, 0] = landmark[:, 0] / h * 2 - 1.0
- landmark[:, 1] = landmark[:, 1] / w * 2 - 1.0
-
- # print(landmark)
-
- affine = SimilarityTransform()
-
- affine.estimate(target_pts, landmark)
-
- return affine
-
-
-def compute_inverse_transformation_matrix(img, landmark, normalize, target_face_scale=1.0):
-
- std_pts = _standard_face_pts() # [-1,1]
- target_pts = (std_pts * target_face_scale + 1) / 2 * 256.0
-
- # print(target_pts)
-
- h, w, c = img.shape
- if normalize == True:
- landmark[:, 0] = landmark[:, 0] / h * 2 - 1.0
- landmark[:, 1] = landmark[:, 1] / w * 2 - 1.0
-
- # print(landmark)
-
- affine = SimilarityTransform()
-
- affine.estimate(landmark, target_pts)
-
- return affine
-
-
-def show_detection(image, box, landmark):
- plt.imshow(image)
- print(box[2] - box[0])
- plt.gca().add_patch(
- Rectangle(
- (box[1], box[0]), box[2] - box[0], box[3] - box[1], linewidth=1, edgecolor="r", facecolor="none"
- )
- )
- plt.scatter(landmark[0][0], landmark[0][1])
- plt.scatter(landmark[1][0], landmark[1][1])
- plt.scatter(landmark[2][0], landmark[2][1])
- plt.scatter(landmark[3][0], landmark[3][1])
- plt.scatter(landmark[4][0], landmark[4][1])
- plt.show()
-
-
-def affine2theta(affine, input_w, input_h, target_w, target_h):
- # param = np.linalg.inv(affine)
- param = affine
- theta = np.zeros([2, 3])
- theta[0, 0] = param[0, 0] * input_h / target_h
- theta[0, 1] = param[0, 1] * input_w / target_h
- theta[0, 2] = (2 * param[0, 2] + param[0, 0] * input_h + param[0, 1] * input_w) / target_h - 1
- theta[1, 0] = param[1, 0] * input_h / target_w
- theta[1, 1] = param[1, 1] * input_w / target_w
- theta[1, 2] = (2 * param[1, 2] + param[1, 0] * input_h + param[1, 1] * input_w) / target_w - 1
- return theta
-
-
-def blur_blending(im1, im2, mask):
-
- mask *= 255.0
-
- kernel = np.ones((10, 10), np.uint8)
- mask = cv2.erode(mask, kernel, iterations=1)
-
- mask = Image.fromarray(mask.astype("uint8")).convert("L")
- im1 = Image.fromarray(im1.astype("uint8"))
- im2 = Image.fromarray(im2.astype("uint8"))
-
- mask_blur = mask.filter(ImageFilter.GaussianBlur(20))
- im = Image.composite(im1, im2, mask)
-
- im = Image.composite(im, im2, mask_blur)
-
- return np.array(im) / 255.0
-
-
-def blur_blending_cv2(im1, im2, mask):
-
- mask *= 255.0
-
- kernel = np.ones((9, 9), np.uint8)
- mask = cv2.erode(mask, kernel, iterations=3)
-
- mask_blur = cv2.GaussianBlur(mask, (25, 25), 0)
- mask_blur /= 255.0
-
- im = im1 * mask_blur + (1 - mask_blur) * im2
-
- im /= 255.0
- im = np.clip(im, 0.0, 1.0)
-
- return im
-
-
-# def Poisson_blending(im1,im2,mask):
-
-
-# Image.composite(
-def Poisson_blending(im1, im2, mask):
-
- # mask=1-mask
- mask *= 255
- kernel = np.ones((10, 10), np.uint8)
- mask = cv2.erode(mask, kernel, iterations=1)
- mask /= 255
- mask = 1 - mask
- mask *= 255
-
- mask = mask[:, :, 0]
- # im1.shape is (rows, cols, channels); cv2.seamlessClone expects an (x, y) center.
- height, width, channels = im1.shape
- center = (int(width / 2), int(height / 2))
- result = cv2.seamlessClone(
- im2.astype("uint8"), im1.astype("uint8"), mask.astype("uint8"), center, cv2.MIXED_CLONE
- )
-
- return result / 255.0
-
-
-def Poisson_B(im1, im2, mask, center):
-
- mask *= 255
-
- result = cv2.seamlessClone(
- im2.astype("uint8"), im1.astype("uint8"), mask.astype("uint8"), center, cv2.NORMAL_CLONE
- )
-
- return result / 255
-
-
-def seamless_clone(old_face, new_face, raw_mask):
-
- height, width, _ = old_face.shape
- height = height // 2
- width = width // 2
-
- y_indices, x_indices, _ = np.nonzero(raw_mask)
- y_crop = slice(np.min(y_indices), np.max(y_indices))
- x_crop = slice(np.min(x_indices), np.max(x_indices))
- y_center = int(np.rint((np.max(y_indices) + np.min(y_indices)) / 2 + height))
- x_center = int(np.rint((np.max(x_indices) + np.min(x_indices)) / 2 + width))
-
- insertion = np.rint(new_face[y_crop, x_crop] * 255.0).astype("uint8")
- insertion_mask = np.rint(raw_mask[y_crop, x_crop] * 255.0).astype("uint8")
- insertion_mask[insertion_mask != 0] = 255
- prior = np.rint(np.pad(old_face * 255.0, ((height, height), (width, width), (0, 0)), "constant")).astype(
- "uint8"
- )
- # if np.sum(insertion_mask) == 0:
- n_mask = insertion_mask[1:-1, 1:-1, :]
- n_mask = cv2.copyMakeBorder(n_mask, 1, 1, 1, 1, cv2.BORDER_CONSTANT, 0)
- # print(n_mask.shape)  # debug
- x, y, w, h = cv2.boundingRect(n_mask[:, :, 0])
- if w < 4 or h < 4:
- blended = prior
- else:
- blended = cv2.seamlessClone(
- insertion, # pylint: disable=no-member
- prior,
- insertion_mask,
- (x_center, y_center),
- cv2.NORMAL_CLONE,
- ) # pylint: disable=no-member
-
- blended = blended[height:-height, width:-width]
-
- return blended.astype("float32") / 255.0
-
-
-def get_landmark(face_landmarks, id):
- part = face_landmarks.part(id)
- x = part.x
- y = part.y
-
- return (x, y)
-
-
-def search(face_landmarks):
-
- x1, y1 = get_landmark(face_landmarks, 36)
- x2, y2 = get_landmark(face_landmarks, 39)
- x3, y3 = get_landmark(face_landmarks, 42)
- x4, y4 = get_landmark(face_landmarks, 45)
-
- x_nose, y_nose = get_landmark(face_landmarks, 30)
-
- x_left_mouth, y_left_mouth = get_landmark(face_landmarks, 48)
- x_right_mouth, y_right_mouth = get_landmark(face_landmarks, 54)
-
- x_left_eye = int((x1 + x2) / 2)
- y_left_eye = int((y1 + y2) / 2)
- x_right_eye = int((x3 + x4) / 2)
- y_right_eye = int((y3 + y4) / 2)
-
- results = np.array(
- [
- [x_left_eye, y_left_eye],
- [x_right_eye, y_right_eye],
- [x_nose, y_nose],
- [x_left_mouth, y_left_mouth],
- [x_right_mouth, y_right_mouth],
- ]
- )
-
- return results
-
-
-if __name__ == "__main__":
-
- parser = argparse.ArgumentParser()
- parser.add_argument("--origin_url", type=str, default="./", help="origin images")
- parser.add_argument("--replace_url", type=str, default="./", help="restored faces")
- parser.add_argument("--save_url", type=str, default="./save")
- opts = parser.parse_args()
-
- origin_url = opts.origin_url
- replace_url = opts.replace_url
- save_url = opts.save_url
-
- if not os.path.exists(save_url):
- os.makedirs(save_url)
-
- face_detector = dlib.get_frontal_face_detector()
- landmark_locator = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")
-
- count = 0
-
- for x in os.listdir(origin_url):
- img_url = os.path.join(origin_url, x)
- pil_img = Image.open(img_url).convert("RGB")
-
- origin_width, origin_height = pil_img.size
- image = np.array(pil_img)
-
- start = time.time()
- faces = face_detector(image)
- done = time.time()
-
- if len(faces) == 0:
- print("Warning: There is no face in %s" % (x))
- continue
-
- blended = image
- for face_id in range(len(faces)):
-
- current_face = faces[face_id]
- face_landmarks = landmark_locator(image, current_face)
- current_fl = search(face_landmarks)
-
- forward_mask = np.ones_like(image).astype("uint8")
- affine = compute_transformation_matrix(image, current_fl, False, target_face_scale=1.3)
- aligned_face = warp(image, affine, output_shape=(256, 256, 3), preserve_range=True)
- forward_mask = warp(
- forward_mask, affine, output_shape=(256, 256, 3), order=0, preserve_range=True
- )
-
- affine_inverse = affine.inverse
- cur_face = aligned_face
- if replace_url != "":
-
- face_name = x[:-4] + "_" + str(face_id + 1) + ".png"
- cur_url = os.path.join(replace_url, face_name)
- restored_face = Image.open(cur_url).convert("RGB")
- restored_face = np.array(restored_face)
- cur_face = restored_face
-
- ## Histogram Color matching
- A = cv2.cvtColor(aligned_face.astype("uint8"), cv2.COLOR_RGB2BGR)
- B = cv2.cvtColor(cur_face.astype("uint8"), cv2.COLOR_RGB2BGR)
- B = match_histograms(B, A)
- cur_face = cv2.cvtColor(B.astype("uint8"), cv2.COLOR_BGR2RGB)
-
- warped_back = warp(
- cur_face,
- affine_inverse,
- output_shape=(origin_height, origin_width, 3),
- order=3,
- preserve_range=True,
- )
-
- backward_mask = warp(
- forward_mask,
- affine_inverse,
- output_shape=(origin_height, origin_width, 3),
- order=0,
- preserve_range=True,
- ) ## Nearest neighbour
-
- blended = blur_blending_cv2(warped_back, blended, backward_mask)
- blended *= 255.0
-
- io.imsave(os.path.join(save_url, x), img_as_ubyte(blended / 255.0))
-
- count += 1
-
- if count % 1000 == 0:
- print("%d have finished ..." % (count))
-
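
As a quick illustration of the histogram-matching helpers defined above (calculate_cdf, calculate_lookup, match_histograms), here is a minimal usage sketch meant to be run inside this module; the file names are placeholders.

# Match the colors of a restored face crop to the original crop (placeholder paths).
import cv2

source = cv2.imread("restored_face.png")       # image whose colors will be adjusted
reference = cv2.imread("original_face.png")    # image providing the target histogram

matched = match_histograms(source, reference)  # per-channel CDF lookup, as defined above
cv2.imwrite("restored_face_matched.png", matched)
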
diff --git a/spaces/Mahiruoshi/MyGO_VIts-bert/attentions.py b/spaces/Mahiruoshi/MyGO_VIts-bert/attentions.py
deleted file mode 100644
index 3ba2407267ecd425d2095a6428015b5b4ebc0bda..0000000000000000000000000000000000000000
--- a/spaces/Mahiruoshi/MyGO_VIts-bert/attentions.py
+++ /dev/null
@@ -1,464 +0,0 @@
-import math
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-import commons
-import logging
-
-logger = logging.getLogger(__name__)
-
-
-class LayerNorm(nn.Module):
- def __init__(self, channels, eps=1e-5):
- super().__init__()
- self.channels = channels
- self.eps = eps
-
- self.gamma = nn.Parameter(torch.ones(channels))
- self.beta = nn.Parameter(torch.zeros(channels))
-
- def forward(self, x):
- x = x.transpose(1, -1)
- x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps)
- return x.transpose(1, -1)
-
-
-@torch.jit.script
-def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels):
- n_channels_int = n_channels[0]
- in_act = input_a + input_b
- t_act = torch.tanh(in_act[:, :n_channels_int, :])
- s_act = torch.sigmoid(in_act[:, n_channels_int:, :])
- acts = t_act * s_act
- return acts
-
-
-class Encoder(nn.Module):
- def __init__(
- self,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size=1,
- p_dropout=0.0,
- window_size=4,
- isflow=True,
- **kwargs
- ):
- super().__init__()
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.window_size = window_size
- # if isflow:
- # cond_layer = torch.nn.Conv1d(256, 2*hidden_channels*n_layers, 1)
- # self.cond_pre = torch.nn.Conv1d(hidden_channels, 2*hidden_channels, 1)
- # self.cond_layer = weight_norm(cond_layer, name='weight')
- # self.gin_channels = 256
- self.cond_layer_idx = self.n_layers
- if "gin_channels" in kwargs:
- self.gin_channels = kwargs["gin_channels"]
- if self.gin_channels != 0:
- self.spk_emb_linear = nn.Linear(self.gin_channels, self.hidden_channels)
- # vits2 says 3rd block, so idx is 2 by default
- self.cond_layer_idx = (
- kwargs["cond_layer_idx"] if "cond_layer_idx" in kwargs else 2
- )
- logger.debug("gin_channels: %s, cond_layer_idx: %s", self.gin_channels, self.cond_layer_idx)
- assert (
- self.cond_layer_idx < self.n_layers
- ), "cond_layer_idx should be less than n_layers"
- self.drop = nn.Dropout(p_dropout)
- self.attn_layers = nn.ModuleList()
- self.norm_layers_1 = nn.ModuleList()
- self.ffn_layers = nn.ModuleList()
- self.norm_layers_2 = nn.ModuleList()
- for i in range(self.n_layers):
- self.attn_layers.append(
- MultiHeadAttention(
- hidden_channels,
- hidden_channels,
- n_heads,
- p_dropout=p_dropout,
- window_size=window_size,
- )
- )
- self.norm_layers_1.append(LayerNorm(hidden_channels))
- self.ffn_layers.append(
- FFN(
- hidden_channels,
- hidden_channels,
- filter_channels,
- kernel_size,
- p_dropout=p_dropout,
- )
- )
- self.norm_layers_2.append(LayerNorm(hidden_channels))
-
- def forward(self, x, x_mask, g=None):
- attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1)
- x = x * x_mask
- for i in range(self.n_layers):
- if i == self.cond_layer_idx and g is not None:
- g = self.spk_emb_linear(g.transpose(1, 2))
- g = g.transpose(1, 2)
- x = x + g
- x = x * x_mask
- y = self.attn_layers[i](x, x, attn_mask)
- y = self.drop(y)
- x = self.norm_layers_1[i](x + y)
-
- y = self.ffn_layers[i](x, x_mask)
- y = self.drop(y)
- x = self.norm_layers_2[i](x + y)
- x = x * x_mask
- return x
-
-
-class Decoder(nn.Module):
- def __init__(
- self,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size=1,
- p_dropout=0.0,
- proximal_bias=False,
- proximal_init=True,
- **kwargs
- ):
- super().__init__()
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.proximal_bias = proximal_bias
- self.proximal_init = proximal_init
-
- self.drop = nn.Dropout(p_dropout)
- self.self_attn_layers = nn.ModuleList()
- self.norm_layers_0 = nn.ModuleList()
- self.encdec_attn_layers = nn.ModuleList()
- self.norm_layers_1 = nn.ModuleList()
- self.ffn_layers = nn.ModuleList()
- self.norm_layers_2 = nn.ModuleList()
- for i in range(self.n_layers):
- self.self_attn_layers.append(
- MultiHeadAttention(
- hidden_channels,
- hidden_channels,
- n_heads,
- p_dropout=p_dropout,
- proximal_bias=proximal_bias,
- proximal_init=proximal_init,
- )
- )
- self.norm_layers_0.append(LayerNorm(hidden_channels))
- self.encdec_attn_layers.append(
- MultiHeadAttention(
- hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout
- )
- )
- self.norm_layers_1.append(LayerNorm(hidden_channels))
- self.ffn_layers.append(
- FFN(
- hidden_channels,
- hidden_channels,
- filter_channels,
- kernel_size,
- p_dropout=p_dropout,
- causal=True,
- )
- )
- self.norm_layers_2.append(LayerNorm(hidden_channels))
-
- def forward(self, x, x_mask, h, h_mask):
- """
- x: decoder input
- h: encoder output
- """
- self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to(
- device=x.device, dtype=x.dtype
- )
- encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1)
- x = x * x_mask
- for i in range(self.n_layers):
- y = self.self_attn_layers[i](x, x, self_attn_mask)
- y = self.drop(y)
- x = self.norm_layers_0[i](x + y)
-
- y = self.encdec_attn_layers[i](x, h, encdec_attn_mask)
- y = self.drop(y)
- x = self.norm_layers_1[i](x + y)
-
- y = self.ffn_layers[i](x, x_mask)
- y = self.drop(y)
- x = self.norm_layers_2[i](x + y)
- x = x * x_mask
- return x
-
-
-class MultiHeadAttention(nn.Module):
- def __init__(
- self,
- channels,
- out_channels,
- n_heads,
- p_dropout=0.0,
- window_size=None,
- heads_share=True,
- block_length=None,
- proximal_bias=False,
- proximal_init=False,
- ):
- super().__init__()
- assert channels % n_heads == 0
-
- self.channels = channels
- self.out_channels = out_channels
- self.n_heads = n_heads
- self.p_dropout = p_dropout
- self.window_size = window_size
- self.heads_share = heads_share
- self.block_length = block_length
- self.proximal_bias = proximal_bias
- self.proximal_init = proximal_init
- self.attn = None
-
- self.k_channels = channels // n_heads
- self.conv_q = nn.Conv1d(channels, channels, 1)
- self.conv_k = nn.Conv1d(channels, channels, 1)
- self.conv_v = nn.Conv1d(channels, channels, 1)
- self.conv_o = nn.Conv1d(channels, out_channels, 1)
- self.drop = nn.Dropout(p_dropout)
-
- if window_size is not None:
- n_heads_rel = 1 if heads_share else n_heads
- rel_stddev = self.k_channels**-0.5
- self.emb_rel_k = nn.Parameter(
- torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels)
- * rel_stddev
- )
- self.emb_rel_v = nn.Parameter(
- torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels)
- * rel_stddev
- )
-
- nn.init.xavier_uniform_(self.conv_q.weight)
- nn.init.xavier_uniform_(self.conv_k.weight)
- nn.init.xavier_uniform_(self.conv_v.weight)
- if proximal_init:
- with torch.no_grad():
- self.conv_k.weight.copy_(self.conv_q.weight)
- self.conv_k.bias.copy_(self.conv_q.bias)
-
- def forward(self, x, c, attn_mask=None):
- q = self.conv_q(x)
- k = self.conv_k(c)
- v = self.conv_v(c)
-
- x, self.attn = self.attention(q, k, v, mask=attn_mask)
-
- x = self.conv_o(x)
- return x
-
- def attention(self, query, key, value, mask=None):
- # reshape [b, d, t] -> [b, n_h, t, d_k]
- b, d, t_s, t_t = (*key.size(), query.size(2))
- query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3)
- key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3)
- value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3)
-
- scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1))
- if self.window_size is not None:
- assert (
- t_s == t_t
- ), "Relative attention is only available for self-attention."
- key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s)
- rel_logits = self._matmul_with_relative_keys(
- query / math.sqrt(self.k_channels), key_relative_embeddings
- )
- scores_local = self._relative_position_to_absolute_position(rel_logits)
- scores = scores + scores_local
- if self.proximal_bias:
- assert t_s == t_t, "Proximal bias is only available for self-attention."
- scores = scores + self._attention_bias_proximal(t_s).to(
- device=scores.device, dtype=scores.dtype
- )
- if mask is not None:
- scores = scores.masked_fill(mask == 0, -1e4)
- if self.block_length is not None:
- assert (
- t_s == t_t
- ), "Local attention is only available for self-attention."
- block_mask = (
- torch.ones_like(scores)
- .triu(-self.block_length)
- .tril(self.block_length)
- )
- scores = scores.masked_fill(block_mask == 0, -1e4)
- p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s]
- p_attn = self.drop(p_attn)
- output = torch.matmul(p_attn, value)
- if self.window_size is not None:
- relative_weights = self._absolute_position_to_relative_position(p_attn)
- value_relative_embeddings = self._get_relative_embeddings(
- self.emb_rel_v, t_s
- )
- output = output + self._matmul_with_relative_values(
- relative_weights, value_relative_embeddings
- )
- output = (
- output.transpose(2, 3).contiguous().view(b, d, t_t)
- ) # [b, n_h, t_t, d_k] -> [b, d, t_t]
- return output, p_attn
-
- def _matmul_with_relative_values(self, x, y):
- """
- x: [b, h, l, m]
- y: [h or 1, m, d]
- ret: [b, h, l, d]
- """
- ret = torch.matmul(x, y.unsqueeze(0))
- return ret
-
- def _matmul_with_relative_keys(self, x, y):
- """
- x: [b, h, l, d]
- y: [h or 1, m, d]
- ret: [b, h, l, m]
- """
- ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1))
- return ret
-
- def _get_relative_embeddings(self, relative_embeddings, length):
- # max relative distance covered by the embeddings: 2 * self.window_size + 1
- # Pad first before slice to avoid using cond ops.
- pad_length = max(length - (self.window_size + 1), 0)
- slice_start_position = max((self.window_size + 1) - length, 0)
- slice_end_position = slice_start_position + 2 * length - 1
- if pad_length > 0:
- padded_relative_embeddings = F.pad(
- relative_embeddings,
- commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]]),
- )
- else:
- padded_relative_embeddings = relative_embeddings
- used_relative_embeddings = padded_relative_embeddings[
- :, slice_start_position:slice_end_position
- ]
- return used_relative_embeddings
-
- def _relative_position_to_absolute_position(self, x):
- """
- x: [b, h, l, 2*l-1]
- ret: [b, h, l, l]
- """
- batch, heads, length, _ = x.size()
- # Concat columns of pad to shift from relative to absolute indexing.
- x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, 1]]))
-
- # Concat extra elements so to add up to shape (len+1, 2*len-1).
- x_flat = x.view([batch, heads, length * 2 * length])
- x_flat = F.pad(
- x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [0, length - 1]])
- )
-
- # Reshape and slice out the padded elements.
- x_final = x_flat.view([batch, heads, length + 1, 2 * length - 1])[
- :, :, :length, length - 1 :
- ]
- return x_final
-
- def _absolute_position_to_relative_position(self, x):
- """
- x: [b, h, l, l]
- ret: [b, h, l, 2*l-1]
- """
- batch, heads, length, _ = x.size()
- # pad along column
- x = F.pad(
- x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length - 1]])
- )
- x_flat = x.view([batch, heads, length**2 + length * (length - 1)])
- # add 0's in the beginning that will skew the elements after reshape
- x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]]))
- x_final = x_flat.view([batch, heads, length, 2 * length])[:, :, :, 1:]
- return x_final
-
- def _attention_bias_proximal(self, length):
- """Bias for self-attention to encourage attention to close positions.
- Args:
- length: an integer scalar.
- Returns:
- a Tensor with shape [1, 1, length, length]
- """
- r = torch.arange(length, dtype=torch.float32)
- diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1)
- return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0)
-
-
-class FFN(nn.Module):
- def __init__(
- self,
- in_channels,
- out_channels,
- filter_channels,
- kernel_size,
- p_dropout=0.0,
- activation=None,
- causal=False,
- ):
- super().__init__()
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.activation = activation
- self.causal = causal
-
- if causal:
- self.padding = self._causal_padding
- else:
- self.padding = self._same_padding
-
- self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size)
- self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size)
- self.drop = nn.Dropout(p_dropout)
-
- def forward(self, x, x_mask):
- x = self.conv_1(self.padding(x * x_mask))
- if self.activation == "gelu":
- x = x * torch.sigmoid(1.702 * x)
- else:
- x = torch.relu(x)
- x = self.drop(x)
- x = self.conv_2(self.padding(x * x_mask))
- return x * x_mask
-
- def _causal_padding(self, x):
- if self.kernel_size == 1:
- return x
- pad_l = self.kernel_size - 1
- pad_r = 0
- padding = [[0, 0], [0, 0], [pad_l, pad_r]]
- x = F.pad(x, commons.convert_pad_shape(padding))
- return x
-
- def _same_padding(self, x):
- if self.kernel_size == 1:
- return x
- pad_l = (self.kernel_size - 1) // 2
- pad_r = self.kernel_size // 2
- padding = [[0, 0], [0, 0], [pad_l, pad_r]]
- x = F.pad(x, commons.convert_pad_shape(padding))
- return x
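
For orientation, here is a hedged sketch of running the Encoder defined above on dummy data. It assumes the repo's `commons` module is importable (as in this file); the layer sizes are typical VITS-style values chosen only for illustration.

# Dummy forward pass through the relative-position Encoder above (shapes only).
import torch

hidden, filt, heads, layers = 192, 768, 2, 6   # assumed, illustrative sizes
enc = Encoder(hidden, filt, heads, layers, kernel_size=3, p_dropout=0.1, window_size=4)

x = torch.randn(2, hidden, 50)    # [batch, channels, time]
x_mask = torch.ones(2, 1, 50)     # 1 = valid frame, 0 = padding
y = enc(x, x_mask)                # -> [2, 192, 50]
print(y.shape)
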
diff --git a/spaces/Manmay/tortoise-tts/tortoise/utils/stft.py b/spaces/Manmay/tortoise-tts/tortoise/utils/stft.py
deleted file mode 100644
index f54eb968225cfe5928cca6d7686abbcc3728a674..0000000000000000000000000000000000000000
--- a/spaces/Manmay/tortoise-tts/tortoise/utils/stft.py
+++ /dev/null
@@ -1,193 +0,0 @@
-"""
-BSD 3-Clause License
-
-Copyright (c) 2017, Prem Seetharaman
-All rights reserved.
-
-* Redistribution and use in source and binary forms, with or without
- modification, are permitted provided that the following conditions are met:
-
-* Redistributions of source code must retain the above copyright notice,
- this list of conditions and the following disclaimer.
-
-* Redistributions in binary form must reproduce the above copyright notice, this
- list of conditions and the following disclaimer in the
- documentation and/or other materials provided with the distribution.
-
-* Neither the name of the copyright holder nor the names of its
- contributors may be used to endorse or promote products derived from this
- software without specific prior written permission.
-
-THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
-ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
-WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
-DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR
-ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
-(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
-LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON
-ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
-(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
-SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-"""
-
-import torch
-import numpy as np
-import torch.nn.functional as F
-from torch.autograd import Variable
-from scipy.signal import get_window
-from librosa.util import pad_center, tiny
-import librosa.util as librosa_util
-
-
-def window_sumsquare(window, n_frames, hop_length=200, win_length=800,
- n_fft=800, dtype=np.float32, norm=None):
- """
- # from librosa 0.6
- Compute the sum-square envelope of a window function at a given hop length.
-
- This is used to estimate modulation effects induced by windowing
- observations in short-time fourier transforms.
-
- Parameters
- ----------
- window : string, tuple, number, callable, or list-like
- Window specification, as in `get_window`
-
- n_frames : int > 0
- The number of analysis frames
-
- hop_length : int > 0
- The number of samples to advance between frames
-
- win_length : [optional]
- The length of the window function. By default, this matches `n_fft`.
-
- n_fft : int > 0
- The length of each analysis frame.
-
- dtype : np.dtype
- The data type of the output
-
- Returns
- -------
- wss : np.ndarray, shape=`(n_fft + hop_length * (n_frames - 1))`
- The sum-squared envelope of the window function
- """
- if win_length is None:
- win_length = n_fft
-
- n = n_fft + hop_length * (n_frames - 1)
- x = np.zeros(n, dtype=dtype)
-
- # Compute the squared window at the desired length
- win_sq = get_window(window, win_length, fftbins=True)
- win_sq = librosa_util.normalize(win_sq, norm=norm)**2
- win_sq = librosa_util.pad_center(win_sq, n_fft)
-
- # Fill the envelope
- for i in range(n_frames):
- sample = i * hop_length
- x[sample:min(n, sample + n_fft)] += win_sq[:max(0, min(n_fft, n - sample))]
- return x
-
-
-class STFT(torch.nn.Module):
- """adapted from Prem Seetharaman's https://github.com/pseeth/pytorch-stft"""
- def __init__(self, filter_length=800, hop_length=200, win_length=800,
- window='hann'):
- super(STFT, self).__init__()
- self.filter_length = filter_length
- self.hop_length = hop_length
- self.win_length = win_length
- self.window = window
- self.forward_transform = None
- scale = self.filter_length / self.hop_length
- fourier_basis = np.fft.fft(np.eye(self.filter_length))
-
- cutoff = int((self.filter_length / 2 + 1))
- fourier_basis = np.vstack([np.real(fourier_basis[:cutoff, :]),
- np.imag(fourier_basis[:cutoff, :])])
-
- forward_basis = torch.FloatTensor(fourier_basis[:, None, :])
- inverse_basis = torch.FloatTensor(
- np.linalg.pinv(scale * fourier_basis).T[:, None, :])
-
- if window is not None:
- assert(filter_length >= win_length)
- # get window and zero center pad it to filter_length
- fft_window = get_window(window, win_length, fftbins=True)
- fft_window = pad_center(fft_window, size=filter_length)
- fft_window = torch.from_numpy(fft_window).float()
-
- # window the bases
- forward_basis *= fft_window
- inverse_basis *= fft_window
-
- self.register_buffer('forward_basis', forward_basis.float())
- self.register_buffer('inverse_basis', inverse_basis.float())
-
- def transform(self, input_data):
- num_batches = input_data.size(0)
- num_samples = input_data.size(1)
-
- self.num_samples = num_samples
-
- # similar to librosa, reflect-pad the input
- input_data = input_data.view(num_batches, 1, num_samples)
- input_data = F.pad(
- input_data.unsqueeze(1),
- (int(self.filter_length / 2), int(self.filter_length / 2), 0, 0),
- mode='reflect')
- input_data = input_data.squeeze(1)
-
- forward_transform = F.conv1d(
- input_data,
- Variable(self.forward_basis, requires_grad=False),
- stride=self.hop_length,
- padding=0)
-
- cutoff = int((self.filter_length / 2) + 1)
- real_part = forward_transform[:, :cutoff, :]
- imag_part = forward_transform[:, cutoff:, :]
-
- magnitude = torch.sqrt(real_part**2 + imag_part**2)
- phase = torch.autograd.Variable(
- torch.atan2(imag_part.data, real_part.data))
-
- return magnitude, phase
-
- def inverse(self, magnitude, phase):
- recombine_magnitude_phase = torch.cat(
- [magnitude*torch.cos(phase), magnitude*torch.sin(phase)], dim=1)
-
- inverse_transform = F.conv_transpose1d(
- recombine_magnitude_phase,
- Variable(self.inverse_basis, requires_grad=False),
- stride=self.hop_length,
- padding=0)
-
- if self.window is not None:
- window_sum = window_sumsquare(
- self.window, magnitude.size(-1), hop_length=self.hop_length,
- win_length=self.win_length, n_fft=self.filter_length,
- dtype=np.float32)
- # remove modulation effects
- approx_nonzero_indices = torch.from_numpy(
- np.where(window_sum > tiny(window_sum))[0])
- window_sum = torch.autograd.Variable(
- torch.from_numpy(window_sum), requires_grad=False)
- window_sum = window_sum.cuda() if magnitude.is_cuda else window_sum
- inverse_transform[:, :, approx_nonzero_indices] /= window_sum[approx_nonzero_indices]
-
- # scale by hop ratio
- inverse_transform *= float(self.filter_length) / self.hop_length
-
- inverse_transform = inverse_transform[:, :, int(self.filter_length/2):]
- inverse_transform = inverse_transform[:, :, :-int(self.filter_length/2)]
-
- return inverse_transform
-
- def forward(self, input_data):
- self.magnitude, self.phase = self.transform(input_data)
- reconstruction = self.inverse(self.magnitude, self.phase)
- return reconstruction
\ No newline at end of file
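
A hedged round-trip sketch of the STFT module above, meant to be run inside this module: a synthetic sine is analyzed into magnitude/phase and resynthesized, which should reproduce the input up to edge effects. The sample rate and tone frequency are illustrative.

# Round-trip a 440 Hz tone through transform() and inverse().
import math
import torch

stft = STFT(filter_length=800, hop_length=200, win_length=800, window='hann')

t = torch.arange(16000, dtype=torch.float32) / 16000.0
audio = torch.sin(2 * math.pi * 440.0 * t).unsqueeze(0)   # [1, T]

magnitude, phase = stft.transform(audio)    # each [1, n_bins, n_frames]
reconstruction = stft.inverse(magnitude, phase)
print(audio.shape, magnitude.shape, reconstruction.shape)
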
diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/mlsd/models/mbv2_mlsd_tiny.py b/spaces/Mellow-ai/PhotoAI_Mellow/annotator/mlsd/models/mbv2_mlsd_tiny.py
deleted file mode 100644
index e3ed633f2cc23ea1829a627fdb879ab39f641f83..0000000000000000000000000000000000000000
--- a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/mlsd/models/mbv2_mlsd_tiny.py
+++ /dev/null
@@ -1,275 +0,0 @@
-import os
-import sys
-import torch
-import torch.nn as nn
-import torch.utils.model_zoo as model_zoo
-from torch.nn import functional as F
-
-
-class BlockTypeA(nn.Module):
- def __init__(self, in_c1, in_c2, out_c1, out_c2, upscale = True):
- super(BlockTypeA, self).__init__()
- self.conv1 = nn.Sequential(
- nn.Conv2d(in_c2, out_c2, kernel_size=1),
- nn.BatchNorm2d(out_c2),
- nn.ReLU(inplace=True)
- )
- self.conv2 = nn.Sequential(
- nn.Conv2d(in_c1, out_c1, kernel_size=1),
- nn.BatchNorm2d(out_c1),
- nn.ReLU(inplace=True)
- )
- self.upscale = upscale
-
- def forward(self, a, b):
- b = self.conv1(b)
- a = self.conv2(a)
- b = F.interpolate(b, scale_factor=2.0, mode='bilinear', align_corners=True)
- return torch.cat((a, b), dim=1)
-
-
-class BlockTypeB(nn.Module):
- def __init__(self, in_c, out_c):
- super(BlockTypeB, self).__init__()
- self.conv1 = nn.Sequential(
- nn.Conv2d(in_c, in_c, kernel_size=3, padding=1),
- nn.BatchNorm2d(in_c),
- nn.ReLU()
- )
- self.conv2 = nn.Sequential(
- nn.Conv2d(in_c, out_c, kernel_size=3, padding=1),
- nn.BatchNorm2d(out_c),
- nn.ReLU()
- )
-
- def forward(self, x):
- x = self.conv1(x) + x
- x = self.conv2(x)
- return x
-
-class BlockTypeC(nn.Module):
- def __init__(self, in_c, out_c):
- super(BlockTypeC, self).__init__()
- self.conv1 = nn.Sequential(
- nn.Conv2d(in_c, in_c, kernel_size=3, padding=5, dilation=5),
- nn.BatchNorm2d(in_c),
- nn.ReLU()
- )
- self.conv2 = nn.Sequential(
- nn.Conv2d(in_c, in_c, kernel_size=3, padding=1),
- nn.BatchNorm2d(in_c),
- nn.ReLU()
- )
- self.conv3 = nn.Conv2d(in_c, out_c, kernel_size=1)
-
- def forward(self, x):
- x = self.conv1(x)
- x = self.conv2(x)
- x = self.conv3(x)
- return x
-
-def _make_divisible(v, divisor, min_value=None):
- """
- This function is taken from the original tf repo.
- It ensures that all layers have a channel number that is divisible by 8
- It can be seen here:
- https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet/mobilenet.py
- :param v:
- :param divisor:
- :param min_value:
- :return:
- """
- if min_value is None:
- min_value = divisor
- new_v = max(min_value, int(v + divisor / 2) // divisor * divisor)
- # Make sure that round down does not go down by more than 10%.
- if new_v < 0.9 * v:
- new_v += divisor
- return new_v
-
-
-class ConvBNReLU(nn.Sequential):
- def __init__(self, in_planes, out_planes, kernel_size=3, stride=1, groups=1):
- self.channel_pad = out_planes - in_planes
- self.stride = stride
- #padding = (kernel_size - 1) // 2
-
- # TFLite uses slightly different padding than PyTorch
- if stride == 2:
- padding = 0
- else:
- padding = (kernel_size - 1) // 2
-
- super(ConvBNReLU, self).__init__(
- nn.Conv2d(in_planes, out_planes, kernel_size, stride, padding, groups=groups, bias=False),
- nn.BatchNorm2d(out_planes),
- nn.ReLU6(inplace=True)
- )
- self.max_pool = nn.MaxPool2d(kernel_size=stride, stride=stride)
-
-
- def forward(self, x):
- # TFLite uses different padding
- if self.stride == 2:
- x = F.pad(x, (0, 1, 0, 1), "constant", 0)
- #print(x.shape)
-
- for module in self:
- if not isinstance(module, nn.MaxPool2d):
- x = module(x)
- return x
-
-
-class InvertedResidual(nn.Module):
- def __init__(self, inp, oup, stride, expand_ratio):
- super(InvertedResidual, self).__init__()
- self.stride = stride
- assert stride in [1, 2]
-
- hidden_dim = int(round(inp * expand_ratio))
- self.use_res_connect = self.stride == 1 and inp == oup
-
- layers = []
- if expand_ratio != 1:
- # pw
- layers.append(ConvBNReLU(inp, hidden_dim, kernel_size=1))
- layers.extend([
- # dw
- ConvBNReLU(hidden_dim, hidden_dim, stride=stride, groups=hidden_dim),
- # pw-linear
- nn.Conv2d(hidden_dim, oup, 1, 1, 0, bias=False),
- nn.BatchNorm2d(oup),
- ])
- self.conv = nn.Sequential(*layers)
-
- def forward(self, x):
- if self.use_res_connect:
- return x + self.conv(x)
- else:
- return self.conv(x)
-
-
-class MobileNetV2(nn.Module):
- def __init__(self, pretrained=True):
- """
- MobileNet V2 main class
- Args:
- num_classes (int): Number of classes
- width_mult (float): Width multiplier - adjusts number of channels in each layer by this amount
- inverted_residual_setting: Network structure
- round_nearest (int): Round the number of channels in each layer to be a multiple of this number
- Set to 1 to turn off rounding
- block: Module specifying inverted residual building block for mobilenet
- """
- super(MobileNetV2, self).__init__()
-
- block = InvertedResidual
- input_channel = 32
- last_channel = 1280
- width_mult = 1.0
- round_nearest = 8
-
- inverted_residual_setting = [
- # t, c, n, s
- [1, 16, 1, 1],
- [6, 24, 2, 2],
- [6, 32, 3, 2],
- [6, 64, 4, 2],
- #[6, 96, 3, 1],
- #[6, 160, 3, 2],
- #[6, 320, 1, 1],
- ]
-
- # only check the first element, assuming user knows t,c,n,s are required
- if len(inverted_residual_setting) == 0 or len(inverted_residual_setting[0]) != 4:
- raise ValueError("inverted_residual_setting should be non-empty "
- "or a 4-element list, got {}".format(inverted_residual_setting))
-
- # building first layer
- input_channel = _make_divisible(input_channel * width_mult, round_nearest)
- self.last_channel = _make_divisible(last_channel * max(1.0, width_mult), round_nearest)
- features = [ConvBNReLU(4, input_channel, stride=2)]
- # building inverted residual blocks
- for t, c, n, s in inverted_residual_setting:
- output_channel = _make_divisible(c * width_mult, round_nearest)
- for i in range(n):
- stride = s if i == 0 else 1
- features.append(block(input_channel, output_channel, stride, expand_ratio=t))
- input_channel = output_channel
- self.features = nn.Sequential(*features)
-
- self.fpn_selected = [3, 6, 10]
- # weight initialization
- for m in self.modules():
- if isinstance(m, nn.Conv2d):
- nn.init.kaiming_normal_(m.weight, mode='fan_out')
- if m.bias is not None:
- nn.init.zeros_(m.bias)
- elif isinstance(m, nn.BatchNorm2d):
- nn.init.ones_(m.weight)
- nn.init.zeros_(m.bias)
- elif isinstance(m, nn.Linear):
- nn.init.normal_(m.weight, 0, 0.01)
- nn.init.zeros_(m.bias)
-
- #if pretrained:
- # self._load_pretrained_model()
-
- def _forward_impl(self, x):
- # This exists since TorchScript doesn't support inheritance, so the superclass method
- # (this one) needs to have a name other than `forward` that can be accessed in a subclass
- fpn_features = []
- for i, f in enumerate(self.features):
- if i > self.fpn_selected[-1]:
- break
- x = f(x)
- if i in self.fpn_selected:
- fpn_features.append(x)
-
- c2, c3, c4 = fpn_features
- return c2, c3, c4
-
-
- def forward(self, x):
- return self._forward_impl(x)
-
- def _load_pretrained_model(self):
- pretrain_dict = model_zoo.load_url('https://download.pytorch.org/models/mobilenet_v2-b0353104.pth')
- model_dict = {}
- state_dict = self.state_dict()
- for k, v in pretrain_dict.items():
- if k in state_dict:
- model_dict[k] = v
- state_dict.update(model_dict)
- self.load_state_dict(state_dict)
-
-
-class MobileV2_MLSD_Tiny(nn.Module):
- def __init__(self):
- super(MobileV2_MLSD_Tiny, self).__init__()
-
- self.backbone = MobileNetV2(pretrained=True)
-
- self.block12 = BlockTypeA(in_c1= 32, in_c2= 64,
- out_c1= 64, out_c2=64)
- self.block13 = BlockTypeB(128, 64)
-
- self.block14 = BlockTypeA(in_c1 = 24, in_c2 = 64,
- out_c1= 32, out_c2= 32)
- self.block15 = BlockTypeB(64, 64)
-
- self.block16 = BlockTypeC(64, 16)
-
- def forward(self, x):
- c2, c3, c4 = self.backbone(x)
-
- x = self.block12(c3, c4)
- x = self.block13(x)
- x = self.block14(c2, x)
- x = self.block15(x)
- x = self.block16(x)
- x = x[:, 7:, :, :]
- #print(x.shape)
- x = F.interpolate(x, scale_factor=2.0, mode='bilinear', align_corners=True)
-
- return x
\ No newline at end of file
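
As a sanity check on the trimmed backbone and MLSD-tiny head above: the first conv takes a 4-channel input, the head predicts 16 channels of which the last 9 are kept, and the result is upsampled by 2x, ending at half the input resolution. A hedged dummy forward pass follows (the 512x512 input size is only an assumption for illustration).

# Dummy forward pass through MobileV2_MLSD_Tiny (no pretrained weights are downloaded).
import torch

net = MobileV2_MLSD_Tiny().eval()
dummy = torch.randn(1, 4, 512, 512)   # [B, 4, H, W]
with torch.no_grad():
    out = net(dummy)
print(out.shape)  # expected: torch.Size([1, 9, 256, 256])
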
diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/cnn/utils/__init__.py b/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/cnn/utils/__init__.py
deleted file mode 100644
index a263e31c1e3977712827ca229bbc04910b4e928e..0000000000000000000000000000000000000000
--- a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/cnn/utils/__init__.py
+++ /dev/null
@@ -1,19 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from .flops_counter import get_model_complexity_info
-from .fuse_conv_bn import fuse_conv_bn
-from .sync_bn import revert_sync_batchnorm
-from .weight_init import (INITIALIZERS, Caffe2XavierInit, ConstantInit,
- KaimingInit, NormalInit, PretrainedInit,
- TruncNormalInit, UniformInit, XavierInit,
- bias_init_with_prob, caffe2_xavier_init,
- constant_init, initialize, kaiming_init, normal_init,
- trunc_normal_init, uniform_init, xavier_init)
-
-__all__ = [
- 'get_model_complexity_info', 'bias_init_with_prob', 'caffe2_xavier_init',
- 'constant_init', 'kaiming_init', 'normal_init', 'trunc_normal_init',
- 'uniform_init', 'xavier_init', 'fuse_conv_bn', 'initialize',
- 'INITIALIZERS', 'ConstantInit', 'XavierInit', 'NormalInit',
- 'TruncNormalInit', 'UniformInit', 'KaimingInit', 'PretrainedInit',
- 'Caffe2XavierInit', 'revert_sync_batchnorm'
-]
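
The initializers re-exported above follow mmcv's documented API; a brief hedged example of the most common calls is below (the vendored import path mirrors this Space's layout and is an assumption).

# Typical use of the weight-init helpers exported by this __init__.
import torch.nn as nn
from annotator.uniformer.mmcv.cnn.utils import constant_init, kaiming_init

conv = nn.Conv2d(3, 16, 3, padding=1)
bn = nn.BatchNorm2d(16)
kaiming_init(conv)       # Kaiming-normal weights, zero bias
constant_init(bn, 1)     # weight (gamma) = 1, bias (beta) = 0
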
diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/ops/fused_bias_leakyrelu.py b/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/ops/fused_bias_leakyrelu.py
deleted file mode 100644
index 6d12508469c6c8fa1884debece44c58d158cb6fa..0000000000000000000000000000000000000000
--- a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/ops/fused_bias_leakyrelu.py
+++ /dev/null
@@ -1,268 +0,0 @@
-# modified from https://github.com/rosinality/stylegan2-pytorch/blob/master/op/fused_act.py # noqa:E501
-
-# Copyright (c) 2021, NVIDIA Corporation. All rights reserved.
-# NVIDIA Source Code License for StyleGAN2 with Adaptive Discriminator
-# Augmentation (ADA)
-# =======================================================================
-
-# 1. Definitions
-
-# "Licensor" means any person or entity that distributes its Work.
-
-# "Software" means the original work of authorship made available under
-# this License.
-
-# "Work" means the Software and any additions to or derivative works of
-# the Software that are made available under this License.
-
-# The terms "reproduce," "reproduction," "derivative works," and
-# "distribution" have the meaning as provided under U.S. copyright law;
-# provided, however, that for the purposes of this License, derivative
-# works shall not include works that remain separable from, or merely
-# link (or bind by name) to the interfaces of, the Work.
-
-# Works, including the Software, are "made available" under this License
-# by including in or with the Work either (a) a copyright notice
-# referencing the applicability of this License to the Work, or (b) a
-# copy of this License.
-
-# 2. License Grants
-
-# 2.1 Copyright Grant. Subject to the terms and conditions of this
-# License, each Licensor grants to you a perpetual, worldwide,
-# non-exclusive, royalty-free, copyright license to reproduce,
-# prepare derivative works of, publicly display, publicly perform,
-# sublicense and distribute its Work and any resulting derivative
-# works in any form.
-
-# 3. Limitations
-
-# 3.1 Redistribution. You may reproduce or distribute the Work only
-# if (a) you do so under this License, (b) you include a complete
-# copy of this License with your distribution, and (c) you retain
-# without modification any copyright, patent, trademark, or
-# attribution notices that are present in the Work.
-
-# 3.2 Derivative Works. You may specify that additional or different
-# terms apply to the use, reproduction, and distribution of your
-# derivative works of the Work ("Your Terms") only if (a) Your Terms
-# provide that the use limitation in Section 3.3 applies to your
-# derivative works, and (b) you identify the specific derivative
-# works that are subject to Your Terms. Notwithstanding Your Terms,
-# this License (including the redistribution requirements in Section
-# 3.1) will continue to apply to the Work itself.
-
-# 3.3 Use Limitation. The Work and any derivative works thereof only
-# may be used or intended for use non-commercially. Notwithstanding
-# the foregoing, NVIDIA and its affiliates may use the Work and any
-# derivative works commercially. As used herein, "non-commercially"
-# means for research or evaluation purposes only.
-
-# 3.4 Patent Claims. If you bring or threaten to bring a patent claim
-# against any Licensor (including any claim, cross-claim or
-# counterclaim in a lawsuit) to enforce any patents that you allege
-# are infringed by any Work, then your rights under this License from
-# such Licensor (including the grant in Section 2.1) will terminate
-# immediately.
-
-# 3.5 Trademarks. This License does not grant any rights to use any
-# Licensor’s or its affiliates’ names, logos, or trademarks, except
-# as necessary to reproduce the notices described in this License.
-
-# 3.6 Termination. If you violate any term of this License, then your
-# rights under this License (including the grant in Section 2.1) will
-# terminate immediately.
-
-# 4. Disclaimer of Warranty.
-
-# THE WORK IS PROVIDED "AS IS" WITHOUT WARRANTIES OR CONDITIONS OF ANY
-# KIND, EITHER EXPRESS OR IMPLIED, INCLUDING WARRANTIES OR CONDITIONS OF
-# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, TITLE OR
-# NON-INFRINGEMENT. YOU BEAR THE RISK OF UNDERTAKING ANY ACTIVITIES UNDER
-# THIS LICENSE.
-
-# 5. Limitation of Liability.
-
-# EXCEPT AS PROHIBITED BY APPLICABLE LAW, IN NO EVENT AND UNDER NO LEGAL
-# THEORY, WHETHER IN TORT (INCLUDING NEGLIGENCE), CONTRACT, OR OTHERWISE
-# SHALL ANY LICENSOR BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY DIRECT,
-# INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES ARISING OUT OF
-# OR RELATED TO THIS LICENSE, THE USE OR INABILITY TO USE THE WORK
-# (INCLUDING BUT NOT LIMITED TO LOSS OF GOODWILL, BUSINESS INTERRUPTION,
-# LOST PROFITS OR DATA, COMPUTER FAILURE OR MALFUNCTION, OR ANY OTHER
-# COMMERCIAL DAMAGES OR LOSSES), EVEN IF THE LICENSOR HAS BEEN ADVISED OF
-# THE POSSIBILITY OF SUCH DAMAGES.
-
-# =======================================================================
-
-import torch
-import torch.nn.functional as F
-from torch import nn
-from torch.autograd import Function
-
-from ..utils import ext_loader
-
-ext_module = ext_loader.load_ext('_ext', ['fused_bias_leakyrelu'])
-
-
-class FusedBiasLeakyReLUFunctionBackward(Function):
- """Calculate second order deviation.
-
- This function is to compute the second order deviation for the fused leaky
- relu operation.
- """
-
- @staticmethod
- def forward(ctx, grad_output, out, negative_slope, scale):
- ctx.save_for_backward(out)
- ctx.negative_slope = negative_slope
- ctx.scale = scale
-
- empty = grad_output.new_empty(0)
-
- grad_input = ext_module.fused_bias_leakyrelu(
- grad_output,
- empty,
- out,
- act=3,
- grad=1,
- alpha=negative_slope,
- scale=scale)
-
- dim = [0]
-
- if grad_input.ndim > 2:
- dim += list(range(2, grad_input.ndim))
-
- grad_bias = grad_input.sum(dim).detach()
-
- return grad_input, grad_bias
-
- @staticmethod
- def backward(ctx, gradgrad_input, gradgrad_bias):
- out, = ctx.saved_tensors
-
-        # The second-order gradient in fact contains two parts, but the
-        # first part is zero. Thus, we directly consider the second part,
-        # which is similar to the first-order gradient in implementation.
- gradgrad_out = ext_module.fused_bias_leakyrelu(
- gradgrad_input,
- gradgrad_bias.to(out.dtype),
- out,
- act=3,
- grad=1,
- alpha=ctx.negative_slope,
- scale=ctx.scale)
-
- return gradgrad_out, None, None, None
-
-
-class FusedBiasLeakyReLUFunction(Function):
-
- @staticmethod
- def forward(ctx, input, bias, negative_slope, scale):
- empty = input.new_empty(0)
-
- out = ext_module.fused_bias_leakyrelu(
- input,
- bias,
- empty,
- act=3,
- grad=0,
- alpha=negative_slope,
- scale=scale)
- ctx.save_for_backward(out)
- ctx.negative_slope = negative_slope
- ctx.scale = scale
-
- return out
-
- @staticmethod
- def backward(ctx, grad_output):
- out, = ctx.saved_tensors
-
- grad_input, grad_bias = FusedBiasLeakyReLUFunctionBackward.apply(
- grad_output, out, ctx.negative_slope, ctx.scale)
-
- return grad_input, grad_bias, None, None
-
-
-class FusedBiasLeakyReLU(nn.Module):
- """Fused bias leaky ReLU.
-
- This function is introduced in the StyleGAN2:
- http://arxiv.org/abs/1912.04958
-
- The bias term comes from the convolution operation. In addition, to keep
-    the variance of the feature map or gradients unchanged, a scale similar
-    to the one used in Kaiming initialization is also adopted. However, since
-    :math:`1 + \alpha^2` is very close to 1, it can simply be ignored, so the
-    final scale is just :math:`\sqrt{2}`. Of course, you may change it to  # noqa: W605, E501
-    your own scale.
-
- TODO: Implement the CPU version.
-
- Args:
-        num_channels (int): The channel number of the feature map.
-        negative_slope (float, optional): Same as nn.LeakyReLU.
- Defaults to 0.2.
- scale (float, optional): A scalar to adjust the variance of the feature
- map. Defaults to 2**0.5.
- """
-
- def __init__(self, num_channels, negative_slope=0.2, scale=2**0.5):
- super(FusedBiasLeakyReLU, self).__init__()
-
- self.bias = nn.Parameter(torch.zeros(num_channels))
- self.negative_slope = negative_slope
- self.scale = scale
-
- def forward(self, input):
- return fused_bias_leakyrelu(input, self.bias, self.negative_slope,
- self.scale)
-
-
-def fused_bias_leakyrelu(input, bias, negative_slope=0.2, scale=2**0.5):
- """Fused bias leaky ReLU function.
-
- This function is introduced in the StyleGAN2:
- http://arxiv.org/abs/1912.04958
-
- The bias term comes from the convolution operation. In addition, to keep
-    the variance of the feature map or gradients unchanged, a scale similar
-    to the one used in Kaiming initialization is also adopted. However, since
-    :math:`1 + \alpha^2` is very close to 1, it can simply be ignored, so the
-    final scale is just :math:`\sqrt{2}`. Of course, you may change it to  # noqa: W605, E501
-    your own scale.
-
- Args:
- input (torch.Tensor): Input feature map.
- bias (nn.Parameter): The bias from convolution operation.
-        negative_slope (float, optional): Same as nn.LeakyReLU.
- Defaults to 0.2.
- scale (float, optional): A scalar to adjust the variance of the feature
- map. Defaults to 2**0.5.
-
- Returns:
- torch.Tensor: Feature map after non-linear activation.
- """
-
- if not input.is_cuda:
- return bias_leakyrelu_ref(input, bias, negative_slope, scale)
-
- return FusedBiasLeakyReLUFunction.apply(input, bias.to(input.dtype),
- negative_slope, scale)
-
-
-def bias_leakyrelu_ref(x, bias, negative_slope=0.2, scale=2**0.5):
-
- if bias is not None:
- assert bias.ndim == 1
- assert bias.shape[0] == x.shape[1]
- x = x + bias.reshape([-1 if i == 1 else 1 for i in range(x.ndim)])
-
- x = F.leaky_relu(x, negative_slope)
- if scale != 1:
- x = x * scale
-
- return x
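-
-
-if __name__ == '__main__':
-    # Minimal usage sketch (not part of the original file, shown only for
-    # illustration). On CPU tensors, fused_bias_leakyrelu() dispatches to the
-    # pure-PyTorch reference bias_leakyrelu_ref(), so no CUDA kernel is needed
-    # for this check. Importing the module still assumes the compiled '_ext'
-    # extension is available and that the file is executed inside its package
-    # (relative imports), e.g. via `python -m`.
-    feat = torch.randn(2, 4, 8, 8)
-    bias = torch.zeros(4)
-    out = fused_bias_leakyrelu(feat, bias)
-    assert out.shape == feat.shape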
diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmseg/models/losses/utils.py b/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmseg/models/losses/utils.py
deleted file mode 100644
index 85aec9f3045240c3de96a928324ae8f5c3aebe8b..0000000000000000000000000000000000000000
--- a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmseg/models/losses/utils.py
+++ /dev/null
@@ -1,121 +0,0 @@
-import functools
-
-import annotator.uniformer.mmcv as mmcv
-import numpy as np
-import torch.nn.functional as F
-
-
-def get_class_weight(class_weight):
- """Get class weight for loss function.
-
- Args:
- class_weight (list[float] | str | None): If class_weight is a str,
- take it as a file name and read from it.
- """
- if isinstance(class_weight, str):
- # take it as a file path
- if class_weight.endswith('.npy'):
- class_weight = np.load(class_weight)
- else:
- # pkl, json or yaml
- class_weight = mmcv.load(class_weight)
-
- return class_weight
-
-
-def reduce_loss(loss, reduction):
- """Reduce loss as specified.
-
- Args:
- loss (Tensor): Elementwise loss tensor.
- reduction (str): Options are "none", "mean" and "sum".
-
- Return:
- Tensor: Reduced loss tensor.
- """
- reduction_enum = F._Reduction.get_enum(reduction)
- # none: 0, elementwise_mean:1, sum: 2
- if reduction_enum == 0:
- return loss
- elif reduction_enum == 1:
- return loss.mean()
- elif reduction_enum == 2:
- return loss.sum()
-
-
-def weight_reduce_loss(loss, weight=None, reduction='mean', avg_factor=None):
- """Apply element-wise weight and reduce loss.
-
- Args:
- loss (Tensor): Element-wise loss.
- weight (Tensor): Element-wise weights.
- reduction (str): Same as built-in losses of PyTorch.
-        avg_factor (float): Average factor when computing the mean of losses.
-
- Returns:
- Tensor: Processed loss values.
- """
- # if weight is specified, apply element-wise weight
- if weight is not None:
- assert weight.dim() == loss.dim()
- if weight.dim() > 1:
- assert weight.size(1) == 1 or weight.size(1) == loss.size(1)
- loss = loss * weight
-
- # if avg_factor is not specified, just reduce the loss
- if avg_factor is None:
- loss = reduce_loss(loss, reduction)
- else:
- # if reduction is mean, then average the loss by avg_factor
- if reduction == 'mean':
- loss = loss.sum() / avg_factor
- # if reduction is 'none', then do nothing, otherwise raise an error
- elif reduction != 'none':
- raise ValueError('avg_factor can not be used with reduction="sum"')
- return loss
-
-
-def weighted_loss(loss_func):
- """Create a weighted version of a given loss function.
-
- To use this decorator, the loss function must have the signature like
- `loss_func(pred, target, **kwargs)`. The function only needs to compute
- element-wise loss without any reduction. This decorator will add weight
- and reduction arguments to the function. The decorated function will have
- the signature like `loss_func(pred, target, weight=None, reduction='mean',
- avg_factor=None, **kwargs)`.
-
- :Example:
-
- >>> import torch
- >>> @weighted_loss
- >>> def l1_loss(pred, target):
- >>> return (pred - target).abs()
-
- >>> pred = torch.Tensor([0, 2, 3])
- >>> target = torch.Tensor([1, 1, 1])
- >>> weight = torch.Tensor([1, 0, 1])
-
- >>> l1_loss(pred, target)
- tensor(1.3333)
- >>> l1_loss(pred, target, weight)
- tensor(1.)
- >>> l1_loss(pred, target, reduction='none')
- tensor([1., 1., 2.])
- >>> l1_loss(pred, target, weight, avg_factor=2)
- tensor(1.5000)
- """
-
- @functools.wraps(loss_func)
- def wrapper(pred,
- target,
- weight=None,
- reduction='mean',
- avg_factor=None,
- **kwargs):
- # get element-wise loss
- loss = loss_func(pred, target, **kwargs)
- loss = weight_reduce_loss(loss, weight, reduction, avg_factor)
- return loss
-
- return wrapper
diff --git a/spaces/MiloSobral/PortiloopDemo/PortiloopV2.md b/spaces/MiloSobral/PortiloopDemo/PortiloopV2.md
deleted file mode 100644
index 166c718b00bc94cf30fbc6769b3b4b5ca1e198cf..0000000000000000000000000000000000000000
--- a/spaces/MiloSobral/PortiloopDemo/PortiloopV2.md
+++ /dev/null
@@ -1,238 +0,0 @@
-# Portiloop V2
-
-You've just got your hands on the hardware for the Portiloop V2 (a Google Coral Mini and a PiEEG board). Here are the steps you need to follow to get started using the EEG capture, the spindle detection software, and the TPU processing.
-
-## Accessing the Google Coral
-
-These first steps will help you set up an SSH connection to the device.
-
-- Power up the board through the USB power port.
-- Connect another USB cable to the OTG port on the board and to your _Linux_ host machine, then follow these steps to connect to the board over serial:
- - `ls /dev/ttyMC*`
- - `screen /dev/ttyACM0`
- If you see a message telling you that screen is busy, you can use `sudo lsof /dev/ttyMC0` and then retry the screen step.
-- Log in to the board using the default username and password: mendel
-- Once you are logged in, you can connect to your desired Wi-Fi network using `nmtui`.
-- If you want to access the board through ssh (which is recommended for any sort of development):
- - On the serial console, open the `/etc/ssh/sshd_config` file.
-    - Scroll down to the `PasswordAuthentication` line and change the 'no' to a 'yes'.
-- Reboot the device.
- Once all of that is done, you should be able to SSH into your device using either the IP address or the hostname. If any issues arise, make sure both machines are connected to the same network.
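-
-For example, assuming the default `mendel` user (the hostname and IP below are placeholders, substitute your own values):
-
-```bash
-# Connect by hostname (mDNS) ...
-ssh mendel@<your-coral-hostname>.local
-# ... or by IP address
-ssh mendel@192.168.0.42
-```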
-
-## Dependencies
-
-To install all dependencies, run the installation.sh script. This script takes care of all the installations for you so it may take a while (~25 minutes).
-
-## Setting up the Access Point
-
-### 1. Download dependencies for access point
-
-To set up an access point, you will need to install a few dependencies. To install them, you can use the following command:
-
-```bash
-sudo apt-get update
-sudo apt-get install hostapd dnsmasq
-```
-
-This will update your system's package list and install the `hostapd` and `dnsmasq` packages.
-
-### 2. Set up the interface ap0
-
-Next, you will need to set up a systemd service to configure and enable the `ap0` interface.
-
-First, we can create a script to create the interface using `sudo nano /usr/local/bin/create_ap0.sh`. The script should contain the following content:
-
-```bash
-#!/bin/bash
-
-# Get the name of the interface on phy1
-phy1_interface=$(sudo iw dev | awk '/phy#1/ {getline; print $2}')
-
-# Check whether the ap0 interface has already been created
-if [[ $phy1_interface == "ap0" ]]; then
- echo "ap0 already set up, not running script..."
-else
- echo $phy1_interface
- # Delete the existing p2p0 interface
- /sbin/iw dev $phy1_interface del
-
- # Reload the Network Manager utility
- systemctl restart NetworkManager
-
- # Create a new ap0 interface in AP mode
- /sbin/iw phy phy1 interface add ap0 type __ap
-
- # Disable power management for the ap0 interface
- /sbin/iw dev ap0 set power_save off
-
- # Reload the Network Manager utility again
- systemctl restart NetworkManager
-
- # Get an IPV4 address for the server
- ifconfig ap0 192.168.4.1 up
-fi
-```
-
-To be able to run this file like a script, run `sudo chmod +x /usr/local/bin/create_ap0.sh`
-
-To avoid configuration issues, we need to tell NetworkManager to ignore this interface. First, run `nmcli device set ap0 managed no`. Then, create a file called `/etc/NetworkManager/conf.d/unmanaged.conf`. In this file, write the following:
-
-```ini
-[keyfile]
-unmanaged-devices=interface-name:ap0,interface-name:p2p0
-```
-
-To make sure this script runs every time the Portiloop is turned on, we need to create a new service. First, create a new service file at `/etc/systemd/system/create_ap.service` with the following content:
-
-```ini
-[Unit]
-Description=Create The Access Point for the coral
-Before=hostapd.service dnsmasq.service
-After=network-online.target
-Wants=network-online.target
-
-[Service]
-Type=simple
-ExecStart=/usr/local/bin/create_ap0.sh
-
-[Install]
-WantedBy=multi-user.target
-```
-
-This service file specifies that it should run the `create_ap0.sh` script once on boot before the hostapd and dnsmasq services start.
-
-### 3. Configure Hostapd
-
-Hostapd is the software that will create the wireless access point. First, you will need to open the `/etc/sysctl.conf` file and change the IP forwarding line to `net.ipv4.ip_forward=1`.
-
-Next, you will create a configuration file at `/etc/hostapd/hostapd.conf` with the following content:
-
-```ini
-interface=ap0
-driver=nl80211
-ssid=YOUR-SSID-HERE
-hw_mode=g
-channel=6
-wpa=2
-wpa_passphrase=YOUR-PASSWORD-HERE
-wpa_key_mgmt=WPA-PSK
-wpa_pairwise=TKIP CCMP
-rsn_pairwise=CCMP
-auth_algs=1
-macaddr_acl=0
-```
-
-This configuration file specifies the `ap0` interface, the SSID and password for the access point, and the encryption type to use. Make sure to replace `YOUR-SSID-HERE` and `YOUR-PASSWORD-HERE` with your own values. You now need to specify to hostapd which configuration file to use. Open the hostapd configuration file using `sudo nano /etc/default/hostapd`. Uncomment the DAEMON_CONF line and set it to the path of the configuration file you just created:
-`DAEMON_CONF="/etc/hostapd/hostapd.conf"`.
-
-Lastly, run `sudo systemctl unmask hostapd`.
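-
-If you want to sanity-check this configuration before enabling the service, you can run hostapd once in the foreground (stop it with Ctrl+C):
-
-```bash
-sudo hostapd -dd /etc/hostapd/hostapd.conf
-```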
-
-### 4. Configure dnsmasq
-
-Dnsmasq is the software that will provide DHCP and DNS services for the access point. Start by opening the dnsmasq configuration file with `sudo nano /etc/dnsmasq.conf`. Add the following content at the top of the file:
-
-```ini
-
-# Configuration for Access Point
-interface=ap0
-dhcp-range=192.168.4.2,192.168.4.20,255.255.255.0,24h
-dhcp-option=3,192.168.4.1
-dhcp-option=6,192.168.4.1
-server=8.8.8.8
-```
-
-This configuration file specifies the `ap0` interface, the range of IP addresses to assign to clients, and the DNS server to use. Note that the IP address of the `dhcp-option=6,...` should be the same as the IP address set in step 2.
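-
-You can also check the file for syntax errors before going any further (dnsmasq only validates the configuration and exits):
-
-```bash
-sudo dnsmasq --test -C /etc/dnsmasq.conf
-```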
-
-### 5. Configure IP Tables for internet access
-
-To make sure you get internet access on your home computer when you are connected to the Portiloop, we need to set up iptables rules. Create the script with `sudo nano /usr/local/bin/setup_tables.sh` and paste in the following code:
-
-```bash
-#!/bin/bash
-
-echo "Telling kernel to turn on ipv4 ip_forwarding"
-echo 1 > /proc/sys/net/ipv4/ip_forward
-echo "Done. Setting up iptables rules to allow FORWARDING"
-
-DOWNSTREAM=ap0 # ap0 is client network (running hostapd)
-UPSTREAM=wlan0 # upstream network (internet)
-
-# Allow IP Masquerading (NAT) of packets from clients (downstream) to upstream network (internet)
-iptables -t nat -A POSTROUTING -o $UPSTREAM -j MASQUERADE
-
-# Forward packets from downstream clients to the upstream internet
-iptables -A FORWARD -i $DOWNSTREAM -o $UPSTREAM -j ACCEPT
-
-# Forward packets from the internet to clients IF THE CONNECTION IS ALREADY OPEN!
-iptables -A FORWARD -i $UPSTREAM -o $DOWNSTREAM -m state --state RELATED,ESTABLISHED -j ACCEPT
-
-# Setup the external DNS server
-iptables -t nat -A PREROUTING -i $DOWNSTREAM -p udp --dport 53 -j DNAT --to-destination 8.8.8.8:53
-
-echo "Done setting up iptables rules. Forwarding enabled"
-```
-
-Then, create a file called `/etc/systemd/system/setup_tables.service` and paste the following configuration:
-
-```ini
-[Unit]
-Description=Setup tables service
-After=create_ap.service
-Wants=network-online.target
-After=network-online.target
-
-[Service]
-Type=simple
-ExecStart=/usr/local/bin/setup_tables.sh
-
-[Install]
-WantedBy=multi-user.target
-```
-
-Finally, make sure the script is executable by running: `sudo chmod +x /usr/local/bin/setup_tables.sh`
-
-### 6. Start Systemd services
-
-To make sure that everything happens on startup, we need to enable all services. Execute the following commands:
-
-```bash
-sudo systemctl enable create_ap.service
-sudo systemctl enable hostapd.service
-sudo systemctl enable dnsmasq.service
-sudo systemctl enable setup_tables.service
-```
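-
-After the next reboot, you can verify that all four services came up correctly:
-
-```bash
-sudo systemctl status create_ap.service hostapd.service dnsmasq.service setup_tables.service
-```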
-
-## Jupyter notebook
-
-To access the Portiloop easily, we recommend setting up a Jupyter notebook server, which will be available from any browser. To set up the Jupyter server using a systemd service, follow these steps:
-
-1. On the command line, type `jupyter notebook password`. This will show a prompt where you can enter the desired password.
-2. Create a new systemd service file using the command `sudo nano /etc/systemd/system/jupyter.service`.
-3. Add the following lines to the service file:
-
-```ini
-[Unit]
-Description=Jupyter Notebook Server
-After=create_ap.service
-After=hostapd.service
-After=dnsmasq.service
-
-[Service]
-Type=simple
-ExecStart=/bin/bash -c "/usr/bin/jupyter notebook --no-browser --ip 192.168.4.1 --port 8080 --notebook-dir=/home/mendel"
-User=mendel
-Group=mendel
-Restart=on-failure
-RestartSec=60s
-
-[Install]
-WantedBy=multi-user.target
-```
-
-4. Save and close the file.
-5. Reload the systemd daemon to load the new service file: `sudo systemctl daemon-reload`.
-6. Start the Jupyter service: `sudo systemctl start jupyter`.
-7. Check the status of the service: `sudo systemctl status jupyter`. If everything is set up correctly, you should see a message indicating that the service is active and running.
-8. To make sure the service starts automatically on boot, enable it: `sudo systemctl enable jupyter`.
-
-That's it! Your Jupyter server should now be up and running, listening on IP address 192.168.4.1 and port 8080, and automatically starting whenever the system boots up. You can now access it by typing 192.168.4.1:8080 in your browser. This should lead you to a login page where you'll be prompted for your password. If any issues arise, try a different web browser.
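-
-If the notebook does not come up, the service logs are usually the quickest way to diagnose the problem:
-
-```bash
-sudo journalctl -u jupyter.service -e
-```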
diff --git a/spaces/Monster/Llama-2-7B-chat/app.py b/spaces/Monster/Llama-2-7B-chat/app.py
deleted file mode 100644
index fe32a168108963e3a9e4e031a297dd190c81cfa4..0000000000000000000000000000000000000000
--- a/spaces/Monster/Llama-2-7B-chat/app.py
+++ /dev/null
@@ -1,145 +0,0 @@
-from __future__ import annotations
-from typing import Iterable
-import gradio as gr
-from gradio.themes.base import Base
-from gradio.themes.utils import colors, fonts, sizes
-import subprocess
-
-from huggingface_hub import hf_hub_download
-from llama_cpp import Llama
-from llama_cpp import LlamaRAMCache
-
-hf_hub_download(repo_id="TheBloke/Llama-2-7b-Chat-GGUF", filename="llama-2-7b-chat.Q4_K_M.gguf", local_dir=".")
-
-llm = Llama(model_path="./llama-2-7b-chat.Q4_K_M.gguf", rms_norm_eps=1e-5)
-
-cache = LlamaRAMCache(capacity_bytes=2 << 30)
-
-llm.set_cache(cache)
-
-
-ins = '''[INST] <<SYS>>
-You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.
-If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.
-<</SYS>>
-{} [/INST]
-'''
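-
-# Illustrative rendered prompt (hypothetical question, shown only to clarify the
-# Llama 2 chat template above; it is not used anywhere in this file):
-#   [INST] <<SYS>>
-#   You are a helpful, respectful and honest assistant. ...
-#   <</SYS>>
-#   How do I make a campfire? [/INST]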
-
-theme = gr.themes.Monochrome(
- primary_hue="indigo",
- secondary_hue="blue",
- neutral_hue="slate",
- radius_size=gr.themes.sizes.radius_sm,
- font=[gr.themes.GoogleFont("Open Sans"), "ui-sans-serif", "system-ui", "sans-serif"],
-)
-
-
-
-
-def generate(instruction):
- result = ""
- for x in llm(ins.format(instruction), stop=['USER:'], stream=True, max_tokens=512):
- result += x['choices'][0]['text']
- yield result
-
-
-
-examples = [
- "Instead of making a peanut butter and jelly sandwich, what else could I combine peanut butter with in a sandwich? Give five ideas",
- "How do I make a campfire?",
- "Explain to me the difference between nuclear fission and fusion.",
- "I'm selling my Nikon D-750, write a short blurb for my ad."
-]
-
-def process_example(args):
- for x in generate(args):
- pass
- return x
-
-css = ".generating {visibility: hidden}"
-
-# Based on the gradio theming guide and borrowed from https://huggingface.co/spaces/shivi/dolly-v2-demo
-class SeafoamCustom(Base):
- def __init__(
- self,
- *,
- primary_hue: colors.Color | str = colors.emerald,
- secondary_hue: colors.Color | str = colors.blue,
- neutral_hue: colors.Color | str = colors.blue,
- spacing_size: sizes.Size | str = sizes.spacing_md,
- radius_size: sizes.Size | str = sizes.radius_md,
- font: fonts.Font
- | str
- | Iterable[fonts.Font | str] = (
- fonts.GoogleFont("Quicksand"),
- "ui-sans-serif",
- "sans-serif",
- ),
- font_mono: fonts.Font
- | str
- | Iterable[fonts.Font | str] = (
- fonts.GoogleFont("IBM Plex Mono"),
- "ui-monospace",
- "monospace",
- ),
- ):
- super().__init__(
- primary_hue=primary_hue,
- secondary_hue=secondary_hue,
- neutral_hue=neutral_hue,
- spacing_size=spacing_size,
- radius_size=radius_size,
- font=font,
- font_mono=font_mono,
- )
- super().set(
- button_primary_background_fill="linear-gradient(90deg, *primary_300, *secondary_400)",
- button_primary_background_fill_hover="linear-gradient(90deg, *primary_200, *secondary_300)",
- button_primary_text_color="white",
- button_primary_background_fill_dark="linear-gradient(90deg, *primary_600, *secondary_800)",
- block_shadow="*shadow_drop_lg",
- button_shadow="*shadow_drop_lg",
- input_background_fill="zinc",
- input_border_color="*secondary_300",
- input_shadow="*shadow_drop",
- input_shadow_focus="*shadow_drop_lg",
- )
-
-
-seafoam = SeafoamCustom()
-
-
-with gr.Blocks(theme=seafoam, analytics_enabled=False, css=css) as demo:
- with gr.Column():
- gr.Markdown(
- """ ## Meta's Llama 2 7B-chat
-
- 4bit (q4_K_M)
-
- Type in the box below and click the button to generate answers to your most pressing questions!
-
- """
- )
-
- with gr.Row():
- with gr.Column(scale=3):
- instruction = gr.Textbox(placeholder="Enter your question here", label="Question", elem_id="q-input")
-
- with gr.Box():
- gr.Markdown("**Answer**")
- output = gr.Markdown(elem_id="q-output")
- submit = gr.Button("Generate", variant="primary")
- gr.Examples(
- examples=examples,
- inputs=[instruction],
- cache_examples=True,
- fn=process_example,
- outputs=[output],
- )
-
-
-
- submit.click(generate, inputs=[instruction], outputs=[output])
- instruction.submit(generate, inputs=[instruction], outputs=[output])
-
-demo.queue(concurrency_count=1).launch(debug=False)
\ No newline at end of file
diff --git a/spaces/Mountchicken/MAERec-Gradio/mmocr/datasets/preparers/packers/base.py b/spaces/Mountchicken/MAERec-Gradio/mmocr/datasets/preparers/packers/base.py
deleted file mode 100644
index 4826fd32225b9445ff868a0c9774ee01ae3849e5..0000000000000000000000000000000000000000
--- a/spaces/Mountchicken/MAERec-Gradio/mmocr/datasets/preparers/packers/base.py
+++ /dev/null
@@ -1,57 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from abc import abstractmethod
-from typing import Dict, List, Tuple
-
-from mmengine import track_parallel_progress
-
-
-class BasePacker:
- """Base class for packing the parsed annotation info to MMOCR format.
-
- Args:
-        data_root (str): The root path of the dataset. It is usually set
-            automatically and users do not need to set it manually in the
-            config file in most cases.
- split (str): The split of the dataset. It is usually set automatically
- and users do not need to set it manually in config file in most
- cases.
- nproc (int): Number of processes to process the data. Defaults to 1.
- It is usually set automatically and users do not need to set it
- manually in config file in most cases.
- """
-
- def __init__(self, data_root: str, split: str, nproc: int = 1) -> None:
- self.data_root = data_root
- self.split = split
- self.nproc = nproc
-
- @abstractmethod
- def pack_instance(self, sample: Tuple, split: str) -> Dict:
- """Pack the parsed annotation info to an MMOCR format instance.
-
- Args:
- sample (Tuple): A tuple of (img_file, ann_file).
- - img_path (str): Path to image file.
- - instances (Sequence[Dict]): A list of converted annos.
- split (str): The split of the instance.
-
- Returns:
- Dict: An MMOCR format instance.
- """
-
- @abstractmethod
- def add_meta(self, sample: List) -> Dict:
- """Add meta information to the sample.
-
- Args:
- sample (List): A list of samples of the dataset.
-
- Returns:
- Dict: A dict contains the meta information and samples.
- """
-
- def __call__(self, samples) -> Dict:
- samples = track_parallel_progress(
- self.pack_instance, samples, nproc=self.nproc)
- samples = self.add_meta(samples)
- return samples
diff --git a/spaces/NCTCMumbai/NCTC/models/official/vision/detection/utils/object_detection/balanced_positive_negative_sampler.py b/spaces/NCTCMumbai/NCTC/models/official/vision/detection/utils/object_detection/balanced_positive_negative_sampler.py
deleted file mode 100644
index f969182b05a29167649d5c022a667b3f768f0143..0000000000000000000000000000000000000000
--- a/spaces/NCTCMumbai/NCTC/models/official/vision/detection/utils/object_detection/balanced_positive_negative_sampler.py
+++ /dev/null
@@ -1,274 +0,0 @@
-# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# ==============================================================================
-
-"""Class to subsample minibatches by balancing positives and negatives.
-
-Subsamples minibatches based on a pre-specified positive fraction in range
-[0,1]. The class presumes there are many more negatives than positive examples:
-if the desired batch_size cannot be achieved with the pre-specified positive
-fraction, it fills the rest with negative examples. If this is not sufficient
-for obtaining the desired batch_size, it returns fewer examples.
-
-The main function to call is Subsample(self, indicator, labels). For convenience
-one can also call SubsampleWeights(self, weights, labels) which is defined in
-the minibatch_sampler base class.
-
-When is_static is True, it implements a method that guarantees static shapes.
-It also ensures that the length of the subsample output is always batch_size, even
-when the number of examples set to True in indicator is less than batch_size.
-
-This is originally implemented in TensorFlow Object Detection API.
-"""
-
-import tensorflow as tf
-
-from official.vision.detection.utils.object_detection import minibatch_sampler
-from official.vision.detection.utils.object_detection import ops
-
-
-class BalancedPositiveNegativeSampler(minibatch_sampler.MinibatchSampler):
- """Subsamples minibatches to a desired balance of positives and negatives."""
-
- def __init__(self, positive_fraction=0.5, is_static=False):
- """Constructs a minibatch sampler.
-
- Args:
- positive_fraction: desired fraction of positive examples (scalar in [0,1])
- in the batch.
- is_static: If True, uses an implementation with static shape guarantees.
-
- Raises:
- ValueError: if positive_fraction < 0, or positive_fraction > 1
- """
- if positive_fraction < 0 or positive_fraction > 1:
- raise ValueError('positive_fraction should be in range [0,1]. '
- 'Received: %s.' % positive_fraction)
- self._positive_fraction = positive_fraction
- self._is_static = is_static
-
- def _get_num_pos_neg_samples(self, sorted_indices_tensor, sample_size):
- """Counts the number of positives and negatives numbers to be sampled.
-
- Args:
- sorted_indices_tensor: A sorted int32 tensor of shape [N] which contains
- the signed indices of the examples where the sign is based on the label
-        value. The examples that cannot be sampled are set to 0. It samples
-        at most sample_size*positive_fraction positive examples and the
-        remaining examples from negative examples.
- sample_size: Size of subsamples.
-
- Returns:
- A tuple containing the number of positive and negative labels in the
- subsample.
- """
- input_length = tf.shape(input=sorted_indices_tensor)[0]
- valid_positive_index = tf.greater(sorted_indices_tensor,
- tf.zeros(input_length, tf.int32))
- num_sampled_pos = tf.reduce_sum(
- input_tensor=tf.cast(valid_positive_index, tf.int32))
- max_num_positive_samples = tf.constant(
- int(sample_size * self._positive_fraction), tf.int32)
- num_positive_samples = tf.minimum(max_num_positive_samples, num_sampled_pos)
- num_negative_samples = tf.constant(sample_size,
- tf.int32) - num_positive_samples
-
- return num_positive_samples, num_negative_samples
-
- def _get_values_from_start_and_end(self, input_tensor, num_start_samples,
- num_end_samples, total_num_samples):
- """slices num_start_samples and last num_end_samples from input_tensor.
-
- Args:
- input_tensor: An int32 tensor of shape [N] to be sliced.
- num_start_samples: Number of examples to be sliced from the beginning
- of the input tensor.
- num_end_samples: Number of examples to be sliced from the end of the
- input tensor.
-      total_num_samples: Sum of num_start_samples and num_end_samples. This
- should be a scalar.
-
- Returns:
- A tensor containing the first num_start_samples and last num_end_samples
- from input_tensor.
-
- """
- input_length = tf.shape(input=input_tensor)[0]
- start_positions = tf.less(tf.range(input_length), num_start_samples)
- end_positions = tf.greater_equal(
- tf.range(input_length), input_length - num_end_samples)
- selected_positions = tf.logical_or(start_positions, end_positions)
- selected_positions = tf.cast(selected_positions, tf.float32)
- indexed_positions = tf.multiply(tf.cumsum(selected_positions),
- selected_positions)
- one_hot_selector = tf.one_hot(tf.cast(indexed_positions, tf.int32) - 1,
- total_num_samples,
- dtype=tf.float32)
- return tf.cast(tf.tensordot(tf.cast(input_tensor, tf.float32),
- one_hot_selector, axes=[0, 0]), tf.int32)
-
- def _static_subsample(self, indicator, batch_size, labels):
- """Returns subsampled minibatch.
-
- Args:
- indicator: boolean tensor of shape [N] whose True entries can be sampled.
-        N should be a compile-time constant.
- batch_size: desired batch size. This scalar cannot be None.
- labels: boolean tensor of shape [N] denoting positive(=True) and negative
-        (=False) examples. N should be a compile-time constant.
-
- Returns:
- sampled_idx_indicator: boolean tensor of shape [N], True for entries which
- are sampled. It ensures the length of output of the subsample is always
- batch_size, even when number of examples set to True in indicator is
- less than batch_size.
-
- Raises:
- ValueError: if labels and indicator are not 1D boolean tensors.
- """
- # Check if indicator and labels have a static size.
- if not indicator.shape.is_fully_defined():
-      raise ValueError('indicator must be static in shape when is_static is '
-                       'True')
-    if not labels.shape.is_fully_defined():
-      raise ValueError('labels must be static in shape when is_static is '
-                       'True')
-    if not isinstance(batch_size, int):
-      raise ValueError('batch_size has to be an integer when is_static is '
-                       'True.')
-
- input_length = tf.shape(input=indicator)[0]
-
- # Set the number of examples set True in indicator to be at least
- # batch_size.
- num_true_sampled = tf.reduce_sum(
- input_tensor=tf.cast(indicator, tf.float32))
- additional_false_sample = tf.less_equal(
- tf.cumsum(tf.cast(tf.logical_not(indicator), tf.float32)),
- batch_size - num_true_sampled)
- indicator = tf.logical_or(indicator, additional_false_sample)
-
- # Shuffle indicator and label. Need to store the permutation to restore the
- # order post sampling.
- permutation = tf.random.shuffle(tf.range(input_length))
- indicator = ops.matmul_gather_on_zeroth_axis(
- tf.cast(indicator, tf.float32), permutation)
- labels = ops.matmul_gather_on_zeroth_axis(
- tf.cast(labels, tf.float32), permutation)
-
- # index (starting from 1) when indicator is True, 0 when False
- indicator_idx = tf.where(
- tf.cast(indicator, tf.bool), tf.range(1, input_length + 1),
- tf.zeros(input_length, tf.int32))
-
- # Replace -1 for negative, +1 for positive labels
- signed_label = tf.where(
- tf.cast(labels, tf.bool), tf.ones(input_length, tf.int32),
- tf.scalar_mul(-1, tf.ones(input_length, tf.int32)))
- # negative of index for negative label, positive index for positive label,
- # 0 when indicator is False.
- signed_indicator_idx = tf.multiply(indicator_idx, signed_label)
- sorted_signed_indicator_idx = tf.nn.top_k(
- signed_indicator_idx, input_length, sorted=True).values
-
- [num_positive_samples,
- num_negative_samples] = self._get_num_pos_neg_samples(
- sorted_signed_indicator_idx, batch_size)
-
- sampled_idx = self._get_values_from_start_and_end(
- sorted_signed_indicator_idx, num_positive_samples,
- num_negative_samples, batch_size)
-
- # Shift the indices to start from 0 and remove any samples that are set as
- # False.
- sampled_idx = tf.abs(sampled_idx) - tf.ones(batch_size, tf.int32)
- sampled_idx = tf.multiply(
- tf.cast(tf.greater_equal(sampled_idx, tf.constant(0)), tf.int32),
- sampled_idx)
-
- sampled_idx_indicator = tf.cast(
- tf.reduce_sum(
- input_tensor=tf.one_hot(sampled_idx, depth=input_length), axis=0),
- tf.bool)
-
- # project back the order based on stored permutations
- reprojections = tf.one_hot(permutation, depth=input_length,
- dtype=tf.float32)
- return tf.cast(tf.tensordot(
- tf.cast(sampled_idx_indicator, tf.float32),
- reprojections, axes=[0, 0]), tf.bool)
-
- def subsample(self, indicator, batch_size, labels, scope=None):
- """Returns subsampled minibatch.
-
- Args:
- indicator: boolean tensor of shape [N] whose True entries can be sampled.
- batch_size: desired batch size. If None, keeps all positive samples and
- randomly selects negative samples so that the positive sample fraction
-        matches self._positive_fraction. It cannot be None if is_static is True.
- labels: boolean tensor of shape [N] denoting positive(=True) and negative
- (=False) examples.
- scope: name scope.
-
- Returns:
- sampled_idx_indicator: boolean tensor of shape [N], True for entries which
- are sampled.
-
- Raises:
- ValueError: if labels and indicator are not 1D boolean tensors.
- """
- if len(indicator.get_shape().as_list()) != 1:
- raise ValueError('indicator must be 1 dimensional, got a tensor of '
- 'shape %s' % indicator.get_shape())
- if len(labels.get_shape().as_list()) != 1:
- raise ValueError('labels must be 1 dimensional, got a tensor of '
- 'shape %s' % labels.get_shape())
- if labels.dtype != tf.bool:
- raise ValueError('labels should be of type bool. Received: %s' %
- labels.dtype)
- if indicator.dtype != tf.bool:
- raise ValueError('indicator should be of type bool. Received: %s' %
- indicator.dtype)
- scope = scope or 'BalancedPositiveNegativeSampler'
- with tf.name_scope(scope):
- if self._is_static:
- return self._static_subsample(indicator, batch_size, labels)
-
- else:
- # Only sample from indicated samples
- negative_idx = tf.logical_not(labels)
- positive_idx = tf.logical_and(labels, indicator)
- negative_idx = tf.logical_and(negative_idx, indicator)
-
- # Sample positive and negative samples separately
- if batch_size is None:
- max_num_pos = tf.reduce_sum(
- input_tensor=tf.cast(positive_idx, dtype=tf.int32))
- else:
- max_num_pos = int(self._positive_fraction * batch_size)
- sampled_pos_idx = self.subsample_indicator(positive_idx, max_num_pos)
- num_sampled_pos = tf.reduce_sum(
- input_tensor=tf.cast(sampled_pos_idx, tf.int32))
- if batch_size is None:
- negative_positive_ratio = (
- 1 - self._positive_fraction) / self._positive_fraction
- max_num_neg = tf.cast(
- negative_positive_ratio *
- tf.cast(num_sampled_pos, dtype=tf.float32),
- dtype=tf.int32)
- else:
- max_num_neg = batch_size - num_sampled_pos
- sampled_neg_idx = self.subsample_indicator(negative_idx, max_num_neg)
-
- return tf.logical_or(sampled_pos_idx, sampled_neg_idx)
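-
-
-if __name__ == '__main__':
-  # Illustrative usage sketch (not part of the original file): keep a minibatch
-  # of 4 examples with roughly 25% positives. The tensors below are made-up
-  # values, and running this assumes the `official` package is on PYTHONPATH.
-  sampler = BalancedPositiveNegativeSampler(positive_fraction=0.25)
-  indicator = tf.constant([True, True, True, True, True, True])
-  labels = tf.constant([True, False, False, True, True, False])
-  keep = sampler.subsample(indicator, batch_size=4, labels=labels)
-  print(keep)  # boolean mask of shape [6] with 4 entries set to True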
diff --git a/spaces/Nee001/bing0/src/components/providers.tsx b/spaces/Nee001/bing0/src/components/providers.tsx
deleted file mode 100644
index 892226412d80fe0b05211911b9e245cd22876460..0000000000000000000000000000000000000000
--- a/spaces/Nee001/bing0/src/components/providers.tsx
+++ /dev/null
@@ -1,15 +0,0 @@
-'use client'
-
-import * as React from 'react'
-import { ThemeProvider as NextThemesProvider } from 'next-themes'
-import { ThemeProviderProps } from 'next-themes/dist/types'
-
-import { TooltipProvider } from '@/components/ui/tooltip'
-
-export function Providers({ children, ...props }: ThemeProviderProps) {
- return (
-    <NextThemesProvider {...props}>
-      <TooltipProvider>{children}</TooltipProvider>
-    </NextThemesProvider>
- )
-}
diff --git a/spaces/Nephele/bert-vits2-multi-voice/monotonic_align/setup.py b/spaces/Nephele/bert-vits2-multi-voice/monotonic_align/setup.py
deleted file mode 100644
index 30c224807a70faa9df9c9eb75f8e80c8c867b16b..0000000000000000000000000000000000000000
--- a/spaces/Nephele/bert-vits2-multi-voice/monotonic_align/setup.py
+++ /dev/null
@@ -1,9 +0,0 @@
-from distutils.core import setup
-from Cython.Build import cythonize
-import numpy
-
-setup(
- name = 'monotonic_align',
- ext_modules = cythonize("core.pyx"),
- include_dirs=[numpy.get_include()]
-)
diff --git a/spaces/OAOA/DifFace/basicsr/archs/spynet_arch.py b/spaces/OAOA/DifFace/basicsr/archs/spynet_arch.py
deleted file mode 100644
index 4c7af133daef0496b79a57517e1942d06f2d0061..0000000000000000000000000000000000000000
--- a/spaces/OAOA/DifFace/basicsr/archs/spynet_arch.py
+++ /dev/null
@@ -1,96 +0,0 @@
-import math
-import torch
-from torch import nn as nn
-from torch.nn import functional as F
-
-from basicsr.utils.registry import ARCH_REGISTRY
-from .arch_util import flow_warp
-
-
-class BasicModule(nn.Module):
- """Basic Module for SpyNet.
- """
-
- def __init__(self):
- super(BasicModule, self).__init__()
-
- self.basic_module = nn.Sequential(
- nn.Conv2d(in_channels=8, out_channels=32, kernel_size=7, stride=1, padding=3), nn.ReLU(inplace=False),
- nn.Conv2d(in_channels=32, out_channels=64, kernel_size=7, stride=1, padding=3), nn.ReLU(inplace=False),
- nn.Conv2d(in_channels=64, out_channels=32, kernel_size=7, stride=1, padding=3), nn.ReLU(inplace=False),
- nn.Conv2d(in_channels=32, out_channels=16, kernel_size=7, stride=1, padding=3), nn.ReLU(inplace=False),
- nn.Conv2d(in_channels=16, out_channels=2, kernel_size=7, stride=1, padding=3))
-
- def forward(self, tensor_input):
- return self.basic_module(tensor_input)
-
-
-@ARCH_REGISTRY.register()
-class SpyNet(nn.Module):
- """SpyNet architecture.
-
- Args:
- load_path (str): path for pretrained SpyNet. Default: None.
- """
-
- def __init__(self, load_path=None):
- super(SpyNet, self).__init__()
- self.basic_module = nn.ModuleList([BasicModule() for _ in range(6)])
- if load_path:
- self.load_state_dict(torch.load(load_path, map_location=lambda storage, loc: storage)['params'])
-
- self.register_buffer('mean', torch.Tensor([0.485, 0.456, 0.406]).view(1, 3, 1, 1))
- self.register_buffer('std', torch.Tensor([0.229, 0.224, 0.225]).view(1, 3, 1, 1))
-
- def preprocess(self, tensor_input):
- tensor_output = (tensor_input - self.mean) / self.std
- return tensor_output
-
- def process(self, ref, supp):
- flow = []
-
- ref = [self.preprocess(ref)]
- supp = [self.preprocess(supp)]
-
- for level in range(5):
- ref.insert(0, F.avg_pool2d(input=ref[0], kernel_size=2, stride=2, count_include_pad=False))
- supp.insert(0, F.avg_pool2d(input=supp[0], kernel_size=2, stride=2, count_include_pad=False))
-
- flow = ref[0].new_zeros(
- [ref[0].size(0), 2,
- int(math.floor(ref[0].size(2) / 2.0)),
- int(math.floor(ref[0].size(3) / 2.0))])
-
- for level in range(len(ref)):
- upsampled_flow = F.interpolate(input=flow, scale_factor=2, mode='bilinear', align_corners=True) * 2.0
-
- if upsampled_flow.size(2) != ref[level].size(2):
- upsampled_flow = F.pad(input=upsampled_flow, pad=[0, 0, 0, 1], mode='replicate')
- if upsampled_flow.size(3) != ref[level].size(3):
- upsampled_flow = F.pad(input=upsampled_flow, pad=[0, 1, 0, 0], mode='replicate')
-
- flow = self.basic_module[level](torch.cat([
- ref[level],
- flow_warp(
- supp[level], upsampled_flow.permute(0, 2, 3, 1), interp_mode='bilinear', padding_mode='border'),
- upsampled_flow
- ], 1)) + upsampled_flow
-
- return flow
-
- def forward(self, ref, supp):
- assert ref.size() == supp.size()
-
- h, w = ref.size(2), ref.size(3)
- w_floor = math.floor(math.ceil(w / 32.0) * 32.0)
- h_floor = math.floor(math.ceil(h / 32.0) * 32.0)
-
- ref = F.interpolate(input=ref, size=(h_floor, w_floor), mode='bilinear', align_corners=False)
- supp = F.interpolate(input=supp, size=(h_floor, w_floor), mode='bilinear', align_corners=False)
-
- flow = F.interpolate(input=self.process(ref, supp), size=(h, w), mode='bilinear', align_corners=False)
-
- flow[:, 0, :, :] *= float(w) / float(w_floor)
- flow[:, 1, :, :] *= float(h) / float(h_floor)
-
- return flow
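-
-
-if __name__ == '__main__':
-    # Illustrative smoke test (not part of the original file): estimate optical
-    # flow between two random frames. It assumes basicsr is installed and that
-    # the file is executed as a module (e.g. `python -m basicsr.archs.spynet_arch`)
-    # so the relative import of flow_warp resolves.
-    net = SpyNet(load_path=None)
-    ref, supp = torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64)
-    with torch.no_grad():
-        flow = net(ref, supp)
-    print(flow.shape)  # torch.Size([1, 2, 64, 64]): (dx, dy) per pixel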
diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/data/legacy/__init__.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/data/legacy/__init__.py
deleted file mode 100644
index 9bd5c72b5e9d7f67fb7e4ef10808d7ec08967ff4..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/data/legacy/__init__.py
+++ /dev/null
@@ -1,16 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from .block_pair_dataset import BlockPairDataset
-from .masked_lm_dataset import MaskedLMDataset
-from .masked_lm_dictionary import BertDictionary, MaskedLMDictionary
-
-
-__all__ = [
- "BertDictionary",
- "BlockPairDataset",
- "MaskedLMDataset",
- "MaskedLMDictionary",
-]
diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/modules/quantization/pq/modules/qconv.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/modules/quantization/pq/modules/qconv.py
deleted file mode 100644
index d15ec192e8cda6265a198e583a9bf7fb194dd129..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/modules/quantization/pq/modules/qconv.py
+++ /dev/null
@@ -1,115 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import numpy as np
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from torch.nn.modules.utils import _pair
-
-
-class PQConv2d(nn.Module):
- """
- Quantized counterpart of nn.Conv2d module. Stores the centroid, the assignments
- and the non-quantized biases. The full weight is re-instantiated at each forward
- pass and autograd automatically computes the gradients with respect to the
- centroids.
-
- Args:
- - centroids: centroids of size n_centroids x block_size
- - assignments: assignments of the centroids to the subvectors
- of size self.out_channels x n_blocks
- - bias: the non-quantized bias, must be either torch.Tensor or None
-
- Remarks:
- - We refer the reader to the official documentation of the nn.Conv2d module
- for the other arguments and the behavior of the module.
- - Performance tests on GPU show that this implementation is 10% slower than
- the non-quantized nn.Conv2d module for a standard training loop.
- - During the backward, the gradients are averaged by cluster and not summed.
- This explains the hook registered to the centroids.
- """
-
- def __init__(
- self,
- centroids,
- assignments,
- bias,
- in_channels,
- out_channels,
- kernel_size,
- stride=1,
- padding=0,
- dilation=1,
- groups=1,
- padding_mode="zeros",
- ):
- super(PQConv2d, self).__init__()
- self.block_size = centroids.size(1)
- self.n_centroids = centroids.size(0)
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.kernel_size = _pair(kernel_size)
- self.stride = _pair(stride)
- self.padding = _pair(padding)
- self.dilation = _pair(dilation)
- self.groups = groups
- self.padding_mode = padding_mode
- # check compatibility
- if in_channels // groups * np.prod(self.kernel_size) % self.block_size != 0:
- raise ValueError("Wrong PQ sizes")
- if len(assignments) % out_channels != 0:
- raise ValueError("Wrong PQ sizes")
- if in_channels % groups != 0:
- raise ValueError("in_channels must be divisible by groups")
- if out_channels % groups != 0:
- raise ValueError("out_channels must be divisible by groups")
- # define parameters
- self.centroids = nn.Parameter(centroids, requires_grad=True)
- self.register_buffer("assignments", assignments)
- self.register_buffer("counts", torch.bincount(assignments).type_as(centroids))
- if bias is not None:
- self.bias = nn.Parameter(bias)
- else:
- self.register_parameter("bias", None)
- # register hook for averaging gradients per centroids instead of summing
- self.centroids.register_hook(lambda x: x / self.counts[:, None])
-
- @property
- def weight(self):
- return (
- self.centroids[self.assignments]
- .reshape(-1, self.out_channels, self.block_size)
- .permute(1, 0, 2)
- .reshape(
- self.out_channels, self.in_channels // self.groups, *self.kernel_size
- )
- )
-
- def forward(self, x):
- return F.conv2d(
- x,
- self.weight,
- self.bias,
- self.stride,
- self.padding,
- self.dilation,
- self.groups,
- )
-
- def extra_repr(self):
- s = "{in_channels}, {out_channels}, kernel_size={kernel_size}, stride={stride}"
- if self.padding != (0,) * len(self.padding):
- s += ", padding={padding}"
- if self.dilation != (1,) * len(self.dilation):
- s += ", dilation={dilation}"
- if self.groups != 1:
- s += ", groups={groups}"
- if self.bias is None:
- s += ", bias=False"
- if self.padding_mode != "zeros":
- s += ", padding_mode={padding_mode}"
- s += ", n_centroids={n_centroids}, block_size={block_size}"
- return s.format(**self.__dict__)
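-
-
-if __name__ == "__main__":
-    # Illustrative shape check (hypothetical values, not part of the original
-    # file): 16 centroids of block size 4 quantize a 4 -> 8 channel 3x3 conv,
-    # so len(assignments) = out_channels * (in_channels * k * k / block_size)
-    # = 8 * 9 = 72.
-    n_centroids, block_size = 16, 4
-    centroids = torch.randn(n_centroids, block_size)
-    assignments = torch.randint(0, n_centroids, (8 * 9,))
-    conv = PQConv2d(centroids, assignments, bias=None, in_channels=4,
-                    out_channels=8, kernel_size=3, padding=1)
-    x = torch.randn(1, 4, 16, 16)
-    print(conv(x).shape)  # torch.Size([1, 8, 16, 16])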
diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/.github/ISSUE_TEMPLATE/feature_request.md b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/.github/ISSUE_TEMPLATE/feature_request.md
deleted file mode 100644
index 93c8668041f8a7af29e4c11e905d8b56b946dd51..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/.github/ISSUE_TEMPLATE/feature_request.md
+++ /dev/null
@@ -1,24 +0,0 @@
----
-name: 🚀 Feature Request
-about: Submit a proposal/request for a new feature
-labels: 'enhancement, help wanted, needs triage'
----
-
-## 🚀 Feature Request
-
-
-### Motivation
-
-
-
-### Pitch
-
-
-
-### Alternatives
-
-
-
-### Additional context
-
-
diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/tasks/semisupervised_translation.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/tasks/semisupervised_translation.py
deleted file mode 100644
index b2f9bf9a733d94e50b588e4316b4a02e1c8bcf51..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/tasks/semisupervised_translation.py
+++ /dev/null
@@ -1,485 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import logging
-import os
-from collections import OrderedDict
-
-from fairseq import utils
-from fairseq.data import (
- BacktranslationDataset,
- IndexedCachedDataset,
- IndexedDataset,
- IndexedRawTextDataset,
- LanguagePairDataset,
- NoisingDataset,
- RoundRobinZipDatasets,
- data_utils,
- indexed_dataset,
-)
-from fairseq.models import FairseqMultiModel
-from fairseq.sequence_generator import SequenceGenerator
-
-from . import register_task
-from .multilingual_translation import MultilingualTranslationTask
-
-
-logger = logging.getLogger(__name__)
-
-
-def _get_bt_dataset_key(lang_pair):
- return "bt:" + lang_pair
-
-
-def _get_denoising_dataset_key(lang_pair):
- return "denoising:" + lang_pair
-
-
-# ported from UnsupervisedMT
-def parse_lambda_config(x):
- """
- Parse the configuration of lambda coefficient (for scheduling).
- x = "3" # lambda will be a constant equal to x
- x = "0:1,1000:0" # lambda will start from 1 and linearly decrease
- # to 0 during the first 1000 iterations
- x = "0:0,1000:0,2000:1" # lambda will be equal to 0 for the first 1000
- # iterations, then will linearly increase to 1 until iteration 2000
- """
- split = x.split(",")
- if len(split) == 1:
- return float(x), None
- else:
- split = [s.split(os.pathsep) for s in split]
- assert all(len(s) == 2 for s in split)
- assert all(k.isdigit() for k, _ in split)
- assert all(
- int(split[i][0]) < int(split[i + 1][0]) for i in range(len(split) - 1)
- )
- return float(split[0][1]), [(int(k), float(v)) for k, v in split]
-
-
-@register_task("semisupervised_translation")
-class SemisupervisedTranslationTask(MultilingualTranslationTask):
- """A task for training multiple translation models simultaneously.
-
- We iterate round-robin over batches from multiple language pairs, ordered
- according to the `--lang-pairs` argument.
-
- The training loop is roughly:
-
- for i in range(len(epoch)):
- for lang_pair in args.lang_pairs:
- batch = next_batch_for_lang_pair(lang_pair)
- loss = criterion(model_for_lang_pair(lang_pair), batch)
- loss.backward()
- optimizer.step()
-
- In practice, `next_batch_for_lang_pair` is abstracted in a FairseqDataset
- (e.g., `RoundRobinZipDatasets`) and `model_for_lang_pair` is a model that
- implements the `FairseqMultiModel` interface.
-
- During inference it is required to specify a single `--source-lang` and
- `--target-lang`, instead of `--lang-pairs`.
- """
-
- @staticmethod
- def add_args(parser):
- """Add task-specific arguments to the parser."""
- # fmt: off
- MultilingualTranslationTask.add_args(parser)
- parser.add_argument('--lambda-parallel-config', default="1.0", type=str, metavar='CONFIG',
- help='cross-entropy reconstruction coefficient (parallel data). '
- 'use fixed weight during training if set to floating point number. '
- 'use piecewise linear function over number of updates to schedule the '
- 'weight with the format: w0:step0,w1:step1,...')
- parser.add_argument('--lambda-denoising-config', default="0.0", type=str, metavar='CONFIG',
-                            help='cross-entropy reconstruction coefficient (denoising autoencoding). '
- 'use fixed weight during training if set to floating point number. '
- 'use piecewise linear function over number of updates to schedule the '
- 'weight with the format: w0:step0,w1:step1,...')
- parser.add_argument('--lambda-otf-bt-config', default="0.0", type=str, metavar='CONFIG',
-                            help='cross-entropy reconstruction coefficient (on-the-fly back-translation parallel data). '
- 'use fixed weight during training if set to floating point number. '
- 'use piecewise linear function over number of updates to schedule the '
- 'weight with the format: w0:step0,w1:step1,...')
- parser.add_argument('--bt-max-len-a', default=1.1, type=float, metavar='N',
- help='generate back-translated sequences of maximum length ax + b, where x is the '
- 'source length')
- parser.add_argument('--bt-max-len-b', default=10.0, type=float, metavar='N',
- help='generate back-translated sequences of maximum length ax + b, where x is the '
- 'source length')
- parser.add_argument('--bt-beam-size', default=1, type=int, metavar='N',
- help='beam size used in beam search of online back-translation')
- parser.add_argument('--max-word-shuffle-distance', default=3.0, type=float, metavar='N',
- help='maximum word shuffle distance for denoising autoencoding data generation')
- parser.add_argument('--word-dropout-prob', default=0.1, type=float, metavar='N',
- help='word dropout probability for denoising autoencoding data generation')
- parser.add_argument('--word-blanking-prob', default=0.2, type=float, metavar='N',
- help='word blanking probability for denoising autoencoding data generation')
- # fmt: on
-
- def __init__(self, args, dicts, training):
- super().__init__(args, dicts, training)
- self.lambda_parallel, self.lambda_parallel_steps = parse_lambda_config(
- args.lambda_parallel_config
- )
- self.lambda_otf_bt, self.lambda_otf_bt_steps = parse_lambda_config(
- args.lambda_otf_bt_config
- )
- self.lambda_denoising, self.lambda_denoising_steps = parse_lambda_config(
- args.lambda_denoising_config
- )
- if self.lambda_denoising > 0.0 or self.lambda_denoising_steps is not None:
- denoising_lang_pairs = [
- "%s-%s" % (tgt, tgt)
- for tgt in {lang_pair.split("-")[1] for lang_pair in args.lang_pairs}
- ]
- self.model_lang_pairs = self.model_lang_pairs + denoising_lang_pairs
- self.backtranslate_datasets = {}
- self.backtranslators = {}
-
- @classmethod
- def setup_task(cls, args, **kwargs):
- dicts, training = MultilingualTranslationTask.prepare(args, **kwargs)
- return cls(args, dicts, training)
-
- def load_dataset(self, split, epoch=1, **kwargs):
- """Load a dataset split."""
- paths = utils.split_paths(self.args.data)
- assert len(paths) > 0
- data_path = paths[(epoch - 1) % len(paths)]
-
- def split_exists(split, src, tgt, lang):
- if src is not None:
- filename = os.path.join(
- data_path, "{}.{}-{}.{}".format(split, src, tgt, lang)
- )
- else:
- filename = os.path.join(
- data_path, "{}.{}-None.{}".format(split, src, tgt)
- )
- return indexed_dataset.dataset_exists(filename, impl=self.args.dataset_impl)
-
- def load_indexed_dataset(path, dictionary):
- return data_utils.load_indexed_dataset(
- path, dictionary, self.args.dataset_impl
- )
-
- # load parallel datasets
- src_datasets, tgt_datasets = {}, {}
- if (
- self.lambda_parallel > 0.0
- or self.lambda_parallel_steps is not None
- or not split.startswith("train")
- ):
- for lang_pair in self.lang_pairs:
- src, tgt = lang_pair.split("-")
- if split_exists(split, src, tgt, src):
- prefix = os.path.join(
- data_path, "{}.{}-{}.".format(split, src, tgt)
- )
- elif split_exists(split, tgt, src, src):
- prefix = os.path.join(
- data_path, "{}.{}-{}.".format(split, tgt, src)
- )
- else:
- continue
- src_datasets[lang_pair] = load_indexed_dataset(
- prefix + src, self.dicts[src]
- )
- tgt_datasets[lang_pair] = load_indexed_dataset(
- prefix + tgt, self.dicts[tgt]
- )
- logger.info(
- "parallel-{} {} {} examples".format(
- data_path, split, len(src_datasets[lang_pair])
- )
- )
- if len(src_datasets) == 0:
- raise FileNotFoundError(
- "Dataset not found: {} ({})".format(split, data_path)
- )
-
- # back translation datasets
- backtranslate_datasets = {}
- if (
- self.lambda_otf_bt > 0.0 or self.lambda_otf_bt_steps is not None
- ) and split.startswith("train"):
- for lang_pair in self.lang_pairs:
- src, tgt = lang_pair.split("-")
- if not split_exists(split, tgt, None, tgt):
- raise FileNotFoundError(
- "Dataset not found: backtranslation {} ({})".format(
- split, data_path
- )
- )
- filename = os.path.join(
- data_path, "{}.{}-None.{}".format(split, tgt, tgt)
- )
- dataset = load_indexed_dataset(filename, self.dicts[tgt])
- lang_pair_dataset_tgt = LanguagePairDataset(
- dataset,
- dataset.sizes,
- self.dicts[tgt],
- left_pad_source=self.args.left_pad_source,
- left_pad_target=self.args.left_pad_target,
- )
- lang_pair_dataset = LanguagePairDataset(
- dataset,
- dataset.sizes,
- src_dict=self.dicts[src],
- tgt=dataset,
- tgt_sizes=dataset.sizes,
- tgt_dict=self.dicts[tgt],
- left_pad_source=self.args.left_pad_source,
- left_pad_target=self.args.left_pad_target,
- )
- backtranslate_datasets[lang_pair] = BacktranslationDataset(
- tgt_dataset=self.alter_dataset_langtok(
- lang_pair_dataset_tgt,
- src_eos=self.dicts[tgt].eos(),
- src_lang=tgt,
- tgt_lang=src,
- ),
- backtranslation_fn=self.backtranslators[lang_pair],
- src_dict=self.dicts[src],
- tgt_dict=self.dicts[tgt],
- output_collater=self.alter_dataset_langtok(
- lang_pair_dataset=lang_pair_dataset,
- src_eos=self.dicts[src].eos(),
- src_lang=src,
- tgt_eos=self.dicts[tgt].eos(),
- tgt_lang=tgt,
- ).collater,
- )
- logger.info(
- "backtranslate-{}: {} {} {} examples".format(
- tgt,
- data_path,
- split,
- len(backtranslate_datasets[lang_pair]),
- )
- )
- self.backtranslate_datasets[lang_pair] = backtranslate_datasets[
- lang_pair
- ]
-
- # denoising autoencoder
- noising_datasets = {}
- if (
- self.lambda_denoising > 0.0 or self.lambda_denoising_steps is not None
- ) and split.startswith("train"):
- for lang_pair in self.lang_pairs:
- _, tgt = lang_pair.split("-")
- if not split_exists(split, tgt, None, tgt):
- continue
- filename = os.path.join(
- data_path, "{}.{}-None.{}".format(split, tgt, tgt)
- )
- tgt_dataset1 = load_indexed_dataset(filename, self.dicts[tgt])
- tgt_dataset2 = load_indexed_dataset(filename, self.dicts[tgt])
- noising_dataset = NoisingDataset(
- tgt_dataset1,
- self.dicts[tgt],
- seed=1,
- max_word_shuffle_distance=self.args.max_word_shuffle_distance,
- word_dropout_prob=self.args.word_dropout_prob,
- word_blanking_prob=self.args.word_blanking_prob,
- )
- noising_datasets[lang_pair] = self.alter_dataset_langtok(
- LanguagePairDataset(
- noising_dataset,
- tgt_dataset1.sizes,
- self.dicts[tgt],
- tgt_dataset2,
- tgt_dataset2.sizes,
- self.dicts[tgt],
- left_pad_source=self.args.left_pad_source,
- left_pad_target=self.args.left_pad_target,
- ),
- src_eos=self.dicts[tgt].eos(),
- src_lang=tgt,
- tgt_eos=self.dicts[tgt].eos(),
- tgt_lang=tgt,
- )
- logger.info(
- "denoising-{}: {} {} {} examples".format(
- tgt,
- data_path,
- split,
- len(noising_datasets[lang_pair]),
- )
- )
-
- def language_pair_dataset(lang_pair):
- src, tgt = lang_pair.split("-")
- src_dataset, tgt_dataset = src_datasets[lang_pair], tgt_datasets[lang_pair]
- return self.alter_dataset_langtok(
- LanguagePairDataset(
- src_dataset,
- src_dataset.sizes,
- self.dicts[src],
- tgt_dataset,
- tgt_dataset.sizes,
- self.dicts[tgt],
- left_pad_source=self.args.left_pad_source,
- left_pad_target=self.args.left_pad_target,
- ),
- self.dicts[src].eos(),
- src,
- self.dicts[tgt].eos(),
- tgt,
- )
-
- self.datasets[split] = RoundRobinZipDatasets(
- OrderedDict(
- [
- (lang_pair, language_pair_dataset(lang_pair))
- for lang_pair in src_datasets.keys()
- ]
- + [
- (_get_bt_dataset_key(lang_pair), dataset)
- for lang_pair, dataset in backtranslate_datasets.items()
- ]
- + [
- (_get_denoising_dataset_key(lang_pair), dataset)
- for lang_pair, dataset in noising_datasets.items()
- ]
- ),
- eval_key=None
- if self.training
- else "%s-%s" % (self.args.source_lang, self.args.target_lang),
- )
-
- def build_model(self, args):
- from fairseq import models
-
- model = models.build_model(args, self)
- if not isinstance(model, FairseqMultiModel):
- raise ValueError(
- "SemisupervisedTranslationTask requires a FairseqMultiModel architecture"
- )
-
- # create SequenceGenerator for each model that has backtranslation dependency on it
- self.sequence_generators = {}
- if (
- self.lambda_otf_bt > 0.0 or self.lambda_otf_bt_steps is not None
- ) and self.training:
- for lang_pair in self.lang_pairs:
- src, tgt = lang_pair.split("-")
- key = "{}-{}".format(tgt, src)
- self.sequence_generators[key] = SequenceGenerator(
- [model.models[key]],
- tgt_dict=self.dicts[src],
- beam_size=args.bt_beam_size,
- max_len_a=args.bt_max_len_a,
- max_len_b=args.bt_max_len_b,
- )
- decoder_lang_tok_idx = self.get_decoder_langtok(src)
-
- def backtranslate_fn(
- sample,
- model=model.models[key],
- bos_token=decoder_lang_tok_idx,
- sequence_generator=self.sequence_generators[key],
- ):
- return sequence_generator.generate(
- [model],
- sample,
- bos_token=bos_token,
- )
-
- self.backtranslators[lang_pair] = backtranslate_fn
-
- return model
-
- def train_step(
- self, sample, model, criterion, optimizer, update_num, ignore_grad=False
- ):
- model.train()
-
- if update_num > 0:
- self.update_step(update_num)
-
- agg_loss, agg_sample_size, agg_logging_output = 0.0, 0.0, {}
-
- def forward_backward(model, samples, logging_output_key, weight):
- nonlocal agg_loss, agg_sample_size, agg_logging_output
- if samples is None or len(samples) == 0:
- return
- loss, sample_size, logging_output = criterion(model, samples)
- if ignore_grad:
- loss *= 0
- else:
- loss *= weight
- optimizer.backward(loss)
- agg_loss += loss.detach().item()
- # TODO make summing of the sample sizes configurable
- agg_sample_size += sample_size
- for k in logging_output:
- agg_logging_output[k] += logging_output[k]
- agg_logging_output[logging_output_key] += logging_output[k]
-
- if self.lambda_parallel > 0.0:
- for lang_pair in self.lang_pairs:
- forward_backward(
- model.models[lang_pair],
- sample[lang_pair],
- lang_pair,
- self.lambda_parallel,
- )
-
- if self.lambda_otf_bt > 0.0:
- for lang_pair in self.lang_pairs:
- sample_key = _get_bt_dataset_key(lang_pair)
- forward_backward(
- model.models[lang_pair],
- sample[sample_key],
- sample_key,
- self.lambda_otf_bt,
- )
-
- if self.lambda_denoising > 0.0:
- for lang_pair in self.lang_pairs:
- _, tgt = lang_pair.split("-")
- sample_key = _get_denoising_dataset_key(lang_pair)
- forward_backward(
- model.models["{0}-{0}".format(tgt)],
- sample[sample_key],
- sample_key,
- self.lambda_denoising,
- )
-
- return agg_loss, agg_sample_size, agg_logging_output
-
- def update_step(self, num_updates):
- def lambda_step_func(config, n_iter):
- """
- Update a lambda value according to its schedule configuration.
- """
- ranges = [
- i
- for i in range(len(config) - 1)
- if config[i][0] <= n_iter < config[i + 1][0]
- ]
- if len(ranges) == 0:
- assert n_iter >= config[-1][0]
- return config[-1][1]
- assert len(ranges) == 1
- i = ranges[0]
- x_a, y_a = config[i]
- x_b, y_b = config[i + 1]
- return y_a + (n_iter - x_a) * float(y_b - y_a) / float(x_b - x_a)
-
- if self.lambda_parallel_steps is not None:
- self.lambda_parallel = lambda_step_func(
- self.lambda_parallel_steps, num_updates
- )
- if self.lambda_denoising_steps is not None:
- self.lambda_denoising = lambda_step_func(
- self.lambda_denoising_steps, num_updates
- )
- if self.lambda_otf_bt_steps is not None:
- self.lambda_otf_bt = lambda_step_func(self.lambda_otf_bt_steps, num_updates)
diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/backtranslation/README.md b/spaces/OFA-Sys/OFA-vqa/fairseq/examples/backtranslation/README.md
deleted file mode 100644
index 73675f1125d80f58aa824db67d8970504d4d6b2a..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/backtranslation/README.md
+++ /dev/null
@@ -1,297 +0,0 @@
-# Understanding Back-Translation at Scale (Edunov et al., 2018)
-
-This page includes pre-trained models from the paper [Understanding Back-Translation at Scale (Edunov et al., 2018)](https://arxiv.org/abs/1808.09381).
-
-## Pre-trained models
-
-Model | Description | Dataset | Download
----|---|---|---
-`transformer.wmt18.en-de` | Transformer ([Edunov et al., 2018](https://arxiv.org/abs/1808.09381)) WMT'18 winner | [WMT'18 English-German](http://www.statmt.org/wmt18/translation-task.html) | [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/wmt18.en-de.ensemble.tar.gz) See NOTE in the archive
-
-## Example usage (torch.hub)
-
-We require a few additional Python dependencies for preprocessing:
-```bash
-pip install subword_nmt sacremoses
-```
-
-Then to generate translations from the full model ensemble:
-```python
-import torch
-
-# List available models
-torch.hub.list('pytorch/fairseq') # [..., 'transformer.wmt18.en-de', ... ]
-
-# Load the WMT'18 En-De ensemble
-en2de_ensemble = torch.hub.load(
- 'pytorch/fairseq', 'transformer.wmt18.en-de',
- checkpoint_file='wmt18.model1.pt:wmt18.model2.pt:wmt18.model3.pt:wmt18.model4.pt:wmt18.model5.pt',
- tokenizer='moses', bpe='subword_nmt')
-
-# The ensemble contains 5 models
-len(en2de_ensemble.models)
-# 5
-
-# Translate
-en2de_ensemble.translate('Hello world!')
-# 'Hallo Welt!'
-```
-
-## Training your own model (WMT'18 English-German)
-
-The following instructions can be adapted to reproduce the models from the paper.
-
-
-#### Step 1. Prepare parallel data and optionally train a baseline (English-German) model
-
-First download and preprocess the data:
-```bash
-# Download and prepare the data
-cd examples/backtranslation/
-bash prepare-wmt18en2de.sh
-cd ../..
-
-# Binarize the data
-TEXT=examples/backtranslation/wmt18_en_de
-fairseq-preprocess \
- --joined-dictionary \
- --source-lang en --target-lang de \
- --trainpref $TEXT/train --validpref $TEXT/valid --testpref $TEXT/test \
- --destdir data-bin/wmt18_en_de --thresholdtgt 0 --thresholdsrc 0 \
- --workers 20
-
-# Copy the BPE code into the data-bin directory for future use
-cp examples/backtranslation/wmt18_en_de/code data-bin/wmt18_en_de/code
-```
-
-(Optionally) Train a baseline model (English-German) using just the parallel data:
-```bash
-CHECKPOINT_DIR=checkpoints_en_de_parallel
-fairseq-train --fp16 \
- data-bin/wmt18_en_de \
- --source-lang en --target-lang de \
- --arch transformer_wmt_en_de_big --share-all-embeddings \
- --dropout 0.3 --weight-decay 0.0 \
- --criterion label_smoothed_cross_entropy --label-smoothing 0.1 \
- --optimizer adam --adam-betas '(0.9, 0.98)' --clip-norm 0.0 \
- --lr 0.001 --lr-scheduler inverse_sqrt --warmup-updates 4000 \
- --max-tokens 3584 --update-freq 16 \
- --max-update 30000 \
- --save-dir $CHECKPOINT_DIR
-# Note: the above command assumes 8 GPUs. Adjust `--update-freq` if you have a
-# different number of GPUs.
-```
-
-Average the last 10 checkpoints:
-```bash
-python scripts/average_checkpoints.py \
- --inputs $CHECKPOINT_DIR \
- --num-epoch-checkpoints 10 \
- --output $CHECKPOINT_DIR/checkpoint.avg10.pt
-```
-
-Evaluate BLEU:
-```bash
-# tokenized BLEU on newstest2017:
-bash examples/backtranslation/tokenized_bleu.sh \
- wmt17 \
- en-de \
- data-bin/wmt18_en_de \
- data-bin/wmt18_en_de/code \
- $CHECKPOINT_DIR/checkpoint.avg10.pt
-# BLEU4 = 29.57, 60.9/35.4/22.9/15.5 (BP=1.000, ratio=1.014, syslen=63049, reflen=62152)
-# compare to 29.46 in Table 1, which is also for tokenized BLEU
-
-# generally it's better to report (detokenized) sacrebleu though:
-bash examples/backtranslation/sacrebleu.sh \
- wmt17 \
- en-de \
- data-bin/wmt18_en_de \
- data-bin/wmt18_en_de/code \
- $CHECKPOINT_DIR/checkpoint.avg10.pt
-# BLEU+case.mixed+lang.en-de+numrefs.1+smooth.exp+test.wmt17+tok.13a+version.1.4.3 = 29.0 60.6/34.7/22.4/14.9 (BP = 1.000 ratio = 1.013 hyp_len = 62099 ref_len = 61287)
-```
-
-
-#### Step 2. Back-translate monolingual German data
-
-Train a reverse model (German-English) to do the back-translation:
-```bash
-CHECKPOINT_DIR=checkpoints_de_en_parallel
-fairseq-train --fp16 \
- data-bin/wmt18_en_de \
- --source-lang de --target-lang en \
- --arch transformer_wmt_en_de_big --share-all-embeddings \
- --dropout 0.3 --weight-decay 0.0 \
- --criterion label_smoothed_cross_entropy --label-smoothing 0.1 \
- --optimizer adam --adam-betas '(0.9, 0.98)' --clip-norm 0.0 \
- --lr 0.001 --lr-scheduler inverse_sqrt --warmup-updates 4000 \
- --max-tokens 3584 --update-freq 16 \
- --max-update 30000 \
- --save-dir $CHECKPOINT_DIR
-# Note: the above command assumes 8 GPUs. Adjust `--update-freq` if you have a
-# different number of GPUs.
-```
-
-Let's evaluate the back-translation (BT) model to make sure it is well trained:
-```bash
-bash examples/backtranslation/sacrebleu.sh \
- wmt17 \
- de-en \
- data-bin/wmt18_en_de \
- data-bin/wmt18_en_de/code \
- $CHECKPOINT_DIR/checkpoint_best.pt
-# BLEU+case.mixed+lang.de-en+numrefs.1+smooth.exp+test.wmt17+tok.13a+version.1.4.3 = 34.9 66.9/41.8/28.5/19.9 (BP = 0.983 ratio = 0.984 hyp_len = 63342 ref_len = 64399)
-# compare to the best system from WMT'17 which scored 35.1: http://matrix.statmt.org/matrix/systems_list/1868
-```
-
-Next prepare the monolingual data:
-```bash
-# Download and prepare the monolingual data
-# By default the script samples 25M monolingual sentences, which after
-# deduplication should be just over 24M sentences. These are split into 25
-# shards, each with 1M sentences (except for the last shard).
-cd examples/backtranslation/
-bash prepare-de-monolingual.sh
-cd ../..
-
-# Binarize each shard of the monolingual data
-TEXT=examples/backtranslation/wmt18_de_mono
-for SHARD in $(seq -f "%02g" 0 24); do \
- fairseq-preprocess \
- --only-source \
- --source-lang de --target-lang en \
- --joined-dictionary \
- --srcdict data-bin/wmt18_en_de/dict.de.txt \
- --testpref $TEXT/bpe.monolingual.dedup.${SHARD} \
- --destdir data-bin/wmt18_de_mono/shard${SHARD} \
- --workers 20; \
- cp data-bin/wmt18_en_de/dict.en.txt data-bin/wmt18_de_mono/shard${SHARD}/; \
-done
-```
-
-Now we're ready to perform back-translation over the monolingual data. The
-following command generates via sampling, but it's possible to use greedy
-decoding (`--beam 1`), beam search (`--beam 5`),
-top-k sampling (`--sampling --beam 1 --sampling-topk 10`), etc.:
-```bash
-mkdir backtranslation_output
-for SHARD in $(seq -f "%02g" 0 24); do \
- fairseq-generate --fp16 \
- data-bin/wmt18_de_mono/shard${SHARD} \
- --path $CHECKPOINT_DIR/checkpoint_best.pt \
- --skip-invalid-size-inputs-valid-test \
- --max-tokens 4096 \
- --sampling --beam 1 \
- > backtranslation_output/sampling.shard${SHARD}.out; \
-done
-```
-
-After BT, use the `extract_bt_data.py` script to re-combine the shards, extract
-the back-translations and apply length ratio filters:
-```bash
-python examples/backtranslation/extract_bt_data.py \
- --minlen 1 --maxlen 250 --ratio 1.5 \
- --output backtranslation_output/bt_data --srclang en --tgtlang de \
- backtranslation_output/sampling.shard*.out
-
-# Ensure lengths are the same:
-# wc -l backtranslation_output/bt_data.{en,de}
-# 21795614 backtranslation_output/bt_data.en
-# 21795614 backtranslation_output/bt_data.de
-# 43591228 total
-```
-
-Binarize the filtered BT data and combine it with the parallel data:
-```bash
-TEXT=backtranslation_output
-fairseq-preprocess \
- --source-lang en --target-lang de \
- --joined-dictionary \
- --srcdict data-bin/wmt18_en_de/dict.en.txt \
- --trainpref $TEXT/bt_data \
- --destdir data-bin/wmt18_en_de_bt \
- --workers 20
-
-# We want to train on the combined data, so we'll symlink the parallel + BT data
-# in the wmt18_en_de_para_plus_bt directory. We link the parallel data as "train"
-# and the BT data as "train1", so that fairseq will combine them automatically
-# and so that we can use the `--upsample-primary` option to upsample the
-# parallel data (if desired).
-PARA_DATA=$(readlink -f data-bin/wmt18_en_de)
-BT_DATA=$(readlink -f data-bin/wmt18_en_de_bt)
-COMB_DATA=data-bin/wmt18_en_de_para_plus_bt
-mkdir -p $COMB_DATA
-for LANG in en de; do \
- ln -s ${PARA_DATA}/dict.$LANG.txt ${COMB_DATA}/dict.$LANG.txt; \
- for EXT in bin idx; do \
- ln -s ${PARA_DATA}/train.en-de.$LANG.$EXT ${COMB_DATA}/train.en-de.$LANG.$EXT; \
- ln -s ${BT_DATA}/train.en-de.$LANG.$EXT ${COMB_DATA}/train1.en-de.$LANG.$EXT; \
- ln -s ${PARA_DATA}/valid.en-de.$LANG.$EXT ${COMB_DATA}/valid.en-de.$LANG.$EXT; \
- ln -s ${PARA_DATA}/test.en-de.$LANG.$EXT ${COMB_DATA}/test.en-de.$LANG.$EXT; \
- done; \
-done
-```
-
-
-#### Step 3. Train an English-German model over the combined parallel + BT data
-
-Finally we can train a model over the parallel + BT data:
-```bash
-CHECKPOINT_DIR=checkpoints_en_de_parallel_plus_bt
-fairseq-train --fp16 \
- data-bin/wmt18_en_de_para_plus_bt \
- --upsample-primary 16 \
- --source-lang en --target-lang de \
- --arch transformer_wmt_en_de_big --share-all-embeddings \
- --dropout 0.3 --weight-decay 0.0 \
- --criterion label_smoothed_cross_entropy --label-smoothing 0.1 \
- --optimizer adam --adam-betas '(0.9, 0.98)' --clip-norm 0.0 \
- --lr 0.0007 --lr-scheduler inverse_sqrt --warmup-updates 4000 \
- --max-tokens 3584 --update-freq 16 \
- --max-update 100000 \
- --save-dir $CHECKPOINT_DIR
-# Note: the above command assumes 8 GPUs. Adjust `--update-freq` if you have a
-# different number of GPUs.
-```
-
-Average the last 10 checkpoints:
-```bash
-python scripts/average_checkpoints.py \
- --inputs $CHECKPOINT_DIR \
- --num-epoch-checkpoints 10 \
- --output $CHECKPOINT_DIR/checkpoint.avg10.pt
-```
-
-Evaluate BLEU:
-```bash
-# tokenized BLEU on newstest2017:
-bash examples/backtranslation/tokenized_bleu.sh \
- wmt17 \
- en-de \
- data-bin/wmt18_en_de \
- data-bin/wmt18_en_de/code \
- $CHECKPOINT_DIR/checkpoint.avg10.pt
-# BLEU4 = 32.35, 64.4/38.9/26.2/18.3 (BP=0.977, ratio=0.977, syslen=60729, reflen=62152)
-# compare to 32.35 in Table 1, which is also for tokenized BLEU
-
-# generally it's better to report (detokenized) sacrebleu:
-bash examples/backtranslation/sacrebleu.sh \
- wmt17 \
- en-de \
- data-bin/wmt18_en_de \
- data-bin/wmt18_en_de/code \
- $CHECKPOINT_DIR/checkpoint.avg10.pt
-# BLEU+case.mixed+lang.en-de+numrefs.1+smooth.exp+test.wmt17+tok.13a+version.1.4.3 = 31.5 64.3/38.2/25.6/17.6 (BP = 0.971 ratio = 0.971 hyp_len = 59515 ref_len = 61287)
-```
-
-
-## Citation
-```bibtex
-@inproceedings{edunov2018backtranslation,
- title = {Understanding Back-Translation at Scale},
- author = {Edunov, Sergey and Ott, Myle and Auli, Michael and Grangier, David},
- booktitle = {Conference of the Association for Computational Linguistics (ACL)},
- year = 2018,
-}
-```
diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/datasets/prepare_for_tests.sh b/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/datasets/prepare_for_tests.sh
deleted file mode 100644
index 67e875a41da652b2fcae6631b76d94584935ddb9..0000000000000000000000000000000000000000
--- a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/datasets/prepare_for_tests.sh
+++ /dev/null
@@ -1,31 +0,0 @@
-#!/bin/bash -e
-# Copyright (c) Facebook, Inc. and its affiliates.
-
-# Download the mini dataset (coco val2017_100, with only 100 images)
-# to be used in unittests & integration tests.
-
-cd "${0%/*}"
-
-BASE=https://dl.fbaipublicfiles.com/detectron2
-ROOT=${DETECTRON2_DATASETS:-./}
-ROOT=${ROOT/#\~/$HOME} # expand ~ to HOME
-mkdir -p $ROOT/coco/annotations
-
-for anno in instances_val2017_100 \
- person_keypoints_val2017_100 ; do
-
- dest=$ROOT/coco/annotations/$anno.json
- [[ -s $dest ]] && {
- echo "$dest exists. Skipping ..."
- } || {
- wget $BASE/annotations/coco/$anno.json -O $dest
- }
-done
-
-dest=$ROOT/coco/val2017_100.tgz
-[[ -d $ROOT/coco/val2017 ]] && {
- echo "$ROOT/coco/val2017 exists. Skipping ..."
-} || {
- wget $BASE/annotations/coco/val2017_100.tgz -O $dest
- tar xzf $dest -C $ROOT/coco/ && rm -f $dest
-}
diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/data/common.py b/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/data/common.py
deleted file mode 100644
index d6b8742417abc897f5faa190db1341bbe7b2940d..0000000000000000000000000000000000000000
--- a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/data/common.py
+++ /dev/null
@@ -1,241 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-import copy
-import itertools
-import logging
-import numpy as np
-import pickle
-import random
-import torch.utils.data as data
-from torch.utils.data.sampler import Sampler
-
-from detectron2.utils.serialize import PicklableWrapper
-
-__all__ = ["MapDataset", "DatasetFromList", "AspectRatioGroupedDataset", "ToIterableDataset"]
-
-
-def _shard_iterator_dataloader_worker(iterable):
- # Shard the iterable if we're currently inside pytorch dataloader worker.
- worker_info = data.get_worker_info()
- if worker_info is None or worker_info.num_workers == 1:
- # do nothing
- yield from iterable
- else:
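- # e.g. with 4 dataloader workers, worker 0 yields elements 0, 4, 8, ... of the
- # iterable and worker 1 yields 1, 5, 9, ..., so the workers partition the
- # stream without duplicates.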
- yield from itertools.islice(iterable, worker_info.id, None, worker_info.num_workers)
-
-
-class _MapIterableDataset(data.IterableDataset):
- """
- Map a function over elements in an IterableDataset.
-
- Similar to pytorch's MapIterDataPipe, but support filtering when map_func
- returns None.
-
- This class is not public-facing. Will be called by `MapDataset`.
- """
-
- def __init__(self, dataset, map_func):
- self._dataset = dataset
- self._map_func = PicklableWrapper(map_func) # wrap so that a lambda will work
-
- def __len__(self):
- return len(self._dataset)
-
- def __iter__(self):
- for x in map(self._map_func, self._dataset):
- if x is not None:
- yield x
-
-
-class MapDataset(data.Dataset):
- """
- Map a function over the elements in a dataset.
- """
-
- def __init__(self, dataset, map_func):
- """
- Args:
- dataset: a dataset where map function is applied. Can be either
- map-style or iterable dataset. When given an iterable dataset,
- the returned object will also be an iterable dataset.
- map_func: a callable which maps the element in dataset. map_func can
- return None to skip the data (e.g. in case of errors).
- How None is handled depends on the style of `dataset`.
- If `dataset` is map-style, it randomly tries other elements.
- If `dataset` is iterable, it skips the data and tries the next.
- """
- self._dataset = dataset
- self._map_func = PicklableWrapper(map_func) # wrap so that a lambda will work
-
- self._rng = random.Random(42)
- self._fallback_candidates = set(range(len(dataset)))
-
- def __new__(cls, dataset, map_func):
- is_iterable = isinstance(dataset, data.IterableDataset)
- if is_iterable:
- return _MapIterableDataset(dataset, map_func)
- else:
- return super().__new__(cls)
-
- def __getnewargs__(self):
- return self._dataset, self._map_func
-
- def __len__(self):
- return len(self._dataset)
-
- def __getitem__(self, idx):
- retry_count = 0
- cur_idx = int(idx)
-
- while True:
- data = self._map_func(self._dataset[cur_idx])
- if data is not None:
- self._fallback_candidates.add(cur_idx)
- return data
-
- # _map_func fails for this idx, use a random new index from the pool
- retry_count += 1
- self._fallback_candidates.discard(cur_idx)
- cur_idx = self._rng.sample(self._fallback_candidates, k=1)[0]
-
- if retry_count >= 3:
- logger = logging.getLogger(__name__)
- logger.warning(
- "Failed to apply `_map_func` for idx: {}, retry count: {}".format(
- idx, retry_count
- )
- )
-
-
-class DatasetFromList(data.Dataset):
- """
- Wrap a list to a torch Dataset. It produces elements of the list as data.
- """
-
- def __init__(self, lst: list, copy: bool = True, serialize: bool = True):
- """
- Args:
- lst (list): a list which contains elements to produce.
- copy (bool): whether to deepcopy the element when producing it,
- so that the result can be modified in place without affecting the
- source in the list.
- serialize (bool): whether to hold memory using serialized objects, when
- enabled, data loader workers can use shared RAM from master
- process instead of making a copy.
- """
- self._lst = lst
- self._copy = copy
- self._serialize = serialize
-
- def _serialize(data):
- buffer = pickle.dumps(data, protocol=-1)
- return np.frombuffer(buffer, dtype=np.uint8)
-
- if self._serialize:
- logger = logging.getLogger(__name__)
- logger.info(
- "Serializing {} elements to byte tensors and concatenating them all ...".format(
- len(self._lst)
- )
- )
- self._lst = [_serialize(x) for x in self._lst]
- self._addr = np.asarray([len(x) for x in self._lst], dtype=np.int64)
- self._addr = np.cumsum(self._addr)
- self._lst = np.concatenate(self._lst)
- logger.info("Serialized dataset takes {:.2f} MiB".format(len(self._lst) / 1024 ** 2))
-
- def __len__(self):
- if self._serialize:
- return len(self._addr)
- else:
- return len(self._lst)
-
- def __getitem__(self, idx):
- if self._serialize:
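- # self._addr[i] holds the end offset of item i in the flat byte buffer, so
- # item idx occupies the byte range [self._addr[idx - 1], self._addr[idx]).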
- start_addr = 0 if idx == 0 else self._addr[idx - 1].item()
- end_addr = self._addr[idx].item()
- bytes = memoryview(self._lst[start_addr:end_addr])
- return pickle.loads(bytes)
- elif self._copy:
- return copy.deepcopy(self._lst[idx])
- else:
- return self._lst[idx]
-
-
-class ToIterableDataset(data.IterableDataset):
- """
- Convert an old indices-based (also called map-style) dataset
- to an iterable-style dataset.
- """
-
- def __init__(self, dataset: data.Dataset, sampler: Sampler, shard_sampler: bool = True):
- """
- Args:
- dataset: an old-style dataset with ``__getitem__``
- sampler: a cheap iterable that produces indices to be applied on ``dataset``.
- shard_sampler: whether to shard the sampler based on the current pytorch data loader
- worker id. When an IterableDataset is forked by pytorch's DataLoader into multiple
- workers, it is responsible for sharding its data based on worker id so that workers
- don't produce identical data.
-
- Most samplers (like our TrainingSampler) do not shard based on dataloader worker id
- and this argument should be set to True. But certain samplers may be already
- sharded, in that case this argument should be set to False.
- """
- assert not isinstance(dataset, data.IterableDataset), dataset
- assert isinstance(sampler, Sampler), sampler
- self.dataset = dataset
- self.sampler = sampler
- self.shard_sampler = shard_sampler
-
- def __iter__(self):
- if not self.shard_sampler:
- sampler = self.sampler
- else:
- # With map-style dataset, `DataLoader(dataset, sampler)` runs the
- # sampler in main process only. But `DataLoader(ToIterableDataset(dataset, sampler))`
- # will run sampler in every of the N worker. So we should only keep 1/N of the ids on
- # each worker. The assumption is that sampler is cheap to iterate so it's fine to
- # discard ids in workers.
- sampler = _shard_iterator_dataloader_worker(self.sampler)
- for idx in sampler:
- yield self.dataset[idx]
-
- def __len__(self):
- return len(self.sampler)
-
-
-class AspectRatioGroupedDataset(data.IterableDataset):
- """
- Batch data that have similar aspect ratio together.
- In this implementation, images whose aspect ratio < (or >) 1 will
- be batched together.
- This improves training speed because the images then need less padding
- to form a batch.
-
- It assumes the underlying dataset produces dicts with "width" and "height" keys.
- It will then produce a list of original dicts with length = batch_size,
- all with similar aspect ratios.
- """
-
- def __init__(self, dataset, batch_size):
- """
- Args:
- dataset: an iterable. Each element must be a dict with keys
- "width" and "height", which will be used to batch data.
- batch_size (int):
- """
- self.dataset = dataset
- self.batch_size = batch_size
- self._buckets = [[] for _ in range(2)]
- # Hard-coded two aspect ratio groups: w > h and w < h.
- # Can add support for more aspect ratio groups, but doesn't seem useful
-
- def __iter__(self):
- for d in self.dataset:
- w, h = d["width"], d["height"]
- bucket_id = 0 if w > h else 1
- bucket = self._buckets[bucket_id]
- bucket.append(d)
- if len(bucket) == self.batch_size:
- yield bucket[:]
- del bucket[:]
diff --git a/spaces/OpenGVLab/InternGPT/third-party/lama/saicinpainting/evaluation/masks/README.md b/spaces/OpenGVLab/InternGPT/third-party/lama/saicinpainting/evaluation/masks/README.md
deleted file mode 100644
index cf176bc10fae3b03f139727147c220f2a735c806..0000000000000000000000000000000000000000
--- a/spaces/OpenGVLab/InternGPT/third-party/lama/saicinpainting/evaluation/masks/README.md
+++ /dev/null
@@ -1,27 +0,0 @@
-# Current algorithm
-
-## Choice of mask objects
-
-To identify objects that are suitable for mask generation, we use a panoptic segmentation model
-from [detectron2](https://github.com/facebookresearch/detectron2) trained on COCO. Categories of the detected instances
-belong either to the "stuff" or the "things" type. We only consider instances whose category belongs
-to "things". In addition, we set an upper bound on the area taken by an object: a very large area
-indicates that the instance is either the background or a main object which should not be removed.
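-
-As an illustration only (this sketch is not part of the original pipeline), and assuming a
-detectron2-style panoptic output where each entry of `segments_info` carries `isthing`, `area` and
-`id` fields, the selection step could look roughly like this; the `max_area_fraction` threshold is
-an arbitrary placeholder:
-
-```python
-# Hypothetical helper: keep "things" instances that do not cover too much of the image.
-def select_mask_candidates(segments_info, image_area, max_area_fraction=0.3):
-    candidates = []
-    for seg in segments_info:
-        if not seg["isthing"]:
-            continue  # "stuff" categories are not used for masks
-        if seg["area"] > max_area_fraction * image_area:
-            continue  # too large: likely the background or the main object
-        candidates.append(seg["id"])
-    return candidates
-```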
-
-## Choice of position for mask
-
-We assume that the input image has size 2^n x 2^m. We downsample it using the
-[COUNTLESS](https://github.com/william-silversmith/countless) algorithm so that the width is equal to
-64 = 2^6 = 2^{downsample_levels}.
-
-### Augmentation
-
-There are several parameters for augmentation:
-- Scaling factor. We limit scaling to cases where the mask, scaled about a pivot point at its center,
-  still fits completely inside the image.
--
-
-### Shift
-
-
-## Select
diff --git a/spaces/OptimalScale/Robin-7b/lmflow/pipeline/evaluator.py b/spaces/OptimalScale/Robin-7b/lmflow/pipeline/evaluator.py
deleted file mode 100644
index 27e48fb70ab440afc7fe7796e5c3a5b104be4a9e..0000000000000000000000000000000000000000
--- a/spaces/OptimalScale/Robin-7b/lmflow/pipeline/evaluator.py
+++ /dev/null
@@ -1,387 +0,0 @@
-"""The Evaluator class simplifies the process of running evaluation on a language model provided by a HFDecoderModel instance imported from the lmflow package. The class constructor takes three dictionaries as arguments: model_args containing arguments related to the language model, data_args containing arguments related to the data used for evaluation, and evaluator_args containing other arguments for the evaluation process.
-
-The class has two methods: create_dataloader() that loads the data from the test file, creates a data loader, and returns it with the size of the data, and evaluate(model) that generates output text given input text. It uses the create_dataloader() method to load the data, iterates over the data in mini-batches, and encodes the input text with the encode() method of the HFDecoderModel class. Then, it generates output text using the evaluate() method of the HFDecoderModel class, decodes the generated output text using the decode() method of the HFDecoderModel class, and writes the output to a file in the output directory. The method also logs some information to the console and Weights and Biases if the use_wandb argument is True.
-"""
-import os
-# import deepspeed
-import torch
-import wandb
-import deepspeed
-import sys
-import numpy as np
-import datetime
-import json
-# TODO: remove later
-from transformers import AutoConfig
-import torch.distributed as dist
-
-from lmflow.datasets.dataset import Dataset
-from lmflow.pipeline.base_pipeline import BasePipeline
-from lmflow.models.hf_decoder_model import HFDecoderModel
-from lmflow.utils.data_utils import set_random_seed, batchlize, answer_extraction
-os.environ["TOKENIZERS_PARALLELISM"] = "false" # To avoid warnings about parallelism in tokenizers
-
-class Evaluator(BasePipeline):
- """
- Initializes the `Evaluator` class with given arguments.
-
- Parameters
- ------------
- model_args : ModelArguments object.
- Contains the arguments required to load the model.
-
- data_args : DatasetArguments object.
- Contains the arguments required to load the dataset.
-
- evaluator_args : EvaluatorArguments object.
- Contains the arguments required to perform evaluation.
-
-
- """
- def __init__(self, model_args, data_args, evaluator_args):
- # our method
- self.data_args = data_args
- self.evaluator_args = evaluator_args
- self.model_args = model_args
- print("--------Begin Evaluator Arguments----------")
- print(f"model_args : {self.model_args}")
- print(f"data_args : {self.data_args}")
- print(f"evaluator_args : {self.evaluator_args}")
- print("--------End Evaluator Arguments----------")
- # logger
- if(self.evaluator_args.use_wandb == True):
- wandb.init(project="lmflow_evaluation")
- # random seed
- set_random_seed(self.evaluator_args.random_seed)
- self.local_rank = int(os.getenv("LOCAL_RANK", "0"))
- self.world_size = int(os.getenv("WORLD_SIZE", "1"))
- torch.cuda.set_device(self.local_rank) # NOTE: cpu-only machine will have error
- deepspeed.init_distributed()
-
- self.config = AutoConfig.from_pretrained(model_args.model_name_or_path)
- try:
- self.model_hidden_size = self.config.hidden_size
- except AttributeError:
- print("Error in setting hidden size, using the default size 1024")
- self.model_hidden_size = 1024 # gpt2 does not seem to have hidden_size in its config
-
- print(f"model_hidden_size = {self.model_hidden_size}")
- # batch size has to be divisible by world_size, but can be bigger than world_size
- train_batch_size = 1 * self.world_size
- self.evaluator_args.minibatch_size = train_batch_size
- self.block_size = evaluator_args.evaluate_block_size
- # dataloader, data_size = create_dataloader(args) # load dataset
-
-
- def create_dataloader(self, dataset: Dataset):
- data_dict = dataset.to_dict()
- inputs = [ instance["input"] for instance in data_dict["instances"] ]
- outputs = [ instance["output"] for instance in data_dict["instances"] ]
- dataset_size = len(outputs)
- dataset_buf = []
- for idx in range(dataset_size):
- dataset_buf.append({
- "input": inputs[idx],
- "output": outputs[idx],
- "input_idx": idx
- })
-
- dataloader = batchlize(
- dataset_buf,
- self.evaluator_args.minibatch_size,
- self.evaluator_args.random_shuffle
- )
- print(f"Successfully create dataloader with size {len(dataloader)}.")
- return dataloader, dataset_size
-
-
- # TODO: Split for better unittest
-
- def _match(self, predicted_answer, groundtruth, answer_type=None):
- case_insensitive_types = [
- "strategyqa",
- "coin_flip",
- "pubmedqa",
- "binary_choice",
- "medmcqa",
- "usmle",
- ]
- if answer_type in case_insensitive_types:
- return predicted_answer.lower() == groundtruth.lower()
- else:
- return predicted_answer == groundtruth
-
-
- def evaluate(self, model, dataset: Dataset, metric = "accuracy"):
- """
- Perform Evaluation for a model
-
- Parameters
- ------------
- model : TunableModel object.
- TunableModel to perform inference
-
- dataset : Dataset object.
-
-
- """
- if metric in ["acc", "accuracy"]:
- dataloader, data_size = self.create_dataloader(dataset)
-
- if not dist.is_initialized() or dist.get_rank() == 0:
- if not os.path.exists(self.evaluator_args.output_dir):
- os.makedirs(self.evaluator_args.output_dir)
- output_writer = open(f"{self.evaluator_args.output_dir}/evaluation.json", "w")
-
- acc_list = []
- total = 0
- # ds_engine = deepspeed.initialize(model=model.get_model(), config_params=self.ds_config)[0]
- # ds_engine.module.eval()
- for batch_index, batch in enumerate(dataloader):
- if batch_index * self.world_size >= self.data_args.max_eval_samples:
- break
- if self.local_rank >= len(batch):
- current_batch = batch[0]
- else:
- # the batch in current process
- current_batch = batch[self.local_rank]
-
- prompt_structure = self.evaluator_args.prompt_structure
- input = prompt_structure.format(input=current_batch['input'])
- output = current_batch['output']
- input_idx = current_batch['input_idx']
-
- inputs = model.encode(input, return_tensors="pt").to(device=self.local_rank)
-
-
- # with torch.no_grad():
- # outputs = ds_engine.module.generate(inputs, synced_gpus=True, pad_token_id=model.get_tokenizer().eos_token_id, min_length=5, max_length=100,temperature=0.0, do_sample=False)
- outputs = model.inference(inputs, max_new_tokens=100, temperature=0.0)
- text_out = model.decode(outputs[0], skip_special_tokens=True)
-
- # only return the generation, truncating the input
- prompt_length = len(model.decode(inputs[0], skip_special_tokens=True,))
- text_out = text_out[prompt_length:]
- answer_type = self.evaluator_args.answer_type
- pred_answer = answer_extraction(
- text_out,
- answer_type=answer_type,
- )
- print(f"batch_index{batch_index} rank{self.local_rank}:\n question={input}\n prediction={text_out}\n")
- print(f"predicted answer: {pred_answer} \n")
- print(f"groundtruth answer: {output} \n")
-
- if self.local_rank >= len(batch): # for the last batch, the padding examples are ignored and do not contribute to the accuracy
- correct_ = 0
- total_ = 0
- else:
- correct_ = 0
- total_ = 1
- if self._match(pred_answer, output, answer_type):
- correct_ = 1
-
- # collect accuracy from all gpus
- all_process = torch.tensor([correct_, total_], dtype=torch.float32, device=self.local_rank)
- dist.all_reduce(all_process, dist.ReduceOp.SUM, async_op=False)
- correct_, total_ = all_process.tolist()
- avg = correct_ / total_
- acc_list.append(avg)
- total += total_
-
- # collect predictions from all gpus
- output_dict = {"question": input,
- "prediction": text_out,
- "pred_answer": pred_answer,
- "answer": output}
- all_process_list = [{}] * self.world_size
-
- dist.gather_object(output_dict, all_process_list if dist.get_rank() == 0 else None, dst=0)
- if not dist.is_initialized() or dist.get_rank() == 0:
- current_accuracy = np.mean(acc_list)
- print(datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S"), "{}/ {} has been finished, current accuracy = {}".format(int(total), data_size, current_accuracy))
-
- if(self.evaluator_args.use_wandb == True):
- wandb.log({"Accuracy": current_accuracy})
-
- for index, output in enumerate(all_process_list):
- output_json = json.dumps(output)
- output_writer.write(output_json + '\n')
-
- if not dist.is_initialized() or dist.get_rank() == 0:
- current_accuracy = np.mean(acc_list)
- print("Final accuracy = ", current_accuracy)
- output_writer.close()
- elif metric in ["ppl", "perplexity"]:
- ppl = self._evaluate_ppl(model, dataset)
- print(f"Evaluating final ppl: {ppl}")
- elif metric in ["nll", "neg_log_likelihood"]:
- neg_log_likelihood = self._evaluate_neg_log_likelihood(model, dataset)
- print(f"Evaluating final negative log likelihood: {neg_log_likelihood}")
- else:
- raise NotImplementedError(f"{metric} is not implemented or not match with our defined metrics")
-
-
- def _evaluate_ppl(self, model, dataset: Dataset):
- data_dict = dataset.to_dict()
- if data_dict['type'] == 'text2text':
- raise NotImplementedError("ppl evaluation is currently not supported for text2text dataset, please use text_only dataset.")
- texts = [ instance["text"] for instance in data_dict["instances"] ]
- encodings = model.get_tokenizer()("\n\n".join(texts), return_tensors="pt")
- # Define some constant
- try:
- max_length = min(model.get_backend_model().config.n_positions, model.get_max_length())
- except AttributeError:
- max_length = min(1024, model.get_max_length())
-
- print(f"The maximum sequence length : {max_length}")
- seq_len = encodings.input_ids.size(1)
-
- nlls = []
- prev_end_loc = 0
- for begin_loc in range(0, seq_len, self.block_size):
- end_loc = min(begin_loc + max_length, seq_len)
- trg_len = end_loc - prev_end_loc # may be different from block_size on last loop
- input_ids = encodings.input_ids[:, begin_loc:end_loc].to(device=self.local_rank)
- target_ids = input_ids.clone()
- target_ids[:, :-trg_len] = -100
-
- with torch.no_grad():
- outputs = model.get_backend_model()(input_ids, labels=target_ids)
- # loss is calculated using CrossEntropyLoss which averages over valid labels
- # N.B. the model only calculates loss over trg_len - 1 labels, because it internally shifts the labels
- # to the left by 1.
- neg_log_likelihood = outputs.loss
-
- nlls.append(neg_log_likelihood)
- prev_end_loc = end_loc
- print(f"Evaluating PPL: {int(begin_loc/self.block_size) + 1} / {int(seq_len/self.block_size)} Complete, current ppl : {torch.exp(torch.stack(nlls).mean())}")
- if end_loc == seq_len:
- break
- ppl = torch.exp(torch.stack(nlls).mean())
- return ppl
-
-
- def _evaluate_neg_log_likelihood(self, model, dataset: Dataset):
- """
- Evaluates negative log likelihood of the model over a dataset.
-
- NLL = -1/N sum_{i=1}^N sum_{j=1}^|w_i| ln(p(w_{i,j}|context_window)),
-
- where N is the number of data samples, w_{i,j} is the j-th token in
- i-th sample. Here "context_window" = (w_{i,start}, w_{i,start+1}, ...,
- w_{i,j-1}) with start = max(0, j - window_length + 1). "window_length"
- is normally the maximum length accepted by the model.
-
- Returns:
- A float which represents the negative log likelihood.
- """
- data_dict = dataset.to_dict()
-
- # Handles prompt structure
- if dataset.get_type() == "text2text":
- prompt = self.evaluator_args.prompt_structure
- data_dict["instances"] = [
- {
- "input": prompt.format(input=instance["input"]),
- "output": instance["output"]
- }
- for instance in data_dict["instances"]
- ]
-
- dataset = dataset.from_dict(data_dict)
- tokenized_dataset = model.tokenize(dataset, add_special_tokens=False)
- tokenized_dataset = tokenized_dataset.get_backend_dataset()
- encoding_list = [
- {
- "input_ids": torch.tensor([input_ids]),
- "labels": torch.tensor([labels]),
- }
- for input_ids, labels in zip(tokenized_dataset["input_ids"],
- tokenized_dataset["labels"])
- ]
-
- # Gets context window length
- try:
- max_length = min(model.get_backend_model().config.n_positions,
- model.get_max_length())
- except AttributeError:
- max_length = min(1024, model.get_max_length())
-
- nlls = []
- full_nlls = []
- num_samples = len(encoding_list)
- for sample_idx, encodings in enumerate(encoding_list):
- seq_len = encodings["input_ids"].size(1)
-
- prev_end_loc = 0
- for begin_loc in range(0, seq_len, self.block_size):
- end_loc = min(begin_loc + max_length, seq_len)
-
- # may be different from block_size on last loop
- trg_len = end_loc - prev_end_loc
- input_ids = encodings["input_ids"][:, begin_loc:end_loc]
- input_ids = input_ids.to(device=self.local_rank)
-
- labels = encodings["labels"][:, begin_loc:end_loc]
- target_ids = labels.clone()
- full_target_ids = input_ids.clone()
-
- def get_nll(label_ids, nll_list):
- label_ids[:, :-trg_len] = -100
- label_ids = label_ids.to(device=self.local_rank)
-
- # Valid labels are from 0 to `vocab_size`
- num_valid_labels = torch.count_nonzero(label_ids >= 0)
- if label_ids[0, 0] != -100:
- num_valid_labels -= 1
-
- if not torch.all(label_ids == -100):
- with torch.no_grad():
- outputs = model.get_backend_model()(
- input_ids, labels=label_ids
- )
- # loss is calculated using CrossEntropyLoss which
- # sums over valid labels N.B. the model only
- # calculates loss over trg_len - 1 labels, because
- # it internally shifts the labels to the left by 1.
- neg_log_likelihood = outputs.loss * num_valid_labels
- else:
- neg_log_likelihood = torch.zeros([]).to(
- device=self.local_rank
- )
-
- nll_list.append(neg_log_likelihood)
-
- get_nll(target_ids, nlls)
- get_nll(full_target_ids, full_nlls)
-
- current_output_nll = torch.stack(nlls).sum() / (sample_idx + 1)
- current_full_nll = torch.stack(full_nlls).sum() / (sample_idx + 1)
-
- prev_end_loc = end_loc
- if dataset.get_type() == "text_only":
- print(
- f"Evaluating negative log likelihood:"
- f" {sample_idx + 1} / {num_samples} Complete,"
- f" current nll: {current_full_nll}"
- )
- elif dataset.get_type() == "text2text":
- print(
- f"Evaluating negative log likelihood:"
- f" {sample_idx + 1} / {num_samples} Complete,"
- f" current full nll / input nll / output nll:"
- f" {current_full_nll} /"
- f" {current_full_nll - current_output_nll} /"
- f" {current_output_nll}"
- )
- else:
- raise NotImplementedError(
- "f{dataset.get_type()} typed datasets are not supported"
- )
-
- if end_loc == seq_len:
- break
-
- mean_nll = torch.stack(nlls).sum() / num_samples
- return mean_nll
diff --git a/spaces/Osborn-bh/ChatGLM3-6B-Osborn/DEPLOYMENT_en.md b/spaces/Osborn-bh/ChatGLM3-6B-Osborn/DEPLOYMENT_en.md
deleted file mode 100644
index 46513279d8d8fa027b527c3ba911358d293342a2..0000000000000000000000000000000000000000
--- a/spaces/Osborn-bh/ChatGLM3-6B-Osborn/DEPLOYMENT_en.md
+++ /dev/null
@@ -1,42 +0,0 @@
-## Low-Cost Deployment
-
-### Model Quantization
-
-By default, the model is loaded with FP16 precision; running the above code requires about 13GB of VRAM. If your GPU's VRAM is limited, you can try loading the model with quantization, as follows:
-
-```python
-model = AutoModel.from_pretrained("THUDM/chatglm3-6b",trust_remote_code=True).quantize(4).cuda()
-```
-
-Model quantization will bring some performance loss. Through testing, ChatGLM3-6B can still perform natural and smooth generation under 4-bit quantization.
-
-### CPU Deployment
-
-If you don't have GPU hardware, you can also run inference on the CPU, but the inference speed will be slower. The usage is as follows (requires about 32GB of memory):
-
-```python
-model = AutoModel.from_pretrained("THUDM/chatglm3-6b", trust_remote_code=True).float()
-```
-
-### Mac Deployment
-
-For Macs equipped with Apple Silicon or AMD GPUs, the MPS backend can be used to run ChatGLM3-6B on the GPU. Refer to Apple's [official instructions](https://developer.apple.com/metal/pytorch) to install PyTorch-Nightly (the correct version number should be 2.x.x.dev2023xxxx, not 2.x.x).
-
-Currently, only [loading the model locally](README_en.md#load-model-locally) is supported on MacOS. Change the model loading in the code to load locally and use the MPS backend:
-
-```python
-model = AutoModel.from_pretrained("your local path", trust_remote_code=True).to('mps')
-```
-
-Loading the half-precision ChatGLM3-6B model requires about 13GB of memory. Machines with smaller memory (such as a 16GB memory MacBook Pro) will use virtual memory on the hard disk when there is insufficient free memory, resulting in a significant slowdown in inference speed.
-
-### Multi-GPU Deployment
-
-If you have multiple GPUs, but each GPU's VRAM size is not enough to accommodate the complete model, then the model can be split across multiple GPUs. First, install accelerate: `pip install accelerate`, and then load the model through the following methods:
-
-```python
-from utils import load_model_on_gpus
-model = load_model_on_gpus("THUDM/chatglm3-6b", num_gpus=2)
-```
-
-This allows the model to be deployed on two GPUs for inference. You can change `num_gpus` to the number of GPUs you want to use. It is evenly split by default, but you can also pass the `device_map` parameter to specify it yourself.
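-
-As a purely illustrative sketch (not from the original instructions), and assuming `load_model_on_gpus` accepts a `device_map` dictionary that maps submodule names to GPU indices, a manual split might look like this; the module names below are placeholders:
-
-```python
-from utils import load_model_on_gpus
-
-# Hypothetical manual split: embedding and the first layers on GPU 0, the rest on GPU 1.
-device_map = {
-    "transformer.embedding": 0,
-    "transformer.encoder.layers.0": 0,
-    # ... remaining layers assigned to 0 or 1 ...
-    "transformer.encoder.layers.27": 1,
-    "transformer.output_layer": 1,
-}
-model = load_model_on_gpus("THUDM/chatglm3-6b", num_gpus=2, device_map=device_map)
-```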
\ No newline at end of file
diff --git a/spaces/PAIR/PAIR-Diffusion/annotator/OneFormer/oneformer/utils/pos_embed.py b/spaces/PAIR/PAIR-Diffusion/annotator/OneFormer/oneformer/utils/pos_embed.py
deleted file mode 100644
index aa11d60db65fa98c140e7d75bdf985ff7ece8f18..0000000000000000000000000000000000000000
--- a/spaces/PAIR/PAIR-Diffusion/annotator/OneFormer/oneformer/utils/pos_embed.py
+++ /dev/null
@@ -1,122 +0,0 @@
-# --------------------------------------------------------
-# Position embedding utils
-# --------------------------------------------------------
-
-from typing import Tuple
-
-import numpy as np
-import torch
-
-
-# --------------------------------------------------------
-# 2D sine-cosine position embedding
-# References:
-# Transformer: https://github.com/tensorflow/models/blob/master/official/nlp/transformer/model_utils.py
-# MoCo v3: https://github.com/facebookresearch/moco-v3
-# --------------------------------------------------------
-def get_2d_sincos_pos_embed(embed_dim, grid_size, cls_token=False):
- """
- grid_size: int of the grid height and width
- return:
- pos_embed: [grid_size*grid_size, embed_dim] or [1+grid_size*grid_size, embed_dim] (w/ or w/o cls_token)
- """
- grid_h = np.arange(grid_size, dtype=np.float32)
- grid_w = np.arange(grid_size, dtype=np.float32)
- grid = np.meshgrid(grid_w, grid_h) # here w goes first
- grid = np.stack(grid, axis=0)
-
- grid = grid.reshape([2, 1, grid_size, grid_size])
- pos_embed = get_2d_sincos_pos_embed_from_grid(embed_dim, grid)
- if cls_token:
- pos_embed = np.concatenate([np.zeros([1, embed_dim]), pos_embed], axis=0)
- return pos_embed
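- # Illustrative usage (assumed, not part of the original file): for a ViT-style
- # encoder with a 14x14 patch grid, 768-dim embeddings and a prepended class
- # token, get_2d_sincos_pos_embed(768, 14, cls_token=True) returns an array of
- # shape (1 + 14 * 14, 768).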
-
-
-def get_2d_sincos_pos_embed_from_grid(embed_dim, grid):
- assert embed_dim % 2 == 0
-
- # use half of dimensions to encode grid_h
- emb_h = get_1d_sincos_pos_embed_from_grid(embed_dim // 2, grid[0]) # (H*W, D/2)
- emb_w = get_1d_sincos_pos_embed_from_grid(embed_dim // 2, grid[1]) # (H*W, D/2)
-
- emb = np.concatenate([emb_h, emb_w], axis=1) # (H*W, D)
- return emb
-
-
-def get_1d_sincos_pos_embed_from_grid(embed_dim, pos):
- """
- embed_dim: output dimension for each position
- pos: a list of positions to be encoded: size (M,)
- out: (M, D)
- """
- assert embed_dim % 2 == 0
- omega = np.arange(embed_dim // 2, dtype=np.float64)  # np.float is removed in recent NumPy
- omega /= embed_dim / 2.0
- omega = 1.0 / 10000 ** omega # (D/2,)
-
- pos = pos.reshape(-1) # (M,)
- out = np.einsum("m,d->md", pos, omega) # (M, D/2), outer product
-
- emb_sin = np.sin(out) # (M, D/2)
- emb_cos = np.cos(out) # (M, D/2)
-
- emb = np.concatenate([emb_sin, emb_cos], axis=1) # (M, D)
- return emb
-
-
-# --------------------------------------------------------
-# Interpolate position embeddings for high-resolution
-# References:
-# DeiT: https://github.com/facebookresearch/deit
-# --------------------------------------------------------
-def interpolate_pos_embed(model, checkpoint_model, pos_embed_key):
- if pos_embed_key in checkpoint_model:
- pos_embed_checkpoint = checkpoint_model[pos_embed_key]
- embedding_size = pos_embed_checkpoint.shape[-1]
- num_patches = model.num_patches
- if pos_embed_key.startswith("decoder"):
- num_extra_tokens = model.decoder_pos_embed.shape[-2] - num_patches
- else:
- num_extra_tokens = model.pos_embed.shape[-2] - num_patches
- # height (== width) for the checkpoint position embedding
- orig_size = int((pos_embed_checkpoint.shape[-2] - num_extra_tokens) ** 0.5)
- # height (== width) for the new position embedding
- new_size = int(num_patches ** 0.5)
- # class_token and dist_token are kept unchanged
- if orig_size != new_size:
- print(
- "Position interpolate from %dx%d to %dx%d"
- % (orig_size, orig_size, new_size, new_size)
- )
- extra_tokens = pos_embed_checkpoint[:, :num_extra_tokens]
- # only the position tokens are interpolated
- pos_tokens = pos_embed_checkpoint[:, num_extra_tokens:]
- pos_tokens = pos_tokens.reshape(
- -1, orig_size, orig_size, embedding_size
- ).permute(0, 3, 1, 2)
- pos_tokens = torch.nn.functional.interpolate(
- pos_tokens,
- size=(new_size, new_size),
- mode="bicubic",
- align_corners=False,
- )
- pos_tokens = pos_tokens.permute(0, 2, 3, 1).flatten(1, 2)
- new_pos_embed = torch.cat((extra_tokens, pos_tokens), dim=1)
- checkpoint_model[pos_embed_key] = new_pos_embed
-
-
-def interpolate_pos_embed_online(
- pos_embed, orig_size: Tuple[int], new_size: Tuple[int], num_extra_tokens: int
-):
- extra_tokens = pos_embed[:, :num_extra_tokens]
- pos_tokens = pos_embed[:, num_extra_tokens:]
- embedding_size = pos_tokens.shape[-1]
- pos_tokens = pos_tokens.reshape(
- -1, orig_size[0], orig_size[1], embedding_size
- ).permute(0, 3, 1, 2)
- pos_tokens = torch.nn.functional.interpolate(
- pos_tokens, size=new_size, mode="bicubic", align_corners=False,
- )
- pos_tokens = pos_tokens.permute(0, 2, 3, 1).flatten(1, 2)
- new_pos_embed = torch.cat((extra_tokens, pos_tokens), dim=1)
- return new_pos_embed
diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/scripts/summarize-guile-TODO.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/scripts/summarize-guile-TODO.go
deleted file mode 100644
index 2e14976717b377cb78a3c7f15841f005fbe9f77e..0000000000000000000000000000000000000000
Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/scripts/summarize-guile-TODO.go and /dev/null differ
diff --git a/spaces/Pie31415/control-animation/annotator/uniformer/mmseg/models/backbones/unet.py b/spaces/Pie31415/control-animation/annotator/uniformer/mmseg/models/backbones/unet.py
deleted file mode 100644
index 82caa16a94c195c192a2a920fb7bc7e60f0f3ce3..0000000000000000000000000000000000000000
--- a/spaces/Pie31415/control-animation/annotator/uniformer/mmseg/models/backbones/unet.py
+++ /dev/null
@@ -1,429 +0,0 @@
-import torch.nn as nn
-import torch.utils.checkpoint as cp
-from annotator.uniformer.mmcv.cnn import (UPSAMPLE_LAYERS, ConvModule, build_activation_layer,
- build_norm_layer, constant_init, kaiming_init)
-from annotator.uniformer.mmcv.runner import load_checkpoint
-from annotator.uniformer.mmcv.utils.parrots_wrapper import _BatchNorm
-
-from annotator.uniformer.mmseg.utils import get_root_logger
-from ..builder import BACKBONES
-from ..utils import UpConvBlock
-
-
-class BasicConvBlock(nn.Module):
- """Basic convolutional block for UNet.
-
- This module consists of several plain convolutional layers.
-
- Args:
- in_channels (int): Number of input channels.
- out_channels (int): Number of output channels.
- num_convs (int): Number of convolutional layers. Default: 2.
- stride (int): Whether to use stride convolution to downsample
- the input feature map. If stride=2, it only uses stride convolution
- in the first convolutional layer to downsample the input feature
- map. Options are 1 or 2. Default: 1.
- dilation (int): Whether to use dilated convolution to expand the
- receptive field. Set dilation rate of each convolutional layer and
- the dilation rate of the first convolutional layer is always 1.
- Default: 1.
- with_cp (bool): Use checkpoint or not. Using checkpoint will save some
- memory while slowing down the training speed. Default: False.
- conv_cfg (dict | None): Config dict for convolution layer.
- Default: None.
- norm_cfg (dict | None): Config dict for normalization layer.
- Default: dict(type='BN').
- act_cfg (dict | None): Config dict for activation layer in ConvModule.
- Default: dict(type='ReLU').
- dcn (bool): Use deformable convolution in convolutional layer or not.
- Default: None.
- plugins (dict): plugins for convolutional layers. Default: None.
- """
-
- def __init__(self,
- in_channels,
- out_channels,
- num_convs=2,
- stride=1,
- dilation=1,
- with_cp=False,
- conv_cfg=None,
- norm_cfg=dict(type='BN'),
- act_cfg=dict(type='ReLU'),
- dcn=None,
- plugins=None):
- super(BasicConvBlock, self).__init__()
- assert dcn is None, 'Not implemented yet.'
- assert plugins is None, 'Not implemented yet.'
-
- self.with_cp = with_cp
- convs = []
- for i in range(num_convs):
- convs.append(
- ConvModule(
- in_channels=in_channels if i == 0 else out_channels,
- out_channels=out_channels,
- kernel_size=3,
- stride=stride if i == 0 else 1,
- dilation=1 if i == 0 else dilation,
- padding=1 if i == 0 else dilation,
- conv_cfg=conv_cfg,
- norm_cfg=norm_cfg,
- act_cfg=act_cfg))
-
- self.convs = nn.Sequential(*convs)
-
- def forward(self, x):
- """Forward function."""
-
- if self.with_cp and x.requires_grad:
- out = cp.checkpoint(self.convs, x)
- else:
- out = self.convs(x)
- return out
-
-
-@UPSAMPLE_LAYERS.register_module()
-class DeconvModule(nn.Module):
- """Deconvolution upsample module in decoder for UNet (2X upsample).
-
- This module uses deconvolution to upsample feature map in the decoder
- of UNet.
-
- Args:
- in_channels (int): Number of input channels.
- out_channels (int): Number of output channels.
- with_cp (bool): Use checkpoint or not. Using checkpoint will save some
- memory while slowing down the training speed. Default: False.
- norm_cfg (dict | None): Config dict for normalization layer.
- Default: dict(type='BN').
- act_cfg (dict | None): Config dict for activation layer in ConvModule.
- Default: dict(type='ReLU').
- kernel_size (int): Kernel size of the convolutional layer. Default: 4.
- """
-
- def __init__(self,
- in_channels,
- out_channels,
- with_cp=False,
- norm_cfg=dict(type='BN'),
- act_cfg=dict(type='ReLU'),
- *,
- kernel_size=4,
- scale_factor=2):
- super(DeconvModule, self).__init__()
-
- assert (kernel_size - scale_factor >= 0) and\
- (kernel_size - scale_factor) % 2 == 0,\
- f'kernel_size should be greater than or equal to scale_factor '\
- f'and (kernel_size - scale_factor) should be even numbers, '\
- f'while the kernel size is {kernel_size} and scale_factor is '\
- f'{scale_factor}.'
-
- stride = scale_factor
- padding = (kernel_size - scale_factor) // 2
- self.with_cp = with_cp
- deconv = nn.ConvTranspose2d(
- in_channels,
- out_channels,
- kernel_size=kernel_size,
- stride=stride,
- padding=padding)
-
- norm_name, norm = build_norm_layer(norm_cfg, out_channels)
- activate = build_activation_layer(act_cfg)
- self.deconv_upsampling = nn.Sequential(deconv, norm, activate)
-
- def forward(self, x):
- """Forward function."""
-
- if self.with_cp and x.requires_grad:
- out = cp.checkpoint(self.deconv_upsampling, x)
- else:
- out = self.deconv_upsampling(x)
- return out
-
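The assertion in `__init__` is what makes the transposed convolution an exact `scale_factor`-times upsample: with stride = scale_factor and padding = (kernel_size - scale_factor) // 2, the output size is (H - 1) * stride - 2 * padding + kernel_size = scale_factor * H. A standalone sketch of that arithmetic (illustrative only):

```python
import torch
import torch.nn as nn

kernel_size, scale_factor = 4, 2
stride = scale_factor
padding = (kernel_size - scale_factor) // 2  # = 1

# (H - 1) * 2 - 2 * 1 + 4 = 2 * H, i.e. an exact 2x upsample
deconv = nn.ConvTranspose2d(16, 8, kernel_size=kernel_size,
                            stride=stride, padding=padding)
x = torch.randn(1, 16, 20, 20)
print(deconv(x).shape)  # torch.Size([1, 8, 40, 40])
```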
-
-@UPSAMPLE_LAYERS.register_module()
-class InterpConv(nn.Module):
- """Interpolation upsample module in decoder for UNet.
-
- This module uses interpolation to upsample feature map in the decoder
- of UNet. It consists of one interpolation upsample layer and one
- convolutional layer. It can be one interpolation upsample layer followed
- by one convolutional layer (conv_first=False) or one convolutional layer
- followed by one interpolation upsample layer (conv_first=True).
-
- Args:
- in_channels (int): Number of input channels.
- out_channels (int): Number of output channels.
- with_cp (bool): Use checkpoint or not. Using checkpoint will save some
- memory while slowing down the training speed. Default: False.
- norm_cfg (dict | None): Config dict for normalization layer.
- Default: dict(type='BN').
- act_cfg (dict | None): Config dict for activation layer in ConvModule.
- Default: dict(type='ReLU').
- conv_cfg (dict | None): Config dict for convolution layer.
- Default: None.
- conv_first (bool): Whether the convolutional layer comes before the
- interpolation upsample layer. Default: False, i.e. the interpolation
- upsample layer is followed by the convolutional layer.
- kernel_size (int): Kernel size of the convolutional layer. Default: 1.
- stride (int): Stride of the convolutional layer. Default: 1.
- padding (int): Padding of the convolutional layer. Default: 0.
- upsample_cfg (dict): Interpolation config of the upsample layer.
- Default: dict(
- scale_factor=2, mode='bilinear', align_corners=False).
- """
-
- def __init__(self,
- in_channels,
- out_channels,
- with_cp=False,
- norm_cfg=dict(type='BN'),
- act_cfg=dict(type='ReLU'),
- *,
- conv_cfg=None,
- conv_first=False,
- kernel_size=1,
- stride=1,
- padding=0,
- upsample_cfg=dict(
- scale_factor=2, mode='bilinear', align_corners=False)):
- super(InterpConv, self).__init__()
-
- self.with_cp = with_cp
- conv = ConvModule(
- in_channels,
- out_channels,
- kernel_size=kernel_size,
- stride=stride,
- padding=padding,
- conv_cfg=conv_cfg,
- norm_cfg=norm_cfg,
- act_cfg=act_cfg)
- upsample = nn.Upsample(**upsample_cfg)
- if conv_first:
- self.interp_upsample = nn.Sequential(conv, upsample)
- else:
- self.interp_upsample = nn.Sequential(upsample, conv)
-
- def forward(self, x):
- """Forward function."""
-
- if self.with_cp and x.requires_grad:
- out = cp.checkpoint(self.interp_upsample, x)
- else:
- out = self.interp_upsample(x)
- return out
-
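Both orderings selected by `conv_first` produce the same output shape; they only differ in whether the 1x1 convolution sees the low- or high-resolution map. A small sketch (illustrative; the `mmseg.models.backbones.unet` import path is an assumption):

```python
import torch
from mmseg.models.backbones.unet import InterpConv  # assumed import path

x = torch.randn(1, 128, 16, 16)

up_then_conv = InterpConv(in_channels=128, out_channels=64, conv_first=False)
conv_then_up = InterpConv(in_channels=128, out_channels=64, conv_first=True)

print(up_then_conv(x).shape)  # torch.Size([1, 64, 32, 32])
print(conv_then_up(x).shape)  # torch.Size([1, 64, 32, 32])
```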
-
-@BACKBONES.register_module()
-class UNet(nn.Module):
- """UNet backbone.
- U-Net: Convolutional Networks for Biomedical Image Segmentation.
- https://arxiv.org/pdf/1505.04597.pdf
-
- Args:
- in_channels (int): Number of input image channels. Default: 3.
- base_channels (int): Number of base channels of each stage.
- The output channels of the first stage. Default: 64.
- num_stages (int): Number of stages in encoder, normally 5. Default: 5.
- strides (Sequence[int]): Strides of each stage in the encoder; each
- element is 1 or 2, and len(strides) equals num_stages. Normally the
- stride of the first encoder stage is 1. If strides[i]=2, the
- corresponding encoder stage uses stride convolution to downsample.
- Default: (1, 1, 1, 1, 1).
- enc_num_convs (Sequence[int]): Number of convolutional layers in the
- convolution block of the corresponding encoder stage.
- Default: (2, 2, 2, 2, 2).
- dec_num_convs (Sequence[int]): Number of convolutional layers in the
- convolution block of the corresponding decoder stage.
- Default: (2, 2, 2, 2).
- downsamples (Sequence[bool]): Whether to use MaxPool to downsample the
- feature map after the first encoder stage (stages [1, num_stages)).
- If the corresponding encoder stage uses stride convolution
- (strides[i]=2), MaxPool is never used for downsampling, even if
- downsamples[i-1] is True.
- Default: (True, True, True, True).
- enc_dilations (Sequence[int]): Dilation rate of each stage in encoder.
- Default: (1, 1, 1, 1, 1).
- dec_dilations (Sequence[int]): Dilation rate of each stage in decoder.
- Default: (1, 1, 1, 1).
- with_cp (bool): Use checkpoint or not. Using checkpoint will save some
- memory while slowing down the training speed. Default: False.
- conv_cfg (dict | None): Config dict for convolution layer.
- Default: None.
- norm_cfg (dict | None): Config dict for normalization layer.
- Default: dict(type='BN').
- act_cfg (dict | None): Config dict for activation layer in ConvModule.
- Default: dict(type='ReLU').
- upsample_cfg (dict): The upsample config of the upsample module in
- decoder. Default: dict(type='InterpConv').
- norm_eval (bool): Whether to set norm layers to eval mode, namely,
- freeze running stats (mean and var). Note: Effect on Batch Norm
- and its variants only. Default: False.
- dcn (bool): Use deformable convolution in convolutional layer or not.
- Default: None.
- plugins (dict): plugins for convolutional layers. Default: None.
-
- Notice:
- The input image size should be divisible by the whole downsample rate
- of the encoder. More detail of the whole downsample rate can be found
- in UNet._check_input_divisible.
-
- """
-
- def __init__(self,
- in_channels=3,
- base_channels=64,
- num_stages=5,
- strides=(1, 1, 1, 1, 1),
- enc_num_convs=(2, 2, 2, 2, 2),
- dec_num_convs=(2, 2, 2, 2),
- downsamples=(True, True, True, True),
- enc_dilations=(1, 1, 1, 1, 1),
- dec_dilations=(1, 1, 1, 1),
- with_cp=False,
- conv_cfg=None,
- norm_cfg=dict(type='BN'),
- act_cfg=dict(type='ReLU'),
- upsample_cfg=dict(type='InterpConv'),
- norm_eval=False,
- dcn=None,
- plugins=None):
- super(UNet, self).__init__()
- assert dcn is None, 'Not implemented yet.'
- assert plugins is None, 'Not implemented yet.'
- assert len(strides) == num_stages, \
- 'The length of strides should be equal to num_stages, '\
- f'while the strides is {strides}, the length of '\
- f'strides is {len(strides)}, and the num_stages is '\
- f'{num_stages}.'
- assert len(enc_num_convs) == num_stages, \
- 'The length of enc_num_convs should be equal to num_stages, '\
- f'while the enc_num_convs is {enc_num_convs}, the length of '\
- f'enc_num_convs is {len(enc_num_convs)}, and the num_stages is '\
- f'{num_stages}.'
- assert len(dec_num_convs) == (num_stages-1), \
- 'The length of dec_num_convs should be equal to (num_stages-1), '\
- f'while the dec_num_convs is {dec_num_convs}, the length of '\
- f'dec_num_convs is {len(dec_num_convs)}, and the num_stages is '\
- f'{num_stages}.'
- assert len(downsamples) == (num_stages-1), \
- 'The length of downsamples should be equal to (num_stages-1), '\
- f'while the downsamples is {downsamples}, the length of '\
- f'downsamples is {len(downsamples)}, and the num_stages is '\
- f'{num_stages}.'
- assert len(enc_dilations) == num_stages, \
- 'The length of enc_dilations should be equal to num_stages, '\
- f'while the enc_dilations is {enc_dilations}, the length of '\
- f'enc_dilations is {len(enc_dilations)}, and the num_stages is '\
- f'{num_stages}.'
- assert len(dec_dilations) == (num_stages-1), \
- 'The length of dec_dilations should be equal to (num_stages-1), '\
- f'while the dec_dilations is {dec_dilations}, the length of '\
- f'dec_dilations is {len(dec_dilations)}, and the num_stages is '\
- f'{num_stages}.'
- self.num_stages = num_stages
- self.strides = strides
- self.downsamples = downsamples
- self.norm_eval = norm_eval
- self.base_channels = base_channels
-
- self.encoder = nn.ModuleList()
- self.decoder = nn.ModuleList()
-
- for i in range(num_stages):
- enc_conv_block = []
- if i != 0:
- if strides[i] == 1 and downsamples[i - 1]:
- enc_conv_block.append(nn.MaxPool2d(kernel_size=2))
- upsample = (strides[i] != 1 or downsamples[i - 1])
- self.decoder.append(
- UpConvBlock(
- conv_block=BasicConvBlock,
- in_channels=base_channels * 2**i,
- skip_channels=base_channels * 2**(i - 1),
- out_channels=base_channels * 2**(i - 1),
- num_convs=dec_num_convs[i - 1],
- stride=1,
- dilation=dec_dilations[i - 1],
- with_cp=with_cp,
- conv_cfg=conv_cfg,
- norm_cfg=norm_cfg,
- act_cfg=act_cfg,
- upsample_cfg=upsample_cfg if upsample else None,
- dcn=None,
- plugins=None))
-
- enc_conv_block.append(
- BasicConvBlock(
- in_channels=in_channels,
- out_channels=base_channels * 2**i,
- num_convs=enc_num_convs[i],
- stride=strides[i],
- dilation=enc_dilations[i],
- with_cp=with_cp,
- conv_cfg=conv_cfg,
- norm_cfg=norm_cfg,
- act_cfg=act_cfg,
- dcn=None,
- plugins=None))
- self.encoder.append((nn.Sequential(*enc_conv_block)))
- in_channels = base_channels * 2**i
-
- def forward(self, x):
- self._check_input_divisible(x)
- enc_outs = []
- for enc in self.encoder:
- x = enc(x)
- enc_outs.append(x)
- dec_outs = [x]
- for i in reversed(range(len(self.decoder))):
- x = self.decoder[i](enc_outs[i], x)
- dec_outs.append(x)
-
- return dec_outs
-
- def train(self, mode=True):
- """Convert the model into training mode while keep normalization layer
- freezed."""
- super(UNet, self).train(mode)
- if mode and self.norm_eval:
- for m in self.modules():
- # trick: eval() has an effect on BatchNorm only
- if isinstance(m, _BatchNorm):
- m.eval()
-
- def _check_input_divisible(self, x):
- h, w = x.shape[-2:]
- whole_downsample_rate = 1
- for i in range(1, self.num_stages):
- if self.strides[i] == 2 or self.downsamples[i - 1]:
- whole_downsample_rate *= 2
- assert (h % whole_downsample_rate == 0) \
- and (w % whole_downsample_rate == 0),\
- f'The input image size {(h, w)} should be divisible by the whole '\
- f'downsample rate {whole_downsample_rate}, when num_stages is '\
- f'{self.num_stages}, strides is {self.strides}, and downsamples '\
- f'is {self.downsamples}.'
-
- def init_weights(self, pretrained=None):
- """Initialize the weights in backbone.
-
- Args:
- pretrained (str, optional): Path to pre-trained weights.
- Defaults to None.
- """
- if isinstance(pretrained, str):
- logger = get_root_logger()
- load_checkpoint(self, pretrained, strict=False, logger=logger)
- elif pretrained is None:
- for m in self.modules():
- if isinstance(m, nn.Conv2d):
- kaiming_init(m)
- elif isinstance(m, (_BatchNorm, nn.GroupNorm)):
- constant_init(m, 1)
- else:
- raise TypeError('pretrained must be a str or None')
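With the default arguments, the four downsampling steps give a whole downsample rate of 2**4 = 16, so `_check_input_divisible` requires the input height and width to be multiples of 16, and `forward` returns the bottleneck plus one feature map per decoder stage. A usage sketch (illustrative; the `mmseg.models.backbones.unet` import path is an assumption):

```python
import torch
from mmseg.models.backbones.unet import UNet  # assumed import path

model = UNet()  # 5 stages, base_channels=64
x = torch.randn(1, 3, 64, 64)  # 64 is divisible by the downsample rate 16
outs = model(x)
for o in outs:
    print(o.shape)
# dec_outs: bottleneck first, finest resolution last
# (1, 1024, 4, 4), (1, 512, 8, 8), (1, 256, 16, 16), (1, 128, 32, 32), (1, 64, 64, 64)
```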
diff --git a/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/layers/dyhead.py b/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/layers/dyhead.py
deleted file mode 100644
index baf5b37212e590ab453576278cc6c124dce91e90..0000000000000000000000000000000000000000
--- a/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/layers/dyhead.py
+++ /dev/null
@@ -1,151 +0,0 @@
-import torch
-import torch.nn.functional as F
-from torch import nn
-
-from .deform_conv import ModulatedDeformConv
-from .dyrelu import h_sigmoid, DYReLU
-
-
-class Conv3x3Norm(torch.nn.Module):
- def __init__(self,
- in_channels,
- out_channels,
- stride,
- deformable=False,
- use_gn=False):
- super(Conv3x3Norm, self).__init__()
-
- if deformable:
- self.conv = ModulatedDeformConv(in_channels, out_channels, kernel_size=3, stride=stride, padding=1)
- else:
- self.conv = nn.Conv2d(in_channels, out_channels, kernel_size=3, stride=stride, padding=1)
-
- if use_gn:
- self.bn = nn.GroupNorm(num_groups=16, num_channels=out_channels)
- else:
- self.bn = None
-
- def forward(self, input, **kwargs):
- x = self.conv(input, **kwargs)
- if self.bn:
- x = self.bn(x)
- return x
-
-
-class DyConv(nn.Module):
- def __init__(self,
- in_channels=256,
- out_channels=256,
- conv_func=Conv3x3Norm,
- use_dyfuse=True,
- use_dyrelu=False,
- use_deform=False
- ):
- super(DyConv, self).__init__()
-
- self.DyConv = nn.ModuleList()
- self.DyConv.append(conv_func(in_channels, out_channels, 1))
- self.DyConv.append(conv_func(in_channels, out_channels, 1))
- self.DyConv.append(conv_func(in_channels, out_channels, 2))
-
- if use_dyfuse:
- self.AttnConv = nn.Sequential(
- nn.AdaptiveAvgPool2d(1),
- nn.Conv2d(in_channels, 1, kernel_size=1),
- nn.ReLU(inplace=True))
- self.h_sigmoid = h_sigmoid()
- else:
- self.AttnConv = None
-
- if use_dyrelu:
- self.relu = DYReLU(in_channels, out_channels)
- else:
- self.relu = nn.ReLU()
-
- if use_deform:
- self.offset = nn.Conv2d(in_channels, 27, kernel_size=3, stride=1, padding=1)
- else:
- self.offset = None
-
- self.init_weights()
-
- def init_weights(self):
- for m in self.DyConv.modules():
- if isinstance(m, nn.Conv2d):
- nn.init.normal_(m.weight.data, 0, 0.01)
- if m.bias is not None:
- m.bias.data.zero_()
- if self.AttnConv is not None:
- for m in self.AttnConv.modules():
- if isinstance(m, nn.Conv2d):
- nn.init.normal_(m.weight.data, 0, 0.01)
- if m.bias is not None:
- m.bias.data.zero_()
-
- def forward(self, x):
- next_x = []
- for level, feature in enumerate(x):
-
- conv_args = dict()
- if self.offset is not None:
- # predict offsets (first 18 channels) and a modulation mask
- # (last 9 channels) for the 3x3 modulated deformable convolution
- offset_mask = self.offset(feature)
- offset = offset_mask[:, :18, :, :]
- mask = offset_mask[:, 18:, :, :].sigmoid()
- conv_args = dict(offset=offset, mask=mask)
-
- # feature from the current pyramid level
- temp_fea = [self.DyConv[1](feature, **conv_args)]
-
- # fuse the neighbouring pyramid levels: the previous level is brought to
- # the current resolution by the stride-2 conv, the next level is upsampled
- if level > 0:
- temp_fea.append(self.DyConv[2](x[level - 1], **conv_args))
- if level < len(x) - 1:
- temp_fea.append(F.upsample_bilinear(self.DyConv[0](x[level + 1], **conv_args),
- size=[feature.size(2), feature.size(3)]))
- mean_fea = torch.mean(torch.stack(temp_fea), dim=0, keepdim=False)
-
- if self.AttnConv is not None:
- attn_fea = []
- res_fea = []
- for fea in temp_fea:
- res_fea.append(fea)
- attn_fea.append(self.AttnConv(fea))
-
- res_fea = torch.stack(res_fea)
- spa_pyr_attn = self.h_sigmoid(torch.stack(attn_fea))
-
- mean_fea = torch.mean(res_fea * spa_pyr_attn, dim=0, keepdim=False)
-
- next_x.append(self.relu(mean_fea))
-
- return next_x
-
-
-class DyHead(nn.Module):
- def __init__(self, cfg, in_channels):
- super(DyHead, self).__init__()
- self.cfg = cfg
- channels = cfg.MODEL.DYHEAD.CHANNELS
- use_gn = cfg.MODEL.DYHEAD.USE_GN
- use_dyrelu = cfg.MODEL.DYHEAD.USE_DYRELU
- use_dyfuse = cfg.MODEL.DYHEAD.USE_DYFUSE
- use_deform = cfg.MODEL.DYHEAD.USE_DFCONV
-
- conv_func = lambda i,o,s : Conv3x3Norm(i,o,s,deformable=use_deform,use_gn=use_gn)
-
- dyhead_tower = []
- for i in range(cfg.MODEL.DYHEAD.NUM_CONVS):
- dyhead_tower.append(
- DyConv(
- in_channels if i == 0 else channels,
- channels,
- conv_func=conv_func,
- use_dyrelu=use_dyrelu,
- use_dyfuse=use_dyfuse,
- use_deform=use_deform
- )
- )
-
- self.add_module('dyhead_tower', nn.Sequential(*dyhead_tower))
-
- def forward(self, x):
- dyhead_tower = self.dyhead_tower(x)
- return dyhead_tower
\ No newline at end of file
diff --git a/spaces/Pluviophile/vits-uma-genshin-honkai/attentions.py b/spaces/Pluviophile/vits-uma-genshin-honkai/attentions.py
deleted file mode 100644
index 86bc73b5fe98cc7b443e9078553920346c996707..0000000000000000000000000000000000000000
--- a/spaces/Pluviophile/vits-uma-genshin-honkai/attentions.py
+++ /dev/null
@@ -1,300 +0,0 @@
-import math
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-import commons
-from modules import LayerNorm
-
-
-class Encoder(nn.Module):
- def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., window_size=4, **kwargs):
- super().__init__()
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.window_size = window_size
-
- self.drop = nn.Dropout(p_dropout)
- self.attn_layers = nn.ModuleList()
- self.norm_layers_1 = nn.ModuleList()
- self.ffn_layers = nn.ModuleList()
- self.norm_layers_2 = nn.ModuleList()
- for i in range(self.n_layers):
- self.attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, window_size=window_size))
- self.norm_layers_1.append(LayerNorm(hidden_channels))
- self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout))
- self.norm_layers_2.append(LayerNorm(hidden_channels))
-
- def forward(self, x, x_mask):
- attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1)
- x = x * x_mask
- for i in range(self.n_layers):
- y = self.attn_layers[i](x, x, attn_mask)
- y = self.drop(y)
- x = self.norm_layers_1[i](x + y)
-
- y = self.ffn_layers[i](x, x_mask)
- y = self.drop(y)
- x = self.norm_layers_2[i](x + y)
- x = x * x_mask
- return x
-
-
-class Decoder(nn.Module):
- def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., proximal_bias=False, proximal_init=True, **kwargs):
- super().__init__()
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.proximal_bias = proximal_bias
- self.proximal_init = proximal_init
-
- self.drop = nn.Dropout(p_dropout)
- self.self_attn_layers = nn.ModuleList()
- self.norm_layers_0 = nn.ModuleList()
- self.encdec_attn_layers = nn.ModuleList()
- self.norm_layers_1 = nn.ModuleList()
- self.ffn_layers = nn.ModuleList()
- self.norm_layers_2 = nn.ModuleList()
- for i in range(self.n_layers):
- self.self_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, proximal_bias=proximal_bias, proximal_init=proximal_init))
- self.norm_layers_0.append(LayerNorm(hidden_channels))
- self.encdec_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout))
- self.norm_layers_1.append(LayerNorm(hidden_channels))
- self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout, causal=True))
- self.norm_layers_2.append(LayerNorm(hidden_channels))
-
- def forward(self, x, x_mask, h, h_mask):
- """
- x: decoder input
- h: encoder output
- """
- self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to(device=x.device, dtype=x.dtype)
- encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1)
- x = x * x_mask
- for i in range(self.n_layers):
- y = self.self_attn_layers[i](x, x, self_attn_mask)
- y = self.drop(y)
- x = self.norm_layers_0[i](x + y)
-
- y = self.encdec_attn_layers[i](x, h, encdec_attn_mask)
- y = self.drop(y)
- x = self.norm_layers_1[i](x + y)
-
- y = self.ffn_layers[i](x, x_mask)
- y = self.drop(y)
- x = self.norm_layers_2[i](x + y)
- x = x * x_mask
- return x
-
-
-class MultiHeadAttention(nn.Module):
- def __init__(self, channels, out_channels, n_heads, p_dropout=0., window_size=None, heads_share=True, block_length=None, proximal_bias=False, proximal_init=False):
- super().__init__()
- assert channels % n_heads == 0
-
- self.channels = channels
- self.out_channels = out_channels
- self.n_heads = n_heads
- self.p_dropout = p_dropout
- self.window_size = window_size
- self.heads_share = heads_share
- self.block_length = block_length
- self.proximal_bias = proximal_bias
- self.proximal_init = proximal_init
- self.attn = None
-
- self.k_channels = channels // n_heads
- self.conv_q = nn.Conv1d(channels, channels, 1)
- self.conv_k = nn.Conv1d(channels, channels, 1)
- self.conv_v = nn.Conv1d(channels, channels, 1)
- self.conv_o = nn.Conv1d(channels, out_channels, 1)
- self.drop = nn.Dropout(p_dropout)
-
- if window_size is not None:
- n_heads_rel = 1 if heads_share else n_heads
- rel_stddev = self.k_channels**-0.5
- self.emb_rel_k = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev)
- self.emb_rel_v = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev)
-
- nn.init.xavier_uniform_(self.conv_q.weight)
- nn.init.xavier_uniform_(self.conv_k.weight)
- nn.init.xavier_uniform_(self.conv_v.weight)
- if proximal_init:
- with torch.no_grad():
- self.conv_k.weight.copy_(self.conv_q.weight)
- self.conv_k.bias.copy_(self.conv_q.bias)
-
- def forward(self, x, c, attn_mask=None):
- q = self.conv_q(x)
- k = self.conv_k(c)
- v = self.conv_v(c)
-
- x, self.attn = self.attention(q, k, v, mask=attn_mask)
-
- x = self.conv_o(x)
- return x
-
- def attention(self, query, key, value, mask=None):
- # reshape [b, d, t] -> [b, n_h, t, d_k]
- b, d, t_s, t_t = (*key.size(), query.size(2))
- query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3)
- key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3)
- value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3)
-
- scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1))
- if self.window_size is not None:
- assert t_s == t_t, "Relative attention is only available for self-attention."
- key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s)
- rel_logits = self._matmul_with_relative_keys(query /math.sqrt(self.k_channels), key_relative_embeddings)
- scores_local = self._relative_position_to_absolute_position(rel_logits)
- scores = scores + scores_local
- if self.proximal_bias:
- assert t_s == t_t, "Proximal bias is only available for self-attention."
- scores = scores + self._attention_bias_proximal(t_s).to(device=scores.device, dtype=scores.dtype)
- if mask is not None:
- scores = scores.masked_fill(mask == 0, -1e4)
- if self.block_length is not None:
- assert t_s == t_t, "Local attention is only available for self-attention."
- block_mask = torch.ones_like(scores).triu(-self.block_length).tril(self.block_length)
- scores = scores.masked_fill(block_mask == 0, -1e4)
- p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s]
- p_attn = self.drop(p_attn)
- output = torch.matmul(p_attn, value)
- if self.window_size is not None:
- relative_weights = self._absolute_position_to_relative_position(p_attn)
- value_relative_embeddings = self._get_relative_embeddings(self.emb_rel_v, t_s)
- output = output + self._matmul_with_relative_values(relative_weights, value_relative_embeddings)
- output = output.transpose(2, 3).contiguous().view(b, d, t_t) # [b, n_h, t_t, d_k] -> [b, d, t_t]
- return output, p_attn
-
- def _matmul_with_relative_values(self, x, y):
- """
- x: [b, h, l, m]
- y: [h or 1, m, d]
- ret: [b, h, l, d]
- """
- ret = torch.matmul(x, y.unsqueeze(0))
- return ret
-
- def _matmul_with_relative_keys(self, x, y):
- """
- x: [b, h, l, d]
- y: [h or 1, m, d]
- ret: [b, h, l, m]
- """
- ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1))
- return ret
-
- def _get_relative_embeddings(self, relative_embeddings, length):
- max_relative_position = 2 * self.window_size + 1
- # Pad first before slice to avoid using cond ops.
- pad_length = max(length - (self.window_size + 1), 0)
- slice_start_position = max((self.window_size + 1) - length, 0)
- slice_end_position = slice_start_position + 2 * length - 1
- if pad_length > 0:
- padded_relative_embeddings = F.pad(
- relative_embeddings,
- commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]]))
- else:
- padded_relative_embeddings = relative_embeddings
- used_relative_embeddings = padded_relative_embeddings[:,slice_start_position:slice_end_position]
- return used_relative_embeddings
-
- def _relative_position_to_absolute_position(self, x):
- """
- x: [b, h, l, 2*l-1]
- ret: [b, h, l, l]
- """
- batch, heads, length, _ = x.size()
- # Concat columns of pad to shift from relative to absolute indexing.
- x = F.pad(x, commons.convert_pad_shape([[0,0],[0,0],[0,0],[0,1]]))
-
- # Concat extra elements so to add up to shape (len+1, 2*len-1).
- x_flat = x.view([batch, heads, length * 2 * length])
- x_flat = F.pad(x_flat, commons.convert_pad_shape([[0,0],[0,0],[0,length-1]]))
-
- # Reshape and slice out the padded elements.
- x_final = x_flat.view([batch, heads, length+1, 2*length-1])[:, :, :length, length-1:]
- return x_final
-
- def _absolute_position_to_relative_position(self, x):
- """
- x: [b, h, l, l]
- ret: [b, h, l, 2*l-1]
- """
- batch, heads, length, _ = x.size()
- # pad along column
- x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length-1]]))
- x_flat = x.view([batch, heads, length**2 + length*(length -1)])
- # add 0's in the beginning that will skew the elements after reshape
- x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]]))
- x_final = x_flat.view([batch, heads, length, 2*length])[:,:,:,1:]
- return x_final
-
- def _attention_bias_proximal(self, length):
- """Bias for self-attention to encourage attention to close positions.
- Args:
- length: an integer scalar.
- Returns:
- a Tensor with shape [1, 1, length, length]
- """
- r = torch.arange(length, dtype=torch.float32)
- diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1)
- return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0)
-
-
-class FFN(nn.Module):
- def __init__(self, in_channels, out_channels, filter_channels, kernel_size, p_dropout=0., activation=None, causal=False):
- super().__init__()
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.activation = activation
- self.causal = causal
-
- if causal:
- self.padding = self._causal_padding
- else:
- self.padding = self._same_padding
-
- self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size)
- self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size)
- self.drop = nn.Dropout(p_dropout)
-
- def forward(self, x, x_mask):
- x = self.conv_1(self.padding(x * x_mask))
- if self.activation == "gelu":
- x = x * torch.sigmoid(1.702 * x)
- else:
- x = torch.relu(x)
- x = self.drop(x)
- x = self.conv_2(self.padding(x * x_mask))
- return x * x_mask
-
- def _causal_padding(self, x):
- if self.kernel_size == 1:
- return x
- pad_l = self.kernel_size - 1
- pad_r = 0
- padding = [[0, 0], [0, 0], [pad_l, pad_r]]
- x = F.pad(x, commons.convert_pad_shape(padding))
- return x
-
- def _same_padding(self, x):
- if self.kernel_size == 1:
- return x
- pad_l = (self.kernel_size - 1) // 2
- pad_r = self.kernel_size // 2
- padding = [[0, 0], [0, 0], [pad_l, pad_r]]
- x = F.pad(x, commons.convert_pad_shape(padding))
- return x
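The proximal bias used in `attention()` is simply bias[i, j] = -log(1 + |i - j|), so nearby positions are penalised less than distant ones. A standalone check of the formula (not part of the original module):

```python
import torch

length = 4
r = torch.arange(length, dtype=torch.float32)
diff = r.unsqueeze(0) - r.unsqueeze(1)   # pairwise position differences
bias = -torch.log1p(torch.abs(diff))     # shape [length, length]
print(bias[0])  # approximately [0.0, -0.6931, -1.0986, -1.3863]
```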
diff --git a/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/scripts/resample_dataset.py b/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/scripts/resample_dataset.py
deleted file mode 100644
index af5288712b8d2cde2d9814c747275e69f6e970c8..0000000000000000000000000000000000000000
--- a/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/scripts/resample_dataset.py
+++ /dev/null
@@ -1,207 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-"""Resampling script.
-"""
-import argparse
-from pathlib import Path
-import shutil
-import typing as tp
-
-import submitit
-import tqdm
-
-from audiocraft.data.audio import audio_read, audio_write
-from audiocraft.data.audio_dataset import load_audio_meta, find_audio_files
-from audiocraft.data.audio_utils import convert_audio
-from audiocraft.environment import AudioCraftEnvironment
-
-
-def read_txt_files(path: tp.Union[str, Path]):
- with open(path) as f:
- lines = [line.rstrip() for line in f]
- print(f"Read {len(lines)} in .txt")
- lines = [line for line in lines if Path(line).suffix not in ['.json', '.txt', '.csv']]
- print(f"Filtered and keep {len(lines)} from .txt")
- return lines
-
-
-def read_egs_files(path: tp.Union[str, Path]):
- path = Path(path)
- if path.is_dir():
- if (path / 'data.jsonl').exists():
- path = path / 'data.jsonl'
- elif (path / 'data.jsonl.gz').exists():
- path = path / 'data.jsonl.gz'
- else:
- raise ValueError("Don't know where to read metadata from in the dir. "
- "Expecting either a data.jsonl or data.jsonl.gz file but none found.")
- meta = load_audio_meta(path)
- return [m.path for m in meta]
-
-
-def process_dataset(args, n_shards: int, node_index: int, task_index: tp.Optional[int] = None):
- if task_index is None:
- env = submitit.JobEnvironment()
- task_index = env.global_rank
- shard_index = node_index * args.tasks_per_node + task_index
-
- if args.files_path is None:
- lines = [m.path for m in find_audio_files(args.root_path, resolve=False, progress=True, workers=8)]
- else:
- files_path = Path(args.files_path)
- if files_path.suffix == '.txt':
- print(f"Reading file list from .txt file: {args.files_path}")
- lines = read_txt_files(args.files_path)
- else:
- print(f"Reading file list from egs: {args.files_path}")
- lines = read_egs_files(args.files_path)
-
- total_files = len(lines)
- print(
- f"Total of {total_files} processed with {n_shards} shards. " +
- f"Current idx = {shard_index} -> {total_files // n_shards} files to process"
- )
- for idx, line in tqdm.tqdm(enumerate(lines)):
-
- # skip if not part of this shard
- if idx % n_shards != shard_index:
- continue
-
- path = str(AudioCraftEnvironment.apply_dataset_mappers(line))
- root_path = str(args.root_path)
- if not root_path.endswith('/'):
- root_path += '/'
- assert path.startswith(str(root_path)), \
- f"Mismatch between path and provided root: {path} VS {root_path}"
-
- try:
- metadata_path = Path(path).with_suffix('.json')
- out_path = args.out_path / path[len(root_path):]
- out_metadata_path = out_path.with_suffix('.json')
- out_done_token = out_path.with_suffix('.done')
-
- # don't reprocess existing files
- if out_done_token.exists():
- continue
-
- print(idx, out_path, path)
- mix, sr = audio_read(path)
- mix_channels = args.channels if args.channels is not None and args.channels > 0 else mix.size(0)
- # enforce simple stereo
- out_channels = mix_channels
- if out_channels > 2:
- print(f"Mix has more than two channels: {out_channels}, enforcing 2 channels")
- out_channels = 2
- out_sr = args.sample_rate if args.sample_rate is not None else sr
- out_wav = convert_audio(mix, sr, out_sr, out_channels)
- audio_write(out_path.with_suffix(''), out_wav, sample_rate=out_sr,
- format=args.format, normalize=False, strategy='clip')
- if metadata_path.exists():
- shutil.copy(metadata_path, out_metadata_path)
- else:
- print(f"No metadata found at {str(metadata_path)}")
- out_done_token.touch()
- except Exception as e:
- print(f"Error processing file line: {line}, {e}")
-
-
-if __name__ == '__main__':
- parser = argparse.ArgumentParser(description="Resample dataset with SLURM.")
- parser.add_argument(
- "--log_root",
- type=Path,
- default=Path.home() / 'tmp' / 'resample_logs',
- )
- parser.add_argument(
- "--files_path",
- type=Path,
- help="List of files to process, either .txt (one file per line) or a jsonl[.gz].",
- )
- parser.add_argument(
- "--root_path",
- type=Path,
- required=True,
- help="When rewriting paths, this will be the prefix to remove.",
- )
- parser.add_argument(
- "--out_path",
- type=Path,
- required=True,
- help="When rewriting paths, `root_path` will be replaced by this.",
- )
- parser.add_argument("--xp_name", type=str, default="shutterstock")
- parser.add_argument(
- "--nodes",
- type=int,
- default=4,
- )
- parser.add_argument(
- "--tasks_per_node",
- type=int,
- default=20,
- )
- parser.add_argument(
- "--cpus_per_task",
- type=int,
- default=4,
- )
- parser.add_argument(
- "--memory_gb",
- type=int,
- help="Memory in GB."
- )
- parser.add_argument(
- "--format",
- type=str,
- default="wav",
- )
- parser.add_argument(
- "--sample_rate",
- type=int,
- default=32000,
- )
- parser.add_argument(
- "--channels",
- type=int,
- )
- parser.add_argument(
- "--partition",
- default='learnfair',
- )
- parser.add_argument("--qos")
- parser.add_argument("--account")
- parser.add_argument("--timeout", type=int, default=4320)
- parser.add_argument('--debug', action='store_true', help='debug mode (local run)')
- args = parser.parse_args()
- n_shards = args.tasks_per_node * args.nodes
- if args.files_path is None:
- print("Warning: --files_path not provided, not recommended when processing more than 10k files.")
- if args.debug:
- print("Debugging mode")
- process_dataset(args, n_shards=n_shards, node_index=0, task_index=0)
- else:
-
- log_folder = Path(args.log_root) / args.xp_name / '%j'
- print(f"Logging to: {log_folder}")
- log_folder.parent.mkdir(parents=True, exist_ok=True)
- executor = submitit.AutoExecutor(folder=str(log_folder))
- if args.qos:
- executor.update_parameters(slurm_partition=args.partition, slurm_qos=args.qos, slurm_account=args.account)
- else:
- executor.update_parameters(slurm_partition=args.partition)
- executor.update_parameters(
- slurm_job_name=args.xp_name, timeout_min=args.timeout,
- cpus_per_task=args.cpus_per_task, tasks_per_node=args.tasks_per_node, nodes=1)
- if args.memory_gb:
- executor.update_parameters(mem=f'{args.memory_gb}GB')
- jobs = []
- with executor.batch():
- for node_index in range(args.nodes):
- job = executor.submit(process_dataset, args, n_shards=n_shards, node_index=node_index)
- jobs.append(job)
- for job in jobs:
- print(f"Waiting on job {job.job_id}")
- job.results()
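The sharding in `process_dataset` assigns `shard_index = node_index * tasks_per_node + task_index` and keeps only the files whose index satisfies `idx % n_shards == shard_index`, so every file is handled by exactly one task. A tiny standalone illustration:

```python
# Illustration of the sharding scheme: 2 nodes x 3 tasks -> 6 shards.
nodes, tasks_per_node = 2, 3
n_shards = nodes * tasks_per_node

assignments = {}
for node_index in range(nodes):
    for task_index in range(tasks_per_node):
        shard_index = node_index * tasks_per_node + task_index
        assignments[shard_index] = [idx for idx in range(12)
                                    if idx % n_shards == shard_index]

print(assignments[0])  # [0, 6]
print(assignments[5])  # [5, 11]
```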
diff --git a/spaces/QiuLingYan/ChanYuan-large-v2/README.md b/spaces/QiuLingYan/ChanYuan-large-v2/README.md
deleted file mode 100644
index 82c3568a1b3eff755e79d9388e6c9e3269cf2e69..0000000000000000000000000000000000000000
--- a/spaces/QiuLingYan/ChanYuan-large-v2/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: ChatYuan Large V2
-emoji: 📊
-colorFrom: red
-colorTo: pink
-sdk: gradio
-sdk_version: 3.23.0
-app_file: app.py
-pinned: false
-license: creativeml-openrail-m
-duplicated_from: ClueAI/ChatYuan-large-v2
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/RamAnanth1/videocrafter/lvdm/utils/common_utils.py b/spaces/RamAnanth1/videocrafter/lvdm/utils/common_utils.py
deleted file mode 100644
index 9bcb6ee1b5b15d3487de8e058e8dfce8277be745..0000000000000000000000000000000000000000
--- a/spaces/RamAnanth1/videocrafter/lvdm/utils/common_utils.py
+++ /dev/null
@@ -1,132 +0,0 @@
-
-import importlib
-
-import torch
-import numpy as np
-
-from inspect import isfunction
-from PIL import Image, ImageDraw, ImageFont
-
-
-def str2bool(v):
- if isinstance(v, bool):
- return v
- if v.lower() in ('yes', 'true', 't', 'y', '1'):
- return True
- elif v.lower() in ('no', 'false', 'f', 'n', '0'):
- return False
- else:
- raise ValueError('Boolean value expected.')
-
-
-def instantiate_from_config(config):
- if not "target" in config:
- if config == '__is_first_stage__':
- return None
- elif config == "__is_unconditional__":
- return None
- raise KeyError("Expected key `target` to instantiate.")
-
- return get_obj_from_str(config["target"])(**config.get("params", dict()))
-
-def get_obj_from_str(string, reload=False):
- module, cls = string.rsplit(".", 1)
- if reload:
- module_imp = importlib.import_module(module)
- importlib.reload(module_imp)
- return getattr(importlib.import_module(module, package=None), cls)
-
-def log_txt_as_img(wh, xc, size=10):
- # wh a tuple of (width, height)
- # xc a list of captions to plot
- b = len(xc)
- txts = list()
- for bi in range(b):
- txt = Image.new("RGB", wh, color="white")
- draw = ImageDraw.Draw(txt)
- font = ImageFont.truetype('data/DejaVuSans.ttf', size=size)
- nc = int(40 * (wh[0] / 256))
- lines = "\n".join(xc[bi][start:start + nc] for start in range(0, len(xc[bi]), nc))
-
- try:
- draw.text((0, 0), lines, fill="black", font=font)
- except UnicodeEncodeError:
- print("Cant encode string for logging. Skipping.")
-
- txt = np.array(txt).transpose(2, 0, 1) / 127.5 - 1.0
- txts.append(txt)
- txts = np.stack(txts)
- txts = torch.tensor(txts)
- return txts
-
-
-def ismap(x):
- if not isinstance(x, torch.Tensor):
- return False
- return (len(x.shape) == 4) and (x.shape[1] > 3)
-
-
-def isimage(x):
- if not isinstance(x,torch.Tensor):
- return False
- return (len(x.shape) == 4) and (x.shape[1] == 3 or x.shape[1] == 1)
-
-
-def exists(x):
- return x is not None
-
-
-def default(val, d):
- if exists(val):
- return val
- return d() if isfunction(d) else d
-
-
-def mean_flat(tensor):
- """
- https://github.com/openai/guided-diffusion/blob/27c20a8fab9cb472df5d6bdd6c8d11c8f430b924/guided_diffusion/nn.py#L86
- Take the mean over all non-batch dimensions.
- """
- return tensor.mean(dim=list(range(1, len(tensor.shape))))
-
-
-def count_params(model, verbose=False):
- total_params = sum(p.numel() for p in model.parameters())
- if verbose:
- print(f"{model.__class__.__name__} has {total_params*1.e-6:.2f} M params.")
- return total_params
-
-
-def instantiate_from_config(config, **kwargs):
- if not "target" in config:
- if config == '__is_first_stage__':
- return None
- elif config == "__is_unconditional__":
- return None
- raise KeyError("Expected key `target` to instantiate.")
-
- if "instantiate_with_dict" in config and config["instantiate_with_dict"]:
- # input parameter is one dict
- return get_obj_from_str(config["target"])(config.get("params", dict()), **kwargs)
- else:
- return get_obj_from_str(config["target"])(**config.get("params", dict()))
-
-
-def get_obj_from_str(string, reload=False):
- module, cls = string.rsplit(".", 1)
- if reload:
- module_imp = importlib.import_module(module)
- importlib.reload(module_imp)
- return getattr(importlib.import_module(module, package=None), cls)
-
-
-def check_istarget(name, para_list):
- """
- name: full name of source para
- para_list: partial name of target para
- """
- istarget=False
- for para in para_list:
- if para in name:
- return True
- return istarget
\ No newline at end of file
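`instantiate_from_config` builds an object from a dict holding a dotted `target` import path and optional keyword `params`. A minimal standalone illustration of that convention, using a standard-library class purely as a stand-in target:

```python
import importlib

def get_obj_from_str(string):
    module, cls = string.rsplit(".", 1)
    return getattr(importlib.import_module(module), cls)

config = {"target": "datetime.timedelta", "params": {"hours": 1, "minutes": 30}}
obj = get_obj_from_str(config["target"])(**config.get("params", dict()))
print(obj)  # 1:30:00
```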
diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/pygments/plugin.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/pygments/plugin.py
deleted file mode 100644
index 3590bee8d29a7670d5c0e94c2a1c83c83670e766..0000000000000000000000000000000000000000
--- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/pygments/plugin.py
+++ /dev/null
@@ -1,88 +0,0 @@
-"""
- pygments.plugin
- ~~~~~~~~~~~~~~~
-
- Pygments plugin interface. By default, this tries to use
- ``importlib.metadata``, which is in the Python standard
- library since Python 3.8, or its ``importlib_metadata``
- backport for earlier versions of Python. It falls back on
- ``pkg_resources`` if not found. Finally, if ``pkg_resources``
- is not found either, no plugins are loaded at all.
-
- lexer plugins::
-
- [pygments.lexers]
- yourlexer = yourmodule:YourLexer
-
- formatter plugins::
-
- [pygments.formatters]
- yourformatter = yourformatter:YourFormatter
- /.ext = yourformatter:YourFormatter
-
- As you can see, you can define extensions for the formatter
- with a leading slash.
-
- syntax plugins::
-
- [pygments.styles]
- yourstyle = yourstyle:YourStyle
-
- filter plugin::
-
- [pygments.filter]
- yourfilter = yourfilter:YourFilter
-
-
- :copyright: Copyright 2006-2022 by the Pygments team, see AUTHORS.
- :license: BSD, see LICENSE for details.
-"""
-
-LEXER_ENTRY_POINT = 'pygments.lexers'
-FORMATTER_ENTRY_POINT = 'pygments.formatters'
-STYLE_ENTRY_POINT = 'pygments.styles'
-FILTER_ENTRY_POINT = 'pygments.filters'
-
-
-def iter_entry_points(group_name):
- try:
- from importlib.metadata import entry_points
- except ImportError:
- try:
- from importlib_metadata import entry_points
- except ImportError:
- try:
- from pip._vendor.pkg_resources import iter_entry_points
- except (ImportError, OSError):
- return []
- else:
- return iter_entry_points(group_name)
- groups = entry_points()
- if hasattr(groups, 'select'):
- # New interface in Python 3.10 and newer versions of the
- # importlib_metadata backport.
- return groups.select(group=group_name)
- else:
- # Older interface, deprecated in Python 3.10 and recent
- # importlib_metadata, but we need it in Python 3.8 and 3.9.
- return groups.get(group_name, [])
-
-
-def find_plugin_lexers():
- for entrypoint in iter_entry_points(LEXER_ENTRY_POINT):
- yield entrypoint.load()
-
-
-def find_plugin_formatters():
- for entrypoint in iter_entry_points(FORMATTER_ENTRY_POINT):
- yield entrypoint.name, entrypoint.load()
-
-
-def find_plugin_styles():
- for entrypoint in iter_entry_points(STYLE_ENTRY_POINT):
- yield entrypoint.name, entrypoint.load()
-
-
-def find_plugin_filters():
- for entrypoint in iter_entry_points(FILTER_ENTRY_POINT):
- yield entrypoint.name, entrypoint.load()
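The module docstring above lists the entry-point groups this loader scans. A hedged sketch of how a third-party package could register a lexer with one of those groups via setuptools (package, module, and class names here are hypothetical):

```python
# setup.py of a hypothetical plugin package
from setuptools import setup

setup(
    name="yourlexer-plugin",
    py_modules=["yourmodule"],
    entry_points={
        "pygments.lexers": [
            "yourlexer = yourmodule:YourLexer",
        ],
    },
)
```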
diff --git a/spaces/RedBaron5/PatentSolver/App/bin/ZipParser.py b/spaces/RedBaron5/PatentSolver/App/bin/ZipParser.py
deleted file mode 100644
index 5d20e1a5d94898ed4fb1d422a5b3095405f44145..0000000000000000000000000000000000000000
--- a/spaces/RedBaron5/PatentSolver/App/bin/ZipParser.py
+++ /dev/null
@@ -1,163 +0,0 @@
-import os
-import re
-import json
-import zipfile
-from lxml import etree
-from App.bin.InputHandler import InputHandler
-from App.bin.constants import DATA_INPUT
-from App.bin.FiguresCleaner import FiguresCleaner
-from App.bin import constants
-
-
-class ZipParser(object):
-
- def __init__(self, folder, extension):
- self.folder = folder
- self.extension = extension
- def custom_cleaner(self, line):
- line = str(line)
- #line = line.lower()
- line = re.sub(r'PatentInspiration Url', '', line)
- line = re.sub(r'(http|ftp|https)://([\w_-]+(?:(?:\.[\w_-]+)+))([\w.,@?^=%&:/~+#-]*[\w@?^=%&/~+#-])?', '', line)
- line = re.sub(r'{', '(', line)
- line = re.sub(r'"', '\'', line)
- line = re.sub(r'}', ')', line)
- line = re.sub(r'\t.*patentinspiration.*\n', '', line)
- line = re.sub(r'^|\n{2,}\bAbstract\b\n?', '', line)
- line = re.sub(r'^|\n{2,}\bClaims\b\n?', '', line)
- line = re.sub(r'^|\n{2,}\bDescription\b\n?', '', line)
- line = re.sub(r'fig\.', 'figure', line)
- line = re.sub(r'Fig\.', 'Figure', line)
- line = re.sub(r'FIG\.', 'Figure', line)
- line = re.sub(r'figs\.', 'figures', line)
- line = re.sub(r'FIGS\.', 'Figures', line)
- line = re.sub(r'(\w+\.)', r'\1 ', line)
- line = re.sub(r'&#39;', '\'', line)
- line = re.sub(r'&gt;', '>', line)
- line = re.sub(r'&lt;', '<', line)
- line = re.sub(r'&deg;', ' deg.', line)
- line = re.sub(r'&nbsp;', ' ', line)
- line = line.strip()
- return line
-
- def dataCleaner(self,line):
- with open(constants.ASSETS + "dropPart") as l:
- # next(l)
- drop_part = l.read().splitlines()
- drop_part_pattern = re.compile('|'.join(drop_part))
-
- line = str(line)
- #line = line.lower()
- line = re.sub(r'^([A-Z-/]+\s)+([A-Z])', r'\n\2', line)
- line = re.sub(drop_part_pattern, r'\n', line)
- line = re.sub(r'\s+\.\s?\d+\s+', ' ', line)
- line = line.strip()
- return line
-
- def smooth_data_cleaner(self,line):
- line = str(line)
- # line = line.lower()
- line = re.sub(r'\s+,', ',', line)
- line = re.sub(r'\d\w-\d\w (and? \d\w-\d\w)?', '', line)
- line = re.sub(r'\d\w-\d\w', '', line)
- line = re.sub(r'\(\s?(,\s?|;\s?)+\s?\)', '', line)
- line = re.sub(r'\s+\.\s\.', '.\n', line)
- line = re.sub(r'\s+\.\s+([a-z]+)', r' \1', line)
- line = re.sub(r'\s+(\.)\s+\[\s?\d+\s?]\s+', r'.\n', line)
- line = re.sub(r'\s?\[\s?\d+\s?]\s+', r'\n', line)
- line = re.sub(r'\s+(\.)\s+([A-Z]+)', r'.\n\2', line)
- line = re.sub(r'\s+;\s+', '; ', line)
- line = re.sub(r'\(\s+\'\s+\)', '', line)
- line = re.sub(r'\(\s+\)', '', line)
- line = re.sub(r'\(\s?\.\s?\)', '', line)
- line = re.sub(r'\(\s/\s?\)', '', line)
- line = re.sub(r'\s{2,}', ' ', line)
- line = re.sub(r'(\d+)\s+(\.)\s+(\d+)', r'\1.\3', line)
- line = line.strip()
- return line
-
- def OpenFiles(self, files):
- contentList = []
- filename =""
- for fichier in files:
- filename = os.path.basename(fichier)
-
- if fichier.endswith("xml"):
- doc = etree.parse(fichier)
- contentList.append(doc)
- return filename, contentList
-
- def openZips(self, files):
- zipLists = []
- folderpath = os.path.dirname(files[0])
- folder = os.path.basename(folderpath)
- d_folder = folderpath+"/unzipped/"
- for zips in files:
- if zipfile.is_zipfile(zips):
- zip_ref = zipfile.ZipFile(zips, 'r')
- zip_ref.extractall(d_folder)
- zip_ref.close()
- getFiles = InputHandler(d_folder,'*.xml')
- files = getFiles.get_input()
- return files
-
- def GetFiles(self):
- folder = self.folder
- getFiles = InputHandler(folder, '*.*')
- files = getFiles.get_input()
-
- filename, file_content = self.OpenFiles(self.openZips(files))
- filename = os.path.splitext(filename)[0]
- print(filename)
- count = 0
- corpus = []
- for content in file_content:
- description_list = []
- abstract_list = []
- claim_list = []
- docList = content.xpath("/QOitem/QOanswer/QOaVisu/QOdoclist")
- for doc in docList:
- doc = doc.find("./QOdocument")
- count +=1
- title_blocks = doc.xpath("./QOfield[@name='ETI']/QOpar[@num='1' and @xml:lang='EN']")
- for head in title_blocks:
-
- title = head.xpath('./QOsen/descendant-or-self::*/text()')
- pNumber = head.xpath('./@PUB')
- Number = ' '.join(pNumber)
- title = ' '.join(title)
- abstract_block = content.xpath("/QOitem/QOanswer/QOaVisu//QOfield[@name='EAB']/QOpar[@num='1' and @xml:lang='EN']")
- for abstract in abstract_block:
- abstract_content = abstract.xpath('./QOsen/descendant-or-self::*/text()')
- abstract_list.append(' '.join(abstract_content))
- Abstracts = ' '.join(abstract_list)
- a_abstract = self.custom_cleaner(Abstracts)
- abstract_cleaner = FiguresCleaner(a_abstract)
- Abstract = ' '.join(abstract_cleaner.clean_figures())
-
-
-
- claims_block = content.xpath("/QOitem/QOanswer/QOaVisu//QOfield[@name='CLMS']/QOpar")
- for claim in claims_block:
- claim_content = claim.xpath('./QOsen/descendant-or-self::*/text()')
- claim_list.append(' '.join(claim_content))
- Claims = ' '.join(claim_list)
-
-
- description_block = content.xpath("/QOitem/QOanswer/QOaVisu//QOfield[@name='DESC']/QOpar")
- for description in description_block:
- description_content = description.xpath('./QOsen/descendant-or-self::*/text()')
- description_list.append(' '.join(description_content))
- Description = ' '.join(description_list)
-
-
-
- values = {'filename':filename, 'title':title,'number':Number, 'abstract': Abstract, 'claims':Claims, 'description':Description}
- corpus.append(values)
-
- #with open(folder+"/demo.json", 'w') as json_data:
- # json.dump(corpus, json_data)
- #print (values)
-
- return corpus
-
diff --git a/spaces/Reha2704/VToonify/vtoonify/model/stylegan/op/readme.md b/spaces/Reha2704/VToonify/vtoonify/model/stylegan/op/readme.md
deleted file mode 100644
index 7cffcfc72069ff9a098d292f9e37035031e19081..0000000000000000000000000000000000000000
--- a/spaces/Reha2704/VToonify/vtoonify/model/stylegan/op/readme.md
+++ /dev/null
@@ -1,12 +0,0 @@
-Code from [rosinality-stylegan2-pytorch-cp](https://github.com/senior-sigan/rosinality-stylegan2-pytorch-cpu)
-
-Scripts to convert rosinality/stylegan2-pytorch to the CPU compatible format
-
-If you would like to use CPU for testing or have a problem regarding the cpp extention (fused and upfirdn2d), please make the following changes:
-
-Change `model.stylegan.op` to `model.stylegan.op_cpu`
-https://github.com/williamyang1991/VToonify/blob/01b383efc00007f9b069585db41a7d31a77a8806/util.py#L14
-
-https://github.com/williamyang1991/VToonify/blob/01b383efc00007f9b069585db41a7d31a77a8806/model/simple_augment.py#L12
-
-https://github.com/williamyang1991/VToonify/blob/01b383efc00007f9b069585db41a7d31a77a8806/model/stylegan/model.py#L11
diff --git a/spaces/Ritvik19/SentiNet/sentinet_v1.py b/spaces/Ritvik19/SentiNet/sentinet_v1.py
deleted file mode 100644
index 86696f5e3c1a38e8a34f94edd413c931a0a54df7..0000000000000000000000000000000000000000
--- a/spaces/Ritvik19/SentiNet/sentinet_v1.py
+++ /dev/null
@@ -1,87 +0,0 @@
-import cleantext
-import joblib
-import streamlit as st # optional
-from huggingface_hub import hf_hub_download
-
-
-class SentinetV1:
- def __init__(self) -> None:
- self.models = self.load_models()
-
- @st.cache(allow_output_mutation=True) # optional: for streamlit applications only
- def load_models(self) -> dict:
- models = {}
- for class_name in [
- "sentiment_polarity",
- "opinion",
- "toxicity",
- "toxicity__hate",
- "toxicity__insult",
- "toxicity__obscene",
- "toxicity__sexual_explicit",
- "toxicity__threat",
- "emotion__no_emotion",
- "emotion__anger",
- "emotion__disgust",
- "emotion__fear",
- "emotion__guilt",
- "emotion__humour",
- "emotion__joy",
- "emotion__sadness",
- "emotion__shame",
- "emotion__surprise",
- ]:
- models[class_name] = joblib.load(
- hf_hub_download("Ritvik19/sentinet-v1", f"{class_name}.bin")
- )
- return models
-
- def clean_text(self, text) -> str:
- return cleantext.clean(
- text,
- fix_unicode=True, # fix various unicode errors
- to_ascii=True, # transliterate to closest ASCII representation
- lower=True, # lowercase text
- no_line_breaks=False, # fully strip line breaks as opposed to only normalizing them
- no_urls=False, # replace all URLs with a special token
- no_emails=False, # replace all email addresses with a special token
- no_phone_numbers=False, # replace all phone numbers with a special token
- no_numbers=False, # replace all numbers with a special token
- no_digits=False, # replace all digits with a special token
- no_currency_symbols=False, # replace all currency symbols with a special token
- no_punct=False, # remove punctuations
- replace_with_punct="", # instead of removing punctuations you may replace them
- replace_with_url="",
- replace_with_email="",
- replace_with_phone_number="",
- replace_with_number="",
- replace_with_digit="0",
- replace_with_currency_symbol="",
- lang="en", # set to 'de' for German special handling
- )
-
- def get_prediction(self, text, model, scale_min=0, scale_max=100) -> int:
- return round(model.predict_proba([self.clean_text(text)])[0][1] * (scale_max-scale_min) + scale_min, 2)
-
- def __call__(self, text) -> dict:
- result = {}
- result["sentiment_polarity"] = self.get_prediction(text, self.models["sentiment_polarity"], scale_min=-100, scale_max=100)
- result["opinion"] = self.get_prediction(text, self.models["opinion"])
- result["toxicity"] = {
- class_name: self.get_prediction(text, model)
- for class_name, model in self.models.items()
- if class_name.startswith("toxicity")
- }
- result["emotion"] = {
- class_name: self.get_prediction(text, model)
- for class_name, model in self.models.items()
- if class_name.startswith("emotion")
- }
-
- return result
-
-
-if __name__ == "__main__":
- sentinet = SentinetV1()
- text = "This is a test"
- print(sentinet(text))
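`get_prediction` rescales the positive-class probability onto [scale_min, scale_max]; for `sentiment_polarity` that range is [-100, 100], so a probability of 0.5 maps to 0 (neutral). A quick arithmetic check of the same formula:

```python
def scale(p, scale_min=0, scale_max=100):
    # same rescaling as get_prediction(): p in [0, 1] -> [scale_min, scale_max]
    return round(p * (scale_max - scale_min) + scale_min, 2)

print(scale(0.5, -100, 100))   # 0.0   -> neutral
print(scale(0.95, -100, 100))  # 90.0  -> strongly positive
print(scale(0.1, -100, 100))   # -80.0 -> negative
```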
diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/runner/hooks/logger/mlflow.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/runner/hooks/logger/mlflow.py
deleted file mode 100644
index f9a72592be47b534ce22573775fd5a7e8e86d72d..0000000000000000000000000000000000000000
--- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/runner/hooks/logger/mlflow.py
+++ /dev/null
@@ -1,78 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from ...dist_utils import master_only
-from ..hook import HOOKS
-from .base import LoggerHook
-
-
-@HOOKS.register_module()
-class MlflowLoggerHook(LoggerHook):
-
- def __init__(self,
- exp_name=None,
- tags=None,
- log_model=True,
- interval=10,
- ignore_last=True,
- reset_flag=False,
- by_epoch=True):
- """Class to log metrics and (optionally) a trained model to MLflow.
-
- It requires `MLflow`_ to be installed.
-
- Args:
- exp_name (str, optional): Name of the experiment to be used.
- Default None.
- If not None, set the active experiment.
- If experiment does not exist, an experiment with provided name
- will be created.
- tags (dict of str: str, optional): Tags for the current run.
- Default None.
- If not None, set tags for the current run.
- log_model (bool, optional): Whether to log an MLflow artifact.
- Default True.
- If True, log runner.model as an MLflow artifact
- for the current run.
- interval (int): Logging interval (every k iterations).
- ignore_last (bool): Ignore the log of last iterations in each epoch
- if less than `interval`.
- reset_flag (bool): Whether to clear the output buffer after logging
- by_epoch (bool): Whether EpochBasedRunner is used.
-
- .. _MLflow:
- https://www.mlflow.org/docs/latest/index.html
- """
- super(MlflowLoggerHook, self).__init__(interval, ignore_last,
- reset_flag, by_epoch)
- self.import_mlflow()
- self.exp_name = exp_name
- self.tags = tags
- self.log_model = log_model
-
- def import_mlflow(self):
- try:
- import mlflow
- import mlflow.pytorch as mlflow_pytorch
- except ImportError:
- raise ImportError(
- 'Please run "pip install mlflow" to install mlflow')
- self.mlflow = mlflow
- self.mlflow_pytorch = mlflow_pytorch
-
- @master_only
- def before_run(self, runner):
- super(MlflowLoggerHook, self).before_run(runner)
- if self.exp_name is not None:
- self.mlflow.set_experiment(self.exp_name)
- if self.tags is not None:
- self.mlflow.set_tags(self.tags)
-
- @master_only
- def log(self, runner):
- tags = self.get_loggable_tags(runner)
- if tags:
- self.mlflow.log_metrics(tags, step=self.get_iter(runner))
-
- @master_only
- def after_run(self, runner):
- if self.log_model:
- self.mlflow_pytorch.log_model(runner.model, 'models')
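Because the hook is registered with `@HOOKS.register_module()`, it is normally enabled from a config rather than constructed directly. A hedged sketch following the usual mmcv `log_config` convention (the experiment name and tags are hypothetical; adapt the keys to the actual training config):

```python
log_config = dict(
    interval=50,
    hooks=[
        dict(type='TextLoggerHook'),
        dict(
            type='MlflowLoggerHook',
            exp_name='my_experiment',      # hypothetical experiment name
            tags=dict(run='baseline'),     # hypothetical tags
            log_model=True),
    ])
```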
diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/detectors/cascade_rcnn.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/detectors/cascade_rcnn.py
deleted file mode 100644
index d873dceb7e4efdf8d1e7d282badfe9b7118426b9..0000000000000000000000000000000000000000
--- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/detectors/cascade_rcnn.py
+++ /dev/null
@@ -1,46 +0,0 @@
-from ..builder import DETECTORS
-from .two_stage import TwoStageDetector
-
-
-@DETECTORS.register_module()
-class CascadeRCNN(TwoStageDetector):
- r"""Implementation of `Cascade R-CNN: Delving into High Quality Object
- Detection `_"""
-
- def __init__(self,
- backbone,
- neck=None,
- rpn_head=None,
- roi_head=None,
- train_cfg=None,
- test_cfg=None,
- pretrained=None):
- super(CascadeRCNN, self).__init__(
- backbone=backbone,
- neck=neck,
- rpn_head=rpn_head,
- roi_head=roi_head,
- train_cfg=train_cfg,
- test_cfg=test_cfg,
- pretrained=pretrained)
-
- def show_result(self, data, result, **kwargs):
- """Show prediction results of the detector.
-
- Args:
- data (str or np.ndarray): Image filename or loaded image.
- result (Tensor or tuple): The results to draw over `img`
- bbox_result or (bbox_result, segm_result).
-
- Returns:
- np.ndarray: The image with bboxes drawn on it.
- """
- if self.with_mask:
- ms_bbox_result, ms_segm_result = result
- if isinstance(ms_bbox_result, dict):
- result = (ms_bbox_result['ensemble'],
- ms_segm_result['ensemble'])
- else:
- if isinstance(result, dict):
- result = result['ensemble']
- return super(CascadeRCNN, self).show_result(data, result, **kwargs)
diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/__init__.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/__init__.py
deleted file mode 100644
index 6ca3aaeaacdf5dd085bac69ef2a5be0b3bdf6b9e..0000000000000000000000000000000000000000
--- a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/__init__.py
+++ /dev/null
@@ -1,27 +0,0 @@
-'''
- * Copyright (c) 2023 Salesforce, Inc.
- * All rights reserved.
- * SPDX-License-Identifier: Apache License 2.0
- * For full license text, see LICENSE.txt file in the repo root or http://www.apache.org/licenses/
- * By Can Qin
- * Modified from ControlNet repo: https://github.com/lllyasviel/ControlNet
- * Copyright (c) 2023 Lvmin Zhang and Maneesh Agrawala
- * Modified from MMCV repo: From https://github.com/open-mmlab/mmcv
- * Copyright (c) OpenMMLab. All rights reserved.
-'''
-
-
-# flake8: noqa
-from .arraymisc import *
-from .fileio import *
-from .image import *
-from .utils import *
-from .version import *
-from .video import *
-from .visualization import *
-
-# The following modules are not imported to this level, so mmcv may be used
-# without PyTorch.
-# - runner
-# - parallel
-# - op
diff --git a/spaces/Rongjiehuang/GenerSpeech/inference/base_tts_infer.py b/spaces/Rongjiehuang/GenerSpeech/inference/base_tts_infer.py
deleted file mode 100644
index 21c7bfa12d55ade4197eeff6ff41e19b8ecf50ea..0000000000000000000000000000000000000000
--- a/spaces/Rongjiehuang/GenerSpeech/inference/base_tts_infer.py
+++ /dev/null
@@ -1,194 +0,0 @@
-from data_gen.tts.data_gen_utils import is_sil_phoneme
-from resemblyzer import VoiceEncoder
-from data_gen.tts.data_gen_utils import build_phone_encoder, build_word_encoder
-from tasks.tts.dataset_utils import FastSpeechWordDataset
-from tasks.tts.tts_utils import load_data_preprocessor
-from vocoders.hifigan import HifiGanGenerator
-from data_gen.tts.emotion import inference as EmotionEncoder
-from data_gen.tts.emotion.inference import embed_utterance as Embed_utterance
-from data_gen.tts.emotion.inference import preprocess_wav
-import importlib
-import os
-import librosa
-import soundfile as sf
-from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
-from string import punctuation
-import torch
-from utils import audio
-from utils.ckpt_utils import load_ckpt
-from utils.hparams import set_hparams
-
-
-class BaseTTSInfer:
- def __init__(self, hparams, device=None):
- if device is None:
- device = 'cuda' if torch.cuda.is_available() else 'cpu'
- self.hparams = hparams
- self.device = device
- self.data_dir = hparams['binary_data_dir']
- self.preprocessor, self.preprocess_args = load_data_preprocessor()
- self.ph_encoder, self.word_encoder = self.preprocessor.load_dict(self.data_dir)
- self.spk_map = self.preprocessor.load_spk_map(self.data_dir)
- self.ds_cls = FastSpeechWordDataset
- self.model = self.build_model()
- self.model.eval()
- self.model.to(self.device)
- self.vocoder = self.build_vocoder()
- self.vocoder.eval()
- self.vocoder.to(self.device)
- self.asr_processor, self.asr_model = self.build_asr()
-
- def build_model(self):
- raise NotImplementedError
-
- def forward_model(self, inp):
- raise NotImplementedError
-
- def build_asr(self):
- # load pretrained model
- processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h") # facebook/wav2vec2-base-960h wav2vec2-large-960h-lv60-self
- model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h").to(self.device)
- return processor, model
-
- def build_vocoder(self):
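- # Build the HiFi-GAN vocoder from the config and checkpoint found under hparams['vocoder_ckpt']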
- base_dir = self.hparams['vocoder_ckpt']
- config_path = f'{base_dir}/config.yaml'
- config = set_hparams(config_path, global_hparams=False)
- vocoder = HifiGanGenerator(config)
- load_ckpt(vocoder, base_dir, 'model_gen')
- return vocoder
-
- def run_vocoder(self, c):
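- # Transpose the mel from [B, T, n_mels] to [B, n_mels, T] for the HiFi-GAN generator and squeeze the channel dim of the generated waveform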
- c = c.transpose(2, 1)
- y = self.vocoder(c)[:, 0]
- return y
-
- def preprocess_input(self, inp):
- """
- :param inp: {'text': str, 'ref_audio': str, 'item_name': (str, optional), 'spk_name': (str, optional)}
- :return:
- """
- # processed text
- preprocessor, preprocess_args = self.preprocessor, self.preprocess_args
- text_raw = inp['text']
- item_name = inp.get('item_name', '')
- ph, txt, word, ph2word, ph_gb_word = preprocessor.txt_to_ph(preprocessor.txt_processor, text_raw, preprocess_args)
- ph_token = self.ph_encoder.encode(ph)
-
- # processed ref audio
- ref_audio = inp['ref_audio']
- processed_ref_audio = 'example/temp.wav'
- voice_encoder = VoiceEncoder().to(self.device)
- encoder = [self.ph_encoder, self.word_encoder]
- EmotionEncoder.load_model(self.hparams['emotion_encoder_path'])
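- # Resolve the binarizer class from its dotted module path and import it dynamically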
- binarizer_cls = self.hparams.get("binarizer_cls", 'data_gen.tts.base_binarizerr.BaseBinarizer')
- pkg = ".".join(binarizer_cls.split(".")[:-1])
- cls_name = binarizer_cls.split(".")[-1]
- binarizer_cls = getattr(importlib.import_module(pkg), cls_name)
-
- ref_audio_raw, ref_text_raw = self.asr(ref_audio) # prepare text
- ph_ref, txt_ref, word_ref, ph2word_ref, ph_gb_word_ref = preprocessor.txt_to_ph(preprocessor.txt_processor, ref_text_raw, preprocess_args)
- ph_gb_word_nosil = ["_".join([p for p in w.split("_") if not is_sil_phoneme(p)]) for w in ph_gb_word_ref.split(" ") if not is_sil_phoneme(w)]
- phs_for_align = ['SIL'] + ph_gb_word_nosil + ['SIL']
- phs_for_align = " ".join(phs_for_align)
-
- # prepare files for alignment
- os.system('rm -r example/; mkdir example/')
- audio.save_wav(ref_audio_raw, processed_ref_audio, self.hparams['audio_sample_rate'])
- with open(f'example/temp.lab', 'w') as f_txt:
- f_txt.write(phs_for_align)
- os.system(f'mfa align example/ {self.hparams["binary_data_dir"]}/mfa_dict.txt {self.hparams["binary_data_dir"]}/mfa_model.zip example/textgrid/ --clean')
- item2tgfn = 'example/textgrid/temp.TextGrid' # prepare textgrid alignment
-
- item = binarizer_cls.process_item(item_name, ph_ref, txt_ref, item2tgfn, processed_ref_audio, 0, 0, encoder, self.hparams['binarization_args'])
- item['emo_embed'] = Embed_utterance(preprocess_wav(item['wav_fn']))
- item['spk_embed'] = voice_encoder.embed_utterance(item['wav'])
-
- item.update({
- 'ref_ph': item['ph'],
- 'ph': ph,
- 'ph_token': ph_token,
- 'text': txt
- })
- return item
-
- def input_to_batch(self, item):
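- # Collate a single preprocessed item into a batch of size 1 and move all tensors to the target device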
- item_names = [item['item_name']]
- text = [item['text']]
- ph = [item['ph']]
-
- txt_tokens = torch.LongTensor(item['ph_token'])[None, :].to(self.device)
- txt_lengths = torch.LongTensor([txt_tokens.shape[1]]).to(self.device)
- mels = torch.FloatTensor(item['mel'])[None, :].to(self.device)
- f0 = torch.FloatTensor(item['f0'])[None, :].to(self.device)
- # uv = torch.FloatTensor(item['uv']).to(self.device)
- mel2ph = torch.LongTensor(item['mel2ph'])[None, :].to(self.device)
- spk_embed = torch.FloatTensor(item['spk_embed'])[None, :].to(self.device)
- emo_embed = torch.FloatTensor(item['emo_embed'])[None, :].to(self.device)
-
- ph2word = torch.LongTensor(item['ph2word'])[None, :].to(self.device)
- mel2word = torch.LongTensor(item['mel2word'])[None, :].to(self.device)
- word_tokens = torch.LongTensor(item['word_tokens'])[None, :].to(self.device)
-
- batch = {
- 'item_name': item_names,
- 'text': text,
- 'ph': ph,
- 'mels': mels,
- 'f0': f0,
- 'txt_tokens': txt_tokens,
- 'txt_lengths': txt_lengths,
- 'spk_embed': spk_embed,
- 'emo_embed': emo_embed,
- 'mel2ph': mel2ph,
- 'ph2word': ph2word,
- 'mel2word': mel2word,
- 'word_tokens': word_tokens,
- }
- return batch
-
- def postprocess_output(self, output):
- return output
-
- def infer_once(self, inp):
- inp = self.preprocess_input(inp)
- output = self.forward_model(inp)
- output = self.postprocess_output(output)
- return output
-
- @classmethod
- def example_run(cls):
- from utils.hparams import set_hparams
- from utils.hparams import hparams as hp
- from utils.audio import save_wav
-
- set_hparams()
- inp = {
- 'text': hp['text'],
- 'ref_audio': hp['ref_audio']
- }
- infer_ins = cls(hp)
- out = infer_ins.infer_once(inp)
- os.makedirs('infer_out', exist_ok=True)
- save_wav(out, f'infer_out/{hp["text"]}.wav', hp['audio_sample_rate'])
- print(f'Save at infer_out/{hp["text"]}.wav.')
-
- def asr(self, file):
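- # Transcribe a reference audio file with the wav2vec2 CTC model; returns the (possibly resampled) waveform and its transcription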
- sample_rate = self.hparams['audio_sample_rate']
- audio_input, source_sample_rate = sf.read(file)
-
- # Resample the wav if needed
- if sample_rate is not None and source_sample_rate != sample_rate:
- audio_input = librosa.resample(audio_input, source_sample_rate, sample_rate)
-
- # pad input values and return pt tensor
- input_values = self.asr_processor(audio_input, sampling_rate=sample_rate, return_tensors="pt").input_values
-
- # retrieve logits & take argmax
- logits = self.asr_model(input_values.to(self.device)).logits
- predicted_ids = torch.argmax(logits, dim=-1)
-
- # transcribe
- transcription = self.asr_processor.decode(predicted_ids[0])
- transcription = transcription.rstrip(punctuation)
- return audio_input, transcription
\ No newline at end of file
diff --git a/spaces/SIGGRAPH2022/Self-Distilled-StyleGAN/README.md b/spaces/SIGGRAPH2022/Self-Distilled-StyleGAN/README.md
deleted file mode 100644
index 851af04c1b26a9f2f932462125db67edafb66899..0000000000000000000000000000000000000000
--- a/spaces/SIGGRAPH2022/Self-Distilled-StyleGAN/README.md
+++ /dev/null
@@ -1,15 +0,0 @@
----
-title: Self-Distilled StyleGAN
-emoji: 🐨
-colorFrom: indigo
-colorTo: red
-sdk: gradio
-sdk_version: 3.36.1
-app_file: app.py
-pinned: false
-suggested_hardware: t4-small
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference
-
-https://arxiv.org/abs/2202.12211
diff --git a/spaces/Sa-m/Vehicles-Detection-Custom-YoloV7/utils/general.py b/spaces/Sa-m/Vehicles-Detection-Custom-YoloV7/utils/general.py
deleted file mode 100644
index faf908f960bfbb7797260a5135827019781001a1..0000000000000000000000000000000000000000
--- a/spaces/Sa-m/Vehicles-Detection-Custom-YoloV7/utils/general.py
+++ /dev/null
@@ -1,891 +0,0 @@
-# YOLOR general utils
-
-import glob
-import logging
-import math
-import os
-import platform
-import random
-import re
-import subprocess
-import time
-from pathlib import Path
-
-import cv2
-import numpy as np
-import pandas as pd
-import torch
-import torchvision
-import yaml
-
-from utils.google_utils import gsutil_getsize
-from utils.metrics import fitness
-from utils.torch_utils import init_torch_seeds
-
-# Settings
-torch.set_printoptions(linewidth=320, precision=5, profile='long')
-np.set_printoptions(linewidth=320, formatter={'float_kind': '{:11.5g}'.format}) # format short g, %precision=5
-pd.options.display.max_columns = 10
-cv2.setNumThreads(0) # prevent OpenCV from multithreading (incompatible with PyTorch DataLoader)
-os.environ['NUMEXPR_MAX_THREADS'] = str(min(os.cpu_count(), 8)) # NumExpr max threads
-
-
-def set_logging(rank=-1):
- logging.basicConfig(
- format="%(message)s",
- level=logging.INFO if rank in [-1, 0] else logging.WARN)
-
-
-def init_seeds(seed=0):
- # Initialize random number generator (RNG) seeds
- random.seed(seed)
- np.random.seed(seed)
- init_torch_seeds(seed)
-
-
-def get_latest_run(search_dir='.'):
- # Return path to most recent 'last.pt' in /runs (i.e. to --resume from)
- last_list = glob.glob(f'{search_dir}/**/last*.pt', recursive=True)
- return max(last_list, key=os.path.getctime) if last_list else ''
-
-
-def isdocker():
- # Is environment a Docker container
- return Path('/workspace').exists() # or Path('/.dockerenv').exists()
-
-
-def emojis(str=''):
- # Return platform-dependent emoji-safe version of string
- return str.encode().decode('ascii', 'ignore') if platform.system() == 'Windows' else str
-
-
-def check_online():
- # Check internet connectivity
- import socket
- try:
- socket.create_connection(("1.1.1.1", 443), 5) # check host accessibility
- return True
- except OSError:
- return False
-
-
-def check_git_status():
- # Recommend 'git pull' if code is out of date
- print(colorstr('github: '), end='')
- try:
- assert Path('.git').exists(), 'skipping check (not a git repository)'
- assert not isdocker(), 'skipping check (Docker image)'
- assert check_online(), 'skipping check (offline)'
-
- cmd = 'git fetch && git config --get remote.origin.url'
- url = subprocess.check_output(cmd, shell=True).decode().strip().rstrip('.git') # github repo url
- branch = subprocess.check_output('git rev-parse --abbrev-ref HEAD', shell=True).decode().strip() # checked out
- n = int(subprocess.check_output(f'git rev-list {branch}..origin/master --count', shell=True)) # commits behind
- if n > 0:
- s = f"⚠️ WARNING: code is out of date by {n} commit{'s' * (n > 1)}. " \
- f"Use 'git pull' to update or 'git clone {url}' to download latest."
- else:
- s = f'up to date with {url} ✅'
- print(emojis(s)) # emoji-safe
- except Exception as e:
- print(e)
-
-
-def check_requirements(requirements='requirements.txt', exclude=()):
- # Check installed dependencies meet requirements (pass *.txt file or list of packages)
- import pkg_resources as pkg
- prefix = colorstr('red', 'bold', 'requirements:')
- if isinstance(requirements, (str, Path)): # requirements.txt file
- file = Path(requirements)
- if not file.exists():
- print(f"{prefix} {file.resolve()} not found, check failed.")
- return
- requirements = [f'{x.name}{x.specifier}' for x in pkg.parse_requirements(file.open()) if x.name not in exclude]
- else: # list or tuple of packages
- requirements = [x for x in requirements if x not in exclude]
-
- n = 0 # number of packages updates
- for r in requirements:
- try:
- pkg.require(r)
- except Exception as e: # DistributionNotFound or VersionConflict if requirements not met
- n += 1
- print(f"{prefix} {e.req} not found and is required by YOLOR, attempting auto-update...")
- print(subprocess.check_output(f"pip install '{e.req}'", shell=True).decode())
-
- if n: # if packages updated
- source = file.resolve() if 'file' in locals() else requirements
- s = f"{prefix} {n} package{'s' * (n > 1)} updated per {source}\n" \
- f"{prefix} ⚠️ {colorstr('bold', 'Restart runtime or rerun command for updates to take effect')}\n"
- print(emojis(s)) # emoji-safe
-
-
-def check_img_size(img_size, s=32):
- # Verify img_size is a multiple of stride s
- new_size = make_divisible(img_size, int(s)) # ceil gs-multiple
- if new_size != img_size:
- print('WARNING: --img-size %g must be multiple of max stride %g, updating to %g' % (img_size, s, new_size))
- return new_size
-
-
-def check_imshow():
- # Check if environment supports image displays
- try:
- assert not isdocker(), 'cv2.imshow() is disabled in Docker environments'
- cv2.imshow('test', np.zeros((1, 1, 3)))
- cv2.waitKey(1)
- cv2.destroyAllWindows()
- cv2.waitKey(1)
- return True
- except Exception as e:
- print(f'WARNING: Environment does not support cv2.imshow() or PIL Image.show() image displays\n{e}')
- return False
-
-
-def check_file(file):
- # Search for file if not found
- if Path(file).is_file() or file == '':
- return file
- else:
- files = glob.glob('./**/' + file, recursive=True) # find file
- assert len(files), f'File Not Found: {file}' # assert file was found
- assert len(files) == 1, f"Multiple files match '{file}', specify exact path: {files}" # assert unique
- return files[0] # return file
-
-
-def check_dataset(dict):
- # Download dataset if not found locally
- val, s = dict.get('val'), dict.get('download')
- if val and len(val):
- val = [Path(x).resolve() for x in (val if isinstance(val, list) else [val])] # val path
- if not all(x.exists() for x in val):
- print('\nWARNING: Dataset not found, nonexistent paths: %s' % [str(x) for x in val if not x.exists()])
- if s and len(s): # download script
- print('Downloading %s ...' % s)
- if s.startswith('http') and s.endswith('.zip'): # URL
- f = Path(s).name # filename
- torch.hub.download_url_to_file(s, f)
- r = os.system('unzip -q %s -d ../ && rm %s' % (f, f)) # unzip
- else: # bash script
- r = os.system(s)
- print('Dataset autodownload %s\n' % ('success' if r == 0 else 'failure')) # analyze return value
- else:
- raise Exception('Dataset not found.')
-
-
-def make_divisible(x, divisor):
- # Returns x evenly divisible by divisor
- return math.ceil(x / divisor) * divisor
-
-
-def clean_str(s):
- # Cleans a string by replacing special characters with underscore _
- return re.sub(pattern="[|@#!¡·$€%&()=?¿^*;:,¨´><+]", repl="_", string=s)
-
-
-def one_cycle(y1=0.0, y2=1.0, steps=100):
- # lambda function for sinusoidal ramp from y1 to y2
- return lambda x: ((1 - math.cos(x * math.pi / steps)) / 2) * (y2 - y1) + y1
-
-
-def colorstr(*input):
- # Colors a string https://en.wikipedia.org/wiki/ANSI_escape_code, i.e. colorstr('blue', 'hello world')
- *args, string = input if len(input) > 1 else ('blue', 'bold', input[0]) # color arguments, string
- colors = {'black': '\033[30m', # basic colors
- 'red': '\033[31m',
- 'green': '\033[32m',
- 'yellow': '\033[33m',
- 'blue': '\033[34m',
- 'magenta': '\033[35m',
- 'cyan': '\033[36m',
- 'white': '\033[37m',
- 'bright_black': '\033[90m', # bright colors
- 'bright_red': '\033[91m',
- 'bright_green': '\033[92m',
- 'bright_yellow': '\033[93m',
- 'bright_blue': '\033[94m',
- 'bright_magenta': '\033[95m',
- 'bright_cyan': '\033[96m',
- 'bright_white': '\033[97m',
- 'end': '\033[0m', # misc
- 'bold': '\033[1m',
- 'underline': '\033[4m'}
- return ''.join(colors[x] for x in args) + f'{string}' + colors['end']
-
-
-def labels_to_class_weights(labels, nc=80):
- # Get class weights (inverse frequency) from training labels
- if labels[0] is None: # no labels loaded
- return torch.Tensor()
-
- labels = np.concatenate(labels, 0) # labels.shape = (866643, 5) for COCO
- classes = labels[:, 0].astype(np.int) # labels = [class xywh]
- weights = np.bincount(classes, minlength=nc) # occurrences per class
-
- # Prepend gridpoint count (for uCE training)
- # gpi = ((320 / 32 * np.array([1, 2, 4])) ** 2 * 3).sum() # gridpoints per image
- # weights = np.hstack([gpi * len(labels) - weights.sum() * 9, weights * 9]) ** 0.5 # prepend gridpoints to start
-
- weights[weights == 0] = 1 # replace empty bins with 1
- weights = 1 / weights # number of targets per class
- weights /= weights.sum() # normalize
- return torch.from_numpy(weights)
-
-
-def labels_to_image_weights(labels, nc=80, class_weights=np.ones(80)):
- # Produces image weights based on class_weights and image contents
- class_counts = np.array([np.bincount(x[:, 0].astype(np.int), minlength=nc) for x in labels])
- image_weights = (class_weights.reshape(1, nc) * class_counts).sum(1)
- # index = random.choices(range(n), weights=image_weights, k=1) # weight image sample
- return image_weights
-
-
-def coco80_to_coco91_class(): # converts 80-index (val2014) to 91-index (paper)
- # https://tech.amikelive.com/node-718/what-object-categories-labels-are-in-coco-dataset/
- # a = np.loadtxt('data/coco.names', dtype='str', delimiter='\n')
- # b = np.loadtxt('data/coco_paper.names', dtype='str', delimiter='\n')
- # x1 = [list(a[i] == b).index(True) + 1 for i in range(80)] # darknet to coco
- # x2 = [list(b[i] == a).index(True) if any(b[i] == a) else None for i in range(91)] # coco to darknet
- x = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 27, 28, 31, 32, 33, 34,
- 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63,
- 64, 65, 67, 70, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 84, 85, 86, 87, 88, 89, 90]
- return x
-
-
-def xyxy2xywh(x):
- # Convert nx4 boxes from [x1, y1, x2, y2] to [x, y, w, h] where xy1=top-left, xy2=bottom-right
- y = x.clone() if isinstance(x, torch.Tensor) else np.copy(x)
- y[:, 0] = (x[:, 0] + x[:, 2]) / 2 # x center
- y[:, 1] = (x[:, 1] + x[:, 3]) / 2 # y center
- y[:, 2] = x[:, 2] - x[:, 0] # width
- y[:, 3] = x[:, 3] - x[:, 1] # height
- return y
-
-
-def xywh2xyxy(x):
- # Convert nx4 boxes from [x, y, w, h] to [x1, y1, x2, y2] where xy1=top-left, xy2=bottom-right
- y = x.clone() if isinstance(x, torch.Tensor) else np.copy(x)
- y[:, 0] = x[:, 0] - x[:, 2] / 2 # top left x
- y[:, 1] = x[:, 1] - x[:, 3] / 2 # top left y
- y[:, 2] = x[:, 0] + x[:, 2] / 2 # bottom right x
- y[:, 3] = x[:, 1] + x[:, 3] / 2 # bottom right y
- return y
-
-
-def xywhn2xyxy(x, w=640, h=640, padw=0, padh=0):
- # Convert nx4 boxes from [x, y, w, h] normalized to [x1, y1, x2, y2] where xy1=top-left, xy2=bottom-right
- y = x.clone() if isinstance(x, torch.Tensor) else np.copy(x)
- y[:, 0] = w * (x[:, 0] - x[:, 2] / 2) + padw # top left x
- y[:, 1] = h * (x[:, 1] - x[:, 3] / 2) + padh # top left y
- y[:, 2] = w * (x[:, 0] + x[:, 2] / 2) + padw # bottom right x
- y[:, 3] = h * (x[:, 1] + x[:, 3] / 2) + padh # bottom right y
- return y
-
-
-def xyn2xy(x, w=640, h=640, padw=0, padh=0):
- # Convert normalized segments into pixel segments, shape (n,2)
- y = x.clone() if isinstance(x, torch.Tensor) else np.copy(x)
- y[:, 0] = w * x[:, 0] + padw # top left x
- y[:, 1] = h * x[:, 1] + padh # top left y
- return y
-
-
-def segment2box(segment, width=640, height=640):
- # Convert 1 segment label to 1 box label, applying inside-image constraint, i.e. (xy1, xy2, ...) to (xyxy)
- x, y = segment.T # segment xy
- inside = (x >= 0) & (y >= 0) & (x <= width) & (y <= height)
- x, y, = x[inside], y[inside]
- return np.array([x.min(), y.min(), x.max(), y.max()]) if any(x) else np.zeros((1, 4)) # xyxy
-
-
-def segments2boxes(segments):
- # Convert segment labels to box labels, i.e. (cls, xy1, xy2, ...) to (cls, xywh)
- boxes = []
- for s in segments:
- x, y = s.T # segment xy
- boxes.append([x.min(), y.min(), x.max(), y.max()]) # cls, xyxy
- return xyxy2xywh(np.array(boxes)) # cls, xywh
-
-
-def resample_segments(segments, n=1000):
- # Up-sample an (n,2) segment
- for i, s in enumerate(segments):
- x = np.linspace(0, len(s) - 1, n)
- xp = np.arange(len(s))
- segments[i] = np.concatenate([np.interp(x, xp, s[:, i]) for i in range(2)]).reshape(2, -1).T # segment xy
- return segments
-
-
-def scale_coords(img1_shape, coords, img0_shape, ratio_pad=None):
- # Rescale coords (xyxy) from img1_shape to img0_shape
- if ratio_pad is None: # calculate from img0_shape
- gain = min(img1_shape[0] / img0_shape[0], img1_shape[1] / img0_shape[1]) # gain = old / new
- pad = (img1_shape[1] - img0_shape[1] * gain) / 2, (img1_shape[0] - img0_shape[0] * gain) / 2 # wh padding
- else:
- gain = ratio_pad[0][0]
- pad = ratio_pad[1]
-
- coords[:, [0, 2]] -= pad[0] # x padding
- coords[:, [1, 3]] -= pad[1] # y padding
- coords[:, :4] /= gain
- clip_coords(coords, img0_shape)
- return coords
-
-
-def clip_coords(boxes, img_shape):
- # Clip xyxy bounding boxes to image shape (height, width)
- boxes[:, 0].clamp_(0, img_shape[1]) # x1
- boxes[:, 1].clamp_(0, img_shape[0]) # y1
- boxes[:, 2].clamp_(0, img_shape[1]) # x2
- boxes[:, 3].clamp_(0, img_shape[0]) # y2
-
-
-def bbox_iou(box1, box2, x1y1x2y2=True, GIoU=False, DIoU=False, CIoU=False, eps=1e-7):
- # Returns the IoU of box1 to box2. box1 is 4, box2 is nx4
- box2 = box2.T
-
- # Get the coordinates of bounding boxes
- if x1y1x2y2: # x1, y1, x2, y2 = box1
- b1_x1, b1_y1, b1_x2, b1_y2 = box1[0], box1[1], box1[2], box1[3]
- b2_x1, b2_y1, b2_x2, b2_y2 = box2[0], box2[1], box2[2], box2[3]
- else: # transform from xywh to xyxy
- b1_x1, b1_x2 = box1[0] - box1[2] / 2, box1[0] + box1[2] / 2
- b1_y1, b1_y2 = box1[1] - box1[3] / 2, box1[1] + box1[3] / 2
- b2_x1, b2_x2 = box2[0] - box2[2] / 2, box2[0] + box2[2] / 2
- b2_y1, b2_y2 = box2[1] - box2[3] / 2, box2[1] + box2[3] / 2
-
- # Intersection area
- inter = (torch.min(b1_x2, b2_x2) - torch.max(b1_x1, b2_x1)).clamp(0) * \
- (torch.min(b1_y2, b2_y2) - torch.max(b1_y1, b2_y1)).clamp(0)
-
- # Union Area
- w1, h1 = b1_x2 - b1_x1, b1_y2 - b1_y1 + eps
- w2, h2 = b2_x2 - b2_x1, b2_y2 - b2_y1 + eps
- union = w1 * h1 + w2 * h2 - inter + eps
-
- iou = inter / union
-
- if GIoU or DIoU or CIoU:
- cw = torch.max(b1_x2, b2_x2) - torch.min(b1_x1, b2_x1) # convex (smallest enclosing box) width
- ch = torch.max(b1_y2, b2_y2) - torch.min(b1_y1, b2_y1) # convex height
- if CIoU or DIoU: # Distance or Complete IoU https://arxiv.org/abs/1911.08287v1
- c2 = cw ** 2 + ch ** 2 + eps # convex diagonal squared
- rho2 = ((b2_x1 + b2_x2 - b1_x1 - b1_x2) ** 2 +
- (b2_y1 + b2_y2 - b1_y1 - b1_y2) ** 2) / 4 # center distance squared
- if DIoU:
- return iou - rho2 / c2 # DIoU
- elif CIoU: # https://github.com/Zzh-tju/DIoU-SSD-pytorch/blob/master/utils/box/box_utils.py#L47
- v = (4 / math.pi ** 2) * torch.pow(torch.atan(w2 / (h2 + eps)) - torch.atan(w1 / (h1 + eps)), 2)
- with torch.no_grad():
- alpha = v / (v - iou + (1 + eps))
- return iou - (rho2 / c2 + v * alpha) # CIoU
- else: # GIoU https://arxiv.org/pdf/1902.09630.pdf
- c_area = cw * ch + eps # convex area
- return iou - (c_area - union) / c_area # GIoU
- else:
- return iou # IoU
-
-
-
-
-def bbox_alpha_iou(box1, box2, x1y1x2y2=False, GIoU=False, DIoU=False, CIoU=False, alpha=2, eps=1e-9):
- # Returns the IoU of box1 to box2. box1 is 4, box2 is nx4
- box2 = box2.T
-
- # Get the coordinates of bounding boxes
- if x1y1x2y2: # x1, y1, x2, y2 = box1
- b1_x1, b1_y1, b1_x2, b1_y2 = box1[0], box1[1], box1[2], box1[3]
- b2_x1, b2_y1, b2_x2, b2_y2 = box2[0], box2[1], box2[2], box2[3]
- else: # transform from xywh to xyxy
- b1_x1, b1_x2 = box1[0] - box1[2] / 2, box1[0] + box1[2] / 2
- b1_y1, b1_y2 = box1[1] - box1[3] / 2, box1[1] + box1[3] / 2
- b2_x1, b2_x2 = box2[0] - box2[2] / 2, box2[0] + box2[2] / 2
- b2_y1, b2_y2 = box2[1] - box2[3] / 2, box2[1] + box2[3] / 2
-
- # Intersection area
- inter = (torch.min(b1_x2, b2_x2) - torch.max(b1_x1, b2_x1)).clamp(0) * \
- (torch.min(b1_y2, b2_y2) - torch.max(b1_y1, b2_y1)).clamp(0)
-
- # Union Area
- w1, h1 = b1_x2 - b1_x1, b1_y2 - b1_y1 + eps
- w2, h2 = b2_x2 - b2_x1, b2_y2 - b2_y1 + eps
- union = w1 * h1 + w2 * h2 - inter + eps
-
- # change iou into pow(iou+eps)
- # iou = inter / union
- iou = torch.pow(inter/union + eps, alpha)
- # beta = 2 * alpha
- if GIoU or DIoU or CIoU:
- cw = torch.max(b1_x2, b2_x2) - torch.min(b1_x1, b2_x1) # convex (smallest enclosing box) width
- ch = torch.max(b1_y2, b2_y2) - torch.min(b1_y1, b2_y1) # convex height
- if CIoU or DIoU: # Distance or Complete IoU https://arxiv.org/abs/1911.08287v1
- c2 = (cw ** 2 + ch ** 2) ** alpha + eps # convex diagonal
- rho_x = torch.abs(b2_x1 + b2_x2 - b1_x1 - b1_x2)
- rho_y = torch.abs(b2_y1 + b2_y2 - b1_y1 - b1_y2)
- rho2 = ((rho_x ** 2 + rho_y ** 2) / 4) ** alpha # center distance
- if DIoU:
- return iou - rho2 / c2 # DIoU
- elif CIoU: # https://github.com/Zzh-tju/DIoU-SSD-pytorch/blob/master/utils/box/box_utils.py#L47
- v = (4 / math.pi ** 2) * torch.pow(torch.atan(w2 / h2) - torch.atan(w1 / h1), 2)
- with torch.no_grad():
- alpha_ciou = v / ((1 + eps) - inter / union + v)
- # return iou - (rho2 / c2 + v * alpha_ciou) # CIoU
- return iou - (rho2 / c2 + torch.pow(v * alpha_ciou + eps, alpha)) # CIoU
- else: # GIoU https://arxiv.org/pdf/1902.09630.pdf
- # c_area = cw * ch + eps # convex area
- # return iou - (c_area - union) / c_area # GIoU
- c_area = torch.max(cw * ch + eps, union) # convex area
- return iou - torch.pow((c_area - union) / c_area + eps, alpha) # GIoU
- else:
- return iou # torch.log(iou+eps) or iou
-
-
-def box_iou(box1, box2):
- # https://github.com/pytorch/vision/blob/master/torchvision/ops/boxes.py
- """
- Return intersection-over-union (Jaccard index) of boxes.
- Both sets of boxes are expected to be in (x1, y1, x2, y2) format.
- Arguments:
- box1 (Tensor[N, 4])
- box2 (Tensor[M, 4])
- Returns:
- iou (Tensor[N, M]): the NxM matrix containing the pairwise
- IoU values for every element in boxes1 and boxes2
- """
-
- def box_area(box):
- # box = 4xn
- return (box[2] - box[0]) * (box[3] - box[1])
-
- area1 = box_area(box1.T)
- area2 = box_area(box2.T)
-
- # inter(N,M) = (rb(N,M,2) - lt(N,M,2)).clamp(0).prod(2)
- inter = (torch.min(box1[:, None, 2:], box2[:, 2:]) - torch.max(box1[:, None, :2], box2[:, :2])).clamp(0).prod(2)
- return inter / (area1[:, None] + area2 - inter) # iou = inter / (area1 + area2 - inter)
-
-
-def wh_iou(wh1, wh2):
- # Returns the nxm IoU matrix. wh1 is nx2, wh2 is mx2
- wh1 = wh1[:, None] # [N,1,2]
- wh2 = wh2[None] # [1,M,2]
- inter = torch.min(wh1, wh2).prod(2) # [N,M]
- return inter / (wh1.prod(2) + wh2.prod(2) - inter) # iou = inter / (area1 + area2 - inter)
-
-
-def box_giou(box1, box2):
- """
- Return generalized intersection-over-union (Jaccard index) between two sets of boxes.
- Both sets of boxes are expected to be in ``(x1, y1, x2, y2)`` format with
- ``0 <= x1 < x2`` and ``0 <= y1 < y2``.
- Args:
- boxes1 (Tensor[N, 4]): first set of boxes
- boxes2 (Tensor[M, 4]): second set of boxes
- Returns:
- Tensor[N, M]: the NxM matrix containing the pairwise generalized IoU values
- for every element in boxes1 and boxes2
- """
-
- def box_area(box):
- # box = 4xn
- return (box[2] - box[0]) * (box[3] - box[1])
-
- area1 = box_area(box1.T)
- area2 = box_area(box2.T)
-
- inter = (torch.min(box1[:, None, 2:], box2[:, 2:]) - torch.max(box1[:, None, :2], box2[:, :2])).clamp(0).prod(2)
- union = (area1[:, None] + area2 - inter)
-
- iou = inter / union
-
- lti = torch.min(box1[:, None, :2], box2[:, :2])
- rbi = torch.max(box1[:, None, 2:], box2[:, 2:])
-
- whi = (rbi - lti).clamp(min=0) # [N,M,2]
- areai = whi[:, :, 0] * whi[:, :, 1]
-
- return iou - (areai - union) / areai
-
-
-def box_ciou(box1, box2, eps: float = 1e-7):
- """
- Return complete intersection-over-union (Jaccard index) between two sets of boxes.
- Both sets of boxes are expected to be in ``(x1, y1, x2, y2)`` format with
- ``0 <= x1 < x2`` and ``0 <= y1 < y2``.
- Args:
- boxes1 (Tensor[N, 4]): first set of boxes
- boxes2 (Tensor[M, 4]): second set of boxes
- eps (float, optional): small number to prevent division by zero. Default: 1e-7
- Returns:
- Tensor[N, M]: the NxM matrix containing the pairwise complete IoU values
- for every element in boxes1 and boxes2
- """
-
- def box_area(box):
- # box = 4xn
- return (box[2] - box[0]) * (box[3] - box[1])
-
- area1 = box_area(box1.T)
- area2 = box_area(box2.T)
-
- inter = (torch.min(box1[:, None, 2:], box2[:, 2:]) - torch.max(box1[:, None, :2], box2[:, :2])).clamp(0).prod(2)
- union = (area1[:, None] + area2 - inter)
-
- iou = inter / union
-
- lti = torch.min(box1[:, None, :2], box2[:, :2])
- rbi = torch.max(box1[:, None, 2:], box2[:, 2:])
-
- whi = (rbi - lti).clamp(min=0) # [N,M,2]
- diagonal_distance_squared = (whi[:, :, 0] ** 2) + (whi[:, :, 1] ** 2) + eps
-
- # centers of boxes
- x_p = (box1[:, None, 0] + box1[:, None, 2]) / 2
- y_p = (box1[:, None, 1] + box1[:, None, 3]) / 2
- x_g = (box2[:, 0] + box2[:, 2]) / 2
- y_g = (box2[:, 1] + box2[:, 3]) / 2
- # The distance between boxes' centers squared.
- centers_distance_squared = (x_p - x_g) ** 2 + (y_p - y_g) ** 2
-
- w_pred = box1[:, None, 2] - box1[:, None, 0]
- h_pred = box1[:, None, 3] - box1[:, None, 1]
-
- w_gt = box2[:, 2] - box2[:, 0]
- h_gt = box2[:, 3] - box2[:, 1]
-
- v = (4 / (torch.pi ** 2)) * torch.pow((torch.atan(w_gt / h_gt) - torch.atan(w_pred / h_pred)), 2)
- with torch.no_grad():
- alpha = v / (1 - iou + v + eps)
- return iou - (centers_distance_squared / diagonal_distance_squared) - alpha * v
-
-
-def box_diou(box1, box2, eps: float = 1e-7):
- """
- Return distance intersection-over-union (Jaccard index) between two sets of boxes.
- Both sets of boxes are expected to be in ``(x1, y1, x2, y2)`` format with
- ``0 <= x1 < x2`` and ``0 <= y1 < y2``.
- Args:
- boxes1 (Tensor[N, 4]): first set of boxes
- boxes2 (Tensor[M, 4]): second set of boxes
- eps (float, optional): small number to prevent division by zero. Default: 1e-7
- Returns:
- Tensor[N, M]: the NxM matrix containing the pairwise distance IoU values
- for every element in boxes1 and boxes2
- """
-
- def box_area(box):
- # box = 4xn
- return (box[2] - box[0]) * (box[3] - box[1])
-
- area1 = box_area(box1.T)
- area2 = box_area(box2.T)
-
- inter = (torch.min(box1[:, None, 2:], box2[:, 2:]) - torch.max(box1[:, None, :2], box2[:, :2])).clamp(0).prod(2)
- union = (area1[:, None] + area2 - inter)
-
- iou = inter / union
-
- lti = torch.min(box1[:, None, :2], box2[:, :2])
- rbi = torch.max(box1[:, None, 2:], box2[:, 2:])
-
- whi = (rbi - lti).clamp(min=0) # [N,M,2]
- diagonal_distance_squared = (whi[:, :, 0] ** 2) + (whi[:, :, 1] ** 2) + eps
-
- # centers of boxes
- x_p = (box1[:, None, 0] + box1[:, None, 2]) / 2
- y_p = (box1[:, None, 1] + box1[:, None, 3]) / 2
- x_g = (box2[:, 0] + box2[:, 2]) / 2
- y_g = (box2[:, 1] + box2[:, 3]) / 2
- # The distance between boxes' centers squared.
- centers_distance_squared = (x_p - x_g) ** 2 + (y_p - y_g) ** 2
-
- # The distance IoU is the IoU penalized by a normalized
- # distance between boxes' centers squared.
- return iou - (centers_distance_squared / diagonal_distance_squared)
-
-
-def non_max_suppression(prediction, conf_thres=0.25, iou_thres=0.45, classes=None, agnostic=False, multi_label=False,
- labels=()):
- """Runs Non-Maximum Suppression (NMS) on inference results
-
- Returns:
- list of detections, on (n,6) tensor per image [xyxy, conf, cls]
- """
-
- nc = prediction.shape[2] - 5 # number of classes
- xc = prediction[..., 4] > conf_thres # candidates
-
- # Settings
- min_wh, max_wh = 2, 4096 # (pixels) minimum and maximum box width and height
- max_det = 300 # maximum number of detections per image
- max_nms = 30000 # maximum number of boxes into torchvision.ops.nms()
- time_limit = 10.0 # seconds to quit after
- redundant = True # require redundant detections
- multi_label &= nc > 1 # multiple labels per box (adds 0.5ms/img)
- merge = False # use merge-NMS
-
- t = time.time()
- output = [torch.zeros((0, 6), device=prediction.device)] * prediction.shape[0]
- for xi, x in enumerate(prediction): # image index, image inference
- # Apply constraints
- # x[((x[..., 2:4] < min_wh) | (x[..., 2:4] > max_wh)).any(1), 4] = 0 # width-height
- x = x[xc[xi]] # confidence
-
- # Cat apriori labels if autolabelling
- if labels and len(labels[xi]):
- l = labels[xi]
- v = torch.zeros((len(l), nc + 5), device=x.device)
- v[:, :4] = l[:, 1:5] # box
- v[:, 4] = 1.0 # conf
- v[range(len(l)), l[:, 0].long() + 5] = 1.0 # cls
- x = torch.cat((x, v), 0)
-
- # If none remain process next image
- if not x.shape[0]:
- continue
-
- # Compute conf
- if nc == 1:
- x[:, 5:] = x[:, 4:5] # for models with one class, cls_loss is 0 and cls_conf is always 0.5,
- # so there is no need to multiply.
- else:
- x[:, 5:] *= x[:, 4:5] # conf = obj_conf * cls_conf
-
- # Box (center x, center y, width, height) to (x1, y1, x2, y2)
- box = xywh2xyxy(x[:, :4])
-
- # Detections matrix nx6 (xyxy, conf, cls)
- if multi_label:
- i, j = (x[:, 5:] > conf_thres).nonzero(as_tuple=False).T
- x = torch.cat((box[i], x[i, j + 5, None], j[:, None].float()), 1)
- else: # best class only
- conf, j = x[:, 5:].max(1, keepdim=True)
- x = torch.cat((box, conf, j.float()), 1)[conf.view(-1) > conf_thres]
-
- # Filter by class
- if classes is not None:
- x = x[(x[:, 5:6] == torch.tensor(classes, device=x.device)).any(1)]
-
- # Apply finite constraint
- # if not torch.isfinite(x).all():
- # x = x[torch.isfinite(x).all(1)]
-
- # Check shape
- n = x.shape[0] # number of boxes
- if not n: # no boxes
- continue
- elif n > max_nms: # excess boxes
- x = x[x[:, 4].argsort(descending=True)[:max_nms]] # sort by confidence
-
- # Batched NMS
- c = x[:, 5:6] * (0 if agnostic else max_wh) # classes
- boxes, scores = x[:, :4] + c, x[:, 4] # boxes (offset by class), scores
- i = torchvision.ops.nms(boxes, scores, iou_thres) # NMS
- if i.shape[0] > max_det: # limit detections
- i = i[:max_det]
- if merge and (1 < n < 3E3): # Merge NMS (boxes merged using weighted mean)
- # update boxes as boxes(i,4) = weights(i,n) * boxes(n,4)
- iou = box_iou(boxes[i], boxes) > iou_thres # iou matrix
- weights = iou * scores[None] # box weights
- x[i, :4] = torch.mm(weights, x[:, :4]).float() / weights.sum(1, keepdim=True) # merged boxes
- if redundant:
- i = i[iou.sum(1) > 1] # require redundancy
-
- output[xi] = x[i]
- if (time.time() - t) > time_limit:
- print(f'WARNING: NMS time limit {time_limit}s exceeded')
- break # time limit exceeded
-
- return output
-
-
-def non_max_suppression_kpt(prediction, conf_thres=0.25, iou_thres=0.45, classes=None, agnostic=False, multi_label=False,
- labels=(), kpt_label=False, nc=None, nkpt=None):
- """Runs Non-Maximum Suppression (NMS) on inference results
-
- Returns:
- list of detections, on (n,6) tensor per image [xyxy, conf, cls]
- """
- if nc is None:
- nc = prediction.shape[2] - 5 if not kpt_label else prediction.shape[2] - 56 # number of classes
- xc = prediction[..., 4] > conf_thres # candidates
-
- # Settings
- min_wh, max_wh = 2, 4096 # (pixels) minimum and maximum box width and height
- max_det = 300 # maximum number of detections per image
- max_nms = 30000 # maximum number of boxes into torchvision.ops.nms()
- time_limit = 10.0 # seconds to quit after
- redundant = True # require redundant detections
- multi_label &= nc > 1 # multiple labels per box (adds 0.5ms/img)
- merge = False # use merge-NMS
-
- t = time.time()
- output = [torch.zeros((0,6), device=prediction.device)] * prediction.shape[0]
- for xi, x in enumerate(prediction): # image index, image inference
- # Apply constraints
- # x[((x[..., 2:4] < min_wh) | (x[..., 2:4] > max_wh)).any(1), 4] = 0 # width-height
- x = x[xc[xi]] # confidence
-
- # Cat apriori labels if autolabelling
- if labels and len(labels[xi]):
- l = labels[xi]
- v = torch.zeros((len(l), nc + 5), device=x.device)
- v[:, :4] = l[:, 1:5] # box
- v[:, 4] = 1.0 # conf
- v[range(len(l)), l[:, 0].long() + 5] = 1.0 # cls
- x = torch.cat((x, v), 0)
-
- # If none remain process next image
- if not x.shape[0]:
- continue
-
- # Compute conf
- x[:, 5:5+nc] *= x[:, 4:5] # conf = obj_conf * cls_conf
-
- # Box (center x, center y, width, height) to (x1, y1, x2, y2)
- box = xywh2xyxy(x[:, :4])
-
- # Detections matrix nx6 (xyxy, conf, cls)
- if multi_label:
- i, j = (x[:, 5:] > conf_thres).nonzero(as_tuple=False).T
- x = torch.cat((box[i], x[i, j + 5, None], j[:, None].float()), 1)
- else: # best class only
- if not kpt_label:
- conf, j = x[:, 5:].max(1, keepdim=True)
- x = torch.cat((box, conf, j.float()), 1)[conf.view(-1) > conf_thres]
- else:
- kpts = x[:, 6:]
- conf, j = x[:, 5:6].max(1, keepdim=True)
- x = torch.cat((box, conf, j.float(), kpts), 1)[conf.view(-1) > conf_thres]
-
-
- # Filter by class
- if classes is not None:
- x = x[(x[:, 5:6] == torch.tensor(classes, device=x.device)).any(1)]
-
- # Apply finite constraint
- # if not torch.isfinite(x).all():
- # x = x[torch.isfinite(x).all(1)]
-
- # Check shape
- n = x.shape[0] # number of boxes
- if not n: # no boxes
- continue
- elif n > max_nms: # excess boxes
- x = x[x[:, 4].argsort(descending=True)[:max_nms]] # sort by confidence
-
- # Batched NMS
- c = x[:, 5:6] * (0 if agnostic else max_wh) # classes
- boxes, scores = x[:, :4] + c, x[:, 4] # boxes (offset by class), scores
- i = torchvision.ops.nms(boxes, scores, iou_thres) # NMS
- if i.shape[0] > max_det: # limit detections
- i = i[:max_det]
- if merge and (1 < n < 3E3): # Merge NMS (boxes merged using weighted mean)
- # update boxes as boxes(i,4) = weights(i,n) * boxes(n,4)
- iou = box_iou(boxes[i], boxes) > iou_thres # iou matrix
- weights = iou * scores[None] # box weights
- x[i, :4] = torch.mm(weights, x[:, :4]).float() / weights.sum(1, keepdim=True) # merged boxes
- if redundant:
- i = i[iou.sum(1) > 1] # require redundancy
-
- output[xi] = x[i]
- if (time.time() - t) > time_limit:
- print(f'WARNING: NMS time limit {time_limit}s exceeded')
- break # time limit exceeded
-
- return output
-
-
-def strip_optimizer(f='best.pt', s=''): # from utils.general import *; strip_optimizer()
- # Strip optimizer from 'f' to finalize training, optionally save as 's'
- x = torch.load(f, map_location=torch.device('cpu'))
- if x.get('ema'):
- x['model'] = x['ema'] # replace model with ema
- for k in 'optimizer', 'training_results', 'wandb_id', 'ema', 'updates': # keys
- x[k] = None
- x['epoch'] = -1
- x['model'].half() # to FP16
- for p in x['model'].parameters():
- p.requires_grad = False
- torch.save(x, s or f)
- mb = os.path.getsize(s or f) / 1E6 # filesize
- print(f"Optimizer stripped from {f},{(' saved as %s,' % s) if s else ''} {mb:.1f}MB")
-
-
-def print_mutation(hyp, results, yaml_file='hyp_evolved.yaml', bucket=''):
- # Print mutation results to evolve.txt (for use with train.py --evolve)
- a = '%10s' * len(hyp) % tuple(hyp.keys()) # hyperparam keys
- b = '%10.3g' * len(hyp) % tuple(hyp.values()) # hyperparam values
- c = '%10.4g' * len(results) % results # results (P, R, mAP@0.5, mAP@0.5:0.95, val_losses x 3)
- print('\n%s\n%s\nEvolved fitness: %s\n' % (a, b, c))
-
- if bucket:
- url = 'gs://%s/evolve.txt' % bucket
- if gsutil_getsize(url) > (os.path.getsize('evolve.txt') if os.path.exists('evolve.txt') else 0):
- os.system('gsutil cp %s .' % url) # download evolve.txt if larger than local
-
- with open('evolve.txt', 'a') as f: # append result
- f.write(c + b + '\n')
- x = np.unique(np.loadtxt('evolve.txt', ndmin=2), axis=0) # load unique rows
- x = x[np.argsort(-fitness(x))] # sort
- np.savetxt('evolve.txt', x, '%10.3g') # save sort by fitness
-
- # Save yaml
- for i, k in enumerate(hyp.keys()):
- hyp[k] = float(x[0, i + 7])
- with open(yaml_file, 'w') as f:
- results = tuple(x[0, :7])
- c = '%10.4g' * len(results) % results # results (P, R, mAP@0.5, mAP@0.5:0.95, val_losses x 3)
- f.write('# Hyperparameter Evolution Results\n# Generations: %g\n# Metrics: ' % len(x) + c + '\n\n')
- yaml.dump(hyp, f, sort_keys=False)
-
- if bucket:
- os.system('gsutil cp evolve.txt %s gs://%s' % (yaml_file, bucket)) # upload
-
-
-def apply_classifier(x, model, img, im0):
- # applies a second stage classifier to yolo outputs
- im0 = [im0] if isinstance(im0, np.ndarray) else im0
- for i, d in enumerate(x): # per image
- if d is not None and len(d):
- d = d.clone()
-
- # Reshape and pad cutouts
- b = xyxy2xywh(d[:, :4]) # boxes
- b[:, 2:] = b[:, 2:].max(1)[0].unsqueeze(1) # rectangle to square
- b[:, 2:] = b[:, 2:] * 1.3 + 30 # pad
- d[:, :4] = xywh2xyxy(b).long()
-
- # Rescale boxes from img_size to im0 size
- scale_coords(img.shape[2:], d[:, :4], im0[i].shape)
-
- # Classes
- pred_cls1 = d[:, 5].long()
- ims = []
- for j, a in enumerate(d): # per item
- cutout = im0[i][int(a[1]):int(a[3]), int(a[0]):int(a[2])]
- im = cv2.resize(cutout, (224, 224)) # BGR
- # cv2.imwrite('test%i.jpg' % j, cutout)
-
- im = im[:, :, ::-1].transpose(2, 0, 1) # BGR to RGB, to 3x416x416
- im = np.ascontiguousarray(im, dtype=np.float32) # uint8 to float32
- im /= 255.0 # 0 - 255 to 0.0 - 1.0
- ims.append(im)
-
- pred_cls2 = model(torch.Tensor(ims).to(d.device)).argmax(1) # classifier prediction
- x[i] = x[i][pred_cls1 == pred_cls2] # retain matching class detections
-
- return x
-
-
-def increment_path(path, exist_ok=True, sep=''):
- # Increment path, i.e. runs/exp --> runs/exp{sep}0, runs/exp{sep}1 etc.
- path = Path(path) # os-agnostic
- if (path.exists() and exist_ok) or (not path.exists()):
- return str(path)
- else:
- dirs = glob.glob(f"{path}{sep}*") # similar paths
- matches = [re.search(rf"%s{sep}(\d+)" % path.stem, d) for d in dirs]
- i = [int(m.groups()[0]) for m in matches if m] # indices
- n = max(i) + 1 if i else 2 # increment number
- return f"{path}{sep}{n}" # update path
diff --git a/spaces/Sandiago21/speech-to-speech-translation-greek/app.py b/spaces/Sandiago21/speech-to-speech-translation-greek/app.py
deleted file mode 100644
index 9a5dcc2a52998ede9e7bc877dbe6f2e2cc0b13fa..0000000000000000000000000000000000000000
--- a/spaces/Sandiago21/speech-to-speech-translation-greek/app.py
+++ /dev/null
@@ -1,165 +0,0 @@
-import gradio as gr
-import numpy as np
-import torch
-from datasets import load_dataset
-from transformers import SpeechT5ForTextToSpeech, SpeechT5HifiGan, SpeechT5Processor, pipeline
-
-
-device = "cuda:0" if torch.cuda.is_available() else "cpu"
-
-# load speech translation checkpoint
-asr_pipe = pipeline("automatic-speech-recognition", model="openai/whisper-large-v2", device=device)
-
-# load text-to-speech checkpoint and speaker embeddings
-model_id = "Sandiago21/speecht5_finetuned_google_fleurs_greek" # update with your model id
-# pipe = pipeline("automatic-speech-recognition", model=model_id)
-model = SpeechT5ForTextToSpeech.from_pretrained(model_id)
-vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")
-embeddings_dataset = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
-speaker_embeddings = torch.tensor(embeddings_dataset[7440]["xvector"]).unsqueeze(0)
-
-processor = SpeechT5Processor.from_pretrained(model_id)
-
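-# Rough romanization map applied to the text before tokenization: Greek digraphs and letters are replaced with Latin approximations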
-replacements = [
- ("ου", "u"),
- ("αυ", "af"),
- ("ευ", "ef"),
- ("ει", "i"),
- ("οι", "i"),
- ("αι", "e"),
- ("ού", "u"),
- ("εί", "i"),
- ("οί", "i"),
- ("αί", "e"),
- ("Ά", "A"),
- ("Έ", "E"),
- ("Ή", "H"),
- ("Ί", "I"),
- ("Ό", "O"),
- ("Ύ", "Y"),
- ("Ώ", "O"),
- ("ΐ", "i"),
- ("Α", "A"),
- ("Β", "B"),
- ("Γ", "G"),
- ("Δ", "L"),
- ("Ε", "Ε"),
- ("Ζ", "Z"),
- ("Η", "I"),
- ("Θ", "Th"),
- ("Ι", "I"),
- ("Κ", "K"),
- ("Λ", "L"),
- ("Μ", "M"),
- ("Ν", "N"),
- ("Ξ", "Ks"),
- ("Ο", "O"),
- ("Π", "P"),
- ("Ρ", "R"),
- ("Σ", "S"),
- ("Τ", "T"),
- ("Υ", "Y"),
- ("Φ", "F"),
- ("Χ", "X"),
- ("Ω", "O"),
- ("ά", "a"),
- ("έ", "e"),
- ("ή", "i"),
- ("ί", "i"),
- ("α", "a"),
- ("β", "v"),
- ("γ", "g"),
- ("δ", "d"),
- ("ε", "e"),
- ("ζ", "z"),
- ("η", "i"),
- ("θ", "th"),
- ("ι", "i"),
- ("κ", "k"),
- ("λ", "l"),
- ("μ", "m"),
- ("ν", "n"),
- ("ξ", "ks"),
- ("ο", "o"),
- ("π", "p"),
- ("ρ", "r"),
- ("ς", "s"),
- ("σ", "s"),
- ("τ", "t"),
- ("υ", "i"),
- ("φ", "f"),
- ("χ", "h"),
- ("ψ", "ps"),
- ("ω", "o"),
- ("ϊ", "i"),
- ("ϋ", "i"),
- ("ό", "o"),
- ("ύ", "i"),
- ("ώ", "o"),
- ("í", "i"),
- ("õ", "o"),
- ("Ε", "E"),
- ("Ψ", "Ps"),
-]
-
-def cleanup_text(text):
- for src, dst in replacements:
- text = text.replace(src, dst)
- return text
-
-def synthesize_speech(text):
- text = cleanup_text(text)
- inputs = processor(text=text, return_tensors="pt")
- speech = model.generate_speech(inputs["input_ids"].to(device), speaker_embeddings.to(device), vocoder=vocoder)
-
- return gr.Audio.update(value=(16000, speech.cpu().numpy()))
-
-def translate(audio):
- outputs = asr_pipe(audio, max_new_tokens=256, generate_kwargs={"task": "transcribe", "language": "greek"})
- return outputs["text"]
-
-
-def synthesise(text):
- text = cleanup_text(text)
- inputs = processor(text=text, return_tensors="pt")
- speech = model.generate_speech(inputs["input_ids"].to(device), speaker_embeddings.to(device), vocoder=vocoder)
- return speech.cpu()
-
-
-def speech_to_speech_translation(audio):
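-    # Cascade: Whisper produces Greek text from the input audio, SpeechT5 synthesizes it, and the waveform is scaled to 16-bit PCM at 16 kHz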
- translated_text = translate(audio)
- synthesised_speech = synthesise(translated_text)
- synthesised_speech = (synthesised_speech.numpy() * 32767).astype(np.int16)
- return 16000, synthesised_speech
-
-
-title = "Cascaded STST"
-description = """
-Demo for cascaded speech-to-speech translation (STST), mapping from source speech in any language to target speech in Greek. The demo uses OpenAI's [Whisper Large v2](https://huggingface.co/openai/whisper-large-v2) model for speech translation, and the [Sandiago21/speecht5_finetuned_google_fleurs_greek](https://huggingface.co/Sandiago21/speecht5_finetuned_google_fleurs_greek) checkpoint for text-to-speech, which is based on Microsoft's
-[SpeechT5 TTS](https://huggingface.co/microsoft/speecht5_tts) model, fine-tuned on a Greek audio dataset.
-
-"""
-
-demo = gr.Blocks()
-
-mic_translate = gr.Interface(
- fn=speech_to_speech_translation,
- inputs=gr.Audio(source="microphone", type="filepath"),
- outputs=gr.Audio(label="Generated Speech", type="numpy"),
- title=title,
- description=description,
-)
-
-file_translate = gr.Interface(
- fn=speech_to_speech_translation,
- inputs=gr.Audio(source="upload", type="filepath"),
- outputs=gr.Audio(label="Generated Speech", type="numpy"),
- examples=[["./example.wav"]],
- title=title,
- description=description,
-)
-
-with demo:
- gr.TabbedInterface([mic_translate, file_translate], ["Microphone", "Audio File"])
-
-demo.launch()
diff --git a/spaces/Sarst/VITS-Umamusume-voice-synthesizer2/modules.py b/spaces/Sarst/VITS-Umamusume-voice-synthesizer2/modules.py
deleted file mode 100644
index f5af1fd9a20dc03707889f360a39bb4b784a6df3..0000000000000000000000000000000000000000
--- a/spaces/Sarst/VITS-Umamusume-voice-synthesizer2/modules.py
+++ /dev/null
@@ -1,387 +0,0 @@
-import math
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-from torch.nn import Conv1d
-from torch.nn.utils import weight_norm, remove_weight_norm
-
-import commons
-from commons import init_weights, get_padding
-from transforms import piecewise_rational_quadratic_transform
-
-
-LRELU_SLOPE = 0.1
-
-
-class LayerNorm(nn.Module):
- def __init__(self, channels, eps=1e-5):
- super().__init__()
- self.channels = channels
- self.eps = eps
-
- self.gamma = nn.Parameter(torch.ones(channels))
- self.beta = nn.Parameter(torch.zeros(channels))
-
- def forward(self, x):
- x = x.transpose(1, -1)
- x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps)
- return x.transpose(1, -1)
-
-
-class ConvReluNorm(nn.Module):
- def __init__(self, in_channels, hidden_channels, out_channels, kernel_size, n_layers, p_dropout):
- super().__init__()
- self.in_channels = in_channels
- self.hidden_channels = hidden_channels
- self.out_channels = out_channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.p_dropout = p_dropout
- assert n_layers > 1, "Number of layers should be larger than 1."
-
- self.conv_layers = nn.ModuleList()
- self.norm_layers = nn.ModuleList()
- self.conv_layers.append(nn.Conv1d(in_channels, hidden_channels, kernel_size, padding=kernel_size//2))
- self.norm_layers.append(LayerNorm(hidden_channels))
- self.relu_drop = nn.Sequential(
- nn.ReLU(),
- nn.Dropout(p_dropout))
- for _ in range(n_layers-1):
- self.conv_layers.append(nn.Conv1d(hidden_channels, hidden_channels, kernel_size, padding=kernel_size//2))
- self.norm_layers.append(LayerNorm(hidden_channels))
- self.proj = nn.Conv1d(hidden_channels, out_channels, 1)
- self.proj.weight.data.zero_()
- self.proj.bias.data.zero_()
-
- def forward(self, x, x_mask):
- x_org = x
- for i in range(self.n_layers):
- x = self.conv_layers[i](x * x_mask)
- x = self.norm_layers[i](x)
- x = self.relu_drop(x)
- x = x_org + self.proj(x)
- return x * x_mask
-
-
-class DDSConv(nn.Module):
- """
- Dilated and Depth-Separable Convolution
- """
- def __init__(self, channels, kernel_size, n_layers, p_dropout=0.):
- super().__init__()
- self.channels = channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.p_dropout = p_dropout
-
- self.drop = nn.Dropout(p_dropout)
- self.convs_sep = nn.ModuleList()
- self.convs_1x1 = nn.ModuleList()
- self.norms_1 = nn.ModuleList()
- self.norms_2 = nn.ModuleList()
- for i in range(n_layers):
- dilation = kernel_size ** i
- padding = (kernel_size * dilation - dilation) // 2
- self.convs_sep.append(nn.Conv1d(channels, channels, kernel_size,
- groups=channels, dilation=dilation, padding=padding
- ))
- self.convs_1x1.append(nn.Conv1d(channels, channels, 1))
- self.norms_1.append(LayerNorm(channels))
- self.norms_2.append(LayerNorm(channels))
-
- def forward(self, x, x_mask, g=None):
- if g is not None:
- x = x + g
- for i in range(self.n_layers):
- y = self.convs_sep[i](x * x_mask)
- y = self.norms_1[i](y)
- y = F.gelu(y)
- y = self.convs_1x1[i](y)
- y = self.norms_2[i](y)
- y = F.gelu(y)
- y = self.drop(y)
- x = x + y
- return x * x_mask
-
-
-class WN(torch.nn.Module):
- def __init__(self, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=0, p_dropout=0):
- super(WN, self).__init__()
- assert(kernel_size % 2 == 1)
- self.hidden_channels =hidden_channels
- self.kernel_size = kernel_size,
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.gin_channels = gin_channels
- self.p_dropout = p_dropout
-
- self.in_layers = torch.nn.ModuleList()
- self.res_skip_layers = torch.nn.ModuleList()
- self.drop = nn.Dropout(p_dropout)
-
- if gin_channels != 0:
- cond_layer = torch.nn.Conv1d(gin_channels, 2*hidden_channels*n_layers, 1)
- self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name='weight')
-
- for i in range(n_layers):
- dilation = dilation_rate ** i
- padding = int((kernel_size * dilation - dilation) / 2)
- in_layer = torch.nn.Conv1d(hidden_channels, 2*hidden_channels, kernel_size,
- dilation=dilation, padding=padding)
- in_layer = torch.nn.utils.weight_norm(in_layer, name='weight')
- self.in_layers.append(in_layer)
-
- # last one is not necessary
- if i < n_layers - 1:
- res_skip_channels = 2 * hidden_channels
- else:
- res_skip_channels = hidden_channels
-
- res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1)
- res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name='weight')
- self.res_skip_layers.append(res_skip_layer)
-
- def forward(self, x, x_mask, g=None, **kwargs):
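- # Gated dilated-convolution stack (WaveNet-style) with optional global conditioning g; skip connections are accumulated into the output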
- output = torch.zeros_like(x)
- n_channels_tensor = torch.IntTensor([self.hidden_channels])
-
- if g is not None:
- g = self.cond_layer(g)
-
- for i in range(self.n_layers):
- x_in = self.in_layers[i](x)
- if g is not None:
- cond_offset = i * 2 * self.hidden_channels
- g_l = g[:,cond_offset:cond_offset+2*self.hidden_channels,:]
- else:
- g_l = torch.zeros_like(x_in)
-
- acts = commons.fused_add_tanh_sigmoid_multiply(
- x_in,
- g_l,
- n_channels_tensor)
- acts = self.drop(acts)
-
- res_skip_acts = self.res_skip_layers[i](acts)
- if i < self.n_layers - 1:
- res_acts = res_skip_acts[:,:self.hidden_channels,:]
- x = (x + res_acts) * x_mask
- output = output + res_skip_acts[:,self.hidden_channels:,:]
- else:
- output = output + res_skip_acts
- return output * x_mask
-
- def remove_weight_norm(self):
- if self.gin_channels != 0:
- torch.nn.utils.remove_weight_norm(self.cond_layer)
- for l in self.in_layers:
- torch.nn.utils.remove_weight_norm(l)
- for l in self.res_skip_layers:
- torch.nn.utils.remove_weight_norm(l)
-
-
-class ResBlock1(torch.nn.Module):
- def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)):
- super(ResBlock1, self).__init__()
- self.convs1 = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2],
- padding=get_padding(kernel_size, dilation[2])))
- ])
- self.convs1.apply(init_weights)
-
- self.convs2 = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1)))
- ])
- self.convs2.apply(init_weights)
-
- def forward(self, x, x_mask=None):
- for c1, c2 in zip(self.convs1, self.convs2):
- xt = F.leaky_relu(x, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c1(xt)
- xt = F.leaky_relu(xt, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c2(xt)
- x = xt + x
- if x_mask is not None:
- x = x * x_mask
- return x
-
- def remove_weight_norm(self):
- for l in self.convs1:
- remove_weight_norm(l)
- for l in self.convs2:
- remove_weight_norm(l)
-
-
-class ResBlock2(torch.nn.Module):
- def __init__(self, channels, kernel_size=3, dilation=(1, 3)):
- super(ResBlock2, self).__init__()
- self.convs = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1])))
- ])
- self.convs.apply(init_weights)
-
- def forward(self, x, x_mask=None):
- for c in self.convs:
- xt = F.leaky_relu(x, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c(xt)
- x = xt + x
- if x_mask is not None:
- x = x * x_mask
- return x
-
- def remove_weight_norm(self):
- for l in self.convs:
- remove_weight_norm(l)
-
-
-class Log(nn.Module):
- def forward(self, x, x_mask, reverse=False, **kwargs):
- if not reverse:
- y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask
- logdet = torch.sum(-y, [1, 2])
- return y, logdet
- else:
- x = torch.exp(x) * x_mask
- return x
-
-
-class Flip(nn.Module):
- def forward(self, x, *args, reverse=False, **kwargs):
- x = torch.flip(x, [1])
- if not reverse:
- logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device)
- return x, logdet
- else:
- return x
-
-
-class ElementwiseAffine(nn.Module):
- def __init__(self, channels):
- super().__init__()
- self.channels = channels
- self.m = nn.Parameter(torch.zeros(channels,1))
- self.logs = nn.Parameter(torch.zeros(channels,1))
-
- def forward(self, x, x_mask, reverse=False, **kwargs):
- if not reverse:
- y = self.m + torch.exp(self.logs) * x
- y = y * x_mask
- logdet = torch.sum(self.logs * x_mask, [1,2])
- return y, logdet
- else:
- x = (x - self.m) * torch.exp(-self.logs) * x_mask
- return x
-
-
-class ResidualCouplingLayer(nn.Module):
- def __init__(self,
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- p_dropout=0,
- gin_channels=0,
- mean_only=False):
- assert channels % 2 == 0, "channels should be divisible by 2"
- super().__init__()
- self.channels = channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.half_channels = channels // 2
- self.mean_only = mean_only
-
- self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1)
- self.enc = WN(hidden_channels, kernel_size, dilation_rate, n_layers, p_dropout=p_dropout, gin_channels=gin_channels)
- self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1)
- self.post.weight.data.zero_()
- self.post.bias.data.zero_()
-
- def forward(self, x, x_mask, g=None, reverse=False):
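- # Affine coupling layer: split the channels in half, predict (m, logs) for the second half from the first, then transform it (forward) or invert the transform (reverse)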
- x0, x1 = torch.split(x, [self.half_channels]*2, 1)
- h = self.pre(x0) * x_mask
- h = self.enc(h, x_mask, g=g)
- stats = self.post(h) * x_mask
- if not self.mean_only:
- m, logs = torch.split(stats, [self.half_channels]*2, 1)
- else:
- m = stats
- logs = torch.zeros_like(m)
-
- if not reverse:
- x1 = m + x1 * torch.exp(logs) * x_mask
- x = torch.cat([x0, x1], 1)
- logdet = torch.sum(logs, [1,2])
- return x, logdet
- else:
- x1 = (x1 - m) * torch.exp(-logs) * x_mask
- x = torch.cat([x0, x1], 1)
- return x
-
-
-class ConvFlow(nn.Module):
- def __init__(self, in_channels, filter_channels, kernel_size, n_layers, num_bins=10, tail_bound=5.0):
- super().__init__()
- self.in_channels = in_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.num_bins = num_bins
- self.tail_bound = tail_bound
- self.half_channels = in_channels // 2
-
- self.pre = nn.Conv1d(self.half_channels, filter_channels, 1)
- self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.)
- self.proj = nn.Conv1d(filter_channels, self.half_channels * (num_bins * 3 - 1), 1)
- self.proj.weight.data.zero_()
- self.proj.bias.data.zero_()
-
- def forward(self, x, x_mask, g=None, reverse=False):
- x0, x1 = torch.split(x, [self.half_channels]*2, 1)
- h = self.pre(x0)
- h = self.convs(h, x_mask, g=g)
- h = self.proj(h) * x_mask
-
- b, c, t = x0.shape
- h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?]
-
- unnormalized_widths = h[..., :self.num_bins] / math.sqrt(self.filter_channels)
- unnormalized_heights = h[..., self.num_bins:2*self.num_bins] / math.sqrt(self.filter_channels)
- unnormalized_derivatives = h[..., 2 * self.num_bins:]
-
- x1, logabsdet = piecewise_rational_quadratic_transform(x1,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=reverse,
- tails='linear',
- tail_bound=self.tail_bound
- )
-
- x = torch.cat([x0, x1], 1) * x_mask
- logdet = torch.sum(logabsdet * x_mask, [1,2])
- if not reverse:
- return x, logdet
- else:
- return x
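The flow modules deleted above (Log, Flip, ElementwiseAffine, ResidualCouplingLayer, ConvFlow) all follow the same contract: the forward pass returns the transformed tensor plus a log-determinant, and the reverse pass undoes the transform exactly. A minimal sketch of that convention follows, assuming toy shapes and a stand-in 1x1 convolution where the real layer uses the WaveNet-style `WN` encoder; it is an illustration of the invertibility, not the project's API.

```python
# Hedged sketch (not part of the deleted file): a minimal affine coupling step
# illustrating the forward/reverse convention used by ResidualCouplingLayer above.
# All shapes and the tiny stand-in network are illustrative assumptions.
import torch
import torch.nn as nn


class ToyCoupling(nn.Module):
    def __init__(self, channels, hidden=16):
        super().__init__()
        assert channels % 2 == 0
        self.half = channels // 2
        # stands in for the WaveNet-style encoder `WN` used in the real layer
        self.net = nn.Conv1d(self.half, self.half * 2, kernel_size=1)

    def forward(self, x, x_mask, reverse=False):
        x0, x1 = torch.split(x, [self.half] * 2, dim=1)
        m, logs = torch.split(self.net(x0) * x_mask, [self.half] * 2, dim=1)
        if not reverse:
            y1 = (m + x1 * torch.exp(logs)) * x_mask           # affine transform
            logdet = torch.sum(logs * x_mask, dim=[1, 2])      # log-determinant of the Jacobian
            return torch.cat([x0, y1], dim=1), logdet
        y1 = (x1 - m) * torch.exp(-logs) * x_mask              # exact inverse
        return torch.cat([x0, y1], dim=1)


x = torch.randn(2, 4, 8)
mask = torch.ones(2, 1, 8)
layer = ToyCoupling(4)
y, _ = layer(x, mask)
x_back = layer(y, mask, reverse=True)
print(torch.allclose(x, x_back, atol=1e-5))  # True: the reverse pass recovers the input
```

Running the sketch prints `True`, confirming that the reverse branch recovers the input up to floating-point error, which is exactly the property the flow stack above relies on.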
diff --git a/spaces/ServerX/PorcoDiaz/guidml.py b/spaces/ServerX/PorcoDiaz/guidml.py
deleted file mode 100644
index aa35e9f8e3386bfec61fc9ad6f807b458ab35882..0000000000000000000000000000000000000000
--- a/spaces/ServerX/PorcoDiaz/guidml.py
+++ /dev/null
@@ -1,710 +0,0 @@
-"""
-0416后的更新:
- 引入config中half
- 重建npy而不用填写
- v2支持
- 无f0模型支持
- 修复
-
- int16:
- 增加无索引支持
- f0算法改harvest(怎么看就只有这个会影响CPU占用),但是不这么改效果不好
-"""
-import os, sys, traceback, re
-
-import json
-
-now_dir = os.getcwd()
-sys.path.append(now_dir)
-from configs.config import Config
-
-Config = Config()
-
-import torch_directml
-import PySimpleGUI as sg
-import sounddevice as sd
-import noisereduce as nr
-import numpy as np
-from fairseq import checkpoint_utils
-import librosa, torch, pyworld, faiss, time, threading
-import torch.nn.functional as F
-import torchaudio.transforms as tat
-import scipy.signal as signal
-
-
-# import matplotlib.pyplot as plt
-from lib.infer_pack.models import (
- SynthesizerTrnMs256NSFsid,
- SynthesizerTrnMs256NSFsid_nono,
- SynthesizerTrnMs768NSFsid,
- SynthesizerTrnMs768NSFsid_nono,
-)
-from i18n import I18nAuto
-
-i18n = I18nAuto()
-device = torch_directml.device(torch_directml.default_device())
-current_dir = os.getcwd()
-
-
-class RVC:
- def __init__(
- self, key, hubert_path, pth_path, index_path, npy_path, index_rate
- ) -> None:
- """
-        Initialize the HuBERT model, retrieval index and synthesizer.
- """
- try:
- self.f0_up_key = key
- self.time_step = 160 / 16000 * 1000
- self.f0_min = 50
- self.f0_max = 1100
- self.f0_mel_min = 1127 * np.log(1 + self.f0_min / 700)
- self.f0_mel_max = 1127 * np.log(1 + self.f0_max / 700)
- self.sr = 16000
- self.window = 160
- if index_rate != 0:
- self.index = faiss.read_index(index_path)
- # self.big_npy = np.load(npy_path)
- self.big_npy = self.index.reconstruct_n(0, self.index.ntotal)
- print("index search enabled")
- self.index_rate = index_rate
- model_path = hubert_path
- print("load model(s) from {}".format(model_path))
- models, saved_cfg, task = checkpoint_utils.load_model_ensemble_and_task(
- [model_path],
- suffix="",
- )
- self.model = models[0]
- self.model = self.model.to(device)
- if Config.is_half:
- self.model = self.model.half()
- else:
- self.model = self.model.float()
- self.model.eval()
- cpt = torch.load(pth_path, map_location="cpu")
- self.tgt_sr = cpt["config"][-1]
- cpt["config"][-3] = cpt["weight"]["emb_g.weight"].shape[0] # n_spk
- self.if_f0 = cpt.get("f0", 1)
- self.version = cpt.get("version", "v1")
- if self.version == "v1":
- if self.if_f0 == 1:
- self.net_g = SynthesizerTrnMs256NSFsid(
- *cpt["config"], is_half=Config.is_half
- )
- else:
- self.net_g = SynthesizerTrnMs256NSFsid_nono(*cpt["config"])
- elif self.version == "v2":
- if self.if_f0 == 1:
- self.net_g = SynthesizerTrnMs768NSFsid(
- *cpt["config"], is_half=Config.is_half
- )
- else:
- self.net_g = SynthesizerTrnMs768NSFsid_nono(*cpt["config"])
- del self.net_g.enc_q
- print(self.net_g.load_state_dict(cpt["weight"], strict=False))
- self.net_g.eval().to(device)
- if Config.is_half:
- self.net_g = self.net_g.half()
- else:
- self.net_g = self.net_g.float()
- except:
- print(traceback.format_exc())
-
- def get_f0(self, x, f0_up_key, inp_f0=None):
- x_pad = 1
- f0_min = 50
- f0_max = 1100
- f0_mel_min = 1127 * np.log(1 + f0_min / 700)
- f0_mel_max = 1127 * np.log(1 + f0_max / 700)
- f0, t = pyworld.harvest(
- x.astype(np.double),
- fs=self.sr,
- f0_ceil=f0_max,
- f0_floor=f0_min,
- frame_period=10,
- )
- f0 = pyworld.stonemask(x.astype(np.double), f0, t, self.sr)
- f0 = signal.medfilt(f0, 3)
- f0 *= pow(2, f0_up_key / 12)
- # with open("test.txt","w")as f:f.write("\n".join([str(i)for i in f0.tolist()]))
-        tf0 = self.sr // self.window  # number of f0 frames per second
- if inp_f0 is not None:
- delta_t = np.round(
- (inp_f0[:, 0].max() - inp_f0[:, 0].min()) * tf0 + 1
- ).astype("int16")
- replace_f0 = np.interp(
- list(range(delta_t)), inp_f0[:, 0] * 100, inp_f0[:, 1]
- )
- shape = f0[x_pad * tf0 : x_pad * tf0 + len(replace_f0)].shape[0]
- f0[x_pad * tf0 : x_pad * tf0 + len(replace_f0)] = replace_f0[:shape]
- # with open("test_opt.txt","w")as f:f.write("\n".join([str(i)for i in f0.tolist()]))
- f0bak = f0.copy()
- f0_mel = 1127 * np.log(1 + f0 / 700)
- f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * 254 / (
- f0_mel_max - f0_mel_min
- ) + 1
- f0_mel[f0_mel <= 1] = 1
- f0_mel[f0_mel > 255] = 255
-        f0_coarse = np.rint(f0_mel).astype(np.int64)  # np.int was removed in NumPy >= 1.24
- return f0_coarse, f0bak # 1-0
-
- def infer(self, feats: torch.Tensor) -> np.ndarray:
- """
-        Inference: extract HuBERT features and run the synthesizer on one audio block.
- """
- audio = feats.clone().cpu().numpy()
- assert feats.dim() == 1, feats.dim()
- feats = feats.view(1, -1)
- padding_mask = torch.BoolTensor(feats.shape).fill_(False)
- if Config.is_half:
- feats = feats.half()
- else:
- feats = feats.float()
- inputs = {
- "source": feats.to(device),
- "padding_mask": padding_mask.to(device),
- "output_layer": 9 if self.version == "v1" else 12,
- }
- torch.cuda.synchronize()
- with torch.no_grad():
- logits = self.model.extract_features(**inputs)
- feats = (
- self.model.final_proj(logits[0]) if self.version == "v1" else logits[0]
- )
-
-        #### index optimization: blend in features retrieved from the faiss index
- try:
- if (
- hasattr(self, "index")
- and hasattr(self, "big_npy")
- and self.index_rate != 0
- ):
- npy = feats[0].cpu().numpy().astype("float32")
- score, ix = self.index.search(npy, k=8)
- weight = np.square(1 / score)
- weight /= weight.sum(axis=1, keepdims=True)
- npy = np.sum(self.big_npy[ix] * np.expand_dims(weight, axis=2), axis=1)
- if Config.is_half:
- npy = npy.astype("float16")
- feats = (
- torch.from_numpy(npy).unsqueeze(0).to(device) * self.index_rate
- + (1 - self.index_rate) * feats
- )
- else:
- print("index search FAIL or disabled")
- except:
- traceback.print_exc()
- print("index search FAIL")
- feats = F.interpolate(feats.permute(0, 2, 1), scale_factor=2).permute(0, 2, 1)
- torch.cuda.synchronize()
- print(feats.shape)
- if self.if_f0 == 1:
- pitch, pitchf = self.get_f0(audio, self.f0_up_key)
-            p_len = min(feats.shape[1], 13000, pitch.shape[0])  # cap the length to avoid running out of GPU memory
- else:
- pitch, pitchf = None, None
-            p_len = min(feats.shape[1], 13000)  # cap the length to avoid running out of GPU memory
- torch.cuda.synchronize()
- # print(feats.shape,pitch.shape)
- feats = feats[:, :p_len, :]
- if self.if_f0 == 1:
- pitch = pitch[:p_len]
- pitchf = pitchf[:p_len]
- pitch = torch.LongTensor(pitch).unsqueeze(0).to(device)
- pitchf = torch.FloatTensor(pitchf).unsqueeze(0).to(device)
- p_len = torch.LongTensor([p_len]).to(device)
- ii = 0 # sid
- sid = torch.LongTensor([ii]).to(device)
- with torch.no_grad():
- if self.if_f0 == 1:
- infered_audio = (
- self.net_g.infer(feats, p_len, pitch, pitchf, sid)[0][0, 0]
- .data.cpu()
- .float()
- )
- else:
- infered_audio = (
- self.net_g.infer(feats, p_len, sid)[0][0, 0].data.cpu().float()
- )
- torch.cuda.synchronize()
- return infered_audio
-
-
-class GUIConfig:
- def __init__(self) -> None:
- self.hubert_path: str = ""
- self.pth_path: str = ""
- self.index_path: str = ""
- self.npy_path: str = ""
- self.pitch: int = 12
- self.samplerate: int = 44100
- self.block_time: float = 1.0 # s
- self.buffer_num: int = 1
- self.threhold: int = -30
- self.crossfade_time: float = 0.08
- self.extra_time: float = 0.04
- self.I_noise_reduce = False
- self.O_noise_reduce = False
- self.index_rate = 0.3
-
-
-class GUI:
- def __init__(self) -> None:
- self.config = GUIConfig()
- self.flag_vc = False
-
- self.launcher()
-
- def load(self):
- (
- input_devices,
- output_devices,
- input_devices_indices,
- output_devices_indices,
- ) = self.get_devices()
- try:
- with open("values1.json", "r") as j:
- data = json.load(j)
- except:
- with open("values1.json", "w") as j:
- data = {
- "pth_path": "",
- "index_path": "",
- "sg_input_device": input_devices[
- input_devices_indices.index(sd.default.device[0])
- ],
- "sg_output_device": output_devices[
- output_devices_indices.index(sd.default.device[1])
- ],
- "threhold": "-45",
- "pitch": "0",
- "index_rate": "0",
- "block_time": "1",
- "crossfade_length": "0.04",
- "extra_time": "1",
- }
- return data
-
- def launcher(self):
- data = self.load()
- sg.theme("LightBlue3")
- input_devices, output_devices, _, _ = self.get_devices()
- layout = [
- [
- sg.Frame(
- title=i18n("Load model"),
- layout=[
- [
- sg.Input(
- default_text="hubert_base.pt",
- key="hubert_path",
- disabled=True,
- ),
- sg.FileBrowse(
- i18n("Hubert Model"),
- initial_folder=os.path.join(os.getcwd()),
- file_types=(("pt files", "*.pt"),),
- ),
- ],
- [
- sg.Input(
- default_text=data.get("pth_path", ""),
- key="pth_path",
- ),
- sg.FileBrowse(
- i18n("Select the .pth file"),
- initial_folder=os.path.join(os.getcwd(), "weights"),
- file_types=(("weight files", "*.pth"),),
- ),
- ],
- [
- sg.Input(
- default_text=data.get("index_path", ""),
- key="index_path",
- ),
- sg.FileBrowse(
- i18n("Select the .index file"),
- initial_folder=os.path.join(os.getcwd(), "logs"),
- file_types=(("index files", "*.index"),),
- ),
- ],
- [
- sg.Input(
-                                default_text="You don't need to fill this in.",
- key="npy_path",
- disabled=True,
- ),
- sg.FileBrowse(
- i18n("Select the .npy file"),
- initial_folder=os.path.join(os.getcwd(), "logs"),
- file_types=(("feature files", "*.npy"),),
- ),
- ],
- ],
- )
- ],
- [
- sg.Frame(
- layout=[
- [
- sg.Text(i18n("Input device")),
- sg.Combo(
- input_devices,
- key="sg_input_device",
- default_value=data.get("sg_input_device", ""),
- ),
- ],
- [
- sg.Text(i18n("Output device")),
- sg.Combo(
- output_devices,
- key="sg_output_device",
- default_value=data.get("sg_output_device", ""),
- ),
- ],
- ],
- title=i18n("Audio device (please use the same type of driver)"),
- )
- ],
- [
- sg.Frame(
- layout=[
- [
- sg.Text(i18n("Response threshold")),
- sg.Slider(
- range=(-60, 0),
- key="threhold",
- resolution=1,
- orientation="h",
- default_value=data.get("threhold", ""),
- ),
- ],
- [
- sg.Text(i18n("Pitch settings")),
- sg.Slider(
- range=(-24, 24),
- key="pitch",
- resolution=1,
- orientation="h",
- default_value=data.get("pitch", ""),
- ),
- ],
- [
- sg.Text(i18n("Index Rate")),
- sg.Slider(
- range=(0.0, 1.0),
- key="index_rate",
- resolution=0.01,
- orientation="h",
- default_value=data.get("index_rate", ""),
- ),
- ],
- ],
- title=i18n("General settings"),
- ),
- sg.Frame(
- layout=[
- [
- sg.Text(i18n("Sample length")),
- sg.Slider(
- range=(0.1, 3.0),
- key="block_time",
- resolution=0.1,
- orientation="h",
- default_value=data.get("block_time", ""),
- ),
- ],
- [
- sg.Text(i18n("Fade length")),
- sg.Slider(
- range=(0.01, 0.15),
- key="crossfade_length",
- resolution=0.01,
- orientation="h",
- default_value=data.get("crossfade_length", ""),
- ),
- ],
- [
- sg.Text(i18n("Extra推理时长")),
- sg.Slider(
- range=(0.05, 3.00),
- key="extra_time",
- resolution=0.01,
- orientation="h",
- default_value=data.get("extra_time", ""),
- ),
- ],
- [
- sg.Checkbox(i18n("Input noise reduction"), key="I_noise_reduce"),
- sg.Checkbox(i18n("Output noise reduction"), key="O_noise_reduce"),
- ],
- ],
- title=i18n("Performance settings"),
- ),
- ],
- [
- sg.Button(i18n("开始音频Convert"), key="start_vc"),
- sg.Button(i18n("停止音频Convert"), key="stop_vc"),
- sg.Text(i18n("Inference time (ms):")),
- sg.Text("0", key="infer_time"),
- ],
- ]
- self.window = sg.Window("RVC - GUI", layout=layout)
- self.event_handler()
-
- def event_handler(self):
- while True:
- event, values = self.window.read()
- if event == sg.WINDOW_CLOSED:
- self.flag_vc = False
- exit()
- if event == "start_vc" and self.flag_vc == False:
- if self.set_values(values) == True:
- print("using_cuda:" + str(torch.cuda.is_available()))
- self.start_vc()
- settings = {
- "pth_path": values["pth_path"],
- "index_path": values["index_path"],
- "sg_input_device": values["sg_input_device"],
- "sg_output_device": values["sg_output_device"],
- "threhold": values["threhold"],
- "pitch": values["pitch"],
- "index_rate": values["index_rate"],
- "block_time": values["block_time"],
- "crossfade_length": values["crossfade_length"],
- "extra_time": values["extra_time"],
- }
- with open("values1.json", "w") as j:
- json.dump(settings, j)
- if event == "stop_vc" and self.flag_vc == True:
- self.flag_vc = False
-
- def set_values(self, values):
- if len(values["pth_path"].strip()) == 0:
- sg.popup(i18n("Select the pth file"))
- return False
- if len(values["index_path"].strip()) == 0:
- sg.popup(i18n("Select the index file"))
- return False
- pattern = re.compile("[^\x00-\x7F]+")
- if pattern.findall(values["hubert_path"]):
- sg.popup(i18n("The hubert model path must not contain Chinese characters"))
- return False
- if pattern.findall(values["pth_path"]):
- sg.popup(i18n("The pth file path must not contain Chinese characters."))
- return False
- if pattern.findall(values["index_path"]):
- sg.popup(i18n("The index file path must not contain Chinese characters."))
- return False
- self.set_devices(values["sg_input_device"], values["sg_output_device"])
- self.config.hubert_path = os.path.join(current_dir, "hubert_base.pt")
- self.config.pth_path = values["pth_path"]
- self.config.index_path = values["index_path"]
- self.config.npy_path = values["npy_path"]
- self.config.threhold = values["threhold"]
- self.config.pitch = values["pitch"]
- self.config.block_time = values["block_time"]
- self.config.crossfade_time = values["crossfade_length"]
- self.config.extra_time = values["extra_time"]
- self.config.I_noise_reduce = values["I_noise_reduce"]
- self.config.O_noise_reduce = values["O_noise_reduce"]
- self.config.index_rate = values["index_rate"]
- return True
-
- def start_vc(self):
- torch.cuda.empty_cache()
- self.flag_vc = True
- self.block_frame = int(self.config.block_time * self.config.samplerate)
- self.crossfade_frame = int(self.config.crossfade_time * self.config.samplerate)
- self.sola_search_frame = int(0.012 * self.config.samplerate)
-        self.delay_frame = int(0.01 * self.config.samplerate)  # reserve a 0.01 s look-ahead
- self.extra_frame = int(self.config.extra_time * self.config.samplerate)
- self.rvc = None
- self.rvc = RVC(
- self.config.pitch,
- self.config.hubert_path,
- self.config.pth_path,
- self.config.index_path,
- self.config.npy_path,
- self.config.index_rate,
- )
- self.input_wav: np.ndarray = np.zeros(
- self.extra_frame
- + self.crossfade_frame
- + self.sola_search_frame
- + self.block_frame,
- dtype="float32",
- )
- self.output_wav: torch.Tensor = torch.zeros(
- self.block_frame, device=device, dtype=torch.float32
- )
- self.sola_buffer: torch.Tensor = torch.zeros(
- self.crossfade_frame, device=device, dtype=torch.float32
- )
- self.fade_in_window: torch.Tensor = torch.linspace(
- 0.0, 1.0, steps=self.crossfade_frame, device=device, dtype=torch.float32
- )
- self.fade_out_window: torch.Tensor = 1 - self.fade_in_window
- self.resampler1 = tat.Resample(
- orig_freq=self.config.samplerate, new_freq=16000, dtype=torch.float32
- )
- self.resampler2 = tat.Resample(
- orig_freq=self.rvc.tgt_sr,
- new_freq=self.config.samplerate,
- dtype=torch.float32,
- )
- thread_vc = threading.Thread(target=self.soundinput)
- thread_vc.start()
-
- def soundinput(self):
- """
-        Receive audio input and keep the stream open while conversion is running.
- """
- with sd.Stream(
- channels=2,
- callback=self.audio_callback,
- blocksize=self.block_frame,
- samplerate=self.config.samplerate,
- dtype="float32",
- ):
- while self.flag_vc:
- time.sleep(self.config.block_time)
- print("Audio block passed.")
- print("ENDing VC")
-
- def audio_callback(
- self, indata: np.ndarray, outdata: np.ndarray, frames, times, status
- ):
- """
-        Audio processing callback: noise gate, inference, SOLA alignment and crossfade.
- """
- start_time = time.perf_counter()
- indata = librosa.to_mono(indata.T)
- if self.config.I_noise_reduce:
- indata[:] = nr.reduce_noise(y=indata, sr=self.config.samplerate)
-
- """noise gate"""
- frame_length = 2048
- hop_length = 1024
- rms = librosa.feature.rms(
- y=indata, frame_length=frame_length, hop_length=hop_length
- )
- db_threhold = librosa.amplitude_to_db(rms, ref=1.0)[0] < self.config.threhold
- # print(rms.shape,db.shape,db)
- for i in range(db_threhold.shape[0]):
- if db_threhold[i]:
- indata[i * hop_length : (i + 1) * hop_length] = 0
- self.input_wav[:] = np.append(self.input_wav[self.block_frame :], indata)
-
- # infer
- print("input_wav:" + str(self.input_wav.shape))
- # print('infered_wav:'+str(infer_wav.shape))
- infer_wav: torch.Tensor = self.resampler2(
- self.rvc.infer(self.resampler1(torch.from_numpy(self.input_wav)))
- )[-self.crossfade_frame - self.sola_search_frame - self.block_frame :].to(
- device
- )
- print("infer_wav:" + str(infer_wav.shape))
-
- # SOLA algorithm from https://github.com/yxlllc/DDSP-SVC
- cor_nom = F.conv1d(
- infer_wav[None, None, : self.crossfade_frame + self.sola_search_frame],
- self.sola_buffer[None, None, :],
- )
- cor_den = torch.sqrt(
- F.conv1d(
- infer_wav[None, None, : self.crossfade_frame + self.sola_search_frame]
- ** 2,
- torch.ones(1, 1, self.crossfade_frame, device=device),
- )
- + 1e-8
- )
- sola_offset = torch.argmax(cor_nom[0, 0] / cor_den[0, 0])
- print("sola offset: " + str(int(sola_offset)))
-
- # crossfade
- self.output_wav[:] = infer_wav[sola_offset : sola_offset + self.block_frame]
- self.output_wav[: self.crossfade_frame] *= self.fade_in_window
- self.output_wav[: self.crossfade_frame] += self.sola_buffer[:]
- if sola_offset < self.sola_search_frame:
- self.sola_buffer[:] = (
- infer_wav[
- -self.sola_search_frame
- - self.crossfade_frame
- + sola_offset : -self.sola_search_frame
- + sola_offset
- ]
- * self.fade_out_window
- )
- else:
- self.sola_buffer[:] = (
- infer_wav[-self.crossfade_frame :] * self.fade_out_window
- )
-
- if self.config.O_noise_reduce:
- outdata[:] = np.tile(
- nr.reduce_noise(
- y=self.output_wav[:].cpu().numpy(), sr=self.config.samplerate
- ),
- (2, 1),
- ).T
- else:
- outdata[:] = self.output_wav[:].repeat(2, 1).t().cpu().numpy()
- total_time = time.perf_counter() - start_time
- self.window["infer_time"].update(int(total_time * 1000))
- print("infer time:" + str(total_time))
-
- def get_devices(self, update: bool = True):
-        """Return the input/output audio device names and their indices."""
- if update:
- sd._terminate()
- sd._initialize()
- devices = sd.query_devices()
- hostapis = sd.query_hostapis()
- for hostapi in hostapis:
- for device_idx in hostapi["devices"]:
- devices[device_idx]["hostapi_name"] = hostapi["name"]
- input_devices = [
- f"{d['name']} ({d['hostapi_name']})"
- for d in devices
- if d["max_input_channels"] > 0
- ]
- output_devices = [
- f"{d['name']} ({d['hostapi_name']})"
- for d in devices
- if d["max_output_channels"] > 0
- ]
- input_devices_indices = [
- d["index"] if "index" in d else d["name"]
- for d in devices
- if d["max_input_channels"] > 0
- ]
- output_devices_indices = [
- d["index"] if "index" in d else d["name"]
- for d in devices
- if d["max_output_channels"] > 0
- ]
- return (
- input_devices,
- output_devices,
- input_devices_indices,
- output_devices_indices,
- )
-
- def set_devices(self, input_device, output_device):
-        """Set the default input and output audio devices."""
- (
- input_devices,
- output_devices,
- input_device_indices,
- output_device_indices,
- ) = self.get_devices()
- sd.default.device[0] = input_device_indices[input_devices.index(input_device)]
- sd.default.device[1] = output_device_indices[
- output_devices.index(output_device)
- ]
- print("input device:" + str(sd.default.device[0]) + ":" + str(input_device))
- print("output device:" + str(sd.default.device[1]) + ":" + str(output_device))
-
-
-gui = GUI()
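The `audio_callback` above stitches consecutive inference blocks together with the SOLA trick credited to DDSP-SVC: search the first `crossfade + sola_search` samples of the new block for the offset that best matches the previous crossfade buffer (normalised cross-correlation), then crossfade at that offset. Below is a NumPy-only sketch of that offset search with made-up frame sizes; the deleted code does the same thing with `F.conv1d` on the device.

```python
# Hedged sketch (assumption-labelled, not the deleted GUI code): the SOLA offset
# search used in audio_callback, reduced to NumPy. Frame sizes are illustrative.
import numpy as np


def sola_offset(infer_wav, sola_buffer, search_frames):
    """Return the shift in [0, search_frames] where infer_wav best matches sola_buffer."""
    cf = len(sola_buffer)
    head = infer_wav[: cf + search_frames]
    # normalised cross-correlation between the buffer and each candidate window
    nom = np.correlate(head, sola_buffer, mode="valid")                 # numerator
    energy = np.convolve(head ** 2, np.ones(cf), mode="valid") + 1e-8   # window energy
    return int(np.argmax(nom / np.sqrt(energy)))


rng = np.random.default_rng(0)
crossfade, search = 64, 16
prev_tail = rng.standard_normal(crossfade).astype(np.float32)
new_block = 0.1 * rng.standard_normal(300).astype(np.float32)
new_block[7:7 + crossfade] += prev_tail          # hide the previous tail at offset 7
print(sola_offset(new_block, prev_tail, search))  # -> 7: the embedded tail is located
```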
diff --git a/spaces/Shubham2003/chatWithPdfs/app.py b/spaces/Shubham2003/chatWithPdfs/app.py
deleted file mode 100644
index 9fdb3016c7c1f20ec58069e482197e5b5306cddf..0000000000000000000000000000000000000000
--- a/spaces/Shubham2003/chatWithPdfs/app.py
+++ /dev/null
@@ -1,97 +0,0 @@
-import streamlit as st
-# from dotenv import load_dotenv
-from PyPDF2 import PdfReader
-from transformers import pipeline, BertTokenizer
-# import fitz
-
-tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
-def preprocess_input(input_text):
- tokens = tokenizer.tokenize(input_text)
-
- input_ids = tokenizer.convert_tokens_to_ids(tokens)
- input_ids = [tokenizer.cls_token_id] + input_ids + [tokenizer.sep_token_id]
-
-
- return input_ids
-
-def extract_text_from_pdf(pdf_docs, input_text):
- all_relevant_text = []
- for pdf in pdf_docs:
- pdf_reader = PdfReader(pdf)
- text=""
- for page in pdf_reader.pages:
- text += page.extract_text()
-
- chunk_size = 1000 # Set the desired chunk size
- chunks = [text[i:i+chunk_size] for i in range(0, len(text), chunk_size)]
-
- relevant_text = ""
- for chunk in chunks:
- chunk_relevant_text = answer_question(input_text, chunk)
- relevant_text += chunk_relevant_text
-
- # relevant_text = answer_question(input_text, text)
- all_relevant_text.append(relevant_text)
- return all_relevant_text
-
-def answer_question(question, context):
- summarization_pipeline = pipeline("summarization", model="t5-small", tokenizer="t5-small")
- input_text = f"question: {question} context: {context}"
-
- input_ids = preprocess_input(input_text)
- input_text = tokenizer.decode(input_ids)
- summarized_text = summarization_pipeline(input_text, max_length=1000, min_length=100, do_sample=True)[0]['summary_text']
- return summarized_text
-
-
-
-def main():
- # load_dotenv()
- st.set_page_config(page_title="Chat with multiple PDFs", page_icon=":books:")
-
-    st.header("Let's chat :books:")
- user_question = st.text_input("Ask a question about your documents:")
-
-
- if 'conversation_history' not in st.session_state:
- st.session_state.conversation_history = []
-
- if user_question:
- with st.spinner("Processing"):
-
- pdf_docs = st.session_state.pdf_docs
-
-
- st.session_state.conversation_history.append(('user', user_question))
-
-
-            document_texts = extract_text_from_pdf(pdf_docs, user_question)
-
-            summarized_text = answer_question(user_question, document_texts)
-
-
- st.session_state.conversation_history.append(('bot', summarized_text))
-
- with st.sidebar:
- st.subheader("Upload documents")
- pdf_docs = st.file_uploader(
- "Upload your PDFs here and click on 'Process'", accept_multiple_files=True)
- if st.button("Process"):
- st.session_state.pdf_docs = pdf_docs
- # for pdf in pdf_docs:
- # pdf_reader = PdfReader(pdf)
- # text=""
- # for page in pdf_reader.pages:
- # text += page.extract_text()
-
- # st.write("Extracted text: ",text)
-
- # Display conversation history
- for role, message in st.session_state.conversation_history:
- if role == 'user':
- st.write("You:", message)
- elif role == 'bot':
- st.write("Bot:", message)
-
-if __name__ == '__main__':
- main()
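The deleted app answers a question by slicing each PDF's extracted text into fixed-size chunks and feeding `question: ... context: ...` prompts to a t5-small summarization pipeline. A hedged standalone sketch of that loop follows, with the pipeline built once rather than on every call; the chunk size and sample text are assumptions.

```python
# Hedged sketch (not the deleted app): chunked, question-guided summarisation with
# a single t5-small pipeline instance.
from transformers import pipeline

summarizer = pipeline("summarization", model="t5-small", tokenizer="t5-small")


def summarize_chunks(question: str, text: str, chunk_size: int = 1000) -> str:
    chunks = [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]
    parts = []
    for chunk in chunks:
        prompt = f"question: {question} context: {chunk}"
        out = summarizer(prompt, max_length=150, min_length=20, do_sample=False)
        parts.append(out[0]["summary_text"])
    return " ".join(parts)


if __name__ == "__main__":
    sample = "PDF text extracted with PyPDF2 would go here. " * 50
    print(summarize_chunks("What is this document about?", sample))
```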
diff --git a/spaces/SmileyTatsu/Bleh/README.md b/spaces/SmileyTatsu/Bleh/README.md
deleted file mode 100644
index 40dee62be99e942478fb831d47a83a9b0bb087a6..0000000000000000000000000000000000000000
--- a/spaces/SmileyTatsu/Bleh/README.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-title: Bleh
-emoji: 🐢
-colorFrom: gray
-colorTo: blue
-sdk: docker
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/PIL/ImageEnhance.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/PIL/ImageEnhance.py
deleted file mode 100644
index 3b79d5c46a16ce89dfff1694f0121a743d8fa0c7..0000000000000000000000000000000000000000
--- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/PIL/ImageEnhance.py
+++ /dev/null
@@ -1,103 +0,0 @@
-#
-# The Python Imaging Library.
-# $Id$
-#
-# image enhancement classes
-#
-# For a background, see "Image Processing By Interpolation and
-# Extrapolation", Paul Haeberli and Douglas Voorhies. Available
-# at http://www.graficaobscura.com/interp/index.html
-#
-# History:
-# 1996-03-23 fl Created
-# 2009-06-16 fl Fixed mean calculation
-#
-# Copyright (c) Secret Labs AB 1997.
-# Copyright (c) Fredrik Lundh 1996.
-#
-# See the README file for information on usage and redistribution.
-#
-
-from . import Image, ImageFilter, ImageStat
-
-
-class _Enhance:
- def enhance(self, factor):
- """
- Returns an enhanced image.
-
- :param factor: A floating point value controlling the enhancement.
- Factor 1.0 always returns a copy of the original image,
- lower factors mean less color (brightness, contrast,
- etc), and higher values more. There are no restrictions
- on this value.
- :rtype: :py:class:`~PIL.Image.Image`
- """
- return Image.blend(self.degenerate, self.image, factor)
-
-
-class Color(_Enhance):
- """Adjust image color balance.
-
- This class can be used to adjust the colour balance of an image, in
- a manner similar to the controls on a colour TV set. An enhancement
- factor of 0.0 gives a black and white image. A factor of 1.0 gives
- the original image.
- """
-
- def __init__(self, image):
- self.image = image
- self.intermediate_mode = "L"
- if "A" in image.getbands():
- self.intermediate_mode = "LA"
-
- self.degenerate = image.convert(self.intermediate_mode).convert(image.mode)
-
-
-class Contrast(_Enhance):
- """Adjust image contrast.
-
- This class can be used to control the contrast of an image, similar
- to the contrast control on a TV set. An enhancement factor of 0.0
- gives a solid grey image. A factor of 1.0 gives the original image.
- """
-
- def __init__(self, image):
- self.image = image
- mean = int(ImageStat.Stat(image.convert("L")).mean[0] + 0.5)
- self.degenerate = Image.new("L", image.size, mean).convert(image.mode)
-
- if "A" in image.getbands():
- self.degenerate.putalpha(image.getchannel("A"))
-
-
-class Brightness(_Enhance):
- """Adjust image brightness.
-
- This class can be used to control the brightness of an image. An
- enhancement factor of 0.0 gives a black image. A factor of 1.0 gives the
- original image.
- """
-
- def __init__(self, image):
- self.image = image
- self.degenerate = Image.new(image.mode, image.size, 0)
-
- if "A" in image.getbands():
- self.degenerate.putalpha(image.getchannel("A"))
-
-
-class Sharpness(_Enhance):
- """Adjust image sharpness.
-
- This class can be used to adjust the sharpness of an image. An
- enhancement factor of 0.0 gives a blurred image, a factor of 1.0 gives the
- original image, and a factor of 2.0 gives a sharpened image.
- """
-
- def __init__(self, image):
- self.image = image
- self.degenerate = image.filter(ImageFilter.SMOOTH)
-
- if "A" in image.getbands():
- self.degenerate.putalpha(image.getchannel("A"))
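All four enhancers deleted above share the pattern documented in `_Enhance.enhance`: precompute a degenerate image once and blend it with the original, so a factor of 0.0 returns the degenerate image, 1.0 returns the original, and values above 1.0 extrapolate. A short usage example; the file names are assumptions.

```python
# Hedged usage example (file names are assumptions): every enhancer follows the
# same blend-against-a-degenerate-image pattern documented above.
from PIL import Image, ImageEnhance

img = Image.open("photo.jpg")

gray = ImageEnhance.Color(img).enhance(0.0)         # factor 0.0 -> black & white
punchy = ImageEnhance.Contrast(img).enhance(1.4)    # >1.0 extrapolates past the original
darker = ImageEnhance.Brightness(img).enhance(0.7)  # <1.0 moves toward the black image
sharper = ImageEnhance.Sharpness(img).enhance(2.0)  # 2.0 is documented as "sharpened"

for name, im in [("gray", gray), ("punchy", punchy), ("darker", darker), ("sharper", sharper)]:
    im.save(f"photo_{name}.jpg")
```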
diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/_pydev_runfiles/pydev_runfiles_unittest.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/_pydev_runfiles/pydev_runfiles_unittest.py
deleted file mode 100644
index fff1ef9c63cbb4e9fdab9f79c4cfe6f154e0226b..0000000000000000000000000000000000000000
--- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/_pydev_runfiles/pydev_runfiles_unittest.py
+++ /dev/null
@@ -1,150 +0,0 @@
-import unittest as python_unittest
-from _pydev_runfiles import pydev_runfiles_xml_rpc
-import time
-from _pydevd_bundle import pydevd_io
-import traceback
-from _pydevd_bundle.pydevd_constants import * # @UnusedWildImport
-from io import StringIO
-
-
-#=======================================================================================================================
-# PydevTextTestRunner
-#=======================================================================================================================
-class PydevTextTestRunner(python_unittest.TextTestRunner):
-
- def _makeResult(self):
- return PydevTestResult(self.stream, self.descriptions, self.verbosity)
-
-
-_PythonTextTestResult = python_unittest.TextTestRunner()._makeResult().__class__
-
-
-#=======================================================================================================================
-# PydevTestResult
-#=======================================================================================================================
-class PydevTestResult(_PythonTextTestResult):
-
- def addSubTest(self, test, subtest, err):
- """Called at the end of a subtest.
- 'err' is None if the subtest ended successfully, otherwise it's a
- tuple of values as returned by sys.exc_info().
- """
- _PythonTextTestResult.addSubTest(self, test, subtest, err)
- if err is not None:
- subdesc = subtest._subDescription()
- error = (test, self._exc_info_to_string(err, test))
- self._reportErrors([error], [], '', '%s %s' % (self.get_test_name(test), subdesc))
-
- def startTest(self, test):
- _PythonTextTestResult.startTest(self, test)
- self.buf = pydevd_io.start_redirect(keep_original_redirection=True, std='both')
- self.start_time = time.time()
- self._current_errors_stack = []
- self._current_failures_stack = []
-
- try:
- test_name = test.__class__.__name__ + "." + test._testMethodName
- except AttributeError:
- # Support for jython 2.1 (__testMethodName is pseudo-private in the test case)
- test_name = test.__class__.__name__ + "." + test._TestCase__testMethodName
-
- pydev_runfiles_xml_rpc.notifyStartTest(
- test.__pydev_pyfile__, test_name)
-
- def get_test_name(self, test):
- try:
- try:
- test_name = test.__class__.__name__ + "." + test._testMethodName
- except AttributeError:
- # Support for jython 2.1 (__testMethodName is pseudo-private in the test case)
- try:
- test_name = test.__class__.__name__ + "." + test._TestCase__testMethodName
- # Support for class/module exceptions (test is instance of _ErrorHolder)
- except:
- test_name = test.description.split()[1][1:-1] + ' <' + test.description.split()[0] + '>'
- except:
- traceback.print_exc()
- return ''
- return test_name
-
- def stopTest(self, test):
- end_time = time.time()
- pydevd_io.end_redirect(std='both')
-
- _PythonTextTestResult.stopTest(self, test)
-
- captured_output = self.buf.getvalue()
- del self.buf
- error_contents = ''
- test_name = self.get_test_name(test)
-
- diff_time = '%.2f' % (end_time - self.start_time)
-
- skipped = False
- outcome = getattr(test, '_outcome', None)
- if outcome is not None:
- skipped = bool(getattr(outcome, 'skipped', None))
-
- if skipped:
- pydev_runfiles_xml_rpc.notifyTest(
- 'skip', captured_output, error_contents, test.__pydev_pyfile__, test_name, diff_time)
- elif not self._current_errors_stack and not self._current_failures_stack:
- pydev_runfiles_xml_rpc.notifyTest(
- 'ok', captured_output, error_contents, test.__pydev_pyfile__, test_name, diff_time)
- else:
- self._reportErrors(self._current_errors_stack, self._current_failures_stack, captured_output, test_name)
-
- def _reportErrors(self, errors, failures, captured_output, test_name, diff_time=''):
- error_contents = []
- for test, s in errors + failures:
- if type(s) == type((1,)): # If it's a tuple (for jython 2.1)
- sio = StringIO()
- traceback.print_exception(s[0], s[1], s[2], file=sio)
- s = sio.getvalue()
- error_contents.append(s)
-
- sep = '\n' + self.separator1
- error_contents = sep.join(error_contents)
-
- if errors and not failures:
- try:
- pydev_runfiles_xml_rpc.notifyTest(
- 'error', captured_output, error_contents, test.__pydev_pyfile__, test_name, diff_time)
- except:
- file_start = error_contents.find('File "')
- file_end = error_contents.find('", ', file_start)
- if file_start != -1 and file_end != -1:
- file = error_contents[file_start + 6:file_end]
- else:
- file = ''
- pydev_runfiles_xml_rpc.notifyTest(
- 'error', captured_output, error_contents, file, test_name, diff_time)
-
- elif failures and not errors:
- pydev_runfiles_xml_rpc.notifyTest(
- 'fail', captured_output, error_contents, test.__pydev_pyfile__, test_name, diff_time)
-
- else: # Ok, we got both, errors and failures. Let's mark it as an error in the end.
- pydev_runfiles_xml_rpc.notifyTest(
- 'error', captured_output, error_contents, test.__pydev_pyfile__, test_name, diff_time)
-
- def addError(self, test, err):
- _PythonTextTestResult.addError(self, test, err)
- # Support for class/module exceptions (test is instance of _ErrorHolder)
- if not hasattr(self, '_current_errors_stack') or test.__class__.__name__ == '_ErrorHolder':
- # Not in start...end, so, report error now (i.e.: django pre/post-setup)
- self._reportErrors([self.errors[-1]], [], '', self.get_test_name(test))
- else:
- self._current_errors_stack.append(self.errors[-1])
-
- def addFailure(self, test, err):
- _PythonTextTestResult.addFailure(self, test, err)
- if not hasattr(self, '_current_failures_stack'):
- # Not in start...end, so, report error now (i.e.: django pre/post-setup)
- self._reportErrors([], [self.failures[-1]], '', self.get_test_name(test))
- else:
- self._current_failures_stack.append(self.failures[-1])
-
-
-class PydevTestSuite(python_unittest.TestSuite):
- pass
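`PydevTestResult` above works by overriding the standard `TextTestResult` lifecycle hooks (`startTest`, `stopTest`, `addError`, `addFailure`) to capture output and timing per test and forward them over XML-RPC. Below is a dependency-free sketch of the same subclassing pattern, reduced to printing per-test wall-clock time; the class and method names are illustrative, not pydev APIs.

```python
# Hedged sketch (standalone, no pydev dependencies): the TextTestResult
# subclassing pattern used above, reduced to per-test timing.
import time
import unittest


class TimingTestResult(unittest.TextTestResult):
    def startTest(self, test):
        super().startTest(test)
        self._t0 = time.time()          # same role as PydevTestResult.start_time

    def stopTest(self, test):
        elapsed = time.time() - self._t0
        super().stopTest(test)
        self.stream.writeln(f"{test.id()} took {elapsed:.3f}s")


class TimingTestRunner(unittest.TextTestRunner):
    resultclass = TimingTestResult      # plugged in the same way _makeResult does above


class ExampleTests(unittest.TestCase):
    def test_sleep(self):
        time.sleep(0.01)
        self.assertTrue(True)


if __name__ == "__main__":
    unittest.main(testRunner=TimingTestRunner(verbosity=2))
```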
diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/_pydevd_frame_eval/vendored/bytecode/tests/test_bytecode.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/_pydevd_frame_eval/vendored/bytecode/tests/test_bytecode.py
deleted file mode 100644
index c629f75e941d59b89bbe12d1a241ab125ce08e7a..0000000000000000000000000000000000000000
--- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/_pydevd_frame_eval/vendored/bytecode/tests/test_bytecode.py
+++ /dev/null
@@ -1,488 +0,0 @@
-
-import pytest
-from tests_python.debugger_unittest import IS_PY36_OR_GREATER, IS_CPYTHON
-from tests_python.debug_constants import TEST_CYTHON
-pytestmark = pytest.mark.skipif(not IS_PY36_OR_GREATER or not IS_CPYTHON or not TEST_CYTHON, reason='Requires CPython >= 3.6')
-#!/usr/bin/env python3
-import sys
-import textwrap
-import unittest
-from _pydevd_frame_eval.vendored.bytecode import Label, Instr, FreeVar, Bytecode, SetLineno, ConcreteInstr
-from _pydevd_frame_eval.vendored.bytecode.tests import TestCase, get_code
-
-
-class BytecodeTests(TestCase):
- maxDiff = 80 * 100
-
- def test_constructor(self):
- code = Bytecode()
- self.assertEqual(code.name, "")
- self.assertEqual(code.filename, "")
- self.assertEqual(code.flags, 0)
- self.assertEqual(code, [])
-
- def test_invalid_types(self):
- code = Bytecode()
- code.append(123)
- with self.assertRaises(ValueError):
- list(code)
- with self.assertRaises(ValueError):
- code.legalize()
- with self.assertRaises(ValueError):
- Bytecode([123])
-
- def test_legalize(self):
- code = Bytecode()
- code.first_lineno = 3
- code.extend(
- [
- Instr("LOAD_CONST", 7),
- Instr("STORE_NAME", "x"),
- Instr("LOAD_CONST", 8, lineno=4),
- Instr("STORE_NAME", "y"),
- Label(),
- SetLineno(5),
- Instr("LOAD_CONST", 9, lineno=6),
- Instr("STORE_NAME", "z"),
- ]
- )
-
- code.legalize()
- self.assertListEqual(
- code,
- [
- Instr("LOAD_CONST", 7, lineno=3),
- Instr("STORE_NAME", "x", lineno=3),
- Instr("LOAD_CONST", 8, lineno=4),
- Instr("STORE_NAME", "y", lineno=4),
- Label(),
- Instr("LOAD_CONST", 9, lineno=5),
- Instr("STORE_NAME", "z", lineno=5),
- ],
- )
-
- def test_slice(self):
- code = Bytecode()
- code.first_lineno = 3
- code.extend(
- [
- Instr("LOAD_CONST", 7),
- Instr("STORE_NAME", "x"),
- SetLineno(4),
- Instr("LOAD_CONST", 8),
- Instr("STORE_NAME", "y"),
- SetLineno(5),
- Instr("LOAD_CONST", 9),
- Instr("STORE_NAME", "z"),
- ]
- )
- sliced_code = code[:]
- self.assertEqual(code, sliced_code)
- for name in (
- "argcount",
- "posonlyargcount",
- "kwonlyargcount",
- "first_lineno",
- "name",
- "filename",
- "docstring",
- "cellvars",
- "freevars",
- "argnames",
- ):
- self.assertEqual(
- getattr(code, name, None), getattr(sliced_code, name, None)
- )
-
- def test_copy(self):
- code = Bytecode()
- code.first_lineno = 3
- code.extend(
- [
- Instr("LOAD_CONST", 7),
- Instr("STORE_NAME", "x"),
- SetLineno(4),
- Instr("LOAD_CONST", 8),
- Instr("STORE_NAME", "y"),
- SetLineno(5),
- Instr("LOAD_CONST", 9),
- Instr("STORE_NAME", "z"),
- ]
- )
-
- copy_code = code.copy()
- self.assertEqual(code, copy_code)
- for name in (
- "argcount",
- "posonlyargcount",
- "kwonlyargcount",
- "first_lineno",
- "name",
- "filename",
- "docstring",
- "cellvars",
- "freevars",
- "argnames",
- ):
- self.assertEqual(getattr(code, name, None), getattr(copy_code, name, None))
-
- def test_from_code(self):
- code = get_code(
- """
- if test:
- x = 1
- else:
- x = 2
- """
- )
- bytecode = Bytecode.from_code(code)
- label_else = Label()
- label_exit = Label()
- if sys.version_info < (3, 10):
- self.assertEqual(
- bytecode,
- [
- Instr("LOAD_NAME", "test", lineno=1),
- Instr("POP_JUMP_IF_FALSE", label_else, lineno=1),
- Instr("LOAD_CONST", 1, lineno=2),
- Instr("STORE_NAME", "x", lineno=2),
- Instr("JUMP_FORWARD", label_exit, lineno=2),
- label_else,
- Instr("LOAD_CONST", 2, lineno=4),
- Instr("STORE_NAME", "x", lineno=4),
- label_exit,
- Instr("LOAD_CONST", None, lineno=4),
- Instr("RETURN_VALUE", lineno=4),
- ],
- )
- # Control flow handling appears to have changed under Python 3.10
- else:
- self.assertEqual(
- bytecode,
- [
- Instr("LOAD_NAME", "test", lineno=1),
- Instr("POP_JUMP_IF_FALSE", label_else, lineno=1),
- Instr("LOAD_CONST", 1, lineno=2),
- Instr("STORE_NAME", "x", lineno=2),
- Instr("LOAD_CONST", None, lineno=2),
- Instr("RETURN_VALUE", lineno=2),
- label_else,
- Instr("LOAD_CONST", 2, lineno=4),
- Instr("STORE_NAME", "x", lineno=4),
- Instr("LOAD_CONST", None, lineno=4),
- Instr("RETURN_VALUE", lineno=4),
- ],
- )
-
- def test_from_code_freevars(self):
- ns = {}
- exec(
- textwrap.dedent(
- """
- def create_func():
- x = 1
- def func():
- return x
- return func
-
- func = create_func()
- """
- ),
- ns,
- ns,
- )
- code = ns["func"].__code__
-
- bytecode = Bytecode.from_code(code)
- self.assertEqual(
- bytecode,
- [
- Instr("LOAD_DEREF", FreeVar("x"), lineno=5),
- Instr("RETURN_VALUE", lineno=5),
- ],
- )
-
- def test_from_code_load_fast(self):
- code = get_code(
- """
- def func():
- x = 33
- y = x
- """,
- function=True,
- )
- code = Bytecode.from_code(code)
- self.assertEqual(
- code,
- [
- Instr("LOAD_CONST", 33, lineno=2),
- Instr("STORE_FAST", "x", lineno=2),
- Instr("LOAD_FAST", "x", lineno=3),
- Instr("STORE_FAST", "y", lineno=3),
- Instr("LOAD_CONST", None, lineno=3),
- Instr("RETURN_VALUE", lineno=3),
- ],
- )
-
- def test_setlineno(self):
- # x = 7
- # y = 8
- # z = 9
- code = Bytecode()
- code.first_lineno = 3
- code.extend(
- [
- Instr("LOAD_CONST", 7),
- Instr("STORE_NAME", "x"),
- SetLineno(4),
- Instr("LOAD_CONST", 8),
- Instr("STORE_NAME", "y"),
- SetLineno(5),
- Instr("LOAD_CONST", 9),
- Instr("STORE_NAME", "z"),
- ]
- )
-
- concrete = code.to_concrete_bytecode()
- self.assertEqual(concrete.consts, [7, 8, 9])
- self.assertEqual(concrete.names, ["x", "y", "z"])
- self.assertListEqual(
- list(concrete),
- [
- ConcreteInstr("LOAD_CONST", 0, lineno=3),
- ConcreteInstr("STORE_NAME", 0, lineno=3),
- ConcreteInstr("LOAD_CONST", 1, lineno=4),
- ConcreteInstr("STORE_NAME", 1, lineno=4),
- ConcreteInstr("LOAD_CONST", 2, lineno=5),
- ConcreteInstr("STORE_NAME", 2, lineno=5),
- ],
- )
-
- def test_to_code(self):
- code = Bytecode()
- code.first_lineno = 50
- code.extend(
- [
- Instr("LOAD_NAME", "print"),
- Instr("LOAD_CONST", "%s"),
- Instr("LOAD_GLOBAL", "a"),
- Instr("BINARY_MODULO"),
- Instr("CALL_FUNCTION", 1),
- Instr("RETURN_VALUE"),
- ]
- )
- co = code.to_code()
- # hopefully this is obvious from inspection? :-)
- self.assertEqual(co.co_stacksize, 3)
-
- co = code.to_code(stacksize=42)
- self.assertEqual(co.co_stacksize, 42)
-
- def test_negative_size_unary(self):
- opnames = (
- "UNARY_POSITIVE",
- "UNARY_NEGATIVE",
- "UNARY_NOT",
- "UNARY_INVERT",
- )
- for opname in opnames:
- with self.subTest():
- code = Bytecode()
- code.first_lineno = 1
- code.extend([Instr(opname)])
- with self.assertRaises(RuntimeError):
- code.compute_stacksize()
-
- def test_negative_size_unary_with_disable_check_of_pre_and_post(self):
- opnames = (
- "UNARY_POSITIVE",
- "UNARY_NEGATIVE",
- "UNARY_NOT",
- "UNARY_INVERT",
- )
- for opname in opnames:
- with self.subTest():
- code = Bytecode()
- code.first_lineno = 1
- code.extend([Instr(opname)])
- co = code.to_code(check_pre_and_post=False)
- self.assertEqual(co.co_stacksize, 0)
-
- def test_negative_size_binary(self):
- opnames = (
- "BINARY_POWER",
- "BINARY_MULTIPLY",
- "BINARY_MATRIX_MULTIPLY",
- "BINARY_FLOOR_DIVIDE",
- "BINARY_TRUE_DIVIDE",
- "BINARY_MODULO",
- "BINARY_ADD",
- "BINARY_SUBTRACT",
- "BINARY_SUBSCR",
- "BINARY_LSHIFT",
- "BINARY_RSHIFT",
- "BINARY_AND",
- "BINARY_XOR",
- "BINARY_OR",
- )
- for opname in opnames:
- with self.subTest():
- code = Bytecode()
- code.first_lineno = 1
- code.extend([Instr("LOAD_CONST", 1), Instr(opname)])
- with self.assertRaises(RuntimeError):
- code.compute_stacksize()
-
- def test_negative_size_binary_with_disable_check_of_pre_and_post(self):
- opnames = (
- "BINARY_POWER",
- "BINARY_MULTIPLY",
- "BINARY_MATRIX_MULTIPLY",
- "BINARY_FLOOR_DIVIDE",
- "BINARY_TRUE_DIVIDE",
- "BINARY_MODULO",
- "BINARY_ADD",
- "BINARY_SUBTRACT",
- "BINARY_SUBSCR",
- "BINARY_LSHIFT",
- "BINARY_RSHIFT",
- "BINARY_AND",
- "BINARY_XOR",
- "BINARY_OR",
- )
- for opname in opnames:
- with self.subTest():
- code = Bytecode()
- code.first_lineno = 1
- code.extend([Instr("LOAD_CONST", 1), Instr(opname)])
- co = code.to_code(check_pre_and_post=False)
- self.assertEqual(co.co_stacksize, 1)
-
- def test_negative_size_call(self):
- code = Bytecode()
- code.first_lineno = 1
- code.extend([Instr("CALL_FUNCTION", 0)])
- with self.assertRaises(RuntimeError):
- code.compute_stacksize()
-
- def test_negative_size_unpack(self):
- opnames = (
- "UNPACK_SEQUENCE",
- "UNPACK_EX",
- )
- for opname in opnames:
- with self.subTest():
- code = Bytecode()
- code.first_lineno = 1
- code.extend([Instr(opname, 1)])
- with self.assertRaises(RuntimeError):
- code.compute_stacksize()
-
- def test_negative_size_build(self):
- opnames = (
- "BUILD_TUPLE",
- "BUILD_LIST",
- "BUILD_SET",
- )
- if sys.version_info >= (3, 6):
- opnames = (*opnames, "BUILD_STRING")
-
- for opname in opnames:
- with self.subTest():
- code = Bytecode()
- code.first_lineno = 1
- code.extend([Instr(opname, 1)])
- with self.assertRaises(RuntimeError):
- code.compute_stacksize()
-
- def test_negative_size_build_map(self):
- code = Bytecode()
- code.first_lineno = 1
- code.extend([Instr("LOAD_CONST", 1), Instr("BUILD_MAP", 1)])
- with self.assertRaises(RuntimeError):
- code.compute_stacksize()
-
- def test_negative_size_build_map_with_disable_check_of_pre_and_post(self):
- code = Bytecode()
- code.first_lineno = 1
- code.extend([Instr("LOAD_CONST", 1), Instr("BUILD_MAP", 1)])
- co = code.to_code(check_pre_and_post=False)
- self.assertEqual(co.co_stacksize, 1)
-
- @unittest.skipIf(sys.version_info < (3, 6), "Inexistent opcode")
- def test_negative_size_build_const_map(self):
- code = Bytecode()
- code.first_lineno = 1
- code.extend([Instr("LOAD_CONST", ("a",)), Instr("BUILD_CONST_KEY_MAP", 1)])
- with self.assertRaises(RuntimeError):
- code.compute_stacksize()
-
- @unittest.skipIf(sys.version_info < (3, 6), "Inexistent opcode")
- def test_negative_size_build_const_map_with_disable_check_of_pre_and_post(self):
- code = Bytecode()
- code.first_lineno = 1
- code.extend([Instr("LOAD_CONST", ("a",)), Instr("BUILD_CONST_KEY_MAP", 1)])
- co = code.to_code(check_pre_and_post=False)
- self.assertEqual(co.co_stacksize, 1)
-
- def test_empty_dup(self):
- code = Bytecode()
- code.first_lineno = 1
- code.extend([Instr("DUP_TOP")])
- with self.assertRaises(RuntimeError):
- code.compute_stacksize()
-
- def test_not_enough_dup(self):
- code = Bytecode()
- code.first_lineno = 1
- code.extend([Instr("LOAD_CONST", 1), Instr("DUP_TOP_TWO")])
- with self.assertRaises(RuntimeError):
- code.compute_stacksize()
-
- def test_not_enough_rot(self):
- opnames = ["ROT_TWO", "ROT_THREE"]
- if sys.version_info >= (3, 8):
- opnames.append("ROT_FOUR")
- for opname in opnames:
- with self.subTest():
- code = Bytecode()
- code.first_lineno = 1
- code.extend([Instr("LOAD_CONST", 1), Instr(opname)])
- with self.assertRaises(RuntimeError):
- code.compute_stacksize()
-
- def test_not_enough_rot_with_disable_check_of_pre_and_post(self):
- opnames = ["ROT_TWO", "ROT_THREE"]
- if sys.version_info >= (3, 8):
- opnames.append("ROT_FOUR")
- for opname in opnames:
- with self.subTest():
- code = Bytecode()
- code.first_lineno = 1
- code.extend([Instr("LOAD_CONST", 1), Instr(opname)])
- co = code.to_code(check_pre_and_post=False)
- self.assertEqual(co.co_stacksize, 1)
-
- def test_for_iter_stack_effect_computation(self):
- with self.subTest():
- code = Bytecode()
- code.first_lineno = 1
- lab1 = Label()
- lab2 = Label()
- code.extend(
- [
- lab1,
- Instr("FOR_ITER", lab2),
- Instr("STORE_FAST", "i"),
- Instr("JUMP_ABSOLUTE", lab1),
- lab2,
- ]
- )
- with self.assertRaises(RuntimeError):
- # Use compute_stacksize since the code is so broken that conversion
- # to from concrete is actually broken
- code.compute_stacksize(check_pre_and_post=False)
-
-
-if __name__ == "__main__":
- unittest.main() # pragma: no cover
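The tests above exercise the vendored `bytecode` library: disassemble a code object into editable `Instr`/`Label`/`SetLineno` items with `Bytecode.from_code`, or build a sequence by hand and assemble it with `to_code`. A minimal round-trip sketch using the standalone `bytecode` package from PyPI (an assumption; pydevd ships its own copy under `_pydevd_frame_eval.vendored`):

```python
# Hedged sketch: the same API the tests above exercise, shown with the standalone
# `bytecode` package (pip install bytecode) so it runs on its own.
from bytecode import Bytecode, Instr


def add(a, b):
    return a + b


bc = Bytecode.from_code(add.__code__)            # disassemble into editable Instr objects
print([i.name for i in bc if isinstance(i, Instr)])

add.__code__ = bc.to_code()                      # reassemble and patch the function back
print(add(2, 3))                                 # -> 5, behaviour unchanged
```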
diff --git a/spaces/Superlang/ImageProcessor/annotator/uniformer/mmcv/cnn/resnet.py b/spaces/Superlang/ImageProcessor/annotator/uniformer/mmcv/cnn/resnet.py
deleted file mode 100644
index 1cb3ac057ee2d52c46fc94685b5d4e698aad8d5f..0000000000000000000000000000000000000000
--- a/spaces/Superlang/ImageProcessor/annotator/uniformer/mmcv/cnn/resnet.py
+++ /dev/null
@@ -1,316 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import logging
-
-import torch.nn as nn
-import torch.utils.checkpoint as cp
-
-from .utils import constant_init, kaiming_init
-
-
-def conv3x3(in_planes, out_planes, stride=1, dilation=1):
- """3x3 convolution with padding."""
- return nn.Conv2d(
- in_planes,
- out_planes,
- kernel_size=3,
- stride=stride,
- padding=dilation,
- dilation=dilation,
- bias=False)
-
-
-class BasicBlock(nn.Module):
- expansion = 1
-
- def __init__(self,
- inplanes,
- planes,
- stride=1,
- dilation=1,
- downsample=None,
- style='pytorch',
- with_cp=False):
- super(BasicBlock, self).__init__()
- assert style in ['pytorch', 'caffe']
- self.conv1 = conv3x3(inplanes, planes, stride, dilation)
- self.bn1 = nn.BatchNorm2d(planes)
- self.relu = nn.ReLU(inplace=True)
- self.conv2 = conv3x3(planes, planes)
- self.bn2 = nn.BatchNorm2d(planes)
- self.downsample = downsample
- self.stride = stride
- self.dilation = dilation
- assert not with_cp
-
- def forward(self, x):
- residual = x
-
- out = self.conv1(x)
- out = self.bn1(out)
- out = self.relu(out)
-
- out = self.conv2(out)
- out = self.bn2(out)
-
- if self.downsample is not None:
- residual = self.downsample(x)
-
- out += residual
- out = self.relu(out)
-
- return out
-
-
-class Bottleneck(nn.Module):
- expansion = 4
-
- def __init__(self,
- inplanes,
- planes,
- stride=1,
- dilation=1,
- downsample=None,
- style='pytorch',
- with_cp=False):
- """Bottleneck block.
-
- If style is "pytorch", the stride-two layer is the 3x3 conv layer, if
- it is "caffe", the stride-two layer is the first 1x1 conv layer.
- """
- super(Bottleneck, self).__init__()
- assert style in ['pytorch', 'caffe']
- if style == 'pytorch':
- conv1_stride = 1
- conv2_stride = stride
- else:
- conv1_stride = stride
- conv2_stride = 1
- self.conv1 = nn.Conv2d(
- inplanes, planes, kernel_size=1, stride=conv1_stride, bias=False)
- self.conv2 = nn.Conv2d(
- planes,
- planes,
- kernel_size=3,
- stride=conv2_stride,
- padding=dilation,
- dilation=dilation,
- bias=False)
-
- self.bn1 = nn.BatchNorm2d(planes)
- self.bn2 = nn.BatchNorm2d(planes)
- self.conv3 = nn.Conv2d(
- planes, planes * self.expansion, kernel_size=1, bias=False)
- self.bn3 = nn.BatchNorm2d(planes * self.expansion)
- self.relu = nn.ReLU(inplace=True)
- self.downsample = downsample
- self.stride = stride
- self.dilation = dilation
- self.with_cp = with_cp
-
- def forward(self, x):
-
- def _inner_forward(x):
- residual = x
-
- out = self.conv1(x)
- out = self.bn1(out)
- out = self.relu(out)
-
- out = self.conv2(out)
- out = self.bn2(out)
- out = self.relu(out)
-
- out = self.conv3(out)
- out = self.bn3(out)
-
- if self.downsample is not None:
- residual = self.downsample(x)
-
- out += residual
-
- return out
-
- if self.with_cp and x.requires_grad:
- out = cp.checkpoint(_inner_forward, x)
- else:
- out = _inner_forward(x)
-
- out = self.relu(out)
-
- return out
-
-
-def make_res_layer(block,
- inplanes,
- planes,
- blocks,
- stride=1,
- dilation=1,
- style='pytorch',
- with_cp=False):
- downsample = None
- if stride != 1 or inplanes != planes * block.expansion:
- downsample = nn.Sequential(
- nn.Conv2d(
- inplanes,
- planes * block.expansion,
- kernel_size=1,
- stride=stride,
- bias=False),
- nn.BatchNorm2d(planes * block.expansion),
- )
-
- layers = []
- layers.append(
- block(
- inplanes,
- planes,
- stride,
- dilation,
- downsample,
- style=style,
- with_cp=with_cp))
- inplanes = planes * block.expansion
- for _ in range(1, blocks):
- layers.append(
- block(inplanes, planes, 1, dilation, style=style, with_cp=with_cp))
-
- return nn.Sequential(*layers)
-
-
-class ResNet(nn.Module):
- """ResNet backbone.
-
- Args:
- depth (int): Depth of resnet, from {18, 34, 50, 101, 152}.
- num_stages (int): Resnet stages, normally 4.
- strides (Sequence[int]): Strides of the first block of each stage.
- dilations (Sequence[int]): Dilation of each stage.
- out_indices (Sequence[int]): Output from which stages.
- style (str): `pytorch` or `caffe`. If set to "pytorch", the stride-two
- layer is the 3x3 conv layer, otherwise the stride-two layer is
- the first 1x1 conv layer.
- frozen_stages (int): Stages to be frozen (all param fixed). -1 means
- not freezing any parameters.
- bn_eval (bool): Whether to set BN layers as eval mode, namely, freeze
- running stats (mean and var).
- bn_frozen (bool): Whether to freeze weight and bias of BN layers.
- with_cp (bool): Use checkpoint or not. Using checkpoint will save some
- memory while slowing down the training speed.
- """
-
- arch_settings = {
- 18: (BasicBlock, (2, 2, 2, 2)),
- 34: (BasicBlock, (3, 4, 6, 3)),
- 50: (Bottleneck, (3, 4, 6, 3)),
- 101: (Bottleneck, (3, 4, 23, 3)),
- 152: (Bottleneck, (3, 8, 36, 3))
- }
-
- def __init__(self,
- depth,
- num_stages=4,
- strides=(1, 2, 2, 2),
- dilations=(1, 1, 1, 1),
- out_indices=(0, 1, 2, 3),
- style='pytorch',
- frozen_stages=-1,
- bn_eval=True,
- bn_frozen=False,
- with_cp=False):
- super(ResNet, self).__init__()
- if depth not in self.arch_settings:
- raise KeyError(f'invalid depth {depth} for resnet')
- assert num_stages >= 1 and num_stages <= 4
- block, stage_blocks = self.arch_settings[depth]
- stage_blocks = stage_blocks[:num_stages]
- assert len(strides) == len(dilations) == num_stages
- assert max(out_indices) < num_stages
-
- self.out_indices = out_indices
- self.style = style
- self.frozen_stages = frozen_stages
- self.bn_eval = bn_eval
- self.bn_frozen = bn_frozen
- self.with_cp = with_cp
-
- self.inplanes = 64
- self.conv1 = nn.Conv2d(
- 3, 64, kernel_size=7, stride=2, padding=3, bias=False)
- self.bn1 = nn.BatchNorm2d(64)
- self.relu = nn.ReLU(inplace=True)
- self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
-
- self.res_layers = []
- for i, num_blocks in enumerate(stage_blocks):
- stride = strides[i]
- dilation = dilations[i]
- planes = 64 * 2**i
- res_layer = make_res_layer(
- block,
- self.inplanes,
- planes,
- num_blocks,
- stride=stride,
- dilation=dilation,
- style=self.style,
- with_cp=with_cp)
- self.inplanes = planes * block.expansion
- layer_name = f'layer{i + 1}'
- self.add_module(layer_name, res_layer)
- self.res_layers.append(layer_name)
-
- self.feat_dim = block.expansion * 64 * 2**(len(stage_blocks) - 1)
-
- def init_weights(self, pretrained=None):
- if isinstance(pretrained, str):
- logger = logging.getLogger()
- from ..runner import load_checkpoint
- load_checkpoint(self, pretrained, strict=False, logger=logger)
- elif pretrained is None:
- for m in self.modules():
- if isinstance(m, nn.Conv2d):
- kaiming_init(m)
- elif isinstance(m, nn.BatchNorm2d):
- constant_init(m, 1)
- else:
- raise TypeError('pretrained must be a str or None')
-
- def forward(self, x):
- x = self.conv1(x)
- x = self.bn1(x)
- x = self.relu(x)
- x = self.maxpool(x)
- outs = []
- for i, layer_name in enumerate(self.res_layers):
- res_layer = getattr(self, layer_name)
- x = res_layer(x)
- if i in self.out_indices:
- outs.append(x)
- if len(outs) == 1:
- return outs[0]
- else:
- return tuple(outs)
-
- def train(self, mode=True):
- super(ResNet, self).train(mode)
- if self.bn_eval:
- for m in self.modules():
- if isinstance(m, nn.BatchNorm2d):
- m.eval()
- if self.bn_frozen:
- for params in m.parameters():
- params.requires_grad = False
- if mode and self.frozen_stages >= 0:
- for param in self.conv1.parameters():
- param.requires_grad = False
- for param in self.bn1.parameters():
- param.requires_grad = False
- self.bn1.eval()
- self.bn1.weight.requires_grad = False
- self.bn1.bias.requires_grad = False
- for i in range(1, self.frozen_stages + 1):
- mod = getattr(self, f'layer{i}')
- mod.eval()
- for param in mod.parameters():
- param.requires_grad = False
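The `ResNet` docstring above describes a multi-stage backbone whose `out_indices` select which stage outputs are returned. A hedged sketch instantiating a ResNet-50 and checking the four feature maps follows; the import path is the deleted Space's own module path and is assumed to still resolve (it pulls in the surrounding mmcv helpers used by `init_weights`).

```python
# Hedged sketch (import path assumed from the deleted Space's layout):
# a ResNet-50 backbone returning feature maps from all four stages.
import torch
from annotator.uniformer.mmcv.cnn.resnet import ResNet

model = ResNet(depth=50, num_stages=4, out_indices=(0, 1, 2, 3))
model.init_weights(pretrained=None)   # Kaiming init for convs, constants for BN
model.eval()

x = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    feats = model(x)

for i, f in enumerate(feats):
    print(f"stage {i + 1}: {tuple(f.shape)}")
# Bottleneck blocks give strides 4/8/16/32 and channels 256/512/1024/2048:
# (1, 256, 56, 56), (1, 512, 28, 28), (1, 1024, 14, 14), (1, 2048, 7, 7)
```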
diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/utils/__init__.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/utils/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/utils/packaging.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/utils/packaging.py
deleted file mode 100644
index b9f6af4d17410ce7e1d573c41a1f04dd18ae275e..0000000000000000000000000000000000000000
--- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/utils/packaging.py
+++ /dev/null
@@ -1,57 +0,0 @@
-import functools
-import logging
-import re
-from typing import NewType, Optional, Tuple, cast
-
-from pip._vendor.packaging import specifiers, version
-from pip._vendor.packaging.requirements import Requirement
-
-NormalizedExtra = NewType("NormalizedExtra", str)
-
-logger = logging.getLogger(__name__)
-
-
-def check_requires_python(
- requires_python: Optional[str], version_info: Tuple[int, ...]
-) -> bool:
- """
- Check if the given Python version matches a "Requires-Python" specifier.
-
- :param version_info: A 3-tuple of ints representing a Python
- major-minor-micro version to check (e.g. `sys.version_info[:3]`).
-
- :return: `True` if the given Python version satisfies the requirement.
- Otherwise, return `False`.
-
- :raises InvalidSpecifier: If `requires_python` has an invalid format.
- """
- if requires_python is None:
- # The package provides no information
- return True
- requires_python_specifier = specifiers.SpecifierSet(requires_python)
-
- python_version = version.parse(".".join(map(str, version_info)))
- return python_version in requires_python_specifier
-
-
-@functools.lru_cache(maxsize=512)
-def get_requirement(req_string: str) -> Requirement:
- """Construct a packaging.Requirement object with caching"""
- # Parsing requirement strings is expensive, and is also expected to happen
- # with a low diversity of different arguments (at least relative the number
- # constructed). This method adds a cache to requirement object creation to
- # minimize repeated parsing of the same string to construct equivalent
- # Requirement objects.
- return Requirement(req_string)
-
-
-def safe_extra(extra: str) -> NormalizedExtra:
- """Convert an arbitrary string to a standard 'extra' name
-
- Any runs of non-alphanumeric characters are replaced with a single '_',
- and the result is always lowercased.
-
- This function is duplicated from ``pkg_resources``. Note that this is not
-    the same as either ``canonicalize_name`` or ``_egg_link_name``.
- """
- return cast(NormalizedExtra, re.sub("[^A-Za-z0-9.-]+", "_", extra).lower())
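`check_requires_python` and `get_requirement` above are thin wrappers over the `packaging` project, which pip vendors. A sketch of the same checks against the public `packaging` distribution (an assumption: behaviour matches the vendored copy):

```python
# Hedged sketch using the public `packaging` library (pip vendors an equivalent):
# the same Requires-Python check and requirement parsing shown above.
import sys
from packaging.requirements import Requirement
from packaging.specifiers import SpecifierSet
from packaging.version import Version


def check_requires_python(requires_python, version_info=sys.version_info[:3]):
    if requires_python is None:          # no metadata -> assume compatible
        return True
    spec = SpecifierSet(requires_python)
    return Version(".".join(map(str, version_info))) in spec


print(check_requires_python(">=3.8", (3, 7, 4)))   # False
print(check_requires_python(">=3.8", (3, 11, 2)))  # True

req = Requirement("requests[socks]>=2.28,<3")
print(req.name, sorted(req.extras), str(req.specifier))
# e.g. requests ['socks'] <3,>=2.28
```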
diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/urllib3/contrib/_securetransport/low_level.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/urllib3/contrib/_securetransport/low_level.py
deleted file mode 100644
index fa0b245d279e96724d5610f93bc3b3c8c22ca032..0000000000000000000000000000000000000000
--- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/urllib3/contrib/_securetransport/low_level.py
+++ /dev/null
@@ -1,397 +0,0 @@
-"""
-Low-level helpers for the SecureTransport bindings.
-
-These are Python functions that are not directly related to the high-level APIs
-but are necessary to get them to work. They include a whole bunch of low-level
-CoreFoundation messing about and memory management. The concerns in this module
-are almost entirely about trying to avoid memory leaks and providing
-appropriate and useful assistance to the higher-level code.
-"""
-import base64
-import ctypes
-import itertools
-import os
-import re
-import ssl
-import struct
-import tempfile
-
-from .bindings import CFConst, CoreFoundation, Security
-
-# This regular expression is used to grab PEM data out of a PEM bundle.
-_PEM_CERTS_RE = re.compile(
- b"-----BEGIN CERTIFICATE-----\n(.*?)\n-----END CERTIFICATE-----", re.DOTALL
-)
-
-
-def _cf_data_from_bytes(bytestring):
- """
- Given a bytestring, create a CFData object from it. This CFData object must
- be CFReleased by the caller.
- """
- return CoreFoundation.CFDataCreate(
- CoreFoundation.kCFAllocatorDefault, bytestring, len(bytestring)
- )
-
-
-def _cf_dictionary_from_tuples(tuples):
- """
- Given a list of Python tuples, create an associated CFDictionary.
- """
- dictionary_size = len(tuples)
-
- # We need to get the dictionary keys and values out in the same order.
- keys = (t[0] for t in tuples)
- values = (t[1] for t in tuples)
- cf_keys = (CoreFoundation.CFTypeRef * dictionary_size)(*keys)
- cf_values = (CoreFoundation.CFTypeRef * dictionary_size)(*values)
-
- return CoreFoundation.CFDictionaryCreate(
- CoreFoundation.kCFAllocatorDefault,
- cf_keys,
- cf_values,
- dictionary_size,
- CoreFoundation.kCFTypeDictionaryKeyCallBacks,
- CoreFoundation.kCFTypeDictionaryValueCallBacks,
- )
-
-
-def _cfstr(py_bstr):
- """
- Given a Python binary data, create a CFString.
- The string must be CFReleased by the caller.
- """
- c_str = ctypes.c_char_p(py_bstr)
- cf_str = CoreFoundation.CFStringCreateWithCString(
- CoreFoundation.kCFAllocatorDefault,
- c_str,
- CFConst.kCFStringEncodingUTF8,
- )
- return cf_str
-
-
-def _create_cfstring_array(lst):
- """
- Given a list of Python binary data, create an associated CFMutableArray.
- The array must be CFReleased by the caller.
-
- Raises an ssl.SSLError on failure.
- """
- cf_arr = None
- try:
- cf_arr = CoreFoundation.CFArrayCreateMutable(
- CoreFoundation.kCFAllocatorDefault,
- 0,
- ctypes.byref(CoreFoundation.kCFTypeArrayCallBacks),
- )
- if not cf_arr:
- raise MemoryError("Unable to allocate memory!")
- for item in lst:
- cf_str = _cfstr(item)
- if not cf_str:
- raise MemoryError("Unable to allocate memory!")
- try:
- CoreFoundation.CFArrayAppendValue(cf_arr, cf_str)
- finally:
- CoreFoundation.CFRelease(cf_str)
- except BaseException as e:
- if cf_arr:
- CoreFoundation.CFRelease(cf_arr)
- raise ssl.SSLError("Unable to allocate array: %s" % (e,))
- return cf_arr
-
-
-def _cf_string_to_unicode(value):
- """
- Creates a Unicode string from a CFString object. Used entirely for error
- reporting.
-
- Yes, it annoys me quite a lot that this function is this complex.
- """
- value_as_void_p = ctypes.cast(value, ctypes.POINTER(ctypes.c_void_p))
-
- string = CoreFoundation.CFStringGetCStringPtr(
- value_as_void_p, CFConst.kCFStringEncodingUTF8
- )
- if string is None:
- buffer = ctypes.create_string_buffer(1024)
- result = CoreFoundation.CFStringGetCString(
- value_as_void_p, buffer, 1024, CFConst.kCFStringEncodingUTF8
- )
- if not result:
- raise OSError("Error copying C string from CFStringRef")
- string = buffer.value
- if string is not None:
- string = string.decode("utf-8")
- return string
-
-
-def _assert_no_error(error, exception_class=None):
- """
- Checks the return code and throws an exception if there is an error to
- report
- """
- if error == 0:
- return
-
- cf_error_string = Security.SecCopyErrorMessageString(error, None)
- output = _cf_string_to_unicode(cf_error_string)
- CoreFoundation.CFRelease(cf_error_string)
-
- if output is None or output == u"":
- output = u"OSStatus %s" % error
-
- if exception_class is None:
- exception_class = ssl.SSLError
-
- raise exception_class(output)
-
-
-def _cert_array_from_pem(pem_bundle):
- """
- Given a bundle of certs in PEM format, turns them into a CFArray of certs
- that can be used to validate a cert chain.
- """
- # Normalize the PEM bundle's line endings.
- pem_bundle = pem_bundle.replace(b"\r\n", b"\n")
-
- der_certs = [
- base64.b64decode(match.group(1)) for match in _PEM_CERTS_RE.finditer(pem_bundle)
- ]
- if not der_certs:
- raise ssl.SSLError("No root certificates specified")
-
- cert_array = CoreFoundation.CFArrayCreateMutable(
- CoreFoundation.kCFAllocatorDefault,
- 0,
- ctypes.byref(CoreFoundation.kCFTypeArrayCallBacks),
- )
- if not cert_array:
- raise ssl.SSLError("Unable to allocate memory!")
-
- try:
- for der_bytes in der_certs:
- certdata = _cf_data_from_bytes(der_bytes)
- if not certdata:
- raise ssl.SSLError("Unable to allocate memory!")
- cert = Security.SecCertificateCreateWithData(
- CoreFoundation.kCFAllocatorDefault, certdata
- )
- CoreFoundation.CFRelease(certdata)
- if not cert:
- raise ssl.SSLError("Unable to build cert object!")
-
- CoreFoundation.CFArrayAppendValue(cert_array, cert)
- CoreFoundation.CFRelease(cert)
- except Exception:
- # We need to free the array before the exception bubbles further.
- # We only want to do that if an error occurs: otherwise, the caller
- # should free.
- CoreFoundation.CFRelease(cert_array)
- raise
-
- return cert_array
-
-
-def _is_cert(item):
- """
- Returns True if a given CFTypeRef is a certificate.
- """
- expected = Security.SecCertificateGetTypeID()
- return CoreFoundation.CFGetTypeID(item) == expected
-
-
-def _is_identity(item):
- """
- Returns True if a given CFTypeRef is an identity.
- """
- expected = Security.SecIdentityGetTypeID()
- return CoreFoundation.CFGetTypeID(item) == expected
-
-
-def _temporary_keychain():
- """
- This function creates a temporary Mac keychain that we can use to work with
- credentials. This keychain uses a one-time password and a temporary file to
- store the data. We expect to have one keychain per socket. The returned
- SecKeychainRef must be freed by the caller, including calling
- SecKeychainDelete.
-
- Returns a tuple of the SecKeychainRef and the path to the temporary
- directory that contains it.
- """
- # Unfortunately, SecKeychainCreate requires a path to a keychain. This
- # means we cannot use mkstemp to use a generic temporary file. Instead,
- # we're going to create a temporary directory and a filename to use there.
- # This filename will be 8 random bytes expanded into base64. We also need
- # some random bytes to password-protect the keychain we're creating, so we
- # ask for 40 random bytes.
- random_bytes = os.urandom(40)
- filename = base64.b16encode(random_bytes[:8]).decode("utf-8")
- password = base64.b16encode(random_bytes[8:]) # Must be valid UTF-8
- tempdirectory = tempfile.mkdtemp()
-
- keychain_path = os.path.join(tempdirectory, filename).encode("utf-8")
-
- # We now want to create the keychain itself.
- keychain = Security.SecKeychainRef()
- status = Security.SecKeychainCreate(
- keychain_path, len(password), password, False, None, ctypes.byref(keychain)
- )
- _assert_no_error(status)
-
- # Having created the keychain, we want to pass it off to the caller.
- return keychain, tempdirectory
-
-
-def _load_items_from_file(keychain, path):
- """
- Given a single file, loads all the trust objects from it into arrays and
- the keychain.
- Returns a tuple of lists: the first list is a list of identities, the
- second a list of certs.
- """
- certificates = []
- identities = []
- result_array = None
-
- with open(path, "rb") as f:
- raw_filedata = f.read()
-
- try:
- filedata = CoreFoundation.CFDataCreate(
- CoreFoundation.kCFAllocatorDefault, raw_filedata, len(raw_filedata)
- )
- result_array = CoreFoundation.CFArrayRef()
- result = Security.SecItemImport(
- filedata, # cert data
- None, # Filename, leaving it out for now
- None, # What the type of the file is, we don't care
- None, # what's in the file, we don't care
- 0, # import flags
- None, # key params, can include passphrase in the future
- keychain, # The keychain to insert into
- ctypes.byref(result_array), # Results
- )
- _assert_no_error(result)
-
- # A CFArray is not very useful to us as an intermediary
- # representation, so we are going to extract the objects we want
- # and then free the array. We don't need to keep hold of keys: the
- # keychain already has them!
- result_count = CoreFoundation.CFArrayGetCount(result_array)
- for index in range(result_count):
- item = CoreFoundation.CFArrayGetValueAtIndex(result_array, index)
- item = ctypes.cast(item, CoreFoundation.CFTypeRef)
-
- if _is_cert(item):
- CoreFoundation.CFRetain(item)
- certificates.append(item)
- elif _is_identity(item):
- CoreFoundation.CFRetain(item)
- identities.append(item)
- finally:
- if result_array:
- CoreFoundation.CFRelease(result_array)
-
- CoreFoundation.CFRelease(filedata)
-
- return (identities, certificates)
-
-
-def _load_client_cert_chain(keychain, *paths):
- """
- Load certificates and maybe keys from a number of files. Has the end goal
- of returning a CFArray containing one SecIdentityRef, and then zero or more
- SecCertificateRef objects, suitable for use as a client certificate trust
- chain.
- """
- # Ok, the strategy.
- #
- # This relies on knowing that macOS will not give you a SecIdentityRef
- # unless you have imported a key into a keychain. This is a somewhat
- # artificial limitation of macOS (for example, it doesn't necessarily
- # affect iOS), but there is nothing inside Security.framework that lets you
- # get a SecIdentityRef without having a key in a keychain.
- #
- # So the policy here is we take all the files and iterate them in order.
- # Each one will use SecItemImport to have one or more objects loaded from
- # it. We will also point at a keychain that macOS can use to work with the
- # private key.
- #
- # Once we have all the objects, we'll check what we actually have. If we
- # already have a SecIdentityRef in hand, fab: we'll use that. Otherwise,
- # we'll take the first certificate (which we assume to be our leaf) and
- # ask the keychain to give us a SecIdentityRef with that cert's associated
- # key.
- #
- # We'll then return a CFArray containing the trust chain: one
- # SecIdentityRef and then zero-or-more SecCertificateRef objects. The
- # responsibility for freeing this CFArray will be with the caller. This
- # CFArray must remain alive for the entire connection, so in practice it
- # will be stored with a single SSLSocket, along with the reference to the
- # keychain.
- certificates = []
- identities = []
-
- # Filter out bad paths.
- paths = (path for path in paths if path)
-
- try:
- for file_path in paths:
- new_identities, new_certs = _load_items_from_file(keychain, file_path)
- identities.extend(new_identities)
- certificates.extend(new_certs)
-
- # Ok, we have everything. The question is: do we have an identity? If
- # not, we want to grab one from the first cert we have.
- if not identities:
- new_identity = Security.SecIdentityRef()
- status = Security.SecIdentityCreateWithCertificate(
- keychain, certificates[0], ctypes.byref(new_identity)
- )
- _assert_no_error(status)
- identities.append(new_identity)
-
- # We now want to release the original certificate, as we no longer
- # need it.
- CoreFoundation.CFRelease(certificates.pop(0))
-
- # We now need to build a new CFArray that holds the trust chain.
- trust_chain = CoreFoundation.CFArrayCreateMutable(
- CoreFoundation.kCFAllocatorDefault,
- 0,
- ctypes.byref(CoreFoundation.kCFTypeArrayCallBacks),
- )
- for item in itertools.chain(identities, certificates):
- # ArrayAppendValue does a CFRetain on the item. That's fine,
- # because the finally block will release our other refs to them.
- CoreFoundation.CFArrayAppendValue(trust_chain, item)
-
- return trust_chain
- finally:
- for obj in itertools.chain(identities, certificates):
- CoreFoundation.CFRelease(obj)
-
-
-TLS_PROTOCOL_VERSIONS = {
- "SSLv2": (0, 2),
- "SSLv3": (3, 0),
- "TLSv1": (3, 1),
- "TLSv1.1": (3, 2),
- "TLSv1.2": (3, 3),
-}
-
-
-def _build_tls_unknown_ca_alert(version):
- """
- Builds a TLS alert record for an unknown CA.
- """
- ver_maj, ver_min = TLS_PROTOCOL_VERSIONS[version]
- severity_fatal = 0x02
- description_unknown_ca = 0x30
- msg = struct.pack(">BB", severity_fatal, description_unknown_ca)
- msg_len = len(msg)
- record_type_alert = 0x15
- record = struct.pack(">BBBH", record_type_alert, ver_maj, ver_min, msg_len) + msg
- return record
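The alert record produced by `_build_tls_unknown_ca_alert` above can be reproduced with the standard library alone; a small sketch of the same byte layout:

    import struct

    # TLS record header: content type 0x15 (alert), protocol version bytes, 2-byte length.
    # Alert body: severity 0x02 (fatal), description 0x30 (unknown CA).
    ver_maj, ver_min = 3, 3  # TLSv1.2, per the TLS_PROTOCOL_VERSIONS table above
    body = struct.pack(">BB", 0x02, 0x30)
    record = struct.pack(">BBBH", 0x15, ver_maj, ver_min, len(body)) + body
    print(record.hex())  # 15030300020230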
diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_path.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_path.py
deleted file mode 100644
index b99d9dadcfc3789629aa803d3256cd22d2873c29..0000000000000000000000000000000000000000
--- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_path.py
+++ /dev/null
@@ -1,37 +0,0 @@
-import os
-import sys
-from typing import Union
-
-_Path = Union[str, os.PathLike]
-
-
-def ensure_directory(path):
- """Ensure that the parent directory of `path` exists"""
- dirname = os.path.dirname(path)
- os.makedirs(dirname, exist_ok=True)
-
-
-def same_path(p1: _Path, p2: _Path) -> bool:
- """Differs from os.path.samefile because it does not require paths to exist.
- Purely string based (no comparison between i-nodes).
- >>> same_path("a/b", "./a/b")
- True
- >>> same_path("a/b", "a/./b")
- True
- >>> same_path("a/b", "././a/b")
- True
- >>> same_path("a/b", "./a/b/c/..")
- True
- >>> same_path("a/b", "../a/b/c")
- False
- >>> same_path("a", "a/b")
- False
- """
- return normpath(p1) == normpath(p2)
-
-
-def normpath(filename: _Path) -> str:
- """Normalize a file/dir name for comparison purposes."""
- # See pkg_resources.normalize_path for notes about cygwin
- file = os.path.abspath(filename) if sys.platform == 'cygwin' else filename
- return os.path.normcase(os.path.realpath(os.path.normpath(file)))
diff --git a/spaces/TandCAcceptMe/face-swap-docker/plugins/core.py b/spaces/TandCAcceptMe/face-swap-docker/plugins/core.py
deleted file mode 100644
index 88ef2dc22ba90c99d5e4cd2743c29aeccdb26b35..0000000000000000000000000000000000000000
--- a/spaces/TandCAcceptMe/face-swap-docker/plugins/core.py
+++ /dev/null
@@ -1,29 +0,0 @@
-# Core plugin
-# author: Vladislav Janvarev
-
-from chain_img_processor import ChainImgProcessor
-
-# start function
-def start(core:ChainImgProcessor):
- manifest = {
- "name": "Core plugin",
- "version": "2.0",
-
- "default_options": {
- "default_chain": "faceswap", # default chain to run
- "init_on_start": "faceswap,txt2clip,gfpgan,codeformer", # init these processors on start
- "is_demo_row_render": False,
- },
-
- }
- return manifest
-
-def start_with_options(core:ChainImgProcessor, manifest:dict):
- options = manifest["options"]
-
- core.default_chain = options["default_chain"]
- core.init_on_start = options["init_on_start"]
-
- core.is_demo_row_render= options["is_demo_row_render"]
-
- return manifest
diff --git a/spaces/TencentARC/VLog/models/whisper_model.py b/spaces/TencentARC/VLog/models/whisper_model.py
deleted file mode 100644
index 48753515fde56258854488b0d658ea3d8c501825..0000000000000000000000000000000000000000
--- a/spaces/TencentARC/VLog/models/whisper_model.py
+++ /dev/null
@@ -1,26 +0,0 @@
-import os
-import whisper
-
-def has_intersection(t1, t2):
- if t1[1] < t2[0] or t2[1] < t1[0]:
- return False
- else:
- return True
-
-class AudioTranslator():
- def __init__(self, model='base', device='cuda'):
- self.device = device
- self.model = whisper.load_model(model).to(device)
-
- def __call__(self, video_path):
- print("Extract the audio results.")
- audio_results = self.model.transcribe(video_path)["segments"]
- print("Finished.")
- return audio_results
-
- def match(self, audio_results, start, end):
- transcript = ''
- for res in audio_results:
- if has_intersection((start, end), (res["start"], res["end"])):
- transcript += res['text'] + ' '
- return transcript
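A possible usage sketch for the `AudioTranslator` wrapper above, assuming the `openai-whisper` package and ffmpeg are available; the import path and media file name here are placeholders:

    import torch
    from models.whisper_model import AudioTranslator  # hypothetical import path, mirroring the file above

    device = "cuda" if torch.cuda.is_available() else "cpu"
    translator = AudioTranslator(model="base", device=device)
    segments = translator("example_video.mp4")               # whisper segments with start/end/text
    print(translator.match(segments, start=10.0, end=20.0))  # transcript overlapping 10-20 s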
diff --git a/spaces/Tonic/indiansummer/theme_dropdown.py b/spaces/Tonic/indiansummer/theme_dropdown.py
deleted file mode 100644
index 6235388fd00549553df44028f3ccf03e946994ea..0000000000000000000000000000000000000000
--- a/spaces/Tonic/indiansummer/theme_dropdown.py
+++ /dev/null
@@ -1,57 +0,0 @@
-import os
-import pathlib
-
-from gradio.themes.utils import ThemeAsset
-
-
-def create_theme_dropdown():
- import gradio as gr
-
- asset_path = pathlib.Path(__file__).parent / "themes"
- themes = []
- for theme_asset in os.listdir(str(asset_path)):
- themes.append(
- (ThemeAsset(theme_asset), gr.Theme.load(str(asset_path / theme_asset)))
- )
-
- def make_else_if(theme_asset):
- return f"""
- else if (theme == '{str(theme_asset[0].version)}') {{
- var theme_css = `{theme_asset[1]._get_theme_css()}`
- }}"""
-
- head, tail = themes[0], themes[1:]
- if_statement = f"""
- if (theme == "{str(head[0].version)}") {{
- var theme_css = `{head[1]._get_theme_css()}`
- }} {" ".join(make_else_if(t) for t in tail)}
- """
-
- latest_to_oldest = sorted([t[0] for t in themes], key=lambda asset: asset.version)[
- ::-1
- ]
- latest_to_oldest = [str(t.version) for t in latest_to_oldest]
-
- component = gr.Dropdown(
- choices=latest_to_oldest,
- value=latest_to_oldest[0],
- render=False,
- label="Select Version",
- ).style(container=False)
-
- return (
- component,
- f"""
- (theme) => {{
- if (!document.querySelector('.theme-css')) {{
- var theme_elem = document.createElement('style');
- theme_elem.classList.add('theme-css');
- document.head.appendChild(theme_elem);
- }} else {{
- var theme_elem = document.querySelector('.theme-css');
- }}
- {if_statement}
- theme_elem.innerHTML = theme_css;
- }}
- """,
- )
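A hedged usage sketch for the `create_theme_dropdown` helper above, assuming a Gradio 3.x environment (matching the deprecated `.style()` call) in which event listeners still accept a `_js` argument, and a local `themes/` directory as the helper expects:

    import gradio as gr
    from theme_dropdown import create_theme_dropdown  # hypothetical import of the module above

    dropdown, switch_js = create_theme_dropdown()
    with gr.Blocks() as demo:
        dropdown.render()                                   # component was created with render=False
        dropdown.change(None, inputs=dropdown, outputs=None, _js=switch_js)
    demo.launch()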
diff --git a/spaces/XciD/te/README.md b/spaces/XciD/te/README.md
deleted file mode 100644
index c8abe35a949ecee8374d5461467c066d7e33f11d..0000000000000000000000000000000000000000
--- a/spaces/XciD/te/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Te
-emoji: 🦀
-colorFrom: blue
-colorTo: purple
-sdk: static
-sdk_version: 3.47.1
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Xule/ChuanhuChatGPT/modules/models.py b/spaces/Xule/ChuanhuChatGPT/modules/models.py
deleted file mode 100644
index 25b18b1904910e183a997a763008403d960868d6..0000000000000000000000000000000000000000
--- a/spaces/Xule/ChuanhuChatGPT/modules/models.py
+++ /dev/null
@@ -1,625 +0,0 @@
-from __future__ import annotations
-from typing import TYPE_CHECKING, List
-
-import logging
-import json
-import commentjson as cjson
-import os
-import sys
-import requests
-import urllib3
-import platform
-import base64
-from io import BytesIO
-from PIL import Image
-
-from tqdm import tqdm
-import colorama
-from duckduckgo_search import ddg
-import asyncio
-import aiohttp
-from enum import Enum
-import uuid
-
-from .presets import *
-from .llama_func import *
-from .utils import *
-from . import shared
-from .config import retrieve_proxy
-from modules import config
-from .base_model import BaseLLMModel, ModelType
-
-
-class OpenAIClient(BaseLLMModel):
- def __init__(
- self,
- model_name,
- api_key,
- system_prompt=INITIAL_SYSTEM_PROMPT,
- temperature=1.0,
- top_p=1.0,
- ) -> None:
- super().__init__(
- model_name=model_name,
- temperature=temperature,
- top_p=top_p,
- system_prompt=system_prompt,
- )
- self.api_key = api_key
- self.need_api_key = True
- self._refresh_header()
-
- def get_answer_stream_iter(self):
- response = self._get_response(stream=True)
- if response is not None:
- iter = self._decode_chat_response(response)
- partial_text = ""
- for i in iter:
- partial_text += i
- yield partial_text
- else:
- yield STANDARD_ERROR_MSG + GENERAL_ERROR_MSG
-
- def get_answer_at_once(self):
- response = self._get_response()
- response = json.loads(response.text)
- content = response["choices"][0]["message"]["content"]
- total_token_count = response["usage"]["total_tokens"]
- return content, total_token_count
-
- def count_token(self, user_input):
- input_token_count = count_token(construct_user(user_input))
- if self.system_prompt is not None and len(self.all_token_counts) == 0:
- system_prompt_token_count = count_token(
- construct_system(self.system_prompt)
- )
- return input_token_count + system_prompt_token_count
- return input_token_count
-
- def billing_info(self):
- try:
- curr_time = datetime.datetime.now()
- last_day_of_month = get_last_day_of_month(
- curr_time).strftime("%Y-%m-%d")
- first_day_of_month = curr_time.replace(day=1).strftime("%Y-%m-%d")
- usage_url = f"{shared.state.usage_api_url}?start_date={first_day_of_month}&end_date={last_day_of_month}"
- try:
- usage_data = self._get_billing_data(usage_url)
- except Exception as e:
- logging.error(f"获取API使用情况失败:" + str(e))
- return i18n("**获取API使用情况失败**")
- rounded_usage = "{:.5f}".format(usage_data["total_usage"] / 100)
- return i18n("**本月使用金额** ") + f"\u3000 ${rounded_usage}"
- except requests.exceptions.ConnectTimeout:
- status_text = (
- STANDARD_ERROR_MSG + CONNECTION_TIMEOUT_MSG + ERROR_RETRIEVE_MSG
- )
- return status_text
- except requests.exceptions.ReadTimeout:
- status_text = STANDARD_ERROR_MSG + READ_TIMEOUT_MSG + ERROR_RETRIEVE_MSG
- return status_text
- except Exception as e:
- import traceback
- traceback.print_exc()
- logging.error(i18n("获取API使用情况失败:") + str(e))
- return STANDARD_ERROR_MSG + ERROR_RETRIEVE_MSG
-
- def set_token_upper_limit(self, new_upper_limit):
- pass
-
-    @shared.state.switching_api_key # this decorator has no effect unless multi-account mode is enabled
- def _get_response(self, stream=False):
- openai_api_key = self.api_key
- system_prompt = self.system_prompt
- history = self.history
- logging.debug(colorama.Fore.YELLOW +
- f"{history}" + colorama.Fore.RESET)
- headers = {
- "Content-Type": "application/json",
- "Authorization": f"Bearer {openai_api_key}",
- }
-
- if system_prompt is not None:
- history = [construct_system(system_prompt), *history]
-
- payload = {
- "model": self.model_name,
- "messages": history,
- "temperature": self.temperature,
- "top_p": self.top_p,
- "n": self.n_choices,
- "stream": stream,
- "presence_penalty": self.presence_penalty,
- "frequency_penalty": self.frequency_penalty,
- }
-
- if self.max_generation_token is not None:
- payload["max_tokens"] = self.max_generation_token
- if self.stop_sequence is not None:
- payload["stop"] = self.stop_sequence
- if self.logit_bias is not None:
- payload["logit_bias"] = self.logit_bias
- if self.user_identifier is not None:
- payload["user"] = self.user_identifier
-
- if stream:
- timeout = TIMEOUT_STREAMING
- else:
- timeout = TIMEOUT_ALL
-
-        # if a custom api-host is configured, send the request there; otherwise use the default endpoint
- if shared.state.completion_url != COMPLETION_URL:
- logging.info(f"使用自定义API URL: {shared.state.completion_url}")
-
- with retrieve_proxy():
- try:
- response = requests.post(
- shared.state.completion_url,
- headers=headers,
- json=payload,
- stream=stream,
- timeout=timeout,
- )
- except:
- return None
- return response
-
- def _refresh_header(self):
- self.headers = {
- "Content-Type": "application/json",
- "Authorization": f"Bearer {self.api_key}",
- }
-
- def _get_billing_data(self, billing_url):
- with retrieve_proxy():
- response = requests.get(
- billing_url,
- headers=self.headers,
- timeout=TIMEOUT_ALL,
- )
-
- if response.status_code == 200:
- data = response.json()
- return data
- else:
- raise Exception(
- f"API request failed with status code {response.status_code}: {response.text}"
- )
-
- def _decode_chat_response(self, response):
- error_msg = ""
- for chunk in response.iter_lines():
- if chunk:
- chunk = chunk.decode()
- chunk_length = len(chunk)
- try:
- chunk = json.loads(chunk[6:])
- except json.JSONDecodeError:
- print(i18n("JSON解析错误,收到的内容: ") + f"{chunk}")
- error_msg += chunk
- continue
- if chunk_length > 6 and "delta" in chunk["choices"][0]:
- if chunk["choices"][0]["finish_reason"] == "stop":
- break
- try:
- yield chunk["choices"][0]["delta"]["content"]
- except Exception as e:
- # logging.error(f"Error: {e}")
- continue
- if error_msg:
- raise Exception(error_msg)
-
- def set_key(self, new_access_key):
- ret = super().set_key(new_access_key)
- self._refresh_header()
- return ret
-
-
-class ChatGLM_Client(BaseLLMModel):
- def __init__(self, model_name) -> None:
- super().__init__(model_name=model_name)
- from transformers import AutoTokenizer, AutoModel
- import torch
- global CHATGLM_TOKENIZER, CHATGLM_MODEL
- if CHATGLM_TOKENIZER is None or CHATGLM_MODEL is None:
- system_name = platform.system()
- model_path = None
- if os.path.exists("models"):
- model_dirs = os.listdir("models")
- if model_name in model_dirs:
- model_path = f"models/{model_name}"
- if model_path is not None:
- model_source = model_path
- else:
- model_source = f"THUDM/{model_name}"
- CHATGLM_TOKENIZER = AutoTokenizer.from_pretrained(
- model_source, trust_remote_code=True
- )
- quantified = False
- if "int4" in model_name:
- quantified = True
- model = AutoModel.from_pretrained(
- model_source, trust_remote_code=True
- )
- if torch.cuda.is_available():
- # run on CUDA
- logging.info("CUDA is available, using CUDA")
- model = model.half().cuda()
-            # MPS acceleration still has some issues and is not used for now
- elif system_name == "Darwin" and model_path is not None and not quantified:
- logging.info("Running on macOS, using MPS")
- # running on macOS and model already downloaded
- model = model.half().to("mps")
- else:
- logging.info("GPU is not available, using CPU")
- model = model.float()
- model = model.eval()
- CHATGLM_MODEL = model
-
- def _get_glm_style_input(self):
- history = [x["content"] for x in self.history]
- query = history.pop()
- logging.debug(colorama.Fore.YELLOW +
- f"{history}" + colorama.Fore.RESET)
- assert (
- len(history) % 2 == 0
- ), f"History should be even length. current history is: {history}"
- history = [[history[i], history[i + 1]]
- for i in range(0, len(history), 2)]
- return history, query
-
- def get_answer_at_once(self):
- history, query = self._get_glm_style_input()
- response, _ = CHATGLM_MODEL.chat(
- CHATGLM_TOKENIZER, query, history=history)
- return response, len(response)
-
- def get_answer_stream_iter(self):
- history, query = self._get_glm_style_input()
- for response, history in CHATGLM_MODEL.stream_chat(
- CHATGLM_TOKENIZER,
- query,
- history,
- max_length=self.token_upper_limit,
- top_p=self.top_p,
- temperature=self.temperature,
- ):
- yield response
-
-
-class LLaMA_Client(BaseLLMModel):
- def __init__(
- self,
- model_name,
- lora_path=None,
- ) -> None:
- super().__init__(model_name=model_name)
- from lmflow.datasets.dataset import Dataset
- from lmflow.pipeline.auto_pipeline import AutoPipeline
- from lmflow.models.auto_model import AutoModel
- from lmflow.args import ModelArguments, DatasetArguments, InferencerArguments
-
- self.max_generation_token = 1000
- self.end_string = "\n\n"
- # We don't need input data
- data_args = DatasetArguments(dataset_path=None)
- self.dataset = Dataset(data_args)
- self.system_prompt = ""
-
- global LLAMA_MODEL, LLAMA_INFERENCER
- if LLAMA_MODEL is None or LLAMA_INFERENCER is None:
- model_path = None
- if os.path.exists("models"):
- model_dirs = os.listdir("models")
- if model_name in model_dirs:
- model_path = f"models/{model_name}"
- if model_path is not None:
- model_source = model_path
- else:
- model_source = f"decapoda-research/{model_name}"
- # raise Exception(f"models目录下没有这个模型: {model_name}")
- if lora_path is not None:
- lora_path = f"lora/{lora_path}"
- model_args = ModelArguments(model_name_or_path=model_source, lora_model_path=lora_path, model_type=None, config_overrides=None, config_name=None, tokenizer_name=None, cache_dir=None,
- use_fast_tokenizer=True, model_revision='main', use_auth_token=False, torch_dtype=None, use_lora=False, lora_r=8, lora_alpha=32, lora_dropout=0.1, use_ram_optimized_load=True)
- pipeline_args = InferencerArguments(
- local_rank=0, random_seed=1, deepspeed='configs/ds_config_chatbot.json', mixed_precision='bf16')
-
- with open(pipeline_args.deepspeed, "r") as f:
- ds_config = json.load(f)
- LLAMA_MODEL = AutoModel.get_model(
- model_args,
- tune_strategy="none",
- ds_config=ds_config,
- )
- LLAMA_INFERENCER = AutoPipeline.get_pipeline(
- pipeline_name="inferencer",
- model_args=model_args,
- data_args=data_args,
- pipeline_args=pipeline_args,
- )
-
- def _get_llama_style_input(self):
- history = []
- instruction = ""
- if self.system_prompt:
- instruction = (f"Instruction: {self.system_prompt}\n")
- for x in self.history:
- if x["role"] == "user":
- history.append(f"{instruction}Input: {x['content']}")
- else:
- history.append(f"Output: {x['content']}")
- context = "\n\n".join(history)
- context += "\n\nOutput: "
- return context
-
- def get_answer_at_once(self):
- context = self._get_llama_style_input()
-
- input_dataset = self.dataset.from_dict(
- {"type": "text_only", "instances": [{"text": context}]}
- )
-
- output_dataset = LLAMA_INFERENCER.inference(
- model=LLAMA_MODEL,
- dataset=input_dataset,
- max_new_tokens=self.max_generation_token,
- temperature=self.temperature,
- )
-
- response = output_dataset.to_dict()["instances"][0]["text"]
- return response, len(response)
-
- def get_answer_stream_iter(self):
- context = self._get_llama_style_input()
- partial_text = ""
- step = 1
- for _ in range(0, self.max_generation_token, step):
- input_dataset = self.dataset.from_dict(
- {"type": "text_only", "instances": [
- {"text": context + partial_text}]}
- )
- output_dataset = LLAMA_INFERENCER.inference(
- model=LLAMA_MODEL,
- dataset=input_dataset,
- max_new_tokens=step,
- temperature=self.temperature,
- )
- response = output_dataset.to_dict()["instances"][0]["text"]
- if response == "" or response == self.end_string:
- break
- partial_text += response
- yield partial_text
-
-
-class XMChat(BaseLLMModel):
- def __init__(self, api_key):
- super().__init__(model_name="xmchat")
- self.api_key = api_key
- self.session_id = None
- self.reset()
- self.image_bytes = None
- self.image_path = None
- self.xm_history = []
- self.url = "https://xmbot.net/web"
- self.last_conv_id = None
-
- def reset(self):
- self.session_id = str(uuid.uuid4())
- self.last_conv_id = None
- return [], "已重置"
-
- def image_to_base64(self, image_path):
-        # open and load the image
- img = Image.open(image_path)
-
-        # get the image width and height
- width, height = img.size
-
-        # compute the scale ratio so that the longest side does not exceed max_dimension
- max_dimension = 2048
- scale_ratio = min(max_dimension / width, max_dimension / height)
-
- if scale_ratio < 1:
-            # resize the image by the scale ratio
- new_width = int(width * scale_ratio)
- new_height = int(height * scale_ratio)
- img = img.resize((new_width, new_height), Image.ANTIALIAS)
-
-        # convert the image to JPEG-encoded binary data
- buffer = BytesIO()
- if img.mode == "RGBA":
- img = img.convert("RGB")
- img.save(buffer, format='JPEG')
- binary_image = buffer.getvalue()
-
-        # base64-encode the binary data
- base64_image = base64.b64encode(binary_image).decode('utf-8')
-
- return base64_image
-
- def try_read_image(self, filepath):
- def is_image_file(filepath):
-            # check whether the file is an image
- valid_image_extensions = [".jpg", ".jpeg", ".png", ".bmp", ".gif", ".tiff"]
- file_extension = os.path.splitext(filepath)[1].lower()
- return file_extension in valid_image_extensions
-
- if is_image_file(filepath):
- logging.info(f"读取图片文件: {filepath}")
- self.image_bytes = self.image_to_base64(filepath)
- self.image_path = filepath
- else:
- self.image_bytes = None
- self.image_path = None
-
- def like(self):
- if self.last_conv_id is None:
- return "点赞失败,你还没发送过消息"
- data = {
- "uuid": self.last_conv_id,
- "appraise": "good"
- }
- response = requests.post(self.url, json=data)
- return "👍点赞成功,,感谢反馈~"
-
- def dislike(self):
- if self.last_conv_id is None:
- return "点踩失败,你还没发送过消息"
- data = {
- "uuid": self.last_conv_id,
- "appraise": "bad"
- }
- response = requests.post(self.url, json=data)
- return "👎点踩成功,感谢反馈~"
-
- def prepare_inputs(self, real_inputs, use_websearch, files, reply_language, chatbot):
- fake_inputs = real_inputs
- display_append = ""
- limited_context = False
- return limited_context, fake_inputs, display_append, real_inputs, chatbot
-
- def handle_file_upload(self, files, chatbot):
- """if the model accepts multi modal input, implement this function"""
- if files:
- for file in files:
- if file.name:
- logging.info(f"尝试读取图像: {file.name}")
- self.try_read_image(file.name)
- if self.image_path is not None:
- chatbot = chatbot + [((self.image_path,), None)]
- if self.image_bytes is not None:
- logging.info("使用图片作为输入")
-            # XMChat can effectively handle only one image per conversation turn
- self.reset()
- conv_id = str(uuid.uuid4())
- data = {
- "user_id": self.api_key,
- "session_id": self.session_id,
- "uuid": conv_id,
- "data_type": "imgbase64",
- "data": self.image_bytes
- }
- response = requests.post(self.url, json=data)
- response = json.loads(response.text)
- logging.info(f"图片回复: {response['data']}")
- return None, chatbot, None
-
- def get_answer_at_once(self):
- question = self.history[-1]["content"]
- conv_id = str(uuid.uuid4())
- self.last_conv_id = conv_id
- data = {
- "user_id": self.api_key,
- "session_id": self.session_id,
- "uuid": conv_id,
- "data_type": "text",
- "data": question
- }
- response = requests.post(self.url, json=data)
- try:
- response = json.loads(response.text)
- return response["data"], len(response["data"])
- except Exception as e:
- return response.text, len(response.text)
-
-
-
-
-def get_model(
- model_name,
- lora_model_path=None,
- access_key=None,
- temperature=None,
- top_p=None,
- system_prompt=None,
-) -> BaseLLMModel:
- msg = i18n("模型设置为了:") + f" {model_name}"
- model_type = ModelType.get_type(model_name)
- lora_selector_visibility = False
- lora_choices = []
- dont_change_lora_selector = False
- if model_type != ModelType.OpenAI:
- config.local_embedding = True
- # del current_model.model
- model = None
- try:
- if model_type == ModelType.OpenAI:
- logging.info(f"正在加载OpenAI模型: {model_name}")
- model = OpenAIClient(
- model_name=model_name,
- api_key=access_key,
- system_prompt=system_prompt,
- temperature=temperature,
- top_p=top_p,
- )
- elif model_type == ModelType.ChatGLM:
- logging.info(f"正在加载ChatGLM模型: {model_name}")
- model = ChatGLM_Client(model_name)
- elif model_type == ModelType.LLaMA and lora_model_path == "":
- msg = f"现在请为 {model_name} 选择LoRA模型"
- logging.info(msg)
- lora_selector_visibility = True
- if os.path.isdir("lora"):
- lora_choices = get_file_names(
- "lora", plain=True, filetypes=[""])
- lora_choices = ["No LoRA"] + lora_choices
- elif model_type == ModelType.LLaMA and lora_model_path != "":
- logging.info(f"正在加载LLaMA模型: {model_name} + {lora_model_path}")
- dont_change_lora_selector = True
- if lora_model_path == "No LoRA":
- lora_model_path = None
- msg += " + No LoRA"
- else:
- msg += f" + {lora_model_path}"
- model = LLaMA_Client(model_name, lora_model_path)
- elif model_type == ModelType.XMChat:
- if os.environ.get("XMCHAT_API_KEY") != "":
- access_key = os.environ.get("XMCHAT_API_KEY")
- model = XMChat(api_key=access_key)
- elif model_type == ModelType.Unknown:
- raise ValueError(f"未知模型: {model_name}")
- logging.info(msg)
- except Exception as e:
- logging.error(e)
- msg = f"{STANDARD_ERROR_MSG}: {e}"
- if dont_change_lora_selector:
- return model, msg
- else:
- return model, msg, gr.Dropdown.update(choices=lora_choices, visible=lora_selector_visibility)
-
-
-if __name__ == "__main__":
- with open("config.json", "r") as f:
- openai_api_key = cjson.load(f)["openai_api_key"]
- # set logging level to debug
- logging.basicConfig(level=logging.DEBUG)
- # client = ModelManager(model_name="gpt-3.5-turbo", access_key=openai_api_key)
- client = get_model(model_name="chatglm-6b-int4")
- chatbot = []
- stream = False
-    # test the billing feature
- logging.info(colorama.Back.GREEN + "测试账单功能" + colorama.Back.RESET)
- logging.info(client.billing_info())
-    # test question answering
- logging.info(colorama.Back.GREEN + "测试问答" + colorama.Back.RESET)
- question = "巴黎是中国的首都吗?"
- for i in client.predict(inputs=question, chatbot=chatbot, stream=stream):
- logging.info(i)
- logging.info(f"测试问答后history : {client.history}")
-    # test conversational memory
- logging.info(colorama.Back.GREEN + "测试记忆力" + colorama.Back.RESET)
- question = "我刚刚问了你什么问题?"
- for i in client.predict(inputs=question, chatbot=chatbot, stream=stream):
- logging.info(i)
- logging.info(f"测试记忆力后history : {client.history}")
-    # test the retry feature
- logging.info(colorama.Back.GREEN + "测试重试功能" + colorama.Back.RESET)
- for i in client.retry(chatbot=chatbot, stream=stream):
- logging.info(i)
- logging.info(f"重试后history : {client.history}")
-    # # test the summarization feature
- # print(colorama.Back.GREEN + "测试总结功能" + colorama.Back.RESET)
- # chatbot, msg = client.reduce_token_size(chatbot=chatbot)
- # print(chatbot, msg)
- # print(f"总结后history: {client.history}")
diff --git a/spaces/XzJosh/Taffy-Bert-VITS2/modules.py b/spaces/XzJosh/Taffy-Bert-VITS2/modules.py
deleted file mode 100644
index 92e0f32a51c472bfd1659a50a95a95d195281d2b..0000000000000000000000000000000000000000
--- a/spaces/XzJosh/Taffy-Bert-VITS2/modules.py
+++ /dev/null
@@ -1,452 +0,0 @@
-import copy
-import math
-import numpy as np
-import scipy
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d
-from torch.nn.utils import weight_norm, remove_weight_norm
-
-import commons
-from commons import init_weights, get_padding
-from transforms import piecewise_rational_quadratic_transform
-from attentions import Encoder
-
-LRELU_SLOPE = 0.1
-
-class LayerNorm(nn.Module):
- def __init__(self, channels, eps=1e-5):
- super().__init__()
- self.channels = channels
- self.eps = eps
-
- self.gamma = nn.Parameter(torch.ones(channels))
- self.beta = nn.Parameter(torch.zeros(channels))
-
- def forward(self, x):
- x = x.transpose(1, -1)
- x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps)
- return x.transpose(1, -1)
-
-class ConvReluNorm(nn.Module):
- def __init__(self, in_channels, hidden_channels, out_channels, kernel_size, n_layers, p_dropout):
- super().__init__()
- self.in_channels = in_channels
- self.hidden_channels = hidden_channels
- self.out_channels = out_channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.p_dropout = p_dropout
-        assert n_layers > 1, "Number of layers should be larger than 1."
-
- self.conv_layers = nn.ModuleList()
- self.norm_layers = nn.ModuleList()
- self.conv_layers.append(nn.Conv1d(in_channels, hidden_channels, kernel_size, padding=kernel_size//2))
- self.norm_layers.append(LayerNorm(hidden_channels))
- self.relu_drop = nn.Sequential(
- nn.ReLU(),
- nn.Dropout(p_dropout))
- for _ in range(n_layers-1):
- self.conv_layers.append(nn.Conv1d(hidden_channels, hidden_channels, kernel_size, padding=kernel_size//2))
- self.norm_layers.append(LayerNorm(hidden_channels))
- self.proj = nn.Conv1d(hidden_channels, out_channels, 1)
- self.proj.weight.data.zero_()
- self.proj.bias.data.zero_()
-
- def forward(self, x, x_mask):
- x_org = x
- for i in range(self.n_layers):
- x = self.conv_layers[i](x * x_mask)
- x = self.norm_layers[i](x)
- x = self.relu_drop(x)
- x = x_org + self.proj(x)
- return x * x_mask
-
-
-class DDSConv(nn.Module):
- """
-    Dilated and Depth-Separable Convolution
- """
- def __init__(self, channels, kernel_size, n_layers, p_dropout=0.):
- super().__init__()
- self.channels = channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.p_dropout = p_dropout
-
- self.drop = nn.Dropout(p_dropout)
- self.convs_sep = nn.ModuleList()
- self.convs_1x1 = nn.ModuleList()
- self.norms_1 = nn.ModuleList()
- self.norms_2 = nn.ModuleList()
- for i in range(n_layers):
- dilation = kernel_size ** i
- padding = (kernel_size * dilation - dilation) // 2
- self.convs_sep.append(nn.Conv1d(channels, channels, kernel_size,
- groups=channels, dilation=dilation, padding=padding
- ))
- self.convs_1x1.append(nn.Conv1d(channels, channels, 1))
- self.norms_1.append(LayerNorm(channels))
- self.norms_2.append(LayerNorm(channels))
-
- def forward(self, x, x_mask, g=None):
- if g is not None:
- x = x + g
- for i in range(self.n_layers):
- y = self.convs_sep[i](x * x_mask)
- y = self.norms_1[i](y)
- y = F.gelu(y)
- y = self.convs_1x1[i](y)
- y = self.norms_2[i](y)
- y = F.gelu(y)
- y = self.drop(y)
- x = x + y
- return x * x_mask
-
-
-class WN(torch.nn.Module):
- def __init__(self, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=0, p_dropout=0):
- super(WN, self).__init__()
- assert(kernel_size % 2 == 1)
-        self.hidden_channels = hidden_channels
-        self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.gin_channels = gin_channels
- self.p_dropout = p_dropout
-
- self.in_layers = torch.nn.ModuleList()
- self.res_skip_layers = torch.nn.ModuleList()
- self.drop = nn.Dropout(p_dropout)
-
- if gin_channels != 0:
- cond_layer = torch.nn.Conv1d(gin_channels, 2*hidden_channels*n_layers, 1)
- self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name='weight')
-
- for i in range(n_layers):
- dilation = dilation_rate ** i
- padding = int((kernel_size * dilation - dilation) / 2)
- in_layer = torch.nn.Conv1d(hidden_channels, 2*hidden_channels, kernel_size,
- dilation=dilation, padding=padding)
- in_layer = torch.nn.utils.weight_norm(in_layer, name='weight')
- self.in_layers.append(in_layer)
-
- # last one is not necessary
- if i < n_layers - 1:
- res_skip_channels = 2 * hidden_channels
- else:
- res_skip_channels = hidden_channels
-
- res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1)
- res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name='weight')
- self.res_skip_layers.append(res_skip_layer)
-
- def forward(self, x, x_mask, g=None, **kwargs):
- output = torch.zeros_like(x)
- n_channels_tensor = torch.IntTensor([self.hidden_channels])
-
- if g is not None:
- g = self.cond_layer(g)
-
- for i in range(self.n_layers):
- x_in = self.in_layers[i](x)
- if g is not None:
- cond_offset = i * 2 * self.hidden_channels
- g_l = g[:,cond_offset:cond_offset+2*self.hidden_channels,:]
- else:
- g_l = torch.zeros_like(x_in)
-
- acts = commons.fused_add_tanh_sigmoid_multiply(
- x_in,
- g_l,
- n_channels_tensor)
- acts = self.drop(acts)
-
- res_skip_acts = self.res_skip_layers[i](acts)
- if i < self.n_layers - 1:
- res_acts = res_skip_acts[:,:self.hidden_channels,:]
- x = (x + res_acts) * x_mask
- output = output + res_skip_acts[:,self.hidden_channels:,:]
- else:
- output = output + res_skip_acts
- return output * x_mask
-
- def remove_weight_norm(self):
- if self.gin_channels != 0:
- torch.nn.utils.remove_weight_norm(self.cond_layer)
- for l in self.in_layers:
- torch.nn.utils.remove_weight_norm(l)
- for l in self.res_skip_layers:
- torch.nn.utils.remove_weight_norm(l)
-
-
-class ResBlock1(torch.nn.Module):
- def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)):
- super(ResBlock1, self).__init__()
- self.convs1 = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2],
- padding=get_padding(kernel_size, dilation[2])))
- ])
- self.convs1.apply(init_weights)
-
- self.convs2 = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1)))
- ])
- self.convs2.apply(init_weights)
-
- def forward(self, x, x_mask=None):
- for c1, c2 in zip(self.convs1, self.convs2):
- xt = F.leaky_relu(x, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c1(xt)
- xt = F.leaky_relu(xt, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c2(xt)
- x = xt + x
- if x_mask is not None:
- x = x * x_mask
- return x
-
- def remove_weight_norm(self):
- for l in self.convs1:
- remove_weight_norm(l)
- for l in self.convs2:
- remove_weight_norm(l)
-
-
-class ResBlock2(torch.nn.Module):
- def __init__(self, channels, kernel_size=3, dilation=(1, 3)):
- super(ResBlock2, self).__init__()
- self.convs = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1])))
- ])
- self.convs.apply(init_weights)
-
- def forward(self, x, x_mask=None):
- for c in self.convs:
- xt = F.leaky_relu(x, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c(xt)
- x = xt + x
- if x_mask is not None:
- x = x * x_mask
- return x
-
- def remove_weight_norm(self):
- for l in self.convs:
- remove_weight_norm(l)
-
-
-class Log(nn.Module):
- def forward(self, x, x_mask, reverse=False, **kwargs):
- if not reverse:
- y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask
- logdet = torch.sum(-y, [1, 2])
- return y, logdet
- else:
- x = torch.exp(x) * x_mask
- return x
-
-
-class Flip(nn.Module):
- def forward(self, x, *args, reverse=False, **kwargs):
- x = torch.flip(x, [1])
- if not reverse:
- logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device)
- return x, logdet
- else:
- return x
-
-
-class ElementwiseAffine(nn.Module):
- def __init__(self, channels):
- super().__init__()
- self.channels = channels
- self.m = nn.Parameter(torch.zeros(channels,1))
- self.logs = nn.Parameter(torch.zeros(channels,1))
-
- def forward(self, x, x_mask, reverse=False, **kwargs):
- if not reverse:
- y = self.m + torch.exp(self.logs) * x
- y = y * x_mask
- logdet = torch.sum(self.logs * x_mask, [1,2])
- return y, logdet
- else:
- x = (x - self.m) * torch.exp(-self.logs) * x_mask
- return x
-
-
-class ResidualCouplingLayer(nn.Module):
- def __init__(self,
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- p_dropout=0,
- gin_channels=0,
- mean_only=False):
- assert channels % 2 == 0, "channels should be divisible by 2"
- super().__init__()
- self.channels = channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.half_channels = channels // 2
- self.mean_only = mean_only
-
- self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1)
- self.enc = WN(hidden_channels, kernel_size, dilation_rate, n_layers, p_dropout=p_dropout, gin_channels=gin_channels)
- self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1)
- self.post.weight.data.zero_()
- self.post.bias.data.zero_()
-
- def forward(self, x, x_mask, g=None, reverse=False):
- x0, x1 = torch.split(x, [self.half_channels]*2, 1)
- h = self.pre(x0) * x_mask
- h = self.enc(h, x_mask, g=g)
- stats = self.post(h) * x_mask
- if not self.mean_only:
- m, logs = torch.split(stats, [self.half_channels]*2, 1)
- else:
- m = stats
- logs = torch.zeros_like(m)
-
- if not reverse:
- x1 = m + x1 * torch.exp(logs) * x_mask
- x = torch.cat([x0, x1], 1)
- logdet = torch.sum(logs, [1,2])
- return x, logdet
- else:
- x1 = (x1 - m) * torch.exp(-logs) * x_mask
- x = torch.cat([x0, x1], 1)
- return x
-
-
-class ConvFlow(nn.Module):
- def __init__(self, in_channels, filter_channels, kernel_size, n_layers, num_bins=10, tail_bound=5.0):
- super().__init__()
- self.in_channels = in_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.num_bins = num_bins
- self.tail_bound = tail_bound
- self.half_channels = in_channels // 2
-
- self.pre = nn.Conv1d(self.half_channels, filter_channels, 1)
- self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.)
- self.proj = nn.Conv1d(filter_channels, self.half_channels * (num_bins * 3 - 1), 1)
- self.proj.weight.data.zero_()
- self.proj.bias.data.zero_()
-
- def forward(self, x, x_mask, g=None, reverse=False):
- x0, x1 = torch.split(x, [self.half_channels]*2, 1)
- h = self.pre(x0)
- h = self.convs(h, x_mask, g=g)
- h = self.proj(h) * x_mask
-
- b, c, t = x0.shape
- h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?]
-
- unnormalized_widths = h[..., :self.num_bins] / math.sqrt(self.filter_channels)
- unnormalized_heights = h[..., self.num_bins:2*self.num_bins] / math.sqrt(self.filter_channels)
- unnormalized_derivatives = h[..., 2 * self.num_bins:]
-
- x1, logabsdet = piecewise_rational_quadratic_transform(x1,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=reverse,
- tails='linear',
- tail_bound=self.tail_bound
- )
-
- x = torch.cat([x0, x1], 1) * x_mask
- logdet = torch.sum(logabsdet * x_mask, [1,2])
- if not reverse:
- return x, logdet
- else:
- return x
-class TransformerCouplingLayer(nn.Module):
- def __init__(self,
- channels,
- hidden_channels,
- kernel_size,
- n_layers,
- n_heads,
- p_dropout=0,
- filter_channels=0,
- mean_only=False,
- wn_sharing_parameter=None,
- gin_channels = 0
- ):
- assert channels % 2 == 0, "channels should be divisible by 2"
- super().__init__()
- self.channels = channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.half_channels = channels // 2
- self.mean_only = mean_only
-
- self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1)
- self.enc = Encoder(hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout, isflow = True, gin_channels = gin_channels) if wn_sharing_parameter is None else wn_sharing_parameter
- self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1)
- self.post.weight.data.zero_()
- self.post.bias.data.zero_()
-
- def forward(self, x, x_mask, g=None, reverse=False):
- x0, x1 = torch.split(x, [self.half_channels]*2, 1)
- h = self.pre(x0) * x_mask
- h = self.enc(h, x_mask, g=g)
- stats = self.post(h) * x_mask
- if not self.mean_only:
- m, logs = torch.split(stats, [self.half_channels]*2, 1)
- else:
- m = stats
- logs = torch.zeros_like(m)
-
- if not reverse:
- x1 = m + x1 * torch.exp(logs) * x_mask
- x = torch.cat([x0, x1], 1)
- logdet = torch.sum(logs, [1,2])
- return x, logdet
- else:
- x1 = (x1 - m) * torch.exp(-logs) * x_mask
- x = torch.cat([x0, x1], 1)
- return x
-
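The `ElementwiseAffine` flow above is invertible by construction (`y = m + exp(logs) * x`, `x = (y - m) * exp(-logs)`); a small self-contained check of that round trip:

    import torch

    channels, t = 4, 8
    m, logs = torch.randn(channels, 1), torch.randn(channels, 1)
    x = torch.randn(1, channels, t)
    x_mask = torch.ones(1, 1, t)

    y = (m + torch.exp(logs) * x) * x_mask                 # forward pass of the affine flow
    logdet = torch.sum(logs * x_mask, [1, 2])              # log|det J|, summed over channels and time
    x_rec = (y - m) * torch.exp(-logs) * x_mask            # reverse pass
    print(torch.allclose(x * x_mask, x_rec, atol=1e-6))    # True: the transform inverts exactly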
diff --git a/spaces/YUANAI/DiffspeechResearch/utils/audio/io.py b/spaces/YUANAI/DiffspeechResearch/utils/audio/io.py
deleted file mode 100644
index 34d5d20ae13e9aa481b1bc85117ad6539af8a624..0000000000000000000000000000000000000000
--- a/spaces/YUANAI/DiffspeechResearch/utils/audio/io.py
+++ /dev/null
@@ -1,22 +0,0 @@
-import subprocess
-
-import numpy as np
-from scipy.io import wavfile
-
-
-def save_wav(wav, path, sr, norm=False):
- if norm:
- wav = wav / np.abs(wav).max()
- wav = wav * 32767
- wavfile.write(path[:-4] + '.wav', sr, wav.astype(np.int16))
- if path[-4:] == '.mp3':
- to_mp3(path[:-4])
-
-
-def to_mp3(out_path):
- if out_path[-4:] == '.wav':
- out_path = out_path[:-4]
- subprocess.check_call(
- f'ffmpeg -threads 1 -loglevel error -i "{out_path}.wav" -vn -b:a 192k -y -hide_banner -async 1 "{out_path}.mp3"',
- shell=True, stdin=subprocess.PIPE)
- subprocess.check_call(f'rm -f "{out_path}.wav"', shell=True)
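A minimal usage sketch for the `save_wav` helper above with a synthetic tone; it assumes numpy/scipy are installed, the import path below mirrors the file location, and ffmpeg is only needed for the `.mp3` branch:

    import numpy as np
    from utils.audio.io import save_wav  # hypothetical import path for the module shown above

    sr = 22050
    t = np.linspace(0, 1.0, sr, endpoint=False)
    wav = 0.5 * np.sin(2 * np.pi * 440.0 * t)   # one second of a 440 Hz tone in [-0.5, 0.5]
    save_wav(wav, "tone.wav", sr, norm=True)    # peak-normalizes, scales to int16, writes tone.wav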
diff --git a/spaces/YuAnthony/Audio-Caption/coco_caption/pycocoevalcap/meteor/__init__.py b/spaces/YuAnthony/Audio-Caption/coco_caption/pycocoevalcap/meteor/__init__.py
deleted file mode 100644
index 3f7d85bba884ea8f83fc6ab2a1e6ade80d98d4d9..0000000000000000000000000000000000000000
--- a/spaces/YuAnthony/Audio-Caption/coco_caption/pycocoevalcap/meteor/__init__.py
+++ /dev/null
@@ -1 +0,0 @@
-__author__ = 'tylin'
diff --git a/spaces/ZalacDanijel/pujaguja/README.md b/spaces/ZalacDanijel/pujaguja/README.md
deleted file mode 100644
index 397b08488ea5526dd7792ace143e19b900161b13..0000000000000000000000000000000000000000
--- a/spaces/ZalacDanijel/pujaguja/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: AutoTrain Advanced
-emoji: 🚀
-colorFrom: blue
-colorTo: green
-sdk: docker
-pinned: false
-duplicated_from: autotrain-projects/autotrain-advanced
-license: other
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Zaxxced/rvc-random-v2/lib/infer_pack/modules/F0Predictor/PMF0Predictor.py b/spaces/Zaxxced/rvc-random-v2/lib/infer_pack/modules/F0Predictor/PMF0Predictor.py
deleted file mode 100644
index b2c592527a5966e6f8e79e8c52dc5b414246dcc6..0000000000000000000000000000000000000000
--- a/spaces/Zaxxced/rvc-random-v2/lib/infer_pack/modules/F0Predictor/PMF0Predictor.py
+++ /dev/null
@@ -1,97 +0,0 @@
-from lib.infer_pack.modules.F0Predictor.F0Predictor import F0Predictor
-import parselmouth
-import numpy as np
-
-
-class PMF0Predictor(F0Predictor):
- def __init__(self, hop_length=512, f0_min=50, f0_max=1100, sampling_rate=44100):
- self.hop_length = hop_length
- self.f0_min = f0_min
- self.f0_max = f0_max
- self.sampling_rate = sampling_rate
-
- def interpolate_f0(self, f0):
- """
-        Interpolate the F0 contour over unvoiced frames
- """
-
- data = np.reshape(f0, (f0.size, 1))
-
- vuv_vector = np.zeros((data.size, 1), dtype=np.float32)
- vuv_vector[data > 0.0] = 1.0
- vuv_vector[data <= 0.0] = 0.0
-
- ip_data = data
-
- frame_number = data.size
- last_value = 0.0
- for i in range(frame_number):
- if data[i] <= 0.0:
- j = i + 1
- for j in range(i + 1, frame_number):
- if data[j] > 0.0:
- break
- if j < frame_number - 1:
- if last_value > 0.0:
- step = (data[j] - data[i - 1]) / float(j - i)
- for k in range(i, j):
- ip_data[k] = data[i - 1] + step * (k - i + 1)
- else:
- for k in range(i, j):
- ip_data[k] = data[j]
- else:
- for k in range(i, frame_number):
- ip_data[k] = last_value
- else:
-                ip_data[i] = data[i]  # this copy may be unnecessary
- last_value = data[i]
-
- return ip_data[:, 0], vuv_vector[:, 0]
-
- def compute_f0(self, wav, p_len=None):
- x = wav
- if p_len is None:
- p_len = x.shape[0] // self.hop_length
- else:
- assert abs(p_len - x.shape[0] // self.hop_length) < 4, "pad length error"
- time_step = self.hop_length / self.sampling_rate * 1000
- f0 = (
- parselmouth.Sound(x, self.sampling_rate)
- .to_pitch_ac(
- time_step=time_step / 1000,
- voicing_threshold=0.6,
- pitch_floor=self.f0_min,
- pitch_ceiling=self.f0_max,
- )
- .selected_array["frequency"]
- )
-
- pad_size = (p_len - len(f0) + 1) // 2
- if pad_size > 0 or p_len - len(f0) - pad_size > 0:
- f0 = np.pad(f0, [[pad_size, p_len - len(f0) - pad_size]], mode="constant")
- f0, uv = self.interpolate_f0(f0)
- return f0
-
- def compute_f0_uv(self, wav, p_len=None):
- x = wav
- if p_len is None:
- p_len = x.shape[0] // self.hop_length
- else:
- assert abs(p_len - x.shape[0] // self.hop_length) < 4, "pad length error"
- time_step = self.hop_length / self.sampling_rate * 1000
- f0 = (
- parselmouth.Sound(x, self.sampling_rate)
- .to_pitch_ac(
- time_step=time_step / 1000,
- voicing_threshold=0.6,
- pitch_floor=self.f0_min,
- pitch_ceiling=self.f0_max,
- )
- .selected_array["frequency"]
- )
-
- pad_size = (p_len - len(f0) + 1) // 2
- if pad_size > 0 or p_len - len(f0) - pad_size > 0:
- f0 = np.pad(f0, [[pad_size, p_len - len(f0) - pad_size]], mode="constant")
- f0, uv = self.interpolate_f0(f0)
- return f0, uv
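A hedged usage sketch for the `PMF0Predictor` above, assuming the `praat-parselmouth` package is installed and the repository-relative import path below:

    import numpy as np
    from lib.infer_pack.modules.F0Predictor.PMF0Predictor import PMF0Predictor  # hypothetical import path

    sr = 16000
    t = np.linspace(0, 1.0, sr, endpoint=False)
    wav = 0.4 * np.sin(2 * np.pi * 220.0 * t)        # one second of a 220 Hz tone
    predictor = PMF0Predictor(hop_length=160, sampling_rate=sr)
    f0, uv = predictor.compute_f0_uv(wav)
    print(f0.shape, float(np.median(f0[uv > 0])))    # voiced median F0 should land near 220 Hz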
diff --git "a/spaces/a-v-bely/spanish-task-generator/pages/2_\360\237\221\250\342\200\215\360\237\217\253_\320\235\320\260\321\207\320\260\320\273\320\276_\321\200\320\260\320\261\320\276\321\202\321\213.py" "b/spaces/a-v-bely/spanish-task-generator/pages/2_\360\237\221\250\342\200\215\360\237\217\253_\320\235\320\260\321\207\320\260\320\273\320\276_\321\200\320\260\320\261\320\276\321\202\321\213.py"
deleted file mode 100644
index 7cb794ab425c872a6a4379ac63d38b44f7f792dd..0000000000000000000000000000000000000000
--- "a/spaces/a-v-bely/spanish-task-generator/pages/2_\360\237\221\250\342\200\215\360\237\217\253_\320\235\320\260\321\207\320\260\320\273\320\276_\321\200\320\260\320\261\320\276\321\202\321\213.py"
+++ /dev/null
@@ -1,245 +0,0 @@
-import datetime
-import streamlit as st
-from utilities_database.user_database_utils import load_user_tasks_data
-from utilities_database.user_database_utils import save_data_in_database
-from utilities_database.user_database_widgets import user_save_text_table
-from utilities_database.user_database_utils import load_users_particular_task
-
-# Interface
-if st.session_state.get('-LOGGED_IN_BOOL-'):
- st.set_page_config(page_title='GenLexTasks', layout="wide", page_icon=':es:')
- INSTRUCTION = st.expander(label='**ИНСТРУКЦИЯ**', expanded=False)
- INSTRUCTION.markdown(
- '**_I. Выберите режим работы._**'
- '\n\n**_:red[СОЗДАНИЕ ЗАДАНИЙ]_**'
- '\n\nПосле выбора данного режима работы появится форма, которую необходимо заполнить:'
- '\n\n1. Придумайте **название** для файла с заданиями. '
- 'Вы можете оставить это поле пустым - именем по умолчанию служит текущая дата и первые 20 символов'
- ' введенного Вами текста.'
- '\n\n2. Введите **текст** или выберите **текстовый файл** с исходным текстом, на основе которого Вы хотите'
- ' создать задания. '
- '\n\n3. Укажите *способ выбора целевых слов*:'
- '\n\t* *:green[Автоматически]*: программа сама выберет подходящие по сложности целевые слова.'
- '\n\t* *:blue[Самостоятельно]*: введите в соответствующее поле целевые слова через запятую в той форме,'
- ' в которой они встречаются в тексте. В этом случае *:orange[языковой уровень]* можно не указывать, но тогда'
- ' дистракторы будут полностью случайными и несоотнесёнными с уровнем.'
- '\n4. Если Вы выбрали *:green[автоматический поиск целевых слов]*, **_:red[обязательно]_** укажите'
- ' *:orange[языковой уровень]*. Данный параметр отвечает за выбор лексического минимума, использующегося при'
- ' подборе дистракторов.'
- '\n5. Если Вы выбрали *:blue[самостоятельный ввод целевых слов]*, проверьте, что заполнили соответствующее'
- ' поле. ️ ❗ **:red[Введите слова в той форме, в которой они встречаются в тексте]**.'
- '\n6. Укажите число дистракторов - неправильных вариантов ответа. Если указано _более четырех_'
- ' дистракторов, возможно, что в некоторых заданиях будет выведено _меньшее количество, но не менее четырех_'
- ' вариантов. Данное обстоятельство связано с проверкой наличия дистракторов в лексических минимумах.'
- '\n7. Выберите **способы вывода** готовых материалов.'
- '\n8. Для начала работы нажмите на кнопку **"Запуск"**. Если все поля заполнены верно,'
- ' начнется процесс генерации заданий. Прогресс будет отображаться на экране.'
- '\n9. По окончании процесса генерации заданий будет выведено **_:green[соответствующее сообщение]_**. '
- 'Затем Вы можете перейти на вкладки **просмотра и 📥 сохранения** заданий, а также 📝**онлайн-теста**.'
- '\n\n**_:red[ЗАГРУЗКА ИЗ АРХИВА]_**'
- '\n\nПосле выбора данного режима работы появится таблица, в которой перечислены названия заданий,'
- ' которые Вы сохранили, языковой уровень и дата их создания.'
- ' Для загрузки определенного файла с заданиями:'
- '\n1. Введите (или скопируйте из таблицы) название.'
- '\n2. Укажите соответсвующий языковой уровень.'
- '\n3. Нажмите на кнопку **"Загрузить"**.'
- '\n4. Если все поля заполнены верно, Вы увидите сообщение о том, что **:green[задания успешно загружены]**.'
- '\n\n\nДля того, чтобы свернуть/развернуть блоки **Инструкций** или **Важной информации**,'
- ' кликните по заголовку этого блока или по стрелке (ᐯ / ᐱ), располагающейся в его правом верхнем углу.')
- WHAT_TO_DO = st.radio(
- label='**Выберите режим работы**',
- options=[
- 'Создать новые задания',
- 'Загрузить задания из моего архива'],
- key='-WHAT_TO_DO_MODE-',
- horizontal=True)
- if WHAT_TO_DO == 'Загрузить задания из моего архива':
- LOAD_FORM = st.form('LOAD_FORM')
- UPLOAD_CLOUD_USER_NAME = st.session_state.get('-USER_NAME-')
- loaded_data = load_user_tasks_data(
- user_task_database=user_save_text_table,
- save_type='download',
- creator_name=UPLOAD_CLOUD_USER_NAME)
- LOAD_FORM.table(loaded_data)
- COL1, COL2 = LOAD_FORM.columns([1, 1])
- UPLOAD_CLOUD_FILE_NAME = COL1.text_input('Введите название заданий', placeholder='Жду название')
- with COL2:
- UPLOAD_CLOUD_CEFR_LEVEL = st.selectbox(
- label='Выберите языковой уровень',
- options=['A1', 'A2', 'B1', 'B2', 'C1', 'Без уровня'],
- index=None,
- placeholder='-Выберите языковой уровень-')
- st.session_state['-UPLOAD_CLOUD_CEFR_LEVEL-'] = UPLOAD_CLOUD_CEFR_LEVEL
- LOAD_BUTTON = LOAD_FORM.form_submit_button('Загрузить')
- if LOAD_BUTTON:
- if UPLOAD_CLOUD_USER_NAME in (None, '') or UPLOAD_CLOUD_FILE_NAME in (None, ''):
- st.error('Вы не заполнили все поля')
- st.stop()
- __TASK_DATA__ = load_users_particular_task(
- user_task_database=user_save_text_table,
- load_mode='download',
- creator_name=UPLOAD_CLOUD_USER_NAME,
- save_name=UPLOAD_CLOUD_FILE_NAME,
- cefr_level=UPLOAD_CLOUD_CEFR_LEVEL)
- # Store the loaded data in session_state so the later pages (view/download/online test) can use it
- st.session_state['UPLOAD_CLOUD_USER_NAME'] = UPLOAD_CLOUD_USER_NAME
- st.session_state['-UPLOAD_CLOUD_FILE_NAME-'] = UPLOAD_CLOUD_FILE_NAME
- st.session_state['RESULT'] = __TASK_DATA__
- st.session_state['-LOADED_CEFR_LEVEL-'] = UPLOAD_CLOUD_CEFR_LEVEL
- st.session_state['-DISPLAY_READY-'] = True
- st.session_state['-DISPLAY_VERSION-'] = True
- st.session_state['-DOWNLOAD_VERSION-'] = True
- st.session_state['-ONLINE_TEST_READY-'] = True
- st.success('Данные загружены. Можете переходить на следующие страницы.')
- else:
- # Upload text form
- FORM = st.form('CREATE_FORM')
- USER__SAVE_IN_CLOUD_FILE_NAME = FORM.text_input(
- '**Введите название**',
- placeholder='Жду название',
- key='-USER__SAVE_IN_CLOUD_FILE_NAME-')
- UPLOAD_TEXT = FORM.text_area(
- label='**Вставьте текст:**',
- value='',
- placeholder='Жду текст',
- key='-USER_INPUT_TEXT-')
- UPLOAD_FILE = FORM.file_uploader(
- label='**Или выберите файл:**',
- type='txt',
- key='-USER_INPUT_FILE-')
- TW_MODE_COL, DISTRACTOR_MODEL_COL = FORM.columns(2)
- TARGET_WORDS_MODE = TW_MODE_COL.radio(
- label='**Как выбирать целевые слова?**',
- options=['Автоматически', 'Самостоятельно'],
- key='-TARGET_WORDS_MODE-', horizontal=True)
- DISTRACTOR_MODEL = DISTRACTOR_MODEL_COL.radio(
- label='**Модель для выбора неправильных вариантов**',
- options=['Модель-1', 'Модель-2', 'Модель-3'],
- key='-DISTRACTOR_MODEL_MODE-', horizontal=True)
- CEFR_NUM_DISTRACTORS_COL, UTW_COL = FORM.columns([2, 2])
- with CEFR_NUM_DISTRACTORS_COL:
- CEFR_TEXT_LEVEL = st.selectbox(
- label='Выберите языковой уровень',
- options=['A1', 'A2', 'B1', 'B2', 'C1', 'Без уровня'],
- index=None,
- placeholder='-Выберите языковой уровень-')
- st.session_state['-CEFR_TEXT_LEVEL-'] = CEFR_TEXT_LEVEL
- NUMBER_DISTRACTORS = CEFR_NUM_DISTRACTORS_COL.number_input(
- label='**Выберите количество дистракторов в задании:**',
- min_value=1,
- max_value=9,
- value=3,
- key='-NUM_DISTRACTORS-')
- TARGET_WORDS = UTW_COL.text_area(
- label='**Если "Самостоятельно", введите целевые слова:**',
- value='',
- height=120,
- placeholder='Через запятую',
- key='-INPUT_TARGET_WORDS-')
- FORM.markdown('**Выберите формат(-ы) вывода:**')
- col1, col2, col3 = FORM.columns(3)
- SAVE_IN_CLOUD = col1.checkbox(
- label='**Сохранить в облаке**',
- value=False,
- key='-SAVE_IN_CLOUD-')
- DOWNLOAD_VERSION = col2.checkbox(
- label='**Скачать**',
- value=False,
- key='-DOWNLOAD_VERSION-')
- ONLINE_TEST_VERSION = col3.checkbox(
- label='**Онлайн тест**',
- value=True,
- key='-ONLINE_TEST_VERSION-')
-
- START_COL, RERUN_COL, EXIT_COL = FORM.columns([1, 1, 1])
- START_BUTTON = START_COL.form_submit_button(
- label='**Запуск**',
- use_container_width=True)
- RERUN_BUTTON = RERUN_COL.form_submit_button(
- label='**Перезагрузка**',
- use_container_width=True)
- EXIT_BUTTON = EXIT_COL.form_submit_button(
- label='**Выход**',
- use_container_width=True)
-
- if START_BUTTON:
- # Initiate interface structure
- LOGS = st.status(label='Прогресс выполнения', expanded=True)
-
- PROGRESS_BAR = LOGS.progress(0)
- PROGRESS_BAR_DISTRACTORS = LOGS.progress(0)
-
- # Start generation process. Everything happens inside main_workflow func
- if DISTRACTOR_MODEL == 'Модель-3':
- from utilities_language_bert.esp_main_workflow_bert import main_workflow
- __TASK_DATA__ = main_workflow(
- file=UPLOAD_FILE,
- text=UPLOAD_TEXT,
- logs=LOGS,
- progress=PROGRESS_BAR,
- progress_d=PROGRESS_BAR_DISTRACTORS,
- level=CEFR_TEXT_LEVEL,
- tw_mode_automatic_mode=TARGET_WORDS_MODE,
- target_words=TARGET_WORDS,
- num_distractors=NUMBER_DISTRACTORS,
- save_name=USER__SAVE_IN_CLOUD_FILE_NAME)
- else:
- from utilities_language_w2v.esp_main_workflow_w2v import main_workflow
- __TASK_DATA__ = main_workflow(
- file=UPLOAD_FILE,
- text=UPLOAD_TEXT,
- logs=LOGS,
- progress=PROGRESS_BAR,
- progress_d=PROGRESS_BAR_DISTRACTORS,
- level=CEFR_TEXT_LEVEL,
- tw_mode_automatic_mode=TARGET_WORDS_MODE,
- target_words=TARGET_WORDS,
- num_distractors=NUMBER_DISTRACTORS,
- save_name=USER__SAVE_IN_CLOUD_FILE_NAME,
- model_name=DISTRACTOR_MODEL)
-
- # Store the generated data in session_state so the later pages (view/download/online test) can use it
- USER__SAVE_IN_CLOUD_FILE_NAME = USER__SAVE_IN_CLOUD_FILE_NAME if USER__SAVE_IN_CLOUD_FILE_NAME != '' \
- else __TASK_DATA__['name']
- st.session_state['RESULT'] = __TASK_DATA__
- st.session_state['-DISPLAY_READY-'] = True
- st.session_state['-DISPLAY_VERSION-'] = True
- st.session_state['-ONLINE_TEST_READY-'] = True
- st.session_state['-LOADED_CEFR_LEVEL-'] = CEFR_TEXT_LEVEL
- st.session_state['-UPLOAD_CLOUD_FILE_NAME-'] = USER__SAVE_IN_CLOUD_FILE_NAME
-
- PROGRESS_BAR.progress(100)
- PROGRESS_BAR_DISTRACTORS.progress(100)
- LOGS.update(label='**Все готово! Готовые задания и/или онлайн-тест доступны в соответствующих вкладках.**',
- state='complete', expanded=False)
- save_data_in_database(
- user_task_database=user_save_text_table,
- save_type='download',
- save_name=USER__SAVE_IN_CLOUD_FILE_NAME,
- cefr_level=CEFR_TEXT_LEVEL,
- time_stamp=str(datetime.datetime.now())[:-7],
- creator_name=st.session_state.get('-USER_NAME-'),
- generated_result=__TASK_DATA__,
- distractor_model=DISTRACTOR_MODEL, allow=SAVE_IN_CLOUD)
-
- if EXIT_BUTTON:
- for key in list(st.session_state):  # iterate over a copy; deleting keys while iterating would fail
- del st.session_state[key]
- st.error('Я устал. Я ухожу')
- st.session_state["START_GENERATION"] = False
- st.stop()
- if RERUN_BUTTON:
- for key in list(st.session_state):  # iterate over a copy; deleting keys while iterating would fail
- del st.session_state[key]
- st.error('Что-то пошло не так?! Перезагружаюсь!')
- st.session_state["START_GENERATION"] = False
- st.stop()
- st.rerun()
-
- # LABEL
- st.markdown('*Автор-разработчик: А.В.Белый, кафедра математической лингвистики, филологический факультет СПбГУ,'
- ' 3 курс, бакалавриат, "Прикладная, компьютерная и математическая лингвистика (английский язык)"*'
- '\n\n*Научный руководитель: канд. филол. наук, доц. О.А.Митрофанова*')
- st.markdown('*E-mail: st087202@student.spbu.ru*')
-else:
- st.warning('**Войдите или зарегистрируйтесь**')
diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/exp/upernet_global_small/test.sh b/spaces/abhishek/sketch-to-image/annotator/uniformer/exp/upernet_global_small/test.sh
deleted file mode 100644
index d9a85e7a0d3b7c96b060f473d41254b37a382fcb..0000000000000000000000000000000000000000
--- a/spaces/abhishek/sketch-to-image/annotator/uniformer/exp/upernet_global_small/test.sh
+++ /dev/null
@@ -1,10 +0,0 @@
-#!/usr/bin/env bash
-
-work_path=$(dirname $0)
-PYTHONPATH="$(dirname $0)/../../":$PYTHONPATH \
-python -m torch.distributed.launch --nproc_per_node=8 \
- tools/test.py ${work_path}/test_config_h32.py \
- ${work_path}/ckpt/latest.pth \
- --launcher pytorch \
- --eval mIoU \
- 2>&1 | tee -a ${work_path}/log.txt
diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/models/roi_heads/shared_heads/res_layer.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/models/roi_heads/shared_heads/res_layer.py
deleted file mode 100644
index b5c343258b079a0dd832d4f999c18d002b06efac..0000000000000000000000000000000000000000
--- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/models/roi_heads/shared_heads/res_layer.py
+++ /dev/null
@@ -1,77 +0,0 @@
-import torch.nn as nn
-from mmcv.cnn import constant_init, kaiming_init
-from mmcv.runner import auto_fp16, load_checkpoint
-
-from mmdet.models.backbones import ResNet
-from mmdet.models.builder import SHARED_HEADS
-from mmdet.models.utils import ResLayer as _ResLayer
-from mmdet.utils import get_root_logger
-
-
-@SHARED_HEADS.register_module()
-class ResLayer(nn.Module):
-
- def __init__(self,
- depth,
- stage=3,
- stride=2,
- dilation=1,
- style='pytorch',
- norm_cfg=dict(type='BN', requires_grad=True),
- norm_eval=True,
- with_cp=False,
- dcn=None):
- super(ResLayer, self).__init__()
- self.norm_eval = norm_eval
- self.norm_cfg = norm_cfg
- self.stage = stage
- self.fp16_enabled = False
- block, stage_blocks = ResNet.arch_settings[depth]
- stage_block = stage_blocks[stage]
- planes = 64 * 2**stage
- inplanes = 64 * 2**(stage - 1) * block.expansion
-
- res_layer = _ResLayer(
- block,
- inplanes,
- planes,
- stage_block,
- stride=stride,
- dilation=dilation,
- style=style,
- with_cp=with_cp,
- norm_cfg=self.norm_cfg,
- dcn=dcn)
- self.add_module(f'layer{stage + 1}', res_layer)
-
- def init_weights(self, pretrained=None):
- """Initialize the weights in the module.
-
- Args:
- pretrained (str, optional): Path to pre-trained weights.
- Defaults to None.
- """
- if isinstance(pretrained, str):
- logger = get_root_logger()
- load_checkpoint(self, pretrained, strict=False, logger=logger)
- elif pretrained is None:
- for m in self.modules():
- if isinstance(m, nn.Conv2d):
- kaiming_init(m)
- elif isinstance(m, nn.BatchNorm2d):
- constant_init(m, 1)
- else:
- raise TypeError('pretrained must be a str or None')
-
- @auto_fp16()
- def forward(self, x):
- res_layer = getattr(self, f'layer{self.stage + 1}')
- out = res_layer(x)
- return out
-
- def train(self, mode=True):
- super(ResLayer, self).train(mode)
- if self.norm_eval:
- for m in self.modules():
- if isinstance(m, nn.BatchNorm2d):
- m.eval()
diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/configs/_base_/models/pspnet_r50-d8.py b/spaces/abhishek/sketch-to-image/annotator/uniformer_base/configs/_base_/models/pspnet_r50-d8.py
deleted file mode 100644
index f451e08ad2eb0732dcb806b1851eb978d4acf136..0000000000000000000000000000000000000000
--- a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/configs/_base_/models/pspnet_r50-d8.py
+++ /dev/null
@@ -1,44 +0,0 @@
-# model settings
-norm_cfg = dict(type='SyncBN', requires_grad=True)
-model = dict(
- type='EncoderDecoder',
- pretrained='open-mmlab://resnet50_v1c',
- backbone=dict(
- type='ResNetV1c',
- depth=50,
- num_stages=4,
- out_indices=(0, 1, 2, 3),
- dilations=(1, 1, 2, 4),
- strides=(1, 2, 1, 1),
- norm_cfg=norm_cfg,
- norm_eval=False,
- style='pytorch',
- contract_dilation=True),
- decode_head=dict(
- type='PSPHead',
- in_channels=2048,
- in_index=3,
- channels=512,
- pool_scales=(1, 2, 3, 6),
- dropout_ratio=0.1,
- num_classes=19,
- norm_cfg=norm_cfg,
- align_corners=False,
- loss_decode=dict(
- type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)),
- auxiliary_head=dict(
- type='FCNHead',
- in_channels=1024,
- in_index=2,
- channels=256,
- num_convs=1,
- concat_input=False,
- dropout_ratio=0.1,
- num_classes=19,
- norm_cfg=norm_cfg,
- align_corners=False,
- loss_decode=dict(
- type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)),
- # model training and testing settings
- train_cfg=dict(),
- test_cfg=dict(mode='whole'))
diff --git a/spaces/abrar-lohia/text-2-character-anim/pyrender/.eggs/pyglet-2.0.5-py3.10.egg/pyglet/gl/glx_info.py b/spaces/abrar-lohia/text-2-character-anim/pyrender/.eggs/pyglet-2.0.5-py3.10.egg/pyglet/gl/glx_info.py
deleted file mode 100644
index 7842493818d893d0b5293094697bceeb91b03f89..0000000000000000000000000000000000000000
--- a/spaces/abrar-lohia/text-2-character-anim/pyrender/.eggs/pyglet-2.0.5-py3.10.egg/pyglet/gl/glx_info.py
+++ /dev/null
@@ -1,113 +0,0 @@
-"""Information about version and extensions of current GLX implementation.
-
-Usage::
-
- from pyglet.gl import glx_info
-
- if glx_info.have_extension('GLX_NV_float_buffer'):
- # ...
-
-Or, if using more than one display::
-
- from pyglet.gl.glx_info import GLXInfo
-
- info = GLXInfo(window._display)
- if info.get_server_vendor() == 'ATI':
- # ...
-
-"""
-
-from ctypes import *
-
-from pyglet.gl.glx import *
-from pyglet.util import asstr
-
-
-class GLXInfoException(Exception):
- pass
-
-
-class GLXInfo:
- def __init__(self, display=None):
- # Set default display if not set
- if display and not _glx_info.display:
- _glx_info.set_display(display)
-
- self.display = display
-
- def set_display(self, display):
- self.display = display
-
- def check_display(self):
- if not self.display:
- raise GLXInfoException('No X11 display has been set yet.')
-
- def have_version(self, major, minor=0):
- self.check_display()
- if not glXQueryExtension(self.display, None, None):
- raise GLXInfoException('pyglet requires an X server with GLX')
-
- server_version = self.get_server_version().split()[0]
- client_version = self.get_client_version().split()[0]
-
- server = [int(i) for i in server_version.split('.')]
- client = [int(i) for i in client_version.split('.')]
- return (tuple(server) >= (major, minor) and
- tuple(client) >= (major, minor))
-
- def get_server_vendor(self):
- self.check_display()
- return asstr(glXQueryServerString(self.display, 0, GLX_VENDOR))
-
- def get_server_version(self):
- # glXQueryServerString was introduced in GLX 1.1, so we need to use the
- # 1.0 function here which queries the server implementation for its
- # version.
- self.check_display()
- major = c_int()
- minor = c_int()
- if not glXQueryVersion(self.display, byref(major), byref(minor)):
- raise GLXInfoException('Could not determine GLX server version')
- return f'{major.value}.{minor.value}'
-
- def get_server_extensions(self):
- self.check_display()
- return asstr(glXQueryServerString(self.display, 0, GLX_EXTENSIONS)).split()
-
- def get_client_vendor(self):
- self.check_display()
- return asstr(glXGetClientString(self.display, GLX_VENDOR))
-
- def get_client_version(self):
- self.check_display()
- return asstr(glXGetClientString(self.display, GLX_VERSION))
-
- def get_client_extensions(self):
- self.check_display()
- return asstr(glXGetClientString(self.display, GLX_EXTENSIONS)).split()
-
- def get_extensions(self):
- self.check_display()
- return asstr(glXQueryExtensionsString(self.display, 0)).split()
-
- def have_extension(self, extension):
- self.check_display()
- if not self.have_version(1, 1):
- return False
- return extension in self.get_extensions()
-
-
-# Single instance suitable for apps that use only a single display.
-_glx_info = GLXInfo()
-
-set_display = _glx_info.set_display
-check_display = _glx_info.check_display
-have_version = _glx_info.have_version
-get_server_vendor = _glx_info.get_server_vendor
-get_server_version = _glx_info.get_server_version
-get_server_extensions = _glx_info.get_server_extensions
-get_client_vendor = _glx_info.get_client_vendor
-get_client_version = _glx_info.get_client_version
-get_client_extensions = _glx_info.get_client_extensions
-get_extensions = _glx_info.get_extensions
-have_extension = _glx_info.have_extension
diff --git a/spaces/affine/Time_Series_Model/__init__.py b/spaces/affine/Time_Series_Model/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/ai-art/magic-diffusion-generator/app.py b/spaces/ai-art/magic-diffusion-generator/app.py
deleted file mode 100644
index 8be831a428013e045fcdca7209ed3de0a754b1b2..0000000000000000000000000000000000000000
--- a/spaces/ai-art/magic-diffusion-generator/app.py
+++ /dev/null
@@ -1,113 +0,0 @@
-import gradio as gr
-import os
-from share_btn import community_icon_html, loading_icon_html, share_js
-
-
-text_gen = gr.Interface.load(name="spaces/Gustavosta/MagicPrompt-Stable-Diffusion")
-stable_diffusion = gr.Blocks.load(name="spaces/runwayml/stable-diffusion-v1-5")
-
-
-
-
-def get_images(prompt):
- gallery_dir = stable_diffusion(prompt, fn_index=2)
- sd_output = [os.path.join(gallery_dir, image) for image in os.listdir(gallery_dir)]
- return sd_output, gr.update(visible=True), gr.update(visible=True), gr.update(visible=True)
-
-def get_prompts(prompt_text):
- return text_gen(prompt_text)
-
-css = '''
-.animate-spin {
- animation: spin 1s linear infinite;
-}
-@keyframes spin {
- from {
- transform: rotate(0deg);
- }
- to {
- transform: rotate(360deg);
- }
-}
-#share-btn-container {
- display: flex; padding-left: 0.5rem !important; padding-right: 0.5rem !important; background-color: #000000; justify-content: center; align-items: center; border-radius: 9999px !important; width: 13rem;
-}
-#share-btn {
- all: initial; color: #ffffff;font-weight: 600; cursor:pointer; font-family: 'IBM Plex Sans', sans-serif; margin-left: 0.5rem !important; padding-top: 0.25rem !important; padding-bottom: 0.25rem !important;
-}
-#share-btn * {
- all: unset;
-}
-#share-btn-container div:nth-child(-n+2){
- width: auto !important;
- min-height: 0px !important;
-}
-#share-btn-container .wrap {
- display: none !important;
-}
-a {text-decoration-line: underline;}
-'''
-
-with gr.Blocks(css=css) as demo:
- gr.HTML("""
-
-
- 🪄 Ai Art Generator 🪄
-
-
-
- This Demo space prettifies your prompt using "MagicPrompt"
- and then runs it through Stable Diffusion to create aesthetically pleasing images. Simply enter a few concepts and let it improve your prompt. You can then diffuse the prompt.
-
""")
-
- with gr.Row():
- with gr.Column():
- input_text = gr.Textbox(label="Short text prompt",
- lines=4, elem_id="input-text",
-
- )
- with gr.Row():
- see_prompts = gr.Button("1. Enter short text")
-
- with gr.Column():
- text_output = gr.Textbox(
- label="Prettified text prompt",
- lines=4,
- elem_id="translated"
- )
- with gr.Row():
- diffuse_btn = gr.Button(value="2. Generate art!")
- with gr.Column(elem_id="generated-gallery"):
- sd_output = gr.Gallery().style(grid=2, height="auto")
- with gr.Group(elem_id="share-btn-container"):
- community_icon = gr.HTML(community_icon_html, visible=False)
- loading_icon = gr.HTML(loading_icon_html, visible=False)
- share_button = gr.Button("How to Download ?", elem_id="share-btn", visible=False)
-
- see_prompts.click(get_prompts,
- inputs = [input_text],
- outputs = [
- text_output
- ])
- diffuse_btn.click(get_images,
- inputs = [
- text_output
- ],
- outputs = [sd_output, community_icon, loading_icon, share_button]
- )
- share_button.click(None, [], [], _js=share_js)
-
-
-
-demo.launch(debug=True)
\ No newline at end of file
diff --git a/spaces/aifartist/sdzoom-Latent-Consistency-Model/README.md b/spaces/aifartist/sdzoom-Latent-Consistency-Model/README.md
deleted file mode 100644
index f949f84da63ea55b9f99a5166d85a524074809b2..0000000000000000000000000000000000000000
--- a/spaces/aifartist/sdzoom-Latent-Consistency-Model/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: sdzoom Latent Consistency Model
-emoji: 🖼️
-colorFrom: purple
-colorTo: purple
-sdk: gradio
-sdk_version: 4.1.1
-app_file: gradio-app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/akhaliq/Deit/README.md b/spaces/akhaliq/Deit/README.md
deleted file mode 100644
index 63b5ea682ee22f1bddfb167e0faa17520e6a5c5d..0000000000000000000000000000000000000000
--- a/spaces/akhaliq/Deit/README.md
+++ /dev/null
@@ -1,37 +0,0 @@
----
-title: Deit
-emoji: 👀
-colorFrom: gray
-colorTo: green
-sdk: gradio
-app_file: app.py
-pinned: false
----
-
-# Configuration
-
-`title`: _string_
-Display title for the Space
-
-`emoji`: _string_
-Space emoji (emoji-only character allowed)
-
-`colorFrom`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`colorTo`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`sdk`: _string_
-Can be either `gradio` or `streamlit`
-
-`sdk_version` : _string_
-Only applicable for `streamlit` SDK.
-See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions.
-
-`app_file`: _string_
-Path to your main application file (which contains either `gradio` or `streamlit` Python code).
-Path is relative to the root of the repository.
-
-`pinned`: _boolean_
-Whether the Space stays on top of your list.
diff --git a/spaces/akhaliq/SummerTime/model/third_party/HMNet/ThirdParty/ROUGE/ROUGE-1.5.5/XML/DOM/Attr.pod b/spaces/akhaliq/SummerTime/model/third_party/HMNet/ThirdParty/ROUGE/ROUGE-1.5.5/XML/DOM/Attr.pod
deleted file mode 100644
index 9305c21389bc0eedbb18df0fbe77ef344bcc0903..0000000000000000000000000000000000000000
--- a/spaces/akhaliq/SummerTime/model/third_party/HMNet/ThirdParty/ROUGE/ROUGE-1.5.5/XML/DOM/Attr.pod
+++ /dev/null
@@ -1,67 +0,0 @@
-=head1 NAME
-
-XML::DOM::Attr - An XML attribute in XML::DOM
-
-=head1 DESCRIPTION
-
-XML::DOM::Attr extends L<XML::DOM::Node>.
-
-The Attr nodes built by the XML::DOM::Parser always have one child node
-which is a Text node containing the expanded string value (i.e. EntityReferences
-are always expanded.) EntityReferences may be added when modifying or creating
-a new Document.
-
-The Attr interface represents an attribute in an Element object.
-Typically the allowable values for the attribute are defined in a
-document type definition.
-
-Attr objects inherit the Node interface, but since they are not
-actually child nodes of the element they describe, the DOM does not
-consider them part of the document tree. Thus, the Node attributes
-parentNode, previousSibling, and nextSibling have an undef value for Attr
-objects. The DOM takes the view that attributes are properties of
-elements rather than having a separate identity from the elements they
-are associated with; this should make it more efficient to implement
-such features as default attributes associated with all elements of a
-given type. Furthermore, Attr nodes may not be immediate children of a
-DocumentFragment. However, they can be associated with Element nodes
-contained within a DocumentFragment. In short, users and implementors
-of the DOM need to be aware that Attr nodes have some things in common
-with other objects inheriting the Node interface, but they also are
-quite distinct.
-
-The attribute's effective value is determined as follows: if this
-attribute has been explicitly assigned any value, that value is the
-attribute's effective value; otherwise, if there is a declaration for
-this attribute, and that declaration includes a default value, then
-that default value is the attribute's effective value; otherwise, the
-attribute does not exist on this element in the structure model until
-it has been explicitly added. Note that the nodeValue attribute on the
-Attr instance can also be used to retrieve the string version of the
-attribute's value(s).
-
-In XML, where the value of an attribute can contain entity references,
-the child nodes of the Attr node provide a representation in which
-entity references are not expanded. These child nodes may be either
-Text or EntityReference nodes. Because the attribute type may be
-unknown, there are no tokenized attribute values.
-
-=head2 METHODS
-
-=over 4
-
-=item getValue
-
-On retrieval, the value of the attribute is returned as a string.
-Character and general entity references are replaced with their values.
-
-=item setValue (str)
-
-DOM Spec: On setting, this creates a Text node with the unparsed contents of the
-string.
-
-=item getName
-
-Returns the name of this attribute.
-
-=back
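
The getValue, setValue, and getName behaviour described above follows the generic W3C DOM attribute model. As a rough, hedged illustration (written with Python's standard-library xml.dom.minidom rather than the Perl XML::DOM module this POD documents), the corresponding calls look like this:

```python
from xml.dom.minidom import parseString

doc = parseString('<book title="Perl &amp; XML"/>')
attr = doc.documentElement.getAttributeNode("title")

print(attr.name)        # 'title'          -> analogous to getName
print(attr.value)       # 'Perl & XML'     -> entities expanded, analogous to getValue
print(attr.parentNode)  # None: attributes are not children of the element they describe

attr.value = "New <title>"   # analogous to setValue; special characters are re-escaped
print(doc.toxml())           # ...<book title="New &lt;title&gt;"/>
```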
diff --git a/spaces/akhaliq/VQMIVC/ParallelWaveGAN/egs/template_multi_spk/voc1/cmd.sh b/spaces/akhaliq/VQMIVC/ParallelWaveGAN/egs/template_multi_spk/voc1/cmd.sh
deleted file mode 100644
index 19f342102fc4f3389157c48f1196b16b68eb1cf1..0000000000000000000000000000000000000000
--- a/spaces/akhaliq/VQMIVC/ParallelWaveGAN/egs/template_multi_spk/voc1/cmd.sh
+++ /dev/null
@@ -1,91 +0,0 @@
-# ====== About run.pl, queue.pl, slurm.pl, and ssh.pl ======
-# Usage: <cmd>.pl [options] JOB=1:<nj> <log> <command...>
-# e.g.
-# run.pl --mem 4G JOB=1:10 echo.JOB.log echo JOB
-#
-# Options:
-# --time <time>: Limit the maximum time to execute.